《Microservices Security in Action》Part 1

Part 1 Overview

1 Microservices security landscape

1.1 How security works in a monolithic application

In most monolithic applications, security is enforced centrally, and individual components need not worry about carrying out additional checks unless there is a compelling requirement to do so. As a result, the security model of a monolithic application is much more straightforward than that of an application built around a microservices architecture.

1.2 Challenges of securing microservices

Security is challenging mostly because of the inherent nature of the microservices architecture itself.


1.2.1 The broader the attack surface, the higher the risk of attack

1.2.2 Distributed security screening may result in poor performance

These repetitive, distributed security checks and remote connections could contribute heavily to latency and considerably degrade the performance of the system.

1.2.3 Deployment complexities make bootstrapping trust among microservices a nightmare

Managing a large-scale microservices deployment with thousands of services would be extremely challenging if you didn’t know how to automate.

Fortunately, things didn’t turn out that way: microservices and containers (Docker) were born at the right time to complement each other nicely, which is why we believe they’re a match made in heaven.

1.2.4 Requests spanning multiple microservices are harder to trace

Observability is a measure of what you can infer about the internal state of a system based on its external outputs. Logs, metrics, and traces are known as the three pillars of observability.

1.2.5 Immutability of containers challenges how you maintain service credentials and access-control policies

The whole purpose of expecting servers to be immutable in a microservices deployment is to make deployment clean and simple. At any point, you can kill a running container and create a new one with the base configuration, without worrying about runtime data (which also makes it easier to scale horizontally).


1.2.6 The distributed nature of microservices makes sharing user context harder

The challenge is to build trust between two microservices so that the receiving microservice accepts the user context passed from the calling microservice.


1.2.7 Polyglot architecture demands more security expertise on each development team

In a multiteam environment, in which each team develops its own set of microservices, each team has the flexibility to pick the optimal technology stack for its requirements. This architecture, which enables the various components in a system to pick the technology stack that is best for them, is known as a polyglot architecture.

A polyglot architecture makes security challenging. Because different teams use different technology stacks for development, each team has to have its own security experts.

1.3 Key security fundamentals

How much you should worry about security isn’t only a technical decision, but also an economic decision. The level of security you need depends on the assets you intend to protect.

1.3.1 Authentication protects your system against spoofing

1.3.2 Integrity protects your system from data tampering

Systems protected for integrity don’t ignore the possibility that data may be tampered with in transit; they introduce measures so that if a message is altered, the recipient can detect the change and discard the request. The most common way to protect a message for integrity is to sign it.

Along with the data in transit, the data at rest must be protected for integrity. Of all your business data, audit trails matter most for integrity checks.

One way is to periodically calculate the message digests of audit trails, encrypt them, and store them securely.
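As a rough sketch of that idea in plain JDK crypto (the class name and sample record are illustrative, not from the book), the following program digests an audit record and then signs it, so that later tampering is detectable:

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.Signature;
import java.util.Base64;

public class AuditTrailSigner {

    public static void main(String[] args) throws Exception {
        // A real deployment would load the key pair from a keystore or HSM;
        // generating one here keeps the sketch self-contained.
        KeyPair keyPair = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        String auditRecord = "2020-01-01T10:15:30Z user=peter action=order-created";

        // Digest of the audit record: any later modification changes this value.
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(auditRecord.getBytes(StandardCharsets.UTF_8));
        System.out.println("digest = " + Base64.getEncoder().encodeToString(digest));

        // Signing binds the record to the signer's key, so an attacker can't
        // simply recompute the digest after altering the record.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(auditRecord.getBytes(StandardCharsets.UTF_8));
        byte[] signature = signer.sign();

        // Verification with the public key detects any tampering.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(auditRecord.getBytes(StandardCharsets.UTF_8));
        System.out.println("signature valid = " + verifier.verify(signature));
    }
}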

1.3.3 Nonrepudiation: Do it once, and you own it forever

Nonrepudiation is an important aspect of information security that prevents you from denying anything you’ve done or committed.

Even in the digital world, a signature helps you achieve nonrepudiation; in this case, you use a digital signature.

You also need to make sure that you record transactions along with the timestamp and the signature—and maintain those records for a considerable amount of time.

1.3.4 Confidentiality protects your systems from unintended information disclosure

1.3.5 Availability: Keep the system running, no matter what


1.3.6 Authorization: Nothing more than you’re supposed to do

Authentication helps you learn about the user or the requesting party. Authorization determines the actions that an authenticated user can perform on the system.


1.4 Edge security

1.4.1 The role of an API gateway in a microservices deployment

APIs have also become many companies’ main revenue-generation channel. The key role of the API gateway in a microservices deployment is to expose a selected set of microservices to the outside world as APIs and to provide quality-of-service (QoS) features: security, throttling, and analytics.


1.4.2 Authentication at the edge

Certificate-based authentication protects an API at the edge with mutual Transport Layer Security (mTLS).

OAuth 2.0, which is an authorization framework for delegated access control, is the recommended approach for protecting APIs when one system wants to access an API on behalf of another system or a user.

1.4.3 Authorization at the edge

In addition to figuring out who the requesting party is during the authentication process, the API gateway could enforce corporatewide access-control policies, which are probably coarse-grained. More fine-grained access-control policies are enforced at the service level by the microservice itself.

1.4.4 Passing client/end-user context to upstream microservices

But you need a way to protect the communication channels between the gateway and the corresponding microservice, as well as a way to pass the initial client/end-user context. User context carries basic information about the end user, and client context carries information about the client application. Upstream microservices can use this information for service-level access control.

You have a couple of options: pass the user context in an HTTP header, or create a JWT with the user data. The first option is straightforward but raises trust concerns: when the first microservice passes the user context in a plain HTTP header to another microservice, the second microservice has no guarantee that the user context wasn’t altered along the way. With a JWT, a man in the middle can’t change its content and go undetected, because the issuer of the JWT signs it.
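To make that concrete, here’s a minimal sketch that assembles and signs a JWT by hand with plain JDK APIs. It uses HS256 with a shared secret so that it stays self-contained; in a real deployment the token is usually issued and signed by an STS with its private key (for example, RS256), and you’d use a JWT library instead of hand-rolling this:

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class UserContextJwt {

    public static void main(String[] args) throws Exception {
        // Shared secret between the two microservices (an assumption made for
        // this sketch; an STS would normally sign with its private key).
        byte[] secret = "change-this-shared-secret".getBytes(StandardCharsets.UTF_8);

        String header  = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";
        String payload = "{\"sub\":\"peter\",\"scope\":\"read write\"}"; // end-user context

        Base64.Encoder b64 = Base64.getUrlEncoder().withoutPadding();
        String signingInput = b64.encodeToString(header.getBytes(StandardCharsets.UTF_8))
                + "." + b64.encodeToString(payload.getBytes(StandardCharsets.UTF_8));

        // HMAC-SHA256 over header.payload: a receiver holding the same key can
        // recompute this value and detect any tampering in transit.
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        String signature = b64.encodeToString(
                mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8)));

        System.out.println(signingInput + "." + signature);
    }
}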


1.5 Securing service-to-service communication

The security model that you develop to protect service-to-service communication should consider the communication channels that cross trust boundaries, as well as how the actual communication takes place between microservices: synchronously or asynchronously.

In most cases, synchronous communication happens over HTTP.
Asynchronous communication can happen over any kind of messaging system.

1.5.1 Service-to-service authentication

You have three common ways to secure communications among services in a microservices deployment: trust the network, mTLS, and JWTs.


Trust the network

The trust-the-network approach is an old-school model in which no security is enforced in service-to-service communication; rather, the model relies on network-level security.

mTLS

Mutual TLS is, in fact, the most common form of service-to-service authentication used today. Each microservice in the deployment carries a public/private key pair and uses that key pair to authenticate to the recipient microservices via mTLS.
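To give a feel for what that key pair means in code, here’s a sketch of a Java 11+ HTTP client configured for mTLS. The keystore and truststore file names, the password, and the endpoint URL are placeholders, not the book’s samples:

import java.io.FileInputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class MtlsClient {

    public static void main(String[] args) throws Exception {
        char[] password = "changeit".toCharArray(); // placeholder keystore password

        // The client's own key pair, presented to the server during the handshake.
        KeyStore keyStore = KeyStore.getInstance("PKCS12");
        keyStore.load(new FileInputStream("client-keystore.p12"), password);
        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, password);

        // CA certificates this client trusts, used to verify the server.
        KeyStore trustStore = KeyStore.getInstance("PKCS12");
        trustStore.load(new FileInputStream("truststore.p12"), password);
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

        HttpClient client = HttpClient.newBuilder().sslContext(sslContext).build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://orders:8443/orders")) // hypothetical endpoint
                .build();
        System.out.println(client.send(request,
                HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}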

JWT

Unlike mTLS, JWT works at the application layer, not at the transport layer. JWT is a container that can carry a set of claims from one place to another.

1.5.2 Service-level authorization

Two approaches are used to enforce authorization at the service level: the centralized policy decision point (PDP) model and the embedded PDP model.

This method creates a lot of dependency on the PDP and also increases latency because of the cost of calling the remote PDP endpoint. The effect on latency can be mitigated by caching policy decisions at the service level, but then, apart from waiting for cache expiration, there’s no way to communicate policy-update events to the service. In practice, policy updates happen infrequently, so cache expiration may work in most cases.

The challenge with embedded PDPs is how to get policy updates from the centralized policy administration point (PAP).

1.5.3 Propagating user context among microservices

When one microservice invokes another microservice, it needs to carry both the end-user identity and the identity of the microservice itself. When one microservice authenticates to another with mTLS or a JWT, the identity of the calling microservice can be inferred from the embedded credentials.

There are three common ways to pass the end-user context from one microservice to another microservice:
 Send the user context as an HTTP header.
 Use a JWT issued by the calling microservice.
 Use a JWT issued by an external STS that is trusted by all the microservices in the deployment. (This is the most secure approach.)

1.5.4 Crossing trust boundaries

In terms of security, when one microservice talks to another microservice, and both microservices are in the same trust domain, each microservice may trust one STS in the same domain or a certificate authority in the same domain. Based on this trust, the recipient microservice can validate a security token sent to it by a calling microservice. Typically, in a single trust domain, all the microservices trust one STS and accept only security tokens issued by that STS.

Summary
 Securing microservices is considerably more challenging than securing a monolithic application, mostly because of the inherent nature of the microservices architecture.

 A microservices security design starts by defining a process to streamline development and by integrating security-scanning tools into the build system, so that code-level vulnerabilities are discovered at a very early stage in the development cycle.

 We need to worry about edge security of a microservices deployment and securing communications among microservices.

 Edge security is about authenticating and authorizing requests coming into the microservices deployment from client applications, at the edge, probably with an API gateway.

 Securing communications among microservices is the most challenging part. We discussed multiple techniques in this chapter; which you choose will depend on many factors, such as the required level of security, the type of communication (synchronous or asynchronous), and the trust boundaries involved.


2 First steps in securing microservices

2.1 Building your first microservice

2.1.1 Downloading and installing the required software
INSTALLING THE JDK
INSTALLING APACHE MAVEN
INSTALLING CURL
INSTALLING THE GIT COMMAND-LINE TOOL

2.1.2 Cloning the samples repository
git clone https://github.com/microservices-security-in-action/samples.git

2.1.3 Compiling the Order Processing microservice
mvn clean install
mvn spring-boot:run

2.1.4 Accessing the Order Processing microservice
2.1.5 What is inside the source code directory?

Often, a resource represents an object or entity that you intend to inspect or manipulate. When mapped to HTTP, a resource is usually identified by a request URI, and an action is represented by an HTTP method.

2.1.6 Understanding the source code of the microservice

2.2 Setting up an OAuth 2.0 server

2.2.1 The interactions with an authorization server

It’s recommended that the client communicate with the microservice over HTTPS and send the token in an HTTP header instead of a query parameter. Because query parameters are sent in the URL, they may be recorded in server logs; hence, anyone who has access to the logs can see this information.

2.2.2 Running the OAuth 2.0 authorization server
2.2.3 Getting an access token from the OAuth 2.0 authorization server
2.2.4 Understanding the access token response

{
  "access_token":"8c017bb5-f6fd-4654-88c7-c26ccca54bdd",
  "token_type":"bearer",
  "expires_in":300,
  "scope":"read write"
}


2.3 Securing a microservice with OAuth 2.0

2.3.1 Security based on OAuth 2.0

2.3.2 Running the sample

2.4 Invoking a secured microservice from a client application

curl -v http://localhost:8080/orders \
-H "Content-Type: application/json" \
-H "Authorization: Bearer b9a405cf-b4e2-4be8-aa25-19475d8993b1" \
--data-binary @- << EOF
{
  "items":[
    {
      "itemCode":"IT0001",
      "quantity":3
    },
    {
      "itemCode":"IT0004",
      "quantity":1
    }
  ],
  "shippingAddress":"No 4, Castro Street, Mountain View, CA, USA"
}
EOF

2.5 Performing service-level authorization with OAuth 2.0 scopes

A privilege describes the actions you’re permitted to perform on a resource.

More often than not, your role or roles in an organization describe which actions you’re permitted to perform within that organization and which actions you’re not permitted to perform. A privilege may also indicate status or credibility.

Likewise, a privilege is an indication of the level of access that a user or an application possesses in a system.

In the world of OAuth 2.0, a privilege is mapped to a scope. A scope is a way of abstracting a privilege: the privilege can be a user’s role, membership status, credibility, or something else, or even a combination of such attributes. A scope declares the privilege a calling client application must hold to be granted access to a resource.

2.5.1 Obtaining a scoped access token from the authorization server

curl -u orderprocessingservice:orderprocessingservicesecret \
-H "Content-Type: application/json" \
-d '{ "grant_type": "client_credentials", "scopes": "read write" }' \
http://localhost:8085/oauth/token


2.5.2 Protecting access to a microservice with OAuth 2.0 scopes

{
  "error":"insufficient_scope",
  "error_description":"Insufficient scope for this resource",
  "scope":"write"
}
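This error response is what the microservice returns when the token’s scopes don’t satisfy the operation’s requirement. For reference, here’s a sketch of how such a scope check is commonly wired up with the (now-legacy) Spring Security OAuth resource-server support; the paths and scope names follow the chapter’s example, but this isn’t necessarily the book’s exact sample code:

import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpMethod;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.oauth2.config.annotation.web.configuration.EnableResourceServer;
import org.springframework.security.oauth2.config.annotation.web.configuration.ResourceServerConfigurerAdapter;

@Configuration
@EnableResourceServer
public class ResourceServerConfig extends ResourceServerConfigurerAdapter {

    @Override
    public void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
            // Creating an order requires a token carrying the 'write' scope...
            .antMatchers(HttpMethod.POST, "/orders/**").access("#oauth2.hasScope('write')")
            // ...while reading orders requires only the 'read' scope.
            .antMatchers(HttpMethod.GET, "/orders/**").access("#oauth2.hasScope('read')");
    }
}

A token whose scopes don’t match the expression triggers the insufficient_scope error shown above.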

Summary
 OAuth 2.0 is an authorization framework widely used for securing microservices deployments at the edge.

 OAuth 2.0 supports multiple grant types. The client credentials grant type, which we used in this chapter, is used mostly for system-to-system authentication.

 Each access token issued by an authorization server is coupled with one or more scopes. Scopes are used in OAuth 2.0 to express the privileges attached to an access token.

 OAuth 2.0 scopes are used to protect and enforce access-control checks in certain operations in microservices.

 All samples in this chapter used HTTP (not HTTPS) endpoints to spare you from having to set up proper certificates and to make it possible for you to inspect messages being passed on the wire (network), if required. In production systems, we do not recommend using HTTP for any endpoint.




《Kafka Streams in Action》Part 3

Part 3 Administering Kafka Streams

7. Monitoring and performance

1. Basic Kafka monitoring

Measuring consumer and producer performance
Checking for consumer lag

Intercepting the producer and consumer

Although interceptors aren’t typically your first line for debugging, they can prove useful in observing the behavior of your Kafka streaming application, and they’re a valuable addition to your toolbox.
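A producer interceptor is simply a class implementing Kafka’s ProducerInterceptor interface; here’s a minimal logging sketch (the class name is illustrative):

import java.util.Map;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class LoggingProducerInterceptor implements ProducerInterceptor<String, String> {

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        // Called before the record is serialized; return it unchanged.
        System.out.println("sending to " + record.topic() + ": " + record.value());
        return record;
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // Called when the broker acks the record (or when the send fails).
        if (exception != null) {
            System.err.println("send failed: " + exception.getMessage());
        } else {
            System.out.println("acked " + metadata.topic() + "-"
                    + metadata.partition() + "@" + metadata.offset());
        }
    }

    @Override public void close() { }
    @Override public void configure(Map<String, ?> configs) { }
}

You enable it by listing the class under the producer’s interceptor.classes configuration (in a Kafka Streams application, apply the producer prefix to that config key); ConsumerInterceptor works the same way on the consuming side.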

2. Application metrics

3. More Kafka Streams debugging techniques

Using the StateListener
State restore listener

8. Testing a Kafka Streams application

1. Testing a topology

ProcessorTopologyTestDriver

The critical point to keep in mind with this test is that you now have a repeatable test running a record through your entire topology, without the overhead of running Kafka.
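ProcessorTopologyTestDriver has since been superseded by the public TopologyTestDriver. A minimal sketch of the same idea, assuming Kafka Streams 2.4+ and a trivial uppercasing topology:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TestOutputTopic;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;

public class TopologyTest {

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("input")
               .mapValues(v -> v.toUpperCase())
               .to("output");
        Topology topology = builder.build();

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234"); // never contacted
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        try (TopologyTestDriver driver = new TopologyTestDriver(topology, props)) {
            TestInputTopic<String, String> in = driver.createInputTopic(
                    "input", new StringSerializer(), new StringSerializer());
            TestOutputTopic<String, String> out = driver.createOutputTopic(
                    "output", new StringDeserializer(), new StringDeserializer());

            in.pipeInput("key", "hello");
            System.out.println(out.readValue()); // HELLO, with no broker involved
        }
    }
}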

2. Integration testing

Integration tests with the EmbeddedKafkaCluster should be used sparingly, and only when you have interactive behavior that can only be verified with a live, running Kafka broker.




《Kafka Streams in Action》Part 2

Part 2 Kafka Streams development

3. Kafka Streams development

1. Hello World for Kafka Streams

2. Working with customer data

3. New requirements

Filtering purchases
Splitting/branching the stream
Generating a key
Foreach actions
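All four operations map directly onto KStream methods. Here’s a minimal sketch, assuming default serdes are configured in StreamsConfig and using a bare-bones stand-in for the chapter’s Purchase type:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class PurchaseTopology {

    // Minimal stand-in for the chapter's Purchase domain type (an assumption).
    public static class Purchase {
        String customerId;
        String department;
        double price;
    }

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Default serdes for String keys and Purchase values are assumed.
        KStream<String, Purchase> purchases = builder.stream("transactions");

        // Filtering: drop small purchases.
        KStream<String, Purchase> filtered =
                purchases.filter((key, p) -> p.price > 5.00);

        // Generating a key: rekey by customer ID so joins and aggregations line up.
        KStream<String, Purchase> keyed =
                filtered.selectKey((key, p) -> p.customerId);

        // Splitting/branching: route records into per-department streams.
        @SuppressWarnings("unchecked")
        KStream<String, Purchase>[] branches = keyed.branch(
                (key, p) -> "coffee".equals(p.department),
                (key, p) -> "electronics".equals(p.department));

        // Foreach: a terminal operation for side effects.
        branches[0].foreach((key, p) -> System.out.println(key + " bought coffee"));
    }
}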

4. Streams and state

1. Applying stateful operations to Kafka Streams

The transformValues processor
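A sketch of the pattern: a ValueTransformer that keeps a per-customer running total in a key-value state store. The store name, topic names, and the Purchase stand-in are illustrative, and a recent Kafka Streams release is assumed:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.ValueTransformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

public class RewardsTopology {

    // Stand-in for the chapter's Purchase type (an assumption).
    public static class Purchase {
        String customerId;
        double price;
    }

    // Stateful transformer: accumulates a running total per customer.
    public static class RewardsTransformer implements ValueTransformer<Purchase, Double> {
        private KeyValueStore<String, Double> store;

        @Override
        @SuppressWarnings("unchecked")
        public void init(ProcessorContext context) {
            store = (KeyValueStore<String, Double>) context.getStateStore("rewards-store");
        }

        @Override
        public Double transform(Purchase purchase) {
            Double total = store.get(purchase.customerId);
            total = (total == null ? 0.0 : total) + purchase.price;
            store.put(purchase.customerId, total);
            return total; // the running total flows downstream as the new value
        }

        @Override
        public void close() { }
    }

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Register the backing store; it is changelog-backed for fault tolerance.
        builder.addStateStore(Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore("rewards-store"),
                Serdes.String(), Serdes.Double()));

        KStream<String, Purchase> purchases = builder.stream("purchases-by-customer");
        purchases.transformValues(RewardsTransformer::new, "rewards-store")
                 .to("rewards-totals", Produced.with(Serdes.String(), Serdes.Double()));
    }
}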

2. Repartitioning the data

Repartitioning in Kafka Streams

3. Updating the rewards processor

4. Data locality

5. Failure recovery and fault tolerance

The state stores provided by Kafka Streams meet both the locality and fault-tolerance requirements. They’re local to the defined processors and don’t share access across processes or threads. State stores also use topics for backup and quick recovery.

6. Joining streams for added insight

Generating keys containing customer IDs to perform joins

Constructing the join

Outer joins

With an inner join, if either record isn’t present, the join doesn’t occur. Outer joins always output a record, whether or not both sides are present.
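A minimal sketch of a windowed stream-stream join; topic names are placeholders, String values stand in for the chapter’s Purchase objects, and the Duration overload assumes Kafka Streams 2.1+ (the book’s edition passes milliseconds):

import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

public class PurchaseJoin {

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Both streams are assumed to be keyed by customer ID already
        // (the selectKey step shown earlier); default String serdes assumed.
        KStream<String, String> coffee = builder.stream("coffee-purchases");
        KStream<String, String> electronics = builder.stream("electronics-purchases");

        // Inner join: emits only when both sides have a record for the same
        // key within the 30-minute window. Swapping join for outerJoin (same
        // arguments) makes the join emit even when one side is missing.
        KStream<String, String> joined = coffee.join(
                electronics,
                (c, e) -> c + "/" + e,                    // ValueJoiner combines both values
                JoinWindows.of(Duration.ofMinutes(30)));

        joined.to("correlated-purchases");
    }
}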

Left-outer join

7. Timestamps in Kafka Streams

5. The KTable API

1. The relationship between streams and tables

The record stream

Updates to records or the changelog

2. Record updates and KTable configuration

3. Aggregations and windowing operations

Aggregating share volume by industry

Windowing operations


Session windows

There are a couple of key points to remember from this section:
 Sessions are not a fixed-size window. Rather, the size of a session is driven by the amount of activity within a given time frame.
 Timestamps in the data determine whether an event fits into an existing session or falls into an inactivity gap.
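A minimal sketch of a session-windowed count with a 20-second inactivity gap (the topic name is a placeholder; the Duration overload assumes Kafka Streams 2.1+):

import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.SessionWindows;

public class SessionCount {

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> transactions = builder.stream("stock-transactions");

        // A new session starts when a key is inactive for more than 20 seconds;
        // events closer together than the gap merge into one growing session.
        transactions.groupByKey()
                .windowedBy(SessionWindows.with(Duration.ofSeconds(20)))
                .count()
                .toStream()
                .foreach((windowedKey, count) ->
                        System.out.println(windowedKey + " -> " + count));
    }
}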

Tumbling windows

If you don’t specify a retention period for the window, you’ll get the default retention of 24 hours.

Sliding or hopping windows

 Session windows aren’t fixed by time but are driven by user activity.
 Tumbling windows give you a set picture of events within the specified time frame.
 Hopping windows are of fixed length, but they’re frequently updated and can contain overlapping records in each window.
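A sketch contrasting the two time-based window types, again with placeholder topic names and the Duration overloads of Kafka Streams 2.1+:

import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.TimeWindows;

public class CountWindows {

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> transactions = builder.stream("stock-transactions");

        // Tumbling: fixed one-minute windows that never overlap.
        transactions.groupByKey()
                .windowedBy(TimeWindows.of(Duration.ofMinutes(1)))
                .count();

        // Hopping: one-minute windows advancing every 10 seconds, so a single
        // record can fall into several overlapping windows.
        transactions.groupByKey()
                .windowedBy(TimeWindows.of(Duration.ofMinutes(1))
                        .advanceBy(Duration.ofSeconds(10)))
                .count();
    }
}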

In conclusion, the key thing to remember is that you can combine event streams (KStream) and update streams (KTable), using local state. Additionally, when the lookup data is of a manageable size, you can use a GlobalKTable. GlobalKTables replicate all partitions to each node in the Kafka Streams application, making all data available, regardless of which partition the key maps to.
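A minimal sketch of such a KStream-GlobalKTable join; the topic names are placeholders and String values stand in for real domain types:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;

public class GlobalLookup {

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        KStream<String, String> purchases = builder.stream("purchases");

        // Every application instance holds a full copy of this topic, so the
        // join needs no repartitioning regardless of how 'purchases' is keyed.
        GlobalKTable<String, String> customers = builder.globalTable("customers");

        purchases.join(
                customers,
                // Maps each stream record to the table's lookup key; here the
                // stream key is assumed to already be the customer ID.
                (purchaseKey, purchaseValue) -> purchaseKey,
                (purchase, customer) -> customer + ":" + purchase)
                .to("enriched-purchases");
    }
}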

6. The Processor API

1. The trade-offs of higher-level abstractions vs. more control

The Kafka Streams DSL allows developers to create robust applications with minimal code. The ability to quickly put together processing topologies is an important feature of the DSL.

It allows you to iterate quickly to flesh out ideas for working on your data without getting bogged down in the intricate setup details that some other frameworks may need.

What the Processor API lacks in ease of development, it makes up for in power. You can write custom processors to do almost anything you want.

2. Working with sources, processors, and sinks to create a topology

3. Digging deeper into the Processor API with a stock analysis processor

4. The co-group processor

Adding the sink node

The Processor API gives you more flexibility at the cost of more code.
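To see that trade-off in miniature, here’s a custom processor and a hand-built topology using the Processor<K, V> interface of Kafka Streams 2.x (the edition the book covers also required a since-removed punctuate method); all names are illustrative:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

public class UppercaseProcessor implements Processor<String, String> {

    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        // The context gives access to forwarding, state stores, and punctuation.
        this.context = context;
    }

    @Override
    public void process(String key, String value) {
        // Unlike the DSL, forwarding records downstream is explicit.
        context.forward(key, value.toUpperCase());
    }

    @Override
    public void close() { }

    public static Topology build() {
        Topology topology = new Topology();
        topology.addSource("source",
                Serdes.String().deserializer(), Serdes.String().deserializer(),
                "input-topic");
        topology.addProcessor("uppercase", UppercaseProcessor::new, "source");
        topology.addSink("sink", "output-topic",
                Serdes.String().serializer(), Serdes.String().serializer(),
                "uppercase");
        return topology;
    }
}

The returned Topology runs like any DSL-built one: pass it to new KafkaStreams(topology, props) and start it.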