Bootstrapping Microservices


Chapter 1: Why microservices?


1.5 What is a microservice?

DEFINITION A microservice is a tiny and independent software process that runs on its own deployment schedule and can be updated independently.

1.6 What is a microservices application?

DEFINITION A microservices application is a distributed program composed of many tiny services that collaborate to achieve the features and functionality of the overall project.

Chapter 2: Creating your first microservice


2.6.8 Live reloading for fast iteration
In development mode, we’d like to optimize for fast iterations and productivity. Alternatively, in production mode, we’d like to optimize for performance and security. These needs are at odds with each other; hence, the two modes must be treated separately.
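One way this split can look in a Node.js project (script names and entry point illustrative) is a pair of npm scripts: the dev script runs the service under nodemon for live reload, while the production script runs plain node:

```json
{
  "scripts": {
    "start": "node index.js",
    "start:dev": "nodemon index.js"
  }
}
```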

Chapter 3: Publishing your first microservice


3.9.1 Creating a private container registry

I created a private image registry on Alibaba Cloud and can now push and pull my own built images; previously I had only ever used the public Docker Hub.

Chapter 4: Data management for microservices


4.4 Adding file storage to our application

I used ali-oss, along with its Node.js SDK, to implement browsing of the stored files.

Chapter 5: Communication between microservices


5.5 Live reload for fast iterations

NOTE Not being able to quickly update the code in a running application is a terrible thing for our development process and can be a huge drain on our productivity. We’ll address this early and find a way to restore our live reload capability.


5.8 Indirect messaging with RabbitMQ
5.8.6 Single-recipient indirect messaging

NOTE Single-recipient messages are one-to-one: a message is sent from one microservice and received by only a single other. This is a great way of making sure that a particular job is done only once within your application.
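A minimal sketch of the single-recipient pattern with amqplib (broker URL, queue name, and message shape are all illustrative): publishing to a single named queue means that, even with several workers consuming that queue, each message is delivered to only one of them.

```javascript
// Encode a message as a JSON buffer so any consumer can parse it.
function encodeViewedMessage(videoPath) {
  return Buffer.from(JSON.stringify({ videoPath }));
}

// Sketch only: send to one shared named queue. RabbitMQ hands each
// message to a single consumer of that queue, so the job runs once.
async function sendViewedMessage(videoPath) {
  const amqp = require("amqplib"); // assumed dependency
  const connection = await amqp.connect("amqp://guest:guest@rabbit:5672");
  const channel = await connection.createChannel();
  await channel.assertQueue("viewed");
  channel.sendToQueue("viewed", encodeViewedMessage(videoPath));
}
```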

5.8.7 Multiple-recipient messages

NOTE Multiple-recipient messages are one-to-many: a message is sent from only a single microservice but potentially received by many others. This is a great way of publishing notifications within your application.
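Sketched with amqplib (URL and names illustrative), the one-to-many variant publishes to a fanout exchange instead of a named queue; each subscriber binds its own anonymous queue to the exchange and therefore receives its own copy of every message:

```javascript
function encodeViewedMessage(videoPath) {
  return Buffer.from(JSON.stringify({ videoPath }));
}

function decodeViewedMessage(buffer) {
  return JSON.parse(buffer.toString());
}

// Sketch only: a fanout exchange copies every message to all bound queues.
async function broadcastViewedMessage(videoPath) {
  const amqp = require("amqplib"); // assumed dependency
  const connection = await amqp.connect("amqp://guest:guest@rabbit:5672");
  const channel = await connection.createChannel();
  await channel.assertExchange("viewed", "fanout");
  channel.publish("viewed", "", encodeViewedMessage(videoPath));
}

// Each subscriber binds its own exclusive, server-named queue,
// so every subscriber gets a copy of each broadcast message.
async function subscribeToViewedMessages(handler) {
  const amqp = require("amqplib");
  const connection = await amqp.connect("amqp://guest:guest@rabbit:5672");
  const channel = await connection.createChannel();
  await channel.assertExchange("viewed", "fanout");
  const { queue } = await channel.assertQueue("", { exclusive: true });
  await channel.bindQueue(queue, "viewed", "");
  channel.consume(queue, (msg) => handler(decodeViewedMessage(msg.content)));
}
```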

Chapter 6: Creating your production environment


6.4 Infrastructure as code

It’s called infrastructure as code because rather than manually creating infrastructure, we will write code that creates our infrastructure. The fact that this code both describes and builds our infrastructure makes it a form of executable documentation.
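For instance, a Terraform definition of a resource group (resource name and region illustrative) both documents that the infrastructure exists and, when applied, creates it:

```hcl
resource "azurerm_resource_group" "flixtube" {
  name     = "flixtube"
  location = "West US 2"
}
```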


6.7 Creating infrastructure with Terraform
6.7.1 Why Terraform?

Terraform is a tool and a language for configuring infrastructure for cloud-based applications.


Because network access to overseas services from within China is extremely slow, even `terraform init` would not complete when working with aliyun. From this alone it’s clear that for a relatively cutting-edge practice like infrastructure as code to be adopted and become popular in China, it will have to ride on the popularity of containers, and cloud providers like aliyun will need to solve this whole series of problems first.
Of course, for simple application deployments that don’t use a k8s cluster, a GUI console works fine too. But viewed at a higher level of abstraction, those are all just resources. If we want to address problems across development, testing, and production, describing abstract resources in code is more intuitive and usable than handling concrete resources (individual servers, network ports, persistent volumes, and so on) one by one.
Which technology to actually use must always be decided by the specific environment, scenario, and constraints.
After many attempts, it finally ran successfully.

Chapter 7: Getting to continuous delivery


7.4 Continuous delivery (CD)

Continuous delivery (CD) is a technique in software development where we do frequent automated deployments of our updated code to a production (or testing) environment.
This is an important aspect of our application because it’s how we reliably and frequently deliver features into the hands of our customers. Getting feedback from customers is vital to building a product that’s relevant. CD allows us to quickly and safely get code changes into production and promotes a rapid pace of development.


7.7 Continuous delivery with Bitbucket Pipelines

We don’t want to manually invoke Terraform for every change to our infrastructure or microservices. We’d like to deploy changes frequently, and we want it to be automated and streamlined so that we can spend the majority of our time building features rather than deploying our software. In addition, automation also greatly reduces the potential for human error.

Chapter 8: Automated testing for microservices


8.5 Testing with Jest

8.5.10 Mocking with Jest

DEFINITION Mocking is where we replace real dependencies in our code with fake or simulated versions of those dependencies.


The purpose of mocking is to isolate the code we are testing. Isolating particular sections of code allows us to focus on testing just that code and nothing else. Isolation is important for unit testing and test-driven development.
DI (dependency injection) is a technique where we inject dependencies into our code rather than hard-coding them.

function square(n, multiply) {
    return multiply(n, n);
}

test("can square two", () => {
    const mockMultiply = (n1, n2) => {
        expect(n1).toBe(2);
        expect(n2).toBe(2);
        return 4;
    };

    const result = square(2, mockMultiply);
    expect(result).toBe(4);
});


You might note at this point that we have just implemented the square function, tested it, and proved that it works, and the real version of the multiply function doesn’t even exist yet!
This is one of the superpowers of test-driven development (TDD). TDD allows us to reliably test incomplete versions of our code.

When I actually ran the tests, I found:

Chapter 9: Exploring FlixTube


9.7 FlixTube deep dive

9.7.2 Mocking storage

For convenience during development, we replaced the Azure version of the video storage microservice with a mock version.
When running in development, we’d prefer to eliminate external dependencies like connections to cloud storage. In this case, limiting our storage to the local filesystem makes the setup for development easier. Performance is improved because videos are stored locally and not sent out to the cloud.

NOTE Removing or replacing big complex microservices, possibly even whole groups of microservices, is an important technique for reducing the size of our application so that it can fit on a single computer and be able to run during development.


9.11 FlixTube in the future

Two new terms learned:

Recommendations / Likes / Favorites

like (喜欢) → likes; bookmark (收藏) → favorites

9.12 Continue your learning

Practicing the art of development is what takes you to the next level.
Development is not without challenges. In fact, it is a never-ending rollercoaster ofproblems and solutions.
The references at the end of each chapter will help you continue your learning journey. But just remember that your key to success and your key to retaining these skills is consistent practice.

Chapter 10: Healthy microservices

10.2 Monitoring your microservices

  • Logging
  • Error handling
  • Log aggregation
  • Automatic health checks

10.2.7 Automatic restarts with Kubernetes health checks

Kubernetes has a great feature for automated health checks that allows us to automatically detect and restart unhealthy microservices.


The readiness probe shows if the microservice has started and is ready to start accepting requests. The liveness probe then shows whether the microservice is still alive and is still accepting requests.
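In a deployment’s container spec, the two probes might be declared like this (paths, port, and timings are illustrative):

```yaml
containers:
  - name: history
    image: history:1
    readinessProbe:          # don't route traffic until the service is ready
      httpGet:
        path: /ready
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:           # restart the container if this starts failing
      httpGet:
        path: /alive
        port: 80
      periodSeconds: 15
```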


First, if we didn’t use readiness and liveness probes, our history microservice would constantly start up, crash, and restart while RabbitMQ is down. This constant restarting isn’t an efficient use of our resources, and it generates a ton of error logging that we’d have to analyze (in case there’s a real problem buried in there!).


This would save the microservice from constantly crashing and restarting, but it requires significantly more sophisticated code in our microservice to handle the disconnection and reconnection to RabbitMQ. We don’t need to write such sophisticated code because that’s what the probes are doing for us.

Chapter 11: Pathways to scalability


11.2 Scaling the development process

11.2.6 Creating multiple environments
Working with multiple environments is fairly easy these days, but data sometimes needs careful thought. For example, can production data simply be imported into the staging, test, and development environments? Not necessarily. Each environment has its own resource stores, so production data may reference resources that other environments cannot use directly. OSS files are one example: each environment may have its own bucket, so links that point at production resources may not be reachable from staging.

Of course, any problem that exists has a solution; it’s only a question of cost. Our intent in introducing multiple environments is good, but we should also recognize the extra cost we take on at the same time.

11.5 Refactoring to microservices
DO YOU REALLY NEED MICROSERVICES?

  1. Is it really worth the cost of doing the conversion?
  2. Do you really need to scale?
  3. Do you really need the flexibility of microservices?

PLAN YOUR CONVERSION AND INVOLVE EVERYONE

KNOW YOUR LEGACY CODE

IMPROVE YOUR AUTOMATION
With microservices, you can’t get away from automation. If you can’t afford to investin automation, you probably can’t afford to convert to microservices.

BUILD YOUR MICROSERVICES PLATFORM

CARVE ALONG NATURAL SEAMS

EXTRACT THE PARTS THAT CHANGE MOST FREQUENTLY

AND REPEAT . . .

IT DOESN’T HAVE TO BE PERFECT
A SPECTRUM OF POSSIBILITIES

11.7 From simple beginnings . . .

My reading notes and summary for this book end here. Below is a recap of the technologies it covers.

Tool                      Version    Purpose
Git                       2.27.0     Version control
Node.js                   12.18.1    Runtime environment
Visual Studio (VS) Code   1.46.1     Code editor
Docker                    19.03.12   Package, publish, and test our microservices
Docker Compose            1.26.2     Configure, build, run, and manage multiple containers at the same time
Azure Storage             2.10.3     Store files in the cloud
MongoDB                   4.2.8      NoSQL database
RabbitMQ                  3.8.5      Message queuing software
amqplib                   0.5.6      Configure RabbitMQ and send and receive messages from JavaScript
Kubernetes                1.18.8     Computing platform that hosts our microservices in production
Terraform                 0.12.29    Script the creation of cloud resources and application infrastructure
Kubectl                   1.18.6     Command-line tool for interacting with a Kubernetes cluster
Azure CLI                 2.9.1      Managing Azure accounts and cloud resources
Bitbucket Pipelines       -          Hosted service from Atlassian used for CD to automate deployment of our application
Jest                      26.2.2     Automated testing of JavaScript code
Cypress                   4.12.1     Automated testing of web pages

Through the simple FlixTube example, this book connects every stage of microservices work (development, testing, and deployment) step by step, finally implementing CI/CD with Terraform + Kubernetes + Bitbucket Pipelines.

《MICROSERVICES SECURITY IN ACTION》PART 3

Part 3 Service-to-service communications

6 Securing east/west traffic with certificates

6.1 Why use mTLS?

TLS protects communications between two parties for confidentiality and integrity. Using TLS to secure data in transit has been a practice for several years. Recently, because of increased cybersecurity threats, it has become a mandatory practice in any business that has serious concerns about data security.

6.1.1 Building trust between a client and a server with a certificate authority
6.1.2 Mutual TLS helps the client and the server to identify each other

6.1.3 HTTPS is HTTP over TLS

6.2 Creating certificates to secure access to microservices

If your microservice endpoints aren’t public, you don’t need to have a public CA sign the corresponding certificates. You can use your own CA, trusted by all the microservices in your deployment.
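A sketch of that with the openssl CLI (file names, subjects, and lifetimes are illustrative): create a private CA, then sign a per-service certificate with it; any microservice that trusts the CA certificate can then verify the service certificate.

```shell
# Create the private certificate authority (key + self-signed cert).
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -subj "/CN=internal-ca" -days 365 -out ca.crt

# Create a key and a certificate-signing request for one microservice,
# then sign the request with the CA.
openssl genrsa -out orders.key 2048
openssl req -new -key orders.key -subj "/CN=orders" -out orders.csr
openssl x509 -req -in orders.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 90 -out orders.crt
```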

6.2.1 Creating a certificate authority
6.2.2 Generating keys for the Order Processing microservice
6.2.3 Generating keys for the Inventory microservice

6.2.4 Using a single script to generate all the keys

6.3 Securing microservices with TLS


6.3.1 Running the Order Processing microservice over TLS


curl -v -k https://localhost:6443/orders/11
Using -k in the curl command instructs curl to skip trust validation. Ideally, you shouldn’t do this in a production deployment.

6.3.2 Running the Inventory microservice over TLS
6.3.3 Securing communications between two microservices with TLS

6.4 Engaging mTLS

Now you have two microservices communicating with each other over TLS, but it’s one-way TLS. Only the calling microservice knows what it communicates with, and the recipient has no way of identifying the client. This is where you need mTLS.

6.5 Challenges in key management


6.5.1 Key provisioning and bootstrapping trust

6.5.2 Certificate revocation

6.6 Key rotation

Key rotation is more challenging in a microservices deployment with an increased number of services spinning up and down. Automation is the key to addressing this problem.

6.7 Monitoring key usage

We discuss the observability of a system under three categories, which we call the three pillars of observability: logging, metrics, and tracing.

Monitoring a microservices deployment is challenging, as many service-to-service interactions occur. We use tools like Zipkin, Prometheus, and Grafana in a microservices deployment to monitor key use.


Summary
 There are multiple options in securing communications among microservices, including mutual TLS (mTLS) and JSON Web Tokens (JWTs).

 Transport Layer Security protects communications between two parties for confidentiality and integrity. Using TLS to secure data in transit has been a practice for several years.

 mTLS is the most popular way of securing interservice communications among microservices.

 TLS is also known as one-way TLS, mostly because it helps the client identify the server it’s talking to, but not the other way around. Two-way TLS, or mTLS, fills this gap by helping the client and server identify themselves to each other.

 Key management in a microservices deployment is quite challenging, and we need to be concerned about bootstrapping trust and provisioning keys and certificates to workloads or microservices, key revocation, key rotation, and key use monitoring.

 Certificate revocation can happen for two main reasons: the corresponding private key is compromised, or the private key of the CA that signed the certificate is compromised.

 Using a certificate revocation list (CRL), defined in RFC 2459, was among one of the very first approaches suggested to overcome issues related to certificate revocation.

 Unlike CRL, the Online Certificate Status Protocol (OCSP) doesn’t build one
bulky list of all revoked certificates. Each time the TLS client application sees a certificate, it has to talk to the corresponding OCSP endpoint and check whether the certificate is revoked.

 OCSP stapling makes OCSP a little better. It takes the overhead of talking to the OCSP endpoint from the TLS client and hands it over to the server.

 The approach suggested by short-lived certificates ignores certificate revocation, relying instead on expiration.

 All the keys provisioned into microservices must be rotated before they expire.

 Observability is an essential ingredient of a typical microservices deployment. It’s about how well you can infer the internal state of a system by looking at the external outputs. Monitoring is about tracking the state of a system.




《MICROSERVICES SECURITY IN ACTION》PART 2

Part 2 Edge security

3 Securing north/south traffic with an API gateway

In an ideal world, the microservices developer should worry only about the business functionality of a microservice, and the rest should be handled by specialized components with less hassle.

The API Gateway and Service Mesh are two architectural patterns that help us reach that ideal.

The API Gateway pattern is mostly about edge security, while the Service Mesh pattern deals with service-to-service security. Or, in other words, the API Gateway deals with north/south traffic, while the Service Mesh deals with east/west traffic.

3.1 The need for an API gateway in a microservices deployment

3.1.1 Decoupling security from the microservice

One key aspect of microservices best practices is the single responsibility principle. This principle, commonly used in programming, states that every module, class, or function should be responsible for a single part of the software’s functionality. Under this principle, each microservice should be performing only one particular function.

Executing all these steps becomes a problem because the microservice loses its atomic characteristics by performing more operations than it’s supposed to.

The coupling of security and business logic introduces unwanted complexity and maintenance overhead to the microservice.

(1) Changes in the security protocol require changes in the microservice

(2) Scaling up the microservice results in more connections to the authorization server


3.1.2 The inherent complexities of microservice deployments make them harder to consume

3.1.3 The rawness of microservices does not make them ideal for external exposure

3.2 Security at the edge

3.2.1 Understanding the consumer landscape of your microservices

Applications running within the organization’s computing infrastructure may consume both internal-facing and external-facing microservices.

3.2.2 Delegating access

3.2.3 Why not basic authentication to secure APIs?

 The username and password are static, long-living credentials.
 No restrictions on what the application can do.

3.2.4 Why not mutual TLS to secure APIs?

mTLS solves one of the problems with basic authentication by having a lifetime for its certificates. The certificates used in mTLS are time-bound, and whenever a certificate expires, it’s no longer considered valid.

However, just as in basic authentication, mTLS fails to meet access delegation requirements we discussed in section 3.2.2 in a microservices deployment.

Therefore, mTLS is mostly used to secure communication between a client application and a microservice, or communications among microservices. In other words, mTLS is mostly used to secure communications among systems.

3.2.5 Why OAuth 2.0?

To understand why OAuth 2.0 is the best security protocol for securing your microservices at the edge, first you need to understand your audience. You need to figure out who wants access to your resources, for what purpose, and for how long.

3.3 Setting up an API gateway with Zuul

3.3.1 Compiling and running the Order Processing microservice

3.3.2 Compiling and running the Zuul proxy

3.3.3 Enforcing OAuth 2.0–based security at the Zuul gateway

Enforcing token validation at the Zuul gateway

A filter can be one of four types:

Prerequest filter—Executes before the request is routed to the target service
Route filter—Can handle the routing of a message
Post-request filter—Executes after the request has been routed to the target service
Error filter—Executes if an error occurs in the routing of a request

OAuth 2.0 token introspection profile

{
  "active": true,
  "client_id": "application1",
  "scope": "read write",
  "sub": "application1",
  "aud": "http://orders.ecomm.com"
}

Self-validation of tokens without integrating with an authorization server

But the gateway component relies heavily on the authorization server to enable access to your microservices, so the gateway component is coupled with another entity.

The way to deal with this problem is to find a mechanism that enables the gateway to validate tokens by itself without the assistance of an authorization server.

JWTs are designed to solve this problem. A JSON Web Signature (JWS) is a JWT signed by the authorization server. By verifying the signature of the JWS, the gateway knows that this token was issued by a trusted party and that it can trust the information contained in the body.


Pitfalls of self-validating tokens and how to avoid them

If one of these tokens is prematurely revoked, the API gateway won’t know that the token has been revoked, because the
revocation happens at the authorization server end, and the gateway no longer communicates with the authorization server for the validation of tokens.

In practice, however, applications with longer user sessions have to keep refreshing their access tokens when the tokens carry a shorter expiration.

Another problem with the self-contained access token is that the certificate used to verify a token signature might have expired.

To solve this problem, you need to make sure that whenever a certificate is renewed, you deploy the new certificate on the gateway.

Then the gateway can fetch the token issuer’s certificate dynamically from an endpoint exposed by the authorization server to do the JWT signature validation, and check whether that certificate is signed by a certificate authority it trusts.

3.4 Securing communication between Zuul and the microservice

3.4.1 Preventing access through the firewall

3.4.2 Securing the communication between the API gateway and microservices by using mutual TLS

But mTLS verification happens at the Transport layer of the microservice and doesn’t propagate up to the Application layer.

Summary
 The API Gateway pattern is used to expose microservices to client applications as APIs.

 The API gateway helps to expose microservices of different flavors by using a consistent and easy-to-understand interface to the consumers of these microservices.

 We do not have to expose all microservices through the API gateway. Some microservices are consumed internally only, in which case they will not be exposed through the gateway to the outside world.

 Protocols such as basic authentication and mutual TLS are not sufficient to secure APIs, and microservices are exposed to the outside world via APIs.

 OAuth 2.0 is the de facto standard for securing APIs and microservices at the edge.

 OAuth 2.0 is an extensible authorization framework, which has a wide array of grant types. Each grant type defines the protocol indicating how a client application would obtain an access token to access an API/microservice.

 We need to choose the right OAuth 2.0 grant type for our client applications based on their security characteristics and trustworthiness.

 An access token can be a reference token or a self-contained token (JWT). If it is a reference token, the gateway has to talk to the issuer (or the authorization server) always to validate it. For self-contained tokens, the gateway can perform the validation by verifying the token signature.

 A self-contained access token has its own pitfalls, and one way to get around token revocation is to have short-lived JWTs for self-contained access tokens.

 The communication between the gateway and the microservice can be protected with either firewall rules or mutual TLS—or a combination of both.

 All samples in this chapter use HTTP (not HTTPS) endpoints to spare you
from having to set up proper certificates and to make it possible for you to
inspect messages being passed on the wire (network), if required. In production systems, we do not recommend using HTTP for any endpoint.

4 Accessing a secured microservice via a single-page application

We believe in completing an end-to-end architecture with a microservices deployment, from data to screen. And SPAs are the most used client
application type.

4.1 Running a single-page application with Angular

4.1.1 Building and running an Angular application from the source code

4.1.2 Looking behind the scenes of a single-page application

4.2 Setting up cross-origin resource sharing

4.2.1 Using the same-origin policy

The origin of a URL consists of the URI scheme, hostname, and port.

The same-origin policy exists to prevent a malicious script on one website from accessing data on other websites unintentionally. The same-origin policy applies only to data access, not to CSS, images, and scripts, so you could write web pages that consist of links to CSS, images, and scripts of
other origins.
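Since two URLs share an origin only when scheme, hostname, and port all match, the check can be sketched in one line with the WHATWG URL class (built into Node and browsers):

```javascript
// True only when scheme, hostname, and port are all identical.
const sameOrigin = (a, b) => new URL(a).origin === new URL(b).origin;
```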

4.2.2 Using cross-origin resource sharing

Web browsers have an exception to the same-origin policy: cross-origin resource sharing (CORS), a specification that allows web browsers to access selected resources on different origins;

Web browsers use the OPTIONS HTTP method along with special HTTP headers to determine whether to allow or deny a cross-origin request. Let’s see how the protocol works.

You can observe this request, known as a preflight request, by inspecting it on the Network tab of your browser’s developer tools. The request includes the following HTTP headers:
 Access-Control-Request-Headers
 Access-Control-Request-Method
 Origin


The server responds to this preflight request with the following headers:
 Access-Control-Allow-Credentials
 Access-Control-Allow-Headers
 Access-Control-Allow-Methods
 Access-Control-Allow-Origin
 Access-Control-Max-Age
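The server side of this handshake can be sketched as a small function (the allowed origins, methods, and headers are illustrative): given the preflight’s Origin header, the server either answers with the Access-Control-Allow-* headers or refuses the cross-origin request.

```javascript
// Build the response headers for an OPTIONS preflight request.
// Returns null when the requesting origin is not permitted.
function preflightHeaders(origin, allowedOrigins) {
  if (!allowedOrigins.includes(origin)) return null;
  return {
    "Access-Control-Allow-Origin": origin,
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type, Authorization",
    "Access-Control-Allow-Credentials": "true",
    "Access-Control-Max-Age": "3600", // cache the preflight result for an hour
  };
}
```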

4.2.3 Inspecting the source that allows cross-origin requests

@CrossOrigin(origins = "http://localhost:4200")

4.2.4 Proxying the resource server with an API gateway

4.3 Securing a SPA with OpenID Connect

OpenID Connect (OIDC) is an identity layer built on top of OAuth 2.0.

4.3.1 Understanding the OpenID Connect login flow

{
  "access_token": "92ee7d17-cfab-4bad-b110-f287b4c2b630",
  "token_type": "bearer",
  "refresh_token": "dcce6ad7-9520-43fd-8170-a1a2857818b3",
  "expires_in": 1478,
  "scope": "openid"
}

4.3.2 Inspecting the code of the applications

4.4 Using federated authentication

4.4.1 Multiple trust domains

4.4.2 Building trust between domains

Summary
 Single-page applications perform better by reducing network chattiness as they perform all rendering on the web browser and by reducing the workload on the web server.

 The SPA architecture brings simplicity to microservices architectures because they do not require complex web application-hosting facilities such as JBoss or Tomcat.

 The SPA architecture abstracts out the data layer from the user experience layer.

 SPAs have security restrictions and complexities due to the same-origin policy on web browsers.

 The same-origin policy ensures that scripts running on a particular web page can make requests only to services running on the same origin.

 The same-origin policy applies only to data access, not to CSS, images, and
scripts, so you can write web pages that consist of links to CSS, images, and scripts of other origins.

 OpenID Connect is an identity layer built on top of OAuth 2.0. Most SPAs
use OpenID Connect to authenticate users.

 Because SPAs may consume APIs (data sources) from multiple trust domains, a token obtained from one trust domain may not be valid for another trust domain. We need to build token-exchange functionality when a SPA hops across multiple trust boundaries.

 All samples in this chapter used HTTP (not HTTPS) endpoints to spare you
from having to set up proper certificates and to make it possible for you to
inspect messages being passed on the wire (network), if required. In production systems, we do not recommend using HTTP for any endpoint.

5 Engaging throttling, monitoring, and access control

5.1 Throttling at the API gateway with Zuul

5.1.1 Quota-based throttling for applications

5.1.2 Fair usage policy for users

5.1.3 Applying quota-based throttling to the Order Processing microservice

5.1.4 Maximum handling capacity of a microservice

5.1.5 Operation-level throttling

5.1.6 Throttling the OAuth 2.0 token and authorize endpoints

5.1.7 Privilege-based throttling
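The quota idea behind 5.1.1 can be sketched as a fixed-window counter keyed by application id (the limit and window values are illustrative; a real gateway would keep these counters in a shared store such as Redis so all gateway instances see the same totals):

```javascript
// allow(appId) returns true while the app is within its quota for the
// current window, and false once the quota is exhausted.
function makeQuotaThrottle(limit, windowMs, now = Date.now) {
  const windows = new Map(); // appId -> { start, count }
  return function allow(appId) {
    const t = now();
    const w = windows.get(appId);
    if (!w || t - w.start >= windowMs) {
      // Start a fresh window for this application.
      windows.set(appId, { start: t, count: 1 });
      return true;
    }
    w.count += 1;
    return w.count <= limit;
  };
}
```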


5.2 Monitoring and analytics with Prometheus and Grafana

The modern term for monitoring and analytics is observability.

Prometheus is a popular open source monitoring tool for microservices. It helps us keep track of system metrics over a given time period and can be used to determine the health of a software system. Metrics include memory usage and CPU consumption.
Grafana is an open source data visualization tool. It can help you build dashboards to visualize the metrics provided by Prometheus or any other data source. At the time of writing, Grafana is the most popular data visualization tool on the market.
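Under the hood, Prometheus scrapes a plain-text /metrics endpoint in its text exposition format. A hand-rolled sketch of rendering one counter (metric name and help text illustrative; in practice a service would use a client library such as prom-client rather than formatting this itself):

```javascript
// Render a single counter in the Prometheus text exposition format:
// a HELP line, a TYPE line, then the sample itself.
function renderCounter(name, help, value) {
  return [
    `# HELP ${name} ${help}`,
    `# TYPE ${name} counter`,
    `${name} ${value}`,
    "",
  ].join("\n");
}
```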

5.2.1 Monitoring the Order Processing microservice


As you can see, Grafana gives you a much more user-friendly view of the metrics exposed by the Order Processing microservice.

5.2.2 Behind the scenes of using Prometheus for monitoring

5.3 Enforcing access-control policies at the API gateway with Open Policy Agent

The API gateway here is acting as a policy enforcement point. OPA is a lightweight general-purpose policy engine that has no dependency on microservices. You can use OPA to define fine-grained access-control
policies and enforce those policies at different places in a microservices deployment.
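A minimal Rego policy of the kind OPA evaluates (the package name, input shape, and path are illustrative): the gateway sends the request attributes as `input`, and OPA answers with the `allow` decision.

```rego
package authz

default allow = false

# Permit read access to the orders resource.
allow {
    input.method == "GET"
    input.path == ["orders"]
}
```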

5.3.1 Running OPA as a Docker container
5.3.2 Feeding the OPA engine with data
5.3.3 Feeding the OPA engine with access-control policies
5.3.4 Evaluating OPA policies
5.3.5 Next steps in using OPA


Summary
 Quota-based throttling policies for applications help to monetize APIs/microservices and to limit a given application from overconsuming APIs/microservices.

 Fair-usage policies need to be enforced on applications to ensure that all users get a fair quota of requests.

 User privilege-based throttling is useful for allowing different quotas for users with different privilege levels.

 An API gateway can be used to apply throttling rules in a microservices
deployment.

 Prometheus is the most popular open source monitoring tool available as of this writing.

 Grafana helps to visualize the data being recorded by Prometheus.

 Open Policy Agent (OPA) helps control access to a microservices deployment.

 OPA data, OPA input data, and OPA policies are used together to apply various access-control rules.

 All samples in this chapter used HTTP (not HTTPS) endpoints to spare you
from having to set up proper certificates and to make it possible for you to
inspect messages being passed on the wire (network), if required. In production systems, we do not recommend using HTTP for any endpoint.