Creating an application on .NET Core and Kubernetes: our experience

Hello!
 
 
Today we will share the experience of one of our DevOps projects. We decided to implement a new application for Linux, using .NET Core and a microservice architecture.
 
 
We expect the project to develop actively and the number of users to keep growing. Therefore, the system should be easy to scale both in terms of functionality and performance.
 
 
We need a fault-tolerant system: if one functional block stops working, the rest should keep working. We also want continuous integration, including deployment of the solution to the customer's servers.
 
 
So we chose the following technologies:
 
 
 
• .NET Core for implementing the microservices (in our project we used version 2.?),

• Kubernetes for orchestrating the microservices,

• Docker for building the microservice images,

• the RabbitMQ integration bus,

• ELK for logging,

• TFS for implementing the CI/CD pipeline.
 
 
In this article we will share the details of our solution.
 
 
 
This article is a transcript of our talk at a .NET meetup; here is a link to the video of the talk.
 
The Microsoft guide ".NET Microservices: Architecture for Containerized .NET Applications" offers three possible approaches to client-to-microservice communication. We reviewed all three and chose the most suitable one.
 
 
• API Gateway service
 
An API Gateway service is a facade for user requests to the other services. The problem with this option is that if the facade goes down, the whole solution stops working. We decided to abandon this approach for the sake of fault tolerance.
 
 
• API Gateway with Azure API Management
 
Microsoft offers the option of using a cloud facade in Azure. But this option did not work for us, because we were going to deploy the solution not in the cloud but on the customer's servers.
 
 
• Direct Client-To-Microservice communication
 
That left the last option: direct interaction of clients with microservices. We chose it.
 
 
 
 
Its advantage is fault tolerance. The downside is that some functionality has to be duplicated in each service. For example, we had to configure authorization separately for every microservice that users access.
 
 
Of course, the question arises of how the load is balanced and how fault tolerance is achieved. Everything is simple: this is handled by the Kubernetes Ingress Controller.
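To give a feel for how this looks, here is a minimal sketch of an Ingress rule that routes requests to one of our services. The manifest is illustrative rather than taken from the project: the resource name and the path are made up, and the exact apiVersion depends on the cluster version.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: etr-it-ingress                 # hypothetical name
spec:
  rules:
  - http:
      paths:
      - path: /dictionary              # illustrative path prefix
        pathType: Prefix
        backend:
          service:
            name: imtob-etr-it-dictionary-api-services   # the Service shown later in the article
            port:
              number: 80

The Ingress Controller then spreads the incoming requests across the pods behind the Service, which is what gives us the balancing and failover described above.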
 
 
 
 
Node 1, node 2 and node 3 are replicas of a single microservice. If one of the replicas stops working, the load balancer automatically redirects the load to the other replicas.
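The number of replicas is set by a single field in the microservice's Deployment; a small illustrative fragment (the full Deployment manifest is shown later in the article):

spec:
  replicas: 3   # Kubernetes keeps three pods of this microservice running and replaces any that fail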
 
 

Physical architecture


 
Here's how we organized the infrastructure of our solution:
 
 
• Each microservice has its own database (if it needs one, of course); no service accesses another microservice's database.
 
• Microservices communicate only through the RabbitMQ bus and via HTTP requests.
 
• Each service has a clearly defined responsibility.
 
• For logging, we use ELK and the Serilog library to write to it.
 
 
 
 
The database service was deployed on a separate virtual machine rather than in Kubernetes, because Microsoft does not recommend running the DBMS in Docker in production environments.
 
 
The logging service was also deployed on a separate virtual server, for fault-tolerance reasons: if we have problems with Kubernetes, we can still figure out what went wrong.
 
 

Deployment: how we organized the development and production environments


 
On our infrastructure, there are three environments in Kubernetes. All three environments use the same database service and the same logging service, and, of course, each environment looks at its own database.
 
 
 
 
On the customer's infrastructure we also have two environments: pre-production and production. There, we have separate database servers for the pre-production and the production environment. For logging, there is one dedicated ELK server on our infrastructure and one on the customer's infrastructure.
 
 

How to deploy 5 environments with 10 microservices in each?


 
On average, we have 10 services per project and three environments (QA, DEV and Stage), which gives about 30 deployed microservices. And that is only on the development infrastructure! Add two more environments on the customer's infrastructure, and we get 50 microservices.
 
 
 
 
Clearly, this many services have to be managed somehow. Kubernetes helps us with that.
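Each environment corresponds to its own Kubernetes namespace, which is what the --namespace flag in the commands below refers to. Creating one is a few lines of YAML; a minimal sketch (the name here is only an example):

apiVersion: v1
kind: Namespace
metadata:
  name: dev   # one namespace per environment: dev, qa, stage, ...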
 
 
To deploy a microservice, you need to:

• deploy the Secret,

• deploy the Deployment,

• deploy the Service.
 
 
We will say more about the Secret below.
 
A Deployment is an instruction for Kubernetes, based on which it launches the Docker container of our microservice. Here is the command that applies a Deployment:
 
 
kubectl apply -f <yaml deployment configuration> --namespace=DEV
 
 
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: imtob-etr-it-dictionary-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: imtob-etr-it-dictionary-api
    spec:
      containers:
      - name: imtob-etr-it-dictionary-api
        image: nexus3.company.ru:18085/etr-it-dictionary-api:18289
        resources:
          requests:
            memory: "256Mi"
          limits:
            memory: "512Mi"
        volumeMounts:
        - name: secrets
          mountPath: /app/secrets
          readOnly: true
      volumes:
      - name: secrets
        secret:
          secretName: secret-appsettings-dictionary

 
 
This file describes the name of the deployment (imtob-etr-it-dictionary-api), the image to run, plus other settings. Through the secret section we will plug in the environment-specific configuration.
 
 
After the Deployment is applied, we deploy the Service, if one is needed.
 
 
Services are needed when a microservice has to be reachable from outside, for example when a user or another microservice needs to make requests to it.
 
 
kubectl apply -f ./imtob-etr-it-dictionary-api.yml --namespace=DEV
 
 
apiVersion: v1
kind: Service
metadata:
  name: imtob-etr-it-dictionary-api-services
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    name: imtob-etr-it-dictionary-api

 
 
The service description is usually small. In it we see the name of the service, how to access it, and the port number.
 
 
So, to deploy an environment, we need:

• a set of Secret files for all microservices,

• a set of Deployment files for all microservices,

• a set of Service files for all microservices.
 
 
We store all these scripts in the git repository.
 
 
To deploy the solution, we ended up with three kinds of scripts:

• a secrets folder with the configuration for each environment,

• a folder with the Deployments of all microservices,

• a folder with the Services of those microservices that need them,

and in each there are about ten commands, one per microservice. For convenience, we keep a page with these scripts in Confluence, which helps us quickly deploy a new environment.
 
 
Here is the script that deploys the Deployments (there are similar sets for Secrets and for Services):
 
 
kubectl apply -f ./imtob-etr-it-image-api.yml --namespace=DEV
kubectl apply -f ./imtob-etr-it-mobile-api.yml --namespace=DEV
kubectl apply -f ./imtob-etr-it-planning-api.yml --namespace=DEV
kubectl apply -f ./imtob-etr-it-result-api.yml --namespace=DEV
kubectl apply -f ./imtob-etr-it-web.yml --namespace=DEV
kubectl apply -f ./imtob-etr-it-report-api.yml --namespace=DEV
kubectl apply -f ./imtob-etr-it-template-constructor-api.yml --namespace=DEV
kubectl apply -f ./imtob-etr-it-dictionary-api.yml --namespace=DEV
kubectl apply -f ./imtob-etr-it-integration-api.yml --namespace=DEV
kubectl apply -f ./imtob-etr-it-identity-api.yml --namespace=DEV
 

 
 

Implementation of CI/CD


 
 
Each service is in its own folder, plus we have one folder with common components.
 
 
 
 
Each microservice also has its own Build Definition and Release Definition. We set up the Build Definition to trigger on a commit to the corresponding service, that is, to the corresponding folder. If the contents of the common components folder change, all microservices are rebuilt and redeployed.
 
 
Here are the advantages we see in organizing the builds this way:
 
 
1. The solution is in a single git repository,

2. when several microservices change, their builds run in parallel if there are free build agents,

3. each Build Definition is a simple script that builds the image and pushes it to the Nexus Registry.
 
 

Build Definition and Release Definition


 
We previously described how to deploy a VSTS agent in this article.
 
 
 
 
First comes the Build Definition. On a command from TFS, the VSTS agent starts the Dockerfile build. As a result, we get an image of the microservice. This image is stored locally on the machine where the VSTS agent runs.
 
 
After the build, a push step runs that sends the image from the previous step to the Nexus Registry. Now it can be used from outside. The Nexus Registry is something like NuGet, only not just for libraries but also for Docker images and more.
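To make these two steps concrete, here is a rough sketch of what such a build could look like if written as a VSTS/Azure Pipelines YAML definition. This is only an illustration of the idea, not our actual definition: the working directory is a made-up path, and we assume the agent is already authenticated against the Nexus Registry.

steps:
- script: docker build -t nexus3.company.ru:18085/etr-it-dictionary-api:$(Build.BuildId) .
  displayName: Build the microservice image from its Dockerfile
  workingDirectory: src/Dictionary.Api        # hypothetical path to the service folder
- script: docker push nexus3.company.ru:18085/etr-it-dictionary-api:$(Build.BuildId)
  displayName: Push the image to the Nexus Registry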
 
 
After the image is ready and available from outside, it needs to be deployed. For this we have a Release Definition. Everything is simple: we execute the kubectl set image command:
 
kubectl set image deployment/imtob-etr-it-dictionary-api imtob-etr-it-dictionary-api=nexus3.company.ru:18085/etr-it-dictionary-api:$(Build.BuildId)
 
 
After that, Kubernetes updates the image of the target microservice and launches a new container. As a result, our service is updated.
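Kubernetes performs such an update as a rolling update, so the old container keeps serving traffic until the new one is up. If needed, this behaviour can be tuned in the Deployment; a small illustrative fragment (not taken from our manifests):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # do not take old pods down before a new one is ready
      maxSurge: 1         # allow one extra pod during the update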
 
 
Now let's compare the build with a Dockerfile and without one.
 
 
 
 
Without a Dockerfile, we get a lot of steps full of .NET specifics. On the right, we see the Docker image build. Everything became much simpler.
 
 
The entire image build process is described in the Dockerfile, and this build can be debugged locally.
 
 
 
 

The result: a simple and transparent CI/CD


 
 
1. Separation of development and deployment. The build is described in the Dockerfile, and that is the developer's responsibility.

2. When configuring CI/CD, you do not need to know the details and specifics of the build: you work only with the Dockerfile.

3. Only the modified microservices are updated.
 
 
Next, we needed to set up RabbitMQ in K8S: we wrote a separate article about that.
 
 

Setting up the environment


 
One way or another, we need to configure the microservices. The main part of the configuration lives in the root appsettings.json file. This file stores the settings that do not depend on the environment.
 
 
The settings that do depend on the environment are stored in the secrets folder, in the appsettings.secrets.json file. We took the approach described in the article Managing ASP.NET Core App Settings on Kubernetes.
 
 
var configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    .AddJsonFile("secrets/appsettings.secrets.json", optional: true)
    .Build();

 
 
The appsettings.secrets.json file stores the Elasticsearch index settings and the database connection string.
 
{
  "Serilog": {
    "WriteTo": [
      {
        "Name": "Elasticsearch",
        "Args": {
          "nodeUris": "http://192.168.150.114:9200",
          "indexFormat": "dev.etr.it.ifield.api.dictionary-{0:yyyy.MM.dd}",
          "templateName": "dev.etr.it.ifield.api.dictionary",
          "typeName": "dev.etr.it.ifield.api.dictionary.event"
        }
      }
    ]
  },
  "ConnectionStrings": {
    "DictionaryDbContext": "Server=???.162;Database=DEV.ETR.IT.iField.Dictionary;User Id=it_user;Password=PASSWORD;"
  }
}

 
 

Add the configuration file to Kubernetes


 
To add this file, you need to mount it into the Docker container. This is done in the Kubernetes deployment file: the deployment describes in which folder the file from the secret should be created and which secret the file should be associated with.
 
 
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: imtob-etr-it-dictionary-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: imtob-etr-it-dictionary-api
    spec:
      containers:
      - name: imtob-etr-it-dictionary-api
        image: nexus3.company.ru:18085/etr-it-dictionary-api:18289
        resources:
          requests:
            memory: "256Mi"
          limits:
            memory: "512Mi"
        volumeMounts:
        - name: secrets
          mountPath: /app/secrets
          readOnly: true
      volumes:
      - name: secrets
        secret:
          secretName: secret-appsettings-dictionary

 
 
You can create a secret in Kubernetes using the kubectl utility. Here we see the name of the secret and the path to the file. We also specify the namespace of the environment for which we create the secret.
 
 
kubectl create secret generic secret-appsettings-dictionary --from-file=./Dictionary/appsettings.secrets.json --namespace=DEMO
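For reference, the object that this command creates can also be written declaratively. A minimal sketch of what it looks like (the data value is a placeholder for the base64-encoded file contents):

apiVersion: v1
kind: Secret
metadata:
  name: secret-appsettings-dictionary
  namespace: DEMO
type: Opaque
data:
  appsettings.secrets.json: <base64-encoded contents of the file>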

 
 

Conclusions


 

The disadvantages of the chosen approach are


 
1. A high barrier to entry. If you do a project like this for the first time, there will be a lot of new information.

2. Microservices → a more complex design. You have to apply many non-obvious solutions, because you are building a microservice solution rather than a monolith.

3. Not everything runs in Docker. Not everything can be run in a microservice architecture. For example, SSRS is not available in Docker yet.
 
 

Pros of the approach, verified by our own experience


 
1. Infrastructure as code

The description of the infrastructure is stored in source control. At deployment time you do not need to adapt the environment by hand.

2. Scaling both at the functional level and at the performance level comes out of the box.

3. Microservices are well isolated

There are virtually no critical parts whose failure makes the whole system inoperable.

4. Fast delivery of changes

Only the microservices that have been modified are updated. If you do not count the time for approvals and other things related to the human factor, updating one microservice takes 2 minutes or less.
 
 

The conclusions for us are


 
1. It is possible, and worth it, to implement production solutions on .NET Core.

2. K8S really made life easier: it simplified updating environments and makes configuring services easier.

3. TFS can be used to implement CI/CD for Linux.
