Docker Remote API with certificate authentication and revocation checking

Problem Description
 
Docker can expose a web API for remote control.
 
This API can either not require authentication at all (which is highly discouraged), or use certificate authentication.
 
The problem is that the native certificate authentication does not check certificates for revocation, and this can have serious consequences.
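To make the missing piece concrete: in .NET, a revocation check can be performed with X509Chain. This is my own illustrative sketch, not Docker's code; setting X509RevocationMode.Online makes the chain build consult CRLs/OCSP:

```csharp
using System;
using System.Security.Cryptography.X509Certificates;

static class CertificateChecker
{
    // Sketch: returns true only if the chain builds successfully
    // with online revocation checking enabled for the whole chain.
    public static bool IsCertificateValid(X509Certificate2 certificate)
    {
        using (var chain = new X509Chain())
        {
            chain.ChainPolicy.RevocationMode = X509RevocationMode.Online;
            chain.ChainPolicy.RevocationFlag = X509RevocationFlag.EntireChain;

            if (!chain.Build(certificate))
            {
                foreach (var status in chain.ChainStatus)
                {
                    // E.g. Revoked, RevocationStatusUnknown, UntrustedRoot...
                    Console.WriteLine($"Chain error: {status.Status} {status.StatusInformation}");
                }
                return false;
            }
            return true;
        }
    }
}
```

A revoked client certificate fails the Build call with a Revoked chain status, which is exactly the check the stock TLS setup never performs.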
 
In this article I will describe how I solved this problem.
 
Solving the problem
 
First, a disclaimer: I will be talking about Docker for Windows. Perhaps things are better on Linux, but that is not the topic here.
 
What do we have? A Docker daemon with the following config:
 
```json
{
  "hosts": ["tcp://0.0.0.0:2376", "npipe://"],
  "tlsverify": true,
  "tlscacert": "C:\\ssl\\ca.cer",
  "tlscert": "C:\\ssl\\server.cer",
  "tlskey": "C:\\ssl\\server.key"
}
```
 
Clients can connect with their certificates, but these certificates are not checked for revocation.
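For reference, this is roughly how a client connects with its certificate (the host name and file names here are placeholders):

```shell
docker --tlsverify \
  --tlscacert=ca.cer --tlscert=client.cer --tlskey=client.key \
  -H tcp://myserver:2376 version
```

With the stock setup, this command succeeds even if client.cer has already been revoked by the CA.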
 
The idea is to write our own proxy service that acts as an intermediary. The service will be installed on the same server as Docker, will take over port 2376 for itself, and will talk to Docker via //./pipe/docker_engine.
 
Without much ado, I created an ASP.NET Core project and implemented the simplest possible proxying:
 
The simplest proxy code:

```csharp
app.Run(async (context) =>
{
    var certificate = context.Connection.ClientCertificate;
    if (certificate != null)
    {
        logger.LogInformation($"Certificate subject: {certificate.Subject}, serial: {certificate.SerialNumber}");
    }

    // ManagedHandler and DockerPipeStream let HttpClient talk over the named pipe
    var handler = new ManagedHandler(async (host, port, cancellationToken) =>
    {
        var stream = new NamedPipeClientStream(".", "docker_engine", PipeDirection.InOut, PipeOptions.Asynchronous);
        var dockerStream = new DockerPipeStream(stream);
        await stream.ConnectAsync(namedPipeConnectTimeout.Milliseconds, cancellationToken);
        return dockerStream;
    });

    using (var client = new HttpClient(handler, true))
    {
        var method = new HttpMethod(context.Request.Method);
        var builder = new UriBuilder("http://dockerengine")
        {
            Path = context.Request.Path,
            Query = context.Request.QueryString.ToUriComponent()
        };
        using (var request = new HttpRequestMessage(method, builder.Uri))
        {
            request.Version = new Version(1, 1);
            request.Headers.Add("User-Agent", "proxy");
            if (method != HttpMethod.Get)
            {
                request.Content = new StreamContent(context.Request.Body);
                request.Content.Headers.ContentType = new MediaTypeHeaderValue(context.Request.ContentType);
            }
            using (var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead, context.RequestAborted))
            {
                context.Response.ContentType = response.Content.Headers.ContentType.ToString();
                var output = await response.Content.ReadAsStreamAsync();
                await output.CopyToAsync(context.Response.Body, 4096, context.RequestAborted);
            }
        }
    }
});
```
 
That was enough for simple GET and POST requests to the Docker API. But it is not sufficient, because for more complex operations (those requiring user input) Docker uses something similar to WebSocket. The ambush was that Kestrel flatly refused to accept the requests coming from the Docker client, arguing that a request with a Connection: Upgrade header cannot have a body. And it was right.
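After the connection is upgraded, a proxy essentially just pumps bytes in both directions between the TLS connection and the named pipe. A minimal sketch of that part (the method name and buffer size are my own choices):

```csharp
using System.IO;
using System.Threading;
using System.Threading.Tasks;

static class StreamPump
{
    // Copies data in both directions until either side stops sending.
    public static async Task PumpAsync(Stream client, Stream docker, CancellationToken token)
    {
        var clientToDocker = client.CopyToAsync(docker, 4096, token);
        var dockerToClient = docker.CopyToAsync(client, 4096, token);

        // When either direction completes, the session is effectively over.
        await Task.WhenAny(clientToDocker, dockerToClient);
    }
}
```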
 
I had to abandon Kestrel and write a bit more code: in effect, my own web server. It opens the port itself, establishes the TLS connection, parses the HTTP headers, opens an internal connection to Docker, and wires the I/O streams together. And it worked.
 
The sources can be viewed here.
 
So, the application is written and now it has to be run somehow. The idea is to build a container with our application, forward npipe:// inside it, and publish port 2376.
 
Build Docker image
 
To build the image, we need the public certificate of the certification authority (ca.cer) that will issue user certificates.
 
This certificate will be installed into the trusted root certification authorities of the container in which our proxy runs. It is required for the certificate verification procedure.
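One way to do this in the Dockerfile is a PowerShell import (a sketch; it assumes a Windows base image where PowerShell is available, and the paths are my own):

```shell
# Fragment of a Dockerfile for a Windows container:
# COPY ca.cer C:/ca.cer
# RUN powershell -Command "Import-Certificate -FilePath C:\ca.cer -CertStoreLocation Cert:\LocalMachine\Root"
```

Once the CA certificate is in the LocalMachine\Root store, chain building inside the container can validate client certificates issued by that CA.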
 
I did not bother writing a Dockerfile that would also build the application. Therefore, the application has to be built separately. From the folder with the Dockerfile, run:
 
```shell
dotnet publish -c Release -o publish .\DockerTLS\DockerTLS.csproj
```
 
Now we should have: Dockerfile, publish, ca.cer. We build and push the image:
 
```shell
docker build -t vitaliyorg.azurecr.io/docker/proxy:1809 .
docker push vitaliyorg.azurecr.io/docker/proxy:1809
```
 

Of course, the image name can be anything you like.


 
Run
 

To run the container, we need the server certificate certificate.pfx and a password file password.txt. The entire contents of the file is treated as the password, so there must be no extra line feeds.
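For example, the certificate and the password file can be prepared like this (a sketch; the file names and the password are placeholders, and -NoNewline prevents a trailing line feed from ending up in the file):

```shell
openssl pkcs12 -export -out certificate.pfx -inkey server.key -in server.cer -password pass:mysecret
powershell -Command "Set-Content -Path password.txt -Value 'mysecret' -NoNewline"
```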
 
Let all of this be in the folder c:\data on the server where Docker is installed.


 

On the same server, run:


 
```shell
docker run --name docker-proxy -d -v "c:/data:c:/data" -v \\.\pipe\docker_engine:\\.\pipe\docker_engine --restart always -p 2376:2376 vitaliyorg.azurecr.io/docker/proxy:1809
```
 
Logging
 

Using docker logs, you can see who did what. You can also see connection attempts that failed.
