Check server security protocol with openssl

I have a framework application that connects to different servers depending on how it is used. OpenSSL is used for HTTPS connections. My problem is that I need to know whether the server I'm connecting to is using SSL or TLS, so I can create the correct SSL context. Currently, if I use the wrong context, the connection attempt times out.

For TLS, I use:

SSL_CTX *sslContext = SSL_CTX_new(TLSv1_client_method());

For SSL I am using:

SSL_CTX *sslContext = SSL_CTX_new(SSLv23_client_method());

So, is there a way to find out what protocol the server is running on before making a connection?

Edit: as I understand it, it should work either way, since SSLv23_client_method()

also supports the TLS protocols. So the question is: why doesn't it? What could cause the timeout with one client method but not the other?



2 answers


For SSL I use:

SSL_CTX *sslContext = SSL_CTX_new(SSLv23_client_method());

TLS is just the current name for the former SSL protocol, i.e. TLS 1.0 is effectively SSL 3.1, etc. SSLv23_client_method

is actually the most compatible way to establish SSL/TLS connections and will use the best protocol available. This means it will also create TLS 1.2 connections if the server supports them. See also the SSL_CTX_new documentation:

SSLv23_method(void), SSLv23_server_method(void), SSLv23_client_method(void)

A TLS/SSL connection established with these methods may understand the SSLv2, SSLv3, TLSv1, TLSv1.1 and TLSv1.2 protocols.

... a client will send out TLSv1 client hello messages including extensions and will indicate that it also understands TLSv1.1 and TLSv1.2 and permits a fallback to SSLv3. A server will support SSLv3, TLSv1, TLSv1.1 and TLSv1.2 protocols. This is the best choice when compatibility is a concern.



Any protocols you don't want (like SSL 3.0) you can disable with SSL_OP_NO_SSLv3

etc., using SSL_CTX_set_options.

Currently, if I use the wrong context, the connection attempt times out.

Then either the server is broken, or your code is. If a server receives a connection with a protocol it does not understand, it should return an "unknown protocol" alert. Other servers simply close the connection. Timeouts usually only happen with a broken server or a middlebox in between, like an old F5 Big-IP load balancer.



So, is there a way to find out what protocol the server is running on before making a connection?

No. But you should be using "TLS 1.0 and above" anyway.

As Steffen pointed out, you use SSLv23_method

plus context options to implement "TLS 1.0 and above". Here's the complete code. You can use it on a client or a server:

/* SSLv23_method negotiates the highest protocol both sides support */
const SSL_METHOD* method = SSLv23_method();
if(method == NULL) handleFailure();

SSL_CTX* ctx = SSL_CTX_new(method);
if(ctx == NULL) handleFailure();

/* Disable the legacy SSL protocols, leaving TLS 1.0 and above */
const long flags = SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | SSL_OP_NO_COMPRESSION;
SSL_CTX_set_options(ctx, flags);

Now, this carries an implicit assumption that is not obvious; and that assumption is wrong. The assumption is that there is a "TLS min" and a "TLS max" version.

What actually exists is a basic SSL/TLS record layer that carries the protocol payloads. The TLS record layer is independent of the protocol layer, and it has its own version. People interpret the TLS record layer version as the "TLS min" version, and the protocol version as the "TLS max" version. Most servers, sites and client services use it that way.

However, the IETF does not specify it that way, and browsers don't use it that way. Because of that, we recently got the TLS Fallback Signaling Cipher Suite Value (SCSV).

The browsers are correct. This is how it should be done:



  • try TLS 1.2, use Fallback Signaling to detect downgrade attacks.
  • if TLS 1.2 fails, then try TLS 1.1, use Fallback Signaling to detect downgrade attacks.
  • if TLS 1.1 fails try TLS 1.0, use Fallback Signaling to detect downgrade attacks.

Many give up after TLS 1.0 fails. A few user agents may continue on to SSLv3.


Why hasn't the IETF moved to give us "TLS min" and "TLS max"? That remains a mystery. The effective argument, I believe, is "suppose a client wants to use TLS 1.0, 1.2 and 1.3, but not 1.1". I don't know of anyone who rejects a protocol version like that, so to me it's a straw man. (This is one of those times where I wonder whether law enforcement or national interests like the NSA are influencing the standards.)

This issue was recently raised in the TLS Working Group. From TLS: prohibit <1.2 support on 1.3+ servers (but allow clients) (May 21, 2015):

Now might be a good time to add (3) for TLS 1.3: have the client specify both the minimum and the maximum TLS versions it wants to use. And MAC them, or derive a MAC from them, so they cannot be tampered with or downgraded.

You can still provide a TLS record layer version, and you can even leave it without a MAC so it can be tampered with to cause a failure or a crash :)

In effect, both versions go into the record layer and the client protocol. It stops the silly dances browsers and other user agents have to perform, without the need for the TLS Fallback SCSV.

If part of the IETF's mission is to document existing practice, then the IETF is not fulfilling its mission.
