
ISAPI Filter to reject HTTP/1.0 requests


There is a known problem on IIS where the IP address is leaked in the Content-Location header of the HTTP response. There is a fix for this and it is documented here:

http://support.microsoft.com/kb/834141

The above KB also mentions that the issue might still occur even after using the above fix. It is discussed in detail in the “Mitigating Factors” section. Below is a snippet from the above article:

Mitigating Factors

After you set the UseHostName or SetHostName properties in IIS 6.0, it is still possible to see the server’s IP address in an HTTP response. By default, this does not occur. It results from how the response is generated and sent. For example, if you configure an HTTP redirect that results in an HTTP 302 response being sent, and your redirect code uses the server’s IP address, the IP address may appear in the Content-Location or Location header of the response. To work around this issue, do not use the server’s IP address in the redirect logic. Instead, use its host name or fully qualified machine name.

A similar type of behavior can occur if you configure custom error pages to perform a REDIRECT operation and you use IIS Manager to set the redirect target as a URL instead of a file. In this scenario, specify the file instead of the URL to keep the IP address hidden.
The server's IP address can also be sent in an HTTP response if the following conditions are true:

  • The corresponding HTTP request did not include an HTTP:Host header value.
  • An ISAPI filter that makes a call to GetServerVariables(servername) during the SF_NOTIFY_PREPROC_HEADERS event is configured in IIS.

This is because PREPROC_HEADERS is called before IIS has read the configuration data; in this case, either UseHostName or SetHostName. Therefore, there is no other option but to return the IP address. If the request contains a Host value and the GetServerVariables(servername) call is made in PREPROC_HEADERS, SERVER_NAME will contain the value of the client's Host header. HTTP/1.1 Web browsers must include a Host header in their requests. Therefore, this scenario is much more likely to occur when the HTTP request is generated and sent by something other than a Web browser or when a Web browser uses HTTP/1.0.

 

So in the real world, admins run a PCI scan on the server, and it reports that the server is vulnerable because it is leaking its IP address and suggests following the above support article.

However, even after applying the above fix, the issue is still seen because the scanners use a custom client which issues HTTP requests over the HTTP/1.0 protocol. The problem here is with the protocol and not with the product.

In the Mitigating Factors section discussed above, it states 2 reasons why the issue is seen even after applying the fix. The first reason comes into the picture for the HTTP/1.0 protocol. This protocol assumed the client would send the hostname as part of the HTTP request, but never enforced it. That is the problem: if the incoming request doesn’t contain a hostname, the server sends back a response containing its IP address in the Content-Location header. This issue shouldn’t be seen with HTTP/1.1, as it mandates that the client send the hostname (the Host header) as part of the HTTP request.
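To illustrate the difference, here is roughly what the two request forms look like (the host and path are hypothetical):

GET /default.aspx HTTP/1.0

GET /default.aspx HTTP/1.1
Host: www.contoso.com

With the first request there is no Host header for IIS to echo back, so it falls back to the server's IP address when building headers such as Content-Location.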

Now, the question is: what is the fix?

ANSWER: Reject all HTTP/1.0 requests. This is the ideal solution, as the HTTP/1.0 protocol is obsolete and none of the current-day browsers use this version.

The next question is: how do we implement this on IIS?

ANSWER: Firstly, this issue is seen on all versions of IIS, as every version of the server supports the protocol, so you will have to block it yourself. Going by the IIS versions:

For IIS 7.0 and higher:

Please refer to the following blog on how to fix this issue on IIS 7.0 & higher:

http://www.asprangers.com/post/2012/02/09/IIS-7-IP-Address-revealed-on-redirection-requests-on-HTTP10-protocol.aspx

In the above article, they have used the URL Rewrite module to reject HTTP/1.0 requests.
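For reference, a minimal sketch of such a rewrite rule is shown below. This assumes the URL Rewrite module is installed; the rule name and the response code/text are illustrative and not taken from the linked article:

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Reject any request that arrives over HTTP/1.0 -->
        <rule name="BlockHttp10" stopProcessing="true">
          <match url=".*" />
          <conditions>
            <add input="{SERVER_PROTOCOL}" pattern="^HTTP/1\.0$" />
          </conditions>
          <action type="CustomResponse" statusCode="403" statusReason="Forbidden" statusDescription="HTTP/1.0 is not supported" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

The rule matches every URL, checks the SERVER_PROTOCOL server variable, and returns a custom response only when the request was made over HTTP/1.0.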

For IIS 6:

There is no simple solution available on IIS 6. One way is to write an ISAPI filter which rejects all incoming HTTP/1.0 requests. Below is sample code to do the same.

  • Create an Empty C++ Project in Visual Studio 2005.
  • Add a .cpp file to the project and name it HttpVersionBlocker.cpp (or a name of your choice).
  • As we know, we need to define 2 important functions: GetFilterVersion and HttpFilterProc.
  • Below is the code snippet. Copy this into the above .cpp file:
#include <afxcoll.h>
#include <stdio.h>
#include <afxisapi.h>
#include <afx.h>
#include <stdlib.h>

//-------------------------------------------------------
// This function is the entry point to the ISAPI Filter
//-------------------------------------------------------

BOOL WINAPI GetFilterVersion(HTTP_FILTER_VERSION *pVer)
{
    // Register only for the PREPROC_HEADERS notification
    pVer->dwFlags = SF_NOTIFY_PREPROC_HEADERS;
    pVer->dwFilterVersion = HTTP_FILTER_REVISION;
    strcpy_s(pVer->lpszFilterDesc, "HTTP/1.0 Blocker");
    return TRUE;
}

//-------------------------------------------------------
// This function will be invoked on every request
//-------------------------------------------------------

DWORD WINAPI HttpFilterProc(HTTP_FILTER_CONTEXT *pfc, DWORD NotificationType, VOID *pvData)
{
    char buffer[256];
    DWORD buffSize = sizeof(buffer);
    HTTP_FILTER_PREPROC_HEADERS *p;

    switch (NotificationType)
    {
    case SF_NOTIFY_PREPROC_HEADERS:
        {
            p = (HTTP_FILTER_PREPROC_HEADERS *)pvData;

            // "version" is a special header name that returns the protocol version of the request
            BOOL bHeader = p->GetHeader(pfc, "version", buffer, &buffSize);
            CString Version(buffer);

            if (bHeader && Version.Find("HTTP/1.0") != -1)
            {
                // If the request came in over HTTP/1.0, rewrite the URL so the request is effectively rejected
                p->SetHeader(pfc, "url", "/Rejected:HTTP/1.0_is_not_supported");
            }
            return SF_STATUS_REQ_HANDLED_NOTIFICATION;
        }
    }
    return SF_STATUS_REQ_NEXT_NOTIFICATION;
}
  • Go to Project properties and change the configuration type to Dynamic Library (.dll).
  • Add a new file and call it HttpVersionBlocker.def. Copy-paste the below section into it:

LIBRARY "HttpVersionBlocker"

EXPORTS

HttpFilterProc

GetFilterVersion

  • Build the project.
  • This will generate the HttpVersionBlocker.dll
  • Configure this as an ISAPI filter in IIS. You can refer to the link below to do this: Installing ISAPI Filters (IIS 6.0)

Alternatively, if one is not comfortable with coding, there is another option. There is a 3rd-party product that I’ve used in the past to achieve the same result. It is called WebKnight. Here is the link: http://www.aqtronix.com/?PageID=136.

This product is a kind of application firewall and can be used to block incoming requests on IIS.

The product can be downloaded for free, while they charge for support.

You can try it at your own risk.

I don’t have much understanding of the product and neither do we support it so I’ll best leave it here.

NOTE: Neither of the solutions for IIS 6 is supported by Microsoft, as this is a problem with the protocol and not IIS.


Do we need to install/move IIS related folders to a non-System drive?


It is not possible to install IIS on a non-system drive. Well, “not possible” may be too restrictive; I would say it is not recommended or supported to do so.

At CSS we see a lot of issues relating to the above topic: one needs to relocate (or even install) the IIS-related folders to a drive other than the system drive.

They say that it is a security vulnerability. This is the confusing part. What is this vulnerability?

  • The important point is how the web application is configured and not where IIS is installed. None of the applications should ever have access to the IIS-related folders.
  • Consider a scenario where you configure your application to run under the context of an administrator or Local System. If the application is compromised, then the entire server is compromised.
  • Irrespective of where the application is installed, if it is not configured properly, then it makes no difference where or how you install the web app.

The recommendation is to configure your application on a non-system drive, so that in case of a compromise, it doesn’t have access to the system drive.

NOTE: W3WP.exe cannot access the IIS Installation folders or Data directories. You can restrict access to folders on the server via NTFS permissions.
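As an illustration of locking content down with NTFS permissions (the folder path and application pool name below are hypothetical), you could grant the application pool identity read-only access to its own content folder and nothing else:

icacls "D:\WebContent\MyApp" /grant "IIS AppPool\MyAppPool:(OI)(CI)RX"

Here (OI)(CI) makes the grant inherit to files and subfolders, and RX limits the worker process to read and execute.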

It is neither supported nor recommended to delete or re-locate the original IIS directories. A support article has been issued to address this situation.

Here is the link: http://support.microsoft.com/kb/2752331 

This contains the script that can be used to relocate the IIS data directories to a non-system drive keeping the original directories intact.

NOTE: Do not delete the original directories under “%systemdrive%\inetpub”. Don’t even think of touching the INETSRV folder. The script in the above support article re-configures the folders to another non-system drive. During a Windows Update, the original directories will be updated, not the re-configured ones. So now you know why they should not be deleted.

SSL/TLS Alert Protocol & the Alert Codes


There have been many occasions where an event corresponding to SChannel is logged in the System event logs, indicating a problem with the SSL/TLS handshake, and many a time it contains just a number. The logging mechanism is a part of the SSL/TLS Alert Protocol. These alerts are used to notify peers of normal and error conditions. The numbers, especially, play a vital role in understanding the problem/failure with the SSL/TLS handshake.

SChannel logging may have to be enabled on the Windows machines to get detailed SChannel messages. Please refer to the following article to do so: http://support.microsoft.com/kb/260729
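For reference, the article boils down to setting a single registry value. A typical way to enable the most verbose logging (value 7 logs errors, warnings and informational events; a reboot is generally needed for the change to take effect) is:

reg add HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL /v EventLogging /t REG_DWORD /d 7 /f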

Below is an example of one such event:

Log Name:      System
Source:        Schannel
Date:          x/xx/xxxx x:xx:xx
Event ID:      36887
Task Category: None
Level:         Error
Keywords:     
User:          SYSTEM
Computer:      xxxxxxx
Description:
The following fatal alert was received: 47.

These warnings are sometimes very helpful in troubleshooting SSL-related issues and provide important clues. However, there is not much documentation available describing the alert codes.

These alert codes have been defined precisely in the TLS/SSL RFCs for all the existing protocol versions. For example, let’s consider RFC 5246 (TLS 1.2). This RFC corresponds to the latest protocol version and it defines the alert messages.

Follow this link: http://tools.ietf.org/html/rfc5246#appendix-A.3

Below is a snippet from the above RFC describing the various alert messages:

A.3.  Alert Messages

   enum { warning(1), fatal(2), (255) } AlertLevel;
   enum {
       close_notify(0),
       unexpected_message(10),
       bad_record_mac(20),
       decryption_failed_RESERVED(21),
       record_overflow(22),
       decompression_failure(30),
       handshake_failure(40),
       no_certificate_RESERVED(41),
       bad_certificate(42),
       unsupported_certificate(43),
       certificate_revoked(44),
       certificate_expired(45),
       certificate_unknown(46),
       illegal_parameter(47),
       unknown_ca(48),
       access_denied(49),
       decode_error(50),
       decrypt_error(51),
       export_restriction_RESERVED(60),
       protocol_version(70),
       insufficient_security(71),
       internal_error(80),
       user_canceled(90),
       no_renegotiation(100),
       unsupported_extension(110),           /* new */
       (255)
   } AlertDescription;
   struct {
       AlertLevel level;
       AlertDescription description;
   } Alert;

There is a TechNet article which describes these messages in a more descriptive way. Here is the link: http://technet.microsoft.com/en-us/library/cc783349%28v=ws.10%29.aspx.

However, that article never mentions the alert codes while explaining the messages. For simplicity, I have created a table combining both the TechNet documentation and the RFC. Below is the table:

Alert Code (Alert Message): Description

0 (close_notify): Notifies the recipient that the sender will not send any more messages on this connection.
10 (unexpected_message): An inappropriate message was received. This alert should never be observed in communication between proper implementations. This message is always fatal.
20 (bad_record_mac): Received a record with an incorrect MAC. This message is always fatal.
21 (decryption_failed): A TLSCiphertext record was decrypted in an invalid way: either it was not an even multiple of the block length or its padding values, when checked, were not correct. This message is always fatal.
22 (record_overflow): Received a TLSCiphertext record which had a length of more than 2^14+2048 bytes, or a record that decrypted to a TLSCompressed record with more than 2^14+1024 bytes. This message is always fatal.
30 (decompression_failure): Received improper input, such as data that would expand to excessive length, from the decompression function. This message is always fatal.
40 (handshake_failure): Indicates that the sender was unable to negotiate an acceptable set of security parameters given the options available. This is a fatal error.
42 (bad_certificate): There is a problem with the certificate; for example, the certificate is corrupt, or the certificate contains signatures that cannot be verified.
43 (unsupported_certificate): Received an unsupported certificate type.
44 (certificate_revoked): Received a certificate that was revoked by its signer.
45 (certificate_expired): Received a certificate that has expired or is not currently valid.
46 (certificate_unknown): An unspecified issue took place while processing the certificate that made it unacceptable.
47 (illegal_parameter): A field in the handshake was out of range or inconsistent with other fields. This message is always fatal.
48 (unknown_ca): Received a valid certificate chain or partial chain, but the certificate was not accepted because the CA certificate could not be located or could not be matched with a known, trusted CA. This message is always fatal.
49 (access_denied): Received a valid certificate, but when access control was applied, the sender decided not to proceed with negotiation. This message is always fatal.
50 (decode_error): A message could not be decoded because some field was out of the specified range or the length of the message was incorrect. This message is always fatal.
51 (decrypt_error): A handshake cryptographic operation failed, including being unable to correctly verify a signature, decrypt a key exchange, or validate a finished message.
60 (export_restriction): Detected a negotiation that was not in compliance with export restrictions; for example, attempting to transfer a 1024-bit ephemeral RSA key for the RSA_EXPORT handshake method. This message is always fatal.
70 (protocol_version): The protocol version the client attempted to negotiate is recognized, but not supported. For example, old protocol versions might be avoided for security reasons. This message is always fatal.
71 (insufficient_security): The negotiation failed specifically because the server requires ciphers more secure than those supported by the client. Returned instead of handshake_failure. This message is always fatal.
80 (internal_error): An internal error unrelated to the peer or the correctness of the protocol (such as a memory allocation failure) makes it impossible to continue. This message is always fatal.
90 (user_canceled): The handshake was cancelled for a reason that is unrelated to a protocol failure. If the user cancels an operation after the handshake is complete, just closing the connection by sending a close_notify is more appropriate. This alert should be followed by a close_notify. This message is generally a warning.
100 (no_renegotiation): Sent by the client in response to a hello request, or sent by the server in response to a client hello after initial handshaking. Either of these would normally lead to renegotiation; when that is not appropriate, the recipient should respond with this alert; at that point, the original requester can decide whether to proceed with the connection. One case where this would be appropriate is when a server has spawned a process to satisfy a request; the process might receive security parameters (key length, authentication, and so on) at start-up, and it might be difficult to communicate changes to these parameters after that point. This message is always a warning.
110 (unsupported_extension): Sent by clients that receive an extended server hello containing an extension that they did not put in the corresponding client hello. This message is always fatal.

There were a few articles that I found while searching that contain additional alert codes. However, I don’t find these to be part of this RFC. Here is one: http://botan.randombit.net/doxygen/classBotan_1_1TLS_1_1Alert.html

It includes additional alerts like 111, 112, 113, 114 and 115. You can browse the above link for further reading.

Hope someone finds the above table useful. It may not solve an issue by itself, but it should provide useful pointers.

Until then, Ciao!

Error HRESULT: 0x80070520 when adding SSL binding in IIS


Today I will be discussing the very infamous error that is seen while adding an SSL binding in IIS 7 & higher. Below is a snapshot of the error message seen while trying to add the SSL binding in IIS.

image

Well, the error is definitely not descriptive enough, nor does it provide any vital information to troubleshoot the issue. However, if you look at the event logs, you will find the clue and the reason why the error is seen.

Log Name:      System
Source:        Schannel
Date:          07-10-2012 02:13:15
Event ID:      36870
Task Category: None
Level:         Error
Keywords:     
User:          SYSTEM
Computer:      xxxxxxxxx
Description:
A fatal error occurred when attempting to access the SSL server credential private key. The error code returned from the cryptographic module is 0x8009030d. The internal error state is 10001.

Event message logged in the system event logs on failure.

The event logs should give you some clue regarding the problem. The primary reason for the above error is a problem accessing the “Private Key” of the certificate due to a broken keyset.

For those who may not be following, public key cryptography deals with a “Public Key” & a “Private Key”. The public key is distributed to the clients, while only the server has access to the private key, as it is used for decrypting the SSL request. So the “Private Key” is of utmost importance here.

There are a few scenarios where we could see a problem accessing the “Private Key” of the SSL cert. I will discuss a few in this article:


SCENARIO 1

The most common scenario is when the users use the IIS MMC to import a certificate and they uncheck the option “Allow this certificate to be exported”. This results in a broken keyset and thus results in the problem.

image

Solution:

There are 2 ways to fix this problem. Before we start off, delete/remove the existing certificate from the store.

  1. If using IIS MMC to import the certificate, then ensure that the “Allow this certificate to be exported” is checked.
  2. If making the private key exportable is not an option, then use the Certificates MMC to import the certificate. Please go through the following KB on how to import a certificate using the MMC: http://support.microsoft.com/kb/232137

SCENARIO 2

Another reason which can result in a broken keyset is due to missing permissions on the MachineKeys folder. This is the location where all the private keys are stored. The folder path (IIS 7 & higher) is as shown below: C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys

The default permissions on this folder are described in the following articles:

http://support.microsoft.com/kb/278381

http://msdn.microsoft.com/en-us/library/ee248638(v=vs.100).aspx

Solution:

Firstly, delete/remove the broken certificate from the store. Ensure the permissions are as per the articles mentioned above, i.e., give the necessary permissions to the Administrators and Everyone accounts. Do remember to select the option shown in the screenshot below:

image

 

NOTE: There is a possibility that the issue might be seen even after ensuring the right permissions. In this case, use the procmon.exe tool to identify and fix the access-denied error on the specific file inside the MachineKeys folder.
You may also try giving the System account Full Control permissions on the MachineKeys folder.

After giving the necessary permissions, re-import the certificate as described in SCENARIO 1.


SCENARIO 3

There is another possibility: the issue might occur even after ensuring both of the above. I have observed this behavior typically on Windows Server 2008. This depends on the KeySpec property of the certificate.

The KeySpec property specifies whether the private key can be used for encryption, or signing, or both.

The following MSDN article describes KeySpec property:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa379020%28v=vs.85%29.aspx

In order to examine the KeySpec property of the certificate, use the following command:

certutil –v –store my <thumbprint>

NOTE: In the above command the thumbprint information can be found in the details tab of the certificate. The following are valid commands:

certutil -v -store my "32 b5 39 8e d3 c9 c6 f1 a3 50 bc d4 b5 14 eb b5 a4 5d 1f c6"

certutil -v -store my "32b5398ed3c9c6f1a350bcd4b514ebb5a45d1fc6"

certutil -v -store my 32b5398ed3c9c6f1a350bcd4b514ebb5a45d1fc6

Get the output of the above command in a notepad and then search for KeySpec, which is part of the CERT_KEY_PROV_INFO_PROP_ID section. The KeySpec is represented as a hexadecimal value.

certutil -v -store my 32b5398ed3c9c6f1a350bcd4b514ebb5a45d1fc6

...

...

CERT_KEY_PROV_INFO_PROP_ID(2):
  Key Container = {00F81886-5F70-430A-939C-BB7DD58ECE2A}
Unique container name: 99247943bd018ca78ef945b82652598d_3ade29bb-f050-41f3-b0db-f2b69957a1d7
  Provider = Microsoft Strong Cryptographic Provider
  ProviderType = 1
  Flags = 20
  KeySpec = 2 -- AT_SIGNATURE

...

As described above, it can take three values:

KeySpec = 0 (AT_NONE): The intended use is not identified. This value should be used if the provider is a Cryptography API: Next Generation (CNG) key storage provider (KSP).

KeySpec = 1 (AT_KEYEXCHANGE): The key can be used for encryption or key exchange.

KeySpec = 2 (AT_SIGNATURE): The key can be used for signing.

So the issue is seen if the KeySpec value is set to anything other than 1. The issue is more likely to occur when the CSR is generated using a custom template and the KeySpec is not specified.

Whenever the KeySpec attribute is not explicitly specified, it takes the default value of 2 i.e., it can be used for signing purposes only.

Solution:

So one thing that you need to remember is that the KeySpec attribute has to be specified explicitly.

  1. If you are generating a certificate via code, then ensure you are explicitly setting the KeySpec attribute to 1.
  2. If using the certreq.exe utility along with an .inf file to submit a request to the CA, ensure that you explicitly set the KeySpec attribute to 1 (a sample .inf file is shown at the end of this post).
  • Remember the KeySpec attribute is specified while creating the Certificate Signing Request. This cannot be modified once the certificate has been issued. So remember to set the value appropriately.
  • Also compare the KeySpec with the Key Usage attribute and make sure that both match logically.
    For example, for a certificate whose KeySpec equals to AT_KEYEXCHANGE, the Key Usage should be
    XCN_NCRYPT_ALLOW_DECRYPT_FLAG | XCN_NCRYPT_ALLOW_KEY_AGREEMENT_FLAG.
  • XCN_NCRYPT_ALLOW_USAGES_NONE

    The permitted uses are not defined.

    XCN_NCRYPT_ALLOW_DECRYPT_FLAG

    The key can be used to decrypt content. This maps to the following X509KeyUsageFlags values:

    • XCN_CERT_DATA_ENCIPHERMENT_KEY_USAGE
    • XCN_CERT_DECIPHER_ONLY_KEY_USAGE
    • XCN_CERT_ENCIPHER_ONLY_KEY_USAGE
    • XCN_CERT_KEY_ENCIPHERMENT_KEY_USAGE

    XCN_NCRYPT_ALLOW_SIGNING_FLAG

    The key can be used for signing. This maps to the following X509KeyUsageFlags values:

    • XCN_CERT_CRL_SIGN_KEY_USAGE
    • XCN_CERT_DIGITAL_SIGNATURE_KEY_USAGE
    • XCN_CERT_KEY_CERT_SIGN_KEY_USAGE

    XCN_NCRYPT_ALLOW_KEY_AGREEMENT_FLAG

    The key can be used to establish key agreement between entities.

    XCN_NCRYPT_ALLOW_ALL_USAGES

    All of the uses defined for this enumeration are permitted.

      
    More Information:

    For further reading on KeyUsage, refer to the two links below:

    http://msdn.microsoft.com/en-us/library/windows/desktop/aa379021%28v=vs.85%29.aspx
    http://msdn.microsoft.com/en-us/library/windows/desktop/aa379417%28v=vs.85%29.aspx

    Configuring and Troubleshooting Certificate Services Client–Credential Roaming: http://technet.microsoft.com/en-us/library/dd277392.aspx

    How to create a certificate request with CertEnroll (JavaScript): http://blogs.msdn.com/b/alejacma/archive/2009/01/28/how-to-create-a-certificate-request-with-certenroll-javascript.aspx

    Generating a certificate (self-signed) using PowerShell and CertEnroll interfaces: http://blogs.technet.com/b/vishalagarwal/archive/2009/08/22/generating-a-certificate-self-signed-using-powershell-and-certenroll-interfaces.aspx
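    As promised above, here is a sample .inf file for certreq.exe that explicitly sets KeySpec to 1. This is only a sketch; the subject name, key length and provider are placeholder values, so adjust them to your environment:

    [NewRequest]
    Subject = "CN=www.contoso.com"
    KeySpec = 1                  ; 1 = AT_KEYEXCHANGE
    KeyLength = 2048
    Exportable = TRUE
    MachineKeySet = TRUE
    ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
    ProviderType = 12
    RequestType = PKCS10
    KeyUsage = 0xA0              ; Digital Signature, Key Encipherment

    The request is then generated with: certreq -new request.inf request.req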

    Hope this helps.

    Central Certificate Store (CCS) with IIS 8 (Windows Server 2012)


    In my previous posts on IIS 8, I discussed how scalability was  achieved in IIS 8 via SNI.

    Below are the links to previous posts:

                      ·         SSL Scalability with IIS 8

                      ·         SNI with IIS 8

    In the first post I mentioned that scalability was achieved in IIS via Server Name Indication (SNI) and Central Certificate Store (CCS). In my second post linked above I discussed how scalability was achieved via SNI.

    In this article I’ll discuss CCS in detail and also its functionality.


    What is CCS?

    Central Certificate Store, or Centralized SSL Certificate Support, is a feature which allows certificates to be stored in a central location like a file share. This feature is very similar to Shared Configuration, where the certificates are stored on a file share and the servers in the farm load them on demand.

    In CCS the certificates are exported along with the private key (in .pfx format) and stored centrally on a file share. The files are named using a specific naming convention and are loaded on demand for an incoming SSL request. CCS relies on the Server Name Indication information from the Client Hello for this.

    Why do we need CCS when we already have SNI?

    While SNI addressed only the SSL scalability problem with IIS, CCS addresses both SSL scalability and manageability of the certificates.

    Also consider a hosting scenario where there are typically close to 1000 sites. If all of these were SSL enabled, then there would be close to 1000 SSL bindings. These explicit bindings are specific to a site and are loaded in memory during start-up of the IIS services. In the case of CCS there exists only one binding, and the certs are loaded on demand and cached for future use; this way the memory consumption is lower and there is a slight performance gain.

    How does CCS improve manageability of Certificates?

    Prior to IIS 8, IIS always picked up certificates from the certificate store (the Personal store of the computer account), which is local to every machine. In the case of a stand-alone server this is not a problem. However, consider a web-farm scenario with 2 or more servers in the farm. If one has to configure a site to use SSL, the certificate has to be installed on all the servers along with the private key. If the certificate expires, again the same step has to be repeated on all the servers. So there was a lot of manual work involved. If there were more servers in the farm, or if you were to introduce another SSL site, it would be a bigger headache for the server admins.

    In the server farm, we configure all the servers to use the CCS binding which reads from this central certificate store. Now IIS picks the certificate from the file share and not the local certificate store. The server admins have a simplified task: they need to install/renew the certificate in a single location, i.e., the file share.


    Installing CCS

    Unlike SNI, CCS is not pre-installed; it has to be installed separately. It is shipped as a native module and has to be installed via the Server Manager console on Windows Server 2012 and via Programs and Features on Windows 8. Below are the instructions for both.

    Installing CCS on Windows Server 2012:

    ·         Launch Server Manager.

    ·         Under Manage menu, select Add Roles and Features:

            image

    ·         In "Add Roles and Features Wizard" click "Next".

    ·         Select "Role-based or Feature-based Installation" and click on Next.

    ·         Select the appropriate server (local is selected by default) and click on Next.

    ·         Select Web Server (IIS):

            image

    ·         No additional features are needed for IIS, so click "Next".

    ·         Click on Next again.

    ·         By default, Centralized Certificates is not selected. Expand Security and then select "Centralized SSL Certificates Support" and click on Next.

            image

    ·         Click on Install and wait until the installation completes.

    ·         Upon successful installation the wizard would reflect the status:

            image

    Installing CCS on Windows 8:

    ·         Go to run prompt, type "appwiz.cpl", and hit Enter key.

    ·         This would launch the Programs and Features Console.

    ·         Click on "Turn Windows features on or off".

    ·         Select "Internet Information Services" and expand the tree.

            image

    ·         Go to World Wide Web Services -> Security

    ·         Select "Centralized SSL Certificate Support" and click on ok.

             image

    ·         Centralized Certificates is installed successfully.
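    If you prefer the command line, the same component can be added with PowerShell on Windows Server 2012. A sketch is shown below; the feature name here is what I recall Server Manager exposing for Centralized SSL Certificate Support, so confirm it first with Get-WindowsFeature:

    Get-WindowsFeature Web-*
    Install-WindowsFeature Web-CertProvider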


    Configuring Central Certificate Store

    1.                   Launch IIS Manager.

    2.                   Under "Connections" select <MachineName>.

                    image

    3.                   In the middle-pane, under "Management", double-click on "Centralized Certificates"

        image

    4.                   Under "Actions" pane select Edit Feature Settings:

        image

    5.                   Select the check box "Enable Centralized Certificates" and provide the following details:

        image

    Element Name: Description

    Enable Centralized Certificates: Select the Enable Centralized Certificates check box if you want to create a central certificate store for your web farm. Otherwise, clear the check box.

    Physical path: Type the physical path to the directory on the central certificate store server where you want the certificates stored.

    User name: Type the name of the user account to use when accessing the central certificate store.

    Password: Enter the password for the user account.

    Confirm password: Enter the password for the user account again to confirm.

    Private Key Password (Optional): This is optional. If the certificates do not have a password, leave this empty. If the certificates have one global password, enter that password here.

    6.                   Centralized SSL Certificate Support feature is now ready to be used.

    7.                   One manageability feature that is noteworthy is the ability to group the certificates by their expiration dates:

      image

    8.                   The webserver is setup to use Centralized Certificate Store.


    How does CCS work?

    The steps below outline how the SSL handshake works with a CCS binding on an IIS 8 web server:

    1. The client and the server establish a TCP connection via the TCP handshake.
    2. The client sends a Client Hello to the server. This packet contains the specific protocol version and the list of supported cipher suites, along with the hostname (let’s say www.outlook.com, provided it’s an SNI-compliant browser). The TCP/IP headers in the packet contain the IP address and the port number.
    3. The server checks the registry (legacy bindings) to find a certificate hash/thumbprint corresponding to the above combination of IP:Port.
    4. If there is no legacy binding for that IP:Port, then the server uses the port number from the Client Hello to check the registry for a CCS binding for this port. The server checks the key below to find the binding information: HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\SslCcsBindingInfo
    5. If the above step fails, i.e., if the server couldn’t find a corresponding CCS binding for that port, then it falls back to the legacy binding. (If this is also absent, the SSL handshake fails.)
    6. If Step 4 succeeds, the hostname (from the Client Hello) is used to generate a filename like hostname.pfx. The filename is passed as a parameter along with the other details (CCS configuration) to the crypto APIs, which in turn call the file system APIs to retrieve the corresponding certificate from the Central Certificate Store (file share). The retrieved certificate is cached, and the corresponding certificate without the private key is added to the Server Hello and sent to the client.
    7. If it cannot find a matching file, then it falls back to Step 5.

  • File Naming Convention

    Centralized Certificate Store follows a specific naming convention for the certificates. When the client sends a Client Hello, IIS uses the hostname available from SNI to construct a filename (hostname.pfx), and searches the File share to find this file. Once it finds the file it loads it in memory and responds to the client with a Server Hello.

    For IIS to find the exact file match, a naming convention has to be used while storing certificates on the CCS file share. As per naming convention the name of the certificate should be:

    Filename Syntax:                <subject-name-of-cert.pfx>  

    But how does IIS handle wildcard & SAN certificates? What is the naming convention for such certificates? Below is the answer:

    1. Certificate with a single subject name

            If the subject name is "www.contoso.com", then the IIS provider will look for www.contoso.com.pfx.

    2. Wildcard certificate

            The IIS provider uses the underscore character (“_”) as a special character to indicate that it is a wildcard certificate. If the subject name in the SSL certificate is *.contoso.com, then the file name should be "_.contoso.com.pfx".

    NOTE: The IIS provider first tries to find an SSL certificate with a filename that exactly matches the domain name of the destination site. For example, if the destination site is www.contoso.com, the IIS provider first tries to locate www.contoso.com.pfx. If that is unsuccessful, it then tries to locate _.contoso.com.pfx.

    3. SAN certificates

            In this case, the certificate must be duplicated with file names matching the subject names in the certificate. For example, if the certificate is issued for "www.contoso1.com" & "www.contoso2.com", then the file names should be www.contoso1.com.pfx & www.contoso2.com.pfx, respectively.

    So if the SAN certificate is issued for 3 hostnames, then there would be 3 files for those 3 hostnames respectively.

    NOTE: A SAN certificate is like a global set. It can also be a wildcard certificate.
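    For completeness, one way to place a certificate into the share with the expected file name is to export it with certutil (a sketch; the store, thumbprint, password and UNC path below are placeholders):

    certutil -p "PfxPassword" -exportPFX My <certificate-thumbprint> \\CCS-FILESERVER\CentralCerts\www.contoso.com.pfx

    The file name must follow the convention described above (subject name, or “_” for a wildcard), otherwise IIS will not find it.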

     

        


    Configuring a website to use CCS Bindings

    1.                   Open IIS Manager.

    2.                   Under Connections pane, right click "Sites" and select "Add Website…"

    3.                   Fill the details as shown below

    a.              Site name: CentralSSL0

    b.              Physical path: C:\inetpub\wwwroot\CentralSSL0\

    c.               Type: https

    d.              Hostname: CentralSSL0

    ·         In Windows Server 2012, a host name must be specified when using CCS. (New)

    ·         The value depends on the certificate being used.

    e.              Require Server Name Indication: Selected

    f.              Use Centralized Certificate Store: Selected

    NOTE: There is no need to select a specific certificate.

    g.                With the use of SNI and the naming contract, the corresponding certificate is selected automatically. In this example, IIS tries to read CentralSSL0.pfx from the Centralized SSL Certificates file share.

    image

    h.                Click on "OK".

    You have successfully created a website using the Centralized Certificate Store. The management experience is similar to that of Shared Configuration and traditional SSL. There are some differences though:

    ·         The certificates are stored centrally on a file share.

    ·         A host name has to be specified for an SSL site when using CCS.

    ·         SSL bindings are not managed explicitly 1-to-1; certificates are loaded on demand.


    CCS Bindings

    To view the CCS bindings we execute the same netsh command as earlier. Execute the following from an elevated command prompt:

    netsh http show sslcert

    NOTE: The first line in the output reads "Central Certificate Store" and not "IP:Port", as in earlier versions of IIS. The "Certificate Hash" is "null" too.

    The null indicates that the certificates are loaded at runtime.

     

    The above command reads the following registry key and enumerates the values. Below is the location:

    HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\SslCcsBindingInfo

    image


    CCS Configuration

    In my previous post I had mentioned a new attribute called sslFlags. This attribute specifies whether the SSL binding is using SNI or CCS or both.

     

    sslFlags = 0 (CCS: No, SNI: No): Legacy SSL binding; uses neither SNI nor CCS.

    sslFlags = 1 (CCS: No, SNI: Yes): SSL binding using SNI.

    sslFlags = 2 (CCS: Yes, SNI: No): SSL binding uses CCS, but SNI is not enforced.

    sslFlags = 3 (CCS: Yes, SNI: Yes): SSL binding uses CCS, and SNI is enforced.

    If the sslFlags attribute is set to either 2 or 3, then it is using the CCS bindings. If you check the applicationhost.config this is what the binding section would contain:

    <bindings>
        <binding protocol="https" bindingInformation="*:443:centralssl0" sslFlags="2" />
    </bindings>

    NOTE: IIS Manager exposes the above settings via the configuration APIs in the IIS UI. It is not recommended to change the registry values directly.

    However, you won’t find the configuration for the CCS module in applicationhost.config. This information is not stored in any of the config files. It is stored in the registry under the following node:

    HKLM\SOFTWARE\Microsoft\IIS\CentralCertProvider

    image 

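    As an aside, on Windows Server 2012 the WebAdministration PowerShell module also surfaces this provider configuration, so you can inspect it without opening the registry. A sketch (cmdlet name as I recall it; verify with Get-Command *CentralCertProvider*):

    Import-Module WebAdministration
    Get-WebCentralCertProvider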


    More Information:

    ·         Microsoft Virtual Academy: IIS8 Centralized Certificate Store

    ·         IIS 8.0 Centralized SSL Certificate Support: SSL Scalability and Manageability

    ·         Plan SSL Central Certificate Store

     


    I’m not done yet. There are a few things that I still need to address, like: if someone has all 3 types of bindings, then which cert would be served? I’m not going to answer that now.

    I will do it in my next blog post. Until then, Ciao!

    Disable Client Certificate Revocation (CRL) Check on IIS


    I have been asked on several occasions how to disable the client certificate revocation check in IIS 7. It was pretty easy in IIS 6; on IIS 7 there is no documentation on how to do so. This post will describe how to achieve this task.

    Firstly, list out all the existing IIS bindings via command line as shown below:

    netsh http show sslcert

    Default SSL Binding when added via IIS Manager

    IP:port                      : 0.0.0.0:443
    Certificate Hash             : 40db5bb1bf5659a155258d1d007c530fcb8996c2
    Application ID               : {4dc3e181-e14b-4a21-b022-59fc669b0914}
    Certificate Store Name       : My
    Verify Client Certificate Revocation    : Enabled
    Verify Revocation Using Cached Client Certificate Only    : Disabled
    Usage Check                  : Enabled
    Revocation Freshness Time    : 0
    URL Retrieval Timeout        : 0
    Ctl Identifier               : (null)
    Ctl Store Name               : (null)
    DS Mapper Usage              : Disabled
    Negotiate Client Certificate : Disabled

    NOTE:

    1. Client Certificate Revocation is always enabled by default.
    2. Application ID of “{4dc3e181-e14b-4a21-b022-59fc669b0914}” corresponds to IIS.

        
    In order to disable the revocation check, we need to delete the existing binding first. Before you do that, make a note of the above details, especially the certificate hash.

    NETSH command to delete existing SSL binding:

    netsh http delete sslcert ipport=0.0.0.0:443

    Now add the binding again using netsh as shown below:

    NETSH command to add an SSL binding to disable CRL Check:

    netsh http add sslcert ipport=0.0.0.0:443 certhash=40db5bb1bf5659a155258d1d007c530fcb8996c2
    appid={4dc3e181-e14b-4a21-b022-59fc669b0914}
    certstorename=My verifyclientcertrevocation=disable

     

    The verifyclientcertrevocation=disable portion of the above command is what disables the client certificate revocation check. This adds a DWORD value at the following location in the registry:

    REGISTRY  : HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\SslBindingInfo
    DWORD    : DefaultSslCertCheckMode
    Value         : 1

    DefaultSslCertCheckMode can take the following values. Click here for more info.

    VALUE        MEANING

    0            Enables the client certificate revocation check.
    1            The client certificate is not verified for revocation.
    2            Only cached certificate revocation is used.
    4            The DefaultRevocationFreshnessTime setting is enabled.
    0x10000      No usage check is performed.

     

    Review the SSL bindings after executing the above command. The CRL check would be disabled.

    netsh http show sslcert

    SSL Binding added via NETSH to disable CRL:

    IP:port                      : 0.0.0.0:443
    Certificate Hash             : 40db5bb1bf5659a155258d1d007c530fcb8996c2
    Application ID               : {4dc3e181-e14b-4a21-b022-59fc669b0914}
    Certificate Store Name       : My
    Verify Client Certificate Revocation    : Disabled
    Verify Revocation Using Cached Client Certificate Only    : Disabled
    Usage Check                  : Enabled
    Revocation Freshness Time    : 0
    URL Retrieval Timeout        : 0
    Ctl Identifier               : (null)
    Ctl Store Name               : (null)
    DS Mapper Usage              : Disabled
    Negotiate Client Certificate : Disabled

    NOTE: Client Certificate Revocation is always enabled by default.

    More details on the netsh commands for HTTP can be found here: http://technet.microsoft.com/en-us/library/cc725882(v=ws.10).aspx#BKMK_2

    MORE INFORMATION

    NETSH Commands for HTTP in IIS 8:

    With IIS 8 there are 2 new SSL binding types, viz. SNI bindings and CCS bindings. So the above commands have to be modified slightly to incorporate these changes. There are 2 additional parameters beyond what is listed in the above TechNet article. They are:

    Tag             Value

    hostnameport    Unicode hostname and port for the binding.
    ccs             Central Certificate Store binding (port).

    hostnameport is very similar to the ipport. The only difference is that it takes a Unicode string as an input along with the port number.

    Below are the modified commands for the corresponding bindings in IIS 8:

    To delete a SNI Binding

    netsh http delete sslcert hostnameport=www.sni.com:443

    To delete a CCS Binding

    netsh http delete sslcert ccs=443

    To add a SNI Binding

    netsh http add sslcert hostnameport=www.sni.com:443 certhash=40db5bb1bf5659a155258d1d007c530fcb8996c2 appid={4dc3e181-e14b-4a21-b022-59fc669b0914} certstorename=My verifyclientcertrevocation=disable

    To add a CCS Binding

    netsh http add sslcert ccs=443 appid={4dc3e181-e14b-4a21-b022-59fc669b0914} verifyclientcertrevocation=disable

    Windows Azure Web Sites : Cannot upload a Self-Signed Certificate created with PowerShell


    As SSL functionality was added to Windows Azure Web Sites, I started playing around with it. I was trying to upload self-signed certificates when I ran into an issue.

    I created a self-signed certificate using the Windows PowerShell ISE (the New-SelfSignedCertificate cmdlet). Below is a snippet of the command I ran:

    New-SelfSignedCertificate -CertStoreLocation cert:\LocalMachine\My -DnsName www.kaushalz.com

    I exported the certificate in the PFX format and then tried uploading the certificate to WAWS.
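    For reference, the export step can also be done from PowerShell with the PKI module (a sketch; the thumbprint, output path and password are placeholders):

    $pwd = ConvertTo-SecureString -String "P@ssw0rd" -Force -AsPlainText
    Export-PfxCertificate -Cert cert:\LocalMachine\My\<certificate-thumbprint> -FilePath C:\temp\kaushalz.pfx -Password $pwd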

    image

    But it threw an error as shown below:

    image

    I clicked on DETAILS, and it showed up this.

    image

    Out of curiosity I wondered if WAWS allowed self-signed certificates to be uploaded. So I created a self-signed certificate via IIS Manager and exported it in PFX format and tried uploading it on to WAWS. This was successful, no errors at all.

    Even a self-signed certificate created using selfssl.exe tool could be uploaded to WAWS.

    It seems that the certificate created using PowerShell is missing keyset permissions, which doesn’t work well with WAWS. I see this as a limitation of PowerShell. However, I’m no PowerShell expert, so I cannot confirm whether anything more can be done.

     

    Windows Azure Web Sites: How to configure a custom domain


    By default, when users create a website on WAWS, the hostname will be SITENAME.azurewebsites.net. So when I create a website called Kaushalz, the hostname will be Kaushalz.azurewebsites.net.

    Windows Azure Web Sites allows users to configure custom domains as well. However, this option is available only when the website is scaled to either SHARED or STANDARD (previously called RESERVED) mode.

    There is already documentation on how to configure a custom domain. Here is the link: http://www.windowsazure.com/en-us/develop/net/common-tasks/custom-dns-web-site/

    However, I have still seen a few customers running into issues in spite of this. I will discuss it in as much detail as possible.

    So, as we know, there are 2 ways to configure domain names:

    1. Add an A record.
    2. Add a CNAME record.

    To explain this scenario I purchased a domain called www.kaushalz.org from GoDaddy.

    Now I will show step by step how to configure this.


    Create a website and scale it to SHARED/STANDARD

    Logon to the Azure portal and create a website. If this is the first site being created in a specific data center then it would be scaled to FREE mode by default. You will have to scale it to either SHARED or STANDARD.

    Refer this URL on how to do so: http://www.windowsazure.com/en-us/develop/net/common-tasks/custom-dns-web-site/#bkmk_configsharedmode

    Once the website has been scaled to either SHARED or STANDARD, we can proceed further.


    Configuration on the DNS Service Provider/DNS Server

    As I stated earlier, I purchased the domain www.kaushalz.org from GoDaddy. So I will have to configure the routing on GoDaddy’s DNS servers.

    When you click on Manage Domains under the DASHBOARD management page, you will see the following pop-up window providing the details on how to configure:

    image

    image

    As seen above it provides the IP which needs to be used while configuring A records on the DNS server.

    Additionally, it also suggests that before we add custom domains for the site, Azure needs to verify whether the user is authorized to use the domain name.

    HOST                RECORD TYPE    POINTS TO
    www.kaushalz.org    CNAME          kaushalz.azurewebsites.net

          

    My DNS service provider is GoDaddy so I need to have www pointing to kaushalz.azurewebsites.net. Below is a snapshot of my DNS Manager.

    image

     


    SCENARIO: MIGRATION FROM PRODUCTION SITE TO WAWS

    Now consider a scenario where the user wants to migrate his production websites to Azure without affecting any of his current sites. He would like to perform the verification of the hostname first and then modify his DNS records to point to the site hosted on Azure. What should he do?

    For this he would have to create CNAME records on the DNS service provider as shown below:

    HOST                         RECORD TYPE    POINTS TO
    awverify.kaushalz.org        CNAME          awverify.kaushalz.azurewebsites.net
    awverify.www.kaushalz.org    CNAME          awverify.kaushalz.azurewebsites.net

     

    I’m going to point both kaushalz.org and www.kaushalz.org to kaushalz.azurewebsites.net. Therefore I’m adding the above entries.

    Note that kaushalz.org and www.kaushalz.org are 2 different hostnames. The above step is only for verification.

    Below is a snapshot of my configuration from GoDaddy. I don’t need any other records apart from these 2 CNAME entries.

    image

    NOTE: I have seen cases where the propagation of the DNS entries takes up to 48 hours, or sometimes even more than that. In these cases you may have to wait for the propagation to succeed, then go back to Azure and perform the verification. The user needs to ensure that the awverify entry resolves to the corresponding entry which was added on the DNS server when performing a lookup.

    There are online DNS query tools; I am using dnsquery.org for this. Provide the awverify hostname and then click on Query. It will produce results as shown below.

    image

    In case of GoDaddy I have always observed that the propagation is very quick and the results are almost immediate.

    You can skip this if you have a CNAME entry pointing to SITENAME.azurewebsites.net on the DNS server.

     

    DNS VERIFICATION LOGIC

    The WAWS verification logic has 2 steps:

    1. First we try to validate using a CNAME. That means we look at whether the custom domain provided is a CNAME record. If it is, then it is expanded. If the next record in the chain is also a CNAME, it is further expanded. The expansion goes on until we get an A record or something ending with “azurewebsites.net”. If it is found that the custom domain eventually points to SITENAME.azurewebsites.net, it is verified that SITENAME is really the site for which the custom domain is being added, and verification is done.
    2. If for some reason the validation of the CNAME fails, it proceeds with the alternative validation. That is useful for users when they want to just test their sites but not really point the record to Azure yet, or if it is a naked domain (some DNS registrars don’t support a CNAME for such a hostname). The alternative validation puts “awverify” in front of the provided hostname and follows the same logic as above. The only difference is that this time the target has to be awverify.SITENAME.azurewebsites.net.

    TXT records are also supported. This is another alternative, because some DNS registrars don’t support CNAMEs pointing to non-existent hostnames (the “awverify” version doesn’t truly exist). So one can also create just a TXT record instead of a CNAME (otherwise it is the exact same setup as in the validations above).

    Thanks to Petr Podhorsky (DEV on WINDOWS AZURE WEB SITES) who shared the above information with me.


    Add the Domain Name in WAWS Portal

    Let’s assume the DNS records have propagated successfully; now the only task left is to add the domain name to the portal.

    Go to the DASHBOARD Management page and click on MANAGE DOMAINS at the bottom of the page.

    Enter the domain name and give it some time to verify. If it succeeds, then you would see something as shown below:

    image

    If the verification fails then you would get a corresponding warning message that it was unable to perform the 2 step verification logic and would request the user to update the DNS entries.

    image

    You will have to try resolving the hostname and confirm that it is resolving to the entries you specified.

    As I suggested earlier, go to dnsquery.org, query the hostname and see what the hostname resolves to. This will indicate whether the propagation of the entries has completed or not.

    Once the verification is done, user can modify the DNS entries to point to the AZURE front end IP Address provided to him. Below is a snapshot of the final configuration of my domain name on GoDaddy.

    image


    ***MORE INFORMATION***

    NOTE: You will have to add both www.SITENAME.com and SITENAME.com in the portal as they are 2 different hostnames. Below is a snapshot:
    image

    Once added successfully, you are ready to go. Browse the URLs to confirm the site’s availability.


    Windows Azure Web Sites: DASHBOARD Management Page


    This post will address the Web Sites DASHBOARD management page. In one of my earlier posts I discussed the QuickStart management page.

    The DASHBOARD management page is one of the most important pages. We'll discuss all the options on this page one by one. Before we proceed further, here is a snapshot of the DASHBOARD page:

    There are several sections which depict different data and provide the users with certain information and options.

    The CPU METRIC CHART

    As soon as the user lands on this page, he will see the graphical representation of certain CPU metrics which provides some insight into the overall usage of the site. I will not discuss this entirely here as this is a replica of the same information which is available on the MONITOR management page. The only change is that here the users have the ability to select only these metrics:

    • CPU TIME
    • DATA IN
    • DATA OUT
    • HTTP SERVER ERRORS
    • REQUESTS
    • RESPONSE TIME (This option becomes available when Endpoint Monitoring is configured)

    Below is a snapshot of the chart when the endpoint monitoring is enabled:

    We will discuss this in more detail when I post about the MONITOR management page later.

    Command Bar

    At the bottom of the page is the Command Bar that provides basic management functionality to stop, start and browse a website. Below is a snapshot:

    As seen above it includes the following options:

    • BROWSE – launches the website URL in a web browser.
    • STOP – stops the website.
    • RESTART – re-starts the website.
    • MANAGE DOMAINS – takes the user to the Domain Names section under the CONFIGURE Management Page. The users can add custom domain names. This option is enabled if the website is running in either SHARED or STANDARD mode.
    • DELETE – deletes the website from your subscription.
    • WEBMATRIX – launches the WebMatrix tool on the client side so that the users can edit the website. If WebMatrix is not installed, then it would prompt the user to install it.

    Web Endpoint Status

    Currently this feature is in PREVIEW (or BETA) and is available only if the website is running in STANDARD mode. It provides monitoring functionality for the web site's HTTP or HTTPS endpoints from up to 3 geographically distributed locations. Below is a snapshot before the web endpoint is configured for monitoring:

    This section corresponds to the monitoring section under the CONFIGURE Management Page.

    The users can configure a maximum of 2 endpoints each of which can be monitored from up to 3 geographic locations. Rephrasing again, "one endpoint can be monitored from up to 3 geographical locations". There are 8 geographical locations to choose from, they are:

    1. Chicago, Illinois (US)
    2. San Antonio, Texas (US)
    3. San Jose, California (US)
    4. Ashburn, Virginia (US)
    5. Dublin, Ireland
    6. Amsterdam, Netherlands
    7. Hong Kong
    8. Singapore

    To set this up the user needs to browse to the Configure management page. After the user has created the endpoint, the Dashboard management page takes some time to update the changes. It may take approximately 10-15 minutes for the portal to reflect the changes. Until the changes are reflected the user will see this:

    Once the portal has finished updating, the user will see something similar to this:

    As shown above, the user will get an option to view the result of the tests. The UI will display the endpoints that were tested (from the chosen geo-locations) along with the timestamp. The timestamp seen in the above image reflects the user's local time zone. The user has to click on the endpoint's name to view the results. I set up 2 endpoints called Kaushal and test, as seen above. Below is the output when I click on one of the endpoints.

    Autoscale Status

    This section displays the data corresponding to the AUTOSCALE option under SCALE Management page. Currently this feature is in PREVIEW(or BETA) and is available only if the website is running in STANDARD mode. Below is a snapshot of the section before autoscale has been configured:

    Once the website has been scaled to STANDARD and autoscale has been set to CPU, the portal would update the section and this is what it would display:

    Usage Overview

This section displays the usage quotas for Data Out, CPU Time, File System Storage, Memory Usage, SQL Server Database Size etc.

    • The green bar for each resource indicates how much of the subscription's resource usage quota is being consumed by the current web site.
    • The grey bar displayed for each resource indicates how much of a subscription's resource usage quota is being consumed by all other shared mode web sites associated with user's Web Site subscription.

This section displays a different set of data depending on which mode the website is running in. Typically it depicts the quota restrictions enforced by Windows Azure on the utilization of system resources.

• Data Out – A measure of the amount of data sent from web sites to their clients in the current quota interval (24 hours).
• CPU Time – The amount of CPU time used by web sites running in Free/Shared mode for the current quota interval.
• File System Storage – The amount of file system storage in use by the web site.
• Memory Usage – The amount of physical memory in use by the web site.
• Database Size – The total SQL Server storage space utilized by the website on this DB.

NOTE: The database info is related to SQL Server and not MySQL. There are no metrics available for MySQL on WAWS currently. To view MySQL metrics, please log on to ClearDB's site.

As I mentioned earlier, WAWS prevents over-usage of resources through quota restrictions on the website. It takes subsequent actions when a website overuses resources; this is done to prevent any one subscriber from exhausting resources to the detriment of other subscribers.

    What happens when a resource usage quota is exceeded?

    Windows Azure takes the following actions if a subscription's resource usage quotas are exceeded in a quota interval:

• Data Out – When this quota is exceeded, Windows Azure stops all web sites for a subscription which are configured to run in SHARED mode for the remainder of the current quota interval. Windows Azure will start the web sites at the beginning of the next quota interval.
• CPU Time – When this quota is exceeded, Windows Azure stops all web sites for a subscription which are configured to run in SHARED mode for the remainder of the current quota interval. Windows Azure will start the web sites at the beginning of the next quota interval.
• File System Storage – Windows Azure prevents deployment of any web sites for a subscription which are configured to run in SHARED mode if the deployment would cause the File System Storage usage quota to be exceeded. When the File System Storage resource has grown to the maximum size allowed by its quota, file system storage remains accessible for read operations, but all write operations, including those required for normal web site activity, are blocked. When this occurs you could configure one or more web sites running in SHARED mode to run in STANDARD mode and reduce usage of file system storage below the File System Storage usage quota.
• Memory Usage – When this quota is exceeded, Windows Azure stops all web sites for a subscription which are configured to run in SHARED mode for the remainder of the current quota interval. Windows Azure will start the web sites at the beginning of the next quota interval. The resource metering service on the web worker pushes worker-process stats (private bytes, in this case) to the metering DB twice every minute. Quota enforcement monitors the metering DB to see whether usage has crossed 512 MB (again, twice every minute). Since this quota is enforced per clock hour, if quota enforcement sees 512 MB in the DB it blocks the site for the remainder of that clock hour. So it could detect a violation at 10:59 and block the site for only 1 minute, unblocking it at 11:00; similarly, if it detects a violation at 10:05, the site will be blocked for 55 minutes, until 11:00.

    Linked resources

    This will display all the resources and dependencies of the user's web sites. The user can link new or existing Windows Azure SQL Database instances, MySQL instances, or Storage accounts to the web site.

    If there are no linked resources then the DASHBOARD page reflects something as shown in the image below:

    The MANAGE LINKED RESOURCES (a hyperlink) points to the LINKED RESOURCES management page.

    Quick Glance

Towards the right side of the page is the quick glance section, which provides the user with a few important options and pieces of information.

    As seen in the above snapshot, it provides the following information:

• View connection strings – This link, when clicked, displays the connection string to the user, provided the application connects to either a SQL or a MySQL database.
• Download publish profile – Clicking this link prompts the user to download an XML file which contains all of the information required to publish a web application to a Web Site; this file is known as the publish profile. It is saved with the extension "*.PublishSettings" and can be used with Microsoft WebMatrix to automate publishing of applications to Web Sites.
• Reset publish profile credentials – This option, when clicked and confirmed, makes any previously downloaded publish profiles unusable. It creates a new publish profile with updated security information. (It is important to note that this changes the password hash, but not the password itself. Because the password hash is changed, the previous publish profiles are no longer usable.)
• Reset deployment profile credentials – This sounds similar to the previous option but should not be confused with it. When clicked, the user is prompted to change the deployment credentials. When users deploy to an FTP host or a Git repository, they must authenticate using the deployment credentials created from the Web Site's Quick Start or Dashboard management pages.

    NOTE: The difference between Reset deployment profile credentials & Reset publish profile credentials has been discussed in more detail in this blog post: Click Here

• Set up deployment from source control – This option allows users to set up publishing for the web site using a wide variety of source control providers, including Team Foundation Service (TFS), local Git (a Git repository on your local computer), GitHub, CodePlex, BitBucket, DropBox, or Mercurial. Once configured, the users can manage the deployments on the DEPLOYMENTS page.

    NOTE: Once the source control is configured the option changes to reflect a new value. For example if the user configures GitHub as a source control, the quick glance section would contain an option called Disconnect from GitHub. See below snapshot:

     

     

    There are other sections below quick glance (right side of the page) which provide additional information regarding the website.

• STATUS – Provides the status of the Web Site, whether it is Running or Stopped.
• SITE URL – Specifies the public address used to access the site. This can be modified under the CONFIGURE management page provided the site is running in either Shared or Reserved mode.
• VIRTUAL IP ADDRESS – Specifies the IP which gets assigned to the website upon enabling IP-based SSL. Once IP-based SSL is enabled, the users need to use this IP when configuring custom domains, and not the IP of the WAWS front-end servers.
• COMPUTE MODE – Specifies the mode the Web Site has been configured to run in. This can be FREE, SHARED or STANDARD.
• FTP HOSTNAME – Specifies the URL to use when the user is publishing the site over FTP. (Also indicates the datacenter where the website is hosted.)
• FTPS HOSTNAME – Specifies the URL to use when the user is publishing the site over FTPS.
• DEPLOYMENT / FTP USER – Specifies the user account to be used when deploying the web application using FTP. Ensure the username is prepended with the Web Site name followed by a backslash. If the website's name is Kaushal and the username is test, then the value is Kaushal\test.
• FTP DIAGNOSTIC LOGS – Specifies the location where the diagnostics are stored and allows the user to download them. The user must remember to prepend the username with the Web Site name followed by the backslash, as mentioned earlier. Diagnostics options for a Web Site are available on the CONFIGURE management page for the Web Site. After configuring diagnostics for a Web Site, the user can download the resulting log files via FTP. Consider using a client such as FileZilla to download diagnostic logs from the FTP or FTPS site. A standalone client provides usability superior to a web browser for specifying credentials, viewing folders and downloading files from FTP or FTPS sites.
• FTPS DIAGNOSTIC LOGS – Same as the previous one, except that the user can choose FTPS as the protocol.
• LOCATION – Specifies the location of the data center where the Web Site is hosted. Users get the option to choose the data center during Web Site creation. Currently there is no option in the Windows Azure Portal to migrate the website to a different data center after creation.
• SUBSCRIPTION NAME – Specifies the Windows Azure subscription name used to create the Web Site.
• SUBSCRIPTION ID – Specifies the Windows Azure subscription ID used to create the Web Site.

     

    So this is the summary of the DASHBOARD Management Page. I will update this post to keep in sync with the new features that either get added or removed.

     

     

    SSL Handshake and HTTPS Bindings on IIS

    $
    0
    0

Secure Sockets Layer (SSL) and its successor Transport Layer Security (TLS) are cryptographic protocols which define how two entities (the client and the server) communicate with each other securely. TLS is the successor of SSL. You can read more about it here: http://en.wikipedia.org/wiki/Transport_Layer_Security

The following protocol versions are the most commonly used:

    • SSL 2.0
    • SSL 3.0
• TLS 1.0 (SSL 3.1)
• TLS 1.1 (SSL 3.2)
• TLS 1.2 (SSL 3.3)

    SSL 2.0 had many security flaws which led to the development of its successor SSL 3.0. It is present only for backward compatibility. I have rarely seen anyone using this version and I would highly recommend against it.

    As we know TLS/SSL is an application layer protocol. Below is a diagram depicting the TCP/IP model:

I am not going to discuss the SSL/TLS protocol itself in this post as it is beyond the scope of this topic. However, I will discuss the SSL handshake in brief and relate it to IIS.

    The above diagram makes it clear that TLS/SSL runs on top of TCP/IP like any other application layer protocol. Before we delve into SSL handshake we need to know something about TCP handshake too.

    TCP/IP Handshake

    Microsoft has published a support article explaining the 3-way TCP/IP handshake. Here is the link: http://support.microsoft.com/kb/172983

    Below diagram should give you a gist of the TCP/IP handshake:

If we were to capture a network trace (or a TCP dump) and analyze it, we would see that the IP layer provides the IP addresses of the client and the server, while the TCP layer contains details about the source port and the destination port, the TCP flags and other fields like the checksum, window size etc.

    When the user launches a browser and punches in the web address, let's say https://www.kaushalz.com, the client and the server would perform the TCP/IP handshake as seen below

    So basically this is what is passed on from the TCP/IP layer to the application layer:

    • IP Address of the source and destination
    • Source Port and Destination Port

The host header is present in neither the IP nor the TCP layer. This leads to a problem which was addressed via the introduction of SERVER NAME INDICATION (a TLS extension).

    Problem due to above Limitation

Before I describe the problem we need to understand a little about the server-side bindings. When routing an HTTP request to a website, the server determines which process the request should be routed to based on the IP, PORT & HOSTNAME. These 3 are always available to the server during normal HTTP communication. So the combination of IP+PORT+HOSTNAME is used as a unique identity to route the request to a specific site and process. The server admin can keep the same IP+PORT for all the HTTP websites and alter only the HOSTNAME to maintain uniqueness throughout, which also makes the server scalable.

However, in the case of SSL the server has access to the IP & port only. Since the HOSTNAME is not available, the server has to route the request to the process based on IP+PORT alone. This limitation leaves the server handicapped, as it forces a change in design for websites running on HTTPS: the uniqueness for websites running on HTTPS is determined through the combination of IP+PORT. In the real world, having a separate IP for every website is not ideal due to hardware & monetary limitations. Also, changing the port number for all SSL bindings may not be ideal, as changing the port number to anything other than the default SSL port would require the client to explicitly specify the port number in the request. As a result, the server is not scalable for HTTPS sites.

This was a protocol limitation and severely affected the scalability of the sites.

This problem was addressed by introducing a TLS extension called Server Name Indication. The client sends the server the hostname it is requesting as part of the CLIENT HELLO, in the form of a TLS extension. You can read more about it here: RFC 3546 (Section 3.1)
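To see SNI from the client's perspective, here is a minimal sketch using Python's standard ssl module (the hostname is just the placeholder used elsewhere in this post); the value passed as server_hostname is what ends up in the server_name extension of the CLIENT HELLO:

import socket
import ssl

hostname = "www.kaushalz.com"   # placeholder host used in this post
context = ssl.create_default_context()

# The TCP handshake below only carries the IP and port; the hostname reaches the
# server via the SNI extension supplied through server_hostname.
with socket.create_connection((hostname, 443)) as tcp_sock:
    with context.wrap_socket(tcp_sock, server_hostname=hostname) as tls_sock:
        # The certificate the server returns should match the SNI name we sent.
        print(tls_sock.getpeercert()["subject"])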

    TLS/SSL Handshake

    Let's consider a scenario where the client launches the browser and punches in https://www.kaushalz.com.

• The client will try to resolve the hostname to an IP address via DNS.
• Once the client has the destination IP, it will send a TCP SYN to the server.
• The server responds with a SYN-ACK.
• The client responds with an ACK to the SYN-ACK it received from the server. Now a TCP connection has been established between the client and the server. The client will now forward its requests to the destination IP on port 443 (the default TLS/SSL port).
• The control is now transferred to the SSL protocol in the application layer. It has the IP & port information handy from the previous steps. However, it still has no clue whatsoever about the hostname.
    • The client creates a TLS Packet called as CLIENT HELLO. This contains the following details:
      • SSL Protocol version
      • Session ID
      • List of Cipher Suites supported by the client.
      • List of CLIENT HELLO Extensions

The client typically offers the highest protocol version it supports and sends it to the server. Below is a snippet from RFC 3546:



    2.1. Extended Client Hello


    Clients MAY request extended functionality from servers by sending the extended client hello message format in place of the client hello message format. The extended client hello message format is:

    struct {
            ProtocolVersion client_version;
            Random random;
            SessionID session_id;
            CipherSuite cipher_suites<2..2^16-1>;
            CompressionMethod compression_methods<1..2^8-1>;
            Extension client_hello_extension_list<0..2^16-1>;
    } ClientHello;

    Here the new "client_hello_extension_list" field contains a list of extensions. The actual "Extension" format is defined in Section 2.3.

    In the event that a client requests additional functionality using  the extended client hello, and this functionality is not supplied by the server, the client MAY abort the handshake.

    Note that [TLS], Section 7.4.1.2, allows additional information to be added to the client hello message.  Thus the use of the extended client hello defined above should not "break" existing TLS 1.0 servers.

     A server that supports the extensions mechanism MUST accept only client hello messages in either the original or extended ClientHello format, and (as for all other messages) MUST check that the amount of data in the message precisely matches one of these formats; if not then it MUST send a fatal "decode_error" alert.  This overrides the "Forward compatibility note" in [TLS].

If you were to capture a network trace (or a TCP dump), this is what the CLIENT HELLO would look like:

    Frame 310: 187 bytes on wire (1496 bits), 187 bytes captured (1496 bits) on interface 0
    Ethernet II, Src: WistronI_86:74:54 (3c:97:0e:86:74:54), Dst: Cisco_e5:44:00 (10:bd:18:e5:44:00)
    Internet Protocol Version 4, Src: 10.171.71.21 (10.171.71.21), Dst: 10.168.3.213 (10.168.3.213)
    Transmission Control Protocol, Src Port: 42079 (42079), Dst Port: http (80), Seq: 226, Ack: 116, Len: 133
    Secure Sockets Layer
        TLSv1 Record Layer: Handshake Protocol: Client Hello

            Content Type: Handshake (22)
            Version: TLS 1.0 (0x0301)
            Length: 128
            Handshake Protocol: Client Hello
                Handshake Type: Client Hello (1)
                Length: 124
                Version: TLS 1.0 (0x0301)
                Random
                    gmt_unix_time: Aug  3, 2013 06:45:04.000000000 India Standard Time

                    random_bytes: 894966609a64a0b0ba0b4cd5adcc431aad77f0ff6108590e...
                Session ID Length: 0
                Cipher Suites Length: 24
                Cipher Suites (12 suites)
                    Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA (0x002f)

                    Cipher Suite: TLS_RSA_WITH_AES_256_CBC_SHA (0x0035)
                    Cipher Suite: TLS_RSA_WITH_RC4_128_SHA (0x0005)
                    Cipher Suite: TLS_RSA_WITH_3DES_EDE_CBC_SHA (0x000a)
                    Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)
                    Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)
                    Cipher Suite: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA (0xc009)
                    Cipher Suite: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA (0xc00a)
                    Cipher Suite: TLS_DHE_DSS_WITH_AES_128_CBC_SHA (0x0032)
                    Cipher Suite: TLS_DHE_DSS_WITH_AES_256_CBC_SHA (0x0038)
                    Cipher Suite: TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA (0x0013)
                    Cipher Suite: TLS_RSA_WITH_RC4_128_MD5 (0x0004)
                Compression Methods Length: 1
                Compression Methods (1 method)
                Extensions Length: 59
                Extension: renegotiation_info
               Extension: server_name
                Extension: status_request
                Extension: elliptic_curves
                Extension: ec_point_formats
                Extension: SessionTicket TLS

     

    • The client sends a CLIENT HELLO to the server on the IP & Port it obtained during TCP handshake.
    • For this scenario I will consider IIS 7.5 as the SERVER entity. Upon receiving the CLIENT HELLO, the server has access to the following information:
      • IP Address (10.168.3.213)
      • Port Number (443)
      • Protocol Version (TLS 1.0)
      • List of Cipher Suites
      • Session ID
      • List of CLIENT HELLO Extensions etc.

The server will first check whether it supports the requested protocol version and any of the cipher suites in the provided list. If not, the handshake fails right there.

The server will now try to determine whether there is an endpoint listening on the IP and PORT. If it finds an endpoint and if it is IIS, then the TCPIP.SYS driver moves the packet to the HTTP.SYS layer.

      • HTTP.SYS moves the request into the generic SSL Queue.
• Up to IIS 7.5 the SSL bindings were IP based, i.e., IP+Port, and were associated with a certificate hash.
• HTTP.SYS tries to determine the certificate hash corresponding to this IP+Port combination. It does so by enumerating the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\SslBindingInfo
• From the above, the certificate hash corresponding to the IP+PORT combination is determined. HTTP.SYS then calls the Crypto APIs, passing the cert hash to retrieve the certificate blob; the Crypto API looks in the certificate store, finds the certificate and sends it back to HTTP.SYS.
    • The Server responds to the client with SERVER HELLO. RFC 3546 defines the format of the SERVER HELLO:



    2.2. Extended Server Hello


    The extended server hello message format MAY be sent in place of the server hello message when the client has requested extended functionality via the extended client hello message specified in Section 2.1.  The extended server hello message format is:

          struct {
              ProtocolVersion server_version;
              Random random;
              SessionID session_id;
              CipherSuite cipher_suite;
              CompressionMethod compression_method;
              Extension server_hello_extension_list<0..2^16-1>;
          } ServerHello;

    Here the new "server_hello_extension_list" field contains a list of extensions.  The actual "Extension" format is defined in Section 2.3.

    Note that the extended server hello message is only sent in response to an extended client hello message.  This prevents the possibility that the extended server hello message could "break" existing TLS 1.0 clients.

    The Server typically responds back with the following details:

• The SSL/TLS protocol version.
• One of the cipher suites from the list provided by the client (whichever is the most secure).
• The certificate of the server (without the private key, of course).
• List of SERVER HELLO extensions.
• (OPTIONAL) If the web app associated with this binding requires a client certificate for authentication, it requests the client to send its certificate. Here the IIS server sends the client the distinguished names of the list of TRUSTED ROOT CAs it supports.

    Below is a snippet of the network trace:

    Frame 320: 257 bytes on wire (2056 bits), 257 bytes captured (2056 bits) on interface 0
    Ethernet II, Src: Cisco_e5:44:00 (10:bd:18:e5:44:00), Dst: WistronI_86:74:54 (3c:97:0e:86:74:54)
    Internet Protocol Version 4, Src: 10.168.3.213 (10.168.3.213), Dst: 10.171.71.21 (10.171.71.21)
    Transmission Control Protocol, Src Port: http (80), Dst Port: 42079 (42079), Seq: 1576, Ack: 359, Len: 203
    [2 Reassembled TCP Segments (1663 bytes): #319(1460), #320(203)]
    Secure Sockets Layer
        TLSv1 Record Layer: Handshake Protocol: Multiple Handshake Messages

            Content Type: Handshake (22)
            Version: TLS 1.0 (0x0301)
            Length: 1658
           Handshake Protocol: Server Hello
                Handshake Type: Server Hello (2)

                Length: 81
                Version: TLS 1.0 (0x0301)
                Random
                Session ID Length: 32
                Session ID: 8d0a0000efffe1ad6a82edc6d6a8967bd759cd0f3bdf70e9...
                Cipher Suite: TLS_RSA_WITH_RC4_128_SHA (0x0005)
                Compression Method: null (0)

                Extensions Length: 9
                Extension: renegotiation_info
                Extension: server_name
           Handshake Protocol: Certificate
                Handshake Type: Certificate (11)
                Length: 1565
                Certificates Length: 1562
                Certificates (1562 bytes)
                    Certificate Length: 1559

                   Certificate (id-at-commonName=www.kaushalz.com,id-at-organizationalUnitName=Azure,id-at-organizationName=Microsoft,id-at-localityName=Bangalore,id-at-stateOrProvinceName=India,id-at-countryName=IN)
                        signedCertificate
                        algorithmIdentifier (shaWithRSAEncryption)
                        Padding: 0
                        encrypted: bcd1c6d0a5e548eea94749e950d9ed8d7b73a79ac63306f0...
            Handshake Protocol: Server Hello Done
                Handshake Type: Server Hello Done (14)
                Length: 0

     

• The client uses the SERVER HELLO to perform SERVER AUTHENTICATION. This is described in detail here: http://support.microsoft.com/kb/257587. If the server cannot be authenticated, the user is warned and informed that an encrypted and authenticated connection cannot be established. If the server is successfully authenticated, the client proceeds to the next step.

NOTE: If you captured a network trace for an SSL handshake, you could see the details only up to the SERVER HELLO; after that the encryption begins, so nothing beyond that point would be readable as the packets are encrypted.

• The client uses the data provided by the server to generate a pre-master secret for the session, encrypts it with the server's public key (obtained from the server's certificate), and then sends the encrypted pre-master secret to the server. If the server had requested a CLIENT CERTIFICATE, the client also signs another piece of data that is unique to this handshake and known by both the client and the server. In this case, the client sends both the signed data and the client's own certificate to the server along with the encrypted pre-master secret.
• If the server had requested client authentication, the server attempts to authenticate the client. If the client cannot be authenticated, the session ends. If the client is successfully authenticated, the server uses its private key to decrypt the pre-master secret, and then performs a series of steps (which the client also performs, starting from the same pre-master secret) to generate the master secret.
• Both the client and the server use the master secret to generate the session keys, which are symmetric keys used to encrypt and decrypt information exchanged during the SSL session and to verify its integrity (that is, to detect any changes in the data between the time it was sent and the time it is received over the SSL connection).
• The CLIENT & the SERVER send each other a message informing that future messages from them will be encrypted with the session key. Each then sends a separate (encrypted) message indicating that its portion of the handshake is finished.
    • The SSL Handshake is done. The Client and the Server send each other messages which are encrypted/decrypted using the session keys generated in the previous step.
    • It is now that the Client sends the actual HTTP Request packet to the Server in the encrypted form.
    • The Server decrypts the request via the symmetric key and generates a response, encrypts it and sends it back to the client.
    • This continues normally for the entire session of secure communication. However, at any time either the client or the server may renegotiate the connection. In this case the process repeats again.

    Below is a diagrammatic representation of the SSL Handshake:
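If you want to watch the outcome of the handshake from code rather than from a network trace, here is a minimal sketch using Python's standard ssl module (the hostname is the placeholder used in this post; substitute your own site). It performs the TCP and TLS handshakes described above, prints what was negotiated and then sends the first encrypted HTTP request:

import socket
import ssl

host = "www.kaushalz.com"                       # placeholder host from this post
context = ssl.create_default_context()          # also performs server authentication

with socket.create_connection((host, 443)) as raw_sock:               # TCP handshake
    with context.wrap_socket(raw_sock, server_hostname=host) as tls:  # TLS handshake
        print("Protocol :", tls.version())      # e.g. TLSv1.2
        print("Cipher   :", tls.cipher())       # the suite picked by the server
        print("Cert     :", tls.getpeercert().get("subject"))
        # From this point on everything is encrypted with the session keys,
        # including the actual HTTP request and response.
        tls.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n")
        print(tls.recv(200))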

    Identifying problems during SSL Handshake

Once the handshake completes and the data exchange is done, one or both of the entities will eventually close the connection gracefully. If there was a problem during the SSL handshake, an alert is raised within the SSL layer (the SSL ALERT protocol). These alerts may or may not be fatal, i.e. not all of them cause the handshake to fail.

As noted earlier, we can see details only up to the SERVER HELLO; anything beyond that point is not visible. However, in the case of an SSL ALERT we would see a notification which can be viewed in the network traces.

    Servers also tend to propagate this information through some sort of server logging. On Windows we have SCHANNEL logging which throws a corresponding SCHANNEL event in the SYSTEM event logs. Watch out for these events. Below is a snippet of one such event:

    Log Name:      System
    Source:        Schannel
    Date:          05-08-2013 20:16:02

    Event ID:      36888
    Task Category: None
    Level:         Error
    Keywords:     
    User:          SYSTEM
    Computer:      My-Computer
    Description:

    The following fatal alert was generated: 40. The internal error state is 1205.

     

However, do remember that not all the alerts you see are fatal. (In the event above, alert number 40 corresponds to handshake_failure in the TLS alert registry.) Try to reproduce the error and confirm that this is the message logged for the problem you are facing.

You could use either Wireshark or Network Monitor (NETMON). I use both, depending on the scenario I am running into.

    That's it for now folks, hopefully this would give you some idea on how SSL handshake works. Let me know if you have any queries/suggestions.

    MORE INFORMATION

    TCP/IP Handshake: http://support.microsoft.com/kb/172983

    Description of SSL Handshake: http://support.microsoft.com/kb/257591

    Description of Server Authentication during SSL Handshake: http://support.microsoft.com/kb/257587

    SSL/TLS Alert protocol & the alert codes: http://blogs.msdn.com/b/kaushal/archive/2012/10/06/ssl-tls-alert-protocol-amp-the-alert-codes.aspx

    Server Name Indication: http://en.wikipedia.org/wiki/Server_Name_Indication

    Windows Azure Web Sites: PHP, .user.ini and WAWS

    $
    0
    0

    Happy New Year 2014 to all the readers!! Smile

It's been quite some time since I wrote a blog post. I am back, and today I will be writing a small post on the importance of the .user.ini file w.r.t. Windows Azure Web Sites (WAWS).

As you know, WAWS supports the PHP framework. As of today it supports the following versions:

    • PHP 5.3 (5.3.19)
    • PHP 5.4 (5.4.9)
    • PHP 5.5 (5.5.3)

    image

    Go to the CONFIGURE page under your WAWS site to configure this setting. As seen in the above image the user can choose any one of the framework versions which are supported by WAWS.

NOTE: WAWS supports specific runtime versions of each framework version, as they have been tested against WAWS. The user can also configure their own customized version of the PHP runtime which their application supports. This way they will have complete control over php.ini. Please refer to this URL: How to: Use a custom PHP runtime

    You can check the PHP runtime configuration by adding a sample php page to your WAWS site with the following one line of code:

    <?php
        phpinfo();
    ?>

    PHP picks up its configuration from the file php.ini when it starts up. If you access the above page for your site, you will see that this file resides under: C:\DWASFiles\Sites\<SiteName>\Config\PHP-5.5.3\php.ini

You can read more about php.ini here:

    On premise, the server admins would modify the contents of this file to configure the PHP runtime to tailor to the needs of the application.

    On WAWS, with the default runtime versions that are provided the users cannot modify the contents of the php.ini file, so there is hardly any room for customization of PHP runtime configuration. However, the users aren’t completely helpless here. There is another configuration file, .user.ini.

Support for these files was introduced in PHP version 5.3.0, which added support for configuration INI files on a per-directory basis. You can read more about it here: .user.ini files.

So we can use this file to customize/override PHP runtime settings whose mode is PHP_INI_PERDIR or PHP_INI_USER. The list of php.ini directives is available here: http://www.php.net/manual/en/ini.list.php

    NOTE: PHP directives with modes set to PHP_INI_SYSTEM cannot be overridden.

So the question remains: where do we place this .user.ini file? On WAWS, the recommended location is the root of the website, which is the wwwroot folder.

• Connect to your website via FTP using FileZilla.
• Under Remote site: expand "/" and then expand the site node.
• Click on wwwroot.
• Right click and select "Create new file". Enter the name of the file as ".user.ini" (without the quotes).
• Right click the .user.ini file and select "View/Edit" to edit the file in Notepad.
• Override the PHP settings here. Save and then close the file.
• You will receive a prompt notifying you that the file content has changed.
• Select the check box "Finish editing and delete local file" and click on Yes.
• Start and stop the website to force the settings to be read immediately.
• Browse the phpinfo page you created earlier to see the changes yourself. (If you prefer to script these steps, see the sketch after this list.)
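For those who would rather script the above FileZilla steps, here is a minimal sketch using Python's ftplib. The FTP host, user name and password are placeholders which you would take from the site's publish profile (discussed later in this blog), and display_errors is just an example of a directive that can be overridden per directory:

import io
from ftplib import FTP

FTP_HOST = "waws-prod-xxx-000.ftp.azurewebsites.windows.net"   # placeholder, from publishUrl
FTP_USER = "sitename\\$username"                               # site name, backslash, user name
FTP_PASS = "<password from the publish profile>"               # placeholder

settings = b"; per-directory overrides for the default php.ini\ndisplay_errors = On\n"

ftp = FTP(FTP_HOST)
ftp.login(FTP_USER, FTP_PASS)
ftp.cwd("/site/wwwroot")                                # root of the web site
ftp.storbinary("STOR .user.ini", io.BytesIO(settings))  # upload the overrides
ftp.quit()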

    I will write further posts on how to take advantage of the .user.ini configuration file. Until then CIAO Smile

    Windows Azure Web Sites: File upload limit for PHP sites hosted on WAWS

    $
    0
    0

In my previous post I discussed the .user.ini file and how it is useful in WAWS.

In today's post I will address increasing the file upload size limit for a PHP site hosted on WAWS using the .user.ini file.

    As mentioned in my previous post, we cannot edit the contents of the php.ini file as it is not permitted in WAWS. However we can add a .user.ini file and override certain settings.

One of the most common requests seen for PHP sites hosted on WAWS is to increase the permitted upload file size. The default limit defined in the PHP runtime (php.ini) is 2 MB. Since we cannot edit php.ini, we need to override this in .user.ini.

    BACKGROUND:

The PHP directive which governs the maximum file size that can be uploaded to WAWS is upload_max_filesize. Its mode is PHP_INI_PERDIR as per the List of php.ini directives, and its default value is "2M", i.e., 2 MB. Below is a snapshot from PHP's online documentation (List of php.ini directives).

    image

We know that we can override this setting in the .user.ini file, as that is allowed per the PHP documentation. This is how we do it.

    NOTE: If you are using custom PHP runtime as described here, then you have complete control over php.ini and can edit the corresponding sections and ignore this blog post.

    Pre-requisites:

    Download and install FileZilla, Click here to download FileZilla. If the download link fails then please visit the FileZilla site to download the file: https://filezilla-project.org/download.php 

    Add a sample php page to your WAWS site with the following one line of code:

    <?php
        phpinfo(); 
    ?>

    Steps to update the .user.ini file via FTP using FileZilla:

      • Go to the Windows Azure Management Portal.
      • Click on Web Sites.
      • Go to the Site (Running PHP) for which you want to increase the upload file size limit.
• Go to the DASHBOARD page and click on "Download the publish profile".
      • Save the file and open it in notepad.exe.
      • The file contains 2 <publishProfile> sections, one for Web Deploy and another for FTP.
      • You can use either method to add a .user.ini file, in this post I will be using FTP.
• Under the <publishProfile> section for FTP make a note of the following values:
        • publishUrl
        • userName
        • userPWD

    image

      • Launch FileZilla.
      • Go to File Menu —>Site Manager.
      • Under Site Manager window click on New Site button and give it a descriptive name.
      • Under the General tab set the values  for the following accordingly
        • Host: Paste the hostname from publishUrl obtained from the publishsettings file above.

    image

        • Logon Type: set this to Normal.
        • User: Paste the userName obtained from the publishsettings file above.
• Password: Paste the userPWD obtained from the publishsettings file above.
      • Click on Connect to connect to the site over FTP.
      • Under Remote site: expand ”/” and then expand the site node.
      • Click on wwwroot
      • Right click and select “Create new file”. enter the name of the file as “.user.ini” (remove quotes).
      • Right click the .user.ini and select “View/Edit” to edit the file in notepad.
      • Add these lines to the file:
      ;Maximum size of the files that can be uploaded
       upload_max_filesize = 16M
      • Save it and close the file.
      • You will receive a prompt notifying the file content has been changed.
      • Select the check box “Finish editing and delete local file” and click on Yes.
      • Start and stop the website to force the settings to be read immediately.
      • Browse the phpinfo page you created earlier.
      • search for the upload_max_filesize.
      • You will find this under the Core section.
    image
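If you would rather verify the change from a script than by eyeballing the page, here is a minimal sketch using Python's standard library; the URL is a placeholder for your own site and the name of the phpinfo page you added earlier:

from urllib.request import urlopen

url = "http://sitename.azurewebsites.net/info.php"    # placeholder phpinfo page
html = urlopen(url).read().decode("utf-8", errors="replace")

if "upload_max_filesize" in html and "16M" in html:
    print("upload_max_filesize override is active")
else:
    print("Override not visible yet - try stopping and starting the site")

Depending on your application, you may also need to raise post_max_size in the same .user.ini file, since the POST request carrying the upload must fit within that limit as well.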

    As seen in the above snippet, the upload_max_filesize directive is reading the values from the .user.ini file.

    Thus, we have successfully overridden the PHP runtime settings.

    Hope this helps. Until then CIAO! Smile

    SSL Diagnostics for IIS 6 (Windows Server 2003)

    $
    0
    0

The SSL Diagnostics tool for Windows Server 2003 is no longer available for download on TechNet. I had to ping my peers who had a copy of the tool which they had downloaded earlier.

Mainstream support for Windows Server 2003 ended in 2010, while extended support will end next year, in 2015. Click here for more details.

Server admins and support folks who still work on IIS 6 will know that SSLDiag is a great tool. I'm sharing it here, in case anyone needs to download it:

    NOTE: This tool is provided "as is" without warranty of any kind. Microsoft and Tool Developer, Tool Supplier further disclaim all implied warranties including but not limited to any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the samples remains with you. In no event shall Microsoft or the Tool Developer, Suppliers be liable for any damages whatsoever (including but not limited to damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the tool, even if Microsoft, the Tool Developer, Suppliers have advised of the possibility of such damages.

Download links are here:

• 32-bit version:

• 64-bit version:

    Microsoft Azure Web Sites: Deploying wordpress to a virtual directory within the azure web site

    $
    0
    0

Microsoft Azure Web Sites allows you to have a virtual directory created within the site. There are many advantages to this. Consider a scenario where your org's site is deployed to the root http://<sitename>.azurewebsites.net. You now want to have separate branches for different departments within your org. For example:

Another example could be where you want to set up a blog within your site. In this article I will demonstrate how to deploy WordPress to a virtual directory called BLOG within my site.

    Here is my current set-up.

    • SiteName: Kaushal
    • HostName: kaushal.azurewebsites.net
    • Application: ASP.NET MVC
    • No databases are currently linked to my site

I will host WordPress under my site so that it is accessible under http://kaushal.azurewebsites.net/blog. Here is what we need to do:

1. Create a virtual directory within my site called BLOG via the Azure portal and link a MySQL database to the site.
2. On my local machine, download and install WordPress via WebMatrix and deploy it to the virtual directory we created above.

    Sounds easy right? Let's go ahead and deploy it.

    Microsoft Azure Portal

    • Logon to Azure portal.
    • Go to the CONFIGURE page for the site and scroll to the virtual applications and directories section at the bottom of the page.
    • Add an entry as seen in the below image:

       

    • Click on SAVE.
    • Now go to the LINKED RESOURCES page and link a MySQL database to your site.

NOTE: Choose an existing MySQL DB or create a new one. Let's say you already have a free MySQL DB associated with your subscription but you want a separate MySQL database for this application; you will have to purchase a plan from ClearDB for this.

     

    • Once, linked. Go to the DASHBOARD page.
    • Under quick glance section a hyperlink called View connection strings will be created.

NOTE: You could retrieve the connection string parameters from the LINKED RESOURCES page too. Click on MANAGE in the bottom pane for the site. This will redirect you to the ClearDB site, which will provide you with the following:

    • Database
    • Data Source
    • User Id
    • Password

     

    • Download & save the publishsettings file for the website by clicking the hyperlink "Download the publish profile" under quick glance section of the DASHBOARD page.

    Local Machine

    • Launch Microsoft WebMatrix
    • Click on New -> App Gallery

    • Select WordPress from the App Gallery and click on Next.

• This will take you through the WordPress setup.
    • Accept the EULA by clicking on "I ACCEPT"

    • Once done it will start downloading the contents to your local machine (C:\Users\<username>\Documents\My Web Sites\WordPress)

• During this process it allows you to configure certain application parameters, as shown below:

    • Once you specify the parameters and click on Next it proceeds with the installation.

    • Once installed, click on "Copy user names and passwords". This will copy the details to clipboard which you could save in a text file.
    • Click on OK.
    • Now click on Publish

    • This will prompt you with another window.
    • Click on Import publish profile and point it to the location where we saved the publishsettings file we downloaded earlier.

    • Once selected, it will auto-populate the parameters from the publishsettings file.
    • We need to modify the following sections as shown below:

       

NOTE: Don't choose FTP as the protocol, as it doesn't allow you to publish databases.

     

    • Click on Validate Connection. Once validated, you will see the confirmation.
    • Click on Save.
    • This will take you to the Publish Compatibility page. Click on Continue.
    • Once compatibility check has been performed. Click on Continue again.
• It will display the list of files that will be deployed to the server.

    • Click on Continue to start the deployment.

• Once publishing is completed you can open the log file and analyze it.
    • Click on the hyperlink as shown below to browse to the site:

    HTH,

    Kaushal

     

     

    WEBINAR: Developer Live Series

    $
    0
    0

    Developer Live is an initiative started by the "Developer Tools" Support team in Microsoft Support where we present sessions on various developer technologies like Visual Studio 2013, BizTalk Server, IIS, ASP.NET, C++, C#, VB.NET, Internet Explorer, Setup and Deployment Projects etc. We have designed these sessions keeping in mind the common issues that developers run into showcasing powerful features and tools that we use on a daily basis that can help boost your development productivity.

    This series will be hosted via LYNC and is available freely to public.

    I will keep updating this post with the upcoming sessions. If you have any questions, you could comment on the post or send an email to either me or Puneet.

    Contact Information:

    Puneet Gupta

    puneetg@microsoft.com

    Kaushal Kumar Panday

    kaushalp@microsoft.com

     

    Webinar Session Details

     

    Session Link: https://join.microsoft.com/meet/puneetg/Y2JYG4PW

    Click on the above link to join this session.

     

    Series of Webinar Sessions

    We have the following sessions coming up

Webinar – Building Real-Time web applications with ASP.NET SignalR

ASP.NET SignalR is a new library for ASP.NET developers that makes developing real-time web functionality easy. SignalR allows bi-directional communication between server and client. Servers can now push content to connected clients instantly as it becomes available. SignalR supports Web Sockets, and falls back to other compatible techniques for older browsers.

Attend an online webinar session where we discuss how SignalR works, how to use it in real-time applications to its full advantage, and how this technology can help resolve the most difficult of application issues regardless of where they exist.

When: May 28, 2014 4:00 PM - 5:30 PM IST

Click here to calculate your local time.

Duration: 60 minutes

What will you learn:

• What is SignalR?
• Why SignalR?
• Supported platforms
• How to get SignalR?
• SignalR Demo
• Troubleshooting SignalR Issues

Manjunath is a Developer Support Engineer with the Microsoft Developer Tools business. He has been working with Microsoft for more than 1.5 years. He works with Microsoft Enterprise customers and is an expert on web technologies. He solves technically challenging and business critical issues for Microsoft customers and partners. Hear from him as he talks about how he makes use of SignalR in real-time applications.

Webinar: Demystifying ASP.NET Identity

If the answer to any of the above questions is YES, then the new ASP.NET Identity is your friend. Join this session where we discuss how to use ASP.NET Identity to its full advantage and how this technology can help with managing security and authorization in the context of a public-facing web application.

Attend an online webinar session where we discuss how ASP.NET Identity works and how you can easily use it to integrate your web applications with external identity providers like Microsoft accounts, Google accounts, Facebook and Twitter authentication etc.

When: June 4, 2014 4:00 PM - 5:30 PM IST

Click here to calculate your local time.

Duration: 60 minutes

What will you learn:

• What is ASP.NET Identity?
• Why ASP.NET Identity?
• Migrating an existing website from SQL Membership to ASP.NET Identity
• Implementing a custom Identity storage provider
• OWIN Integration

Ravindra is a Technical Advisor with the Microsoft Developer Tools business. He has been working with Microsoft for around 1.5 years. He works with Microsoft Enterprise customers and is an expert on web technologies. He solves technically challenging and business critical issues for Microsoft customers and partners. Hear from him as he talks about how to make use of ASP.NET Identity in your web applications.

     


    Microsoft Azure: Services availability by region

    $
    0
    0

While Microsoft Azure continues to enhance the services it provides, it is also expanding its footprint across various geographical regions. Currently Azure provides its services on 4 continents (11 data centers):

    1. Asia
    2. Europe
    3. North America
    4. South America

Web Sites is one of the few services which is available in all the data centers. You may view this information on the Azure site to see where the data centers are located in a specific region.

    http://azure.microsoft.com/en-us/regions/#services

    Microsoft Azure Web Site: Connect to your site via FTP and upload/download files

    $
    0
    0

In this post I will describe how to connect to Azure Web Sites via FTP using FileZilla as the client. Readers are free to choose their own FTP client.

Download the publishsettings file from the Azure Portal:

    • Go to the Windows Azure Management Portal.
    • Click on Web Sites.
    • Go to the Site's DASHBOARD page and click on "Download the publish profile"
    • Save the file and open it in notepad.exe.
    • The file contains 2 <publishProfile> sections. One for Web Deploy and another for FTP.
    • Under the <publishProfile> section for FTP make a note of the following values:
      • publishUrl (hostname only)
      • userName --------------------------> This is the information you are looking for
      • userPWD

Below is a publishsettings file from one of my test sites. Every file has a unique username and password. The user can also reset the password; however, that is beyond the scope of this post and I will discuss it in another post altogether.

<publishData>
       <publishProfile
              profileName="kaushals - Web Deploy"
              publishMethod="MSDeploy"
              publishUrl="kaushals.scm.azurewebsites.net:443"
              msdeploySite="kaushals"
              userName="$kaushals"
              userPWD="nGc9c8RmmRtwqF8hx2Fg6n8osiczuo8sJaZ32C02ZnBMzS627uagERwHM4NE"
              destinationAppUrl="http://kaushals.azurewebsites.net"
              SQLServerDBConnectionString=""
              mySQLDBConnectionString=""
              hostingProviderForumLink=""
              controlPanelLink="http://windows.azure.com">
              <databases/>
       </publishProfile>
       <publishProfile
              profileName="kaushals - FTP"
              publishMethod="FTP"
              publishUrl="ftp://waws-prod-db3-011.ftp.azurewebsites.windows.net/site/wwwroot"
              ftpPassiveMode="True"
              userName="kaushals\$kaushals"
              userPWD="nGc9c8RmmRtwqF8hx2Fg6n8osiczuo8sJaZ32C02ZnBMzS627uagERwHM4NE"
              destinationAppUrl="http://kaushals.azurewebsites.net"
              SQLServerDBConnectionString=""
              mySQLDBConnectionString=""
              hostingProviderForumLink=""
              controlPanelLink="http://windows.azure.com">
              <databases/>
       </publishProfile>
</publishData>

     NOTE: We need only the hostname (waws-prod-db3-011.ftp.azurewebsites.windows.net) from the FTP's publishURL section and not the complete path.

    Connect using FileZilla:

    • Download and install FileZilla, Click here to download FileZilla.
    • Launch FileZilla.
    • Go to File Menu —>Site Manager.
    • Under Site Manager click on New Site button and give it a descriptive name.
    • Under the General tab set the values  for the following accordingly
      • Host: Paste the hostname from publishUrl obtained from the publishsettings file above.

      • Logon Type: set this to Normal.
      • User: Paste the userName obtained from the publishsettings file above.
      • Password: Paste the userPWD obtained from the publishsettings file above.
    • Click on Connect to connect to the site over FTP.
• You will see two folders under the root: Logfiles and Site.

• The Logfiles folder, as the name indicates, stores the output of the various logging options you see under the CONFIGURE management page on the Azure Portal.
• The Site folder is where the application resides. To be more specific, the code resides here: /site/wwwroot
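If you want to do the same thing from a script instead of FileZilla, here is a minimal sketch using Python's ftplib; the host and user name below are the values from the sample publish profile above, and the password is a placeholder:

from ftplib import FTP

host = "waws-prod-db3-011.ftp.azurewebsites.windows.net"   # hostname only, from publishUrl
user = "kaushals\\$kaushals"                               # userName from the profile
password = "<userPWD from the profile>"                    # placeholder

ftp = FTP(host)
ftp.login(user, password)
print(ftp.nlst("/"))               # expect the Logfiles and Site folders
print(ftp.nlst("/site/wwwroot"))   # this is where the application code lives
ftp.quit()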

    Thus, Azure Web Sites gives the user the flexibility to create/upload/download files/folder(s) to their corresponding site via FTP. HTH!

    POODLE Vulnerability: Padding Oracle on Downgraded Legacy Encryption

    $
    0
    0

    INTRODUCTION

POODLE stands for Padding Oracle On Downgraded Legacy Encryption. This vulnerability was discovered by Bodo Möller, Thai Duong & Krzysztof Kotowicz from the Google security team and published here. I'm using the information published in this article as a reference for this blog post. This vulnerability has been listed in NVD; here is the link: CVE-2014-3566. Microsoft has also released a security bulletin on this issue: https://technet.microsoft.com/library/security/3009008.aspx

    As the name suggests, POODLE exploits the design flaws of the legacy cryptographic protocol SSL 3.0.

• Secure Sockets Layer 3.0 or SSL 3.0 is a legacy cryptographic protocol which is a predecessor to TLS. It is considered obsolete and insecure due to many flaws in its design. For more technical details refer to RFC 6101.
• Transport Layer Security or TLS is the successor to SSL. There were several security flaws and design issues in SSL which led to the development of TLS 1.0 (also sometimes referred to as SSL 3.1). TLS 1.0 provides a fall-back mechanism to SSL 3.0 so that clients/servers which are not compatible with TLS 1.0 can use SSL 3.0.

NOTE: SSL 2.0 also exists, which is a predecessor to SSL 3.0. These are the versions of SSL/TLS drafted to date:

1. SSL 2.0
2. SSL 3.0
3. TLS 1.0 (version field 3.1)
4. TLS 1.1 (version field 3.2)
5. TLS 1.2 (version field 3.3)

     

• Protocol Negotiation – The SSL/TLS handshake allows the client and server to negotiate the latest protocol version common to both. If during negotiation they fail to agree on TLS, they fall back to SSL 3.0.

    The POODLE attack exploits the fall-back mechanism to downgrade a secure connection to SSL 3.0 and then breaks the cryptographic security of SSL 3.0 to steal sensitive data such as HTTP cookies, HTTP Authorization header contents etc.

    How does POODLE work?

    To understand this attack knowledge of CBC mode encryption is required. Refer this post of mine to understand how this works: http://blogs.msdn.com/b/kaushal/archive/2011/10/03/taming-the-beast-browser-exploit-against-ssl-tls.aspx

    SSL handshake: http://blogs.msdn.com/b/kaushal/archive/2013/08/03/ssl-handshake-and-https-bindings-on-iis.aspx

Almost all of the clients that exist today support SSL 3.0 and provide a fall-back mechanism from TLS 1.0 to SSL 3.0 (this downgrade exists purely for backward compatibility with legacy clients/servers). The attack uses this downgrade option to mount a man-in-the-middle attack that exploits a vulnerability in SSL 3.0.

Many clients/servers have a fall-back mechanism to work with legacy servers/clients. During the SSL handshake, the client offers the highest protocol version it supports; if this is not supported on the server, the server responds with what is supported and the handshake is re-attempted. However, there is a problem with the fall-back mechanism from TLS 1.0 to SSL 3.0: this fall-back can also be triggered by network glitches or by an attacker who sits between the client and the server.

Once the attacker has succeeded in downgrading the connection to SSL 3.0, he proceeds further. SSL 3.0, like the other protocol versions, supports RC4 stream ciphers and block ciphers in CBC mode. RC4 is considered insecure; refer to this blog post to know why: http://blog.cryptographyengineering.com/2013/03/attack-of-week-rc4-is-kind-of-broken-in.html

    On the security of RC4 in TLS & WPA: http://www.isg.rhul.ac.uk/tls/

Also read this wiki article: http://en.wikipedia.org/wiki/RC4. Refer to the sections "Roos' biases and key reconstruction from permutation" and "Biased outputs of the RC4", which explain the problem in RC4.

Coming to CBC mode encryption in SSL 3.0, it has a serious flaw: its block cipher padding is not deterministic and not covered by the MAC (Message Authentication Code). As a result, the encrypted traffic is exposed to a man-in-the-middle attack which uses a padding oracle to recover the plaintext. This has been well explained in the article I outlined earlier. Below is a snippet of the article from the Google security team explaining the attack in detail:

The most severe problem of CBC encryption in SSL 3.0 is that its block cipher padding is not deterministic, and not covered by the MAC (Message Authentication Code): thus the integrity of padding cannot be fully verified when decrypting. Padding by 1 to L bytes (where L is the block size in bytes) is used to obtain an integral number of blocks before performing blockwise CBC (cipher-block chaining) encryption. The weakness is the easiest to exploit if there's an entire block of padding, which (before encryption) consists of L-1 arbitrary bytes followed by a single byte of value L-1. To process an incoming ciphertext record C1…Cn, also given an initialization vector C0 (where each Ci is one block), the recipient first determines P1…Pn as Pi = Dk(Ci) XOR Ci-1 (where Dk denotes block-cipher decryption using per-connection key K), then checks and removes the padding at the end and finally checks and removes a MAC. Now observe that if there's a full block of padding and an attacker replaces Cn by any earlier ciphertext block Ci from the same encrypted stream, the ciphertext will still be accepted if Dk(Ci) XOR Cn-1 happens to have L-1 as its final byte, but will in all likelihood be rejected otherwise, giving rise to a padding oracle attack [tls-cbc].

    In the web setting, this SSL 3.0 weakness can be exploited by a man-in-the-middle attacker to decrypt "secure" HTTP cookies, using techniques from the BEAST attack. To launch the POODLE attack (Padding Oracle On Downgraded Legacy Encryption), run a JavaScript agent on evil.com (or on http://example.com) to get the victim's browser to send cookie-bearing HTTPS requests to https://example.com, and intercept and modify the SSL records sent by the browser in such a way that there's a non-negligible chance that example.com will accept the modified record. If the modified record is accepted, the attacker can decrypt one byte of the cookies.

    Assume that each block C has 16 bytes, C[0]…C[15]. (Eight-byte blocks can be handled similarly.) Also assume for now that the size of the cookies is known. (Later we will show how to start the attack if it isn't.) The MAC size in SSL 3.0 CBC cipher suites is typically 20 bytes, so below the CBC layer, an encrypted POST request will look as follows:

    POST /path Cookie: name=value… \r\n\r\nbody || 20-byte MAC || padding

    The attacker controls both the request path and the request body, and thus can induce requests such that the following two conditions hold:

    • The padding fills an entire block (encrypted into Cn).
    • The cookies' first as-yet-unknown byte appears as the final byte in an earlier block (encrypted into Ci).

    The attacker then replaces Cn by Ci and forwards this modified SSL record to the server. Usually, the server will reject this record, and the attacker will simply try again with a new request. Occasionally (on average, once in 256 requests), the server will accept the modified record, and the attacker will conclude that DK(Ci)[15] ⊕ Cn-1[15] = 15, and thus that Pi[15] = 15 ⊕ Cn-1[15] ⊕ Ci-1[15]. This reveals the cookies' first previously unknown byte. The attacker proceeds to the next byte by changing the sizes of request path and body simultaneously such that the request size stays the same but the position of the headers is shifted, continuing until it has decrypted as much of the cookies as desired. The expected overall effort is 256 SSL 3.0 requests per byte.

    As the padding hides the exact size of the payload, the cookies' size is not immediately apparent, but inducing requests GET /, GET /A, GET /AA, … allows the attacker to observe at which point the block boundary gets crossed: after at most 16 such requests this will reveal the padding size and thus the size of the cookies.
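
    To make the XOR arithmetic above concrete, here is a minimal, self-contained Python sketch of the byte-recovery step. This is my own illustration, not part of the quoted article: it models the record's block cipher with AES from the 'cryptography' package and a local padding check in place of a live SSL 3.0 server.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    L = 16                                   # block size in bytes
    key = os.urandom(16)
    cipher = Cipher(algorithms.AES(key), modes.ECB())

    def D(block):                            # block-cipher decryption D_K
        d = cipher.decryptor()
        return d.update(block) + d.finalize()

    def E(block):                            # block-cipher encryption E_K
        e = cipher.encryptor()
        return e.update(block) + e.finalize()

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # A CBC-encrypted record whose last block is a full block of padding
    # (15 arbitrary bytes followed by the byte value 15 = L - 1).
    plain_blocks = [os.urandom(16), os.urandom(16), bytes(15) + bytes([L - 1])]
    C = [os.urandom(16)]                     # C[0] = IV, C[1..n] = ciphertext blocks
    for p in plain_blocks:
        C.append(E(xor(p, C[-1])))

    # The attacker copies an earlier block C[i] over the final block C[n].
    i, n = 1, len(plain_blocks)
    last_plain = xor(D(C[i]), C[n - 1])      # what the server would decrypt there

    # The server accepts the forged record only if the final byte equals L - 1
    # (probability ~1/256); acceptance leaks one byte of P_i to the attacker.
    if last_plain[-1] == L - 1:
        recovered = (L - 1) ^ C[n - 1][-1] ^ C[i - 1][-1]
        assert recovered == plain_blocks[i - 1][-1]
        print("recovered P_i[15] =", recovered)
    else:
        print("record rejected; the attacker retries with a fresh request")

    Running this repeatedly plays out the "once in 256 requests" acceptance rate described above; in the real attack each attempt is a fresh cookie-bearing HTTPS request rather than a local computation.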

    The attack is quite similar to BEAST in that it relies on a padding oracle. However, unlike BEAST, there is no workaround or patch that can fix this vulnerability within SSL 3.0 itself.

    Workaround/Solution

    The attack works only after the connection has been downgraded to SSL 3.0. Therefore, simply disabling the SSL 3.0 protocol on your end safeguards you against this attack.

    However, if the client/server supports only SSL 3.0, then there is no workaround. The Google security team has submitted a draft that proposes the TLS_FALLBACK_SCSV mechanism to address the protocol downgrade problem, and has already implemented it in their own products. Read this article for more details: http://tools.ietf.org/html/draft-ietf-tls-downgrade-scsv-00

    So, to summarize: to protect against this attack, one has to disable SSL 3.0 on the server/client. Below I summarize the steps for doing this on IIS 6.0 and higher, Microsoft Azure Web Sites, Internet Explorer, Chrome and Firefox.

    For IIS Web Server (version 6.0 and higher)

    Note that IIS relies on the cryptographic implementation provided by Windows, the SChannel (Secure Channel) component, for secure communication (HTTPS) with its clients.

    1. Click Start, click Run, type regedit, and then click OK.
    2. In Registry Editor, locate the following registry key (if not present you can create it)

      HKey_Local_Machine\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server

    3. On the Edit menu, click Add Value.
    4. In the Data Type list, click DWORD.
    5. In the Value Name box, type Enabled, and then click OK.
    6. Type 00000000 in the Binary Editor to set the value of the new entry to 0.
    7. Click OK. Restart the computer.

    NOTE: the above is a server wide change and will affect all the server side components which use SCHANNEL.

    Another way to do this is:

    • Launch notepad.exe
    • Copy the below in to notepad:

    Windows Registry Editor Version 5.00
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server]
    "Enabled"=dword:00000000

    • Save the file as disablessl3.reg (note that the file extension is .reg).
    • Double-click this file.
    • You will be prompted to confirm that you want to make changes to the registry. Accept the prompt.
    • Restart the computer for the changes to take effect.
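
    If you prefer scripting the change, here is a minimal Python sketch using the standard winreg module that achieves the same result as the .reg file above (my own illustration; run it from an elevated prompt, and a restart is still required afterwards):

    import winreg

    key_path = (r"SYSTEM\CurrentControlSet\Control\SecurityProviders"
                r"\SCHANNEL\Protocols\SSL 3.0\Server")

    # Create the key if it doesn't exist and set Enabled (DWORD) to 0,
    # which disables SSL 3.0 for the server side of SChannel.
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 0)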

    For Microsoft Azure Web Sites

    SSL 3.0 has been disabled by default for all sites running on Azure Web Sites. The product group has published a blog post on how to disable it for Azure VMs and Cloud Services:

    http://azure.microsoft.com/blog/2014/10/19/how-to-disable-ssl-3-0-in-azure-websites-roles-and-virtual-machines/

    For Internet Explorer

    Similar to IIS, IE also uses the SChannel component of Windows for cryptography and encryption; however, it uses the client side of SChannel. To disable SSL 3.0 in IE, follow the instructions below:

    1. Launch Internet Explorer.
    2. Go to Tools --> Internet Options --> Advanced tab.
    3. Scroll down to the Security section at the bottom, where you will find the protocol list for IE. Uncheck Use SSL 3.0.

    4. Click on OK and restart the browser for the changes to take effect.

    For Google Chrome

    Google Chrome will be removing support for SSL 3.0 completely in its upcoming versions. Read this for more details: http://googleonlinesecurity.blogspot.com.au/2014/10/this-poodle-bites-exploiting-ssl-30.html

    1. Right-click Chrome's desktop shortcut and click Properties.
    2. At the end of the Target field, append " --ssl-version-min=tls1" (with a leading space, but without the quotes).
    3. Click Apply and then click OK.

    4. Restart Chrome completely via the menu or with [Ctrl + Shift + Q]

    For Mozilla Firefox

    As per the Mozilla security blog, SSL 3.0 will be disabled by default in Firefox 34, which will be released on Nov 25th, 2014.

    1. For users who cannot wait, Mozilla has released the SSL Version Control Firefox extension to disable SSLv3 immediately.
    2. Click on the above link and then click the Add to Firefox button.

    3. This doesn't require a restart.

    The above extension changes security.tls.version.min to 1. There are two preferences that govern the SSL/TLS protocol versions in Firefox:

    1. security.tls.version.min specifies the minimum supported protocol version.
    2. security.tls.version.max specifies the maximum supported protocol version.

    Both security.tls.version.min and security.tls.version.max currently range from 0 to 3. If security.tls.version.min and security.tls.version.max are equal, only one protocol version will be supported. The behavior is undefined if security.tls.version.min is larger than the security.tls.version.max value.

    Value    Protocol
    0        SSL 3.0
    1        TLS 1.0
    2        TLS 1.1
    3        TLS 1.2

    You can access the above mentioned keys in Firefox by typing about:config in the address bar. Enter TLS in the search bar to retrieve TLS related settings.

    NOTE: Please refer Google, Firefox and other related browser documentation for detailed information on the corresponding browsers.

     

    Hope this article helps. Please leave a comment if you find anything that needs correction.

    WinDBG - Modifying icons to identify 32/64 bit debuggers


    This article is not very technical in nature. Its purpose is to simplify an administrative task. Sharing this in case someone finds it useful.

    I use WinDBG for debugging memory dumps on a daily basis. It is a helpful tool in diagnosing performance issues of an application.

    It comes in 2 flavors, 32 bit & 64 bit.

    One thing I always found frustrating: when debugging 32-bit and 64-bit dumps at the same time, it was difficult to switch between the windows because both processes use the same icon.

    Here is what the default WinDBG icon looks like:

    So, in order to simplify this task we need to have separate icons for 32 bit & 64 bit WinDBG.

    NOTE: Icon files have .ico as the file extension. Below is the MSDN article on how to create icons for Windows XP; I believe most of it remains applicable for Windows 7/8.

    https://msdn.microsoft.com/en-us/library/ms997636.aspx

    Frankly, I didn't follow the above article. I used a .png image and created 2 copies where one was tagged with x64 & the other with x32.

    Here is the image I used:

    You could download it from here: https://cdn2.iconfinder.com/data/icons/computer-hardware-3/170/Layer_11-01-512.png

     

    STEP 1: Create the image files (.PNG)

     

    In order to maintain the transparency of the image, we need an image editor that supports it. Microsoft Paint does not retain transparency when editing .PNG images, so there was no point in using it.

    There are many tools which do this for us. Adobe Photoshop is one of them. For my purpose, I used an online tool called FotoFlexer.

    I tagged the images as x64 & x32 as shown below:

            

     

    STEP 2: Convert the images to ICON files (.ICO) 

     

    Again there are many online tools which can serve our purpose. I used http://icoconvert.com/

    You need to upload the image and then click Convert. This takes a couple of seconds (it took 3-5s for me). Once done, click the hyperlink below it to download the ICON file.

    I converted both of the images above to icons; now I could use them for my purpose.
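
    If you'd rather not use an online converter, a small Python sketch with the Pillow library can do the conversion locally (an extra dependency, not part of my original workflow; the file names are just examples):

    from PIL import Image

    # Convert the tagged .png files to multi-resolution .ico files,
    # preserving the transparency of the source images.
    for name in ("windbg_x32", "windbg_x64"):
        img = Image.open(name + ".png").convert("RGBA")
        img.save(name + ".ico", format="ICO",
                 sizes=[(16, 16), (32, 32), (48, 48), (256, 256)])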

     

    STEP 3: Change the icon of WinDBG

     

    First thing I did was to create a shortcut of WinDBG on the Desktop.

    • Go to the folder where you have installed WinDBG.

    • Right-click WinDBG and select Send to --> Desktop (create shortcut) as shown below

    • We need to do this for both the 32 bit and the 64 bit version of the debugger.
    • Now we will have 2 icons on the desktop (one for the 32 bit and another for 64 bit) as shown below:

    • Right click one of the icons & select Properties.
    • Click on Change Icon…

    • Click on Browse… & navigate to the folder where the files are saved.
    • Select the file, click Apply, and then click OK.
    • Repeat the steps for the other shortcut.
    • The icons on the Desktop now reflect the changes we made

    • Now double click the shortcuts and launch them.
    • Right click on the icon in taskbar and select "Pin this program to the taskbar"

    • Once pinned, the next time we launch multiple debugger windows it will be easier to identify which one corresponds to which bitness:

    This is a cosmetic change to help identify whether an icon corresponds to the 32-bit or the 64-bit debugger. This is how I approached the problem; there might be other ways too. If you are aware of any, feel free to add a comment below.

    Hope someone finds this useful. I have uploaded the icons to OneDrive. You could download it from here

    https://onedrive.live.com/redir?resid=619140cabec4294a%2158320

    Client Certificate Authentication


    SSL/TLS certificates are commonly used for both encryption and identification of the parties. In this blog post, I'll be describing Client Certificate Authentication in brief.

    Client Certificate Authentication is a mutual certificate based authentication, where the client provides its Client Certificate to the Server to prove its identity. This happens as a part of the SSL Handshake (it is optional).

    Before we proceed further, we need to understand

    • What is a client certificate?
    • What is authentication & why do we need it?

    Client Certificates

    A Client Certificate is a digital certificate that conforms to the X.509 standard. It is used by client systems to prove their identity to the remote server. Here is a simple way to identify whether a certificate is a client certificate or not (a programmatic check follows the list below):

    • In the Details tab, the certificate's intended purpose has the following text:
      "Proves your identity to a remote computer"
    • Verify that the Enhanced Key Usage field of the certificate has the OID set to 1.3.6.1.5.5.7.3.2.
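
    For the programmatic check, here is a minimal sketch using Python's 'cryptography' package (my own illustration; the file name is hypothetical) that tests whether a PEM-encoded certificate carries the Client Authentication EKU:

    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID

    def is_client_certificate(pem_bytes):
        cert = x509.load_pem_x509_certificate(pem_bytes)
        try:
            eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
        except x509.ExtensionNotFound:
            return False
        # OID 1.3.6.1.5.5.7.3.2 (id-kp-clientAuth) marks a client certificate.
        return ExtendedKeyUsageOID.CLIENT_AUTH in eku

    with open("client.pem", "rb") as f:      # hypothetical file name
        print(is_client_certificate(f.read()))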

    Below is a screenshot of a sample Client Certificate:

    Refer to RFC 5246 for more details.

    Authentication & Authorization

    In Computer Science, Authentication is a mechanism used to prove the identity of the parties involved in a communication. It verifies that "you are who you say you are". Not to be confused with Authorization, which is to verify that "you are permitted to do what you are trying to do".

    There are several types of authentication. Here is a list of authentication methods widely used with IIS (in no specific order):

    • Anonymous Authentication (No Authentication)
    • Basic Authentication
    • Client Certificate Authentication
    • Digest Authentication
    • Forms Authentication
    • NTLM
    • Kerberos
    • Smart Card Authentication

    NOTE: As the SSL Handshake happens before any HTTP communication, Client Certificate Authentication takes precedence over any other type of authentication that takes place over the HTTP protocol.

    Kerberos, Client Certificate Authentication and Smart Card Authentication are examples of mutual authentication mechanisms. Authentication is typically used for access control, where you want to restrict access to known users. Authorization, on the other hand, determines the access level/privileges granted to the users.

    On Windows, a thread is the basic unit of execution. Any task performed by the user is executed by the thread under the context of a specific account/identity. Authentication is one of the ways used to determine the thread identity, whose privileges will be used by the thread for execution.

    Client Certificate Authentication in SSL/TLS Handshake

    I have already discussed SSL Handshake in one of my blog posts. Browse to:
    http://blogs.msdn.com/b/kaushal/archive/2013/08/03/ssl-handshake-and-https-bindings-on-iis.aspx

    Here is a screenshot describing the SSL/TLS Handshake:

    • The client sends a CLIENT HELLO as described in the above image.
    • Upon receiving the CLIENT HELLO, if the server is configured for Client Certificate Authentication, it sends a list of Distinguished CA names & a Client Certificate Request to the client as part of the SERVER HELLO, apart from the other details depicted above.
    • Upon receiving the SERVER HELLO containing the Client Certificate Request & the list of Distinguished CA names, the client performs the following steps:
      • The client uses the CA list available in the SERVER HELLO to determine the mutually trusted CA certificates.
      • The client then determines the client certificates that have been issued by the mutually trusted Certification Authorities.
      • The client then presents this certificate list to the user so that they can select a certificate to be sent to the server.

    NOTE:

    • On the client, the client certificates must have a private key. If absent, the certificate is ignored.
    • If the server doesn't provide the list of Distinguished CA Names in the SERVER HELLO, the client will present the user with all the client certificates that it has access to.
    • Upon selection, the client responds with a
      • ClientKeyExchange message, which contains the Pre-Master secret.
      • Certificate message, which contains the client certificate (it doesn't contain the private key).
      • CertificateVerify message, which is used to provide explicit verification of the client certificate. This message is sent only if the Certificate message was sent. The client is authenticated by using its private key to sign a hash of all the messages up to this point. The recipient verifies the signature using the public key of the signer, thus ensuring it was signed with the client's private key. Refer to RFC 5246 for more details.
    • After this, the client & server use the random numbers and the Pre-Master secret to generate the symmetric (Master) keys which will be used for encrypting & decrypting further communication.
    • Both respond with a ChangeCipherSpec message indicating that they have finished the process.
    • The SSL handshake is now complete and both parties own a copy of the master key, which can be used for encryption and decryption (a minimal client-side sketch follows this list).
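
    As an illustration of the client side of this exchange, here is a minimal Python sketch (the host name, port and file names are assumptions for illustration; the server must of course be configured to request a client certificate):

    import socket, ssl

    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    # The private key never leaves the client; only the certificate is sent in
    # the Certificate message, and the key signs the CertificateVerify message.
    context.load_cert_chain(certfile="client.pem", keyfile="client.key")

    with socket.create_connection(("example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            print("negotiated:", tls.version(), tls.cipher())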

    Design Problems

    We know that the server sends the list of Distinguished CA names as part of the SERVER HELLO. The RFC never mandates that this list contain Root CA or Intermediate CA certificates. Here is a snippet of this section as defined in RFC 5246:

    certificate_authorities

    A list of the distinguished names [X501] of acceptable
    certificate_authorities, represented in DER-encoded format. These
    distinguished names may specify a desired distinguished name for a
    root CA or for a subordinate CA; thus, this message can be used to
    describe known roots as well as a desired authorization space. If
    the certificate_authorities list is empty, then the client MAY
    send any certificate of the appropriate ClientCertificateType,
    unless there is some external arrangement to the contrary

    Refer the below blog post for information on Root & Intermediate CA certificates:
    http://blogs.msdn.com/b/kaushal/archive/2013/01/10/self-signed-root-ca-and-intermediate-ca-certificates.aspx

    This can lead to a problem where some systems require Root CAs while others require Intermediate CAs to be present in the list sent in the SERVER HELLO. This makes the communicating parties incompatible on certain occasions.

    Both implementations are debatable. On one hand, the list sent by the server cannot exceed a certain limit (on Windows the limit is 12,288 bytes); if exceeded, the authentication will fail. The list of Intermediate CAs is typically two to three times (or more) larger than the list of Root CAs, which is one of the reasons why some systems send the Root CAs in the list of Distinguished CA Names. On the other hand, the Intermediate CA names are readily available in the client certificate provided by the user, which makes certificate chain validation easier, so some systems prefer that approach instead. Both have their own merits.

    One example I have personally encountered is Apple's Safari browser communicating with a site hosted on IIS 7 or higher that requires a client certificate for authentication. Safari expects a list of Intermediate CAs in the SERVER HELLO, whereas IIS sends only Root CAs in that list. As a result, authentication fails because the client is unable to provide a client certificate to the server.

    A solution to the above problem is to configure IIS not to send the CA list in the SERVER HELLO at all. To achieve this, follow Method 3 described in the support article below:
    https://support.microsoft.com/en-us/kb/933430/

    The above article requires you to add a registry value, SendTrustedIssuerList, set to 0.

    As a result, the server doesn't send any list to the client but still requires it to present a client certificate. The client will present the complete list of client certificates to choose from, and the handshake proceeds as expected. A minimal sketch of this registry change follows.
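
    For reference, here is the same change expressed as a short Python sketch using winreg (mirroring the earlier SSL 3.0 example; run it elevated and restart for it to take effect):

    import winreg

    key_path = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL"

    # Stop SChannel from sending the trusted issuer list in the SERVER HELLO.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "SendTrustedIssuerList", 0, winreg.REG_DWORD, 0)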

    NOTE: In Windows Server 2012 and Windows 8, changes were made to the underlying authentication process so that:

    • CTL-based trusted issuer list management is no longer supported.
    • The Trusted Issuer List is no longer sent by default: the default value of the SendTrustedIssuerList registry key is now 0 instead of 1.
    • Compatibility to previous versions of Windows operating systems is preserved.

    Further read: https://technet.microsoft.com/en-in/library/hh831771.aspx
