Windows Azure PowerShell Cmdlets

The Windows Azure PowerShell Cmdlets 2.2.2 enable you to browse, configure, and manage Windows Azure Compute and Storage services directly from PowerShell. These tools can be helpful when developing and testing applications that use Windows Azure services. For instance, you can easily script the deployment and upgrade of Windows Azure applications, change the configuration for a role, and set and manage your diagnostic configuration and diagnostic data.

You can view a full list of the PowerShell cmdlets here.

The cmdlets are based on the Windows Azure Management and Diagnostics APIs, and the full source code is available through this CodePlex project so you can better understand the underlying APIs. Finally, the documentation included with the download package shows how to use the cmdlets to perform a new Windows Azure deployment and retrieve information about a hosted service.

(source : http://wappowershell.codeplex.com/ )

Microsoft Launches Company’s First-Ever Direct Startup Accelerator

Today, Microsoft is launching the first startup accelerator* in the company’s history in an effort to encourage more entrepreneurs to build their cloud-based applications using Windows Azure. The program will take place at the Microsoft Israel Research and Development Center, and is a part of the Israel R&D Center’s outreach program Think Next as well as the Microsoft BizSpark program for startups.

Like most accelerators, Microsoft will provide the typical accouterments, including free office space, coaching, mentorship, legal assistance and more, but in this case, it’s specifically after companies building cloud-based startups. The companies will be provided with free access to Windows Azure, but will not receive seed funding.

According to Zack Weisfeld, Sr. Director of Strategy and Business Development at Microsoft’s Israel Development Center, the decision to launch the company’s first accelerator in Israel had to do with the center’s strategic location for Microsoft, and is part of its ongoing efforts to bring startup culture back to the company.

Israel has a very active startup community, Weisfeld explains. “We have 4,900 startups in Israel today, and the third largest V.C. spending in the world after Silicon Valley and New England,” he says. He also notes that out of the startups participating in Microsoft’s BizSpark One, a sort of “best of breed” selection from the larger BizSpark program, 25% of the companies are located in Israel.

“Because we have such an innovation-driven and startup-driven R&D center, we basically came with a proposal to basically change the way Microsoft deals with entrepreneurs,” Weisfeld says of the program’s beginnings. “Part of that proposal was to start, for the first time in the world for Microsoft, our own startup accelerator.”

The new accelerator aims to tap into the region’s activity, by encouraging startups to launch using Microsoft software.

The Windows Azure Accelerator, as it’s called, will be the first of many themed accelerators Microsoft plans to launch in the same space. The program will also serve as a blueprint for future accelerators Microsoft plans to launch globally, Weisfeld says. And those global programs will arrive sooner than later, it seems.

“I don’t think we’ll wait a year,” Weisfeld says, “we’re going to learn a lot through the first class…we’ll be ready to move as soon as possible to other places.”

During the four-month, biannual program, entrepreneurs will have access to 850 square meters of newly renovated, shared workspace in the center, which also includes meeting rooms, a usability lab, and a place to record their demo videos. Over 30 mentors from the industry (names TBA) have been lined up to provide leadership, coaching and support. These include startup CEOs, investors, marketing experts and more.

The startups will also receive all the software currently available through Microsoft BizSpark, including, of course, access to Windows Azure. Companies will receive two years (up to $60,000) of Windows Azure, Weisfeld tells us. This is the same offering available through the BizSpark Plus program, which is now offered to TechStars and all members of its Global Accelerator Network. This new program, too, will function as a member of that network, we’re told.

Startups in the program may choose to use open source software, Weisfeld says, but Microsoft “would like” them to use Azure. (Read: should use Azure).

Although the new program is being positioned as an accelerator, unlike many of today’s incubators, there isn’t seed funding involved, nor will Microsoft take an equity stake in the participating companies. However, there will be something else of value offered: access to Microsoft’s partners who may serve as potential customers.

“We know it’s so critical, even in early stages, for startups to have access to people who may use whatever they produce or can start working as beta customers.” Weisfeld says. “We’re going to work both locally and internationally [on that],” he adds.

Microsoft will reach out to its own customers with whom it already has relationships, he says, but will also do grassroots-level work when necessary.

The program will culminate with two demo days, one in Israel and a second in the U.S. The first class, which will include 10 companies, will be announced April 22nd at Think Next 2012, which is sort of like a TechCrunch Disrupt-style event for Israel.

As for what’s in it for Microsoft? Besides the obvious (Azure adoption), it’s about an attempt to reinvigorate Microsoft culture.

“We believe that for the Israel R&D center, it’s going to make the center much better. We’re going to have fast-moving, agile startups that want to change the world working closely with our engineers,” explains Weisfeld. “It’s going to make us much better by working with them.”

Interested startup founders can apply to the program here: accelerato.rs/azure/apply

* UPDATE: To be clear, it is Microsoft itself, and those involved with The Windows Azure accelerator, who are positioning the program as the “first-ever” accelerator run by the company. Microsoft, however, has involvement with TechStars, including with its Kinect Accelerator, it should be noted. In that case, though, TechStars takes its usual 6% equity stake, which may be the reason for the distinction. Also, the Azure accelerator will be run by two Microsoft employees, who are in the process of being hired now from the outside startup community.

Here’s further explanation, per Weisfeld:

“Windows Azure Accelerator is the first Microsoft accelerator. The one in Seattle is a TechStars accelerator. Startups get funded and equity by TechStars. The Kinect accelerator is operated by TechStars. It is not a Microsoft accelerator.

WAA is the only by Microsoft, at Microsoft, run by our employees. This is certainly our first direct accelerator. The other one is a TechStars driven accelerator.”

(source : http://techcrunch.com/2012/03/13/to-boost-windows-azure-microsoft-launches-companys-first-ever-startup-accelerator/ )

How to: Use Windows Azure to Create a MicroFinance App

We recently embarked on a proof of concept project to create a line-of-business web application using some of the latest technologies, such as Windows Azure and HTML5, to see if we could combine these technologies with great design to produce an awesome experience for the end user. You can see the application itself, MicroFinance, on our web site at http://labs.mandogroup.com.

On the road to creating our MicroFinance application, there were some key technical requirements that we were aiming to achieve:

  • The core data behind the application should be consumable on several platforms including Windows Phone.
  • The application should be scalable to be able to cope with hypothetical growth in demands.
  • The application should be reliable and should be available whenever required.

We wanted to see if Windows Azure could help us achieve the above objectives and how easy it would be to work with the Azure platform.

With this in mind we then set about splitting the planned application into several discrete components:

  • The raw data behind the application (i.e. customer details, task lists)
  • The HTML5 web application itself
  • A service to allow the data to be consumed where needed

Now that we had the above separation we needed to decide how each of them could be implemented using the Azure platform.

The Data

For the data behind the application and how it is stored / accessed, we had a couple of options.

  • The first was SQL Azure, a cloud-based implementation of the familiar SQL Server database we all know and love. Using SQL Azure would allow us to use large data sets and perform heavy data-processing on the data held, as well as define strong relationships between our data entities using out-of-the-box functionality such as SQL joins.
  • Our second option was to use Azure Table Storage, which provides a persistent and durable storage medium, but without many of the features (and complexities) of SQL Azure.

We decided to use Azure Table Storage because the initial version of the application does not store hugely complex or massive amounts of data, meaning we could easily implement Table Storage and perform any joins on our data using LINQ once we had retrieved it from storage. We felt that SQL Azure was overkill for our requirements, but that's not to say it isn't a good solution for many scenarios.
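
To give a feel for this approach, here is a minimal sketch assuming the SDK 1.x StorageClient library; the CustomerEntity type and its properties are illustrative, not our actual schema:

using System;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// A table entity is just a class deriving from TableServiceEntity.
public class CustomerEntity : TableServiceEntity
{
    public CustomerEntity() { }
    public CustomerEntity(string partitionKey, string rowKey)
        : base(partitionKey, rowKey) { }

    public string Name { get; set; }
}

// ...

var account = CloudStorageAccount.DevelopmentStorageAccount;
var tableClient = account.CreateCloudTableClient();
tableClient.CreateTableIfNotExist("Customers");

var context = tableClient.GetDataServiceContext();

// Insert a customer.
context.AddObject("Customers", new CustomerEntity("UK", "C001") { Name = "Jane" });
context.SaveChanges();

// Once the data is back on the client, LINQ handles filtering and joins.
var customers = context.CreateQuery<CustomerEntity>("Customers")
                       .Where(c => c.PartitionKey == "UK")
                       .ToList();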

MSDN Magazine featured an article by Joseph Fultz which provides a detailed comparison of SQL Azure and Table Storage and I would highly recommend you read it if you are facing a similar decision.

The Web Application

The main web application which would be the primary method of accessing and managing the data for the end user needed to be responsive, reliable and scalable. As well as the core requirements, I wanted to be able to work with the technologies and tools I have always worked with (ASP.NET, C#, Visual Studio etc.). Happily for me, there is a great Windows Azure SDK Toolkit available with tools for Visual Studio, which makes creating applications and services that run in Azure an absolute snap.

Using the built-in project templates that come with the Azure SDK and tools, I was quickly able to create an Azure Web Role (used for hosting front-end applications behind IIS) within which we could create the web application itself. Once the Azure project is created, it becomes business as usual from a development perspective, with the same familiar ASP.NET pages and techniques you would always use. This made it very easy for us to start writing the application, even though it was to be hosted on a different platform than we would normally use.

The big advantage to hosting our application in Azure is the ability to scale at short notice. Should demand increase, you can simply log into your Management Portal and increase the number of instances of your application that are available to cope with the increased load. Should the demand drop back again, you can simply reduce the number of instances running back to a more suitable level. The notion of this rapid and responsive ‘spinning up’ and ‘spinning down’ within Azure is probably my favourite feature and the reason that Azure first grabbed my attention.
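
As a concrete example, the instance count is just a value in the ServiceConfiguration.cscfg file (the role name below is illustrative), so scaling out is a one-attribute change:

<Role name="MicroFinance.Web">
  <Instances count="4" />
</Role>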

The Service

We needed to expose our data so that it could be consumed by other platforms as well as our web application. Initially this was only to be our Windows Phone 7 application (which we will talk about in more detail later in this series), but it is likely that other platforms may wish to access this data in the future. For this reason we decided to implement a WCF service, giving us the required flexibility to pass our data out as needed in an efficient manner. The WCF service was configured with a couple of different endpoints, allowing us to expose data both to the Windows Phone application, returning the complex types representing our data (such as customers and tasks), and as plain JSON in response to RESTful requests from platforms such as JavaScript.
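
A rough sketch of how such a dual-endpoint WCF service might be configured; the service and contract names here are hypothetical, not the actual MicroFinance ones:

<system.serviceModel>
  <services>
    <service name="MicroFinance.Services.DataService">
      <!-- SOAP endpoint returning complex types for the Windows Phone client -->
      <endpoint address="soap" binding="basicHttpBinding"
                contract="MicroFinance.Services.IDataService" />
      <!-- RESTful JSON endpoint for JavaScript callers -->
      <endpoint address="json" binding="webHttpBinding"
                behaviorConfiguration="jsonBehavior"
                contract="MicroFinance.Services.IDataService" />
    </service>
  </services>
  <behaviors>
    <endpointBehaviors>
      <behavior name="jsonBehavior">
        <webHttp defaultOutgoingResponseFormat="Json" />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>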

Testing

From a testing perspective, I have only good things to say about the SDK and the tools. There is a set of very capable emulators that work with zero configuration within Visual Studio. This allowed us to run the application in a 'cloud-like' environment and ensure that the components were working together correctly. This was especially true of the storage emulator, which allowed us to test the code that creates and accesses Azure Tables.
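
Pointing the application at the emulator is just a connection-string change in the service configuration (the setting name below is illustrative):

<Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />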

Deployment

Deployment was my only real niggle with Azure development. There were a couple of routes available to get the solution into Azure. The first was to package the solution within Visual Studio, which produced two package files, and then to log into the Azure Management Portal and manually set up and upload these packages. The second, more favourable option is an automated deployment from within Visual Studio; however, this option is only available to those with top-end Visual Studio editions (Ultimate, I believe) and still requires some manual configuration within your Management Portal. For many people, therefore, I believe the deployment procedure could be streamlined considerably. I also encountered an issue where my application was still configured to use local development storage while I was attempting to deploy, causing the operation to fail. Unfortunately, the error message was incredibly vague and unhelpful, which could be improved.

Summary and What’s Next?

When all is said and done, developing an application to be hosted within Azure was on the whole a straightforward and positive experience. I think the small issues I encountered when deploying would not be enough to prevent me from recommending Azure to others.

At this point we had a working core application running in the cloud, with the ability to serve data to a variety of platforms and scale when necessary. Next we needed to make it usable, good-looking, and generally more exciting for the end user; this will be discussed in a dedicated series of blog posts on MSDN in the coming weeks.

 

Author Bio

Gary Pretty
Deputy Head of Programming, Mando Group
http://www.mandogroup.com

Gary Pretty is the Deputy Head of Programming at Mando Group, a leading digital agency specialising in creating enterprise web sites and RIAs. Gary works with technologies across the Microsoft stack, including Windows Azure, SharePoint, ASP.NET and Windows Phone. Gary can be found on Twitter @GaryPretty.

 

(Source : http://blogs.msdn.com/b/ukmsdn/archive/2012/01/20/using-windows-azure-to-create-microfinance.aspx )

Sending emails from Windows Azure using Exchange Online web services

There have already been several blog posts (e.g. 1, 2, 3) about why and how to send emails from an Azure-hosted application.

I just wanted to summarize the essence and show some code on how to send email from Azure code via Exchange Online web services if you have an Exchange Online email subscription.

Turns out I was able to register a nice domain for my Exchange Online trial: windowsazure.emea.microsoftonline.com

So, here is the essential code snippet to send an email via EWS (Exchange Web Services) by leveraging the EWS Managed API 1.1 (get the download here):

// EWS Managed API 1.1
using Microsoft.Exchange.WebServices.Data;

// Point the service at the Exchange Online EWS endpoint for your region
// and authenticate with your Exchange Online credentials.
var service = new ExchangeService(ExchangeVersion.Exchange2007_SP1);
service.Url = new Uri(
  "https://red002.mail.emea.microsoftonline.com/ews/exchange.asmx");
service.Credentials = new WebCredentials(userName, password);

// Compose the message.
var message = new EmailMessage(service);
message.ToRecipients.Add("joe@doe.com");
message.From = new EmailAddress(
  "foobarbaz@windowsazure.emea.microsoftonline.com");
message.Subject = "Hello EMail - from Windows Azure";
message.Body = new MessageBody(BodyType.HTML, "Email from da cloud :)");

// Send the message and save a copy to the sender's Sent Items folder.
message.SendAndSaveCopy();

In the code above I am sending the email via the EWS host for Europe – you may need different URLs for your location:

  • Asia Pacific (APAC): https://red003.mail.apac.microsoftonline.com
  • Europe, the Middle East, and Africa (EMEA): https://red002.mail.emea.microsoftonline.com
  • North America: https://red001.mail.microsoftonline.com

Hope this helps.

(source : http://weblogs.thinktecture.com/cweyer/2010/12/sending-emails-from-windows-azure-using-exchange-online-web-services-bpos-for-the-search-engines.html )

Ten Basic Troubleshooting Tips for Windows Azure

Over the last few posts I've talked a LOT about configuring diagnostics. Much of that comes not because I love pretty graphs, but because I end up working with customers who are troubleshooting problems with applications running on Windows Azure.

Here is MY list of recommendations to help things run a little smoother:

  1. Keep your diagnostics account separate from your production account. This will help with performance of both production and diagnostics since they won’t be competing for the same storage account.
  2. Make sure your storage account is in the same data center as your compute. I know. Just saying.
  3. Make sure you collect the right set of performance counters AND check that you are actually collecting data. Make sure you are collecting the .NET 4.0 counters for ASP.NET where applicable.
  4. Use either the .wadcfg or PowerShell scripts I've talked about here to configure diagnostics (a sample .wadcfg is sketched after this list). Hard-coding the configuration will overwrite any changes you make when an instance restarts.
  5. Knowing and understanding your baseline workload is important. You should look at your performance on a regular basis, and over a period of time.
  6. To troubleshoot, you can enable RDP and perform an upgrade of your service. You can then RDP into specific instances to troubleshoot.
  7. When you are working with Microsoft’s product support, try not to delete old deployments. Once you do it makes finding a root cause more difficult. You can always VIP swap them out and leave them running while they troubleshoot.
  8. Having more instances running means more people can look at the problem at the same time.
  9. Check the status of the service at http://www.microsoft.com/windowsazure/support/status/servicedashboard.aspx
  10. Invest in a diagnostics data viewing tool, such as Cerebrata, or grab a free trial of ManageAxis by following the rabbit hole from the Cloud Cover Show.
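
For reference, a minimal diagnostics.wadcfg might look something like the sketch below; the exact counters, quotas, and transfer periods are assumptions you should tune for your own workload:

<?xml version="1.0" encoding="utf-8"?>
<DiagnosticMonitorConfiguration
    xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
    configurationChangePollInterval="PT1M"
    overallQuotaInMB="4096">
  <PerformanceCounters bufferQuotaInMB="512" scheduledTransferPeriod="PT1M">
    <PerformanceCounterConfiguration
        counterSpecifier="\Processor(_Total)\% Processor Time" sampleRate="PT30S" />
    <PerformanceCounterConfiguration
        counterSpecifier="\ASP.NET Applications(__Total__)\Requests/Sec" sampleRate="PT30S" />
  </PerformanceCounters>
</DiagnosticMonitorConfiguration>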

 

(source : http://www.davidaiken.com/2011/11/15/ten-basic-troubleshooting-tips-for-windows-azure/ )

Windows Azure Security Best Practices – Tips, Tools, Coding Best Practices

While writing this series of posts, I kept running into more best practices. So here are a few more items you should consider in securing your Windows Azure application.

Here are some tools, coding tips, and best practices:

  • Running on the Operating System
    • Getting the latest security patches
    • If you can, run in partial trust
  • Error handling
    • How to implement your retry logic
    • Logging errors in Windows Azure
  • Access to Azure Storage
    • Access to Blobs
    • Storing your connection string
    • Gatekeeper pattern
    • Rotating your storage keys
    • Crypto services for your data security

Running on the Operating System

Get the Latest Security Patches

When creating a new application with Visual Studio the default behavior is to set the Guest OS version like this in the ServiceConfiguration.cscfg file:

osFamily="1" osVersion="*"

This is good because you will get automatic updates, which is one of the key benefits of PaaS. It is less than optimal because you are not using the latest OS. In order to use the latest OS version (Windows Server 2008 R2), the setting should look like this:

osFamily="2" osVersion="*"

Many customers unfortunately decide to lock to a particular version of the OS in the hope of increasing uptime by avoiding guest OS updates. This is only a reasonable strategy for enterprise customers that systematically test each update in staging and then schedule a VIP Swap of their mission-critical application running in production. For everyone else who does not test each guest OS update, not configuring automatic updates puts your Windows Azure application at risk.

— from Troubleshooting Best Practices for Developing Windows Azure Applications

If You Can, Run in Partial Trust

By default, roles deployed to Windows Azure run under full trust. You need full trust if you are invoking non-.NET code, using .NET libraries that require full trust, or doing anything that requires admin rights. Restricting your code to run in partial trust means that anyone who gains access to your code is more limited in what they can do.

If your web application gets compromised in some way, using partial trust will limit your attacker in the amount of damage he can do. For example, a malicious attacker couldn’t modify any of your ASP.NET pages on disk by default, or change any of the system binaries.

Because the user account is not an administrator on the virtual machine, using partial trust adds even further restrictions than those imposed by Windows. This trust level is enforced by .NET’s Code Access Security (CAS) support.

Partial trust is similar to the “medium trust” level in .NET. Access is granted only to certain resources and operations. In general, your code is allowed only to connect to external IP addresses over TCP, and is limited to accessing files and folders only in its “local store,” as opposed to any location on the system. Any libraries that your code uses must either work in partial trust or be specially marked with an “allow partially trusted callers” attribute.

You can explicitly configure the trust level for a role within the service definition file. The service definition schema provides an enableNativeCodeExecution attribute on the WebRole element and the WorkerRole element. To run your role under partial trust, add the enableNativeCodeExecution attribute on the WebRole or WorkerRole element and set it to false.
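
In a ServiceDefinition.csdef that might look like this (the role name is illustrative):

<WebRole name="MyWebRole" enableNativeCodeExecution="false">
  <!-- sites, endpoints, configuration settings, etc. -->
</WebRole>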

But partial trust does restrict what your application can do. Several useful libraries (such as those used for accessing the registry or a well-known file location) don't work in such an environment, sometimes for trivial reasons. Even some of Microsoft's own frameworks don't work in this environment because they don't have the "allow partially trusted callers" attribute set.

See Windows Azure Partial Trust Policy Reference for information about what you get when you run in partial trust.

Handling Errors

Windows Azure automatically heals itself, but can your application?

Retry Logic

Transient faults are errors that occur because of some temporary condition such as network connectivity issues or service unavailability. Typically, if you retry the operation that resulted in a transient error a short time later, you find that the error has disappeared.

Different services can have different transient faults, and different applications require different fault handling strategies.

While it may not appear to be security related, it is a best practice to build retry logic into your application.

Azure Storage

The Windows Azure Storage Client Library that ships with the SDK already has retry behavior; you can set it on any storage client via the RetryPolicy property.
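
For example, a minimal sketch assuming the SDK 1.x StorageClient library, retrying failed operations up to three times with a one-second interval:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

var account = CloudStorageAccount.DevelopmentStorageAccount;
var blobClient = account.CreateCloudBlobClient();

// Retry failed storage operations 3 times, 1 second apart.
blobClient.RetryPolicy = RetryPolicies.Retry(3, TimeSpan.FromSeconds(1));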

SQL, Service Bus, Cache, and Azure Storage

SQL Azure, however, doesn't provide a default retry mechanism out of the box, since it uses the SQL Server client libraries; neither does Service Bus.

So the Microsoft patterns & practices team and the Windows Azure Customer Advisory Team developed the Transient Fault Handling Application Block. The block provides a number of ways to handle specific SQL Azure, Storage, Service Bus, and Cache conditions.

The Transient Fault Handling Application Block encapsulates information about the transient faults that can occur when you use the following Windows Azure services in your application:

  • SQL Azure
  • Windows Azure Service Bus
  • Windows Azure Storage
  • Windows Azure Caching Service

The block now includes enhanced configuration support, enhanced support for wrapping asynchronous calls, provides integration of the block’s retry strategies with the Windows Azure Storage retry mechanism, and works with the Enterprise Library dependency injection container.
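
A minimal sketch of using the block looks something like the following; the exact namespaces and strategy type names vary between versions of the block, so treat them as assumptions:

using System;
using Microsoft.Practices.TransientFaultHandling;

// Retry up to 3 times with a fixed 1-second interval between attempts,
// retrying only on errors the strategy classifies as transient.
var retryPolicy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(
    new FixedInterval(3, TimeSpan.FromSeconds(1)));

retryPolicy.ExecuteAction(() =>
{
    // Open the SQL Azure connection and execute the command here.
});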

Catch Your Errors

Unfortunately, systems fail, and Windows Azure is designed with failure in mind. Even with retry logic, you will occasionally experience a failure. You can add your own custom error handling to your ASP.NET web applications; custom error handling can ease debugging and improve customer satisfaction.

Eli Robillard, a member of the Microsoft MVP program, shows how you can create an error-handling mechanism that shows a friendly face to customers and still provides the detailed technical information developers will need in his article Rich Custom Error Handling with ASP.NET.

If an error page is displayed, it should serve both developers and end-users without sacrificing aesthetics. An ideal error page maintains the look and feel of the site, offers the ability to provide detailed errors to internal developers—identified by IP address—and at the same time offers no detail to end users. Instead, it gets them back to what they were seeking—easily and without confusion. The site administrator should be able to review errors encountered either by e-mail or in the server logs, and optionally be able to receive feedback from users who run into trouble.

Logging Errors in Windows Azure

ELMAH (Error Logging Modules and Handlers) is extremely useful in itself, and with a few simple modifications it can provide a very effective way to handle application-wide error logging for your ASP.NET web applications. My colleague Wade Wegner describes the steps he recommends in his post Using ELMAH in Windows Azure with Table Storage.

Once ELMAH has been dropped into a running web application and configured appropriately, you get the following facilities without changing a single line of your code:

  • Logging of nearly all unhandled exceptions.
  • A web page to remotely view the entire log of recorded exceptions.
  • A web page to remotely view the full details of any one logged exception, including colored stack traces.
  • In many cases, you can review the original yellow screen of death that ASP.NET generated for a given exception, even with customErrors mode turned off.
  • An e-mail notification of each error at the time it occurs.
  • An RSS feed of the last 15 errors from the log.

To learn more about ELMAH, see the MSDN article Using HTTP Modules and Handlers to Create Pluggable ASP.NET Components by Scott Mitchell and Atif Aziz. And see the ELMAH project page.

Accessing Your Errors Remotely

There are a number of scenarios where it is useful to have the ability to manage your Windows Azure storage accounts remotely. For example, during development and testing, you might want to be able to examine the contents of your tables, queues, and blobs to verify that your application is behaving as expected. You may also need to insert test data directly into your storage.

In a production environment, you may need to examine the contents of your application’s storage during troubleshooting or view diagnostic data that you have persisted. You may also want to download your diagnostic data for offline analysis and to be able to delete stored log files to reduce your storage costs.

A web search will reveal a growing number of third-party tools that can fulfill these roles. See Windows Azure Storage Management Tools for some useful tools.

Access to Storage

Keys

One thing to note right away is that no application should ever use any of the keys provided by Windows Azure as keys to encrypt data. An example would be the keys provided by Windows Azure for the storage service. These keys are configured to allow for easy rotation for security purposes or if they are compromised for any reason. In other words, they may not be there in the future, and may be too widely distributed.

Rotate Your Keys

When you create a storage account, your account is assigned two 256-bit account keys. One of these two keys must be specified in a header that is part of the HTTP(S) request. Having two keys allows for key rotation in order to maintain good security on your data. Typically, your applications would use one of the keys to access your data. Then, after a period of time (determined by you), you have your applications switch over to using the second key. Once you know your applications are using the second key, you retire the first key and then generate a new key. Using the two keys this way allows your applications access to the data without incurring any downtime.

See How to View, Copy, and Regenerate Access Keys for a Windows Azure Storage Account to learn how to view and copy access keys for a Windows Azure storage account, and to perform a rolling regeneration of the primary and secondary access keys.

Restricting Access to Blobs

By default, a storage container and any blobs within it may be accessed only by the owner of the storage account. If you want to give anonymous users read permissions to a container and its blobs, you can set the container permissions to allow public access. Anonymous users can read blobs within a publicly accessible container without authenticating the request. See Restricting Access to Containers and Blobs.
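
With the SDK 1.x StorageClient library, granting anonymous read access looks roughly like this (the container name is illustrative):

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

var account = CloudStorageAccount.DevelopmentStorageAccount;
var container = account.CreateCloudBlobClient().GetContainerReference("publicimages");
container.CreateIfNotExist();

// Allow anonymous users to read blobs (but not list or write them).
container.SetPermissions(new BlobContainerPermissions
{
    PublicAccess = BlobContainerPublicAccessType.Blob
});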

A Shared Access Signature is a URL that grants access rights to containers and blobs; specifically, it grants access to the Blob service resources specified by the URL's granted permissions. Care must be taken when using Shared Access Signatures in certain scenarios with HTTP requests, since HTTP requests disclose the full URL in clear text over the Internet.

By specifying a Shared Access Signature, you can grant users who have the URL access to a specific blob or to any blob within a specified container for a specified period of time. You can also specify what operations can be performed on a blob that’s accessed via a Shared Access Signature. Supported operations include:

  • Reading and writing blob content, block lists, properties, and metadata
  • Deleting a blob
  • Leasing a blob
  • Creating a snapshot of a blob
  • Listing the blobs within a container

Both block blobs and page blobs support Shared Access Signatures.

If a Shared Access Signature has rights that are not intended for the general public, then its access policy should be constructed with the least rights necessary. In addition, a Shared Access Signature should be distributed securely to intended users using HTTPS communication, should be associated with a container-level access policy for the purpose of revocation, and should specify the shortest possible lifetime for the signature.
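
Putting those recommendations together, a sketch assuming the SDK 1.x StorageClient library might generate a short-lived, read-only signature like this (the blob path is illustrative):

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

var account = CloudStorageAccount.DevelopmentStorageAccount;
var blob = account.CreateCloudBlobClient().GetBlobReference("reports/monthly.pdf");

// Read-only access, valid for the next 10 minutes.
string sas = blob.GetSharedAccessSignature(new SharedAccessPolicy
{
    Permissions = SharedAccessPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(10)
});

// Distribute the resulting URL over HTTPS only.
string url = blob.Uri.AbsoluteUri + sas;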

See Creating a Shared Access Signature and Using a Shared Access Signature (REST API).

Storing the Connection String

If you have a hosted service that uses the Windows Azure Storage Client Library to access your Windows Azure Storage account, it is recommended that you store your connection string in the service configuration file. This allows a deployed service to respond to changes in the configuration without redeploying the application.

Examples of when this is beneficial are:

  • Testing – If you use a test account while your application is deployed to the staging environment and must switch it over to the live account when you move the application to the production environment.
  • Security – If you must rotate the keys for your storage account due to the key in use being compromised.

For more information on configuring the connection string, see Configuring Connection Strings.

For more information about using the connection strings, see Reading Configuration Settings for the Storage Client Library and Handling Changed Settings.
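
As a minimal sketch (assuming a .cscfg setting named DataConnectionString), reading the account from configuration with the SDK 1.x library looks like this:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;

// Tell the storage library how to resolve setting names in a role environment.
CloudStorageAccount.SetConfigurationSettingPublisher((configName, publishConfigValue) =>
    publishConfigValue(RoleEnvironment.GetConfigurationSettingValue(configName)));

var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");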

Gatekeeper Design Pattern

A Gatekeeper is a design pattern in which access to storage is brokered so as to minimize the attack surface of privileged roles by limiting their interaction to communication over private internal channels and only to other web/worker roles.

The pattern is explained in the paper Security Best Practices For Developing Windows Azure Applications from Microsoft Download.

These roles are deployed on separate VMs.

In the event of a successful attack on a web role, privileged key material is not compromised. The pattern can best be illustrated by the following example which uses two roles:

  • The GateKeeper – A Web role that services requests from the Internet. Since these requests are potentially malicious, the Gatekeeper is not trusted with any duties other than validating the input it receives. The GateKeeper is implemented in managed code and runs with Windows Azure Partial Trust. The service configuration settings for this role do not contain any Shared Key information for use with Windows Azure Storage.
  • The KeyMaster – A privileged backend worker role that only takes inputs from the Gatekeeper and does so over a secured channel (an internal endpoint, or queue storage – either of which can be secured with HTTPS). The KeyMaster handles storage requests fed to it by the GateKeeper, and assumes that the requests have been sanitized to some degree. The KeyMaster, as the name implies, is configured with Windows Azure Storage account information from the service configuration to enable retrieval of data from Blob or Table storage. Data can then be relayed back to the requesting client. Nothing about this design requires Full Trust or Native Code, but it offers the flexibility of running the KeyMaster in a higher privilege level if necessary.
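
In service-definition terms, the KeyMaster would expose only an internal endpoint, so it is never addressable from the Internet. A rough csdef sketch (role and endpoint names are illustrative):

<WorkerRole name="KeyMaster">
  <Endpoints>
    <!-- Reachable only by other roles in the same deployment -->
    <InternalEndpoint name="GatekeeperChannel" protocol="tcp" />
  </Endpoints>
</WorkerRole>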

Multiple Keys

In scenarios where a partial-trust Gatekeeper cannot be placed in front of a full-trust role, a multi-key design pattern can be used to protect trusted storage data. An example case of this scenario might be when a PHP web role is acting as a front-end web role, and placing a partial trust Gatekeeper in front of it may degrade performance to an unacceptable level.

The multi-key design pattern has some advantages over the Gatekeeper/KeyMaster pattern:

  • Providing separation of duty for storage accounts. In the event of Web Role A's compromise, only the untrusted storage account and associated key are lost.
  • No internal service endpoints need to be specified. Multiple storage accounts are used instead.
  • Windows Azure Partial Trust is not required for the externally-facing untrusted web role. Since PHP does not support partial trust, the Gatekeeper configuration is not an option for PHP hosting.

See Security Best Practices For Developing Windows Azure Applications from Microsoft Download.

Crypto Services

You can use encryption to help secure application-layer data. Cryptographic Service Providers (CSPs) are implementations of cryptographic standards, algorithms, and functions presented in a system program interface.

An excellent article that provides insight into how you can offer these kinds of services in your application is Jonathan Wiggs' MSDN Magazine article, Crypto Services and Data Security in Windows Azure. He explains, "A consistent recommendation is to never create your own or use a proprietary encryption algorithm. The algorithms provided in the .NET CSPs are proven, tested and have many years of exposure to back them up."

There are many you can choose from. Microsoft provides:

Microsoft Base Cryptographic Provider. A broad set of basic cryptographic functionality that can be exported to other countries or regions.

Microsoft Strong Cryptographic Provider. An extension of the Microsoft Base Cryptographic Provider available with Windows 2000 and later.

Microsoft Enhanced Cryptographic Provider. An extension of the Microsoft Base Cryptographic Provider, with support for longer keys and additional algorithms.

Microsoft AES Cryptographic Provider. Microsoft Enhanced Cryptographic Provider with support for AES encryption algorithms.

Microsoft DSS Cryptographic Provider. Provides hashing, data signing, and signature verification capability using the Secure Hash Algorithm (SHA) and Digital Signature Standard (DSS) algorithms.

Microsoft Base DSS and Diffie-Hellman Cryptographic Provider. A superset of the DSS Cryptographic Provider that also supports Diffie-Hellman key exchange, hashing, data signing, and signature verification using the Secure Hash Algorithm (SHA) and Digital Signature Standard (DSS) algorithms.

Microsoft Enhanced DSS and Diffie-Hellman Cryptographic Provider. Supports Diffie-Hellman key exchange (a 40-bit DES derivative), SHA hashing, DSS data signing, and DSS signature verification.

Microsoft DSS and Diffie-Hellman/Schannel Cryptographic Provider. Supports hashing, data signing with DSS, generating Diffie-Hellman (D-H) keys, exchanging D-H keys, and exporting a D-H key. This CSP supports key derivation for the SSL3 and TLS1 protocols.

Microsoft RSA/Schannel Cryptographic Provider. Supports hashing, data signing, and signature verification. The algorithm identifier CALG_SSL3_SHAMD5 is used for SSL 3.0 and TLS 1.0 client authentication. This CSP supports key derivation for the SSL2, PCT1, SSL3 and TLS1 protocols.

Microsoft RSA Signature Cryptographic Provider. Provides data signing and signature verification.
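
As a minimal sketch of the "use the proven CSPs" advice, AES encryption with the .NET AesCryptoServiceProvider looks like this; key generation and storage are omitted for brevity:

using System.IO;
using System.Security.Cryptography;

public static byte[] Encrypt(byte[] plaintext, byte[] key, byte[] iv)
{
    using (var aes = new AesCryptoServiceProvider())
    using (var encryptor = aes.CreateEncryptor(key, iv))
    using (var ms = new MemoryStream())
    {
        using (var cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write))
        {
            // Encrypt the plaintext through the stream.
            cs.Write(plaintext, 0, plaintext.Length);
        }
        return ms.ToArray();
    }
}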

Key Storage

The data security provided by encrypting data is only as secure as the keys used, and this problem is much more difficult than people may think at first. You should not use your Azure Storage keys. Instead, you can create your own using the providers in the previous section.

Storing your own key library within the Windows Azure Storage services is a good way to persist some secret information since you can rely on this data being secure in the multi-tenant environment and secured by your own storage keys. This is different from using storage keys as your cryptography keys. Instead, you could use the storage service keys to access a key library as you would any other stored file.

Key Management

To start, always assume that the processes you’re using to decrypt, encrypt and secure data are well-known to any attacker. With that in mind, make sure you cycle your keys on a regular basis and keep them secure. Give them only to the people who must make use of them and restrict your exposure to keys getting outside of your control.

Cleanup

And while you are using keys, it is recommended that such data be stored in buffers such as byte arrays. That way, as soon as you are done with the information, you can overwrite the buffer with zeroes or any other data that ensures the key material is no longer in memory.
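
A minimal sketch of that hygiene (GetKeyFromKeyLibrary is a hypothetical helper):

byte[] key = GetKeyFromKeyLibrary(); // hypothetical helper that loads the key
try
{
    // ... encrypt or decrypt with the key ...
}
finally
{
    // Zero the buffer so the key does not linger in memory.
    Array.Clear(key, 0, key.Length);
}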

Again, Jonathan’s article Crypto Services and Data Security in Windows Azure is a great place to study and learn how all the pieces fit together.

Summary

Security is not something that can be added on as the last step in your development process. Rather, it should be made part of your ongoing development process. And you should make security decisions based on the needs of your own application.

Have a methodology where every part of your application development cycle considers security. Look for places in your architecture and code where someone might have access to your data.

Windows Azure makes security a shared responsibility. With Platform as a Service, you can focus on your application and your own security needs in deeper ways than before.

In a series of blog posts, I provided you a look into how you can secure your application in Windows Azure. This seven-part series described the threats, how you can respond, what processes you can put into place for the lifecycle of your application, and prescribed a way for you to implement best practices around the requirements of your application.

I also showed ways for you to incorporate user identity and some of the services Azure provides that will enable your users to access your cloud applications in new ways.

Here are links to the articles in this series:

(source : http://blogs.msdn.com/b/usisvde/archive/2012/03/15/windows-azure-security-best-practices-part-7-tips-tools-coding-best-practices.aspx)