
The Fabric Controller

Operating systems have at their core a kernel. This kernel is responsible for being the traffic cop in the system. It manages the sharing of resources, schedules the use of precious assets (CPU time), allocates work streams as appropriate, and keeps an eye on security. The fabric has a kernel called the Fabric Controller (FC). Understanding these relationships will help you get the most out of the platform.

The FC handles all of the jobs a normal operating system's kernel would handle. It manages the running servers, deploys code, and makes sure that everyone is happy and has a seat at the table.

The FC is an Azure application in and of itself, running multiple copies of itself for redundancy's sake. It's largely written in managed code. The FC contains the complete state of the fabric internally, which is replicated in real time to all the nodes that are part of the FC. If one of the primary nodes goes offline, the latest state information is available to the remaining nodes, which then elect a new primary node.

The FC manages a state machine for each service deployed, setting a goal state that's based on what the service's service model requires. Everything the FC does is in an effort to reach this state and then to maintain that state when it's reached. We'll go into the details of what the service model is in the next few pages, but for now, just think of it as a model that defines the needs and expectations of your service.

The FC is obviously very busy. Let's look at how it manages to seamlessly perform all these tasks.


How the FC works: the driver model
The FC follows a driver model, just like a conventional OS. Windows has no idea how to specifically work with your video card. What it does know is how to speak to a video driver, which in turn knows how to work with a specific video card. The FC works with a series of drivers for each type of asset in the fabric. These assets include the machines, as well as the routers, switches, and load balancers. Although the variability of the environment is low today, over time new types of each asset are likely to be introduced. The goal is to reduce unnecessary diversity, but you'll have business needs that require breadth in the platform. Perhaps you'll get a software load balancer for free, but you'll have to pay a little bit more per month to use a hardware load balancer.

A customer might choose a certain option, such as a hardware load balancer, to meet a specific need. The FC would have a different driver for each piece of infrastructure it controls, allowing it to communicate with and control that infrastructure.

The FC uses these drivers to send commands to each device that help the device reach the desired running state. A command might create a new VLAN on a switch or allocate a pool of virtual IP addresses. These commands help the FC move the state of the service toward the goal state. While the FC is moving all your services toward the running state, it's also allocating resources and managing the health of the nodes in the fabric and of your services.
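The actual driver interfaces are internal to the fabric and aren't publicly documented, but the idea can be sketched. In the hypothetical Python sketch below, every asset type implements one common driver interface that the controller uses to push a device toward its goal state; all class names, methods, and commands are invented for illustration, not real Azure APIs.

# Hypothetical sketch of a driver-model abstraction; not the real Fabric
# Controller code or API, just an illustration of the concept.
from abc import ABC, abstractmethod


class AssetDriver(ABC):
    """Common interface the controller would use to talk to any fabric asset."""

    @abstractmethod
    def apply(self, command: dict) -> None:
        """Send one device-specific command toward the goal state."""


class SwitchDriver(AssetDriver):
    def apply(self, command: dict) -> None:
        if command["action"] == "create_vlan":
            print(f"switch: creating VLAN {command['vlan_id']}")


class LoadBalancerDriver(AssetDriver):
    def apply(self, command: dict) -> None:
        if command["action"] == "allocate_vips":
            print(f"lb: allocating {command['count']} virtual IPs")


# The controller only knows the generic interface; the drivers know the hardware.
drivers = {"switch-42": SwitchDriver(), "lb-7": LoadBalancerDriver()}
drivers["switch-42"].apply({"action": "create_vlan", "vlan_id": 123})
drivers["lb-7"].apply({"action": "allocate_vips", "count": 8})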


Resource allocation
One of the key jobs of the FC is to allocate resources to services. It analyzes the service model of the service, including the fault and update domains, and the availability of resources in the fabric. Using a greedy resource allocation algorithm, it finds which nodes can support the needs of each instance in the model. When it has reserved the capacity, the FC updates its data structures in one transaction. After the update, the goal state of each node is changed, and the FC starts moving each node towards its goal state by deploying the proper images and bits, starting up services, and issuing other commands through the driver model to all the resources needed for the change.
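As a rough illustration of what a greedy placement pass might look like (the real FC also weighs update domains, network locality, reserved capacity, and more), here is a minimal Python sketch; the node list, capacity numbers, and scoring rule are invented for the example.

# Minimal, invented sketch of greedy instance placement; the real Fabric
# Controller considers fault/update domains, capacity reservations, and more.

nodes = [  # free capacity per node, plus the fault domain (rack) it lives in
    {"id": "node-1", "free_cores": 8, "fault_domain": "rack-A"},
    {"id": "node-2", "free_cores": 4, "fault_domain": "rack-B"},
    {"id": "node-3", "free_cores": 8, "fault_domain": "rack-B"},
]

instances = [{"name": "web-0", "cores": 2}, {"name": "web-1", "cores": 2}]


def place(instances, nodes):
    placement, used_domains = {}, set()
    for inst in instances:
        # Greedy choice: prefer an unused fault domain, then the most free cores.
        candidates = [n for n in nodes if n["free_cores"] >= inst["cores"]]
        candidates.sort(key=lambda n: (n["fault_domain"] in used_domains,
                                       -n["free_cores"]))
        chosen = candidates[0]
        chosen["free_cores"] -= inst["cores"]
        used_domains.add(chosen["fault_domain"])
        placement[inst["name"]] = chosen["id"]
    return placement


print(place(instances, nodes))  # e.g. {'web-0': 'node-1', 'web-1': 'node-3'}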


Instance management
The FC is also responsible for managing the health of all of the nodes in the fabric, as well as the health of the services that are running. If it detects a fault in a service, it tries to remediate that fault, perhaps by restarting the node or taking it offline and replacing it with a different node in the fabric. When a new container is added to the data center, the FC performs a series of burn-in tests to ensure that the hardware delivered is working correctly. Part of this process results in the new resource being added into the inventory for the data center, making it available to be allocated by the FC. If hardware is determined to be faulty, either during installation or during a fault, the hardware is flagged in the inventory as being unusable and is left alone until later. When a container has enough failures, the remaining workloads are moved to different containers and then the whole container is taken offline for repair. After the problems have been fixed, the whole container is retested and returned into service.
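Conceptually, this health management is a reconciliation loop: observe the actual state of each node, compare it with the goal state, and remediate. The Python sketch below is purely illustrative; the states, probe, and actions are assumptions, not Azure internals.

# Illustrative reconciliation loop for node health; invented for this example,
# not actual Fabric Controller behavior or APIs.

GOAL_STATE = "running"


def probe(node):
    """Pretend health probe; a real controller would query agents and hardware."""
    return node.get("reported_state", "unknown")


def remediate(node, inventory):
    state = probe(node)
    if state == GOAL_STATE:
        return "healthy"
    if state == "faulted-service":
        return "restart role instances"          # cheapest fix first
    if state == "faulted-hardware":
        inventory[node["id"]] = "unusable"        # flag it, leave it alone
        return "move workload to a spare node"
    return "take node offline for diagnostics"


inventory = {}
for node in [{"id": "n1", "reported_state": "running"},
             {"id": "n2", "reported_state": "faulted-hardware"}]:
    print(node["id"], "->", remediate(node, inventory))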

Source of Information : Manning Azure in Action 2010

Windows Azure, an operating system for the cloud

Think of the computer on your desk today. When you write code for that computer, you don't have to worry about which sound card it uses, which type of printer it's connected to, or which or how many monitors are used for the display. You don't worry, to a degree, about the CPU, about memory, or even about how storage is provided (solid-state drive [SSD], carrier pigeon, or hard disk drive). The operating system on that computer provides a layer of abstraction away from all of those gritty details, frees you up to focus on the application you need to write, and makes it easy to consume the resources you need. The desktop operating system protects you from the details of the hardware, allocates time on the CPU to the code that's running, makes sure that code is allowed to run, plays traffic cop by controlling shared access to resources, and generally holds everything together.

Now think of that enterprise application you want to deploy. You need a DNS, networking, shared storage, load balancers, plenty of servers to handle load, a way to control access and permissions in the system, and plenty of other moving parts. Modern systems can get complicated. Dealing with all of that complexity by hand is like compiling your own video driver; it doesn't provide any value to the business. Windows Azure does all this work, but on a much grander scale and for distributed applications by using something called the fabric.

Windows Azure takes care of the whole platform so you can focus on your application. The term fabric is used because of the similarity of the Azure fabric to a woven blanket. Each thread on its own is weak and can't do a lot. When they're woven together into a fabric, the whole blanket becomes strong and warm. The Azure fabric consists of thousands of servers, woven together and working as a cohesive unit. In Azure, you don't need to worry about which hardware, which node, what underlying operating system, or even how the nodes are load balanced or clustered. Those are just gritty details best left to someone else. You just need to worry about your application and whether it's operating effectively. How much time do you spend wrangling with these details for your on-premises projects? It's probably at least 10-20 percent of the total project cost in meetings alone. There are savings to be gained by abstracting away these issues.

In fact, Azure manages much more than just servers: it also manages routers, switches, IP addresses, DNS servers, load balancers, and dynamic virtual local area networks (VLANs). In a static data center, managing all these assets is a complex undertaking. It's even more complex when you're managing multiple data centers that need to operate as one cohesive pool of resources, in a dynamic and real-time way.

If the fabric is the operating system, then the Fabric Controller is the kernel.

Source of Information : Manning Azure in Action 2010

How many administrators do you need for Data centers?

Data centers are staffed with IT pros to care for and feed the servers. Data centers need a lot of attention, ranging from hardware maintenance to backup, disaster recovery, and monitoring. Think of your company. How many people are allocated to manage your servers? Depending on how optimized your IT center is, the ratio of person-to-servers can be anywhere from 1:10 to 1:100. With that ratio, Microsoft would need 35,000 server managers. Hiring that many server administrators would be hard, considering that Microsoft employs roughly 95,000 people already.

To address this demand, Azure was designed to use as much automation as possible, using a strategy called lights-out operations. This strategy seeks to centralize and automate as much of the work as possible by reducing complexity and variability. The result is a person-to-servers ratio closer to 1:30,000 or higher.

Microsoft is achieving this level of automation mostly by using its own off-the-shelf software. Microsoft is literally eating its own dog food. It's using System Center Operations Manager and all the related products to oversee and automate the management of the underlying machines. It has built custom automation scripts and profiles, much like any customer would do.

One key strategy in effectively managing a massive number of servers is to provision them with identical hardware. In traditional data centers where we've worked, each year brought the latest and greatest of server technology, resulting in a wide variety of hardware. We even gave each server a distinct name, such as Protoss, Patty, and Zelda. With this many servers, you can't name them; you have to number them. Not just by server, but by rack, room, and facility. Diversity is usually a great thing, but not when you're managing millions of boxes.

The hardware in each Azure server is optimized for power, cost, density, and management. The optimization process drives exactly which motherboard, chipset, and every other component needs to be in the server; this is truly bang for your buck in action. Then that server recipe is kept for a specific lifecycle, only moving to a new bill of materials when there are significant advantages to doing so.

Source of Information : Manning Azure in Action 2010

The latest Azure data centers

Even with the advances found in Generation 2 data centers, companies couldn't find the efficiencies and scale needed to combat rising facility costs, let alone meet the demands that the cloud would generate. The density of the data center needed to go up dramatically, and the costs of operations had to plummet. The first Generation 3 data center, located in Chicago, Illinois, went online on June 20, 2009. Microsoft considers it to be a mega data center, which is a class designation that defines how large the data center is. The Chicago data center looks like a large parking deck, with parking spaces and ramps for tractor trailers. Servers are placed into containers, called CBlox, which are parked in this structure. A smaller building that looks more like a traditional data center is also part of the complex. This area is for high-maintenance workloads that can't run in Azure.

CBlox are made out of the shipping containers that you see on ocean-going vessels and on eighteen-wheelers on the highways. They're sturdily built and follow a standard size and shape that are easy to move around. One CBlox can hold anywhere from 1,800 to 2,500 servers. This is a massive increase in data-center density, 10 times more dense than a traditional data center. The Chicago mega data center holds about 360,000 servers and is the only primary consumer of a dedicated nuclear power plant core run by Chicago Power & Light. How many of your data centers are nuclear powered?

Each parking spot in the data center is anchored by a refrigerator-size device that acts as the primary interconnect to the rest of the data center. Microsoft developed a standard coupler that provides power, cooling, and network access to the container. Using this interconnect and the super-dense containers, massive amounts of capacity can be added in a matter of hours. Compare how long it would take your company to plan, order, deploy, and configure 2,500 servers. It would take at least a year, and a lot of people, not to mention how long it would take to recycle all the cardboard and extra parts you always seem to have after racking a server. Microsoft's goal with this strategy is to make it as cheap and easy as possible to expand capacity as demand increases.

The containers are built to Microsoft's specifications by a vendor and delivered on site, ready for burn-in tests and allocation into the fabric. Each container includes networking gear, cooling infrastructure, servers, and racks, and is sealed against the weather.

Not only are the servers now packaged and deployed in containers, but the necessary generators and cooling machinery are designed to be modular as well. To set up an edge data center, one that's located close to a large-demand population, all that's needed is the power and network connections, and a level paved surface. The trucks with the power and cooling equipment show up first, and the equipment is deployed. Then the trucks with the computing containers back in and drop their trailers, leaving the containers on the wheels that were used to deliver them. The facility is protected by a secure wall and doorway with monitoring equipment. The use of laser fences is pure speculation and just a rumor, as far as we know. The perimeter security is important, because the edge data center doesn't have a roof! Yes, no roof! Not using a roof reduces the construction time and the cooling costs. A roof isn't needed because the containers are completely sealed.

Microsoft opened a second mega data center, the first outside the United States, in Dublin, Ireland, on July 1, 2009. When Azure became commercially available in January 2010, the following locations were known to have an Azure data center: Texas, Chicago, Ireland, Amsterdam, Singapore, and Hong Kong. Although Microsoft won't tell where all its data centers are for security reasons, it purports to have more than 10 and fewer than 100 data centers. Microsoft already has data centers all over the world to support its existing services, such as Virtual Earth, Bing Search, Xbox Live, and others. If we assume there are only 10, and each one is as big as Chicago, then Microsoft needs to manage 3.5 million servers as part of Azure. That's a lot of work.

Source of Information : Manning Azure in Action 2010

Microsoft data centers of yore

Microsoft is continually improving all the important aspects of its data centers. It closely manages all the costs of a data center, including power, cooling, staff, local laws, risk of disaster, availability of natural resources, and many other factors. While managing all this, it has designed its fourth generation of data centers. Microsoft didn't just show up at this party; it planned it by building on a deep expertise in building and running global data centers over the past few decades.

The first generation of data centers is still the most common in the world. Think of the special room with servers in it. It has racks, cable ladders, raised floors, cooling, uninterruptable power supplies (UPSs), maybe a backup generator, and it's cooled to a temperature that could safely house raw beef. The focus is placed on making sure the servers are running; no thought or concern is given to the operating costs of the data center. These data centers are built to optimize the capital cost of building them, with little thought given to costs accrued beyond the day the center opens. (By the way, the collection of servers under your desk doesn't qualify as a Generation 1 data center. Please be careful not to kick a cord loose while you do your work.)

Generation 2 data centers take all the knowledge learned by running Generation 1 data centers and apply a healthy dose of thinking about what happens on the second day of operation. Ongoing operational costs are reduced by optimizing for sustainability and energy efficiency. To meet these goals, Microsoft powers its Quincy, Washington, data center with clean hydroelectric power. Its data center in San Antonio, Texas, uses recycled civic gray water to cool the data center, reducing the stress on the water sources and infrastructure in the area.

Source of Information : Manning Azure in Action 2010

Microsoft's big shift

When Azure was first announced at the PDC in 2008, Microsoft wasn't a recognized player in the cloud industry. It was the underdog to the giants Google and Amazon, which had been offering cloud services for years by that time. Building and deploying Azure was a big bet for Microsoft. It was a major change in the company's direction, from where Microsoft had been to where it needed to go in the future. Up until that time, Microsoft had been a product company. It designed and built a product, burnt it to CD, and sold it to customers. Over time, the product was enhanced, but the product was installed and operated in the client's environment. The trick was to build the right product at the right time, for the right market.

With the addition of Ray Ozzie to the Microsoft culture, there was a giant shift toward services. Microsoft wasn't abandoning the selling of products, but it was expanding its expertise and portfolio to offer its products as services. Every product team at Microsoft was asked if what they were doing could be enhanced and extended with services. They wanted to do much more than just put Exchange in a data center and rent it to customers. This became a fundamental shift in how Microsoft developed code, how the code was shipped, and how it was marketed and sold to customers.

This shift toward services wasn't an executive whim, thought up during an exclusive executive retreat at a resort we'll never be able to afford to even drive by. It was based on the trends and patterns the leaders saw in the market, in the needs of their customers, and on the continuing impact of the internet on our world. Those in charge saw that people needed to use their resources in a more flexible way, more flexible than even the advances in virtualization were providing. Companies needed to easily respond to a product's sudden popularity as social networking spread the word. Modern businesses were screaming that six months was too long to wait for an upgrade to their infrastructure; they needed it now.

Customers were also becoming more sensitive to the massive power consumption and heat that was generated by their data centers. Power and cooling bills were often the largest component of their total data-center cost. Coupling this with a concern over global warming, customers were starting to talk about the greening of IT. They wanted to reduce the carbon footprint that these beasts produced. Not only did they want to reduce the power and cooling waste, but also the waste of lead, packing materials, and the massive piles of soda cans produced by the huge number of server administrators that they had to employ.

Source of Information : Manning Azure in Action 2010

WHAT IS SQL AZURE?

Version 1.0 of SQL Azure, which was released at PDC 2009, provides the core capabilities of SQL Server in the cloud. The first release can be likened to running an instance of SQL Server Express Edition on a shared host, with some changes to security so that you can't mess with other databases on the same server. Communication with SQL Azure is via the Tabular Data Stream (TDS) protocol, which is the same protocol that's used for the on-premises editions of SQL Server. You can connect SQL Management Studio directly to your database hosted in the cloud, as if it were hosted locally.

In the first release of SQL Azure, security is limited to SQL Server user accounts. Windows Integrated Security isn't yet supported. Expect some sort of support beyond SQL Security at a later date.

Because you can connect to SQL Azure with a regular connection string, any existing data access layers continue to work normally. Communication is possible between SQL Azure and applications that are hosted both inside and outside Windows Azure.

If your application works today using SQL Server Express Edition and doesn't use some of the more advanced features of SQL Server, then your application should work in the cloud with little or no modification.

Although on-premises applications can talk to SQL Azure, latency might make this a less attractive option. The closer your application is to the database, the faster it'll go. You can reduce the impact of latency by making your application less chatty.
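Because SQL Azure speaks TDS, an ordinary connection string is all a client needs. The sketch below uses Python's pyodbc purely as an example client; the server, database, and credential values are placeholders, and the exact ODBC driver name depends on what's installed on your machine.

# Connecting to SQL Azure over TDS with a normal connection string.
# Server, database, user, and password values below are placeholders.
import pyodbc

conn_str = (
    "Driver={SQL Server Native Client 10.0};"      # driver name varies by install
    "Server=tcp:myserver.database.windows.net;"    # your SQL Azure server
    "Database=ShirtShop;"
    "Uid=shopadmin@myserver;"                      # SQL authentication only
    "Pwd=<password>;"
    "Encrypt=yes;"
)

with pyodbc.connect(conn_str) as conn:
    row = conn.cursor().execute("SELECT COUNT(*) FROM Orders").fetchone()
    print("orders:", row[0])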


HOW SCALABLE IS SQL AZURE?
In version 1.0 of SQL Azure, there's no built-in support for data partitioning (the ability to split your data across multiple servers). The initial release is targeted for databases that are sized up to 10 GB; larger databases aren't suitable for SQL Azure in this initial release, but support for larger databases will be available in future service updates. If you need to perform partitioning, you need to implement it in the application layer.
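Until partitioning arrives in the service itself, sharding has to live in your code. Below is a minimal Python sketch of application-layer partitioning; the shard map, the choice of partition key, and the connection strings are all assumptions made for illustration.

# Minimal application-layer partitioning (sharding) sketch.  The shard map,
# hashing choice, and connection strings are invented for illustration; each
# shard would be a separate SQL Azure database kept under the size limit.
import hashlib

SHARDS = {
    0: "Server=tcp:shard0.database.windows.net;Database=Shop0;...",
    1: "Server=tcp:shard1.database.windows.net;Database=Shop1;...",
}


def shard_for(customer_id: str) -> str:
    """Pick a shard deterministically from the partition key."""
    digest = hashlib.md5(customer_id.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]


print(shard_for("customer-42"))   # always maps the same customer to one shard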

Let's turn now to the Azure platform's enterprise services. Known as AppFabric, these services include the Access Control Service (ACS) and the Service Bus.

Source of Information : Manning Azure in Action 2010

Treating computing power as a utility service

In traditional on-premises or managed-hosting solutions, you either rent or own the infrastructure that your service is hosted on. You're paying for future capacity that you're currently not using. The Windows Azure platform, like other cloud platforms, follows a model of utility computing.

Utility computing treats computing power or storage in the same way you treat a utility service (such as gas or electricity). Your usage of the Windows Azure platform is metered, and you pay only for what you consume.


PAY AS YOU GROW
If you have to pay only for the resources you use, you can launch a scalable service without making a huge investment up front in hardware. In the early days of a new venture, a start-up company survives from investment funding and generates very little income. The less money the company spends, the more chance it has of surviving long enough to generate sufficient income to sustain itself. If the service is successful, then the generated income will pay for the use of the resources.

It's not unusual for technology start-ups to purchase large and expensive hardware solutions for new ventures to cope with predicted future demand. If the service is successful, then it'll require the extra capacity; in the meantime, the start-up is paying for resources that it's not using. Utility computing offers the best of both worlds, giving you the ability to use extra capacity as the service grows without making up-front investments in hardware, and to pay only for the resources that you use.


SCALE ON DEMAND
Some situations involve large, unpredictable growth; you want to handle the load, but not pay for the unused capacity. This situation might appear in the following scenarios:

• Viral marketing campaigns
• Referrals by a popular website
• Concert ticket sales

Let's say you run a Hawaiian Shirt Shop, and you typically have a predictable pattern of usage. If, for example, Ashton Kutcher (who has 2,000,000 Twitter followers) tweets that he buys his shirts from your website, and he posts a link to your site to all his followers, it's likely that your website will experience a surge in traffic.

Suppose your website normally receives around 1,000 hits per day. After Ashton Kutcher tweeted about your website, that jumped to 100,000 hits per day. The traffic dropped off after about a week, and the website then settled at a new baseline of around 10,000 hits per day.

With Windows Azure, you can dynamically scale up to handle the increased traffic load for that week (and get all the extra shirt sales); then, as the traffic decreases, you can scale back down again, paying only for the resources you use. Scaling up and down in Azure is quite simple. You have several options at your disposal. It's important to remember that Azure doesn't scale your service for you. Because it costs money, you have to tell Azure how many servers you want. Azure gives you tools to do this. You can simply log into the portal and make a small change to a configuration file, which is the easy, manual way. You can also use the Service Management API. This API lets you change the resources you have allocated to your application in an automated way.
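The instance count lives in the service configuration file (ServiceConfiguration.cscfg), so "a small change to a configuration file" usually means editing a single attribute. The Python sketch below bumps that count in a local copy of the file before you upload the new configuration through the portal or the Service Management API; the file path, role name, and XML namespace shown here are the commonly documented ones, but treat them as assumptions for your own project.

# Bump the instance count in a local ServiceConfiguration.cscfg before
# re-uploading it.  Role name and file path are placeholders; the namespace
# is the commonly documented one for 2008-era service configurations.
import xml.etree.ElementTree as ET

NS = "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
ET.register_namespace("", NS)

tree = ET.parse("ServiceConfiguration.cscfg")
for role in tree.getroot().findall(f"{{{NS}}}Role"):
    if role.get("name") == "HawaiianShirtShop.Web":        # placeholder role name
        role.find(f"{{{NS}}}Instances").set("count", "3")  # evening capacity

tree.write("ServiceConfiguration.cscfg", xml_declaration=True, encoding="utf-8")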

It's not only possible to scale up and down for predictable (or unpredictable) bursts of growth; you can also dynamically scale your service based on normal, varied usage patterns.


VARIED USAGE PATTERNS
Returning to the Hawaiian Shirt Shop example: after the Ashton Kutcher hype died down a little, your website leveled off at around 10,000 hits per day. Most of the time there's little traffic on the site, apart from during lunch and in the evening. Evidently, most people don't buy Hawaiian shirts when they're at work.

Because it takes only a few minutes to provision a new web server in Windows Azure, you can dynamically scale your website as your usage patterns dictate. For the Hawaiian Shirt Shop, you might decide to run one instance of the website during the day, but in the evening to run three instances to deal with the increased traffic.

This sort of scenario is a prime example of when cloud computing, specifically Windows Azure, is a perfect fit for your business. If you need to scale beyond a single web server for certain periods of the day when traffic is high, Windows Azure is a cost-effective choice because it allows you to scale back when traffic dies down. Cloud computing solutions are the only offerings that give you this elastic scalability. Other solutions typically require you to over-provision your hardware to cope with the peak periods, but that hardware is underused at off-peak times. So far, we've discussed the cost savings you can achieve by scaling your application up and down. Let's now look at how you can save money in maintenance costs.


Simplified data-center management
We'll look at how the effort involved in operationally maintaining your application is reduced in the Windows Azure environment.

BUILT-IN FAULT TOLERANCE
In Windows Azure, if the physical hardware that your service instance resides on fails, that failed instance is redeployed to another machine. Hardware maintenance resides solely in Microsoft's domain, and you don't have to worry about it.

When you have two or more instances of your application running in Windows Azure, each instance of your web role lives on a different physical server from every other instance. This arrangement ensures that if the hardware for one instance dies, the other instance can continue to serve incoming requests. Not only does the second instance live on a different physical server, but it also lives on a different rack (in case the server rack fails). The server rack where the second instance resides is connected to a different network and power grid from the first rack. This level of fault tolerance ensures that if there's a failure in the physical server, the server rack, the network, or the electricity supply, your service continues to run and is able to service requests.

When you install your application, Windows Azure decides what servers to place your instances on, with the intention of providing the maximum levels of fault tolerance. Because all data-center assets are tracked and mapped, the placement of applications on each physical server is determined by an algorithm designed to match the fault-tolerance criteria. Even with only two instances of an application, these considerations are pretty complex, but Windows Azure maximizes fault tolerance even when there are hundreds of instances.

Although fault tolerance is maintained within a physical data center, Azure doesn't currently run across the data centers, but runs only within a single data center. You still need to perform offsite backups (if you need them). You can replicate your data to a second data center and run your applications in more than one data center if georedundancy is required.

One of the key differences between Windows Azure-hosted applications and regular on-premises solutions or other cloud solutions is that Windows Azure abstracts away everything about the infrastructure, including the underlying operating system, leaving you to focus on your application. Let's see how Azure's ability to maintain the servers your applications run on reduces cost.


SERVER SOFTWARE MAINTENANCE BEGONE!
Whether you're running an entire data center or hosting a website on a dedicated server at a hosting company, maintaining the operating system is usually your responsibility. Maintenance tasks can include managing antivirus protection, installing Windows updates, applying service packs, and providing security. If you're running your own dedicated machine on your own premises rather than it being hosted for you, then you're even responsible for performing backups.

In Windows Azure, because the tasks associated with maintaining the server are the responsibility of Microsoft, you can focus completely on your application. This situation greatly simplifies and reduces the cost of running a service.

A final cost consideration is that if you have a service hosted in Windows Azure, you don't have to worry about the licensing costs for the underlying operating system. You gain all the benefits of running under the latest version of Windows without paying for the costs of maintaining that software. The underlying software is abstracted away from your service, but the base underlying operating system of your service is Windows Server 2008. If you're running multiple servers, the cost of licensing usually runs into thousands of dollars.

Although you don't have to worry about hardware or software maintenance from an operational or cost perspective, you do have to worry about it from a software design perspective.


DESIGNING FOR DISTRIBUTION
Your services won't always be running on the same machine, and they might be failed over to another machine at any time. Failover might be caused by hardware failures, software maintenance cycles, or load distribution. You must design your software so that it can handle these failures. This might mean automatically retrying a failed operation when an exception occurs or reloading any local caches when a service restarts.
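Retrying a failed operation is the simplest of these patterns. The sketch below is a generic Python retry helper with exponential backoff; it isn't an Azure library, and the operation it wraps is a placeholder.

# Generic retry-with-backoff helper for transient failures (node recycled,
# brief network blip, and so on).  Not an Azure API; purely illustrative.
import time


def retry(operation, attempts=4, base_delay=0.5):
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise                                    # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))      # 0.5s, 1s, 2s, ...


# Usage: wrap any call that might fail transiently, e.g. a storage request.
result = retry(lambda: "pretend this talked to storage")
print(result)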

Source of Information : Manning Azure in Action 2010

Why run in the cloud?

So far we've said, "Isn't Azure shiny and cool?" We've also said, "Wow, it's so great I can take my existing IT app and put it in the cloud." But what we haven't asked is, "Why would I want to stick it in the cloud? Why would I want to host my applications with Microsoft rather than host them myself? What advantages do I get using this new platform?" The answers to these questions include the following:

• You can save lots of money.

• You won't need to buy any infrastructure to run your application.

• You don't need to manage the infrastructure to run your application.

• Your application runs on the same infrastructure that Microsoft uses to host its services, not some box under a desk.

• You can scale out your application on demand to use whatever resources it needs to meet its demands.

• You pay only for the resources that you use, when you use them.

• You're provided with a framework that allows you to develop scalable software that runs in the Windows Azure platform so your applications can run at internet scale.

• You can focus on what you're good at: developing software.

• You can watch football and drink milkshakes without being disturbed because someone pulled out the server power cable so they could do the vacuuming.

• You can save lots of money.

In case you think we're repeating ourselves by saying "You can save lots of money" twice, well, it's the key point: you can save a lot. We're often involved in large-scale systems for which the infrastructure costs millions (and most of the time, the servers sit idle). That's not including the cost of running these systems. The equivalent systems in Azure are about 10 percent of the cost. With that in mind, this section will show you a few of the ways the Windows Azure platform can help you out and save lots of money.

Source of Information : Manning Azure in Action 2010

Storing data in the cloud with Azure

Suppose you're developing a new podcasting application for Windows 7. For this application, you want to convert MP3 files to WMA. To convert an MP3 file, you first need to read the file from a hard disk (and eventually write the result). Even though there are thousands of different disk drives, you don't need to concern yourself with the implementation of these drives because the operating system provides you with an abstracted view of the disk drive. To save the converted file to the disk, you can write the file to the filesystem; the operating system manages how it's written to the physical device. The same piece of code that you would use to save your podcast will work, regardless of the physical disk drive.

In the same way that Windows 7 abstracts the complexities of the physical hardware of a desktop PC away from your application, Windows Azure abstracts the physical cloud infrastructure away from your applications using configuration and managed APIs.

Applications can't subsist on code alone; they usually need to store and retrieve data to provide any real value. In the next section, we'll discuss how Azure provides you with shared storage, and then we'll take a quick tour of the BLOB storage service, messaging, and the Table storage service.


Understanding Azure�s shared storage mechanism
If we consider the MP3 example in the context of Windows Azure, rather than abstracting your application away from a single disk, Windows Azure needs to abstract your application away from the physical server (not just the disk). Your application doesn't have to be directly tied to the storage infrastructure of Azure. You're abstracted away from it so that changes in the infrastructure don't impact your code or application. Also, the data needs to be stored in shared space, which isn't tied to a physical server and can be accessed by multiple physical servers.

Your services won't always be continually running on the same physical machine. Your roles (web or worker) could be shut down and moved to another machine at any time to handle faults or upgrades. In the case of web roles, the load balancer could be distributing requests to a pool of web servers, meaning that an incoming request could be performed on any machine.

To run services in such an environment, all instances of your roles (web and worker) need access to a consistent, durable, and scalable storage service. Windows Azure provides a scalable storage service, which can be accessed both inside and outside the Microsoft data centers. When you register for Windows Azure, you'll be able to create your own storage accounts with a set of endpoint URIs that you can use to access the storage services for your account. The storage services are accessed via a set of REST APIs that's secured by an authentication token.

Windows Azure storage services are hosted in the fabric in the same way as your own roles are hosted. Windows Azure is a scalable solution; you never need to worry about running out of capacity.
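Each storage account gets one endpoint per storage service, and the REST calls are plain HTTP requests against those URIs. The Python sketch below builds the documented endpoint shapes and shows how a blob in a public container could be read without authentication; the account, container, and blob names are made up, and writes or private reads would additionally need a signed Authorization header (not shown).

# Shape of the storage endpoints for an account, plus an unauthenticated read
# of a blob in a *public* container.  Account/container/blob names are made up;
# private containers would need a signed Authorization header (not shown).
from urllib.request import urlopen

ACCOUNT = "mystorage"   # placeholder storage account name
endpoints = {
    "blob":  f"http://{ACCOUNT}.blob.core.windows.net",
    "queue": f"http://{ACCOUNT}.queue.core.windows.net",
    "table": f"http://{ACCOUNT}.table.core.windows.net",
}

blob_url = f"{endpoints['blob']}/convertedfiles/Podcast01.wma"
print("GET", blob_url)
# data = urlopen(blob_url).read()   # works only if the container is public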


Storing and accessing BLOB data
Windows Azure provides the ability to store binary files (BLOBs) in a storage area known as BLOB storage.

In your storage account, you create a set of containers (similar to folders) that you can store your binary files in. In the initial version of the BLOB storage service, containers can either be restricted to private access (you must use an authentication key to access the files held in this container) or to public access (anyone on the internet can access the file, without using an authentication key).

We return to the audio file conversion (MP3 to WMA) scenario. In this example, you're converting a source recording of your podcast (Podcast01.mp3) to Windows Media Audio (Podcast01.wma). The source files are held in BLOB storage in a private container called Source Files, and the destination files are held in BLOB storage in a public container called Converted Files. Anyone in the world can access the converted files because they're held in a public container, but only you can access the files in the private container because it's secured by your authentication token. Both the private and public containers are held in the storage account called MyStorage.

BLOBs can be split up into more manageable chunks known as blocks for more efficient uploading of files. This is only the tip of the iceberg in terms of what you can do with BLOB storage in Azure. In part 4, we'll explore BLOB storage and usage in much more detail.
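Splitting a large file into blocks is, at heart, just chunking, as the hedged Python sketch below shows; the per-block REST calls that actually upload each chunk (Put Block and Put Block List) are omitted, and the 4 MB block size is only an example value.

# Split a local file into fixed-size blocks before uploading them one by one.
# The REST calls that actually upload each block are omitted; 4 MB is just an
# example block size, not a required value.
def split_into_blocks(path, block_size=4 * 1024 * 1024):
    with open(path, "rb") as f:
        index = 0
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            yield index, chunk       # (block number, bytes to upload)
            index += 1


# for block_id, data in split_into_blocks("Podcast01.mp3"):
#     upload_block(block_id, data)   # hypothetical per-block upload call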

BLOBs play the role of a filesystem in the cloud, but there are other important aspects of the storage subsystem. One of those is the ability to store and forward messages to other services through a message queue.


Messaging via queues
Message queues are the primary mechanism for communicating with worker roles. Typically, a web role or an external service places a message in the queue for processing. Instances of the worker role poll the queue for any new messages and then process the retrieved message. After a message is read from the queue, it's not available to any other instances of the worker role. Queues are considered part of the Azure storage system because the messages are stored in a durable manner while they wait to be picked up in the queue.

In the audio file conversion example, after the source podcast BLOB (Podcast01.mp3) is placed in the Source Files container, a web role or external service places a message (containing the location of the BLOB) in the queue. A worker role retrieves the message and performs the conversion. After the worker role converts the file from MP3 to WMA, it places the converted file (Podcast01.wma) in the Converted Files container. Windows Azure also provides you with the ability to store data in a highly scalable, simple Table storage service.
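The enqueue, poll, process, delete flow looks roughly like the Python sketch below. The queue object here is a stand-in with invented methods, not the real Azure queue client; the point is the shape of the loop, not the API.

# Illustrative worker-role polling loop.  `queue` is a stand-in with get/delete
# methods, not the real Azure queue client; the flow is what matters here.
import time


def worker_loop(queue, convert):
    while True:
        msg = queue.get_message()            # returns None if the queue is empty
        if msg is None:
            time.sleep(5)                    # back off instead of hot-polling
            continue
        convert(msg["blob_url"])             # e.g. MP3 -> WMA conversion
        queue.delete_message(msg)            # only delete after success


# A web role (or external service) would have done something like:
#   queue.add_message({"blob_url": ".../sourcefiles/Podcast01.mp3"})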


Storing data in tables
The Table storage service provides the ability to store serialized entities in a big table; entities can then be partitioned across multiple servers.

Using tables is a simple storage mechanism that's particularly suitable for session management or user authentication. Tables don't provide a relational database in the cloud; if you need the power of a database (such as when using server-side joins), then SQL Azure is the better fit.
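A table entity is essentially a property bag addressed by a partition key and a row key, which is what lets the service spread partitions across servers. The Python sketch below shows the shape of such entities and a simple grouping by partition; the property names and values are invented for the example.

# Shape of Table storage entities: a property bag plus PartitionKey/RowKey.
# Property names are invented; PartitionKey decides how entities are spread
# across servers, and (PartitionKey, RowKey) must be unique together.
from collections import defaultdict

entities = [
    {"PartitionKey": "sessions-2010-06", "RowKey": "user-42", "Cart": "3 shirts"},
    {"PartitionKey": "sessions-2010-06", "RowKey": "user-43", "Cart": "1 shirt"},
    {"PartitionKey": "sessions-2010-07", "RowKey": "user-42", "Cart": "empty"},
]

by_partition = defaultdict(list)
for e in entities:
    by_partition[e["PartitionKey"]].append(e["RowKey"])

print(dict(by_partition))   # queries within one partition stay on one server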

Source of Information : Manning Azure in Action 2010

How the fabric and the Fabric Controller work

Azure contains a massive number of servers, and there isn't any way they can possibly be managed on an individual basis. This is where the Azure operating system concept comes into play. By abstracting away all of those individual servers into a swarm or cloud, you only have to manage the cloud as a whole. This swarm of servers is called the fabric, and your applications run in the fabric when you deploy them to the cloud.

The fabric is managed by a software overlord known as the Fabric Controller. The Fabric Controller plays the role of the kernel and is aware of every hardware and software asset in the fabric. It's responsible for installing your web and worker roles onto the physical or virtual servers living in the fabric (this process is similar to how the kernel assigns memory or CPU to an application in a traditional operating system). The Fabric Controller is responsible for maintaining its inventory by monitoring the health of all its assets. If any of the assets are unhealthy, it's responsible for taking steps to resolve the fault, which might include the following:

• Restarting your role
• Restarting a server
• Reprogramming a load balancer to remove the server from the active pool
• Managing upgrades
• Moving instances of your role in fault situations

Windows Azure follows a cloud computing paradigm known as the fabric, which is another way of describing the data center. Like in the movie The Matrix, the fabric is everywhere. Every single piece of hardware (server, router, switch, network cable, and so on) and every VM is connected together to form the fabric. Each resource in the fabric is designed and monitored for fault tolerance. The fabric forms an abstract representation of the physical data center, allowing your applications to run in the fabric without knowledge of the underlying infrastructure.

The Fabric Controller monitors and interacts with the servers. It's the central traffic cop, managing the servers and the code that's running on those servers. The Fabric Controller performs the job of the kernel (except across multiple servers at a server level rather than at CPU and memory level) in terms of allocating resources and monitoring resources.

One of the jobs that the Fabric Controller doesn't do (but that a kernel does) is the abstraction of the I/O devices. In Azure, this job is performed by storage services.

Source of Information : Manning Azure in Action 2010

Specialized Cloud Software Providers

Appistry
Appistry delivers Cloud IT infrastructure to the enterprise, for intelligence and defense organizations, and to ISVs and SaaS providers.

Appistry supports public, private, and hybrid clouds. Appistry serves large enterprise customers, including FedEx, Sprint, and the U.S. government.
Web site: www.appistry.com


BMC Software
BMC Software is a venerable software vendor dating from the early mainframe days. The new BMC product, dubbed Cloud Lifecycle Management, is aimed primarily at managing private clouds but also supports hybrid configurations. It provides self-service management, provisioning, and unified management. It includes:


• A Service Catalog, purpose-built for cloud offerings

• A Self-Service Portal, allowing service owners to perform basic administrative activities on their own services (and only on their own services), and to request new services with . . .

• A Service Request Wizard, permitting customized, on-demand provisioning of services within constraints defined and controlled by the catalog designer, as well as service retirement and resource reclamation

• Pre-built workflows that integrate with existing IT management systems, allowing organizations to introduce Cloud while continuing to leverage existing policies, processes, people, and tools

• Automated full-stack layered provisioning across heterogeneous platforms

Source: http://communities.bmc.com/communities/community/bsm_initiatives/cloud/blog/2010/05/25/introducing-cloud-lifecyclemanagement

BMC states that the new product was created in conjunction with several (unnamed) strategic design partners (both enterprise and service providers in the United States as well as in EMEA and APAC countries) and further validated by an additional 25 customers. Working closely with such a broad set of customers during the design and implementation has helped BMC ensure that the resulting product meets clear and compelling customer needs and delivers immediate real-world value.
Web sites:

� http://documents.bmc.com/products/documents/18/26/141826/141826.pdf

� www.bmc.com/Cloud

� http://documents.bmc.com/products/documents/32/99/133299/133299.pdf


Nasuni
Nasuni is an interesting offering for cloud data storage. Nasuni has partnered with four cloud providers (Amazon AWS, Iron Mountain Digital, Nirvanix, and Rackspace) to provide encrypted, portable file storage. Data is encrypted by the Nasuni Filer on the client premises, using OpenPGP with AES-256, and remains encrypted in the cloud. This guarantees end-to-end data protection, as the data is never visible to anyone at Nasuni or at the cloud provider.

Customers can escrow their encryption keys with Nasuni or a trusted third party, or they can generate and store their own keys. The Nasuni Filer also functions like a traditional NAS, retaining such NAS functionality as Windows Shares and Active Directory. Shares are created and access control is managed in a familiar way. You can also consolidate any number of NAS boxes into one Nasuni Filer.

Nasuni's charges vary by the cloud provider. Monthly storage costs are $0.15 per GB on AWS and on Rackspace, $0.48 per GB for Nirvanix, and $0.65 per GB for Iron Mountain. The last two provide various value-added services, which justify their higher charges. For example, Nirvanix's standard service utilizes RAID 6, striping files to three separate discs to ensure data integrity. Customers have the option of replicating data on multiple nodes, where each node utilizes RAID 6. This process is automated using Nirvanix's policy-based Checksum operations on data uploads, along with frequent internal integrity checks to preserve the integrity of data in transit and at rest. Moreover, every file copy or transfer within the Storage Delivery Network is validated against a stored MD5 hash to verify that the additional copy has not been corrupted in transit.
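Verifying a copy against a stored MD5 hash is easy to illustrate; the Python sketch below shows the general idea and is in no way Nirvanix's actual implementation.

# Generic integrity check: compare a freshly computed MD5 digest with the one
# stored when the file was first uploaded.  Illustrative only.
import hashlib


def md5_of(path, chunk_size=1024 * 1024):
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_copy(path, stored_md5):
    return md5_of(path) == stored_md5   # False means the copy was corrupted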

The ability to move data easily among its providers and the customer's facilities helps prevent lock-in.
Web site: www.nasuni.com

Source of Information : Implementing and Developing Cloud Computing Applications 2011

SaaS Providers

NetSuite
NetSuite's SuiteCloud Platform is a comprehensive offering of on-demand products and development tools. NetSuite is also a leading provider of Web-based business software suites for CRM, ERP, accounting, and e-commerce.
Web site: www.netsuite.com/portal/home.shtml


Intuit
Founded in 1983, Intuit makes the well-known QuickBooks software; its annual revenue exceeds $3 billion. The Intuit Partner Platform allows customers to:

• Rapidly build and deploy rich Software as a Service (SaaS) apps capable of seamless integration with QuickBooks data

• Reach a potential market of nearly 25 million users within the 4 million small businesses using QuickBooks
Web site: https://ipp.developer.intuit.com/ipp/native


Intacct
Intacct is a leading provider of cloud financial management and accounting applications for companies of all sizes. The Intacct cloud financial management system provides comprehensive functionality, is easy to use and configure to match your business processes, and is designed to integrate easily with your other key business systems.

• Fully auditable and Sarbanes-Oxley compliant; prevents fraud with strict permissions, separation of duties, and financial controls.

• Supports distributed and global businesses, making it easy and quick to consolidate financials from multiple business units that may also transact in multiple currencies.

• Automates key business processes, freeing you from time-consuming and manual work.

• Works well with other key business applications, including Salesforce CRM, ADP, Avalara, Avectra, BNA Software, Boomi, Callidus, CCH, Compupay, expensecloud, Pervasive, SmartTurn, and SpringCM, eliminating islands of information and the wasted time, errors, and duplication that come from re-keying.

• Provides a Web-based interface, enabling you to access your financials from anywhere, at any time, as long as you have an Internet connection. Intacct runs on the Oracle DBMS and uses SunGard for disaster recovery. The AICPA and its subsidiary CPA2Biz have named Intacct their preferred provider of financial applications, and CPA2Biz is now the exclusive distributor of Intacct to the CPA profession.
Web site: http://us.intacct.com/products/index.php


FinancialForce.com
FinancialForce.com, developed on Force.com, delivers comprehensive accounting and is pre-integrated with Salesforce.com's CRM.
Web site: www.financialforce.com


Coupa Software
Coupa Software is a spend management and procurement as a service provider.

The promise of Coupa is that it is an extension of existing enterprise planning systems. Coupa tries to make procurement easier for employees of all stripes within an enterprise. Coupa recently added enterprise clients such as Michaels, Diagnostic Health Corp., and PGi. Salesforce.com and Taleo are also customers.

Coupa is looking to solve the buy-side problem and apply it to a broader audience of enterprise workers. "You get into any company and any person sitting in a cubicle and they ask, 'how do you buy something?'" said Coupa CEO Rob Bernshteyn. "We're looking at wider use cases."

Coupa now competes somewhat with Ariba, which launched its Commerce Cloud, a Web-based platform for buying and selling between corporations. Ariba is more similar to eBay and has an on-premises software focus, compared with Coupa, a cloud procurement option built on Amazon Web Services infrastructure.

Bernshteyn says that Coupa is designed to be a strategic extension of SAP and Oracle.


AT&T
AT&T TopLineISV enables independent software vendors (ISVs) to launch a software-as-a-service (SaaS) business. AT&T now allows ISVs to tap into AT&T's infrastructure, methodologies, expertise, and reputation to avoid newcomers' missteps and immediately seize a competitive advantage.
Web site: http://www.business.att.com/enterprise/Family/applicationhosting-enterprise/software-as-a-service-enablement-enterprise

Source of Information : Implementing and Developing Cloud Computing Applications 2011

PaaS Providers

AT&T
AT&T Enterprise Hosting Services support business applications with a range of monitoring and management options. Their customer portal, BusinessDirect, allows customers personalized, secure online access to detailed information about hosted infrastructure and applications.


Terremark
Terremark offers Cloud Express, a cloud designed for developmental teams and department needs. This offering permits quick setup and gives their clients access to flexible, high-performance computing, how and when they need it. The client can configure resources exactly as they are needed, and pay as they go.

Terremark cloud users can harness the extra level of security and control in the cloud with the VPN-Cubed Overlay Network. Terremark offers the VPN-Cubed IPsec Free Edition and SSL Free Edition.

Terremark's main offering is called Enterprise Cloud. It offers precise, dynamic allocation of computing resources, with the scale, performance, and security to handle enterprise-wide applications. Its solutions offer multiuser capacity, dedicated resource pool architecture, and a role-based security model, as well as private network connectivity and physical device integration.

The Enterprise Cloud combines the power and flexibility of IaaS with the expertise, security, and availability that large organizations with mission-critical computing needs demand of their infrastructure. It provides an easy-to-use Web-based management interface that gives its clients command and control over a cloud-based resource pool of compute, storage, and network, built on a fully clustered enterprise-class computing architecture featuring virtualization technology from VMware.

Terremark's charges are based on resources rather than on large and inflexible server units, allowing for more precise and dynamic allocation of computing resources when and where they're needed.

The Enterprise Cloud supports more than 450 operating system configurations, including Windows Server 2003 and 2008, Red Hat Enterprise Linux, CentOS, FreeBSD, Solaris, SUSE Linux, and Ubuntu. If you have pre-existing ISOs or custom OS needs, you have the freedom to upload and install on blank virtual machines.

The Enterprise Cloud can be combined with your dedicated servers and managed from the same Web-based interface. Exposing a dedicated server in The Enterprise Cloud takes only a few minutes; once a LAN-LAN connection is established, you have complete control over these dedicated resources. This provides the best of both worlds: the elasticity of the cloud combined with the security and compliance of dedicated hardware.
Web site: www.terremark.com


EngineYard
Engine Yard AppCloud is a hosting alternative for a wide range of Rails applications, from smaller-scale Web applications that run within a single compute instance to production applications that require the elasticity, scalability, and reliability of a Rails Application Cloud. Engine Yard offers automated load balancing and full-spectrum monitoring, along with essential infrastructure services such as persistent storage, Web-based gem installs, data backup/restore, and system monitoring, all backed by Engine Yard Rails support and the battle-tested Engine Yard Ruby on Rails stack, a standardized Ruby on Rails Web application framework stack designed and optimized for business-critical Ruby on Rails application development. The stack has been refined over the past three years by expert cluster engineers and seasoned DBAs for Web server performance, database efficiency, and long-term scalability.
Web site: www.engineyard.com

Source of Information : Implementing and Developing Cloud Computing Applications 2011

IaaS Providers

Rackspace
More than 99,000 customers (including more than 80,000 cloud computing customers) and 2,900 employees make Rackspace a leading cloud hosting service. They offer the Rackspace Cloud, providing Cloud Sites for hosting scalable Web sites, Cloud Servers as scalable, virtualized servers, and Cloud Files, a storage service in the cloud. They are also behind OpenStack.
Web site: www.rackspacecloud.com/cloud_hosting_products


GoGrid
GoGrid offers a wide variety of ready-to-go Windows and Linux Cloud Servers, either as on-demand, self-service hosting or as dedicated hosting of scalable networks in a hybrid environment. It has the most generous SLA in the industry: 24/7 support and one hundred percent uptime.


ElasticHosts
ElasticHosts provides virtual servers based on Linux KVM, running on its own server farms, located in three fully independent data centers across two continents. ElasticHosts monitors, services, and upgrades the underlying infrastructure, providing customers with high performance, availability, and scalability.

The company states that its virtual servers offer the power and control of a traditional hosted server, with the following benefits:

• Instant flexibility: to scale your machine up or down using any web browser

• Peace of mind: of running on professionally managed infrastructure, with automatic failover

• Cost efficiency: of buying exactly the capacity you need on an infrastructure built to scale, and growing only when you need to

Source: www.elastichosts.com/cloud-hosting/infrastructure
Web site: www.elastichosts.com


SymetriQ
SymetriQ is a provider of cloud computing Infrastructure-as-a-Service (IaaS). SymetriQ offers virtual server hosting on a UK-based infrastructure, which the client can deploy in minutes to create a virtual data center. The company says:

Specifically we provide you with the following services via our virtual servers and all are available on a pay-as-you-go basis, like utility billing:

• Processing power
• Memory
• Storage
• Bandwidth
• Operating Systems (Windows, Solaris, Linux)

Source: www.symetriq.com/what-we-offer

Source of Information : Implementing and Developing Cloud Computing Applications 2011

Unisys

Unisys was formed in 1986 through the merger of two computer industry pioneers, Sperry and Burroughs. Its history goes back more than one hundred years.

Unisys' equipment line includes the company's ClearPath family of mainframes, capable of running not only mainframe software but also the Java platform and the JBoss or Java EE Application Server concurrently. ClearPath systems are available in either a Unisys OS 2200-based system (Sperry) or an MCP-based system (Burroughs). Those ClearPath models are named Dorado and Libra, respectively. The ES7000 server family uses Intel processors and Windows and/or open source Linux operating systems.


Unisys Stealth Security
Because Unisys has a large base of customers in the public sector, it places strong emphasis on security; this has carried over to its cloud offerings. The Unisys Stealth security solution, an innovative, patent-pending data protection technology initially designed for government applications, is now available to commercial clients. The Unisys Stealth technology cloaks data through multiple levels of authentication and encryption, bit-splitting data into multiple packets so it moves invisibly across networks and protects data in the Unisys secure cloud. The Unisys Stealth technology enables encrypted "data in motion" to remain invisible as it traverses the infrastructure until it is reassembled upon delivery to authorized users.

Unisys also announced that it is planning a Stealth solution for data security on storage area networks (SAN) by providing the same cloaking capability for "data at rest" in a virtualized storage environment.

Supporting its cloud computing strategy, Unisys announced three principal offerings:


Unisys Secure Cloud Solution
Unisys Secure Cloud Solution is a managed cloud service providing comprehensive data security for multi-tenant environments, in which clients share a common IT infrastructure. Because the solution uses Stealth technology, Unisys says enterprise clients can move existing business applications, including those with secure or sensitive data such as human resources, financial, and healthcare information, into a managed, shared cloud service without needing to rewrite or alter applications.

As a managed public cloud service, Unisys Secure Cloud Solution enables global delivery of multiple new services: Secure Infrastructure as a Service (IaaS); Secure Platform as a Service (PaaS); My Secure Application as a Service (AaaS); three Secure Software as a Service (SaaS) offerings: Secure Unified Communication as a Service, Secure Virtual Office as a Service and Secure Document Delivery Service; and Secure Disaster Recovery as a Service (DRaaS).

Based on the Information Technology Infrastructure Library (ITIL) standard for service management, the Unisys Secure Cloud Solution uses the automation and virtualization capabilities of Unisys real-time infrastructure solutions, which enable the IT infrastructure to respond automatically to changes in the business environment.

The Secure Cloud Solution also provides a self-service portal that enables organizations to scale IT resources in real time. The Unisys Converged Remote Infrastructure Management Suite solution provides a unified view of the entire IT infrastructure and enables end-to-end management of the Secure Cloud Solution.

As client needs or data security requirements dictate, the Unisys Secure Cloud Solution can balance workloads across a global network of Unisys data centers, which are certified to key international standards such as ISO/IEC 27001:2005 for security, ISO/IEC 20000 for service management, and the SAS 70 Type II auditing standard.


Unisys Secure Private Cloud Solution
The Unisys Secure Private Cloud Solution is designed to help organizations realize the operational and economic benefits of cloud computing in their internal data centers. This solution leverages the technology and expertise behind Unisys Secure Cloud Solution. Unisys says that it enables clients to reduce IT costs through server and storage virtualization; remove IT bottlenecks through automated operations and immediate, as-needed self-service provisioning of resources; and align the supply of IT resources to fluctuating business demand. Clients can typically run existing Microsoft Windows applications without alteration, reducing migration costs and realizing the business benefits of cloud deployment more quickly.

Clients using the Secure Private Cloud Solution can either monitor the infrastructure themselves or have Unisys do it through the Unisys Converged Remote Infrastructure Management Suite solution. They can also optionally choose to implement Unisys Stealth for additional data protection.


Unisys ClearPath Cloud Solutions
Unisys ClearPath Cloud Solutions allow Unisys mainframe clients to subscribe to and access Unisys-owned computing facilities incrementally to run, modernize, or develop ClearPath applications. Unisys says this helps clients avoid unscheduled capital equipment expenditures and make more efficient use of their own ClearPath systems.

In July 2010, Unisys announced availability of the first solution in the series: the ClearPath Cloud Development and Test Solution, offered as "platform as a service" (PaaS). This solution provides access to Unisys-hosted virtual resources, when needed, for creation, modernization, and functional testing of specific applications.

The ClearPath Cloud Development and Test Solution also draws on the infrastructure behind Unisys Secure Cloud Solution for provisioning and resource management, including the Unisys Converged Remote Infrastructure Management Suite.

Unisys plans additional ClearPath Cloud Solutions, including a full cloud-based production environment for smaller applications; data replication and disaster recovery to preserve critical business information; and managed, industry-specific "software as a service" (SaaS) application solutions to complement existing solutions such as the Unisys Logistics Management System for air cargo management.


Unisys Cloud Transformation Services
In addition, Unisys Cloud Transformation Services are available to clients of all three Unisys cloud computing solutions. These advisory, assessment, and migration services allow clients to plan and migrate to the type of cloud environment that best meets their business goals. Unisys service professionals help clients assess what application workloads can be moved to the cloud, how that can be done, and the technological, financial, and security implications of their choices.


Unisys Partnerships
For the Unisys Secure Cloud Solution and Secure Private Cloud Solution, Unisys has partnered with VMware on the software side. On the hardware side, Unisys has partnered with Intel to develop the computing architecture. Unisys has also collaborated with EMC, drawing on that partner's storage technology to create the information infrastructure that enables fast, secure storage and delivery of information in the cloud. Software from partners Scalent Systems and iWave Software enables repurposing and orchestration of IT resources to meet the service levels required for clients' business. Software from BMC helps power the Converged Remote Infrastructure Management Suite, which provides unified operational control of the cloud solutions.

Source of Information : Implementing and Developing Cloud Computing Applications 2011

CA Technologies

CA Technologies (CA) was formerly known as Computer Associates, Inc., and is now legally known as CA, Inc. CA was one of the first independent software vendors that sold software for IBM mainframes, creating an industry. Previously, there was no systems software industry, as IBM gave away its software and charged only for hardware. Founded in 1976, CA is still the world's leading independent information technology (IT) management software company.

In February 2010, CA announced a definitive agreement to acquire privately held 3Tera, Inc., a pioneer in cloud computing. 3Tera's AppLogic offers an innovative solution for building cloud services and deploying complex enterprise-class applications to public and private clouds using an intuitive graphical user interface (GUI). 3Tera is discussed elsewhere in this book.

In May 2010, CA unveiled the CA Cloud-Connected Management Suite, intended to help customers meet these emerging cloud-management challenges with four key products:

• CA Cloud Insight will enable organizations to gain visibility into internal IT environments in terms of relative measures of service, such as quality, agility, risk, cost, capability, and security. Customers will be able to compare and contrast internal options with external ones to determine the best choice based on their current business situation.

• CA Cloud Compose will allow customers to abstract applications from their underlying infrastructure to make it easy to deploy, manage, and move composite infrastructure stacks to internal or external cloud environments. Using an extensible catalog of reusable software components, CA Cloud Compose customers will be able to leverage cloud computing capabilities to increase their quality of service while simultaneously reducing cost and time to market for new solutions. It is an organized way of dealing with code reuse. By providing both the cloud infrastructure and the service creation and deployment capabilities in one product, CA Cloud Compose will provide integrated high availability, metering, and operations monitoring and management capabilities to help customers rapidly achieve their business goals through their IT supply chain.

• CA Cloud Optimize will help customers analyze alternatives across business measurement characteristics and present options to the user for how to improve IT service delivery and sourcing choices. Customers will be able to actively lower their operating costs and be much more agile in aligning with business requirements for capacity, performance, and other service parameters. This will give customers the ability to be more proactive about getting increasing business value out of their IT budgets.

• CA Cloud Orchestrate will enable customers to manage the deployment of the suggestions that come from CA Cloud Optimize. This will build on a broad array of capabilities from the existing product portfolio, recently acquired companies, and new organic development within CA Technologies.

Following CA's long-time strategy, CA has entered the cloud arena by acquisition, and new solutions will build upon AppLogic, the technology recently acquired from 3Tera that simplifies the design and deployment of composite applications as a single logical entity in the cloud. CA plans to develop new software organically as well. By unifying application configuration, application deployment, and a virtual server fabric (functions that are otherwise typically performed in a fragmented manner), AppLogic helps reduce costs, improve productivity, and increase service quality.

CA also recently acquired:

• Oblicore, which provides service level management for cloud vendor management and assurance of cloud service quality

• Cassatt, whose software solutions and technologies support operating green data centers

• NetQoS, which provides network performance management software and services that improve application delivery across complex networks

Supporting this new product family are two other important initiatives: the Service Measurement Index and Cloud Commons.

The Service Measurement Index (SMI) consortium, being led by Carnegie Mellon University (CMU), is an initiative to address the need for industry-wide, globally accepted measures for calculating the benefits and risks of cloud computing services. With the help of a planned consortium of educational institutions, end-user organizations, and technology providers, CMU expects to develop a set of business-centric performance indicators that will provide IT with a standardized method for comparing cloud services from internal or external providers.

Cloud Commons, sponsored by CA Technologies along with other organizations and participants and launched in 2010, provides a place for like-minded end users, partners, industry experts, and others to share experiences, best practices, and qualitative and quantitative information about the many types of cloud services available. Cloud Commons enables people to describe their experiences with cloud services and compare them with others' experiences. The site also facilitates collaboration through the ability to comment on news, best practices, and other insights, and to interact with others in the community.

Nimsoft Unified Monitoring Solutions (Nimsoft was acquired by CA in 2010), available either as an on-premise or on-demand (SaaS) application, provide complete visibility into the performance and availability of IT environments. With a single, highly scalable monitoring toolset, the Nimsoft Unified Monitoring architecture eliminates the need to deploy a separate monitoring solution for outsourced services, public or private clouds, or SaaS implementations.

In addition, CA is leveraging its own IT Management Software, which is now available using a Software as a Service delivery model (ITM SaaS) with offerings such as Clarity On Demand for project and portfolio management. It is also investing heavily in its partnership with Carnegie Mellon.

CA is also planning SaaS offerings for most of its IT management solutions, not unlike IBM's approach. CA also announced an On Demand Self Service Portal, scheduled to be available in summer 2010, which will enable customers to easily buy, provision, and administer CA Technologies IT management solutions as a service through a common, modern interface.

CA is focused on its traditional strengths in management and security. Businesses are increasingly consuming "composite services" that combine services from multiple sources, spanning virtual and physical resources both inside and outside the firewall. Without extending existing management disciplines and tools, this mash-up of services can create rapidly multiplying security gaps and monitoring blind spots. This can leave the business vulnerable to holes in security and disruptions in service availability and performance, while making it harder for IT to gain advance warning of impending failures or to diagnose the root cause of a problem hiding in a virtual global haystack.

Greg Montjoie, general manager of Hosting Solutions at Internet Solutions, says that CA aims to distinguish between cloud computing and computing in the cloud. While cloud computing allows maintenance, migration, and hardware optimization irrespective of geographic location, computing in the cloud involves an individual accessing specific applications via the Internet to perform computing tasks.

Source of Information : Implementing and Developing Cloud Computing Applications 2011

IBM Eleven Cloud Computing Competency Centers Worldwide

IBM's cloud computing competency centers are designed to showcase IBM cloud computing infrastructure, application, and services capabilities through specific competency and training solutions, as well as to offer additional services to clients. By June 2010, IBM had opened eleven centers:

• Singapore
• Dublin, Ireland
• Beijing, China
• Tokyo, Japan
• Johannesburg, South Africa
• Sao Paulo, Brazil
• Bangalore, India
• Seoul, South Korea
• Hanoi, Vietnam
• The Netherlands
• Wuxi, China

Craig Sowell, Director of Cloud Marketing and Communications at IBM, was kind enough to discuss IBM's cloud strategy with me. The points IBM stresses are that its cloud solutions:

• Assist in reducing capital and licensing expenses by as much as fifty to seventy-five percent using virtualized resources

• Help reduce operating and labor costs by as much as thirty to fifty percent by automating development and testing resource provisioning and configuration

• Facilitate innovation and time to market by helping reduce development and testing setup time from weeks to minutes

• Improve quality through more accurate configurations and enhanced modeling to help reduce defects by as much as fifteen to thirty percent

• Achieve better utilization, as the average enterprise devotes up to fifty percent of its entire technology infrastructure to development and test, but typically up to ninety percent of it remains idle

IBM purchased Rational a number of years ago, and is particularly strong in supporting application development and testing. Smart Business Development and Test on the IBM Cloud offers flexible provisioning on demand, at a predetermined cost.

The IBM Smart Business Development and Test on the IBM Cloud is a dynamically provisioned and scaled runtime environment that provides a complete environment to develop and test application code; of course, it highlights IBM's software offerings. These include tools to configure and manage the dynamic execution environment, an IDE that facilitates the direct use of the execution environment, and build and test tools that can exploit the execution environment. Other areas supported include SaaS solutions for collaboration and governance infrastructure and resource repositories for source and reusable assets. It is available now in North America.

The IBM Smart Business Development and Test Cloud provides an on-premises cloud, built by IBM Services. IBM CloudBurst offers pre-integrated hardware, storage, virtualization, and networking to create an on-premises cloud environment. Service management capability is delivered via IBM Tivoli Service Automation Manager V7.2. IBM's Smart Business Desktop Cloud Solution is an on-premises approach that provides virtualized desktops, replicating a standardized desktop across the enterprise.

In May 2010, IBM acquired Cast Iron Systems, a cloud integrator, to deliver industry-leading cloud integration software, appliances, and services. It does this on a cross-vendor basis; for example, it integrates cloud applications from providers (such as Salesforce.com, Amazon, NetSuite, and ADP) with on-premise applications (such as SAP and JD Edwards). Cast Iron has hundreds of prebuilt templates and services expertise, reducing custom coding and allowing cloud integrations to be completed in the space of days, rather than weeks or longer. These results can be achieved using a physical appliance, a virtual appliance, or a cloud service. IBM also strengthened its offerings by acquiring Sterling Commerce from AT&T.

PayPal, the online payment system, is extending its global payments platform, PayPal X, into the cloud. PayPal is working with the IBM Cloud Labs to quickly monetize new applications developed and made available via smart phones. "We want to provide a very simple way to make payments available on all platforms, including mobile applications," said Osama Bedier, PayPal's vice president of platform and emerging technologies. "The IBM cloud provides a platform for developers to come together as a community, to create, develop and test new applications. We look forward to seeing the payments innovations our developers create through the IBM cloud and bringing the wallet into the cloud."

One of IBM's test beds for cloud computing has been the Interior Department's National Business Center, a service center that handles payroll, human resources, financial reporting, contracting services, and other computing tasks for dozens of federal agencies. The center runs two large data centers, one in Northern Virginia and another outside Denver. Douglas J. Bourgeois, the center's director, said he is introducing several cloud-style applications over the next nine months, including Web-based training and staffing and recruitment software. In tests with financial and procurement software, the cloud-computing environment has delivered efficiencies of 40 to 60 percent in productivity and power consumption, he stated. "For us, like other data centers, the volume of data continues to explode," Mr. Bourgeois said. "We want to solve some of those problems with cloud computing, so we don't have to build another $20 million data center."

Source: http://www.nytimes.com/2009/06/15/technology/businesscomputing/15blue.html

Internationally, IBM has several interesting collaborations:

• Nissay Information Technology is collaborating with IBM to build a cloud-based development and test environment for Japan's Nippon Life Insurance that will support the development of mission-critical Web systems and allow more flexibility in the allocation of IT resources. Prior to the cloud environment, developers needed a month to allocate resources; now the process takes only hours.

• IBM has created Malaysia's first animation cloud for the Multimedia Development Corporation (MDeC). The new center is allowing animators, start-ups, and creative content companies to take advantage of cloud computing to significantly reduce in-house production time (animators can perform rendering jobs up to 8 times faster than with local workstations) and produce high-quality images for computer-generated content.

• IBM partnered with the Spanish hotel company Sol Meliá to manage the company's technology infrastructure, including its central IT platform and applications (such as e-mail and the reservation system), to host the Web site used by guests to book hotel rooms, and to manage the equipment used daily by 4,500 users, in an effort to reduce operational cost and foster innovation. As part of the deal, IBM is providing desktop cloud services to Sol Meliá users in Spain and the rest of Europe and to 19 hotels in Latin America.

• Options IT, a leading provider of high-performance financial technology infrastructure-as-a-service (IaaS) to buy-side and sell-side firms, is working with IBM to deliver an optimized private cloud environment for financial services firms. With the new infrastructure, Options IT is able to help clients take advantage of global trading opportunities with new resources.

Source of Information : Implementing and Developing Cloud Computing Applications 2011

IBM - Enterprise Cloud Vendors

"Cloud is an important new consumption and delivery model for IT and business services. Large enterprises want our help to capitalize on what this model offers in a way that is safe, reliable and efficient for business," said Erich Clementi, General Manager, Enterprise Initiatives at IBM.

The company believes that digital and physical infrastructure are converging as a result of the instrumentation of all kinds of business and IT assets, and that while the impact will vary from industry to industry, cloud will be the delivery engine for emerging industry systems. Analytics are also going to play a huge role in collaborative industry clouds: to extract valuable insights, clients need tools to mine the massive amounts of data and pull out actionable information. Software is a focus as well; IBM is integrating software into services that can be sold through the cloud to deliver the high-value solutions enterprise clients are requesting, including integrated hardware, software, and services packages.

IBM was a very early proponent of both virtualization and cloud computing. IBM Smart Business cloud solutions support clouds built behind the enterprise firewall, or the IBM cloud. IBM's public cloud offering is still new, while its private cloud offerings are, for the cloud, very mature.

Unlike Google, which has collaborated with VMware, IBM aims to hand-hold enterprises into the cloud, both by providing services and by working with Red Hat to offer Red Hat Enterprise Linux, bypassing VMware with the announcement of Smart Business Development & Test on the IBM Cloud. IBM has also partnered with SOASTA, maker of CloudTest and a leading provider of cloud testing; SOASTA also partners with CA, Amazon, and Microsoft. IBM is competing head-on with VMforce and Salesforce.com. IBM also partners non-exclusively with RightScale, which offers Hyperic HQ.

Source of Information : Implementing and Developing Cloud Computing Applications 2011

What Is Google App Engine?

Google App Engine lets you run (host) your own Web applications on Google's infrastructure. However, by no means is this a "rent a piece of a server" hosting service. With App Engine, your application is not hosted on a single server. There are no servers to maintain: you just upload your application, and it is ready to serve your users. Just as servicing a Google search request may involve dozens or even hundreds of Google servers, all totally hidden, with the request satisfied in a fraction of a second, Google App Engine applications run the same way, on the same infrastructure. This is the unique aspect of Google's approach. Yes, you cede some control to Google, but you are rewarded by being totally free of the infrastructure, capacity management, and load balancing tasks that enterprises typically have to manage, irrespective of whether they are self-hosting or hosting on someone else's PaaS or IaaS.

You can serve your app from your own domain name (such as http://www.example.com/) using Google Apps, or you can serve your app using a free name on the appspot.com domain. You can choose to share your application with the world or limit access to members of your organization.

Google App Engine supports apps written in several programming languages:

With App Engine's Java runtime environment, you can build your app using standard Java technologies, including the JVM, Java servlets, and the Java programming language, or any other language using a JVM-based interpreter or compiler, such as JavaScript or Ruby. App Engine also features a dedicated Python runtime environment, which includes a fast Python interpreter and the Python standard library. The Java and Python runtime environments are built to ensure that your application runs quickly, securely, and without interference from other apps on the system.

Source: http://code.google.com/appengine/docs/whatisgoogleappengine.html
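To make the runtime model above concrete, here is a minimal sketch of a handler for the Python runtime, written against the webapp framework bundled with the 2011-era App Engine SDK. The file name, URL mapping, and greeting text are illustrative only, not taken from Google's documentation or from this book.

# helloworld.py -- a minimal App Engine handler for the Python runtime
# (illustrative sketch; names and text are invented for this example)
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    def get(self):
        # Respond to HTTP GET requests with plain text.
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('Hello from App Engine!')

# Map the root URL to the handler; App Engine runs this as a WSGI app.
application = webapp.WSGIApplication([('/', MainPage)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()

A handler like this is uploaded with the SDK's appcfg.py tool, after which App Engine handles routing, scaling, and load balancing automatically.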

As with most cloud-hosting services, with App Engine you only pay for what you use. Google levies no set-up costs and no recurring fees. Similar to Amazon's AWS, resources such as storage and bandwidth are measured by the gigabyte.

App Engine costs nothing to get started. All applications can use up to 500 MB of storage and enough CPU and bandwidth to support an efficient app serving around 5 million page views a month, absolutely free. When you enable billing for your application, your free limits are raised, and you only pay for resources you use above the free levels.

Application developers have access to persistent storage technologies such as the Google File System (GFS) and Bigtable, a distributed storage system for unstructured data. The Java version supports asynchronous nonblocking queries using the Twig Object Datastore interface. This offers an alternative to using threads for parallel data processing.
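The Twig interface mentioned above is Java-side; as a rough illustration of the same persistent storage idea, the sketch below uses the standard Python datastore API (google.appengine.ext.db), which stores entities in Bigtable-backed storage. The GuestbookEntry model and its properties are invented for this example.

from google.appengine.ext import db

class GuestbookEntry(db.Model):
    # An entity persisted in the App Engine datastore, which is built
    # on Bigtable. Model name and properties are illustrative only.
    author = db.StringProperty()
    content = db.TextProperty()
    created = db.DateTimeProperty(auto_now_add=True)

def save_entry(author, content):
    # Construct and store one entity; put() persists it durably.
    entry = GuestbookEntry(author=author, content=content)
    entry.put()
    return entry.key()

def recent_entries(limit=10):
    # Query with GQL, a SQL-like query language without joins; fetch()
    # returns at most `limit` matching entities.
    return GuestbookEntry.gql('ORDER BY created DESC').fetch(limit)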

"With Google App Engine, developers can write Web applications based on the same building blocks that Google uses," Kevin Gibbs, Google's technical lead for the project, wrote in The Official Google Blog. "Google App Engine packages those building blocks and provides access to scalable infrastructure that we hope will make it easier for developers to scale their applications automatically as they grow."

Google App Engine has appeared at a time when an increasing number of tech companies are moving their operations to the cloud; it places Google squarely in competition with Amazon's Elastic Compute Cloud (EC2) and Simple Storage Service (S3) offerings.

Google says its vision with Google App Engine is to offer developers a more holistic, end-to-end solution for building and scaling applications online. Its servers are configured to balance the load of traffic to developers' applications, scaling to meet the demand of an influx of traffic. App Engine also includes APIs for user authentication, so applications can let users sign in with Google Accounts, and for e-mail, to manage communications, as the sketch below shows.
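Here is a hedged sketch of those two service APIs in the Python runtime: the users service for Google Accounts sign-in and the mail service for sending a notification. The handler wiring and addresses are placeholders, not part of the original text.

from google.appengine.api import mail, users

def greet_user(handler):
    # User authentication: the users service handles Google Accounts
    # sign-in, so the application never touches passwords. `handler`
    # is assumed to be a webapp.RequestHandler instance.
    user = users.get_current_user()
    if user:
        handler.response.out.write('Hello, %s' % user.nickname())
    else:
        handler.redirect(users.create_login_url(handler.request.uri))

def notify_admin(subject, body):
    # E-mail API: the sender must be an administrator of the app or the
    # currently signed-in user; the address below is a placeholder.
    mail.send_mail(sender='admin@example.com',
                   to='admin@example.com',
                   subject=subject,
                   body=body)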

InternetNews.com reported,

Through its initial preview, Google's App Engine will be available free to the first 10,000 developers who sign up, with plans to expand that number in the future.
During that period, users will be limited to 500MB of storage, 10GB of daily bandwidth and 5 million daily page views, the company said. Developers will be able to register up to three applications.

Source of Information : Implementing and Developing Cloud Computing Applications 2011
 