Apps, software and video games will soon go the way of the DVD – they will live in the ‘cloud’.

Bandwidth is the key to the cloud. If you’ve got enough of it – meaning a fast enough connection – then you don’t need any physical media, or software living on your PC or Mac, or, very soon, on your mobile phone or tablet either.

We used to have giant ‘desktop’ computers that needed HUGE hard drives so we could install all of our applications. Photoshop, Dreamweaver, MS Office, CAD software, etc. all come as very large installation packages. Couple this with your collection of MP3s, photos, videos and documents, and most of us ran out of room on a PC with a 50-100 GB hard drive.

The obvious to the consumer

Today, as consumers, we see convenient repositories for photos, music, videos and documents: SkyDrive, Google Docs, Dropbox, Box, Amazon Cloud Drive. Consumers are beginning to understand and use these places to store what they used to store on their home computers. Why? Several key reasons. First, once it’s uploaded to a large mainstream cloud drive (and I mean to the likes of Google, Microsoft or Amazon), your collection of ‘whatever’ is safe. How many of us have dropped or lost a laptop, had a hard drive fail, or spilled coffee on the desk and the PC along with it? If you didn’t back it up to an external hard drive, you lost it all. Worse yet, I’ve had friends who DID back up, and that hard drive failed shortly thereafter. Years of precious photos (and now videos, more than ever, thanks to our mobile phones) you can never get back, or thousands of MP3s gone (at $0.99 each). Second, consumers are getting familiar with storing their digital belongings off-site, in a cloud. We hear about Amazon’s or Google’s cloud storage initiatives more and more every day; they are fast becoming the new norm. And third, these services are not expensive – certainly not when compared to a 1.5-terabyte hard drive that can fail without warning.

The not so obvious to us all

What’s not so obvious to consumers is what’s happening in the enterprise realm. Years ago, if you wanted to put up a business web site, or ran a business that required large databases, you bought and ran the hardware yourself. Uber-security-conscious clients required separate servers; some needed their domain living on a server separate from everyone else’s (especially in the financial and health industries). Others needed production servers and staging servers, and only after testing could they finally deploy an application or web service. Sometimes IT had to physically travel to the colo facility to apply a ‘patch’ to a newly deployed application and hope that the patch worked as it was supposed to, or else everything came to a screeching halt. Businesses lost money, time, and sometimes face. You’d pay Sun, Oracle, Cisco, EMC, etc. millions to deploy servers and databases for your environment. You’d spend more money hiring the right technical IT staff to deploy, sync and stitch all of this together. This WAS the norm.

Enterprise computing today is moving into a cloud-based environment – virtualization is now the norm.

Sun servers were all the rage in the ’90s, but they were VERY expensive. Robust, with great customer service, but very costly. Today, you can run a Linux box for a fraction of the cost. No more hard drives or servers (blades or otherwise) to buy. You can fire up an ‘instance’ and have a server through AWS in a few minutes – no trip to a colo facility. Start-ups can get to market almost instantaneously and for far less cost. You pay for what you use. No more buying a million-dollar license for ATG, Vignette or Broadvision and installing 15 discs in a cage; you rent it now. Patches get uploaded by the cloud vendor in a virtual environment and tested before they are deployed to you.
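To make that concrete, here is roughly what ‘firing up an instance’ looks like today with boto3, AWS’s Python SDK. The AMI ID, region and instance type below are placeholders, and the sketch assumes your AWS credentials are already configured – it illustrates the idea rather than any particular setup.

```python
# Minimal sketch: launch one small EC2 instance with boto3.
# The AMI ID and region are placeholders; credentials are assumed to be
# configured already (environment variables, ~/.aws/credentials, etc.).
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-12345678",   # placeholder machine image
    InstanceType="t2.micro",  # small, inexpensive instance class
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", instances[0].id)
```

Compare that with racking a Sun box: no purchase order, no colo visit, and you can terminate the instance five minutes later and stop paying.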

With the rise of this ‘virtualization’, more and more apps and processes are being built into the browser. JavaScript, created to make web pages interactive, has allowed far more sophisticated applications to run in a networked environment and now in the browser itself. Other software will be embedded in browsers as time goes on, mimicking the functionality and hardware of your PC. You can bet on it.

Platform as a Service (PaaS)

Whereas IaaS (infrastructure as a service) providers offer bare compute cycles, and SaaS (software as a service) providers offer access to applications such as online CRM, PaaS offerings provide turnkey services for developers to get their apps up and running quickly, with no infrastructure concerns.

Offered as a service, PaaS runs the gamut from development tools to middleware to database software – any ‘application platform’ functionality that developers might require to construct applications. None of these services comes without problems. But neither did anything that came before them.
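To illustrate why developers like this model: the entire deployable artifact for a PaaS can be as small as the toy web app below, with the platform (Heroku, Elastic Beanstalk, App Engine and friends) supplying the servers, OS patching, routing and scaling. This is a generic Flask sketch, not any particular provider’s required project layout.

```python
# Tiny web app of the kind a PaaS can run as-is; the platform handles
# provisioning, patching and scaling, so the developer ships only this.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from the cloud!"

if __name__ == "__main__":
    # Locally you run it yourself; on a PaaS the platform's process
    # manager starts it and routes traffic to it.
    app.run(host="0.0.0.0", port=8080)
```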

IaaS focuses on managing virtual machines, and the risks are little different than with other cloud types — here, the main risk is rogue or unwarranted commandeering of services. IaaS requires governance and usage monitoring. But with this comes a good degree of convenience and business ROI.

Some of the most popular cloud platforms running virtually are: Microsoft Windows Azure, Google App Engine (which offers a non-relational datastore service), VMware Cloud Foundry, Force.com (from Salesforce.com), Heroku (also from Salesforce), Amazon Elastic Beanstalk, Engine Yard Cloud (for Ruby on Rails enthusiasts), Engine Yard Orchestra (for PHP enthusiasts) and CumuLogic (for Java developers). Consumers never see or hear about any of this, but they use web services that live on these platforms day in and day out.

What will be obvious to consumers in about 10 years or less

All of this brings me back around to bandwidth and apps. Once enough consumers have access to really fast broadband (100 Mbps or more down, and ideally 200 Mbps down), the Apple and Android app stores will disappear. Software discs will become obsolete. Video game installation discs – gone. Why? Because once you have enough speed, apps can be loaded and accessed wirelessly via the web. The calls to databases, functionality and so on can all be served instantly online. It’s already happening, slowly. Examples in the entertainment space: UltraViolet lets you bring your DVDs to Wal-Mart and upload them to a digital locker – no more disc. OnLive and Gaikai stream video games without the need for a disc, Livestream streams live video, and Netflix (you know about them) streams movies. Consumers are aware of these, but you’ve also got Google Docs and SkyDrive for documents and the creation of Word and Excel docs. We don’t need an install disc anymore.

Last week, it took me four days to upload 12,934 MP3s to my cloud locker on Amazon’s Cloud Drive – less time than I ever thought it would take. They’re available anytime for me to download if need be. That’s nearly $13,000 worth of music, stored for as little as $20.00 a year.
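Out of curiosity, here’s the back-of-the-envelope math on that upload, assuming an average MP3 size of about 5 MB (an assumption on my part, not a measured figure):

```python
# Rough estimate of the library size and the average upload speed implied
# by a four-day transfer, assuming ~5 MB per MP3 (assumed, not measured).
tracks = 12_934
avg_mb_per_track = 5                     # assumption
total_gb = tracks * avg_mb_per_track / 1_000
upload_seconds = 4 * 24 * 3600           # four days

avg_mbps = (total_gb * 8_000) / upload_seconds   # GB -> megabits, then per second
print(f"~{total_gb:.0f} GB uploaded at roughly {avg_mbps:.1f} Mbps average")
# -> ~65 GB at roughly 1.5 Mbps, i.e. a typical home upload link of the time
```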

Mobile apps, software suites, video game discs, movies, music, photos and more will still be here, but they won’t physically live in your home forever. It’s inevitable.


The Great Chaos Monkey!

Apr 25, 2011
Working with the Chaos Monkey

Late last year, the Netflix Tech Blog wrote about five lessons they learned moving to Amazon Web Services. AWS is, of course, the preeminent provider of so-called “cloud computing”, so this can essentially be read as key advice for any website considering a move to the cloud. And it’s great advice, too. Here’s the one bit that struck me as most essential:

We’ve sometimes referred to the Netflix software architecture in AWS as our Rambo Architecture. Each system has to be able to succeed, no matter what, even all on its own. We’re designing each distributed system to expect and tolerate failure from other systems on which it depends.

If our recommendations system is down, we degrade the quality of our responses to our customers, but we still respond. We’ll show popular titles instead of personalized picks. If our search system is intolerably slow, streaming should still work perfectly fine.

One of the first systems our engineers built in AWS is called the Chaos Monkey. The Chaos Monkey’s job is to randomly kill instances and services within our architecture. If we aren’t constantly testing our ability to succeed despite failure, then it isn’t likely to work when it matters most – in the event of an unexpected outage.
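Netflix didn’t publish the Chaos Monkey’s source in that post, but the core idea is simple enough to sketch. Something like the following – a hypothetical illustration using boto3, with an opt-in tag so it only ever touches instances explicitly marked as fair game – captures the spirit:

```python
# Hypothetical Chaos-Monkey-style script (not Netflix's actual code):
# pick one running, opted-in EC2 instance at random and terminate it.
import random
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def pick_random_victim(tag_key="chaos-monkey-target", tag_value="true"):
    """Return the ID of one randomly chosen running, opted-in instance."""
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": f"tag:{tag_key}", "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [
        inst["InstanceId"] for r in reservations for inst in r["Instances"]
    ]
    return random.choice(instance_ids) if instance_ids else None

if __name__ == "__main__":
    victim = pick_random_victim()
    if victim:
        print("Chaos Monkey is terminating", victim)
        ec2.terminate_instances(InstanceIds=[victim])
```

Run something like that on a schedule against production and you find out very quickly which of your systems actually tolerate failure.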

Which, let’s face it, seems like insane advice at first glance. I’m not sure many companies even understand why this would be a good idea, much less have the guts to attempt it. Raise your hand if where you work, someone deployed a daemon or service that randomly kills servers and processes in your server farm.

Now raise your other hand if that person is still employed by your company.

Who in their right mind would willingly choose to work with a Chaos Monkey?


Sometimes you don’t get a choice; the Chaos Monkey chooses you. At Stack Exchange, we struggled for months with a bizarre problem. Every few days, one of the servers in the Oregon web farm would simply stop responding to all external network requests. No reason, no rationale, and no recovery except for a slow, excruciating shutdown sequence requiring the server to bluescreen before it would reboot.

We spent months — literally months — chasing this problem down. We walked the list of everything we could think of to solve it, and then some:

swapping network ports
replacing network cables
a different switch
multiple versions of the network driver
tweaking OS and driver level network settings
simplifying our network configuration and removing TProxy for more traditional X-FORWARDED-FOR
switching virtualization providers
changing our TCP/IP host model
getting Kernel hotfixes and applying them
involving high-level vendor support teams
some other stuff that I’ve now forgotten because I blacked out from the pain

At one point in this saga our team almost came to blows because we were so frustrated. (Well, as close to “blows” as a remote team can get over Skype, but you know what I mean.) Can you blame us? Every few days, one of our servers — no telling which one — would randomly wink off the network. The Chaos Monkey strikes again!

Even in our time of greatest frustration, I realized that there was a positive side to all this:

Where we had one server performing an essential function, we switched to two.
If we didn’t have a sensible fallback for something, we created one (see the sketch just after this list).
We removed dependencies all over the place, paring down to the absolute minimum we required to run.
We implemented workarounds to stay running at all times, even when services we previously considered essential were suddenly no longer available.
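That ‘sensible fallback’ item is the heart of it. Here is a hypothetical sketch of the pattern – try the personalized recommendations service, and quietly fall back to a cached list of popular titles when it is down or slow. The URL and field names are invented for illustration, not anyone’s real API:

```python
# Graceful degradation: answer with personalized picks when the
# recommendations service responds, and with cached popular titles when
# it doesn't. The endpoint and response shape are made up for this sketch.
import requests

POPULAR_TITLES = ["Title A", "Title B", "Title C"]   # cached fallback data

def recommendations(user_id: int, timeout_seconds: float = 0.5) -> list[str]:
    """Return personalized picks, or popular titles if the call fails."""
    try:
        resp = requests.get(
            f"https://recs.internal.example.com/users/{user_id}",
            timeout=timeout_seconds,
        )
        resp.raise_for_status()
        return resp.json()["titles"]
    except requests.RequestException:
        # Degrade, don't die: still respond, just with less personalization.
        return POPULAR_TITLES
```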

Every week that went by, we made our system a tiny bit more redundant, because we had to. Despite the ongoing pain, it became clear that Chaos Monkey was actually doing us a big favor by forcing us to become extremely resilient. Not tomorrow, not someday, not at some indeterminate “we’ll get to it eventually” point in the future, but right now where it hurts.
Now, none of this is new news; our problem is long since solved, and the Netflix Tech Blog article I’m referring to was posted last year. I’ve been meaning to write about it, but I’ve been a little busy. Maybe the timing is prophetic; AWS had a huge multi-day outage last week, which took several major websites down, along with a constellation of smaller sites.

Notably absent from that list of affected AWS sites? Netflix.

When you work with the Chaos Monkey, you quickly learn that everything happens for a reason. Except for those things which happen completely randomly. And that’s why, even though it sounds crazy, the best way to avoid failure is to fail constantly.

Guest Post by Jeff Atwood

Amazon’s EC2 ‘cloud’ outage is just a minor bump on the right road.

By now you’ve heard about Amazon’s EC2 (Elastic Compute Cloud) service failure, or perhaps felt it. If you use Foursquare, read Reddit or use Quora (among other services and websites), you no doubt felt the impact.

The outage began on April 21 at 1:48 a.m. PDT. Quora even had a fun ‘down’ message – “We’d point fingers, but we wouldn’t be where we are today without EC2.” – and a YouTube video to go with it.

Lew Moorman, chief strategy officer of Rackspace, said it best: “It was the computing equivalent of an airplane crash. It is a major episode with widespread damage.” But airline travel, he noted, “is still safer than traveling in a car” – analogous to cloud computing being safer than data centers run by individual companies.

The fact remains, the cloud model is rapidly gaining popularity as a way for companies to outsource computing chores to avoid the costs and headaches of running their own data centers — simply tap in, over the Web, to computer processing and storage without owning the machines or operating software.

Consumers don’t realize that a host of sites base a majority of their ‘up-time’ on cloud services – Hotmail and Netflix, to name just a few. Netflix was not affected by the recent outage because it has taken full advantage of Amazon Web Services’ redundant cloud architecture (which is NOT inexpensive).

Industry analysts said the troubles would prompt many companies to reconsider relying on remote computers beyond their control. And while those discussions might happen over the next several weeks, in the long term cloud computing will continue to thrive and evolve into what most industry experts already know it to be – a necessary and valued component of doing any kind of business or having any sort of web presence on the Internet. The truth is, every day companies around the globe experience ‘outages’ that take their services, and sometimes their web sites, down for hours. Added together, those outages account for far more lost time, money and engineering resources than Amazon’s interruption last week.

This round, the companies hit hardest by the Amazon interruption were start-ups – companies focused on moving fast in pursuit of growth, and less likely to pay for extensive backup and recovery services or secondary redundancy in another data center (or for Amazon’s redundant cloud architecture).

One of the things most people are not aware of is that Amazon’s SLA (service level agreement) is one of the weakest cloud compute SLAs of any competing public cloud compute service, even though its actual uptime is very good. Most providers offer 99.99% or better, with many offering 100%, evaluated monthly, with service credits capping at 100% of that month’s bill. Amazon offers 99.95%, evaluated yearly, capping at 10% of the bill, and requires that at least two availability zones within a region be unavailable before the SLA even applies. Companies MUST take this into consideration when choosing a vendor, relative to what they actually do on the Internet. Taking a secondary, back-up approach can close some of those holes, but it can get mighty expensive. Amazon’s EC2 pricing reflects this type of SLA, and ‘human’ support is not included – adding it can put a 10% to 20% uplift on the price – and the service is geared primarily toward the very technically knowledgeable. Amazon is a cloud IaaS-focused (infrastructure-as-a-service) vendor with a very pure vision of highly automated, inexpensive, commodity infrastructure, bought without any commitment to a contract. Amazon is a thought leader; it is extraordinarily innovative, exceptionally agile and very responsive to the market.
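To put those SLA numbers in perspective, a quick bit of arithmetic (mine, not any provider’s official calculation) shows how different the allowances really are:

```python
# Allowed downtime before an SLA is breached, under the two evaluation
# windows described above. Simple arithmetic for illustration only.
minutes_per_month = 30 * 24 * 60
hours_per_year = 365 * 24

allowed_monthly_9999 = (1 - 0.9999) * minutes_per_month   # 99.99%, monthly window
allowed_yearly_9995 = (1 - 0.9995) * hours_per_year        # 99.95%, yearly window

print(f"99.99% monthly: ~{allowed_monthly_9999:.1f} minutes of downtime per month")
print(f"99.95% yearly:  ~{allowed_yearly_9995:.1f} hours of downtime per year")
# -> roughly 4.3 minutes per month versus 4.4 hours per year before any credit is owed
```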

That being said, Verizon’s recent acquisition of Terremark should put most Tier 1 vendors on their toes, including Amazon. Terremark offers colocation, managed hosting (including utility hosting on its Infinistructure platform), developer-centric public cloud IaaS (vCloud Express) and enterprise-class cloud IaaS (Enterprise Cloud). It is a close VMware partner (VMware is one of its investors), is generally first to market with VMware-based solutions, and is a certified vCloud Datacenter provider. Some of Terremark’s perceived weak spots can and should now be addressed by the merger of the two service offerings – in particular, the added personnel to better deliver on customer service and satisfaction (‘stretched thin’ has been the complaint). Now that it has a substantially bigger war chest from its parent Verizon, plus Verizon’s exceptional worldwide network (remember UUNET), it can take on and adopt more bleeding-edge technologies – something it has done in the past but has not been able to do recently.

Combinations like this will likely increase in this space over time as other vendors realize that two can be better than one. The devil is always in the details, and the trick is to merge the company cultures efficiently, with a clear and concise plan laid out for both sets of employees. The last thing you need is internal employees wondering who will be replying to the same RFP (request for proposal) going forward. Strong, well-thought-out planning by upper management should avoid these pitfalls for the most part; even so, it can be pretty tricky to implement.

Long story short – I’d still bet heavily on the long-term success of this business. It’s a smart, cost-efficient and labor-efficient model for most start-ups, mid-size companies and enterprise clients. The days of sending your IT guys into a cage to update the company’s software with numerous discs and patches, hoping nothing disrupts the company’s servers, should be long gone.

HP’s feet firmly planted in the clouds. The future is here, now. The Hard Drive is a dinosaur.

The announcement this week by HP’s CEO that HP wants to provide the platform of choice for cloud services and connectivity, and that it will launch a public cloud offering in the near future, is significant. Cloud computing has really come of age. No one will be laughing at Google’s CR-48 notebooks and Google’s Chrome OS anymore. It’s not a flash in the pan.

The CEO, Leo Apotheker, says everything HP does in the future will be delivered as a service. HP also intends to install WebOS on a variety of devices, not just smartphones as Palm did: PCs and laptops will ship with WebOS pre-installed and still be able to run Windows. HP will make a number of strategic acquisitions of innovative software and cloud-based service providers, and there is no shortage of innovators in the space. Take a look at the OnDemand 100 list of private companies in the space, put together by Morgan Stanley, KPMG, Hewlett-Packard, Blackstone Group, Bridge Bank, Fenwick & West, Silicon Valley Bank and industry experts: http://bit.ly/h4mqfK . HP certainly will find a few jewels in this crowd and probably won’t have to spend a fortune acquiring them, given the number of competitors.

HP plans to establish an application store for enterprise customers and consumers. The app store will not be mobile-specific, like most current app stores such as the Apple App Store and Android Market, but will target a wider range of devices. And it will be an open marketplace. Nice.

It’s a big change for HP. The company is moving away from focusing on PCs, printers and hardware in general and toward the cloud, connectivity, security and services. HP does not plan to compete directly with other OSes like Windows, but rather to run in parallel. WebOS might also run alongside Android on smartphones, for example, giving users the choice of switching between platforms. Clearly, consumers and businesses will be changing the way they use PCs. The days of storing your data locally on your hard drive are numbered.

Where it is all going…into the clouds. Read on.

For the past five years, the Web hosting market has been evolving toward on-demand infrastructure provisioned on a flexible, pay-as-you-go basis. It never used to be this way. The introduction of cloud computing offerings has radically accelerated innovation in the hosting market.

First, some definitions you’ll need. Oftentimes, these new services have the following acronyms associated with them:

 

SaaS – software as a service

IaaS – infrastructure as a service

CaaS – compute as a service

PaaS – platform as a service

 

Cloud hosting can be seen in the following ways:

Self-managed IaaS, for cost-effective agile replacement of traditional data center infrastructure.

Lightly managed IaaS, for customers who wish to primarily self-manage but want the provider to be responsible for routine operations tasks.

Complex managed hosting, for customers who want to outsource operational responsibility for the infrastructure underlying Web content and applications.

The market for traditional Web hosting is very mature. Most Web hosters have very high levels of operational reliability and excellent support, and the best providers also have the ability to manage complex projects and proactively meet the customer’s needs. By contrast, the market for cloud IaaS is highly immature. While cloud IaaS reliability is still good, it is generally engineered to higher levels of availability than traditional dedicated hosting. Service and support definitely vary from provider to provider, and web services looking for a provider have lots to sort out and consider, among them: SLAs, speed of deployment, back-up, large-scale hosting (more than 75 servers), application support, location of the hosting (although this matters more for overseas clients), network availability, and management capabilities including (but not limited to) infrastructure software, database servers, web servers, storage and back-up, security, testing and professional services.

Years ago (and not that long ago), there really wasn’t a place to put your server except in a colo facility or a managed services facility. If you were in colo, you purchased an application platform for several million dollars (and got about 20 discs sent to you – I always found this part quite amusing) and sent your IT guy trotting off to the colo to insert each of those discs and download any recent patches to upgrade what the company had bought. If all went well, the new service was up and running within a week or less. If all didn’t go well, he’d be on the phone with the software company for HOURS trying to figure out what went wrong. I was at many a company that did this – what a nightmare. And it’s still done the way I’ve described, even today.

 

With SaaS/IaaS, configuration and deployment of your environment is sometimes as easy as walking through a well-done GUI (graphical user interface). Once the client has made his choices and accepted the associated cost, he hits the ‘submit’ – really the ‘deploy’ – button, and within a relatively short time his environment is up and running. Patches, upgrades and security are all buttoned down, done, and monitored 24/7. The SLAs today (with the exception of Amazon’s EC2) mostly guarantee 100% uptime, so for the most part your environment is quite stable.

If I were starting a company today doing e-commerce, or one dependent on applications or even just general uptime and a web server, I’d outsource the whole thing. The cloud environment has gotten too good and too secure for me to go out and purchase my own equipment (which is outdated in six months to a year) and hire a bunch of IT guys (no offense, guys). It just makes no economic or practical sense.

The newer cloud players are:

Bluelock

Connectria

VirtualArk

Virtustream

Voxel

Carpathia Hosting

Datapipe

Hosting.com

NTT Communications

Verizon

But there are many other main players as well. Among them:

Savvis

AT&T

Rackspace

Terremark

GoGrid

Joyent

IBM

Amazon

CSC

NTT

Media Temple

Layered Tech

SoftLayer

SunGard

NaviSite

OpSource

Akamai

Nirvanix

Choose your partner wisely, and do NOT sign long-term contracts – technology changes so rapidly, and new players appear so often, that it’s hard to justify locking yourself into a long-term deal. Unless, of course, you get a good enough financial incentive to do so.  😉