On the TRAIL of a combined cancer therapy

The biotechnology world is littered with the wreckage of drugs that looked good on paper but, for whatever reason, didn't work out that well when they were sent to the clinic. A paper that will appear in PNAS this week details how two drugs that have limited effectiveness on their own seem to actually work pretty well when combined. The results (to me at least) are somewhat surprising, but I think they tell us a few important things.

One of the two drugs targets the receptors for TRAIL, a molecule related to tumor necrosis factor. TRAIL is a signaling molecule that, under the right circumstances, can induce cells to commit an orderly form of suicide called apoptosis. Obviously, convincing tumor cells to undergo apoptosis would be a great thing, and there is an antibody (MD5-1) available that triggers TRAIL signaling.

The antibody isn't wholly effective, however, largely because cells don't want this system going off by accident. As a result, organisms have evolved a number of proteins that help moderate TRAIL signaling, and tumor cells can survive treatment with MD5-1 if they happen to carry mutations that upregulate one of these moderating systems. Because cancer operates according to the rules of evolution, a few treatments with MD5-1 tend to slow tumor growth a bit but ensure that none of the surviving cells remain responsive to TRAIL signaling.

Enter the second type of drug: histone deacetylase (HDAC) inhibitors. HDACs chemically modify histones, the proteins that help package DNA inside the nucleus. Changes to the state of histone acetylation make the DNA more or less accessible, and thus more or less likely to be transcribed into messenger RNAs that go on to direct protein production. As a result, an HDAC inhibitor can completely change the population of proteins made in a given cell, drastically altering the cell's biochemistry.

I would have expected the results to be fairly catastrophic, but they're apparently not—HDAC inhibitors are generally nontoxic. Some tumor cells, however, do not tolerate them well, as the major changes they trigger can push an already unhealthy cancer cell past the breaking point, causing it to undergo apoptosis. A number of HDAC inhibitors are apparently in clinical trials for use against specific cancers.

The new research combined the two treatments, reasoning that two individually weak apoptotic signals might do much better together than either does alone. Tests on cultured cancer cells showed the two drugs had synergistic, rather than additive, effects, while tests in mice showed that the combination could trigger tumor regression with minimal toxicity. In four of the 25 test animals, the cancer was completely wiped out and did not return.
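
To make the additive-versus-synergistic distinction concrete, here's a minimal sketch of my own (not the paper's analysis) using the Bliss independence model, a standard null model for drug combinations: if drug A alone kills a fraction fa of cells and drug B alone kills fb, a merely "additive" combination of independent drugs is expected to kill fa + fb − fa·fb, and anything above that counts as synergy. All numbers below are hypothetical.

```python
def bliss_expected(fa, fb):
    """Expected combined kill fraction if the two drugs act
    independently (the Bliss additivity null model)."""
    return fa + fb - fa * fb

# Hypothetical single-agent kill fractions (not the paper's data):
f_trail = 0.20  # fraction of cells killed by the TRAIL antibody alone
f_hdaci = 0.15  # fraction killed by the HDAC inhibitor alone

expected = bliss_expected(f_trail, f_hdaci)  # 0.32 under independence
observed = 0.70                              # hypothetical combined result

print(f"expected if merely additive: {expected:.2f}, observed: {observed:.2f}")
print("synergy" if observed > expected else "no synergy detected")
```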

There are a number of important take-home messages here. For one, if these drugs really do have few side effects when combined, there's no reason not to add even more drugs to the mix in the hope of something still more potent—there are certainly more than two ways to induce apoptosis. This is also an argument for revisiting other apoptosis-promoting drugs that turned out to be clinical disappointments, to see whether they might contribute something here. Finally, it's worth noting that the authors screened tumor cells for expression of TRAIL receptors before starting the tests, which is exactly the sort of thing that personalized medicine advocates have been arguing is the future of the field.

PNAS, 2008. DOI: 10.1073/pnas.0801868105

Over-driven: why our cars guzzle gas, what to do about it

We use how much gas?

From more drilling in the Gulf of Mexico to tapping the strategic oil reserve, politicians and pundits spar daily over the best solution to America's ongoing fuel crisis. But no matter which side of the fence they're on, almost everyone can agree on one thing: Americans are going to have to get used to paying more for their gas than they have in years past, possibly much more. The specter of permanently higher gas prices has already driven consumers and automakers to take a long, hard look at the fuel economy ratings on the vehicles that Americans own and produce, and what they've found has sparked an aggressive search for ways to squeeze more miles out of each gallon. In this brief article, I'll take a look at why the current American auto fleet is so inefficient, and at what's being done to improve it.

Prior to the recent spike in oil prices, the US enjoyed a long period of gas that was extremely cheap relative to the rest of the world. During that time, US automakers sold the public on the idea that large trucks and SUVs equipped with large engines were ideal for family transport. Better yet for the automakers, these large vehicles were cheap to make, which meant large profit margins. The technology packaged inside them wasn't particularly advanced; they were built around fairly unstressed, large-capacity engines that paid only lip service to the idea of fuel efficiency.

The boom in SUV sales accelerated in the early 2000s, when small businesses were granted a tax break for purchasing vehicles that weighed more than 6000 lbs. The break was presumably intended to help general contractors offset the cost of their work trucks, but its ultimate effect was that even more large SUVs appeared on the road. Automakers, domestic and foreign, moved quickly to capitalize on the public's newfound appetite for bigger rigs and lower MPGs. For instance, shortly after the tax break was enacted, BMW managed to find an extra 5 lbs somewhere on its X5 SUV, which previously tipped the scales at a mere 5995 lbs, no doubt ensuring a raft of extra sales. Indeed, between 1987 and 2004, light truck ownership (including both trucks and SUVs) almost doubled, from 28 percent to 53 percent.

A Ford F650: about as aerodynamic as the wall it's in front of. Image: Kecko @ Flickr

It would be wrong, however, to think that America's fuel economy problem is solely due to the SUV and truck market. American consumers have traditionally demanded larger cars for their money than consumers in Europe or Japan. Large cars are heavy cars, and heavy cars need big engines. Those big engines need more fuel, so pretty much everyone, whether driving a car, truck, or SUV, got used to poor fuel efficiency.

And who can blame them? Not only was gas cheap, but Congress repeatedly failed to raise federal fuel efficiency regulations, known as Corporate Average Fuel Economy (CAFE) standards. Environmentalists and energy-security advocates railed against this trend, but corporate dollars trump good intentions. CAFE mandates that automakers achieve an average fuel economy figure across their range, and it levies fines based on the size of the shortfall multiplied by the total number of vehicles sold each year. Currently, CAFE demands 27.5 mpg for cars and 22.5 mpg for trucks below 8500 lbs. Vehicles weighing more than 8500 lbs are exempt from CAFE.
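
As a rough illustration of how that fine structure works, here's a short sketch using the penalty rate in effect at the time ($5.50 per 0.1 mpg of shortfall, per vehicle sold); the fleet figures are hypothetical:

```python
def cafe_fine(fleet_avg_mpg, standard_mpg, vehicles_sold,
              penalty_per_tenth=5.50):
    """Fine owed: $5.50 for each 0.1 mpg the fleet average falls
    below the standard, multiplied by every vehicle sold that year."""
    shortfall = max(0.0, standard_mpg - fleet_avg_mpg)
    return round(shortfall / 0.1) * penalty_per_tenth * vehicles_sold

# Hypothetical automaker: car fleet averaging 26.0 mpg against the
# 27.5 mpg car standard, across one million cars sold.
print(f"${cafe_fine(26.0, 27.5, 1_000_000):,.0f}")  # $82,500,000
```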

However, all good parties come to an end, and as crude oil prices show signs of settling well above $100 a barrel, gas-guzzling drivers are beginning to have second thoughts.

Movial brings D-Bus bridge to web widgets for mobile Linux

Software company Movial has developed a bridge that brings D-Bus into web browsers through a JavaScript layer. This will make it possible for developers to use web-oriented technologies to build lightweight applications and widgets that can leverage native system functionality on mobile devices.

The D-Bus interprocess communication framework, which was developed through FreeDesktop.org, is widely used on the Linux desktop to facilitate extensibility and provide access to some underlying system services. We have used it in the past ourselves for interfacing with desktop applications like Pidgin. D-Bus is also increasingly being used in the mobile space to provide language-neutral APIs for accessing phone data, including address books, call history, and similar things.
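
Movial's JavaScript API isn't detailed here, but to give a flavor of the kind of access D-Bus provides, here's a minimal desktop-side sketch using the standard dbus-python bindings to call the freedesktop.org notification service. Think of it as an analogue of the sort of call the bridge would expose to widget JavaScript, not Movial's actual API:

```python
import dbus

# Connect to the per-user session bus (mobile stacks often route
# platform services over the system bus instead).
bus = dbus.SessionBus()

# Look up a well-known service by bus name and object path...
obj = bus.get_object("org.freedesktop.Notifications",
                     "/org/freedesktop/Notifications")
notifications = dbus.Interface(obj, "org.freedesktop.Notifications")

# ...then invoke a method on it. This is the shape of call a widget
# would make through a JavaScript bridge like Movial's.
notifications.Notify("demo-widget", 0, "", "Hello from D-Bus",
                     "Sent via dbus-python", [], {}, 3000)
```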

Movial's D-Bus bridge will expose that underlying platform functionality to local web-based widgets, which means that developers won't have to rely on native code to gain access to those parts of the system. Movial recently joined the LiMo Foundation and plans to help integrate this technology into the LiMo stack. The bridge supports several rendering engines, including Gecko, WebKit, and NetFront.

"Movial is proud to participate in LiMo and contribute to the soaring success of the LiMo platform and the continued growth of the Linux mobile community," said Movial creative technologies president Tomi Rauste in a statement. "The goals of LiMo and those of Movial are in lock step—to reduce complexity, development costs and fragmentation in the market while providing a richer mobile ecosystem through the contributions of leading industry partners."

I'm a big fan of D-Bus and I have long been enamored with the capabilities that it brings to the Linux desktop. The concept that Movial is delivering with its bridge builds on that and is clearly going to bring some much-needed extra power to widget development on the LiMo platform.

Researcher: encourage more, not less Internet traffic

Companies like Comcast and Bell Canada talk about drowning in data even as dire reports about "exafloods" warn of the consequences of continued Internet traffic growth. But new research out of the University of Minnesota finds that Internet traffic growth rates are stagnant or falling, even as transmission prices plummet. Perhaps now is the time for carriers to think about stimulating demand rather than throttling it.

Andrew Odlyzko

That's one conclusion drawn from the newest numbers in the Minnesota Internet Traffic Studies (MINTS) project. Andrew Odlyzko, who oversees the project, is one of the recognized experts in Internet data metrics. His conclusion, after looking over the new 2007-2008 data set, is that "there is not a single sign of an unmanageable flood of traffic. If anything, a slowdown is visible more than a speedup. This suggests that the industry might benefit from shifting its emphasis towards methods of stimulating traffic growth."

In fact, the 2007-2008 growth numbers are some of the weakest in recent memory. While growth remains healthy at around 50 percent a year, growth rates are dropping in many places. Overall, the most recent growth rate is the lowest since 2003 (a chart showing median growth rates is below).

Data source: MINTS

Fifty percent a year is still tremendous growth in nearly any industry, but it's the sort of growth that router makers and fiber-optic equipment companies can largely cope with. In fact, you can tell they're coping simply by looking at the cost of transit: even as demand has surged, Odlyzko notes that transmission costs continue to drop by nearly a third a year.

While cheaper bandwidth sounds like good news, it's not so great for last-mile carriers. Odlyzko explained why in an article earlier this year.

"Now, annual traffic growth rates of 50 percent, when combined with cost declines on the order of 33 percent, result in no net increase in costs to provide the increased transmission capacity," he wrote for Internet Evolution. "In a competitive environment, that means no increase in revenues, which is hardly a cheering prospect for the industry. If traffic growth could be pushed back towards 100 percent, where it used to be for many years, we would have pressure for increased revenues, and also for new technologies."

His solution is to stimulate demand, rather than throttle it, in order to avoid a tremendous glut of dirt-cheap capacity on the worldwide market. This isn't the sort of idea that will be embraced by most last-mile ISPs in the US (except, perhaps, by Verizon), since many of these links (well, with cable modem service, at least) actually do experience local congestion, largely due to atrocious, shared upload bandwidth. Cable companies are rolling out DOCSIS 3.0, which promises tenfold speed increases in both directions, but the tech won't be fully in place for years.

Any day now, when everyone has fiber to the premises (*cough*), bandwidth caps and throttling issues should hopefully become relics of the distant past. The core has plenty of bandwidth and is growing along with traffic, so once that last-mile gets expanded into an eight-lane superhighway, everything should be copacetic. Until then, we'll continue to see enlightened policies like 5GB monthly caps.

Deep Zoom Composer update adds two new features

Silverlight 2 includes a feature called Deep Zoom, which in essence is Microsoft's Seadragon technology converted for use across multiple browsers and platforms. The technology was demonstrated as a Microsoft Live Labs project at TED last year. It powers Photosynth, a complicated piece of photo software from Microsoft Research that recently moved out of Live Labs and onto Virtual Earth. Seadragon aims to make browsing images, no matter how large or how many, a speedy, seamless, and smooth experience. Deep Zoom has been put to use by a few companies, like the Hard Rock Café Memorabilia webpage, and individuals, like the Deep Zoom Obama website.
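
That speed comes from pre-cutting each image into a multiresolution tile pyramid, so the viewer only ever fetches the handful of tiles visible at the current zoom level. Here's a sketch of the pyramid arithmetic as it's commonly described (a 256-pixel tile is assumed for round numbers; Deep Zoom's actual defaults differ slightly):

```python
import math

def pyramid_levels(width, height):
    """Deep Zoom pyramids run from a 1x1 image at level 0 up to the
    full-resolution image, halving both dimensions at each step."""
    return math.ceil(math.log2(max(width, height))) + 1

def tiles_at_full_resolution(width, height, tile=256):
    return math.ceil(width / tile) * math.ceil(height / tile)

w, h = 65536, 32768  # a hypothetical gigapixel-scale image
print(pyramid_levels(w, h))            # 17 levels
print(tiles_at_full_resolution(w, h))  # 32768 tiles at the deepest level
```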

The Deep Zoom Composer is a tool for preparing content that uses Silverlight's Deep Zoom feature. Microsoft has released two updates to the tool, the latest of which can be downloaded via the Microsoft Download Center. Along with the usual bug fixes and performance and stability improvements, two new features were added:

Panoramic Stitching: similar images can now be stitched together thanks to the integration of technology from Microsoft Research's Interactive Visual Media Group. A user can select photos that share similar characteristics, right-click on them, and choose "Create panoramic photo." The time the process takes depends on the number of images, how large they are, and how much calculation is needed. After the user specifies the part of the stitched image to keep, it appears on the design surface, just like any other image.

PhotoZoom Upload: Deep Zoom Composer now integrates with the Live Labs PhotoZoom service. The user can choose Export, sign in with his or her Live ID, specify an album name, choose a cover image, and then upload the images. The newly created PhotoZoom album can then be shared with friends.

The second update to the Deep Zoom Composer was released to address problems with the PhotoZoom Upload feature: some uploads were timing out before completing (the timeout has been significantly increased), and the log-in screen was poorly designed (it didn't make clear that a PhotoZoom account must be created before using the upload feature). If you installed a new version of Deep Zoom Composer between August 1 and August 3, make sure to uninstall it and install the latest version (in Help => About, the build date should be "3 August 2008").

Further reading:
Expression Blend and Design: Download the New Deep Zoom Composer Preview!
Expression Blend and Design: Deep Zoom Composer Updated (Again!)

Google China takes on Baidu with legal music search (Updated)

Google is taking on Baidu in China by launching its own music search site—except this one will only point users to music that is free and legal to distribute. The site, located at google.cn/music and accessible only to Chinese Internet users, allows users to search by singer and song title. The search results point to songs hosted by Top100.cn, a Chinese music site with financial backing from basketball star Yao Ming. Google's music site will be ad-supported, and the company says that ad revenue will be shared with Top100.cn and its music partners.

Google's music search appears to be a direct response to the popular Chinese search engine Baidu, which has made a name for itself by providing deep links to seemingly unlimited quantities of illegal music. In fact, Baidu has finally come under fire for its MP3 deep-linking policies, as the Music Copyright Society of China and the IFPI have both filed lawsuits against the search engine for enabling rampant copyright infringement. The three labels represented by the IFPI are seeking maximum damages of 500,000 yuan (roughly US$71,000) per track on at least 127 tracks, for a total of 63,500,000 yuan (about US$9 million). And that total could grow, as the IFPI says Baidu may ultimately face damages in the billions.
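
The arithmetic behind that headline figure is simple enough to check (the exchange rate below is an approximation for mid-2008):

```python
per_track_yuan = 500_000   # maximum statutory damages sought per track
tracks = 127
yuan_per_usd = 7.0         # rough mid-2008 exchange rate

total_yuan = per_track_yuan * tracks
print(f"{total_yuan:,} yuan")                         # 63,500,000 yuan
print(f"~US${total_yuan / yuan_per_usd / 1e6:.1f}M")  # ~US$9.1M
```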

Clearly, Chinese Internet users love music as much as the rest of the world, and there would be quite an uproar if Baidu were to shut down or stop deep linking MP3s. Google China believes that things don't have to be this way, though. "The Internet industry should by no means stand in the opposite camp against the music industry," Google China President Kai-fu Lee said in a statement to Reuters. "Google always believes profoundly that mutual interest, rather than monopoly, is the key to sustainable growth."

Because Google's Chinese music search isn't accessible from the US, we were unable to test it firsthand. However, Music 2.0 has a few screenshots after taking the site for a spin, and claims that it's more impressive than similar international offerings like We7 and SpiralFrog thanks to the DRM-free nature of the MP3s. The downside to Google's offering compared to Baidu, though, is obviously its far more limited selection—Top100.cn has yet to obtain licensing rights to most music, whereas Baidu makes it all (illegally) available.

Still, this is a small step against music piracy in China. By working with Top100 and Google to make music freely available and sharing in ad revenue, the labels are showing a commitment to compete with illegal distribution—while still making money. The next challenge will be to help grow Top100's and Google's catalog so that users will actually have a reason to make the switch from Baidu.

Update: Google contacted us about the revenue sharing situation in order to clarify some points. "Google does not share in the revenue generated by advertising in connection with its Music Onebox product in China," a Google spokesperson told Ars. "All ads visible on the product in connection with the product run on Top100's website and revenues from those ads are shared between Top100 and its music label and publisher partners."

VIA sales plunge as company repositions itself

July sales results for VIA are in, but they don't reflect the strong performance we've seen in other segments. VIA, the company behind the much-anticipated Nano processor, sold some $20 million in product through the month. That's a 7.58 percent increase over June, but a whopping 54.79 percent decrease year-on-year; company revenue in July 2007 was $44.1 million. The drop is not surprising, given the precipitous plunge in VIA's share of the GPU market, and it's the result of corporate repositioning. VIA has ceased (or almost ceased) providing chipsets to third-party vendors, and will focus entirely on its own brand of integrated boards and processors.

VIA's decision to leave the third-party chipset business is the end of an era for the company, and it's hard not to wax the teeniest bit nostalgic about the company's past. VIA has shrunk to a shell of what it was seven years ago, and it's hard to remember that there was a time when some predicted the company could seriously challenge Intel for control of the motherboard chipset market. VIA ultimately lost that battle; the company refused to acknowledge consistent consumer complaints regarding product stability, and arrogantly assumed it could challenge Intel without a P4 bus license. NVIDIA went on to steal most of VIA's AMD market share, while Intel, SiS, NVIDIA, and ATI took over the P4 market.

The story, however, isn't all bad. VIA chipsets provided P3 users with a viable (and vastly more affordable) alternative at a time when Intel's i820 was a disaster, and it provided AMD with the platforms that company needed to launch the Athlon and Athlon XP. When Intel chose to back the P4 with RDRAM, VIA was, for a time, the only DDR alternative to P4+RDRAM (very expensive) or P4+SDRAM (horrific performance). For all its blunders, the Taiwanese manufacturer played a crucial role in AMD's success—though some might argue that it also played a role in consumer perceptions of AMD as the less-stable alternative to Intel.

Going forward, all of VIA's development resources will be focused on supporting and enhancing Nano, and the chip itself seems worth the effort. If VIA can secure even a handful of solid netbook/nettop design wins, the company's financials in the third or fourth quarters could be significantly better than they are now, and that will hopefully put the company back on the road to profitability.

As for the company's chipset days, I had some hair-pulling problems (the SB Live! + VIA 686B southbridge combination) but also quite a bit of fun. My first Athlon system was a Duron 700 running in a KT133A motherboard—the IWILL KK266, to be exact. I pencil-modded the Duron to unlock the multiplier, pencil-modded the voltage so it would boot at 1.85V, bought myself some Tonicom DDR166 SDRAM, and ran the whole thing at 1.06GHz on a 183MHz FSB.

Good times.

Mozilla mocks up possible Firefox successors in idea factory

Mozilla Labs this week took steps to open up its idea factory to wider outside input, asking for community help to develop the next big ideas that might power future browsers. Like any good research lab, the goal is not an immediate product but a set of innovative ideas that can be played with and debated without the pressure of an immediate implementation.

Mozilla Labs' "concepts" can take three forms: ideas, mockups, and prototypes. The point of throwing open the lab to more voices was all about hearing from… new voices (surprise!), so Mozilla wants to make sure that plenty of people can contribute, even if they can't hack code.

"You don’t have to be a software engineer to get involved, and you don’t have to program," says the announcement. "Everyone is welcome to participate. We’re particularly interested in engaging with designers who have not typically been involved with open-source projects. And we’re biasing towards broad participation, not finished implementations."

Ideas are simple text descriptions of a new concept. They're meant to be thrown out by anyone, then talked about and possibly taken to the next level, which is the mockup. Mockups turn ideas into pictures or video clips that illustrate how the idea might look and operate in practice. Finally, prototypes are fully interactive implementations of ideas, though they may not be "fully functional or pretty."

To illustrate the process, Mozilla commissioned three videos from UI designers, each showing possible ideas for browser development. While the "Bookmarking & History Concept" and "Mobile Concept" are both quite cool, the "Aurora" idea from Adaptive Path is certainly the most radical potential change to the browser's look and feel. Each concept is highly visual and therefore difficult to explain in words, but all three are worth a look.

The Aurora concept

For now, "contributing" a concept is something of a nebulous process. According to Mozilla, users should just "use your favorite method of sharing an concept with the world. If it’s an idea, blog about it. If it’s a mockup, put it on Flickr. If it’s a prototype, host it on your web site." The organization promises more structure is coming soon, however.

A future bookmarking system?

Mozilla wants to encourage outside innovation in other ways, too, including contests like the recent "Extend Firefox 3" challenge. The contest, which will give away a MacBook Air and other prizes any day now, seeks to recognize the best third-party extensions for Firefox 3.

While Firefox 3-specific contests are directly related to the browser, the broader call for "concepts" is not. The ideas developed could easily be gleaned by rivals, but Mozilla isn't worried. One of the odd benefits of being an open-source developer is that you don't need to be (and can't be) as secretive as most in-house commercial development teams.

While the concepts shown so far may never see release, they do provide more evidence of how the revitalized Mozilla has been driving browser innovation in the last few years. And not all of that innovation comes from Mozilla itself—AT&T has used the Mozilla codebase as the foundation for its experimental Pogo browser, which seems to be working through visual ideas that are at least superficially similar to some of Mozilla's concepts.

Review: Texas Hold 'Em for iPhone makes pocket poker fun again

Although the recent Texas Hold 'Em boom in the United States has certainly slowed, there still is quite a bit of interest in the poker space. Games continue to be played both legally in casinos and illegally in people's garages, and the dedicated players are always looking to get better by playing more hands. While electronic poker machines have been around for a long time, few are as handy for a quick game as Apple's Texas Hold 'Em for iPhone.

To start a single-player game, you first must choose a venue. The first is simply labeled "Garage." This venue is free to play, and you can win up to $1,250 for placing first. The garage is what you would expect to see if you somehow stumbled into Infinite Loop contributor Erik Kennedy's dwelling: Apple computer models strewn about the shelves and old Apple posters on the walls. As you bring in more money, you can choose to play at more lavish casinos, all the way up to Dubai, which has a buy-in of $100,000 and a payout of $1.25 million.

While most applications we've seen thus far have not taken advantage of the iPhone's multiple screen orientations, Texas Hold 'Em does a great job of using both the landscape and portrait modes. When the iPhone is held horizontally, you get a face-to-face view of the game. You can see each player's face, their reactions when it is their turn to act, and the look when they slide their chips into the middle of the table. Though I have played over 40 games thus far, I spend the majority of my time in the other mode, so I haven't yet seen the "secret tells" that the developer says are included. In this mode, you simply tap on your pile or on the pot to see just how much money you have or what is at stake.

In portrait mode, you get an overhead view of the table, including how much money each player currently has and what they did on the current hand (raise, call, fold). Since you can't see their animated faces in this view, you can tap on any one of their avatars and view vital statistics, like how many times each player has won, lost, checked, called, folded, and raised in the current game. This comes in handy deep into the game when someone goes all-in; you can tell right away whether that player plays every hand or has folded 16 of the last 17.

The controls are very good; the entire game can be played with a single finger while holding the device in one hand. Folding can be done simply by flicking your cards into the pot, and going all-in is similarly simple. To bet or call, you simply tap your chips or chip count and a right-to-left wheel pops up, allowing you to easily raise and lower your bet with a simple spin. One small issue with the otherwise very good system is that the wheel spins freely and is imprecise. If you want to bet $500, you are more likely to get $503 or $517 than exactly $500.

While it certainly helps to know what you are doing, the game caters to beginners by putting a hand ratings list in the help menu, and by using a clever system for rating your current hand. To activate this system, all you need to do is tap on your current cards and a meter that goes all the way around your cards will light up. If you have pocket aces before the flop, your meter will be lit up like Vegas at night. 7-2 off suit? Not so much.
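
Apple hasn't said how the meter actually scores hands, but a toy heuristic makes the idea concrete. Everything below is a hypothetical sketch of this kind of preflop rating, not the game's logic:

```python
RANKS = {str(n): n for n in range(2, 10)}
RANKS.update({"T": 10, "J": 11, "Q": 12, "K": 13, "A": 14})

def preflop_strength(card1, card2):
    """Toy 0-1 preflop rating: rewards high cards, pairs, suitedness,
    and connectedness. Purely illustrative, not Apple's algorithm."""
    r1, s1 = RANKS[card1[0]], card1[1]
    r2, s2 = RANKS[card2[0]], card2[1]
    hi, lo = max(r1, r2), min(r1, r2)
    score = hi / 14 * 0.5                           # high card counts most
    if r1 == r2:
        score += 0.4                                # pocket pair
    if s1 == s2:
        score += 0.05                               # suited
    score += max(0.0, 0.05 - (hi - lo - 1) * 0.02)  # connectedness
    return min(score, 1.0)

print(preflop_strength("Ah", "Ad"))  # pocket aces: ~0.97, meter pegged
print(preflop_strength("7c", "2d"))  # 7-2 offsuit: 0.25, barely lit
```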

While the game is very well done and certainly a lot of fun, there are a few downsides. Multiplayer is available only on the same WiFi network; if the buddy you want to play is across the country in Portland and you are in Baltimore, you had better have one hell of an omni-directional WiFi antenna. The statistics tracked by the application could also be significantly improved. It is nice to see your winnings, the biggest pot you have won, your best hand, and games played, but it would be even nicer if the developers added the number of hands played, number of all-ins, and other relevant poker stats. Last, but certainly not least, is the game's effect on battery life. All I can say is that if you intend to play a lot, you might want to invest in some sort of awkward battery-life extender.

For anyone who enjoys the occasional game of poker, $4.99 really isn't a bad price. With the ability to tap through animations, you can finish a game very quickly. It really is a handy little poker game to have with you all the time.

Name: Texas Hold 'Em for iPhone (iTunes Link)
Publisher: Apple
Price: $4.99

AT&T has head in the clouds with Synaptic Hosting

AT&T has become the latest company to launch a cloud computing service with its launch of Synaptic Hosting. The service provides pay-as-you-go access to managed hosting, providing computing, storage, security, and networking on an as-needed basis. HangZhou Night Net

In 2006, AT&T purchased USinternetworking, an application service provider offering managed hosting of enterprise applications like PeopleSoft and SAP. Synaptic Hosting combines this technology with AT&T's 38 global data centers. The company will upgrade five of its data centers into "super data centers"—three in the US, one each in Singapore and Amsterdam—to provide the infrastructure for large-scale computing applications.

Synaptic Hosting builds on virtualization technology. Customers get a virtual environment with storage, an operating system, network connectivity, and a certain amount of processing power and memory, along with management and monitoring facilities from AT&T. This virtual environment is burstable, so it can gain access to more resources as required. As well as the basic system infrastructure, Synaptic Hosting also offers management of applications like web servers and database servers, including configuration, patching, and other maintenance. And if customers have specific needs, dedicated hardware is also available. Synaptic Hosting, therefore, offers the benefits of cloud computing—ease of scaling, broad application support—with the hands-off convenience of software-as-a-service.

The target customers are those with variable capacity demands; for example, online retailers that have a Christmas rush, or the US Olympic team website (which uses Synaptic Hosting today). This variable demand is one of the big motivators behind the idea of cloud/utility computing; it allows businesses to satisfy their peak demand without having huge amounts of excess capacity during quiet periods. When a site only sees a lot of traffic for two weeks in every four years, this is a very valuable feature.

AT&T is describing Synaptic Hosting as enterprise-class; unlike services like Amazon's EC2 and S3, Synaptic Hosting offers service-level agreements, rapid support, and management of off-the-shelf applications, and the company believes that this enterprise-level support sets AT&T's cloud computing capabilities apart from anything else on the market. AT&T's objective is to provide a cloud platform suitable for the enterprise, and Synaptic Hosting's combination of the provision of the full stack (computing, storage, networking, operating system, and perhaps applications) along with service guarantees is the company's first step towards that. For customers bitten by Amazon S3's recent outage, the greater guarantees of AT&T's system may be very appealing.

This move by AT&T shows that the cloud computing market, although still young, is maturing fast. Using utility computing to provide IT infrastructure is still only a small market—some 5 percent of all data center outsourcing, according to a recent Gartner report—but it's one that's already worth $5 billion. With the availability of enterprise-ready solutions, this is an area sure to see further growth.