Yahoo relents, gives coupons, refunds to music DRM captives

Yahoo is trying to make the best of a potentially ugly situation that would leave many of its customers stuck between a rock and a hard place come September 30. The company, which announced last week that it was shutting down the defunct Yahoo! Music Store's DRM authentication servers, now plans to offer coupons to users so that they can purchase their songs again through Yahoo's new music partner, Rhapsody.

Previously, Yahoo had been selling both an unlimited subscription service and individual music purchases, all under DRM. Yahoo has long been a vocal foe of DRM, however, which is why it came as no surprise when rumors began to spread in April that Yahoo might open its own DRM-free music store. That ended up not being true, though, and Yahoo instead inked a deal with Rhapsody to transfer all of its subscription customers over to Rhapsody's unlimited subscription service. In June, Rhapsody launched its own DRM-free MP3 store, so in a way, Yahoo's plans were eventually realized—just not under its own brand name.

In the meantime, Yahoo was faced with deciding what to do with its license servers, and decided that, if it was leaving the music biz, it might as well finish the job. Last week, the company said that it would be taking its DRM license key servers offline at the end of September. This meant that those who had purchased individual Yahoo tracks with DRM would no longer be able to authenticate them on any new machines in the future. They'd either have to commit now or risk losing their music for good. Oh, and they could always go the lame route recommended by Yahoo: burn their music to CD, then rip it back to the computer.

If this all sounds familiar, it's because the same thing happened earlier this year with MSN Music. Both events have served as a stark reminder of the risks associated with buying DRMed media—content providers and DRM creators can flip the switch on their authentication servers at any time, leaving you without your legally-purchased music and videos. Of course, Microsoft has since relented, and now plans to keep the DRM authorization servers up and running through 2011.

Yahoo must have decided that the cost of running the DRM servers for longer than a couple of months wasn't worth it, and that it would rather just take care of the "small number" of users who had issues with the decision. Most users will receive credit at Rhapsody's DRM-free store so that they can repurchase their music at no cost, although refunds will also be offered to those who have "serious problems with this arrangement," Yahoo said on its FAQ page about the Rhapsody migration.

The Electronic Frontier Foundation, which had been hounding both Microsoft and Yahoo to fix the DRM mess, applauded Yahoo's decision. "Yahoo's decision sets a good precedent for when this problem inevitably arises again," the group wrote on its website. "Vendors that sold DRM-crippled music must either continue supporting tech that no one likes—as MSN Music chose to do—or take Yahoo's path and fairly compensate consumers with refunds. It's the right thing to do."

Scrabulous goes for bonus points, relaunches as Wordscraper

When the creators of a popular Facebook Scrabble knockoff disregarded notices from Hasbro, the game's US copyright holder, they were eventually hit with a lawsuit demanding that the game be taken down. Clearly upset over Hasbro's move, hackers attacked the officially sanctioned Scrabble game on Facebook, but now the imitation version is back to test the boundaries of the board game maker's copyright.

Scrabulous was taken down this week (reportedly by its creators and not Facebook), but yesterday, hackers attacked the official Scrabble version and took it down for most of the day, frustrating Scrabulous refugees. There's no word on whether the hackers were just upset users or a hit squad, but the Scrabble knockoff has now risen from the ashes (via Pulse 2.0) as Wordscraper, a Scrabble clone that could be just different enough to avoid Hasbro's wrath… maybe.

At its peak, Scrabulous had over 500,000 daily active users on Facebook. Official Scrabble versions launched for US, Canadian, and international users a few weeks ago, before Hasbro's lawsuit, and their combined active user count has risen to about 77,500 as of publication.

Already, Wordscraper has over 3,500 daily active users, and its creators have been careful not to repeat previous mistakes, such as linking to Wikipedia's definition of Scrabble in the game's documentation. The defining aspect of Scrabulous' return as Wordscraper, however, is that it contains a new minigame and utilizes a slightly different design and color pattern that no longer directly rips off Scrabble's classic presentation.

Hasbro's copyright for Scrabble covers only implementations of the game, not the general concept of lining up letters to form words and scoring points based on the letters used and where they sit on the board. Wordscraper's seemingly minimal changes may be just enough to keep it from stepping on Scrabble's toes. Considering Scrabulous' previously massive user base, the creators could once again enjoy steady revenues if Wordscraper can claw its way anywhere near that level of popularity.

With the official Scrabble versions now appearing at the top of a Facebook search, though, Hasbro probably won't have to lose much sleep over Wordscraper.

Review: Grimm: A Boy Learns What Fear Is (PC)

If you were to take the time to read some of the original Grimm's tales, you'd realize that… well, they're grim. Themes like abuse, murder, betrayal, and cannibalism are just a sampling of the darker elements that make regular appearances in the stories' original versions, though they've often been toned down quite a bit over the years. American McGee's Grimm seems intent on bringing the stories back to their darker roots, starting with A Boy Learns What Fear Is. A little over a month ago, Ben got to check out a near-final build of the game, and while things aren't immensely different from that build, they seem a little more polished.

Grimm, an evil little troll, hates how fairy tales are so joyful and decides to make them the little gems of horror that he thinks they should be. Each episode starts out with a puppet show telling the mainstream version of the Grimm tale in question, after which Grimm himself goes through the story and changes things for the worse. After he has gone through the whole tale, a darker version of the puppet show is put on, this time with a far less joyful ending. Each of these revised fairy tales takes roughly an hour to complete, and episodes will be released at a rate of one a week. Playing through A Boy Learns What Fear Is, it's hard not to be reminded of Katamari Damacy; instead of rolling around a ball of stuff, Grimm spreads a kind of spiritual pollution throughout each level.

At the beginning of each level, players start off with a basic "darkness" level that they increase by spreading darkness throughout the realm. Spreading this spiritual pollution causes the world to become twisted and icky, such as the playground that gradually turns into a graveyard. There are two ways to spread darkness: it spreads in a radius around Grimm as he runs about, but he can also perform a move known as the "butt stomp," which causes an increased burst of darkness to radiate out. Certain roadblocks stand in the way, and they, in turn, can only be removed after Grimm's nastiness level reaches a certain point. Initially, there are characters who will undo Grimm's corruption of their environment, so the first few minutes of each new area are spent in a frantic race to create enough darkness to overpower these do-gooders and convert them to the darker side of things.
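If it helps to picture the mechanic, think of it as a threshold-gated flood fill: darkness accumulates in an area around the player, and obstacles clear only once the running total crosses a level-specific value. The toy Python sketch below illustrates that idea; every name and number in it is invented for illustration and has nothing to do with the game's actual code.

import math

GRID = 20           # 20x20 level grid (invented size)
RADIUS = 2          # passive spread radius around Grimm (assumed)
STOMP_RADIUS = 4    # larger burst radius for the "butt stomp" (assumed)
ROADBLOCK_AT = 120  # total darkness needed to clear a roadblock (assumed)

darkness = [[0.0] * GRID for _ in range(GRID)]

def spread(cx, cy, radius, amount=1.0):
    """Add darkness to every tile within `radius` of (cx, cy)."""
    for y in range(GRID):
        for x in range(GRID):
            if math.hypot(x - cx, y - cy) <= radius:
                darkness[y][x] += amount

def total_darkness():
    return sum(sum(row) for row in darkness)

# Grimm runs along a diagonal, darkening tiles as he goes, then butt-stomps.
for step in range(10):
    spread(step, step, RADIUS)
spread(9, 9, STOMP_RADIUS, amount=2.0)

print(f"darkness: {total_darkness():.0f}, "
      f"roadblock cleared: {total_darkness() >= ROADBLOCK_AT}")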

Even though this concept sounds potentially gruesome, it's nicely balanced out by the game's puppet-like graphics. As a result, the changes to the story's world come across as hysterical and strange rather than horrifying and grotesque. Combine this with the fact that the dialogue and voice acting (mostly provided by industry veteran Roger Jackson) are goofy and amazing, and the overall feel of the game never veers outside of comedic waters.

The frustrating thing is that it's nearly impossible to actually reach the highest darkness level. After three play-throughs of A Boy Learns What Fear Is, I came close to the maximum rank by running about and performing Grimm's butt stomp everywhere, but I always seemed to miss some hidden portion that was left unsullied. The other problem in the episode is that there are occasional clipping issues with characters partially walking through objects as they go about their rounds.

Overall, it's hard not to like the first episode of Grimm, because it's simultaneously goofy, child-like, and completely twisted. While it isn't perfect, it certainly is a lot of fun to play. For the first entry into a weekly episodic series, it's a nice premiere; if each episode's quality improves from here, gamers will definitely be in line for a treat. Plus, right now the game is free to play, so you have no excuse.

Verdict: Buy? Subscribe? Just go and play because it's free!
Developer and Publisher: GameTap
Platform: PC
Price: Free this week
Rating: NA
Other recent reviews:

Final Fantasy IV
Rock Band Wii
Chocobo's Dungeon
Song Summoner: The Unsung Heroes
Final Fantasy Tactics A2

Among the clouds: CherryPal offers unbelievably tiny desktop

Laptops and subnotebooks compete on weight and power consumption, but desktops rarely join that fray, and pretty much never come out on top. A new, tiny desktop computer from a startup company aims to do exactly that. The CherryPal company's only product is the CherryPal desktop, a tiny device with a Freescale embedded processor that runs a stripped-down Linux variant, including a variety of common apps, at a very appealing price.

Hardware-wise, the CherryPal is nothing short of remarkable, in a weird sort of way. It packs a tri-core Freescale processor, 4GB of NAND flash, 256MB of DRAM, and all its other operating components into a ten-ounce package the size of a disappointing sandwich. The tiny device has the horsepower to display films, play music, handle word processing, and browse the Internet, and purportedly can handle Flash applications like YouTube. According to CherryPal, all this hardware consumes only two watts of electrical power. On the back of this tiny device are two USB ports, VGA, NIC, and stereo ports. All this goodness can be had for a mere $250.

The CherryPal boots in twenty seconds, but its Linux variant has none of the usual controls or settings—instead, it boots directly into Firefox and is controlled entirely through the browser. Indeed, this is cloud computing in a very real sense. The device itself has only 4GB of local storage, but it comes with 50GB more in an assigned cloud storage account with lifetime access provided by CherryPal. Apps and software are updated automatically.

The tiny computer idea is an interesting one, but it's not clear the CherryPal meets the need. Its Freescale processor, however rugged and capable, is not x86 compatible, so the device can never run a mainstream Linux distro or any variant of Windows. This makes it utterly unsuitable for HTPC and other such applications. Also, while the device itself is tiny and can be carried around, it needs to be hooked up to the Internet, a monitor, a keyboard, and a mouse, which are not available together in very many places. Because of this, it seems unlikely the device will actually be moved that often, in which case the purpose of making its hardware so tiny is not really clear.

CherryPal feels that environmental concerns and advertising can overcome these problems. The firm touts its device as a "green" computing solution, consuming less power and material than other computers, and lasting considerably longer, up to ten years. That's pretty doubtful, but then some of the firm's other reasoning is dubious. Since the 2W figure is small compared to the power needed by any conceivable display, the difference between this solution and one powered by Intel's more muscular Atom processor is insignificant. And if CherryPal really wanted to save materials, it probably should have gone the subnotebook route and thrown its hat into that ring. Marketing based on sheer price runs into the sad reality of the $99 Linux box specials Fry's runs on some Black Fridays and the thriving market in used computers.
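A quick back-of-the-envelope calculation shows why the display swamps the difference. The figures below are assumptions for illustration only (an Atom-based box drawing around 25W, a 20W LCD, eight hours of daily use, and electricity at roughly $0.12/kWh), not measured numbers.

HOURS_PER_YEAR = 8 * 365      # assume 8 hours of use per day
PRICE_PER_KWH = 0.12          # assumed electricity price, $/kWh

def annual_cost_usd(watts):
    """Yearly electricity cost for a constant draw of `watts`."""
    return watts * HOURS_PER_YEAR / 1000.0 * PRICE_PER_KWH

display_w = 20                # assumed LCD monitor draw
cherrypal_w = 2               # CherryPal's claimed draw
atom_w = 25                   # assumed draw for a small Atom-based desktop

cherry_total = annual_cost_usd(cherrypal_w + display_w)
atom_total = annual_cost_usd(atom_w + display_w)
print(f"CherryPal + display: ~${cherry_total:.2f}/yr")
print(f"Atom box + display:  ~${atom_total:.2f}/yr")
print(f"Difference:          ~${atom_total - cherry_total:.2f}/yr")
# Under these assumptions the gap is under ten dollars a year; the display
# dominates, which is why the 2W figure alone isn't a compelling differentiator.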

The device has some problems, and many users will prefer usability on the go, more muscular processors, and x86 compatibility when laying out money for compact computers. But, if CherryPal is right, consumers interested in simple, minimal, cheap, and green computing in the home may latch onto this new device. Users who want to pop their cherries will have to wait some time, though; CherryPal doesn't even plan to release the full details of its plans until the third quarter of this year.

Ancient herb may hold relief for osteoarthritis sufferers

Osteoarthritis (OA) afflicts over 20 million people in the United States alone. It is characterized by a painful swelling of a person's joints that can restrict movement and make carrying out daily tasks difficult. The typical prescription is some form of non-steroidal anti-inflammatory drug (NSAID) to reduce the swelling in the joint. New research shows that an old herb may have a surprising palliative effect.

Research published in the open access journal Arthritis Research & Therapy has found that Indian frankincense, Boswellia serrata, has a significant impact on patients suffering from osteoarthritis. The drug, named 5-Loxin, is a concentrated extract of 3-O-acetyl-11-keto-beta-boswellic acid (AKBA) derived from the herb. The chemical is believed to be the most active ingredient in the plant and, according to corresponding author Siba Raychaudhuri, "AKBA has anti-inflammatory properties, and we have shown that B. serrata enriched with AKBA can be an effective treatment for osteoarthritis of the knee."

The research consisted of a double-blind study that involved 75 individuals who suffered from OA. The participants were split into three groups, each consisting of 25 individuals. One group received 100 mg/day of the drug, a second received 250 mg/day, and the third group was given a placebo. The study lasted for 90 days, with 70 people completing it. The patients were examined at 7, 30, 60, and 90 days using standard tests to evaluate their pain and physical functionality.

According to the results of the study, "both doses of 5-Loxin conferred clinically and statistically significant improvements in pain scores and physical function scores in OA patients." Those in the group who took the high dosage reported improvements at the 7 day evaluation exam. In addition to a reduction in the perceived pain, the researchers found that the drug reduced the amount of a proteinase present in the synovial fluid of people suffering from OA; this enzyme degrades cartilage and contributes to the progression of the disease. The authors of the paper believe that this compound is "a promising alternative therapeutic strategy that may be used as a nutritional supplement against OA."

Arthritis Research & Therapy, 2008. DOI: 10.1186/ar2461

Report: Google plans venture capital group, but why?

Due to its success in the online ad and search market, Google has amassed nearly $13 billion in cash. So far, this hoard has mostly been used for the purchase of startup companies that are eventually assimilated into the Google Borg. But the search giant has also set up a foundation, Google.org, that (among other activities) invests in companies that are pursuing goals that the Google founders deem worthwhile. Apparently, the company has liked the investment approach enough that it's now considering creating an in-house venture capital effort.

The Wall Street Journal is reporting that planning for a venture capital group is already under way. The group will reportedly be run by Google's chief legal officer, David Drummond, who apparently has time to kill when he's not testifying before Congress. The Journal also reports that a former entrepreneur and private investor named William Maris will be brought on board for his expertise.

It’s not entirely clear what Google hopes to accomplish through an investment arm. Although the economy is a major worry at the moment, there has been little indication that this has had a damaging impact on the venture capital markets. As such, it’s not clear that there is a desperate need for Google’s billions to float new ventures. The converse, of course, is that the investments might pay off for Google, but the company doesn’t seem to need help in that regard, either.

One alternate explanation is that Google seeks to influence the development of new markets and technologies through its choice of investments. It's possible that the company will structure its investments such that it has the option of buying a startup outright should things work out.

This approach would entail a number of risks, however. Google is already being accused of having a monopoly on search by its competitors, and investments that seek to influence the development of this market would inevitably end up presented as evidence of monopolistic abuses. Meanwhile, startups might be leery of accepting investments from Google if they felt there were strings attached. Those behind new ventures will want to know that they can pursue anything that makes sense financially and technologically, rather than feeling they are restricted to chasing only those markets that Google thinks are appropriate or working with an eventual buyout by the search giant in mind.

The Journal notes that Google would join a significant list of technology companies, including Intel and Motorola, should it open an in-house investment group. It describes the experience of these other companies as mixed, and notes that their investments account for a shrinking slice of the venture capital pie. All of this makes the decision by Google that much more puzzling, as the company tends to avoid entering shrinking markets.

Street Fighter vs. Mortal Kombat: still trash-talking

Since the early 1990s, the Mortal Kombat and Street Fighter franchises have maintained a strong rivalry. It all began back when Kombat creator Ed Boon claimed that his game would "kick [Street Fighter]'s ass," a statement he continues to make to this day. Capcom, naturally, never took kindly to such words, and accused the gorier fighting game of simply imitating the greatness of its Street Fighter franchise.

At last week's Comic Con, Eurogamer discovered that the rivalry is still going, and it seems to be just as powerful as ever thanks to the fact that both Mortal Kombat vs. DC Universe and Street Fighter IV are going to be hitting stores in the near future. When Johnny Minkley tracked down Ed Boon at the show, he asked if the developer still felt like he was competing with the Street Fighter games. "I think just from the history, we do," Boon said, "I think we'll kick their ass…. we're a wilder ride—a big rollercoaster ride—and they're a little bit tamer."

Meanwhile, at Capcom's booth, community manager Seth Killian was just as impassioned. "You can't touch the mechanics of Street Fighter," he said, "and [SFIV] is really channeling back the classic mechanics that ignited the world… Mortal Kombat was riding the coattails of Street Fighter then, and I think Mortal Kombat may be riding the tails of Street Fighter as we move into 2009."

At this point, the rivalry is starting to seem a bit absurd, as the two games have evolved into largely different animals. The core Street Fighter games have always felt faster-paced and maintained an anime feel thanks to their art style and 2D layout. The Mortal Kombat games, on the other hand, feel a little slower in terms of combat but boast showier moves and have often featured weapon mechanics, 3D environments, and strangely enthralling amounts of gore.

Maybe it's time to leave this competition behind and find something else to focus on.

Opt-in or opt-out? Street View case echoes privacy debate

The UK today cleared Google's Street View service for use after looking into the privacy implications of the program, but the panoramic pics of real-world streets and homes continue to generate controversy. Google's response to critics? Get over yourselves.

A Pittsburgh couple, Aaron and Christine Boring, sued Google earlier this year in federal court after their $163,000 home appeared on Street View; the shot of the home was allegedly taken from the couple's private road. How do we know the home cost $163,000? Because photos of the property, details of the sale, and even a rough floorplan are already publicly available on the Internet from the Allegheny County assessor (not linked to preserve whatever privacy the Borings have left).

Google's lawyers, not ones to miss a trick, have already pointed this out to the court. The couple claims injury "even though similar photos of their home were already publicly available on the Internet, and even though they drew exponentially greater attention to the images in question by filing and publicizing this suit while choosing not to remove the images of their property from the Street View service," says Google.

Privacy? What privacy?

This statement is made in service of Google's larger point, which is: no one today has complete privacy. Except, perhaps, hermits.

"Today's satellite-image technology means that even in today's desert," Google writes, "complete privacy does not exist. In any event, Plaintiffs live far from the desert and are far from hermits… In today's society people drive on our driveways and approach our homes for all sorts of reasons—to make deliveries, to sell merchandise and services door-to-door, to turn around. As a society, we accept these 'intrusions'. They are customary, even expected." (The Smoking Gun unearthed the filing yesterday, though there's nothing "smoking" about it; anyone with PACER access to federal court cases can search for case 2:08-cv-00694-ARH.)

The pictures in question were "unremarkable photos" and the alleged trespass was "trivial." At every point in its response, Google passes off the complaints over Street View as simply too petty to bother with—even as it admits that its driver may, in fact, have driven past a sign marked "private road" to take the photo.

You complain, we fix

The company's point about complete privacy is well taken; we don't have it. But Google's preferred solution to the problem is for people to use "the simple removal option Google affords." Sound familiar? Sure it does, because it's the exact same argument the company uses against all rightsholders: tell us where the problem is, we'll fix it, but don't ask us to be proactive about clearing permissions first.

This dispute is at the heart of the Viacom/YouTube $1 billion lawsuit, among other cases. In this instance, Google basically says that it's up to people to scan Street View themselves, pick out photos that might be private, then notify the company. Staying off of private roads isn't Google's problem; it's the homeowner's. That might sound burdensome, but it's the same argument deployed against rightsholders over video.

This fundamental tension between the opt-in/get-permission/check-first model and the opt-out/seek-forgiveness/fix-later approach is shaping up as a fundamental point of contention on the Internet. NebuAd's opt-out approach to grabbing ISP clickstream data has become such a big deal that Congress has already held multiple hearings on the matter and has ISPs across the country running scared. When it comes to copyright, rightsholders have pushed (with some success) for video-sharing sites to screen uploaded content for possible violations before it goes live. User-generated content sites, which have powered the Web 2.0 revolution, are under attack over uploads of child pornography, regular pornography, and clips of public harassment and abuse of others. And a UK government commission this week recommended that user-generated content sites be forced to screen all uploads with human eyes before pushing them out to the web.

Copyright, privacy, and school bullying videos might not seem to have much in common, but the debate over screening first vs. fixing later could reshape the Internet as we know it. Having to get permission or screen content would hobble useful services like Street View and YouTube, and it would probably put companies like NebuAd out of business, even as it might lead to less objectionable or private content online.

But Wikipedia, YouTube, Flickr, and other services have shown us that the great mass of Internet users can produce a volume of content that boggles the mind and overwhelms attempts at centralized corporate screening and control.

Which approach do we want, and for which services do we want it? The Boring case is one tiny piece of this much larger debate, a debate which is about as interesting—and important—as Internet debates can be.

Superluminal waves make a theoretical splash

One of the fundamental conclusions of special relativity is that information cannot travel faster than the speed of light. This, of course, means that physical objects cannot travel faster than the speed of light either. However, every few years, someone reports an experiment in a paper that contains the phrase "superluminal velocity." What is up with these claims? Is there a chunk of the physics community out there hiding something from the rest of us?

Sadly, the answer is far more mundane than that. What the researchers are usually reporting is called the group velocity, which is allowed to travel faster than light because it carries no information. This raises the question, however, of why researchers are still interested in something so apparently mundane. The answer is quantum mechanics.

In quantum mechanics, there is a phenomenon called tunneling that describes how a particle, trapped by a barrier, can escape through that barrier even though it doesn't have sufficient energy to do so. Tunneling is of fundamental importance because it provides a framework through which our understanding of radioactive decay and modern electronics (among other fields) is derived. On close inspection, however, tunneling raises a few questions as well; for instance, what is the speed of the particle as it passes through the barrier, and how long does the tunneling process take?

The theoretical picture tells us that the speed of a particle in the barrier is purely imaginary (as in imaginary numbers, not pink elephants), as is the transit time, which makes interpretations problematic. However, by treating the particle as a wave and examining the phase change induced by the barrier, a real transit time can be derived. This transit time becomes independent of the barrier thickness for thick barriers, which has some interesting consequences. For a sufficiently thick barrier, the particle must move faster than the speed of light. Unfortunately, measurements of this phenomenon have proven to be quite difficult, and no conclusions have been drawn from the initial results.
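For readers who want to see where the thickness-independent time comes from, the standard textbook result for a rectangular barrier is sketched below; the notation is mine, chosen for illustration, and is not taken from the paper under discussion.

% A particle of energy E tunnels through a rectangular barrier of height
% V_0 > E and width d. Define the wave number outside the barrier and the
% decay constant inside it:
%   k = \sqrt{2mE}/\hbar, \qquad \kappa = \sqrt{2m(V_0 - E)}/\hbar.
% The phase (Wigner) time follows from the energy derivative of the
% transmission phase \phi(E), and for an opaque barrier it saturates:
\[
  \tau_\phi = \hbar \,\frac{\mathrm{d}\phi}{\mathrm{d}E}
  \;\longrightarrow\;
  \frac{2m}{\hbar k \kappa}
  \qquad (\kappa d \gg 1),
\]
% which no longer depends on the width d (the Hartman effect). The implied
% effective speed d/\tau_\phi therefore grows without bound as d increases,
% eventually exceeding c, which is the "superluminal" result discussed above.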

Luckily, there is an alternative way to test this: light. Under the right circumstances, light will also tunnel across a barrier—a phenomenon with the catchy name of frustrated total internal reflection is one example—and an analogous calculation for light can be made. But light causes its own set of experimental problems. All of the potential experiments involve detecting very weak pulses of light in the presence of much stronger reference signals. There is also the issue that the weak pulse is derived from the stronger pulse—as the intensity of a pulse gets weaker, it is quite likely that the pulse width will get shorter. Depending on how you define your measurement protocol, you might measure faster or slower transit times.

Clearly, what is needed is a way to create a tunneling barrier through which all the photons will tunnel. This is exactly what a group of Russian and Swedish researchers have proposed. In a recent Physical Review E publication, they have described the properties of a coaxial transmission line where a chunk in the center is modified to create a tunneling barrier for microwave light.

This is achieved by gradually varying the composition of the plastic center so that the light "sees" a weak "U" shaped potential. Under the right conditions, none of the microwaves will be reflected by this barrier, but the properties of the barrier and the mathematics describing it tell us that the light must tunnel. Furthermore, the nature of a coaxial cable means that every color of light travels at the same speed—unlike fiber optic cables. These properties provide nearly ideal conditions for an experiment.

To make the measurements, the researchers propose taking a microwave source and sending half the radiation through the tunneling barrier and half along a normal coaxial cable. The radiation can then be recombined so that an interference pattern is detected. By shooting pulses of light down the cables, the amount of overlap between two equally intense pulses can be measured, which can be used to derive how big the time difference between the two paths was. If the tunneling pulse arrives first, it traveled at a superluminal velocity.
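As a generic illustration of recovering a time difference from pulse overlap, the sketch below cross-correlates two identical Gaussian pulses, one delayed relative to the other, and reads the delay off the correlation peak. The pulse width, sampling step, and delay are assumed values for illustration, not parameters from the proposed experiment.

import numpy as np

dt = 1e-12                          # 1 ps sampling step (assumed)
t = np.arange(-2e-9, 2e-9, dt)      # 4 ns window
width = 100e-12                     # 100 ps Gaussian pulse width (assumed)
true_delay = 37e-12                 # delay to recover (assumed, 37 ps)

reference = np.exp(-0.5 * (t / width) ** 2)
delayed = np.exp(-0.5 * ((t - true_delay) / width) ** 2)

# The cross-correlation peaks at the lag that best aligns the two pulses.
corr = np.correlate(delayed, reference, mode="full")
lags = np.arange(-len(t) + 1, len(t)) * dt
estimated_delay = lags[np.argmax(corr)]

print(f"true delay: {true_delay * 1e12:.1f} ps, "
      f"estimated: {estimated_delay * 1e12:.1f} ps")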

So that all seems pretty simple, right? Where are the experimental results? Well, there aren't any. Being a naturally suspicious type of person, I wonder if this is because they have tried and found they couldn't get the coaxial cable to perform as expected. On the other hand, I would be delighted by robust experimental results on superluminal group velocities (or the lack thereof), and I hope they are planning a follow-up paper. However, what I really want to see is the phase velocity measured. The group velocity is measured by how fast a pulse moves, while the phase velocity is the velocity of the underlying waves that make up the pulse. Measuring the phase velocity across such a tunneling barrier would be very challenging, but would present a real test of the correctness of special relativity, a test I would expect it to pass with flying colors, just as it has all the others.

Physical Review E, 2008, DOI: 10.1103/PhysRevE.78.016601

Fixing the structural ills of US biomedical research

Although the US biomedical research endeavor might appear to be in great shape, it faces some significant problems. The rapid doubling of the NIH budget, started during the Clinton administration, was followed by several years of flat funding, and this has exposed structural cracks in the way that we're training and employing researchers. Here at Ars we've been banging that drum for a while now, and today in Science, Michael Teitelbaum, vice president of the Sloan Foundation, joins in with a policy forum article on the topic that contributes a list of suggestions for policymakers.

Starting in 1998, the NIH underwent a doubling of its budget over five years. At the time, this was greeted incredibly favorably. The numbers of successful grant applications rose, more PhDs were awarded, and more foreign scientists were attracted to the US. Along with this, a lot of money was spent by universities and research institutes across the country on shiny new buildings. Things were looking good.

However, since 2003, budgetary pressures have meant that, in real terms, the NIH budget has actually been falling. Since this is the major funding source for biomedical R&D in the US, a lot of researchers who were accustomed to steady growth suddenly found that funding opportunities weren't keeping pace with the growing research community. After spending five to eight years being trained to be independent scientists, all those new PhDs and postdocs suddenly discovered that only one in five would ever be able to land a faculty appointment. Those recently promoted to the ranks of faculty discovered that instead of being mentored by their senior colleagues, they were instead competing with, and often losing to, those same senior scientists for grants.

One of the problems with the system is that those young scientists, the PhD students and postdocs, do the bulk of the work; it's a lot cheaper to employ a PhD graduate as a postdoc (whom you often don't need to offer benefits to) than it is to hire a technician. However, postdocs are supposed to be temporary positions; after several years, that young scientist should be able to apply for independent funding.

That's the theory. In practice, advances in medical research (ironically) and the concomitant advances in working lifespan mean that the existing tenured faculty across the country are not retiring, so there are far fewer positions than scientists looking to fill them. Furthermore, since these postdocs are working for a PI upon whom they depend for their salary, the expectation is that they will work on the PI's project; the training towards independence isn't happening.

A lot of those postdocs—well, more than half—are foreign-born and mainly working on temporary visas, and those on J-1s can often be paid far below the prevailing wage; many universities have stories of unscrupulous faculty who bring researchers over from China or India to work 15-hour days in the lab with the knowledge that those who complain can simply be sent back and exchanged for another.

After describing this situation, Dr. Teitelbaum makes a number of recommendations in the article. One is to better align the PhD/postdoc systems with demand in the labor market for graduates; if only one in five is ever going to get a tenure track faculty position, we need to be better preparing the remaining 80 percent for the alternative careers. More funding for positions such as staff scientists could limit the constant stream of young scientists who end up forgotten after three or four years. A rethinking of the numbers of foreign scientists who are awarded temporary visas each year is also advocated.

Other suggestions include an examination of the way the NIH budget is constructed and how it is spent. The US should avoid rapid growth followed by fallow years; measured but less extreme growth is sustainable and avoids situations such as the one we currently find ourselves in, where a glut of young scientists has nowhere to go. Also mentioned is the idea of limiting the percentage of faculty salary that can be paid by research grants, and adjusting overhead rules regarding funding new facilities and the debt created by building them.

As someone who's been involved in the debate over many of these issues, the suggestions all seem like pretty smart ones to me, although there appears to be institutional resistance to change and inertia on anything other than a small level. The Pathway to Independence awards are a good example of this; designed to help transition postdocs to faculty, they are a good solution, but with only 250 given out each year and about 90,000 postdocs, they're mere window dressing, and the researchers who manage to win them were always going to get their own funding anyway.

While the NIH's role in creating or allaying these structural problems is often discussed, the role of universities also needs addressing. These institutions are the primary beneficiaries; they get a lot of the grant money through overhead costs, they don't have to pay much in the way of salaries, and they benefit from the publications. Universities are some of the first to lobby Congress to increase NIH funding, but don't appear to be taking their share of the responsibility for the problems.

For example, the universities could consider covering the salaries of their senior faculty. This would free up those experienced researchers from the need to continually compete with (and out-compete) up-and-coming scientists, and instead let them effectively mentor and train the next generations. Of course, as long as scientists continue to use grants as measurement of success against their peers, I'd expect a lot of resistance toward this idea. One thing's for sure, though, things ought not to continue along as they are now.

Science, 2008. DOI: 10.1126/science.1160272