Newly-found hybrid attack embeds Java applet in GIF file

Researchers at NGSSoftware have developed a hybrid attack capable of hiding itself within an image and intend to present details on the exploit at the Black Hat security conference next week. New and esoteric attacks are part and parcel of what Black Hat is about, but this particular vector could target web sites with a particularly vulnerable population: MySpace and Facebook. Social networking web sites tend to attract younger users, and while this particular attack can be used in a variety of ways, embedding the hook in profile photos that are then seeded and targeted at the teen crowd could be a very effective tactic.

The full details of the attack won't be available until next week, but Network World has managed to glean some key facts on its operation. The NGSSoftware team has found a way to embed a Java applet within a GIF; the hybridized file is referred to as a GIFAR. Just to make it clear, this is a file extension of convenience and not the literal name of any particular file type. The GIFAR exploit works because two different programs see the same file differently. The web server that actually holds the file sees it as a GIF file, and serves it accordingly, but when the "image" actually reaches the client, it's opened as a Java applet and run.

Simply viewing a GIFAR won't infect a system; the attack method requires that the user be linked to the hybridized infection from an appropriately malicious web site. Despite its name, this attack method is not limited to GIFs; ZDNet's Zero Day blog has additional information on the exploit, and states that a number of files could be combined with .JAR, including both JPEGs and DOCs. This seems to indicate that one could actually hide a Java applet inside another Java applet, and then tie both of them together with a BINK file, but the resulting mess would probably fail, even as comedic relief.

The root of the problem isn't within Java itself, but results from weak web application security. ZDNet's blog entry implies that the attack vector might be significantly reduced if web applications would actually parse a file's contents, rather than simply checking the extension. The research team will leave some details of the attack out in their presentation, to prevent immediate exploitation, and Sun intends to issue a patch that will serve as a short-term correction to the problem.
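
ZDNet's suggested fix lends itself to a concrete illustration. Below is a minimal, hypothetical sketch (in Python, not NGSSoftware's code; the function and file names are my own) of an upload check that inspects a file's actual bytes instead of trusting its extension, rejecting a "GIF" that also carries a ZIP/JAR archive:

    # Hypothetical upload check: verify a claimed GIF really is just a GIF, and
    # reject files that also contain a ZIP/JAR payload (the GIFAR trick relies on
    # a JAR reader finding a valid archive inside the "image").
    GIF_MAGIC = (b"GIF87a", b"GIF89a")
    ZIP_EOCD = b"PK\x05\x06"  # ZIP "end of central directory" record, present in every JAR

    def looks_like_plain_gif(data: bytes) -> bool:
        """True only if the file starts with a GIF header and shows no sign
        of an embedded ZIP/JAR archive."""
        if not data.startswith(GIF_MAGIC):
            return False      # extension says GIF, contents say otherwise
        if ZIP_EOCD in data:
            return False      # a JAR/ZIP reader could open this "image" as an archive
        return True

    if __name__ == "__main__":
        with open("upload.gif", "rb") as f:
            print("accept" if looks_like_plain_gif(f.read()) else "reject")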

Mapping the peculiar velocities of stars

All things dark are all the rage in cosmology at the moment. There is dark matter—a type of matter that only weakly interacts with light. And dark energy—the label used to denote the observed increase in the rate of expansion of the universe. Our knowledge of what dark matter is and what dark energy denotes is woefully inadequate, opening up a theoretician's paradise. There are all sorts of models out there and, in the case of dark energy, they all have to fit one data point, making it kind of trivial to obtain a good result. In the meantime, astronomers are scrabbling around—in, yes, the dark—figuring out how to obtain more precise measurements of the increasing acceleration of the universe.

In particular, there is a set of models that predicts that the distribution of dark energy is not uniform, meaning that measurements of the velocity of stars at different distances and directions should be able to tell theoreticians whether barking up this particular tree is worthwhile. However, there is a problem: it is quite difficult to measure these velocities. Locally, astronomers use Type Ia supernovae as references for distance and speed, but the further away the supernovae are, the weaker the signal, and the more significant confounding sources of noise become.

One source of noise is gravitational lensing, which causes an apparent change in the brightness of the supernova, resulting in an incorrect distance calculation. A pair of Chinese astronomers has now examined the problem and shown that the signature of gravitational lensing can be removed.

A gravitational lens will often smear the image of the star into an arc shape, depending on the relative location of the star, the lens, and the telescope. The behavior of the lens is relatively static and its influence can be calculated in two dimensions by examining the correlations between points on the image and calculating the spatial frequencies of those correlations—dark matter can be observed through this method.

However, this 2D power spectrum does not allow a correction to be made for the distance and velocity of the star. To do that, the researchers performed the correlation and power spectrum calculations in 3D. The supernova light has most of its power along the line of sight, while the lens power spectrum remains 2D and at right angles to the line of sight. This effectively separates out the contribution of the lens, allowing researchers to correct for gravitational lensing.
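
To make the geometry of that separation concrete, here is a toy numpy sketch (my own illustration under simplified assumptions, not the authors' analysis): a signal that varies only along the line of sight puts its Fourier power in the kz modes, while a static lensing screen that varies only in the transverse plane puts its power in the kx-ky plane, so the two occupy disjoint regions of the 3D power spectrum.

    import numpy as np

    # Toy model: a "supernova" signal varying only along the line of sight (z),
    # plus a static "lensing screen" varying only in the transverse plane (x, y).
    n = 64
    z = np.arange(n)
    signal_los = np.sin(2 * np.pi * 5 * z / n)
    lens_screen = np.random.default_rng(0).normal(size=(n, n))

    field = signal_los[None, None, :] + lens_screen[:, :, None]

    # 3D power spectrum of the combined field
    power = np.abs(np.fft.fftn(field)) ** 2

    # Power in modes purely along the line of sight (kx = ky = 0, kz != 0)
    los_power = power[0, 0, 1:].sum()
    # Power in purely transverse modes (kz = 0), excluding the zero mode
    transverse_power = power[:, :, 0].sum() - power[0, 0, 0]

    print(f"line-of-sight power: {los_power:.3e}")
    print(f"transverse power:    {transverse_power:.3e}")
    # The two contributions land in separate regions of k-space, which is the
    # handle the 3D analysis uses to subtract the lensing term.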

So, this seems like a pretty obscure bit of research to put on Nobel Intent, but I think it is important to show these slightly less sexy parts of the scientific process. Should models with a non-isotropic distribution of dark energy prove correct, measurements derived from observations of Type Ia supernovae will play a critical role in confirming them. Before that can happen, these sorts of problems need to be solved.

To give you some insight into how important this issue is to the astronomy community, during the time this paper was being written and going through peer review, four other papers on the topic were published or accepted for publication, presenting other ways to solve the same problem.

Physical Review D, 2008, DOI: 10.1103/PhysRevD.78.023006

Next up for pointless gaming laws? Illinois and FFXI

New York was the last state to pass a law forcing gaming companies to do something they already do, and it was such a great use of time and money that Illinois had to get in on the action. Following the trials of two parents trying to cancel a Final Fantasy XI account, the state passed a law saying that online games had to have a way to cancel your account online.

The summary of the bill, which was signed into law on Tuesday, follows:

…an Internet gaming service provider that provides service to a consumer… for a stated term that is automatically renewed for another term unless a consumer cancels the service must give a consumer who is an Illinois resident: (1) a secure method at the Internet gaming service provider's web site that the consumer may use to cancel the service, which method shall not require the consumer to make a telephone call or send U.S. Postal Service mail to effectuate the cancellation;

and (2) instructions that the consumer may follow to cancel the service at the Internet gaming service provider's web site. Makes it an unlawful business practice for an Internet gaming service provider to violate the new provisions.

I passed this over to our own Frank Caron. Caron, a while back, decided to work on his pasty Canadian complexion and canceled his Final Fantasy XI account in order to spend more time outside. How did he do it? He used the PlayOnline software that comes bundled with the game. As Frank points out, canceling your account is possible online, even if the software may seem obtuse to those who aren't familiar with this sort of service. "Besides, there are plenty of help files and 'contact us' notices to help guide users," he noted.

The law has good intentions, but are there many online games that don't allow you to do this? Was this a major problem? Don't you think someone would have looked into this a little more closely before writing the legislation? Sadly, we know the answer to that last question.

Thanks to GamePolitics for the heads up on this story. What do you guys think? Is canceling FFXI trickier than it needs to be? Do other games need to make this process more user-friendly? Sound off.

Report: Google plans venture capital group, but why?

Due to its success in the online ad and search market, Google has amassed nearly $13 billion in cash. So far, this hoard has mostly been used for the purchase of startup companies that are eventually assimilated into the Google Borg. But the search giant has also set up a foundation, Google.org, that (among other activities) invests in companies that are pursuing goals that the Google founders deem worthwhile. Apparently, the company has liked the investment approach enough that it’s now considering creating an in-house venture capital effort.

The Wall Street Journal is reporting that planning for a venture capital group is already under way. The group will reportedly be run by Google’s chief legal officer, David Drummond, who apparently has time to kill when he’s not testifying before Congress. The Journal reports that a former entrepreneur and private investor named William Maris will be brought on board for his expertise.

It’s not entirely clear what Google hopes to accomplish through an investment arm. Although the economy is a major worry at the moment, there has been little indication that this has had a damaging impact on the venture capital markets. As such, it’s not clear that there is a desperate need for Google’s billions to float new ventures. The converse, of course, is that the investments might pay off for Google, but the company doesn’t seem to need help in that regard, either.

One alternate explanation is that Google seeks to influence the development of new markets and technologies through its choice of investments. It’s possible that they’ll structure the investments such that they have the option of buying the company outright should things work out.

This approach would entail a number of risks, however. Google is already being accused of having a monopoly on search by its competitors, and investments that seek to influence the development of this market would inevitably end up presented as evidence of monopolistic abuses. Meanwhile, startups might be leery of accepting investments from Google if they felt there were strings attached. Those behind new ventures will want to know that they can pursue anything that makes sense financially and technologically, rather than feeling they are restricted to chasing only those markets that Google thinks are appropriate or working with an eventual buyout by the search giant in mind.

The Journal notes that Google would join a significant list of technology companies, including Intel and Motorola, should it open an in-house investment group. It describes the experience of these other companies as mixed, and notes that their investments account for a shrinking slice of the venture capital pie. All of this makes the decision by Google that much more puzzling, as the company tends to avoid entering shrinking markets.

Street Fighter vs. Mortal Kombat: still trash-talking

Since the early 1990s, the Mortal Kombat and Street Fighter franchises have maintained a strong rivalry. It all began back when Kombat creator Ed Boon claimed that his game would "kick [Street Fighter]'s ass,"… a statement he continues to make to this day. Capcom, naturally, never took kindly to such words, and accused the gorier fighting game of simply imitating the greatness of its Street Fighter franchise.

At last week's Comic Con, Eurogamer discovered that the rivalry is still going, and it seems to be just as powerful as ever thanks to the fact that both Mortal Kombat vs. DC Universe and Street Fighter IV are going to be hitting stores in the near future. When Johnny Minkley tracked down Ed Boon at the show, he asked if the developer still felt like he was competing with the Street Fighter games. "I think just from the history, we do," Boon said, "I think we'll kick their ass…. we're a wilder ride—a big rollercoaster ride—and they're a little bit tamer."

Meanwhile, at Capcom's booth, community manager Seth Killian was just as impassioned. "You can't touch the mechanics of Street Fighter," he said, "and [SFIV] is really channeling back the classic mechanics that ignited the world… Mortal Kombat was riding the coattails of Street Fighter then, and I think Mortal Kombat may be riding the tails of Street Fighter as we move into 2009."

At this point, the rivalry is starting to seem a bit absurd, as the two games have evolved into largely different animals. The core Street Fighter games have always felt faster-paced and maintained an anime feel thanks to their art style and 2D layout. The Mortal Kombat games, on the other hand, feel a little slower in terms of combat but feature showier moves and have often featured weapon mechanics, 3D environments, and strangely enthralling amounts of gore.

Maybe it's time to leave this competition behind and find something else to focus on.

Opt-in or opt-out? Street View case echoes privacy debate

The UK today cleared Google's Street View service for use after looking into the privacy implications of the program, but the panoramic pics of real-world streets and homes continue to generate controversy. Google's response to critics? Get over yourselves.

A Pittsburgh couple, Aaron and Christine Boring, sued Google earlier this year in federal court after their $163,000 home appeared on Street View; the shot of the home was allegedly taken from the couple's private road. How do we know the home cost $163,000? Because photos of the property, details of the sale, and even a rough floorplan are already publicly available on the Internet from the Allegheny County assessor (not linked to preserve whatever privacy the Borings have left).

Google's lawyers, never ones to miss a trick, have already pointed this out to the court. The couple claims injury "even though similar photos of their home were already publicly available on the Internet, and even though they drew exponentially greater attention to the images in question by filing and publicizing this suit while choosing not to remove the images of their property from the Street View service," says Google.

Privacy? What privacy?

This statement is made in service of Google's larger point, which is: no one today has complete privacy. Except, perhaps, hermits.

"Today's satellite-image technology means that even in today's desert," Google writes, "complete privacy does not exist. In any event, Plaintiffs live far from the desert and are far from hermits… In today's society people drive on our driveways and approach our homes for all sorts of reasons—to make deliveries, to sell merchandise and services door-to-door, to turn around. As a society, we accept these 'intrusions'. They are customary, even expected." (The Smoking Gun unearthed the filing yesterday, though there's nothing "smoking" about it; anyone with PACER access to federal court cases can search for case 2:08-cv-00694-ARH.)

The pictures in question were "unremarkable photos" and the alleged trespass was "trivial." At every point in its response, Google passes off the complaints over Street View as simply too petty to bother with—even as it admits that its driver may, in fact, have driven past a sign marked "private road" to take the photo.

You complain, we fix

The company's point about complete privacy is well taken; we don't have it. But Google's preferred solution to the problem is for people to use "the simple removal option Google affords." Sound familiar? Sure it does, because it's the exact same argument the company uses against all rightsholders: tell us where the problem is, we'll fix it, but don't ask us to be proactive about clearing permissions first.

This dispute is at the heart of the Viacom/YouTube $1 billion lawsuit, among other cases. In this instance, Google basically says that it's up to people to scan Street View themselves, pick out photos that might be private, then notify the company. Staying off of private roads isn't Google's problem; it's the homeowner's. That might sound burdensome, but it's the same argument deployed against rightsholders over video.

This fundamental tension between the opt-in/get-permission/check-first model and the opt-out/seek-forgiveness/fix-later approach is shaping up as a fundamental point of contention on the Internet. NebuAd's opt-out approach to grabbing ISP clickstream data has become such a big deal that Congress has already held multiple hearings on the matter and has ISPs across the country running scared. When it comes to copyright, rightsholders have pushed (with some success) for video-sharing sites to screen uploaded content for possible violations before it goes live. User-generated content sites, which have powered the Web 2.0 revolution, are under attack over uploads of child pornography, regular pornography, and clips of public harassment and abuse of others. And a UK government commission this week recommended that user-generated content sites be forced to screen all uploads with human eyes before pushing them out to the web.

Copyright, privacy, and school bullying videos might not seem to have much in common, but the debate over screening first vs. fixing later could reshape the Internet as we know it. Having to get permission or screen content would hobble useful services like Street View and YouTube, and it would probably put companies like NebuAd out of business, even as it might lead to less objectionable or private content online.

But Wikipedia, YouTube, Flickr, and other services have shown us that the great mass of Internet users can produce a volume of content that boggles the mind and overwhelms attempts at centralized corporate screening and control.

Which approach do we want, and for which services do we want it? The Boring case is one tiny piece of this much larger debate, a debate which is about as interesting—and important—as Internet debates can be.

Superluminal waves make a theoretical splash

One of the fundamental conclusions of special relativity is that information cannot travel faster than the speed of light. This, of course, means that physical objects cannot travel faster than the speed of light either. However, every few years, someone reports an experiment in a paper that contains the phrase "superluminal velocity." What is up with these claims? Is there a chunk of the physics community out there hiding something from the rest of us?

Sadly, the answer is far more mundane than that. What the researchers are usually reporting is called the group velocity, which is allowed to travel faster than light because it carries no information. However, this raises the question of why researchers are still interested in something so apparently mundane. The answer is quantum mechanics.

In quantum mechanics, there is a phenomenon called tunneling that describes how a particle, trapped by a barrier, can escape through that barrier even though it doesn't have sufficient energy to do so. Tunneling is of fundamental importance because it provides a framework through which our understanding of radioactive decay and modern electronics (among other fields) is derived. On close inspection, however, tunneling raises a few questions as well; for instance, what is the speed of the particle as it passes through the barrier, and how long does the tunneling process take?

The theoretical picture tells us that the speed of a particle in the barrier is purely imaginary (as in imaginary numbers, not pink elephants), as is the transit time, which makes interpretations problematic. However, by treating the particle as a wave and examining the phase change induced by the barrier, a real transit time can be derived. This transit time becomes independent of the barrier thickness for thick barriers, which has some interesting consequences. For a sufficiently thick barrier, the particle must move faster than the speed of light. Unfortunately, measurements of this phenomenon have proven to be quite difficult, and no conclusions have been drawn from the initial results.
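
For readers who want the quantitative statement behind "independent of the barrier thickness," here is a minimal sketch in standard notation (my gloss on the usual textbook treatment, not a formula taken from the paper under discussion). The real transit time referred to here is the Wigner phase time,

    \tau_\phi(E) = \hbar \, \frac{\partial \phi(E)}{\partial E},

where \phi(E) is the phase picked up by the transmitted wave. For an opaque rectangular barrier of width d, \tau_\phi saturates to a constant as d grows (the Hartman effect), so the nominal speed d / \tau_\phi increases without bound and eventually exceeds c.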

Luckily, there is an alternative way to test this: light. Under the right circumstances, light will also tunnel across a barrier—a phenomenon with the catchy name of frustrated total internal reflection is one example—and an analogous calculation for light can be made. But light causes its own set of experimental problems. All of the potential experiments involve detecting very weak pulses of light in the presence of much stronger reference signals. There is also the issue that the weak pulse is derived from the stronger pulse—as the intensity of a pulse gets weaker, it is quite likely that the pulse width will get shorter. Depending on how you define your measurement protocol, you might measure faster or slower transit times.

Clearly, what is needed is a way to create a tunneling barrier through which all the photons will tunnel. This is exactly what a group of Russian and Swedish researchers have proposed. In a recent Physical Review E publication, they have described the properties of a coaxial transmission line where a chunk in the center is modified to create a tunneling barrier for microwave light.

This is achieved by gradually varying the composition of the plastic center so that the light "sees" a weak "U" shaped potential. Under the right conditions, none of the microwaves will be reflected by this barrier, but the properties of the barrier and the mathematics describing it tell us that the light must tunnel. Furthermore, the nature of a coaxial cable means that every color of light travels at the same speed—unlike fiber optic cables. These properties provide nearly ideal conditions for an experiment.

To make the measurements, the researchers propose taking a microwave source and sending half the radiation through the tunneling barrier and half along a normal coaxial cable. The radiation can then be recombined so that an interference pattern is detected. By shooting pulses of light down the cables, the amount of overlap between two equally intense pulses can be measured, which can be used to derive how big the time difference between the two paths was. If the tunneling pulse arrives first, it traveled at a superluminal velocity.
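
To illustrate the back end of that measurement, here is a toy Python sketch (with numbers I made up rather than anything from the proposal): if the two arms deliver equal-amplitude pulses, the arrival-time difference can be read off from where their cross-correlation peaks.

    import numpy as np

    # Two identical Gaussian pulses standing in for the "tunneled" and
    # "reference" arms of the proposed interferometric measurement.
    t = np.linspace(0, 10e-9, 4000)   # 10 ns window
    dt = t[1] - t[0]

    def pulse(t0, width=0.5e-9):
        return np.exp(-((t - t0) / width) ** 2)

    reference = pulse(5.0e-9)
    tunneled = pulse(4.7e-9)          # arrives 0.3 ns early (made-up number)

    # The lag that maximizes the cross-correlation is the arrival-time difference.
    corr = np.correlate(tunneled, reference, mode="full")
    lags = (np.arange(corr.size) - (reference.size - 1)) * dt
    delay = lags[np.argmax(corr)]
    print(f"recovered delay: {delay * 1e9:.2f} ns")   # ~ -0.30 ns: tunneled arm leads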

So that all seems pretty simple, right? Where are the experimental results? Well, there aren't any. Being a naturally suspicious type of person, I wonder if this is because they have tried and found they couldn't get the coaxial cable to perform as expected. On the other hand, I would be delighted by robust experimental results on superluminal group velocities (or the lack thereof), and I hope they are planning a follow-up paper. However, what I really want to see is the phase velocity measured. The group velocity is measured by how fast a pulse moves, while the phase velocity is the velocity of the underlying waves that make up the pulse. Measuring the phase velocity across such a tunneling barrier would be very challenging, but would present a real test of the correctness of special relativity. A test I would expect it to pass with flying colors, just as it has with all the other tests.

Physical Review E, 2008, DOI: 10.1103/PhysRevE.78.016601

Fixing the structural ills of US biomedical research

Although the US biomedical research endeavor might appear to be in great shape, it faces some significant problems. The rapid doubling of the NIH budget started during the Clinton administration was followed by several years of flat funding, and this has exposed structural cracks in the way that we're training and employing researchers. Here at Ars we've been banging that drum for a while now, and today in Science, Michael Teitelbaum, vice president of the Sloan Foundation, joins in with a policy forum article on the topic that contributes a list of suggestions for policymakers.

Starting in 1998, the NIH underwent a doubling of its budget over five years. At the time, this was greeted incredibly favorably. The numbers of successful grant applications rose, more PhDs were awarded, and more foreign scientists were attracted to the US. Along with this, a lot of money was spent by universities and research institutes across the country on shiny new buildings. Things were looking good.

However, since 2003, budgetary pressures have meant that, in real terms, the NIH budget has actually been falling. Since this is the major funding source for biomedical R&D in the US, a lot of researchers who were accustomed to steady growth suddenly found that funding opportunities weren't keeping pace with the growing research community. After spending five to eight years being trained to be independent scientists, all those new PhDs and postdocs suddenly discovered that only one in five would ever be able to land a faculty appointment. Those recently promoted to the ranks of faculty discovered that, instead of being mentored by their senior colleagues, they were competing with, and often losing to, those same senior scientists for grants.

One of the problems with the system is that those young scientists, the PhD students and postdocs, do the bulk of the work; it's a lot cheaper to employ a PhD graduate as a postdoc (whom you often don't need to offer benefits to) than it is to hire a technician. However, postdocs are supposed to be temporary positions; after several years, that young scientist should be able to apply for independent funding.

That's the theory. In practice, advances in medical research (ironically) and the concomitant advances in working lifespan mean that the existing tenured faculty across the country are not retiring, so there are far fewer positions than scientists looking to fill them. Furthermore, since these postdocs are working for a PI upon whom they depend for their salary, the expectation is that they will work on the PI's project; the training towards independence isn't happening.

A lot of those postdocs—well, more than half—are foreign-born and mainly working on temporary visas, and those on J-1s can often be paid far below the prevailing wage; many universities have stories of unscrupulous faculty who bring researchers over from China or India to work 15-hour days in the lab with the knowledge that those who complain can just be sent back and exchanged for another.

After describing this situation, Dr. Teitelbaum makes a number of recommendations in the article. One is to better align the PhD/postdoc systems with demand in the labor market for graduates; if only one in five is ever going to get a tenure track faculty position, we need to be better preparing the remaining 80 percent for the alternative careers. More funding for positions such as staff scientists could limit the constant stream of young scientists who end up forgotten after three or four years. A rethinking of the numbers of foreign scientists who are awarded temporary visas each year is also advocated.

Other suggestions include an examination of the way the NIH budget is constructed and how it is spent. The US should avoid rapid growth followed by fallow years; measured but less extreme growth is sustainable and avoids situations such as the one we currently find ourselves in, where a glut of young scientists has nowhere to go. Also mentioned is the idea of limiting the percentage of faculty salary that can be paid by research grants, and adjusting overhead rules regarding funding new facilities and the debt created by building them.

As someone who's been involved in the debate over many of these issues, I think the suggestions all seem pretty smart, although there appears to be institutional inertia and resistance to anything more than small-scale change. The Pathway to Independence awards are a good example of this; designed to help transition postdocs to faculty, they are a good solution, but with only 250 given out each year and about 90,000 postdocs, they're mere window dressing, and the researchers who manage to win them were always going to get their own funding anyway.

While the NIH's role in creating or allaying these structural problems is often discussed, the role of universities also needs addressing. These institutions are the primary beneficiaries; they get a lot of the grant money through overhead costs, they don't have to pay much in the way of salaries, and they benefit from the publications. Universities are some of the first to lobby Congress to increase NIH funding, but don't appear to be taking their share of the responsibility for the problems.

For example, the universities could consider covering the salaries of their senior faculty. This would free up those experienced researchers from the need to continually compete with (and out-compete) up-and-coming scientists, and instead let them effectively mentor and train the next generations. Of course, as long as scientists continue to use grants as a measurement of success against their peers, I'd expect a lot of resistance toward this idea. One thing's for sure, though: things ought not to continue along as they are now.

Science, 2008. DOI: 10.1126/science.1160272

Buffalo selling UpgradEees for subnotebook SSD

Asus' Eee PC line has met with a lot of interest, sold strongly, and has launched an entire industry sector of competing subnotebooks, but a persistent problem has been the device's comparatively anemic storage options. A new product from Buffalo may solve this problem by offering large amounts of upgradable SSD storage to Eee users.

The base Eee offers only 2GB of flash storage, and even the 9" Eee 900-series parcels out additional NAND in miserly blocks of 4GB at heavy cost. To address this, Buffalo has developed and is selling a NAND Flash SSD compatible with the Eee 900-series. The new devices add 32GB or 64GB of storage to the Eee for $150 or $300, respectively, pricing that is not quite competitive with the OCZ Core line.

The specific device is a bit pricey for the Eee. The 64GB model costs as much as the bottom-of-the-line Eee 700 2G, which will probably make Eee users reluctant to shell out so much money, especially since doing so would allow them to step up to the HP Mini-Note or another competing product. On the other hand, Eees equipped with a hefty storage capacity, with their same small size and other appealing features, could compete directly with these other devices. Also, consumers who want more space can replace the unit's SSD when they need it.

The strategy involved is interesting. The new devices do not connect via a hard disk bus like SATA, but instead connect directly to PCIe, as the Eee's stock SSD does. This approach eliminates performance bottlenecks associated with the SATA bus, although it's very unlikely that the Buffalo drives would hit those limits. The new device is Eee-specific, since the direct PCIe connection and the plug used appear to be incompatible with other PCs, at least for now.

On the other hand, this may spark a renaissance in disk interchangeability on subnotebooks. Before now, subnotebooks seeking more storage or higher performance have had to use tiny hard disks like the one in the Everex Cloudbook, or settle for a sacrifice in size, weight, and power consumption by going all the way to 2.5" hard disks. If the third-party Eee SSD catches on, other subnotebook vendors may start building their subnotebooks to accept the new devices, creating a de facto standard for subnotebook SSD intercompatibility. This is a very exciting development.

Rumor: Nokia working on integrating Zune Marketplace

A new rumor has appeared on the Zune block, and this one has nothing to do with a possible Zune Phone. In fact, the source of this rumor again denied knowing anything about a possible Zune Phone and instead insisted that the Zune team was hard at work collaborating with Nokia. This rumor, however, says nothing of a new hardware device, and instead talks about the Zune Marketplace moving beyond the Zune, which has been speculated before, but only for Windows Mobile devices:

Nokia is currently working with the Zune team on integration of Zune Marketplace content according to a well-placed source within Microsoft. The joint development is directed at content delivery rather than a hardware device according to the source.

Now, obviously Nokia does not support Windows Mobile, but Microsoft's move still makes sense: Nokia dominates the worldwide handset market. It is the only one of the five largest mobile phone manufacturers that does not have a Windows Mobile device, but that does not mean the two companies don't have a long history: Microsoft already offers a variety of its software and services on Nokia phones. The source also noted that the Nokia deal will not be exclusive, meaning that other mobile manufacturers could also be planning to do the same, at the very least on their Windows Mobile offerings.

The only thing that doesn't add up is that the mobile company has its "Nokia Music Store," and it's not clear what would happen to it if Nokia started to support the Zune Marketplace. Nokia last year also partnered with Universal to offer "Comes With Music," a service that rolls the $5 per month fee into the cost of a device or any accompanying service charges, making it look like a free one-year music subscription. It doesn't make sense for Nokia to offer all of this as well as integration with Zune Marketplace.

Pushing out the Zune Marketplace to Nokia phones would be a direct attack on Apple and iTunes, which can be accessed via both the iPod Touch and the iPhone. Depending on how many Nokia phones end up getting access to the Zune Marketplace and how well the connection is implemented, Apple could have a serious competitor on its hands. Regarding development and production timelines for Nokia-Zune Marketplace integration, the source claimed "it's too soon to say." In other words, Apple has little to worry about for now, since it has the advantage of already offering a "music for your mobile" option.

Further reading: Zune Scene: Nokia Zune Deal in Works

9500GT, 9800GT, 9800GTX+ cards hit the streets

NVIDIA's just-launched GeForce 9500GT, 9800GT, and 9800GTX+ GPUs are already seeing commercial implementations on consumer graphics cards from a number of vendors, Digitimes reports. The new GPUs are positioned in between existing products, and the 9500GT is already destined for a 55nm migration.

The 9500GT has half the stream processors (32) of the popular 9600GT, runs at slightly lower clock speeds, and is populated by 256MB or 512MB of DDR3 or DDR2, depending on the manufacturer. It features a standard pair of DVI ports and an HDTV out. The DVI ports allow audio-carrying HDMI with an included adapter, a feature which, prior to the 9600GT, was only supported on ATI cards. It's less than seven inches long and entirely bus-powered, adding to ease of installation and support for small cases. With fairly robust media features, HDMI, and support for hardware acceleration of high-definition video playback, the 9500GT is a fairly compelling HTPC card, although only moderately competent at games. It will retail for about $70.

As part of its 55nm migration plans, NVIDIA will move the 9500GT to 55nm fabrication within the year. This will reduce costs significantly, allow lower-power and higher-clocked solutions, and possibly make passive cooling easier.

The 9800GT and 9800GTX+ are gaming cards. The 9800GT is essentially a 9800GTX with one block of stream processors disabled, for 112 processor cores instead of 128, and downclocked from 675MHz to 600MHz. It should be quite competent as a gaming card; it's essentially an 8800 GT. This extra muscle comes at a price, though: the card is longer, needs external power, and consumes more electrical power. It retails at about $170. The 9800GTX+ is exactly what its name suggests: a 9800GTX die-shrunk to 55nm and clocked higher accordingly. It retails at about $200.

MSI, Asus, Gigabyte, Foxconn, Biostar, and Leadtek are all shipping cards on these GPUs, as are several other manufacturers. Most implementations closely follow the reference design, although some are mildly overclocked. Biostar even has a passively-cooled 9500GT. The new cards are already appealing, and the promise of a lower-power 9500GT makes the future promise of this particular GPU even more alluring.

Microsoft: 120 million Office 2007 licenses now sold

Last week, at Microsoft's annual Financial Analysts Meeting, Stephen Elop, president of Microsoft's Business division, revealed a sales figure that wasn't meant to fight bad press around Vista. Elop was talking about Microsoft Office 2007, and he threw out a statistic that might be a bit surprising at first:

We made some very bold moves to improve the user experience with Office 2007. And as you can see in this graph, we're getting some really good pickup on that. There have been 120 million Office licenses sold since the launch of Office 2007, which is just a great result.

When compared with the 180 million Vista licenses sold, it appears at first glance that Vista is doing much better than Office 2007, considering their launch was simultaneous. Piracy rates cannot be measured, so it isn't really clear which software is being adopted faster. It is important to remember, however, that an operating system is more of a nuisance to pirate than an office suite, and that Vista is more expensive than Office 2007.

Also, Vista comes preinstalled on millions of OEM computers, and although Office 2007 trials sometimes come preinstalled as well, consumers usually have to make a conscious choice to purchase the latest version of Microsoft's office suite. Furthermore, while Vista may require a new computer, Office 2007 can be installed on Windows XP SP2, Windows Server 2003 SP1, or a later operating system. It's hard to say which is doing better, but I would guess that businesses are moving to Office 2007 faster than they are to Vista, and that this trend is the exact opposite for individual consumers.

Further reading: Microsoft: Press Release

UK group calls on YouTube to screen all uploaded videos

Social media sites, and those that host user-generated content, need to do more to screen the content on their sites and protect users—particularly children—from videos that could be considered harmful, according to a UK parliamentary committee. The House of Commons' Culture, Media and Sport Committee released its tenth report today, titled "Harmful content on the Internet and in video games," which examines "the Internet’s dark side" and what should be done to keep users safe. The Committee feels that social media sites need to implement stricter policies, add more content filtering, and make it easier to report abuse.

The Committee starts off by describing the Internet as a place "where hardcore pornography and videos of fights, bullying or alleged rape can be found, as can websites promoting extreme diets, self-harm, and even suicide." Because of this, websites like MySpace, Facebook, and YouTube need to take a more active stance against offensive or illegal content than they do currently. The Committee expressed distress that there appeared to be an industry standard of 24 hours to remove content that contains child abuse, for example, and strongly recommended making such important issues a higher priority.

Another area of concern was over the apparent realization that videos uploaded to YouTube go through no filtering (human or computer) before being posted to the site. Google argued that the task of doing so would be nearly impossible, as some 10 hours of video are uploaded to the site every minute of the day, but the Committee was having none of it. "To plead that the volume of traffic prevents screening of content is clearly not correct: indeed, major providers such as MySpace have not been deterred from reviewing material posted on their sites," reads the report. It urges YouTube and user-generated content sites in general to implement technology that can screen file titles for questionable material (since we all know that people uploading illegal content always make sure that the filename is specific and accurate).

Other recommendations included making terms of service more prominent and easier for users to find, enabling all possible privacy controls by default so that users must deliberately and manually make their profiles more public, and implementing controls that make it easy for users to report instances of child porn directly to law enforcement. The Committee encouraged the industry as a whole to come up with standards and recommended that a minister be appointed to oversee these developments.

Of course, the nature of the Internet means that those who are interested in spreading illegal content—whether it's copyrighted material or child porn—will always try to remain a step ahead of filters and law enforcement, and their sheer numbers make success likely. The Committee seems to realize this to some degree, but argues that perfection should not be the enemy of the good. Child safety should be of utmost priority, says the Committee, and any costs or technical limitations should be considered second to protecting children when it comes to the Internet.

Further reading:Harmful content on the Internet and in video games (PDF)Harmful content on the Internet and in video games, oral and written evidence (PDF)