
Monday, September 13, 2010

Recent Microsoft and Intel primers on Internet Explorer 9's accelerated graphics point to snappier Web browsing.

Microsoft will launch the beta of the upcoming Internet Explorer browser on Wednesday at an event in San Francisco, as competition from Chrome, Firefox, and Safari has spurred Redmond to beef up its graphics acceleration, among other improvements. And Intel is slated to introduce its Sandy Bridge chip architecture, which features enhanced graphics silicon, at the Intel Developer Forum, which begins on Monday.

In a blog post published on Friday, Microsoft spelled out what it says are the merits of "full vs. partial acceleration," while Intel, in a new video, is showing off IE9 acceleration on its Core i series of chips--which will come to include the new Sandy Bridge processors.

Graphics chip-based acceleration (Microsoft calls it "hardware acceleration") shifts some tasks from the main processor (CPU) to the graphics processor (GPU). Mainstream GPUs pack in dozens or even hundreds of processing cores. While each GPU core delivers a tiny fraction of the processing power of a CPU core, combined, they can tackle certain tasks much more quickly and efficiently than a CPU. Intel, for its part, has improved the built-in graphics on its Core i series of processors and will integrate its fastest graphics function yet onto the CPU in its upcoming Sandy Bridge processor.

Microsoft says 'full hardware acceleration' will be implemented in IE 9.
(Credit: Microsoft)

In the Microsoft blog, Ted Johnson, program manager lead for Web graphics at Microsoft, explained the merits of a "fully-hardware accelerated display pipeline that runs from their markup to the screen."

In March, Johnson explains, Microsoft released the first IE9 Platform Preview with GPU-powered HTML5 turned on by default, enabling hardware acceleration on "everything on every Web page" including text, images, backgrounds, borders, SVG (scalable vector graphics) content, and HTML5 video and audio. And with Platform Preview 3 in July, IE 9 introduced a hardware-accelerated HTML5 canvas.
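
For readers who haven't worked with it, the canvas element that IE9 now composites on the GPU is simply a scriptable 2D drawing surface; a minimal animation of the kind that benefits from hardware acceleration might look like the sketch below (an illustrative TypeScript example, not code from Microsoft's demos).

// Minimal HTML5 canvas animation, the kind of per-frame redrawing that a
// hardware-accelerated canvas speeds up. Illustrative sketch only.
const canvas = document.createElement("canvas");
canvas.width = 640;
canvas.height = 360;
document.body.appendChild(canvas);

const ctx = canvas.getContext("2d");
let x = 0;

function frame(): void {
  if (!ctx) return;
  ctx.clearRect(0, 0, canvas.width, canvas.height); // clear and redraw every frame
  ctx.fillStyle = "#36c";
  ctx.fillRect(x, 150, 60, 60); // a rectangle sliding across the canvas
  x = (x + 2) % canvas.width;
  requestAnimationFrame(frame);
}

requestAnimationFrame(frame);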

Johnson claims that full hardware acceleration is achieved in three steps: Content Rendering (common HTML elements), Page Composition (image-intensive scenarios), and Desktop Composition (composition of the final screen display). As a result, Johnson argues, IE9 doesn't have to sacrifice performance for cross-platform compatibility. "When there is a desire to run across multiple platforms, developers introduce abstraction layers and inevitably make trade-offs, which ultimately impact performance and reduce the ability of a browser to achieve 'native' performance" (on the GPU), Johnson writes.

He also cites a demo in which Microsoft ran HTML5 video in IE9 on a Netbook: Microsoft played two HD-encoded, 720p videos using "very little of the CPU" while "another browser maxed out the CPU while dropping frames playing only one of the videos," Johnson writes.

But others are quick to point out that it may not be that cut and dried. "Microsoft marketing is making noises about IE9 having a monopoly on 'full hardware acceleration.' They're wrong; Firefox 4 has all three levels of acceleration they describe," according to a blog post published Sunday at MozillaZine, an independent Mozilla news, community, and advocacy site.

Intel, on the other hand, is addressing acceleration from the hardware side. The chipmaker released a video Friday showing IE9 running on a Core i5 processor, claiming that "Internet Explorer 9 is hardware accelerated on any piece of graphics hardware that supports DirectX 9."

"The Intel Core i5 processor is calculating the movement of these images and then the built-in HD graphics is actually rendering these images on the screen," said Erik Lorhammer, Sandy Bridge graphics marketing manager, in the video.


Trend Micro's redone security suites for 2011 introduce new names for its products and a new emphasis on cloud-based protection. Trend Micro Titanium Antivirus+, Trend Micro Titanium Internet Security, and Trend Micro Titanium Maximum Security all include the company's overhauled, cloud-based Smart Protection Network engine to protect against viruses, malware, phishing attacks, and other threats.

The suites are notable for their heavy reliance on cloud-based technology and Trend Micro's emphasis on its Smart Scan tech. According to the company, this works by constantly checking for threats against cloud-hosted databases when the computer is connected to the Internet, and by falling back to locally cached databases when it is offline. The offline database includes protections against viruses and malware that are known to spread by USB keys.
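
Trend Micro hasn't published Smart Scan's internals, but the behavior described above--query cloud-hosted databases while online, fall back to a locally cached database offline--can be sketched roughly as follows. All names and types here are hypothetical (TypeScript), not Trend Micro code.

// Rough sketch of an online/offline threat lookup, as described for Smart Scan.
// Hypothetical names only; not Trend Micro's implementation.
type Verdict = "clean" | "malicious" | "unknown";

interface SignatureSource {
  lookup(fileHash: string): Promise<Verdict>;
}

class CloudReputationService implements SignatureSource {
  async lookup(fileHash: string): Promise<Verdict> {
    // A real product would query its reputation service over the network.
    const res = await fetch(`https://reputation.example.com/v1/hash/${fileHash}`);
    return res.ok ? ((await res.json()).verdict as Verdict) : "unknown";
  }
}

class LocalSignatureCache implements SignatureSource {
  constructor(private cache: Map<string, Verdict>) {}
  async lookup(fileHash: string): Promise<Verdict> {
    // The smaller, locally stored database used when the machine is offline.
    return this.cache.get(fileHash) ?? "unknown";
  }
}

async function scanFile(hash: string, online: boolean,
                        cloud: SignatureSource, local: SignatureSource): Promise<Verdict> {
  // Prefer the cloud database when connected; fall back to the cached one offline.
  return online ? cloud.lookup(hash) : local.lookup(hash);
}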

Unlike many of its competitors, Trend Micro does not offer a firewall component, instead relying on the default Windows firewall.

The products are tiered much like those of Trend Micro's competitors. The Trend Micro Titanium Antivirus+ 2011 program offers the most basic protection, including antivirus and anti-malware guards, drive-by download protection, and the ability to block links to malicious sites and downloads in instant messages and e-mails. It retails for $39.95 for one PC.

Trend Micro Titanium Internet Security 2011 protects against the same threats as Titanium Antivirus+, and adds protection against unauthorized changes to your already-installed programs, spam blocking, Windows firewall optimization, parental controls, and data-theft guards. It retails for $69.95 for one computer.

Trend Micro Titanium Maximum Security 2011 includes everything in Titanium Internet Security, plus Wi-Fi hotspot authentication, Department of Defense-rated file shredding, remote locking of files and folders in case of theft, a system optimizer, and 10 GB of online backup. Titanium Maximum Security retails for $79.95 for one computer.

While Titanium Antivirus+ is in the mid-range for its category, Titanium Internet Security 2011 and Titanium Maximum Security 2011 are at the high end for their respective feature sets.

Correction: Pricing information has been corrected from an earlier version of this story.

Intel's walled garden plan to put A/V vendors out of business

In describing the motivation behind Intel's recent purchase of McAfee to a packed audience at the Intel Developer Forum, Intel's Paul Otellini framed it as an effort to move the way the company approaches security "from a known-bad model to a known-good model." Otellini went on to briefly describe the shift in a way that sounded innocuous enough--current A/V efforts focus on building up a library of known threats against which they protect a user, but Intel would like to move to a world where only code from known and trusted parties runs on x86 systems. It sounds sensible enough, so what could be objectionable about that?

Depending on how enamored you are of Apple's App Store model, where only Apple-approved code gets to run on your iPhone, you may or may not be happy in Intel's planned utopia. Because, in a nutshell, the App Store model is more or less what Intel is describing. Regardless of what you think of the idea, its success would have at least two upsides: 1) everyone would get vPro by default (i.e., it seems hard to imagine that Intel would still charge for security as an added feature), and 2) it would put every security company (except McAfee, of course) out of business. (The second one is of course a downside for security vendors, but it's an upside for users who despise intrusive A/V software.)
From a jungle to an ecosystem of walled gardens

For a company that made its fortune on the back of the x86 ISA, the shift that Intel envisions is nothing less than tectonic. x86 became the world's most popular ISA in part because anything and everything could (and eventually would) run on it. And don't forget Microsoft's role in all of this--remember the "Wintel" duopoly of years gone by? Like x86, Windows ended up being the default OS for the desktop software market, and everything else was niche. And, like x86, Windows spread because everyone who wanted it could get it and run anything they wanted on it.

The fact that x86 was so popular and open gave rise to today's A/V industry, where security companies spend 100 percent of their effort trying to identify and thwart every conceivable form of bad behavior. This approach is extremely labor-intensive and failure-prone, which the security companies love because it keeps them in business.

What Intel is proposing is that the entire x86 ecosystem move to the opposite approach, and run only the code that has been blessed as safe by some trusted authority.

Now, there are a few ways that this is likely to play out, and none of these options are mutually exclusive.

One way should be clear from Intel's purchase of McAfee: the company plans to have two roles as a security provider: a component provider role, and an end-to-end platform/software/services provider role. First, there's the company's traditional platform role, where Intel provides OEMs the basic tools for building their own walled gardens. Intel has been pushing this for some time, mainly in its ultramobile products. If anyone is using Intel's ingredients (an app store plus hardware with support for running only signed code) to build their own little version of the App Store ecosystem, it's probably one of the European or Asian carriers that sells rebadged Intel mobile internet devices (MIDs). It's clear that no one is really doing this on the desktop with vPro, though.

Then there's the McAfee purchase, which shows that Intel plans to offer end-to-end security solutions, in addition to providing the pieces out of which another vendor can build their own. So with McAfee, Intel probably plans to offer a default walled garden option, of sorts. At the very least, it's conceivable that Intel could build its own secure app store ecosystem, where developers send code to McAfee for approval and distribution. In this model, McAfee would essentially act as the "Apple" for everyone making, say, MeeGo apps.

In the world described above, the x86 ecosystem slowly transitions from being a jungle to being a network of walled gardens, with Intel tending one of the largest gardens. If you're using an x86-based Google TV, you might participate in Google's walled garden, but not be able to run any other x86 code. Or, if you have an Intel phone from Nokia, you might be stuck in the MeeGo walled garden.
A page from the web

None of the walled garden approaches described above sound very attractive for the desktop, and they'll probably be rejected outright by many Linux and open-source users. But there is another approach, one which Intel might decide to pursue on the desktop. The company could set up a number of trusted signing authorities for x86 code, and developers could approach any one of them to get their code signed for distribution. This is, of course, the same model used on the web, where e-commerce sites submit an application for an https certificate.
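
To make the analogy concrete: the certificate-authority model boils down to checking a publisher's signature over the code against keys that some trusted authority has vouched for. The sketch below (TypeScript with Node's standard crypto module; the names are hypothetical, and nothing here reflects Intel's or McAfee's actual design) shows the basic check.

// Conceptual sketch of "run only code signed by a trusted party."
// Hypothetical example; not Intel's or McAfee's design.
import { createVerify, KeyObject } from "node:crypto";

interface SignedBinary {
  code: Buffer;            // the executable payload
  signature: Buffer;       // signature produced with the publisher's private key
  publisherKey: KeyObject; // publisher's public key, vouched for by a signing authority
}

function isTrusted(binary: SignedBinary, trustedKeys: KeyObject[]): boolean {
  // 1. The publisher's key must already be trusted (directly or via an authority).
  const der = binary.publisherKey.export({ type: "spki", format: "der" });
  const known = trustedKeys.some(
    (k) => k.export({ type: "spki", format: "der" }).equals(der));
  if (!known) return false;

  // 2. The signature over the code must verify with that key.
  const verifier = createVerify("sha256");
  verifier.update(binary.code);
  return verifier.verify(binary.publisherKey, binary.signature);
}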

This distributed approach seems to work well enough online, and I would personally be quite happy to use it on all my PCs. I would also love to hear from users who object to this approach--please jump into the comments below and sound off.
Pick any two

Obviously, security has always been a serious problem in the wild and woolly world of x86 and Windows. This is true mainly because Wintel is the biggest animal in the ecosystem, so bad actors get the most bang for their buck by targeting it. So why has Intel suddenly gotten so serious about it that the company is making this enormous change to the very nature of its core platform?

The answer is fairly straightforward: Intel wants to push x86 into niches that it doesn't currently occupy (phones, appliances, embedded), but it can't afford to take the bad parts along for the ride. Seriously, if you were worried about a particular phone or TV being compromised, you just wouldn't buy it. Contrast this to the Windows desktop, which many users may be forced to use for various reasons.

So Intel's dilemma looks like this: open, secure, ubiquitous--pick any two, but given the economics of the semiconductor industry, "ubiquitous" has to be one of them. Open and ubiquitous have gotten Intel where it is today, and the company is betting that secure and ubiquitous can take it the rest of the way.

Sunday, August 15, 2010

Net neutrality protestors lay siege to Google (for an hour)


"We had a bunch of papers which had, like, talking points so that we could all be on the same page," explained the net neutrality activist leaning over the front seat of our chartered bus. "But we can't find them."

Laughter erupted from the rest of the vehicle. Nobody cared. It was Friday afternoon. And after all, this was San Francisco, where two or more people being on the same page about anything is a misdemeanor.

With that, a dozen or so protestors (and Ars) rode from the city's Opera Plaza to the Mountain View, California, headquarters of Google, now fallen from grace since the release of its watered-down net neutrality manifesto with Verizon.

The objective—to deliver 300,000 signatures protesting the move.

"But what we do have, so that everybody will be really loud and excited and show the press how important this is, we have a few, like, rally cries," our bus captain continued. "You guys want to practice?"

We'll spare you the results. Suffice it to say that 40 minutes later we and several other caravans arrived at the Googleplex—maybe 100 people all told, plus reporters. The Save The Internet and MoveOn.org staff who organized the rally kept the crowd on a grassy knoll about 20 yards south of the Google campus' main entrance.
Google will see you now

"We're here because we love the Internet and we want to keep it that way!" a speaker declared.

After about five minutes of this sort of commentary, somebody asked a sensible question. "Do we have an appointment to see Google or anything like that?"

"Yes," came the response. "We're standing outside just to let Google officials know how we feel about this deal that they have made with Verizon. We also have a whole bunch of petitions to deliver and we're going to stand outside here for about 30 minutes, kind of just making sure that they know we're here."

This didn't sit well with some of the demonstrators. The Bay Area, it should be noted, is full of activists who have years of experience laying siege to big buildings full of computers—most famously the nuclear weapons research facility Lawrence Livermore National Laboratory. Not surprisingly, then, one attendee began pushing the envelope.
A pair of protesters outside Google headquarters

"Well, can't the rest of us just go up there?" she asked.

"It may be trespassing. I'm not sure," the main organizer explained.

"Well if it is trespassing then they have to tell us!" she shot back.

"Ok! Let's go!" the coordinator relented. War whoops (or peace whoops if you prefer) erupted from the crowd as they marched up to Google's rotunda entrance, equipped with "DON'T BE EVIL" placards and similar signs.

Representatives of the Raging Grannies were in full force, including one dressed in Victorian black and armed with a Morticia Addams umbrella.

"I'm in mourning," she explained to me—and to add effect displayed a pendant with a photograph of her Civil War era great, great grandfather.
Lack of awareness

Meanwhile, security staff appeared, looking right out of Google central casting. One officiously whizzed around the demo on a company-laminated moto-tripod. Others sported bright blue Google t-shirts and rode mountain bikes while sipping bottled water.

At first they seemed nervous, but not for long. Many of the demonstrators—twenty-something Web developers and content site managers—were far more interested in tweeting on their handsets than in trying to get into the building.

I asked if I could speak to a Google official.

"I'm not aware of that," a security person told me.

"How about Vint Cerf. Is he there?"

"He's probably up in an airplane somewhere," somebody else replied.

Bereft of Google folk with whom to talk, I listened to James Rucker of ColorofChange.org make the big speech of the demonstration.

"The Internet has been the ground upon which we can do real work," Rucker told the crowd. "We can communicate across communities. We can hold politicians accountable. The Chairman of the FCC, Julius Genachowski, has said that he stands to protect the open Internet, but the FCC thus far has failed to secure a free and open Internet by passing rules that make it a legal reality.

"We're here because Google and Verizon have put forth a plan that while saying it protects the Internet, does quite the opposite. They're talking about producing a separate, fast lane, essentially. A higher tier for premium content, which means if you want to play in the twenty-first century Internet that is upon us, you're going to have to pay."

This observation was met with a long round of boos. The demonstrators then fanned out and were interviewed by the media. Bloggers interviewed each other. One told an AM radio news talk reporter that he didn't want the Internet to become like AM radio. The reporter politely nodded and smiled.
Backing out

I asked various activists if they really thought that Google would flip its position yet again based on those petitions.

"They could," one told me. "They apologized and changed course with Buzz." This referred to Google's making amends for various privacy blunders associated with the application.

Other demonstrators were more skeptical.

"I don't know how they back themselves out of this," another responded. "I don't know how they got into it. I don't understand the thinking behind it."

"Do you think the net neutrality movement can prevail without Google?" I pressed.

"I think it will be tough," he conceded. "Google has so much control over so much of the Internet. But I think Google only exists with the trust of their users. If Google does this, as much as I hate Microsoft, I might go to Bing."

"Ah, c'mon," I demanded. "How many Google apps do you have?

Gmail, Blogger, Google Voice, and an Android phone, he admitted. "I did stop using Buzz, though."
Fierce support

I wandered back to the main entrance and asked Rucker if he thought net neutrality can win sans Google.

"Absolutely," he bravely replied. "The FCC has the authority to reclassify broadband, which will allow them to do two things. One is actually, by law, protect net neutrality. The other is to ensure that broadband is available to communities that are currently shut out. I absolutely think we can do it without Google, but we should be able to do it with Google."

Eventually Google permitted a small team of demonstrators to carry the petitions into the building and present them to the company's policy division staff.
Delivering the petitions to Google

The search engine giant issued a brief statement in response to the plea.

"This is an important, complex issue that should be discussed," declared Google's Nicklas Lundblad, Head of Public Policy.

"But let me be clear: Google remains a fierce supporter of the open Internet. We're not expecting everyone to agree with every aspect of our proposal, but we think believe that locking in key enforceable protections for consumers is preferable to no protection."

With that, the Siege of Google concluded. We all got back on the bus and headed to the city. Somebody on the return trip asked me to answer my own questions.
Tradeoffs

The net neutrality movement can still win some of its objectives without Google, but only if other parts of the Internet content industry step in to fill the gap. Since the late 19th century, there has never been a major policy change in the communications sector that happened without strong backing from some big wing of the corporate sector.

The FCC's Carterfone open device decision, the breakup of NBC in the late 1940s, or the dismantling of AT&T in the 1980s—these seismic events didn't take place solely because the equivalents of Save The Internet asked for them. They happened because a hefty chunk of corporate America wanted them.

So the question now is whether Facebook, Netflix, eBay, IAC, the gaming industry, or some combination of these forces is willing to carry the torch more prominently.

But while the folks at that demo may have lost an ally in Google, Google has lost something too—the public as a resource.

The company now faces two potentially devastating legal challenges. Undaunted by its recent defeat in a district court, Viacom has pushed its billion-dollar infringement suit against YouTube up the appeals court ladder.

And in a move with huge implications for the open source cause, Oracle is suing Google for allegedly infringing on Java technology patents in its Android operating system.

If these lawsuits start getting too close for Google's comfort, the firm might want to do what it has so often done in the past—appeal to the public for support in the form of new federal agency rules or legislation, as it did with white space broadband devices.

That would have been an easy move for Google two weeks ago. Now the prospects for those kinds of campaigns are far less clear.

Saturday, July 31, 2010

Security researcher demonstrates ATM hacking

Security researcher Barnaby Jack demonstrates how he bypassed the security of two ATMs.
(Credit: Declan McCullagh/CNET)
LAS VEGAS--Hacking into an ATM isn't impossible, a security researcher showed Wednesday. With the right software, it's actually pretty easy.
Barnaby Jack, director of security testing at Seattle-based IOActive, hauled two ATMs onto the Black Hat conference stage and demonstrated to a rapt audience the fond daydream of teenage hackers everywhere: pressing a button and having an automated teller machine spew out its cash until a pile of paper lay on the ground.
"I hope to change the way people look at devices that from the outside are seemingly impenetrable," said Jack, a New Zealand native who lives in the San Jose area. One vulnerability he demonstrated even allows a hacker to connect to the ATM through a telephone modem and, without knowing a password, instantly force it to disgorge its entire supply of cash.
Jack said he bought the pair of standalone ATMs--one manufactured by Tranax Technologies and the other by Triton--over the Internet and then spent years poring over the code. The vulnerabilities and programming errors he unearthed during that process, Jack said, let him gain complete access to those machines and learn techniques that can be used to open the built-in safes of many others made by the same companies.
"Every ATM I've looked at, I've found a game-over vulnerability that allows an attacker to get cash from the machine," Jack said. "I've looked at four ATMs. I'm four for four." (He said he has not evaluated built-in ATMs like those used by banks and credit unions.)
He said both Tranax and Triton had patched the security vulnerabilities since he brought them to the companies' attention a year ago. If a customer that operates an ATM, such as a convenience store or a restaurant, doesn't apply the fix, though, the machine remains vulnerable.
Hacking into ATMs is not exactly a new idea: It was immortalized by a young John Connor in the "Terminator 2" movie, and techniques like "card skimming" and "card trapping" are well-known by police.
Some enterprising thieves have even seized on ways to use a little-known configuration menu to trick ATMs into thinking that they're dispensing $1 bills instead of $20 ones. (Traditional methods of stealing from an ATM, such as ramming it, cutting into its safe, or blowing it up, still work too.)
But those other electronic cash-extraction techniques were limited because they didn't rely on a deep analysis of an ATM's code. Many ATMs run Windows CE on an ARM processor, with an Internet connection or a dial-up modem, and that software controls access to the armored safe through a serial port connection. Jack said he used standard debugging techniques to interrupt the normal boot process and instead start Internet Explorer, giving him access to the file system and allowing him to copy off the files for analysis.
In the case of Tranax, a Hayward, Calif.-based company, Jack said he found a remote access vulnerability that allows full access to an unpatched machine without needing a password. He wrote two pieces of software to exploit that programming error: a utility called Dillinger, which attacks an ATM remotely, and one called Scrooge, a rootkit that inserts a backdoor and then conceals itself from discovery.
Scrooge "hides itself from the process list, hides itself from the operating system," Jack said. "There's a hidden pop-up menu that can be activated by a special key sequence or a custom card."
Triton's ATMs didn't have an obvious remote access vulnerability. And the built-in vaults were well-armored. But the PC motherboard that controls the dispensing of cash from the vault was protected only by a standard (not unique) key that could be purchased over the Internet for about $10. So Jack bought one, and found he could force the machine to accept his backdoor-enabled software as a legitimate update.
Bob Douglas, Triton's vice president of engineering, showed up at the conference to stress to reporters that the vulnerability has been fixed. "We have developed a defense against that attack," he said. "We released it in November of last year."
In addition, Douglas said: "We have an optional kit available to replace the lock with a unique key. It's a high-security lock as well. I think it's a Medeco lock." But he said that because some companies that service ATMs might own 3,000 of them and visit dozens or hundreds a day, not all customers choose to upgrade.
Tranax did not respond to queries from CNET on Wednesday.
Jack was scheduled to present a similar talk at Black Hat last year, but it was pulled at the last minute after an ATM vendor complained to Juniper Networks, his then-employer.
The difficult part of hacking the ATMs was evaluating the software for vulnerabilities--but the Dillinger and Scrooge utilities Jack created as a result are easy enough for a child to use.
And will he release them? Teenage hackers, random criminals, and the Mob would surely be interested. "I'm not going to," Jack said in response to a question from CNET after his talk.

Mozilla's Tab Candy is the first step to sweeter browsing

Tabbed browsing has arguably had a significant impact on the way that people use the Web, but the feature hasn't really scaled to accommodate the increasing complexity of the average surfing session. The existing tab management and overflow handling mechanisms that are present in modern browsers are dated and suffer from some fundamental limitations that significantly detract from user productivity.
As more software shifts into the cloud and users increase their reliance on the browser for daily computing tasks, browser tabs will have to evolve from a primitive mechanism for switching between documents into a full-blown task management system. The mainstream browser vendors have been slow to address this issue and haven't applied much innovation to the problem over the past few years. Mozilla has stepped up to the plate and is aiming to hit the ball out of the park with some unique and truly compelling improvements to the tab concept.
Mozilla's experimental Tab Candy project, which is led by talented designer Aza Raskin, offers a simple and intuitive new twist on tab management. It allows users to visually manage tabs by organizing them into spatial groups. It's far from being a complete solution to tab overflow, but it's a very good step in the right direction.
Mozilla has made available some experimental prerelease builds of Firefox 4 that have Tab Candy enabled. We tested this preview version ourselves to get a hands-on look at the new feature. On the surface, the only major noticeable difference is an icon with black squares that appears in the tab bar. When you click the icon, Tab Candy mode is activated. The browser shows you a thumbnail view of all of your tabs in rectangles that represent groups. You can drag a tab from one group to another or drag it out into the field to create a new group.
Mozilla's Tab Candy user interface
When you click a thumbnail, the browser will activate that tab and close the Tab Candy view. During regular browsing, the tab bar in the window will only show the tabs from the group that is currently active. This makes it easy to treat tab groups like projects and easily switch from one tab context to another.
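
Conceptually, the grouping behavior is just a layer of bookkeeping on top of the existing tab list: every tab belongs to a group, and the tab bar shows only the active group. A rough model of that idea (a hypothetical TypeScript sketch, not Firefox's actual code) might look like this.

// Minimal model of tab groups as described for Tab Candy.
// Hypothetical sketch only; not Mozilla's implementation.
interface Tab { id: number; title: string; url: string; }

class TabGroups {
  private groups = new Map<string, Tab[]>();
  private active = "default";

  constructor() { this.groups.set(this.active, []); }

  addTab(group: string, tab: Tab): void {
    if (!this.groups.has(group)) this.groups.set(group, []);
    this.groups.get(group)!.push(tab);
  }

  moveTab(tabId: number, toGroup: string): void {
    // Dragging a thumbnail from one rectangle to another in the Tab Candy view.
    for (const tabs of this.groups.values()) {
      const i = tabs.findIndex((t) => t.id === tabId);
      if (i !== -1) {
        const [moved] = tabs.splice(i, 1);
        this.addTab(toGroup, moved);
        return;
      }
    }
  }

  switchTo(group: string): Tab[] {
    // Activating a group: the regular tab bar now shows only these tabs.
    this.active = group;
    return this.groups.get(group) ?? [];
  }
}
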
These features are just the start of what Mozilla has planned for Tab Candy. In a demo video that highlights some ideas for future features, Raskin discusses the possibility of enabling simple tab sharing through the Tab Candy interface and providing extensibility hooks that would enable third-party add-ons to augment Tab Candy with their own contextually relevant features.

Tab tree

Tab Candy is an impressive first step, but there are still a lot of unsolved tab management challenges that need to be addressed. The Tab Candy interface won't fully resolve the problem of an overflowing tab bar, because there are still likely to be cases where individual tab groups have more items than the regular tab bar can cleanly accommodate. Having to scroll back and forth to find a tab is frustrating.
Tab Candy's spatial view will help to simplify high-level tab management, but the downside is that it fragments the user experience by disconnecting the tab management interface from the regular browsing interface. It would be good to have a separate way for users to optionally view the complete stack of tabs from all groups alongside the actual content of the active page.
Mozilla has reduced the challenge of finding a specific tab by introducing a switch-to-tab feature in the AwesomeBar, but that doesn't help unless you remember the title of the page that you are looking for. The popular Tree Style Tabs add-on offers an elegant way to further simplify tab management—one that could potentially work well with the Tab Candy concepts and shore up some of the weak points.
The Tree Style Tab add-on allows users to see all of their tabs in a nested hierarchy in a sidebar. It presents tabs as a tree of collapsible nodes, which makes it easy to hide and show sets of nested tabs based on which ones are relevant to your current activity.
The Tree Style Tabs add-on
I think that something like Tree Style Tabs should be added as a sidebar, giving the user the ability to toggle between the regular horizontal tab bar and the richer tree view when the tab count becomes overwhelming. It could also potentially be adapted with a filtering mechanism so that the user can decide if it should show tabs from all of their Tab Candy groups or just the active group. The groups could be presented as tree nodes.
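
The tree-plus-filter idea described above reduces to a recursive node structure that can be flattened into sidebar rows, optionally restricted to the active Tab Candy group. A hypothetical sketch (TypeScript; not the add-on's actual code):

// Hypothetical sketch of a collapsible tab tree filtered by group;
// not Tree Style Tabs' actual implementation.
interface TabNode {
  title: string;
  group: string;        // the Tab Candy group this tab belongs to
  collapsed: boolean;   // whether this node's children are hidden
  children: TabNode[];
}

// Flatten the tree into the indented rows a sidebar would draw, honoring the
// collapse state and, if given, showing only tabs from the active group.
function visibleRows(node: TabNode, activeGroup?: string, depth = 0): string[] {
  const rows: string[] = [];
  if (!activeGroup || node.group === activeGroup) {
    rows.push("  ".repeat(depth) + node.title);
  }
  if (!node.collapsed) {
    for (const child of node.children) {
      rows.push(...visibleRows(child, activeGroup, depth + 1));
    }
  }
  return rows;
}
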
I think that a vertical interface is really the key to bringing saner overflow handling to the tab bar. Raskin is no stranger to this notion, and experimented with the idea of a vertical sidebar in some mockups last year.

Reading list

In the demo video, Raskin suggests that Tab Candy users might want to rely on groups to manage the tabs that they intend to read later. This approach makes sense, but it might not be sustainable in the long term. I know that I'd end up with a ton of groups that I haven't looked at in a while cluttering up my Tab Candy space and I'd have one enormous group of unrelated pages for future reading.
An obvious solution is to offer some kind of bridge between tabs and bookmarks, but I think that it might be more advantageous to make it feel more like the Read It Later add-on, a wrapper for Firefox's bookmark system that allows users to easily create and maintain a chronological stack of unread items.
It would be great to have something like that, but with a more elaborate timeline view that would allow you to explore other browsing history that transpired around items that you saved for later reading. Similarly, it would be useful to be able to have tab groups "expire" and shift collectively into the reading list stack after a certain amount of idle time. This could have some kind of Weave sync capability so that users would be able to easily work through their reading list from a mobile phone.
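
The expiry idea is also easy to express: periodically check when each group was last touched and shift stale ones into the reading stack. A hypothetical sketch (TypeScript; this feature doesn't exist in Firefox today):

// Hypothetical sketch of idle tab groups "expiring" into a reading list.
// Illustrative only; not an existing Firefox or Tab Candy feature.
interface GroupState { name: string; tabUrls: string[]; lastUsed: number; }

const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

function expireIdleGroups(groups: GroupState[], readingList: string[],
                          now: number = Date.now(), maxIdle: number = WEEK_MS): GroupState[] {
  const kept: GroupState[] = [];
  for (const g of groups) {
    if (now - g.lastUsed > maxIdle) {
      readingList.push(...g.tabUrls); // move the whole group onto the reading stack
    } else {
      kept.push(g);
    }
  }
  return kept; // groups that are still active
}
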
Taken together, the underlying concepts behind Tab Candy, Tree Style Tabs, and Read It Later hold the potential to revolutionize Web browsing and solve a wide range of the tab management and information overload problems that are faced by users.

Students finally wake up to Facebook privacy issues

Students care about Facebook privacy more than the world thinks, and their use of privacy controls has skyrocketed recently, according to two researchers. Eszter Hargittai, an associate professor at Northwestern University, and Danah Boyd, a research associate at Harvard's Berkman Center for Internet & Society, published their findings in the online peer-reviewed journal First Monday, noting that young people are very engaged with the privacy settings on Facebook, contrary to the popular belief that their age group is reckless with what they post publicly.
The researchers surveyed first-year writing students at the University of Illinois-Chicago during the 2008-2009 academic year, and then followed up with them again in 2010. The large majority—87 percent—said they used Facebook in 2009, which went up to 90 percent in 2010. Among frequent and occasional users, more than half posted their own status updates in addition to checking up (and leaving comments) on those of friends.
Among those who took the survey in both years, nine percent said they never touched Facebook's privacy settings in 2009, a figure that fell to a paltry two percent the next year. Similarly, nine percent said they had adjusted the settings just once in 2010, down from 28 percent in 2009. In contrast, the percentage of students who changed their privacy settings four or more times more than doubled from 24 to 51 percent over that period of time. The researchers noted that those who regularly contribute to activities on Facebook may be more conscious of their audience than those who use it less frequently, hence their motivation to modify their settings.
Hargittai and Boyd noted that there was little variation between men and women who were frequent Facebook users when it came to engagement with privacy controls. They say this is notable "given that in most other domains that require active online engagement (e.g., posting videos, editing Wikipedia entries), women report lower levels of involvement."
There was, however, a much higher likelihood of occasional-Facebook-using women changing their settings than occasional male users. Unsurprisingly, users who were "highly skilled" in Internet-related things were much more likely to have tweaked their privacy settings, though the researchers acknowledged that this could be either due to knowledge levels or simple unawareness of the importance of changing them.
The one thing the researchers were unsure of was why so many Facebook users started tweaking their privacy controls so much between 2009 and 2010. One theory was that there was an increase in public attention on Facebook privacy just before and during that time—indeed, Facebook's Beacon screw-up started in 2008 and got the ball rolling for a litany of complaints that have extended well into 2010. Facebook also greatly simplified its privacy controls recently, which may have led to an increase in awareness.
The important takeaway, according to Hargittai and Boyd, is that students do care about their privacy on Facebook, and a large number of them are now making regular changes to their settings. "Our results challenge widespread assumptions that youth do not care about and are not engaged with navigating privacy," they wrote. Their findings, combined with those from the Pew Internet & American Life Project from earlier this year, show that the young 'uns aren't so willing to show their drunken photos to the world as many of us thought.