Thursday, March 20, 2008

New Video of BigDog Quadruped Robot Is So Stunning It's Spooky



Boston Dynamics keeps working on their BigDog quadruped robot, which will probably grow to be the Pentagon's future AT-AT. Its evolution since the last time we saw it is nothing short of mind-blowing, and a bit spooky.

It looks like an actual biological quadruped. Seeing it climb through rubble and snow, jump over obstacles like a wild goat, and save a near-fall on iced ground at the last second (fast forward to the middle of the video) defies belief. It feels so "animal" that I almost feel bad when they hit it to demonstrate how it regains balance on its own.

The new version of the robot can now carry 340 pounds, almost triple the previous payload. It looks to me like the $10 million in funding they got from DARPA has been put to good use. [IEEE]

Saturday, March 15, 2008

Searchme project

Introduction to Searchme



Searchme Demo


Searchme is a new search engine that uses visual search and category refinement. We think it will help you find what you're looking for, faster, with a lot less spam. It's a new way to search that takes advantage of the size and bandwidth of today's Internet and the increasingly visual way that we all interact online.

The idea for Searchme came when Mark Kvamme, Searchme's chairman, got tired of looking through a bunch of unrelated results for articles on motocross. He suggested to founders Randy Adams and John Holland that they create a search engine that sorted results into categories. The Searchme visual interface came about when Randy, a father of seven, helped his four-year-old son search for children's web sites that he'd seen on TV. It struck Adams that if a search engine could show big pictures of the pages it found before users clicked through to a site, it'd be much easier to quickly find what they were looking for.

After more than three years of engineering, imaging billions of web pages, and fine-tuning our approach many times over, Searchme was born. We've built it from the ground up to optimize it for speed, but we still have a long way to go. We're just getting started on our first steps toward creating a smart new way to search today's Web.

How It Works

Searchme sorts your search results into relevant topics, then lets you scroll through and preview the web pages associated with your query, before you click through. As you start typing, category icons appear that relate to your search. For example, if you type "bonds", it suggests "savings" or "stocks" or "baseball". Choose a category, and you'll see pictures of the web pages that match your search. You can quickly review these pages to find the best one, then go right to that site.
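
Searchme hasn't published how its category suggestion is implemented, but the behavior described - type "bonds", see "savings", "stocks", and "baseball" - can be sketched as a lookup against a curated category index. This is a toy Python illustration; the data and matching logic are invented for this example, not Searchme's actual technology:

```python
# Toy sketch of query-to-category suggestion (hypothetical data,
# not Searchme's real index or algorithm).
CATEGORY_INDEX = {
    "bonds": ["savings", "stocks", "baseball"],
    "jaguar": ["cars", "animals", "operating systems"],
}

def suggest_categories(query):
    """Return candidate categories for the query typed so far."""
    query = query.strip().lower()
    # Exact match first, then fall back to prefix matching as the user types.
    if query in CATEGORY_INDEX:
        return CATEGORY_INDEX[query]
    return [cat for term, cats in CATEGORY_INDEX.items()
            if query and term.startswith(query)
            for cat in cats]

print(suggest_categories("bonds"))  # categories for the full word
print(suggest_categories("bon"))    # prefix match while typing
```

A real engine would of course derive categories from analysis of the result set rather than a hand-built table, but the user-facing behavior is the same: suggestions narrow as you type.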

Simply put, Searchme lets you see what you're searching for, before you click through.

Why It's Unique

Searchme's visual interface and category suggest features make us different from most other search engines. The visual interface delivers results as a browsable stack of "pages" - pictures of actual web pages that users can check out before visiting them. Category suggest, a seemingly simple feature, is based on a complicated analysis and categorization process that uses unique technology to divide results into predetermined, carefully-honed topics.

We're growing every day, and the quality of our results will vary as we make Searchme better and better. We'd love your help - if you have any suggestions, let us know!

Who Should Use It

Anyone can use Searchme - casual Internet users, professionals, college students. It especially appeals to visual people. Structured like a video game, Searchme's visual style and categories make it easy and fun to use, on a laptop or big screen.

Who's Involved

Searchme was founded in 2005 by Randy Adams, CEO, and John Holland, CMO, and received initial funding from Sequoia Capital. If you'd like to help us build Searchme, check out our jobs section.

Thursday, March 13, 2008

10 Future Web Trends

We’re well into the current era of the Web, commonly referred to as Web 2.0. Features of this phase of the Web include search, social networks, online media (music, video, etc.), content aggregation and syndication (RSS), mashups (APIs), and much more. Currently the Web is still mostly accessed via a PC, but we’re starting to see more Web excitement from mobile devices (e.g. iPhone) and television sets (e.g. Xbox 360).

What then can we expect from the next 10 or so years on the Web? As NatC commented in this week’s poll, the biggest impact of the Web in 10 years time won’t necessarily be via a computer screen - “your online activity will be mixed with your presence, travels, objects you buy or act with.” Also a lot of crossover will occur among the 10 trends below (and more) and there will be Web technologies that become enormously popular that we can’t predict now.

Bearing all that in mind, here are 10 Web trends to look out for over the next 10 years…
1. Semantic Web

Sir Tim Berners-Lee’s vision for a Semantic Web has been The Next Big Thing for a long time now. Indeed it’s become almost mythical, like Moby Dick. In a nutshell, the Semantic Web is about machines talking to machines. It’s about making the Web more ‘intelligent’, or as Berners-Lee himself described it: computers “analyzing all the data on the Web - the content, links, and transactions between people and computers.” At other times, Berners-Lee has described it as “the application of weblike design to data” - for example designing for re-use of information.

As Alex Iskold wrote in The Road to the Semantic Web, the core idea of the Semantic Web is to create metadata describing data, which will enable computers to process the meaning of things. Once computers are equipped with semantics, they will be capable of solving complex semantic optimization problems.

So when will the Semantic Web arrive? The building blocks are here already: RDF, OWL, microformats are a few of them. But as Alex noted in his post, it will take some time to annotate the world’s information and then to capture personal information in the right way. Some companies, such as Hakia and Powerset and Alex’s own AdaptiveBlue, are actively trying to implement the Semantic Web. So we are getting close, but we are probably a few years off still before the big promise of the Semantic Web is fulfilled.
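
The building blocks mentioned above all revolve around one idea: expressing facts as machine-readable statements. RDF models these as subject-predicate-object triples, which can be illustrated with plain Python tuples - a toy sketch of the data model, not a real RDF toolkit, and the facts below are just sample data:

```python
# Minimal illustration of the RDF triple model: facts stored as
# (subject, predicate, object) statements that a machine can query.
triples = [
    ("BigDog", "madeBy", "Boston Dynamics"),
    ("BigDog", "type", "QuadrupedRobot"),
    ("Searchme", "type", "SearchEngine"),
    ("Searchme", "foundedIn", "2005"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern (None acts as a wildcard)."""
    return [(s, p, o) for (s, p, o) in triples
            if subject in (None, s)
            and predicate in (None, p)
            and obj in (None, o)]

# "What do we know about Searchme?"
print(query(subject="Searchme"))
```

Once enough of the world's information is annotated this way, queries like "everything of type SearchEngine founded after 2004" become mechanical - which is exactly the promise the companies above are chasing.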

Semantic Web pic by dullhunk
2. Artificial Intelligence

Possibly the ultimate Next Big Thing in the history of computing, AI has been the dream of computer scientists since 1950 - when Alan Turing introduced the Turing test to test a machine’s capability to participate in human-like conversation. In the context of the Web, AI means making intelligent machines. In that sense, it has some things in common with the Semantic Web vision.

We’ve only begun to scratch the surface of AI on the Web. Amazon.com has attempted to introduce aspects of AI with Mechanical Turk, their task management service. It enables computer programs to co-ordinate the use of human intelligence to perform tasks which computers are unable to do. Since its launch on 2 November 2005, Mechanical Turk has gradually built up a following - there is a forum for “Turkers” called Turker Nation, which appears to have light-to-medium level patronage. However we reported in January that Mturk isn’t being used as much as it was during the initial hype period of Nov-Dec 05.

Nevertheless, AI has a lot of promise on the Web. AI techniques are being used in “search 2.0” companies like Hakia and Powerset. Numenta is an exciting new company founded by tech legend Jeff Hawkins, which is attempting to build a new, brain-like computing paradigm - with neural networks and cellular automata. In plain English, this means that Numenta is trying to enable computers to tackle problems that come easily to us humans, like recognizing faces or seeing patterns in music. But since computers are much faster than humans when it comes to computation, we hope that new frontiers will be broken - enabling us to solve problems that were unreachable before.
3. Virtual Worlds

Second Life gets a lot of mainstream media attention as a future Web system. But at a recent Supernova panel that Sean Ammirati attended, the discussion touched on many other virtual world opportunities. The following graphic summarizes it well:

Korea offers an example of what's coming: as the ‘young generation’ grows up and infrastructure is built out, virtual worlds will become a vibrant market all over the world over the next 10 years.

It’s not just about digital life, but also making our real life more digital. As Alex Iskold explained, on one hand we have the rapid rise of Second Life and other virtual worlds. On the other we are beginning to annotate our planet with digital information, via technologies like Google Earth.
4. Mobile

Mobile Web is another Next Big Thing on slow boil. It’s already big in parts of Asia and Europe, and it received a kick in the US market this year with the release of Apple’s iPhone. This is just the beginning. In 10 years time there will be many more location-aware services available via mobile devices, such as getting personalized shopping offers as you walk through your local mall, getting map directions while driving your car, or hooking up with your friends on a Friday night. Look for the big Internet companies like Yahoo and Google to become key mobile portals, alongside the mobile operators.

Companies like Nokia, Sony-Ericsson, Palm, Blackberry and Microsoft have been active in the Mobile Web for years now, but one of the main issues with Mobile Web has always been usability. The iPhone has a revolutionary UI that makes it easier for users to browse the Web, using zooming, pinching and other methods. Also, as Alex Iskold noted, the iPhone is a strategy that may expand Apple’s sphere of influence, from web browsing to social networking and even possibly search.

So despite the current hype, in the US at least (and probably in other countries as it arrives there), the iPhone will probably be seen in 10 years time as the breakthrough Mobile Web device.
5. Attention Economy

The Attention Economy is a marketplace where consumers agree to receive services in exchange for their attention. Examples include personalized news, personalized search, alerts and recommendations to buy. The Attention Economy is about the consumer having choice - they get to choose where their attention is ‘spent’. Another key ingredient in the attention game is relevancy. As long as the consumer sees relevant content, he/she is going to stick around - and that creates more opportunities to sell.

Expect to see this concept become more important to the Web’s economy over the next decade. We’re already seeing it with the likes of Amazon and Netflix, but there is a lot more opportunity yet to explore from startups.


Image from The Attention Economy: An Overview, by Alex Iskold
6. Web Sites as Web Services

Alex Iskold wrote in March that as more and more of the Web is becoming remixable, the entire system is turning into both a platform and a database. Major web sites are going to be transformed into web services - and will effectively expose their information to the world. Such transformations are never smooth - e.g. scalability is a big issue and legal aspects are never simple. But, said Alex, it is not a question of if web sites become web services, but when and how.

The transformation will happen in one of two ways. Some web sites will follow the example of Amazon, del.icio.us and Flickr and will offer their information via a REST API. Others will try to keep their information proprietary, but it will be opened via mashups created using services like Dapper, Teqlo and Yahoo! Pipes. The net effect will be that unstructured information will give way to structured information - paving the road to more intelligent computing.
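
In practice, "exposing information to the world" means a REST API that returns structured data (typically XML or JSON) instead of a human-oriented page. The endpoint and payload below are invented for illustration - real services like Flickr's or Amazon's APIs define their own formats - but the key step is the same: structured text parses directly into data a program can work with:

```python
import json

# Hypothetical JSON payload as a REST API for a photo site might
# return it; the shape of this data is made up for this sketch.
response_body = """
{
  "photos": [
    {"title": "BigDog in snow", "tags": ["robot", "quadruped"]},
    {"title": "Motocross jump", "tags": ["motocross", "sports"]}
  ]
}
"""

data = json.loads(response_body)   # unstructured text -> structured data
titles = [p["title"] for p in data["photos"]]
print(titles)
```

Compare that one-line parse with the screen-scraping a mashup service must do against a site that has not opened an API - which is exactly why structured information "paves the road to more intelligent computing."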

Note that we can also see this trend playing out currently with widgets, and especially with Facebook in 2007. Perhaps in 10 years time the web services landscape will be much more open - the ‘walled garden’ problem is still very much with us in 2007.


Image from Web 3.0: When Web Sites Become Web Services, by Alex Iskold
7. Online Video / Internet TV

This is a trend that has already exploded on the Web - but you still get the sense there’s a lot more to come yet. In October 2006 Google acquired the hottest online video property on the planet, YouTube. Later on that same month, news came out that the founders of Kazaa and Skype were building an Internet TV service, nicknamed The Venice Project (later named Joost). In 2007, YouTube continues to dominate. Meanwhile Internet TV services are slowly getting off the ground.

Our network blog last100 has an excellent overview of the current Internet TV landscape, with reviews of 8 Internet TV apps. Read/WriteWeb’s Josh Catone also reviewed 3 of them - Joost, Babelgum, Zattoo.

It’s fair to say that in 10 years time, Internet TV will be totally different to what it is today. Higher quality pictures, more powerful streaming, personalization, sharing, and much more - it’s all coming over the next decade. Perhaps the big question is: how will the current mainstream TV networks (NBC, CNN, etc) adapt?


Zattoo, from Internet Killed The Television Star: Reviews of Joost, Babelgum, Zattoo, and More, by Josh Catone
8. Rich Internet Apps

As the current trend of hybrid web/desktop apps continues, expect to see RIA (rich internet apps) continue to increase in use and functionality. Adobe’s AIR platform (Adobe Integrated Runtime) is one of the leaders, along with Microsoft with its Windows Presentation Foundation. Also in the mix is Laszlo with its open source OpenLaszlo platform, and there are several other startups offering RIA platforms. Let’s not forget also that Ajax is generally considered to be an RIA - it remains to be seen though how long Ajax lasts, or whether there will be a ‘2.0’.

As Ryan Stewart wrote for Read/WriteWeb back in April 2006 (well before he joined Adobe), “Rich Internet Apps allow sophisticated effects and transitions that are important in keeping the user engaged. This means developers will be able to take the amazing changes in the Web for granted and start focusing on a flawless experience for the users. It is going to be an exciting time for anyone involved in building the new Web, because the interfaces are finally catching up with the content.”

The past year has proven Ryan right, with Adobe and Microsoft duking it out with RIA technologies. And there’s a lot more innovation to happen yet, so in 10 years time I can’t wait to see what the lay of the RIA land is!
9. International Web

As of 2007, the US is still the major market in the Web. But in 10 years time, things might be very different. China is often touted as a growth market, but other countries with big populations will also grow - India and African nations for example.

For most web 2.0 apps and websites (R/WW included), the US market makes up over 50% of their users. Yet comScore reported in November 2006 that 3/4 of traffic to top websites is international. comScore said that 14 of the top 25 US Web properties now attract more visitors from outside the US than from within. That includes the top 5 US properties - Yahoo! Sites, Time Warner Network, Microsoft, Google Sites, and eBay.

However, it is still early days and the revenues are not big in international markets at this point. In 10 years time, revenue will probably be flowing from the International Web.
10. Personalization

Personalization has been a strong theme in 2007, particularly with Google. Indeed Read/WriteWeb did a feature week on Personalizing Google. But you can see this trend play out among a lot of web 2.0 startups and companies - from last.fm to MyStrands to Yahoo homepage and more.

What can we expect over the next decade? Recently we asked Sep Kamvar, Lead Software Engineer for Personalization at Google, whether there will be a ‘Personal PageRank’ system in the future. He replied:

“We have various levels of personalization. For those who are signed up for Web History, we have the deepest personalization, but even for those who are not signed up for Web History, we personalize your results based on what country you are searching from. As we move forward, personalization will continue to be a gradient; the more you share with Google, the more tailored your results will be.”

If nothing else, it’ll be fascinating to track how Google uses personalization over the coming years - and how it deals with the privacy issues.
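
The "gradient" Kamvar describes - the more you share, the more tailored your results - can be sketched in a few lines. This is a toy re-ranking illustration, not Google's actual personalization algorithm:

```python
# Toy sketch of personalization as a gradient: results are re-ranked
# by their overlap with whatever history the user has chosen to share.
def personalize(results, history):
    """Order results by how many shared-history terms each one matches."""
    def score(result):
        return sum(1 for term in history if term in result.lower())
    return sorted(results, key=score, reverse=True)

results = ["Barry Bonds stats", "Savings bonds rates", "Corporate bond index"]

print(personalize(results, history=[]))                     # nothing shared: original order
print(personalize(results, history=["baseball", "stats"]))  # a fan sees baseball first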

Why Validate CSS?

This document attempts to answer the questions many people have regarding why they should bother validating their web sites, and tries to dispel a few common myths.

The original version was written by Nick Kew of WebÞing Ltd. for their Site Valet service and he has generously donated it for our use. This version has been slightly modified, but is essentially the same.
What is Validation?

Validation is a process of checking your documents against a formal Standard, such as those published by the World Wide Web Consortium (W3C) for HTML and XML-derived Web document types, or by the WapForum for WML, etc. It serves a similar purpose to spell checking and proofreading for grammar and syntax, but is much more precise and reliable than any of those processes because it is dealing with precisely-specified machine languages, not with nebulously-defined human natural language.

It is important to note that validation has a very precise meaning. Unfortunately the issue is confused by the fact that some products falsely claim to "validate", whilst in fact applying an arbitrary selection of tests that are not derived from any standard. Such tools may be genuinely useful, but should be used alongside true validation, not in place of it.
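
For a concrete taste of the difference between strict checking and browser error-correction, here is a small Python sketch: an XML parser, applied to XHTML-style fragments, refuses markup that a forgiving browser would render anyway. Note that this only checks well-formedness - true validation, as defined above, additionally checks the document against a formal DTD or schema, as the W3C validator does:

```python
# Not full validation (which checks a document against a formal DTD or
# schema), but a taste of the same strictness: an XML parser rejects
# markup that a forgiving browser would silently "error-correct".
import xml.etree.ElementTree as ET

good = "<p>A <em>well-formed</em> fragment.</p>"
bad = "<p>An <em>unclosed emphasis.</p>"   # missing </em>

def well_formed(fragment):
    try:
        ET.fromstring(fragment)
        return True
    except ET.ParseError:
        return False

print(well_formed(good))  # True
print(well_formed(bad))   # False
```

The second fragment displays identically to the first in most browsers - which is precisely the trap: the error is invisible until a stricter user agent encounters it.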
Why Validate?

Well, firstly there is the very practical issue that non-valid pages are (by definition) relying on error-correction by a browser. This error correction can and does vary radically across different browsers and versions, so that many authors who unwittingly relied on the quirks of Netscape 1.1 suddenly found their pages appeared totally blank in Netscape 2.0. Whilst Internet Explorer initially set out to be bug-compatible with Netscape, it too has moved towards standards compliance in later releases. Other browsers differ further.

The three questions below deal with three different points of view on the issue of Validation.

The novice (or non-technical website owner) question:
"My site looks right and works fine - isn't that enough?"

The answer to this one is that markup languages are no more than data formats. So a website doesn't look like anything at all! It only takes on a visual appearance when it is presented by your browser.

In practice, different browsers can and do display the same page very differently. This is deliberate, and doesn't imply any kind of browser bug. A term sometimes used for this is WYSINWOG - What You See Is Not What Others Get (unless by coincidence). It is indeed one of the principal strengths of the web, that (for example) a visually impaired user can select very large print or text-to-speech without a publisher having to go to the trouble and expense of preparing a separate edition.

It is perhaps unfortunate that the best-known browsers - Netscape Navigator and MS Internet Explorer on Windows - are visually very similar indeed in their presentation of many documents, differing only in trivial details like margins and spacings. The "same" browser on a Mac or Unix/Linux display often looks far more different.
The perceptive observation
"Lots of websites out there don't validate - including household-name companies."

Do remember: household-name companies expect people to visit because of the name and in spite of dreadful websites. Can you afford that luxury?

Even if you can, do you want to risk being on the wrong side of a lawsuit if your site proves inaccessible to - for instance - a disabled person who cannot use a 'conventional' browser? Accessibility is the law in many countries. Whilst validation doesn't guarantee accessibility (there is no substitute for common sense), it should be an important component of exercising "due diligence". It is now just over a year since a court first awarded damages to a blind user against the owners of a website he found inaccessible (Maguire vs SOCOG, August 2000).
The strawman argument
"Validation means boring websites, and stifles creativity"

This is simply head-in-the-sand ignorance (indeed, it lies at the heart of the most spectacular hype-filled dot-com failures). Validation is fully compatible with a wide range of dynamic pages, multimedia presentations, scripting and active content, etc. It is part of the difference between doing it right and doing it wrong in a dynamic multimedia presentation, just as much as in a purely textual site.

It is perfectly in order for authors to express their creativity on the Web, though it is of course generally more appropriate to some sites (e.g. recreational ones) than to others (e.g. informational or functional sites like this one). But authors with creative ambitions should bear in mind that in any artistic field, you must start with a thorough understanding of the rules before breaking them. Otherwise you just look foolish.

http://validator.w3.org

Table Based Web Layout vs. CSS Positioning

I recently reviewed an excellent book on this topic that should be of interest to anyone who finds this blog post by searching for table-based design or CSS. The book is the Zen of CSS Design.

When I learned HTML 3 a few years back, I enjoyed the power to publish content to the web where people from all over the world could access it easily. But as the web projects I developed became larger, I wished for a way to control design elements in a more centralized way.

In this pre-CSS world, if text links throughout the site were not visible enough to attract users’ attention, each link or at least the body element on each page would need to be edited to change the behavior and display of the text links throughout the site. In addition, the most frustrating aspect of design with HTML alone was being forced to use tables, designed to organize tabular data, to control the visual presentation of documents. Spacing needed to be done with images, and the code did not make sense in a text browser.

When I was introduced to Cascading Style Sheets (CSS) in its first incarnation, CSS was limited largely to the control of text and link behaviors. Even in its nascent form, it showed a great deal of potential to save repetitive work for the web designer. The many benefits of design with CSS include lower page weight and faster load times, greater accessibility to text browsers and handheld devices, greater control over type display, and cleaner (X)HTML code, which often results in better search engine rankings.

Now that I am free of table based design, I have no desire to return. Of course, I still think tables should be used for tabular data like calendars and well, tables. The chief arguments against CSS design for visual control are that it is difficult to learn (true, but it is well worth the effort) and that it requires hacks in order to render equally well in current and older browsers. These are valid reasons to think twice before learning CSS, but not valid reasons to avoid using it if you already have mastered its intricacies. In fact, as skill level with CSS increases, it becomes almost imperative to use it rather than tables.

When I started using CSS 2.1 for document layout, I made the mistake of mostly substituting page divisions (DIV tags) for table cells. Though this increased the accessibility of the pages I designed and made maintenance and site-wide changes easier, it merely replaced one sort of cluttered code with another, slightly less cluttered sort. As my comfort level increased, I found ways to style natural page elements (semantic code) rather than introducing page divisions. So as knowledge increases, code can become increasingly simple, semantic, and accessible. As this takes place, CSS begins to tower over tables as a visual design tool in its flexibility and power.

The next frontier that I must take my small design firm through is the mastery of accessibility to audio interpreters, such as screen readers, handheld devices, and full Section 508 compliance as mandated by law. Imagine a screen reader parsing a table based web design, “table data, image, table data, welcome to this web site, table row, table data, image, table”. You get the point. Within a mature understanding of CSS and semantic code, these standards can be gracefully implemented without a loss of visual appeal and without an increased page load. This is the future of web design, a virtual world with as few barriers as possible.
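
The "table data, image, table data" experience described above can be simulated with Python's standard HTML parser - a rough sketch of how a linear user agent walks table-based markup, not a real assistive technology:

```python
# Rough simulation of how a linear user agent (like a screen reader)
# encounters a table-based layout: structural noise arrives
# interleaved with the actual content.
from html.parser import HTMLParser

class Linearizer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stream = []

    def handle_starttag(self, tag, attrs):
        if tag in ("table", "tr", "td", "img"):
            self.stream.append(tag)

    def handle_data(self, data):
        if data.strip():
            self.stream.append(data.strip())

layout = ("<table><tr><td><img src='spacer.gif'></td>"
          "<td>welcome to this web site</td></tr></table>")
lin = Linearizer()
lin.feed(layout)
print(lin.stream)
```

Five pieces of structural noise for one phrase of content - whereas a semantic fragment like `<h1>welcome to this web site</h1>` linearizes to the content alone, plus a useful signal that it is a heading.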

http://blog.designdelineations.com

Tables or CSS: Choosing a layout

Dave Winer walked into a long-simmering debate when he recently asked what's so important about tableless layout. The kinds of passionate responses he received have been regularly heard throughout the web design community—including places like evolt.org's own lists.
Not ready for prime time?

One of the common arguments against CSS-based design is that reality hasn't caught up with the technology's benefits.

Although no popular web browsers fully support CSS2 yet, many of the latest versions (Netscape, IE5+, Opera, and OmniWeb among others) have excellent support for CSS1 and strong support for CSS2. Even better, the public seems to be adopting these new browsers relatively quickly.

A year ago, more than one-quarter of the surfing population used browsers with poor CSS support (including IE 4). Now less than 12 percent do—putting support of CSS-based design at the same level as JavaScript and Java.
Three reasons why

Advocates of tableless design have their own pet reasons as to why style sheets are better (“it's faster”, “there's better design control”, “it's the right thing to do”), but three common reasons are presented again and again:

1. Semantics: The HTML table was conceived as a means to display tabular data. Using tables for layout was mentioned in HTML 3.2, but only to acknowledge existing use—the concept didn't appear in the original RFC. In future recommendations, the W3C said style sheets, not tables, should be used for layout. Using tables for layouts is like wearing dress shoes jogging—both work, but they're the wrong tools for the job.
2. Accessibility: Screen readers and text browsers struggle to read table-based layouts. In fact, the W3C, in its Web Content Accessibility Guidelines, explicitly says “[d]o not use tables for layout...” A tableless-layout designed using CSS can present the most appropriate, and usable design for each user agent—be it a cell phone, a screen reader, a TV-based browser, or a browser running on a desktop computer.
3. Efficiency: For both the site developer and the reader, a CSS-based design offers a degree of flexibility nearly impossible in restrictive table-based layouts. Not only can developers quickly and easily redesign an entire site by modifying one file, they can also present alternate designs for the reader. Separating content from the detailed structure that table-based layouts impose has the added benefit of future compatibility and portability.

Think about the future

Given the direction of current browser development and of the W3C itself, CSS-based design looks to become the de facto method of web design. Before switching to a tableless design, though, designers should consider the site's audience and goals (as they do whenever using anything other than pure HTML):

* Do most of the audience's browsers have good CSS support? If 30 percent of the audience uses Netscape 4.x, for example, switching may not be the best idea right now.
* Can CSS be implemented efficiently through all or part of the site? If all of the site cannot be changed immediately, small sections could benefit from CSS.
* Can there be two or more versions of the site? Some sites offer the previous tabled layout to older browsers, and the CSS version to the newer ones.
* Will the redesign be unveiled now, in six months, or in a year? The longer the planned roll-out, the more likely the audience will better support a CSS-based design.
* Is/will the site be available in multiple formats and media? If so, the benefits of CSS far outweigh the negatives.

Even if a tableless design may not be suitable for one site today, it will be for a growing number of others now and in the future.

http://www.evolt.org