Wednesday 6 October 2010

It has begun.

http://www.nytimes.com/2010/10/05/sc...ref=technology

Originally Posted by New York Times:
Since the start of the year, a team of researchers at Carnegie Mellon University — supported by grants from the Defense Advanced Research Projects Agency and Google, and tapping into a research supercomputing cluster provided by Yahoo — has been fine-tuning a computer system that is trying to master semantics by learning more like a human. Its beating hardware heart is a sleek, silver-gray computer — calculating 24 hours a day, seven days a week — that resides in a basement computer center at the university, in Pittsburgh. The computer was primed by the researchers with some basic knowledge in various categories and set loose on the Web with a mission to teach itself.

“For all the advances in computer science, we still don’t have a computer that can learn as humans do, cumulatively, over the long term,” said the team’s leader, Tom M. Mitchell, a computer scientist and chairman of the machine learning department.


The Never-Ending Language Learning system, or NELL, has made an impressive showing so far.
NELL scans hundreds of millions of Web pages for text patterns that it uses to learn facts, 390,000 to date, with an estimated accuracy of 87 percent. These facts are grouped into semantic categories — cities, companies, sports teams, actors, universities, plants and 274 others. The category facts are things like “San Francisco is a city” and “sunflower is a plant.”
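To make the idea of "text patterns that it uses to learn facts" concrete, here is a toy sketch of pattern-based category extraction. The pattern strings and function names are purely illustrative assumptions, not NELL's actual (learned) patterns or code:

```python
import re

# Hypothetical hand-written patterns; NELL learns its patterns rather
# than using a fixed list like this.
CATEGORY_PATTERNS = {
    "city": [r"(\w[\w ]*?) is a city"],
    "plant": [r"(\w[\w ]*?) is a plant"],
}

def extract_category_facts(text):
    """Scan text for pattern matches and emit (entity, category) facts."""
    facts = set()
    for category, patterns in CATEGORY_PATTERNS.items():
        for pattern in patterns:
            for match in re.finditer(pattern, text):
                facts.add((match.group(1), category))
    return facts

sample = "San Francisco is a city. Sunflower is a plant."
print(sorted(extract_category_facts(sample)))
# [('San Francisco', 'city'), ('Sunflower', 'plant')]
```

Each match becomes a candidate fact like "San Francisco is a city"; at NELL's scale the same extraction runs over hundreds of millions of pages, with accuracy estimated statistically rather than assumed.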

NELL also learns facts that are relations between members of two categories. For example, Peyton Manning is a football player (category). The Indianapolis Colts is a football team (category). By scanning text patterns, NELL can infer with a high probability that Peyton Manning plays for the Indianapolis Colts — even if it has never read that Mr. Manning plays for the Colts. “Plays for” is a relation, and there are 280 kinds of relations. The number of categories and relations has more than doubled since earlier this year, and will steadily expand.
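A rough sketch of how typed relation inference could work: if two entities frequently co-occur in text, and their categories match a relation's type signature (e.g. "plays for" links a football player to a football team), the relation can be proposed with a confidence score. Everything below (names, the confidence formula) is an illustrative assumption, not NELL's actual algorithm:

```python
# Category facts already in the knowledge base.
category_facts = {
    "Peyton Manning": "football_player",
    "Indianapolis Colts": "football_team",
}

# A relation's type signature: plays_for(football_player, football_team).
RELATION_SIGNATURES = {
    "plays_for": ("football_player", "football_team"),
}

def infer_relations(cooccurrences):
    """cooccurrences: list of (entity_a, entity_b, supporting_pattern_count).
    Propose relation facts whose argument categories fit a signature;
    more supporting text patterns -> higher confidence (toy model)."""
    proposals = []
    for a, b, count in cooccurrences:
        sig = (category_facts.get(a), category_facts.get(b))
        for relation, rsig in RELATION_SIGNATURES.items():
            if sig == rsig:
                confidence = 1 - 0.5 ** count  # toy confidence model
                proposals.append((relation, a, b, confidence))
    return proposals

print(infer_relations([("Peyton Manning", "Indianapolis Colts", 3)]))
```

This is how the system can assert "Peyton Manning plays for the Indianapolis Colts" with high probability even without ever reading that exact sentence: the category facts and the relation's type signature do the work.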


The learned facts are continuously added to NELL’s growing database, which the researchers call a “knowledge base.” A larger pool of facts, Dr. Mitchell says, will help refine NELL’s learning algorithms so that it finds facts on the Web more accurately and more efficiently over time.
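The feedback loop described here (more facts lead to better extraction, which leads to more facts) is the classic bootstrapping pattern. A toy self-training loop in that spirit, with a made-up four-sentence corpus standing in for the Web:

```python
# Illustrative bootstrapping loop, not NELL's code: each pass, contexts
# around known cities are learned, then reused to extract new cities.
corpus = [
    "San Francisco is a city",
    "Pittsburgh is a city",
    "I flew to Pittsburgh yesterday",
    "I flew to Boston yesterday",
]

knowledge_base = {"San Francisco"}  # seed fact: one known city

for iteration in range(3):
    # 1. Learn contexts in which known cities appear.
    contexts = set()
    for sentence in corpus:
        for city in knowledge_base:
            if city in sentence:
                contexts.add(sentence.replace(city, "{}"))
    # 2. Reuse those contexts to extract new candidate cities.
    for sentence in corpus:
        for ctx in contexts:
            prefix, _, suffix = ctx.partition("{}")
            if sentence.startswith(prefix) and sentence.endswith(suffix):
                candidate = sentence[len(prefix):len(sentence) - len(suffix)]
                knowledge_base.add(candidate)

print(sorted(knowledge_base))
# ['Boston', 'Pittsburgh', 'San Francisco']
```

Note how Boston is only reachable because Pittsburgh was learned first: "Pittsburgh is a city" yields the context "I flew to {} yesterday", which then extracts Boston. The growing knowledge base genuinely improves the extractor, which is the dynamic Dr. Mitchell describes.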
22 comments:

  1. interesting stuff.. way over my head

  2. Yeah, this is a very interesting post.
    I'm followin' you.

  3. Wow that's gonna be pretty sweet if we can get some real AI up in here!

  4. Keep up the good work! If you like bubbles, check out my other blog, Enhanced by MS Paint!

  5. That's crazy. Can it learn when something is debunked as well?

  6. nice little post, thanks for popping by man <3

  7. Well, now I know how the end of the world started. Nice post.

  8. that face in the pic scares the shit out of me

  9. Robotface reporting in for the world take over.

  10. that sounds creepy. i'm sure this technology will be used against the people

  11. eek skynet

    mmawillgrow.blogspot.com
  12. When it comes to your post, GTL can also stand for "Great Thoughts at Length"... keep it up!

  13. You've made some interesting points... I also have some points (actually bubbles) on Enhanced by MS Paint :)

  14. OH - MY - GOD.

    "The computer was primed by the researchers with some basic knowledge in various categories and set loose on the Web with a mission to teach itself."

    Sounds so fucking awesome.
