After yesterday’s embarrassment, when the Packers validated my Aaron Rodgers numbers only to have the defense laugh it off in overtime, we’re back.

Getting. Closer. To. The. Super Bowl.

Robot Dave is hell-bent on Carolina and will not let up. I am sad to report that I agree. I know they’re young, but they’re smart. Pete Carroll can only be so lucky. Does he have enough luck to get him through today? Even though they’re healthy? Even though they have Wilson and Marshawn Lynch? I can’t call that hat trick. The numbers are stacked against them. Plus, I haven’t forgiven Carroll for the heartbreak of last year’s Super Bowl. I can’t.

Here’s a ditty: the Steelers can win. I know what you’re thinking: not with that injury report. But they’re not playing Carolina. Not today. A good coach anticipates players going down. He’s ready for it. Mike Tomlin isn’t just a good coach; he’s stellar. Denver isn’t as dominant as the Panthers, so the Steelers have a chance and enough firepower to get through it. Roethlisberger knows how to play injured. He’s dragged himself out there before. If he doesn’t play, they’re out of the playoffs and the season’s a wash.

Plus, I can’t always agree with the Robot. The smack talk must live on.


Robot Flips Pancakes

September 16th, 2015 | Posted by Sarah in PureNerdism - (0 Comments)

 
 

The video shows a Barrett WAM robot learning to flip pancakes by reinforcement learning. The motion is encoded in a mixture of basis force fields through an extension of Dynamic Movement Primitives (DMP) that represents the synergies across the different variables through stiffness matrices. An Inverse Dynamics controller with variable stiffness is used for reproduction.

As a Pancake Day special, the skill is first demonstrated via kinesthetic teaching and then refined by the Policy learning by Weighting Exploration with the Returns (PoWER) algorithm. After 50 trials, the robot learns that the first part of the task requires stiff behavior to throw the pancake in the air, while the second part requires the hand to be compliant in order to catch the pancake without having it bounce off the pan.
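To make the learning step concrete, here is a minimal Python sketch of the reward-weighted update at the heart of PoWER, applied to a toy problem. The parameters, noise scale, and reward function are illustrative stand-ins, not the actual pancake controller.

```python
import numpy as np

def power_update(theta, rollouts, top_k=10):
    """One PoWER-style update: add a reward-weighted average of the
    exploration noise from the best rollouts to the policy parameters."""
    best = sorted(rollouts, key=lambda r: r[1], reverse=True)[:top_k]
    weighted_noise = sum(eps * r for eps, r in best)
    total_reward = sum(r for _, r in best) + 1e-9
    return theta + weighted_noise / total_reward

# Toy stand-in for the task: reward peaks when the policy parameters
# reach a hidden target vector.
rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
for trial in range(50):  # mirrors the ~50 trials mentioned above
    rollouts = []
    for _ in range(15):
        eps = rng.normal(scale=0.3, size=3)  # exploration noise
        reward = float(np.exp(-np.sum((theta + eps - target) ** 2)))
        rollouts.append((eps, reward))
    theta = power_update(theta, rollouts)
print(theta)  # converges toward the hidden target
```

The key property is that better rollouts pull the parameters harder, so exploration gradually concentrates around what worked.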

from Dr. Petar Kormushev

via HAL




 

It’s been almost 20 years since IBM’s Deep Blue supercomputer beat the reigning world chess champion, Garry Kasparov, for the first time under standard tournament rules. Since then, chess-playing computers have become significantly stronger, leaving the best humans little chance even against a modern chess engine running on a smartphone.

But while computers have become faster, the way chess engines work has not changed. Their power relies on brute force, the process of searching through all possible future moves to find the best next one.

 

Of course, no human can match that or come anywhere close. While Deep Blue was searching some 200 million positions per second, Kasparov was probably searching no more than five a second. And yet he played at essentially the same level. Clearly, humans have a trick up their sleeve that computers have yet to master.

This trick is in evaluating chess positions and narrowing down the most profitable avenues of search. That dramatically simplifies the computational task because it prunes the tree of all possible moves to just a few branches.
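For readers who want the search side spelled out, here is a minimal negamax search with alpha-beta pruning in Python; `evaluate`, `moves`, and `apply_move` are hypothetical callbacks standing in for a real engine’s position evaluation, move generation, and move application.

```python
def alphabeta(pos, depth, alpha, beta, evaluate, moves, apply_move):
    """Brute-force search with pruning: explore every move, but cut
    off branches that provably cannot affect the final choice."""
    if depth == 0:
        return evaluate(pos)
    best = float("-inf")  # a real engine would handle mate/stalemate here
    for move in moves(pos):
        # Negamax: the opponent's best score is the negation of ours.
        score = -alphabeta(apply_move(pos, move), depth - 1,
                           -beta, -alpha, evaluate, moves, apply_move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # opponent would avoid this line anyway: prune it
    return best
```

The better the move ordering, the earlier those cutoffs fire, which is exactly where a human-like sense of which moves are promising pays off.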

Computers have never been good at this, but today that changes thanks to the work of Matthew Lai at Imperial College London. Lai has created an artificial intelligence machine called Giraffe that has taught itself to play chess by evaluating positions much more like humans do, and in an entirely different way from conventional chess engines.

Straight out of the box, the new machine plays at the same level as the best conventional chess engines, many of which have been fine-tuned over many years. On a human level, it is equivalent to FIDE International Master status, placing it within the top 2.2 percent of tournament chess players.

The technology behind Lai’s new machine is a neural network. This is a way of processing information inspired by the human brain. It consists of several layers of nodes connected in ways that change as the system is trained. This training process uses lots of examples to fine-tune the connections so that the network produces a specific output for a given input, to recognize the presence of a face in a picture, for example.
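As a toy illustration of that training process, here is a two-layer network learning the XOR function by backpropagation in plain NumPy; nothing about it is specific to Lai’s system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two layers of weights; training nudges them so each input
# produces the desired output (here: XOR, a classic toy task).
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                 # hidden layer
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # output layer
    # Backpropagation: adjust each connection by its share of the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(2).ravel())  # close to [0, 1, 1, 0]
```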

In the last few years, neural networks have become hugely powerful thanks to two advances. The first is a better understanding of how to fine-tune these networks as they learn, thanks in part to much faster computers. The second is the availability of massive annotated datasets to train the networks.

That has allowed computer scientists to train much bigger networks organized into many layers. These so-called deep neural networks have become hugely powerful and now routinely outperform humans in pattern recognition tasks such as face recognition and handwriting recognition.

So it’s no surprise that deep neural networks ought to be able to spot patterns in chess, and that’s exactly the approach Lai has taken. His network consists of four layers that together examine each position on the board in three different ways.

The first looks at the global state of the game, such as the number and type of pieces on each side, which side is to move, castling rights and so on. The second looks at piece-centric features such as the location of each piece on each side, while the third maps the squares that each piece attacks and defends.
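As a rough sketch of what those three views might look like in code, here is a feature extractor built on the python-chess library. The layout and scaling are guesses at the spirit of the design, not Giraffe’s actual encoding (which, among other things, uses fixed slots per piece rather than a variable-length list).

```python
import chess  # pip install python-chess

def features(board: chess.Board) -> list[float]:
    """Illustrative feature vector along the three axes described above."""
    f = []
    # 1. Global state: side to move, castling rights, material counts.
    f.append(1.0 if board.turn == chess.WHITE else -1.0)
    for color in (chess.WHITE, chess.BLACK):
        f.append(float(board.has_kingside_castling_rights(color)))
        f.append(float(board.has_queenside_castling_rights(color)))
        for piece_type in range(chess.PAWN, chess.KING + 1):
            f.append(float(len(board.pieces(piece_type, color))))
    # 2. Piece-centric: where each piece stands (file/rank, scaled).
    for square, piece in board.piece_map().items():
        f += [float(piece.piece_type),
              chess.square_file(square) / 7,
              chess.square_rank(square) / 7]
    # 3. Square control: attackers of each square for both sides.
    for square in chess.SQUARES:
        f.append(float(len(board.attackers(chess.WHITE, square))))
        f.append(float(len(board.attackers(chess.BLACK, square))))
    return f

print(len(features(chess.Board())))  # feature vector for the start position
```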

Lai trains his network with a carefully generated set of data taken from real chess games. This data set must have the correct distribution of positions. “For example, it doesn’t make sense to train the system on positions with three queens per side, because those positions virtually never come up in actual games,” he says.

It must also include plenty of unequal positions beyond those that usually occur in top-level chess games. That’s because although unequal positions rarely arise in real chess games, they crop up all the time in the searches that the computer performs internally.

And this data set must be huge. The massive number of connections inside a neural network has to be fine-tuned during training, and this can only be done with a vast dataset. Use a dataset that is too small and the network can settle into a state that fails to recognize the wide variety of patterns that occur in the real world.

Lai generated his dataset by randomly choosing five million positions from a database of computer chess games. He then created greater variety by adding a random legal move to each position before using it for training. In total he generated 175 million positions in this way.
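A minimal sketch of that perturbation step, again using python-chess; the sampling from a games database is elided, and the function name is illustrative.

```python
import random
import chess

def perturb(fen: str, rng: random.Random) -> str:
    """Take a position from a games database and apply one random
    legal move, yielding the slightly off-book positions that internal
    search trees visit but real games rarely reach."""
    board = chess.Board(fen)
    moves = list(board.legal_moves)
    if moves:
        board.push(rng.choice(moves))
    return board.fen()

rng = random.Random(42)
print(perturb(chess.Board().fen(), rng))  # start position, randomly nudged
```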

The usual way of training these machines is to manually evaluate every position and use this information to teach the machine to recognize those that are strong and those that are weak.

But this is a huge task for 175 million positions. It could be done by another chess engine, but Lai’s goal was more ambitious. He wanted the machine to teach itself.

Instead, he used a bootstrapping technique in which Giraffe played against itself with the goal of improving its prediction of its own evaluation of a future position. That works because there are fixed reference points that ultimately determine the value of a position—whether the game is later won, lost or drawn.

In this way, the computer learns which positions are strong and which are weak.
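In code, the idea might look like the following sketch, in the spirit of temporal-difference learning; `evaluate` and `fit_step` are hypothetical stand-ins for the network’s forward pass and a single training step.

```python
def bootstrap_game(positions, result, evaluate, fit_step):
    """Self-play bootstrapping sketch: each position's training target
    is the network's own evaluation of the next position, and the last
    position is anchored to the game's outcome (+1 win, 0 draw, -1 loss)."""
    for t, pos in enumerate(positions):
        if t + 1 < len(positions):
            target = evaluate(positions[t + 1])  # its own future estimate
        else:
            target = result  # the fixed reference point
        fit_step(pos, target)
```

Because wins and losses are unambiguous, errors in the network’s guesses get squeezed out as those fixed endpoints propagate back through the games.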

Having trained Giraffe, the final step is to test it and here the results make for interesting reading. Lai tested his machine on a standard database called the Strategic Test Suite, which consists of 1,500 positions that are chosen to test an engine’s ability to recognize different strategic ideas. “For example, one theme tests the understanding of control of open files, another tests the understanding of how bishop and knight’s values change relative to each other in different situations, and yet another tests the understanding of center control,” he says.

The results of this test are scored out of 15,000.

Lai uses this to test the machine at various stages during its training. As the bootstrapping process begins, Giraffe quickly reaches a score of 6,000 and eventually peaks at 9,700 after only 72 hours. Lai says that matches the best chess engines in the world.

“[That] is remarkable because their evaluation functions are all carefully hand-designed behemoths with hundreds of parameters that have been tuned both manually and automatically over several years, and many of them have been worked on by human grandmasters,” he adds.

Lai goes on to use the same kind of machine learning approach to determine the probability that a given move is likely to be worth pursuing. That’s important because it prevents unnecessary searches down unprofitable branches of the tree and dramatically improves computational efficiency.

Lai says this probabilistic approach predicts the best move 46 percent of the time and places the best move in its top three 70 percent of the time, so the computer usually doesn’t have to bother with the other moves.
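Here is a sketch of how such a move predictor plugs into search; `move_probability` and `legal_moves` are hypothetical stand-ins for Lai’s model and a move generator.

```python
def promising_moves(pos, move_probability, legal_moves, keep=3):
    """Score every legal move with the learned model and keep only the
    most promising few, pruning the remaining branches before search."""
    ranked = sorted(legal_moves(pos),
                    key=lambda m: move_probability(pos, m), reverse=True)
    return ranked[:keep]  # top three contain the best move ~70% of the time
```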

This is interesting work that represents a major change in how chess engines operate. It is not perfect, of course. One disadvantage of Giraffe is that neural networks are much slower than other types of data processing. Lai says Giraffe takes about 10 times longer than a conventional chess engine to search the same number of positions.

But even with this disadvantage, it is competitive. “Giraffe is able to play at the level of a FIDE International Master on a modern mainstream PC,” says Lai. By comparison, the top engines play at super-Grandmaster level.

That’s still impressive. “Unlike most chess engines in existence today, Giraffe derives its playing strength not from being able to see very far ahead, but from being able to evaluate tricky positions accurately, and understanding complicated positional concepts that are intuitive to humans, but have been elusive to chess engines for a long time,” says Lai. “This is especially important in the opening and end game phases, where it plays exceptionally well.”

And this is only the start. Lai says it should be straightforward to apply the same approach to other games. One that stands out is the traditional Chinese game of Go, where humans still hold an impressive advantage over their silicon competitors. Perhaps Lai could have a crack at that next.

source via HAL


 
 

We’ve developed a new framework for reinforcement learning, a subset of machine learning. This video shows the framework applied to an autonomous RC car that learns to drift around a truck.

via HAL


 
 

The better we come to understand the way intelligence develops in complex systems in the universe, the more clearly we’ll perceive our own role and limits in fostering technological evolutionary development. Top-down AI designers assume that human minds must furnish the most important goals to our AI systems as they develop. Certainly some such goal-assignment must occur, but it is becoming increasingly likely that this strategy has rapidly diminishing marginal returns. Evolutionary developmental computation (in both biological and technological systems) generally creates and discovers its own goals and encodes learned information in its own bottom-up, incremental, and context-dependent fashion, in a manner only partially accessible to our rational analysis. Ask yourself, for example, how much of your own mental learning has been due to inductive, trial-and-error internalization of experience, and how much was a deductive, architected, rationally-directed process. This topic, the self-organization of intelligence, is observed in all complex systems to the extent that each system’s physics allows, from molecules to minds.

In line with the new paradigm of evolutionary development of complex systems, we are learning that tomorrow’s most successful technological systems must be organic in nature. Self-organization emerges only through a process of cyclic development with limited evolution/variation within each cycle, a self-replicating development that becomes incrementally tuned for progressively greater self-assembly, self-repair, and self-reorganization, particularly at the lowest component levels. At the same time, progressive self-awareness (self-modelling) and general intelligence (environmental modelling) are emergent features of such systems.

Most of today’s technological systems are a long way from having these capacities. They are rigidly modular, and do not adapt to or interdepend with each other or their environment. They are not self-assembling but mostly externally constructed. In discussing proteins, Michael Denton reminds us of how far our technological systems have to go toward this ideal. Living molecular systems engage extensively in the features listed above. A protein’s three-dimensional shape is the result of a network of local and nonlocal physical interdependencies (e.g., covalent, electrostatic, electrodynamic, steric, and solvent interactions). Both its assembly and its final form are developmentally computed emergent features of that interdependent network. A protein taken out of its interdependent milieu soon becomes nonfunctional, as its features are a convergent property of the interdependent system.

Today’s artificial neural networks, genetic algorithms, and evolutionary programs are promising examples of systems that demonstrate an already surprising degree of self-replication, self-assembly, self-repair, and self-reorganization, even at the component level. Implementing a hardware description language genotype, which in turn specifies a hardware-deployed neural net phenotype, and allowing this genotype-phenotype system to tune itself for ever more complex, modular, and interdependent neural net emergence is one future path likely to take us a lot further toward technological autonomy. At the same time, as Kurzweil has argued, advances in human brain scanning will allow us to instantiate ever more interdependent computational architectures directly into the technological substrate, architectures that the human mind will have less and less ability to model as we engage in the construction process. In this latter example, human beings are again acting as a decreasingly central part of the replication and variation loop for the continually improving technological substrate.
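As a toy sketch of that genotype-phenotype loop, here is a minimal neuroevolution run in Python: a flat weight vector (the “genotype”) parameterizes a tiny network (the “phenotype”), and selection plus mutation tunes it to fit a sine curve. Everything here is illustrative; it bears no relation to any actual hardware description language pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def phenotype(genome, x):
    """Decode a 16-gene genotype into a 1-8-1 network and run it."""
    W1 = genome[:8].reshape(1, 8)
    W2 = genome[8:].reshape(8, 1)
    return np.tanh(x @ W1) @ W2

def fitness(genome):
    """Negative mean squared error against sin(x) on [-2, 2]."""
    x = np.linspace(-2, 2, 32).reshape(-1, 1)
    return -float(np.mean((phenotype(genome, x) - np.sin(x)) ** 2))

# Evolution: keep the best genotypes, refill the population with
# mutated copies, repeat. No gradients, no top-down design.
population = [rng.normal(size=16) for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = [p + rng.normal(scale=0.1, size=16)
                  for p in parents for _ in range(5)]

best = max(population, key=fitness)
print(round(-fitness(best), 4))  # final approximation error
```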

Collective or “swarm” computation is also a critical element of the evolutionary development of complexity, and thus facilitating the emergence of systems we only partially understand but collectively utilize (agents, distributed computation, biologically inspired computation) will be very important to achieving the emergences we desire. Linking physically based self-replicating systems (SRSs) to the emerging biologically inspired computational systems (neural networks, genetic algorithms, evolutionary systems) which are their current predecessors will be another important bottom-up method, as first envisioned by John von Neumann in the 1950s.

Physical SRSs, like today’s primitive self-replicating robots, provide an emerging body for the emerging mind of the coming machine intelligence, a way for it to learn, from the bottom up, the myriad lessons of “common sense” interaction in the physical world (e.g., sensorimotor before instinctual before linguistic learning). As our simulation capacity, solid-state physics, and fabrication systems allow us to develop ever more functional micro-, meso-, and nano-scale computational evolutionary hardware and evolutionary robotic SRSs in coming decades (these will be functionally restricted versions of the “general assembler” goal in nanotechnology), we may come to view our technological systems’ simulation and fabrication capacity as their “DNA-guided protein synthesis”, their evolutionary hardware and software as their emerging “nervous system”, and evolutionary robotics as the “body” of their emergent autonomous intelligence.

At best, we conscious humans may create selection pressures that reward certain types of emergent complexity within the biologically inspired computation/SRS environment. At the same time, all our rational striving for a top-down design and understanding of the AI we are now engaged in creating will remain an important (though ever decreasing) part of the process. Thus, at this still-primitive stage of evolution of the coming autonomous technological substrate, a variety of differentiated, not-yet-convergent approaches to AI are to be expected. Comparing and contrasting the various paths available to us, and choosing carefully how to allocate our resources, will be an essential part of humanity’s role as memetically driven catalysts of the coming transition.

In this spirit, let me now point out that on close inspection of the present state of AI research, one finds very few investigators remaining who do not acknowledge the fundamental utility of evolution as a creative component in future AI systems. The nonevolutionary, top-down AI approaches that remain in vogue (whether classical symbolic or one of the many historical derivatives of this) are now few in number and, despite decades of iterative refinement, have consistently demonstrated only minor incremental improvements in performance and functional adaptation. To me, this is a strong indication that human-centric, human-envisioned design has reached a “saturation phase” in its attempt to add incremental complexity to technological systems. We humans simply aren’t that smart, and the universe is showing us a much more powerful way to create complexity than by trying to develop or deduce it from logical first principles.

Thus we should not be surprised that, on a human scale, the handful of researchers working on systems to encode some kind of “general intelligence” in AI, after a surge of early and uneconomical attempts from the 1950s to the 1970s, now pales in comparison to the 50,000 or so computer scientists investigating various forms of evolutionary computation. Over the last decade we have seen a growing number of real theoretical and commercial successes with genetic algorithms, genetic programming, evolutionary strategies, evolutionary programming, and other intrinsically chaotic and interdependent evolutionary computational approaches, even given their currently primitive encapsulation of the critical evolutionary developmental aspects of genetic and neural computational systems and their currently severe hardware and software complexity limitations.

We may therefore expect that the numbers of those funded investigators who currently engage in this new evolutionary developmental paradigm will continue to swell exponentially in coming decades, as they are following what appears to be the most universally-permissive path to increasing adaptive computational complexity.

[via AccelerationWatch and HAL]


Robot Rides Bicycle

August 17th, 2015 | Posted by Sarah in PureNerdism - (0 Comments)

 
 

Did you know that it’s easier for a robot to ride a bike than to walk? That’s due to the repetitive motion of pedaling. It also makes balancing much less challenging for the robot.

Masahiko Yamaguchi, Tamagawa Seiki
Bipedal Bike Riding Robot


Dynamic Walking Robots in Shoes

August 17th, 2015 | Posted by Sarah in PureNerdism - (0 Comments)

 
 

Robots walk best when they get help from shoes. The soles help stabilize contact with the ground and make balancing easier.


Get Started with Machine Learning

April 3rd, 2015 | Posted by Sarah in PureNerdism - (0 Comments)

 
 

Melanie Warrick gives a stellar intro to machine learning at PyCon 2014.


I’m biased, being an MIT graduate in machine learning, but I always felt the way the teachers broke down the lectures, especially using the fox, goose, and grain puzzle, was genius.
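For the uninitiated: a farmer must ferry a fox, a goose, and a bag of grain across a river, one item at a time, without leaving the fox alone with the goose or the goose alone with the grain. It’s a classic state-space search warm-up, solvable with a few lines of breadth-first search; this sketch is my own, not anything from the lectures.

```python
from collections import deque

# State = frozenset of who is on the left bank ("F" is the farmer).
ITEMS = {"F", "fox", "goose", "grain"}
UNSAFE = [{"fox", "goose"}, {"goose", "grain"}]

def safe(state):
    """The bank without the farmer must not contain a bad pair."""
    unattended = (ITEMS - state) if "F" in state else state
    return not any(pair <= unattended for pair in UNSAFE)

def solve():
    start, goal = frozenset(ITEMS), frozenset()
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        here = state if "F" in state else ITEMS - state
        # The farmer crosses alone or with one item from his bank.
        for cargo in [None] + [x for x in here if x != "F"]:
            moved = {"F"} | ({cargo} if cargo else set())
            nxt = frozenset(state - moved if "F" in state else state | moved)
            if safe(nxt) and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))

for step in solve():
    print(sorted(step))  # who is on the left bank after each crossing
```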

Set aside 47 minutes and have a good listen.

 
 
