Subramanian Ramamoorthy blogging on AI ’n’ stuff

My old UTexas Qualitative Reasoning & Intelligent Robotics labmate Subramanian “Ram” Ramamoorthy is blogging now as a lecturer (i.e., assistant professor) at the University of Edinburgh. He is the second former labmate of mine to end up there. (The first is former NNRG labmate Jim Bednar.)


PLASTK version 0.1

[Screenshot: PLASTK GUI]

In a departure from my recent blog themes, I’d like to get back to AI and machine learning today to announce the first development release of PLASTK: The Python Learning Agent Software Toolkit.

From the PLASTK README.txt file:

PLASTK is a Python class library for building and experimenting with learning agents: software programs that interact in a closed loop with an environment and learn from that interaction. It is aimed especially, but not exclusively, at reinforcement learning agents.
PLASTK has an extensible component architecture for defining the pieces that make up a software agent system, including agents and environments, as well as the building blocks needed to construct agents, such as function approximators and vector quantizers.
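The closed-loop agent/environment interaction described above can be sketched in a few lines. Everything here — the class names, the toy one-dimensional environment, the `act`/`learn` methods — is my own illustration of the general pattern, not PLASTK’s actual API:

```python
class Environment:
    """A trivial one-dimensional world: the agent tries to reach position 5."""
    def __init__(self):
        self.pos = 0

    def step(self, action):
        self.pos += action            # action is -1 or +1
        reward = 1.0 if self.pos == 5 else 0.0
        return self.pos, reward       # next observation and reward

class Agent:
    """A placeholder agent that always moves right; a learner would go here."""
    def act(self, observation):
        return +1

    def learn(self, observation, reward):
        pass                          # a real agent would update itself here

# The closed loop: observations flow to the agent, actions to the environment.
env, agent = Environment(), Agent()
obs = env.pos
for _ in range(5):
    action = agent.act(obs)
    obs, reward = env.step(action)
    agent.learn(obs, reward)

print(obs, reward)  # prints: 5 1.0
```

The point of a component architecture like PLASTK’s is that anything implementing the agent side of this loop can be dropped in against anything implementing the environment side.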

[Screenshot: PLASTK pendulum demo]

PLASTK started life as an ad hoc collection of algorithms from the reinforcement learning and self-organization literature that I implemented for my dissertation experiments. While in grad school I managed to clean it up to the point where a couple of other students were able to use it. The current release is the first step in my effort to make it usable outside a close circle of labmates.

PLASTK currently contains implementations of Q-learning and Sarsa agents with tabular state and linear feature representations, self-organizing (Kohonen) maps, growing neural gas, and linear, affine, and locally weighted regression. It also contains some demo environments, including a two-dimensional “gridworld” (shown in the figure) and a pendulum. Included examples show how to set up agents to learn in the gridworld and pendulum environments. PLASTK also includes a simple, component-based GUI, shown in the screenshot on the right, for visualizing agent/environment interaction.
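To give a flavor of what a tabular Q-learning agent does, here is a self-contained sketch on a toy one-dimensional gridworld. The chain environment, hyperparameters, and tie-breaking are my own illustration, not code from PLASTK:

```python
import random

N_STATES, GOAL = 6, 5          # states 0..5; reward only on reaching state 5
ACTIONS = (+1, -1)             # move right or left (ties break toward +1)
alpha, gamma, epsilon = 0.5, 0.9, 0.1

# The tabular representation: one Q value per (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def env_step(s, a):
    """Move in the chain, clamped to [0, GOAL]; reward 1 at the goal."""
    s2 = min(max(s + a, 0), GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0)

random.seed(0)
for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = env_step(s, a)
        # The standard Q-learning update rule.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy should head right from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
```

A Sarsa agent differs only in the update target: it uses the Q value of the action actually taken next rather than the max over actions.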

PLASTK’s biggest deficiency right now is that the agent/environment interaction API is the long-deprecated RLI version 5 from Rich Sutton’s RLAI group. One of the first to-do-list items is to update the interface to the new RL-Glue standard.

The PLASTK code is redistributable under GPL v.2 and is available for download. Please feel free to contribute! Patches are welcome.

Al Qaeda Developing Killer Daleks! Run!

In the made-my-day department: Fox News recently ran a story on the possibility of Al Qaeda attacking the West with killer robots, complete with a photo of a Dalek from Doctor Who!

As others have pointed out, the whole thing stems from a speculative statement in a robot-ethics talk by Noel Sharkey of the University of Sheffield.

Somethin’ Happenin’ Here…

Between my last few posts here and recent comments elsewhere, I’ve probably given the impression that I’m bearish on Pittsburgh, at least as far as the tech/start-up economy goes. Not so. While it’s sometimes hard for me to keep my mood up in the winter, this has actually been a bullish week for me, Burgh-wise.

Among the things that lifted my mood this week was lunch today with Matt Harbaugh from InnovationWorks. Before today I had lumped IW in with the slew of public/non-profit/consortium/partnership groups around here that purport to help the tech economy. And frankly, it was never very clear to me what any of them did, or whether anyone in the local tech industry would care if they just disappeared.

After talking with Matt, however, a couple of things became much clearer: (1) there is a burgeoning start-up scene here, and (2) a lot of it is in the IW portfolio. (Though by no means all of it!) I don’t think the scene has reached the perpetual-motion-machine stage yet, but that just means you gotta keep pedaling. Just knowing that there are people out there taking real, concrete action to move things the right way is a great thing. Talk is cheap, especially in the blogosphere. Action, baby. That’s where it’s at. I expect exciting things from IW.

DARPA Urban Challenge has Started

The national qualifying event (NQE) for DARPA’s Urban Challenge started today in Victorville, CA. The Urban Challenge is the current incarnation of the DARPA Grand Challenge autonomous car competition. Unlike the previous challenge, which was an off-road race, the current challenge takes place on city streets with traffic. The Austin Robot Technology (ART) team made it to the NQE and is now trying to qualify for the finals on Nov. 3. ART’s AI effort is being directed by UT Austin CS professor (and member of my thesis committee) Peter Stone, and AI development is being headed by my good friend and former labmate Pat Beeson. GO ART!

DARPA will be webcasting the finals, but unfortunately, I can’t find any sign of a scoreboard for the NQE which will be going on until Wednesday. Georgia Tech’s Sting Racing team is running a blog, though.

Great UT Computer Sciences Movie

I just learned about the new promotional video for the UT Austin Computer Sciences department. It gives some great shots of the UT campus and a nice promo of the department and of the study of Computer Science generally. Highlights: the hilarious man-on-the-street interviews asking “what is an algorithm,” and appearances by my fellow UT AI-Lab/Neural-Networks grad students Nate Kohl and Igor Karpov, showing off the robot soccer lab and the NERO video game, respectively. Oh, and Prof. Calvin Lin saying “research is the funnest part of CS.” Heh.

It’s not on YouTube yet, but it should be.

Python Robotics Programming With Pyro

I’ve mentioned before that I wanted to rebuild my research infrastructure once I got done with my dissertation proposal. I’ve started, rebuilding the still-useful parts of my old Common Lisp codebase in Python. I’ve been helped a lot by Pyro, the Python robotics framework. It’s actually quite a nice framework, handling several popular robots and simulators, including Player/Stage, which I’m using.

The Pyro library and engine handle the work of communicating with the robot or simulator, leaving the robot programmer to concentrate on writing a controller (or “Brain,” as it’s unfortunately called in Pyro terminology) as a Python object. The main work of the controller is encapsulated in its .step() method, which gets called periodically (every 0.1 seconds). This is nice in some ways, but it makes things difficult when the controller/agent has to perform complex, hierarchical, extended actions with subgoals: since .step() presumably must return reasonably frequently, the Python program stack can’t be used to track the action hierarchy. Instead, the controller and any extended actions must maintain their state between .step() calls. On the upside, this architecture allows the Pyro engine to handle GUI functions and any other periodic bookkeeping and communication without requiring the user to write calls to special functions into the controller.
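Here is a sketch of what keeping an extended action’s state between calls looks like in practice. This mimics Pyro’s “Brain” pattern but is a standalone illustration; the class name, phases, and command tuples are my own invention, not Pyro’s actual API:

```python
class GoThenTurn:
    """A two-phase behavior run incrementally, one step() call at a time."""
    def __init__(self):
        # Because step() must return quickly, the behavior's "program counter"
        # lives in instance attributes instead of on the Python call stack.
        self.phase = "forward"
        self.ticks = 0

    def step(self):
        self.ticks += 1
        if self.phase == "forward":
            command = ("translate", 0.2)        # drive forward
            if self.ticks >= 10:                # subgoal reached: switch phase
                self.phase, self.ticks = "turn", 0
        elif self.phase == "turn":
            command = ("rotate", 0.5)           # turn in place
            if self.ticks >= 5:
                self.phase, self.ticks = "done", 0
        else:
            command = ("stop", 0.0)
        return command

# The engine would call step() every 0.1 seconds; we simulate 20 ticks.
brain = GoThenTurn()
commands = [brain.step() for _ in range(20)]
```

With deeper hierarchies of subgoals this explicit-state-machine style gets tedious fast, which is exactly the difficulty described above: the natural recursive structure of the task can’t live on the interpreter’s stack.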

The other major downside is that, for Player/Stage anyway, the Pyro developers seem to have assumed that all Player/Stage robots can be modeled as an ActivMedia Pioneer. It’s possible to write modules to support other robot configurations that use Player, but how to do that while keeping all of Pyro’s functionality is still mysterious. Luckily, it’s not that important for my work: my learning agent assumes very little prior knowledge about the nature of the robot it’s driving, so I can just pass it the raw laser scans from a simulated robot without implementing any of the routines that transform the sensor readings into a uniform system of units, etc.

In addition to the robot interface itself, Pyro also has Python wrappers for various useful libraries, including a neural-net library, self-organizing maps, and other fun goodies. No reinforcement learning yet — but I’m writing my RL code in Python, so maybe I’ll contribute it.
