Good machine learning and AI lectures

A while back I posted about SciVee, a site for posting videos of science presentations. Today my old neural-networks labmate Tal Tversky commented, pointing me to VideoLectures, a similar site containing academic lectures. Although the site doesn’t seem to be aimed at any one topic, the page of “top” lectures is dominated by talks in statistical machine learning. Skimming the page, I noticed talks by such notable names as Tom Mitchell (chair of CMU’s machine learning department), Usama Fayyad (former VP of Research and “chief data officer” at Yahoo), Michael Jordan (UC Berkeley), and William Cohen (CMU). Lots more, too.

In addition to ML and AI stuff, there are also talks by Tim Berners-Lee and Umberto Eco on the “Top Lectures” page.

Subramanian Ramamoorthy blogging on AI ‘n’stuff

My old UTexas Qualitative Reasoning & Intelligent Robotics labmate Subramanian “Ram” Ramamoorthy is blogging now as a lecturer (i.e. assistant prof) at the University of Edinburgh. He is the second former labmate of mine to end up there. (The first is former NNRG labmate Jim Bednar.)

Posted in AI, Robotics. 1 Comment »

PLASTK version 0.1

In a departure from my recent blog themes, I’d like to get back to AI and machine learning today to announce the first development release of PLASTK: the Python Learning Agent Software Toolkit.

From the PLASTK README.txt file:

PLASTK is a Python class library for building and experimenting with learning agents: software programs (agents) that interact in a closed loop with an environment and that learn. It is aimed especially, but not exclusively, at reinforcement learning agents.
PLASTK has an extensible component architecture for defining the pieces that make up a software agent system, including agents, environments, and the various components needed to build agents, such as function approximators and vector quantizers.
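The closed-loop agent/environment interaction described above can be sketched in plain Python. The class and method names here are illustrative stand-ins, not PLASTK’s actual API:

```python
# A minimal agent/environment closed loop, in the spirit of the README's
# description.  All names here are illustrative, not PLASTK's API.

class CountdownEnv:
    """Toy environment: state counts down to zero; reward 1.0 at the end."""
    def start(self):
        self.state = 5
        return self.state

    def step(self, action):
        self.state -= 1
        done = (self.state == 0)
        reward = 1.0 if done else 0.0
        return self.state, reward, done

class ConstantAgent:
    """Toy agent: always emits the same action and accumulates reward."""
    def __init__(self):
        self.total_reward = 0.0

    def start(self, obs):
        return 0  # choose the first action from the initial observation

    def step(self, obs, reward):
        self.total_reward += reward  # "learn" from the reward
        return 0                     # choose the next action

def run_episode(agent, env, max_steps=100):
    """Drive the agent/environment loop until the episode ends."""
    obs = env.start()
    action = agent.start(obs)
    for _ in range(max_steps):
        obs, reward, done = env.step(action)
        action = agent.step(obs, reward)
        if done:
            break

agent = ConstantAgent()
run_episode(agent, CountdownEnv())
print(agent.total_reward)  # -> 1.0
```

A real learning agent would replace `ConstantAgent.step` with action selection and a value update, but the loop structure stays the same.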

PLASTK started life as an ad-hoc collection of algorithms from the reinforcement learning and self-organization literature that I implemented for my dissertation experiments. While in grad school I managed to clean it up to the point where a couple of other students were able to use it. The current release is the first step in my effort to make it usable outside a close circle of labmates.

PLASTK currently contains implementations of Q-learning and Sarsa agents with tabular state and linear feature representations, self-organizing (Kohonen) maps, growing neural gas, and linear, affine, and locally weighted regression. It also contains some demo environments, including a two-dimensional “gridworld” (shown in the figure) and a pendulum. Included examples show how to set up agents to learn in the gridworld and pendulum environments. PLASTK also includes a simple, component-based GUI, shown in the screenshot on the right, for visualizing agent/environment interaction.
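For readers unfamiliar with the algorithms involved, the core of a tabular Q-learning agent is a one-line value update. This is a generic sketch of the standard algorithm, not PLASTK’s actual code; the function and variable names are my own:

```python
import random
from collections import defaultdict

def make_q_learner(actions, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Build a tabular Q-learner: returns the Q-table and two closures.

    alpha:   learning rate
    gamma:   discount factor
    epsilon: exploration probability
    """
    Q = defaultdict(float)  # maps (state, action) -> estimated value

    def choose(state):
        # epsilon-greedy action selection
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])

    return Q, choose, update
```

Sarsa differs only in the update: instead of the max over next actions, it uses the value of the action actually taken, making it an on-policy method.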

PLASTK’s biggest deficiency right now is that its agent/environment interaction API is the long-deprecated RLI version 5, from Rich Sutton’s RLAI group. One of the first to-do-list items is to update the interface to the new RL-Glue standard.

The PLASTK code is redistributable under GPL v.2 and is available for download. Please feel free to contribute! Patches are welcome.

Al Qaeda Developing Killer Daleks! Run!

In the made-my-day department: Fox News recently ran a story on the possibility of Al Qaeda attacking the west with killer robots, complete with a photo of a Dalek from Doctor Who!

As has been pointed out, the whole thing stems from a speculative statement in a robot-ethics talk by Noel Sharkey of the University of Sheffield.

DARPA Urban Challenge has Started

The national qualifying event (NQE) for DARPA’s Urban Challenge started today in Victorville, CA. The Urban Challenge is the current incarnation of the DARPA Grand Challenge autonomous car competition. Unlike the previous challenge, which was an off-road race, the current challenge takes place on city streets with traffic. The Austin Robot Technology (ART) team made it to the NQE and is now trying to qualify for the finals on Nov. 3. ART’s AI is being directed by UT Austin CS professor (and member of my thesis committee) Peter Stone, and AI development is being headed by my good friend and former labmate Pat Beeson. GO ART!

DARPA will be webcasting the finals, but unfortunately I can’t find any sign of a scoreboard for the NQE, which will be going on until Wednesday. Georgia Tech’s Sting Racing team is running a blog, though.

Great UT Computer Sciences Movie

I just learned about the new promotional video for the UT Austin Computer Sciences department. It features some great shots of the UT campus and is a nice promo for the department and the study of computer science generally. Highlights: the hilarious man-on-the-street interviews asking “what is an algorithm,” and appearances by my fellow UT AI-Lab/Neural-Networks grad students Nate Kohl and Igor Karpov, showing off the robot soccer lab and the NERO video game, respectively. Oh, and Prof. Calvin Lin saying “research is the funnest part of CS.” Heh.

It’s not on YouTube yet, but it should be.

SciVee: YouTube for Science

The San Diego Supercomputing Center and PLoS have created SciVee, a YouTube-like website for scientific presentations. It seems like a potentially great vehicle for disseminating and promoting research. Right now the content is mostly in bio, but I would love to see more CS up there. It would be great, for example, if the major CS conferences videotaped their proceedings — or at least the major talks — and published them on SciVee. Departments could also use SciVee to publish their various invited lecture series, like FAI or the CMU Machine Learning Seminar Series.

SciVee itself still seems to have a few bugs that need to be worked out, but it’s new, and I’m sure they’ll be fixed. One particularly annoying one: the interface lets the producer of a video attach text notes that are synchronized with the video. The problem is that the notes pop up over the video frame, interrupting the flow of the video and obscuring the screen, and they have no obvious close box (though it is possible to close them, if you search hard enough). I’m not sure exactly how this feature is supposed to be used effectively, but every instance of it that I saw was annoying to the point of ruining the video completely. Not only does it cover the screen and break the flow of the video, but it’s impossible to read a box of text and listen to a speaker at the same time.

I hope these kinds of things will be worked out as more people use the site and give feedback. Overall, this seems like a cool way of advertising your research.

Posted in AI, Science, Web. 6 Comments »