Turtle Graphics In Python? Cool…

I just learned that Python has Turtle graphics built in.  Very cool.  How did I not know about this before?  Now I guess I know what Maggie’s first programming language will be. Inspired by this nice post showing how to draw fractal trees, here’s a quick Sierpinski triangle algorithm in Python:

import turtle

def striangle(depth, base):
    turtle.down()
    if depth == 0:
        # base case: draw and fill one small triangle of side `base`
        turtle.begin_fill()
        for i in 0, 1, 2:
            turtle.forward(base)
            turtle.left(120)
        turtle.end_fill()
    else:
        # recursive case: a half-size triangle at each of the three corners,
        # which are base*2**depth apart
        for i in 0, 1, 2:
            striangle(depth - 1, base)
            turtle.up()
            turtle.forward(base * 2 ** depth)
            turtle.left(120)
            turtle.down()

turtle.reset()
striangle(6, 5)

Depth 5 Sierpinski Triangle

Fun!

Guido van Rossum’s New Blog

Neopythonic. Thoughts from the creator of the Python programming language, now working for Google.

PLASTK version 0.1

PLASTK GUI Screenshot

In a departure from my recent blog themes, I’d like to get back to AI and machine learning today to announce the first development release of PLASTK: The Python Learning Agent Software Toolkit.

From the PLASTK README.txt file:

PLASTK is a Python class library for building and experimenting with learning agents, i.e. software programs that interact in a closed loop with an environment (agents) and that learn. Especially, but not limited to, reinforcement learning agents.
[...]
PLASTK has an extensible component architecture for defining the components that make up a software agent system, including agents, and environments, and various components needed to build agents, such as function approximators and vector quantizers.

PLASTK Pendulum Demo

PLASTK started life as an ad-hoc collection of algorithms from the reinforcement learning and self-organization literature that I implemented for my dissertation experiments. While in grad school I managed to clean it up to the point where a couple of other students were able to use it. The current release is the first step in my effort to make it usable outside a close circle of labmates.

PLASTK currently contains implementations of Q-learning and Sarsa agents with tabular state and linear feature representations, self-organizing (Kohonen) maps, growing neural gas, and linear, affine, and locally weighted regression. It also contains some demo environments, including a two-dimensional “gridworld” (shown in the figure) and a pendulum. Included examples show how to set up agents to learn in the gridworld and pendulum environments. PLASTK also includes a simple, component-based GUI, shown in the screenshot on the right, for visualizing agent/environment interaction.
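For flavor, here is the tabular Q-learning backup that such agents perform at their core, as a few lines of standalone Python (the function name and signature are my own illustration, not PLASTK’s API):

```python
def q_learning_step(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning backup:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    Q is a dict keyed by (state, action); unseen entries default to 0."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]
```

Sarsa differs only in backing up the value of the action actually taken in s_next instead of the max.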

PLASTK’s biggest deficit right now is that the agent/environment interaction API is the long-deprecated RLI version 5 from Rich Sutton’s RLAI group. One of the first to-do-list items is to update the interface to the new RL-Glue standard.

The PLASTK code is redistributable under GPL v.2, and available for download on SourceForge.net. Please feel free to contribute! Patches are welcome.

Making C++ Dynamic with SWIG

Sometimes it seems like I’ve spent half of my programming life hooking up dynamic scripting front-ends to C/C++ back-end code. I did it when I worked on PsyScope and E-Prime, and for my own research code in Guile, PLT Scheme, and now Python. It’s amazing how similar and regular the process is in all the different languages, and how tedious it is to manually register each C/C++ entity that has to be visible to the scripting language. Luckily, computers are great at automating regular, tedious procedures: enter SWIG, the Simplified Wrapper and Interface Generator. SWIG works with many languages including Python, Perl, Guile, PLT Scheme, Ruby, PHP, and even Java.

SWIG so simplifies the task that I never write C++ programs with main() functions anymore. I just write libraries that get wrapped by SWIG and loaded into Python as modules, allowing me to use all of Python as my user interface. It also makes it simple to integrate someone else’s C/C++ code into my existing Python codebase. I used it to make a version of the Stage robot simulator that can be loaded into Python as a module, instead of running as a separate process.

My latest SWIG project was much simpler. I wanted to use Rich Sutton’s Tile Coding software for converting continuous features (like LIDAR returns) into discrete boolean features. Unlike the work with Stage, this only involved wrapping an interface to one C function and one constant, and it was basically done in 30 minutes (including downloading the library code from Rich’s site, etc). Still, I felt like I learned something useful about SWIG.

The function I wrapped, GetTiles(), has lots of parameters, four of which are an input array with its length and an output array (to be filled) with its length. Naturally, in Python, you’d like the function to simply take the input array, sans length, and return the output array as a return value rather than as a parameter. I was a little worried when I started that it would be hard to figure out how to tell SWIG to generate the appropriate C code to convert all the arguments properly. However, when SWIG generates a wrapper, it creates code on both sides of the Python/C boundary. It creates “wrapper” functions in C that convert the arguments to and from the Python interpreter’s internal format, and SWIG provides extremely flexible “typemaps” for specifying the details of these conversions, but it’s not always obvious how to write a typemap to do some specific thing. Luckily, for Python, SWIG also generates “proxy” code in Python that further wraps (some of) the calls to the C/C++ wrappers. Furthermore, it allows the programmer to override the Python proxy for a wrapped function, letting me write the argument conversion for GetTiles() in Python, where it’s easy, rather than in C++, where I’m still not sure exactly how I would do it. By the same process, I can add auxiliary functions directly to the new wrapped module, written entirely in Python and specified in my interface file.
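The proxy pattern itself is easy to sketch in pure Python. Here the C-level wrapper is simulated by a Python stand-in (_raw_get_tiles is invented for illustration, and the real GetTiles() has a different signature and does actual tile coding), but the shape of the problem and of the fix are the same:

```python
# Stand-in for the SWIG-generated low-level wrapper: C-style calling
# convention with explicit lengths and a caller-supplied output buffer.
# (_raw_get_tiles is invented for illustration only.)
def _raw_get_tiles(out_buf, n_out, in_buf, n_in):
    for i in range(n_out):
        out_buf[i] = int(sum(in_buf[:n_in])) + i  # fake computation

# Hand-written Python proxy: hides the lengths and the output buffer,
# presenting the list-in/list-out interface you'd want in Python.
def get_tiles(inputs, num_tiles):
    out = [0] * num_tiles
    _raw_get_tiles(out, num_tiles, inputs, len(inputs))
    return out
```

In real SWIG usage the proxy override lives in the generated module’s Python layer, so callers never see the C-style signature at all.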

With all the great dynamic languages out there, and tools like SWIG for grafting dynamic languages onto C++, I don’t understand why people would want to write entire programs in C++ (or, god forbid, C) anymore, unless they have specific resource constraints that require it, as with OS device drivers and embedded programs.

PyStage preview: a single-process robot simulator

Here’s a preview of something I’ve been working on, that’s almost finished. It’s a version of the Stage robot simulator that can be loaded into Python as a module, rather than running as a server in a separate process. My motivation for this is to be able to run simulations in batch mode as fast as possible. The standard Stage server is designed as a backend to Player, the generic robot driver and interface system that uses a networked client/server communications module. Essentially, Stage runs as a set of Player drivers, so that the fact that it’s a simulator is abstracted away, making the simulator virtually indistinguishable from a real robot. This is a great model for those who are using the simulator to test and debug code that will eventually be ported to a real robot. The downside is for those of us who want to use a simulator to speed up time, so we can run simulations that would be prohibitively long on physical robots (days or weeks).
Stage does allow speedup using the -f and -u flags, but since it’s running as a server communicating through a socket, it’s difficult to maintain synchronization if it’s sped up more than 3 or 4x. In my new library, which I’m tentatively calling PyStage (like PyPlayer), the Python program running the “client” also controls exactly when the world’s Update() method gets called, and so it can take as long or as short as it wants between world updates.

This single-process, synchronized execution model provides some added benefits besides just running fast. First, running in a single process without network communication allows simulation runs to be submitted to distributed batch queueing systems like Condor, easily allowing many simulations to run in parallel. While Condor does allow some limited network communication in a job, keeping a TCP connection open to a simulator process for a whole job isn’t really feasible. Second, the synchronization should make it possible to use Stage with simulations of cognitive or perceptual processes that are too computationally intensive to run in real time on current hardware (e.g. large-scale neural simulations). In this case, instead of speeding up time in the simulator, you could slow down time, allowing a cognitive process that might take many seconds to simulate to use only a fraction of a second of simulator time. Again, this is technically feasible with client/server Stage, but for it to work you need to know in advance how long to make each cycle, and if there’s variability in computation time from one cycle to another, you need to add extra “padding” to the cycle time that will often go wasted.

I now have my initial prototype working and have managed to run a few simulations, getting about a 3-4x speedup in raw cycle time (100-120 Hz vs. 33 Hz) over my client-server runs. Moreover, because of some other changes in the simulator, I’m able to do computation on every cycle (as opposed to about every third cycle before), for an effective speedup of 9-12x. And where the top speed used to be communications-limited, it’s now processor-limited, so I should be able to get even more speedup on faster machines. Once the code has been cleaned up and I learn to use the GNU build system, I should be able to release a 0.1 version (or should it be 1.0?). Or, if the Stage authors are willing, I’d be happy to fold the whole thing into Stage proper; right now I maintain my own slightly modified version of the Stage source for use with PyStage. Another possibility is to allow clients to be written in other languages like Java or Scheme. Since I used SWIG to generate the Python wrappers, this shouldn’t be too difficult, though someone else would have to do the non-Python-specific work.


Footnote: This whole experience has been a great reinforcement of the value of Free Software. Because the source code of the program was free, I was able to modify it to serve a need that the authors hadn’t anticipated, and probably would have considered low- or no-priority, and the whole process only took me a couple of weeks. I can’t imagine this would have been possible if I’d been using a closed, proprietary system.


Meet the New Blog, Same As the Old Blog

Here it is. I’ve managed to import all my postings from my Blosxom blog, although I had to manually set categories. I’m not so sure categories are that useful, though. The big problem is that I can’t figure out how to present code; it comes out double-spaced. Weird. Anyway, I’ll be closing down the old blog soon.

UPDATE: actually, I didn’t have to manually set the categories. The TypePad import format allows category specification. I just used the most-specific “leaf” category of the category tree, e.g. /science/ai/python -> “Python”.

Here is the Python script, export.py, for converting a Blosxom entry into an importable TypePad/Movable Type entry (sorry about the spacing; I blame TypePad):

Writing Robot Controllers with Python Generators

A typical robot controller that produces some behavior has at its core a loop that repeatedly sends control signals to the robot, often with some code before and after the loop for set-up and clean-up. This is how, e.g., the example code in Sutton and Barto’s Reinforcement Learning is written.

The problem with this structure is that often the control algorithm doesn’t have top-level control. For example in Pyro, the Pyro engine handles the top-level loop, and the main routine of the controller is called periodically. This kind of time-sliced architecture simplifies some aspects of writing controllers, but breaks the nice single-program structure above. Now the set-up and clean-up code need to be broken out into separate functions or methods, and the body of the loop encapsulated in a step() or next() function. Things get even more complicated if the controller consists of a sequence of sub-behaviors, requiring a sequence of control loops. Then the step() function must embody a finite state machine, and branch to the correct behavior for the current state on each call. The more complicated the behavior, the more of a pain it becomes to program this way.
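For concreteness, here is roughly what a two-phase behavior looks like when flattened into the step()-per-tick mold (a schematic sketch; the state names, sensor argument, and action strings are all invented):

```python
class PaceController:
    """Two sub-behaviors (FOLLOW, TURN) flattened into an explicit state machine."""

    def __init__(self, n_laps):
        self.n_laps = n_laps
        self.laps_done = 0
        self.state = "FOLLOW"

    def step(self, obstacle_ahead):
        # Called once per time slice by the engine; all loop state lives in self.
        if self.state == "FOLLOW":
            if obstacle_ahead:
                self.state = "TURN"
                return "turn_left"
            return "go_straight"
        elif self.state == "TURN":
            if obstacle_ahead:
                return "turn_left"
            self.state = "FOLLOW"
            self.laps_done += 1
            return "go_straight"
```

Even with only two states, the loop structure has dissolved into bookkeeping; every added sub-behavior means more state variables and more branches.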

For my research I need extended, compound actions that are treated as a single action. My solution? Write the controllers as Python generators.

Generators are a new standard language feature in Python 2.3, though they’re available in Python 2.2 using

from __future__ import generators

Generators are syntactically the same as functions, but use the yield statement to return values. When the function is called, it returns an iterator that can be looped over; each iteration resumes execution just after the previous yield. Here’s an example generator that produces the sequence of integers from zero to n, then back to zero again:

def updown(n):
    i = 0
    # first ascending order, 0 up to n-1
    while i < n:
        yield i
        i += 1
    # then descending, n back down to 0
    while i >= 0:
        yield i
        i -= 1

Running it looks like this:

>>> for x in updown(3): print x,
...
0 1 2 3 2 1 0

Using generators, robot controllers in time-sliced systems can be written without having to build in a finite-state machine, or even separate set-up/clean-up methods. The main trick is to yield each time the controller performs a primitive control action. In my implementation, the controller actually yields the action, either as a function to be called or as an index to an action in a table, and a generic higher-level step function iterating over the controller receives that action and acts upon it.

Here is an example of a controller that follows a hallway centerline until it reaches an obstacle, at which point it turns around and follows the hallway in the other direction. It does this N times. The control dynamics are deliberately oversimplified, and the code is totally untested, but it illustrates the idea:

def pace(N):
    for i in range(N):
        # wall following phase
        while not obstacle_in_front():
            if right_distance() > left_distance():
                yield turn_right
            elif right_distance() < left_distance():
                yield turn_left
            else:
                yield go_straight
        # turning phase
        while obstacle_in_front():
            yield turn_left
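The consumer side of this scheme is a generic step function that receives each yielded action and executes it. A toy sketch (the controller and primitive actions here are invented stand-ins, and a real engine would run one iteration per time slice rather than draining the generator in a loop):

```python
def drive(controller):
    """Iterate over a generator-based controller, executing each yielded action."""
    executed = []
    for action in controller:
        action()                         # perform the primitive control action
        executed.append(action.__name__) # record what ran, for illustration
    return executed

# A toy controller: yields primitive actions (functions), one per control cycle.
def toy_controller():
    def forward(): pass
    def stop(): pass
    yield forward
    yield forward
    yield stop
```

Note that toy_controller never sees the driving loop; its own control flow, including any set-up and clean-up code, reads top to bottom like an ordinary program.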

My implementation supports hierarchical action, so, for example, turn_right could itself be a generator function that performs another extended action.

The top-level .step() method of the Pyro Brain contains a controller stack. Each step calls .next() on the iterator at the top of the stack. If that call yields another generator function, the resulting iterator is pushed onto the stack and called. When the top item on the stack throws a StopIteration exception, it is popped off, and iteration resumes with the iterator below.
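That dispatch loop can be sketched as follows (a simplification of my actual implementation: compound actions are yielded here as already-created generator objects rather than generator functions, and I write next(gen) where the Python of the day used gen.next()):

```python
import types

def step(stack):
    """One time slice: advance the controller on top of the stack,
    descending into yielded sub-generators and popping exhausted ones."""
    while stack:
        try:
            result = next(stack[-1])
        except StopIteration:
            stack.pop()                # this controller finished; resume its caller
            continue
        if isinstance(result, types.GeneratorType):
            stack.append(result)       # a compound action: descend into it
            continue
        return result                  # a primitive action, handed to the robot
    return None                        # the whole behavior has finished
```

Each call returns exactly one primitive action (or None when everything is done), however deeply nested the compound action that produced it.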


Java is the SUV of programming languages

So says Philip Greenspun.

… But the programmers and managers using Java will feel good about themselves because they are using a tool that, in theory, has a lot of power for handling problems of tremendous complexity. Just like the suburbanite who drives his SUV to the 7-11 on a paved road but feels good because in theory he could climb a 45-degree dirt slope. If a programmer is attacking a truly difficult problem he or she will generally have to use a language with systems programming and dynamic type extension capability, such as Lisp. This corresponds to the situation in which my friend, the proud owner of an original-style Hummer, got stuck in the sand on his first off-road excursion; an SUV can’t handle a true off-road adventure for which a tracked vehicle is required.

Gotta love it when someone can bash SUVs and Java in the same article. ;-) Greenspun mentions Perl and PHP in addition to Lisp as good alternatives to Java. Python also fits, of course.


Python Robotics Programming With Pyro

I’ve mentioned before that I wanted to rebuild my research infrastructure once I got done with my dissertation proposal. I’ve started by rebuilding the still-useful parts of my old Common Lisp codebase in Python. I’ve been helped a lot by Pyro, the Python robotics framework. It’s actually quite a nice framework, handling several popular robots and simulators, including Player/Stage, which I’m using.

The Pyro library and engine handle the work of communicating with the robot or simulator, leaving the robot programmer to concentrate on writing his controller (or “Brain,” as it’s unfortunately called in Pyro terminology) as a Python object. The main work of the controller is encapsulated in its .step() method, which gets called periodically (every 0.1 seconds). This is nice in some ways, but it makes things difficult when the controller has to perform complex, hierarchical, extended actions with subgoals: since the .step() method presumably must return reasonably frequently, it’s impossible to use the Python call stack to track the action hierarchy. Instead, the controller and any extended actions must maintain their state between .step() calls. On the upside, this architecture allows the Pyro engine to handle GUI functions and any other periodic bookkeeping and communication without requiring the user to write calls to special functions into the controller.

The other major downside is that, for Player/Stage, anyway, the Pyro developers seem to have assumed that all Player/Stage robots can be modeled as an ActivMedia Pioneer. It’s possible to write modules to support other robot configurations that use Player, but how to do this and get all the Pyro functionality is still mysterious. Luckily, for my work, it’s not that important, since my learning agent assumes very little prior knowledge about the nature of the robot it’s driving, so I can just pass it the raw laser scans from a simulated robot, without having implemented any of the routines that transform the sensors into a uniform system of units, etc.

In addition to just a robot interface, Pyro also has Python wrappers for various useful libraries including a neural net library, Self-Organizing Maps, and other fun goodies. No reinforcement learning, yet. But I’m writing my RL code in Python, so maybe I’ll contribute it.


Hackers and Painters

Paul Graham, one of my favorite loquacious Lisp hackers, has written a great essay comparing software design and painting. His thesis is that, as an art of making things, software design is really more like painting or architecture than engineering or science. One of the many implications of this view that he explores is the choice of programming languages for software design:

A programming language is for thinking of programs, not for expressing programs you’ve already thought of. It should be a pencil, not a pen. Static typing would be a fine idea if people actually did write programs the way they taught me to in college. But that’s not how any of the hackers I know write programs. We need a language that lets us scribble and smudge and smear, not a language where you have to sit with a teacup of types balanced on your knee and make polite conversation with a strict old aunt of a compiler.

Lisp, of course, is like this, and so is Python.

The essay covers many other facets of software design, and it resonated strongly with me. Check it out.
