More Python — keyword arguments

Hey, I almost forgot one of my favorite shared features between Common Lisp and Python: keyword arguments! A lot of my research code tends to have lots of parameters, especially in object initialization, but often elsewhere too, like in functions for displaying internal state or writing out data, where I’ll have parameters that usually take default values, but occasionally need to be changed. Languages like C/C++/Java allow optional arguments, but you still have to remember the order, and if you want a non-default value for the 8th optional argument, you have to give values to the 7 arguments before it in the argument list. This makes for crazy unreadable code, like this:

x = Foo(0,NULL,NULL,NULL,5,0,0,0,NULL,NULL,32);

I don’t know how I’d come back to this code in 6 months and understand what’s going on.

Lisp’s and Python’s keyword arguments are a great solution for this problem. They allow you to use the parameter name in the function call. In Lisp you might write the function definition like this (assuming short variable names for brevity here):

(defun foo (&key (a 0) b c d (e 0) (f 0) (g 0) h i (j 0))
  ;; keyword args default to nil, unless another default is
  ;; specified, as with a, e, f, g, and j.
  ...)

Then call it like this:

(setq x (foo :e 5 :j 32))

Here all the unspecified parameters get their defaults. Also the parameters can be specified in any order.

Python’s keyword args work basically the same way, but they’re arguably even more powerful. In Python we’d do this:

def foo(a=0, b=None, c=None, d=None, e=0, f=0, g=0,
        h=None, i=None, j=0):
    ...

and call it like this:

x = foo(e=5, j=32)

Again, the keyword arguments can be given in any order. You gotta admit this is much more readable than the C++/Java way. The interesting thing is that Python implements keyword args with dictionaries, one of its built-in data types, and exposes that to the programmer, so you can pass a dictionary where you would put keyword arguments:

kwargs = {'e': 5, 'j': 32}
x = foo(**kwargs)

You can even mix-n-match:

kwargs = {'j': 32}
x = foo(e=5, **kwargs)

You can also use a dictionary to collect a variable number of keyword arguments in the function definition (sketched briefly below). Anyway, this was a feature of Common Lisp I didn’t think I could give up. Scheme has keyword arguments as added syntax in a library, but they don’t seem to be a generally accepted part of the Scheme programming paradigm the way they are in Python.
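
For the curious, here’s a minimal sketch of that collecting form; the function and argument names are made up purely for illustration. A **-prefixed parameter in the definition gathers any keyword arguments that aren’t listed explicitly into an ordinary dictionary:

def plot_state(label, **options):
    # 'options' is a plain dictionary holding whatever keyword
    # arguments the caller passed that aren't named explicitly
    return label, options

x = plot_state('trial 1', e=5, j=32)
# x == ('trial 1', {'e': 5, 'j': 32})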


Python for AI research, outline

Sunday I mentioned that I think Python is a good replacement for Lisp as an AI language. I’m not sure if I’ll have a chance to actually write up my thoughts on this anytime soon, but here’s a rough outline of points I’d like to cover:

  • I’m planning on moving away from Common Lisp for my own research, because no CL that I’ve encountered fully meets my needs anymore. The Lisp community is fragmented and no Lisp or Scheme implementation has the community support or available libraries that Python has.
  • AI is an empirical science, where the hypotheses to be tested are computational in nature. This requires a language that facilitates quickly and easily implementing and experimenting with computational hypotheses, i.e. rapid prototyping.
  • It also requires a language that facilitates building a well-instrumented and convenient software workbench or test-bed for experimentation on whatever class of hypotheses you’re interested in. What Ben Kuipers calls building a virtual machine and Paul Graham calls programming bottom-up.
  • Python’s agility makes it well suited for these tasks. Some basic language features that contribute (there may be more; a small sketch illustrating a few of them follows this list):
    • the interactive prompt
    • untyped (dynamically typed) variables
    • rich, powerful, built-in collection types (lists and
      dictionaries)
    • C++-like operator overloading
    • A wicked module system
    • A nice clean syntax that is easy and fast to type
  • Lisp’s primary unit of abstraction is the function, which is generally thought of as inherently stateless, while Python’s primary unit of abstraction is the inherently stateful object. This may make Python better suited for many interesting AI tasks. Robots and other agents that have to act in the world must keep internal state — the Markov assumption rarely holds in the real world.
  • We Lisp/Scheme hackers pride ourselves on the ease with which we can whip up any function to suit our needs in Lisp/Scheme. This ease is even the justification for the smallness of the Scheme standard library: if you need it, just write it. But why should I? I want to spend my coding time on my research problem, not coding up support functions that are unrelated to my research. Python has a large and useful standard library and a huge set of available add-on modules.
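
To give a flavor of a few of those features together (the Vec class and the little session below are invented just for illustration, not taken from my research code), here’s the kind of thing you can do in a few lines at the interactive prompt, with operator overloading and the built-in list and dictionary types doing the bookkeeping:

>>> class Vec:
...     def __init__(self, x=0.0, y=0.0):
...         self.x, self.y = x, y
...     def __add__(self, other):             # overload '+'
...         return Vec(self.x + other.x, self.y + other.y)
...     def __repr__(self):
...         return 'Vec(%g, %g)' % (self.x, self.y)
...
>>> waypoints = [Vec(0, 0), Vec(1, 2), Vec(3, 1)]    # a built-in list
>>> total = Vec()
>>> for w in waypoints:
...     total = total + w
...
>>> total                                            # '+' came from __add__
Vec(4, 3)
>>> state = {'pos': total, 'steps': len(waypoints)}  # a built-in dictionary
>>> state['pos']
Vec(4, 3)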

Python continues to amaze

I continue to be impressed by Python. I needed to write a script to process data files generated from my thesis research and produce some plots with gnuplot. I started looking at the gnuplot batch documentation, but it wasn’t at all clear how to process multiple files (given on the command line) with a gnuplot batch file. It was actually faster to download Gnuplot.py, install it, and write the script in Python than it was to figure out whether or how it could be done directly in gnuplot.
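
To give an idea of what such a script looks like, here’s a rough sketch (not my actual script, and I’m writing the Gnuplot.py calls from memory, so treat the exact names as a sketch rather than a reference):

import sys
import Gnuplot

for fname in sys.argv[1:]:          # data files given on the command line
    g = Gnuplot.Gnuplot()
    g.title(fname)
    g.plot(Gnuplot.File(fname))     # let gnuplot pick sensible defaults
    g.hardcopy(fname + '.ps')       # write a PostScript plot for each file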

Jim Bednar and I have decided to use Python as the scripting language for Topographica, after strongly considering PLT Scheme. Both languages have a lot to recommend them, but the thing that really tipped the balance for us was library support. Python seems to have a library for everything, and they all seem to be easy to install and use.

If I can find the time, I’m thinking of writing a long entry on why Python is a good replacement for Lisp as an AI language. Comparisons between the two have been discussed a lot, but that’s not exactly what I’m talking about. Python is not Lisp, but, like Lisp, it has many features that make it good for experimentation with computational hypotheses. It may actually be better suited than Lisp for investigating modern, non-symbolic AI concepts. But I’ll save that discussion for another time.

Forum for AI, Monday April 28

This Monday at 4:00 pm in ACES Auditorium, Robert Hecht-Nielsen will speak at FAI. I haven’t met Robert, but I understand he is an entertaining and provocative speaker. His talk certainly seems provocative: basically a unified theory of cognition based on associative memory between vector quantizers. Right up my alley! The info is below.

Thinking

Robert Hecht-Nielsen, University of California, San Diego

This talk will overview the author’s recently published theory of the cerebral cortex (in: Hecht-Nielsen, R. and McKenna, T. [Eds.] (2003) Computational Models for Neuroscience, Springer-Verlag) and discuss some of its startling implications for neuroscience, AI, and philosophy. Mathematically, the theory views the cortical surface as a collection of about 120,000 notional vector quantizers which are organized and frozen at various points during childhood. These provide a fixed set of terms of reference so that knowledge can be accumulated. Cortical knowledge takes the form of a vast number of pairwise unidirectional links between tokens in these quantizers’ codebooks, with each link having a strength directly related to a particular conditional probability (the antecedent support probability of the source token given the presence of the target token). How this knowledge is used to carry out thinking is explicitly explained and a related local-circuit neuroscience prediction of the theory is described. This talk will show how this weird design, while inherently incapable of directly carrying out any sort of ordinary reasoning (i.e., Aristotelian logic, learned neural network input-output mappings, Bayesian inferencing, fuzzy logic, etc.), is nonetheless able to arrive at excellent conclusions. Further, unlike the situation in the cases of existing AI reasoning schemes, the amount of antecedent support knowledge required by cortex, while large, is not combinatorially explosive and can be feasibly obtained during childhood. While scientific testing of this theory is probably a long way off, the theory can be put into service immediately as a new mathematical foundation for AI.

How to recycle everything

A cool Discover article on thermal depolymerization, a process that can break down any kind of organic waste, from human and animal waste to vinyl siding, into oil, natural gas, minerals and water. Philadelphia is starting to test it on municipal waste, and a turkey plant in Missouri is about to bring online a plant to process 200 tons of turkey offal per day. They claim to be able to produce oil at $15/barrel, and expect the price to go down as the process improves.

It sounds so amazing, I don’t want to get my hopes up. Not only would this reduce or eliminate our dependence on foreign oil, but it could eventually end global warming by returning us to a closed carbon cycle: the carbon dioxide released into the atmosphere from burning this oil was taken out of the atmosphere by plants (recently, not 30 million years ago) that eventually became the organic waste. The ultimate recycling.

Hiding email addresses from spammers

The Center for Democracy and Technology just published this report showing that obscuring your email address in web pages, by encoding its characters as HTML numeric character references, effectively reduces spam. Here is a little CGI script that will make an encoded mailto link that can be cut and pasted into a web page.
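
The trick itself is only a couple of lines of Python. This isn’t the CGI script linked above, just a sketch of the underlying idea, with a made-up address for illustration:

def obscure(text):
    # replace each character with its HTML numeric character reference,
    # e.g. 'a' becomes '&#97;'; browsers render it normally, but naive
    # harvesters scanning the raw HTML won't see a plain address
    return ''.join(['&#%d;' % ord(c) for c in text])

addr = 'someone@example.com'    # made-up address, for illustration only
link = '<a href="mailto:%s">%s</a>' % (obscure(addr), obscure(addr))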

MIT Media Lab Woes

Philip Greenspun has an interesting take on the possible impending demise of the MIT media lab, predicted by Wired Magazine.

I had no idea about the Media Lab’s radical structure for doing research. Professional fundraisers and PR people — it explains so much, particularly the resentment often directed towards them from people in many mainstream CS/AI departments. The feeling among many people I’ve met is that what they do is more flash than substance.

I had a tour of the lab last summer while attending ICDL’02, and remember thinking that it seemed like a really nice place to work. As for the research, it’s tough to tell from demos, since they can often be mostly smoke and mirrors, but I thought Deb Roy’s work seemed very good. On the other hand, in another part of the lab there was this weird video game thing with wolves. It was very flashy and they obviously had put tons of money into equipment for displaying it (e.g. a plasma display on the wall). I played it, and we all stood around scratching our heads wondering what its scientific contribution was. I never did figure it out.

It’s interesting that the lab’s demise is reported/predicted by Wired, since they were a major conduit for Media Lab PR for much of the last 10 years.