Posts

Choosing a Python library

You’re working on a Python project, and you realise the next thing to do is a bit tricky. You don’t want to reinvent the wheel if you don’t have to. You wonder: has someone solved this problem before? The first place to look is the Python Standard Library. One of Python’s great strengths is that it comes with batteries included; there are well-documented, tried and tested libraries to do all sorts of useful things. No luck? Turn to GitHub - it can usually help! Most of the libraries I use are hosted on GitHub. Sometimes you’ll find just one candidate library; sometimes there will be more than one. You’ll need to decide if any fit the bill, and which looks best. As we’ll see, GitHub can tell you a lot about the quality of a project. Here are the things I like to ask about a library I’m considering. I’ve illustrated the checklist using the guizero project as an example, since I use it a lot and it ticks all the boxes. Does it have good documentation? Is the inten...

Time to retire my Raspberry Pi TensorFlow Docker project?

I need your advice! Six years ago I did some experiments using TensorFlow on the Raspberry Pi. It takes hours to compile TensorFlow on the Pi, and when I started, the Pi platform wasn't officially supported. Sam Abrahams found his way through the rather scary compilation process, and I used his wheel to build a Docker image for the Pi that contained TensorFlow and Jupyter. That made it easy for users to experiment by installing Docker and then running the image. I was a bit anxious, as that was my first Docker project, but it proved very popular. Things have changed a lot since then. For a while, the TensorFlow team offered official support for the Raspberry Pi, though that has now stopped. You can still download a wheel, but it's very out-of-date. I recently discovered Leigh Johnson's post on how to install full TensorFlow on the Pi. It's slightly out-of-date, but the instructions on how to compile it yourself probably still work. Most Pi-based AI projects now use Tens...

Timings and Code for Spiking Neural Networks with JAX

I've been encouraged to flesh out my earlier posts about JAX to support 27DaysOfJAX. I've written simulations of a Leaky Integrate and Fire Neuron in *Plowman's* (pure) Python, Python + numpy, and Python + JAX. Here's a plot of a 2000-step simulation for a single neuron:

[Plot for a single neuron]

The speedups from numpy, JAX and the JAX jit compiler are dramatic. Pure Python can simulate a single step for a single neuron in roughly 0.25 µs, so 1,000,000 neurons would take about 0.25 seconds. numpy can simulate a single step for 1,000,000 neurons in 13.7 ms. Python with JAX and JAX's jit compilation can simulate a single step for 1,000,000 neurons in 75 µs. Here's the core code for each version.

    # Pure Python
    def step(v, tr, injected_current):
        spiking = False
        if tr > 0:
            next_v = reset_voltage
            tr = tr - 1
        elif v > threshold:
            next_v = reset_voltage
            tr = int(refactory_period / dt)
            spiking = True
        else...
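The excerpt cuts off before the numpy and JAX versions. As a stopgap, here's a minimal sketch of what a vectorised, jitted JAX step might look like; the constants and the leak dynamics are my assumptions based on the pure Python version above, not the post's actual code.

    # Hypothetical JAX version - one step for a whole array of neurons.
    import jax
    import jax.numpy as jnp

    dt = 0.1                # timestep in ms (assumed)
    threshold = 1.0         # spike threshold (assumed)
    reset_voltage = 0.0     # post-spike reset voltage (assumed)
    refactory_period = 2.0  # refractory time in ms (assumed; spelling follows the post)
    tau = 10.0              # membrane time constant in ms (assumed)

    @jax.jit
    def step(v, tr, injected_current):
        refractory = tr > 0
        spiking = ~refractory & (v > threshold)
        # Leaky integration for neurons that are neither spiking nor refractory
        integrated = v + dt * (injected_current - v) / tau
        next_v = jnp.where(refractory | spiking, reset_voltage, integrated)
        next_tr = jnp.where(spiking, refactory_period / dt, jnp.maximum(tr - 1, 0))
        return next_v, next_tr, spiking

Because every operation acts on whole arrays, the same function handles one neuron or a million, and jax.jit fuses the steps into a single fast kernel.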

Apologies to commenters!

I've just discovered that comments on the blog have been queuing up for moderation without my realising it. I was expecting to be notified when new comments were posted, but that hasn't been happening. I'm now working my way through the backlog. If you've been waiting for a response, I can only apologise.

JAX and APL

Regular readers will remember that I've been exploring JAX. It's an amazing tool for creating high-performance applications that are written in Python but can run on GPUs and TPUs. The documentation mentions the importance of thinking in JAX. You need to change your mindset to get the most out of the language, and it's not always easy to do that.

Learning APL could help

APL is still my most productive environment for exploring complex algorithms. I've been using it for over 50 years. In APL, tensors are first-class objects, and the language handles big data very well indeed. To write good APL you need to learn to think in terms of composing functions that transform whole arrays. That's exactly what you need to do in JAX (there's a small illustration at the end of this excerpt). I've been using JAX to implement models of spiking neural networks, and I've achieved dramatic speed gains using my local GPU. The techniques I used are based on APL patterns I learned decades ago.

Try APL

APL is a wonderful programming...
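As a concrete illustration (mine, not the post's), here's the same tiny computation written first with element-at-a-time loop thinking and then as whole-array expressions, the style both APL and JAX reward:

    import jax.numpy as jnp

    v = jnp.array([0.3, 1.2, 0.7, 1.5])   # membrane voltages
    threshold = 1.0

    # Loop thinking: inspect one element at a time (works, but un-JAX-like)
    spikes_loop = [float(x) > threshold for x in v]

    # Whole-array thinking: one expression transforms the entire array
    spikes = v > threshold        # [False, True, False, True]
    rate = jnp.mean(spikes)       # fraction of neurons spiking: 0.5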

More fun with Spiking Neural Networks and JAX

I'm currently exploring Spiking Neural Networks (SNNs). SNNs try to model the brain more accurately than most of the Artificial Neural Networks used in Deep Learning. There are some SNN implementations available in TensorFlow and PyTorch, but I'm keen to explore them using pure Python. I find that Python code gives me confidence in my understanding. But there's a problem. SNNs need a lot of computing power. Even if I use numpy, large-scale simulations can run slowly.

[Image: spike generation - code below]

So I'm using JAX. JAX code runs in a traditional Python environment. JAX has array processing modules that are closely based on numpy's syntax. It also has a JIT (just-in-time) compiler that lets you transparently deploy your code to GPUs. Jitting imposes some minor restrictions on your code, but it leads to dramatic speed-ups.

JAX and robotics

As I mentioned in an earlier post, you can run JAX on NVIDIA's Jetson family. You get excellent performance on...
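The spike generation code itself is cut off in this excerpt. For a flavour of the approach, here's a minimal sketch of one common technique (Poisson spike generation) written in JAX; the function name, rates, and shapes are my placeholders, not the post's code.

    # Hypothetical Poisson spike generator - illustrative, not the post's code.
    import jax
    import jax.numpy as jnp

    @jax.jit
    def generate_spikes(key, rates, dt=0.001):
        # rates: firing rates in Hz, one per neuron; dt: timestep in seconds.
        p_fire = rates * dt                       # spike probability per step
        u = jax.random.uniform(key, rates.shape)  # one uniform draw per neuron
        return u < p_fire                         # True where a neuron fires

    key = jax.random.PRNGKey(0)
    rates = jnp.full((1_000_000,), 20.0)          # a million neurons at 20 Hz
    spikes = generate_spikes(key, rates)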

Installing Jax on the Jetson Nano

Yesterday I got JAX running on a Jetson Nano. There's been some online interest and I promised to describe the installation. I'll cover the process below, but first I'll briefly explain:

What's JAX, and why do I want it?

tl;dr: The cool kids have moved from TensorFlow to JAX.

If you're interested in AI and Artificial Neural Networks (ANNs) you'll have heard of Google's TensorFlow. TensorFlow with Keras makes it easy to
- build an ANN from standard software components
- train the network
- test it
- and deploy it into production
(there's a minimal sketch of these steps at the end of this excerpt). Even better: TensorFlow can take advantage of a GPU or TPU if one is available, which allows a dramatic speedup of the training process. That's a big deal, because training a complex network from scratch might take weeks of computer time. However, I find it hard to get TensorFlow and its competitors to use novel network architectures. I am currently exploring types of network that I can't implement easily in TensorFlow. I could c...
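The excerpt ends here, but to make the build/train/test list above concrete, here's a minimal Keras sketch of that workflow. The layer sizes and the random placeholder data are mine, not the post's.

    # Minimal Keras build/train/test sketch - placeholder data and sizes.
    import numpy as np
    from tensorflow import keras

    # Build an ANN from standard software components
    model = keras.Sequential([
        keras.Input(shape=(16,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Train, then test, on placeholder data
    x = np.random.rand(1000, 16).astype("float32")
    y = np.random.randint(0, 2, size=(1000,))
    model.fit(x, y, epochs=5, validation_split=0.2)
    loss, accuracy = model.evaluate(x, y)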