
Friday, 6 December 2019

Pimoroni Explorer Hat Tricks

Yesterday I talked about my plans for the Pimoroni Explorer HAT.

I've started the GitHub repository for the code from Explorer Hat Tricks. That's the ebook I'm writing for users of the Explorer HAT, HAT Pro and pHAT.

At present there are just a few examples and a single project, but I only started yesterday :)

I'll post here and tweet (@rareblog) as more code becomes available.

Leanpub 


I'm developing the ebook on Leanpub, a publishing platform.

Leanpub is good for authors and great for readers.

Readers like it because they get
  • a chance to see a sample
  • free lifetime updates for every book in their library, paid-for or free
  • a 45-day money-back guarantee, and
  • DRM-free content in pdf, mobi and epub formats.

 

Try the code out


You'll need
  1. a Raspberry Pi with 40-pin header (zero, 2, 3 or 4) running Raspbian Buster
  2. a Pimoroni Explorer HAT Pro. Most of the code will also work on the original HAT or pHAT, but some will need the Pro features.
  3. some LEDs, resistors and jumper wires like the ones in the Pimoroni Explorer HAT Parts Kit.
Start with the Traffic Lights project:
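To give you a flavour of what's involved, here's a minimal traffic-light sketch of my own. It assumes the standard explorerhat Python library, with red, amber and green LEDs (plus resistors) wired to outputs one, two and three - the wiring and code in the ebook may differ:

import time
import explorerhat

red = explorerhat.output.one
amber = explorerhat.output.two
green = explorerhat.output.three

while True:
    # red -> red+amber -> green -> amber -> back to red
    red.on()
    time.sleep(2)
    amber.on()
    time.sleep(1)
    red.off()
    amber.off()
    green.on()
    time.sleep(2)
    green.off()
    amber.on()
    time.sleep(1)
    amber.off()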



The code is on GitHub. Please raise an issue if you see a problem or have a suggestion for a project that uses the Explorer HAT!

Thursday, 5 December 2019

Raspberry Pi fun with the Pimoroni Explorer Hat

Pimoroni Explorer Hat Pro
I love the pirate crew at Pimoroni more than ever!

Last week Pimoroni ran a series of competitions on Twitter. The prizes were gift tokens, and I was lucky enough to win one.

My Pirate booty included an Explorer Hat Pro and a parts kit. I've been playing with them ever since.

The Hat lets you prototype Raspberry Pi projects in minutes.

You don't need to do any soldering, and the Python library is quick to install, well written, well commented and really easy to use.

One good turn deserves another


I suspect that beginners would welcome more 'how-to' guides, and I'm putting some together.

I plan to publish open source code on GitHub, and to publish detailed instructions in a low-cost e-book.

If you've got suggestions or ideas for content that you'd be happy for me to use,  let me know in a comment, or tweet your ideas to @rareblog on Twitter.



Wednesday, 7 August 2019

Docker build for TensorFlow 1.14 + Jupyter for Raspbian Buster on Raspberry Pi 4B

I've updated an old Docker build to create a Docker image for the Raspberry Pi running Raspbian Buster. It contains Katsuya Hyodo's TensorFlow wheel, which has TensorFlow Lite enabled.





The image contains Jupyter, so you can connect to the running image from anywhere on your network and run TensorFlow notebooks on the Pi.

This is highly experimental - don't use it for anything important :)

Once I've done some tidying up (and a lot more testing!) I'll put the image up on Docker Hub.

To build/run it you need the Docker nightly build. I installed it by invoking

curl -fsSL get.docker.com | CHANNEL=nightly sh
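Once the image is built, running it is the usual Docker-plus-Jupyter pattern - something like the line below, where the image name is just a placeholder for whatever you tag your build with:

docker run -it --rm -p 8888:8888 pi-tf-jupyter

That publishes Jupyter's default port 8888, so you can reach the notebook server from another machine on your network.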
If you find any problems please raise an issue on GitHub.
 

Sunday, 21 July 2019

Benchmarking process for TF-TRT, and a workaround for the Coral USB Accelerator

A couple of days ago I published some benchmarking results running a TF-TRT model on the Pi and Jetson Nano. I said I'd write up the benchmarking process. You'll find the details below. The code I used is on GitHub.

I've also managed to get a Coral USB Accelerator running with a Raspberry Pi 4. I encountered a minor problem, and I have explained my simple but very hacky workaround at the end of the post.

TensorFlow and TF-TRT benchmarks

Setup


The process was based on this excellent article, written by Chengwei Zhang.

On my workstation


I started by following Chengwei Zhang's recipe. I trained the model on my workstation and then copied trt_graph.pb to the Pi 4.

On the Raspberry Pi 4


I used a virtual environment created with pipenv, and installed jupyter and pillow.

I downloaded and installed this unofficial wheel.
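For anyone reproducing the setup, it amounted to something like this (the wheel filename is illustrative - use whichever build you downloaded):

pipenv install jupyter pillow
pipenv install ./tensorflow-1.13.1-cp37-none-linux_armv7l.whl
pipenv shell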

I tried to run step2.ipynb but encountered an import error. This turned out to be an old TensorFlow bug resurfacing.

The maintainer of the wheel will fix the problem when time permits, but I used a simple workaround.

I used cd `pipenv --venv` to go to the location of the virtual environment, and then ran cd lib/python3.7/site-packages/tensorflow/contrib/ to move to the location of the offending file __init__.py

The problem lines are

if os.name != "nt" and platform.machine() != "s390x":
     from tensorflow.contrib import cloud
These try to import cloud from tensorflow.contrib, which isn't there and fortunately isn't needed :)

I replaced the second line with pass using

sed -i '/from tensorflow.contrib import cloud/s/^/ pass # /' __init__.py

and captured the timings.

Later, I ran raw-mobile-netv2.ipynb to see how long it took to run the training session, and to save the model and frozen graph on the Pi.

On the Jetson Nano


I used the Nano that I had configured for my series on Getting Started with the Jetson Nano; it had NVIDIA's TensorFlow, pillow and jupyter lab installed.

I found that I could not load the saved/imported trt_graph.pb file on the Nano.

Since running the original training stage on the Pi did not take as long as I'd expected, I ran step1.ipynb on the Nano and used the locally created trt_graph.pb file which loaded OK.

Then I ran step2.ipynb and captured the timings which I published.

Using the Coral USB Accelerator with the Raspberry Pi 4


The Coral USB Accelerator comes with very clear installation instructions, but these do not currently work on the Pi 4.

A quick check of the install script revealed a hard-coded check for the Pi 3B or 3B+. Since I don't normally use the Pi 3B, I changed that entry to accept a Pi 4.

When I ran the modified script I found a couple of further issues.

The wheel installed in the last step of the script expects you to be using python3.5 rather than the Raspbian Buster default of python3.7.

As a result, I had to (cough)

cd /usr/local/lib/python3.7/dist-packages/edgetpu/swig/
cp _edgetpu_cpp_wrapper.cpython-35m-arm-linux-gnueabihf.so _edgetpu_cpp_wrapper.cpython-37m-arm-linux-gnueabihf.so



and change all references in the demo paths from python3.5 to python3.7

With these changes, the USB accelerator works very well. There are plenty of demonstrations provided. In the image here it has correctly identified two faces in a photo from a workshop I ran at Pi Towers a couple of years ago.

It's an impressive piece of hardware. I am particularly interested by the imprinting technique which allows you to add new image recognition capability without retraining the whole of a compiled model.

Imprinting is a specialised form of transfer learning. It was introduced in this paper, and it appears to have a lot of potential. Watch this space!

Friday, 19 July 2019

Benchmarking TF-TRT on the Raspberry Pi and Jetson Nano


Trying to choose between the Pi 4B and the Jetson Nano for a Deep Learning project?

I recently posted some results from benchmarks I ran training and running TensorFlow networks on the Raspberry Pi 4 and Jetson Nano. They generated a lot of interest, but some readers questioned their relevance. They weren't interested in training networks on edge devices.

Most people expect to train on higher-power hardware and then deploy the trained networks on the Pi and Nano. If they use TensorFlow for training, they have several choices for deployment:

  1. Standard TensorFlow
  2. TensorFlow Lite
  3. TF-TRT (a TensorFlow wrapper around NVIDIA's TensorRT, or TRT)
  4. Raw TensorRT
In this post I'll focus on timing Standard TensorFlow and TF-TRT. In a later post I plan to cover TensorFlow Lite on the Pi with and without accelerators like the Coral EDGE TPU coprocessor and the Intel Compute Stick.

I've run a number of benchmarks, and the results have been much as I expected.
I did encounter one surprising result, though, which I'll talk about at the end of the post. It's a pitfall that could easily have invalidated the figures I'm about to share.

Benchmarking MobileNet V2


The results I'll report are based on running MobileNetV2 pre-trained with ImageNet data. I've adapted the code from the excellent DLology blog which covers deployment to the Nano. I've also deployed the model on the Pi using a hacked community build of TensorFlow, obtained from here.  That has a wheel containing TF-TRT for python3.7, which is the default for Raspbian Buster.

(The wheel seems to have a minor bug. This weekend I'll set up a GitHub repo with the sed script I used to work around that, along with the notebooks I used to run the tests.)

The Pi 4B has 4GB of RAM and was running a freshly updated version of Raspbian Buster.

So here are the results you've been waiting for, expressed in seconds per image and frames per second (FPS).

Platform        Software    Seconds/image   FPS
Raspberry Pi    TF          0.226            4.42
Raspberry Pi    TF-TRT      0.20             5.13
Jetson Nano     TF          0.082           12.2
Jetson Nano     TF-TRT      0.04            25.63

According to these figures, the Nano is three to five times faster than the Pi, and TF-TRT is about twice as fast as raw TensorFlow on the Nano.

TF-TRT is only slightly faster than raw TensorFlow on the Pi. I'm not sure why this should be, but the timings are pretty consistent. At some stage I'll run some other models, but those will have to do for now.

A benchmarking pitfall


jtop
I mentioned one pitfall. When I re-ran the tests for this blog post I got much slower performance for the Nano using TF-TRT - around 5 fps.

Fortunately Raffaello Bonghi's excellent jtop package saved the day.  jtop is an enhanced version of top for Jetsons which shows real-time GPU and memory usage.

Looking at its output,  I realised that an earlier session on the Nano was still taking up memory. Once I'd closed the session down, a re-run gave me the 25 fps which I and others had seen before.

I continue to be impressed by the Pi 4 and the Nano.

While the Nano's GPU offers significantly faster performance on Deep Learning tasks, it costs almost twice as much. Both represent excellent value for money, and your choice will depend on the requirements for your project.

Sunday, 14 July 2019

Training ANNs on the Raspberry Pi 4 and Jetson Nano


There have been several benchmarks published comparing performance of the Raspberry Pi and Jetson Nano. They suggest there is little to choose between them when running Deep Learning tasks.

I'm sure the results have been accurately reported, but I found them surprising.

Don't get me wrong. I love the Pi 4, and am very happy with the two I've been using. The Pi 4 is significantly faster than its predecessors, but...

The Jetson Nano has a powerful GPU that's optimised for many of the operations used by Artificial Neural Networks (ANNs).

I'd expect the Nano to significantly outperform the Pi running ANNs.


How can this be?


I think I've identified reasons for the surprising results.

At least one benchmark appears to have tested the Nano in 5W power mode. I'm not 100% certain, as the author has not responded to several enquiries, but the article talks about the difficulty in finding a 4A USB supply. That suggests that the author is not entirely familiar with the Nano and its documentation.

The docs make it clear that the Nano will only run at full speed if 10 W mode is selected, and 10 W mode requires a PSU with a barrel jack rather than a USB connector. You can see the barrel jack in the picture of the Nano above.

Another common factor is that most of the benchmarks test inference with a pre-trained network rather than training speed. While many applications will use pre-trained nets, training speed still matters; some applications will need training from scratch, and others will require transfer learning.

I've done a couple of quick tests of relative speed with a 4GB Pi 4 and a Nano running training tasks with the MNIST digits and fashion datasets.

The Nano was in 10 W max-power mode. The Pi was cooled using the Pimoroni fan shim, and it did not get throttled by overheating.

Training times  using MNIST digits 


The first network uses MNIST digits data and looks like this:
model = tf.keras.models.Sequential([
  tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),
  tf.keras.layers.MaxPooling2D(2, 2),
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(10, activation='softmax')
])
While training, that takes about 50 seconds per epoch on the Nano, and about 140 seconds per epoch on the Pi 4.
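For anyone who wants to reproduce the timings, the training loop is just the standard Keras pattern - a sketch along these lines, using the model defined above with the MNIST digits data from tf.keras.datasets (the number of epochs is illustrative):

import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1) / 255.0  # add a channel axis and scale to [0, 1]

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)  # Keras reports the time taken for each epoch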

MNIST Fashion training times with a second, larger CNN layer


The second network (borrowed from one of the deeplearning.ai course notebooks) uses the MNIST fashion dataset and looks like this:

model = tf.keras.models.Sequential([
  tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1)),
  tf.keras.layers.MaxPooling2D(2, 2),
  tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
  tf.keras.layers.MaxPooling2D(2,2),
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(10, activation='softmax')
])
While training, that takes about 59 seconds per epoch on the Nano and about 336 seconds per epoch on the Pi.

Conclusion


These figures suggest that the Nano is between 2.8 and 5.7 times faster than the Pi 4 when training.

That leaves me curious about relative performance when running pre-trained networks. I'll report when I have some more data.

Friday, 28 June 2019

Sambot - MeArm, servos, the Babelboard and Jetson Nano

Jud McCranie CC BY 3.0 via Wikimedia Commons
Way back in 1974 I took Tom Westerdale's Adaptive Systems course as part of my Masters degree. Tom's thesis advisor was John Holland, and a lot of the course covered genetic algorithms. Before that it covered early machine learning applications like Samuel's Checkers Player.

I've wanted to revisit those early AI applications for a while, and I recently decided to put a new spin on an old idea.

I want to build a robot that plays Checkers (that's draughts to us Brits) using a real board, a robot arm, and a Jetson Nano with a Raspberry Pi camera.

The game play could be done using a variant of Samuel's approach, a Neural network, or a combination of the two. If AlphaGo can master Go playing against itself it shouldn't be too hard for a pair of Machine Learning programs to master draughts!

First, though, I need to build a controllable arm that can pick up and move the pieces and a computer vision system that can recognise the location of the pieces on the board.

I'm up to my eyes in projects and work at the moment, but I have justified making a start on the robot arm because it ties in nicely with a current priority. It's a chance to explore and explain how to do Physical Computing with the Jetson Nano.

Jetson Nano
One of the many things I love about the Nano is its use of the Raspberry Pi header layout. The Nano's heatsink (needed to cool its GPU brain) means that Pi hats cannot sit over the Nano, but the Pi header has been rotated through 180 degrees, allowing you to use Raspberry Pi hats unmodified.

Recently I've been making use of the Grove system from Seeed studio, and I've built a series of low-cost boards to let me use the Grove components with the Pi, the Nano and Adafruit Feathers. I call them babelboards.

They are the hardware hacker's equivalent of Douglas Adam's babelfish. They allow lots of different hardware components to talk to each other using the I2C protocol.

Nano with minimal babelboard
The simplest Babelboard is a small piece of stripboard with a couple of connectors. On the right you can see it plugged into a Nano, with a Grove 4-wire cable plugged in, driving a Grove 16-channel servo controller. That's overkill, as I think my Checkers player will only need 4 servos, but I happen to have one to hand.

I've had a MeArm robotic arm waiting for me to assemble it for ages, and today I made a start. I'll talk more about that in a later post, but today I will focus on getting the Nano to control the MeArm's four servos.

The Grove servo controller uses the PCA9685. That's the same chip that Adafruit use in their servo controllers, and they have a Python library to control it.

Better still, Adafruit have ported their Blinka library to the Pi and the Nano so you can use their family of CircuitPython libraries on the Pi and the Nano as well as the Adafruit CircuitPython boards. Awesome!

I installed the software following the instructions on the Adafruit website, by invoking


pip3 install adafruit-circuitpython-servokit --user
Since some of the Adafruit code uses gpio as well as I2C, I made sure I could access them all from my user account, and then rebooted to make the changes take effect:

sudo groupadd -f -r gpio
sudo usermod -a -G gpio $USER
sudo usermod -a -G i2c $USER
sudo cp /opt/nvidia/jetson-gpio/etc/99-gpio.rules /etc/udev/rules.d/
sudo reboot


Next I connected a servo to the servo controller, and ran some sample code from the Adafruit website:

import time
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)
kit.servo[0].angle = 180
time.sleep(1)
kit.servo[0].angle = 0

And lo - the servo moved!

In the next post I'll give more details about the Babelboards and how to build them.

If you want to keep an eye on what I'm up to, I'm @rareblog on Twitter.

Monday, 24 June 2019

An excellent course for Jetson Nano owners

Jetson Nano
Regular readers will know that I'm a keen Jetson Nano owner.

Recently I posted a series about how to get started with the computer, but NVIDIA have now published an excellent course, 'Getting Started with the Jetson Nano', which is free for members of the NVIDIA developers' program.

The course comes with a pre-built image which can run the Nano in headless mode. That's very useful - I had to buy a new monitor to get going, as none of my old monitors had native HDMI support.

The image provided with the course just needs a Nano and a laptop or desktop computer with a USB port.

The course is a great introduction to deep learning with a GPU. Once you've completed it you may want to delve deeper; there are lots of excellent Deep Learning courses available on-line, and many of them use Google's Colab for practical sessions.

Google Colab gives you free access to top-of-the range NVIDIA hardware, and if you want to run your trained models locally it's easy to move them onto the Nano. Of course the Nano is small enough to use for mobile robotics!

I'm designing an autonomous robot with the Nano, and I have been delighted with Adafruit's Nano port of their Blinka library. It makes it really easy to write Physical Computing code that runs without change on the Nano, the Raspberry Pi and  Adafruit's CircuitPython-enabled boards.

Come and see the Nano


Tomorrow I'll be at the London Raspberry Pint meet-up showing a Nano driving Adafruit, Grove and SparkFun peripherals using a simple interface board (the babelboard) and some straightforward Python code.

If you're in or near London, do come along. The meet-up starts at 7 PM. It's at CodeNode, and you'll save a lot of time if you pre-register at the CodeNode site.

If you can't join us, a video of the talk will be available in a few days and I will be blogging more about the babelboard later this week.

Thursday, 20 June 2019

Use websockets to build a browser-based Digital Voltmeter

This is the third and final part of a series about building a browser-based six-channel Digital Voltmeter (DVM) using an Arduino and a Raspberry Pi.

Part one covered the software on the Arduino and the USB connection to the Raspberry Pi.

Part two showed you how to use the pyserial package to read that data into a Python program and print the results as they arrived.

In this final part you'll see how to use Joe Walnes' brilliantly simple websocketd. You'll see how you can turn the Python program into a server without writing any additional code! Finally, you'll view a web-page that displays the voltages in more or less real-time.

Web sockets with websocketd


HTTP makes it easy for a web browser to ask for data from a web server. That approach is known as client pull because the client (the web browser) pulls (requests) the data from the server.

For the DVM application you want to use server push; in other words, you want to send changes in the voltages from a server on the Raspberry Pi to your browser. Websockets give you a simple, well-supported way of implementing server push.

You'll use websocketd to send the data from the Pi to the browser. websocketd is a really useful program which allows you to wrap any program or script that prints output and send that data to a browser via a websocket. (You can send data back from the browser as well, but the Web DVM doesn't need to do that.)

Four steps to use websocketd


I followed four simple steps to install and use websocketd with the Python program from part 2.

  1. I installed the Python program which you saw in Part two.
  2. Next I copied the websocketd binary program onto the Raspberry Pi.
  3. I installed a web page with some simple JavaScript as well as some HTML.
  4. To wrap the program, I started websocketd and told it where to find the program it should run. For the web DVM I wanted websocketd to act as a conventional web server. To do that, I provided an extra parameter telling websocketd where to find the web pages to serve.
To make things as simple as possible I have set up a public repository on GitHub which contains all the required software as well as the web page you'll need.

As you'll see below, you just need to download and unzip a file and then run the command that starts the server.

If you're a confident GitHub user you can fork or clone the repository instead of downloading the zip file.

The code on GitHub assumes that you are using a Raspberry Pi with the hostname of raspberrypi. If you want to use your own choice of hostname, you'll need to make one simple change to the web page. I'll cover that in the final step of this post.

Installing the Web DVM project on the Pi


Open a terminal window on the Pi and move into a directory of your choice.

Type

wget https://github.com/romilly/WebDVM/archive/master.zip
unzip master.zip
cd WebDVM-master/
./websocketd --port=8080 --staticdir=. ./read-dvm.py

You should see websocketd displaying messages as it starts up.

 

 Testing the application

If you open a browser on raspberrypi:8080 you should see a web page like this:

For demo purposes, I've connected the analog inputs to various fixed and variable voltages. For example, A0 is ground,  A2 is connected to a light-dependent resistor and A3 is connected to the Arduino's 5V rail.

How does the web page work?


The web page is a basic html page with a little JavaScript.

Here's index.html:

<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Web DVM</title>
</head>
<body>
<h1>Web DVM - 6 Analog Channels</h1>
<div>A0: <span id="A0"></span></div>
<div>A1: <span id="A1"></span></div>
<div>A2: <span id="A2"></span></div>
<div>A3: <span id="A3"></span></div>
<div>A4: <span id="A4"></span></div>
<div>A5: <span id="A5"></span></div>
<script>
    
  // setup websocket with callbacks
  var ws = new WebSocket('ws://raspberrypi:8080/');
  ws.onopen = function() {
  };
  ws.onclose = function() {
  };
  ws.onmessage = function(event) {
      data = event.data.split('=');
      if (data[0].length === 2) {
          document.getElementById(data[0]).textContent = data[1];
      }
  };
</script>
</body>
</html>


The work is done by a small bit of JavaScript. The code
var ws = new WebSocket('ws://raspberrypi:8080/');
opens a websocket.

That's the bit of code you'll need to change if you want to use a different hostname for the Pi.

Whenever the websocket receives a message, the
ws.onmessage
code processes it and updates the contents of the web page.



Summary


In this three-part series you've met several useful techniques, and built the WebDVM. That's a useful piece of Test Equipment!

You can use the approach to  make lots of different measurements, and there are several ways in which you can add functionality.

In a future blog post I'll show you how to add scrolling graphics to your web-based DVM.

The babelboard is coming


Before that I'll be exploring the babelboard, another simple tool which you can use to quickly prototype physical computing applications.

The babelboard can be used with a Raspberry Pi or a Jetson Nano. Later posts will show how to build versions for the Adafruit feather and the BBC micro:bit.

The babelboard is a simple piece of open source hardware which you can build in an hour using readily available components.

To make sure you don't miss the next series, follow @rareblog on Twitter!
 
 

Friday, 14 June 2019

Using pyserial on a Raspberry Pi to read Arduino output - WebDVM part 2

This series describes how to build a simple web-based DVM (digital volt meter) that can display up to six voltages in a web browser.

It's a really useful hack if you need to measure several voltages at once.

In part 1 you saw how to send analog voltage data over a serial link to a Raspberry Pi. For testing purposes you displayed the data in a terminal window on the Pi.

In this part you will read that data using Python on the Pi and print it to standard output.

In the next part you'll turn your Python script into a dynamic application that displays the 6 voltages in a browser, updating the web page as the values change.

The code (available on github) uses the pyserial package. It's a simple approach, but there are a few issues to find your way around. You'll see the problems and solutions below.

Pyserial


pyserial is a Python package that allows Python programs to read and write data using serial ports. You can install it via pip.

pip3 install pyserial

When you connect the Arduino to the Pi via a USB cable, the Arduino connection appears as a serial port.

That's where the first problem bit me.

For part one of this series I used a Raspberry Pi zero with an Arduino Uno.
When I connected them the Arduino connection showed up as device /dev/ttyACM0.

For part two, I decided to use an Arduino Nano instead of the Uno. The Nano is smaller and cheaper than the Uno. Like the Uno it has six analog ports, so it looked ideal.

When I ran the software it fell over.

I did a quick check listing the teletype devices in /dev/tty*

There was no /dev/ttyACM0 but there was another new device: /dev/ttyUSB0.

I'd forgotten that the Nano uses a different approach for its USB-to-serial converter, and the Pi sees it as a different device.

So the software now starts with a check to see which device is available.

from serial.tools import list_ports  # pyserial's helper for enumerating serial ports
devices = [port.device for port in list_ports.comports()]
ports = [port for port in devices if port in ['/dev/ttyACM0','/dev/ttyUSB0']]
if len(ports) != 1:
    raise Exception('cannot identify port to use')
port = ports[0]

Once the script has found which port to use, pyserial reads from that device in much the same way as you would read from or write to a file.

Unlike a file, though, data will only be available to read once the Arduino has sent it! For that reason, the script wraps the serial connection in a buffer.

It also needs to  convert the serial input from a series of bytes to a unicode string.

Unfortunately, there is no way to synchronise the Pi and Arduino when they first connect. This means it's possible that the Python program starts reading an invalid sequence of bytes, so the program needs to cope with an error when the bytes are converted to a string.

Here's the code that does all that:

with serial.Serial(port, 9600, timeout=TIMEOUT_SECONDS) as ser:
    # We use a Bi-directional BufferedRWPair so people who copy + adapt can write as well as read
    sio = io.TextIOWrapper(io.BufferedRWPair(ser, ser))
    while True:
        try:
            line = sio.readline()
        except UnicodeDecodeError:
            continue # decode error - keep calm and carry on

Next the program checks to see that a non-empty line was actually received, prints it to standard output, and flushes the newly printed information. That last step is necessary to make sure that new data is printed out as soon as it has been received. That will become vital in part three of this series.

        if len(line) > 0:
            print(line)
            sys.stdout.flush() # to avoid buffering, needed for websocketd

Part three will describe how to make the analog data available in a browser that updates in real time. While you're waiting you can take a look at the code on github.




Tuesday, 4 June 2019

Build a Web-based DVM using Arduino and Pi zero W

This three-part series shows you how to build a web-based 6-channel DVM (digital voltmeter) using an Arduino and a Raspberry Pi Zero.

If you’re experienced and impatient you will find software and basic installation instructions on GitHub.

If you’re less experienced, don’t worry. All you need to know is:
  1. how to compile and download an Arduino sketch,
  2. how to connect your Raspberry Pi to your network, and
  3. how to open a terminal session on the Pi.
The project is fun, useful, and simple. You probably have the parts already, in which case the whole project will take about an hour to program and connect. You don’t need to solder anything.

The WebDVM uses some general techniques that you can apply to projects of your own.

In Part 1 (this post)  you will install some simple Arduino code, connect the Arduino to a Pi, and check that the two are communicating.

In Part 2 you’ll use a short Python program on the Pi to read the Arduino’s output.

In Part 3 you'll use a simple technique to turn your Python program into a web server, and you'll use HTML and JavaScript to format the output. You’ll see the web page displaying the voltages on the Arduino’s six analog inputs in real time.

Why Pi and Arduino?



The project plays to the strengths of both platforms.

The Arduino has 6 analog inputs.

The Pi zero runs Python, it can connect using WiFi without extra hardware, and it’s inexpensive.

Both are widely available.

What hardware do you need?


Here’s what I used:
  1. An Arduino Uno.
  2. A Raspberry Pi zero W with wifi set up.
  3. A battery to power the hardware.
  4. Cables to connect the Pi and Arduino, and to provide power to the Pi.
  5. Something that runs a web browser on the same network as the Pi.

I used an Arduino Uno, but an older model or a Leonardo would be fine. If you have a Nano or Mini you will need a way of connecting to what you want to measure, using a breadboard and/or headers.

You can use a full sized Raspberry Pi if you want. Any model will work, and you can use a wired (Ethernet) connection instead of wifi.

Arduino software


The Arduino sketch is really simple. It
  1. sets up serial communication,
  2. reads each analog input, and
  3. prints the voltage to serial output.

Here’s the sketch:

const float vcc = 5.0;
const float scale = vcc / 1023;
const int analogPins[] = {A0, A1, A2, A3, A4, A5};
const int delay_ms = 50;
const int channels = 6;

// 6 Channel DVM over serial connection


void setup() {
 //Initialize serial and wait for port to open:
 Serial.begin(9600);
 while (!Serial) {
 }
}


void loop() {
 // read each Analog channel, scale it,
 // and print a formatted version to the Serial output
 for (int i = 0; i < channels; i++) {
   float v = scale * analogRead(analogPins[i]);
   Serial.print('A');
   Serial.print(i);
   Serial.print('=');
   Serial.println(v);
   delay(delay_ms);
 }
}

Compile and upload the sketch to the Arduino.

If you open the serial monitor, you should see a window like this:


Now connect the Pi to the Arduino.

If you use an Arduino Uno or Leonardo, you’ll need a USB cable with an A-type connector at one end; it will probably have a B-type connector at the other. If you want to connect that to a Pi zero, you’ll need an adapter; if it’s a full-sized Pi you can connect directly.

Open a terminal window on the Pi.

If you type  
ls -la /dev/ttyACM0
you should see an entry for /dev/ttyACM0



/dev/ttyACM0 is a temporary teletype device that Raspbian created when you connected the Arduino.

It’s owned by root, and is in the dialout group. As user pi you should already be a member of the dialout group, so you can access the device.
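(You can check your groups by typing groups; if dialout isn't listed, add yourself with sudo usermod -a -G dialout $USER, then log out and back in.)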

Type

cat -v  /dev/ttyACM0

This lists the output that the Pi is reading from the Arduino, with non-printing characters displayed.



The ^M characters are the carriage returns sent by the Arduino at the end of each analog reading.

Coming next


That concludes the first part of the project. In the next part you’ll read the Arduino’s output in a Python program on the Pi.

Don't forget to follow @rareblog on Twitter to see when the next part is posted!






Friday, 24 May 2019

apl-reggie: Regular Expressions made easier in APL

What's APL?

Like GNU, APL is a recursive acronym; it's A Programming Language.

I first met APL at the IBM education centre in Sudbury Towers in London. I was a student reading Maths at Cambridge University, and IBM asked me to do a summer research project into a new technology called Computer Assisted Instruction. (I wonder what happened to that crazy idea?)

APL was one of the first languages to offer a REPL (Read Evaluate Print Loop), so it looked like good technology for exploratory programming.

APL was created by a mathematician. Its notation and syntax rationalise mathematical notation, and it was designed to describe array (tensor) operations naturally and consistently.

For a while in the '70s and '80s APL ruled the corporate IT world. These days it's used to solve problems that involve complex calculations on large arrays.
It's not yet used as widely as it should be by AI researchers or Data Scientists, but I think it will be, for reasons that deserve a separate blog post.

I use it a lot, as I have done throughout most of my professional life. These days APL is well integrated with current technology. There's a bi-directional APL to Python bridge and APL programs sit naturally in version control systems like GitHub.

The leading commercial implementation is Dyalog APL, and there's an official port that runs on the Raspberry Pi. It's free for non-commercial use. Dyalog APL's IDE is called RIDE; it runs in a browser and you can use it to connect to a local or remote APL session.

One feature of Dyalog APL is support for Perl-style regexes (regular expressions).

Regular expressions are useful but hard to read. A while ago I blogged about reggie-dsl, a Python library that allows you to write readable regular expressions. I mentioned that Morten Kromberg and I were experimenting with an APL version of reggie. apl-reggie is now ready to share.

APL already has great tools for manipulation of character data. Many text processing tasks can be solved simply and concisely using APL's array-processing primitives.

As a simple example, imagine that you want to sanitize some text in the way that 18th Century authors did, by replacing the vowels in rude words by asterisks.

I'll save your blushes by using 'bigger' as the word to be sanitized.

In APL you can find which characters are vowels by using ∊, the membership function.
    
     'bigger' ∊ 'aeiou'
0 1 0 0 1 0

The boolean result has a 1 corresponding to each vowel in the character vector, and a zero for each non-vowel.

You can compose the membership function with its right argument to produce a new vowel-detecting function:

     vi ← ∊∘'aeiou' ⍝ short for 'vowels in'
     vi 'bigger'
0 1 0 0 1 0


You can combine that with @ (the 'at' operator) to replace vowels with asterisks:

  
('*'@(∊∘'aeiou')) 'bigger'
b*gg*r

If you want to do more complex pattern matching, regular expressions are a good solution.

Here's APL-reggie code to recognise and analyse telephone numbers in
North American format:

d3←3 of digit
d4←4 of digit
local←osp('exchange'defined d3)dash('number'defined d4)
area←optional osp('area'defined lp d3 rp)
international←'i'defined optional escape'+1'
number←international area local


You can use it like this:
    
'+1 (123) 345-2192' match number

and here is the result:
i +1
area (123)
exchange 345
number 2192

The original idea for reggie (and apl-reggie) came from a real application that processed CDRs (call detail records).

CDRs are records created by Telcos; they describe phone calls and other billable services. There are standards for CDRs. The example given below is a slightly simplified version of the real format.

N,+448000077938,+441603761827,09/08/2015,07:00:12,2

That's a record of a normal (N-type) call from +448000077938 to
+441603761827, made on the 9th of August 2015. It was made just after 7 AM, and it lasted for just 2 seconds.

Here's the declarative code that defines the format of a CDR:

r←cdr
call_type←'call_type'defined one_of'N','V','D'
number←plus,12 15 of digit
dd←2 of digit
year←4 of digit
date←'date'defined dd,slash,dd,slash,year
time←'time'defined dd,colon,dd,colon,dd
duration←'duration'defined digits
cc←'class'defined 0 50 of capital
r←csv call_type('caller'defined number)('called'defined optional number)date time duration cc
 

 and here's the result when you run it on that record:

    'N,+448000077938,+441603761827,09/08/2015,07:00:12,2,' match cdr
call_type N
caller +448000077938
called +441603761827
date 09/08/2015
time 07:00:12
duration 2
class 



apl-reggie is now a public repository on GitHub.

Feel free to ask questions in the comments below. You may also get some help via the Dyalog support forums, although they didn't write the software and it's not officially supported.

If you want to experiment with this unique language you can do so at  tryapl.org.



Thursday, 23 May 2019

Another free tool for Jetson Nano users

jtop output
Raffaello Bonghi, one of the members of the unofficial Jetson Nano group on Facebook, has published jetson-stats, a toolkit for Jetson users.

jetson-stats works on all the members of the Jetson family.

My favourite program in jetson-stats is jtop. It's a greatly enhanced version of the linux top command.

jtop shows a very useful real-time view of CPU and GPU load, memory use and chip temperature.

Find jetson-stats on GitHub, or install it via pip/pip3.
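If it helps, the install is typically a one-liner, after which the jtop command is on your path:

sudo -H pip3 install jetson-stats
jtop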

AI/DL explorers - two great, free resources for you

I'd like to share two really useful free resources for anyone exploring Artificial Intelligence and Deep Learning.

Netron


The first is netron - an Open Source tool for displaying Deep Learning models.

The image on the right is a small part of netron's display of  resnet-18.

Netron

  • covers a wide range of saved model formats
  • is really easy to install
  • is MIT licensed 
  • is implemented in JavaScript and 
  • can be installed and invoked from Python.
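For that last point, here's a minimal sketch (the model filename is just a placeholder):

import netron
netron.start('resnet18.onnx')  # serves an interactive model viewer in your browser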

Computer Vision Resources


The second find is Joshua Li's 'Jumble of Computer Vision' - a curated list of papers and blog posts about Computer Vision topics.

It's going to keep me reading for weeks to come :)

Many thanks to Joshua for making this available.

Wednesday, 22 May 2019

Five steps to connect Jetson Nano and Arduino

Yesterday's post showed how to link a micro:bit to the Jetson Nano.

One of the members of the (unofficial) NVIDIA Jetson Nano group on Facebook asked about connecting an Arduino to the Jetson.

Here's a simple recipe for getting data from the Arduino to the Jetson Nano. It should work on all the Jetson models, not just the Nano, but I only have Nanos to hand.

On request, I've added a recipe at the end of this post which sends data from the Jetson Nano to the Arduino; it turns the default LED on the Arduino on or off.

The recipe for sending data from the Arduino to the Jetson has just 5 stages:

  1. Program the Arduino. (I used the ASCIITable example).
  2. Connect the Arduino to the Jetson using a USB connector
  3. Install pyserial on the Jetson
  4. Download a three-line Python script
  5. Run the script.

 

Programming the Arduino


I used an Arduino Uno, and checked it on a veteran Duemilanove (above), but any Arduino should work.

You'll need to do this step using a workstation, laptop or Pi with the Arduino IDE installed. (There seems to be a problem with the Arduino IDE on the Nano, so I couldn't use that).

  1. Connect the Arduino to your workstation via USB.
  2. Open the Arduino IDE.
  3. Select File/Examples/04.Communication/ASCIITable.
  4. Upload the example sketch to your Arduino.
  5. Open the Serial Monitor from the Tools menu. You should see a list like this one:

Connect to the Nano


Unplug the Arduino from your workstation and plug it into one of the USB ports on the Jetson Nano.

Install pyserial


If you've been following my tutorial series, open a browser and open the Jupyter lab page. Open a terminal.

If you're using the Nano with a monitor, mouse and keyboard, open a terminal window by typing Ctrl-Alt-T.

Type sudo -H pip3 install pyserial

Once the installation is complete, you're ready to download a short Python script.

Download the script


In the terminal window, type

wget http://bit.ly/jetson-serialpy -O ser.py

(That's a capital O, not a zero!)

When that has finished, type

cat ser.py

The contents of the file should look like this:

import serial
with serial.Serial('/dev/ttyACM0', 9600, timeout=10) as ser:
    while True:
        print(ser.readline())

Run the file


Type python3 ser.py


You should see a display like this:



Sending data the other way


You'll need this program on the Arduino.

void setup() {
  // start serial port at 9600 bps:
  Serial.begin(9600);
  // initialize digital pin LED_BUILTIN as an output.
  pinMode(LED_BUILTIN, OUTPUT);
  while (!Serial) {
    ; // wait for serial port to connect.
  }

}

void loop() {
  char buffer[16];
  // if we get a command, turn the LED on or off:
  if (Serial.available() > 0) {
    int size = Serial.readBytesUntil('\n', buffer, 12);
    if (buffer[0] == 'Y') {
      digitalWrite(LED_BUILTIN, HIGH);
    }
    if (buffer[0] == 'N') {
      digitalWrite(LED_BUILTIN, LOW);
    }
  }
}
And this one on the Nano (call it serial_send.py)

import serial

with serial.Serial('/dev/ttyACM0', 9600, timeout=10) as ser:
    while True:
        led_on = input('Do you want the LED on? ')[0]
        if led_on in 'yY':
            ser.write(bytes('YES\n','utf-8'))
        if led_on in 'Nn':
            ser.write(bytes('NO\n','utf-8'))
 

Plug the Arduino's USB lead into the Nano, and run the program on the Nano by typing

python3 serial_send.py 

The program will ask you repeatedly whether you want the LED on or off.

Answer Yes and the LED on the Arduino will turn on.
Answer No and the LED will turn off.

Have fun!






Tuesday, 21 May 2019

Connect the Jetson Nano to the BBC micro:bit in 5 minutes or less

There's huge interest in using the Jetson Nano for Edge AI - the place where Physical Computing meets Artificial Intelligence.

As the last few posts have shown, you can easily train and run Deep Learning models on the Nano. You'll see today that it's just as easy to connect the Nano to the outside world.

In this simple proof-of-concept you'll see how to connect the Nano to a BBC micro:bit and have the Nano respond to an external signal (a press on the micro:bit's button A).

You'll need

  1. a Jetson Nano
  2. a computer with the Mu editor installed
  3. a BBC micro:bit
  4. a USB lead to connect the Jetson to the micro:bit

Here's a video that takes you through the whole process in less than 5 minutes.
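If you'd rather see the idea in code than on video, here's roughly what's involved (my sketch, not the exact code from the video). On the micro:bit, a few lines of MicroPython flashed from Mu print a message over USB serial whenever button A is pressed:

from microbit import button_a, sleep

while True:
    if button_a.was_pressed():
        print('button A pressed')   # goes out over the USB serial link
    sleep(100)

On the Nano, a short pyserial loop reads those lines back (the device name and baud rate may differ on your setup):

import serial

with serial.Serial('/dev/ttyACM0', 115200, timeout=10) as ser:
    while True:
        print(ser.readline())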



I'll be posting more complex examples over the next few days. To make sure you don't miss them, follow @rareblog on twitter.

Monday, 20 May 2019

Getting Started with the Jetson Nano - part 4

I'm amazed at how much the Nano can do on its own, but there are times when it needs some help.

A frustrating problem...


Some deep learning models are too large to train on the Nano. Others need so much data that training times on the Nano would be prohibitive.

Today's post shows you one simple solution.

... and a simple solution


In the previous tutorial, you went through five main steps to deploy the TensorFlow model:
  1. Get training data
  2. Define the model
  3. Train the model
  4. Test the model
  5. Use the model to classify unseen data
Here's the key idea: you don't have to do all those steps on the same computer.


Saving and Loading Keras Models  



The Keras interface to TensorFlow makes it very easy to export a trained model to a file. That file contains information about the way the model is structured, and it also contains the weights which were set as the model learned from the training data.

That's all you need to recreate a usable copy of the trained model on a different computer.
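In code, the round trip is just a couple of lines - something like this sketch, using the rps.h5 filename you'll meet later in this post:

# on the machine where you trained the model (Colab, a workstation...)
model.save('rps.h5')

# on the Nano
import tensorflow as tf
model = tf.keras.models.load_model('rps.h5')
model.summary()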


 Rock, Paper, Scissors



The notebook you'll run in this tutorial will make use of Laurence Moroney's open source rock paper scissors dataset.

This contains images of hands showing rock, paper or scissors gestures. The hands come in a variety of shapes, sizes and skin colours, and they are great for experimenting with image classification.

Laurence uses that dataset in the Convolutional Neural Networks with TensorFlow course on Coursera.

(I strongly recommend the course, which is very well designed and superbly taught).

In one of the exercises in the course, students save a trained model that recognises the rock, papers, scissors hands.

They create and run the model using Google's amazing Colab service.

If you haven't come across it, Colab allows users who have registered to run deep learning experiments on Google's powerful hardware free of charge. (Colab also has a great online introduction to Jupyter notebooks.)

Today's notebook will show you how to load some data, and then load the trained model from the course onto your Nano. Then you can use the model to classify images.

You don't need to go through the process of training the model - I've done it for you.  

(Thanks to Laurence Moroney for permission to use the course model for this tutorial!)

Before you run the notebook


There are a couple of steps you need to take before you can run the next notebook.

The first step will set up a swap file on your Nano. If you're not familiar with the term, a swap file can be used by your Nano's Ubuntu Operating System to pretend to programs that the Nano has more than 4 GB of RAM.
 
The TensorFlow model you'll be using with rock, paper, scissors has over 3.4 million weights, so it uses a lot of memory. The swap file will allow you to load and run the model on the Nano.

The second step will install scipy, another Python package that is required for the model to run.

Here's what you should do first:

Setting up a swap file


You'll be using a script set up by the folks at JetsonHacks. You'll clone their  repository from GitHub and then run the script. After that, you'll reboot the Nano to make sure that your swap file is properly set up.

1. Open a Jupyter lab window on your Nano as you did in the previous tutorial.


2. Click on the + sign to the left, just below the Edit menu to open a new Launcher.

3. Click on the Terminal icon at the bottom left hand of the right-hand pane.
A terminal window will open.


4. In the terminal window, type the following commands:

cd ~
git clone https://github.com/JetsonHacksNano/installSwapfile.git
cd installSwapfile/ 
sudo ./installSwapfile.sh -s 2
Enter your password when asked to do so.

5. You should see output like this:

Cloning into 'installSwapfile'...
remote: Enumerating objects: 22, done.
remote: Counting objects: 100% (22/22), done.
remote: Compressing objects: 100% (17/17), done.
remote: Total 22 (delta 8), reused 16 (delta 5), pack-reused 0
Unpacking objects: 100% (22/22), done.
romilly@nano-02:~$ cd installSwapfile/
romilly@nano-02:~/installSwapfile$ sudo ./installSwapfile.sh -s 2
[sudo] password for romilly:
Creating Swapfile at:  /mnt
Swapfile Size:  2G
Automount:  Y
-rw-r--r-- 1 root root 2.0G May 20 10:02 swapfile
-rw------- 1 root root 2.0G May 20 10:02 swapfile
Setting up swapspace version 1, size = 2 GiB (2147479552 bytes)
no label, UUID=b139aeed-54d0-4cda-9ccf-962b9fefd3b8
Filename                Type        Size    Used    Priority
/mnt/swapfile                              file        2097148    0    -1
Modifying /etc/fstab to enable on boot
/mnt/swapfile
Swap file has been created
Reboot to make sure changes are in effect
romilly@nano-02:~/installSwapfile$


6. Reboot the Nano and check the swap file has been installed.

Type sudo reboot

Enter your password if asked to do so.

Your Nano will close down and then re-start. You'll probably see a warning in your browser saying that a connection could not be established. That's quite normal; the browser has noticed that the Nano is re-booting.



After 30 seconds or so, reload the lab web page. It should reconnect to the nano.

Once again, open the launcher and start a terminal window.

In it, type

cat /etc/fstab

You should see output like this:




The key line reads /mnt/swapfile swap swap defaults 0 0

If you can see that, the swapfile has been installed correctly.
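(You can also confirm that the swap space is active by typing free -h or swapon --show; either should show a 2G swap entry.)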

Install scipy


The TensorFlow example you'll be running requires another Python package called scipy.

scipy has a lot of useful features, but it takes a while to install, and it relies on a couple of other Ubuntu packages. Install the prerequisite packages first.



Type

sudo apt-get install build-essential gfortran libatlas-base-dev
 
 
And enter your password when it's requested.

When asked if you want to continue with installation, say yes.

You'll see output like this:



Now you're ready to install scipy.

Type

sudo -H pip3 install scipy
 
 
Once you've entered the line above,  leave the installation running. It took about 30 minutes on my Nano.

At the end you should see this:


You've one more step to take in order to install the saved network and the next notebook, and it is very quick.

Make sure you are in the nano directory. (If not, just enter cd ~/nano ).

Type git pull

This will load the changes from my nano repository on GitHub. These include the saved network you'll be using and the notebook you'll run.

The left-hand pane of your browser window should look like this (of course, the updated times will be different):


 Double-click on the notebooks folder in the left-hand pane.

The pane should now look like this:



The data folder contains a file called rps.h5 which contains the saved neural network.

The notebooks folder contains three notebooks; you'll use the one called loading.ipynb.

Double-click on loading.ipynb

Your screen should look like this:


From the Run menu, select Run All Cells.


See the Notebook running




You'll see your notebook load the data, load the network and print out its structure. Then you'll see it classify an image and display the image it processed. Here's a video

Congratulations! Your Nano is now that little bit smarter :)

In the next post, you'll see transfer learning - a very valuable technique that allows you to take a pre-trained network and fine-tune it to perform a specialised task.

If you have questions, comments or suggestions, come and visit the (unofficial) Jetson Nano group on Facebook.