Monday, 24 June 2019

An excellent course for Jetson Nano owners

Jetson Nano
Regular readers will know that I'm a keen Jetson Nano owner.

Recently I posted a series about how to get started with the computer, but NVIDIA have now published an excellent course, 'Getting Started with the Jetson Nano', which is free for members of the NVIDIA developers' program.

The course comes with a pre-built image which can run the Nano in headless mode. That's very useful - I had to buy a new monitor to get going, as none of my old monitors had native HDMI support.

The image provided with the course just needs a Nano and a laptop or desktop computer with a USB port.

The course is a great introduction to deep learning with a GPU. Once you've completed it you may want to delve deeper; there are lots of excellent Deep Learning courses available on-line, and many of them use Google's Colab for practical sessions.

Google Colab gives you free access to top-of-the-range NVIDIA hardware, and if you want to run your trained models locally it's easy to move them onto the Nano. Of course, the Nano is small enough to use for mobile robotics!

I'm designing an autonomous robot with the Nano, and I have been delighted with Adafruit's Nano port of their Blinka library. It makes it really easy to write Physical Computing code that runs without change on the Nano, the Raspberry Pi and Adafruit's CircuitPython-enabled boards.

Come and see the Nano

Tomorrow I'll be at the London Raspberry Pint meet-up showing a Nano driving Adafruit, Grove and SparkFun peripherals using a simple interface board (the babelboard) and some straightforward Python code.

If you're in or near London, do come along. The meet-up starts at 7 PM. It's at CodeNode, and you'll save a lot of time if you pre-register at the CodeNode site.

If you can't join us, a video of the talk will be available in a few days and I will be blogging more about the babelboard later this week.

Thursday, 20 June 2019

Use websockets to build a browser-based Digital Voltmeter

This is the third and final part of a series about building a browser-based six-channel Digital Voltmeter (DVM) using an Arduino and a Raspberry Pi.

Part one covered the software on the Arduino and the USB connection to the Raspberry Pi.

Part two showed you how to use the pyserial package to read that data into a Python program and print the results as they arrived.

In this final part you'll see how to use Joe Walnes' brilliantly simple websocketd. You'll see how you can turn the Python program into a server without writing any additional code! Finally, you'll view a web-page that displays the voltages in more or less real-time.

Web sockets with websocketd

HTTP makes it easy for a web browser to ask for data from a web server. That approach is known as client pull because the client (the web browser) pulls (requests) the data from the server.

For the DVM application you want to use server push; in other words, you want to send changes in the voltages from a server on the Raspberry Pi to your browser. Websockets give you a simple, well-supported way of implementing server push.

You'll use websocketd to send the data from the Pi to the browser. websocketd is a really useful program which allows you to wrap any program or script that prints output and send that data to a browser via a websocket. (You can send data back from the browser as well, but the Web DVM doesn't need to do that.)
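Any program that writes lines to standard output will do. Here's a minimal sketch of the kind of script websocketd can wrap (a hypothetical example, not the WebDVM code; note the explicit flush, which matters later in this post):

```python
import sys
import time

# Each line printed to stdout becomes one websocket message.
messages = ['count=%d' % i for i in range(5)]
for msg in messages:
    print(msg)
    sys.stdout.flush()  # flush so each message is sent immediately
    time.sleep(0.1)
```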

Four steps to use websocketd

I followed four simple steps to install and use websocketd with the Python program from part 2.

  1. I installed the Python program which you saw in part two.
  2. Next, I copied the websocketd binary onto the Raspberry Pi.
  3. I installed a web page with some simple JavaScript as well as some HTML.
  4. To wrap the program, I started websocketd and told it where to find the program it should run. For the web DVM I wanted websocketd to act as a conventional web server, so I provided an extra parameter telling websocketd where to find the web pages to serve.
To make things as simple as possible I have set up a public repository on GitHub which contains all the required software as well as the web page you'll need.

As you'll see below, you just need to download and unzip a file and then run the command that starts the server.

If you're a confident GitHub user you can fork or clone the repository instead of downloading the zip file.

The code on GitHub assumes that you are using a Raspberry Pi with the hostname of raspberrypi. If you want to use your own choice of hostname, you'll need to make one simple change to the web page. I'll cover that in the final step of this post.

Installing the Web DVM project on the Pi

Open a terminal window on the Pi and move into a directory of your choice.


cd WebDVM-master/
./websocketd --port=8080 --staticdir=. ./

You should see websocketd displaying messages as it starts up.


Testing the application

If you open a browser on raspberrypi:8080 you should see a web page like this:

For demo purposes, I've connected the analog inputs to various fixed and variable voltages. For example, A0 is ground, A2 is connected to a light-dependent resistor and A3 is connected to the Arduino's 5V rail.

How does the web page work?

The web page is a basic html page with a little JavaScript.

Here's index.html:

<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Web DVM</title>
</head>
<body>
<h1>Web DVM - 6 Analog Channels</h1>
<div>A0: <span id="A0"></span></div>
<div>A1: <span id="A1"></span></div>
<div>A2: <span id="A2"></span></div>
<div>A3: <span id="A3"></span></div>
<div>A4: <span id="A4"></span></div>
<div>A5: <span id="A5"></span></div>
<script>
  // setup websocket with callbacks
  var ws = new WebSocket('ws://raspberrypi:8080/');
  ws.onopen = function() {
      console.log('websocket opened');
  };
  ws.onclose = function() {
      console.log('websocket closed');
  };
  ws.onmessage = function(event) {
      // each message looks like 'A0=2.96'
      var data = event.data.split('=');
      if (data[0].length === 2) {
          document.getElementById(data[0]).textContent = data[1];
      }
  };
</script>
</body>
</html>

The work is done by a small bit of JavaScript. The code
var ws = new WebSocket('ws://raspberrypi:8080/');
opens a websocket.

That's the bit of code you'll need to change if you want to use a different hostname for the Pi.

Whenever the websocket receives a message, the code processes it and updates the contents of the web page.
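If it helps to see that logic outside JavaScript, here's the same message-handling idea sketched in Python (the 'A0=2.96' message format is my assumption about what the server sends):

```python
def handle_message(message, page):
    # message looks like 'A0=2.96'; page maps span ids to displayed text
    channel, value = message.split('=')
    if len(channel) == 2:
        page[channel] = value

page = {}
handle_message('A0=2.96', page)
```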


In this three-part series you've met several useful techniques, and built the WebDVM. That's a useful piece of test equipment!

You can use the approach to make lots of different measurements, and there are several ways in which you can add functionality.

In a future blog post I'll show you how to add scrolling graphics to your web-based DVM.

The babelboard is coming

Before that I'll be exploring the babelboard, another simple tool which you can use to quickly prototype physical computing applications.

The babelboard can be used with a Raspberry Pi or a Jetson Nano. Later posts will show how to build versions for the Adafruit Feather and the BBC micro:bit.

The babelboard is a simple piece of open source hardware which you can build in an hour using readily available components.

To make sure you don't miss the next series, follow @rareblog on Twitter!

Friday, 14 June 2019

Using pyserial on a Raspberry Pi to read Arduino output - WebDVM part 2

This series describes how to build a simple web-based DVM (digital voltmeter) that can display up to six voltages in a web browser.

It's a really useful hack if you need to measure several voltages at once.

In part 1 you saw how to send analog voltage data over a serial link to a Raspberry Pi. For testing purposes you displayed the data in a terminal window on the Pi.

In this part you will read that data using Python on the Pi and print it to standard output.

In the next part you'll turn your Python script into a dynamic application that displays the 6 voltages in a browser, updating the web page as the values change.

The code (available on GitHub) uses the pyserial package. It's a simple approach, but there are a few issues to work around. You'll see the problems and solutions below.


pyserial is a Python package that allows Python programs to read and write data using serial ports. You can install it via pip.

pip3 install pyserial

When you connect the Arduino to the Pi via a USB cable, the Arduino connection appears as a serial port.

That's where the first problem bit me.

For part one of this series I used a Raspberry Pi zero with an Arduino Uno.
When I connected them the Arduino connection showed up as device /dev/ttyACM0.

For part two, I decided to use an Arduino Nano instead of the Uno. The Nano is smaller and cheaper than the Uno. Like the Uno it has six analog ports, so it looked ideal.

When I ran the software it fell over.

I did a quick check, listing the teletype devices matching /dev/tty*

There was no /dev/ttyACM0 but there was another new device: /dev/ttyUSB0.

I'd forgotten that the Nano uses a different approach for its USB-to-serial converter, and the Pi sees it as a different device.

So the software now starts with a check to see which device is available.

from serial.tools import list_ports

devices = [port.device for port in list_ports.comports()]
ports = [port for port in devices if port in ['/dev/ttyACM0', '/dev/ttyUSB0']]
if len(ports) != 1:
    raise Exception('cannot identify port to use')
port = ports[0]

Once the script has found which port to use, pyserial reads from that device in much the same way as you would read from or write to a file.

Unlike a file, though, data will only be available to read once the Arduino has sent it! For that reason, the script wraps the serial connection in a buffer.

It also needs to convert the serial input from a series of bytes to a unicode string.

Unfortunately, there is no way to synchronise the Pi and Arduino when they first connect. This means it's possible that the Python program starts reading an invalid sequence of bytes, so the program needs to cope with an error when the bytes are converted to a string.
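Here's a small illustration of why that guard is needed (the bytes are illustrative, not captured data):

```python
def safe_decode(raw_bytes):
    # Return the decoded line, or None if we joined the stream mid-character.
    try:
        return raw_bytes.decode('utf-8')
    except UnicodeDecodeError:
        return None  # skip the garbled line and wait for the next one

assert safe_decode(b'2.49\r\n') == '2.49\r\n'
assert safe_decode(b'\xff2.49\r\n') is None  # invalid first byte
```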

Here's the code that does all that:

import io
import sys
import serial

TIMEOUT_SECONDS = 1  # the value is illustrative; any short timeout will do

with serial.Serial(port, 9600, timeout=TIMEOUT_SECONDS) as ser:
    # We use a bi-directional BufferedRWPair so people who copy + adapt can write as well as read
    sio = io.TextIOWrapper(io.BufferedRWPair(ser, ser))
    while True:
        try:
            line = sio.readline()
        except UnicodeDecodeError:
            continue # decode error - keep calm and carry on

Next the program checks to see that a non-empty line was actually received, prints it to standard output, and flushes the newly printed information. That last step is necessary to make sure that new data is printed out as soon as it has been received. That will become vital in part three of this series.

        if len(line) > 0:
            print(line, end='')  # print the reading as soon as it arrives
            sys.stdout.flush() # to avoid buffering, needed for websocketd

Part three will describe how to make the analog data available in a browser that updates in real time. While you're waiting you can take a look at the code on GitHub.

Tuesday, 4 June 2019

Build a Web-based DVM using Arduino and Pi zero W

This three-part series shows you how to build a web-based 6-channel DVM (digital voltmeter) using an Arduino and a Raspberry Pi zero.

If you’re experienced and impatient you will find software and basic installation instructions on GitHub.

If you’re less experienced, don’t worry. All you need to know is:
  1. how to compile and download an Arduino sketch,
  2. how to connect your Raspberry Pi to your network, and
  3. how to open a terminal session on the Pi.
The project is fun, useful, and simple. You probably have the parts already, in which case the whole project will take about an hour to program and connect. You don’t need to solder anything.

The WebDVM uses some general techniques that you can apply to projects of your own.

In Part 1 (this post) you will install some simple Arduino code, connect the Arduino to a Pi, and check that the two are communicating.

In Part 2 you’ll use a short Python program on the Pi to read the Arduino’s output.

In Part 3 you'll use a simple technique to turn your Python program into a web server, and you'll use HTML and JavaScript to format the output. You'll see the web page displaying the voltages on the Arduino's six analog inputs in real time.

Why Pi and Arduino?

The project plays to the strengths of both platforms.

The Arduino has 6 analog inputs.

The Pi zero runs Python, it can connect using WiFi without extra hardware, and it’s inexpensive.

Both are widely available.

What hardware do you need?

Here’s what I used:
  1. An Arduino Uno.
  2. A Raspberry Pi zero W with wifi set up.
  3. A battery to power the hardware.
  4. Cables to connect the Pi and Arduino, and to provide power to the Pi.
  5. Something that runs a web browser on the same network as the Pi.

I used an Arduino Uno but an older model or a Leonardo would be fine. If you have a Nano or Mini you will need a way of connecting what you want to measure, using a breadboard and/or headers.

You can use a full sized Raspberry Pi if you want. Any model will work, and you can use a wired (Ethernet) connection instead of wifi.

Arduino software

The Arduino sketch is really simple. It
  1. sets up serial communication,
  2. reads each analog input, and
  3. prints the voltage to serial output.
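The scaling in step 3 is simple arithmetic: the Uno's 10-bit ADC returns a value between 0 and 1023 for voltages between 0 and 5 V. Sketched in Python:

```python
VCC = 5.0            # the Uno's analog reference voltage
SCALE = VCC / 1023   # volts per ADC count (10-bit converter)

def to_volts(reading):
    """Convert a raw analogRead() value to volts."""
    return SCALE * reading

# a mid-scale reading of 512 corresponds to roughly 2.5 V
halfway = to_volts(512)
```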

Here’s the sketch:

// 6 Channel DVM over serial connection

const float vcc = 5.0;
const float scale = vcc / 1023;
const int analogPins[] = {A0, A1, A2, A3, A4, A5};
const int delay_ms = 50;
const int channels = 6;

void setup() {
 // Initialize serial and wait for port to open:
 Serial.begin(9600);
 while (!Serial) {
   ; // wait for serial port to connect
 }
}

void loop() {
 // read each Analog channel, scale it,
 // and print a formatted version to the Serial output
 for (int i = 0; i < channels; i++) {
   float v = scale * analogRead(analogPins[i]);
   Serial.print("A");
   Serial.print(i);
   Serial.print("=");
   Serial.println(v);
 }
 delay(delay_ms);
}

Compile and upload the sketch to the Arduino.

If you open the serial monitor, you should see a window like this:

Now connect the Pi to the Arduino.

If you use an Arduino Uno or Leonardo, you'll need a USB cable with an A-type connector at one end; it will probably have a B-type connector at the other. If you want to connect that to a Pi zero, you'll need an adapter; if it's a full sized Pi you can connect directly.

Open a terminal window on the Pi.

If you type
ls -la /dev/ttyACM0
you should see an entry for /dev/ttyACM0.

/dev/ttyACM0 is a temporary teletype device that Raspbian created when you connected the Arduino.

It’s owned by root, and is in the dialout group. As user pi you should already be a member of the dialout group, so you can access the device.


Now type

cat -v /dev/ttyACM0

This lists the output that the Pi is reading from the Arduino, with non-printing characters displayed.

The ^M characters are carriage returns, sent by the Arduino at the end of each analog reading.
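Serial.println ends each line with a carriage return and a newline; when you read the data in Python you can strip both (the reading value here is made up):

```python
raw = b'2.49\r\n'  # example bytes, as sent by Serial.println
text = raw.decode('ascii').strip()  # strip() removes '\r' and '\n'
value = float(text)
```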

Coming next

That concludes the first part of the project. In the next part you’ll read the Arduino’s output in a Python program on the Pi.

Don't forget to follow @rareblog on Twitter to see when the next part is posted!

Friday, 24 May 2019

apl-reggie: Regular Expressions made easier in APL

What's APL?

Like GNU, APL is a recursive acronym; it's A Programming Language.

I first met APL at the IBM education centre in Sudbury Towers in London. I was a student reading Maths at Cambridge University, and IBM asked me to do a summer research project into a new technology called Computer Assisted Instruction. (I wonder what happened to that crazy idea?)

APL was one of the first languages to offer a REPL (Read Evaluate Print Loop), so it looked like good technology for exploratory programming.

APL was created by a mathematician. Its notation and syntax rationalise mathematical notation, and it was designed to describe array (tensor) operations naturally and consistently.

For a while in the '70s and '80s APL ruled the corporate IT world. These days it's used to solve problems that involve complex calculations on large arrays.
It's not yet used as widely as it should be by AI researchers or Data Scientists, but I think it will be, for reasons that deserve a separate blog post.

I use it a lot, as I have done throughout most of my professional life. These days APL is well integrated with current technology. There's a bi-directional APL to Python bridge and APL programs sit naturally in version control systems like GitHub.

The leading commercial implementation is Dyalog APL, and there's an official port that runs on the Raspberry Pi. It's free for non-commercial use. Dyalog APL's IDE is called RIDE; it runs in a browser and you can use it to connect to a local or remote APL session.

One feature of Dyalog APL is support for Perl-style regexes (regular expressions).

Regular expressions are useful but hard to read. A while ago I blogged about reggie-dsl, a Python library that allows you to write readable regular expressions. I mentioned that Morten Kromberg and I were experimenting with an APL version of reggie. apl-reggie is now ready to share.

APL already has great tools for manipulation of character data. Many text processing tasks can be solved simply and concisely using APL's array-processing primitives.

As a simple example, imagine that you want to sanitize some text in the way that 18th Century authors did, by replacing the vowels in rude words by asterisks.

I'll save your blushes by using 'bigger' as the word to be sanitized.

In APL you can find which characters are vowels by using ∊, the membership function.
     'bigger' ∊ 'aeiou'
0 1 0 0 1 0

The boolean result has a 1 corresponding to each vowel in the character vector, and a zero for each non-vowel.
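If APL's notation is new to you, here's the same membership test sketched in Python:

```python
word = 'bigger'
vowels = 'aeiou'

# 1 where the character is a vowel, 0 where it isn't - APL's ∊ in spirit
mask = [1 if c in vowels else 0 for c in word]
```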

You can compose the membership function with its right argument to produce a new vowel-detecting function:

     vi ← ∊∘'aeiou' ⍝ short for 'vowels in'
     vi 'bigger'
0 1 0 0 1 0

You can combine that with @ (the 'at' operator) to replace vowels with asterisks:

     ('*'@(∊∘'aeiou')) 'bigger'
b*gg*r
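In Python, the same sanitising step might look like this:

```python
word = 'bigger'

# replace each vowel with an asterisk, keep everything else
sanitized = ''.join('*' if c in 'aeiou' else c for c in word)
```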

If you want to do more complex pattern matching, regular expressions are a good solution.

Here's APL-reggie code to recognise and analyse telephone numbers in
North American format:

d3←3 of digit
d4←4 of digit
local←osp('exchange'defined d3)dash('number'defined d4)
area←optional osp('area'defined lp d3 rp)
international←'i'defined optional escape'+1'
number←international area local

You can use it like this:
'+1 (123) 345-2192' match number

and here is the result:
i +1
area (123)
exchange 345
number 2192

The original idea for reggie (and apl-reggie) came from a real application that processed CDRs (call detail records).

CDRs are records created by Telcos; they describe phone calls and other billable services. There are standards for CDRs. The example given below is a slightly simplified version of the real format:

N,+448000077938,+441603761827,09/08/2015,07:00:12,2,


That's a record of a normal (N-type) call from +448000077938 to +441603761827, made on the 9th of August 2015. It was made just after 7 AM, and it lasted for just 2 seconds.

Here's the declarative code that defines the format of a CDR:

call_type←'call_type'defined one_of'N','V','D'
number←plus,12 15 of digit
dd←2 of digit
year←4 of digit
date←'date'defined dd,slash,dd,slash,year
time←'time'defined dd,colon,dd,colon,dd
duration←'duration'defined digits
cc←'class'defined 0 50 of capital
cdr←csv call_type('caller'defined number)('called'defined optional number)date time duration cc

and here's the result when you run it on that record:

    'N,+448000077938,+441603761827,09/08/2015,07:00:12,2,' match cdr
call_type N
caller +448000077938
called +441603761827
date 09/08/2015
time 07:00:12
duration 2
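For contrast, here's roughly what the same CDR pattern looks like as a conventional Python regular expression (my own sketch, not part of apl-reggie; it makes the readability argument for you):

```python
import re

# Field names follow the APL example; 'cls' stands in for 'class'.
cdr_pattern = re.compile(
    r'(?P<call_type>[NVD]),'
    r'(?P<caller>\+\d{12,15}),'
    r'(?P<called>\+\d{12,15})?,'
    r'(?P<date>\d{2}/\d{2}/\d{4}),'
    r'(?P<time>\d{2}:\d{2}:\d{2}),'
    r'(?P<duration>\d+),'
    r'(?P<cls>[A-Z]{0,50})'
)

m = cdr_pattern.match('N,+448000077938,+441603761827,09/08/2015,07:00:12,2,')
assert m.group('caller') == '+448000077938'
assert m.group('duration') == '2'
```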

apl-reggie is now a public repository on GitHub.

Feel free to ask questions in the comments below. You may also get some help via the Dyalog support forums, although they didn't write the software and it's not officially supported.

If you want to experiment with this unique language you can do so at

Thursday, 23 May 2019

Another free tool for Jetson Nano users

jtop output
Raffaello Bonghi, one of the members of the unofficial Jetson Nano group on Facebook, has published jetson-stats, a toolkit for Jetson users.

jetson-stats works on all the members of the Jetson family.

My favourite program in jetson-stats is jtop. It's a greatly enhanced version of the Linux top command.

jtop shows a very useful real-time view of CPU and GPU load, memory use and chip temperature.

Find jetson-stats on GitHub, or install it via pip/pip3.

AI/DL explorers - two great, free resources for you

I'd like to share two really useful free resources for anyone exploring Artificial Intelligence and Deep Learning.


The first is netron - an Open Source tool for displaying Deep Learning models.

The image on the right is a small part of netron's display of resnet-18.


netron:

  • covers a wide range of saved model formats
  • is really easy to install
  • is MIT licensed
  • is implemented in JavaScript and
  • can be installed and invoked from Python.

Computer Vision Resources

The second find is Joshua Li's 'Jumble of Computer Vision' - a curated list of papers and blog posts about Computer Vision topics.

It's going to keep me reading for weeks to come :)

Many thanks to Joshua for making this available.

Wednesday, 22 May 2019

Five steps to connect Jetson Nano and Arduino

Yesterday's post showed how to link a micro:bit to the Jetson Nano.

One of the members of the (unofficial) NVIDIA Jetson Nano group on Facebook asked about connecting an Arduino to the Jetson.

Here's a simple recipe for getting data from the Arduino to the Jetson Nano. It should work on all the Jetson models, not just the Nano, but I only have Nanos to hand.

On request, I've added a recipe at the end of this post which sends data from the Jetson Nano to the Arduino; it turns the default LED on the Arduino on or off.

The recipe for sending data from the Arduino to the Jetson has just 5 stages:

  1. Program the Arduino. (I used the ASCIITable example.)
  2. Connect the Arduino to the Jetson using a USB cable.
  3. Install pyserial on the Jetson.
  4. Download a three-line Python script.
  5. Run the script.


Programming the Arduino

I used an Arduino Uno, and checked it on a veteran Duemilanove (above), but any Arduino should work.

You'll need to do this step using a workstation, laptop or Pi with the Arduino IDE installed. (There seems to be a problem with the Arduino IDE on the Nano, so I couldn't use that).

  1. Connect the Arduino to your workstation via USB.
  2. Open the Arduino IDE.
  3. Select File/Examples/04.Communication/ASCIITable.
  4. Upload the example sketch to your Arduino.
  5. Open the Serial Monitor from the Tools menu. You should see a list like this one:

Connect to the Nano

Unplug the Arduino from your workstation and plug it into one of the USB ports on the Jetson Nano.

Install pyserial

If you've been following my tutorial series, open a browser and open the Jupyter lab page. Open a terminal.

If you're using the Nano with a monitor, mouse and keyboard, open a terminal window by typing Ctrl-Alt-T.

Type sudo -H pip3 install pyserial

Once the installation is complete, you're ready to download a short Python script.

Download the script

In the terminal window, type

wget -O

(That's a capital O, not a zero!)

When that has finished, type


The contents of the file should look like this:

import serial

with serial.Serial('/dev/ttyACM0', 9600, timeout=10) as ser:
    while True:
        print(ser.readline())

Run the file

Type python3

You should see a display like this:

Sending data the other way

You'll need this program on the Arduino.

void setup() {
  // start serial port at 9600 bps:
  Serial.begin(9600);
  // initialize digital pin LED_BUILTIN as an output.
  pinMode(LED_BUILTIN, OUTPUT);
  while (!Serial) {
    ; // wait for serial port to connect.
  }
}

void loop() {
  char buffer[16];
  // if we get a command, turn the LED on or off:
  if (Serial.available() > 0) {
    int size = Serial.readBytesUntil('\n', buffer, 12);
    if (buffer[0] == 'Y') {
      digitalWrite(LED_BUILTIN, HIGH);
    }
    if (buffer[0] == 'N') {
      digitalWrite(LED_BUILTIN, LOW);
    }
  }
}
And this one on the Nano (call it

import serial

with serial.Serial('/dev/ttyACM0', 9600, timeout=10) as ser:
    while True:
        led_on = input('Do you want the LED on? ')[0]
        if led_on in 'yY':
            ser.write(b'Y\n')
        if led_on in 'Nn':
            ser.write(b'N\n')

Plug the Arduino's USB lead into the Nano, and run the program on the Nano by typing


The program will ask you repeatedly if you want the LED on or off.

Answer Yes and the LED on the Arduino will turn on.
Answer No and the LED will turn off.

Have fun!

Tuesday, 21 May 2019

Connect the Jetson Nano to the BBC micro:bit in 5 minutes or less

There's huge interest in using the Jetson Nano for Edge AI - the place where Physical Computing meets Artificial Intelligence.

As the last few posts have shown, you can easily train and run Deep Learning models on the Nano. You'll see today that it's just as easy to connect the Nano to the outside world.

In this simple proof-of-concept you'll see how to connect the Nano to a BBC micro:bit and have the Nano respond to an external signal (a press on the micro:bit's button A).

You'll need

  1. a Jetson Nano
  2. a computer with the Mu editor installed
  3. a BBC micro:bit
  4. a USB lead to connect the Jetson to the micro:bit

Here's a video that takes you through the whole process in less than 5 minutes.

I'll be posting more complex examples over the next few days. To make sure you don't miss them, follow @rareblog on twitter.

Monday, 20 May 2019

Getting Started with the Jetson Nano - part 4

I'm amazed at how much the Nano can do on its own, but there are times when it needs some help.

A frustrating problem...

Some deep learning models are too large to train on the Nano. Others need so much data that training times on the Nano would be prohibitive.

Today's post shows you one simple solution.

... and a simple solution

In the previous tutorial, you went through five main steps to deploy the TensorFlow model:
  1. Get training data
  2. Define the model
  3. Train the model
  4. Test the model
  5. Use the model to classify unseen data
Here's the key idea: you don't have to do all those steps on the same computer.

Saving and Loading Keras Models  

The Keras interface to TensorFlow makes it very easy to export a trained model to a file. That file contains information about the way the model is structured, and it also contains the weights which were set as the model learned from the training data.

That's all you need to recreate a usable copy of the trained model on a different computer.

Rock, Paper, Scissors

The notebook you'll run in this tutorial will make use of Laurence Moroney's open source rock paper scissors dataset.

This contains images of hands showing rock, paper or scissors gestures. The hands come in a variety of shapes, sizes and skin colours, and they are great for experimenting with image classification.

Laurence uses that dataset in the Convolutional Neural Networks with TensorFlow course on Coursera.

(I strongly recommend the course, which is very well designed and superbly taught).

In one of the exercises in the course, students save a trained model that recognises the rock, papers, scissors hands.

They create and run the model using Google's amazing Colab service.

If you haven't come across it, Colab allows users who have registered to run deep learning experiments on Google's powerful hardware free of charge. (Colab also has a great online introduction to Jupyter notebooks.)

Today's notebook will show you how to load some data, and then load the trained model from the course onto your Nano. Then you can use the model to classify images.

You don't need to go through the process of training the model - I've done it for you.  

(Thanks to Laurence Moroney for permission to use the course model for this tutorial!)

Before you run the notebook

There are a couple of steps you need to take before you can run the next notebook.

The first step will set up a swap file on your Nano. If you're not familiar with the term, a swap file can be used by your Nano's Ubuntu Operating System to pretend to programs that the Nano has more than 4 GB of RAM.
The TensorFlow model you'll be using with rock, paper, scissors has over 3.4 million weights, so it uses a lot of memory. The swap file will allow you to load and run the model on the Nano.

The second step will install scipy, another Python package that is required for the model to run.

Here's what you should do first:

Setting up a swap file

You'll be using a script set up by the folks at JetsonHacks. You'll clone their  repository from GitHub and then run the script. After that, you'll reboot the Nano to make sure that your swap file is properly set up.

1. Open a Jupyter lab window on your Nano as you did in the previous tutorial.

2. Click on the + sign to the left, just below the Edit menu to open a new Launcher.

3. Click on the Terminal icon at the bottom left hand of the right-hand pane.
A terminal window will open.

4. In the terminal window, type the following commands:

cd ~
git clone
cd installSwapfile/ 
sudo ./ -s 2
Enter your password when asked to do so.

5. You should see output like this:

Cloning into 'installSwapfile'...
remote: Enumerating objects: 22, done.
remote: Counting objects: 100% (22/22), done.
remote: Compressing objects: 100% (17/17), done.
remote: Total 22 (delta 8), reused 16 (delta 5), pack-reused 0
Unpacking objects: 100% (22/22), done.
romilly@nano-02:~$ cd installSwapfile/
romilly@nano-02:~/installSwapfile$ sudo ./ -s 2
[sudo] password for romilly:
Creating Swapfile at:  /mnt
Swapfile Size:  2G
Automount:  Y
-rw-r--r-- 1 root root 2.0G May 20 10:02 swapfile
-rw------- 1 root root 2.0G May 20 10:02 swapfile
Setting up swapspace version 1, size = 2 GiB (2147479552 bytes)
no label, UUID=b139aeed-54d0-4cda-9ccf-962b9fefd3b8
Filename                Type        Size    Used    Priority
/mnt/swapfile                              file        2097148    0    -1
Modifying /etc/fstab to enable on boot
Swap file has been created
Reboot to make sure changes are in effect

6. Reboot the Nano and check the swap file has been installed.

Type sudo reboot

Enter your password if asked to do so.

Your Nano will close down and then re-start. You'll probably see a warning in your browser saying that a connection could not be established. That's quite normal; the browser has noticed that the Nano is re-booting.

After 30 seconds or so, reload the lab web page. It should reconnect to the Nano.

Once again, open the launcher and start a terminal window.

In it, type

cat /etc/fstab

You should see output like this:

The key line reads /mnt/swapfile swap swap defaults 0 0

If you can see that, the swapfile has been installed correctly.

Install scipy

The TensorFlow example you'll be running requires another Python package called scipy.

scipy has a lot of useful features, but it takes a while to install, and it relies on a couple of other Ubuntu packages. Install the prerequisite packages first.


sudo apt-get install build-essential gfortran libatlas-base-dev
And enter your password when it's requested.

When asked if you want to continue with installation, say yes.

You'll see output like this:

Now you're ready to install scipy.


sudo -H pip3 install scipy
Once you've entered the line above, leave the installation running. It took about 30 minutes on my Nano.

At the end you should see this:

You've one more step to take in order to install the saved network and the next notebook, and it is very quick.

Make sure you are in the nano directory. (If not, just enter cd ~/nano ).

Type git pull

This will load the changes from my nano repository on GitHub. These include the saved network you'll be using and the notebook you'll run.

The left-hand pane of your browser window should look like this (of course, the updated times will be different):

Double-click on the notebooks folder in the left-hand pane.

The pane should now look like this:

The data folder contains a file called rps.h5 which contains the saved neural network.

The notebooks folder contains three notebooks; you'll use the one called loading.ipynb.

Double-click on loading.ipynb

Your screen should look like this:

From the Run menu, select Run All Cells.

See the Notebook running

You'll see your notebook load the data, load the network and print out its structure. Then you'll see it classify an image and display the image it processed. Here's a video:

Congratulations! Your Nano is now that little bit smarter :)

In the next post, you'll see transfer learning - a very valuable technique that allows you to take a pre-trained network and fine-tune it to perform a specialised task.

If you have questions, comments or suggestions, come and visit the (unofficial) Jetson Nano group on Facebook.

To stay up to date, follow me (@rareblog) on Twitter.


Wednesday, 15 May 2019

Getting started with the Jetson Nano - part 3

Jetson Nano image courtesy of NVIDIA/Pimoroni
In part 2 of this series you prepared your Jetson Nano for software installation.

In this part you'll install Jupyter Notebook, JupyterLab, TensorFlow and some other software that is needed to run the first TensorFlow notebook.

Once started, you can leave the software installation to run; it takes about an hour on a Nano in 10W power mode. It probably takes a little longer if you're using a 2.5A supply.

There's a final manual stage which takes a couple of minutes.

When that's complete you'll be able to work through the TensorFlow example, training a neural net to recognise images of fashion items and then testing it on previously unseen images.

Here's what you'll do, in a little more detail.

Installing the software

Open a terminal window on the Nano (the short-cut ctrl-alt-T should do it).

You'll be in your home directory; type

git clone

You should see a message 'cloning into nano' followed by some descriptive text.

Next, change into the newly-created nano directory and list the contents. Type

cd nano

You should see a display like this: 

Now change into the scripts directory, prepare the scripts for execution, and check the contents. Type

cd scripts
chmod a+x *.sh

The result should look like this:

If you're cautious (and you should be!) you can check the contents of the scripts before you run any of them. You can use the Linux cat command. Type


The result should look like this:

The script has comments which explain what each line does.

You're ready to start installation.


sudo -H ./

You will be asked for your password. Enter it.

The installation process will start. You can go and drink a coffee, cook a meal, or go for a run - the installation should need no input, but it will take an hour or more.

The final screen should look something like this:

 Nearly there! The next steps will only take a couple of minutes.

Configure Jupyter 

Now you need to configure Jupyter Notebook ready for use.

You had to run the previous command as the super-user, using the sudo command, but you shouldn't do that for the next command. Just type


That will set up the Jupyter notebook with nano as its password.

If you want a different password - foo for example - you could instead type

./ foo

One more step and you'll be ready to go. The next step will install the Jupyter server as a service. In other words, Jupyter will start up whenever the Nano is rebooted. Since you are changing the system configuration, you will need to run the final script as sudo. Type

sudo ./

This will install the service and start it.
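For the curious: installing Jupyter "as a service" on Ubuntu usually means dropping a systemd unit file into place, which is what the script does for you. A unit of this kind looks roughly like the sketch below — the paths, user name and unit name here are illustrative guesses, not the script's actual contents:

```ini
# /etc/systemd/system/jupyter.service  (illustrative sketch)
[Unit]
Description=Jupyter Notebook server
After=network.target

[Service]
User=nano
WorkingDirectory=/home/nano
ExecStart=/usr/local/bin/jupyter notebook
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

A unit like this is enabled and started with `sudo systemctl enable --now jupyter`, and `WantedBy=multi-user.target` is what makes it start on every boot.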

To check that it's running, go to a computer on your local network and open a browser on

http://<hostname>:8888

where <hostname> is the hostname you chose when installing Ubuntu on your Nano.

You'll be asked for a password. Enter nano, or the password you chose just now when configuring Jupyter, if you decided not to accept the default.

You should see a screen like this:

Fantastic! You now have a powerful Deep Learning laboratory ready for your experiments.

And there's more good news. From now on, you have a choice. You can continue to use your Nano with a monitor, mouse and keyboard, or you can use it remotely from any computer on your network. That's how I work.

If you want to un-clutter your workspace by removing the monitor, mouse and keyboard, the next step will involve

  1. shutting down your Nano (see below)
  2. removing power.
  3. unplugging the peripherals, and then
  4. reapplying power

There are two ways you can shut down the Nano. You could use the mouse to click on the power icon at the top right of the Ubuntu desktop, but that won't work once you start running the Nano headless (without keyboard, mouse and monitor attached).

Instead you can shut the Nano down from the browser window! Here's a short video:

To do that,

  1. Click on the terminal icon in the launcher window. A terminal window will open.
  2. In the terminal window, type sudo shutdown -h now
  3. When prompted, enter your password.
In a couple of seconds you should see the green light on the Nano go out. It is now safe to remove power by unplugging the power supply. You can close the browser tab containing the Jupyter interface.

Unplug the monitor, mouse and keyboard. If you want to, you can move the Nano to a new home at this stage. Remember that it will need power and an Ethernet connection in its new home!

Once the Nano is where you want it, make sure the Ethernet cable is plugged in and apply power. The green light on the Nano will come on again.

Give the Nano a few seconds to boot up and open a browser window on the same URL as before: http://<hostname>:8888

Once again, you'll be asked for the Jupyter password. Next you'll open a Jupyter Notebook and run your first TensorFlow example.

Below, you'll find step by step instructions. First, a video.

Here's what you need to do:

  1. In the left-hand pane, click on notebooks.
  2. Click on basic-classification.ipynb. The notebook will open.
  3. Click on the Run menu and choose Run All Cells.
Jupyter will run each code cell in the notebook, displaying the results as the code completes. You'll see some of the fashion images that the notebook is using to train the network, and at the end you'll see how well images are classified once training is complete.
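Under the hood, "classifying" an image just means scoring each of the ten fashion labels and picking the highest. Here's a toy, framework-free illustration of that final step — the scores below are made up for the example; in the real notebook they come from the trained network:

```python
import math

# The ten Fashion-MNIST class labels
LABELS = ["t-shirt", "trouser", "pullover", "dress", "coat",
          "sandal", "shirt", "sneaker", "bag", "ankle boot"]

def softmax(scores):
    """Turn raw network outputs into probabilities that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]  # shift for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def classify(scores):
    """Return the label with the highest probability."""
    probs = softmax(scores)
    return LABELS[probs.index(max(probs))]

print(classify([0.1, 0.2, 0.1, 0.1, 0.1, 0.1, 0.1, 5.0, 0.1, 0.1]))  # sneaker
```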

If you're new to Deep Learning, Jupyter or TensorFlow you may not follow everything that's going on.

Don't worry.

The next posts will take a much slower-paced run through another notebook, with more detailed explanation, and introduce the ideas of model import and export, and  transfer learning.

To make sure you don't miss out on future posts, follow me (@rareblog) on Twitter.

Monday, 13 May 2019

Getting Started with the Jetson Nano - Part 2

Learning with the Nano
It's taken a while to get here, but I now have a simple, repeatable set-up process for installing and running TensorFlow on the Jetson Nano using Jupyter Notebooks.

It's simple and repeatable but slow. Jupyter Notebook saves a lot of time and angst once it's available but it takes a while to install.

Fortunately I now have a script that automates the installation so you can go away and drink a coffee while the installation runs.

Before you can install the software, though, you need to complete the installation of Ubuntu. That's what this post covers.

My first post about the Nano described the hardware you need and pointed you to instructions that explain how to prepare your SD card.

Once you've got your hardware and have prepared the SD card, it's time to fire up the Nano.

Getting ready

Plug in the HDMI cable, the Ethernet cable, the keyboard and the mouse. There's a good illustration on the Jetson Nano 'getting started' page.

SD card before it's pushed in
Next, insert the SD card in its socket. The socket is not easy to find  and the picture on the NVIDIA site is a bit confusing.

I hope this picture makes things clearer >>>

This is taken from the bottom of the Nano. When you look at the Nano from above, the SD card should have its shiny contacts facing you.

Once the card is in its slot, push it in. It should click home. It is then almost invisible.

Now you're ready to apply power.

Powering up

The power-up process is slightly different depending on whether you are using the recommended 4A supply with a barrel jack or a 2.5A supply with a micro USB connector.

Using a 4A supply

4A barrel jack power + jumper
If you are using a 5V 4A supply you should place a jumper on J48. It's just behind the power jack socket and it's marked ADD JUMPER TO DISABLE USB POWER.

Now insert the power jack. The green light should come on and the Nano will boot.

Skip the next paragraph and carry on with the steps below, headed 'Nano - first boot'.

Using a 2.5A supply

USB power
If you're using a 5V 2.5A supply, insert the USB micro-connector into the Nano's USB socket. The Nano should look like this.

Once again, the green light should come on and the Jetson will boot.


Nano - first boot

As the Nano starts up you'll see messages scroll by.  After about 20 seconds you should see a dialog box asking you to accept the NVIDIA license agreement.

Next Ubuntu will ask you questions to help it manage the installation process. You'll need to specify what language you want to use, what keyboard you're using, and which timezone you're in.

After that you'll be asked for your name, the hostname you want the Nano to have on your network, and a password. I don't recommend passwordless login; even on a home network, I don't think it's safe.

The installation will carry on for a few minutes and the Nano should then reboot. The process sometimes appears to hang with a dialog waiting for an auto-update. If so, you can just cancel the dialog. You can update the Jetson software at the start of the TensorFlow installation process.

Once the Jetson has booted you should see a standard Ubuntu login page. Select your username, enter your password, and you'll see the Ubuntu desktop.

Coming next

In the next post you will install TensorFlow and Jupyter notebook. Then you'll start the notebooks server and train a TensorFlow model which will recognise images of fashion items and classify them.

Don't miss the next steps:  follow me (@rareblog) on Twitter.

Monday, 6 May 2019

Getting Started with the Jetson Nano

If you want to get to grips with Deep Learning at the Edge, NVIDIA's Jetson Nano is an excellent choice.

It combines a fast quad-core Arm processor with a powerful NVIDIA GPU capable of just under 0.5 teraflops.
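That "just under 0.5 teraflops" figure checks out arithmetically if you take NVIDIA's published specs — 128 Maxwell CUDA cores at a 921.6 MHz peak GPU clock, each core doing two FP16 fused multiply-adds (four floating-point operations) per clock. Treat the numbers below as my reading of the datasheet rather than gospel:

```python
cores = 128            # Maxwell CUDA cores on the Nano
clock_hz = 921.6e6     # peak GPU clock
flops_per_core = 4     # 2 FP16 FMAs per clock, each counted as 2 FLOPs

peak_flops = cores * clock_hz * flops_per_core
print(peak_flops / 1e12)  # ≈ 0.472 teraflops
```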

Better yet, it has 4 gigabytes of RAM. That's enough to do serious work with TensorFlow or PyTorch, both of which are supported on the platform.

And it costs around $100 in the USA, or around £120 with shipping in the UK!

There are instructions for setup on the NVIDIA Jetson Nano website, but I found them a bit daunting, and they miss out on a couple of bits of advice I wish I'd had when I started.

Before you start

The first time you boot your Nano you'll want
  1. A micro-SD card. I recommend at least 32 GB; 64 GB would be better.
  2. An HDMI monitor. It must be HDMI. You cannot use a DVI monitor with a DVI-to-HDMI adaptor of the sort you might use with a Raspberry Pi or a workstation.
  3. A USB mouse.
  4. A USB keyboard.
  5. An Ethernet cable.
  6. A suitable 5V power supply.

A 5V 2.5A supply with a micro USB connector will work. A 4A supply with a 5.5mm outer diameter (OD) x 2.1 mm inner diameter connector is better, for reasons I've set out below.

If you decide to go with a 4A supply you'll also need a jumper.

Here's why I advise you to get a 4A supply.

Out of the box your Nano will use 2A of current. However, your mouse and keyboard may take the current draw over 2A. If the voltage drops your Nano will shut itself down. This happened to me on my first attempt, and it's very disconcerting.

A 2.5A supply should avoid that problem, but there's another.

The Nano has two power modes. The default mode only uses 2 amps but this restricts the CPU to single core operation. The max power configuration will give you full access to the Nano's processing power, but the Nano will then draw more than 2A of current.
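A quick back-of-the-envelope check makes the supply advice concrete (P = V × I; the 2A and 4A figures come from the text above):

```python
volts = 5.0

def watts(amps):
    """Power available from a 5V supply at the given current."""
    return volts * amps

default_budget = watts(2.0)   # 10 W: roughly the Nano's default power budget
max_budget = watts(4.0)       # 20 W: headroom for max-power mode plus USB peripherals
print(default_budget, max_budget)  # 10.0 20.0
```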

If you decide to stick with a 2.5A power supply and you're based in North America you can use this one.

Alternatively you can use the official Raspberry Pi power supply for the Pi model 3B, since that also provides 2.5A.

I've recommended two 4A supplies below. One comes from North America, and the other is available from a UK supplier.
  1. In North America: Adafruit 5V 4A Switching PSU
  2. In the UK: 5V 4A 4000mA AC-DC Switching Adaptor Power Supply
The Adafruit supply was out of stock when I wrote this, possibly due to demand from Nano owners, but you can ask to be notified when stock is available.

If you're going to power the Nano with a 4A supply you'll also need a jumper to tell the Nano to power itself from its power jack socket instead of the micro-USB socket.

The NVIDIA website has details, but I suggest you visit the excellent JetsonHacks website which has more information on power options and shows you just where to install the Jumper.

Prepare the SD card


You'll need to download the latest NVIDIA Jetson Nano image and write it onto the SD card using an SD card writer and appropriate software on your computer.

If you've prepared an SD card for a Raspberry Pi the process is very similar.

The way you do it depends on the Operating System you're using. The NVIDIA website provides instructions for Windows, Mac and Linux, and they're pretty good.

Tomorrow I'll go through the remainder of the setup process and describe how to train your first Deep Learning model on the Nano. Later I'll be posting more details of my Nano experiments.

To stay up to date, you can follow me (@rareblog) on Twitter.