Showing posts from 2016

Three easy ways to avoid off-by-one errors in your software

Here's how to avoid one of the most common software bugs. If you're in the software business you've probably encountered the dreaded 'off-by-one' error. An off-by-one error often occurs when a loop iterates one time too many or too few. Someone might use "is less than or equal to" where "is less than" should have been used. It's also easy to forget that a sequence starts at zero rather than one. To put it another way, we sometimes fail to distinguish between a count (how many are there?) and an offset (how far is that from here?). Ten gaps in eleven posts. These problems aren't unique to programming. At the turn of the last millennium, there was a lot of discussion about whether it should have been celebrated on 1st January 2000 or 2001. In debates about the millennium, off-by-one errors may not be too serious. In software they can be fatal. How can you avoid them? Here are three techniques which help.
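The count-versus-offset distinction above can be made concrete with a short sketch. This is my illustration in Python, not code from the original post:

```python
# Fencepost arithmetic: eleven posts in a row have only ten gaps between them.
posts = 11
gaps = posts - 1  # a count of gaps, one fewer than the count of posts

# Offsets (indices) run from 0 to len(items) - 1; the count is len(items).
items = ["a", "b", "c", "d"]
count = len(items)            # how many are there? 4
last_offset = len(items) - 1  # how far is the last one from the start? 3

# Looping with "<" against a count visits each item exactly once;
# "<=" here would be an off-by-one error, reading past the end.
visited = 0
i = 0
while i < count:
    visited += 1
    i += 1

print(gaps, count, last_offset, visited)
```

Swapping `<` for `<=` in the loop condition is exactly the "one time too many" mistake described above.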

Learn APL on the $5 Raspberry Pi

APL is one of the most productive programming languages ever invented, and it's available free on all models of the Raspberry Pi. I've just launched an early access version of an introductory eBook, and it's free for the next seven days. You should read this book if you want to
• find out what programming in APL is like
• learn how to use the language effectively
• decide if APL is appropriate for your project
• take part in the annual Dyalog APL problem-solving competition
The fast-paced introductory text will teach you the core of the language in a few short, fun sessions. Once you've finished the book you'll get links to free resources you can use to master the rest of APL's amazing capabilities. The book is only 30% complete at present, but if you 'buy' the free version now you'll pay nothing for the book, and you'll get free updates as soon as they are published. I'll start to increase the price next Sunday.

A new/old approach to Parallel processing

Morten Kromberg of Dyalog APL has just published a video of his keynote from the PLDI 2016 ARRAY Workshop. It's titled Notation for Parallel Thoughts and it describes some exciting innovations in the field of programming for parallel processing.

More reasons to enter the Dyalog Problem-solving competition

A few weeks ago I mentioned the annual Dyalog APL problem-solving competition. I've been researching previous contests, and I wanted to share my findings. It's worth entering, even if you don't know APL. Many of the previous winners learned APL just for the competition. They spent a few days learning the language, and a few more working on the contest problems. Some won top prizes (worth $2000 this year). It's worth re-entering, even if you entered last year. Many of the winners had applied before. Some even won prizes in successive years. It's worth entering wherever you live. Winners have come from all over the world. It's worth entering even if you don't win. I've taken this list of reasons from my forthcoming introduction to APL. (The book should be available in time to help you with your competition entries!) 5 good reasons to learn this powerful language: APL is concise and expressive, so you can try out new ideas very quickly.

Help me with a book title and win a Raspberry Pi model 3!

I need a snappy title for a book. The book is an introduction to the Dyalog implementation of the APL programming language. It is aimed primarily at people learning it on the Raspberry Pi. APL runs on all models of the Pi, including the £4/$5 Pi zero shown on the right. You can download a copy of Dyalog APL for the Pi here. If you submit a title as a comment, if you are the first to submit it, and if I use it, I will send you a Raspberry Pi model 3 complete with a power supply and an SD card. No royalties, though, and you will need to find a monitor, keyboard and mouse. If you already have a Pi I will send one to the beneficiary of your choice. Pi3B - Image (c) the Raspberry Pi Foundation. The book is not yet complete, but it should be available in early access format on Leanpub in a few days' time. Have a go - post your title below.

ANNSER - A Neural Network Simulator for Education and Research

I've just launched ANNSER on GitHub. ANNSER stands for A Neural Network Simulator for Education and Research. It is licensed under the MIT license, so it can be used for both open source and commercial projects. ANNSER is just a GitHub skeleton at the moment. I have some unreleased code which I will be committing over the next few days. I'm hoping that ANNSER will eventually offer the same set of features as established ANN libraries like TensorFlow, Caffe and Torch, and I would like to see a GUI interface to the ANNSER DSL. ANNSER will be implemented in Dyalog APL. The GUI will probably be implemented in JavaScript and run in a browser. All the code will run on the Raspberry Pi family, though you will be able to use other platforms if you wish. There's a huge amount of work needed to complete the project, but we should have a useful Iteration 1 within a few weeks. Why APL? I have several reasons for choosing APL as the main implementation language.

Mapping Kent Beck's Mind :)

If you don't work in software you may never have heard of Kent Beck, but he's had a huge influence on the way we test and write code. Yesterday Kent posted a fascinating list on Facebook. He shared some of the key ideas that guide his thinking. The post is interesting and stimulating, but it's a wall of text. I love reading, but I also like to think visually, so I started to mind map what he wrote. It's slowly growing. The map source (made with Freeplane) and images are now on GitHub. Kent suggested that this might be the basis of a workshop: "Seems like this could turn into a workshop pretty easily. Spend three days mapping your current ideas, figuring out the holes you want to fill, what you want to eliminate." Three days sounds a lot, but maybe we could do a shorter version via a Google hangout. Anyone interested? If so, please comment.

Neural networks on the Raspberry Pi: Sigmoid, tanh and RL neurons

A brief introduction to ANNs - part 3. In the previous post about ANNs we looked at the linear neuron and the perceptron. Perceptrons have been used in neural networks for decades, but they are not the only type of neuron in use today. When they were first invented, they seemed capable of learning almost anything. However, in 1969, Minsky and Papert published their book 'Perceptrons', which showed that a single perceptron could never be trained to perform the XOR function. You'll see in the next post why this is so (and why it's not a huge problem), but for now, let's look at three other common neuron models. Like the linear neuron and perceptron, these start by calculating the weighted sum of their inputs. Recall that you can implement the linear neuron like this:       ln←{⍺+.×⍵} A sigmoid neuron calculates the same weighted sum of inputs, but then it applies the sigmoid function to the result. The sigmoid function is defined on Wikipedia.
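To make the sigmoid neuron concrete, here is a minimal sketch in Python rather than the post's APL. The only assumption is the standard logistic function σ(x) = 1/(1+e⁻ˣ), which squashes the weighted sum into the range (0, 1):

```python
import math

def sigmoid(x):
    """The logistic sigmoid: squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_neuron(weights, inputs):
    """Weighted sum of the inputs, then the sigmoid activation."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return sigmoid(weighted_sum)

# Weights that cancel give a weighted sum of 0, and sigmoid(0) = 0.5.
print(sigmoid_neuron([1.0, -1.0], [0.5, 0.5]))  # → 0.5
```

Unlike the perceptron's hard threshold, the sigmoid's output varies smoothly with its input, which is what makes gradient-based training possible.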

Student? Expert Problem Solver? Win $2000 and a free trip to Glasgow

If you like coding and solving problems, and are a full-time student, you could win up to $2000 and an expenses-paid trip to a conference in Glasgow later this year. All you need is a computer and some free software. The computer could be a Raspberry Pi (any model) or a laptop running Windows, OS X or Linux. I'll tell you where to get the APL software further down this post. First, though, a warning. If you enter this competition it could change your life! I'm serious. Just under fifty years ago I had a chance to learn APL. I did, and it shaped my whole career. I'm still using APL to research neural networks. Now, if you want, it's your turn. The Dyalog APL 2016 problem-solving competition: Dyalog have just announced their annual APL problem-solving competition. They want to introduce more people to this extraordinary, powerful language. If you are a full-time student you could win a big cash prize (up to $2000) and an expenses-paid trip to the conference in Glasgow.

A new Raspberry Pi robot joins the family

Yesterday saw the arrival of a Raspberry Pi robot kit from The Pi Hut, and I'm finding it hard not to drop everything and have a play. The Pi Hut has close links with CamJam, which is, I think, the first Raspberry Jam, based in the Cambridge area. Working with The Pi Hut they have created three excellent EduKits: inexpensive, fun kits which introduce Raspberry Pi owners of all ages to the fun of physical computing. The earlier kits came with excellent instructions and the robot kit does too. I'm sure I will succumb to temptation and start exploring the kit in the next day or two. Expect a progress report soon. My immediate priority is more urgent: I'm talking at the BAA meeting tomorrow, and I need to make sure I'm properly prepared. Dyalog Visit: I nearly blew it earlier this week. I went along to visit my friends at Dyalog to talk about my neural network research and show them APL running on the new Pi zero. I thought I had taken everything.

The new Raspberry Pi zero is here - and it's snappy!

Spot the difference! The new Raspberry Pi zero is out, and it has a camera connector. The picture on the right compares the new zero with its predecessor. They are very, very similar, but the clever folks at Pi Towers have re-routed the board to make room for a camera connector while keeping the size of the board unchanged. I've had a chance to play with the new Pi for a few days now and I love it. You can read my plans below, but the main thing is that the new feature has been added without sacrificing the zero's already awesome capabilities. As you'd expect, existing software runs just as it did before. The new zero is currently in stock at several dealers in the UK and the USA. Details are on the Raspberry Pi website. Dealer info is at the bottom of their post. A camera has been one of the most-requested features for the zero. It opens up a huge range of new, exciting projects. There will be a huge demand for the new zero. Let's hope the stocks hold out.

Neural networks on the Raspberry Pi: More Neurons

A brief introduction to ANNs - part 2. The previous example of a neuron was a bit far-fetched: its activation function doubled the weighted sum of its inputs. A simpler (and more useful) variant just returns the weighted sum of its inputs. This is known as a linear neuron. The linear neuron: in APL, you could implement the linear neuron like this:       ln←{+/⍺×⍵}   and use it like this:       1 0.5 1 ln 0.1 1 0.3   which returns 0.9. Inner product: however, there's a neater and more idiomatic way to define it in APL. A mathematician would call the ln function the dot product or inner product of ⍺ and ⍵, and in APL you can write it as       ln←{⍺+.×⍵} There are several reasons to use the inner product notation. It's concise, it's fast to execute, and (as we'll see later) it allows us to handle more than one neuron without having to write looping code. Linear neurons are sometimes used in real-world applications.
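The APL one-liner has a direct analogue in most languages. Here is a minimal sketch in Python (my illustration, not code from the post) of a linear neuron as the dot product of a weight vector and an input vector:

```python
def linear_neuron(weights, inputs):
    """A linear neuron: the dot product (inner product) of weights and inputs."""
    return sum(w * x for w, x in zip(weights, inputs))

# The example from the post: weights 1 0.5 1, inputs 0.1 1 0.3.
# 1*0.1 + 0.5*1 + 1*0.3 ≈ 0.9 (up to floating-point rounding).
print(linear_neuron([1, 0.5, 1], [0.1, 1, 0.3]))
```

What the APL inner product `+.×` buys you over this explicit loop is that the same expression extends, unchanged, to a whole layer of neurons when the weights become a matrix.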