Wednesday, 7 June 2023

Draw pictures on your Pico!

Would you like to draw pictures on your Pico? 

A while ago I experimented with sketcher - a program for the Raspberry Pi Pico that connected to a Raspberry Pi. The Pico had two potentiometers that you could use to move a virtual pen on the Pi; the Pi ran the turtle graphics package, which it used to draw the pictures.

The project was fun, and I wondered how easy it would be to write a similar program that used a display attached to the Pico.

That potential project sat in my pending tray for ages, but I recently got an incentive to re-visit it.

A new resource

Pimoroni have asked me to write a booklet for their Pico Explorer Base. I've been a fan of the product since it first came out, and I'm really enjoying a chance to revisit its awesome capabilities.

I decided I'd write a simple sketcher program that ran on the Pico. It uses two potentiometers to move a virtual pen up/down and left/right. The pen draws on the Explorer base display.
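
The heart of such a sketcher is the mapping from potentiometer readings to a pen position on the screen. Here's a rough sketch of that idea - not the booklet's actual code; the 240x240 display size and the 16-bit ADC range (as returned by the Pico's read_u16) are assumptions:

```python
def map_reading(reading, size=240, adc_max=65535):
    """Map a raw ADC reading (0..adc_max) to a pixel coordinate (0..size-1)."""
    return min(size - 1, reading * size // (adc_max + 1))


class Pen:
    """Track a virtual pen's position as two potentiometer readings change."""

    def __init__(self, width=240, height=240):
        self.width = width
        self.height = height
        self.x = self.y = 0

    def update(self, x_reading, y_reading):
        # On a Pico these readings would come from machine.ADC(...).read_u16()
        self.x = map_reading(x_reading, self.width)
        self.y = map_reading(y_reading, self.height)
        return self.x, self.y
```

In the real program each new position is drawn as a line segment from the previous one, so turning the pots leaves a trail on the display.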

I've now got a simple version running, and I intend to enhance it over the next day or so.

Here's the first sample output.

I'd like to claim that it's a carefully designed image of a dog, but it's actually the result of a completely random experiment. The software works, though, and users more dexterous than me should be able to create some more artistic images.

The booklet is still under development, and won't be available for a while, but when it is you'll be able to read about it on Twitter, on this blog, and on the Pimoroni website.



Tuesday, 9 May 2023

ChatGPT Meets de Bono: A Powerful Combo

I'm struggling to make a decision at the moment, so I've enlisted the help of ChatGPT.

I am experimenting a lot with some of the new AI tools, and it's a bit like playing whack-a-mole at the moment; as soon as I get to grips with one bit of software, another appears and demands my attention.

A lot of what I'm doing is based on OpenAI's ChatGPT. I'm concerned that I'll spend too much on paid services, so I am actively looking at Open Source alternatives that I could run locally.

Jetson Xavier dev kit
Alas, many Large Language Models (LLMs) are too resource-hungry to run on my workstation, and I've been thinking of trying out a Jetson Xavier which I'm not using at the moment.

Worth the time?

I know I'll need to spend a few days getting to the point where I know whether I can use it for the projects I have in mind. Should I invest the time?

I decided to take a Judo approach and use ChatGPT to help me make the decision.

I used a prompt that's worked well for me before now. It's based on de Bono's Six Thinking Hats.

You've probably come across the Six Thinking Hats technique before, but if not you can read about it on Wikipedia.

The Prompt and Response

Here's my question, along with ChatGPT's response. I've made minor tweaks to the format but the text is unchanged.

I'm thinking of running an LLM as an alternative to ChatGPT on my 16 GB Jetson Xavier developer kit.
Use Edward de Bono's Six Thinking Hats to help me evaluate that idea.

ChatGPT

Edward de Bono's Six Thinking Hats is a powerful decision-making tool that encourages looking at a problem or idea from
multiple perspectives. In this case, we'll use it to evaluate running an LLM (Large Language Model) as an alternative to
ChatGPT on a 16 GB Jetson Xavier developer kit.

White Hat (Facts and Information):

  1. The Jetson Xavier developer kit has 16 GB of RAM and a powerful GPU, making it suitable for running AI models.
  2. LLMs typically require significant computational power and memory.
  3. ChatGPT is based on GPT-4, which is also an LLM.

Red Hat (Emotions and Intuition):

  1. You might feel excited about running an LLM on your Jetson Xavier kit, as it could be a challenge and an opportunity to learn.
  2. Concerns may arise about whether the hardware is capable of handling such a large model efficiently.

Black Hat (Critical Judgement):

  1. Running an LLM on a Jetson Xavier developer kit might be limited by the 16 GB of RAM, depending on the size of the model.
  2. Performance and latency might be issues due to the large size of the LLM, affecting real-time applications.
  3. Power consumption and heat generation could be concerns when running a large model continuously.

Yellow Hat (Optimism and Benefits):

  1. Implementing an LLM on the Jetson Xavier kit could lead to a deeper understanding of AI models and their deployment on edge devices.
  2. It could potentially offer better performance compared to smaller models, resulting in more accurate and contextually aware responses.
  3. Developing and optimising an LLM for the Jetson Xavier kit could contribute to the growing AI community.

Green Hat (Creativity and Alternatives):

  1. Consider using a smaller, more efficient model that is optimised for edge devices, while still providing a satisfactory level of performance.
  2. Explore the possibility of using model distillation or pruning techniques to create a smaller model suitable for the Jetson Xavier kit.
  3. Investigate other hardware solutions or cloud-based platforms that may better accommodate the computational requirements of an LLM.

Blue Hat (Process and Summary):

In summary, running an LLM as an alternative to ChatGPT on a 16 GB Jetson Xavier developer kit has its pros and cons.
While it may offer better performance and accuracy, it might also present challenges related to hardware limitations,
power consumption, and heat generation. Consider exploring alternative models or hardware solutions, or optimizing the
LLM for edge devices. Ultimately, the decision will depend on the specific use case and the desired balance between
performance and efficiency.

Not bad for an AI!

I'm impressed by the response, as I have been with other uses of the prompt. I've also used two other de Bono frameworks: Six Value Medals, and Six Action Shoes. You can ask ChatGPT about them.

Pro tip: Make sure you are using ChatGPT 4, as version 3.5 has confused Action Shoes with the Thinking Hats!

The books are less well known but they can form the basis of useful prompts.

You'll also see from the Wikipedia article about Six Thinking Hats that there are recommended sequences of Hat use for tackling particular types of problem. These could form the basis of further useful prompts for ChatGPT.

I'll share more useful prompts as I discover them. You can keep track of them by following me on Twitter at @rareblog.


Monday, 8 May 2023

Finding the Best Solution with ChatGPT: A Personal Experience

When you're developing software for technically savvy users, you'll often encounter an annoying problem: users provide their preferred solution instead of their requirements.

This issue is even more pronounced when you are both developer and user.

In such situations, you're likely to choose the first solution that seems to meet your needs, even though there might be better alternatives.

Pair programming can help, as your partner can spot what's going on and suggest other ideas. But what if you're programming on your own?

That's where ChatGPT comes in handy.

ChatGPT to the Rescue

I've recently been working on an application that downloads interesting arXiv PDFs, and I wanted to access them remotely. I could think of a couple of ways to do that, but I decided to ask ChatGPT for advice. I used a prompt that had been helpful before:

"Suggest three ways I could accomplish that, with the pros, cons, and interesting features of each approach."

I presented my problem: I have a directory containing PDFs on a Raspberry Pi computer, and I want to be able to read the PDF files from Python on another Linux computer.

ChatGPT suggested three approaches, complete with pros and cons for each:

  1. File sharing with Samba
  2. Remote access with SSHFS
  3. Web server with a file browser

I had considered using Samba shares or a web server, but I had forgotten about SSHFS. Since I already had password-less SSH access to the Raspberry Pi, SSHFS was incredibly easy to set up, and it has been working very well.
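
Once the Pi's directory is mounted over SSHFS (with something like sshfs pi@raspberrypi:/home/pi/pdfs ~/pi-pdfs - hostname and paths are hypothetical), the remote PDFs look like ordinary local files to Python. A minimal sketch:

```python
from pathlib import Path

# Hypothetical mount point - adjust to wherever you ran sshfs
PDF_DIR = Path.home() / "pi-pdfs"


def list_pdfs(directory):
    """Return the names of all PDF files in a directory, sorted."""
    return sorted(p.name for p in Path(directory).glob("*.pdf"))


def read_pdf_bytes(path):
    """Read a PDF as bytes - identical for local and SSHFS-mounted files."""
    return Path(path).read_bytes()
```

That transparency is the appeal of SSHFS: no client library, no server-side code, just the filesystem.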

Reusable Prompt for Problem-Solving

Here's the reusable prompt I mentioned earlier:

"Suggest three ways I could accomplish that, with the pros, cons, and interesting features of each approach."

I now use this prompt whenever I'm trying to find a solution to a problem, and ChatGPT consistently delivers valuable insights.

I'll share more useful prompts as I discover them. You can keep track of them by following me on Twitter at @rareblog.

Friday, 28 April 2023

Pair Programming with ChatGPT

Earlier today I started work on a small admin project to help me organise my excessive collection of repositories on GitHub.

I wanted a list of my repositories, showing the name, url, description, stars and forks.

I decided to get ChatGPT to write the code for me.

In the conversation that follows, you can imagine an embodied ChatBot sitting next to me as we pair program. I'm the customer, ChatGPT is the programmer. If I'm not happy with the results, I ask my pair to change the code.

My part is in italics. Everything else came from ChatGPT.

As you'll see, there are a couple of wrinkles I didn't anticipate, but ChatGPT fixed them.

This took less time than I'd have taken to write the code, and it was much more fun. I recommend it.

Here's the session.

Write Python to find the names of all my repositories on GitHub. My username is romilly.

You can use the GitHub API to retrieve the names of all your repositories. Here is the Python code to achieve that:

import requests

# Replace <username> with your GitHub username
username = "romilly"

# Make a GET request to the GitHub API to retrieve your repositories
response = requests.get(f"https://api.github.com/users/{username}/repos")

# Check if the response was successful
if response.status_code == 200:
    # Loop through the list of repositories and print their names
    for repo in response.json():
        print(repo["name"])
else:
    print("Failed to retrieve repositories")

This code uses the requests library to make a GET request to the GitHub API endpoint that retrieves your repositories. It then loops through the list of repositories in the response JSON and prints their names.

Modify that to output the name, url, description, stars and forks

Sure! Here's the modified code that retrieves and prints the name, URL, description, stars, and forks count of all your repositories:

import requests

# Replace <username> with your GitHub username
username = "romilly"

# Make a GET request to the GitHub API to retrieve your repositories
response = requests.get(f"https://api.github.com/users/{username}/repos")

# Check if the response was successful
if response.status_code == 200:
    # Loop through the list of repositories and print their details
    for repo in response.json():
        print(f"Name: {repo['name']}")
        print(f"URL: {repo['html_url']}")
        print(f"Description: {repo['description'] or 'No description available'}")
        print(f"Stars: {repo['stargazers_count']}")
        print(f"Forks: {repo['forks_count']}")
        print("="*50)
else:
    print("Failed to retrieve repositories")

This code uses the same requests library to make a GET request to the GitHub API endpoint that retrieves your repositories. It then loops through the list of repositories in the response JSON and prints their name, URL, description, stars count, and forks count. If the repository has no description, the code prints “No description available” instead. Finally, the code prints a line of equal signs (=) to separate each repository's details.

That only prints the first few repositories. Modify the code to paginate through the full set.

You're right! By default, the GitHub API only returns the first 30 repositories, so we need to paginate through the full set to retrieve all your repositories. Here's the modified code that uses pagination:

import requests

# Replace <username> with your GitHub username
username = "romilly"

# Set the base URL for the GitHub API endpoint that retrieves your repositories
base_url = f"https://api.github.com/users/{username}/repos"

# Initialize the page number to 1
page = 1

# Loop through the pages of repositories
while True:
    # Make a GET request to the current page of the GitHub API endpoint
    response = requests.get(f"{base_url}?per_page=100&page={page}")

    # Check if the response was successful
    if response.status_code == 200:
        # If the response is empty, we have reached the last page of repositories
        if len(response.json()) == 0:
            break

        # Loop through the list of repositories on the current page and print their details
        for repo in response.json():
            print(f"Name: {repo['name']}")
            print(f"URL: {repo['html_url']}")
            print(f"Description: {repo['description'] or 'No description available'}")
            print(f"Stars: {repo['stargazers_count']}")
            print(f"Forks: {repo['forks_count']}")
            print("="*50)

        # Move to the next page
        page += 1
    else:
        print(f"Failed to retrieve repositories from page {page}")
        break

This code initialises the page number to 1 and loops through the pages of repositories using a while loop. It makes a GET request to the current page of the GitHub API endpoint with a per_page parameter of 100, which is the maximum number of repositories that the API allows to be returned per page. If the response is successful and not empty, the code loops through the list of repositories on the current page and prints their details. Finally, the code moves to the next page and repeats the process until there are no more repositories to retrieve. If the response is unsuccessful, the code prints an error message and breaks out of the loop.

What's the rate limit for GitHub API requests?

The rate limit for unauthenticated requests to the GitHub API is 60 requests per hour, while the rate limit for authenticated requests varies based on your authentication status and whether you are using a personal access token.

Here are the rate limits for authenticated requests:

Personal access token: 5,000 requests per hour
OAuth token: 5,000 requests per hour
OAuth token with public_repo or repo scope: 5,000 requests per hour
GitHub App installation token: 5,000 requests per hour
GitHub Actions token: 1,000 requests per hour

It's important to note that the rate limit is calculated on a per-IP address basis, so if multiple people are using the same IP address to make API requests, their combined rate limit will be shared.

To check your rate limit status, you can make a GET request to the following endpoint:

https://api.github.com/rate_limit

This endpoint returns a JSON object that contains information about your current rate limit status, including the number of remaining requests and the time at which the limit resets.

Modify the code to check the rate limit before each request. If the limit has been reached, delay the request until the limit resets.

Good idea! Here's the modified code that checks the rate limit before each request and delays the request until the limit resets if it has been reached:

import requests
import time

# Replace <username> with your GitHub username
username = "romilly"

# Set the base URL for the GitHub API endpoint that retrieves your repositories
base_url = f"https://api.github.com/users/{username}/repos"

# Initialize the page number to 1
page = 1

while True:
    # Check the rate limit status before making the request
    response = requests.get("https://api.github.com/rate_limit")
    if response.status_code != 200:
        print("Failed to retrieve rate limit status")
        break

    remaining_requests = response.json()["resources"]["core"]["remaining"]
    reset_timestamp = response.json()["resources"]["core"]["reset"]

    if remaining_requests == 0:
        # If we have reached the rate limit, delay the request until the limit resets
        delay_seconds = reset_timestamp - time.time() + 1
        print(f"Rate limit reached. Waiting for {delay_seconds:.2f} seconds...")
        time.sleep(delay_seconds)

    # Make a GET request to the current page of the GitHub API endpoint
    response = requests.get(f"{base_url}?per_page=100&page={page}")

    # Check if the response was successful
    if response.status_code == 200:
        # If the response is empty, we have reached the last page of repositories
        if len(response.json()) == 0:
            break

        # Loop through the list of repositories on the current page and print their details
        for repo in response.json():
            print(f"Name: {repo['name']}")
            print(f"URL: {repo['html_url']}")
            print(f"Description: {repo['description'] or 'No description available'}")
            print(f"Stars: {repo['stargazers_count']}")
            print(f"Forks: {repo['forks_count']}")
            print("="*50)

        # Move to the next page
        page += 1
    else:
        print(f"Failed to retrieve repositories from page {page}")
        break

This code first makes a GET request to the rate limit endpoint to retrieve the number of remaining requests and the time at which the limit resets. If we have reached the rate limit, the code calculates the number of seconds until the limit resets and waits for that amount of time using the time.sleep() function. After the delay, the code makes the GET request to the current page of the GitHub API endpoint as before. If the response is successful, the code loops through the list of repositories on the current page and prints their details. Finally, the code moves to the next page and repeats the process until there are no more repositories to retrieve. If the response is unsuccessful, the code prints an error message and breaks out of the loop.

What next?

I'm pretty happy with what ChatGPT did, but just like a real user I still want some changes - a database and a GUI to query it. I'll see if ChatGPT has time to do that and let you know.

I'll be posting on Twitter as @rareblog.

If you found this interesting, please let others know.

Saturday, 22 April 2023

Can ChatGPT code like a pro?

I've been experimenting with the code generation capabilities of ChatGPT, and I'm blown away.

As a test of its capabilities I decided to ask it to write a generic version of some code I've used, over and over, in slightly different forms, in real applications.

The Problem

The code implements a Throttler - a class that wraps something that makes requests and makes sure that requests are not made too often.

Lots of APIs apply rate limits to requests, and this code makes it easy to enforce those limits within your application.

A coding session with ChatGPT

Here's the session I had with ChatGPT. My instructions are in italics. Everything else - code, comments, and explanations - was written by ChatGPT in response.

Write Python to create an Abstract Class called Requester. It should have a single abstract method called request that takes an argument called command. Add docstring comments.

Here is the Python code to create an Abstract Class called Requester with a single abstract method called request that takes an argument called command:

from abc import ABC, abstractmethod

class Requester(ABC):
    """
    Abstract class for requesting commands.
    """

    @abstractmethod
    def request(self, command):
        """
        Abstract method for requesting a command.

        Args:
            command (str): The command to request.

        Returns:
            None
        """
        pass

In this code, we import the ABC (Abstract Base Class) module from the abc package. Then, we define the Requester class as an abstract class that inherits from ABC. We also define the single abstract method called request, which takes an argument called command. The docstring comments provide information about the class and the abstract method, including the arguments and return values.

Write code and docstring comments for a subclass called MockRequester which stores each request in a list. The list should be initialised in the constructor init method.

Here is the Python code to create a subclass called MockRequester which stores each request in a list:

class MockRequester(Requester):
    """
    Subclass of Requester that stores each request in a list.
    """

    def __init__(self):
        """
        Constructor for MockRequester that initializes an empty list to store the requests.
        """
        self.requests = []

    def request(self, command):
        """
        Method to request a command and store it in the list of requests.

        Args:
            command (str): The command to request.

        Returns:
            None
        """
        self.requests.append(command)

In this code, we define the MockRequester class as a subclass of Requester. We also define the constructor method init which initializes an empty list to store the requests. The request method is also defined and it takes the command argument, which it stores in the list of requests using the append method. The docstring comments provide information about the class and methods, including the arguments and return values.

Now add a Throttler class subclass of Requester which takes a requester and a gap (a number of seconds) in its constructor. The Throttler should make sure that successive requests are separated by the specified number of seconds. As before, add docstring comments and add type hints.

Here is the Python code to create a Throttler class, a subclass of Requester, which takes a requester and a gap (a number of seconds) in its constructor:

import time

class Throttler(Requester):
    """
    Subclass of Requester that adds throttling functionality to limit the rate of requests.

    Args:
        requester (Requester): The requester object that Throttler will throttle.
        gap (float): The number of seconds to wait between each request.
    """

    def __init__(self, requester: Requester, gap: float):
        """
        Constructor for Throttler that initializes the requester and the gap.

        Args:
            requester (Requester): The requester object that Throttler will throttle.
            gap (float): The number of seconds to wait between each request.
        """
        self.requester = requester
        self.gap = gap
        self.last_request_time = None

    def request(self, command: str) -> None:
        """
        Method to request a command with throttling between requests.

        Args:
            command (str): The command to request.

        Returns:
            None
        """
        if self.last_request_time is not None:
            time_since_last_request = time.time() - self.last_request_time
            time_to_wait = self.gap - time_since_last_request
            if time_to_wait > 0:
                time.sleep(time_to_wait)

        self.requester.request(command)
        self.last_request_time = time.time()

In this code, we define the Throttler class as a subclass of Requester. We also define the constructor method init which takes a requester object and a gap parameter. The request method is also defined which makes sure that the time gap between successive requests is at least the specified gap in seconds, before forwarding the request to the requester object. The docstring comments provide information about the class and methods, including the arguments and return values, as well as the types of the arguments.

Write a unit test using the unit test framework, a Throttler and the MockThrottler class to verify that the delay is working correctly. The test should take less than one tenth of a second to run. Add comments for the test.

Here is the Python code for a unit test that uses the Throttler and MockRequester classes to verify that the delay is working correctly:

import unittest
import time

class TestThrottler(unittest.TestCase):
    """
    Unit test for Throttler class.
    """

    def test_throttling(self):
        """
        Test that the Throttler enforces the specified delay between requests.
        """
        mock_requester = MockRequester()
        throttler = Throttler(mock_requester, 0.1)

        # Send 3 requests and verify that the delay between requests is at least 0.1 seconds
        start_time = time.time()
        throttler.request("command 1")
        throttler.request("command 2")
        throttler.request("command 3")
        end_time = time.time()

        # Verify that the requests were received in the correct order
        self.assertEqual(mock_requester.requests, ["command 1", "command 2", "command 3"])

        # Verify that the total time taken is at least 0.2 seconds (2 x 0.1 seconds)
        self.assertGreaterEqual(end_time - start_time, 0.2)

In this code, we define a unit test case TestThrottler that tests the Throttler class. The test method test_throttling creates a MockRequester object and a Throttler object with a delay of 0.1 seconds. It then sends 3 requests using the Throttler and verifies that the requests were received in the correct order. Finally, it checks that the total time taken is at least 0.2 seconds (2 x 0.1 seconds) to verify that the delay is working correctly. The comments provide information about the purpose of the test and what it is testing.

Conclusion

I can't believe the level of skill that ChatGPT is showing. 

Not only does it do a great job at coding, but it correctly worked out what I really wanted when I mistakenly asked it to use a MockThrottler instead of a MockRequester in the test!

I've been using it for coding tasks for the last three days and I've created documented, tested code about five times faster than I would on my own.

Try it yourself.

Thursday, 20 April 2023

Will ChatGPT save your job - or take it over?

Wondering whether ChatGPT will make you more effective, or do your job so well that you are no longer needed?

ChatGPT is one of the new generation of AI tools. You can ask it for help and it can turn its hand to a huge range of tasks.

ChatGPT as a Python coder

Today I used it to write a very simple but useful computer application. The task would have taken me half an hour or so. ChatGPT did the job in a few seconds, and the code it wrote was well-structured, readable and well commented.

So far so good - but what about other tasks?

I decided to make a short video about what I'd done. (Watch this space to see it when it's ready). 

I knew that my first step would be to write a script.

Could I ask ChatGPT to help me do that?

Sure.

Here's what happened next.

ChatGPT as scriptwriter

Here's what I typed:

Earlier today I got ChatGPT to write a simple application for me. Now I want to make a video showing the process and the result. I'd also like to explain why the automation of small coding tasks can have a huge impact on personal productivity in the medium term. Help me by writing a rough script for the video.

Here's how ChatGPT responded:

Sure, I can help you with that. Here's a rough script for the video:

[Introduction] Hello, in this video, we're going to show you how to automate small coding tasks and why it can have a huge impact on your personal productivity in the medium term.

[Explanation of the problem] As a developer, you know that even small coding tasks can take up a lot of time and energy. For example, writing a simple application may require hours of coding, debugging, and testing.

[Solution: automation] But what if there was a way to automate these tasks? That's where automation tools come in. By using automation tools, you can save time and focus on more important tasks that require your attention.

[Demo] Let's demonstrate this with an example. Earlier today, I asked ChatGPT, a large language model trained by OpenAI, to write a simple application for me. Here's how it works:

[Show the code generated by ChatGPT]

As you can see, ChatGPT was able to generate the code in just a few seconds. This saved me a lot of time and effort that I would have otherwise spent writing the code myself.

[Benefits of automation] Now, let's talk about the benefits of automation. By automating small coding tasks, you can:

  • Save time and increase productivity
  • Focus on more important tasks
  • Reduce errors and improve code quality
  • Learn new skills by exploring automation tools
  • Free up mental space for creative thinking and problem-solving

[Conclusion] 

In conclusion, automation can have a huge impact on your personal productivity in the medium term. By using automation tools, you can save time, reduce errors, and focus on more important tasks. So why not explore automation tools and see how they can help you become a more productive developer? Thank you for watching! 

Want to play? 

What do you need to do to use OpenAI's ChatGPT demo? I asked ChatGPT. Here's what it said.

To use OpenAI's ChatGPT demo, follow these steps:

  • Go to the OpenAI website (https://chat.openai.com/) and scroll down to find the "Chat" demo.
  • Click on the "Chat" demo button to open it.
  • Once the demo is open, you will see a text box where you can type in your message. Type in a message and hit the "Enter" key or click the "Send" button to send your message.
  • ChatGPT will respond to your message with a generated response. You can continue the conversation by typing in another message and hitting "Enter" or clicking the "Send" button again.
  • You can also adjust the settings of the demo by clicking on the gear icon in the top right corner. Here, you can change the model, the temperature, and the maximum length of the response.

Note that the demo is intended for demonstration purposes only and may not be suitable for all use cases. Also, keep in mind that ChatGPT is a language model and may generate responses that are inappropriate or offensive. It's important to use ChatGPT responsibly and ethically.

Give it a spin, and let us know how you got on! 


Sunday, 29 January 2023

Complex Network Analysis

 I've recently been working on two fascinating, challenging projects.

One is a guide to the Pimoroni Pico Explorer Base. That's going to take a while, and I'll report on progress as I go.

The other is a tool to help me keep track of published papers in some of the AI areas that interest me.

There's so much published that it's a real challenge to keep up. I rely heavily on the wonderful service provided by Semantic Scholar; that helps me keep track of new papers that cite older papers that I've identified as interesting.

Here's the citation network for one of the papers that interest me:


I've done quite a bit of work on networks with Python, but I've recently been reading Complex Network Analysis in Python by Dmitry Zinoviev. It's well-written, broad in coverage, and has plenty of useful sample code.
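
To give a flavour of the kind of analysis involved: a citation network can be represented as a mapping from each paper to the papers it cites, and the most-cited papers found by counting in-degrees. This is just an illustrative sketch with made-up paper IDs, not my actual tool:

```python
from collections import Counter

# Each paper maps to the papers it cites (hypothetical arXiv-style IDs)
cites = {
    "2301.001": ["1706.03762", "1810.04805"],
    "2302.002": ["1706.03762"],
    "2303.003": ["1706.03762", "1810.04805", "2301.001"],
}


def citation_counts(graph):
    """Count how often each paper is cited (its in-degree in the network)."""
    return Counter(cited for refs in graph.values() for cited in refs)


# most_common(1) picks out the most influential paper in this tiny network
most_cited = citation_counts(cites).most_common(1)
```

Libraries like NetworkX, which Zinoviev's book uses throughout, provide the same idea at scale along with layout and visualisation for diagrams like the one above.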

I love it.