Thursday 31 December 2020

Christmas lights : lightshowpi

I have some very simple coloured 5050 LED strips which provide a colourful addition to Christmas.  Lightshowpi (LSPi) is wonderful Raspberry Pi (RPi) software which analyses music and allows you to configure lights accordingly.  I think most people use it with relays and big external displays but I prefer a small indoor spectacle when listening to Christmas songs.  My LSPi setup tends to change each year and this year's effort works well.

The Raspberry Pi has rubbish sound quality through the headphone jack.  This year I attached a USB audio card which only cost about £5 and provides a signal quality which I can send through my hi-fi (is it still called that?) to play pleasant sounding music.  It is possible to spend more money (£60-£130) on an RPi DAC HAT or high quality USB audio but this is quite acceptable to me.

All my music is accessible using the venerable linux music player daemon (MPD) on an RPi which has a share to network attached storage (NAS) and can stream radio stations.  I use a web interface to mpd so I can select, play and manage music from a browser.  LSPi is quite happy playing an http or icecast stream from MPD and I can direct its output to the USB audio card.

Previously I used a bluetooth connection from RPi/MPD to my hifi but this makes synchronisation a little tricky as bluetooth introduces a delay of 2-3 seconds.  When playing sound through the USB card LSPi synchronisation is much better and I can see the colours changing in time with the notes.

The 4-wire RGB LED strip runs at 12V and requires three GPIO pin signals to control the Red, Green and Blue output.  The RPi GPIO can only provide a low-voltage, limited-current output so I have a simple setup with MOSFETs driving the LED strip from an external power supply.





Wednesday 30 December 2020

Bash in full screen mode with colour

I rarely use a linux GUI and I find that command line output is often dull and difficult to read.  It is easy and common to embellish the shell prompt with extra information and colours.  It is also possible to amend colours for directory listings, editor entities etc., and I found a good list of ideas on stackexchange.  I would like to be able to do some simple formatting tasks on displayed screen output, such as putting text at a specific cursor position, in colour, perhaps surrounded by a box as shown below.


I can do this with a few escape codes in a script as shown below.


We start with a command to clear the screen, \e[2J.  The \e causes the escape character to be sent, and the following three characters are a "vt100 escape code".  A VT100 (see below) was a terminal introduced by Digital Equipment Corporation in 1978, one of the first to support escape codes.  I used vt100s in the early 1980s for programming and became very familiar with their capabilities.  DEC produced wonderful minicomputers in the 1970s and early 1980s but fell from grace and were consumed by Compaq, who in turn were digested into HP.  When working for HP in the 2010s I talked to a few of the engineers who had remained after the demise about the good old days and we lamented the passing of a great company.



Returning to the subject we can see the command to clear a screen in a list of VT100 escape codes which includes "esc[2J" to clear the screen and "esc[<v>;<h>H" to position the cursor. Thus "esc[6;20H" positions the cursor at row 6, column 20.  Note that the linux TERM variable does not need to be set to vt100 for this to work, it worked for me with TERM=xterm or TERM=xterm-256color.

For line drawing characters we use VT100 character codes.  In the table we see that unicode characters \u2500, \u2502, \u250c, \u2510, \u2514 and \u2518 are what we need to draw the box.  We need quite a few \u2500 characters to draw the horizontal box line.  What we have so far gives us a very presentable box around our title.

We now want to add a bit of colour to the text / box, which is easy.  I googled a simple reference at bluesock.  Within a bash script it is easier to use variables for colours and unicode characters, and I changed my script so that it was a bit more understandable when colouring it in.  For example, to set text to cyan I used the variable $COLOR_LIGHT_CYAN with value "\e[1;36m".

We now have the capability to control our ssh shell display layout.  Clearly it would be a real pain to format our screens like this in practice but I find it really useful to understand how to do it at this level.  One could set up a standard bash script containing variables for all the options/colours you use, or combine with other linux utilities, for example tput for clearing the screen and positioning the cursor.

I like the idea of a simple C program to do some of the work.  I find bash a bit painful for processing and it makes sense to use C for this.  As an exercise I redid the title box in C without any extra thought.

It is very simple: you just invoke gcc to compile it, and I saved it as /usr/local/bin/colourtitle.  I could then display the title by putting colourtitle in a script.  Of course I would need to accept a title as an argument and allow selection of colours, but this provides an easy proof of concept.
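For reference, a minimal sketch along these lines (my own reconstruction, not the actual colourtitle.c; the function names and buffer layout are mine, and the box is rendered into a buffer so it can be reused):

```c
#include <stdio.h>
#include <string.h>

#define CYAN  "\x1b[1;36m"   /* bright cyan, i.e. \e[1;36m */
#define RESET "\x1b[0m"      /* return to default colours  */

/* Render 'title' in a unicode box at (row,col), VT100 style, into 'out'.
 * Returns the number of bytes written (excluding the terminator).       */
int render_title(char *out, size_t cap, const char *title, int row, int col)
{
    size_t inner = strlen(title) + 2;     /* title plus one space each side */
    char bar[256] = "";
    for (size_t i = 0; i < inner && strlen(bar) + 4 < sizeof bar; i++)
        strcat(bar, "\u2500");            /* horizontal line segment */

    return snprintf(out, cap,
        "\x1b[2J"                                        /* clear screen */
        "\x1b[%d;%dH\u250C%s\u2510"                      /* top edge     */
        "\x1b[%d;%dH\u2502 " CYAN "%s" RESET " \u2502"   /* title row    */
        "\x1b[%d;%dH\u2514%s\u2518",                     /* bottom edge  */
        row,     col, bar,
        row + 1, col, title,
        row + 2, col, bar);
}
```

A main would just fill a buffer and fputs it, e.g. char buf[1024]; render_title(buf, sizeof buf, "Hello", 6, 20); fputs(buf, stdout);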




Tuesday 10 November 2020

Arduino MP3 Shield

I saw a reference in an article to a small, simple, cheap mp3 shield for Arduino and it seemed like a fun and interesting idea.  The shield incorporates an SD card holder and, in fact, is only slightly larger than a micro SD card, in a 16 pin DIP format.  I found that I could buy a couple on ebay for £5.



A quick look on the internet and I found a couple of useful tutorials at buildcircuit.com.  It turns out you don't even need an Arduino: you can wire up the mp3 player to 3V3 power and a speaker, put some music on an SD card and start playing.  There is a small catch - the player doesn't play MP3 or WAV files, only an unusual format called AD4, so you have to convert them first.  I used some sample files which were available and copied them to the card as 0000.AD4, 0001.AD4.... etc.  Once the card was inserted I simulated play by grounding pin 9 and a snippet of music played.  It was only about a second but it was a great test to get started with.

Next came the Arduino setup.  The tutorial suggests an ESP8266 but any Arduino will work, you just need 4 digital pins, and I used a nano.  There is a simple WTV020 library available in the Arduino IDE Library Manager which I installed.  It only consists of a single C++ program and you can see what functions are available.  I used an article from electronoobs to help me a bit with the setup.  The sample sketch shown in the article is also provided in the Arduino library so you can simply use this sketch, amending pin numbers as necessary.  I compiled and downloaded the sketch to the Nano and it played a little music.

I found it much easier to experiment using keyboard input via the serial monitor, and used a little sketch to test the functions.  The results were excellent.  The music played is very reasonable quality and it is easy to change track, stop, pause etc.

Now I needed to use my own sounds/music.  AD4 format isn't well known or supported.  There is a conversion utility (SOMO / UsbRecorder) which I downloaded from 4dsystems.com.au.  I first took an mp3 track and converted it in audacity (mix down to mono, normalise to -6dB, export quality=16kbps).  I then used SOMO to convert it to AD4.  Converting the mp3 track to low quality mono decreases its size to about 5%, while converting to AD4 then increases its size about 5x.  The newly converted track, once copied to the SD card, works perfectly on the music player and sounds good.

Since we have a multitude of ways of playing high quality music files it isn't likely that in practice we will have a use for a low quality arduino-controlled music player.  However it will be much more useful for spoken words: we can get the arduino to speak!  The player can play up to 512 files (0000.AD4 to 0511.AD4) so we have a vocabulary of 512 words to play with.  Unsurprisingly I couldn't find a set of AD4 format words, and even MP3 format words aren't that easy to come by.  I found shtooka.net, which has a large collection of about 4000 English words in a female voice, "Mary", which I could use.






Tuesday 13 October 2020

More Jongo Music

Objective

It is nearly two months since my last post.  Most of my tech time over that period has been spent looking at Jongo music.  The multiroom music server MuSe and album player aMuSe control what music is played on an mpd music server.  The sound is sent to a Jongo using bluetooth and distributed amongst the other Jongos using caskeid, Pure's bespoke technology which shares and synchronises sound.

For some reason the bluetooth connection is not very reliable.  It could be that the RPi bluetooth device or configuration has issues, or that the Jongo firmware/hardware is not good enough.  For whatever reason it is not always possible to use our web applications to play music and sometimes the system cannot be restarted, even by rebooting the Jongo and RPi.

There isn't much you can do to monitor or control bluetooth devices.  On the RPi bluetoothctl allows you to check the status of connections, but it doesn't always allow you to see, diagnose or correct problems.  There is a python bluetooth module (pybluez) but it only has similar capabilities.  Jongos can tell you very little about their bluetooth connections.

Consequently I decided to look again at setting up Jongos using the wireless interface.  If you use the Pure app, Pure Connect, to control Jongos you are using wireless and can play multiple Jongos in synchronisation.  We don't have any specific documentation telling us how Pure Connect controls Jongos but we would like to achieve something similar.

Jongo Access

Our first task is to be able to access Jongos reliably.  Jongos set up a random port for wireless communication and keep the same number until rebooted.  The only way to find this port number is to use ssdp discovery, but an ssdp discovery dialog takes about 30s and it is inconvenient to wait that long for each access.  Ellis Percival has provided a python upnp library which provides ssdp / upnp functions.  His github page also provides a great tutorial on using the library to interrogate / control devices interactively with ipython.  I wrote a simple program, jongoPorts.py, which runs a discovery once a minute and stores the results in a file.  All other programs just need to read the file contents to be able to access each jongo.  I also wrote a program, jongocheck.py, which looks out for any port changes or devices appearing / disappearing, so that if testing goes wrong I can check whether a device or network access issue occurred.

I have been unhappy with VirginMedia wireless coverage for some time (in particular interference from the microwave) and recently purchased a cheap and cheerful router with, hopefully, a better wifi range.  Jongos only work at 2.4GHz so I set up a new 2.4GHz network and made sure that all Jongos are on that segment with DHCP-allocated fixed IP addresses.  We now have our hardware and network configured and monitored to increase reliability.

Packet Capture for Reverse Engineering

Pure have discontinued their Jongo range and it never really caught on (Sonos captured the market) so there is very limited information available on how Jongos work.  The Pure Connect (PC) app does a pretty good job at allowing you to play albums or stream radio stations.  We have previously been able to instruct a Jongo to play a URL (song, album or radio stream).  However I wasn't able to work out how PC controls multiple Jongos in synchronisation.

I wrote a short python program which interrogates Jongos to find out what they are doing.  This can be set up to run and provide frequent updates with the Bash watch command.  It shows us that when we add an extra Jongo to PC, it synchronises to the first one using an RTSP stream, as shown in the picture below, where device Jon was running and Job was added as a second synchronised device.


It is difficult to guess the commands which PC issued to set this up so we need to capture them.  I used wireshark to monitor the wireless network 192.168.0.0/24 but it only shows ssdp multicast packets.  It doesn't show any http packets, possibly because the packets are WPA encrypted when they circulate on the network.  The subject appears to get quite complicated.  Firstly, the interface has to be in "monitor" mode, not "promiscuous" mode.  Secondly, Windows does not support monitor mode and nor does the RPi built-in wireless adapter.  Eventually I found that my WiPi wireless adapter does support monitor mode.  A utility, airmon-ng, allows you to set the adapter into monitor mode and it does show lots of information - for example you can see all the devices attached to the neighbours' wifi networks.  However it still didn't allow wireshark on linux to show me http dialogs.

Close to despair, I made a breakthrough when I found fiddler, which does http packet capture and allows you to look at packets in a variety of formats.  Much more significantly it allows you to set up a proxy for wireless devices on the network.  I set up my Windows computer on the wireless 2.4GHz network and set up Fiddler with port 8888 as a proxy.  I then told fiddler to configure itself for Windows and to decode https traffic.  I set up an old ipad on the 2.4GHz network with Windows port 8888 as a manual proxy.  Fiddler now showed all traffic through my windows computer, including all the ipad http traffic.

I captured Jongo related packets using fiddler; they are all xml formatted and use the upnp protocol.  It was very easy to see how Jongos achieve synchronisation.  The original Jongo (Jon in our example above) starts music with a SyncPlay command; it also creates a unique session id with CreateSession.  When you add another Jongo in the PureConnect client it issues a GetSession command to determine the session id.  It then tells Jon to AddUnitToSession for the second Jongo (Job).  The command RemoveUnitFromSession is used to stop playing on the second Jongo.  Fiddler organises the output so it is simple to understand, as shown below.


Once we know the commands to issue it is simple to implement the play/stop add/remove functions in python.
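These are upnp SOAP actions, so each command is just an XML envelope POSTed over http to the Jongo's discovered port.  A language-agnostic sketch of composing such a body (shown in C here although my implementation was python; the service URN and argument element are illustrative assumptions, not captured values - only the action names above came from the capture):

```c
#include <stdio.h>
#include <string.h>

/* Compose a standard upnp SOAP request body.  'action' is the command
 * name (e.g. AddUnitToSession), 'service' the upnp service URN and
 * 'args_xml' the argument elements for the action.                    */
int build_soap(char *out, size_t cap, const char *action,
               const char *service, const char *args_xml)
{
    return snprintf(out, cap,
        "<?xml version=\"1.0\"?>"
        "<s:Envelope xmlns:s=\"http://schemas.xmlsoap.org/soap/envelope/\""
        " s:encodingStyle=\"http://schemas.xmlsoap.org/soap/encoding/\">"
        "<s:Body>"
        "<u:%s xmlns:u=\"%s\">%s</u:%s>"   /* action element wraps args */
        "</s:Body></s:Envelope>",
        action, service, args_xml, action);
}
```

The body is then POSTed with a SOAPACTION http header naming the same action.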

Using Synchronised Wifi


We can set up mpd to output icecast2 or http streaming, load the stream URL onto a Jongo and share it on multiple devices.  This allows us to play whatever we choose on the MuSe or aMuSe web pages on synchronised Jongos throughout the house.  There is a catch - streaming introduces an inherent delay, so if you start or stop aMuSe you must wait 8s before it takes effect.  This makes using the apps interactively irritating.  However when we want to listen to the radio or an album streaming is generally fine and we should be able to expand the functionality and usefulness of JongoPanel.  MuSe and aMuSe will be targeted at playing on a single Jongo in the study.
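For reference, mpd's built-in http streaming is enabled with an audio_output block in mpd.conf along these lines (the name, port and bitrate values are illustrative, not my exact settings):

```
audio_output {
    type      "httpd"
    name      "Jongo stream"
    encoder   "lame"          # mp3 encoding; "vorbis" also works
    port      "8000"
    bitrate   "128"
    format    "44100:16:2"
    always_on "yes"           # keep the stream up between songs
}
```

The Jongo is then pointed at http://&lt;rpi address&gt;:8000/ as its stream URL.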






Friday 21 August 2020

ESP32 + Joy-IT touch screen

 Intro

As I have said a number of times before, Elektor is a great place to find introductory projects.  The January/February 2020 edition contains an article on attaching a Joy-IT touch screen to an ESP32.  The example provides a very neat interface to the Elektor Weather Station project and shows what it is capable of.  The system makes use of littlevGL, an application by Gabor Kiss-Vamosi which does all the graphics and touch screen heavy lifting.  It works on linux and other platforms as well as ESP32 and appears to be becoming a popular/standard product.  To make life simple I purchased the Joy-IT touch screen from Elektor and also, after a false start, purchased an ESP32 devkitC from eBay.

Installation

The Arduino IDE is used for this project.  After connecting and testing board connectivity with a blink program (set LED to GPIO 2) I needed to set up the TFT_eSPI software and littlevgl and configure them.  There are 10 wires used in the interface, including power and SPI, so it isn't too complicated.  The Elektor article comes with the required libraries and configuration files as a download.  This made it easy to set up; I copied libraries and configuration files across as appropriate.  In addition to the TFT and LVGL libraries, other libraries are included for MQTT, JSON and CRC32.

Testing

There is a basic TFT test in the TFT_eSPI examples, so I compiled/uploaded it and it displayed pretty colours and text.


I then took the Elektor Weather Station example, set the "demo" flag and compiled it.  Very impressively, out comes the Weather Station screen, complete with touch screen capability and demo data.


Conclusion


This is great as it provides examples of tabs, gauges, meters, labels and touch screen input.  It should be easy to modify for my own purposes.  As an extra it gives me a way in to using MQTT and JSON data on Arduino/ESP32.  It is an excellent introduction to a wonderful product which can be quite challenging.  The littlevgl software and Joy-IT screen can also be used on an RPi.  At first sight the ESP32 usage, utilising wifi to obtain data updates, seems best: the ESP32/screen then only needs to be switched on when in use and doesn't need to be near an RPi.


Friday 7 August 2020

Bare Metal RPI: Dealing with files

 Intro

It is over a month since my last entry on bare metal programming.  At the time I was struggling to work out how to implement file access.  My programs could use terminal input/output and libraries, and we had made a start on coding replacements for OS calls in syscalls.c.  This allows us to write programs which don't have storage requirements, but it tends to limit applications to mathematical-type calculations, or perhaps text based games.

Searching through github for RPi bare metal sdcard i/o programs yields a few possibilities.  Chan has written FatFs, a filesystem module for small embedded systems, including ARM.  John Cronin has written an RPI second stage bootloader which contains FatFs.  It uses file access to load a program from disk, rather than having to set up a kernel.img file for each test.  I found it a little difficult to understand how to apply this.  Finally I settled on Marco Maccaferri's rpi bare metal repository.  Not only does he implement fileio but he also has frame buffer and USB functions.  I decided to concentrate on Maccasoft initially.

Maccasoft investigation

Maccasoft separates FatFs into a directory and implements the RPI specific functions in a file emmc.c.  In fact he has done a grand job in setting up a kernel folder containing sub-folders for USB, FatFs and drivers for frame buffer, HDMI console, png images, Simple DirectMedia graphics, audio, GPIO and compression.  This is really what we are aiming for: a bare metal kernel, not dependent on linux, which allows us to write programs using a variety of drivers.

I started by testing the "template" application Maccasoft provided.  I ran make on the kernel folder, then make on the template folder, and copied the resulting kernel.img to SD card.  I connected an HDMI screen and a USB keyboard and booted.  Very impressively I get a low-res screen showing messages, the ability to type and the ability to move the cursor around the screen.

Optimistically I then ran make on the "abbaye des morts" game.  I had to change "uint" variables to "unsigned int" but compilation was successful and I copied the image to SD card.  The game is excellent, a properly playable platform game with graphics and sound.  Marco has come up with a very good product.
 

Maccasoft File Access

At this stage I had made no further progress on file i/o, as maccasoft doesn't seem to use files, although the template app does mount the SD card.  I looked at John Cronin's github and I was able to get it to read and write files in the root directory.

I then realised that Maccasoft has a reasonably complete syscalls.c with open/close/read/write functions implemented so I should be able to use standard fopen/fclose/fread/fwrite functions to use the SD card.

This proved to be the case: we can open/close/read/write files in the root directory using standard C functions.  This is a big step forward for us.

Bootloader integration

David Welch's bootloader is indispensable for loading and testing new versions of programs.  It expects application programs to be loaded at 0x8000 and is itself loaded at 0x200000.  Maccasoft's link script does load at 0x8000 and uses the space above for the heap.  When we try to load a Maccasoft kernel.img using the bootloader it works fine.  This is great: we now have file access and a bootloader.

UART stdin/stdout

Maccasoft sets up a framebuffer to allow writing to an HDMI screen and configures USB for keyboard input.  In fact I much prefer using a putty console for stdio.  I take my previous termio program and add uart_get and uart_put to main.c.  This is straightforward.  Next I add a function uart_init() and put all the uart functions in uart.c so that they become part of the kernel.  This also works fine.

Looking at syscalls.c we notice that the file descriptors 0, 1 and 2, which refer to stdin, stdout and stderr respectively, are excluded from processing in _write and _read.  I add code to _write so that uart_putc or uart_puts is called if the descriptor is 1.  Now putchar, puts and printf all use UART output by default and I have a properly working stdout device.  This is great progress.
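The change amounts to a few lines in syscalls.c.  A host-testable sketch of the idea (this is my reconstruction, not Maccasoft's code; uart_putc here just captures into a buffer, standing in for the real register-poking driver):

```c
#include <stddef.h>
#include <sys/types.h>

/* Stand-in for the bare metal UART driver: capture output in a buffer
 * so the dispatch logic can be exercised on a host machine.           */
char uart_log[256];
size_t uart_len;
void uart_putc(char c)
{
    if (uart_len < sizeof uart_log - 1)
        uart_log[uart_len++] = c;
}

/* syscalls.c sketch: route stdout (fd 1) and stderr (fd 2) through the
 * UART so putchar/puts/printf work by default.  Other descriptors
 * would fall through to the FatFs-backed file handling (elided).      */
ssize_t _write(int fd, const void *buf, size_t len)
{
    if (fd == 1 || fd == 2) {
        const char *p = buf;
        for (size_t i = 0; i < len; i++)
            uart_putc(p[i]);
        return (ssize_t)len;
    }
    return -1;   /* real code: hand off to the FatFs file table */
}
```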

I have so far struggled to work out the details to make _read work for both character and string input, so stdin is not fully working, but I won't worry about this for the moment as I can use uart_gets and sscanf for stdin functions.

Summary

We have made great progress using Maccasoft's work.  We have working file access, standard output and (almost) standard input.  We have a working boot loader and a separate "kernel" to implement standard functions and drivers.  This gives us the capability to write a variety of standalone applications.  Maccasoft also provides us with a frame buffer, USB devices, audio, image processing and GPIO controls if we want them.

In fact I did try out the USB drivers a bit further.  RPI mouse, USB stick and Ethernet interfaces are all correctly detected.  However Marco hasn't needed to progress them further: you could read a file sector or an ethernet frame but would have to write the higher levels yourself.  There is some discussion on the RPI forum.  Rene Stange wrote the USPI RPI bare metal drivers and has gone further in his Circle C++ bare metal OS project.







Tuesday 30 June 2020

Bare Metal program test

Previously we have managed to implement terminal i/o and libraries in our bare metal environment but as yet there is no breakthrough on setting up stdin, stdout or file i/o.  I will write more if / when the problems are resolved but in the meantime we should try some programs.

Our simple starting point is to check whether a number is prime.  The first cut was to loop round trying all possible divisors up to the square root of the subject number.  The program was written on an RPi running linux first.  Then it was transferred to WSL, where we compile using arm-none-eabi-gcc for bare metal.  In its simplest form it was quick to implement and test.  The bare metal programs we previously had working were re-organised so that the initial program (notmain.c) sets up uart i/o.  It calls a function jmain(), which is intentionally not called main to avoid the compiler misinterpreting it.  jmain.c contains our program code, with printf statements replaced by uart_puts to output strings to the serial terminal.

The second iteration improves the algorithm slightly (don't test even divisors, make test_prime() a function).  We also loop to ask the user for prime numbers.  We change the linux version of the program so it is easy to convert:
  use sprintf(buffer,.......); printf(buffer); if variable values need to be output, for uart_puts conversion
  use gets(buffer); sscanf(buffer,.....) for input, so uart_gets can be added on conversion.
Now we can easily enter and test programs under linux before a minimal conversion to run bare metal.
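As a concrete sketch, the second-iteration checker looks something like this (my reconstruction of the algorithm described, not the exact code):

```c
/* Trial division up to the square root, skipping even divisors.
 * Returns 1 if n is prime, 0 otherwise.                         */
int test_prime(unsigned long n)
{
    if (n < 2)
        return 0;
    if (n % 2 == 0)
        return n == 2;          /* 2 is the only even prime */
    for (unsigned long d = 3; d * d <= n; d += 2)
        if (n % d == 0)
            return 0;
    return 1;
}
```

On linux this is driven with gets/sscanf and printf; for bare metal the same function is called from jmain with uart_gets/uart_puts doing the i/o.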


It is much more appropriate to calculate lots of prime numbers.  The program was amended to ask the user how many primes are required.  These are then calculated and printed.  To allow a flexible number of primes, malloc is used to create an array of up to 1,000,000 primes which can then be calculated and printed on the screen.
Previously I had no luck in implementing printf but for some unknown reason this started working, so I updated my program to use printf and scanf, which makes life simpler.

Calculating 1,000,000 primes takes approximately 30 minutes on a bare metal RPI 1B.



Monday 15 June 2020

Bare Metal C - libraries

Introduction

In my last bare metal C post I was able to complete a hello world program so that our RPI1B can do terminal input/output.  It was quite an achievement to dispense with our operating system and establish communications with the program running on the hardware.

Now I could carry on in that direction and write everything else I need in C; it would include memory management, device drivers, utility functions and all the other things that C programmers can usually take for granted.  Life is too short for that, so I recognise what I really need is a library, in particular the C library which provides so many basic necessities of C life.

However the C library makes operating system calls to the linux (or other) kernel whenever it needs assistance in completing functions, and I decided I didn't want an Operating System!  I didn't want the large amounts of useful and useless code that comes with it.  So our mission is to provide, ie write, the code needed by the C library.

C runtime


We also need to initialise the C environment, which is the function of the C runtime (crt0.o).  This is fairly easy and we deal with it first.  Brian Sidebotham provides an excellent description in part 2 of his Valvers Bare Metal C tutorial.  At its simplest we just have to set up a stack pointer so that C can use a stack.  This is set to the program load address, 0x8000 on the RPI1B.  The program is loaded in addresses from 0x8000 upwards and the stack uses memory downwards towards 0x0000.  A second job that is required is to initialise variables to 0, which is a C standard requirement.  The variables are stored in the BSS segment and a small C program (cstartup) can be written to set values in the segment.
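The zeroing loop itself is tiny.  A sketch of the cstartup idea (in the real thing the bounds come from linker-script symbols such as __bss_start__ and __bss_end__; here they are plain pointers so the logic can run anywhere):

```c
/* Zero every byte between start and end.  crt0 calls this with the BSS
 * bounds before handing control to main, satisfying the C rule that
 * statics start life as zero.                                          */
void zero_bss(unsigned char *start, unsigned char *end)
{
    while (start < end)
        *start++ = 0;
}
```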

Library stubs


The tutorial goes on to explain how we can start using library functions, using the malloc (memory allocation) function to demonstrate.  Compiling a C program containing malloc calls gives an error: "_sbrk" not found.  This is a low level function which we need to provide ourselves.  Luckily we can use a working example from newlib.  We copy this to a file c-stubs.c, compile without errors, and we can request / use memory allocated to our program.  In fact newlib contains a list of the system calls which the C library expects.  Many of these are just dummy stubs with the function call and no code, so you still have to do some work yourself.
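For illustration, a sketch in the spirit of the newlib example (not its exact code; on the RPi the heap would start at the linker symbol _end and grow upwards, whereas here a static array stands in for that RAM so the logic can run on a host):

```c
#include <stddef.h>
#include <errno.h>

static unsigned char heap[64 * 1024];      /* stand-in for RAM above _end */
static unsigned char *heap_end = heap;

/* Grow the program break by incr bytes and return the old break; this
 * is the contract malloc's allocator expects from _sbrk.               */
void *_sbrk(ptrdiff_t incr)
{
    if (heap_end + incr > heap + sizeof heap) {
        errno = ENOMEM;                    /* out of stand-in RAM */
        return (void *)-1;
    }
    unsigned char *prev = heap_end;
    heap_end += incr;
    return prev;
}
```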

Test Program


To demonstrate that I now have a working C library capability I use malloc and strcpy (string copy) in my "Hello World" program.  The program is based on David Welch's UART tutorial example with Valvers' additions to use libraries.  The linker script and bss initialisation were a little difficult to write but David Welch comes to the rescue with his explanation of linking requirements.  The RPI has simplified linking requirements as all code and data is incorporated into kernel.img and loaded into memory.  Initialising BSS is not required - although I tested the program successfully both with and without initialisation.


Saturday 13 June 2020

FreeRTOS Tutorial

Background

I am always on the look out for simpler Operating Systems I can try.  I often see references to FreeRTOS in passing and was excited to see an Elektor article which shows how to run FreeRTOS on an ESP32.  My current ESP32 is happy running a micropython setup with its own firmware so I promptly ordered another.  In the meantime I looked around for learning resources so that I could learn more.

I decided that a udemy course, "Arduino FreeRTOS from the Ground up", fitted the bill and as it was cheap (£13) I gave it a try.  It turned out to be somewhat superficial and very slow paced but it does get you up and running and actually using FreeRTOS.  FreeRTOS can run on any Arduino so I quickly gave it a try.  Circuit Digest's Arduino FreeRTOS tutorial would be a better place to start.

Task Creation


FreeRTOS is centred around tasks (threads) which run independently.  The OS simply arranges for a mix of tasks to run on the available hardware.  In the Arduino IDE you define a setup function which is run once to initialise the system and a loop function which carries out the repeated activities on the system (e.g. lighting LEDs, reading sensors, outputting results).  One of the headaches is making sure that all the activities are carried out when you need them.  For example you may have a sensor you want to read every 100 milliseconds, and a webpage which you want to send out whenever a suitable http request arrives.

FreeRTOS eliminates use of the loop function.  You simply create all the tasks in the setup function, provide details of their priorities and let FreeRTOS decide which one needs to run.  To use FreeRTOS you simply start the sketch with:
 #include <Arduino_FreeRTOS.h>

The task creation function takes the form:
  xTaskCreate(functionName, name, stackSize, parameters, priority, &handle);
For example:
 xTaskCreate(flashRedLed, "Flash", 100, NULL, 1, &redHandle);
Note that the final argument is the address of the handle variable, and the NULL is the (here unused) parameter passed to the task function.
Now, within the function flashRedLed, you write standard blink code; it can even be copied from the blink example.
When compiled and uploaded the LED blinks.  Each of the program's functions can be added in a similar manner and will work independently.  You can simply copy and paste working functions and FreeRTOS will take care of them.

The handle is a variable which allows reference to and control of the task; for example, to suspend / resume the task you write:
vTaskSuspend(redHandle);
.... do something ....
vTaskResume(redHandle);

Passing Information between tasks


Tasks usually need to communicate with each other; for example a user input task would pass details to a processing task, which could then send results to an output task.
Queues are used to pass the information.  In setup a queue is created allowing a certain number of entries, and functions can then add an item to the queue.  The item is usually a structure, so that all the necessary details can be included within the single parameter.
Queues can be grouped into queue sets so that tasks can easily process information from a number of queues.

Synchronising Tasks


Timers start and stop tasks based on clock ticks or milliseconds.  Event groups are defined to set specific bits, allowing tasks to wait for something to happen before taking action.
Semaphores prevent tasks conflicting over shared resources by flagging when they are in use.
Mutex semaphores allow a single task to control a specific resource.

Interrupts weren't covered much, but of course they are important in Arduino programming.  There is a special function, xQueueReceiveFromISR, so that tasks can process ISR follow-up work.


Summary

FreeRTOS provides a simple view of an Operating System's "responsibilities".  Its job is to facilitate tasks to carry out their work.  The Arduino implementation provides this in a very simple manner by replacing the loop with a powerful task mechanism.  Other "responsibilities", such as providing hardware drivers and a user interface, are (rightly) left to the existing Arduino environment.
I am not convinced that the udemy tutorial was better than blog tutorials on this subject but it did achieve a basic understanding for me.






Tuesday 9 June 2020

RPi Bare Metal C - Hello World

Background

Bare metal programming has a back-to-nature feel about it.  We have unimaginable amounts of software interacting in complex ways whenever we want to use computer hardware.
On 8-bit processors such as PIC or ATmega you are very close to the hardware.  A processor chip has a data sheet which you can use to see how to place instructions in memory so they will execute.  You add your own peripheral devices and are responsible for programming them.
The Arduino IDE allows you to program in C on small systems like Uno or ESP8266.  They are not far removed from the hardware, but have thorny implementation details removed and a simple setup/loop framework provided for your C programs.
For "real computers",32-bit or 64-bit devices which run linux (or Windows),  and are capable of running many tasks simultaneously you are far, far removed from the hardware.

Bare Metal computing on RPi allows you to rediscover the hardware in all its gory glory.
Programming in assembler is a mug's game, but in fact only a tiny amount of standard assembler code is required.  Once the C environment is set up and you have a cross compiler to hand, you can write C programs without an OS.

Environment


My starting point is an RPi 1B.  An RPi 2 or RPi 3 would be suitable but I don't need their extra power or complexity.
Not all startup steps on RPi devices are open source, but we do know that after initialising the hardware an RPi 1B loads a program kernel.img from a FAT-formatted SD card and starts the ARM processor.
Our focus is to compile an appropriate program, name it kernel.img and place it on an SD card so that it will execute.

We need a cross-compiler that will generate ARM code so I find it most practical to use GCC on Windows under WSL.

A variety of like-minded people have kindly provided tutorials.  valvers, Jake Sandler, osdev, S Matyukevich, BZT and David Welch have put a lot of work into explaining the intricacies to help you get started.
For RPI1B David Welch's tutorial is absolutely perfect.  His writeups are short but packed with pertinent details and his examples work faultlessly.

Blinking LEDs

Our first problem as we start our program is that our bare RPi 1B has no software drivers.  As is traditional on embedded systems, our objective will be to make an LED blink.

David Welch's example blinker01 provides an assembler code stub and linker script so that the program is loaded at location 0x8000 and initialises the stack pointer and registers for C.  The C program then takes over and initialises GPIO16 (connected to the "OK ACT" LED on the board).
GPIO header details need to be included in the program to provide appropriate addresses for controlling the GPIO sub-system.  Finally GPIO16 is configured, enabled and set to 0/1 causing the LED to blink.
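The register arithmetic behind this is worth seeing. A sketch of the address calculations, using values from the BCM2835 peripheral datasheet for the RPi 1 (on the Pi itself these addresses would be read and written through volatile pointers):

```c
#include <stdint.h>

/* GPIO register layout from the BCM2835 datasheet (RPi 1B). */
#define GPIO_BASE 0x20200000u   /* peripheral base 0x20000000 + 0x200000 */
#define GPSET0    (GPIO_BASE + 0x1Cu)   /* write 1 bits to set pins high */
#define GPCLR0    (GPIO_BASE + 0x28u)   /* write 1 bits to set pins low */

/* Each GPFSELn register holds 3 function-select bits for 10 pins. */
uint32_t gpio_fsel_addr(int pin)   { return GPIO_BASE + 4u * (uint32_t)(pin / 10); }
int      gpio_fsel_shift(int pin)  { return 3 * (pin % 10); }
uint32_t gpio_output_bits(int pin) { return 1u << gpio_fsel_shift(pin); } /* 001 = output */

/* Setting or clearing a pin is a single-bit write to GPSET0 / GPCLR0. */
uint32_t gpio_pin_mask(int pin)    { return 1u << pin; }
```

blinker01 follows exactly this arithmetic: set the function-select bits to make GPIO16 an output, then write the pin mask to GPSET0 and GPCLR0 alternately in a delay loop to blink.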

This is a huge step forward; we are running a program without any external threads or libraries, directly controlling the hardware.  In fact the executable kernel.img is only 148 bytes.  

Hello World

This is the traditional first program for most environments, printing out a message on the screen.  It is so much easier to develop programs when you have a method of communicating back to the user what is happening.  Flashing LEDs quickly lose their sparkle when they are the only way you have of understanding what the processor is doing.
Of course we don't have a screen to write output to.  We certainly don't want to delve into controlling the RPI HDMI output and GPU at this stage, but luckily we can use the RPI inbuilt UART (GPIO14/GPIO15) to send to a terminal/minicom/Putty session.  The GPIO TX/RX and GND pins were connected to an FTDI connector with a USB cable attached to Putty on the PC.

My first attempts at programming the UART failed to work and I don't know why.  I tried different tutorials, RPis, compiler options and terminal connections, all without success.
On reading that the mini-UART is easier to program I tried this instead.  I was relieved and amazed when I put David Welch's UART01.bin on the SDcard and started it up to find it works perfectly, displaying digits as fast as it can.  The C program includes UART headers and initialises the UART.  It then has a simple putc function to output characters.
It is a small extra job to read input from the terminal session and echo characters out to the screen.  We can now do terminal I/O, another huge step forward.
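The polling logic behind putc and getc is small. A sketch using the mini-UART line status register, with addresses and bit meanings as given in the BCM2835 datasheet; the status checks are separated out as pure functions here, while on the Pi the register would be read through a volatile pointer:

```c
#include <stdint.h>

/* Mini-UART registers, from the BCM2835 datasheet (RPi 1B addresses). */
#define AUX_MU_IO_REG  0x20215040u  /* read: received byte; write: byte to send */
#define AUX_MU_LSR_REG 0x20215054u  /* line status register */
#define LSR_DATA_READY 0x01u        /* bit 0: a received byte is waiting */
#define LSR_TX_EMPTY   0x20u        /* bit 5: transmitter can accept a byte */

/* putc spins until uart_can_write(status) is true, then writes the byte
   to AUX_MU_IO_REG; getc spins on uart_can_read, then reads the same
   register. */
int uart_can_read(uint32_t lsr)  { return (lsr & LSR_DATA_READY) != 0; }
int uart_can_write(uint32_t lsr) { return (lsr & LSR_TX_EMPTY) != 0; }
```

The echo loop is then just c = getc(); putc(c); repeated forever.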

Bootloader

Each time I want to test a program I have to take the SD card out of the RPi, copy across a new kernel.img, replace it in the RPi and restart.  This quickly becomes irritating.  The marvellous David Welch has written a simple bootloader program.
To use this you put bootloader06.bin as kernel.img on the SDcard.  Now whenever you power up RPI a bootloader is started.  The bootloader waits for a program to be transferred across the serial link using xmodem file transfer.  I used minicom or ExtraPutty to transfer the file.  Pressing 'g' causes the program to run.  To use a different program simply power cycle the RPI and load another executable.
This is another huge step forward for me.  It gives me a practical environment to work in.


Saturday 6 June 2020

Linux AV

In May I had a note from Virgin Media, my ISP, to say that azorult, some malware, was present on a device using my internet connection. It was helpful of them to provide this but not specific enough for me to isolate the problem.

azorult is malware often spread by phishing; infection could occur from clicking on bad links. Information on the internet doesn't pin it down to particular types of device.  My devices include PCs, ipads, android phones, RPis, music devices and ip cameras.  There are also visitors with other devices.

My first step was to warn home users to stay alert as a warning had been received.  I then needed to check that virus protection on my systems is up to date.  For hardware devices there isn't much I can do.  Virus checking on phones is done automatically and Windows devices have updates applied automatically.  That means my main effort was geared towards Linux.

I don't generally virus check RPi systems as they have very limited external connectivity.  In this case, since the infection is potentially inside the LAN, I needed to review them.  There don't appear to be many virus scanners appropriate to linux; ClamAV, which is owned by Cisco, seemed to be good and widely used.

On installation the software runs freshclam to download virus signature database files.  You then use clamscan on a file selection to check for viruses.   I ran a complete check on RPi SD cards and saved the results.  Mostly the checks worked well.  I had problems with the newest RPi 3+ running buster and split the scan down into chunks to narrow down the problem, which then "went away".  RPI 1+ had insufficient memory for a scan so I created a samba share for the root drive and successfully scanned from RPI 3+.

Results were encouraging: no viruses were found.  That leaves me with some confidence that I don't currently have a problem and have made reasonable efforts to protect us.

Thursday 28 May 2020

Linux FrameBuffer

I found a 4" touchscreen for RPi in my cupboard the other day.  I actually purchased it 3 years ago and had forgotten about.  I thought I should try it out and see what it can do.  It turns out to be a Waveshare 480x320 touchscreen.  Initially I wanted to use it for output/display.  I certainly don't want to run Xwindows on it so I needed to find out what is possible.

Installation


I plugged it in to an RPi 2B and installed the driver software. I run RPi servers headless and I was pleasantly surprised to see that the RPi console startup messages are displayed.  It looks nice but isn't useful as the screen / characters are very small and I don't have the ability to type in to the console or login.



Display Text


Sending output to a terminal is easiest if the device is logged in.  Using raspi-config we can tell linux to log in to the console as pi at startup.  We can now easily display messages, e.g.:
  echo 'Hello little screen' > /dev/tty1

To clear the screen or display text at specific positions we use control characters.  I have a fondness for these from my DEC PDP-11 programming days when a DEC VT100 was considered an excellent screen and I used it extensively.  I can clear the screen with:
 echo -e '\e[2J\e[1;1H' > /dev/tty1
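The same sequences can be generated from a program. A small sketch (the function names are my own) which builds the VT100/ANSI strings, ready to send to /dev/tty1 or any terminal:

```c
#include <stdio.h>

/* Build VT100/ANSI escape sequences, as used with echo -e above. */

/* Cursor positioning: ESC [ row ; col H */
void esc_move(char *buf, size_t n, int row, int col) {
    snprintf(buf, n, "\033[%d;%dH", row, col);
}

/* Coloured text: ESC [ sgr m ... ESC [ 0 m  (e.g. sgr 32 = green) */
void esc_colour_text(char *buf, size_t n, int sgr, const char *msg) {
    snprintf(buf, n, "\033[%dm%s\033[0m", sgr, msg);
}
```

Writing the esc_move string followed by an esc_colour_text string to /dev/tty1 places coloured text at the chosen row and column, just as the echo commands above do.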


Images and videos

It is surprisingly simple to play videos on the framebuffer using vlc's framebuffer output:
 cvlc --vout fb starwars.avi
VLC can also display images in the same way:
 cvlc --vout fb AtTheBeach.jpg


As everything is a file in linux I can take a screen snapshot with:
 cat /dev/fb0  > screenshot.raw
and then redisplay it later with:
 cat screenshot.raw > /dev/fb0

However the best way to display images is to install the package fbi (framebuffer image) which gives you lots of image capabilities.




C Programs using the Framebuffer

A great tutorial is provided by Raspberry Compote explaining how to use the framebuffer in a C program.  It makes the whole process fairly straightforward.

Step 1 is to interrogate Linux to obtain characteristics of the device, in our case a 480x320 32bpp screen.
This is followed by a program to create a memory map for the buffer in RAM and write pixels to it.  Again it isn't very complicated and you end up with a pattern on your screen.
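The crucial calculation is finding where pixel (x, y) lives in the mapped memory. A sketch using the values returned by the framebuffer ioctls (xoffset, yoffset and bits_per_pixel from the variable screen info, line_length from the fixed screen info):

```c
#include <stddef.h>
#include <stdint.h>

/* Byte offset of pixel (x, y) in the mapped framebuffer, using values
   from FBIOGET_VSCREENINFO / FBIOGET_FSCREENINFO.  line_length may
   include padding, so it is not necessarily width * bpp / 8. */
size_t fb_pixel_offset(uint32_t x, uint32_t y,
                       uint32_t xoffset, uint32_t yoffset,
                       uint32_t bits_per_pixel, uint32_t line_length) {
    return (size_t)(y + yoffset) * line_length
         + (size_t)(x + xoffset) * (bits_per_pixel / 8);
}
```

For the 480x320 32bpp screen, line_length is typically 480 * 4 = 1920 bytes, so pixel (10, 2) sits at byte 2 * 1920 + 10 * 4 = 3880 of the mapped buffer.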

After that you need to investigate formats, palettes and colours in a little detail so that you can use the best approach for your application.  In practice, for your own programs an 8 bit format is preferred.  You have 16 basic colours provided and you can define the remaining 240 colours in the palette as needed.

Image formats and processing can become very complicated so C programs are probably best suited to displaying patterns, but it is great to have a bit-mapped screen available.

As a quick test I wrote a C program to display a grid on the screen and incorporated it with the colour display.


Lvgl

In this post we are concentrating mainly on outputting to the framebuffer; however, its value lies in being a touch screen.  For touch screen usage we will be using lvgl (the Light and Versatile Graphics Library). As an introduction I found a great demo which uses lvgl to write a number of objects ("widgets") to the screen, for example text and buttons.  A subsequent blog will describe progress on touch related functions.











Friday 15 May 2020

Google Arts and Culture

One of the things I miss most during the Coronavirus lockdown is my visits to London Art Galleries.  A very good substitute is Google Arts and Culture (GAC) which provides access to the finest art from all over the world.
For some galleries, mainly in London, I have my own set of pictures, descriptions and opinions in my slideshow app.  It is interesting to visit these galleries in GAC and see which exhibits they highlight.
For the galleries I visit most often, Tate Britain and the National Gallery, GAC provides a street view virtual tour which includes stops at particular works.  The descriptions provided on these works are somewhat superficial but they also contain links to other works on the same theme, with similar subjects, by the same artist etc.  Linked works may be at other galleries, which can then be explored as well.


Thursday 14 May 2020

MSGEQ7 Graphic Equalizer with OLED Display

Elektor magazine gives a continual stream of great ideas for electronic projects.  About 9 months ago I saw a project which utilises MSI MSGEQ7 chips in a 7 channel spectrum analyser / graphic equaliser.  This little chip is ideal for creating a bar display showing an audio frequency response.
I tried it out with a Neopixel display and a small OLED screen.



Design

Hardware: I initially intended to use a NodeMCU ESP8266, but the software library appears not to support ESP8266, so I moved to an Arduino nano.  Neopixel and 0.96" OLED displays were used.
Input: Initially the PC line out was used for testing; L and R channel signals were combined. The final solution needed to work on the hi-fi using RCA or digital audio Jongo output.
Software: Nico Hood's MSGEQ7 is the principal Arduino library providing the necessary functionality.
Output: Initially console output used to understand MSGEQ7 values, then dots, then bars were used for each channel.

Circuit


Nico Hood provides a lovely informal circuit diagram for the MSGEQ7.

The chip provides 7 values each time the input is sampled and these are obtained on the output pin using the strobe pin.  Frequency ranges are defined  using a 200k resistor and 30pF on pin 8.
Three pins were connected to the arduino nano: Strobe (D2), Reset (D3), Output (A0).  A further pin (D9) on the arduino was used for a neopixel display.

Testing


Nico Hood's library demo program worked well and I was quickly able to see what values are being output by showing them on the serial monitor.  There is a "noise" level of about 16 (in 8 bit value) which I subtracted.  I then scaled the remaining values and  translated them to dots on the Neopixel and got a fast responsive display.  I then replaced dots by bars and put caps on the bars.  To avoid excessive flicker on the display I only sent changes each cycle rather than re-displaying the entire screen.
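The value handling described above amounts to subtracting the noise floor and scaling what is left to a bar height. A sketch of that arithmetic (the noise floor of 16 is the observed value from above; the 8-LED bar height is an assumption for illustration):

```c
#include <stdint.h>

#define NOISE_FLOOR 16   /* observed idle level in the 8-bit readings */
#define BAR_LEDS    8    /* height of one display column (illustrative) */

/* Map one 8-bit MSGEQ7 band reading to a bar height of 0..BAR_LEDS. */
int band_to_bar(uint8_t value) {
    if (value <= NOISE_FLOOR) return 0;           /* below the noise floor */
    return ((value - NOISE_FLOOR) * BAR_LEDS) / (255 - NOISE_FLOOR);
}
```

Each cycle the seven band readings are passed through this mapping and only the LEDs whose bar height changed are rewritten, which keeps the display flicker-free.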

The OLED B&W (£5) display provides some fine detail but it isn't as interesting as the Neopixel colours.  A 0.96" colour display is about £15 - a bit high for a little circuit.  OLED screen updates are responsive and look convincing.  

RCA analog and digital output can also be used.  To use the digital output I need to convert the signal to analog, I purchased a PROZOR 192KHz DAC convertor for about £5 to do this. Pure Jongos provide both formats so I can attach a "VU meter" at the same time as connecting Jongo to Hifi.





Wednesday 22 April 2020

Floureon IPCAM

A little while ago I purchased a very cheap Floureon Wifi IP Camera webcam.  It comes with a short user guide to set up the device.  You start by downloading the CamHi software onto your phone or ipad.  The webcam has wifi and ethernet connections.  You introduce the handheld device to the webcam by holding them physically close together and allowing the ipad to start an audio dialog with the webcam to establish its ip address.

Once they have "paired" in this rather novel fashion you can look at the camera output on the ipad and swipe in any direction on the screen to pan/tilt the camera so that it shows what you want to see.  The picture quality is detailed and clear with good colour, and sound is also available, though I found the movement functions tedious.  You can take snapshots or record videos and optionally save them to an SD card. I couldn't find any further details about the webcam online, so at face value you have a very nice webcam with an interface to a handheld device.

My current application requirement is to provide a nighttime garden webcam to look for activity, particularly from foxes.  I need to record video when motion is detected in the picture so that it can be looked at the next day.   I have a Raspberry Pi NoIR webcam which isn't being terribly useful for this purpose although it did previously carry out great service as a puppy cam in our kitchen.

As the supplied CamHi software doesn't have the functions we require (in particular motion based recording), it is necessary to find other ways to access the webcam via its network interface. A very helpful open source site ispyconnect provides details of URLs which can be used to access webcams.  We know our ip address 192.168.0.175 and user/password admin:admin.
To take a snapshot from the camera we can use a chrome URL:
 http://admin:admin@192.168.0.175/snap.jpg
To stream a video from the camera we can open a network stream in vlc with:
 rtsp://admin:admin@192.168.0.175/1/h264major
The video can be streamed at a lower resolution with:
 rtsp://admin:admin@192.168.0.175/2/h264major

For the picamera we have previously used linux motion software very successfully to process a video stream and record only when motion is detected.  We can utilise the stream coming from the webcam by setting netcam_url in the motion.conf configuration file to the rtsp streaming address.  Once we restart the motion service (on RPi pi34) we can view camera output at http://pi34:8081.
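The relevant motion.conf lines look something like this (netcam_url is the motion option name; stream_port assumes motion 4.x, where the live view defaults to port 8081):

```
# motion.conf: read the webcam's rtsp stream instead of a local camera
netcam_url rtsp://admin:admin@192.168.0.175/2/h264major
# live view of the processed stream
stream_port 8081
```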

I am quite keen to be able to control pan/tilt from linux or a PC.  I found that I can log in to its network interface at http://192.168.0.175.


In addition to showing the camera view it allows me to use pan and tilt.  "Left and right", "up and down" buttons cause the camera to scan throughout its range.  You can view two streams, full detail  (1280x720) or smaller (640x352).  There are 8 preset positions you can setup.

The settings tab is informative, allowing you to see/set many more features on the camera (e.g. turn IR on / off).  The network page includes settings for ONVIF, which is already on at port 8080.  This should allow me to use a general purpose ONVIF client to control the camera.  ispyconnect has a suitable client Agent DVR which I could install on Windows.


After a little experimentation I was able to set up the camera in the iSpy Agent UI and view its output.  It also gives me a device service URL http://192.168.0.175:8080/onvif/devices .  Using iSpy the PTZ controls are a bit better.

I was now ready to try out the solution.  It turns out, quite reasonably, that infra red light reflects well off glass (which is designed to let optical wavelengths through), so attempts at an indoor setup were not successful.  As the weather was good I took the IPCAM outside and set it up in the garden.  Using the web interface I set the IR leds on continuously.  I set up linux motion software to record when movement is detected.  I set the sensitivity quite high to capture small changes, and I did capture various moths / insects whizzing by, but I also saw the fox doing his/her evening walk across the garden just before midnight.  The motion software worked well; it only recorded about 5 minutes of video overnight so it isn't onerous to look through the recording.







Tuesday 21 April 2020

Pi-Lite

It is a little strange to be posting about Pi-Lite.  It is a Raspberry Pi add-on which I received in 2015.  I picked it up and tried it out after a gap of a few years and I was again struck by what a lovely little product it is.  It came from a company called Ciseco which folded in about 2017 so the original documentation is no longer available.  However Matt Hawkins produced a tutorial I used at the time for familiarisation and he (his site anyway) is still around so I was able to find what I needed.

The board is simply a set of 14x9 monochrome LEDs which can be used to display images or scrolling text.  It is connected to the RPi using a serial UART and just needs to be plugged straight in to the RPi, or wired to the 4 pins TX, RX, 5V and GND.  The LEDs are controlled by an ATmega328 chip on the board which accepts commands from the serial port and changes the display accordingly. You can program individual pixels, bars, or download a 14x9 pixel image using commands.  By default information downloaded is treated as text characters and scrolls across the display as a message, which you can speed up or slow down.

To try out the board you start up minicom, a simple terminal emulator.  Any text you type in is displayed on the Pi-Lite as scrolling text.  It makes more sense to control the display using a program and python-serial is a good way of doing this.  Matt Hawkins / Ciseco provide some good demos.  A stock ticker and weather report would have been fun but their data sources no longer work.

Pi-Lite doesn't have to be connected to an RPi, it works equally well on a PC serial port (with appropriate convertor).   Of course you may not be satisfied with the commands provided on the ATmega328 and will probably have already realised this is an Arduino processor.  You can modify the inbuilt program using the Arduino IDE to amend the Pi-Lite sketch.  Alternatively you can write your own sketch using the Pi-Lite Arduino library to control the display however you want.


Wednesday 15 April 2020

Tiny Core Linux

Intro

Recently we have looked at Linux in a variety of ways.  Buildroot has been used to tailor and build a distribution, we have compiled Linux to run on the Atlas board, we have created many different systems on Qemu.  Along the way I came across Tiny Core Linux (TCL)  which provides a Linux system in 11MB. Kudos to Robert Shingledecker who developed it in 2008 and those who have supported it since then.  It has an extensive user manual and an active forum.

Core System

The simplest variant of TCL is called Core.  This comprises two files: a recent linux kernel (5.4.3) and core.gz, a compressed file system.  It is available as an ISO.  I could quickly boot from the iso file in Qemu and bring up a small system.  It looks like linux, feels like linux and quacks like linux.

PiCore

A version of the system has been provided to run on RPi.  I simply copied the iso to an SD card and booted on my RPI 1B.  It comes complete with network access and SSH.  Initially I connected a screen and keyboard but it was soon reliable enough to run headless.

TinyCore

The next step is to set up the window manager FLWM (Fast Light Window Manager) so that we have a working X system.  Again we have a small amount of software.  Using X under Qemu was a bit of a pain and I don't want to use an RPi like this, so I installed on Roy's old laptop.  A 2GB partition is vast overkill for a system which is now about 50MB but it enabled me to run some variants.

Linux Structure

The irresistible attraction of a tiny system is that it is small enough to have a crack at understanding the overall makeup of linux.  The TCL manual was a big help as it describes the boot process initiated by the kernel, which lays out the file system, sets up devices and hands over to init, which is part of busybox.  Init provides utilities for all the processes running on the system and makes requests to the kernel where necessary.  Applications / programs can be added as necessary but aren't essential for a running system.  TCL minimises the ones provided by default but has a good variety of extensions available for download and install.
So we really just have the kernel, Busybox binaries, some libraries and a few startup scripts which make up our system.  As the icing on the cake, Busybox has a link to Fabrice Bellard's linux in a browser, running busybox.

Monday 6 April 2020

Atlas - Kernel Build

WS2

Following success with the Rocketboards WS1 tutorial, which enabled me to write a C program running under linux in two-way communication with an FPGA program, I was excited by the prospect of WS2.
WS2 describes how to build a preloader (as before), build u-boot and build a kernel.  I was able to build the preloader and u-boot but the kernel build dates from about 2015 and not all the sources are still available.  The installation is very technical and I wasn't able to adapt it myself.

Digikey

Robert Nelson, a modern day hero at Digikey, has published, and kept up to date, a sequence of instructions to build u-boot and linux then create a bootable SDcard.  The instructions explain how to:
  • Install the Linaro ARM cross compiler
  • Obtain and compile u-boot
  • Obtain and build the kernel
  • Obtain a Debian 10.3 root filesystem
  • Create the SD card.
I created a Debian 10.3 linux system on a USB hard drive so I could do the build without affecting my Windows PC configuration and the tutorial worked like a dream. The new SD card boots up on the Atlas system complete with ethernet interface, ssh and nginx.  It used 550MB of the 2GB microSD card I supplied so there is plenty of space.






Sunday 29 March 2020

Building Linux Images

Buildroot


I have often wondered what it takes to start up linux. As I delve further into the Atlas tutorials I will understand it better.  I believe yocto is the hardcore build environment but buildroot is a simpler substitute which, unsurprisingly, runs on linux.
A tool called vagrant makes it ridiculously easy to setup buildroot.  As explained in a short tutorial by Dzone you simply download a vagrant configuration file and run vagrant to startup a VirtualBox ubuntu VM configured for buildroot.  It works a dream.
I followed the second part of the Dzone tutorial to create a simple i386 linux including openssh.
This builds all the necessary software and creates two output files: a kernel image (bzImage) and a file system (rootfs.ext2).

Qemu


Armed with these two files we can bring up linux. We don't have a specific hardware processor in mind for this OS so it makes sense to bring it up in a virtual machine. Qemu is the workhorse for linux VMs and initially I installed qemu on my buildroot VM.  I can bring up linux and sign on to the console or ssh from the buildroot VM.  The linux I have built is very simple, but it is very satisfying to know that we have everything we need for a working linux system.

It doesn't make a lot of sense to run a qemu linux VM under VirtualBox ubuntu linux.  Qemu is available for Windows so I moved my two files across to my PC and installed Qemu for Windows.  The batch file to start up qemu has a different format to the linux script but after a bit of trial and error I could start my qemu VM in Windows.  Establishing the network under Windows took a little care; I had to enable the OpenSSH server in Windows so that ssh communication from qemu to Windows was possible.

RPI image


Clearly buildroot can create images for real hardware and I would like one for my trusty old RPI model 1B.  Buildroot makes this child's play by giving you a default configuration for various RPI models. Within buildroot I simply ran "make raspberrypi_defconfig" and then "make". The result was an image that I could burn to an SD card using Balena Etcher.  I booted it up and I have a simple RPI build.  This is excellent.

Qemu RPI


Rather than create SD cards for RPIs I would like to have a Qemu RPI VM working under Windows.  Alistair Chapman has provided a little blog showing how to do this.  It isn't a buildroot cutdown but a full RPI-buster-lite image.  The real RPI kernel needs some tweaking to run under Windows but Dhruv Vyas has provided suitable kernels for each Raspbian release. With a tailored kernel and a standard Raspbian image you can start up a Qemu VM.

In fact it is slightly more complicated than suggested to get the build to work; a couple of files need editing to make the image work and I followed instructions in the wiki helpfully provided by Dhruv. Awesome, I now have an RPI VM working under Windows 10 without VirtualBox.

Conclusion


It is very educational to build linux systems yourself to gain a feel for what software you need and what hardware components it uses.



Monday 23 March 2020

Atlas Workshop WS1

Introduction


The DE0-nano-SoC documentation provided some great tutorials to get started with FPGA, HPS and then establish links between them.  I can see that there is a lot more to find out about the device, software and working environment.
My efforts with the Golden Hardware Reference Design (GHRD) were eventually successful and the hard work involved was centred around choosing the right versions of software to match the tutorials.
I learned quite a bit through this process of trial and error; in particular Quartus and EDS version 15 (c.2016) are preferred where possible.  If there are bugs, version 16.1 may be more appropriate.  Version 18.1 is my newer favoured release but doesn't include suitable EDS/DS-5 capabilities as they have become chargeable.

Rocketboards.org provides a set of three workshops for familiarisation and they go into considerable depth on the internals of the product.  I am reasonably happy that I understand enough about FPGAs for my current purposes: I can compile and load a design using my own verilog or Altera IP modules.  However the Linux environment and FPGA-HPS bridges are mysterious.

The WS1 course materials provide an excellent overview of documentation available / required.  The format is a  slide deck so there isn't much detail included but technical areas are summarised.  The Cyclone V boot process is covered in some detail and forms the basis for the four labs within the workshop.

Preparation


In preparation for lab activities we download an SD card image which contains linux and course files.  The card is tailored for Atlas / DE0 and assumes that Quartus 15.1.2 is used.  Once the image is unzipped and burned to card using BalenaEtcher it can be booted.  This is excellent as I now have a known source for an SD card image which is tailored for my board.
We have a general WS1-IntroToSoc folder and the device-specific files hps_isw_handoff, DE0_NANO_SOC.sof and soc_system.sopcinfo.

LAB 1 Generating and compiling the preloader


When you boot the Atlas board from SDcard a very small BootROM program loads the preloader into on-chip memory.  The preloader's job is to set up the FPGA, define HPS I/O and memory and then copy the Linux bootloader into DDR memory, which allows the HPS to boot in the usual manner, load the OS etc.

Our first task is to generate a Board Support Package (BSP) which will define various hardware details relating to the FPGA design and HPS interface.  We have a pre-generated QSYS design, whose details are provided in hps_isw_handoff/soc_system_hps_0. We create a new BSP using the Altera BSP Editor tool, give it a copy of the design and generate a BSP.
The BSP folder contains source code, preloader settings and a Makefile to build the Preloader.

We now run make to compile the preloader and create an image file preloader-mkpimage.bin.

Finally we use the Altera Boot Disk Utility to copy the file to the correct partition on the SDcard so that the Boot Rom program will read it in and execute it.

Inserting the SDcard into the Atlas board we see the preloader booting up the system.  This is great progress but the system doesn't do a lot at this stage so we reset to the original preloader for now.

The lab is very instructive in showing the files required and produced, and the steps needed to generate a preloader.  We don't need to understand more detail as the design provides the details and the BSP Editor interprets them to generate a preloader.

LAB 2 Verifying Hardware with System Console


Quite a short lab showing you how to use the System Console, which looks to be hugely powerful.  It allows you to see and set values within the FPGA.  Our FPGA demo design runs a Fast Fourier Transform in hardware and sends the results back to the HPS.  Using the system console you can provide values to the FPGA, run the algorithm and, using scripts, capture and display output.
It requires a far better understanding of the hardware than I have currently so I don't expect to use it in practice.

LAB 3 Bare Metal FFT app


A bare metal C program is provided for us and our mission is to run it.
The file soc_system.sopcinfo which was given to us contains information about the FPGA memory layout. Using the sopc-create-header-files utility a number of headers are created from .sopcinfo which are required by our C program.

We now run make to use the provided Makefile to compile our bare metal program fft.bin.
We copy the executable across to the SDcard FAT partition which is mounted on our PC.

We now boot the SDcard and type "stop" to get to a u-boot prompt.
Commands are used to:
 load the FPGA program
 configure the HPS-to-FPGA bridges
 load the bare metal application into memory
 run the application
We then see the output: Hello world, followed by FFT inputs and outputs.

Finally we automate the load of the bare metal application.  This is (intentionally) partially successful, when our application runs it only prints hello world.  This is because we haven't configured or started the FPGA and bridges.

The tutorial covers a lot of ground.  Although I am unlikely to write bare metal programs it is wonderful to set up a working example so that we can see what a program does without an operating system there to help it.

LAB 4 Linux FFT Application


After the complexities of the previous labs this one is quite easy.
The fft app is compiled in EDS and copied to the SDcard.
We can then run the fft program on Atlas linux.
The linux configuration includes a lighttpd web server which can be used to specify input parameters and graph the output from the FFT.
It is deceptively simple as the C program looks complex and has a lot of code concerned with sending fft values to the FPGA, controlling it and reading back results before formatting them for the web server.