SYSIN and SYSOUT
With a working NIOS processor available to us, thoughts turn to software. Programs are written in C and compiled in Eclipse. The first requirement for C programs is to have SYSIN and SYSOUT available. If we have a single serial port (for example the MAX1000 JTAG UART), SYSIN/SYSOUT are assigned to it by default. If we have multiple serial ports we can choose in the BSP which ones to use.
Hello World
The simplest way to create C applications is to use "create new application and BSP" in Eclipse. There is a basic "Hello World" program which sends a SYSOUT message. Tutorials generally instruct you to ensure you are using the small C library and reduced device drivers to save space; the "Hello World Small" template enables both.
Typically C "Hello World" specifies <stdio.h> and calls printf for terminal output.
If you select the small option you use an alternative library and functions.
Our first processor has about 40 KB of on-chip memory defined, which is just about enough for printf output but not for input as well. To have a program with both input and output we use the small libraries and alt_putstr/alt_getchar. These can use as little as 800 bytes of program memory.
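As a sketch, a minimal program using the small HAL functions might look like this; it assumes a BSP generated with the small C library and reduced drivers, and it only builds inside a NIOS II Eclipse project:

```c
/* Sketch only: requires the Altera HAL headers from a NIOS II BSP. */
#include "sys/alt_stdio.h"

int main(void)
{
    alt_putstr("Hello from NIOS II!\n");

    /* Echo typed characters back until 'q' is received. */
    int c;
    while ((c = alt_getchar()) != 'q')
        alt_putchar(c);

    return 0;
}
```

With the JTAG UART as SYSIN/SYSOUT this runs against the nios2-terminal console.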
SDRAM
The physical Altera FPGA chip has 64 KB of on-chip RAM and 64 MB of SDRAM, so it makes sense to use SDRAM for our C programs and data. A tutorial from the University of Las Vegas suggests that you can simply add SDRAM and directly replace on-chip memory with it.
I ran into difficulties when I tried to add SDRAM, as it requires lots of pins to be defined. I found that the test_board project provided with the CycloneIV documentation tests SDRAM, so I used its pin assignments for my SDRAM.
Once I had added SDRAM in Platform Designer I removed on-chip memory and pointed the reset and exception vectors to SDRAM.
Now when I download programs from Eclipse to NIOS I can make them as large as I like. A simple C program with printf and scanf takes about 100KB.
Micrium
There is a third variant of the Hello World program provided in Eclipse which uses the MicroC/OS-II RTOS. It seems sensible to me to have an RTOS in an embedded processor. The RTOS Hello World compiled quite easily and only took 60 KB. In fact it starts two tasks which run simultaneously. It will be useful for multi-tasking, but it doesn't have a user shell (unless I write one myself).
Saturday, 14 December 2019
Cyclone IV NIOS
Introduction
Whilst building the CycloneIV 8080 CPU I checked out a NIOS processor, in particular to determine how it used serial I/O.
Altera have provided NIOS as their ready-made processor for a number of years. One reason for using FPGAs is to combine bespoke digital logic with an embedded microprocessor in the same chip, and not many customers would want to build these features from scratch. [November 2019]
Overview
To specify hardware, Altera provide Platform Designer so that you can choose components for your processor, including a NIOS II core. When you have working hardware, the Software Build Tools (SBT) for the Eclipse IDE enable you to write, compile and download C programs to the processor.
If you need extra functionality or peripherals you simply add them to the hardware design and utilise them in your program.
Samples
The CycloneIV development kit came with some NIOS samples. To check out the LED example I simply downloaded the SOF file and it ran; similarly for the bell. Unfortunately the TFT examples didn't work directly.
Simple Processor
It is very easy to create a simple processor in Platform Designer. The minimal components needed are a clock, some on-chip memory, a NIOS core and parallel I/O for some LED outputs. I wanted to add terminal I/O to the solution, as I was researching that facility for my 8080 CPU and was also a bit fed up with using LEDs for debugging.
Using LEDs for output was easy but, try as I might, I couldn't use JTAG USB as a serial port. I feel that it may not be supported, or perhaps my USB Blaster doesn't support that function. Along the way I found it is much easier to debug NIOS problems using commands in the NIOS2 shell.
Once I used the DB9 serial port the process became a lot simpler. Having tried a number of tutorials, mainly on YouTube, my favourite was from Labbook Pages. It helped me understand what we are creating at each step of the process.
Usable Processor
The Labbook processor included the C executable in its image. I used a YouTube tutorial to create my usable processor, which contained LEDs and serial I/O as well as the ability to load programs through Eclipse.
Extending Functionality
I could now add a seven-segment display to my processor. I chose to do this by specifying a 16-bit number to output from the CPU and then utilising previously written Verilog to convert this to a number of hex digits and display them. The display works by refreshing each digit in turn every millisecond so that all the digits appear to remain lit.
I could have put this functionality in the NIOS processor and output 7SEG pin signals, but that was more work.
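The refresh scheme can be sketched in Verilog along the following lines; the module and signal names are illustrative, not my actual code:

```verilog
// Hedged sketch: multiplex a 16-bit value onto four 7-seg digits.
// Names (hexval, seg, an, clk_1ms) are illustrative.
module seg7mux (
    input            clk_1ms,   // ~1 kHz refresh tick
    input     [15:0] hexval,    // value supplied by the CPU
    output reg [3:0] an,        // active-low digit enables
    output reg [6:0] seg        // active-low segment lines
);
    reg [1:0] idx = 0;
    reg [3:0] nib;

    always @(posedge clk_1ms) begin
        idx <= idx + 1;                    // move to the next digit
        case (idx)                         // select the current nibble
            2'd0: nib = hexval[3:0];
            2'd1: nib = hexval[7:4];
            2'd2: nib = hexval[11:8];
            2'd3: nib = hexval[15:12];
        endcase
        an <= ~(4'b0001 << idx);           // light one digit at a time
        case (nib)                         // hex digit to segment pattern
            4'h0: seg <= ~7'h3F; 4'h1: seg <= ~7'h06;
            4'h2: seg <= ~7'h5B; 4'h3: seg <= ~7'h4F;
            4'h4: seg <= ~7'h66; 4'h5: seg <= ~7'h6D;
            4'h6: seg <= ~7'h7D; 4'h7: seg <= ~7'h07;
            4'h8: seg <= ~7'h7F; 4'h9: seg <= ~7'h6F;
            4'hA: seg <= ~7'h77; 4'hB: seg <= ~7'h7C;
            4'hC: seg <= ~7'h39; 4'hD: seg <= ~7'h5E;
            4'hE: seg <= ~7'h79; 4'hF: seg <= ~7'h71;
        endcase
    end
endmodule
```

Scanning one digit per millisecond is fast enough that all four appear continuously lit.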
Cyclone IV 8080 CPU
When I have successfully reconstructed the 8080 CPU on the MAX1000 it should be straightforward to build the same functionality into the CycloneIV. We can use the project stages from the MAX1000 build and amend the project for CycloneIV pins and hardware components. [October 2019]
Stage 1
Create a project and use a generic Verilog program (LEDwater) to check that the LEDs are set up. The top-level module becomes Board.v, which we will use as a "PCB" for our processor.
We can then add CPU functions in increments:
Test states
Add data memory
Add ALU
Stage 2
The CycloneIV provides an RS232 UART for I/O. I had an old PL2303 DB9 cable which I could plug into the connector and PC USB. Unfortunately it was too old; although I persuaded it to do output, I couldn't do input until I bought a new cable.
It was then straightforward to implement keyboard input and screen output via a PuTTY terminal emulation session.
Stage 3
In addition to the DB9 UART, the CycloneIV allows terminal communication via USB for a NIOS console, and this same feature worked fine for my second MAX1000 serial interface. I wasn't able to get the inbuilt port to work on the CycloneIV, so I added an FTDI RS232 interface for the serial port. Once the corresponding CPU Verilog functions had been added I had a working CycloneIV 8080 development environment, complete with program load capabilities.
Thursday, 12 December 2019
Elektor processor dissection and reconstruction
To understand how a machine works you can, perhaps, take it apart and put it back together again. Whilst Elektor experiment 5 is great to see, and you can look at the code to get a general idea of its construction, something more is needed to become familiar with it and understand it better. [October 2019]
Dissection
Stage 1
The processor has a debug serial input allowing you to type in single-character commands and get text output. I checked that I could add commands myself to look at the processor.
Stage 2
I removed unwanted peripherals (DAC, accelerometer, SPI) from Top.v, the top-level module.
I then took out UART processing, as this requires a lot of code. Subsequent stages use LEDs for output.
Stage 3
I slowed down the CPU to 16 Hz (400 ticks) so that LED changes appear in real time, i.e. without needing to insert delays.
At the end of this stage Top.v is small but we still have a working CPU running C programs and producing output.
Stage 4
Firstly we remove the code for LED output from the processor and use the LEDs for debugging output instead.
We can see, in Icarus, each opcode being processed.
We can remove the special states div1, div2, readmemat3 without affecting processing.
Finally we can remove all the opcodes from the CPU except for jumps. A test program now loops but other codes are treated as NOP.
The end product is a processor with a clock, program counter and jump instructions.
Construction
1 Clock
We start a new MAX1000 project and add the ALTPLL clock and LPM_COUNTER IP. In a skeleton top-level program, Board.v, we incorporate these components and output appropriate bits from the counter to the LEDs so that we can see binary values being incremented.
In Quartus we need to add the LED and clock pins and timing (SDC) information.
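At this stage Board.v might look something like the following sketch; the PLL wrapper name stands in for the generated ALTPLL IP, and the port names are assumptions:

```verilog
// Hedged sketch of the stage-1 Board.v: a PLL output clocks a
// free-running counter whose upper bits drive the LEDs.
module Board (
    input        CLK12M,       // 12 MHz board oscillator
    output [7:0] LED
);
    wire clk;
    pll u_pll (.inclk0(CLK12M), .c0(clk));   // generated ALTPLL instance

    reg [31:0] count = 0;
    always @(posedge clk)
        count <= count + 1;

    // The upper bits change slowly enough to watch by eye.
    assign LED = count[31:24];
endmodule
```

Seeing the LEDs count in binary confirms the PLL, pin assignments and timing constraints are all working.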
2 States
Add a skeleton Cpu.v which just switches between the states fetch, decode, readmem, etc.
Add Testbench.v so that we can run tests in Icarus first.
Add USR_BTN, which stops the processor when pressed; we can use this for single-stepping.
Use LEDs to see the processor cycle through instructions and states.
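The skeleton Cpu.v described above amounts to a small state machine; this is an illustrative sketch, with state names from the text but assumed encodings:

```verilog
// Hedged sketch of the skeleton Cpu.v: it just cycles through the
// instruction phases. Widths and encodings are illustrative.
module Cpu (
    input clk,
    input run,                    // deasserted via USR_BTN to single-step
    output reg [2:0] state = 0
);
    localparam FETCH = 0, DECODE = 1, READMEM = 2, EXECUTE = 3;

    always @(posedge clk)
        if (run)
            case (state)
                FETCH:   state <= DECODE;
                DECODE:  state <= READMEM;
                READMEM: state <= EXECUTE;
                EXECUTE: state <= FETCH;
                default: state <= FETCH;
            endcase
endmodule
```

Wiring state onto LEDs lets us watch the phases advance, one press at a time when single-stepping.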
3,4 Codemem, datamem
We implement JMP and NOP instructions.
We use a program copied from stage 4 above and can see the program counter increasing until the JMP and then looping round.
We can now implement the ST (store) and LD (load) instructions to access memory, along with LDIND and STIND. We set up a stack at the end of datamem and implement CALL and RET, then add stack operations, e.g. PUSHR0, ADDSP, ....
We also add HALT to finish the program.
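The stack discipline behind CALL and RET can be modelled in plain C. This is a hedged sketch assuming a 256-word datamem with the stack growing down from the end; the names (op_call, op_ret) are illustrative, not the Elektor code's:

```c
/* Hedged C model of the CALL/RET stack discipline described above. */
#include <assert.h>
#include <stdint.h>

#define DATAMEM_WORDS 256

static uint16_t datamem[DATAMEM_WORDS];
static uint16_t sp = DATAMEM_WORDS;   /* stack grows down from the end */
static uint16_t pc;

static void     push(uint16_t v) { datamem[--sp] = v; }
static uint16_t pop(void)        { return datamem[sp++]; }

/* CALL: save the return address, jump to the target. */
static void op_call(uint16_t target) { push(pc); pc = target; }

/* RET: resume at the saved address. */
static void op_ret(void) { pc = pop(); }
```

A call from address 10 to a subroutine at 40 pushes 10, runs at 40, and RET restores the program counter with the stack balanced.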
5 Arithmetic
Add arithmetic, logic and comparison instructions:
IADD
XOR, OR, AND, COM, NEG, MUL
CMPEQ/NE/LT/LE/GT/GE, CMPULT/ULE/UGT/UGE
Also add conditional jumping.
A few more optimiser instructions (added to the compiler by the author to decrease the number of instructions) were also implemented.
At this stage it is possible to compile and run a C program containing code like result=i+i;
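A hedged C model of a few of the new operations (operand widths and flag handling are illustrative; the real CPU implements these in Verilog):

```c
/* Hedged C model of some ALU operations added at this stage. */
#include <assert.h>
#include <stdint.h>

static uint16_t alu_iadd(uint16_t a, uint16_t b) { return a + b; }
static uint16_t alu_xor (uint16_t a, uint16_t b) { return a ^ b; }

/* Signed compare used by conditional jumps: result is 1 or 0. */
static uint16_t alu_cmplt(uint16_t a, uint16_t b)
{
    return (int16_t)a < (int16_t)b;
}
```

CMPLT treats its operands as signed, so 0xFFFF (i.e. -1) compares as less than 1.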
6 Output
All peripheral output is directed by the OUTA instruction. Initially we implement channel 5 to set LED values. We then add output channels 9 (output a character) and 8 (set the output speed), and input channel 5 (determine the bits left to transmit). The CPU needs code to process the channels, and Top.v needs corresponding details for the physical hardware processing.
We have to add TXuart.v to do the bit-banging.
We can then run compiled programs including the C putchar() function.
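The bit-banging in TXuart.v can be sketched as follows; the port names and details are illustrative rather than the blog's exact code:

```verilog
// Hedged sketch of a TXuart-style transmitter: shift out a start
// bit, 8 data bits (LSB first) and a stop bit at the programmed rate.
module TXuart (
    input         clk,
    input  [15:0] divisor,     // clocks per bit ("output speed")
    input   [7:0] data,
    input         send,        // pulse to start a character
    output reg    tx = 1'b1,   // line idles high
    output        busy         // "bits left to transmit"
);
    reg [9:0]  shifter;
    reg [3:0]  bits = 0;
    reg [15:0] count = 0;

    assign busy = (bits != 0);

    always @(posedge clk) begin
        if (send && !busy) begin
            shifter <= {1'b1, data, 1'b0};   // stop, data, start
            bits    <= 10;
            count   <= divisor;
        end else if (busy) begin
            if (count == 0) begin
                tx      <= shifter[0];       // next bit onto the line
                shifter <= {1'b1, shifter[9:1]};
                bits    <= bits - 1;
                count   <= divisor;
            end else
                count <= count - 1;
        end
    end
endmodule
```

The CPU polls busy (input channel 5) so that putchar() waits until the previous character has been fully shifted out.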
7 Input, debug and load
Add the RXuart.v module.
Add IRQ processing to the CPU.
Add a bootload/standalone parameter to switch program loading.
This was quite an extensive step, involving quite a lot of code, at the end of which we had most C instructions available to us.
8 RTC, DAC, LIS
Finally we add other peripheral functions to the processor so we are confident we have a complete working system.
Conclusion
This was a time-consuming and very worthwhile exercise which allowed me to understand how the Elektor-provided Verilog code creates an 8080 processor. I kept variable names and code formatting the same, so the final result doesn't look radically different from the starting version, but I understand the content a lot better.
MAX1000 FPGA UARTs
UARTs are particularly useful for FPGA embedded processors, as it quickly becomes very tedious using LEDs for debugging and for program output. The Elektor 8080 embedded processor experiment 5 requires two UARTs: one for terminal I/O and the other for program load and debug statements.
As an introduction to UARTs I used an electronoobs tutorial which provides a clear explanation of the Verilog required to send and receive bits. I used MAX1000 pins A4/B4 to utilise the internal UART for testing.
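One detail every UART implementation needs is the bit period in system clocks. As a sketch, assuming the MAX1000's 12 MHz clock and a 115200 baud terminal:

```c
/* Hedged helper for UART bit timing: how many system clocks to
   hold each bit on the line. */
#include <assert.h>
#include <stdint.h>

static uint32_t clocks_per_bit(uint32_t clk_hz, uint32_t baud)
{
    return clk_hz / baud;   /* integer divide; the small error is tolerated */
}
```

At 12 MHz and 115200 baud this gives 104 clocks per bit, which is the kind of counter limit the Verilog transmitter and receiver use.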
For the second Elektor UART I used an FTDI RS232 cable attached to pins M2/M1. I could have used any available GPIO, and if I wanted I could add more terminals using more FTDI cables and GPIOs.
Initially, in Elektor experiments 1 and 2, executable 8080 C programs are loaded into the image which is downloaded to the MAX1000. Changing a program requires you to compile it, put the executable in the Quartus project folder, then run synthesis and load using Programmer, which very quickly becomes very tedious. Elektor experiment 5 uses Processing (a C environment on the PC, equivalent to Arduino) to transmit executable 8080 C programs to the MAX1000 via a serial interface. The interface also provides some basic commands to be used for debugging.
Our first serial interface (built-in B4/A4) is required for SYSIN/SYSOUT terminal I/O and we use the second one (FTDI M2/M1) for program loading. [August 2019]
Wednesday, 10 July 2019
8-bit MicroController
The driver for my FPGA familiarisation is to experiment with processors. As a starting point I looked for a small, easy microcontroller to implement in Verilog. I found a perfect example at FPGA4student.com. It is an 8-bit microcontroller which has 8-bit registers, 12-bit instructions and five core components: MicroController, ALU, Control Unit, Program Memory and Data Memory.
The three articles provide a full specification, design and implementation for this MCU, which greatly helps the building process. The MCU has 256 (8-bit address) 12-bit instructions and 16 (4-bit address) 8-bit data locations. The two inputs for the ALU typically come from the accumulator and data memory, although the capability to load immediate values from instructions is provided. Each instruction requires three clock cycles for the fetch, decode and execute phases.
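The fetch/decode/execute cycle described above can be modelled in plain C. This is a hedged sketch: the 4-bit-opcode/8-bit-operand field layout and the opcode values are illustrative assumptions, not FPGA4student's actual encoding:

```c
/* Hedged C model of the MCU's storage and three-phase cycle. */
#include <assert.h>
#include <stdint.h>

static uint16_t prog[256];   /* 12-bit instructions (low 12 bits used) */
static uint8_t  data[16];    /* 4-bit addressed data memory */
static uint8_t  pc, acc;

static void step(void)
{
    uint16_t ir  = prog[pc] & 0x0FFF;       /* fetch   */
    uint8_t  op  = ir >> 8;                 /* decode  */
    uint8_t  arg = ir & 0xFF;
    switch (op) {                           /* execute */
    case 0x1: acc = data[arg & 0x0F]; break;      /* load  */
    case 0x2: data[arg & 0x0F] = acc; break;      /* store */
    case 0x3: acc += data[arg & 0x0F]; break;     /* add   */
    default:  break;                              /* nop   */
    }
    pc++;
}
```

A three-instruction program (load, add, store) moves a sum through the accumulator exactly as the hardware's three clock phases would.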
My verilog coding isn't very good but when I was struggling I could look at the code provided for guidance. I started by implementing the Program memory and Program counter (PC) for the fetch phase. Initially these were developed using Icarus simulation, but once it worked 'on paper' I transferred it to Quartus for download to the MAX1000 which required plenty of debugging. MAX1000 LEDs were used to display PC and partial instruction contents. I felt at this stage that the MCU was alive, even though it was just stepping through the instructions.
Implementing an ALU is very straightforward as it requires purely combinatorial logic. Adding the control logic is rather more challenging. It would be possible to work from the design and implement all the logic at one time but I felt this could prove hard to debug so I added instructions in groups. Load and store instructions using data memory came first, followed by ALU operations and finally status registers were set and jump instructions implemented.
As the instruction set and data are rather limited and output is restricted to LEDs, I chose a Fibonacci series as a first program. This requires minimal processing, and output can be displayed in binary on the LEDs.
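The Fibonacci program can be sketched in plain C; values wrap at 8 bits to match the MCU's register width (the real program would display each value on the LEDs):

```c
/* Hedged sketch of the Fibonacci test program in plain C. */
#include <assert.h>
#include <stdint.h>

static uint8_t fib8(int n)        /* n-th Fibonacci value, 8-bit arithmetic */
{
    uint8_t a = 0, b = 1;
    for (int i = 0; i < n; i++) {
        uint8_t next = a + b;     /* wraps modulo 256, like the MCU */
        a = b;
        b = next;
    }
    return a;
}
```

The series fits in 8 bits up to fib(13) = 233; after that the values wrap, which is itself visible on the LEDs.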
Tuesday, 9 April 2019
Cyclone IV FPGA Development Board
Intro
My favourite thing about ebay is that it has a wide variety of development boards which provide great functionality at prices far lower than branded products. Looking for an FPGA board I was very excited when I saw this one.
It has a great specification:
FPGA core board:
Power input: DC 5V
FPGA: EP4CE10F17C8N
SDRAM: 256M
SPI flash: 64M
50 MHz clock input
Bottom board:
PC PS/2 port
VGA port, can display pictures or video, 16-bit 65536 colours
SD card
LCD12864/1602, can display numbers or English characters
LCD TFT, can display numbers, English characters or video
LED 7-seg display 1x8, can display numbers
LED 1x8
COM port
8x8 LED dot matrix
3x3 key input pad
1x4 key input
Switch 1x8 input
ADC TLC549, for analogue signal acquisition
DAC7512, for digital to analogue conversion
IR input
DS18B20 temperature sensor
So three days later we have lots of inputs and outputs to play with.
In fact the FPGA board is detachable so one could theoretically use all the board IOs for other purposes.
I experienced problems initially, as my USB Blaster programmer turned out to be a clone and I had to install old unsigned WIN7 drivers to make it work. Once I was able to load software the board was great. The vendor provided Quartus programs to test the main features, and it was a pleasure to load and run them so that I could be sure my subsequent efforts utilised working hardware. Update: life became much easier when I bought a real USB Blaster (£16 instead of £6), which works perfectly.
Monday, 8 April 2019
MAX1000 FPGA
I have always been somewhat in awe of FPGAs and nervous of the challenges they present. Recently, in the March/April edition of Elektor, I saw an instructional article on creating a processor on an Altera/Intel MAX1000, which reminded me of a study of computer architecture based on Nand2Tetris.com which I enjoyed immensely. The web-site (and book) leads you through all the important steps involved in building a computer, starting from a collection of NAND gates and ending with a processor, assembler, compiler and operating system to run your own programs.
Nand2tetris provided a hardware simulator based on building an increasingly complex hierarchy of components but naturally enough it was too slow to deal with running non-trivial programs.
I went through the following stages to find out more.
1) Purchase a $30 MAX1000 from Arrow Computers
2) Elektor 8080 + Small-C compiler part 1 : LED pattern
Following the instructions in Elektor magazine I built the 8080 simulator FPGA program and was able to run the C program which generates a MAX1000 LED pattern.
3) MAX1000 User Guide Tutorial : LED counter
When delivered, the MAX1000 runs an LED counter program; using the tutorial I was able to recreate it and download it to the FPGA.
4) Quartus, Hello World : NAND gate LED
I wanted to write the simplest possible program and decided on a Nand gate. Unfortunately only one button is easily available on the MAX1000 so I implemented a NOT gate instead. When a button is pressed an LED goes out.
This exercise simplified technical work into two tasks:
a) a very simple HDL program to implement a NOT gate.
b) assignment of two MAX1000 pins to a button and an LED
5) Altera NIOS II Hardware Development Tutorial
NIOS II is an FPGA microcontroller provided by Altera. A tutorial is provided which goes through a significant number of steps to build the solution. It helps you become familiar with the Quartus Prime environment and the specific MAX1000 hardware.
By the end of this investigation I had a very basic idea how to use Quartus Prime and a MAX1000 board. Clearly 1 usable button and 8 LEDs are insufficient to maintain an interest in programming. Rather than adding my own devices by breadboarding the MAX1000 and various peripherals I decided to buy a board which already has them built in.
Wednesday, 20 March 2019
Updating SQL Databases
I don't get at all excited about SQL databases, but they are very useful. I have one for art which contains many details of pictures I have seen at various galleries. In conjunction with the pictures I save descriptions of the paintings and have recently started adding my own comments on what is notable about them.
I use HeidiSQL to update the Maria/MySQL database but it is a bit of a nuisance for typing significant amounts of text. I felt that a simple web front-end to the database would make updating details less onerous. I didn't manage to find any utilities which would do this for me but I did find a couple of wonderful tutorials by Tania Rascia which explain how to carry out CRUD (create, read, update, delete) functions in PHP on database records. The subject was so beautifully presented I was able to quickly complete and use the sample page. Even better it was a simple matter to amend the basic structure to update my own database with a few minutes work.
Friday, 15 March 2019
Alsa and Bluetooth Speakers
Normally it isn't possible to route Linux sound output from ALSA directly through to Bluetooth speakers; it is expected that output is routed via PulseAudio. This isn't terribly convenient for my music application, which sends output from MPD to Bluetooth: PulseAudio is an extra inconvenience which needs to be managed.
Arkadiusz Bokowy has written a utility program bluez-alsa which allows you to define bluetooth speakers as ALSA devices obviating the need to use Pulseaudio. MPD is set up with an ALSA output for the device.
Bluez-alsa needs to be compiled from source; full instructions are provided, and I found RPI-specific instructions helped with this. I also needed to compile an FDK-AAC library to get it to work. However, once completed, the program works beautifully and I can finally treat Bluetooth speakers as devices.
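For illustration, an MPD ALSA output for a bluez-alsa device looks something like the following in mpd.conf; the output name and device address are placeholders for your own speaker:

```
audio_output {
    type       "alsa"
    name       "Jongo"                                        # placeholder name
    device     "bluealsa:DEV=00:11:22:33:44:55,PROFILE=a2dp"  # placeholder MAC
    mixer_type "software"
}
```

Once defined, the speaker can be enabled and disabled like any other MPD output.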
Monday, 11 March 2019
Controlling Pure Jongos using upnpclient
I have previously written about a python script called nanodlna which is useful for "casting", i.e. sending DLNA commands to Pure Jongo speakers and DACs.
A python script called upnpclient by Ellis Percival on Github does the job much more effectively. It includes an excellent interactive explanation using upnpclient to explore and control a dlna device. It provides device discovery storing upnp objects in python variables. All device parameters are easily available so that device capabilities can be investigated (interactively) and controlled through python.
I developed a python program, upnp.py, which loaded URLs (SetAVTransportURI) and played/stopped the URI (the Play and Stop functions). UPnP devices need a streaming URL, so a number of UK channels were coded into the script so they could be started. Alternatively a URL can be entered on the command line to stream from an alternative source, for example the MuSe MPD server or an album/playlist M3U URL.
A web page, JongoPanel, was created to allow a user to see what is playing on each Jongo and start or stop the stream. Any source can be directed to any or all Jongos. JongoPanel calls upnp.py using the CGI interface.
Sunday, 17 February 2019
Python morsels
iPython
iPython is a must for any python programming. It is simple to install (with pip3), can be used just like the standard interpreter with no learning curve and is packed with useful features. Features which made my life better from day 1 include:
1) being able to stop a program at a given point and inspect the variables
2) hitting tab after an object name to display a list of its properties.
Python CLI
Python command-line programs are useful when using Linux. In particular, Python is great for simple utilities, but it is a nuisance to have to run them from specific directories; packaging them as CLI commands makes this easier. Thomas Stringer has provided a simple example, and I also found an explanation of usage on Stack Overflow helpful.
Monday, 11 February 2019
XWindows on WSL
I generally use Linux headless, as the functions I need can be initiated and controlled from the command line. It occurred to me that there are potentially some GUI functions which would be useful, and my initial reaction was to have an RPI build with a desktop.
However that idea is very uninspiring, and it then occurred to me that using X Windows software on Windows I could get GUI sessions without an RPI desktop.
Xming provides a simple popular solution which works as expected.
Mobaxterm also provides an Xserver which is integrated into its other functions.
Clearly an X Windows server is not much use without client software. As my RPIs don't have desktop programs installed I can't do much with them by default.
It occurred to me that WSL (Windows Subsystem for Linux) would be much better as a source for client applications, at least for playing/testing. WSL doesn't include a GUI, so the Xming X server can be used on Windows to display the windows; the server should be started automatically at startup, or at least before using any sessions.
To get a terminal window you can use PuTTY. Simply create a new PuTTY session and, in Connection > SSH > X11, tick "Enable X11 forwarding" and specify "X display location" as localhost:0.0. The connection userid and password are john/secret.
WSL doesn't have any GUI software either, so we have to install some for testing. I found this article very helpful. I got some sample GUI programs by installing x11-apps. Before that I had to do an apt update/upgrade in WSL, which took a couple of attempts and quite a lot of CPU time.
Once x11-apps is installed I can add "export DISPLAY=:0" at the end of .bashrc.
xeyes is a nice first program and xclock is useful.
I then installed a terminal window (xterm) and editor (gedit), both of which required significant further software additions to WSL.
I now have a basic, useful X Windows setup to use as a basis for understanding better how it works.
As I run RPIs headless I don't have X Windows installed and can't run X-based programs. However I now have X Windows on Windows 10, and X Windows is famous for being able to display windows from remote machines, so:
1) On Windows-10 WSL install x-utilities: apt-get install x11-xserver-utils
2) On Windows-10 WSL permit RPI to start sessions: xhost +
3) On RPI install some x-windows apps: apt-get install x11-apps xterm
4) On RPI command line send remote display: export DISPLAY=192.168.0.5:0
5) On RPI command line run x-app: xeyes &
6) App now runs and displays a window on Windows-10
I tested this successfully with paprefs, which is one of the programs I have "missed".
This article may be useful.
Multiroom audio using DLNA devices
The story so far
Initial investigations into streaming audio from MPD to multiple DLNA devices using Linux centred on finding a Linux utility which already implemented the necessary functionality. When that failed, a node.js solution was investigated. Again I came to a dead end, although in retrospect it turned out to be a working solution. On the basis that a programmed solution was required, it was sensible to concentrate on Python: there is a huge amount of software available, it is easy to implement, and the programs are generally easier to understand.
Looking for a python solution
Initially I looked for Linux-based Python audio players to familiarise myself with the considerations for playing files and streams using Python. There wasn't as much choice as I expected: pygame, pyaudio and pyglet looked more complicated than they needed to be, but playsound was a good, simple solution for playing files.
The search for a program to play to DLNA devices didn't turn up many candidates, but nanodlna seemed promising. It is a command-line utility written by Gabriel Magno which lists DLNA devices and plays a video. It was installed using pip but didn't work initially; I don't have a solid Python programming environment, so there wasn't much thought put into installation. The source programs were simple and I adjusted them so I could run nanodlna using the python3 command. For the first test I just specified a music track and no destination, which causes nanodlna to pick a device at random. I was surprised and excited that it worked perfectly and played good quality music via a Jongo.
nanodlna investigation - request content
nanodlna comprises 4 source files:
cli.py - the main program, which parses the command and calls the other sources
devices.py - identifies DLNA devices attached to the local network
streaming.py - serves a music file so it can be played
dlna.py - sends instructions to the DLNA device for playing music
I hacked the code extensively to see how it works. In particular I printed variables, and replaced variables by their values in the program, to see exactly what information was used. dlna.py sends two HTTP requests to a Jongo: the first includes the URI of the file to be played and the second is an instruction to start playing it. The standard Python module urllib.request is used to send the information to the Jongo. Each request comprises an HTTP header and an XML body formatted for SOAP.
headers = {
'Connection': 'close',
'Content-Type': 'text/xml; charset="utf-8"',
'Content-Length': '465',
'SOAPACTION': '"urn:schemas-upnp-org:service:AVTransport:1#SetAVTransportURI"'}
action_data = b'<?xml version=\'1.0\' encoding=\'utf-8\'?>\n \
<s:Envelope s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" \
xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">\n \
<s:Body>\n \
<u:SetAVTransportURI xmlns:u="urn:schemas-upnp-org:service:AVTransport:1">\n \
<InstanceID>0</InstanceID>\n \
<CurrentURI>http://192.168.0.33:9000/file_video/kylie.mp3</CurrentURI>\n \
<CurrentURIMetaData></CurrentURIMetaData>\n \
</u:SetAVTransportURI>\n \
</s:Body>\n</s:Envelope>\n'
headers = {
'Connection': 'close',
'Content-Type': 'text/xml; charset="utf-8"',
'SOAPACTION': '"urn:schemas-upnp-org:service:AVTransport:1#Play"',
'Content-Length': '337'}
action_data = b'<?xml version=\'1.0\' encoding=\'utf-8\'?>\n \
<s:Envelope s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" \
xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">\n \
<s:Body>\n \
<u:Play xmlns:u="urn:schemas-upnp-org:service:AVTransport:1">\n \
<InstanceID>0</InstanceID>\n \
<Speed>1</Speed>\n \
</u:Play>\n \
</s:Body>\n \
</s:Envelope>\n'
This is extremely helpful. I could have found the same information by looking at TCP traffic using tcpdump or wireshark, but this is much easier to understand. All requests to Jongos to play a track are identical; the only information which changes is the URI of the track.
Previously I supposed that the Python program streamed all the chunks of a track to the Jongo. What actually happens is that the program only has to send the name of the track to be played in a SetAVTransportURI request and then send a Play request. It doesn't seem necessary to know any more about the requests, but Gabriel Magno provides a link to the official spec. The DLNA control device (remote/phone/RPI etc.) doesn't need to communicate, or even be switched on, whilst the media renderer (Jongo) is playing music; the controller is only needed to issue new instructions.
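The two requests above can be reproduced with nothing more than urllib.request. The following sketch (the control URL and track URI passed in are placeholders) builds the same headers but computes Content-Length instead of hard-coding it:

```python
import urllib.request

SOAP_TEMPLATE = (
    "<?xml version='1.0' encoding='utf-8'?>\n"
    '<s:Envelope s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" '
    'xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">\n'
    "<s:Body>\n{body}\n</s:Body>\n</s:Envelope>\n"
)

def build_request(action_url, action, inner_xml):
    """Build a SOAP POST for an AVTransport action; Content-Length is computed."""
    data = SOAP_TEMPLATE.format(body=inner_xml).encode("utf-8")
    headers = {
        "Connection": "close",
        "Content-Type": 'text/xml; charset="utf-8"',
        "Content-Length": str(len(data)),
        "SOAPACTION": '"urn:schemas-upnp-org:service:AVTransport:1#%s"' % action,
    }
    return urllib.request.Request(action_url, data=data, headers=headers)

def play_uri(action_url, uri):
    """Send SetAVTransportURI then Play to a renderer (requires a live device)."""
    set_xml = (
        '<u:SetAVTransportURI xmlns:u="urn:schemas-upnp-org:service:AVTransport:1">'
        "<InstanceID>0</InstanceID><CurrentURI>%s</CurrentURI>"
        "<CurrentURIMetaData></CurrentURIMetaData></u:SetAVTransportURI>" % uri
    )
    play_xml = (
        '<u:Play xmlns:u="urn:schemas-upnp-org:service:AVTransport:1">'
        "<InstanceID>0</InstanceID><Speed>1</Speed></u:Play>"
    )
    urllib.request.urlopen(build_request(action_url, "SetAVTransportURI", set_xml))
    urllib.request.urlopen(build_request(action_url, "Play", play_xml))
```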
The program streaming.py uses a framework called Twisted Web to load a file into an internal web server so that it can be streamed. In practice I don't need that: I already have a lot of files hosted on RPI lighttpd web servers which I can use. I was thus able to dispense with all the Twisted Web functionality and specify a URL directly.
nanodlna investigation - addressing
These dlna requests need to be sent to one or more Jongos. The address information discovered by nanodlna in devices.py is:
# print("video_data");print(video_data);
# deviceJon = {'st': 'urn:schemas-upnp-org:service:AVTransport:1', \
# 'friendly_name': 'Jon', 'hostname': '192.168.0.102', \
# 'action_url': 'http://192.168.0.102:48567/Control/org.mpris.MediaPlayer2'+ \
# '.mansion/RygelAVTransport', \
# 'location': 'http://192.168.0.102:48567/93b2abac-cb6a-4857-b891-0019f5844dd8.xml'}
# deviceJoe = {"st": "urn:schemas-upnp-org:service:AVTransport:1",
# "action_url": "http://192.168.0.122:55746/Control/org.mpris.MediaPlayer2.mansion/RygelAVTransport",
# "friendly_name": "Joe",
# "hostname": "192.168.0.122",
# "location": "http://192.168.0.122:55746/93b2abac-cb6a-4857-b891-0019f584c8f8.xml"
# }
From this we can see that a "random" port on the Jongo is used as the destination, and that there is some sort of file structure within the Jongo. The port number changes from time to time, perhaps when a Jongo is rebooted. This information is discovered using the SSDP protocol. It was a simple job to change the program to send requests to two Jongos, and I ascertained that they synchronise quite well: listening to them in the same room gives a slight echo, but it was not irritating.
Streaming - curl
At this stage I stumbled across an excellent article which shows how to submit requests to uPnP devices using the Linux command-line utility curl. The example provided closely resembles what I had found with Python. Assisted by this blog, I was quickly able to set up simple scripts and files which let me send various requests to Jongos much more quickly than amending a Python program each time.
In particular it seemed reasonable that the Jongo should accept stream URLs. I tried a radio station and it worked fine. I then tried an httpd stream from MPD and had exactly the same problem as previously with node.js: the stream started for a moment then stopped. This time around I realised that MPD was the problem and looked more closely at setting up MPD shoutcast streaming. I found a good RPI-centric article and realised I needed to install icecast before MPD shout output would work. Once I had done this the Jongos played my MPD output perfectly. The solution doesn't require pulseaudio at all, so it is simpler. I did note that changing tracks or streams within the MuSe application requires the Jongo streams to be restarted.
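For reference, an MPD shout output section has the general shape below in /etc/mpd.conf. The name, mount and password here are placeholders, and icecast2 must be running with matching values:

```
audio_output {
    type     "shout"
    encoder  "lame"
    name     "MuSe stream"
    host     "localhost"
    port     "8000"
    mount    "/mpd.mp3"
    password "hackme"
    bitrate  "128"
    format   "44100:16:2"
}
```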
uPnP discovery - udp multicast investigation
upnp hacks provides a good explanation of how uPnP devices find each other on a LAN. A new device uses UDP to send a multicast request, to which all other devices must reply with a UDP unicast message. Ideally we would like a Linux command-line tool to do this for us; netcat can easily send the requests, but I couldn't find an easy way to view the responses.
The most common solution appears to be to use Python's UDP capabilities to send the UDP multicast and receive the replies, just like devices.py in nanodlna. The Electric Monk provides a lot more detail.
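A sketch of that discovery step in Python, trimmed down from what devices.py does. The multicast part needs a live LAN; the response parser does not:

```python
import socket

# Standard SSDP M-SEARCH request for media renderers.
MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 3\r\n"
    "ST: urn:schemas-upnp-org:device:MediaRenderer:1\r\n"
    "\r\n"
)

def parse_ssdp_response(text):
    """Turn the header lines of an SSDP reply into a dict keyed by upper-case name."""
    headers = {}
    for line in text.split("\r\n")[1:]:  # skip the "HTTP/1.1 200 OK" status line
        if ":" in line:
            name, _, value = line.partition(":")
            headers[name.strip().upper()] = value.strip()
    return headers

def discover(timeout=3):
    """Multicast an M-SEARCH and collect the unicast replies (needs a live LAN)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(MSEARCH.encode("ascii"), ("239.255.255.250", 1900))
    found = []
    try:
        while True:
            data, addr = sock.recvfrom(4096)
            found.append(parse_ssdp_response(data.decode("ascii", "replace")))
    except socket.timeout:
        pass
    return found
```

Each reply's LOCATION header is the XML URL seen in the device data above.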
The best way to carry out a command line discovery is gssdp-discover from gupnp-tools:
gssdp-discover -i wlan0 --timeout=3 --target=urn:schemas-upnp-org:device:MediaRenderer:1
This will show the media renderer devices on the LAN:
Using network interface wlan0
Scanning for resources matching urn:schemas-upnp-org:device:MediaRenderer:1
Showing "available" messages
resource available
USN: uuid:93b2abac-cb6a-4857-b891-0019f5844dd8::urn:schemas-upnp-org:device:MediaRenderer:1
Location: http://192.168.0.102:48567/93b2abac-cb6a-4857-b891-0019f5844dd8.xml
resource available
USN: uuid:93b2abac-cb6a-4857-b891-0019f584c8f8::urn:schemas-upnp-org:device:MediaRenderer:1
Location: http://192.168.0.122:55746/93b2abac-cb6a-4857-b891-0019f584c8f8.xml
resource available
USN: uuid:93b2abac-cb6a-4857-b891-0019f584dcf0::urn:schemas-upnp-org:device:MediaRenderer:1
Location: http://192.168.0.118:56405/93b2abac-cb6a-4857-b891-0019f584dcf0.xml
Monday, 28 January 2019
What is the Linux Shell
When we use Linux we communicate with it, to manage our resources and "do useful things", by means of the shell. There are a number of different shells (e.g. bash, sh, csh) but they all do the same sorts of things: file management, running applications, etc. Shell scripts are used to automate lists of actions and make systems easier to use. I spent a little while investigating what a shell is and found the results interesting.
When you log in to Linux, the operating system executes a program for you. You can find out which one with:
pi@PI3 - ~ grep pi /etc/passwd
pi:x:1000:1000:,,,:/home/pi:/bin/bash
pi@PI3 - ~
You can change a user's shell with:
chsh --shell /home/pi/c/shell.o pi
The program you run is not special: it simply provides a user interface and arranges for commands to be executed, either within the program or by calling the operating system. For my experiment I wrote a simple program which displays a command prompt, waits for an input line, echoes the line and repeats; it terminates when the user types exit. I set this up as the user's shell and logged in. The program starts automatically, and when exit is typed the user's connection is terminated.
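My experiment was written in C, but the loop itself is only a few lines in any language. A Python sketch of the same idea (the myshell> prompt is an arbitrary choice):

```python
import sys

def echo_shell(lines, out=sys.stdout):
    """The experiment's logic: echo each input line until the user types exit."""
    for line in lines:
        command = line.strip()
        if command == "exit":
            break
        out.write(command + "\n")

if __name__ == "__main__":
    # When run interactively, prompt for each line until "exit" or end-of-file.
    try:
        echo_shell(iter(lambda: input("myshell> "), "exit"))
    except EOFError:
        pass
```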
Stephen Brennan has written a very neat introduction to writing a shell in C which explains how to initiate processes using fork and exec system calls.
A simple introductory fork example is provided at GeeksforGeeks
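The fork/exec pattern those articles describe in C is also available from Python's os module (POSIX only). A sketch of running one command the way a real shell would:

```python
import os

def spawn(argv):
    """Fork, exec argv in the child, and return the child's exit status."""
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with the requested program.
        try:
            os.execvp(argv[0], argv)
        except OSError:
            os._exit(127)  # conventional "command not found" status
    # Parent: block until the child finishes, then extract its exit code.
    _, status = os.waitpid(pid, 0)
    if os.WIFEXITED(status):
        return os.WEXITSTATUS(status)
    return -1
```

A shell's main loop is essentially read line, split into argv, then spawn(argv).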
Thursday, 24 January 2019
Casting
Background
For the past couple of years I have successfully used Pure Jongo devices for playing music and radio in multiple rooms. By default you use the Pure Connect app to distribute sound, but there is a useful feature using Caskeid which takes a bluetooth input to any Jongo and copies it to the other Jongos, thereby providing a proper multi-room system. This formed the heart of my Multi-room Music Server (MuSe), which was based on an RPI streaming music via bluetooth to a Jongo and thence to the rest of the house. However bluetooth has a short range, and my (probably erroneous) perception was that it is unreliable.
A second approach was to utilise RPIs in each room and use the multi-system networking capability of pulseaudio to send sound between them using multicast RTP. Each RPI is attached to a sound system via a 3.5mm jack interface. This works OK but is restricted by the low-ish quality DAC provided on the RPI; a DAC HAT can be purchased to improve the quality (e.g. £12 from Pimoroni), but it would be preferable to stream directly from the RPI to Pure devices. There is a much better solution for multi-room audio using multiple RPIs in the form of snapcast, which can be installed through apt. Snapserver is run on the source system and snapclient on the recipient systems (which can include the source). Music is streamed from a FIFO audio output on MPD to /tmp/snapfifo and then transmitted to the clients and played through ALSA or pulseaudio. It was very well synchronised and sounded excellent.
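For reference, the MPD side of the snapcast setup is just a FIFO output in /etc/mpd.conf. The values below are the common defaults; the sample format must match snapserver's:

```
audio_output {
    type   "fifo"
    name   "snapcast"
    path   "/tmp/snapfifo"
    format "48000:16:2"
}
```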
Casting
I have been thinking for a while that it would be useful to cast to devices using something similar to ChromeCast audio (which has recently been discontinued). It turns out this is really easy.
Phone-BubbleuPnP: this has a choice of where to cast music you are playing - Jongo devices such as JON and JOG are included in the list so you select the one you want and start playing music. On a linux platform BubbleuPnP Server can be installed. This makes it easier to connect to devices and also provides an alternative configuration called OpenHome which is a standard for devices.
Windows 10. In the network folder Jongos are shown as media devices. Right clicking allows you to turn media streaming on for that device (or multiple devices). You can then simply right-click on music and cast to a device, music plays straight away.
Note that uPnP, DLNA and casting are closely related. What you need is the ability to send music to a device, which requires an app which is DLNA or uPnP aware to initiate the communication; we don't want client software. Casting is a feature of some DLNA servers such as Windows Media Player. It doesn't appear to work with VLC, usually my go-to software for streaming functions.
This is a great solution - it is exactly what we want to do, casting from a variety of devices to Pure bluetooth/wireless Jongo connected speakers using wireless without needing to use Pure software to control them.
Linux Casting - pulseaudio-dlna
The last step is to cast from Linux to Pure devices. Music players such as mpg123 output to the ALSA card by default, so we need to redirect the output to the Jongo devices. I looked very quickly at using minidlna or servers like serviio, but soon realised that pulseaudio casting is what I need. There are two suitable solutions: rygel and pulseaudio-dlna. I successfully tried "apt-get install pulseaudio-dlna" so concentrated on its configuration. Starting pulseaudio-dlna as user pi causes extra sinks to be added to pulseaudio. You can then start music (e.g. using mpg123) and redirect the output to a Jongo.
Initially pulseaudio-dlna wouldn't connect to pulseaudio. I put pulseaudio in system mode, but then mpd would not connect to pulseaudio. Messing around with groups and the load of module-native-protocol-tcp seemed to fix the problem, and I was able to control pulseaudio in the usual way and use the Pure Jongo devices.
This is an awesome result, we are using Pure devices as if they are speakers from linux and they fit perfectly to our web/mpd/pulseaudio solution avoiding the need to use bluetooth. Much kudos is due to Massimo Mund (Masmu) for developing the pulseaudio-dlna add-on.
[update] After the euphoria of finding a neat solution using pulseaudio-dlna there was a show-stopper. pulseaudio-dlna doesn't support the pulseaudio feature which allows you to send output to multiple devices. Pulseaudio provides module-combine-sink which permits a number of sinks to be combined and output can be redirected to all of them. A similar effect is achievable using module loopback to "listen" to the sink.monitor stream. Unfortunately pulseaudio-dlna doesn't support these "virtual" devices and I was unable to complete the installation.
Masmu did provide an experimental branch of pulseaudio-dlna on github which aimed to provide the necessary functionality for combined-sink but I didn't feel confident enough in the software to test it.
Thinking laterally, it occurred to me that I could set up multiple MPD outputs to pulseaudio and route each to a single pulseaudio-dlna speaker. It was a bit of a botch but did work, except that the output from the two speakers stuttered, so I didn't progress it further.
As an aside, GMediaRender is of interest. It can be installed with apt-get, and when running on an RPI the RPI shows up as a media renderer in BubbleuPnP so that music can be played on the RPI's speaker. GMediaRender allows a wide variety of sinks to be used and I thought it might be possible to direct output accordingly. I installed various gstreamer packages to identify and use sinks, but nothing looked promising. Gstreamer is an audio swiss army knife and other functions may be worth investigating in future.
Linux casting part II - nodejs
There is an apparent dearth of software to send music to DLNA outputs in Linux. Rygel appears to be Gnome GUI oriented, mkchromecast and castnow provide Chromecast output but don't recognise other DLNA devices. Simon Kusterer's castnow github page mentions another utility called dlnacast, which wouldn't work for me but it also mentioned a upnp/DLNA media-renderer client by Thibaut Seguy upon which it was based.
upnp-mediarenderer-client (UMC) is written in "server" JavaScript, which I haven't used before. I installed nodejs and npm and was then able to install UMC. Thibaut provides a sample node.js program to read media and send it to DLNA. The DLNA device address is an SSDP (Simple Service Discovery Protocol) URL, which I had never seen or heard of. Googling provided an answer, so I installed gupnp-tools and ran gssdp-discover, which provided a list of SSDP locations. An SSDP location is an HTTP URL for an XML file; the URL includes the IP address, the UUID (available in Windows properties) and a port number (which I couldn't find elsewhere).
UMC is based on upnp-device-client which provides the basic functions for device communication.
Once I had substituted an SSDP location and a valid music URL into the sample node.js program I was able to play music on a Pure Jongo system. I then modified the program to send output to two devices and sound came out of both simultaneously - it was a wonderful experience. We now have a program which sends output wherever we want it.
[update]
Although our node.js program plays files either from a folder (e.g. /home/pi/music/runaway.mp3) or a web URL (e.g. http://rpip/music/runaway.mp3), when I tried to play an MPD stream output (http://localhost:8100/) I heard a snippet of less than a second and no more. I could tell the stream was running, as it played correctly in a browser session.
I tried unsuccessfully to reconfigure MPD httpd output with different parameters. I also tried, but failed, to set up shoutcast server output.
I tried to understand the UMC source code but it wasn't easy to guess its function. I then investigated other npm audio packages in an attempt to shed light on playing / streaming music but they seemed somewhat flaky, nothing worked well so I abandoned node.js and investigated python instead.
It turns out that the node.js program works just fine. I simply had to install icecast2 for MPD's shoutcast output to work, and then the node.js client played the stream perfectly.
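For reference, MPD's shoutcast-style output is an audio_output block of type "shout" in mpd.conf. The sketch below uses assumed example values; the mount point, password and bitrate are placeholders you would set to match your own icecast2 configuration:

```
audio_output {
    type      "shout"
    encoder   "lame"            # MP3 encoding; "vorbis" also works
    name      "mpd stream"
    host      "localhost"       # the icecast2 server
    port      "8000"            # icecast2's default port
    mount     "/mpd.mp3"
    password  "hackme"          # must match icecast2's source password
    bitrate   "128"
    format    "44100:16:2"
}
```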
Monday, 7 January 2019
Watching TV without an aerial
Problem
Until recently my upstairs TV was near an aerial socket. I have moved it to a room without an aerial but still want to watch programmes which are available through DVB (Digital Video Broadcasting). I have a Samsung Smart TV with an ethernet/internet connection, so I figure there should be a good way to stream "broadcast" channels to the TV. My preference is for a solution which doesn't require an extra hardware purchase or a subscription, and it must be simple and non-technical to use.
Approaches
1 Use a (free) app on the smart TV
2 Cast programmes from an Android/IoS device
3 Play in a TV web browser session - tortuous
4 Use an extra hardware box - not that interesting
5 Use an RPI to send a stream to the TV
App (Smart IPTV)
My TV has about 100 apps which can be used to watch content. iPlayer, all4 etc. are very good, but only when you want to watch a particular programme or channel; I want a more general ability to change between any available channel. Many of the apps, such as Amazon and Netflix, require a subscription and provide access to premium content; as the TV is used only occasionally this is not appropriate for me. There are many more specialist apps which are of no interest.
Google pointed me to tvplayer which works on Samsung and shows a good range of channels. It works well on an iPad or in a PC browser window. Unfortunately it is not supported by my Samsung TV(s).
After further investigation I settled on using the Smart IPTV app. It doesn't actually provide any IPTV channels but allows you to create and save your own list of channels which you can then play. It has a nice interface so that you can see what's on using an EPG (Electronic Programme Guide), and you can surf channels using the TV programme button. It is available for both my Samsung TVs. There is a one-off £5 subscription per device so that you can upload your channel list to their website and then download it to the TV based on MAC address. However you can try it for a week to see if it works before paying up.
I found streaming URLs (google online tv stream url) without too much difficulty.
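The channel list you upload to Smart IPTV is an M3U playlist. A minimal sketch follows; the channel names and stream URLs below are placeholders, not working streams:

```
#EXTM3U
#EXTINF:-1,Channel One
http://example.com/streams/channel-one.m3u8
#EXTINF:-1,Channel Two
http://example.com/streams/channel-two.m3u8
```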
Casting
I tried casting from my (old) iPad and Samsung S5 Neo. Both of them allowed YouTube to cast (the old large Samsung TV was very temperamental) but I didn't find other apps which could. On the S5 you connect using Quick Connect and are then able to play local videos and music from the phone.
I also have an Apple cable to connect my iPad to a TV. That works when playing Xvid or HDTV films with VLC. It also works quite well playing tvplayer on the iPad, and allows iPlayer and YouTube. I foresee few occasions to use this in practice.