Saturday, 23 April 2022

HA : Playlists on Jongo with Voice Control

I was pleased when I found that Home Assistant (HA) auto-discovers Jongos and allows you to play music on them.  Unfortunately my enthusiasm was dampened when I found that you can only specify a single track to play from Local Media or a DLNA server, making it pretty useless.  It is possible to specify / play radio channels using HA, which may be of interest in future.  However my desire is to be able to use Jongos for playlists in the same way I use MPD.

Technical solutions

I knew that HA allows you to use Linux commands or shell scripts within HA scripts, so I thought there would be a potential solution.  Googling gave me the idea that I could use curl commands to send requests to play music through Jongos.  I investigated and it is indeed possible to construct SOAP (Simple Object Access Protocol) requests to start music.  The main problem is that Jongos use a variable port for communication, which you need to determine using SSDP (Simple Service Discovery Protocol), so some further work would be required to find port numbers.  In addition SOAP requests are quite tedious to create.


In fact I have already done this work using the Python library upnpclient to assist with the creation of requests and control of devices.  My solution uses web sockets from a web page to send requests to a web socket daemon on my application server, which then submits requests to Jongos using upnpclient.  My second option was to send these web socket requests from an HA script using curl to my app server.  Unfortunately the web socket protocol is quite different from HTTP and curl doesn't support it.

HA is written in Python so it should be possible to run upnpclient Python scripts directly on the HA server.  However the HA Python documentation says that you can't use Python imports on the server and would perhaps need to use pyscript instead.
This may eventually be the most effective solution, but for the moment I prefer to try using commands.

As the HA and application servers both run Linux, the easiest way to control the app server is using ssh commands, which is the approach I adopted.

Remote Login

Normally, in a terminal you can sign in to a remote server and run a command or script using the syntax: ssh user@192.168.0.nnn <command line>.
When you do this the remote system asks for a password and may prompt you to add the server to your known hosts table.  If we are automating our commands within HA, this isn't acceptable.
Storing keys to remove the need for passwords is something I do quite often, but since HA uses containers the problem is not quite so straightforward.  I found an excellent article written by HA "Command Central" Mike which addresses and resolves these problems, regularly experienced by HA users.

As HA uses containers you shouldn't store keys in the usual file /root/.ssh/id_rsa, because HA software upgrades can erase the previous contents of root.  Instead you save the keys in a persistent area, /config/.ssh/id_rsa in my case, and include this location on your ssh command line.
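Generating and distributing the key follows the usual procedure, just with the non-standard location.  A hedged sketch, where the user and address are placeholders for my app server details:

    mkdir -p /config/.ssh
    ssh-keygen -t rsa -f /config/.ssh/id_rsa -N ""
    ssh-copy-id -i /config/.ssh/id_rsa.pub user@192.168.0.nnn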

When you use the command line in HA you are actually working in the SSH container, but when your commands are automated they are executed in the main Home Assistant container, so to test your commands you need to jump into the appropriate container: docker exec -it homeassistant bash.

Finally, since our id file is not in the usual place, we have to specify that the known_hosts file is in the same place, otherwise we would have to answer the known_hosts question repeatedly.

Once we have done all this we can test remote commands with some confidence.  The first use was a (quite long) ssh command string to run a simple hello.sh "hello world" script on the app server.
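Reconstructed as a hedged example (the user, address and script path are placeholders; -o UserKnownHostsFile is the option that points ssh at our alternative known_hosts location):

    ssh -i /config/.ssh/id_rsa -o UserKnownHostsFile=/config/.ssh/known_hosts user@192.168.0.nnn './hello.sh'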

A working solution

We can now proceed to set up a script to start our music.
First we add a shell_command to our configuration file.
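A hedged sketch of the kind of entry involved; the command name jongo_play is the one referred to below, while the user, address and script path are illustrative rather than my actual values:

    shell_command:
      jongo_play: >-
        ssh -i /config/.ssh/id_rsa
        -o UserKnownHostsFile=/config/.ssh/known_hosts
        user@192.168.0.nnn '/home/user/jongo_play.sh'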


Next we add a script automation which calls the shell command: jongo_play.
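Roughly, and with an illustrative script name and alias:

    script:
      jongo_playlist:
        alias: Jongo Play
        sequence:
          - service: shell_command.jongo_play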
Thirdly we can add a dashboard card to play the Jongo.

Finally we can "expose" the script to Google Home and set up a routine to run the script when I say "Hey, Google, play Job".
It works beautifully; after a few seconds the music plays on Job.

It is still a bit limited, in that the playlist I request is fixed, but it is a very satisfying PoC and I can add as many scripts/voice commands as I want.

HA supports web pages in a dashboard, so I added the amuse and jongopanel web pages, allowing me to control both the MPD and Jongo players as I usually do.
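For reference, a dashboard webpage (iframe) card is configured along these lines; the URL is a placeholder for wherever the panel is actually served from:

    type: iframe
    url: http://192.168.0.nnn/jongopanel
    aspect_ratio: 75%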












Monday, 11 April 2022

Home Automation : Google Assistant Voice Control

The story so far

We started our home automation with a Google Nest Mini and we were able to set it up for voice control of "compatible devices": Sonoff smart power switches and an RGB colour bulb.  I wanted to expand the home automation setup to include other devices.  In particular I wanted voice control of the music system, as I don't want to use Spotify/YouTube premium services.

Home Assistant was chosen as a good platform for extra functionality and I have spent some time finding out its capabilities for devices and setting them up.  The most significant device I have added so far is my Linux MPD server, which is my main way of playing music.  I have also added Pure Jongo devices which are available in various rooms in the house.

I can also control these devices: for testing I used webhooks so that I could set up a browser button to trigger actions.  I can also set up the actions in scripts, which can be initiated in various ways to make changes.

The last building block is adding voice control to our solution so we have a "modern" home automation setup - I think tablet controls are very much last year's thing.

Home Assistant Cloud

We need to integrate Google Assistant (GA) with Home Assistant (HA) if we are to utilise GA's voice control.  This isn't a trivial matter but the excellent HA documentation describes the steps to set it up.  In particular you need to make the HA server available externally on the internet and provide an SSL certificate.  I have done this before for my web-site but adding a second instance makes it more complicated to set up.  As an alternative, the founder of Home Assistant offers a subscription service, Home Assistant Cloud (HAC), which provides the external connectivity between Google Home (GH) and HA for you.  It costs about £4, which I feel is particularly worthwhile as HA is a significant product provided for free, and the same team also supports ESPHome, which is something I want to use next.

I followed the documentation to set up HAC.  As usual you need to create an account to use the functionality of HAC and sign in within HA.  Then you go across to GH on the iPad and enable the HA integration.  Once this is done I can see the HA devices within GH.

I can also control them, turning lights on and off etc.

In addition to the devices HA has exposed its scripts and scenes to GA.  Thus in GA I can utilise scripts that I have set up in HA to control the devices that GA doesn't know anything about.  This is exactly what we have been aiming for.

The script mpd1 causes MPD to load one of its playlists and start playing it, and I now have access to it within GA.









GA Routines

GA automates actions using Routines.  I set up a new Routine in GA which is initiated when I say "start tulip".  I can associate actions such as switches and lights with this routine, and also scenes.

I added my mpd1 script/scene as a Tulip action.  Wow, now if I say "start tulip" the playlist is loaded and run.

Clearly tulip isn't very mnemonic, so I set up playlists in MPD called things like HA-bruce and HA-taylor with suitable track lists.  I then define scripts mpd-bruce and mpd-taylor in HA which load / run the MPD playlists.  I sync these scripts with GA in HA Cloud.  Finally I set up routines "Play Bruce" and "Play Taylor" in Google Home and I can call Bruce (Springsteen) and Taylor (Swift) up whenever I want.
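One of these scripts looks roughly like the sketch below; the MPD entity id is an assumption, and the script key uses an underscore because HA entity ids cannot contain hyphens:

    script:
      mpd_bruce:
        alias: Play Bruce
        sequence:
          - service: media_player.select_source
            target:
              entity_id: media_player.mpd
            data:
              source: HA-bruce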

I can use the same technique for any MPD command I want, for example "Play Next" will skip to the next track.


Conclusion

I am very pleased with this result.  I have a flexible, generalised solution to my objective.  Within Home Assistant I can define actions on various devices, actions which aren't available in Google Home.  I can then define voice commands in Google Home to carry out these actions.

This was a proof-of-concept.  I have only scratched the surface of what is possible, but this has been a very successful experiment.


Home Automation : Making things change

In the previous blog we set up a number of devices which we want to control within Home Assistant (HA).  The HA dashboard is available in a tablet, phone or PC browser session and we can control devices there, but we would prefer other/better ways of controlling them.

HA allows you to configure automations which define a trigger, conditions and an action.  The trigger could be a time or someone entering a room, and conditions, for example a time range, must be satisfied before the action, such as turning on a light, is initiated.  For our first automations we choose "webhook" as the trigger.  A webhook is initiated by a POST request from a browser or any HTTP client anywhere on the network.  Typically the user clicks a button on a web page which POSTs a form to the HA server, causing the associated automation to be triggered.

Turn on Nest Mini Speaker

In this example we specify a webhook trigger and HA provides us with a URL for the trigger.



To determine the name of the entity we want, we go into Developer Tools and, looking down the list of available entities, we see there is a media_player.office_speaker.
Looking at the Developer Tools services we can see what services are available for a media player.  In this instance we clearly want Media player: Turn on.

We can return to our automation and specify that we want to call the service Media player: Turn on (media_player.turn_on).

This completes the automation which we can now save.  If we click on "Run actions" in the Automations list we can check that the automation does what we want it to.
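In YAML terms the automation is roughly equivalent to this sketch; the alias and webhook id are illustrative, as in practice HA stores the real webhook id with the trigger:

    automation:
      - alias: Office speaker on
        trigger:
          - platform: webhook
            webhook_id: office_speaker_on
        action:
          - service: media_player.turn_on
            target:
              entity_id: media_player.office_speaker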


Now that our automation is working we can exercise the webhook.  We can test it from the Linux command line before finally setting it up as a POST form in a web page and testing that.
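For example, using the illustrative webhook id above (the real URL is the one HA displays for the trigger):

    curl -X POST http://homeassistant.local:8123/api/webhook/office_speaker_on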




Load an MPD Playlist

Once we have done one automation it becomes easier to set up more.  One action I want is to start a playlist on the MPD entity.  The service which does this is Media player: Select source (media_player.select_source).  You also have to specify a valid playlist, as shown below.


I set this up as a script; when the script is run the associated playlist is loaded into MPD and starts playing.
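A hedged sketch of such a script, where the MPD entity id and the playlist name are assumptions rather than my actual values:

    script:
      mpd1:
        alias: MPD playlist 1
        sequence:
          - service: media_player.select_source
            target:
              entity_id: media_player.mpd
            data:
              source: my_playlist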

We now have the ability to trigger automations from outside HA and they are verging on being useful.




Home Assistant: adding devices

The story so far is that I have installed Home Assistant (HA) successfully on an RPI4 and I can access the Home Assistant console through a browser session which allows me to configure functions and look at the current status.

HA has helped me by auto-discovering a number of "integrations", which are devices and systems it can interact with.  I have been able to instruct HA to show me the status of my Virgin Media SmartHub on the dashboard.  Google Home (GH) / Assistant has already given me voice control of a smart switch, an RGB colour bulb and a Nest Mini speaker, so HA has a hard act to follow.

GH is limited in the number of devices it can deal with and in the range of functions.  Typically if something is GH enabled and quite new it can be set up easily; otherwise there is nothing you can do.  I would particularly like to be able to play some of my own music, and GH won't play specific tracks or albums without a subscription to a premium service such as Spotify or YouTube.  I am hoping to be able to combine the voice activation functions of GH with the extra capabilities of HA.

Playing Music

HA was kind enough to discover my DLNA server which runs on a Raspberry Pi.  I mainly use it for watching videos on the TV but it also has access to my shared music.  Clicking on the server card on the integrations page causes it to be configured automatically and added to the default dashboard.  Clicking on the dashboard card allows me to navigate through the music directories and choose a music track (just the one) to play.


By default it plays in the browser, but I can direct it to play on the Nest Mini speaker instead 😊.  In fact the Google Cast integration, which was set up automatically, contains the Nest Mini as the "Office Speaker" entity for me to use.


Choosing the Office Speaker card on the dashboard also allows me to select music to play.  In addition to the DLNA server I can choose local media, radio browser and Google TTS (text to speech).  Local media simply chooses music from the PC/iPad/phone I am using for HA access.  The radio browser integration provides a load of radio channels from around the world including a comprehensive selection of UK stations.


TTS is quite fun: just type a sentence and it is read out for you - good news, I can configure a British accent.  This may come in very useful as I can send the text to any speaker device.

I was very pleased that my Pure Jongo JOG and JOB (T4 and A2) speakers show up as DLNA renderers in the integrations list.  This means I can send music, radio or messages to any or all of them.

We are doing very well discovering useful HA facilities: I can now choose music/sounds from a variety of sources and play them on a number of different speakers.  These don't yet provide a useful solution but they are essential building blocks.




Adding Switches, Lights and a Music Player

Although the music devices are interesting, I want something I can control automatically using HA.  I have four eWelink Sonoff smart switches which I purchased for home automation some time ago and I would like to be able to turn them on and off.  Googling HA + Sonoff gives me a guide to configuring Sonoff devices.

The first step is to choose the add-on configuration.  I needed to add the eWelink repository; I could then install the eWelink Smart Home app and add eWelink to the HA sidebar.  In the add-on information page for eWelink I can open the web UI and sign in to my eWelink account.

For a technical reason relating to the APIs used, I have to set up a scene containing the devices and add some information to the system config, configuration.yaml.  After restarting HA, a new card called "flasher" (from my eWelink definition) appears, which turns the switch and attached LED strip on or off.

I now wanted to add my RGB light.  The appropriate integration is from a company called Tuya.  I already have the Tuya SmartLife app to control my RGB colour bulb but I also needed to register with the Tuya IoT platform to facilitate HA integration and obtain an authorisation key.  Finally I can add the integration and I have a new card on my dashboard to control the light.


The third new integration I want to add is MPD; I am hoping this can provide the starting point for music integration.  Installation consists only of adding a couple of lines, including the MPD server IP, to the configuration.yaml file (sketched below).  Using a new MPD card on the dashboard I can now start and stop music on my MPD player.  This is excellent.
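The couple of lines in question are roughly these, with the host a placeholder for my MPD server's address (the default MPD port of 6600 is assumed):

    media_player:
      - platform: mpd
        host: 192.168.0.nnn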


HA now knows about the devices I want to control and I am ready to move on to the next stage, controlling them. 

Thursday, 7 April 2022

Home Automation

In my initial investigation with the Google Nest Mini I was able to control an RGB light bulb, smart switches and, to some extent, the TV.
I want to expand the scope to more devices and also make the control somewhat more sophisticated.  Intelligent controls form the basis of Home Automation which builds on top of basic device control.  An initial look at the field gives us IFTTT and Home Assistant as interesting possibilities.

IFTTT

IFTTT stands for If This Then That.  Automation is based on actions (THAT) which are triggered by an event (THIS).  For example, whenever you publish a new Blogger post you could alert your Twitter followers.

For my demo I chose an IFTTT button widget as my trigger (THIS) and, as my action (THAT), the creation of an item in my Google Tasks list "Jobs" when the button is pressed.  Once I enabled the widget I could use a button on my iPad to add an item to the job list.  In practice it isn't much use, but it gave me an idea of what to do.

Unfortunately, after that I didn't make much progress.  IFTTT could not control either of the devices I already have working: Smart Life (RGB bulb) actions aren't supported and eWelink (for the WiFi switches) requires a $10 p.a. subscription to enable control.  As these devices already work on Google Home I felt it didn't offer me anything extra.

Home Assistant

Home Assistant (HA) looked promising, but it is difficult to tell until you try it.  It was suggested by Clemens Valens at Elektor because it supports ESPHome, a home automation offering for the ESP8266/ESP32.  It apparently supports Google Assistant and probably Sonoff.

HA recommends installation on a dedicated RPI4, which implies it is a serious, meaty product.  They provide a suitable, up-to-date 64-bit image, which is pre-configured with everything you need.  When you burn the image to an SD card and start up your server you can access the home page through a browser at http://homeassistant.local:8123.  Wonderfully easy.

HA does its best to help you by discovering local network devices it can work with automatically.  



After the excitement of installation I struggled somewhat to use HA.  There is a dashboard showing all the things you can do; initially it just showed me what the weather is and what my name is :(.  I found a couple of good YouTube videos from JuanMTech and TheHookup to give me ideas on what to configure, but nothing specific I could use.

I need to get something more interesting, ideally something useful, for example the ability to control the devices I have.
I was pleased to find that the Arris TG2492 router which was auto-discovered is my Virgin Media Superhub.  Clicking 'Configure' on the entry causes HA to configure it into the system and add the results to the dashboard.  There are a number of router statistics that you can look at, for example the network traffic over the past couple of days:

This is good, I am happy my new HA server can actually do something real.  I still feel a million miles from being able to control it, but I have made a start; the community who put it together have done a grand job making it usable and it has some excellent documentation.  I will add more in my next HA post.







Saturday, 2 April 2022

WSL2 and Windows Terminal

There are some good reasons for using Windows Subsystem for Linux (WSL) version 2 over WSL version 1.  Unfortunately I can't remember what they are at present.  I do recall some frustration in the past when it wasn't available to me.  Anyway, I spent a short while yesterday setting up WSL 2, which turns out to be very easy.  WSL2 provides a VM with a real Linux kernel, as opposed to WSL1 which translates Linux system calls into Windows ones, so WSL2 should provide a more "real" Linux experience.  I installed a Debian distro to run under WSL2 and I was pleased to find that I can run WSL1 and WSL2 alongside each other, as I don't want to redo/convert/check what I have done before.  I had a minor bug to investigate / fix before the Debian distro could be converted to WSL2, but once completed the distro looked fine.

As an afterthought, the WSL2 tutorial recommends Windows Terminal - they are quite right; it is an excellent addition to my environment.  It is open-source software which you can install from the Microsoft Store.  At its simplest it provides a tabbed window which allows you to run a number of Windows command line and PowerShell sessions.  As I have WSL1 and WSL2 configured it automatically provides me with the option to start WSL1 and WSL2 sessions.  Already it sounds good.

Even better, you can easily customise its configuration using the settings.json configuration file.  Until now I have used WSL for RPI and RISC-V SSH sessions and PuTTY for RISC-V console and Arduino serial port sessions.  However, within a few minutes, I was able to customise WT to give me access to all these systems as tabs in a WT window.  RISC-V console sessions are the easiest; they just use the Windows command-line SSH.  I have set up keys in WSL1 for my Raspberry Pis for passwordless sign-on, so I use a WSL+SSH command line to start RPI sessions.  For serial ports I use minicom running on WSL1 to access COM ports, courtesy of a helpful tutorial by Scott Hanselman, and I can use a WT WSL1+minicom command to start the consoles.
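As a rough illustration of the profile command lines involved (the distro name, addresses and COM port number are placeholders for my own values):

    ssh user@192.168.0.nnn
    wsl.exe -d Debian ssh pi@192.168.0.nnn
    wsl.exe -d Debian minicom -D /dev/ttyS4

The first is a plain Windows SSH session for a RISC-V console, the second runs ssh inside the WSL1 distro so it picks up the stored keys for an RPI session, and the third opens minicom on the WSL1 device that maps to COM4.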

This is wonderful, I have all my terminal access in one place, working seamlessly and easily configurable.