Evolution of a Blog

This blog has evolved as you can tell by comparing the title with most of my recent posts. The title should really be something like "The Physical Interface Side of Computing". It will still feature Raspberry Pi and Arduino from time to time but my current hardware of choice is a BeagleBone Black with JavaScript and Node.js providing the development environment.

Tuesday, April 30, 2013

Save State in Arduino EEProm Across Resets

I am back to playing with the Arduino and am remembering just how uncomfortable I was working in "C" so long ago.   Definitely a brain teaser for me compared to Python or PHP.

So anyway, here is an implementation of a couple of routines: one saves a string in the onboard EEProm on the Arduino and a second reads it back.   In my next iteration I will add a CSV parser so that if you put two distinct variables into the EEProm, separated by a comma, you get an array of two variables back from the read.
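The parsing step I have in mind is simple enough to sketch in Python (my comfort zone) before wrestling it into C: split the stored string on the comma and convert each piece back to a float.

```python
def parse_eeprom_string(raw):
    """Split a comma-separated string read back from EEProm into floats,
    e.g. "2.222,4.44" -> [2.222, 4.44]."""
    return [float(field) for field in raw.split(",")]

print(parse_eeprom_string("2.222,4.44"))  # [2.222, 4.44]
```

The C version will need strtok() and atof() and a fixed-size array instead, but the logic is the same.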

Here is what the serial console displays when this sketch is run:
--Starting
--String to Save on EEProm
2.222,4.44
--String Retrieved from EEProm
2.222,4.44
--All Done

Here is a LINK to the sketch itself.  There is really not that much to this example but the time it took me to relearn a little about array handling in "C" made this more of a challenge than I expected (for my old brain).

Note that I wrote my own float-to-string conversion function as I was not thrilled with the way that dtostrf works.   Even this trivial routine took my old brain more than a couple of minutes to write...as we hear the clanking of gears shifting.  --Updated to reflect that even after all that I had an error in the code that I just corrected.   I am struggling with working at the pointer level!
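The idea behind the conversion routine is the usual scale-round-split approach. Here it is sketched in Python; the actual sketch does the same thing in C with character buffers and pointers, which is exactly where I got tripped up:

```python
def float_to_string(value, places):
    # Scale so the fraction becomes an integer, round, then split
    # into whole and fractional digits (zero-padding the fraction).
    scaled = int(round(abs(value) * 10 ** places))
    sign = "-" if value < 0 else ""
    whole, frac = divmod(scaled, 10 ** places)
    return "%s%d.%0*d" % (sign, whole, places, frac)

print(float_to_string(2.222, 3))   # 2.222
```

The zero-padding of the fractional part is the step that is easy to forget (0.05 to two places must come out as "0.05", not "0.5").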

Another update - Found this posting on the Arduino Forum with a nice technique for using the EEProm memory to store anything with one operation.

Monday, April 29, 2013

Raspberry Pi Data Collector - Part 1

I follow the stock market with some interest and in particular wonder about the relationship between option trading volume and price moves.   I want to be able to track option movements across time and, other than paying for a service, have not found an easy way to get this data online.   Google Finance used to have an API that would access option chains but they discontinued the API.  The Yahoo Finance API might be able to do it but I could not figure out how so I gave up and decided to write something myself.

Since I have an RPi that is running all the time anyway (TvHeadend) I decided to write a Python script to do a capture on a nightly basis from that platform.  More about this later.
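I have not settled on the data source yet, so the sketch below just shows the shape of the nightly job: append timestamped rows to a per-symbol CSV file, with cron doing the scheduling. The fetch_chain() call is a placeholder for whatever scrape I end up with, not a real API.

```python
import csv
import datetime

def append_snapshot(path, symbol, rows):
    """Append timestamped option rows (strike, volume, ...) to a CSV file."""
    stamp = datetime.datetime.now().isoformat()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for row in rows:
            writer.writerow([stamp, symbol] + list(row))

# Nightly run (from cron); fetch_chain() is the still-to-be-written scrape:
# append_snapshot("/home/pi/options/SPY.csv", "SPY", fetch_chain("SPY"))
```

Appending rather than rewriting means the file itself becomes the time series, which is all I need for tracking option movements across days.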

Thursday, April 25, 2013

Clone Raspberry Pi Console to iPad

This post falls in the category of "why would you want to do that?"  Especially since I am sure there is a much easier way of accomplishing the same goal.

In any case, what I wanted to do was to use a tablet as the console display for my RPi allowing me to use the keyboard that is attached to the RPi rather than the iPad touch screen (yes, I know, one way of doing this would be to  buy an external keyboard for the iPad).   This post provides the recipe and below is how I put this to use.

First, install screen (sudo apt-get install screen) and then change some rights to allow read-write access from the clone:

sudo chmod +s /usr/bin/screen
sudo chmod 755 /var/run/screen

I wanted my RPi to boot with this capability enabled so my approach will not be as clean as that described by the above referenced post.   In any case, the first thing that I want to happen is for screen to be loaded on login...AND...for login to be automatic on boot. 


First to automate login on boot:

Modify /etc/inittab per the below:
1:2345:respawn:/sbin/getty --noclear 38400 tty1
 -- to -- 
1:2345:respawn:/sbin/getty --noclear 38400 tty1 --autologin pi 

Now make sure that screen starts when the above pi user logs on from the console (but only once or the console will be trapped in a login loop!).   Do this by adding the following at the bottom of .bashrc:

# Start screen (if not already running) to allow us to clone the console
pgrep screen > /dev/null
if [ $? -ne 0 ]
then
   screen -S shared-session -c screen.in
fi

I also adjusted the size of the console display to better fit the iPad display by adding the following:

stty rows 55
stty cols 160

Then I added a user called piclone.   The commands for doing this can be found elsewhere in this blog.

sudo useradd -m -g users -G audio,lp,video,games -s /bin/bash piclone
sudo passwd piclone

Now to clone the console to your iPad (or Android tablet).   You need an SSH app like ServerAuditor.  Create a connection to the RPi using the piclone user and execute the following commands:

ssh pi@rpi1
screen -x pi/shared-session

...
and Bob is your Uncle.

Sunday, April 14, 2013

Arduino - Voltmeter - Redux

This article is an evolution of a voltmeter that I discussed earlier.   This iteration adds an LCD display and the code is done entirely in Arduino instead of using my interface library.  The LCD display that I am using was borrowed from the 'Bot where I have it in use for status messaging.

The circuit diagram follows.   This diagram was done with a neat piece of software called Fritzing.  Part of the reason for this article existing was so I would have an excuse to use this software to create a diagram!
 
Here is a picture of the hardware.   As you can see from the diagram, the voltage that I am reading is coming from the Arduino so it will be no higher than 5v.   I am using a variable resistor as part of a voltage divider circuit to provide something to read.  The display shows that voltage on the first line and a bar graph with a range of zero to five on the second line.
Finally, the code for the demo.   The following code illustrates the reading of a voltage with the Arduino, adjusting it using the code from the Secret Arduino Voltmeter article by Scott Daniels, and presenting it on the LCD display:
/*
    Voltage Measure with LCD Display

    Created 4 March 2013 - 1

    Copyright (C) 2012 Will Kostelecky <will.kostelecky@gmail.com>

    This program is free software; you can redistribute it and/or
    modify it under the terms of the GNU General Public License
    as published by the Free Software Foundation; either version 2
    of the License, or (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.
*/

const int TRUE=1, FALSE=0;

// Include and initialize the library for the LCD
#include <LiquidCrystal.h>
LiquidCrystal lcd(0, 1, 2, 3, 4, 5);
String lcdMessage = "";
int lastMinute = 0;

int voltagePin = 5;    // Analog input A5, read with analogRead()
long voltageIn = 0;
float refVoltage = 0;
float voltage = 0;
float adjustToRef = 0;
String stars = "*******************";
char tmp[10];

/* ************************************************************************
Initialization
************************************************************************ */
void setup() {
    // Note: analog inputs need no pinMode() call; a pinMode(voltagePin, INPUT)
    // here would actually configure *digital* pin 5, which the LCD is using.

    // Set the size of the lcd
    lcd.begin(16,2);
  
    // Get the reference voltage
    refVoltage = float(readVcc()) / 1000;
}

/* ************************************************************************
Process Loop
************************************************************************ */
void loop()
{
    // *****
    // Update the status on our LCD display
    // *****
    voltageIn = analogRead(voltagePin);
    adjustToRef = 1 + (refVoltage - 5) / refVoltage;
    lastMinute = millis();

    lcd.setCursor(0, 0);
    lcdMessage = "                ";
    lcd.print(lcdMessage);
    lcd.setCursor(0, 0);
    lcdMessage = "Voltage=";
    voltage = (float(voltageIn) / 204.8) * adjustToRef;
    dtostrf(voltage, 1, 2, tmp);
    lcdMessage += tmp;
    lcd.print(lcdMessage);

    lcd.setCursor(0, 1);
    lcdMessage = "                ";
    lcd.print(lcdMessage);
    lcd.setCursor(0, 1);
    voltage = voltage * 16 / 5;
    lcd.print(stars.substring(0, int(voltage)));

    delay(333);
}

// ****************************************************************
// Read 1.1V reference against AVcc - Courtesy of Scott Daniels - July 9, 2012
// http://provideyourown.com/2012/secret-arduino-voltmeter-measure-battery-voltage/
// ****************************************************************
long readVcc() {
    // Set the reference to Vcc and the measurement to the internal 1.1V reference
    #if defined(__AVR_ATmega32U4__) || defined(__AVR_ATmega1280__) || defined(__AVR_ATmega2560__)
        ADMUX = _BV(REFS0) | _BV(MUX4) | _BV(MUX3) | _BV(MUX2) | _BV(MUX1);
    #elif defined (__AVR_ATtiny24__) || defined(__AVR_ATtiny44__) || defined(__AVR_ATtiny84__)
        ADMUX = _BV(MUX5) | _BV(MUX0);
    #elif defined (__AVR_ATtiny25__) || defined(__AVR_ATtiny45__) || defined(__AVR_ATtiny85__)
        ADMUX = _BV(MUX3) | _BV(MUX2);
    #else
        ADMUX = _BV(REFS0) | _BV(MUX3) | _BV(MUX2) | _BV(MUX1);
    #endif 

    delay(20);                          // Wait for Vref to settle
    ADCSRA |= _BV(ADSC);                // Start conversion
    while (bit_is_set(ADCSRA,ADSC));    // measuring

    delay(20);
    uint8_t low  = ADCL;                // Read ADCL first - it then locks ADCH
    delay(20);
    uint8_t high = ADCH;                // Unlocks both

    long result = (high<<8) | low;

    // Calculate Vcc (in mV); 1125300 = 1.1*1023*1000 - (Uno 1098073L, Mega 1066957L)
    result = 1111145L / result;
   
    return result;                      // Vcc in millivolts
}
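The arithmetic in readVcc() is easy to check: the ADC measures the internal 1.1V reference against Vcc, so Vcc in millivolts is 1.1 × 1023 × 1000 divided by the reading. A quick sanity check of that scaling using the nominal constant (the sketch above uses a per-board calibrated value instead):

```python
NOMINAL_CAL = 1100 * 1023  # 1.1 V reference * full-scale count, in mV units

def vcc_millivolts(adc_reading, cal=NOMINAL_CAL):
    """Back out Vcc from an ADC reading of the 1.1 V bandgap reference."""
    return cal // adc_reading

print(vcc_millivolts(225))  # 5001, i.e. ~5 V supply
```

A reading of 225 counts therefore corresponds to a 5 V supply, and the reading grows as Vcc sags, which is why the constant sits in the numerator.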
 



Saturday, April 13, 2013

Managing SD Card Images

Having multiple Raspberry Pis doing a variety of tasks, with some aspect of new development constantly underway, means a lot of different SD Card images.   I got tired of manually managing them so I developed a little Python script to help.

What I wanted was a script that would run under Python on a Linux workstation and do the following:
  1. Save an image to the hard drive of the Linux box or burn a previously saved image to an SD Card (--save and --burn).
  2. Optionally compress the saved image (and decompress as part of the burn process) as the hard drive on my Linux box is kinda small (--zip).
  3. Organize my images into three folders with one for original Distro images, one for date stamped Backups, and one for Checkpoints.   An example of my use of Checkpoints:   I have an Arch image that is built just to the point of the network running, one that includes LXDE, and one that is my working Robot (--checkpoint and --distros with backup being default).   
  4. List images available for burn in the above folders (--list).
As I developed the script I also added some more options to:
  1. Provide the password on the command line so it can be fed to sudo for the execution of the dd command.
  2. Enable a save or restore at the partition level.   Note that only two partitions are supported and that a burn destination will be reformatted and partitioned to fit the saved image.
  3. Alter output from the script, with one option displaying everything (--verbose) and another displaying nothing (--quiet).  The default is to display a minimum of status but to prompt for confirmation of command line options.
  4. Add an image number to the --list option such that the user can provide that number for a burn request rather than an entire image name.
Here is the --help for the script:
Options:
  -h, --help            show this help message and exit
  -b, --burn            Burn an image to an SD card
  -c, --checkpoint      Checkpoint type image, default is Backup
  -d, --distro          Distro type image
  -e DEVICE, --device=DEVICE
                        Device name (e.g. sdc)
  -l, --list            List images available for burning
  -p, --part            Save at partition level
  -q, --quiet           Display nothing (including confirmation request) other
                        than an error
  -r ROOT, --root=ROOT  Root password for sudo
  -s, --save            Save image to computer
  -v, --verbose         Display everything
  -z, --zip             Source (/Target) is (/to be) compressed using zip
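For the curious, the option set above maps onto Python's optparse in a straightforward way. This is only a sketch of what the definitions might look like (the dest names are my guesses here, not the script's actual source):

```python
from optparse import OptionParser

parser = OptionParser()
parser.add_option("-b", "--burn", action="store_true", help="Burn an image to an SD card")
parser.add_option("-c", "--checkpoint", action="store_true", help="Checkpoint type image, default is Backup")
parser.add_option("-d", "--distro", action="store_true", help="Distro type image")
parser.add_option("-e", "--device", dest="device", help="Device name (e.g. sdc)")
parser.add_option("-l", "--list", action="store_true", dest="list_images", help="List images available for burning")
parser.add_option("-p", "--part", action="store_true", help="Save at partition level")
parser.add_option("-q", "--quiet", action="store_true", help="Display nothing other than an error")
parser.add_option("-r", "--root", dest="root", help="Root password for sudo")
parser.add_option("-s", "--save", action="store_true", help="Save image to computer")
parser.add_option("-v", "--verbose", action="store_true", help="Display everything")
parser.add_option("-z", "--zip", action="store_true", dest="zipped", help="Image is/will be compressed")

options, args = parser.parse_args(["--burn", "--zip", "--device=sdd", "backup-name"])
print(options.burn, options.device, args)  # True sdd ['backup-name']
```

optparse generates the --help text for free, which is where the listing above comes from.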

Some examples of the script command line invocation:
python mi.py --burn --distro --device=sdd distro-name
python mi.py --burn --distro --zip --device=sdd distro-name
python mi.py --burn --checkpoint --zip --device=sdd checkpoint-name
python mi.py --burn --zip --device=sdd backup-name
python mi.py --save --checkpoint --zip --device=sdd checkpoint-name
python mi.py --save --zip --device=sdd --root=secret backup-name


python mi.py --burn --distro --device=sdd 3
--or--
python mi.py -bde=sdd 3  

Output from the --list command showing image numbers:
will@UbuntuMini:~/Images$ ./mi --list
Images available in Distros
  ( 1) 2013-02-09-wheezy-raspbian:
       1.94gb/z    saved on 2013-04-11 07:06:08.736134
  ( 2) archlinux-hf-2013-02-11:
       1.94gb/z    saved on 2013-04-11 09:10:53.009246

Images available in Checkpoints
  ( 1) arch-1:
       3.97gb/z    saved on 2013-04-13 22:41:33.510442
  ( 2) arch-2-LXDE:
       3.97gb/z    saved on 2013-04-13 23:00:23.804046
  ( 3) tvhe:
       3.96gb/p/z  saved on 2013-04-18 15:15:52.762579

Images available in Backups
  ( 1) 2013-04-12-CritterCam:
       16.04gb/z    saved on 2013-04-12 09:44:34.467685
  ( 2) 2013-04-14-arch-webcam:
       3.97gb/z    saved on 2013-04-14 17:12:50.241074  

I had made the source code for this script available but then pulled it when I realized a) that there were still features I wanted to add and b) that as I added those features I was also introducing bugs.

I have finally finished development and a fair amount of testing on the three Ubuntu machines that I have available.  The code is available here.  I would really appreciate some feedback and any reports of problems.

Tuesday, April 9, 2013

CritterCam Version 2


I have now refined my "CritterCam" based on experiences to date and am pretty happy with the result.   It meets most requirements of the brief though there are one or two refinements that might get added.

In short, the CritterCam is an RPi based device that operates as a wireless motion detector, capturing images using either a webcam at 720p resolution or a DSLR at 12 megapixel resolution.  It is integrated with a small PHP application that also runs on the RPi courtesy of a LAMP install.   The device operates in either of two modes: the first, and most portable, uses a 720p webcam and is based on the OpenCV library; the second uses a USB connected DSLR, in my case a Canon 500D, and relies on the gphoto2 library (libgphoto2) augmented by a wrapper called "piggyphoto".  Obviously, portability goes down when you add a DSLR!

The options provided by the CritterCam are shown below:
Usage: CritterCam.py [options]

Options:
  -h, --help            show this help message and exit
  -k, --keep            Save a static file (/var/www/camimages/stream.jpg)
                        and do nothing else
  -d DELAY, --delay=DELAY
                        Delay between captures
  -s, --stream          Stream timestamped images
  -m, --motion          Stream time stamped images triggered by motion
  -r, --dslr            Use DSLR capture rather than using webcam
  -t THRESHOLD, --threshold=THRESHOLD
                        Motion threshold
  -v, --verbose         Produce extra status and progress messages

Within its processing loop CritterCam will check for the existence of several action generating flag files left in /var/www/ by its PHP partner:
  • Restart.flg - Causes an exit with a zero return code.  The invoking shell script loops.
  • Shutdown.flg - Causes an exit with a one return code.  The invoking shell script exits.
  • Reset.flg - Causes a reset of the current base image to the most recent capture.  Useful in case the app itself has not detected a change in background.
  • Force.flg - Causes the app to capture a full resolution image even without the motion trigger.
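The check itself is just a handful of os.path.exists() calls at the top of each pass. A rough sketch (the names match the flag files above; consuming the flag after acting on it is my assumption, since the wrapper script also clears them at startup):

```python
import os

FLAG_DIR = "/var/www"

def check_flags(flag_dir=FLAG_DIR):
    """Return the action requested by a flag file, or None."""
    def flagged(name):
        path = os.path.join(flag_dir, name)
        if os.path.exists(path):
            os.remove(path)  # consume the flag so it fires only once
            return True
        return False

    if flagged("Restart.flg"):
        return "restart"    # caller exits 0; wrapper script loops
    if flagged("Shutdown.flg"):
        return "shutdown"   # caller exits 1; wrapper script exits
    if flagged("Reset.flg"):
        return "reset"      # re-baseline the motion detector
    if flagged("Force.flg"):
        return "force"      # full-resolution capture regardless of motion
    return None
```

Polling files like this is crude but it makes the PHP side trivial: the web app just touches a file and the Python loop picks it up on the next pass.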
To detect motion the CritterCam uses a very simple, OK maybe rudimentary, approach: the OpenCV absdiff function compares the matrix of the current image against that of the base image, and if the mean difference is greater than a threshold we assume motion.   While processing, the script will attempt to detect a change in the conditions of the base image and will replace it if necessary.
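Stripped of the OpenCV plumbing, the trigger condition is just a mean absolute per-pixel difference compared against a threshold. A minimal sketch in plain Python (the real script does this with cv2.absdiff on the image matrices; flat pixel lists stand in for them here):

```python
def motion_detected(base, current, threshold):
    """Compare two equal-length grayscale pixel sequences; True if the
    mean absolute per-pixel difference exceeds the threshold."""
    diff = sum(abs(a - b) for a, b in zip(base, current))
    return diff / float(len(base)) > threshold

# A big uniform change trips the detector; identical frames do not.
print(motion_detected([0, 0, 0, 0], [90, 90, 90, 90], 25))  # True
```

This also makes the ambient-light weakness obvious: a global brightness shift raises the mean difference just as surely as a critter walking through the frame does.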

Here is a link to the source code for the CritterCam.   It still does some weird stuff now and again and will continue to be enhanced.    It is partnered with a PHP application, but that app is based on my own forms development environment so it is a little complicated to make easily available for download.

The biggest enhancement that I would like to add would be an external battery for the DSLR.   I know that I could buy one but I already have the AC adapter and am thinking I would just hijack it, add a couple of 5.2mm plugs and sockets, splice in a DC power regulator, and have something that could also live on the tripod.

I have to confess to a major issue with my CritterCam.   Namely, it has a hard time coping with a change in the ambient light.   Unless I manually reset the base image it will decide that the change is motion... bad CritterCam!

RPi as a Remote Control for a Canon DSLR - Part 5

Note that this is an update to Part 4 of the same story.

In the previous post I mentioned the following changes:

smsc95xx.turbo_mode=N in /boot/cmdline.txt
dwc_otg.microframe_schedule=1 in /boot/cmdline.txt
vm.min_free_kbytes=16384 in /etc/sysctl.conf
reboot

With a little experimentation I have determined that the first two changes must both be present for my sought-after stability to be achieved.   If they are not both in place we will get the following error:

piggyphoto.libgphoto2error: Unspecified error (-1)

This will happen at some point and probably pretty quickly (within a minute).

The third change does not seem to be needed for my stability purposes.  I have tried it at half the above value and without it at all.  The vm.min_free_kbytes argument is actually part of a pair of tweaks in the sysctl file.   I have not detected any change with them in or out...but I am leaving them in at this time.

# rpi tweaks (16384 suggested though not conclusive over 8192)
vm.swappiness=1

vm.min_free_kbytes = 16384

There is one other thing that I am doing that is a leftover from when I was working with a webcam.   The shell script that I run my Python code within is shown below:

#!/bin/bash
sudo rmmod uvcvideo < rpw
sudo modprobe uvcvideo nodrop=1 timeout=5000 quirks=0x80 < rpw
sudo rm /var/www/camimages/*.jpg  < rpw > /dev/null 2>&1
sudo rm /var/www/camimages/loop/*.*  < rpw > /dev/null 2>&1
sudo rm /var/www/shutdown.flg < rpw > /dev/null 2>&1
sudo rm /var/www/restart.flg < rpw > /dev/null 2>&1
if sudo python -B CritterCamDSLR.py $1 $2 $3 $4 $5 < rpw; then
    echo ""
    echo "Restarting CritterCam"
    echo ""
    ./CC $1 $2 $3 $4 $5
fi

I know, I know, it is ugly with all that running as root but let's leave that alone for now!

Note the two uvcvideo lines.   Without these two lines it takes my Python script twice as long to recognize motion (from about 1 second to about 3 seconds).  I guess libgphoto2 uses uvcvideo?   When I compare 'top' output between the two executions I do not see a difference...but...the response time difference is very noticeable.

Lastly (but not finally), I do still get the above libgphoto2 error on occasion when I am starting my script a second or third time.   I might be able to solve this with a USB reset, and I do have a script to do this, but I think this confirms what I wrote in an earlier post.  For now I am happy with doing a power cycle on the camera now and again though I may work on the usbreset at some point.

Finally, in my next post I will describe my scripts a little more and make them available for download.  I would now but the capture script is still doing a couple weird things.

Monday, April 8, 2013

RPi as a Remote Control for a Canon DSLR - Part 4

I have made some progress with libgphoto2 driving my DSLR without going belly up.  I have modified CritterCam such that the DSLR does both the preview and the motion activated shots and it has been stable for the past ten minutes (previously it would only run for less than a minute before going boom).

Here are the changes that I made:

smsc95xx.turbo_mode=N in /boot/cmdline.txt
dwc_otg.microframe_schedule=1 in /boot/cmdline.txt
vm.min_free_kbytes=16384 in /etc/sysctl.conf
reboot

These are courtesy of the Raspberry Pi Forum and user elatllat.  This forum rocks in terms of helpful and responsive feedback.

I do still have a bit of an issue with the method that I am using for motion detection but I am hoping it is because of the conditions within which I am currently testing (indoors without a lot of light).

There is an update to the above based on some experimentation.

Sunday, April 7, 2013

LAMP

I have installed LAMP on my CritterCam RPi so I can host a little web application written in PHP.   I am not using MySQL at this point but being able to run it on the RPi is kinda cool.   It is slow but still...!

I followed the install instructions at this link.   They are great and so don't need repeating.  You can see the results here where LAMP is delivering the preview web page for my CritterCam.


RPi as a Remote Control for a Canon DSLR - Part 3

I am at an impasse with trying to get the DSLR to capture the motion being detected by the RPi, namely because there is a two second delay between the time the motion is detected by my Python script and when it actually occurred!   I have installed 'motion' on the RPi and it also suffers from the same delay.   Hmmm.  What to do?

I guess I will see what the delay is when using PiggyPhoto to grab preview images.  If that looks to be working then I will go back to pondering stability of the DSLR interface across libgphoto2.

Friday, April 5, 2013

CritterCam Version 1 in Action

Having solved the power problem (got a new battery in the mail) I have been able to get the CritterCam into action.   Here are some images from a test run.   First the web preview page.   This is a tiny little PHP app that runs on the RPi alongside the Python script that does the captures.   It is kind of funny to think about the RPi running a SQL server and PHP behind Apache.   I am not using MySQL but I could!   In any case...


The large image at the top is the last image saved due to motion being detected.  The four images at the bottom are the last four images captured, with the calculated difference between successive images shown beneath them.   The threshold specifies the point at which this difference between images triggers a capture.

I do have the DSLR triggering working with one major issue...there is a two second delay between when the Python script sees the motion and when the motion actually occurred.  No clue as to why yet.   Even without the DSLR the little 720p webcam does a decent job at the image capture.






Yes, that bench is in dire need of painting.   If it were not so bloody cold it would get pressure washed and painted!

Thursday, April 4, 2013

RPi as a CritterCam - Ultimately Remoting a DSLR

I have made some progress on a CritterCam based on one of my RPi's.   Here are a couple pictures of the device:

  • The base is a four battery holder for 3.7v 18650 Lithium Ion flashlight batteries with an output of 14.8v.
  • At the top is an adjustable power regulator that drops the 14.8v to 5v and delivers it via a 5.2mm power plug.
  • The power regulator is sitting on a powered USB hub that provides power to the RPi and also supports a Wi-Fi dongle (and a keyboard during debugging).  The hub is a cheap one but compatible with the RPi and is smaller than the more expensive ones I find on Amazon.
  • As mentioned above the hub supports a USB Wi-Fi dongle.   The one shown in the picture is an Edimax (the larger of the two that I have).  I also have the small one but have been having some issues with dropped packets that I will discuss elsewhere.
  • In front of the USB hub is a 720p webcam that is compatible with the RPi...but in my case only if plugged directly into the RPi.
  • Inside the Adafruit case is, of course, the Raspberry Pi that drives the CritterCam and its web interface.
My goal is to use some Python code to detect motion in front of the webcam and trigger a shot by the DSLR which will be connected to the CritterCam.   The CritterCam will also support a web server running a little PHP application that will provide a preview of what the CritterCam sees.   I will talk about all of this a little later.

Right now I have a couple of issues.  The first is power, on a couple of fronts.  One is that I am using four Lithium Ion batteries to power the CritterCam and one of them has turned itself off!  I guess this is a good feature, but I only have four right now and am waiting for an order from China for some more.  The other power related issue is that while I had a good test run with the Lithium Ion batteries driving the RPi for almost three hours, since then I can't get the power regulator to reliably support the RPi when connected to wall power at a variety of voltages.   The power does not seem sufficient to drive both the Hub and the RPi.   I am baffled as it worked so well with the batteries but cannot retest with batteries again as I am missing one!

The other issue relates to driving the DSLR.   I have the Python code working well with the webcam on its own.   It detects motion and saves the current image.   Alas, when I try to trigger a DSLR capture at that point using the code I discuss here, it fails.   More work to do here but first I want to get the power issue sorted.