
Autoguiding with the Pi astro-cam

Pi guiding

*** UNDER CONSTRUCTION ***

The Pi camera

The Pi camera has one of the smallest pixel sizes of any camera you can find, which is a real advantage when it comes to autoguiding: the smaller the pixel size, the 'quicker' you can detect a star moving 'off pixel'.

The Pi camera pixels are 1.4µm square, with all 5M packed into a sensor that's only 3.67 x 2.75mm.
 
This compares to a 'typical' DSLR such as the Canon 350D (its pixels are 41.09 sq. µm, with 8M pixels spread over an area of 22.2 x 14.8mm).
 
This means the Pi has a '10x magnification' advantage over the Canon (the Pi camera has an effective FOV of 9.79 x 7.3 arc min), but it also means the Pi camera receives about 20x less light per pixel than the 350D.
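To work out the numbers for your own set-up, the standard 'plate scale' approximation is 206.265 x pixel size (µm) / focal length (mm). A minimal sketch in Python (the 900mm focal length is just an illustrative assumption - plug in your own OTA's figure) :-

# plate scale in arc-seconds per pixel
pixel_um = 1.4               # Pi camera v1 pixel size, in microns
focal_mm = 900.0             # assumed focal length - use your own OTA's value
scale = 206.265 * pixel_um / focal_mm
print("%.2f arc-sec per pixel" % scale)   # ~0.32 arc-sec/pixel at 900mm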
 
The Pi camera is also just not very sensitive, and although it does support 'binning' (adding a 2x2 matrix of pixels together to quadruple the sensitivity), this only 'cuts in' automatically on specific combinations of camera exposure settings.
 
The Pi camera is also rather 'noisy' - due to the rate at which the camera accumulates 'thermal noise', the maximum exposure time is limited to 6 seconds (10s for the Pi cam. Mk 2).
 
Finally, the 'amplification' is limited to ISO 800

So, 'all things being equal', the Pi camera is not going to be your 'first choice' for faint nebulae etc. astrophotography - instead, the Pi camera's small sensor and tightly packed pixels mean it can be used to great advantage as an 'autoguider' camera on a nice bright 'guide star'.

Mounting the camera

The Pi Astro-cam 1 1/4" 'eye piece' (film canister) adaptor will plug straight into most focus assemblies. The main problem will be in achieving focus.

Typically, Dobsonians (and other Newtonian OTAs) have a problem 'focusing in' far enough, whilst on a refractor it's often impossible to 'focus out' far enough for the camera.
 
To adjust the camera 'further out' for a refractor is easy enough - for example, you could use a 90 degree mirror/prism to increase the optical path, or fit a short 'extender' tube.
 
To adjust a camera so it's 'further in' for a Newtonian is harder. If your focus assembly is threaded, one trick is to add a Barlow lens (which has the effect of moving the focus point further out of the tube) - or, if you have a 2" focus assembly, mount the camera inside (part way down) the 2" focus tube (if it's 1.25" you are out of luck - the Pi camera is just slightly too big to fit within a 1.25" tube).

The next problem is that, chances are, the tiny camera itself will not be in the exact center of the field of view.

The easiest way to focus is to 'aim at the Moon' - this guarantees that 'something' will be in the camera field of view.
 
A nice bright Moon means you can run the camera in 'movie mode' (and manually adjust the focus whilst looking at the display).
 
Once you have focus it's a 'good idea' to mark the position (for nights when there is no Moon :-) )

How does a 'tracking' mount 'work' ?

To 'track' the stars, 'all' the telescope mount has to do is counteract the Earth's rotation. This is typically achieved with specific gearing ratios designed so that the mount (worm gear) drive can be turned at some simple 'fraction of a second' rate (thus allowing a simple quartz-controlled clock to be used as the time standard, along with some rather 'dumb' speed control electronics).
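As a worked example (the 144-tooth worm wheel is an illustrative assumption - check your own mount's gearing), the arithmetic the drive electronics has to satisfy is simple :-

# one full RA axis revolution per sidereal day
sidereal_day = 86164.1                  # seconds
worm_teeth = 144                        # assumed worm wheel tooth count
secs_per_worm_rev = sidereal_day / worm_teeth
print(secs_per_worm_rev)                # ~598.4s per worm revolution, i.e. ~0.1 rpm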

This, of course, only 'works perfectly' when you have perfect motors (or at least some 'feed-back' mechanism to monitor the motor speed and make adjustments) and perfect gears.
 
In practice nothing is ever perfect, so mounts with built-in motor drives have sensors on both axes and a 'feed back' system that sends corrections to the motors (whilst over-priced 'after market' motor kits have no sensors, so provide a 'knob' that allows the user to adjust the 'tracking' speed :-) )
 
Whilst the movement sensors (and feedback system) are highly accurate in themselves, they make the assumption that movement of the axes results in perfect star tracking - and if the mount is less than perfectly horizontal, or the latitude setting is less than perfectly correct (on an EQ mount), this won't be the case.
 
So, in practice, the only way to really 'track the stars' is to fit a camera to the actual OTA and use a computer to 'see' where the OTA is 'pointing'. When the software detects the stars are 'drifting', it can generate the necessary 'corrections' to the imperfect motor/gear drive - a process known as 'auto-guiding'.

The 'one pixel' requirement

Autoguiding is used to maintain 'one pixel' accuracy when imaging so as to avoid a 'fuzzy' image (or even 'star-trails').

Unless the autoguider can 'spot' drift - and correct for that drift - before a star drifts into the 'next pixel' of the imaging camera, your main exposure will become 'smeared'.
 
To achieve this, the magnification applied to the guide camera is usually greater than that used for imaging. Indeed, when capturing deep sky images, it's not unusual for the 'main' OTA to be used (at high mag.) with the guide camera whilst a smaller co-mounted refractor is used for the actual imaging !
 
The problem with this is that the smaller scope gathers much less light, so more and longer exposures are needed - which in turn makes accurate guiding even more crucial !
 
To use the 'finder' scope for the autoguider (and main OTA for imaging), we need a guide camera (like the Raspberry Pi camera) with much smaller pixels than the imaging camera.

When using auto-guiding, the mount alignment and gearing etc. accuracy becomes less vital (although if it's too far out you will get 'field rotation').

Today's auto-guiding software can even set itself up by making a series of small changes and measuring the result. Indeed, most can even choose a guide star automatically (of course, it helps if the chosen star is not a binary :-) )
 
The use of a 'video' camera and auto-guiding software such as 'PHD' eliminates the need for accurate motor drive clock circuits.
 
This simplifies the drive requirements and means the DIY fitting of motor drives no longer requires 'simple integer' gear-ratios - indeed it more or less eliminates the need for accurately cut gears, meaning that 'Meccano' gears really will 'do the job' !

PHD is 'open source' software (and can be found here), however whilst it has been run on the Pi, its operating speed leaves a lot to be desired :-)

There are even a few tricks that can be used to 'boost' the capability of the auto-guiding camera. The main one is to use modern 'auto-guiding' software that measures the magnitude of light falling on the pixels around the 'edge' of the guide star.
 
Comparing light levels on 'opposite edges' of the guide star allows corrections to be performed to 'sub-pixel' accuracy - when it is noticed that one edge pixel has started to receive more light (and the opposite edge less), corrections to the mount can be made well before the guide star 'moves by a pixel'.
 
This trick even makes precise focusing of the guide-camera less vital (actually, a slightly 'spread' (out-of-focus) guide star is likely to give better results)
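As a minimal sketch of the principle (the function name, box size and use of NumPy are my own assumptions, not taken from any particular guiding package), the 'intensity weighted centroid' of the guide star can be computed like this :-

import numpy as np

def star_centroid(img, x0, y0, r=8):
    # img = 2D uint8 luminance array, (x0, y0) = rough star position,
    # r = half-size of the search box around the star
    box = img[y0 - r:y0 + r, x0 - r:x0 + r].astype(float)
    box -= box.mean()             # crude background subtraction
    box[box < 0] = 0
    ys, xs = np.indices(box.shape)
    total = box.sum()
    cx = (xs * box).sum() / total + (x0 - r)
    cy = (ys * box).sum() / total + (y0 - r)
    return cx, cy                 # sub-pixel star position

Comparing successive (cx, cy) values from frame to frame gives the drift to a small fraction of a pixel.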

Taking photos

Enable the camera
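Before any of the capture commands below will work, the camera interface has to be enabled. On Raspbian the easiest way is via the configuration tool (enable the camera option, then reboot) :-

sudo raspi-config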

Disable the LED

Whatever you are using the Pi camera for, before taking photos 'in the dark' one of the first things you will want to do is turn off the camera's red 'recording' LED :-). To do this, edit the /boot/config.txt ..

sudo nano /boot/config.txt
 
add the line :-
disable_camera_led=1

This will disable the LED after the next reboot (sudo reboot)

Taking photos (raspistill, raspistillyuv)

By default, the camera displays a 5 second 'preview' before taking the shot. To suppress the preview display, use the '-n' option. To set the preview time, use '-t n' (n = ms delay before taking the shot, so '-t 5000' is the same as the default). The '-f' option sets a full screen preview, and the '-fp' option sets 'full res preview' mode (i.e. shows the exact same size as the capture will be) at 15fps.

For a series of time-lapse images, use the -tl option with %04d in the -o file name (to insert a 4 digit serial number - or %06d for 6 digits etc.). In -tl mode, the -t parameter = the total sequence time (in ms) and -tl = the 'gap' (also in ms) between shots. So, for example, to spend 30 seconds taking shots at 2 second intervals (with no preview) :-
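raspistill -n -t 30000 -tl 2000 -o photo%04d.jpg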

The default is .jpg (you can add RAW to the jpg with the -r option). Other output formats are set using the -e option (png, gif, bmp), however the encoding is done by the CPU (whilst jpg is done by the GPU), so these choices will actually take longer to 'save' an image ...

raspistillyuv has an extra option, -rgb, which saves the image in RGB888 (8 bits per channel) rather than the default YUV420.

Streaming via Ethernet

Real-time focussing means viewing the video stream on a local PC.

The easy way to do this is to just wire the Pi 'TV out' to a locally positioned AV display. It's also 'dead easy' for the Pi to place still-image photos into a local or remote 'shared folder', allowing them to be viewed on a PC. More difficult is to have the Pi 'transmit' a movie stream via Ethernet to a PC.
 
A typical full HD (h264) data stream requires less than 4Mbytes/s = 32Mbps, so a 100Mbps Ethernet link should have no problems keeping up.
 
At the PC end, the Open Source VLC media player is an ideal choice of viewer
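A minimal sketch of one well-known recipe (the PC address 192.168.1.10 is just a placeholder, and some versions of netcat want 'nc -l 5000' rather than 'nc -l -p 5000'). Start the listener on the PC first :-

nc -l -p 5000 | vlc --demux h264 -

.. then, on the Pi, pipe the camera's h264 stream into netcat :-

raspivid -t 0 -w 1280 -h 720 -fps 25 -o - | nc 192.168.1.10 5000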

Autoguiding

One obvious use of the Pi is as an auto-guider (on a small co-mounted 'guide scope') to 'direct' the main telescope for long exposure DSI (deep sky imaging) with an SLR. The tiny pixels are ideal for 'spotting' a deviation from the guide star in time to prevent 'streaking' or 'smearing' of the main image.

For more details see my  page

Using EQ or Alt-Az mounts

If an EQ (RA/Dec) mount is used, in theory you only need a single (RA) motor - but this is only the case when the mount is correctly polar aligned, totally level and the latitude angle is correctly set (eg. 52 degrees for most of S. England).

The 'disadvantage' of an EQ mount is that 'normally' only the RA motor will be running - and so long as the autoguiding software only needs to make corrections to the RA motor speed everything will be fine.
 
However, when a Dec error is found (usually as a result of drift due to poor polar alignment), the Dec motor has to be 'started up' - and (of course) if the correction is in the 'opposite' direction to the last usage of the Dec motor, there will inevitably be some 'backlash' that needs to be taken up.
 
As a result, the guiding software will have to send multiple Dec adjustments before the Dec gears 'wind up' and the OTA moves (usually with a sudden jerk !).
 
The only way to even partially avoid this is to make sure the DEC worm gear is "wound up" ("loaded") in the direction of any drift before starting the imaging guide sequence (i.e. run the guide s/w for a good time before opening the shutter).
 
Of course, changing the Dec during imaging will cause 'image rotation' (see below), however this is very much less noticeable than a Dec drift error

An Alt-Az mount requires 2 motors but has the advantage that no polar alignment is necessary. The disadvantage is that there are two sets of gears to contribute to errors and PHD (video tracking) becomes even more vital. It is also not suitable for very long-exposure (i.e. deep sky) imaging due to the inherent 'field rotation' (about the star being tracked)

The advantage of the Alt-Az mount is when it comes to auto-guiding corrections. With both motors already running, all corrections will be 'speed up' or 'slow down' (rather than 'reverse drive into backlash').
 
However the guide control software will have a harder time since it has to adjust both motors at once.
 
Not only that, but even 'simple' timing control is a lot harder (since both motors have to be running at different rates at the same time to track correctly).
 
Finally, 'GoTo' with an Alt-Az is also more complex to implement (since star positions are given in RA/Dec and have to be converted into Alt/Az - in fact, you can't even use 'setting circles' to find a star without doing calculations).
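As a sketch of the conversion the GoTo software has to perform (standard spherical trigonometry - it assumes you already know the Local Sidereal Time, from which the hour angle HA = LST - RA) :-

import math

def radec_to_altaz(ha_deg, dec_deg, lat_deg):
    # ha_deg = local hour angle (LST - RA), dec_deg = declination,
    # lat_deg = site latitude - all in degrees
    ha, dec, lat = map(math.radians, (ha_deg, dec_deg, lat_deg))
    sin_alt = math.sin(dec) * math.sin(lat) + math.cos(dec) * math.cos(lat) * math.cos(ha)
    alt = math.asin(sin_alt)
    cos_az = (math.sin(dec) - sin_alt * math.sin(lat)) / (math.cos(alt) * math.cos(lat))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if math.sin(ha) > 0:          # target is west of the meridian
        az = 2 * math.pi - az
    return math.degrees(alt), math.degrees(az)

Both values change continuously, which is why both motors have to run at (constantly varying) different rates.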

For my first 'guide camera' system, I opted to use an ancient EQ mount simply because it allowed me to 'motorise' the mount by first installing the RA motor with simple 'timed' tracking, then adding RA (only) auto-guiding. Only later did I bother adding the Dec motor (for goto).

Using the Pi

In theory, you can 'do it all on the Pi', HOWEVER in practice I doubt even a B3 Pi is able to process images fast enough to make corrections before your deep sky imaging camera starts to show 'star trails'.

The Pi camera is driven by the GPU, which performs jpeg or h264 encoding (or not) before passing that data to the Pi CPU. The Pi CPU thus only has to deal with saving (or sending) the 'finished' images.
 
Whilst it may seem that sending 'auto-tracking' images to a PC for processing provides the obvious solution, remember that data transmission is via the Pi USB port, which also requires lots of Pi CPU attention.
 
With the GPU taking control of image capture, it may actually be faster to have the Pi process successive 'RAW' images in its RAM (rather than have it compress them to jpg and send them out over Ethernet / WiFi - and then wait for the PC to respond)
 
Of course, you only need to look at the 'luminance' channel of the Pi camera, and any semi-clever** algorithm need only look at the (relatively) few pixels around the edges of the star being used as the 'guide star' (i.e. the extreme left/right pixels in the RA direction and the extreme top/bottom pixels in the Dec direction), however there is still quite a lot of data to sort through.
 
** a clever algorithm will look at all the bright stars in its field of view and work out 'averages' in an effort to compensate for atmospheric distortions etc.
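As a sketch of getting at just the luminance data (this assumes the 'picamera' Python library - any capture route that yields raw YUV will do) :-

import io
import numpy as np
import picamera

# note: picamera pads the width to a multiple of 32 and the height to a
# multiple of 16, so 640x480 needs no padding
w, h = 640, 480
stream = io.BytesIO()
with picamera.PiCamera(resolution=(w, h)) as cam:
    cam.capture(stream, format='yuv')
# in YUV420 the first w*h bytes of the buffer are the Y (luminance) plane
y_plane = np.frombuffer(stream.getvalue(), dtype=np.uint8, count=w * h).reshape(h, w)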

Controlling the mount

Whatever method is used, the Pi will need to transmit corrections to the telescope mount.

Older mounts with a serial link can be driven using the "ASCOM" standard, whilst more modern ones use USB or the INDI standard.
 
Many current mounts also have an 'auto guide' port (known as an "ST-4" port) - this is usually a 6-pin RJ-12 'modular' socket (similar to, but smaller than, the Ethernet RJ-45) that accepts a simple 4 wire (up / down (= Dec), left / right (= RA)) 'pulse' train.
 
Note that an ST-4 'port' is essentially just a direct wired connection to the mount's handset controller's 'left/right up/down' buttons (so you can 'wire your own ST4' to almost any mount with a GoTo controller :-) ).
 
The big 'plus' of ST4 is that it was designed to support a guide camera that performs its own correction calculations (at a cost of between £200 and £300 :-) ) and which can thus be linked directly to the mount (no PC necessary).
 
The fact that a stand-alone camera can perform auto-guiding calculations on what is likely to be a simple 'custom' chip running at a few hundred MHz (at most) shows how utterly inefficient modern software has become ... of course, something like motion detection would be 'ideal' to run on a GPU (like the Pi's) - but that requires access to the GPU 'core' at a programmatic level = fat chance.
 
If you are running PHD or similar on a PC, you can find multiple USB to ST-4 'converters' (there are even DIY versions that can be built for a fraction of the price demanded by the mount manufacturers) - see this mega-over-powered Arduino based design or this PIC approach.
 
For ST-4 pin outs see here (plus note that on some sockets pin 1 is 12v power :-) )

Driving a serial link / ST-4 control port

This is only necessary if the Pi is doing the 'guiding' calculations. If the Pi is simply sending the camera feed to a PC, then the PC is doing the calculations and there is little point wasting time sending the results back to the Pi (except, of course, when the PC is 'remote', i.e. at the other end of a WiFi link). If within cable distance, all you need is a PC to Serial / ST-4 converter instead (to drive ST-4 from USB, look here for a PIC based DIY solution).

Using the Pi Serial link

The Pi has a serial UART that is available at the GPIO pin header. However, first you have to stop the Pi using the serial link as a 'console'. You do this by editing the /etc/inittab file and 'commenting out' (adding a # at the start of) the following line before rebooting.

#  2:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100

Next, install one of the 'serial to/from Ethernet/WiFi' apps. For example :-

sudo apt-get install socat

Finally, to 'link' the serial line to (say) Port 1234, you launch it with :-

socat tcp-l:1234,reuseaddr,fork /dev/ttyAMA0,raw,b9600,echo=0,ocrnl=1

The result is that everything sent by the PC to IP port 1234 will be 'copied' by the Pi to the serial link .. and anything the Pi receives on the serial link will be sent to port 1234. The Pi can, of course, use other TCP/IP port numbers 'at the same time' to transmit e.g. video frames to the PC.
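To test the bridge from the PC end, a raw netcat (or telnet) connection will do - the Pi address below is just a placeholder :-

nc 192.168.1.20 1234

.. anything typed at the PC should then appear on the Pi's serial pins.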

ST-4 port

This is even easier - all you need is 4 GPIO outputs and some opto-isolators.

Even if you are driving the ST-4 via a 'PIC' chip, you must isolate the drive pins from the actual ST-4 pin voltages (which could be anything from 5 to 12v) and currents (the drive 'pulse' is an active Lo pull - you will need to sink anything from 5-10mA).
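A minimal sketch of the Pi end (the BCM pin numbers are my own arbitrary choice - use whichever four GPIOs you actually wired to the opto-isolators) :-

import time
import RPi.GPIO as GPIO

# assumed wiring: one GPIO per ST-4 line, each driving an opto-isolator LED
ST4 = {'ra+': 17, 'ra-': 18, 'dec+': 27, 'dec-': 22}

GPIO.setmode(GPIO.BCM)
for pin in ST4.values():
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def pulse(direction, ms):
    # hold one ST-4 line active for 'ms' milliseconds - the opto-isolator's
    # output transistor does the active-Lo current sinking at the mount end
    GPIO.output(ST4[direction], GPIO.HIGH)
    time.sleep(ms / 1000.0)
    GPIO.output(ST4[direction], GPIO.LOW)

pulse('ra+', 250)    # e.g. a 250ms RA 'speed up' correction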

Adding a Pi 'GoTo' to a non-GoTo mount

If your mount has dual motors, then 'yes' you can add a Pi GoTo controller.

One limitation is that the Pi has no 'real-time' clock.
 
Fortunately, it is able to 'fetch' the time & date automatically from a network 'source' - so if you control the Pi 'StarTrack' from your PC / Laptop (via Ethernet or a WiFi 'dongle'), it can be set up to get the time & date from your computer's clock, even when you have no Internet connection.
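For example, if your PC / Laptop runs an NTP service, a single command on the Pi will set its clock (the PC address is just a placeholder) :-

sudo ntpdate 192.168.1.10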

How does the auto-guiding 'proof of principle' work ?

You connect a USB 'web-cam' to the Pi. However, the Pi ARM CPU is one or two orders of magnitude too slow to both capture and process video at any 'reasonable' resolution or 'reasonable' speed whilst also running the 'X window GUI' (users have achieved 120 x 240 resolution at 4 seconds a frame :-) ).

However, as a 'proof of principle', see below (based on information found on the Cloudy Nights forums and elsewhere).
A1. For PHD, proceed as follows (from the Cloudy Nights "PiAstroHub: RaspberryPi for Autoguiding+DSLR+GOTO" thread, post #5371321, 08/16/12). These are the steps to install the QHY5 driver and Open-PHD-guiding after making a Raspbian "wheezy" SD card (http://www.raspberrypi.org/downloads)
Note:
1. Set Locale to "en_US UTF8", otherwise there will be an error message when opening PHDguiding.
2. The working directory is /home/pi
----------------------------------
sudo apt-get update
sudo apt-get upgrade
# install dependencies
sudo apt-get install subversion
sudo apt-get install cmake
sudo apt-get install libusb-dev
sudo apt-get install cfitsio-dev
sudo apt-get install libnova-dev
sudo apt-get install fxload
sudo apt-get install libwxgtk2.8-dev
sudo apt-get install libv4l-dev
# make the INDI and QHY5 USB drivers
svn co https://indi.svn.sourceforge.net/svnroot/indi/trunk indi
cd indi/libindi
cmake .
make
sudo make install
cd ..
cd 3rdparty/indi-qhy/
cmake .
make
sudo make install
sudo cp /usr/local/lib/libindi* /usr/lib/
# To check if the QHY5 is recognized, type the following command and then plug and unplug the QHY5. The QHY5 should be listed.
tail -f /var/log/syslog
# Run this command to start the indi server on the Raspberry Pi
indiserver indi_qhy_ccd
# install Open-Phd-guiding
cd
svn checkout http://open-phd-guiding.googlecode.com/svn/trunk/ open-phd-guiding-read-only
cd open-phd-guiding-read-only/
cmake .
make
sudo make install
# Open an X terminal and type "PHD" in a command line. PHD will open in an X window.

If any of the above fails for any reason at all, don't be too surprised. Every version of Linux (including every Pi version) will come with various different 'libraries' pre-installed, so chances are you are just missing something that is 'taken for granted'. It's usually possible to work out what's missing from the error messages (or .log file), and there are lots of on-line forums where help (and a solution) can be found.

As of late 2015, the Pi "Raspbian" system was updated (the new release being named "Jessie") for the Pi B2. PHD2 can be built on the B2 by following the Ubuntu build instructions with only one minor problem - a missing SSAG driver. This can be found in the lin_guider package (see below), which comes with QHY5 installation instructions (the same driver works for PHD2). PHD2 and lin_guider can 'co-exist' just fine on the RasPi.

PHD is very, very slow - what alternatives are there ?

As a disclaimer, I have not done a real field test yet, so I will keep you posted about how it really works. I just checked that the camera (SSAG) connects and works (using ST-4, I presume). So far I have been using lin_guider on the Wheezy release of Raspbian and it works OK, but I still have trouble guiding Dec - though that may have to do with other factors, not just the software.

The INDI drivers and OpenPHD are quite slow and inconsistent, so someone has compiled Lin_guider on the Raspberry Pi - see http://sourceforge.net/projects/linguider/

Lin_guider is reported to be much faster than OpenPHD+INDI - images are downloaded almost immediately after capturing. The steps to compile Lin_guider are as follows :-

---------------------------------------------------------
sudo apt-get install libusb-1.0-0-dev
sudo apt-get install libqt4-dev
sudo apt-get install libftdi-dev
sudo apt-get install fxload
>>>> copy lin_guider-26.0_static.tar.bz2 to /home/pi
tar -xvf lin_guider-26.0_static.tar.bz2
cd lin_guider
>>>> copy lin_guider-27.0_sfx.tar.bz2 to /home/pi/lin_guider
tar -xvf lin_guider-27.0_sfx.tar.bz2
sudo ./lin_guider.bin
>>>> To install QHY5 firmware
cd /home/pi/lin_guider/udev/qhy5
sudo sh qhy5_lg_install.sh
>>>> To run program
cd /home/pi/lin_guider/lin_guider_pack/lin_guider
./lin_guider
--------------------------------------------------------- 
