Student Shots of the Orion Nebula

As I mentioned, I have a group of students doing their third-year projects at the moment. As last year, we got some stellar spectra and spectra of Jupiter. I wanted to get them to do some imaging as well, and we've been trying to get some nice images; we tried the Crab but it didn't come out very well. So I got them to find the Orion Nebula, take images, and then stack and process those images, subtracting the dark frames. This is entirely student-obtained.

Here is the first effort: student Karan Solanki was the first to analyse the jointly obtained image files. That doesn't mean he gets the best mark, but he's done a great job and now gets to move on and do something else. (I'm keeping them all busy with lots of different tasks.) So let's see if anyone does better. Of course the Orion Nebula is easy, but they have looked at other objects… hopefully we'll see them here eventually.


M42 Orion Nebula obtained by students, this one processed by Karan Solanki


Best Shot of Jupiter Yet


I have a new set of students doing their third-year projects on the telescope and they are very good. I will post some of their stuff soon. However, tonight I was out on the roof with them and, after even the most enthusiastic one had run out of steam, I still felt like having a go at Jupiter, just a quick go, so I stayed up alone.

Two extremely frustrating hours later I managed to spend about 34 seconds getting this.

I’m supposed to do the following official bit now, people complain when I don’t so here goes:-

3000 shots at 83 frames per second, best 50% stacked, 2.5x TeleVue Barlow, Celestron Skyris 618C, Celestron C14, about a million Zeiss lens wipes and an awful lot of bad language.

The students got some awesome images too, including an object we’ve never seen before.  I’ll report this soon, but I need to give them a chance to analyse their data first… (I’ve done a sneak analysis.  It’s awesome data.)

The Maxwell Society viewing the planetary line-up

The undergraduates here at King's College London have a society called the Maxwell Society.  Named after James Clerk Maxwell, it boasts some famous past members, including Peter Higgs and Arthur C. Clarke.  Today they run a variety of events throughout the year, depending upon who's in charge that year.  This year's lot wanted me to let them up to the telescope, which of course I was very open to, but I realised that there are not so many cheap thrills with a telescope in the middle of London: you have to work hard for your pictures/spectra, and your views through the eyepiece will be so-so.

Anyway, they asked about the planetary alignment which is currently taking place, and when I told them they'd have to get in around 6am, they barely missed a beat before arranging it. So this morning I went in at 5am.  Gah.  Anyway, around 20 intrepid astronomers showed up at the crack of dawn.

As usual, it turned out the conditions were less than ideal.  It was very windy, which meant the “seeing” was very bad: the turbulent atmosphere makes the image extremely wobbly.  That also means you can't even think about using a very high magnification, as the images just get worse and worse, so I stuck with a 24 mm eyepiece, which gives 156x magnification.  In principle I could combine my 11 mm eyepiece with my 2.5x Barlow and get some massive magnification, but it would look very bad: all the imperfections of seeing are enhanced non-linearly as you increase magnification.   Since all the planets we were looking at were close-ish to the horizon, this was not ideal.   Even on a good day you shouldn't bother trying to see anything less than about 25 degrees above the horizon in central London.  I tried looking at Mars before they got up there but it was… underwhelming.

However, I showed them Jupiter with cloud bands and three moons visible, and then we saw Saturn.  It was very shaky, but every so often they got a clear, focused view of the rings, albeit a bit small.  Then some views of the moon; even the moon looked shaky, indicating how bad the conditions were.  Despite all this, they really seemed to enjoy the whole thing a lot!  I very much hope that they come again to see some better views when the conditions are improved.


That moment when you realise you’ve been handling ALL your image files wrongly (or 65,534 shades of Grey vs. 254)

Another post so soon?  Well, I've discovered something and it's pretty significant.  It's really rather embarrassing but, like I keep saying, I REALLY don't know what I'm doing, and I try to pride myself on always admitting when I am wrong (and, if necessary, apologising, although on this occasion I don't know who I'd apologise to).

I recently discovered something about the free image processing software GIMP (Photoshop for cheapskates) – it is 8-bit.  That means that within the image every pixel has a value between 0 and 2^8-1, which is 255.  If it is 0 then the pixel is completely dark, and if it is 255 then the pixel is completely saturated, with 254 shades of grey between those two extremes.

I also recently discovered something about our camera: it is 16-bit, so each pixel has a value between 0 and 2^16-1, which is 65535, with 65534 shades of grey in between.

Unfortunately, I've been using GIMP to stack and analyse our images.  So if the camera was reading 100 on a particular pixel, out of a possible maximum of 65535, then once you read the file from the camera into GIMP it divides everything by 256.  The closest integer to 100/256 is zero.  If we were to stack 20 such images, then 20 times zero is still zero.  If you use proper 16-bit software, you get 20 x 100, which is a respectable 2000.

So basically we’ve been missing out on a heck of a lot of detail.  Luckily I’ve still got the original image files, so I’ve been spending the last part of my holiday re-analysing and stacking some of those old images.

The difference is remarkable:-


These two images come from exactly the same data, cropped very slightly differently.  The one on the left uses all 16 bits, while the one on the right is only 8-bit.  So all this time we've been getting much better images than we thought.  This makes me even more eager to get out and look at more things, but the weather is awful.  Luckily I've got a UG project coming up with the telescope, so that will give me a chance to get some new images…

So I do feel like a bit of a fool but a relatively happy one…

End of Cloudy 2015

The weather has been horrible for months; it's been mild but terribly cloudy.  For example, the only two clear nights in London lately have been Christmas Eve and New Year's Eve, both of which I had plans for.  It has really been very difficult to get good views of the sky from London for that whole time.  We haven't had the chance to do as much as we would have liked in continuing to learn how to use the telescope.

Messier 81, about 5 times as far away as Andromeda.

Sunayana and I did get this image of M81 earlier this month.  It is a large galaxy, not very far away, and it should look a lot brighter than this, but I hope you can see there is a galaxy there!  We've also been trying to learn how to use colour, as you can see with this image of the Ring Nebula, M57,


which is not bad for the time being at least.  There are very many tricks that we are learning about stacking images, but we haven't yet started to do things like correcting for flat fields and dead pixels.  Hopefully we will get better weather in 2016.  Certainly we are looking forward to the return to the skies of Jupiter, which already feels like an old friend; February will be good for that.  However, we want more galaxies and more student access to the telescope, which we are working on and will report more about soon…

Best wishes for the new year to you all.

Sunayana’s Image Processing Software

This post is by Sunayana who has been helping me with the telescope over the past year.  Unfortunately she is now leaving us to pursue an MSc in Astrophysics at UCL but I sincerely hope she will find time to come back and visit us up on the roof as often as possible.

One of the key purposes for the KCL telescope is its use within third year project work. Trialled in early 2015, the first set of project students used spectroscopy to analyse the composition of elements in the atmosphere of Jupiter, the Orion Nebula and Sirius. This involved grappling with raw images, plugging them into software and outputting a spectrum of intensities plotted against a range of wavelengths. In pursuit of a more efficient solution, a challenge was posed – to write concise image processing software which would take any greyscale image of emission/absorption lines and immediately convert it to its corresponding spectrum. This would circumvent the tediousness of cropping images manually and having to abide by the constraints of external software, allowing for flexibility and swifter analysis.

So immediately I got to work, contemplating the most suitable programming language for manipulating image files in a streamlined and precise way. Reluctantly turning away from my familiarity with FORTRAN 90, I embraced C for an easier ride into image manipulation.

The programme opens a greyscale bitmap chosen by the user and scans each pixel, processing its intensity value. Since these are 8-bit greyscale images, each pixel contains only intensity information and therefore has a value between 0 and 255 (2^8 = 256 levels). The intensities of the pixels in each column are then summed and plotted. So, going across the image, any non-zero intensity detected in a given column will yield a peak in the spectrum. Proportionally, the larger the number of non-zero intensity pixels, the bigger the peak will be. An all-white vertical line down the image, for example, would give the maximum possible intensity. Of course, the non-zero intensity pixels represent the areas of the image where characteristic emissions are occurring (or vice versa for zero-intensity pixels, where absorptions are occurring).

It is important to note that the response of each pixel is typically non-linear, and discrete data analysis of an image which supposedly contained equal increases in intensity across its width was terribly unhelpful in the early stages of writing the programme. Finally, the number of pixels across the width of the image is calibrated against the wavelength range of the spectroscope (between 300 nm and 1100 nm). Therefore, a spike identified at the 50th pixel of a 200 x 200 image will translate approximately to a peak at the 500 nm mark. Automatically, all the pixel data of the image are written to a readable, graphable text file which is easily plotted in any data analysis software, including gnuplot, Xmgrace and Microsoft Excel. Hopefully, this swift and user-friendly method for image manipulation will serve the next cohort of third-year undergraduates well. Watch this space…

More recently, we have been trying to formalise, more broadly, student access to the telescope in both an academic and an extracurricular capacity. We'll be capitalising on the (optimistic) novelty of a telescope on a rooftop in central London through termly events, titled ‘A Night on the Roof’, where undergraduates will be able to attempt to find and focus on solar system and/or deep-sky objects according to visibility. This will hopefully be the first of many streamlined operations to allow wider access to the work being undertaken at the telescope. Later we may have a series of public lectures on astrophysics and related themes, which might feature a live feed from the telescope as an additional bonus.

We have been attempting to make the dome rotation smoother, and an additional monitor has now been mounted to the wall inside the dome so we can have the computer in the hut but still work in the dome (see below).

A few more pictures and the Sky at Night

Hello, we've been working to update the telescope a bit: Sunayana has been writing some image processing software to make the analysis of spectra faster for undergraduates, and we have been cleaning up the dome; more on that in a future post.

We've had an intermittent problem with the mount – sometimes it just doesn't work properly and there is a weird wobble.  Apart from that, the weather has been so-so; it's quite rare in London to get a night without cloud, and then in summer the clearer nights are not so perfect.  Anyway, here are some pictures for you:-

First, Saturn.  It's very low in the sky from up here in the UK, and in London that's bad, that's very bad.  We have the Chelsea Lamborghini exhaust haze to the south-east, where Saturn pops up.  And there is huge thermal turbulence from the heat of the city, which makes the density of the air vary along the path of the photons, changing its refractive index and turning the air into a constantly distorting irregular lens that makes images wobble.

Anyway, of course it's Saturn, so we have to try.  Here is what we got:-

Saturn from London, see text for excuses as to why this is a difficult planet for us.


Next we looked at M27, the Dumbbell Nebula, which is a planetary nebula.  What you are looking at is the expanding shell of gas from the star which used to live in the middle; it's the outer parts of that star that you can see.  The inner parts would have formed a white dwarf star, which is sitting somewhere in the middle.  That star will be so dense that a spoonful will literally weigh a ton, but it's very small, so while it will be very hot, it won't be very bright and it won't do much; it will just sit there and cool down.  Slowly.

M27 Dumbbell Nebula. Not very bright; I need to be more patient and take more exposures.


It's not very bright; I need to take more exposures and then stack.  I did it through a hydrogen-alpha filter as well, because nebulae emit loads of hydrogen-alpha photons.  I've coloured that bit in red.  We will definitely revisit this object and try to do a better job.

Finally, I'm going to show you a globular cluster.  Now, these objects look a bit boring, but they are super important for cosmology.  Globular clusters are spherical balls of stars, usually about a quarter of a million of them.  There are two kinds of globular clusters: old ones and new ones.  New ones form in galaxies in recent times; old ones formed a long time before any galaxies existed.  In fact, people think they are at least one component of the building blocks for galaxies.  They are the oldest remaining objects in the Universe today, and by looking at the stars in them we can tell how old they are.

Now, I'm not going to go into this in massive detail today because I don't have time, but if you look at how quickly the Universe is expanding today and make some reasonable assumptions about what it contains, you might calculate that it is only around 9 billion years old.  However, some of the old globular clusters are more than 12 billion years old, so this shows you that your assumptions are probably wrong.  Indeed, this is one of the reasons why we think dark energy exists: it makes the Universe older.

Anyway here is a picture from the roof of one of the very old ones.  M56.


lives to be 12 billion years old and contains hundreds of thousands of stars and we can’t think of a better name than M56

In other news, I have been interviewed for the Sky at Night television programme, which is very nice for me.  I used to watch this programme when I was young and trying to see things with my very bad Argos 60 mm refractor telescope.  I thought when I turned my back on becoming a pure astrophysicist in 1997 that my chances of getting on the programme were gone, but here we are; life is strange!  I'll be talking about dark matter at 2210 on Sunday 9th August on BBC Four.  I was interviewed by Chris Lintott and not Maggie Aderin-Pocock, although I did say hi to her.  The whole team were very nice and Chris was very friendly; he's very good with people – I suppose that's his job, but he does it very well.

I hope I don't end up looking stoopid; let's just wait and see…