The Naked Scientists

The Naked Scientists Forum

Author Topic: Would an array of digital camera sensors be cheaper than a large telescope?  (Read 2275 times)

Wes Bullock

  • Guest
Wes Bullock asked the Naked Scientists:
   
Instead of making ever-larger and more expensive reflectors for telescopes to see further into space, with all the complexity that entails, would it be possible to assemble a massive array of cells of the type used in digital cameras with some form of polarizing medium above them (to take in only the light coming from a distant point and not that which is being scattered in from odd angles)?

With such an array, the key criteria would be flatness instead of some complex parabolic shape and I would think it would make manufacturing the device a lot easier.

The polarization might be done with nanotubes oriented vertically or with a very long tube with black inner surfaces with the array of photo sensors in the bottom.

I'm probably missing some obvious things about optics with this idea but have wondered for a while why such an idea wouldn't work.

Thanks!

PS I really appreciate the fact that in every case I can remember, your questions about and descriptions of "what's happening" in a particular subject stand up to my own training in physics and engineering.  So often in the media, hosts are so scientifically illiterate that their questions and comments are off the mark.  Yours aren't.

Wes Bullock

What do you think?
« Last Edit: 20/10/2010 11:30:13 by _system »


 

Offline Soul Surfer

A very interesting question. 

Firstly, there is a minor error in it: you talk of a polariser when you mean a collimator.  A polariser filters out light vibrating in one plane; a collimator selects light or radiation coming from one direction.
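To put a rough number on the tube idea: here is a minimal geometric sketch (assuming an idealised black tube and ignoring diffraction entirely; the function name is just for illustration) of how long such a collimating tube would need to be, relative to its diameter, to accept light only within a given half-angle:

```python
import math

def tube_ratio_for_halfangle(arcsec):
    """Length-to-diameter ratio of a black tube that only passes light
    arriving within the given half-angle (simple geometry, no diffraction)."""
    theta = math.radians(arcsec / 3600)  # arcseconds -> radians
    return 1 / math.tan(theta)

for a in (3600, 60, 1):  # 1 degree, 1 arcminute, 1 arcsecond
    print(f"{a:5d} arcsec half-angle -> length/diameter ~ {tube_ratio_for_halfangle(a):,.0f}")
```

By this estimate, restricting each cell to a one-arcsecond patch of sky would need a tube some two hundred thousand times longer than it is wide, which hints at why lenses and mirrors do the job instead.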

Funnily enough, there is a radio telescope being built using a similar concept, by coupling together arrays of simple telescopes all over the world.  Coupling together optical sensors demands much more precision; multiple-sensor telescopes are being built and used, but they still use relatively large mirrors to concentrate the light.

I think that a medium-sized mirror will always be needed, for the same reason that a camera needs a lens, but I can see the potential of arrays of small telescopes once the problems of coupling them together have been solved.  These are the sort of sensors that will be able to get maximum information from extrasolar planets. 
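The reason a mirror of some size is unavoidable is diffraction: the smallest angle an aperture can resolve scales inversely with its diameter.  A minimal sketch using the Rayleigh criterion (the aperture values are illustrative; the function name is made up for this example):

```python
import math

def diffraction_limit_arcsec(aperture_m, wavelength_m=550e-9):
    """Rayleigh criterion: smallest resolvable angle, in arcseconds,
    for a circular aperture at the given wavelength (green light by default)."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600  # radians -> arcseconds

# roughly: phone camera lens, amateur telescope, Hubble, Keck
for d in (0.005, 0.2, 2.4, 10.0):
    print(f"{d:6.3f} m aperture -> ~{diffraction_limit_arcsec(d):8.3f} arcsec")
```

A millimetre-scale phone lens is limited to tens of arcseconds no matter how good its sensor is, while a 2.4 m mirror resolves better than a tenth of an arcsecond.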
 

Offline imatfaal

Hi Wes,

I was also thinking along these lines when I heard the news about the square kilometre array on various podcasts and programs.  I must admit my idea was different in that I was wondering if we could merge the cloud computing concept with astronomical observing - not just the processing of astronomical results. 

This is a bit off the wall - and I am hoping that Soul Surfer can let me down gently - but with fairly high-quality optics now standard on mobile phones, could we use networks of mobile phone images of the same part of the sky to see things that one phone (or even one decent telescope) never could?  If every iPhone in the UK took a series of ten photos of a certain part of the sky and sent them in - along with the GPS co-ordinates of the viewing location (which the phone can provide) - would a sufficiently powerful computer be able to put all the information together and come up with a sum greater than its parts? 

I ask because it occurs to me that if you wanted a region of sky photographed at very short notice, you could call on the services of many iPhone users, for the following reasons: a standard, not-bad camera; built-in GPS and compass; easy notification by SMS on the phone itself; and a group of real tech-loving users.  So could 50,000 photos of the same patch of sky (with details of position, time, etc.) be of any more use than one?

Personally, I would sign up, and would love to receive an SMS along the lines of: "Please photograph the area of sky at x degrees and y degrees inclination as soon as possible and forward photos - reasons for request on website tomorrow. Thnx"
 

Offline Geezer

Hi Wes,

Great question! Here's a Wiki link that you might find interesting. I'd try to explain some of it myself, but I think I'd be a dismal failure  :D
 
http://en.wikipedia.org/wiki/Astronomical_interferometer
 

Offline Soul Surfer

Stacking multiple digital photographs is a fundamental part of the fantastic results amateur astronomers get with modest-sized telescopes, but those images are taken with the same telescope at around the same time and then combined.  In theory this could be done with a simple camera, but there are two reasons why the results would probably not be worth it.  

Firstly, the resolution and sensitivity of a camera are a function of the focal length and size of the lens, and the tiny, fairly wide-angle lenses of mobile phone cameras probably do not have good enough resolution to be very useful.  They also probably do not have enough sensitivity to show many bright stars, which are an essential part of the image-stacking process.

Secondly, the control and processing of the camera is not good enough.  The cameras have automatic exposure, and in the dim conditions typical of astronomy they use long exposures, which risks camera shake - a disaster if you wish to stack images, as these must be pin sharp.  Most digital cameras process the sensor data to produce JPEG images.  This is a compressed data format which does not handle single-pixel (or few-pixel) images of stars very well; it is designed for the normal sort of blocky and textured images we see.  To stack the images it is important to use the raw, uncompressed data from the sensor, and only specialist cameras are equipped to do this.
 

Offline imatfaal

ah well - back to the drawing board. 
1. resolution
2. sensitivity
3. camera shake
4. pre-processing
5. output image

Damn - five hurdles.  Will think - but pretty sure this is a lost cause.
 
