

Arc second per pixel.


adamsp123

Recommended Posts

Arcseconds per pixel: please could someone explain the importance of this figure?

For example, if I plug in the numbers for my Canon 1000D on my SNT I get 1.16 arcsec/pixel, and with my 120ED I get 1.54 arcsec/pixel.

So is one better than the other?

If I buy a mono cooled CCD, what should I be looking for in this parameter, if anything?

Cheers Pete

Link to comment
Share on other sites

Arcseconds per pixel depends on the focal length of your scope (longer == fewer arcsec/pixel) and the camera pixel size (smaller == fewer arcsec/pixel). To calculate it:

Pixel size (mm) * 206265 / Focal length (mm)
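As a sanity check, that formula is easy to put in a small function. A minimal sketch, with pixel size given in microns for convenience; the 5.7 µm pixel size assumed for the Canon 1000D and the 1016 mm SNT focal length (mentioned later in the thread) are illustrative values:

```python
def pixel_scale(pixel_um, focal_length_mm):
    """Image scale in arcsec/pixel from pixel size (microns) and focal length (mm)."""
    # 206265 arcseconds per radian; /1000 converts microns to mm
    return pixel_um / 1000 * 206265 / focal_length_mm

# Assumed 5.7 micron pixels (Canon 1000D) on a 1016 mm SNT:
print(round(pixel_scale(5.7, 1016), 2))  # -> 1.16
```

This reproduces the 1.16 arcsec/pixel figure quoted in the opening post.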

As the others say, there's a bit of a trade-off in deciding how many arcsec/pixel is optimal. It's not that one is better than the other; they're just different.

Often, you aim to have 2--3 pixels per resolution element (the size of the star images delivered by your telescope). At most sites in the UK, with a well set up telescope that is guiding well, this is going to be about 2--3 arcseconds (assuming your telescope is bigger than ~6 inches). So, a pixel scale of 1--1.5 arcsec/pixel is a good match to this. That is considered to be optimally sampling the image delivered by the telescope. However, as riklaunim says, smaller pixels lose you sensitivity *for extended objects like nebulae*. So, if you want to take nice pictures of nebulae and galaxies, you may want to go with a bigger pixel size and undersample the image delivered by the telescope.

If you're buying a mono CCD, you can usually bin up the pixels in hardware to give you physically bigger pixels on the chip (e.g. if your camera has 9 micron pixels, you can bin 2x2 or 3x3 and have effective 18 micron or 27 micron pixels). So, you can have the best of both worlds. In that case, you might want to choose a CCD which samples your image well at about 1"/pixel, but also gives you the facility to bin up 2x2 or 3x3 for 2"/pixel or 3"/pixel when imaging faint extended objects (bear in mind that won't give you a larger field of view though!).
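A minimal sketch of the binning arithmetic: binning just multiplies the effective pixel size, and hence the image scale. The 9 µm camera and the focal length here are hypothetical, chosen to land near 1"/pixel unbinned:

```python
def binned_scale(pixel_um, focal_length_mm, bin_factor=1):
    """Effective image scale (arcsec/pixel) after on-chip bin_factor x bin_factor binning."""
    effective_um = pixel_um * bin_factor  # binned pixels are physically larger
    return effective_um / 1000 * 206265 / focal_length_mm

# Hypothetical 9 micron camera at 1856 mm focal length:
for b in (1, 2, 3):
    print(b, round(binned_scale(9.0, 1856, b), 2))  # ~1.0, ~2.0, ~3.0 arcsec/pixel
```

So the same camera can cover the ~1"/pixel "well sampled" case and the 2-3"/pixel "faint extended object" case.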

Horses for courses really... depends what you're looking to image...

Link to comment
Share on other sites

However, as riklaunim says, smaller pixels lose you sensitivity

Not saying I disagree with this but surely the QE of the chip has a part to play in this too?

IMO, resolution is a factor, but weather conditions play more of a part. This season, when it's been clear, the skies have mainly been rubbish, so I've stuck to using shorter focal length scopes because there wouldn't be any advantage in using a longer one.

Tony..

Link to comment
Share on other sites

Ah! Thanks all, that has helped; good diagram as well, and now I understand binning on top :)

I am thinking of the Kodak 8300-chipped cameras, e.g. the Atik 383L+, used on anything from a WO 72ED to my 10" SNT (focal lengths 430 to 1016 mm).

So the information you all gave is very useful

Cheers Pete

Link to comment
Share on other sites

Not saying I disagree with this but surely the QE of the chip has a part to play in this too?

Yes it does. But assuming the QE is the same, you'll lose sensitivity with smaller pixels -- for extended objects.

The reason is that noise basically goes as the square root of the signal. So if you double the signal, you only get 1.4x the noise, and your signal-to-noise improves. (There is a secondary effect in that each pixel has a fixed amount of noise (read noise + dark current) but a variable amount of signal from the source.) So if you increase the area of the pixel, the signal goes up and your signal-to-noise improves. The equation for signal-to-noise is:

S:N = Signal / SQRT ( Signal + Sky + DarkCurrent + ReadNoise^2)

So, considering an extended object with a theoretical detector: let's say we get 10 counts per pixel from the object and 10 counts per pixel from the sky. Dark current is 1 count per pixel, and read noise is 1 count per pixel.

S:N = 10 / SQRT (10 + 10 + 1 + 1^2) = 10 / SQRT (22) = 2.1

Now we bin the pixels 2x2 to give us a bigger pixel. Now we get 40 counts from the object (pixel area is 4x bigger) and 40 counts from the sky, but still only 1 from dark current** and 1 from readnoise. So now we have;

S:N = 40 / SQRT (40 + 40 + 1 + 1^2) = 40/SQRT(82) = 4.4
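The two worked examples fall straight out of the equation; a minimal sketch:

```python
import math

def snr(signal, sky, dark, read_noise):
    """S:N = Signal / sqrt(Signal + Sky + DarkCurrent + ReadNoise^2), all in counts."""
    return signal / math.sqrt(signal + sky + dark + read_noise**2)

print(round(snr(10, 10, 1, 1), 1))  # unbinned pixel -> 2.1
print(round(snr(40, 40, 1, 1), 1))  # 2x2 binned pixel -> 4.4
```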

Higher signal-to-noise :)

The above also shows why bigger pixels are only more sensitive for extended sources (like galaxies/nebulae). For a star, if you sample the image at anything less than the resolution (so if your telescope produces 3" star images and you use 4" pixels, for example), increasing the pixel size will include more sky light (and hence sky noise) but not include any more source flux, so your signal-to-noise will go down. So, if you have a bright sky but good image quality, you should really go for longer focal lengths and smaller pixels -- at least if you want to image things with stars in, like globular clusters.
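To illustrate the point-source case with the same kind of illustrative numbers: give the star a fixed 40 counts (it all lands in the pixel either way), and let only the sky scale with pixel area when binning 2x2:

```python
import math

def snr(signal, sky, dark, read_noise):
    """S:N = Signal / sqrt(Signal + Sky + DarkCurrent + ReadNoise^2), all in counts."""
    return signal / math.sqrt(signal + sky + dark + read_noise**2)

star = 40          # counts from the star -- fixed, independent of pixel size
sky_per_pixel = 10

# Small pixel that already contains the whole star image:
print(round(snr(star, sky_per_pixel, 1, 1), 2))      # higher S:N
# 2x2 binned pixel: 4x the sky counts, but no extra star light:
print(round(snr(star, 4 * sky_per_pixel, 1, 1), 2))  # lower S:N
```

The binned pixel collects four times the sky noise for the same star signal, so its S:N comes out lower: the opposite of the extended-object case above.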

Sorry if that is all a bit technical -- thought it might be interesting to have some background on the signal-to-noise drivers. Summary: you won't go far wrong with ~1-2"/pixel, especially if you have a CCD camera which you can bin up to get bigger pixels when you need them.

** technically that's probably not true, as dark current scales with physical pixel size; but usually dark current is negligible as a noise source

Link to comment
Share on other sites

Kodaks usually have big pixels and microlensing, which aids QE. But they aren't cheap. The Atik 383L+ will be a very good camera.

For comparison, there are some entry-level cameras with super-small 3.45 µm pixels, like the Opticstar DS-335C ICE, which can be used with very short refractors or lenses (Omega Centauri by John Haunton: DS-335C ICE, WO 66 mm refractor, 6 stacked frames x 120 seconds).

Also, Opticstar gives some QE values: Sony ICX205AL (as in the Atik 314E, small pixels), 42% QE max; Sony ICX285AL (as in the Atik 314L+, bigger pixels), 62% QE max.

Link to comment
Share on other sites

Like Pete, I'm looking at getting a mono CCD camera. I was initially looking at possibly the Atik 4000, but the 383L+ has now arrived on the scene. The 4000 has a larger pixel size (7.4 µm vs 5.4 µm), but would this be realistically noticeable, based on the info given in this thread, with typical UK skies? The price difference is certainly noticeable though, so I'm wondering if the 4000 is now going to be effectively obsolete?

Thanks, Andy

Link to comment
Share on other sites

There are lots of other things to consider on top of pixel size: chip size, QE, dark current, read noise, etc. The Atik 4000 has a much larger chip than the 314L, for example (2048x2048 vs 1400x1000 pixels).

Oops, realise you said 383 not 314... sorry...

Link to comment
Share on other sites

Thank you, Tea Dwarf, for the technical lesson. I was aware of what you're talking about, but from experience I don't think resolution is a huge factor when choosing a camera, unless you're doing something very specific, which most of us aren't.

I've taken several images with my Atik and ZS66, which give a resolution of 3.41"/pixel, and still get plenty of detail in nebulae. Get a good camera with a sensor that has smallish pixels and good QE and you could use it at a variety of focal lengths.

Tony..

Link to comment
Share on other sites

  • 2 years later...

Hi all!

I'm programming an attitude estimation and control algorithm for an Earth observation satellite. The end result is that, for given mass properties, control capabilities, orbit and so on, I get a pointing accuracy of, let's say, 1 arcsec.

I would like to know how to translate that arcsecond into some useful parameters, like: "Then, if I have a FOV of 25x25º, I can take a 5-Mpixel picture in which I can see a human hair." Something like that: translating that accuracy into picture properties. I don't know anything about it, and it's hard to find information when space technology is involved.

For example, the Hubble Space Telescope is capable of a pointing precision of 7×10^-3 arcsec (correct me if I'm wrong, please). How does that translate into its imaging capabilities, all other properties being constant?

Thanks a million in advance!

Excuse my English BTW :grin:

Link to comment
Share on other sites

I would like to know how to translate that arcsecond into some useful parameters, like "Then, if I have a FOV of 25x25º, I can take a 5Mpixel picture in which I can see a human hair". Sort of that. Translating that accuracy into picture properties. I don't know anything about it, and it's hard to find information when space technology is involved.

For example, the Hubble Space Telescope is capable of a pointing precision of 7×10^-3 arcsec (correct me if I'm wrong, please). How does that translate into its imaging capabilities, all other properties being constant?

Isn't the pointing precision just an expression of how close you guarantee to be to a target when you want an image of it? So if you wanted the HST to have its optical axis lined up on a specific distant galaxy, for instance, you could be sure that it would in fact be within 0.007 arcseconds of the galaxy?

James

Link to comment
Share on other sites

Well, if that really is its pointing accuracy, that's the point :p. It will point somewhere within a solid semi-angle of 0.007 arcsec of the desired target.

In my particular case, if my attitude reference profile says I have to point at a specific star, my optical axis will wander slightly from that star, with a maximum error angle of 1 arcsec (because of perturbations, sensor noise, actuator accuracy, inertia...). But I do know exactly where it is pointing, and I know how fast it's wandering. Still, I can't relate it to the real thing: how would that pointing error / pointing error rate impact my imaging capabilities? That's exactly what I'd like to know. Sensitivity issues? Aperture stuff? Don't know.

Link to comment
Share on other sites

Since I can't find the classic edit button, double post :huh:

I thought that, as a rule of thumb, my pointing accuracy should be smaller than the arcsec/pixel. I.e., if my APE (attitude pointing error) is 1 arcsec, one pixel in my picture should span more than 1 arcsec. But I have no proof, evidence or knowledge to support writing that in my diploma thesis (which is about this).

Also, I think that the "wander speed", i.e. how fast the optical axis moves around the desired target, should be in the equation. But again, no idea.

Link to comment
Share on other sites

There's probably a lot more complexity here. For example, one might consider the "wander" of the optical axis to be similar to the random perturbation that ground-based imagers experience due to atmospheric turbulence, which results in a blurred image. However, it's entirely possible to apply mathematical transforms to the data to recover details that would otherwise not be visible.

Depending on how the images are being processed it may also be that a small amount of perturbation (caused by the "wander") actually improves the image quality by making it easier to remove electronic noise in the camera sensor from the "real" image data.

There's a large amount of signal processing theory that might apply to the situation you're considering. The only source I know it from is an expensive book specifically targeted at astro-imaging called "The Handbook of Astronomical Image Processing", but I don't think it's anything particularly novel. I assume from your comments that you're in an academic environment. You may need to go and do some research on image processing.

James

Link to comment
Share on other sites

I'm thinking of simpler stuff. Let's say my satellite camera has:

- an exposure time: T

- a distance to the target: h

- an angular error rate while pointing: omega

During the exposure time, the pointing error angle will be:

alpha = omega * T

The distance covered in that time over the Earth's surface would then be (small-angle approximation, with alpha in radians):

L = alpha * h = omega * T * h

So, my guess is that L should be smaller than the length covered by one of my picture pixels, no matter how many of them there are.

Like some sort of blur criteria. Does this sound familiar?
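That criterion is easy to put in numbers. A minimal sketch, using the small-angle relation L = alpha * h (alpha in radians); the drift rate, exposure time, altitude and ground sample distance below are all hypothetical values:

```python
def ground_smear_m(omega_arcsec_per_s, exposure_s, altitude_m):
    """Linear smear over the ground caused by pointing drift during one exposure."""
    alpha_rad = omega_arcsec_per_s * exposure_s / 206265.0  # arcsec -> radians
    return alpha_rad * altitude_m  # small-angle: L = alpha * h

# Hypothetical: 1 arcsec/s drift, 10 ms exposure, 500 km altitude
smear = ground_smear_m(1.0, 0.01, 500e3)
gsd = 1.0  # hypothetical ground sample distance, metres per pixel
print(f"smear = {smear:.3f} m, below one pixel: {smear < gsd}")
```

On these numbers the smear (a few centimetres) is well below a 1 m pixel, so by this criterion the drift would not blur the image.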

Link to comment
Share on other sites

Like Pete, I'm looking at getting a mono CCD camera. I was initially looking at possibly the Atik 4000, but the 383L+ has now arrived on the scene. The 4000 has a larger pixel size (7.4 µm vs 5.4 µm), but would this be realistically noticeable, based on the info given in this thread, with typical UK skies? The price difference is certainly noticeable though, so I'm wondering if the 4000 is now going to be effectively obsolete?

Thanks, Andy

Good heavens, what a scandalous idea!! :grin:

The 8300 chip is slow and has the hassle of a shutter (flats need care, and there's one more thing to go wrong). Don't consider a colour 8300 camera for one second; they are grindingly insensitive, whereas the 4000 colour is pretty good. In my view the chip in the 4000s is way more attractive than the 8300, but the price difference is excessive, I agree.

I feel that some people attribute more importance to sampling rate than it merits. I use the 7-point-something micron Kodak chip at focal lengths varying from a minuscule 328 mm to a more intimidating 2.4 metres.

Examples, respectively, below;

[image: M42 wide field, two focal lengths]

[image: M101, new core, cropped]

In fact, since I can't go around buying new cameras for every new focal length I fancy, I've used the same pixels at just 85 mm in a camera lens, and as long as you don't expect to present the result full size it can still be productive...

[image: Orion, 85 mm lens, Ha + OSC, 6-panel mosaic]

So, arcseconds per pixel: worth coming to blows over? :grin: Hardly, in my opinion. Besides, the reality is that, like most of us, you'll end up with a choice of focal lengths before you end up with a choice of cameras, given the cost of each. In the real world, not having enough signal is more likely to limit your final resolution than being undersampled.

Olly

PS These opinions relate to DS and not planetary imaging. I know nothing about the latter.

Link to comment
Share on other sites
