How are the images created?

Imaging faint objects from deep space using a telescope is no easy task and requires a substantial investment in equipment and time. The images you see here are the result of tens of hours of data acquisition and processing, not to mention the time required to learn the trade and refine the hardware.

The “data acquisition” just mentioned refers to the process of taking many long-exposure photos (up to ~200 images, each up to 20 minutes long) of a deep space object (nebulas, galaxies, star clusters, etc.) using a number of filters (see below). All these individual images act as the raw data for the next stage of the process: creating the final image.
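To get a feel for how the hours add up, here's a quick back-of-the-envelope calculation in Python. The frame counts and exposure length are illustrative only, not figures from any particular image here:

```python
# Rough estimate of total integration time for a single target.
# Frame counts and exposure length below are illustrative only.
exposures_per_filter = {"Ha": 60, "OIII": 60, "SII": 60}  # number of frames per filter
exposure_minutes = 10                                      # length of each frame

total_minutes = sum(exposures_per_filter.values()) * exposure_minutes
print(f"Total integration time: {total_minutes / 60:.1f} hours")  # -> 30.0 hours
```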

The first step in data processing is to perfectly align every exposure using one of the many programs available to astrophotographers. To achieve such perfect alignment, the stars are used as a reference. The next step is called “stacking”: all the aligned images are layered on top of each other and an average is taken, with outliers like satellite trails and hot pixels excluded. Why do we do this? Because the average of many exposures contains far less noise, and therefore far more usable detail, than any single exposure. From here, the image is processed to reveal the details and color of the object, which can take many hours to do properly!
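As a concrete illustration, here is a minimal sketch of the stacking step in Python with NumPy, assuming the exposures have already been aligned and loaded as 2-D arrays. Real stacking software offers far more sophisticated rejection and weighting schemes; this just shows the core idea of averaging with outlier rejection:

```python
import numpy as np

def sigma_clipped_stack(frames, sigma=3.0):
    """Average pre-aligned frames, rejecting outlier pixels.

    frames : list of 2-D arrays of equal shape, already star-aligned.
    sigma  : pixels further than this many standard deviations from the
             per-pixel mean (satellite trails, hot pixels) are excluded.
    """
    cube = np.stack(frames, axis=0)          # shape (n_frames, height, width)
    mean = cube.mean(axis=0)
    std = cube.std(axis=0)
    outliers = np.abs(cube - mean) > sigma * std
    clipped = np.ma.masked_array(cube, mask=outliers)
    # Fall back to the plain mean wherever every frame was rejected.
    return clipped.mean(axis=0).filled(fill_value=mean)

# Demonstration with synthetic noisy frames of a flat "sky":
rng = np.random.default_rng(0)
frames = [rng.normal(100.0, 10.0, size=(256, 256)) for _ in range(50)]
stacked = sigma_clipped_stack(frames)
print(frames[0].std(), stacked.std())  # the stack is far less noisy than any single frame
```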

Narrowband Astrophotography

This form of astrophotography is all about imaging through a filter that passes only a tiny sliver of the visible light spectrum onto the camera. Why do we do this? Because deep space objects called emission nebulas emit light at very specific wavelengths (called spectral lines) determined by the elements found in the nebula. So by using a filter that passes only one of those wavelengths, you can really isolate the nebula from the surrounding star field and cut out light pollution significantly, which greatly improves contrast compared to the same object imaged using the whole visible light spectrum (see example below).

There are five narrowband filters commonly available to astrophotographers. The most relevant are hydrogen-alpha (656nm), oxygen-III (501nm) and sulfur-II (672nm), but hydrogen-beta (486nm) and nitrogen-II (658nm) are also used.

With narrowband astrophotography, the idea isn’t that you are capturing color per se, but luminosity at a specific wavelength. Afterwards, a color image (false or true) can be created by mapping the individual narrowband images to the RGB color channels (see color palettes), as sketched below.
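For example, here is a minimal sketch in Python with NumPy of how the popular “Hubble palette” (SHO) mapping can be assembled from three stacked narrowband frames. It assumes the frames are already aligned, stretched and scaled to the 0–1 range; real palettes involve plenty of extra color tuning on top of this:

```python
import numpy as np

def map_sho_to_rgb(sii, ha, oiii):
    """Classic "Hubble palette" mapping: SII -> red, Ha -> green, OIII -> blue.

    Each input is a 2-D array, already aligned, stretched and scaled to 0..1.
    Returns an (height, width, 3) RGB array.
    """
    return np.dstack([sii, ha, oiii])

def map_hoo_to_rgb(ha, oiii):
    """A more natural-looking bicolor palette: Ha -> red, OIII -> green and blue."""
    return np.dstack([ha, oiii, oiii])
```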

Below is an image of a faint emission nebula called RCW 75. The first image is a normal color image (RGB). While the nebula can be seen, it's lost in the dense star field! The second image is what happens when narrowband (hydrogen-alpha in this case) is incorporated into the normal color image. Much richer in detail and contrast!


Broadband Astrophotography

Broadband astrophotography means you are imaging through a filter that passes a large portion of the visible light spectrum onto the camera. This is the required mode for imaging deep space objects that emit light across the entire visible light spectrum, such as galaxies, reflection nebulas, or stars. If you are using a color camera, a filter generally isn’t needed as the camera sensor already has the required primary color filters (red, green and blue) built into it. But for monochrome cameras, separate red, green and blue filters are required to get a color image (see below). Additionally, monochrome astrophotographers will frequently image a deep space object through a colorless (luminance) filter. This becomes the “brightness” or “luminosity” layer that provides the contrast and detail, into which the RGB data is blended to give color.
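Here is a minimal sketch of that luminance-plus-color (LRGB) idea in Python, assuming scikit-image is available and that all frames are aligned and scaled to 0–1. Real workflows often do this blend in Lab color space or with more careful weighting; this simply swaps the brightness channel:

```python
from skimage.color import rgb2hsv, hsv2rgb  # scikit-image is assumed here

def lrgb_combine(luminance, rgb):
    """Blend a deep monochrome luminance frame with a color (RGB) image.

    luminance : 2-D array scaled to 0..1 (the detailed "L" exposure)
    rgb       : (height, width, 3) array scaled to 0..1 (supplies the color)
    """
    hsv = rgb2hsv(rgb)
    hsv[..., 2] = luminance    # replace the brightness (value) channel with L
    return hsv2rgb(hsv)
```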


Color vs Monochrome Camera

If a color camera is used, then a single picture captures all three primary color channels (red, green and blue, or RGB for short) using a repeating square array of four pixels (2 green, 1 red, 1 blue) that combine to make a color pixel (a Bayer filter array). It's the kind of photography that we're all accustomed to using on our devices. In astrophotography, this is called one shot color (OSC). While very efficient at capturing a color image with a single exposure, a lot of clever algorithmic processing (known as demosaicing or “debayering”) is required to reconstruct the color and details. This can be undesirable when capturing astronomical data but is seldom an issue for everyday use.
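To make the Bayer idea concrete, here is a minimal sketch in Python of the simplest possible way to pull the three color planes out of an RGGB mosaic. Real demosaicing algorithms interpolate back to full resolution far more cleverly, and the exact pattern (RGGB, BGGR, etc.) depends on the sensor:

```python
import numpy as np

def split_rggb(mosaic):
    """Split a raw RGGB Bayer mosaic into half-resolution R, G, B planes.

    mosaic : 2-D array straight off an OSC sensor (even width and height),
             laid out as  R G R G ...
                          G B G B ...
    """
    r = mosaic[0::2, 0::2]
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0  # average the two green sites
    b = mosaic[1::2, 1::2]
    return r, g, b

# Tiny demonstration on a made-up 4x4 readout:
raw = np.arange(16, dtype=float).reshape(4, 4)
r, g, b = split_rggb(raw)
print(r.shape, g.shape, b.shape)  # each plane is 2x2, half the mosaic resolution
```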

A monochrome camera, however, lacks the Bayer filter array of color cameras, which means all incoming light reaches the sensor regardless of its color. The total amount of light captured by each pixel is translated directly into a luminosity value (not a color) to give a monochrome image. Best of all, no algorithmic processing is required! Consequently, monochrome cameras can achieve a slightly higher resolution than color cameras. For narrowband photography, these properties offer a huge advantage, as no light is blocked by a Bayer filter array and resolution is maximized.

The main disadvantage is that if a color image (RGB) is desired, you must take separate images through red, green and blue filters and then combine them to make a color image.

I personally use a monochrome camera because most of my imaging time is spent on narrowband.