May, 2010
Craig Stark
Table of Contents

Introduction
Acknowledgements
Features
Main Screen
    Image Window
    Display Panel
    Capture Panel
        Camera Section
        Exposure Section
        Capture Section
    Status Bar
    Customization of the Interface
    Notes and History
    Camera-specific Dialogs
    External Filter Wheel
    Link to PHD Guiding
Capturing Images
    Monochrome vs. Color
    One-Shot Color: RAW vs. RGB?
    File Formats
    Camera Gain and Offset
        What do gain and offset do?
        Gain's downside: Bit depth and dynamic range
        How do manufacturers determine gain and offset for cameras that don't allow the user to adjust them?
        How should I set my gain and offset to set it and forget it?
        Automatic Offset
Overview of Image Processing
    Deciding on Bad Pixel Mapping vs. Dark Subtraction
    Preparing and applying the darks, flats, and biases
    Converting RAW images to Color and/or Pixel Squaring (aka Reconstruction)
    Normalize Images (optional)
    Grading and Removing Frames (optional)
    Stacking: Align and Combine
    Crop off the edges
    Remove the Skyglow Color
    Stretching
All text and images Copyright Craig Stark, Stark Labs 2005-2010
Last updated May 15, 2010
1 Introduction
Welcome to Nebulosity. Nebulosity is designed to be a powerful but simple-to-use capture and
processing application for your CCD camera. Its goal is to suit people ranging from the novice
imager who wants to create his or her first images to the advanced imager who wants a
convenient, flexible capture application for use in the field. As such, an emphasis has been
placed on easy access to commonly-used camera controls, as nobody wants to navigate through
many menus in order to simply capture a series of images.
An emphasis has also been placed on compatibility with other applications. For many imagers,
the tools provided here will be well-suited to produce images that are ready to be touched up in a
graphics editing package (e.g. Adobe Photoshop or the freeware application GNU's GIMP). The
tools provided are the tools most of us want and need to make great images. For more advanced
imagers who already use more sophisticated astronomical image manipulation software,
Nebulosity might serve as a suitable capture application and provide a few processing tools.
Nebulosity supports a wide range of output formats, including various FITS formats and other
16-bit per color formats, so that your images can be easily imported into whatever software you
use.
What Nebulosity is not designed to do is to be an all-inclusive, general-purpose, full-powered
astronomical imaging and analysis package. There are several of these on the market already and
all are fine packages. All are very large, place more substantial demands on your computer, and,
by virtue of being large and all-inclusive, do not typically present a simple, clean, interface for
basic image capture control. The author of Nebulosity routinely stands in cold, dark fields with a
laptop and a camera taking pictures. Under these situations, when gloves must be removed to
operate the computer, simple, dedicated user interfaces are exceptionally welcome.
That said, the author is also a stickler for power and accuracy. You get quite a few "serious" tools
in Nebulosity. The ones you get are purpose-built - tools that you will want for processing raw
DSO images into beautiful pictures.
2 Acknowledgements
The author would like to extend his heartfelt thanks to several individuals who have helped in the
creation of Nebulosity. In particular, I would like to thank Michael Garvin, William Behrens,
Tom Van den Eede, Sean Prange, Rob Sayer, Dave Schmenck, and Ray Stann for all their help. I
would also like to acknowledge the fine wxWidgets cross-platform GUI library used extensively
here. Without it, I would not have written Nebulosity. I would like to acknowledge use of the
FreeImage, LibRAW-Lite, and CFITSIO libraries for image input and output. In addition, note
that the noise reduction software GREYCstoration and the automatic alignment software ANTS,
are included as binary applications bundled with and called from Nebulosity.
3 Features
o Digital Development Processing (DDP): makes images look more like film images by using a hyperbolic scaling of the data. Here, the basic technique is enhanced to allow easy darkening of the background at the same time.
o Star Tightening: a technique to sharpen stars using an edge-detection algorithm (does not leave the artifacts found in "unsharp mask" techniques).
o Unsharp Mask tool for image sharpening.
o Traditional and Laplacian image sharpening.
o Grade a series of images to determine the sharpest / best of the set.
o Versatile Image Preview / Rename tool to quickly sift through large sets of images.
o Align a series of images using simple translation (for equatorially mounted telescopes).
o Align a series of images using sub-pixel level accuracy and translation + rotation and (optional) scaling (equatorial or alt-az telescopes).
o Drizzle alignment and resolution enhancement for either equatorial (translation only) or alt-az (translation + rotation).
o Colors in Motion: simultaneous over-sampling alignment and De-Bayer of one-shot color images, significantly increasing resolution and decreasing color error.
o Average a series of images without alignment (e.g., for combining darks, flats, bias frames, etc.).
o Standard-deviation based stacking (aka sigma clip) of aligned frames to reduce noise in the final stack.
o LRGB color synthesis (RGB, traditional LRGB, and color-ratio LRGB).
o Line filter reconstruction for one-shot cameras: optimized reconstruction of RAW images taken using line filters. General mode plus modes optimized for H-alpha and O-III/H-beta on CMYG arrays.
o Adaptive scaling of combined data (stacks) to use the full 16-bit range (gives you the best features of adding and averaging frames).
o Image normalization and Histogram matching to balance intensity across images.
o Pixel math tool to allow scaling / shifting the image intensities.
o Color balance adjust (offset and scaling) with real-time 3-color histograms for easy, accurate balancing. Luminance extraction provided as well.
o Auto color balance.
o De-mosaic a RAW one-shot color image using a very high quality debayer routine (VNG). Both interactive and batch-mode supported. Pixels become square in the process if native pixels were not square.
o White balance on Canon DSLR settings for both stock and extended-IR cameras.
o Square pixels for images from B&W cameras.
o 2x2 binning of images: addition, averaging, adaptive, and low-noise 2x2 for one-shot color sensors.
o Gaussian blurring tool.
o Vertical smoothing / deinterlacing.
o Adaptive median noise reduction.
4 Main Screen
When you open Nebulosity, you are presented with a screen that looks like this (Windows
version is similar):
should likely use a shorter exposure or less gain. Are you cutting off hard on the left edge? If so,
use more gain, more offset, or a greater exposure duration.
Finally, the panel has the Zoom button (marked "100%" by default). Repeated clicks on the
Zoom button will cycle through several zoom modes (20%, 25%, 33%, 50%, 100%, 200%, &
400%) to get a better view of your image. Next to this, you'll see + and - buttons that let you
zoom in and out respectively. Note again, this only affects how you see your image, it does not
change the underlying image itself.
Tip: You can use Ctrl + and Ctrl - (or Cmd + and Cmd -) to zoom in and out.
For a more detailed inspection of your image, try activating the Pixel Stats pop-up window
(under the Image menu).
Duration: How long per image, in seconds? (Fractions like 1.5 are allowed.)
Gain (optional): How much CCD amplifier gain should be used during A/D conversion?
(Think of gain as a volume knob for the signal coming off the CCD). Numbers range
from 0-63.
Offset (optional): What offset should be added to the signal during A/D conversion?
(The offset adds signal into every pixel to help you keep the pixels from having zero
values anywhere). Numbers range from 0-255. (See the Automatic Offset section.)
# Exposures: How many images do you want to take?
Time lapse: How much time (seconds) should be inserted between each image?
Most of these are fairly self-explanatory, but Gain and Offset deserve a bit of attention. They get
this in the Section Taking Good Images. For now, you can leave them at their default values.
The Duration and Time Lapse entries allow you to specify the exposure duration in seconds,
but fractions are allowed. So, if you want an exposure of a half a second, simply enter 0.5.
Remember that a millisecond is a thousandth of a second (0.001). In addition to allowing you to
enter the time directly, the Duration control lets you pull down any of a number of common
times. The word Duration is actually a button. Click and hold on it and a list of common
times will appear that you can quickly select without having to type numbers in while in the
dark.
value in that pixel. Therefore, the maximum value recorded in the area should reach its peak
when the image is in focus. The Max reading and the red line in the graph in the lower left
show the current value and history of this value.
The second metric calculated is the Half Flux Radius or
HFR. This is a metric devised by Larry Weber and used in
his popular Focus Max plug-in for several packages. This is
an excellent metric and is quite possibly the most robust
metric we have. To compute it, the best star in the small region is
first located and its center found. The total star flux is then
measured, and the radius of a circle around the star's center that
would contain half the total flux is calculated. This is the HFR.
In the lower left the history of values for both the Max and
the HFR are plotted. The most recent 100 samples are plotted
so you can watch how the focus quality changes as you
adjust your telescope's focus knob. This graph will auto-scale
itself if the range is too large or too small for the display.
Finally, also shown on here are the best values achieved
during this Fine Focus run for both measures (horizontal dotted lines).
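The HFR calculation described above can be sketched in a few lines. This is only a conceptual illustration, not Nebulosity's actual code: `half_flux_radius` is a hypothetical helper that assumes a background-subtracted star cutout and a known centroid, sorts pixels by distance from that centroid, and finds the radius enclosing half of the total flux.

```python
import numpy as np

def half_flux_radius(star, cx, cy):
    """Estimate the Half Flux Radius (HFR) of a background-subtracted
    star image `star` whose centroid is at (cx, cy).
    Hypothetical sketch, not Nebulosity's implementation."""
    ys, xs = np.indices(star.shape)
    r = np.hypot(xs - cx, ys - cy).ravel()
    flux = star.ravel().astype(float)
    order = np.argsort(r)                 # pixels from center outward
    cumflux = np.cumsum(flux[order])      # enclosed flux vs. radius
    # first radius at which the enclosed flux reaches half the total
    idx = np.searchsorted(cumflux, cumflux[-1] / 2.0)
    return r[order][idx]
```

A sharper star concentrates its flux nearer the center, so its HFR is smaller; that is exactly why the metric falls as you approach best focus.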
window. For example, the dialog that controls the link to PHD Guiding is seen here floating
above the main Nebulosity window.
5 Capturing Images
Most of what you need to know to capture images was covered in the previous section on the
Exposure section of the Capture Panel. There are a few topics worth considering on their own,
however.
1. Monochrome vs. Color?
2. One-shot color: RAW vs. RGB?
3. File formats
4. Camera Gain and Offsets
go from a black background to a star), the hue (or color) changes much more gradually. Thus, we
can "get away" with having less color resolution than we have intensity resolution.
It is for this very reason that even when using monochrome CCDs, imagers often shoot a
luminance channel at full resolution and color channels at lower resolutions (by "binning" their
CCDs to increase the signal to noise ratio but decrease the resolution). Thus, low color resolution
but high intensity resolution is often chosen by monochrome CCD imagers, narrowing the
potential difference between the quality of the output between the two CCD types.
internally. Data files will be twice as large and, in truth, will likely show little more than the
default of saving in 16-bit integers.
Finally, you can choose to rescale your data to 15-bits rather than the full 16-bits possible. Thus,
your data will be scaled into the range of 0-32767 rather than 0-65535. This is an option to
support several programs.
Suggested settings if you plan to use other applications as well:
o AstroArt
o ImagesPlus
o Iris
o Maxim DL
Note: Select Save Settings in the Preferences menu and these will become the defaults.
FITS is used as a standard not only because it is so common in the astronomical community, but
also because it allows for arbitrary information to be stored along with the image. So, Nebulosity
stores information such as the time the image was captured, what camera was used, what
exposure duration, gain and offset were used, etc. along with the image.
That said, many graphics programs do not support reading of FITS images. Here, you have two
options. First, you can save an image as displayed (i.e., taking into account the B and W slider
positions) in 24-bit BMP or JPEG format. If you do this, try to do most of your processing
beforehand as this format will allow for only 8-bits of information for each color channel. Subtle
gradations will be lost when you do this (but remember, your monitor will only display 8-bits per
color anyway).
Second, you can save in 16-bit/color (aka 48-bit color) TIFF or PNG format. Both compressed
(LZW) and uncompressed TIFF formats are supported (PNG format is always compressed).
These options all provide ways of saving your data without any loss or degradation for use in
other programs. These also are excellent ways to get color images into programs like Iris v5.
Finally, you can load both 8-bit/color (24-bit) and 16-bit/color (48-bit) images from a number of
formats. 8-bit JPEG, BMP, TIFF, PNG, and TGA files can be loaded and will be automatically
stretched to 16-bits/color. 16-bit TIFF and PNG can be loaded as well.
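The two rescalings mentioned above, 8-bit data stretched to 16 bits on load and 16-bit data optionally rescaled to 15 bits (0-32767) on save, amount to simple linear maps. The helpers below are an illustrative sketch, not Nebulosity's code:

```python
import numpy as np

def stretch_8_to_16(img8):
    # Map 0-255 onto 0-65535; 255 * 257 == 65535, so the endpoints line up.
    return img8.astype(np.uint16) * 257

def rescale_16_to_15(img16):
    # Map 0-65535 onto 0-32767 for programs that expect 15-bit data.
    return (img16.astype(np.uint32) * 32767 // 65535).astype(np.uint16)
```

Both maps are lossless in the sense that no pixel values collide going up in bit depth, while the 16-to-15-bit step halves the number of distinct levels.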
and offset do. If you want to skip the theory, jump ahead to How should I set my gain and offset
to set it and forget it?
5.4.3 How do manufacturers determine gain and offset for cameras that don't allow the user to adjust them?
Let's pretend we're making a real-world camera now and put in some real numbers and see how
these play out. Let's look at a Kodak KAI-2020 sensor, for example. The chip has a well-depth
specified at 45k e-. So, if we want to stick 45,000 intensity values into a range of 0-65,535, one
easy way to do it is to set the gain at 45,000 / 65535 or at 0.69 e-/ADU. Guess what the SBIG
ST-2000 (which uses this chip) has the gain fixed at... 0.6 e-/ADU. How about the QSI 520ci?
0.8 e-/ADU. As 45k e- is a target value with actual chips varying a bit, the two makers have
chosen to set things up a bit differently to deal with this variation (SBIG's will clip the top end
off as it's going non-linear a bit more readily), but both are in the same range and both fix the
value.
Why? There's no real point in letting users adjust this. Let's say we let users control the gain and
they set it to 5 e-/ADU. Well, with 45k e- for a maximum electron count at 5 e-/ADU, we end up
with a max of 9,000 ADU and we induce strong quantization error. 10, 11, 12, 13, and 14 e- would all become the same value of 2 ADU in the image, losing the detail you so desperately
want. What if the user set it the other way to 0.1 e-/ADU? Well, you'd turn those electron counts
into 100, 110, 120, 130, and 140 ADU and wonder just what's the point of skipping 10 ADU per
electron. You'd also make 6553 e- be the effective full-well capacity of the chip. So, 6535:1
would be the maximum dynamic range rather than 45000:1. Oops. That nice detail in the core of
the galaxy will have been blown out and saturated. You could have kept it preserved and not lost
a darn thing (since each electron counts for > 1 ADU) if you'd left the gain at ~0.7 e-/ADU.
What about offset? Well, it's easy enough to figure out the minimum value a chip is going to
produce and add enough offset in the ADC process to keep it such that this is never going to hit
0.
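The gain arithmetic above can be played out with the same numbers. `to_adu` is a hypothetical helper standing in for an ADC, not any real camera API; the well depth and ADC range are the KAI-2020 figures quoted in the text:

```python
FULL_WELL = 45_000    # e-, KAI-2020 well depth from the discussion above
ADC_MAX   = 65_535    # 16-bit converter

def to_adu(electrons, gain_e_per_adu):
    # The ADC truncates to an integer count and clips at its maximum.
    return min(int(electrons / gain_e_per_adu), ADC_MAX)

# Gain matched to the well depth (~0.69 e-/ADU): the full range is used.
matched = to_adu(FULL_WELL, FULL_WELL / ADC_MAX)

# Too-coarse gain (5 e-/ADU): 10-14 e- all collapse into the same 2 ADU.
coarse = [to_adu(e, 5) for e in (10, 11, 12, 13, 14)]

# Too-fine gain (0.1 e-/ADU): the ADC clips near 6,553 e-, so most of
# the sensor's dynamic range is thrown away.
fine = to_adu(FULL_WELL, 0.1)
```

The three cases reproduce the text's argument: matched gain fills the 16-bit range, coarse gain quantizes away detail, and fine gain saturates long before the well is full.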
5.4.4 How should I set my gain and offset to set it and forget it?
The best value for your camera may not be the best value for other cameras. In particular,
different makers set things up differently. For example, on a Meade DSI III that I recently tested,
running the gain full-out at 100% let it just hit full-well at 65,535 ADU. Running below 100%,
it hit full-well at 40,000, 30,000, or 10,000 ADU. There's no point in running this camera
at anything less than 100% gain. On a CCD Labs Q8-HR I have, even at gains of 0 and 1 (on its
0-63 scale), the camera would hit 65535 on bright objects (like the ceiling above my desk).
There's no point in running this camera at gains higher than 0 or 1.
Why is there no point? The camera only holds 25k e-. If a gain of 0 or 1 gets me to 0.38 e-/ADU
(so that those 25k e- become 65535), running at 0.1 e-/ADU will only serve to limit my dynamic
range. Each single electron already comes out to more than 2 ADU.
So, to determine the gain and offset to use:
1) Take a bias frame and look for the minimum value in it. Is it at least, say 100 and less than a
thousand or a few thousand? If so, your offset is fine. If it's too low, boost the offset. If it's high,
drop it. Repeat until you have a bias frame with a minimum in roughly the 100-1000 range. Don't worry
about precision here as it won't matter at all in the end. You now know your offset. Set it and
forget it. Never change it.
2) Aim the camera at something bright or just put it on your desk with no lens or lenscap on and
take a picture. Look at the max value in the image. Is it well below 65k? If so, boost the gain. Is
it at 65k? If so drop the gain. Now, if you're on a real target (daylight ones are great for this) you
can look at the histogram and see the bunching up at the top end as the camera is hitting full-well. Having that bunch-up roughly at 65,535 plus or minus a bit is where you want to be. If you
pull up just shy, you'll get the "most out of your chip" but you'll also have non-linearity up there.
You've got more of a chance of having odd color casts on saturated areas, for example, as a
result. If you let that just clip off, you've lost a touch but what you've lost is very non-linear data
anyway (all this assumes, BTW, an ABG chip which all of these cams in question are). Record
that gain and set it and forget it. Never change it.
By doing this simple, daytime, two-step process you've set things up perfectly. You'll be sure to
never hit the evil of zero and you'll be making your chip's dynamic range fit best into the 16-bits
of your ADC. Again, all the cameras in question have full-well capacities below 65,535 so you
are sure to have enough ADUs to fit every electron you record into its own intensity value.
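The two-step daytime check above reduces to two simple predicates. These helper names (`offset_ok`, `gain_ok`, `bias_min`, `light_max`) are hypothetical; they stand in for the min/max pixel statistics you would read off a bias frame and a deliberately saturated image:

```python
ADC_MAX = 65_535

def offset_ok(bias_min):
    """Step 1: the bias frame's minimum should land roughly in 100-1000."""
    return 100 <= bias_min <= 1000

def gain_ok(light_max, slack=500):
    """Step 2: a saturated scene should bunch up at the top of the ADC
    range, i.e. its maximum should sit near 65,535."""
    return light_max >= ADC_MAX - slack
```

If `offset_ok` fails low, boost the offset; if `gain_ok` fails, boost the gain, exactly as the two numbered steps describe. Once both pass, record the values and never change them.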
exceptionally powerful technique as the hot pixels are removed effectively with no noise being
injected. It's also very flexible as you can use the same "master dark" from night to night and
from exposure duration to exposure duration just by adjusting the slider and making new maps as
needed.
Note: If you use Bad Pixel Mapping you will not use Dark Subtraction and vice versa. One
or the other but no need for both. If you use Bad Pixel Mapping you can still use flats and
bias frames and it doesn't matter whether you apply BPM before or after your other pre-processing.
3. Select one or more dark frames that you can now refer to as Dark 1.
4. (optional) If you want to do things to your flats on the fly, tell it which bias and dark to
   apply (both are entirely optional) and whether you want to blur the flats at all. If you're
   using a one-shot color camera, you will at least want to use the 2x2 mean here to remove
   the Bayer matrix. Blurring the flats will help reduce the noise (grain) in them.
5. Click on the buttons to define your sets of light frames (Light 1 through Light 5).
   You can have 1-5 sets of lights that you work on at once.
6. Select which biases, darks, and flats get applied to each of the sets of lights.
If this all made no sense, see the Multiple-Set Pre-Processing section where it is laid out
in more detail.
really doesn't matter whether you do the flats and biases before BPM or after BPM). To apply
BPM to your light frames:
1. Create a Bad Pixel Map if you don't already have one. Batch, Bad Pixels, Make Bad Pixel
Map. Select a dark frame or stack and start off by just hitting OK to use the default
threshold.
2. Pull down Batch, Remove Bad Pixels, selecting the one for the kind of image you have.
If you have a one-shot color camera that is still in the RAW sensor format and looks like
a greyscale image and not color (another reason to capture in RAW and not color...),
select RAW color. If it's a mono CCD, select B&W.
3. A dialog will appear asking you for your Bad Pixel Map. Select it.
4. Another dialog will appear asking you for the light frames. Select all of them (shift-click
is handy here).
5. You will end up with a set of light frames that have had the bad pixels removed. They
will be called "bad_OriginalName.fit" where OriginalName is whatever it used to be
called.
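Conceptually, the Bad Pixel Mapping procedure above has two halves: pixels that read unusually hot in a dark frame are recorded in a map, and in each light frame those pixels are replaced by the median of their good neighbors, which is why no dark-frame noise is injected. The sketch below illustrates this idea with hypothetical helpers; it is not Nebulosity's implementation:

```python
import numpy as np

def make_bad_pixel_map(dark, threshold):
    # Any pixel hotter than `threshold` in the dark frame is flagged bad.
    return dark > threshold

def remove_bad_pixels(light, bad_map):
    # Replace each flagged pixel with the median of its good 3x3 neighbors.
    fixed = light.copy()
    h, w = light.shape
    for y, x in zip(*np.nonzero(bad_map)):
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        patch = light[y0:y1, x0:x1]
        good = patch[~bad_map[y0:y1, x0:x1]]
        fixed[y, x] = np.median(good)
    return fixed
```

Because the map is just a thresholded dark frame, the same master dark can be re-thresholded for different exposure durations, which is the flexibility the text describes.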
6.1.3 Converting RAW images to Color and/or Pixel Squaring (aka Reconstruction)
The last step before stacking your images is to convert them to color (if they are from a one-shot
color camera and you captured in RAW) and square them up as needed. Some cameras have
pixels that are not square and this will lead to oval rather than round stars. The process of
demosaic'ing (color reconstruction) and/or pixel squaring is called Reconstruction in Nebulosity
and the details of this can be found in the section on Reconstruction: Demosaicing and Pixel
Squaring.
Note, you can tell if your images need to be squared up by pulling down Image, Image Info.
Near the bottom you will see the pixel size and either a (0) or (1). If it is (1), the pixels are
square. Of course, the pixel dimensions will be the same in this case too.
To reconstruct all of your light frames, simply:
1. Pull down Batch, Batch Demosaic + Square (if images are from a one-shot color camera)
or Batch Square (if images are from a monochrome camera or you just feel like squaring
up a color camera's images while keeping them monochrome for some reason).
2. Select your frames
3. Ideally, Nebulosity will start loading and reconstructing the frames. If it pops up a dialog
asking for things like offsets, it means it did not recognize what camera captured the
image (or you have manually override color reconstruction checked in the Preferences).
If this happens, consult the Reconstruction: Demosaicing and Pixel Squaring section.
4. In the end, you'll have a set of images named "recon_OriginalImage.fit"
supposed and, well, all things aren't always equal. For example if you start with M101 high in
the sky and image for a few hours it starts picking up more skyglow as the session goes on,
brightening the image up. That thin cloud that passed over did a number on a frame that still
looks good and sharp, but isn't the same overall intensity as the others, etc. All things are not
always equal.
If you're doing the Average/Default method of stacking, you need not worry about this issue
unless the changes are really quite severe. If you're using standard-deviation based stacking,
Drizzle, or Colors in Motion, it is a good idea to normalize your images before stacking. What
this will do is to get all of the frames to have roughly the same brightness by removing
differences in the background brightness and scaling across frames.
As of version 2.3, there are two methods you can use to normalize images. The first (and
original), performs a purely linear stretch to put the black and white points in roughly the same
place. To normalize a set of images, simply:
1. Pull down Batch, Normalize images
2. Select the light frames you want to normalize
3. In the end, you'll have a set of images named "norm_OriginalName.fit"
The second, more advanced tool, attempts to equate the histograms of two or more images. This
is a more complex stretching procedure that can account for more kinds of changes across
images. To use this Match Histogram tool, simply:
1. Pull down Batch, Match Histograms
2. Select a reference frame and press OK. This will serve as the template image that others
will be matched to.
3. Select the set of frames you wish to normalize
4. In the end, you'll have a set of images named "histm_OriginalName.fit"
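The first (linear) normalization method amounts to a linear stretch that puts every frame's black and white points in roughly the same place. A minimal sketch of that idea, assuming robust percentile-based black/white points rather than Nebulosity's exact estimator:

```python
import numpy as np

def normalize(frame, target_black, target_white):
    # Robust black/white points: 1st and 99th percentiles of the frame.
    lo, hi = np.percentile(frame, (1, 99))
    scaled = (frame - lo) / (hi - lo)          # linear stretch to 0-1
    return scaled * (target_white - target_black) + target_black
```

Running every frame through this with the same targets removes differences in background level and overall scaling, so outlier-rejection stacks (sigma clip, Drizzle, Colors in Motion) compare like with like. Histogram matching goes further by equating the full intensity distributions, not just two points.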
("align_OriginalName.fit") into Align and Combine again, selecting "None (fixed)" as the
alignment method (and one of the Std. Dev. thresholds in the Stacking Function). Make
sure you have Normalized your images at some point.
6.1.9 Stretching
Now, the fun begins as it's time to see what you really have in that shot. Sitting atop that skyglow
should be the faint galaxy or nebula you were shooting and stretching is how we bring this out.
There are three main tools for stretching in Nebulosity. The first is the Levels / Power Stretch, the
second is Digital Development Processing (DDP), and the third is Curves. For each of these,
more detail is provided in the section on Image Adjustment.
The goal in each of these is to pull your image's intensity profile (histogram) and stretch it so that
very low contrast differences are made more apparent. Thus, you are pulling your faint galaxy
arms away from the skyglow and doing things like sending the skyglow down to a nice dark
background. When doing this:
o Keep your eye on the histogram. The histogram is your friend.
o Until the very last steps of stretching, don't let the left edge of the histogram get cut off
  and don't bang too much (e.g. the core of your galaxy) into the right edge of the
  histogram. Once they hit the edges (0 and 65535), you'll never resolve details in there
  again.
o Turn off auto-scaling (or let Nebulosity do this for you) so that what you're seeing on the
  screen is the full 16-bit data in all its glory. This will help you use the full range of
  intensities your image can take. Remember, the B and W sliders are just there to make the
  image prettier on the screen (they do a stretch for display but don't really affect the
  underlying image). So, have them at full left and full right and then start to stretch. (If
  you're in auto-scale when you enter Levels, it will turn it off and set these at the extremes
  for you.)
o Don't try to do everything in one pass. Make several passes over the image to slowly pull
  it into the condition you want it.
o Save often.
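To see why stretching pulls faint detail away from the skyglow, consider a toy power stretch (one simple form of the Levels / Power Stretch named above; this is an illustration, not Nebulosity's implementation):

```python
import numpy as np

def power_stretch(img16, power=0.5):
    # Normalize to 0-1, apply x**power, and map back to 16 bits.
    # A power below 1 lifts faint (low) values while leaving the
    # endpoints 0 and 65535 fixed, so nothing new gets clipped.
    x = img16.astype(float) / 65535.0
    return (x ** power * 65535.0).astype(np.uint16)
```

A pixel a quarter of the way up the range moves to halfway up, while black stays black and white stays white, which is exactly the "pull the faint arms away from the skyglow without banging into the edges" behavior the tips describe.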
7 Image Pre-Processing
1. Pre-processing: Theory
2. Pre-processing one or more sets using the Multiple Set tool
3. Automatic dark frame scaling
4. Bad Pixel Mapping
camera and taking a series of short exposures (e.g., 10 ms). Take a good number of these some
day when you're bored and combine them (average or median) to create a master bias frame.
In contrast to Bias and Dark frames, Flat Frames are taken with light hitting the camera, but
with the light coming from an even field of illumination (e.g., aiming your telescope at a white
wall, shooting the sky at dusk defocused while bouncing the scope around, putting a diffuser over your
telescope, etc). The exposure duration of Flat frames does not matter per se, but should be long
enough to ensure no pixels are at or near zero and no pixels are near saturation (Nebulosity will
automatically scale the intensity of the image to have a mean of 1.0, so don't worry how bright it
is overall). Again, take several of these and combine them.
Nebulosity's Pre-process routine will subtract any Dark frame provided from each image,
subtract any Bias frame provided from each image, and divide the result by the Flat frame. You
may notice that this is leaving off part of the equation, as the denominator does not include the
part about subtracting the Dark frame and Bias frame from the flat frame. This is because the
Flat frame is typically taken at a different duration (usually much shorter) than the Light frames,
meaning a different Dark frame is needed to remove the hot pixels from the Flat frame. What this
means is that for best results, you should pre-process your Flat frame by treating it like a Light
frame and applying a suitable Dark frame and Bias frame to create the "master flat" image used
to correct your Light frames.
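The full calibration just described can be written as corrected = (light - dark) / master_flat, with the master flat getting its own dark/bias treatment first. Here is a minimal NumPy sketch of that arithmetic (synthetic frame values, not Nebulosity's implementation):

```python
import numpy as np

def calibrate(light, dark, flat, flat_dark=None, bias=None):
    """corrected = (light - dark) / master_flat, where the master flat
    first has its own dark and/or bias removed and is then normalized
    to a mean of 1.0 (as Nebulosity does automatically)."""
    f = flat.astype(np.float64)
    if flat_dark is not None:
        f = f - flat_dark
    if bias is not None:
        f = f - bias
    master = f / f.mean()
    return (light.astype(np.float64) - dark) / master

# toy frames: signal of 19000 everywhere, 10% vignetting in one corner,
# and a dark pedestal of 1000 counts
light = np.full((4, 4), 20000.0)
light[0, 0] = 1000.0 + 19000.0 * 0.9      # vignetted corner
flat = np.full((4, 4), 30000.0)
flat[0, 0] = 30000.0 * 0.9                # flat shows the same falloff
dark = np.full((4, 4), 1000.0)
out = calibrate(light, dark, flat)
print(round(out[0, 0], 2), round(out[1, 1], 2))  # corner now matches the rest
```

After division by the flat, the vignetted corner comes out identical to the rest of the field, which is exactly what flat-fielding is for.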
1. Define your control frames (darks, biases, and flats) first. Here, you're saying these are the
frames I'm going to want to use to fix some lights.
2. You can select individual images or multiple images. Nebulosity will stack the frames on the
fly if you select multiple.
3. For the flats and lights, you can choose to apply the control frames - the bias, dark (and, for
the light frames, flats). When you're doing this, you're telling Nebulosity which control set
to apply.
Let's walk through this example. First, I pressed the Dark 1 button and when the dialog
appeared, I selected an existing master dark frame (Master_Dark_30_fr_5m-10C.fit). This is
now Dark 1. I then pressed the Dark 2 button and grabbed three frames (these are actually from
another camera with a different sensor size even). Remember, you can select multiple images in
those dialogs by shift-clicking, control-clicking (or on the Mac, Command-clicking) the way you
can in other applications. Here, we're telling Nebulosity to stack those three dark frames and call
that Dark 2. Note how next to the name of one of these dark frames
(Dark_Stability_1m-1_052.fit) you see the number three in parentheses. This is telling you
that there are three frames you selected here that will be stacked on the fly.
I did a similar thing to select an individual bias (Bias 1) and to stack several raw bias frames
(Bias 2). Again, these are actually from two different cameras (though they needn't be, of
course).
I selected several sets of flats here as well. For Flat 1, I've got pre-stacked flats from the first
camera when the H-alpha filter was on it. For Flat 3, it's the same camera with the O-III filter on
it. For Flat 2, it's the other camera and I selected several flats here to stack on the fly. For the
flats, though, we want to do some pre-processing to clean them up. First, the flats don't have
much dark current but they do have bias current. So, I told Nebulosity to apply the appropriate
bias frames to these flats. The first camera's bias frame was Master_Bias.fit and this was
defined as Bias 1. For both Flat 1 and Flat 3, we'll apply this bias frame. The second camera's
bias frame was Bias 2 (the stack-on-the-fly bias frame of three individual images, including one
called Bias_003.fit.) So, Bias 2 is selected for Flat 2 and Bias 1 is selected for Flat 1 and Flat
3. In addition, there will still be some noise here that I may want to reduce before applying the
flats to my lights. So, a 7-pixel blur is selected here by default (the default value is set in the
Preferences dialog). You've got several options here to choose from. If you're using a one-shot
color camera, though, you'll want to do something here to at least remove the Bayer matrix (any
of them should remove this).
Now, we move onto the lights. Here, I selected some H-alpha and O-III data from the first
camera and put them into Light 1 and Light 3. It's important to note here that I could have put
them in any set of lights. I could have done Light 1 and Light 2 or even started at the back and
done Light 4 and Light 5. Which set you put them in doesn't matter. You don't need to match
up those numbers with your flats, biases, or darks. Where you do the matching is in those
pull-downs. So, for these two sets ((4) Ha_10m_021.fit and (5) O3_take2_E-1_005.fit),
we're going to apply the same dark frame - Dark 1. We'll apply different flats, though, as there
were different dust bunnies on my two filters. So, the flat with the H-alpha filter (Flat 1) gets
applied to the light with the H-alpha filter (Light 1). Same deal for the flat with the O-III filter
(Flat 3) and the light with it (Light 3). For the lights from the second camera, Light 2, we're
going to apply its dark (Dark 2), and its flat (Flat 2). Of course, since the bias current is
contained in the dark frame, we're not going to apply any bias frames here (that would double-subtract the bias).
Once you're all set, press OK and go away for a while. Note that if you have the History window
open (View menu), you'll get a full blow-by-blow of exactly what is going on.
light frame than in the dark frame. It then scales the dark frame accordingly. The picture above
shows it in action.
In the upper-left, we have a section of a dark frame, stretched to show the hot pixels. In the
upper-right, we have a raw light frame showing our stars and the hot pixels (e.g., the dot inside
the red circle). In the lower-left, we have the result of a standard dark subtraction. Note the
"black hole" inside the circle. In the lower-right, we have the result of the autoscaled dark
subtraction.
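The core idea - estimate how strong the dark current is in the light frame relative to the master dark, then scale the dark before subtracting - can be sketched as follows. This is an illustration of the principle using hot-pixel amplitudes, not Nebulosity's exact algorithm, and the frames are synthetic:

```python
import numpy as np

def scaled_dark_subtract(light, dark, hot_threshold=None):
    """Scale the dark frame so its hot pixels match their amplitude in the
    light frame, then subtract (a sketch of the idea behind autoscaled
    dark subtraction, not Nebulosity's exact implementation)."""
    if hot_threshold is None:
        hot_threshold = dark.mean() + 5 * dark.std()
    hot = dark > hot_threshold
    # ratio of hot-pixel amplitudes above each frame's background level
    scale = ((light[hot] - np.median(light)).mean()
             / (dark[hot] - np.median(dark)).mean())
    return light - scale * dark

# toy example: dark current in the light is only 50% of the master dark's
rng = np.random.default_rng(1)
dark = np.full((50, 50), 100.0)
dark[rng.integers(0, 50, 20), rng.integers(0, 50, 20)] = 5000.0  # hot pixels
light = 2000.0 + 0.5 * dark     # sky background plus scaled dark current
out = scaled_dark_subtract(light, dark)
print(round(out.std(), 3))      # -> 0.0: hot pixels removed, no black holes
```

A plain subtraction of the full-strength dark would have dug "black holes" at every hot pixel here; the scaled subtraction removes them cleanly.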
combination of several frames taken at the longest exposure duration you expect to use. Pull
down Bad Pixels, Make bad pixel map from the Batch menu and load the dark frame when
prompted. A slider will now appear and the display will show you your hot pixels. Nebulosity
attempts to come up with a reasonable position of the slider for you. Here, we have the default
setting and display for creating the Bad Pixel Map using a sample dark frame.
At this point, you can adjust the slider and you'll see the number of bad pixels identified change
(here, showing 60 bad pixels). Move it to the left and more pixels appear and move it to the right
and fewer appear. What you are doing is moving a threshold - saying that anything above this
intensity is a bad pixel and anything below it is a good pixel. When you like your map, click on
Done and you will be prompted for a name to give this Bad Pixel Map. Give it a meaningful
name, as you may well want to create several maps. If you used a 5-minute dark frame, you
could use that dark frame to make several maps - one for ~5-minute exposures, one for ~1-minute exposures, and one for ~20-second exposures, for example, by using different values of the
threshold (letting fewer hot pixels show for the shorter exposure maps). There is no "exact right
value" here. You're simply telling Nebulosity which pixels not to trust.
Once you have your map, you can now process your light frames. In the Bad Pixels menu, select
the Remove bad pixels option corresponding to the kind of images you have (images from a
one-shot color camera in RAW format prior to the de-mosaic process OR images from a black
and white camera). It'll prompt you for the map to apply and then for the set of light frames you
want to process (shift-click or ctrl-click to select multiple frames).
When done, you can Batch Demosaic the images if they were
from a one-shot color camera and then go on to Alignment and Stacking.
Sony CMYG chip and Nebulosity does not recognize the sensor, once the offsets are in place,
values of 1.06, 0.29, -0.41 in the first row, -0.4, 1.06, and 0.54 in the second row, and 0.50, -0.4,
and 1.11 in the last row will give a reasonable color rendition by compensating for the chip's
imperfect color filters.
It is important to note that Nebulosity will automatically square the pixels during the debayer
process for one-shot color images. Any color image is therefore assumed to have square pixels.
It is also important to note that if you use a Canon DSLR, ideal color balance in Nebulosity will
be accomplished if you select the appropriate setting under "DSLR White Balance / IR Filter" in
the Preferences dialog.
10.1 Modes
10.1.1 RGB
In RGB mode, three color channels are used directly to create a color image. It is the simplest
mode.
12 Stacking Images
Stacking multiple exposures is a fantastic thing to do for your images. If you can stack your
images, you don't need to hold perfect tracking as long (making life easier) and you reduce any
noise in your image that is not consistent from exposure to exposure (much of the noise is not).
Thus, a stack of images will look less grainy (less noisy) than any one individual image. This lets
you stretch and process the image more to bring out fainter details. All in all, stacking is a very
good thing.
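The noise benefit is easy to demonstrate: averaging N frames reduces random (uncorrelated) noise by roughly the square root of N. A quick NumPy check with synthetic frames:

```python
import numpy as np

rng = np.random.default_rng(42)
signal = 1000.0
# sixteen frames of the same signal, each with independent read/shot noise
frames = [signal + rng.normal(0, 50, (100, 100)) for _ in range(16)]

single_noise = np.std(frames[0])
stack_noise = np.std(np.mean(frames, axis=0))
print(round(single_noise / stack_noise, 1))  # close to 4.0: sqrt(16) improvement
```

This is why a stack of 16 five-minute frames looks so much smoother than any single frame, and why the gain flattens out: going from 16 to 32 frames only buys another factor of about 1.4.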
In Nebulosity, stacking can be done with or without alignment. For light frames (where stars are
apt to move between each frame), you will want to align the images either prior to stacking or
during stacking (see below). For things like dark frames, bias frames, and flat frames, you will
not want to align the frames first.
telescopes (including fork-mounted scopes on a wedge). This does not work for Alt-Az mounted
scopes. This style of mount not only makes stars move left/right and up/down but also makes the
entire field rotate.
A more complex technique, Colors in Motion, is also used for stacks that have shifts between
images. Unlike the other techniques provided, Colors in Motion simultaneously aligns RAW
images, stacks them, and reconstructs color information from one-shot color cameras. It cannot
be used on RGB data or on black and white data.
To align and combine images using alt-az mounted scopes (or equatorially-mounted scopes),
Nebulosity provides three other techniques. The first is similar to the above but allows for
rotation and sub-pixel alignment. It is called Translation + Rotation. Related to this is Translation
+ Rotation + Scaling in which frames are allowed to be resized to align atop each other. To let
Nebulosity know about the possible rotation, you must pick two stars in each image. Each image
will be shifted and rotated to align them all prior to averaging the data.
The final technique that works on both equatorial and alt-az mounted scopes is called Drizzle
(Align and Combine: Drizzle). Drizzle can not only combine images from alt-az (or equatorial)
scopes, but it also enhances the resolution very well. To do this, you will again have to pick two
stars in each image, and Nebulosity will do the rest.
average would make it 64037. Here, we see the problem with simple summing. You can saturate
the image pretty easily, especially if you start with 16-bit images. Here, one pixel should be
twice as bright as the other and yet it ends up equally bright (65535) if adding is used, since this
is the highest possible value.
Nebulosity uses an Adaptive Stacking technique that avoids the weaknesses of both. It can be
viewed as always being somewhere in between adding and averaging your data. The output (the
stack) will always have a maximum value of ~65535 so that you are always using the full range
of your data. This is enabled by default and for most uses will be optimal. (Note, it is not used
when the Fixed Combine is selected as this tool is often used for dark frames). Unless you have a
real reason to, you should leave this on (see Preferences menu). If you turn it off, Nebulosity will
compute a straight average when stacking.
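The idea of Adaptive Stacking - keep the benefits of averaging while still using the full output range - can be sketched as an average followed by a rescale. This is an illustration of the concept, not Nebulosity's actual weighting:

```python
import numpy as np

def adaptive_stack(frames, top=65535.0):
    """Average the frames, then rescale so the brightest pixel sits near
    65535 - a sketch of "between adding and averaging" (Nebulosity's
    actual adaptive weighting may differ)."""
    avg = np.mean(np.stack(frames).astype(np.float64), axis=0)
    return avg * (top / avg.max())

a = np.array([[10000.0, 40000.0]])
b = np.array([[30000.0, 60000.0]])
out = adaptive_stack([a, b])
print(out)  # brightest pixel pushed to ~65535, relative brightness preserved
```

Unlike straight summation, nothing saturates here: the 2:5 brightness ratio of the two pixels survives the stack intact.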
With the filter set at 1.75, it takes a more extreme or "outlying" intensity value to be counted as
"bad" than at 1.5. At 2.0, it takes even more abnormal a value to be excluded. Thus, more
samples go into the final image using a higher threshold (and more noise as well). Typically,
filtering values at 1.5 or 1.75 will yield the best results.
12.2.2.1 How-to
To use Standard Deviation stacking, you must first Normalize your frames. Then, you must align
your images using any of the Translation (+Rotation (+Scaling)) routines. Do this to all of your
images and save each file rather than saving the average stack. Then, select "None" for alignment
method and the various SD stacking choices will appear. Try 1.75 or 1.5 initially before using
any more extreme values.
Note that this requires a lot of memory. If you are using DSLR images, 1Gb of RAM would be
considered a minimum for this kind of technique.
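Per-pixel standard-deviation filtering works like this sketch: for each pixel, values across the stack that sit more than k standard deviations from that pixel's mean are excluded before averaging. This is an illustration of the technique, not Nebulosity's exact routine:

```python
import numpy as np

def sd_stack(frames, k=1.75):
    """Per-pixel sigma-clipped mean: values more than k standard
    deviations from the pixel's mean across frames are excluded."""
    cube = np.stack(frames).astype(np.float64)
    mu = cube.mean(axis=0)
    sd = cube.std(axis=0)
    keep = np.abs(cube - mu) <= k * sd
    return (cube * keep).sum(axis=0) / keep.sum(axis=0)

# nine clean frames plus one hit by a satellite trail on one pixel
frames = [np.full((2, 2), 1000.0) for _ in range(9)]
bad = np.full((2, 2), 1000.0)
bad[0, 0] = 30000.0
frames.append(bad)
print(sd_stack(frames)[0, 0])   # -> 1000.0: the outlier was rejected
```

A straight average would have left a bright streak (3900 here); the clipped mean rejects the outlier and recovers the clean value. Lower k rejects more aggressively, at the cost of throwing away more good samples.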
cursor over this star (and remember which one it is) and click the left mouse button. In so doing,
you're saying "The star is here" to Nebulosity.
Tip: If you want to abort the whole process, simply press the Abort button.
In truth, you're actually saying, "The star is about here." None of us can click perfectly all the
time and doing so would be a very time consuming process as we would obsess over whether
the click should be here or one pixel over. So, Nebulosity never assumes you got it 100% on
target. Instead, it looks in a small area (+/- 5 pixels) to see if there's a better candidate for the
center of that star. That is, it refines your click. So, get close, but don't obsess over being perfect.
Nebulosity assumes you're not perfect and will try to fix it anyway. After the first image, you will
notice a circle appear around a star - hopefully, the same star you've been using in prior images.
To keep the current location (i.e., to say "yes, you got the right star there, Nebulosity"), Ctrl-Click
(Command-Click on the Mac).
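The click-refinement idea - search a small window around the click for a better star center - can be sketched like this. Finding the brightest pixel is a simplified stand-in; a real implementation would typically use a brightness-weighted centroid:

```python
import numpy as np

def refine_click(img, r, c, radius=5):
    """Search a +/-radius window around the user's click and return the
    brightest pixel - a simplified stand-in for star-center refinement."""
    r0, c0 = max(r - radius, 0), max(c - radius, 0)
    win = img[r0:r + radius + 1, c0:c + radius + 1]
    dr, dc = np.unravel_index(np.argmax(win), win.shape)
    return int(r0 + dr), int(c0 + dc)

img = np.zeros((40, 40))
img[20, 22] = 5000.0                # the star's true center
print(refine_click(img, 18, 20))    # clicked a few pixels off -> (20, 22)
```

This is why "close enough" clicking works: the click only needs to land within the search window of the true center.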
If there is an image you don't want to include in the stack (e.g., a plane flew through your DSO,
the mount mistracked, you moved the scope to re-center the target during imaging, the wind blew,
a cop shined a spotlight at your scope - all of which have happened to me), just Shift-Left-Click
anywhere in the image. That frame will be ignored. (The FWHM specified in the Pixel Stats
window can help here, too.)
Once you've selected the same star in each image (at least for each image you plan to use),
Nebulosity then goes about aligning the images and combining them into one composite image.
Depending on how many images you're aligning and how big they are, this could take some
time. Nebulosity shows you its progress in the Status Bar. After all images have been combined,
Nebulosity prompts you for a name to save the composite image as. Give it a name and press OK
and you're done. The image now displayed on the screen is this composite image.
to that used in Align and Combine: Translation. The only difference in what you do is to pick two
stars in each image.
Once you have gone through and picked the first star (which Nebulosity uses to gauge the
translation), you will go back through all images and be asked to pick a second star (used to
gauge rotation). As before, feel free to keep the current best guess of the star's location (Ctrl-Left-Click or Command-Left-Click on the Mac), to skip any image you don't like by Shift-Left-Clicking, or to abort by hitting the Abort button.
smaller than the original pixels. This is the "up-sample factor" - how much higher the output
resolution is than the input resolution.
Second, these pixels will usually not line up perfectly with a pixel in the output image. Key to
Drizzle is the fact that these "drops" (the blue pixels) fall onto the output pixels to the degree that
they overlap. The value dropped by a blue pixel will fall a lot on one output pixel, a little on
another, less still on a third, etc. Think of the output grid as having little wells (or spots on an ice-cube tray) for each output pixel and the water drop landing on a spot that hits multiple wells. As
multiple images rain pixels down onto the output image, the output pixels fill up to the degree
that input pixels line up with the output pixels.
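The drop-onto-grid arithmetic can be illustrated in one dimension. This toy sketch deposits a single shrunken input pixel onto an upsampled output grid by fractional overlap; the real Drizzle works in two dimensions and also handles rotation:

```python
import numpy as np

def drizzle_1d(value, start, pixfrac, out, upsample):
    """Drop one shrunken input pixel onto a 1-D output grid, splitting
    its value by fractional overlap with each output cell (toy sketch)."""
    left = start * upsample             # drop's left edge on the output grid
    right = left + pixfrac * upsample   # shrunken drop width
    for i in range(len(out)):
        overlap = max(0.0, min(right, i + 1) - max(left, i))
        out[i] += value * overlap / (pixfrac * upsample)

out = np.zeros(8)
drizzle_1d(100.0, start=1.3, pixfrac=0.6, out=out, upsample=2)
print(np.round(out, 1))   # ~1/3 of the drop lands in cell 2, ~2/3 in cell 3
```

Because the drop is narrower than an input pixel (pixfrac < 1) and the output grid is finer (upsample > 1), sub-pixel position information from many shifted frames accumulates into genuinely higher resolution.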
That's the theory behind Drizzle. How does it work in Nebulosity? Images must be black and
white or full-color (but not RAW) and pre-processed. There should also be some movement
between images (e.g., a Fixed combine of the images would make a blurry average). You'll also
want a minimum of 8-10 images to work with, and you should most likely have Normalized your
images prior to stacking.
Drizzle requires two stars to be found in each image. First, find one star (left-clicking as in Align
and Combine: Translation) in all the images (shift-left-click to skip an image, ctrl/command-left-click to keep the current guess). This first one will serve to let Nebulosity know how much
translation is in each image. After you have selected the same star in each image, Nebulosity will
loop back and ask you to find a second star in each image. A red target will appear over the first
star to let you know what you picked the first time. Don't pick a star that's too close to the first
one, as the second star lets Nebulosity know how much rotation is present. The further away it is,
the more "leverage" you have.
Once the stars are picked, you'll be presented with a dialog
asking you for a few parameters to give to Drizzle. It asks
for the "Pixel Reduction" factor (how much smaller the
pixels become before being transformed to the output grid)
and the "Up-sample" factor (how much bigger the output
image is than the input images). Typical pixel reduction
factors range from 0.5-0.8 (0.6 is the default) and typical
up-sample factors range from 1.2-2.5 (1.5 is the default). If you try to use very small pixel drops
(e.g. a pixel reduction factor of 0.2) or very large up-sampling factors (e.g., 3.0), you risk leaving
"holes" in the output image where no pixels dropped from an input image to the output image or
other artifacts (right).
Finally, you'll be asked for an "Atomizer" value (default value of 2). This parameter lets you
trade off speed and accuracy of the Drizzle process. Think of it as how fine a "mist" is being
made out of each "drop". 1 is the fastest but is a bit less accurate and should not be used if you
have a small number of images. 3 is the slowest but most accurate. 2 represents a good trade-off
(and will rarely look any different than 3).
Be warned - Drizzle is computationally intensive. Be prepared to find something else to do for a
while after you've picked your stars.
Here is a shot of a small portion of a test image used to evaluate Drizzle. On the left, we have a
single raw frame taken from the DSS again. This frame was shifted randomly and undersampled
to create a stack of frames that were then aligned with either translation (middle) or Drizzle
(right). For display here, each image is shown at the same scale. Note the separation of close
stars recovered by Drizzle and its overall increase in resolution.
CIM is computationally intensive, so be prepared to wait a while once you've told it where the
common star is in each image. When it's done, you'll be asked for a file to save the results in.
13 Image Adjustment
While Nebulosity is not designed to be an advanced image processing application like PhotoShop
or the GIMP, it does supply a number of purpose-built and very useful tools for adjusting your
images (usually the result of stacking). These tools are located under the Image menu.
For any of these, if you decide you don't like the results, simply press Cancel or use the Undo
command in the Image menu (or Ctrl-Z). By default you have 3 steps of Undo available, but with
a quick trip to the Preferences menu, you can have unlimited Undos and Redos.
13.3 Reconstructing Images from One-shot Color Cameras and Line Filters
In a previous section (see Monochrome vs. Color?) a typical color filter array was shown for a
one-shot color camera that uses red, green, and blue filters. If we place a "line filter" in front of
such a camera, what happens? For example, suppose we placed an H-alpha filter in front of this
array? Such filters pass light in a very narrow range, centered on the 656nm "Hydrogen alpha" line.
Emission nebulae emit light specifically at this wavelength (and several others), so passing this
light and blocking all else can be an excellent way to image nebulae amid light pollution and
can lead to stunning images of these nebulae. When multiple lines are imaged separately (e.g.
one frame of Ha, one of O-III, etc) they can be combined into beautiful "false color" images.
Typically, such imaging has been reserved for monochrome cameras. The reason can be seen in
that Bayer array. Photons that pass through the Ha filter are well into the red area of the
spectrum. As such, only the red pixels will get any light. The green and blue pixels will be dark.
Thus, we have only 25% of our pixels doing anything and the others are merely contributing
noise. So, when reconstructing the RAW data, one could use the Low Noise
2x2 Bin or Adaptive 2x2 Bin tools. This would create an image half the size but would remove
all evidence of the Bayer pattern. The one valid red pixel would be averaged with the three
invalid pixels in a local 2x2 area and the result would be dominated by the red signal. One could
also run the normal debayer routine and simply apply the Discard Color tool. This is a common
approach, but one that may not be optimal (see image below).
While we cannot escape the loss of resolution entirely, there are ways of improving how images
are reconstructed on one-shot color cameras when line filters are used. For example, when
CMYG color arrays are used instead of RGB arrays, more pixels respond to Ha light (as is shown
by this shot with an Ha filter of the Horsehead nebula, courtesy of Michael Garvin). Knowing
this, and knowing how the pixels respond to this light, can let us optimize this reconstruction.
Nebulosity gives you several tools to do the reconstruction in addition to using the binning tools
or the Discard Color tool. One is a "Generic" method that will do a good job on any line filter
with any camera but is not optimized for any specific combination. A second is a reconstruction
optimized for "nebula" filters and O-III filters that leak light in the Ha and beyond regions (e.g.,
Televue, Meade, and Lumicon filters). This is also a rather generic reconstruction that will work
well with a wide range of setups. Finally, for CMYG arrays, there are optimized reconstructions
for Ha and pure O-III filters (e.g., Astronomik, Orion, and Custom Scientific) that do not leak
light in Ha or higher wavelengths. A comparison of these techniques on the data from the
Horsehead nebula taken on an Orion StarShoot's CMYG array is shown below.
In addition, this menu has options for pulling out each of the color channels directly for one-shot
cameras with RGB sensors. Since the green pixels are twice as common as the red and blue, you
have options to grab either of the green fields or to average them into a single green. Note that
with these options, the image will be half the original's size.
13.6 Binning
Images from some cameras are quite large and it can be useful to cut their size down (e.g., to
post to the Web). Nebulosity lets you bin images 2x2, thus cutting the width and height in half. An
added benefit of binning is that by combining data from 4 pixels into one, noise is reduced (much
in the same way it is with stacking). Finally, one additional use of binning 2x2 is to remove all
color information from a RAW frame off a one-shot color camera. This turns the one-shot color
camera into something a lot closer to a monochrome camera.
Nebulosity gives you 3 ways to bin your image. You can sum all 4 pixels (which will brighten the
image considerably), average all 4 pixels (which will keep the same brightness) or perform an
adaptive bin. The adaptive bin will combine the data in a way between the summation and
averaging, optimizing the combination so that the full 0-65535 range is used.
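The three binning modes reduce to simple arithmetic over each 2x2 block. Here is a NumPy sketch (the adaptive rescale shown - sum, then stretch to the full range - is an illustration of the idea, not Nebulosity's exact formula):

```python
import numpy as np

def bin2x2(img, mode="average"):
    """Bin an image 2x2 by summing, averaging, or adaptively rescaling.
    Assumes even image dimensions for brevity."""
    s = (img[0::2, 0::2].astype(np.float64) + img[1::2, 0::2]
         + img[0::2, 1::2] + img[1::2, 1::2])
    if mode == "sum":
        return np.clip(s, 0.0, 65535.0)      # summing can saturate
    if mode == "adaptive":
        return s * (65535.0 / s.max())       # sketch: sum, then use full range
    return s / 4.0                           # average keeps brightness

img = np.array([[100.0, 200.0],
                [300.0, 400.0]])
print(bin2x2(img), bin2x2(img, "sum"))       # -> [[250.]] [[1000.]]
```

Averaging keeps the original brightness; summing quadruples it (and can clip), which is why the adaptive mode splits the difference.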
13.7 Blurring
Nebulosity lets you apply a Gaussian blur to your image. This is another nice way of reducing
noise. For example, prior to applying a flat frame to your image it can be useful to blur the flat
first to reduce noise in the image. Small amounts of blur can also be used after over-sharpening.
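A Gaussian blur is just a convolution with a Gaussian kernel, applied separably along each axis. This NumPy-only sketch (not Nebulosity's implementation) shows the operation and its noise-reducing effect:

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian blur via 1-D convolutions along each axis,
    with reflected edges."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()                      # normalize so brightness is preserved
    pad = np.pad(img.astype(np.float64), radius, mode="reflect")
    rows = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, pad)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, rows)
    return out[radius:-radius, radius:-radius]

noisy = np.random.default_rng(0).normal(1000, 50, (64, 64))
print(gaussian_blur(noisy, 1.5).std() < noisy.std())   # -> True
```

The trade-off is the usual one: the same averaging that suppresses pixel-to-pixel noise also softens fine detail, which is why small sigmas are preferred on flats and after over-sharpening.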
One very popular tool is an Unsharp mask. This algorithm actually makes a blurred version of
your image and subtracts this blur to make a sharper image. Nebulosity also provides a tool that
takes a different approach to the problem. The Tighten Star Edges tool first examines your image
and performs an edge detection analysis using a modified version of the Sobel Edge Detector
(modified to work better with our round stars than the traditional Sobel). These edges are then
subtracted from your image to yield tighter stars and enhanced edges. Note, this is not the same
kind of edge enhancement done during DDP processing.
This is a shot of M57 as acquired (left) and after the Tighten Star Edges tool using the default
parameters. Using the slider that appears when you run this tool (located in the Image menu),
you can adjust the degree of edge enhancement applied.
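The classic unsharp mask described above is simple arithmetic: blur the image, then add back a scaled difference between the original and the blur. Here is a sketch using a box blur as the smoothing step for brevity (Nebulosity's tools use their own blurs and, for Tighten Star Edges, a modified Sobel detector instead):

```python
import numpy as np

def unsharp_mask(img, sigma=2.0, amount=0.5):
    """Classic unsharp mask: sharpened = img + amount * (img - blurred).
    A box blur of half-width int(sigma) stands in for the Gaussian."""
    r = int(sigma)
    pad = np.pad(img.astype(np.float64), r, mode="reflect")
    blur = np.zeros_like(img, dtype=np.float64)
    for dr in range(-r, r + 1):          # accumulate all shifted copies
        for dc in range(-r, r + 1):
            blur += pad[r + dr:r + dr + img.shape[0],
                        r + dc:r + dc + img.shape[1]]
    blur /= (2 * r + 1) ** 2
    return img + amount * (img - blur)

img = np.zeros((9, 9))
img[4, 4] = 100.0                        # a lone "star"
sharp = unsharp_mask(img, sigma=2.0, amount=0.5)
print(sharp[4, 4])                       # -> 148.0: the peak is boosted
```

The `amount` parameter plays the same role as the slider in these tools: more subtraction of the blur means tighter stars but also more risk of dark rings around them.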
noise, and the skyglow's shot noise that, after stretching, things can look downright ugly. One
solution is to not stretch so much. Another is to stretch but then try to reduce this noise.
Nebulosity has two tools to do this: Adaptive Median Noise Reduction and GREYCstoration
Noise Reduction.
In this before and after example, there is clearly an improvement noise-wise. The background is
a lot smoother and overall the image is still quite sharp. It's not perfect, though, and still could
benefit from some tuning of the parameters. There are a few things that will help you do this:
- Before you enter GREYCstoration, select a box around something that has stars,
background, and your main object. You will be able to preview things on just this region,
which will make for much faster refreshes of the screen to see the effect.
- On entering GREYCstoration or after a change in parameters, hit the Preview button to
see the effect. You can hit the Show Original button to revert the display back to the
original image to see what was gained and what was lost. You can change parameters as
much as you like, hitting Preview between each iteration. Preview always goes back to the
original data. Hitting Done will apply this to the whole image (and can take a long time;
watch the title bar).
- Read the GREYCstoration documentation with examples.
- The Fast approximation and nearest neighbor resample methods do speed things up but
don't produce the best output.
- Watch this space for more details on maximizing GREYCstoration's effectiveness.
13.12 Resize
Want your image to be rescaled (aka resized, aka resampled)? The Resize tool in the Image menu
can make your image larger or smaller using a number of different resampling algorithms.
Simply enter the scaling factor (e.g., 2 to double the size of the image, 0.5 to make it half as big)
and choose an algorithm.
Here, we have an image that was first reduced to half its size by binning and then restored to full
size with the different algorithms available. Box is fast but ugly when increasing the size of the
image. Bilinear is fast but will smooth the image a bit more than some of the others. B-Spline
will lead to the smoothest output. Lanczos sinc can lead to the sharpest output but is more prone
to ringing than Catmull-Rom or Bicubic. The Catmull-Rom usually outperforms the standard
Bicubic.
13.13 Crop
Usually, after stacking a series of images, you end up with a dark border in your stacked frame.
This is because Nebulosity had to move all the images around to get them to line up and some
needed to be moved further than others. To get rid of these borders (or just to recompose your
image), you can crop the image. One way to do so is to simply drag a selection using the left
mouse. Start in one corner and hold the mouse button down to create a selection box and let go
when the box is the desired size. Once happy with the box, pull down Crop from the Image menu
and you'll have cropped in on just that area. A second way to do so is to pull down Crop from the
Image menu and enter the number of pixels to crop off directly.
the image - the minimum value in each color channel not being the same. This can be fixed by
subtracting a constant number from one or two of the color channels. The Adjust Color Offset
tool lets you do this. A dialog box will appear with sliders for red, green, and blue. Nebulosity
will attempt to determine reasonable values for the sliders when the dialog opens. The values you
enter here will be subtracted from the specified color channel(s). For example, sliding the Red
slider to 1100 will not affect the green and blue data but will make every red value 1100 less than
it was previously. A 3-color histogram is shown below the sliders to help in getting the offset just
right. Aim to have the left edge of the histograms similar for all three colors. Pressing Cancel
will revert back to the original image.
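The offset subtraction is straightforward arithmetic: a per-channel constant is removed and the result clipped at zero. A NumPy sketch (an illustration, not Nebulosity's code):

```python
import numpy as np

def adjust_color_offset(rgb, r_off=0.0, g_off=0.0, b_off=0.0):
    """Subtract a constant from each channel and clip to the valid range,
    aligning the left edges of the three color histograms."""
    out = rgb.astype(np.float64).copy()
    for ch, off in enumerate((r_off, g_off, b_off)):
        out[..., ch] = np.clip(out[..., ch] - off, 0.0, 65535.0)
    return out

# red channel carries a 1100-count pedestal relative to green and blue
rgb = np.zeros((2, 2, 3))
rgb[..., 0] = 1300.0
rgb[..., 1:] = 200.0
fixed = adjust_color_offset(rgb, r_off=1100.0)
print(fixed[0, 0])   # -> [200. 200. 200.]: channels now start at the same floor
```

After the subtraction, all three histograms begin at the same level, which is exactly the "left edges similar" target the tool's histogram display is there to help you hit.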
Note: The histogram shown here is based on the luminance values of the image. This
may lead to clipping of the data if you have not already balanced the color channels.
Balance reasonably before stretching and/or keep an eye on the histogram shown in the
main window (which is computed based on all color values).
The Levels / Power Stretch tool lets you do this and quite a lot more. When run, it presents you
with three sliders and a window showing your image histogram (see above). One slider sets the
black level and another sets the white level. The third sets the "power" (or middle slider in a
"levels" tool). Leaving the power at 1.0 performs a "linear stretch" of the data. Setting the power
below 1.0 will tend to brighten the fainter bits of the image. Setting it above 1.0 will tend to
darken the fainter bits and brighten the already brighter bits. This is applying a specific kind of
curve to your image (see below) by computing
for each pixel:
NewValue = OrigValue ^ power
At the same time, it is stretching the data so that the output ranges between the values set by the
black and the white level sliders. You can see its effect in the histogram window. Here, the initial
histogram is shown in blue and the output, or resulting, histogram is shown as a dashed orange
line. In the example shown below, you can clearly see the histogram being stretched to pull out
interesting bits in the data.
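Putting the pieces together, the whole Levels / Power Stretch operation can be sketched as: clip to the black/white range, normalize to 0-1, raise to the power, and rescale. This is an illustration of the math, not Nebulosity's internal code:

```python
import numpy as np

def power_stretch(img, black, white, power=1.0, top=65535.0):
    """Levels / Power Stretch: clip to [black, white], normalize to 0-1,
    raise to `power`, and rescale to the full output range."""
    x = np.clip(img.astype(np.float64), black, white)
    x = (x - black) / (white - black)
    return top * x ** power

img = np.array([0.0, 8000.0, 16000.0, 65535.0])
out = power_stretch(img, black=0.0, white=16000.0, power=0.5)
print(np.round(out))  # black stays 0, the midtone lifts, whites clip at 65535
```

A power below 1.0 lifts the faint values (here, 8000 rises well above the halfway point of the output range), which is exactly the brightening of faint detail described above.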
Many users are more familiar with a Levels
tool. In fact, it is mathematically identical to a Levels tool. Here, the "power" setting is directly
related to the "midtone" setting in a typical Levels tool. (In fact, the industry-leading Adobe
PhotoShop reports a value in its Levels tool between the black and white points that varies with
the midtone level and that is 1/power). To assist in using it this way, the black-point, midtone,
and white-point lines are superimposed on the histogram. As is typically shown in a Levels tool,
these lines show where in the original histogram (blue line) the black (left line), midtone (middle
line) and white (right line) points lie. You will note that as you move the power slider, the
midtone's position relative to the white and black points will move, but that it often won't be
placed directly under the slider. This is entirely normal. If you wish to think in terms of setting
the midtone level of the image, adjust the power slider until the middle line (slightly darker than
the other two) is at the desired place in the histogram.
There are a few things to note. First, if Auto-Scale is turned on prior to entering the Levels tool,
it will be turned off and the B and W sliders set at their full extent. This is to show you how
much of the full data range you are using and to encourage you to stretch the image to use that
full range. (If not in Auto-Scale mode, the sliders are not moved). Second, the Levels tool can be
quite computationally taxing on your computer, especially if you are working with very large
images. To make the adjustments more responsive, try defining a region of interest (ROI) with
the mouse (just as when cropping) before entering Levels. You will preview the adjustments on
this region only. When you hit OK, the same adjustment will be done to the whole image.
Take your shot and try setting the power to 0.3 - 0.8 and then sliding the white and black levels.
You should notice that the faint bits of your DSO start to appear and become a lot more
prominent. Quite often, optimal results are obtained by using this tool multiple times. Each time,
make only moderate adjustments to the image and don't worry about getting your background
very dark the first time or so through. Gradually home in on your desired image. Don't worry
about the fact that you're doing this multiple times and that it might cause problems with the
image range or values. Remember, Nebulosity does everything in 32-bit floating-point numbers
(96-bits per pixel total) internally. Adjust and re-adjust as you see fit.
13.20 Curves
The Curves tool dialog has two main areas. In the lower-right are some sample / default curves
you can try or use as a starting point. The Keller Curves come from Warren Keller, author of
the IP4AP tutorials. The main part of the dialog is on the left. There are two control points (blue
dots) that you can move to draw the curve. Grab one and move it and you will see the red line
(the curve you are making) move and the screen will update to show the effect of the curve.
These two points and the end points let Nebulosity build a sensible curve (using a Bezier). While
more points will give more flexibility, they also can get you into more trouble. With two points
many curves can be drawn and with a second pass through, the combined effect of both curves
gives almost infinite control.
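For reference, a cubic Bezier defined by two endpoints and two interior control points can be evaluated directly. This is an illustrative sketch only (Nebulosity's exact curve construction is not documented here):

```python
def bezier(t, p0, p1, p2, p3):
    """Evaluate a cubic Bezier at t in [0, 1].

    p0 and p3 are the fixed endpoints; p1 and p2 correspond to the
    two movable control points (the blue dots). Each point is an
    (x, y) pair; the return value is the (x, y) point on the curve.
    """
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Control points on the diagonal give the identity curve (no change):
pt = bezier(0.5, (0.0, 0.0), (0.25, 0.25), (0.75, 0.75), (1.0, 1.0))
```

Dragging a control point off the diagonal bends the curve, which is what redistributes intensity between shadows and highlights.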
Note, in the Pre-set section,
there are entries for the last
curve you used and for a saved curve (created with the Save
button). These let you come
back and re-trace out the curve
used in the past. Note also that
the Status bar and the History
dialog will record the positions
of both control points so that
you could re-create a curve
another time.
provides a method for darkening the background during the DDP process (not part of Okano's
description; it may be set to 0 for pure DDP processing). Finally, the fourth slider, labeled
Edge Enhancement controls the amount of sharpening done during the DDP process (part of
Okano's description).
14 Supported Cameras
Nebulosity supports a wide range of cameras on both Windows and OS X. They are:
1. Canon DIGIC II, III, and 4 DSLRs (Windows and OS X): EOS 1000D/Rebel XS, EOS
450D/Rebel XSi, EOS 400D/Digital Rebel XTi, EOS 500D/T1i, EOS 350D/Digital Rebel
XT, EOS 50D, EOS 40D, EOS 30D, EOS 20D/20Da, EOS 5D Mark II, EOS 5D,
EOS-1D Mark III, EOS-1D Mark II N, EOS-1D Mark II, EOS-1Ds Mark III and
EOS-1Ds Mark II. See below for more details.
2. Fishcamp Starfish
3. Meade DSI, DSI Pro, DSI II, DSI II Pro, DSI III, and DSI III Pro.
4. QSI 500 series (Mac: Only on Intel-based machines on 10.5+)
5. QHY8 (Mac: Only on Intel-based machines on 10.5+)
6. SBIG (See below for more details)
7. Starlight Xpress SXV / SXVF / SXVR USB cameras (See below for more details)
In addition, on Windows, the following cameras are supported
1. Apogee Alta
2. Atik 16 series (all) / Artemis 429/285 cameras
3. Atik 314, 4000, and other Generation 3 series
4. CCD Labs Q285M / QHY2Pro
5. FLI
6. QHY8 using Tom van Den Eede's libusb driver
7. QHY8 using QHY's driver
8. QHY8 Pro
9. QHY9
10. Opticstar DS-335, DS-145, DS-142, and PL-130
11. Orion StarShoot Deep-Space Color Imager (Other Starshoot cameras supported via
ASCOM)
12. SAC10
13. SAC7 / SC1 long-exposure modified webcams / Atik 1 and Atik 2 cameras. See below for
more details
14. Any camera with an ASCOM v5 driver (See below for more details)
fear not, it is storing the raw sensor data. It just doesn't store things like the focal length setting
of the zoom lens (that's probably not hooked up to the camera since you're probably shooting
through a telescope). (Aside - it's not like CR2 isn't odd in its own right. It's a lossless JPEG-compressed, bit-packed image stored inside a massive TIFF-like hierarchy.)
There are several key things to note about the Canon DSLRs, however, that do set them apart
from other cameras.
(typically 1/25th of a second or shorter) via a pop-up window. In long exposure mode, the
program (such as Nebulosity) controls the exposure duration.
In Nebulosity, short exposure mode is selected by setting the exposure duration to 0. Anything
greater than zero will put the camera into long exposure mode. In short exposure mode, the
shutter speed is controlled via a pop-up window. Press the Advanced button in the Camera panel
and you'll get a slightly different version of the Advanced dialog described above. You'll find
Setup and Format buttons that let you configure the resolution, shutter speed, gain, frame rate,
etc.
Note: For the best images in both short and long exposure modes, always set the frame
rate to a low setting such as 5 FPS. This minimizes the amount of compression your
images undergo. Do this in the Advanced Dialog using the Setup and Format buttons.
In addition to these buttons, the Advanced Dialog has one added section for these cameras. A
"Read delay" can be entered. The default value should work on most systems but if you find you
are dropping frames, try adjusting this value (5 ms increments will be good). System speed and
specifics of your camera may dictate a slightly different value (10ms - 30ms for a typical range).
CCD twice. The net result is a less noisy image, but one that takes a bit longer to read and
process.
VBE balance color exp times: (SAC10 only) This feature attempts to fix the same
problem addressed by the Double Read option (the problem is sometimes called the
Venetian Blind Effect), but to do so with a single exposure. It intelligently balances the
intensity of the odd and even lines and can be quite useful for shorter exposures.
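The intensity-balancing idea behind this feature can be sketched as follows. This is a hypothetical illustration only; Nebulosity's actual VBE algorithm is not published:

```python
import numpy as np

def balance_even_odd_rows(img):
    """Illustration of the intensity-balancing idea behind VBE
    correction: rescale the odd rows so their mean matches the even
    rows', flattening the "Venetian blind" banding from interlaced
    readout. (Hypothetical sketch; not Nebulosity's actual code.)
    """
    out = np.asarray(img, dtype=np.float64).copy()
    even_mean = out[0::2].mean()
    odd_mean = out[1::2].mean()
    if odd_mean > 0:
        out[1::2] *= even_mean / odd_mean
    return out

# A strongly banded frame: alternating bright and dim rows.
banded = np.array([[100.0, 100.0], [50.0, 50.0],
                   [100.0, 100.0], [50.0, 50.0]])
flattened = balance_even_odd_rows(banded)
```

After balancing, the odd rows are scaled up to match the even rows, removing the striping without a second exposure.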
For maximum resolution, with perfect tracking (see below) and excellent seeing, a value of 1"/
pixel is a good target (some pros go to slightly smaller values still). For more typical conditions
with good seeing and good tracking, 1.5-2"/pixel is another fine target. Larger amounts of sky
covered per pixel will let you cover more sky and will not stress your mount's guiding accuracy
as much (see below), making values of 3-6"/pixel quite reasonable for many situations. In so
doing, you are trading off extreme resolution for wider swaths of sky and less difficulty guiding.
From this formula, you can see that there are two ways to adjust the final resolution in your
image. You can either adjust the pixel size of the camera or you can adjust the focal length of
your telescope. Neither seems trivial at first glance and, while they can be adjusted, it is only to a
limited degree. (Telescope focal length can be shortened with a focal reducer and lengthened with a
Barlow. CCD pixel size can be effectively increased by binning.) Thus, determining what
telescope to use for a given camera or vice versa is often best done before purchase.
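The formula referred to above is presumably the standard image-scale relation: arc-seconds per pixel = 206.265 × pixel size (µm) / focal length (mm). As a quick calculator:

```python
def image_scale(pixel_size_um, focal_length_mm):
    """Arc-seconds of sky covered per pixel.

    206265 is the number of arc-seconds in one radian; dividing by
    1000 absorbs the micrometer-to-millimeter unit conversion,
    giving the 206.265 constant used here.
    """
    return 206.265 * pixel_size_um / focal_length_mm

# e.g. 7.8 um pixels behind an 800 mm focal-length telescope:
scale = image_scale(7.8, 800.0)   # about 2.0"/pixel
```

Plugging in your own camera's pixel size and scope's focal length tells you immediately where you land on the 1-6"/pixel spectrum discussed above.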
Note: To see how much your mount is moving between images, right-click on a star to lay down a
"target" circle around it. This target will remain in the same place on the image across captures,
and let you see how far that star has moved.
your telescope during imaging, these imperfections will limit how long you'll be able to expose
each image. Exactly how long you can go will depend on the size of the periodic error and the
amount of sky covered by each CCD pixel. Wide-angle shots with 10"/pixel are a lot more
tolerant of periodic error than zoomed-in shots at 1"/pixel.
Many amateurs run shots unguided and end up stacking many 15-40s long exposures into one
long image. With enough images, and with the right exposure settings (see below), this can be
used to make very nice images.
But, what can you do to lengthen this time or to fix the problem entirely? Several mounts offer
Periodic Error Correction (PEC). On these mounts, you train the telescope to know what the
error is like by following a single star and correcting the errors using the telescope's controller.
The mount then learns these corrections and applies them automatically. This can reduce the
error quite a bit.
A second technique, often used on its own or in conjunction with periodic error correction is
guiding. Here, an image of a star is sent to either an eyepiece (manual guiding) or a second
camera (autoguiding) while your main imaging camera collects pictures. Two approaches are
taken. In one, an Off-Axis Guider is used to split some of the light away from the main camera
and towards this eyepiece or second camera. A small prism is placed so that the light split off is
light that would not have fallen on your main imaging camera anyway. In a second, another
telescope (a guide telescope) is attached to the imaging telescope. In both, this second view of a
star is used to determine when the telescope is drifting slightly off target and to correct this
problem by sending very small movement commands to the mount.
Many packages are out there to help you autoguide your mount. A free one from Stark Labs,
PHD Guiding, works well on a wide range of cameras and mounts and is designed to be "Push
Here Dummy" simple. Its goal is to make it so that you have little excuse for not trying
autoguiding.
15.3 Focus
Getting your camera sharply focused is critical to taking good pictures. The Frame and Focus
routine will get you close, but will often not get you to as sharp a focus as you could get. For
this, you'll want to make sure you're using the full Preview mode or the Fine Focus mode,
making only small adjustments to your focus between each shot.
You can evaluate your focus by simple visual inspection or by calculating several statistics about
a star. In particular, when a star is in focus, it will get more of its light on a central CCD pixel
than when out of focus. The Fine Focus tool offers an excellent focus aid that will help you
achieve critical focus.
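A crude version of this "light concentration" statistic can be computed directly. This is only a sketch; Nebulosity's Fine Focus tool computes its own, more sophisticated statistics:

```python
import numpy as np

def focus_metric(star_patch):
    """Fraction of a star's total light falling on its brightest
    pixel. Sharper focus concentrates the light, so a higher value
    means better focus.
    """
    patch = np.asarray(star_patch, dtype=np.float64)
    total = patch.sum()
    return float(patch.max() / total) if total > 0 else 0.0

# A tight star puts most of its light on the central pixel;
# a defocused star spreads the same light over its neighbors.
in_focus = np.array([[0, 1, 0], [1, 12, 1], [0, 1, 0]])
defocused = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]])
```

Comparing the metric between shots, rather than trusting one absolute value, is the practical way to use any focus statistic.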
In addition to these techniques, there are a few others you can try. One technique is to build or
buy a Hartmann Mask, a diffraction mask, or a Bahtinov Mask. They're not tough to build - many
consist of cutouts in pieces of cardboard and one is assembled out of TinkerToys (no, really). All
work by having you place something in front of the scope during focusing. When the star is nice
and sharp, the artifacts induced by each disappear or form a particular pattern. The Bahtinov
mask is an exceptionally easy to use version and is very effective.
A second technique to try is to use the fact that in focus stars get more of their light on the CCD
than out of focus stars. When in focus, you'll be able to see stars in the Preview or even Frame
and Focus that would disappear when out of focus. Adjust the exposure duration or gain until
you can just barely see a star. Adjust the focus to see if you can make it brighter or if it
disappears on either side of where you are right now (or, if you know you're a bit out, make the
star disappear with the duration or gain and reappear with the focus knob).
15.4.1 Rule #1: Use the Histogram to keep your background above the floor and bright bits below the ceiling.
First, you should always try to expose images so that the background sky is "off the floor" and
the stars (or at least the cores of the DSOs) are "off the ceiling". What this means is that you
don't want large parts of your image to have values of zero or of 65535 (the minimum and
maximum possible values). Any time a pixel has either of these values, we've lost information.
For example, let's say a star is at 65535 and one next to it is really twice as bright. Both get
recorded at 65535 and the final image doesn't show a difference between the two. Once we've
reached this maximum, we simply can't go any higher and so important details (such as the
difference between these stars) are lost.
The same holds true on the dim end. Let's say a faint arm of a galaxy is just barely brighter than
the skyglow around it (a very common situation). If your background sky is recorded as zero,
quite possibly the faint bit of the galaxy is at zero as well. No matter how many images you
stack, if they all have zero in them, you'll never be able to find that dim galaxy arm in your
image.
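You can quantify this rule with a quick check of how many pixels sit at the floor or the ceiling. A sketch (the function name and thresholds are illustrative):

```python
import numpy as np

def clipped_fractions(img, floor=0, ceiling=65535):
    """Fraction of pixels stuck at the floor and at the ceiling.

    If either fraction is large, the exposure is losing
    information to clipping: detail below the floor or above the
    ceiling can never be recovered by stacking.
    """
    img = np.asarray(img)
    n = img.size
    return (np.count_nonzero(img <= floor) / n,
            np.count_nonzero(img >= ceiling) / n)

floor_frac, ceil_frac = clipped_fractions(np.array([0, 0, 1200, 30000, 65535]))
```

A well-exposed frame should show only a tiny fraction at each extreme; a big spike at zero means the background needs more exposure, gain, or offset.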
How do you do this? The exposure duration is the most obvious method. Longer exposures will
brighten the image (moving the histogram to the right). In addition, increasing the gain and
offset controls will also brighten the image. Both will add more noise into the image, but a little
bit more noise is a lot better to have than ultra-black backgrounds. If you're running unguided
images, you'll likely use higher values of gain and offset than those running guided.
16 Menu Reference
16.1 File Menu
Open File: Loads any FITS (color or B&W, compressed or not, 8-64 bits, integers,
floating points, you name it), PNG, TIFF, JPG, BMP, or DSLR RAW (CR2, CRW, NEF,
etc.) file into memory and display. 8-bit/color files are automatically stretched to full
range.
Preview Files: Opens a dialog that lets you preview a set of files, deleting and renaming
them as desired. Useful for filtering images and for quick looks at files.
DSS Loader: Download an image from the Digitized Sky Survey and overlay an FOV
indicator to help you plan your shots.
FITS Header Tool: Lets you view the contents of the header of a FITS file.
Save current file (FITS): Saves the currently displayed image in FITS format using 16-bit
integers (0-65,535). Compression set by Preferences, Save as compressed FITS.
Save BMP file as displayed: Saves the currently displayed image in Windows BMP
(bitmap) format. The values of the black and white sliders set the black and white levels
in this, since BMP format is only 8-bits / color. How it looks is how it will save.
Save JPG file as displayed: Like Save BMP, but in JPEG format. Any JPEG quality /
compression (0-100) factor possible.
Save 16-bit/color TIFF: Saves the current image in TIFF format (lossless compressed or
uncompressed) at full 16-bit/color (aka 48-bit color) bit depth. This preserves all
information in your image for use in graphics programs.
Save 16-bit/color PNG: Saves the current image in PNG format (always lossless
compression) at full 16-bit/color (aka 48-bit color) bit depth. This preserves all
information in your image for use in graphics programs.
Save 16-bit/color PPM/PGM/PNM: Saves the current image in the appropriate variant of
these portable pixel map UNIX-based standard formats.
Save Color Components: Saves the current color frame as three separate FITS files
corresponding to the red, green, and blue components of the image.
Undo: Undo the last change to your image. Undo will let you step back from any changes
made by tools in the Image menu. By default, you can take 3 steps back. You can opt to
disable Undo in the Preferences menu (to run a bit faster) or to have virtually unlimited
undo capability.
Redo: Think you liked it better with that processing you just undid? Redo.
Show Image Info: Shows information about the current image including its size and the
various capture parameters that either were stored in the FITS header or will be stored
when the image is saved.
Measure Distance: Measure the distance in CCD pixels, arc-seconds, and arc-minutes
among up to 3 points in the image (right-click to set points first).
Edit / Create Script: Open a window that allows you to create a capture script and load /
save scripts.
Pre-process color images: Apply traditional dark frame, flat frame, and bias frame
corrections to correct for typical CCD artifacts. Apply these corrections to a series of fullcolor images (RGB FITS files).
Pre-process BW/RAW images: Apply traditional dark frame, flat frame, and bias frame
corrections to correct for typical CCD artifacts. Apply these corrections to a series of
either black and white (monochrome CCD) images or to RAW images from a one-shot
color camera (e.g., the SAC-10) prior to De-Mosaic color reconstruction.
Bad Pixels: Create a map of the bad pixels on your CCD and/or apply that map to remove
hot pixels.
Batch Demosaic + Square RAW Color and Batch Square BW: Batch versions of the tools
found in the Image menu.
Grade Image Quality: Grade a set of images to determine the sharpest (and fuzziest) of
the set.
Normalize Intensities: Normalize all images in a set to remove offset and scaling
differences.
Match Histograms: Equate a set of images' histograms to match that of a target image.
Align and Combine: Align and (optionally) combine a series of images. A dialog will
appear to let you control the method. Methods include: Fixed (no alignment), Translation
("one star", full-pixel shifts), Translation + Rotation (subpixel, including rotation such as
with an alt-az mount), Translation + Rotation + Scaling (same, but including a scaling
term), Drizzle, and Colors in Motion. For Fixed alignment, Standard Deviation based
stacking is an option.
Automatic Alignment (non-stellar): Automatically align frames without picking reference
stars.
Batch Geometry: Batch versions of rotation, binning, resampling and cropping.
Batch Conversion: Tools to convert a set of images from FITS to various graphics
formats or vice versa.
Batch One-shot Color with Line Filters: Batch versions of the tools in the Image menu
that extract portions of the color filter array.
De-Mosaic: Convert a single RAW CCD image currently displayed from a one-shot color
camera into a full-color image. Faster and better quality modes available.
Square B&W pixels: Squares pixels from black and white images.
One-shot color with line filters: Tools for reconstructing a RAW image taken with line
filters (e.g., Ha, Hb, OIII) from a one-shot color camera are provided along with a special
Low Noise 2x2 bin optimized for these cameras.
Crop: Resize the image by removing or trimming unwanted edges.
Mirror/Rotate Image: Tools are provided for 90 and 180 degree rotation and for mirroring
an image horizontally or vertically.
Resize Image: Resample the image to change its size using any one of 6 different
resampling algorithms (Box, Bilinear, B-Spline, Bicubic, Mitchell, Catmull-Rom).
Levels / Power Stretch: Apply a user-controlled stretch routine to the current image. You
can use this much in the same way a Levels tool is used to bring out details in the image.
Digital Development: Apply a user-controlled stretch routine to the current image
designed to make CCD images look more like film images. An excellent way to bring out
faint detail in your images.
Curves: Create a curve to transform the intensity of your image. Very powerful stretching
tool.
Zero Min: Add or subtract a constant from the current image so that its minimum will be
zero.
Scale Intensity: Pixel math to add, subtract, multiply, etc. each pixel.
Adjust Color Background (Offset): Subtract user-specified values from the red, green, and
blue color channels (e.g., from skyglow) to balance the color of the background in the
image.
Adjust Color Scaling: Apply a user-controlled scaling to the red, green, and blue color
channels separately to help balance the image.
Auto Color Balance: Automatically balance the color (both offset and scaling) to remove
a color-cast.
Adjust Hue / Saturation: Tool to adjust the hue, saturation, and luminance of the image.
Discard Color: Remove all color information from an image (extract the luminance data).
LRGB Color Synthesis: Create a color image from separate files using RGB, traditional
HSI-based LRGB, or Color Ratio based LRGB
Bin/Blur Image: Perform 2x2 binning using simple summation, simple averaging, or an
adaptive algorithm. These reduce your image size by 2x. Or, blur your image with your
choice of 5 levels of blur (Gaussian kernel sigma=1-3, 7 & 10).
Sharpen Image: Four tools are provided. Traditional and Laplacian sharpen tools based
on 3x3 kernels are provided along with an Unsharp mask and the Tighten Star Edge tool.
This applies an edge-detection routine (not a typical "sharpen" or "unsharp mask") to
tighten stars and enhance edges in your image.
Vertical smoothing (deinterlace): Smooth the image vertically to remove effects from
interlaced sensors.
Adaptive median noise reduction: Blend a median-based denoised image with your
original image to remove noise in the background.
GREYCstoration noise reduction: Use the powerful tool from GREYC Labs to reduce
noise while preserving details and important features in your image.
17 Preferences
17.1 Capture
DSLR Long Exposure Adapter: Without a "bulb" adapter cable ("USB only, 30s max"),
DIGIC II cameras will be limited to 30 second exposures. Here, select which long-exposure adapter you have. Please make this selection before connecting to the camera.
DSLR Save location: Should the images be downloaded to the computer, saved on the
compact flash card, or both?
Color acquisition mode: When taking images with a one-shot color camera, what should
be done about converting them to full-color?
o RAW CCD data: Do no reconstruction and keep the data as RAW CCD data.
When saved, one FITS file with the raw data from the CCD (effectively a black
and white image that contains the color information) will be saved. You will likely
want to De-Mosaic the image prior to alignment and stacking or use Colors in
Motion.
o RGB Optimize speed: Do color reconstruction on the fly during image acquisition
and try to go for the fastest good color reconstruction at the expense of a bit of
quality.
o RGB Optimize quality: Do color reconstruction on the fly during image
acquisition and try to go for the highest quality color reconstruction at the expense
of a bit of speed.
Capture alert sound: Give an audible alert at the end of each image or the end of the
entire series.
Use max binning in Frame and Focus: Select to have the highest bin factor used during
Frame and Focus. Deselecting will have it run in 2x2 bin mode.
Enable Big Status Display during capture: During series captures, the progress will be
displayed in a pop-up dialog for easy viewing if you've left the computer unattended.
TEC / CCD Temperature set point: For cameras that can regulate the cooling, this is the
desired temperature (degrees Centigrade).
17.2 Output
Save as compressed FITS: FITS files are saved in lossless compressed FITS format to
save space with no loss of data integrity (default). Note, however, that some applications
do not support this aspect of the FITS standard.
Save in 32-bit floating point: FITS files are saved in the 32-bit floating point format used
internally to ensure no possible loss of data resolution at a cost of files being twice as
large.
Use 15-bits (0-32767) instead of 16-bits: FITS files are saved in data ranging from
0-32767 rather than 0-65,535 if this is selected. Some programs (e.g., Iris) require this
format.
Color file format: When saving full-color data from a one-shot color CCD camera (e.g.,
the SAC-10), this preference controls how the color data are to be saved.
o RGB FITS - ImagesPlus: One FITS file with red, green, and blue components of a
reconstructed (de-mosaic'ed) full-color image stored inside in the style expected
by ImagesPlus (separate "HDU" per color) (default).
o RGB FITS - Maxim / AstroArt: One FITS file with red, green, and blue
components of a reconstructed (de-mosaic'ed) full-color image stored inside in the
style expected by Maxim DL and AstroArt (a "3-axis" or "3D" image with color
along the third axis).
o 3 FITS files: Reconstruct the full color image and save the red, green, and blue
data in three separate files. This should only be used if Nebulosity is not to be the
primary pre-processing application and if the application to be used does not
support RGB FITS (e.g., Iris).
Series naming convention: Choose to have images in a series named by a 3-digit code or
with a UTC date code (DDD_HHMMSS).
17.3 Processing
Use adaptive stacking: For the stacking techniques that you use on your light frames
(Translation, Drizzle, Colors in Motion), the image will automatically have the intensity
scaled to use the full range of the 16-bit file format used. Adding images and averaging
images each have their strengths and weaknesses. The Adaptive stacking technique sidesteps the weaknesses of each and lets you get the most out of your data. The only
downside is that a stack of 30s images and a stack of 3m images would appear equally
"bright" after stacking this way.
Flat processing: What should be done to your flats when you apply them? You can
choose to have nothing done or to have several filters applied. If you're using a one-shot
color camera, you'll at least want to apply the 2x2 mean filter (this removes the Bayer
matrix).
Undo / Redo settings: You can opt for either no undo capability (to run faster and save
hard disk space), 3 steps worth of undo (default), or virtually unlimited undo capacity.
Manually override color reconstruction: Typically, Nebulosity will attempt to determine
what kind of camera a one-shot color file comes from and set the various demosaic
options automatically. At times, you may wish to override this automatic behavior and
specify offsets, array types, color mixing, etc. manually. Enabling this preference will
bring the manual color reconstruction dialog up each time so that you can override any
automatic behavior.
DSLR White Balance / IR filter: Ideally, the pixels are white balanced prior to actually
implementing the demosaic of a RAW image. For most cameras, this white balance is
known a priori, but DSLRs can be stock or modified. Choose the setting here that best
corresponds to your camera setup. Note, that at times, if there are severely saturated
areas, this may lead to a pink area in the saturated zones. If this occurs, the Straight Color
Scale option can be used.
Mirror Lockup delay (ms): How many milliseconds should Nebulosity wait after lifting
the mirror on a DSLR when mirror lockup mode is engaged?
17.4 Misc
Clock settings: In the control panel, Nebulosity can display a small clock that will show
the current time in a range of time formats or show the CCD's current temperature. The
time formats all use your computer's internal clock as the starting point and convert that
into other times. Note that local sidereal time and Polaris RA depend on Nebulosity
knowing your longitude.
o No clock: Hide the clock
o Local time: The current local time
o UT/GMT time: The current Universal Time (or Greenwich Mean Time)
o GMT Sidereal: GMST or Greenwich Mean Sidereal Time
o Local sidereal: The current local sidereal time (useful in finding objects with
setting circles)
o Polaris RA: Polaris' current right ascension (useful in using polar alignment
scopes)
o CCD Temperature: Current temperature of the CCD in centigrade.
Longitude: Local sidereal and Polaris RA require knowing your current longitude. Enter
it in decimal notation (e.g., -77.1 not H:M:S) with west (e.g., USA locations) being
negative.
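Converting a longitude from degrees:minutes:seconds into the required decimal notation is straightforward (the helper name here is illustrative):

```python
def dms_to_decimal(degrees, minutes, seconds, west=False):
    """Convert a longitude from D:M:S to the decimal form the
    Longitude preference expects; western longitudes (e.g., USA
    locations) come out negative.
    """
    value = abs(degrees) + minutes / 60.0 + seconds / 3600.0
    return -value if west else value

lon = dms_to_decimal(77, 6, 0, west=True)   # -77.1, matching the example above
```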
18 Scripts
Nebulosity provides you with the ability to automate your capture process by using scripts.
Scripts are simple text files that list a series of commands for Nebulosity to perform in sequence.
For example, the script shown here would set the output directory to be \ccd\Oct22_05 on your
"C" drive (usually the letter associated with your hard drive). If the directory didn't exist,
Nebulosity would attempt to create it. It would set the output file name to be "m27", the duration
to be 2s (2000 ms), the gain to be 18, the offset to be 28 and then capture 10 images in a series
(m27_1.fit, m27_2.fit, etc). It would then pause and alert the user to "Setup for darks" (i.e., place
the lenscap over the telescope). After the user hits OK, it would then capture 10 dark frames
(dark_1.fit, etc.)
SetDirectory c:\ccd\Oct22_05
SetName m27
SetDuration 2000
SetGain 18
SetOffset 28
Capture 10
PromptOK Setup for darks
SetName darks
Capture 10
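Because scripts are plain text, a script like the one above can also be generated programmatically, for example from Python (a hypothetical helper; the command names are taken verbatim from the example script):

```python
def make_capture_script(directory, name, duration_ms, gain, offset, frames):
    """Build the text of a Nebulosity capture script like the one
    shown above. Hypothetical convenience helper, not part of
    Nebulosity itself.
    """
    lines = [
        "SetDirectory " + directory,
        "SetName " + name,
        "SetDuration " + str(duration_ms),
        "SetGain " + str(gain),
        "SetOffset " + str(offset),
        "Capture " + str(frames),
    ]
    return "\n".join(lines) + "\n"

# Reproduce the m27 light-frame portion of the example:
script = make_capture_script(r"c:\ccd\Oct22_05", "m27", 2000, 18, 28, 10)
```

Writing the returned string to a `.neb` text file gives a script Nebulosity can run directly.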
Nebulosity's scripts can be created dynamically using the operating system's clipboard. If
commands are placed on the clipboard and Nebulosity is in a special "Listen" mode, it will
suspend reading commands from the script file and instead read them from the clipboard. This
allows other programs to dynamically control Nebulosity's actions.
You can write scripts in any text editor (save in "ASCII text" format) or in Nebulosity's built-in
editor. Simply pull-down Create / Edit Script from the File menu. Here, you can start typing
commands or load an existing script. When done, you'll likely want to save your script (Save
button) and then press Done. Standard Windows shortcuts for copy (Ctrl-C), cut (Ctrl-X), and
Paste (Ctrl-V) work within the editor window.
When you're ready to execute the script, simply pull down Run Script from the File menu.
Nebulosity will then first verify that it's a valid script. Then, it will go through line by line,
executing each command until it reaches the end of the file. As it does so, the Status bar will
keep you apprised of what Nebulosity is currently doing. Pressing the Abort button will cancel
the script at any time.
Note: Commands act just as if you were to do them in the GUI. So, if you've already set
something in the GUI or if it is the default, there is no need to enter it in the script. For
example, since the default is to have the CCD amplifier control enabled (so that the amp
is off during exposure), there is no need to write "SetAmplifierControl 1" in every script
you write.
Tip: Script files can contain extra spaces or blank lines if you want to make them look
cleaner when writing them. Nebulosity will simply skip any extra spaces or lines it
finds.
Tip: If you want to place a "comment" to yourself in a script, simply put a "#" character
at the beginning of the line. Nebulosity will ignore that whole line. For example:
# Script used to capture data on 10/22/05
SetName M51
Tip: You can execute scripts at startup by passing the script name as a command-line
argument. For example, "nebulosity script.neb" will automatically execute script.neb.