Category Archives: Digital photo files

Various files to assist in the understanding of digital photography.

9 Compositional Devices

Enclosed are nine types of compositional devices discussed in the PowerPoint presentation (see link). These devices give you ways to consider how to compose the elements within your picture.

Below are additional examples of how these compositional devices work within the presentation of a photographic image:

BALANCE: This can be a vertical balance (top and bottom) or horizontal balance (left to right)

FILL THE FRAME: Filling the frame with the subject

FOLLOW THE EYES: This can be from a person or statue

FRAMING: The use of other objects to surround the main subject

LEADING LINES: A literal or implied line is formed throughout your composition that helps the viewer move throughout your image

LINES: The use of diagonal, horizontal, vertical, or curved lines that dominate the image to harmonize with the main subject

POINT OF PERSPECTIVE: The vantage point the viewer takes on can be either high or low

RULE OF THIRDS: Aligning the main subject on the thirds of your picture, either vertically or horizontally, forces you to move the main subject away from the corners and center

SYMMETRY AND PATTERN: Finding repeating structures or patterns within your image and harmonizing them with the main subject, or making them your main subject

Panning

The diagrams and photos show how the direction of a moving object affects the amount of blur that will result. When an object is traveling parallel to the plane of the film (above), considerable movement is likely to be recorded on the film and the object blurred, unless the shutter speed is fast. If the object is moving directly toward or away from the camera (below), there is no sideways movement recorded on the film and so a minimum of blur, even at a relatively slow shutter speed. When the camera is panned or moved in the same direction as the object (below, right), the object will be sharp and the background blurred.

A blurred image can occur when an object moves in front of a camera that is not moving, because the image projected onto the film by the lens will move. If the object moves swiftly or if the shutter is open for a relatively long time, this moving image will blur and be indistinct. But if the shutter speed is increased, the blur can be reduced or eliminated. You can control this effect and even use it to advantage. A fast shutter speed can freeze a moving object, showing its position at any given instant, whether it be a bird in flight or a football player jumping for a pass. A slow shutter speed can be used deliberately to increase the blurring and accentuate the feeling of motion.

In the picture at far top left, the bicycle moved enough during the relatively long 1/30-second exposure to leave a broad blur on the film. In the next photograph, taken at a shutter speed of 1/500 second, the bicycle is much sharper.

A moving subject may vary in speed and thus affect the shutter speed needed to stop motion. For example, at the peak of a movement that reverses (such as the peak of a jump just before descent), motion slows, and even a relatively slow shutter speed will record the action sharply.

The amount of blurring in a photograph, however, is not determined simply by how fast the object itself moves. What matters is how far an image actually travels across the film during the exposure. In the third photograph, with the rider moving directly toward the camera, the bicycle remains in virtually the same position on the film. Thus, there is far less blurring even at 1/30 second.

A slow-moving object close to the camera, such as a bicycle 10 feet away, will cross more of the film and appear to blur more than a fast-moving object far away, such as a jet in flight. A telephoto lens magnifies objects and makes them appear closer to the camera; it will blur a moving subject more than a normal lens used at the same distance.
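To make this concrete, here is a rough back-of-the-envelope sketch in Python. It uses the approximation that image scale is focal length divided by subject distance; the lens, distances, and speeds are hypothetical numbers chosen to echo the bicycle-versus-jet comparison above, not values from the text.

    # Rough estimate of the blur recorded on film for a subject moving
    # parallel to the film plane. Image scale is approximated as
    # focal length / subject distance; all numbers are illustrative.

    def blur_on_film_mm(focal_length_mm, distance_mm, speed_mm_per_s, shutter_s):
        magnification = focal_length_mm / distance_mm  # approximate image scale
        return magnification * speed_mm_per_s * shutter_s

    # A bicycle (~4.5 m/s) 10 feet (~3 m) away, 50mm lens, 1/30 second:
    near_bike = blur_on_film_mm(50, 3_000, 4_500, 1 / 30)      # ~2.5 mm of blur
    # A jet (~250 m/s) 5 km away, same lens and shutter speed:
    far_jet = blur_on_film_mm(50, 5_000_000, 250_000, 1 / 30)  # ~0.08 mm

    print(f"bicycle: {near_bike:.2f} mm, jet: {far_jet:.2f} mm")

Even though the jet moves far faster, its image barely crawls across the film, while the nearby bicycle’s image sweeps millimeters during the same exposure.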

The photograph above right shows the effect of panning. The camera was moved in the same direction the bicycle was moving. Since the camera moved at about the same speed as the bicycle, the rider appears sharp while the motionless background appears blurred.

Successful panning takes both practice and luck. Variables such as the exact speed and direction of the moving object make it difficult to predict exactly how fast to pan. Decide where you want the object to be at the moment of exposure, start moving the camera a few moments before the object reaches that point, and follow your motion through as you would with a golf or tennis stroke. The longer the focal length of the lens, the less you will need to pan the camera; with a telephoto lens, a very small amount of lens movement creates a great deal of movement of the picture image.

Modes and Color Space

Modes and Color Spaces

HOW COMPUTERS WORK WITH COLOR

Computers create colors in several ways. When you scan an image or capture it with a digital camera, a set of numbers is created to represent the colors of each pixel. However, there is no standard way to assign numbers. Instead, there are many systems for numbering colors, including some systems that were devised before the age of computers. Scientists call these systems color spaces. A color space numerically describes all the colors that can be created by a device such as a camera or a printer. You do not need to understand the details of color spaces, but you need to be aware that color spaces differ: some contain more colors than others, making it impossible to translate colors exactly from one color space to another. For example, your scanner and camera use color spaces that contain more colors than your printer’s color space, so you must make adjustments to your images in order to get good prints.

Use RGB color for capture and display. Scanners and digital cameras use the RGB color space. In RGB, each pixel is given a separate number for each of the three primary colors (red, green, and blue). Scanners and cameras have red, green, and blue colored filters placed over the sensors that measure the light intensity. Thus scanners and cameras are like color film, where each of the three light-sensitive layers is sensitive to only one RGB color. RGB mode is usually 24-bit RGB; each color is assigned eight of the 24 bits. Some cameras and scanners may use 16 or more bits per color to achieve better quality. Adobe Photoshop software can edit 48-bit RGB images, where each color is assigned 16 of the 48 bits. Computer monitors are also RGB devices. A monitor’s screen creates color when its red, green, and blue phosphors glow after being struck by the tube’s electron beams.
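As a minimal illustration of how those 24 bits divide among the primaries, the Python sketch below packs one pixel’s red, green, and blue values (8 bits each) into a single integer and unpacks them again. The helper functions are hypothetical, not part of any particular imaging program.

    # One 24-bit RGB pixel as a single integer: 8 bits per primary.

    def pack_rgb(r, g, b):
        return (r << 16) | (g << 8) | b  # red in the high 8 bits, then green, then blue

    def unpack_rgb(value):
        return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

    pixel = pack_rgb(255, 200, 0)
    print(hex(pixel), unpack_rgb(pixel))  # 0xffc800 (255, 200, 0)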

RGB cannot create photographs on paper, however, because intermediate colors like cyan, magenta, and yellow cannot be created by mixing RGB inks. When red and green phosphors mix on a computer monitor, yellow is the result. This is because RGB is an additive color space, where color mixtures are created when light of the three primary colors is added to a dark background (for example, a blank monitor screen). However, when red and green inks mix on paper, the result is black. This is because ink on paper is a subtractive color space: the background (paper) is white, and inks subtract colors from white. Red ink subtracts green and blue, while green ink subtracts red and blue. Working together, they subtract red, green, and blue. The paper is black because no light is reflected.
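The additive mixing described above can be sketched in a few lines of Python: adding full-strength red light to full-strength green light yields yellow. The 0-255 channel values and the clipping are illustrative assumptions about 8-bit color.

    # Additive mixing of light, as on a monitor screen.

    def add_light(c1, c2):
        # sum each channel, clipped to the 0-255 range of 8-bit color
        return tuple(min(a + b, 255) for a, b in zip(c1, c2))

    red = (255, 0, 0)
    green = (0, 255, 0)
    print(add_light(red, green))  # (255, 255, 0) -> yellow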

Use CMYK color for prints on paper. Since a subtractive system of colors must be used in printing, the CMYK color space is used. CMYK represents the three subtractive primary colors (cyan, magenta, and yellow) plus black. Unlike RGB inks, cyan, magenta, and yellow inks can create intermediate colors. For example, yellow ink plus magenta ink creates red. The yellow ink subtracts blue while the magenta ink subtracts green. Both inks reflect red, so only red appears on the paper.

Why is black ink used? Black (the K in CMYK) is necessary because the CMY inks used in printing are not color perfect. When all three are mixed together, they create brown instead of black. Black ink is added to improve black and near-black tones. Adding a fourth color makes CMYK a 32-bit mode, because each of the four colors uses 8 bits.
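The sketch below shows one common textbook formula for converting an RGB value to CMYK with simple black generation. It only illustrates the subtractive arithmetic; real conversions, such as Photoshop’s, rely on ICC color profiles and ink-specific adjustments.

    # Naive RGB -> CMYK conversion with simple black generation.

    def rgb_to_cmyk(r, g, b):
        r, g, b = r / 255, g / 255, b / 255  # normalize 8-bit channels to 0.0-1.0
        k = 1 - max(r, g, b)                 # black ink replaces the common dark component
        if k == 1:                           # pure black: no CMY ink needed
            return (0.0, 0.0, 0.0, 1.0)
        c = (1 - r - k) / (1 - k)
        m = (1 - g - k) / (1 - k)
        y = (1 - b - k) / (1 - k)
        return (c, m, y, k)

    print(rgb_to_cmyk(255, 0, 0))  # red -> (0.0, 1.0, 1.0, 0.0): full magenta and yellow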

The color space of a computer image is called its mode. Imaging software calls a color space like CMYK a mode. In Adobe Photoshop, modes are accessed with the Image > Mode command. When a new mode is selected, the current image is converted from its original mode into the selected mode. There are several modes besides RGB and CMYK. Grayscale mode is often used in black-and-white photography; it’s a colorless mode, measuring only brightness. It creates eight-bit images that have 256 levels of gray. Indexed color is a mode used to create color images that use only 256 colors (8 bits) or fewer. It is of limited use for most photographic images, but is ideal for creating graphics for the Internet.
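Outside Photoshop, the same kinds of mode conversions can be sketched with the Pillow imaging library in Python (an assumption; the text describes Photoshop’s Image > Mode menu). The file name photo.jpg is hypothetical.

    # Mode conversions with Pillow; "photo.jpg" is a placeholder file.
    from PIL import Image

    img = Image.open("photo.jpg")       # typically opens in RGB mode
    gray = img.convert("L")             # grayscale: 8-bit, 256 levels of gray
    indexed = img.quantize(colors=256)  # indexed color: a 256-entry palette
    cmyk = img.convert("CMYK")          # naive CMYK conversion, no ICC profile
    print(img.mode, gray.mode, indexed.mode, cmyk.mode)  # RGB L P CMYK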

There are possible conflicts between RGB and CMYK modes. When an RGB image is printed on paper, problems in conversion may occur because the way the image looks on an RGB monitor rarely matches the print created with CMYK inks. Photographers usually edit images in RGB mode because most home, school, and small-business printers, such as inkjet and color laser printers, do a fairly good job of translating RGB colors into CMYK colors. However, when preparing images for printing presses, photographers usually prefer working in the software’s CMYK mode. When CMYK mode is selected, the software reduces the range of colors that the monitor can display in order to mimic the colors that the inks can produce.

Picture Size

Picture “Size”

PPI, DPI, AND OTHER IMAGE MEASUREMENTS

How big is your picture? You must know how pixel data is measured when you prepare to scan an image, print it, or perform other operations.

Physical size is a familiar place to start. It is the width and height of the image, usually measured in inches or sometimes in other units, like centimeters. For example, a physical size might be 8 inches wide by 10 inches high.

Pixel dimensions are the number of pixels along the height and width of an image. The number of these pixels is determined by the settings of your scanner or digital camera at the time the image is digitized. For example, a scanned 8 x 10-inch print might have pixel dimensions of 2,400 pixels wide by 3,000 pixels high, or simply 2,400 x 3,000. Generally, the more pixels you have, the better the quality of the image.

An image with very few pixels. A 35mm slide (1.5 inches x 1 inch) scanned at only 20 samples per inch (20 dpi) results in an image of 30 x 20 pixels. Notice how the image has very little resolution (detail). The image’s file size would be only 1,800 bytes because there are only 600 pixels, and each pixel uses 3 bytes (24 bits). The calculation is 30 x 20 pixels x 3 bytes per pixel = 1,800 bytes.

An image with many pixels. The same slide was scanned at 300 samples per inch (300 dpi), which creates an image of 135,000 pixels (450 x 300). Notice the image has much more resolution than the 20 dpi image. The image’s file size would be 405,000 bytes (approximately 400 kilobytes). The calculation is 450 x 300 pixels x 3 bytes per pixel = 405,000 bytes.
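The arithmetic in both examples reduces to one multiplication, sketched here in Python for an uncompressed 24-bit image (3 bytes per pixel):

    # Uncompressed file size = width x height x bytes per pixel.

    def file_size_bytes(width_px, height_px, bytes_per_pixel=3):
        return width_px * height_px * bytes_per_pixel

    print(file_size_bytes(30, 20))      # 1,800 bytes: the 20-samples-per-inch scan
    print(file_size_bytes(450, 300))    # 405,000 bytes: the 300-samples-per-inch scan
    print(file_size_bytes(2400, 3000))  # 21,600,000 bytes: the 8 x 10-inch print scanned at 300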

Pixels per inch (ppi) or resolution (the apparent sharpness of an image) is calculated when an image is printed or displayed on a monitor. Assuming that the number of pixels remains constant, the pixels per inch, and thus the resolution, will change as the size of the printed or displayed image increases or decreases. For example, as you increase your print size, the same total number of pixels spreads out to fill a bigger space. Each pixel has to increase in size, which decreases the resolution and makes the image appear less sharp than a smaller print of the same image.

Print size affects resolution. If the number of pixels remains constant, increasing the size decreases the resolution. Here, the same image file was printed at three different sizes. As the image increases in physical size, the size of each pixel also increases, making the image appear less sharp.

Note that, in the largest print, the pixels have become so large that they are individually visible. This is similar to what happens when you make a very large darkroom print from a small negative. The grain in the film is magnified so much that it becomes visible in the print.
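The same trade-off in a short Python sketch, assuming the 2,400-pixel-wide image from earlier printed at a few hypothetical widths:

    # With a fixed number of pixels, resolution (ppi) falls as the print grows.

    def ppi(pixels_wide, print_width_inches):
        return pixels_wide / print_width_inches

    for width_in in (4, 8, 16):
        print(f"{width_in}-inch-wide print: {ppi(2400, width_in):.0f} ppi")
    # 4 in -> 600 ppi, 8 in -> 300 ppi, 16 in -> 150 ppi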

Dots per inch (dpi) is used for several measures, which results in confusion. DPI is used as a measure of printer resolution. It is the number of dots of ink (per inch) produced by a printer as it prints an image. In general, the more dots per inch, the clearer and more detailed the image. Dots per inch also describes the maximum number of pixels per inch that a monitor or LCD panel can display without blurring the pixels together. However, the term dots per inch is also improperly used to describe a scanner setting: the number of times per inch a scanner “samples” an image to convert it to a grid of pixels. This should preferably be called samples per inch.

File size measures the amount of disk space occupied by an image. It is affected by pixel dimensions and bit depth, plus other factors such as file format and how much the image is compressed for storage. Usually, a bigger file size means better picture quality, but unfortunately you can get too much of a good thing. As the file size increases, the computer has to process and store more data about the picture. If the file is very large, this can cause problems by greatly increasing the time the computer takes to execute each command or by greatly increasing the amount of computer storage space needed. Trade-offs may have to be made between the quality desired and the file size that can be conveniently handled by your computer.

File size measures the amount of disk space or RAM occupied by an image, usually listed as the number of bytes the file contains.

bit                The smallest unit of digital information
byte               8 bits
kilobyte (KB)      1,000 bytes
megabyte (MB)      1,000,000 bytes
gigabyte (GB)      1,000,000,000 bytes

(The numbers are rounded off. A binary kilobyte, for example, actually contains 1,024 bytes.)

How you set up a file depends on the final output you want. For example, to calculate the number of dots per inch for a scan, you must determine how big you want the final print to be. We will discuss in class how to specify the pixels per inch and other file characteristics when setting up an image file, depending upon which printer we will be using.
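As a preview of that calculation, here is a sketch in Python that works backward from the desired print to a scanner setting. The slide size, print size, and target resolution are illustrative assumptions, and the sketch ignores cropping.

    # Samples per inch needed so the final print reaches a target ppi.

    def scan_spi(original_width_in, print_width_in, target_print_ppi):
        pixels_needed = print_width_in * target_print_ppi
        return pixels_needed / original_width_in

    # A 1.5-inch-wide 35mm slide printed 6 inches wide at 300 ppi:
    print(scan_spi(1.5, 6, 300))  # 1200.0 samples per inch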

Pixels and Bit Depth

Pixels into Pictures

A picture that you take with film in an ordinary camera is in analog form. Analog means that the image’s tones and colors are on a continuously variable (analog) scale, like the volume on a stereo, which changes in smooth gradations from soft to loud. Similarly, the image on a film negative has a smooth, continuous scale of tones, with unbroken gradations from light to dark.

The picture is converted to a digital form, called a bitmap image or raster image. The image is sampled at a series of locations, with each sample recorded as a single, solid-toned pixel (short for picture element). The pixels that make up the image are arranged in a grid of rows and columns, like the squares on a sheet of graph paper. In the finished image, the pixels are so small that you don’t see them individually; instead, you see a smooth gradation of tones. You can see pixels if you enlarge an image on your computer (see illustrations, this page).

The original analog image is converted into digital form by assigning each pixel a set of numbers to designate its position, brightness, and color. Once the image is digitized, you can use editing software, like Adobe® Photoshop®, to select and change any group of pixels in order to change color, to lighten or darken, and so on. The computer does this by changing the numbers assigned to each pixel. To put an image into digital form, the image is divided into a grid containing many tiny spots called pixels; the location, brightness, and color of each pixel are recorded as a series of numbers that are then saved by the computer for later use.
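A minimal Python sketch of this idea: a tiny 2 x 2 image stored as nested lists of (red, green, blue) numbers, “edited” by changing those numbers. The pixel values are purely illustrative.

    # A digitized image is just a grid of numbers.
    image = [
        [(255, 0, 0), (0, 255, 0)],      # row 0: a red pixel, a green pixel
        [(0, 0, 255), (128, 128, 128)],  # row 1: a blue pixel, a gray pixel
    ]

    # "Lightening" means raising each channel value (clipped to 255):
    lightened = [[tuple(min(c + 40, 255) for c in px) for px in row]
                 for row in image]
    print(lightened[1][1])  # (168, 168, 168): the gray pixel, lightened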

To see pixels on your computer’s monitor, select a part of the image and enlarge it. If you are using Adobe Photoshop, for example, you can take the following steps:

•  Open a picture file.

•  Select the Zoom tool (shown in the toolbox at right as a magnifying glass) by clicking on it.

•  Place the magnifying glass on the image. Click repeatedly on the image to zoom the image to greater degrees of enlargement.

• To reverse the zooming and return the image to the original size, click repeatedly on the image while holding down the keyboard’s Option or Alt key.

Each square is a pixel. Notice that each contains a solid tone; the color or brightness varies from pixel to pixel, but never within a pixel.

Bit Depth

The bit depth, or number of bits each pixel contains, determines the colors and tones in an image. Computers record information in binary form, using combinations of the digits 1 and 0 (zero) to form large numbers. A bit is the smallest unit of information, consisting of either a 1 or a 0. A pixel may contain as little as one bit, which is either a 1 or a 0, or it may contain 24 bits or even 48 bits.

The greater the bit depth, the smoother the gradation from one pixel to another, because each pixel will be able to render a greater selection of possible colors and tones. A picture composed of 1-bit pixels (consisting of either a 1 or a 0) will have only black or white pixels.

One bit per pixel produces two tones, black and white. (Left) The image can have only two tones, black and white. (Center) An extreme enlargement of nine of the pixels. (Right) How the computer represents the pixels with numbers.

An 8-bit pixel is composed of eight bits in a row. There are 256 ways to arrange eight 0s or 1s, starting with 00000000 (zero) and ending with 11111111 (255), so an 8-bit pixel can be any of 256 colors. But 256 colors are not enough for a good color reproduction. However, 8 bits will produce a very good black-and-white pixel showing any of 256 different black, gray, and white tones.

An 8-bit pixel has 256 black, white, and gray tones available. (Left) This is enough for an excellent black-and-white rendition. (Center) An enlargement of nine pixels. (Right) The same tones are represented by numbers: 0 is black, 31 to 223 are various shades of gray, and 255 is white. An 8-bit pixel can also produce any of 256 colors, enough for a limited color rendition.

To depict a picture with realistic colors and tonality, 24-bit pixels are needed. A pixel containing 24 bits can represent any one of over 16 million colors (see illustration this page). A 48-bit pixel can represent any of over 280 trillion colors.
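All of these color counts come from the same rule: a pixel with n bits can hold 2 to the power n distinct values. A quick check in Python:

    # Distinct values available at each bit depth.
    for bits in (1, 8, 24, 48):
        print(f"{bits:>2}-bit pixel: {2 ** bits:,} possible values")
    # 1 -> 2, 8 -> 256, 24 -> 16,777,216 (~16 million),
    # 48 -> 281,474,976,710,656 (~281 trillion)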

A 24-bit pixel can be any of more than 16 million colors. (Left) It can produce an image comparable to a conventional color film photograph. (Center) An enlargement of 12 pixels. (Right) Here the colors are produced by mixing the primary colors (red, green, and blue). Each of the three colors has eight of the pixel’s 24 bits, or 256 possible tones. Note that 255 indicates the maximum amount of a color; 0 indicates that none of the color is present.

Increasing the bit depth has a price. An image composed of 24-bit pixels takes up three times as much disk storage as one composed of 8-bit pixels. It also requires three times as much RAM (random-access memory) to display the picture, and computer processes take three times as long. A 48-bit image requires twice the computer resources of a 24-bit image.

Multiple light portrait set-up

These lighting setups model most faces in a pleasing manner and can be used to improve some features – for example, using broad lighting to widen a thin face. A typical studio portrait setup uses a moderately long camera lens so that the subject can be placed at least 6 feet from the camera; this avoids the distortion that would be caused by having the camera too close to the subject. The subject’s head is often positioned at a slight angle to the camera, turned just enough to hide one ear.

The first photograph shows short or narrow lighting where the main light is on the side of the face away from the camera. This is the most common lighting, used with average oval faces as well as to thin down a too-round face.

The next photograph shows a broad lighting setup, where the side of the face turned toward the camera is illuminated by the main light. This type of lighting tends to widen the features, so it is used mainly with thin or narrow faces.

Short lighting places the main light on the side of the face away from the camera. The next four photographs show the separate effect of each of the four lights in this setup. Photofloods were used here. Flash units can be used instead, but when you are learning lighting, the effects of different light positions are easier to judge with photofloods.

The main light in a short lighting setup is on the side of the face away from the camera. Here a 500-watt photoflood is placed at a 45° angle at a distance of about 4 feet. The main light is positioned high, with the catchlight, the reflection of the light source in the eyes, at 11 or 1 o’clock.

The fill light, a diffused 500-watt photoflood, is close to the camera lens on the opposite side from the main light. Since it is farther away than the main light, it lightens but does not eliminate the shadows from the main light. Catchlights from the fill are usually spotted out in the final print.
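The fill’s weaker effect can be sketched with the inverse-square law: illumination falls off with the square of the distance from the light to the subject. The Python lines below assume the 4-foot main-light distance mentioned above and a hypothetical 6-foot fill distance; the inverse-square law is general physics, not a figure from the text.

    # Relative illumination from two equal lights at different distances.

    def relative_illumination(distance_ft):
        return 1 / distance_ft ** 2

    main_ft, fill_ft = 4.0, 6.0  # main light ~4 ft (as above); fill farther away
    ratio = relative_illumination(main_ft) / relative_illumination(fill_ft)
    print(f"main-to-fill lighting ratio: about {ratio:.2f}:1")  # ~2.25:1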

The accent or back light (usually a spotlight) is placed high behind the subject, shining toward the camera but not into the lens. It rakes across the hair to emphasize texture and bring out sheen. Sometimes a second accent light places an edge highlight on hair or clothing.

The background light helps separate the subject from the background. Here it is a small photoflood on a short stand placed behind the subject and to one side. It can be placed directly behind the subject if the fixture itself is not visible in the picture.

Broad lighting places the main light on the side of the face toward the camera; again the main light is high so the catchlight is at 11 or 1 o’clock. The main light in this position may make the side of the head, often the ear, too bright. A “barn door” on the light (see page 267) will shade the ear.

Butterfly lighting is conventionally used as a glamour lighting. The main light is placed directly in front of the face, positioned high enough to create a symmetrical shadow under the nose but not so high that the upper lip or the eye sockets are excessively shadowed.

Light sources for portraits

Front lighting (above left): with the light placed as near the lens axis as possible (here just to the right of the camera), only thin shadows are visible from the camera position. This axis lighting seems to flatten out the volume of the subject and minimize textures.

Side lighting (above middle), sometimes called “hatchet” lighting because it can split a subject in half. This type of lighting emphasizes facial features and reveals textures like that of skin. The light is at subject level, directly to the side.

High side lighting (above right): a main light at about 45° to one side and 45° above the subject has long been the classic angle for portrait lighting, one that seems natural and flattering. It models the face into a three-dimensional form.

Making a photograph by natural or available light is relatively easy. You begin with the light that is already there and observe what it is doing to the subject. But where do you begin when making a photograph by artificial light, using lights like photofloods or electronic flash that you bring to a scene and arrange yourself? Since the most natural-looking light imitates that from the sun (one main light source casting one dominant set of shadows), the place to begin is by positioning the main light.

This light, also called the key light, should create the only visible shadows, or at least the most important ones, if a natural effect is desired. Two or three equally bright lights producing multiple shadows create a feeling of artificiality and confusion. The position of the main light affects the appearance of texture and volume (see pictures above). Flat frontal lighting (first photograph) decreases both texture and volume, while lighting that rakes across surface features (as seen from camera position) increases them. Natural light usually comes from a height above that of the subject, so this is the most common position for the main source of artificial light. Lighting from a very low angle can suggest mystery, drama, or even menace just because it seems unnatural. The demon in a horror movie is often lit from below.

Top lighting (above left): a light almost directly above the subject creates deep shadows in the eye sockets and under the nose and chin. This effect is often seen in pictures made outdoors at noon when the sun is overhead.

Under lighting (above middle): light from below produces odd-looking shadows because light in nature seldom comes from below. Firelight is one source. High-tech scenes, such as a face lit by a computer monitor, are a modern setting for under lighting.

Back lighting (above right): a light pointing at the back of a subject outlines its shape with a rim of light like a halo. Position a back light carefully so it does not shine into the camera lens and fog the film overall, and so the fixture itself is not visible.