Photography Techniques

About Photographic Techniques

Most people are aware that you can use Adobe Photoshop to create an image of yourself shaking hands with the President or Prime Minister. That is NOT what these techniques are about. Photographers have always used compositional techniques and editing techniques (e.g. dodging and burning) to produce their vision. The techniques here do not create an imaginary reality; rather, they help show reality as the photographer sees it. If we want to show a sunset, for example, we need to focus on the distance and expose so that we can see what we are looking at. These are techniques. Similarly, if we are to show the viewer of the final photograph the scene that we see, we need to remove the camera's limitations from the equation. That means focusing correctly, exposing correctly and using the right speed, but also using techniques such as HDR to make up for limited sensor dynamic range and focus stacking to make up for lens focus limitations. No camera or lens is perfect, but we can reproduce the scene better by using these techniques to make up for the limitations of our tools.

Techniques

Photography, like any skill, requires the practitioner to learn and be able to use a range of techniques. This list is by no means exhaustive, but it does include the techniques that I commonly use.

My Normal Style

My style changes with the circumstances, but I am normally shooting landscapes and in particular, panoramas or panographs. These are of course very slow-moving subjects. I have no need for the highest shutter speed, and generally not for a high ISO either. I tend to shoot at ISO 100 for digital and ISO 50 for Velvia slide film. I want as much depth of field as possible without incurring diffraction; on the Canon 5DSR, this diffraction limit is about f11. I therefore shoot f11, ISO 100, and let the speed be determined by the exposure I want. This sounds a lot like the Aperture Priority setting on the camera - but it isn't. I set the camera to manual and then manually determine the exposure by looking at the histogram and 'exposing to the right'.

As mentioned elsewhere, I am interested in capturing the scene as I see it, not in creating an artificial scene. I therefore wait for the light. Sometimes this works, sometimes it doesn't, but the Golden Hour is my favourite time, and I am particularly looking for the sun to be over the horizon behind me, and for some puffy white clouds to be in the sky to capture those reds, purples and oranges of the sun over the horizon. This means long exposures, and while it may be called the 'golden hour', it is really only about 10 minutes before the light changes considerably - either to something else quite nice, or it is gone.

It is much more difficult to determine the correct exposure for film, and film also has considerably smaller exposure latitude than most digital cameras. On a digital camera, you can quickly review the shot on the back of the camera and check the exposure histogram. If in doubt, you can retake the shot. With film though, it may have taken a month's planning to get to the spot, a drive of many hours, a walk of half an hour, and a wait of two hours to get the shot -- and then a two-week wait to find out if it was the right exposure. Film is not cheap (far more expensive per shot than digital), but it is cheap compared with needing to go back and re-do the shot / re-do the wedding. When using film I use both a hand-held exposure meter and my digital camera to set the initial exposure. I then shoot + and - 1/2 stop just in case.

HDR - High Dynamic Range

Dynamic range refers to the range of shades, or colours, that we can capture, see or print. A black and white page has only two shades. A good black and white film should be able to resolve thousands of shades. A typical digital camera can only capture about 12-13 bits of dynamic range - about 8,000 different levels. This means that if we are going to capture the range of whites in a field of snow, we won't be able to capture the range of browns and blacks in the nearby cave. We have to choose our exposure to suit what we are focusing on. This is particularly important when we have a scene that has bright light in it somewhere - for example a sunset with bright clouds. If we are to capture the range of bright colours in the clouds, we are likely to lose the detail in the shadows on the ground.

To solve this, we can use the HDR technique. It is fairly simple: keeping the camera still (i.e. on a tripod), we keep the focus and f-stop the same, and vary the speed of the shot. Going from 1/30th of a second to 1/15th gives us a shot twice as bright; going to 1/60th, one twice as dark. Many cameras, including my Canon 5DSR, can be set to automatically take 3, 5 or 7 shots, with each shot being half, or twice, as bright as the last. For my Canon, this extends the range of the exposure from near 14 bits to about 16-17 bits (roughly 65,000-130,000 levels depending on settings), or four to eight times what the camera can achieve in a single frame - provided that we record what the sensor sees. This means recording in RAW and not JPG. (See RAW vs JPG below.)
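The bracketing arithmetic above can be sketched in a few lines. This is only an illustration of the maths (each stop halves or doubles the light), not any camera's actual bracketing API; the function name and defaults are my own.

```python
# Sketch: compute a bracketed exposure series by varying shutter speed only.
# Aperture and ISO stay fixed so depth of field and noise do not change
# between frames. Illustrative only - not tied to any camera's firmware.

def bracket_speeds(base_speed, frames=5, step_stops=1.0):
    """Return shutter speeds (in seconds) centred on base_speed.

    Each step of 1 stop halves or doubles the light reaching the sensor.
    """
    if frames % 2 == 0:
        raise ValueError("use an odd frame count so the series is centred")
    half = frames // 2
    return [base_speed * 2 ** (step_stops * i) for i in range(-half, half + 1)]

speeds = bracket_speeds(1 / 30, frames=5)
# 5 frames, 1 stop apart: 1/120, 1/60, 1/30, 1/15 and 1/8 of a second
```

With a 1/30th base exposure, the camera would run through 1/120th up to 1/8th, covering 4 extra stops of scene brightness across the series.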

To assemble the image, we then need to merge these images together, selecting the darker image in the bright areas, and lighter image in the darker areas. Software such as Photoshop can do this automatically, or you can do it manually via masks. Manually masking the image gives more control.
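The manual merge described above can be sketched with arrays standing in for images. This is a minimal toy, assuming linear pixel values in [0, 1] and a known stop difference between frames; real editors work on full raw data and feather the masks.

```python
import numpy as np

# Minimal sketch of a manual HDR merge: where the bright frame has clipped
# highlights, take the pixel from the darker frame (scaled to match exposure).

def merge_hdr(bright, dark, stops_apart=2, clip=0.98):
    """bright/dark: float arrays in [0, 1]; dark was shot 'stops_apart'
    stops darker. Returns a merged linear-light image."""
    dark_scaled = dark * (2 ** stops_apart)   # bring dark frame to same scale
    mask = bright >= clip                     # clipped pixels in bright frame
    return np.where(mask, dark_scaled, bright)

bright = np.array([0.2, 0.5, 1.0])      # last pixel is blown out
dark = np.array([0.05, 0.125, 0.4])     # same scene, 2 stops darker
merged = merge_hdr(bright, dark)
# merged[2] recovers the highlight as 0.4 * 4 = 1.6, beyond single-frame range
```

The `mask` array plays the same role as a hand-painted layer mask in Photoshop: it decides, pixel by pixel, which exposure shows through.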

Focus Stacking

Just as HDR is based on selecting parts of several 'stacked' images to make a combined image, focus stacking does the same thing. Here, instead of selecting the best-exposed image, we select the best-focused one. Again, this can be done manually or automatically. Focus stacking is particularly important when we wish to have the foreground, mid-ground and distance all sharp in the same picture. An important compositional technique is to 'lead the viewer' through an image - to have them take a journey from some obvious point (a subject in the foreground) to your destination (generally something in the distance). Lenses, particularly at open apertures (e.g. f1.8), struggle to have a large 'depth of field' - to keep a lot of different distances in focus. If the foreground is sharp, then the distant object is out of focus; conversely, if the distant object is in focus, then the foreground object is not. By taking multiple images, each with a different focus point sharp, an image can be created where all of the items are in focus.
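The automatic version of this selection can be sketched as follows. This toy uses the absolute Laplacian as a per-pixel sharpness score (a common but simplistic choice I am assuming here); real stacking software uses more robust measures and blends across seams rather than picking hard per-pixel winners.

```python
import numpy as np

# Toy focus stack: for each pixel, keep the value from the frame that is
# locally sharpest. Sharpness is approximated by the absolute Laplacian.

def laplacian_abs(img):
    """Simple sharpness measure: magnitude of the discrete Laplacian."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap)

def focus_stack(frames):
    """frames: list of aligned H x W arrays. Returns the per-pixel sharpest image."""
    sharpness = np.stack([laplacian_abs(f) for f in frames])
    best = np.argmax(sharpness, axis=0)       # sharpest frame index per pixel
    stacked = np.stack(frames)
    rows, cols = np.indices(best.shape)
    return stacked[best, rows, cols]

near = np.zeros((4, 4)); near[1, 1] = 1.0    # detail sharp only in this frame
far = np.zeros((4, 4)); far[2, 2] = 1.0      # detail sharp only in this frame
combined = focus_stack([near, far])          # keeps the sharp pixel from each
```

The key idea is exactly the one in the text: per pixel, pick from whichever frame was focused best at that point.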

Removing people

Most photo editing tools are capable of 'stacking' a number of exposures on top of each other and aligning these images exactly (especially if taken using a tripod). The advantage of this is that the photographer can remove parts of one image in order to show parts of another. For example, if photographing a hillside that contains a cave, then normally the hill would be fine, but the cave too dark. However, if one photograph is exposed for the cave, and a 'hole' is made in the top (hill) exposure, then the cave-exposed photograph shows through. This is the basis for HDR photography. In the 'old' days we did the same exposure alteration using dodging and burning techniques.

So how is this useful for removing people from an image? Imagine you see a great photo, but it is full of tourists. Provided you take enough exposures, then as the tourists move, at least one of the exposures will contain each area without a tourist in it. When overlaid into a stack, most editors will recognise the tourists (or cars, or whatever) as 'ghosts' - things that only appear in one of the stacked images. The editor can then remove these automatically to leave you with a composite image that has no tourists or cars. If the editing tool cannot automatically remove the ghosts, then you can do it yourself by simply erasing the tourists from the 'top' image in the stack to reveal the next image (and if that contains another tourist, repeat). Again, this technique does not falsify the image. You could simply wait until no tourist is present; this technique just saves us from having to do that while we are also trying to get the right light.
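One common automatic implementation of this ghost removal is a per-pixel median across the stack, sketched below with tiny arrays standing in for aligned frames. I am assuming the median approach here; individual editors may use more elaborate ghost detection.

```python
import numpy as np

# Sketch of ghost removal by median stacking: given aligned frames of the
# same scene, the per-pixel median keeps whatever value appears in most
# frames (the static scene) and discards passers-by that appear only once.

def remove_transients(frames):
    """frames: list of aligned images (H x W arrays). Returns the median image."""
    return np.median(np.stack(frames), axis=0)

scene = np.full((2, 2), 100.0)      # the empty scene, brightness 100
f1, f2, f3 = scene.copy(), scene.copy(), scene.copy()
f1[0, 0] = 255.0                    # a 'tourist' in frame 1 only
f2[1, 1] = 255.0                    # a different tourist in frame 2
clean = remove_transients([f1, f2, f3])
# clean equals the empty scene: each tourist appears in only 1 of 3 frames
```

Because each tourist occupies any given pixel in a minority of frames, the median at that pixel is the background value, which is exactly the 'at least one clean exposure per area' condition from the text.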

Photostitching

Photostitching involves overlaying one photograph partly over another to form a larger image. This was possible in the days of film, but it often showed a disconnection where the two photographs overlapped. Now, with digital images, a computer is able to perform the alignment much better than any old lab technician could. The concept remains the same though.

So how is photostitching useful? Imagine you are travelling and come across a great panoramic scene. You need a 16mm lens to capture this broad expanse, but only have your 55mm with you. By taking a collection of photographs from one side to the other - making sure to overlap each photograph by a reasonable margin (about 1/3rd to 1/2) - you can join the images together later on the computer. The result is not only a wide image covering all of the scene - it is also a bigger image - say 45 megapixels where your camera was only 18 megapixels. Photostitching allows a normal digital camera to create a panograph - a digital equivalent of the panorama created by specialist cameras such as the Linhof.
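The core idea - overlapping frames merged into one wider canvas - can be sketched with arrays. This toy assumes the alignment and overlap width are already known and simply averages the shared band; real stitchers find the alignment themselves and blend exposure and geometry far more carefully.

```python
import numpy as np

# Toy stitch of two horizontally overlapping strips into one wider image.

def stitch_pair(left, right, overlap):
    """left/right: H x W arrays sharing 'overlap' columns. Returns the panorama."""
    h, wl = left.shape
    wr = right.shape[1]
    out = np.zeros((h, wl + wr - overlap))
    out[:, :wl] = left
    out[:, wl:] = right[:, overlap:]
    # average the overlapping band so the seam is less visible
    out[:, wl - overlap:wl] = (left[:, -overlap:] + right[:, :overlap]) / 2
    return out

a = np.ones((2, 6))            # left frame, brightness 1
b = np.ones((2, 6)) * 3        # right frame, brightness 3
pano = stitch_pair(a, b, overlap=2)   # 2 x 10 canvas, seam averaged to 2
```

Note how the output is wider than either input - the same reason a stitched panograph ends up with more megapixels than the camera captures in one frame.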

Long Exposures

The duration of an exposure can dramatically affect the end result, beyond just changing the exposure value. The longer an exposure, the more moving things blur. Motion blur can be used to imply the speed of a car, to make a moving person 'disappear', or to make water - in particular waves and waterfalls - blur into a silky smooth look. The key requirement for long exposures is a rock-solid tripod. I mostly use the technique to control the look of moving water.

Neutral Density (ND) Filters

Neutral Density (ND) Filters are dark glass filters that go in front of your lens. They reduce the amount of light going to your sensor, and thus increase the exposure required to take the photograph. Given we often need MORE light and carry flashes to increase the amount of light we have, it may seem strange that anyone would want to reduce the available light.

Reducing the amount of light can help us change our image though. For example, it can let us use a larger aperture. Large apertures decrease the depth of field, which 'forces the eye' to focus on only the part of the image we have left in focus, i.e. what we want the viewer to see. This can, for example, make one person stand out in a crowd, or get us to look at one flower amongst a collection of rubbish.

Another purpose is to flatten out water. If there are ripples on the surface of a lake, a long exposure will even these out and make the surface appear smooth. It will also turn moving water into a white 'fog' like flow.
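The exposure arithmetic behind ND filters is simple enough to sketch: each stop of ND halves the light, so the shutter time doubles per stop. The function name and the naming convention comment are my own illustration.

```python
# Sketch: how an ND filter changes shutter speed. Each stop of ND halves
# the light reaching the sensor, so the shutter must stay open twice as long.

def nd_adjusted_speed(base_speed, nd_stops):
    """base_speed: metered shutter time in seconds without the filter.
    nd_stops: filter strength in stops (a 10-stop filter is often sold
    as 'ND1000', since 2**10 = 1024)."""
    return base_speed * 2 ** nd_stops

# 1/125 s metered without the filter, then a 10-stop ND fitted:
long_exposure = nd_adjusted_speed(1 / 125, 10)   # about 8.2 seconds
```

This is why a bright-daylight waterfall shot can go from 1/125th of a second to a multi-second silky-water exposure with one dark filter.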

Movements

Having spoken on how important movements (as in camera lens movements) are, I felt obliged to discuss them more under techniques. In most cameras, the sensor or film is in the camera and the lens sits at right angles to that sensor. The focus on the lens means that an area that is parallel to the sensor and a certain distance away is in focus, i.e. we normally see an area say 10 metres in front of the lens in focus. Areas further away (say 30 m) and closer (say 4 m) are no longer in focus. That area in focus at 10 m is called the focus plane. Normally the focus plane is parallel to the sensor. However, if we tilt the lens slightly to one side of the sensor, then that focus plane swings dramatically in the direction we moved the lens. A slight tilt to the left might mean that the focus plane extends from 0.5 m on our left to infinity on our right. What use is this? Well, imagine a street on your left that goes off into the distance. You could have the cars on the left side of the street in focus from beside you off into the distance, but leave the other cars and the footpath out of focus. Why would you want to do this? Because the human eye focuses on two things: brighter things and things in focus. I discuss this more below, but the key point is that instead of having a fixed distance in focus in the image, I can have what I want in focus in the image.

Lens shift is another movement. Normally lenses show what is directly in front of the camera. Imagine though that you are beside a raging river, and you want to photograph a bridge across the river. If you are close to the bridge, it is big in the viewfinder, but it also goes from big beside you to small away at the other end. You really want to photograph the bridge from in the river. Lens shift effectively does this. It distorts the image on purpose so that one part is bigger than the other (i.e. the top is bigger than the bottom, or in this case the far side of the bridge is bigger than the side closest to you). In this way we can 'even up' the bridge just as if we were in the river. The same thing happens with buildings. Normally buildings that are photographed from close up 'fall over' or 'tombstone' in an image, appearing narrow at the top. Shifting the lens eliminates this, making the buildings appear normal. In fact, used too much, the building can look like it is falling on you instead of away from you. Could you do this in Photoshop? Yes, but only by removing or copying pixels. Captured in camera, you still get all the pixels that your sensor has. Again, this is not falsifying the image. The 'old' plate cameras had movements built in as standard; it is only now, with SLRs and their fixed lenses, that movements have become a specialised topic / tool.

Tombstoning

If you photograph a tall building from the bottom, the building appears to be falling backwards - it gets smaller at the top. When we see a building in real life, our brains understand that the top is further away and smaller, but this compensation is weaker when we look at a static 2D picture. So 'normal' photos of tall buildings have this tombstoning effect. To counter it, you can use a tilt-shift lens, or movements (if available), or you can attempt to transform the image slightly in post. I prefer to get it right in the camera if I can, but this is not always possible.

Shutter Speed

If you take a photograph at too low a shutter speed, then either the whole image may blur, or things within it may blur. The old rule was that you can 'safely' hand-hold a camera at 1/focal length of the lens - so 1/50th sec for a 50mm and 1/30th for a 30mm lens. However, this rule was created when film could not produce the level of detail that a modern DSLR can. I think that you need to be at least 4 times faster than this for a good modern DSLR with no image stabilisation. Now most good DSLRs / lenses have some form of image stabilisation. I have seen claims of 8 stops, but frankly, I would not trust those claims. You can however get 2 stops (4x) and sometimes more - which basically brings us back to the rule above: 1/focal length.
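The rule and its adjustments can be written as a small calculation. The 4x tightening factor and the 2-stop stabilisation credit follow the text above; they are rules of thumb, not manufacturer specifications.

```python
# Sketch of the hand-holding rule: 1/focal length as a baseline, tightened
# for high-resolution sensors, then relaxed by image stabilisation.

def min_handheld_speed(focal_length_mm, resolution_factor=4, stabilisation_stops=2):
    """Return the slowest 'safe' hand-held shutter speed in seconds."""
    base = 1 / focal_length_mm                    # the old film-era rule
    tightened = base / resolution_factor          # modern high-MP sensors
    return tightened * 2 ** stabilisation_stops   # IS buys back whole stops

# 50mm lens: the old rule gives 1/50 s; 4x tighter is 1/200 s; 2 stops of
# stabilisation brings it back to 1/50 s - as the text concludes.
speed = min_handheld_speed(50)
```

Without stabilisation (`stabilisation_stops=0`), the same 50mm lens would need 1/200th of a second by this reasoning.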

ISO

In the days of film (which I still use sometimes), you purchased film based on its sensitivity to light. Slide film was 50 ASA (quite insensitive), while some high-speed black and white film was 1000 ASA (very sensitive). The trade-off for sensitivity was quality. The film had 'grain' - bits of chemicals stuck to the film. In slide film this grain was very small, and that meant very fine detail could be recorded, and the fidelity of this was also high. ISO 1000 black and white, however, would let you shoot after dusk without a flash -- but it gave very poor results, and you certainly could not 'blow it up' (create a large print) with any quality.
Digital cameras do not have film and chunks of light-sensitive chemicals, but they do have 'gain'. The camera has to read the light value from each little sensor site to determine the colour and brightness of that pixel. If these light values are low, the camera multiplies the values by a number to bring them up. It does this when you alter the 'ISO' setting on the camera. This multiplication is called 'gain'. Gain increases low-light numbers to be more like bright-light numbers, but it does the same to any digital 'noise' (random changes to the numbers). This means high-gain images are 'noisy', and that is why we prefer to shoot at, say, ISO 100 instead of ISO 800. Different cameras have different levels of noise, and different 'base' sensitivities. Generally though, ISO 100-250 will be much better than ISO 800-1200, and while it may be possible to have the camera take a photo at ISO 50,000 or above, it will be of very poor quality.
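The point that gain amplifies noise along with signal can be demonstrated numerically. This is a toy model (arbitrary units, Gaussian noise, a simple 8x multiplier standing in for an ISO 800-style gain), not a model of any real sensor.

```python
import numpy as np

# Toy demonstration that gain amplifies noise along with signal, so the
# signal-to-noise ratio does not improve: the image just gets brighter
# and noisier together.

rng = np.random.default_rng(0)
signal = 10.0                          # dim scene reading, arbitrary units
noise = rng.normal(0, 2.0, 10000)      # sensor/read noise
low_iso = signal + noise               # what the sensor records
high_iso = (signal + noise) * 8        # 8x gain applied after the fact

snr_low = low_iso.mean() / low_iso.std()
snr_high = high_iso.mean() / high_iso.std()
# snr_low and snr_high are essentially identical: gain cannot create detail
```

This is the arithmetic behind 'prefer ISO 100 to ISO 800': the extra brightness comes from multiplication, and the noise is multiplied right along with it.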

Depth of Field

A lens basically gathers light and focuses it on the film / camera sensor. It can only focus one plane though (generally, one distance from the lens). This is focusing. However, while we might accurately focus on something say 10 metres away, something 9.9 metres away might have what you would call 'acceptable focus', i.e. not 100% perfect, but good enough for your photo. The distance in front of, and behind, where you focus that still has 'acceptable focus' is the 'depth of field'. Three important things here: 1) each of us has a different view as to what is or is not acceptable when it comes to focus or sharpness. 2) the lens's focal length changes this apparent depth of field. Long lenses compress everything, and they compress the depth of field, so they appear to have a narrow depth of field; short lenses do the opposite, so they appear to have more depth of field. 3) the aperture of the lens also changes its depth of field. Small numbers (e.g. f1.2) create a very shallow depth of field. This is wonderful for portraits where the eyes or face will be in focus, but the rest of the scene is not. Big aperture numbers (e.g. f16 or f32) create a very large depth of field. Those big old plate cameras of the 1800s produced surprisingly sharp photos, as the lenses they used were typically f64 or f128 - mere pinholes by comparison with modern lenses.
Photo tools such as PhotoPills allow you to find out the depth of field for your camera - lens - aperture - distance combination. I don't know how you would go about checking this calculator every time you took a photo, but working through some examples will help you understand your camera settings better.
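For the curious, the standard formulas those calculators use can be sketched directly. The 0.03mm circle of confusion is a common full-frame assumption I am adopting here (point 1 above: 'acceptable' is a judgement call); calculators differ in this value.

```python
# Sketch of the standard depth-of-field formulas. c (circle of confusion)
# is the largest blur spot you accept as 'acceptable focus'.

def depth_of_field(focal_mm, f_number, distance_mm, coc_mm=0.03):
    """Return (near_limit_mm, far_limit_mm) of acceptable focus."""
    # hyperfocal distance: focus here and everything to infinity is acceptable
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = h * distance_mm / (h + distance_mm - focal_mm)
    if distance_mm >= h:
        far = float("inf")            # sharp out to infinity
    else:
        far = h * distance_mm / (h - (distance_mm - focal_mm))
    return near, far

# 50mm lens at f11, focused at 10 metres:
near, far = depth_of_field(50, 11, 10_000)
# 10 m is beyond the ~7.6 m hyperfocal distance here, so the far limit
# is infinity and the near limit is roughly 4.3 m.
```

Running a few combinations like this makes the three points above concrete: change the aperture or the focal length and watch the near/far limits move.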

Diffraction

If you read Depth of Field above, you are likely to think 'I will shoot all landscape photos at f16' (or whatever your lens will go to) to produce the sharpest image. This does not work, due to something called diffraction. Diffraction is basically light waves interfering with each other. The larger the aperture number used, the more likely your photo will be 'blurred' by this interference. Unfortunately, the more megapixels your camera resolves, the more sensitive it is to this diffraction. Cropped-sensor DSLRs are even more sensitive than full-frame DSLRs, and this is one of the reasons that no matter how many pixels a phone camera has, it is not as good as a full-frame DSLR. The only way to avoid diffraction is to keep the aperture number below what is best for your camera. In most DSLRs, this is about f10-11. For a 50MP Canon 5DSR, it is about f8. You can look up your camera, but I keep my aperture between f1.2 and ~f10, with most landscapes being shot at ~f8.
Photo tools such as PhotoPills allow you to find out the diffraction limit for your camera. E.g. using this tool: Nikon 850, 24.3MP, between f10 & f11; Phase One IQ280, 280MP, between f5 & f5.6; (my) Canon 5DSR, 50MP, between f7.1 & f8. (Note: this diffraction limit is where diffraction starts to occur. You can go slightly over this limit without too much image degradation. What 'slightly over' means is up to you.)
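The physics behind those numbers can be sketched with the Airy disk formula. The 'two pixel widths' criterion below is one common convention I am assuming, which is why different calculators quote slightly different limits for the same camera; the ~4.1 micron pixel pitch is an illustrative figure for a 50MP full-frame sensor.

```python
# Sketch of why diffraction depends on aperture and pixel size. The Airy
# disk is the smallest spot a lens can form; once it grows past roughly
# two pixel widths, fine detail is smeared across neighbouring pixels.

def airy_disk_um(f_number, wavelength_nm=550):
    """Airy disk diameter in microns: 2.44 * wavelength * f-number
    (550nm is green light, roughly the middle of the visible spectrum)."""
    return 2.44 * (wavelength_nm / 1000) * f_number

def diffraction_limited(f_number, pixel_pitch_um):
    """True once the Airy disk exceeds two pixel widths (one convention)."""
    return airy_disk_um(f_number) > 2 * pixel_pitch_um

# For a ~4.1 micron pixel pitch, this convention flags f8 as already
# slightly past the limit, f5.6 as still safe, and f16 as clearly past it.
```

Smaller pixels (more megapixels on the same sensor, or the same megapixels on a smaller sensor) shrink the right-hand side of that comparison, which is exactly why high-resolution and cropped-sensor cameras hit the limit at wider apertures.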

Moving the eye around a picture

As photographers, we capture a static image compared to the moving image captured by a movie. However, we still want to tell a story, or to get the viewer to 'see' what we want them to see. We want to direct the eye to the point of interest. We can do this in a variety of ways, but two key ones are sharpness and brightness.

Humans naturally tend to look at the brighter areas in front of them. If we make an image dark in some areas and brighter in others, people will tend to look towards the brighter areas of the image. This is one of the key reasons that people have been making vignettes for many years. It directs people to look at the part of the image we want them to look at.

Humans also tend to look towards the areas that they can see the best. If we make some areas out of focus on purpose, and some areas in focus, then the viewer will look to the in-focus areas. This is why photographers pay for large f-stop lenses (e.g. f1.2). If you only shot at f11, then there would not be much use in paying for an f1.2 or f1.4 lens. However, an f1.4 lens like my Sigma 85mm, when stopped down to f1.8 or f2, lets me put a face completely in focus and blur everything else. The quality of this 'purposeful blur' is called bokeh, and portrait photographers love a good bokeh in the areas that they don't want viewers looking at. I specifically buy a lens like that to blur much of my image; by making much of the image slightly blurry, I focus the viewer on the face of the person in the portrait. The viewer feels compelled to look where I want them to, and the image has more impact. In portrait photography, we want the eyes of the subject sharp, the rest of the face very sharp, and then anything other than the face slightly out of focus (but in a pleasing way).

Golden and Blue Hour

The golden hour is an old film trick, and one I still use for Velvia 50 film and panographs where possible. The golden hour is the period just before and just after sunset, when the sun is low in the sky (or just over the horizon) and the scene is preferably lit by indirect light from clouds. This period gives a warm, golden light to scenes. The period is not really an hour; depending on circumstances it can be as short as 10 minutes. For films like Velvia 50, which are slow at best, this low light leads to quite long exposures. The increased red sensitivity of the film also helps to bring out the reds that can be found in the clouds. The indirect sunlight has another advantage: in direct sunlight, some parts of the image are sunlit - and require short exposures - but other areas are in shadow, and they require longer exposures. During the golden and blue periods nothing is in direct sunlight, so everything requires longer exposures and the image looks better exposed - or more evenly lit.

The blue period is a newer technique. After the sun has set, but before pitch dark, there is still some light in the sky. This light tends to be blue. Generally the light is insufficient for most film photographs, but it can still produce striking digital photographs. Photographs in this period require very long exposures, so they are useless for anything other than static subjects. The photographs typically, as the name suggests, have a blue cast over everything. This can be kept, to give a cold, dark night feeling to the photograph, or it can be removed digitally to give a more normal colour to the image while retaining that even lighting.

Infra-red Photography

Most photographs are taken using 'visible light', i.e. what we can see. Within the electro-magnetic spectrum though there is a vast range of frequencies, including ultra-violet and infra-red light that are just at the edge of what we can see. Most digital camera sensors are sensitive to some of this light, so they use a filter over the sensor to block it. If you remove that filter, the camera is sensitive to all three (visible, ultra-violet and infra-red, to a varying extent depending on the sensor). Putting a 'hot mirror' filter on the front of the lens restores the visible-light-only performance, and putting different filters on the front allows you to select the type of light you want to use.

Given that infra-red (and ultra-violet) photography generally looks at only a small range of light frequencies, it is generally monochrome. However, by mapping infra-red frequencies to visible-light colours, we can produce 'false colour' prints.

Monochrome infra-red photographs are known for their white foliage and dark, brooding skies. In near infra-red photography, brightness depends on how strongly a surface reflects infra-red light: foliage reflects infra-red very strongly and so appears white, while the clear sky reflects almost none and appears dark. Skin also renders pale and waxy, which is one reason infra-red photos generally try to avoid having people in them.

False colour infra-red can be dramatic.

Raw vs JPG

If you don't know what this means, don't feel alone - lots of photographers don't know. The bottom line is that cameras produce 'RAW' information from their sensors. This is called a different name by each manufacturer, so people tend to just talk about 'RAW formats'. For a computer to be able to display the photo, though, or for printers to be able to print it, they generally need a different format - typically JPG or the less popular TIF. JPG is popular as it substantially reduces the size of a computer file - meaning that more can fit on a memory stick, more can be saved to a drive, and files load quicker over the internet. So the bottom line is that RAW is what your camera takes and JPG is what your computer or phone normally uses. There are plenty of other formats too - TIFF, HEIC, etc. - but generally they all fall into one of two camps: 1) The 'RAW' camp keeps all of the data that the sensor saw. This makes the file size large, and the computer processing slow, but it provides the best quality as no data is lost. 2) Compressed files. Compressed files are much better for storing pictures on your computer or phone, or for sending them over the internet. Some compressed formats (e.g. JPG) reduce the file size by throwing away some data, which reduces the image quality. The process is actually really good at keeping what APPEARS to be all of the data, so the quality loss is normally not that noticeable. However, when you begin to edit the picture - to, say, lighten one area - the lack of quality quickly becomes noticeable. Lossy compressed images are not good for editing.

Most digital cameras will automatically turn RAW into JPG and save JPG files on the memory stick. This lets them store more photos on the memory stick. The trouble is, JPG is a format that reduces file sizes by 'throwing away' information that is not needed to show that photo. RAW, of course, is the information from the sensor - whether that information is good or bad, useful or not. On storage alone, the JPG format wins hands down. However, when we come to edit the photo - for example to increase the exposure by 1/2 an f-stop - the JPG file has thrown away some of the information that we need, while the RAW file still contains everything the sensor saw. So, for people who do not edit their photos, JPG tends to be the way to go. For people who are going to take their photos into an editor, RAW files tend to produce better results.

I shoot RAW when I can. On Canon cameras RAW is called CR2 (Canon RAW format 2). When I import the files I change them to DNG, which is Adobe's Digital Negative format - basically Adobe's version of a common RAW file. DNG is likely to be around for longer than any manufacturer's camera-specific RAW file. It is also bigger than their files, so this change costs in terms of file storage.

Shooting modes

I have had a number of people ask me to show them how to use their camera. People see those auto-shoot modes and want to know when to use them (despite the little pictograms that normally imply the purpose). Generally I don't know. In most cases, I don't use them. If you take my Linhof Technorama, then I get the film's ASA (which is set by the film) and I then have the bewildering array of options called aperture and speed. That is it. No modes. This is how I learnt to shoot, and this is how I still shoot. I turn modes off and set the camera (and usually the focus) manually for landscapes. Landscapes do not tend to move quickly, so I am able to keep pace with the settings myself.

Now do I use full manual always? No. If I am shooting a wedding, I might go with Aperture Priority, 1/3rd stop positive exposure (exposing to the right a touch) and then let the camera choose speed. If I am using flash, I tend to shoot Speed Priority and set the camera for 1/200th of second (in this case setting the flash exposure to 1/3rd or 1/2 positive exposure) - then let the camera do its thing. If I give the camera to someone else though, I tend to change it to full auto.

Expose to the right

Most cameras allow you to view the photo you have just taken on the rear screen. Many of them allow you to see a histogram of the exposure of that image. 'Exposing to the right' means moving that histogram as far to the right as possible (towards the brighter colours) WITHOUT 'clipping' the whites, i.e. without the histogram going off the page to the right. This means that the brightest part of the image is as close to complete white as possible, and that there is as much detail in the shadows as possible. The resultant image of course looks too bright, but in an image editor the exposure can be reduced to bring it back to where it should have been - while still retaining all that detail in the shadows. I generally always expose to the right; it is only a question of how much. In some circumstances the scene contains a greater range of exposure values than your camera can capture. In this case, I prefer to lose information from the left of the curve (blacks) and retain the bright right of the curve. In other cases, the range of values is well within the ability of your camera to capture all shades present. In this case, provided the curve is towards the right, I don't feel as much pressure to push it all the way. Having a slight gap on the right-hand side means that as I move around and take a quick shot, I avoid any clipping of the whites that may otherwise have occurred.
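What the camera's histogram tells you can be expressed numerically. This sketch checks a set of raw pixel values for clipping and reports how many stops of headroom remain before the whites clip; the 16383 white level (a 14-bit sensor) and the function name are illustrative assumptions.

```python
import numpy as np

# Sketch of an 'expose to the right' check on raw pixel values: push
# exposure up until just before the brightest pixels clip.

def ettr_report(pixels, white_level=16383):
    """pixels: array of raw sensor values (14-bit assumed here).
    Reports the clipped fraction and the headroom in stops."""
    clipped = np.mean(pixels >= white_level)           # fraction at/over white
    brightest = pixels.max()
    stops_below_clip = np.log2(white_level / max(brightest, 1))
    return {
        "clipped_fraction": float(clipped),            # want this ~0
        "stops_of_headroom": float(stops_below_clip),  # small positive = well ETTR'd
    }

under = np.array([100, 800, 2000])   # histogram bunched to the left
report = ettr_report(under)
# about 3 stops of headroom: we could expose ~3 stops brighter before clipping
```

A well-ETTR'd frame would report a clipped fraction near zero and only a small fraction of a stop of headroom - the histogram hugging the right edge without going over it.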

Sharpness

Nothing can bring back detail that was lost when a photo was captured. Certainly when a photo is processed, it is possible to use unsharp masks and sharpening to help create the illusion of sharpness, but this still does not replace 'getting it right in camera'. Sharpness is controlled by how you take the photo, and it is actually made up of a number of different factors: lens blur, motion blur, sensor blur, and diffraction. So here are my keys to sharpness:

  1. Use the right lens for your situation. This does NOT mean some super-expensive lens. For Canon lenses there is a site, The Digital Picture, that compares lenses and tells you how sharp they are. The same is true for most of the other camera makes. If you look at these sites, it is common to find that some super-cheap 'plastic-fantastic' lenses available for your camera are super sharp. The catch is that they won't be super sharp at all apertures, but rather only at some apertures (e.g. f8 - f13). If you know the lens's limitations, then you can still use it to make sharp photos.
  2. Set the right shutter speed. The shutter will remain open for the amount of time set by the shutter speed. Obviously 1/3200th of a second will freeze the movement of a swaying bush (and most other moving things); blur from subject movement is motion blur. However, such a speed also allows very little light to reach the sensor / film, which requires a higher ISO (which means more grain or noise) and / or larger apertures - which reduce the depth of field. To avoid these trade-offs, we want to use the shutter speed that 'stops' the action, but no more. So a gently swaying tree might be stopped at 1/30th of a second, animals might need 1/200th, and fast action 1/1000th.
  3. Set the right aperture. A large aperture (small numbers like f1.4) lets in a lot of light, but gives a shallow depth of field (range of sharp focus). High aperture numbers (like f16 or f32) give a very large depth of field, but then lose sharpness due to diffraction. On most cameras f11 is about the sweet spot, where there is a good amount of depth of field but little or no diffraction reducing sharpness. On high-resolution sensors like the 50MP Canon 5DS, f10 is about as far as you can go.
  4. Use megapixels. The more megapixels you have, the more you can crop an image, change it, or blow it up, and still produce a good sharp photo. There are lots of sites that point out that even super-large images (e.g. billboards) do NOT need anything more than around 20 megapixels to create. This is simply because the larger an image is, the further you move away from it to view it. However, if you shoot at 20MP, you can't crop the image by 50% and still have 20MP. More megapixels give you more options.
  5. I cannot talk about megapixels without also talking about sensor size. There are lots of poor cameras (e.g. some phones) that have huge numbers of megapixels but tiny sensors. These do not make good cameras. If you look at high-quality professional cameras (e.g. Phase One), they may be only 50MP, but they have a sensor that is 53.4mm x 40mm. Coming down in quality and price, a top-level 'full frame' Digital Single Lens Reflex (DSLR) has a 24mm x 36mm sensor. The cheaper DSLRs (e.g. Canon's entry-level 'APS-C' SLRs) have roughly 15mm x 22mm sensors, and so on right down to phone-type sensors at around 4mm (maximum). Big sensors gather more light, which reduces the level of sensor noise and the amount of noise introduced by the electronics, so they produce better photos. I am not suggesting everyone needs an $80,000 Phase One camera system, but I am suggesting that you move from phone, to compact, to entry-level DSLR, to full-frame DSLR as your budget allows.
  6. Stop camera shake. Use a good tripod, and use either a remote camera release or the self-timer. Note that you can also get away with high shutter speeds and in-camera image stabilisation some of the time, but not always.
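Some of the arithmetic behind the shutter speed, aperture, and megapixel points above is easy to sketch. The snippet below is a rough illustration only; the light wavelength and the specific numbers are my assumed example values, not figures from this article.

```python
# Rough calculations behind shutter-speed / aperture / megapixel trade-offs.
# The wavelength and example values below are illustrative assumptions.

def equivalent_shutter(f1: float, t1: float, f2: float) -> float:
    """Shutter time at aperture f2 giving the same exposure as t1 at f1
    (same ISO). Light gathered scales with 1/N^2."""
    return t1 * (f2 / f1) ** 2

def airy_disk_um(f_number: float, wavelength_um: float = 0.55) -> float:
    """Approximate Airy-disk diameter in microns: 2.44 * wavelength * N.
    Once this disk spans several pixels, diffraction visibly softens detail."""
    return 2.44 * wavelength_um * f_number

def cropped_mp(megapixels: float, linear_crop: float) -> float:
    """Megapixels remaining after keeping `linear_crop` of each dimension.
    A 50% linear crop keeps only 25% of the pixels."""
    return megapixels * linear_crop ** 2

# Opening up one stop (f/11 -> f/8) lets you roughly halve the shutter time:
print(round(1 / equivalent_shutter(11, 1 / 30, 8)))   # ~1/57s instead of 1/30s

# The Airy disk grows linearly with f-number:
print(round(airy_disk_um(11), 1))   # ~14.8 microns at f/11

# Cropping a 50MP frame by 50% in each dimension leaves 12.5MP:
print(cropped_mp(50, 0.5))
```

Where exactly diffraction becomes objectionable depends on your pixel pitch and your own criterion, which is why different cameras have different practical aperture limits.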

Rule of Thirds

The rule of thirds is a compositional technique that places the key elements of a picture (photo or painting) at the four points that lie one third of the way into the picture: one third down from the top and one third in from the left; one third down from the top and one third in from the right; and so on. These points provide a pleasing 'balance' to the picture.

You can also use this for any natural lines in the image. E.g. if you have the horizon, instead of having it halfway, you can move it up to two-thirds of the way up the frame to show more ground in a pleasingly balanced way. Conversely, you could place it about one third up from the bottom to highlight the sky more.
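For a concrete example, the four rule-of-thirds 'power points' can be computed for any frame size. This is just a sketch of the arithmetic; the 6000x4000 frame is an arbitrary example, not a size mentioned here.

```python
# Compute the four rule-of-thirds 'power points' for a given frame size.

def thirds_points(width: int, height: int) -> list[tuple[int, int]]:
    """The four intersections of the one-third grid lines, in pixels."""
    xs = [round(width / 3), round(2 * width / 3)]
    ys = [round(height / 3), round(2 * height / 3)]
    return [(x, y) for x in xs for y in ys]

print(thirds_points(6000, 4000))
# [(2000, 1333), (2000, 2667), (4000, 1333), (4000, 2667)]
```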

Golden Ratio

(Also called: extreme ratio, mean ratio, and divine proportion, depending on the circumstances.) The Golden Ratio can be used for many things, and it clearly works. In photography, though, it is analogous to the rule of thirds: the ratio is around 1.62 to 1, instead of the rule of thirds' 2 to 1. Some cameras have grids that can be placed on the screen based on the rule of thirds or the Golden Ratio.
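To see how close the two grids are, the vertical grid-line positions can be compared directly. This is a sketch only; the 6000-pixel frame width is an arbitrary example.

```python
# Compare vertical grid-line positions: rule of thirds vs Golden Ratio.
# A line splits the width so the two segments have the given ratio.

PHI = (1 + 5 ** 0.5) / 2   # ~1.618, the Golden Ratio

def grid_lines(width: int, ratio: float) -> tuple[int, int]:
    """Positions (in pixels) of the two vertical lines that split `width`
    into segments with the given large:small ratio, one from each edge."""
    small = round(width / (1 + ratio))
    return small, width - small

print(grid_lines(6000, 2))     # rule of thirds: (2000, 4000)
print(grid_lines(6000, PHI))   # Golden Ratio:   (2292, 3708)
```

The two grids differ by only a few percent of the frame width, which is why they feel interchangeable in practice.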

Foreground, Middle ground, Far Ground

This compositional technique breaks the photograph up into regions and lets the eye wander through those regions. The foreground is the area within 1-5m, the far ground is 200m and beyond, and the middle ground is anything in between. The idea is simple in theory but hard in practice: give the viewer a journey to take through the photo. Let them walk from the foreground, through the field, to view the grand landscape in the distance. This models how we normally view things in real life, and makes the photo more 'real' and inclusive for the viewer.
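Using the rough distances given above, the three regions can be expressed as a simple rule. This is only a sketch of the cut-offs described in this section, not a formal definition.

```python
# Classify a subject distance into the three compositional regions,
# using the rough cut-offs from this section (foreground 1-5m, far 200m+).

def region(distance_m: float) -> str:
    if distance_m <= 5:
        return "foreground"
    if distance_m >= 200:
        return "far ground"
    return "middle ground"

print(region(3), region(50), region(500))
# foreground middle ground far ground
```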


WHO ARE WE

Auspano is an Australian Panoramic Landscape Photography business within Advanced Control Concepts Pty Ltd. Advanced Control Concepts Pty Ltd is a diverse company, but with a background in Project Management Services. Jenny and Geoff Barton are the Auspano photographers.

GET IN TOUCH

Advanced Control Concepts Pty Ltd
email: admin@advancedcontrolconcepts.com.au
