Gregory Gutenko is Associate Professor of Communication Studies, University of Missouri at Kansas City.
Pervasive advertising and marketing efforts promote consumer
market digital video (DV) format camcorders as the ideal acquisition technology
for Internet video production. A shared digital nature and simple interconnections
between cameras and computers using IEEE1394/FireWire/i.Link® format
digital cabling do suggest a natural affinity between DV and Internet production.
However, experience with Web delivery reveals numerous obstacles associated
with consumer-friendly camcorder design and feature sets that can severely
compromise Internet streaming video quality.
This paper describes the various features in consumer and industrial grade DV camcorder designs that lead to unnecessary image quality loss, and what must be done to avoid such loss. Conventional video production techniques that can lead to quality loss regardless of the camera technology used are also identified. A number of recommendations are offered that will help the videographer in an educational or production environment adapt to Internet limitations.
La publicité envahissante et les efforts de commercialisation ont promu, sur le marché des consommateurs, les caméscopes en format vidéo numérique (VN) étant la technologie idéale à acquérir en vue de produire des vidéos sur Internet. Une même nature numérique et des interconnexions simples entre les caméras et les ordinateurs, grâce notamment au câblage numérique de format IEEE1394/FireWire/i.Link®, suggère en effet une affinité naturelle entre la production vidéo numérique et Internet. Cependant, l’expérience de ce qu’offre le Web révèle de nombreux obstacles associés à la conception et aux séries de caractéristiques des caméscopes conviviaux, qui peuvent sérieusement compromettre la qualité des séquences vidéos sur Internet. Cet article décrit les différentes caractéristiques des conceptions de caméscopes vidéos numériques industriels et grand public, qui mènent à une perte inutile de la qualité de l’image, et ce qu’il faut faire pour éviter cette perte. Les techniques de production vidéo conventionnelles, qui peuvent mener à une perte de qualité quelle que soit la technologie de la caméra utilisée, sont également identifiées. Un certain nombre de recommandations sont proposées qui aideront le vidéographe dans un environnement éducatif ou de production à s’adapter aux limites de l’Internet.
In the first scene to take place on the moon in Stanley Kubrick's 2001: A Space Odyssey, Dr. Haywood Floyd's visit is being recorded by a staff member using what in 1968 was an item of hardware beyond imagining: a handheld pocketbook-sized camcorder operating under available light. There is no moon base yet in the 2002 we live in today and Dennis Tito is the only space tourist to date; an orbiting Hilton remains a figment of futurecasting. However, the video camera technology predicted in this scene was right on target: point-and-shoot, handheld, and available light capable.
For decades this had been only a techno-dream. Now, along with mini camcorders using the DV, Hi-8, Digital-8, and VHS-C formats, there is also the DV/Firewire/Internet streaming video production workflow. As marketed, this revolutionary digital paradigm is the realization of the dream:
DV or not DV. As soon as you ask the question, the answer is obvious. DV is an enormous advance over analog, in its stunningly superior quality, versatility and ease of use. DV helps you create memories, DV helps you communicate, unlike anything that's ever come before. (DV or Not DV, 2000)
How wonderful are DV and FireWire? FireWire (IEEE 1394, or i.Link®) as designed is a computer peripheral device connection method, not a networkable video pathway. It cannot be routed or distributed over distance as is. No fiber optic solutions are available yet. Early models of Sony DVCAM™ decks had no FireWire ports for this reason. How practical is FireWire in a broadcast or production center? Not very.
Consequently, this paradigm is truly the culmination of the desktop video concept. Because video is pre-digitized within the camcorder, DV is the most efficient way to get video from a video deck into a computer, but it does not necessarily follow that DV is the best or the only format to consider for the initial acquisition of video images.
It is worthwhile to beware of digital marketing hype such as digital headphones and digital cameras. Light is analog. We ourselves are analog input and output devices. All cameras begin with analog video where the light from the lens is transduced into electric signals at the prism block with its charge coupled device (CCD) image pickup chips. Video can be converted to digital only after this critical stage of the process.
It is also worthwhile to remember that consumer digital video (DV), along with Digital-8 and the professional market variants of DVCAM™ and DVC PRO, is a compressed digital format. Direct conversion of analog video to digital video increases the storage and transmission space needed by approximately ten times. Except for high-end telecasting and postproduction situations, this volume of data is a significant burden. For use in camcorders and most computer based editing systems, digital video needs to be mathematically reduced by compression formulae and/or hardware to a more manageable size. Compression creates image artifacts while losing camera image details, and does so to a varying and unpredictable extent. As a consequence, the compressed digital format can have both superior and inferior image qualities when compared to analog. How can this be so?
Subjective claims of analog versus digital formats often fail to take into account the dominating influence of lens and camera quality. Shoot-offs between analog and digital camcorders frequently reflect the comparative performance of dissimilar lens and camera designs much more than differences between formats. Generally speaking, digital processing circuits are much simpler and cheaper to engineer and manufacture than comparable analog circuits. Thus, we should expect better quality from a digital camcorder than from a similarly priced analog version. However, this is not always the case.
Lens performance affects resolution dramatically. The particular design of a lens and the apertures at which it is used yield certain contrast, colour, and resolution results. For instance, it is often not appreciated that most lenses produce their sharpest image at a midway aperture setting such as f5.6 or f8; resolution is lower at both ends of the f-stop range. One also needs to appreciate that a professional video zoom lens as an item by itself may cost up to $4,500.00. You do get what you pay for in lens performance, and a professional lens should yield a resolution of 800 or more TV lines. TV lines or line-pairs are visually observed, using a test chart similar to Figure 1. Since TV lines are visible because of a figure-ground relationship—black line against white line—two pixels are needed to "visualize" one TV line. It is to be expected that a lens built into an $800 camcorder would have a deliberately restrained performance design, one that might meet but not gratuitously exceed the limitations of the camera and/or format.
Figure 1. Standard video resolution test chart.
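The two-pixels-per-TV-line relationship described above can be sketched as a simple conversion; this is a rough rule of thumb for illustration, not a precise optical model:

```python
# Rough conversion between horizontal pixels and TV lines of
# resolution, using the rule of thumb above: one visible TV line
# (a black/white figure-ground pair) needs about two pixels.

def pixels_to_tv_lines(horizontal_pixels):
    """Approximate TV lines resolvable by a given pixel count."""
    return horizontal_pixels // 2

def tv_lines_to_pixels(tv_lines):
    """Approximate pixels needed to resolve a given TV-line count."""
    return tv_lines * 2

# A 640x480 digitized NTSC frame resolves roughly:
print(pixels_to_tv_lines(640))   # 320 TV lines

# An 800-TV-line professional lens/camera would need roughly:
print(tv_lines_to_pixels(800))   # 1600 pixels across to be fully captured
```

This is why a digitized 640x480 NTSC frame can never display the full resolution of a professional lens and camera.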
The camera section also plays a major part in setting the limits of performance. Specifications for professional camcorders describe the resolution produced at the camera. A Sony DXC-M7 provides 700 TV lines and a Sony DSR300 DVCAM™ camcorder yields 800 TV lines at the camera. Variables such as the number of CCD image pickup chips, their density and size, and a wide variety of electronic image processing options all come together to make a camera the performer it is.
Consider also the role of the tape medium in this equation. A tape format is only a box...a container for what is provided by the lens and camera. If a Sony DSR300 can capture an image with a resolution of 800 TV lines, but the DVCAM™ format can only hold up to 500 lines, there is obviously some quality overflow that will not be evident when viewing the videotape. In other instances, a lens/camera combination may be producing less resolution than the format can support; the lens/camera combination of the Canon ZR10 captures only 300 TV lines and so the DV format box here is never filled to its line resolution capacity, as seen in Figure 2. Image contrast, colour saturation and accuracy, and signal-to-noise (S/N) are also critical aspects of a good image, but resolution is often the first item of concern.
Figure 2. Enlarged screen captures of DSR300 (left) and ZR10 (right) camera images of standard video resolution test chart.
DV tape resolution is "up to 500 lines of resolution" in promotional literature. Up to 500 lines of resolution? In actual equipment specifications, no claim is made as to the precise resolution capacity of DV, DVCAM™, DVC PRO, or Digital-8. This is because, due to compression loss, these digital formats rarely get to 500 TV lines. Resolution in a compressed digital format is variable, unlike in analog formats that are not compressed and therefore have fixed resolutions. S-VHS tape
resolution is always 400 lines or more, whereas DV tape resolution varies. The variable resolution capacity of digital formats makes testing and comparison with analog formats problematic:
Utilizing lab test results for image analysis, the difference for most scenes between DV and higher end formats like Betacam and Digital Betacam are somewhat negligible. (Bunzel & Margolis, 2001)
Format resolution comparisons have typically been conducted using static test charts or stable footage devised with analog quality analysis in mind. But outside of the lab, field situations such as those encountered in television newsgathering tell a different story. Fixed analog resolutions need to be compared to downwardly variable DV resolutions, because DV resolution drops when detail and/or motion exceeds the processing limits of the DV digital encoder (a DCT variant of Motion-JPEG). Remember: it is up to 500 lines, and often less. If digital video's compression losses are truly taken into account, conventional resolution testing for format comparison purposes, as shown in Table 1, is questionable.
Table 1. Digital and analog formats and their subjective quality ratings.
Appreciating the problematic task of evaluating digital formats, Wilt (1998) has devised what he calls a DV stress test, a test image that challenges these formats on compression artifacts and losses as well as resolution. (This DV stress test can be found at http://www.adamwilt.com/pix-gens.html#DVStressTest.) Considering that a video frame in the broadcast television standard established by the U.S.A. National Television System Committee (NTSC) converts when digitized to a 640x480 pixel matrix providing only 320 TV lines, there are other image quality factors besides counting lines to be concerned with.
Compressing digital data results in the loss of original analog data if the process is pushed much beyond a 50% reduction in the amount of data. Before that point is attained, the compression can be considered lossless and the original analog content will be reconstituted faithfully. Beyond 50%, digital compression methods must discard more data and the consequences become visible; this level of compression is termed lossy. DV is a lossy compressed acquisition format. Analog is not. The DV format retains 20% (a 5:1 ratio) of the original analog video data. For this reason alone, compressed digital has not been widely adopted by professional broadcasting and tele-production. (The high end Avid Media Composer nonlinear computer editing system still does not accept DV inputs, although the Avid Corporation has handed off to IBM an unsupported DVXpress product that does.) While DVCAM™ and DVC PRO are fast replacing analog Betacam SP equipment in many production venues, the jury remains out and occasionally laments the sometimes-noticeable image defects of compression. Betacam SP is still cited as the benchmark of professional video quality.
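The 5:1 ratio cited above can be checked with back-of-envelope arithmetic. The frame geometry and sampling figures below are assumptions for illustration (a 720x480 NTSC frame, 8-bit 4:1:1 sampling, 30 fps); DV's nominal 25 Mbit/s video rate is its published figure:

```python
# Back-of-envelope data rates illustrating why digital video must be
# compressed. Figures are illustrative assumptions, not measurements.

WIDTH, HEIGHT, FPS = 720, 480, 30
BITS_PER_SAMPLE = 8

luma_samples = WIDTH * HEIGHT                 # one Y sample per pixel
chroma_samples = 2 * (WIDTH // 4) * HEIGHT    # 4:1:1: Cb and Cr at 1/4 width

raw_mbps = (luma_samples + chroma_samples) * BITS_PER_SAMPLE * FPS / 1e6
dv_mbps = 25.0                                # DV's fixed video data rate

print(f"Uncompressed 4:1:1 video: ~{raw_mbps:.0f} Mbit/s")
print(f"DV compressed:             {dv_mbps:.0f} Mbit/s")
print(f"Compression ratio:        ~{raw_mbps / dv_mbps:.0f}:1")
```

Under these assumptions, roughly 124 Mbit/s of raw camera data must be squeezed into 25 Mbit/s of tape, the approximately 5:1 discard described in the text.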
Quirks of DV/DVCAM™/DVC PRO/Digital-8 products include an apparently random addition or removal of a video image variable termed 7.5 IRE setup. 7.5 IRE setup can be thought of as a picture contrast adjustment that raises the darkest video black tone up to a level that is slightly brighter than the full black level of 0 IRE, as displayed on a waveform monitor. Waveform monitors are calibrated in Institution of Radio Engineers (IRE) units from 0 (black) to 100 (white). The positioning of the video black level at 7.5 IRE instead of 0 IRE is a technique employed for recording and transmitting NTSC broadcast television analog video with a buffer zone, to insulate the unseen timing and blanking portions of the video signal from picture signal interference. Since DV was designed by engineers from the computer field rather than broadcasting, observing a 7.5 IRE setup was reasonably deemed unnecessary.
Eliminating setup actually makes digital video appear sharper and richer by increasing the contrast ratio of the image. Unfortunately, there has been no manufacturing standardization as to when to add or remove setup from video as it is passed between analog and digital formats. Professional uncompressed digital format decks (e.g. D1 and D2) typically have switches or menus that allow addition or removal of setup as needed. Compressed digital products, however, are usually automatic regarding setup and chaos is now rampant; 7.5 IRE is often being added in or removed without oversight, resulting in broadcast-destined material missing required setup and other material accumulating 15 IRE of setup. The misplacement of setup has a drastic effect on image appearance, ranging from too dark to washed out.
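Where misplaced setup lands in digital terms can be sketched as follows, assuming the conventional Rec. 601 8-bit studio levels (black at code 16, white at 235); this is an illustrative mapping, not a description of any particular deck's behaviour:

```python
# Map IRE levels to 8-bit digital code values, assuming Rec. 601
# studio levels (black = 16, white = 235). Illustrative only.

DIGITAL_BLACK, DIGITAL_WHITE = 16, 235

def ire_to_code(ire):
    """Map an IRE level (0-100) to an 8-bit Rec. 601 code value."""
    return round(DIGITAL_BLACK + ire / 100 * (DIGITAL_WHITE - DIGITAL_BLACK))

print(ire_to_code(0))     # 16 : black with setup removed
print(ire_to_code(7.5))   # 32 : black with 7.5 IRE setup left in
print(ire_to_code(15))    # 49 : doubled setup after two faulty passes
```

The jump from code 16 to code 49 at nominal black is exactly the "too dark to washed out" range the text describes.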
Another quirk of DV and Digital-8 consumer product design is image stabilization. Image stabilization reduces camera image shake. Most professional camcorders are heavy and shoulder-balanced, absorbing a significant amount of shakiness. Image stabilization is an add-on lens technology for these products. Handheld camcorders, however, are precariously positioned and collect all the tremors the operator's body telegraphs to them. Image stabilization is much more essential here, and there are two critically different methods of achieving stabilization. Of the two, only optical stabilization, using floating lens elements, has no consequences for resolution, but it is an expensive option found on larger camcorders. Digital stabilization, on the other hand, necessarily steals resolution from the image, since this process involves electronically shifting a digitally zoomed-in image to hide shake. Fewer picture elements (pixels) are used when the image is blown up in this manner and so the image has a lower resolution, which is why the resolution of the Canon ZR10, and all other camcorders with digital stabilization engaged, is lower than that of S-VHS and 3/4 inch SP U-matic.
Of more universal concern than the above quirks are the consequences of compression that can affect all users of DV/DVCAM™/DVC PRO/Digital-8. Compression will lower DV resolution and create artifacts. Some of those replayed 500 or less TV lines will be of artifacts and not camera image, and some of those artifacts will be noticeable. Exactly which artifacts will be troublesome and consequential will probably depend on which side of the digital division of end products a content creator is working: broadcast quality or compressed Internet quality.
Table 2. The Digital Divide at 30 frames per second and 640 by 480 pixel matrix resolution.
DV artifacts have generally been defined as in Table 3 below. The first two are related to the Motion-JPEG intraframe compression method (DCT - Discrete Cosine Transform) used in DV and should be familiar to those who have encountered JPEG images on the Internet. The remaining two artifacts are unique to DV/DVCAM™/DVC PRO/Digital-8. The probable significance - where it hurts the most - for the two areas of content creation identified in Table 2 above are noted.
Table 3. Compression artifacts and their production area significance.
Artifacts may slip by at the typical 30 frames per second (fps) rate of conventional broadcasting and videotape recording, but will have longer on-screen persistence at slower frame rate captures (15 fps or less) as these frames are randomly grabbed and held. These intra-frame compression artifacts arise within each independent video frame, and the multiplicity and complexity of image details that must be encoded may directly affect the level of artifacts.
Essentially, the more complex the image that is delivered by the lens/camera to the encoder/format, the less likely the image can be made to fit the format box, and overflowing image complexity is lost. When decompressed for viewing, the encoder/format synthesizes the lost image detail as artifacts. Capturing images that do not overflow the box keeps the generation of artifact patchwork to a minimum. Shooting for digital video and the Internet is a matter of simplifying image complexity. Keeping image details simple is of greater significance as the level of digital compression increases, and is extended to include inter-frame motion compression (MPEG) over time.
Uncompressed professional digital video formats were introduced partly to better support multigenerational and multilayer video editing. Digital decks were also much simpler and cheaper to build than analog and were thus beneficial for manufacturers: the analog 1 inch type C tape format had to be withdrawn from the market in order to encourage the sale of digital decks. While 1-inch type C held up well through six or seven generations of layering, it was obvious that, properly controlled, the uncompressed D1 and D2 digital formats had no practical limits on generations.
The drawback of generational quality loss pushed analog tape out of the editing suite, but analog remained secure as an acquisition format. Considering that most editing is now conducted on non-linear editing (NLE) computer systems rather than tape-to-tape, the original justification for digital recording is passé. Component analog formats such as Betacam SP still provide several advantages over compressed digital.
A consideration alien to most video creators, but familiar to computer application creators, is that of target data rate. Because of the very evident loss of video image quality that occurs through analog signal processing and tape generations, generations of video creators have always maintained a highest possible quality pathway throughout the production process. A camera that resolves 800 TV lines is used to get to a final destination of a broadcast image of 330 TV lines. This "target data rate" of the television broadcast is back at the far end of the analog editing and duplication process. With video material in a much bulkier digital form (uncompressed, about 10 times the size of analog), computer application creators appreciate that a highest possible quality pathway is unworkable, because of the mass of data. With a defined target data rate, however, and material in digital form, it is possible, and indeed necessary, for the production process to take place at a carefully maintained reduced size - that of the target data rate. Now, instead of a highest possible quality pathway there is in digital an exact quality pathway. As Ozer (1997) in Publishing Digital Video states, "Know your target data rate." For creators working with digital video, this means considering ways to limit the original camera image data prior to digitizing.
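Ozer's "know your target data rate" can be made concrete with a small budget calculation. All figures here are illustrative assumptions (a 56k modem streaming at about 34 kbit/s usable throughput, 8 kbit/s reserved for audio, 10 fps delivery):

```python
# A sketch of "know your target data rate": given a delivery budget,
# how much data each frame can consume. All figures are assumptions.

target_kbps = 34.0        # usable throughput on a 56k modem (assumed)
audio_kbps = 8.0          # audio share of the stream (assumed)
fps = 10                  # delivered frame rate (assumed)

video_kbps = target_kbps - audio_kbps
bits_per_frame = video_kbps * 1000 / fps

print(f"Video budget: {video_kbps:.0f} kbit/s")
print(f"Per frame:    {bits_per_frame:.0f} bits (~{bits_per_frame / 8:.0f} bytes)")
```

Under these assumptions each frame gets on the order of a few hundred bytes, which is why every needless pixel change carries a price tag.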
Ozer (1997) specifies guidelines to follow to reduce camera image complexity. Most are evident but unpalatable to conventional videographers and directors: no pans or zooms, no transitions when editing, no fine line details, flat lighting, and no post-production in analog. Avoiding camera moves and transitions simplifies the image for better results when compressing motion (MPEG) for streaming Internet video. The lighting and detail restrictions benefit DV compression, and thus benefit both Internet and broadcast quality end products. But in essence "you're eliminating all the fun effects that make video interesting!" (Ozer, 1997).
To lose less of what you really want to see well on the screen, you need to sacrifice what is visually insignificant. This demands much more attention to what is being captured by the camera. It demands a form and complexity of scene analysis that is unheard of when shooting analog; now every visual element has a price tag on digital video. Images must be ruthlessly pruned, because every moving leaf in the background or wayward strand of hair carries a data penalty.
The reality of streaming video raises the question: when video is compressed to mere kilobytes, does format really matter? Consider the comparative scale of these standard compression options in Table 4.
Table 4. Types of video delivery and their data rate.
It is risky to create video without some thought for multi-purposing or repurposing; at some point, material intended for broadcast will be considered for compressed distribution, or vice versa. Aside from controlling nonessential image complexity that is obvious to the eye (camera motion, complex details), there is also much that can be done to avoid creating and preserving invisible image data that is of no value. Analog creators have never had to conceive of or contend with such a penalty as the invisible image.
Vampire Video is a phrase that has over the years referred to a wide variety of undesirable aspects or practices in making television. Vampire Video in the context of digital production is defined as follows: Valueless image data that suck resolution from the valued video image.
To lose less of what you want to see, ensure that you do not capture and record what you do not see. These unnoticed image elements are detected by codecs, captured, digitized, and otherwise handled as must-have video at the great expense of what actually is wanted. The following image variables are representative of Vampire Video:
Noise by another name is snow. Snow is the most complex video image possible; every pixel changes at random with every frame. It is the most difficult image to compress without loss. The camera is the primary source of noise, not the tape format (except in multigenerational analog). Noise is present in all video images, buried down in the black and is measured as a signal-to-noise (S/N) ratio rated in decibels. The S/N performance of cameras has improved over the years: the 1993 S-VHS AG-460 is 45dB, while the 1999 S-VHS GY-X3U is 60dB. The S/N of DV tape is 55dB, and of S-VHS it is 45dB. Clearly, you want a camera with a S/N that exceeds that of the format, and should seek DV cameras that have a 60 to 62dB S/N. But these are the ideal, published noise levels, and inferior shooting practices will result in increased noise, the main source of Vampire Video.
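The decibel figures above can be read as signal-to-noise voltage ratios via ratio = 10^(dB/20); video S/N is conventionally quoted as a voltage ratio, and that convention is assumed in this sketch:

```python
# Convert S/N figures in dB to signal-to-noise voltage ratios.
# Assumes the conventional voltage-ratio (20 log10) definition.

def db_to_ratio(db):
    return 10 ** (db / 20)

for label, db in [("S-VHS AG-460 (1993)", 45),
                  ("S-VHS GY-X3U (1999)", 60),
                  ("DV tape format", 55)]:
    print(f"{label}: {db} dB -> signal is {db_to_ratio(db):.0f}x the noise")
```

Each 6 dB of S/N roughly doubles the ratio, so the 15 dB gap between the 1993 and 1999 cameras represents a noise floor more than five times lower.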
Use the recommended illumination levels (100 lux typical for consumer camcorders; 2000 lux typical for professional) rather than the minimum illumination levels. If it is an available camera feature, look for the zebra stripes superimposed over bright white scene details in the viewfinder, while shooting at an optimal f-stop. By far, underexposure is the most common cause of increased noise; this is a very prevalent flaw in quick-and-dirty video shooting. Available light in interiors should never be considered adequate.
Video gain and automatic gain control (AGC) are features used to compensate for the lack of adequate lighting. The grainy appearance of gain-boosted video is a mere visual annoyance in analog applications, or sometimes even a desired textural effect. In digital applications, it is image quality and resolution-depleting video noise. There is no substitute for recommended illumination levels. When using camcorders with a gain shift feature, use -3dB to set an even lower threshold of noise.
Handheld camerawork is style versus resolution. The movement of either subject or camera is a minor image quality loss factor in intra-frame compression (M-JPEG) and broadcast-quality DV applications, but a major loss factor in inter-frame compression (MPEG) and streaming video. Motion results in a major data burden penalty, with more of the data rate/file size being consumed to describe image motion rather than image picture detail. Does image stabilization help? One or two pixels of shift are enough to affect inter-frame quality, so do not count on this being beneficial. Only a locked-down shot on a tripod, or other absolutely stable support, will work.
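The cost of even one pixel of shake can be illustrated with a toy experiment: shift a synthetic "frame" by a single pixel and count how many pixel values an encoder would see as changed. This is purely illustrative; real codecs use motion estimation, but the changed data still has to be described:

```python
# Toy illustration: one pixel of camera shift changes nearly every
# pixel value between two otherwise identical detailed frames.

import random

random.seed(1)
W, H = 64, 48
# A detailed frame: pseudo-random values stand in for scene detail.
frame = [[random.randint(0, 255) for _ in range(W)] for _ in range(H)]

# The "next" frame: identical scene, camera shifted one pixel right.
shifted = [[frame[y][max(x - 1, 0)] for x in range(W)] for y in range(H)]

changed = sum(1 for y in range(H) for x in range(W)
              if frame[y][x] != shifted[y][x])

print(f"{changed} of {W * H} pixels differ")
```

In a detailed scene, well over nine tenths of the pixels register as changed, which is why a locked-down tripod shot, not stabilization, is the only real remedy for inter-frame compression.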
Only a few high-end professional lenses do not create a slight change in image magnification as focus is changed. Image size fluctuations occur even when an adjustment of focus seems to have no effect, because of deep depth of field (DOF) or wide-angle (WA) zoom position. Auto-focus shifting may not create evident focus changes with deep DOF at WA, but the breathing effect of the image creates hidden motion, trivial to the eye but resulting in major data level changes. Approximately 95% of the pixels within every frame become unique to a codec. Therefore, it is recommended that auto-focus not be used while recording footage. Auto-focus can be engaged as a focusing aid, especially when using camcorders with overly small viewfinders or low-resolution LCD screens.
Crisp edges are likely to compress better than fuzzy edges, and it is worth remembering that lenses produce their best resolution at around f5.6. Enhancement, also termed sharpness and edging, is an image embellishment found in most cameras that creates an artificial contrast exaggeration where transitions from light to dark portions of the video, such as lines and edges, are detected. This feature has been in place since the earliest generations of video cameras, and was greatly needed with tube cameras and blurry video formats. However, most modern cameras and higher-end formats, such as DV, do not need this tracing-over, and in fact enhancement will create unrealistically harsh edges. Enhancement is a false image overlay, which creates unneeded image detail that a codec will waste data to save. Cameras that have a fixed and inaccessible sharpness control should be avoided, and sharpness should be turned off for DV.
There is little justification for the use of auto tracing white balance. White balance that is set in reference to a white card and then fixed keeps the overall image hue stable. Drifting colour, which can occur as differently coloured objects momentarily enter and leave the frame, results in unnecessary changes in image data that consume data capacity.
Only uncontrollable shooting circumstances justify reliance on auto exposure. Auto exposure varies, deceived by insignificant reflectance changes within the frame, resulting in an overall brightening and darkening of the scene. Aperture bounce in response to auto exposure sometimes overcompensates, exacerbating the flickering appearance of the scene. These changes in exposure may be too subtle for the eye, but even slightly drifting exposure alters every pixel in the frame and every frame in the sequence, consequently loading the codec with data changes that are not needed. For this reason, exposure should be determined and then locked, and changed only as needed for each new shooting setup.
DV and streaming video probably present their worst face in television news photography. News videographers are often doing handheld, under-lit, and auto-exposed recording. Optimum image quality has never been a primary consideration in electronic newsgathering (ENG); capturing the spontaneous moment is what is deemed essential. Footage of a spontaneous major news event has rarely been banned from broadcast because it was shaky and off-hue.
The preceding Vampire Video variables are difficult for most videographers to internalize. The consequences of data overload do not exist in the analog world. The recommended strategies are counter-intuitive and anti-aesthetic. As news videographers change over from analog Betacam SP to DVC PRO or DVCAM™, however, they are seeing these consequences on screen (broadcast and streaming), and their inclination is to blame the digital rather than the aesthetic. And so to see the best examples of the worst video to compress, you need go no further than the streaming on-location news footage, courtesy of your local station or national network news.
When controlled for the least compression loss, streaming video can look almost good. One aesthetically pleasing site is www.playtv.com, the Web marketing outreach for PlayTV and their Trinity product. Following the rules scrupulously, the featured streaming videos are interview-style sales presentations that use locked-down camera shots, no transitions, restrained subject movement, and background sets that are easy to compress. There is no shakycam to be seen here.
In trying to control for Vampire Video, perhaps the biggest obstacles to quality preservation are inherent in camera design. The Vampire variables are the user-friendly features of consumer DV camcorders. Automation makes it possible for the naïve user to use a very complex technology without thought, but it must be remembered that automation is also without thought. Automated variables are always seeking correction and consequently always drifting, often rendering very high image data fluctuations.
Professional DVC PRO and DVCAM™ camcorders are designed just as their analog predecessors were, with the expectation that the user wants to control all image variables quickly and at any instant desired. In most DV products with manual over-rides, controls are buried within menus, with multiple steps required for access. Professional camcorder designs assign major image variables to dedicated controls on the lens and camera body. If you have to look away from the viewfinder to find a button, that button is not accessible enough. When faced with a mini-DV that is fully automated for easy use, even a professional videographer will be discouraged from exercising control. To invert the credo of the Bauhaus, function follows form. The designed form of the consumer camcorder compels surrender to automation.
This leads to the conclusion that what matters most in obtaining maximum image quality, especially where streaming video is concerned, is not tape format but camera control. Originating on VHS rather than DV may yield better quality, if the VHS footage is shot with compression in mind and the DV is not. Shoot on VHS and transfer to DV to capture into a computer. Why not?
If the VHS camcorder has better S/N, manual standard gain, manual focus, manual white balance, and minimal enhancement, it might well offer a better alternative than a fully automated DV camcorder. Most likely, however, the composite form of VHS video would have analog artifacts that would be carried along when digitized. Realistic alternatives to the DV format for consideration would be component video formats such as Betacam, Betacam SP, MII, S-VHS, and Hi-8. Ultimately, what matters most is the camera and how it is used.
Everything presented thus far illustrates a major divergence between a conventional sensibility regarding video imaging and a new regard for video images as precious data. We have diverging production paradigms, one for uncompressed broadcast video and one for compressed streaming Internet video. The great conceptual handicap for those learning how to produce video is that everyone has grown up with the first paradigm, learning it from television and film, and thus the paradigm is internalized and intuitive. The second paradigm, if not approached from an extended immersion in computing issues, is obscurely alien.
This is compounded by the technology at hand: consumer-friendly but Internet-content-hostile mini-DV camcorders, the level of technology most educational institutions can afford. Consumer technology's tolerance for careless videography has been internalized by many students for years before they arrive in college video production programs. A decade or two ago, teaching video production was a process of introducing relative innocents to new knowledge; it is now often a reprogramming regime to purge poor video habits.
Clients approach their established advertising agencies and insist they need streaming video; agency creative staff try to analyze the clients' needs and define what benefit streaming video will actually deliver; and when the analysis fails to identify any benefit, it is acknowledged that, regardless, the clients are right: they just have to have it. Is streaming video, in part, a vanity item?
Streaming video via modem is a long way from being easy to view and good to look at. The text and graphics in most sites, along with linking and searching, deliver the majority of interest and value. Aside from vanity, keeping up with high-tech appearances, and a little flash, what does streaming video deliver?
Compared to broadcast television, the communications outreach philosophy of streaming video is less than forthcoming. Broadcast communication outreach, as typified by David Sarnoff's molding of domestic television technology in the 1950s, made it possible for anyone with a working TV set to receive all television programming. A TV set half a century old can still pick up today's content, if you can find tubes for it. Compared to the 50-year useful performance life of television, the computer/Internet provides perhaps a 50-week performance life . . . 50 days if you want streaming video. Then you will require new codecs, a new operating system to run the new codecs, deletion of old and conflicting codec extensions, and a new CPU every three years.
Imagine how frustrating television viewing would be if you had to stop and load a codec for a particular program, or found that your television set could not display a particular program at all. That is the consumer world of broadband today (Saint John, as cited in Kaufman, 2001).
Possibly, it is cool. Cool not only in the common sense but also in the sense of sensory involvement with the medium itself. When Marshall McLuhan devised his hot media/cool media continuum of user involvement in the processing of media information, television was black-and-white and over-the-air (McLuhan, 1964). Monochromatic, noisy, and fuzzy, the television of the 1950s and early 1960s required much cognitive work from the viewer to apprehend the image. Early television was a cool, or low-sensory-information, medium. Most other media McLuhan regarded as hot: no great effort was required to comprehend or interpret the sensory data of film, radio, or print. They were as complete as needed for easy comprehension of their content. When McLuhan was queried years later about cable colour television, he stated that television had evolved into a warm medium. Cable colour advanced the temperature. It was then that viewers could relax from the task of deciphering and decoding TV images.
Perhaps streaming video is the new cool medium (Gutenko, 2002). Internet users now peer transfixed at a mosaic of pixels, blocky and jerky and immersed within an otherwise hot background of text and stills, just as the nuclear family peered transfixed at the ghostly images of Your Show of Shows. Streaming video may be involving, not because of the wealth of image data on display, but because of the level of visual imagination that must be engaged to fill in the blocks. Streaming video is in demand.
When the S-VHS format was introduced in the late 1980s, it was designed as a consumer product. What many professionals, including this author, found was that S-VHS offered an affordable step up in picture resolution over 3/4-inch U-matic tape, although the latter still had superior colour quality. For not a few years, I provoked the rebuke of engineers and fellow videographers for using such a lowly technology for production destined for cable. S-VHS was the first crossover format to blur the boundary between consumer and professional video technology, although S-VHS was clearly a compromise.
When the DV format was introduced in the mid-1990s, professionals began packing Sony VX-1000s along as backups for their Betacam SP camcorders, finding that footage from the two formats could be intercut with little detectable difference. This was the revolutionary aspect of consumer digital video. In terms of image resolution, contrast, and colour quality, there is no longer an absolute boundary between consumer and professional technology. However, as noted throughout this paper, there are significant differences between consumer and professional cameras in lens and image quality control. There are also significant differences between uncompressed and compressed digital video, and all of these differences combined make DV production for the Internet much more demanding than DV production for broadcast. DV or not DV is not the question of importance; the question is whether the videographer can control the Vampire Video variables that undermine compressed digital video quality. Ultimately, it is not the tool but how you use it that matters, and DV is a sophisticated, complex, and sometimes treacherous tool. Handle it with awareness.
Bunzel, T. & Margolis, R. (2001). DV or Not DV? That is the Question. Retrieved April 23, 2001 from http://www.digitalvideoediting.com/Htm/Features/dv_or_not_dv_feature.htm
DV or Not DV. (2000). Canon U.S.A., Inc. Retrieved April 23, 2001 from http://www.canondv.com/optura_pi/dvt.html
Gutenko, G. (2002, May). Streaming video: The new cool medium. Paper presented at the annual conference of the Canadian Communication Association, University of Toronto/Ryerson Polytechnic University, Toronto, Ontario.
Johnson, B. A. & Wilt, A. J. (2001, April). Cameras: How to get the most from them. Paper presented at the annual conference of the National Association of Broadcasters, Las Vegas, Nevada.
Kaufman, D. (2001). An imperfect marriage. Netmedia, 1(1).
McLuhan, M. (1964). Understanding media: The extensions of man. New York: McGraw-Hill.
Ozer, J. (1997). Publishing digital video (2nd ed.). London: Academic Press.
Shaw, R. (2001). Trailblazing cards for Mac video. Video Systems, 27(4).
Wilt, A. J. (1998). The DV, DVCAM, & DVCPRO Formats: FAQ. Retrieved May 14, 2001 from http://www.adamwilt.com/DV-FAQ-tech.html
© Canadian Journal of Learning and Technology