Film-out

Film-out is the process in the computer graphics, video production and filmmaking disciplines of transferring images or animation from videotape or digital files to a traditional film print. "Film-out" is a broad term that encompasses frame-rate conversion and color correction as well as the actual printing to film, also called scanning or recording.

The film-out process differs depending on the regional standard of the master videotape in question – NTSC, PAL, or SECAM – or on the various region-independent formats of high-definition video (HD video); each type is therefore covered separately, taking into account regional film-out industries, methods and technical considerations.

Film-out of live action video

Many modern documentaries and low-budget films are shot on videotape or other digital video media instead of film stock and completed as digital video. Video production costs substantially less than 16 mm or 35 mm film production at every stage. Until recently, the cost advantage of video ended when a theatrical presentation was required, because projection demanded a film print. With the growing presence of digital projection, this is becoming less of a factor.

Standard definition (SD) video

Film-out of standard-definition video – or any source that has an incompatible frame rate – is the up-conversion of video media to film for theatrical viewing. The video-to-film conversion process consists of two major steps: first, the conversion of video into digital "film frames", which are then stored on a computer or on HD videotape; and second, the printing of these digital "film frames" onto actual film. To understand these two steps, it is important to understand how video and film differ.

Film (sound film, at least) has remained essentially unchanged for almost a century and creates the illusion of moving images through the rapid projection of still images, "frames", upon a screen, typically 24 per second. Traditional interlaced SD video has no real frame rate (though the term "frame" is applied to video, it has a different meaning there). Instead, video consists of a very fast succession of horizontal lines that continually cascade down the television screen – streaming top to bottom, before jumping back to the top and then streaming down to the bottom again, repeatedly, almost 60 alternating screen-fulls every second for NTSC, or exactly 50 such screen-fulls per second for PAL and SECAM. Since visual movement in video is carried in this continuous cascade of scan lines, there is no discrete image or real "frame" that can be identified at any one time. Therefore, when transferring video to film, it is necessary to "invent" individual film frames, 24 for every second of elapsed time. The bulk of the work done by a film-out company is this first step, creating film frames out of the stream of interlaced video.
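
The arithmetic behind this mismatch can be made concrete. The short Python sketch below is purely illustrative, using the nominal field rates; it simply shows that neither standard delivers a tidy whole number of fields per 24 fps film frame.

    # Illustrative arithmetic only: nominal interlaced field rates versus the 24 fps film rate.
    NTSC_FIELDS_PER_SECOND = 60000 / 1001   # "almost 60": about 59.94 fields per second
    PAL_FIELDS_PER_SECOND = 50              # exactly 50 fields per second
    FILM_FRAMES_PER_SECOND = 24

    # Fields available for each film frame that has to be "invented":
    print(NTSC_FIELDS_PER_SECOND / FILM_FRAMES_PER_SECOND)  # ~2.4975 - not a whole number
    print(PAL_FIELDS_PER_SECOND / FILM_FRAMES_PER_SECOND)   # ~2.0833 - not a whole number either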

Each company employs its own (often proprietary) technology for turning interlaced video into high-resolution digital video files of 24 discrete images every second, called 24-progressive video or 24p. The technology must filter out the visually unappealing artifacts that result from the inherent mismatch between video and film movement. Moreover, the conversion process usually requires human intervention at every edit point of a video program, so that each type of scene can be calibrated for maximum visual quality. Archival footage in particular calls for extra attention.
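
As a rough illustration of what even the most basic such conversion involves, the Python sketch below (using NumPy; the function names and the nearest-field strategy are hypothetical simplifications, not any vendor's actual method) weaves pairs of interlaced fields into full frames and picks the pair nearest to each 1/24-second instant. Real film-out systems rely on far more sophisticated, often motion-compensated, techniques.

    import numpy as np

    FIELD_RATE = 60000 / 1001   # NTSC: about 59.94 fields per second
    FILM_RATE = 24              # target progressive frame rate

    def weave(top_field, bottom_field):
        """Interleave two half-height fields into one full-height frame."""
        h, w = top_field.shape
        frame = np.empty((2 * h, w), dtype=top_field.dtype)
        frame[0::2] = top_field       # field-to-line ordering is simplified here
        frame[1::2] = bottom_field
        return frame

    def fields_to_24p(fields):
        """Naive conversion: weave the pair of fields nearest each 1/24 s instant."""
        duration = len(fields) / FIELD_RATE
        frames = []
        for n in range(int(duration * FILM_RATE)):
            t = n / FILM_RATE                                 # timestamp of output frame n
            i = min(int(round(t * FIELD_RATE)), len(fields) - 2)
            frames.append(weave(fields[i], fields[i + 1]))
        return frames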

Step two, the scanning to film, is the rote part of the process. This is the mechanical step in which lasers print each of the newly created frames of the 24p video, stored as computer files or on HD videotape, onto rolls of film.

Most companies that do film-out handle all stages of the process themselves for a lump sum. The job includes converting interlaced video into 24p, often with a color correction session to calibrate the image for theatrical projection, before scanning to physical film; color correction of the film print made from the digital intermediate may also be offered. At the very least, film-out can be understood as the process of converting interlaced video to 24p and then scanning it to film.

NTSC video

NTSC is the most challenging of the formats when it comes to standards conversion and, specifically, converting to film prints. NTSC runs at approximately 29.97 video "frames" per second (each consisting of two interlaced screen-fulls of scan lines, called fields). In this way, NTSC resolves actual live-action movement at almost – but not quite – 60 alternating half-resolution images every second. Because of this 29.97 rate, no direct correlation to film frames at 24 frames per second can be achieved. NTSC is thus the hardest standard to reconcile with film and calls for its own unique conversion processes.
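
The exact NTSC frame rate is 30000/1001 frames per second, and a short, purely illustrative calculation in Python shows that its ratio to 24 fps reduces to 1250:1001 – there is no small whole-number relationship to exploit.

    from fractions import Fraction

    ntsc = Fraction(30000, 1001)   # exact NTSC frame rate, about 29.97 fps
    film = Fraction(24, 1)         # film frame rate

    print(float(ntsc))             # 29.97002997...
    print(ntsc / film)             # 1250/1001: 1250 NTSC frames occupy the same time as 1001 film frames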

PAL and SECAM video

PAL and SECAM run at 25 interlaced video "frames" per second, which can be slowed down or frame-dropped, then deinterlaced, to correlate "frame" for frame with film running at 24 actual frames per second. This makes PAL and SECAM less complex and demanding than NTSC for film-out. The conversion does, however, force an unpleasant choice: either slow the video (and, noticeably, the audio pitch) down by four percent, from 25 to 24 frames per second, to maintain a 1:1 frame match, slightly changing the rhythm and feel of the program; or keep the original speed by periodically dropping frames, which creates jerkiness and can lose vital detail in fast-moving action or precise edits.
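
Put into numbers, as an illustrative Python sketch rather than any particular vendor's workflow, the two options look like this:

    import math

    pal_rate, film_rate = 25, 24

    slowdown = (pal_rate - film_rate) / pal_rate        # 0.04 -> the four percent slowdown
    pitch_drop = 12 * math.log2(pal_rate / film_rate)   # about 0.71 semitones lower audio pitch
    drop_interval = pal_rate // (pal_rate - film_rate)  # drop one frame in every 25 to keep speed

    print(f"{slowdown:.0%} slowdown, about {pitch_drop:.2f} semitones lower pitch")
    print(f"or drop one frame out of every {drop_interval} to keep the original running time")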

High definition (HD) digital video

High-definition digital video can be shot at a variety of frame rates, including 29.97 interlaced (like NTSC) or progressive; 25 interlaced (like PAL) or progressive; or even 24-progressive (just like film). HD shot in 24-progressive scans nearly perfectly to film without the need for a frame or field conversion process. Other issues remain, though, arising from the different resolutions, color spaces, and compression schemes that exist in the high-definition video world.

Film-out of computer graphics and animation

Artists working with computer-generated imagery (CGI) create pictures frame by frame. Once the finished product is complete, the frames are output, normally as DPX files. These picture data files can then be put onto film using a film recorder. SGI computers pioneered high-end CGI animation systems, but with faster computers and the growth of Linux-based systems, many alternatives are now on the market. Toy Story and Tarzan are two examples of movies that were made with CGI and then filmed out. Most CGI work is done with 2K-resolution files (about the size of QXGA), but 4K resolution is on the rise. A 2K movie requires a storage area network (SAN) several terabytes in size to be properly stored and played out.
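
A back-of-the-envelope estimate in Python shows where those terabytes come from. The figures are assumptions for illustration: a 2048 x 1556 pixel "2K" frame, 10-bit RGB packed into 32-bit words, and a 100-minute feature.

    WIDTH, HEIGHT = 2048, 1556      # assumed "2K" full-aperture frame size
    BYTES_PER_PIXEL = 4             # three 10-bit samples packed into one 32-bit word
    FPS = 24
    RUNTIME_MINUTES = 100           # assumed feature length

    frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL          # about 12.7 MB per uncompressed frame
    total_bytes = frame_bytes * FPS * RUNTIME_MINUTES * 60

    print(f"per frame: {frame_bytes / 1e6:.1f} MB")
    print(f"whole feature: {total_bytes / 1e12:.2f} TB")    # roughly 1.8 TB at 2K; about four times that at 4K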

Computer graphics files are handled the same way, as single frames, and may use DPX, TIFF or other file formats.

Film-out of digital intermediate

Film-out recording is the last step of the digital intermediate workflow. DPX files scanned on a motion picture film scanner are stored on a storage area network (often abbreviated as "SAN"). The scanned DPX footage is edited and composited with visual effects on workstations, then mastered back onto film. Film restoration is also done this way.
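
As a small, hedged illustration of how such frame sequences are handled at the file level, the Python sketch below (the path is hypothetical) checks that each file on the SAN begins with the DPX magic number, as defined in the SMPTE DPX header, before it is sent on for recording.

    import glob

    def is_dpx(path):
        """Return True if the file begins with the DPX magic number."""
        with open(path, "rb") as f:
            magic = f.read(4)
        # "SDPX" marks a big-endian DPX file, "XPDS" a little-endian one.
        return magic in (b"SDPX", b"XPDS")

    # Hypothetical usage over a reel of scanned frames:
    frames = sorted(glob.glob("/san/feature/reel01/*.dpx"))
    bad = [f for f in frames if not is_dpx(f)]
    print(f"{len(frames)} frames checked, {len(bad)} not recognised as DPX")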

Film-out of images for the graphic design and print industries

The days of newspapers and magazines shooting 35 mm film are almost gone. Digital cameras can now shoot all the images needed, storing them as files (e.g. JPEG, DPX or another format) that are readily edited prior to use. Once the final copy is approved, it can be filmed out for publishing. Digital stills are not the only source of pictures for the graphic design and print industries; film scanners and computer graphics programs are also common.

Types of film-out devices

Arrilaser film recorders are among the devices used for film-out.

History

It has become possible to transfer video images, including films scanned at high resolution, back to film stock by making a digital intermediate, which can then be recorded out to a fine-grain film intermediate with a laser film printer. The first major live-action film to use this process entirely was O Brother, Where Art Thou?, done by Kodak's Cinesite division in Hollywood in the spring of 2000. Prior to this, the video master was transferred from tape to film through one of several methods: CRT recorder, laser film printer, kinescope, or electron beam recorder (EBR). Theater performances have been preserved with kinescope for many years – the 1964 New York production of Hamlet with Richard Burton, for example, was shot on video and printed as a film that was released in movie theaters using this process. Fernando Arrabal was the first to use the technique of video-to-film for aesthetic purposes, in the 1971 film Viva la muerte, which used heavily color-adjusted video footage only for the fantasy sequences.[1] Experimental filmmaker Scott Bartlett also used video footage and effects for portions of his 1972 film OffOn, by filming a video monitor with a 16 mm film camera.

Technicolor also experimented in the early 1970s with using video gear and videotape to make feature-length motion pictures, transferring the videotape to film for final release and distribution. Films made with this process were the 1973 film Why, the 1971 film The Resurrection of Zachary Wheeler, and the most famous film using this process, Frank Zappa's 1971 film 200 Motels, which was originally shot on 2-inch quadruplex videotape and then transferred to film by Technicolor, making it the first independent film to be shot on videotape and distributed theatrically in 35 mm.

Countless educational, medical, industrial, and promotional videotapes produced from the late 1950s to the mid-1980s were also transferred to film stock (usually 16 mm film) for widespread distribution, using either an EBR or a CRT recorder. This was done because VCRs and VTRs were not yet commonplace in most schools, hospitals, boardrooms, and other institutional settings, owing to their high cost and to the multitude of proprietary (and incompatible) open-reel, cartridge, and cassette videotape formats in the early years of industrial-market videotape recorders, from the mid-to-late 1960s onward. But 16 mm projectors were widely available in such settings at the time, making distribution of such video productions on 16 mm film more practical. One company that specialized in the transfer of videotape-originated programming to 16 mm film in the 1970s and 1980s was Image Transform, which developed its own technologies for video-to-film transfer. Such transfers continued until the mid-1980s, when the VCR became affordable enough (and much more standardized, in the form of VHS and Betamax) to be adopted widely in institutional settings.

Digital video equipment has made this approach easier; theatrical-release documentaries and features originated on video are now being produced this way. High-definition video was popularized in the early 2000s by pioneering filmmakers such as George Lucas and Robert Rodriguez, who used HD video cameras (such as the Sony HDW-F900) to capture images for popular movies like Star Wars: Episode II – Attack of the Clones and Spy Kids 2, respectively, both released in 2002.

Independent filmmakers, especially those participating in the Dogme movement, have also shot their films on MiniDV videotape for transfer to 35 mm film stock for theatrical release. Examples of independent movies shot on videotape include Lone Scherfig's Italian for Beginners (a Dogme film), Steven Soderbergh's Full Frontal (shot on PAL-standard MiniDV gear in the normally NTSC-prevalent US because PAL's higher resolution of 625 lines and frame rate of 25 frame/s – as opposed to NTSC's 525 lines and 30 frame/s – more closely match film's 24 frame/s), and Mike Figgis' Timecode. Initially for budgetary reasons, filmmaker Rob Nilsson shot his 1984 feature drama Signal 7 using Sony portable U-matic videocassette decks paired with Ikegami HL-79 3-tube broadcast video cameras (a setup comparable to the ENG systems used by broadcast television stations at the time). The video hardware and taped footage took the place of the traditional cinema camera and its negatives; the footage was edited in post-production and transferred to 35 mm film for theatrical release and exhibition. Nilsson liked the visual look of video-to-film transfer and shot several of his other films the same way.


