Time-Based Editing

This chapter looks at the use of nonlinear editing in video and audio production and the visual interface components of the editing workspace. It also covers strategies for project organization and asset management, along with general concepts and principles related to the aesthetics of editing.

Video editing is the art of arranging static and time-based media assets into a linear presentation for the purpose of telling a story or communicating a message. The goal of editing video is to produce a narrative with a clear beginning, middle, and end. Audio editing is similar but focuses only on the sound-based elements. Editing technology has evolved from tape-to-tape (machine-to-machine) editing to modern software-based systems. A machine-to-machine editing setup has five components: 1. the playback deck, 2. a playback monitor, 3. a record deck, 4. a record monitor, and 5. the edit controller. This approach was used for many years, but it is no longer necessary now that software such as Apple iMovie and Final Cut Pro can perform the same work digitally.

The media assets used to build an edited sequence fall into four categories:

  1. Scripted action and dialog
  2. Unscripted action and dialog
  3. Titles and graphics
  4. Music and sound effects



The project file is a proprietary data file used to keep track of every detail associated with the project. Media files are the raw project assets that are created or acquired before an editing session begins. Video editing requires both. Most editing professionals use a timeline to arrange clips linearly from beginning to end; clips can be organized in bins and marked so you know the important points in your footage. Without a solid grasp of these key concepts, you won't be able to edit video effectively.
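As a rough illustration of what a project file keeps track of, here is a minimal Python sketch of clips referencing media files with in and out marks on a timeline. The names and structure are hypothetical, not any real editor's format:

```python
from dataclasses import dataclass

# Hypothetical minimal model of a timeline clip: a reference to a raw
# media file plus in/out points (in seconds) marking the portion used.
@dataclass
class Clip:
    media_file: str   # raw asset on disk
    in_point: float   # where the usable portion starts
    out_point: float  # where it ends

    @property
    def duration(self) -> float:
        return self.out_point - self.in_point

# The "timeline" is just an ordered list; a real project file would also
# persist bin organization, markers, and every other editing decision.
timeline = [
    Clip("interview_take2.mov", 12.0, 45.5),
    Clip("broll_city.mov", 3.0, 10.0),
]
total_runtime = sum(c.duration for c in timeline)  # 40.5 seconds
```

Note that the raw media files themselves are never altered; the project file only stores references and edit decisions, which is what makes nonlinear editing nondestructive.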



Sound and Video Recording

This chapter highlights the history and evolution of videotape recording systems; the formats used for recording audio and video to analog and digital tape; industry standards for the coded representation of audio and video on digital platforms, and the differences among those formats; and the distinction between tape-based and file-based recording.

Motion picture and television production have relied on a succession of recording technologies, evolving from the kinescope to magnetic recording systems and then through a series of analog tape formats. These technologies kept evolving through the years and were used to shoot movies and TV shows. In compression, the term redundancy refers to repeated or predictable picture information that consumes wasted space in stored media. The goal of video compression is two-fold: to reduce the file size of an image by eliminating or rewriting as much of the redundant information as possible, while retaining the visible quality of the image.


The most significant development in recent years is solid-state technology that uses file-based recording formats and codecs. Such formats have drastically driven down the cost of professional HD recording systems, and the media are often more flexible and convenient to use. Because of that flexibility, however, there is rarely just one way to record or transcode a video or audio signal. You need to develop skills and knowledge related to recording formats, codecs, container files, resolutions, frame rates, and so on. The more you know, the better prepared you will be as these technologies continue to evolve, just as videotape did.

Audio Production

This chapter highlights the nature of sound and audio. It discusses the audio chain and signal flow; microphone elements, from pickup patterns to form factors; and placement tips and recording techniques. It also covers audio cables, connectors, and cable management.

Sound is what we hear; it can be featured in a standalone product or be a part of a larger one. Sound is a natural phenomenon involving pressure and vibration. Understanding how sound and hearing work will help you capture, record, and use sound more effectively. The first thing we tend to notice about a sound is how loud or quiet it is. Amplitude is defined as the distance from the crest of the wave to the trough; the louder the sound, the greater the amplitude. As a sound wave passes through matter, the vibrating molecules experience three phases of movement; the rate at which these cycles repeat is the frequency of the wave, which determines the sound's relative pitch, low or high.
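The relationship between amplitude, frequency, and loudness or pitch can be sketched with the standard sine-wave model of a pure tone. This is a simplification; real sounds are mixtures of many frequencies:

```python
import math

# A pure tone modeled as a sine wave: amplitude controls loudness,
# frequency (cycles per second, Hz) controls pitch.
def sample_tone(amplitude, frequency_hz, t_seconds):
    return amplitude * math.sin(2 * math.pi * frequency_hz * t_seconds)

# A 440 Hz tone (concert A) completes one full cycle every 1/440 s,
# so the wave is back at zero at t = 0 and t = 1/440:
assert abs(sample_tone(1.0, 440.0, 0.0)) < 1e-9
assert abs(sample_tone(1.0, 440.0, 1 / 440)) < 1e-9

# Doubling the amplitude doubles every sample value (a louder sound):
assert sample_tone(2.0, 440.0, 0.001) == 2 * sample_tone(1.0, 440.0, 0.001)
```

Raising the frequency packs more cycles into each second, which we hear as a higher pitch; raising the amplitude makes the same waveform taller, which we hear as louder.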


A recording system has three main components: a source (something creating the sound); a microphone to convert physical sound waves into a signal transmitted via an electrical current; and a recording device to store the sound input. A microphone is a recording instrument used to convert sound waves into electrical currents that can be stored, transmitted, and played back. Microphones are classified in three ways: by transducer method, polar pattern, and form factor. There are many different types of microphones, such as wireless, shotgun, boundary, and internal or external microphones.

Learning how to work with audio and how to use the correct tools is a skill. Developing it will help you stand out in the multimedia workforce and give you an advantage. Sound is usually one of the most important aspects of a multimedia project and can make or break it.


Digital Photography

In this chapter we will look at how digital cameras are classified according to their operational features and intended uses; the purpose and function of the imaging chain and each of its basic components; the variables affecting the proper exposure of a digital image; the use of fully automatic, semi-automatic, and manual shooting modes; and strategies for organizing and managing digital image files.

Photography is the process of fixing an image in time through the action of light. In traditional chemical processing, photographic images are created by exposing a light-sensitive emulsion on the surface of the film to light in a controlled environment. While some people still shoot film, most have switched to digital photography, an electronic medium that renders pictures using a digital image sensor. Digital photography offers instantaneous results, producing image files that are easily transferable and adaptable for a wide range of multimedia products.


The imaging chain of a digital camera is made up of four components: the lens, the shutter, the iris, and the image sensor. Capturing an image uses all four components to achieve the desired effect. A digital camera creates a picture by exposing the image sensor to light. To set a proper exposure, you must measure light intensity, set the white balance, and choose a metering mode.
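The trade-off among the exposure variables can be expressed with the standard exposure value formula, EV = log2(N²/t), where N is the f-number of the iris and t is the shutter speed in seconds. A small sketch (the specific camera settings below are just illustrative):

```python
import math

# Standard exposure value: EV = log2(N^2 / t), with N the f-number
# (aperture) and t the shutter speed in seconds. EV 0 is f/1 at 1 s.
def exposure_value(f_number, shutter_seconds):
    return math.log2(f_number ** 2 / shutter_seconds)

# f/4 at 1/60 s is about EV 9.9:
ev = exposure_value(4.0, 1 / 60)

# Closing the iris by one stop (f/4 -> f/5.6, i.e. N multiplied by
# sqrt(2)) raises EV by exactly 1, halving the light reaching the sensor:
assert abs(exposure_value(4 * math.sqrt(2), 1 / 60) - ev - 1) < 1e-9
```

This is why semi-automatic modes work: fix one variable (say the aperture) and the camera can solve for the shutter speed that keeps the exposure value constant.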

The focus control is used to define the sharpness of an object within the frame by changing the distance between the optical elements of the lens. A camera's focus must be reset whenever the distance between the camera and the subject changes. Focus can be set automatically (autofocus) or adjusted manually.

A digital camera is one of the most important tools for a multimedia project or producer. This chapter focused mainly on the mechanical aspects of the digital camera and photography as an operational function. As the photographer you control the way the image is shot and how it looks.


Text and Typography

The element of text is an important component of multimedia because text is the visual representation of thought expressed through language. This chapter examines the origins of typography and the modern use of electronic type in multimedia; type styles and classifications; tools and techniques; controlling character and line spacing; text alignment; and ideas for maximizing the readability of screen text in multimedia projects.

Type is a character or letterform created for the purpose of communicating written information through printing or electronic means. The term letterform implies letters but also includes other characters such as punctuation, symbols, and numbers. Typography is the art of designing and arranging type. A collection of related fonts (bold, italic, and so forth, in all their sizes) is called a font family; Times New Roman regular, bold, and italic, for example.


Legibility refers to a typeface's characteristics and can change depending on font size. The more legible the typeface, the easier it is to glance at it and distinguish the letters and numbers. Readability refers to how easy text is to read in context, not as isolated letters. Typefaces are generally classified into two groups depending on whether they contain serifs, and each group can be divided into further subgroups. There are six main groups of serif typefaces: blackletter, humanist, old style, transitional, modern, and slab serif. Serif typefaces are the usual industry standard for body copy, as in books or newspapers. Sans-serif typefaces are those without serifs; they are sometimes also referred to as gothic, and they usually have very uniform strokes with little to no contrast and vertical stress in rounded strokes.

The terms alignment and justification refer to the process of lining up objects or text uniformly along their tops, bottoms, sides, or middles. Distribution involves inserting an equal amount of space between the designated edges or centers of visual elements, including text, placed along a vertical or horizontal edge. You can also manipulate letter spacing and line spacing to enhance the readability and visual appearance of text while bringing conformity and unity to the layout. Well-set text adds interest to a page or multimedia project.


Graphics and Images

In this chapter, we discuss computer graphics, the image-encoding process, moving-image scanning technologies, and computer and TV displays.

The term computer graphics refers to the digital processes by which pictorial data is encoded and displayed by computers and digital devices. Computer graphics are usually divided into two main categories: graphics and images. A graphic is any type of visual presentation that can be displayed on a physical surface such as a sheet of paper, a wall, or a computer monitor; examples include clip art, logos, and symbols. Graphics are usually created by hand or with computer-assisted design tools. A graphic designer is someone who creates graphics for print and media. An image is a two- or three-dimensional representation of a person, animal, or object, and it can be still or moving.


There are two ways to digitally encode and display computer graphics: bitmap (or raster) imaging and vector imaging. Raster images are formed by dividing the area of an image into a rectangular matrix of rows and columns composed of pixels. The total number of pixels in a raster image is fixed; to make the image larger, more pixels have to be added. Vector imaging defines the area of a picture using paths made up of points, lines, curves, and shapes. Each vector creates a path forming the outline of a geometric region containing color information. Because paths can be resized mathematically, vector graphics can be scaled up or down without losing picture clarity.
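A quick calculation shows why enlarging a raster image is costly while scaling vector art is not: the pixel count grows with the square of the scale factor, and every new pixel must be invented by interpolation. The resolutions below are just illustrative:

```python
# Raster images have a fixed pixel grid: enlarging one means inventing
# (interpolating) new pixels. Vector paths are simply recomputed at any
# size, so they lose no clarity when scaled.
def raster_pixel_count(width_px, height_px):
    return width_px * height_px

# Doubling each dimension of a 1920x1080 raster image quadruples the
# number of pixels that must be filled in:
original = raster_pixel_count(1920, 1080)   # 2,073,600 pixels
enlarged = raster_pixel_count(3840, 2160)   # 8,294,400 pixels
assert enlarged == 4 * original
```

Three-quarters of the enlarged image's pixels did not exist in the original, which is why heavily upscaled raster images look soft while a rescaled vector logo stays crisp.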

Television and computer screens have a fixed number of image pixels, called the native resolution. As monitors increase in size, their native resolutions also increase, adding more pixels to the screen. Unfortunately, user settings can get in the way of viewing an image pixel-perfect. For example, a user may not have the screen set to the monitor's native resolution, or may have it set correctly but be zoomed in.

As you create and incorporate images and graphics into multimedia projects, the final format you choose affects the quality of the user's experience. The final format should also guide your workflow: you don't want to start out at a lower resolution than your final format requires.

Web Design

How the “Web” works: One thing I never realized is that the “World Wide Web” and the internet are not the same thing. The Web is a part of the internet, the part we use through a browser by entering a web address. The internet is a “global network of networks through which computers communicate by sending information in packets. Each network consists of computers connected by cables or wireless links.”

I found the art of coding and using HTML tags extremely confusing at first, but the root of it is that web pages are built using “hypertext markup language.” Rather than being a programming language, HTML lets you put tags around your text that tell the browser how to display it on the screen. HTML files are just text; the graphics included in web pages are not actually inside the text document but are referenced in the code and then uploaded along with the HTML file to the server, and that is what we see. Good web designers use meaningful markup rather than markup that controls presentation alone, which makes it much easier to manage a big site. Another interesting thing about HTML is that while the browser knows it is looking at HTML, without a doctype declaration it doesn't know which version of HTML, and it has to know which it is reading so it can display the page correctly. Some resources call the tag the “element,” but the element is technically everything from the start tag to the end tag and everything in between. Another key thing to remember is that most developers write tags in lowercase, and using lowercase will prepare you to branch out into similar coding languages that are case-sensitive.
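These points can be demonstrated with Python's built-in HTML parser: the page is plain text, the tags structure it, and the graphic exists only as a reference to a separate file. The page content and file names here are made up for illustration:

```python
from html.parser import HTMLParser

# A tiny page as plain text. The image is NOT embedded in this file;
# it is only referenced by its src attribute.
page = '<!DOCTYPE html><html><body><h1>Hello</h1><img src="logo.png"></body></html>'

class TagCollector(HTMLParser):
    """Records each start tag seen, plus any image file references."""
    def __init__(self):
        super().__init__()
        self.tags = []
        self.image_refs = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)
        if tag == "img":
            self.image_refs.extend(v for k, v in attrs if k == "src")

parser = TagCollector()
parser.feed(page)
assert parser.tags == ["html", "body", "h1", "img"]
assert parser.image_refs == ["logo.png"]  # graphic lives in a separate file
```

Note that the doctype is not a tag the collector sees; it is a separate declaration at the top of the file that tells the browser which flavor of HTML follows.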


One of the challenges of designing a web page is that what looks good in one browser or on one computer may not look the same in another, and browsers don't always interpret HTML the same way. Examples of different browsers are Firefox, Chrome, and Safari. One thing most have in common, though, is that they rely on plug-ins to show non-HTML material, such as videos that depend on Flash or documents that depend on Adobe Acrobat. Plug-ins enable such programs to run inside your web browser. So when you develop a page, you need to make sure that users see what you want them to see, not a jumbled-up mess of what you're seeing.

To ensure site usability, before you start to write any code you need to:

  1. plan the scope of your site
  2. learn about your users
  3. sketch layouts to plan where your page components will go
  4. solicit preliminary user feedback
  5. create a mockup

Once you go through these steps, you create a working prototype and make sure you can implement your design without problems. Along the way, you should be checking your code constantly. One easy way to do this is with a code validator or code checker. These tools help you verify that your code is well formed and that it displays in the browser correctly. Also, remember that production hardware and software issues play a role in determining the effectiveness of your website, and the best sites will always have good navigational structures that are functional and easy to use. Successful sites require a great deal of planning, research, and design.

Interface Design and Usability

A user interface is any system that supports human-machine or human-computer interaction. The interface has both hardware and software components and exists in the form of both input and output. The input component allows the user to control the system, while the output component shows the results of that control. There are many different types of user interfaces. One example is the graphical user interface, which usually includes windows, icons, menus, buttons, and scroll bars. Web user interfaces accept input and generate output in the form of web pages; these are the most common graphical user interfaces. Another important part of interface design is user-centered design, which is all about creating an interface to meet the needs of real users rather than satisfying the designers. It supports users by building on their existing behaviors while adding new ways to interact naturally. There are four steps to designing user interfaces:

  1. Specify the project requirements and determine what the client needs
  2. Analyze the users- who they are and what they need for this site
  3. Involve users in the design project and get their feedback
  4. Use an iterative design process in evaluating and modifying the interface

User interfaces have many different features: navigation, sections and categories, menus, and drop-down menus. Tabs are another important feature; they provide access to different content areas and let users switch back and forth conveniently.

Each site should be tailored to fit specific interactions. There are two types of tailoring: personalization, in which the system makes the changes, and customization, in which the user makes the changes. An example would be customizing your Google interface by changing the theme and removing gadgets you don't use.


It is important to focus on the users throughout the design process when creating any type of interface. When you plan on having navigation features, make sure they will be easy to use and will support the interface. Also, do not forget to tailor your interface to give users the best experience and to make sure it is easy, usable, and accessible.

Multimedia Page Design

Page layout is the area of graphic design concerned with the visual arrangement of text and images on a page. Programs like InDesign are software tools used in the desktop publishing industry for the layout design of printed pages. The Gutenberg diagram is a primitive eye-tracking model used to show how readers scan a page full of evenly distributed text. The diagram is helpful for those developing page layouts for printed materials.

An important part of page layout is headings and subheadings, which bring order and structure to the presentation of text. A heading is a short descriptive title, and a subheading is used to mark the beginning of a paragraph or content area. Headings and subheadings should look the same throughout a document to reinforce the design principle of repetition. Another important aspect of page layout is the grid system. The typographic grid is a popular tool for breaking pages up into smaller editable parts. Designing a new layout often begins with defining the grid structure of the page, with an idea of the scope, size, and proportions of the content that will fill it. Once the grid is created, it cannot be altered without affecting the visual information on the page.
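A minimal sketch of the arithmetic behind a typographic grid: dividing a fixed page width into equal columns separated by gutters. The measurements below are illustrative, not from the chapter:

```python
# Divide a page's live area into equal columns separated by fixed
# gutters (all units in points; 72 pt = 1 inch).
def column_width(page_width, columns, gutter):
    total_gutter = gutter * (columns - 1)
    return (page_width - total_gutter) / columns

# A 612 pt (8.5 in) page with 1 in margins leaves 468 pt of live area;
# three columns with 12 pt gutters give 148 pt columns:
width = column_width(468, 3, 12)
assert width == 148.0
```

This is why altering a finished grid ripples through the whole layout: every column width, and therefore every line length and image size, depends on the same few numbers.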


Other important terms for a successful page layout: a static page delivers the same layout and content to every person viewing it, whereas a dynamic page is one whose content changes over time or with each new visitor. Fixed layouts have a predetermined width and height; an example would be a printed book or newspaper. Fluid layouts vary with the device used to view the page and will look different on different monitors.

Page design is an important part of designing a successful multimedia project. The general concepts in this chapter apply to an arrangement of different visual elements within the shared space of a multimedia page.

Visual Communication

Visual communication is an area of study that looks at the transmission of ideas and information through visual forms and symbols. It also looks at the cognitive and affective processes that affect the way we perceive stimuli. Visual communication involves the interaction of content and form. Content is the stories, ideas, and information we exchange with others. Form is the manner in which content is designed. Think of a person: the message they present is the content, while their clothes, accessories, and makeup are the form. Content relates to what we want to say; form has to do with how we choose to communicate it. Content and form are the main components of visual design. Without an appealing treatment of both, the message you are trying to convey will go unnoticed.

There are many important elements in visual design. The elements of design are the building blocks of any successful visual content, and understanding them will help you become a better visual communicator.

  1. Space: a design begins with a blank surface or design space and evolves from there. It's the designer's job to fill the empty space with visual content. In photography, the design space is called the “field of view.”
  2. Dot: the most basic form of representation is the dot. It is the starting point for all other elements of design. Dots can merge to complete and portray visual objects.
  3. Line: a line is the visual connector between two points in space. Lines can be real or implied (a painted white line versus the implied horizon line). Straight, curved, diagonal, and horizontal lines can all be used in visual communication design.
  4. Shape: a shape is a two-dimensional element formed by the enclosure of dots and lines. Organic shapes resemble shapes in the natural world; they are usually imperfect and have a flowing appearance. Shapes can be powerful visual forces in design.
  5. Form: form adds a third dimension to elements built from dots, lines, and shapes. Lighting affects how form is perceived and can help make a visual design successful.
  6. Texture: texture is the surface quality of a visual object that evokes a sense of tactile interaction; it can also be implied in images. Texture ignites the imagination and can break up visual monotony.
  7. Pattern: a pattern is a recurrence of a visual element within a design space. Clothing, furniture, and wallpaper, for example, can often be identified by the patterns they employ. Organic patterns are found in nature, such as a flock of geese flying in formation.
  8. Color: color has three dimensions: hue, saturation, and brightness. We can use color to set a tone or mood and to elicit instant associations that attract attention.
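Python's standard colorsys module expresses exactly these three dimensions, converting RGB component values into hue, saturation, and value (brightness):

```python
import colorsys

# colorsys names color's three dimensions: hue (position on the color
# wheel, here reported in degrees), saturation (purity), and
# value/brightness. RGB components are given in the 0.0-1.0 range.
def describe_color(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return {"hue_degrees": h * 360, "saturation": s, "brightness": v}

pure_red = describe_color(1.0, 0.0, 0.0)
assert pure_red == {"hue_degrees": 0.0, "saturation": 1.0, "brightness": 1.0}

# Desaturating toward gray lowers saturation but leaves brightness alone:
pale_red = describe_color(1.0, 0.5, 0.5)
assert pale_red["saturation"] == 0.5 and pale_red["brightness"] == 1.0
```

Separating color into these three axes is what lets a designer shift a palette's mood (hue), mute it (saturation), or lighten it (brightness) independently.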

It's important to use balance and depth when creating a visual communication project. You want the visual weight of objects to be distributed evenly within the frame so the composition reaches a state of equilibrium. Objects also shouldn't fall flat in the visual space; a design needs depth to be successful.