Posts

Smartphones & Videography

On Tuesday, Instagram launched a new app, currently for iOS with an Android version planned, that easily creates hyperlapse videos. What’s it called, you ask? Hyperlapse, of course.

Hyperlapse by Instagram

What exactly is hyperlapse? Hyperlapse is a type of timelapse photography. In a traditional timelapse, footage or a series of still images is sped up in post-production to create a fast-motion sequence. A hyperlapse additionally alters the position of the image frame by frame to create a smooth, motion-tracked sequence in fast motion. Check out their promo video for some examples and comparisons with normal footage.

Hyperlapse framing comparison

The end image is smaller, appearing more zoomed in than the original, since there needs to be leeway to rotate and pan each frame.

What’s unique about Instagram’s new app is that instead of using image processing to stabilize the image, it uses data from the gyroscope in your phone. This avoids the heavy computational requirements traditionally needed to plot a smooth course through the frames of video. Instagram has an in-depth explanation of the technology behind the app on its engineering blog.
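
For the curious, here’s a minimal sketch of the general idea in Python with OpenCV, using hypothetical gyro data and handling only camera roll; Instagram’s actual implementation works from full three-axis gyroscope data and is considerably more involved.

    import cv2  # OpenCV, for warping frames

    def stabilize(frames, roll_deg):
        """Counter-rotate each frame by the camera roll accumulated
        from gyroscope samples. roll_deg[i] is the roll in degrees
        at frame i -- hypothetical data, for illustration only."""
        out = []
        for frame, roll in zip(frames, roll_deg):
            h, w = frame.shape[:2]
            # Rotate opposite the camera's motion, zooming in slightly
            # so the rotated frame still fills the output (this is the
            # crop/zoom-in effect described above).
            m = cv2.getRotationMatrix2D((w / 2, h / 2), -roll, 1.1)
            out.append(cv2.warpAffine(frame, m, (w, h)))
        return out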

 

There are tons of accessories and gadgets available for smartphones to make them capable of very high quality recordings, but photographer Lorenz Holder shows you seven tricks to help create some awesome stills and videos. I had several “Why didn’t I think of that?” moments while watching.

 

Lastly, FilmmakerIQ.com has fantastic, regularly updated lessons and tutorials about filmmaking and related topics, from the science of sound to DIY equipment construction.

FilmmakerIQ

Color Correction vs Color Grading

Camera Corrected & Graded

Working with color is an important aspect of video production that many people grow into, at least cursorily. As a novice in post-production, it’s easy to misuse or interchange terminology. While color correction and color grading use some of the same tools and processes, they serve different purposes and are done in different parts of the workflow.

Color correction is used to alter footage across a project so that its appearance is consistent, creating an accurate portrayal as it would be viewed by the human eye and making sure whites look white and blacks look black. Typically this means compensating for inaccurate camera settings, leveling color temperature, or adjusting contrast, brightness, and saturation. The human eye sees white as white under varying lighting; with cameras, however, you have to tell the sensor what white is. If done improperly, your image will have a red, blue, or yellow cast. In addition, if you are shooting outside over the course of an entire day, the color of the light will change as you move from sunrise to mid-day to dusk. Even passing clouds will change the color.
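
As an aside, the simplest automatic white balance algorithms are surprisingly small. Here’s a minimal sketch of the classic gray-world method in Python with NumPy; real correction tools are far more sophisticated, so treat this as illustration only.

    import numpy as np

    def gray_world(img):
        """Gray-world white balance: assume the scene averages out to
        neutral gray, so scale each channel's mean to match the
        overall mean. img is a float RGB array with values in [0, 1]."""
        means = img.reshape(-1, 3).mean(axis=0)  # per-channel means
        gains = means.mean() / means             # per-channel gains
        return np.clip(img * gains, 0.0, 1.0)    # keep values in range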

Color grading (called color timing in reference to film) alters the image for aesthetic or communicative purposes: to enhance the story, create a visual tone, convey a mood, express emotion, or carry a theme. Typically the alterations in color grading are more extreme than those in color correction. In rare cases, grading can even salvage problematic footage that color correction is incapable of fixing. Grading usually begins at the end of editing: the editor grades the project themselves, hands it to a dedicated colorist, or, when a quicker turnaround is required, sends the footage off to be graded while editing continues.

Extreme dream sequence grading

Color is a powerful component, and careful thought goes into crafting the look of a piece. Grading can be used subtly, warming a scene by pushing the oranges and reds to give it the feel of late afternoon, or used to make sweeping changes, creating striking visuals such as a surreal dream sequence.

 

Band of Brothers

There are many common grades that you may not have overtly noticed. A desaturated look is used to indicate something from a long time ago, such as in HBO’s Band of Brothers. Sepia tone is also used for the same purpose, but more sparingly.

O Brother Where Art Thou

O Brother Where Art Thou employed a variant, using desaturation while exaggerating the yellow and orange hues in a monochromatic color palette. Desaturation can also be used when dealing with a bleak and dreary world like the one in AMC’s The Walking Dead.

The Walking Dead

Contrasting color grades can be used to differentiate locations, opposing forces or viewpoints, and acts within a picture. Steven Soderbergh’s Traffic used starkly contrasting color themes for Mexico’s corrupt underbelly and the United States’ political environment.

Traffic

The Matrix movies are another iconic example of color indicating changes in locale, with heavy greens for the Matrix and more true-to-life colors for the real world.

Matrix

Pitch Black uses color theme changes to show the passage of time, akin to Traffic’s use for locale. Warm, desaturated, blown-out highlights stylize the ultra-bright daytime of multiple suns, with dark, cold blues for the long, dangerous nights.

Pitch Black

Faux nighttime can also be created in color grading when the footage was shot during the day. This is common practice, since shooting in low light can be difficult and frequently imparts heavy grain in the image.

Day For Night

Production Tip: White balance off a manila envelope for easy in-camera day-for-night.
Thanks Jason Johns, Media Services’ man-of-awesome!

Along the same lines, almost all night vision shots are created with color grading, such as in this shot from Zero Dark Thirty.

Zero Dark Thirty

Recently, major Hollywood blockbusters have adopted a duo-tone color theme of warm midtones and cool shadows and highlights, creating neutral backgrounds with almost-orange skin tones. This treatment is best seen in the Transformers movies, and some argue it’s being overused.

Transformers Treatment

Color grading can do more than affect the image as a whole, though. With color grading you’re able to isolate colors for secondary color correction as well. For instance, a shot in the forest that’s drab straight from the camera can become a vibrant, lush landscape without affecting skin tones and other colors.

Secondary Color Correction
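
Under the hood, a secondary correction is just an adjustment applied through a color-range mask. Here’s a toy sketch in Python with OpenCV, assuming an 8-bit BGR frame; dedicated grading tools offer far finer keying controls than this.

    import cv2
    import numpy as np

    def boost_greens(bgr, gain=1.5):
        """Toy secondary color correction: saturate only the green
        hues, leaving skin tones and other colors untouched."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
        # OpenCV's 8-bit hue range is 0-179; roughly 35-85 covers greens.
        greens = (hsv[..., 0] > 35) & (hsv[..., 0] < 85)
        hsv[..., 1][greens] = np.clip(hsv[..., 1][greens] * gain, 0, 255)
        return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)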

Speedgrade Presets

Adobe Speedgrade’s presets

So, how is color grading done? Professionally, color grading is done with dedicated software such as DaVinci Resolve, Red Giant Colorista II, or Adobe SpeedGrade, but much less elaborate tools also exist in editing and compositing software like Premiere and After Effects for rudimentary control. Color grading software comes with at least a handful of presets, and add-on packages of nothing but presets, such as Magic Bullet Looks, are available for one-click styling.

A sample of the presets included with Looks

However, if you aspire to do more than just slap a template look onto your video, an understanding of color theory is crucial for good color grading. A good overview of color theory is available at worqx.com, and Adobe has a fantastic color theme creation tool. While one of the best ways to get good at color grading is to do it day in and day out on a variety of content, the next best thing is to study deconstruction videos from professionals. These videos walk you step by step through how a look was created by layering grades and manipulating ranges of thresholds and tolerances within the color channels.

A blog of color grade breakdowns from professional colorist Charles-Etienne Pascal.
http://blog.iseehue.com/

A color grade breakdown article from PremiumBeat.com
http://www.premiumbeat.com/blog/impressive-color-grading-breakdowns/

 

Checking out before and after examples is also a great way to see what’s possible.

Adam Myhill provides some before and after examples
of his color grades with some extra insight.
http://www.adammyhill.com/color-grading/

A great color analysis of the movie Black Hawk Down.
http://www.outside-hollywood.com/2009/03/color-theory-for-cinematographers/

Post-production Goodies

Today we’re going to take a quick look at a handful of useful tools to enhance your post-production workflow and help take your videos to the next level.

 

Open Captions

The ability to easily create open captions, commonly referred to as ‘burned-in’ captions, which are always visible, is surprisingly absent from most editing software. While out-of-the-box tools exist for creating titles, using them to author captions is a long and laborious process. Thankfully, plugins for After Effects, Premiere Pro, Motion, and Final Cut have been created to do just that. These plugins import SRT caption files, a common format supported by YouTube and many other web video platforms.
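
If you haven’t worked with SRT before, it’s a simple plain-text format: a numbered cue, a timecode range, and the caption text. For example:

    1
    00:00:01,000 --> 00:00:04,200
    Welcome to the course.

    2
    00:00:04,500 --> 00:00:07,900
    [upbeat music]
    Today we'll look at color grading.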

SUGARfx Subtitles $99

Provides full control over the font & layout and is widely compatible: After Effects CS5-CC, Premiere Pro CS6-CC, Motion 4-5, & Final Cut Pro 7-X.

pt_ImportSubtitles $25

While cheaper, pt_ImportSubtitles isn’t quite as widely compatible as SUGARfx; this plugin supports only After Effects CS3-CS6.

Open Caption Plugins

 

Racking Focus and Camera Mapping in 2D

Creating motion in still images brings them alive in video, but you can only go so far with the Ken Burns zoom-and-pan technique. Rowbyte has created a pack of After Effects plugins that allow for the creation of depth in 2D images and video. The two you’ll get the most use out of with still images are Camera Mapper and Rack Focus.

Buena Depth Cue $99

Buena Depth Cue

 

Color Grading Presets

If you’re not yet fully into color grading but would like to do more with your videos in After Effects, Film Riot has a pack of 15 presets available. These presets are click-to-apply simple and cover a large range of styles, including a daylight white balance correction.

Film Riot’s Color Preset Pack & Tutorial $15

Triune Color Presets

 

Advanced Color Grading Presets

Before making the jump all the way to full-bore color grading with SpeedGrade, Resolve, Colorista, or REDCINE-X, Red Giant offers a nice middle ground with over 100 look presets and lightweight grading controls.

Magic Bullet Looks $199 Academic / $399 Full

Magic Bullet Looks

 

Noise Remover

If you have to deal with grainy footage, whether it’s low-light material shot at a very high ISO or old analog recordings, Red Giant’s Magic Bullet Denoiser II does an amazing job with the available presets and provides fine control for customizing the output.

Denoiser II $99

Denoiser II

 

Aspect Ratio Masks

Triune also has a pack of aspect ratio masks for giving normal 16:9 widescreen video a film look. The pack includes:

  • 1.85:1 (35mm)
  • 2.35:1 (CinemaScope)
  • 2.40:1 (Panavision)
  • 2.55:1 (CinemaScope 55)
  • 2.75:1 (Ultra Panavision 70)
  • 3.00:1 (MGM Camera 65)

Aspect Ratio Pack & Tutorial $3

Aspect Mask
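
If you’re curious about the math behind these masks, it’s a one-liner: mask the frame down to the target ratio and split the leftover height between the top and bottom bars. A quick sketch in Python:

    def letterbox_bar_height(width, height, target_ratio):
        """Height in pixels of each top/bottom bar needed to mask a
        frame down to target_ratio (e.g. 2.35 for CinemaScope)."""
        visible = width / target_ratio
        return (height - visible) / 2

    # Masking 1920x1080 (16:9) to 2.35:1 leaves ~131 px bars:
    print(letterbox_bar_height(1920, 1080, 2.35))  # ~131.5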

Current State of HEVC/H.265

The current industry-preferred codec, H.264, took over six years to mature to full end-to-end support and polished transcoding efficiency. HEVC’s timeline, however, should be accelerated compared to H.264’s: the migration from H.264 to HEVC is more a maturation of existing technologies into an advanced state, whereas the adoption of H.264 was a major jump from what came before.

The reason there’s so much fervor over HEVC is the many improvements it promises over its predecessors. The most prominent is its encoding efficiency: HEVC is expected to provide the same quality at half the data rate, reducing the demands on Wi-Fi and cellular networks, or twice the resolution at the same data rate, allowing HD content where it wasn’t feasible before. Some are skeptical that this will relieve network requirements, arguing that HEVC will actually worsen the situation. Why? Because the lessened requirement on connection speeds will create and facilitate drastically greater demand. Video is already the majority of Internet traffic and is poised to become the dominant data type in the near future as demand continues to grow.

Another boon, albeit one that is much less talked about, is that HEVC’s image degradation is much more pleasing to the human eye. As bits are lost, the image takes on a smoother, almost softened look that is less blocky than the compression artifacting of H.264.

Licensing terms for HEVC were initially a major concern for many companies until the terms were made public. H.264’s licensing incurred royalty charges whenever H.264 content was sold as pay-per-view, and these could have similarly, or to a greater extent, impacted HEVC. However, HEVC has no content-related royalties, alleviating many costs currently borne by content creators under H.264 licensing. Yet the terms are more expensive for encoder/decoder manufacturers: where H.264 had a maximum of $6.5 million in possible charges, HEVC’s cap is set at a staggering $25 million. That said, only very large companies that play in this field will be impacted; companies like Adobe, Apple, Google, Microsoft, and Mozilla. Even though the cap will only be felt by such massive companies, it may greatly hinder or even prevent widespread adoption. Adobe recently announced its decision not to implement HEVC support in Flash. This may sound like an odd move, but with the global saturation of the Flash player, Adobe would be facing a $25 million expenditure the quarter it implemented HEVC, when its quarterly net income is only around $50 million. That’s a hard pill to swallow for investors, especially when Adobe and these other companies would be shouldering the cost so that content providers can distribute their content with no fees and no return for Adobe.

As with any technology in its early stages, there are a few reasons not to rush into migrating to HEVC. Currently, encoding to HEVC requires drastically more power than H.264. For the individual this increase is negligible, but for massive companies such as Netflix it can be a substantial cost increase. In addition, current HEVC encoders take upwards of ten times longer than their H.264 counterparts. Not only does this increase your time to a deliverable file, it also makes real-time encoding currently impossible. Early adopters are also reporting that encoding at resolutions at or below 720p isn’t yielding appreciable file size savings, though even with similar file sizes they’re seeing an increase in perceived quality. As the technology matures, efficiency and power needs will improve, alleviating these problems, with real-time encoders likely just a few years away.
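
If you’d like to see the trade-offs firsthand, ffmpeg builds that include libx265 can already produce HEVC. A typical starting point looks like the command below; the flags shown are common suggestions, not a definitive recipe. x265’s default CRF of 28 is often cited as roughly comparable in quality to x264’s default CRF of 23 at a noticeably smaller file size, and you’ll notice the encode takes far longer.

    ffmpeg -i input.mp4 -c:v libx265 -crf 28 -c:a copy output_hevc.mp4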

Several other concerns loom over HEVC, but these too will be alleviated with time. Encryption and digital rights management (DRM) for protecting the delivery of content is still emerging; it’s something many content owners will require before they approve distribution of their content. UHDTVs, a major venue for HEVC content, are still nascent in ownership. Lastly, the cost of replacing copious hardware and software transcoding systems looms over current companies.

With all of the benefits awaiting us with HEVC, there is prudence in not rushing too quickly into adoption. As time passes and the technology matures, we’ll see some truly great video in places we haven’t yet been able to put it.

Current State of Video in Education

Today’s students are intimately familiar with video and expect it in their education, starting as early as elementary school.

Not only do they assume there will be video, but their expectations are being shaped by resources like YouTube and Coursera. This trend is having a profound impact on education and on instructors who embrace forward-looking pedagogies. With video, students are more engaged in creating their learning experience rather than just passively receiving information. Integration of video increases student engagement, maximizes school resources, facilitates collaboration, and can accommodate different learning styles. All of this leads to improved learning experiences and outcomes.

Flipped classroom and blended learning approaches, which use video as a core component, are the new baseline for teaching a course. However, just recording a lecture and putting it online for students to watch isn’t enough. Ideally, videos should be in short, easily digestible chunks with high production quality. This is very similar to the model the online training company Lynda.com has adopted: an entire training module may be upwards of 24 hours of content, but it’s broken into numerous short videos, typically under five minutes each, that play in sequence. As instructors have started to embrace video for these new teaching methods, the biggest problem remains that do-it-yourself recordings are typically low quality, plagued by simple mistakes with easy fixes, such as being backlit and appearing as a silhouette. Taking the time to plan, set up, and practice will make a world of difference in the end product and keep the viewer engaged.

Unfortunately, many instructors see production value as a universal solution, but condensed, easily digestible content still trumps video quality. There’s also a factor that’s less tangible yet just as important as good production: the charisma of the presenter. The more engaging and entertaining presenters are, the more likely the student is to keep watching. A monotone professor droning on will kill viewership faster than a broken overhead projector.

For entirely online courses, video also serves as a reminder that the instructor is a real person, not just someone hidden behind endless PowerPoint slides. Video allows for a visual connection between professors and their students. Not all content is conducive to having the presenter on screen all the time, though, and switching back and forth can add a good deal of post-production editing time. At a minimum, the instructor should record introductory videos for each topic chunk or week, depending on how the course is set up.

Cisco recently commissioned a review of current research on the benefits of video for learning and the quality of the educational experience, and the findings are impressive. Two-thirds of respondents believe that video increases student motivation, increases discussion, and helps instructors be more effective. Over 90% of university students who watched recorded lectures felt doing so helped them learn course material. And almost half of elementary school children who used streaming video scored higher on their end-of-year science exam. Check out Cisco’s infographic and whitepaper for more information.

Current State of Captioning

In October 2010, President Obama signed the Twenty-First Century Communications and Video Accessibility Act (CVAA) into law. The CVAA expands upon the existing requirements set forth in the Rehabilitation Act of 1973 and the Americans with Disabilities Act of 1990 to increase access to current technologies, such as broadband and mobile, for those with disabilities. A key component of the law is the requirement that all captioned content delivered on television must now be captioned when delivered via the Internet as well.

In addition, the FCC added four non-technical quality standards in February: accuracy, completeness, placement, and synchronicity. Of those, accuracy will be the most impactful for content creators. Accuracy can primarily be considered a measurement of correct words in the transcript compared to the number of incorrect words in the program. For example, a 9,050-word video with 68 errors is 99.2% accurate. Errors can be anything from misspellings to missed words to punctuation that impedes comprehension. But the new restrictions are more comprehensive than just accuracy. “In order to be accurate, captions must match the spoken words in the dialogue, in their original language (English or Spanish), to the fullest extent possible and include full lyrics when provided on the audio track,” states the FCC. This includes the prohibition of paraphrasing and mandatory non-verbal information such as speaker identification, sound effects, and audience reactions. The FCC hasn’t given a specific accuracy requirement, but as a likely guideline, the U.S. House of Representatives already requires 98.6% accuracy for all of its floor proceedings recordings.
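
That example works out directly from the ratio of errors to words:

    words, errors = 9050, 68
    accuracy = (1 - errors / words) * 100  # 99.248..., i.e. 99.2%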

Schools and universities have long been required to provide accommodations under Sections 504 and 508 to any student who needs them. This includes any media or online content used within a course. Captioning the sheer volume of material created daily within a university is a daunting task, let alone tackling the vast amount of preexisting content. With so many organizations and businesses working just to maintain compliance, many haven’t looked ahead to the extended benefits of having captioned content.

Many student unions have video walls displaying several sources at once that would otherwise not be consumable.

For the large portion of the population who experience hearing difficulties but aren’t deaf, captions can assist in digesting materials they might otherwise struggle with. This also extends to hearing individuals who may be unable to hear the content because they’re in a noisy environment or somewhere multiple pieces of content compete with each other. From a lecture standpoint, when the material covered in a video is particularly difficult or convoluted, being able to read the transcript as the presenter is speaking can greatly assist in comprehension and note-taking.

Most universities host a diverse student population locally, and this diversity is only compounded by new online course and program offerings. With the global popularity of massive open online courses (MOOCs), the traditional base of online students is even broader: on average, OSU’s course offerings through Coursera see over 60% of enrolled students coming from outside the United States. In language classes especially, comprehending the written word is much easier than mastering the nuances of speech. Beyond those learning a new language or multilingual viewers, people who don’t speak the language of the content must rely entirely on the transcript and captions. And thick accents can pose comprehension problems even when the presenter and viewer share a language.

Lastly, having transcripts available makes content more easily indexed by web search engines, putting your content in front of more people through ease of discovery. A step beyond that, captions allow for searching within the content and jumping directly to the desired point of interest in certain products, such as Mediasite, Ohio State’s new lecture capture solution. This increased usability gives students the flexibility to take abbreviated notes and return to the recorded lecture later for review or clarification, letting them focus on the lecture while in class.

With all of these benefits ready to be taken advantage of, the workflow for getting your content captioned can be discouraging to non-professionals. Fortunately, outsourcing has become a reasonably priced option, and among the many options available, OSU has its own: Transcribe OSU is a student-staffed, cost-recovery service supported by the OSU Web Accessibility Center, Office for Disability Services, and ADA Coordinator’s Office, available exclusively to the Ohio State community.

You might want to wait to buy that UHDTV

What’s the problem? Your shiny new TV may be obsolete one way or another within a couple of years. UHD supports a wider range of colors than HD does, but the parts required to support this new standard weren’t available for the production of first-generation UHDTVs.

Images from Sakurambo and GrandDrake, as found in an article from HD Guru

The color space used by the HDTV standard is referred to as Recommendation 709; in addition to supported colors, the recommendation also defines frame rates, pixel count, and the 16:9 widescreen aspect ratio. The new UHD color space is Recommendation 2020. The increased color range means less banding and higher quality color reproduction. Although most displays today can’t support the new color space, content creators and providers are already preparing for fully standards-capable UHDTVs.

In addition to the new UHD color space, HEVC/H.265 has multiple profiles. Every UHDTV will support the rudimentary Main profile, but content creators are encoding their material using the Main 10 profile, which uses 10 bits per sample over Main’s 8, because it produces better quality. The problem is that most UHDTVs don’t list which profiles they support, and, as with early HDTVs, support for the full range of specifications is hit or miss. Usually miss.
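
The quality difference is easy to see in miniature: 8 bits give 256 levels per channel while 10 bits give 1,024, so a smooth gradient quantized at 8 bits takes coarser steps, which appear on screen as banding. A tiny sketch in Python:

    import numpy as np

    ramp = np.linspace(0.0, 1.0, 4096)                 # a smooth gradient
    levels_8 = np.unique(np.round(ramp * 255)).size    # 256 levels
    levels_10 = np.unique(np.round(ramp * 1023)).size  # 1024 levels
    print(levels_8, levels_10)  # fewer, coarser steps at 8 bits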

Netflix recently announced that it’s beginning the transition to HEVC Main 10 for its 4K content, and many other large distributors will be quick to follow. Hollywood is likewise eager for the move to Rec. 2020 and HEVC Main 10 and is already processing films accordingly. The push to UHD is palpable, but not everyone is ready.

So, if you choose to pick up a UHDTV soon, make sure it specifically has Main 10 profile support, or HDMI 2.0 support if you don’t mind adding an external decoder in the future…unless you find a price you just can’t say no to.

Real Promise of 4K

With 4K/UHDTV vying for consumer adoption, many are touting the new standard’s increased resolution as a way to make content even clearer. While increased clarity is certainly a boon, no one seems to be talking about what could easily be a much more impactful difference: screen real estate.

Television Size & Distance

Today most HD screens are overkill for where they’re used. To realize the full benefit of a shiny new 65” 4K TV you’d have to be sitting about four feet away from the screen. Even for a 1080p screen that size, you’d need to be a paltry eight feet from it. (See chart at right.) Once you get far enough away from any of these screens, your eyes just can’t tell much of a difference.
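
Those distances fall out of a simple visual-acuity calculation: assume a viewer with 20/20 vision resolves about one arcminute, then find how far away a single pixel still subtends that angle. A sketch in Python (idealized; real perception is messier):

    import math

    def max_useful_distance_ft(diagonal_in, horizontal_px, aspect=16 / 9):
        """Farthest viewing distance (feet) at which pixels are still
        individually resolvable, assuming ~1 arcminute of acuity."""
        width_in = diagonal_in * math.cos(math.atan(1 / aspect))
        pixel_in = width_in / horizontal_px
        return pixel_in / math.tan(math.radians(1 / 60)) / 12

    print(max_useful_distance_ft(65, 3840))  # ~4.2 ft for a 65" 4K set
    print(max_useful_distance_ft(65, 1920))  # ~8.5 ft for 1080p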

Try scooting back a few feet from your monitor and see how much of a disparity you can discern between these two stills. (Click to view full size)
1080 vs 4K
Up close there’s an easily distinguishable difference in detail, especially in fine areas such as hair. But once you put any appreciable distance between yourself and the screen, it gets lost. Since most HDTVs are used in living rooms where the viewer sits ten or more feet away, the increase in resolution is less and less perceivable as you move from 720p to 1080p to 4K.

Where 4K can really shine is by leveraging all of that extra real estate to display double the content at the same effective resolution we’re accustomed to today. By using wider-angle lenses we’re able to reveal more of the scene. Take the 2014 Sochi Olympics, for instance.

What you saw in HD as this …

Sochi @ 1080
… could have looked like this.
Sochi @ 4K
Swapping lenses to change how the image is captured is something cinematographers have been doing since the early days of movies, as well as in modern television content. But without increasing the resolution, you’re squeezing more information onto the same number of pixels.

Movie Tight vs Wide Angle TV Tight vs Wide Angle

However, I don’t want to sell short the amazing quality that can be garnered from 4K cameras and lenses. Side-by-side comparisons of stills taken from 1080 and 4K footage are impressive, but seeing actual video is amazing. Dylan Lierz posted a great video on YouTube comparing shots taken in 1080p and 4K. Make sure you watch it full screen at 1080p quality.

Since 4K displays are still a small minority of the market, one of the first benefits most consumers will see is what’s being referred to as Super 2K: 4K content scaled down to 1080p. Content creators get all the benefits of 4K, and consumers get a crisper, higher contrast image, even though it isn’t the native resolution of their screens. This is visible in Dylan’s comparison clip: since YouTube doesn’t yet support delivery at 4K resolution, the 4K video is effectively Super 2K.

 

Encoding or Transcoding?

Most people use the terms encoding and transcoding interchangeably; however, there is a distinct difference between the two. Encoding refers to taking an uncompressed source and converting it to a compressed file, whereas transcoding is taking an already compressed file and converting it to another compression scheme. These compression schemes are commonly referred to as codecs. A codec, in this context, is the standard or format of the compression.
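
In ffmpeg terms, purely as an illustration (the file names here are made up), the first command below encodes an uncompressed capture to H.264, while the second transcodes that already-compressed file to HEVC:

    ffmpeg -i capture_uncompressed.avi -c:v libx264 -crf 23 encoded_h264.mp4
    ffmpeg -i encoded_h264.mp4 -c:v libx265 -crf 28 transcoded_hevc.mp4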

In order for a computer to create or decipher files using a compression format, it employs a program that codes and decodes to and from that standard. This program is also referred to as a codec, and it’s where the portmanteau comes from: coder/decoder. These programs are akin to their hardware counterparts: endecs (encoder/decoder), used for encoding analog signals such as those from a VCR into files, and modems (modulator/demodulator), if you’re old enough to remember them, which were used for sending digital information over analog telephone lines.

Ah, the good old days of BBSs.  Yes, they were a thing.  Look it up.

Encoding & Transcoding