
Media Content Storage Panel


January 6, 2013, Storage Visions Conference, Las Vegas—A panel looked at the requirements for digital production flows for media. Pallab Chatterjee from M & E Tech moderated the panel. Speakers included Don Molaro from DataDirect Networks, Jim Jonez from Dot Hill, Tracey Doyle from HDS, and Steven Cohen from Aberdeen LLC. In addition to the speakers, Tracy Spitler from IntelliProp and Brent Walsh from Panasas participated in the panel.

RAID at hyperscale?
Molaro suggested that drive capacities have scaled from 50GB to 8TB, but protecting the data becomes a problem at the higher capacities because drive bandwidth has stayed roughly constant. For a RAID5 configuration with 1TB drives, the rebuild time can range from hours to days, and a 2TB array can take from days to weeks. Unfortunately, a standard RAID5 configuration can only survive a single drive failure, so one more dead drive during a long rebuild can kill the system.

Changing to a RAID6 configuration provides an extra parity block, so two drives must fail before the system is lost. The problem is the rotating drives and the standard formatting that puts stripes in big blocks in the array. An alternative is to scatter stripes across multiple drives, so a single drive failure only affects a small number of RAID stripes, and the rebuild reads all run in parallel across the backplane of the system.

This configuration also performs rebuilds in parallel to reduce recovery times. A large number of small systems is easier to manage, so scaling out both reduces the impact of a single drive failure and increases performance. The overall array failure rate scales as the number of drives times the inverse of the drive MTBF.
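The rebuild and reliability arithmetic above can be sketched with illustrative numbers; the drive size, per-drive bandwidth, and stripe spread below are assumptions for the example, not figures from the panel:

```python
# Illustrative sketch of why declustered RAID rebuilds faster.
# All numbers are assumptions for the example, not panel figures.
DRIVE_TB = 2.0          # capacity of the failed drive
PER_DRIVE_MB_S = 100.0  # sustained per-drive rebuild bandwidth

def rebuild_hours(drive_tb, drives_sharing_load, mb_s=PER_DRIVE_MB_S):
    """Hours to reconstruct one drive's worth of data when the rebuild
    I/O is spread across `drives_sharing_load` drives in parallel."""
    total_bytes = drive_tb * 1e12
    return total_bytes / (drives_sharing_load * mb_s * 1e6) / 3600.0

# Classic RAID: the single spare drive is the bottleneck.
classic = rebuild_hours(DRIVE_TB, 1)
# Declustered RAID: stripes and spare space are scattered over, say,
# 20 drives, so rebuild reads and writes proceed in parallel.
declustered = rebuild_hours(DRIVE_TB, 20)

def array_failure_rate(n_drives, drive_mtbf_hours):
    """Expected drive failures per hour across the whole array:
    reliability degrades as drive count times inverse MTBF."""
    return n_drives / drive_mtbf_hours
```

Under these assumptions the classic rebuild takes several hours per TB while the declustered rebuild is roughly twenty times faster, which is the motivation for scattering stripes.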

Jonez described the next generation of edit and post-production storage as more areas of cinema go digital. The changes are driven by production needs and not by consumer demand. The entertainment community is trying to make more money by increasing the use of high technology. Traditional flows replicate the look of film by using 24 frames per second and editing at 8-10 bits per pixel. The content is projected on 2K machines at 4 bytes per pixel.

Newer evolving technologies are moving to 4K images and 3-D flows. The assets use 16 bits per pixel to increase dynamic range, and people are looking closely at high frame rate production and projection. Comparing the storage requirements of the traditional and evolving flows shows large differences. Traditional flows need about 300 MB/s. 4K needs 1200 MB/s and stereo doubles that. High frame rate can go to 7600 MB/s, 25 times the rate for traditional cinema.
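These rates follow from frame size times bytes per pixel times frame rate. A quick sanity check of the panel's figures, assuming 2K film-scan frames (2048x1556) at 4 bytes per pixel and 4K frames (4096x2160) at 6 bytes per pixel (16 bits per color channel); the exact resolutions and frame rates are assumptions chosen to match the quoted numbers:

```python
def rate_mb_s(width, height, bytes_per_pixel, fps, eyes=1):
    """Sustained data rate in MB/s for uncompressed frames."""
    return width * height * bytes_per_pixel * fps * eyes / 1e6

# Traditional 2K film-scan flow: 2048x1556, 4 bytes/pixel, 24 fps
traditional = rate_mb_s(2048, 1556, 4, 24)       # ~306 MB/s
# 4K flow: 4096x2160, 6 bytes/pixel (16 bits x 3 channels), 24 fps
uhd4k = rate_mb_s(4096, 2160, 6, 24)             # ~1274 MB/s
# Stereo doubles the rate
stereo4k = rate_mb_s(4096, 2160, 6, 24, eyes=2)  # ~2548 MB/s
# High-frame-rate stereo at 72 fps per eye
hfr = rate_mb_s(4096, 2160, 6, 72, eyes=2)       # ~7644 MB/s
```

With these assumptions the high-frame-rate figure lands at roughly 25 times the traditional rate, matching the panel's comparison.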

As a result, the movie companies need both throughput and capacity to meet the needs of the evolving flows. In addition, the companies need a collaborative workflow that allows parallel work to improve speed. The tools need to be fully integrated and have capabilities for the unique features of color graders and on-line finishing. A typical flow needs open tools. Storage needs to be high throughput, reliable, and operate with a single manager for all file types and flows. When the flows become mature, they will migrate to broadcast. Ultra-high definition and 8K images will eventually become available to everyone.

Doyle considered the issues of monetizing file-based content. All of the digital assets need to be easy to manage and operate in a scalable environment to enable other uses of those data. Monetizing the assets calls for new revenue streams. The issue for the data systems is to reduce the total cost of operation while increasing the storage volume. A good flow allows alternate uses for the data, but generates more data in the process.

Companies need to look at the full spectrum of their data to determine how to monetize more pieces. The tools for this environment need to deliver data on-time, on-demand across the world. For on-set production, the data system needs to provide security, scalability, and reliability, and provide a realistic method to maintain the metadata. To maintain the value of the assets in the flow, everything has to be available when needed, in a secure and safe manner.

Archiving and a good database of the assets improve access to all of the assets. The issues are in search and in making the archived materials tie into the workflow. Preservation requires good metadata management and acceptable throughput. The file sizes can create issues due to the independent scaling of storage and compute. Media replication and upgrades are constantly needed, and the user has to put in effort to keep file types and formats useable with current tools.

Cohen described the Sony training center and lab. Users have access to F65 cameras; outputs are dumped to RAID in files ranging from 175MB to 5GB. The 4K workflow has 40TB of useable space and can handle any file format and any operating system. The storage connects through 10Gb Ethernet, 1000BASE-T, FireWire, and USB. The network configuration includes other hardware for ingest and has integrated functions to test other components of the system.

The RAID 10 has 96TB available, and the RAID configuration has 43TB useable. One area of contention is that 4 x 10GbE links feed into a single 40GbE port. Software for transport and transcoding is in place. The metadata includes 35 values per frame plus 13 more non-real-time values embedded in the first frame.

Metadata access?
Molaro noted that for large deployments, search is integrated in the storage array. A metadata query links directly to the files, and searching a billion objects takes only a few minutes.
Jonez added that metadata is the key to maintaining data from camera to post with integrated storage.
Doyle suggested tiered storage and analytics are important.
Cohen added that versioning and naming conventions through software are necessary to manage the non-linear, non-destructive workflows.
Spitler suggested that hardware protection is required for readback of data and metadata.
Walsh added that the data has to be used across all layers of the stack. People have to use the right tools and have applications to manage all of the data. The whole process takes a lot of effort.
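The kind of metadata-driven lookup the panelists describe, where a query resolves to files without scanning the objects themselves, can be sketched as an inverted index. The field names and file names below are made up for illustration:

```python
from collections import defaultdict

# Minimal sketch of an inverted index over per-object metadata:
# each (key, value) pair maps to the set of objects carrying it.
index = defaultdict(set)

def ingest(object_id, metadata):
    """Record an object's metadata fields in the index."""
    for key, value in metadata.items():
        index[(key, value)].add(object_id)

def query(key, value):
    """Return the objects whose metadata matches, without touching
    the objects themselves."""
    return index.get((key, value), set())

# Hypothetical clips with camera and scene metadata
ingest("clip_001.mxf", {"camera": "F65", "scene": 12})
ingest("clip_002.mxf", {"camera": "F65", "scene": 13})
f65_clips = query("camera", "F65")  # both clips
```

A production system would persist and shard such an index, but the lookup cost stays proportional to the result set rather than the object count, which is why a billion-object search can finish in minutes.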

Bandwidth and reliability?
Molaro offered that many of the systems are using non-traditional networks, more like high-performance compute farms. They DMA across InfiniBand.
Doyle commented that the users need scalable solutions with a management layer that is "people proof".
Cohen didn't have a physical product to tout, but noted that the film bond companies are requiring backup and multiple validations across drives before storage. They are telling the production companies to use multiple media and locations for the assets.
Spitler stated that the use of SSDs is increasing, but the ATA command set has no data protection functions. Users need a RAID controller to handle the data integrity.
Walsh countered that SSDs have internal integrity check capabilities.
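The integrity gap Spitler describes can also be closed above the drive. A minimal sketch of application-level block checksumming, a software stand-in for the end-to-end protection plain ATA does not offer (block size and helper names are assumptions for the example):

```python
import zlib

def seal(block: bytes) -> bytes:
    """Append a CRC32 trailer to a block before writing it out."""
    return block + zlib.crc32(block).to_bytes(4, "big")

def unseal(stored: bytes) -> bytes:
    """Verify and strip the CRC32 trailer on readback; raises if the
    block was silently corrupted at rest."""
    data, crc = stored[:-4], stored[-4:]
    if zlib.crc32(data).to_bytes(4, "big") != crc:
        raise IOError("checksum mismatch: block corrupted at rest")
    return data

block = b"\x00" * 4096        # assumed 4KB logical block
assert unseal(seal(block)) == block
```

A RAID controller or a checksumming filesystem performs the same verification in the storage layer, which is the role Spitler assigns to the RAID controller.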

Back compatibility issues, film archives, and translation to current formats?
Cohen agreed that black and white film separations have lasted over 100 years. The studios are still backing up to film. There is no digital medium that works for a long time, so analog magnetic stripes are used for the sound.
Molaro stated that high-performance compute labs are copying data from old technologies and converting to current formats.

Migration from traditional RAID stacks?
Molaro stated that the drives are participants in the system. They have to be designed as a part of the architectural process and store objects and not blocks. They have to ignore file systems. In general, drives are all very high quality.

Apps for block-level store?
Molaro offered operating characteristics: the goal is to look like a "perfect disk drive."
Walsh noted that declustering started in the '90s. Different workloads across spindles need smaller blocks to take advantage of large disk drives and reduce recovery times.

Collaboration? Interoperability?
Cohen observed that there are a small number of partnerships, and a lot of islands of software that are hardware agnostic.
Jonez commented that they work with various companies. The key interactions are to integrate the software. The industry needs more efforts in this area.
Doyle suggested that there are world-wide centers of excellence, but they need to collaborate more.

Expanding capacity for video change over time. Distribution issues?
Jonez acknowledged that this is a challenge. Innovation moves from cinema to consumers over time. The infrastructure is harder to migrate. Changes are coming, and long-term professional capabilities will move to the desktop.
Doyle added that more software tools are needed.

Directions for backplanes and capture versus legacy?
Molaro commented that this is a race as content changes. The hardware will drive the storage. Storage will be operational and archive, with the archive increasing faster to keep up with aging of processes.
Walsh offered higher performance such as 12Gbps SAS and increased HDD densities. Systems will need more integration for consumers. Not all of the storage will be SSD, as some functions need the capacity of a HDD. There will be a lot of churn in the hardware.
Cohen differed that the cloud and intelligent storage will change the requirements and address the increased capacity needs.
Jonez said that content will drive the changes. Storage has to keep up, or if it falls behind the requirements, it will drive other innovations.
Doyle suggested the solution will be layers. Virtualization and good management tools will enable more cloud aspects.

