Monday, August 25, 2014

Barnfind's high-speed data router and optical CWDM for TV infrastructure

I've been a very slack blogger over the last five weeks due to work (installations, running training, getting trained!) and holiday (splendid). I spent a few days last week in Norway as the guest of Barnfind in Sandefjord.
Norway seems to be a lovely country, if a tad expensive (£24 for a round of three pints). Barnfind are a small company whose engineers used to be with Nevion - you've probably come across their VikinX range of HD/SDi and other facilities routers. 

I had a long Skype chat with Barnfind a month ago and kind of 'got' their range. It's not a one-for-one replacement for any other specific product, rather a platform that nicely ties together all digital signals within a facility; synchronous (SDi, MADI, AES, etc.) and asynchronous (ethernet - copper & fibre, fibre-channel). They also make CWDM very doable in a broadcast environment. As we move towards an entirely IP-based infrastructure these are the kind of platforms that allow an easy transition. 

The basic product (the BarnOne BTF1-01) is a 32x32 generic data router with sixteen bi-directional SFP ports. The SFPs can be any MSA-compliant units, but Barnfind manufacture their own at very reasonable cost (much less than Cisco!). You could insert Ethernet, SDi i/o, fibre or any of the around 150 variations they offer. This allows you to route SDi in and out over fibre, insert AES into an SDi stream, convert ethernet to/from fibre, etc. 
3G HD/SDi input/output SFP

Clearly some signal types don't sensibly convert; routing an SDi stream to a fibre channel-equipped port won't replace an HP workstation running Avid! But where it does make sense everything is taken care of for you. In the case of all video signals (composite, SDi and HDMI are all supported) the signal is converted in the SFP to 3G SDi before it is passed to the 32x32 router.

The BarnOne range extends to several variations - the lower board, which carries the router and the first sixteen ports, can be joined by two upper boards carrying BNCs, more SFP holes or (more interestingly) CWDM fibre modules. Having BNCs on an upper board lets you avoid SDi SFPs (it's marginally cheaper to do it on an 8-way BNC board than to fill an extra eight holes with video SFPs).
 
The other end of the link could be easily served by their BarnMini units - essentially replacing Blackmagic or AJA converters but integrating very nicely with the BarnOne. For less than £400 you can get either two BNCs with an SFP hole or two SFPs. 

The whole thing makes sense when you realise that all the signal intelligence is in the SFPs - the dual-port BarnMini can do anything that makes sense; maybe you need to route some ethernet coming in on single-mode fibre and send it out over existing multi-mode cable. Again, AES, SDi, MADI as well as all fibre and copper networking are supported. 

Before I start banging on about Coarse Wavelength Division Multiplexing it is worth including a photo of the insides of a BarnOne so you can see the control card they use.

That's right - it's a Raspberry Pi! Why re-invent the wheel? They claim they tested a few Linux SoC boards and found the humble Pi to be the most reliable, and they make use of the watchdog timer to ensure it's always listening for config updates. Their BarnStudio software not only allows you to configure the system (including all the monitoring via SNMP) but also lets you control the router. They also support several manufacturers' generic control panels if that's what's needed.
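The watchdog idea is easy to sketch. Below is my own minimal Python model of a watchdog timer (not Barnfind's code): the control software must "kick" the timer regularly or the hardware assumes a hang and reboots. On the Pi the real thing is the BCM2835 hardware timer, exposed to Linux as /dev/watchdog.

```python
import time

class Watchdog:
    """Minimal software model of a hardware watchdog timer: it must be
    'kicked' within timeout_s seconds or it declares the system hung.
    On a real Raspberry Pi a daemon writes periodically to /dev/watchdog
    instead of calling kick(); if the writes stop, the board reboots."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_kick = time.monotonic()

    def kick(self) -> None:
        # The keep-alive: proof the control software is still running.
        self.last_kick = time.monotonic()

    def expired(self) -> bool:
        return time.monotonic() - self.last_kick > self.timeout_s
```

On the real device the expiry action is a hardware reset rather than a polite boolean, which is exactly why it suits an unattended machine-room box.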

CWDM

More often than not, single-mode cable is used to carry a network feed, a 3G video feed or some other data. We quote wavelengths for fibre (typically 1310nm) rather than frequency, and so often forget that the sidebands of the signal we put down a fibre are tiny relative to the centre frequency. A wavelength of 1400nm corresponds to around 214THz (yes, 214 x 10^12 Hz!) which makes the 4.5GHz bandwidth of the best-quality HD video coax look very modest. So, we could divide up the single-mode range into many wavelengths and use each for a different purpose; a bit like the radio stations on the VHF band.
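The arithmetic is worth doing once. A quick Python sketch of the wavelength-to-frequency conversion, using the 1270-1610nm range typically quoted for CWDM:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_to_freq_thz(nm: float) -> float:
    """Convert an optical wavelength in nanometres to frequency in THz."""
    return C / (nm * 1e-9) / 1e12

# The 1270-1610nm window spans an enormous slice of spectrum:
span_thz = wavelength_to_freq_thz(1270) - wavelength_to_freq_thz(1610)
# roughly 50 THz of usable bandwidth, versus ~4.5 GHz for HD video coax
```

(The refractive index of the glass slows light a little in practice, but it doesn't change the order-of-magnitude point.)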
These are the standard wavelengths for CWDM working; sixteen channels, and by convention the two colours of a pair run in different directions (SDi input/output, or ethernet Tx and Rx, for example). This allows you to have SFPs that put the signal they send/receive onto specific wavelengths. Then, with a simple passive optical splitter/combiner (which is all a CWDM multiplexer is), you can use your Barnfind kit to multiplex sixteen functions in and out of a single 9/125u fibre. If that's all you have between premises then this is a lifesaver. The multiplexers are in the £400 range. 
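For the record, the channel centres come from the ITU-T G.694.2 grid: every 20nm from 1271nm to 1611nm. The full grid is eighteen channels; sixteen-channel kit like this uses a subset of them. A sketch generating the grid and the directional pairing described above:

```python
# ITU-T G.694.2 CWDM grid: channel centres every 20nm, 1271nm to 1611nm.
CWDM_GRID_NM = list(range(1271, 1612, 20))

def paired_channels(grid):
    """Pair adjacent grid wavelengths into (lambda_a, lambda_b) tuples,
    one wavelength per direction of a bidirectional link over a single
    fibre - the 'two colours run in different directions' convention."""
    return list(zip(grid[0::2], grid[1::2]))
```

Which wavelengths a given SFP pair actually uses is down to the manufacturer's channel plan, so check the datasheet rather than assuming the pairing above.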

We opened one up last week and it was a thing of beauty; all passive optical engineering with tiny dichroic filters. So, by careful planning you could send multiple SDi signals, ethernet, fibre channel and other things to and from a remote site over a single strand of mono-mode fibre. It could be up to 80km away. 
There is a further development of the multiplexing technology called Dense Wavelength Division Multiplexing (DWDM), which allows up to 192 channels and much greater distances (by the use of erbium-doped optical amplifiers) - but in the case of Barnfind (and other manufacturers) the cost of DWDM vs CWDM is fivefold!


The BTF1-07 is the box I've ordered as my demo unit; it has sixteen SFP holes, eight bi-directional BNCs as well as a CWDM multiplexer.

Saturday, July 19, 2014

MPEG transport stream corruption; effect on pictures

I'm running training at the Met's video forensics lab soon and part of it is explaining how DCT-based compression works, and particularly the effects of corruption on long-GOP MPEG transport streams as delivered in DVB muxes. One of the illustration videos is below. 
Three clips, each with a momentary corruption to the data stream; in each case you can see how the decompressor can't reconstruct a proper picture until the next I-frame. The second half of the clip shows me stepping through with the I, B or P frame type indicated top-left - you'll need to make it full screen to see that as the marker is quite small.

Wednesday, July 16, 2014

The challenges of modern picture quality analysis.

This is a recent article written for a trade magazine;

Engineers have sought to quantify the quality of the video signal since the birth of television. Since all aspects of the TV picture are represented first by voltages (analogue) and then by numbers-in-a-bit-stream (digital), you have to make measurements to really know anything about the quality of your TV pictures.
“When you can measure what you are speaking about, and express it in numbers, you can know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind: it may be the beginning of knowledge but you have scarcely, in your thoughts, advanced to the stage of science.” - William Thomson, Lord Kelvin.
In the dim and distant days of monochrome tele the only things the TV engineer had to worry about were the blacks and the whites. Common consent had us placing the blacks (dark areas of the picture) as a low signal (at zero volts) and the whites (bright parts of the picture) up at 0.7 of a volt. In addition we allocated the 0.3V below black to the synchronising pulses – the electronic equivalent of the sprocket holes in film; a mechanism that allows the receiving equipment to know when new lines and frames of video are starting so that the picture is “locked” and not free-running (“try adjusting the vertical-hold!”). Once all those things are well-defined, Mr. Sony’s cameras work nicely with Mr. Grassvalley’s vision mixer, and the engineer at the broadcast centre can adjust the incoming signal from the OB truck so that it looks right on the waveform monitor and hence the pictures will match what left the truck. Dark shadows and bright clouds will look like what the camera operator saw.
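For readers more at home with digital code values than volts, the mapping between the two is simple. A sketch using the Rec.601 8-bit levels (black at code 16, white at code 235, spanning the 0V-0.7V picture range described above):

```python
# Rec.601 8-bit quantisation: black sits at code 16, nominal white at
# code 235, so there are 219 steps across the 0V-0.7V analogue range.
BLACK_CODE, WHITE_CODE = 16, 235

def code_to_volts(code: int) -> float:
    """Map an 8-bit luma code value to its nominal analogue voltage."""
    return 0.7 * (code - BLACK_CODE) / (WHITE_CODE - BLACK_CODE)
```

The headroom codes above 235 and footroom below 16 are exactly where "illegal" levels live, which is why the waveform monitor matters.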


Fig.1 – monochrome TV signal, two lines.

So far so good; but people wanted colour TV and so all of a sudden the way colour is encoded needs to be considered. With colour comes grading and the “look” of pictures and colourists need to see different representations of the colour parts of the signal for artistic reasons. Engineers need to ensure that the colour content of the picture is constrained to the legal gamut of colours that the transmission system can handle; nobody wants things to change colour as they get to air! Tektronix have always been the gold-standard for TV test and measurement and to this day if you ask an engineer or colourist what kind of test equipment they’d like it’s going to be a Tek.


Fig.2 – colour TV signal, several types of display

As we moved from analogue to digital working in the 1990s and then from standard definition to higher resolutions in the noughties the principle of looking at the lines and fields of the TV signal remained; we assumed that if one frame got through the system with minimal/acceptable levels of distortion then all subsequent frames would; and as we know - the illusion of television is that many frames make a moving sequence.
However – with the introduction of “long GOP” (Group Of Pictures) video compression in the 90s it became apparent that we don’t treat every frame of video the same. On a compressed video link there are I-frames (the ones from which the entire picture can be re-built) and other, more complex beasts called B-frames and P-frames; by not sending complete video frames, but merely the differences from previous and subsequent ones, we achieve video data rate reduction, AKA compression. You’ve no doubt seen the on-air fault where some parts of a picture seem to have become “stuck” in the previous scene while other parts of the picture are behaving normally. Then, suddenly, the picture rights itself. What you have witnessed is a corrupt I-frame; all your set-top box can do is show the changes as they arrive and you don’t get a re-built complete frame until the next I-frame arrives (typically half a second later). This is just one kind of “temporal” fault.
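The "stuck picture" behaviour is easy to model. A little Python sketch of my own (deliberately simplified - it just encodes the rule that after a corruption in the stream nothing is whole again until the next I-frame arrives):

```python
def decodable_frames(gop_pattern: str, corrupt_index: int) -> list:
    """Given a frame-type sequence such as 'IBBPBBPBBIBB', return a
    bool per frame: can the decoder rebuild a complete picture here?
    Simplified model: a momentary corruption at frame corrupt_index
    leaves the decoder showing only differences until the next I-frame."""
    ok = []
    broken = False
    for i, ftype in enumerate(gop_pattern):
        if ftype == 'I':
            # A fresh I-frame repairs everything, unless it is itself
            # the corrupted frame.
            broken = (i == corrupt_index)
        elif i == corrupt_index:
            broken = True
        ok.append(not broken)
    return ok
```

Run it with the corruption landing on the opening I-frame and the whole GOP is lost; land it on a mid-GOP frame and only the tail of the GOP breaks - which is exactly the difference you see between the three clips.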


Fig.3 – MPEG multiplex fault

So, now we have to consider things happening in time as well as the pixels, lines & fields of video.  The colour-bars you’re looking at coming across the satellite link may look splendid, but perhaps the link is only able to convey pictures that change minimally between frames and as soon as moving video arrives it looks awful. Perhaps the fields of video have been reversed (it’s a more common technical fault than you’d expect) and you’ll only see that on moving pictures; and it’s not nice!

Test signals have always served to “stress” the system they are being run through; the traditional colour bars have their colours set at the extreme ends of what the system can handle – you never see that amount of saturated yellow in pictures coming out of a TV camera! We want our test signal to show faults; if the pictures look good it should be because the OB link or editing workstation is capable of carrying the worst-case pictures.
So, now we need a test signal that is not just the same frame of video repeated endlessly; we need a test that changes and serves as a challenge to compression encoders and is constructed in such a way as to have predictable picture effects that highlight when your production chain is sub-par. Ideally we could see these drop-offs on a picture monitor and not on a £10k Tektronix test set. Once we have an animated test signal we can test not just for the degradation of compression but also those field-cadence problems. We can also test for lip-sync errors, exaggerating the effect of audio being late or early with respect to the video, and more importantly all of these tests can be done by an operator rather than the expensive engineer. We’d also like the sequence to be constructed such that faults are visible on a 24” video monitor from across the other side of a busy machine room or studio gallery.

Fig.4 – SRI Visualizer - a still frame; it moves normally!

  • Lip-sync: Measure and quantify the synchronization offset between audio and video – a single frame of sync is easily missed on camera pictures.
  • Bit depth: Detect 10-bit to 8-bit truncation – in a modern facility a mix of eight and ten bit video is a fact of life, but no client wants unnecessary loss of dynamic range.
  • Compression fidelity: Measure and quantify compression levels in real time; again, real camera pictures often make this effect hard to spot.
  • Colour matrix mismatch: Determine high-definition (709) and standard-definition (601) colour space conversion errors. These colour shifts are subtle until the director is shouting about that shade of red he wants!
  • Chroma subsampling: Determine the Chroma subsampling being used (4:2:2, 4:2:0, 4:1:1…)
  • Chroma upsampling: Reveal how missing chroma samples are interpolated
  • Field dominance: Definitively determine field order reversal.
  • Chroma motion error: Demonstrate incorrect processing of chroma in interlaced 4:2:0 systems
  • Subjective image fidelity: Perform a rapid check of system integrity
  • Colour conversion accuracy: Verify colour-space conversions
  • Display gamma: Measure monitor gamma quickly
  • Black clipping: Reveal black clipping and accurately set monitor black level
  • White clipping: Determine if highlights are being blown out
  • Noise: See how an encoder handles noise
  • Skipped frames: Detect repeated and dropped frames
The SRI Visualizer is available as a hardware product (the TG-100, which also includes equally innovative audio tests) which can be installed into a machine room to augment/replace existing SPG-type test signal generators. You can also purchase and download it as several kinds of video files which can prove very useful in file-based workflows; injecting the sequence at the start of the post-production workflow and confirming all is well at the very last stage. An hour of time paying attention at the start of a new production will pay dividends by highlighting exactly where any picture faults creep in. Did that colour-shift occur during editing, grading, VFX or when the show was transcoded for distribution? Without a test system like the Visualizer these problems are hard to track down.
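As an illustration of how simple some of these checks are in principle (this is my own sketch, not SRI's method), the 10-bit-to-8-bit truncation test in the list above boils down to looking at the least significant bits:

```python
def looks_truncated_to_8bit(samples_10bit) -> bool:
    """Heuristic: if 10-bit video has been through an 8-bit bottleneck
    and padded back up (shifted left two places), the two least
    significant bits of every sample come out zero. Genuine 10-bit
    material - camera noise, gradients - uses all ten bits."""
    return all((s & 0b11) == 0 for s in samples_10bit)
```

A real checker would test a statistically useful number of samples (a handful of codes divisible by four can occur by chance), but the principle is just this.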

Tuesday, July 8, 2014

Temporal-based video tests; The SRI Visualizer

We've recently taken on SRI as a supplier and I'm very excited about their test system.


You can read the article I've written for a couple of industry magazines here.
SRI International

Thursday, July 3, 2014

Someone who has some actual experience of file-based deliverables!

There are an awful lot of people touting themselves as "DPP consultants" and "file-based technologists" in the UK TV industry at the moment. For the most part they are riding the gravy train of the DPP roadshow and the fact that come October all the big broadcasters will expect file-based delivery AND material QC'ed to the DPP specification. The majority of these folks have not delivered a single minute of material for terrestrial broadcast but the one chap who really does know his stuff is my good pal Simon Brett (recently moved from National Geographic/Fox UK to NBC-Universal).


Here he is at a recent evening event we laid on; if you want to know some actual details (what bit of software to use to handle your metadata etc) then Simon is your man.  He's quite an engaging speaker (and he bigs-me-up for colourimetry!)

Tuesday, July 1, 2014

Near synchronous video over UDP/IP with Sony's NXL-IP55


Just about the only thing that caught my eye at the recent Beyond HD Masters day at Bafta was Sony's IP Live system. This is a single product called the NXL-IP55 which puts four 1080i 4:2:2 signals over a gigabit connection - so a modest 6:1 compression with a well-defined single field of latency. The camera channels can go either way (so three source cameras and a return preview monitor, for example) and embedded audio plus tally & camera head (colour) and lens control are included. It's quite expensive ($10k per end) but is the only video-over-IP device I've seen so far which is suitable for live production.  


http://www.sony.co.uk/pro/press/pr-sony-nab-av-over-ip-interface

Saturday, June 28, 2014

Measuring fibre cabling and the problem of encircled flux loss


Last week I went on a very interesting training day courtesy of Nexans - data cable & parts supplier. I went looking forward to learning all about the new standards surrounding category-8 cabling for 40 and 56 gigabit ethernet (a massive 1600MHz of bandwidth down a twisted pair cable!) and the new GG45 connector; but those things will have to wait for another blog post! The thing that really tickled my fancy is the new standard for measuring the response of multi-mode fibre.
Multi-mode fibre works in a fundamentally different fashion to single mode (they are as different as twisted-pair and coaxial copper cable, but they look very similar). If you want a bit of a primer on fibre then Hugh & I did an episode of The Engineer's Bench a couple of years ago on the subject.



As we've gone from one gigabit to greater than 10Gbit/s in OM3 and OM4 cable, engineers have often noted the lack of consistency between different manufacturers' light-source testers. You might get as much as 0.5dB of difference between, say, an Owl and a JDSU calibrated light source and detector. We typically use a -20dBm laser at 850nm to test OM3 and we always just deliver the loss figures to the client, but it would be good to know if your absolute reading is of any use at all.
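To be clear about why that 0.5dB disagreement matters: an insertion-loss measurement is just the difference between two dBm readings, so any error in the calibrated source or detector goes straight into the answer. A sketch (the example figures are mine, not from any particular tester):

```python
def insertion_loss_db(ref_dbm: float, measured_dbm: float) -> float:
    """Insertion loss is the reference power (source into the launch
    lead) minus the power measured at the far end, both in dBm.
    Because dBm is already logarithmic, the division of linear powers
    becomes a simple subtraction."""
    return ref_dbm - measured_dbm

# e.g. a -20dBm 850nm reference source reading -22.3dBm at the far end:
loss = insertion_loss_db(-20.0, -22.3)   # 2.3dB of link loss
```

With typical OM3 link budgets of only 2-3dB for 10G ethernet, a 0.5dB tester-to-tester disagreement is a sizeable fraction of the whole allowance.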

Well, the answer is that an LED or VCSEL (vertical-cavity surface-emitting laser) will tend to "overfill" the fibre, and high-order modes of light travel (to a degree) down the cladding of the cable.
Launch conditions describe how optical power is launched into the fiber core when measuring fiber attenuation. Ideal launch conditions occur when the light is distributed through the whole fiber core.


Transmission of Light in Multimode Fiber in Underfilled Conditions 


Transmission of Light in Multimode Fiber in Overfilled Conditions


An overfilled launch condition occurs when the launch spot size and angular distribution are larger than the fiber core (for example, when the source is a light-emitting diode [LED]). Incident light that falls outside the fiber core is lost as well as light that is at angles greater than the angle of acceptance for the fiber core. Light sources affect attenuation measurements such that one that underfills the fiber exhibits a lower attenuation value than the actual, whereas one that overfills the fiber exhibits a higher attenuation value than the actual. The new parameter covered in the IEC 61280-4-1 Ed2 standard from June 2009 is known as Encircled Flux (EF), which is related to distribution of power in the fiber core and also the launch spot size (radius) and angular distribution.

All the manufacturers are producing EF-compliant testers so you won't need to worry about inaccurate readings due to these high-order modes, but for now there are some suggestions.


Multimode launch cables allow the signal to achieve modal equilibrium, but they do not ensure test equipment will be EF-compliant per the IEC 61280-4-1 standard.
Multimode launch cables are used to reveal the insertion loss and reflectance of the near-end connection to the link under OTDR test. They also reduce the impact of possible fiber anomalies near the light source on the test.

If the fiber is overfilled, high-order mode power loss can significantly affect measurement results. Fiber mandrels that act as “low-pass mode filters” can eliminate power in high-order modes. A mandrel effectively eliminates all loosely coupled modes generated by an overfilled light source while passing tightly coupled modes on with little or no attenuation. This solution does not make test equipment EF-compliant.


Mode conditioning patch cords reduce the impact of differential mode delay on transmission reliability in Gigabit Ethernet applications, such as 1000Base-LX. They also properly propagate VCSEL laser light along a multimode fiber. This solution does not make test equipment EF-compliant.