Researchers demo vector-based video codec

Modern video codecs rely on pixels to represent moving pictures. That method works well enough, but there are some inherent problems. Lower-resolution videos can’t be scaled up without a noticeable loss in image quality, for example. While using a higher resolution alleviates that issue, it increases the file size substantially. Now, researchers at the University of Bath, United Kingdom, have developed a new video codec that replaces pixels with vectors.

Vector-based formats store data about the contours of shapes in a given scene. Filling in those shapes accurately has always been problematic, but the researchers claim to have a solution that offers the "highest visual quality." They’ve also posted a video of their work in action. The one-minute clip has a 512×288 resolution, so it’s hard to tell how well the codec scales content. That said, the quality looks decent to me. I just wish there were a side-by-side comparison with a pixel-based codec instead of the constant flipping between contours and the full picture.

The research team is already working with several companies, and it’s encouraging others to get involved. Right now, the focus seems to be on post-production applications. Web, tablet, and mobile applications are mentioned explicitly, though, and the group expects its work will lead to "the death of the pixel within the next five years." That sounds a tad optimistic, but vector-based formats certainly seem to have a lot of potential.

Comments closed
    • Usacomp2k3
    • 7 years ago

    This would be great for rendered content. Like Pixar films.

      • GrimDanfango
      • 7 years ago

      Rendered content is generated in a very similar way to the way cameras record photographs. The final image is just as inherently pixel-based, and would still need to be converted somehow.
      The only content that would be directly suitable would be something like traditional drawn animation.

    • albundy
    • 7 years ago

    I should have gone for the blue pill!

    • CppThis
    • 7 years ago

    They don’t really say what the file size is like compared to pixel-based formats. I’d love to be able to share my Fraps recordings in a format that doesn’t either look horrible or take up multiple gigabytes.

    • ClickClick5
    • 7 years ago

    Would this require a lot of power to decode and play? I imagine the encoding of this would take some power.

      • xeridea
      • 7 years ago

      I think it would be insanely computationally intensive and not feasible for some time; perhaps that’s why it’s a low-“resolution” video.

        • ClickClick5
        • 7 years ago

        That’s my thought. 512×288? How long (and space-consuming) would 1920×1080 take? Heck, let’s just try 1024×768.

        And with everything moving toward power efficiency and ARM-style chips, this encoding could be an issue to decode on small devices. Herm.

    • jamsbong
    • 7 years ago

    It is a bad press release, I think.
    They should immediately highlight its strengths, like with a simple side-by-side comparison.

    What about file size at equivalent encoded quality? Is the vector-based format able to reduce file size whilst maintaining the same level of quality?

    One thing I don’t get is that if the video is originally pixel-based at high resolution, how is it gonna be improved with vector-based encoding? Shouldn’t the camera or the video source be analogue or vector-based itself to achieve the perfect vector-based video?

      • ludi
      • 7 years ago

      In theory, you could scale the vector based video ABOVE its native encoding resolution without introducing the interpolation artifacts typical for pixel-based video, because every edge would be defined as a scalable mathematical function, and color fill in between as gradients.

      Although as near as I can figure, the likely result would be somewhat unreal: depending on how the decoder handled the process, either the output would start to look like a photograph under an enlarger, or it would start to take on a “cel-shaded” appearance.
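      A toy sketch of ludi’s point (my own illustration, not the researchers’ method): when a shape is stored as a mathematical function, the decoder can rasterize it at any target resolution, so its edge never shows interpolation artifacts.

```python
# Toy "vector decode": a filled circle stored as a formula rather than
# pixels, rasterized at whatever resolution the display asks for.

def rasterize_circle(size):
    """Render a centered filled circle (radius 0.4*size) into a size x size grid."""
    center = size / 2.0
    radius = 0.4 * size
    return [[1 if (x + 0.5 - center) ** 2 + (y + 0.5 - center) ** 2 <= radius ** 2
             else 0
             for x in range(size)]
            for y in range(size)]

small = rasterize_circle(8)    # "decode" the same shape data at 8x8...
large = rasterize_circle(64)   # ...or at 64x64; the edge stays sharp either way
```

      Scaling the 8×8 raster up to 64×64 would blur or pixelate the edge; re-rasterizing the formula does not, which is the upscaling advantage described here.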

      • xeridea
      • 7 years ago

      Video is recorded in pixels; there is no way around it. It’s how digital cameras work. You could, however, take the lossless original video stream and make it vector. Movies are recorded at a higher resolution than you will see them, unless you go to a digital theater, so this would theoretically be higher quality than Blu-ray at the same “resolution”, though technically vector has no resolution; it would be more of a quality factor loosely related to resolution.

    • GrimDanfango
    • 7 years ago

    This sounds an awful lot like the idiotic self-promotion blurb that those “unlimited detail” guys peddle.

    [quote<]The result is a resolution-independent form of movie or image, capable of the highest visual quality but without a pixel in sight.[/quote<]

    Except that this is a vector-based representation of a pixel-based source video, as there is no such thing as a vector-recording camera, so you automatically have some detail loss over the original pixel-based representation. You then convert the vector-based representation back into pixels to display on a screen, as there's no such thing as a vector-based display. (I know there once was, many decades ago, but if I recall, they only drew monochrome line drawings :-P)

    So, even if this vector codec were earth-shatteringly efficient, it's still doing two highly lossy conversions to get the image onto your screen. I'm not surprised they're talking about mobile applications, as I would think that's about the limit of its usefulness, but I sure as hell don't think it's going to be "the death of the pixel" 🙂

      • BobbinThreadbare
      • 7 years ago

      I’m not saying this is legit, but why do the conversions have to be lossy?

        • GrimDanfango
        • 7 years ago

        Because for it to be lossless, it would need to be tracing “vectors” around the edges of each discrete pixel – which would mean a set of four vector coordinates to represent a single pixel (compared to a pixel, whose position and shape are implicit, and whose colour is just three bytes of colour information)

        That would result in files that were way, way bigger than a BMP file for each frame. About the least efficient compression on earth.

        Assuming it’s not doing that, then it’s skipping information, and approximating it, which means it’s lossy. Potentially very lossy, to bring it close enough to existing algorithms.
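        The storage argument above can be put in rough numbers (a back-of-the-envelope sketch; the byte counts are illustrative assumptions, not figures from the article):

```python
# Hypothetical byte counts comparing one raw pixel to a naive
# "lossless vector pixel" traced as a quad around that pixel.

raw_bytes_per_pixel = 3              # 8-bit RGB; position implicit in scan order

corners = 4                          # four vector coordinates per pixel quad
bytes_per_corner = 2 * 2             # x and y components, assume 2 bytes each
colour_bytes = 3
naive_vector_pixel = corners * bytes_per_corner + colour_bytes   # 19 bytes

blowup = naive_vector_pixel / raw_bytes_per_pixel                # > 6x raw RGB
```

        Under these assumptions, the "lossless" vector encoding is several times larger than an uncompressed bitmap before any real compression even begins, which is the point of the argument.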

          • GrimDanfango
          • 7 years ago

          I’ll concede that it could have its applications; it could potentially be more efficient at storing footage that contains a lot of flat, empty regions, diagrammatic stuff maybe. But even then, existing pixel-based codecs are already incredibly good at detecting unchanging colour regions and leaving out redundant data.

            • Bensam123
            • 7 years ago

            It could be a hybrid between a vector based encode and a normal encode.

          • xeridea
          • 7 years ago

          Movies are recorded at higher resolution than you will see at home or on any device. It is easily plausible this would benefit you, as you wouldn’t notice any loss. The main factor is how advanced the algorithms are at vectorizing the image. We will just need to wait and see.

            • GrimDanfango
            • 7 years ago

            No, they’re not. I’ve worked on movies. The majority are still scanned from film at 2048×1556, or 1828×1556 anamorphic, which gives a bit more vertical res, but nothing significant. Increasingly, as filmmakers start to give digital a chance, they’re moving to shooting digital 1920×1080 footage. Very rarely there’s some VistaVision 3K footage, or 4K IMAX, but the majority is shot to fit whatever most cinema projectors can handle, which currently is 2K.

          • xeridea
          • 7 years ago

          BMP is not compression, it is pretty much a raw dump of the image data, like the name implies. PNG is lossless, though compressed.

            • GrimDanfango
            • 7 years ago

            I know BMP isn’t compression, that’s why I called it the least efficient compression on earth 🙂

            I know there are plenty of lossless image compression formats too. Tiff lzw is one of the best for 8-bit images.

            My point was, for a vector based format to compress losslessly, it could potentially have to store more data than even a BMP file, as it would have to define the geometry of individual pixels in order to be truly lossless. Pixel based algorithms don’t need to do that, as the position and shape of pixels is naturally implied by the order of the pixels along the lines of the image, so all it needs to compress is colour information.

            Vector images could potentially skip over a lot more colour information, but they have to define a lot of geometry. The trade off might be worth it in some cases, but I’m pretty sure it wouldn’t be for a lossless image.

        • Bauxite
        • 7 years ago

        Physics

        Sensors don’t capture “vectors” of light

          • GrimDanfango
          • 7 years ago

          Well, actually, those recent Lytro Light Field cameras kinda do… but in a different sense. The vector is the path of the light hitting the sensor, which doesn’t help much constructing the vectors needed by an image like this.

          Indeed, I think sensors are going to remain resolutely pixel-based for quite some time yet. Even the human eye works that way – not a grid of pixels maybe, but discrete photo-sensitive cells. Vectors will always be an abstraction of what’s there. Pixels are too, but they’re closer to a literal representation.

      • Arag0n
      • 7 years ago

      But it allows footage recorded at a much higher resolution to be vectorized and synthesized into less data, and maybe to provide a single quality output that can scale up to the original definition. Think about it like this: if movies are recorded at 16 times the resolution of current FullHD, you could have the vectorized movie on a Blu-ray or as a downloaded file, watch it in FullHD, and several years later, after upgrading your TV and player, you might be able to improve the resolution at which you view it.

      Of course you could do the same by compressing the original 16×FullHD resolution, but the point is that current internet speeds and distribution devices wouldn’t much like that idea… If it proves able to compress into a similar or smaller space than current MPEG-4 does, without losing the original resolution upper limit, it could be damned useful as a future-proof format for movie distribution (and it may be avoided by movie makers too, so they can sell you the same film again every time a new format comes out).

      Another kind of movie that could benefit dramatically is Japanese anime. Most current anime is computer-generated, and every frame is likely a vectorized image originally. In that case, the output resolution is limitless. The same goes for any kind of computer-generated content used to create video output from vector-based images.

        • sschaem
        • 7 years ago

        The vector version will be bigger than encoding at native res.

        Think of it this way: a Blu-ray holds 1920×1080 in h.264 format, but to hold an equivalent vectorized version with the same amount of detail, you would need 4× to 16×+ the storage.

        But OK, you can scale up the vector version; you just don’t get extra detail. For example, leaves on a tree won’t magically start to appear when you play the stuff back on an 8K display.

        You are talking like CSI image zooming is real… it’s not. You can’t take a vector-compressed image, zoom to some guy’s glasses, see a hubcap reflection, zoom in on the reflection to see a window, and zoom in more until in that window you see… yourself?!?

        The vector will be sampled at a given resolution, and that’s ALL you will get.
        More info will cost more storage space. And so far, vectors require an order of magnitude more storage space than pixels, because they are not the natural format of video. Pixels are.

        This is just a publicity scam. What they have done is simply improve the vector scaling of what they published three years ago.
        [url<]http://eprints.gla.ac.uk/47879/1/ID47879.pdf[/url<]

          • Arag0n
          • 7 years ago

          Did you miss the point where I acknowledged that vector-based video won’t be able to upscale higher than the original source? I don’t know why you bring up CSI image zooming… I just said it may be true for anime and other computer-generated videos, but it will never be true for real-world footage.

          And second, how can you know that the h.264 format will produce smaller files than this codec? I’m not saying you are wrong, but I would like to know why you think that, especially given that the researchers mentioned lower bit-rate as an advantage.

    • brute
    • 7 years ago

    they dont gotta re search it, it’s on u tube and we have a link

    just send it to them , man

    • UberGerbil
    • 7 years ago

    We kind of went through this a decade or so ago with wavelets, and before that with fractals. Initially we hear about how they’re Going to Change the World, and then reality sets in and they fall somewhere on the spectrum between a nice addition to existing techniques or nothing more than a neat trick that only works in demos. I don’t have enough knowledge to be able to say where this falls, but I’m jaded enough not to expect it to change the world.

    • NoahC
    • 7 years ago

    Ah, codec design. Speed, quality, and power-efficiency: choose two.

    Any development geared toward tablet and mobile applications is welcome news, but I’m skeptical this vector method has anything new to offer. Assume the method can achieve lower bit-rates at quality comparable to, say, h.264: that’s great, but how about power?

    I have to imagine that using vectors to represent complex frames & scenes at high fidelity — like sschaem’s tree example — would be drastically more computationally intensive compared to existing encoding methods. And even if that’s not an issue, storage density and network bandwidth tend to evolve much quicker than battery tech, so I don’t see any practical value to this approach in mobile, tablet, or other battery-dependent applications.

    Unless they have some energy-efficient surprise up their sleeve for software or SoC decode, which would be awesome news. Or maybe it’s easier to optimize the same vector codec for higher and higher resolutions? That would be a significant development. It seems like that’s one of the main catches with current codecs; the algorithms are necessarily chosen and then optimized for a specific range of resolutions and bit-rates. Any codec experts out there who can comment on this?

    P.S. @Geoff – maybe I missed your point, but I’m not clear on how vector encoding could lead to better up-scaling. If we capture video using sensors with finite resolution, encoding the sensor’s output into vectors doesn’t change anything. Though I guess it might help with scaling down to arbitrary resolutions.

      • sjl
      • 7 years ago

      What it comes down to is that bitrate equals information: the more information you want to capture, the higher the bitrate you need to store it.

      Pixels are nice, simple to understand, and very straightforward. Vectors … aren’t. Take the example of a tree swaying in the wind. If you were to go to the effort of plotting out every single edge of every single leaf, every twig, every branch … you would have one hell of a lot of information. More, most likely, than would be encapsulated in a 1080p video track. The level of scaling you could get out of those vectors, and still look realistic, is directly proportional to how fine-grained your sampling of the vector is (and hence how much information you’ve captured about it, and hence how many bits of data you’ve stashed away.)

      I’m skeptical, is what it comes down to. It’s like compression using chaos theory; sounds great, until you realise that the details will defeat you in many cases. Like you, I want to see some hard figures on how computationally expensive the algorithms are, and just how big the data savings turn out to be.

        • sschaem
        • 7 years ago

        The core issue is that images & video are not vector/contour based to start with.
        So using contour/vector inherently doesn’t fit.

        Just take that image for example.

        [url<]http://4.bp.blogspot.com/-lJUGUcPzmcY/TvctdcK5csI/AAAAAAAAJE0/nHnVC8ELxMw/s1600/explosion-photo.JPG[/url<]

        Pixels are an absolutely perfect fit for representing all types of images (besides human-made vector art).

        The video they showed is probably compressed at maybe 4:1 at best, where x264 would get 40:1 to 100:1 for the same quality. And if you were to zoom those videos to 4K, they would look very weird. The result would be more natural with an upsampling filter.

          • sjl
          • 7 years ago

          Oh, true. I was blithely ignoring that particular point and looking at it purely from a “how well could this scheme work?” standpoint. I would be very surprised if it made more than a minor improvement over an otherwise purely pixel-based system.

          That said, it might find a niche use for animated video (South Park type stuff.) Doubt it’d be much good in any other application.

      • Jakall
      • 7 years ago

      [quote<]I have to imagine that using vectors to represent complex frames & scenes at high fidelity -- like sschaem's tree example -- would be drastically more computationally intensive compared to existing encoding methods.[/quote<]

      If they somehow connect both worlds of vector and pixel and take what’s best from both, they might have something. For example, the tree: make it vector, every branch, every leaf, but only once, then only record the difference between frames (motion of leaves, etc.). That would mean way less data needed.
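      Jakall’s hybrid idea is essentially keyframe-plus-delta coding: store the vector geometry once, then per frame store only how its control points moved. A minimal sketch under those assumptions (all names hypothetical):

```python
# Toy keyframe + delta scheme: frame 0 stores full control points; later
# frames store only the offsets of points that moved, which is far less
# data when most of the scene holds still.

def encode_delta(prev_points, cur_points, eps=1e-9):
    """Return {point_index: (dx, dy)} for control points that moved."""
    deltas = {}
    for i, ((px, py), (cx, cy)) in enumerate(zip(prev_points, cur_points)):
        dx, dy = cx - px, cy - py
        if abs(dx) > eps or abs(dy) > eps:
            deltas[i] = (dx, dy)
    return deltas

def apply_delta(points, deltas):
    """Reconstruct a frame from the previous frame's points plus deltas."""
    return [(x + deltas.get(i, (0.0, 0.0))[0], y + deltas.get(i, (0.0, 0.0))[1])
            for i, (x, y) in enumerate(points)]

keyframe = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]   # one "leaf" shape
frame1   = [(0.0, 0.0), (1.0, 0.0), (1.1, 1.0), (0.1, 1.0)]   # top edge sways

delta = encode_delta(keyframe, frame1)   # only the two moved points are stored
```

      This is the same intuition behind motion compensation in pixel codecs, applied to vector control points instead of macroblocks.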

        • faramir
        • 7 years ago

        … and you just (re)invented fractal compression. Google it up 😉

    • The Dark One
    • 7 years ago

    Isn’t HEVC on track to deliver half the bandwidth utilization of h.264 for comparable quality?

      • ChronoReverse
      • 7 years ago

      Yeah, H.265 is supposed to be a significant jump. With that said, even with MPEG-LA behind it, I don’t expect it to be widespread for at least 5 years.

      For something like this vector based method without even a reference encoder/decoder, I really don’t think it’ll amount to anything on its own. That is to say, the good ideas will be used but not in itself.

      • sschaem
      • 7 years ago

      Not really. Not all h.264 encoders are created equal.
      For example, x264 delivers 20% better compression than its closest competitor.
      So if the HEVC claim was made with a plain reference encoder, it’s possible that x264 even beats HEVC.
      I’m not saying it does, but the claims are so vague that this is a possibility.
      (A reference HEVC encoder could deliver a worse ratio than a state-of-the-art h.264 encoder.)

        • ChronoReverse
        • 7 years ago

        While that’s true in principle, h.265 is theoretically so much better than h.264 that it’s unlikely.

        For example, the current reference encoder supposedly already exceeds x264 (I haven’t seen the results myself unfortunately) and if that’s true, that’s huge since x264 is a lot better than any other available h.264 encoder.

    • lilbuddhaman
    • 7 years ago

    620×349 still image? 512×288 video? Quicktime? Really? REALLY?

      • RainMotorsports
      • 7 years ago

      What about QuickTime? Oh, you believe container formats have jack to do with what’s in them? It’s an MOV container with h.264 inside in this case. Plays fine in Windows Media Player or VLC.

      Do have to complain about that res though.

        • MaMuS
        • 7 years ago

        It’s not just the resolution. This particular video has extreme amounts of artifacts due to the very low bitrate. It’s kind of like showing the differences between a .gif and a .png file by rendering them both inside a very low-quality .jpeg image…

      • Liron
      • 7 years ago

      If it’s vector based, we should be able to watch that 512×288 video at any resolution we want and it would still look just as nice, right?

        • xeridea
        • 7 years ago

        Scaling up wouldn’t result in pixelation, just not as much detail as if you had started at a higher quality. If it’s vector, there really isn’t a “resolution”. Fonts are generally vector, so you can make them big and they don’t pixelate; it just won’t add more detail. It would depend on the content: animated videos would fare the best and have excellent compression. Complex scenes would be the hardest. We shall see; it’s a very interesting concept, and I have wondered in the past how plausible this would be.

    • sschaem
    • 7 years ago

    This is not new, and the problem was that the contour data took more space than the grid-aligned methods, so this was limited to video with low-frequency detail.
    Example:
    [url<]http://youaretheprimemover.com/wp-content/uploads/2012/09/tree-15_19_1-Tree-Sunrise-Northumberland_web.jpg[/url<]

    Video like this would fall on its face with contour/vector methods. Not sure what they invented to fix the issue with all the previous failed attempts.

    The main reason? Vectors and contours are NOT what our world is built from.

    Is “vector-based formats certainly seem to have a lot of potential” a reference to Flash? 🙂

    • hoboGeek
    • 7 years ago

    So you want me to watch a pixel-based movie (.mov) that shows a vector-based movie?
    Don’t I need to see it on a system that supports the new codec to determine whether its quality is better than the first one?

    Nice going

      • Goty
      • 7 years ago

      Not really. Vector based encoding isn’t about the appearance or quality of the image; instead, it’s all about bandwidth. The inherent limitations of vector displays are too much to overcome, so it’s all going to be raster video for the end-user, but vector-encoded video has the advantage of being somewhat more efficient (or so I hear).

    • sweatshopking
    • 7 years ago

    [quote<] Modern the video codecs [/quote<] looks great!

      • Meadows
      • 7 years ago

      Wow, I actually skipped over that one.

        • sweatshopking
        • 7 years ago

        pay attention, learn from your master. one day, i’ll leave this site, and i’ll need to leave somebody behind to watch over these guys.

        • Wirko
        • 7 years ago

        You have met the word “Modern” too many times lately. Your visual cortex has adapted to this annoyance and now sees a multicolour, purple-orange-violet-green-blue haze instead of the word itself, and around it. Could this be the explanation?

          • brute
          • 7 years ago

          interestly

      • anotherengineer
      • 7 years ago

      We know what you’re thinking, take your low res skin shows and upscale them to HD.

    • bjm
    • 7 years ago

    The screenshot proves it, there [b<][url=https://techreport.com/discussion/24051/geforce-versus-radeon-captured-on-high-speed-video?post=694220<]are[/url<][/b<] issues with the left side!

      • chuckula
      • 7 years ago

      ROFL’d… looks like these researchers are on to something!
