Nvidia unveils $99 Jetson Nano single-board computer

As I noted yesterday, the Game Developers Conference in San Francisco just kicked off. However, yesterday was also the first day of Nvidia's GPU Technology Conference just down the road in San Jose. The green GPU giant's CEO, Jensen Huang, opened the show with a nearly three-hour keynote that revealed little in the way of new hardware. There was one pretty interesting tidbit, though: Jetson Nano, a $99 single-board computer sporting a CUDA-capable Tegra SoC.

On the face of it, Jetson Nano is already appealing. $100 gets you four ARM Cortex-A57 CPU cores, a Maxwell-based GPU with 128 shaders, and 4 GB of LPDDR4 memory. The SoC supports hardware encoding of 4K UHD video in AVC or HEVC formats, and the board includes a pile of connectivity: SDIO, SPI, I2C, I2S, GPIO, USB 3.0, PCIe by way of an M.2 socket, and gigabit Ethernet.

Display connections comprise one HDMI 2.0 port and one DisplayPort 1.2 connector, both of which can be used simultaneously. While the production-ready Jetson Nano module includes 16 GB of eMMC flash storage, the Jetson Nano developer kit instead relies on a micro-SD card for its main storage.

As Jensen Huang himself noted in the keynote, the Jetson Nano is a suitable entry-level platform for developers looking to pick up deep learning programming, as it supports the full CUDA-X stack. That means that code written on a Jetson Nano will run with minimal changes—after being re-compiled—on the big-boy Nvidia GPUs. The Jetson Nano could also make a nice step-up for folks whose needs aren't satisfied by something like the Raspberry Pi. Nvidia says the little SBC will be available in mid-June, but you can check out some preliminary benchmarks over at Phoronix.
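
For a taste of what that portability looks like in practice, here's a minimal sketch (not Nvidia sample code) of a CUDA kernel written with Numba's CUDA bindings. Assuming the CUDA toolkit, a CUDA-capable GPU, and the numba package are present, the same source JIT-compiles for a Jetson Nano's 128-shader Maxwell or for a desktop GeForce.

```python
# Hypothetical saxpy kernel, as an illustration of CUDA code portability.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)              # this thread's global index
    if i < out.size:              # guard the ragged final block
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.empty_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)  # Numba handles the copies
print(out[:4])                    # 2*x + y for the first few elements
```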

Comments closed
    • anotherengineer
    • 5 months ago

    So is this like a rasp pie on roids???

    • tipoo
    • 5 months ago

    One way to sell harvested TX1 dies with half the GPU disabled, I guess. I can’t see the ML performance being anything to write home about; that would be, I guess, ~500 GFLOPS FP16 and 250 SP.

    The only place it’s interesting is that something like a GT 1030 doesn’t support double-rate FP16. Otherwise, that card gets you into CUDA at a similar price with better performance. Then again, this is also a full computer, so it’s probably interesting for teaching.

    • ronch
    • 5 months ago

    YABADABADOO!!! Oops.. er.. wrong cartoon...

    • NTMBK
    • 5 months ago

    Cool, they found something to do with all the chips that fail to make the grade for Switch.

    • Shobai
    • 5 months ago

    I’m intrigued by the nomenclature: here is a SoM, with a SODIMM footprint, that plugs into an expansion board to break out the connectivity. I’m genuinely curious, because it has me wondering whether “single-board computer” doesn’t mean what I thought it did…

      • notfred
      • 5 months ago

      The idea is that you have the expansion board to break out all the connections into a standard format, and you can prototype with that.

      Then you build your custom hardware with the connections you need hardwired across to the appropriate bits of your design. You unplug the module from the expansion board, plug it into the custom hardware, and it runs just like it did before, without the bulk of standard connectors or the worry of them shaking loose.

    • hungarianhc
    • 5 months ago

    Is this basically around the power of a Nintendo Switch?

      • willmore
      • 5 months ago

      I think it’s a little over half. It’s got the four big cores but is missing the four smaller ones, and it’s got half the GPU. I think it has the full memory bandwidth, though.

      Are the clock speeds of the Switch known? That’s going to make a big difference for the parts that remain.

    • The Egg
    • 5 months ago

    Nice little board, but I’d have a hard time coming up with a use case that utilizes the GPU to its full potential. Maybe some sort of custom arcade setup, or an augmented-reality kiosk?

    Edit: Never mind, I didn’t realize its video encoding/decoding capabilities. Also, thumbs down on the barrel-plug power connection.

      • dragontamer5788
      • 5 months ago

      > Also, thumbs down on the barrel-plug power connection.

      Why? Barrel plugs are well standardized, across the USA at least: https://www.digikey.com/products/en/power-supplies-external-internal-off-board/ac-dc-desktop-wall-adapters/130?k=barrel%20plug

      You’ve got your pick of literally thousands of models of power adapters, from cheapo $5 ones to energy-efficient $15 ones. You’ve got to know a bit about electricity (volts and amps...), but it’s really not that hard to figure out.

      USB is fine if you stay within USB 2.0 specs (500 mA at 5 V == 2.5 W). But if you’re above that, USB power adapters are *very* poorly documented and non-standardized. Barrel-plug adapters come with strict documentation and specifications, all the way up to 50 W or beyond.

        • The Egg
        • 5 months ago

        > Why? Barrel plugs are well standardized, across the USA at least.

        It is? Maybe some sizes are used more often than others, but there have got to be at least 15-20 common barrel-plug sizes. Unless you happen to keep spare universal DC adapters on hand, the chances you’ll have the exact size and rating are almost zero. On the other hand, micro-B USB cables are nearly everywhere (and USB-C is getting common enough to be acceptable).

          • dragontamer5788
          • 5 months ago

          > Maybe some sizes are used more often than others

          Yeah. 5.5 mm x 2.1 mm is used exponentially more than the others.

          > the chances you’ll have the exact size and rating are almost zero.

          I hadn’t even looked at the Jetson Nano specs yet, but I bet it’s 5.5 mm x 2.1 mm. Aaannnnnd here we go (https://developer.download.nvidia.com/assets/embedded/secure/jetson/Nano/docs/Jetson_Nano_Developer_Kit_User_Guide.pdf?W0pazGtvvFsY0ozi2yH6bjQZhiuQy75-Wup4UfKxMuSSTUAovx7_5w8a3jtYumAOJGXGFj3LE386aAmJDEVff2mr3jqSF2hz6FBqw9k5nksSFGbNeaUcxi8jCvn-IXHiHpd3ngP4BtfYmZITn_-EWDsBr2pMyIaGiuIK4-y2Q8WG0JrAbZzPurrVorYPtZo):

          > Power jack for 5V⎓4A power supply. Accepts a 2.1×5.5×9.5 mm plug with positive polarity.

          Heh, knew it. It follows the de-facto standard.

          Guess what? A 4 A supply is 8x beyond USB 2.0’s standard. Good luck finding a USB power supply for that amount. Barrel plugs are *far* more reliable at actually giving you the amps you need. That’s 20 W of power; you ain’t getting that from a typical micro-USB 2.0 power adapter. *Period*. USB 3.0 Type-C might be able to get there, but only with appropriate bootstrapping and voltage changes, and the wire specifications get complicated at that level (4 A means you need either thicker wires than typical, or a shorter cable to reduce the resistance).

          Now here are 100+ different 4A/5V barrel-plug supplies that match those specifications: https://www.digikey.com/products/en/power-supplies-external-internal-off-board/ac-dc-desktop-wall-adapters/130?k=&pkeyword=&sv=0&pv1120=426&sf=0&FV=ffe00082%2Cc00001%2C1f140000&quantity=&ColumnSort=0&page=1&pageSize=25

          True, barrel plugs are “complicated” because they don’t follow any official standard. But that’s a benefit in this case: 4A/5V is an extremely non-standard amount of power. When dealing with SBCs like this, you need the flexibility to choose exactly how many amps flow into your computer, and barrel plugs offer that flexibility precisely because they don’t follow any written standard. It’s a flexible pseudo-standard that bends one way or the other. Yeah, some plugs are 12 V and others are 5 V, some are negative polarity and some are positive, so you can’t be as brain-dead about it as with a USB spec. But USB specs are too set in stone, and can’t really be used when a board draws 4 amps (as in this case).

          > On the other hand, micro-B USB cables are nearly everywhere (and USB-C is getting common enough to be acceptable).

          And how many of those cables are up to spec, ready to handle a 4 A load? EDIT: Note that USB-C “cheats” by raising the voltage to 12 V to achieve higher wattage; it gets more watts through the same cable by changing the voltage. I doubt USB-C cables can support 4 amps (although they can support 20+ watts by auto-negotiating 12 V and switching to that). We’re looking at roughly AWG 23 or thicker wire to handle 4 A (https://www.powerstream.com/Wire_Size.htm); that’s a non-trivial thickness. The first barrel-plug supply on Digikey (https://www.cui.com/product/resource/smi24.pdf), by the way, uses AWG 18 wire, well above spec (~16+ amps by the chart).
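
          As a quick sanity check of the arithmetic above (a Python sketch using only the figures quoted in this thread):

          ```python
          # The Nano's quoted supply requirement vs. the USB 2.0 budget.
          nano_v, nano_a = 5.0, 4.0      # "5V⎓4A" from the user guide
          usb2_v, usb2_a = 5.0, 0.5      # USB 2.0: 500 mA at 5 V

          nano_w = nano_v * nano_a       # 20.0 W
          usb2_w = usb2_v * usb2_a       # 2.5 W
          print(f"Nano wants {nano_w:.0f} W; USB 2.0 offers {usb2_w:.1f} W "
                f"({nano_w / usb2_w:.0f}x the budget)")   # the 8x figure above
          ```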

            • tay
            • 5 months ago

            Thanks, I learned something today.

            • dragontamer5788
            • 5 months ago

            I don’t want to discount The Egg’s point too much; he deserves some credit.

            The 2.1 x 5.5 mm plug comes in two lengths: 9.5 mm and 10.5 mm. The 1 mm difference is annoying, but it hasn’t caused major issues in my experience. (Maybe some connectors sit looser and fall out more easily.)

            The #1 issue is plugging a 12 V adapter into a 5 V barrel jack, which will probably blow up the device through overvoltage. So barrel jacks are certainly a “pseudo-standard” that requires careful thought to use safely.

            But as long as you understand the “pseudo-standard” nature of barrel jacks, the increased flexibility (and far stricter adherence to Voltage / Amp requirements) is absolutely great.

            • willmore
            • 5 months ago

            I’ll put in another plug for the 19 V laptop style of voltage input (over a 5 V barrel plug), as it provides a measure of protection against using the wrong voltage. If it’s implemented with a standard switching power supply, it’ll probably take anywhere from 6.5 V to 20 V in with no problems.

            Surprisingly, 12 V in might not kill anything on a board like this, though there are some huge conditions attached to that. Most boards like this use the 5 V input directly only for the USB output; if nothing is plugged in there, nothing will be damaged there. They then use the 5 V to generate the lower voltages (often through a 3.3 V intermediate rail). As long as that 3.3 V regulator can tolerate 12 V input, the board might survive. I have not reviewed the schematic for this board, so I can’t say how it would respond specifically.

            But don’t do it! It’s not the kind of risk you want to take with a $100 board. 🙂

            • willmore
            • 5 months ago

            Solidly agreed. USB as a power-delivery mechanism for devices without batteries is a bad idea. The RPi boards generally get away with it (or did in older models), but even they have lots of problems because of that particular design choice.

            When noobs have problems with an Rpi board, the first steps are:
            1) did you write the image to the uSD card with a good tool that verifies it?
            2) did you use a uSD card that’s not some poor quality clone?
            3) is your power supply/cable up to snuff?

            90% or more of the problems are addressed by those three steps.

            Even with proper power supplies, all RPi 3 and 3+ boards will throttle due to brownout when you load all four ARM cores with a typical burn-in program. That’s by design: the micro-USB connection can’t keep the voltage drop low enough to prevent it. If the board is supplied power through the GPIO connector, the CPU doesn’t throttle.

            I’m completely with Dragontamer on this. Barrel connectors are way superior to micro-USB connectors if you need >5W. For the 20W that this board claims it can use, I’m sort of surprised they stuck with 5V instead of using a typical laptop type plug/voltage (19V).

            There are a lot of barrel-connector sizes, and that can lead to confusion, but by far the two most common are the 5525 and the 5521. The numbers are the dimensions in tenths of a mm: 5.5 mm (outer diameter) by 2.5 mm (inner diameter) is referred to as the 5525, and similarly for the 5521 that this board seems to use.

            To go even further, there is *no* USB spec for 5 V at 4 A. The USB power-delivery spec has a 5V/3A setting, but if you want more power, you are required to raise the voltage. Even if you had a 20V/5A-capable (100 W) USB-PD supply, it would refuse to provide 5V/4A, as that’s an invalid voltage/current combination. Using USB-PD for a board like this would be problematic anyway. To change voltages, the load and source have to negotiate what voltage/current is needed (and what can be provided), and while the pair settles on a new voltage/current, the supply’s output is unspecified for a short interval. So unless the load has enough stored power to survive the switch, the device could easily brown out and reset, or glitch. For laptops, battery packs, and phones this isn’t an issue, since they have batteries to draw from during the dropout. SBCs without batteries don’t have that to rely on.
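
            To make that concrete, here’s a small Python sketch of the USB-PD fixed-supply rules described above; the rail list reflects the standard 5/9/15/20 V fixed profiles (with 5 A reserved for 20 V on e-marked cables), so treat it as an illustration rather than a full reading of the spec.

            ```python
            # USB-PD fixed rails: volts -> maximum amps. There is no 5 V / 4 A
            # entry, which is exactly what this board's barrel jack asks for.
            PD_RAILS = {5: 3.0, 9: 3.0, 15: 3.0, 20: 5.0}

            def pd_can_supply(volts, amps):
                return amps <= PD_RAILS.get(volts, 0.0)

            print(pd_can_supply(5, 4))   # False: the 5 V rail tops out at 3 A (15 W)
            print(pd_can_supply(20, 5))  # True, but at 20 V, not the 5 V the board wants
            ```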

            • dragontamer5788
            • 5 months ago

            > For the 20W that this board claims it can use, I’m sort of surprised they stuck with 5V instead of using a typical laptop type plug/voltage (19V).

            Typical CPU voltages are somewhere around 1 V to 3 V, so the lower 5 V delivery is likely more efficient (5 V -> 3.3 V conversion is more efficient than 19 V -> 3.3 V conversion). I dunno if Jetson runs at 3.3 V or 1.5 V... but whatever it is, it’s probably closer to 5 V than 19 V.

            20 W is probably a worst-case-scenario thing: a full GPGPU load running Furmark (or whatever the equivalent is). Most people will probably use this as a Raspberry Pi replacement running an always-on web server connected to a GPIO or I2C thermometer, which would almost never draw more than 2 W or 3 W. But for the group running a highly optimized CUDA load, you need a power supply that can go up to 20 W. So to optimize for the 2 W to 3 W case, you use the lower 5 V spec; to support the potential 20 W worst-case draw, you have to specify 4 A delivery.

            Yeah... electrical engineering. Where an inordinate amount of time is spent thinking about the proper thickness of wires. Hurrahhh....

            EDIT: Note that a 24 AWG cable has about 0.5 ohms across 20 feet. At 4 amps, that’s a voltage drop of 2 V. That’s right: at 4 amps, a 20-foot 24 AWG cable will LOSE 2 volts in the cable itself. Cable thickness is serious business!
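
            Checking that EDIT against standard copper-wire resistance values, as a Python sketch (like the comment, this counts a single 20-foot conductor; a full out-and-back loop doubles the drop):

            ```python
            # Copper resistance in ohms per 1000 ft, standard AWG table values.
            OHMS_PER_KFT = {18: 6.385, 24: 25.67}

            def volt_drop(awg, feet, amps):
                return OHMS_PER_KFT[awg] * feet / 1000.0 * amps

            print(f"24 AWG, 20 ft @ 4 A: {volt_drop(24, 20, 4):.2f} V lost")  # ~2 V, as claimed
            print(f"18 AWG, 20 ft @ 4 A: {volt_drop(18, 20, 4):.2f} V lost")  # ~0.5 V
            ```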

            • willmore
            • 5 months ago

            (insert Comic Book Guy image…)
            Actually… the higher the input voltage, the less current drawn (for a given wattage). Since resistive losses are proportional to current squared, the 19 V input is actually more efficient than the 5 V input. The only downside of 19 V is that you have to use parts rated for the higher voltage, but those are used in every laptop in the world, so they’re really high-volume and the price difference isn’t bad. Keep in mind that the power conversion uses switching regulators, not linear regulators. Linear regulators work the way you’re describing: current in equals current out. With switching regulators, power in equals power out (minus losses).

            The nano probably uses around 1V for the core, FWIW.

            Lots of boards like this cascade their voltage converters. If they have a 5 V input, they use it directly (maybe behind a current switch) for the USB ports. Then they convert it to 3.3 V for GPIO, uSD, etc. Then they convert that down to lower voltages for DRAM, the Ethernet transceiver (2.5 V), and bunches of other stuff. If you’re lucky, they used a power-management chip to generate and control most of these supplies.

            • dragontamer5788
            • 5 months ago

            > Since resistive losses are proportional to current squared

            EDIT: That’s *output* current squared, which would be the same regardless of input voltage. Input current matters too, but only across the resistance of the input wire; if the input cable is thick enough (e.g. 18 AWG), losses due to input current should be minimal.

            > Keep in mind that the power conversion uses switching regulators, not linear regulators.

            Indeed, I’m talking about switching regulators. 🙂 See https://www.electronicdesign.com/power/fundamentals-buck-converter-efficiency; specifically, this image has all the formulas you need: https://www.electronicdesign.com/sites/electronicdesign.com/files/uploads/2013/05/Table%20Avnet.JPG

            A huge component of efficiency is determined by the Rds(on) of the high-side and low-side MOSFETs. Those MOSFETs behave roughly like resistors while conducting, so a bigger voltage difference results in a bigger loss. It’s far less than with a linear regulator, but the concept is similar: the bigger the difference between Vout and Vin, the bigger the loss across the MOSFETs’ Rds(on). Switching regulators take advantage of the fact that MOSFETs have outrageously low Rds(on) values, but the loss still exists.

            (1 - Vout/Vin) is the fraction of each cycle the low-side MOSFET (or diode) conducts, and so is proportional to its conduction loss. I’d expect the most efficient split to be around Vout/Vin == 50%, balancing the high-side and low-side losses. A 19V-to-1V converter spends about 95% of each cycle on the low side, which weighs heavily on the low-side MOSFET. A 5V-to-1V converter has a better split: the low side conducts 80% of the time and the high side 20%. I haven’t done the math fully, but it “feels” like 5V-to-1V will be more efficient (assuming both MOSFETs have the same Rds).

            > The only downside of 19 V is that you have to use parts rated for the higher voltage, but those are used in every laptop in the world, so they’re really high-volume and the price difference isn’t bad.

            On the other hand, this is a very good point. A higher-efficiency MOSFET from a mass-produced 19 V design could very well lead to lower losses, because reducing Rds(on) is the biggest deal here. That requires knowledge of the marketplace that I don’t have, so I think you have a solid point: it really depends on which MOSFETs are available right now.
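
            As a Python sketch of the duty-cycle arithmetic in this exchange (idealized: in a buck converter the high-side MOSFET conducts for D = Vout/Vin of each cycle and the low side for the rest, which is the loss split being debated):

            ```python
            def buck_duty_split(vin, vout):
                """Fraction of each switching cycle on the high- and low-side FETs."""
                d = vout / vin          # ideal buck duty cycle
                return d, 1.0 - d

            for vin in (5.0, 19.0):
                hi, lo = buck_duty_split(vin, 1.0)   # ~1 V core rail, as guessed above
                print(f"{vin:>4.0f} V in: high side {hi:.0%}, low side {lo:.0%}")
            # 5 V in: 20% / 80%.  19 V in: ~5% / ~95%, loading the low-side FET.
            ```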

    • puppetworx
    • 5 months ago

    Wow. This looks really good for the price.

    • dragontamer5788
    • 5 months ago

    I’ve got issues figuring out what I’d use a 500-GFLOPS GPU for (EDIT: it’s 500 GFLOPS FP16 only… hmmm… that’s far less useful). It’s far stronger than any Raspberry Pi or similar single-board computer, but it’s 4x weaker than even a GTX 1050 (~2 TFLOPS 32-bit at $130).

    It’s in that uncomfortable zone where it’s still far weaker than any desktop technology, but way more expensive than competing SBCs (Raspberry Pi, Pine64, etc.). It splits the difference between the two platforms, but is there really a market for that?

    EDIT: The main benefit is maybe the Tegra SoC. This may be the most mainstream ARM SoC available; depending on the documentation available… maybe it’s worth the premium. I think there’s a market for Cortex-A75-level SoCs. It’s kinda weird that cell-phone chips are getting the more advanced features (the A-5x level is definitely a step down, toward power efficiency, compared to the A-7x chips).

    Really, I think that’s what I want: an SoC board with A-7x ARM cores.

      • willmore
      • 5 months ago

      The Phoronix testing of it shows it behind the ODROID-XU4 (which is cheaper).

      Points in favor of the Nano:
      4GiB of DRAM (vs 2 on the XU4)
      1 lane of PCI-E on an M.2 connector (none on the XU4)

      The GPU might make the Nano more attractive if you have CUDA specific code that you can run to make use of it.

      One hard-to-quantify downside of the Nano is that it seems to be using some pretty heavily harvested X1 chips: all the A53 cores are fused off, half of the GPU is disabled, and the A57 clocks are down from 2 GHz to 1.4 GHz. The reason to bring that up is that a die-harvested product lives at the mercy of the non-harvested product. The X1 (and the variant used in the Switch) is currently well supported, but there’s already talk of a new Switch, and that would dry up most of the supply of the non-harvested chip behind the Nano. So if you’re making a professional product and need long-term supply, this board may require some deeper investigation.

      • Rakhmaninov3
      • 5 months ago

      I remember back when Mac people were bragging about the G5 being a supercomputer because it could do a gigaflop. I think I heard the word “gigaflop” about 500 times in a couple of months.

        • derFunkenstein
        • 5 months ago

        You are probably thinking of the Power Mac G4 export ban.

        http://www.cnn.com/TECH/computing/9909/17/g4.ban.idg/

        Back when “supercomputer” was still defined by something dreamed up in the ’70s or similar, I think.

    • Usacomp2k3
    • 5 months ago

    PoE?

      • 223 Fan
      • 5 months ago

      Nevermore. It uses a barrel connector for power I think.

    • Krogoth
    • 5 months ago

    “JENSEN! YOU’RE FIRED!”

      • Mr Bill
      • 5 months ago

      Meet George Jetson! (https://www.youtube.com/watch?v=FyinD6ZDqeg)

    • DancinJack
    • 5 months ago

    When I saw this yesterday, I thought it looked like a pretty decent little base for a NAS machine. I kinda want one.

      • dragontamer5788
      • 5 months ago

      Hmmm… is there a way to turn that M.2 slot into something like 4x SATA ports? If so, then maybe it’d be good. NAS machines IMO need multiple hard drives to really shine, and a NAS box should have an easy way to install ZFS as well.

      Given how slowly FreeNAS updates, I almost always buy “last gen” hardware for my NAS builds. But I’d still buy something more like ASRock J4105M for a NAS build rather than a SBC.

      This Nvidia Jetson is more of a competitor for Raspberry Pi-style use cases, with a CUDA-enabled GPU on it for some reason. GPIO + I2C is the key attribute here, IMO.

        • Vinceant
        • 5 months ago

        Aye, this thing wouldn’t be good for a NAS…

        Also, FreeNAS updating too slowly might be an encouragement to move to vanilla FreeBSD or Fedora/ZoL. At least that’s the way I’ve been leaning for a while. Most of the important ZFS-related stuff I do is CLI anyway, though the GUI is still handy for basic management.

        • cygnus1
        • 5 months ago

        > This Nvidia Jetson is more of a competitor for Raspberry Pi-style use cases, with a CUDA-enabled GPU on it for some reason. GPIO + I2C is the key attribute here, IMO.

        If you’re looking at it as a competitor for the RPi, you really don’t know what it’s for. As was pointed out in the article:

        > As Jensen Huang himself noted in the keynote, the Jetson Nano is a suitable entry-level platform for developers looking to pick up deep learning programming, as it supports the full CUDA-X stack. That means that code written on a Jetson Nano will run with minimal changes—after being re-compiled—on the big-boy Nvidia GPUs.

        This thing is meant to put hardware that can run Nvidia’s code on EVERY developer’s desk: code that sells much higher-end HPC/datacenter hardware.

        It would also make a pretty solid set-top box, since it’s basically the same hardware as the Nvidia Shield, but customizable. NAS duty might not be too bad either: you could throw something like this (https://www.amazon.com/EXPLOMOS-NGFF-Adapter-Power-Cable/dp/B074Z5YKXJ) in there, add a SATA or SAS card with supported drivers, and get a ton of disks attached at proper speeds that way. The GPU could easily handle transcode duty too, I would think.

          • dragontamer5788
          • 5 months ago

          > This thing is meant to put hardware that can run Nvidia’s code on EVERY developer’s desk: code that sells much higher-end HPC/datacenter hardware.

          Jetson is lol Maxwell. Pascal and Turing are one and two generations ahead, and there have been dramatic optimizations to the platform. MX150 laptops have 1,127 32-bit GFLOPS, roughly 2x the GFLOPS of this thing. Tell me, why should I get a Jetson when I can just buy an MX150 laptop? The MX150 will be closer to real CUDA hardware (in particular, the MX150 actually has its own memory modules) and will emulate the real environment better.

            • cygnus1
            • 5 months ago

            It’s not at all about the performance of the hardware; it’s about the cost of the hardware that can run that code, and this is the cheapest option. Code can be written, iterated, and validated for functionality on a Jetson, then moved up to bigger hardware when necessary with little to no modification. That’s huge.

            • dragontamer5788
            • 5 months ago

            > moved up to bigger hardware when necessary with little to no modification

            Unlikely. As I was saying earlier, the “big” GPUs have *remote* VRAM, while this Jetson machine shares its RAM between the CPU and GPU. That means memory transfers between CPU and GPU will be grossly faster on this Jetson device than on bigger CUDA machines. (In fact, any data accessible by the CPU is instantly accessible on the GPU; literally instant, since they share the same RAM.)

            The best environment for CUDA with similar performance characteristics is the MX150, where your data transfers cross PCIe properly.

            It’s the same reason people don’t use AMD’s APUs to develop code for AMD’s bigger GPUs. AMD’s R5 2500U even uses the Vega ISA (yes, it shares the same assembly code as big Vega 64), but the grossly different architecture leads to different performance characteristics. Data transfer between CPU and GPU is non-trivial. If you really want to emulate bigger Nvidia GPUs, buying a cheap GTX 1050 is far more cost-efficient and will match the specs and architecture of larger GPUs; an MX150 laptop would serve a similar purpose.
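
            A rough way to see the transfer cost in question, as a hedged Python sketch using Numba (assumes the numba package and a CUDA device; on a discrete card this copy crosses PCIe, while on a shared-memory SoC like Tegra the same logical step is nearly free):

            ```python
            import time
            import numpy as np
            from numba import cuda

            x = np.random.rand(1 << 24).astype(np.float32)   # 64 MiB payload

            t0 = time.perf_counter()
            d_x = cuda.to_device(x)      # host -> device: a PCIe hop on a discrete GPU
            cuda.synchronize()
            print(f"transfer took {time.perf_counter() - t0:.4f} s")
            ```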

          • Anonymous Coward
          • 5 months ago

          > This thing is meant to put hardware that can run Nvidia’s code on EVERY developer’s desk: code that sells much higher-end HPC/datacenter hardware.

          Spending $99 on your development hardware... and then waiting *how long* to see how model training goes? Madness. Train your models on big hardware, deploy them on small hardware.

            • cygnus1
            • 5 months ago

            Some small developers may need to write code and prove it works before they can even acquire the big hardware. This hardware lets them do that; that’s the point of it. Not everyone has thousands of dollars to spend on hardware before any code is written.

            • dragontamer5788
            • 5 months ago

            It will take 1,700 years for AlphaZero’s feat to be replicated on Nvidia 1080 Tis (http://computer-go.org/pipermail/computer-go/2017-October/010307.html). Those 1080 Tis are 10-TFLOPS cards, roughly 20x more powerful than this Jetson Nano. You are grossly underestimating the amount of compute power needed for routine AI training and inference; a $99, 500-GFLOPS machine won’t be able to do any real form of training. For more information, see the LeelaZero project, which is trying to replicate AlphaZero’s results: https://github.com/leela-zero/leela-zero

            If you’re doing deep learning, you’re going to need a 2080 Ti for training (or maybe even a cluster of them...), and then maybe deploy to the Jetson Nano, just as Anonymous Coward says. Optimizing all of those weights is a very difficult computational problem.
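
            Scaling the comment’s own rough numbers, as a quick Python sketch (naive linear scaling of training time with throughput; both figures are the ones quoted in this thread):

            ```python
            gtx_1080ti_tflops = 10.0   # rough figure quoted above
            nano_tflops = 0.5          # ~500 GFLOPS FP16, per the thread
            years_quoted = 1700        # AlphaZero-replication estimate linked above

            nano_years = years_quoted * gtx_1080ti_tflops / nano_tflops
            print(f"~{nano_years:,.0f} Nano-years")   # ~34,000 years
            ```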

            • Anonymous Coward
            • 5 months ago

            It’s not economically sensible to save a bit of money on hardware at such a cost in productivity. I don’t specialize in the field, but I have enough experience to assure you that the challenges in employing machine learning are much greater than the cost of the hardware… that’s the cheap part. (Suitable hardware can also be rented from a cloud provider.)

      • liquidsquid
      • 5 months ago

      The processing power would be like a sledgehammer on a tack for a NAS. Sure, you could use it for that, but a lot of processing power would go to waste.

    • drfish
    • 5 months ago

    Can’t help but read it as Jensen Nano. It should come with a tiny leather jacket.

      • cynan
      • 5 months ago

      Despite the affinity for leather jackets, I’m pretty sure that, allergies aside, he’d use latex for his little Jensen like everyone else.

        • tay
        • 5 months ago

        Doubt he calls it the nano. Thanks to you, cynan, I’m now thinking about something I really don’t care for. Delete this thread, mod 😉
