National Ferret Day Shortbread

Except for their smell and high-pressure bowels, ferrets are amazing pets. I'll never own one again.

Thanks to everyone for your positive response to the MMM yesterday. It seemed like there was a lot more backlash against April Fools' Day this year than in the past, especially from the tech sector. So, it meant a lot to see my contribution appreciated instead of derided.

PC hardware and computing

  1. WD Black SN750 1TB review @ bit-tech
  2. Synology DS1019+ Gigabit NAS review @ Guru3D
  3. Seasonic X-750 750W PSU 10 year redux @ HardOCP
  4. Sapphire GearBox Thunderbolt 3 graphics enclosure @ Hexus
  5. Micron 1100 512GB M.2 SATA SSD review – FIPS 140-2 @ Legit Reviews
  6. Colorful iGame GeForce GTX 1660 Ultra 6 GB review @ TechPowerUp
  7. The Microsoft Surface Laptop 2 review @ AnandTech

Games, culture, and VR

  1. Garfield phones beach mystery finally solved after 35 years @ Slashdot
  2. S.T.A.L.K.E.R. 2 re-re-announced @ Blue's News
  3. Fortnite creator sees Epic Games becoming as big as Facebook, Google @ Slashdot (sigh)

Hacks, gadgets and crypto-jinks

  1. Finding plastic spaghetti with machine learning @ HackADay
  2. Apple iPad mini review (2019) @ Engadget
  3. Researchers trick Tesla Autopilot into steering into oncoming traffic @ Ars Technica

Science, technology, and space news

  1. Alligator gar both sucks and chomps to catch its prey, new study finds @ Ars Technica
  2. Sea otter archeology exists, and it's awesome @ Ars Technica
  3. NASA chief says a Falcon Heavy rocket could fly humans to the Moon @ Ars Technica
  4. Office Depot and Support.com to pay $35 million to settle FTC allegations that they charged users millions in 'fake' malware cleanup fees @ Slashdot

Cheese, memes, and shiny things

  1. Food banks are dealing with a surplus of meat, milk and cheese @ npr.org

Colton Westrate

I post Shortbread, I host BBQs, I tell stories, and I strive to keep folks happy.

Comments closed
    • HERETIC
    • 8 months ago

    Talking about “ferrets.”

    How slow is the Secret Service?

    [url<]https://www.abc.net.au/news/2019-04-03/intruder-gains-access-at-donald-trumps-mar-a-lago-resort/10965776[/url<]

    Screams "SPY" to me…

    • blastdoor
    • 8 months ago

    Regarding the MMM…

    More seriously, I’ve often wondered if there would be any utility to using a Stirling engine to convert waste heat from a CPU into electricity. Perhaps that would be too cumbersome in a consumer context, but what about in a server / datacenter context?

    I guess if it was a good idea it would have already been done, since it's kind of obvious and these guys aren't dumb. So maybe the question is: why is it a bad idea?
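A back-of-the-envelope Carnot calculation suggests an answer. Here is a minimal sketch, assuming typical figures (the heatsink and ambient temperatures, CPU power, and fraction-of-Carnot efficiency are all assumptions, not measurements):

```python
# Rough estimate: electricity recoverable from CPU waste heat with a
# Stirling engine. Every number here is an assumption for illustration.

T_HOT = 70 + 273.15   # K, assumed heatsink hot-side temperature
T_COLD = 25 + 273.15  # K, assumed ambient (cold-side) temperature
CPU_POWER = 100.0     # W, assumed CPU heat output (roughly TDP)

# No heat engine can exceed the Carnot limit, 1 - Tc/Th.
carnot = 1 - T_COLD / T_HOT

# Small real Stirling engines achieve only a fraction of Carnot;
# 20% of the limit is an optimistic assumption here.
realized = 0.20 * carnot

print(f"Carnot limit    : {carnot:.1%}")                  # ~13%
print(f"Assumed actual  : {realized:.1%}")                # ~2.6%
print(f"Recovered power : {CPU_POWER * realized:.1f} W")  # ~2.6 W
```

On those assumptions you recover only a couple of watts per 100 W CPU, which would explain why the added hardware cost and complexity never pays off.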

      • Neutronbeam
      • 8 months ago

      Efficiency? Dunno. But waste heat can be used to warm other buildings. I wonder if it's cost-effective to heat indoor growing farms for food, such as in Iceland?

      • Redocbew
      • 8 months ago

      Maybe a Stirling engine wouldn’t remove the heat from the CPU fast enough? If one “side” of a Stirling engine needs to stay hot in order to work properly that seems at odds with the purpose of a heatsink which is to pull heat away from some object.

      • just brew it!
      • 8 months ago

      As a guess, the amount of power generated is too small to make it useful/economical given the cost and complexity of the additional hardware.

      MSI apparently experimented with the concept a while back:

      [url<]https://www.tweaktown.com/news/9051/msi_employs_stirling_engine_theory/index.html[/url<]

      I think it's safe to say nothing ever came of it, or we would've heard more about it.

        • blastdoor
        • 8 months ago

        Thanks for the link!

        I guess that shows it's not a totally absurd idea, but somebody tried to make it work and it didn't pan out.

      • Wirko
      • 8 months ago

      Absorption and adsorption heat pumps are common in industry wherever there's a lot of waste heat available. They don't output electricity, though, just a liquid at a temperature either hotter or cooler than the input. I just don't know how well suited they are to datacenters.

    • Aquilino
    • 8 months ago

    > Fortnite creator sees Epic Games becoming as big as Facebook, Google @ Slashdot (sigh)

    I only have one question: what will go out of business first – Fortnite or the Epic Games Store?

    • superjawes
    • 8 months ago

    AM/FM is a useful idea in engineering. It’s not for radio, but stands for Actual Machines vs. ****ing Magic.

    For example, people talking about the Trolley Problem wrt programming self-driving cars are imagining an FM version of said cars that can "see" and understand a situation as a human would.

    In the AM world, these cars are tricked by stickers:

    [quote<]Researchers trick Tesla Autopilot into steering into oncoming traffic @ Ars Technica[/quote<]

    Please get the actual machines to work before worrying about AI morality.
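For the curious, the "tricked by stickers" failure is the classic adversarial-example problem. Below is a minimal sketch of the standard Fast Gradient Sign Method on a toy logistic model; the weights, the input, and the "lane is clear" label are all invented, and this is in no way Tesla's actual system:

```python
import numpy as np

# Toy logistic "lane classifier" used to sketch the Fast Gradient Sign
# Method (FGSM). Everything here is made up for illustration.

rng = np.random.default_rng(0)
w = rng.normal(size=16)    # pretend "trained" weights
x = 2.0 * w / (w @ w)      # input chosen so that w @ x == 2.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

y = 1.0                    # true label: "lane is clear"
p = sigmoid(w @ x)         # ~0.88: the model is confident and correct

# For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
grad_x = (p - y) * w

# FGSM: nudge every feature a small step in the sign of the gradient,
# the mathematical analogue of low-contrast stickers on the road.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print("clean prob     :", sigmoid(w @ x))      # ~0.88
print("perturbed prob :", sigmoid(w @ x_adv))  # far lower: call flipped
```

Real attacks work on far larger vision models, but the mechanism is the same: tiny, targeted input changes a human would shrug off can swing the model's output completely.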

      • Redocbew
      • 8 months ago

      Perhaps even more concerning is the blurb at the end of the article arguing that modifications to the external environment are out of scope and not considered a bug in the AI.

      If that’s just an attempt at limiting the amount of cash awarded as part of the bug bounty program, then that’s lame, but understandable. The line has to be drawn somewhere. If they truly think that modifications to the external environment are going to be so uncommon that the AI need not be aware of it, then I don’t have high hopes for the capability of the AI in general. Take a cue from the MagicLeap people, and figure out how to make your stuff work in the real world.

        • drfish
        • 8 months ago

        Totally agree. I’m a Tesla fan, and I cut them some slack because they are innovative, trying to solve a difficult problem, and because people are stupid. However, their “not a realistic concern” line is completely off base. If they aren’t actively working against adversarial images, in whatever form they take, then they are making a huge mistake.

          • superjawes
          • 8 months ago

          I pretty much agree with both of you. My concern isn't so much the "external modification" as what about the sensors, cameras, or programming can be so easily tricked. Does the car not understand how many lanes exist, and what direction the traffic flows in each?

          That would be a pretty big oversight (IMO), since the car needs to know where it can safely drive at all times. If nothing else, it should ping the “driver” to know that something is wrong/conflicting and pass control off to avoid dangerous situations (basically acknowledging the shortcoming of the system to avoid potentially lethal situations).

            • drfish
            • 8 months ago

            Yeah, my line for this is "huh, that's weird." At work, that's what I expect folks to say, and then ask for help. They don't have to solve the problems themselves, but we don't want it ignored either; left alone, it could get worse. The car (or, you know, a 737 MAX) should be able to do the same thing as a safety net.

            • Redocbew
            • 8 months ago

            Yeah, that's the thing. I don't want to get all doom and gloom about it, but right now the stakes are lower than they could be, since there are comparatively few of these vehicles on the road. If this sort of thing happened with a city bus, or a truck carrying hazardous materials, then it's not going to be a case of some dude getting squished against a K-rail or something. It could be much, much worse.

            • superjawes
            • 8 months ago

            I don't think we'll ever see completely driverless vehicles, because there are too many variables for a machine to consider. The engineers designing the systems have to deliver finite solutions, and the simplest ones will always be the safest: avoid dangerous situations, obey traffic laws, slow down and/or stop. All safety systems will converge on these ideas, and any advancement in tech will provide the most value (in terms of implementation AND safety) by supporting those ideas.

            Side note: you'll never see driverless cargo vehicles, and I can say that with complete certainty. Even if the cargo were completely safe for humans, it's still a business's profit. A band of thieves could easily figure out how to stop a driverless cargo vehicle, break inside, and clean it out. The best deterrent for that? Put a driver in the cabin.

            • Redocbew
            • 8 months ago

            With the appropriate resources applied I'm hesitant to say "never", but chances are that widely available, fully autonomous vehicles are far enough into the future that it doesn't really warrant speculation. However, if the position of the makers of these machines is that mods to the external environment are inconsequential, then "never" is looking pretty good.

            • sweatshopking
            • 8 months ago

            Part of the issue is that the massive amount of computational power needed to understand a complicated system like driving requires massive amounts of energy. To make these things work, they're trying to squeeze them into slim power and computing budgets. We need to improve all the underlying computational technology before we'll really be able to trust these things.

            • Redocbew
            • 8 months ago

            The thing about computational resources is that you don't really need to do anything but wait in order to get more of them (my apologies to embedded system devs; can you tell I deal with high-level languages?). That's why I'm skeptical about resources alone being a showstopper. Fixing all the things that could cause a machine to squish the driver (things a human would pick up on almost immediately) isn't going to solve itself in the same way.

            • superjawes
            • 8 months ago

            Even with appropriate resources, I am confident in saying "never" (at least when it comes to cargo-carrying vehicles). There will ALWAYS be an external way to fully stop a self-driving vehicle, even if that means the bad actors need a few vehicles of their own. When a cargo company realizes that their profits could be stolen this way, they'll pay for someone to stay in the cabin at all times to protect it.

            The tech might vastly reduce the mental power a driver needs to put in, but there will always be someone there…just in case.

            • Redocbew
            • 8 months ago

            Or, they could outfit each truck with a murderbot, just in case. I have a feeling that might also go against regulations though. 🙂

            • superjawes
            • 8 months ago

            CRUSHBOT WANTS TO BE A LIBRARIAN, BUT CRUSHBOT HAS BILLS TO PAY.

            • just brew it!
            • 8 months ago

            The 737 Max debacle is a whole different class of stupid. “Hey, let’s add a critical flight control system with a single point of failure which can cause it to malfunction in an extremely hazardous way shortly following takeoff… and while we’re at it, let’s not train the pilots on how to quickly disable it when that happens!”

            • drfish
            • 8 months ago

            There’s some serious connective tissue here, though. The single point of failure on the MAX isn’t technically the same as a shortcoming of NN training for a car’s AI, but the outcome can be similarly catastrophic.

            • just brew it!
            • 8 months ago

            But IMO it was also easier to foresee. The avionics industry understands how AOA sensors work (and how they commonly fail), and the FAA isn't supposed to approve designs which allow a single-point failure to cause loss of an aircraft. Unlike AI, aircraft safety is (mostly) an understood problem.

            • Ikepuska
            • 8 months ago

            True, but I do want to point out that AOA failures like these aren't allowed to remain in play past takeoff in the US, because of other FAA rules and regs on maintenance. It's telling that the vast majority of the issues are with overseas operators who have a less robust oversight and maintenance infrastructure. The hot take from a lot of the (military) pilots I know is that it's a shitty design, but that the conditions that cause catastrophic failures are less common in the US. In fact, I don't know of any cases in the US at all.

            • K-L-Waster
            • 8 months ago

            "Oh, and let's make secondary safety features (like an indicator that the AOA input is inconsistent with other sensor inputs) an optional upgrade!"

            • just brew it!
            • 8 months ago

            Yeah, SMH at that one. You just don’t do that with critical safety features. It’s like the light on the dashboard of your car that tells you your brake fluid is low being an expensive add-on option.

            Another tidbit that leaked out of the investigation of the second crash: There’s apparently some evidence that the pilots [i<]did[/i<] follow the procedure for disabling MCAS, then (for reasons unknown) concluded the manual override procedure wasn't having the desired effect, and switched it back on. Speculation is that the manual crank procedure takes so long to have a noticeable effect that the pilots gave up on it, concluding they'd mis-diagnosed the failure.

            • just brew it!
            • 8 months ago

            [quote<]Does the car not understand how many lanes exist, and what direction the traffic flows in each?[/quote<]

            No, it does not. Modern AI relies on training an algorithm on real-world data (images of roads and traffic, in this case). It's a statistical process, so the car doesn't really "understand" anything. In effect, the car says "in a majority of training scenarios where I was presented with stimuli which resembled these, the correct decision was to veer to the left". A human driver, with human cognition, would make a different decision given the same inputs, because they understand the broader context.

            What's even scarier is that the people who create the AI don't understand what it is doing either, since the decisions aren't based on traditional logic. It's an AI "black box" which is a mindbogglingly complex function of its algorithms and all of the training data it has been fed.

            It's the same sort of shortcoming which led a Tesla to drive under a semi that was crossing an intersection (decapitating the driver) a few years back. The white side of the semi and/or the several-foot-high gap under the trailer led the AI to "decide" that the road ahead was clear. A human driver would almost certainly have had no trouble spotting the semi and applying the brakes in a timely manner.

            Tesla has also put themselves at a severe disadvantage by insisting that they don't need lidar (relying instead solely on cameras) to provide inputs to their AI. Just because people can drive using just their two eyes doesn't mean it is easy for an AI.
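That "majority of training scenarios which resembled these" description can be made literal with a toy nearest-neighbor classifier. The data and labels below are invented; real perception stacks use deep networks, but the statistical, understanding-free character of the decision is the point:

```python
import numpy as np

# A k-nearest-neighbor "driving policy": decide by majority vote among
# the most similar past scenes. All data here is invented.

rng = np.random.default_rng(1)

# 200 training "scenes" as 8-number feature vectors, each labeled with
# the action that happened to be correct in that scene.
scenes = rng.normal(size=(200, 8))
actions = np.where(scenes[:, 0] > 0, "veer_left", "stay_course")

def decide(scene, k=15):
    dists = np.linalg.norm(scenes - scene, axis=1)
    nearest = actions[np.argsort(dists)[:k]]
    # Majority vote; no concept of lanes, traffic direction, or a
    # white semi trailer crossing the intersection.
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

print(decide(rng.normal(size=8)))
```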

            • superjawes
            • 8 months ago

            Exactly. The "fixes" to these issues are better inputs. I would expect the car to be using map data to determine what a road is like, and if a change in data conflicts with the mapped info, ping the driver. If your design assumes that the car is wrong, you can at least go into a safe(/safer) state.

            And are Tesla [i<]still[/i<] not using lidar after that crash? That--to me--seems like a perfect solution, even if they want to rely heavily on algorithms. Just put in the hardware to verify what the computer is seeing...
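A minimal sketch of the cross-check superjawes describes, with invented types and fields (a design sketch, not any shipping system):

```python
from dataclasses import dataclass

# Cross-check camera perception against stored map data and degrade to
# a safe state on conflict. All names and fields here are invented.

@dataclass
class Perception:
    lane_count: int
    travel_direction: str  # "with_traffic" or "against_traffic"

@dataclass
class MapData:
    lane_count: int
    travel_direction: str

def plan(perceived: Perception, mapped: MapData) -> str:
    conflict = (
        perceived.lane_count != mapped.lane_count
        or perceived.travel_direction != mapped.travel_direction
    )
    if conflict:
        # Assume the car, not the world, is wrong: alert the driver,
        # slow down, and hand back control rather than improvise.
        return "alert_driver_and_degrade"
    return "continue_autopilot"

print(plan(Perception(2, "against_traffic"), MapData(2, "with_traffic")))
# -> alert_driver_and_degrade
```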

            • just brew it!
            • 8 months ago

            They will continue to insist that lidar isn’t necessary because admitting they need it would mean either retrofitting it to all previously sold vehicles, or disabling the autopilot feature on those vehicles. The cost of retrofitting lidar would probably put them out of business. Disabling autopilot would probably be a big enough PR black eye that it would put them out of business too (just not as quickly). So they’re stuck between a rock and a hard place.

            • K-L-Waster
            • 8 months ago

            Not to mention that changing the story to "turns out we need lidar after all" practically begs for a class-action lawsuit.

            • superjawes
            • 8 months ago

            I wouldn’t be confident in Tesla’s business case regardless…it feels like they do things “just because [they] can” without asking “Does it make sense?” And they’ve gotten away with that so far, but I don’t think it will last.

            As for lidar, adding it to new vehicles wouldn’t force them to retrofit onto older ones. They can just roll it in with other changes for “Autopilot 2.0” in a given model year. Well…assuming they have the staffing resources to maintain more than one version of the system.

    • Unknown-Error
    • 8 months ago

    Question: Why do the [b<]Mongoose[/b<] (not [u<]ferret[/u<]) cores suck so much?

    [url<]https://www.anandtech.com/show/14072/the-samsung-galaxy-s10plus-review[/url<]

    And why does Samsung insist on them?
