Biostar TB250-BTC Pro motherboard hands miners a shovel

Gerbils in the market for a midrange graphics card lately are acutely aware of the impact of cryptocurrency miners on the supply of cards based on AMD and Nvidia chips. Biostar isn't going to make the situation any better with its newest motherboard, the TB250-BTC Pro. The BTC Pro sports a whopping twelve PCIe slots that miners can stuff with graphics cards or ASICs in order to increase the number of TFLOPS they can bring to bear in the search for virtual coins.

As one might guess from the model name, the BTC Pro is based on the Intel B250 chipset and can accommodate LGA 1151 CPUs from the Skylake and Kaby Lake families. One of the PCIe slots is wired with 16 PCIe 3.0 lanes; all the other ones are x1 slots. Six of those smaller slots lie behind the others on the motherboard and require the use of a PCI bracket or a riser cable unless the card in question is tiny. Cryptocurrency mining requires lots of processing grunt but little in the way of PCIe bandwidth, so attaching as many GPUs or ASICs as possible to a single CPU and motherboard allows miners to reduce their initial cash outlay.

In addition to all those PCIe slots, the board has six SATA ports, four USB 3.0 ports plus a header, two USB 2.0 connectors, and a pair of USB ports that only provide power. The board has a Realtek Gigabit Ethernet chip, but no integrated Wi-Fi. A pair of PS/2 ports and a DVI-D display output make it easy for miners to use old KVM switches to manage multiple mining rigs. The board has onboard audio, so it could conceivably be used in a normal computer when the crypto boom goes bust.

Biostar didn't provide pricing or availability information for the TB250-BTC Pro, but we expect it'll come in more expensive than the company's $90 TB250-BTC board, since that model has a measly six PCIe slots.

Comments closed
    • psuedonymous
    • 2 years ago

    They’ve missed a trick: in addition to the 12 lanes from the PCH, the lanes from the CPU can be bifurcated. That means the x16 slot could potentially be split down to an x8 and two x4 slots.

    • blahsaysblah
    • 2 years ago

    Too much extraneous crap. The 200 PCH has 24 PCI-E lanes available plus 6 USB-only ports.
    Those 6 USB ports could take care of boot/keyboard/mouse/LAN. I see space for 18, if not all 24, x1 slots.

    Who needs fancy sound? Who needs SATA? Maybe take one slot for Intel LAN. Front panel connector? What?

    Though maybe they know the PCH better; it can't really handle 24 active devices at the same time.

    edit: link to the [url=https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/200-series-chipset-pch-datasheet-vol-1.pdf]200 PCH[/url] datasheet.

      • brucethemoose
      • 2 years ago

      The extra stuff makes the board usable outside of mining. So when you’re out, you can sell/trash 10 GPUs and leave one in the x16 slot for gaming or general use.

      • rufus210
      • 2 years ago

      The PDF you link clearly shows the B250 only supports up to 12 PCIe lanes.

        • blahsaysblah
        • 2 years ago

        Bah, I missed that chart. 🙂 But I did then find another chart that also says there is a maximum of 16 devices (root ports), regardless of the 24 lanes available.

    • ronch
    • 2 years ago

    I like how the rear port cluster looks so clean and organized. I hope more motherboard makers give more thought to that.

    • DPete27
    • 2 years ago

    Haha, that’s pretty cool actually.

    • bthylafh
    • 2 years ago

    Now that the semi-average person is getting into this, the bubble bursting can’t be far away.

      • Chrispy_
      • 2 years ago

      ETH isn’t going to be easy to develop an ASIC for.

      At present, nobody thinks it's going to be possible, since ETH mining requires that the entire blockchain be stored in fast RAM. Even a DDR3 or DDR4 cache is too slow, which is why miners currently prefer HBM Furys and 256GB/s Polaris GDDR5 cards to mine on.

      Perhaps a very complex ASIC farm tied to a high-speed DDR4 controller might offset the slower blockchain access with huge scalability, but someone has to develop that, and it's a 10x more complex task than previous standalone ASICs.

      GPU mining is going to keep going for a while, but I think the bubble caused by rapid growth and the shortage of GPUs is about to end. In 3-6 months, so many people will be mining that the hashrate will start to level off, and countries that have high electricity costs will scale back.

        • southrncomfortjm
        • 2 years ago

        Could this spell the end for the PC master race? Something’s got to give.

        My guess is we’ll have to wait for mining specific cards, like the ones without video outs, that really make more sense for mining that gaming cards.

        Or, I just give up on PC gaming going forward and go with a PS4 and Xbox One. At this point, getting both of those consoles, with a few games, is cheaper than getting a GPU.

        • xeridea
        • 2 years ago

        Incorrect. The blockchain is not stored in RAM; the DAG is. The algorithm is Dagger-Hashimoto. The DAG file is stored in RAM and needs to be accessed at high speed, preferably with low latency (GDDR5X isn't great, nor is HBM). The DAG file grows every 4-5 days to account for memory-size increases. The things that matter are memory speed, memory bus width, and memory latency. All mining GPUs are heavily memory-bottlenecked to the point where compute power is useless.

        An ASIC could partially improve cost, but it would basically be a card with insane memory bandwidth, which isn't cheap. Bitcoin ASICs speed up about 3,000x over GPUs, Litecoin ASICs about 100x, and some other coins' ASICs about 10x. Eth would be much less, perhaps 2x at best, so it's not worth the R&D. Ethereum is planned to move to Proof of Stake anyway….
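        [Ed. note: the memory-bound design described above can be illustrated with a toy sketch. This is NOT real Ethash; the dataset size, access count, and mixing steps are made-up stand-ins. The point is structural: each hash attempt performs many pseudo-random reads into a large dataset, so throughput is limited by memory bandwidth and latency rather than raw compute.]

```python
import hashlib

DAG_SIZE = 1 << 16   # toy size; the real Ethereum DAG was gigabytes and grew per epoch
ACCESSES = 64        # pseudo-random dataset reads per hash attempt

def make_dag(seed: bytes, size: int = DAG_SIZE) -> list[int]:
    """Deterministically expand a seed into a large pseudo-random dataset."""
    dag = []
    h = seed
    for i in range(size):
        h = hashlib.sha256(h + i.to_bytes(4, "big")).digest()
        dag.append(int.from_bytes(h[:8], "big"))
    return dag

def toy_hashimoto(dag: list[int], header: bytes, nonce: int) -> int:
    """Mix the header/nonce with ACCESSES pseudo-random reads from the dataset.

    The XOR against dag[...] is the step that dominates on real hardware:
    it's a dependent read at an unpredictable address, so it can't be hidden
    behind compute.
    """
    mix = int.from_bytes(
        hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()[:8], "big"
    )
    for _ in range(ACCESSES):
        mix ^= dag[mix % len(dag)]                         # memory-bound read
        mix = (mix * 0x100000001B3) & 0xFFFFFFFFFFFFFFFF   # cheap mixing step
    return mix

dag = make_dag(b"epoch-seed")
result = toy_hashimoto(dag, b"block-header", nonce=42)
```

        An ASIC can shrink the cheap mixing step almost to nothing, but the dependent reads into the multi-gigabyte dataset remain, which is why the speedup over a GPU would be modest compared to compute-bound coins.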

          • DavidC1
          • 2 years ago

          I’m not so convinced that HBM was the reason for Fury not being fast.

          Nvidia’s HPC oriented GP100 Tesla achieves 60MH/s mining Ethereum. It uses HBM2.

          When CPUs used external memory controllers, the companies that could make top performing ones were Nvidia and Intel. Intel was known to be “legendary” in memory controller performance but there were chipsets by Nvidia that exceeded Intel’s. AMD would be distant second behind the two.

          If we consider Nvidia/Intel to be neck-and-neck regarding memory controller performance, and by looking at CPU memory controller performance, AMD is quite a bit behind.

          It’s possible that Fury’s HBM controller performance is just not that good.

            • brucethemoose
            • 2 years ago

            I think HBM1 latency is pretty high, even compared to GDDR5.

            Not sure about HBM2, but I know the frequency is like twice as high.

        • ColeLT1
        • 2 years ago

        The AntMiner S3+ is a Scrypt (ETH) ASIC. I think it is only in preorder status currently.

    • Anovoca
    • 2 years ago

    yellow like a canary?
