It’s been a while since my last post – been busy in my day job … – but this post will hopefully make up for it: Unlike the last few posts, which mostly covered updates to the miner software, this one tackles something a lot of readers have asked me about: how to actually build a mining rig with the Xeon Phi 7220 (x200/KNL) PCI cards – in a way that does not require a crazy-expensive server that costs more than the cards themselves, yet one that (seems to) work reliably with those particular cards. In this post, I’ll describe exactly that; and in fact, the resulting “rig” is actually a plain old rackable server that you can put into any co-lo, data center, etc. What more could one ask for? 😉
Background
When I first started writing this blog, the – by far! – biggest community response came when I first wrote about building an 8-card “rig” with the Xeon Phi 7220 PCI cards that could do close to 24 kH/s for cryptonight, which at that point in time was – I think – a single-machine speed record (for those that haven’t read it yet, the original article is still right here, behind this link).
This article created quite a stir, and triggered lots of questions, blog comments, emails, etc. However, as “nice” as this build back then was, it had three major issues:
- those cards are still pretty thin on the ground; now that people can google an actual use for them a few more are appearing, but they’re still rare.
- those cards are discontinued products, so there’s no documentation, support, etc. In particular, it’s still not 100% clear which boards, processors, chipsets, etc. they will actually work with – because quite frankly, there are quite a few that won’t.
- the one machine that I had used in my build cost a small fortune; in fact, I paid as much for that box (>$6k) as I paid for the cards that went into it. At that price tag the impact on profitability is quite big: the server itself doesn’t add any hash rate, so the “revenue per dollar” of the full system is only half that of the cards alone.
Those shortcomings are exactly why I had always advocated going with the “ready to rack” systems from Asrock, Exxact, or Colfax for Phi-based production mining. However, production is one thing and fun another, and ever since that original article there have always been readers asking along the lines of “how do I build a mining rig with the 7220 PCI cards?” … and while I did experiment with this question on and off – and found quite a few different combinations that do work just fine – I never really found the time to properly document my findings.
Anyway – earlier this week I sold a few 7220 cards I had recently gotten my hands on, and while doing so realized that whoever buys these cards will eventually have to figure out how to get them to run; and since I now do have a reasonably good idea of how to get those cards to work in a non-crazy-expensive way, I decided that it’s finally time to sit down and share my findings. And to do this in the most useful way, I decided to write it up as a complete “walk-through” for building a rig with the 7220 cards.
The Rig – Ingredients
Part 1: Four 7220 cards: To start out with, you do of course need some x200 PCI cards. For this particular build I pulled four 7220 cards out of my 8-card monster machine (should’ve written this article before I sold these other four cards – oh well).
Part 2: An old, “surplus” Xeon GPU server: For this particular build I’m going to use an old SGI/Supermicro “SYS-2027GR-TRF” system I bought from “Mr Rackables” (now “UnixSurplus”) on eBay. These servers are quite old, and sell pretty cheaply – I paid a clean $900, including server chassis, PSUs, memory, CPUs, fans, everything ready to go (except a hard disk, I think). I’ve now bought several systems from UnixSurplus; I can only recommend them.
Important: Please note I’m intentionally using pretty old systems here: not only are they much cheaper than newer systems, they also seem more reliable: I had a few newer systems (Xeon v3 and Xeon v4 based) that did not work, but so far all the Xeon v1 and Xeon v2 systems (both of which use a different chipset than the v3 and v4 generations!) do seem to work flawlessly. So, the older stuff (originally designed for the x100 Phis) it is for me! For those that are curious, the particular system I got has two Xeon E5-2530s and 16 GB of RAM, and is rated for four “Phi or Kepler” GPUs (LOL). As you can see in the pics, the system actually has PCIe slots and space for a total of six dual-width cards, but at least for this build I’m sticking with the recommended four cards (though of course, I will at some point try numbers five and six too 🙂 ).
Here are some pics of the system as I took it out of the box:
Don’t go by appearances: these machines are surplus, and the chassis are sometimes a bit beat up – this one has some pretty beat-in handles on the PSUs (what did the carriers do with it? Play baseball!?), and on another one I had, one of the front handles was mostly broken off – but inside, they’re actually pretty well cared for, and so far all the ones I got worked out of the box, no problem. For that price (as parts, you’d pay that much for the CPUs and memory alone :-/) I can live with a few scratches.
Of course, it probably doesn’t have to be exactly this model and system – in fact I got a few other ones from UnixSurplus, too – one with a first-gen Xeon, one with only two GPU slots, etc. – and so far they all seem to work, too. The exact steps of (dis-)assembling may be a bit different from the ones I describe below, but overall it should be very similar.
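By the way, if you’re ever unsure what exactly a given surplus box is (seller listings aren’t always accurate), it’s easy to check from any Linux live USB before committing to a build. This is just a generic sketch using standard tools, nothing specific to this particular system:

    # what chassis / motherboard is this, really?
    sudo dmidecode -s system-product-name
    sudo dmidecode -s baseboard-product-name
    # which CPU generation? ("E5-26xx" = v1/Sandy Bridge, "E5-26xx v2" = Ivy Bridge, ...)
    lscpu | grep "Model name"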
Step 1: Open, and get at the PCI slots
Now, the first thing we should do is boot up the machine, check the BIOS, etc. But since patience isn’t exactly my strongest suit, of course we’ll just skip that for now and see what’s inside. There are two screws on the side (marked with little triangles) that you have to take out, then you can slide off the top cover backwards (tip: get a box for the screws, you’ll need it :-/) :
What you’ll see when it’s open is basically the CPUs and RAM in the middle, surrounded by four big metal blocks. Of those, the one at the right back (right front in the image) holds the dual power supplies; the other three are compartments for two dual-width GPUs each (so yes, in total it should fit more than four cards). Each of those three compartments contains risers that plug into the motherboard.
Now, take your beloved screwdriver, unscrew those three compartments, and take them out. If you accidentally unscrew the wrong screws in those compartments, don’t worry – the only other screws in there are for some weird metal brackets that have no practical use whatsoever, so if you accidentally take them off you’ve only saved time – because if you haven’t, you should do it anyway. Here are some pics with all three compartments taken out:
We’ll eventually only need the two front compartments, but I’ve taken out all three – it can’t hurt to remove the third one too, and it might even help with airflow (and maybe I’ll eventually try mounting something there, too).
Step 2: Boot, and properly configure BIOS
OK, now that the first fun is over we have to boot the machine and check the BIOS – once we put the cards in, the machine might not boot anymore without 4GB support enabled (the x200 cards map a lot of memory through their PCI BARs, and those only fit above the 4GB boundary); and since it’s not exactly fun having to take the cards out again just to get into the BIOS, I’d strongly suggest doing this before putting in the cards.
Once booted, the main thing to check is whether “4GB support” (often called “Above 4G Decoding”) is enabled in the BIOS. In my case it already was, but in another machine I did a few days ago it wasn’t, so better double-check. For good measure I also cranked the “PCI latency” setting up to its maximum value – we won’t be bound by PCI latency anyway, and since the main errors I’ve seen in other systems were DMA timeouts I figured this can’t hurt – it probably won’t do any good, either, but hey, call me superstitious…. Here are a few shots of my BIOS screens:
Of course, you should never run a chassis with the lid open, or with only one of the PSUs attached, and of course only when properly grounded, and with anti-static mat and wrist-band, and … oh well.
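As an aside: once the cards are actually in and a Linux is booted (step 5 below), there’s a quick way to sanity-check that the 4GB setting really took effect – if it didn’t, the kernel typically fails to assign the cards’ huge memory BARs. The following is just a rough sketch with generic Linux tools; exact device naming may vary, and “<bus:dev.fn>” is a placeholder for whatever address lspci reports for one of your cards:

    # the cards should show up on the PCI bus as Intel co-processor devices
    lspci | grep -i "co-processor"
    # check that the large 64-bit memory regions got mapped, and that the link came up at x16
    sudo lspci -vv -s <bus:dev.fn> | grep -iE "Region|LnkSta:"
    # if "4GB support" is off, you'll usually find BAR assignment complaints here instead
    sudo dmesg | grep -iE "BAR|can't assign"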
Step 3: Mount the cards
OK, now that the BIOS should be able to detect them, let’s mount the cards. Doing that in turn requires four steps: taking off the brackets, putting the cards into the risers, connecting power, and plugging the riser compartments back in.
Step 3.1: Taking off the brackets
Before we can put the cards into the riser compartments, we first have to take off the mounting brackets they come with: these are useful for mounting into a workstation or a full-height server like my 8-way toy … but when mounted sideways through risers, the brackets will only be in the way. Luckily, they’re easy to remove: just remove the four little screws – two on top, two on bottom – and the bracket slides right off:
Remember that box for screws I mentioned? Make sure to keep those brackets and screws – if you ever want to sell those cards off you “probably” want to reattach them, and they get lost very easily …
Step 3.2: Choosing the compartments to use, and adding power cables
As said above, there are three compartments for GPUs in this board. Of those, the ones you should use are the two at the front – left side and right side – as those are the ones that slot into PCI slots 1, 2, 3, and 4, all of which are full x16 slots. The compartment at the back instead goes into slots 5 and 6, one of which is an x8 slot – which may or may not work, I haven’t tried yet. Just use the two front ones; they’re also closer to the power connectors and fans, both of which is good to have.
If you take a closer look at those compartments, you’ll see that there’s some PCIe power connectors right next to them on the motherboard; some of those will have PCIe 8-pin power cables already attached, others will be empty:
Though how many cables you’ll have may depend on the actual box you get, chances are they won’t be remotely enough – so get a few PCIe 8-pin power cables (I use ones that have two 8-pin connectors each, just in case). I should probably have listed those as additional “ingredients” above, but since they’re very useful for any GPU or Phi mining rig I assume you already have a box of those lying around – if not, get some!
Now, make sure you have a total of two 6-pins and two 8-pins (or simply four 8-pins) on each side. Mine already came with two 8-pins on each side, so all I needed was one such 2x 8-pin cable per side. Luckily, in those systems there are plenty of connectors on the motherboard.
Step 3.3: Insert cards, insert compartments, and power them up
Now it’s time to put it all together: put the cards into the front two compartments (two cards each, if you have them), and get them powered up and plugged into the motherboard.
Note this is in fact the trickiest part of the entire operation: for the front right compartment it’s fine to first attach the cables, and then slot the compartment’s riser back into the motherboard. For the front left one, the riser is actually so incredibly long that you first have to get that riser back onto the board, and only then should you connect the power cables, else some other cables will be in the way. As you can see in the last two pics above I had to learn this the hard way, too – I first connected all the power, then couldn’t fit the compartment back in any more… oh well, if I never do anything more wrong than that I’ll be a happy man. Either way, save yourself the time and first plug the compartment (halfway) back in, then connect the PCIe power cables from the front, and done.
Plug both compartments back in, screw them back on (uhhh… might have forgotten that), and everything looks rather tidy again:

Step 4: “Adjusting” the fans
Though the server above now looks pretty solid already, chances are that the airflow won’t be enough to cool the cards once the miner kicks in. Yes, there’s a turbofan in front of each of the compartments, but by default they won’t spin fast enough. For the left compartment (back in this image) that still seems OK, because it has another fan at its back and good airflow. For the right one (front in that image) that’s not the case, as there’s only one fan, and the back of the compartment is partly obscured by the PSUs and some cables.
So, to fix that we’ll have to “convince” the fans to put in a little bit of extra work. There’s “probably” some way to do that through IPMI, the BIOS, or something, but so far I haven’t figured out how, so let’s do it the completely failsafe way we’ve used in some previous builds, and just cut the two control wires of the four-wire fan connectors. Without those control wires (but with the other two power wires still connected) the fan will go full tilt no matter what load or temperature, which is exactly what we want.
In the following images you’ll see how the fan is attached to its connector using four wires: black, red, yellow, and blue. If you don’t care about “modifying” the chassis itself, the easiest thing would be to simply cut the yellow and blue wires; but since I might eventually want to sell that system on I didn’t want to cut any existing cables (even in a surplus machine :-/), so I first inserted a 4-wire fan extension cable (a few cents on NewEgg, if you buy them in bulk), and cut that instead:
If you turn that machine on, you’ll hear the effect very clearly, right away :-).
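For completeness: if you’d rather try the IPMI route I mentioned above before reaching for the scissors, something along these lines is reported to work on a lot of Supermicro boards of that era. I haven’t verified it on this exact chassis, so treat it as a hedged sketch rather than a recipe – the raw command in particular is Supermicro-specific and may simply not work (or not stick) on your BMC:

    # read the current fan speeds via the BMC
    ipmitool sensor | grep -i fan
    # try to force the fan mode to "full speed" (0x01); board/BMC dependent, unverified here
    ipmitool raw 0x30 0x45 0x01 0x01

If that works for you, great – if not, the wire cutters do the job every time.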
Step 5: Run it …
OK, that should be it: take a lukStick (the mpss-knl variant), burn it onto a 16 GB USB stick (if you haven’t already done so, start this before even opening the box); plug it in, and reboot: upon booting, the four cards show up, first in micctrl, later (once mpss is started) also in micinfo.
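For reference, here’s roughly what I look at from a shell once the stick has booted – the exact output will differ from setup to setup, and micctrl’s flags can vary slightly between MPSS versions, so take this as a sketch rather than a script:

    # all four cards should be visible on the PCI bus
    lspci | grep -i "co-processor"
    # card state according to mpss - should eventually report mic0..mic3 as online
    micctrl -s
    # is the mpss service itself running? (on systemd-based images)
    systemctl status mpss
    # once mpss is up, the cards also show up here
    micinfo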
In my case the first boot still failed, with the MPSS service not properly booting the cards and micctrl eventually showing “error” – probably because the lukStick had previously been set up for only two cards, so it first had to re-initialize itself for four. The second time around everything worked as expected. And the third time. And the fourth. And every time I retried since then. It has now been running around the clock for almost 24 hours, no problems whatsoever (and similar builds have run for way longer, all without any issues, either):
It is also particularly interesting to have a look at the temperature of those cards (“micinfo | grep Temp”): before the miner kicks in, the cards run at a nice 40-ish degrees Celsius; but once the miner starts, that goes up quite quickly. Without our “fan modding” this quickly goes beyond 80, 90 degrees and the cards shut down; but with our modded fans it stays around 80, which is just fine.
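If you want to keep an eye on that while the miner warms up, a trivial loop does the job – again just a sketch, and the exact field names micinfo prints may differ between MPSS versions:

    # re-print the per-card temperatures every 10 seconds
    watch -n 10 "micinfo | grep -i temp"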
Summary
OK, that’s it for now; I hope you’ll enjoy reading this as much as I enjoyed doing it! In total the entire thing cost me about an hour of work, including taking the photos, arranging lights, cleaning my desk, etc; writing actually took (way) longer than doing it.
Of course, that doesn’t help with the question of “how to even get those cards”, but that may soon be a topic for another article – we’ll see. At least for the lucky ones that got some, this article should be a good blueprint for getting them to work. The machine itself cost me $900, which amortized over 4 cards is pretty good, considering that power supplies, processors, etc. are all included… either way, it’s way better than the $6+k professional server, which – even after adjusting for the fact that it can take twice as many cards – is still roughly three times as expensive per card. And since mining is all about efficiency, I’m pretty happy with that setup. All else you need is a couple of cables for a couple of bucks, so this build is not only cheap, it’s also way simpler to arrange than trying for a K1SPE workstation build, or a workstation-with-7220s build, etc. These old SYS-2027s seem to be in good supply, too, so finding them will be way simpler than finding the cards. The final thing then does about 11.5-ish kH/s – yes, the CPUs are pretty old, but since they don’t have to do much they’re perfectly adequate.
Of course, this is not the only way to build such a rig: I tried several similar old surplus machines, and pretty much anything that was originally designed for an x100 KNC Phi seems to work, too. I’ve seen issues with newer Xeon v3 and Xeon v4 boards (still can’t pin down which ones do and which ones don’t work!?), but those old ones seem to work consistently. Use at your own risk, of course – this is not legal advice, and I’m not responsible for damage, financial loss, mis-investments, or whatever – but at least to me this looks like something that works. Best of all, the final product is a ready-to-use rackable server; you can rack it up, move it to colocation, etc. – all you have to do is slide it in, power it up, and done (in fact, the system even came with free mounting rails 🙂 ).
With that: Happy mining!