More on building Phi 7220 Mining Rigs…

Wow. What a community response. Unbelievable. Last week I posted about my latest toy, a rig that uses eight Xeon Phi 7220 cards in a single 4U server to achieve a total of roughly 24kH/s for cryptonote coins (in case you missed it, the original post is right below).

I had expected some interest in that merely because it was, to my knowledge, the new speed record for a single-node(!) mining rig… but man, was I wrong: three thousand unique visitors in the first 48 hours, on a blog that didn’t even exist until two days before. And tons and tons of interesting comments and questions, on both the blog and reddit.

Based on that feedback from the last three days, it’s now become very clear that there are two follow-ups I’ll have to write (else I’ll drown in emails :-/). The first topic that came up again and again was “Vegas vs Phis” in terms of mining revenue, profitability, etc. … and I promise, I’ll write one – but I’ll first wait for my 4×7250 node to arrive, so bear with me. The second big group of questions revolved around “how can I replicate that build” – i.e., how do the Phis work at all, where can I get them, and once I have them, how can I build a machine that’ll take them. This latter question is what this post will be about.

Mining on Phi 7220 cards…

First off, it is kind of hard to get those 7220 cards – if you don’t already have some, you’ll probably have a hard time finding any. I got mine off ebay, but that seller – at least right now – doesn’t list any more, so maybe they’re gone. Also, be sure not to confuse the x100 cards (3120, 5100, 7120, etc) with the newer x200 Phis – the old ones will be about 5x slower, so think hard about whether that’s worth it. Quite frankly, if you’re interested in mining with Phis you’ll be best off buying one of the 4×7250 self-bootable machines (I’ll write more on that once mine arrives).

That said, if you already have 7220 cards – or miraculously found a good source for them – you’ll have to find a way of actually hosting them, getting them to run, and mining on them. In terms of mining software, lukMiner will run on them, and will run rather profitably. To get it to run, though, you’ll need a copy of Intel’s MPSS 4.x software stack to drive those 7220 cards – and since Intel took that product down, that isn’t all too easy to get any more. I still have an older copy from back then, but don’t have permission to share it, so you’ll have to find somebody else to share it with you (if anybody who has a copy wants to share, please feel free to send a comment with a link!). Once you get both hardware and MPSS stack up, you copy luk-xmr-phi to the card, and run it; so that part is easy.
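For reference, the “copy it to the card and run it” step looks roughly like this on the host. This is only a sketch: it assumes MPSS is installed and running and that the first card comes up as `mic0` (the MPSS default); the pool address is a placeholder, and any further miner options are omitted – check the lukMiner readme for those.

```shell
# Hedged sketch: start the miner on the first Phi card.
# Assumes MPSS 4.x is up and the card is reachable as 'mic0';
# pool.example.com:3333 is a placeholder, not a real pool.
status="skipped"
if command -v micctrl >/dev/null 2>&1; then
  micctrl -s                      # card should report 'online'
  scp luk-xmr-phi mic0:/tmp/      # copy the miner binary onto the card
  ssh mic0 "/tmp/luk-xmr-phi --host pool.example.com:3333"
  status="started"
fi
echo "miner: $status"
```

If MPSS isn’t installed the script just reports “skipped” instead of failing, so it’s safe to try on any box.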

Now to the tricky question: how to actually build a rig with these cards. The problem is that they seem to work only in certain motherboards, and since they got pulled off the market there’s no support. So all I can do is share the three successful builds I managed to create – without any warranty whatsoever that they’ll work on your side.

Building a Rig – Option #1: the 24kH, 8-card, 4U server

The build I used in my original post is a professional, off-the-shelf server from Exxact Corp. I used theirs because I knew they had listed this product with Phis in the summer, so I had pretty high confidence it would still work. For those interested in more details: it’s a 4U server with two Xeon CPU sockets, a C612 chipset (which seems to be important), 10 full-length PCI slots, and (8+6) PCI power connectors, which works out perfectly. Since I didn’t want to take any risks I bought a complete system from Exxact, with CPUs, memory, disks, 2x1600W power (fully redundant, so 4 PSUs actually), and everything else except the Phi cards, which I already had. In total – and with shipping – that set me back something like $6k, just as much as the cards themselves.

Of course, I could probably have built that thing from parts for much less (the two unused redundant PSUs alone are worth several hundred bucks) … but with $6k in Phi cards already on the line I didn’t want to take any risks – and quite frankly, so far I’ve been extremely pleased with this purchase (and Exxact have been most helpful so far, too!). For anybody wanting to get this system, here’s a pic of both the rig and the exact sticker of that machine (and I’m sure Exxact would be happy to help, too – just mention what it’s for, they’ll remember me :-/).

One final note: the careful reader will have noticed that I mentioned ten PCI slots, yet my build uses only eight cards. Yes, I did fit 10 cards in there, but ran into some issues. First of all, I blew some circuits in my basement :-/ … and worse, at home (where I built the rig) I only had 110V power, which wasn’t all that good for the 2x1600W PSUs. And finally, when I did get it to boot, the machine became unstable with 10 cards – maybe because of heat, or maybe because the drivers don’t like that many cards, I don’t know. Eight work; 10 I’m not sure about. I might get back to trying, but for now I don’t have any spare cards any more, anyway. Here’s a pic with 10 cards, but again, right now I only run eight. I’m not crazy. Not really, anyway.

Option #2: A Cheaper, but still professional rig

Since $12k in a single rig is admittedly a somewhat scary thought I also played around with finding cheaper, smaller options. After looking primarily for the same board generation and C612 chipset I ended up with a SuperMicro SYS-5018GR-T server.

Initially this didn’t produce enough air flow to cool the cards, but after a bit of “friendly persuasion” of the fans (ie, cutting the two control cables of the fans to make them go full blast :-/ – see pic) that worked, too.

I got the barebone for $1100 off ebay, plus a refurbished Xeon and a single DIMM of memory … all together probably around $1400 (plus cards) – not far off what you’d pay for a typical desktop PC to put GPUs in, but in a rackable form factor, so you can actually farm it out to a co-location place.

Option #3: The totally Stone-Soup, Do-it-yourself Build

OK, before I went ahead and bought all these servers and cards for now close on $20k, I (obviously?) first did some simpler tests, buying only a single card and testing it in something I already had. “Luckily” I had lots of unused workstations lying around that I could test with … the reason I put “luckily” in quotes is that the reason I have those in the first place is that they started out as GPU mining rigs, but since “several” of those GPUs have died the mining death over the last few months, I now have some unused workstations :-/. (Yes, one of the reasons I switched to mining on Phis is that I simply had too many GPUs die on me – in particular a certain brand, but I don’t want to offend anybody, so I’ll keep that part to myself.)

Anyway – I tested many different machines, and most didn’t work. Either they didn’t boot at all, or they booted but didn’t show the cards, or had too-old BIOSes (the cards need “above 4GB decoding”), etc. I finally found one of my machines that took the card, and for everybody who wants to replicate it, here are the specs:

  • Motherboard (likely the most important part): Asrock X99 Deluxe
  • CPU: Some X-core Xeon E5 bought off ebay – probably won’t matter
  • A cheapo PCI 1x GPU to drive a monitor (won’t need it for mining, though).
  • Three Xeon Phi 7220 cards (started with one, but then put in two more)
  • EVGA 850GQ (850W) PSU
  • Phanteks Enthoo Pro M case (won’t matter), and a single DIMM of 8GB of RAM.
  • Lots of fans.
  • CentOS 7.3 with MPSS 4 stack and lukMiner 0.8.6.
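Once a candidate machine is assembled, a quick sanity check tells you whether the board and BIOS actually expose the card. This is just a sketch – the exact device string varies by model, but the Phis show up under the PCI “Co-processor” class:

```shell
# List any Xeon Phi cards visible on the PCI bus. If nothing shows up,
# re-check the "above 4GB decoding" BIOS setting and reseat the card.
found=0
if command -v lspci >/dev/null 2>&1; then
  if lspci | grep -qi 'co-processor'; then
    lspci | grep -i 'co-processor'
    found=1
  else
    echo "no co-processor on the PCI bus - check the BIOS and reseat the card"
  fi
else
  echo "lspci not available on this system"
fi
```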

Et voilà, here we are:


In terms of cooling, a regular workstation’s case fans won’t be enough. Not by a long shot they won’t. At first, I added this semi-professional Lasko fan:


This obviously blows air to the inside rather than out of the case, but if you leave the case open that’s OK (and if not, it’s strong enough to make all other case fans go backwards, too, LOL).

Eventually, however, that setup looked a bit shaky even by my standards, so I went ahead and scouted for some smaller fans to cool this. Typical case fans won’t do, even if mounted right behind the cards. Eventually, however, I found some higher-powered fans on ebay (see, for example, here for a listing). The blue (printed) shroud doesn’t actually fit the x200s (they have their power connectors organized differently from the x100s), but the case wouldn’t have had enough space to host those shrouds anyway, so I simply took them off – the fans are strong enough to push enough air through the cards even without a perfect fit. Oh, and of course: duct tape is your friend :-/ In the following two pics, the left one shows two such fans connected with two shashlik skewers and some duct tape (I didn’t say it was professional grade, did I?); the right one shows those fans mounted right behind the cards – one fan does one card, the other does two.

Again, I used the trick of messing with the fans’ control cables, and simply have them go full blast (the image on the left shows the fan connector cable cut open to allow a four-pin fan connector to connect to a two-pin 12V connector – the control wires aren’t cut, just not connected, so the fans go full blast. Right image: that stuff connected to a 12V molex). With that, the machine has now been up and running for three weeks, no issues whatsoever (well, I had to fix a few issues with hung nicehash connections in the miner, but the hardware works all right).

As mentioned above, I’m not sure these builds can easily be recreated – for example, I have some other X99 boards that do not work, and I have no clue why this one does (i.e., no warranties, and your mileage may vary). Anyway – for those who have some of those cards, I hope this info will at least open a path to getting them up and running. As such:

Happy Mining!


70 thoughts on “More on building Phi 7220 Mining Rigs…”

    1. Actually, it doesn’t seem to be a Deluxe at all – the serial number on the motherboard doesn’t match the box I thought it came in.

      The BIOS doesn’t actually show any BIOS or firmware version at all (one of the weirdest BIOSes I’ve ever seen), but it does in one place show UEFI version X99 Extreme4 P2.10. That might also explain why I found two “identical” boards of which one worked and one didn’t: from the boxes I found, I have one Asrock Extreme 4 and two Asrock Extreme 3s – they look absolutely identical, but at least according to the BIOS the one that works is the 4.

      Also note that “above 4GB decoding” is turned on in the BIOS.


  1. Luk, thanks for the great write-up.

    Is it possible to get a Xeon Phi x200 coprocessor and Vegas running in the same machine? For example, with the Asrock motherboard you mentioned.


    1. If only I knew – I bought a Vega Frontier before Christmas, and still haven’t gotten it to run in any of my machines, or in any of my linux flavors … so seriously, no clue.


      1. On which part are you getting stuck? Is the system not recognizing the card, or does that part work and the miner doesn’t work?


      2. Tried all kinds of drivers, from amdgpu (17.4beta, 17.5, and 17.5) to compiling my own 4.15 kernel and compiling my own ROCm stack, all on both ubuntu and centos (even different versions), all from clean installs, all on different boards/cpus, and still can’t get a single system to boot with a kernel that supports this thing. I can get the amdgpu driver installed to boot X, but then opencl doesn’t work; and if I compile my own kernel or ROCm stack, the next boot shows a garbled screen and/or hangs the machine with “firmware error”. Ridiculous.


      1. You think some of the still-hosted 3.x mpss packages would work on ubuntu? I haven’t configured this hardware so I don’t know why that wouldn’t work.


      1. Yes, you can find it…. but I think he was referring to the windows version of the miner, and _that_ is another kettle of fish….:-)


      1. Doesn’t exist :-/ (if it did, would I ever have made my sticks with centos?!).
        For the old 3.x mpss versions you could always recompile (if you knew how, and were willing to fight it a bit); but with this one I never even tried. :-/


      2. I was replying to your comment about recompiling 3.x to try to get software to work on Ubuntu. I couldn’t figure out how to reply directly to that comment. So the TAR works as normal on RH 7.3?


      3. For MPSS 4.x (I think the latest was 4.4.1, but I’m not sure without checking) the install script that comes with it seems to work just fine with whatever CentOS version the README that comes with it says. I _think_ it’s 7.3, but I’m not sure – again, double-check the readme. Once you’ve got it installed you should, however, _not_ update the kernel on that system, since then you’d have to recompile the MPSS modules, and they seem to not like any newer kernels. Install exactly the version they want, and don’t mess with it 🙂
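To illustrate the “don’t update the kernel” point: the MPSS modules are built against one exact kernel, so a check like this tells you whether the running kernel still matches. Note the `extra/` module path is an assumption about where the MPSS install drops its `mic` modules – adjust it to wherever your copy actually put them.

```shell
# Warn if the running kernel no longer has MPSS's mic modules built for it.
# NOTE: /lib/modules/<kernel>/extra is an assumed install location; adjust
# to match your MPSS installation.
running=$(uname -r)
if ls /lib/modules/"$running"/extra/mic* >/dev/null 2>&1; then
  echo "MPSS modules present for running kernel $running"
else
  echo "no MPSS modules for kernel $running - don't update the kernel after installing MPSS"
fi
```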


      4. Thank you for all the information on your site. I am a little surprised that Intel decided to erase all the product support as if these cards were never made.

        Is it possible to keep the KNL cores in a separate environment from the one in your main socket, or do they all have to be registered in the same OS? How does the job queue manage the two core types?


      5. For the 7220s, the KNLs are in separate PCI cards, just like GPUs, so the “cores” never appear to the OS – you have to manually code your coprocessor offloading (again, just like GPUs), which lukminer is of course doing. Either way there’s nothing that “registers” itself into the OS, and certainly no two different core types.


      1. I just received mine, and the good news is that it’s “working” out of the box – the bad news is that performance – at least in the limited time I’ve looked at it – is about 20% lower than expected: I had expected close on 3k, I’m getting about 2350H/s. Don’t yet know what the issue is – possibly OS version/kernel (it seems it’s never going into turbo mode), possibly something still mis-configured in the bios. Will send a post once I know more … but at least 2350H/s is a baseline I can vouch for so far.


    1. As I mentioned on my “about” page, I do work for Intel in my day job, so I know from experience how good the Phi can be if you only have the right software. The first version of the miner was for regular CPUs, but I always wanted to figure out how good the Phi would be for mining, and eventually just sat down (when I had a lot of spare time during my sabbatical) and did it.

      As to “showing h/s on monero”, I’m not sure what you mean. Right now each 7220 phi card makes about 2800-2850 h/s for _any_ cryptonight algorithm, which includes monero (as well as sumo, electroneum, bytecoin, and a few others that I don’t recall right now). So when I say “24kH/s” for the above setup, that’s what you get for monero – 24,000 h/s.


      1. Thanks for the reply!

        And man, you just opened a new door for most miners with your data!

        Definitely going to check out lukMiner!


  2. According to Intel, they declined to produce the Xeon Phi 7220P, 7220A and 7240P for the wider market, and wound down production of the PCIe cards altogether.

    It remains only possible to order the CPUs Xeon Phi 7210, 7230, 7250 and 7290 for the few single-socket LGA3647 motherboards that support them (there are very few on the market – they can be counted on one hand).

    That’s why RX Vega looks preferable, even with its inflated prices and a shortage in stores.


      1. Actually, that _is_ a pretty good price – I may well have spent more on the parts when I built my own development box off ebay components. That said, it’s still only _one_ box – the asrock systems cost 1.5x as much, but have _four_ nodes. Still; I’d probably have gotten one if I had seen this price a few weeks back…


      2. Only available as a system with HDD and RAM. That suited me anyhow because I’m not using it for mining.


      3. Yes, it’s only _one_ box but you could have thrown a couple of 7220 cards in there as well.


    1. No. You need something more modern with a C602 or C612 chipset and bios support for large addresses. Think T7610, R7910. If the manufacturer’s docs don’t _specifically_ say they support a Xeon Phi coprocessor, they generally won’t.


      1. Well … that may not be “entirely” correct. I’ve definitely seen systems that did run them even though the manufacturers didn’t even bother to mention them (since they’re not a supported product, nobody will mention it, right?). As an (admittedly extreme) example: I even recently put two cards into an old pre-Sandy Bridge Xeon system, and it works just fine … even though the Knights cards didn’t even exist when that system was manufactured. It’s more of a hit-and-miss situation.


      2. There’s another good reason you can’t use a Xeon Phi in the Dell 210 II: it’s only a 1U rack. 🙂


      3. _That_ doesn’t mean anything – one of my 7220 based machines is 1U, too. That particular machine has risers with which the cards are then flat on their side, which works just fine in a 1U system. Had the same on some x100 based system, too.


      4. Oh; yeah, fully agree. Plenty of machines that do _not_ have enough space – just wanted to make sure that nobody mis-understood your post as saying that it could _never_ work in a 1U 🙂


    1. I do not know for sure. For 7210 and 7220 I have hard numbers, both from myself and users – and at least for those two I actually see the final performance to be almost exactly what those respective processors’ “core count x frequency” would indicate. I also (finally) just received my first 7250 system, but am still struggling with getting exactly the performance out that I’d expect – to be exact, I’d expect it to do something between 2800 and 3000H/s, but am currently seeing only 2350, which is a solid 20% less than its higher clock and core count would indicate.

      For a 7230 I don’t have any numbers at all, yet. Yes, from the numbers alone I’d _expect_ it to do slightly better than a 7210 (say, around 2800? 2850?), but again, I’ve never tested it, so I don’t know for sure …

      If you ever do get access to one, I’d be very interested in hearing actual hash rate – please share!


  3. Hi,

    I have a working mining setup for the 7220P co-processor using Windows 10, but there are a few things I couldn’t quite figure out. I can start lukminer and start mining; however, I see a few warnings. I’d appreciate it if you could help me with those, as well as with one or two other things:
    1) I receive a warning on lukminer startup saying I need to set the memory mode to “cache”. Is it a generic warning, intended for regular (bootable) Phi systems, or is it applicable to Phi coprocessors as well? If the latter, how can I set “cache” mode?
    2) In the same place it also says “MAKE SURE TO HAVE /proc/sys/vm/nr_hugepages set to 4000 !!!”, but on the system I use for mining it is set to 10000. Does that make a difference? Or should I just use 10000?
    3) I receive an error on startup, “argument ‘–url’ is deprecated, please use ‘–host'” – but when I replace –url with –host I start getting connection errors. Do you have a more recent example config that I can use as a reference?

    Well, I think I asked a bit much, but whenever – or if ever – you have time to answer, I’d appreciate it.



    1. Hey,
      So you’re one of the few lucky ones that got one of those cards? 😉

      Re 1) You won’t have to change to cache mode, I think it’s on by default (I didn’t have to set it on mine).

      Re 2) 4000 should be enough – I initially used 10k because on ubuntu it doesn’t matter. For centos it does, which is why I changed the output to that. But on the card, too, it shouldn’t matter – if it hadn’t succeeded with the large-page allocations it would have error-exited, so I’m pretty sure you’re good.

      3) Sigh; I really should have fixed that script; you’re the second one to report this today :-/. Note it’s actually only a _warning_, not an error; I changed the miner from the misleading ‘url’ to the more apt ‘host’, and am printing a warning for those users that still use the old format …. but then didn’t even change my own scripts. That said, it’s only a warning, it’ll still accept the old format.

      Do you get the 2800H/s?
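(For reference on the hugepage point: checking the current reservation, and raising it as root if needed, is a one-liner each – 4000 being the value the miner asks for:)

```shell
# Check the current hugepage reservation; the miner asks for 4000.
want=4000
cur=$(cat /proc/sys/vm/nr_hugepages 2>/dev/null || echo 0)
if [ "$cur" -ge "$want" ]; then
  echo "nr_hugepages=$cur - fine (>= $want)"
else
  echo "nr_hugepages=$cur - raise it (as root) with: sysctl -w vm.nr_hugepages=$want"
fi
```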


      1. Hi,

        Thanks for your quick response. In this case I don’t need to do anything else, other than figuring out my networking setup, which requires me to disable internet sharing and re-enable it on every host reboot before the co-processor can access the internet again. Also, the MPSS service requires full CPU utilization for some odd reason. For the time being I use an affinity setting to pin it to one core and “low” process priority to prevent it hogging all the CPU power. As for the hash rate, from time to time I see 2800H/s, but it usually is around 2750H/s. Your share rate goes above 4.0% from time to time though 🙂

        Yep, indeed I was lucky. In fact I probably bought 2 of them from that eBay seller just before you, as I was the second one to buy (the guy before me bought 2 of them as well, which pushed me to act rather quickly). I could have bought more, but didn’t want to risk too much on something I had absolutely no knowledge of. Naively I thought I could buy more once I figured out how to properly run & cool them, and of course it didn’t take too long for those to run out (especially given that friggin’ good price). It was only yesterday that I was able to cool them properly, with the same blower fans you suggested. I wish I had tried them first – I used some other DIY solutions that didn’t work. Then I found the same eBay seller and ordered the same blower fan from that listing. This evening I just figured out how to control the speed by connecting the RPM & PWM wires to the motherboard while powering the fan separately & directly from a 12V rail, as it could otherwise damage the motherboard if I were to connect it directly. I think motherboards can supply 1A max.

        Anyways, I think I wrote more than you are interested, but thanks again for your prompt response.




  4. Lukas, the reason you could only get 8 cards working in that rack is because mpss 4.4.0 only supports that many. See the Readme file, section 2.


    1. LOL. That does indeed explain it 😉
      Too sad, though – it’s already a pretty crazy hashrate for a single machine, but another 25% more would have been even nicer :-/


  5. Lukas, I’ve now downloaded the latest version of your miner and am a bit perturbed to find six different files for Monero mining with no ReadMe file to explain the differences between them.

    While I wait for the SuperMicro 7290 system to arrive I’d like to try it out, but the only machine I have that currently contains a Xeon Phi is running Windows 7. Is there any version that can be loaded onto a 3120A and run natively without requiring a Linux host?

    BTW I’m totally jelly about those 7220 cards. I probably have more Xeon Phis here but they’re all X100s.

    Oh, one last question. Are you building the software with ICC, GNU C, Clang or something else?


    1. Hey,
      The 7290 systems are not PCI based systems, so “testing it out” with an older 3120A might not make too much sense.
      If you do want to use that 3120, then yes, the “luk-xmr-knc-native” binary is a “native mode” binary that will probably also run under windows: if you have the MPSS stack installed (and properly configured) you should be able to putty-copy it to the card, then log in on the card, and run it there natively.

      It’s a lot of pain for the 550H/s, though; the 7290 system should be so much easier to deal with.

      I can also see if I can make a lukstick for machines with x100 Phis. I already did one for those with the 7220 cards, so it shouldn’t be too hard … but it needs to be a separate one, because the two MPSS versions for x100 and x200 are not compatible :-/


      1. Cool, I’ll give it a spin and report back the results. So far all I’ve done is run the Linpack test so it will be nice to see it do something useful.


  6. Update: Seems to be working but I can’t ping the host from the Xeon Phi so will need to do some jiggerypokery with the network settings.


  7. Hi guys! I have this device, and with it I built a rig on an Asrock X99. I made a flash stick with LukStick-0.9.2-phi.iso, the preconfigured Ubuntu loaded, the miner started, and I saw this in the log: my blue card isn’t found? And nicehash showed the same. Can you help me with this trouble?


    1. Wow – is that a 7220*A*? If so, it’s the first time I’ve ever even seen a _picture_ of one – those are about as rare as two-horned unicorns with checkered tails! (It should work, though, so not to worry.)

      As to your “error” message: the problem you’re dealing with is that you’re using the wrong binary/lukstick for this card. If you follow the explanations on the blog, or the “pick your binary” section of the readme, then you’ll see that the “-phi” binary is for *bootable* Phis that live as main CPUs right on the motherboard, _not_ for PCI cards.

      What you are having (or at least, what it looks like) is a phi _pci card_, which needs to be used through the MPSS “driver” (just like you’d have to use a driver for a GPU); either using the “mpss-knc” driver if it’s a x100 phi card, or the “mpss-knl” driver if it’s a x200 phi card.

      If it truly is a 7220A, then what I’d suggest is to start with the “mpss-knl” *lukstick*, which has the mpss driver pre-installed. *please* let me know how this goes; I’d absolutely *love* to see one of those cards running the miner!


    1. Hey, Karl,
      Interesting; it’s the right lukStick since it tries to start the MPSS miner; but apparently it dies in the driver initialization (ie, it’s not the miner that dies, but the driver!?).

      Can you send me an email with the output from “lspci”, “micctrl -s”, “dmesg”, and “micinfo”? And two questions: Are you sure it’s a 7220A? And did you already enable “above 4GB decoding” in the bios?
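(If it helps, something along these lines collects all four outputs into one file to attach – assuming the MPSS tools are on the PATH; any that are missing just leave their error note in the file:)

```shell
# Gather the four diagnostics into one file; missing tools simply leave
# their error message in the file instead of aborting the script.
{
  echo "== lspci ==";      lspci 2>&1 | grep -i 'co-processor'
  echo "== micctrl -s =="; micctrl -s 2>&1
  echo "== micinfo ==";    micinfo 2>&1
  echo "== dmesg ==";      dmesg 2>&1 | grep -i mic | tail -n 50
} > phi-diagnostics.txt 2>&1
echo "wrote phi-diagnostics.txt"
```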


  8. Just got an SC7220P in the mail – replace the Nvidia 1070, or wait for the 1U case that will hold this beast? Still thinking Hyper-V to Lukstick with PCI direct attachment.


    PS I am still waiting on the active cooler, but it seems I got a 7100-style cooler instead of an x200 one – can you say 120mm cardboard shroud?


    1. Hey, Brad,
      Yes, I ran into the same; I got a custom 7100 cooler that didn’t fit the 7200 card. What I did was simply take off the plastic shroud part and put the blower directly at the end of the card. Not perfect, but it worked well enough.

      Of course, the _best_ solution would be to finally get my hands on a 7220_A_ card that is self-cooled…. fingers crossed. :-/.

      Or, as you suggest, use a server with sufficient airflow; I used some old surplus pre-haswell supermicro systems that are pretty cheap right now, and seem to work just fine.


  9. I looked everywhere for the SC7220A as I didn’t want to get the P, but since I only saw 2 of them on a shady non-https page among about 40 different websites, I gave up and got the only SC7220P I found (ebay). I was planning on using your stick, with VT-d pass-through for the PCIe card, on Hyper-V – do you think this would actually work, or should I try to find the working MPSS 4.0 for Windows, or use a native RedHat install?

    Thanks again!


  10. Just bought one from ebay (from you, I think). Waiting for it to arrive so I can play 🙂 I’m going to be running a Win10 machine … I’ll let you know the outcome.


  11. For the x100s I don’t know for sure whether they run on a Z170, though I’d assume they probably do. Whether it makes sense depends on your power prices, though – at around ~500 H/s you’d only make about a hundred-and-some bucks a year – maybe 150 if prices come up again a bit – but it’ll burn 250W in power, which at 10 cents a kWh will cost you more than what you’d mine :-(.

