Phi-based “NeonMiner” v3 system now uses lukMiner (and currently has stock!)

Bottom line: “NeonMiner.com” now has lukMiner-powered four-node Xeon Phi systems, in stock, in the US…

Background

My latest article on the Xeon Phi PCI cards has stirred up quite a bit of feedback; however, though these cards are undoubtedly more interesting than a rack-mount server from a “rigging”/“modding” perspective, I can’t say often enough that they are discontinued products, do not come with any warranty, and may or may not work in whatever system one plugs them into (I’ll write some more on that soon – but yes, it’s a 50-50 thing right now).

As such, I’d like to point out that at least for those interested in more “production” mining (in the sense of “buy it, plug it in, and let it make some money”) I would still suggest not letting those fancy cards distract you from the much more ready-to-go Xeon Phi four-node servers I had written about before.

In terms of those servers, the – by a wide margin – biggest concerns people have brought up over the last few months are two-fold: first, the worry about buying machines from the other side of the planet (with questions about shipping cost and time, customs fees, etc); and second, the not inconsiderable delay in actually getting the machines … because they are built to order, in high demand, often sold out, and, in short, typically come with delivery times of up to a few months.

The “NeonMiner”

For those that need something quicker: have a look at the latest version of the “NeonMiner” (www.neonminer.com). The guys there now offer a Phi-based system (running lukMiner), and as far as I understand they have machines directly available.

A bit of background: I first stumbled over the NeonMiner name by chance, when I saw one of those systems listed for sale on Amazon several months back – but at a pretty steep price (before the Phi “specials” kicked in), and without much mention of what software it would come with. At about the same time, the guys behind the system apparently heard about lukMiner, and contacted me through the blog. We’ve since loosely worked together, in the sense that I helped hook them up with the right contacts to get the pricing specials used by the other Phi suppliers (meaning those NeonMiner systems are now way cheaper!), and at the same time suggested that lukMiner be preinstalled to make the systems easier to use (which they immediately did). As a result, the latest version of the NeonMiner now comes with lukMiner pre-installed, ready to go.

Though we did exchange quite a few emails, I myself am not part of the NeonMiner team, nor do I even have a system at hand to “take apart” and look at more closely. As such, I do not know exactly what hardware components the system is comprised of, nor how the software setup works, etcpp; however, judging from the pictures and numbers on the website, the system appears to be a 4-node 2U “Adams Pass” system similar to what you’d get if you ordered from Intel, Exxact, or Colfax; probably with four Phi 7210s or 7250s (I honestly do not know which, yet, though I’d expect the latter). This 4×7210/7250 assumption would also be consistent with the hash rate of about 11+kH/s claimed on their web site, so that number is perfectly plausible to me (and though the web site doesn’t say so, since it runs lukMiner it should also run cryptonight heavy, light, etc).

What other components (memory, disks, …?) the system may or may not come with I cannot say, nor do I know exactly what they’d cost. However, one thing the web site is very clear on (and which they’ve confirmed to me when I double-checked!) is that they have systems “in hand” right now, in the US (and apparently even with a 6-month warranty), which should address at least some of the issues I’ve mentioned above:

[Screenshot from neonminer.com: systems listed as in stock]

As such, if anybody here wants to get their hands on some 11+kH/s of Cryptonight/V7 hashing power in a ready-to-rack and ready-to-run system, feel free to contact them. If or when you do, make sure you mention that you found them through this blog, which – as I was told – should give you a discount over the official price. Full disclosure: If you do mention the blog I, too, will get a (small) referral fee (though I do hereby promise that by the end of the year I will donate any such referral fee to a charity!).

If any of you get one of those systems, please let me (us?) know; as said above, I haven’t gotten my hands on any of them yet (sure, I could buy one, but I have no spare power to run it right now, anyway :-/), so if anybody is willing to share their experiences – what hardware exactly is included, shipping time and cost, how to configure/update the miner, measured performance and wall-draw wattage, etc – please share!

With that,

Happy mining!


SC7220A (active) cards: They exist! They do well!! And I got some!!!

I’ll write a bit more on this topic soon, but wanted to at least drop a brief note before I find the time for a real article: I’m talking about the “active-cooled” Xeon Phi SC7220A PCI cards – they really do exist! They do pretty well (around 3000H/s on cryptonight heavy/sumo). And yes, I finally got my hands on some!

In past articles I’ve called them “rare”, “rumored”, and in one instance even “as elusive as a two-horned unicorn” when somebody mentioned he might have one (turned out he didn’t, it was something else). I had heard that there are three models of the Phi cards – the passive 7220Ps and 7240Ps, and the active 7220As – but so far had never gotten my hands on anything other than the 7220Ps. Not the 40s, and certainly not the active ones. In fact, until somebody listed an “A” on ebay the day before yesterday I had never even seen an image of one. Yet finally, I got my hands on some. Wow. That took a while.

Why is this such a big deal? Until today, all I ever had for Phis was either the 7220Ps, or the bootable 7210/7250s (Asrock, Colfax, etc). The latter ones are great if you’re building a farm or want to co-locate in a data center – I have some, they’re great, and I recommend them to everybody that goes that route. But they’re somewhat less interesting for the “home miner” that is used to building his own mining rig “GPU style”, with a cheap motherboard, CPU, PSU, minimal RAM, and just plugging in a PCI card.

The 7220Ps I got a while back on ebay are “kind of” going in this direction, but as described in my two past posts on this topic (e.g., the original one on building a desktop mining rig and last week-end’s one on putting 7220Ps in surplus servers) those passive cards need a lot of really strong airflow to stay cool (well, at least once you start the miner :-/). And that, in regular desktops or mining rigs, is simply not available – and once they overheat, they’ll turn off, you’ll see “DMA error”s in the dmesg, and will have to power-cycle.

Mounting additional high-powered fans as I had done in my desktop rig is a solution for a “proof of concept”, but only borderline practical. With an “A” (i.e., “active”) card, though, the card itself has a fan – just like a GPU – so it cools itself … meaning you can actually mount it in a desktop, workstation, or free-standing rig. I’ve now had my first machine running for about 48 hours, and it’s still going strong (2 cards in one box, together about 6500H/s on sumo). Of course you still need the right mobo for it to work in, but that’s something for another post – I now have two machines with A’s working in them, and at least two other users seem to have that, too, so “we’ll figure it out”. Luckily this experimenting will be relatively easy, because the mpss-knl lukStick I did for the servers seems to work just as well with desktops… and that saves a ton of time.

Anyway – for now I’ve got a stack of those two-horned unicorns (and a few 7240Ps, too, for good measure), and will start experimenting with them. I’ve already ordered a range of different mobos and CPUs to test with, and will certainly keep everybody updated on that through this blog. Also, I won’t be able to power all those cards myself, so I will soon start to sell some off on ebay (to get my investment back – I did have to pay for those cards :-/). If you see some appear in the ‘lukMiner’ ebay account – then yes, that’s me (and of course, you can also drop me an email if you want to save the ebay fees).

That’s it for now – I will soon update you guys on which rigs I get them to work in and which ones I don’t. ASAP, I promise (weekend, probably – I do have a day job).

Until then – happy mining!

Building a (low-cost) Phi 7220 Mining Rig

It’s been a while since my last post – been busy in my day job … – but this post will hopefully make up for it: Unlike the last few posts that mostly covered updates to the miner software, this one tackles what a lot of readers have asked me about: how to actually build a mining rig with the Xeon Phi 7220 (x200/KNL) PCI cards – in a way that does not require a crazy-expensive server costing more than the cards themselves, yet that (seems to) work reliably with those particular cards. In this post, I’ll describe exactly that; and in fact, the resulting “rig” is actually a plain old rackable server that you can put into any co-lo, data center, etc. What more could one ask for? 😉

Background

When I first started writing this blog, the – by far! – biggest community response came when I first wrote about building an 8-card “rig” with the Xeon Phi 7220 PCI cards that could do close to 24kH/s for cryptonight, which at that point in time was – I think – a single-machine speed record (for those that haven’t read it yet, the original article is still right here, behind this link).

That article created quite a stir, and triggered lots of questions, blog comments, emails, etc. However, as “nice” as that build was back then, it had three major issues:

  1. those cards are still pretty thin on the ground; now that people can google an actual use for them a few more are appearing, but they’re still rare.
  2. those cards are discontinued products, so there’s no documentation, support, etc. In particular, it’s still not 100% clear which boards, processors, chipsets, etc they will actually work with – because quite frankly, there’s quite a few that won’t.
  3. the one machine that I had used in my build cost a small fortune; in fact, I paid as much for that box (>$6k) as I paid for the cards that went into it. At that price tag the impact on profitability is quite big: the server itself hardly generates any revenue, so the “revenue per dollar” for the full system is only half that of each card.

Those shortcomings are exactly why I had always advocated going with the “ready to rack” systems from Asrock, Exxact, or Colfax for Phi-based production mining. However, production is one thing, and fun another, and ever since that original article there have always been readers asking along the lines of “how do I build a mining rig with the 7220 PCI cards?” … and while I did experiment with this question on and off – and found quite a few different combinations that work just fine – I never really found the time to properly document my findings.

Anyway – earlier this week I sold a few 7220 cards I had recently gotten my hands on, and while doing so realized that whoever buys these cards will eventually have to figure out how to get them running; and since I now have a reasonably good idea of how to get those cards to work in a not-crazy-expensive way, I decided it’s finally time to sit down and share my findings. And to do this in the most effective way, I decided to write it up as a complete “walk-through” for building a rig with the 7220 cards.

The Rig – Ingredients

Part 1: Four 7220 cards: To start out with, you do of course need some x200 PCI cards. For this particular build I pulled four 7220 cards out of my 8-card monster machine (I should’ve written this article before I sold the other four cards – oh well).

Part 2: An old, “surplus” Xeon GPU server: For this particular build I’m going to use an old SGI/Supermicro “SYS-2027GR-TRF” system I bought from “Mr Rackables” (now “UnixSurplus”) on ebay. These servers are quite old, and sell pretty cheaply – I paid a clean $900, including server chassis, PSUs, memory, CPUs, fans, everything ready to go (except a harddisk, I think). I have now bought several systems from UnixSurplus; I can only recommend them.

Important: Please note I’m intentionally using pretty old systems here: not only are they much cheaper than newer systems, they also seem more reliable: I had a few newer systems (Xeon v3 and Xeon v4 based) that did not work, but so far all the Xeon v1 and Xeon v2 systems (both of which use a different chipset than the v3 and v4 generations!) seem to work flawlessly. So, the older stuff (originally designed for the x100 Phis) it is going to be for me! For those that are curious, the particular system I got has two Xeon e5-2530s and 16 GB of RAM, and is rated for four “Phi or Kepler” GPUs (LOL). As you can see in the pics, the system actually has PCI slots and space for a total of six dual-width PCI cards, but at least for this build I’m sticking with the recommended four cards (though of course, I will at some point try numbers five and six too 🙂 ).

Here are some pics of the system as I took it out of the box:

Don’t go by appearances: The machines are surplus, and the chassis are sometimes a bit beat up – this one has some pretty beat-in handles on the PSUs (what did the carriers do with it? Play baseball!?); and another one I got had one of the front handles mostly broken off – but inside, they’re actually pretty well cared for, and so far all the ones I got worked out of the box, no prob. For that price (bought as parts, the CPUs and memory alone would cost that much :-/) I can live with a few scratches.

Of course, it probably doesn’t have to be exactly this model and system – in fact I got a few other ones from UnixSurplus, too – one with a first-gen Xeon, one with only two GPU slots, etc – and so far they all seem to work, too. The exact steps of (dis-)assembly may differ a bit from the ones I use below, but overall it should be very similar.

Step 1: Open, and get at the PCI slots

Now, the first thing we should do is boot up the machine, check the BIOS, etcpp. But since patience isn’t exactly my strongest suit we’ll of course just skip that for now, and see what’s inside. There are two screws on the side (marked with little triangles) that you have to take out, then you can slide off the top cover backwards (tip: get a box for the screws, you’ll need it :-/):

What you’ll see when it’s open is basically the CPUs and RAM in the middle, surrounded by four big metal blocks. Of those, the one at the right back (right front in the image) holds the dual power supplies; the other three are compartments for two dual-width GPUs each (so yes, in total it should fit more than four cards). Each of those three compartments contains a riser that plugs into the motherboard.

Now, take your beloved screwdriver, unscrew those three compartments, and take them out. If you accidentally unscrew the wrong screws in those compartments, don’t worry – the only other screws in there are for some weird metal brackets that have no practical use whatsoever, so if you accidentally take them off you’ve only saved time – because if you haven’t, you should do it anyway. Here are some pics with all three compartments taken out:

We’ll eventually only need the two front compartments, but I’ve taken out all three – it can’t hurt to take out the third one, too, and it might even help airflow (and maybe I’ll eventually try mounting something there, too).

Step 2: Boot, and properly configure BIOS

OK, now that the first fun is over we have to boot the machine and check the BIOS – once we put the cards in, the machine might not boot any more without 4GB support enabled; and since it’s not exactly fun having to take the cards out again just to get into the BIOS, I’d strongly suggest doing this before putting in the cards.

Once booted, the main thing to check is that “4GB support” is enabled in the BIOS. In my case it already was, but in another machine I set up a few days ago it wasn’t, so better double-check. For good measure I also cranked the “PCI latency cycles” up to the max value – we won’t be bound by PCI latency anyway, and since the main errors I’ve seen in other systems were DMA timeouts I figured this can’t hurt – it probably won’t do any good, either, but hey, call me superstitious…. Here are a few shots of my BIOS screens:

Of course, you should never run a chassis with the lid open, or with only one of the PSUs attached, and of course only when properly grounded, and with anti-static mat and wrist-band, and … oh well.

Step 3: Mount the cards

OK, now that the BIOS should be able to detect them, let’s mount the cards. Doing that in turn requires four steps: taking off the brackets, putting the cards into the risers, connecting power, and plugging the riser compartments back in.

Step 3.1: Taking off the brackets

Before we can put the cards into the riser compartments, we first have to take off the mounting brackets they come with: These are useful for mounting into a workstation or a full-height server like my 8-way toy … but when mounted sideways through risers, the brackets will only be in the way. Luckily, they’re easy to remove: Just remove the four little screws – two on top, two on bottom – and the bracket slides right out:

Remember that box for screws I mentioned? Make sure to keep those brackets and screws – if you ever want to sell the cards off you will “probably” want to reattach them, and they get lost very easily …

Step 3.2: Choosing the compartments to use, and adding power cables

As said above, there are three compartments for GPUs on this board. Of those, the ones you should use are the two at the front – left side and right side – as those are the ones that slot into PCI slots 1, 2, 3, and 4, all of which are full x16 slots. The compartment at the back instead goes into slots 5 and 6, one of which is an x8 slot – which may work, or may not; I haven’t tried yet. Just use the two front ones; they’re also closer to the power connectors and fans, both of which are good to have.

If you take a closer look at those compartments, you’ll see that there are some PCIe power connectors right next to them on the motherboard; some of those will have PCIe 8-pin power cables already attached, others will be empty:

Though how many cables you get may depend on the actual box you receive, chances are there won’t be remotely enough – so get a few PCIe 8-pin power cables (I use ones that have two 8-pin connectors each, just in case). I should probably have listed those as additional “ingredients” above, but since they’re very useful for any GPU or Phi mining rig I assume you already have a box of them lying around – if not, get some!

Now, make sure you have a total of two 6-pin and two 8-pin connectors (or simply four 8-pins) on each side. Mine already came with two 8-pins on each side, so all I needed was one such 2x 8-pin cable per side. Luckily these systems have plenty of connectors on the motherboard.

Step 3.3: Insert cards, insert compartments, and power them up

Now it’s time to put it all together: Put the cards into the front two compartments (two cards each, if you have them), and get them powered up and plugged into the motherboard.

Note this is in fact the trickiest part of the entire operation: For the front right compartment it’s fine to first attach the cables, and then slot the compartment’s riser back into the motherboard. For the front left one, the riser is actually so long that you first have to get it back onto the board, and only then connect the power cables, else some other cables will be in the way. As you can see in the last two pics above I had to learn this the hard way, too – I first connected all the power, then couldn’t fit the compartment in any more… oh well, if I never do worse than that I’ll be a happy man. Either way, save yourself the time: first plug the compartment (halfway) back in, then connect the PCIe power cables from the front, and you’re done.

Plug both compartments back in, screw them back on (uhhh… might have forgotten that), and everything looks rather tidy again:


Step 4: “Adjusting” the fans

Though the server now looks pretty solid already, chances are the airflow won’t be enough to cool the cards once the miner kicks in. Yes, there’s a turbofan in front of each of the compartments, but by default they won’t spin up high enough. For the left compartment (back in this image) that seems OK, because it has another fan at its back, and good airflow. For the right one (front in that image) it’s not, as there’s only one fan, and the back of the compartment is partly obscured by the PSUs and some cables.

So, to fix that we’ll have to “convince” the fans to put in a little bit of extra work. There’s “probably” some way to do that through IPMI, the BIOS, or something, but so far I haven’t figured out how, so let’s use the completely failsafe method we’ve used in some previous builds, and just cut the two control wires of the four-wire fan connectors. Without those control wires (but with the other two power wires still connected) the fan will go full tilt no matter what load or temperature, which is exactly what we want.
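(For what it’s worth, if you’d rather try the software route first: the following ipmitool calls are reported to work on many Supermicro boards of this generation. Reading the sensors is harmless; the raw fan-mode command is an unverified assumption on this exact box, so treat this as a sketch, not a recipe:)

ipmitool sensor | grep -i fan      # read the current fan RPMs
ipmitool raw 0x30 0x45 0x01 0x01   # Supermicro-specific: force fan mode to "full" (unverified on this box)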

In the following images you’ll see how the fan is attached to its connector with four wires: black, red, yellow, and blue. If you don’t care about “modifying” the chassis itself, the easiest approach is to simply cut the yellow and blue wires; but since I might eventually want to sell this system on, I didn’t want to cut any existing cables (even in a surplus machine :-/), so I first inserted a 4-wire fan extension cable (a few cents on NewEgg, if you buy them in bulk), and cut that instead:

If you turn that machine on, you’ll hear the effect very clearly, right away :-).

Step 5: Run it …

OK, that should be it: Take a lukStick (the mpss-knl variant), burn it onto a 16 GB USB stick (if you haven’t already done so, start this before even opening the box), plug it in, and reboot: Upon booting, the four cards show up, first in micctrl, later (once mpss is started) also in micinfo.

In my case the first boot still failed, with the MPSS service not properly booting the cards, and micctrl eventually showing “error” – probably because the lukStick image had been built on a machine with only two cards, so it first had to re-initialize itself for four. The second time around everything worked as expected. And the third time. And the fourth. And every time I’ve retried since then. It has now been running round the clock for almost 24 hours, no problems whatsoever (and similar builds have run for way longer, without any issues, either):
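For a quick sanity check after boot you can use the MPSS tools that ship on the stick (exact output varies by MPSS version):

micctrl -s       # card states; all four mics should eventually report "online"
micinfo          # per-card details, once mpss is fully started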

It’s also particularly interesting to have a look at the temperature of those cards (“micinfo | grep Temp”): before the miner kicks in the cards run at a nice 40-ish degrees Celsius; but once the miner starts, that goes up quite quickly. Without our “fan modding” it quickly goes beyond 80, 90 degrees, and the cards shut down; with the modded fans it stays around 80, which is just fine.
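If you want to keep an eye on the temperatures continuously, plain old watch does the trick:

watch -n 10 'micinfo | grep Temp'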

Summary

OK, that’s it for now; I hope you enjoy reading this as much as I enjoyed doing it! In total the entire build cost me about an hour of work, including taking the photos, arranging lights, cleaning my desk, etc; writing it up actually took (way) longer than doing it.

Of course, this won’t help with the question of “how to even get those cards”, but that may soon be a topic for another article – we’ll see. At least for those lucky ones that got some, this article should be a good blueprint for getting them to work. The machine itself cost me $900, which amortized over 4 cards is pretty good, considering that power supplies, processors, etc, are all included… either way, it’s way better than the $6+k professional server which – even after adjusting for the fact that it can take twice as many cards – is still three times as expensive per card. And since mining is all about efficiency, I’m pretty happy with this setup. All else you need is a couple of cables for a couple of bucks, so this build is not only cheap, but also way simpler to arrange than trying for a K1SPE workstation build, a workstation-with-7220s build, etc. These old SYS-2027s seem to be in good supply, too, so finding them will be way simpler than finding the cards. The final thing then does about 11.5-ish kH/s – yes, the CPUs are pretty old, but since they don’t have to do much they’re perfectly adequate.

Of course, this is not the only way to build such a rig: I tried several similar old surplus machines, and pretty much anything that was originally designed for an x100 KNC Phi seems to work, too. I’ve seen issues with newer Xeon v3 and Xeon v4 boards (I still can’t pin down which ones do and which ones don’t work!?), but those old ones seem to work consistently. Use at your own risk, of course – this is not legal advice, and I’m not responsible for damage, financial loss, mis-investments, or whatever – but at least to me this looks like something that works. Best of all, the final product is a ready-to-use rackable server; you can rack them up, move them to colocation, etcpp – all you have to do is slide it in, power it up, and done (in fact, the system even came with free mounting rails 🙂 ).

With that: Happy mining!


Just Added Preliminary Support for “Cryptonight Heavy” (Sumokoin v1)

While getting my morning coffee earlier today (long night yesterday – very long night …) I got an email from one of my users, asking whether I’d ever planned to add support for mining Sumokoin, and in my ignorance replied right away with “sure, I’ve supported that for a long while” … hm. In retrospect, that answer probably didn’t sound all too intelligent, for the simple reason that I had completely missed the news that Sumokoin (or “Sumo” for short) had recently done an algorithm fork similar to Monero’s, with yet another algorithm (called “cryptonight heavy”) … until he told me, I hadn’t even known it existed. Well, guess I didn’t earn the biggest mark of excellence on this one…

Anyway – after I finally realized that I had simply been a rather ignorant fool in my first reply, I had a look at this new “cryptonight heavy” thing, and have to say, it’s actually quite an interesting modification: Unlike the rather cosmetic “v7” modifications that Monero did, this one makes some major changes. In particular, “heavy” doubles the scratchpad size from 2MB to 4MB per thread, which means that it’s now “heavy” indeed in its cache impact (e.g., a cache that used to hold eight scratchpads now only holds four, so you can only run half as many threads). Still, the effort to code this up didn’t look all too awful, so I looked up the profit estimator on whattomine to see if it would even be worth it … and it looks like it actually is: with so few miners out there, and with such a heavy cost on regular CPUs, the profitability of this “new Sumo” is currently way higher than that of other coins (long may it last…). So with that, I finally did sit down and implemented the changes – which, after all the recent code re-orgs, turned out to be relatively little work.

Long story short: I just uploaded a new version (v0.10.2) to the usual place (http://www.lukminer.net/releases) that has preliminary/experimental support for this “xnheavy” coin type. “Preliminary” in this case means that there are still some loose ends – in particular, I currently support this coin type only in the cpu, phi, and mpss-knl miners; knc-native, mpss-knc, and ocl will, for now, simply refuse to run if you tell them to mine heavy (of course they’ll still run the other algorithms). The cpu, phi, and mpss-knl variants I’ve now tested for several hours on pool.sumokoin.com, and they seem to work out of the box. Performance on CPUs is way lower than on old cryptonight, because with the larger scratchpad you can only use half as many threads. On the KNLs (7210, 7220, etc) performance is way better – still about 20% slower than old cryptonight, but that’s quite tolerable. In fact, with just three (admittedly rather hefty) machines I currently seem to be making about 5% of the pool hash rate on pool.sumokoin.com, so it can’t be all that bad ;-). To give you an idea: I currently get about 2500H/s on a 7210, and about 18kH/s on the 8×7220 “monster” machine I described in that past article.

To run: download the latest 0.10.2 release from http://www.lukminer.net/releases, unpack, and run with the “-a xnheavy” or “-a sumo” flag, for example like this:

./luk-phi -a sumo --host pool.sumokoin.com --port 4444 --user Sumoo6SgKXMD8NcBFzqB1QBzmRPiLeJxFPUmcy7tfM88br8y76G6EGTi8ireo3dy1VcSiK5sVKB4wbpcHtCu32RLBGMbZ1Nbfx3

(you should of course change the --user field, though I won’t complain if you don’t 🙂 ).

I think there are also some other coins that have adopted this algorithm (somebody asked earlier today), and if they do say they use “cryptonight heavy” then it should actually work – but I haven’t tested anything other than Sumo, so no guarantees. If you do find one and happen to try it out, feel free to share!

With that: Happy mining!

PS: Yes, of course I’ll add opencl and knc support ASAP…


New lukSticks (v0.10.1)

How-dee everybody. As some of you may already have seen, I “recently” (well – already several days ago by now) pushed some new “lukStick” images to the releases page. For those reading this for the first time: that is at http://www.lukminer.net/releases .

I actually wanted to write about this new release right after I uploaded it (and apologize to those I confused by not doing so, because the new sticks actually change a few things :-/), but then got side-tracked… apologies, too much to do :-/. Either way, in this post I’ll try to summarize the latest changes, all of which were driven by feedback from users.

Re-Cap: The lukStick

For those that haven’t read the previous article and/or discussions on the lukStick, here is a very brief re-cap of what it’s about (those that already know can feel free to jump to the next section!).

The core idea of the “lukStick” is to provide a single ISO image that is a fully installed Linux with the miner pre-installed, ready to go, and auto-started at boot time. This was initially driven by my own “experience” (say: annoyance?) at having to re-install machine after machine after machine, doing the same things all over again, and losing tons of time because I always forgot one little detail, etc. Also, I realized that many of my readers might simply not have the experience of dealing with Linux at all; let alone with auto-starting the network, miner, etc; let alone with installing and starting MPSS stacks for KNC and KNL PCI cards; etcpp.

As such, I eventually decided to try to build a single bootable USB stick that had all of that installation work done (and it worked like a charm!), and to simply clone it every time I got a new machine. And in addition to making it easier to set up a new machine, this had the added benefit of allowing any “barebone” server (e.g., the Asrock machines) to be used without ever having to get or install a disk for it…

Be that as it may, that thing turned out to work pretty well, at least for myself, so I eventually decided to generate ISO images of those sticks and share them as well. Sharing with others required a few additional changes (users, passwords, the ability to change pool/user settings, etc), and not unexpectedly, a few rough edges have been reported since. This release is supposed to fix at least the most egregious of those.

The new lukSticks (v0.10.1)

The most obvious change in the new lukSticks is – who’d have guessed – that they update the miner to v0.10.1, which is the version with support for XMR v7. As such, this update to the sticks was way overdue even without the additional changes, because starting 6 days from now (i.e., after the v7 fork) you’ll actually need this new version to mine XMR.

However, the new stick isn’t just the old one with an updated miner; it contains several additional changes that were triggered by feedback from you, the users. In particular, on a high level:

  • There are now three versions of the lukStick, for “cpu and/or phi”, “knc with mpss”, and “knl with mpss”, respectively (see below).
  • All three sticks are now “homogenized” in the sense that they all use the same user, password, ssh key, directory structure, startup method, location of config and output/log files, etc.
  • User/password: All three lukSticks now have only a single user (root), with the same password (luk) for that root user. I.e., no more confusion about which user/password to use for which stick, how to sudo, etcpp.
  • Remote login: All lukSticks now allow remote ssh login, using either password or ssh key (I’ll upload the public/private ssh key soon). In particular, that should make it easier to “remote-control” many different servers via remote ssh calls.
  • Homogenized file naming and directory structure: On all three lukSticks the relevant ssh keys are now in /root/.ssh, the config file is in /mnt/fat/miner.cfg, etc. This again should make it easier to, for example, “bulk”-change the miner config on a large number of machines (see the sketch right after this list).
  • At least the “cpu-phi” and “mpss-knl” variants automatically print miner output to console tty1, so there’s no need to log in first just to see the miner status. (Note this feature is intentionally disabled on the mpss-knc version, since the MPSS “mic” kernel module throws a kernel panic when it’s enabled (ugh!).)
  • The partitions have been slightly shrunk, so even on USB sticks that have a few broken sectors (i.e., ones that in the past were slightly too small for the ISO image) you’ll no longer get broken partitions.
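To illustrate that “bulk” point, here’s a minimal sketch of pushing one config to several machines – the IP addresses are of course placeholders for your own boxes, and the shared ssh key (see below) makes this passwordless:

for ip in 192.168.1.101 192.168.1.102 192.168.1.103 ; do
  scp miner.cfg root@${ip}:/mnt/fat/miner.cfg   # overwrite the config on the FAT partition
  ssh root@${ip} reboot                         # reboot so the new settings apply
done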

With these high-level changes out of the way, a few more details below:

Which lukStick to Use?

There are now three flavors of the lukStick, for the three different configs that exist:

  • lukStick-phi-cpu : This one works for both regular CPUs and bootable Phis (it auto-detects whether it’s running on a Phi, and falls back to CPU if not). I.e., this is the one you want if you have one of the Asrock Rack, Exxact, Colfax, etc 2U4N servers; but you can also use it on any old CPU/Xeon server you may have lying around. Internally this stick builds on top of Ubuntu, which generally gives the highest performance.
  • lukStick-mpss-knc : This version is designed for systems with the x100 “KNC/Knights Corner” Xeon Phi PCI cards; those require an install of the MPSS 3.x stack, which in turn requires the right version of CentOS, which in turn … you get the point. This stick has all of that pre-installed, using CentOS 6.9 and MPSS 3.8 (I think), with everything auto-started and auto-configured (even the number of cards is detected automatically).
  • lukStick-mpss-knl : This version is for the select lucky ones that got some of the 7220 or 7240 PCI versions of the x200 “KNL/Knights Landing” cards. I.e., it’s not for the Asrock Rack “bootable” KNLs (use phi-cpu for those!), but if you happen to have one of the 7240s this version should do the trick. As with the mpss-knc version, this has the right flavor of Linux (CentOS 7.3), the right driver stack (MPSS 4.x), automatic card detection and configuration, etcpp.

Note that I do not install NVidia, ATI, or any other GPU drivers, so the luk-ocl miner will not work on these sticks.

“Making” a lukStick

To make a new lukStick all you have to do is download the right image (see previous section), unpack the ISO image, and burn it onto a 16GB (or larger) USB stick (in fact, a harddisk should work, too, though I haven’t tried that yet). For those that use Linux and are comfortable with the command line, this would work, for example, via

dd if=lukStick-<type>-<version>.iso of=/dev/sd<deviceName>

Note: Usually, you can find the “deviceName” to write to by inserting the new, blank USB stick into a USB port and then running “dmesg” – which should show something like “new USB device X-Y-Z …. /dev/sdXYZ”.
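As an illustration, a full session could look like this – device name, ISO file name, and the exact messages will differ on your system, so double-check before writing:

dmesg | tail
#   usb 2-1: new high-speed USB device number 5 using ehci-pci
#   sd 6:0:0:0: [sdc] Attached SCSI removable disk
dd if=lukStick-mpss-knl-0.10.1.iso of=/dev/sdc bs=4M && sync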

Warning: If you’re not familiar with Linux, the command line, dd, etc, you might want to use a graphical tool instead – otherwise you might simply overwrite your main harddisk. Be careful! I’m sure there are some Windows tools for this, too, but sorry, I can’t help with those – I’m not exactly an expert on Windows….

Configuring the lukStick to your pool/user/… settings

Once you’ve “burned” the ISO image to a USB stick or harddisk, you should be able to mount the stick on either a Linux or a Windows machine. Under Linux you should see two partitions (a ‘linux’ one with the Linux install, and a ‘fat’ one with the config files); under Windows you’ll only see a ca. 100MB FAT partition.

The “FAT” partition is what contains the config files. Unlike previous versions, this version of the stick no longer hosts the scripts themselves in this location (too easy to break accidentally, apparently), but instead contains a file “miner.cfg”. You can edit the respective host, port, user, etc values, and the next time you boot they should automatically apply. Make sure to properly unmount (“eject” under Windows) so your edits actually get saved!
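Under Linux, editing it could look like this – the device/partition name is a placeholder (check dmesg for yours, and for which of the two partitions is the FAT one):

mount /dev/sdc1 /mnt      # mount the stick's FAT partition
vi /mnt/miner.cfg         # set host, port, user, ... to your pool settings
umount /mnt               # unmount, so the edits actually get written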

Running the lukStick

Once burned, you should be able to put the lukStick into any machine, turn it on, and have it automatically run the miner upon boot (of course, this assumes that your BIOS is configured to actually boot from USB 🙂 ). I’ve seen some issues with UEFI vs non-UEFI systems (the sticks are “legacy” sticks, and some UEFI systems apparently want to boot only UEFI devices!@#!@), but “usually” it should just work. (In particular, all the Asrock Rack etc machines should work.)

Of course, to actually run the miner the machine needs an internet connection – and while the miner will automatically start the network devices, you still have to make sure there’s a physical network cable plugged into the right network port (on the Asrock Rack machines, for example, I usually use the right one of the two side-by-side ethernet ports – do not use the separate one next to those two; that one is only for the board’s IPMI, and the OS won’t even see it).

Monitoring Hash Rate / Miner Output

Upon startup it may take a while for all the services to start (in particular for the MPSS versions, which can take three or four minutes); but at least on the cpu-phi and mpss-knl versions you should at some point see some miner output on the text console (assuming you have a monitor connected, of course). Note this doesn’t work on the mpss-knc version (the mic module throws a kernel panic when writing to /dev/tty1!?!), but it does on the others.

In addition, all miner output is also written to /tmp/luk.out, so you can always log in (remotely, if required, see next section) and check that file as well. Also, to help debug any potential issues (i.e., if the miner doesn’t start up), the startup scripts write some debugging information (dmesg, cpu info, memory info, startup log, etc) to the FAT filesystem, from which you can inspect it (and/or send it to me if you need to).
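For example, once logged in you can follow the live miner output with plain tail:

tail -f /tmp/luk.out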

Logging in

At least for casual users, you usually shouldn’t have to log in at all – as described above you can configure the miner through the “miner.cfg” file on the FAT partition – but just in case you do want/need to: All three lukSticks now have a unified user (root) and password (luk). Of course, three ascii letters for a password isn’t exactly NSA standard, so if you do plan on putting a machine with this stick onto an externally accessible IP: make sure to change that root password!!!! (If you’re on a trusted network or behind a DHCP server and firewall, it shouldn’t matter.)
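Changing it is a single command once you’re logged in:

passwd    # prompts for (and sets) a new root password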

If you do want to log in, there are three ways: console, ssh via password, and ssh via ssh key. For console login, you’ll probably notice rather quickly that the miner also prints its output to console 1, and that logging in / working on that console becomes a bit “confusing” with all those outputs. Simple fix: Press ctrl-alt-2 (or ctrl-alt-f2?) to switch to another console (it may take 4 or 5 seconds to open), then you can log in on and use that one.

For ssh login, you can either use the user/password above, or use the public/private ECDSA key (I’ll upload that later today; until then you can pull it from /root/.ssh). All three versions have the same ssh key installed, so you can use one ssh key to log in (without a password) everywhere, and thus easily do things like remotely overwriting each machine’s miner.cfg with a new version, etc.
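For instance, to peek at a remote miner’s output (key file name and IP are placeholders; the key itself is the one from /root/.ssh):

ssh -i lukstick_key root@192.168.1.101 'tail -n 20 /tmp/luk.out'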

Summary

Oh-kay – this post got way longer than I had hoped, but I hope it helps: a lot of people tried the earlier lukSticks, and though most of them eventually got them to work, some questions came up again and again – I hope this post answers the most important ones up front.

All that said, I’m reasonably sure there’ll be some remaining teething issues even in this major iteration of the lukStick, so if you run into any, don’t hesitate to drop me a note. It may take a few days (weeks?) to fix them, but eventually I will …. (or so I hope).

As such: Happy mining!

Luk


lukMiner v0.10.1 with Monero v7 Support

Oh-kay, that turned out to be more work than expected… with the cpu, knc, knl, and opencl versions all using different code, the “simple” task of adding v7 support just kept growing. But hey, isn’t it always like that?

Either way – as of a few minutes ago I finally finished baking a new release (v0.10.1) that “seems” to run stably on the Monero v7 testnet pool, on every one of the supported platforms: cpu, x100 phi, x200 phi, and OpenCL all seem to work just fine. As with all recent releases, you can download this release at http://www.lukminer.net/releases . And also as usual: please let me know if you run into any issues!

To use: When mining “classic” cryptonight (sumo, etn, and monero before the fork), use the “-a xn” (or “-a xnclassic”) command line flag; for cryptonight light (aeon) use “-a xnlight”; and for Monero v7 use “-a xmr-v7”. For more details on usage – including some examples – see the README.md at http://www.lukminer.net/releases.
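For example, a post-fork Monero invocation would look like this (pool host, port, and wallet are placeholders; the flags are the ones documented above):

./luk-phi -a xmr-v7 --host <your-pool> --port <pool-port> --user <your-wallet-address>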

Caveats/Remaining Issues

Though I’ve tested the above version quite a bit, a few issues are known to remain:

  1. This version can not (yet?) auto-detect whether we’re in pre-fork or post-fork times, so you have to specify the right “-a <xn|xmr-v7>” command line flag to produce the right hashes. If you don’t, you’ll produce invalid hashes, and at some point the pool you use will get angry. As such: make sure to switch your miner over to the new protocol as soon as – but not before – the hard fork hits (unless I manage to build an auto-detecting version before that, of course 🙂 ).
  2. Even if you use the right flags for the right pool, I do not yet know whether dwarfpool – where my dev shares go – will actually switch over to the v7 protocol automatically; nor do I know when they’ll do that, nor whether they’ll do it on the same URL/port, etc. Now if they do not simply switch the existing dwarfpool over at the right time (as one would expect they should!?), then the version you’d be using would try to submit invalid v7 dev shares to a not-yet-v7 dwarfpool. If you do run into any such issues – or if you want to test the miner yourself on the testnet before dwarfpool switches over – then you can also use the “--dev-shared-on-test-net” command line flag: With this flag, all xmr-v7 dev shares get sent to the (already existing) testnet pool on moneroworld, which should certainly accept them as v7 shares. (Needless to say, those shares get wasted, so better not use that flag after the hard fork is over :-/)
  3. Though I did update all the miner variants I have not yet updated the lukSticks. I’ll try to do this later this week…

As such: some issues remain; but at least the miner itself already supports v7 … and with another 9 days or so to go ’til the hard fork, I’m reasonably optimistic that I’ll get the remaining kinks ironed out by then, too!

With that:

Happy Mining!

PS: I’d like to hereby give a big shout-out to whoever set up the monero v7 testnet pool at http://killallasics.moneroworld.com/ … whoever that was: THANK YOU!


Monero v7, first light …

Just a quick heads-up, because so many of you are curious about support for the v7 hard fork: it’s now Sunday morning, 12:37 am, and I finally got a first implementation of this Monero v7 variant working…. and at least according to killallasics.moneroworld.com the generated hashes are actually correct.

I still have a ways to go in cleaning up the code, fixing the opencl code, doing some more burn-in testing, and in particular allowing command-line switching between the old and new protocol (currently it’s a compile-time choice)…. but at least the lion’s share is done. So bottom line: Everything is looking good for the hard fork at the end of March ;-).

With that – happy mining!