Category: PC Hardware

HP ProLiant DL580 G5 — GOOD LORD HOW LOUD YOU ARE, Also: How to quiet down a DL580 G5

So these servers I got are insanely loud, and I can't stress "insanely" enough. The fans don't seem to spin down to any reasonable level even when nothing is stressed and the system is drawing a "paltry" 650W.

So, since there's no direct control of the fans (at least so far in my limited testing with FreeBSD, which I'm unfamiliar with anyhow), I decided to quiet things down the hardware way. I pulled each of the six fan cages, with their *65dB*, 150CFM, 120mm x 120mm x 38mm fans, and cut the power cables at the proprietary connector. There was no easy way to fit a standard-width fan in here, so I decided to try running each pair of fans in series.

That didn't go so well. The fans would spin up a bit and then spin all the way back down, and the server flagged them as bad; 6 volts was not enough to keep them going. So I decided to cheat another way: I cut the power wires on the rest of the fans and wired a few diodes in series with each one (5 of the 6 fans got 2 diodes, one got 3). That should give me some voltage drop from the forward bias of the diodes, and it did! My system went from absurdly loud to manageable.

For the other server I'll be trying 4 diodes in series to get a bit more voltage drop and more manageable noise. So far, so good.. mostly: the 1.5A diodes aren't quite up to my 150CFM Deltas and managed to burn out; 4x 3A barrel diodes fit perfectly in the little cavity in the fan, but unfortunately I don't have any pictures of that to show at the moment.
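Back-of-the-napkin numbers, assuming roughly 0.7V of forward drop per silicon rectifier (it creeps toward 1V at the current these Deltas pull):

 2 fans in series:  ~12V / 2        ≈ 6V per fan  (too low, they fault out)
 3 diodes in line:  12V - 3(~0.7V)  ≈ 9.9V
 4 diodes in line:  12V - 4(~0.7V)  ≈ 9.2V

Fan speed (and noise) drops off quickly with voltage, so shaving even 2-3V makes a real difference.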

 

 

3 1.5amp Rectifier Diodes, Twisted Together

Step 1:

Twist the diodes together, anode to cathode (the silver band marks the cathode); this is what will get us the voltage drop.

3 1.5amp Rectifier Diodes, Soldered

Step 2:

Solder the twisted joints and clip off the excess leads we just soldered; we only need the two free ends. This step is basically the same for 4 diodes, you just have one more twisted joint to solder and trim.

Fan and cage separated

Step 3:

If your fan has a cage, separate the fan from it. Mine was held in with plastic push pins, much like most cars use; after pulling those, the fan slipped right out as I spread the cage apart to get the custom connector out.

Diodes Placed on Fan, Tinned

Step 4:

Place the diode chain on the fan; you can use some super glue to hold it in the hub cavity if there is one, otherwise put it somewhere convenient. Tin the anode and cathode ends of the chain, cut the fan's main power wire, and tin those wire ends too.

Diodes Wired To Fan

Step 5:

Solder the power wires in. The incoming power wire goes to the anode end of the diode chain (the end whose band is furthest from the connection), and the fan's positive lead takes the cathode (banded) end, so the diodes sit forward-biased in the circuit.
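Put another way, the chain sits in the fan's positive lead with every band pointing toward the fan:

 +12V (incoming) ──►|──►|──►|── fan (+)
                 anode end    cathode (banded) end
 fan (−) wire is left untouched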

Assembled Fan

Step 6:

You should probably have used heat shrink back in Step 5 (doh!); use some electrical tape to make sure the exposed joints don't hit ground and short out. (Yes, I had this problem even with the electrical tape, and I had to do horrible things to get the fan going again, since it uses an uncommon connector that I couldn't just replace.)

Servers!

I got a pair of HP ProLiant DL580 G5s: old, but potentially good. I was looking for something to run the ZFS SAN setup (which will hereafter be referred to as Hermes), and perhaps another VM server. They were supposed to come with 2 Xeon E7330 quad-core CPUs and 32GB of DDR2 RAM.

To my surprise when I booted them up, server one had:

  • 4x X7460 CPUs (six-core, 2.66GHz, 16MB L3, SSE4)
  • 128GB of RAM

The second also had 128GB of RAM, but 4x E7450 CPUs: also six-core, but at a slightly lower 2.4GHz clock and, I believe, with less cache. Total score: a single X7460 is worth more on eBay than I paid for both servers, so I ordered some cheaper CPUs (you know, the cheapest I could find that would work in the socket) at around $6/ea, and the original CPUs will go on eBay to help fund my little lab.

Once the MSA70 comes in I’ll be moving all the SAN stuff off the old desktop it’s in now and onto Hermes.

Note: These things SUCK power. Like 650W of sitting-there-doing-nothing power usage. But hey.. 128GB of RAM. That's a lot of ARC for my ZFS machine!

iSCSI Booting Win2012 Server WITHOUT an HBA (Intel I350-T2 / 82571 / 82574 etc)

Thankfully Intel cards have iSCSI initiators in their firmware, so I set up a ZFS volume to make my HTPC diskless, both to stress the file server a bit more and to generally just play with things as I tend to do.

So I added some settings to my ISC DHCP daemon, under my shared-network stanza, to pass the IQN/server settings to the Intel I350 card (an 82574 etc. would work equally well here):

shared-network "VLAN-451" {
 default-lease-time 720000;
 option domain-name "p2.iscsi.frankd.lab";
 option domain-name-servers ns.frankd.lab;
  subnet 172.17.2.128 netmask 255.255.255.128 {
  range 172.17.2.144 172.17.2.239;
 }
 host intel-htpc1 {
  hardware ethernet a0:36:9f:03:99:7c;
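  # filename left empty on purpose: the Intel option ROM isn't chain-loading a
  # PXE image, it just needs the iSCSI target handed to it via root-path below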
  filename "";
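  # root-path format (RFC 4173): iscsi:<target IP>:<protocol>:<port>:<LUN>:<target IQN>
  # empty fields fall back to the defaults (TCP, port 3260, LUN 0)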
  option root-path "iscsi:172.17.2.130::::iqn.2014-12.lab.frankd:htpc1";
 }
}

Voila, the card came up, grabbed its DHCP settings, and immediately initiated a connection! Awesome, the first thing to go right so far! I admit I briefly spent some time trying to get iPXE to work with the Realtek card, but I ran into issues and decided to just use something I had laying around to get up and running quicker. The onboard Realtek is now for regular network data only; I might get a single-port Intel card since I don't need MPIO to this machine.

I imaged Win2012 Server to a USB stick using Rufus and plugged it in; the installer saw the iSCSI drive and installed to it. I can't believe things are going so easy/well for once! Then the system reboots. And it mounts the volume. And the Windows logo comes up. Then an error message appears saying it couldn't boot. Right away I knew it wasn't getting past the BIOS calls to the disk (which were handled by the Intel NIC), and some Googling came up with horrible answers until I found an IBM document saying a newer Intel driver fixes the issue, in a very indirect way. They don't specify what, but it apparently has something to do with the iBFT tables that are created for the handoff. So I downloaded the newest drivers, put them on the USB stick, and installed Windows 2012 Server AGAIN. This time, though, I loaded the newest network drivers off the USB stick before even partitioning the disk.

The machine rebooted..

And..

IT WORKED! I was up and running. I installed the Desktop Experience feature so I could get Netflix/Hulu up easily, downloaded nVidia drivers, and am now getting my Steam games downloaded to the machine, although I could just stream off my workstation/gaming PC. It can't hurt to have more than one machine with them installed in case either one dies and I need to go blow some pixels up to relieve some stress, right?

Getting My Real VM Server Back Online Part III: Storage, iSCSI, and Live Migrations

After some dubious network configurations (which I should never have gotten wrong in the first place) I finally got multipath working to the main storage server. All of the multipath.conf examples I found resulted in non-functional iSCSI MPIO, while having no multipath.conf at all left me with failover MPIO instead of interleaved/round-robin.

A large part of the problem with getting MPIO configured was that all the examples I found were either old (and scsi_id works slightly differently in Ubuntu 14.04) or just poor. Yes, I wound up using Ubuntu. Usually I use Slackware for EVERYTHING, but lately I've been trying to branch out. Most of the VMs run Fedora; "Pegasus" (VMSrv1) runs Fedora, while "Titan" runs Ubuntu.

Before I did anything with multipath.conf (It’s empty on Ubuntu 14.04), I got this:

root@titan:/home/frankd# multipath -ll
1FREEBSD HTPC1-D1 dm-2 FREEBSD,CTLDISK
size=256G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 13:0:0:0 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 12:0:0:0 sdd 8:48 active ready running

Note that each path ends up in its own round-robin group with only one member! This works for fail-over, but does nothing for performance. The only multipath.conf that wound up working for me was this:

defaults {
 user_friendly_names yes
 polling_interval 3
 path_grouping_policy multibus
 path_checker readsector0
 path_selector "round-robin 0"
 features "0"
 no_path_retry 1
 rr_min_io 100
}

multipaths {
 multipath {
  wwid 1FREEBSD_HTPC1-D1
  alias testLun
 }
}

The wwid/alias doesn't work, however; all of the MPIO behavior is coming from the defaults stanza. I attempted many things with no luck, unfortunately. I'm going to have to delve into this more, especially if I want live migrations to work properly with MPIO. As it stands the disk devices point at a single IP (e.g. /dev/disk/by-path/ip-172.17.2.2:3260-iscsi-iqn.2014-12.lab.frankd:htpc1-lun-0); I'll need to point at the multipath aliases to get the VMs working with multipath.
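One guess I still need to chase down: multipath -ll prints the wwid with a space in it (1FREEBSD HTPC1-D1), while the multipaths stanza above uses an underscore, so the match may simply never fire. The sketch below (untested) pulls the ID the daemon actually keys on and quotes it verbatim:

# ask udev for the ID multipath keys on
/lib/udev/scsi_id --whitelisted --device=/dev/sdd

# /etc/multipath.conf -- quote the wwid exactly as multipath -ll reports it
multipaths {
 multipath {
  wwid "1FREEBSD HTPC1-D1"
  alias testLun
 }
}

# reload the maps afterwards
service multipath-tools restart
multipath -r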

The multipath tests themselves were promising, though: dd was able to give me a whopping 230MB/s to the mapper device over a pair of GigE connections.
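For anyone wanting to reproduce that kind of number, the test was nothing fancier than a big sequential dd against the mapper device; a sketch (not the exact command I ran):

# sequential read from the multipath device, bypassing the page cache
dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=8192 iflag=direct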

The output from ‘multipath -ll’ now looked more reasonable:

root@titan:/home/frankd# multipath -ll
mpath1 (1FREEBSD HTPC1-D1) dm-2 FREEBSD,CTLDISK
size=256G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 39:0:0:0 sde 8:64 active ready running
  `- 40:0:0:0 sdg 8:96 active ready running

You can see both paths are now under the same round-robin policy group instead of each sitting in its own.

The storage server also saw some slight changes, including going from one 40GB Intel X25-V for L2ARC to two X25-Vs for a total of 80GB. I also added a 60GB Vertex 2 as a SLOG (separate ZIL) device. I really need to build a machine with more RAM and partition out the SLOG properly. I'll likely wind up using my 840 Pro 256GB for L2ARC and leave the old X25-Vs out of the main array once I get a pair of 10GbE cards, for maximum speed to my workstation (hopefully near the native speed of the 840 Pro, perhaps better with a large amount of ARC).
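For reference, attaching those devices is just a couple of zpool commands. A sketch with placeholder device names and a pool called "tank" (yours will differ):

# two X25-Vs as L2ARC (cache) devices
zpool add tank cache ada4 ada5
# the Vertex 2 as a separate log (SLOG) device
zpool add tank log ada6
# sanity check the layout
zpool status tank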

So we’re at a point where everything appears to be working, although in need of some upgrades! Great! I’m looking at a KCMA-D8 Dual Opteron C32 motherboard as I have a pair of Opteron 4184s (6 core Lisbon, very similar to a Phenom II X6 1055T) laying around, so I could put together a 32GB 12 core machine for under $400 — but as always, budgetary constraints for a hobby squash that idea quickly.

Getting My Real VM Server Back Online Part II: Storage Server!

Anticipating the arrival of RAM for my VM server tomorrow, I decided I needed some kind of real storage server, so I started working on one. I haven't touched BSD since I was a kid, so I'm not used to it in general. I wasn't sure how well OpenSolaris would work on my hardware (I hear it's better on Intel than AMD), so I opted for FreeBSD. Unfortunately I just found out FreeBSD doesn't have direct iSCSI integration with ZFS, but that's okay! We can always change OSes later, especially since the storage array leaves a lot to be desired (RAID-Z1 with 4x 1TB 2.5″ 5400RPM drives plus a 40GB Intel X25-V for L2ARC, and no separate ZIL).

I'm getting used to the new OS and am about to configure iSCSI, which will be served for multipath over an Intel 82571EB NIC across two separate VLANs into a dedicated 3550-12T switch. We'll see how it works, and if it's fine I'm going to get my HTPC booting over it.
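For the curious, the FreeBSD 10 way of doing this is a zvol exported through ctld. A minimal sketch (pool name and size are placeholders, no authentication, and the IQN is just an example in my lab's naming scheme):

# carve out a zvol to export (pool name "tank" is a placeholder)
zfs create -V 256G tank/htpc1

# /etc/ctl.conf -- one portal group listening on both iSCSI VLAN addresses
portal-group pg0 {
    discovery-auth-group no-authentication
    listen 172.17.2.2
    listen 172.17.2.130
}

target iqn.2014-12.lab.frankd:htpc1 {
    auth-group no-authentication
    portal-group pg0
    lun 0 {
        path /dev/zvol/tank/htpc1
    }
}

# enable and start it
echo 'ctld_enable="YES"' >> /etc/rc.conf
service ctld start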

I'm going to look around for a motherboard with more RAM slots; for now I'm stuck with a mATX board, a SAS card that won't let the system boot, and 2 RAM slots (8GB) alongside an FX-8320.

Performance tests to come.. after I encounter a dozen issues and hopefully deal with them!

Getting My Real VM Server Back Online

My server has been off hiding somewhere far away from me for a while, so I’ve been running virtual machines on an AMD FX-8320 990FX based box. Unfortunately it only had 16GB of RAM and I gutted the server RAM for use in my workstations.

I've decided to order some used ECC registered 4GB sticks off eBay; 32GB ought to do for now. I won't have to worry about whether I can launch a new VM due to RAM constraints (I was using a lot of swap before!), so titan.frankd.lab will soon be back online, with the FX-8320 machine for failover. I'm going to need shared storage, so I'll have to set up a real iSCSI storage box soon.

End short random thought.

Another VM Host Upgrade

And yet another not-so-exciting blog entry. My VM host with the FX-8320 was on an AMD 760G board, so it lacked IOMMU, which I'd love to have for SR-IOV among other things. I have a spare machine laying around that was formerly a gaming machine. Needing more RAM (the 760G board only had two slots) and IOMMU, I decided to repurpose the gaming machine as the VM host. The 990FX-based board already had an FX-8120 in it, so I took a single step back in CPU generation, but it's fairly close.

I only had 8GB of RAM in the old setup, so I combined that with 2x 2GB sticks of ECC DDR3 I had hiding in a box. With 12GB of total RAM I have a bit of headroom now and can launch a few more VMs. While that's not impressive as far as virtualization host hardware goes, it does let me run a bunch of local services for testing/learning/re-learning.

Not having onboard graphics on the new board meant another video card; luckily I had some GTX 750 Tis laying around (hardware seems to end up 'laying around' here pretty often), so one went in the bottom PCI-E x4 slot so as to not block any other slots for future upgrades. The Intel I350-T2 card went in the next x4 slot for iSCSI.

VM storage is going to be split off from the hardware, so it will all be through iSCSI with MPIO. That pretty much just leaves me with a ton of PCI-E slots for NICs.
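On the host side that will just be open-iscsi logging into the target through both portals, with multipath stitching the sessions together afterwards. A sketch (the target IQN here is a placeholder; the portal addresses are the two I already use):

# discover and log in once per iSCSI VLAN
iscsiadm -m discovery -t sendtargets -p 172.17.2.2
iscsiadm -m discovery -t sendtargets -p 172.17.2.130
iscsiadm -m node -T iqn.2014-12.lab.frankd:vmstore -p 172.17.2.2:3260 --login
iscsiadm -m node -T iqn.2014-12.lab.frankd:vmstore -p 172.17.2.130:3260 --login
# the resulting sdX pair then shows up as one /dev/mapper device via multipathd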

I was able to reduce the CPU's reported power draw by offlining the "odd" cores (1/3/5/7) while load is low (better to offline one core per module, since core pairs 0/1, 2/3, 4/5, and 6/7 share resources in AMD's CMT architecture), locking the CPU at its idle clock, and reducing the P-state 6 (idle) voltage from 0.9375V to 0.825V, which has been stable so far (sensors reports 0.85V). Power tends to stay close to 30W and never breaks 50W. If the box were more heavily utilized I'd let it clock up, but nothing is CPU-limited at the moment. I'll have to try monitoring power usage at forced idle vs. the 'ondemand' governor with various load transition points. I wouldn't call anything sluggish, but then I don't have hundreds of devices on my network.
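Roughly what the core juggling looks like from userspace (standard Linux sysfs knobs; the idle-voltage change was done separately and isn't shown here):

# take one core per CMT module offline (run as root)
for c in 1 3 5 7; do
  echo 0 > /sys/devices/system/cpu/cpu$c/online
done

# hold the remaining cores at their lowest clock
for g in /sys/devices/system/cpu/cpu[0246]/cpufreq/scaling_governor; do
  echo powersave > "$g"
done

# and to bring the cores back when load picks up:
# for c in 1 3 5 7; do echo 1 > /sys/devices/system/cpu/cpu$c/online; done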

As for a power supply, the case already had a SeaSonic 660XP2 80+ Platinum unit in it, so even if I do have to run the CPU at full tilt there should be little waste in the PSU department. It's complete overkill, both for being Platinum at this power level (likely sub-100W at all times) and for its 660W rating. If I were going to buy something I probably would've gotten a SeaSonic Gold, which would still leave plenty of headroom even with the box full of NICs and RAM. It does feel a little safer than running an FX-8320 and a drive array off a 180W power supply, though.

There are plenty of local services running here; eventually I'm going to make some (counter)intuitive web GUIs for configuring things (e.g. IP address management that then configures DHCP/DNS).. so it was good to brush up on configuring these things from scratch.

New HTPC

OK, so my Bobcat machine was a little underwhelming even for use as an HTPC. I've never liked waiting on computers.. so I was forced to buy a Pentium G3258 on sale with the cheapest motherboard they had on the shelf. Going Intel feels a little sacrilegious since I've been predominantly an AMD guy for a very long time. The performance/$ on this thing is great, especially with a mild overclock. I don't have to use a discrete GPU any more (likewise I'd be fine with an AMD APU).. although I will admit I threw in a GTX 670 from my 3-way SLI setup to see how it would do. Turns out a fairly respectable ~5200 in 3DMark Fire Strike! I think if I were going to do any gaming with it I'd stick with a GTX 750 Ti, though. The power envelope fits the whole idea better (38.5W TDP GPU + 53W TDP CPU) with good performance in most games at 1080p (as if I have anything 1080p anymore..). I guess I'll have to take a look at the performance and power of GM206/the GTX 860 when it comes out.

Windows 8 has been what I can only describe as "annoying," with a messy, pathetic UI that MS is futilely trying to push on me. Beyond that, I'm annoyed that neither Win7 nor Win8 can do iSCSI MPIO, which would've actually given me good performance to my temporary fileserver (sitting on the VM machine). I'm going to try to pull the Windows Server 2012 iSCSI utilities and drivers into Win7 for MPIO, but I don't have all that much faith in it working properly.. so I may just be stuck getting 10GigE cards for the workstations and a multi-port 10GigE card for the server. Switches are too damn expensive.

The goal is to get the disks out of all the workstations so I can more easily have snapshots and high-performance disk IO across everything I use. The HTPC now boots via iSCSI over one link of an 82571EB card while the other link carries regular network traffic. I'd really like to be using both of those links for iSCSI and leave regular network traffic to the onboard NIC.

I’ve always been a bit of a hardware junkie, but I didn’t think I was too elitist to use an E350 as an HTPC. Apparently I was wrong. I guess it’d still be good for a small NAS box (GigE, not enough PCI-E connectivity to support a fast array and 10GigE adapters) — or a Linux media player. I might do the latter just for watching multicast video over the network if I ever get REALLY bored.

4K Is AWESOME!!

I mean this in the sense of a developer/network engineer. As far as media goes, I'm sure it WILL be great once we can get fed >100Mbit HEVC streams. For the time being, with no real content, the test footage isn't really knocking my socks off.

But with a pair of 24″ 4K monitors there is SO much I can fit on the screen and check at a glance that I was unable to do before. I’ve had some pretty good monitor setups for a while (4x1080p + 1440p in the center), so while I haven’t really gained that many pixels I HAVE made it easier to see them. Ironically some people may find that to not be the case if they have vision problems, and I’m sure in another 10 or 15 years 24″ 4k will aggravate me for that same reason. There have been a multitude of posts from developers about the awesomeness of 4k, so I won’t go into too much detail. But it really increases my work efficiency — if only I had something like this at my job!

Even when working with network lab setups it's great having that many console windows actually open and visible, with a browser up where I can actually read the content.. and graphs chugging along in the NMS while I watch syslog messages through Graylog2 in another pane. This simply wouldn't be possible with fewer than 16.5 million pixels on a screen smaller than 39″. I know, because I ran ~8MP worth of 1080p panels plus 3.7MP of 1440p. I could do almost as much, but I was constantly either craning my neck around due to the sheer area involved (a 27″ monitor flanked by 21″ 1080p monitors is really not all that practical to work with) or opening and closing windows as needed.

And yet, despite all that awesomeness, I do hate my UP2414Q: MST (Multi-Stream Transport) is annoying. The monitor will occasionally do all kinds of crappy things (like only using the right half of the image and stretching it across the whole screen, or deciding to only apply color calibration to the right pane). I can't wait for ~28″ IPS SST panels to arrive. I'd really like the pixel density to stay high, but I could settle for 3840×2160 @ 28″ (4320×2430 would be preferred). MST works by splitting the monitor into two 1920×2160 panels, wreaking all kinds of havoc at the worst times. It's really disappointing to spend that much on a piece of equipment and deal with this much bugginess; that is what you get for being an early adopter, though. And despite all the annoyances and hate, I still wouldn't trade it in for a monitor with fewer pixels.

As for gaming.. it just makes my GTX 670 FTWs wish I'd never bought them; I hit the 2GB VRAM limit with ease in most games. Even when I'm under the VRAM limit they can barely drive most games, even in 3-way SLI; there are just that many pixels. Keep in mind SLI scaling is better than with a 1080p monitor, since it can make more efficient use of each GPU, but there really just needs to be more graphics rendering hardware behind it. I'm definitely going to need at least a pair of high-end next-generation GPUs in SLI to drive this thing at reasonable framerates. Hopefully GM200 8GB cards come out soon.. but I'm sure they won't be 'affordable' (sub-$1000 per card) for a couple of years.

HTPC Duty

Netflix annoyingly will only stream up to 3000kbit/s video through the Silverlight plugin. The only way to get higher-quality video is either a high-end smart TV or the Windows 8 Metro app. Since I have a couple of 4K monitors and a 1440p monitor mounted to the wall as a pseudo-TV, I decided to try Win8 in a virtual machine. That went right downhill: as soon as I got it installed, Netflix complained that it wouldn't play video in a VM (this may not have been an issue with GPU pass-through; a test for another day, I guess). So I pulled out the E350 and slapped it in another case with 4GB of RAM. The PSU made a loud whining noise and smelled acidic; I thought I'd popped a capacitor. It's actually fairly common for older power supplies to blow up when certain rails are very lightly loaded (and this is a 550W Rosewill from nearly 10 years ago feeding an 18W APU and a 2.5″ 5400RPM hard drive), so I wasn't too surprised. Hoping I hadn't also fried the motherboard, I pulled the PSU out and apart to clean out all the heavy dust that had caked up in it over the years. Surprisingly everything looked to be in good shape, with no caps bulging at the top, so I put it back in the machine and everything seems to be fine.

After getting Windows 8 installed, I noticed I was only getting a maximum supported resolution of 1920×1080 on my 27″ 2560×1440 monitor. Some quick Googling showed that every E350 board I looked up only supports up to 1080p output. That's somewhat annoying, as there's no such limitation in the HDMI version used or, as far as I know, in the silicon the HD 6310 is based on. So I tried the nVidia GT430, which is GF108-based with GDDR3 and can impressively draw up to 49W of power.. almost triple what the E350 CPU/GPU combo draws by itself. For some unknown reason that ALSO only supported 1920×1080 over HDMI. Just my luck.

Fortunately I had a pair of GTX 660 SCs laying around, which are far, far overkill but happily output 1440p at 60Hz. I quickly replace the GT430, and just as quickly I'm annoyed to find out that Win8 will not automatically use the driver I had already installed for the GT430 (same driver; Win7 has no such qualms with similar swaps). So I install the driver again, which is painfully slow on this poor dual-core 1.6GHz Bobcat, load up Netflix, and all is well! I can finally get 5800kbit/s 1080p video from Netflix, and even 4K when it's available! Of course, now I'm wasting a perfectly good GPU just to get decent output. Eventually I'll have to take a look at new APU or CPU/GPU options (maybe an i3, or an i3 plus a Maxwell GM208 if that's ever released as a low-profile passive card), but for now it works.

Synopsis of issues:
Netflix + Silverlight = 3000kbit/s max video
Win8 in a VM = Netflix App won’t work!
ATX PSU + low load = Whining + boom?
AMD E350 (Asus E35M1-M Pro)  = 1920×1080 max resolution over HDMI 🙁
GT430 = 1920×1080 max resolution over HDMI 🙁
GTX660 + E350 = TOO MUCH POWAH!

More Virtualization Hardware

The dual-C32 Opteron 4284 server I have (aka titan.frankd.lab) had its 16GB of RAM gutted for my workstation. I'm looking at 32GB-64GB kits on eBay to try to get a decent amount of memory for virtualization. I've also picked up an Intel I350 card to go in the "light-weight" FX8320 server in order to have a card with SR-IOV, so it will still have 4 Intel ports + the onboard Broadcom. For now the I350-T2 is replacing the GT430, and video duty is being served by the onboard graphics. It's not as if a server needs video output, but it does make things somewhat easier since I'm right next to the hardware. The other alternative would be running an X server on Windows so I could have easy access to virt-manager.

The C32 box already has 2x 82571 ports and 2x 82576 (SR-IOV) ports, so the extra I350 card will eventually go in there for a total of 6 ports, while the FX8320 machine keeps its dual 82571EB cards. This leaves me with lots of options for super-segregating networks and great offload capabilities on the 82576/I350 NICs. Currently I'm running many VLANs to keep things separate (SevOne/Icinga/Graylog all have their own respective VLANs), which means I might be able to get a little more interesting with my routing for actual use.

More ports will be interesting if I actually build an iSCSI server (which I'm certainly planning on once I have money I can actually spend).. though it may necessitate a higher-quality switch (a Cisco 4948), some 10G interconnects.. or IPoIB (IP over InfiniBand).
