Meltdown and Spectre loom large this year, and this article outlines just how simple Meltdown/Spectre-1 mitigation can be. But first, do your homework.
...
Because CVE-2017-5754 (Meltdown) is considered by some to be the most severe/exploitable of the issues, we did not want to wait for CVE-2017-5715 (Spectre-2) mitigations while Spectre-1/Meltdown fixes were ready to ship. We also understand that some customers may want to delay updating until all mitigations are in place. While we strongly recommend taking updates as soon as they become available, we wanted to be transparent about the fact that more updates are on the way. ...
For those of you with VCSA 6.5.x already installed, the simple VAMI update method also means you won't need the full VMware-VCSA-all-6.5.0-7119157.iso (Release Date: 2017-11-14, Build Number: 7119157) from the Download Page.
This upgrade is also known as version 6.5.0.14100, 6.5U1f, or Build 7801515, as seen in the vCenter Server Appliance Management Interface (VAMI) pictured at the top of this article. A screenshot and video of the process appear below.
Create a snapshot of your VCSA appliance first, or at least take a backup, using whatever VM backup software you prefer, or even the backup abilities of the VCSA appliance itself.
Log in to your VCSA's VAMI (vCenter Server Appliance Management Interface, aka Appliance Management User Interface) using the steps seen in the clear screenshot below.
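If you're not sure of the address, the VAMI listens on port 5480. For example, using my home lab's naming (substitute your own VCSA's FQDN):
https://vcsa.lab.local:5480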
Click on Summary, then choose Reboot, click Yes.
Test that the vSphere Client is functioning properly, to know whether VCSA is working. It might be best to verify that the vSphere Web Client is fully operational too.
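For quick reference, the two clients live at well-known paths on the VCSA, shown here with my lab's hostname standing in for yours:
https://vcsa.lab.local/ui (vSphere Client, HTML5)
https://vcsa.lab.local/vsphere-client (vSphere Web Client, Flash)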
In vSphere Client, select the VCSA appliance and right-click on it, then choose Snapshots, Manage Snapshots, DELETE ALL, then click on DONE.
Connecticut VMUG UserCon was at the Connecticut Convention Center in 2016, and on March 1 2018, it's being held in New Haven at the Omni Hotel.
I've long hoped that the VMUG organization would someday see the value of having an amazing home lab raffle at a VMUG UserCon, ever since I did a well-attended live demo at the Connecticut VMUG UserCon back in 2016; see:
Photo by former Connecticut VMUG Leader Matthew Kozloski.
So you can imagine my delight when I recently learned from VMware Senior TAM/VMUG Volunteer Matt Bradford that the VMUG organization was sponsoring a very special give-away this year, for the big upcoming Connecticut VMUG UserCon. It's even more special that it's right in my home state of Connecticut. But what could it be?
It turns out it's one of the popular TinkerTry'd Supermicro SuperServers! One lucky attendee who enters to win in person takes home their new Xeon D-1541 Bundle at the end of the day. Awesome! I love bringing some happiness and productivity, and maybe even career advancement, to others. Helping make such events a little bit better is one of the many joys of blogging. I love testing, aka tinkering, with servers, and I also love documenting the results of my home lab adventures. This helps others get the most value from their time and investments too; it's kind of my thing, at work and at home. In case you hadn't noticed, see the TinkerTry tagline above, which reads "TinkerTry IT @home - Efficient virtualization, storage, backup and more." It reads TinkerTry it at home, or TinkerTry IT at home, get it? Gear you can actually buy yourself, or in this case, actually win.
This is fantastic! What a great way to evangelize the value of having physical gear that's well prepared to handle even the most demanding workloads that a single IT Professional can throw at it. Or used for self-training. Or for fun. Or all of the above. This is a proven solution for home lab use that's already been enjoyed by hundreds of SuperServer owners globally. It's not so much a lab, really; it's more like a tiny, efficient, quiet, and powerful home datacenter. This premium device's low overall noise output and power usage also means it enjoys a high family acceptance factor, more stories here.
Last year, in February, I came to the realization that if I really wanted to increase my knowledge and advance my career, then I needed to invest in a homelab. So, I made an investment and purchased a SuperMicro 5028D-TN4T system bundle from WiredZone. ... Conclusion
To round things up, I can say I’m very satisfied with my purchase. I don’t have any regrets in my purchase and at least from a storage standpoint, I still have room to grow. The system has been very reliable. I have not had a single hardware issue or any quirkiness from the system. Everything has worked as expected. The system runs 24/7 at home and is only powered off when performing updates to it. ...
This is just one new addition to the many SuperServer experiences already shared by prominent bloggers such as William Lam, Zach Widing, and Tai Ratcliff who also work at VMware, and Michael White who works at Veeam.
Qty 1 Supermicro SuperServer SYS-5028D-TN4T "Bundle 2" from authorized Supermicro reseller Wiredzone, who performed the pre-configuration and testing, including:
BIOS and IPMI are flashed to the latest known-good levels, and the BIOS is pre-configured correctly, with UEFI on for modern OSs, in procedures developed by TinkerTry here.
2 TinkerTry logo stickers, a SuperServer front sticker, and a custom rear backplate label, as pictured below. All are customer-installed, optionally, and are strong but removable, with no gooey residue left behind.
Handy PC speaker, so you can hear when the BIOS is prompting you to pay attention during the end of the POST sequence.
6th SATA3 cable that is normally missing from the barebones system, so all 6 drive bays can be used immediately.
SanDisk Ultra Fit CZ43 32GB USB 3.0 Flash Drive, conveniently front-mounted and ready for your ESXi, note that the system is also compatible with booting from SATADOM modules.
Bare-bones normally means no storage, but this system has the very fast and super popular Samsung SSD on board, installed by Wiredzone for you. See just how fast this drive can go in the video below.
That's 64GB total, installed and burn-in tested for you, with a printed certificate in the tamper-sealed box that was shipped from Wiredzone to VMUG to hand to you, still sealed.
What better way to get the most out of your hardware than some very sweet and valuable bits of software! Product list here, and more details below.
All the benefits of the System-on-a-Chip Xeon D-1500 platform, including two Intel I350 1GbE ports and two Intel X557 10GbE ports, built right into the motherboard for maximum performance at the lowest watts used. Yes, those photos below are real, my entire traveling datacenter demo uses only 66 watts at idle, and no more than about 110 watts at maximum loads.
This is exactly what I'll be bringing with me to demo live.
Come on by my demo area and ask me to:
Live demo a complete clone of a Windows 10 VM in well under 30 seconds! That's the power of M.2 NVMe storage, which has revolutionized home lab performance, leaving old-fangled SATA-based SSDs in the dust years ago.
Try your hand at the vSphere Client, the HTML5 future of VMware vSphere sysadmin.
Nerd out with me about the speeds and feeds of this system, where PCIe 3.0 x4 lanes allow one full-speed M.2 NVMe device on the motherboard, and up to 4 more in the PCIe 3.0 x16 slot.
Talk about whatever is on your mind.
Read more about Intel Optane 900p.
I'll also have some fun props to show off, including Micron PLP/supercapacitor-equipped NVMe storage that VMware vSAN craves, and even a snazzy-looking Intel Optane SSD in hand, the world's fastest type of storage available to consumers today.
Of course, I'll also be highlighting the many benefits of VMUG Advantage EVALExperience, for those all-you-can-eat 365-day VMware licenses, so you can tinker with all those bits in your non-production home lab, and keep on tinkering by re-upping your subscription, freeing your busy self from 60-day time-bomb/rebuilds. The VMUG organization is even sending a Kubi to answer any questions you may have about the program, give it a try! I had a terrific call with the program organizers today, and it turns out the discount code TINKERTRY that scores you 10% off your $200 purchase of VMUG Advantage has been pretty popular. It's so good to hear that, and to learn that over 90% of subscribers renew after their first year. That stat alone says it all! You get VMware Workstation (worth $249!) and Fusion thrown in, along with vSphere (ESXi and VCSA) and many VMware products. My how the program has grown this year, it's all detailed on the order page, see also:
What better way to leverage your SuperServer? NSX is memory-hungry, and the limitations of the Intel NUC or noisier Xeon Ds out there can really hold you back; see TinkerTry.com/compare. This raffle winner can easily upgrade from 64GB of total RAM to 128GB just by adding 2 more DDR4 DIMMs.
You'll see that other amazing topics and speakers are on the agenda too, including none other than VMware's Mike Foley presenting a vSphere 6.5 Security Update, and VMware's Frank Gesino presenting Planning for Your Upgrade to vSphere 6.5 - Deep Dive.
Don't forget to register, it's free. And really really don't forget to enter the raffle when you arrive, and stick around for the end-of-day give-away. See you there!
TinkerTry.com, LLC is an independent site, has no sponsored posts, and all ads are run through 3rd party BuySellAds. All equipment and software is purchased for long-term productive use, and any rare exceptions are noted.
TinkerTry's relationship with Wiredzone is similar to the Amazon Associates program, where a very modest commission is earned from each referral sale from TinkerTry's SuperServer order page. I chose this trusted authorized reseller for its low cost and customer service, and a mutual desire to help folks worldwide. Why? Such commissions help reduce TinkerTry's reliance on advertisers, while building a community around the Xeon D that strikes a great balance between efficiency and capability.
I personally traveled to Wiredzone near Miami FL to see the assembly room first-hand, and to Supermicro HQ in San Jose CA to share ideas and give direct product feedback.
I'm a full time IT Pro for the past 23 years. I've worked with IBM, HP, Dell, and Lenovo servers for hands-on implementation work across the US. Working from home over the past year as a VMware vSAN SE, I'm quite enjoying finally owning a lower-cost Supermicro solution that I can recommend to IT Pro colleagues, knowing it will "just work." That's right, no tinkering required.
Come back and refresh, this new article is a work in progress!
I'm happy that Xeon D finally has some competition, validating the value of the embedded/IoT market niche that home virtualization lab enthusiasts have benefited from. There are many popular Intel Xeon D designs that take advantage of the compact and efficient "System on a Chip" design.
Any competitive pressures that help keep prices in check are good! You'll want to get started by reading ServeTheHome's fantastic new article:
We recently attended a launch of two new products, the AMD EPYC Embedded 3000 series and the AMD Ryzen Embedded V1000 series. This piece is going to focus on the AMD EPYC Embedded 3000 series which has the technical features that should have the Intel Xeon D team nervous. ...
I'm out of time to add more analysis to this article for today, but this carefully crafted Google search should keep you busy for a while: AMD EPYC Embedded 3000.
FYI, I’m currently in the midst of improvements to my home lab environment, retooling to get ready for some serious tinkering. I’ll have some fun pictures of the progress I’ve made already that I'll be sharing soon, and I’m working on getting a Xeon D-2100 loaner for testing too.
If I do manage to get my hands on enough funds to try an AMD EPYC out, you can bet it will be the 3000 Series. Note that the supported launch OS list doesn't include VMware. It doesn't even include Windows yet, so it may be a while before maturity and support are really there for VMware. Note that the Xeon D-2100, only launched earlier this month, already works with VMware, see Intel Xeon D-2100 with VMware ESXi and Ubuntu 16.04 on Supermicro X11SDV. Whatever happens, the 3000 Series launch will likely be smoother than AMD Ryzen's first experiences with ESXi 6.5 were.
So many Xeon D Form Factors
There are so many form-factors for Xeon D out there, thanks to the System On a Chip design that allows for a variety of embedded and IoT centric-designs. See my recently updated:
Xeon D Success Stories
There are so many successful stories of using the Intel Xeon D-1500 as the basis for a wide variety of home labs. Here's a small sampling, in alphabetical order. Many of these authors have written several Xeon D articles, so be sure to have a look around at each of their sites. Most of these authors purchased their systems with their own money, but I still note where each blogger works, since these systems are sometimes used as part of their day jobs too.
If I forgot a home lab story featuring Xeon D that I really should add to this list below, please drop a comment below to let me know!
vBrownBag's "Hyperconverged Home Lab 2.0 with Joshua Stenhouse" featured at Virtually Sober leverages Xeon D-1541 and VMware vSAN
Josh Stenhouse works in a technical role at Rubrik, and on his particularly awesome recent vBrownBag, he really gets into the weeds about how he built his home lab, along with the challenges he had to overcome. I figured this would be right up your alley! He made 15 or 20 different revisions (prototypes) before settling on his finalized configuration.
While this recording is available in audio-only form at this spot featuring playback controls, you'll really want to watch the whole video on the vBrownBag YouTube Channel for a much richer experience, where Ariel Sanchez Mora does his usual fantastic job of interviewing:
Video
VIDEO - Hyperconverged Homelab 2.0 with Josh Stenhouse [@joshuastenhouse]
...
I use my lab daily to perform my job as a sales engineer. I need to demo solutions and test my PowerShell scripts at a reasonable scale with multiple hypervisors. Any investment in my lab enables me to do the best job that I can. Also, it’s fun. 😊 ...
due to idling host efficiency and HA config issues I had to switch back to ESXi and trusty vSphere with vSAN (6.5 Update 1). For me, vSAN is the best thing to come out of VMware in years. Why? Because it disrupts a legacy market, it works, it’s efficient, and most importantly, it’s integrated! I just check a box and go. ...
Here's the key part of the BOM (Bill of Materials)
3 x Supermicro X10SDV-8C-TLN4F+ Xeon D-1541, 45w each, total 135w
I'm flying with this home datacenter tomorrow, to present at my 14th user group presentation this year. This time, it's to one of the biggest VMware User Groups in the US, over in Minneapolis, Minnesota! OK, I'm not actually trying to fly with that UPS battery, but everything else is going with me, the mini-tower server itself tucked safely into luggage, easily fitting into the overhead bin. I even use a bunch of those little foam "TinkerTry IT @ home" houses to protect the in-transit server, and to toss a few to audience members that ask great questions ;-) ...
...
In terms of the power consumption the Xeon-D processor is amazing and I have not noticed any change in my power bill over the last 12 months…for me this is where the 5028D-TNT4 really shines and because of the low power consumption the noise is next to nothing. In fact as I type this out I can hear the portable room fan only…the Micro Tower it’s self is unnoticeable. ...
To round things up, I can say I’m very satisfied with my purchase. I don’t have any regrets in my purchase and at least from a storage standpoint, I still have room to grow. The system has been very reliable. I have not had a single hardware issue or any quirkiness from the system. Everything has worked as expected. The system runs 24/7 at home and is only powered off when performing updates to it. ...
It's worth noting that both Anthony Spiteri and Chestin Hay mention that they wish they had more than 128GB of RAM. Earlier this month, Intel announced the Xeon D-2100 System On a Chip that will help address this concern, at a higher price point. Read more about it right here at TinkerTry. Last week, some new competition called the AMD EPYC Embedded 3000 Series was also announced. The question is largely around whether anybody will ship one of these "edge/IoT" devices in a compact and quiet mini-tower form factor that is suited for home labs, rather than the 40mm 1U screamers that seem to be the focus, at least initially. Whatever happens, exciting times and opportunity lie ahead!
I recently bumped into an issue after a public demonstration of my home lab. After the successful day, I routinely replaced the VCSA appliance I had been messing around with by deleting the old one and installing a new one. I re-used the same DNS name, which for my home lab is vcsa.lab.local, avoiding the need to update my DNS server.
Suddenly, using a browser to get to either of the UIs, the vSphere Web Client (Adobe Flash) or vSphere Client (HTML5), wouldn't work. Even VAMI broke, and the main VCSA welcome page that allows easy certificate download. My browsers were trying to warn me that I was trying to connect to what they rightfully saw as an imposter. I've bumped into this conundrum before over the past few years of testing dozens of beta versions. So off to Google I went, curious if there were clear-cut articles out there with the resolution. I didn't find anything besides this KB 210894, so I figured it's a great time for me to finally get my fix documented here, partly for my future self.
If you work in a lab where you've already downloaded certificates into your system's "Trusted Root Certification Authorities" store to avoid those important but pesky red browser warnings everywhere, such as by following along with my TinkerTry article:
and you later replace your VMware vCenter Server Appliance (VCSA) like I did, you'll also get those scary warnings. These warnings can't be bypassed, as listed/shown here for reference:
Chrome
Tested with version 64.0.3282.186 (Official Build) (64-bit)
Your connection is not private
Attackers might be trying to steal your information from vcsa.lab.local (for example, passwords, messages, or credit cards). Learn more
NET::ERR_CERT_INVALID
Automatically send some system information and page content to Google to help detect dangerous apps and sites. Privacy policy
vcsa.lab.local normally uses encryption to protect your information. When Google Chrome tried to connect to vcsa.lab.local this time, the website sent back unusual and incorrect credentials. This may happen when an attacker is trying to pretend to be vcsa.lab.local, or a Wi-Fi sign-in screen has interrupted the connection. Your information is still secure because Google Chrome stopped the connection before any data was exchanged.
You cannot visit vcsa.lab.local right now because the website sent scrambled credentials that Google Chrome cannot process. Network errors and attacks are usually temporary, so this page will probably work later.
Internet Explorer
Tested with version 11.850.15063.0 (64-bit)
This site is not secure
This might mean that someone’s trying to fool you or steal any info you send to the server. You should close this site immediately.
Close this tab
More information
The website’s security certificate is not secure.
Error Code: 0
Microsoft Edge
Tested with version 40.15063.674.0 (64-bit)
This site is not secure
This might mean that someone’s trying to fool you or steal any info you send to the server. You should close this site immediately.
Go to your Start page
Details
The website’s security certificate is not secure.
Error Code: 0
Firefox Quantum
Tested with version 58.0.2 (64-bit)
Your connection is not secure
The owner of vcsa.lab.local has configured their website improperly. To protect your information from being stolen, Firefox has not connected to this website.
Learn more…
Report errors like this to help Mozilla identify and block malicious sites
vcsa.lab.local uses an invalid security certificate.
The certificate is not trusted because the issuer certificate is unknown.
The server might not be sending the appropriate intermediate certificates.
An additional root certificate may need to be imported.
Error code: SEC_ERROR_UNKNOWN_ISSUER
Add Exception...
While replacing your vCenter/VCSA in the enterprise isn't exactly a common occurrence, it's much more commonplace in the home lab, testing different versions of VCSA or beta testing future versions.
The fix I've documented here is fairly straightforward, tested on Windows 10 and VMware vSphere/VCSA 6.5U1f.
Here are the step-by-step written instructions, with a walk-thru video below.
Step 1) Delete the old VCSA certificate
Press the Win+R key on your keyboard
Type certlm.msc then press the "Enter" key
When prompted by "User Account Control", click "Yes"
Along the left, open the "Trusted Root Certification Authorities" and highlight the "Certificates" folder
Look for a certificate that is Issued To and Issued By "CA" and double-click on it
Select the "Details" tab
Scroll down to "Subject" and look for something like "VMware Engineering, vcsa.lab.local" but with your vcsa server's name instead
Click on the "Copy to File..." button, and save the certificate to your system's drive, just in case you ever need to import it again
Click OK to exit the view of the Certificate
With the certificate you just inspected still highlighted, press Del on your keyboard and answer Yes to delete the certificate (a command-line alternative is sketched just after these steps)
Close all copies of the browser you use for vSphere sysadmin, killing any stragglers with Task Manager if necessary, or log off and back on again to be extra sure.
Try opening the vSphere Client and launching a Remote Console to a powered-on VM; if you get this error, turn on the checkbox, then click "Connect Anyway"
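If you prefer a command line over certlm.msc, the same inspection and deletion can be done with Windows' built-in certutil from an elevated Command Prompt. This is just a sketch; the serial number below is a placeholder you'd replace with the real value shown in the first command's output:
certutil -store Root
certutil -delstore Root <SerialNumber>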
Step 4) Recreate Chrome shortcuts (optional)
If you find that any of your Taskbar shortcuts created in Chrome give an unexpected error, it's due to VCSA-specific bookmarking. To clean them up, simply recreate those shortcuts; it's all explained in detail in the following TinkerTry article.
A TinkerTry reader sent this great comment earlier today, how timely!
I'm disappointed that I don't have a great answer for Koen, at least not yet anyway.
Click to visit the Supermicro SuperServer E300-9D Product page.
For folks who don't need much storage and don't mind 1U fan noise, the only system based on the Xeon D-2100 that seems ready for home labs is the affordable 4 core Supermicro SYS-E300-9D SuperServer that can go all the way to 512GB memory (DDR4/2666MHz) some day, discussed below. This article shows the research I did along the way, with a careful look at each aspect of the announcements. I'm also excited to announce I have some hands-on testing planned, but I haven't yet heard when they'll start shipping.
If you haven't already learned about the technical details of the Intel Xeon D-2100, you should read my article that analyzes industry coverage first:
The Intel Xeon D-2100 isn't so much a successor to the successful Broadwell-based Xeon D-1500 line as a complementary, higher-end offering, expanding the range of options available to OEMs and featuring greatly increased expansion possibilities. Both product lines continue onward in parallel, and both are based on Intel® 14 nm technology.
March 2015 - The Xeon D-1540 is announced
The first Xeon D was the Xeon D-1540, an 8 core offering announced March 2015, and first shipping in volume by Supermicro in July, but I got my hands on mine on June 25, 2015, which may have been one of the very first ever produced.
Feb 2016 - Many second generation Xeon D-1500 CPUs announced
I've added some product hyperlinks and pictures of the systems and motherboards being announced in the press releases below, but the text itself is unchanged.
Note that both press releases below link to the Embedded Solutions Brochure, but that older PDF is still from October 2017. The product pages for the SuperServers themselves are only starting to surface, with several just saying "Coming Soon" on the Xeon D-2100 X11 UP (Uni-processor) page, with a Mar 10 2018 screenshot of that page seen at right, click twice to really zoom in.
See also the entire portfolio of SuperServer systems and motherboards at:
Now let's roll up our sleeves and dive into the two recent press releases by Supermicro that feature Xeon D-2100. Highlighted are the parts most relevant to the admittedly niche home lab / virtualization enthusiast market, which seeks out lots of memory and CPU cores, along with high-speed networking soldered right into the reasonably efficient and quiet form factors that these SoC designs offer. Stick with me here; you'll also find a detailed summary of my feelings about these announcements below.
Mini-ITX platforms based on the New Intel® Xeon® D-2100 SoC (System-on-a-Chip) Processor for compact high-performance, low power, feature rich embedded and IoT (Internet of Things) applications SAN JOSE, Calif., February 7, 2018 — Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in enterprise computing, storage, networking solutions and green computing technology, today announced several new additions to its edge computing and network appliance portfolio based on the new Intel® Xeon® D-2100 SoC (System-on-a-Chip) processor.
Leveraging its deep expertise in server technology, Supermicro is bringing customers some of the first Intel® Xeon® D-2100 System-on-a-Chip (SoC) processor-based solutions. The company’s X11SDV series motherboards offer infrastructure optimization by combining the performance and advanced intelligence of Intel® Xeon® processors into a dense, lower-power system-on-a-chip. Supermicro is introducing a wide range of new systems to the market including compact embedded systems, rackmount embedded systems, as well as multi-node MicroCloud and SuperBlade systems.
With server-class reliability, availability and serviceability (RAS) features now available in an ultra-dense, low-power device, Supermicro X11SDV platforms deliver balanced compute and storage for intelligent edge computing and network appliances. These advanced technology building blocks offer the best workload optimized solutions and long life availability with the Intel® Xeon® D-2100 processor family, available with up to 18 processor cores, up to 512GB DDR4 four-channel memory operating at 2666MHz, up to four 10GbE LAN ports with RDMA support, and available with integrated Intel® QuickAssist Technology (Intel® QAT) crypto/encrypt/decrypt acceleration engine and internal storage expansion options including mini-PCIe, M.2 and NVMe support.
“These compact new Supermicro Embedded Building Block solutions bring advanced technologies and performance into a dense, low-power system-on-a-chip architecture, extending intelligence to the data center and network edge,” said Charles Liang, President and CEO of Supermicro. “With the vast growth of data driven workloads across embedded applications worldwide, Supermicro remains dedicated to developing powerful, agile, and scalable IoT gateway and compact server, storage and networking solutions that deliver the best end to end ecosystems for ease of deployment and open scalability.”
Supermicro’s new SYS-E300-9D is a compact box embedded system that is well-suited for the following applications: network security appliance, SD-WAN, vCPE controller box, and NFV edge computing server. Based on Supermicro’s X11SDV-4C-TLN2F mini-ITX motherboard with four-core, 60-watt Intel Xeon D-2123IT SoC this system supports up to 512GB memory, dual 10GbE RJ45 ports, quad USB ports, and one SATA/SAS hard drive, SSD or NVMe SSD.
The new SYS-5019D-FN8TP is a compact (less than 10-inch depth) 1U rackmount embedded system that is ideal for cloud and virtualization, network appliance and embedded applications. Featuring Supermicro’s X11SDV-8C-TP8F flex-ATX motherboard supporting the eight-core, 80-watt Intel Xeon D-2146NT SoC, this power and space efficient system with built-in Intel QAT crypto and compression supports up to 512GB memory, four GbE RJ45 ports, dual 10GbE SFP+ and dual 10GbE RJ45 ports, dual USB 3.0 ports, four 2.5” internal SATA/SAS hard drives or SSDs, and internal storage expansion options including mini-PCIe, M.2 and NVMe support.
Supermicro is introducing two new MicroCloud servers based on the new processors. Perfect for cloud computing, dynamic web serving, dedicated hosting, content delivery network, memory caching, and corporate applications, these systems support eight hot-pluggable server nodes in a 3U enclosure with a centralized IPMI server management port. The SYS-5039MD8-H8TNR features the 8-core, 65-watt Intel Xeon D-2141i SoC, and the new SYS-5039MD18-H8TNR features the 18-core, 86-watt Intel Xeon D-2191 SoC. Each server node for these MicroCloud systems supports up to 512GB of ECC memory, one PCI-E 3.0 x16 expansion slot, two hybrid storage drives that support U.2 NVMe/SATA3, two M.2 NVMe/SATA3 connectors, and dual GbE ports.
Supermicro’s 4U/8U SuperBlade enclosures feature blade servers that support new Intel Xeon D-2100 System-on-a-Chip (SoC) processors, including the 18-core D-2191 processor as well as the 16-core D2187NT processor with 100G Crypto/Compression. The blade servers support up to 512GB DDR4 memory, hot-plug 2.5” U.2 NVMe/SATA drives, M.2 NVMe, and 25Gb/10Gb Ethernet and 100G Intel® Omni-Path (OPA) or 100G EDR InfiniBand. Redundant chassis management Modules (CMM) with industry standard IPMI management tools, high-performance switches, integrated power supplies and cooling fans, Battery Backup Modules (BBP) make this all-in-one blade solution ideal for datacenter and cloud applications.
Supermicro showcasing industry’s broadest product portfolio of embedded servers & motherboards to support a wide range of markets from Industrial Automation, Retail, Medical, Transportation to Communications and Networking NUREMBERG, Germany, Feb. 27, 2018 — Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in enterprise computing, storage, networking solutions and green computing technology, today announced that it is highlighting new additions to its extensive edge computing and gateway product portfolio, including solutions based on the new Intel® Xeon® D-2100 SoC (System-on-a-Chip) and Intel® Atom® C3000 SoC processor at Embedded World 2018 from February 27-March 1, Nuremberg Exhibition Centre booth 1-330.
Leveraging its deep expertise in server technology, Supermicro is introducing a full line of solutions that support the latest Intel® Xeon® D-2100 SoC processors (codenamed Skylake-D) with up to 1.6x compute performance improvement compared to the last generation. The X11SDV series motherboards offer infrastructure optimization by combining the performance and advanced intelligence of the new system-on-a-chip processors into a dense, lower-power compact solution ideal for embedded applications.
“As the 5G era continues to emerge and Edge computing becomes more prevalent, Supermicro is ready with the industry’s best selection of embedded servers and motherboards to service a wide range of vertical markets including industrial automation, retail, medical, transportation, communication, and networking,” said Charles Liang, President and CEO of Supermicro. “With the vast growth of data driven workloads across embedded applications, Supermicro remains committed to developing powerful, scalable yet agile IoT gateway and compact server, storage and networking solutions that deliver the best ecosystems for the Edge with ease of deployment and open scalability.”
With server-class reliability, availability and serviceability (RAS) features now available in an ultra-dense, low-power device, Supermicro’s X11SDV platforms deliver balanced compute, storage and networking for the intelligent Edge. These advanced technology building blocks offer the best workload optimized solutions and long life availability with up to 18 processor cores, up to 512GB DDR4 four-channel memory operating at 2666MHz, up to four 10GbE LAN ports with RDMA support, and available with integrated Intel® QAT (Quick Assist Technology) crypto/encrypt/decrypt acceleration engine and internal storage expansion options including mini-PCIe, M.2 and NVMe support.
In addition to these new solutions in compact box and 1U rack systems, Supermicro is also showcasing the latest low-power Intel® Atom® embedded solutions in fan-less compact box, short-depth 1U rack and mini-tower systems. For more information on these diverse low-power products, please go to: www.supermicro.com/products/nfo/Atom.cfm.
Supermicro is demonstrating fan-less modular solutions using passive cooling techniques that can operate in extreme temperature and provide protection from dust ingress and condensation with IP51 (Ingress Protection) certification, too. The industrial grade compact enclosure designs provide modular expansion housing ultra-small form factor 3.5” SBC or 2.5” Pico-ITX designs with integrated DC power support, storage expansion and wireless networking capabilities.
512GB!
With Xeon D-2100 supporting 512GB of memory, and maybe even 1TB of memory someday, you'll likely need to think about memory interleaving again, something you didn't need to think about on the Xeon D-1500, where you could buy 2 32GB DIMMs with your Bundle and enjoy full speeds whether or not you later filled the remaining 2 slots with 2 more 32GB DIMMs, with no wasteful removal of the DDR4 you had already invested in. This is no small thing, as DDR4 prices have risen from around $175 when I first got mine in 2016 all the way to $375 now in 2018.
Looks like unbuffered DIMM support also went away (to be expected since Skylake-D is from higher up the chain)
Intel
Did Intel just (inadvertently?) throw a little shade on smaller OEMs? See Intel's February 7 2018 Press Release
...
Today signals the general availability of the processor in the market, and we are working with industry leaders including Dell EMC, Ericsson, F5, NEC, NetApp, Palo Alto Networks and Supermicro to deliver solutions to our joint end-customers. ...
where you'll notice Supermicro as one of the launch partners, but no mention of the Gigabyte, ASRock, and Tyan offerings that are clearly listed on Patrick's excellent Xeon D-2100 Launch Central. Oops? Perhaps they're just delayed, but at least a nod to their planned efforts would seem to have been a nice gesture. Also note that Dell EMC is listed, but I'm finding nothing out there, not yet anyway.
Supermicro
Mini Towers - Nothing with Intel Xeon D-2100 inside, darn!
Click the image to visit William Lam's article that details the SYS-E200-8D-based BOM, used for the VMworld 2017 Hackathon. Photo Credit - William Lam
Also note that there is no SYS-E200-9D, which could have been a step up from the popular SYS-E200-8D, the Mini ITX system that was successfully used for the VMworld 2017 Hackathon, pictured at right.
Pictured below, you'll see just how good the versatile mini-tower can look, with 8 or 12 cores, a much quieter (slower spinning/large) chassis fan, plenty of internal storage bays, and a PCIe slot that can accommodate up to 4 more M.2 NVMe drives. Can you tell I'm more than a bit disappointed that the Xeon D-2100 hasn't arrived in this form factor, or something like it? Ideally a quiet system with two full-height PCIe slots in a Flex ATX cube, with plenty of U.2, SATA, and M.2 NVMe storage bays. Now that would be something I'd be very interested in TinkerTry'ing, to see if it would be suitable for inclusion in a future Bundle. Supermicro, or not Supermicro.
Supermicro SuperServer SYS-5028D-TN4T, resting on TinkerTry's new workbench, using only 60 watts at idle, even with several SSDs installed.
My take on the Intel Xeon D-2100.
If you really need and can afford up to 512GB of RAM, or you need seamless vMotion between Xeon D-2100 and new systems featuring the Intel Xeon Scalable processors (Purley), or you need more than 12 cores, then by all means, yes, you should have a look at the Xeon D-2100, available in 1U and 2U form factors. Of course, it is safer to wait for others to kick the tires first, ideally on the exact OS you're planning to run.
My take on the Supermicro SuperServer SYS-E300-9D.
If you need a lot of memory in a compact form factor as the basis of something like a (test/unsupported) vSAN cluster, then this appears to be a good choice. This is conjecture, based purely on what's been published so far, with nobody reporting any hands-on time with this exact system just yet.
The good
the RJ-45 connectors from the 2 10GBase-T LAN ports can also be used for 1GbE networking that is common in home labs
if you need to go past Xeon D-1500's limit of 128GB of RAM in each of your home lab cluster nodes, this could be a great choice for you, as long as it's located far from living space
it features a 120W DC Power Adapter (brick), which should be sufficient for whatever load you can exert by using all storage and PCIe slot options
from the press releases, it's nice to see fanless designs are still very much a thing at Supermicro, let's hope they make lower TDP Xeon D systems available without fans in the future, at reasonable prices
RDMA support (which wasn't in Xeon D-1500) can be very good for vSAN (suspected to be iWARP, not RoCE)
The not so good
no M.2 NVMe PCIe 3.0 x4 slot on any of the Mini ITX Xeon D-2100 systems and motherboards; this is not good, since NVMe is great for tremendously fast VMFS or NTFS datastores
60 watts CPU TDP for the Intel Xeon processor D-2123IT 4-Core/8 Threads in that small CSE-E300 chassis will invariably mean some loud dB levels from those 40mm fans, even at idle, especially when compared to the SYS-E300-8D that featured a much lower 35 watt TDP
that external 120W power brick is likely to be quite large, based on my testing of Supermicro 84 watt power bricks
you'll likely still need to buy the optional riser just to use PCIe cards
even with the riser, you can only use one of the two PCI-E 3.0 x8 (LP) slots
the two 10GBase-T (RJ45) jacks use slightly more power and have slightly higher latency than the SFP+ ports that the SYS-E300-8D included, as did its Flex ATX Xeon D-1521 cousin
the cramped design severely limits storage options, which means you don't get to benefit from the wide range of storage possibilities that Xeon D-2100 offers, such as OCuLink for NVMe
it's only 4 cores, unlike the 8 cores and up available in the full 1U rack mount SYS-5019D-FN8TP
still no NVDIMM support, which could be handy for vSAN
still no vSAN compatible embedded HBA, there is market differentiation going on here
The open questions
the picture on the product page is likely just a re-used E300-8D pic, based on the PCI slot being shown with 4 NIC ports that didn't actually come with that system; we won't know for sure until my loaner arrives
it will be interesting to see if they ship the SYS-E300-9D with 3 40mm fans, since the SYS-E300-8D shipped with only two fans
Supermicro Mini 1U E300-9D is the one to watch, for now...
Check out the full specs on the product page:
I have reached out to Supermicro and Wiredzone to see whether a mini-tower system based on Xeon D-2100 is something they're interested in, and of course I'll let you know if I make any progress. If that sounds good to you too, please let us know by dropping a comment below.
As for VMware compatibility, that's looking even easier than ESXi on Xeon D-1500, see:
I will be testing the SYS-E300-9D, hopefully soon
I did get assurance that I'll be getting my hands on a loaner Supermicro SYS-E300-9D SuperServer soon, but I don't have a date yet. You can be sure I'll be testing it like crazy, aka TinkerTry'ing it; have a look at my earlier first-in-world close looks at the SYS-E200-8D and SYS-E300-8D here. Honestly, I currently don't even know when shipments from resellers like Wiredzone will begin, but Wiredzone's listing currently shows "Build To Order (Ships in 3-4 Days)," so I suspect folks will be receiving their pre-ordered systems soon. What's odd is that I don't yet see a list of certified memory on the product pages at Supermicro and Wiredzone, and there are no links to BIOS versions either. The systems are expected to ship with Spectre/Meltdown mitigation already applied though, a detail called out by Intel in their Xeon D-2100 announcement.
Pre-order (non-bundle/bare-bones)
If you're not interested in waiting until this new chipset has been TinkerTry'd, it seems orders are being taken already, but the tested memory list doesn't appear to be ready just yet.
Click to visit the product page via TinkerTry's affiliate link at Wiredzone, where the price is currently $699 as of March 10 2018, but this is subject to change at any time. Keep in mind bare-bones means you'll need to buy memory and storage.
I have added RDMA, Optane, M.2 (none!), and vSAN items to the good and bad lists above.
This is when comments below articles and tweets are really fun for me, I love the feedback, as we share ideas and widely varying perspectives with one another. This is all armchair quarterbacking here, not until I get some actual GA level product (not engineering samples) will I be able to conclude much of anything with any real certainty.
Meanwhile, we can look at various STH articles, and a simple Intel ARK comparison:
between the following systems:
SYS-E300-8D, featuring the Xeon D-1518: 4 cores, TDP 35 W, $234
SYS-E300-9D, featuring the Xeon D-2123IT: 4 cores, TDP 60 W, $213
SYS-5028D-TN4T, featuring the Xeon D-1541: 8 cores, TDP 45 W, $581
SYS-5019D-FN8TP, featuring the Xeon D-2146NT: 8 cores, TDP 80 W, $641
What you'll notice is that the 8 core Xeon D-2146NT seems to make a lot more sense if you're going to be spending over $3000 to get yourself a 128GB system, given today's RAM prices. In other words, would you really want just 4 cores at 60 watts when $400 more gets you 8 cores, at a cost of 20 more watts? This makes folks who are OK with the size and noise of 1U rack mount gear likely to lean toward the SYS-5019D-FN8TP, which isn't yet showing as available for pre-order at Wiredzone.
So if you need small dense nodes at the most NUC-like prices (and performance), the SYS-E300-9D is appealing, especially given it's priced about the same as the SYS-E300-8D. We're not quite sure about the RAM yet though.
If you need one beefier system and don't mind the size and noise of a 1U rack mount system, then the SYS-5019D-FN8TP may be the way to go, but keep in mind that two of its four 10G ports are SFP+, alongside two 10GBase-T RJ45 ports. See the list of all Xeon D-2100 (X11) systems and motherboards.
Click the image above to view the Intel ARK table.
Honestly, all of this leaves me not terribly excited about the Xeon D-2100 for all but the most enthusiastic of home lab buyers, since fan noise, BTU output, and increased electric bills all conspire to keep even the SYS-E300-9D from looking all that appealing versus the SYS-E300-8D before it, given the 25 extra watts used. Yes, that's even accounting for the performance gains it may offer some users, see:
The reason I say this is my experience measuring watts at idle (see graph), which is where most home lab systems left running 24x7 spend the vast majority of their time, even my used-every-day (including for this article) SuperServer Workstation. Where I live, electricity costs over 15 cents per kilowatt-hour, so this sort of delta really matters to me. Add to that the fact that running VMware tends to keep the watt burn at idle a bit higher than when running other OSs.
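To put that 25 watt delta in perspective with some quick back-of-the-envelope arithmetic: 25 W x 8,760 hours per year is about 219 kWh, and at $0.15 per kWh that works out to roughly $33 per year, per system, just for the idle-time difference.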
The reason I have only focused on the mini-tower Bundles I created is simple. Buyers of the mini 1U SYS-E300-8D and SYS-E200-8D complain about noise pretty often. Most SYS-5028D-TN4T mini-tower owners brag about how quiet their system is, enjoying less than a 0.5% return rate to Wiredzone. That's pretty amazing. Anybody that puts down $2K to $3K of their own money to invest in their home lab would likely want to get the most value they can, over the next several years of active use. Ideally with many expansion options, and a high family acceptance factor. These are just some of the reasons why I'm lamenting the fact that there is no mini-tower based on the Xeon D-2100 shipping by any OEM out there, not just Supermicro. That's a long-term opportunity, but in the short term, it's just disappointing.
With SFP+ holding diminished appeal for me personally, and with the $700 price uplift for the 16 core model versus the 12 core model, here's a BOM of my own personal idea of the dream machine. Admittedly, it might not be supportable with a full warranty, since it uses 3rd party components out of necessity. Most of my focus has been blogging about stuff that anybody can buy, anywhere in the world. The rough concept is to strike the right balance between performance, cost, and size, to help bridge the gap until 10nm Xeon D CPUs arrive and likely change everything, including NVDIMM support.
Qty 1 CPU Fan (Supermicro, or 3rd party, by necessity, since none of the Supermicro Flex ATX motherboards offer active cooling)
Qty 1 Chassis that can hold a Flex ATX motherboard with 2 full-height PCIe slots (3rd party, TBD), with at least two 3.5" SATA3 drive bays and four 2.5" SATA3 bays, and/or four U.2 NVMe drive bays with OCuLink cables
Qty 1 nearly silent Power Supply (Supermicro or 3rd party), at least 250 watts
Qty 1 OPTIONAL M.2 NVMe drive for super speedy VMFS (or NTFS) (model TBD)
Supermicro X11SDV-16C+-TLN2F is actively cooled; see the giant CPU heat sink fan. Photo courtesy of Supermicro.
Supermicro X11SDV-12C-TP8F is passively cooled, like all their Flex ATX motherboard designs; notice the absence of a CPU heat sink fan. Photo courtesy of Supermicro.
When I visited Supermicro's booth at VMworld 2016, I politely inquired why ESXi 6.5 wasn't listed yet, and here was Supermicro's response:
We do plan to certify all those models and put it on the VMware HCL as time permits. We will schedule them to put it on to the queue, as soon as we are back from VMworld.
Finally, today, I noticed that the VMware Hardware Compatibility Guide had recently been updated. The Intel Xeon D-1500 now lists VMware ESXi 6.5 and 6.5 Update 1 for Supermicro Xeon D-1500 systems like the various SYS-5028D-TN4T based Bundles, and the 1U rack mount SYS-5018D-FN4T. This is great news! This change apparently happened since I grabbed a screenshot back on Dec 04 2017, which showed many OEMs at 6.5.x, and many still lagging at 6.0 too, to be fair.
I'm relieved Supermicro was finally able to deliver on what they've been intending to do for uncomfortably long. This is good news for the many owners of the popular Xeon D that already run VMware ESXi 6.5.x smoothly, continuing to enjoy their time with the world's first Xeon D system of any brand well-suited for home lab use. This is ongoing assurance that opening a VMware Service Request for 6.5.x won't be an issue, should the need arise, if the technician decides it's relevant to ask you what exact hardware you're running ESXi on.
I'm really not sure why the popular SYS-E200-8D was neglected. That's right, it's still not even on the VMware VCG, for any ESXi version. This doesn't make much sense, since it's essentially the same as the mini-tower SYS-5028D-TN4T and may even share the same PCB. It just has fewer CPU cores, and passive CPU cooling; see the detailed comparison at:
What about the Flex ATX SYS-E300-8D that has enjoyed far fewer BIOS updates these past 2 years? I'm not sure what's going on there, but it was never listed for 6.0 either.
If you have a look at the VCG entry for the SYS-5028D-TN4T for example, you'll see a drop-down menu for 6.0 U1 all the way to 6.5 U1. The odd thing is that it says BIOS 1.2 (Boot Mode:Legacy BIOS) for 6.5 U1, but I don't know for sure why they only tested with Legacy BIOS mode, on the mid-2017 BIOS 1.2 release. The Bundles ship with UEFI mode set on BIOS 1.2c, and that combination works great, for any OS (Windows/Linux/VMware ESXi). It's likely just less clutter and paperwork to only list the as-shipped factory defaults.
In my testing, it's better to ship with UEFI on these days for modern OSs, allowing things like >2TB boot drives with GPT partitions. There's also less confusion in the boot selection menu, and it ensures capabilities like UEFI Secure Boot for ESXi hosts work with your easy ESXi install. These are things that are easily found if need be, and it's best to leave the Bundles shipping exactly the way they are: BIOS 1.2c with UEFI mode set.
Note: BIOS 1.2. Click to see the full-size table at the VMware VCG site.
TinkerTry.com, LLC is an independent site, has no sponsored posts, and all ads are run through 3rd party BuySellAds. All equipment and software is purchased for long-term productive use, and any rare exceptions are noted.
TinkerTry's relationship with Wiredzone is similar to the Amazon Associates program, where a very modest commission is earned from each referral sale from TinkerTry's SuperServer order page. I chose this trusted authorized reseller for its low cost and customer service, and a mutual desire to help folks worldwide, including a new way to reduce EU shipping costs. Why? Such commissions help reduce TinkerTry's reliance on advertisers, while building a community around the Xeon D-1500 chipset that strikes a great balance between efficiency and capability.
I personally traveled to Wiredzone near Miami FL to see the assembly room first-hand, and to Supermicro HQ in San Jose CA to share ideas and give direct product feedback.
I'm a full time IT Pro for the past 25 years. I've worked with IBM, HP, Dell, and Lenovo servers for hands-on implementation work across the US. Working as a VMware vSAN Systems Engineer lately, I'm quite enjoying finally owning a lower-cost Supermicro solution that I can recommend to IT Pro colleagues, knowing it will "just work." That's right, no tinkering required.
What's New
vCenter Server 6.5 Update 1g addresses issues that have been documented in the Resolved Issues section and Photon OS security vulnerabilities. For more information, see VMware vCenter Server Appliance Photon OS Security Patches.
Patches Contained in This Release
vCenter Server 6.5 Update 1g delivers the following patch. See the VMware Patch Download Center for more information on downloading patches.
The comment above is still relevant; admittedly, this is just one more-universal way to upgrade ESXi, a one-liner that avoids the need to download the ISO separately. Warning: booting from an upgrade ISO has the advantages of checking for CPU compatibility before installing, and a way to revert if things go wrong. The method below has neither safety advantage. All hypervisor upgrades come with risks, including the possibility of losing your network connections, so proceed at your own risk only after making sure you've backed up first. Read the entire article below before getting started.
Disclaimer/Disclosure - I cannot provide free support for your upgrade, especially given the variety of unsupported hardware out there. This article is focused on just one easy way to upgrade that may be suited for home labs, and was voluntarily authored. It has nothing to do with my employment at VMware, and is not official documentation. I work in the storage division, separate from the group developing and supporting the hypervisor.
Meltdown and Spectre are looming large this year, and this post is one of many that are in direct response to that potential threat, see also at TinkerTry:
Yes, this ESXi patch is the latest that mitigates Branch Target Injection, released today, March 20 2018. If you patch your system right away, you are by definition at the bleeding edge. You may want to wait and see how this upgrade goes for others before you jump in yourself. Since the procedure below is based on a reader's input and various refinements to the previous set of similar articles, any and all feedback through comments below would be greatly appreciated.
Warning!
Don't rush things. At a minimum, even for a home lab, you'll want to read this entire article before patching anything!
Read all three of the KB articles below for details on what this patch fixes. I have brief excerpts below each link, to encourage you to read each of the source KB articles in their entirety. Much of this information came from My VMware under Product Patches, where I searched on ESXi 6.5.0.
... Purpose
Recent microcode updates by Intel and AMD provide hardware support for branch target injection mitigation (Spectre v2). In order to use this new hardware feature within virtual machines, Hypervisor-Assisted Guest Mitigation must be enabled.
This document will focus on Hypervisor-Assisted Guest Mitigation as it pertains to vSphere. Please review KB52245: VMware Response to Speculative Execution security issues, CVE-2017-5753, CVE-2017-5715, CVE-2017-5754 (aka Spectre and Meltdown) for a complete view on VMware’s response to these issues.
See VMware Security Advisory VMSA-2018-0004.3 for the VMware provided patches related to this KB.
Resolution
Patching the VMware vSphere hypervisor and updating the CPU microcode (which the vSphere patches will do for the processors described in the table below) will allow guest operating systems to use hardware support for branch target mitigation.
To enable hardware support for branch target mitigation in vSphere, apply these steps, in the order shown:
Note: Ensure vCenter Server is updated first, for more information, see the vMotion and EVC Information section.
Upgrade to one of the following versions of vCenter 5.5 – 6.5:
6.5 U1g: Release Notes.
6.0 U3e: Release Notes.
5.5 U3h: Release Notes. Important: Please review the release notes for vCenter as there are new items listed in the ‘known issues’ section.
Apply both of the following ESXi patches. Note: these can both be applied at once so that only 1 reboot of the host is required:
ESXi 6.5: ESXi650-201803401-BG* and ESXi650-201803402-BG**
ESXi 6.0: ESXi600-201803401-BG* and ESXi600-201803402-BG**
ESXi 5.5: ESXi550-201803401-BG* and ESXi550-201803402-BG**
* These ESXi patches provide the framework to allow guest OSes to utilize the new speculative-execution control mechanisms. These patches do not contain microcode.
** These ESXi patches apply the microcode updates listed in the Table below. These patches do not contain the aforementioned framework. ...
Release date: March 20, 2018
Download Filename:
VIBs Included:
VMware_bootbank_esx-base_6.5.0-1.41.7967591
VMware_bootbank_esx-tboot_6.5.0-1.41.7967591
VMware_bootbank_vsan_6.5.0-1.41.7547709
VMware_bootbank_vsanhealth_6.5.0-1.41.7547710
Summaries and Symptoms
This patch updates the esx-base, esx-tboot, vsan and vsanhealth VIBs to resolve the following issue:
This ESXi patch provides part of the hypervisor-assisted guest mitigation of CVE-2017-5715 for guest operating systems. For important details on this mitigation, see VMware Security Advisory VMSA-2018-0004.3. ...
NOTE: If you have added the workaround step mentioned in KB 52345, the workaround will not be removed automatically if you apply the cpu-microcode VIB alone. You must also apply the VIBs in bulletin ESXi650-201803401-BG to remove the workaround.
Solution Summaries and Symptoms
This patch updates the cpu-microcode VIB to resolve the following issue:
This ESXi patch provides part of the hypervisor-assisted guest mitigation of CVE-2017-5715 for guest operating systems. For important details on this mitigation, see VMware Security Advisory VMSA-2018-0004.3. ...
I tend to put my modern systems BIOs setting to UEFI mode (instead of Dual), see details here, as a bit of future proofing. You can read Mike Foley's warnings in Secure Boot for ESXi 6.5 – Hypervisor Assurance
... Possible upgrade issues
UEFI secure boot requires that the original VIB signatures are persisted. Older versions of ESXi do not persist the signatures, but the upgrade process updates the VIB signatures.
If your host was upgraded using the ESXCLI command then your bootloader wasn’t upgraded and doesn’t persist the signatures. When you enable Secure Boot after the upgrade, an error occurs. You can’t use Secure Boot on these installations and will have to re-install from scratch to gain that support. ...
Backed up the ESXi 6.5.x hypervisor you've already installed and configured, for easy roll-back in case things go wrong. If it's on USB or SD, it's best to clone to a new USB drive and boot from it, to be sure your "backup" is good. You can use something like one of the home-lab-friendly and super easy methods such as USB Image Tools under Windows, as detailed by Florian Grehl here.
I've occasionally had some trouble with that tool, so an alternative you may want to try (that I'm still testing) is EaseUS free backup software instead, note the blue button below is a direct download link, preventing you from having to provide an email address.
Direct Download - EaseUS free backup software for Windows allows you to easily clone USB drives.
Download and upgrade to VMware ESXI 6.5 Update 1 Build 7967591 using the patch bundle that comes directly from the VMware Online Depot
The entire process including reboot is usually well under 10 minutes, and many of the steps below are optional, making it appear more difficult than it is. Triple-clicking on a line of code below highlights the whole thing with a carriage return, so you can then right-click and copy it into your clipboard, which gets executed immediately upon pasting into your SSH session. If you want to edit the line before it's executed, manually swipe your mouse across each line of code with no trailing spaces at the end.
Open an SSH session (eg. PuTTY) to your ESXi 6.x server
(if you forgot to enable SSH, here's how)
OPTIONAL - Turn on Maintenance Mode - Alternatively, ensure you've set your ESXi host to automatically gracefully shutdown all VMs upon host reboot, or manually shutdown all the VMs gracefully that you care about, including VCSA.
OPTIONAL - Firewall allow outbound http requests - This command is likely not needed if you're upgrading from 6.5.x, and is here in case you get an error about https access. I'm trying to make these instructions applicable to the broadest set of readers. Paste the one line below into into your SSH session, then press enter:
esxcli network firewall ruleset set -e true -r httpClient
Dry Run - Taking this extra step will help you be sure of what is about to happen, before it actually happens.
Here's the simple command to cut-and-paste into your SSH session:
If you see some VIBs that are going to be removed that you need, you'll need to be fully prepared to manually re-install them after the actual upgrade below. If it's a network VIB that is used for your ESXi service console, you'll want to be extra careful to re-install that same VIB before rebooting your just-patched host(s). Don't just assume some later VIB version that it may offer will work fine with your hardware.
ACTUAL RUN - This is it, the all-in-one download and patch command, assuming your ESXi host has internet access. This will ppull down the ESXi Image Profile using https, then it will run the patch script.
When you paste this line into your SSH session and hit enter, you'll need to be patient, as nothing seems to happen at first. It will take somewhere between roughly 3 to 10 minutes before the completion screen (sample below) appears:
Firewall disallow outbound http requests - To return your firewall to how it was before (optional) step 3 above, simply copy and paste the following:
esxcli network firewall ruleset set -e false -r httpClient
Attention Xeon D-1500 Owners - See 3 lines you may want to paste in before rebooting, details below, then return to the next step when done.
Reboot - This is needed for the new hypervisor version to be loaded upon restart. You may want to watch the DCUI (local console) as it boots, to see if any errors show up.
reboot
OPTIONAL - If you turned on Maintenance Mode in step 2 above, you'll need to turn it off again once the reboot is complete, and you're able to login to turn it off manually.
You're Done! - You may want to be continue with checking whether everything is working correctly after your systems is back up again, but you are done with the update itself. YOu can also watch DCUI during the boot if you'd like, to see if you spot any warnings.
Test things out - Log in with ESXi Host Client (pointing your browser directly at your IP address or ESXi servername), and be sure everything seems to function fine. You may need to re-map USB devices to VMs that use USB, and you may need to re-map VT-d (passthrough) devices to VMs that use passthrough devices like GPUs.
You're Done! - If you're happy that everything seems to be working well, you're done!
Now that you've updated and rebooted, various UIs will show your ESXi version, depending upon where you look:
The default ESXi 6.5 install works great with your Intel SATA3 AHCI ports, but there are better drivers for your I350 1G ports, and a driver needed for your two X552/X557 10GbE ports, also handy as extra 1GbE connections. There's also a fix for odd RPM and temperature readings. No problem, all 3 items easily remedied, and the Xeon D-1500 is on the VMware HCL. These ESXCLI commands below are really only needed if you haven't already done these steps, and are experiencing the issues described below. Note that the not-yet-shipping Xeon D-2100 is said to work with the drivers that are included with ESXi 6.5 Update 1, but that doesn't necessarily mean they're fully supported at those versions. I'll soon have my hands on a loaner Supermicro SuperServer SYS-300-9D to find out for sure.
OPTIONAL - Xeon D-1567 - If your system uses the Xeon D-1567 (12 core) you may find the VMware ESXi 6.0 igbn 1.4.1 NIC Driver for Intel Ethernet Controllers 82580,I210,I350 and I354 family performs better for the service console on either ETH0 or ETH1 instead of the included-with-6.5U1EP4 VMware inbox driver for I-350 called VMW_bootbank_net-igb_5.0.5.1.1-5vmw.650.0.0.4564106. No need to download separately. Simply copy and paste the following one-liner easy fix:
before proceeding, or just download the VIB yourself, then follow the install instructions in the readme.
OPTIONAL - Xeon D with 10GbE - If your system includes two 10GbE Intel X552/X557 RJ45 or SFP+ NICs ports, they can be used for 1GbE or 10GbE speeds, but you'll need to regain the 10GbE Intel driver VIB that the upgrade process replaced with an older one that doesn't work with your X557. Simply copy and paste the following one-liner easy fix:
with the details and fully supported download method described in detail here before proceeding.
OPTIONAL - Xeon D with inaccurate RPM and Temperature readings in Health Status Monitoring
The reasons for these commands are explained in detail here, note, this fix shouldn't be necessary for future major releases of ESXi, with this bug already reported:
esxcli system wbem set --ws-man false
esxcli system wbem set --enable true
You should wind up with the same results after this upgrade as folks who upgrade by downloading the full ESXi 6.5 U1 ISO / creating bootable media from that ISO / booting from that media (or mounting the ISO over IPMI/iLO/iDRAC/IMM/iKMV) and booting from it:
Below, I've pasted the full text of my update. It will help you see what drivers are touched. Just use the horizontal scroll bar or shift + mousewheel to look around, and Ctrl+F to Find stuff quickly:
As also seen in my video of my previous upgrade, here's the full contents of my ssh session, as I completed my Xeon D-1541 upgrade from Version: 6.5.0 Update 1 (Build 7388607)
to: Version: 6.5.0 Update 1 (Build 7967591)
login as: root
Using keyboard-interactive authentication.
Password:
The time and date of this login have been sent to the system logs.
WARNING:
All commands run on the ESXi shell are logged and may be included in
support bundles. Do not provide passwords directly on the command line.
Most tools can prompt for secrets or accept them from standard input.
VMware offers supported, powerful system administration tools. Please
see www.vmware.com/go/sysadmintools for details.
The ESXi Shell can be disabled by an administrative user. See the
vSphere Security documentation for more information.
[root@xd-1567-5028d:~] esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.5.0-20180304001-standard --dry-run
Update Result
Message: Dryrun only, host not changed. The following installers will be applied: [BootBankInstaller]
Reboot Required: true
VIBs Installed: VMware_bootbank_cpu-microcode_6.5.0-1.41.7967591, VMware_bootbank_esx-base_6.5.0-1.41.7967591, VMware_bootbank_esx-tboot_6.5.0-1.41.7967591, VMware_bootbank_vsan_6.5.0-1.41.7547709, VMware_bootbank_vsanhealth_6.5.0-1.41.7547710
VIBs Removed: VMware_bootbank_cpu-microcode_6.5.0-0.0.4564106, VMware_bootbank_esx-base_6.5.0-1.36.7388607, VMware_bootbank_esx-tboot_6.5.0-1.36.7388607, VMware_bootbank_vsan_6.5.0-1.36.7388608, VMware_bootbank_vsanhealth_6.5.0-1.36.7388609
VIBs Skipped: VMW_bootbank_ata-libata-92_3.00.9.2-16vmw.650.0.0.4564106, VMW_bootbank_ata-pata-amd_0.3.10-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-atiixp_0.4.6-4vmw.650.0.0.4564106, VMW_bootbank_ata-pata-cmd64x_0.2.5-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-pdc2027x_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-serverworks_0.4.3-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-sil680_0.4.8-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-via_0.3.3-2vmw.650.0.0.4564106, VMW_bootbank_block-cciss_3.6.14-10vmw.650.0.0.4564106, VMW_bootbank_char-random_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ehci-ehci-hcd_1.0-4vmw.650.0.14.5146846, VMW_bootbank_elxnet_11.1.91.0-1vmw.650.0.0.4564106, VMW_bootbank_hid-hid_1.0-3vmw.650.0.0.4564106, VMW_bootbank_i40en_1.3.1-5vmw.650.1.26.5969303, VMW_bootbank_igbn_0.1.0.0-15vmw.650.1.36.7388607, VMW_bootbank_ima-qla4xxx_2.02.18-1vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-devintf_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ixgben_1.4.1-2vmw.650.1.26.5969303, VMW_bootbank_lpfc_11.1.0.6-1vmw.650.0.0.4564106, VMW_bootbank_lsi-mr3_6.910.18.00-1vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt2_20.00.01.00-3vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt3_12.00.02.00-11vmw.650.0.0.4564106, VMW_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.650.0.0.4564106, VMW_bootbank_misc-drivers_6.5.0-1.36.7388607, VMW_bootbank_mtip32xx-native_3.9.5-1vmw.650.0.0.4564106, VMW_bootbank_ne1000_0.8.0-16vmw.650.1.26.5969303, VMW_bootbank_nenic_1.0.0.2-1vmw.650.0.0.4564106, VMW_bootbank_net-bnx2_2.2.4f.v60.10-2vmw.650.0.0.4564106, VMW_bootbank_net-bnx2x_1.78.80.v60.12-1vmw.650.0.0.4564106, VMW_bootbank_net-cdc-ether_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-cnic_1.78.76.v60.13-2vmw.650.0.0.4564106, VMW_bootbank_net-e1000_8.0.3.1-5vmw.650.0.0.4564106, VMW_bootbank_net-e1000e_3.2.2.1-2vmw.650.0.0.4564106, VMW_bootbank_net-enic_2.1.2.38-2vmw.650.0.0.4564106, VMW_bootbank_net-fcoe_1.0.29.9.3-7vmw.650.0.0.4564106, VMW_bootbank_net-forcedeth_0.61-2vmw.650.0.0.4564106, VMW_bootbank_net-igb_5.0.5.1.1-5vmw.650.0.0.4564106, VMW_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.650.0.0.4564106, VMW_bootbank_net-libfcoe-92_1.0.24.9.4-8vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-core_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-en_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-nx-nic_5.0.621-5vmw.650.0.0.4564106, VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.650.0.0.4564106, VMW_bootbank_net-usbnet_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.650.0.0.4564106, VMW_bootbank_nhpsa_2.0.6-3vmw.650.0.0.4564106, VMW_bootbank_nmlx4-core_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-en_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-rdma_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx5-core_4.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_ntg3_4.1.3.0-1vmw.650.1.36.7388607, VMW_bootbank_nvme_1.2.0.32-5vmw.650.1.36.7388607, VMW_bootbank_nvmxnet3_2.0.0.23-1vmw.650.1.36.7388607, VMW_bootbank_ohci-usb-ohci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_pvscsi_0.1-1vmw.650.1.26.5969303, VMW_bootbank_qedentv_2.0.3.29-1vmw.650.0.0.4564106, VMW_bootbank_qfle3_1.0.2.7-1vmw.650.0.0.4564106, VMW_bootbank_qflge_1.1.0.3-1vmw.650.0.0.4564106, VMW_bootbank_qlnativefc_2.1.50.0-1vmw.650.1.26.5969303, VMW_bootbank_sata-ahci_3.0-26vmw.650.1.26.5969303, VMW_bootbank_sata-ata-piix_2.12-10vmw.650.0.0.4564106, VMW_bootbank_sata-sata-nv_3.5-4vmw.650.0.0.4564106, 
VMW_bootbank_sata-sata-promise_2.12-3vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil24_1.1-1vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil_2.3-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-svw_2.3-3vmw.650.0.0.4564106, VMW_bootbank_scsi-aacraid_1.1.5.1-9vmw.650.0.0.4564106, VMW_bootbank_scsi-adp94xx_1.0.8.12-6vmw.650.0.0.4564106, VMW_bootbank_scsi-aic79xx_3.1-5vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-fnic_1.5.0.45-3vmw.650.0.0.4564106, VMW_bootbank_scsi-hpsa_6.0.0.84-1vmw.650.0.0.4564106, VMW_bootbank_scsi-ips_7.12.05-4vmw.650.0.0.4564106, VMW_bootbank_scsi-iscsi-linux-92_1.0.0.2-3vmw.650.0.0.4564106, VMW_bootbank_scsi-libfc-92_1.0.40.9.3-5vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid2_2.00.4-9vmw.650.0.0.4564106, VMW_bootbank_scsi-mpt2sas_19.00.00.00-1vmw.650.0.0.4564106, VMW_bootbank_scsi-mptsas_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-mptspi_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.650.0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-3-0_6.5.0-0.0.4564106, VMW_bootbank_uhci-usb-uhci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usb-storage-usb-storage_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usbcore-usb_1.0-3vmw.650.1.26.5969303, VMW_bootbank_vmkata_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmkplexer-vmkplexer_6.5.0-0.0.4564106, VMW_bootbank_vmkusb_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmw-ahci_1.0.0-39vmw.650.1.26.5969303, VMW_bootbank_xhci-xhci_1.0-3vmw.650.0.0.4564106, VMware_bootbank_emulex-esx-elxnetcli_11.1.28.0-0.0.4564106, VMware_bootbank_esx-dvfilter-generic-fastpath_6.5.0-1.36.7388607, VMware_bootbank_esx-ui_1.23.0-6506686, VMware_bootbank_esx-xserver_6.5.0-0.23.5969300, VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-5vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-10vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-7vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-8vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-mpt2sas-plugin_2.0.0-6vmw.650.1.26.5969303, VMware_bootbank_native-misc-drivers_6.5.0-0.0.4564106, VMware_bootbank_rste_2.0.2.0088-4vmw.650.0.0.4564106, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.10-1.26.5969303, VMware_locker_tools-light_6.5.0-1.33.7273056
[root@xd-1567-5028d:~] esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.5.0-20180304001-standard
Update Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: VMware_bootbank_cpu-microcode_6.5.0-1.41.7967591, VMware_bootbank_esx-base_6.5.0-1.41.7967591, VMware_bootbank_esx-tboot_6.5.0-1.41.7967591, VMware_bootbank_vsan_6.5.0-1.41.7547709, VMware_bootbank_vsanhealth_6.5.0-1.41.7547710
VIBs Removed: VMware_bootbank_cpu-microcode_6.5.0-0.0.4564106, VMware_bootbank_esx-base_6.5.0-1.36.7388607, VMware_bootbank_esx-tboot_6.5.0-1.36.7388607, VMware_bootbank_vsan_6.5.0-1.36.7388608, VMware_bootbank_vsanhealth_6.5.0-1.36.7388609
VIBs Skipped: VMW_bootbank_ata-libata-92_3.00.9.2-16vmw.650.0.0.4564106, VMW_bootbank_ata-pata-amd_0.3.10-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-atiixp_0.4.6-4vmw.650.0.0.4564106, VMW_bootbank_ata-pata-cmd64x_0.2.5-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-pdc2027x_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-serverworks_0.4.3-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-sil680_0.4.8-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-via_0.3.3-2vmw.650.0.0.4564106, VMW_bootbank_block-cciss_3.6.14-10vmw.650.0.0.4564106, VMW_bootbank_char-random_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ehci-ehci-hcd_1.0-4vmw.650.0.14.5146846, VMW_bootbank_elxnet_11.1.91.0-1vmw.650.0.0.4564106, VMW_bootbank_hid-hid_1.0-3vmw.650.0.0.4564106, VMW_bootbank_i40en_1.3.1-5vmw.650.1.26.5969303, VMW_bootbank_igbn_0.1.0.0-15vmw.650.1.36.7388607, VMW_bootbank_ima-qla4xxx_2.02.18-1vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-devintf_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ixgben_1.4.1-2vmw.650.1.26.5969303, VMW_bootbank_lpfc_11.1.0.6-1vmw.650.0.0.4564106, VMW_bootbank_lsi-mr3_6.910.18.00-1vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt2_20.00.01.00-3vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt3_12.00.02.00-11vmw.650.0.0.4564106, VMW_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.650.0.0.4564106, VMW_bootbank_misc-drivers_6.5.0-1.36.7388607, VMW_bootbank_mtip32xx-native_3.9.5-1vmw.650.0.0.4564106, VMW_bootbank_ne1000_0.8.0-16vmw.650.1.26.5969303, VMW_bootbank_nenic_1.0.0.2-1vmw.650.0.0.4564106, VMW_bootbank_net-bnx2_2.2.4f.v60.10-2vmw.650.0.0.4564106, VMW_bootbank_net-bnx2x_1.78.80.v60.12-1vmw.650.0.0.4564106, VMW_bootbank_net-cdc-ether_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-cnic_1.78.76.v60.13-2vmw.650.0.0.4564106, VMW_bootbank_net-e1000_8.0.3.1-5vmw.650.0.0.4564106, VMW_bootbank_net-e1000e_3.2.2.1-2vmw.650.0.0.4564106, VMW_bootbank_net-enic_2.1.2.38-2vmw.650.0.0.4564106, VMW_bootbank_net-fcoe_1.0.29.9.3-7vmw.650.0.0.4564106, VMW_bootbank_net-forcedeth_0.61-2vmw.650.0.0.4564106, VMW_bootbank_net-igb_5.0.5.1.1-5vmw.650.0.0.4564106, VMW_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.650.0.0.4564106, VMW_bootbank_net-libfcoe-92_1.0.24.9.4-8vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-core_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-en_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-nx-nic_5.0.621-5vmw.650.0.0.4564106, VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.650.0.0.4564106, VMW_bootbank_net-usbnet_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.650.0.0.4564106, VMW_bootbank_nhpsa_2.0.6-3vmw.650.0.0.4564106, VMW_bootbank_nmlx4-core_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-en_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-rdma_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx5-core_4.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_ntg3_4.1.3.0-1vmw.650.1.36.7388607, VMW_bootbank_nvme_1.2.0.32-5vmw.650.1.36.7388607, VMW_bootbank_nvmxnet3_2.0.0.23-1vmw.650.1.36.7388607, VMW_bootbank_ohci-usb-ohci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_pvscsi_0.1-1vmw.650.1.26.5969303, VMW_bootbank_qedentv_2.0.3.29-1vmw.650.0.0.4564106, VMW_bootbank_qfle3_1.0.2.7-1vmw.650.0.0.4564106, VMW_bootbank_qflge_1.1.0.3-1vmw.650.0.0.4564106, VMW_bootbank_qlnativefc_2.1.50.0-1vmw.650.1.26.5969303, VMW_bootbank_sata-ahci_3.0-26vmw.650.1.26.5969303, VMW_bootbank_sata-ata-piix_2.12-10vmw.650.0.0.4564106, VMW_bootbank_sata-sata-nv_3.5-4vmw.650.0.0.4564106, 
VMW_bootbank_sata-sata-promise_2.12-3vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil24_1.1-1vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil_2.3-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-svw_2.3-3vmw.650.0.0.4564106, VMW_bootbank_scsi-aacraid_1.1.5.1-9vmw.650.0.0.4564106, VMW_bootbank_scsi-adp94xx_1.0.8.12-6vmw.650.0.0.4564106, VMW_bootbank_scsi-aic79xx_3.1-5vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-fnic_1.5.0.45-3vmw.650.0.0.4564106, VMW_bootbank_scsi-hpsa_6.0.0.84-1vmw.650.0.0.4564106, VMW_bootbank_scsi-ips_7.12.05-4vmw.650.0.0.4564106, VMW_bootbank_scsi-iscsi-linux-92_1.0.0.2-3vmw.650.0.0.4564106, VMW_bootbank_scsi-libfc-92_1.0.40.9.3-5vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid2_2.00.4-9vmw.650.0.0.4564106, VMW_bootbank_scsi-mpt2sas_19.00.00.00-1vmw.650.0.0.4564106, VMW_bootbank_scsi-mptsas_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-mptspi_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.650.0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-3-0_6.5.0-0.0.4564106, VMW_bootbank_uhci-usb-uhci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usb-storage-usb-storage_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usbcore-usb_1.0-3vmw.650.1.26.5969303, VMW_bootbank_vmkata_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmkplexer-vmkplexer_6.5.0-0.0.4564106, VMW_bootbank_vmkusb_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmw-ahci_1.0.0-39vmw.650.1.26.5969303, VMW_bootbank_xhci-xhci_1.0-3vmw.650.0.0.4564106, VMware_bootbank_emulex-esx-elxnetcli_11.1.28.0-0.0.4564106, VMware_bootbank_esx-dvfilter-generic-fastpath_6.5.0-1.36.7388607, VMware_bootbank_esx-ui_1.23.0-6506686, VMware_bootbank_esx-xserver_6.5.0-0.23.5969300, VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-5vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-10vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-7vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-8vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-mpt2sas-plugin_2.0.0-6vmw.650.1.26.5969303, VMware_bootbank_native-misc-drivers_6.5.0-0.0.4564106, VMware_bootbank_rste_2.0.2.0088-4vmw.650.0.0.4564106, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.10-1.26.5969303, VMware_locker_tools-light_6.5.0-1.33.7273056
[root@xd-1567-5028d:~] reboot
What's New
vCenter Server 6.5 Update 1g addresses issues that have been documented in the Resolved Issues section and Photon OS security vulnerabilities. For more information, see VMware vCenter Server Appliance Photon OS Security Patches.
Patches Contained in This Release
vCenter Server 6.5 Update 1g delivers the following patch. See the VMware Patch Download Center for more information on downloading patches.
2. Relevant Products
VMware vCenter Server (VC)
VMware vSphere ESXi (ESXi)
VMware Workstation Pro / Player (Workstation)
VMware Fusion Pro / Fusion (Fusion)
3. Problem Description - New speculative-execution control mechanism for Virtual Machines
Updates of vCenter Server, ESXi, Workstation and Fusion virtualize the new speculative-execution control mechanism for Virtual Machines (VMs). As a result, a patched Guest Operating System (Guest OS) can remediate the Branch Target Injection issue (CVE-2017-5715). This issue may allow for information disclosure between processes within the VM. ...
Column 5 of the following table lists the action required to remediate the vulnerability in each release, if a solution is available.
First portion of the table, click the image to visit the source article.
Here's the 2 patches for VCSA and ESXi 6.5 that the table above points to, hyperlinked for you:
VCSA 6.5 U1g available here, and Release Notes. Name: VMware-VCSA-all-6.5.0-8024368.iso Release Date: 2018-03-20 Build Number: 8024368
ESXi 6.5: ESXi650-201803401-BG KB52460 and ESXi650-201803402-BG KB52461 are both seen here, with ESXi Build Number 7967591 KB52456.
The comment above is still relevant: admittedly, this is just one more universal way to upgrade ESXi, a one-liner that avoids the need to download the ISO separately. Warning - booting from an upgrade ISO has the advantages of checking for CPU compatibility before installing, and of offering a way to revert if things go wrong. The method below has neither safety advantage. All hypervisor upgrades come with risks, including the possibility of losing your network connections, so proceed at your own risk, and only after making sure you've backed up first. Read the entire article below before getting started.
Disclaimer/Disclosure - I cannot provide free support for your upgrade, especially given the variety of unsupported hardware out there. This article is focused on just one easy way to upgrade that may be suited for home labs, and was voluntarily authored. It has nothing to do with my employment at VMware, and is not official documentation. I work in the storage division, separate from the group developing and supporting the hypervisor.
Meltdown and Spectre are looming large this year, and this post is one of many that are in direct response to that potential threat, see also at TinkerTry:
Yes, this ESXi patch is the latest that mitigates Branch Target Injection, released today, March 20 2018. If you patch your system right away, you are by definition at the bleeding edge. You may want to wait and see how this upgrade goes for others before you jump in yourself. Since the procedure below is based on a reader's input and various refinements to the previous set of similar articles, any and all feedback through comments below would be greatly appreciated.
Warning!
Don't rush things. At a minimum, even for a home lab, you'll want to read this entire article before patching anything! Special thanks go out to VCDX 194 Matt Kozloski, whose invaluable feedback made this article much better than my previous set of ESXi update articles.
Read all three of the KB articles below for details on what this patch fixes. I have some brief excerpts below each link, to encourage you to read each of the source KB articles in their entirety. Much of this information came from My VMware under Product Patches, where I searched on ESXi 6.5.0.
... Purpose
Recent microcode updates by Intel and AMD provide hardware support for branch target injection mitigation (Spectre v2). In order to use this new hardware feature within virtual machines, Hypervisor-Assisted Guest Mitigation must be enabled.
This document will focus on Hypervisor-Assisted Guest Mitigation as it pertains to vSphere. Please review KB52245: VMware Response to Speculative Execution security issues, CVE-2017-5753, CVE-2017-5715, CVE-2017-5754 (aka Spectre and Meltdown) for a complete view on VMware’s response to these issues.
See VMware Security Advisory VMSA-2018-0004.3 for the VMware provided patches related to this KB.
Resolution
Patching the VMware vSphere hypervisor and updating the CPU microcode (which the vSphere patches will do for the processors described in the table below) will allow guest operating systems to use hardware support for branch target mitigation.
To enable hardware support for branch target mitigation in vSphere, apply these steps, in the order shown:
Note: Ensure vCenter Server is updated first, for more information, see the vMotion and EVC Information section.
Upgrade to one of the following versions of vCenter 5.5 – 6.5:
6.5 U1g: Release Notes.
6.0 U3e: Release Notes.
5.5 U3h: Release Notes. Important: Please review the release notes for vCenter as there are new items listed in the ‘known issues’ section.
Apply both of the following ESXi patches. Note: these can both be applied at once so that only 1 reboot of the host is required:
ESXi 6.5: ESXi650-201803401-BG* and ESXi650-201803402-BG**
ESXi 6.0: ESXi600-201803401-BG* and ESXi600-201803402-BG**
ESXi 5.5: ESXi550-201803401-BG* and ESXi550-201803402-BG**
* These ESXi patches provide the framework to allow guest OSes to utilize the new speculative-execution control mechanisms. These patches do not contain microcode.
** These ESXi patches apply the microcode updates listed in the Table below. These patches do not contain the aforementioned framework. ...
Release date: March 20, 2018
Download Filename:
VIBs Included:
VMware_bootbank_esx-base_6.5.0-1.41.7967591
VMware_bootbank_esx-tboot_6.5.0-1.41.7967591
VMware_bootbank_vsan_6.5.0-1.41.7547709
VMware_bootbank_vsanhealth_6.5.0-1.41.7547710
Summaries and Symptoms
This patch updates the esx-base, esx-tboot, vsan and vsanhealth VIBs to resolve the following issue:
This ESXi patch provides part of the hypervisor-assisted guest mitigation of CVE-2017-5715 for guest operating systems. For important details on this mitigation, see VMware Security Advisory VMSA-2018-0004.3. ...
NOTE: If you have added the workaround step mentioned in KB 52345, the workaround will not be removed automatically if you apply the cpu-microcode VIB alone. You must also apply the VIBs in bulletin ESXi650-201803401-BG to remove the workaround.
Solution Summaries and Symptoms
This patch updates the cpu-microcode VIB to resolve the following issue:
This ESXi patch provides part of the hypervisor-assisted guest mitigation of CVE-2017-5715 for guest operating systems. For important details on this mitigation, see VMware Security Advisory VMSA-2018-0004.3. ...
I tend to set my modern systems' BIOS to UEFI mode (instead of Dual), see details here, as a bit of future-proofing. You can read Mike Foley's warnings in Secure Boot for ESXi 6.5 – Hypervisor Assurance.
... Possible upgrade issues
UEFI secure boot requires that the original VIB signatures are persisted. Older versions of ESXi do not persist the signatures, but the upgrade process updates the VIB signatures.
If your host was upgraded using the ESXCLI command then your bootloader wasn’t upgraded and doesn’t persist the signatures. When you enable Secure Boot after the upgrade, an error occurs. You can’t use Secure Boot on these installations and will have to re-install from scratch to gain that support. ...
Back up the ESXi 6.5.x hypervisor you've already installed and configured, for easy roll-back in case things go wrong. If it's on USB or SD, it's best to clone to a new USB drive and boot from it, to be sure your "backup" is good. You can use a home-lab-friendly and super easy method such as USB Image Tool under Windows, as detailed by Florian Grehl here.
I've occasionally had some trouble with that tool, so an alternative you may want to try (that I'm still testing) is EaseUS free backup software instead. Note that the blue button below is a direct download link, which saves you from having to provide an email address.
Direct Download - EaseUS free backup software for Windows allows you to easily clone USB drives.
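If you'd rather clone your ESXi boot media from a Linux machine instead of Windows, a plain dd copy is a minimal sketch of the same idea. The device name below is just a placeholder, so double-check yours with lsblk first, since dd will happily overwrite the wrong disk:
# /dev/sdX is a placeholder - identify your actual USB stick with lsblk first
sudo dd if=/dev/sdX of=esxi-boot-backup.img bs=4M status=progress
Writing the image back to a fresh USB stick is the same command with if and of swapped.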
Download and upgrade to VMware ESXi 6.5 Update 1 Build 7967591 using the patch bundle that comes directly from the VMware Online Depot
The entire process including reboot is usually well under 10 minutes, and many of the steps below are optional, making it appear more difficult than it is. Triple-clicking a line of code below selects the whole line, including a trailing carriage return, so after you right-click to copy it to your clipboard, it will execute immediately when pasted into your SSH session. If you want to edit a line before it executes, select it manually by swiping your mouse across it instead, taking care not to grab any trailing spaces.
Open an SSH session (e.g., PuTTY) to your ESXi 6.x server
(if you forgot to enable SSH, here's how)
OPTIONAL - Turn on Maintenance Mode - Or you can just be sure to manually shut down all the VMs you care about gracefully, including VCSA. These instructions are geared to a home lab without High Availability enabled. This is also a good time to ensure you've set your ESXi host to automatically and gracefully shut down all VMs upon host reboot, or if you don't use vCenter or VCSA, use this Host Client method.
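If you'd rather enter Maintenance Mode from the same SSH session you're about to use anyway, this standard ESXCLI one-liner should do it, assuming your VMs are already powered off and you're not running vSAN, which needs more care:
esxcli system maintenanceMode set --enable true
The same command with --enable false takes the host back out after the post-upgrade reboot.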
OPTIONAL - Reboot (Pro Tip courtesy of VCDX 194 Matt Kozloski) - Consider rebooting your ESXi server, and maybe even giving it a hard power cycle, before updating. Matt explains:
if people are running on SD cards or USB sticks and they haven't rebooted the server in a LONG time to patch/update, I would strongly recommend doing a reboot of the server before applying any updates. I've seen, more than once, the SD card or the controller go into some funky state, and as ESXi is running largely in memory, it can come up half patched or not patched at all. A [cold] reboot before update helps with that (again, if a server has been running for a long period of time - like a year+ - since it was rebooted last). Cold (remove the power cables) can be important, if the SD card or USB stick is actually running on an embedded controller like iLO or iDRAC.
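Not sure how long your host has been up? A quick check from the same SSH session (uptime is part of ESXi's busybox shell environment):
uptime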
OPTIONAL - Firewall allow outbound http requests - This command is likely not needed if you're upgrading from 6.5.x, and is here in case you get an error about https access. I'm trying to make these instructions applicable to the broadest set of readers. Paste the one line below into your SSH session, then press enter:
esxcli network firewall ruleset set -e true -r httpClient
Dry Run - Taking this extra step will help you be sure of what is about to happen, before it actually happens.
Here's the simple command to cut-and-paste into your SSH session:
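esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.5.0-20180304001-standard --dry-run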
If you see VIBs slated for removal that you still need, be fully prepared to manually re-install them after the actual upgrade below. If one of them is the network VIB used by your ESXi service console, be extra careful to re-install that same VIB before rebooting your just-patched host(s). Don't just assume some later VIB version will work fine with your hardware; use what you know works, and carefully double-check the VMware Compatibility Guide (https://www.vmware.com/resources/compatibility/search.php) for the recommended version.
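One simple way to capture your current driver inventory for a before-and-after comparison is to save the VIB list to a file (standard ESXCLI; the filename is an arbitrary example):
esxcli software vib list > /tmp/vibs-before-upgrade.txt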
ACTUAL RUN - This is it, the all-in-one download and patch command, assuming your ESXi host has internet access. This will pull down the ESXi Image Profile using https, then it will run the patch script.
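esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.5.0-20180304001-standard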
When you paste this line into your SSH session and hit enter, you'll need to be patient, as nothing seems to happen at first. It will take somewhere between roughly 3 to 10 minutes before the completion screen (sample below) appears:
Firewall disallow outbound http requests - To return your firewall to how it was before (optional) step 4 above, simply copy and paste the following:
esxcli network firewall ruleset set -e false -r httpClient
Attention Xeon D-1500 Owners - See 3 lines you may want to paste in before rebooting, details below, then return to the next step when done.
Reboot - This is needed for the new hypervisor version to be loaded upon restart. You may want to watch the DCUI (local console) as it boots, to see if any errors show up.
reboot
OPTIONAL - Exit Maintenance Mode - If you turned on Maintenance Mode in step 2 above, turn it off again once the reboot is complete and you're able to log in.
You're Done! - You may want to continue checking that everything is working correctly once your system is back up, but you are done with the update itself.
Test things out - Log in with ESXi Host Client (pointing your browser directly at your IP address or ESXi servername), and be sure everything seems to function fine. You may need to re-map USB devices to VMs that use USB, and you may need to re-map VT-d (passthrough) devices to VMs that use passthrough devices like GPUs.
You're Done! - If you're happy that everything seems to be working well, you're done!
Now that you've updated and rebooted, various UIs will show your ESXi version, depending upon where you look:
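If you still have an SSH session open, you don't need a UI at all; either of these standard commands prints the version and build number:
vmware -vl
esxcli system version get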
No BIOS newer than Sep 2017's 1.2c has been published by Supermicro yet, but Intel has completed its microcode hand-off to all such OEMs.
The default ESXi 6.5 install works great with your Intel SATA3 AHCI ports, but there are better drivers for your I350 1GbE ports, and a driver is needed for your two X552/X557 10GbE ports, which are also handy as extra 1GbE connections. There's also a fix for odd RPM and temperature readings. No problem, all 3 items are easily remedied, and the Xeon D-1500 is on the VMware HCL. The ESXCLI commands below are really only needed if you haven't already done these steps and are experiencing the issues described below. Note that the not-yet-shipping Xeon D-2100 is said to work with the drivers included with ESXi 6.5 Update 1, but that doesn't necessarily mean they're fully supported at those versions. I'll soon have my hands on a loaner Supermicro SuperServer SYS-E300-9D to find out for sure.
OPTIONAL - Xeon D 12 or 16 core - If your system uses the Xeon D-1557, Xeon D-1567, or Xeon D-1587, you may find the VMware ESXi 6.0 igbn 1.4.1 NIC Driver for Intel Ethernet Controllers 82580, I210, I350 and I354 family performs better for the service console on either ETH0 or ETH1 than the VMware inbox I350 driver included with 6.5U1EP4, VMW_bootbank_net-igb_5.0.5.1.1-5vmw.650.0.0.4564106. This driver can help prevent ethernet enumeration (reversal) issues, where the service console can jump over to the second I350 port after updating. Simply copy and paste the following one-liner:
before proceeding, or just download the VIB yourself, then follow the install instructions in the readme.
OPTIONAL - Xeon D with 10GbE - If your system includes two 10GbE Intel X552/X557 RJ45 or SFP+ NIC ports, they can be used at 1GbE or 10GbE speeds, but you'll need to regain the 10GbE Intel driver VIB that the upgrade process replaced with an older one that doesn't work with your X557. Simply copy and paste the following one-liner easy fix:
with the details and fully supported download method described in detail here before proceeding.
OPTIONAL - Xeon D with inaccurate RPM and Temperature readings in Health Status Monitoring
The reasons for these commands are explained in detail here. Note that this fix shouldn't be necessary for future major releases of ESXi, as this bug has already been reported:
esxcli system wbem set --ws-man false
esxcli system wbem set --enable true
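To confirm both settings stuck before moving on, the matching get command shows the current WBEM configuration:
esxcli system wbem get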
You should wind up with the same results after this upgrade as folks who upgrade by downloading the full ESXi 6.5 U1 ISO, creating bootable media from that ISO, and booting from it (or mounting the ISO over IPMI/iLO/iDRAC/IMM/iKVM):
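One quick way to verify you landed where the ISO upgraders do is to check which image profile your host now reports:
esxcli software profile get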
This article has some updates to suggest an extra reboot before upgrading, based on some excellent new feedback I received this morning. Thank you, Matt Kozloski, my Connecticut neighbor at Kelser!
Below, I've pasted the full text of my update. It will help you see what drivers are touched. Just use the horizontal scroll bar or shift + mousewheel to look around, and Ctrl+F to Find stuff quickly:
As also seen in my video of my previous upgrade, here's the full contents of my SSH session, as I completed my Xeon D-1541 upgrade from Version: 6.5.0 Update 1 (Build 7388607)
to: Version: 6.5.0 Update 1 (Build 7967591)
login as: root
Using keyboard-interactive authentication.
Password:
The time and date of this login have been sent to the system logs.
WARNING:
All commands run on the ESXi shell are logged and may be included in
support bundles. Do not provide passwords directly on the command line.
Most tools can prompt for secrets or accept them from standard input.
VMware offers supported, powerful system administration tools. Please
see www.vmware.com/go/sysadmintools for details.
The ESXi Shell can be disabled by an administrative user. See the
vSphere Security documentation for more information.
[root@xd-1567-5028d:~] esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.5.0-20180304001-standard --dry-run
Update Result
Message: Dryrun only, host not changed. The following installers will be applied: [BootBankInstaller]
Reboot Required: true
VIBs Installed: VMware_bootbank_cpu-microcode_6.5.0-1.41.7967591, VMware_bootbank_esx-base_6.5.0-1.41.7967591, VMware_bootbank_esx-tboot_6.5.0-1.41.7967591, VMware_bootbank_vsan_6.5.0-1.41.7547709, VMware_bootbank_vsanhealth_6.5.0-1.41.7547710
VIBs Removed: VMware_bootbank_cpu-microcode_6.5.0-0.0.4564106, VMware_bootbank_esx-base_6.5.0-1.36.7388607, VMware_bootbank_esx-tboot_6.5.0-1.36.7388607, VMware_bootbank_vsan_6.5.0-1.36.7388608, VMware_bootbank_vsanhealth_6.5.0-1.36.7388609
VIBs Skipped: VMW_bootbank_ata-libata-92_3.00.9.2-16vmw.650.0.0.4564106, VMW_bootbank_ata-pata-amd_0.3.10-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-atiixp_0.4.6-4vmw.650.0.0.4564106, VMW_bootbank_ata-pata-cmd64x_0.2.5-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-pdc2027x_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-serverworks_0.4.3-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-sil680_0.4.8-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-via_0.3.3-2vmw.650.0.0.4564106, VMW_bootbank_block-cciss_3.6.14-10vmw.650.0.0.4564106, VMW_bootbank_char-random_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ehci-ehci-hcd_1.0-4vmw.650.0.14.5146846, VMW_bootbank_elxnet_11.1.91.0-1vmw.650.0.0.4564106, VMW_bootbank_hid-hid_1.0-3vmw.650.0.0.4564106, VMW_bootbank_i40en_1.3.1-5vmw.650.1.26.5969303, VMW_bootbank_igbn_0.1.0.0-15vmw.650.1.36.7388607, VMW_bootbank_ima-qla4xxx_2.02.18-1vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-devintf_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ixgben_1.4.1-2vmw.650.1.26.5969303, VMW_bootbank_lpfc_11.1.0.6-1vmw.650.0.0.4564106, VMW_bootbank_lsi-mr3_6.910.18.00-1vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt2_20.00.01.00-3vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt3_12.00.02.00-11vmw.650.0.0.4564106, VMW_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.650.0.0.4564106, VMW_bootbank_misc-drivers_6.5.0-1.36.7388607, VMW_bootbank_mtip32xx-native_3.9.5-1vmw.650.0.0.4564106, VMW_bootbank_ne1000_0.8.0-16vmw.650.1.26.5969303, VMW_bootbank_nenic_1.0.0.2-1vmw.650.0.0.4564106, VMW_bootbank_net-bnx2_2.2.4f.v60.10-2vmw.650.0.0.4564106, VMW_bootbank_net-bnx2x_1.78.80.v60.12-1vmw.650.0.0.4564106, VMW_bootbank_net-cdc-ether_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-cnic_1.78.76.v60.13-2vmw.650.0.0.4564106, VMW_bootbank_net-e1000_8.0.3.1-5vmw.650.0.0.4564106, VMW_bootbank_net-e1000e_3.2.2.1-2vmw.650.0.0.4564106, VMW_bootbank_net-enic_2.1.2.38-2vmw.650.0.0.4564106, VMW_bootbank_net-fcoe_1.0.29.9.3-7vmw.650.0.0.4564106, VMW_bootbank_net-forcedeth_0.61-2vmw.650.0.0.4564106, VMW_bootbank_net-igb_5.0.5.1.1-5vmw.650.0.0.4564106, VMW_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.650.0.0.4564106, VMW_bootbank_net-libfcoe-92_1.0.24.9.4-8vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-core_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-en_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-nx-nic_5.0.621-5vmw.650.0.0.4564106, VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.650.0.0.4564106, VMW_bootbank_net-usbnet_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.650.0.0.4564106, VMW_bootbank_nhpsa_2.0.6-3vmw.650.0.0.4564106, VMW_bootbank_nmlx4-core_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-en_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-rdma_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx5-core_4.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_ntg3_4.1.3.0-1vmw.650.1.36.7388607, VMW_bootbank_nvme_1.2.0.32-5vmw.650.1.36.7388607, VMW_bootbank_nvmxnet3_2.0.0.23-1vmw.650.1.36.7388607, VMW_bootbank_ohci-usb-ohci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_pvscsi_0.1-1vmw.650.1.26.5969303, VMW_bootbank_qedentv_2.0.3.29-1vmw.650.0.0.4564106, VMW_bootbank_qfle3_1.0.2.7-1vmw.650.0.0.4564106, VMW_bootbank_qflge_1.1.0.3-1vmw.650.0.0.4564106, VMW_bootbank_qlnativefc_2.1.50.0-1vmw.650.1.26.5969303, VMW_bootbank_sata-ahci_3.0-26vmw.650.1.26.5969303, VMW_bootbank_sata-ata-piix_2.12-10vmw.650.0.0.4564106, VMW_bootbank_sata-sata-nv_3.5-4vmw.650.0.0.4564106, 
VMW_bootbank_sata-sata-promise_2.12-3vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil24_1.1-1vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil_2.3-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-svw_2.3-3vmw.650.0.0.4564106, VMW_bootbank_scsi-aacraid_1.1.5.1-9vmw.650.0.0.4564106, VMW_bootbank_scsi-adp94xx_1.0.8.12-6vmw.650.0.0.4564106, VMW_bootbank_scsi-aic79xx_3.1-5vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-fnic_1.5.0.45-3vmw.650.0.0.4564106, VMW_bootbank_scsi-hpsa_6.0.0.84-1vmw.650.0.0.4564106, VMW_bootbank_scsi-ips_7.12.05-4vmw.650.0.0.4564106, VMW_bootbank_scsi-iscsi-linux-92_1.0.0.2-3vmw.650.0.0.4564106, VMW_bootbank_scsi-libfc-92_1.0.40.9.3-5vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid2_2.00.4-9vmw.650.0.0.4564106, VMW_bootbank_scsi-mpt2sas_19.00.00.00-1vmw.650.0.0.4564106, VMW_bootbank_scsi-mptsas_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-mptspi_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.650.0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-3-0_6.5.0-0.0.4564106, VMW_bootbank_uhci-usb-uhci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usb-storage-usb-storage_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usbcore-usb_1.0-3vmw.650.1.26.5969303, VMW_bootbank_vmkata_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmkplexer-vmkplexer_6.5.0-0.0.4564106, VMW_bootbank_vmkusb_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmw-ahci_1.0.0-39vmw.650.1.26.5969303, VMW_bootbank_xhci-xhci_1.0-3vmw.650.0.0.4564106, VMware_bootbank_emulex-esx-elxnetcli_11.1.28.0-0.0.4564106, VMware_bootbank_esx-dvfilter-generic-fastpath_6.5.0-1.36.7388607, VMware_bootbank_esx-ui_1.23.0-6506686, VMware_bootbank_esx-xserver_6.5.0-0.23.5969300, VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-5vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-10vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-7vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-8vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-mpt2sas-plugin_2.0.0-6vmw.650.1.26.5969303, VMware_bootbank_native-misc-drivers_6.5.0-0.0.4564106, VMware_bootbank_rste_2.0.2.0088-4vmw.650.0.0.4564106, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.10-1.26.5969303, VMware_locker_tools-light_6.5.0-1.33.7273056
[root@xd-1567-5028d:~] esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.5.0-20180304001-standard
Update Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: VMware_bootbank_cpu-microcode_6.5.0-1.41.7967591, VMware_bootbank_esx-base_6.5.0-1.41.7967591, VMware_bootbank_esx-tboot_6.5.0-1.41.7967591, VMware_bootbank_vsan_6.5.0-1.41.7547709, VMware_bootbank_vsanhealth_6.5.0-1.41.7547710
VIBs Removed: VMware_bootbank_cpu-microcode_6.5.0-0.0.4564106, VMware_bootbank_esx-base_6.5.0-1.36.7388607, VMware_bootbank_esx-tboot_6.5.0-1.36.7388607, VMware_bootbank_vsan_6.5.0-1.36.7388608, VMware_bootbank_vsanhealth_6.5.0-1.36.7388609
VIBs Skipped: VMW_bootbank_ata-libata-92_3.00.9.2-16vmw.650.0.0.4564106, VMW_bootbank_ata-pata-amd_0.3.10-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-atiixp_0.4.6-4vmw.650.0.0.4564106, VMW_bootbank_ata-pata-cmd64x_0.2.5-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-pdc2027x_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-serverworks_0.4.3-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-sil680_0.4.8-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-via_0.3.3-2vmw.650.0.0.4564106, VMW_bootbank_block-cciss_3.6.14-10vmw.650.0.0.4564106, VMW_bootbank_char-random_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ehci-ehci-hcd_1.0-4vmw.650.0.14.5146846, VMW_bootbank_elxnet_11.1.91.0-1vmw.650.0.0.4564106, VMW_bootbank_hid-hid_1.0-3vmw.650.0.0.4564106, VMW_bootbank_i40en_1.3.1-5vmw.650.1.26.5969303, VMW_bootbank_igbn_0.1.0.0-15vmw.650.1.36.7388607, VMW_bootbank_ima-qla4xxx_2.02.18-1vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-devintf_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ixgben_1.4.1-2vmw.650.1.26.5969303, VMW_bootbank_lpfc_11.1.0.6-1vmw.650.0.0.4564106, VMW_bootbank_lsi-mr3_6.910.18.00-1vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt2_20.00.01.00-3vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt3_12.00.02.00-11vmw.650.0.0.4564106, VMW_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.650.0.0.4564106, VMW_bootbank_misc-drivers_6.5.0-1.36.7388607, VMW_bootbank_mtip32xx-native_3.9.5-1vmw.650.0.0.4564106, VMW_bootbank_ne1000_0.8.0-16vmw.650.1.26.5969303, VMW_bootbank_nenic_1.0.0.2-1vmw.650.0.0.4564106, VMW_bootbank_net-bnx2_2.2.4f.v60.10-2vmw.650.0.0.4564106, VMW_bootbank_net-bnx2x_1.78.80.v60.12-1vmw.650.0.0.4564106, VMW_bootbank_net-cdc-ether_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-cnic_1.78.76.v60.13-2vmw.650.0.0.4564106, VMW_bootbank_net-e1000_8.0.3.1-5vmw.650.0.0.4564106, VMW_bootbank_net-e1000e_3.2.2.1-2vmw.650.0.0.4564106, VMW_bootbank_net-enic_2.1.2.38-2vmw.650.0.0.4564106, VMW_bootbank_net-fcoe_1.0.29.9.3-7vmw.650.0.0.4564106, VMW_bootbank_net-forcedeth_0.61-2vmw.650.0.0.4564106, VMW_bootbank_net-igb_5.0.5.1.1-5vmw.650.0.0.4564106, VMW_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.650.0.0.4564106, VMW_bootbank_net-libfcoe-92_1.0.24.9.4-8vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-core_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-en_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-nx-nic_5.0.621-5vmw.650.0.0.4564106, VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.650.0.0.4564106, VMW_bootbank_net-usbnet_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.650.0.0.4564106, VMW_bootbank_nhpsa_2.0.6-3vmw.650.0.0.4564106, VMW_bootbank_nmlx4-core_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-en_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-rdma_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx5-core_4.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_ntg3_4.1.3.0-1vmw.650.1.36.7388607, VMW_bootbank_nvme_1.2.0.32-5vmw.650.1.36.7388607, VMW_bootbank_nvmxnet3_2.0.0.23-1vmw.650.1.36.7388607, VMW_bootbank_ohci-usb-ohci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_pvscsi_0.1-1vmw.650.1.26.5969303, VMW_bootbank_qedentv_2.0.3.29-1vmw.650.0.0.4564106, VMW_bootbank_qfle3_1.0.2.7-1vmw.650.0.0.4564106, VMW_bootbank_qflge_1.1.0.3-1vmw.650.0.0.4564106, VMW_bootbank_qlnativefc_2.1.50.0-1vmw.650.1.26.5969303, VMW_bootbank_sata-ahci_3.0-26vmw.650.1.26.5969303, VMW_bootbank_sata-ata-piix_2.12-10vmw.650.0.0.4564106, VMW_bootbank_sata-sata-nv_3.5-4vmw.650.0.0.4564106, 
VMW_bootbank_sata-sata-promise_2.12-3vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil24_1.1-1vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil_2.3-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-svw_2.3-3vmw.650.0.0.4564106, VMW_bootbank_scsi-aacraid_1.1.5.1-9vmw.650.0.0.4564106, VMW_bootbank_scsi-adp94xx_1.0.8.12-6vmw.650.0.0.4564106, VMW_bootbank_scsi-aic79xx_3.1-5vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-fnic_1.5.0.45-3vmw.650.0.0.4564106, VMW_bootbank_scsi-hpsa_6.0.0.84-1vmw.650.0.0.4564106, VMW_bootbank_scsi-ips_7.12.05-4vmw.650.0.0.4564106, VMW_bootbank_scsi-iscsi-linux-92_1.0.0.2-3vmw.650.0.0.4564106, VMW_bootbank_scsi-libfc-92_1.0.40.9.3-5vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid2_2.00.4-9vmw.650.0.0.4564106, VMW_bootbank_scsi-mpt2sas_19.00.00.00-1vmw.650.0.0.4564106, VMW_bootbank_scsi-mptsas_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-mptspi_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.650.0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-3-0_6.5.0-0.0.4564106, VMW_bootbank_uhci-usb-uhci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usb-storage-usb-storage_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usbcore-usb_1.0-3vmw.650.1.26.5969303, VMW_bootbank_vmkata_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmkplexer-vmkplexer_6.5.0-0.0.4564106, VMW_bootbank_vmkusb_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmw-ahci_1.0.0-39vmw.650.1.26.5969303, VMW_bootbank_xhci-xhci_1.0-3vmw.650.0.0.4564106, VMware_bootbank_emulex-esx-elxnetcli_11.1.28.0-0.0.4564106, VMware_bootbank_esx-dvfilter-generic-fastpath_6.5.0-1.36.7388607, VMware_bootbank_esx-ui_1.23.0-6506686, VMware_bootbank_esx-xserver_6.5.0-0.23.5969300, VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-5vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-10vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-7vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-8vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-mpt2sas-plugin_2.0.0-6vmw.650.1.26.5969303, VMware_bootbank_native-misc-drivers_6.5.0-0.0.4564106, VMware_bootbank_rste_2.0.2.0088-4vmw.650.0.0.4564106, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.10-1.26.5969303, VMware_locker_tools-light_6.5.0-1.33.7273056
[root@xd-1567-5028d:~] reboot
It's no secret that I quite enjoy the simplicity of installing the NAKIVO Backup & Replication appliance in a home lab. Getting your most precious VMs protected is quick and effective: it takes under 10 minutes to install and configure, then kick off your first backup job. See also:
Today, NAKIVO announced that they've opened their beta program for v7.4 to everybody, and I wanted to let home lab enthusiasts know about this chance to make an impact and give valuable product feedback. I've participated in the past, and exactly as promised, I earned an Amazon gift card, emailed to me, ready to use immediately. Nice!
Here are just 3 of the 11 new features that caught my eye:
Automated Self-Backup
NAKIVO Backup & Replication v7.4 Beta includes the Self-Backup feature, which automatically backs up the entire product configuration, including jobs, inventory, and all other settings.
Global Search
Finding a specific VM, backup, replica, job, or any other item in a large infrastructure can be a time-consuming task. NAKIVO Backup & Replication v7.4 Beta introduces Global Search – a powerful feature that can help you find and act on items in a matter of seconds.
Built-in Chat with Technical Support
Getting help with NAKIVO Backup & Replication v7.4 Beta is now even easier, as the new version provides an integrated chat with Technical Support. Should you need any assistance, you can instantly contact technical support without even leaving the application.
Note that NAKIVO offers NFR (Not For Resale) downloads to many IT Professionals, which I explain here:
If you are a VMUG member, VMware vExpert, VCP, VSP, VTSP, or VCI you can receive a FREE two-socket Not For Resale (NFR) license of NAKIVO Backup & Replication for one year and use it in your home or work lab.
The NFR licenses are available for non-production use only, including educational, lab testing, evaluation, training, and demonstration purposes. This offer ends on March 30, 2018.
Full Disclosure: "TinkerTry.com, LLC" is registered as a NAKIVO Bronze Partner, mostly to help get notified of latest news and code releases. I get no special early-access, anybody can sign up for the betas. All TinkerTry site advertising goes through middle-man BuySellAds. NAKIVO does know if you found their affiliate link from my site, which means the possibility of reseller commissions if you eventually upgrade to one of their paid offerings. Here's their pricing.
This article is under construction, and will be updated frequently these first few days.
BIOS 1.3 has been released mostly just to address Side Channel Speculative Execution and Indirect Branch Prediction, also referred to as Spectre (type 2) mitigation. This new release was first spotted by jrp Friday evening Mar 23, with my usual TinkerTry'ing and documentation effort still underway, starting with a successful upgrade of my Xeon D-1567 system on Mar 24, with my Xeon D-1541 up next.
I just snagged the detailed BIOS 1.3 Release Notes today, and put them in the usual spot for you here.
Next to your motherboard model here, download X10SDVF8_213.zip, with X10SDVF8.213 inside. Direct download links are no longer shareable (you'll get this error), but you can get close by clicking on Accept on this site.
Next to your motherboard model here, download X10SDVT8_319.zip, with X10SDVT8.319 inside. Direct download links are no longer shareable (you'll get this error), but you can get close by clicking on Accept on this site.
Meanwhile, all the rest of what you need to know appears right here, in the original article, including detailed update procedures. I'm also experimenting with a faster method via IPMI that doesn't reset your BIOS settings, but I'm only testing that when moving from 1.2c to 1.3, which is a relatively small jump.
The comment above is still relevant, as this ESXCLI method is just a more-universal way to upgrade ESXi, a one-liner that side-steps the preferred VUM method for those without VCSA and/or without a My VMware account to download the patch separately. This method doesn't have quite as easy a way to revert if things go wrong. All hypervisor upgrades come with risks, including the slight possibility of losing your network connections, so proceed at your own risk only after reading the entire article, and after backing up your hypervisor first.
Disclaimer/Disclosure - I cannot feasibly provide support for your upgrade, especially given the variety of unsupported hardware out there, see full disclaimer at below-left. This article is focused mostly on small home labs, was voluntarily authored, and not associated with my employment at VMware. It is not official documentation. I work in the storage division, separate from the group developing and supporting the hypervisor.
Meltdown and Spectre are looming large this year, and this post is one of many that are in direct response to that potential threat, see also at TinkerTry:
Yes, this ESXi patch is the latest that mitigates Branch Target Injection, released today, March 20 2018. If you patch your system right away, you are by definition at the bleeding edge. You may want to wait and see how this upgrade goes for others before you jump in yourself. Since the procedure below is based on a reader's input and various refinements to the previous set of similar articles, any and all feedback through comments below would be greatly appreciated.
Warning!
Don't rush things. At a minimum, even for a home lab, you'll want to read this entire article before patching anything! Special thanks go out to VCDX 194 Matt Kozloski, whose invaluable feedback made this article much better than my previous set of ESXi update articles.
Read all three of these KB articles below for details on what this patch fixes. I have some brief excerpts below each link, to encourage you to read each of the source kb articles in their entirety. Much of this information came from My VMware under Product Patches, where I searched on ESXi 6.5.0.
... Purpose
Recent microcode updates by Intel and AMD provide hardware support for branch target injection mitigation (Spectre v2). In order to use this new hardware feature within virtual machines, Hypervisor-Assisted Guest Mitigation must be enabled.
This document will focus on Hypervisor-Assisted Guest Mitigation as it pertains to vSphere. Please review KB52245: VMware Response to Speculative Execution security issues, CVE-2017-5753, CVE-2017-5715, CVE-2017-5754 (aka Spectre and Meltdown) for a complete view on VMware’s response to these issues.
See VMware Security Advisory VMSA-2018-0004.3 for the VMware provided patches related to this KB.
Resolution
Patching the VMware vSphere hypervisor and updating the CPU microcode (which the vSphere patches will do for the processors described in the table below) will allow guest operating systems to use hardware support for branch target mitigation.
To enable hardware support for branch target mitigation in vSphere, apply these steps, in the order shown:
Note: Ensure vCenter Server is updated first, for more information, see the vMotion and EVC Information section.
Upgrade to one of the following versions of vCenter 5.5 – 6.5:
6.5 U1g: Release Notes.
6.0 U3e: Release Notes.
5.5 U3h: Release Notes. Important: Please review the release notes for vCenter as there are new items listed in the ‘known issues’ section.
Apply both of the following ESXi patches. Note: these can both be applied at once so that only 1 reboot of the host is required:
ESXi 6.5: ESXi650-201803401-BG* and ESXi650-201803402-BG**
ESXi 6.0: ESXi600-201803401-BG* and ESXi600-201803402-BG**
ESXi 5.5: ESXi550-201803401-BG* and ESXi550-201803402-BG**
* These ESXi patches provide the framework to allow guest OSes to utilize the new speculative-execution control mechanisms. These patches do not contain microcode.
** These ESXi patches apply the microcode updates listed in the Table below. These patches do not contain the aforementioned framework. ...
Release date: March 20, 2018
Download Filename:
VIBs Included:
VMware_bootbank_esx-base_6.5.0-1.41.7967591
VMware_bootbank_esx-tboot_6.5.0-1.41.7967591
VMware_bootbank_vsan_6.5.0-1.41.7547709
VMware_bootbank_vsanhealth_6.5.0-1.41.7547710
Summaries and Symptoms
This patch updates the esx-base, esx-tboot, vsan and vsanhealth VIBs to resolve the following issue:
This ESXi patch provides part of the hypervisor-assisted guest mitigation of CVE-2017-5715 for guest operating systems. For important details on this mitigation, see VMware Security Advisory VMSA-2018-0004.3. ...
NOTE: If you have added the workaround step mentioned in KB 52345, the workaround will not be removed automatically if you apply the cpu-microcode VIB alone. You must also apply the VIBs in bulletin ESXi650-201803401-BG to remove the workaround.
Solution
Summaries and Symptoms
This patch updates the cpu-microcode VIB to resolve the following issue:
This ESXi patch provides part of the hypervisor-assisted guest mitigation of CVE-2017-5715 for guest operating systems. For important details on this mitigation, see VMware Security Advisory VMSA-2018-0004.3. ...
I tend to set my modern systems' BIOS to UEFI mode (instead of Dual), see details here, as a bit of future proofing. You can read Mike Foley's warnings in Secure Boot for ESXi 6.5 – Hypervisor Assurance:
... Possible upgrade issues
UEFI secure boot requires that the original VIB signatures are persisted. Older versions of ESXi do not persist the signatures, but the upgrade process updates the VIB signatures.
If your host was upgraded using the ESXCLI command then your bootloader wasn’t upgraded and doesn’t persist the signatures. When you enable Secure Boot after the upgrade, an error occurs. You can’t use Secure Boot on these installations and will have to re-install from scratch to gain that support. ...
Back up the ESXi 6.5.x hypervisor you've already installed and configured, for easy roll-back in case things go wrong. If it's on USB or SD, it's best to clone to a new USB drive and boot from it, to be sure your "backup" is good. You can use a home-lab-friendly and super easy method such as USB Image Tool under Windows, as detailed by Florian Grehl here.
I've occasionally had some trouble with that tool, so an alternative you may want to try (that I'm still testing) is EaseUS free backup software instead. Note that the blue button below is a direct download link, so you won't have to provide an email address.
Direct Download - EaseUS free backup software for Windows allows you to easily clone USB drives.
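If you happen to have a Linux machine nearby, a raw clone from its shell is another option. A minimal sketch, assuming your ESXi USB stick shows up as /dev/sdX and using an example image filename; double-check with lsblk first, since pointing dd at the wrong device is destructive:
lsblk
sudo dd if=/dev/sdX of=esxi-usb-backup.img bs=4M status=progress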
Download and upgrade to VMware ESXi 6.5 Update 1 Build 7967591 using the patch bundle that comes directly from the VMware Online Depot
The entire process including reboot is usually well under 10 minutes, and many of the steps below are optional, making it appear more difficult than it is. Triple-clicking a line of code below highlights the whole line along with a carriage return, so when you right-click to copy it to your clipboard, it executes immediately upon pasting into your SSH session. If you want to edit a line before it executes, swipe your mouse across it manually instead, with no trailing spaces at the end.
Open an SSH session (e.g. PuTTY) to your ESXi 6.x server
(if you forgot to enable SSH, here's how)
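If you prefer the command line and still have local ESXi Shell access, these two commands enable and start the SSH service, a quick alternative to the GUI method:
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh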
OPTIONAL - Turn on Maintenance Mode - Or you can just be sure to manually shut down all the VMs you care about gracefully, including VCSA. These instructions are geared to a home lab without High Availability enabled. This is also a good time to ensure you've set your ESXi host to automatically and gracefully shut down all VMs upon host reboot, or if you don't use vCenter or VCSA, use this Host Client method.
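If you'd rather handle this step from your SSH session too, this one-liner puts a standalone host into Maintenance Mode; note that it won't complete until all VMs are powered off or suspended, and you can swap true for false after the update to exit Maintenance Mode:
esxcli system maintenanceMode set --enable true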
OPTIONAL - Reboot (Pro Tip courtesy of VCDX 194 Matt Kozloski) - Consider rebooting your ESXi server, and maybe even hard power cycling it, before updating. Matt explains:
if people are running on SD cards or USB sticks and they haven't rebooted the server in a LONG time to patch/update, I would strongly recommend doing a reboot of the server before applying any updates. I've seen, more than once, the SD card or the controller go into some funky state, and as ESXi is running largely in memory, it can come up half patched or not patched at all. A [cold] reboot before update helps with that (again, if a server has been running for a long period of time - like a year+ - since it was rebooted last). Cold (remove the power cables) can be important, if the SD card or USB stick is actually running on an embedded controller like iLO or iDRAC.
OPTIONAL - Firewall allow outbound http requests - This command is likely not needed if you're upgrading from 6.5.x, and is here in case you get an error about https access. I'm trying to make these instructions applicable to the broadest set of readers. Paste the one line below into your SSH session, then press enter:
esxcli network firewall ruleset set -e true -r httpClient
OPTIONAL - See a list of all available ESXi profiles - VMware's Upgrade or Update a Host with Image Profiles documentation tells you how this command was formed. Paste the one line below into your SSH session, then press enter:
esxcli software sources profile list --depot=https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
You can cut-and-paste the output from the above command into a spreadsheet if you'd like, so you can then sort it, making it apparent which profile is the most recent.
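Alternatively, if you'd rather skip the spreadsheet, a quick shell filter narrows the output down nicely; the grep pattern below is just an example that matches the 2018-era 6.5 profiles:
esxcli software sources profile list --depot=https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep ESXi-6.5.0-2018 | sort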
Dry Run - Taking this extra step will help you be sure of what is about to happen, before it actually happens.
Here's the simple command to cut-and-paste into your SSH session:
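esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.5.0-20180304001-standard --dry-run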
If you see VIBs that you need listed among those to be removed, be fully prepared to manually re-install them after the actual upgrade below. If it's a network VIB that is used for your ESXi service console, you'll want to be extra careful to re-install that same VIB before rebooting your just-patched host(s). Don't just assume some later VIB version will work fine with your hardware; use what you know works, and carefully double-check the VMware Compatibility Guide for the recommended version.
ACTUAL RUN - This is it, the all-in-one download and patch command, assuming your ESXi host has internet access. This will pull down the ESXi Image Profile using https, then it will run the patch script.
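esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.5.0-20180304001-standard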
When you paste this line into your SSH session and hit enter, you'll need to be patient, as nothing seems to happen at first. It will take roughly 3 to 10 minutes before the completion screen (sample below) appears:
Firewall disallow outbound http requests - To return your firewall to how it was before (optional) step 4 above, simply copy and paste the following:
esxcli network firewall ruleset set -e false -r httpClient
Attention Xeon D-1500 Owners - See 3 lines you may want to paste in before rebooting, details below, then return to the next step when done.
Reboot - This is needed for the new hypervisor version to be loaded upon restart. You may want to watch the DCUI (local console) as it boots, to see if any errors show up.
You're Done! - You may want to continue with checking whether everything is working correctly after your system is back up again, but you are done with the update itself. You can also watch the DCUI during the boot if you'd like, to see if you spot any warnings.
Test things out - Log in with ESXi Host Client (pointing your browser directly at your IP address or ESXi servername), and be sure everything seems to function fine. You may need to re-map USB devices to VMs that use USB, and you may need to re-map VT-d (passthrough) devices to VMs that use passthrough devices like GPUs.
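One quick sanity check from that same SSH session: the build number this command reports should now read 7967591:
vmware -vl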
You're Really Done! - If you're happy that everything seems to be working well, that's a wrap, but keep that backup, just in case you notice something odd later on.
The default ESXi 6.5 install works great with your integrated Intel SATA3 AHCI ports, but there are better drivers for your I350 1GbE ports, and a driver needed for your two X552/X557 10GbE ports, also handy as extra 1GbE connections. There's also a fix for odd RPM and temperature readings. No problem, all 3 items are easily remedied, and the Xeon D-1500 is on the VMware HCL.
All of the optional ESXCLI commands below are really only needed for Xeon D-1500 if you haven't already done these steps to your prior ESXi host before the update.
OPTIONAL - Xeon D 12 or 16 core - If your system uses the Xeon D-1557, Xeon D-1567, or Xeon D-1587, you may find the VMware ESXi 6.0 igbn 1.4.1 NIC Driver for Intel Ethernet Controllers 82580, I210, I350 and I354 family performs better for the service console on either ETH0 or ETH1 than the inbox I350 driver included with 6.5 U1, called VMW_bootbank_net-igb_5.0.5.1.1-5vmw.650.0.0.4564106. Here's how you can determine your driver version to verify, but running the command anyway is generally harmless, since it will skip the upgrade attempt if the same exact driver is already present. This driver can help prevent ethernet enumeration (reversal) issues, where the service console can jump over to the second I350 port after updating. You simply copy-and-paste a one-liner, sketched below:
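A rough sketch of the general form such a VIB update takes, assuming you've already copied the driver's offline bundle to a local datastore; the path and filename below are illustrative, substitute your actual download:
esxcli software vib update -d /vmfs/volumes/datastore1/VMW-ESX-6.0.0-igbn-1.4.1-offline-bundle.zip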
Be sure to read the driver's details before proceeding, or just download the VIB yourself, then follow the install instructions in the readme.
OPTIONAL - Xeon D with 10GbE - If your system includes two 10GbE Intel X552/X557 RJ45 or SFP+ NIC ports, they can be used for 1GbE or 10GbE speeds, but you'll need the newer 4.5.3 10GbE Intel driver VIB, rather than the older VMW_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.650.0.0.4564106 release that came with ESXi 6.5. Here's how you can determine your driver version to verify, but running the command anyway is generally harmless, since it will skip the upgrade attempt if the same exact driver is already present. To update, you simply copy and paste a one-liner easy fix, sketched below:
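Again, a rough sketch assuming the offline bundle was copied to a local datastore first, with an illustrative path and filename:
esxcli software vib install -d /vmfs/volumes/datastore1/intel-ixgbe-4.5.3-offline-bundle.zip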
The details and fully supported download method are described in detail here; read them before proceeding.
OPTIONAL - Xeon D with inaccurate RPM and Temperature readings in Health Status Monitoring
The reasons for these commands are explained in detail here. Note that this fix shouldn't be necessary for future major releases of ESXi, as this bug has already been reported:
esxcli system wbem set --ws-man false
esxcli system wbem set --enable true
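If you'd like to double-check that both settings stuck, this should show WS-Management disabled and the service enabled (assuming your 6.5 build includes the wbem get command):
esxcli system wbem get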
You should wind up with the same results after this upgrade as folks who upgrade by downloading the full ESXi 6.5 U1 ISO, creating bootable media from that ISO (or mounting the ISO over IPMI/iLO/iDRAC/IMM/iKVM), and booting from it:
This article has some updates to suggest an extra reboot before upgrading, based on some excellent new feedback I received this morning. Thank you, Matt Kozloski, my Connecticut neighbor at Kelser!
Title changed from: How to easily update your VMware Hypervisor from 6.x to 6.5 Update 1 Patch Release ESXi650-201803001 (ESXi Build 7967591) for hypervisor-assisted guest mitigation for branch target injection
to: How to easily update your VMware Hypervisor from 6.x to 6.5 Update 1 Patch Release ESXi650-201803001 (ESXi Build 7967591) with Spectre mitigation
To be more technically precise, I have changed the title again, from: How to easily update your VMware Hypervisor from 6.x to 6.5 Update 1 Patch Release ESXi650-201803001 (ESXi Build 7967591) for hypervisor-assisted guest mitigation for branch target injection
to: How to easily update your VMware Hypervisor from 6.x to 6.5 Update 1 Patch Release ESXi-6.5.0-20180304001-standard (ESXi Build 7967591) with Spectre mitigation
2. Relevant Products
VMware vCenter Server (VC)
VMware vSphere ESXi (ESXi)
VMware Workstation Pro / Player (Workstation)
VMware Fusion Pro / Fusion (Fusion)
3. Problem Description
New speculative-execution control mechanism for Virtual Machines
Updates of vCenter Server, ESXi, Workstation and Fusion virtualize the new speculative-execution control mechanism for Virtual Machines (VMs). As a result, a patched Guest Operating System (Guest OS) can remediate the Branch Target Injection issue (CVE-2017-5715). This issue may allow for information disclosure between processes within the VM. ...
Column 5 of the following table lists the action required to remediate the vulnerability in each release, if a solution is available.
First portion of the table, click the image to visit the source article.
Here are the 2 patches for VCSA and ESXi 6.5 that the table above points to, hyperlinked for you:
VCSA 6.5 U1g available here, and Release Notes. Name: VMware-VCSA-all-6.5.0-8024368.iso Release Date: 2018-03-20 Build Number: 8024368
ESXi 6.5: ESXi650-201803401-BG KB52460 and ESXi650-201803402-BG KB52461 are both seen here, with ESXi Build Number 7967591 KB52456.
Below, I've pasted the full text of my update. It will help you see what drivers are touched. Just use the horizontal scroll bar or shift + mousewheel to look around, and Ctrl+F to Find stuff quickly:
As also seen in my video of my previous upgrade, here's the full contents of my ssh session, as I completed my Xeon D-1541 upgrade from Version: 6.5.0 Update 1 (Build 7388607)
to: Version: 6.5.0 Update 1 (Build 7967591)
login as: root
Using keyboard-interactive authentication.
Password:
The time and date of this login have been sent to the system logs.
WARNING:
All commands run on the ESXi shell are logged and may be included in
support bundles. Do not provide passwords directly on the command line.
Most tools can prompt for secrets or accept them from standard input.
VMware offers supported, powerful system administration tools. Please
see www.vmware.com/go/sysadmintools for details.
The ESXi Shell can be disabled by an administrative user. See the
vSphere Security documentation for more information.
[root@xd-1567-5028d:~] esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.5.0-20180304001-standard --dry-run
Update Result
Message: Dryrun only, host not changed. The following installers will be applied: [BootBankInstaller]
Reboot Required: true
VIBs Installed: VMware_bootbank_cpu-microcode_6.5.0-1.41.7967591, VMware_bootbank_esx-base_6.5.0-1.41.7967591, VMware_bootbank_esx-tboot_6.5.0-1.41.7967591, VMware_bootbank_vsan_6.5.0-1.41.7547709, VMware_bootbank_vsanhealth_6.5.0-1.41.7547710
VIBs Removed: VMware_bootbank_cpu-microcode_6.5.0-0.0.4564106, VMware_bootbank_esx-base_6.5.0-1.36.7388607, VMware_bootbank_esx-tboot_6.5.0-1.36.7388607, VMware_bootbank_vsan_6.5.0-1.36.7388608, VMware_bootbank_vsanhealth_6.5.0-1.36.7388609
VIBs Skipped: VMW_bootbank_ata-libata-92_3.00.9.2-16vmw.650.0.0.4564106, VMW_bootbank_ata-pata-amd_0.3.10-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-atiixp_0.4.6-4vmw.650.0.0.4564106, VMW_bootbank_ata-pata-cmd64x_0.2.5-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-pdc2027x_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-serverworks_0.4.3-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-sil680_0.4.8-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-via_0.3.3-2vmw.650.0.0.4564106, VMW_bootbank_block-cciss_3.6.14-10vmw.650.0.0.4564106, VMW_bootbank_char-random_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ehci-ehci-hcd_1.0-4vmw.650.0.14.5146846, VMW_bootbank_elxnet_11.1.91.0-1vmw.650.0.0.4564106, VMW_bootbank_hid-hid_1.0-3vmw.650.0.0.4564106, VMW_bootbank_i40en_1.3.1-5vmw.650.1.26.5969303, VMW_bootbank_igbn_0.1.0.0-15vmw.650.1.36.7388607, VMW_bootbank_ima-qla4xxx_2.02.18-1vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-devintf_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ixgben_1.4.1-2vmw.650.1.26.5969303, VMW_bootbank_lpfc_11.1.0.6-1vmw.650.0.0.4564106, VMW_bootbank_lsi-mr3_6.910.18.00-1vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt2_20.00.01.00-3vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt3_12.00.02.00-11vmw.650.0.0.4564106, VMW_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.650.0.0.4564106, VMW_bootbank_misc-drivers_6.5.0-1.36.7388607, VMW_bootbank_mtip32xx-native_3.9.5-1vmw.650.0.0.4564106, VMW_bootbank_ne1000_0.8.0-16vmw.650.1.26.5969303, VMW_bootbank_nenic_1.0.0.2-1vmw.650.0.0.4564106, VMW_bootbank_net-bnx2_2.2.4f.v60.10-2vmw.650.0.0.4564106, VMW_bootbank_net-bnx2x_1.78.80.v60.12-1vmw.650.0.0.4564106, VMW_bootbank_net-cdc-ether_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-cnic_1.78.76.v60.13-2vmw.650.0.0.4564106, VMW_bootbank_net-e1000_8.0.3.1-5vmw.650.0.0.4564106, VMW_bootbank_net-e1000e_3.2.2.1-2vmw.650.0.0.4564106, VMW_bootbank_net-enic_2.1.2.38-2vmw.650.0.0.4564106, VMW_bootbank_net-fcoe_1.0.29.9.3-7vmw.650.0.0.4564106, VMW_bootbank_net-forcedeth_0.61-2vmw.650.0.0.4564106, VMW_bootbank_net-igb_5.0.5.1.1-5vmw.650.0.0.4564106, VMW_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.650.0.0.4564106, VMW_bootbank_net-libfcoe-92_1.0.24.9.4-8vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-core_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-en_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-nx-nic_5.0.621-5vmw.650.0.0.4564106, VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.650.0.0.4564106, VMW_bootbank_net-usbnet_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.650.0.0.4564106, VMW_bootbank_nhpsa_2.0.6-3vmw.650.0.0.4564106, VMW_bootbank_nmlx4-core_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-en_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-rdma_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx5-core_4.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_ntg3_4.1.3.0-1vmw.650.1.36.7388607, VMW_bootbank_nvme_1.2.0.32-5vmw.650.1.36.7388607, VMW_bootbank_nvmxnet3_2.0.0.23-1vmw.650.1.36.7388607, VMW_bootbank_ohci-usb-ohci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_pvscsi_0.1-1vmw.650.1.26.5969303, VMW_bootbank_qedentv_2.0.3.29-1vmw.650.0.0.4564106, VMW_bootbank_qfle3_1.0.2.7-1vmw.650.0.0.4564106, VMW_bootbank_qflge_1.1.0.3-1vmw.650.0.0.4564106, VMW_bootbank_qlnativefc_2.1.50.0-1vmw.650.1.26.5969303, VMW_bootbank_sata-ahci_3.0-26vmw.650.1.26.5969303, VMW_bootbank_sata-ata-piix_2.12-10vmw.650.0.0.4564106, VMW_bootbank_sata-sata-nv_3.5-4vmw.650.0.0.4564106, 
VMW_bootbank_sata-sata-promise_2.12-3vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil24_1.1-1vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil_2.3-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-svw_2.3-3vmw.650.0.0.4564106, VMW_bootbank_scsi-aacraid_1.1.5.1-9vmw.650.0.0.4564106, VMW_bootbank_scsi-adp94xx_1.0.8.12-6vmw.650.0.0.4564106, VMW_bootbank_scsi-aic79xx_3.1-5vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-fnic_1.5.0.45-3vmw.650.0.0.4564106, VMW_bootbank_scsi-hpsa_6.0.0.84-1vmw.650.0.0.4564106, VMW_bootbank_scsi-ips_7.12.05-4vmw.650.0.0.4564106, VMW_bootbank_scsi-iscsi-linux-92_1.0.0.2-3vmw.650.0.0.4564106, VMW_bootbank_scsi-libfc-92_1.0.40.9.3-5vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid2_2.00.4-9vmw.650.0.0.4564106, VMW_bootbank_scsi-mpt2sas_19.00.00.00-1vmw.650.0.0.4564106, VMW_bootbank_scsi-mptsas_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-mptspi_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.650.0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-3-0_6.5.0-0.0.4564106, VMW_bootbank_uhci-usb-uhci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usb-storage-usb-storage_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usbcore-usb_1.0-3vmw.650.1.26.5969303, VMW_bootbank_vmkata_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmkplexer-vmkplexer_6.5.0-0.0.4564106, VMW_bootbank_vmkusb_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmw-ahci_1.0.0-39vmw.650.1.26.5969303, VMW_bootbank_xhci-xhci_1.0-3vmw.650.0.0.4564106, VMware_bootbank_emulex-esx-elxnetcli_11.1.28.0-0.0.4564106, VMware_bootbank_esx-dvfilter-generic-fastpath_6.5.0-1.36.7388607, VMware_bootbank_esx-ui_1.23.0-6506686, VMware_bootbank_esx-xserver_6.5.0-0.23.5969300, VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-5vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-10vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-7vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-8vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-mpt2sas-plugin_2.0.0-6vmw.650.1.26.5969303, VMware_bootbank_native-misc-drivers_6.5.0-0.0.4564106, VMware_bootbank_rste_2.0.2.0088-4vmw.650.0.0.4564106, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.10-1.26.5969303, VMware_locker_tools-light_6.5.0-1.33.7273056
[root@xd-1567-5028d:~] esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.5.0-20180304001-standard
Update Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: VMware_bootbank_cpu-microcode_6.5.0-1.41.7967591, VMware_bootbank_esx-base_6.5.0-1.41.7967591, VMware_bootbank_esx-tboot_6.5.0-1.41.7967591, VMware_bootbank_vsan_6.5.0-1.41.7547709, VMware_bootbank_vsanhealth_6.5.0-1.41.7547710
VIBs Removed: VMware_bootbank_cpu-microcode_6.5.0-0.0.4564106, VMware_bootbank_esx-base_6.5.0-1.36.7388607, VMware_bootbank_esx-tboot_6.5.0-1.36.7388607, VMware_bootbank_vsan_6.5.0-1.36.7388608, VMware_bootbank_vsanhealth_6.5.0-1.36.7388609
VIBs Skipped: VMW_bootbank_ata-libata-92_3.00.9.2-16vmw.650.0.0.4564106, VMW_bootbank_ata-pata-amd_0.3.10-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-atiixp_0.4.6-4vmw.650.0.0.4564106, VMW_bootbank_ata-pata-cmd64x_0.2.5-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-pdc2027x_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-serverworks_0.4.3-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-sil680_0.4.8-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-via_0.3.3-2vmw.650.0.0.4564106, VMW_bootbank_block-cciss_3.6.14-10vmw.650.0.0.4564106, VMW_bootbank_char-random_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ehci-ehci-hcd_1.0-4vmw.650.0.14.5146846, VMW_bootbank_elxnet_11.1.91.0-1vmw.650.0.0.4564106, VMW_bootbank_hid-hid_1.0-3vmw.650.0.0.4564106, VMW_bootbank_i40en_1.3.1-5vmw.650.1.26.5969303, VMW_bootbank_igbn_0.1.0.0-15vmw.650.1.36.7388607, VMW_bootbank_ima-qla4xxx_2.02.18-1vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-devintf_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ixgben_1.4.1-2vmw.650.1.26.5969303, VMW_bootbank_lpfc_11.1.0.6-1vmw.650.0.0.4564106, VMW_bootbank_lsi-mr3_6.910.18.00-1vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt2_20.00.01.00-3vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt3_12.00.02.00-11vmw.650.0.0.4564106, VMW_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.650.0.0.4564106, VMW_bootbank_misc-drivers_6.5.0-1.36.7388607, VMW_bootbank_mtip32xx-native_3.9.5-1vmw.650.0.0.4564106, VMW_bootbank_ne1000_0.8.0-16vmw.650.1.26.5969303, VMW_bootbank_nenic_1.0.0.2-1vmw.650.0.0.4564106, VMW_bootbank_net-bnx2_2.2.4f.v60.10-2vmw.650.0.0.4564106, VMW_bootbank_net-bnx2x_1.78.80.v60.12-1vmw.650.0.0.4564106, VMW_bootbank_net-cdc-ether_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-cnic_1.78.76.v60.13-2vmw.650.0.0.4564106, VMW_bootbank_net-e1000_8.0.3.1-5vmw.650.0.0.4564106, VMW_bootbank_net-e1000e_3.2.2.1-2vmw.650.0.0.4564106, VMW_bootbank_net-enic_2.1.2.38-2vmw.650.0.0.4564106, VMW_bootbank_net-fcoe_1.0.29.9.3-7vmw.650.0.0.4564106, VMW_bootbank_net-forcedeth_0.61-2vmw.650.0.0.4564106, VMW_bootbank_net-igb_5.0.5.1.1-5vmw.650.0.0.4564106, VMW_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.650.0.0.4564106, VMW_bootbank_net-libfcoe-92_1.0.24.9.4-8vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-core_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-en_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-nx-nic_5.0.621-5vmw.650.0.0.4564106, VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.650.0.0.4564106, VMW_bootbank_net-usbnet_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.650.0.0.4564106, VMW_bootbank_nhpsa_2.0.6-3vmw.650.0.0.4564106, VMW_bootbank_nmlx4-core_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-en_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-rdma_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx5-core_4.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_ntg3_4.1.3.0-1vmw.650.1.36.7388607, VMW_bootbank_nvme_1.2.0.32-5vmw.650.1.36.7388607, VMW_bootbank_nvmxnet3_2.0.0.23-1vmw.650.1.36.7388607, VMW_bootbank_ohci-usb-ohci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_pvscsi_0.1-1vmw.650.1.26.5969303, VMW_bootbank_qedentv_2.0.3.29-1vmw.650.0.0.4564106, VMW_bootbank_qfle3_1.0.2.7-1vmw.650.0.0.4564106, VMW_bootbank_qflge_1.1.0.3-1vmw.650.0.0.4564106, VMW_bootbank_qlnativefc_2.1.50.0-1vmw.650.1.26.5969303, VMW_bootbank_sata-ahci_3.0-26vmw.650.1.26.5969303, VMW_bootbank_sata-ata-piix_2.12-10vmw.650.0.0.4564106, VMW_bootbank_sata-sata-nv_3.5-4vmw.650.0.0.4564106, 
VMW_bootbank_sata-sata-promise_2.12-3vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil24_1.1-1vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil_2.3-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-svw_2.3-3vmw.650.0.0.4564106, VMW_bootbank_scsi-aacraid_1.1.5.1-9vmw.650.0.0.4564106, VMW_bootbank_scsi-adp94xx_1.0.8.12-6vmw.650.0.0.4564106, VMW_bootbank_scsi-aic79xx_3.1-5vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-fnic_1.5.0.45-3vmw.650.0.0.4564106, VMW_bootbank_scsi-hpsa_6.0.0.84-1vmw.650.0.0.4564106, VMW_bootbank_scsi-ips_7.12.05-4vmw.650.0.0.4564106, VMW_bootbank_scsi-iscsi-linux-92_1.0.0.2-3vmw.650.0.0.4564106, VMW_bootbank_scsi-libfc-92_1.0.40.9.3-5vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid2_2.00.4-9vmw.650.0.0.4564106, VMW_bootbank_scsi-mpt2sas_19.00.00.00-1vmw.650.0.0.4564106, VMW_bootbank_scsi-mptsas_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-mptspi_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.650.0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-3-0_6.5.0-0.0.4564106, VMW_bootbank_uhci-usb-uhci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usb-storage-usb-storage_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usbcore-usb_1.0-3vmw.650.1.26.5969303, VMW_bootbank_vmkata_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmkplexer-vmkplexer_6.5.0-0.0.4564106, VMW_bootbank_vmkusb_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmw-ahci_1.0.0-39vmw.650.1.26.5969303, VMW_bootbank_xhci-xhci_1.0-3vmw.650.0.0.4564106, VMware_bootbank_emulex-esx-elxnetcli_11.1.28.0-0.0.4564106, VMware_bootbank_esx-dvfilter-generic-fastpath_6.5.0-1.36.7388607, VMware_bootbank_esx-ui_1.23.0-6506686, VMware_bootbank_esx-xserver_6.5.0-0.23.5969300, VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-5vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-10vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-7vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-8vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-mpt2sas-plugin_2.0.0-6vmw.650.1.26.5969303, VMware_bootbank_native-misc-drivers_6.5.0-0.0.4564106, VMware_bootbank_rste_2.0.2.0088-4vmw.650.0.0.4564106, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.10-1.26.5969303, VMware_locker_tools-light_6.5.0-1.33.7273056
[root@xd-1567-5028d:~] reboot
Well, I can say I saw this one coming, with today being a rather peculiar day for an announcement. Nevertheless, this new service is real, and it's here, and the speed claims appear to be pretty remarkable, especially when you look at the DNSPerf site that Tom mentions. The thing is, DNS is very location specific, so your results will vary from what DNSPerf indicates. So once again, DNS Benchmark came in handy for my own testing at my own 06109 location, so I decided to retest. First, I just ran it, adding Cloudflare's 1.1.1.1 primary and 1.0.0.1 secondary IP addresses, along with Quad9's 9.9.9.9 primary and 149.112.112.112 secondary DNS IPs. My router's DNS cache was still on. Then I set my router's DNS cache feature to zero, and ran DNS Benchmark again, and have posted one preliminary screenshot below.
Some things I noticed on this run, and subsequent runs. It appears that 1.1.1.1 might already be suffering the pressure of early adopters, showing significantly slower response times than the secondary IP of 1.0.0.1, so you may want to set 1.0.0.1 as your primary and 1.1.1.1 as your secondary. Also notice that Quad9's 9.9.9.9 is faster for me than 1.1.1.1 on some runs, and not others. In other words, the practical differences may not be significant for me. So far, it appears that Quad9 and Cloudflare 1.1.1.1 are neck-and-neck, at least in my neck of the woods. I'll post screenshots of test results later. Only time and more testing will tell which one will be the keeper in my home.
Give DNS Benchmark a try yourself, and let us know your results by commenting below. I'm curious! Don't forget to visit my Quad9 article too.
...
The service is using https://1.1.1.1, and it’s not a joke but an actual DNS resolver that anyone can use. Cloudflare claims it will be “the Internet’s fastest, privacy-first consumer DNS service.” While OpenDNS and Google DNS both exist, Cloudflare is focusing heavily on the privacy aspect of its own DNS service with a promise to wipe all logs of DNS queries within 24 hours. ...
Quad9 uses real time information about what websites are malicious and blocks them completely. No content is filtered - only domains that are phishing, contain malware, and exploit kit domains will be blocked. No personal data is stored. An unsecured public DNS is also available from Quad9 at 9.9.9.10, but they do not recommend using that as a secondary DNS in your router or computer setup. See more in the Quad9 FAQ.
Earlier this Apr 04 2018 evening, I noticed that the latest DNS Benchmark download has a splash screen showing version 1.3.6668.0. In a way, you could say that this version doesn't even exist quite yet, according to the slightly outdated DNS Benchmark Version History. The last time Steve updated that page was likely back in 2010, when the previous version of DNS Benchmark was released. It still works fine, right through to Windows 10 Build 1709. What we learned during Steve's on-air mention yesterday is that all he planned to change in his new release was the built-in list of IPs; that's in, and no other known bugs needed fixing. This is why I'm not at all worried that the recent set of screenshots I took when testing out Cloudflare's new 1.1.1.1 DNS are somehow obsolete.
Steve Gibson signed the new DNSBench.exe version 1.3.6668.0 on Wednesday, April 4 2018 at 4:59:57 PM.
I noticed that Cloudflare and Quad9 primary and secondary DNS IP addresses have now been baked right into the tool, so you won't have to add them manually, making things even simpler; see also my step-by-step guide below. As featured in my recent Cloudflare 1.1.1.1 DNS article, the only somewhat popular public DNS offering left out of this build that can easily be added manually is Norton ConnectSafe's 199.85.126.20 and 199.85.127.20.
How do I know this was released today? Seen in the screenshot at right, I right-clicked on the DNSBench.exe executable and chose Properties, then the Digital Signatures tab, highlighting the one entry, then clicking on the Details button.
I do regret removing my own home's Ubiquiti 10.10.1.1 router from my test results screenshots in my recent Cloudflare 1.1.1.1 article, so subsequent tests will include it. Just like Steve's 10.1.0.0 router shown at the top of his screenshot. Also note that such local DNS from a local router will be fastest, as it's a local lookup with essentially no distance-based latency. You know, physics and all, that pesky speed-of-light thing. The speed difference isn't always as apparent though, when using consumer grade routers. They tend to just forward all DNS requests to your ISP by default, and don't really do the local DNS lookups with DNS caching that my compact VMware vSphere home lab datacenter greatly appreciates.
Did you know that a single rich-media webpage, such as cnn.com, can trigger more than 100 requests for the various elements of that one page? Yes, today, it does 138 requests, and that's with ad blocking turned on! It's actually 387 requests with no ad blocking. Many of those requests are for various on-site and off-site items, each requiring their own DNS lookup. All those DNS lookup milliseconds can quickly add up to several seconds.
See page load mechanics for yourself. Open Chrome, visit cnn.com, on your PC press F12, or on a Mac, click on View, Developer, Developer Tools. Next, select the Network tab, now just reload the web page, and you'll get a similar waterfall view, with stats along the bottom.
DNS Performance Analytics and Comparison screenshot from Apr 01 2018.
You can actually temporarily change just your system's DNS to various DNS services, then try loading a rich web page a few times, performing your own informal benchmarking of the effects of trying out the various public DNS services that are listed below, for example. Once you decide which you like, it's best to put your system back at DHCP, then change your router's DNS instead.
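For a quick spot-check of any single resolver without changing any settings at all, nslookup lets you aim one query at one specific server; cnn.com and 1.1.1.1 below are just examples:
nslookup cnn.com 1.1.1.1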
Since DNS speeds are determined by your location, and your particular ISP's connection to the internet, relying solely on 3rd parties like DNSPerf for DNS speed rankings is insufficient. DNS Benchmark lets you quickly home in on the top performers, from your location.
I have developed my own instructions so I can use them for consistency in my own home network tests. They may be helpful for you too, to refer to when running DNS Benchmark on your network.
Download the latest version of the tiny, portable (no install required) DNS Benchmark directly from Steve Gibson's site GRC here.
Be sure all the other systems and devices on your network are idle; even better, disconnected and/or powered off. This step may be more important to follow if repeated runs of DNS Benchmark give you wildly inconsistent results.
Launch DNS Benchmark DNSBench.exe on your Windows system that is at idle, and wait for it to finish its automated discovery process, which is indicated by the animated spinning dark red logo at upper-right.
Click on the Nameservers tab.
Right-click and choose Remove Redirecting Servers, which gets rid of the light-brown entries that clutter up your results; I'd rather not point my system to a DNS that redirects elsewhere anyway. This shouldn't eliminate any of the public DNS providers featured in this article.
If you have a DNS provider you'd like to add, such as Norton ConnectSafe's 199.85.126.20 and 199.85.127.20, simply use the Add/Remove button found at top-left, adding each IP, one at a time. All the other services featured in this article are already built into the latest DNS Benchmark version 1.3.6668, but all older versions will need Quad9 and Cloudflare IPs added manually. Did you skip Step 1 above? See also the list of various public DNS services below.
At top-right, click on Run Benchmark, then wait a while for it to complete, refraining from using your system to do anything else.
You may find that you can't quite see the results for the IPs of interest; just drag the lower edge of the window to make it taller, as I did for my screenshots above.
Based mostly on your desired features, and partially on your benchmark results, use your "winning" IPs to replace the DNS IPs in your router's DNS settings. As mentioned here back in November of 2017, I'd also like to figure out implementing DNSSEC in my Ubiquiti EdgeRouter Lite someday, and am unsure whether that could impact performance significantly.
Steve Gibson talks all about Cloudflare's 1.1.1.1 DNS at this exact spot in Security Now 657: ProtonMail. He explains that using 1.0.0.1 as your primary isn't a good idea as a long-term strategy, as he was seeing very low speed test results. I didn't see that same behavior at all, with 1.0.0.1 frequently besting most other DNS IPs, including 1.1.1.1. He also explains that for many users, their ISPs might be very fast due in part to physical proximity, but they are unlikely to offer any sort of privacy as far as what they do with your browsing habit data. Steve and I are both using Cox Communications, but he's in CA, and I'm in CT. Interesting. Maybe it's time for some tracert'ing.
Like my article above, Steve also mentions you'll need to manually add the 1.1.1.1 to your list of DNS servers to be tested, since DNS Benchmark hasn't been updated since 2010. I would add that you should also add Quad9's 9.9.9.9 as well. Well, that was this morning, read onward.
This is a small subset of the many public DNS services out there, focusing on the ones I've had more experience with personally, or that others have commented they've found to be reliable and valuable for their home networks. I encourage you to choose not solely on benchmark speeds, but on security and/or filtering too, based on your family's needs and priorities.
...
The service is using https://1.1.1.1, and it’s not a joke but an actual DNS resolver that anyone can use. Cloudflare claims it will be “the Internet’s fastest, privacy-first consumer DNS service.” While OpenDNS and Google DNS both exist, Cloudflare is focusing heavily on the privacy aspect of its own DNS service with a promise to wipe all logs of DNS queries within 24 hours. ...
A handful of alternative DNS services offer protection from malware, ransomware and phishing. Providers like OpenDNS and Quad9 can blackhole DNS requests for blocking network traffic associated with botnets, phishing and exploits. These DNS providers promise some level of threat protection, but what do they know? Do they know things? Let’s find out!
Quad9 uses real time information about what websites are malicious and blocks them completely. No content is filtered - only domains that are phishing, contain malware, and exploit kit domains will be blocked. No personal data is stored. An unsecured public DNS is also available from Quad9 at 9.9.9.10, but they do not recommend using that as a secondary DNS in your router or computer setup. See more in the Quad9 FAQ.
It's a big day! See also virtuallyGhetto's All vSphere 6.7 release notes & download links, but note that not all the links are live quite yet. Meanwhile, VMware has a lot of detail in the press release and technical blog posts ready already. Enjoy!
...
New and enhanced features in VMware vSphere 6.7 will include:
New vCenter Hybrid Linked Mode: Will enable unified visibility and management across different versions of vSphere running on-premises and in the public cloud such as VMware Cloud on AWS, IBM Cloud and other VMware Cloud Provider Program partner clouds. This will allow customers to maintain their current version of vSphere on-premises as needed while enjoying the benefits of new capabilities in vSphere-based public clouds.
New ESXi Single Reboot and vSphere Quick Boot: Will significantly reduce patch and upgrade times by halving the number of reboots required to one, while vSphere Quick Boot will skip hardware initialization steps to gain further re-start efficiencies.
New vSphere Persistent Memory: Will leverage the latest innovation around non-volatile memory and significantly enhance performance for both existing and new apps.
Enhanced NVIDIA GRID vGPUs Support for Modern Workloads: Will improve host lifecycle management and reduce end-user disruption via new suspend and resume capabilities for VMs for GPU-accelerated environments. vSphere 6.7 will enhance support for NVIDIA GRID Virtual PC/Virtual Apps (for knowledge workers) and NVIDIA Quadro Virtual Data Center Workstation (for design and engineering professionals) to enable optimal management of VDI workloads as well as enable administrators (admins) to run other NVIDIA GPU-enabled workloads, including AI and ML.
New Trusted Platform Module (TPM) 2.0 Support and Virtual TPM 2.0: This combination will significantly enhance protection and integrity for both the hypervisor and the guest operating system (OS). Virtual TPM 2.0 will help prevent VMs and hosts from being tampered or compromised, thwarting the loading of unauthorized components and enable guest OS security features.
Enhanced VMware vSphere Client: This latest release of the HTML-5-based vSphere Client will introduce new functionality to manage VMware NSX, vSAN and vSphere Update Manager along with an increased support for third-party products. ...
You may recall me talking about the RDMA capabilities baked into the latest Xeon Scalable systems, and even the recent hotness in IoT, the SoC-based Xeon D-2100. By the way, I'll have a loaner Xeon D-2100-based SYS-E300-9D soon, to start testing 6.7 on by the end of the month, nice timing! Meanwhile, this article dives right into RDMA today, in depth, enjoy!
... New HTML5 User Interface
Fast, efficient, and consistent
While many customers use CLI and API’s to interact with vSAN, the graphical UI is still the most common day to day management tool. vSAN 6.7 introduces a new HTML5 UI based on the “Clarity” framework as seen in other VMware products. All products in the VMware portfolio are moving toward this UI framework. This interface is more than a simple, direct port from the old vSphere Web Client. VMware took a long look at how tasks and workflows can be optimized and introduced new ways to accomplish tasks more intuitively with fewer clicks. Going forward all new functionality will be delivered in the HTML5 client; however, the legacy vSphere Web Client is still available in this release. The HTML5 UI is a great step forward in providing vSAN users an intuitive and efficient user experience. ...
...
Moreover, with vSphere 6.7 vCSA delivers phenomenal performance improvements (all metrics compared at cluster scale limits, versus vSphere 6.5):
2X faster performance in vCenter operations per second
3X reduction in memory usage
3X faster DRS-related operations (e.g. power-on virtual machine)
These performance improvements ensure a blazing fast experience for vSphere users, and deliver significant value, as well as time and cost savings in a variety of use cases, such as VDI, Scale-out apps, Big Data, HPC, DevOps, distributed cloud native apps, etc.
vSphere 6.7 improves efficiency at scale when updating ESXi hosts, significantly reducing maintenance time by eliminating one of two reboots normally required for major version upgrades (Single Reboot). In addition to that, vSphere Quick Boot is a new innovation that restarts the ESXi hypervisor without rebooting the physical host, skipping time-consuming hardware initialization. ...
... Remote Direct Memory Access
vSphere 6.7 introduces new protocol support for Remote Direct Memory Access (RDMA) over Converged Ethernet, or RoCE (pronounced “rocky”) v2, a new software Fibre Channel over Ethernet (FCoE) adapter, and iSCSI Extension for RDMA (iSER). These features enable customers to integrate with even more high-performance storage systems, providing more flexibility to use the hardware that best complements their workloads.
RDMA support is enhanced with vSphere 6.7 to bring even more performance to enterprise workloads by leveraging kernel and OS bypass reducing latency and dependencies. This is illustrated in the diagram below.
vSpeaking Podcast Episode 75: What’s New in vSAN 6.7
vSAN 6.7 is here! This release offers features and capabilities that enable improved performance, usability and consistency. Significant improvements to the management and monitoring tools are also matched by lower level performance and application consistency improvements. This week on the Virtually Speaking Podcast we welcome vSAN expert Myles Gray to walk us through all the features and enhancements.
vSAN continues to see rapid adoption with more than 10,000 customers and growing. A $600 million run rate was announced for Q4 FY2018, and IDC named it the #1 and fastest growing HCI software solution. ...
vSAN 6.7 What's New Technical
vSAN 6.7 Technical Overview
vSAN 6.7 - WSFC support on vSAN!
Join Pete Flecha at vForum Online Spring 2018! I'll be there too with many of my vSAN SE colleagues, helping with the Q&A, come join us!
Apr 17 2018 - This article is being published just as the bits become available for download, so it's likely to be updated frequently these first few hours, as new information comes in.
Today, not only were we treated to the vSphere 6.7 announcement, but within a few hours, all the bits became available for download too! This includes vSAN 6.7, of course. Here's the main download link you need for all key vSphere 6.7 related items most likely to be running in your home lab:
but folks new to VMware sometimes get tripped up when trying to find the actual files they'll need to get started. This article is for you! The two big file downloads do require you to use your my.vmware.com account, with free sign-up and 60 day trials. Licensing and support are out of scope for this article, but I'll quickly note that the amazingly affordable VMUG Advantage EVALExperience gets you 365 day licenses for your non-production home lab, and has its bits lovingly refreshed every 3 months or so, with a new refresh likely coming up soon.
that covers both ESXi and VCSA? You will find all kinds of details in there, including dependencies and warnings. This article is focused only on the downloads.
Warning - As the 6.7 name hints, this follow-on to October 2016's vSphere 6.5 is not quite a major release, but there is plenty new here. Back up everything first! If you have license keys for 6.0 or 6.5, my understanding is that they should work with 6.7.
You can see I'm getting my bits at roughly 38Mbps, gaining faster times overall by downloading the two files at once. Why? Because the Akamai CDN seems to limit each download to around 25Mbps maximum, at least in my northern US location. My internet connection is actually 300Mbps down and 30Mbps up.
I can safely say that the promised focus on a browser-based HTML5 UI for all essential functions, and now for vSAN too, is quite an improvement for both installation simplicity and ongoing routine sysadmin workflows, as well as for producing how-to videos. See also my tirade about the transition pains, which I'm glad are finally (mostly) behind us, especially for day-to-day vSAN deployment and operations.
This is a basic overview document, highlighting the steps that I walk you right through in the accompanying video below. The video features my live recording done on the very day vSphere 6.7 was announced and released, so it's a bit rough around the edges and completely unrehearsed. All that aside, I'm delighted to say that it went well. Like, really well. The Xeon D-1500's included Intel I350 1GbE RJ45 network connections seem to work fine, and for the first time ever, the Xeon D-1500's Intel X552/X557 10GbE RJ45 network connections seem to work fine as well. Even better, as I was promised months ago, the fix for the inaccurate RPM and temperature readings has been incorporated too, no more kludgey "fix"!
Following along, you will learn some stuff even if you've done this before, as I do my usual voice-over, and side-step any dependencies on crummy Java for the Out of Band Management / iKVM install using just HTML5 and a network share.
The install and initial configuration procedure for ESXi has always been quite straightforward, but this detailed step-by-step will be extra helpful if you haven't installed ESXi recently, and/or it's your first time using the HTML5 iKVM and the vSphere Host Client, which is also HTML5.
SuperServer Bundle owners aren't likely to be asking me questions about how to install ESXi 6.7 on this system because it's now become quite simple, and this is great! The days of fiddling with CD and DVD burners are well behind us; let's just hope the X557 woes that some 12 core Xeon D owners experience, myself included, are also behind us. Only time, and a whole lot more tinkering, will tell! The next system I'll be test-upgrading is my experimental Xeon D-1567 SuperServer Workstation, the primary system I've used to write hundreds of articles and videos, like this one, while simultaneously running vSphere.
You should keep in mind that this is just day one, release day. It could be a while before we see which network driver & firmware combination is recommended for this system, and which solid VM backup solutions offer full compatibility. So this is really just an initial test. I'll add updates below this article.
use your browser to connect to your SuperServer's IPMI management features (the IP address that you point your browser to is displayed if you temporarily connect a VGA monitor, shortly after power on)
enter BIOS Configuration and configure it for UEFI, as described in detail here; note that if you bought a SuperServer Bundle, you can skip this step, since you have a turn-key solution that's ready for ESXi as-is
click the Power Control to restart the server
press F11 to invoke Boot Menu
from the Please select boot device menu, select UEFI: SanDisk then press Enter
proceed with a normal ESXi installation
from the Select a Disk to Install or Upgrade menu, choose SanDisk Ultra Fit (mpx.vmhba32:C0:T0:L0) (or similar)
when prompted to Remove the installation disc before rebooting, click the iKVM Virtual Media / Virtual Storage menu option, then click the Plug Out button then click OK
you may now hit Enter to Reboot
once the newly installed ESXi is booted off USB, it will show you the hostname or IP address that you can point your browser to, then type root and the password you configured during install
now that you're authenticated, click on Open the VMware Host Client at the top left of the VMware ESXi Welcome page, which launches the HTML5 interface of VMware's future; all the basic features needed to format a datastore and create your first VMs are there for you
The rest of the video shows me setting up the clock/NTP, with a scripted alternative sketched just below.
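If you'd rather script that last step than click through the Host Client, here's a minimal sketch using the pyVmomi SDK. The host address, root password, and NTP server names are placeholders, and I haven't validated this against 6.7 specifically; treat it as a starting point, not a turn-key tool:

```python
# Minimal pyVmomi sketch: point a freshly installed ESXi host at NTP
# servers and start the ntpd service. Host address, credentials, and
# NTP servers below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # fresh install still has a self-signed cert
si = SmartConnect(host="esxi.lab.local", user="root", pwd="VMware1!", sslContext=ctx)
try:
    # standalone ESXi inventory: one datacenter, one compute resource, one host
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

    # set the NTP server list
    ntp = vim.host.NtpConfig(server=["0.pool.ntp.org", "1.pool.ntp.org"])
    host.configManager.dateTimeSystem.UpdateDateTimeConfig(
        config=vim.host.DateTimeConfig(ntpConfig=ntp))

    # start ntpd now, and have it start with the host from now on
    svc = host.configManager.serviceSystem
    svc.UpdateServicePolicy(id="ntpd", policy="on")
    svc.StartService(id="ntpd")
finally:
    Disconnect(si)
```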
VCSA install and configuration is more involved, and will need to wait for another day.
This article is a work in progress, a bookmark if you will, for when the full process and video are available. As always, you need to upgrade VCSA to 6.7 before you upgrade your ESXi host(s) to 6.7.
You'll want to read the following Release Notes in full before getting started:
... ESXi and vCenter Server Version Compatibility
The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESXi, VMware vCenter Server, and optional VMware products. Check the VMware Product Interoperability Matrix also for information about supported management and backup agents before you install ESXi or vCenter Server.
The vSphere Update Manager, vSphere Client, and vSphere Web Client are packaged with vCenter Server.
Hardware Compatibility for ESXi
To view a list of processors, storage devices, SAN arrays, and I/O devices that are compatible with vSphere 6.7, use the ESXi 6.7 information in the VMware Compatibility Guide. ...
...
For vSphere
See the vSphere 6.7 GA Release Notes for information on unsupported CPUs in vSphere 6.7.
Devices deprecated and unsupported in ESXi 6.7
Windows 2003 and XP are no longer supported
VMware Performance Impact for CVE-2017-5753, CVE-2017-5715, CVE-2017-5754 (aka Spectre and Meltdown)
Windows 7 and 2008 virtual machines lose network connectivity on VMware Tools 10.2.0
It is not possible to upgrade directly from vSphere 5.5 to vSphere 6.7 (see the quick sketch after this list). ...
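Since that last bullet catches so many people, here's a trivial Python sketch of the supported in-place paths to 6.7, per the release notes. Purely illustrative, and it only encodes the versions mentioned here:

```python
# Tiny sanity-check of in-place upgrade paths to 6.7, per the release
# notes: 6.0 and 6.5 can go direct, 5.5 must hop through 6.0 or 6.5 first.
DIRECT_TO_67 = {"6.0", "6.5"}

def upgrade_path(current: str) -> str:
    if current in DIRECT_TO_67:
        return f"{current} -> 6.7 (direct upgrade supported)"
    if current == "5.5":
        return "5.5 -> 6.0 or 6.5 first, then -> 6.7"
    return f"{current}: check the VMware Product Interoperability Matrix"

for v in ("5.5", "6.0", "6.5"):
    print(upgrade_path(v))
```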
Apr 18 2018 - VMware KB, Document ID 53710
Purpose
VMware has made available certain releases to address critical issues and architectural changes for several products to allow for continued interoperability:
When moving from 6.0 or 6.5 to 6.7, my understanding is that the wizard will essentially be replacing your VCSA under the covers, migrating all your data over, then removing the old VCSA only if the migration succeeds. When moving from 6.0 to 6.5 or 6.7, it's a move from SuSE to Photon OS, which is much more svelte, reboots a lot faster, and is now even faster on 6.7 than it was on 6.5, see Introducing VMware vSphere 6.7!
Moreover, with vSphere 6.7 vCSA delivers phenomenal performance improvements (all metrics compared at cluster scale limits, versus vSphere 6.5):
2X faster performance in vCenter operations per second
3X reduction in memory usage
3X faster DRS-related operations (e.g. power-on virtual machine)
I'm still working on what will be available via the VAMI CD-ROM ISO mounting and/or URL download methods, stay tuned.
For environments with any VCSA 6.0 or VCSA 6.5 version.
This is how the first screen looks when you mount/open your VCSA 6.7 ISO file [VMware-VCSA-all-6.7.0-8217866.iso](https://my.vmware.com/group/vmware/details?downloadGroup=VC670&productId=742&rPId=22641)
and navigate your way to the installer. Here's the path to the Windows version, but there are Mac and Linux versions too! \vcsa-ui-installer\win32\installer.exe
I'm still working on testing this upgrade method; basically you walk through the wizard and answer the questions, which tends to be straightforward if you have proper DNS, including FQDN and reverse lookups.
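Speaking of DNS, a quick pre-flight check can save you a failed deployment. Here's a small Python sketch; the FQDN below is a placeholder for your own appliance's name:

```python
# Pre-flight DNS check before running the VCSA installer: confirm the FQDN
# resolves forward, and that the resulting IP resolves back to the same
# name. The hostname below is a placeholder.
import socket

fqdn = "vcsa.lab.local"

ip = socket.gethostbyname(fqdn)                 # forward (A record) lookup
reverse_name, _, _ = socket.gethostbyaddr(ip)   # reverse (PTR) lookup

print(f"{fqdn} -> {ip} -> {reverse_name}")
if reverse_name.rstrip(".").lower() != fqdn.lower():
    print("WARNING: reverse lookup does not match, fix DNS before installing")
```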
Once you've finished the upgrade, your VAMI UI will show you're at 6.7.0.10000 Build Number 8217866.
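If you'd rather confirm that from a script than a screenshot, the appliance's REST API can report the same version and build. A rough sketch, assuming the documented /rest/appliance/system/version endpoint; hostname and credentials are placeholders:

```python
# Rough sketch: confirm the post-upgrade VCSA version/build via the
# appliance REST API instead of eyeballing the VAMI. Host and credentials
# below are placeholders.
import requests

VCSA = "https://vcsa.lab.local"

s = requests.Session()
s.verify = False  # lab appliance with a self-signed certificate

# establish a session with SSO credentials; the session cookie is reused below
s.post(f"{VCSA}/rest/com/vmware/cis/session",
       auth=("administrator@vsphere.local", "VMware1!")).raise_for_status()

info = s.get(f"{VCSA}/rest/appliance/system/version").json()["value"]
print(f"version {info['version']} build {info['build']}")  # expect 6.7.0.10000 / 8217866
```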
Isn't she lovely? This is the new VCSA VAMI look and feel, courtesy of the new Clarity UI.
2. Relevant Products
VMware vCenter Server (VC)
VMware vSphere ESXi (ESXi)
VMware Workstation Pro / Player (Workstation)
VMware Fusion Pro / Fusion (Fusion)
3. Problem Description
New speculative-execution control mechanism for Virtual Machines
Updates of vCenter Server, ESXi, Workstation and Fusion virtualize the new speculative-execution control mechanism for Virtual Machines (VMs). As a result, a patched Guest Operating System (Guest OS) can remediate the Branch Target Injection issue (CVE-2017-5715). This issue may allow for information disclosure between processes within the VM. ...
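If you want to verify what a patched Linux guest actually sees once the virtual hardware passes the new controls through, here's a small sketch to run inside the guest (the sysfs vulnerabilities files need kernel 4.15 or newer):

```python
# Run inside a Linux guest: does the VM see the new speculative-execution
# CPU features, and what does the kernel report for the three CVEs?
import glob

flags = ""
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = line
            break

# spec_ctrl / ibpb only show up once the virtual hardware exposes them
for feature in ("spec_ctrl", "ibpb", "ibrs", "stibp"):
    print(f"{feature:10s} {'present' if feature in flags.split() else 'MISSING'}")

# kernel's own verdict on meltdown / spectre_v1 / spectre_v2
for path in sorted(glob.glob("/sys/devices/system/cpu/vulnerabilities/*")):
    with open(path) as f:
        print(f"{path.rsplit('/', 1)[-1]:20s} {f.read().strip()}")
```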
Column 5 of the following table lists the action required to remediate the vulnerability in each release, if a solution is available.
First portion of the table, click the image to visit the source article.
Here are the two patches for VCSA and ESXi 6.5 that the table above points to, hyperlinked for you:
VCSA 6.5 U1g available here, and Release Notes. Name: VMware-VCSA-all-6.5.0-8024368.iso Release Date: 2018-03-20 Build Number: 8024368
ESXi 6.5: ESXi650-201803401-BG KB52460 and ESXi650-201803402-BG KB52461 are both seen here, with ESXi Build Number 7967591 KB52456.
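Once those ESXi patches are applied, you can confirm the resulting build number without SSH-ing in; a sketch along the same lines as the NTP snippet above, again with placeholder host and credentials:

```python
# Sketch: confirm an ESXi host's build number after patching. For the
# March 2018 patch above you'd expect build 7967591. Host and credentials
# below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.lab.local", user="root", pwd="VMware1!", sslContext=ctx)
try:
    about = si.content.about  # ServiceInstance AboutInfo
    print(f"{about.fullName} (build {about.build})")
finally:
    Disconnect(si)
```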