We invite you to use our new free Ubiquiti Network Management System. You can simply configure, monitor, upgrade, and back up your UBNT devices. Add your routers and switches. You can include your wireless equipment and optical GPON devices as well. And why stop there? You can even manage your client APs with ease. Management of all devices in a single application: UNMS™.
Turns out Ubiquiti surprised me today with the full announcement email:
The UniFi® Controller revolutionized enterprise software, and now it’s time for the WISP industry: welcome to UNMS™ (Ubiquiti® Network Management System) for centralized control of Ubiquiti devices across multiple sites worldwide. There are no software, licensing, or support fees – try out the demo or download now.
Simple Installation
Docker-based application with install script and in-app upgrade.
Install UNMS™ on your server or use any cloud provider.
There will be a Ubiquiti cloud where you can install UNMS™ with a single click and use your Ubiquiti SSO.
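To give a feel for the Docker-based install, here's a minimal sketch on a fresh Ubuntu 16.04 host. The script URL below is a placeholder of my own, so grab the real one from the official UNMS download page:

# placeholder URL, check the official UNMS download page for the current install script
curl -fsSL https://unms.com/install > /tmp/unms_inst.sh
sudo bash /tmp/unms_inst.sh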
Had I been paying closer attention, UNMS was actually announced in beta form in the forum way back in June 2017:
Welcome to UNMS Beta!
06-23-2017 09:03 AM
Ubiquiti Network Management System (Beta)
Today we are releasing a beta version (0.8.0) of our new Ubiquiti Network Management System - UNMS. This is built from the ground up for the WISP industry. UNMS gives you the tools to monitor and configure your UBNT devices from a single software solution. This beta version includes partial support for EdgeRouter, EdgePoint, uFiber (GPON) and airMAX models. See our roadmap below for more details. UNMS can be installed locally using docker on Linux or on a cloud instance like Digital Ocean or Amazon AMI. Please give us your feedback in a new thread.
On the main UNMS page, we find details on the Linux requirements:
Minimum System Requirements
Operating System: Ubuntu 16.04+, Debian 8.4+
CPU: 64-bit (x64) processor
Memory: 2 GB RAM
Hard disk: 16 GB available disk space
Windows and Mac OS are not supported; use a VM instead.
Official cloud solution is not yet provided; however, UNMS™ can be hosted on your own cloud service.
A demo is available online here, showing the full UNMS experience without having to install anything on Linux first.
Launch the UMobile app, tap +, and tap any discovered, compatible Ubiquiti device.
I'm just getting started with kicking the tires on the app, and I haven't even gotten the Linux appliance downloaded and configured quite yet. I hope to add my observations to this article, as time permits.
You too can get started right away with the mobile app! I noticed it doesn't require you to have the full UNMS already installed. I was able to use it straight away for some basic management by merely pointing it at my EdgeRouter Lite, seen pictured below. This sort of direct connect didn't work in earlier app versions.
This document is a work-in-progress, with refinement and new screenshots being added over the next week or so during this time of year that so many IT Pros upgrade their home labs.
Read the Release Notes. The simple update method that this article details means you won't need the VMware-VCSA-all-6.5.0-7312210.iso from the Download Page for: vCenter Server Appliance 6.5 Update 1d | 19 DEC 2017 | Build 7312210
This upgrade is also known as version 6.5.0.13000 or 6.5U1d, as seen in the vCenter Server Appliance Management Interface (VAMI), as pictured above. It's the web UI featured throughout this article, no command line needed.
Yep, upgrading via VAMI works as advertised for any 6.5.x release. I began using it when upgrading from 6.5 to 6.5.0a back in February, then to 6.5.0b in March, 6.5.0c and 6.5.0d in April, 6.5.0e in June, 6.5 U1 in July, and finally 6.5 U1d here in December. This is an easy upgrade, as shown in the screen-by-screen walkthrough and video below.
This was one snag that I noticed after the upgrade, but the fix is pretty simple.
Warning:
You need to do your homework before upgrading; if you're wondering why, read this.
Do this VCSA 6.5 U1d upgrade in a test environment first! Before attempting, you should be sure to have a full backup, such as the simple native VCSA backup button seen at top-right. You can also use a 3rd party backup solution such as NAKIVO or Veeam.
At a minimum, do a snapshot (or backup) of this VCSA VM before upgrading, then make sure everything works alright after the upgrade, then remove the snapshot within a few days, to avoid performance degradation.
If you're looking for how you get from 6.0.x to 6.5.x, that's more of a migration, and the right article for you is over here:
along the left edge of the VAMI, click 'Update', then click 'Check Updates', then 'Check Repository'. Under Available Updates, click 'Install Updates' and choose 'Install All Updates', accept the EULA, and when it's done downloading and upgrading, you'll be prompted to reboot the VCSA appliance
The upgrade takes about 2 to 5 minutes if you have fast internet and your VCSA VM is located on an SSD-based datastore, such as the Samsung 960 EVO 1TB M.2 NVMe SSD I used for my home datacenter, featured in this video.
in your browser, go to your VCSA IP or Name:5480
login with root and your password
along the left edge of the VAMI, click 'Update', then click on 'Check Updates', then 'Check Repository', then under Available Updates, click on 'Install Updates' and choose 'Install All Updates'
click on the 'I accept' checkbox, then click on 'Install'
wait for a bit; on SSDs, a bit is less than 2 minutes
wow, you're done already
at left, click on 'Summary', then at right, click on 'Reboot'
login with root and your password
along the left edge of the VAMI, click 'Update', optionally also clicking on 'Check Updates' then 'Check Repository', with the VAMI showing you confirmation that you're already done, since you're now at 6.5.0.13000 Build Number 7312210
If you see status "Unknown" in VAMI after this upgrade, copy and paste chrome://settings/clearBrowserData into a new Chrome tab, turn on the Cached images and files checkbox, then click Clear Data.
Once you've performed this patch and rebooted, various UIs will show your ESXi version, depending upon where you look:
Release Notes. The simple update method that this article details below means you won't need the ISO Download Page for: ESXi 6.5 U1 | 27 JULY 2017 | Build 5969303
I have only tested this method when upgrading from 6.5.0 U1 EP4 (Build 6765664) to Build 7388607; your experience coming from earlier 6.x builds may vary.
I have been able to replicate that the Xeon D 10GbE X552/X557 driver VIB needs to be re-installed right after the upgrade; the simple one-line workaround is documented here, with details below.
This is not official VMware documentation, it's merely a convenient upgrade technique that may help in lab tests, a little simpler than the official procedure VMware documents and demonstrates in KB2008939. It's up to you to adhere to the backup-first advice detailed below, with the full Disclaimer found at below-left, at the bottom of every TinkerTry page.
All the background story on how this easy ESXCLI upgrade method came about was covered in my earlier articles about updating 6.0 U2 and 6.5.
If you're in production, beware, this code just came out yesterday. This article is for the lab, where you may want to give this critical patch a try.
The esxcli software profile update command brings the entire contents of the ESXi host image to the same level as the corresponding upgrade method using an ISO installer. However, the ISO installer performs a pre-upgrade check for potential problems, and the esxcli upgrade method does not. The ISO installer checks the host to make sure that it has sufficient memory for the upgrade, and does not have unsupported devices connected. For more about the ISO installer and other ESXi upgrade methods, see Upgrade Options for ESXi 6.0.
Before proceeding, you should read Overview of the ESXi Host Upgrade Process. This article below is just about the quick and easy way, effective and safe for most folks. For those more interested in "clean installs", the routine is to log in to My VMware, download the ESXi 6.5U1 ISO, shut down the ESXi on USB that you're already running, eject that USB flash drive, label it, and set it aside, then boot from another USB drive like the SanDisk Ultra Fit with a fresh install of 6.5U1 imaged onto it. This clean install is much more time consuming than the easy method outlined below. Why? Because once ESXi 6.5U1 is freshly installed, at a minimum you'll also have to use Datastore Browser to locate your VMs on your VMFS datastores, add the files with *.vmx extensions back into your inventory, then add the host back to your cluster that should already be at 6.5U1. While this extra work may help you be sure that you don't have any drivers or changes carried over from your previous build, for many users that's not a concern.
backed up the ESXi 6.5.x you've already got; if it's on USB or SD, use one of the home-lab-friendly and super easy methods such as USB Image Tool under Windows, as detailed by Florian Grehl here
you can now continue with this simple approach to upgrading your lab environment. Unsupported, at your own risk, see the full disclaimer at below-left.
You should wind up with the same results after this upgrade as folks who upgrade by downloading the full ESXi 6.5 U1 ISO, creating bootable media from that ISO, and booting from that media (or mounting the ISO over IPMI/iLO/iDRAC/IMM/iKVM and booting from it):
Download and upgrade to the 6.5 Patch 02 update using the patch directly from the VMware Online Depot
The entire process including reboot is usually well under 10 minutes, and many of the steps below are optional, making it appear more difficult than it is. Triple-clicking on a line of code below highlights the whole thing, including a trailing carriage return, so you can right-click and copy it to your clipboard; it then executes immediately upon pasting into your SSH session. If you want to edit the line before it executes, manually swipe your mouse across the code rather than triple-clicking.
Open an SSH session (e.g. PuTTY) to your ESXi 6.5.x server
(if you forgot to enable SSH, here's how)
Turn on maintenance mode, or ensure you've set your ESXi host to automatically gracefully shutdown all VMs upon host reboot, or shutdown all the VMs gracefully that you care about, including VCSA.
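If you'd rather script the maintenance mode step too, a one-liner using standard esxcli syntax looks like this (swap true for false later to exit maintenance mode):

esxcli system maintenanceMode set -e true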
Firewall allow outbound http requests - This command might not be needed, but I'm trying to make these instructions applicable to the broadest set of readers. Paste the one line below into your SSH session, then press enter:
esxcli network firewall ruleset set -e true -r httpClient
Pull down ESXi Image Profile using https and run patch script - Paste the line below into your SSH session, then hit enter and wait while nothing seems to happen, taking somewhere between roughly 3 to 10 minutes before the completion screen (sample below) appears:
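Here's that line, the very same one captured in the full session transcript further below:

esxcli software profile install -p ESXi-6.5.0-20171204001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml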
It MAY just do its thing after a several-minute pause, or it may immediately fail and warn you which VIBs will be removed if you proceed. Note that the next command is the same as the one above, but with --ok-to-remove
added at the end. This allows the upgrade to proceed, now that you've been properly warned. Be sure to make note of what VIBs it says will be removed, just in case the inbox (included) drivers it installs don't work for your system.
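In other words, as also captured in the session transcript below:

esxcli software profile install -p ESXi-6.5.0-20171204001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml --ok-to-remove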
Be sure all your devices still work afterward, and if not, you'll need to locate the original VIB download site and install it, using the detailed install instructions usually found at the vendor's VIB download site. Now that the included AHCI/SATA driver has been fixed, home lab enthusiasts are likely to find such issues much less common.
If these esxcli software profile install commands fail, you may want to try changing update to install, details below; see also Douglas' comment. Wait time for a successful install depends mostly on the speed of the ESXi host's connection to the internet, and somewhat on the write speed of the storage media that ESXi is installed on.
OPTIONAL - Xeon D with 10GbE - If your system includes the two 10GbE Intel X552/X557 RJ45 or SFP+ NIC ports, they can be used at 1GbE or 10GbE speeds, but you'll need to regain the 10GbE Intel driver VIB that the upgrade process replaced with an older one that doesn't work with your X557. Simply copy and paste the following one-liner fix:
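This is the same net-ixgbe 4.5.3 VIB install seen in the session transcript below:

esxcli software vib install -v https://cdn.tinkertry.com/files/net-ixgbe_4.5.3-1OEM.600.0.0.2494585.vib --no-sig-check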
OPTIONAL - Xeon D-1567 - If your system uses the Xeon D-1567 (12 core) you may find the VMware ESXi 6.0 igb 5.3.3 NIC Driver for Intel Ethernet Controllers 82580, I210, I350, and I354 performs better for the service console on either ETH0 or ETH1 instead of the included-with-6.5U1EP4 VMware inbox driver for I-350 called VMW_bootbank_net-igb_5.0.5.1.1-5vmw.650.0.0.4564106. No need to download separately. Simply copy and paste the following one-liner fix:
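This is the same net-igb 5.3.3 VIB install seen in the session transcript below:

esxcli software vib install -v https://cdn.tinkertry.com/files/net-igb_5.3.3-1OEM.600.0.0.2494585.vib --no-sig-check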
OPTIONAL - Intel Optane P4800X - If your system has an Intel Optane P4800X NVMe SSD of either the PCIe or U.2 type, or a consumer 900P version, you'll need the Intel driver for full speed support under ESXi. First, find your NVMe firmware version, then reference this version to verify the exact VIB you should be using on the VMware HCL - IO Devices Keyword P4800X. If it's intel-nvme version 1.3.2.4, simply paste the easy one-liner fix:
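Here's a sketch of that install; the local path below is a placeholder of mine, since you should first download the exact intel-nvme VIB matching your firmware to the ESXi host (the version string matches the 1.3.2.4 VIB named in the transcript further down):

esxcli software vib install -v /tmp/intel-nvme-1.3.2.4-1OEM.650.0.0.4598673.vib --no-sig-check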
Verify your firmware and VIB versions before proceeding. This method is here as a reference only; you should use your own internal web host to make pulling/installing this VIB easy, or just download it to a local directory on the ESXi host and install it from there.
Firewall disallow outbound http requests - To return your firewall to how it was before this online upgrade, simply copy and paste the following:
esxcli network firewall ruleset set -e false -r httpClient
If you turned on maintenance mode earlier, remember to turn maintenance mode off.
If you normally leave SSH access off, go ahead and disable it now.
Type or paste
reboot
and hit return (to restart your ESXi server), or use your favorite ESXi UI to restart the host.
After the reboot is done, it would be a good idea to test login using the ESXi Host Client, pointing your browser to the IP or hostname of your just-upgraded server, to be sure everything seems to be working right.
You're done!
Special thanks to VMware ESXi Patch Tracker by Andreas Peetz at the VMware Front Experience Blog. This upgrade test was performed on a TinkerTry'd VMware HCL system. Yes, on both the very popular 8 core and the rather special 12 core version of the beloved Supermicro SuperServer SYS-5028D-TN4T system.
Here's how my upgrade from 6.5.0d to 6.5 U1 Patch 02 Build 7388607 looked, right after the 1 minute download/patch.
Yep, it worked! This is called the DCUI, using Supermicro's iKVM HTML5 UI to show you what my console looked like after the patch & reboot.
ESXi Host client view of Build 6765664; Build 7388607 looks similar.
That's it! When the reboot is complete, you'll see for yourself that you now have the latest ESXi, Build 7388607, as pictured above. Now you have more spare time to read more TinkerTry articles!
When the upgrade is complete, on the ESXi Host Client UI, under Host / Configuration, you should see the following "Image profile" (Updated) ESXi-6.5.0-20171204001-standard (VMware, Inc.)
Depending upon your ESXi firewall configuration, if the above command results in a network related error such as: 'NoneType' object has no attribute 'close'
then you skipped the firewall configuration step above, try again!
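By the way, if you'd rather verify the result from your SSH session instead of the Host Client, the standard esxcli query reports the same installed image profile name:

esxcli software profile get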
Alternatively, you could have used VMware Update Manager on a Windows system or VM, but for one-off upgrades typical in a small home lab, pasting these 3 or 4 lines of code is pretty darn easy.
Looking ahead, since VUM is now built into VCSA 6.5, this adds another way to do future upgrades and patches, even in a small home lab environment.
Below, I've pasted the full text of my upgrade, which helps you see what drivers were touched; use the horizontal scroll bar or shift + mousewheel to look around, and Ctrl+F to find stuff quickly:
login as: root
Using keyboard-interactive authentication.
Password:
The time and date of this login have been sent to the system logs.
WARNING:
All commands run on the ESXi shell are logged and may be included in
support bundles. Do not provide passwords directly on the command line.
Most tools can prompt for secrets or accept them from standard input.
VMware offers supported, powerful system administration tools. Please
see www.vmware.com/go/sysadmintools for details.
The ESXi Shell can be disabled by an administrative user. See the
vSphere Security documentation for more information.
[root@xd-1541-5028d:~] esxcli network firewall ruleset set -e true -r httpClient
[root@xd-1541-5028d:~] esxcli software profile install -p ESXi-6.5.0-20171204001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
[Exception]
You attempted to install an image profile which would have resulted in the removal of VIBs ['VMware_bootbank_vscsi-statsd_1.0.26-5.1.u2', 'INT_bootbank_intel-nvme_1.3.2.4-1OEM.650.0.0.4598673', 'Intel_bootbank_intel_ssd_data_center_tool_3.0.4-400']. If this is not what you intended, you may use the esxcli software profile update command to preserve the VIBs above. If this is what you intended, please use the --ok-to-remove option to explicitly allow the removal.
Please refer to the log file for more details.
[root@xd-1541-5028d:~] esxcli software profile install -p ESXi-6.5.0-20171204001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
--ok-to-remove
Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: VMW_bootbank_igbn_0.1.0.0-15vmw.650.1.36.7388607, VMW_bootbank_misc-drivers_6.5.0-1.36.7388607, VMW_bootbank_net-igb_5.0.5.1.1-5vmw.650.0.0.4564106, VMW_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.650.0.0.4564106, VMW_bootbank_ntg3_4.1.3.0-1vmw.650.1.36.7388607, VMW_bootbank_nvme_1.2.0.32-5vmw.650.1.36.7388607, VMW_bootbank_nvmxnet3_2.0.0.23-1vmw.650.1.36.7388607, VMW_bootbank_vmkata_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmkusb_0.1-1vmw.650.1.36.7388607, VMware_bootbank_esx-base_6.5.0-1.36.7388607, VMware_bootbank_esx-dvfilter-generic-fastpath_6.5.0-1.36.7388607, VMware_bootbank_esx-tboot_6.5.0-1.36.7388607, VMware_bootbank_esx-ui_1.23.0-6506686, VMware_bootbank_vsan_6.5.0-1.36.7388608, VMware_bootbank_vsanhealth_6.5.0-1.36.7388609, VMware_locker_tools-light_6.5.0-1.33.7273056
VIBs Removed: INT_bootbank_intel-nvme_1.3.2.4-1OEM.650.0.0.4598673, INT_bootbank_net-igb_5.3.3-1OEM.600.0.0.2494585, INT_bootbank_net-ixgbe_4.5.1-1OEM.600.0.0.2494585, Intel_bootbank_intel_ssd_data_center_tool_3.0.4-400, VMW_bootbank_igbn_0.1.0.0-14vmw.650.1.26.5969303, VMW_bootbank_misc-drivers_6.5.0-1.26.5969303, VMW_bootbank_ntg3_4.1.2.0-1vmw.650.1.26.5969303, VMW_bootbank_nvme_1.2.0.32-4vmw.650.1.26.5969303, VMW_bootbank_nvmxnet3_2.0.0.22-1vmw.650.0.0.4564106, VMW_bootbank_vmkata_0.1-1vmw.650.1.26.5969303, VMW_bootbank_vmkusb_0.1-1vmw.650.1.26.5969303, VMware_bootbank_esx-base_6.5.0-1.29.6765664, VMware_bootbank_esx-dvfilter-generic-fastpath_6.5.0-0.0.4564106, VMware_bootbank_esx-tboot_6.5.0-1.29.6765664, VMware_bootbank_esx-ui_1.21.0-5724747, VMware_bootbank_vsan_6.5.0-1.29.6765666, VMware_bootbank_vsanhealth_6.5.0-1.29.6765667, VMware_bootbank_vscsi-statsd_1.0.26-5.1.u2, VMware_locker_tools-light_6.5.0-0.23.5969300
VIBs Skipped: VMW_bootbank_ata-libata-92_3.00.9.2-16vmw.650.0.0.4564106, VMW_bootbank_ata-pata-amd_0.3.10-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-atiixp_0.4.6-4vmw.650.0.0.4564106, VMW_bootbank_ata-pata-cmd64x_0.2.5-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-pdc2027x_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-serverworks_0.4.3-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-sil680_0.4.8-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-via_0.3.3-2vmw.650.0.0.4564106, VMW_bootbank_block-cciss_3.6.14-10vmw.650.0.0.4564106, VMW_bootbank_char-random_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ehci-ehci-hcd_1.0-4vmw.650.0.14.5146846, VMW_bootbank_elxnet_11.1.91.0-1vmw.650.0.0.4564106, VMW_bootbank_hid-hid_1.0-3vmw.650.0.0.4564106, VMW_bootbank_i40en_1.3.1-5vmw.650.1.26.5969303, VMW_bootbank_ima-qla4xxx_2.02.18-1vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-devintf_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ixgben_1.4.1-2vmw.650.1.26.5969303, VMW_bootbank_lpfc_11.1.0.6-1vmw.650.0.0.4564106, VMW_bootbank_lsi-mr3_6.910.18.00-1vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt2_20.00.01.00-3vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt3_12.00.02.00-11vmw.650.0.0.4564106, VMW_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.650.0.0.4564106, VMW_bootbank_mtip32xx-native_3.9.5-1vmw.650.0.0.4564106, VMW_bootbank_ne1000_0.8.0-16vmw.650.1.26.5969303, VMW_bootbank_nenic_1.0.0.2-1vmw.650.0.0.4564106, VMW_bootbank_net-bnx2_2.2.4f.v60.10-2vmw.650.0.0.4564106, VMW_bootbank_net-bnx2x_1.78.80.v60.12-1vmw.650.0.0.4564106, VMW_bootbank_net-cdc-ether_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-cnic_1.78.76.v60.13-2vmw.650.0.0.4564106, VMW_bootbank_net-e1000_8.0.3.1-5vmw.650.0.0.4564106, VMW_bootbank_net-e1000e_3.2.2.1-2vmw.650.0.0.4564106, VMW_bootbank_net-enic_2.1.2.38-2vmw.650.0.0.4564106, VMW_bootbank_net-fcoe_1.0.29.9.3-7vmw.650.0.0.4564106, VMW_bootbank_net-forcedeth_0.61-2vmw.650.0.0.4564106, VMW_bootbank_net-libfcoe-92_1.0.24.9.4-8vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-core_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-en_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-nx-nic_5.0.621-5vmw.650.0.0.4564106, VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.650.0.0.4564106, VMW_bootbank_net-usbnet_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.650.0.0.4564106, VMW_bootbank_nhpsa_2.0.6-3vmw.650.0.0.4564106, VMW_bootbank_nmlx4-core_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-en_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-rdma_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx5-core_4.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_ohci-usb-ohci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_pvscsi_0.1-1vmw.650.1.26.5969303, VMW_bootbank_qedentv_2.0.3.29-1vmw.650.0.0.4564106, VMW_bootbank_qfle3_1.0.2.7-1vmw.650.0.0.4564106, VMW_bootbank_qflge_1.1.0.3-1vmw.650.0.0.4564106, VMW_bootbank_qlnativefc_2.1.50.0-1vmw.650.1.26.5969303, VMW_bootbank_sata-ahci_3.0-26vmw.650.1.26.5969303, VMW_bootbank_sata-ata-piix_2.12-10vmw.650.0.0.4564106, VMW_bootbank_sata-sata-nv_3.5-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-promise_2.12-3vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil24_1.1-1vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil_2.3-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-svw_2.3-3vmw.650.0.0.4564106, VMW_bootbank_scsi-aacraid_1.1.5.1-9vmw.650.0.0.4564106, VMW_bootbank_scsi-adp94xx_1.0.8.12-6vmw.650.0.0.4564106, 
VMW_bootbank_scsi-aic79xx_3.1-5vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-fnic_1.5.0.45-3vmw.650.0.0.4564106, VMW_bootbank_scsi-hpsa_6.0.0.84-1vmw.650.0.0.4564106, VMW_bootbank_scsi-ips_7.12.05-4vmw.650.0.0.4564106, VMW_bootbank_scsi-iscsi-linux-92_1.0.0.2-3vmw.650.0.0.4564106, VMW_bootbank_scsi-libfc-92_1.0.40.9.3-5vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid2_2.00.4-9vmw.650.0.0.4564106, VMW_bootbank_scsi-mpt2sas_19.00.00.00-1vmw.650.0.0.4564106, VMW_bootbank_scsi-mptsas_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-mptspi_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.650.0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-3-0_6.5.0-0.0.4564106, VMW_bootbank_uhci-usb-uhci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usb-storage-usb-storage_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usbcore-usb_1.0-3vmw.650.1.26.5969303, VMW_bootbank_vmkplexer-vmkplexer_6.5.0-0.0.4564106, VMW_bootbank_vmw-ahci_1.0.0-39vmw.650.1.26.5969303, VMW_bootbank_xhci-xhci_1.0-3vmw.650.0.0.4564106, VMware_bootbank_cpu-microcode_6.5.0-0.0.4564106, VMware_bootbank_emulex-esx-elxnetcli_11.1.28.0-0.0.4564106, VMware_bootbank_esx-xserver_6.5.0-0.23.5969300, VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-5vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-10vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-7vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-8vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-mpt2sas-plugin_2.0.0-6vmw.650.1.26.5969303, VMware_bootbank_native-misc-drivers_6.5.0-0.0.4564106, VMware_bootbank_rste_2.0.2.0088-4vmw.650.0.0.4564106, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.10-1.26.5969303
[root@xd-1541-5028d:~] esxcli software vib install -v https://cdn.tinkertry.com/files/net-ixgbe_4.5.3-1OEM.600.0.0.2494585.vib --no-sig-check
Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: INT_bootbank_net-ixgbe_4.5.3-1OEM.600.0.0.2494585
VIBs Removed: VMW_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.650.0.0.4564106
VIBs Skipped:
[root@xd-1541-5028d:~] esxcli network firewall ruleset set -e false -r httpClient
[root@xd-1541-5028d:~] reboot
[root@xd-1541-5028d:~] esxcli software vib install -v https://cdn.tinkertry.com/files/net-igb_5.3.3-1OEM.600.0.0.2494585.vib --no-sig-check
Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: INT_bootbank_net-igb_5.3.3-1OEM.600.0.0.2494585
VIBs Removed: VMW_bootbank_net-igb_5.0.5.1.1-5vmw.650.0.0.4564106
VIBs Skipped:
[root@xd-1541-5028d:~]
Yeah, it's a bit long, and somewhat complicated. At least I know it will feel good when this is all completely behind us!
I take great pride that there are now hundreds of happy Supermicro SuperServer Bundle owners in the world who by-and-large have enjoyed their product ownership experience, with well under 0.5% returned. I'm also glad to have had TinkerTry readers play an active role in bringing the even-more-capable 12 core Xeon D-1567 to market in the form of another Wiredzone Bundle, shipped already burn-in-tested and fully warranted, along with the latest tested BIOS and IPMI firmware already installed, making for a much-improved out-of-box experience for those eager to get right to work.
But when it comes to 10GbE networking, the 12 core Xeon D's track record has been a little bumpy; it's only the 12 core, and maybe the 16 core, more details below. While it's true that there are hundreds of 12 core Xeon D owners out there, it's quite likely that only a small proportion of them use those 10GbE ports with their 1G or 10G switches. And it's for that subset of folks that I wrote this article: the most enthusiastic Xeon D fans, willing to pay the extra premium for those extra cores, which also enjoy pretty much linear scaling, evident here.
Those 12 core owners have also been experiencing an intermittent problem with network outages, where the physical link-layer LEDs go dark at some random time. This unfortunate state has only one known workaround right now: you have to shut down whatever OS you're running, then remove power from the system. That's an awkward, unacceptable "fix", really just a temporary workaround. It's also not an action that can be done remotely, at least if you're not lucky enough to have a smart power strip already installed between your UPS and your SuperServer.
Back in April of 2017, a solitary report of some 10GbE strangeness arrived, documented so well by Devoid at TinkerTry here:
Months back, I experienced the random disconnects of the 10G NICs. I did the BIOS and IPMI updates along with ESXi 6 U2 and the 4.4.1 x552 driver. Things had been running without issue for months, so I thought all was well. I recently decided to turn up some of my old VMs that I've had powered off. All was well for about 3 days. Now, it appears my 10G disconnecting NIC issue is back. I even just updated to 4.5.1 x552 driver...no luck. The kicker, no matter how many reboots, how many interface shutdowns (on both the switch and esxcli), and even pulling the network cable, nothing would bring up the links. I even tried hard setting speeds, nothing. The only solution, pull power to the X10SDV-12C-TLN4F. The huge problem this is causing me is that all my VM storage is on my Synology NAS, via 10g (X540-T2 also hooked up to same Cisco switch, no issues). Once my Supermicro 10g links go down, all the VMs die.
So. My questions: is anyone else experiencing this? Given that it ran fine for months with a few VMs, but came crashing down when I loaded it up, I'm wondering if it's linked to load?
My setup:
X10SDV-12C-TLN4F
BIOS: 1.1c
Firmware: 03.46
ESXi 6.0.0, 4600944 (u2)
Cisco 3750x, C3KX-NM-10GT
Synology RS3413xs+
HELP! I wouldn't even know who to talk to, Cisco, Supermicro, VMWare, Intel?
I replied with good intentions, then David replied back again with good news:
Update to the story. I haven't been able to open a VMWare ticket yet. Hopefully I will be able to through work, if needed.
The good news, however, is that I did open a ticket with SuperMicro, and they responded pretty quickly, and suggested I update the firmware. They provided a specific firmware update for the 10G NICs. I'd pass it along, but it seems pretty specific. The firmware was labeled: SDV23A.
So far, so good. 6+ days and counting. It has a decent load, so I plan to pile on a few more VMs, and keep monitoring.
My Netgear XS708T was in my basement, but my server was on my second floor. 100' of CAT7 and some attic adventures solved that problem.
A few months later, another report arrived. I'll admit I didn't think too much of these reports, since an RMA swap had resolved that user's issue, so it seemed like a problem that was only occasionally sneaking through; besides, without a 10GbE switch hooked up to my own Xeon D-1567 SuperServer Workstation Bundle 1, I didn't have a way to replicate the experience. But I never forgot, and this incident incented me to climb into my rather hot attic this summer to get some fresh 100' CAT7 cabling strung from my basement's Netgear XS708T ProSAFE 8-Port 10-Gigabit Smart Managed Switch to my attic. Why? Well, I use my SuperServer Workstation near bedrooms, and while this 10GbE switch was quieter than the Ubiquiti ES-16-XG switch I briefly tried (unboxing and sound tests), it was still far noisier than any of my Xeon D servers, so I wanted to keep it in my basement. I also hadn't heard about the simple fan swap solution yet either.
Your 12 core or 16 core Xeon D system's X557 RJ45 port LEDs all go dark at some seemingly random interval, ranging from several times per day to once every few months. It can happen with whatever OS you're running.
Anybody with a 12 core Xeon D system, and possibly 16 core Xeon D systems too.
There's a lot more to this complex problem; this section will help you determine whether you might be affected, should you choose to use your 10G network ports. I only have these details so far, with this issue reported to happen on:
Any OS
I have reports of this problem occurring on
VMware ESXi 6.0
VMware ESXi 6.5
XenServer 7.3
Any 10G switch
I have reports of this problem occurring on
Any 12 or 16 core Xeon D
Presumably any brand of Xeon D system (there are many!), but I've only heard of this issue on Supermicro 12 core systems. This includes:
Xeon D-1557 featured on the X10SDV-12C-TLN4F motherboard as reported by Devoid here and discussed by phone recently
Xeon D-1567 featured on the X10SDV-12C+-WD002 motherboard (PIO-5028D-TN4T-01-WD002 in Windows Device Manager) that Wiredzone sells as part of the SYS-5028D-TN4T-12C SuperServer Bundle 1 and Bundle 2 system.
Xeon D-1577, Xeon D-1571, Xeon D-1559 likely affected too, see also Intel Xeon Processor D Family (aka Broadwell DE) here.
Any network connection speed, 1GbE or 10GbE
Presumably 1GbE links to the X557 network ports are also prone to this failure, but that's just conjecture, since this sure appears to be a firmware issue with the X557 itself.
I realize this is an odd section title, but when you read the bullet list, you'll start to gain a further understanding of why it has been challenging to get to the bottom of this issue.
Power cycling the 10GbE network switch
Upgrading firmware of the 10GbE switch
I only tried this with my Netgear XS708T; it made no difference.
I'm currently at 6.6.1.7, 1.0.0.8 level with 1.3.6.1.4.1.4526.100.4.39 System Object OID.
Forcing different network negotiation methods in the device driver
I only tried this to the extent that the Intel driver VIBs would allow under VMware ESXi 6.5
Trying different CAT6a or CAT7 cables
Trying different cable lengths
Correlating OS events with network outage events, no obvious pattern after exploring syslog from Netgear and attached VMware ESXi 6.5U1 host, with the configuration of VMware vRealize Log Insight detailed at TinkerTry here.
Applying the Intel X557 firmware SDV23A using Intel's SDVTLN4.BAT batch file on DOS bootable media fixed the issue for Devoid for a few months, but it didn't work for me; I encountered another outage in less than a day. Unsure what this means, not enough data points yet. It's also possible my firmware update didn't complete successfully.
Why am I unsure about this last item? I used my article:
Yes, the firmware versions seem to differ. But do I know that SDV23A is supposed to give me 0x800005ad? Not entirely sure, and it doesn't show anywhere in my archive of all BIOS release notes.
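For what it's worth, ESXi itself can report the firmware version it sees on each port; here's a quick check, assuming your X557 ports enumerate as vmnic2 and vmnic3 (the numbering varies by system):

esxcli network nic get -n vmnic2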
This workaround won't prevent you from losing 10GbE connections, but it will allow an automatic fail-back to 1GbE for those occasions where powering down is very inconvenient.
While you're likely using Intel I350 ETH0 for your service console, you can assign ETH1 to be your standby adapter.
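Here's a minimal sketch of that standby configuration using esxcli, assuming a standard vSwitch0 with an X557 10GbE port as vmnic2 and the I350 ETH1 as vmnic1 (your vmnic numbering may differ):

esxcli network vswitch standard policy failover set -v vSwitch0 -a vmnic2 -s vmnic1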
When troubleshooting such problems, it becomes important to find a way to make the system fail fast. In other words, come up with an easy way to show the problem happen when doing a live web meeting, or even better, fully document a way to replicate the configuration and the failure, so Supermicro can recreate the issue in their lab. This is seemingly the first step in getting a proper solution created that anybody can apply to their own system, most likely in the form of a new BIOS version. Meanwhile, 1.2c is the latest, see:
Yes, that's my situation too. As a blogger representing Supermicro owners like myself, I'm reluctant to sign an NDA, and would much rather focus my energies on helping Supermicro find a solution that helps everybody. The best outcome would be for everybody to enjoy a new BIOS with this new X557 workaround baked right in. But for that to happen, these steps are likely needed first:
Full recreate in the Supermicro labs
Coordination with Intel
Permission from Intel
QA testing at Supermicro labs
This all adds up to what will likely be weeks or even months of time before we have this fixed, and I'm very sorry about this temporary inconvenience.
use iKVM to mount the ISO and get that ESXi install onto bootable USB media such as this readily available SanDisk
once ESXi is configured, allow SSH (detailed in this article), then issue this one-line command to download and install the latest Dec 04 2017 build 7388607 (helpful version history found here):
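Presumably this is the same depot command used in the walkthrough above; here's a sketch combining the firewall and install steps into one pasteable line:

esxcli network firewall ruleset set -e true -r httpClient && esxcli software profile install -p ESXi-6.5.0-20171204001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml --ok-to-remove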
reboot
As usual with all 6.x builds of ESXi, you'll notice no 10G drivers are working or even visible: the built-in VMware inbox drivers don't work with the X557, so install the 4.5.3 VIB from here, then reboot
You may find that you now have no 10G connection at the physical level (no link LEDs on, on your 10G switch)
This 10G network out problem can be resolved temporarily by shutting down and unplugging all system power for >15 seconds, then powering back up and booting ESXi, waiting for it to finish booting so the 10G driver loads and the 10G speed indicator comes back on
This 10G network out problem can be greatly reduced by asking Supermicro for SDV23A.zip and using the SDVTLN4.BAT; one user reports going many months before his next outage occurred
This 10G network out problem can be fixed by asking Supermicro for the latest firmware (I don't know the version), but that one requires signing a non-disclosure agreement
I'm working with Supermicro support directly to help get a fix to you, my valued TinkerTry readers who invested heavily in >8 core Xeon D systems and demand stable 10GbE networking. It's now 10:22pm, and I actually just got off a nearly hour-long phone call with their technicians, working through the details of this article.
HHHL stands for Half Height, Half Length (source). These are the type of single PCIe slot cards that fit into the Supermicro SuperServer SYS-5028D-TN4T mini-tower bundles, and are entirely PCIe bus powered. This means the watt consumption is generally under 70 watts at highest workloads, which means the overall system stays well within the capabilities of the 250 watt power supply.
Back in March of 2017, I needed to go back to using my SuperServer Workstation as my daily driver Windows 10 workstation, in large part because of excessive video render times for 4K video production workflow. This meant it was time to revisit how well this VM worked with the latest ESXi 6.5.0d, determining whether all features and functions behaved as they did back in the ESXi 6.0 days, when I first wrote this article:
This SuperServer Workstation hybrid combo is a niche build, and not a turn-key solution. Yes, I admit that Bundle 2 is much more popular for good reasons: it's sold without Windows 10 and without a GPU, a better choice for the vast majority of use cases. This is especially true for virtualization enthusiasts, who generally don't need or want a watt-burning GPU. But this article is about those who very much do want a compact GPU that takes up only one slot for their Workstation VM. The term Workstation comes from the use of an attached keyboard, mouse, and monitor, so your vSphere 6.5 Datacenter can also be your Windows 10 Workstation, simultaneously. See also How to locate your triple monitor (up to 2K) PC 20 feet away for less noise and more joy.
With so many months having gone by since these SuperServer Workstations began shipping, it's high time I revisit this little screamer, and let you know what discoveries I made when moving from vSphere 6.0 to vSphere 6.5, and when determining whether any suitable replacements for the VisionTek AMD 7750 GPU card had arrived, with the promise of quieter operation.
Notably, I failed to get the newer, more powerful, and quieter AMD Radeon Pro WX 4100 Workstation Graphics Card working properly for passthrough when using the latest system BIOS 1.2, as explained here: yellow bangs through the Display adapter listed in Windows Device Manager, and/or occasional PSODs. Maybe some manual tweaks to the .VMX file will do the trick someday, if somebody figures out what those tweaks are.
As for NVIDIA, well, the spiffy looking and quiet sounding K1200 was a no-go as well, exhibiting the dreaded yellow bangs through Device Manager. Yes, NVIDIA still doesn't want you to use anything but the proper NVIDIA Grid product line for vGPUs carved up among many of your VMs, explained in their video. NVIDIA has historically not been interested in allowing easy VMware ESXi passthrough for either their Quadro (workstation) or GeForce (gaming) product lines.
The noise of the always-on fan can be reduced by creatively using a fan speed reducer from nearly any Noctua cooling fan, installing it inline. Noctua calls it a Low-Noise Adaptor (L.N.A.), sold alone as the NA-SRC10 3-Pin Low-Noise Adaptors. Clumsy to get attached firmly, but very effective once in place, with my GPU still managing to stay cool to the touch even after long benchmarks like FurMark that doled out heavy abuse.
I'm much happier using my Xeon D for daily use when compared to any laptop, even that lovely work-issued Core i7 Dell Precision 5510 with 1TB NVMe, worth around $3000 total. Why? Because it's still just a mobile CPU, the 4 core 8MB cache i7-6820HQ. Compare that with the 12 core 18MB cache Xeon D-1567 in my SuperServer Bundle 1, as detailed in this Intel ARK comparison table. These specs really do matter. Routine content creation tasks like Camtasia 9 video renders use all those cores and now take me much less time. See also my 4K video render measurements with various core counts here:
Here are my current observations, fresh off about 3 months of heavy use.
Premium CPU in Dell Precision 5510 versus Xeon D-1567 in SuperServer Bundle 2 12 Core.
snappy UI, easy triple monitor support
CPU speed and multitasking abilities are impressive
extreme versatility compared to any laptop
many drive bays for your storage needs, and a fast M.2 slot for exceptional M.2 NVMe storage performance when used as an ESXi datastore for your Windows 10 VM
video render times using Camtasia 9 are greatly reduced over any laptop
great performance when assigning 20 vCPUs to this powerhouse VM that I use for dozens of hours per week
sound quality of my USB to Digital Coaxial and headphone jack adapter is great
I've discovered that turning VGA to offboard in the BIOS, for hand-off of video from the onboard VGA port to the offboard GPU card, isn't necessary for stable VM operation; I may want to revisit the build procedure Wiredzone follows when prepping these systems for shipment
Yes, full disclosure here, this is not a VMware-supported way to run a VM, we already knew this. Only certain USB devices are supported, and really only products like NVIDIA GRID are properly supported for use as vGPUs carved up across your most important VMs. But this is a home lab, where pushing technology forward with what's possible on a budget can be fun, especially if somebody has figured out the bumps in the road before you.
approximately every 5th reboot of the Windows 10 VM that is also my Windows 10 triple-monitor workstation, I encounter an issue with the VisionTek AMD 7750 GPU card not being passed through at all for mysterious reasons, requiring me to reboot the SuperServer's ESXi itself, using vSphere Client on another network attached PC
on my attached triple monitors, I can't easily view BIOS screen and early Windows boot issues, such as BSODs, requiring me to use VMRC on another system for problem determination
currently I've disabled AMD's sound over DisplayPort and HDMI, to avoid nuisance default sound reassignment to one of those devices, since I don't use my monitors for sound
occasionally, 2-3x per-constant-use-hour, my mouse seems to drop some packets randomly for about 1/3 of a second, no big deal, and this isn't associated with any CPU or disk IO load
can't snapshot or vMotion the VM, the same old restrictions ESXi VMs have for RDM users
adding USB 3.0 devices is a little clumsier than simply plugging in; you need to take steps to map each device to the VM as well, and sometimes the VM needs to be shut down for this to work. Gladly, these mappings persist through reboots of the VM or ESXi host. This guy has an easier way, see Running a virtual gaming rig using a Xeon D server, a GFX 750Ti, PCI passthrough and a Windows 10 VM, but it's USB 2.0 only, and only tested on ESXi 6.0
can't sync iPhone with iTunes via a physical USB 3.0 connection attached to the host/server, mapped to the Win 10 VM using the ESXi UI (Apple device seen in Device Manager, but not in iTunes)
avoid RDM mappings of the C: drive for this UEFI VM for much more robust booting, I'm quite happy now with thick provisioned 1.7TB virtual drive that lives on my VMFS 6.81 formatted Samsung 960 PRO 2TB M.2 NVMe SSD
if you turn SR-IOV on or off in the BIOS, you'll need to reconfigure passthrough in your ESXi host, reboot, then re-add the PCI devices back to your VM settings, so they'll show up again in Device Manager as the expected 'AMD Radeon HD 7700 Series' video device and the 'AMD High Definition Audio Device', and for me, I just right-click disable the audio device, as I don't use my monitors speakers
Nice write-up, as always. Your site has always given me inspiration and kept me up-to-date with what's possible in my home lab - a small-chassis X10SRi-F on a Xeon E5 L v4, and a SYS-E200-8D "frankensteined" with a 60mm fan to reduce the stock fan noise.
Just wanted to share what I have with regard to GPU and USB3 isochronous, FWIW.
On GPU, I managed to find an older Grid K2 card which has 2 GPUs on board - I passed through one of the GPUs to a VM for demanding tasks, and the other GPU can still accelerate other VMs vSGA (via VMware tools' 3d acceleration via Xorg on host) for lower requirements with the added advantage of being vMotion-able. The Grid K2 requires good cooling, so I ended up having to add a few more fans and so far the noise has been bearable. As opposed to the newer Grid, the K2 doesn't require the newer Nvidia software licensing which can get very expensive.
On the USB, I've tried 3 USB-to-IP devices (yeah, part of work eval to passthrough USB-to-serial console and Rainbow tokenkeys): Digi's AnywhereUSB, SEH's UTN2500 and the Silex DS600. The AnywhereUSB is USB2.0 only and doesn't support isochronous and had driver issues. So far I've been having good results with SEH and Silex, both support isochronous and managed to run a USB-based DVD drive successfully.
The biggest surprise was that Apple announced a new "space gray" iMac Pro that can be configured with up to an 18-core Intel Xeon CPU, 128GB of RAM, and a 4TB SSD drive. Keep in mind that the base model of this beast will feature an 8-core Xeon, 32GB of RAM, and a 1TB SSD and it will cost $4999. So, the price of the fully configured version is going to be astronomical, likely in the $8K-$10K range, when it arrives at "the end of the year."
This excellent article details a different approach that leverages ESXi 6.0 and USB 2.0 passthrough:
Looking to ignore release notes and disclaimers/warnings to get right to the downloads and detailed upgrade procedures? There's no big fix or issue we're trying to resolve here, but if you're still in a hurry to try anyway, jump below.
Here are the current Supermicro Xeon D-1500 systems with X10SDV motherboards with RJ45 10GbE, in form factors suited for home and small business (single PSU), eligible for these new August BIOS and June 2017 IPMI releases:
Note, the Flex ATX E300-8D with the X10SDV-TP8F motherboard has not received a new BIOS. Flex ATX releases have historically arrived many months later, see also table below. Unfortunately, I don't own a Flex ATX system to test, but I have heard that boot-from-NVMe does work fine.
I managed to borrow one of each X10SDV system, for VMworld 2016. As for updating your BIOS and IPMI firmware to the latest release, you might not have to. For all SuperServer Bundle customers, Wiredzone handles these upgrades for you prior to shipment, along with the DIMM install, and 4 hour burn-in test with certificate. Nice touches that save you about 45 minutes while reducing your risk.
Here's Supermicro's Disclaimer:
WARNING!
Please do not download / upgrade the BIOS/Firmware UNLESS your system has a BIOS/firmware-related issue. Flashing the wrong BIOS/firmware can cause irreparable damage to the system.
Here's a copy of TinkerTry's Disclaimer, exactly as posted below every article:
Disclaimer
Emphasis is on home test labs, not production environments. No free technical support is implied or promised, and all best-effort advice volunteered by the author or commenters are on a use-at-your-own risk basis. Properly caring for your data is your responsibility. TinkerTry bears no responsibility for data loss. It is up to you to follow all local laws and software EULAs.
This all boils down to you needing to contact Supermicro's SuperServer Technical Support if something goes wrong, with no guarantees that they can help you if you bricked your system. I would add that you should be sure to run your SuperServer off an uninterruptible power supply during any firmware upgrades, and be sure you use a stable network connection, or a known-good USB flash drive for bootable media.
Right here at TinkerTry, there are full release notes that go all the way back to the beginning. It would be even better if Supermicro published them themselves, but having them here is a good start. It's just one of those little victories in trying to help everybody out there, and I'm so very glad I'm able to share these notes with everybody here:
There is a way to upgrade the BIOS over IPMI that I describe here, but it may require waiting for a trial license key for Supermicro Update Manager. So instead, I present to you the old school safest way to upgrade your BIOS(s), anytime:
make sure your SuperServer is on UPS-protected power
power on or reboot your SuperServer, then enter the BIOS setup by pressing Del when prompted
(temporarily) turn UEFI OFF by going into the BIOS's Boot tab and choosing Legacy mode
make sure CSM (Compatibility Support Module) is set to On (it's on by default), see details here
create a bootable USB flash drive on another Windows workstation using Rufus
extract all files from X10SDVF7_816.zip to the root directory of the USB drive, including the BIOS image itself, named X10SDVF7_816
properly eject the USB drive using the Windows Taskbar Safely Remove... icon.
insert the USB drive into any available USB port on your SuperServer
power up or reboot, and get ready to press that F11 key to choose alternative boot device, then choose the USB drive from the list
Using either a locally attached keyboard and mouse, or over iKVM, at the DOS command line, type: FLASH X10SDVF7_816 (you can use type-ahead to auto-complete)
wait until it's done, takes about 5 minutes, it will tell you when it's done
unplug the power cord from the SuperServer for about 15 seconds
remove the USB flash drive
plug the power cord back into your SuperServer
power on your SuperServer
you will notice it boots, finishes POST but doesn't prompt you to press any buttons, then it auto-reboots again, this is normal
press Del to enter the BIOS setup again, you will see you've been reset to factory default BIOS settings. Switch back to UEFI mode, and turn CSM back to off if you like, see the rest of the Recommended BIOS Settings and differences between UEFI and BIOS
reboot, make sure your default boot device comes up, you're done!
if you encounter issues, you can go back to the prior BIOS level 1.1c, found here.
on another PC, use a browser and type in the IP address of your BMC/IPMI/iKVM management interface in the URL area
login, default is ADMIN/ADMIN
you should gracefully shut down any OS you may have running on this system, and leave it powered off, or use iKVM's Power Off button
under Maintenance, IPMI Configuration, you may wish to use the Save IPMI Configuration feature to save a config file for possible restore later, since you are about to lose all of your IPMI configuration settings
under Maintenance, Firmware Update, select the Enter Update Mode button and follow the instructions, using the IPMI file downloaded REDFISH_X10_352.bin, then make sure to Un-check both checkboxes when prompted to preserve your configuration, as seen pictured at right. Keeping your certificate or not is up to you, I went with unchecking all 3 boxes. If you don't uncheck those first two, you may get voltage alerts or critical sensor error / 5V Dual warnings in VMware ESXi, or other problems, which folks resolved by reflashing to the same level again, making sure to uncheck the boxes this time.
wait until it's done with the IPMI upgrade, takes about 5 minutes, when done, it will prompt you to wait another minute, click OK and wait some more as it says "Rebooting..." and once the IPMI Web Interface starts to respond to login again, you can continue
unplug the power cord from the SuperServer for at least 15 seconds (optional but recommended, more difficult if you're remote, I realize)
plug the power cord back in to your SuperServer
power on your SuperServer, wait a minute for IPMI to boot up
on another PC, use a browser and type in the IP address of your BMC/IPMI/iKVM management interface in the URL area
optional - under Maintenance, IPMI Configuration, you may wish to use the Reload IPMI Configuration feature to choose your saved file, and restore it
I've added some testing in here, finding out that it appears 2400MHz memory is now properly supported, but note that only Xeon D-1541 supports that speed.
The only Xeon D CPU released last year that supports 2400MHz speeds is the Xeon D-1541; see validation of this in Intel's Product Brief. Thanks to Bryce Wilkins for spotting this newly revised update to the brief, reported right here at TinkerTry. Keeping this minor issue in perspective, 2400MHz memory is easier to find and cheaper to acquire, so the argument that you're not getting what you paid for is weakened somewhat; see also this excellent article about how little most workloads would ever notice this difference in speed.
Image provided by Bryce Wilkins.
I've successfully updated my SYS-5028D-TN4T system based on the Xeon D-1567, see new screenshot atop this article. Testing is still underway, but so far, it's looking to work OK. I did have to set up VT-d passthrough under ESXi 6.5.0d all over again for my Windows 10 VM, but that's normal behavior.
I did spot 2 new settings in the BIOS 1.2a release, also seen in BIOS 1.2b:
SMCBiosActionFlag [0]
SumBbsSupportFlag 48
as seen pictured on the first two rows below, with no explanatory help text at top-right. Googling for either term comes up with nothing. The BIOS release notes don't include these new terms either, and the instruction manual hasn't been updated since it was published on Feb 22 2016.
New SMCBiosActionFlag and SumBbsSupportFlag options appeared back on BIOS 1.2a.
The only change in this release is "Added patch for Intel SPI vulnerability," seen in the release notes, not to be confused with the Intel® AMT Critical Firmware Vulnerability. Supermicro has confirmed to me that 1.2c is their Xeon D fix for CVE-2017-5701. Initial testing of BIOS 1.2c has gone well so far for jrp and me. I tested on my Xeon D-1541 and Xeon D-1567 SuperServers. Wiredzone is now making sure to ship all Bundles with BIOS 1.2c and IPMI 3.58.
The same Supermicro Xeon D-1500 systems with X10SDV motherboards with RJ45 10GbE, in form factors suited for home and small business (single PSU), are eligible for this new October BIOS and May 2017 IPMI releases; the upgrade steps are identical to those detailed above.
Note, the Flex ATX E300-8D with the X10SDV-TP8F motherboard has not received a new BIOS. Flex ATX releases have historically arrived many months later, see also table below. Unfortunately, I don't own a Flex ATX system to test, but I have heard that boot-from-NVMe does work fine.
I managed to borrow one of each X10SDV system, for VMworld 2016. As for updating your BIOS and IPMI firmware to the latest release, you might not have to. For all SuperServer Bundle customers, Wiredzone handles these upgrades for you prior to shipment, along with the DIMM install, and 4 hour burn-in test with certificate. Nice touches that save you about 45 minutes while reducing your risk.
Here's Supermicro's Disclaimer:
WARNING!
Please do not download / upgrade the BIOS/Firmware UNLESS your system has a BIOS/firmware-related issue. Flashing the wrong BIOS/firmware can cause irreparable damage to the system.
Here's a copy of TinkerTry's Disclaimer, exactly as posted below every article:
Disclaimer
Emphasis is on home test labs, not production environments. No free technical support is implied or promised, and all best-effort advice volunteered by the author or commenters are on a use-at-your-own risk basis. Properly caring for your data is your responsibility. TinkerTry bears no responsibility for data loss. It is up to you to follow all local laws and software EULAs.
This all boils down to you needing to contact Supermicro's SuperServer Technical Support if something goes wrong, with no guarantees that they can help you if you've bricked your system. I would add that you should be sure to run your SuperServer off an uninterruptible power supply during any firmware upgrades, and be sure to use a stable network connection, or a known-good USB flash drive for bootable media.
Right here at TinkerTry, there are full release notes that go all the way back to the beginning. It would be even better if Supermicro published them themselves, but having them here is a good start. Just one of those little victories in trying to help everybody out there, and I'm very glad I'm able to share these notes here:
There is a way to upgrade the BIOS over IPMI that I describe here, but it may require waiting for a trial license key for Supermicro Update Manager. So instead, I present to you the old-school, safest way to upgrade your BIOS, anytime:
make sure your SuperServer is on UPS-protected power
power on or reboot your SuperServer, then enter the BIOS setup by pressing Del when prompted
temporarily turn UEFI off by going into the BIOS's Boot tab and choosing Legacy mode
make sure CSM (Compatibility Support Module) is set to On (it's on by default), see details here
create a bootable USB flash drive on another Windows workstation using Rufus
extract all of the files inside X10SDVF7_816.zip to the root directory of the USB drive, including the BIOS image itself, named X10SDVF7_816
properly eject the USB drive using the Windows Taskbar's Safely Remove... icon
insert the USB drive into any available USB port on your SuperServer
power up or reboot, and get ready to press that F11 key to choose an alternative boot device, then choose the USB drive from the list
Using either a locally attached keyboard and mouse, or over iKVM, at the DOS command line, type: FLASH X10SDVF7_816 (you can use type-ahead to auto-complete)
wait until it's done, which takes about 5 minutes; it will tell you when it's finished
unplug the power cord from the SuperServer for about 15 seconds
remove the USB flash drive
plug the power cord back into your SuperServer
power on your SuperServer
you will notice it boots and finishes POST without prompting you to press any keys, then auto-reboots again; this is normal
press Del to enter the BIOS setup again, and you will see you've been reset to factory default BIOS settings. Switch back to UEFI mode, and turn CSM back off if you like; see the rest of the Recommended BIOS Settings and the differences between UEFI and BIOS
reboot, make sure your default boot device comes up, you're done!
if you encounter issues, you can go back to the prior BIOS level 1.1c, found here.
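Optionally, you can confirm the new BIOS level without a trip back into Setup. If your SuperServer is already running ESXi, the SMBIOS data can be dumped from an SSH session; a minimal sketch, where the 'BIOS Info' grep pattern is my assumption about the output's section label:
smbiosDump | grep -A 5 'BIOS Info'   # the Version: line should now show the new BIOS level, e.g. 1.2c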
on another PC, use a browser and type in the IP address of your BMC/IPMI/iKVM management interface in the URL area
login, default is ADMIN/ADMIN
you should gracefully shut down any OS you may have running on this system, and leave it powered off, or use iKVM's Power Off button
under Maintenance, IPMI Configuration, you may wish to use the Save IPMI Configuration feature to save a config file for possible restore later, since you are about to lose all of your IPMI configuration settings
under Maintenance, Firmware Update, select the Enter Update Mode button and follow the instructions, using the downloaded IPMI file REDFISH_X10_352.bin, then make sure to un-check both checkboxes when prompted to preserve your configuration, as seen pictured at right. Keeping your certificate or not is up to you; I went with unchecking all 3 boxes. If you don't uncheck those first two, you may get voltage alerts or critical sensor error / 5V Dual warnings in VMware ESXi, or other problems, which folks have resolved by reflashing to the same level again, making sure to uncheck the boxes this time.
wait until the IPMI upgrade is done, which takes about 5 minutes. When done, it will prompt you to wait another minute; click OK and wait some more as it says "Rebooting...", and once the IPMI Web Interface starts to respond to login again, you can continue
unplug the power cord from the SuperServer for at least 15 seconds (optional but recommended, and admittedly more difficult if you're remote)
plug the power cord back in to your SuperServer
power on your SuperServer, wait a minute for IPMI to boot up
on another PC, use a browser and type in the IP address of your BMC/IPMI/iKVM management interface in the URL area
optional - under Maintenance, IPMI Configuration, you may wish to use the Reload IPMI Configuration feature to choose your saved file, and restore it
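If you'd like to verify that the BMC came back up on the new firmware without logging into the web UI, ipmitool from any Linux machine can ask it directly; a minimal sketch, where the 192.168.1.20 address and the default ADMIN credentials are placeholders for your own:
ipmitool -I lanplus -H 192.168.1.20 -U ADMIN -P ADMIN mc info   # Firmware Revision should now read 3.58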
I've added some testing in here, finding that 2400MHz memory now appears to be properly supported, but note that only the Xeon D-1541 supports that speed.
The only Xeon D CPU released last year that supports 2400MHz speeds is the Xeon D-1541, validated in Intel's Product Brief. Thanks to Bryce Wilkins for spotting this newly revised brief and reporting it right here at TinkerTry. Keeping this minor issue in perspective, 2400MHz memory is easier to find and cheaper to acquire, so the argument that you're not getting what you paid for is weakened somewhat; see also this excellent article about how little most workloads would ever notice this difference in speed.
Image provided by Bryce Wilkins.
I've successfully updated my SYS-5028D-TN4T system based on the Xeon D-1567, see new screenshot atop this article. Testing is still underway, but so far, it's looking to work OK. I did have to set up VT-d passthrough under ESXi 6.5.0d all over again for my Windows 10 VM, but that's normal behavior.
I did spot 2 new settings in the BIOS 1.2a release, also seen in BIOS 1.2c:
SMCBiosActionFlag [0]
SumBbsSupportFlag 48
as seen pictured on the first two rows below, with no explanatory help text at top-right. Googling for either term comes up with nothing. The BIOS release notes don't include these new terms either, and the instruction manual hasn't been updated since it was published on Feb 22 2016.
New SMCBiosActionFlag and SumBbsSupportFlag options appeared back on BIOS 1.2a.
It would seem that the Flex ATX systems finally have a BIOS update available, and that's good news! Table above updated accordingly, with this change:
Aug 16 2016 / 1.0b was the last release.
Sep 21 2017 / 1.0c is the latest release, and the direct download file is called X10SDVT7_A31.zip, and the BIOS release notes are found here, thanks to @BennyE_HH!
I have not tested 1.0c since I don't have a Flex ATX X10SDV system. The Mini ITX X10SDV system's release notes are posted by me here, and I have been testing those frequent BIOS updates, which arrive at a rate of several new releases per year.
A full NAKIVO v7 install/configure demo video is available below, using my home lab's vSphere 6.5 environment! I haven't had a chance to test v7.3 yet, as it just came out today and I haven't gotten my hands on the NFR version. The trial version is available now.
There have been new features and refinements arriving regularly ever since NAKIVO started out in 2012, as their backup solution for VMs gains maturity while retaining its signature ease of use, earning kudos from home virtualization lab enthusiasts like Florian Grehl and Vladan Seget. See also NAKIVO's post, What's New in v7.3.
All the details of today's v7.3 announcement are right in NAKIVO's press release, with a focus on performance enhancements this time around. Here are some of the highlights:
NAKIVO, Inc., a fast-growing software company that specializes in protecting VMware, Hyper-V, and AWS environments, announced today that it had released NAKIVO Backup & Replication v7.3. This latest version features a new type of backup repository with a special architecture optimized for deduplication appliances, such as NEC HYDRAstor, EMC Data Domain, HP StoreOnce, and Quantum DXi.
Deduplication appliances are designed to reduce the data size and operate best with sequential large block I/O from backup software. If the architecture of a backup repository is not optimized for deduplication appliances, VM backup may appear to be random I/O, which deduplication appliances are not designed to handle. This can greatly reduce the VM backup performance.
...
The new backup repository complements the existing one, so customers now have a choice between the following options:
The regular backup repository, which is optimized for generic storage systems and performs forever-incremental VM backups along with global data deduplication and compression.
The backup repository optimized for deduplication appliances, which accelerates VM backups to speeds up to 53 times faster on deduplication appliances.
... RESOURCES
Trial Download: /resources/download/trial-download/
Success Stories: /customers/success-stories/
Datasheet for VMware: nakivo-vm-backup-datasheet.pdf
It appears this NFR request process now includes a bit of a waiting period before you can actually access the code, unfortunately. I'm not sure how long the wait is, or whether it depends upon the time of day that requests are made. But it's worth waiting for, since to my knowledge you can't convert the trial appliance to an NFR appliance.
You may have to use a valid "work" email address, which apparently includes Google G Suite email addresses with custom, non @gmail.com domains.
Are you an IT Pro? Click the image above to fill out the form for yourself. Not sure how long the wait for the 1 year NFR download is, but you can get a Trial immediately, as described below.
If you're not one of the listed IT Professionals, you can still get instant access to the various downloads by filling out the form here: nakivo.com/resources/download/trial-download
Now that you're past providing contact information, click 'Virtual Appliance' then 'Full Solution' for the easiest way to get going in a VMware vSphere environment.
The VMware appliance that you download will be named NAKIVO_Backup_Replication_VA_v7.3.0_Full_Solution_TRIAL.ova or later, about 587 MB in size.
Most VMware users will choose the Virtual Appliance, Full Solution to get started quickly and easily
If you have a compatible Linux-based NAS listed on page 4 here, the install-directly-on-NAS procedure is different, see the Synology and Western Digital articles. Note that competitor Veeam is Windows-only, so a NAS approach by Veeam is unlikely, since NASs usually run Linux.
Deploy the OVA, much as illustrated in detail by NAKIVO here, but if you're on vCenter 6.5, you'll need to use vSphere Web Client (Adobe Flash) (as seen in my video), or vSphere Client (HTML5).
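If you'd rather skip the deploy wizard entirely, VMware's free ovftool can push the same OVA from a command line; a minimal sketch, where the datastore name, VM name, and host address are placeholders for your own:
ovftool --acceptAllEulas --datastore=datastore1 --name=nakivo NAKIVO_Backup_Replication_VA_v7.3.0_Full_Solution_TRIAL.ova vi://root@esxi.lab.local/   # prompts for the host's root password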
Power up the appliance, then point your browser to the appliance IP or name (starting with http, not https, so you then get redirected properly to the https version). For my lab, since I created a DNS reservation for the MAC address of the appliance before powering it on, it was: http://nakivo.lab.local
Now you're able to log in that first time without credentials, and a simple wizard asks you some basic questions. Clicking Next, Next, Next is OK; now you're ready to configure your first backup job.
When I first tested v7, it only took a few minutes to install the appliance, configure it, then create and run my first backup job. Easy!
Here's the impressively low pricing, the edition comparison, and the datasheet, along with some of the details I think could be most interesting for my home lab use cases:
You can back up VMware virtual machines on schedule or ad hoc, both locally over LAN and offsite over (a slow) WAN. You can save up to 1,000 recovery points per VM and, using the Grandfather-Father-Son (GFS) retention scheme, rotate them on a daily, weekly, monthly, and yearly basis. Learn more>>
VMware Backup Copy
Backup Copy jobs provide a simple and powerful way to create and maintain copies of your backups. Backup copies can be stored onsite, offsite, or in Amazon Cloud. Backup Copy jobs have their own schedule and recovery point retention policy, so you can easily set up your jobs to fit your needs. Learn more>>
VMware VM Replication
VMware VM replication creates and maintains an identical copy of the source virtual machine on a target host. You can replicate live VMs on schedule or ad hoc locally and offsite and simply power them on in case the source VM is lost or damaged. You can even save up to 30 recovery points per replica. Learn more>>
...
Storage Space Reduction
Global Data Deduplication
NAKIVO Backup & Replication automatically deduplicates backed up data at the block level across the entire backup repository, and saves only unique blocks of data. This dramatically reduces backup size. Learn more>>
Efficient Data Compression
After data deduplication, the product automatically compresses each block of data so it occupies even less storage space in a backup repository.
...
Performance
Network Acceleration
Data compression and traffic reduction techniques enable the product to reduce network traffic by 50% on average, thus decreasing the network load and speeding up backup and replication times by 2X.
Multi-Threading
NAKIVO Backup & Replication can run dozens of backup, replication, and recovery jobs simultaneously, which speeds up data processing and shortens the time windows allocated to data protection.
Full Disclosure: "TinkerTry.com, LLC" is registered as a NAKIVO Bronze Partner, mostly to help get notified of latest news and code releases. I get no special early-access, anybody can sign up for the betas. All TinkerTry site advertising goes through middle-man BuySellAds. NAKIVO does know if you found their affiliate link from my site, which means the possibility of reseller commissions if you eventually upgrade to one of their paid offerings. Here's their pricing.
My plan is to create an Ubuntu VM that's always left running on my VMware ESXi 6.5 U1 SuperServer, to run the UNMS (Ubiquiti Network Management System) beta. Then I'll direct my browser to that UNMS instance, and reconfigure my iOS app to connect through UNMS.
Here are my lab test notes, a work-in-progress:
Create a VM with 1 vCPU and 2 GB of RAM, with the Ubuntu 16.04 ISO mounted
Install Ubuntu, don't bother with VMware tools, built-in tools are good enough
Open a Terminal session, install curl (bash, sudo, and netcat are already installed), then elevate yourself to root, here's how:
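Here's a minimal sketch of that prep on a fresh Ubuntu 16.04 VM. The install one-liner follows the style Ubiquiti's beta page provided; treat the exact https://unms.com/v1/install URL as my assumption, and copy the current command from your own UNMS download page instead:
sudo apt-get update && sudo apt-get install -y curl          # bash, sudo, and netcat are already present
sudo -i                                                      # elevate yourself to root
curl -fsSL https://unms.com/v1/install > /tmp/unms_inst.sh   # assumed beta install script URL
bash /tmp/unms_inst.sh                                       # runs the Docker-based UNMS installer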
Testing the download, install, and configuration of the Ubuntu appliance continues to go well. I'm hoping to produce a video walk-thru, including the use of the iOS version of the UBNT UMobile app.
I would really love to hear if somebody has gotten UNMS to run on the extremely lightweight Project Photon OS by VMware, which is a "Container-Optimized Linux Operating System." Ideally as an OVA with UNMS already installed.
I decided to change this article's title from: Even unmanaged EdgeMax EdgeRouters can be managed with new UNMS (Ubiquiti Network Management System) free beta, no UniFi required!
to the clearer and shorter: Non-UniFi EdgeMax EdgeRouters can be managed with Ubiquiti Network Management System (UNMS) free beta
It's that time of year when IT Pros lucky enough to have a few days off upgrade their home labs. This VMware update arrives just in time for the holidays, and this article gets you ready to start 2018 off right!
This was one snag that I noticed after the upgrade, but the fix is pretty simple.
Warning:
You need to do your homework before upgrading; if you're wondering why, read this.
Do this VCSA 6.5 U1d upgrade in a test environment first! Before attempting, you should be sure to have a full backup, such as the simple native VCSA backup button seen at top-right. You can also use a 3rd party backup solution such as NAKIVO or Veeam.
At a minimum, take a snapshot (or backup) of this VCSA VM before upgrading, then make sure everything works alright after the upgrade, then remove the snapshot within a few days, to avoid performance degradation.
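If you'd rather script that snapshot than click through a UI, ESXi's built-in vim-cmd works from an SSH session on the host; a minimal sketch, where the VM ID of 12 is a placeholder you'd look up first:
vim-cmd vmsvc/getallvms | grep -i vcenter                             # note your VCSA's VM ID
vim-cmd vmsvc/snapshot.create 12 pre-6.5U1d "before VAMI update" 0 0  # no memory snapshot, no quiesce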
If you're looking for how you get from 6.0.x to 6.5.x, that's more of a migration, and the right article for you is over here:
along the left edge of the VAMI, click 'Update', then click on 'Check Updates', then 'Check Repository', then under Available Updates, click on 'Install Updates' then choose 'Install All Updates', accept the EULA, and when it's done downloading and upgrading, you'll be prompted to reboot the VCSA appliance
It takes about 2 to 5 minutes to upgrade, if you have fast internet and your VCSA VM is located on an SSD-based datastore, such as the Samsung 960 EVO 1TB M.2 NVMe SSD I used for my home datacenter, featured in this video.
in your browser, go to your VCSA IP or Name:5480
login with root and your password
along the left edge of the VAMI, click 'Update', then click on 'Check Updates', then 'Check Repository', then under Available Updates, click on 'Install Updates' then choose 'Install All Updates'
click on the 'I accept' checkbox, then click on 'Install'
wait for a bit, on SSDs, a bit is less than 2 minutes
wow, you're done already
at left, click on 'Summary', then at right, click on 'Reboot'
login with root and your password
along the left edge of the VAMI, click 'Update', optionally also clicking on 'Check Updates' then 'Check Repository', with the VAMI showing you confirmation that you're already done, since you're now at 6.5.0.13000 Build Number 7312210.
If you see status "Unknown" in VAMI after this upgrade, copy and paste chrome://settings/clearBrowserData into a new Chrome tab, turn on the Cached images and files checkbox, then click Clear Data.
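If you'd rather not use a browser at all, the same update can also be pulled from the VCSA's appliance shell over SSH; a minimal sketch, assuming the appliance's default online repository URL is still configured:
software-packages install --url --acceptEulas   # stages from VMware's default repository and installs
shutdown reboot -r "6.5U1d update"              # reboot the appliance to finish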
Alternative title: Paul Braren's nostalgic look back at PC, audio, video, and phone tech from the 1970s to 2000s, including the ThinkPad 701c Butterfly with an expanding keyboard
Here's the new podcast episode at averageguy.tv, where you can also view, listen, and/or subscribe. Please show Jim your support by visiting:
Paul Braren from TinkerTry.com joins Jim Collison @jcollison for show #340 of Home Gadget Geeks brought to you by the Average Guy Network
We looked back at some gadgets from the extended family that we held on to, as representative examples of the tech used in my home at the time. Most of my useful used stuff was sold on eBay long ago, to try to put a lid on my hoarding tendencies. But these items in particular offered some special sentimental value, and were held on to for posterity.
BÖHM Wireless Bluetooth Over Ear Cushioned Headphones with Active Noise Cancelling - B76 (2016)
Date first available on Amazon was May 2016; I got mine in December 2016. They've been used a few dozen times for snow blowing and mowing duties, allowing me to hopefully reduce hearing damage from those crazy-loud gasoline-powered motors. Oh, and they allow me to enjoy podcasts, with playback controls at my right ear that avoid the need to reach for my phone. I used a Ziploc snack bag to protect my iPhone 7 Plus (with the case removed), and now my iPhone X, kept in my pocket.
Flying
Bose QuietComfort 35 (Series I) Wireless Headphones, Noise Cancelling - Black (2016)
First available on Amazon in June of 2016, I got mine in January 2017, knowing my new job would have me on noisy regional jets like the CRJ-900 much more often. I also use them for phone calls from hotel rooms, and nobody seems to complain about the quality of my voice through that ear-cup mic.
Here's a newer version than mine that features the new Google Assistant integration (with button), and was first available on Amazon in October 2017.
Image courtesy of tonepublications.com, click image to visit source.
I told the story of picking up a pair of these in my early teens by haggling the price at one of those Times Square electronics shops. It could explain why I don't like haggling, at all.
Nikon FE2 SLR camera (1983)
My Dad’s analog Super8 and my Hi8 analog camcorder
My digital, FireWire-equipped Sony Digital8 camcorder, for digital transfer to PC
(each 2 hour tape became a ~24GB AVI file!)
Click the image to jump to the right spot in the video.
We concluded the podcast with the beloved and memorable IBM ThinkPad 701C (1995). Live demo featuring the magical “butterfly” fold-out keyboard, and the much-maligned Windows Millennium Edition!
Thank you again Jim, for you being you, and for inviting me again!
Detailed shownotes, podcast feeds to subscribe to, and an easy way to leave greatly appreciated podcast reviews are all found right over at Jim's source page:
This is an active issue. Please expect that the information in the article below may be updated at any time, especially these first few days. I would encourage you to revisit and refresh.
The net of all this info below is that thorough remediation requires you to patch your:
VMware ESXi hypervisor(s)
release notes in KB52236
(if you have VCSA 6.x, you should update that to 6.5 U1e too, see release notes here, and easy method to update here)
CPU microcode (flash BIOS update)
VMs
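As a quick way to verify that last item on a Linux VM once it's patched: kernels updated in early January 2018 began exposing mitigation status under sysfs, so treat this check as my assumption that your guest kernel is new enough to have these files:
grep . /sys/devices/system/cpu/vulnerabilities/*   # one status line each for meltdown, spectre_v1, spectre_v2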
This article focuses on the remediation, not the how or why, since that's already been covered so well in so many places, including:
In a home lab environment, it's harder to see how this patching exercise is nearly as urgent as it would be for IT Professionals scrambling to patch their production environments. That said, it sure could be a good "rehearsal" for IT Pros who need to do this at work! I'd say the urgency is highest for cloud providers, where any known (albeit theoretical) data leakage possibility must be eliminated.
Unfortunately, unless you have an HA cluster where only the VM reboots affect you, you'll be looking at considerable downtime here, as reboots of the server itself are required for the other two remediation steps.
As for when BIOS upgrades will be available industry-wide, here's what Intel said earlier today:
In early December we began distributing Intel firmware updates to our OEM partners. For Intel CPUs introduced in the past five years, we expect to issue updates for more than 90 percent of them within a week, and the remainder by the end of January.
After backing up your ESXi host itself, use VUM (VMware Update Manager), or follow along with the ESXCLI method detailed in my ESXi 6.5 Update 1 Patch 02 article, but use the revised command below instead to get to 6.5 U1e (aka 6.5 Update 1e). Be sure to read that whole article to guide you on how to re-add (optional) Intel X557 10GbE support after this update:
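For reference, here is that revised one-liner, matching the session transcript shown later in this article; append --ok-to-remove only if you hit the Intel VIB removal exception shown in that transcript:
esxcli software profile install -p ESXi-6.5.0-20180104001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml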
Unfortunately, we're just not there yet for Supermicro Xeon D. I will update this article when BIOS 1.3 becomes available; for now, 1.2c is the latest release for the popular X10SDV Mini-ITX form factor. Details (with video) of the exact BIOS 1.2c update procedure are available:
In early December we began distributing Intel firmware updates to our OEM partners. For Intel CPUs introduced in the past five years, we expect to issue updates for more than 90 percent of them within a week, and the remainder by the end of January. We will continue to issue updates for other products thereafter. We are pleased with this progress, but recognize there is much more work to do to support our customers.
Press Kit: Security Exploits and Intel Products
Our goal is to provide our customers with the best possible protection against the exploits while minimizing the performance impact of the updates. We plan to share more extensive information about performance impact when we can, but we also want to provide some initial information today.
Based on our most recent PC benchmarking, we continue to expect that the performance impact should not be significant for average computer users. This means the typical home and business PC user should not see significant slowdowns in common tasks such as reading email, writing a document or accessing digital photos. Based on our tests on SYSmark 2014 SE, a leading benchmark of PC performance, 8th Generation Core platforms with solid state storage will see a performance impact of 6 percent or less*. (SYSmark is a collection of benchmark tests; individual test results ranged from 2 percent to 14 percent.)
Ensuring the security of our customers’ data is job one. To help keep our customers’ data safe, we have been focused on the development and testing of the updates. We still have work to do to build a complete picture of the impact on data center systems. However, others in the industry have begun sharing some useful results. As reported last week, several industry partners that offer cloud computing services to other businesses have disclosed results that showed little to no performance impact. Also, Red Hat and Microsoft have both shared performance information.
Summary
VMware ESXi, Workstation and Fusion updates address side-channel analysis due to speculative execution.
Relevant Products
VMware vSphere ESXi (ESXi)
VMware Workstation Pro / Player (Workstation)
VMware Fusion Pro / Fusion (Fusion)
Problem Description
Bounds-Check bypass and Branch Target Injection issues
CPU data cache timing can be abused to efficiently leak information out of mis-speculated CPU execution, leading to (at worst) arbitrary virtual memory read vulnerabilities across local security boundaries in various contexts. (Speculative execution is an automatic and inherent CPU performance optimization used in all modern processors.) ESXi, Workstation and Fusion are vulnerable to Bounds Check Bypass and Branch Target Injection issues resulting from this vulnerability.
Result of exploitation may allow for information disclosure from one Virtual Machine to another Virtual Machine that is running on the same host. The remediation listed in the table below is for the known variants of the Bounds Check Bypass and Branch Target Injection issues.
The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifiers CVE-2017-5753 (Bounds Check bypass) and CVE-2017-5715 (Branch Target Injection) to these issues.
1. Summary
VMware vSphere, Workstation and Fusion updates add Hypervisor- Assisted Guest Remediation for speculative execution issue.
Notes:
Hypervisor remediation can be classified into the two following categories:
Hypervisor-Specific Remediation (documented in VMSA-2018-0002)
Hypervisor-Assisted Guest Remediation (documented in this advisory)
2. Relevant Products
VMware vCenter Server (VC)
VMware vSphere ESXi (ESXi)
VMware Workstation Pro / Player (Workstation)
VMware Fusion Pro / Fusion (Fusion)
3. Problem Description
New speculative-execution control mechanism for Virtual Machines
Updates of vCenter Server, ESXi, Workstation and Fusion virtualize the new speculative-execution control mechanism for Virtual Machines (VMs). As a result, a patched Guest Operating System (Guest OS) can remediate the Branch Target Injection issue (CVE-2017-5715). This issue may allow for information disclosure between processes within the VM.
login as: root
Using keyboard-interactive authentication.
Password:
The time and date of this login have been sent to the system logs.
WARNING:
All commands run on the ESXi shell are logged and may be included in
support bundles. Do not provide passwords directly on the command line.
Most tools can prompt for secrets or accept them from standard input.
VMware offers supported, powerful system administration tools. Please
see www.vmware.com/go/sysadmintools for details.
The ESXi Shell can be disabled by an administrative user. See the
vSphere Security documentation for more information.
[root@xd-1541-5028d:~] esxcli software profile install -p ESXi-6.5.0-20180104001-standard -d https://hostupdate.vmware.com/software/VUM/PR
ODUCTION/main/vmw-depot-index.xml
[Exception]
You attempted to install an image profile which would have resulted in the removal of VIBs ['INT_bootbank_intel-nvme_1.2.1.15-1OEM.650.0. 0.4598673']. If this is not what you intended, you may use the esxcli software profile update command to preserve the VIBs above. If this is what you intended, please use the --ok-to-remove option to explicitly allow the removal.
Please refer to the log file for more details.
[root@xd-1541-5028d:~] esxcli software profile install -p ESXi-6.5.0-20180104001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml --ok-to-remove
Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: VMW_bootbank_net-igb_5.0.5.1.1-5vmw.650.0.0.4564106, VMW_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.650.0.0.4564106, VMware_bootbank_cpu-microcode_6.5.0-1.38.7526125, VMware_bootbank_esx-base_6.5.0-1.38.7526125, VMware_bootbank_esx-tboot_6.5.0-1.38.7526125, VMware_bootbank_vsan_6.5.0-1.38.7395176, VMware_bootbank_vsanhealth_6.5.0-1.38.7395177
VIBs Removed: INT_bootbank_intel-nvme_1.2.1.15-1OEM.650.0.0.4598673, INT_bootbank_net-igb_5.3.3-1OEM.600.0.0.2494585, INT_bootbank_net-ixgbe_4.5.3-1OEM.600.0.0.2494585, VMware_bootbank_cpu-microcode_6.5.0-0.0.4564106, VMware_bootbank_esx-base_6.5.0-1.36.7388607, VMware_bootbank_esx-tboot_6.5.0-1.36.7388607, VMware_bootbank_vsan_6.5.0-1.36.7388608, VMware_bootbank_vsanhealth_6.5.0-1.36.7388609
VIBs Skipped: VMW_bootbank_ata-libata-92_3.00.9.2-16vmw.650.0.0.4564106, VMW_bootbank_ata-pata-amd_0.3.10-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-atiixp_0.4.6-4vmw.650.0.0.4564106, VMW_bootbank_ata-pata-cmd64x_0.2.5-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-pdc2027x_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-serverworks_0.4.3-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-sil680_0.4.8-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-via_0.3.3-2vmw.650.0.0.4564106, VMW_bootbank_block-cciss_3.6.14-10vmw.650.0.0.4564106, VMW_bootbank_char-random_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ehci-ehci-hcd_1.0-4vmw.650.0.14.5146846, VMW_bootbank_elxnet_11.1.91.0-1vmw.650.0.0.4564106, VMW_bootbank_hid-hid_1.0-3vmw.650.0.0.4564106, VMW_bootbank_i40en_1.3.1-5vmw.650.1.26.5969303, VMW_bootbank_igbn_0.1.0.0-15vmw.650.1.36.7388607, VMW_bootbank_ima-qla4xxx_2.02.18-1vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-devintf_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ixgben_1.4.1-2vmw.650.1.26.5969303, VMW_bootbank_lpfc_11.1.0.6-1vmw.650.0.0.4564106, VMW_bootbank_lsi-mr3_6.910.18.00-1vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt2_20.00.01.00-3vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt3_12.00.02.00-11vmw.650.0.0.4564106, VMW_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.650.0.0.4564106, VMW_bootbank_misc-drivers_6.5.0-1.36.7388607, VMW_bootbank_mtip32xx-native_3.9.5-1vmw.650.0.0.4564106, VMW_bootbank_ne1000_0.8.0-16vmw.650.1.26.5969303, VMW_bootbank_nenic_1.0.0.2-1vmw.650.0.0.4564106, VMW_bootbank_net-bnx2_2.2.4f.v60.10-2vmw.650.0.0.4564106, VMW_bootbank_net-bnx2x_1.78.80.v60.12-1vmw.650.0.0.4564106, VMW_bootbank_net-cdc-ether_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-cnic_1.78.76.v60.13-2vmw.650.0.0.4564106, VMW_bootbank_net-e1000_8.0.3.1-5vmw.650.0.0.4564106, VMW_bootbank_net-e1000e_3.2.2.1-2vmw.650.0.0.4564106, VMW_bootbank_net-enic_2.1.2.38-2vmw.650.0.0.4564106, VMW_bootbank_net-fcoe_1.0.29.9.3-7vmw.650.0.0.4564106, VMW_bootbank_net-forcedeth_0.61-2vmw.650.0.0.4564106, VMW_bootbank_net-libfcoe-92_1.0.24.9.4-8vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-core_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-en_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-nx-nic_5.0.621-5vmw.650.0.0.4564106, VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.650.0.0.4564106, VMW_bootbank_net-usbnet_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.650.0.0.4564106, VMW_bootbank_nhpsa_2.0.6-3vmw.650.0.0.4564106, VMW_bootbank_nmlx4-core_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-en_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-rdma_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx5-core_4.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_ntg3_4.1.3.0-1vmw.650.1.36.7388607, VMW_bootbank_nvme_1.2.0.32-5vmw.650.1.36.7388607, VMW_bootbank_nvmxnet3_2.0.0.23-1vmw.650.1.36.7388607, VMW_bootbank_ohci-usb-ohci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_pvscsi_0.1-1vmw.650.1.26.5969303, VMW_bootbank_qedentv_2.0.3.29-1vmw.650.0.0.4564106, VMW_bootbank_qfle3_1.0.2.7-1vmw.650.0.0.4564106, VMW_bootbank_qflge_1.1.0.3-1vmw.650.0.0.4564106, VMW_bootbank_qlnativefc_2.1.50.0-1vmw.650.1.26.5969303, VMW_bootbank_sata-ahci_3.0-26vmw.650.1.26.5969303, VMW_bootbank_sata-ata-piix_2.12-10vmw.650.0.0.4564106, VMW_bootbank_sata-sata-nv_3.5-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-promise_2.12-3vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil24_1.1-1vmw.650.0.0.4564106, 
VMW_bootbank_sata-sata-sil_2.3-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-svw_2.3-3vmw.650.0.0.4564106, VMW_bootbank_scsi-aacraid_1.1.5.1-9vmw.650.0.0.4564106, VMW_bootbank_scsi-adp94xx_1.0.8.12-6vmw.650.0.0.4564106, VMW_bootbank_scsi-aic79xx_3.1-5vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-fnic_1.5.0.45-3vmw.650.0.0.4564106, VMW_bootbank_scsi-hpsa_6.0.0.84-1vmw.650.0.0.4564106, VMW_bootbank_scsi-ips_7.12.05-4vmw.650.0.0.4564106, VMW_bootbank_scsi-iscsi-linux-92_1.0.0.2-3vmw.650.0.0.4564106, VMW_bootbank_scsi-libfc-92_1.0.40.9.3-5vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid2_2.00.4-9vmw.650.0.0.4564106, VMW_bootbank_scsi-mpt2sas_19.00.00.00-1vmw.650.0.0.4564106, VMW_bootbank_scsi-mptsas_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-mptspi_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.650.0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-3-0_6.5.0-0.0.4564106, VMW_bootbank_uhci-usb-uhci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usb-storage-usb-storage_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usbcore-usb_1.0-3vmw.650.1.26.5969303, VMW_bootbank_vmkata_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmkplexer-vmkplexer_6.5.0-0.0.4564106, VMW_bootbank_vmkusb_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmw-ahci_1.0.0-39vmw.650.1.26.5969303, VMW_bootbank_xhci-xhci_1.0-3vmw.650.0.0.4564106, VMware_bootbank_emulex-esx-elxnetcli_11.1.28.0-0.0.4564106, VMware_bootbank_esx-dvfilter-generic-fastpath_6.5.0-1.36.7388607, VMware_bootbank_esx-ui_1.23.0-6506686, VMware_bootbank_esx-xserver_6.5.0-0.23.5969300, VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-5vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-10vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-7vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-8vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-mpt2sas-plugin_2.0.0-6vmw.650.1.26.5969303, VMware_bootbank_native-misc-drivers_6.5.0-0.0.4564106, VMware_bootbank_rste_2.0.2.0088-4vmw.650.0.0.4564106, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.10-1.26.5969303, VMware_locker_tools-light_6.5.0-1.33.7273056
[root@xd-1541-5028d:~] reboot
This story continues to develop. Please expect that the information in the article below may be updated at any time, especially these first few days. I suggest that you revisit and refresh.
I'm a vSAN Systems Engineer who joined VMware one year ago yesterday. I've been getting a lot of questions lately from my customers wondering about these security vulnerabilities. While these side-channel attacks certainly are not only applicable to virtualization, note that VMware vSphere does also need to be patched, along with the system's BIOS, and all the OSs running in VMs. This article helps bring together some of the crucial information IT Pros need as they begin to prepare for risk mitigation in their environment.
I couldn't find any articles like this one, so I decided to write this. While I usually stick to mostly home lab topics here at TinkerTry, this particular risk certainly crosses right over to the infrastructure running companies of any size.
This whole issue has been quite the IT story! For me personally, it all started when I noticed this little Jan 01 2018 tweet by Matt Tait @pwnallthethings that predicted this story would be big. Very big. How right he was!
In a way, all this behind-the-scenes collaboration between so many companies, especially between Intel and various software and hardware companies, has been, dare I say it, reassuring? The idea here was to hopefully head off exploitation of these weaknesses in the wild, which doesn't tend to take long once details are disclosed. This herculean risk mitigation effort seems to me to require an unprecedented level of collaborative and coordinated effort. One can only hope that the industry comes out of this mess stronger than ever, eventually. By the way, did you know that the Meltdown vulnerability can be traced all the way back to 1995?
Unfortunately, folks with older CPUs (Haswell and earlier) are likely to suffer a performance hit after these fixes, but the overall impact should be negligible for client workloads on systems that get BIOS updates and OS upgrades. Only time will tell.
This collection of technical articles should help you get up to speed on what you need to do, since a careful look at all elements of your datacenter is warranted. These links are really just a starting point in your personal journey of understanding, then action.
I make no claims to be an expert on any of this, I'm just the curator. If you're interested in seeing the exact mitigation steps I've taken in my own virtualization home lab, see:
There has been recent press coverage regarding a potential security issue related to modern microprocessors and speculative execution. Information security is a priority at AMD, and our security architects follow the technology ecosystem closely for new threats.
It is important to understand how the speculative execution vulnerability described in the research relates to AMD products, but please keep in mind the following:
The research described was performed in a controlled, dedicated lab environment by a highly knowledgeable team with detailed, non-public information about the processors targeted.
The described threat has not been seen in the public domain.
Based on the recent research findings from Google on the potential new cache timing side-channels exploiting processor speculation, here is the latest information on possible Arm processors impacted and their potential mitigations. We will post any new research findings here as needed.
Cache timing side-channels are a well-understood concept in the area of security research and therefore not a new finding. However, this side-channel mechanism could enable someone to potentially extract some information that otherwise would not be accessible to software from processors that are performing as designed. This is the issue addressed here and in the Cache Speculation Side-channels whitepaper. ...
...
In early December we began distributing Intel firmware updates to our OEM partners. For Intel CPUs introduced in the past five years, we expect to issue updates for more than 90 percent of them within a week, and the remainder by the end of January. We will continue to issue updates for other products thereafter. We are pleased with this progress, but recognize there is much more work to do to support our customers.
Press Kit: Security Exploits and Intel Products
Our goal is to provide our customers with the best possible protection against the exploits while minimizing the performance impact of the updates. We plan to share more extensive information about performance impact when we can, but we also want to provide some initial information today.
Based on our most recent PC benchmarking, we continue to expect that the performance impact should not be significant for average computer users. This means the typical home and business PC user should not see significant slowdowns in common tasks such as reading email, writing a document or accessing digital photos. Based on our tests on SYSmark 2014 SE, a leading benchmark of PC performance, 8th Generation Core platforms with solid state storage will see a performance impact of 6 percent or less*. (SYSmark is a collection of benchmark tests; individual test results ranged from 2 percent to 14 percent.)
Ensuring the security of our customers’ data is job one. To help keep our customers’ data safe, we have been focused on the development and testing of the updates. We still have work to do to build a complete picture of the impact on data center systems. However, others in the industry have begun sharing some useful results. As reported last week, several industry partners that offer cloud computing services to other businesses have disclosed results that showed little to no performance impact. Also, Red Hat and Microsoft have both shared performance information. ...
...
As Intel reported in their FAQ, researchers demonstrated a proof of concept. That said, Dell is not aware of any exploits to date.
... Patch guidance
There are essential components that need to be applied to mitigate the above mentioned vulnerabilities:
Apply the firmware update via BIOS update.
Apply the applicable operating system (OS) patch.
Apply hypervisor patches, browser and JavaScript engines updates where applicable.
For more information on affected platforms and next steps to apply the updates, please refer to the following resources. They will be updated regularly as new information becomes available. Dell is testing all firmware updates before deploying them to ensure minimal impact to customers.
Dell EMC is aware of the new side-channel analysis vulnerabilities (also known as Meltdown and Spectre) affecting many modern microprocessors that were discovered and published by a team of security researchers on January 3, 2018. We encourage customers to review the Security Advisories in the References section for more information.
Dell EMC is investigating this issue to identify any potential impact to products and will update this article with information as it becomes available, including impacted products and remediation steps.
There are two essential components that need to be applied to mitigate the above mentioned vulnerabilities:
System BIOS as per Tables below
Operating System & Hypervisor updates. ...
We’ve been working closely with our vendors concerning the security vulnerability announced on January 3, 2018. This vulnerability has the potential to allow those with malicious intent to gather sensitive data from computing devices. Intel believes these exploits do not have the potential to corrupt, modify, or delete data.
We will be applying patches to our VSI cloud hosts worldwide starting January 5, 2018 through January 8, 2018 to mitigate the risk to our virtual server clients. Due to the nature of this vulnerability and the affected components, we are not able to mitigate this potential vulnerability via hot patching; cloud host reboots are required. While we do not expect any problems with the reboots, all customers should create a backup of all data from their virtual server instances.
In addition to providing an overall schedule to clients with active virtual servers, we’ll also use maintenance tickets to notify customers when their VSIs are scheduled to be rebooted. These maintenance tickets will identify the scheduled VSIs and provide the date and time of the cloud host reboot. Clients also can expect to receive a two-hour reminder update before the maintenance event, a ticket update with the start of maintenance, and a final ticket update once the maintenance is complete.
Firmware updates and operating system updates will be required for our bare metal offerings. Please watch for these updates and instructions as they become available in the client control portal. We will push these notifications as soon as we receive updates from the relevant vendors.
In addition to the cloud infrastructure mitigations above, our engineers will apply similar patches to the platform compute offerings from the IBM Container Service, IBM Cloud Foundry platform, and IBM Cloud Functions, after the necessary vendor updates are available and tested.
We will update this blog post as more information is available. ...
On Wednesday, January 3, researchers from Google announced a security vulnerability impacting microprocessors, including processors in the IBM POWER family.
This vulnerability doesn’t allow an external unauthorized party to gain access to a machine, but it could allow a party that has access to the system to access unauthorized data. ...
This website is updated frequently, as new product information becomes available.
On January 3 2018, side-channel security vulnerabilities involving speculative execution were publicly disclosed. These vulnerabilities may impact the listed HPE products, potentially leading to information disclosure and elevation of privilege. Mitigation and resolution of these vulnerabilities may call for both an operating system update, provided by the OS vendor, and a system ROM update from HPE. Intel has provided a high level statement here: https://newsroom.intel.com/news/intel-responds-to-security-research-findings/
Note: Intel has informed HPE that Itanium is not impacted by these vulnerabilities. ...
...
Lenovo is aware of vulnerabilities regarding certain processors nicknamed “Spectre” and “Meltdown” by their discoverers. Both are “side channel” exploits, meaning they do not access protected data directly, but rather induce the processor to operate in a specific way, and observe execution timing or other externally visible characteristics to infer the protected data. We continue to work with our processor and operating system suppliers to incorporate fixes as we receive them. Lenovo will update this page frequently as fixes are released and new information emerges. Please check back often ... Product Impact:
Please click for more info. Client systems: Desktop, Desktop - All In One, IdeaPad, Tablet, ThinkPad, ThinkStation. Enterprise systems: Converged and ThinkAgile Solutions, Hyperscale, Networking Switches, Server Management Software, Storage, System x - IBM, System x - Lenovo, ThinkServer, ThinkSystem.
Details regarding a previously undisclosed microprocessor vulnerability that could impact Supermicro systems have been announced, requiring a microcode update of the system BIOS along with an operating system update. Commonly referred to as Meltdown and Spectre, the vulnerability involves malicious code utilizing a new method of side-channel analysis; running locally on a normally operating platform, it has the potential to allow the inference of data values from memory.
We are working around the clock to integrate, test and release the updates as soon as they are made available. To address the issue systems will need both an Operating System update and a BIOS update. Please check with operating system or VM vendors for related information.
Note that Supermicro hasn't yet updated their ESXi 6.0 entries on the VMware Hardware Compatibility Guide to 6.5; check all vendors' Xeon D entries here. Please feel free to contact Supermicro directly to register your request, stating that you would like ESXi 6.5 and ESXi 6.5U1 to appear on the VMware Hardware Compatibility Guide. While Xeon D SuperServers work great with ESXi 6.5, just like they did with 6.0, it would be best to have official support for this latest release.
We wanted to provide a bit more context for the most recent login issues and service instability. All of our cloud services are affected by updates required to mitigate the Meltdown vulnerability. We heavily rely on cloud services to run our back-end and we may experience further service issues due to ongoing updates.
Here is a link to an article which describes the issue in depth.
The following chart shows the significant impact on CPU usage of one of our back-end services after a host was patched to address the Meltdown vulnerability.
... Summary
Microsoft is aware of new vulnerabilities in hardware processors named “Spectre” and “Meltdown”. These are a newly discovered class of vulnerabilities based on a common chip architecture that, when originally designed, was created to speed up computers. The technical name is “speculative execution side-channel vulnerabilities”. You can learn more about these vulnerabilities at Google Project Zero.
Who is affected?
Affected chips include those manufactured by Intel, AMD, and ARM, which means all devices running Windows operating systems are potentially vulnerable (e.g., desktops, laptops, cloud servers, and smartphones). Devices running other operating systems such as Android, Chrome, iOS, and MacOS are also affected. We advise customers running these operating systems to seek guidance from those vendors.
At this time of publication, we have not received any information to indicate that these vulnerabilities have been used to attack customers. ...
TERRY MYERSON Executive Vice President, Windows and Devices Group
in Security Development, Security Strategies, Industry Trends
Last week the technology industry and many of our customers learned of new vulnerabilities in the hardware chips that power phones, PCs and servers. We (and others in the industry) had learned of this vulnerability under nondisclosure agreement several months ago and immediately began developing engineering mitigations and updating our cloud infrastructure. In this blog, I’ll describe the discovered vulnerabilities as clearly as I can, discuss what customers can do to help keep themselves safe, and share what we’ve learned so far about performance impacts.
What Are the New Vulnerabilities?
On Wednesday, Jan. 3, security researchers publicly detailed three potential vulnerabilities named “Meltdown” and “Spectre.” Several blogs have tried to explain these vulnerabilities further — a clear description can be found via Stratechery. ...
Red Hat has been made aware of multiple microarchitectural (hardware) implementation issues affecting many modern microprocessors, requiring updates to the Linux kernel, virtualization-related components, and/or in combination with a microcode update. An unprivileged attacker can use these flaws to bypass conventional memory security restrictions in order to gain read access to privileged memory that would otherwise be inaccessible. There are 3 known CVEs related to this issue in combination with Intel, AMD, and ARM architectures. Additional exploits for other architectures are also known to exist. These include IBM System Z, POWER8 (Big Endian and Little Endian), and POWER9 (Little Endian). ...
Summary
VMware ESXi, Workstation and Fusion updates address side-channel analysis due to speculative execution.
Relevant Products
VMware vSphere ESXi (ESXi)
VMware Workstation Pro / Player (Workstation)
VMware Fusion Pro / Fusion (Fusion)
Problem Description
Bounds-Check bypass and Branch Target Injection issues
CPU data cache timing can be abused to efficiently leak information out of mis-speculated CPU execution, leading to (at worst) arbitrary virtual memory read vulnerabilities across local security boundaries in various contexts. (Speculative execution is an automatic and inherent CPU performance optimization used in all modern processors.) ESXi, Workstation and Fusion are vulnerable to Bounds Check Bypass and Branch Target Injection issues resulting from this vulnerability.
Result of exploitation may allow for information disclosure from one Virtual Machine to another Virtual Machine that is running on the same host. The remediation listed in the table below is for the known variants of the Bounds Check Bypass and Branch Target Injection issues.
The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifiers CVE-2017-5753 (Bounds Check bypass) and CVE-2017-5715 (Branch Target Injection) to these issues.
...
... 1. Summary
VMware vSphere, Workstation and Fusion updates add Hypervisor- Assisted Guest Remediation for speculative execution issue.
Notes:
Hypervisor remediation can be classified into the two following categories:
Hypervisor-Specific Remediation (documented in VMSA-2018-0002)
Hypervisor-Assisted Guest Remediation (documented in this advisory)
2. Relevant Products
VMware vCenter Server (VC)
VMware vSphere ESXi (ESXi)
VMware Workstation Pro / Player (Workstation)
VMware Fusion Pro / Fusion (Fusion)
3. Problem Description
New speculative-execution control mechanism for Virtual Machines
Updates of vCenter Server, ESXi, Workstation and Fusion virtualize the new speculative-execution control mechanism for Virtual Machines (VMs). As a result, a patched Guest Operating System (Guest OS) can remediate the Branch Target Injection issue (CVE-2017-5715). This issue may allow for information disclosure between processes within the VM.
...
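Since the hypervisor-assisted remediation ultimately depends on a patched Guest OS, it's handy to check what the guest itself reports. On a Linux guest with a kernel new enough to include the mitigation-reporting patches (these sysfs files don't exist on older, unpatched kernels), a quick check looks like this; just a sketch, not VMware's official verification method:

# Each file reports "Vulnerable" or "Mitigation: ..." for meltdown,
# spectre_v1, and spectre_v2, depending on kernel and microcode support
grep . /sys/devices/system/cpu/vulnerabilities/* 2>/dev/null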
I was going to mention this topic in my newsletter this weekend. But things got sort of crazy, and there's now so much confusing information out there that I thought I'd treat this as a separate subject rather than as part of my newsletter.
Here is an article by one of the top security thinkers – good info! Here is some good technical detail as well.
...
Meltdown and Spectre exploit critical vulnerabilities in modern processors. These hardware vulnerabilities allow programs to steal data which is currently processed on the computer. While programs are typically not permitted to read data from other programs, a malicious program can exploit Meltdown and Spectre to get hold of secrets stored in the memory of other running programs. This might include your passwords stored in a password manager or browser, your personal photos, emails, instant messages and even business-critical documents.
Meltdown and Spectre work on personal computers, mobile devices, and in the cloud. Depending on the cloud provider's infrastructure, it might be possible to steal data from other customers. ...
This week, before we focus upon the industry-wide catastrophe enabled by precisely timing the instruction execution of all contemporary high-performance processor architectures…
Optionally, you can jump ahead to just the right spot where Steve really dives into this at length, streamed right to your browser complete with playback speed controls. See also the detailed transcript-like show notes.
Red Hat has been heavily involved in the Meltdown and Spectre patch efforts. It also had its initial patches ready well before the originally planned disclosure date of January 9, 2018. Red Hat is also in the unique position of having the most robust set of enterprise open source OS customers. Those same customers are clamoring for information regarding the performance impacts of the Meltdown and Spectre series of patches. ...
Yesterday, news broke about vulnerabilities affecting AMD, Intel, and ARM CPUs. These vulnerabilities, termed Meltdown and Spectre, have the potential to expose information that the machine(s) process. Check out this post for an in-depth look. At this point, it appears that VMware is not vulnerable to Meltdown; however, they have released patches for Spectre. It has been speculated that patching the flaws will cause performance hits; to what degree varies by reporting source. As always, test patches before deployment and contact support if you have any questions. ...
There is a lot of information swirling around out there on what to do about the latest Spectre/Meltdown vulnerabilities. Whereas I can't tell you how to solve the vulnerabilities for every Hardware and Operating System combination, I can tell you how to get your Hyper-V environments protected.
ON A COLD Sunday early last month in the small Austrian city of Graz, three young researchers sat down in front of the computers in their homes and tried to break their most fundamental security protections.
Two days earlier, in their lab at Graz's University of Technology, Moritz Lipp, Daniel Gruss, and Michael Schwarz had determined to tease out an idea that had nagged at them for weeks, a loose thread in the safeguards underpinning how processors defend the most sensitive memory of billions of computers. ...
Read the VMware vCenter Server 6.5 Update 1e Release Notes. For those of you with VCSA 6.5.x already installed, the simple VAMI update method also means you won't need the VMware-VCSA-all-6.5.0-7119157.iso from the Download Page for: VMware-VCSA-all-6.5.0-7119157.iso | Release Date: 2017-11-14 | Build Number: 7119157
This upgrade is also known as version 6.5.0.14000 or 6.5U1e or Build 7515524, as seen in the vCenter Server Appliance Management Interface (VAMI), pictured at the top of this article. I have a screenshot and video of the process below, and the upgrade method is the same VAMI method that I featured in my recent (Dec 23 2017) article, only the versions (6.5U1d → 6.5U1e) differ slightly:
The comment above is still relevant: admittedly, this is just one more-universal way to upgrade ESXi, avoiding the need to download the ISO separately. Booting from a new ISO has the advantage of checking for CPU compatibility before installing; the method below does not. All upgrades come with risks, including the possibility of losing your network connections. Proceed at your own risk, and always back up first.
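Speaking of backing up first: before kicking off the esxcli upgrade shown later in this article, a few pre-flight steps are worth the minute they take. A minimal sketch, assuming SSH access and that your VMs are already shut down or migrated off:

# Save the host configuration; this prints a URL where the backup
# bundle can be downloaded
vim-cmd hostsvc/firmware/backup_config

# Note the current version and build, so you can confirm the change later
vmware -vl

# Enter maintenance mode before patching
esxcli system maintenanceMode set --enable true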
Meltdown and Spectre are looming large this year, and this article is in direct response to all that; see also the related coverage at TinkerTry.
Warning!
But don't rush things. Xeon D owners, and a few other Xeon E5 and E7 owners, will want to read this entire article before patching anything! There's a surprise twist to this story at the end!
If you've already performed this patch and rebooted, various UIs will show your ESXi version, depending upon where you look:
Profile Name ESXi-6.5.0-20180104001-standard
Summaries and Symptoms
This patch updates the following issue:
This ESXi patch provides part of the hypervisor-assisted guest remediation of CVE-2017-5715 for guest operating systems. For important details on this remediation, see VMware Security Advisory VMSA-2018-0004. ...
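To check which image profile a host is actually running, so you know whether this patch is already in place, there's a quick query for that. A sketch, assuming SSH/ESXi Shell is enabled:

# Displays the installed image profile name,
# e.g. ESXi-6.5.0-20180104001-standard once this patch is applied
esxcli software profile get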
Warning!
If you have an Intel Xeon D system, note that VMware published a new article earlier today, Jan 13 2018. You'll want to read it before you proceed, especially the sentence at the end that I bolded:
Xeon D is of particular interest to TinkerTry readers.
Document Id: 52345
Purpose
Although VMware strongly recommends that customers obtain microcode patches through their hardware vendor, as an aid to customers, VMware also included the initial microcode patches in ESXi650-201801402-BG, ESXi600-201801402-BG, and ESXi550-201801401-BG. Intel has notified VMware of recent sightings that may affect some of the initial microcode patches that provide the speculative execution control mechanism for a number of Intel Haswell and Broadwell processors. The issue can occur when the speculative execution control is actually used within a virtual machine by a patched OS. At this point, it has been recommended that VMware remove exposure of the speculative-execution mechanism to virtual machines on ESXi hosts using the affected Intel processors until Intel provides new microcode at a later date.
Resolution
For ESXi hosts that have not yet applied one of the following patches ESXi650-201801402-BG, ESXi600-201801402-BG, or ESXi550-201801401-BG, VMware recommends not doing so at this time. It is recommended to apply the patches listed in VMSA-2018-0002 instead. ...
Ok, there you go, saved you! Having a look at VMSA-2018-0002, it's telling you to go with ESXi650-201712101-SG, which is part of the ESXi-6.5.0-20171201001s-standard version my December article fully documents installing! That's right, for now, if you're not already on Build 7388607, you should get there; simply follow along with this recent TinkerTry article:
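If you're not sure whether you already applied the now-pulled microcode patch, the cpu-microcode VIB version on your host will tell you. A minimal sketch, assuming SSH access; compare the output against the patch IDs named in the KB article above:

# The cpu-microcode VIB version reveals whether the pulled
# ESXi650-201801402-BG payload made it onto this host
esxcli software vib list | grep -i cpu-microcode

# Cross-check the overall build number against the advisories
esxcli system version get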
Below, I've pasted the full text of the upgrade that I later found out I shouldn't have done. It will help you see what drivers are touched. Just use the horizontal scroll bar or shift + mousewheel to look around, and Ctrl+F to Find stuff quickly:
As seen in my video, here's the full contents of my ssh session, as I completed my Xeon D-1541 upgrade from Version: 6.5.0 Update 1 (Build 7388607)
to: Version: 6.5.0 Update 1 (Build 7526125)
login as: root
Using keyboard-interactive authentication.
Password:
The time and date of this login have been sent to the system logs.
WARNING:
All commands run on the ESXi shell are logged and may be included in
support bundles. Do not provide passwords directly on the command line.
Most tools can prompt for secrets or accept them from standard input.
VMware offers supported, powerful system administration tools. Please
see www.vmware.com/go/sysadmintools for details.
The ESXi Shell can be disabled by an administrative user. See the
vSphere Security documentation for more information.
[root@xd-1541-5028d:~] esxcli software profile install -p ESXi-6.5.0-20180104001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
[Exception]
You attempted to install an image profile which would have resulted in the removal of VIBs ['INT_bootbank_intel-nvme_1.2.1.15-1OEM.650.0.0.4598673']. If this is not what you intended, you may use the esxcli software profile update command to preserve the VIBs above. If this is what you intended, please use the --ok-to-remove option to explicitly allow the removal.
Please refer to the log file for more details.
[root@xd-1541-5028d:~] esxcli software profile install -p ESXi-6.5.0-20180104001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml --ok-to-remove
Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: VMW_bootbank_net-igb_5.0.5.1.1-5vmw.650.0.0.4564106, VMW_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.650.0.0.4564106, VMware_bootbank_cpu-microcode_6.5.0-1.38.7526125, VMware_bootbank_esx-base_6.5.0-1.38.7526125, VMware_bootbank_esx-tboot_6.5.0-1.38.7526125, VMware_bootbank_vsan_6.5.0-1.38.7395176, VMware_bootbank_vsanhealth_6.5.0-1.38.7395177
VIBs Removed: INT_bootbank_intel-nvme_1.2.1.15-1OEM.650.0.0.4598673, INT_bootbank_net-igb_5.3.3-1OEM.600.0.0.2494585, INT_bootbank_net-ixgbe_4.5.3-1OEM.600.0.0.2494585, VMware_bootbank_cpu-microcode_6.5.0-0.0.4564106, VMware_bootbank_esx-base_6.5.0-1.36.7388607, VMware_bootbank_esx-tboot_6.5.0-1.36.7388607, VMware_bootbank_vsan_6.5.0-1.36.7388608, VMware_bootbank_vsanhealth_6.5.0-1.36.7388609
VIBs Skipped: VMW_bootbank_ata-libata-92_3.00.9.2-16vmw.650.0.0.4564106, VMW_bootbank_ata-pata-amd_0.3.10-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-atiixp_0.4.6-4vmw.650.0.0.4564106, VMW_bootbank_ata-pata-cmd64x_0.2.5-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-pdc2027x_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-serverworks_0.4.3-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-sil680_0.4.8-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-via_0.3.3-2vmw.650.0.0.4564106, VMW_bootbank_block-cciss_3.6.14-10vmw.650.0.0.4564106, VMW_bootbank_char-random_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ehci-ehci-hcd_1.0-4vmw.650.0.14.5146846, VMW_bootbank_elxnet_11.1.91.0-1vmw.650.0.0.4564106, VMW_bootbank_hid-hid_1.0-3vmw.650.0.0.4564106, VMW_bootbank_i40en_1.3.1-5vmw.650.1.26.5969303, VMW_bootbank_igbn_0.1.0.0-15vmw.650.1.36.7388607, VMW_bootbank_ima-qla4xxx_2.02.18-1vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-devintf_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ixgben_1.4.1-2vmw.650.1.26.5969303, VMW_bootbank_lpfc_11.1.0.6-1vmw.650.0.0.4564106, VMW_bootbank_lsi-mr3_6.910.18.00-1vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt2_20.00.01.00-3vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt3_12.00.02.00-11vmw.650.0.0.4564106, VMW_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.650.0.0.4564106, VMW_bootbank_misc-drivers_6.5.0-1.36.7388607, VMW_bootbank_mtip32xx-native_3.9.5-1vmw.650.0.0.4564106, VMW_bootbank_ne1000_0.8.0-16vmw.650.1.26.5969303, VMW_bootbank_nenic_1.0.0.2-1vmw.650.0.0.4564106, VMW_bootbank_net-bnx2_2.2.4f.v60.10-2vmw.650.0.0.4564106, VMW_bootbank_net-bnx2x_1.78.80.v60.12-1vmw.650.0.0.4564106, VMW_bootbank_net-cdc-ether_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-cnic_1.78.76.v60.13-2vmw.650.0.0.4564106, VMW_bootbank_net-e1000_8.0.3.1-5vmw.650.0.0.4564106, VMW_bootbank_net-e1000e_3.2.2.1-2vmw.650.0.0.4564106, VMW_bootbank_net-enic_2.1.2.38-2vmw.650.0.0.4564106, VMW_bootbank_net-fcoe_1.0.29.9.3-7vmw.650.0.0.4564106, VMW_bootbank_net-forcedeth_0.61-2vmw.650.0.0.4564106, VMW_bootbank_net-libfcoe-92_1.0.24.9.4-8vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-core_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-en_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-nx-nic_5.0.621-5vmw.650.0.0.4564106, VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.650.0.0.4564106, VMW_bootbank_net-usbnet_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.650.0.0.4564106, VMW_bootbank_nhpsa_2.0.6-3vmw.650.0.0.4564106, VMW_bootbank_nmlx4-core_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-en_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-rdma_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx5-core_4.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_ntg3_4.1.3.0-1vmw.650.1.36.7388607, VMW_bootbank_nvme_1.2.0.32-5vmw.650.1.36.7388607, VMW_bootbank_nvmxnet3_2.0.0.23-1vmw.650.1.36.7388607, VMW_bootbank_ohci-usb-ohci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_pvscsi_0.1-1vmw.650.1.26.5969303, VMW_bootbank_qedentv_2.0.3.29-1vmw.650.0.0.4564106, VMW_bootbank_qfle3_1.0.2.7-1vmw.650.0.0.4564106, VMW_bootbank_qflge_1.1.0.3-1vmw.650.0.0.4564106, VMW_bootbank_qlnativefc_2.1.50.0-1vmw.650.1.26.5969303, VMW_bootbank_sata-ahci_3.0-26vmw.650.1.26.5969303, VMW_bootbank_sata-ata-piix_2.12-10vmw.650.0.0.4564106, VMW_bootbank_sata-sata-nv_3.5-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-promise_2.12-3vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil24_1.1-1vmw.650.0.0.4564106, 
VMW_bootbank_sata-sata-sil_2.3-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-svw_2.3-3vmw.650.0.0.4564106, VMW_bootbank_scsi-aacraid_1.1.5.1-9vmw.650.0.0.4564106, VMW_bootbank_scsi-adp94xx_1.0.8.12-6vmw.650.0.0.4564106, VMW_bootbank_scsi-aic79xx_3.1-5vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-fnic_1.5.0.45-3vmw.650.0.0.4564106, VMW_bootbank_scsi-hpsa_6.0.0.84-1vmw.650.0.0.4564106, VMW_bootbank_scsi-ips_7.12.05-4vmw.650.0.0.4564106, VMW_bootbank_scsi-iscsi-linux-92_1.0.0.2-3vmw.650.0.0.4564106, VMW_bootbank_scsi-libfc-92_1.0.40.9.3-5vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid2_2.00.4-9vmw.650.0.0.4564106, VMW_bootbank_scsi-mpt2sas_19.00.00.00-1vmw.650.0.0.4564106, VMW_bootbank_scsi-mptsas_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-mptspi_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.650.0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-3-0_6.5.0-0.0.4564106, VMW_bootbank_uhci-usb-uhci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usb-storage-usb-storage_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usbcore-usb_1.0-3vmw.650.1.26.5969303, VMW_bootbank_vmkata_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmkplexer-vmkplexer_6.5.0-0.0.4564106, VMW_bootbank_vmkusb_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmw-ahci_1.0.0-39vmw.650.1.26.5969303, VMW_bootbank_xhci-xhci_1.0-3vmw.650.0.0.4564106, VMware_bootbank_emulex-esx-elxnetcli_11.1.28.0-0.0.4564106, VMware_bootbank_esx-dvfilter-generic-fastpath_6.5.0-1.36.7388607, VMware_bootbank_esx-ui_1.23.0-6506686, VMware_bootbank_esx-xserver_6.5.0-0.23.5969300, VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-5vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-10vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-7vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-8vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-mpt2sas-plugin_2.0.0-6vmw.650.1.26.5969303, VMware_bootbank_native-misc-drivers_6.5.0-0.0.4564106, VMware_bootbank_rste_2.0.2.0088-4vmw.650.0.0.4564106, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.10-1.26.5969303, VMware_locker_tools-light_6.5.0-1.33.7273056
[root@xd-1541-5028d:~] reboot
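Once the host comes back up, it's worth confirming that the new build actually took. A quick check:

# Should now report the new build, 7526125 in my case
vmware -vl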
Credit for the above iPhone X-ray image with Qi charging coil goes to iFixit, and it's available for download in full-resolution for use as lock screen wallpaper!
Disclosure: This article contains affiliate links, details in the Affiliate Link Disclosure below.
Many of these items will work with Android phones just fine. For me, my experience has been mostly with iPhones, since the iPhone 3GS arrived back in 2009. Even then, I immediately invested in a cradle base for my car, replacing just the cradles as needed ever since. I've been enjoying my iPhone X since it arrived in November, and it's been a better experience than all my other every-other-year upgrades. Little things like widespread online availability of Apple's leather case made even the initial experience easier, as did having a USB-C to Lightning cable on hand for extremely fast charging with certain power supplies. It charges fast from my Dell Precision 5520's USB-C port, offering a secure wired Personal Hotspot and high speed charging. Perfect for my business travel, especially in airports! This combo is something I've been wanting for about a decade now.
There was a bit of a learning curve with the home-buttonless design. Face ID and swiping to get to my home screen took a few minutes to (mostly) get used to. After just a couple of hours, going back to my iPhone 7 Plus for even a few moments felt awkward, and it appeared ancient, with that big chin. I had already forgotten to hit the home button instead of swiping up.
All that said, I knew my iPhone X was a keeper. Quickly. I also knew this new iPhone would require some new accessories, greatly enhancing my ownership experience. Qi was a big new thing for me, and fast charging is such a joy.
I do a fair amount of driving around alone, for my job. I'd say it's a 90/10 split between listening to podcasts versus Spotify and conference calls. So having easy access to these functions continues to be of utmost importance, and I like the full quality audio that the wired experience offers me in a car.
Listening to Podcasts
I also listen to podcasts. A lot of podcasts. And the sound quality of the built-in speakers blaring from my pocket is still rather tinny compared to a decent set of speakers. I have my car all wired up with great quality sound, but what about the shower, or at my work desk? Even better would be Bluetooth speakers with playback controls including pause.
Attending conferences
I attend conferences. A lot of conferences. Battery life is always a concern at such events, where cell signal is frequently poor, and a bunch of photos, videos, and tweets are made. This all affects battery life. I'd rather not stay in sluggish battery saver mode all day, and would prefer to be ready in advance with a charging case that easily fits into my pocket.
Below, you'll find that most of the devices have succeeded in meeting these needs for a while now, and each product has a story. I've also put together some unboxing videos too. I hope this information helps others, knowing full well what a hard time I had finding devices that would fit these needs, and what a pleasure it is when you finally find solutions that fit the bill.
my informal tests got me from 56% to 76% battery charge in an hour
Not Good
playback control buttons can be hard to distinguish from one another, visually or by touch
no mute button, so you'll need to mute from your phone if you're going to use it as a speakerphone
Conclusion
I've had mine since July 2017, first available on Amazon Feb 2015. It works well, is loud enough, and seems completely unaffected by moisture. Great for podcasts, sufficient for music or FM radio. Happy with this purchase, and still haven't seen anything else quite like it, even at local malls, Best Buy, The Microsoft Store, or The Apple Store.
My unboxing and testing video will be inserted right here, once production is completed.
iPhone X Battery Case [Support Lightning Headphones],PEYOU 3200mAh ULTRA SLIM Portable Charger Rechargeable Extended Backup Battery TPU Protective Charging Pack Power Bank Case for iPhone X /iPhone 10
labels look cheap (but they're hidden by the phone)
battery safety reputation unknown
currently only a 3.5 star review average
as advertised, it only supports basic headphone/earbud audio functionality
I can't use it with my AT2005USB external mic to record podcast content on the go, so I still may need to carry an external battery pack at the bigger, all-day conferences
Seems good enough for occasional use, does provide me roughly one full iPhone X charge when on the go for an entire day without a power source. Much more convenient than carrying a separate battery in your pocket, even the newer Qi types such as this Mophie Powerstation Wireless External Battery Charger for Qi Enabled Smartphones.
Fast Wireless Charger with Bluetooth Speaker, Home Stereo, Computer Speaker, 2 Coils Wireless Charging Stand for Samsung Galaxy Note 8/S8/S7 Edge/S7, iPhone X/8, LG HTC All Qi-Enabled Devices
when plugged in, automatically reconnects with the last paired device, no button pushes necessary
audio quality is vastly better (less tinny) than the built-in speakers on the iPhone X itself
Bad
says "Bluetooth Mode" every time it's plugged in, and the volume of those words is the same, no matter what volume is settings on your paired device are
no playback or volume controls on the device itself
lowest 1 bar of volume on iPhone X itself is a bit too loud, could be an issue for late-night listening
Conclusion
it's looking like a keeper for me and my needs, but I suspect it will soon be joined by better, brand-name choices with similar functionality
Top Adjustable iPhone Holder for Lightning to USB Cable, Item 514998
ProClip USA in Black.
Read more about it at TinkerTry here.
Good
sturdy ProClip USA dashboard mount for iPhone X allows one-handed dock-and-go
Lightning wired connection for full audio quality
spring loaded top clip handles vehicle's bumps and turns without disconnecting
fits iPhone X nicely, even in a case
I can likely go back to my iPhone 7 Plus cradle, should I upgrade to whatever the iPhone X Plus winds up being called (iPhone 11 Plus/iPhone 11/iPhone XI) someday, already rumored to be a 6.5" display
Bad
doesn't really fit family member's iPhone 7 Plus, work-around is to insert phone at an angle, with the top unsecured by the clip
no playback or volume controls on the device itself
won't fit my 30 pin to Lightning adapter, so I just used the two screws to swap the lower section with my iPhone 7 cradle that already had the adapter/cable assembly
Easy to slide right in, but I had to remove my leather case first, of course. This shows my iPhone 7 Plus cradle at left, and my new iPhone X cradle at right. I simply moved the plastic block with the white cable out of it, and plan to paint the white cable black. Easy!
In this podcast, you'll also hear about the audio tech that I'm using now, including the Bose Quiet Comfort 35 (Series I) and Audio-Technica AT2005USB Cardioid Dynamic USB/XLR Microphone, it's all detailed at this spot.
I'm currently a vSAN Systems Engineer working at VMware. This consumer focused article is intended for my home lab enthusiast audience at TinkerTry, and is not representative of VMware's official position on the use of consumer SSDs for VMware ESXi workloads. It's your responsibility to verify whether your production workloads are backed by SSDs found on the VMware Hardware Compatibility Guide here.
This article takes a quick look at Intel's latest SSD specs, and their potential for use in a VMware virtualization home lab environment. I do not currently have a sample of the 760p for first-hand testing. The information here is based on a briefing I recently received from an Intel spokesperson.
The new Intel SSD 760p Series far exceeds the specs of the first-generation M.2 NVMe SSD that Intel launched in Q3'16, the Intel SSD 600p Series, that maxed out at 1TB of capacity. The 760p goes up to 2TB, at much better speeds.
Last summer, the Intel SSD 545s 2.5" SATA SSD was the world's first 64-layer 3D NAND Technology SSD:
Intel Takes Another Major Step in Memory Leadership
Introducing the First 64-Layer, TLC, Intel 3D NAND Technology for Client Computing
By Rob Crooke
Intel has delivered the world’s first commercially available 64-layer, TLC, 3D NAND solid state drive (SSD). While others have been talking about it, we have delivered.
At Intel, our commitment is to drive platform-connected solutions that deliver a better experience wherever compute and data come together. We continue to invest in both Intel® 3D NAND technology and Intel® Optane™ technology to make that happen. ...
And now we have today's big announcement of the 760p Series. At last, the marketplace seems to have a worthy competitor for the beloved Samsung 960 EVO and 960 PRO NVMe SSDs in heavy use in my own home lab.
What about Optane?
How about that recently launched Optane-based 800P?
Warranty will be 5 years with an endurance of ~200GB per day. No word on cost at this time. Overall, though, these fit nicely between Optane Memory (16/32GB) and the 900P (280/480+GB) capacity points.
The elephant in the room is the capacity. While these can store more than the 16/32GB variants, 60/120GB may not be enough for most users out there. Fortunately, devices like these are great in Zx70 RAID or even VROC configurations!
In a nutshell, it's really about cost per gigabyte. Optane is currently too low capacity, at too high a cost per gigabyte, to be justified as an everyday-use VMFS datastore for VMware workloads in a home lab.
Once the P4800X shipped, it became clear the prices per gigabyte wouldn't make it a match for anything but the most pricey home labs. For straight up VMFS 6.0 VMware datastores, the Samsung 960 actually offered slightly higher raw throughputs, at higher latencies, but a much lower cost. Note, consumer NVMe drives aren't warranted for use as VMware datastores, but they sure do perform very well, given NVMe's high throughput and low latency, and relatively low cost, without requiring a pricey RAID adapter.
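To make the cost-per-gigabyte comparison concrete, here's a tiny shell sketch; the prices below are hypothetical placeholders for illustration only, so plug in current street prices before drawing any conclusions:

# usage: cost_per_gb <price_usd> <capacity_gb>
cost_per_gb() {
  awk -v p="$1" -v c="$2" 'BEGIN { printf "$%.2f per GB\n", p / c }'
}
cost_per_gb 600 480    # hypothetical 480GB Optane 900P street price
cost_per_gb 450 1000   # hypothetical 1TB consumer NVMe SSD street price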
How about the Optane 900P (4Q2017)
Next came the consumer oriented 900P, detailed at TinkerTry here. There were also many bumps in the road, and even a crash; it's clearly not intended to be used with VMware's P4800X NVMe VIB (driver).
I'd need to test this adapter first-hand to see how it goes with VMware ESXi 6.5.x, along with compatibility checks with my Xeon D's PCIe 3.0 x4 slot, and a BIOS that is capable of the bifurcation feature that PCIe to NVMe adapters without PLX chips require.
Additionally, the Product will not be subject to this Limited Warranty if used in:
(i) any compute, networking or storage system that supports workloads or data needs of more than one concurrent user or one
or more remote client device concurrently;
(ii) any server, networking or storage system that is capable of supporting more than one CPU per device; or
(iii) any device that is designed, marketed or sold to support or be incorporated into systems covered in clauses (i) or (ii).
Clearly the Intel SSD 7 Series is faster than the Intel SSD 6 Series, but we'll need to see how street prices look to really know what the differences are on a cost per gigabyte basis.
I also do not know how well they will perform with the VMware inbox (included) NVMe VIB (driver). You may recall that the Intel 750 Series required Intel's VIB for optimal performance. I'm not at all sure I'll have the budget or the time to do first-hand testing of the 760p here at TinkerTry.
Finally, the Samsung 960 PRO 2TB SSD and the (WD/SanDisk) Toshiba XG5-P 2TB M.2 NVMe SSDs may have a worthy competitor. Given how costly NAND and RAM have become, any competition could signal welcome relief to consumers already feeling the pinch of high DDR4 prices.
It seems that more solid details are starting to surface about the follow-on to the popular Xeon D SoC (System on a Chip), with up to 256GB of ECC memory or 512GB of LRDIMM memory, explained in the article Chris references:
Remember Xeon D, gerbils? If you're not employed in networking or systems administration, you probably don't—and if you are, you almost certainly do. The extant Xeon-D chips use up to 16 Broadwell CPU cores, take up to 128 GB of ECC DDR4 memory, and have dual on-chip 10-Gigabit Ethernet controllers. Those chips came out about three years ago though, so it's time for an update. Indeed, Intel's just updated its price list, and it now includes the Xeon D-2191, Xeon D-2161I, and Xeon D-2141I. ...
News about the next update on Intel’s Xeon-D line has been thin. For over a year now, we were expecting to hear what plans were in store for one of the more esoteric Intel SoC lines: the first generation parts were based on Broadwell, had up to sixteen cores, and supported both ECC memory up to 128GB and 10GBase-T on a single bit of silicon for under 45W. When it came out, it was amazing all this was on a single chip, compared to the quad-core parts in the consumer market. Xeon D ended up having a lot of uses for networking, storage, management, and dense server installations. How and when Intel would be updating this product line has been somewhat of a mystery. ...
It took years to get Xeon D shipping in a mature form. For example, the initial Xeon D-1540 was quickly replaced by the Xeon D-1541, then joined by a lot of other shapes and sizes in 2016. There have certainly been bumps in the road, but it's been a great experience overall, especially for the hundreds of TinkerTry'd Supermicro SuperServer Bundle owners out there, which I'm very grateful for.
Based on how long things took last time around, I suspect it will easily be well into 3rd quarter of 2018 before companies like Supermicro have products shipping in volume that are based on Skylake-D. Worth noting that the higher core counts with higher TDP/watts mean these aren't going to work in the tiny 1U chassis that made the SYS-E200-8D and SYS-E300 so popular, for folks more interested in portability than quiet computing. Their CSE-101F and CSE-E300 chassis only offer 60 or 80 watt power bricks.
As for whether Supermicro will come out with a Skylake-D Mini ITX motherboard paired with their popular mini tower form factor, I don't know, but I sure hope so. It does seem that all but the 18 core Skylake-D could possibly use the existing CSE-721TQ-250B chassis that the SYS-5028D-TN4T SuperServer uses, given its 250 watt design. Note that at least one guy already runs 16 Xeon cores in it.
I would hope Supermicro gets much more adventuresome than that though, with more M.2 slots for all those increasingly affordable NVMe SSDs, or at least a full-height PCIe slot to fit promising M.2 x 4 devices like the ASUS Hyper M.2 x16 Card.
Even better would be for many big server vendors like Dell EMC, HPE, and Lenovo to offer compact and quiet mini towers suited for home virtualization lab, in a variety of Skylake-D core counts/price points. One can always hope. See also Patrick Kennedy's highlighted comments below.
It will be an interesting year, and I'm very glad Intel continues to invest in compact SoC designs!
...
Intel is confirming the Skylake based Xeon D CPU in early 2018. We take “early” to mean that we will see the new Skylake Xeon D SoC in Q1 2018. ...
Why Intel needs a Skylake Xeon D update
We have heard it suggested that the 16 core Atom C3955 is the replacement, but this is not the case. Intel needs the Skylake Xeon D part to have a unified ISA between its mainstream Xeon and SoC products. That allows for things like live migrations of VMs. More importantly, it also allows companies to optimize binaries for caches and instruction sets and run them across embedded and mainstream servers. ...
One of the interesting sub-announcements to come out of Intel’s EPYC benchmark numbers was a slide on the ‘momentum’ of Intel’s new Xeon Scalable Platform using Skylake-SP cores. Alongside the notice of ‘110+ performance world records’ and ‘200 OEM systems shipping’ was a side note on the next iteration of Xeon-D, which will be getting the latest enterprise Skylake-SP cores. ...
Click the infographic to view it full size at space.com
Caption from SpaceX Channel's YouTube Video above:
When Falcon Heavy lifts off, it will be the most powerful operational rocket in the world by a factor of two. With the ability to lift into orbit nearly 64 metric tons (141,000 lb), a mass greater than a 737 jetliner loaded with passengers, crew, luggage and fuel, Falcon Heavy can lift more than twice the payload of the next closest operational vehicle, the Delta IV Heavy, at one-third the cost.
Falcon Heavy's first stage is composed of three Falcon 9 nine-engine cores whose 27 Merlin engines together generate more than 5 million pounds of thrust at liftoff, equal to approximately eighteen 747 aircraft.
Following liftoff, the two side boosters separate from the center core and return to landing sites for future reuse. The center core, traveling further and faster than the side boosters, also returns for reuse, but lands on a drone ship located in the Atlantic Ocean.
At max velocity the Roadster will travel 11 km/s (7 mi/s) and travel 400 million km (250 million mi) from Earth.
I was quite the Estes model rocket enthusiast in my youth. My neighbor's dad used to launch big models, and it certainly inspired me to give it a go myself. Living right near Wethersfield High School gave me the opportunity to thoroughly enjoy this hobby from late elementary school right through middle school, with my older brother helping chaperone my activities. My favorites were the Estes ASTROCAM, with its 110 film camera, and Big Bertha.
When I was asked to help move a sister-in-law from Florida back in the early 90s, I jumped at the chance to fly down there. I always enjoyed a travel adventure. As we came in for a landing at Fort Lauderdale, as luck would have it, I could easily see the Space Shuttle poised right there on pad 39A, ready for take off later that same afternoon! As soon as my cab arrived at my in-laws, I asked if I could borrow a car and head up to Cape Canaveral, since we weren't moving until the next day anyway. Boy was I glad the answer was yes! Off I went, barely making it on time to public lands along route 404 near Palm Shores, about a 3 hour drive. I was some 12 miles from the launch pad, about as close as you could get at that time, at least without any advance planning.
I will never forget how profoundly awesome the brilliant white light of the engines was, knowing full well my camcorder couldn't possibly do justice to what I had just witnessed. Then as the shuttle zoomed up to about 30 degrees of elevation above the horizon, the sound began to hit me. Yes, 12 miles x 5 seconds per mile (for the speed of sound) = 60 seconds. There's a loud grumble before the lift-off, and it only got more impressive from there. Shaking my bones, literally. Absolutely breathtakingly awesome.
Image from Pixabay - Creative Commons.
This experience was profoundly impressive. Yes, that's overly redundant, but it was that good.
There were strangers a few dozen feet away, but the experience was pretty solitary, and I found myself wishing I had family with me. Only later did I realize that I really wasn't alone, with plenty of fire ants keeping me company. Apparently I was too distracted to notice them exacting their revenge for my kneeling on their turf while setting up my tripod moments before the launch. This "won't do that again" Florida lesson made for an itchy 24 hour solo U-Haul truck drive back up to Connecticut the next day, but it was oh so very worth it!
I was visiting family in Florida with my kids back in July of 2005, but seeing the Space Shuttle STS-114 "Return to Flight" mission following the Columbia disaster just didn't work out. The launch kept getting delayed over and over. Turns out it finally did launch, the day after I was back home. So much for the Dad's-a-hero thing on that hot road trip. I try. Sigh.
Image courtesy of space.com, click to view the full article.
Back in February of 2010, I had the chance to see a distant STS-130 Shuttle launch from my home in Connecticut. Turns out the trajectory was more northerly than usual, so it could be seen in the pre-dawn hours. I woke my older son to enjoy the experience with me. It was just a rising dot in the southern sky, but he still remembers. And so do I.
On Sunday, Jan 28 2018, like many tech enthusiasts, I started hearing that SpaceX had successfully done some test-firing of their Falcon Heavy, and that a launch date of Tuesday, February 6 2018 was set.
Then I saw articles surface saying that this was the biggest rocket in the world right now. Here's the latest example of many, from the New York Times:
If the launch succeeds, the Falcon Heavy will rank as the most powerful rocket in operation today, and the mightiest space vehicle to blast off from the United States since NASA's Saturn 5 rockets last carried astronauts to the moon 45 years ago.
By Monday, I couldn't resist: I needed to check if my work schedule would allow such a last-minute trip, especially once I discovered airfares from several airlines in the sub-$200 range, round trip. Hmm...
I also noticed that there were 2 chances for an early afternoon launch, Tue Feb 6 and Wed Feb 7. So I picked an itinerary that would up my odds of successfully seeing another launch, one that keeps me in the Orlando area for both Tuesday and Wednesday.
I know it's all more than a little crazy and a bit extravagant, but I also know I don't particularly enjoy regretting not trying to get to such momentous occasions as this, even better when the experience is shared. By Monday evening of January 29th, we were all set. Expedia trip locked-and-loaded, Kennedy Space Center here we come!
Oh wait, what about tickets? I mean, tickets to get even closer to the launch pad this time around. The closest/priciest viewing areas were sold out even when I first started checking out the tickets site, so I went with the next-closest category. Looks promising, right at Kennedy Space Center. In 3 days, the required fuchsia parking permit showed up, and this was all becoming more real.
It only took about 100 minutes (and being put on hold 6 times), but in the end, my politeness and patience prevailed. Expedia and I worked things out, with jetBlue saving my bacon and re-booking my flights (thank you, Lori!), and Expedia paying the difference because a manager finally admitted their booking mistake. Phew!
Click for the latest forecast, which indicates the odds of weather getting in the way.
No flight delay, yay! The space coast forecast is 80% chance of a go, according to the 45th Space Wing Weather Squadron page. I'm just a big kid, who finds seeing and hearing what humanity is capable of first-hand quite thrilling. Like the time my grandpa brought me to JFK airport to see the Concorde take-off right in front of us, from a parking lot rooftop. Only louder.
We just arrived at our Orlando-area hotel this Monday evening, the eve of the launch, where I'm publishing this article ahead of the early start needed to get to Kennedy Space Center on time tomorrow. The parking pass says we need to arrive 4 hours before the launch.
I'll update this post sometime after the launch, but you'll likely see a tweet or two sooner, follow me @paulbraren if you're interested.
Tomorrow should be quite exciting. Or deeply disappointing, if the launch is scrubbed both Tuesday and Wednesday. No worries, I know a guy in southern Florida to visit instead, just in case the launches don't work out at all.
In case you're wondering something along the lines of "what about your other son"? Yeah, I thought of that, there's always next year, when the even bigger Orion Spacecraft is expected to roar skyward...
Image gallery captions: Falcon Heavy Demo Mission, December 28, 2017; Falcon Heavy Demo Mission - Payload, December 6, 2017; Falcon on Intelsat 35e Mission, July 5, 2017.
"There's a lot that could go wrong there," Musk said last year. "I encourage people to come down to the Cape to see the first Falcon Heavy mission; it's guaranteed to be exciting."
This article is undergoing rapid updates these first few days, as I collect information after a couple of days of personal travel to see a launch that happened to coincide with this Intel launch.
It's been just one month shy of 3 years since I first wrote about the Intel Xeon D here at TinkerTry, which promised to deliver what I had hoped for in my home lab for years: the ability to run virtualization workloads efficiently and quickly 24x7. That promise has been fulfilled, evident in the dozens of articles featuring this little processor that could easily run your home lab or even small business, using any one of a very wide variety of operating systems/hypervisors.
Last week, information about a Skylake-D based successor that's a tad more upscale (and power hungry) surfaced at a variety of sources, listed here.
Then finally, yesterday, a follow-on to last summer's big announcement arrived, with all the details on this promising new Xeon D-2100. Unlike with the recent Intel 760p launch, I wasn't briefed in advance, but I've already taken some time to collect all the details you need to know about this next generation of Xeon D, as I work to try to get my hands on one for testing.
Like I was in 2015, again I'm bullish about the prospects of this major product update for the home lab enthusiast. In my usual TinkerTry form, I'd like you to become a much more informed reader before you draw any conclusions yourself. Note that I've highlighted the key bits of information you're most likely to be interested in, at a variety of non-Intel sites too.
First, let's touch upon Intel's press releases, noting that Intel wasted no time boasting planned Spectre and Meltdown protection. Too soon?
Intel today introduced the new Intel® Xeon® D-2100 processor, a system-on-chip (SoC) processor architected to address the needs of edge applications and other data center or network applications constrained by space and power.
The Intel Xeon D-2100 processor extends the record-breaking performance and innovation of the Intel Xeon Scalable platform from the heart of the data center to the network edge and web tier, where network operators and cloud service providers face the need to continuously grow performance and capacity without increasing power consumption. ...
The Intel Xeon D-2100 processors include up to 18 “Skylake-server” generation Intel Xeon processor cores and integrated Intel® QuickAssist Technology with up to 100 Gbps of built-in cryptography, decryption and encryption acceleration. In addition to those data protection enhancements, this product will be supported by system software updates to protect customers from the security exploits referred to as “Spectre” and “Meltdown.” ...
With a range of 4 to 18 cores, up-to 512 GB of addressable memory, this system-on-a-chip (SoC) has an integrated platform controller hub (PCH), integrated high-speed I/O, up-to four integrated 10 Gigabit Intel Ethernet ports, and a thermal design point (TDP) of 60 watts to 110 watts.
Today, I’m thrilled to share that we have extended this capability to our Intel Xeon D processor line with the introduction of the Intel Xeon D-2100 processor family, which brings the ground-breaking Intel Xeon Scalable processor architecture to edge applications and other workloads that require power and performance density. ...
The new Intel Xeon D-2100 processor brings advanced intelligence to a lower-power system-on-a-chip (SoC) for edge environments as well as other applications with space and power constraints, including power-sensitive web tier compute and storage infrastructure. The processor’s SoC form factor is optimized for lower power consumption and smaller size with integrated, hardware-enhanced network, security and acceleration capabilities in a single package. ...
The Intel Xeon D-2100 processor offers up to 1.6x general compute performance, up to 2.9x network performance, and up to 2.8x storage performance as compared to the previous-generation Intel Xeon D-1500 processor. ...
Intel announced a major refresh of its Xeon D System on a Chip processors aimed at high density servers that bring the power of the datacenter as close to end user devices and sensors as possible to reduce TCO and application latency. The new Xeon D 2100-series SoCs are built on Intel’s 14nm process technology and feature the company’s new mesh architecture (gone are the days of the ring bus). According to Intel the new chips are squarely aimed at “edge computing” and offer up 2.9-times the network performance, 2.8-times the storage performance, and 1.6-times the compute performance of the previous generation Xeon D-1500 series. ...
Two years after Intel introduced the Xeon D family of low-power server platforms, the wave of ARM processors for the data center that those Broadwell system-on-chips were meant to stave off hasn't yet broken. ...
...
Intel's original Xeon D processors started out with only eight cores and improved to 16 cores in the 1500 series, but the new lineup expands up to 18 cores and 36 threads. ...
We have seen Intel diversifying its lineups in the past with the introduction of Core M and now we see the same thing with Xeon D, which is basically merging the high powered Xeon processors with the low power requirements of Atom SOCs.
Intel launches Xeon D series: up to 18 fully capable Skylake cores in SoC format ...
Let’s talk a bit about the interesting buzz words, specifically ‘Big’ cores. Even though the Xeon D platform consists of SoCs, they are not low performance, rather only low power. The cores are based on the Skylake architecture and will support VT-X/VT-d virtualization, RAS features and the entire TXT, AVX-512, TSX Instruction set. The chipset logic however will be incorporated on system and will make for a more efficient footprint than the customary two chip solution where the chipset logic lies on the motherboard. Since this is an SOC platform, the PCH is integrated. ...
...
Performance is better than initially expected, likely due to the relatively high all core turbo clock speeds. Power consumption is (significantly) higher than the Intel Xeon D-1500 and Atom C3000 series. When we look at higher core count parts, the Intel Xeon D-1587 as a 16 core example, dual-channel DDR4 can start to be a noticeable bottleneck in many workloads. With the Intel Xeon D-2100 series, one essentially gets the quad channel speed of the Intel Xeon E5 V4 generation or potentially slightly more on the DDR4-2666 SKUs. What that means is that this is the first embedded platform that can essentially match if not beat a single socket mainstream platform in both core performance and memory bandwidth. That is no small feat. ...
AnandTech
This article goes into a lot of detail about the various CPU SKUs. There's an amazing amount of information, and you'll want to read each page.
For certain groups of users, Intel’s Xeon D product line has been a boon in performance per watt metrics. The goal of offering a fully integrated enterprise-class chip, with additional IO features, with lots of cores and at low power, was a draw to many industries: storage, networking, communications, compute, and particularly for ‘Edge’ computing. We reviewed the first generation Xeon D-1500 series back in June 2015, and today Intel is launching the second generation, the Xeon D-2100 series. ...
Don't miss the entire Intel Xeon D-2100 Processor slide deck that Ian has shared at AnandTech's last page of the article here.
More Xeon D-2100 announcement articles in this Google search, including Hexus. See also KitGuru, where Damien Mason says:
Pricing has yet to be revealed, however it’s likely to still be too pricey for a home system given the specifications.
Finally, my thoughts. I had frankly hoped a 3 year wait would result in something smaller than the 14nm process of the original Xeon D, but that just isn't happening quite yet. Perhaps pressure from AMD will accelerate the shrink, which tends to result in even lower watt burn.
I'm also skeptical whether DDR4 prices will fall far enough for the 512GB memory maximum to ever be obtainable in a home lab within the Xeon D-2100's lifespan. Those reservations aside, it does appear that there's a significant speed boost here that will be welcomed by the most enthusiastic home labbers, especially folks like me who don't just occasionally use their servers for learning, but actually use them to handle 24x7 workloads like a datacenter, or even as a workstation.
Hits
Faster GHz
Less time to turbo
4x the memory of Xeon D-1500 (128GB -> 512GB)
2x the memory channels (2 -> 4)
Misses
Higher watts generally means louder fans, or larger chassis, or both
Larger CPU size may mean Mini ITX motherboard designs have no room for M.2 slots, example pictured here
Unsure
Not sure what this will do to the market for the 4 and 6 core Xeon D-1500 based systems that seem likely to continue as-is
The Xeon D-2100 generally seems to be targeted at higher TDPs and price points; let's hope these systems can idle at as few watts as their popular Xeon D-1500 predecessors did
Not sure whether this will significantly erode sales of the popular 4, 6, and 8 core Xeon D-1500 systems out there; too early to tell until system pricing is known
Big Plans For Little Systems
I can't know for sure whether systems based on these next generation CPUs will be a great choice for virtualization until I try them out in my 10G equipped home lab. Only time will tell for sure, along with a whole lot of TinkerTry'ing!
Meanwhile, note that I plan to analyze yesterday's Supermicro SuperServer-related announcement soon. Note that the X11SDV line-up curiously doesn't include a mini-tower, which should give you some insight into why I'm going to reserve further judgement until I kick the tires myself.
Supermicro SuperServers and Motherboards based on Xeon D-2100 are called X11SDV products here, with the new Supermicro SuperServer SYS-E300-9D likely to be of most interest to home lab enthusiasts who don't mind 40mm fan whine.