As this is my first purchase of a new server with a Supermicro motherboard, I saw the IPMI has an “Activate License” web page via the URL: https:///cgi/url_redirect.cgi?url_name=mainmenu
I will probably not activate the license, but are there any pluses in doing so?
The Supermicro servers already in my homelab were purchased on eBay (and those models are no longer available for purchase).
I know the Supermicro IPMI does more out of the box than, say, Dell iDRAC, which does require a license.
Looking forward to anyone’s insights.
Supermicro has a page on the OOB license which details most of what it unlocks.
I typically pay for those licenses; otherwise you can’t emit syslog to Loki, Elasticsearch, or whatever you’re using for capturing log events.
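For anyone wiring that up on the receiving end: Loki usually takes syslog in through Promtail’s syslog receiver. A minimal sketch of the relevant scrape config (the listen port and label names here are my own assumptions, and depending on what the BMC emits you may also need to tweak the protocol/format options):

```yaml
scrape_configs:
  - job_name: bmc_syslog
    syslog:
      # Point the BMC's remote-syslog target at this address (assumed port).
      listen_address: 0.0.0.0:1514
      labels:
        job: bmc_syslog
    relabel_configs:
      # Keep the sending hostname as a queryable label.
      - source_labels: ['__syslog_message_hostname']
        target_label: host
```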
Thank you for the information and the link. If it can put things into syslog, then I will probably purchase one.
I will review the page later today, but I am assuming this is a per-machine key.
That’s correct. When purchasing you need to supply details about the specific mainboard, and the license that is issued can only be used for that exact board. One license is required per Supermicro system.
Also worth noting Supermicro’s eStore is really slow and slightly unintuitive for these keys.
How it works:
1. Sign up, add the key to your cart, and pay.
2. Wait at least an hour, and sometimes 6-8 hours, before they process the order and deposit what is essentially a “license activation token” in your account.
3. Go back in, input the board details, hit activate, and accept all the warnings that if you input the wrong details you’re SOL and won’t get a refund.
4. Finally, you get the unlock code to put into your system.
Sounds just as bad as US insurance companies when you try to inquire about a spouse’s claim status.
Fortunately, in comparison to, say, Dell’s iDRAC or HPE iLO, it’s very easy and MUCH MUCH MUCH cheaper, by at least 10-20x…
Dell and HPE don’t let you use the OOBM Remote Console without a proper level of licensing. Though, of every OOBM I’ve used, Dell is still my favorite and the easiest to work with.
The Dell server I referenced was purchased as a refurbished unit, with hardware warranties. I also got the iDRAC included.
Therefore my experience was not as bad, because I never had to engage or work directly with Dell. I only had to work directly with the vendor (Server Monkey).
As I used to be an HP employee, I know how their service experience can be less than great.
I’ve certainly had issues with their support in the past, specifically with the half dozen or so Moonshot chassis we deployed. Really neat system though: tons of blades/cartridges, a dedicated chassis manager via the iLO, and a redundant pair of integrated switches (unfortunately, these would go split-brain once in a while).
Heck, even for the money they are great systems, and often under the price of, say, Dell. My gripes are still the firmware/BIOS support if you don’t have an active support contract, as well as constant disconnects with the iLO. I really dislike their rails as well and have bent quite a few trying to unrack them.
But, I have 3 DL20 Gen9s running happily at home in an ESXi 8 cluster. Idling at 35W, almost dead quiet, all after running a full 24/7 life in production until VMware moved them off the HCL.
My previous incarnations of a homelab had old Apple hardware and a Dell CS40-SC – a non-public-release server that was used at Facebook (found it on eBay).
When I started with the Dell R420, I knew that I was going to be locked into a piece of hardware that I may not be able to change much. As I got the unit from Server Monkey, the vendor/seller has been good; I have already had a PSU replaced. I purchased the iDRAC license to allow me to connect remotely.
As I got my next servers, I have found the Supermicros are pretty good. I have two used ones from eBay, a micro-tower that I got at Micro Center, and the HL-15.
As virtualization is off topic (from the thread)…
When it comes to virtualization, I don’t think I will be a VMware fan anymore. I started using VMware when it offered the first version of VMware Workstation via the “hobbyist” license. I have a very old ESXi server on a Dell mid-tower. Since VMware was sold to EMC/Dell and now Broadcom, I have watched the products go from very polished to less robust.
I started looking at QEMU simply because it was free. I did try XCP-ng, but I did not like the experience overall. When I read that Proxmox was built on QEMU, I decided to use Proxmox. I can now connect to the virtual guests using SPICE. It runs a lot smoother than what I have experienced with other virtualization clients or “remote desktop” clients.
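For anyone curious how the SPICE side hangs together: the usual flow is to ask the Proxmox API for one-time connection settings and hand them to remote-viewer as a .vv file. A rough sketch of assembling such a file, assuming virt-viewer’s [virt-viewer] INI format; every value below is a placeholder, not real API output:

```python
import configparser
import io

# Placeholder settings; in practice these come back from Proxmox's
# spiceproxy API call for the VM (all values here are made up).
settings = {
    "type": "spice",
    "proxy": "http://pve.example.lan:3128",  # assumed proxy host:port
    "host": "pvespiceproxy:placeholder-host",
    "password": "one-time-ticket-goes-here",
    "delete-this-file": "1",  # ask remote-viewer to clean up after itself
}

cfg = configparser.ConfigParser()
cfg["virt-viewer"] = settings

buf = io.StringIO()
cfg.write(buf)
vv_text = buf.getvalue()
print(vv_text)
# Save as connection.vv and open with: remote-viewer connection.vv
```

The ticket is single-use and short-lived, which is why the client is expected to regenerate the file each time rather than reuse it.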
Proxmox is certainly on my to-do list. I find myself very comfortable in a Debian environment. However, work just shelled out for an NSX class, as we are a heavy VMware shop. I’m going on 2 months with a vSAN support case with no end in sight, so I certainly feel you. But it currently pays the bills and is probably the single technology stack I have the most experience and knowledge in, so it will always have a special place in my heart.
I find myself these days trying to be as hardware, software, and OS agnostic as I can. Each system has something I like, dislike, and can learn from. I’m a simple man. I like technology, all of it!
I would be happy to help if you have Proxmox questions.
The only extra features that I have enabled within Proxmox are Ceph and the Software-Defined Network (SDN).
With my day job, there are a number of projects working with cloud services (Azure, AWS, and Linode), as I am in software development. My homelab has been my true experimental lab.
I’ll keep that in mind, thank you!! I love that part of the homelab community!
SDN seems really interesting.
SDN is an experimental feature, but I would like to get it working. There are not many good videos or write-ups on this topic.
I was hoping to create the SDN to see how a VM would run on the physical network compared to the virtual networking layers within the virtualization host.
At work we would create a bunch of virtual routers to simulate large real-world networks running at different latencies, to make sure our software would perform as designed.
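That idea scales down nicely for comparing the SDN path against bare metal. As a toy illustration (plain Python, nothing Proxmox-specific; the 50 ms delay is an arbitrary assumption), here’s a loopback echo that injects one-way latency the way a simulated router would, and measures the resulting round trip:

```python
import socket
import threading
import time

DELAY = 0.05  # simulated one-way link latency in seconds (arbitrary)

def echo_once(server_sock):
    """Accept one connection, delay like a slow link, echo the payload back."""
    conn, _ = server_sock.accept()
    data = conn.recv(16)
    time.sleep(DELAY)  # the injected "router" latency
    conn.sendall(data)
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))  # pick any free port
server.listen(1)
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

client = socket.socket()
client.connect(server.getsockname())
start = time.monotonic()
client.sendall(b"ping")
client.recv(16)
rtt = time.monotonic() - start
client.close()
server.close()

print(f"measured RTT with injected latency: {rtt * 1000:.1f} ms")
```

Running the same measurement with the delay set to zero gives a baseline, so the difference isolates what the extra networking layer costs.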