Jun 22 20

Apple pushes MacOS 11 to Apple Silicon

by Jason
Apple Reboots Hardware, Again

Apple has announced they will be leaving Intel behind for powering their core line of computers. I lived through two of these major transitions – once when Apple moved from Motorola 68000 series processors to IBM PowerPC processors, and again from PowerPC to Intel x86 processors. Now Apple has announced they’re scaling up their A-series processors to power Mac hardware. They’re also retiring the Mac OS X branding – the new OS will be branded MacOS 11, Big Sur.

This isn’t unexpected. Apple has been making some pretty solid processors for their mobile devices. My iPad Pro (2018) model benchmarks faster than many current Intel i7 based laptops and tablets.

What does this mean for most users? A transition period while developers recompile their apps for Apple’s silicon. Developer tools are already being released. Apple also plans a transition support period that allows non-native apps to run on the new hardware through a translation layer called Rosetta 2. If Apple’s history of platform transitions plays out again, Intel support will be gone in about five years. Apple hasn’t provided a definite date – only that it will support Intel for “years to come.”

What does it mean for me? It depends. Many of the applications I use are based on open source projects – they aren’t always actively maintained by full-time developers, so we’ll see. Microsoft has already begun releasing supported versions of Office. Also, having an Intel Mac meant I could run Windows in a virtual machine or in a separate partition via Boot Camp.

Apple’s big sales pitch for this transition is that customers want to run iOS and iPadOS apps on their laptops or desktops. This really isn’t true for most people. Menu bars belong to the desktop; touch UIs belong to mobile. Transitioning between the two can be annoying – much like the less-than-perfect implementation of the trackpad on the iPad Pro Magic Keyboard. It works, but app UIs don’t always “get it” and don’t function perfectly – like two-finger scrolling or tapping the trackpad for a click.

If I can’t run a Windows VM or Boot Camp, I may finally be looking at DaaS (Desktop as a Service) or running a Windows desktop VM at home over a VPN connection. I don’t like either option, as each requires an internet connection at all times – so I’m limited. We will see whether Apple supports x86 virtualization on its silicon and how it performs. This is really the biggest question for those of us who use Macs in a business environment.

Jun 11 20

Home Lab 2020

by Jason

Part of my job is to maintain my technical certifications and learn as much as I can about the products my clients use and our company sells. For some of these, we have a full-size lab that engineers and sales architects can log into to get hands-on time with really expensive gear. Most of the products are software, so we can usually get our hands on them to try out – but we need somewhere to run them. That’s where a home lab comes in: typically retired servers or purpose-built computers that run enterprise software, which you can build up, break, tear down, and do over again.

Not my home lab, but the cabling was done so perfectly I had to show it here.

At my previous employer, we had a full stack of gear (switches, servers, software, and storage) that we had full access to use as a lab mirroring our production equipment. That was ideal and worked great for testing patches and updates before rolling them out into the enterprise. It gave us confidence that a routine patch or update wasn’t going to blow up, and it gave us the opportunity to practice major upgrades and try advanced features without putting the business at risk… it was really an ideal environment for learning.

Now I’m on my own’ish. I need to get my hands on some gear, but with COVID-19 putting a halt on projects (and bonuses), I need to do it cheaper than I’d planned. The perfect home lab for people in my line of work will run about $4,000–5,000 and includes three very efficient micro servers with 10Gb networking, a network switch to connect them, and a small NAS/SAN device for shared storage. The hardware is typically good for 5–8 years of usable life, so the investment in a lab of this scale is worth it. But that’s not possible today.

I shrunk my use cases and pared back my expectations to the bare minimum, with a plan to expand in the future without wasting this smaller investment. I considered building my own, but then I started looking at refurbished hardware from OEMs I work with a lot.

HPE MicroServer Gen10

I landed an HPE MicroServer Gen10 with a very small footprint. It’s a little black cube with four large form factor drive bays inside and two PCIe expansion slots, and it supports up to 32GB of RAM; it came with 8GB and I added another 8GB for 16GB of usable memory for now. It’s powered by a (not so blazing fast) AMD Opteron X3421 CPU, which provides four cores running at 2.2GHz with boost to 3.6GHz. It has two 1Gb network ports and dual DisplayPort outputs for video.

One of the performance issues I’m running into is that both the network controllers and disk controllers are very CPU dependent. Any time I’m moving data in or out of the box – or reading from or writing to the disks – the CPU is taxed. This is easily overcome with some eBay parts. I acquired a dual 1Gbps HPE network card with offload capabilities to take the network tasks off the CPU for $15. For another $20, I have an HPE P222 RAID controller card (with battery backup) to take the disk tasks off the CPU.

HPE P222 RAID Card with 512MB Cache and Battery

I placed two 4TB SATA drives in bays 1 and 2, along with a 240GB SSD in an adapter in bay 3. With ESXi installed on a USB thumb drive mounted inside the server, I was able to configure a single-node vSAN on this host using the SSD as cache and the two SATA disks as capacity. I’ll be tearing this down when the RAID controller arrives so I can reduce the CPU load even further.
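For reference, a single-node vSAN bootstrap can be sketched from the ESXi shell roughly like this – the device IDs are placeholders, and these are generic esxcli steps rather than a transcript of my exact session:

```shell
# Sketch of bootstrapping single-node vSAN from the ESXi shell.
# The naa.* identifiers below are placeholders; list your own devices first.
esxcli storage core device list          # find the SSD and HDD device IDs
esxcli vsan cluster new                  # create a one-node vSAN cluster
esxcli vsan storage add -s naa.SSD_ID \
    -d naa.HDD1_ID -d naa.HDD2_ID        # SSD as cache tier, SATA disks as capacity
esxcli vsan cluster get                  # verify the host joined its own cluster
```

These commands only make sense on an ESXi host, so treat them as a reference for the shape of the procedure, not something to paste blindly.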

It happily runs ESXi 6.0, 6.5, 6.7, and 7.0 without issue. It also doesn’t do a bad job running Windows 10 or Server 2019. I’m just getting into this server now, but all in all I’m happy with the $400 investment (+$15 for the NIC and +$20 for the RAID card).

Jun 3 20

vSphere 6.7 Pardoned Until Oct 2022

by Jason
vSphere 7 logo refreshed for new major release.

VMware has decided not to end general support for vSphere 6.7 in November 2021 as originally planned – extending it until October 2022. We’ll get another eleven months to plan and migrate off vSphere 6.7.

For most small and mid-size businesses, this won’t be much of a challenge. The upgrade path to 7.x is well established. Most of the people I’ve talked to are ready to go to 7 but are letting others deal with x.0 release issues while waiting for a major update.

The big relief is for cloud providers, and especially those customers who jumped on VCF and are limited to 6.7 for now. Having a short runway to plan an upgrade of a complicated solution brings some anxiety to those in charge of the platform.


Aug 16 19

Chrome Crashing on MacOS

by Jason

I ran into an annoying issue with Chrome on a generic installation of MacOS: it crashes right after launching. After installing it from Google, I simply launched the app, as one does. The Chrome logo appears in the dock, then yeets itself off somewhere. Sometimes it may generate a crash report, but most of the time it didn’t – until I tried wiping some of the preference folders out to reinstall per Google’s recommendation.

MacOS may generate a crash report with the error: EXC_CORPSE_NOTIFY

Execute corpse notify?

Found an interesting fix that may save someone the frustration of wading through dozens of hits only to never find a solution – like I did for the last hour. 

Chrome, why you no launch?

What I found was that Google Chrome is trying to create a folder in the following path:

~/Library/Application Support/Google/

But the folder security settings are preventing my apps from doing anything in the folder. I’m not sure why, but it’s fecking annoying.


Fix this travesty:

Simply add permissions on the Google folder so your user account can read and write into it, letting Chrome make the subfolders it needs to do its job.

  1. From the desktop, press Shift-Command-G to open the Go to Folder window
  2. Enter the following path: ~/Library/Application Support/
  3. Locate the Google folder
  4. Right click on the Google folder and select Get Info
  5. At the bottom right corner, click the Lock icon and authenticate to unlock it.
  6. Click the + button in the lower left corner to add a user (YOU!) to this list, give yourself read/write access
  7. Close the window and launch Chrome again.
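If you prefer the Terminal, the Finder steps above can be sketched with standard commands. This assumes the default profile location and that your account can take ownership of the folder:

```shell
# Terminal sketch of the Finder fix: make sure your account owns the
# Google folder and can write into it.
DIR="$HOME/Library/Application Support/Google"
mkdir -p "$DIR"              # create the folder if Chrome never managed to
chown "$(id -un)" "$DIR"     # take ownership under your account
chmod u+rwx "$DIR"           # grant yourself read/write/traverse
ls -ld "$DIR"                # verify: your user should be the owner with rwx
```

If chown fails because the folder is owned by another account, prefix it with sudo.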


You’ll notice new folders created in the Google folder and Chrome doesn’t yeet itself any longer.



Sep 7 18

External Log Collection for UCS Fabric Interconnects

by Jason

I’ve been troubleshooting some pretty annoying bugs in our Cisco UCS environments. Most are easily solved by collecting some techsupport files, opening a TAC case, and working through the glitch or config issue. However, one has me really stumped and frustrated.

When we collect these techsupport files – more specifically, techsupport files directed at a specific chassis – an IO Module will randomly disconnect, reboot, or reconfigure, dropping half or all of the connections to the fabric (we have one uplink per IOM, two per chassis).

As we continue to troubleshoot with Cisco TAC, we mostly find out later that the techsupport file we generate after the issue doesn’t contain the information they need – the logs have rolled over… or been overwritten due to activity in the domain. We gather syslogs religiously, but the necessary information isn’t sent out via syslog when it happens. Feature request?

After pressing one of the TAC engineers on my fifth case this year on this issue, he clued me in on a feature for exporting logs to an external server. Click away if you already know this – I certainly didn’t.

Here’s a quick and dirty on how to do it with a generic Ubuntu server.

I’m going to write this soup to nuts for a novice who has never set up a Linux server. By no means will this be hardened and secured for public visibility – just a place for your FIs to dump their logs. Chime in with a comment if you have improvements or suggestions.

  1. Deploy Ubuntu Server on a VM with a few gigs of space
  2. Update your server once it’s online with these two commands:
    1. sudo apt-get update
    2. sudo apt-get upgrade
  3. Reboot your server once upgrades are complete
  4. Create a dedicated user
    • sudo adduser ucsloguser
    • provide a password; the other prompts don’t matter
  5. Let’s assume you’ll be dumping logs to the user’s home directory, so the path will be /home/ucsloguser
    • For a permanent home, you could add a second disk and mount it under a dedicated path or get your neckbeard on and use LVM to create a logical volume you can add disks and grow later. For now we’ll keep it simple, stupid.
  6. Using your favorite SSH client (Terminal, PuTTY, XTerm, etc) connect to your new server using the ucsloguser account to verify you can SSH to your Ubuntu server.
  7. Ok, Linux server is ready to go.
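The LVM route mentioned in step 5 can be sketched like this – /dev/sdb is a placeholder for whatever second disk you add, and this is scaffolding under those assumptions, not a hardened recipe:

```shell
# Hypothetical LVM layout for a growable log volume (/dev/sdb is a placeholder).
sudo pvcreate /dev/sdb                        # mark the new disk for LVM use
sudo vgcreate ucslogs /dev/sdb                # volume group you can add disks to later
sudo lvcreate -l 100%FREE -n logs ucslogs     # one logical volume using all the space
sudo mkfs.ext4 /dev/ucslogs/logs              # format it
sudo mount /dev/ucslogs/logs /home/ucsloguser # mount it over the drop directory
# Later, to grow: vgextend ucslogs /dev/sdc && lvextend -r -l +100%FREE /dev/ucslogs/logs
```

Add the mount to /etc/fstab if you want it to survive a reboot.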

Configuring the Fabric Interconnects to dump logs onto your Ubuntu server

  1. SSH to your FI – doesn’t matter which, both will respect the monitoring change. 
  2. Run the following commands:
    1. scope monitoring
    2. scope sysdebug
    3. scope log-export-policy
  3. Now we set the log export policy
    1. set hostname [Linux server IP or FQDN if your DNS is updated]
    2. set user ucsloguser
    3. set passwd [press enter, then enter the password of the ucsloguser]
    4. set admin-state yes
    5. set proto scp
    6. set path /home/ucsloguser/
    7. commit-buffer
  4. That’s it. Now log into your Linux server and see if log file .tgz bundles are showing up in your home directory.
Configuring log export policy in UCS
Logs arriving in external server!


Use the command ls -lth to list files newest to oldest with human-readable sizes.

Use the command df -h to show the space consumed.

Use an SCP utility like WinSCP to retrieve files from your log server so you can send them along to Cisco TAC.
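One bit of housekeeping worth adding: prune old bundles so the drop directory doesn’t fill the disk. The 30-day retention window below is my assumption, not a Cisco recommendation, and the demo runs against a scratch directory – point it at /home/ucsloguser on the real server.

```shell
# Sketch: prune techsupport bundles older than 30 days.
# Demo uses a scratch directory; substitute /home/ucsloguser on the log server.
LOGDIR="${TMPDIR:-/tmp}/ucs-log-demo"
mkdir -p "$LOGDIR"
touch "$LOGDIR/ucs-old.tgz"
touch -d '40 days ago' "$LOGDIR/ucs-old.tgz"   # simulate a stale bundle (GNU touch)
touch "$LOGDIR/ucs-new.tgz"                    # a fresh bundle
find "$LOGDIR" -name '*.tgz' -mtime +30 -delete
ls "$LOGDIR"                                   # only ucs-new.tgz should remain
```

Drop the find line into a daily cron job on the log server if you want it automated.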
