A Medication / Vitamin / Other Supplement Management Platform?

I feel like this may already exist, and perhaps I have even read about such a system somewhere, but I’m unsure how to ask Google (or Bing, etc.) about it in a meaningful way. I’m hoping someone is aware of what I am describing and whether it exists; if not, maybe we should build it.

It would be great if consumers (and medical professionals) could utilize a simple web/phone-based system to manage the different medications, vitamins, and other supplements they may be taking and monitor for unexpected interactions.

For example, an individual might take two antidepressants simultaneously. If both operate by increasing serotonin availability there is a risk of serotonin syndrome. It would be great if one could enter the two antidepressants into a system and the system would make them aware of the potential for the syndrome.

Let’s begin layering some additional medications onto our imaginary patient: they might be using a muscle relaxer due to a back injury AND may have been exposed to poison and thus are on an NSAID. What are the possible side effects of these combinations?

Now what if the individual wants to take a multivitamin? Will it increase or decrease the potency of any of these drugs?

And what about specific vitamins? Or minerals? Or herbs? etc.?

OTC medications for a cold, a cough, a headache?

Such a tool would be most useful to individuals who have some chronic condition(s); for those who only ever need to take one medication at a time, this system would be overkill.

Ideally such a system would also allow one to surface other interactions – e.g. specific foods that should be avoided.
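For the technically inclined, the core lookup is simple to sketch. Below is a toy Python sketch with a hand-rolled interaction table; the drug names, pairings, and the `check_regimen` helper are all illustrative (and not medical data), since a real system would query a curated interaction database:

```python
# Toy sketch of the core interaction check. The table below is
# illustrative only; a real system would query a curated drug database.
from itertools import combinations

# Known pairwise interactions, keyed by an alphabetically sorted pair.
INTERACTIONS = {
    ("sertraline", "tramadol"): "Increased risk of serotonin syndrome",
}

def check_regimen(items):
    """Return any known interactions among the supplied medications."""
    warnings = []
    for pair in combinations(sorted(i.lower() for i in items), 2):
        if pair in INTERACTIONS:
            warnings.append((pair, INTERACTIONS[pair]))
    return warnings

# Every pair in the regimen is checked, so adding a new item checks it
# against everything already being taken.
print(check_regimen(["Tramadol", "Sertraline", "Ibuprofen"]))
```

The hard part, of course, isn’t the lookup; it’s building and maintaining the interaction data itself.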

For the Hobbyist DIY’er: Simple Ways / Tools to Change Dimensions on Projects

I’ve been wanting to make some stackable wooden crates and have been searching for an option that would be a good fit for me:

  • Needs to be friendly for my child, so I can have him participate using the hammer, drill, etc.
  • Ideally, on the same topic, it shouldn’t include too much work that I have to do alone (e.g., fine details); I don’t want him to get bored waiting or feel left out.
  • Looking for something relatively inexpensive, so no need for fancy ornamentation or decoration.
  • Simple enough for my admittedly limited DIY’er skills.

I think I’ve found a good, simple DIY wood crate project that fits the bill. We actually went out and purchased a number of the items we’d need to build it (a few tools, screws, wood glue, etc.). (Un)fortunately our local hardware store had run out of the wood we needed and there weren’t any reasonably inexpensive alternatives available.

My one “concern” with this project is the size of the crates. The plans use boards that are 12″ wide, which limits the depth and/or width of the finished crates. This is a fairly good size, but I’d like to go a little bigger, maybe a bit deeper.

So that left me wondering: what good methods or tools are there for changing the dimensions of a project? Sure, I can do some scratch math by hand, but:

  1. I don’t do this frequently enough to be able to do it without a second thought.
  2. I like to check my work, preferably with a tool I know always gives the correct answer.
  3. This works for the larger items like the wood, but what about screws? For example, if I double the width of the crate do I need to use longer screws, or should the ones specified in the initial plans still be adequate?

So I’m curious: how do you handle these measurements when resizing a project? Are there tools (software, worksheets) that make this simpler to accomplish?
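For the software-minded, the scratch math can be captured in a few lines. This is a minimal Python sketch; the part names, sizes, and the `scale_cut_list` helper are invented for illustration:

```python
# Minimal sketch: scale a project's cut list by a single factor.
# The part names and sizes here are made up for illustration.
def scale_cut_list(cut_list, factor):
    """Scale the length/width of each part. Thickness stays fixed,
    since board stock thickness doesn't change when a crate gets wider."""
    return {
        part: {dim: (size * factor if dim in ("length", "width") else size)
               for dim, size in dims.items()}
        for part, dims in cut_list.items()
    }

crate = {"side": {"length": 18.0, "width": 12.0, "thickness": 0.75}}
# Scale the crate up by 50%: length becomes 27.0, width 18.0,
# thickness stays 0.75.
print(scale_cut_list(crate, 1.5))
```

As for screws: a common rule of thumb is that fastener length is driven by the thickness of the boards being joined, not by the overall width of the piece, so scaling the width alone usually doesn’t demand longer screws (though I’d defer to more experienced woodworkers here).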

A (Simple) Electronics Question: Unexpected Behavior

I recently bought a Snap Circuit kit for my son that we have been using to build various projects offered in the provided manual. We started with a simple switched light like so:

Then we added a motor with a fan like so:

As one might expect, the light grew dimmer, as the light was now sharing the electricity with the fan.

Then Asher took the fan off the motor and something unexpected happened: the light turned on momentarily (when the switch was turned on) and then went out (added detail: I neglected to mention initially that the motor keeps rotating at a high speed; it doesn’t stop even though the light goes out). Like so:

It is this last result that has us confused. Shouldn’t the motor be consuming less electricity since it doesn’t have to overcome the air resistance caused by the fan?

I’m a tech guy, but not an electrical guy, any ideas?

Looking for Recommendations on Windows Application Virtualization Software

I’m looking for recommendations on Windows Application Virtualization software. I’ve looked for options over the years but have never found the perfect (or even good enough) solution for my needs.

Windows Application Virtualization software means a lot of different things to different people, so let’s start by establishing exactly what I’m looking for…

Runs Locally

The virtualized application should run on the local computer without the need for a separate server.

From my perspective this excludes Microsoft’s App-V, Cameyo, and Turbo which all seem to require a separate server.

Isolates at the Application Level

In other words, I’m not looking for a full OS virtualization solution; it is only an individual application that needs to be virtualized.

This rules out Microsoft’s Hyper-V, Oracle’s VirtualBox, Docker, etc.

Persists the Data and Allows Interaction with the Host File System

Windows Sandbox, while not virtualizing at the application level, is an attractive option due to how light the VM is. I could almost be swung to using it with multiple applications, BUT it doesn’t persist applications (which is great for its intended use case), nor does it provide access to the local file system.

In my case I’m not working with untrusted applications. It’s trusted applications that I simply don’t want to become enmeshed with the OS, hopefully delaying the inevitable need to perform a clean install of the base OS (and potentially expediting the setup process once the OS is installed).


Offers a Simple Packaging Process

Ideally the packaging process should be fairly simple. A UI is nice but not necessary. Avoiding having to set up a separate virtual machine for packaging purposes is ideal.


Is Reasonably Priced

I’m not looking to use this in an enterprise environment but on my own systems (though these are used for professional purposes). Something with a high price tag won’t do.

What Am I Trying to Accomplish?

When the average Windows application installs itself it inserts files and makes other modifications in numerous locations: this can include Program Files and Program Files (x86), as well as ProgramData and Users\username, while (seemingly) randomly choosing among Users\username\AppData\Local, LocalLow, or Roaming.

Don’t forget the numerous records in the Windows Registry and potentially the installation of various dependencies (e.g., redistributables).

What I’m after is something akin to the virtualization Microsoft’s MSIX performs, but without needing the software vendor to provide an MSIX package.

Some Strange (but Logical?) Behavior of GitHub

Let’s say there is a repository on GitHub called someweirdname/MyAwesomeRepo. I want to fork this repo, so I do. Now I have davidshq/MyAwesomeRepo.

I don’t get a chance to work on it for a few days, and GitHub tells me that in the interim 100 commits have been made to the upstream repo. So I create a pull request in my own repo (davidshq/MyAwesomeRepo) to pull in all the changes. I approve and merge the changes.

My (erroneous) expectation at this point is that davidshq/MyAwesomeRepo will now be exactly the same as someweirdname/MyAwesomeRepo, but it isn’t. Instead I’m one commit ahead (the merge of the pull request I just created).

Okay, slightly annoying but no big deal. But if I now make a change to my repo and attempt to create a pull request to the upstream repo, GitHub adds in my merge commit as part of the pull request. Well, ain’t that annoying.

Now I know GitHub has an article on syncing forks, I’ve used it successfully in the past. But this time I didn’t.

I’m also sure there are multitudinous answers to this question floating around on the internet but I’m not quite sure how to succinctly query for them. What exactly is the problem I’m having and will Google understand it? Unlikely.

So I appeal to my fellow humans who have these incredibly powerful brains to assist me in finding the answer to my question(s):

  1. Why does creating a pull request of changes from the upstream repo via GitHub make the fork one commit ahead?
  2. Why does this seem not to happen when one performs the command-line method offered in GitHub’s syncing-forks documentation?
  3. What should one do once one has this aberrant commit (which, granted, is empty, but one still shouldn’t be pushing an empty commit)?
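For reference, my understanding is that the command-line method from GitHub’s docs avoids the extra commit because it fast-forwards rather than merges. A sketch, assuming the fork is cloned locally and its default branch is main:

```shell
# Sync a local clone of the fork with the parent repo.
git remote add upstream https://github.com/someweirdname/MyAwesomeRepo.git
git fetch upstream
git checkout main
# --ff-only fast-forwards: the branch pointer simply moves to the
# existing upstream commit, so no new (merge) commit is created.
git merge --ff-only upstream/main
git push origin main
```

Since a fast-forward just moves the branch pointer onto commits that already exist upstream, no merge commit is created, which (I believe) is exactly why the CLI route doesn’t leave the fork one commit ahead.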

Why Does Yarn 2 Work This Way?

One of the larger reasons using Node modules can be a nightmare is the amount of time wasted installing/uninstalling/reinstalling modules on a per-project basis using npm.

Facebook’s Yarn 2 uses a different methodology for installing modules that should provide significantly faster module management, so I figured I’d give it a try.

The install instructions say:

Installing Yarn 2.x globally is discouraged as we’re moving to a per-project install strategy.

Fine, this is the way a lot of development software does things these days. It continues:

We advise you to keep Yarn 1.x (Classic) as your global binary by installing it via the instructions you can find here.

Wait a second, what? We are installing a legacy version of the software to manage our current version? If we are going to do this, why not just use Yarn 2.x globally?

Unfortunately there is no explanation (nor link to an explanation) and I was not able to quickly surface (via a Google search) why this should be.

I assume that:

  1. Having Yarn 2.x installed globally and on a per-project basis might cause problems (or simply not work at all).
  2. Eventually Yarn 2.x will be installable on a per-project basis without needing a global package manager.

Can anyone shed some light on why exactly this is the recommended route?
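For what it’s worth, here is a sketch of how the per-project install works in practice, as I understand it (treat the exact file names as approximate):

```shell
# With Yarn 1.x (Classic) as the global binary, inside a project:
yarn set version berry      # downloads the Yarn 2 release into .yarn/releases/
# This also writes a .yarnrc.yml along the lines of:
#   yarnPath: .yarn/releases/yarn-berry.cjs
# From here on, the global 1.x binary sees yarnPath and transparently
# hands every `yarn` invocation off to the project-local 2.x release.
yarn --version              # now reports 2.x inside this project
```

If that’s accurate, it would explain the recommendation: the global 1.x binary acts as a thin launcher, and different projects can pin different Yarn 2.x releases without conflicting, which lines up with assumption #1 above.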

An Employment Opportunity

I’m Dave Mackey and I’ve been working in IT for the last 18 years. I’ve been coding since childhood.

Most of those 18 years I’ve worked primarily on the systems side with coding as a secondary or tertiary responsibility. But coding is really what I love doing and am now looking to pivot into a development job – ideally remote (but, for the right job/company…).

I was IT Director for Liquid Church (a large, multi-campus nonprofit in Northern NJ) for the past several years and oversaw IT systems across six campuses. I could prattle on about this and other experiences, but I’ll leave that to my LinkedIn profile.

I am in the process of transitioning out of Liquid. This involves onboarding the new IT Director and working on some web development projects for Liquid. I’m looking for my next employer and wondering if it should be you? If the answer is yes, shoot me an email at dave @ davemackey.net. Thanks!

What I’m Looking For

  • Remote First. For the right job opportunity I’d be willing to work on-site, and am open to relocation.
  • Reasonable, livable wage.
  • Health insurance, PTO.
  • Positive Culture.
  • Work/Life Balance.

Of course I’d welcome other perks, but those are kind of the essentials. Other perks that I find particularly attractive (unsorted): flexible work schedule, equipment and/or education allowances, regular (at least annual) full-team retreats, dental/vision insurance, 401(k) matching, life insurance, FSA/HSA, equity/stock options.

Why You Might Be Looking For Me

  • Coding isn’t just a job, it’s part of who I am. I started with BASIC on the Commodore 64 and Apple II series, moved to QBasic on Windows PCs, and later worked with Visual Basic, ASP, ASP.NET, VB.NET, C#, Python, PHP, and JS.
  • I solve problems for breakfast.
  • I love (and am adept at) learning technologies / languages / systems.
  • I have diverse experience both inside and outside of IT, which allows me to provide productive perspectives. I’ve worked for an internet startup, in higher education, and for non-profits as well as SMBs. In the distant past I’ve worked as a commercial fisherman, a custodian, and a farmhand, among other roles.
  • I work hard, am loyal, and have a passion for doing things well.
  • I enjoy helping/teaching others and am willing to accept help.
  • I am passionate about making a better world.

What Might Be Helpful To Know About How I Code

  • I believe in self-documenting code, but I’m also a fan of documenting code (is there a term for that?). There is Knuth’s Literate Programming, but that is a bit beyond what I’m thinking.
  • I favor expressive code over code brevity. I want to understand what I wrote 10 months later.
  • I’m an advocate of using Google, StackOverflow, et al. in pursuit of answers to coding dilemmas, but when I find an answer I want to understand it, not just copy and paste it.

If you’ve reached the end of this post let me provide my email once again: dave @ davemackey.net. I hope to hear from you!

Can You Run Oracle’s VirtualBox on a Windows System with Microsoft’s Hyper-V enabled?

Every year or two I try running VirtualBox on a system with Hyper-V enabled, and it goes poorly. Recently there was some hope that VirtualBox would run alongside Hyper-V… and I actually had this working for a split second. But when I updated to the latest Windows build it broke again.

Figured I’d share what I’ve learned here. Right now the information seems to be scattered around the web and it can take a while to follow the threads and figure out what the current status really is.

If you aren’t familiar with why VirtualBox (and other hypervisors) can’t run on a system with Hyper-V enabled check out this SuperUser Q&A.

The confusion is largely driven by VirtualBox’s own documentation which states: “Oracle VM VirtualBox can be used on a Windows host where Hyper-V is running.”

This did work on Windows 10 1809 but then was broken by Windows 10 1903 (the next release). For more details see this post over on the VirtualBox forums.

Currently there are only two ways to run VirtualBox and Hyper-V on the same machine, and neither involves running them simultaneously. One is to add/remove Hyper-V every time one wants to use VirtualBox; the other is to edit one’s boot records using BCDEdit, which requires a restart every time you make the switch. There is a free utility available that automates this process, but it still involves a reboot (I haven’t tried the utility yet).
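For anyone wanting the BCDEdit route spelled out, the toggle generally looks like this (run from an elevated PowerShell or Command Prompt; each change takes effect only after a reboot):

```shell
# Disable the Hyper-V hypervisor so VirtualBox gets the virtualization hardware:
bcdedit /set hypervisorlaunchtype off
# ...reboot, use VirtualBox, and when you need Hyper-V / WSL 2 again:
bcdedit /set hypervisorlaunchtype auto
# ...and reboot once more.
```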

If you are wondering why one would even want to run the two simultaneously there are at least two good reasons I know of: (1) WSL 2 requires Hyper-V and (2) Docker’s future on Windows involves utilizing WSL 2 and thus Hyper-V.

Note: I personally don’t use VMware these days (I have in the past), so this article focuses on VirtualBox and Hyper-V’s interactions, but the problem holds true for VMware as well.

If anyone hears of any new developments regarding this topic, I’d love to hear about them!

Automatically Adding Tests (e.g., Unit, Integration) to a Legacy Code Base

Many of us have written code that lacks tests to ensure code correctness and protect against human error. Most of us have worked with legacy code bases that have lacked these tests. Oftentimes the code has been written in a way that makes such testing quite difficult to add after the fact.

I’m curious whether there are currently tools available that automatically add tests to legacy code bases. I understand that in many cases there may not be enough context in the code for the automated creation of fool-proof tests. For example:

a = 10
b = 23
while z < 100:
  t = (b + a) * 2

The variable names tell us nothing about what the values represent: are they ages? Monetary values? Temperatures? And z appears out of nowhere, potentially a value which is mutated numerous times before being used by this code. And so on…

While we might not be able to provide mathematically provable tests for this code, could we not provide “good enough” tests in many instances? Example: if I wanted to ensure I wasn’t accidentally altering my code’s behavior, I could create a test with a limited (but sufficient) number of inputs to create a baseline.

In the example above this might include running the code with some series of z values like 1, -10, 1.22, 5.88, 100, 100,000,000, 99, and 99.887492421. The output would then serve as a baseline. Let’s say using these values it came out looking something like: 10, 100, 10, 50, 10,000,000,000, 990, 990, etc.

As I made modifications to the code the test could be rerun and the test would fail if inputting the same value for z resulted in altered outputs. For instance, if inputting -10 resulted in -100 instead of the expected baseline of 100.

This still leaves a human making the final determination on whether the failure is a correct result but at least makes the developer aware that a change has occurred.
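The baseline approach described above is usually called a characterization (or golden-master) test. A minimal Python sketch, with mystery() standing in for some legacy function (the function and its behavior are invented for illustration):

```python
# Sketch of a characterization (golden-master) test: record the current
# outputs for a set of inputs, then fail if a later change alters them.
import json

def mystery(z):
    # Stand-in for the legacy code under test (made-up behavior).
    return abs(z) * 10

BASELINE_FILE = "baseline.json"
INPUTS = [1, -10, 1.22, 100, 99]

def record_baseline():
    """Run once against the untouched legacy code to pin down current behavior."""
    with open(BASELINE_FILE, "w") as f:
        json.dump([mystery(z) for z in INPUTS], f)

def check_against_baseline():
    """Return the inputs whose output no longer matches the recorded baseline."""
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    return [z for z, old in zip(INPUTS, baseline) if mystery(z) != old]

record_baseline()
print(check_against_baseline())  # an empty list means behavior is unchanged
```

Tools in this vein do exist in some ecosystems (snapshot and approval-testing libraries, for instance), though I’d hedge on how automatic any of them are for arbitrary legacy code.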

Beyond these more difficult cases, one will have code segments which are quite easy to test, as the available inputs / expected outputs are known quantities. For example, this should be easier:

int myAge = 10;
int magicAge = 23;
int randomFactor = generateRandomNumber();
while (randomFactor < 100) {
  int theNewAge = (myAge + magicAge) * 2;
}

In this case we have clearly limited input/output values. We know that the potential range of numbers is limited to those supported by the integer data type; assuming generateRandomNumber() is fairly self-contained, we also should have a good idea of what values will be returned by the function, and in the worst case we still know that a valid value cannot fall outside the range of an integer.

Okay, a bit of a long-winded question. To sum up: “Is there an automated way to add tests to legacy code bases?” with the caveat that, “Some tests need not be mathematically provable, only pragmatically useful.”

A Few Hints For Getting Portainer Running on Windows

Portainer is a GUI meant to make managing Docker easier. By default Docker is managed using mainly CLI tools, and while this is all grand, it can be a bit much for those who just need quick access for simple purposes.

I recently attempted to set up Portainer on my local Windows 10 Enterprise Edition PC using the basic installation instructions. I opened up PowerShell and ran:

docker volume create portainer_data
docker run -d -p 8000:8000 -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer

And of course that didn’t work. Instead I was shown this error message:

C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: pull access denied to portainer/portainer, repository does not exist or may require 'docker login': denied: requested access to the resource is denied. See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.

I authenticated to Docker without issue using docker login, as recommended in the above error. I reran the initial command but received the same error. I decided to try pulling down the portainer image to my local machine without any attempt to run it:

docker pull portainer/portainer

Shouldn’t work, right? It did! Now I reran the initial command, but Docker was still telling me it was “Unable to find image ‘portainer/portainer:latest’ locally”. It was at this point that I noticed the Note in the installation documentation:

Note: the -v /var/run/docker.sock:/var/run/docker.sock option can be used in Linux environments only.

Oops. Okay, so how do I get Portainer running on Windows? Unfortunately the next instructions are for setting up Portainer on a Windows Docker host running Windows Containers, but I’m trying to run Linux containers on a Windows Docker host. Still, I get a hint:

$ docker volume create portainer_data 
$ docker run -d -p 8000:8000 -p 9000:9000 --name portainer --restart always -v \\.\pipe\docker_engine:\\.\pipe\docker_engine -v portainer_data:C:\data portainer/portainer 

Note the volume path in the above. Here we see that portainer_data has been specified using a Windows path rather than the *nix path of our original command. So I swap this Windows path into my original command (forgetting, btw, to change out the -v /var/run/docker.sock…):

docker run -d -p 8000:8000 -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v C:\ProgramData\Portainer:/data portainer/portainer

Now I’m getting another error…

C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: Mount denied: The source path "C:/ProgramData/Portainer" doesn't exist and is not known to Docker.

After some Googling I stumbled upon Issue #2575 on GitHub for the portainer repository, and then Manuel Patrone’s question on the Docker Forums. Based on these, my understanding is that the issue is twofold:

  1. When we use a *nix path when setting up the data location for Portainer, it becomes a local path within the Docker VM on Windows (we theoretically fixed this by changing the path to a Windows path above).
  2. For some reason Docker/Portainer attempts to mount the data at C:\ProgramData\Portainer, where no such folder exists, and Windows isn’t too happy to create one.

So I make one last change to my command:

docker run -d -p 9000:9000 --name portainer --restart always -v /var/run/docker.sock:/var/run/docker.sock -v C:\Portainer:/data portainer/portainer

And it works beautifully. I’m able to launch Portainer, set up my admin password, and begin navigating around the UI. You’ll note in the above that I set the path as C:\Portainer instead of inside C:\ProgramData. This path could be anywhere, but I figured setting it outside of ProgramData might save me some pain in the future (or not).

You may also note that my command is still technically incorrect. According to the Portainer documentation I should have replaced /var/run/docker.sock:/var/run/docker.sock with \\.\pipe\docker_engine:\\.\pipe\docker_engine – since the former only runs on Linux according to the Portainer documentation.

Why is this working? I don’t know. Maybe I’ll go back and rerun it later using the correct path, but it is worth noting that the pipe path used is only available in Windows 1803+, so if you have something before that you’ll need to (1) upgrade (I would), (2) use the Linux-style command as I did above, or (3) find some other command that works pre-1803.

I hope this rambling journey helps others who may find themselves running into the same issue.