How to install Dentrix 24.1 onto a remote workstation

For years now, whenever we swapped out a desktop and needed Dentrix reinstalled, our staff would call Dentrix Support to have them do the installation manually. This is because, sometime 5-10 years ago, double-clicking "setup.exe" simply stopped working. It often complained about missing components, .NET runtime versions, or similar; I think Crystal Reports was a frequent culprit. This was a far cry from when we bought the practice >15 years ago, when I could install Dentrix without much issue by putting the CD(!) into the tray and running setup.

Today, I needed to get Dentrix set up on a remote computer to test some things. But I had no interest in calling Support to make this happen. I was determined, determined I say, to get this done myself.


Note that these instructions were written for Dentrix 24 with Windows 10 Pro (build 10.0.19045). I'm certain that they will break in the future. All of the steps below need to take place on the workstation.

  1. Navigate to \\[server]\DTXCommon\Installs\
  2. Copy the folder (or whatever is the latest one you see) to the workstation. I also copied the sibling files over for good measure; I'm not sure if that's required. So, if this is what my server contained in \\[server]\DTXCommon\Installs\ :
    How Dentrix Server's Installs folder looks
    then this is what my workstation contained in my working folder:
    How the workstation's Installs folder looks, with only a subset of the Dentrix folder copied
    WARNING: even with the reduced number of files copied, this still required 3.2GB on disk.
  3. Try to run setup.exe from the copied folder. It will probably fail with an error about Crystal Reports.
  4. So, try running the Crystal Reports installation first. Find it in \ISSetupPrerequisites\{CrystalReports-13-0-32bit-Prereq}\CRRuntime_32bit_13_0.msi (relative to the copied folder). For me, this failed with: "Error 1904. Module C:\Program Files (x86)\SAP BusinessObjects\Crystal Reports for .NET Framework 4.0\Common\SAP BusinessObjects Enterprise XI 4.0\win32_x86\crtslv.dll failed to register. HRESULT -2147010895. Contact your support personnel." If it works for you and you don't get this error, great; you can probably go back to step 3 and run the Dentrix setup.exe again.
  5. After digging around the Internet, I found an answer on Stack Overflow that gave me the hint I needed: reinstall the Microsoft Visual C++ 2015 Redistributable package. I found that on Microsoft's website. Note: be sure to download the 32-bit version, because that's apparently what Crystal Reports relies on here.
  6. When you try to install the Visual C++ 2015 Redistributable package, it may fail with an error if you already have one installed. I had to go to Control Panel > Programs and Features (Add/Remove Programs) and uninstall Microsoft Visual C++ 2017 Redistributable (x64) 14.12.25810 first. After I did that, I rebooted, then worked backwards from here:
    • Installed the Microsoft Visual C++ 2015 Redistributable package (32-bit version), per step 5 above
    • Installed Crystal Reports via the CRRuntime_32bit_13_0.msi file, per step 4 above
    • Installed Dentrix via setup.exe, per step 3 above
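For reference, the order that finally worked can be sketched as commands from an elevated Command Prompt on the workstation. The C:\DentrixInstall path and the vc_redist.x86.exe filename are assumptions from my setup; substitute wherever you copied the Installs folder and whatever the 32-bit 2015 redistributable you downloaded from Microsoft is named:

```shell
:: Assumed working folder: C:\DentrixInstall (wherever you copied the Installs folder to).

:: 1. Install the 32-bit Visual C++ 2015 Redistributable first
::    (if it errors out, uninstall the conflicting x64 2017 redistributable
::    via Programs and Features and reboot, then retry).
C:\DentrixInstall\vc_redist.x86.exe

:: 2. Install the 32-bit Crystal Reports runtime.
msiexec /i "C:\DentrixInstall\ISSetupPrerequisites\{CrystalReports-13-0-32bit-Prereq}\CRRuntime_32bit_13_0.msi"

:: 3. Finally, run the Dentrix installer itself.
C:\DentrixInstall\setup.exe
```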

Other notes

Using a Dentrix workstation over a Tailscale VPN is significantly slower than using Remote Desktop to the server or another workstation. There must be a ton of synchronous round-trip calls made within Dentrix, built on the (not invalid, at the time it was created!) assumption that your server and your workstations will all be connected to a high-speed intranet.

Large content sites and SEO

I’ve always been fascinated with large content sites, domain authority, and long-tail SEO. The idea that one can create a lot of content targeting a specific niche, get that content indexed, and reap benefits of that investment over years through monetizing long-tail SEO, always seemed like magic.

The biggest barrier to entry was always that first part: creating high-quality content that targets a specific niche.

Today, after many years of percolating on this idea, I launched an experiment: a glossary of dental terms. The term list and definitions were built over time as part of owning the web presence for Forever Dental, with some assistance from large language models to flesh out missing areas. The design and user interface are intentionally minimalistic, as I want this to be free and clear of ads and clutter. Hosting is intentionally simple: not quite a static site, but heavily cached and running on a simple service. The glossary concept is highly repeatable, too, so if this experiment works, it is scalable across niches.

And just like everything, it started very simply!

Relatedly, I found a fan page for Forever Dental. Turns out, people do love their dentist!

Why are release notes/change logs important?

I got involved in a Twitter/X conversation about whether release notes/change logs are important.

I can understand this sentiment from a consumer perspective. But after having spent 14 years delivering enterprise sales software, I believe release notes are undervalued at most software companies.

(Note: I’m using “release notes” and “change logs” interchangeably here.)

Release notes are important in enterprise software because of the admin in charge of the software you've sold. Eventually, someone important above that admin (think: a VP or higher, often not directly in their reporting chain) will ask, "WTF! This software was working and now it isn't. Admin, did the vendor change something?"

The admin then has to go figure it out. This is outside of their normal workday and set of responsibilities. Best case, they look at the change log, see that something did indeed change (and ideally for the better), and can copy/paste that back to the Important Person. Then they can go back to watching TikTok videos in between checking email.

The less good case for the admin is that they don’t find anything in the release notes. They then have to reach out to support/CSM and try to figure out what is going on. This takes time and adds to their work. They don’t like that.

If you can provide thorough, accurate release notes, this can increase the chance that the Admin hits the “best case” described above.

Some additional thoughts:

  • The more critical the software is to a revenue-generating workflow, the more closely the admins will watch the vendor's change logs/release notes
  • Many companies don't bother with detailed App Store change logs, because they publish those directly in their product, away from competitors' eyes. You wouldn't believe how much critical information we gleaned from competitors about their product capabilities and core focus simply by watching what they published to their public change logs
  • Around 20-30% of our client admins overall watched our change logs at Mediafly. In some industries, that number was 100%

“The Bitter Lesson” and open source LLMs

Today I learned of a 2019 paper by Rich Sutton called "The Bitter Lesson."

This paper postulates that, over the past 70 years, the biggest drivers of AI advancement were not special human-introduced nuances in what makes a model smart, but rather dramatic growth in computational resources driven by Moore's Law (exponentially falling computational cost).

What does this mean?

Major AI leaps are driven not by companies rolling in a ton of “special sauce” into their AI models, but rather because it becomes dramatically cheaper to throw more hardware at the problem.

This gives me hope that the future of LLMs won’t be beholden to companies like OpenAI, Anthropic, and the like. But rather, we’ll see open source models catch up to and possibly surpass OpenAI’s GPT for raw text-to-text generation.

I am seeing that some open models, such as Mistral-7B and Orca 2, claim to be on par with GPT-3.5, but the empirical evidence is mixed. (P.S. Hat tip to Anton Bacaj, who is a wealth of cutting-edge information around open models.)

Of course, there are other competitive areas where the private companies’ inherent advantages will allow them to dominate over open source models (marketing, enterprise features, APIs, wrapper support, stores, integrations, etc.). But at least the core offering won’t be wrapped up in a tight little, expensive, box.

Leaving Mediafly

Most of my writing these days is on LinkedIn and Twitter/X.

Cross-posted from LinkedIn:

Today marks the last day of my 14-year journey with Mediafly.

When I joined, I thought this would be a 2-4 year job. I joined as employee #9, and have had the good fortune of participating in growing this company to hundreds of employees. I've packed a whole career's worth of experiences into that time, including helping sell and service to the world's largest companies; building world-class Engineering and Product teams that rival some of the best I've ever worked with; completing and integrating 8 acquisitions; successfully guiding us through intense technology audits; and building scalable processes that will far outlast my tenure.

I met Carson, Mediafly’s founder and co-CEO, while at a previous startup where we did a joint project together. After a year of consulting for Mediafly, Carson and the team closed our first enterprise customer, RE/MAX, and I decided to join full-time. During these 14 years, Carson, your force of positive energy brought us through the good times and the bad.

I’m thrilled that Kelly Anderson is taking over as Chief Technology Officer. Kelly was the VP of Engineering for the largest of our engineering teams, and is fantastic with people and process. Kelly, Mediafly is so lucky to have you and for you to step into this role!

Carson, MaryJames, and the rest of the leadership team with whom I worked closely: you have a talented team at Mediafly. I’m so excited to watch where you all progress as I follow my path down this fork in the road. And to my friends I leave behind: I’m rooting for you all!

As for me, after taking a much needed break, I plan to dive back into entrepreneurship. I am exploring buying a small business, or starting one. Likely, but not necessarily, software. Stay tuned and stay in touch!

Dentrix services not starting on boot (2023 edition)

Last week, we experienced catastrophic failures with Dentrix. Every attempt to launch Office Manager, Appointment Book, Family File and Ledger, on the server and all workstations, crashed within seconds of launch.

Our system:

  • Dentrix
  • Windows Server 2019 Essentials
  • 15+ workstations

We were without Dentrix for 6 hours while Dentrix Support uninstalled and then reinstalled (!) Dentrix. That resolved it at the time, but it was not clear why it worked (and Support had no idea either).

A week later, Windows Update (KB5026362) downloaded on a Thursday, but did not yet install.

We restarted the server on the following Saturday and let the Windows Update complete. On Sunday, we attempted to launch Dentrix. Appointment Book and Office Manager would spin for a few seconds, then crash and shut down, on the server and on all workstations. Mild panic ensued. Interestingly, Document Center, Office Journal, and Timeclock still worked.

We attempted these things, none of which worked:

  • We re-enabled LLMNR
  • We uninstalled KB5026362

After lots of Google searching and deep thinking, we looked at Server Manager and noticed that not all services were being launched at startup (there was a red 5 next to Services, indicating that 5 services did not launch). One of those was DtxUpdaterSrv. It listed a status of "Stopped" but a startup type of "Automatic", which means it attempted to start but couldn't. On a whim, I started the service manually, and the broken applications immediately started working.

Digging deeper, I saw that these services, along with DentrixACEServer, had all attempted to start on system boot. None of them completed; every one of them had an Event Log entry showing "A timeout was reached (30000 milliseconds) while waiting for the … Service service to connect".

With some searching, I came across a Server Fault post on how to change the default service timeout from 30 seconds to 60 seconds. I applied the change and rebooted. The Dentrix services came up on reboot, so things may be better. Only time will tell, of course.
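For reference, the change boils down to a single registry value, ServicesPipeTimeout, which is the documented way to raise the Windows service startup timeout. A minimal sketch from an elevated Command Prompt (a reboot is required for it to take effect):

```shell
:: Raise the service startup timeout from the 30000 ms default to 60000 ms.
reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v ServicesPipeTimeout /t REG_DWORD /d 60000 /f

:: Verify the value was written:
reg query "HKLM\SYSTEM\CurrentControlSet\Control" /v ServicesPipeTimeout
```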

Getting a Dell OptiPlex back up and running with Microsoft Windows 7

This post chronicles the month-long adventure in getting one of our Dell OptiPlex 3020M (you know, the tiny one with no CD/DVD drive) desktops back in good working order.

First, the problem:

  • Boot times start taking extremely long to get to the Windows loading screen, and then it would spin endlessly
  • Safe mode boot didn’t help this process along

As we were still under warranty with Dell, I reached out to them. Here is what happened:

  • I emailed Dell Support. They replied very quickly, and after a couple of days of back-and-forth with their tools, we narrowed down the problem.
  • I went through Dell Diagnostics with them (which is roughly baked into the BIOS). It turned out the hard drive was bad.
  • They sent me a replacement hard drive via FedEx Overnight.
  • I swapped it out and started the desktop. After booting to Windows and getting through a few prompts, I got stuck in a never-ending Dell configuration loop that is (what I call) the gray-on-black modal dialog of hell. The loop ran for 24 hours with no progress or hard drive activity, and I couldn't do anything (mouse and keyboard were disabled).

Dell OptiPlex 3020M stuck in gray on black loop of hell

  • Dell Support directed me to download an .iso file that contains a preconfigured Windows 7 for my system. I didn't know such a thing existed! Now I want to go get one for every machine we maintain…
  • I download it, burn the .iso to USB with Rufus on the Surface Pro we have here (remember, no DVD ROM drive on the desktop I’m trying to recover), and try to get it to run.
  • I am able to boot to it and get through a few prompts. But very soon, I encounter a new error: “A required CD/DVD drive device driver is missing.”

"A required CD DVD driver is missing" dialog

  • I tried a number of things suggested by others on the Internet, including: switching the USB to another port midway through the installation, downloading Dell’s drivers and having Windows attempt to find the correct driver, buying an external CD/DVD writer, trying to burn the .iso to DVD (doesn’t work), and trying to just install Windows 10 from a later Dell machine onto this machine (also doesn’t work). Many hours were wasted.
  • I reach back out to Dell and complain about how much time I’ve wasted. After complaining, Dell Support sent someone onsite to replace the hard drive (again). After this was done, I was back at the neverending gray-on-black loop of hell.
  • I opened a new support ticket with Dell, this time around OS issues. I was sent new instructions for creating the Windows 7 recovery USB key, which largely map to this article. The article was very interesting, because:
    • It recommended using the diskpart Windows command-line tool, vs. Rufus, which I used previously, and
    • It didn't quite work as-is; there was one significant discrepancy I discovered from those directions. I have a 64GB USB 3.0 key, and I had to create a 16GB FAT32 partition (FAT32 is known to boot reliably; I couldn't trust NTFS or exFAT as boot partitions) instead of simply creating a full-disk 64GB partition. It took me a few tries to figure that out.
  • Success! This got me through the two key issues I had in the past (gray-on-black and missing device driver)
  • But then I encountered another problem: an error that said "Windows cannot be installed to this disk. The selected disk has an MBR partition table. On EFI systems, Windows can only be installed to GPT disks." Thankfully, this link indicated that I could safely delete all of the existing HDD partitions and create a new, big one. I did so, and the installation continued.
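For reference, here is a sketch of the diskpart sequence that matches what worked for me. The disk number is an assumption; run list disk first and confirm by size that you have selected the USB key, because clean erases it entirely:

```shell
diskpart
rem The following commands are typed at the DISKPART> prompt.
list disk
select disk 1
rem "clean" wipes the selected disk - triple-check the disk number first!
clean
rem Create a 16GB FAT32 partition instead of using the full 64GB key;
rem FAT32 boots reliably where NTFS/exFAT did not for me.
create partition primary size=16384
format fs=fat32 quick
active
assign
exit
```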

I wish the process wasn’t as difficult as it was. But at least I’m thankful that it works, and that my interactions with Dell Support helped me down the correct path, eventually. They were patient during the process, and had no problem sending their onsite tech to try to assist, which is the best I can hope for with business-level support.

A dead simple way to improve your writing

Why does anyone think using the phrase “dead simple” is remotely acceptable?


DishwasherNow is the dead simple way to have a brand new dishwasher delivered to your door from iPhone.

Is “dead” an adjective for “simple”, which has morphed into a noun but has also ceased functioning in this world?

Or is this statement implying that the service is so easy to use that the walking dead can even order their dishwasher from the iPhone they are carrying in their lifeless hands?

Why can’t we just use “easy” or “simple”?

DishwasherNow is the easy way to have a brand new dishwasher delivered to your door.

See how much better that reads? You don’t have to be a rockstar engineer or ninja growth hacker to understand what this phrase means.

EDIT: I’m on the losing end of this battle (from Google Books Ngram Viewer):

Dead Simple usage in Google Ngram Viewer

A practiced method to solve hard problems

Here at Mediafly, we are faced with hard engineering, product, sales, and marketing problems every day. Each of us takes a different approach to solving these problems. Some of us like to create pros/cons lists. Others dig deep into data and use it to help answer every question. No one approach is the "right" approach for everyone.

I recently had a conversation with our Engineering Manager[1], and he described his approach to solving hard problems.

  • Take an attempt to solve the problem, but don’t stress about it if you can’t figure out the solution yet.
  • Review the key aspects of the problem right before you go to sleep. This involves working on the problem from multiple angles. Meaning, if you tried one solution and it doesn’t work, try another. If it did work but it’s ugly, just note the key parts of why it works and why it’s ugly. You need to get intimate with the problem and be really familiar with it from all angles.
  • Now, let the problem go. Sleep on it, take a shower, go for a run. Do something to take your mind off of it entirely.
  • When you least expect it, an insight will find you. When the solution does find you, immediately explain it to as many people as you can. Don’t worry about whether they are an expert in the subject domain. Just start explaining. The mere process of explaining acts as a forcing mechanism to refine the solution further. It also serves as a filter; if what you thought was initially a great idea turns out to not be, attempting to explain may allow you to filter out the seemingly-good idea much more quickly, and get back to solving the problem another way.

I’ve watched him apply this method of problem solving over the years, and it truly is a thing of beauty. He will often take 2, 3, 4 attempts at particularly thorny engineering problems. He will sometimes throw away the code he wrote for an attempt and go back to the drawing board. He will restart this process from scratch as necessary. But, regardless, he almost always comes up with a solution that solves the problem elegantly. And watching his success has led me to begin adopting this approach for problems of all sorts that I face as well.

[1] Special thanks to @laimis for being the inspiration for this method and this post! And he credits A Technique for Producing Ideas as inspiration for this process.

Dear Enterprise Software Product Managers: Consider Scale


Discussions about design and UX for software these days so often focus on onboarding. Scott Belsky, founder of Behance, even suggests, "A good discipline to help you stay simple is to focus at least 50% of your effort on onboarding and the first-time-user-experience." Providing a great onboarding flow is the quickest way for your users to find value in your new feature. After all, the sooner a new user is able to find value, the stickier the product will be for them, and the less churn you'll experience, right?

Makes sense, and it gives a great starting point for how to think about a new feature. For example, from Mediafly’s point of view:

  • A newly signed-up user will start with 0 content, 0 salespeople, 0 users. Envisioning that scenario is very straightforward.
  • A small business might be using SalesKit by Mediafly to manage 50 pieces of sales collateral among 20 salespeople, distributed to 500 prospects. There is some complexity with this level of information, but for most features you might design, it’s probably pretty straightforward.

Often, however, this is where the design of a feature stops.

When you're working with enterprise organizations, however, with large user counts, diverse business processes, very large data sets, or high volume in whatever key metrics you track, you also need to consider the user experience at that scale.

Example 1.

From the beginning, we designed Mediafly’s content management system (CMS), Airship, to start as simply as possible. From day 1, users could drag in content from their laptop, with reasonable defaults, and immediately get value. As our customers adopted our CMS and scaled out their use across diverse business processes and groups, we continued to discover issues that we could never have foreseen at launch.

Recently, a large customer (a major CPG enterprise) began uploading merchandising layout diagrams, hierarchically organized by region, for each of tens of thousands of their customers' stores to our system. This dramatically increased two key metrics: their volume of content (tens of thousands of new documents) and frequency of updates (thousands of changes every week).

Automating the upload and management process on their end is a no-go, as there is no common backend system where these documents reside. And, asking people to update these layout diagrams with our Airship CMS would require 20-40 hours a week of navigating, clicking, and dragging/dropping.

To address this, we conceived of a new upload model in which an administrator of one of their regions' merchandising layout diagrams could organize the new content hierarchically on their laptop, zip up the folder, and upload it into our system. We would then interpret the contents and update the content automatically in the correct location. This solution solves both the content-volume and update-frequency problems at once. And it can be reused for other customers who encounter similar challenges.
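To make that concrete, here is a minimal sketch of the core idea (not Mediafly's actual implementation; the function name and folder layout are illustrative), in which the folder structure inside an uploaded zip is interpreted as the content hierarchy:

```python
import zipfile
from pathlib import PurePosixPath

def hierarchy_from_zip(zip_file):
    """Map each folder path inside the zip to the files it contains,
    mirroring the uploader's on-disk organization as a content hierarchy."""
    hierarchy = {}  # folder path -> list of file names
    with zipfile.ZipFile(zip_file) as zf:
        for info in zf.infolist():
            if info.is_dir():
                continue  # folders are implied by the file paths
            path = PurePosixPath(info.filename)
            folder = str(path.parent)  # "." for files at the zip root
            hierarchy.setdefault(folder, []).append(path.name)
    return hierarchy
```

An ingest job could then walk this mapping, creating or updating each document in the matching location, so thousands of drag-and-drop operations collapse into a single upload.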

We spend as much time solving user experience challenges of scale as we do thinking about how to build compelling new features whose adoption will begin at very low volumes.

Example 2.

We recently released the ability for our content administrators to create special links to view content, which has been a hit with our Media/Entertainment customers. The link can have a password, be tied to a user account, or be public. Creating a link is straightforward, and initial reception and usage of this feature started off as very positive.

But, after a few months, we began to hear feedback from content admins about challenges they were facing as their use cases for links expanded. Volume increased dramatically in some of these use cases. We now see that some admins have to create as many as 200 individual links for individual users in a single day, usually around television pilots or key screening seasons.

After diving deeper into some of these workflows, we created a process diagram to show what the typical process is to create a link. The content admin:

  • Switches to their email client and composes a new email
  • Pastes in a template that they use for their emails
  • Switches to Airship
  • Finds content in the hierarchy
  • Navigates to the Links tab
  • Taps Create Link
  • Configures the link
  • Saves the link
  • Clicks the option to copy the link to the clipboard
  • Switches back to their email client
  • Pastes the link in
  • Sends the email

Whoa, that’s a lot of steps. Imagine having to go through this process 200 times in one day! For some admins, it requires the entire day.

We have since simplified the process to create a large number of links, and continue to improve upon the feature to solve the problem of high volume even further.

How these experiences have changed us

As we design new features, we now include an extra question to answer: What will this look like at high volume? At the design phase, we strive to have a hypothesis on how we would address problems of scale, and to see what we can do to simplify the initial UX even further should scale arrive faster than we can roll out a redesign.

However, just like most things we do from a product and engineering perspective, we operate iteratively. We certainly won’t prematurely optimize for scale. But by simply adding this question to our checklist of considerations, we’ve opened up the ability to solve the seemingly inevitable high volume issues that will arise.

(This post was cross-posted from the Mediafly Blog.)