Don’t consent to searches

If you ever want a damned good reason to never consent to a police search of your phone or other electronic device, take this case out of Oregon recently decided by the Ninth Circuit. First a few facts.

Tyler Smith is a deputy with the Grant County Sheriff’s Department in Oregon. Haley Olson is his girlfriend, though they were keeping their relationship under wraps. Olson drove into Idaho and was arrested for marijuana possession. While in custody, she signed a form waiving her Fourth Amendment rights and granting the Idaho State Police permission to search her phone, and they cloned its internal storage.

Let’s sidebar for a moment.

I’ve said before that your consent is the government’s absolute defense to the Fourth Amendment when it comes to a search without a warrant. There are very few exceptions to the warrant requirement, none of which would’ve applied here.

And sitting at the intersection of the Fourth and Fifth Amendments is an exception called the “foregone conclusion” doctrine. In short, absent your consent, law enforcement must be able to articulate what crime they are investigating, what evidence of that crime they expect to find on your phone, and what evidence – testimony or something tangible – gives them reason to believe that evidence exists. I covered this when discussing whether the government can compel you to unlock your phone. (Short answer: yes, but they have to satisfy the foregone conclusion doctrine first.)

But again, that’s absent your consent. In the above scenario with Miss Olson, she consented to the search, so the Idaho State Police imaged her phone. Again, do not consent to a search. Let the police hold your phone while they try to get a warrant. Consenting to a search also means consenting to whatever they find being used against you in a court of law.

But consenting to that search definitely does NOT mean she consented to… everything else that allegedly happened with what they found.

Unfortunately Miss Olson wasn’t able to get satisfaction out of the Courts. The Ninth Circuit ruled that, while there was clearly a violation of the Constitution – no one beyond the Idaho State Police had any reason to be in possession of the phone’s storage clone – the police officers and prosecution officials who participated are protected by qualified immunity.

So, again, another reason to never consent to a search.

The above case is Olson v. County of Grant, Oregon.

AOC did nothing wrong

Two things to say up front before getting into Rep. Ocasio-Cortez’s recent letter to the Attorney General. First, it is not a crime to inform people of their rights under the Constitution. And second, illegal aliens do have rights under the Constitution.

On the second point, one thing so many people seem to forget is the Federal government is indiscriminately restrained by the Bill of Rights and the rest of the Constitution. As I’ve pointed out on this blog before, there is nothing in the Constitution limiting the protections of rights only to citizens or legal permanent residents.

So let’s get into the details.

Representative Alexandria Ocasio-Cortez (NY-14) sent a letter to the Attorney General of the United States requesting clarification on whether the Department of Justice is pursuing a criminal investigation against her. This stems from a webinar she held on February 12 called “Know Your Rights”:

I’ve mentioned an organization on this blog called “Flex Your Rights” and their two videos informing you of your rights under the Bill of Rights when it comes to interacting with law enforcement.

Now what Ms Ocasio-Cortez released – both a pamphlet and the above webinar – is no different from the above videos with one exception: it explicitly calls out ICE and is targeted at helping illegal immigrants and migrants. Something many have called “aiding and abetting”.

And in response to the webinar, Tom Homan – formerly the Executive Associate Director of Enforcement and Removal Operations (ERO) and now the “Border Czar” – threatened an investigation for obstructing law enforcement, saying on Fox News: “I’m working with the Department of Justice and finding out. Where is that line that they cross? So maybe AOC is going to be in trouble now.”

But, again, the Representative did nothing wrong here.

It is not illegal to inform someone of their rights. Doing so is not “practicing law without a license”, something I’ve seen asserted countless times. Nor is it obstructing law enforcement. If I walk by someone who is in handcuffs while the police are processing them and inform them of their Fifth and Sixth Amendment rights, those officers cannot then turn around and arrest me. They absolutely can tell me to go away. But they can’t arrest me for merely informing someone of their rights, even if the person I’m informing is under arrest and in police custody!

I could make up my own pamphlet, walk into the inner cities, talk to gang members and illegal immigrants, distribute the pamphlet, and hold conversations with people informing them of their rights. And absolutely nothing in the law can stop me from doing that.

People knowing their rights impedes law enforcement. And many conservatives treat that as a bad thing. I’ve pointed out several times on this blog cases where police have done some… shady things to circumvent the Constitution, such as the cases of Timothy Bass and Jimmie Bowen. I’ve even defended the Boston Marathon Bomber’s right to remain silent.

Again the Constitution indiscriminately restrains the government. The Bill of Rights is supposed to impede law enforcement. And the more informed everyone is of their rights, the better. And contrary to a very popular belief on the right, illegal aliens are still protected by the Constitution. That is why there is a “due process” they are entitled to per the Fifth Amendment before they are deported.

Again, Ms Ocasio-Cortez did not violate the law by hosting a webinar informing people of their rights. She is absolutely protected by the First Amendment here.

Claudia Tenney seeks to commandeer the States

Now that the text of the bill is finally available, let’s go through this.

Claudia Tenney (R-NY-24) introduced HR.373 as part of the 119th Congress, called simply the SAGA Act, or “Second Amendment Guarantee Act”. It had previously been introduced in the 115th Congress by Chris Collins, also from New York. The meat of the bill seeks to amend 18 USC § 927. Currently that statute reads:

No provision of this chapter shall be construed as indicating an intent on the part of the Congress to occupy the field in which such provision operates to the exclusion of the law of any State on the same subject matter, unless there is a direct and positive conflict between such provision and the law of the State so that the two cannot be reconciled or consistently stand together.

So that’s a bit of legalese, isn’t it?

In short, what this means is no part of Title 18, Chapter 44 – where you’ll find that section – shall be interpreted as precluding States from having their own firearm laws except where Federal law and the laws of a State conflict in a “direct and positive” manner. “Direct and positive” basically means that a State is trying to impose looser restrictions than what Federal law requires. For example, if Federal law had a magazine limit set at 10 rounds, but a State had a magazine restriction of 20 rounds, the latter is not enforceable and the State would instead be required under Federal law to enforce the 10-round limit. This is known as preemption.

Claudia Tenney wants to change §927 to this:

(a) Except as provided in subsection (b), no provision of this chapter shall be construed as indicating an intent on the part of the Congress to occupy the field in which such provision operates to the exclusion of the law of any State on the same subject matter, unless there is a direct and positive conflict between such provision and the law of the State so that the two cannot be reconciled or consistently stand together.

(b) (1) A State or a political subdivision of a State may not impose any regulation, prohibition, or registration or licensing requirement with respect to the design, manufacture, importation, sale, transfer, possession, or marking of a rifle or shotgun that has moved in, or any such conduct that affects, interstate or foreign commerce, that is more restrictive, or impose any penalty, tax, fee, or charge with respect to such a rifle or shotgun or such conduct, in an amount greater, than is provided under Federal law. To the extent that a law of a State or political subdivision of a State, whether enacted before, on, or after the date of the enactment of this subsection, violates the preceding sentence, the law shall have no force or effect. For purposes of this subsection, the term ‘rifle or shotgun’ includes any part of a rifle or shotgun, any detachable magazine or ammunition feeding device, and any type of pistol grip or stock design.

(2) In an action brought for damages or relief from a violation of paragraph (1), the court shall award the prevailing plaintiff a reasonable attorney’s fee in addition to any other damages or relief awarded.

This statute sounds somewhat similar to another that was overturned by the Supreme Court.

In 1992 at the tail end of GHWB’s sole term as President, Congress passed and Bush signed into law the Professional and Amateur Sports Protection Act (PASPA) of 1992, which created Title 28, Chapter 178 of the United States Code. Under that Chapter is §3702, which says:

It shall be unlawful for—

(1) a governmental entity to sponsor, operate, advertise, promote, license, or authorize by law or compact, or

(2) a person to sponsor, operate, advertise, or promote, pursuant to the law or compact of a governmental entity, a lottery, sweepstakes, or other betting, gambling, or wagering scheme based, directly or indirectly (through the use of geographical references or otherwise), on one or more competitive games in which amateur or professional athletes participate, or are intended to participate, or on one or more performances of such athletes in such games.

The law not only was a nationwide ban on sports betting, it prohibited States from legalizing it. In 2011, New Jersey voters approved legalizing sports wagering, and the State enacted a law in direct conflict with PASPA. Other States joined in on the legal challenge. And the Supreme Court would rule in Murphy v. NCAA, 584 US 453 (2018), that PASPA is unconstitutional.

The PASPA provision at issue here—prohibiting state authorization of sports gambling—violates the anticommandeering rule. That provision unequivocally dictates what a state legislature may and may not do. And this is true under either our interpretation or that advocated by respondents and the United States. In either event, state legislatures are put under the direct control of Congress. It is as if federal officers were installed in state legislative chambers and were armed with the authority to stop legislators from voting on any offending proposals. A more direct affront to state sovereignty is not easy to imagine.

Well, Claudia Tenney thought of one. She seeks to rewrite §927 and take it from preemption to commandeering.

In short, commandeering when talking about jurisprudence is when Congress enacts a law or the Executive Branch enacts a regulation requiring that States enact or refrain from enacting laws or regulations that include specified provisions. The case in which the anticommandeering doctrine was established is, rather ironically, New York v United States, 505 US 144 (1992). And in establishing the concept of commandeering a State government, the Supreme Court also established the distinction between an incentive and an attempt to commandeer.

The Federal government can provide incentives to States to enact certain laws – e.g., helmet laws and setting the drinking age to 21 – but cannot directly require they pass or refrain from passing legislation or regulation meeting specified requirements or including specified provisions.

In short, except where the Constitution directly allows it, Congress cannot tell the States what they can and cannot do.

Because the States are also sovereigns. And the Federal government is also sovereign, but only to the extent the States have allowed via the Constitution. This is known as the dual sovereignty doctrine, which is derived from Federalist 39. First,

Each State, in ratifying the Constitution, is considered as a sovereign body, independent of all others, and only to be bound by its own voluntary act.

And,

Among communities united for particular purposes, it is vested partly in the general and partly in the municipal legislatures. In the former case, all local authorities are subordinate to the supreme; and may be controlled, directed, or abolished by it at pleasure. In the latter, the local or municipal authorities form distinct and independent portions of the supremacy, no more subject, within their respective spheres, to the general authority, than the general authority is subject to them, within its own sphere. In this relation, then, the proposed government cannot be deemed a NATIONAL one; since its jurisdiction extends to certain enumerated objects only, and leaves to the several States a residuary and inviolable sovereignty over all other objects.

And that dual sovereignty means, again, that Congress largely cannot tell the States what they can and cannot do. Yet Tenney is explicitly doing that in the language for her bill by saying that State laws that run afoul of the language of her bill “shall have no force or effect”.

That is commandeering, since she is basically trying to tell the States what laws they can and cannot enforce. That absolutely runs afoul of the Constitution, and sets a very, very dangerous precedent as well.

My own Linux challenge

Plenty of tech YouTube channels have taken a plunge into daily-driving Linux. And I’d actually been wanting to do so for the longest time. The holdup hasn’t been any specific requirements. I need Windows for some things, so dual-booting was part of the intent.

Up front, I’ll say that if you want to dual-boot Windows and Linux: get one drive for each, install Windows first and get it completely updated, set this setting in the registry, then install Linux.

Anyway… The holdup has actually been the distro.

I’ve been using Linux in virtual machines for… years using VirtualBox on Windows – and briefly when using VMware and ProxMox on a server. And that’s a great way to try out new distros and compartmentalize projects. For example, I created a Linux Mint virtual machine with VS Code for working on the theme I mentioned in this article. Everything was self-contained, separate from my Windows environment. And with bridged networking, I could still test the theme in my Windows browser and on my phone.

So before going further, let’s address the elephant in the room: Windows 11.

Why do I still need Windows 11?

Two things: games and photography.

On the first, while there have been significant leaps in Linux gaming, courtesy of Valve’s work with Proton, Steam isn’t the only service where I have games. I also have games through Epic, Ubisoft Connect, EA Origin, Bethesda, Battle.net, GOG, and the Rockstar Social Club. And getting games from those clients working on Linux can be… interesting, to say the least.

But with the light-years leap Valve made for Linux gaming with Proton and Steam, I would not be surprised if Epic is actively developing a Linux client if, for no other reason, to get their client on the Steam Deck and other SteamOS-compatible devices. Windows-based handhelds can already run… most any game you throw at them – just watch the many reviews Craft Computing has made. But it is definitely in Epic’s interests to get their client on SteamOS. Same with Blizzard and Battle.net. It’s also in Microsoft’s interest to write a Linux driver for the Xbox controllers so we don’t have to rely on reverse-engineered drivers from GitHub. (The PS5 controller apparently works out of the box.)

However (comma)… until the issues with anti-cheat in online multiplayer games are handled in a viable fashion, gaming on Linux won’t be able to completely catch up to Windows.

Overall gaming is just easier on Windows. Period.

I also use WeMod, which absolutely does not work on Linux and likely never will simply due to what it is and how it works.

Yes I use cheats and trainers in offline games. If that bothers you, and you feel you the need to say anything negative about it, kindly fuck off. I’m approaching my mid-40s. I don’t have enough time on my calendar to spend 100 hours trying to finish a game when I’ve got a backlog a mile long and a trainer or cheat will allow me to see it all and do it all in a fraction of that time.

As an example, even with a trainer, it still took me over 30 hours to 100+% Hollow Knight. And I’m considering going achievement hunting on that, which will probably take another 20+ hours with a trainer. And yes, I’m very much looking forward to Silksong.

On photography…

I use the Adobe suite. Photoshop (Ps) and Lightroom (Lr) specifically. Plain and simple, there is currently no viable alternative to either in the open source space. Not at least without making significant sacrifices to my workflows.

“But GIMP!!!”

Shut the fuck up!

GIMP absolutely is very functional, very capable for photo editing and “manipulation”. I do use it on occasion, though not nearly as much since adding Ps to my Adobe subscription in mid-2023.

But Photoshop is just better. Plain. And. Simple. Far superior where it counts. And one place where it absolutely counts is… context-aware heal. Why GIMP’s devs have not integrated this feature is beyond me. Why “integrate” and not “implement”? Because a plugin has been available for over a decade that provides a context-aware fill. Yet that code has never been adapted to provide a context-aware heal that works very similarly to Photoshop’s implementation.

In all seriousness, if they were to implement just. that. feature. they’d bring themselves far closer to Photoshop in needed functionality. That feature alone could give them the leg-up they need, especially among photographers who want to walk away from Photoshop and would be willing to walk to something available for free, like GIMP, if only the functionality was there.

“Why don’t you implement it then? After all GIMP is open source!” Because I value my time and will readily use something that already works, even if I have to pay for it, over spending who knows how much time figuring out GIMP’s source code, the plugin’s source code, and how to integrate them.

“Photoshop works in Wine!”

Shut the fuck up! No it doesn’t!

If it can’t be installed and run through Wine in Linux without issues, it. doesn’t. work. Period. So stop claiming it does. And almost no one using Ps or Lr cares about the ancient versions that you claim work. Only Linux fanboys care about that, and only so they can claim Ps and Lr work in Wine.

And darktable is not a viable alternative to Lr. Plus it’s a lot more difficult to use. Yes, I’ve tried it. Lr’s UI and UX are just far superior.

Moving on…

My primary camera is the Nikon Z5 (buy it on Amazon, Adorama, or direct from Nikon). My previous camera was a Nikon D7200. I’ve been relying on Nikon NX Studio for initial image review before importing photos into Lr for editing, as it’s pretty snappy. I’ve used RawTherapee, but Nikon’s NX Studio is just… better. It’s from Nikon, so I would expect it to be. Plus it’s also available for free.

But one hard requirement is… functional 10 bit-per-channel color.

I’ve been relying on that for a couple years, since acquiring two 4K televisions that support it. My camera is configured to write 14-bit RAW files. And the aforementioned software I use on Windows can display 10-bit color without question.

The situation is a bit different on Linux.

It appears to be working on Linux from what I can find. Kind of, at least. But it’s seemingly impossible to determine whether it’s enabled, let alone being used. In all seriousness, it should be as easy to find in Linux as it is in Windows. Put it on the same panel for changing the display resolution and refresh rate.

And, lastly, to a lesser extent is Microsoft Office. But most of the functionality I need with the Office suite is available in the browser, so it’s not a major loss. I have a 365 subscription mostly for OneDrive. I know I lose OneDrive syncing, but I still have access to that through the browser. Plus, as mentioned, I’m going to be dual-booting with Windows.

So moving on…

A few things I didn’t need to worry about

Before getting into the requirements for my setup, day-to-day, workflows, etc., let’s first talk about what I need that’s also pretty ubiquitous with Linux.

Browser. Most everything that doesn’t require significant compute performance or an involved UI has moved to the browser. And even some things you wouldn’t expect to be browser-enabled have become so. For example, you can do light photo and video editing in a browser. Google Docs and Office Online have been available for years – though there are plenty of features Office Online does NOT have compared to Microsoft Office that just aren’t really possible in the browser.

And virtually every browser is also available on Linux. Firefox, Chrome, Brave, and even Microsoft Edge. So there’s no worries with using my preferred browser – Brave – on Linux.

Media players. VLC and Plex in particular. (I do have a Plex Pass.) Plex has been available on Linux since the beginning – both the server and client. And I’ve written a few articles about it here. Same with VLC since it relies on other open source projects.

SSH. This is pretty ubiquitous on Linux. I use PuTTY on the regular on Windows to, for example, remotely access the server behind this blog for software updates and the like. PuTTY is also readily available on Linux, so no changes needed here.

Virtualization. VirtualBox is readily available. QEMU is there as well with virt-manager, though it isn’t just a single-command install. But I might use this as a chance to play around with it since it has some features that VirtualBox does not.
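For reference, here’s roughly what that multi-command install looks like on a dnf-based distro like OpenMandriva – a sketch only, since the exact package, group, and service names here are assumptions on my part and may differ in your repos:

# Assumed package names for a dnf-based distro; check your repos first
sudo dnf install qemu virt-manager libvirt
# Start the libvirt daemon now and on every boot so virt-manager has something to connect to
sudo systemctl enable --now libvirtd
# Optional: let your user manage VMs without sudo (log out and back in afterward)
sudo usermod -aG libvirt $USER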

Requirements

So now let’s talk about functionality I need. While that doesn’t necessarily govern what distro to use, it does determine whether this is viable in the first place.

Lean installation. One thing I don’t like is a distro that doesn’t give me the choice of what to install. So if that choice is going to be taken away from me, I want what is installed to be a lean set of packages so I’m not spending time removing a ton of stuff I know I won’t ever use. Needing to remove some things is fine, but needing to remove half of what’s installed… No.

This means… Mint is pretty much out, as are a lot of distros that don’t have a “slim” or “minimalist” option.

I understand wanting to be more welcoming of people new to Linux by giving them an install base with a bunch of software that meets most anyone’s requirements for a desktop. But… please… please also always offer a lean install. Seriously, Arch should not be the default choice for lean Linux setups. This was one of my gripes when finding a Linux distro for Cordelia.

Rolling release. I’ve played around with Arch, and I like the idea of the operating system not having a set version number. Since that means not having to worry about doing an in-place upgrade – though Ubuntu and Fedora make that mostly trivial – or a reformat and reinstall like when I upgraded from Windows 10 to 11. (I did the in-place upgrade first to register my machine against the Microsoft license servers, then reformatted and reinstalled.)

Now this is a risk in part because rolling releases may include experimental, beta, or even alpha packages. But it also means you don’t have to worry about being horribly out of date on some things like… the kernel.

KDE or Cinnamon. Linux Mint is one of my go-to distros for running in a VM, even with all the packages I need to remove, largely because of Cinnamon. Being somewhat lightweight, it works well in a virtual machine. I’ve fooled around with Arch a little as well, using Cinnamon there. But Cordelia runs Manjaro with KDE. And KDE is the desktop environment I’ve leaned toward ever since first using Linux… over 25 years ago. And in playing around with the latest versions of it recently, I prefer KDE to Cinnamon.

Never liked GNOME. And I avoid distros and install options where that’s the default – meaning Ubuntu is out just on that alone.

I’ve played around with COSMIC in virtual machines, but it’s only in Alpha right now. And as this is for my personal desktop – i.e., mission critical – I need something that’ll just work. This is a workstation, not a bug testing setup.

Password management. I’ve been relying on KeePass for… years. Before I even contributed code for it all the way back in… 2006. And while I’m evaluating switching to Proton Pass, since I use Proton for email, right now I need something compatible with KeePass. And KeePassXC fits the bill, so it needs to be readily available in the package manager as well. I’ve played around with it on Windows, and I definitely prefer its password generator to the one in KeePass, which has some… issues.

What distro?

OpenMandriva ROME. As of this writing, the latest released version is 24.12, though an update will upgrade you to 25.02. And specifically I’m using the Plasma6 on Wayland ISO. I tried to use the “ROME Plasma Slim” from this page but was not able to get the image to boot on my desktop. Worked fine in VirtualBox, though.

But that also meant I couldn’t get the “slim” release I wanted since the “slim” build seems to be new for the 2025 builds. There isn’t a “slim” option for download on their SourceForge. Meaning I’d be removing packages after installing. But while it isn’t “slim”, I’ve definitely seen worse – e.g., Linux Mint.

So why OpenMandriva? I learned about it through Lunduke’s YouTube channel and checked it out.

Mandriva is the successor to Mandrake Linux. Mandrake was a RedHat derivative, and version 6.0 (Venus) is where I got started with Linux back in 1999. At the time, it was billed as one of the easier Linux distros to get into. And a boxed set of CDs for it was on the shelf at CompUSA, saving me from having to download it. (Alongside boxed sets for Caldera, RedHat, Debian, and Slackware.)

The last release of Mandriva Linux was in 2011, and OpenMandriva followed a couple years later. There’s also Mageia, another fork of Mandriva, but its last major update was in 2023.

I’ve bounced between various distros over the years, and I’d considered daily driving Linux for a while. So upon learning about OpenMandriva, I tried it out. And it was what finally gave me the push to daily drive Linux for the first time in… well, a long time. I only wish I could get the 25.01 “slim” release working.

What didn’t work out of the box?

I’ve only tried the Plasma6 release, so I can’t speak for anything else. Out of the box, it gives you Chromium plus quite a few other packages I just… removed. There are third party repos for Brave, VS Code, and a few other things.

And one point to the OpenMandriva team as well if they see this: take out the “We strongly recommend…” text from the third-party repository descriptions. If a user enables one of the non-free repos, it’s likely out of necessity – e.g., the Microsoft Teams repo – so your “We strongly recommend” is largely meaningless.

And on VS Code: a LOT of developers are very used to it – like me! – and so will NOT switch to something else because… why would we?

Anyway… There isn’t much I have a gripe with here so far, so I’ll just mention the specific packages that gave me issues.

Printing. I have an HP LaserJet P1505 printer that I bought brand new back in 2009. Still on the original toner cartridge, too, which goes to show how ubiquitous printing no longer is. And while the system did detect the printer and appeared to install it, printing didn’t actually work.

I needed to install another package called hplip and then run hp-setup after first removing the printer through the System Settings so it could be re-added using HP’s official Linux driver.
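For anyone following along, the sequence boils down to something like this – hedged, since the package name is what OpenMandriva provides and may differ elsewhere, and I removed the auto-detected printer through System Settings first:

# Install HP's Linux printing stack
sudo dnf install hplip
# Re-add the printer using HP's own setup tool; -i runs it in interactive terminal mode
sudo hp-setup -i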

Printing still isn’t working properly directly from applications, though. Brave can see the printer, but print jobs sent to it get blackholed, and GIMP doesn’t see it at all. I hardly ever print anything, so I’m not hugely concerned with getting this working. Most of what I print is PDFs, though, which can be printed using HPLIP directly.

Clipboard. Imagine copying something to the clipboard, but it doesn’t actually get copied. So when you try to paste something into a form, for example, it instead pastes something you had previously copied, if anything.

Yeah, for some reason I have to clear the clipboard history periodically to keep this from happening. This became especially apparent when trying to use KeePassXC, where I’d copy the username or password to the clipboard only for… something else to be pasted into the login form. I’m leaning toward that being a problem with Plasma 6, since I don’t recall having this issue on Cinnamon. Perhaps I need to downgrade to Plasma 5. I don’t know…
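If you’d rather script the workaround than keep clicking through the tray icon, Klipper does expose a D-Bus method for clearing its history. A sketch, assuming Plasma’s clipboard is Klipper and the qdbus binary goes by that name on your install (it may be qdbus6 or qdbus-qt6 depending on packaging):

# Wipe Klipper's clipboard history over D-Bus (binary and service names may vary)
qdbus org.kde.klipper /klipper org.kde.klipper.klipper.clearClipboardHistory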

Anyway…

But it’s little things like these which can keep people from adopting Linux for daily use. Since I definitely should NOT have to keep clearing the clipboard to keep using it. Sure with Windows I still needed to download and install a driver for my printer, but then it just… works after that.

And then there’s…

NVIDIA

So this took more than a few tries to get right. And I definitely spent way more time on this than was reasonable.

I’ve used the NVIDIA proprietary Linux driver in the past, too. A LOT. So that I had this much difficulty getting it working on OpenMandriva is either a problem with NVIDIA or OpenMandriva. And I’m leaning toward the latter simply due to a couple quirks I discovered while trying to get this working.

Out of the box, the Nouveau driver does at least provide something to get NVIDIA owners going. But with more advanced setups, the driver’s lackluster performance becomes very apparent. Especially on a system with two 4K monitors like mine.

The OpenMandriva non-free repo does have an RPM for the NVIDIA proprietary driver, but… I was not able to get it to work. So going with the download available straight from NVIDIA, I’ll outline the steps I followed.

Note: make sure to fully update your install before following these steps.
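On ROME that means something like the following – dnf distro-sync is, as I understand it, what OpenMandriva recommends for keeping the rolling release in sync, though a plain dnf upgrade should also get you there:

# Refresh metadata and bring every package up to date before touching the NVIDIA driver
sudo dnf clean all
sudo dnf distro-sync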

I also highly recommend backing up your boot and data partitions since this… could get messy. Even if you’re operating on a fresh install, restoring a backup might be quicker than reinstalling everything. In my case, I just imaged the entire drive using dd from the OpenMandriva live USB and copied the image off to Nasira, making restoration as simple as reversing the process while I was figuring out these steps.

First, you’ll need to download the NVIDIA driver package. If you’re on the OpenMandriva Rock 5.0 distro, go with the “New Feature Branch” version – 565.77 as of this writing – since Rock is still on the 6.6 kernel. But if you’re using ROME, the rolling-release distro that was recently upgraded to kernel 6.13, the v565 driver will not build, so you’ll need the v570 beta driver.

Next up is package installation. You can do this from a terminal within the graphical desktop – e.g., Konsole for KDE Plasma – or reboot to the console.

sudo dnf install clang libglvnd-devel pahole kernel-desktop-devel kernel-source

Notice I did NOT add “-y” to auto-confirm. You need to pay attention to what’s going to be pulled in here, specifically the kernel-desktop-devel package. The version MUST match your kernel. If you’re fully up-to-date, this should be the case. Make sure it is NOT grabbing something newer. If it is, you need to update your packages first.
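After the install finishes, a quick way to confirm the versions actually line up – standard RPM tooling, nothing OpenMandriva-specific:

# The running kernel version...
uname -r
# ...should match the installed kernel-desktop-devel and kernel-source packages
rpm -q kernel-desktop-devel kernel-source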

After this, disable kernel package updates. We have to take a couple additional steps to build the kernel modules, and allowing the kernel packages to update afterward absolutely will brick your installation. Make this change by running this command:

echo "exclude=kernel*" | sudo tee --append /etc/dnf/dnf.conf
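And for whenever you do want kernel updates again – say, once a driver release builds against the newer kernel – removing that line re-enables them. A sketch:

# Drop the kernel exclusion from dnf.conf to re-enable kernel package updates
sudo sed -i '/^exclude=kernel\*/d' /etc/dnf/dnf.conf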

I’ve filed two bugs with OpenMandriva’s GitHub (here and here), so hopefully those will be addressed. Not sure how easy or difficult those will end up being.

So now to overcome the issues I reported in the above bugs:

# Build resolve_btfids and link to it 

cd /usr/src/linux-`uname -r | sed -E 's/(.+?)-.+?-(.+)/\1-\2/g'`/tools/bpf/resolve_btfids
sudo make
sudo ln -rs resolve_btfids /usr/src/linux-`uname -r`/tools/bpf/resolve_btfids/resolve_btfids

# Create a symlink for vmlinux

cd /usr/lib/modules/`uname -r`/build/
sudo ln -s /sys/kernel/btf/vmlinux vmlinux

That first line will strip “desktop” from the middle of the kernel version string returned by uname -r, so it should take you right to the kernel source folder for the resolve_btfids tool, which the NVIDIA installer will use as part of building the kernel modules. As of this writing, on my updated installation, uname -r returns 6.13.4-desktop-2omv2590, so that first line should resolve to /usr/src/linux-6.13.4-2omv2590/tools/bpf/resolve_btfids.
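If you want to double-check that the substitution lands on a real directory, you can test the same expression on its own:

# Should print the resolve_btfids source directory if the stripped version string is correct
ls -d /usr/src/linux-`uname -r | sed -E 's/(.+?)-.+?-(.+)/\1-\2/g'`/tools/bpf/resolve_btfids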

So after all of this, you’ll want to run the installer. If you have not yet rebooted your system to the console, do so now, then run this when you get settled:

sudo /path/to/NVIDIA_install.run --no-rebuild-initramfs --no-nouveau-check --skip-module-load

Let’s go over the options here.

  • --no-rebuild-initramfs: the built-in step to rebuild the initramfs will fail on OpenMandriva, so we disable it here to avoid the error message and early abort.
  • --no-nouveau-check: pretty self-explanatory. This won’t check for Nouveau, but it also won’t attempt to disable it. We’ll handle that later.
  • --skip-module-load: this prevents the installer from loading the module, which would error out if Nouveau is still loaded, causing the installer to abort early.

During the install, you’ll get prompted for a few things:

  • NVIDIA Proprietary or MIT/GPL: Proprietary
  • Install 32-bit libraries: Yes. Not sure what uses it, but better to have it and not need it than to need it and not have it.
  • Register the kernel module with DKMS?: No! Unless you previously installed DKMS for something else, you likely won’t even get prompted for this.
  • Configure X automatically: Yes if you’re on X11, No if you’re using Wayland

The installer will dump out some progress messages as it goes and eventually confirm at the end that the kernel module was installed. But you will also get a message about needing to reboot due to Nouveau being enabled.

Don’t reboot just yet! We’ve got one more step: disabling Nouveau. First, create a new file at /etc/modprobe.d/nvidia-proprietary.conf:

blacklist nouveau
options nouveau modeset=0

options nvidia_drm modeset=1
options nvidia_drm fbdev=1

Now, edit the /etc/default/grub file. The line you’re looking for begins with GRUB_CMDLINE_LINUX_DEFAULT. Add modprobe.blacklist=nouveau to the end of that string.
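For reference, the line should end up looking something like this – the options already inside the quotes will differ on your system, so treat this as a hedged example and only append the blacklist entry rather than replacing the whole line:

# /etc/default/grub -- existing options will vary; just add modprobe.blacklist=nouveau to the end
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash modprobe.blacklist=nouveau"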

Now, a few more commands:

sudo dracut --force
sudo cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.bak
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

That copy command ensures that your grub config is backed up before it’s overwritten so you can easily recover it using a boot USB if necessary.

And now… Reboot.

If you did everything successfully, you should be coming back up into your desktop of choice with everything working as expected. Meaning if you have your system configured for auto-login, that should work as before. And you can verify from a terminal that your system is using the NVIDIA driver instead of Nouveau by running this:

lsmod | grep nouveau

If the NVIDIA driver is properly configured and working, you should get a blank result. You should also be able to run nvidia-smi from the command line and have it return details about your card and what applications are using it.
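You can also confirm the proprietary modules themselves got loaded:

# The nvidia kernel modules should now be listed in place of nouveau
lsmod | grep nvidia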

And – holy shit! – that was far more difficult to figure out than it realistically should have been. So be glad you’re reading this so you don’t have to go through the pain – or at least not for as long. These instructions may also work with other distros with some tweaks for distro-specific commands and package names.

Conclusions

So that’s it for now. I’ll probably revisit this some months down the line – especially as package and kernel updates get rolled out along with a new NVIDIA driver release.

I’ve been daily driving Linux for about two weeks as of this writing, so it’ll be interesting going forward. I’ve been back in Windows a few times, but that was mostly while I was figuring out the NVIDIA driver issues and just wanted a break from it. I have VMs under VirtualBox in Windows I need to migrate to QEMU, which, along with virt-manager, works a LOT better than VirtualBox under Linux. But things for the most part have been… just working.

Once I figured out the NVIDIA driver and realized what was going on with the clipboard.

And again the only reasons I really have for going back into Windows are gaming and photo work along with the very infrequent print job. I can use the HPLIP GUI to print some document types, though, so that’s a workaround for the occasional need to print a shipping label.

So I’ll report back sometime down the line as to how this goes. But so far I’ll likely be sticking with Linux and popping over to Windows only when necessary – e.g., the above-mentioned reasons why I’m keeping it around.

Open source++

Given what’s been coming out of open source projects in the last few months, with waves of purges and “cleansings” to rid projects and repositories of anyone who is not left of Bernie Sanders, I’m starting to wonder if FOSS is about to have its own version of “Atheism+”.

I first wrote about Atheism+ not long after it got off the ground back in 2012. To summarize, Atheism+ was about politics, with atheism speakers and “leadership” expecting all atheists to move hard left and adopt a preordained set of tenets. And in my above-linked article, I said this:

For one there are more atheists than those who actually use the label “atheist”, and more using the label “atheist” than those active in the atheist movements, meaning Atheism+ is, by definition, another minority within the totality of all who fit the definition of “atheist” even if they don’t use the label.

Atheism+ would die the death it deserved in 2016, but its underlying tenets haven’t gone away. The lesson they failed to learn is, quite simply, that atheists were a LOT more politically diverse than first presumed. Those who formed Atheism+ expected they’d get the majority of atheists on their side. And when that didn’t happen – as shown by the dismal attendance numbers for the 2016 Reason Rally when compared to the 2012 Reason Rally – to say they turned hostile would be an understatement.

I’ll let Peach’s video do the talking:

I’m libertarian and atheist. Skeptical of governments and gods. So when Atheism+ first came around and tried to redefine atheism – despite their assertions they weren’t trying to do that – I wasn’t going along with it. Instead it showed that a lot of people who typically advocated for free thought and free expression had adopted the idea that you had freedom of thought and expression only if you used that freedom to think the same way as them.

And one of the counters to Atheism+ by other atheists, such as me, was simply… what does any of that have to do with atheism and being an atheist? Rather than focusing just on… atheism and offsetting or countering the influence religion has over society, along with countering the false ideas many propose such as creationism, Atheism+ I guess decided that wasn’t enough. That atheism meant more than just saying “gods don’t exist”.

And now with open source, we have something similar brewing. Requiring a person to hold certain political ideas to contribute to a repository rather than being inclusive of everyone who wishes to contribute. Where a lot of projects have people at the reins who are hard left. With “purges” and “cleansings” happening because of it. So are we now witnessing the formation of “Open Source++” or “FOSS++”? Borrowing from Jen McCreight’s tenets for Atheism+:

  • Open source += “we care about social justice”
  • Open source += “we support women’s rights”
  • Open source += “we protest racism”
  • Open source += “we fight homophobia and transphobia”

And to finish with adapting Peach’s ending statement, since that’s the apparent trend: Open source += everyone we haven’t yet blacklisted.

What do all the += have to do with writing software?

Software development, especially open source software development, is the ultimate in egalitarianism. Projects live or die on the merits and with whatever effort their creators or forkers are willing to put into it. User bases are earned, not owed. Though every so often we get project creators and maintainers with a god complex. But the central idea of contributing to open source is… leaving your identities and politics at the door to the code repository. Focus instead on writing code, fixing bugs, and delivering value.

But right now we definitely have people on a power trip. And this has the potential to threaten wider adoption of FOSS projects.

So will we see “FOSS++” or “Open Source++” coming down the pike? I very much hope not. If anything, the pushback by others within open source spaces should push these projects to reverse course.

“Bloat” doesn’t mean what you think it does

So a few weeks ago I came across a post calling the engineers of old “lazy” because a decision to not record all 4 digits of a year led to a worldwide crunch that became known as “Y2K”. And now… we have a post saying “And yet we wonder why software today is so bloated and inefficient” because… Doom, from 1993, could fit on two floppy disks…

Oh brother…

“Fast forward to today, and a simple video player app like YouTube is more than 300MB. Just to play videos.”

The YouTube app is a lot more than just a video player. And playing videos involves a lot more than he seems to realize. VLC has an installation footprint of nearly 200 MB, and it is literally *just* a media player.

He’s also vastly overstating Doom’s sophistication. It wasn’t “full 3D”, for starters, nor all that immersive. And Doom overall is very simple: run and shoot, grab keys and other items and weapons, open doors, push buttons.

Doom also targeted the i386 processor. Yes, that detail matters. A lot.

To get an idea of what trying to write a game like Doom for the i386 involved, the source code for it is on GitHub.

Here’s a challenge: go through the code and count the number of coding best practices that are NOT being followed. To get something like Doom performing reasonably well on the i386, you needed to make a lot of compromises.

Simply due to the limitations of the hardware. Limitations that largely don’t exist today.

He’d say in another comment: “When your options shrink, your creativity expands. Constraints are the best teacher.”

To an extent this is true. It also reveals whether everything you want to include in the product can be included.

Since that’s the singular reason software today could be called “bloated” compared to years prior: more and better functionality.

VLC, for example, plays most every media file format that’s ever existed. It takes a LOT of code to support that. Even browsers haven’t been immune to that “bloat” with everything they need to support, with Chromium-based browsers and Firefox both having install footprints of a few hundred megabytes.

And sure, constraints are a good teacher. I’d like to see young devs today try to write programs for the TI-82 and its 28 kilobytes of user space.

But those constraints can also push you to write bad code to get it working per your requirements, provided that’s even possible at all, potentially introducing bugs that may not be repairable due to your constraints.

Personally having faced the limitations of the past, including coding for the TI-82 and TI-85, I’m glad they, for the most part, don’t really exist anymore.

Since, again, much of the “bloat” Marc is describing (and complaining about) is due to… more and better functionality. Giving customers what they want and what they didn’t know they needed till they had it.

Photography made me a better programmer

Actually, no, it didn’t.

But that doesn’t mean the two never overlapped.

Like most photographers out there, I do post my work online. To Instagram, Flickr, and also to a website. Don’t worry, this isn’t turning into an ad placement for Squarespace. I did try them out for a while since it seems every photographer on YouTube is using them, but I was not impressed. I see the appeal, since their whole raison d’etre is to allow someone to build a website with relative ease.

I prefer self-hosted.

Like this blog, my photography website is hosted on AWS. Only it’s powered by Ghost, not WordPress. (And I’m still in the process of migrating this blog to Ghost.) And initially I used the Edge theme simply because it’s free and got me something quick. Later in 2024, though, I decided I needed something… better.

Enter Dope. Mostly.

I liked the look, feel, and functionality of it, but definitely did not like its limitations. But the theme is open source, so I set about making the customizations I wanted, upgrading some outdated components, and releasing the very enhanced theme under the name Blaire. Not really with the hope that others would take the theme and use it, but more so other theme writers can see what is possible.

And from my experience with that, I can pretty confidently say that writing a custom theme for a content management system is a great way to learn HTML, CSS, and JavaScript, along with how to work with the browser’s developer tools. With Ghost, you do also need to learn Handlebars.js, since that’s used for the placeholders and “helpers” for displaying the post, page, feeds, and the front page.

You probably expected me to say that photography helped me develop a better attention to detail or some platitude along those lines. And there are almost certainly programmers who also delve into photography who readily say that. And they’d be lying. Or, at the least, very heavily mistaken.

Being a photographer doesn’t mean you have a better attention to detail. And photography doesn’t allow you to develop a better attention to detail. It will allow you to develop a better attention to some detail, provided you’re editing and inspecting your photos. Which will give you a feeling that you have an overall better attention to detail, but only if you allow yourself to be deluded into thinking that.

It also doesn’t help you look at things all that differently, either. At least not automatically. You have to push yourself to do that. But you also don’t need to be a photographer or any kind of artist to do that.

What I can say, at least, is that without having been a photographer for the last 6 years, I likely wouldn’t have had reason to dive into making a custom theme for Ghost. And sure, for a while I didn’t, relying on the Edge theme for over two years until I decided I needed something better. Then I found Dope and, while I liked the approach, I needed to overcome its limitations.

Which pushed me into new territory.

Commenting your code

My philosophy on comments is pretty simple.

They’re for explaining outlier situations, not handholding. You don’t write comments to point out the obvious — e.g., “Open the file and parse it as JSON…” And if you are, STOP!

Sure, there’s badly written code out there. But in general these days, if you’re reading through code and believe you need comments to understand what’s going on, you don’t understand 1. the language, 2. the framework (React, .NET, MFC, etc.), or 3. the application and concepts involved. Comments shouldn’t be written so just anyone can jump into the codebase.

And if you’re writing a large comment to explain a block of code, you very likely need to rewrite the block of code. Functionality should be obvious from looking at it. And exceptions to that should be extremely rare.

Instead the aim should be writing code that is self-documenting. We’re long past the limitations of the… past that made meaningful abstraction impossible. All names should be descriptive. And given the comprehensive functionality in most any IDE today, there really isn’t any excuse to NOT make these names descriptive. Again, we’re long past the memory and storage limitations that made meaningful abstraction impossible.

I’ll give an example from my recent experience. As of this writing, I’m finishing up an enhancement for the Ghost CMS stemming from a theme I’m writing. The code base is JavaScript. Themes are a combination of HTML, CSS, JavaScript, and Handlebars.js. I don’t have much experience with JavaScript, and the file I’m modifying has zero comments beyond the comment block at the top, except for one line explaining why an index variable needs to be decremented by 1 – because the why isn’t obvious.

Yet… I didn’t need comments on every code block or, worse, every line of code to understand what’s going on. It was fairly easy to figure out. Comments would likely have just been more… stating the obvious.

DF64 Gen 2 coffee grinder

About a year ago, I wrote about the Timemore Chestnut C3 manual grinder and explained that, while it’s a capable grinder, I bought it merely as a stopgap between the Compak K3 I previously used – and still have, actually – and whatever grinder I went for to replace it.

And the replacement, which I’ve had for about a year as of this writing, is the DF64 Gen 2 (or v2). It’s sold under a couple names, but I bought mine directly from Turin Grinders.

And, sure, there are plenty of videos on YouTube about this grinder, but most of them are made after using the grinder for only a short bit. As I just said, I’ve had it for a year, dutifully grinding coffee about twice a day, typically. Who on YouTube can say that?

So let’s get into this grinder and what you can expect. Starting with the things about it that I don’t like so you can decide if they’re dealbreakers for you.

Very short power cable

It’s 3ft. Seriously?

In a way this worked out slightly for the better, honestly, as my extension cable has a right-angle plug instead of a straight plug. But that I needed it at all is more than a little annoying. It’s added expense on my part that could’ve been avoided with slightly higher manufacturing expense on their part. I mean, how much more would it have cost them to double the power cable length? Can’t be much.

The power cable on the Compak is about 6ft, and that seems to be the standard power cable length for small appliances in the United States when there isn’t a heating element involved. And, again, the DF64’s power cable is only about 3ft.

So depending on your setup, plan to buy an extension cable.

And, a little bit of a PSA: even though this is low-wattage, don’t cheap out on that extension cable. In the United States, you will need a 3-prong extension cable for this – it’s good to see they do ground it. An appliance extension cable rated for 15A will be ideal here since they’re heavy duty, being made for, well, appliances. Sure, that’s overkill given the grinder will only pull a couple amps from the wall after the initial current surge that’s characteristic of most motors. But, again, don’t cheap out.

Power button

It’s a 16mm anti-vandal latching push button, with the plunger sitting about 1/8″ above the button’s collar.

To me, at least, that small button is a little annoying. So if there’s anything I’m going to replace on this, it’ll definitely be that, either with a larger power button or, likely, a toggle switch like on the Niche. I would also very much have preferred that button not be close to the base. But since it’s a push-button power switch, having it low does mean you’re not risking pushing over your grinder every time you turn it on or off.

“Declumper”

Now for easily my biggest gripe about this machine: the acetate “declumper” right before the chute and anti-static prongs.

I’ll just say up front that I completely removed that from the machine after twice having to take apart the chute to clean out grinds that were binding up. And removing that actually improved the results in the cup. I already use the WDT (Weiss Distribution Technique) as I explained in this post, so having a “declumper” that barely functions and over time only serves to create clumps rather than prevent them was far more of an annoyance than anything else.

So if you buy this, the first change I recommend you make is removing that “declumper” – you’ll need to remove the chute to get to it – and just running without it. Just make sure to use the WDT to declump and distribute the grounds in your portafilter.

And on any grinders I buy in the future, removing any “declumper” on the chute will likely be the first thing I do since, again, they tend to make clumping worse as time goes on. The Compak has one that isn’t easily removable, but it also never created a clumping issue that I recall, and I used that grinder near daily for over 8-1/2 years.

Dosing cup

My annoyance here isn’t with the cup itself. I actually quite like having it, honestly, as opposed to dosing directly into the portafilter. My annoyance is more around needing the dosing collar that comes with it – that I think can double as a dosing funnel for the portafilter. Because the grinder chute is angled, but the prongs for holding the dosing cup are not. Forget to use the dosing collar and you’ll have a mess.

But at least with mine, the dosing collar is a bit tight around the cup, with the gasket on it not helping, so it never seated down completely. Meaning if you didn’t remove the collar before pouring the grinds into your portafilter, some would get caught along the seam between the collar and cup.

Now it does look like Turin (or whoever actually makes the grinder) has realized this is a problem and no longer includes the dosing collar. Instead they include an adapter that tilts the dosing cup toward the chute. I only learned about this as I’m writing this review, so I can’t speak to how well that works. But I have ordered one as of this writing since it’s only $10.

My only gripe with the dosing cup itself is that it’s aluminum. Even though there is the “plasma generator” for preventing static buildup, it isn’t 100% effective and never can be, so there is still going to be some static causing grinds to stick to the cup. But it cleans out easily enough with just a paper towel. And at least it isn’t plastic.

Small footprint and stature

Okay now let’s transition into what I love about this grinder starting with… it’s small but still uses 64mm burrs – hence the 64 in the name. The K3, by comparison, uses 58mm burrs. It’s the smallest grinder I’ve personally ever owned at only about 12″ tall with a countertop footprint of about 10″x6″. So it should easily fit on any countertop.

Just, again, make note of the short power cable I pointed out above.

Stepless, but easy to adjust

The Compak K3 is also a stepless grinder, but… man the threads were tight making even the slightest adjustments a pain. Tiny adjustments on the DF64 are far easier in comparison. And, again, those threads completely binding up is why I don’t use the K3 anymore.

Minimalist design

The simple design is why I went for the Compak K3 when I bought it back in 2015. Though the DF64 is simpler still, since the K3 has a timer so you can dose based on grinding time if you want. But the K3 was also from a time when single-dose grinding had yet to become a focus for grinders. The K3 is very much built for commercial use.

The Niche changed that direction, being one of the first single-dose countertop grinders on the market. Prior to that, single-dose grinders were typically manual – like the aforementioned Timemore C3.

And the DF64 is a great competitor to the Niche: the small footprint and its intended use as a single-dose grinder – weigh out one dose and grind that – as opposed to keeping a hopper of beans like the K3 and the Breville Smart Grinder I had before that (the one laden with electronics that I replaced with a higher-quality grinder that’s little more than a motor and a power switch).

But the minimalist design adopted by the DF64 also means it’s…

Inexpensive but not cheap

400 USD. Do I really need to say anything more? Though given the smaller and less expensive DF54 at a little north of half the DF64’s price, I do wonder if the price could come down more for the DF64 Gen 2.

Overall

The price point and positive reviews online for the first-gen DF64 are what prompted me to get in line for the DF64 Gen 2 when it was announced. So my unit was part of the first batch of orders – the ones that have the highest risk of issues. (It’s why I almost never buy games right when they come out. Anyway…) But my experience with the DF64 Gen 2 has been overall very positive.

Aside from what I’ve said above about the “declumper” that only serves to create clumps rather than prevent them, along with the need for the collar on the dosing cup, I really have no complaints. Is it better than the K3? Not really, but the only thing separating the K3 from the DF64 Gen 2 is the K3’s more powerful motor, while the DF64 has larger burrs.

Grinders have definitely come a long way since I first got into espresso back in 2012. The question that will be answered as time goes on is longevity. Thankfully it looks like the DF64 is reasonably easy to service, with the burrs also easily accessible. So the question really is going to come down to the motor and how long it’ll be till that gives up the ghost.

Otherwise if you’re looking for a great quality grinder that works well with even the highest-end espresso machines but won’t equally break the bank for you, you definitely can’t go wrong with the DF64 Gen 2.

Buy it now through Amazon, Turin Grinders, or MiiCoffee.

Pardons aren’t without consequences

The President’s pardon power is unconditional and unquestionable. He has the power to pardon anyone for crimes against the United States. And in some last-hour maneuvers, President Biden issued a blanket pardon for Dr Fauci going all the way back to January 1, 2014.

Reason Free Media explains why that’s significant:

But a pardon isn’t all that people make it out to be.

Sure the blanket pardon means that Fauci cannot be prosecuted by the Department of Justice for any potential Federal crimes between January 1, 2014, and January 19, 2025. But there’s more.

First, it doesn’t eliminate the possibility he could be prosecuted for any violations of State laws. It doesn’t eliminate the risk he could be held personally liable for any torts that could be alleged, whether he is sued by any State, the Federal government, or any private party.

But what it does eliminate is the risk of jeopardy.

The Fifth Amendment to the Constitution says:

No person… shall be compelled in any criminal case to be a witness against himself…

This means that Anthony Fauci, having not timely and publicly rejected the full and unconditional pardon, can be compelled to testify to anything and everything regarding COVID-19 and the likelihood it was a “lab leak”, along with any gain-of-function research that may have been going on behind the scenes. And the Fifth Amendment is no shield since Fauci no longer faces the risk of prosecution.

So in short, if Senator Rand Paul so desires – and he’s probably already been advised of what the pardon actually allows – he could make Fauci’s life a living hell. And at the first hearing with Fauci as a witness, if I were Senator Paul, after Fauci was sworn in, the first thing I’d say to Fauci is, simply:

Dr Fauci, on January 19, 2025, you were pardoned by then-President Biden for all offenses against the United States extending back to January 1, 2014. While that means you cannot be prosecuted for any actions, alleged or actual, from then to the date of the pardon, it also means you no longer have the protection of the Fifth Amendment against self-incrimination and can be compelled, under pain of contempt of Congress, to testify to anything and everything related to, quoting the pardon itself, your service “as Director of the National Institute of Allergy and Infectious Diseases, as a member of the White House Coronavirus Task Force or the White House COVID-19 Response Team, or as Chief Medical Advisor to the President.”

Fauci could not then use that as an opportunity to reject the pardon long after the fact merely because it would be convenient. And if he were to try, the Court would likely say that the time to reject the pardon was when it was delivered to him, not when the full consequences of the pardon became apparent.

But where does the idea come from that the Fifth Amendment protection against self-incrimination is eliminated when a person accepts a pardon? From the same Supreme Court decision that also says a person can reject a pardon: Burdick v. United States, 236 US 79 (1915).

In that case, George Burdick, an editor for the New York Tribune, was subpoenaed to appear before a grand jury to testify to the source of information he had regarding smuggling jewels into the United States. Burdick invoked the Fifth Amendment. And then-President Woodrow Wilson recognized that a pardon would eliminate the risk of prosecution, thereby also eliminating the protection of the Fifth Amendment, and so issued a pre-emptive pardon.

Burdick, however, rejected the pardon and refused to testify, citing the Fifth. He was held in contempt. And the Supreme Court ruled on points that are applicable here:

  • Acceptance, as well as delivery, of a pardon is essential to its validity;
  • a person can reject a pardon, and the Court cannot then force it on the person;
  • the President of the United States may exercise the pardoning power before conviction; and
  • a witness retains their Fifth Amendment protection against self-incrimination in rejecting the pardon.

The Supreme Court had previously ruled in United States v. Wilson, 7 Peters, 150 (1833), that a person may reject a conditional pardon. And Burdick merely extended that to unconditional pardons. Burdick also pointed out that accepting a pardon implies a confession of guilt.

But in stating that Burdick retained his Fifth Amendment protections in rejecting the pardon, it implies that accepting the pardon vacates the Fifth Amendment protections against self-incrimination within the scope of the pardon.

A pardon is merely a ticket to getting out of jail or avoiding prosecution entirely. But it has been long recognized that a pardon, in eliminating the risk of prosecution, also eliminates the protection of the Fifth Amendment against self-incrimination within the scope of that pardon.