Stop blaming the tariffs

From CaseLabs’ website:

Text for those with accessibility technologies:

We are very sad to announce that CaseLabs and its parent company will be closing permanently. We have been forced into bankruptcy and liquidation. The tariffs have played a major role raising prices by almost 80% (partly due to associated shortages), which cut deeply into our margins. The default of a large account added greatly to the problem. It hit us at the worst possible time. We reached out for a possible deal that would allow us to continue on and persevere through these difficult times, but in the end, it didn’t happen.

We are doing our best to ship as many orders as we can, but we won’t be able to ship them all. Parts orders should all ship, but we won’t be able to fulfill the full backlog of case orders. We are so incredibly sorry this is happening. Our user community has been very devoted to us and it’s awful to think that we have let any of you down. There are over 20,000 of you out there and we are very grateful for all the support we have received over the years. It was a great journey that we took together and we’re thankful that we got that chance.

We understand that there will likely be a great deal of understandable anger over this and we sincerely apologize. We looked at every option we had. This is certainly not what we envisioned. Some things were just out of our control. We thought we had a way to move forward, but it failed and we disabled the website from taking any more orders.

It was a privilege to serve you and we are so very sorry things turned out this way.

This has been picked up by the mainstream media for an obvious reason: CaseLabs is blaming the Trump tariffs for its closure.

The only tariff that significantly affected CaseLabs is the one on aluminum, which only went into effect in March 2018. While tariffs certainly are NOT a good thing for the economy, it’s a stretch to blame them for bringing down CaseLabs.

The “default of a large account” is the part few pay attention to. And for CaseLabs to blame the tariffs for their woes is beyond the pale, since they’re looking to shift the blame for their closure to the White House and away from their business practices. And many have taken the bait.

They were already insolvent. The tariffs only sealed their fate.

For the tariffs to cause their closure in the equivalent of one fiscal quarter means they were already in the red. Deep in the red. The large account going into default robbed them of the revenue they needed to keep going. To be “forced” into liquidation – Chapter 7 bankruptcy – means they and their creditors do not foresee CaseLabs (and their parent company) ever having the revenue to recover their losses and pay down their liabilities – liabilities that were already mounting and already getting them in trouble with their creditors, likely long before the tariffs were enacted.

Even without the tariffs, that large account going into default likely would’ve taken them under anyway, especially if it was a customer who went into bankruptcy. Probably not in August 2018, but they likely would not have survived the year – or barely made it into 2019 – without additional revenue or funding to make up the shortfall.

CaseLabs made great computer chassis, with a massive amount of customization options. But it is not proper to lay even the majority of blame on the tariffs. They were already in trouble before the tariffs were enacted. Being forced into Chapter 7 liquidation, bypassing Chapter 11 reorganization, shows this. That they weren’t able to work out a deal with their creditors also shows this.

But everyone wants to blame the tariffs because that’s convenient and… Trump enacted them.

Combating glare in the daylight

Quick product review here. I have a dashcam in my SUV. And glare is one of the major issues with using it, since my dashboard reflects onto the windshield when it’s bright outside.

So to take care of that problem, I ordered a DashMat after seeing it recommended on a couple of forums. I don’t have it fully installed yet, but it is still making one hell of a difference.

As you can tell in the lower image, the dashboard is not reflecting in the windshield to nearly the same degree. And this isn’t a win for just the dashcam, as it also means being able to see through the windshield better without the glare.

When it’s clean, that is.

You can order a DashMat here. The price will vary based on the make/model of your vehicle. And the dashcam I use is the Yi Smart Dash Camera 2.7″ (silver).

Treason

The United States is unique when it comes to governance in one spectacular way: treason is defined above the government in our Constitution. This means the definition of treason cannot be changed merely by adopting a new statute. Specifically, it’s defined in Article III, the same article that defines the Federal judiciary, at Section 3:

Treason against the United States, shall consist only in levying War against them, or in adhering to their Enemies, giving them Aid and Comfort. No Person shall be convicted of Treason unless on the Testimony of two Witnesses to the same overt Act, or on Confession in open Court.

The Congress shall have Power to declare the Punishment of Treason, but no Attainder of Treason shall work Corruption of Blood, or Forfeiture except during the Life of the Person attainted.

Treason has been used to describe Trump’s recent activities with regard to Russia and her President Vladimir Putin. Obviously the President is not levying war against the United States. But did he give Russia “aid and comfort” and “adhere” to them? No.

Let me repeat that: Trump’s most recent actions are not treason under the Constitution. Treason is typically only applicable to actions committed during a time of declared war. If Congress hasn’t declared war, treason cannot apply. For reference, the Second World War was the last time the United States was in a declared state of war.

To be clear, Russia isn’t a US ally or “friendly” to the United States. But not being an ally or “friendly” doesn’t make them an enemy by default. A declared state of war by the United States against Russia or vice versa makes them our enemy. And attempting a treason prosecution against someone for providing aid and comfort to Russia would be on par with a declaration of war, or evidence of an intent by the United States to declare war against Russia.

Quoting Carlton F.W. Larson, professor of law at U.C. Davis, in an article for the Washington Post called “Five Myths about Treason”:

It is, in fact, treasonable to aid the “enemies” of the United States.

But enemies are defined very precisely under American treason law. An enemy is a nation or an organization with which the United States is in a declared or open war. Nations with whom we are formally at peace, such as Russia, are not enemies. (Indeed, a treason prosecution naming Russia as an enemy would be tantamount to a declaration of war.) Russia is a strategic adversary whose interests are frequently at odds with those of the United States, but for purposes of treason law it is no different than Canada or France or even the American Red Cross. The details of the alleged connections between Russia and Trump officials are therefore irrelevant to treason law.

The United States is not in an open or declared state of war with… anyone currently. While certain groups, organizations, and even States have been called “enemies”, making them actual enemies of the United States requires a declaration of war. “Enemy” is defined at 50 USC § 2204(2):

any country, government, group, or person that has been engaged in hostilities, whether or not lawfully authorized, with the United States

Note that an enemy combatant is not the same. An enemy combatant against the United States does not automatically make the country from which they came an enemy of the United States. Again that requires a declaration of war.

As such, Russia is not an enemy of the United States. Meaning any person who provides any level of “aid and comfort” to Russia, even to the detriment of the United States, cannot be said to be committing treason. They may still be committing other crimes against the United States, such as espionage – the Rosenbergs were convicted and executed for espionage – but it cannot be called treason.

Quoting an article by Fred Barbash for the Washington Post called “Trump ‘treason’ in Helsinki? It doesn’t hold up.”:

Problem two: There must be an enemy to aid and comfort.

While many may think of Russia as an adversary and even an enemy, it has not been declared so. An “enemy,” Harvard Law School professor Laurence Tribe said in an email to The Post, “arguably” requires a formal state of war.

“Some commentators,” Tribe writes with co-author Joshua Matz in “To End a Presidency: The Power of Impeachment,” “have argued that Russia also ranks among our ‘enemies’ ” because of its hacking to influence the 2016 election in Trump’s favor. The argument is “interesting and important,” they write, but “continued legal uncertainty about whether it is treasonous to lend ‘aid and comfort’ to Russia militates against basing an impeachment on this theory.” There are plenty of other potential crimes in the Russia investigation, they write, but probably not treason.

The simple fact we aren’t at war with Russia, and they have not declared war against the United States, means Trump’s recent actions cannot be called treason.

Treason is strictly defined in the Constitution to avoid its definition being diluted by the government – see Federalist No. 43 – as kings and dictators have been known to do. We must resist the urge to dilute its definition by casually throwing around the word. And seeing publications like the Washington Post push back against how the word has been used recently is comforting.

Net Neutrality was never going to work

Your ISP oversells your data rate.

I’ll just say that up front. I’m privileged to have Google Fiber, which is a 1 gigabit per second, full-duplex Internet connection. However, if I go run a speed test, I won’t see that full 1 gigabit. During off-peak hours, my speed will be north of 900Mbps both ways, but typically it’s a bit lower than that.

Your actual Internet speed depends on many factors. Again, bear in mind that your ISP oversells your bandwidth plan, and I’ll demonstrate not only why that happens, but why it’s unavoidable, and why it typically was never a problem.

* * * * *

Okay let’s start with why this happens. First, let’s talk about network topologies. Starting with a star network.

Star Topology

Missing from this image is the respective maximum bandwidth available to each device on the network. I’ll presume the switch is Gigabit since that is the most common switch used in business and home networks.

The desktops are (likely) all Gigabit capable, same with the server. For simplicity, I’ll assume the printer is a standard laser printer with only a 100Mbps connection, also called Fast Ethernet. And this switch is going to be connected to the larger network and the Internet via an uplink, which is also Gigabit.

Now in most Gigabit switches, unless you buy a really cheap one, the maximum internal throughput will be about the same as all of the ports combined at full duplex (meaning same bandwidth for upload and download). This means if the switch has 8 ports, it’ll support a maximum throughput of 8Gbps, and 16Gbps if it has 16 ports, and so on. This is to make sure everyone can talk to everyone else at the maximum supported throughput.
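To put quick numbers on that, here’s a back-of-envelope sketch in Python, purely illustrative, using the same framing as above (the internal throughput matches the sum of the port speeds so every port can run at line rate):

# Non-blocking switch capacity under the framing above: internal
# throughput should match the sum of the port speeds.

def switch_capacity_gbps(port_count, port_speed_gbps=1.0):
    return port_count * port_speed_gbps

for ports in (8, 16, 24, 48):
    print(f"{ports}-port Gigabit switch: {switch_capacity_gbps(ports):.0f}Gbps internal throughput")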

But what about that uplink? The port that supports the uplink is no different from any other port. None of the devices on the switch will be able to talk to the rest of the network or the Internet faster than the uplink allows. This means they will be sharing bandwidth with the rest of the devices on the switch. This creates contention.

This is why there are Gigabit switches with one or more 10GbE ports intended to be the uplink to the rest of the network. Because they are more expensive, these switches are typically found in enterprise and medium or large office networks. They can also be connected to larger 10GbE switches to create a backbone. All with the intent of alleviating contention as much as possible on the internal network.

But no device on the network can talk faster than its connection will allow, regardless of how well you design the network. This means that if you have a file or database server that sees a lot of traffic during the day, no one can talk to it at full bandwidth since it will be overwhelmed. There are ways to alleviate that contention, but you’re merely kicking the can down the road.

A well-designed network is one in which all devices on the network can access whatever resources they desire without significant delay through a combination of switches and routers. That means making sure the resources that will see significant traffic have the most bandwidth available – e.g. multiple 10GbE connections trunked into one pipe, also known as “link aggregation” – and are integrated into the network in a way that maximizes throughput and minimizes contention.

* * * * *

So what does all of that have to do with the Internet and Net Neutrality? A lot. In large part because those most advocating for Net Neutrality don’t know why it was never going to actually work the way they desired.

Let’s rewind a bit to show how this problem came about. Twenty (20) years ago, dial-up was still very prevalent and broadband (DSL and cable) wasn’t particularly common. Your ISP had a modem pool connected to a few servers that routed Internet connections through to the ISP’s trunk. The modems were very low bandwidth – 56kbps at most, with bandwidth varying based on distance and phone line quality – so ISPs didn’t need a lot of throughput.

That everyone was connecting through a phone line made ISP competition a no-brainer. It wasn’t unusual for a metropolitan area to have a few ISPs along with the phone company offering their own Internet service. Your Internet service wasn’t tied to whoever provided your phone line. Switching your ISP was as simple as changing the phone number and login details on your home computer.

Broadband and the “always on” Internet connection changed all of that. Now those who provide the connection to your house also provide the Internet service. And there is, unfortunately, no easy way to get away from that.

But the physical connection providing your Internet service is not much different from home phone service with regard to how it is provided in your locale. The line to your home leads to a junction box, which will combine your line along with several other lines into one larger trunk. Either through a higher bandwidth uplink, or through multiplexing – also called “muxing”, the opposite of which is “demuxing”. Multiplexing, by the way, is how audio and video are joined together into one stream of bits.

The signal may jump through additional junction boxes before making it to your ISP’s regional switching center. The fewer the jumps, the higher the bandwidth available since there isn’t nearly as much contention. Your home doesn’t have a direct line to the regional switching center. And as I’ll show in a little bit, it doesn’t need one either.

The switching center routes your connection to the Internet along with any regional services, similar to the uplink from your home network to the ISP. The connection between the regional switching center and your home is referred to as the “last mile”.

Here’s a question: does the regional switching center have enough bandwidth to provide maximum throughput to all the “last mile” lines?

* * * * *

As I said at the top, your ISP oversells your data rate. They do not have enough bandwidth available to provide every household with their maximum throughput 24 hours a day, 7 days a week.

Almost no one is using their maximum throughput 24 hours a day. Very, very, very few use even close to that. And a majority of households don’t use a majority of their available bandwidth. We overbuy because the ISP oversells. But that bandwidth does come in handy for those times you need it – such as when downloading a large game or updates for your mobile, computer, or game console.
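To illustrate with hypothetical numbers – the household count, plan size, and uplink here are made up purely for the math – this is what oversubscription on a neighborhood node looks like:

# Hypothetical numbers purely for illustration: 500 homes on 100Mbps
# plans sharing a 10Gbps uplink back to the ISP.

homes = 500
plan_mbps = 100
uplink_mbps = 10_000  # 10Gbps

sold_mbps = homes * plan_mbps
ratio = sold_mbps / uplink_mbps

print(f"Bandwidth sold: {sold_mbps / 1000:.0f}Gbps over a {uplink_mbps / 1000:.0f}Gbps uplink")
print(f"Oversubscription ratio: {ratio:.0f}:1")
print(f"If every home maxed out at once: {uplink_mbps / homes:.0f}Mbps each")

A 5:1 ratio like that only becomes a problem if everyone actually tries to use their full plan at the same time – which, as just noted, almost never happens.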

So ISPs do not need enough bandwidth to provide every household with full throughput around the clock. But there’s another reason why they won’t ever build to that level: idle hardware.

The hardware at the switching stations and junction boxes across the last mile is expensive to acquire, takes up space, and consumes power. So ISPs won’t acquire more hardware than they require to provide adequate service to their customers. The fact that most households don’t use most of their available bandwidth most of the time is what allows this to work.

Which means, then, that problems can arise when this no longer remains the case. Enter peer-to-peer networking.

* * * * *

Comcast outright blocking BitTorrent is an oft-cited example of why we “need” Net Neutrality. A lot of people do not know how BitTorrent works. Let me put it this way: it’s in the name.

Peer-to-peer protocols like BitTorrent are designed to maximize download speeds by saturating an Internet connection – hence the name, bit torrent. A few people using BitTorrent around the clock isn’t itself enough to create a problem, since ISPs have the bandwidth to allow for a few heavy users. When it becomes quite a bit more than just “a few”, however, their use affects everyone else on the ISP’s network.

Video streaming – e.g. Netflix, Hulu, YouTube – creates similar contention. Most video streaming protocols are designed to stream video up to either a client-selected maximum quality, or whatever the bandwidth will allow. Increasing video quality and resolution only created more contention across available bandwidth.

ISPs engage in “traffic shaping” to mitigate the problem. Traffic shaping is similar to another concept with which all network administrators and engineers are hopefully familiar: Quality of Service, or QoS. It is used to prioritize certain traffic over other traffic – e.g. a business or enterprise will prioritize VoIP traffic and traffic to and from critical cloud-based services over other network traffic. It can also be used to deprioritize, limit, or block certain traffic – e.g. businesses throttling YouTube or other video streaming services to avoid contention with critical services.
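The core mechanism behind a lot of shaping and rate limiting is the token bucket. Here’s a toy Python version – a minimal sketch of the idea, not how any particular ISP or vendor implements it:

import time

class TokenBucket:
    """Toy token-bucket shaper: traffic passes only as fast as tokens
    accumulate, smoothing bursts down to the configured rate."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # forward the packet now
        return False      # queue or drop it: the flow is over its rate

# e.g. shape a flow to roughly 1MB/s with a 64KB burst allowance
shaper = TokenBucket(rate_bytes_per_s=1_000_000, burst_bytes=64_000)
print(shaper.allow(1500))  # a typical Ethernet-sized packet passes while tokens remain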

And that traffic shaping – blocking or limiting certain traffic – was necessary to avoid a few heavy users making it difficult for everyone else to use the Internet service they were paying for. Regulating certain services to ensure a relative few weren’t spoiling it for everyone else.

Yet somehow that detail is often lost in discussions on Net Neutrality. And misconceptions, misrepresentations, and misunderstandings lead to bad policy.

* * * * *

So let’s talk “fast lanes” for a moment.

There has been a lot of baseless speculation, fear mongering, and doomsday predictions around this concept, along with what could happen with Internet service in the United States should Net Neutrality be allowed to completely expire.

While “fast lanes” have been portrayed as ISPs trying to extort money from Netflix, there’s actually a much more benign motive here: getting Netflix and others to help upgrade the infrastructure needed to support their video streaming. After all, it was the launch of Netflix’s streaming service, and its eventual consumption by millions of people, that led to massive degradation in Internet service for millions of other customers who weren’t streaming Netflix or much else.

As a means of alleviating traffic congestion, many metropolitan areas have built HOV lanes – “high-occupancy vehicle” lanes – to encourage carpooling (how successful that has been is highly debatable). The “fast lane” concept for the Internet was similar. But when the idea was first mentioned, many took it to mean that ISPs were going to artificially throttle websites that don’t pay up, a presumption mirrored in the Net Neutrality policy at the FCC. What it actually means is providing a separate bandwidth route for bandwidth-intense applications. Specifically video streaming.

The reason for this comes down to the structure of the entire Internet, which is one giant tree network. Recall from the network design discussion above that devices and services that see the most traffic should be integrated into the larger network in a way that maximizes throughput to those services while minimizing contention and bottlenecks for everything else. This can be achieved in multiple ways.

“Fast lanes”, contrary to popular belief, were intended to divert the traffic for the most bandwidth intense services around everyone else. An HOV lane, of sorts. Yet it was portrayed as ISPs trying to extort money from websites by artificially throttling them unless they paid up, or even outright denying that website access from their regional networks.

Yeah, that’s not even close to what was going to happen.

* * * * *

The “fast lanes” were never implemented, though. So what gives? Does that mean the fearmongers were right and ISPs were planning to artificially throttle or block websites to extort money? Not even close. Instead the largest Internet-based companies put into practice another concept: co-location, combined with load balancing.

Let’s go back to Netflix on this.

Before streaming, Netflix was known for their online DVD rentals, and specifically their vast supply. Initially they didn’t have very many warehouses where they stored, shipped, and received the DVDs (and eventually Blu-Rays).

But as their business expanded, they branched out and opened new warehouses across the country, and eventually across the world. They didn’t exactly have a lot of choice in the matter: one warehouse can only ship so many DVDs, in part because there is a ceiling (quite literally, actually) to how many discs can be housed in a warehouse, and how many people can be employed to package and ship them out and process returns.

This opened up new benefits for both Netflix and their customers. By having warehouses elsewhere, more customers were able to receive their movies faster than before since they were now closer to a Netflix warehouse. More warehouses also meant Netflix could process more customer requests and returns.

Their streaming service was no different. At the start there were not many customers streaming their movies, in large part because there weren’t many options. I think Xbox Live was the only option initially, and you needed a Live Gold subscription (and still do, I think) to stream Netflix. So having their streaming service served from one or two data centers wasn’t a major concern.

That changed quickly.

One of the precursor events was Sony announcing support for Netflix streaming on the PlayStation 3 in late 2009. Initially you had to request a software disc as the Netflix app wasn’t available through the PlayStation Store – I still have mine somewhere. And the software did not require an active PlayStation Network subscription.

Alongside the game console support was Roku, made initially just to stream Netflix. I had a first-generation Roku HD-XR. The device wasn’t inexpensive, and you needed a very good Internet connection to use it. Back when I had mine, the highest speed Internet connection available was a 20Mbps cable service through Time Warner. Google Fiber wasn’t even on the horizon yet.

So streaming wasn’t a major problem early on. While YouTube was supporting 720p and higher, most videos weren’t higher than 480p. But as more people bought more bandwidth along with the Roku devices and game consoles, contention started to build. Amazon and Hulu were also becoming major contenders in the streaming market, and additional services were springing up as well, though Netflix was still on top.

So to get the regional ISPs off their back, Netflix set up co-location centers across the country and in other parts of the world. Instead of everything coming from only one or a few locations, Netflix could divide their streaming bandwidth across multiple locations.

Load balancing servers at Netflix’s primary data center determine which regional data center serves your home based on your ISP and location – both of which can be determined via your IP address. Google (YouTube), Amazon, and Hulu do the same. Just as major online retailers aren’t shipping all of their products from just one or a few warehouses, major online content providers aren’t serving their content from just one or a few data centers.
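Conceptually, the load-balancing step looks something like this – a toy sketch with made-up region names and hostnames, not Netflix’s actual logic:

# Toy illustration only: map a client's region (derived from its IP)
# to the nearest co-location site, falling back to an origin data
# center when no regional cache exists. All names are hypothetical.

REGIONAL_SITES = {
    "us-midwest": "cache.kc.example.net",
    "us-east":    "cache.nyc.example.net",
    "eu-west":    "cache.ams.example.net",
}
ORIGIN = "origin.dc.example.net"

def pick_server(client_region):
    """Return the hostname the load balancer would hand this client."""
    return REGIONAL_SITES.get(client_region, ORIGIN)

print(pick_server("us-midwest"))   # cache.kc.example.net
print(pick_server("ap-south"))     # falls back to origin.dc.example.net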

This significantly alleviates bandwidth contention and results in better overall service for everyone.

* * * * *

But let’s get back to the idea of Net Neutrality and why what its proponents expect or demand isn’t possible. Though I feel I’ve adequately pointed out how it would never work the way people expect. So now, let’s summarize.

First, ISPs do not have enough bandwidth to give each customer their full bandwidth allocation 24/7. That alone means Net Neutrality is dead in the water, a no-go concept. ISPs have to engage in traffic shaping, along with employing different pricing structures to keep a few customers from interfering with everyone else’s service.

Unfortunately the fearmongering over the concept has crept into the public consciousness. With Net Neutrality now officially dead at the Federal Communications Commission – despite attempts in Congress and at State levels to revive it – a lot of people are going to start accusing ISPs of throttling over likely any amount of buffering when trying to watch an online video.

Anything that might look like ISPs are throttling anything will likely result in online public statements from customers accusing their ISP of nefarious things. Because people don’t understand the structure of the Internet and how everything actually works. And the structure of the Internet also spells doom for the various demands on ISPs through “Net Neutrality”.

Chicken and egg economics

How many times have you heard this argument with regard to the minimum wage?

If employers pay employees more, then they’ll have more money to spend, and that’ll make the economy better.

What came first: the revenue or the wage?

It sounds good to say paying employees more will stimulate the economy. But it isn’t that simple. For one, businesses need to have the money, the revenue, to cover the increased labor expense. In other words, the revenue comes first. This is why chain stores will close or cut back at under-performing locations.

Forcing a higher minimum wage on businesses that don’t have the revenue to cover it will result in lost jobs. Minimum wages are price floors, and the economic effects of price floors are well studied and easily demonstrable. Price floors price out market participants and result in an increased surplus of whatever is affected. Minimum wage hikes, then, result in a greater surplus of labor.
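If the price-floor mechanics aren’t intuitive, here’s a stylized example with made-up linear supply and demand curves for labor:

# Stylized linear supply and demand for labor, purely to illustrate why
# a binding price floor (a minimum wage above equilibrium) leaves a
# surplus of labor: more hours offered than employers will buy.

def demand(wage):   # hours of labor employers want at a given wage
    return max(0, 1000 - 40 * wage)

def supply(wage):   # hours of labor workers offer at a given wage
    return max(0, -200 + 60 * wage)

equilibrium_wage = 12   # demand(12) == supply(12) == 520 hours
floor_wage = 15         # a mandated minimum above equilibrium

surplus = supply(floor_wage) - demand(floor_wage)
print(f"At the floor: {supply(floor_wage)} hours offered, "
      f"{demand(floor_wage)} hours demanded, surplus of {surplus} hours")

Above the equilibrium wage, more labor is offered than employers will buy, and that gap is the surplus – people who want to work at that wage but can’t find the hours.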

And there’s also wage push inflation, also called cost push inflation, in which companies raise prices to cover higher minimum wages, as opposed to paying more or hiring more people due to increased revenue and demand for their products or services.

A lot of the arguments in favor of hiking the minimum wage presume the business owner can just absorb that new cost. As if all businesses are sitting on bottomless buckets of cash, or have a supply of revenue sufficient to cover a wage hike, and it’s just greed keeping those companies from paying more to their employees. Except as I’ve pointed out, with Wal-Mart of all companies, that’s just not true.

Along with this is another common argument that I also recently rebutted on Twitter in which someone said that, paraphrasing, companies could pay their employees a living wage if every C-level executive took a pay cut. Yet, as I’ve also pointed out with regard to Wal-Mart, the math just doesn’t work out.

The person in question spoke specifically of Amazon CEO Jeff Bezos and the fact he’s worth billions, but Bezos’s cash salary was only $1.6 million in 2017. The rest of his compensation, and the vast majority of his net worth, is stock, which has no cash value unless sold. If he took home no cash salary and it was instead divided up among all other employees, how much would they each get? Not even enough to buy a cup of coffee at Starbucks.
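The back-of-envelope math, assuming a headcount of roughly 566,000 – my approximation of Amazon’s reported workforce at the end of 2017, so treat it as ballpark:

# Rough arithmetic only. The $1.6 million figure is the salary cited
# above; the ~566,000 headcount is an approximation, not from Amazon.

salary = 1_600_000
employees = 566_000

print(f"${salary / employees:.2f} per employee, one time")  # roughly $2.83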

So in the riddle above, revenue obviously must come first. Typically a company won’t take on employees or raise wages unless they can afford it. And they’ll only do so within what they can afford – and the math determines whether they can. So if you force it on them by mandate of law, especially by a significant margin over the current minimum wage, you’re going to see diminished hours, layoffs, and businesses shutting down.

Singularity Computers, JayzTwoCents, and a journalistic debacle

Full disclosure: I provide financial support to Singularity Computers through their Patreon.

Singularity Computers is pushing a new case to market called Spectre. Linus Media Group received a custom version of it, and JayzTwoCents is using another custom version in a special build for rapper Post Malone.

Recently the Cairns Post published an article featuring Singularity Computers (behind a paywall). So let’s start with the passage from the article that has everyone in an uproar:

All-purpose assistant Sven Meyer said Singularity Computers’ YouTube channels had attracted more than 1 million followers and led American chart topper Post Malone to track the team down.

“He had some issues in one of his videos, all his followers pointed him in our direction because we had the solution,” Mr Meyer said.

“He dropped us an email, telling us every second comment, out of 10,000 comments were telling him we could help him.

“It is a huge opportunity for us and the exposure is great. Post Malone, even though he has no clue about computers, it is a very important deal.”

Sven wasn’t talking about Post Malone. He was actually talking about JayzTwoCents. The mention of Post Malone at the end of the passage, however, certainly creates the impression that Singularity Computers is trying to seize credit for the Post Malone project.

And the reaction to this has been far beyond reasonable.

Here’s what people are missing: Singularity Computers has never tried to take credit for the build. They have only ever taken credit for the custom Spectre chassis for the build. Given the exposure the project has received on YouTube and elsewhere, there is no way Singularity could even try to take credit for the entire build.

The response they are receiving over an impression created by a local newspaper is probably a few orders of magnitude lighter than what they would’ve received had they actually tried to take credit for it. Which means the blame for that impression falls on the journalist who wrote the article – who obviously didn’t do her due diligence, as I can readily demonstrate.

Back on December 13, 2017, JayzTwoCents published a video showcasing the start of a new personal build:

At the end of the video, he’s trying to think of a good way to mount the pump and reservoir to the front radiator. In the comments to this video, a lot of people recommended Singularity Computers. About two weeks later, JayzTwoCents posted a video continuing the build, and basically saying he got the hint:

Or to again quote Sven from the article: “He [JayzTwoCents] had some issues in one of his videos, and his followers pointed him in our direction because we had the solution.”

But I highly doubt that Sven said the Singularity Computers channel had attracted 1 million followers. He may have been, again, referring to JayzTwoCents, and the statement was taken out of context.

So how did the confusion occur?

This is speculation, but I feel it comes down to a need to trim the article, without proper diligence to make sure everything remained accurate. And this is apparent given the picture of the article that Daniel shared on his website and linked across social media (though all social media links have since been removed):

If you’ve ever worked for a newspaper, whether as a student journalist or professional, you likely are acutely aware of the need to keep an article within a certain publication space. And if you run long, your editor will require that you trim the article down. But the key is to trim in such a way that it is still accurate.

Clearly Alicia Nally didn’t maintain accuracy, provided the article was accurate to begin with. Her trimming resulted in an article that made it sound like Post Malone sought out Singularity Computers after being directed to them by a lot of comments, when it was actually JayzTwoCents who sought them out. And her failure to maintain accuracy is leading to the implication that Singularity Computers is seizing more credit than due for the Post Malone project, when they likely were reserved in what they said.

As of when this article went live, there has yet to be any comment, clarification, correction or retraction published by either Singularity Computers or the Cairns Post, or the article’s author, Alicia Nally. Hopefully such is forthcoming by all parties.

Update (2018-06-20): Singularity Computers issued a statement after pulling all posts related to the article from their website and social media:

It turns out [the article] was not just missing a bunch of facts but half of it was literally made up. Someone from the media spotted one of our team in the hardware store a few weeks ago and started asking him questions. They ended up interviewing him and we told them what we were up to with Jay building a case for the Post Malone build. So they took out half of the story and put up the article the next day without letting us proof read any of it.

Nasira with a little SAS

Build Log:

So I’ve had Nasira for a little over two years now. Here are the current specifications:

  • CPU: AMD FX-8350 with Noctua NH-D9L
  • Mainboard: ASUS Sabertooth 990FX R2.0
  • GPU: Zotac GT620
  • RAM: 16GB ECC DDR3
  • NIC: Chelsio S320E dual-SFP+ 10GbE
  • Power: Corsair CX750M
  • HDDs: 4x4TB, 2x6TB in three pairs
  • Chassis: PlinkUSA IPC-G3350
  • Hot swap bays: Rosewill RSV-SATA-Cage34

And it’s running FreeNAS. Which is currently booting off a 16GB SanDisk Fit USB2.0 drive. Despite some articles I’ve read saying there isn’t a problem doing this, along with the FreeNAS installer saying it’s preferred, I can’t really recommend it. I’ve had intermittent complete lockups as a result. And I mean hard lockups where I have to pull the USB drive and reinsert it for the system to boot from it again.

Given SanDisk’s reputation regarding storage, this isn’t a case of using a bad USB drive or one from a less-reputable brand. And I used that specific USB drive on the recommendation of someone else.

So I want to have the NAS boot from an SSD.

Now I could just put an SSD in one of the free trays in the hot-swap bay and call it a day – I have 8 bays, but only 6 HDDs. But I’m also planning to fill out the last two trays. So connecting the HDDs to a controller card would be better, opening up the onboard connections.

And I have a controller card. But not one you’d want to use with a NAS. Plus it wouldn’t reduce the cable bulk either. So what did I go for? An IBM M1015, of course, which I found on eBay for a little under 60 USD. Since that’s what everyone seems to use unless they’re using an LSI-branded card. But there’s a… complication.

The mainboard has 6 expansion slots, three of which are hidden under the power supply – it’s a 3U chassis, not a 4U. The exposed slots are two full-length slots (one x16, one x4), and a x1 slot. The full-length slots are currently occupied by the 10GbE card and graphics card (Zotac GT620).

The SAS card is x8. So the only way I can fit both the SAS controller and the 10GbE card is to use an x1 graphics card. Yes, those exist. YouTube channel RandomGaminginHD recently covered one called the Asus Neon:

And I considered the Zotac GT710 x1 card to get something recent and passively cooled, but didn’t really want to spend $40 on a video card for a server (eBay prices used and new were similar to Amazon’s price new).

Searching eBay a little more, I found an ATI FireMV 2250 with a x1 interface sporting a mind-blowing 256MB DDR2. Released in 2007, it was never made for gaming. It was advertised as a 2D card, making it a good fit for the NAS and other servers where you’re using 10GbE cards or SAS controllers in the higher-lane slots.

The card has a DMS-59 connector, not a standard DVI or VGA connector. But a DMS-59 to dual-VGA splitter was only 8 USD on Amazon, so only a minor setback.

I set up the hardware in a test system to make sure the cards work, and also to flash the SAS card from a RAID card to an HBA. Which seemed way more involved than it needed to be, in large part because I needed to create a UEFI-bootable device. These steps got it running on a Gigabyte 990FXA-UD3 mainboard: How-to: Flash LSI 9211-8i using EFI shell. Only change to the steps: the board would not boot the UEFI shell unless it was named “bootx64.efi” on the drive, instead of “shellx64.efi” as the instructions state. And at the end, you also need to reassign the SAS address to the card:

sas2flash.efi -o -sasadd [address from green sticker (no spaces)]

This may or may not be absolutely necessary, but it keeps you from getting the “SAS Address not programmed on controller” message during boot. So along with the mini-SAS to SATA splitters and 32GB SSD, I took Nasira offline for maintenance (oh God, the dust!!!) and the hardware swaps. Didn’t bother trying to get ALL of the dust out of the system since next year I’ll likely be moving it all into a 4U chassis. More on that later.

Things are a little crowded around the available expansion slots. But with an 80mm fan blowing on them, they’re not going to have too much of an issue keeping cool. LSI cards are known to run hot, and this one is no different. I attempted to remove the heatsink, but I think it’s attached with thermal adhesive, not thermal compound.

I do intend to replace the 80mm fan with a higher-performance fan when I get the chance. Or figure out how to stand up a 120mm fan in its place to have the airflow without the noise. Which is the better option, now that I think about it.

And as predicted, the cables are much, much cleaner compared to before. A hell of a lot cleaner compared to having a bundle of eight (8) SATA cables going between SATA ports. Sure there’s still the nest between the drive bays, but at least it isn’t complicated by SATA cables.

I took this as a chance to replace the 10GbE card as well: from a Chelsio S320 to a Mellanox ConnectX-2. The Chelsio is a PCI-E 1.0 card, while the Mellanox is a 2.0 card. So this allowed me to move the 10GbE card to the x4 slot and put the SAS card in the x16 slot nearest the CPU. FreeNAS 9.x did not support Mellanox chipsets, whereas FreeNAS 11 does.

I reinstalled FreeNAS to the boot SSD – 11.1-U5 is the latest as of when I write this. My previous installation was an 11.1 in-place upgrade over a 9.x installation. While it worked well and I never experienced any issues, I’ve typically frowned upon doing in-place upgrades on operating systems. I always recommend against it with Windows. It can work well with some Linux distros – one of the reasons I’ve gravitated toward Fedora – and be problematic with others.

Importing the exported FreeNAS configuration, though, did not work. Likely because it was applying a configuration for the now non-existent Chelsio card. But I was able to look at the exported SQLite database to figure out what I had and reproduce it – shares and users were my only concern. Thankfully the database layout is relatively straightforward.
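For anyone wanting to do the same, here’s roughly how I’d poke through the exported database with Python’s sqlite3 module. Table names differ between FreeNAS versions, so rather than hard-code any, this just lists the tables and dumps the ones that look share- or user-related:

import sqlite3

# Hedged sketch: the filename is whatever your exported config is called,
# and no table names are assumed since they vary between FreeNAS versions.
db = sqlite3.connect("freenas-v1.db")
db.row_factory = sqlite3.Row

tables = [r["name"] for r in db.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]

for table in tables:
    if "shar" in table.lower() or "user" in table.lower():
        print(f"--- {table} ---")
        for row in db.execute(f"SELECT * FROM {table}"):
            print(dict(row))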

Added storage

Now here is where things got a little tricky.

I mentioned previously that I had six (6) HDDs in three pairs: two 4TB pairs and one 6TB pair. And I said the 6TB pair was a WD Red at 5400RPM and a Seagate IronWolf at 7200RPM. Had I realized the RPM difference when I ordered them, I likely would’ve gone with an HGST NAS drive instead. So now is the opportunity to correct that deficiency.

The other Seagate 4TB NAS drives are 5900RPM drives compared to the 5400RPM on the WD Reds. But that isn’t nearly as much a difference as 7200RPM to 5400RPM – under 10% higher compared to 1/3rd higher, respectively.

Replacing the disk was straightforward – didn’t capture any screenshots since this happened after 2am.

  1. Take the Seagate IronWolf 6TB drive offline
  2. Prep the new WD Red 6TB drive into a sled
  3. Pull the Seagate drive from the hot swap bay and set it aside
  4. Slide the WD Red drive into the same slot the Seagate drive previously occupied
  5. “Replace” the Seagate drive with the WD Red drive in FreeNAS.

The resilver took a little over 5 hours. I’ve said before, both herein and elsewhere, that resilvering times are the reason to use mirrored pairs over RAID-Zx. And it’s the leading expert opinion from what I could find as well. And always remember that even this is not a replacement for a good backup plan.

The next morning, I connected the Seagate drive to my computer – Mira – using a SATA to USB3.0 adapter so I could wipe the partition table. And I prepped the second Seagate IronWolf drive into a sled. Slid both drives into the hot swap bays. And expanded the volume with a new 6TB mirrored pair.

40TB of HDDs split down into 20TB from mirrored pairs. Effective space is about 18.1TiB with overhead – 20TB = 20 trillion bytes = 18.19TiB – with about 8.7TiB of free space available.
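The unit math, for anyone wondering where 18.19TiB comes from:

# Drive makers sell decimal terabytes; FreeNAS reports binary tebibytes.
raw_bytes = 20 * 10**12              # 20TB usable from the mirrored pairs
print(f"{raw_bytes / 2**40:.2f} TiB")   # 18.19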

Future plans

8TiB is a lot of space to fill. It took two years to get to this point with how my wife and I do things. Sure the extra available space means we’ll be tempted to fill it – a “windfall” of storage space, so to speak. And with how I store movies, television episodes, and music on the NAS, that space could fill faster than expected.

Plus it is often advised to keep at least 20% of the pool’s space free where possible to avoid performance degradation. Prior to the expansion, I was approaching that threshold – under 4TiB free out of ~13TiB of space. So I expanded the storage with another pair of 6TB drives to back away from that 20%. That and to pair the 7200RPM drive with another 7200RPM drive.

So at the new capacity, I have basically until I get to about 3TiB before I start experiencing any ZFS performance degradation. The equivalent of nearly doubling my current movie library.

So let’s say, hypothetically, I needed to add more storage space. What would be the best way to do that now that all eight of the drive bays are filled? There are a few ways to go about it.

Option #1 is to start replacing the 4TB drives. Buying a 6TB or 8TB drive pair, and doing a replace and resilver on the first 4TB pair. Except that would add only 2TB or 4TB of free space. Not exactly a significant gain given the total capacity of the current pool. While this being an option is a benefit of using mirrored pairs over RAID-Zx, it isn’t exactly a practical benefit.

Especially since the 4TB drives are likely to last a very long time before I have to worry about any failures – whether unrecoverable read errors or the drive just flat out giving up the ghost. The drives are under such low usage that I probably could’ve used WD Blues or similar desktop drives in the NAS and been fine (and saved a bit of money). The only reason to pull the 4TB drives entirely would be repurposing them.

So option #2. Mileage may vary on this one. The 990FX mainboard in the NAS has an eSATA port on the rear – disabled currently. If I need to buy another pair of HDDs – e.g. 8TB – then I could connect an eSATA external enclosure to the rear. Definitely not something I’d want to leave long-term, though.

So what’s the better long-term option?

I mentioned previously about moving Nasira into a 4U chassis sometime next year. Possibly before the end of the year depending on what my needs become. Three of the expansion slots on the mainboard are blocked: a second x4 and x16 slot, and a PCI slot. The 990FX chipset provides 42 PCI-E lanes.

Getting more storage on the same system requires a second SAS card. Getting a second SAS card and keeping the 10GbE requires exposing those other slots. Exposing those slots requires moving everything into a 4U chassis. Along with needing an open slot for an SFF-8088 to SFF-8087 converter board. Then use a rack SAS expander to hold all the new drives and any additional ones that get added along the way.

Or buy a large enough SAS expander to hold the drives I currently have plus room to expand, then move the base system into a 2U chassis with a low profile cooler and low-profile brackets on the cards.

I’ll figure out what to do if the time comes.

AK-47 Espresso

Messenger Coffee’s espresso blend has been my go-to for at least the last year to 18 months. Messenger Coffee is in Lenexa, KS, and sold by several local coffee shoppes, including one within walking distance from my apartment. It has a very mellow, smooth taste when blended with milk. In my opinion it’s the perfect blend for lattes, at least of what I’ve tried so far.

And that includes Black Rifle Coffee Co.

Specifically their AK-47 espresso blend. When I saw my local gun range carrying their coffee – ground, not whole bean – I decided to give it a try after looking them up online and discovering their espresso blend.

For reference, I have an ECM Technika IV Profi espresso machine with a Compak K-3 grinder – a $3,000 home espresso setup. I’ll say up front this coffee blend was not made with my setup in mind. It overall tasted like it was made for cheaper equipment – such as my De’Longhi EC-155 before I made the upgrades.

And that showed as I reached the end of the bag.

A 12oz bag of beans typically lasts me about 2 weeks. I store the beans in a large Airscape that works great for keeping beans fresh throughout that time. And the freshness of the AK-47 blend degraded much faster than anything I’ve seen. To the point where the shots were coming out significantly faster – with no changes to the grinder – despite coming out a little slower than typical with the first shots I pulled.

With other espresso blends I’ve tried, from both Messenger Coffee and The Roasterie, I’ve never experienced that.

In all seriousness, the double-shot I pulled this morning looked like I’d used ground coffee that’d been sitting out in the portafilter overnight. It pulled fast with little crema. Coffee beans should not be degrading that fast.

But then espresso blends typically aren’t a mix of roast levels. That’s where AK-47 differs heavily. Despite being advertised on their website as a medium blend, AK-47 is actually a mix of both light and dark roasted beans. Yes, you read that right: light and dark roast. Hence why I said earlier this blend was not made for premium equipment. It’d probably work reasonably well with an entry-level machine and grinder, provided you keep your expectations low on coffee flavor or typically flavor your lattes.

Given my experience with different coffee blends over the… six years I’ve been a home barista, the combination of light and dark roasted beans is the only thing that sets AK-47 apart that explains the very fast degradation despite being kept as whole beans in an Airscape. Seriously, that degradation should not have happened. I’m almost tempted to buy another bag to try with vacuum bags to see if the same degradation occurs. There’s a reason most every espresso blend out there uses the same roast level – or they’ll mix medium with dark, not light with dark – and this is the reason.

In a latte, the coffee flavor stands out from the milk instead of being complemented by it. Meaning if I had pulled straight espresso shots with this blend, the flavor likely would’ve been overpowering enough that I’d try to rinse it away later. But this blend isn’t unique in that particular quality. So on that aspect, AK-47 doesn’t stand out.

But given how quickly the beans went stale, combined with the fact it’s not a great blend for lattes when fresh – or even as straight shots, for that matter – this is a blend I’ll be avoiding in the future.

Final verdict: 1 out of 5. It doesn’t bring a unique or pleasant flavor to the table, and the beans went stale way too quickly despite being sealed and ground only on demand.

Redefining terms to infringe the Second Amendment

Article: “What does it mean to ‘bear arms’?”, The Economist

I’m not going to waste much space on this. “Bear arms”, or “to keep and bear arms” means exactly what it says, and means exactly what it did back in 1791 when the Amendment was ratified. It means to own, carry, and use.

That is it.

No additional qualifiers like The Economist article attempts to include. The right of the people to keep and bear arms means the right of the individuals who make up “the people” to own, carry, and use firearms for lawful, defensive purposes. And it is a right that shall not be infringed.

It isn’t complicated. So stop acting like it is.

What makes this all the more insidious and aggravating is no one would say “what did they mean by ‘speech’?” with regard to the First Amendment. Yet with almost every word in the Second, and the idea of the right itself, they play this game. It needs to stop.

Speech and consequences

Yep, I’m jumping on this bandwagon. Talking about Roseanne. Or rather what others have been saying in the wake of her recent firing that boils down to this: Roseanne had every right to tweet, but she isn’t immune to the consequences of that speech.

It’s a notion that is not unfamiliar to me. Several years ago I said just that with regard to Duck Dynasty patriarch Phil Robertson. And back at the tail-end of 2013 it was an easy sentiment to hold, since it wasn’t my political ideology on the chopping block. It wasn’t my political ideology being targeted. And it’s rather interesting how those who scream the most about “consequences” are, in today’s political climate, the ones least likely to experience said “consequences”.

Who is more likely to lose their job and livelihood due to something said on Twitter, someone who leans left, or someone who leans right? Given some of the vile, racist garbage that’s come out of the left, I think the answer is clear. Indeed my article about Duck Dynasty shows this isn’t a recent development.

The swiftness with which Roseanne was “scrubbed” from current entertainment options shows this. Lightning-quick virtue signalling. And not just from ABC either. Because ABC, its parent company Disney, and everyone connected to the show likely would’ve faced swift backlash from leftists if they didn’t quickly remove her show and denounce her. Not what she tweeted. But her personally.

And they didn’t just remove her new season, both from the air and their website – the link to the Roseanne show now redirects to the ABC Go homepage. They went so far as to pull reruns of the original series. Hulu did the same. So far, as of this article, Amazon hasn’t capitulated, but they likely will as well – Season 10 is not available through Prime video streaming, but the original seasons are. And YouTube also still has the reboot as of this writing.

At the close of 2017, I wrote this:

RIGHTS limit how the government interacts with the People, PRINCIPLES limit how you interact with everyone else. That is why I’ve spent much of the last several years continually defending PRINCIPLES over rights.

Without the underlying principles of free speech and the presumption of innocence, for example, there is no foundation for the RIGHTS derived from those principles. Yet more and more I see those principles continually violated by people who claim to stand up for the rights derived from those principles.

That is largely the lesson I’ve learned over the last few years as I’ve seen people continuously disparaging the concept and principle of free speech. On the question of consequences, we must always ask ourselves what consequences are fair and legitimate.

And this consequence against Roseanne is hardly fair given the totality of the circumstance. Just as the temporary suspension Phil Robertson suffered in 2013 was also not fair to him, contrary to whatever statements I may have said at the time.

And I would say the same about any leftists who suffer similar fates due to their statements. Except at this point leftists largely know they are immune to those consequences. They don’t have to worry about losing their jobs and careers – those that have them, at least – for saying some rather vile shit. Which is why they have no problem screaming about consequences. Since they’ll likely never suffer them personally.

And yet they call me and people like me “privileged”.