18 March, 2018

8Bitdo NES30 Pro

8Bitdo is a company specializing in custom retro game controllers (gamepads). Their motto is: "Everything Old Is New Again".

Recently I picked up a pair of NES30 Pro editions. Not too expensive either, for what you get: a pro-grade game controller (i.e. it can handle some abuse) made for use with practically ANY retro (and even modern) game system!

Compatible with: Linux (desktop+RPi3), Mac OS X, Windows, Nintendo Switch, Android and iOS!

Everything I've tested them on works as advertised, and even some things that weren't listed! Easily paired over Bluetooth or connected with USB 2.0.

Best thing: the firmware can even be upgraded from desktop Linux!

15 February, 2018

Meltdown/Spectre + BSD


"Serverless Architecture"

Serverless computing refers to the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.

If you are an administrator, serverless architecture may be something to look into ASAP, as well as Functions-as-a-Service (FaaS) ;)
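The core idea is easy to sketch: you write just the function, and the platform invokes it once per request, scales it, and bills only for the invocation. A minimal, hypothetical handler in bash — the function name and payload convention are made up for illustration, not any particular FaaS platform's API:

```shell
#!/usr/bin/env bash
# Hypothetical FaaS-style handler: the platform calls it per request,
# hands it the payload, and tears it down afterwards. No server to manage.
handle() {
  local payload="${1:-world}"   # request payload; defaults for illustration
  echo "Hello, ${payload}!"     # the response the platform returns
}

handle "$@"
```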

12 February, 2018

DevOps + Development

DevOps is not easy.

As software transitions from a monolithic to a microservice architecture, organizations are adopting DevOps practices to accelerate delivery of features to customers and improve their experience.

Jumping into continuous testing without the right infrastructure, tools, and processes can be a disaster.

Continuous testing plays an important role in getting quality software to market fast. It requires several levels of monitoring with automated triggers, collaboration, and actions. Here's what is required:

  • Automatic Test Triggers to execute tests as software moves through the various stages – development / test / staging / production
  • Service Health Monitoring to automate feedback on failures
  • Test Result Monitoring to automate feedback on failures
  • Identifying Root Cause of Failure and analyzing test results
As one can imagine, this takes a hell of a toll on DevOps personnel.
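The trigger-and-feedback loop above can be sketched in a few lines of bash. This is only an illustration of the idea — run_tests and notify_team are hypothetical placeholders, not a real CI system's API:

```shell
#!/usr/bin/env bash
# Hypothetical continuous-testing hook: run each stage's test suite and
# automatically feed failures back to the team. Both functions are
# stand-ins -- a real pipeline would call its actual suite and chat/mail.
set -u

run_tests() {
  echo "running ${1} tests"             # stand-in for the real test suite
}

notify_team() {
  echo "ALERT: ${1} tests failed" >&2   # stand-in for automated feedback
}

for stage in development test staging production; do
  if run_tests "$stage"; then
    echo "stage ${stage}: OK"
  else
    notify_team "$stage"                # automated feedback on failure
    exit 1
  fi
done
```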

It is one of the most challenging fields today, simply because it requires a deep understanding of the principles, processes, and practices that the DevOps philosophy brings to the IT world.

Because that is what it is: a philosophy.

05 January, 2018

Meltdown & Spectre --update--

Yeah, sometimes being cutting edge does not grant you any extra security... This I know.

That a hardware vulnerability could go unchecked for a couple of decades, however, eluded even me. Even more so that it wasn't addressed or announced until very recently.

Turns out, almost every computing device I own has these bugs. And I find myself doing what many others do with vulnerable equipment that has little to no chance of being patched: I just isolate it.

Don't get me wrong, I've taken measures and patched / disabled low-level functions as best I could. But when the issue sits in the hardware itself, basically invisible to the operating system, there are limits to what I can do to fix it.

The ass-hats who made this stuff have to fix it properly, or someone considerably smarter than me has to do what they can to mitigate it as circumstances allow.

Which, from what I understand, isn't much, and it's massively complicated to boot. Those complications are the reason for the "considerable performance slowdown" that will result from the software fixes.


Seems these bugs / vulnerabilities have been blown totally out of proportion for the average computer user.

Slowdowns only show up under huge workloads (think Big Data databases, enterprise computing, etc.), so average Joe won't even notice a difference... I've been pretty busy patching / fixing my affected systems lately, both at work and at home, and I can't say I've noticed any significant slowdowns.

Not that I run huge workloads or globe-spanning database queries 24/7, but I've definitely got bigger and heavier workloads than the average user.

--- If people just patch their systems regularly, they'll be fine ---

If you want to be sure you actually have the bugs, you can run this bash script on Linux systems:
#!/usr/bin/env bash
# Print the CPU model, then the bug flags the kernel reports for it.
echo "cpuinfo           : $(awk -F': ' '/model name/ { print $2; exit }' /proc/cpuinfo)"
grep -i -m1 bugs /proc/cpuinfo
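Newer kernels (4.15 and later) also expose per-vulnerability mitigation status under sysfs. A quick check, assuming that interface is present on your kernel:

```shell
#!/usr/bin/env bash
# Kernels >= 4.15 report mitigation status per vulnerability in sysfs,
# one file each for meltdown, spectre_v1, spectre_v2, etc.
vuln_status() {
  local dir=/sys/devices/system/cpu/vulnerabilities
  if [ -d "$dir" ]; then
    grep -r . "$dir"    # prints each file's path and status line
  else
    echo "sysfs vulnerability reporting not available (kernel < 4.15?)"
  fi
}

vuln_status
```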