18 May, 2018


I was reading this article a couple of weeks ago, and was sure tempted to get one... and I am a weak individual when it comes to g33k-marketing, I know this...

So... I ended up shelling out the wet stinky, and it is on its way in the post 😋

Ubuntu on Nintendo Switch
Yes. Indeed. It will be used for its supposed main function...

But, I will also tinker and experiment with this gadget to my heart's content 😅 😎

02 May, 2018

Continuous Integration and Deployment

Continuous Integration is the practice of constantly merging development work into a Master/Trunk/Mainline branch so that you can test changes, and test that those changes work with other changes. The idea here is to test your code as often as possible so you can catch issues early on. In the continuous integration process, most of the work is done by automated testing, which requires a unit-test framework. It is best practice to have a build server designed specifically for running these tests, so your development team can continue merging requests even while tests are being performed...
Yes, automation here is key.
...Continuous Delivery is the continual delivery of code to an environment once the developer feels the code is ready to ship - this could be UAT (User Acceptance Testing), staging or production. The idea behind continuous delivery is that you're constantly delivering code to a user base, whether it be QA or customers directly, for continual review and inspection. Although similar to continuous integration, continuous delivery differs in that it can also exercise business logic - catching the business-logic and design issues that unit tests cannot.

...Continuous Deployment is the deployment or release of code to production as soon as it’s ready. There is no large batching in staging nor a long UAT (User Acceptance Testing) process before production. Any testing is done prior to merging to the Mainline branch and is performed on production-like environments. The production branch is always stable and ready to be deployed by an automated process. The automated process is key because it should be able to be performed by anyone in a matter of minutes (preferably by the press of a button).
And after all that: log-auditing after deployment, and checking whether key metrics are influenced negatively or positively by the change(s).

In the ideal workflow, the entire process could be automated from start to finish:

  • Step 1: Developer checks in code to the development branch.
  • Step 2: Continuous integration server picks up the change, merges it with Master/Trunk/Mainline, performs unit tests and votes on the merge to the staging environment based on test results.
  • Step 3: If Step 2 is successful, the developer deploys it to the staging environment and QA tests the environment.
  • Step 4: If Step 3 passed, you vote to move to production, and the continuous integration server picks this up again and determines if it's ok to merge into production.
  • Step 5: If Step 4 is successful, it will deploy to the production environment.

This process varies slightly based on needs, requirements and approaches.
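The five steps above could be sketched as a minimal script. This is only an illustration of the flow, not any specific CI product's syntax; every command that would touch a real system (the merge, the test runner, the deploys) is a placeholder echo:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the automated workflow above.
set -euo pipefail

run_unit_tests() {           # Step 2: the CI server's unit-test stage
    echo "running unit tests"
    return 0                 # stand-in for the real test command's exit code
}

deploy() {                   # Steps 3 and 5: deploy to the named environment
    echo "deploying to $1"
}

echo "merging development into Mainline"   # Steps 1-2: CI picks up the change

if run_unit_tests; then
    deploy staging           # Step 3: QA tests the staging environment
    deploy production        # Steps 4-5: the vote passed, ship it
else
    echo "unit tests failed - merge rejected" >&2
    exit 1
fi
```

In a real pipeline the vote in Step 4 would of course be a manual gate or an approval rule, not an unconditional fall-through to production.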

24 April, 2018

Need for security-professionals in Norway

Yes, it's been an often-discussed topic in Norwegian media in recent years:

"Lack of security-professionals."

Well, as commented in this (Norwegian) article, BY a security-professional: there seems to be a lack of security-oriented IT professionals, but not because they aren't out there at all. They are. What is seriously lacking in this scenario is competence in the recruiting firms looking for this kind of talent. Always has been.

Computer-security is not a fixed-set field, AT ALL. Even though a lot of so-called "professionals" seem to be stuck on the idea that it is.

Serious professionals wanting to work in this field, on the other hand, are (often) painfully aware of what doing so actually entails:

  • constant refreshing on networking- / computing- / vulnerability-security in IT
  • vulnerability-monitoring of often-used software in the company
  • a simple awareness of the fact that: nobody is ever 100% secure

Computer-security is a balancing act; does securing something vulnerable affect normal operations? Or is the fix / security-measure absolutely needed for normal operations to continue? These are everyday obstacles a security-professional has to deal with on a regular basis, so they have to be quite flexible about expanding their knowledge-base, and often.

These points are often completely missed by recruiters. They don't look for ability / knowledge / flexibility; they tend to look only at academic degrees (preferably multiple(!)), gender, published articles / blog-posts and other unrelated (and often quite unrealistic) demands for the position(s) in question.

Then, they complain about not finding any candidates for their outrageous requirements.

Seriously, redefine your demands / requirements to a more realistic level, and maybe you'll find a competent person to do the job. But you most certainly will NOT find the dream-candidate with the kind of demands currently set as standard.

17 April, 2018

when PIGS FLY!!

"After 43 years, this is the first day that we are announcing, and will be distributing, a custom Linux kernel," Microsoft President Brad Smith said.

Yeah, well, OSS / Linux won...

18 March, 2018

8Bitdo NES30 Pro

8Bitdo is a company specializing in custom retro game-controllers (gamepads). Their motto is: "Everything Old Is New Again".

Recently I picked up a pair of NES30 Pro editions. Not too expensive either for what you get. A pro-grade game controller (ergo: can handle some abuse) made for use with practically ANY retro (and even today's) game-system!

Compatible with: Linux (desktop+RPi3), Mac OS X, Windows, Nintendo Switch, Android and iOS!

Everything I've tested them on functions as advertised, and even some systems that weren't listed! Easily paired over Bluetooth or connected with USB 2.0.

Best thing: its firmware can even be upgraded through desktop-Linux!

15 February, 2018

Meltdown/Spectre + BSD


"Serverless Architecture"

Serverless computing refers to the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.

If you are an administrator, serverless architecture may be something to look into ASAP, as well as Functions-as-a-Service (FaaS) ;)

12 February, 2018

DevOps + Development

DevOps is not easy.

As software transitions from a monolithic to a microservice architecture, organizations are adopting DevOps practices to accelerate delivery of features to customers and improve their experience.

Jumping into continuous testing without the right infrastructure, tools, and processes can be a disaster.

Continuous testing plays an important role in achieving quality at speed to market. It requires several levels of monitoring with automated triggers, collaboration, and actions. Here's what is required:

  • Automatic Test Triggers to execute tests as software transitions from various stages – development / test / staging / production
  • Service Health Monitoring to automate feedback on failures
  • Test Result Monitoring to automate feedback on failures
  • Identifying Root Cause of Failure and analyzing test results

As one can imagine, this takes a hell of a toll on DevOps-personnel.
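The monitoring loop above could be sketched roughly like this. Everything here is a placeholder - the stage name, the health probe and the test runner are assumptions for illustration, not any specific monitoring product:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: probe service health, trigger the stage's test suite,
# and push automated feedback on failure for root-cause analysis.
set -euo pipefail

STAGE="${1:-staging}"        # which stage just received a new build

check_service_health() {     # Service Health Monitoring
    echo "probing $STAGE health endpoint"
    return 0                 # stand-in for e.g. an HTTP health-check
}

run_stage_tests() {          # Automatic Test Trigger for this stage
    echo "running $STAGE test suite"
    return 0                 # stand-in for the real test runner
}

notify() {                   # Test Result Monitoring: automated feedback
    echo "ALERT: $1" >&2     # stand-in for paging / chat-ops / ticketing
}

if ! check_service_health; then
    notify "$STAGE unhealthy - investigate root cause before testing"
    exit 1
fi

if ! run_stage_tests; then
    notify "$STAGE tests failed - collect results for root-cause analysis"
    exit 1
fi

echo "$STAGE: all checks passed"
```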

It is one of the most challenging fields today. Simply because it requires a deep understanding of the right principles, processes and practices that the DevOps philosophy is bringing to the IT world.

Because that is what it is: a philosophy.

05 January, 2018

Meltdown & Spectre --update--

Yeah, being on the cutting edge does not always grant you any extra security... this I know.

That a hardware-vulnerability has gone unchecked for a couple of decades, however, eluded even me. Even more so that it wasn't addressed / announced until very recently.

Turns out, almost every computing-device I own has these bugs. And I find myself doing as many others do with vulnerable equipment that has little to no chance of being patched; I just isolate it.

Don't get me wrong, I've taken measures and patched / disabled low-level functions as best I could. But when the issue is basically invisible (ring -3), there are limits to what I can do to fix it.

The ass-hats who made the shit have to fix it properly, or someone considerably smarter than me has to do what they can to mitigate it as the circumstances allow.

Which, from what I understand, isn't much, and it's massively complicated to boot. Those complications are the reason for the "considerable performance slow-down" that will result from the software-fixes to the issue.


Seems these bugs / vulnerabilities have been blown totally out of proportion for the average computer-user.

Slowdowns only present themselves under huge workloads (think Big Data databases, enterprise computing, etc.), so average-Joe won't even notice any difference... I've been pretty busy patching / fixing my affected systems lately, both at work and at home, and I can't say I've noticed any significant slowdowns.

Not that I've got huge workloads, or globe-spanning database-queries running 24/7, but I've definitely got bigger and heavier workloads than the average man.

--- If people just patch their systems regularly, they'll be fine ---

If you want to check whether you actually have the bugs, you can run this bash-script on Linux systems:
#!/usr/bin/env bash
# Print the CPU model, then the kernel's list of known hardware bugs
# (the "bugs" line only exists on kernels new enough to report them).
echo "cpuinfo           : $(grep -m1 'model name' /proc/cpuinfo | cut -d: -f2-)"
grep -m1 -i '^bugs' /proc/cpuinfo