19 December, 2015

Keepin' busy

Been pretty productive (read: very busy) the last 6-7 months, doing some quite heavy service and maintenance work for a local IT startup.

Been responsible for the x86_64 Linux rack-based infrastructure and backend driving the various product frameworks, the domain and network security, plus client host security throughout the organization. Quite a lot of networking, plus some aggregated WiFi APs, etc.

The Linux rack servers have also been host-hardened and properly secured (SSH/TLS/SSL) against the threats facing online services in 2015.
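Without exposing the actual setup, the SSH side of that hardening mostly comes down to a handful of sshd_config directives; an illustrative excerpt (values and the whitelisted account are examples, not the real config):

```conf
# /etc/ssh/sshd_config (excerpt) - illustrative hardening directives
Protocol 2                    # SSH protocol 2 only
PermitRootLogin no            # no direct root logins
PasswordAuthentication no     # key-based logins only
AllowUsers sysadm             # hypothetical admin-account whitelist
MaxAuthTries 3
LoginGraceTime 30
```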

Without going into too much detail, it all revolves around embedded GPS tracking, web apps for viewing and controlling said tracking system, and the accompanying maintenance / service / troubleshooting involved in those systems and their server backends.

The servers run both the MariaDB and PostgreSQL database systems for serving application, geo, and PostGIS data in various parts of the application flow. I became responsible for servicing, maintaining, and facilitating import / export of SQL data for backups, relocation, and the like. The PostgreSQL database is even served from an iSCSI pool over a dedicated jumbo-frame Ethernet connection to a NAS rack, for extra speed when handling huge datasets.
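The import / export side is plain old dump tooling; a minimal sketch of the kind of one-liners involved (hosts, users, paths and database names below are made-up placeholders, not the real setup):

```shell
# Build the dump commands used for backups / relocation. These helpers
# only construct the command strings; hosts, users, paths and database
# names are hypothetical.
pg_backup_cmd() {
    # pg_dump in custom format (-Fc), restorable later with pg_restore
    echo "pg_dump -h $1 -U backup -Fc -f /backup/$2.dump $2"
}
maria_backup_cmd() {
    # mysqldump with a consistent snapshot for InnoDB tables
    echo "mysqldump -h $1 -u backup --single-transaction $2 > /backup/$2.sql"
}
```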

I've also been playing around with the web framework, doing some development of my own to learn the technology, and fixing various components in the active applications.

27 August, 2015

AMD hardware

I've known for some time now that AMD has seriously lacked effort on its native implementation of OpenGL on Linux, especially when used for gaming. Not to mention their proprietary (binary) graphics driver, "AMD Catalyst", which is a story all on its own...

They only JUST RECENTLY achieved OpenGL 4.5 compatibility, and that spec is by now over a year old! In technology terms, that's just sad.

They released ('2013, for the wider public) an in-house developed perf-tool (GPUPerfServer2) for optimizing Linux games using OpenGL. They did *NOT*, however, release a Linux client for this client/server framework; only a Linux-based server for running local OpenGL games, which you could then connect to from the OS X / Windows client (figures...).


A couple of days ago, I stumbled upon this picture in a Google+ post with the following title "Feral Interactive Buys AMD Hardware To Optimize Linux Games":


http://news.softpedia.com/news/feral-interactive-buys-more-amd-hardware-to-optimized-linux-games-490006.shtml?utm_content=buffer4c869&utm_medium=social&utm_source=plus.google.com&utm_campaign=buffer

Are developer companies actually taking their own steps towards Linux-specific hardware optimization?

It would seem so :) and I hope this starts a trend among similar game development companies in the future.

28 June, 2015

h3x - upgrade part II

Upgrade time! :) ...and this time h3x is undergoing a complete cabinet + gfx upgrade: from a rather minimalistic HTPC starter cabinet into a completely new two-sectioned, airflow-designed multi-cabinet :)

  • Fractal Design Node 804 cabinet
  • Corsair CX600 PSU
  • 3 x Corsair SP120 "Quiet Edition" fans
  • My new XFX Radeon R9 270X 2GB GDDR5 (1x HDMI, 2x DVI-D, 2x DP)

Disassembly


Silverstone MILO (ML03) when first built (nov'2011)
Silverstone MILO (ML03) disassembled.
The above picture shows my old HTPC cabinet after being almost completely stripped. The only things still left inside are the system SSD (Corsair Force GT) and the 400W PSU (Silver Power).

I did not foresee how hard it would be to dismantle the MILO HTPC mATX cabinet; everything was packed so closely together that it was near impossible to figure out where to start.

After viewing it from different angles, and bringing in a fresh set of eyes, everything got pulled out eventually.

Then it was time to re-mount everything.

Re-assembly


  • First: the PSU.
Right side.
Back-right profile.
  • Then; motherboard + RAM + CPU.
Front-left profile.
Left side.
  • And to finish it off: graphics-accelerator + HDD + SSD.

Left side: Component-compartment.
Right side: Storage- / power-compartment.

I now have an overhauled gaming rig / workstation / hypervisor with 3-4x more graphics-processing power than I previously had :P and it WILL get tested ;)

I opted to use the 3 Corsair SP120 fans in the front, to complement the total airflow through the cabinet. That makes a total of 4 fans in the front for intake and 2 in the back for exhaust.



08 June, 2015

"Unix is not an acceptable Unix"

This article really grinds my gears. The author tries to discount the use of command-line interfaces by calling them an obsolete interface technology (mostly reserved for developers and advanced system administrators).

He pontificates that modern Unices (like Mac OS X and Linux) suffer in exactly the same areas of complexity because of their Unix heritage. Well, I call bullshit...

  • Linux does not have a Unix-heritage, because it does not contain any UNIX®-code.
  • Computer complexity is really a catch-22 scenario; every time you modify something low-level, something may break on a lower level. Keeping this complexity usable at varying levels is a master-class balancing-act, and not to be taken lightly.
Yes, the Unix philosophy states that every program should do one job and do it well. And to some extent, Linux has done just that. BUT, since we're operating in a so-called "open-source community", where everyone has the right to voice concerns, fixes and feature improvements, the so-called "bloating" of CLI programs is a direct result of this fundamental right to speech, re-use and community involvement. Which, in itself, is beneficial to everyone using the software.

The reason lower-level programs use much the same functionality as the original UNIX® is that the programming language in question (C) has few alternative ways of performing low-level calls to the hardware, and is thus forced to do certain things in specific ways. Having these low-level functions available at any time, for whichever program needs them, is fundamental to a POSIX system. And re-using these functions to build / extend programs is the way it's always been done.

I can agree with his point about re-creating and duplicating core functions: if a program already provides the components needed for a specific task, duplicating them is unnecessary. But I can see the other side of the matter as well, from a developer's viewpoint. If you spent a lot of time (multiple iterations) developing and extending an already-implemented function just to get specific results presented in a specific manner, wouldn't you want to include it in the program, so you wouldn't have to re-implement it further down the line? And does it really affect normal users what kinds of extra filters and/or flags a program accepts on the command line? Not really...

Back in the day (the 70s and 80s), this was the only way of passing arguments to programs in a sequential fashion, and to some extent it still is. Hardware works on the principles of sequential data flow and execution, and unless we do something drastic to the hardware platform we are all using, this is the way it will continue to be used in low-level terms (C + asm).

Until we see newer and better ideas concerning the data-bus / CPU inter-connection, we will not be able to implement any innovative interface-functionality on a low level.

My point is this: CLI is not meant for normal users. It is not intended to simplify computing, rather, it was intended for developmental and operational purposes, created by the very same type of people who use it on a daily basis.

If you want ease-of-use and simplicity, use Apple or Microsoft, and supplement that usage by providing user-feedback to them for new features / functions and fixes. Don't just crap on the alternative(s) because "it doesn't conform to my unique idea of what user-friendliness is". Linux was never made to be user-friendly. Derivatives were made to suit that scenario, so use them, instead of slagging on about the lacking user-friendliness and simplicity of the OS itself.

To quote one of the best sayings I have ever read:
"Unix is very simple, it just needs a genius to understand its simplicity" --Dennis M. Ritchie

26 May, 2015

AMD Catalyst VS Gallium3D

Ok, I'll admit to fronting AMD's graphics cards the past few years... but that ain't happening anymore. After struggling with *broken* driver updates for the past two, I'm getting quite fed up.

Purging FGLRX and all its components EVERY TIME THEY GET UPDATED is seriously testing my patience. Not to mention the piss-poor OpenGL performance in heavy 3D-engine games (especially games that feature NVIDIA's PhysX).

The bugs, video artifacts and crashes eventually got so bad that I actually had to revert to the open-sauce driver (Gallium3D / MESA) just to play my Steam games... not that video acceleration worked flawlessly there either (VLC / XBMC).

And I have to say: with the right config, Gallium3D can deliver 3D performance as good as, if not better than, the binary proprietary driver. It doesn't work perfectly, though; if you plan on gaming with an AMD card on Linux, the binary driver (FGLRX) is still the safer bet.
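For reference, "the right config" here mostly means per-application driconf options, which the open driver also reads from the environment. A small hedged sketch (the game name is obviously a placeholder):

```shell
# Run a program with vsync disabled via Mesa's driconf option
# vblank_mode (0 = never synchronize to vblank). Handy for testing
# raw frame rates under Gallium3D.
run_without_vsync() {
    vblank_mode=0 "$@"
}
# e.g.: run_without_vsync ./some-steam-game
```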

Researching AMD / NVIDIA lately got me angry at both, but for different reasons.

AMD for being (completely) incompetent (Linux-)driver-wise. I kind of suspect they're trying the old "your shit is unsupported, buy new shit!"-routine to pressure new sales. Well, good luck with that AMD... you'll need it. Especially with that crappy driver.

NVIDIA for dropping more lock-in effects (like: PhysX / GameWorks / HairWorks) than actual bug-fixes and/or updates to their driver(s). But at least their driver works!

So, next time I'm thinking of upgrading my graphics, the choice will be fairly easy... NVIDIA.

AMD; seriously... get your shit together!

02 April, 2015

Logitech Unify on Linux

I recently had to replace all the peripherals for my primary desktop / workstation, and as always, the choice fell on Logitech. I ended up getting so-called "Unify" wireless devices (mouse / keyboard).

Upon reading about using Unify-devices on Linux, I realized some smart fellow had made a simple management-suite for Linux-systems, called Solaar.

Solaar allows pairing of multiple Unify devices to one "Unifying Receiver" (the USB dongle), through either a simple GUI tray application (GTK interface) or a simple command-line tool. It also monitors battery levels (with notifications) and checks whether the wireless links are encrypted. Much like the applications used on both Windows and Mac OS X.
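The command-line side is handy over SSH as well: 'solaar show' lists the receiver and paired devices, and 'solaar pair' puts the receiver in pairing mode. The exact text 'solaar show' prints varies between versions, so the battery-line format in this sketch is an assumption:

```shell
# Pull battery percentages out of "solaar show"-style status text on
# stdin. The "Battery: NN%" line format is assumed / illustrative.
battery_from_show() {
    grep -io 'battery: *[0-9]*%' | grep -o '[0-9]*%'
}
```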

Solaar, running as an Ubuntu desktop-tray widget.

Installing was as simple as running the following commands:
sudo add-apt-repository ppa:daniel.pavel/solaar && sudo apt-get update && sudo apt-get install solaar;
Solaar on the web: http://pwr.github.io/Solaar/

10 March, 2015

Fail2Ban

Fail2Ban works by scanning log files and reacting to offending actions (such as repeated failed login attempts) by using iptables to generate blocking rules, aimed at specific offending IP addresses, for any defined (listening) protocols / services.

I used to utilize DenyHosts, but as the project was discontinued I had to adapt. And so I also had to retract all my recommendations of DenyHosts and update them all to endorse Fail2Ban instead.

If using Ubuntu or Linux Mint, setting up and using Fail2Ban is easy.

It comes pre-configured (on Ubuntu) to detect malicious SSH activity, with a basic notification action.

Firstly, you just have to apt-get it:

sudo apt-get install fail2ban
Then, you just copy over the standard (Ubuntu) "skeleton" config:
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
Then stop and (re-)start Fail2Ban to load and run the new config:
sudo service fail2ban stop && \
sudo service fail2ban start
It can be customized to send e-mail alerts to designated addresses, via various other "actions".

Protocols and/or services are easily added to the config file if they aren't already present. Fail2Ban's config file uses an easy syntax (layout / format) for anybody with basic networking and logging knowledge.

I also tend to enable the "ssh-ddos" detection, since these days we're seeing more and more distributed attacks. There are more SSH-specific detections, but they're not strictly necessary.
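For reference, the relevant jail.local sections end up looking roughly like this on Ubuntu (thresholds are illustrative; adjust to taste):

```conf
[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 6

[ssh-ddos]
enabled  = true
port     = ssh
filter   = sshd-ddos
logpath  = /var/log/auth.log
maxretry = 6
```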

07 February, 2015

Best desktop gigabit switch


The Netgear GS108 has been a rock-solid, 24/7, always-on, seriously dependable Internet switch for office / desktop use.

I have been using one as an internal gigabit switch-point between gigabit LANs and an FTTH fiber gateway for 5+ years now. Not one outage!

It is, however, a dumb switch. Meaning: it is not a managed smart switch, but a plain unmanaged one. An extraordinary one at that :)

30 January, 2015

gethostbyname()

The last few years (2010-2015), I have been using a command line log-compiler tool I made, called "sshlog", to monitor incoming SSH-logins, both accepted and failed.

It is basically a crude (server-side) administration tool for remote use on mobile devices and tablets. In combination with "DenyHosts" (a Python program for blocking IPs that carry out repetitive and malicious SSH brute-force attempts), it proved to be a very solid security framework.

That is, until January this year.

Suddenly sshlog reported that no malicious SSH connections had been made in the last month, and DenyHosts had not added any new IPs since December last year. Weird...

While log-surfing and manually running my sshlog algorithm on the system logs, I noticed there HAD been malicious SSH brute-forcing (as usual)... but none of the offending addresses had been added to the blacklist, because 'auth.log' was full of warnings about unresolvable hostnames in '/etc/hosts.deny'.

After checking out /etc/hosts.deny, it became apparent that DenyHosts had been adding canonical hostnames (which, in turn, require DNS resolution back into IP addresses) instead of the IP addresses themselves.

I made my discovery thanks to this repeating warning (in auth.log) about /etc/hosts.deny:

warning: /etc/hosts.deny, line xxxx: can't verify hostname: getaddrinfo(<some-dynamic-ip>, AF_INET)

*SIGH*

First I had to stop DenyHosts, then manually 'sed' out all the entries that had been added as canonical hostnames, both from the DenyHosts config files and from /etc/hosts.deny, and lastly start DenyHosts up again...
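The sed step amounted to something like this sketch (treat it as illustrative). DenyHosts writes 'sshd: <address>' lines, so the filter drops every such entry whose address is not a plain dotted-quad IPv4 address, leaving comments and everything else alone:

```shell
# Print a hosts.deny with the bogus canonical-hostname entries removed:
# delete "sshd: <addr>" lines whose address is not a dotted-quad IPv4
# address; keep comments and all other lines as-is.
cleanup_hosts_deny() {
    sed -E '/^sshd: /{/^sshd: ([0-9]{1,3}\.){3}[0-9]{1,3}$/!d;}' "$1"
}
```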

The whole ordeal was apparently caused by none other than... myself -_-

I had enabled DenyHosts' hostname lookups for blocked IP addresses :P *Doh!* This in turn forced DenyHosts to add dynamic hostnames to /etc/hosts.deny, whether they were legitimate addresses or not.

Fact is, many of these malicious SSH auto-dialers use dynamic IP addresses that won't resolve to legitimate hostnames in DNS (no registered reverse lookup, or similar); thus they were added to '/etc/hosts.deny' with (invalid) canonical hostnames that don't have matching forward and reverse DNS records.

27 January, 2015

IT article-comments

Found this very amusing comment on G+ following this article:

Microsoft is a dead man walking.  They represent the last man standing in the first great public software rip off of the 1980s.  The won the brutal "slog" to dominance in that world but the result is a disaster for them.  The corpse they stitched together from parts of dead competitors is obsolete and coming apart at the seams.  Despite decades of effort and purchase of some of the world's finest companies, they can't make their software competitive on servers, cell phones, music players, or anything outside the rapidly shrinking x86 market.  They can't compete with the free world.  Unable to steal the current generation of free software by software patent Ponzi schemes, they must again pretend they love their users and competitors in a last ditch effort to raise another generation of coders to rip off.  Nope, that's not going to happen.
Funny, funny shit :P but does it also hold some sense of truth? Maybe... maybe it does.

26 January, 2015

Got your IP banned?

Here is a rather useful tutorial / how-to I found online on getting removed from IP blacklists ;)

IP blacklisting is an Internet blocking practice that adds IP addresses (used by known cyber-criminals, companies with bad reputations, script kiddies and/or botnets) to a so-called "known bad addresses" (black)list, which is then sold to service providers and other similar Internet companies as a filtering service.
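Most DNS-based blacklists are queried the same way: reverse the IP's octets, append the list's zone, and look the resulting name up; getting an answer back means "listed". A small sketch (using Spamhaus' zone as the example):

```shell
# Build the DNSBL query name for an IPv4 address, e.g.
# 1.2.3.4 + zen.spamhaus.org -> 4.3.2.1.zen.spamhaus.org
# (which you would then resolve with dig/host to check for a listing).
dnsbl_query_name() {
    echo "$1" | awk -F. -v zone="$2" '{ print $4"."$3"."$2"."$1"."zone }'
}
```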

The blog-post also contains a list of removal-links to various blacklist-providers.

http://blog.online-domain-tools.com/2015/01/26/how-to-get-removed-from-blacklists/

23 January, 2015

Government backdoors?

Quite a few people have approached me and asked my opinion on government-implemented backdoors in open-sauce software. Well, I usually refer them to this YouTube clip:

06 January, 2015

New smart-phone


Due to certain circumstances (shattered my Nexus 4 :P), I had to get a new smart-phone. I had already pre-chosen my make and model after testing a buddy's Nexus 5.

:) Massively content with CPU and memory performance, coupled with 4G (LTE) network-capabilities; a definite must-have for superusers and devs :D

It is driven by a Qualcomm Snapdragon 800 2.26GHz quad-core processor coupled with an Adreno 330 GPU, supplemented by 2GB of LPDDR3-1600 RAM, 16 or 32GB of internal storage, an 8 megapixel rear-facing camera with OIS (Optical Image Stabilization), and a 1.3 megapixel front-facing camera.

It is also a rather great smart-phone for those who are tired of the typical bloatware (mostly user interface additions / modifications / replacements, and some resource-heavy services) on manufacturer-branded sets (Samsung, HTC, Sony, etc.).



"About" menu-page on Android L.
I wasn't in any way disgusted when it got upgraded to Android 5.x "Lollipop", also known as "Android L". But the 5.0.1 bugfix release was dearly appreciated after struggling with battery trouble on 5.0.
Software update popup on Android L.
Additions and modifications:
  • GUI-modifications (there were a few, notice the navigation-buttons on the screenshots)
  • new and better security-features
  • and function-rewrites (I'm used to it, been using Android since v1.5).
Actually, I was really impressed by the speed and performance of the totally rewritten runtime, ART (Android RunTime), which replaced Dalvik (used in Android versions 1.5.x through 4.4.x) in 5.x and up.

05 January, 2015

Google Chrome / Chromium browser


I have been a promoter of Google-branded software for some time now. Basically, because they do it right!

And I am especially fond of Google's Chrome / Chromium browser(s). They have adequate security (sandboxing, etc.) and superb GUI handling / principles (tabs, integrated tools, etc.). In my opinion (and that of a lot of others), it is the all-in-one, multi-purpose web tool for both users and developers.

But, I tend to hear a lot of complaining about "chrome getting slower over time", "chrome tends to crash rather abruptly", "it's beta-software, don't use it, use explorer", and the like.

I only have one thing to say about the matter:
STOP INSTALLING PLUGINS AND EXTENSIONS WITHOUT CAUTION!

Extending a software package with plugins can be a good thing, indeed, when it's necessary, but it really is a double-edged sword... too many extensions slow the loading / writing and even the exiting of a program / OS. OS-specific resource handling suffers as well, which in turn gets misinterpreted as performance issues in the program itself.



This particular advice also applies to operating systems (you know: OS X, Windows and Linux); the more crap (especially third-party bloatware and/or fake services) hogging precious resources, the more sluggish and decrepit your system will appear to be.

Mac OS X and Windows also suffer from bloatware, malware and, occasionally, viruses. All of which can be prevented and removed easily, if you prepare!

Updating core system components is vital! A properly updated system makes it that much harder for a potential cracker to break in / steal / hijack / sabotage.

Be vigilant, clean your shit (browser-cache(s), OS-specific temp-files, etc) and secure your system(s) to the best of your ability (or the abilities of a more competent friend / family-member / co-worker).
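As one concrete (and hedged) example of that kind of cleanup, here is the sort of thing a cron job can do for cache directories; the 30-day threshold and the target directory are placeholders:

```shell
# Delete files in a cache directory that haven't been modified in
# more than 30 days. Point it at e.g. ~/.cache/some-app (placeholder).
prune_old_cache() {
    find "$1" -type f -mtime +30 -delete
}
```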

04 January, 2015

AMD Radeon HD 6xxx / 7xxx / GCN

I must admit, still being able to run newer 3D-accelerated games (2014+) on Linux, at acceptable resolutions (720p/1080p), on a rather outdated (but not deprecated) AMD Radeon HD 66xx-series discrete GPU ('2012) is quite amazing.

Sure, there are hiccups with certain games (mostly b/c of the cross-platform game-engine selected for development), but they'll get ironed out (eventually...) I'm not worried ;)

And, yes... I realize certain games are optimized for Nvidia-cards, but AMD is finally catching up on driver-compatibility / -support and -optimization.

The open-source Linux-driver (Radeon Gallium3D) and the proprietary Linux-driver (Catalyst / "Omega") both showed amazing feature-progress and bug-fixing in 2014, and the Catalyst-driver even benchmarked better than the Windows-version: http://www.phoronix.com/scan.php?page=article&item=amd_cat_winlin2014&num=1.

Performance and feature additions for 2014:

  • OpenMAX / AMD video encode, UVD for older AMD GPUs.
  • various new OpenGL extensions.
  • continued work on OpenCL.
  • power management improvements.
  • ...and the start of open-source HSA.
The benefits mostly apply to the newer HD 7xxx / GCN (Graphics Core Next) series of GPUs, but the driver tests also showed the HD 6xxx series (3+ years old) still improving in both performance and bug-fixes.

From personal experience, gaming on the HD 6xxx-series is adequate for newer games (2013-2014+). But, if you want awesomesauce eyecandy graphics, tip: go for a higher series-range card.

I currently have a Sapphire Radeon HD 6670 1GB 64-bit GDDR5 PCIe-2.1 discrete graphics card in my h3x4c0m box; the card sports a GPU with a total of 480 stream processors. I got the card b/c my deprecated Nvidia card sported a whopping 48(!) CUDA cores (lol); suffice it to say, it should have been binned a long time ago, or at least moved to a crap-box to act as a PVR or TV tuner or something.


If you are a hardcore ultra-gamer, stick with Nvidia. But AMD is still catching up, so keep an eye on them; they proved their game on Linux in 2014, and may even be accelerating newer methods of both integrated graphics (APU+GPU) and discrete graphics (GPU) in the near future ;)

Phoronix article-links:
http://hardware.slashdot.org/story/14/12/27/2234222/phoronix-lauds-amds-open-source-radeon-driver-progress-for-2014?sdsrc=rel
http://linux.slashdot.org/story/15/01/03/1426208/amd-catalyst-linux-driver-catching-up-to-and-beating-windows?utm_source=rss1.0mainlinkanon&utm_medium=feed