28 June, 2015

h3x - upgrade part II

Upgrade-time! :) ...and this time h3x is getting a complete cabinet + gfx upgrade: from a rather minimalistic HTPC starter-cabinet into a completely new, two-sectioned, airflow-designed multi-cabinet :)

Fractal Design Node 804 cabinet
Corsair CX600 PSU
3 x Corsair SP120 "Quiet Ed" fans
My new XFX Radeon R9 270X 2GB GDDR5:
1xHDMI - 2xDVI-D - 2xDP

Disassembly


Silverstone MILO (ML03) when first built (nov'2011)
Silverstone MILO (ML03) disassembled.
The above picture shows my old HTPC cabinet after being almost completely stripped. Only things still left inside are the system-SSD (Corsair ForceGT) and the 400W PSU (Silver Power).

I did not foresee how hard it would be to dismantle the MILO HTPC mATX-cabinet; everything was packed so close together that it was nearly impossible to figure out where to start.

After viewing it from different angles and bringing in a fresh set of eyes, everything got pulled out eventually.

Then, the time to re-mount everything again came around.

Re-assembly


  • First: the PSU.
Right side.
Back-right profile.
  • Then; motherboard + RAM + CPU.
Front-left profile.
Left side.
  • And to finish it off: graphics-accelerator + HDD + SSD.

Left side: Component-compartment.
Right side: Storage- / power-compartment.

I now have an overhauled gaming-rig / workstation / hypervisor with 3-4x more graphics-processing power than I previously had :P and it WILL get tested ;)

I opted to use the 3 Corsair SP120 fans in the front, to complement the total airflow going through the cabinet. So that's a total of 4 fans in the front for intake and 2 in the back for exhaust.



08 June, 2015

"Unix is not an acceptable Unix"

This article really grinds my gears. The author tries to discount the use of command line interfaces by calling them an obsolete interface-technology (mostly reserved for developers and advanced system administrators).

He pontificates that modern Unices (like Mac OS X and Linux) suffer in exactly the same areas of complexity because of their Unix-heritage. Well, I call bullshit...

  • Linux does not have a Unix-heritage, because it does not contain any UNIX®-code.
  • Computer complexity is really a catch-22 scenario; every time you modify something low-level, something may break on a lower level. Keeping this complexity usable at varying levels is a master-class balancing-act, and not to be taken lightly.
Yes, the Unix-philosophy states that every program should do one job, and do it well. And to some extent, Linux has done just that. BUT, since we're operating in a so-called "open-source community", where everyone has the right to voice concerns, fixes and feature-improvements, the so-called "bloating" of CLI-programs is a direct result of this fundamental right to speech, re-use and community involvement. And that, in itself, is beneficial to everyone using the software.
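As a tiny, concrete illustration of that philosophy (my own throwaway sketch, not from the article): small, single-purpose tools composed into a one-off report.

```shell
# Count the three most common words in a stream by chaining
# single-purpose tools; each one does one job, and does it well.
printf 'foo bar foo baz foo bar\n' \
  | tr ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -3
```

None of these tools knows about the others; the shell pipe is what glues them into something useful.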

The reason why lower-level programs use much the same functionality as the original UNIX® is that the programming language in question (C) has few alternative methods of performing low-level calls to the hardware, and thus is forced to do certain things in specific ways. Having these low-level functions available at any time, for whichever program needs them, is fundamental to a POSIX-system. And re-using these functions to build / extend programs is the way it's always been done.

I can agree with his point about re-creating and duplicating core-functions: if a program already facilitates the components needed to do a specific task, duplicating them is unnecessary. But I can see the other side of this matter as well, from a developer-viewpoint. If you spent a lot of time (multiple iterations) developing and extending an already-implemented function, just to get specific results presented in a specific manner, wouldn't you want to include it in the program so you wouldn't have to re-implement it if it's ever needed again? And does it really affect normal users what kind of extra filters and/or flags a program accepts from the command-line? Not really...

Back in the day (70s and 80s), this was the only way of specifying arguments to programs in a sequential fashion, and to some extent it still is. Hardware works on the principles of sequential data-flow and execution, and unless we do something drastic to the hardware-platform we are all using, this is the way it will continue to be used in low-level terms (C + asm).

Until we see newer and better ideas concerning the data-bus / CPU interconnect, we will not be able to implement any innovative interface-functionality at a low level.

My point is this: the CLI is not meant for normal users. It is not intended to simplify computing; rather, it was intended for developmental and operational purposes, created by the very same type of people who use it on a daily basis.

If you want ease-of-use and simplicity, use Apple or Microsoft, and supplement that usage by providing user-feedback to them for new features / functions and fixes. Don't just crap on the alternative(s) because "it doesn't conform to my unique idea of what user-friendliness is". Linux was never made to be user-friendly. Derivatives were made to suit that scenario, so use them, instead of going on about the lack of user-friendliness and simplicity in the OS itself.

To quote one of the best sayings I have ever read:
"Unix is very simple, it just needs a genious to understand it's simplicity" --Dennis M. Ritchie

26 May, 2015

AMD Catalyst VS Gallium3D

Ok, I'll admit to fronting AMD's graphics-cards the past few years... but that ain't happening anymore. After struggling with *broken* driver-updates for the past two, I'm getting quite fed up.

Purging FGLRX and all its components EVERY TIME THEY GET UPDATED is seriously testing my patience. Not to mention the piss-poor OpenGL performance in heavy 3D-engine games (especially games that feature NVIDIA's PhysX).

The bugs, video-artifacts and crashes eventually got so bad that I actually had to revert to the open-sauce driver (Gallium3D / MESA), just to play my Steam-games... not that video-acceleration worked flawlessly either (VLC / XBMC).

And I have to say: with the right config, Gallium3D can deliver as good, if not better, 3D-performance than the binary proprietary driver ever could. Gallium3D/MESA works, but not perfectly. If you plan on gaming with an AMD-card on Linux, their binary driver (FGLRX) is the safer bet.

Researching AMD / NVIDIA lately got me angry at both camps, but for different reasons.

AMD for being (completely) incompetent (Linux-)driver-wise. I kind of suspect they're trying the old "your shit is unsupported, buy new shit!"-routine to pressure new sales. Well, good luck with that AMD... you'll need it. Especially with that crappy driver.

NVIDIA for dropping more lock-in effects (like: PhysX / GameWorks / HairWorks) than actual bug-fixes and/or updates to their driver(s). But at least their driver works!

So, next time I'm thinking of upgrading my graphics, the choice will be fairly easy... NVIDIA.

AMD; seriously... get your shit together!

02 April, 2015

Logitech Unify on Linux

I recently had to replace all my peripherals for my primary desktop / workstation, and as always; the choice fell on Logitech. I ended up getting so-called "Unify" wireless devices (mouse / keyboard).

Upon reading about using Unify-devices on Linux, I realized some smart fellow had made a simple management-suite for Linux-systems, called Solaar.

Solaar allows pairing of multiple Unify-devices to one "Unifying Receiver" (the USB-receiver dongle), through either a simple GUI tray-application (GTK-interface) or a simple command-line tool. It also monitors battery-levels (with notifications) and checks whether the wireless links are using encryption or not. Much like the applications offered on both Windows and Mac OS X.

Solaar, running as an Ubuntu desktop-tray widget.

Installing was as simple as running the following commands:
sudo add-apt-repository ppa:daniel.pavel/solaar && sudo apt-get update && sudo apt-get install solaar;
Solaar on the web: http://pwr.github.io/Solaar/

10 March, 2015

Fail2Ban

Fail2Ban works by scanning log files and reacting to offending actions, such as repeated failed login attempts, by using iptables to generate blocking-rules against the specific offending IP-addresses, for any defined (listening) protocols / services.

I used to utilize DenyHosts, but as the project was discontinued I had to adapt. And so I also had to retract all my recommendations of DenyHosts and update them all to endorse Fail2Ban instead.

If using Ubuntu or Linux Mint, setting up and using Fail2Ban is easy.

It comes pre-configured (on Ubuntu) to detect malicious SSH-activity, with a basic notification action.

Firstly, you just have to apt-get it:

sudo apt-get install fail2ban
Then, you just copy over the standard (Ubuntu) "skeleton"-config:
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
Then, stop and (re-)start Fail2Ban to load and run the config:
sudo service fail2ban stop && \
sudo service fail2ban start
It can be customized to send e-mail alerts to designated addresses, among various other "actions".

Protocols and / or services are easily added to the config-file if they aren't already present. Fail2Ban's config-file uses an easy syntax (layout-format) for anybody with basic networking- and logging-knowledge.

I also tend to enable the "ssh-ddos" detection, since these days we're seeing more and more distributed attacks. There are more SSH-specific detections, but they're not strictly necessary.
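For reference, here's a minimal 'jail.local' sketch enabling both the standard SSH jail and the ssh-ddos jail. The section and filter names follow the Ubuntu-era defaults, but treat the exact values as illustrative; check your own jail.conf for what's available:

```
[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 5

[ssh-ddos]
enabled  = true
port     = ssh
filter   = sshd-ddos
logpath  = /var/log/auth.log
maxretry = 2
```

Anything set in jail.local overrides the matching setting in jail.conf, which is why copying to jail.local (instead of editing jail.conf directly) is the recommended workflow.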

07 February, 2015

Best desktop gigabit switch


The Netgear GS108 has been a rock-solid, 24/7, always-on, seriously dependable Internet switch-point for office / desktop use.

I have been using one as an internal gigabit switch-point between gigabit LANs and an FTTH fiber-gateway for 5+ years now. Not one outage!

It is, however, a dumb switch. Meaning: it is not a managed smart-switch, but rather more of a "stupid" repeater-block. But an extraordinary one at that :)

30 January, 2015

gethostbyname()

The last few years (2010-2015), I have been using a command line log-compiler tool I made, called "sshlog", to monitor incoming SSH-logins, both accepted and failed.

It is basically a crude (server-side) administration-tool for remote use on mobile devices and tablets. In combination with "DenyHosts" (a Python program for blocking IPs that carry out repeated, malicious ssh-bruteforce attempts), it proved to be a very solid security-framework.

That is, until January this year.

Suddenly sshlog reported that no malicious ssh-connections had been made in the last month, and DenyHosts had not added any new IPs since December last year. Weird...

While log-surfing and manually executing my sshlog-algorithm on my system-logs, I noticed there HAD been malicious ssh-bruteforcing (like usual)... but none of the offending addresses had been added to the blacklist because 'auth.log' was full of warnings about unresolvable hostnames in '/etc/hosts.deny'.

After checking out /etc/hosts.deny, it became apparent that DenyHosts had been adding random canonical hostnames (which, in turn, require DNS-resolution of the IP-addresses) instead of legitimate IP-addresses.

I made my discovery thanks to this repeating warning in 'auth.log':

warning: /etc/hosts.deny, line xxxx: can't verify hostname: getaddrinfo(<some-dynamic-ip>, AF_INET)

*SIGH*

Firstly, I had to stop DenyHosts; then I had to manually 'sed' out all the IP-addresses that had been added as canonical hostnames, first from the DenyHosts config-files, then from /etc/hosts.deny; and lastly, start DenyHosts up again...
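The cleanup step can be sketched like this. To be clear: this is an illustration on a scratch file with made-up entries, not the exact sed-invocation I ran; the idea is simply to keep only the entries whose client field is a plain dotted-quad IPv4 address.

```shell
# Build a scratch copy of a "polluted" hosts.deny (made-up entries),
# then keep only the lines whose client field is a dotted-quad IP.
cat > /tmp/hosts.deny.scratch <<'EOF'
sshd: 192.0.2.50
sshd: badhost.dynamic.example.net
sshd: 198.51.100.7
EOF
grep -E '^sshd: ([0-9]{1,3}\.){3}[0-9]{1,3}$' /tmp/hosts.deny.scratch \
  > /tmp/hosts.deny.cleaned
cat /tmp/hosts.deny.cleaned
```

On the real file you'd obviously work on /etc/hosts.deny itself (with a backup first!), while DenyHosts is stopped.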

The whole ordeal was apparently caused by none other than... myself -_-

I had enabled DenyHosts to do hostname-lookups for blocked IP-addresses :P *Doh!* This, in turn, forced DenyHosts to add dynamic hostnames to /etc/hosts.deny, whether they were legitimate addresses or not.

Fact is, many of these malicious SSH auto-dialers use dynamic IP-addresses that won't resolve as legitimate addresses in DNS (no registered reverse-lookup, or similar); thus, they are added to '/etc/hosts.deny' with (invalid) canonical hostnames that don't correspond to legitimate forward and reverse DNS records.

27 January, 2015

IT article-comments

Found this very amusing comment on G+ following this article:

Microsoft is a dead man walking.  They represent the last man standing in the first great public software rip off of the 1980s.  The won the brutal "slog" to dominance in that world but the result is a disaster for them.  The corpse they stitched together from parts of dead competitors is obsolete and coming apart at the seams.  Despite decades of effort and purchase of some of the world's finest companies, they can't make their software competitive on servers, cell phones, music players, or anything outside the rapidly shrinking x86 market.  They can't compete with the free world.  Unable to steal the current generation of free software by software patent Ponzi schemes, they must again pretend they love their users and competitors in a last ditch effort to raise another generation of coders to rip off.  Nope, that's not going to happen.
Funny, funny shit :P but does it also have some sense of truth to it? Maybe... maybe it does.

26 January, 2015

Got your IP banned?

Here is a rather useful tutorial / how-to I found online on getting removed from IP blacklists ;)

IP blacklisting is an Internet blocking-practice: IP-addresses (used by known cyber-criminals, companies with bad reputations, script-kiddies and/or bot-nets) are added to a so-called "known bad addresses" (black-)list, which the list-maintainers then sell to service-providers and other similar Internet-companies as a filtering service.
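Most of these blacklists are queried over DNS (so-called DNSBLs): the octets of the address are reversed and looked up as a hostname under the list's zone. A small sketch of how such a query name is formed (the zone name here is made up for illustration):

```shell
# Form a DNSBL query name: reverse the IPv4 octets, append the zone.
ip="203.0.113.5"
reversed=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1}')
echo "${reversed}.dnsbl.example.org"
# An actual lookup would then query that name with dig/host;
# a returned A-record typically means "listed".
```

This is also why the removal-process differs per provider: each blacklist runs its own DNS zone and its own delisting procedure.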

The blog-post also contains a list of removal-links to various blacklist-providers.

http://blog.online-domain-tools.com/2015/01/26/how-to-get-removed-from-blacklists/

23 January, 2015

Government backdoors?

Quite a few people have approached me and asked my opinion on government-implemented backdoors in open-sauce software. Well, I usually refer them to this YouTube-clip:

06 January, 2015

New smart-phone


Due to certain circumstances (shattered my Nexus 4 :P), I had to get a new smart-phone. I had already pre-chosen my make and model after testing a buddy's Nexus 5.

:) Massively content with CPU and memory performance, coupled with 4G (LTE) network-capabilities; a definite must-have for superusers and devs :D

It is driven by a Qualcomm Snapdragon 800 2.26GHz quad-core processor coupled with an Adreno 330 GPU, supplemented by 2GB of LPDDR3-1600 RAM, 16 or 32GB of internal storage, an 8-megapixel rear-facing camera with OIS (Optical Image Stabilization) and a 1.3-megapixel front-facing camera.

It is also a rather great smart-phone for those who are tired of the typical bloatware (mostly user interface additions / modifications / replacements, and some resource-heavy services) on manufacturer-branded sets (Samsung, HTC, Sony, etc.).



"About" menu-page on Android L.
I was by no means disappointed when it got upgraded to Android 5.x "Lollipop", also known as "Android L". But the 5.0.1 bugfix was dearly appreciated after struggling with battery-trouble on 5.0.
Software update popup on Android L.
Additions and modifications:
  • GUI-modifications (there were a few, notice the navigation-buttons on the screenshots)
  • new and better security-features
  • and function-rewrites (I'm used to those; I've been using Android since v1.5).
Actually, I was really impressed by the speed and performance of the totally re-written runtime, ART (Android RunTime), which replaced Dalvik (used in Android-versions 1.5.x through 4.4.x) from v5.x and up.

05 January, 2015

Google Chrome / Chromium browser


I have been a promoter of Google-branded software for some time now. Basically, because they do it right!

And I am especially fond of Google's Chrome / Chromium browser(s). They have adequate security (sandboxing, etc.) and superb GUI-handling / -principles (tabs, integrated tools, etc.). It is the all-in-one multi-purpose web-tool for both users and developers, in my opinion (and in a lot of others').

But, I tend to hear a lot of complaining about "chrome getting slower over time", "chrome tends to crash rather abruptly", "it's beta-software, don't use it, use explorer", and the like.

I only have one thing to say about the matter:
STOP INSTALLING PLUGINS AND EXTENSIONS WITHOUT CAUTION!

Extending a software-package with plugins can be a good thing, indeed, when it's necessary, but it really is a double-edged sword... too many extensions slow the loading / writing and even the exiting of a program / OS. OS-specific resource-handling suffers as well, which in turn gets misinterpreted as performance-issues in the program itself.



This particular advice also concerns operating systems (you know: OS X, Windows and Linux); the more crap (especially third-party bloatware and/or fake services) hogging up precious resources, the more sluggish and decrepit your system will appear to be.

Mac OS X and Windows also suffer from bloatware, malware and, occasionally, viruses. All of which can be prevented and removed easily. If you prepare!

Updating core system components is vital! If a system is properly updated, it becomes that much harder for a potential cracker to break in / steal / hi-jack / sabotage.

Be vigilant, clean your shit (browser-cache(s), OS-specific temp-files, etc) and secure your system(s) to the best of your ability (or the abilities of a more competent friend / family-member / co-worker).

04 January, 2015

AMD Radeon HD 6xxx / 7xxx / GCN

I must admit, still being able to run newer 3D-accelerated games (2014+) on Linux, at acceptable resolutions (720p/1080p), on a rather outdated (but not deprecated) AMD Radeon HD 66xx-series discrete GPU (2012) is quite amazing.

Sure, there are hiccups with certain games (mostly b/c of the cross-platform game-engine selected for development), but they'll get ironed out (eventually...) I'm not worried ;)

And, yes... I realize certain games are optimized for Nvidia-cards, but AMD is finally catching up on driver-compatibility / -support and -optimization.

The open-source Linux-driver (Radeon Gallium3D) and the proprietary Linux-driver (Catalyst / "Omega") both showed amazing feature-progress and bug-fixing in 2014, and the Catalyst-driver even benchmarked better than the Windows-version: http://www.phoronix.com/scan.php?page=article&item=amd_cat_winlin2014&num=1.

Performance and feature additions for 2014:

  • OpenMAX / AMD video encode, UVD for older AMD GPUs.
  • various new OpenGL extensions.
  • continued work on OpenCL.
  • power management improvements.
  • ...and the start of open-source HSA.
The benefits are mostly for the newer HD 7xxx / GCN (GraphicsCoreNext)-series of GPUs, but the driver-tests also showed the HD 6xxx-series (3+ years old) still improving in both performance and bug-fixes.

From personal experience, gaming on the HD 6xxx-series is adequate for newer games (2013-2014+). But if you want awesomesauce eyecandy graphics, a tip: go for a higher series-range card.

I currently have a Sapphire Radeon HD 6670 1GB 64-bit GDDR5 PCIe-2.1 discrete graphics-card in my h3x4c0m box, and the card sports a GPU with a total of 480 stream-processors. I got the card b/c my deprecated Nvidia-card sported a whopping 48(!) CUDA-cores (lol); suffice it to say, it should have been binned a long time ago, or at least moved over to a crap-box to act as a PVR or TV-tuner or something.


If you are a hardcore ultra-gamer, stick with Nvidia. But AMD is catching up, so keep an eye on them; they proved their game on Linux in 2014, and may even be accelerating newer methods of both integrated graphics (APU+GPU) and discrete graphics (GPU) in the near future ;)

Phoronix article-links:
http://hardware.slashdot.org/story/14/12/27/2234222/phoronix-lauds-amds-open-source-radeon-driver-progress-for-2014?sdsrc=rel
http://linux.slashdot.org/story/15/01/03/1426208/amd-catalyst-linux-driver-catching-up-to-and-beating-windows?utm_source=rss1.0mainlinkanon&utm_medium=feed

29 December, 2014

Open networking; leaf-spine...

I recently came across a Norwegian tech-article regarding open networking vs. proprietary networking.

More specifically, it talks about the Open Compute Project - for those of you who don't know, read this. More information can be found here.

The project discusses the use of open standards when building huge, energy-demanding and efficient data-centers. Basically, OCP talks about replacing the common standard networking-architecture with something (kind of) new, to meet the ever-evolving (and consequently demanding) bandwidth- and streaming-needs of tomorrow's data-centers. Mainly: replacing the spanning-tree network-design with the leaf-spine network-design.

A leaf-spine network design consists of the following:

  • Top-of-rack (leaf) switches (leading to spine-switches).
  • Spine-switches (between leaf- and core-layers in a leaf-spine network).
  • ... and other hardware and software solutions for rack-use.
LEAF-SPINE NETWORK
In a so-called "leaf-spine" layer-2 network, the spanning-tree protocol is often replaced with either a version of Transparent Interconnection of Lots of Links ("TRILL") or Shortest Path Bridging ("SPB").

In a leaf-spine layer-3 network; each link is a routed link. Open Shortest Path First ("OSPF") is often used as the routing-protocol to compute paths between leaf and spine switches.
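As a rough sketch of the layer-3 variant (Quagga/FRR-style syntax; the addresses, area and interface names are made up for illustration), the OSPF stanza on a leaf switch might advertise its two routed spine-uplinks like this:

```
router ospf
 ! point-to-point /31 uplinks from this leaf to two spines
 network 10.0.1.0/31 area 0
 network 10.0.1.2/31 area 0
 passive-interface default
 no passive-interface swp49
 no passive-interface swp50
```

With every leaf holding equal-cost routes via each spine, traffic between any two leaves is load-balanced (ECMP) across the spine-layer; that is the main draw of the design.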

The Norwegian in me thinks the reporter has (way too) big expectations, especially if he's counting on Norwegian authorities and heavy industry for this (quite HUGE) change to take place.

But on the other hand, my inner sysadmin really likes the possibility of administrating a data-center with fully open networking-protocols / -equipment and/or -hardware :)



My own little µ-datacenter is based on open protocols, open systems, and somewhat open hardware.
http://blog.pizslacker.org/2014/10/datacenter-so-far.html

Norwegian article:
http://www.cw.no/artikkel/kommentar/apent-ikke-proprietaert


01 November, 2014

What is (was) UNIX?

Well, here is a simple explanation from the creators themselves ;)



Here is another blog-post about UNIX:
http://blog.pizslacker.org/2009/09/brief-little-unix-history-video.html

Nekro >:D

I don't usually tend to fork out wet stinky for Early Access-games on Steam, but, on this occasion I made an exception. After watching LinuxGameCast's episode about it, I just couldn't help myself :P




( Forward to 9:02 for review of Nekro's Early Access )



Excerpt from the Nekro Steam-page:
Raise demons and spawn unholy weapons of war as a powerful Necromancer with an insatiable blood lust. Customize your minions to your liking and perfect your ideal "monster loadout." Tear your enemies limb from limb and harvest their corpses to strengthen your dark magics.



For an Early Access beta, this game is surprisingly well made (and even debugged(!)) at version 0.8.5.5. Not even at the pre-release stage, and it already behaves like a release.

It ran exceptionally well on Linux (32-bit "Unity"-game on 64-bit Linux).

http://store.steampowered.com/app/246400/

All the following screenshots are from the 0.8.0.3-version.

24 October, 2014

Civilization V @Linux

I'm really not a big fan of strategy-builder games (at least not since C&C), but the Sid Meier's Civilization-series is more interesting than other, similar games, at least in my opinion.

The game did have some peculiar bugs (e.g. not running on the selected screen in multi-monitor setups), but nothing major. The screenshots below were taken on an Intel Core 2 Duo Linux-laptop.





17 October, 2014

AMD FGLRX + Steam for Linux

During these past two years of testing Steam-for-Linux betas and various Linux-based / -ported games, I have been using a mid-range Nvidia graphics-accelerator, which has been fine. But I wanted a little more kick from the GPU.
DOOM 3: BFG Edition (non-steam version) - running natively on Linux.
After visiting a gamer-friend with years of bleeding-edge hardware experience, especially related to full-HD 3D-rendering, I decided to switch over to AMD-based stream-processor gfx-cards. My friend really recommended AMD due to their acquisition of the graphics-company ATi in 2006. Having both CPU-production and GPU-production under the same roof really put them ahead of other CPU-vendors in terms of graphics-performance, in both integrated and discrete graphics-solutions.

I have to note that laptops sporting AMD GPUs (integrated graphics) don't work so well with the proprietary Catalyst-driver. I have tried to make it work on several occasions, but the driver always seems to crap out. Unless you REALLY know what you are doing (manually tuning the Catalyst-driver), I would not recommend using Catalyst on laptops or notebooks.

ONLY use the Catalyst-driver if you are using a gaming-rig (desktop PC) with a discrete (separate component) graphics-card installed.
Space Hulk - running natively on Linux.
AMD's Catalyst graphics-driver has seen quite a few improvements the last few years as well, so I was eager to test it out on Linux and see for myself.
Left 4 Dead 2 - running natively on Linux.
The proprietary driver worked rather well (for the most part), but it still suffers from a set of annoying bugs (window-manager artifacts, exiting fullscreen HD video not resetting to the default resolution, etc.). Still, they're not as annoying as the bugs were early on... believe me (video-tearing, 3D-tearing, kernel panics, etc.).
Trine 2 - running natively on Linux.
For 3D-accelerated games, it works really well. Impressively well. The lower high-end card I decided to opt for raised my rig's eyecandy-potential at least tenfold(!). After realizing this, I would really recommend AMD graphics to any serious gaming-enthusiast looking for OP graphics that don't compromise the gaming-experience.
Darwinia - running natively on Linux.
To put it rather simply: it just works, better than I expected, but not without the odd bug or two...
DOTA 2 - running natively on Linux.
My final verdict: I would recommend AMD graphics-cards to hardcore gaming-enthusiasts who aren't shocked by the odd bug or two (at least no serious ones, anyway); however, I would NOT recommend them to the average Joe.
FEZ - running natively on Linux.


ACCELERATOR-TIP:
To enable video-acceleration (VA) on discrete AMD-based (Radeon HD) graphics-cards on Linux, run the following command in a terminal on Ubuntu Desktop or Linux Mint:
sudo apt-get install xvba-va-driver libva-glx1 libva-egl1 vainfo
These libraries and drivers prevent video-artifacts / tearing, stabilize video playback, de-interlace video and resolve similar video-issues.



I will be testing and experimenting with the open-source 3D/video driver for AMD-based cards at a later date, namely the Gallium3D-driver. And I'll be posting about it too, so stay tuned...

16 October, 2014

2 years Linux-gaming

SteamOS and Steam for Linux have really progressed the last two years. And with this year showing AAA-titles also wanting "teh' working Linux", things are really looking good for VALVe ;) :)

Personally, the bugs I've encountered playing beta- and alpha-games on Linux since December 2012 really weren't all that bad.

I really only had one major issue; the Steam-for-Linux client itself had some CPU-bugs.

In VALVe's defense, that was due to the fact I was launching it on a low-powered Asus Eee 900 netbook that sported an Intel Celeron 900MHz uni-processor (single-core). One lousy, 32-bit, barely-functioning little crappy "mini-computer".

This low-end CPU did not sport certain CPU-flags that Steam-for-Linux expected it to have, so they had to make workarounds for low-powered devices with integrated graphics (in part because of yours truly).

Apart from that VALVe-specific debugging, I've also sent a few bug reports to various game-studios about specific library-support issues, audio-bugs, and similar low-priority bugs. Nothing big.

07 October, 2014

PC-gaming is dead... (RLY?)

OK... how and where did the proclamation that "PC-gaming is dead" come about?

People, please... what kind of platform do you think games are created on?

Seriously... that's just, a completely new level of ignorance.

Consoles? Eeeeeh, they will co-exist, and probably evolve, but they will NEVER crush, or even replace PC-gaming.

EVAH...



I would rather believe this article when it comes to the current gaming-climate: http://www.techradar.com/news/gaming/consoles/console-gaming-is-dead-everything-good-is-happening-on-pc-right-now-1260162

06 October, 2014

µ-cloud, so far...

Serving as: VM-lab / workstation / A/V-rig / gaming-rig / VM-production :P


3 years of 24/7 operation (2011-2014), and still running like a well-greased steam-engine :P Everything runs some form of Linux ;) Open standards - open systems :D all the way.

I've never before had OP processing-capabilities at my own complete personal disposal (in 20+ years of computing), until now :P 3.5 years in (that's including half a year of planning before starting the build).

Never running out of CPU-time, practically never overflowing (swapping) the system memory, and rarely even getting into CPU-loops (unless I'm doing some risky low-level programming for lulz).

I am proud to say that my domain is fully OSS-operated ;) :D


Network-wise, I have separated internal traffic into multiple VLANs, mainly for layer-2 separation of different protocols and traffic-types, like: Internet-traffic, incoming web-server traffic, VPS Internet-traffic, a management sub-net and an iSCSI sub-net.

iSCSI even runs on separate NICs (on both the storage-box and the server), through its own switch-block and a separate sub-net, to prevent interference with packets on high-traffic sub-nets (like Internet-access).
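For illustration, a dedicated sub-net on its own tagged VLAN can be declared like this in Debian/Ubuntu-style 'ifupdown' config. The interface name, VLAN id and addresses below are made up, not my actual setup:

```
# Tagged VLAN sub-interface (VLAN id 20 on eth1) for an iSCSI sub-net.
auto eth1.20
iface eth1.20 inet static
    address 10.20.0.2
    netmask 255.255.255.0
    vlan-raw-device eth1
```

With the storage-traffic pinned to its own VLAN (and, in my case, its own NICs and switch-block), bursty iSCSI I/O never competes with Internet-bound packets for the same links.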

Outgoing traffic (and incoming, of course :P) is pumped through a VPN / firewall network-box with a firewall-throughput equal to the total speed of the FTTH Internet-uplink ;D


h3x4c0m-build: http://blog.pizslacker.org/2011/12/h3x4c0m.html