[blfs-support] Knowing when to ditch an old (desktop) system

Michael Shell list1 at michaelshell.org
Tue Dec 30 12:19:42 PST 2014


On Mon, 29 Dec 2014 21:40:51 +0000
Ken Moffat <zarniwhoop at ntlworld.com> wrote:

> Summary - it is probably easiest to build a new system rather than
> try to keep a desktop running safely for years.

Well, I suppose it depends on the specific definition of "safely".
You are defining it to mean no known exploits that are either
open to the "outside" (e.g., a flaw in the kernel's TCP/IP stack,
a browser exploit) or open to normal users (e.g., running a
command that can elevate a normal account to root access, tricking
an application into doing something it is not supposed to, etc.).
And that goal can be a really tall order these days.

---
I am more concerned about the former than the latter because there
aren't any potentially "hostile" users on the system. Of course,
a malicious data/audio/video file does have the potential to turn
an unsuspecting user into a hostile force, but some thought should
always be given to the origin of any data files that are to be
processed. With browsers, we often aren't even aware of what is
"running" within them. The NoScript plugin can help here, but so many
sites won't operate without JavaScript - I do so wish this were not
the case.

I think we are approaching a time in which some thought needs to be
given to changing the system architecture at a more fundamental level,
because it is becoming increasingly difficult to ensure that all the
various applications are "secure". That is, even if a browser does
have a flaw, it should still not be able to do very harmful things.

Something like system-wide, enforced, per-application sandboxing is
what I have in mind - each application should have an associated set
of permissions (installed as a text file in something like
/etc/perms/theapp) that sets limits on just what it is allowed to do,
limits that are also passed on to any other applications it may start.
And perhaps the rules could be defined in such a way that only half a
dozen or so predefined rule sets are needed for 99%+ of applications,
so as to make administration easy.

For example, rules such as "no modification of system files" (thus
even if root were running mplayer, it could not touch /boot), "no
modification of user .config or startup files except for those in the
config directory of the application itself" (which should apply to
most applications), and "no exec of outside programs other than any
already linked-in libraries".

Maybe something like the SELinux concept, but perhaps lighter and easier
to apply to every application on the system.

We may be burdened by this, but we are already burdened by having to
upgrade all the time, and even that does not guarantee that hackers
are not using flaws that are not yet known to the developers.

I also think that man-in-the-middle, in-network alteration of
downloaded program files and source code tarballs is of increasing
concern (especially with regard to "sudo make install"), along with
the simultaneous compromise of any corresponding .sig check files.
The entire net needs to go HTTPS, and even that has a weakness with
regard to the certificate authority system.

https://www.grc.com/fingerprints.htm
https://bitsandchaos.wordpress.com/2010/03/29/certificate-patrol-can-really-save-your-pocket/
http://patrol.psyced.org/
http://files.cloudprivacy.net/ssl-mitm.pdf 
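
In the meantime, even a crude local check raises the bar a little.
Here is a rough sketch of my own (using OpenSSL's EVP digest
interface) that compares a tarball against a SHA-256 hash obtained
over some separate, trusted channel - which is, of course, exactly
the hard part:

  /* shacheck.c - rough sketch only, not a real tool.
   * Compares a file's SHA-256 digest against an expected (lowercase)
   * hex string obtained over a separate, trusted channel.
   * Build: gcc shacheck.c -lcrypto -o shacheck
   */
  #include <stdio.h>
  #include <string.h>
  #include <openssl/evp.h>

  int main(int argc, char **argv)
  {
      if (argc != 3) {
          fprintf(stderr, "usage: %s FILE SHA256HEX\n", argv[0]);
          return 2;
      }

      FILE *f = fopen(argv[1], "rb");
      if (!f) {
          perror(argv[1]);
          return 2;
      }

      /* stream the file through SHA-256 */
      EVP_MD_CTX *ctx = EVP_MD_CTX_create();
      EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

      unsigned char buf[65536];
      size_t n;
      while ((n = fread(buf, 1, sizeof buf, f)) > 0)
          EVP_DigestUpdate(ctx, buf, n);
      fclose(f);

      unsigned char md[EVP_MAX_MD_SIZE];
      unsigned int len = 0;
      EVP_DigestFinal_ex(ctx, md, &len);
      EVP_MD_CTX_destroy(ctx);

      /* render the digest as lowercase hex and compare */
      char hex[2 * EVP_MAX_MD_SIZE + 1];
      for (unsigned int i = 0; i < len; i++)
          sprintf(hex + 2 * i, "%02x", md[i]);

      if (strcmp(hex, argv[2]) == 0) {
          puts("hash matches");
          return 0;
      }
      puts("HASH MISMATCH - do not install");
      return 1;
  }

It does nothing against a simultaneous compromise of the published
hash itself, which is the worry above, but it will at least catch the
lazier in-transit alterations.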


Finally, at the hardware level there should be stricter, enforced
separation of code and data, so that things like buffer overruns
cannot alter the code path to be executed (code could also be declared
"read only" so as to rule out any self-modification). CPUs should have
hardware dedicated to enforcing such policies, including data object
boundary limits - and these limits should be more tightly linked to
structures such as arrays in C.
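
To make the point concrete, here is a tiny (deliberately broken) C
example of the kind of adjacent-data corruption the hardware
currently does nothing to stop:

  /* overrun.c - deliberately broken, for illustration only.
   * Attacker-controlled "data" redirects the code path because nothing
   * enforces the declared 16-byte bound of 'name'.  Strictly undefined
   * behaviour, but on common ABIs the extra bytes land in the adjacent
   * 'handler' member.
   */
  #include <stdio.h>
  #include <string.h>

  struct record {
      char name[16];           /* fixed-size data buffer              */
      void (*handler)(void);   /* "code" sitting right next to "data" */
  };

  static void expected(void) { puts("expected code path"); }
  static void hijacked(void) { puts("attacker-chosen code path"); }

  int main(void)
  {
      struct record r = { "harmless", expected };

      /* Build a "name" whose tail is a pointer to hijacked(). */
      void (*target)(void) = hijacked;
      unsigned char payload[sizeof r.name + sizeof target];
      memset(payload, 'A', sizeof r.name);
      memcpy(payload + sizeof r.name, &target, sizeof target);

      /* The unchecked copy: a pointer's width too long for 'name'. */
      memcpy(r.name, payload, sizeof payload);

      r.handler();   /* very likely prints "attacker-chosen code path" */
      return 0;
  }

The unchecked copy runs a pointer's width past the 16-byte field and
silently lands on the neighbouring function pointer, so the "data" has
just chosen the code path. Hardware that honoured the declared bounds
of 'name' would trap on the copy instead.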
---

Anyway, there are also differences with regard to how "big a deal" it
is to upgrade a package. In my view, a kernel tends to be easy, gcc is
easy to mid-range (the largest hurdle tends to be the time needed to
compile it), and glibc can be either easy or a really big deal,
depending on whether upgrading it will break anything.

Generally speaking:

 1. Most individual apps, kernels and libraries tend to be easy.
 2. If you need to upgrade gcc, do go ahead and do it, as that
    is often easier than dealing with all the workarounds.
 3. IMHO, Xorg is a big deal.
 4. Heavyweight applications and libraries with many dependencies,
    such as browsers, can turn into big deals.

When encountering a "big deal" we should give serious consideration
to rebuilding the whole system.

Another angle is the obsolescence of the *hardware* itself. Has anyone
attempted to maintain the very latest system on older hardware, and if
so, what was the result - increasing slowness due to ever-increasing
demands on the hardware, or is this at least partly countered by
coding improvements? How does GTK3 fare in this regard compared to
GTK2?


  Cheers,

  Mike Shell
