# What is security



## bcomputerguy (Nov 14, 2017)

That title is a bit much, but seriously: with hardware that can leak information like in this talk and others, is there really any security?

How could stuff like this be mitigated?


----------



## tingo (Nov 14, 2017)

Security isn't a thing, a state, or a certification. It is a continuous process, where you look at what risks your information (or physical items) can be exposed to, what actions you can take to mitigate some of those risks, and implement the actions you can afford in order to be reasonably secure.
And then the process starts again.


----------



## ShelLuser (Nov 14, 2017)

Of course there is security, but how much depends on several factors. As long as you continue to get spam in your inbox, you'll know that the security measures aren't foolproof.

But just because you get spam, does that mean you shouldn't bother protecting your mail server at all, because what's the use? Even the small configuration change that blocks others from using your server as a relay counts as security. And it most definitely has value.


----------



## tingo (Nov 15, 2017)

Here are some guides:
https://motherboard.vice.com/en_us/...ide-to-not-getting-hacked-online-safety-guide
https://ssd.eff.org/
https://www.cryptoparty.in/learn/handbook


----------



## bcomputerguy (Jan 7, 2018)

This post is all the more relevant this year now that those massive new exploits have been made public.

When I first ran into rowhammer I couldn't understand how security researchers didn't see the implications: any program running on a multi-user system could spy on anything else on the system.

This is an inherent design flaw, stemming from the fact that the original computer architecture wasn't designed with multiple untrusted users and security in mind.


----------



## Deleted member 9563 (Jan 7, 2018)

You can't define security until you've defined your opsec.


----------



## bcomputerguy (Jan 7, 2018)

OJ said:


> You can't define security until you've defined your opsec.



That's the type of mentality that leads to massive exploits like these.

What processors in use, from mobile phones to supercomputers in science, government, or any other sector, are not affected by Meltdown & Spectre?

What does opsec have to do with anything? Any "opsec" is useless since this is a fundamental design oversight.


----------



## ShelLuser (Jan 7, 2018)

bcomputerguy said:


> What does opsec have to do with anything?


Everything.

Keep in mind that the thread is about security in general. Take the Intel bug you're talking about right now: one can consider that a local exploit. Knowing your opsec will tell you whether this exploit is accessible, and thus exploitable, by third parties.

But opsec can also help with risk assessment, for example when weighing the effect of running with `# sysctl kern.securelevel=1`.


----------



## Deleted member 9563 (Jan 7, 2018)

bcomputerguy said:


> That's the type of mentality that leads to massive exploits like these.


I think you have the wrong end of the stick there. 

Anybody wanting to learn might benefit from this: The OPSEC Process.


----------



## ralphbsz (Jan 8, 2018)

To think about security, you need to know: (a) what is it you are protecting, (b) who are the attackers, and what motivates them, (c) what is the cost of security being breached, compared to (d) what is the cost (both one-time and recurring) of maintaining security.  If you look at this breakdown, you see that actually implementing security is the art of making a compromise.  On one extreme, one can run the system like some government labs do: no external network connections, no USB sticks in or out the door, and dozens of marines hired to protect the data center (those sites do exist; I've worked with projects where every sysadmin has an assault rifle on their back).  But that is expensive, and working in such an environment is slow, annoying, and unproductive.  On the other hand, you can just order a laptop from Dell and install Windows with the free virus scanner that comes with it.  If it is just for browsing the web and watching movies, this is sufficient and appropriate (just don't try to check your bank account from that machine).

This is, by the way, one of the disturbing aspects of the recent Meltdown and Spectre problems, also present in row hammer: they only allow software that is already running on *your* computer to spy on your data.  These problems are not like a burglar, who comes to your house in the middle of the night, uses a brick to break a window, and then steals your jewelry from the bedroom dresser.  The programs that could theoretically use meltdown/spectre/row-hammer have all been authorized to be started, by the owner of the system!  Instead it's more like a person who you regularly invite to come to your house.  Imagine you hire a plumber, but completely forget to authenticate the plumber (does he actually have a contractor's license, which would not be issued to a person who has a track record of theft and robbery), you don't supervise him when you send him upstairs to fix the bedroom sink, you leave your jewelry right on the bedroom dresser, open for everyone to see and take, and after the plumber leaves, you don't even check for a few weeks whether the jewelry is still there.  Look at the contexts being discussed for these vulnerabilities, and they are often about running JavaScript in the browser, or sharing one server with many VMs without considering that information will leak.  We (both as a society and as the computer industry) have not been thinking about what information actually needs to be protected (and what doesn't), what the cost of that protection is, and conversely what the benefit of convenience (like any web page can run anything in Java or JavaScript) actually is.  This is where the security debate really has to happen, not by crucifying Intel or Linux or Windows.


----------



## bcomputerguy (Jan 8, 2018)

OJ said:


> I think you have the wrong end of the stick there.
> 
> Anybody wanting to learn might benefit from this The OPSEC Process.



Let's say you're a multi-national corporation, footbook, with hundreds of thousands of computers on every continent, running many different types of processors.

You have your machines secured, your most valuable machines are locked away on a private VPN, etc., etc.

Now something like Spectre and Meltdown comes out.

All it takes is one person in your whole organization to infect your billions in infrastructure.

How would your opsec prevent any of this?


----------



## bcomputerguy (Jan 8, 2018)

ralphbsz said:


> To think about security, you need to know: (a) what is it you are protecting, (b) who are the attackers, and what motivates them, (c) what is the cost of security being breached, compared to (d) what is the cost (both one-time and recurring) of maintaining security. [...]
> 
> [...] The programs that could theoretically use meltdown/spectre/row-hammer have all been authorized to be started, by the owner of the system! [...] This is where the security debate really has to happen, not by crucifying Intel or Linux or Windows.



This is my whole point. It's software that you authorize to run that can be doing anything and you'd be none the wiser.

I didn't crucify anyone with my original post, it was more to get people to think about the consequences that I saw at the time.
I didn't know about spectre and meltdown at the time but it was definitely in my mind; I just didn't know when or how it would show up.

This is something that will have to fundamentally change the way microprocessors are designed, because no amount of "opsec" can fix hardware bugs, unless you choose not to use computers.

Even on something like OpenBSD (just lightly picking on them a bit): they claim to have had only 2 zero days in the past XXXX years. Okay, well, you don't need a zero day for your OpenBSD box to be popped open and poured out like a can of Coke.

Computers came from a time when very few people had access to them, or the know-how to even operate them.
That legacy allowed architects to make a lot of assumptions that allowed for speed while ignoring security; those assumptions are no longer true.

The question now is, how do we move forward?


----------



## Deleted member 9563 (Jan 8, 2018)

bcomputerguy said:


> How would your opsec prevent any of this?


It wouldn't prevent it. Thinking that everything can be prevented is a common mistake. What has been discovered here is not something that couldn't have been imagined, and one can certainly have a relevant plan in place. If "security" is important enough, then the approach might be to save what you have. If security is not so important, then they might just continue as is and let insurance (or serendipity) take care of it.

Also, I might point out that the thread title says "what is security", not what actual security measures one might take in any given situation. At least, that's the assumption I've been working under in this thread.


----------



## bcomputerguy (Jan 9, 2018)

OJ said:


> It wouldn't prevent it. Thinking that everything can be prevented is a common mistake. In this case what has been discovered is not something that can't be imagined and certainly one can have a relevant plan in place. In this case, if "security" is important enough, then the approach might be to save what you have. If security is not so important, then they might just continue as is and let insurance (or serendipity) take care of it.
> 
> Also, I might point out that the thread title says "what is security" and is not about what actual security measures one might take in any given situation.  At least that's the assumption, regarding this thread, that I've been working under here.



The point of this thread was to bring up something more fundamental. When rowhammer became public and all those rowhammer-based attacks were targeting everything from Android to servers, I thought people would think about these issues more deeply.

Research papers like this one have been out forever:

Yoongu Kim, Ross Daly, Jeremie Kim, Chris Fallin, Ji Hye Lee, Donghyuk Lee, Chris Wilkerson, Konrad Lai, and Onur Mutlu, "Flipping Bits in Memory Without Accessing Them: An Experimental Study of DRAM Disturbance Errors" (Carnegie Mellon University / Intel Labs).
https://users.ece.cmu.edu/~yoonguk/papers/kim-isca14.pdf

Just ignoring it and trying to move on: how is that even a valid solution?

If we think this was bad when rowhammer was only rooting Android phones, wait until those efforts are targeted at all computers.


----------



## Deleted member 9563 (Jan 9, 2018)

bcomputerguy said:


> The point of this thread was to bring up something more fundamental. When rowhammer became public and all those rowhammer-based attacks were targeting everything from Android to servers, I thought people would think about these issues more deeply.



OK, I'm with you.



bcomputerguy said:


> Just ignoring it and trying to move on, how's that even a valid solution?



Like a lot of things, it's not a solution. But it _is_ an action, and unfortunately a common one. Governments and corporations don't work in rational ways, although there are indeed different "rationalities". What's "valid" to them is not necessarily even reasonable to you and me.

But yes, one has to wonder just how drastic these fundamental threats have to become before the frog will jump.


----------



## bookwormep (Jan 9, 2018)

If you turn on a lamp, the electricity runs from the power grid to the bulb's filament, then back to the grid.
Similarly, where is the security?


----------

