greypanther wrote:I have to admit that I find all this fascinating (though I obviously lack the education to understand it properly), but a question occurs to me: will it put people off putting so much of themselves online? Or even just on their connected PC? Identity theft is a frightening thing, after all, at least to me.
I have always thought people were way too blasé about their online details and left far too much information far too open. Now it seems that even if you are careful, you may be leaving yourself too open to attack.
Bottom line: will it affect how you use the internet/your PC? Or will most just ignore it and carry on as usual?
Back in the day, there was a company called FunWebProducts that gave away free programs like "Cursor Mania," "Smiley Central," and "Zwinky." They hooked you with annoying ads with sound, and anyone who downloaded the software would send you content that you could only view by installing it yourself. After I had used it for some time, my antivirus started targeting it. I found out it wasn't a false positive: I did some research and found they openly admit (in legalese) to taking your personal information (including financial information), and that you agree not to hold them responsible if one of their workers makes off with this information and sells it or uses it themselves (in the case of your debit and credit card numbers). You should've seen people's responses to my posts before all this, and how they responded when I pointed this out. Basically, people started to pretend that I didn't exist, because I was bursting their bubble.
TL;DR: people will respond with apathy.
Morkonan wrote:greypanther wrote:...Bottom line: will it affect how you use the internet/your PC? Or will most just ignore it and carry on as usual?
Few people ever take proactive measures, even when a big exploit is announced.
But, the thing is that if someone wants "in", they're gonna get in. (Oh, I'm sure someone is going to come along and lecture me on why that can't happen.)
The most important thing you can do is to keep your OS up to date as well as any security software you have.
NEVER, EVER "trust" anything to protect you from your own ignorance.
Don't click on stuff if you don't know what it is.
Don't download stuff if you don't know who made it and what it is.
Don't install stuff you can't authenticate as coming from a trusted source and that does trusted stuff with your trusted stuff.
Don't open emails from senders you don't recognize and, especially, don't accept any attached files from sources you can't authenticate. And, if there isn't a reason they should be emailing you about anything, maybe give them a phone call, just to check.
Don't engage in risky online behaviors, like surfing pron sites, posting incriminating pictures of your private parts, engaging in embarrassing webcam activity, pirating software, blah blah blah.
Even so, if someone wants to get in, they're going to get in. How difficult are you going to make that for them? The more difficult it is, the more determined they have to be in order to succeed. Most issues aren't caused by targeted attacks or, at least, don't start that way.
No hacker worth worrying about is going to spend their time specifically trying to hack some grandma's computer. Everything is automated, which is good for innocent users, since they can get easy protection for "known" common issues without having to turn their boxes into NORAD database servers that are prepared for "anything."
But, you can't protect yourself from yourself unless you take appropriate measures, like learning what and where the risks are.
You ask a bit much of people. On the other hand, I'd say people deserve the trouble they get into online. That said, enjoy this quote:
Hear me out. Linux is Microsoft's main competition right now. Because of this we are forcing them to "innovate", something they would usually avoid. Now if MS Bob has taught us anything, Microsoft is not a company that should be innovating. When they do, they don't come up with things like "better security" or "stability"; they come back with "talking paperclips" and "throw in every useless feature we can think of, memory footprint be damned".
Unfortunately, they also come up with the bright idea of executing email. Now MIME attachments aren't enough; they want you to be able to run/open attachments right when you get them. This sounds like a good idea to people who believe renaming directories to folders made computing possible for the common man, but security-wise it's like vigorously shaking a package from the Unabomber.
So my friends, we are to blame. We pushed them into frantically trying to invent "necessary" features to stay on top, and look where it got us. Many of us are watching our beloved mail servers go down under the strain and rebuilding our company's PCs because of our pointless competition with MS. I implore you to please drop Linux before Microsoft innovates again.
-- From a Slashdot.org post regarding the ILOVEYOU email virus
red assassin wrote:kohlrak wrote:Win7 here, and tsearch and cheatengine still work. Yes, they use the same hooks API. It has probably seen some changes, but it does the same thing. As for not being able to read down in access level, you're right, but realistically we always knew what was there. Part of the protection is protecting applications in one ring from each other, which they intentionally thwarted, which then gives rise to other programs that sit in one ring but step up through some other API (poorly made drivers?). Hooks are not necessary for debugging, and the API shouldn't really exist.
tsearch and cheatengine require elevated privileges to run, which is the entire point. The more powerful features of cheatengine require it to install itself as a hypervisor (i.e., it runs the system with Windows in a VM), placing itself at a privilege level above the kernel. Again, it needs to be highly privileged in the first place to do that.
IIRC, not for 32-bit apps, which still exist and will for a very long time, especially given the speed boost writing 32-bit apps provides.
Further, debuggers by definition allow all of these capabilities - being able to arbitrarily read/write a debugged process is pretty much the definition of a debugger - so I'm not sure why you state debuggers don't need to be able to. I don't think the argument that processes at the same privilege level should be protected from each other reflects modern security thinking very well - if you're at the same privilege level, you can access all the same data as the other process, debug it, or modify it and rerun it. Instead, we use multiple levels of privilege which are isolated from each other - separate user accounts, sandboxes, process elevation levels, capabilities, features like the new Virtual Secure Mode in Windows, and the like.
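(For illustration, a minimal sketch of that debugger capability on Windows; the PID argument and the probed address here are arbitrary assumptions of mine. The point is that the read fails outright unless the caller already has sufficient access to the target.)
Code:
/* Reading another process's memory the way a debugger does.
 * Requires sufficient privileges (same user or SeDebugPrivilege). */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }
    DWORD pid = (DWORD)atoi(argv[1]);     /* target PID from command line */
    HANDLE h = OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION,
                           FALSE, pid);
    if (!h) {                             /* fails without privileges */
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }
    char buf[16];
    SIZE_T got = 0;
    /* 0x400000 is just an illustrative address (a common image base). */
    if (ReadProcessMemory(h, (LPCVOID)(ULONG_PTR)0x400000, buf, sizeof buf, &got))
        printf("read %zu bytes from target\n", (size_t)got);
    else
        fprintf(stderr, "ReadProcessMemory failed: %lu\n", GetLastError());
    CloseHandle(h);
    return 0;
}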
x86 and many other processors have dedicated instructions for debugging. Why can't the debugger be added to the EXE at compile time? I've made my own. Linux and Windows both have APIs for writing your own debugger into the program itself. But no, userland is where the sensitive data is most likely to be collected. In Windows, for example, the libraries for getting text from the user are userland. All you'd have to do is know which program you want to grab data from - in most cases IE (Edge is IE, I don't care what people say), Firefox, Chrome, etc. - to get sensitive information.
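As a quick illustration of the "debugger compiled into the EXE" idea (my own sketch, assuming x86 and a POSIX system): install a SIGTRAP handler and hit int3, the processor's dedicated breakpoint instruction, and the binary catches its own breakpoints.
Code:
#include <signal.h>
#include <unistd.h>

static void on_trap(int sig) {
    (void)sig;
    /* In effect, a debugger built into the binary: this runs every
     * time execution hits the breakpoint instruction below. */
    write(STDOUT_FILENO, "breakpoint hit\n", 15);
}

int main(void) {
    signal(SIGTRAP, on_trap);
    __asm__ volatile("int3");   /* x86's dedicated breakpoint instruction */
    write(STDOUT_FILENO, "continuing after breakpoint\n", 28);
    return 0;
}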
Also, Windows 7 is eight years old at this point, and hardly the very model of a secure operating system.
The point is, it still exists in 64-bit.
kohlrak wrote:Protecting the user from himself. This is a separate issue, and while it supposedly improves security, a user should be able to do what they want with their machine. To be fair, there is a workaround, but it's unhealthy. Signing methods can be thwarted anyway, so it just ends up being a way to charge devs for the ability to develop. I noticed that when I used Visual Studio to make a program on this computer and sent the demo to people on Win10, they said it wouldn't let them run it.
That's... not at all what separation of privileges is about. It protects your documents from your sandboxed browser processes, it protects your login credentials from everything else, it protects your data from other users on the same system, it protects your kernel from things installing themselves as drivers. (The main reason Microsoft cracked down on driver signing enforcement is that the majority of Windows crash reports were caused by badly written drivers, including antiviruses!) As the user you can turn any and all security measures off if you want to, but it's deliberately hard because you shouldn't be doing that unless you have a damn good reason, and you probably don't.
Of course it's a separate issue. I asked myself why driver signing was brought up, but it felt like a good opportunity. It's my device the software's running on. I should be able to take responsibility for my own actions; I shouldn't need my OS provider to be my nanny. I understand unsigned drivers cause bluescreens, but you could also benefit from not giving device drivers the option to bluescreen. KeBugCheck is callable from device drivers. Is it necessary? Absolutely not. Fix your bad design in the first place: turn it into a wrapper and restart the device, which, IIRC, is what Linux does.
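To illustrate how low the bar is (a hypothetical sketch of mine, not real driver code): any kernel-mode driver can take the whole machine down with one call.
Code:
/* Hypothetical minimal WDM driver; DriverEntry is the driver's entry point. */
#include <wdm.h>

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(DriverObject);
    UNREFERENCED_PARAMETER(RegistryPath);
    /* One call from any driver halts the machine with a bluescreen;
     * 0xE2 is the MANUALLY_INITIATED_CRASH bug check code. */
    KeBugCheck(0xE2);
    return STATUS_SUCCESS; /* never reached */
}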
If sysenter flushed caches automatically, that would take away the OS dev's ability to control which ones need flushing. Plus, syscall/sysenter, IIRC, is an interrupt anyway. I forget if it's set in an MSR or tied to a specific number; I never looked that deep into it, since my kernel was a toy.
But, no, that's like saying everyone's obligated to tell everyone not to use gets() because of buffer overflows. It's up to you to figure this out. I'm responsible for the security of my code, not the library I use, unless I was given certain specific guarantees (like with read functions that take the size of the buffer). The bug is in the OSes, not the processors.
Also, Meltdown apparently affects the newer ARMs as well; otherwise patches would not have been made for them. That means AMD *IS* likely affected. If not, that might be another reason why AMD has been behind. Though, to be fair, I've warned people about out-of-order execution potentially being hazardous for other reasons. Lately, processor improvements have revolved around trying to tackle Wirth's Law rather than actual improvements, hence Intel's bigger-cache processors having creamed AMD's focus on increasing the execution speed of individual instructions: compilers overpad, devs don't seem to set optimization flags, etc.
SYSCALL/SYSENTER is not an interrupt - that's literally the entire point of the instruction (replacing the old int 0x80 approach).
So how does one set up their kernel to accept syscall/sysenter?
Oh, I stand corrected; I looked it up myself. So instead of using an ISR (a function pointed to by the IDT), you write an entry function whose code segment comes out of the GDT (selected via the STAR MSR) and whose address lives in the LSTAR MSR. I made an incorrect assumption from the fact that both int 0x80 and syscall work on Linux.
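For anyone curious what that setup actually looks like, here's a toy-kernel sketch (my own, with an assumed GDT layout and a hypothetical syscall_entry handler; the MSR numbers are the architectural ones for x86-64 SYSCALL):
Code:
#include <stdint.h>

#define IA32_EFER  0xC0000080u  /* extended feature enables */
#define IA32_STAR  0xC0000081u  /* segment selectors for SYSCALL/SYSRET */
#define IA32_LSTAR 0xC0000082u  /* 64-bit SYSCALL entry RIP */
#define IA32_FMASK 0xC0000084u  /* RFLAGS bits cleared on entry */

static inline void wrmsr(uint32_t msr, uint64_t value) {
    __asm__ volatile("wrmsr"
                     : /* no outputs */
                     : "c"(msr), "a"((uint32_t)value), "d"((uint32_t)(value >> 32)));
}

static inline uint64_t rdmsr(uint32_t msr) {
    uint32_t lo, hi;
    __asm__ volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
    return ((uint64_t)hi << 32) | lo;
}

/* Hypothetical entry point; a real handler must switch stacks,
 * save registers, and SYSRET back to user mode. */
extern void syscall_entry(void);

void init_syscall(void) {
    /* SYSCALL loads CS from STAR[47:32] (SS = that + 8); SYSRET loads
     * CS = STAR[63:48] + 16 and SS = STAR[63:48] + 8. The selector
     * values below assume a particular toy GDT layout. */
    wrmsr(IA32_STAR,  ((uint64_t)0x13 << 48) | ((uint64_t)0x08 << 32));
    wrmsr(IA32_LSTAR, (uint64_t)(uintptr_t)syscall_entry);
    wrmsr(IA32_FMASK, 1 << 9);               /* clear IF: enter with interrupts off */
    wrmsr(IA32_EFER,  rdmsr(IA32_EFER) | 1); /* set SCE (SYSCALL enable) bit */
}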
Also, the Meltdown vulnerability is quite specific - it's not situation-dependent; the vulnerability exists as long as the ring 0 memory pages are mapped while executing in ring 3 (i.e. the way every kernel on x86 and similar architectures has always worked). So if this were a known insecure thing, the fix would be to have the instructions enforce it, or at the very least to put a giant "this is insecure" warning in the documentation.
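(A sketch of my own, not exploit code: what makes those mapped pages exploitable is that Meltdown turns a speculative read into something observable via a cache-timing side channel. The measurement half of that looks roughly like this on x86 with gcc or clang - a flushed cache line takes measurably longer to load than a cached one.)
Code:
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

static uint8_t probe[4096];

static uint64_t time_access(volatile uint8_t *p) {
    unsigned aux;
    uint64_t start = __rdtscp(&aux);  /* timestamp before the load */
    (void)*p;                         /* the load being timed */
    uint64_t end = __rdtscp(&aux);    /* timestamp after the load */
    return end - start;
}

int main(void) {
    probe[0] = 1;                     /* touch the line: now cached */
    printf("cached:  %llu cycles\n", (unsigned long long)time_access(probe));
    _mm_clflush(probe);               /* evict the line from the cache */
    _mm_mfence();
    printf("flushed: %llu cycles\n", (unsigned long long)time_access(probe));
    return 0;
}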
That's not Intel's job. You should learn how to use your tools. You can't sit there and write "this needs to be wrapped between calls to accomplish this for security" for every instruction or function; it's a fool's errand. Instead, the programmer should understand their tools and foresee this themselves.
Speaking of which: Have you read the man page for gets() recently? It starts with "Never use this function." (underline in original). Or have you tried actually using it? It's removed from modern versions of C, and if you deliberately request an older version, the compiler and linker both output warnings (by default, I haven't turned on any extra warnings here). Security isn't about waiting until everyone gets pwned and then chortling "well you should have known not to do that, peasant" at the responsible developer. We can't possibly expect every developer to be intimately aware of every possible security issue. If something's unsafe, you make it as hard as possible for people to do it.
They must've removed it fairly recently. Anyway, programmers are not toddlers; they shouldn't need nannies. You are supposed to know your tools. I can't nanny every developer out there. What if they make a buffer of size 1024, but then realize they want a larger buffer and only raise the read function's number? How am I supposed to protect the programmer from themselves? The OS devs made some really stupid mistakes here. It's understandable that they made them, just as it is for the people who used gets() and got owned. But they need to accept responsibility for a stupid mistake instead of blaming the library provider for not being their nanny (to be fair, I don't really see this from the OS devs).
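(For what it's worth, that exact mismatch is what the sizeof idiom guards against; a minimal sketch:)
Code:
#include <stdio.h>

int main(void) {
    char buf[1024];   /* grow this later... */
    /* ...and the read limit follows automatically, because it is
     * derived from the declaration instead of being written twice.
     * gets(buf) could never know this size - removed in C11 for a reason. */
    if (fgets(buf, sizeof buf, stdin) != NULL)
        printf("read: %s", buf);
    return 0;
}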
AMD's microarchitecture is quite specifically not vulnerable to Meltdown because they do not do the speculative reads that are responsible; ARM and Intel both do. It's possible there are other flaws in AMD's architecture that lead to similar results, but that would be a separate vulnerability with a separate exploitation approach (and I suspect they'd have had a damn good look before publicly proclaiming their immunity to Meltdown).
So I was right: AMD doesn't allow out-of-order execution.
kohlrak wrote:Outside of the NSA, you mean? We're finding a lot of security holes as of late, and how did we suddenly catch these?
Because we (fairly abruptly) hit the point as a society at which security issues started costing a serious amount of money, and therefore a serious amount of money started getting thrown at dealing with them. Google's Project Zero, for example, is responsible for a lot of the big security flaws that have been published in the last couple of years, and it's an incredibly expensive endeavour employing a lot of very expensive, talented researchers; many of the large tech firms have their own security teams with similar goals.
They've cost a lot of money for a long time now. I honestly think the gravity of the NSA spying finally woke tech people up. Your average Joe, unfortunately, still couldn't care less. Your average Joe still yells at his TV as a joke.