Welcome and Featured Speakers all morning.
Christopher Ahlberg (CEO/Co-Founder of Recorded Future) - Opening Remarks
Christopher provided some insights into where Recorded Future is going and the current state of the company. One word: "GROWTH". The company has grown 90% since RFUN17! Congrats to the team at RF!
A couple of points from his talk:
* In the near future (prior to 2020) your company/business will be judged not just on your earnings but also on your online/corporate risk reputation. The corporate risk surface will be made up of many factors (data points) gleaned from incidents, attack surface, company RELATION to other companies with incident issues, company RELATION to supply chain incidents, and overall online persona (rep).
** Recorded Future is in the 'beta' stage for a new offering which will help companies understand their Corporate Risk Surface. By analyzing data available to the RF platform they can provide details of where/what/who/how things are affecting this overall rating or score. A rough illustration of the idea follows below.
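To make the idea concrete, here's a back-of-the-napkin Python sketch of rolling a few factor categories into one score. The factor names and weights are completely made up by me for illustration; this is NOT how Recorded Future actually computes their rating.

# Hypothetical sketch: roll several risk factor categories (0-100 each)
# into one corporate risk score. Weights and factor names are made up
# for illustration only.
FACTOR_WEIGHTS = {
    "incidents": 0.30,               # incidents attributed to the company
    "attack_surface": 0.25,          # exposed services, leaked credentials, etc.
    "third_party_incidents": 0.20,   # incidents at related companies
    "supply_chain_incidents": 0.15,  # incidents in the supply chain
    "online_reputation": 0.10,       # overall online persona/rep
}

def corporate_risk_score(factors: dict) -> float:
    """Weighted average of factor scores (each 0-100, higher = riskier)."""
    return sum(FACTOR_WEIGHTS[name] * factors.get(name, 0.0)
               for name in FACTOR_WEIGHTS)

print(corporate_risk_score({"incidents": 80, "attack_surface": 40,
                            "third_party_incidents": 20,
                            "supply_chain_incidents": 10,
                            "online_reputation": 30}))  # -> 42.5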
Geoff Brown from NYC Cyber Command: Geoff is the Chief Information Security Officer for the City of New York. He discussed the NYC Cyber Command and its role in the overall security of the city, and how the CITY is making a strong effort to keep citizens' information PRIVATE. NYC provides a free app to citizens to help them know if their mobile device has a security issue (ie: malware, connecting to suspicious wifi, etc): https://secure.nyc/ The city is also trying to provide 100% free wifi to all citizens: https://www.link.nyc/ One major reason for providing this 'service' is to give users a secure wifi/network to connect to and to reduce the number of wifi hotspot attacks. The NYC Cyber Command has multiple roles:
* Education - Cyber education for citizens AND to bolster the Cyber workforce
* Incubation - Research and Innovation
* VC/City Funding into new Security and Safety Measures in the city.
Priscilla discussed the importance of Attribution.
Key takeaways: with Attribution you need to know:
* How
* Who
* What was hit
* Risk of future attacks
Operations/companies cannot be happy with just getting threats/attackers off their networks after an incident. They need to understand how it happened and fix the root cause of the problem. To do this you need to understand the environment and assets, have the ability to constantly improve your detection, defenses, and response capabilities, and have an understanding of who would want to attack you and why.
Threat Intelligence Awards:
Shout out to my colleague Danny Chong for his nomination!
This talk discussed the role of corporate risk and how, very soon, companies will be evaluated by their Corporate Risk Score along with the other metrics used today in deals, purchasing, and everyday business activities. He mentioned the importance of Sector Analysis and understanding that every Sector (Industry) will be affected by threats in different ways and through different attack vectors. Key takeaway here is that Corporate Reputation scores will be influenced by Risk Scores and security incidents. Supply chain attacks CAN and WILL have an effect on your Corporate Reputation, so you need to be aware of what your partners are doing (and aren't doing).
Presentation about different Threat Actors. Great discussion but not allowed to post about it :)
Key takeaway: Don't reuse passwords across platforms/applications.
Rich Dube from the Recorded Future delivery team presented on integrating Recorded Future Threat Intel into Splunk. Utilizing the watch lists and correlation rules in the Recorded Future Splunk app gives users the information needed for better decision making and alerting. Threat intel ingestion into the platform keeps getting more efficient, and data enrichment is a necessity when doing IR!
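To illustrate the enrichment idea outside of Splunk, here's a small Python sketch that matches events against an exported watch list. The CSV layout, field names, and sample data are my own made-up examples; the real Recorded Future Splunk app does this natively with lookups and correlation searches.

# Hypothetical sketch: enrich log events with risk context from an
# exported watch list (CSV of indicator,risk_score). Field names and
# sample data are made up for illustration.
import csv, io

SAMPLE_WATCHLIST_CSV = "indicator,risk_score\n203.0.113.7,89\n198.51.100.2,40\n"

def load_watch_list(csv_text):
    return {row["indicator"]: int(row["risk_score"])
            for row in csv.DictReader(io.StringIO(csv_text))}

def enrich(events, watch_list, min_score=65):
    """Yield (event, risk_score) for events whose dest IP is on the list."""
    for event in events:
        score = watch_list.get(event.get("dest_ip"))
        if score is not None and score >= min_score:
            yield event, score

watch = load_watch_list(SAMPLE_WATCHLIST_CSV)
events = [{"dest_ip": "203.0.113.7", "user": "jdoe"},
          {"dest_ip": "198.51.100.2", "user": "asmith"}]
for event, score in enrich(events, watch):
    print(f"ALERT risk={score}: {event}")   # only the 203.0.113.7 event fires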
Good and Bad, Indicators Beget Indicators - Why Not All Indicators are Good IOCs
Adrian Porcescu
Adrian Porcescu from the Recorded Future Professional Services team discussed the use of IOCs and how one size doesn't fit all. An organization needs to understand its environment and assets to be able to apply good Threat Intel. An IOC against a company in another industry might be more severe than what you have in your environment. You need to have the ability to adjust the risk score (or severity) associated with an IOC based on the asset it is trying to attack/hit. Just like IDS rules: if you have a UNIX rule in your rule base with NO UNIX servers in your environment, you will be flooded with false positives and miss some of the important alerts.
Adrian discussed the importance of chaining IOCs together for a better understanding of what is happening. One example used was hosts calling out to the DNS server 8.8.8.8, which in most cases would be a LOW severity because it is a Google DNS server; but if you are paying attention to traffic before and after the call you might see some activity related to malware or some other threat. The 8.8.8.8 IOC on its own might not be helpful, but together with more log sources and visibility it could be an indicator of something bad.
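Here's a rough Python sketch of both ideas: scaling an IOC's severity by how relevant it is to the asset, and only escalating a 'benign' indicator like 8.8.8.8 when the surrounding events look bad. The tags, scores, and thresholds are all made up for illustration, not Adrian's actual method.

# Hypothetical sketch of context-aware IOC scoring: the base risk score
# is adjusted by asset relevance, and a low-risk indicator is escalated
# only when chained with suspicious surrounding events. All names,
# tags, and thresholds are made up.
ASSET_TAGS = {"db01": {"linux", "crown-jewel"}, "kiosk07": {"windows"}}

def effective_severity(base_score, ioc_tags, asset):
    """Scale the IOC's base score by how relevant it is to the asset."""
    relevance = 1.0 if ioc_tags & ASSET_TAGS.get(asset, set()) else 0.3
    return base_score * relevance

def chained_severity(dns_event, nearby_events):
    """8.8.8.8 alone is benign; escalate if neighbouring events look bad."""
    suspicious = [e for e in nearby_events
                  if e.get("category") in {"malware", "beaconing"}]
    return 90 if suspicious else 10

print(effective_severity(80, {"linux"}, "kiosk07"))   # 24.0 - wrong platform
print(chained_severity({"dst": "8.8.8.8"},
                       [{"category": "beaconing"}]))  # 90 - escalate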
Organization 'context' is important:
* What do they do?
* What do they have?
* What are they running?
* Who do they work with?
The ability to categorize assets and understand the traffic patterns and behaviors in your environment will be the determining factor in stopping threats. One example he used was traffic seen in a client environment 'talking' with a vendor service which the client uses as part of their delivery. It was originally marked as a false alarm, but upon further review, after categorizing assets in the environment, they could determine that the server 'calling' out to the vendor service was not part of the systems that 'should' be using the service. The IR process was followed and the host was removed and examined offline.
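A minimal sketch of that kind of check, assuming you've already categorized which systems are supposed to talk to which vendor services (the hostnames and service names here are made up):

# Hypothetical sketch: flag outbound connections to a vendor service
# from hosts that are not in the category allowed to use it. The
# categorization data and host/service names are made up.
ALLOWED_CALLERS = {"vendor-delivery.example.com": {"app01", "app02"}}

def unexpected_callers(flows):
    """Yield flows where the source host should not be using the service."""
    for flow in flows:
        allowed = ALLOWED_CALLERS.get(flow["dest"], set())
        if flow["src"] not in allowed:
            yield flow

flows = [{"src": "app01", "dest": "vendor-delivery.example.com"},
         {"src": "hr-laptop-12", "dest": "vendor-delivery.example.com"}]
for flow in unexpected_callers(flows):
    print("Investigate:", flow)   # only the hr-laptop-12 flow is flagged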
Adrian hit on Discovery and Detection.
- Discovery using IOC data/context against historical data
- Detection using IOC data against real time/now data
It was nice to see ARGUS in his presentation. If you don't know argus, check it out: https://qosient.com/argus/ . It can be used for all sorts of captures/replays of pcap-type data. In Adrian's example he was passing a list of IOCs into a filter in argus to see if there had been any traffic in the capture (historical search). I've used ARGUS a ton in the past and will be starting a new project which includes argus in the near future. Keep an eye here: https://github.com/mabartle/bloodhound
$ ra -nnnr argus.log.1.gz - 'indicator'
- Argus - next-generation network flow technology, processing packets, either on the wire or in captures, into advanced network flow data; ra is the client that reads and prints those flow records.
- nnn - don't resolve addresses, ports, or protocols to names (print numbers only)
- r - read from file
- 'indicator' - the tcpdump-style filter expression, given after the '-' separator
- You can adapt this to the logic of your logrotate setup to sweep rotated/compressed capture files.
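As a small convenience (not part of argus itself), you could build that filter expression from a list of indicators in Python. The indicator list here is made up; the ra options are the same ones described above.

# Hypothetical helper: build a tcpdump-style ra filter expression from a
# list of IP indicators. The indicators are made up; '-nnn', '-r', and
# the trailing filter expression are as described above.
import shlex

def build_ra_command(capture, indicators):
    """Return the argv list for an ra historical search over the capture."""
    filter_expr = " or ".join(f"host {ip}" for ip in indicators)
    return ["ra", "-nnn", "-r", capture, "-", filter_expr]

cmd = build_ra_command("argus.log.1.gz", ["203.0.113.7", "198.51.100.2"])
print(" ".join(shlex.quote(part) for part in cmd))
# To actually run it (requires argus-clients installed):
# import subprocess; subprocess.run(cmd, capture_output=True, text=True)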
Randy Conner
Randy presented on automating some of their security operations in ServiceNow with Recorded Future data. He hit on the high points of Automation and Orchestration and the time/resource savings that can be made. He provided some great examples of using Threat Intelligence with asset data to drive IR and weed out false positives, further driving home one of the important takeaways from the conference: YOU NEED TO KNOW YOUR ENVIRONMENT. No level of threat intel will protect you from the 'bad guys' if you don't understand what is in your environment. Randy discussed some great use cases and showed examples of where automation has saved his organization a ton of time and effort. By utilizing their CMDB to cross-reference some threats (with CVE numbers) they can quickly identify where/what assets are affected and roll out patches and/or protections quickly.
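Here's a rough sketch of that cross-referencing step in Python. The CMDB export layout, product strings, and asset names are all hypothetical; Randy's actual implementation lives in ServiceNow.

# Hypothetical sketch: map a CVE's affected products to assets in a
# CMDB export so patches/protections can be targeted quickly. The CSV
# layout, product strings, and asset names are made up for illustration.
import csv, io

SAMPLE_CMDB_CSV = (
    "asset,software,version\n"
    "web01,Apache Struts,2.3.34\n"
    "hr-app,SomeVendor HR Suite,9.1\n"
)

def assets_affected_by(cve_products, cmdb_csv_text):
    """Return asset names whose installed software matches the CVE's product list."""
    rows = csv.DictReader(io.StringIO(cmdb_csv_text))
    return [row["asset"] for row in rows
            if any(p.lower() in row["software"].lower() for p in cve_products)]

# e.g. a threat intel alert says some CVE affects "Apache Struts"
print(assets_affected_by(["Apache Struts"], SAMPLE_CMDB_CSV))   # ['web01']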
A word of caution for anyone entering the 'world' of automation: it isn't just building a playbook and calling it a day. A lot like software development, you need to define what (and why) you are developing a playbook, how it will be tested, how it will be rolled out into production, how (and who) this automation will affect, and how changes will be communicated to the organization overall.
Ryan Miller
Ryan presented some great information on vulnerabilities and exploits. He showed timelines of some of the heavily used exploits from last year and the time between the vulnerability and a delivery method. In a lot of cases the vulnerability-to-exploit time might have been shorter than the time it took for the vendor to come out with a patch or workaround, highlighting the need to have good knowledge of your environment/assets and the related products/services/applications they use, so you KNOW when a new vulnerability is out and if/when it will affect you. He stressed the need for dedicated resources within organizations to do vulnerability research and keep up with trends in this area. He discussed using the Recorded Future platform to gain insight into new vulnerabilities which are coming out but may not be talked about in the 'normal' channels (dark web): using this data for tracking/alerting, proactive analysis using mentions/notes from hackers on what exploits might be used, and combining this data with internal data to get a better understanding of trends in your company (like which vulnerabilities were used the most vs. what you have in your environment).
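To make the timeline point concrete, here's a tiny Python sketch that compares the disclosure-to-exploit gap against the disclosure-to-patch gap. The dates are made up; in practice you'd track these from your intel feeds and vendor advisories.

# Hypothetical sketch: compare how long it took for an exploit to appear
# versus how long the vendor took to patch. All dates are illustrative.
from datetime import date

vulns = [
    {"id": "VULN-A", "disclosed": date(2018, 3, 1),
     "exploit_seen": date(2018, 3, 9), "patched": date(2018, 4, 2)},
]

for v in vulns:
    to_exploit = (v["exploit_seen"] - v["disclosed"]).days
    to_patch = (v["patched"] - v["disclosed"]).days
    if to_exploit < to_patch:
        print(f'{v["id"]}: exploited after {to_exploit} days, '
              f'patched after {to_patch} - mitigate before the patch lands')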
Take aways:
* every company should have dedicated resources to investigate/research vulnerabilities and how they relate to the company environment (hosts, services, applications, etc)
* Vulnerability management in an organization is paramount; you need to know what has a patch and what is exploitable when the stuff hits the fan. You don't want to waste a ton of time on an exploit which will do little to no damage within your org.
* The '30 day standard' for patching is too long in most cases. You need to have a good patch process which weighs the priority/severity of new vulnerabilities against what is in the environment (see the sketch after this list).
* A majority of the time, patching the most widespread vulnerabilities will protect you from a wide range of attacks. Bad guys are using readily available exploits, not 0-day type attacks.
* You need to have the internal ability to create your own detection methods against some of these new exploits. If you have an ear to the web (with tools like Recorded Future) you can create rules/alerts to trigger in case someone starts hitting you with an exploit against a vulnerability with no patch.
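As promised above, a minimal prioritization sketch: rank vulnerabilities by severity and exploit availability, but only if the affected product actually exists in your environment. All inputs and weights are hypothetical, not a standard scoring scheme.

# Hypothetical sketch: prioritize patching by severity, exploit
# availability, and presence in the environment. Scores/weights are
# illustrative only.
ENVIRONMENT_PRODUCTS = {"apache struts", "windows server 2016"}

def patch_priority(vuln):
    """Higher score = patch sooner; zero if the product isn't deployed here."""
    if vuln["product"].lower() not in ENVIRONMENT_PRODUCTS:
        return 0.0                      # not in the environment: deprioritize
    score = vuln["cvss"]                # 0-10 base severity
    if vuln["exploit_available"]:
        score *= 2                      # weaponized vulns jump the queue
    return score

queue = sorted([
    {"id": "VULN-A", "product": "Apache Struts", "cvss": 8.1, "exploit_available": True},
    {"id": "VULN-B", "product": "SomeNichePlugin", "cvss": 9.8, "exploit_available": True},
], key=patch_priority, reverse=True)
print([v["id"] for v in queue])   # VULN-A first: it is actually deployed here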
The Recorded Future team puts on a great conference. Every year the venue, events, and talks get better! It was great seeing everyone!