Computer Equipment for Kids
Here is a presentation I did for the Girls that Code group at ODU last year:
https://docs.google.com/presentation/d/1VWj0SeMFmxwFr0zQq2RC1eazuw8AxNhk5ZSdyqE_X0M/edit?usp=sharing
RFUN is Recorded Future's yearly conference, a two-day event. The first day was multiple keynotes and breakout sessions, and the second day was talk and training sessions for the tool. This year it was held at the InterContinental (The Wharf) in Washington, DC.

Day 1: Welcome and Featured Speakers all morning.

Christopher Ahlberg (CEO/Co-Founder of Recorded Future) - Opening remarks

Christopher provided some insights into where Recorded Future is going and the current state of the company. One word: "GROWTH". The company has grown 90% since RFUN17! Congrats to the team at RF! A couple of points from his talk:

* In the near future (prior to 2020) your company/business will be judged not just on your earnings but also on your online/corporate risk reputation. The corporate risk surface will be made up of many factors (data points) gleaned from incidents, attack surface, company RELATION to other companies with incident issues, company RELATION to supply chain incidents, and overall online persona (rep).
** Recorded Future is in the 'beta' stage for a new offering which will help companies understand their Corporate Risk Surface. By analyzing data available to the RF platform they can provide details of where/what/who/how things are affecting this overall rating or score.
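Purely to make the "many factors" idea concrete, here is a toy composite-score sketch in Python. The factor names, weights, and 0-100 scale are my own invention for illustration; they are not how Recorded Future actually calculates anything.

# Toy illustration of a composite corporate risk score.
# Factor names, weights, and the 0-100 scale are invented for this
# sketch; they are NOT Recorded Future's actual model.
FACTOR_WEIGHTS = {
    "incidents": 0.30,       # breaches/incidents tied to the company
    "attack_surface": 0.25,  # exposed services, leaked creds, etc.
    "peer_relations": 0.15,  # ties to companies with incident issues
    "supply_chain": 0.20,    # incidents among suppliers/partners
    "online_persona": 0.10,  # overall online reputation signals
}

def corporate_risk_score(factors: dict) -> float:
    """Weighted average of per-factor scores (each 0-100)."""
    return sum(FACTOR_WEIGHTS[name] * factors.get(name, 0.0)
               for name in FACTOR_WEIGHTS)

print(corporate_risk_score({
    "incidents": 80, "attack_surface": 55, "peer_relations": 20,
    "supply_chain": 65, "online_persona": 10,
}))  # -> 54.75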
Priscilla Moriuchi from Recorded Future: Priscilla discussed the importance of Attribution. Key takeaways: with Attribution you need to know:
* How
* Who
* What was hit
* Risk of future attacks

Operations/Companies cannot be happy with just getting threats/attackers off their networks after an incident. They need to understand how it happened and fix the root cause of the problem. To do this you need to understand the environment and assets, and have the ability to constantly improve your detection, defenses, and response capabilities, along with an understanding of who/why someone would want to attack.

Threat Intelligence Awards: Shout out to my colleague Danny Chong for his nomination!

Alexander Schlager from Verizon: Discussed the role of corporate risk and how very soon companies will be evaluated by their Corporate Risk Score along with the other metrics used today in deals, purchasing, and everyday business activities. He mentioned the importance of Sector Analysis and understanding that every Sector (Industry) will be affected by threats in different ways and through different attack vectors. Key takeaway here: Corporate Reputation scores will be influenced by Risk Scores and security incidents. Supply chain attacks CAN and WILL have an effect on your Corporate Reputation, so you need to be aware of what your partners are doing (and aren't doing).

Mind Hunter: Presentation about different Threat Actors. Great discussion but not allowed to post about it :) Key takeaway: Don't reuse passwords across platforms/applications.

Splunk Smarter: Security Operations with Threat Intelligence: Rich Dube from the Recorded Future delivery team presented on integrating Recorded Future Threat Intel into Splunk. Utilizing the watch lists and correlation rules in the Recorded Future Splunk app allows users to have the information needed for better decision making and alerting. Threat Intel ingestion into the platform keeps getting more efficient, and data enrichment is a necessity when doing IR!

Day 2

Good and Bad, Indicators Beget Indicators - Why Not All Indicators are Good IOCs - Adrian Porcescu

Adrian Porcescu from the Recorded Future Professional Services team discussed the use of IOCs and how one size doesn't fit all. An organization needs to understand its environment and assets to be able to apply good Threat Intel. An IOC that is severe against a company in another industry might not be as severe in your environment. You need the ability to adjust the risk score (or severity) associated with an IOC based on the asset it is trying to attack/hit. Just like IDS rules: if you have a UNIX rule in your rule base with NO UNIX servers in your environment, you will be flooded with false positives and miss some of the important alerts.

Adrian discussed the importance of chaining IOCs together for a better understanding of what is happening. One example used was hosts calling out to the DNS server 8.8.8.8, which in most cases would be a LOW severity because it is a Google DNS server; but if you are paying attention to traffic before and after the call, you might see some activity related to malware or some other threat. The 8.8.8.8 IOC on its own might not be helpful, but together with more log sources and visibility it could be an indicator of something bad. Organization 'context' is important:
* What do they do?
* What do they have?
* What are they running?
* Who do they work with?
The ability to categorize assets and understand the traffic patterns and behaviors in your environment will be the determining factor in stopping threats. One example he used was traffic seen in a client environment 'talking' with a vendor service which the client uses as part of their delivery. It was originally marked as a false alarm, but upon further review, after categorizing assets in the environment, they could determine the server 'calling' out to the vendor service was not part of the systems that 'should' be using the service. The IR process was followed and the host was removed and examined offline. Adrian also hit on Discovery and Detection. A rough sketch of this context-based severity idea follows.
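To make the context idea concrete, here is a minimal Python sketch of adjusting an IOC's severity based on what the targeted asset actually is. The asset inventory, platform tags, and score adjustments are all invented for illustration; this is not anyone's actual scoring algorithm.

# Minimal sketch: adjust IOC severity using asset context.
# The inventory, tags, and scoring rules are hypothetical.
ASSETS = {
    "10.0.1.20": {"platform": "windows", "role": "workstation"},
    "10.0.2.5":  {"platform": "unix",    "role": "web-server"},
}

def adjusted_severity(ioc_platform: str, base_score: int, target_ip: str) -> int:
    """Scale an IOC's base risk score (0-100) by asset context."""
    asset = ASSETS.get(target_ip)
    if asset is None:
        return base_score           # unknown asset: leave the score alone
    if ioc_platform != asset["platform"]:
        return base_score // 4      # e.g. a UNIX exploit against a Windows box
    if asset["role"] == "web-server":
        return min(100, base_score + 20)  # internet-facing: bump it up
    return base_score

# A UNIX-only exploit hitting a Windows workstation drops to noise level,
# while the same IOC against a UNIX web server gets bumped.
print(adjusted_severity("unix", 80, "10.0.1.20"))  # 20
print(adjusted_severity("unix", 80, "10.0.2.5"))   # 100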
It was nice to see ARGUS in his presentation. If you don't know argus, check it out: https://qosient.com/argus/ . It can be used for all sorts of captures/replays of pcap-type data. In Adrian's example he was passing a list of IOCs into a filter in argus to see if there had been any matching traffic in the capture (historical search). I've used ARGUS a ton in the past and will be starting a new project which includes argus in the near future. Keep an eye here: https://github.com/mabartle/bloodhound

$ ra -nnnr argus.log.1.gz - 'indicator'

* Argus - next-generation network flow technology, processing packets, either on the wire or in captures, into advanced network flow data.
* -nnn - controls address-to-name lookups
* -r - read from file
* You can build your logrotate logic into this (note the .1.gz) to search across rotated capture files.

Threat Intelligence with Automation and Orchestration - Randy Conner

Randy presented on automating some of their security operations in ServiceNow with Recorded Future data. He hit on the high points of Automation and Orchestration and the time/resource savings that can be made. He provided some great examples of using Threat Intelligence with asset data to drive IR and weed out false positives, further driving home one of the important takeaways from the conference: YOU NEED TO KNOW YOUR ENVIRONMENT. No level of threat intel will protect you from the 'bad guys' if you don't understand what is in your environment. Randy discussed some great use cases and showed examples of where automation has saved his organization a ton of time and effort. By utilizing their CMDB to cross-reference some threats (with CVE numbers) they can quickly identify where/what assets are affected and roll out patches and/or protections quickly (a rough sketch of this cross-referencing idea is at the end of this section). A word of caution for anyone entering the 'world' of automation: it isn't just building a playbook and calling it a day. A lot like software development, you need to define what (and why) you are developing a playbook, how it will be tested, how it will be rolled out into production, how (and who) this automation will affect, and how changes will be communicated to the organization overall.

Intelligence, Vulnerabilities, and Patching - Ryan Miller

Ryan presented some great information on vulnerabilities and exploits. He showed timelines of some of the heavily used exploits from last year and the time between a vulnerability's disclosure and a working delivery method. In a lot of cases the vulnerability-to-exploit time might have been shorter than the time it took for the vendor to come out with a patch or workaround, highlighting the need to have good knowledge of your environment/assets and the related products/services/applications they use, so you KNOW when a new vulnerability is out and if/when it will affect you. He stressed the need for dedicated resources within organizations to do vulnerability research and keep up with trends in this area. He discussed using the Recorded Future platform to gain insight into new vulnerabilities which may not be talked about in the 'normal' channels (e.g., dark web chatter): using this data for tracking/alerting, doing proactive analysis using mentions/notes from hackers on what exploits might be used, and combining this data with internal data to get a better understanding of trends in your company (like which vulnerabilities were used the most vs what you have in your environment).
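Randy's CMDB cross-referencing and Ryan's know-your-environment point boil down to the same simple join between threat data and asset data. A minimal Python sketch; the CMDB layout, product names, and CVE IDs here are all hypothetical:

# Sketch: cross-reference new CVEs against a CMDB to find affected assets.
# Hypothetical CMDB export: asset -> installed (product, version) pairs.
CMDB = {
    "web01":  [("apache-struts", "2.3.5")],
    "db02":   [("postgresql", "9.6")],
    "mail01": [("exim", "4.89")],
}

# Hypothetical vulnerability feed: CVE ID -> affected product names.
VULN_FEED = {
    "CVE-2018-1111": {"apache-struts"},
    "CVE-2018-2222": {"exim"},
}

def affected_assets(cve: str) -> list:
    """Return assets running a product named in the CVE's affected list."""
    products = VULN_FEED.get(cve, set())
    return [asset for asset, installed in CMDB.items()
            if any(name in products for name, _version in installed)]

for cve in VULN_FEED:
    print(cve, "->", affected_assets(cve))
# CVE-2018-1111 -> ['web01']
# CVE-2018-2222 -> ['mail01']

With even a rough join like this, a new CVE becomes a short list of machines to patch instead of a company-wide fire drill.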
Takeaways:
* Every company should have dedicated resources to investigate/research vulnerabilities and how they relate to the company environment (hosts, services, applications, etc).
* Vulnerability management in an organization is paramount; you need to know what has a patch and what is exploitable when the stuff hits the fan. You don't want to waste a ton of time on an exploit which will do little to no damage within your org.
* The '30 day standard' for patching is too long in most cases. You need a good patch process which weighs the priority/severity of new vulnerabilities against what is in the environment.
* A majority of the time, if you patch for the most widespread vulnerabilities it will protect you from a wide range of attacks. Bad guys are using readily available exploits, not 0-day type attacks.
* You need the internal ability to create your own detection methods against some of these new exploits. If you have an ear to the web (with tools like Recorded Future) you can create rules/alerts to trigger in case someone starts hitting you with an exploit against a vulnerability with no patch.

Until Next Year!
The Recorded Future team puts on a great conference. Every year the venue, events, and talks get better! It was great seeing everyone!

While trying to implement an Agile development environment with a new team a few years ago, one of the developers said to me: "I don't like Agile, it makes the whole team mediocre. Your bad developers will bring the team down and your good developers will have to slow down to wait for the rest." It's a comment that has stayed with me ever since. Like many things in the Agile world, it is easy to point at "Agile" and state that IT is the issue when the underlying issue has nothing to do with Agile practices/ways. When applying Agile practices to a team/group/project it is important to remember that it isn't a cure-all for any problem you have. I've seen Agile work great, taking a great team and improving their execution, communication, and collaboration. I've also seen the 'other' side of Agile, when a group/team of individuals are brought together for a project and nothing can help the outcome. It takes MORE than just AGILE to have a high performing team. Agile helps by recommending different frameworks or methodologies, but how they are implemented, adjusted, and streamlined is really up to the people that make up the team. The trouble starts if the team is not ready to ask the hard questions... I'm not talking about the hard questions related to a project, but the hard questions about the people themselves.
Agile will NOT solve those issues! But as a manager/project lead/team lead you need to have a way of righting the ship when team members do NOT have the drive necessary to deliver. This comes in a couple different ways, at different times during a project/employee cycle.

1. Hire the RIGHT people!

A line that is easier said than done. Where do you start? You can always start with the skills necessary to fill a role. Or you could look at how the current team interacts and ask what type of traits make up a good team member. At times there are soft skills that are more important than the technical skills. If I can train you up on what is needed and you have the drive and determination, it might suit me well to do that vs hiring someone with that type of experience and being stuck with an underperforming employee that will do the bare minimum and 'punch the clock' day in and day out. Hiring the RIGHT developer can be tricky; you can't take an entry-level developer and push them into a Sr Dev role, this is where experience is necessary. You need someone in the Sr role who can help mentor and motivate the younger developers on the team. Ideally the Sr Dev is a 'teacher' and will help the team with the software architecture, design, and best practices necessary to build great applications. When hiring a less experienced developer, if they have the skill set and know-how in one programming language, you will have the flexibility to train them in another. With the pace of technology today, new languages, techniques, and frameworks are available to make the development process easier and provide an equal playing field when switching languages. I do recommend having some level of code challenge/scenario the interviewee should run through during the interview process so you can verify they are able to do what they 'say' on their resume. Lastly, TAKE THE TIME to have all team members and folks associated with the project meet the candidate during the interview process. Everyone will have their own perception of the candidate, and this is valuable feedback when determining if they are the right fit for the team. Team members will also be more invested in new hires if they are part of the process and have a 'vote' in the decision. It's a LOT easier to NOT hire someone vs dealing with PIPs and/or performance issues after they are on board. Look for the red flags in the interview process, and if anything feels 'off', examine why.

2. The Legacy Developer

Teams are made up of all types of people and personalities. What do you do if you take over a team/project with members who have been involved for years and are underperforming? First off, don't make any changes right away! Learn how the team operates (or doesn't). See how folks interact: do they work together or in silos? Do they share information and techniques? Do they help each other? Or does that only happen when the 'manager' dictates for folks to help each other? Talk to the team members. Start up a 1v1 at least once a month to discuss the goals and aspirations of the team members. (Great book on 1v1 conversations: Behind Closed Doors: Secrets of Great Management.) Sometimes employees are NOT doing what they want in their work/career. Having these conversations is hard but needed. Knowing 'what' an employee would like to be working on can help YOU put them in the right place when the opportunity arrives. If the developer is not producing, have the talk; be honest and let them know what you expect.
If the developer does not start producing after a few of these talks, then it is time for a Performance Plan. Don't wait on this step! It sucks having to put anyone on a PIP, but what sucks more??? Months of wasted time and effort on a project!

3. Set expectations

Set clear expectations! Ideally in an Agile 'team' the team members will be defining what they expect from themselves and others on the team, holding each other accountable when expectations are NOT met and offering help when needed. Different teams handle this in different ways. On some of the Scrum teams I have been a part of, this happens in different areas:
1. Sprint Planning - Setting the Sprint Goal. This is an expectation for the iteration. The team is 'committing' to the goal, so a precedent is set that this will be achieved.
2. Day-to-Day Process - Software development has many steps, and teams handle these steps in different variations. For example, many teams have some form of code review before code is merged into 'master'/the product.
Here is a great article (which leads to other articles) on ways to set expectations and have these discussions: letsgrowleaders.com/2018/08/21/how-to-motivate-your-team-stop-treating-them-like-family/

4. Provide a way to improve/learn

As part of the feedback and expectations, you will need to provide a way for them to improve. Point them in the right direction, whether that is training, mentoring, or dedicated learning time.
5. Provide the Vision

There are times in business where the day-to-day gets a little ho-hum and the excitement might not be there. YOU need to provide the team a vision of where you will be 'in the future'. Could be a vision for next month, 6 months, or a year. People do better when they know where they are headed. This might be a difficult task if vision and priorities are shifting within the company. If so, focus on the SHORT AND OBTAINABLE opportunities which the team can deliver! Things that will allow the team to brush up on existing technologies, learn new technologies, or update those build/deployment plans which haven't been touched in a year. Small victories, but victories nonetheless!
Requirement gathering... I was going to start with this one, but there is actually one step you need to complete before you get there.
Feedback gathering and documentation

There needs to be some understanding of HOW this will be done. A process, if you will. Sometimes folks get all bent out of shape when a 'process' is mentioned, but without process the team cannot be in sync. Everyone cannot use their own steps when trying to work together. This process would define the who, what, where, and how of requirements and documentation.
Requirement breakdown and Prioritization

Each requirement needs to be discussed, planned, and brainstormed by the project manager and tech leads. Thinking about the design, implementation, deployment, and training of new features is critical here. Spend the time and document what it will look like, how it will operate, how the user will interact with it, etc. Visuals are important here: mockups, data flows, system diagrams, etc. Everyone will have a picture in their mind of what it will do and/or look like; you NEED visuals so everyone can see what the collective sees. A lot of time and effort will be wasted on assumptions; product owners assuming the development team understands the requirement can lead to problems later. Along that line, if there is too much time between the requirement discussion and the actual work, it will take the team time to understand the requirement all over again and where/how it fits into the application they are building. Just like multitasking, it takes time to shift from one thing to another. Break down the requirements into small issues to ensure the team does not overextend themselves when planning an iteration. What is 'small'? That is a whole other blog :) The team should be providing the level of effort on the issues so inexperienced/non-technical folks are not setting the wrong expectations for the work. The team doing the work should be estimating the work. Story points vs hour estimates... it doesn't really matter; what matters is that the team has decided on the scale that will be used and what it means to them. And you have to be consistent with this scale from sprint to sprint and while doing estimations.

The "real" work

This section is what most folks think of when they think about a software development team: a group of developers that are skilled in writing code in one or more languages and producing a tool/application. But it's more than that, much more. A typical software team will have a list of issues/stories/tasks prioritized and ready for work. Developers will take a task/issue/story to work and go 'head down' into coding the feature/bugfix. Along with the coding, the dev should be creating new tests for new code and updating existing tests for any code updates (a tiny illustration of this habit follows at the end of this section). So for any piece of work started there really is a lot of other work already attached to it: code, test, build, code review, merge (version build), deploy. Before any line of code is started the team needs to understand the work! Don't skimp on the Sprint Planning (if doing Scrum); the more discussion in the planning session, the fewer questions should arise during the work cycle. If you have a question about the work, ASK NOW. There really is no dumb question (as they say), and others might have the same question, or worse yet, no one has thought about your specific question and you just saved yourself and the team a lot of heartache later. Read the issues all the way through; sometimes teams get so used to pulling work into the sprint during planning that they don't go through and review the issue, description, overall goal of the work, and the acceptance criteria. Development work is a discipline; once you start getting lazy, issues will start arising in other areas of your work. Cut corners and you will feel the pain later. My advice: take the time up front, because once you are working on the code you have set the expectation that you (and the team) can accomplish it, and if you can't deliver that code on time and within the amount of effort discussed, it impacts the whole process.
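Here is that tiny illustration of the 'code ships with its tests' habit. The function and tests are invented examples, written pytest-style:

# Hypothetical feature code: the new function and its tests land together.
def normalize_hostname(name: str) -> str:
    """Lowercase a hostname and strip whitespace and a trailing dot."""
    return name.strip().lower().rstrip(".")

# Tests written alongside the change (run with: pytest).
def test_normalize_hostname_lowercases():
    assert normalize_hostname("Web01.EXAMPLE.com") == "web01.example.com"

def test_normalize_hostname_strips_trailing_dot():
    assert normalize_hostname("db02.example.com.") == "db02.example.com"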
Deploy

After the code has been written, tested (both with automated and manual testing), reviewed, merged, and version tested... it is time for a production deployment! My biggest advice in this section: if there is anything manual you are doing during your deployment, TAKE THE TIME AND AUTOMATE. There are CI/CD servers for a reason, and it is to make the building and deployment of code consistent. You might be thinking... oh, the 2 minutes I take every deployment to run a DB backup script or do the VM snapshot doesn't take long. It doesn't, but it does take time, AND God forbid you forget these steps when deploying. Build them into your deployment plan; if you think they are too complicated to script, discuss it with the team. Take those ideas and break them down into their own game plan on how you can make small changes to improve the overall deployment process. (2 minutes x 52 releases/year = 104 minutes, which doesn't sound like much, but a lot of shops these days are doing multiple deployments a DAY!) Another part of the deployment plan should include deployment verification. How is your team verifying the deployment was successful (beyond the CI/CD server saying it was)? Are you doing health monitoring on the server? Are you running a list of tests against the production server to verify API and basic application functionality post deployment?

Feedback!

The job is NOT over after the deployment has been made. There needs to be interaction with the user base to see how the new features have been improving their work life. Did it save them time? Did the feature get delivered the way the user group expected? Or are there a lot of 'this is great, but...' conversations happening after deployment? That could point to multiple problems in the overall process. The team needs to identify where the disconnect has happened and fix the problem.
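One concrete way to spot that disconnect is to instrument the application itself. A minimal Python sketch of a feature-usage counter; the feature names and JSON-file storage are invented, and a real shop might emit these metrics to something like StatsD or Prometheus instead:

# Minimal sketch: count feature usage so post-release reviews can see
# which features are actually used. Names and storage are invented.
import json
from collections import Counter
from pathlib import Path

USAGE_FILE = Path("feature_usage.json")  # hypothetical location
_counts = Counter()

def record_use(feature: str) -> None:
    """Increment the usage counter for a feature and persist it."""
    _counts[feature] += 1
    USAGE_FILE.write_text(json.dumps(_counts))

# Call sites sprinkled through the app:
record_use("export-to-csv")
record_use("export-to-csv")
record_use("new-dashboard")

print(_counts.most_common())
# [('export-to-csv', 2), ('new-dashboard', 1)]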
That is my advice for feedback in general: bake in as many ways of getting it as possible. If generating metrics from your application is possible, use them! Put in counters and controls to tell you what pieces of the application are being used and which ones are sitting collecting dust after a release. If a user base is not using new features after a deployment, it could be that the prioritization of the requirements is wrong... you are delivering things that aren't crucial to the user base. Or worse yet, THEY DON'T CARE, and you are working on a tool/application that would not be missed if the lights were turned off. The only way to know any of these things is to keep that open line of communication between all the teams, personnel, management, and users. (I also wrote a blog about picking a communication medium everyone can agree on.)

Had an unfortunate event at work today. One of my coworkers deleted the deployment plan for one of our projects. Not just one of the deployment plans, but the WHOLE project's deployment plans. We are an Atlassian shop utilizing everything from Confluence (for requirements and documentation), Jira (for issue and project management), and Bitbucket (for code repo, peer review, branch management), to Bamboo (for builds/automated testing/deployments). Bamboo can be a pain at times, but it gets the job done, and like any CI/CD server it has its quirks and you need to provide it proper care and feeding.
Well, halfway through the morning I received a message basically saying "O SH**" and that the deployment plans had been deleted by accident. After a few minutes of fuming, it was time to get this thing back up and running. I contacted our IT team, only to find out this VM had not been backed up since Sept of last year. This day kept getting better and better. I took a little time to run through the deployment plans, build plans, and configurations on the server and document anything and everything I thought would or could be useful after the revert. I was surprised by the lack of options in Bamboo to recover from changes; when it's gone, IT'S GONE! Long story short: we reverted. Luckily a lot of the deployment plan which was deleted was in that snapshot, but it got me thinking about what we could have done to avoid this problem altogether.
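The obvious fix is scheduled backups of the CI server itself. A minimal Python sketch; the paths are made up, it assumes the server's state lives under a single home directory, and for a tool like Bamboo you would pair it with a database dump since plan configuration lives in the database:

# Minimal nightly-backup sketch for a CI server. Paths are hypothetical;
# pair this with a database dump for tools that keep config in a DB.
import tarfile
import time
from pathlib import Path

CI_HOME = Path("/var/atlassian/bamboo-home")  # hypothetical path
BACKUP_DIR = Path("/backups/ci")
KEEP = 14  # number of archives to retain

def backup_ci_home() -> Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d")
    target = BACKUP_DIR / "ci-home-{}.tar.gz".format(stamp)
    with tarfile.open(target, "w:gz") as tar:
        tar.add(CI_HOME, arcname=CI_HOME.name)
    # Prune old archives so the disk doesn't fill up.
    for old in sorted(BACKUP_DIR.glob("ci-home-*.tar.gz"))[:-KEEP]:
        old.unlink()
    return target

if __name__ == "__main__":
    print("wrote", backup_ci_home())  # run nightly from cron

Even a dumb nightly tarball would have turned our "last backup: September" surprise into, at worst, a one-day loss.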
Today was rough, but with any issue comes opportunity. Use the 'lesson' and learn from it so it doesn't happen again. EVERYONE should walk away from the experience with more knowledge of the tool/process, better skills around the tool, and the confidence that this type of issue will NOT happen again because the team is taking the right steps going forward. What are some of your worst backup/recovery experiences?

Bartlett

I updated my All Aboard presentation. It's my thoughts on project management, vision, goals, and impediments. Some thoughts around team management and project management, using small incremental steps to build up to the big project.

Happy Fall everyone! I attended my 5th grader's field trip to The EDGE challenge course a few weeks ago. It was a great experience to see their young minds trying to solve/complete challenges in a group environment.
My thoughts on micro-management and other areas of project/product and team management.