Mr Bartlett Blogs

You don't know what you don't know - Inventory

1/3/2022


 
With all the Log4j madness happening over the last few weeks, it got me thinking (https://www.cisa.gov/uscert/apache-log4j-vulnerability-guidance). Here are a few questions you should be kicking around with your Ops/Security teams when the dust settles. But don't wait too long after an incident; you want the wound to be fresh so you get the details. People forget over time, especially things they do not want to remember.



  • How do we know if we are affected?
    • Do we even run the product/library/application affected by the vulnerability? Where do you start? It all begins with INVENTORY. Over the many years of the cyber industry one thing remains true: we are horrible at inventory and asset tracking. If you do not have a centralized asset list or inventory, start today! Open a spreadsheet and start filling in rows with asset details: name, IP, OS version, OS patch level, and the applications running on the asset with version numbers. This is a small list and can be made as complicated/detailed as you want. I know, most folks are saying: a spreadsheet? Why a spreadsheet? The point isn't the spreadsheet, the point is that you have a list of assets. Most of the time it isn't the medium you capture the data in, it is the process around the inventory that collapses. After creating the list, share it in a centralized location where multiple 'approved' folks can update it. Bake it into the company's psyche: onboarding/offboarding of assets, update the inventory list; patches to inventory, update the asset list. Most companies have that 'running' inventory file that, when you look at it, contains multiple assets that haven't lived on the network for years. (A minimal sketch of such an asset list appears after this list of questions.)
    • This is where a centralized CMDB or asset database using a vendor tool might be helpful, but without the basics you will still need a basic list of assets. When the vendor comes in to plan the implementation, one of the first questions they will ask is: 'Do you have a list of assets/CIs?'
  • How do we know ‘what’ is affected?
    • Most vulnerabilities relate to a version or library within a product/application/service, so it is important to capture the version numbers of the applications running on your assets in order to narrow down what is affected. The building blocks of your inventory start with an asset/device; from there, start filling in the application list and version numbers. It feels like a daunting task when starting from nothing, but remember: every little piece of information you add to your inventory will save you hours of pain later, or during an actual incident when you are scrambling for the information.
    • This is where a centralized CMDB with version details is helpful, or an asset inventory/patching system. Most vulnerability scanners today also create a list of assets and associated application versions, but that might not be your central inventory list, and you will need to integrate that platform with your inventory of record. Most vendor tools now provide integrations between these types of systems, and updating/adding to your central inventory has gotten a lot easier than in yesteryear. When doing a review of your current systems, make sure to document the details of each platform: its use, the data sets/information it provides, and how it could be useful within other platforms when trying to answer a question. If all the data is collected and integrated together, it becomes a lot easier to narrow down which assets might be affected by a vulnerability, or to answer other questions like:
      • What system is not patched?
      • What system is running unapproved software/applications? 
  • How do we remediate what is affected?
    • Answering this question needs to happen long before an incident. Your organization needs a documented process/SOP/plan for incident response and patching procedures, covering different use cases/scenarios, the actions different teams in the organization will take to remediate the issue, and how communications will happen. How will you remediate? Patch, apply a workaround, make a network configuration change, pull the device from the network? This scenario can play out like a bad 'choose your own adventure' book if you haven't thought through some of these scenarios and what is needed for each. Who is involved? How will we track changes during the incident? How will we communicate during the incident? Does the severity of the vulnerability affect the rollout? How does that affect the change management process? What does an emergency change request encompass when dealing with the situation? When do we go back to using 'normal' changes for an asset? How will the company handle the post-incident review and incorporate lessons learned? Evolution of processes/SOPs is important, and it all starts with having the basic block in place to add to. Most of the time that block is a first cut at a document capturing the process. It doesn't have to be perfect, and it won't be, as long as you refer to it and improve upon it.
  • How do we protect against the vulnerability if we cannot remediate?
    • Patch, workaround, network config change, pull from the network? It really depends on a lot of factors:
      • What is affected?
      • What systems? Are they mission critical? Are they internet facing? Do they have security controls around them that already reduce the vulnerability's criticality?
  • How do we check whether this was exploited prior to knowledge of the vulnerability?
    • You need that historical look into your network and assets. What has happened to them? Who has talked/communicated with them? Did they take actions outside their normal behavior?
    • Logging of assets will go a long way here in helping you understand whether you were impacted by a vulnerability. Yes, logging sucks: setting up the data pipeline, collecting logs, setting up monitoring of logs (alerts/triggers), etc. It's a lot of work, but if you build a repeatable process that is implemented into other processes (onboarding/offboarding), the burden gets easier every time you do it. One major area that isn't done enough is log analysis and learning the logs. There are TONS of log types out there, and most products don't stick to a universal log format, so you'll need to roll up your sleeves and learn the specifics of the log types, event types, the fields associated with each log/event type, and what types of events are logged by the product/app/service. Take some time here to run high-level queries against the data: start from the 50,000-foot view and drill in. Start with high-level count queries against the log type (event type) fields, for example (see the log-analysis sketch after this list of questions):
      • COUNT, ACTION FIELD
        • (24987, Deny;   300009, Accept;  23421, Reject)
      • COUNT, DISTINCT COUNT(SrcIP), ACTION FIELD, order by COUNT
      • Look for patterns in event types. Are there event types that should never appear in the logs? If so, that might be a good alert to set up in case they do start showing up.
  • How do we shorten the above?
    • Automation… Look at each of the above and the process around it, and break out a game plan for how you can take a manual process and automate it with a tool/script/platform.
      • Any piece of a manual process that can be shortened with the help of a computer should be looked at.  Does it make sense to automate it?  Where do you start?
    • Do you integrate this workflow into a SIEM, ticketing platform, or automation platform? Ideally some of the tools you are already running in your environment handle some of these tasks, and you will just need to enable plugins/add-ons to create the connections and communications between systems. Lean on your vendor for assistance in this area; they should have ample documentation to get you started, and if there is a custom plugin you need, discuss it with them to get support in place. The last thing you want to be doing is managing a plugin and trying to keep up with all the API updates/changes made by a vendor.
  • How do we onboard devices/services/assets with auto tracking/inventory built in?
    • This question builds off the automation response above. Look at the basic SOPs that apply to the majority of employees in your organization, from onboarding of the employee to hardware assignment to MFA and security controls. Anything you can automate within this process will save you time and effort threefold.
    • There should be a repeatable process for server build-out as well, so identify where you can start automating: gold-image server build, a call to the inventory system to add the asset to the CMDB, updating the inventory with applications, etc.
  • How do we query/dashboard using the above data?   
    • If a new vulnerability is disclosed can you build reports or ‘close to’ real time dashboards showing your current exposure?   
      • Do you have the data to do this?
      • Do the tools you use have the ability to do this?
      • How easy is it to create these reports/dashboards?  Do you need a trained person to whip out these reports or is the tool easy enough to use that in a time of need someone could go in and create a report? 
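
As promised above, here is a minimal sketch of what starting an asset inventory can look like in code. Everything here is an assumption for illustration: the column names, the assets.csv file, and the sample hosts are made up, and a plain spreadsheet works just as well; the point is simply having a queryable list of assets.

```python
# Minimal asset-inventory sketch (assumed format: a flat CSV named assets.csv).
# Columns mirror the fields suggested above: name, IP, OS version, OS patch level, apps + versions.
import csv
from dataclasses import dataclass, asdict

@dataclass
class Asset:
    name: str
    ip: str
    os_version: str
    os_patch_level: str
    applications: str  # e.g. "nginx=1.18; log4j-core=2.14.1"

FIELDS = ["name", "ip", "os_version", "os_patch_level", "applications"]

def save_inventory(assets, path="assets.csv"):
    """Write the asset list somewhere central so multiple 'approved' folks can update it."""
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        writer.writeheader()
        for asset in assets:
            writer.writerow(asdict(asset))

def find_affected(path, app_substring):
    """Answer 'do we even run the affected library?' by scanning the applications column."""
    with open(path, newline="") as fh:
        return [row["name"] for row in csv.DictReader(fh)
                if app_substring.lower() in row["applications"].lower()]

if __name__ == "__main__":
    save_inventory([
        Asset("web01", "10.0.0.11", "Ubuntu 20.04", "2021-12 rollup", "nginx=1.18; log4j-core=2.14.1"),
        Asset("db01", "10.0.0.12", "RHEL 8.4", "2021-11 rollup", "postgresql=13.4"),
    ])
    print(find_affected("assets.csv", "log4j"))  # -> ['web01']
```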
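
And here is the promised log-analysis sketch showing the kind of high-level count queries described under the logging question. The log file name and field names (action, src_ip) are assumptions for illustration; in a SIEM this is usually a one-line stats/group-by query, but the point is knowing your fields well enough to write it before an incident.

```python
# "50,000-foot view" counts over a firewall-style log.
# Assumptions: a CSV log named fw.log with at least 'action' and 'src_ip' columns.
import csv
from collections import Counter, defaultdict

def action_counts(path="fw.log"):
    """COUNT by ACTION field, e.g. {'Accept': 300009, 'Deny': 24987, 'Reject': 23421}."""
    counts = Counter()
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            counts[row["action"]] += 1
    return counts

def distinct_sources_per_action(path="fw.log"):
    """COUNT and DISTINCT COUNT(src_ip) per ACTION field, ordered by count descending."""
    totals, sources = Counter(), defaultdict(set)
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            totals[row["action"]] += 1
            sources[row["action"]].add(row["src_ip"])
    return sorted(((action, total, len(sources[action])) for action, total in totals.items()),
                  key=lambda t: t[1], reverse=True)

if __name__ == "__main__":
    print(action_counts())
    for action, total, distinct_src in distinct_sources_per_action():
        print(f"{action}: {total} events from {distinct_src} distinct source IPs")
```

An event type that should never appear (see the bullet above) then becomes a simple membership check against the keys of these counts, and a good candidate for an alert.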

Once you start getting a handle on the above, you will need to ask the same questions about your vendors, supply chain, and partners. It never ends…

Supply chain, vendors, partners: 
  • How do we check vendors for vulnerabilities? 
  • How do we check our supply chain for vulnerabilities? 
  • How do we get the above details without the wait?
  • How do these parties notify you of the vulnerabilities?  How do they communicate remediation/patching/etc? 


That’s all for now. 
Bartlett




You Never Know What You Are Going to Get

1/2/2022


 
Who doesn't like a good meeting?

As we gathered in the conference room to discuss transitioning applications to the development team, I knew we were about to uncover some nasty secrets hidden in the Ops team. A manager presented an application called CENTRON, basically an analyst task and incident tracking tool with scheduling capabilities. I was familiar with this app; two years ago, during the initial discussions of the app, I had recommended that the code, requirements, and issue tracking be done using the development team's dev studio. We got as far as importing the v1 code base into the repo and then never heard anything else from the application author/developer/team. Now, with turnover and organizational 'realignment', they were coming to the development team for help (and to take over ownership of the tool). I was about to say "I told you so", but looking around the room there was no one from the prior organization that I had worked with or who had worked on the tool. Acquisitions can have a huge impact on people and vision. So I focused my energy on cleanup and getting this application under a supportable maintenance cycle.

Laying the Foundation

First step, onboard the project into the Development Studio: 
  • Create needed Confluence space to store documents, notes, requirements, etc. 
  • Create needed Jira Project for issue management, resource planning, and prioritization
  • Create needed Bitbucket repo(s) for storing code and configuration items. 
  • Create needed Bamboo plans for CI/CD 

Our Development Studio has all the necessary components to onboard any type of project and work it from the requirements-gathering phase all the way to the automated deployment of the tool. The studio doesn't just have integrated tools that make for quick execution, it also has the documentation and processes to keep the studio running. If folks are hired or the path of the team changes, there should be an overall layout of how things run. In this day and age of all things Agile, we sometimes lose sight of the fact that documentation needs to be written, 'lived', and reviewed to keep it up to date and relevant to current-day operation.
 

Collect ALL CODE!  
  • Need to get ALL code and configurations under version control. The majority of the work done to the app was on the production server, so it was a phased approach:
    • Get all code into the repo
    • Verify the files 'match' what is available in production (see the hash-comparison sketch after this list)
    • Clean up unneeded files
      • With many projects that do not have a repeatable release process, you will find backup files and directories included (e.g. CENTRON.php_backup or CENTRON.php_old)
  • Removing the unnecessary files will allow the new maintainers to focus on the important files/parts of the tool. Any time saved is well spent. Think about the code review of the application by the 'new' development team: if they are reviewing copies of the originals, that is time spent on wasted work and ineffective cycles. Removing the files guarantees that this time will be saved in the future over and over again.
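
Here is a minimal sketch of the "verify the files match production" step mentioned above. The local paths and the choice of SHA-256 are assumptions; the idea is simply to diff checksums between the repo checkout and a copy of the production tree so drifted files and stray backups stand out.

```python
# Compare a repo checkout against a copied-down production tree by checksum.
# Assumption: both trees are available locally (e.g. production pulled down with rsync/scp).
import hashlib
from pathlib import Path

def checksums(root):
    """Map of relative path -> SHA-256 digest for every file under root."""
    root = Path(root)
    return {str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*") if p.is_file()}

def diff_trees(repo_dir, prod_dir):
    repo, prod = checksums(repo_dir), checksums(prod_dir)
    return {
        "only_in_prod": sorted(prod.keys() - repo.keys()),   # drift or *_backup/*_old files to triage
        "only_in_repo": sorted(repo.keys() - prod.keys()),
        "changed": sorted(k for k in repo.keys() & prod.keys() if repo[k] != prod[k]),
    }

if __name__ == "__main__":
    # Hypothetical local paths for the repo checkout and the copied production tree.
    for category, files in diff_trees("./centron-repo", "./centron-prod-copy").items():
        print(category, files)
```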
Documentation Review/Creation: 
  • Collect all related/relevant documentation and notes about the application, storing them in a Confluence space for the application.
  • Create a System Integration Diagram to visualize the other systems the application integrates/communicates with.  Without some type of diagram or map you can refer to, it becomes very difficult to understand the other pieces of the puzzle.
  • ANY documentation about the system/application will help you in the long run!


System Review: 
  • Review current application in production
  • Noting directories, configuration items, notes, security concerns, etc 
  • Adding all notes to a central page in confluence
  • Maintaining an application in a production environment is more than just the CODE BASE! You need to understand the overall architecture of the application, how it integrates and communicates with other tools and services, operating-system-level configurations, asset details, and user/account information. Outage and recovery procedures/documents are also important and cannot be overlooked.

Backups/Recovery:
  • Do a complete backup of the system before ANYTHING is done on the server.
    • In our case it is a VM snapshot, which makes life soooo much easier than yesteryear.
    • Along with the backup procedure, there should be recovery steps documented somewhere for the greater good and to reduce single points of failure (SPOF) within the teams.
      • It’s hard to train without some documentation. 
      • How do you know a procedure is followed if it isn't documented?

A New Day
Start anew:
  • Deploy a new server(vm) to replace the old production server.  After years of a system living in production you can no longer fully understand who was on the box, what they did on the box, what changes were made on the box, and who has root access on the box.  You are better off starting from a clean slate.
  • Determine how the code and components can be more easily managed.  Does it make sense to switch to containers and have different parts/components updatable by piece or all at once.
  • Determine how testing will be done on the application. This project had no unit or integration tests; any change to the code had to be manually verified, with the knowledge that other areas of the tool could break. We implemented a basic testing framework and focused on adding tests as we fixed bugs and/or implemented new features (a small sketch follows this list). DOCUMENT those test cases!!!
  • Determine how deployments will be done. The old application was updated by hand on the server. We moved it to a Bamboo deployment plan which pulls Docker images down from our private registry and starts them with a docker-compose file. Now deployments are automated and done by the machine, which greatly reduces the errors made by a person making the changes.
  • Determine when and how the "lift and shift" will happen from the old production server to the new pristine server. Keep in mind that with any active application, communication is key. Users need to be aware of when maintenance windows will happen and understand how to handle this time in their day-to-day jobs. Very much like a disaster incident, folks will need to know how to do their work without this system for a short period of time. Have a shadow period with both systems up and running; this way you can always hit the 'oh sh88' handle if things are found missing (or wrong) on the new system.
  • Determine where issues, complaints, and feedback will go during the cutover AND moving into the future. We have a designated email distro list set up just for this. The next step for us is to automate case creation from email submissions (e.g. an outage email is submitted, a case is created and put in the IT team's queue for resolution).
  • Document and make known where your documentation is for the application, system, and environment.  If your IT team has to respond to outages they better know where to go and what to do.
  • Make an effort to understand WHY and WHAT areas of the tool/application are used for operations. Times change and new technology/tools come along; don't repeat the same steps when new and improved actions can happen. Management input will be a huge portion of this discussion: they will need to drive the change and decide what will happen with legacy applications in your workplace!
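
As referenced in the testing bullet above, here is a minimal sketch of the kind of regression test we mean. The centron.scheduling module and parse_due_date function are hypothetical stand-ins for whatever code a bug fix touches; the point is that every fix leaves behind a small, documented test so the behavior cannot silently regress.

```python
# Minimal regression-test sketch using pytest (module and function names are hypothetical).
# Each test notes the bug it covers, so the suite doubles as documentation of the test cases.
from datetime import date

import pytest

from centron.scheduling import parse_due_date  # hypothetical piece of the application

def test_parse_due_date_accepts_iso_format():
    # Covers a bug where ISO dates pasted from the old tool were rejected.
    assert parse_due_date("2022-01-03") == date(2022, 1, 3)

def test_parse_due_date_rejects_garbage():
    # Covers a bug where garbage input crashed the scheduler instead of raising a clear error.
    with pytest.raises(ValueError):
        parse_due_date("not-a-date")
```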




You don't know what you got....   Unless you know what you got

1/1/2022


 
With all the Log4j madness happening over the last few weeks it got me thinking ..... (https://www.cisa.gov/uscert/apache-log4j-vulnerability-guidance)
