What is Secure Data Management?

Secure Data Management is an incredibly complex topic to discuss. Whenever we talk about this subject, we are often hindered by three general issues, each of which dramatically affects proper policy, approach, and practice.

First and foremost of these issues is a case of simple unfamiliarity. Those who deal with data, even secure or confidential data, are often managers dealing with limited resources, financial concerns, and legal compliance issues. While this is of course to be expected in any business, the simple fact is that many managers developing a high-level data management plan are not engineers or specialists, and as such have a limited, “business only” view of data.

Secondly, we often see an issue that can broadly be summarized as “misunderstanding”. Even if a manager or other professional is familiar with the idea of Secure Data Management, misunderstanding the more complex inner workings and general procedural approaches can set off a cascade of negative effects across the entire process, quickly turning small issues into larger ones.

Finally, even if the user in question is familiar with data management and understands how the process works in a general sense, there are a huge number of caveats, concerns, and procedural lessons that, unless known in advance, often have to be learned the hard way. These small deviations can make or break a system. Knowing them from the get-go can hugely benefit the system as a whole; not knowing them can cause issues at the very start of the data management process, and these issues can easily propagate throughout the approach and system in question.

This guide attempts to address all three of these issues in a single knowledge base. This Secure Data Management guide is sorted into several sections, each identifying a specific aspect of the topic at hand. While it is best read as a single cohesive piece, it has also been designed to be approachable by broad subject and topic, allowing it to function as a knowledge base.

I - Defining Secure Data Management

Before we dive too deep into all the nooks and crannies of Secure Data Management, it helps to actually understand the overall topic as a concept. While Secure Data Management might be familiar to many engineers and other specialists, it is often misdefined or misunderstood at the management level.

While the overall process is far more complex than a single sentence can summarize, the best way to think of Secure Data Management is “the conceptual practice of managing data from creation to destruction, anticipating risks and disaster circumstances, and crafting a plan to respond in kind.” This plan typically considers a wide variety of circumstances and the resultant solutions - it’s basically a knowledge base and “how to” for the worst situation a provider might find their data being subjected to.

II - Considerations of a Secure Data Management Process
Perhaps a better way of understanding Secure Data Management is to define what considerations should be factored into a management plan. While these considerations are as varied as they are important, they can be broadly sorted into several categories - Disaster Recovery, Access Controls, Asset Lifecycle Management, Security Risk Assessment, and Risk Management.

Disaster Recovery

Not every threat to data is going to be solely digital in nature. We often make a big deal about hackers, state actors, data loss, and other digital threats, but the physical space holds just as many threats to data - and in many cases, these threats can be even more destructive than their digital counterparts.

“Disaster” is a huge word meaning a lot of different things, but when we talk about Secure Data Management, we’re really talking about “any situation in which a massive amount of data is lost, typically due to natural or man-made events”.

Of note - while disasters can be digital in nature, it’s very rare for data to be entirely destroyed or breached without direct, physical access to the server hardware serving it. If such a loss is incurred, there are more things to worry about than Secure Data Management - it would imply your servers, operating systems, and hardware have essentially no protection, and in that situation, there’s simply no recovery.

When crafting a Secure Data Management approach, disaster needs to be taken into account, and a plan needs to be set to ensure that the disaster can actually be recovered from. This plan often consists of policies and solutions to ensure that the business can keep going after a massive data event, be it loss, corruption, or whatever may be the threat at hand.

Access Controls

You wouldn’t leave your home unlocked in the middle of the night for just anyone to walk in. At the same time, you wouldn’t want to lock out the people you do want coming in and out - your kids, your parents, even your friends might, from time to time, need to come in, and simply locking everyone out isn’t a real solution.

The same applies to data. While the vast majority of data will often be locked down and secured under heavy security, there are people, applications, and services that will need to access data. These services will obviously need access, but again, we run into complexity - not every single solution is going to have the same need for access or the same level of privileges to access different resources.

This is where access control comes into play, at the intersection of security and usability. In terms of Secure Data Management, access controls are a big part of “secure”, and serve an essential role in limiting the use of data to the purview and use case delineated for each service requesting access.

While this might seem simple at first, access controls are very, very easy to mess up. Delegating access, or even allowing propagation through a single sign-on federated system (that is, a collection of systems sharing a single credential), can grant access to resources that were never meant to be accessed. Furthermore, resources are not always clearly delineated - sometimes, data on a secure disk might need to be accessed by a service authorized only for a specific customer’s data, and in that case, controls can add a very important layer of granularity, as in the sketch below.
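
Here is a minimal sketch in Python of such a deny-by-default, per-customer check. The service names, customer IDs, and the in-memory GRANTS table are hypothetical placeholders; a production system would back this with a real policy store.

```python
# Minimal sketch of a granular, deny-by-default access check.
# The services, customers, and grants below are hypothetical.
GRANTS = {
    ("billing-service", "customer-1001"): {"read"},
    ("support-portal", "customer-1001"): {"read", "write"},
}

def is_allowed(service: str, customer_id: str, action: str) -> bool:
    """Allow only explicitly granted (service, customer, action) triples."""
    return action in GRANTS.get((service, customer_id), set())

# A service authorized for one customer cannot reach another customer's
# records, even though both sets of records share the same secure disk.
assert is_allowed("billing-service", "customer-1001", "read")
assert not is_allowed("billing-service", "customer-2002", "read")
```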

Asset Lifecycle Management

As part of a consideration of data security, the lifecycle of each asset needs to be taken into account. Data is not just something that floats in the aether - it is stored on a medium, and the security of that medium is heavily affected by what stage of its lifecycle it is currently in.

For instance, when a hard drive is introduced into a system, there are considerations around its mean time between failures and the relative security of the data it holds. As the drive ages and becomes ready for recycling, or even removal from the system, the data on it needs to be handled in some way. What happens to the data at this crucial point is a serious concern, as it involves moving data from secure medium to secure medium in a way that can itself be fundamentally insecure.

Furthermore, the nature of the medium in which the data is stored has a lot to do with its security profile in general. While it’s tempting to simply write data when needed and keep it only as long as needed, this has adverse effects on the hardware, and can lead to an artificially lowered mean time between failures, causing data loss.

On the other hand, storing this data for prolonged periods of time can cause the hardware needed to host it to balloon exponentially - and this is saying nothing of the legal and ethical concerns of doing so, either.

The lifecycle is thus a balance between usefulness, longevity, legality, and usability - and it’s this balancing act that makes lifecycle management so key to the Secure Data Management process.

Security Risk Assessment

Security is a trade-off between usability and restricted access, and in this calculation, the variables chosen will determine the relative security of your resource. With that being said, there is no such thing as perfect security. Even a system closed to all external resources will eventually be subject to the woes of aging hardware, power surges, or natural disaster, and when this occurs, the same steps will need to be taken to recover as if the system had no security whatsoever.

Thus, every element of Secure Data Management is a simple calculation. When we deal with Secure Data Management, what we’re really asking is “how secure is my system, given the risks it is subjected to?”

Accordingly, risk assessment is a very big part of this approach. When looking at the risks a system might encounter, there’s a wide variety of considerations that must be attended to - each of which will be addressed in the next section.

A quick note here - risk assessment is not a set-and-forget proposition. You cannot look at the risk, document it, and move on, never to test the system again. The technology industry is ever evolving, and as new technology and solutions come to the front, so too do new threats and methodologies to breach or damage your data.

Accordingly, assessment should be considered an ongoing process, not a singular instance. We will address this later under our “auditing” section, but this simple mantra should be kept in mind - “assess, and assess often”.

Risk Management

Once we’ve identified our risks through assessment, we must naturally address them in such a way as to minimize their threat while magnifying our usability. This is what is called “mitigation through risk management”.

Risk management is nothing more than another piece in the puzzle. Think of a grand series of cogs all set in a row, each turning the cog next to it. If we were to introduce a foreign object into that series, what would occur?

If we can predict generally what would occur, then we have a wide variety of potential solutions that can be employed to mitigate this damage and minimize the effects. We can introduce new gears, new runs, perhaps even a pane of acrylic to encase the cogs to prevent foreign intrusion.

Simply knowing that there is a foreign object threatening to encroach on your grand machine is risk assessment - knowing how to mitigate this damage is risk management. The same applies to data. We know what our risks are, and we know what they might do. Managing these effects, through solutions like load balancing, traffic inspection, and other such actions, can help turn these disasters into non-issues.

IT Considerations for Secure Data Management Plans

With all of this in mind, what are the chief concerns for IT teams looking to construct a solid Secure Data Management Plan? While the specific answer to this question depends largely on the type of data being handled and the situation in which it is being handled, the answer is roughly “CIA”.

Now, we’re not talking about governmental agencies here - we’re talking about “CIA”, a common security term that almost any IT professional is familiar with. “CIA” stands for Confidentiality, Integrity, and Availability. Balancing these three concepts and ensuring that they are protected is exactly what a Data Management Plan should be based upon.

Confidentiality is the idea that data should only be accessed by its owner or by users the owner has designated as having viewing privileges. Going back to our house key analogy from earlier in this guide, a homeowner will only want specific people to access the house at any given time. Issuing keys to certain users while ensuring the locks are robust enough to keep out unauthorized visitors is ensuring confidentiality. We’ve discussed this previously in our piece titled “Creating a Secure Data Management Process”, but it bears repeating - Confidentiality is perhaps the most critical aspect of Secure Data Management, and should be given the appropriate weight.

In data terms, this can be done in a wide variety of ways, but in practical terms, it typically means the aforementioned access controls, authorization/authentication tokens, or some other methodology of verifying the user. This will be a chief concern in our Secure Data Management Plan.
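
A small, hedged sketch of the verification side, using HMAC signatures from Python’s standard library. The secret and token format here are invented for illustration; real deployments typically use an established scheme such as OAuth or JWT, with expiry and key rotation.

```python
# Sketch of verifying a signed token before serving data. The secret
# and the token format are illustrative placeholders only.
import hashlib
import hmac

SECRET = b"replace-with-a-managed-secret"

def sign(user_id: str) -> str:
    """Issue a token tied to a specific user identity."""
    return hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()

def verify(user_id: str, token: str) -> bool:
    """Check a presented token; compare_digest resists timing attacks."""
    return hmac.compare_digest(sign(user_id), token)
```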

Integrity is the idea that data should be left unchanged from the last state it was left in. While this seems like a no-brainer, data can be manipulated in a wide variety of ways, not all of which are transparent. Electrical surges, magnetic distortions, hardware failures, and illicit access can all result in data that is changed from its previous state, and if the owner of that data is not made aware of said change, they might be relying on bad data.

How many times have you downloaded a Windows update or a Mac OS X patch? Now imagine if those servers were open to manipulation. Millions of users could be infected with malware and adware, simply because the resource they trusted did not ensure proper integrity. Data sharing is about trust - and if you can’t trust a data repository, that repository is functionally useless.
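
One common way to verify that kind of integrity is to compare a cryptographic digest of the received file against the value the provider publishes alongside it. A minimal sketch in Python follows; the file name and expected digest are placeholders.

```python
# Sketch of a file integrity check via SHA-256. The file name and the
# expected digest stand in for values a provider would publish.
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream; don't load the whole file
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "placeholder-digest-published-by-the-provider"
if sha256_of("update.bin") != EXPECTED:
    raise ValueError("Integrity check failed: do not install this update")
```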

Finally, Availability is the idea that resources should be available whenever they are needed. This doesn’t mean the data should be available without any control on its access or usage, of course, but plainly stated, it means that if you have access rights to the data, you should be able to access it whenever you want.

This doesn’t just mean having good hardware, either - it means assessing risks such as buffer overflows or denial-of-service attacks, both instances where data could be suppressed and users with otherwise valid authority might be barred from it. While this isn’t critical for something like basic email or social media, imagine what it could mean for a hospital that couldn’t access its health records, or a mining company that couldn’t access its client lists - it could mean huge economic loss at best, and loss of life at worst.

Even in situations where a data breach doesn’t result in loss of life or the cessation of other processes, the economics of a data breach are staggering. In 2016, IBM reported that the average cost of a data breach for a corporation was $4M USD. Taking the time to assess these threats costs time and effort, but it’s certainly better than $4M USD in damages.

The balance of these three concepts, “CIA”, is what any good Secure Data Management Plan should center on. Every concept we discuss here, every solution, hinges on that singular set of concepts.

Data Destruction

A Secure Data Management Plan is not just about the origination and maintenance of data - at some point, data will need to be wiped and destroyed, and when that time comes, doing so properly is of paramount importance. Everything from premature data exposure to identity theft hinges on proper data destruction, and as such, ensuring that this data is truly erased is the most important part of the end of the data lifecycle.

A quick aside - almost as important as the actual process of data destruction is identifying whether data should be destroyed or retained at all. This is often a very human judgement process, but it carries its own considerations of legality and ethics. Accordingly, all data destruction processes should necessarily start with the question “should this data be destroyed?”

Now that we’ve marked our data, we can start the data destruction process. But just what is data destruction? Simply put, it’s the removal of data from a medium, ranging from the simple (erasing data from a cassette tape) to the complex (multi-pattern data overwriting on magnetic disks). The real question we are concerned with is not “what is data destruction”, but rather “what is effective and proper data destruction”.

That’s a more complex question, because effectively destroying data means doing much more than erasing at the surface level. When data is written to a magnetic drive, each bit of the medium is set to either off (0) or on (1). This gives the drive great versatility and durability, but it comes at a cost.

Truly erasing data means setting all of these bits to a new value, and this is time consuming. Thus, operating systems typically just mark the occupied space as free for overwriting and call it a day. The problem is that this leaves forensic data on the drive, and while that’s not an issue when you delete an .mp3, it certainly is when you’re dealing with data governed by law and considered extremely private.

Even deliberate erasure isn’t the end of the story. Erased data can leave residue if it is improperly written over. If you simply change every 0 to a 1, or write a fixed pattern, that pattern could in theory be reverse-engineered, restructuring the data into its original form. In our piece “Data Wiping – What Is It And How Does It Work?”, we used the following analogy for this type of forensic data:

Think of it this way. Imagine a wall being spray painted with graffiti, and left for a week. A cleaner comes out to clean the paint off the wall. Even with the paint washed away, it still leaves a “space” underneath the paint where grime and dirt didn’t adhere. Even though the graffiti is gone, the message is still there, waiting to be read.
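
To avoid leaving that kind of “space” behind, overwriting tools use random or multi-pattern passes rather than a single fixed pattern. Below is a hedged, file-level sketch of the idea in Python; note that on SSDs and journaling or copy-on-write filesystems the original blocks may survive anyway, which is exactly why certified full-disk tooling is used in practice.

```python
# Illustrative sketch only: overwrite a file with random data several
# times before deleting it. Not a substitute for certified erasure -
# SSDs and copy-on-write filesystems can retain the original blocks.
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # random data, not a reversible fixed pattern
            f.flush()
            os.fsync(f.fileno())       # force each pass out to the device
    os.remove(path)
```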

Unfortunately, the complexity isn’t over even at this stage. While forensic data can be dealt with readily on company-owned devices, any company with a Bring Your Own Device policy faces unique BYOD data destruction caveats that almost form a process in and of themselves. Ensuring that data on such devices is limited on a need-to-access basis, and that remotely duplicated content is then specifically and deliberately erased while preserving non-company data, is a huge task - but one that is far more tractable once separated as a distinct issue from basic data destruction.

Establishing and Maintaining Chain of Custody

While the technical aspects of data deletion, and of Secure Data Management in general, are important, the legal and ethical concerns are just as much so. Many legislative bodies have passed laws concerning the creation, management, and destruction of data, and adherence to these laws must be a chief concern.

Thus, chain of custody should be a huge aspect of your Secure Data Management Plan as well. Throughout the entire process of managing data, a chain of custody - a record of transactions - should be kept and stored separately from the data in question. This chain of custody should not contain any personally identifiable information, of course - that would only magnify your security issues - but should instead use a unique identifier, append that identifier to the data in question, and track each change and alteration to the data as it occurs.

This serves several purposes, chief of which is to establish a legal record of action. Being able to prove that data is destroyed, and attaching that destruction to a specific technique, system, and technician, is not only good practice - in many cases, it’s a legal requirement.
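
One way to make such a record tamper-evident is to chain each entry to the previous one by hash, so that rewriting history breaks the chain. A hedged sketch in Python follows; the asset IDs, actions, and actor names are illustrative placeholders.

```python
# Sketch of a tamper-evident chain-of-custody log. Each entry embeds
# the hash of the previous entry; altering any past entry breaks the
# chain. Asset IDs are opaque identifiers, never personal data.
import hashlib
import json
import time

def append_entry(log: list, asset_id: str, action: str, actor: str) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "asset_id": asset_id,    # opaque identifier appended to the data
        "action": action,        # e.g. "accessed", "moved", "wiped"
        "actor": actor,          # technician or system responsible
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

custody_log: list = []
append_entry(custody_log, "asset-0042", "wiped", "tech-07")
append_entry(custody_log, "asset-0042", "destroyed", "tech-07")
```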

Chain of custody isn’t just for data destruction, either. When disaster inevitably strikes, having a steady chain of command with which to identify who is engaging which process creates a system of accountability, allowing technicians to work independently towards the same group goal.

Additionally, when faced with moving from data center to data center or even simply from physical machine to physical machine, chain of custody can ensure this data maintains its security throughout the entire process, and establishes a strong accountability focus for all parties involved.

The Components of a Good Secure Data Management Plan

Now that we have a solid grasp of what Secure Data Management is and what it entails, we can start to craft a long-term application. First, let’s figure out what makes a good Secure Data Management Plan, and what these elements look like in the greater scope of things. These steps are broadly organized into three sections - Internal Qualities, External Qualities, and Solutions.

User Identification

Before we can build the later aspects of our plan, we have some groundwork to cover. First, we must determine exactly who our users are. “Users” is somewhat of a misnomer, as it almost implies we are only trying to figure out who is going to be using our services - what we’re really after is a sum-total list of potential user classes, from the internal (e.g. database administrators) to the external (e.g. end users).

By doing this, we can begin to figure out which natural groups these users divide into, and thereby what rights need to be given to each, what controls need to be applied, and which groups face increased threats due to the nature of their grouping.

A great way to go about doing this is to adopt a database methodology commonly referred to as “normalization”, adapting it to our use case. Make a big list of all potential users. Find their commonalities, and separate them into tables depending on what they might need. Let’s say you have three users: “Bob”, an admin, “Joe”, a user, and “Susan”, a human resources manager.

For your internal data, Bob will need to access all basic systems in order to implement solutions. He should be considered part of a general “Administrator” group and granted basic administrator rights. Joe, on the other hand, is simply a user accessing his own data records. He can be granted simple read permissions for his own record with no additional permissions - he is thus classed under the “User” group. Susan needs to access and edit employee data hosted on the same admin systems that Bob accesses, but she does not need to edit any fields beyond those. She can be given specific rights to the fields she needs, and thereby be considered under the “Manager” group.

With these three basic users, we’ve created three groups that 95% of all other users can be broadly categorized under, thereby simplifying our processes later on for access control implementation.
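
As a concrete sketch of this normalization, the grouping above might be represented as follows; the right names are invented for illustration, with real granularity coming later from the systems themselves.

```python
# Sketch of the Bob/Joe/Susan normalization above. Rights attach to the
# group, not the individual, and each group gets only what it needs.
# The right names are illustrative placeholders.
GROUP_RIGHTS = {
    "Administrator": {"systems:read", "systems:write"},
    "User":          {"own_record:read"},
    "Manager":       {"employee_fields:read", "employee_fields:write"},
}

USERS = {"Bob": "Administrator", "Joe": "User", "Susan": "Manager"}

def rights_for(user: str) -> set:
    """Resolve a user's rights through their group membership."""
    return GROUP_RIGHTS.get(USERS.get(user, ""), set())
```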

Data Identification

While we’ve identified our user types, we still need to do some more legwork to figure out our data types. Certain data needs higher access controls, regardless of what user type is accessing it. For instance, stored information such as social security numbers or bank account routing numbers needs to be encrypted not only against external users, but also against internal users who may need access to other sections of the data but not this particular type.
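
As a hedged illustration of that kind of field-level protection, the sketch below encrypts a single sensitive value using the third-party Python "cryptography" package. The inline key generation is purely for illustration; real systems keep keys in a managed vault.

```python
# Sketch of field-level encryption for high-sensitivity values such as
# social security numbers. Requires the third-party "cryptography"
# package (pip install cryptography). Key handling here is illustrative
# only; production keys belong in a key management system.
from cryptography.fernet import Fernet

key = Fernet.generate_key()             # in production, fetched from a key vault
fernet = Fernet(key)

token = fernet.encrypt(b"123-45-6789")  # store this token instead of the raw SSN
plaintext = fernet.decrypt(token)       # readable only by holders of the key
```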

This should also help to inform whether or not your plan covers contingencies of data manipulation. For instance, a Secure Data Management Plan is great, but if you fail to secure data with an extremely short lifespan, you’re creating a narrow but high-opportunity window for man-in-the-middle and other digital attacks.

Again, just as we did for user identification, we don’t need to heavily subcategorize all of this data. Broad, general categories are extremely helpful in the long run, and are more valuable in a general form, as granularity can come later from the system access controls when we set up the actual Secure Data Management Plan.

Hardware and Software Cataloguing

We need to take stock of what systems we have available for processing our data. While this is important in a general business sense, what we’re really trying to do is gain a good understanding of how our data is physically handled, and thereby get a sense of where we might have internal issues.

For instance, if our later stages offer a solution centered on load balancing, we need to have the hardware and software to handle it. If we miss this during our cataloguing stage, we can’t accurately measure the amount of data we can process, nor the theoretical limits that can themselves, through overflow, become a threat.

Identify Common Security Vulnerabilities

With a solid understanding of our data and our internal systems, we can now look externally to see the potential issues our data might face. First among these considerations is the concern over common security vulnerabilities.

First, we need to survey what the biggest current threats to data security are. Once we’ve identified a roster of significant threats, we can look into our system to ensure we can meet them. A good rule of thumb is that for every potential threat, there should be two solutions - a primary solution, such as encryption or authentication, and a secondary solution should that fail, such as automatic port closure or data rejection.
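
That two-solution rule of thumb is easy to make checkable. Here is a minimal sketch of a threat register in Python; the threats and mitigations listed are purely illustrative.

```python
# Sketch of a threat register enforcing the "two solutions per threat"
# rule of thumb. Threats and mitigations are illustrative placeholders.
THREAT_REGISTER = {
    "credential theft": {
        "primary":   "multi-factor authentication",
        "secondary": "automatic session revocation on anomalous logins",
    },
    "data interception": {
        "primary":   "encryption in transit (TLS)",
        "secondary": "reject connections that fail certificate checks",
    },
}

def unmitigated(register: dict) -> list:
    """Flag any threat missing either its primary or secondary solution."""
    return [threat for threat, plan in register.items()
            if not (plan.get("primary") and plan.get("secondary"))]
```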

The simple fact is that your data is valuable - if it weren’t, it wouldn’t be undergoing management in a secure way. Therefore, you need to think like a criminal, and ensure that every access point is as secure as humanly possible.

Identify Potential Physical and Natural Threats

Not every threat is digital, however, and planning only for digital attacks leaves a gigantic hole in your approach. Natural disasters are a huge threat, as are physical intrusion, hardware faults, and the like. You need to plan appropriately for these.

Some of these solutions are going to be simple, such as locking server cabinets and installing mantraps on building entrances, but others will not be simple. There’s no way to negate a tornado or a tsunami completely, and in those cases, your solutions will have to focus more on offsite backup and redundancy (more on this later during our Solutions phase).

Consider Solutions for Non-Negatable Threats

As previously stated, not every issue has a solution. There are going to be digital or physical threats that occur at some point that have no solution, and are not intrinsically recoverable. “Acts of God” roughly fall under this umbrella, with the added caveat of simple bad luck. What happens if your backup system also fails? What about the rare instance of both backups having cluster failures? What happens if your data center is somewhere faced with the outbreak of war?

Planning for these rare but possible situations is incredibly important. Ensure you have the capacity for backups, data redundancy, or at the very worst, a business recovery plan that ensures such damage can be bounced back from. While you can’t negate the problems themselves, you can create a system that is easier to bounce back within. Notably, if you properly follow these procedures and concepts for identifying threats, early threat identification can help contain the damage at its lowest possible level.

System Auditing

Any good Secure Data Management Plan should have a system in place for constant auditing. Simply getting a good solution and walking away is not developing a secure system. You’re going to need to test your system often to ensure compliance not only with your chosen solutions, but with legal elements as well.

Of course, your solutions are also not always going to work exactly as you intended, and this will be caught in auditing. This is actually the principal reason to audit in the first place - theoretical doesn’t always mean practical, and ensuring your solutions are working as you planned is very important to the longevity of your system and the data it protects and processes.

Data Destruction Processing

You’re not going to hold on to data forever, so you need to incorporate the destruction of that data, and the handling of the physical medium that hosted it, into your plan. Ensuring that data destruction is complete and efficient can have huge legal, ethical, and economic impacts - conversely, incomplete and inefficient erasure can cause many, many problems.

There are three chief areas to cover in this section of your plan:

  • How is Data Destroyed - Having multiple options to destroy data is very important, as you need to have custom solutions for the wide variety of data you’re going to be handling.
  • Who Provides the Service - While internal solutions are acceptable, there are a great variety of third-party solutions like ClaraWipe that can not only get the job done, but get it done in a way that is compliant with the ever-evolving set of legal and ethical constraints and legislative actions. For tips on choosing the right third party Data Destruction provider, see our piece titled “Selecting a Data Destruction Provider”.
  • What Happens to the Hardware - The data is only half the equation - what happens to the hardware that hosted it is a chief economic concern, and shortened lifespan can make this a complex topic to handle.

Chain of Custody Maintenance

Perhaps most important in all of these steps is the generation and maintenance of solid Chain of Custody records for the data in question. While this is often referenced in terms of data destruction, a Chain of Custody for each step in the data’s lifespan is very important for a variety of reasons:

  • Accountability - Issues with data and lax security policies can be traced to specific workstations or servers, shortening your downtime. This also means that, should an internal threat come to fruition (such as a belligerent technician or mass hardware failure), all elements related to that singular node or component of the process can be easily referenced and checked for resultant errors and issues.
  • Compliance - Though this is more specific to destruction, a data Chain of Custody is required by many jurisdictions, and can help legal investigations proceed smoothly and without interruption. Beyond the obvious legal implications, there’s also something to be said for avoiding the economic and punitive measures that follow non-compliance, making this something you literally cannot afford to skip.
  • Historical Record - Establishing a strong Chain of Custody also grants a valuable historical overview of data handling, allowing for mini-auditing to see where choke points occur in the handoff of data. Additionally, this historic record is useful to reference previous efforts in case unique issues arise, documenting these solutions as a mode of long-term knowledge base construction.
  • Certification - Having a strong chain of custody means that you can verify your efforts throughout the process. For internal use, this means peace of mind to ensure that everything was done properly, and according to the principles and procedures of the Secure Data Management Plan. For service providers, this has the added benefit of potentially tying into automated systems to alert clients to processing of information, which adds tremendous value.

Step-By-Step Guide on Creating a Plan

Now that we have a general idea of what is included in a good Secure Data Management Plan, let’s put it all together. In order to construct your Secure Data Management Plan, you can follow these simple steps:

Step 1 - Inventory All Users and Categorize - First, create all of your groups of users. You can do this by listing out the typical users of your systems and roughly sorting them into several categories, such as “Administrator”, “User”, “Auditor”, etc.

These groups will be used in the next step. Keep in mind that these groups are not meant for granularity - granularity will come from specific user rights that can be configured on a user-by-user basis within whatever software, server, or database system you create. These categories are simply meant to serve as a “general case” for all users that fall within them.

For instance, an Administrator, regardless of their level of administrative seniority within the company, is obviously going to have different rights than a newly registered user attempting to access a record online - and even if you have more than one type of administrator, a basic level of rights for all admins will need to be defined before implementing any granularity.

Step 2 - Implement Access Controls - With your user groups created in Step 1, set up your access controls. Certain user groups, like Administrators, will have certain basic rights. Each group’s rights should therefore be the absolute minimum set of data access rights needed for the group to perform its basic functions.

As stated before, granularity will come from the systems themselves, not from this Secure Data Management Plan. The Access Controls are a broad stroke, general solution. Think of this step as blocking out a painting or roughing a sculpture - you simply cannot add detail to a piece of art until you have the rough shape defined, and the same is true of access control.

This is not just in software terms, either. You are likely to have hard, physical copies of data (such as invoices kept for record keeping, or legacy physical records still being generated), hardware that must be protected (such as non-active, non-system hard drives used for long-term storage), and policy/procedure documentation that must remain private (such as HR documents and contracts).

All of this must be classified under access control limitations at this point.

Step 3 - Identify and Document All Internal/External Threats - List out all of the known threats your system might encounter. Broadly separate all threats into “Natural” and “Man-Made”, then within these groups, divide them into “Internal” and “External”. Documenting these threats helps not only to prevent them from having an impact on your system, but also informs your decision-making process when a threat does come to fruition.

When you’re applying a solution, what matters is not whether or not you have a solution, but whether the solution is in fact the correct one. Data loss due to earthquake has a much different solution process than digital intrusion from a remote server. Accordingly, identifying proper processes as solutions to specific issues is vital.

Step 4 - Document Solutions and Establish Procedures - Now that you have all of your internal and external threats, craft a solution to match each. These solutions can absolutely overlap - the solution for “fire damage” is pretty much the same as for “water damage” - but you need to document a solution for each potential threat.

Of vital importance here is to document a procedure for each solution. Once you’ve documented the problem, create a process that includes everything from noticing the issue to reporting it, and from implementing a solution to verifying the solution. Make a note for each procedure that a historical record must be kept, and that the solution should always be documented even if it’s known.

This is done to keep variations of solutions in line with their documented baselines - should a problem with a small permutation occur later, the actual fix should be included in the historical record to make recovery faster, easier, and more complete.

Additionally, policies and procedures for your systems should be created at this time. Enforcement of policies like “Create a Historical Record” or “Enforce Access Controls” comes from the policies stated at this level, so it’s very important to attach these policies to their respective procedures.

Step 5 - Create a Disaster Recovery Plan - With all of these structural pieces in place, create a Disaster Recovery Plan. This plan should attach ownership to each object and data resource, enforce responsibility for reporting and repair on departments and engineers, and serve as a “how-to” resource in the case of a disaster.

Foundationally, this plan should outline all of the steps necessary for a single engineer to recover their data, secure their resources, or otherwise recover from a range of disasters. Every step should be documented, even if the plan is simply “contact a superior”.

This plan should be audited against your system continually as part of the next step, but should also be audited after initial completion to ensure it is covering every perceived threat facing the system at large.

Step 6 - Create an Audit Procedure and Training Program, and Enforce Compliance - Now that you have your solutions in place, as well as your access controls, create a system to test compliance with all regulations and guidelines. This will ensure that your system retains its usability and impact over time, and that those who are governed by the procedure do not become lax in their enforcement.

This auditing can take any number of forms, from random observation to reviews of compliance with historical record policies. Regardless of the form, the auditing should be consistent, random, and documentable to ensure workplace-wide compliance and understanding.

As part of this, creating a training resource, whatever form it might take, is key to enforcing compliance. Requiring regular policy and procedure training can directly impact the real-world effectiveness of your plan, and must be considered.

Step 7 - Create a Lifecycle Management Plan - Now that we’ve addressed procedural issues, one of our final steps is to document and manage the lifecycle of our data and related assets. Every point in the data and hardware lifespan, from generation to destruction, must be documented and have a set of principles and procedures applied, matching our considerations from earlier.

Chief among these concerns are legal and ethical compliance, as well as the creation of a historical record and chain of custody. These are huge elements of the lifecycle plan, and should be heavily reviewed for weaknesses.

Data Destruction processes at this stage should be laid out in a clear, methodical way, and should detail the handoff of data from production to destruction, and back into whatever form the hardware will take.

Step 8 - Create Adherence and Compliance Policies and Consequences - There is no plan if there is nothing to back it up. Everyone who works within the plan must comply with it, and at this stage, punitive consequences must be put into place to ensure that everyone at every step complies.

This can range from relatively simple warnings to retraining and even removal from the company, but whatever form it takes, there must be documented, well-noted consequences for failure to comply. This is because your consumers depend on these processes being in place, and failure to properly defend these policies is tantamount to negligence.

Next Steps

You now have a Secure Data Management Plan - congratulations! Now you’re onto the arduous task of applying it, and making it a procedural norm within your organization.

Implementing the Plan

So how do you implement a Secure Data Management Plan in your company? Simply put, this is a matter of procedure and enforcement. The plan will be routinely referenced within your organization, so making compliance a condition of continued employment - or at least of continued interaction with data - is certainly one way of implementing it.

Once you’ve implemented the solution, you need to audit - this cannot be stressed enough. This is not a set-and-forget solution; you MUST continually audit to ensure compliance with your plan, or the plan has absolutely no value.

Additional Support

Your plan will need additional support, of course. While this can be driven punitively in your organization, you can also win the support of your colleagues and engineers by making them a part of the plan. Deriving your solutions from the efforts of workers goes a long way toward ensuring these plans will be adhered to - after all, this was their project as much as it was “management’s”.

Creating a system of credibility and accountability is very important at this stage, and should have been done as part of the plan in the early stages. Having specific IDs attached to the efforts of your employees, and thus tracking their efforts and auditing their compliance, helps to reinforce just how important these procedures are. Everybody does their best when they know they’re being held accountable.

Additionally, not all auditing has to come from inside, either. Hiring an independent, third-party auditing organization can help avoid the pitfalls of internal auditing, and the external audit can be leveraged to increase performance and compliance rates. You don’t have to use an external auditor - this process can be done internally - but external auditing is a very powerful tool that you would be remiss to ignore.

Finally, ensure that the policies you create are publicly and clearly adhered to by higher-ups. Do not allow your Vice President of Finance to access all records from home because he prefers to telecommute and does not want to use a VPN - enforce your access controls universally, without concern for rank or position. When engineers see their boss held to the same, or even higher, standards, they will see it as their personal responsibility to do the same.

Software Solutions

Much of this process can also be automated. Your policies can be enforced by hand, and for many smaller businesses this is an acceptable plan, but at the enterprise level, something more must be done. By using software with centralized controls, you can ensure access controls are put in place in the correct manner.

For instance, data destruction policies can be strongly implemented by making them part of the program’s functionality, enforcing them transparently behind the actions of each technician. Likewise, auditing can be done transparently and remotely by observing the traffic from a workstation. The same can be done with chain of custody by simply tracking each action and generating a report at set intervals.
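
Building on the chain-of-custody sketch from earlier, interval reporting of this kind can be as simple as aggregating log entries over a time window; the sketch below is illustrative only.

```python
# Sketch of automated interval reporting over a chain-of-custody log
# shaped like the one sketched earlier. Purely illustrative.
from collections import Counter

def interval_report(log: list, start: float, end: float) -> dict:
    """Summarize who performed which actions within a time window."""
    entries = [e for e in log if start <= e["timestamp"] < end]
    return {
        "entry_count": len(entries),
        "actions_by_actor": Counter((e["actor"], e["action"]) for e in entries),
    }
```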

This might seem Orwellian, but we’re dealing with some quite important data, and as such, we need to take every precaution we can. This can be granular, as well - if you have only a few technicians, making it a policy and auditing can work. If you have a large workforce, imaging an approved workstation with limited rights and enforcing a digital workstation policy through thin clients can be a more powerful, granular level of control to ensure policies are adhered to.

Conclusion

We hope this guide has helped you understand just what Secure Data Management is, and how a Secure Data Management Plan is created. By following these simple steps and adhering to the auditing and policy enforcement guidelines we’ve laid out, you can create a powerful, extensible solution to secure all of your data.

For additional reading and resources, please visit the Clarabyte blog, where we’ve covered additional topics in depth.

