Data integrity and data security are distinct concepts, but in information technology you can't have one without the other: the integrity of data relies in large part on it being kept secure throughout its entire lifecycle.
There are, of course, other factors that dictate the degree to which data integrity is maintained. Accessibility and traceability play a role, but at the end of the day, trustworthiness and reliability are front and center, especially when the data belongs to consumers of a SaaS application. In that context, security errors, malware, and cyber attacks must be treated as constant potential threats, and addressing them effectively is just as much an art as a science.
For starters, you must consider the individual phases of the data lifecycle, as each represents a point at which the data is vulnerable, and each presents firms with unique challenges. The exact number of stages varies by model, but most agree the lifecycle has several critical components that can be grouped as follows:
- Collection/Processing
- Analysis/Usage
- Archival/Purge
The Collection and Processing of Data
These can be considered two separate stages, but for the purposes of this simple blog post, we’re grouping them together. After all, data must first be collected and processed before it can be used.
In this post-GDPR (General Data Protection Regulation) world, protecting the data collected from consumers, and being completely forthcoming about how it will be used, is simply good business practice. Ethics aside, it makes you look good in the eyes of consumers; at the very least, transparency keeps you in step with competitors. Why risk lagging behind by being secretive when there's so much more to gain from obtaining consent and keeping your consumers' data as secure as possible?
In any case, at this stage you can limit your legal exposure by being selective about the data you collect. There's no need to ask consumers for their life story when only certain details are relevant. Collecting more than you need might allow the data to be repurposed later, but repurposing would likely require additional consent anyway.
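In practice, being selective about collection can be as simple as filtering submissions against an allowlist of fields the service actually needs. The sketch below illustrates the idea; the field names are hypothetical, not tied to any real schema.

```python
# A minimal data-minimization sketch: drop anything not on the allowlist.
# Field names are illustrative assumptions, not a real product's schema.

ALLOWED_FIELDS = {"email", "display_name", "country"}

def minimize(submission: dict) -> dict:
    """Keep only the fields the service actually needs."""
    return {k: v for k, v in submission.items() if k in ALLOWED_FIELDS}

raw = {
    "email": "user@example.com",
    "display_name": "Ada",
    "country": "CA",
    "date_of_birth": "1990-01-01",  # not needed -> dropped
    "phone": "555-0100",            # not needed -> dropped
}
print(minimize(raw))  # -> only email, display_name, country remain
```

Dropping the extra fields at the door means there is nothing to leak, repurpose, or re-consent later.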
When it comes to processing the data, the National Institute of Standards and Technology (NIST) has you covered. Its SP 800-53 standard makes compliance dependent on, among other things, limiting data access to only the parties that need to use it. Lax access policies defeat the purpose of implementing controls in the first place… and attract attackers.
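The "limit access to those who need it" idea can be sketched as a deny-by-default permission check. The roles and permissions below are illustrative assumptions in the spirit of NIST SP 800-53's access-control family, not something the standard itself prescribes.

```python
# A minimal least-privilege sketch: deny by default, grant per role.
# Roles and permissions here are hypothetical examples.

ROLE_PERMISSIONS = {
    "analyst": {"read"},
    "engineer": {"read", "write"},
    "admin": {"read", "write", "delete"},
}

def can_access(role: str, action: str) -> bool:
    """Unknown roles or actions get no access at all."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert can_access("analyst", "read")
assert not can_access("analyst", "delete")
assert not can_access("guest", "read")  # unknown role -> denied
```

The key design choice is the default: an unrecognized role falls through to an empty permission set rather than to open access.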
The Analysis and Usage of Data
There's also potential for harm whenever data leaves the organization, whether it is shared or published. Even an invoice sent to a customer can fit the bill (no pun intended) here.
A comprehensive in-house data management policy should be enacted to ensure any agreed-upon practices and processes are applied universally. From an IT perspective, cryptographic key management for the cloud is one option that can protect data as it moves through the network. Fittingly, when it comes to data sharing, there's also a shared responsibility model at play.
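One concrete way to protect data in motion is to attach an authentication tag so the recipient can detect tampering. The sketch below uses an HMAC from Python's standard library to illustrate the general idea; in a real cloud deployment the secret would live in a key-management service, not in code, and the payload shown is a made-up example.

```python
# A minimal integrity-in-transit sketch using an HMAC.
# SECRET_KEY is a placeholder; a real key would come from a KMS,
# never be hard-coded, and be rotated under your key-management policy.
import hashlib
import hmac

SECRET_KEY = b"demo-key-from-kms"

def sign(payload: bytes) -> str:
    """Compute an authentication tag over the payload."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    # compare_digest avoids leaking information through timing
    return hmac.compare_digest(sign(payload), tag)

invoice = b'{"invoice_id": 42, "amount": "99.00"}'
tag = sign(invoice)
assert verify(invoice, tag)
assert not verify(b'{"invoice_id": 42, "amount": "9900.00"}', tag)  # tampered
```

An HMAC provides integrity and authenticity but not confidentiality; sensitive payloads would also be encrypted, with both keys managed centrally.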
Understandably, it's the usage stage that worries consumers most as they submit their data. They can reasonably expect it to be used responsibly, but it's also their responsibility to read any applicable terms and conditions and to configure their privacy settings appropriately. Assuming they do, the firms receiving the data have no choice but to respect their wishes… and do their best to keep their cloud infrastructure as secure as possible.
The Archival or Purge of Data
It might not be something every company considers at first, but after the data is used there is still a need to manage it. Whether the decision is made to retain the data or destroy it, there are still steps that need to be taken to keep it secure.
Archiving, which effectively means moving data out of the active environment and into storage, is always attractive to firms: it's an inexpensive, low-maintenance option for companies that want to preserve the ability to analyze the data later. That data still has to be secured, though, and the more data a company chooses to hold onto, the more it has to protect. Even as archiving becomes more and more cost-effective, storage is still a finite resource.
Hence the alternative: destroy it. In fact, in some industries, such as finance and healthcare, there may be a requirement to destroy it. Nevertheless, it's a bit of a gray area. Not only are there different disposal methods, but also different extents to which data can be destroyed. For example, a user deleting their account could simply mean they are denied access from that point onward; the company could still keep the account data in case the user changes their mind.
In any case, the way data is classified dictates how it will be deleted. Files can be time-stamped, which facilitates purging when there are regulatory timelines and guidelines to follow. Meanwhile, metadata helps identify obsolete data, which may not be so easy to delete. Redundancies and back-ups that may once have been a godsend in case of mishap have to be addressed too, so data doesn't become zombified, which would also put your security at risk.
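Timestamp-driven purging can be sketched as a simple retention filter. The seven-year window below is a hypothetical placeholder; real retention periods come from the regulations governing your industry.

```python
# A minimal timestamp-driven purge sketch, assuming a hypothetical
# seven-year retention window (real timelines come from your regulator).
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=7 * 365)

def is_expired(created_at: datetime, now: datetime) -> bool:
    return now - created_at > RETENTION

def purge(records: list[dict], now: datetime) -> list[dict]:
    """Return only the records still within the retention window."""
    return [r for r in records if not is_expired(r["created_at"], now)]

now = datetime(2024, 1, 1, tzinfo=timezone.utc)
records = [
    {"id": 1, "created_at": datetime(2015, 6, 1, tzinfo=timezone.utc)},  # expired
    {"id": 2, "created_at": datetime(2022, 6, 1, tzinfo=timezone.utc)},  # retained
]
print([r["id"] for r in purge(records, now)])  # -> [2]
```

The same check would need to run against backups and replicas as well; a record purged from the active store but alive in a backup is exactly the zombie data described above.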
Once security goes out the window, so does your brand equity and then the very customers whose data you once collected. Needless to say, with such a thing as zombie data in play, the data lifecycle takes on a whole new meaning, one whose every single intricacy companies would do well to understand and then master, for the sake of their customers and themselves.