How Data Vault can help with privacy concerns & regulations like GDPR

How to work with and store privacy-sensitive information, and why Data Vault helps you do so without losing information

Of course, deleting information from a data warehouse is in essence a no-go. Bill Inmon stated long ago that one of the four characteristics of a data warehouse is that it is non-volatile. This means we do not delete data. The reason for this rule is that the data warehouse should be the one place where we keep track of the complete history of the organization. By removing data from this store, we are essentially removing part of that history.

And then along comes the GDPR (or AVG in Dutch) and other privacy regulations. In essence these require that all information which can identify an individual person must be protected against misuse and eventually deleted whenever that individual requests it.

Let’s take some time and see what this last part means in terms of a data warehouse. What will be the consequences if I need to delete data from my historized data warehouse?

If I need to delete, for instance, a customer from my customer table in 3NF or a dimensional schema, then I am not only deleting the record itself but, because I need to cascade-delete, also all history of that person plus all relations attached to that individual. This ultimately means I can end up with sales without an identified customer: the sale itself would still be in place, it just no longer has a related customer, because that customer was deleted.

Because of the separation of business key, relations and context in Data Vault modeling (which is typical for all Ensemble modeling patterns), there is some hope.

In Data Vault the business key itself is stored in the Hub, with a one-to-one relation to the primary key. A Hub is a construct that holds the business key and no context whatsoever, which also means that all relations stay in place when all personally identifiable data needs to be deleted. As long as you can choose a business key without any attributes related to the individual (whether it is a concatenation or simply THE business key in itself), there is no need to delete this information from the Hub. When you do need to construct a unique business key because none is available, and you cannot avoid individual-related attributes in creating it (e.g. a concatenation which includes something like first name, last name, date of birth, email address, etc.), you need to take extra care; see later in this post.

The context, the individual attributes which together can be used to identify an individual, will be stored in its own Satellites, designed according to more or less privacy-sensitive areas. See below for an example of how this works out.


Looking at this in more detail, how will it work?

Each Hub representing an individual person needs to have a non-identifiable business key. This means the business key might have to be scrambled / pseudonymized before it is stored in the Hub whenever individual-related attributes are part of the business key. Every new entry in the Hub automatically gets a corresponding surrogate ID, just a stupid number, as its primary key (see further down why this is a good idea). This surrogate ID is of course the primary key of the Hub, to which all Satellites and Links refer. To be extra secure, the business key attribute, the column, can be masked for all users, except of course for the data warehouse user that loads the Hub and performs lookups when loading Links and Satellites. Masking this column can be seen as an extra layer of security and thus follows the intention of the GDPR and other privacy regulations.
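To make the pseudonymization concrete, here is a minimal Python sketch of how a business key containing personal attributes could be pseudonymized with a keyed hash before it lands in the Hub. All names (the pepper, the in-memory "tables", the function names) are illustrative assumptions for the example, not Data Vault standards.

```python
import hashlib
import hmac

# Hypothetical secret kept outside the warehouse (e.g. in a key vault);
# a keyed hash (HMAC) is used instead of a plain hash so the business key
# cannot be recovered by hashing guessed inputs without the secret.
PEPPER = b"keep-me-in-a-key-vault"

def pseudonymize_business_key(business_key):
    """Return a non-identifiable stand-in for a PII-bearing business key."""
    normalized = business_key.strip().lower()
    return hmac.new(PEPPER, normalized.encode("utf-8"), hashlib.sha256).hexdigest()

# The Hub stores only the pseudonymized key plus a surrogate ID.
hub_customer = {}          # pseudonymized business key -> surrogate ID
next_surrogate_id = 1

def load_hub(business_key):
    """Insert the pseudonymized key into the Hub if new; return the surrogate ID."""
    global next_surrogate_id
    pseudo_key = pseudonymize_business_key(business_key)
    if pseudo_key not in hub_customer:
        hub_customer[pseudo_key] = next_surrogate_id
        next_surrogate_id += 1
    return hub_customer[pseudo_key]
```

Loading the same business key twice returns the same surrogate ID, so Links and Satellites can still find their unique landing place while the Hub never holds the readable key.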

Another option is to opt for a “Focal” style for each Hub representing individual persons: store the business key separately from the Hub itself. The downside is that we break the pure Data Vault pattern and make automation / generation more complex; the upside is that we do not need to mask or hide the business key within the Hub.

Of course the Satellites must be designed such that privacy-sensitive information is stored separately from non-privacy-sensitive information. And here too you have an extra choice:

  • scramble or pseudonymize that information before storing it, meaning the data might be less valuable but will most likely never need to be deleted;
  • delete the specific information, including its history, when requested, meaning there will be keys in the Hub without corresponding context in that Satellite.

An extra option is to create an s_DEA (Satellite with Derived, Enhanced & Augmented context) for generic use, where all privacy-sensitive information is pseudonymized or grouped, making it more general but still useful even after the details are deleted or made inaccessible. In this case you can also use different personas to replace the original information, to provide some context when the original data is no longer available.
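As an illustration of the grouping idea behind such an s_DEA, here is a small Python sketch that derives generalized context (an age band, a coarse region) from a privacy-sensitive row. The field names and grouping rules are invented for the example.

```python
from datetime import date

def generalize_for_s_dea(record, today=date(2018, 1, 1)):
    """Sketch of deriving a generic s_DEA row from a privacy-sensitive
    Satellite row: details are grouped so the row stays useful even after
    the original detail row is deleted."""
    birth = record["date_of_birth"]
    # Age at 'today', corrected for whether the birthday has passed yet.
    age = today.year - birth.year - ((today.month, today.day) < (birth.month, birth.day))
    decade = (age // 10) * 10
    return {
        "age_band": f"{decade}-{decade + 9}",        # e.g. "30-39"
        "region": record["postal_code"][:2] + "**",  # coarse location only
        "gender": record.get("gender", "unknown"),
    }
```

The generalized row identifies nobody on its own, yet still supports analysis by age band or region after a deletion request has been honored.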

Why a surrogate ID?

The last thing that helps with privacy regulations is the use of stupid numbers, surrogate IDs, as the primary key of the Hub instead of hash keys or even natural business keys. Why am I in favor of using stupid numbers instead of hashing my business key? Because there is no way to derive the original business key from a surrogate ID. Even a trial-and-error check, hashing what might be the original key and seeing whether that hash value is present, will not work against a surrogate ID. And when you use natural keys as your primary key, there is no other way than to delete that key completely if it contains individual-related attributes. Replacing a natural key with a more generic persona key also means replacing that key in all Satellites and Links to maintain referential integrity; plus, the record will no longer be recognized as a unique person.
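The trial-and-error risk can be demonstrated in a few lines of Python. The sketch below shows how an unsalted hash key is recoverable by simply hashing guessed business keys, which is exactly what a surrogate ID prevents; the keys and guesses are of course made up.

```python
import hashlib

def hash_key(business_key):
    """A plain (unsalted) hash key, as commonly computed for a Hub."""
    return hashlib.md5(business_key.encode("utf-8")).hexdigest()

def dictionary_attack(stored_hash, candidates):
    """Hash each candidate business key and compare: a match reveals the
    original key hiding behind the 'anonymous' hash key."""
    for candidate in candidates:
        if hash_key(candidate) == stored_hash:
            return candidate
    return None

stored = hash_key("jane.doe@example.com")   # the hash key as stored in a Hub
recovered = dictionary_attack(stored, ["john.roe@example.com",
                                       "jane.doe@example.com"])
# 'recovered' now holds the original email address: the hash gave the
# identity away. A surrogate ID (say, 42) carries nothing to attack.
```

This is why hashing alone is not anonymization under the GDPR: whenever the input space is guessable, the hash is reversible in practice.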

Even when the business key itself is non-accessible, and perhaps scrambled/pseudonymized as well, I still suggest deleting the business key (or replacing it with a standard business key referring to a deleted individual) whenever the request to remove an individual comes in, while keeping the unique surrogate ID so the record is still recognized as a unique person.

Another important reason for doing this is that the individual might become a customer (or employee or …) again, allowing us to store information on them once more. That person will then be a completely new individual without any reference to their past self. This prevents the information we now hold on the customer from being connected with data from their past which we are no longer allowed to link to this individual.

To come to a conclusion

The separation of business key, relations and context, especially when the context itself can be split into individual-related context and generic context, is a big help with privacy regulations like the GDPR. In case of an opt-out, there is no need anymore to cascade-delete major parts of your data. The use of a surrogate ID makes sure that all related information, like transactions and generic context, still finds a unique landing place at the Hub and keeps providing valuable information. You are not losing transactions, sales, events, etc. from your data warehouse, and you still comply with privacy regulations like the GDPR.


Is Self-Service truly Self-Service?

For the last couple of years the buzz has been about “Self-Service” BI. Almost all BI vendors feel the need to tell the world that their tool is a perfect match for “Self-Service”: finally the business users don’t need IT anymore to analyze data, create reports, etc., nowadays even with the possibility to explore all data available in data lakes (to use another buzzword). In this post I will explain why I do not agree with the promise of “Self-Service” that vendors and SI’s try to sell as a concept to their customers.

When I started in BI in 1999, tool vendors and SI’s, myself included, were fighting against users creating their own reports and analyzing their data with MS Excel and MS Access. For this the users asked for direct ODBC access, or rather “a connection”, to the different source systems, worked their way through the data and tried to integrate it across different spreadsheets with lookups and such. Apart from being time consuming and labor intensive, it was also error prone. But it was also very agile and direct for the users themselves. Of course they still needed help combining the data and understanding it, and sometimes they did indeed ask for that support.

The major reason for the fight against this type of “Self-Service BI” was that it was considered dangerous: users could change or manipulate the data or create calculations without any governance, on purpose or just out of ignorance. The answer from vendors and IT departments was: use proper BI tools like Cognos, BO, MicroStrategy, etc. Tool vendors and SI’s, myself included, embraced the possibility to deliver reports in PDF to restrict the ‘misuse’ of data.

We tried to create the proper reports and dashboards for all users and included as many filters as requested to make the users happy. In the end users still asked for the “export to XLS” button, plus for filtered reports with lots of data in them, at the lowest possible level of grain and with a maximum of X number of rows. Of course this was their replacement for the direct ODBC access to the source systems which they no longer had.

For me this already changed my opinion on restricting users’ access to (their own) data. I’d rather give them freedom on the governed data from the data marts than let them access the source systems again. I like to think this was also in the users’ interest: the data marts made more sense than the source systems.

The need of users to control their data and have unlimited access was, and always will be, big. This need was also heard by new and upcoming tool vendors like Qlik and later on Tableau and such. These claimed to be easy-to-use tools providing the promise of full control by the user (according to Qlik in the early days, you don’t need a data warehouse anymore!). For me it led to a discussion at an event in Amsterdam, opposite some tool vendors, where I claimed that I don’t mind what tool a user uses to access their data; from my side this could even be Excel! Of course all the other people in the panel were not amused, even offended. How could I suggest letting users use the darkest of all tools! No control over presented data is the worst thing. The only thing is that they did not listen to what I actually suggested: have a governed and trusted set of data available which users can freely access and explore without any restriction on tooling. Of course (but that is governance as well), when there is a meeting or presentation, the data is presented in a formal way with a standard tool the whole organization uses, to prevent endless discussions about calculations and definitions. That is the architecture I want to create for my users.

So for me there is no such thing as true and complete Self-Service BI for the entire organization. There are a limited number of users capable of doing it all on their own and still getting the right data / information at the right time for the actions to be taken. These are however a minority, if they exist at all in an organization.

In my opinion there will always be a governed set of data, let’s call these the data marts, virtual or physical, in any (or no) modeling pattern, which users can freely access to explore and analyze. Next to that, a fixed set of reports, dashboards, etc. which have a formal status in the organization, with agreed definitions and use (and which can be based upon the same datasets). This is what I call Managed-Service: the data is already combined/integrated/curated and ready for the users to explore. Which tool they use, and whether they want to combine it with other data, is theirs to decide. However, in my experience the only true “Self-Service” tool is Excel, a tool which users feel free to use to explore data the way they want. Tableau, Qlik, Power BI and all the others have too big a technology component, which scares users. Except of course the power users, those people in the department who feel happier working with more advanced tools and in fact are the BI people within each department. These are also the people you want in your “Center of Excellence”, or at least helping your team to create reports and analyses to be used in the entire organization.

I would like to thank Ronald Damhof, who brought up his 4 Quadrant model (see: Keynote 22 mei 2014 – dwh automation – 4 Quadrant). For me that is a guide when I tell people my thoughts on “Self-Service”. People are free to use and explore data as long as they are aware that this is a non- (or less) governed area, Quadrant IV, even if the data itself comes directly out of the data warehouse, Quadrant I. Also thanks to Martijn Imrich for his post on the Data Lab, where he suggests making the data fully available for exploration, focusing on Quadrants I, III & IV of the Quadrant model and thus extending governed data with non-governed data.

In my Managed-Service view the Data Lab consists of a governed set of data which the data scientists can, and probably will, extend with extra data (external, other data marts, other sources, or all of these) to explore and discover useful possibilities for their organizations. Part of this data is still managed, and this group of users is highly skilled and trained to work with tools and data.

Why I love using Post-Its for data modeling

In my early days working as an environmental engineer, almost 20 years ago, I tried to be organized and use the famous ToDo lists we all had on our desks. The purpose was to write down all your tasks for that day or week, preferably with some kind of ordering or prioritizing. Looking back, this was actually a really Agile way of working before Agile was commonly used: tasks could shift during the day or week depending on customer demands or priority. My only issue was, and still is, that I am not that organized. Writing everything down on the ToDo list on the right day, or rewriting a task to another moment in time if that was more appropriate, was not my style of working. My solution was writing it all down on Post-Its. Soon these could be found on my desktop computer, screen, phone (not a mobile one but a proper desk phone), drawer, the desk itself, etc., in no particular order or systematic way. In other words, it helped me write down tasks and organize them in my own special way, in which I hoped I could find the right task at the right moment. Of course the better approach would have been to put the Post-Its on the ToDo lists and reorder them there to make sure I could not forget a task, but unfortunately I am not that organized kind of person.

In my first period as a BI consultant I stopped using Post-Its, except for the usual notes about who to call back and other small notes which I found in various spots, trying to read back what I had written down as being important. All the work I had to do was written down in functional and technical requirements and came from modeling tools like Erwin, PowerDesigner, etc. In that period my work consisted of creating cubes and reports on top of Kimball-style data warehouses or dimensional data marts. I loved the star schemas and BI Matrices when I had a meeting with a user; they provided structure and overview in the complexity of the data and made it easily understood by users.

After a while I started as an ETL developer and needed to build the appropriate dimensions and fact tables. I still received the big functional and technical requirements, design documents and data models, and tried to maintain the BI Matrices to ensure conformity around the dimensions.

The BI Matrix gave me, my customers and my colleagues the structure needed to ensure we did it all right the first time. We already knew that changing a dimension or a fact table is hard work and needs thorough re-engineering. That is why we tried to communicate with the star schemas plus the BI Matrices: to avoid being forced to create a new dimension next to the existing one as the simple and secure solution, and thus losing conformity, which was quicker but less solid than doing the hard work of re-engineering and keeping conformity.

After building the 4th active Customer dimension for the same organization, I felt there had to be another, better way. My initial thought was to analyze more upfront to make sure we covered everything before building. Of course that was not the real answer: it simply takes too much time to analyze everything, and information delivery will always be too late for the business. That is why the Kimball-style data warehouse became more popular in the first place over Inmon’s Corporate Information Factory, at least in the Netherlands.

Around 2004/2005 I first heard about Data Vault. My initial thoughts on Data Vault were: “a way too complex and complicated model, far too technical”. Being trained in Data Vault modeling in 2005/2006 improved a lot for me, but I still faced some issues in communicating the model with my customers. Of course all data models were still written down in Erwin or PowerDesigner or any other modeling tool using ERD’s. It did not help that back then we were still facing some modeling issues in Data Vault which made it more complex, issues which we have since solved in the current state of Data Vault modeling (no more Link-on-Link constructs, no transactional links, no reference tables, etc.). But still it was hard to show the model to a user and discuss it with them. Undoubtedly Linstedt’s Data Vault changed a lot for me, but it took a bit more time to see the real beauty of the promise of Data Vault.

It became clearer to me while working with Hans Hultgren on some projects and learning about Ensemble Modeling (the family name for modeling patterns like Data Vault, Focal Point, Anchor, DV2.0, and many more). By designing Ensembles first instead of full-blown Data Vault models, the model suddenly became clear again for the users, which directly led to valuable input from them and a stronger model. This also led to a more interactive way of modeling, using whiteboards to draw the ensembles together with the business users while staying away from the technical stuff. By wiping out some ensembles and redrawing them on another, more appropriate, part of the whiteboard during the modelstorming process, we got to the best possible model to be translated into a physical model and fed to the data warehouse automation tool.

In our training, Genesee Academy’s Certified Data Vault Data Modeler, we started to use Post-Its again for flexibility and speed in the workshops. In the training rooms we do not always have enough whiteboards or flip charts to work on, but there is always room to stick Post-Its. The simple way of changing and reordering a model by just moving the Post-Its around is very helpful in the learning process.

Although this way of working is very Agile and communicative, I had the feeling it could be improved further. This became clear to me after talking to Lawrence Corr a couple of years ago and reading his book “Agile Data Warehouse Design”. It felt like the BEAM methodology could be the part I was missing. However, the book and the methodology are completely focused on dimensional modeling as the end result, and that is not my desired model for a data warehouse. I really love the speed, flexibility and adaptability of Data Vault and the other Ensemble modeling patterns and still want to implement them for my customers.

Recently I followed Lawrence Corr’s training and learned his way of modelstorming with the 7W’s and the BEAM Canvas, which really enriched my modeling toolbox again. I learned to write everything down on Post-Its, place them in a certain order on the 7W framework, and then transfer them to the BEAM Canvas (easy: no need to rewrite, just re-use). This use of Post-Its provides the structure I first missed when moving towards Data Vault modeling and leaving the BI Matrix behind.

Combining his techniques and thoughts with the premise of Ensemble Modeling from Hans Hultgren, I am now able to build a complete data model while staying away from technical ERD’s and modeling tools, except for the most important tools: Post-Its plus a whiteboard. The best part is that it is totally interactive with the users, which makes the result, a Data Warehouse plus (virtual) data marts, understandable for my users. And to top it off, staying away from ERD’s also means that implementation in JSON or any other Big Data (non-)structure is possible from the same basis.

The only thing missing in my toolbox now is the Post-It app from 3M, which gives the opportunity to rearrange the Post-Its in PowerPoint for documentation. The app is only available for iOS :-(.

I hope this helps others to start using these most powerful modeling tools and get the users involved. In the end, we are building data warehouses, data marts, reports, etc. for them, not just for ourselves.

The Shuttle, connecting data

A new Data Vault construct to connect to the Big Data Environment
In my former post I wrote that if you really want to create and maintain a connection from a Data Vault model (or almost any kind of Ensemble) to the big data environment, you need to search for an elegant way to solve this. In this article I will describe my search and my claim to have found this elegant solution. I will describe the construct from the perspective of Data Vault, but I am convinced that other types of Ensemble data warehouses can use the same construct.

I wrote before that creating and maintaining the connection to unstructured data on a hashkey, which is a hash representation of the business key (the driver for the Hub), will not work the majority of the time. In fact, only in a small number of cases is the business key (or its hash representation) also meaningful or available in the unstructured world. For instance: where sku is the business key for products in retail organizations, it will rarely be used in social media, streams, calls, etc. This can of course be resolved with a taxonomy or ontology which matches “white sneakers” to a range of sku’s (one for each different size, etc.). In that case it does not matter whether you are matching business keys, hashkeys or surrogate keys: if you are looking for the matching key or keys, it makes no difference which kind it is.
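A taxonomy lookup of this kind can be as simple as the following Python sketch; the mapping itself is invented for the example.

```python
# Minimal taxonomy matching a free-text term from the unstructured world
# to a range of sku's (one per size); the mapping is illustrative only.
taxonomy = {
    "white sneakers": ["SKU-123-40", "SKU-123-41", "SKU-123-42"],
    "running shoe":   ["SKU-123-41", "SKU-987-42"],
}

def matching_skus(term):
    """Resolve a search term to business keys. Translating those keys
    further into surrogate keys or hashkeys is a separate, equal-cost step,
    which is exactly the point: the key type does not matter here."""
    return taxonomy.get(term.strip().lower(), [])
```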
And if we take it one step further, you might even want to analyze “running shoe” from your Big Data environment and run that against your structured data, which leads to yet another lookup and matching effort. (Remember, it is Big Data, so we do not know upfront where to start.)
So it is not only the hashkey representation that won’t work; the lookup does not seem to be the solution we are looking for either.

In Data Vault modeling, Links represent business relations, so we might consider using a Link to connect to the Big Data environment. A Link construct consists of its own key and the keys of all the Hubs it connects. In the case of connecting to a Big Data environment, we can probably calculate the necessary hashes for “white sneaker” and/or “running shoe” upfront and use these in the Link with all appropriate sku’s, or use the search strings themselves as keys in the Link. This however brings in another problem: a Link construct only stores existing relations, not possible relations. Since we are connecting to the Big Data environment, we do not know upfront whether there is anything to connect to; it may just represent a possible relation, or a relation that might appear in the unforeseen future. (Remember, it is Big Data, so we do not know upfront where to start.)
So the use of a Link is not the solution either.

Another path to investigate is to store the information in a Satellite, just as we would do with reference information: keep the search string with its hashkey, for all foreseen search strings, in the appropriate Satellite(s) and refer to the Big Data environment. This makes sense because the connection could be seen as a kind of context information. It comes close to the solution I am looking for.
Thinking this through more thoroughly, I noticed that I needed to look further. The reason is that there might be a problem if the components of the search string are stored in different Satellites due to different rates of change, source systems, etc. (this also prevents virtualization of the hashkey). It would mean I might need to redesign my Satellites to store complete search strings. Also, adding a new search string would be a destructive change to a Satellite, which we want to prevent as much as possible. The last thing making me look further is that I do not want to store the appropriate search string (in any format) in every Satellite that might be of interest to combine with the unstructured data. (Remember, it is Big Data, so we do not know upfront where to start.)

Enter the Shuttle. This is a structure which looks like a Satellite, but its sole purpose is to store the search string and its hashkey representation. The search string might be based on the business key, a part of the business key, or just a description or a grouping of context attributes from the EDW perspective. The search string and its hashkey representation are created from information already available in the Ensemble, regardless of whether there will be a match in the Big Data environment and regardless of where its parts are stored (the Hub for the business key, or the various Satellites).
Creating the search string is no more than formatting it to match the required pattern coming from the Big Data environment. Because of this formatting and the calculation of the hashkey, a Shuttle is a BDV construct and is calculated after loading the Ensemble. Adding a new search string is simply a matter of adding another Shuttle to the Hub (or Link, if applicable); removing a search string which is no longer applicable is just a matter of taking the Shuttle offline.
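To make the construct tangible, here is a minimal Python sketch of building Shuttle rows. The formatting rule (lowercase, space-separated) and the column names are assumptions for the example, since the required pattern really comes from your own Big Data environment.

```python
import hashlib

def make_shuttle_row(hub_surrogate_id, attributes):
    """Build one Shuttle row: a search string derived from context already
    in the Ensemble, formatted to the pattern the Big Data side expects,
    plus its hashkey. Column names are illustrative."""
    search_string = " ".join(str(a).strip().lower() for a in attributes)
    search_hash = hashlib.md5(search_string.encode("utf-8")).hexdigest()
    return {
        "hub_id": hub_surrogate_id,       # refers back to the Hub
        "search_string": search_string,   # e.g. "white sneakers"
        "search_hashkey": search_hash,    # match key for the unstructured side
    }

# One Hub entry (a sku) can get several Shuttles, one per search string,
# regardless of whether the Big Data side ever produces a match:
shuttle_color = make_shuttle_row(42, ["White", "Sneakers"])
shuttle_use = make_shuttle_row(42, ["Running", "Shoe"])
```

Adding a new search angle is then just adding another Shuttle, with no change to existing Hubs, Links or Satellites.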

IMO the Shuttle is the most elegant and non-destructive solution to connect the structured and unstructured worlds.

Data Vault2.0 vs Data Vault, a battle?

Main differences between Data Vault and Data Vault2.0

I get more and more questions about the difference between Data Vault (DV) and Data Vault2.0 (DV2.0). In this article I try to explain the differences between the two based upon my knowledge and experience.
I get more and more questions about the difference between Data Vault (DV) and Data Vault2.0 (DV2.0). In this article I try to explain the differences between the two based upon my knowledge and experience.
To get started: Data Vault is a modeling technique which can be used in data warehouse projects almost regardless of the project methodology used, whereas DV2.0 claims to be a data warehousing methodology in itself. Besides this different approach, DV2.0 of course has a modeling component, and that is the focus of this article. A DV2.0 model is different from a standard DV model, although the two are closely related. I will pick the 3 main differences in the underlying modeling approach between these two and describe my observations.

Use of Hashkeys
In DV2.0 the use of hashkeys as the surrogate key is a must. This gives the flexibility to load all data in parallel, since there is no dependency between Hubs, their Satellites and the Links. It also seems to pave the way to using unstructured data, or to using DV2.0 in a NoSQL environment.
In standard DV there is an option to use a hashkey instead of a surrogate key, and the decision to use one or the other requires some thoughtful consideration. One reason is that there is a benefit to maintaining the keyed dependency of Links and Sats on the Hubs: it maintains the integrity of the Ensemble.
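The parallel-loading argument is easy to illustrate: because the hash key is derived deterministically from the business key, every loader can compute it on its own, with no lookup against the Hub. The Python sketch below shows one possible convention; the normalization rules (trim, uppercase, a ';' delimiter) are an assumption, not a DV2.0 prescription.

```python
import hashlib

def dv2_hash_key(*business_key_parts):
    """DV2.0-style hash key: derived deterministically from the business
    key, so Hub, Link and Satellite loaders can each compute it
    independently and run in parallel, without waiting for a surrogate ID
    to be generated first."""
    normalized = ";".join(str(p).strip().upper() for p in business_key_parts)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# A Hub loader and a Satellite loader compute the same key independently,
# even when the source formatting differs slightly:
hub_key = dv2_hash_key("SKU-123")
sat_key = dv2_hash_key(" sku-123 ")              # same key after normalization
link_key = dv2_hash_key("SKU-123", "STORE-9")    # composite key for a Link
```

The trade-off, as noted above, is that the database no longer enforces the keyed dependency between the tables.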

Also, the connection to unstructured data on a hashkey, which is a hash representation of the business key (the driver for the Hub), will not work the majority of the time. In fact, only in a small number of cases is the business key (or its hash representation) also meaningful in the unstructured world. For instance: where sku is the business key for products in retail organizations, it will rarely be used in social media, streams, calls, etc. This can of course be resolved with a taxonomy or ontology which matches “white sneakers” to a range of sku’s (one for each different size, etc.), but this can equally be done against a surrogate key (if you are looking for the key, it doesn’t matter whether it is a surrogate key, a hashkey or the business key itself). If you really want to connect both worlds, you need to search for a more elegant way to solve this. That elegant way is underway in a special construct which handles the connection from a DV model (or even a DV2.0 model, or almost any kind of Ensemble) to the big data environment.

Use of Links
The use and purpose of Links has evolved during the last decade. In the past, Links were used to represent a relation, event or transaction. This led to Link-on-Link constructs (or even Link-to-Link-to-Link or longer chains) to represent grain shifts (order – orderline), with extended Sats on these Links. In DV2.0 this is still the case, whereas in the current best practice for the DV modeling pattern the choice has been made to model only business relations as Links. Transactions and events have business meaning as their own core business concepts (person, place, thing, event) and so become Ensembles, complete with a Hub, Sats and Link(s).
This is an evolution based upon modeling experience around the world. Links are solely there to represent the business relations. If we have something with business meaning, or we want to add lots of context to it, it requires a Hub. The business key for this Hub might be a concatenation of the combined business keys of the attached Hubs. For example, a sale modeled as a Link between a customer, employee, store and product has several issues. First, there is no way to track an additional identical sale arriving in the same load. Also, the grain of the sale is not represented if the sale is for more than one product or there is more than one employee related to the sale. And payments, deliveries, returns or other subject areas can only connect to this sale via a Link-on-Link construct. The Data Vault community now recognizes that an event or transaction modeled as a Link is not part of the core pattern; in fact, any transactional Link is in reality not a Link but rather a form of 3NF entity. The sale, as with all events and transactions, is its own business concept and thus an Ensemble requiring its own Hub.

Reference tables
The last main model difference between DV and DV2.0 is the modeling of reference tables. Common practice in DV modeling is to use Hubs and Sats. In fact it is strongly recommended, and taught, to stick with DV throughout the entire model (even when there is no need for history or there is a fixed reference list). This simplifies a lot: there is no discussion about which pattern is used in your data warehouse, and sticking with the pattern also means automation throughout the entire model. In DV2.0 there are several ways to go, including the use of 3NF tables, all depending on the need for history, fixed datasets, etc. This means that in DV2.0 the modeler needs to think about which kind of reference is being asked for (now and in the future) and model the corresponding kind of reference table. In my opinion this is a complication which is not needed, and it also breaks the automation pattern, or at least makes automation more complex.

DV2.0 claims to be the new and improved version for Data Vault implementations; however, IMO some points remain in DV2.0 which were part of DV modeling many years ago, and from which, based upon the best practice and experience of modelers all over the world, Data Vault has since evolved to what it is now. In fact, I see more new development and improvement in Data Vault modeling than in DV2.0.

The point of this article is to encourage you to be aware of the differences between the current Data Vault pattern and the DV2.0 pattern and to pick your own optimal approach: not based upon a claim that one is new, better and improved (if it could even make this claim), but based upon your own analysis and your own needs and requirements.

Note to the reader:
For this article I used my own modeling experience and knowledge of Data Vault and DV2.0. I also thoroughly read the books “Building a Scalable Data Warehouse with Data Vault 2.0” by Daniel Linstedt & Michael Olschimke and “Modeling the Agile Data Warehouse with Data Vault” by Hans Hultgren. I do have my own preference between the two, but I tried to be as objective and precise as possible so the reader can be better informed. The start of being better informed is to read the books, follow training & certification, and discuss with experienced modelers.

A Business Key Satellite

Today a short post on a best practice from my own experience in Data Vault modeling.

For those who know Data Vault modeling, one of the important things (the first thing to look for) is to find the unique, enterprise-wide business key. If you find the business key (BK) but it is not enterprise-wide unique, or not unique at all, there are a couple of options. The preferred one is to create a concatenation to make the BK unique. The problem occurs when you only need a part, or several parts, of the concatenated BK in reports: you then need to split the BK. This is always a pain, because each report builder needs to know how your concatenation is built and which part is the part they need.

It is of course also possible to store the parts in the regular Satellites on the Hub, but there are reasons not to do this. When there are multiple Satellites, especially ones grouped on purpose, you would need to store the individual parts in each of the Satellites, and when the concatenation needs to change (extending or shrinking), all of those Satellites need to be changed.

To avoid all that, it is wise to create a separate Satellite where all the individual parts are stored. This Satellite is 1-on-1 with the Hub (each record in the Hub is one record in the Satellite). Because the BK Satellite is a 1-on-1 satellite, there is always one and only one valid record where each part can be found, with no issues around different time slices in the different Satellites.
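A minimal Python sketch of the idea, with invented table and part names: the Hub keeps the concatenated BK, while the 1-on-1 BK Satellite keeps the individual parts ready for reporting.

```python
# Parts making up the concatenated, enterprise-wide unique BK; the names
# (company_code, branch, contract_number) are illustrative only.
BK_PARTS = ["company_code", "branch", "contract_number"]

def load_with_bk_satellite(hub, bk_satellite, part_values):
    """Build the concatenated BK for the Hub and store the parts in a
    1-on-1 BK Satellite, so report builders never have to know how the
    concatenation is built or split it themselves."""
    business_key = "|".join(part_values)
    if business_key not in hub:
        hub[business_key] = len(hub) + 1          # surrogate ID
        sid = hub[business_key]
        bk_satellite[sid] = dict(zip(BK_PARTS, part_values))
    return hub[business_key]

hub_contract, sat_contract_bk = {}, {}
sid = load_with_bk_satellite(hub_contract, sat_contract_bk, ["ACME", "NL01", "C-555"])
# A report needing only the branch reads it straight from the BK Satellite:
branch = sat_contract_bk[sid]["branch"]
```

Because the Satellite is 1-on-1 with the Hub, there is exactly one row per surrogate ID to look up, with no time-slice logic involved.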