Just read an interesting article about soft updates. It seems that SQL Server 2016 has a new feature to enable temporal tables: SYSTEM_VERSIONING. For those who don’t know, temporal databases maintain multiple copies of each piece of data instead of only storing the data that’s true at the current time.
Why do I want this? This approach maintains the entire history of the data, including updates and deletes. Data mining, anyone? Auditing, anyone?
How have we done this in the past? I have seen DBAs build tools to maintain the database schema. Those tools generate SQL scripts that are executed in SQL Server to create and update the tables. In addition, the tools create a history table (suffixed with _History or _Audit) and triggers that populate it with the state of the data in the main table prior to each update or delete. The tool is responsible for keeping the main table, the history table, and the triggers consistent with one another.
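As a sketch of that hand-rolled approach, a trigger like the following (table and column names are hypothetical) copies the pre-change rows into the history table on every update or delete:

```sql
-- Hypothetical main table dbo.Employee with a hand-maintained
-- history table dbo.Employee_History kept in sync by a trigger.
CREATE TRIGGER trg_Employee_Audit
ON dbo.Employee
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- The 'deleted' pseudo-table holds the pre-change rows
    -- for both UPDATE and DELETE statements.
    INSERT INTO dbo.Employee_History (EmployeeId, Name, Salary, ChangedAt)
    SELECT EmployeeId, Name, Salary, SYSUTCDATETIME()
    FROM deleted;
END;
```

Every schema change to the main table has to be mirrored in the history table and the trigger by hand (or by the tool), which is exactly the maintenance burden described above.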
I guess life will be a bit easier in the SQL Server 2016 world, hopefully, despite the feature's limitations.
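With SYSTEM_VERSIONING, SQL Server maintains the history table and the period columns for you. A minimal sketch (table, column, and history-table names are my own, not from the article):

```sql
-- A system-versioned (temporal) table: SQL Server fills in the
-- period columns and writes superseded rows to the history table.
CREATE TABLE dbo.Employee
(
    EmployeeId   INT            NOT NULL PRIMARY KEY CLUSTERED,
    Name         NVARCHAR(100)  NOT NULL,
    Salary       DECIMAL(10, 2) NOT NULL,
    SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.Employee_History));

-- Querying the past becomes a one-liner:
SELECT * FROM dbo.Employee
FOR SYSTEM_TIME AS OF '2016-06-01T00:00:00';
```

No triggers, and no homegrown tool keeping the pieces consistent.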