|8.30 - 9.00||Walk-in and registration|
|9.00 - 9.15||Welcome|
|9.15 - 11.15||Data Vault 2.0, Dan Linstedt
Organizations today have many needs, ranging from big data to NoSQL and from data governance to automation. Unfortunately, today's enterprise data warehouses simply don't fit the bill: most of the processes and procedures take too long to implement, can't be automated, or don't adapt well to big data or NoSQL solutions. Enter Data Vault 2.0. Dan Linstedt will discuss the evolution of enterprise data warehousing, why DV2.0 is needed, what it is, and how it meets today's demands for enterprise analytical solutions. The discussion will focus on the business benefits and the changes needed to reach Data Vault 2.0 compliance.
|11.15 - 11.45||Coffee/Tea Break|
|11.45 - 12.45||Data Vault and Data Virtualization: Double Flexibility
Modeling and developing data warehouses using Data Vault creates very flexible, integrated data structures that help ensure compliance. New data and relations can easily be added without influencing existing data and relations. All data directly derived from the Data Vault and used in data marts, star schemas, and even semantic layers and reports can be reconstructed at any time to meet compliance requirements.
Alas, there is a catch. Because Data Vault structures are not intended for reporting and analysis, organizations are forced to develop multiple derived data stores, such as data marts and cubes, to organize the data in a more practical structure that can be accessed more easily and quickly. All these derived data stores, however, are a potential maintenance nightmare. This session addresses a strategy called SuperNova, which uses Data Vault to access information easily without having to create derived data stores, reducing operational costs and improving flexibility. The method borrows heavily from data-virtualization technology.
|12.45 - 13.45||Lunch|
|13.45 - 14.45||User Case De Bijenkorf, Robert Winters & Andrei Scorus
The Bijenkorf recently embarked on an ambitious journey: have a team of two design, build, and deploy a full BI stack (systems, DWH, marts, reporting, production feedback) in four months. As the data included event streams, 3NF databases, APIs, and “what is normalization?” systems, a partial Data Vault model was designed to facilitate integration without compromising speed of data availability, and a highly automated, metadata-driven loading system was developed to minimize overhead and maintenance and accelerate development. We will cover our architectural and data model design, challenges in the implementation process, and the benefits of the approach taken.
With feedback from Dan Linstedt
|14.45 - 15.45||User Case MN Services, Martijn de Beer & Richard Siebeling
In April 2014, MN, a large pension administration and investment management company, started on a new enterprise data warehouse, with new hardware (Exadata), new software (Oracle Data Integrator), a new architecture with Data Vault at its core, and Scrum as the new methodology. During the implementation of the Data Vault we encountered a few issues, such as the relation between backdated corrections and master data time validity. We also found that storing ‘all the data, all the time’ was the easy part, but extracting the data from the Data Vault into the data marts proved time-consuming. To speed up the development of the data marts we were inspired by Rick van der Lans's article on SuperNova views. We translated the concept to our own situation and are now able to generate a view layer where all difficult time-related extractions are handled. Our development team can now concentrate on implementing the business logic in the data marts. Our presentation will outline our Data Vault implementation, the specific issues we encountered, and our implementation of the SuperNova view layer.
With feedback from Dan Linstedt
|15.45 -16.00||Coffee/Tea Break|
|16.00 - 17.00||User Case CMIS, Marcel Wiegand
At the beginning of 2014, CMIS decided to take their current data warehouse structure to the next level.
They wanted an enterprise data warehouse structure that would help shorten time-to-market and be flexible when it comes to changes and adding new sources. Their sources ranged from flat files to multiple OLTP databases, some containing more than 1,000 tables. The solution CMIS chose was generated Data Vault data warehouses and generated ETL that store almost all the information from the sources. The data warehouse structure combines ideas from Data Vault and Anchor Modeling. In addition, views and functions were added to make it easier to query the data warehouse and build data marts. This generated structure is combined with a Business Vault, which is tailored to meet the data needs of the data marts.
In this presentation I will show you the architecture, how it helps us store the data flexibly, how views and functions help us query the data warehouse, and how this solution helps us shorten the time-to-market.
With feedback from Dan Linstedt
|17.00||Closure and network drink|