A document describing user interface elements
A high-level representation of how data is organized and related
A report on daily sales figures
A compression algorithm for database backups
A programming language keyword
A representation of a person, place, thing, or concept about which we store data
A memory address in a computer’s RAM
A type of network protocol
A property or characteristic of an entity
A physical disk location for data files
A user ID in a system log
A graphical symbol in user interface design
To show how tables are indexed in the database
To depict how entities are logically associated
To determine the database’s backup strategy
To encrypt data at rest
A unique identifier for each entity instance
A code used to decrypt sensitive data
A foreign file pointer in a file system
A field that can store multiple data types simultaneously
To reference a primary key in another table, ensuring referential integrity
To store authentication tokens
To specify a backup key for system recovery
To define a user’s locale preferences
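The referential-integrity behavior described in the options above can be sketched with Python's built-in sqlite3 module; all table and column names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE emp (
    id INTEGER PRIMARY KEY,
    name TEXT,
    dept_id INTEGER REFERENCES dept(id)  -- foreign key to the parent table
)""")
conn.execute("INSERT INTO dept VALUES (1, 'Sales')")
conn.execute("INSERT INTO emp VALUES (10, 'Ada', 1)")  # valid: dept 1 exists

# Referential integrity: a row pointing at a missing parent is rejected
try:
    conn.execute("INSERT INTO emp VALUES (11, 'Bob', 99)")
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
print(fk_enforced)  # True
```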
It ensures all tables are immediately indexed for queries
It provides a high-level view of business concepts and relationships
It defines exact column data types for all attributes
It physically allocates disk space for databases
The encryption method used for data in transit
The minimum and maximum number of entity instances that can be related
The CPU usage of the database server
The total number of queries processed per second
Each attribute value is atomic
Each table must contain a foreign key
No null values are allowed
All tables must have exactly one relationship
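First normal form's atomicity requirement, named in the group above, can be sketched by splitting a multi-valued attribute into one row per value (the customer/phone names are hypothetical):

```python
# A non-1NF row packs several phone numbers into a single attribute value
raw = {"customer_id": 7, "phones": "555-0100, 555-0101"}

# 1NF: each attribute value must be atomic, so emit one row per phone number
rows = [
    {"customer_id": raw["customer_id"], "phone": p.strip()}
    for p in raw["phones"].split(",")
]
print(rows)  # two rows, each holding exactly one phone number
```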
Ensuring every non-key attribute is fully dependent on the entire primary key
Removing all relationships from the model
Converting data into binary format
Storing each table on a separate physical disk
Introduce transitive dependencies
Eliminate transitive dependencies for greater data integrity
Force every table to have only one column
Add redundant columns to improve read speeds
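The transitive-dependency elimination mentioned above (third normal form) can be sketched as a decomposition: here `city` depends on `customer_id`, not on the key `order_id`, so it moves to its own table (all names are illustrative):

```python
# order_id -> customer_id -> city is a transitive dependency on the key order_id
orders = [
    {"order_id": 1, "customer_id": "C1", "city": "Oslo"},
    {"order_id": 2, "customer_id": "C1", "city": "Oslo"},
    {"order_id": 3, "customer_id": "C2", "city": "Bergen"},
]

# 3NF decomposition: move city into a customer table keyed by customer_id
customers = {r["customer_id"]: {"city": r["city"]} for r in orders}
orders_3nf = [
    {"order_id": r["order_id"], "customer_id": r["customer_id"]} for r in orders
]
# Each city is now stored once per customer instead of once per order,
# so updating a customer's city touches exactly one row
print(len(customers), len(orders_3nf))  # 2 3
```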
Stored procedures
Database tables
User roles
Backup logs
High-level entities
Detailed attributes or data types
Relationships between broad concepts
Representation of business domain concepts
To reduce performance intentionally
To improve query performance by reducing the number of joins
To increase data redundancy without purpose
To meet strict normalization rules
Reference data for authentication
Measurement or metric data about business processes
Only text attributes like comments
User access logs
The main facts and metrics of the business process
Context and descriptive attributes related to facts
Binary data streams for ETL processes
Details of server configurations
A star schema with denormalized dimensions
A more normalized variant of a star schema where dimensions are split into sub-dimensions
A random collection of unrelated tables
A schema used only for XML data
Store dimensions that never change
Manage attribute changes in dimension tables over time
Archive old transactional data
Ensure immediate data loss on updates
Duplicate data in multiple tables
Referential integrity between related tables
Faster disk reads
Complete independence between entities
Decrease data redundancy and improve data integrity
Maximize disk space usage
Limit query complexity
Eliminate primary keys entirely
Eliminating partial dependencies
Ensuring every determinant is a candidate key
Adding redundant attributes to speed up queries
Denormalizing the schema structure
Removing atomicity from attributes
Eliminating multi-valued dependencies
Restricting the number of attributes per table
Ensuring no table has more than 4 attributes
Decompose tables so no loss of information occurs when recombining
Introduce circular dependencies for testing
Combine all attributes into a single table
Ensure every attribute is numeric
No natural attribute uniquely identifies an entity
The primary key must be a natural attribute
The database should not have unique identifiers
A foreign key is not needed
A key made up of multiple attributes
A key that allows partial duplicates
A key that references multiple tables at once
A key that encrypts table data
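A composite key, as described in the options above, is simply a primary key spanning several columns; a minimal sqlite3 sketch (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Composite primary key: (student_id, course_id) together identify a row
conn.execute("""CREATE TABLE enrollment (
    student_id INTEGER,
    course_id  INTEGER,
    grade      TEXT,
    PRIMARY KEY (student_id, course_id)
)""")
conn.execute("INSERT INTO enrollment VALUES (1, 101, 'A')")
conn.execute("INSERT INTO enrollment VALUES (1, 102, 'B')")  # same student, new course: fine
try:
    conn.execute("INSERT INTO enrollment VALUES (1, 101, 'C')")  # duplicate pair: rejected
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
print(duplicate_allowed)  # False
```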
Limit the values that can be stored in a column
Create database backups automatically
Control user access rights
Schedule index rebuilds
Speed up data retrieval by allowing quick lookups
Remove duplicate rows automatically
Encrypt the data stored in a table
Store metadata about users
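The lookup speed-up an index provides, named in the group above, can be observed via SQLite's query plan output (the `product`/`sku` schema is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, sku TEXT, price REAL)")
conn.executemany("INSERT INTO product VALUES (?, ?, ?)",
                 [(i, f"SKU-{i}", i * 1.5) for i in range(1000)])

# An index lets the engine jump to matching rows instead of scanning the table
conn.execute("CREATE INDEX idx_product_sku ON product(sku)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT price FROM product WHERE sku = 'SKU-500'"
).fetchall()
print(plan)  # the plan cites idx_product_sku rather than a full table scan
```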
All values in a column appear at least twice
All values in a column are distinct
A column can only store numeric values
A column is always nullable
Rows in a child table must have corresponding rows in the referenced parent table
A table can reference itself multiple times
All tables are stored on one server
The database does not allow any NULL values
Deleting a parent row automatically deletes matching child rows
Adding a row automatically adds related child rows
Updating a child row updates the parent row’s key
Dropping a column renames other columns
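The cascading-delete behavior described above maps directly to an `ON DELETE CASCADE` clause; a minimal sqlite3 sketch (table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # required for SQLite to act on the clause
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE child (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES parent(id) ON DELETE CASCADE
)""")
conn.execute("INSERT INTO parent VALUES (1)")
conn.execute("INSERT INTO child VALUES (10, 1)")
conn.execute("INSERT INTO child VALUES (11, 1)")

conn.execute("DELETE FROM parent WHERE id = 1")  # cascades to both child rows
remaining = conn.execute("SELECT COUNT(*) FROM child").fetchone()[0]
print(remaining)  # 0
```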
High-level business concepts without technical details
Precise data types and indexing strategies
Detailed server configuration parameters
ETL batch scheduling
Fully defined tables, columns, and relationships without considering physical storage
Actual disk space allocation
Network topology of database servers
Physical file partitioning strategies
How data is physically stored, indexed, and accessed on the chosen platform
Only the conceptual relationships between entities
Broad business requirements only
The management hierarchy of the organization
Depicting classes as entities and attributes as class properties
Replacing all ER diagrams completely
Modeling only database indexes
Creating binary executables for DB servers
A glossary of terms and definitions related to data elements
A tool to encrypt database credentials
An index rebuilding strategy document
A script that deletes old backups
Entities as rectangles and relationships as lines
Pie charts for CPU usage
Network diagrams of routers and switches
Real-time query execution plans
One-to-many relationships
Binary attributes only
IPv6 network addresses
Recursive triggers
A logical grouping of database objects (tables, views, etc.)
A hardware specification for the server
A tool for database backup scheduling
A synonym for a single table
The primary focus is on conceptual or logical data structure
The primary goal is to model object-oriented software classes
The system requires real-time analytics on unstructured data
Only physical storage details matter
Automating conceptual, logical, and physical modeling tasks
Serving as database query engines
Acting as web hosting platforms
Storing unencrypted passwords
Flat tables without any hierarchy
A tree of parent-child relationships
Random binary blobs
Key-value pairs only
It uses multiple parent-child relationships (graphs) rather than a strict hierarchy
It never uses pointers or links
It is limited to one-to-one relationships only
It requires no primary keys
Rows and columns only
Simple pairs of unique keys and associated values
Strict schemas with multiple joins
Hierarchical trees with pointers
Tables with fixed schemas
Flexible, semi-structured documents (e.g., JSON)
CSV files only
Strict binary formats only
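The flexible, semi-structured documents mentioned above can be sketched with plain JSON: two documents in the same collection need not share a schema (the field names are hypothetical):

```python
import json

# Documents in one collection can carry different fields and nested structure
doc_a = {"_id": 1, "name": "Ada", "email": "ada@example.com"}
doc_b = {"_id": 2, "name": "Bob", "tags": ["vip"], "address": {"city": "Oslo"}}

collection = [doc_a, doc_b]
# Round-trip through JSON, the typical wire format for document stores
restored = [json.loads(json.dumps(d)) for d in collection]
print(restored[1]["address"]["city"])  # Oslo
```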
Storing entire rows in a single file
Grouping data by column families for efficient compression and retrieval
Eliminating columns altogether
Using only primary keys without attributes
Flat files of text
A network of nodes (entities) and edges (relationships)
Strictly normalized relational tables
Binary search trees only
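The node-and-edge structure named in the group above can be sketched with plain dictionaries and tuples; the node names and relationship type here are invented for illustration:

```python
# Nodes (entities) with properties, plus typed edges (relationships) between them
nodes = {
    "alice": {"label": "Person", "age": 34},
    "acme":  {"label": "Company"},
}
edges = [("alice", "WORKS_FOR", "acme")]

def neighbors(node, rel):
    """Follow outgoing edges of a given relationship type."""
    return [dst for src, r, dst in edges if src == node and r == rel]

print(neighbors("alice", "WORKS_FOR"))  # ['acme']
```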
Additional attributes attached to nodes and edges
Encryption keys for data files
Backup labels for historical versions
Network addresses for remote servers
Subject-predicate-object triples representing semantic data
Fully normalized relational schemas
Highly denormalized star schemas
Using a single database technology for all needs
Using multiple databases or storage technologies to handle different data requirements
Converting relational data to binary each time
Relying solely on in-memory data stores
Compressing data into shards for archival
Distributing data horizontally across multiple servers for scalability
Merging all data into one large server
Creating backup copies on the same machine
Extract, Transform, Load
Encrypt, Transfer, Lock
Evaluate, Test, Log
Erase, Time, Loop
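The Extract, Transform, Load sequence spelled out above can be sketched as three small functions; the source records and target list are stand-ins for real systems:

```python
# Extract: pull raw records from a source (here, an in-memory list)
def extract():
    return [{"name": " Ada ", "amount": "100"}, {"name": "Bob", "amount": "250"}]

# Transform: clean strings and convert types to match the target schema
def transform(rows):
    return [{"name": r["name"].strip(), "amount": int(r["amount"])} for r in rows]

# Load: write into the target store (a list standing in for a warehouse table)
warehouse = []
def load(rows):
    warehouse.extend(rows)

load(transform(extract()))
print(warehouse)
```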
Centralize facts surrounded by denormalized dimension tables for simpler queries
Spread data evenly across multiple normalized tables
Store only unstructured binary data
Provide a hierarchical model of all data
Is unique to a single fact table
Is shared and consistent across multiple fact tables and data marts
Cannot be used in more than one schema
Always uses a surrogate key
Discarding them immediately
Loading fact data with a default dimension key until the actual dimension arrives
Stopping all ETL processes
Merging them into unrelated dimensions
Overwriting old data with new data
Keeping historical versions by adding new rows
Ignoring changes altogether
Storing changes in a separate database
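The keep-history-by-adding-rows approach above is a Type 2 slowly changing dimension; a minimal sketch assuming a customer dimension with validity dates and a current-row flag:

```python
from datetime import date

# Type 2 SCD: history is kept by closing the old row and appending a new one
dim_customer = [
    {"key": 1, "customer_id": "C1", "city": "Oslo",
     "valid_from": date(2020, 1, 1), "valid_to": None, "current": True},
]

def scd2_update(dim, customer_id, new_city, as_of):
    for row in dim:
        if row["customer_id"] == customer_id and row["current"]:
            row["valid_to"] = as_of      # close out the old version
            row["current"] = False
    dim.append({"key": max(r["key"] for r in dim) + 1,
                "customer_id": customer_id, "city": new_city,
                "valid_from": as_of, "valid_to": None, "current": True})

scd2_update(dim_customer, "C1", "Bergen", date(2024, 6, 1))
print(len(dim_customer))  # 2: the historical Oslo row plus the current Bergen row
```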
All measures into a single dimension
Random low-cardinality attributes into a single dimension
Only numeric attributes into one table
Unrelated fact tables into a single table
Dimensions as foreign keys
Fact tables as non-key attributes
Separate metadata repositories
External files only
Fully normalize all dimensions
Provide a flexible, scalable, and auditable data warehouse architecture
Replace star schemas with XML files
Eliminate the need for ETL processes
They are tied to changing business logic
They remain stable over time, independent of natural keys
They are always larger integers for performance
They cannot be used in foreign keys
The physical network layout of servers
A set of standardized conformed dimensions used across multiple data marts
A method of compressing fact tables
Migrating data using a physical bus device
Creating multiple inconsistent versions of core data
Providing a single, trusted version of key business entities across the enterprise
Managing only transactional records
Storing historical logs of ETL processes
The CPU architecture of the database host
The origins, transformations, and movements of data throughout its lifecycle
The encryption keys used by the database
The downtime schedule of servers
Data about data (e.g., definitions, data types, rules)
Application user logs
Binary files containing source code
Only relevant to physical disk storage
Automatically deleting old data
Providing a searchable inventory of available data assets and their metadata
Limiting user access to a single table
Encrypting the entire database schema
Monitoring and ensuring data quality, consistency, and compliance
Developing the front-end applications
Managing only database upgrades
Scheduling hardware maintenance
Only numeric data is stored
Data meets defined standards of completeness, accuracy, and consistency
All data is in XML format
Data is always encrypted
Constantly changing transactional data
Managing sets of stable, standard values (like country codes) used across systems
Backing up database indexes only
Archiving old log files
They store data physically
They define common business terminology, ensuring everyone uses consistent definitions
They enforce foreign key constraints
They represent encrypted backup keys
Restrict data usage entirely
Establish policies, standards, and processes to ensure effective data management
Provide free access to all data for everyone
Eliminate data modeling altogether
Having too few data sources
Reconciling inconsistent representations of the same master data from multiple systems
Lack of storage space
Inability to encrypt data
Ignoring the concept of time entirely
Managing data that changes over time and capturing historical states
Only future predictions of data values
Removing historical data from the database
Entities and static attributes only
Storing and analyzing sequences of events or actions over time
Converting relational data to key-value pairs
Eliminating the concept of time in data storage
Aligning the data model closely with the domain’s ubiquitous language and concepts
Using only surrogate keys
Storing all data in one large table for simplicity
Ignoring business requirements
Ensuring all data is normalized into 5NF
Structuring data to facilitate feature extraction and training of models
Eliminating numeric attributes
Removing all categorical attributes
Describe meaning, relationships, and classification of data concepts
Are only used in relational databases
Replace all ERDs immediately
Cannot represent hierarchical information
Centralizing all data modeling in one department
Decentralizing data ownership to domain teams and treating data as a product
Eliminating domain-specific models
Using only graph databases for all storage
Batch updates only
Real-time ingestion, storage, and processing of continuously generated data
Storing everything in CSV files offline
Using only hierarchical data structures
Only relational and hierarchical models
Multiple data models (relational, document, graph) within the same platform
Only key-value and network models
Models that cannot be queried
Physically moving all data into one database
Providing a unified, virtual view of data from multiple sources without physically integrating them
Compressing tables into binary form
Only supporting structured data
Hard-coded data definitions only
The metadata repository to dynamically generate and maintain data structures
Manual schema changes only
Non-versioned definitions of data
Encrypt node data
Classify nodes or edges into categories
Delete unwanted relationships
Perform indexing only
Represent and link data using subject-predicate-object triples on the Semantic Web
Store only numeric metrics
Replace SQL queries with MapReduce jobs
Ensure relational integrity in star schemas
Strict normalization to 3NF
A schema-on-read approach, allowing flexible ingestion of various data formats
Only CSV-formatted input
Immediate indexing of all attributes
Ignoring data quality issues
Cleaning, transforming, and structuring raw data to fit the model’s requirements
Storing data in raw binary without schema
Eliminating the need for ETL processes
Rigidity and no changes after initial design
Iterative, flexible development of models to adapt to changing requirements
Using a single type of database for all solutions
Avoiding stakeholder input
Designing schemas tied to specific on-prem hardware
Considering scalability, distribution, and eventual consistency
Ignoring data redundancy completely
Only using hierarchical models
Storing only textual descriptions of locations
Handling spatial attributes (coordinates, shapes) and enabling spatial queries
Preventing any location-based queries
Using only key-value stores for addresses
Separate transactional and analytical data strictly
Support both real-time transactional and analytical workloads on the same data
Only perform analytics once a day
Disallow any updates to fact tables
Stop all writes
Enforce structure and constraints on document data
Convert documents to CSV format
Encrypt all documents by default
Promoting a single large monolithic database
Encouraging domain-specific, decoupled data stores for each service
Eliminating the concept of domain boundaries
Forcing strict relational schemas
Random node-edge models without meaning
Graph structures with semantic metadata and ontologies for richer context
Only hierarchical data sets
Data extracted from CPU caches
Automating identification and tagging of sensitive data
Removing the need for metadata
Ensuring only numeric data is stored
Disabling policy enforcement
Storing only the latest state of an entity
Recording a series of events that lead to the current state, enabling reconstruction of past states
Removing historical logs entirely
Only working with denormalized schemas
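The event-sourcing idea above, deriving current state by replaying a log of events, can be sketched with an invented account example; past states fall out of replaying only a prefix of the log:

```python
# Events are the source of truth; state is derived by replaying them in order
events = [
    {"type": "AccountOpened"},
    {"type": "Deposited", "amount": 100},
    {"type": "Withdrawn", "amount": 30},
]

def replay(evts):
    state = {"balance": 0}
    for e in evts:
        if e["type"] == "Deposited":
            state["balance"] += e["amount"]
        elif e["type"] == "Withdrawn":
            state["balance"] -= e["amount"]
    return state

print(replay(events))       # current state: {'balance': 70}
print(replay(events[:2]))   # reconstructed past state: {'balance': 100}
```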
Handling high-velocity, time-stamped sensor data with flexible schemas
Storing only static reference data
Eliminating all time-based attributes
Using only relational databases
Representing nodes and edges as vector embeddings for machine learning tasks
Removing relationships entirely
Converting documents to tables
Storing only textual labels
Create isolated data silos
Provide a unified environment for data management and governance, regardless of location
Remove metadata repositories
Restrict flexibility of data access
Splits a table into subsets of rows across multiple servers
Splits a table into subsets of columns for performance or compliance
Merges all tables into one giant table
Eliminates indexes from the schema
Remove all data entirely
Alter or mask sensitive attributes to protect privacy while maintaining utility
Store data in plaintext for performance
Involve only primary key encryption
That any node can have identical properties
That certain node properties are unique within the graph database
That relationships have no properties
That all queries run faster
Live data is always available
Representative test data is needed without exposing sensitive real data
Data sets must remain empty for testing
Only fixed numeric values are allowed