Force.com makes configuring the data model deceptively simple: new custom objects and custom fields can be created with a few clicks in a matter of seconds. That simplicity can breed stifling complexity, because developers new to the platform tend to apply traditional relational design techniques to the Force.com data model, and that is a mistake.
In a traditional relational database, data is easily stored and retrieved by traversing the relationships between tables. In Force.com, misusing relationships by applying normalization patterns can significantly increase the amount of code required and create major performance problems, because governor limits constrain resource consumption in the multitenant environment.
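As a sketch of that cost, consider a hypothetical over-normalized model in which shipping addresses are split into their own custom object, referenced by a lookup from an order object (the names `Order__c`, `Ship_Address__c`, and the fields below are assumptions for illustration, not a real schema). Every read of an order's address then forces a relationship traversal:

```sql
-- Hypothetical over-normalized model: address data lives on a separate
-- object, so every query must traverse the Ship_Address__r relationship.
SELECT Name,
       Ship_Address__r.Street__c,
       Ship_Address__r.City__c,
       Ship_Address__r.State__c
FROM Order__c
WHERE Ship_Address__r.State__c = 'CA'
```

At scale, cross-object filters like the one in the WHERE clause can defeat index selectivity, and the extra traversal work counts against the same governor limits that protect the multitenant environment.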
While it may seem counterintuitive, the key to successful Force.com data model design is denormalization: keeping the attributes that describe a core entity tightly coupled on a single object is the most efficient way to expose data to the Controller and Business Logic layers for presentation to the View/UI layer.
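Continuing the hypothetical order example (the object and field names are assumptions), the denormalized pattern keeps the descriptive attributes on the core object itself, so a single flat query serves the Controller layer:

```sql
-- Denormalized: address attributes live directly on the core object,
-- so the query reads one object with no relationship traversal.
SELECT Name, Ship_Street__c, Ship_City__c, Ship_State__c
FROM Order__c
WHERE Ship_State__c = 'CA'
```

The filter now targets a field on the queried object itself, and no related record has to be resolved per row.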
The Force.com data model is, for all intents and purposes, metadata. The database is abstracted by a core metadata layer that exposes basic attributes such as field name and field type, and that metadata in turn enables further abstraction for data-layer interfaces and validation logic. The actual core database tables are provisioned in a manner similar to a “big table” paradigm: each new object is allocated space for the core attributes that describe it, such as Name and ID, along with up to 500 columns that are abstracted by metadata. For this reason, applying relational database normalization patterns is actually detrimental to overall system performance; each new object consumes significant metadata storage and provides no performance gain in return.
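A rough sketch of that “big table” provisioning, in generic SQL (the table and column names here are assumptions loosely modeled on published descriptions of multitenant architectures, not the actual Force.com schema):

```sql
-- Hypothetical universal data table: one physical table holds rows for
-- every custom object; a metadata table maps each logical field
-- (name, type) onto one of the generic "flex" columns.
CREATE TABLE mt_data (
  guid    CHAR(18)      NOT NULL,  -- record ID
  obj_id  CHAR(18)      NOT NULL,  -- which custom object owns this row
  name    VARCHAR(255),            -- standard Name attribute
  value0  VARCHAR(4000),           -- flex columns; field metadata assigns
  value1  VARCHAR(4000),           -- meaning and type to each slot,
  value2  VARCHAR(4000)            -- up to the per-object field limit
);

CREATE TABLE mt_fields (
  obj_id     CHAR(18)     NOT NULL,  -- object the field belongs to
  field_name VARCHAR(255) NOT NULL,  -- logical field name
  field_type VARCHAR(30)  NOT NULL,  -- logical data type
  column_num SMALLINT     NOT NULL   -- which valueN slot stores the data
);
```

In a layout like this, a new custom object adds only metadata rows rather than physical tables, which is why splitting entities into extra objects buys no join performance and simply consumes more metadata storage.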