5 reasons for not using DB triggers
Triggers are a nice little feature that most relational DBMSs offer to automatically execute code in response to certain events on a particular table or view.
Their primary aim is to maintain data integrity, but they are often used to perform arbitrary data transformations in response to changes or other events in the database.
Here are 5 reasons why you shouldn’t use them, though.
1. Triggers add programming magic to your application
Once you add triggers to a table, the results of updates to that table are no longer solely dependent on the SQL statement executed: no developer can tell the outcome of an INSERT/UPDATE/DELETE statement just by looking at the SQL code.
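A minimal sketch of this using Python's built-in sqlite3 module (table, column, and trigger names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);

    -- The trigger silently rewrites every inserted name.
    CREATE TRIGGER normalize_name AFTER INSERT ON users
    BEGIN
        UPDATE users SET name = upper(NEW.name) WHERE id = NEW.id;
    END;
""")

# The INSERT statement alone suggests 'alice' will be stored...
conn.execute("INSERT INTO users (name) VALUES ('alice')")

# ...but the trigger changed the outcome behind the caller's back.
print(conn.execute("SELECT name FROM users").fetchone()[0])  # ALICE
```

Nothing in the INSERT hints that the stored value will be `ALICE`; you have to know the trigger exists.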
2. Triggers add business logic in the DB
Business logic in the data layer is a bad idea: it violates the separation-of-concerns principle of a multi-tier architecture.
Moreover, your business logic is now coded in at least 2 programming languages:
- the one you chose for your app
- the DBMS-specific, SQL-ish trigger language
3. Triggers limit scalability
Triggers can improve performance in the short term but put more burden on the DB layer (which, by definition, can’t scale out).
The same observation applies to expensive joins, whose performance degrades significantly as table sizes grow.
Scalability requires taking as much business logic as possible out of the DB layer and putting it in the application layer, since the latter can scale out easily (watch out for race conditions, though).
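One way to sketch the trigger-free alternative: a denormalized counter that a trigger might maintain is instead updated by application code, with a single transaction guarding against the race conditions mentioned above. The schema and names are hypothetical, and the upsert syntax assumes SQLite 3.24+:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author TEXT);
    CREATE TABLE post_counts (author TEXT PRIMARY KEY, n INTEGER NOT NULL);
""")

def add_post(conn, author):
    # One transaction keeps the two writes consistent without a trigger;
    # the logic now lives in the (horizontally scalable) application layer.
    with conn:
        conn.execute("INSERT INTO posts (author) VALUES (?)", (author,))
        conn.execute("""
            INSERT INTO post_counts (author, n) VALUES (?, 1)
            ON CONFLICT(author) DO UPDATE SET n = n + 1
        """, (author,))

add_post(conn, "ann")
add_post(conn, "ann")
print(conn.execute("SELECT n FROM post_counts WHERE author = 'ann'").fetchone()[0])  # 2
```

The DB is left doing only storage; the bookkeeping can run on any number of app servers.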
4. Triggers cannot be automatically tested and/or version controlled
As the title says: they live inside the database, outside your test suite and your repository.
Moreover, your DBA will want to mess around with them… great!
5. Triggers are not easily portable
Despite some standardization efforts over the years, triggers remain the least portable bit of the SQL world: each DBMS has its own syntax and its own procedural dialect.