Works great for simpler projects. At scale (even at medium scale), it becomes a complex ballet of multi-step, backward- and forward-compatible changes to both code and DB schema and/or data. With zero-downtime releases, multiple app instances, possible code rollbacks, or feature toggles, there are few opportunities to rename tables or columns, drop indexes, or insert/update/delete data in a single batch. In my experience, the only way to achieve robust DB migrations is to make non-destructive changes, which requires careful planning, a good development process (well-maintained dev/test/staging environments, code reviews, integration and end-to-end (e2e) tests), and the right level of automation. And yes, having the complete DB schema under version control is a must-have. Alternatively, you could run 'dotnet ef database update' from your build/release software, cross your fingers, and hope for the best :)
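To illustrate what I mean by non-destructive, here is a minimal expand-then-contract sketch, assuming SQL Server; dbo.Customers, FirstName, LastName, and FullName are made-up names:

```sql
-- Expand phase: add the new column as nullable so old app versions keep working.
ALTER TABLE dbo.Customers ADD FullName nvarchar(200) NULL;
GO

-- Backfill in small batches to avoid long locks during a live release;
-- repeat until no rows are affected.
UPDATE TOP (1000) dbo.Customers
SET FullName = FirstName + N' ' + LastName
WHERE FullName IS NULL;

-- Contract phase: only after every app instance reads and writes FullName
-- can a later migration drop FirstName and LastName.
```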
34:00 I thought that the main reason for having unique versions is to avoid those "check if exists" guards, which are considerably more code than the changes themselves, isn't it? I think the missing piece here, since we can't always rely on transactions, is an automated backup of the development database before the migration, which we could restore if we make a mistake.
Those "check if exist" can be very complex. Thus migrations are hard to review as you could just guess that the reason you have complex checks is someone mada a mistake and tried to run the migration again.
The most troublesome migration I've had so far was altering a foreign key constraint. You could argue it's a "stateless" or idempotent change. I do agree that for "stateless" queries, views, functions, indexes, etc. it's very convenient to have scripts that are always run (RoundhousE everytime scripts). They are simple, and you have a history of changes. :-) But beware: once you gather enough of them, the migration can get very slow.
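A sketch of such an always-run script, assuming SQL Server 2016 SP1+ where CREATE OR ALTER is available (dbo.ActiveCustomers is a made-up view name):

```sql
-- Re-runnable on every deployment, no existence check needed.
CREATE OR ALTER VIEW dbo.ActiveCustomers
AS
SELECT CustomerId, FullName
FROM dbo.Customers
WHERE IsDeleted = 0;
```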
There's nothing stateless in a DB, as it's constantly live and "living": if you always run scripts to re-create views, functions, and indexes, those objects 1) may reference each other, 2) have a specific creation time, and 3) so the creation order always counts. That means you cannot simply "always run" them.
The problem with migrations is: you cannot see the state of the database in your source control.
Search for a table or stored procedure and you get multiple hits.
Imagine you managed your application code based on migrations: you would only have diffs of your functions, never the current version.
Excellent talk. Pity you didn't go deeper into rolling back failed migrations.
In my experience, rollbacks are very rare and usually more involved than just running a script, so they're often a waste of time.
Roundhouse has the worst documentation