Subscribe to Gui: www.youtube.com/@gui.ferreira
Get the source code: mailchi.mp/dometrain/stbf3-m5wdm
Hi Nick, why don't you make a video on how DB migrations are performed with a DB-first approach using DbUp or a similar library? I do understand why everybody is pushing towards EF, but I believe that a lot of devs who are starting their careers now are getting the wrong impression that EF is the only way.
@@ЛюбомирГеоргиев-о5й DbUp doesn't support rollback, and no migration system should be designed without it! Use FluentMigrator if you want an alternative. ;)
Thanks, Nick!
Keep Coding!💙
@@gui.ferreira I like your content Gui!
@gui.ferreira has such a calming voice. He could tell me to run migrations under any context and I'd consider it, because he sounds so reasonable. Too much power.
What is missing is the rollback. Have you ever tried to roll back to a previous version after a migration is done in a later version? The Down method for the new migration does not exist in the previous version, so you need to use the new software to first call the Down method, and only then can you put the older software back in. My strategy is simple: I have a Blazor UI for each database. When you start it, it checks if it needs migrations, and you can click a migration to execute it (or copy the generated SQL to run it manually, which is sometimes needed to prevent timeouts when you create a large index or something). In the same Blazor app you can also click a previous migration and roll back to before that migration. So it's manually controlled, no issue for scaling, easy. We do not mind a few minutes of downtime for a deployment, so this fits us; it may not fit you.
Good question. I opened the comments to write the same one. What I think about your approach: it requires additional manual steps and can lead to release errors within a large team, so it depends on the skills of the release person. Besides that, it increases lead time and the lag between two consistent states. But it looks great for small projects👍
It’s more popular to take a roll-forward-only approach these days. Rolling back has its own risks. The more guarantees you can add to help ensure the migration and the app work, the better the investment compared to rollback scripts, e.g. automated testing, health checks, etc.
@RebelZach Whilst a roll forward might be beneficial if you can't control the installation, a rollback is my preferred way of resolving issues quickly if the issue is severe enough.
A very good tutorial on a very common company task - but every company has its own tweaks on it
Thanks Nick and Gui for the really good content. I have been releasing using efbundle from Azure DevOps for a while now, and everything Gui spoke about makes a lot of sense. One video that would be very beneficial is how to squash or roll up your migrations. I have noticed on several projects that over time your builds start to slow down significantly as the number of migrations increases. I recently rolled up 500+ migrations into a single migration for schema changes and another migration for running SQL scripts directly, which has halved my build times. The best solution I have come up with so far is to drop everything in the migrations folder, recreate the initial migration, and then (in my case) add another migration to run the direct SQL-specific migrations. This works well for new databases, but it does require that you manually run a script to insert the new migration IDs into the __EFMigrationsHistory table on your production, UAT, etc. databases to prevent EF from trying to run any of the new migrations there (see the sketch below).
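For anyone trying the same squash, a rough sketch of the "mark the recreated migration as already applied" step done from code rather than a hand-run script; the migration ID, the product version, the default history table name, and the SQL Server syntax are all assumptions you would adjust:

```csharp
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// One-off helper (sketch): register the new squashed migration as "already applied"
// on databases built from the old migration chain, so EF does not try to re-run it.
// Assumes the default __EFMigrationsHistory table and SQL Server.
public static class MigrationHistorySeeder
{
    public static Task MarkAsAppliedAsync(DbContext db, string migrationId, string productVersion)
        => db.Database.ExecuteSqlRawAsync(
            """
            IF NOT EXISTS (SELECT 1 FROM __EFMigrationsHistory WHERE MigrationId = {0})
                INSERT INTO __EFMigrationsHistory (MigrationId, ProductVersion)
                VALUES ({0}, {1});
            """,
            migrationId, productVersion);
}

// Hypothetical usage:
// await MigrationHistorySeeder.MarkAsAppliedAsync(db, "20240101000000_InitialSquashed", "8.0.8");
```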
Thank you Gui, you are among the best and a big thanks to Nick for sharing the video.
Dude, what amazing timing! I am currently working on doing pretty much that (except we roll our own database migration utility for the time being).
What is entirely missing here: what about open-source, self-hostable / on-prem software? I only publish, for example, a container image. The users just run that. They do not want to worry about any migrations or the dotnet CLI. So I will still have to run migrations on app startup.
If it's a single container that doesn't need to scale, it's likely the same scenario as the mobile app I mentioned initially.
If your users are expected to treat the database as a blackbox and you enforce an upgrade order, that’s fine.
The longer the gaps between users updating (if they control that), the greater the chances of them running into a failure.
If you have multiple instances of your container, check out init containers, which at least exist in Kubernetes. This is something I have already seen in some Kubernetes Helm charts. Your init container runs only once, and after it completes your main container starts in any number of instances.
Consider something like Helm pre-install and pre-upgrade hooks for k8s.
What you could do is give the user the option to have it done during startup, but have it disabled by default. They can enable it with an environment variable or something, along the lines of the sketch below.
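A minimal sketch of that opt-in, assuming a web app with an AppDbContext; the variable name RUN_MIGRATIONS_ON_STARTUP and the SQL Server provider are just examples:

```csharp
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDbContext<AppDbContext>(o =>
    o.UseSqlServer(builder.Configuration.GetConnectionString("Default")));

var app = builder.Build();

// Off by default: self-hosted users opt in explicitly via an environment variable.
var runMigrations = Environment.GetEnvironmentVariable("RUN_MIGRATIONS_ON_STARTUP");
if (string.Equals(runMigrations, "true", StringComparison.OrdinalIgnoreCase))
{
    using var scope = app.Services.CreateScope();
    var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
    await db.Database.MigrateAsync(); // applies any pending migrations before serving traffic
}

app.Run();
```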
I have always encapsulated the migrations in a separate console application that is run by pipelines. And no migration system should be designed without rollback support, so I also had to implement "release batch" support on top of EF, which lets me give a release name to every applied migration so that I can roll back all the migrations applied under the same release name.
I can't understand why naming your migration installation isn't available out of the box...
Can you roll back with the bundle? It might actually be an alternative... (see the console-runner sketch below)
why would you need rollback support? I personally haven’t found a use case for that. Always roll forward.
if you need to roll back, use DB backups
@@Miggleness The pipeline should do an automatic rollback on its own when health checks don't go green after deployment. Backups are too large and slow to work with if you want to keep decent uptime. And you will lose some data with backups, as creating a backup also takes time and the service is being used at the same time.
And sometimes you have to make the decision to roll back to a previous version after a day or two.
Backups are the wrong tool for CI/CD.
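To make the console-app idea from the comment above concrete, a minimal runner sketch; it skips the release-batch bookkeeping, and AppDbContext, the MIGRATIONS_CONNECTION_STRING variable, and the SQL Server provider are assumptions:

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Infrastructure;
using Microsoft.EntityFrameworkCore.Migrations;

// Migration runner (sketch): run by the pipeline, applies all pending migrations,
// or migrates to a named target, which can be an older migration (i.e. a rollback)
// as long as the Down methods are valid.
var connectionString = Environment.GetEnvironmentVariable("MIGRATIONS_CONNECTION_STRING")
    ?? throw new InvalidOperationException("MIGRATIONS_CONNECTION_STRING is not set.");

var options = new DbContextOptionsBuilder<AppDbContext>()
    .UseSqlServer(connectionString) // provider is just an example
    .Options;

await using var db = new AppDbContext(options);

// Optional first argument: the migration to migrate to (older or newer than current).
var targetMigration = args.Length > 0 ? args[0] : null;

if (targetMigration is null)
{
    await db.Database.MigrateAsync(); // roll forward to latest
}
else
{
    var migrator = db.Database.GetService<IMigrator>();
    await migrator.MigrateAsync(targetMigration); // forward or back to the named migration
}
```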
Didn't expect Chuck Norris to give us programming lessons 💪💪
Great one. Well described. Easy to understand. Thanks.
What about on-prem deployments? My pipeline generates an installer that the customer runs.
run the migration when the app starts?
@@7th_CAV_Trooper this is what the video explicitly advises against.
In such cases, often running the migrations during the installation process is the best idea.
I would generate an idempotent SQL script, which would be executed as part of your installer. Without further information, that makes the most sense to me.
@@markovcd yeah, but it works fine.
The timing on this is crazy :p I just spoke about us needing to revamp our migrations in our standup this morning. Great video
This is part of the TH-cam API for content creators. #bigbrother
Yes! I worked with migrations this morning too! We literally solved the same issue with our DevOps guy. F.. magic. Not the first time.
The workflow assumes your database is publicly accessible, which isn't a good idea. It would be good to see this performed on a secure production environment. I guess a private runner within the VNet would resolve it.
Came here to say the same.
What happens if you have a step in the deploy pipeline, after the migrations, that fails?
Do you roll back the migration, and if so, how?
Otherwise the database and the deployed code would be out of sync.
efbundle.exe allows you to specify the migration you want to bring your database to. It's a bit of work, but you can use the same application to roll back the database, assuming you have a valid Down step.
@nickchapsas What is the reason to create a bundle file instead of including a `dotnet ef database update -- --environment production` command in the presented pipeline? Wouldn't it be simpler?
Because if you have several environments, you just want to apply the same migration bundle and not have to (again) get the sources, restore, and build.
Indeed, Gui also has really good content, and the migrations video is well done.
What about using the Visual Studio database project template instead of EF migrations?
What about tools like DbUp and grate?
I would also add one item that is missing from the video and very important in my opinion: in the main routine, you should check whether the current migration matches the database or if there are pending migrations to apply. In that case, at least log a warning, and consider stopping your app immediately. Don't wait for the first query (of a real customer) before discovering the mismatch! Something like the sketch below.
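A rough sketch of that startup guard, assuming EF Core's pending-migrations API and a fail-fast policy; the helper name is made up:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Logging;

public static class MigrationGuard
{
    // Fail fast if the database schema is behind the code, instead of letting the
    // first real customer query discover the mismatch.
    public static async Task EnsureUpToDateAsync(DbContext db, ILogger logger)
    {
        var pending = (await db.Database.GetPendingMigrationsAsync()).ToList();
        if (pending.Count == 0) return;

        logger.LogWarning("Database is missing {Count} migration(s): {Migrations}",
            pending.Count, string.Join(", ", pending));

        // Depending on your policy: warn only, or refuse to start at all.
        throw new InvalidOperationException("Pending EF Core migrations detected; aborting startup.");
    }
}
```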
Decent video. Typically I want my migrations to be retryable and not wrapped in a transaction, so I'm not holding any schema locks that could force downtime in production.
For example, if I add an index in SQL Server and use WITH (ONLINE = ON), this is fine; there's only a small lock at the end to swap some metadata. If I do this in a transaction, write access to the table is blocked by a schema stability lock until the index is built.
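As an illustration, the transaction suppression in an EF Core migration looks roughly like this; the index and table names are placeholders and the SQL is SQL Server syntax:

```csharp
using Microsoft.EntityFrameworkCore.Migrations;

public partial class AddOrdersCustomerIdIndex : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Run outside the migration's transaction so the online index build
        // doesn't hold a schema lock on the table for its whole duration.
        migrationBuilder.Sql(
            "CREATE INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId) WITH (ONLINE = ON);",
            suppressTransaction: true);
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.Sql(
            "DROP INDEX IX_Orders_CustomerId ON dbo.Orders;",
            suppressTransaction: true);
    }
}
```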
Great video! Thanks for sharing
I only see one issue, regarding the IP of the database connection request. How can I make this configurable when there are network access restrictions?
The script will contain seed data if you use .HasData through the ModelBuilder. More precisely, the migrations themselves will contain those data inserts if you add data in OnModelCreating, and the scripting will translate them into SQL. I'm surprised the bundle gets data from startup code as well; it certainly doesn't just evaluate the migration code.
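For reference, a small sketch of model-level seed data that ends up inside the generated migration; the entity and values are made up:

```csharp
using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    public DbSet<Currency> Currencies => Set<Currency>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // HasData is captured by the next `dotnet ef migrations add` as InsertData calls,
        // so the generated SQL script (and bundle) will contain these rows.
        modelBuilder.Entity<Currency>().HasData(
            new Currency { Id = 1, Code = "EUR" },
            new Currency { Id = 2, Code = "USD" });
    }
}

public class Currency
{
    public int Id { get; set; }
    public required string Code { get; set; }
}
```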
Awesome video Nick and Gui!
The only thing I missed is the fact that the EF migration needs a connection string set up so it knows where to connect,
assuming that's not necessarily the default connection string set in the application, and that it also might be part of a separate GitHub Action. I would probably add that as a secret for the specific GitHub environment that is used, and then override the connection string environment variable, or pass it explicitly with the `--connection` argument (see the sketch below).
also, a Nick video without 69 or 420? come on guys
Your idea of using a secret specific to an environment is correct (if you have access to GH Environments).
Regarding the 69 or 42... sorry. I am "low-cost" version of Nick 😅
The seed value was 420 in the seed data!
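To illustrate the override-by-environment-variable option from the comment above: if the app reads the connection string from configuration, a secret exposed as ConnectionStrings__Default overrides appsettings.json automatically; AppDbContext, the connection name, and the provider are assumptions:

```csharp
using Microsoft.EntityFrameworkCore;

// Program.cs (sketch): the connection string comes from IConfiguration, so a pipeline
// or GitHub environment can inject the secret as the environment variable
// ConnectionStrings__Default and it takes precedence over appsettings.json.
var builder = WebApplication.CreateBuilder(args);

var connectionString = builder.Configuration.GetConnectionString("Default")
    ?? throw new InvalidOperationException("Connection string 'Default' is not configured.");

builder.Services.AddDbContext<AppDbContext>(o => o.UseSqlServer(connectionString));

var app = builder.Build();
app.Run();
```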
We actually have a project that takes a parameter and executes different migrations depending on the schema etc. That project is integrated with a big system and always runs with the build pipeline. There we check whether the database migrations history table has been updated or not. We also run some tables as scripts that trigger with the other migrations, like the NServiceBus tables that set up what it needs. Works pretty well. Every time we need a new component, or a change to the build that needs new tables etc., we do it with a new parameter, or just run the same parameter with that DB schema if we need to update a table, for example. A regular migration.
The migration bundle for Entity Framework is a great way to deploy database changes peacefully. I know it has a few drawbacks, as you mentioned, like seeding data with a script. Please do more video content on how to push those changes into Git and on a seamless review process in DevOps or GitHub.
Hey @Nick, what do you think about the ABP framework and its DbMigrator?
How would you do this for a multi-tenant environment, especially one where you don't know the tenants at build time (but the application knows them at runtime)?
deploy the same migration script for all tenants
@@AmateurSpecialist I use fluentmigrator in a console app
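A rough sketch of the "same migrations for every tenant" loop using plain EF Core; the tenant lookup, AppDbContext, and the SQL Server provider are assumptions:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Sketch: apply the same EF Core migrations to every tenant database discovered at runtime.
public static class TenantMigrator
{
    public static async Task MigrateAllAsync(IReadOnlyList<string> tenantConnectionStrings)
    {
        foreach (var connectionString in tenantConnectionStrings)
        {
            var options = new DbContextOptionsBuilder<AppDbContext>()
                .UseSqlServer(connectionString) // provider is just an example
                .Options;

            await using var db = new AppDbContext(options);
            await db.Database.MigrateAsync(); // same migration set, one tenant at a time
        }
    }
}
```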
I've actually worked on a project where the migrations were controlled by the tenants; essentially they were run via API calls. The main challenge was allowing the codebase to run on multiple different database schemas. For that I had to change the way EF caches the model, so a single DbContext type could have different schemas depending on the version being used (see the sketch below). Security around it, to ensure customers couldn't mess up their own databases, was another ballache. But we had very specific enterprise requirements for our tenants, where some wanted full control over the data but for the SaaS to still be hosted by us.
Multi tenant applications are just a pain in the arse for data storage management
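For anyone facing the same caching issue, the usual approach is a custom IModelCacheKeyFactory so EF builds and caches one model per schema; a rough sketch, where TenantDbContext and its Schema property are illustrative:

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Infrastructure;

// A context that knows which schema it targets (sketch).
public class TenantDbContext : DbContext
{
    public TenantDbContext(DbContextOptions<TenantDbContext> options, string schema)
        : base(options) => Schema = schema;

    public string Schema { get; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
        => modelBuilder.HasDefaultSchema(Schema); // tables land in the tenant's schema
}

// Include the schema in the model cache key so EF builds one model per schema
// instead of reusing the first model it ever compiled for this context type.
public sealed class TenantModelCacheKeyFactory : IModelCacheKeyFactory
{
    public object Create(DbContext context, bool designTime)
        => context is TenantDbContext tenant
            ? (context.GetType(), tenant.Schema, designTime)
            : (object)(context.GetType(), designTime);

    public object Create(DbContext context)
        => Create(context, designTime: false);
}

// Registered when configuring the context, e.g.:
// optionsBuilder.ReplaceService<IModelCacheKeyFactory, TenantModelCacheKeyFactory>();
```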
@@Potato-m1l That's why there is a better solution called multi-instance architecture.
What editor does he use? It doesn't look like Visual Studio.
Rider
Why is the bundle needed? We can apply migrations with the dotnet-ef tool directly
Because when running your pipeline you only have an artifact and do not have the source code.
CI and CD should be separate things. Moreover, your runners should not have any access to prod. It was explained at the beginning of the video.
From your machine to the cloud!
@@АлексейЩербак-б3ь But how do you automate the deployment if you give no runner at all access to prod? Something needs to move the bits and bytes there and needs connectivity.
@@ray89520 Do you want to give them everything? The connection string to the DB, root access? No. We should grant limited permissions, as limited as possible.
What about Data-tier applications (DACPAC)? I think this is one of the safest and easiest ways to deploy/migrate a database.
Idea for an episode: Dacpac db management
this is gold, thanks a gazillion
Have you ever attempted to use AtlasGo?
What if the migration is applied and the web deploy fails? I don't consider this more "production ready" compared to the startup approach (if we don't scale the app, and a lot of apps out there are not scaled).
Use Flyway.
And what will this bundle look like after 1000 migrations?
A miserable little pile of code? 😂
It’s just a console app; you won’t run into problems after 1000 migrations any more than the EF migrations history table would run into issues.
This approach seems to assume the database has no firewall and is accessible to every host on the internet. Seems like a major security issue.
This is why you can make a bundle that a DBA can download and apply offline or on-premises on a firewalled network.
@ahupond Yes, but the point of the pipeline was to automate it. In the past, I have temporarily added the GitHub Actions runner's IP address to the database allow list while the pipeline is running.
Using MySQL, any DDL failure will result in the need for a database restore, so those "Down" methods are kinda pointless IMO.
It all seems a bit overly complicated; you can store a special DDL connection string as a separate secret and use that for DDL, and a different one for runtime.
(We have a process for clustered service deployments, it ensures the first service that hits the migration will cause others to wait)
We have specific requirements though as we have some clients who purely pull a container image from us and run in their environments, and just want a black box.
Am I the only one who always hears "hello everybody im naked" at the beginning? 😅
I DO IT ALL MANUALLY... generate the SQL script for the migration, copy-paste it into the DB, and run it... only then am I sure :D
Just finished cssd semester 😅
IMO the application is just that: an application, and with a DB it is just one (if currently the only) application user of that DB. And really, the DB design and build should be a 'project' of its own - even if just a set of SQL scripts - not a step in starting up an app. Unless that app and that DB are really trivial. In a serious DB, the data will live longer than that initial app, which will be replaced by one or more succeeding apps.
I’ve never came this fast, this usually doesn’t happen
👀
Bet you say that to all the girls!
I was expecting more; this is like one paragraph from the docs put into a 20-minute video.
I'm still not the least bit convinced that code-first and migrations are the right way to manage your schemas and the propagation of schema changes throughout your environments. Are devs that afraid of, annoyed by, or indifferent to working with databases directly? I don't understand the problem we're solving, other than creating devs who have never actually worked in MSSQL/PostgreSQL or another modern RDBMS. Data, and where and how we store it, isn't (or shouldn't be) just an afterthought.
Oftentimes devs cannot / are not allowed to access production databases directly. Commonly they must provide an executable or installer directly to the client, and the client just needs to press a button to deploy and run.
@@CodeAbstract Restricting ALTER SCHEMA permissions on elevated (test/stage/prod) environments is a good practice. Migrations don't solve that, and I don't think they were ever intended to. Having a professional who understands databases, schema changes, and data migration involved in propagating changes throughout the environments is a good thing, not something we should try to skip or work around as an industry. I understand some devs don't want to concern themselves with tables, columns, constraints, and indexes, but they should let the plentiful devs and DBAs who do care about that do what they do (and love) to make sure it's done right. Migrations were/are a really ill-advised end-run around good and proper database design and planned migrations with professional oversight.
And who says it all should be an afterthought?
Using an ORM is just a means to abstract and make yourself more productive by simplifying implementation details that are otherwise very repetitive, but it is by no means intended to completely dissociate yourself from such details (i.e. the queries produced, the schema produced, etc.).
Thinking otherwise is just blatant ignorance about what's the goal of an ORM and how to use them.
@@CesarDemi81 ORMs != CodeFirst+Migrations
ORMs were originally about mapping objects to existing database objects (read: tables and/or views), and they came long before CF+M was ever a thing.
Now, as a result of devs working long enough and solely in ORMs without ever really having to touch databases, that led to CF+M and unfortunately to the ultimate disconnect between those developers and proper database development. We're talking people in mid to senior level dev positions who have never written a single CREATE, ALTER, or DROP statement. That's when they started saying things like "I don't want to worry about database schema stuff at all". If you haven't heard that from one or more devs on a team, you haven't been in the industry long enough and/or haven't worked on enough teams. Yes, it's a thing, and it's more prevalent than you realize. Note I'm not saying you yourself believe this way. I used to believe like you, that it wasn't being misused this way, but I assure you it is. A group of younger developers found the ultimate set of training wheels and a way to completely avoid learning proper database design, and they are trying to ride those training wheels into higher-level positions (fake it til you make it) and right into Staging and Production on high-level projects.
CF+M can be useful early on during initial development, but very quickly falls apart when you start moving into Test/Stage/Prod. Migrations, as in tracked and automated by EF, should really be avoided beyond Greenfield and the Dev environment.
@@keyser456 I don't think you understood what I just said.
I'm a professional senior developer and able to handle database design and manipulations.
My client is not a professional dev, they just need an application.
I'm not allowed to be anywhere near their production environment, let alone access their database and manipulate it first-hand.
I'm also not allowed to control their deployment strategy, otherwise I would just use docker or Kubernetes on a Linux server and we're good to go. Their production server also has network limitations so I cannot give them any online installer that does the same.
Due to these limitations, I have to provide them with an offline installer, they press execute and that's the end of it.
I cannot tell them have this application, have this microservice, these network flows, none of it. I have to work within the limitations and restrictions provided by my clients, as do many other professional developers.
If I were their internal architect or devops guy with certain permissions then yes, I would design their application infrastructure according to what I think would be best.
But this is not the reality in many cases, which you're not seeing.
This seems overly complicated. It is really simple to make this work on startup in a cluster, whether it is dev, test, or production. At least under Java, there are numerous libraries to help with this. I'm sure C# does too.
So how do these libraries help you deal with the situation where there's a migration and you have several instances of the service? Each instance checks during startup whether the migration has been executed, and every instance sees that the migration has not been executed yet. That's the situation where you need to extract migrations into a separate step in the deployment process.
@@dsvechnikov The only thing I could imagine would be, in a Kubernetes context, the use of init containers. Then you know they exist only once and are executed before your main container gets started. But you need to do that separation on your own anyway...
It basically locks all other instances while the single instance is being upgraded. The lock is released after the migration, and the other instances then skip the migration. Pretty simple really. On the other hand, our system is not that complex and we deploy in chunks of instances in a rolling update.
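For the curious, one common way to get that locking with plain EF Core and SQL Server is an application lock around the migration call; a rough sketch (the lock name is arbitrary, the return value of sp_getapplock should be checked in real code, and other databases have their own advisory-lock equivalents):

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public static class LockedMigrator
{
    // Only one instance applies the migrations; the others block on the same
    // application lock and then find nothing pending.
    public static async Task MigrateWithLockAsync(DbContext db)
    {
        // Keep one physical connection open for the whole block so the
        // session-scoped lock and the migration run on the same session.
        await db.Database.OpenConnectionAsync();
        try
        {
            await db.Database.ExecuteSqlRawAsync(
                "EXEC sp_getapplock @Resource = 'ef_migrations', @LockMode = 'Exclusive', " +
                "@LockOwner = 'Session', @LockTimeout = 300000;");

            if ((await db.Database.GetPendingMigrationsAsync()).Any())
            {
                await db.Database.MigrateAsync();
            }

            await db.Database.ExecuteSqlRawAsync(
                "EXEC sp_releaseapplock @Resource = 'ef_migrations', @LockOwner = 'Session';");
        }
        finally
        {
            await db.Database.CloseConnectionAsync();
        }
    }
}
```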
comments.First();
CA1806
@@yegorandrosov6334 Dude, that was great. Gave me a chuckle =)
Guess the first will always be Nick's pinned comment. Yours is somewhere in the middle here.
And now without Entity Framework. Go. The title should be: Do database migrations with Entity Framework right.
...that IS the title
@@nicholaskinzel3908 It was not initially.