What Is the Fastest Way To Do a Bulk Insert? Let’s Find Out

  • Published Jan 6, 2025

Comments • 68

  • @MilanJovanovicTech
    @MilanJovanovicTech  7 months ago +4

    Want to master Clean Architecture? Go here: bit.ly/3PupkOJ
    Want to unlock Modular Monoliths? Go here: bit.ly/3SXlzSt

    • @ksmemon1
      @ksmemon1 7 months ago +1

      Subscribed 👍

  • @alexanderst.7993
    @alexanderst.7993 7 months ago +3

    Milan, just wanted to say: thanks to you and your C# videos, I managed to land a job. Appreciate ya pal ;)

    • @MilanJovanovicTech
      @MilanJovanovicTech  7 months ago +1

      Nah, that's all on you buddy. Great work 💪

  • @たろ羊
    @たろ羊 7 months ago +12

    Thank you for the video. I'd like to see a video on bulk upserts/merging data

    • @MilanJovanovicTech
      @MilanJovanovicTech  7 months ago +1

      Great suggestion!

    • @oneshot2579
      @oneshot2579 7 months ago

      Especially for PostgreSQL

  • @anonymoos
    @anonymoos 7 months ago +4

    I like using the BulkCopy function; it is amazingly fast for importing large datasets. One thing to note is that the column names specified for the database side are case sensitive. If there's a mismatch in case on column names, the import will fail. You can also eke out even more performance by tweaking the batch size in BulkCopy using `bulk.BatchSize = 10_000;`. Actual performance will vary based on how many columns you're inserting.
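A minimal sketch of that setup, assuming Microsoft.Data.SqlClient and a hypothetical dbo.Users table with Email and FirstName columns (all names here are illustrative, not from the video):

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

// Build an in-memory DataTable; column names must match the destination
// case-sensitively, or SqlBulkCopy's mapping will fail as noted above.
static DataTable BuildUserTable(IEnumerable<(string Email, string FirstName)> users)
{
    var table = new DataTable();
    table.Columns.Add("Email", typeof(string));
    table.Columns.Add("FirstName", typeof(string));
    foreach (var (email, firstName) in users)
        table.Rows.Add(email, firstName);
    return table;
}

static async Task BulkInsertAsync(string connectionString, DataTable table)
{
    await using var connection = new SqlConnection(connectionString);
    await connection.OpenAsync();

    using var bulk = new SqlBulkCopy(connection)
    {
        DestinationTableName = "dbo.Users",
        BatchSize = 10_000 // 0 (the default) sends all rows as a single batch
    };

    // Explicit, case-exact mappings from source columns to destination columns.
    bulk.ColumnMappings.Add("Email", "Email");
    bulk.ColumnMappings.Add("FirstName", "FirstName");

    await bulk.WriteToServerAsync(table);
}
```

The 10_000 batch size matches the value quoted above; the sweet spot depends on row width, so it's worth benchmarking a few sizes.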

    • @lolyasuo1235
      @lolyasuo1235 7 months ago

      What about primary keys? Can you import data into a table with an incremental PK, or do you have to specify the PK?

    • @MilanJovanovicTech
      @MilanJovanovicTech  7 months ago

      That's an excellent suggestion. Will try it with different batch sizes.

    • @pilotboba
      @pilotboba 7 months ago

      @@lolyasuo1235 SQL Server will generate identity values just like it does on a normal insert. There is an option though to "keep identity" so if you did have PK values you wanted retained you could use that option. Of course, if those values exist in the table then you will get duplicate insert errors.
      I believe it will update the next identity value on the table after it is done if you use that option.
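The "keep identity" option mentioned here is SqlBulkCopyOptions.KeepIdentity; a hedged sketch (connection string and table name are placeholders):

```csharp
using System;
using Microsoft.Data.SqlClient;

// By default, SqlBulkCopy lets SQL Server generate identity values, just like
// a normal insert. KeepIdentity retains the source values instead; rows whose
// PK already exists in the target will then fail with duplicate-key errors.
static SqlBulkCopy CreateKeepIdentityBulkCopy(string connectionString) =>
    new(connectionString, SqlBulkCopyOptions.KeepIdentity)
    {
        DestinationTableName = "dbo.Users" // placeholder table name
    };
```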

  • @FolkoFess
    @FolkoFess 7 months ago +2

    There is another way to speed up a bulk insert in some specific cases (for example, during a periodic ETL process where the entire table or partition is fully cleaned and recreated from scratch using a bulk insert). In these cases, the bulk insert can be slowed down by indexes, constraints, concurrent access, etc. The solution is to bulk insert into a temporary table that has no indexes or constraints, using a ReadUncommitted transaction -> then build all the needed indexes -> then do a partition swap operation with the main table/partition. Another advantage of this approach is that, up until the last step, the data in the original table stays fully available. And a partition swap is an almost instant, atomic operation.
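The staging flow described above can be sketched in T-SQL (held here as a C# string constant). All object names are placeholders; this is the non-partitioned, table-level variant of the swap, and it assumes dbo.Sales_Old already exists, empty, with a schema and indexes identical to dbo.Sales.

```csharp
// T-SQL for the pattern described above: bulk load into an index-free staging
// table, build indexes afterwards, then swap the tables in a near-instant,
// atomic metadata operation. The original table stays readable until the swap.
const string StagingSwapScript = """
    -- 1. The bulk insert targets dbo.Sales_Staging, a heap with no indexes,
    --    constraints, or concurrent readers.

    -- 2. Build the needed indexes only after the data is loaded
    --    (they must match the destination table's indexes for the switch).
    CREATE CLUSTERED INDEX IX_Sales_Id ON dbo.Sales_Staging (Id);

    -- 3. Swap: move the current data out, then switch the staging table in.
    BEGIN TRAN;
    ALTER TABLE dbo.Sales SWITCH TO dbo.Sales_Old;
    ALTER TABLE dbo.Sales_Staging SWITCH TO dbo.Sales;
    COMMIT;

    DROP TABLE dbo.Sales_Old;
    """;
```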

    • @MilanJovanovicTech
      @MilanJovanovicTech  7 months ago +1

      Ah, great addition to this topic 👌

  • @vasiliylu8054
    @vasiliylu8054 7 months ago

    Thank you, Milan! This video deserves to be at the top.

  • @pilotboba
    @pilotboba 7 months ago +1

    It also looks like there is a library called Dapper Plus that has a bulk insert feature as well. It's also a commercial, paid library.

  • @10Totti
    @10Totti 7 months ago +1

    Another great tutorial!
    Thanks!

  • @-INC0GNIT0-
    @-INC0GNIT0- 7 months ago

    Thanks for doing the research!
    Very insightful investigation.

  • @pilotboba
    @pilotboba 7 months ago +1

    A few things.
    You never adjusted the batch size for EF Core. It is possible to speed up inserts by increasing the batch size. I think by default it is 100.
    Also, bulk-copy has a way to set the batch size. By default I believe it is set to 0, which means all rows. But it's recommended to use it.
    Bulk-copy by default does a non-transacted insert. So, if there is an issue, there is no way to roll it back. There is an option to have it use a transaction, but I assume that will slow it down a bit.
    I'm curious whether, if you matched the bulk-copy and EF Core batch size settings and enabled internal transactions in bulk-copy, the speeds would be closer?
    I'm not sure, but did your code create the collection each time? Perhaps to remove that overhead you could create the user collection in the constructor?
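On the first point, EF Core's insert batch size is set through the provider options. A sketch under assumed names (the connection string is a placeholder, and the actual default batch size is provider-specific rather than a documented 100):

```csharp
using System;
using Microsoft.EntityFrameworkCore;

// EF Core batches INSERT/UPDATE/DELETE commands during SaveChanges.
// MaxBatchSize caps how many statements are grouped into one round trip;
// raising it can speed up large inserts at the cost of bigger commands.
var options = new DbContextOptionsBuilder()
    .UseSqlServer(
        "Server=localhost;Database=BulkDemo;Trusted_Connection=True;", // placeholder
        sql => sql.MaxBatchSize(1_000))
    .Options;
```

These options would then be passed to the DbContext used by the benchmark, so the EF Core and bulk-copy batch sizes can be matched as suggested above.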

    • @MilanJovanovicTech
      @MilanJovanovicTech  7 months ago

      That is great constructive criticism. I think I'll do a Part 2 of this video in a few weeks, with these remarks + some others I got. I wanted to include data creation for some reason, but I can also do a benchmark without it.

  • @giammin
    @giammin 7 months ago

    Really interesting! Thanks.
    I think you can send the array directly, without converting to anonymous objects, in Dapper. Anyway, it won't change the benchmark results much.

    • @MilanJovanovicTech
      @MilanJovanovicTech  7 months ago

      Had some trouble with the Id field so this was a workaround

  • @xtazyxxx3487
    @xtazyxxx3487 7 months ago +1

    Can you try concatenating the INSERT query, then running it as a raw SQL query with EF, and see the results?

  • @rsrodas
    @rsrodas 7 months ago

    Another alternative, if you already have files ready to import, is to use the OPENROWSET command inside SQL Server:

    INSERT INTO Table (Col1, Col2, ...)
    SELECT Col1, Col2, ...
    FROM OPENROWSET(
        BULK 'c:\myfile.txt', FORMATFILE = 'c:\format.xml'
    )

    In the XML format file, you define the rules for how the file you want to import is formatted (fixed size, comma-split, etc...)

    • @MilanJovanovicTech
      @MilanJovanovicTech  7 months ago

      Does it have to be an XML file? Can it work with CSV? What about JSON?

    • @rsrodas
      @rsrodas 7 months ago

      @@MilanJovanovicTech Another option for the format file is to use a non-XML...

    •  7 months ago

      Can you do this in an Azure SQL database?

  • @harshakumar6890
    @harshakumar6890 7 months ago

    Is it possible to check Dapper executing a stored procedure that accepts a UDT table as a parameter?

    • @MilanJovanovicTech
      @MilanJovanovicTech  7 months ago +1

      I'll see if I can update the article

  • @antonmartyniuk
    @antonmartyniuk 7 months ago +1

    Surprisingly, Dapper doesn't perform well. Still, I would like to see the results when using Dapper with a SQL bulk insert command.
    I personally have used the EFCore.Extensions library, which is a paid one, to do bulk inserts. My company bought a license for this library, and it saved many development days for things like bulk merge and bulk synchronize operations.
    It would be interesting to compare its performance to the SQL bulk copy class.

    • @MilanJovanovicTech
      @MilanJovanovicTech  7 months ago

      It's not surprising if you understand how that specific SQL statement works with Dapper.

  • @dy0mber847
    @dy0mber847 7 months ago

    Will the results be different when using Postgres? 🤔

    • @MilanJovanovicTech
      @MilanJovanovicTech  7 months ago

      Not in relative terms between the different options

  • @islandparadise
    @islandparadise 7 months ago

    Love this. One quick question: for the EF Core approaches, would the performance be consistent on Postgres as well as SQL Server?

    • @MilanJovanovicTech
      @MilanJovanovicTech  7 months ago +1

      Hmm, I'm pretty sure the performance wouldn't change dramatically. However, I didn't test that.

    • @islandparadise
      @islandparadise 7 months ago

      @@MilanJovanovicTech got it. Thanks mate you're a champ!

  • @musabytes
    @musabytes 7 months ago

    Have you tried OPENJSON or another JSON structure with a raw SQL query?

    • @MilanJovanovicTech
      @MilanJovanovicTech  7 months ago

      No, I did not, but you're welcome to give it a try

  • @belediye_baskani
    @belediye_baskani 7 months ago

    What do you think about Bulk Update? Can you run a benchmark for us?

    • @MilanJovanovicTech
      @MilanJovanovicTech  7 months ago

      It all comes down to an UPDATE operation

  • @lolyasuo1235
    @lolyasuo1235 7 months ago +2

    How can Dapper be 5 times slower than EF Core AddAll at 1M records? This doesn't make sense at all.

    • @MilanJovanovicTech
      @MilanJovanovicTech  7 months ago

      Because Dapper has to unroll the collection and run the SQL command once per element. :)
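To make that concrete: when a collection is passed as the parameter, Dapper executes the statement below once per element rather than building one multi-row INSERT (table, column, and method names here are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Dapper;
using Microsoft.Data.SqlClient;

// The statement Dapper will unroll: with an IEnumerable parameter, Execute
// issues this INSERT once for every element instead of batching them.
const string InsertSql =
    "INSERT INTO dbo.Users (Email, FirstName) VALUES (@Email, @FirstName)";

async Task<int> InsertUsersAsync(string connectionString, IEnumerable<object> users)
{
    await using var connection = new SqlConnection(connectionString);
    // One command execution per element of `users` — the per-row overhead
    // that makes this approach fall behind batched inserts at scale.
    return await connection.ExecuteAsync(InsertSql, users);
}
```

Called, for example, with anonymous objects: `InsertUsersAsync(cs, new[] { new { Email = "a@example.com", FirstName = "Ann" } })`.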

  • @way_no6810
    @way_no6810 7 months ago

    Can you test Dapper Plus?

  • @EzequielRegaldo
    @EzequielRegaldo 7 months ago

    So DataTable is EF Core without a paid lib?

    • @MilanJovanovicTech
      @MilanJovanovicTech  7 months ago +1

      DataTable is a .NET construct

    • @EzequielRegaldo
      @EzequielRegaldo 7 months ago

      @@MilanJovanovicTech amazing ! Thank you for your response :D

  • @ExtremeTeddy
    @ExtremeTeddy 7 months ago

    All the shown methods are slow compared to "load data from file". Whenever possible, for large data imports, use load data from file; it will load gigabytes of data within seconds. One of the best approaches in my experience is to create a temporary table for the data source and run the load-data-from-file command, then perform the inserts into the entity tables on the database server.
    The only issue/drawback can be the network connection when loading large datasets.

    • @MilanJovanovicTech
      @MilanJovanovicTech  7 months ago

      What if we don't have a file? Would it be worthwhile storing the file locally before calling that command?

    • @ExtremeTeddy
      @ExtremeTeddy 7 months ago

      @@MilanJovanovicTech Care to elaborate? Large data imports without a file or source material don't make sense to me. LOAD DATA FROM FILE requires a file.
      When a database is the source, I recommend using raw SQL rather than writing application logic.

  • @たろ羊
    @たろ羊 7 months ago

    😊

  • @Ivang017
    @Ivang017 7 months ago

    Hey Milan, any discount incoming for The Ultimate Modular Monolith Blueprint course? I bought your Clean Architecture course and I loved it. Just wondering if there's a sale or discount coming soon for the Modular Monolith course. Thanks