Must watch for everyone starting with JPA!
Not for beginners - getting the annotations right is itself a major win for any beginner.
A great talk, as always! 👏
It's interesting that you casually refer to the proper solution to the problems of the middle part but don't emphasize it: none of the problems induced by using full aggregate references in the BankTransfer object occur if you model them as identifier references instead (see the sketch below). A many-to-… relation simply has no place in an aggregate model. You can avoid the additional mapping customizations, the use of the JPA-specific repository API, and the query customizations on the repository methods. Most of the JPA query performance issues can - and should - be solved by using *less* JPA, not more. 😉
Big thumbs up for the recommendation to use projections and for the awareness of the pitfalls regarding transactions. 👍
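To make the identifier-reference idea concrete, here is a minimal sketch (class and field names are my own, not taken from the talk, and it assumes the jakarta.persistence namespace of recent Spring Boot versions): BankTransfer stores only the identifiers of the two accounts instead of @ManyToOne references to the Account aggregate, so there is nothing to fetch-tune in the first place.

```java
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;

import java.math.BigDecimal;

@Entity
class BankTransfer {

    @Id @GeneratedValue
    private Long id;

    // identifier references to the Account aggregate -- no join mapping,
    // no lazy proxies, no fetch customizations on the repository needed
    @Column(name = "source_account_id")
    private Long sourceAccountId;

    @Column(name = "target_account_id")
    private Long targetAccountId;

    private BigDecimal amount;
}
```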
The best way to optimize Spring Data JPA is to learn DDD and use JMolecules ;)
@odrotbohm After reading your comment about identity references and aggregates, I haven't been able to let it go. Do you have any blog posts or other information on the subject?
He also said this at min 32:05. In the real world, most teams tend to use the Hibernate mappings rather than religiously follow the DDD guideline of referencing another aggregate only by its identifier. And even in the DDD world, you can still use the Hibernate mapping if you require strong consistency between aggregates.
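For contrast, a hedged sketch of the mapping variant this comment refers to (entity and field names are illustrative, not from the talk): the other aggregate is referenced directly, kept lazy so the Account is not loaded every time a BankTransfer is read.

```java
import jakarta.persistence.Entity;
import jakarta.persistence.FetchType;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.ManyToOne;

@Entity
class BankTransfer {

    @Id @GeneratedValue
    private Long id;

    // direct object references across the aggregate boundary -- convenient and
    // strongly consistent, at the price of the mapping/fetching issues discussed above.
    // Account is assumed to be another JPA @Entity (the referenced aggregate root).
    @ManyToOne(fetch = FetchType.LAZY)
    private Account sourceAccount;

    @ManyToOne(fetch = FetchType.LAZY)
    private Account targetAccount;
}
```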
One should consider doing the whole transaction in the database, i.e. transferring between the from and to accounts inside the database. That makes it a single query to a custom database function, uses the connection just once, and also takes care of the transaction.
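A rough sketch of how that could look from Spring Data, purely as an illustration: it assumes a database function transfer_funds(from, to, amount) that you would write yourself (e.g. in PL/pgSQL) and that returns an integer status; the function and parameter names are hypothetical, and BankTransfer is the entity from the talk.

```java
import java.math.BigDecimal;

import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.Repository;
import org.springframework.data.repository.query.Param;

interface TransferOperations extends Repository<BankTransfer, Long> {

    // One round trip: the (hypothetical) transfer_funds function debits and
    // credits both accounts inside the database. In e.g. PostgreSQL a single
    // statement like this is executed atomically on its own.
    @Query(value = "select transfer_funds(:fromId, :toId, :amount)", nativeQuery = true)
    Integer transferFunds(@Param("fromId") Long fromId,
                          @Param("toId") Long toId,
                          @Param("amount") BigDecimal amount);
}
```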
Wow...! Thanks, I learnt something new today.
Thanks very much.
Addressed some of the problems I faced recently
Really helpful talk, a lot of great tips
So helpful.
Thanks for your hard work!
Nice. Thanks for sharing!!
What a great explanation
23:10, could you start a new thread?
I know this is a very practical and nice solution, but just keep in mind that we also have to keep the application running for at least five years after development.
If the system becomes bigger and bigger, many queries go unchecked for performance because the production DB is fast enough to handle them. DB capacity is generally cheaper than refactoring or reviewing.
So please do not overuse projection classes unless the business logic (use case) requires high performance, like a batch job (see the projection sketch below for what such a class looks like). The cost of managing the extra types and design keeps increasing, and it sometimes ends in the application being replaced.
But the N+1 problem easily becomes a big issue in production, so the techniques are brilliant and it was a nice guide. Thank you.
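For readers who have not used one, a minimal interface-based projection as recommended in the talk might look like this; the entity and property names are illustrative and match the BankTransfer sketch above, not the talk's actual code.

```java
import java.math.BigDecimal;
import java.util.List;

import org.springframework.data.repository.Repository;

// Only the declared getters are selected, instead of the whole entity.
interface BankTransferSummary {
    Long getId();
    BigDecimal getAmount();
}

interface BankTransferRepository extends Repository<BankTransfer, Long> {

    // Spring Data restricts the generated query to the projected columns.
    List<BankTransferSummary> findByAmountGreaterThan(BigDecimal amount);
}
```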
This is complete nonsense. You should always query the database as efficiently as possible, because accessing the DB is the bottleneck in almost every application. It takes less effort to query the database efficiently from the beginning than to research performance issues later.
Thanks, quality stuff. One concern, or rather a confusion: whenever we use REQUIRES_NEW, it creates a separate new transaction, not a nested transaction. Although one starts inside the other, they are not nested, to be precise, right?
They are nested in the sense that the new transaction B will keep transaction A open until transaction B has committed. Transaction A's lifecycle is tied to transaction B's.
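To illustrate the distinction, a small sketch (the service names are made up): with REQUIRES_NEW the outer transaction A is suspended, not committed, while the inner transaction B runs, and each commits or rolls back on its own.

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
class OuterService {

    private final InnerService inner;

    OuterService(InnerService inner) {
        this.inner = inner;
    }

    @Transactional // transaction A
    public void outer() {
        // ... work in transaction A ...
        inner.inner(); // A is suspended (kept open, not committed) while B runs
        // ... A resumes here and commits or rolls back independently of B ...
    }
}

@Service
class InnerService {

    @Transactional(propagation = Propagation.REQUIRES_NEW) // transaction B
    public void inner() {
        // B commits or rolls back on its own, regardless of what A does later
    }
}
```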