Spring Data JPA Implementing Bulk Updates
- Published: 19 Oct 2024
- Join the Persistence Hub and become a Java Persistence Expert: thorben-jansse...
When using Spring Data JPA, most developers are used to letting Spring handle almost all database operations. That’s especially the case for all update operations. Thanks to JPA’s entity mappings and the managed lifecycle of all entity objects, you only need to change an attribute of an entity object. Everything else happens automatically.
But having a good, automated solution for the most common use cases doesn’t mean it’s the ideal solution for all use cases. JPA’s and Spring Data JPA’s handling of update operations is a good example of that. The default handling is great if you only need to update a few entities. Your persistence provider automatically detects changes to all managed entity objects. For each changed object, it then executes an SQL UPDATE statement. Unfortunately, this is a very inefficient approach if you need to update a huge number of entities. It often causes the execution of several dozen or even hundreds of SQL UPDATE statements.
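The default behavior described above can be sketched as follows. This is an illustrative example, not code from the video; the `Book` entity, `BookRepository`, and `BookService` names are hypothetical.

```java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Entity
class Book {
    @Id @GeneratedValue
    private Long id;
    private boolean available;

    public boolean isAvailable() { return available; }
    public void setAvailable(boolean available) { this.available = available; }
}

interface BookRepository extends JpaRepository<Book, Long> {
    List<Book> findByAvailable(boolean available);
}

@Service
class BookService {
    private final BookRepository repository;

    BookService(BookRepository repository) {
        this.repository = repository;
    }

    // Marks every unavailable book as available. At flush time, the
    // persistence provider's dirty checking detects each changed managed
    // entity and executes one SQL UPDATE statement per entity. Calling
    // saveAll(books) instead would not change that: you still get one
    // UPDATE per changed row.
    @Transactional
    public void markAllAvailable() {
        List<Book> books = repository.findByAvailable(false);
        books.forEach(b -> b.setAvailable(true));
        // No explicit save needed; managed entities are flushed automatically.
    }
}
```

With 1,000 unavailable books, this transaction fires 1,000 separate UPDATE statements, which is the inefficiency the video is about.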
This is a general problem when using JPA. Still, Spring Data JPA users in particular are surprised when I tell them about this and show them that even calling the saveAll method on their repository doesn't avoid these statements.
But there are two easy-to-use options to reduce, or entirely avoid, the performance impact.
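As I understand them, the two options are: a single JPQL bulk update via Spring Data JPA's `@Modifying` annotation, or activating Hibernate's JDBC statement batching. A sketch, with hypothetical entity and method names:

```java
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;

// Option 1: one JPQL UPDATE statement for all affected rows,
// instead of one UPDATE per entity.
interface BookRepository extends JpaRepository<Book, Long> {

    // clearAutomatically = true clears the persistence context after the
    // update, because it may still hold entities with stale attribute values.
    // The method must be called within an active transaction.
    @Modifying(clearAutomatically = true)
    @Query("UPDATE Book b SET b.available = true WHERE b.available = false")
    int markAllAvailable(); // returns the number of updated rows
}
```

```
# Option 2: keep the one-UPDATE-per-entity model, but let Hibernate group
# the statements into JDBC batches (application.properties)
spring.jpa.properties.hibernate.jdbc.batch_size=50
spring.jpa.properties.hibernate.order_updates=true
```

One trade-off worth noting: a JPQL bulk update bypasses the persistence context, so entity lifecycle callbacks are not invoked and version attributes used for optimistic locking are not incremented unless the query does so explicitly.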
Like my channel? Subscribe!
➜ bit.ly/2cUsid8
Read the accompanying post: thorben-jansse...
Want to connect with me?
Blog: thorben-jansse...
Twitter: / thjanssen123
Facebook: / thorbenjanssenofficial
Linkedin: / thorbenjanssen
Your courses are just so structured, I am enjoying them.
very good, thank you, hugs from Brazil
Great video! One question though: you mentioned it's better to create our own update query than to activate JDBC batching. Why is that? Thank you and keep up the good work!
Perfect mate!
Why can't I just like it twice?? Thanks a lot
So can I use this batching when saveAll() is called to insert a large number of entities and it leads to a broken pipe error while inserting into a Postgres DB?
How can I reduce the execution time of a JPA find query for entities with joins?
Wonderful, this is exactly the lecture I needed for my project. But... why don't you talk about Mongo?
I guess his focus is JPA/Hibernate.
Good night. I have an Oracle database with a table containing 4 CLOB-type fields. The CRUD operations (read, modify, and delete) work correctly, but the create shows me the following error when I use Postman. What is the correct syntax in Spring to insert CLOB-type data without getting the following error:
2023-06-13 19:21:15.353 ERROR 6556 --- [nio-8084-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.dao.InvalidDataAccessResourceUsageException: could not extract ResultSet; SQL [n/a]; nested exception is org.hibernate.exception.SQLGrammarException: could not extract ResultSet] with root cause
oracle.jdbc.OracleDatabaseException: ORA-00932: inconsistent datatypes: expected - got CLOB