Reading big data from the DB
| 1. LIMIT/OFFSET pagination | 2. Select all data and iterate with rows.Next() | 3. Filter by an incrementing id (keyset pagination) |
|---|---|---|
| 🌟 The service does not load all the data into memory; it processes rows incrementally | 🌟 The service does not load all the data into memory; it processes rows incrementally | 🌟 The service does not load all the data into memory; it processes rows incrementally |
| ❌ Postgres uses more CPU and memory: every page re-executes the query and scans past all the skipped rows to reach the offset | ✅ Postgres plans and executes the query only once, with no per-batch overhead | ⚠️ Depends on the query and indexes: Postgres may use more CPU by re-executing the query for each batch compared to the second option, but a selective indexed filter can also make each batch cheap, so this option can come out ahead |
| ❌ The service may skip rows or process the same row twice, because new records can be inserted in the middle of the range while the data is being paged | ✅ A single query reads one consistent snapshot, so rows are neither skipped nor read twice (records inserted meanwhile are simply not seen) | ⚠️ Depends on the filter: rows may still be skipped or processed twice, e.g. if new records can appear in the middle of the already-scanned range while the data is being paged |
| ❌ Takes much more time than the other options | ✅ Usually the fastest: a single pass over the data with no per-batch round trips | ⚠️ Depends on the query: it may take more time than the second option |
See the example sketches below.
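
A minimal sketch of the first approach, LIMIT/OFFSET batching, using pgx v5. The connection string and the `users` table with its `id`/`name` columns are assumptions for illustration:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v5/pgxpool"
)

const batchSize = 1000

func main() {
	ctx := context.Background()

	// Placeholder connection string.
	pool, err := pgxpool.New(ctx, "postgres://user:pass@localhost:5432/db")
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	for offset := 0; ; offset += batchSize {
		rows, err := pool.Query(ctx,
			"SELECT id, name FROM users ORDER BY id LIMIT $1 OFFSET $2",
			batchSize, offset)
		if err != nil {
			log.Fatal(err)
		}

		n := 0
		for rows.Next() {
			var id int64
			var name string
			if err := rows.Scan(&id, &name); err != nil {
				log.Fatal(err)
			}
			fmt.Println(id, name) // process one row at a time
			n++
		}
		rows.Close()
		if err := rows.Err(); err != nil {
			log.Fatal(err)
		}
		if n < batchSize {
			break // short page: no more rows
		}
	}
}
```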
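A sketch of the second approach: one query whose result is iterated with rows.Next(). pgx streams rows from the connection as they are read, so the client never holds the full result set in memory. This reuses the pool, imports, and assumed schema from the previous sketch:

```go
// streamAll runs a single query and iterates the result with rows.Next().
// pgx delivers rows as they arrive on the connection, so memory use stays
// bounded even for very large tables.
func streamAll(ctx context.Context, pool *pgxpool.Pool) error {
	rows, err := pool.Query(ctx, "SELECT id, name FROM users ORDER BY id")
	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var id int64
		var name string
		if err := rows.Scan(&id, &name); err != nil {
			return err
		}
		fmt.Println(id, name) // process one row at a time
	}
	return rows.Err()
}
```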
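A sketch of the third approach, keyset pagination: instead of OFFSET, each batch filters on the last id seen, so Postgres can start each batch directly from the primary-key index rather than scanning and discarding skipped rows. Same assumed schema as above:

```go
// keysetBatches pages through the table by filtering on the last id seen.
func keysetBatches(ctx context.Context, pool *pgxpool.Pool) error {
	const batchSize = 1000
	var lastID int64

	for {
		rows, err := pool.Query(ctx,
			"SELECT id, name FROM users WHERE id > $1 ORDER BY id LIMIT $2",
			lastID, batchSize)
		if err != nil {
			return err
		}

		n := 0
		for rows.Next() {
			var id int64
			var name string
			if err := rows.Scan(&id, &name); err != nil {
				rows.Close()
				return err
			}
			fmt.Println(id, name) // process one row at a time
			lastID = id           // advance the cursor
			n++
		}
		rows.Close()
		if err := rows.Err(); err != nil {
			return err
		}
		if n < batchSize {
			return nil // short page: done
		}
	}
}
```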