How do I handle large result sets with psql?

I have a query that returns about 14M rows (I was not aware of this beforehand). When I ran the query with psql, my Fedora machine froze; even after the query completed, I could no longer use Fedora and had to restart the machine. Fedora also froze when I redirected standard output to a file.

So how do I handle large results with psql?



1 answer


psql accumulates the complete result set in client memory by default. This is the usual behavior for all libpq-based Postgres applications and drivers. The solution is a cursor: then you fetch only N rows at a time from the server. psql can use cursors too. Set the variable FETCH_COUNT and psql will retrieve the result through a cursor in batches of FETCH_COUNT rows.



postgres=# \set FETCH_COUNT 1000
postgres=# select * from generate_series(1,100000); -- big query
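
If you would rather not change FETCH_COUNT, you can also page through a big result with an explicit server-side cursor. A minimal sketch (the cursor name big_cur and the batch size of 1000 are just illustrative):

postgres=# begin;
postgres=# declare big_cur cursor for select * from generate_series(1,100000);
postgres=# fetch 1000 from big_cur; -- repeat until it returns no rows
postgres=# close big_cur;
postgres=# commit;

And if the goal is just to dump the rows into a file, \copy (select ...) to 'file.csv' streams the result to disk without accumulating it in client memory; setting FETCH_COUNT before redirecting output with \o should likewise avoid the freeze described in the question.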






