From: Thomas Heller <info(at)zilence(dot)net>
To: Alistair Bayley <alistair(at)abayley(dot)org>
Cc: pgsql-interfaces(at)postgresql(dot)org
Subject: Re: Protocol Question
Date: 2014-08-14 08:00:27
Message-ID: CAGTxmOug+0mheOYfxiZX65Vno3E641EBwubRCmhezBD7LKnQbg@mail.gmail.com
Lists: pgsql-interfaces
Yeah, I went with that and let the server send me everything. If more
control is required, a cursor can be used.
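For illustration, the cursor variant would look roughly like this (a rough
sketch via JDBC; the table and cursor names are made up, and the same
DECLARE/FETCH/CLOSE statements work as simple queries over a hand-rolled
protocol client too):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CursorRead {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection("jdbc:postgresql:test");
            conn.setAutoCommit(false); // cursors only live inside a transaction
            try (Statement st = conn.createStatement()) {
                st.execute("DECLARE big_read CURSOR FOR SELECT id FROM big_table");
                boolean more = true;
                while (more) {
                    more = false;
                    try (ResultSet rs = st.executeQuery("FETCH 1000 FROM big_read")) {
                        while (rs.next()) {
                            more = true;
                            // each FETCH pulls at most 1000 rows off the server
                        }
                    }
                }
                st.execute("CLOSE big_read");
            }
            conn.commit();
        }
    }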
Thanks for the feedback.
On to implementing the types. I assume the only place the binary encoding is
documented is the source? I can't find anything anywhere else.
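
What I've pieced together so far from the *send/*recv functions in
src/backend/utils/adt/ (a sketch; it assumes text values arrive as UTF-8,
i.e. client_encoding = UTF8):

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    // Decoding a few binary-format column values from a DataRow message.
    // "col" is the raw value bytes of one column, after the Int32 length word.
    public class BinaryDecode {
        static int decodeInt4(byte[] col) {      // int4: 4 bytes, network byte order
            return ByteBuffer.wrap(col).getInt();
        }
        static long decodeInt8(byte[] col) {     // int8: 8 bytes, network byte order
            return ByteBuffer.wrap(col).getLong();
        }
        static double decodeFloat8(byte[] col) { // float8: big-endian IEEE 754
            return ByteBuffer.wrap(col).getDouble();
        }
        static String decodeText(byte[] col) {   // text: raw bytes in client_encoding
            return new String(col, StandardCharsets.UTF_8);
        }
    }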
Regards,
/thomas
On Thu, Aug 14, 2014 at 6:44 AM, Alistair Bayley <alistair(at)abayley(dot)org>
wrote:
> Why wouldn't TCP be sufficient to throttle the data rate from the server?
> If the client stops/slows consuming bytes from its socket, TCP will make
> the server slow its sending.
>
>
> On 13 August 2014 12:09, Thomas Heller <info(at)zilence(dot)net> wrote:
>
>> I tried sending P/D/S...B/E/S...E/S... but the second Execute fails
>> because the portal no longer exists.
>>
>> So I need a BEGIN/COMMIT for a SELECT in order to read in multiple steps?
>>
>> I can process the data on-the-fly, my motivation for using Execute with
>> a limit was to introduce some kind of control on how fast things travel
>> over the wire. I'm not sure how the backend handles large results sets and
>> clients that can't keep up. Say I have a SELECT that returns 100k rows,
>> after 60k rows the clients into some sort of resource limit. Since the
>> backend just dumps everything on the wire I now have to continue reading
>> and discarding the remaining rows? I doubt I can Close the statement early
>> since the buffers on the backend should be full with more rows to send to
>> the frontend. I don't expect into these kinds of situations very often,
>> just want the client to "do the right thing". If I run into some sort of
>> hard limit I can just terminate the connection since I probably have other
>> things to worry about than reusing a connection.
>>
>>
>> Anyways, I might be overthinking things. The "standard" flow of
>> P/D/S.../B/E/S is probably the way to go then.
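
For reference, a minimal sketch of framing those messages. Every message
after startup is a tag byte, a self-inclusive Int32 length, then the body;
Bind and Execute follow the same pattern:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    public class Frame {
        // tag byte + Int32 length (includes itself, not the tag) + body
        static byte[] msg(char tag, byte[] body) {
            ByteBuffer b = ByteBuffer.allocate(1 + 4 + body.length);
            b.put((byte) tag).putInt(4 + body.length).put(body);
            return b.array();
        }

        static byte[] cstr(String s) { // null-terminated UTF-8 string
            byte[] raw = s.getBytes(StandardCharsets.UTF_8);
            byte[] out = new byte[raw.length + 1];
            System.arraycopy(raw, 0, out, 0, raw.length);
            return out;
        }

        // Parse: statement name, query text, Int16 count of parameter OIDs
        static byte[] parse(String stmt, String sql) throws IOException {
            ByteArrayOutputStream b = new ByteArrayOutputStream();
            b.write(cstr(stmt));
            b.write(cstr(sql));
            b.write(new byte[] {0, 0}); // no parameter type OIDs given
            return msg('P', b.toByteArray());
        }

        // Describe: 'S' for a prepared statement, 'P' for a portal
        static byte[] describeStatement(String stmt) throws IOException {
            ByteArrayOutputStream b = new ByteArrayOutputStream();
            b.write('S');
            b.write(cstr(stmt));
            return msg('D', b.toByteArray());
        }

        static byte[] sync() {
            return msg('S', new byte[0]); // empty body
        }
    }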
>>
>> Regards,
>> /thomas
>>
>> PS: The whole thing is available at
>> https://github.com/thheller/shadow-pgsql, but be warned, it's very alpha.
>>
>> On Wed, Aug 13, 2014 at 12:08 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>
>>> Thomas Heller <info(at)zilence(dot)net> writes:
>>> > In an Extended Query Lifecycle, in order to prepare a query I send the
>>> > Commands
>>>
>>> > Parse('P') / Describe('D') / Sync('S')
>>>
>>> > read 1/t/T/Z then to execute
>>>
>>> > Bind('B') / Execute('E') / Flush('H')
>>>
>>> This is not a good idea. You *need* to use Sync to terminate a command
>>> sequence in order to be sure of proper error recovery (because if there's
>>> an error during the Execute, the backend will discard subsequent messages
>>> until it sees Sync).
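
In client terms that means: after each batch ending in Sync, read until
ReadyForQuery, and treat ErrorResponse as "keep reading", since the backend
skips ahead to the Sync anyway. A minimal sketch (not shadow-pgsql's actual
API):

    import java.io.DataInputStream;
    import java.io.IOException;

    public class Drain {
        // Read server messages until ReadyForQuery('Z'); returns 'E' if an
        // ErrorResponse was seen along the way, or 0 if the batch succeeded.
        static int drainUntilReady(DataInputStream in) throws IOException {
            int error = 0;
            while (true) {
                int tag = in.readUnsignedByte();
                int len = in.readInt();          // length includes itself
                byte[] body = new byte[len - 4];
                in.readFully(body);
                if (tag == 'Z') return error;    // ReadyForQuery: safe to go on
                if (tag == 'E') error = tag;     // ErrorResponse: note and continue
                // otherwise: DataRow, CommandComplete, PortalSuspended, etc.
            }
        }
    }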
>>>
>>> > If I skip the Flush after Execute I receive no data; if I Execute and
>>> > Sync I receive the limit of rows and a ReadyForQuery('Z').
>>>
>>> That's probably because you're not wrapping this in a transaction so the
>>> Sync implicitly does a commit, discarding the open portal. If you want
>>> to read from a portal in multiple steps then you should issue a BEGIN
>>> first and a COMMIT (or ROLLBACK) after you're done. However, have you
>>> considered just processing the data on-the-fly instead of using a limit?
>>>
>>> regards, tom lane
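
Concretely, the multi-step read described above looks like this on the wire
(a sketch; the portal name "p1" and the row limit of 1000 are made up, and
each client batch ends by reading server messages up to ReadyForQuery):

    Q "BEGIN"                -> CommandComplete, ReadyForQuery('T')
    P ""  "SELECT ..."          parse into the unnamed statement
    B "p1" ""                   bind portal p1 from the unnamed statement
    E "p1" maxRows=1000      -> DataRow x 1000, PortalSuspended('s')
    S (Sync)                 -> ReadyForQuery('T'); p1 survives because the
                                transaction block is still open
    E "p1" maxRows=1000      -> the next chunk of DataRows
    S                        -> ReadyForQuery('T')
    C 'P' "p1"                  close the portal early if desired
    S                        -> CloseComplete, ReadyForQuery('T')
    Q "COMMIT"               -> CommandComplete, ReadyForQuery('I')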
>>>
>>
>>
>