From: Joerg Sonnenberger <joerg(at)bec(dot)de>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Joerg Sonnenberger <joerg(at)bec(dot)de>, John Naylor <jcnaylor(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: reducing the footprint of ScanKeyword (was Re: Large writable variables)
Date: 2019-01-04 23:38:45
Message-ID: 20190104233845.GA24282@britannica.bec.de
Lists: pgsql-hackers
On Fri, Jan 04, 2019 at 02:36:15PM -0800, Andres Freund wrote:
> Hi,
>
> On 2019-01-04 16:43:39 -0500, Tom Lane wrote:
> > Joerg Sonnenberger <joerg(at)bec(dot)de> writes:
> > >> * What's the generator written in? (if the answer's not "Perl", wedging
> > >> it into our build processes might be painful)
> >
> > > Plain C, nothing really fancy in it.
> >
> > That's actually a bigger problem than you might think, because it
> > doesn't fit in very nicely in a cross-compiling build: we might not
> > have any C compiler at hand that generates programs that can execute
> > on the build machine. That's why we prefer Perl for tools that need
> > to execute during the build. However, if the code is pretty small
> > and fast, maybe translating it to Perl is feasible. Or perhaps
> > we could add sufficient autoconfiscation infrastructure to identify
> > a native C compiler. It's not very likely that there isn't one,
> > but it is possible that nothing we learned about the configured
> > target compiler would apply to it :-(
There is a pre-made autoconf macro that provides the basic glue for
CC_FOR_BUILD; it has already been used by various projects, including
libXt and friends.
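For concreteness, the wiring would look roughly like this. This is only a
sketch: I'm assuming the autoconf-archive flavour of the macro
(AX_PROG_CC_FOR_BUILD), and the generator name "kwhashgen" is made up for
illustration.

    # configure.ac (sketch)
    AC_PROG_CC                 # compiler for the host/target system
    AX_PROG_CC_FOR_BUILD       # compiler producing binaries that run on the
                               # build machine; substitutes CC_FOR_BUILD,
                               # CFLAGS_FOR_BUILD, BUILD_EXEEXT, ...

    # Makefile.in (sketch): build the generator with the build-machine compiler
    kwhashgen$(BUILD_EXEEXT): kwhashgen.c
            $(CC_FOR_BUILD) $(CFLAGS_FOR_BUILD) -o $@ $<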
> I think it might be ok if we included the output of the generator in the
> buildtree? Not being able to add keywords while cross-compiling sounds like
> an acceptable restriction to me. I assume we'd likely grow further users
> of such a generator over time, and some of the input lists might be big
> enough that we'd not want to force it to be recomputed on every machine.
This is quite reasonable as well. I wouldn't worry about the size of the
input list at all: processing the Webster dictionary (235k entries) takes
less than 0.4s on my laptop.
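If the generated file is committed to the tree, the rule only ever fires
when the keyword list actually changes, so a cross-compile never has to run
the generator at all. Roughly (file names here are hypothetical):

    # Makefile (sketch): kwlist_hash.h is checked into the tree, so this
    # rule is only exercised when kwlist.dat changes on a machine that can
    # build and run the generator.
    kwlist_hash.h: kwlist.dat kwhashgen$(BUILD_EXEEXT)
            ./kwhashgen$(BUILD_EXEEXT) kwlist.dat > $@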
Joerg