
Re: Followup on SEARCH URL's / processing GET forms



At 12:16 PM 1996-06-20, Glenn Iba wrote:
>John,
>
>  Thanks for the response, but I'm still confused...
>
>
>>At 5:36 PM 1996-06-14, Glenn Iba wrote:

>>>
>>>1. PROBLEM ONE -- GET
>>>
>>>   I have this kind of working using a :search URL, but am having
>>>   trouble accessing the "search-keys".  I suspect there may be a
>>>   bug in the search-keys parser -- it seems to be splitting at
>>>   #\+ characters which are encodings for space, rather than the
>>>   #\& characters which separate the name=value "fields".  Please
>>>   somebody correct me if I'm misunderstanding the syntax, as I'm
>>>   still a rank neophyte at HTTP protocol.
>>
>>Search URLs only take + as field delimiters.
>
>Am I wrong to map general processing of GET forms onto SEARCH URL's ??

Technically no, but practically yes.

Forms should be handled with POST.  The HTTP spec describes the GET form
method as legacy.  Additionally, your returned data, including the URL, is
limited to 1024 characters by the URL spec.
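To make the parsing mismatch above concrete, here is a small Python sketch (hypothetical, not CL-HTTP code) showing why splitting a GET form submission at #\+ mangles it, while splitting at #\& and #\= does not:

```python
# A GET form submission, form-urlencoded: '&' separates name=value
# fields, '+' encodes a space inside a value.
query = "color=dark+blue&size=10"

# Search-URL style parsing treats '+' as the field delimiter,
# so the form data comes back mangled:
print(query.split("+"))   # ['color=dark', 'blue&size=10']

# Form style parsing splits on '&' first, then on the first '=':
pairs = [field.split("=", 1) for field in query.split("&")]
print(pairs)              # [['color', 'dark+blue'], ['size', '10']]
```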

Consequently, I don't think anyone has tested whether the GET approach
works in recent memory.

>
>The above HTTP-SEARCH object IS the URL that was passed to my response
>function.  All I did was store the URL in a global variable so I could
>look at it.  And (http::search-keys url) in the response function just
>returns the same result as what's stored in the URL:SEARCH-KEYS slot.

You are accessing the parent search URL minus the search args.  The instance
of the search URL with the args is passed to your response function when a
search URL comes in.  You probably don't want to bypass these standard
mechanisms.
>
>If SEARCH-URLs are the intended way of handling GET FORMS, then
>I don't think the search-key mechanism is doing the right thing.

Correct, but search URLs were not written to handle this.  Perhaps they
should be.

>The results are not useful as parsed.
>To do what I want I'll have to parse the argument string myself,
>which shouldn't be too hard, and ignore search-keys.  I just thought
>(and I'll volunteer to do it if I'm on the right track) that maybe
>the search-key mechanism could be extended to handle multiple
>inputs in GET's, so as to be more generally useful.

This would be useful in case people want to use the GET approach.
The current search URL parsing has been hacked for some speed, and so
it would be desirable to have a very cheap test that decides to
pass control to a separate parsing function.
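One very cheap test would be to scan for a #\= character: form-urlencoded submissions always carry name=value pairs, while plain search queries normally do not.  A Python sketch of such a dispatcher (all names here are hypothetical, and a client could defeat the test with a literal '=' in a search term):

```python
def parse_form_query(raw):
    """Hypothetical form parser: return an alist-style list of
    (name, value) pairs, splitting on '&' then on the first '='."""
    return [tuple(field.split("=", 1)) for field in raw.split("&")]

def dispatch_query(raw):
    """Cheap discriminator between the two encodings: hand
    form-urlencoded input to a separate parser, otherwise keep
    the existing '+'-delimited search-key behavior."""
    if "=" in raw:
        return parse_form_query(raw)
    return raw.split("+")

print(dispatch_query("lisp+http"))    # ['lisp', 'http']
print(dispatch_query("q=lisp&n=10"))  # [('q', 'lisp'), ('n', '10')]
```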
>
>Let me know if this makes sense, or whether I have some fundamental
>conceptual misunderstanding.

To summarize, there are four issues with handling search URLs and GET-style
form returns with the same mechanism:

1. Dispatch overhead of supporting two parsing schemes on the same input:
search URL encoding and form URL encoding.  This is poor design in the HTTP
protocol, but we can deal.

2. Overloading the search-keys slot with two different data formats:

        * List of values

        * Alist of query keyword and value

I suppose it would be reasonable to just drop in an alist like the ones you
get via the POST method.  The documentation would need to warn users that
their application needs to differentiate these cases, in case a client
submits with the wrong syntax.
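For illustration, here is a Python sketch of decoding a form-urlencoded query into such an alist, including the '+'-for-space and %XX decoding a POST-style parser would do (the function name is hypothetical; parse-form-values itself is CL-HTTP code):

```python
import re

def form_query_alist(raw):
    """Decode a form-urlencoded query string into an alist of
    (name, value) pairs, approximating the POST-path result."""
    def decode(s):
        s = s.replace("+", " ")                      # '+' encodes space
        return re.sub(r"%([0-9A-Fa-f]{2})",          # %XX hex escapes
                      lambda m: chr(int(m.group(1), 16)), s)
    alist = []
    for field in raw.split("&"):
        name, _, value = field.partition("=")
        alist.append((decode(name), decode(value)))
    return alist

print(form_query_alist("name=Glenn+Iba&msg=hi%21"))
# [('name', 'Glenn Iba'), ('msg', 'hi!')]
```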

3. The same programming interface found when using the POST method should be
supplied here, so that user code need not concern itself with how the form
values arrived.  That means recycling some code from parse-form-values.

4. Consider whether a specialized class of search url and http-form should
be used to clean up some of the ambiguities.

Regards, John



