
[FEATURE] Enhance D/T selection in PQ


frinklabs


Interesting idea, especially for a PQ to find caches with the D/T combos I haven't found yet.

 

It would need to have a "quick select" option, maybe where a user could select either method of entry. It would allow a quick and easy selection of something like "all caches with D and T of 4 or less" while also allowing a selection of more obscure combinations.
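A minimal sketch of how such a selection might be represented, assuming the usual half-star D/T increments from 1 to 5 (a 9 x 9 grid of 81 combos); the names are purely illustrative, not anything Groundspeak has proposed:

```python
# Illustrative sketch only: a D/T selection as a set of (difficulty, terrain)
# pairs, assuming half-star steps from 1.0 to 5.0 (81 possible combos).
STEPS = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]

def quick_select(max_d=5.0, max_t=5.0):
    """Quick entry, e.g. quick_select(4.0, 4.0) = 'D and T of 4 or less'."""
    return {(d, t) for d in STEPS for t in STEPS if d <= max_d and t <= max_t}

# The grid entry method would just toggle individual cells, so more obscure
# combinations stay possible alongside the quick selection:
selection = quick_select(4.0, 4.0) | {(1.0, 5.0), (5.0, 1.0)}
```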


Sounds pretty good to me, but it would definitely need the quick select option. There could also be "select whole row" and "select whole column" buttons above or to the left of the numbers...

 

I think many people have been wanting a better system for PQ D/T selections... It would make it much easier to find caches with D/T ratings you haven't found yet.
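A hypothetical sketch of both ideas, the whole-row/whole-column buttons and selecting the combos you haven't found yet, as simple set operations over that grid (names invented for illustration):

```python
# Illustrative only: helpers over the same 9 x 9 half-star D/T grid.
STEPS = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]
ALL_COMBOS = {(d, t) for d in STEPS for t in STEPS}   # 81 possible pairs

def select_row(terrain):
    """'Whole row' button: every difficulty at one terrain rating."""
    return {(d, terrain) for d in STEPS}

def select_column(difficulty):
    """'Whole column' button: every terrain at one difficulty rating."""
    return {(difficulty, t) for t in STEPS}

def unfound_combos(found):
    """D/T pairs not yet represented in a list of finds."""
    return ALL_COMBOS - set(found)

# Finds so far as (D, T) pairs, e.g. exported from GSAK:
my_finds = [(1.5, 1.5), (2.0, 3.0), (5.0, 5.0)]
still_needed = unfound_combos(my_finds)   # 78 combos left to target in a PQ
```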

 

Condorito


Could be nice, but TPTB have already stated that the PQ generation page WILL NOT, *REPEAT*, WILL NOT be getting an upgrade or any modifications.

 

The API is now the method of choice when acquiring cache data.

 

(Perhaps not) obviously, that means you will need some 'additional software/an app' to actually get the data since you can't 'install the API' to your computer or GPSr.


The API is now the method of choice when acquiring cache data.

 

Does the API even work that way? It works very well for retrieving data on a list of caches you already have in a database (thinking GSAK here). You can even pull a new cache into the database if you know its GC code.

 

What I don't see yet is a way to pass search parameters to the API and have it come back with a list of caches.
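To make the distinction concrete, here is a purely hypothetical sketch; the endpoint and parameter names below are invented for illustration and are not the real Groundspeak API:

```python
# Purely hypothetical: this endpoint and these parameters do NOT exist in the
# real Groundspeak API; they only illustrate "passing search parameters"
# versus fetching caches you already know by GC code.
import requests

API_BASE = "https://api.example.com/geocaching"   # placeholder URL
HEADERS = {"Authorization": "Bearer your-token"}  # placeholder credential

# What works well today: refresh data for caches already in your database.
known = requests.get(f"{API_BASE}/caches",
                     params={"codes": "GC1ABCD,GC2EFGH"}, headers=HEADERS)

# What this post is asking about: a true server-side search that returns a
# list of caches matching criteria, e.g. unfound D4.5/T5 within 50 km.
search = requests.get(f"{API_BASE}/caches/search",
                      params={"lat": 47.6, "lon": -122.3, "radius_km": 50,
                              "difficulty": 4.5, "terrain": 5.0,
                              "found": "false"},
                      headers=HEADERS)
```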

 

 

(Perhaps not) obviously, that means you will need some 'additional software/an app' to actually get the data since you can't 'install the API' to your computer or GPSr.

 

I asked for something like that a year ago and was met with derision: Generic SQL access via the API


I asked for something like that a year ago and was met with derision: Generic SQL access via the API

 

In fairness, the API gives more ability to restrict how many caches someone can download. Direct SQL access would not only let someone ask for something really far-reaching (select * from cache) but also let them royally screw it up for everyone (select * from cache, cachelog, user): selecting over a million caches, millions of users and probably several hundred million cache logs with no join clause would produce several hundred million million million rows and cause everything to grind to a halt.
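To put rough numbers on it (the table sizes below are illustrative guesses, not actual figures), a join-less query like that is a Cartesian product:

```python
# Illustrative guesses at table sizes -- not real figures.
caches = 1_500_000        # "over a million caches"
users = 5_000_000         # "millions of users"
cache_logs = 300_000_000  # "several hundred million cache logs"

# With no join condition the result set is the full Cartesian product:
rows = caches * users * cache_logs
print(f"{rows:.1e} rows")   # roughly 2.2e+21, easily enough to grind things to a halt
```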

 

It would be a nice thing to have but I fear it would either be sufficiently open that everything would stop working sooner or later because someone forgot their join clauses, or sufficiently restricted that it would be of no practical use at all.


I asked for something like that a year ago and was met with derision: Generic SQL access via the API

 

In fairness, the API gives more ability to restrict how many caches someone can download. Direct SQL access would not only let someone ask for something really far-reaching (select * from cache) but also let them royally screw it up for everyone (select * from cache, cachelog, user): selecting over a million caches, millions of users and probably several hundred million cache logs with no join clause would produce several hundred million million million rows and cause everything to grind to a halt.

 

It would be a nice thing to have but I fear it would either be sufficiently open that everything would stop working sooner or later because someone forgot their join clauses, or sufficiently restricted that it would be of no practical use at all.

 

Thank you - a sensible reply whose objections I can address.

 

The SQL interface would be through the API, not instead of it. Thus all the associated API call quantity limitations would also apply to queries made through this interface.

 

The poorly-formed-database-query issue is one that I think might also be preventing them from building a useful advanced PQ interface.

 

If it were me, I'd construct a sandbox database server against which new queries could be tested and gauged for their ability to play well with others before being let loose in the park.
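A hedged sketch of what that vetting gate might look like, with SQLite standing in for the sandbox copy; everything here is an assumption about how it could work, not a description of anything Groundspeak actually runs:

```python
# Illustrative only: vet a user-supplied SELECT against a small sandbox copy
# before it is ever allowed near the production database.
import sqlite3
import time

def vet_query(sandbox_path, sql, max_rows=10_000, timeout_s=2.0):
    """Accept only a single SELECT that finishes quickly on the sandbox data
    and returns a sane number of rows."""
    if not sql.lstrip().lower().startswith("select") or ";" in sql.rstrip("; "):
        return False                      # crude single-statement check
    conn = sqlite3.connect(sandbox_path)
    start = time.monotonic()
    # Abort the query if it runs too long against the sandbox copy.
    conn.set_progress_handler(
        lambda: 1 if time.monotonic() - start > timeout_s else 0, 10_000)
    try:
        rows = conn.execute(sql).fetchmany(max_rows + 1)
        return len(rows) <= max_rows
    except sqlite3.OperationalError:      # interrupted or badly formed query
        return False
    finally:
        conn.close()
```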


In theory I suppose they could allow direct SQL, sanitized to prevent a malicious query along the lines of "select count(*) from cache; drop table cache". It could potentially even run against a pre-built query that performed all the joining of caches, users, cache logs etc., so there was no chance of a badly formed query wiping everything out (it would also mean that only selected columns were available, archived caches could be hidden, and the real coordinates of a multi/puzzle couldn't be extracted), with a "set rowcount" command executed before the query ran to restrict the maximum number of rows returned, in case someone did try to build an offline copy of the entire cache database.
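A minimal sketch of that idea, using SQLite as a stand-in and invented table, column and view names; the point is just that a pre-built, pre-joined view plus a hard row cap closes both the "forgot the join" and the "dump the whole database" holes:

```python
# Illustrative only: schema, view and limits are invented for this sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cache (gc_code TEXT, name TEXT, difficulty REAL,
                        terrain REAL, lat REAL, lon REAL,
                        final_lat REAL, final_lon REAL, archived INTEGER);
    -- Pre-built view: only selected columns, no archived caches, and never
    -- the real coordinates of a multi/puzzle final.
    CREATE VIEW cache_public AS
        SELECT gc_code, name, difficulty, terrain, lat, lon
        FROM cache WHERE archived = 0;
""")

MAX_ROWS = 500   # the "set rowcount" equivalent, enforced on the server

def run_user_query(where_clause):
    """Users supply only a WHERE clause; it runs against the safe view with a
    hard row cap, so there is no join to forget and nothing hidden to leak.
    (A real implementation would still need to validate the clause itself.)"""
    sql = f"SELECT * FROM cache_public WHERE {where_clause} LIMIT {MAX_ROWS}"
    return conn.execute(sql).fetchall()

# e.g. the request that started this thread:
rows = run_user_query("difficulty <= 4 AND terrain <= 4")
```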

 

The trouble is it would take time and resources to build it, resources that Groundspeak appear to either lack or be unwilling to commit to the task.

 

Personally I'd like to see them focus on the web site that anyone can use and spend less time with endless mobile apps catering to the idea that downloading an app on an iPhone makes someone an instant geocacher, but I guess the mobile apps and twitface integrations are what people want these days.

