Hi(gh)!
On 19.04.2013 00:04, Urs Holzer wrote:
> First of all, doing something like this is a huge amount of work never
> to be underestimated.
Yes, we're currently talking about some 18,500 image files...
> I find the idea intriguing and I'd like to share a
> few ideas of my own regarding this.
>
> Let me first extract some keywords from your post:
> * publicly accessible database
> * lots of metadata for every image/post
> - extractable from the images:
> size, ratio, color distribution
I'm working on a small console C program that will be able to do
this - at least once the image file has been converted to uncompressed
TGA!
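The header part of such a program could look roughly like this - a
minimal sketch, not Yadgar's actual code, assuming an uncompressed
true-color TGA as input (the field offsets follow the TGA file format
specification: width at bytes 12-13, height at bytes 14-15, both
little-endian):

```c
#include <stdint.h>

/* Basic facts about a TGA image, recoverable from its 18-byte header. */
typedef struct {
    uint16_t width;
    uint16_t height;
    uint8_t  bpp;       /* bits per pixel, header byte 16 */
    double   ratio;     /* aspect ratio, width / height */
} tga_info;

/* Parse an 18-byte TGA header.
 * Returns 0 on success, -1 if the image is not uncompressed true-color
 * (image type 2) or has a zero dimension. */
int tga_parse_header(const uint8_t hdr[18], tga_info *out)
{
    if (hdr[2] != 2)    /* image type 2 = uncompressed true-color */
        return -1;
    out->width  = (uint16_t)(hdr[12] | (hdr[13] << 8));
    out->height = (uint16_t)(hdr[14] | (hdr[15] << 8));
    out->bpp    = hdr[16];
    if (out->width == 0 || out->height == 0)
        return -1;
    out->ratio = (double)out->width / (double)out->height;
    return 0;
}
```

The pixel data follows the header (after an optional image ID field,
length in byte 0), so a color-distribution histogram can be built by
reading the BGR triples sequentially from there.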
> - extractable from scene files:
> used features, POV-Ray version
...or, if no scene files are available, perhaps from the POVer's
posting on p.b.i!
> - not easily extractable:
> topics, keywords for objects shown in the image
> * Connecting images to scene files (linking p.b.i posts to p.b.s-f
> posts)
>
> I would recommend the following strategy: Intertwine the metadata with
> the semantic web. This gives us a plethora of keywords. For example,
> look at DBPedia. It provides identifiers for everything described by an
> article on Wikipedia. Yes, POV-Ray too:
> http://dbpedia.org/resource/POV-Ray
I looked it up... it appears both vague and complicated to me! And
since, as you say, it is still experimental territory, I think a
relational keyword table (or better, several tables forming a complete
keyword hierarchy) would be far more reliable.
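Such a keyword hierarchy could be modelled, for instance, as a
self-referencing table plus a link table to the images - a rough
sketch only, with all table and column names hypothetical:

```sql
-- Each keyword may point at a parent keyword; NULL marks a top-level entry.
CREATE TABLE keyword (
    id        INTEGER PRIMARY KEY,
    name      VARCHAR(64) NOT NULL,
    parent_id INTEGER REFERENCES keyword(id)
);

-- Many-to-many link between images and keywords.
CREATE TABLE image_keyword (
    image_id   INTEGER NOT NULL,
    keyword_id INTEGER NOT NULL REFERENCES keyword(id),
    PRIMARY KEY (image_id, keyword_id)
);
```

Walking the hierarchy then amounts to following `parent_id` upwards,
e.g. "teapot" -> "tableware" -> "objects".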
> Clear drawback: The semantic web is kind of new territory while the
> usual relational database with a web interface is robust and proven.
I think so!
See you in Khyberspace!
Yadgar