povray.tools.general : VR Brainstorming

From: clipka
Subject: VR Brainstorming
Date: 13 Sep 2014 16:57:52
Message: <5414afd0$1@news.povray.org>
So, I've made up my mind for good to get me an Oculus Rift DK2 to play 
Elite:Dangerous with.

But this is officially not an end-user product but a "Development 
Kit", and, being a software developer by trade and co-developer of a 
piece of 3D rendering software, I obviously just /must/ also put it 
to some good use related to POV-Ray. Integration into POV-Ray itself 
doesn't seem to make much sense, but surely there must be a few 
helpful tools waiting to be invented?


(1) The simplest of such tools will certainly be a viewer for light 
probes and other 360-degree imagery, which I guess will make a 
formidable project for the first toying-around with the OR and its 
API. Obviously its primary use in the context of POV-Ray would be to 
browse your library of HDR light probes to choose one for your 
scene, or to view a 360-degree output image generated by POV-Ray.

Other use cases might be to decide how to orient a given light probe 
within your scene relative to the camera, and maybe also deciding on a 
preliminary camera viewport, so some features might later be added to 
facilitate that.

It might also be used to view a 360-degree preview render of a 
scene, and from that choose a camera viewport for a final render.
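
To illustrate the SDL side of it - a rough sketch (the probe file 
name is made up): map_type 1 wraps a lat-long image around a 
sky_sphere, rotate re-orients it, and a spherical camera produces 
the kind of 360-degree image such a viewer would consume.

// probe file name made up; map_type 1 = spherical (lat-long) mapping
sky_sphere {
   pigment {
      image_map { hdr "my_probe.hdr" map_type 1 interpolate 2 }
   }
   rotate y*30  // re-orient the probe relative to the scene
}

// render a full 360 x 180 degree view, e.g. as input for the viewer
camera {
   spherical
   angle 360 180
   location <0, 1, 0>
}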

(In the long run it would of course be fancy for the tool to also help 
you pick a camera /location/, but that would probably not be possible 
without converting the entire scene to some mesh or voxel representation 
first.)


(2) Of course we all would want the Oculus Rift to be integrated 
into a full-fledged modelling tool. You know, something like a 
modernized version of Moray with Oculus Rift support. Obviously this 
would be even more work than the above-mentioned tool to pick a 
camera location in a static scene, and I wouldn't want to tackle 
this alone; however, more specialized modelling tools might still be 
helpful while staying "within budget". One thing that came to my 
mind is what I'd call "VirtuaLathe": an immersive tool to model 
spline-based rotationally symmetric objects.
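
The output would map straight onto POV-Ray's lathe primitive. As a 
rough sketch of the kind of object such a tool might emit (profile 
points made up; a real tool would presumably emit cubic_spline for 
smooth profiles):

// 2D profile points are <radius, height>, swept around the y-axis;
// made-up profile, starting and ending at radius 0 to close the shape
lathe {
   linear_spline
   6,
   <0.00, 0.0>, <0.50, 0.0>, <0.55, 0.6>,
   <0.30, 1.0>, <0.45, 1.5>, <0.00, 1.8>
   pigment { rgb <0.8, 0.6, 0.4> }
}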

Now one thing that bugs me about this is that we (or I) don't have 
the proper input device for such a project yet. A "data glove" would 
be fancy, and might be used directly for a kind of "virtual potter's 
wheel", but there's no hot candidate for an affordable de facto 
standard piece of hardware in that area like the Oculus Rift is for 
VR goggles. The standard mouse & keyboard don't seem like 
particularly good input devices for this purpose, as I think we need 
more than two degrees of freedom at our fingertips.

Enter aforementioned Elite:Dangerous, a space flight simulator 
inspired by (and produced by the original author of) the famous 
classic Elite home computer game. It, too, asks for a special input 
device that allows for plenty of degrees of freedom (yaw, roll, 
pitch and forward thrust like in an airplane, but also vertical, 
horizontal and backward thrust); for E:D, the obvious solution to 
that problem is a modern joystick. So somehow the idea got lodged in 
my brain that a joystick or gamepad might also make a great input 
device for that "VirtuaLathe". I haven't thought out the details 
yet, but I guess it could work quite well.

This opens up another train of thought: traditionally, 3D modelling 
tools have a rather analytical approach to the UI, catering to 
people who do modelling as a job; however, I guess for the majority 
of POV-Ray users modelling is a leisure activity, so using the tool 
should be a fun thing to do. And what could possibly be more fun 
than a game? So I think I'll be tackling this "VirtuaLathe" project 
as a kind of immersive computer game. And yes, it will have 
joysticks and gamepads as its preferred choice of input device.

Unfortunately I have only a rather basic understanding of how a 
real-world lathe is used in practice, and have never gotten my hands on 
one myself, but I know there are various "handicrafters" among you 
people, who certainly have plenty of experience in this area, so I'd 
appreciate any input especially from you guys.


That's it for now; comments and related brainish storms, winds and wisps 
welcome.



From: Stephen
Subject: Re: VR Brainstorming
Date: 13 Sep 2014 17:38:12
Message: <5414b944$1@news.povray.org>
On 13/09/2014 21:57, clipka wrote:
> So, I've made up my mind for good to get me an Oculus Rift DK2 to play
> Elite:Dangerous with.
>
<Green eyed monster> Would a laptop run one?
I just checked. :-(


> (In the long run it would of course be fancy for the tool to also help
> you pick a camera /location/, but that would probably not be possible
> without converting the entire scene to some mesh or voxel representation
> first.)
>
>
Are you sure? Bishop3D can import a subset of SDL and display it in 
OpenGL. No, I'm not asking you to write a parser, as I've read the 
answer a thousand times. :-(
Just a thought.


> Unfortunately I have only a rather basic understanding of how a
> real-world lathe is used in practice, and have never gotten my hands on
> one myself, but I know there are various "handicrafters" among you
> people, who certainly have plenty of experience in this area, so I'd
> appreciate any input especially from you guys.
>
>
It is basically a SOR.

StephenS might be able to help if his workshop anaglyphs are anything to 
go by. He is very approachable. He helped me a lot during and after 
testing the aforementioned Bishop3D.



-- 

Regards
     Stephen



From: clipka
Subject: Re: VR Brainstorming
Date: 13 Sep 2014 18:51:54
Message: <5414ca8a$1@news.povray.org>
On 13.09.2014 23:38, Stephen wrote:
> On 13/09/2014 21:57, clipka wrote:
>> So, I've made up my mind for good to get me an Oculus Rift DK2 to play
>> Elite:Dangerous with.
>>
> <Green eyed monster> Would a laptop run one?
> I just checked. :-(
>
>
>> (In the long run it would of course be fancy for the tool to also help
>> you pick a camera /location/, but that would probably not be possible
>> without converting the entire scene to some mesh or voxel representation
>> first.)
>>
>>
> Are you sure? Bishop3D can import a subset of SDL and display it in
> OpenGL. No, I'm not asking you to write a parser, as I've read the
> answer a thousand times. :-(
> Just a thought.

Well, OpenGL does need a mesh representation of the scene, so yes, I'm 
sure about this statement ;-)

Whether the conversion process would be render-ish or parse-ish is an 
entirely different question.


As for the parser: We already have one - it's part of POV-Ray.

Ideally, we'd have a proper clear-cut C++ API for the representation 
of a scene (at present the internal representation is /still/ 
partially C-ish and poorly delineated), and the parser would be just 
one module using this API to generate such a scene. Add to that a 
feature in the API to get a mesh representation from any arbitrary 
shape, and we'd have the best core for a POV-Ray SDL import filter 
one could ever wish for.

As a matter of fact this is the direction the dev team intends to go, in 
order to make it easier to integrate components of POV-Ray into other 
pieces of software - be it as an input filter or a render engine. But it 
won't happen overnight, and I won't be the only one working towards this 
goal.


>> Unfortunately I have only a rather basic understanding of how a
>> real-world lathe is used in practice, and have never gotten my hands on
>> one myself, but I know there are various "handicrafters" among you
>> people, who certainly have plenty of experience in this area, so I'd
>> appreciate any input especially from you guys.
>>
> It is basically a SOR.

Thanks, my basic understanding of a real-world lathe does cover /that/ 
fact ;-)

I've even heard tell that it is typically used with sharp tools to 
remove parts of the material. :-)



From: Stephen
Subject: Re: VR Brainstorming
Date: 14 Sep 2014 08:42:19
Message: <54158d2b@news.povray.org>
On 13/09/2014 23:51, clipka wrote:
> On 13.09.2014 23:38, Stephen wrote:

>> Are you sure? Bishop3D can import a subset of SDL and display it in
>> OpenGL. No, I'm not asking you to write a parser, as I've read the
>> answer a thousand times. :-(
>> Just a thought.
>
> Well, OpenGL does need a mesh representation of the scene, so yes, I'm
> sure about this statement ;-)
>

I did not know that. You have boggled my mind.

> Whether the conversion process would be render-ish or parse-ish is an
> entirely different question.
>

I'm out of my depth here.

>
> As for the parser: We already have one - it's part of POV-Ray.
>

That I knew. :-)

> Ideally, we'd have a proper clear-cut C++ API for the representation
> of a scene (at present the internal representation is /still/
> partially C-ish and poorly delineated), and the parser would be just
> one module using this API to generate such a scene. Add to that a
> feature in the API to get a mesh representation from any arbitrary
> shape, and we'd have the best core for a POV-Ray SDL import filter
> one could ever wish for.
>

I like the sound of that.

> As a matter of fact this is the direction the dev team intends to go, in
> order to make it easier to integrate components of POV-Ray into other
> pieces of software - be it as an input filter or a render engine. But it
> won't happen overnight, and I won't be the only one working towards this
> goal.
>
Are we thinking about POV 4.0?
It all sounds good.

>
>>> Unfortunately I have only a rather basic understanding of how a
>>> real-world lathe is used in practice, and have never gotten my hands on
>>> one myself, but I know there are various "handicrafters" among you
>>> people, who certainly have plenty of experience in this area, so I'd
>>> appreciate any input especially from you guys.
>>>
>> It is basically a SOR.
>
> Thanks, my basic understanding of a real-world lathe does cover /that/
> fact ;-)
>

I knew I should have put a smiley there.

> I've even heard tell that it is typically used with sharp tools to
> remove parts of the material. :-)
>
And abrasive ones for Anti-Aliasing.

Think wood turning; it is simpler and more "hands on".
I've never used a lathe myself but I've stood beside people who were 
using them and watched.

In the virtual world you could have an object spinning in mid air to 
be turned (shaped). So to add verisimilitude the left-hand side 
(LHS) will have a spindle or a chuck that holds the wood and rotates 
it. The spindle is set into bearings called a headstock. This is 
driven either by a motor or a foot treadle. The RHS of the wood is 
kept in place using what is called a Tailstock.
(At this point I thought that if you could have haptic feedback a 
virtual potter's wheel might be simpler. Fewer parts.)
Or, if you are turning a bowl, no Tailstock.
You need a toolrest to support the cutting tool.
That is it in its simplest form.
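
In SDL terms the layout is roughly this crude mock-up (all 
dimensions made up):

union {
   box { <-2.4, -0.5, -0.3>, <-2.0, 0.5, 0.3> }      // headstock
   cylinder { <-2.0, 0, 0>, <-1.6, 0, 0>, 0.10 }     // spindle/chuck
   cylinder { <-1.6, 0, 0>, < 1.6, 0, 0>, 0.25 }     // workpiece, spins about x
   cone { < 2.0, 0, 0>, 0.08, < 1.6, 0, 0>, 0 }      // Tailstock centre
   box { <-1.0, -0.25, 0.35>, < 1.0, -0.15, 0.55> }  // toolrest
}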

See if these help
http://www.woodworking.co.uk/Technical/Beginners/beginners.html#Lathe

http://www.getwoodworking.com/news/article/turning-for-beginners-part-1/885



-- 

Regards
     Stephen



From: Bald Eagle
Subject: Re: VR Brainstorming
Date: 14 Sep 2014 09:40:01
Message: <web.54159a25a15068255e7df57c0@news.povray.org>
Usually wood lathes are used with a variety of fairly large tools 
(leverage), and you work your way from the rough-turning of an 
octagonally cut workpiece into a cylindrical one, then medium 
turning to get the contour/profile that you want, and then some 
finer tools that you actually use to sort of slice or peel to get a 
very fine finish.

Big industrial ones, like the ones used to make baseball bats or 
other high-throughput items, use a copying template and really go 
FAST. See YouTube for "How It's Made".

Metal lathes have the cutter mounted in a carriage that you move 
either by hand with screws/gears and a wheel, or by motor-driven 
means for things like cutting screw threads. Different speeds are 
used to account for things like heat, chatter, chipping, etc. that 
are peculiar to the turning and feeding speed coupled with the type 
and hardness of the metal you're cutting.
There are also specialized tools like "knurling wheels" that 
actually press a texture into the surface of the metal rather than 
cut the pattern into it.

I'd say YouTube is your friend, or if you want an actual lathe 
manual, I might be able to dig you up a PDF ...



From: clipka
Subject: Re: VR Brainstorming
Date: 14 Sep 2014 11:20:55
Message: <5415b257$1@news.povray.org>
On 14.09.2014 14:42, Stephen wrote:

>> Whether the conversion process would be render-ish or parse-ish is an
>> entirely different question.
>
> I'm out of my depth here.

A mesh representation of the scene would probably be easiest to achieve 
by parsing the scene, then having dedicated code convert each and every 
object separately into a mesh.

A voxel ("volume pixel", i.e. 3D array of boxes) representation of the 
scene could be generated in a similar way; it could, however, also be 
generated by having POV-Ray parse the scene, and then use existing code 
in the render engine to systematically ray-trace it, collecting not only 
colour information but also the intersection position information.
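
(The SDL-level equivalent of that sampling idea, as a rough sketch - 
test object, ray direction and resolution all made up:)

// sample a made-up test object with a regular grid of rays fired
// straight down; trace() returns the intersection point and fills
// Norm with the surface normal there (a zero vector means "no hit")
#declare Obj  = sphere { <0, 0, 0>, 1 }
#declare N    = 20;
#declare Norm = <0, 0, 0>;
#declare I    = 0;
#while (I < N)
   #declare J = 0;
   #while (J < N)
      #declare Start = <-1 + 2*I/(N-1), 2, -1 + 2*J/(N-1)>;
      #declare Hit   = trace(Obj, Start, <0, -1, 0>, Norm);
      #if (vlength(Norm) > 0)
         sphere { Hit, 0.02 pigment { rgb <1, 0, 0> } }  // mark the hit
      #end
      #declare J = J + 1;
   #end
   #declare I = I + 1;
#end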

>> As a matter of fact this is the direction the dev team intends to go, in
>> order to make it easier to integrate components of POV-Ray into other
>> pieces of software - be it as an input filter or a render engine. But it
>> won't happen overnight, and I won't be the only one working towards this
>> goal.
>>
> Are we thinking about POV 4.0?

Not exactly; more like POV-Ray 3.8 and 3.9, as POV-Ray 4.0 will most 
probably be the step that introduces a brand new parser with a brand new 
syntax. That'll obviously be easier to implement once we already have a 
clear-cut API for the render engine.

>> I've even heard tell that it is typically used with sharp tools to
>> remove parts of the material. :-)
>>
> And abrasive ones for Anti-Aliasing.

*ROTFLMAO!*

> Think wood turning; it is simpler and more "hands on".

Well, the primary tool used for that /is/ a lathe, isn't it?

Actually that's the thing I'm primarily thinking of - certainly not a 
CNC metal-machining lathe, that would be boring (uh... no pun intended).



From: clipka
Subject: Re: VR Brainstorming
Date: 14 Sep 2014 11:25:46
Message: <5415b37a$1@news.povray.org>
On 14.09.2014 15:37, Bald Eagle wrote:
> Usually wood lathes are used with a variety of fairly large tools
> (leverage), and you work your way from the rough-turning of an
> octagonally cut workpiece into a cylindrical one, then medium
> turning to get the contour/profile that you want, and then some
> finer tools that you actually use to sort of slice or peel to get a
> very fine finish.

I think I'd want the user to start with a square or rectangular cut 
workpiece, allowing them to leave some part of the item in that shape. 
Output to POV-Ray would then of course be an intersection of a box and a 
lathe object.
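
Roughly like this (numbers made up) - the turned profile gets 
clipped square wherever its radius exceeds the box:

// made-up profile; radii > 0.5 are cut square by the box
intersection {
   lathe {
      linear_spline
      5,
      <0, 0>, <0.7, 0>, <0.35, 1.2>, <0.55, 2>, <0, 2>
   }
   box { <-0.5, -0.01, -0.5>, <0.5, 2.01, 0.5> }
   pigment { rgb <0.8, 0.6, 0.4> }
}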



From: Stephen
Subject: Re: VR Brainstorming
Date: 14 Sep 2014 13:25:28
Message: <5415cf88$1@news.povray.org>
On 14/09/2014 16:20, clipka wrote:
> On 14.09.2014 14:42, Stephen wrote:
>
>>> Whether the conversion process would be render-ish or parse-ish is an
>>> entirely different question.
>>
>> I'm out of my depth here.
>
> A mesh representation of the scene would probably be easiest to achieve
> by parsing the scene, then having dedicated code convert each and every
> object separately into a mesh.
>

Not using the tessellation that can be done in SDL, then?

> A voxel ("volume pixel", i.e. 3D array of boxes) representation of the
> scene could be generated in a similar way; it could, however, also be
> generated by having POV-Ray parse the scene, and then use existing code
> in the render engine to systematically ray-trace it, collecting not only
> colour information but also the intersection position information.
>

How would that handle parts that are obscured by the object itself 
or by other objects?

>>> As a matter of fact this is the direction the dev team intends to go, in
>>> order to make it easier to integrate components of POV-Ray into other
>>> pieces of software - be it as an input filter or a render engine. But it
>>> won't happen overnight, and I won't be the only one working towards this
>>> goal.
>>>
>> Are we thinking about POV 4.0?
>
> Not exactly; more like POV-Ray 3.8 and 3.9, as POV-Ray 4.0 will most
> probably be the step that introduces a brand new parser with a brand new
> syntax. That'll obviously be easier to implement once we already have a
> clear-cut API for the render engine.
>

Interesting, thanks for explaining in a way I can understand.

>>> I've even heard tell that it is typically used with sharp tools to
>>> remove parts of the material. :-)
>>>
>> And abrasive ones for Anti-Aliasing.
>
> *ROTFLMAO!*
>

We are here to serve. :-)

>> Think wood turning; it is simpler and more "hands on".
>
> Well, the primary tool used for that /is/ a lathe, isn't it?
>

Yes, of course. A lathe is a turning machine.

> Actually that's the thing I'm primarily thinking of - certainly not a
> CNC metal-machining lathe, that would be boring (uh... no pun intended).
>

You know the drill. ;-)

A manual metal-turning lathe would be overcomplicated IMO.
And where is the fun in pushing a button to get your shape?
Have you thought about the Kinect as an i/p device?


-- 

Regards
     Stephen



From: clipka
Subject: Re: VR Brainstorming
Date: 15 Sep 2014 14:35:12
Message: <54173160$1@news.povray.org>
On 14.09.2014 19:25, Stephen wrote:
> On 14/09/2014 16:20, clipka wrote:
>> On 14.09.2014 14:42, Stephen wrote:
>>
>>>> Whether the conversion process would be render-ish or parse-ish is an
>>>> entirely different question.
>>>
>>> I'm out of my depth here.
>>
>> A mesh representation of the scene would probably be easiest to achieve
>> by parsing the scene, then having dedicated code convert each and every
>> object separately into a mesh.
>
> Not using the tessellation that can be done in SDL, then?

The underlying algorithm may be the same for certain objects (most 
notably isosurfaces), but no - tessellation in SDL is a tad too slow 
for my taste ;-)

Besides, IIRC Jerome has already included some inbuilt tessellation 
features in his fork - still need to steal his code and put it into 
UberPOV...

>> A voxel ("volume pixel", i.e. 3D array of boxes) representation of the
>> scene could be generated in a similar way; it could, however, also be
>> generated by having POV-Ray parse the scene, and then use existing code
>> in the render engine to systematically ray-trace it, collecting not only
>> colour information but also the intersection position information.
>>
>
> How would that handle parts that are obscured by the object itself
> or by other objects?

It would have to trace from different locations.

Maybe something like this: you will initially see only the parts 
visible from the initial camera position, but tracing continues as 
you move your head about, and the missing pieces will be filled in 
over time, maybe with gradually increasing detail.

> A manual metal-turning lathe would be overcomplicated IMO.
> And where is the fun in pushing a button to get your shape?
> Have you thought about the Kinect as an i/p device?

The idea briefly crossed my mind, but not long enough to be examined 
in any noteworthy detail. Might be a way to go - but then I'd need 
to obtain a Kinect as well, and fight my way through its API in 
addition to the Oculus Rift's. So, bottom line: Kinect input will 
most certainly not feature in the initial version. I might revisit 
the idea once the Oculus Rift part and the game controller input are 
flying.



From: Stephen
Subject: Re: VR Brainstorming
Date: 16 Sep 2014 10:55:53
Message: <54184f79$1@news.povray.org>
On 15/09/2014 19:34, clipka wrote:
>> How would that handle parts that are obscured by the object itself
>> or by other objects?
>
> It would have to trace from different locations.
>
> Maybe something like this: you will initially see only the parts
> visible from the initial camera position, but tracing continues as
> you move your head about, and the missing pieces will be filled in
> over time, maybe with gradually increasing detail.
>

Real-time rendering?
Is that using the feature from Beta 17?

>> A manual metal-turning lathe would be overcomplicated IMO.
>> And where is the fun in pushing a button to get your shape?
>> Have you thought about the Kinect as an i/p device?
>
> The idea briefly crossed my mind, but not long enough to be examined in
> any noteworthy detail. Might be a way to go - but then I'd need to
> obtain a Kinect as well, and fight my way through its API in addition to
> the Oculus Rift's. So, bottom line: Kinect input will most certainly not
> feature in the initial version. I might revisit the idea once the
> Oculus Rift part and the game controller input are flying.

Fairy Nuff. :-)

-- 

Regards
     Stephen



