From: William F Pokorny
Subject: Probable thread safety issue in functions using splines. 3.7.0.RC7.
Date: 3 Jun 2013 06:54:08
Message: <51ac75d0$1@news.povray.org>
Hi,
I believe I've hit a thread safety issue. Could someone confirm? Running
Ubuntu 12.1. I'll also post images to p.b.images.
Thanks for your time.
Bill P.
Attachments: SplineThreadSafety.pov (2 KB)
William F Pokorny wrote:
> I believe I've hit a thread safety issue. Could someone confirm? Running
> Ubuntu 12.1. I'll also post images to p.b.images.
I see the same behavior under Ubuntu 12.04, AMD 2431 CPU, Linux 3.2.0-45 kernel.
The artifacts appear for any value of worker threads (+WTn) when n is greater
than 1. The artifacts differ with each render, even for the same number of
worker threads.
I rendered it using 3.7.0.RC7 (g++ 4.6 @x86_64-unknown-linux-gnu) as follows:
povray +P +WT6 somefilename.pov
FWIW, using povray -D +WT6 somefilename.pov yields similar results.
William F Pokorny <ano### [at] anonymousorg> wrote:
> I believe I've hit a thread safety issue. Could someone confirm? Running
> Ubuntu 12.1. I'll also post images to p.b.images.
Confirmed with openSUSE 12.2.
I don't have an answer to this problem, but I can tell you that you'll get a
much faster render if you change the contained_by to box { <-0.5,-2.0,-0.5>,
<0.5,2.0,0.5> } and set max_gradient to 28.
To the POV Team: Last year I posted to p.beta-test that POV-Ray 3.7 did not
always post the warning for a bad max_gradient value, but could not reproduce
the problem with a simplified scene. Well, serendipity has come to the rescue,
and I cannot raise the max_gradient warning with Mr. Pokorny's scene either.
Here's another data point. The artifacts also appear under the following
conditions, tested with the default (16) worker threads.
POV-Ray 3.7.0.RC7 (icpc 13.1.0 @x86_64-unknown-linux-gnu)
Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz
uname -or
2.6.32-279.14.1.el6.x86_64 GNU/Linux
lsb_release -irc
Distributor ID: CentOS
Release: 6.3
Codename: Final
From: Le Forgeron
Subject: Re: Probable thread safety issue in functions using splines. 3.7.0.RC7.
Date: 3 Jun 2013 17:51:41
Message: <51ad0fed$1@news.povray.org>
On 03/06/2013 12:54, William F Pokorny wrote:
> Hi,
> I believe I've hit a thread safety issue. Could someone confirm? Running
> Ubuntu 12.1. I'll also post images to p.b.images.
>
> Thanks for your time.
> Bill P.
>
>
> SplineThreadSafety.pov
>
>
>
> //----------------------------------------------------------------------------------------------------
> // Scene to show thread safety issue with splines - or maybe functions using splines.
> //
> // On a multi-core machine, without AA, run this command to see the problem (8 cores < 60 seconds):
> //
That might be less than 60 seconds on its own, but with Intel Inspector (XE 2013)
it becomes 42:50 (and just 14 data races; oh well, that's so friendly).
ID Type Sources Modules State
P1 Data race isosurf.cpp; mutex.hpp povray New
P2 Data race mutex.hpp; povms.cpp povray New
P3 Data race povray.cpp povray New
P4 Data race povray.cpp povray New
P5 Data race mutex.hpp; pov_mem.cpp; splines.cpp povray New
P6 Data race mutex.hpp; pov_mem.cpp; splines.cpp povray New
P7 Data race mutex.hpp; pov_mem.cpp; splines.cpp povray New
P8 Data race recursive_mutex.hpp; scene.cpp; task.cpp; taskqueue.cpp; view.cpp povray New
P9 Data race condition_variable.hpp; vfe.cpp; vfesession.cpp povray New
P10 Data race condition_variable.hpp; unixconsole.cpp; vfesession.cpp povray New
P11 Data race condition_variable.hpp; unixconsole.cpp; vfesession.cpp povray New
P12 Data race unixconsole.cpp; vfesession.cpp; vfesession.h povray New
P13 Data race unixconsole.cpp; vfesession.cpp povray New
P14 Data race [Unknown]; unixconsole.cpp; vfesession.cpp libboost_thread.so.1.49.0; povray New
Numbers 2, 3, 4, 8, 9, 10, 11, 12, 13 & 14 are related to session handling, and
each occurs only once or twice, except #8, which occurs four times.
Number 1 is about the isosurface: the gradient is adjusted at isosurf.cpp:1099
while its value is tested at :1098, plus the copying of the isosurface. IMHO,
having the rendering threads update the object is not the best move without
some atomic operation or other protection (and I am not sure a DBL is, or can
be, atomic). It occurs 2505 times.
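For illustration only (the struct and field names below are invented for the
sketch, not taken from POV-Ray's sources), the test-then-adjust pattern looks
roughly like this, together with one way it could be guarded using an atomic
compare-exchange loop:

#include <atomic>

// Hypothetical stand-in for the shared isosurface object.
struct IsoSurfaceSketch
{
    double max_gradient = 1.0;                    // plain shared DBL, as in the racy pattern
    std::atomic<double> max_gradient_atomic{1.0}; // candidate replacement

    // Racy: every render thread tests the shared value on one line and
    // writes it on the next, with nothing stopping another thread from
    // doing the same in between.
    void note_gradient_racy(double observed)
    {
        if (observed > max_gradient)
            max_gradient = observed;
    }

    // One possible remedy: keep the running maximum in an atomic and
    // raise it with a compare-exchange loop.
    void note_gradient_safe(double observed)
    {
        double current = max_gradient_atomic.load(std::memory_order_relaxed);
        while (observed > current &&
               !max_gradient_atomic.compare_exchange_weak(
                   current, observed, std::memory_order_relaxed))
        {
            // compare_exchange_weak refreshes 'current' on failure, so the
            // loop re-checks against the latest value.
        }
    }
};

Whether 3.7's toolchain would use std::atomic or a boost equivalent is another
matter; the sketch is only meant to show the shape of a possible fix.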
Numbers 5, 6 and 7 are about splines:
* sp->Cache_Type & Cache_Point, splines.cpp :803 vs :814/815 (2024 times)
* sp->Cache_Valid, :805 vs :813 vs :904 (1770 times)
* sp->Cache_Data, :807 vs :903 (5025 times)
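To make those concrete, here is a much-simplified sketch (the type and helper
names are invented; the real struct lives in splines.cpp) of the shared
single-entry cache, plus one cheap remedy: keep the cache per thread instead
of per spline.

#include <cmath>

// Hypothetical, simplified stand-in for the spline object.
struct SplineSketch
{
    // Single-entry evaluation cache shared by every rendering thread; these
    // mirror the Cache_Valid / Cache_Point / Cache_Data fields Inspector flags.
    bool   Cache_Valid = false;
    double Cache_Point = 0.0;
    double Cache_Data  = 0.0;

    // Placeholder for the real spline evaluation.
    double evaluate_uncached(double t) const { return std::sin(t); }

    // Racy: with no synchronization, a reader can see Cache_Point already
    // updated while Cache_Data still holds the previous value (or see the
    // writes reordered), so it may return a wrong or half-updated entry.
    double evaluate_racy(double t)
    {
        if (Cache_Valid && Cache_Point == t)
            return Cache_Data;
        double v    = evaluate_uncached(t);
        Cache_Point = t;
        Cache_Data  = v;
        Cache_Valid = true;
        return v;
    }
};

// One possible remedy: give each rendering thread its own cache, keyed on the
// spline's address so different splines do not collide; the hot path then
// needs no locking at all.
double evaluate_thread_local(const SplineSketch& sp, double t)
{
    thread_local const SplineSketch* cached_spline = nullptr;
    thread_local double cached_point = 0.0;
    thread_local double cached_data  = 0.0;

    if (cached_spline == &sp && cached_point == t)
        return cached_data;
    cached_data   = sp.evaluate_uncached(t);
    cached_point  = t;
    cached_spline = &sp;
    return cached_data;
}

A mutex around the cache would work too, at the cost of contention on every
evaluation, and a real per-thread cache would need some care when splines are
destroyed or copied.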
Only my 0.02¢ (yes, very cheap), but it seems to confirm.
From: William F Pokorny
Subject: Re: Probable thread safety issue in functions using splines. 3.7.0.RC7.
Date: 4 Jun 2013 06:57:37
Message: <51adc821@news.povray.org>
On 06/03/2013 05:51 PM, Le_Forgeron wrote:
> [...]
>
> Only my 0.02¢ (yes, very cheap), but it seems to confirm.
>
Thanks everyone for the confirmation, the code tips - and the cents ;-).
The Intel Inspector tool is news to me. I'm actually impressed at how
"quickly" it ran what must have been heavily instrumented code.
I'll make my first run at opening an official bug report after work
tonight.
Bill P.
> Thanks everyone for the confirmation, the code tips - and the cents ;-).
> [...]
Just to add to the system list: Windows with a Core i7 yields the same result.
Best regards,
Michael