Hi all,
I experienced a problem after upgrading a server from 1TB RAM to 2TB RAM (yes,
really RAM) in our dual-socket AMD EPYC machine with 48 cores/96 threads under
Windows Server 2016. If I render a scene at 3500x1750 pixels, half of the cores
are used (Windows processor group stuff) and the performance is quite nice. If I
increase the resolution to 3600x1800 pixels (or more), it starts with 48
threads/24 cores, but performance is bad and after several seconds just one
core/thread is doing all the work. The rendering also cannot be stopped and runs
till the end.
I never experienced such a problem before. Do you have any ideas?
"Mirfaelltkeinerein" <nomail@nomail> wrote:
> Hi all,
>
> I experienced a problem after upgrading a server from 1TB RAM to 2TB RAM (yes,
> really RAM) in our dual socket AMD EPYC machine with 48 cores/96 threads under
> Windows Server 2016. If I render a scene with 3500x1750 pixel, half of the cores
> are used (Windows processor group stuff) and the performance is quite nice. If I
> increase the resolution to 3600x1800 pixel (or more), it starts with 48
> threads/24 cores, but performance is bad and after several seconds, just one
> core/thread is doing all the work. Rendering can also not be stopped and runs
> till the end.
> I never experienced such a problem before. Do you have any ideas?
Sorry, I posted the message to the wrong group. Could an admin move it to the
Windows group?
By the way, it doesn't happen if I restrict the rendered area to the first 1000
rows, for example, so it might be a scheduling issue related to the total number
of pixels to be rendered.
"Mirfaelltkeinerein" <nomail@nomail> wrote:
> "Mirfaelltkeinerein" <nomail@nomail> wrote:
> > Hi all,
> >
> > I experienced a problem after upgrading a server from 1TB RAM to 2TB RAM (yes,
> > really RAM) in our dual socket AMD EPYC machine with 48 cores/96 threads under
> > Windows Server 2016. If I render a scene with 3500x1750 pixel, half of the cores
> > are used (Windows processor group stuff) and the performance is quite nice. If I
> > increase the resolution to 3600x1800 pixel (or more), it starts with 48
> > threads/24 cores, but performance is bad and after several seconds, just one
> > core/thread is doing all the work. Rendering can also not be stopped and runs
> > till the end.
> > I never experienced such a problem before. Do you have any ideas?
>
> Sorry, I posted the message into the wrong group. Could an Admin move it to the
> Windows group?
>
> By the way, it doesn't happen, if I restrict the rendered area to the first 1000
> rows for example, so it might be an scheduling issue with the total number of
> pixels to be rendered.
It still does happen for larger resolutions like 3800x1900 pixels, even with a
restricted render region.
hi,
"Mirfaelltkeinerein" <nomail@nomail> wrote:
> "Mirfaelltkeinerein" <nomail@nomail> wrote:
> > "Mirfaelltkeinerein" <nomail@nomail> wrote:
> > > I experienced a problem after upgrading a server from 1TB RAM to 2TB RAM (yes,
> > > really RAM) in our dual socket AMD EPYC machine with 48 cores/96 threads under
> > > Windows Server 2016. If I render a scene with 3500x1750 pixel, half of the cores
> > > are used (Windows processor group stuff) and the performance is quite nice. If I
> > > increase the resolution to 3600x1800 pixel (or more), it starts with 48
> > > threads/24 cores, but performance is bad and after several seconds, just one
> > > core/thread is doing all the work. Rendering can also not be stopped and runs
> > > till the end.
> > > I never experienced such a problem before. Do you have any ideas?
> >
> > Sorry, I posted the message into the wrong group. Could an Admin move it to the
> > Windows group?
> >
> > By the way, it doesn't happen, if I restrict the rendered area to the first 1000
> > rows for example, so it might be an scheduling issue with the total number of
> > pixels to be rendered.
>
> It still does happen for larger resolutions like 3800x1900 pixel even with
> restricted render region.
are you sure about the resolution being the .. culprit? at least one user
renders (much) larger images with no mention of ill-effects.
<http://news.povray.org/web.5ceaeabd5e75640e3c1c78400%40news.povray.org>
regards, jr.
From: William F Pokorny
Subject: Re: Problems with large RAM installed
Date: 10 Feb 2020 12:52:24
Message: <5e419858@news.povray.org>
On 2/10/20 10:50 AM, Mirfaelltkeinerein wrote:
> "Mirfaelltkeinerein" <nomail@nomail> wrote:
>> "Mirfaelltkeinerein" <nomail@nomail> wrote:
>>> Hi all,
>>>
>>> I experienced a problem after upgrading a server from 1TB RAM to 2TB RAM (yes,
>>> really RAM) in our dual socket AMD EPYC machine with 48 cores/96 threads under
>>> Windows Server 2016. If I render a scene with 3500x1750 pixel, half of the cores
>>> are used (Windows processor group stuff) and the performance is quite nice. If I
>>> increase the resolution to 3600x1800 pixel (or more), it starts with 48
>>> threads/24 cores, but performance is bad and after several seconds, just one
>>> core/thread is doing all the work. Rendering can also not be stopped and runs
>>> till the end.
>>> I never experienced such a problem before. Do you have any ideas?
>>
>> Sorry, I posted the message into the wrong group. Could an Admin move it to the
>> Windows group?
>>
>> By the way, it doesn't happen, if I restrict the rendered area to the first 1000
>> rows for example, so it might be an scheduling issue with the total number of
>> pixels to be rendered.
>
> It still does happen for larger resolutions like 3800x1900 pixel even with
> restricted render region.
>
>
Are you using POV-Ray v3.7 or the in-progress v3.8 master branch? If the
former, it might be worth trying the latter, as Christoph fixed quite a
few (all?) fixed-size memory allocations which might cause stack issues and
hence odd problems/crashes(1). I'm not a Windows user, but there are
pre-compiled Windows binaries for the v3.8 releases. See:
https://github.com/POV-Ray/povray/releases
---
Aside: There was a recent TechSpot review of a dual-socket 3990X system
including POV-Ray, where they mention AMD providing a POV-Ray patch to
support all the cores/threads instead of 'just' 64. There is a pull
request in:
https://github.com/POV-Ray/povray/pull/387
which I assume is this AMD fix, but I don't know for certain. Perhaps
someone on the core team got an email saying so for sure? In any case, if you
are up to merging and compiling that pull request, it would be
interesting to see whether you can then use all your cores and threads - even
if just on the smaller render.
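(The "half the cores" symptom matches how Windows handles machines with more
than 64 logical processors: they are split into processor groups of at most 64,
balanced across the minimum number of groups, and by default a process runs in
a single group. A rough sketch of that grouping arithmetic - an illustration
only, not Windows API or POV-Ray code:)

```python
import math

# Windows caps each processor group at 64 logical processors and balances
# the processors across the minimum number of groups needed (keeping NUMA
# nodes together). By default a process is confined to one group, which is
# why a 96-thread machine shows only 48 busy threads.
MAX_GROUP_SIZE = 64

def processor_group_sizes(logical_processors):
    """Approximate Windows' balanced split into processor groups."""
    n_groups = math.ceil(logical_processors / MAX_GROUP_SIZE)
    base, extra = divmod(logical_processors, n_groups)
    return [base + (1 if i < extra else 0) for i in range(n_groups)]

print(processor_group_sizes(96))   # [48, 48] -- one group visible: 48 threads
print(processor_group_sizes(128))  # [64, 64]
```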
When you say the render cannot be stopped and runs until the end, do you
mean it continues with one thread AND still produces a good result? I'd be
surprised if that's what's happening, but maybe?
Why do you think your memory upgrade might be playing a part in what you
now see? Were you able to render the 'problem' dimensions with the
smaller amount of memory previously?
In whatever version of POV-Ray you are using, have you tried the problem
render with fewer threads, using, say, +WT24 or something? Lastly, I might
guess that a machine this powerful could be running other things? If so,
might those be causing what you see?
Bill P.
(1) - A part of this work was tangled up in moving off boost threads to the
C++11 built-in threads. The boost threads implementation could be tangled up
in your problem too, I guess, if you are not using a recent 3.8 release.
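(For reference, capping the worker threads as suggested might look like this;
a sketch, with the scene file name and dimensions as placeholders:)

```shell
# Command-line form: +WTn limits the number of render threads.
povray +Iscene.pov +W3600 +H1800 +WT24

# Equivalent INI-file setting:
#   Work_Threads=24
```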
On 10/02/2020 at 16:25, Mirfaelltkeinerein wrote:
> Hi all,
>
> I experienced a problem after upgrading a server from 1TB RAM to 2TB RAM (yes,
> really RAM) in our dual socket AMD EPYC machine with 48 cores/96 threads under
> Windows Server 2016. If I render a scene with 3500x1750 pixel, half of the cores
> are used (Windows processor group stuff) and the performance is quite nice. If I
> increase the resolution to 3600x1800 pixel (or more), it starts with 48
> threads/24 cores, but performance is bad and after several seconds, just one
> core/thread is doing all the work. Rendering can also not be stopped and runs
> till the end.
> I never experienced such a problem before. Do you have any ideas?
>
There is a default setting that replaces the "in-memory" picture in progress
with an "on-disk" picture in progress when the memory needed for such a
picture to render would be too large.
Obviously with 2TB of RAM (OMG!), you do not need such a limitation. (It
is in place to save the innocents who cannot hold the picture in progress
in their small memory.)
The setting is Max_Image_Buffer_Memory (the default is 128 MB, using about
20 bytes per pixel, so check your sizes: the first is 122,500,000 bytes, the
second is 129,600,000): you are crossing the boundary.
http://wiki.povray.org/content/Reference:General_Output_Options#Max_Image_Buffer_Memory
You can totally disable that safeguard by setting the parameter to 0, but
do not cry when your process gets swapped and the system crawls.
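(That boundary can be checked with a little arithmetic; a quick sketch, taking
128 MB as 128,000,000 bytes, which matches the figures quoted above:)

```python
# Max_Image_Buffer_Memory defaults to 128 MB. The in-memory image buffer
# needs roughly 20 bytes per pixel; renders whose buffer would exceed the
# limit fall back to a (much slower) on-disk buffer.
BYTES_PER_PIXEL = 20
LIMIT = 128_000_000  # 128 MB, decimal

def buffer_bytes(width, height):
    return width * height * BYTES_PER_PIXEL

for w, h in [(3500, 1750), (3600, 1800)]:
    size = buffer_bytes(w, h)
    print(w, h, size, "in-memory" if size <= LIMIT else "on-disk")
# 3500x1750 -> 122500000 (under the limit, stays in memory)
# 3600x1800 -> 129600000 (over the limit, goes on-disk)
```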
William F Pokorny <ano### [at] anonymousorg> wrote:
> On 2/10/20 10:50 AM, Mirfaelltkeinerein wrote:
> > "Mirfaelltkeinerein" <nomail@nomail> wrote:
> >> "Mirfaelltkeinerein" <nomail@nomail> wrote:
> >>> Hi all,
> >>>
> Are you using POV-Ray v3.7 or the in progress v3.8 master branch? If the
> former, it might be worth trying the latter as Christoph fixed quite a
> few (all ?) fixed memory allocations which might cause stack issues and
> hence odd problems/crashes(1). I'm not a window user, but there are
> pre-compiled windows binaries for v3.8 releases. See:
>
> https://github.com/POV-Ray/povray/releases
I'm using 3.7 and I didn't know about 3.8. I should try it out.
> ---
> Aside: There was a recent TechSpot review of a dual socket 3990X system
> including POV-Ray where they mention AMD providing a POV-Ray patch to
> support all the cores/threads instead of 'just' 64. There is a pull
> request in:
>
> https://github.com/POV-Ray/povray/pull/387
>
> which I assume is this AMD fix, but I don't know for certain. Perhaps
> someone on the core team got email saying for sure? In any case if you
> are up to merging and compiling that pull request, it would be
> interesting to see if you can then use all your cores and threads - even
> if just on the smaller render.
Thank you very much, I'll try it out.
>
> When you say the render cannot be stopped and runs until the end, do you
> mean it continues with 1 thread AND still produces a good result?
> Surprised if this is what's happening, but maybe?
Yes, the render goes on, seemingly with the appropriate number of render
threads (it looks like that, as a number of tiles are finished at different
times). But when you push the 'Stop' button in the GUI or press Alt-G, nothing
happens; the rendering continues. And Task Manager says that just one core is
busy. The 'PPS' number supports that reading.
>
> Why do you think your memory upgrade might be playing a part in what you
> now see? Were you able to render the 'problem' dimensions with the
> smaller amount of memory previously?
Yes, I upgraded the memory, rebooted, and the very first test was POV-Ray with
the saved settings I had used before for a rendering. And I had used those
settings many, many times before without problems. When I reduced the image
size to half of that amount (2000x1000), rendering (in PPS) was much faster and
all 24 cores/48 threads were used.
Then I increased the image size step by step.
>
> In whatever version of POV-Ray you are using have you tried the problem
> render with fewer threads using say +wt24 or something? Lastly, might
> guess with a machine this powerful, it might be running other things? If
> so, might these be causing what you see?
After the reboot nothing else was running, except the operating system and all
its stuff.
But what I have found out now is that when I run Comsol in parallel, doing some
heavy calculations, POV-Ray also behaves as expected: slightly slower than if
it were alone, but much faster than with the problem. I'm confused...
>
> Bill P.
>
> (1) - A part of this work was tangled in moving off boost threads to
> C++11 built in threads. The boost threads implementation could be
> tangled in your problem too I guess if you are not using a recent 3.8
> release.
Sounds a lot like this:
http://news.povray.org/povray.windows/thread/%3Cweb.56edd77df9a19c455e7df57c0%40news.povray.org%3E/
But I haven't tried rendering that scene at that size again, and I'm running
Linux, not Window$.
But yeah - upgrade to the latest 3.8 and see if that helps any.
Sounds like a sweet system. University? Corporate machine?
What's the ballpark cost of that, and what are you rendering on that monster?
"jr" <cre### [at] gmailcom> wrote:
> hi,
>
>
> are you sure about the resolution being the .. culprit? at least one user
> renders (much) larger images with no mention of ill-effects.
> <http://news.povray.org/web.5ceaeabd5e75640e3c1c78400%40news.povray.org>
>
>
> regards, jr.
No, not at all. But the problem manifests itself at higher resolutions and
vanishes at lower ones.
"Bald Eagle" <cre### [at] netscapenet> wrote:
> Sounds a lot like this:
>
>
http://news.povray.org/povray.windows/thread/%3Cweb.56edd77df9a19c455e7df57c0%40news.povray.org%3E/
>
> But I haven't tried rendering that scene at that size again, and I'm running
> Linux, not Window$.
>
> But yeah - upgrade to the latest 3.8 and see if that helps any.
>
> Sounds like a sweet system. University? Corporate machine?
> What's the ballpark cost of that, and what are you rendering on that monster?
I'm at a basic-research institute, and normally wave-optics simulations run on
that system. And 1TB became a little... tight, hence the upgrade. I render
transparent objects for presentations on it too, as it's so much faster than my
desktop machine...