SemiAccurate Forums  

 

GPUs: Talk about graphics cards, chips and technologies

  #41  
Old 03-23-2012, 07:28 AM
kalizec kalizec is offline
itanic
 
Join Date: Oct 2011
Location: Utrecht, Netherlands
Posts: 195
kalizec is on a distinguished road
Default

Quote:
Originally Posted by integrated View Post
Never witnessed it myself, but can it accurately be called "horrible"??
Yes, microstuttering can accurately be called "horrible".

Why? Microstuttering is noticeable and irritating, and it detracts from perceived performance: without it, the same hardware (and power bill) would feel faster.

Finally, microstuttering is one of the reasons why average FPS isn't a reliable metric. I'd rather own a setup that delivers a smooth 30 fps than a microstuttering one that averages 50 fps.
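To illustrate with made-up numbers (a Python sketch; the frame times are hypothetical, not measurements):

Code:
# Two hypothetical frame-time traces (ms) covering the same wall-clock time.
steady  = [33.3] * 60      # a smooth 30 fps
stutter = [5, 35] * 30     # alternating short/long frames: 50 fps on paper

def avg_fps(frame_times_ms):
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

print(avg_fps(steady))   # ~30 fps, every frame lands on a steady 33 ms beat
print(avg_fps(stutter))  # 50 fps average, yet every other frame takes 35 ms
# The 50 fps trace never feels faster than the 30 fps one, because the
# long frames are what you actually perceive.

The averages say 50 beats 30, but the frame-time pattern says otherwise.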
__________________
Quote:
Originally Posted by Copper
Yes, you can quote me on that
Reply With Quote
  #42  
Old 03-23-2012, 07:30 AM
James James is offline
2^11
 
Join Date: Jul 2009
Posts: 3,068
James will become famous soon enough
Default

http://www.tomshardware.com/reviews/...fire,2995.html

Quote:
Even at frame rates above 50 FPS, micro-stuttering rears its ugly head, pronounced enough to significantly detract from the gaming experience. A paradigm shift seems necessary, at least until both AMD and Nvidia are able to prevent or mask the artifact. Right now, if you asked us whether it'd be smart to "go cheap" on an inexpensive card and double-down later with another one, we'd have to suggest against it if you're the sort to be bothered by micro-stuttering. The improvement in performance would be negated by the phenomenon's impact.
I would never run CrossFire again unless microstutter were removed entirely. It appears that 3-way CrossFire eliminates it, at least in this article, but that's getting into some pretty serious expense.
Reply With Quote
  #43  
Old 03-23-2012, 08:02 AM
Amiga500 Amiga500 is offline
640k who needs more?
 
Join Date: Nov 2009
Posts: 647
Amiga500 is on a distinguished road
Default

Quote:
Originally Posted by Guild View Post
Time taken for each frame. Not sure what that's called, frame latency? The idea is that, on average, each frame should take roughly the same amount of time. Microstutter happens when frame times swing around instead, e.g. frame 1 takes 20 ms, frame 2 50 ms, frame 3 20 ms, frame 4 60 ms, and so on.
So you'd look at the standard deviation of the frame times within a specified period.
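Something like this (Python, using the frame times from the quote above):

Code:
import statistics

# Frame times (ms) from the example in the quote: a microstuttering pattern.
frame_times = [20, 50, 20, 60]

mean = statistics.mean(frame_times)    # 37.5 ms, i.e. ~27 fps average
sd   = statistics.pstdev(frame_times)  # ~17.9 ms, huge relative to the mean

print(f"mean {mean:.1f} ms, stddev {sd:.1f} ms")
# A smooth setup with the same average would have a stddev near zero;
# the bigger the stddev, the worse the microstutter.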
Reply With Quote
  #44  
Old 03-23-2012, 09:39 AM
Regenweald's Avatar
Regenweald Regenweald is offline
640k who needs more?
 
Join Date: Mar 2010
Location: Trinidad, WI
Posts: 650
Regenweald is on a distinguished road
Default

Quote:
Originally Posted by madmax396 View Post
But the wind is now blowing in a new direction: it is, in fact, gaming performance alone, and not GPGPU plus gaming performance, that matters.

If/when the big die is released, the breeze will probably blow differently.
__________________
My get up and go got up and left.
Reply With Quote
  #45  
Old 03-24-2012, 07:29 AM
SatansEvilTwin SatansEvilTwin is offline
byte
 
Join Date: Jul 2010
Posts: 14
SatansEvilTwin is on a distinguished road
Default

Yeah, mGPU is still only for benchmarkers, not for actual gamers.


First of all, each graphics card still renders a full frame, which takes just as long as it would on a single card (or even longer, since most people will want to raise the image quality once they have multiple GPUs). So the most important metric for immersive gameplay, the time from moving your mouse until the screen follows that movement, is not improved at all. A CrossFire or SLI setup running at 60 fps still has the responsiveness of a single card running at 30 fps: a 33.3 ms delay just to render the frame, about as much as a wireless mouse adds over a wired one, or as much as a good internet connection.
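The arithmetic, spelled out (a Python sketch using the 60 fps / two-GPU assumption above):

Code:
# AFR latency, spelled out with the numbers used above.
fps  = 60.0
gpus = 2                                       # CrossFire/SLI pair doing AFR

frame_interval_ms = 1000.0 / fps               # 16.7 ms between frames on screen
per_gpu_render_ms = gpus * frame_interval_ms   # 33.3 ms: each GPU's actual render time

print(f"a new frame every {frame_interval_ms:.1f} ms,")
print(f"but each frame took {per_gpu_render_ms:.1f} ms to render")
# Input-to-screen latency therefore matches a single card at 30 fps.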


Worse, AFR actively corrupts the gaming experience by introducing microstuttering, whose effect can be seen nicely in these graphs:

http://www.computerbase.de/artikel/g...m_mikroruckler


Or experience it yourself at home with this little tool:

http://www.computerbase.de/downloads...er-single-gpu/

Note that it only simulates framerate inconsistency, not the input lag.


So for me as a gamer, mGPU with AFR is just pointless: buy the fastest single GPU if you think you need that image quality, or turn down the most taxing settings and just play (at a minimum of 60 fps, IMHO).

Last edited by SatansEvilTwin; 03-24-2012 at 07:33 AM.
Reply With Quote
  #46  
Old 03-24-2012, 07:43 AM
Bitey Bitey is offline
itanic
 
Join Date: Apr 2011
Posts: 222
Bitey is on a distinguished road
Default

Does enabling vsync help with micro stuttering?
Reply With Quote
  #47  
Old 03-24-2012, 10:28 AM
flippin_waffles flippin_waffles is offline
640k who needs more?
 
Join Date: Jul 2009
Posts: 663
flippin_waffles is on a distinguished road
Default

Quote:
Originally Posted by SatansEvilTwin View Post
Yeah, mGPU is still only for benchmarkers, not for actual gamers.


First of all, each graphics card still renders a full frame, which takes just as long as it would on a single card (or even longer, since most people will want to raise the image quality once they have multiple GPUs). So the most important metric for immersive gameplay, the time from moving your mouse until the screen follows that movement, is not improved at all. A CrossFire or SLI setup running at 60 fps still has the responsiveness of a single card running at 30 fps: a 33.3 ms delay just to render the frame, about as much as a wireless mouse adds over a wired one, or as much as a good internet connection.


Worse, AFR actively corrupts the gaming experience by introducing microstuttering, whose effect can be seen nicely in these graphs:

http://www.computerbase.de/artikel/g...m_mikroruckler


Or experience it yourself at home with this little tool:

http://www.computerbase.de/downloads...er-single-gpu/

Note that it only simulates framerate inconsistency, not the input lag.


So for me as a gamer, mGPU with AFR is just pointless: buy the fastest single GPU if you think you need that image quality, or turn down the most taxing settings and just play (at a minimum of 60 fps, IMHO).
If this fancy-pants microstutter can only be experienced through graphs and tools, I question how serious a problem it really is for gamers.

Last edited by flippin_waffles; 03-24-2012 at 10:30 AM.
Reply With Quote
  #48  
Old 03-24-2012, 10:35 AM
__shared__ __shared__ is offline
Banned
 
Join Date: Jan 2012
Posts: 185
__shared__ is on a distinguished road
Default

Quote:
Originally Posted by DarthShader View Post
The GK104, being a gaming card, has DP limited to 1/16 SP performance, so how is this news?

There are also many other architectural choices that make it not so great for compute; the compute-oriented hardware would take up die space while doing very little for gaming performance.

Yes, there will be a GK110(2) later, possibly this year, to take care of compute.
DarthShader is on the right track here, only it's actually 1/24th DP!

This is definitely a gaming card... Like someone said, NV has beaten AMD at their own game here...
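To put the 1/24 ratio in numbers (a Python sketch; core count and base clock are the GTX 680's published specs, throughput is the usual 2-FLOPs-per-core-per-clock theoretical figure):

Code:
# GK104 (GTX 680) theoretical throughput at the 1/24 DP rate.
cuda_cores = 1536
clock_ghz  = 1.006                        # base clock

sp_gflops = cuda_cores * 2 * clock_ghz    # ~3090 GFLOPS single precision
dp_gflops = sp_gflops / 24                # ~129 GFLOPS double precision

print(f"SP {sp_gflops:.0f} GFLOPS, DP {dp_gflops:.0f} GFLOPS")
# Great for games, which are pure single precision; poor for HPC workloads.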
Reply With Quote
  #49  
Old 03-26-2012, 06:43 AM
TheAgentD TheAgentD is offline
nibble
 
Join Date: Mar 2012
Posts: 4
TheAgentD is on a distinguished road
Default

Quote:
Originally Posted by Bitey View Post
Does enabling vsync help with micro stuttering?
Yes, it completely eliminates it. Microstuttering occurs when the GPUs bunch up, producing two frames at almost the same time, so one of them effectively never reaches your screen. Adaptive VSync could very well be the ultimate solution to micro-stuttering: even if it only kicks in some of the time, it would help the GPUs generate frames at even intervals. The only problem then is the delay VSync adds. At 60 FPS it takes 16.7 ms to render a frame. With SLI/CF you have two GPUs, so if you get 60 FPS, each GPU actually spends 33.3 ms rendering its frame. It gets even worse with three or four GPUs.

Now VSync also forces the GPU to wait for a screen refresh, so it adds even more delay. Add monitor delay (as low as 2 ms for good desktop monitors, but over 20 ms for some laptops) and input delay (most USB mice are polled at 125 Hz, so you get 0-8 ms here depending on where between polls you click) and you could be looking at over 100 ms of total delay.
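For a rough total (a Python sketch; every number is an illustrative assumption, not a measurement):

Code:
# Rough input-to-screen latency budget for a 2-GPU AFR setup at 60 fps.
# All values are illustrative assumptions, not measurements.
latency_ms = {
    "mouse polling (125 Hz, average)":  4.0,   # 0-8 ms depending on timing
    "game/CPU frame (60 fps)":         16.7,   # one simulation frame
    "AFR rendering (2 GPUs @ 60 fps)": 33.3,   # each GPU takes two intervals
    "vsync wait (average, 60 Hz)":      8.3,   # up to one full refresh
    "monitor processing":              20.0,   # good desktops ~2 ms, laptops 20+
}

total = sum(latency_ms.values())
print(f"total ~{total:.0f} ms")  # ~82 ms; add network lag and it easily tops 100 ms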

I have huge problems with microstuttering in BFBC2 with my GTX 295. VSync solves it, but aiming becomes so sluggish that sniping moving targets is impossible, thanks to the combined input (i.e. rendering) delay and network lag. Definitely going single-GPU next time.
Reply With Quote
  #50  
Old 03-26-2012, 07:10 AM
TheAgentD TheAgentD is offline
nibble
 
Join Date: Mar 2012
Posts: 4
TheAgentD is on a distinguished road
Default

Sorry for the double post, but it won't let me edit the original one anymore...

Do any of you guys have any idea what double-precision floating point actually is? Normal-precision floats are 32-bit; double-precision floats are obviously 64-bit. When do you need the extra precision? In games: never. Even 32-bit floats are overkill for lots of things, HDR rendering for example. Traditionally games render to an 8-bit RGB framebuffer, but HDR needs more precision than that, which is why we use 16-bit half-floats for HDR framebuffers: they are half the size of full floats, and their range still allows color intensities over 1000 times as bright as a standard 8-bit integer framebuffer can represent. 32-bit floats are used for vertex positions, matrix transformations and almost all shader calculations like lighting and post-processing. 64-bit is almost exclusively used for scientific computing. Heck, 64-bit values only became supported with DirectX 11 / OpenGL 4! NO GAMES USE THESE AT ALL!
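A quick way to see the trade-offs (a Python/NumPy sketch of the three float widths discussed above):

Code:
import numpy as np

# Range and precision of the three float widths discussed above.
for dtype in (np.float16, np.float32, np.float64):
    info = np.finfo(dtype)
    print(dtype.__name__, "max:", info.max, "decimal digits:", info.precision)

# float16 max is ~65504: plenty of headroom for HDR intensities far beyond
# the 0-255 of an 8-bit framebuffer, at half the storage cost of float32.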

So when NVIDIA says these boards are not good at compute, they mean scientific computing. BF3 uses DirectCompute for tile-based deferred lighting; that will run at full speed, not 1/24th, since it doesn't use double-precision floats in any way. Heck, I've written an OpenCL particle system simulator myself that can process and render almost 2.5 million particles on my laptop's GTX 460M. 64-bit floats are completely unused in games, at least on the GPU side.
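For flavor, here's the kind of per-particle math such a simulator does, as a CPU/NumPy sketch (not the actual OpenCL kernel):

Code:
import numpy as np

# Minimal single-precision particle update: 32-bit floats everywhere,
# exactly as a game-style GPU kernel would do it.
N = 2_500_000
pos = np.zeros((N, 3), dtype=np.float32)
vel = np.random.standard_normal((N, 3)).astype(np.float32)
gravity = np.array([0.0, -9.81, 0.0], dtype=np.float32)
dt = np.float32(1.0 / 60.0)

def step(pos, vel):
    vel += gravity * dt    # integrate velocity (all float32 math)
    pos += vel * dt        # integrate position
    return pos, vel

pos, vel = step(pos, vel)  # no float64 anywhere, and none is needed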
Reply With Quote