This is the mail archive of the cygwin mailing list for the Cygwin project.
On Mar 21 23:59, Christian Franke wrote:
> clock_getres(CLOCK_REALTIME, &ts) queries the actual resolution through
> NtQueryTimerResolution(&coarsest, &finest, &actual) during its first
> call and returns this value unchanged afterwards.
> This returns a global Windows setting which may be temporarily modified
> by other applications using e.g. timeBeginPeriod()/timeEndPeriod().
> For example, playing a Flash video in a browser sets the resolution to
> 1 ms; it is reset to the default ~15 ms when the browser is closed.
> As a consequence, the actual resolution might be much lower than
> reported by clock_getres() when clock_gettime() is used for
> measurements later.  It would IMO be better to return the 'coarsest'
> instead of the 'actual' value.

clock_getres already returns the coarsest time.  Did you mean the
setting in hires_ms::resolution, by any chance?  It's using the actual
setting right now.
> If clock_setres() is used, this setting should be returned instead of
> the 'actual' value at the time of the setting.

Well, I'm not overly concerned about clock_setres, given that it's
probably not used at all :)
> BTW: GetSystemTimeAsFileTime() apparently provides the same resolution
> (at least on Win7 x64).  So the more complex use of
> SharedUserData.InterruptTime may have less benefit than expected.

On pre-Vista, the accuracy of GetSystemTimeAsFileTime is 15.625 ms,
fixed.  On Vista and later it seems to be 15 ms or better, but its
resolution is not constant anymore.
> [1] ./testres started/stopped to read default resolutions.
>
>     SystemTime  CLOCK_REALTIME (getres)    CLOCK_MONOTONIC (getres)
>     0.0156000   0.015600000 (0.015600000)  0.000000301 (0.000000301)
>     ...
>     ^C
>
> [3] ./testres started
>
>     SystemTime  CLOCK_REALTIME (getres)    CLOCK_MONOTONIC (getres)
>     0.0010000   0.001000000 (0.001000000)  0.000000302 (0.000000301)
>     ...
>     0.0005000   0.000500000 (0.001000000)  0.000000302 (0.000000301)  [4]
>     0.0010000   0.001000000 (0.001000000)  0.000000301 (0.000000301)  [5]
>     0.0156000   0.015600000 (0.001000000)  0.000000302 (0.000000301)  [6]
>
> Where: [4] VM started in VirtualBox
>        [5] VM closed
>        [6] SeaMonkey closed
But I'm not sure either whether the timeGetTime_ns shuffle has any
positive effect.  Given that we allow a jitter of 40 ms, we're
potentially worse off than by just calling GetSystemTimeAsFileTime and
being done with it.  That would also guarantee that all processes are
on the same time, not only the processes within the same session
sharing gtod.
Attachment:
testres.cc
Description: Text document
--
Problem reports:       http://cygwin.com/problems.html
FAQ:                   http://cygwin.com/faq/
Documentation:         http://cygwin.com/docs.html
Unsubscribe info:      http://cygwin.com/ml/#unsubscribe-simple