Search Results

Search found 1190 results on 48 pages for 'sinister clock'.

Page 42/48 | < Previous Page | 38 39 40 41 42 43 44 45 46 47 48  | Next Page >

  • C#: Why am I still getting an RCW error when exiting my application?

    - by shifuimam
    To be specific, the error is: An attempt has been made to free an RCW that is in use. The RCW is in use on the active thread or another thread. Attempting to free an in-use RCW can cause corruption or data loss. The application in question is a small applet that displays the current clock speed of the CPU as a system tray icon. I'm using a System.Timers.Timer object to refresh the systray icon at an interval of 2.5 seconds. There's a single context menu item to exit the application, which uses the following function to unload everything: public void ExitApp(object sender, EventArgs e) { //stop and disable the timer theInterval.Stop(); theInterval.Enabled = false; //unload the system tray icon TheIcon.Visible = false; //exit the application Application.Exit(); } If I take out everything but Application.Exit(); I get an RCW error and the systray icon doesn't go away until I mouse over it. Everything was working perfectly until I changed the window style of the hidden form used to initialize the systray icon and the timer. If it helps any, I'm using C# 2010 Express.

    Read the article

  • Changing the system time zone succeeds once and then no longer changes

    - by Adam Driscoll
    I'm using the WinAPI to set the time zone on a Windows XP SP3 box. I'm reading the time zone information from the HKLM\Software\Microsoft\Windows NT\Time Zones\<time zone name> key and then setting the time zone to the specified time zone. I enumerate the keys under the Time Zones key, grab the TZI value and stuff it into a TIME_ZONE_INFORMATION struct to be passed to SetTimeZoneInformation. All seems to work on the first pass: the time zone changes and no error is returned. The second time I perform this operation (same user, new session, on login before userinit) the call succeeds but the system does not reflect the time zone change. Neither the clock nor the time stamps on files are updated to the new time zone. When I navigate to HKLM\System\CurrentControlSet\Control\TimeZoneInformation, my new time zone information is present. A couple of strange things happen when I'm setting my time zone: when I parse the TZI binary value from the registry to store in my TIME_ZONE_INFORMATION struct, I notice the struct always has the DaylightDate.wDay and StandardDate.wDay fields set to 0. Also, I tried to call GetTimeZoneInformation right after I call SetTimeZoneInformation, but the call fails with a 1300 error (Not all privileges or groups referenced are assigned to the caller). I'm also making sure to send a WM_BROADCAST message so Explorer knows what's going on. Could it be the parsing of the byte array into the TIME_ZONE_INFORMATION struct? Or am I missing something else important? EDIT: Found a document stating why this is happening: here. The privilege was introduced in Vista...thanks, MSDN docs... Per the Microsoft documentation I'm enabling the SE_TIME_ZONE_NAME privilege for the current process's token. But when I attempt to call LookupPrivilegeValue for SE_TIME_ZONE_NAME I get a 1313 error (A specified privilege does not exist).
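
    A minimal sketch of the privilege step, assuming Vista or later (the function name is illustrative, not from the question; error handling is trimmed). On XP the privilege simply does not exist, which is consistent with the 1313 error described above:

        #include <windows.h>

        // Minimal sketch (Vista and later): enable SeTimeZonePrivilege on the
        // current process token so SetTimeZoneInformation is allowed to succeed.
        BOOL EnableTimeZonePrivilege(void)
        {
            HANDLE token;
            TOKEN_PRIVILEGES tp;

            if (!OpenProcessToken(GetCurrentProcess(),
                                  TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
                return FALSE;

            // SE_TIME_ZONE_NAME is the string "SeTimeZonePrivilege"; on XP this
            // lookup fails with error 1313 because the privilege was added in Vista.
            if (!LookupPrivilegeValue(NULL, SE_TIME_ZONE_NAME, &tp.Privileges[0].Luid))
            {
                CloseHandle(token);
                return FALSE;
            }

            tp.PrivilegeCount = 1;
            tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
            BOOL ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL) &&
                      GetLastError() == ERROR_SUCCESS;
            CloseHandle(token);
            return ok;
        }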

    Read the article

  • What is the rationale for not allowing overloading of the C++ conversion operator with non-member functions?

    - by Vicente Botet Escriba
    C++0x has added explicit conversion operators, but they must always be defined as members of the Source class. The same applies to the assignment operator: it must be defined on the Target class. When the Source and Target classes of the needed conversion are independent of each other, neither can the Source define a conversion operator, nor can the Target define a constructor from a Source. Usually we get around this by defining a specific function such as Target ConvertToTarget(Source& v); If C++0x allowed the conversion operator to be overloaded by non-member functions, we could for example define the conversion, implicitly or explicitly, between unrelated types: template <typename To, typename From> operator To(const From& val); For example we could specialize the conversion from chrono::time_point to posix_time::ptime as follows: template <class Clock, class Duration> operator boost::posix_time::ptime( const boost::chrono::time_point<Clock, Duration>& from) { using namespace boost; typedef chrono::time_point<Clock, Duration> time_point_t; typedef chrono::nanoseconds duration_t; typedef duration_t::rep rep_t; rep_t d = chrono::duration_cast<duration_t>( from.time_since_epoch()).count(); rep_t sec = d/1000000000; rep_t nsec = d%1000000000; return posix_time::from_time_t(0)+ posix_time::seconds(static_cast<long>(sec))+ posix_time::nanoseconds(nsec); } And we could use the conversion like any other conversion. So the question is: what is the rationale for not allowing the C++ conversion operator to be overloaded with non-member functions?

    Read the article

  • Cleaning a string consisting of html/server-side tags in Java

    - by Denzil
    I have text like: I've got a date with this fellow tomorrow. Well me and thousands of others. <br /><br /><img src="http://www.newwest.net/images/thumbnails_feature/barack_obama_westerners.jpg"><br /><br />Tomorrow morning I will be getting up at stupid o'clock and driving up to Manchester, NH to see Barak Obama speak. <br /><br />You all should come too!<br /><br /><a href="http://nh.barackobama.com/manchesterchange">RSVP for the event</a> I would like to clean it to: I've got a date with this fellow tomorrow. Well me and thousands of others http://www.newwest.net/images/thumbnails_feature/barack_obama_westerners.jpg Tomorrow morning I will be getting up at stupid o'clock and driving up to Manchester, NH to see Barak Obama speak. You all should come too! h**p://nh.barackobama.com/manchesterchange RSVP for the event I would like to write a Java program for this. Any pointers/suggestions would be appreciated. The tags aren't limited to the ones in the above post; this was just an example. Thanks! PS: Replace the *'s with t's in the second hyperlink, as Stack Overflow doesn't allow me to post more than one link.

    Read the article

  • How to use javascript class from within document ready

    - by Richard
    Hi, I have this countdown script wrapped as an object located in a separate file. When I then want to set up a counter, the timeout function in the countdown class cannot find the object that I set up within the document ready. I sort of get that everything set up in the document ready is confined to that scope; however, it is possible to call functions within other document ready blocks. Does anyone have a solution for how I could set up multiple counters / objects? Or do these basic JavaScript classes have to become plugins? This is some sample code showing how the class begins: function countdown(obj) { this.obj = obj; this.Div = "clock"; this.BackColor = "white"; this.ForeColor = "black"; this.TargetDate = "12/31/2020 5:00 AM"; this.DisplayFormat = "%%D%% Days, %%H%% Hours, %%M%% Minutes, %%S%% Seconds."; this.CountActive = true; this.DisplayStr; this.Calcage = cd_Calcage; this.CountBack = cd_CountBack; this.Setup = cd_Setup; } thanks, Richard

    Read the article

  • Performance degrades for more than 2 threads on Xeon X5355

    - by zoolii
    Hi All, I am writing an application using boost threads and boost barriers to synchronize the threads. I have two machines to test the application. Machine 1 is a Core 2 Duo (T8300) machine (Windows XP Professional, 4GB RAM) where I am getting the following performance figures: number of threads: 1, TPS: 21; number of threads: 2, TPS: 35 (66% improvement). Further increases in the number of threads decrease the TPS, but that is understandable as the machine has only two cores. Machine 2 is a machine with two quad-core (Xeon X5355) CPUs (Windows 2003 Server with 4GB RAM) and has 8 effective cores: number of threads: 1, TPS: 21; number of threads: 2, TPS: 27 (28% improvement); number of threads: 4, TPS: 25; number of threads: 8, TPS: 24. As you can see, performance degrades after 2 threads (even though the machine has 8 cores). If the program had some bottleneck, it should have degraded with 2 threads as well. Any ideas or explanations? Does the OS play some role in the performance? It seems like the Core 2 Duo (2.4GHz) scales better than the Xeon X5355 (2.66GHz), even though the Xeon has the higher clock speed. Thank you -Zoolii

    Read the article

  • "hour" int taken from NSDate not behaving as expected at midnight??

    - by Eric
    I feel like I've lost my mind. Can someone tell me what's going on here? Also, I'm sure there is a better way to do what I'm trying to do, but I'm not interested in that now. I'd just like to solve the mystery of why my ints are not responding to logic as expected. // Set "At: " field close to current time NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init]; [dateFormatter setDateFormat:@"HH"]; int hour = [[dateFormatter stringFromDate:[NSDate date]] intValue]; [dateFormatter setDateFormat:@"mm"]; int minute = [[dateFormatter stringFromDate:[NSDate date]] intValue]; NSLog(@"currently %i:%i",hour, minute); if(hour >= 12){ // convert to AM/PM selectedMeridiem = 1; if(hour != 12){ hour = hour - 12; } } else{ selectedMeridiem = 0; } selectedHour = hour - 1; if(selectedHour <= 0){ selectedHour = 11; } When I debug the above code with my clock set to 12:XX AM, the integer "hour" returned is 0. But then any if statements with the condition if(hour == 0) are not evaluated. Likewise, this would not be evaluated either: if(hour < 1). The code above puts the hour int into another int, selectedHour (don't worry about why I'm doing this for now), but selectedHour suffers from the same weird behavior; the if(selectedHour <= 0) line is never evaluated. Am I going crazy, or am I just an idiot? Maybe there's some behavior of 0 integers that I'm not aware of. All of my code runs fine as long as it's not 12:XX AM.

    Read the article

  • Custom webserver caching

    - by Mark Kinsella
    I'm working with a custom webserver on an embedded system and having some problems correctly setting my HTTP headers for caching. Our webserver generates all dynamic content as XML and we're using semi-static XSL files to display it, with some dynamic JSON requests thrown in for good measure along with semi-static images. I say "semi-static" because the problems occur when we need to do a firmware update, which might change the XSL and image files. Here's what needs to be done: cache the XSL and image files, and do not cache the XML and JSON responses. I have full control over the HTTP response and am currently: using ETags with the XSL and image files, using the modified time and size to generate the ETag; setting Cache-Control: no-cache on the XML and JSON responses. As I said, everything works dandy until a firmware update, when the old XSL and image files are sometimes still served from the cache. I've seen it work fine with the latest versions of Firefox and Safari but have had some problems with IE. I know one solution to this problem would be to simply rename the XSL and image files after each version (e.g. logo-v1.1.png, logo-v1.2.png) and set the Expires header to a date in the future, but this would be difficult with the XSL files and I'd like to avoid it. Note: There is a clock on the unit, but it requires the user to set it and might not be 100% reliable, which is what might be causing my caching issues when using ETags. What's the best practice that I should employ? I'd like to avoid as many webserver requests as possible, but invalidating old XSL and image files after a software update is the #1 priority.
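
    For illustration only (the header values below are made up, not taken from the poster's server), the two behaviours described above map onto responses along these lines:

        Semi-static XSL/image response (ETag built from modified time + size):
            HTTP/1.1 200 OK
            ETag: "4f2a-1b8c"

        Dynamic XML/JSON response (never cached):
            HTTP/1.1 200 OK
            Cache-Control: no-cache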

    Read the article

  • Win32 DLL importing issues (DllMain)

    - by brady
    I have a native DLL that is a plug-in to a different application (one that I have essentially zero control of). Everything works just great until I link with an additional .lib file (which links my DLL to another DLL named ABQSMABasCoreUtils.dll). This file contains some additional API from the parent application that I would like to utilize. I haven't even written any code that uses any of the exported functions, but just linking in this new DLL is causing problems. Specifically, I get the following error when I attempt to run the program: The application failed to initialize properly (0xc0000025). Click on OK to terminate the application. I believe I have read somewhere that this is typically due to a DllMain function returning FALSE. Also, the following message is written to the standard output: ERROR: Memory allocation attempted before component initialization I am almost 100% sure this error message is coming from the application and is not some type of Windows error. Looking into this a little more (aka flailing around and flipping every switch I know of) I linked with /MAP turned on and found this in the resulting .map file: 0001:000af220 ??3@YAXPEAX@Z 00000001800b0220 f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll 0001:000af226 ??2@YAPEAX_K@Z 00000001800b0226 f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll 0001:000af22c ??_U@YAPEAX_K@Z 00000001800b022c f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll 0001:000af232 ??_V@YAXPEAX@Z 00000001800b0232 f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll If I undecorate those names using "undname" they give the following (same order): void __cdecl operator delete(void * __ptr64) void * __ptr64 __cdecl operator new(unsigned __int64) void * __ptr64 __cdecl operator new[](unsigned __int64) void __cdecl operator delete[](void * __ptr64) I am not sure I understand how anything from ABQSMABasCoreUtils.dll can exist within this .map file, or why my DLL is even attempting to load ABQSMABasCoreUtils.dll if I don't have any code that references it. Can anyone help me put this information together and find out why this isn't working? For what it's worth, I have confirmed via "dumpbin" that the parent application imports the same DLL (ABQSMABasCoreUtils.dll), so it is being loaded no matter what. I have also tried delay loading this DLL in my DLL but that did not change the results.

    Read the article

  • updating system's time using .Net

    - by user62958
    I am trying to update my system time using the following: [StructLayout(LayoutKind.Sequential)] private struct SYSTEMTIME { public ushort wYear; public ushort wMonth; public ushort wDayOfWeek; public ushort wDay; public ushort wHour; public ushort wMinute; public ushort wSecond; public ushort wMilliseconds; } [DllImport("kernel32.dll", EntryPoint = "GetSystemTime", SetLastError = true)] private extern static void Win32GetSystemTime(ref SYSTEMTIME lpSystemTime); [DllImport("kernel32.dll", EntryPoint = "SetSystemTime", SetLastError = true)] private extern static bool Win32SetSystemTime(ref SYSTEMTIME lpSystemTime); public void SetTime() { TimeSystem correctTime = new TimeSystem(); DateTime sysTime = correctTime.GetSystemTime(); // Call the native GetSystemTime method // with the defined structure. SYSTEMTIME systime = new SYSTEMTIME(); Win32GetSystemTime(ref systime); // Copy the correct time into the structure. systime.wYear = (ushort)sysTime.Year; systime.wMonth = (ushort)sysTime.Month; systime.wDayOfWeek = (ushort)sysTime.DayOfWeek; systime.wDay = (ushort)sysTime.Day; systime.wHour = (ushort)sysTime.Hour; systime.wMinute = (ushort)sysTime.Minute; systime.wSecond = (ushort)sysTime.Second; systime.wMilliseconds = (ushort)sysTime.Millisecond; Win32SetSystemTime(ref systime); } When I debug, everything looks good and all the values are correct, but when it calls Win32SetSystemTime(ref systime) the actual system time (the displayed time) doesn't change and stays the same. The strange part is that when I call Win32GetSystemTime(ref systime) it gives me the new updated time. Can someone give me some help on this?

    Read the article

  • Algorithm for count-down timer that can add on time

    - by Person
    I'm making a general timer that has functionality to count up from 0 or count down from a certain number. I also want it to allow the user to add and subtract time. Everything is simple to implement except for the case in which the timer is counting down from some number, and the user adds or subtracts time from it. For example: (m_clock is an instance of SFML's Clock) float Timer::GetElapsedTime() { if ( m_forward ) { m_elapsedTime += m_clock.GetElapsedTime() - m_elapsedTime; } else { m_elapsedTime -= m_elapsedTime - m_startingTime + m_clock.GetElapsedTime(); } return m_elapsedTime; } To be a bit more clear, imagine that the timer starts at 100 counting down. After 10 seconds, the above function would look like 100 -= 100 - 100 + 10 which equals 90. If it was called after 20 more seconds it would look like 90 -= 90 - 100 + 30 which equals 70. This works for normal counting, but if the user calls AddTime() ( just m_elapsedTime += arg ) then the algorithm for backwards counting fails miserably. I know that I can do this using more members and keeping track of previous times, etc. but I'm wondering whether I'm missing some implementation that is extremely obvious. I'd prefer to keep it as simple as possible in that single operation.
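
    One way to sidestep the feedback problem, sketched below under the assumption of the SFML 1.x Clock API used above (the class and member names are illustrative, not the original Timer): keep the user's adjustments in a separate offset, so the clock reading is never written back into the elapsed value.

        #include <SFML/System.hpp>

        // Minimal sketch: the clock runs untouched; AddTime/SubtractTime only move
        // a separate offset, so count-up and count-down both stay consistent.
        class SimpleTimer
        {
        public:
            SimpleTimer(float start, bool forward)
                : m_start(start), m_offset(0.f), m_forward(forward) {}

            float GetElapsedTime()
            {
                float t = m_clock.GetElapsedTime();        // seconds since construction
                return m_forward ? m_start + m_offset + t  // counting up
                                 : m_start + m_offset - t; // counting down
            }

            void AddTime(float seconds)      { m_offset += seconds; }
            void SubtractTime(float seconds) { m_offset -= seconds; }

        private:
            sf::Clock m_clock;  // SFML 1.x clock (GetElapsedTime() returns seconds)
            float m_start;      // initial value, e.g. 0 or 100
            float m_offset;     // net user adjustment
            bool  m_forward;    // true = count up, false = count down
        };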

    Read the article

  • Why is the NTP settle time too long?

    - by fercis
    I am simply trying to set up NTP on my system. The server and the clients run on my computers, which are linked together over a local link. One of them will have the reference clock. Both the server and the client are Ubuntu Linux machines. I installed the NTP daemon on both sides. On the clients, I enter the IP address of the server in /etc/ntp.conf. Everything works fine. However, getting the client's time corrected takes far too long (around 17 minutes). Is it possible to get the correct time right at startup? I wrote some code that regularly calls "ntpdate " via a system call and that solves the problem, but there has to be something that allows me to decrease the client's poll time to 1-2 minutes. There are settings such as minpoll and maxpoll in ntp.conf, but I couldn't manage to understand their effect, because even with the best configuration (minpoll 4? 16 seconds?) I still cannot see the client side correct its time in under 10 minutes. In addition, in some cases my client is an embedded system (an ARM IGEP board) and it always boots with an irrelevant date (2-3 years in the past), so the time it takes to correct the clock should not depend on the size of the time difference either.
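
    For reference, a client-side /etc/ntp.conf along the lines below is a common way to speed up the first synchronisation (the server address is a placeholder, not the poster's): iburst makes ntpd send a quick burst of packets at startup instead of waiting out several normal poll intervals, and minpoll/maxpoll bound the poll interval as powers of two seconds. For a box that boots years off, ntpd is normally started with -g so it will accept the large initial step (by default it refuses offsets over about 1000 seconds), or ntpdate / ntpd -q is run once before the daemon starts.

        # client /etc/ntp.conf - a sketch, not the actual configuration from the post
        server 192.168.1.1 iburst minpoll 4 maxpoll 6   # polls between 2^4 = 16 s and 2^6 = 64 s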

    Read the article

  • Given a main function and a cleanup function, how (canonically) do I return an exit status in Bash/Linux?

    - by Zac B
    Context: I have a bash script (a wrapper for other scripts, really), that does the following pseudocode: do a main function if the main function returns: $returncode = $? #most recent return code if the main function runs longer than a timeout: kill the main function $returncode = 140 #the semi-canonical "exceeded allowed wall clock time" status run a cleanup function if the cleanup function returns an error: #nonzero return code exit $? #exit the program with the status returned from the cleanup function else #cleanup was successful .... Question: What should happen after the last line? If the cleanup function was successful, but the main function was not, should my program return 0 (for the successful cleanup), or $returncode, which contains the (possibly nonzero and unsuccessful) return code of the main function? For a specific application, the answer would be easy: "it depends on what you need the script for." However, this is more of a general/canonical question (and if this is the wrong place for it, kill it with fire): in Bash (or Linux in general) programming, do you typically want to return the status that "means" something (i.e. $returncode) or do you ignore such subjectivities and simply return the code of the most recent function? This isn't Bash-specific: if I have a standalone executable of any kind, how, canonically should it behave in these cases? Obviously, this is somewhat debatable. Even if there is a system for these things, I'm sure that a lot of people ignore it. All the same, I'd like to know. Cheers!

    Read the article

  • LNK 1104 error to lib file - Continues despite removing includes and links

    - by user1556594
    A link error for a lib file popped up out of the blue in a C++ application of mine, after the code was working fine in my last session. Error 1 error LNK1104: cannot open file '..........\Program Files (x86)\FMOD SoundSystem\FMOD Programmers API Windows\api\lib\fmodex_vc.lib' I triple-checked that my project directories were set up correctly to link to the lib file, that the file existed in said directory, and that it was a working version of the .lib. My next step was to remove the includes of the file and the links, to bypass the error and work on the rest of my code until the problem was solved. The error remains, however, despite: Commenting out absolutely every include relating to the lib. Commenting out absolutely every line of code dependent on the includes. Removing the directory from VC++ Directories in the project properties. Checking that the Additional Library Directories field was also clear of references. To my understanding this should have made the library and related code virtually non-existent to the compiler. What am I missing? The library itself is fmodex_vc.lib - part of the FMOD API for providing sound to interactive applications. Again, the application was working one session, but failed to compile the next. I hadn't touched the code since, so this led me to believe some aspect of VS is at fault. I'd like to avoid the time involved in re-installing if possible, as I'm on the clock for a review tomorrow evening and there are a few more things I'd like to smooth out before then. If necessary, however, I won't hesitate. Very much appreciate the help.

    Read the article

  • C++ syntax issue

    - by Doug
    It's late and I can't figure out what is wrong with my syntax. I have asked other people and they can't find the syntax error either so I came here on a friend's advice. template <typename TT> bool PuzzleSolver<TT>::solve ( const Clock &pz ) { possibConfigs_.push( pz.getInitial() ); vector< Configuration<TT> > next_; //error is on next line map< Configuration<TT> ,Configuration<TT> >::iterator found; while ( !possibConfigs_.empty() && possibConfigs_.front() != pz.getGoal() ) { Configuration<TT> cfg = possibConfigs_.front(); possibConfigs_.pop(); next_ = pz.getNext( cfg ); for ( int i = 0; i < next_.size(); i++ ) { found = seenConfigs_.find( next_[i] ); if ( found != seenConfigs_.end() ) { possibConfigs_.push( next_[i] ); seenConfigs_.insert( make_pair( next_[i], cfg ) ); } } } } What is wrong? Thanks for any help.

    Read the article

  • Storing date/times as UTC in database

    - by James
    I am storing date/times in the database as UTC and converting them inside my application back to local time based on the specific timezone. Say for example I have the following date/time: 01/04/2010 00:00 Say it is for a country, e.g. the UK, which observes DST (Daylight Saving Time), and at this particular time we are in daylight saving. When I convert this date to UTC and store it in the database it is actually stored as: 31/03/2010 23:00 as the date is adjusted -1 hour for DST. This works fine when you are observing DST at the time of submission. However, what happens when the clocks are adjusted back? When I pull that date from the database and convert it to local time, that particular datetime would be seen as 31/03/2010 23:00 when in reality it was processed as 01/04/2010 00:00. Correct me if I am wrong, but isn't this a bit of a flaw when storing times as UTC? Example of timezone conversion: Basically what I am doing is storing the date/times of when information is submitted to my system in order to allow users to run a range report. Here is how I am storing the date/times: public DateTime LocalDateTime(string timeZoneId) { var tzi = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId); return TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, tzi).ToLocalTime(); } Storing as UTC: var localDateTime = LocalDateTime("AUS Eastern Standard Time"); WriteToDB(localDateTime.ToUniversalTime());

    Read the article

  • Boost Date_Time problem compiling a simple program

    - by Andry
    Hello! I'm writing a very simple program using the Boost Date_Time library. int main(int argc, char** argv) { using namespace boost::posix_time; date d(2002,Feb,1); //an arbitrary date ptime t1(d, hours(5)+nanosec(100)); //date + time of day offset ptime t2 = t1 - minutes(4)+seconds(2); ptime now = second_clock::local_time(); //use the clock date today = now.date(); //Get the date part out of the time } Well, I cannot compile it; the compiler does not recognize a type... I have used many features of the Boost libs, like serialization and more. I built them correctly and, looking in my /usr/local/lib folder, I can see that libboost_date_time.so is there (a good sign, which means I was able to build that library). When I compile I write the following: g++ -lboost_date_time main.cpp But the errors it shows me when I specify the lib are the same as those I get when I do not specify any lib. What is going on? Does anyone know? The error is main.cpp: In function ‘int main(int, char**)’: main.cpp:9: error: ‘date’ was not declared in this scope main.cpp:9: error: expected ‘;’ before ‘d’ main.cpp:10: error: ‘d’ was not declared in this scope main.cpp:10: error: ‘nanosec’ was not declared in this scope main.cpp:13: error: expected ‘;’ before ‘today’

    Read the article

  • Delphi: How to avoid EIntOverflow underflow when subtracting?

    - by Ian Boyd
    Microsoft already says, in the documentation for GetTickCount, that you can never simply compare tick counts to check whether an interval has passed. e.g.: Incorrect (pseudo-code): DWORD endTime = GetTickCount + 10000; //10 s from now ... if (GetTickCount > endTime) break; The above code is bad because it is susceptible to rollover of the tick counter. For example, assume that the clock is near the end of its range: endTime = 0xfffffe00 + 10000 = 0x00002510; //9,488 decimal Then you perform your check: if (GetTickCount > endTime) Which is satisfied immediately, since GetTickCount is larger than endTime: if (0xfffffe01 > 0x00002510) The solution: Instead you should always subtract the two tick counts: DWORD startTime = GetTickCount; ... if (GetTickCount - startTime) > 10000 //if it's been 10 seconds break; Looking at the same math: if (GetTickCount - startTime) > 10000 if (0xfffffe01 - 0xfffffe00) > 10000 if (1 > 10000) Which is all well and good in C/C++, where the compiler behaves a certain way. But what about Delphi? When I perform the same math in Delphi, with overflow checking on ({$Q+}, {$OVERFLOWCHECKS ON}), the subtraction of the two tick counts generates an EIntOverflow exception when the TickCount rolls over: if (0x00000100 - 0xffffff00) > 10000 0x00000100 - 0xffffff00 = 0x00000200 What is the intended solution for this problem? Edit: I've tried to temporarily turn off OVERFLOWCHECKS: {$OVERFLOWCHECKS OFF} delta = GetTickCount - startTime; {$OVERFLOWCHECKS ON} But the subtraction still throws an EIntOverflow exception. Is there a better solution, involving casts and larger intermediate variable types?
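
    As an aside, the unsigned wrap-around arithmetic the post relies on is easy to check in isolation; a minimal standalone C++ illustration (not Delphi, and unrelated to the $Q/$OVERFLOWCHECKS directives):

        #include <cassert>
        #include <cstdint>

        int main()
        {
            // Unsigned 32-bit subtraction is modulo 2^32, so the elapsed interval
            // is still correct when the tick counter wraps past 0xFFFFFFFF.
            std::uint32_t startTime = 0xFFFFFF00u; // sampled just before rollover
            std::uint32_t now       = 0x00000100u; // sampled just after rollover
            std::uint32_t elapsed   = now - startTime;
            assert(elapsed == 0x200u);             // 512 ticks, no overflow trap
            return 0;
        }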

    Read the article

  • High precision event timer

    - by rahul jv
    #include "target.h" #include "xcp.h" #include "LocatedVars.h" #include "osek.h" /** * This task is activated every 10ms. */ long OSTICKDURATION; TASK( Task10ms ) { void XCP_FN_TYPE Xcp_CmdProcessor( void ); uint32 startTime = GetQueryPerformanceCounter(); /* Trigger DAQ for the 10ms XCP raster. */ if( XCPEVENT_DAQ_OVERLOAD & Xcp_DoDaqForEvent_10msRstr() ) { ++numDaqOverload10ms; } /* Update those variables which are modified every 10ms. */ counter16 += slope16; /* Trigger STIM for the 10ms XCP raster. */ if( enableBypass10ms ) { if( XCPEVENT_MISSING_DTO & Xcp_DoStimForEvent_10msRstr() ) { ++numMissingDto10ms; } } duration10ms = (uint32)( ( GetQueryPerformanceCounter() - startTime ) / STOPWATCH_TICKS_PER_US ); } What would be the easiest (and/or best) way to synchronise to some accurate clock to call a function at a specific time interval, with little jitter during normal circumstances, from C++? I am working on WINDOWS operating system now. The above code is for RTAS OSEK but I want to call a function at a specific time interval for windows operating system. Could anyone assist me in c++ language ??

    Read the article

  • How can I show only the ImagePicker controller in landscape mode while the whole application stays in portrait mode?

    - by Wolvorin
    I have tried almost all the answers Google and SO turned up during the last two days, but no luck :( What I want is for my whole application to be in portrait mode only, and that is working fine on iOS 6+, which is the only support required right now. The problem is that I need to launch the UIImagePickerController with the camera image source type in landscape mode only. What I have tried so far: (1) I created a category on UIImagePickerController for orientation. -(BOOL)shouldAutorotate { return NO; } -(NSUInteger)supportedInterfaceOrientations { return UIInterfaceOrientationMaskLandscape; } - (UIInterfaceOrientation)preferredInterfaceOrientationForPresentation { return UIInterfaceOrientationLandscapeLeft; } Like this. But the camera view is not properly aligned. It just follows the orientation of the device with a +/- 90 degree offset, not what I require. Even the camera button shown by the camera view as a camera control follows the camera view, i.e. the view is rotated 90 degrees anticlockwise and stays that way. Is there any way to use the camera with proper alignment, or do I have to use another framework to work with it? Please help me. I have been stuck on this for the last two days.

    Read the article

  • Passing time from server to client

    - by gskoli
    Dear all, I am creating a digital clock using images and JavaScript, but I want it to use the time from the server. How can I do that? I am getting the time from the server and passing it to Date. Below is the snippet I am using: var time_str = document.clock_form.time_str.value ; //alert (time_str); function dotime(){ theTime=setTimeout('dotime();',1000); //d = new Date(Date.parse(time_str)); d= new Date(time_str); hr= d.getHours()+100; mn= d.getMinutes()+100; se= d.getSeconds()+100; var time_str = document.clock_form.time_str.value ; //alert (time_str); alert(' TIME ---> '+hr+' :: '+mn+' :: '+ se); if(hr==100){ hr=112;am_pm='am'; } else if(hr<112){ am_pm='am'; } else if(hr==112){ am_pm='pm'; } else if(hr>112){ am_pm='pm';hr=(hr-12); } tot=''+hr+mn+se; document.hr1.src = '/flash_files/digits/dg'+tot.substring(1,2)+'.gif'; document.hr2.src = '/flash_files/digits/dg'+tot.substring(2,3)+'.gif'; document.mn1.src = '/flash_files/digits/dg'+tot.substring(4,5)+'.gif'; document.mn2.src = '/flash_files/digits/dg'+tot.substring(5,6)+'.gif'; document.se1.src = '/flash_files/digits/dg'+tot.substring(7,8)+'.gif'; document.se2.src = '/flash_files/digits/dg'+tot.substring(8,9)+'.gif'; document.ampm.src= '/flash_files/digits/dg'+am_pm+'.gif'; } dotime(); But it is not working. Please help me out. Thanks in advance.

    Read the article

  • how should i create my own 'now' / DateTime.Now ?

    - by Michel
    Hi all, I'm starting to build a part of a system that will hold a lot of DateTime validations, with a lot of 'was it done before now' or 'will it start within an hour' checks. The usual way to go is to use DateTime.Now to get the current time. I predict, however, that during unit testing that will give me a real headache, because I will have to set up my test data relative to the time the tests run instead of using a default set of test data. So I thought: why not use my own 'now', so I can set the current datetime to any moment in time? As I don't want to change the test server's internal clock, I was thinking about this solution, and I was wondering what you think of it. The basic idea is that I use my own DateTime class. That class gives you the current datetime, but you can also set your own time from outside. public static class MyDateTime { private static TimeSpan _TimeDifference = TimeSpan.Zero; public static DateTime Now { get { return DateTime.Now + _TimeDifference; } } public static void SetNewNow(DateTime newNow) { _TimeDifference = newNow - DateTime.Now; } public static void AddToRealTime(TimeSpan timeSpan ) { _TimeDifference = timeSpan; } public static void SubtractFromRealTime(TimeSpan timeSpan) { _TimeDifference = - timeSpan; } }

    Read the article

  • issue with vhdl structural coding

    - by user3699982
    The code below is a simple vhdl structural architecture, however, the concurrent assignment to the signal, comb1, is upsetting the simulation with the outputs (tb_lfsr_out) and comb1 becoming undefined. Please, please help, thank you, Louise. library IEEE; use IEEE.STD_LOGIC_1164.all; entity testbench is end testbench; architecture behavioural of testbench is CONSTANT clock_frequency : REAL := 1.0e9; CONSTANT clock_period : REAL := (1.0/clock_frequency)/2.0; signal tb_master_clk, comb1: STD_LOGIC := '0'; signal tb_lfsr_out : std_logic_vector(2 DOWNTO 0) := "111"; component dff port ( q: out STD_LOGIC; d, clk: in STD_LOGIC ); end component; begin -- Clock/Start Conversion Generator tb_master_clk <= (NOT tb_master_clk) AFTER (1 SEC * clock_period); comb1 <= tb_lfsr_out(0) xor tb_lfsr_out(2); dff6: dff port map (tb_lfsr_out(2), tb_lfsr_out(1), tb_master_clk); dff7: dff port map (tb_lfsr_out(1), tb_lfsr_out(0), tb_master_clk); dff8: dff port map (tb_lfsr_out(0), comb1, tb_master_clk); end behavioural;

    Read the article

  • Updating text on a CCLabelTTF ends in bad access?

    - by TheDeveloper
    I'm doing a little game in Cocos2D and I have a countdown clock. Note: as I am just trying to fix a bug, I am not working on cleanup so the timer can stop, etc. Here is the code I'm using to set up the label and start the timer: timer = [CCLabelTTF labelWithString:@"10.0000" fontName:@"Helvetica" fontSize:20]; timerDisplay = timer; timerDisplay.position = ccp(277,310); [self addChild:timerDisplay]; timeLeft = 10; timerObject = [NSTimer scheduledTimerWithTimeInterval:0.1 target:self selector:@selector(updateTimer) userInfo:nil repeats:YES]; Note: timeLeft is a double. This is updateTimer's code: -(void)updateTimer { NSLog(@"Got Called!"); timeLeft = timeLeft -0.1; [timer setString:[NSString stringWithFormat:@"%f",timeLeft]]; timerDisplay = timer; timerDisplay.position = ccp(277,310); [self removeChild:timerDisplay cleanup:YES]; //[self addChild:timerDisplay]; if (timeLeft <= 0) { [timerObject invalidate]; } } When I run this I alternate between crashing on this line: [timer setString:[NSString stringWithFormat:@"%f",timeLeft]]; where the debugger's green arrow annotation shows Thread 1: EXC_BAD_ACCESS (code=2, address=0x8), and on 0x197a7ff: movl 16(%edi), %esi where it again shows Thread 1: EXC_BAD_ACCESS (code=2, address=0x8).

    Read the article
