Search Results

Search found 61797 results on 2472 pages for 'time wait'.


  • JVM system time runs faster than HP UNIX OS system time

    - by winston
    Hello, I have the following output from a simple debug jsp:

        Weblogic Startup Since:    Friday, October 19, 2012, 08:36:12 AM
        Database Current Time:     Wednesday, December 12, 2012, 11:43:44 AM
        Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:45:38 AM

    Line 1 is a variable recorded during WebLogic webapp startup. Line 2 is the output of the database query select sysdate from dual. Line 3 is the output of new Date() in Java code. I have checked with the shell date command that line 2 agrees with the OS time. The output of line 3 is the mystery; I don't know how the JVM arrives at it. On another machine with the same setup, the same jsp outputs:

        Weblogic Startup Since:    Tuesday, December 11, 2012, 02:29:06 PM
        Database Current Time:     Wednesday, December 12, 2012, 11:51:48 AM
        Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:51:50 AM

    And on another machine:

        Weblogic Startup Since:    Monday, December 10, 2012, 05:00:34 PM
        Database Current Time:     Wednesday, December 12, 2012, 11:52:03 AM
        Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:52:07 AM

    Findings: the pattern shows that the longer WebLogic has been up, the larger the discrepancy between OS time and JVM time. Can anybody help with the HP JVM? On HP-UX, NTP sync is done daily. Anyway, here are the server and JVM versions:

        HP-UX machinex B.11.31 U ia64 2426956366 unlimited-user license
        java version "1.6.0.04"
        Java(TM) SE Runtime Environment (build 1.6.0.04-jinteg_28_apr_2009_04_46-b00)
        Java HotSpot(TM) Server VM (build 11.3-b02-jre1.6.0.04-rc2, mixed mode)
        WebLogic Server Version: 10.3.2.0

    Java properties:

        java.runtime.name=Java(TM) SE Runtime Environment
        java.runtime.version=1.6.0.04-jinteg_28_apr_2009_04_46-b00
        java.vendor=Hewlett-Packard Co.
        java.vendor.url=http\://www.hp.com/go/Java
        java.version=1.6.0.04
        java.vm.name=Java HotSpot(TM) 64-Bit Server VM
        java.vm.info=mixed mode
        java.vm.specification.vendor=Sun Microsystems Inc.
        java.vm.vendor="Hewlett-Packard Company"
        sun.arch.data.model=64
        sun.cpu.endian=big
        sun.cpu.isalist=ia64r0
        sun.io.unicode.encoding=UnicodeBig
        sun.java.launcher=SUN_STANDARD
        sun.jnu.encoding=8859_1
        sun.management.compiler=HotSpot 64-Bit Server Compiler
        sun.os.patch.level=unknown
        os.name=HP-UX
        os.version=B.11.31
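
    A minimal way to check whether the drift is happening inside the JVM (rather than in WebLogic or the database) is to compare the JVM wall clock against its monotonic clock over a long interval. The class below is only an illustrative sketch (the class name and the one-minute sample interval are arbitrary), kept Java 6 compatible:

        // Logs how far the wall clock (System.currentTimeMillis) has moved
        // relative to the monotonic clock (System.nanoTime) since startup.
        // A steadily growing offset means the JVM's wall clock is running
        // fast or slow relative to elapsed real time.
        public class ClockDriftProbe {
            public static void main(String[] args) throws InterruptedException {
                final long wallStart = System.currentTimeMillis();
                final long monoStart = System.nanoTime();
                while (true) {
                    Thread.sleep(60000); // sample once a minute
                    long wallElapsed = System.currentTimeMillis() - wallStart;
                    long monoElapsed = (System.nanoTime() - monoStart) / 1000000L;
                    System.out.println("wall minus monotonic elapsed (ms): "
                            + (wallElapsed - monoElapsed));
                }
            }
        }

    If the printed offset keeps growing on the affected machine but stays near zero on the others, the discrepancy is in the JVM's view of the clock rather than in the application code.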

    Read the article

  • Apple: Time capsule, 2 questions

    - by Patrick
    1) Can I use Time Capsule as a server? Can I run operating systems on it? 2) I'm using Time Machine on my Mac with Time Capsule. Let's say my Mac crashes and I cannot use it anymore. Can I restore my Mac's disk onto another laptop from Time Capsule? In other words, can I have a perfect copy of my Mac's hard disk on another Mac? Thanks

    Read the article

  • Windows Media Player - always show Current Time/Total Time

    - by Siim K
    I'm using Windows Media Player 12 on Windows 7. When I open a video file, by default it only shows the current position (time). If I click on it once, it changes to the format Current Time / Total Time. Is there any way to make this format permanent (a registry hack or some setting I have not noticed)? Right now, every time I close WMP and open another file it's back to the default (current time only) setting.

    Read the article

  • Java threads, wait time always 00:00:00-Producer/Consumer

    - by user3742254
    I am currently doing a producer consumer problem with a number of threads and have had to set priorities and waits on them to ensure that one thread, the security thread, runs last. I have managed to do this and I have managed to get the buffer working. The last thing that I am required to do is to show the wait time of threads that are too large for the buffer and to calculate the average wait time. I have included code to do so, but every time I run the program the wait time is always returned as 00:00:00, and by extension, the average is returned as the same.

    I was speaking to one of my colleagues who said that it is not a matter of the code but rather a matter of the computer needing to work off of one processor, which can be adjusted in the task manager settings. He has an HP like myself, but his program prints the wait time 180 times, whereas mine prints usually about 3-7 times and is only 00:00:01 in one instance before finishing when I have made the processor adjustments. My other colleague has an iMac and hers puts out an average of 42:00:34 (42 minutes??). I am very confused about this because I can see no difference between our code and, like my colleague said, I was wondering whether it is a computer issue. I am obviously concerned, as I want to make sure that my code correctly calculates an average wait time, but that is impossible to tell when the wait times always show as 00:00:00.

    The thread duration, including the time it entered and exited the buffer, is calculated using a Timestamp import and by subtracting the start time from the end time. Is my code correct for this issue or is there something missing? I would be very grateful for any solutions. Below is my code.

    My Buffer class:

        package com.Com813cw;

        import java.text.DateFormat;
        import java.text.SimpleDateFormat;

        /**
         * Created by Rory on 10/08/2014.
         */
        class Buffer {
            private int contents, count = 0, process = 200;
            private int totalRam = 1000;
            private boolean available = false;
            private long start, end, wait, request = 0;
            private DateFormat time = new SimpleDateFormat("ss:SSS");
            public int avWaitTime = 0;

            public void average() {
                System.out.println("Average Application Request wait time: " + time.format(request / count));
            }

            public synchronized int get() {
                while (process <= 500) {
                    try {
                        wait();
                    } catch (InterruptedException e) {
                    }
                }
                process -= 200;
                System.out.println("CPU After Process " + process);
                notifyAll();
                return contents;
            }

            public synchronized void put(int value) {
                if (process <= 500) {
                    process += value;
                } else {
                    start = System.currentTimeMillis();
                    try {
                        wait();
                    } catch (InterruptedException e) {
                    }
                    end = System.currentTimeMillis();
                    wait = end - start;
                    count++;
                    request += wait;
                    System.out.println("Application Request Wait Time: " + time.format(wait));
                    process += value;
                    contents = value;
                    calcWait(wait, count);
                }
                notifyAll();
            }

            public void calcWait(long wait, int count) {
                this.avWaitTime = (int) (wait / count);
            }

            public void printWait() {
                System.out.println("Wait time is " + time.format(this.avWaitTime));
            }
        }

    My Spotify class:

        package com.Com813cw;

        import java.sql.Timestamp;

        /**
         * Created by Rory on 11/08/2014.
         */
        class Spotify extends Thread {
            private Buffer buffer;
            private int number;
            private int bytes = 250;

            public Spotify(Buffer c, int number) {
                buffer = c;
                this.number = number;
            }

            long startTime = System.currentTimeMillis();

            public void run() {
                for (int i = 0; i < 20; i++) {
                    buffer.put(bytes);
                    System.out.println(getName() + this.number + " put: " + bytes + " bytes ");
                    try {
                        sleep(1000);
                    } catch (InterruptedException e) {
                    }
                }
                long endTime = System.currentTimeMillis();
                long timeTaken = endTime - startTime;
                java.util.Date date = new java.util.Date();
                System.out.println("-----------------------------");
                System.out.println("Spotify has finished executing.");
                System.out.println("Time taken to execute was " + timeTaken + " milliseconds");
                System.out.println("Time that Spotify thread exited Buffer was " + new Timestamp(date.getTime()));
                System.out.println("-----------------------------");
            }
        }

    My BubbleWitch class:

        package com.Com813cw;

        import java.lang.*;
        import java.lang.System;
        import java.sql.Timestamp;

        /**
         * Created by Rory on 10/08/2014.
         */
        class BubbleWitch2 extends Thread {
            private Buffer buffer;
            private int number;
            private int bytes = 100;

            public BubbleWitch2(Buffer c, int number) {
                buffer = c;
                this.number = number;
            }

            long startTime = System.currentTimeMillis();

            public void run() {
                for (int i = 0; i < 10; i++) {
                    buffer.put(bytes);
                    System.out.println(getName() + this.number + " put: " + bytes + " bytes ");
                    try {
                        sleep(1000);
                    } catch (InterruptedException e) {
                    }
                }
                long endTime = System.currentTimeMillis();
                long timeTaken = endTime - startTime;
                java.util.Date date = new java.util.Date();
                System.out.println("-----------------------------");
                System.out.println("BubbleWitch2 has finished executing.");
                System.out.println("Time taken to execute was " + timeTaken + " milliseconds");
                System.out.println("Time Bubblewitch2 thread exited Buffer was " + new Timestamp(date.getTime()));
                System.out.println("-----------------------------");
            }
        }

    My Test class:

        package com.Com813cw;

        /**
         * Created by Rory on 10/08/2014.
         */
        public class ProducerConsumerTest {
            public static void main(String[] args) throws InterruptedException {
                Buffer c = new Buffer();
                BubbleWitch2 p1 = new BubbleWitch2(c, 1);
                Processor c1 = new Processor(c, 1);
                Spotify p2 = new Spotify(c, 2);
                SystemManagement p3 = new SystemManagement(c, 3);
                SecurityUpdate p4 = new SecurityUpdate(c, 4, p1, p2, p3);

                p1.setName("BubbleWitch2 ");
                p2.setName("Spotify ");
                p3.setName("System Management ");
                p4.setName("Security Update ");

                p1.setPriority(10);
                p2.setPriority(10);
                p3.setPriority(10);
                p4.setPriority(5);

                c1.start();
                p1.start();
                p2.start();
                p3.start();
                p4.start();

                p2.join();
                p3.join();
                p4.join();

                c.average();
                System.exit(0);
            }
        }

    My SecurityUpdate class:

        package com.Com813cw;

        import java.lang.*;
        import java.lang.System;
        import java.sql.Timestamp;

        /**
         * Created by Rory on 11/08/2014.
         */
        class SecurityUpdate extends Thread {
            private Buffer buffer;
            private int number;
            private int bytes = 150;
            private int process = 0;

            public SecurityUpdate(Buffer c, int number, BubbleWitch2 bubbleWitch2, Spotify spotify, SystemManagement systemManagement) throws InterruptedException {
                buffer = c;
                this.number = number;
                bubbleWitch2.join();
                spotify.join();
                systemManagement.join();
            }

            long startTime = System.currentTimeMillis();

            public void run() {
                for (int i = 0; i < 15; i++) {
                    buffer.put(bytes);
                    System.out.println(getName() + this.number + " put: " + bytes + " bytes");
                    try {
                        sleep(1500);
                    } catch (InterruptedException e) {
                    }
                }
                long endTime = System.currentTimeMillis();
                long timeTaken = endTime - startTime;
                java.util.Date date = new java.util.Date();
                System.out.println("-----------------------------");
                System.out.println("Security Update has finished executing.");
                System.out.println("Time taken to execute was " + timeTaken + " milliseconds");
                System.out.println("Time that SecurityUpdate thread exited Buffer was " + new Timestamp(date.getTime()));
                System.out.println("------------------------------");
            }
        }

    I'd be grateful, as I said, for any help as this is the last and most frustrating obstacle.
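
    As a hedged aside (the class and method names below are invented, not part of the assignment code): the time a thread spends blocked in wait() is usually easier to verify with the monotonic System.nanoTime() and plain arithmetic than by pushing a millisecond difference through SimpleDateFormat. A minimal sketch:

        // Illustrative only: times how long put() is blocked in wait() and keeps
        // a running average, formatted by hand as mm:ss.SSS.
        class TimedBuffer {
            private int used = 0;
            private final int capacity = 500;
            private long totalBlockedMillis = 0;
            private int blockedCount = 0;

            public synchronized void put(int value) throws InterruptedException {
                long start = System.nanoTime();            // monotonic start
                while (used + value > capacity) {
                    wait();                                // released by get()
                }
                long blockedMillis = (System.nanoTime() - start) / 1000000L;
                if (blockedMillis > 0) {
                    totalBlockedMillis += blockedMillis;
                    blockedCount++;
                    System.out.println("blocked for " + format(blockedMillis));
                }
                used += value;
                notifyAll();
            }

            public synchronized void get(int value) {
                used -= value;                             // free space, wake producers
                notifyAll();
            }

            public synchronized void printAverage() {
                long avg = (blockedCount == 0) ? 0 : totalBlockedMillis / blockedCount;
                System.out.println("average blocked time " + format(avg));
            }

            private static String format(long millis) {
                return String.format("%02d:%02d.%03d",
                        millis / 60000, (millis % 60000) / 1000, millis % 1000);
            }
        }

    Printing the raw millisecond values alongside the formatted strings also makes it obvious whether the waits really are near zero or are merely being truncated by the formatting.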

    Read the article

  • How should I log time spent on multiple tasks?

    - by xenoterracide
    In Joel's blog on evidence based scheduling he suggests making estimates based on the smallest unit of work and logging extra work back to the original task. The problem I'm now experiencing is that I'll have to create object A, with a subtask for method A, which creates object B, and then test all of the above. I create tasks for each of these, which seems to result in ok-ish estimates (I need practice), but when I go to log work I find that I worked on 4 tasks at once, because I tweak method A, find a bug in the test, and refactor object B all while coding it. How should I go about logging this work? Should I say I spent, for example, 2 hours on each of the 4 tasks I worked on in the 8 hour day?

    Read the article

  • Help to calculate hours and minutes between two time periods in Excel 2007

    - by Mestika
    Hi, I’m working on a very simple timesheet for my work in Excel 2007 but have run into trouble calculating the hours and minutes between two time periods. I have a field, timestart, which could be for example 08:30. Then I have a field, timestop, which could be for example 12:30. I can easily calculate the result myself, which is 4 hours, but how do I create a “total” column all the way down the sheet that calculates the hours spent on each entry? I’ve tried to play around with the time settings but it just gives me wrong numbers each time. Sincerely Mestika
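
    A hedged example of the usual approach (the cell references here are purely illustrative): if A2 holds the start time and B2 the stop time, the formula =B2-A2 returns the elapsed time, and giving the result cell the custom number format [h]:mm displays it as hours and minutes; =(B2-A2)*24 returns the duration as a decimal number of hours instead, which is often easier to sum in a timesheet.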

    Read the article

  • Time tracking tool for monitoring application usage

    - by wizlog
    I want to know how I'm really using my computer, and where the time goes (e.g. I have an English paper due, and I intend on getting it done, it's 2:30 PM... no wait, it's 8:30 PM...). What software can tell me:
    a. what programs I use, and when;
    b. within programs like Google Chrome or Firefox, which tabs I spend the most time on (so I know whether I'm spending the time playing a game or watching a movie on Hulu...)?

    Read the article

  • How to change time (Advanced Eastern Time) on Slackware 8.1

    - by r0ca
    Hi all, I have a Linux (Slackware) machine and the time/date is something like June 23rd 2003, 10:00 am (it's 11 here), and I am not able to set the time correctly. I changed the timezone to Montreal but the time is still wrong. Is there a way to force it to sync with my domain controller or even another online NTP server? Thanks, David.
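
    As an aside, one common quick check on a box this old, assuming the NTP client utilities are installed: running ntpdate pool.ntp.org as root performs a one-off sync against a public NTP server, and hwclock --systohc then writes the corrected time back to the hardware clock so it survives a reboot; the time zone itself is normally governed by the /etc/localtime link.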

    Read the article

  • Amazon EC2 instances changes server time/date on reboots and other time weirdness

    - by puffpio
    I have a Windows instance up in EC2. I manually set the timezone to Pacific. 1) For some reason using Windows' built-in time sync doesn't work in the instance... but whatever, I turn off automatic time syncing... but 2) On reboot the time on the server changes! For example, if I reboot it at 4 PM on Wednesday, when the server comes back up it will read 12 noon on Thursday! As a result any access to Amazon's other services like SimpleDB fails because the timestamps generated are too far off the current time. Has anyone seen this or figured this out?
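
    One hedged diagnostic worth trying: compare the output of w32tm /query /status before and after a reboot to see whether the Windows Time service configuration is being changed at boot, and run w32tm /resync from an elevated prompt once the instance is back up to force an immediate sync; that at least narrows down whether the jump comes from the time service or from the underlying virtual hardware clock.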

    Read the article

  • Time Warp

    - by Jesse
    It’s no secret that daylight savings time can wreak havoc on systems that rely heavily on dates. The system I work on is centered around recording dates and times, so naturally my co-workers and I have seen our fair share of date-related bugs. From time to time, however, we come across something that we haven’t seen before. A few weeks ago the following error message started showing up in our logs: “The supplied DateTime represents an invalid time. For example, when the clock is adjusted forward, any time in the period that is skipped is invalid.” This seemed very cryptic, especially since it was coming from areas of our application that are typically only concerned with capturing date-only values (no explicit time component) from the user, like reports that take a “start date” and “end date” parameter. For these types of parameters we just leave off the time component when capturing the date values, so midnight is used as a “placeholder” time. How is midnight an “invalid time”?

    Globalization Is Hard

    Over the last couple of years our software has been rolled out to users in several countries outside of the United States, including Brazil. Brazil begins and ends daylight savings time at midnight on pre-determined days of the year. On October 16, 2011 at midnight many areas in Brazil began observing daylight savings time, at which time their clocks were set forward one hour. This means that at the instant it became midnight on October 16, it actually became 1:00 AM, so any time between 12:00 AM and 12:59:59 AM never actually happened. Because we store all date values in the database in UTC, we always adjust any “local” dates provided by a user to UTC before using them as filters in a query. The error we saw was thrown by .NET when trying to convert the Brazilian local time of 2011-10-16 12:00 AM to UTC, since that local time never actually existed. We hadn’t experienced this same issue with any of our US customers because the daylight savings time changes in the US occur at 2:00 AM, which doesn’t conflict with our “placeholder” time of midnight.

    Detecting Invalid Times

    In .NET you might use code similar to the following for converting a local time to UTC:

        var localDate = new DateTime(2011, 10, 16); //2011-10-16 @ midnight
        const string timeZoneId = "E. South America Standard Time"; //Windows system timezone Id for "Brasilia" timezone.
        var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);
        var convertedDate = TimeZoneInfo.ConvertTimeToUtc(localDate, localTimeZone);

    The code above throws the “invalid time” exception referenced above. We could try to detect whether or not the local time is invalid with something like this:

        var localDate = new DateTime(2011, 10, 16); //2011-10-16 @ midnight
        const string timeZoneId = "E. South America Standard Time"; //Windows system timezone Id for "Brasilia" timezone.
        var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);
        if (localTimeZone.IsInvalidTime(localDate))
            localDate = localDate.AddHours(1);
        var convertedDate = TimeZoneInfo.ConvertTimeToUtc(localDate, localTimeZone);

    This code works in this particular scenario, but it hardly seems robust. It also does nothing to address the issue that can arise when dealing with the ambiguous times that fall around the end of daylight savings. When we roll the clocks back an hour, we record the same hour on the same day twice in a row. To continue on with our Brazil example, on February 19, 2012 at 12:00 AM, it will immediately become February 18, 2012 at 11:00 PM all over again. In this scenario, how should we interpret February 18, 2012 11:30 PM?

    Enter Noda Time

    I heard about Noda Time, the .NET port of the Java library Joda Time, a little while back and filed it away in the back of my mind under the “sounds-like-it-might-be-useful-someday” category. Let’s see how we might deal with the issue of invalid and ambiguous local times using Noda Time (note that as of this writing the samples below will only work using the latest code available from the Noda Time repo on Google Code; the NuGet package version 0.1.0 published 2011-08-19 will incorrectly report unambiguous times as being ambiguous):

        var localDateTime = new LocalDateTime(2011, 10, 16, 0, 0);
        const string timeZoneId = "Brazil/East";
        var timezone = DateTimeZone.ForId(timeZoneId);
        var localDateTimeMaping = timezone.MapLocalDateTime(localDateTime);
        ZonedDateTime unambiguousLocalDateTime;
        switch (localDateTimeMaping.Type)
        {
            case ZoneLocalMapping.ResultType.Unambiguous:
                unambiguousLocalDateTime = localDateTimeMaping.UnambiguousMapping;
                break;
            case ZoneLocalMapping.ResultType.Ambiguous:
                unambiguousLocalDateTime = localDateTimeMaping.EarlierMapping;
                break;
            case ZoneLocalMapping.ResultType.Skipped:
                unambiguousLocalDateTime = new ZonedDateTime(
                    localDateTimeMaping.ZoneIntervalAfterTransition.Start,
                    timezone);
                break;
            default:
                throw new InvalidOperationException(string.Format("Unexpected mapping result type: {0}", localDateTimeMaping.Type));
        }
        var convertedDateTime = unambiguousLocalDateTime.ToInstant().ToDateTimeUtc();

    Let’s break this sample down:

    - I’m using the Noda Time ‘LocalDateTime’ object to represent the local date and time. I’ve provided the year, month, day, hour, and minute (zeros for the hour and minute here represent midnight). You can think of a ‘LocalDateTime’ as an “invalidated” date and time; there is no information available about the time zone that this date and time belong to, so Noda Time can’t make any guarantees about its ambiguity.
    - The ‘timeZoneId’ in this sample is different than the ones above. In order to use the .NET TimeZoneInfo class we need to provide Windows time zone ids. Noda Time expects an Olson (tz / zoneinfo) time zone identifier and does not currently offer any means of mapping the Windows time zones to their Olson counterparts, though project owner Jon Skeet has said that some sort of mapping will be publicly accessible at some point in the future.
    - I’m making use of the Noda Time ‘DateTimeZone.MapLocalDateTime’ method to disambiguate the original local date time value. This method returns an instance of the Noda Time object ‘ZoneLocalMapping’ containing information about how the provided local date time maps to the provided time zone.
    - The disambiguated local date and time value will be stored in the ‘unambiguousLocalDateTime’ variable as an instance of the Noda Time ‘ZonedDateTime’ object. An instance of this object represents a completely unambiguous point in time and is comprised of a local date and time, a time zone, and an offset from UTC. Instances of ZonedDateTime can only be created from within the Noda Time assembly (the constructor is ‘internal’) to ensure to callers that each instance represents an unambiguous point in time.
    - The value of the ‘unambiguousLocalDateTime’ might vary depending upon the ‘ResultType’ returned by the ‘MapLocalDateTime’ method. There are three possible outcomes:
    - If the provided local date time is unambiguous in the provided time zone, I can immediately set the ‘unambiguousLocalDateTime’ variable from the ‘UnambiguousMapping’ property of the mapping returned by the ‘MapLocalDateTime’ method.
    - If the provided local date time is ambiguous in the provided time zone (i.e. it falls in an hour that was repeated when moving clocks backward from Daylight Savings to Standard Time), I can use the ‘EarlierMapping’ property to get the earlier of the two possible local dates to define the unambiguous local date and time that I need. I could have also opted to use the ‘LaterMapping’ property in this case, or even returned an error and asked the user to specify the proper choice. The important thing to note here is that as the programmer I’ve been forced to deal with what appears to be an ambiguous date and time.
    - If the provided local date time represents a skipped time (i.e. it falls in an hour that was skipped when moving clocks forward from Standard Time to Daylight Savings Time), I have access to the time intervals that fell immediately before and immediately after the point in time that caused my date to be skipped. In this case I have opted to disambiguate my local date and time by moving it forward to the beginning of the interval immediately following the skipped period. Again, I could opt to use the end of the interval immediately preceding the skipped period, or raise an error, depending on the needs of the application.
    - The point of this code is to convert a local date and time to a UTC date and time for use in a SQL Server database, so the final ‘convertedDate’ variable (typed as a plain old .NET DateTime) has its value set from a Noda Time ‘Instant’. An ‘Instant’ represents a number of ticks since 1970-01-01 at midnight (Unix epoch) and can easily be converted to a .NET DateTime in the UTC time zone using the ‘ToDateTimeUtc()’ method.

    This sample is admittedly contrived and could certainly use some refactoring, but I think it captures the general approach needed to take a local date and time and convert it to UTC with Noda Time. At first glance it might seem that Noda Time makes this “simple” code more complicated and verbose because it forces you to explicitly deal with the local date disambiguation, but I feel that the length and complexity of the Noda Time sample is proportionate to the complexity of the problem. Using TimeZoneInfo leaves you susceptible to overlooking ambiguous and skipped times that could result in run-time errors or (even worse) run-time data corruption in the form of a local date and time being adjusted to UTC incorrectly. I should point out that this research is my first look at Noda Time and I know that I’ve only scratched the surface of its full capabilities. I also think it’s safe to say that it’s still beta software for the time being, so I’m not rushing out to use it in production systems just yet, but I will definitely be tinkering with it more and keeping an eye on it as it progresses.

    Read the article

  • A lot of TCP: time wait bucket table overflow in CentOS 6

    - by divaka
    We have the following output from dmesg:

        __ratelimit: 33491 callbacks suppressed
        TCP: time wait bucket table overflow
        TCP: time wait bucket table overflow
        TCP: time wait bucket table overflow
        TCP: time wait bucket table overflow
        TCP: time wait bucket table overflow
        TCP: time wait bucket table overflow
        TCP: time wait bucket table overflow
        TCP: time wait bucket table overflow
        TCP: time wait bucket table overflow
        TCP: time wait bucket table overflow

    We also have the following setting:

        cat /proc/sys/net/ipv4/tcp_max_tw_buckets
        524288

    We are under some kind of attack, but we have not been able to work out what is causing this problem.

    Read the article

  • Internet Time tab has disappeared from the Date and Time applet of the Control Panel

    - by Robert Thornton
    Previously, there was an Internet Time tab on the Date and Time applet of the Control Panel, wherein one could force a query of an internet time server and also type in a different server from the ones supplied. However, this tab has now disappeared, and I need to have it back. I should mention that this machine has never been part of a domain, since it seems that machines that are such do not have such a tab. I should be obliged to anyone who can help me restore the missing tab. Windows 7 Home Premium Service Pack 1
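
    As an aside, even with the tab missing, the clock can still be forced to sync from an elevated command prompt with w32tm /resync, which works around the absent UI while the underlying cause is tracked down.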

    Read the article

  • Work Time Start / Stop Tracking Software

    - by Shaharyar
    Is there a software tool that allows you to keep track of someone's working time digitally? We are growing to an extent where we work remotely, and we would like to have fixed working times. All it should do is register when someone starts working (i.e. someone needs to log in somewhere or set a flag... really, it could be anything). Do you have any ideas?

    Read the article

  • Java's Object.wait method with nanoseconds: Is this a joke or am I missing something

    - by Krumia
    I was checking out the Java API source code (Java 8) just out of curiosity, and I found this in java/lang/Object.java. There are three methods named wait:

    - public final native void wait(long timeout): This is the core of all wait methods, which has a native implementation.
    - public final void wait(): Just calls wait(0).
    - And then there is public final void wait(long timeout, int nanos).

    The JavaDoc for that last method tells me that: "This method is similar to the wait method of one argument, but it allows finer control over the amount of time to wait for a notification before giving up. The amount of real time, measured in nanoseconds, is given by: 1000000*timeout+nanos". But this is how the method achieves "finer control over the amount of time to wait":

        if (nanos >= 500000 || (nanos != 0 && timeout == 0)) {
            timeout++;
        }
        wait(timeout);

    So this method basically does a crude rounding up of nanoseconds to milliseconds. Not to mention that anything below 500000 ns / 0.5 ms will be ignored. Is this piece of code bad/unnecessary code, or am I missing some unseen virtue of declaring this method, and its no-argument cousin, the way they are?
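
    A quick way to see the rounding in practice is to time the call with System.nanoTime(); the snippet below is an illustrative sketch, not JDK code:

        public class WaitNanosDemo {
            public static void main(String[] args) throws InterruptedException {
                final Object lock = new Object();
                synchronized (lock) {
                    long start = System.nanoTime();
                    lock.wait(0, 1); // asks for 1 nanosecond...
                    long elapsedMicros = (System.nanoTime() - start) / 1000;
                    // ...but the nanos argument bumps the timeout to wait(1),
                    // so this reports roughly a millisecond, not a nanosecond.
                    System.out.println("blocked for about " + elapsedMicros + " microseconds");
                }
            }
        }

    When sub-millisecond pauses genuinely matter, java.util.concurrent.locks.LockSupport.parkNanos(long) is usually a better fit than the nanos overload of wait().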

    Read the article

  • Displaying wait cursor in while backgroundworker is running

    - by arc1880
    During the start of my Windows application, I have to make a call to a web service to retrieve some default data to load onto my application. During the load of the form, I run a BackgroundWorker to retrieve this data. I want to display the wait cursor until this data is retrieved. How would I do this? I've tried setting the wait cursor before calling the BackgroundWorker to run. When I report a progress of 100 I set it back to the default cursor. The wait cursor comes up, but when I move the mouse it disappears.

    Environment: Windows 7 Pro 64-bit, VS2010, C#, .NET 4.0, Windows Forms

    EDIT: I am setting the cursor the way Jay Riggs suggested. It only works if I don't move the mouse.

    UPDATE: I have created a button click handler which does the following. When I do the button click, the wait cursor appears regardless of whether I move my mouse or not:

        void BtnClick()
        {
            Cursor = Cursors.WaitCursor;
            Thread.Sleep(8000);
            Cursor = Cursors.Default;
        }

    If I do the following, I see the wait cursor, but when I move the mouse it disappears inside the form. If I move to my status bar or the menu bar the wait cursor appears:

        Cursor = Cursors.WaitCursor;
        if (!backgroundWorker.IsBusy)
        {
            backGroundWorker.RunWorkerAsync();
        }

        void backGroundWorkerDoWork(object sender, DoWorkEventArgs e)
        {
            Thread.Sleep(8000);
        }

        void backGroundWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
        {
            Cursor = Cursors.Default;
        }

    If I do the following, the wait cursor appears, and when I move the mouse it still appears but will sometimes flicker off and on when moving in text fields:

        if (!backgroundWorker.IsBusy)
        {
            backGroundWorker.RunWorkerAsync();
        }

        void backGroundWorkerDoWork(object sender, DoWorkEventArgs e)
        {
            UseWaitCursor = true;
            Thread.Sleep(8000);
        }

        void backGroundWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
        {
            UseWaitCursor = false;
        }

    Read the article

  • Why wait should always be in synchronized block

    - by diy
    Hi gents, We all know that in order to invoke Object.wait(), the call must be placed in a synchronized block; otherwise, an IllegalMonitorStateException is thrown. But what's the reason for making this restriction? I know that wait() releases the monitor, but why do we need to explicitly acquire the monitor by making the particular block synchronized and then release the monitor by calling wait()? What is the potential damage if it were possible to invoke wait() outside a synchronized block, retaining its semantics of suspending the caller thread? Thanks in advance
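
    A small illustrative sketch (the names here are invented) of the standard pattern: the condition check and the wait() have to happen while holding the same monitor the notifier uses, so that a notify() cannot slip into the gap between "condition is false" and "start waiting" and be lost:

        // One flag, one monitor: the waiter re-checks the condition in a loop
        // while holding the lock, so no notification can be missed.
        class Gate {
            private final Object lock = new Object();
            private boolean open = false;

            public void awaitOpen() throws InterruptedException {
                synchronized (lock) {
                    while (!open) {   // also guards against spurious wakeups
                        lock.wait();  // atomically releases the lock and sleeps
                    }
                }
            }

            public void open() {
                synchronized (lock) {
                    open = true;
                    lock.notifyAll(); // waiters wake and re-check the flag
                }
            }
        }

    If wait() could legally be called without the monitor, a waiter could observe open == false, be preempted, miss the notifyAll(), and then sleep forever even though the condition already holds; requiring the monitor makes the check-and-wait atomic with respect to the notifier, which is exactly the guarantee the restriction exists to provide.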

    Read the article

  • How to make a system time-zone sensitive?

    - by Jerry Dodge
    I need to implement time zones in a very large and old Delphi system, where there's a central SQL Server database and possibly hundreds of client installations around the world in different time zones. The application already interacts with the database by only using the date/time of the database server. So, all the time stamps saved in both the database and on the client machines are the date/time of the database server when it happened, never the time of the client machine. So, when a client is about to display the date/time of something (such as a transaction) which is coming from this database, it needs to show the date/time converted to the local time zone. This is where I get lost. I would naturally assume there should be something in SQL to recognize the time zone and convert a DateTime field dynamically. I'm not sure if such a thing exists though. If so, that would be perfect, but if not, I need to figure out another way. This Delphi system (multiple projects) utilizes the SQL Server database using ADO components, VCL data-aware controls, and QuickReports (using data sources). So, there's many places where the data goes directly from the database query to rendering on the screen, without any code to actually put this data on the screen. In the end, I need to know when and how should I get the properly converted time? There must be a standard method for this, and I'm hoping SQL Server 2008 R2 has this covered...

    Read the article

  • WPF Show wait cursor before application fully loads

    - by e28Makaveli
    I want to show the wait cursor before my WPF application, composed using CAL, fully loads. In the constructor of the main window, I have the following code:

        public MainWindow([Dependency] IUnityContainer container)
        {
            InitializeComponent();
            Cursor = System.Windows.Input.Cursors.Wait;
            Mouse.OverrideCursor = System.Windows.Input.Cursors.Wait;
            ForceCursor = true;
            //this.Cursor = System.Windows.Input.Cursors.AppStarting;

            // event subscriptions
            PresenterBase.EventAggregate.GetEvent<ModulesLoadedEvent>().Subscribe(OnModulesLoaded);
        }

    After all modules have been loaded, the following handler is invoked:

        private void OnModulesLoaded(EventArgs e)
        {
            allModulesLoaded = true;
            Mouse.OverrideCursor = null;
            Cursor = System.Windows.Input.Cursors.Arrow;
        }

    The problem is, I do not see this wait cursor. What am I missing here? FWIW, I got a hint from this post: http://stackoverflow.com/questions/2078766/showing-the-wait-cursor TIA.

    Read the article

  • Can a thread call wait() on two locks at once in Java (6)

    - by Dr. Monkey
    I've just been messing around with threads in Java to get my head around them (it seems like the best way to do so) and now understand what's going on with synchronized, wait() and notify(). I'm curious about whether there's a way to wait() on two resources at once. I think the following won't quite do what I'm thinking of:

        synchronized (token1) {
            synchronized (token2) {
                token1.wait();
                token2.wait(); // won't run until token1 is returned
                System.out.println("I got both tokens back");
            }
        }

    In this (very contrived) case token2 will be held until token1 is returned, then token1 will be held until token2 is returned. The goal is to release both token1 and token2, then resume when both are available (note that moving the token1.wait() outside the inner synchronized block is not what I'm getting at). A loop checking whether both are available might be more appropriate to achieve this behaviour (would this be getting near the idea of double-checked locking?), but it would use up extra resources. I'm not after a definitive solution, since this is just to satisfy my curiosity.
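
    A thread can only block on one monitor at a time, so the usual workaround looks something like the sketch below (names invented): both "tokens" share a single lock and the waiter loops until both flags are set, which gives the "resume when both are available" behaviour without nesting waits:

        class TwoTokens {
            private final Object lock = new Object();
            private boolean token1Available = false;
            private boolean token2Available = false;

            public void awaitBoth() throws InterruptedException {
                synchronized (lock) {
                    while (!(token1Available && token2Available)) {
                        lock.wait();      // releases the single shared lock
                    }
                    System.out.println("I got both tokens back");
                }
            }

            public void release(int token) {
                synchronized (lock) {
                    if (token == 1) { token1Available = true; } else { token2Available = true; }
                    lock.notifyAll();     // waiter re-evaluates both flags
                }
            }
        }

    The java.util.concurrent types express the same idea more directly: for example a Semaphore where each returned token calls release() and the waiter calls acquire(2), or two CountDownLatch objects awaited in sequence.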

    Read the article

  • Apache on Windows random long wait times

    - by Jaxbot
    I have a development machine with Apache installed as a service on Windows. The installation is fresh out of the box, with no changes to configuration aside from adding the PHP module. From day one, I've had a problem that looks like this: Essentially, Apache is freezing for about 11 seconds before replying on random requests. This appears to happen more frequently when the host hasn't been connected to in a while, but this is not always the case. I've eliminated MySQL, PHP, and the specific application; this long wait problem will occur even when loading a static file such as favicon.ico. Thus, the only factor remaining is Apache, which is freezing for consistently around 10-11 seconds before replying. The problem is not the DNS problem that many people point to; as you can see, the DNS lookup is instant, and the problem occurs both on localhost and 127.0.0.1. Thanks for the time.

    Read the article

  • Time management and self improvement

    - by Filip
    I hope I can open a discussion on this topic, as this is not a specific problem. It's a topic on which I hope to get some ideas about how people in a similar situation to mine manage their time. OK, I've been a single developer on a software project for the last 6-8 months. The project I'm working on uses several technologies, mainly .NET stuff: WPF, WF, NHibernate, WCF, MySql and other third party SDKs relevant to the nature of the project. My experience and knowledge vary; for example, I have a lot of experience in WPF but much less in WCF. I work full time on the project and I'm curious how other programmers who need to multi-task in many areas manage their time. I'm a very applied type of person and prefer to code instead of doing research. I feel that doing research "might" slow down the progress of the project, while I recognize that research and learning more in areas where I'm not so strong will ultimately make me more productive. How would you split up your daily time between productive coding time and time to experiment, read blogs, go through tutorials, etc.? I would say that I'm coding about 90%+ of my day and devoting some, but very little, time to research and acquiring new knowledge. Thanks for your replies. I think I will adopt a gradual transition to Dominic's block parts. I kinda knew that coding was taking up way too much of my time, but it feels good having a first version of the project completed and ready. With a few months of focused hard work behind me I hope to get more time to experiment and expand my knowledge. Now I only hope my boss will cut me some slack and stop pressuring me for features...

    Read the article

  • Daylight Saving Time Visualized

    - by Jason Fitzpatrick
    When you map out the Daylight Saving Time adjusted sunrise and sunset times over the course of the year, an interesting pattern emerges. Chart designer Germanium writes: "I tried to come up with the reason for the daylight saving time change by just looking at the data for sunset and sunrise times. The figure represents sunset and sunrise times throughout the year. It shows that the daylight saving time change marked by the lines (DLS) is keeping the sunrise time pretty much constant throughout the whole year, while making the sunset time change a lot. The spread of sunrise times as measured by the standard deviation is 42 minutes, which means that the sunrise time changes within that range the whole year, while the standard deviation for the sunset times is 1:30 hours. Whatever the argument for doing this is, it's pretty clear that the reason is to keep the sunrise time constant." You can read more about the controversial history of Daylight Saving Time here. Daylight Saving Time Explained [via Cool Infographics]

    Read the article

  • Time Tracking on an Agile Team

    - by Stephen.Walther
    What’s the best way to handle time-tracking on an Agile team? Your gut reaction to this question might be to resist any type of time-tracking at all. After all, one of the principles of the Agile Manifesto is “Individuals and interactions over processes and tools”. Forcing the developers on your team to track the amount of time that they devote to completing stories or tasks might seem like useless bureaucratic red tape: an impediment to getting real work done. I completely understand this reaction. I’ve been required to use time-tracking software in the past to account for each hour of my workday. It made me feel like Fred Flintstone punching in at the quarry mine and not like a professional.

    Why You Really Do Need Time-Tracking

    There are, however, legitimate reasons to track time spent on stories even when you are a member of an Agile team. First, if you are working with an outside client, you might need to track the number of hours spent on different stories for the purposes of billing. There might be no way to avoid time-tracking if you want to get paid. Second, the Product Owner needs to know when the work on a story has gone over the original time estimated for the story. The Product Owner is concerned with Return On Investment. If the team has gone massively overtime on a story, then the Product Owner has a legitimate reason to halt work on the story and reconsider the story’s business value. Finally, you might want to track how much time your team spends on different types of stories or tasks. For example, if your team is spending 75% of their time doing testing then you might need to bring in more testers. Or, if 10% of your team’s time is expended performing a software build at the end of each iteration then it is time to consider better ways of automating the build process.

    Time-Tracking in SonicAgile

    For these reasons, we added time-tracking as a feature to SonicAgile, which is our free Agile Project Management tool. We were heavily influenced by Jeff Sutherland (one of the founders of Scrum) in the way that we implemented time-tracking (see his article http://scrum.jeffsutherland.com/2007/03/time-tracking-is-anti-scrum-what-do-you.html). In SonicAgile, time-tracking is disabled by default. If you want to use this feature then the project owner must enable time-tracking in Project Settings. You can choose to estimate using either days or hours. If you are estimating at the level of stories then it makes more sense to choose days. Otherwise, if you are estimating at the level of tasks then it makes more sense to use hours. After you enable time-tracking you can assign three estimates to a story:

    - Original Estimate – This is the estimate that you enter when you first create a story. You don’t change this estimate.
    - Time Spent – This is the amount of time that you have already devoted to the story. You update the time spent on each story during your daily standup meeting.
    - Time Left – This is the amount of time remaining to complete the story. Again, you update the time left during your daily standup meeting.

    So when you first create a story, you enter an original estimate that becomes the time left. During each daily standup meeting, you update the time spent and time left for each story on the Kanban. If you had perfect predictive power, then the original estimate would always be the same as the sum of the time spent and the time left. For example, if you predict that a story will take 5 days to complete, then on day 3 the story should have 3 days spent and 2 days left.
    Unfortunately, never in the history of mankind has anyone accurately predicted the exact amount of time that it takes to complete a story. For this reason, SonicAgile does not update the time spent and time left automatically. Each day, during the daily standup, your team should update the time spent and time left for each story. For example, the following table shows the history of the time estimates for a story that was originally estimated to take 3 days but, eventually, takes 5 days to complete:

        Day     Original Estimate   Time Spent   Time Left
        Day 1   3 days              0 days       3 days
        Day 2   3 days              1 day        2 days
        Day 3   3 days              2 days       2 days
        Day 4   3 days              3 days       2 days
        Day 5   3 days              4 days       0 days

    In the table above, everything goes as predicted until you reach day 3. On day 3, the team realizes that the work will require an additional two days. The situation does not improve on day 4. All of a sudden, on day 5, all of the remaining work gets done. Real work often follows this pattern: there are long periods when nothing gets done, punctuated by occasional and unpredictable bursts of progress. We designed SonicAgile to make it as easy as possible to track the time spent and time left on a story.

    Detecting when a Story Goes Over the Original Estimate

    Sometimes, stories take much longer than originally estimated. There’s a surprise. For example, you discover that a new software component is incompatible with existing software components. Or, you discover that you have to go through a month-long certification process to finish a story. In those cases, the Product Owner has a legitimate reason to halt work on a story and re-evaluate the business value of the story. For example, if the Product Owner discovers that a story will require weeks to implement instead of days, then the story might not be worth the expense. SonicAgile displays a warning on both the Backlog and the Kanban when the time spent on a story goes over the original estimate. An icon of a clock is displayed.

    Time-Tracking and Tasks

    Another optional feature of SonicAgile is tasks. If you enable Tasks in Project Settings then you can break stories into one or more tasks. You can perform time-tracking at the level of a story or at the level of a task. If you don’t break a story into tasks then you can enter the time left and time spent for the story. As soon as you break a story into tasks, you can no longer enter the time left and time spent at the level of the story. Instead, the time left and time spent for a story is rolled up from its tasks. On the Kanban, you can see how the time left and time spent for each task gets rolled up into each story. The progress bar for the story is rolled up from the progress bars for each task. The original estimate is never rolled up – even when you break a story into tasks. A story’s original estimate is entered separately from the original estimates of each of the story’s tasks.

    Summary

    Not every Agile team can avoid time-tracking. You might be forced to track time to get paid, to detect when you are spending too much time on a particular story, or to track the amount of time that you are devoting to different types of tasks. We designed time-tracking in SonicAgile to require the least amount of work to track the information that you need. Time-tracking is an optional feature. If you enable time-tracking then you can track the original estimate, time left, and time spent for each story and task. You can use time-tracking with SonicAgile for free. Register at http://SonicAgile.com.

    Read the article

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using Sql Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type "BACKUPIO". Of course it seems like this is an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not. But it doesn't say (or I don't understand) exactly what resource is missing. http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005 And this piece says it's not related to I/O performance at all. http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 "Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance." Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?

    Read the article
