Search Results

Search found 60669 results on 2427 pages for 'time tracking'.

Page 474/2427

  • asp.net: moving from session variables to cookies

    - by P a u l
    My forms are losing session variables very quickly on shared hosting (webhost4life), and I think I want to replace them with cookies. Does the following look reasonable for tracking an ID from form to form?

        if (Request.Cookies["currentForm"] == null) return;
        projectID = new Guid(Request.Cookies["currentForm"]["selectedProjectID"]);
        Response.Cookies["currentForm"]["selectedProjectID"] = Request.Cookies["currentForm"]["selectedProjectID"];

    Note that I am setting the Response cookie in all the forms after I read the Request cookie. Is this necessary? Do the Request cookies copy to the Response automatically? I'm setting no properties on the cookies and create them this way:

        Response.Cookies["currentForm"]["selectedProjectID"] = someGuid.ToString();

    The intention is that these are temporary session cookies, not persisted on the client any longer than the browser session. I ask this since I don't often write websites.
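
    One point worth noting: the browser re-sends its cookies with every request on its own, so nothing needs to be copied from Request to Response unless the value changes, and a cookie with no Expires value is already a session cookie that is discarded when the browser closes. Below is a minimal sketch of that approach (the page method and helper names are illustrative, not from the original post):

        // A minimal sketch; Page_Load and SaveSelectedProject are illustrative names.
        protected void Page_Load(object sender, EventArgs e)
        {
            HttpCookie current = Request.Cookies["currentForm"];
            if (current == null || current["selectedProjectID"] == null)
                return;                        // nothing selected yet

            Guid projectID = new Guid(current["selectedProjectID"]);
            // ... use projectID ...
        }

        // Write the cookie only when the selection actually changes:
        void SaveSelectedProject(Guid someGuid)
        {
            HttpCookie cookie = new HttpCookie("currentForm");
            cookie["selectedProjectID"] = someGuid.ToString();
            // No Expires set, so this is a session cookie and is discarded
            // when the browser closes.
            Response.Cookies.Add(cookie);
        }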

    Read the article

  • How to append to an array that contains blank spaces - Java

    - by Cameron Townley
    I'm trying to append to a char[] array that contains blank spaces at the end. The char array, for example, contains the characters 'aaa'. The first time I append, the method works properly and outputs 'aaabbb'. The initial capacity of the array is set to 80, or multiples of 80. The second time I try to append, my output looks like "aaabbb bbb". Any pseudocode would be great.
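
    Since the question asks for pseudocode, here is a minimal sketch of one common approach (the class and field names are made up for illustration): keep a separate count of how many characters are actually in use, so the next append starts right after 'aaabbb' rather than after the blank padding, and grow the array in multiples of 80 only when the count would exceed the capacity.

        // A minimal sketch, not the original poster's code.
        public class CharBuffer {
            private char[] data = new char[80]; // capacity grows in multiples of 80
            private int length = 0;             // number of characters actually used

            public void append(char[] extra) {
                if (length + extra.length > data.length) {
                    // grow to the next multiple of 80 that fits
                    int newCapacity = ((length + extra.length) / 80 + 1) * 80;
                    data = java.util.Arrays.copyOf(data, newCapacity);
                }
                System.arraycopy(extra, 0, data, length, extra.length);
                length += extra.length;
            }

            @Override
            public String toString() {
                return new String(data, 0, length); // ignore the unused tail
            }
        }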

    Read the article

  • Disk (EXT4) suddenly empty without any sign of why

    - by Ohnomydisk
    I have an Ubuntu 10.04 server with several disks in it. The disks are set up with a union filesystem, which presents them all as one logical /home. A few days ago, one of the disks appears to have suddenly 'become empty', for lack of a better explanation. The amount of data on the /home mount almost halved within minutes; the disk appears to have had just over 400 GB of data prior to 'becoming empty'. I have absolutely no idea what happened. I was not using the server at the time, but there are half a dozen other users who may have been (without root access and without the ability to hose a whole disk). I've run SMART tests on the disk and it comes back clean. The filesystem checks out fine (it has 12 GB used now, as some user software continued downloading after the incident). All I know is that around midnight on October 19 the disk usage changed dramatically. The data points are every 15 minutes, and the full loss occurred between captures:

        2012-10-18 23:58:03.399647 - has 953.97/2059.07 GB [46.33 percent]
        2012-10-19 00:13:15.909010 - has 515.18/2059.07 GB [25.02 percent]

    Other than that, I have not much to go on. I know that:

      - There's nothing interesting in the log files at that time.
      - Nobody appeared to be logged in via SSH at the time it occurred (most users do not even use SSH).
      - The server was online through whatever occurred (3 months uptime).
      - None of the other disks were affected and everything else on the server looks completely normal.
      - I have tried using "extundelete" on the disk and it didn't really find anything (some temporary files, but they looked new anyway).

    I am completely at a loss as to what could have caused this. I was initially thinking maybe a root escalation exploit, but even if someone did maliciously "rm" the disk contents, wouldn't it take more than 15 minutes to remove 400 GB?

    Read the article

  • How to track conversion rate (clicks to sales) from an internal advertising system?

    - by Ed Woodcock
    I am currently writing an internal advertising system for a client company's website, where the adverts will only be seen by internal users and all transactions take place within the site (i.e. the adverts are for member-only content available on the site). Does anyone have any recommendations as to the best way to track the conversion rate of these adverts (i.e. views:clicks:sales)? EDIT: I'm not looking for a 'Why don't you use Google Analytics'-type answer; I'm looking for possible architecture outlines, i.e. a 'why not store a GUID in a cache temporarily and see if it ties back to the advert' kind of answer. /EDIT In a previous job I did something based on an internal cache, which simply did view:click tracking; however, the addition of the sales rate makes this task more complex, especially if we take into account that someone may click through to an advert and not purchase immediately. Cheers, Ed (N.B. I'm leaving this purposely vague in order to (hopefully) get answers that provide ideas I haven't yet thought of, by coming at the problem from a different angle.)
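
    As one possible architecture outline rather than a definitive design, the GUID-in-a-cache idea from the question can be sketched roughly like this (all type and method names below are hypothetical): issue a tracking GUID per impression, carry it through the click (e.g. in the click-through URL and then the user's session), and attribute a later sale back to that GUID even if the purchase is not immediate. The views:clicks:sales ratio then falls out of counting how many tokens reached each stage.

        // A rough sketch in C#; AdTracker and its methods are hypothetical names.
        using System;
        using System.Collections.Generic;

        public enum AdEvent { View, Click, Sale }

        public class AdTracker
        {
            // token -> (advertId, latest stage reached); in production this would
            // live in a database table rather than in memory
            private readonly Dictionary<Guid, Tuple<int, AdEvent>> events =
                new Dictionary<Guid, Tuple<int, AdEvent>>();

            public Guid RecordView(int advertId)
            {
                Guid token = Guid.NewGuid();
                events[token] = Tuple.Create(advertId, AdEvent.View);
                return token; // embed this token in the advert's click-through URL
            }

            public void RecordClick(Guid token)
            {
                if (events.ContainsKey(token))
                    events[token] = Tuple.Create(events[token].Item1, AdEvent.Click);
            }

            public void RecordSale(Guid token)
            {
                // called from the purchase flow; the token comes from the user's
                // session or a short-lived cookie set at click time
                if (events.ContainsKey(token))
                    events[token] = Tuple.Create(events[token].Item1, AdEvent.Sale);
            }
        }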

    Read the article

  • best way to update client-side data in a GWT application

    - by bmscomp
    When getting data from the server to the client side in a GWT application, we need to refresh every so often to get updates to the data. I think this is not a good method because it consumes a lot of time and resources; just thinking about another method would be amazing :). Does anyone have a good and efficient idea?

    Read the article

  • Algorithm performance

    - by william007
    I am testing an algorithm with different parameters on a computer, and I notice the performance fluctuates for each parameter. Say the first run takes 20 ms, the second 5 ms, and the third 4 ms, even though the algorithm should do the same work all three times. I am using the Stopwatch class from the C# library to count the time; is there a better way to measure the performance without being subject to those fluctuations?
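
    A common way to reduce those fluctuations, sketched below under the assumption that the algorithm can be wrapped in a single delegate (here called runAlgorithm, an illustrative name): do one untimed warm-up call so JIT compilation and cold caches don't inflate the first measurement, then time many iterations with Stopwatch and report the median instead of any single run.

        // A minimal sketch; runAlgorithm stands in for whatever is being measured.
        using System;
        using System.Diagnostics;

        static double MeasureMilliseconds(Action runAlgorithm, int iterations)
        {
            runAlgorithm();                        // warm-up: JIT, caches, lazy init

            var samples = new double[iterations];
            for (int i = 0; i < iterations; i++)
            {
                var sw = Stopwatch.StartNew();
                runAlgorithm();
                sw.Stop();
                samples[i] = sw.Elapsed.TotalMilliseconds;
            }

            Array.Sort(samples);
            return samples[iterations / 2];        // median is robust to outliers
        }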

    Read the article

  • Controlling the inbound YouTube traffic path to a multihomed network

    - by Hamdy Ali
    Scenario: I have a multihomed (dual ISP) network setup; each ISP link has 500 Mbps of bandwidth. Currently the ISP-A link is almost fully utilized, while the ISP-B link is not. From our investigation, this is because the YouTube cache servers respond over the ISP-A link. Sometimes the utilization of link ISP-B increases, because at that time the YouTube cache servers respond over ISP-B. My questions: how and why does this happen, and how do I force the YouTube cache servers to use ISP link B?

    Read the article

  • Profile not loaded correctly (Cannot access registry)

    - by xaav
    Every so often, I log on and get the following message: "User profile was not loaded correctly. You have been logged on with a temporary profile. Changes you make to this profile will be lost when you log off. Please see the event log for details or contact your administrator." This almost always happens when somebody else has been on the computer for a while and then I log on. This never used to happen, but now it happens pretty often. My profile is not permanently corrupted; all I have to do is restart my computer, but this annoys me and I would like to fix it. I was curious about the cause, so I looked into the Event Log and found that the root of the problem was that the ntuser.dat file in the profile I was logging on to was locked at logon time. This resulted in the current user's registry hive not being loaded, and in turn the failure to load the profile. I just found a Microsoft article that mentions this exact issue: http://support.microsoft.com/kb/960464/ The problem is that I do not want to delete this profile, and this issue does not come up every time I log on, only when somebody else has been on a long time before me. What could be locking this file? Is there any way to get a process list without logging on, so that I can identify which process has the file locked? Any other suggestions?

    Read the article

  • Cross-version line matching

    - by BCS
    I'm considering how to do automatic bug tracking, and as part of that I'm wondering what is available to match source code line numbers (or, more accurately, numbers mapped from instruction pointers via something like addr2line) in one version of a program to the same line in another. (Assume everything is in some kind of source control and is available to my code.) The simplest approach would be to use a diff tool/lib on the files and do some math on the line number spans, however this has some limitations:

      - It doesn't handle cross-file motion.
      - It might not play well with lines that get changed.
      - It doesn't look at the information available in the intermediate versions.
      - It provides no way to manually patch up lines when the diff tool gets things wrong.
      - It's kinda clunky.

    Before I start diving into developing something better: What already exists to do this? What features do similar systems have that I've not thought of?

    Read the article

  • MySQL: LOAD DATA reclaim disk space after delete

    - by Michael
    I have a DB schema composed of MyISAM tables, and I am interested in deleting old records from some of the tables from time to time. I know that DELETE does not reclaim the disk space, but, as I found in the description of the DELETE command, inserts may reuse the deleted space: "In MyISAM tables, deleted rows are maintained in a linked list and subsequent INSERT operations reuse old row positions." What I want to know is whether the LOAD DATA command also reuses the deleted space. UPDATE: I am also interested in how the index space is reclaimed.
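
    Not a direct answer to whether LOAD DATA reuses the holes left by DELETE, but a hedged suggestion: for MyISAM tables, OPTIMIZE TABLE rewrites the data file, reclaims the space left by deleted rows and sorts the index pages, so it compacts both data and index regardless of how later loads behave (the table name below is illustrative):

        -- Reclaim space left by deleted rows and rebuild the index
        -- (my_table is an illustrative name):
        OPTIMIZE TABLE my_table;

        -- See how much space deleted rows currently waste (Data_free column):
        SHOW TABLE STATUS LIKE 'my_table';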

    Read the article

  • windows server 2003 speed issues

    - by farzinSH
    I have an HP server with Windows Server 2003 and 50 Windows XP clients. For the past week and a half, the network's speed has suddenly dropped 2-3 times per day. It gets so slow that none of the clients can work with the HIS program installed on them. We have tried many different things, such as replacing the hubs, switches and even some wires. Each time, one of these changes solves the problem and the network goes back to its normal state. I checked everything. Even when I disconnected all the clients from the server and connected it to just one computer, the problem still remained for 2 hours. I have narrowed the problem down to a couple of likely suspects:

      - Viruses? (An updated Kaspersky running on the server shows none.)
      - Server hardware failure?
      - Physical memory usage on the server? (The last time the problem occurred, none of the changes above solved the issue, so I restarted the server and checked the physical memory usage, which was 2 GB. But I noticed it increases over time to over 9 GB; the server has 16 GB of RAM.)

    I searched the internet and got nothing. Any help would mean a lot to us. Thanks in advance.

    Read the article

  • Prevent web2py from caching?

    - by Joe
    Hi! I'm working with web2py, and for some reason web2py seems to fail to notice when code has changed in certain cases. I can't really narrow it down, but from time to time changes in the code are not reflected; web2py obviously has the old version cached somewhere. The only thing that helps is quitting web2py and restarting it (I'm using the internal server). Any hints? Thank you!

    Read the article

  • Best way to say "sync all system clocks to this server, when and ONLY when I say so?" Mixed setup of Windows+Linux servers.

    - by twblamer
    Title pretty much explains it. Let's say there's 100 servers, various versions of Windows and Linux, and one Windows server is the "master clock." I did look at this question: How do I synchronize clocks between Linux and Windows? This hints that ntp can do what I want if I run "ntpd -q" on a client (?). If I install ntp I also need to guarantee that it will only sync the times when I force it to. Even better if I have a log that tells me every time a sync was performed. I'm doing benchmark runs and I need to be able to say something like this: "Clocks were synced on all the benchmark systems at 09:42:01am on the master. A benchmark run was then initiated and allowed to run for six hours. None of the system clocks were altered during this time interval." I understand there is subsequent clock drift, but for now that's the way we're doing things and I'm doing it with a manual process. I'd rather at least automate the one-time sync.
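
    A rough sketch of how the one-time, on-demand sync could be scripted, assuming the master is reachable by the hostname MASTER (an illustrative name) and runs an NTP/W32Time service; appending the output to a log gives the record of when each sync happened:

        # Linux clients: one-shot clock step, with a timestamped log entry
        echo "$(date -u) syncing against MASTER" >> /var/log/manual-timesync.log
        ntpdate -u MASTER >> /var/log/manual-timesync.log 2>&1
        # (alternatively: "ntpd -g -q" sets the clock once and exits)

        # Windows clients: point W32Time at the master, then force a resync
        w32tm /config /manualpeerlist:MASTER /syncfromflags:manual /update
        w32tm /resync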

    Read the article

  • Should I create a new extension for an xml file?

    - by macleojw
    I'm working with a data model stored in XML files. I want to create some metadata for the model and store it alongside, but would like to be able to distinguish between the two. The data model is imported into some software from time to time and we don't want it to try to import the meta data files. To get round this, I've been thinking of creating a new extension for the metadata xml files (say .mdml). Is this good practice?

    Read the article

  • Mac OS X: getting file system changes of the last minute?

    - by Patrick
    Back in Mac OS 10.4, I found this command line somewhere on the web:

        mdfind '(kMDItemFSContentChangeDate >= $time.now(-60)) && (kMDItemFSContentChangeDate <= $time.now)'

    It gave me a list of files that were changed in the last minute. This no longer works on Mac OS 10.6. Can anybody explain why it doesn't work, or even suggest a working command line?

    Read the article

  • What to check to see if server has enough free resources?

    - by kyrisu
    The Windows service I am writing will need to run some processor-intensive operations once in a while (sound encoding, WAV to MP3) on a machine that takes part in real-time voice communication (so I cannot just run them at any time). What would you check (which counters, maybe) before running such an operation? Can you point me to any good articles?
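
    One option, sketched below with illustrative thresholds, is to sample the standard Windows performance counters for CPU load and available memory just before kicking off an encode, and skip or defer the work if the machine looks busy:

        // A minimal sketch using the standard Windows performance counters
        // (the 40% / 512 MB thresholds are illustrative). Note that the first
        // sample of "% Processor Time" always returns 0, so sample twice.
        using System;
        using System.Diagnostics;
        using System.Threading;

        static bool MachineIsIdleEnough()
        {
            using (var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total"))
            using (var mem = new PerformanceCounter("Memory", "Available MBytes"))
            {
                cpu.NextValue();            // first reading is always 0
                Thread.Sleep(1000);         // let the counter accumulate a real sample
                float cpuLoad = cpu.NextValue();
                float freeMb  = mem.NextValue();

                return cpuLoad < 40.0f && freeMb > 512.0f;
            }
        }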

    Read the article

  • How to check total cache size using a program

    - by user1888541
    So I'm having some trouble creating a program to measure cache size in C. I understand the basic concept, but I'm still having trouble figuring out exactly what I am doing wrong. Basically, I create arrays of varying length (going by powers of 2), access each element in the array, and put it in a dummy variable. I go through the array about 1000 times to negate the "noise" that would otherwise occur if I only did it once, so as to get an accurate time measurement. Then I look for the size that causes a big jump in access time. Unfortunately, this is where my problem is: I don't see this jump using my code, so clearly I am doing something wrong. Another thing is that I used /proc/cpuinfo to check the cache and it said the size was 6114, but that is not a power of 2. I was told to go by powers of 2 to figure out the cache size; can anyone explain why this is? Here is the gist of my code; I will post the rest if need be:

        struct timeval start;
        struct timeval end;

        int n = 1;                      // change this to test different sizes
        int array_size = 1048576 * n;
        // I'm checking the time "manually" first, before creating a loop for the
        // program to do it by itself; this is why I have a separate "n" variable
        // to increase the size.
        char x = 0;
        int i = 0, j = 0;
        char *a;
        a = malloc(sizeof(char) * array_size);

        gettimeofday(&start, NULL);
        for (i = 0; i < 1000; i++) {
            for (j = 0; j < array_size; j += 1) {
                x = a[j];
            }
        }
        gettimeofday(&end, NULL);

        int timeTaken = (end.tv_sec * 1000000 + end.tv_usec) - (start.tv_sec * 1000000 + start.tv_usec);
        printf("Time Taken: %d \n", timeTaken);
        printf("Average: %f \n", (double)timeTaken / (double)array_size);
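
    Two things commonly hide the jump in this kind of measurement: the compiler may optimise the reads away because x is never used, and a stride-1 scan is almost entirely serviced by the hardware prefetcher. Also, the size reported by /proc/cpuinfo is in KB and does not have to be a power of two (6 MB = 6144 KB caches are common); powers of two are just a convenient way to sweep array sizes. Below is a sketch of the usual fixes, with an assumed 64-byte cache-line stride and illustrative size ranges:

        /* A sketch of the two usual fixes: accumulate loads into a volatile sink
           so the compiler cannot remove them, and step by the cache-line size so
           the prefetcher does not hide the misses. The 64-byte line size and the
           4 KB .. 64 MB sweep are assumptions, not measured values. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/time.h>

        #define STRIDE 64              /* typical cache line size in bytes */

        volatile char sink;            /* forces the loads to actually happen */

        double time_accesses(size_t array_size, int repeats)
        {
            char *a = malloc(array_size);
            memset(a, 1, array_size);  /* touch every page before timing */

            struct timeval start, end;
            gettimeofday(&start, NULL);
            for (int r = 0; r < repeats; r++)
                for (size_t j = 0; j < array_size; j += STRIDE)
                    sink = a[j];
            gettimeofday(&end, NULL);
            free(a);

            long usec = (end.tv_sec - start.tv_sec) * 1000000L
                      + (end.tv_usec - start.tv_usec);
            /* average time per access, in nanoseconds */
            return 1000.0 * usec / ((double)repeats * (array_size / STRIDE));
        }

        int main(void)
        {
            for (size_t kb = 4; kb <= 64 * 1024; kb *= 2)   /* 4 KB .. 64 MB */
                printf("%8zu KB: %.3f ns/access\n", kb,
                       time_accesses(kb * 1024, 100));
            return 0;
        }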

    Read the article

  • reading a file word by word

    - by nalbina
    I can read from a file one character at a time, but how do I make it go just one word at a time? So, read until there is a space and take that as a string. This gets me the characters:

        while (!fin.eof()) {
            while (fin >> f) {
                F.push_back(f);
            }
        }
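
    For reading word by word, extracting into a std::string is usually all that is needed, since operator>> skips leading whitespace and stops at the next whitespace. A minimal sketch (the file name and variable names are illustrative):

        #include <fstream>
        #include <string>
        #include <vector>

        int main()
        {
            std::ifstream fin("input.txt");
            std::vector<std::string> words;

            std::string word;
            while (fin >> word)      // false at end of file or on error,
                words.push_back(word);  // so no explicit eof() check is needed

            return 0;
        }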

    Read the article

  • Sort MySQL result set using comparison between 2 columns of same value type

    - by Kyobul
    Hello, I have a table containing last_updated_1 and last_updated_2 columns, used respectively for the text and image update times of a post. I would like to get a result set of 10 rows based on the most recent update times contained in the two columns, e.g. row 1 = a last_updated_1 record, row 2 = a last_updated_2 record, row 3 = a last_updated_1 record, etc. How can I compare the values of both columns inside a MySQL query to get a single, mixed result set? Thank you in advance for your help.
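
    Two possible approaches, sketched below with assumed table and key names (posts, id): order the posts by whichever of the two timestamps is newer using GREATEST(), or turn each column into its own row with a UNION so the result mixes text updates and image updates the way the question describes.

        -- 1) Order posts by whichever of the two timestamps is newer
        --    (posts and id are assumed names):
        SELECT *, GREATEST(last_updated_1, last_updated_2) AS last_updated
        FROM posts
        ORDER BY last_updated DESC
        LIMIT 10;

        -- 2) Treat each column as its own "update record" and mix them,
        --    giving one row per update rather than one row per post:
        SELECT id, last_updated_1 AS updated_at, 'text'  AS what FROM posts
        UNION ALL
        SELECT id, last_updated_2 AS updated_at, 'image' AS what FROM posts
        ORDER BY updated_at DESC
        LIMIT 10;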

    Read the article

  • Metaprogramming on web server

    - by bobobobo
    From time to time, I find myself writing server code that produces JavaScript code as its output. I can point out why it is really bad:

      - Inextricable tie between server code and client code.
      - Can render client code un-reusable.

    But sometimes, it just seems to make sense. And isn't it kinda sorta interesting? I guess the question is: is writing server code that produces JavaScript code a really bad practice, or "does everyone do it"?

    Read the article
