Search Results

Search found 6612 results on 265 pages for 'seconds'.


  • Precise: Evolution laggy due to IMAP -profile or due to some odd Sync -issue?

    - by Izzy
    I'm fighting with Evolution. Basically it's working fine -- but it is very slow to react in certain situations. There is apparently some problem with syncing and IMAP.

    Helper questions: Could the move away from Bonobo have something to do with the slowdown? There might be some trouble with the new engine and "asynchronous actions". What can I do about it? I want to get the previous "working mood" back. How can I speed this thing up?

    Different scenarios:

    Sending a mail: the composer window hangs there, inactive, for a couple of seconds, with everything grayed out. Though there is a green check mark saying the message was sent, I'm not sure a) why it's still blocking everything and b) whether I could simply close it without "breaking"/"losing" anything. In earlier versions the composer window closed quickly, one could see the message being stored in the local "outbox" until it was sent, and one could immediately continue with the next task. I prefer that behaviour over the current one.

    Switching between modules: coming from mail and switching to the address book takes a couple of seconds. Same for switching to the calendar.

    I read about different "possible causes" and tried a few things: I only have 3 local address books, so no networking should be involved here. To make sure, I switched to offline mode and then tried to access the address book. No noticeable difference. I use 3 Google Calendars. Switching to offline mode made a minor difference, but so minor that it could also be "imagination", since one might have expected it in this case. According to some reports, disabling the tasks should help. Well, it didn't in my case, as I don't use them regularly (just two local items are stored here).

    Read the article

  • Ubuntu Boot Slow - Late Drive Access

    - by Mo2
    So I just installed Ubuntu on a brand new Samsung 840 Pro SSD and have noticed that it is slower than expected. The LED access light doesn't start blinking until after 20 seconds or so have passed, after which the boot-up seems to actually start. Does anyone have any ideas as to why this is happening? I have Windows 7 installed on a separate SSD and use Windows Loader as the main loader. Windows Loader points to GRUB2, from which I then start Ubuntu. The 20-second delay that I mentioned earlier starts AFTER I select Ubuntu 14.04.1 from the GRUB2 menu. Any help is appreciated. EDIT: I forgot to mention that I have only one partition for / and /home. I intend to use a shared NTFS drive for my documents and other personal files, so I had no need for a separate /home. I did not make a swap partition since I have 12 GB of RAM. I also did not make a separate /boot since I was told that it's not really necessary in my case.

    Read the article

  • XNA 2D Top Down game - FOREACH didn't work for checking Enemy and Switch-Tile

    - by aldroid16
    Here is the gameplay. There are three conditions. The player steps on a Switch-Tile and it becomes false (inactive).
    1) When an Enemy steps on it (trapped) AND the player steps on it too, the Enemy is destroyed.
    2) But when an Enemy steps on it AND the player DOESN'T step on it too, the Enemy escapes.
    3) If the Switch-Tile condition is true, nothing happens.
    The effect is only active while the Switch-Tile is false (i.e. the player has stepped on it). Because there are a lot of Enemies and a lot of Switch-Tiles, I have to use foreach loops. The problem is that after an Enemy has ESCAPED (case 2) and steps on another Switch-Tile, nothing happens to it! I don't know what's wrong. The effect should be the same, but the Enemy passes over the Switch-Tile as if nothing happened (it should be trapped). Can someone tell me what's wrong? Here is the code:

    public static void switchUpdate(GameTime gameTime)
    {
        foreach (SwitchTile switchTile in switchTiles)
        {
            foreach (Enemy enemy in EnemyManager.Enemies)
            {
                if (switchTile.Active == false)
                {
                    if (!enemy.Destroyed)
                    {
                        if (switchTile.IsCircleColliding(enemy.EnemyBase.WorldCenter, enemy.EnemyBase.CollisionRadius))
                        {
                            // reduce Enemy speed while it is standing on the tile (for about two seconds)
                            enemy.EnemySpeed = 10;
                            enemy.Trapped = true;

                            float elapsed = (float)gameTime.ElapsedGameTime.Milliseconds;
                            moveCounter += elapsed;
                            if (moveCounter > minMoveTime)
                            {
                                // After two seconds, if the player didn't step on the Switch-Tile,
                                // the Enemy escapes and its speed goes back to normal.
                                enemy.EnemySpeed = 60f;
                                enemy.Trapped = false;
                            }
                        }
                    }
                }
                else if (switchTile.Active == true && enemy.Trapped == true &&
                         switchTile.IsCircleColliding(enemy.EnemyBase.WorldCenter, enemy.EnemyBase.CollisionRadius))
                {
                    // The player stepped on the Switch-Tile while a trapped enemy
                    // is also on this tile => destroy the enemy.
                    enemy.Destroyed = true;
                }
            }
        }
    }
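    For illustration only, here is a minimal sketch of the same loop with the trap timer stored on each enemy rather than in the single shared moveCounter, since one likely culprit is that moveCounter keeps growing and is never reset, so it never re-arms for the next tile. The TrapTimer field is hypothetical (it is not in the original code), and the branch logic mirrors the snippet above; only where the timer lives changes:

    foreach (SwitchTile switchTile in switchTiles)
    {
        foreach (Enemy enemy in EnemyManager.Enemies)
        {
            if (enemy.Destroyed)
                continue;

            bool onTile = switchTile.IsCircleColliding(enemy.EnemyBase.WorldCenter,
                                                       enemy.EnemyBase.CollisionRadius);

            if (!switchTile.Active && onTile)
            {
                enemy.EnemySpeed = 10f;
                enemy.Trapped = true;
                // per-enemy timer (hypothetical field) instead of one shared counter
                enemy.TrapTimer += (float)gameTime.ElapsedGameTime.TotalMilliseconds;

                if (enemy.TrapTimer > minMoveTime)
                {
                    // the player never arrived: release the enemy and re-arm its timer
                    enemy.EnemySpeed = 60f;
                    enemy.Trapped = false;
                    enemy.TrapTimer = 0f;
                }
            }
            else if (switchTile.Active && enemy.Trapped && onTile)
            {
                enemy.Destroyed = true;
            }
        }
    }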

    Read the article

  • Why does plymouth start so late?

    - by Marky
    It appears that starting with 11.04, Plymouth starts very late in the boot process. Sometimes I only have a split second to see it before it transitions to the login screen. This is the same for 11.10. In 10.04 and 10.10, by comparison, Plymouth started only a couple of seconds or so after GRUB and was very visible within the entire boot process. Is there something that can be done to have Plymouth run earlier? I have experienced this on 3 different machines, and on 2 of these machines I've been running Ubuntu since 10.04, so it's not just my notebook's hardware that is causing this. *On a side note, the boot process is one of the ugliest parts of modern Linux, and Ubuntu is no exception. After almost a decade (I forget, but was bootsplash the first?) this has still only been partly solved. For a couple of seconds, ugly text is still seen when shutting down. On several occasions the same ugly text is seen when logging out of a session. It's never as smooth as you want it to be. Splash themes are great, don't get me wrong. It's just that the transitions are way off and you get glimpses of what's underneath. I'm used to this, but for those new to Ubuntu and coming from Windows it is a turn-off.* Pardon the rant. :)

    Read the article

  • Updating the jump in game

    - by Luka Tiger
    I am making a Java game and I want it to run the same at any FPS, so I'm using the time delta between each update. This is the update method of the Player:

    public void update(long timeDelta) {
        // speed is the movement speed of the player on the X axis.
        // timeDelta is expressed in nanoseconds, so I divide it by 1000000000 to express it in seconds.
        if (Input.keyDown(37)) this.velocityX -= speed * (timeDelta / 1000000000.0);
        if (Input.keyDown(39)) this.velocityX += speed * (timeDelta / 1000000000.0);
        if (Input.keyPressed(38)) {
            this.velocityY -= 6;
        }
        velocityY += g * (timeDelta / 1000000000.0); // applying gravity
        // move() moves the player according to velocityX and velocityY, and checks collisions
        move(velocityX, velocityY);
        this.velocityX = 0.0;
    }

    The strange thing is that with unlimited FPS (and update count) my player jumps about 10 blocks, and it jumps even higher as the FPS increases. If I limit the FPS it jumps 4 blocks. (BLOCK: 32x32) I have just realized that the problem is this:

    if (Input.keyPressed(38)) {
        this.velocityY -= 6;
    }

    I add -6 to velocityY, which changes the player's Y proportionally to the number of updates and not to the time. But I don't know how to fix this.
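    For illustration, a minimal sketch of one common way to make this frame-rate independent: keep all velocities in units per second, apply the jump as a one-off take-off speed, and convert to a per-frame displacement only when calling move(). The onGround flag and the take-off value of -6.0 are assumptions, not part of the original code:

    public void update(long timeDelta) {
        double dt = timeDelta / 1000000000.0;        // nanoseconds -> seconds

        velocityX = 0.0;
        if (Input.keyDown(37)) velocityX -= speed;   // speed is now in units per second
        if (Input.keyDown(39)) velocityX += speed;

        if (Input.keyPressed(38) && onGround) {      // onGround is an assumed flag
            velocityY = -6.0;                        // take-off speed in units per second (assumed value)
        }
        velocityY += g * dt;                         // gravity integrates over elapsed time

        // displacement this frame = velocity (units/s) * elapsed time (s)
        move(velocityX * dt, velocityY * dt);
    }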

    Read the article

  • List<T>.AddRange is causing a brief Update/Draw delay

    - by Justin Skiles
    I have a list of entities which implement an ICollidable interface. This interface is used to resolve collisions between entities. My entities are thus: Players, Enemies, Projectiles, Items, Tiles. On each game update (about 60 t/s), I am clearing the list and adding the current entities based on the game state. I am accomplishing this via:

    collidableEntities.Clear();
    collidableEntities.AddRange(players);
    collidableEntities.AddRange(enemies);
    collidableEntities.AddRange(projectiles);
    collidableEntities.AddRange(items);
    collidableEntities.AddRange(camera.VisibleTiles);

    Everything works fine until I add the visible tiles to the list. The first ~1-2 seconds of running the game loop cause a visible hiccup that delays drawing (so I can see a jitter in the rendering). I can literally remove/add the line that adds the tiles and see the jitter occur and not occur, so I have narrowed it down to that line. My question is, why? The list of VisibleTiles is about 450-500 tiles, so it's really not that much data. Each tile contains a Texture2D (image) and a Vector2 (position) to determine what is rendered and where. I'm going to keep looking, but off the top of my head I can't understand why only the first 1-2 seconds hiccup and it is then smooth from there on out. Any advice is appreciated.
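    One low-level thing worth ruling out is that List<T> grows its backing array whenever AddRange pushes past the current capacity, which allocates and copies until the list has seen its peak size. A minimal sketch of reserving capacity up front (it assumes the source collections expose a Count property):

    // Sketch only: size the list once so Clear()/AddRange() each frame never
    // has to grow the backing array (Clear() keeps the existing capacity).
    int estimated = players.Count + enemies.Count + projectiles.Count
                  + items.Count + camera.VisibleTiles.Count;   // Count assumed available
    if (collidableEntities.Capacity < estimated)
        collidableEntities.Capacity = estimated;

    collidableEntities.Clear();
    collidableEntities.AddRange(players);
    collidableEntities.AddRange(enemies);
    collidableEntities.AddRange(projectiles);
    collidableEntities.AddRange(items);
    collidableEntities.AddRange(camera.VisibleTiles);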

    Read the article

  • How do I stop video tearing? (Nvidia prop driver, non-compositing window manager)

    - by Chan-Ho Suh
    I have that problem which seemingly afflicts many using the proprietary Nvidia driver: video tearing: fine horizontal lines (usually near the top of my display) when there is a lot of panning or action in the video. (Note: switching back to the default nouveau driver is not an option, as its seemingly nonexistent power-management drains my battery several times faster.)

    I've tried Totem, Parole, and VLC, and tearing occurs with all of them. The best result has been to use X11 output in VLC, but there is still tearing with relatively moderate action.

    Hardware: MacBook Air 3,2 -- which has an Nvidia GeForce 320M.

    There are two common fixes for tearing with Nvidia prop drivers:

    1. Turn off compositing, since Nvidia proprietary drivers don't usually play nice with compositing window managers on Linux (Compiz is an exception I'm aware of). But I use an extremely lightweight window manager (Awesome window manager) which is not even capable of compositing (or any cool effects). I also have this problem in Xfce, where I have compositing disabled.

    2. Enabling sync to VBlank. To enable this, I set the option in nvidia-settings and then autostart it as nvidia-settings -l with my other autostart programs. This seems to work, because when I run glxgears, I get:

    $ glxgears
    Running synchronized to the vertical refresh.  The framerate should be
    approximately the same as the monitor refresh rate.
    303 frames in 5.0 seconds = 60.500 FPS
    300 frames in 5.0 seconds = 59.992 FPS

    And when I check the refresh rate using nvidia-settings:

    $ nvidia-settings -q RefreshRate
    Attribute 'RefreshRate' (wampum:0.0; display device: DFP-2): 60.00 Hz.

    All this suggests sync to VBlank is enabled. As I understand it, this is precisely designed to stop tearing, and a lot of people's problem is even getting something like glxgears to output the correct info. I don't understand why it's not working for me.

    xorg.conf: http://paste.ubuntu.com/992056/

    Example of observed tearing:

    Read the article

  • Why is my USB data transfer so slow?

    - by Dave M G
    Whenever I do any kind of file transfer using USB, whether to a USB stick, or with my Android phone, or anything else, it is ridiculously slow. It says 59.8 KB/sec, which would be an awesome speed if this were 1991 and I was using a modem to dial up to my local BBS. Surely USB technology is better than that...? 37 seconds to move less data than the equivalent of 1 MP3 file? Also, regardless of what it says about speed and time, the reality is much, much slower. I routinely see it say something like "37 seconds left" and have to wait for minutes. Sometimes, if I want to move a large number of files, it can say it will take 8 hours or more. Is this normal? My computer may not be the most awesome on the market, and is about a year old, but it's an i5 with 4GB RAM and modern components, so surely this isn't the hardware's fault. What can I do to get better USB data transfer performance? Also, I did look at this question, but my newbie eyes don't see anything that looks like an actual solution, just a lot of discussion about what transfer rates could or should be. Update: As requested in the comments, I've generated a whole bunch of output from the command line, and put it on Ubuntu Pastebin. Please see it here.

    Read the article

  • OpenGL : sluggish performance in extracting texture from GPU

    - by Cyan
    I'm currently working on an algorithm which creates a texture within a render buffer. The operations are pretty complex, but for the GPU this is a simple task, done very quickly. The problem is that, after creating the texture, I would like to save it. This requires extracting it from GPU memory. For this operation, I'm using glGetTexImage(). It works, but the performance is sluggish. No, I mean even slower than that. For example, an 8MB texture (uncompressed) requires 3 seconds (yes, seconds) to be extracted. That's mind-boggling. I'm almost wondering if my graphics card is connected by a serial link... Well, anyway, I've looked around and found some people complaining about the same thing, but no working solution so far. The most promising advice was to "extract data in the native format of the GPU", which I've tried and tried, but have failed at so far. Edit: by moving the call to glGetTexImage() to a different place, the speed has been improved a bit for the most dramatic samples: looking again at the 8MB texture, it now requires 500ms instead of 3 seconds. It's better, but still much too slow. Smaller texture sizes were not affected by the change (typical timing remained in the 60-80ms range). Using glFinish() didn't help either. Note that, if I call glFinish() (without glGetTexImage), I'm getting a fixed 16ms result, whatever the texture size or complexity. It really looks like the timing for a frame at 60fps. The timing is measured for the full rendering + saving sequence. The call to glGetTexImage() alone does not really matter. That being said, it is this call which changes the performance. And yes, of course, as stated at the beginning, the texture is created on the GPU, hence the need to save it.
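    For reference, a minimal C sketch of what the "native format" attempt usually looks like: query what the driver actually stored, then request a matching format/type pair so glGetTexImage() can copy rather than convert. The BGRA / reversed-byte packing choice is an assumption that holds for RGBA8 textures on many desktop GPUs, not something the driver guarantees:

    #include <GL/gl.h>
    #include <GL/glext.h>   /* for GL_BGRA / GL_UNSIGNED_INT_8_8_8_8_REV on older headers */

    /* Sketch only: read back a level-0 RGBA8 texture in a layout the driver can
     * usually return without per-pixel conversion. `pixels` must hold at least
     * width * height * 4 bytes. */
    static void read_back_texture(GLuint tex, void *pixels)
    {
        GLint internalFormat = 0;

        glBindTexture(GL_TEXTURE_2D, tex);
        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);
        /* expect GL_RGBA8 here; other internal formats need a different pairing */

        glPixelStorei(GL_PACK_ALIGNMENT, 4);
        glGetTexImage(GL_TEXTURE_2D, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
    }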

    Read the article

  • New Workstation – Lenovo W530 Core i7 32GB 256GB SSD Win8Pro

    - by Brian Lanham
    So I pretty much have my new machine up and running full-time. I am still going to have to hit my old workstation for some things, but am more or less working on my new machine. It's really fast. And Bret was right, so far I'm not using all the RAM. 16 would have been enough, but as @CodeMonkeyJava put it, "go big or go home". Windows 8 is…interesting. So far I still seem to do most of my work in the "Desktop". However, I like the Store concept and I like the Metro UX. Live tiles are also nice. I really like how I can switch between Desktop and Metro easily. Overall I think Microsoft has done a great job of combining the needed experience for touch and mouse. My overall Windows 8 rating is 5.9 because of the video card; otherwise I'm hitting 7.8. The system boots from cold in about 11 seconds and performs a complete shutdown in 4.7 seconds. It wakes from sleep in less than 1 second. VS 2012 starts and restarts almost instantly. In fact, I find myself staring at the start page without realizing it. Build time doesn't seem to be dramatically reduced, but it is faster. I already feel reinvigorated for work with this new machine. I'm looking forward to the performance.

    Read the article

  • Ubuntu 11.10 very slow compared to Windows

    - by Patrick
    I'm new to Ubuntu/Linux. Since my PC is very old and not very fast with Windows 7, I decided to give Ubuntu a try, so I downloaded and installed Ubuntu 11.10 today. When I first started it, I had a bad 800x600 resolution and it was very slow and annoying. So I installed a driver for my graphics card and now everything looks very nice (1280x1024). But I think it's still far slower than Windows 7. I tried to run in Ubuntu like a few people suggested on the forum, but if I log in I get a black screen saying something like "this video mode cannot be displayed". I get that same screen when booting Ubuntu, by the way, but after about 15 seconds it disappears and Ubuntu just starts. I also installed other drivers for my graphics card but everything stayed the same. I noticed that, for example, when I open Firefox or System Settings it takes about 5 seconds until it opens (while Windows 7 takes under 1 second to start, e.g., Chrome), and when I do this my CPU usage goes to 100% for a short time. Computer specs: Memory: 2GB RAM; Processor: Intel Pentium 4 2.8GHz; Graphics Card: Nvidia GeForce 6800 400MHz. I read on various forums that 11.04 works flawlessly on many PCs where 11.10 is very slow. Should I install 11.04, or could anybody help me with this problem?

    Read the article

  • Problems with dual monitor & resolutions, only in 14.04

    - by theLadder
    I installed Ubuntu 14.04 but I am having weird problems with my dual monitors and the resolutions. I also tried Xubuntu 14.04 and was having the same problem. I have one 32 inch LG TV with 1920x1080 and one monitor with 1280x1024 resolution. When I first start, my 32 inch gets 1360x768. If I then try to change to 1920x1080, everything looks fine and the prompt asking me if I want to keep the settings comes up and starts the countdown, but after 2 seconds my computer freezes, and after a few more seconds it reboots itself. However, if I disable my smaller monitor first, I can change to 1920x1080 on my 32 inch without problems, but if I then activate the second monitor the same problem happens again. In Xubuntu 14.04 I can change the refresh rate; if I run the 32 inch at 30 Hz or 50 Hz everything works, but I would like to be able to run it at 60 Hz. I'm currently running Xubuntu 13.10 without this problem. My graphics card is an ATI Radeon HD 4850. What is causing this problem: graphics drivers? Kernel? Xorg? And how do I solve it?

    Read the article

  • The Clocks on USACO

    - by philip
    I submitted my code for a question on USACO titled "The Clocks". This is the link to the question: http://ace.delos.com/usacoprob2?a=wj7UqN4l7zk&S=clocks This is the output: Compiling... Compile: OK Executing... Test 1: TEST OK [0.173 secs, 13928 KB] Test 2: TEST OK [0.130 secs, 13928 KB] Test 3: TEST OK [0.583 secs, 13928 KB] Test 4: TEST OK [0.965 secs, 13928 KB] Run 5: Execution error: Your program (`clocks') used more than the allotted runtime of 1 seconds (it ended or was stopped at 1.584 seconds) when presented with test case 5. It used 13928 KB of memory. ------ Data for Run 5 ------ 6 12 12 12 12 12 12 12 12 ---------------------------- Your program printed data to stdout. Here is the data: ------------------- time:_0.40928452 ------------------- Test 5: RUNTIME 1.5841 (13928 KB) I wrote my program so that it will print out the time taken (in seconds) for the program to complete before it exits. As can be seen, it took 0.40928452 seconds before exiting. So how the heck did the runtime end up to be 1.584 seconds? What should I do about it? This is the code if it helps: import java.io.; import java.util.; class clocks { public static void main(String[] args) throws IOException { long start = System.nanoTime(); // Use BufferedReader rather than RandomAccessFile; it's much faster BufferedReader f = new BufferedReader(new FileReader("clocks.in")); // input file name goes above PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter("clocks.out"))); // Use StringTokenizer vs. readLine/split -- lots faster int[] clock = new int[9]; for (int i = 0; i < 3; i++) { StringTokenizer st = new StringTokenizer(f.readLine()); // Get line, break into tokens clock[i * 3] = Integer.parseInt(st.nextToken()); clock[i * 3 + 1] = Integer.parseInt(st.nextToken()); clock[i * 3 + 2] = Integer.parseInt(st.nextToken()); } ArrayList validCombination = new ArrayList();; for (int i = 1; true; i++) { ArrayList combination = getPossibleCombinations(i); for (int j = 0; j < combination.size(); j++) { if (tryCombination(clock, (int[]) combination.get(j))) { validCombination.add(combination.get(j)); } } if (validCombination.size() > 0) { break; } } int [] min = (int[])validCombination.get(0); if (validCombination.size() > 1){ String minS = ""; for (int i=0; i<min.length; i++) minS += min[i]; for (int i=1; i<validCombination.size(); i++){ String tempS = ""; int [] temp = (int[])validCombination.get(i); for (int j=0; j<temp.length; j++) tempS += temp[j]; if (tempS.compareTo(minS) < 0){ minS = tempS; min = temp; } } } for (int i=0; i<min.length-1; i++) out.print(min[i] + " "); out.println(min[min.length-1]); out.close(); // close the output file long end = System.nanoTime(); System.out.println("time: " + (end-start)/1000000000.0); System.exit(0); // don't omit this! 
} static boolean tryCombination(int[] clock, int[] steps) { int[] temp = Arrays.copyOf(clock, clock.length); for (int i = 0; i < steps.length; i++) transform(temp, steps[i]); for (int i=0; i<temp.length; i++) if (temp[i] != 12) return false; return true; } static void transform(int[] clock, int n) { if (n == 1) { int[] clocksToChange = {0, 1, 3, 4}; add3(clock, clocksToChange); } else if (n == 2) { int[] clocksToChange = {0, 1, 2}; add3(clock, clocksToChange); } else if (n == 3) { int[] clocksToChange = {1, 2, 4, 5}; add3(clock, clocksToChange); } else if (n == 4) { int[] clocksToChange = {0, 3, 6}; add3(clock, clocksToChange); } else if (n == 5) { int[] clocksToChange = {1, 3, 4, 5, 7}; add3(clock, clocksToChange); } else if (n == 6) { int[] clocksToChange = {2, 5, 8}; add3(clock, clocksToChange); } else if (n == 7) { int[] clocksToChange = {3, 4, 6, 7}; add3(clock, clocksToChange); } else if (n == 8) { int[] clocksToChange = {6, 7, 8}; add3(clock, clocksToChange); } else if (n == 9) { int[] clocksToChange = {4, 5, 7, 8}; add3(clock, clocksToChange); } } static void add3(int[] clock, int[] position) { for (int i = 0; i < position.length; i++) { if (clock[position[i]] != 12) { clock[position[i]] += 3; } else { clock[position[i]] = 3; } } } static ArrayList getPossibleCombinations(int size) { ArrayList l = new ArrayList(); int[] current = new int[size]; for (int i = 0; i < current.length; i++) { current[i] = 1; } int[] end = new int[size]; for (int i = 0; i < end.length; i++) { end[i] = 9; } l.add(Arrays.copyOf(current, size)); while (!Arrays.equals(current, end)) { incrementWithoutRepetition(current, current.length - 1); l.add(Arrays.copyOf(current, size)); } int [][] combination = new int[l.size()][size]; for (int i=0; i<l.size(); i++) combination[i] = (int[])l.get(i); return l; } static int incrementWithoutRepetition(int[] n, int index) { if (n[index] != 9) { n[index]++; return n[index]; } else { n[index] = incrementWithoutRepetition(n, index - 1); return n[index]; } } static void p(int[] n) { for (int i = 0; i < n.length; i++) { System.out.print(n[i] + " "); } System.out.println(""); } }

    Read the article

  • ie7 innerhtml strange display problem

    - by thoraniann
    Hello, I am having a strange problem with ie7 (ie8 in compatibility mode). I have div containers where I am updating values using javascript innhtml to update the values. This works fine in Firefox and ie8. In ie7 the values do not update but if a click on the values and highlight them then they update, also if a change the height of the browser then on the next update the values get updated correctly. I have figured out that if I change the position property of the outer div container from relative to static then the updates work correctly. The page can be viewed here http://islendingasogur.net/test/webmap_html_test.html In internet explorer 8 with compatibility turned on you can see that the timestamp in the gray box only gets updated one time, after that you see no changes. The timestamp in the lower right corner gets updated every 10 seconds. But if you highlight the text in the gray box then the updated timestamp values appears! Here is the page: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <meta http-equiv="cache-control" content="no-cache"/> <meta http-equiv="pragma" content="no-cache"/> <meta http-equiv="expires" content="Mon, 22 Jul 2002 11:12:01 GMT"/> <title>innerhtml problem</title> <script type="text/javascript"> <!-- var alarm_off_color = '#00ff00'; var alarm_low_color = '#ffff00'; var alarm_lowlow_color = '#ff0000'; var group_id_array = new Array(); var var_alarm_array = new Array(); var timestamp_color = '#F3F3F3'; var timestamp_alarm_color = '#ff00ff'; group_id_array[257] = 0; function updateParent(var_array, group_array) { //Update last update time var time_str = "Last Reload Time: "; var currentTime = new Date(); var hours = currentTime.getHours(); var minutes = currentTime.getMinutes(); var seconds = currentTime.getSeconds(); if(minutes < 10) {minutes = "0" + minutes;} if(seconds < 10) {seconds = "0" + seconds;} time_str += hours + ":" + minutes + ":" + seconds; document.getElementById('div_last_update_time').innerHTML = time_str; //alert(time_str); alarm_var = 0; //update group values for(i1 = 0; i1 < var_array.length; ++i1) { if(document.getElementById(var_array[i1][0])) { document.getElementById(var_array[i1][0]).innerHTML = unescape(var_array[i1][1]); if(var_array[i1][2]==0) {document.getElementById(var_array[i1][0]).style.backgroundColor=alarm_off_color} else if(var_array[i1][2]==1) {document.getElementById(var_array[i1][0]).style.backgroundColor=alarm_low_color} else if(var_array[i1][2]==2) {document.getElementById(var_array[i1][0]).style.backgroundColor=alarm_lowlow_color} //check if alarm is new var_id = var_array[i1][3]; if(var_array[i1][2]==1 && var_array[i1][4]==0) { alarm_var = 1; } else if(var_array[i1][2]==2 && var_array[i1][4]==0) { alarm_var = 1; } } } //Update group timestamp and box alarm color for(i1 = 0; i1 < group_array.length; ++i1) { if(document.getElementById(group_array[i1][0])) { //set timestamp for group document.getElementById(group_array[i1][0]).innerHTML = group_array[i1][1]; if(group_array[i1][4] != -1) { //set data update error status current_timestamp_color = timestamp_color; if(group_array[i1][4] == 1) {current_timestamp_color = timestamp_alarm_color;} document.getElementById(group_array[i1][0]).style.backgroundColor = current_timestamp_color; } } } } function update_map(map_id) { document.getElementById('webmap_update').src = 
'webmap_html_test_sub.html?first_time=1&map_id='+map_id; } --> </script> <style type="text/css"> body { margin:0; border:0; padding:0px;background:#eaeaea;font-family:verdana, arial, sans-serif; text-align: center; } A:active { color: #000000;} A:link { color: #000000;} A:visited { color: #000000;} A:hover { color: #000000;} #div_header { /*position: absolute;*/ background: #ffffff; width: 884px; height: 60px; display: block; float: left; font-size: 14px; text-align: left; /*overflow: visible;*/ } #div_container{ background: #ffffff;border-left:1px solid #000000; border-right:1px solid #000000; border-bottom:1px solid #000000; float: left; width: 884px;} #div_image_container{ position: relative; width: 884px; height: 549px; background: #ffffff; font-family:arial, verdana, arial, sans-serif; /*display: block;*/ float:none!important; float/**/:left; border:1px solid #00ff00; padding: 0px; } .div_group_box{ position: absolute; width: -2px; height: -2px; background: #FFFFFF; opacity: 1; filter: alpha(opacity=100); border:1px solid #000000; font-size: 2px; z-index: 0; padding: 0px; } .div_group_container{ position: absolute; opacity: 1; filter: alpha(opacity=100); z-index: 5; /*display: block;*/ /*border:1px solid #000000;*/ } .div_group_container A:active {text-decoration: none; display: block;} .div_group_container A:link { color: #000000;text-decoration: none; display: block;} .div_group_container A:visited { color: #000000;text-decoration: none; display: block;} .div_group_container A:hover { color: #000000;text-decoration: none; display: block;} .div_group_header{ background: #17B400; border:1px solid #000000;font-size: 12px; color: #FFFFFF; padding-top: 1px; padding-bottom: 1px; padding-left: 2px; padding-right: 2px; text-align: center; } .div_var_name_container{ color: #000000;background: #FFFFFF; border-left:1px solid #000000; border-top:0px solid #000000; border-bottom:0px solid #000000;font-size: 12px; float: left; display: block; text-align: left; } .div_var_name{ padding-top: 1px; padding-bottom: 1px; padding-left: 2px; padding-right: 2px; display: block; } .div_var_value_container{ color: #000000;background: #FFFFFF; border-left:1px solid #000000; border-right:1px solid #000000; border-top:0px solid #000000; border-bottom:0px solid #000000;font-size: 12px; float: left; text-align: center; } .div_var_value{ padding-top: 1px; padding-bottom: 1px; padding-left: 2px; padding-right: 2px; } .div_var_unit_container{ color: #000000;background: #FFFFFF; border-right:1px solid #000000; border-top:0px solid #000000; border-bottom:0px solid #000000;font-size: 12px; float: left; text-align: left; } .div_var_unit{ padding-top: 1px; padding-bottom: 1px; padding-left: 2px; padding-right: 2px; } .div_timestamp{ float: none; color: #000000;background: #F3F3F3; border:1px solid #000000;font-size: 12px; padding-top: 1px; padding-bottom: 1px; padding-left: 2px; padding-right: 2px; text-align: center; clear: left; z-index: 100; position: relative; } #div_last_update_time{ height: 14px; width: 210px; text-align: right; padding: 1px; font-size: 10px; float: right; } .copyright{ height: 14px; width: 240px; text-align: left; color: #777; padding: 1px; font-size: 10px; float: left; } a img { border: 1px solid #000000; } .clearer { clear: both; display: block; height: 1px; margin-bottom: -1px; font-size: 1px; line-height: 1px; } </style> </head> <body onload="update_map(1)"> <div id="div_container"><div id="div_header"></div><div class="clearer"></div><div id="div_image_container"><img id="map" 
src="Images/maps/0054_gardabaer.jpg" title="My map" alt="" align="left" border="0" usemap ="#_area_links" style="padding: 0px; margin: 0px;" /> <div id="group_container_257" class="div_group_container" style="visibility:visible; top:10px; left:260px; cursor: pointer;"> <div class="div_group_header" style="clear:right">Site</div> <div class="div_var_name_container"> <div id="group_name_257_var_8" class="div_var_name" >variable 1</div> <div id="group_name_257_var_7" class="div_var_name" style="border-top:1px solid #000000;">variable 2</div> <div id="group_name_257_var_9" class="div_var_name" style="border-top:1px solid #000000;">variable 3</div> </div> <div class="div_var_value_container"> <div id="group_value_257_var_8" class="div_var_value" >0</div> <div id="group_value_257_var_7" class="div_var_value" style="border-top:1px solid #000000;">0</div> <div id="group_value_257_var_9" class="div_var_value" style="border-top:1px solid #000000;">0</div> </div> <div class="div_var_unit_container"> <div id="group_unit_257_var_8" class="div_var_unit" >N/A</div> <div id="group_unit_257_var_7" class="div_var_unit" style="border-top:1px solid #000000;">N/A</div> <div id="group_unit_257_var_9" class="div_var_unit" style="border-top:1px solid #000000;">N/A</div> </div> <div id="group_257_timestamp" class="div_timestamp" style="">-</div> </div> </div><div class="clearer"></div><div class="copyright">© Copyright</div><div id="div_last_update_time">-</div> </div> <iframe id="webmap_update" style="display:none;" width="0" height="0"></iframe></body> </html> The divs with class div_var_value, div_timestamp & div_last_update_time all get updated by the javascript function. The div "div_image_container" is the one that is causing this it seems, atleast if I change the position property for it from relative to static the values get updated correctly This is the page that updates the values: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Loader</title> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> <script type="text/javascript"> <!-- window.onload = doLoad; function refresh() { //window.location.reload( false ); var _random_num = Math.floor(Math.random()*1100); window.location.search="?map_id=54&first_time=0&t="+_random_num; } var var_array = new Array(); var timestamp_array = new Array(); var_array[0] = Array('group_value_257_var_9','41.73',-1, 9, 0); var_array[1] = Array('group_value_257_var_7','62.48',-1, 7, 0); var_array[2] = Array('group_value_257_var_8','4.24',-1, 8, 0); var current_time = new Date(); var current_time_str = current_time.getHours(); current_time_str += ':'+current_time.getMinutes(); current_time_str += ':'+current_time.getSeconds(); timestamp_array[0] = Array('group_257_timestamp',current_time_str,'box_group_container_206',-1, -1); //timestamp_array[0] = Array('group_257_timestamp','11:33:16 23.Nov','box_group_container_257',-1, -1); window.parent.updateParent(var_array, timestamp_array); function doLoad() { setTimeout( "refresh()", 10*1000 ); } //--> </script> </head> <body> </body> </html> I edited the post and added a link to the webpage in question, I have also tested the webpage in internet explorer 7 and this error does not appear there. I have only seen this error in ie8 with compatibility turned on. If anybody has seen this before and has a fix, I would be very grateful. Thanks.

    Read the article

  • Auto DOP and Concurrency

    - by jean-pierre.dijcks
    After spending some time in the cloud, I figured it is time to come down to earth and start discussing some of the new Auto DOP features some more. As Database Machines (the v2 machine runs Oracle Database 11.2) are effectively selling like hotcakes, it makes some sense to talk about the new parallel features in more detail. For basic understanding make sure you have read the initial post. The focus there is on Auto DOP and queuing, which is to some extend the focus here. But now I want to discuss the concurrency a little and explain some of the relevant parameters and their impact, specifically in a situation with concurrency on the system. The goal of Auto DOP The idea behind calculating the Automatic Degree of Parallelism is to find the highest possible DOP (ideal DOP) that still scales. In other words, if we were to increase the DOP even more  above a certain DOP we would see a tailing off of the performance curve and the resource cost / performance would become less optimal. Therefore the ideal DOP is the best resource/performance point for that statement. The goal of Queuing On a normal production system we should see statements running concurrently. On a Database Machine we typically see high concurrency rates, so we need to find a way to deal with both high DOP’s and high concurrency. Queuing is intended to make sure we Don’t throttle down a DOP because other statements are running on the system Stay within the physical limits of a system’s processing power Instead of making statements go at a lower DOP we queue them to make sure they will get all the resources they want to run efficiently without trashing the system. The theory – and hopefully – practice is that by giving a statement the optimal DOP the sum of all statements runs faster with queuing than without queuing. Increasing the Number of Potential Parallel Statements To determine how many statements we will consider running in parallel a single parameter should be looked at. That parameter is called PARALLEL_MIN_TIME_THRESHOLD. The default value is set to 10 seconds. So far there is nothing new here…, but do realize that anything serial (e.g. that stays under the threshold) goes straight into processing as is not considered in the rest of this post. Now, if you have a system where you have two groups of queries, serial short running and potentially parallel long running ones, you may want to worry only about the long running ones with this parallel statement threshold. As an example, lets assume the short running stuff runs on average between 1 and 15 seconds in serial (and the business is quite happy with that). The long running stuff is in the realm of 1 – 5 minutes. It might be a good choice to set the threshold to somewhere north of 30 seconds. That way the short running queries all run serial as they do today (if it ain’t broken, don’t fix it) and allows the long running ones to be evaluated for (higher degrees of) parallelism. This makes sense because the longer running ones are (at least in theory) more interesting to unleash a parallel processing model on and the benefits of running these in parallel are much more significant (again, that is mostly the case). Setting a Maximum DOP for a Statement Now that you know how to control how many of your statements are considered to run in parallel, lets talk about the specific degree of any given statement that will be evaluated. As the initial post describes this is controlled by PARALLEL_DEGREE_LIMIT. 
This parameter controls the degree on the entire cluster and by default it is CPU (meaning it equals Default DOP). For the sake of an example, let’s say our Default DOP is 32. Looking at our 5 minute queries from the previous paragraph, the limit to 32 means that none of the statements that are evaluated for Auto DOP ever runs at more than DOP of 32. Concurrently Running a High DOP A basic assumption about running high DOP statements at high concurrency is that you at some point in time (and this is true on any parallel processing platform!) will run into a resource limitation. And yes, you can then buy more hardware (e.g. expand the Database Machine in Oracle’s case), but that is not the point of this post… The goal is to find a balance between the highest possible DOP for each statement and the number of statements running concurrently, but with an emphasis on running each statement at that highest efficiency DOP. The PARALLEL_SERVER_TARGET parameter is the all important concurrency slider here. Setting this parameter to a higher number means more statements get to run at their maximum parallel degree before queuing kicks in.  PARALLEL_SERVER_TARGET is set per instance (so needs to be set to the same value on all 8 nodes in a full rack Database Machine). Just as a side note, this parameter is set in processes, not in DOP, which equates to 4* Default DOP (2 processes for a DOP, default value is 2 * Default DOP, hence a default of 4 * Default DOP). Let’s say we have PARALLEL_SERVER_TARGET set to 128. With our limit set to 32 (the default) we are able to run 4 statements concurrently at the highest DOP possible on this system before we start queuing. If these 4 statements are running, any next statement will be queued. To run a system at high concurrency the PARALLEL_SERVER_TARGET should be raised from its default to be much closer (start with 60% or so) to PARALLEL_MAX_SERVERS. By using both PARALLEL_SERVER_TARGET and PARALLEL_DEGREE_LIMIT you can control easily how many statements run concurrently at good DOPs without excessive queuing. Because each workload is a little different, it makes sense to plan ahead and look at these parameters and set these based on your requirements.
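    To make the worked example concrete, here is roughly what those settings would look like as initialization parameters. The values are the illustrative ones from the text, not recommendations, and note that the actual parameter name in the database is PARALLEL_SERVERS_TARGET (plural):

    -- Sketch only: consider statements estimated to run longer than 30 seconds
    -- for Auto DOP, cap any single statement at DOP 32, and allow 128 parallel
    -- server processes to be in use before statement queuing kicks in.
    ALTER SYSTEM SET parallel_min_time_threshold = 30;
    ALTER SYSTEM SET parallel_degree_limit = 32;
    ALTER SYSTEM SET parallel_servers_target = 128;

    -- Statements held back by the parallel statement queue show up as QUEUED:
    SELECT sql_id, status FROM v$sql_monitor WHERE status = 'QUEUED';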

    Read the article

  • What's New In 11.1.2.1 (Talleyrand SP1)

    - by russ.bishop
    This release is primarily about bug fixes and that's what we spent the most time on, but we also addressed a number of other things: 1. Performance improvements We've done a lot of work to improve the performance of page load and execution times. For example, the View Compare page is about half the size it was previously! We've also done a lot of work on the server to improve performance of queries, exports, action scripts, etc. We implemented some finer-grained locking so fewer operations will block other users while they are in progress. We made some optimizations to improve performance when you have a lot of network or database latency as well. Just a few examples: An Import that previously took 8 GB of memory and hours to complete now runs in about 30 minutes and never takes more than 1 GB of RAM. Searching by exact Node Name now completes within 2 seconds even for a hierarchy with millions of nodes. Another search that was taking 30 seconds to run now completes in less than 5 seconds. 2. Upgrade support This release supports automatic upgrade from previous releases, built right into the console. 3. Console Improvements The Console has been reorganized and made easier to use. It is also much more multi-threaded so it responds quicker without freezing up when you save changes or when it needs to get status. 4. Property Namespaces Properties now have a concept called a Namespace. This is tied into the Application Templates to prevent conflicts with duplicate property names. Right now, if you have an AccountType and you pull in the HFM template, it also has AccountType so you end up creating properties with decorations on the name like "Account Type (HFM)". This is no longer necessary. In addition, properties within a namespace must have unique labels but they can be duplicated across namespaces. So in the Property Grid when you click on the HFM category, you just see "AccountType". When you click on MyCategory, you see "AccountType", but they are different properties with different values. Within formulas, the names are still unique (eg: Custom.AccountType vs HFM.AccountType). I'll write more about this one later. 5. Single Sign On DRM now supports Single Sign-On via HSS. For example, if you are using Oracle's OAM as your SSO solution then you configure HSS to use OAM just like you would before. You also configure DRM to use HSS, again just like before. Then you configure OAM to protect the DRM web app, like you would any other website. However once you do those things, users are no longer prompted to enter their username/password. They simply get redirected to OAM if they don't already have a login token, otherwise they pick their application and sail right into DRM. You can also avoid having to pick an application (see the next item) 6. URL-based navigation You can now specify the application you want to log into via the URL. Combined with SSO and your Intranet, it becomes easy to provide links on our intranet portal that take users directly into a specific DRM application. We also support specifying the Version, Hierarchy, and Node. Again, this can be used on your internal portal, but the scenarios get even more interesting when you are using workflow like Oracle BPEL you can automatically generate links within emails that will take users directly to a specific node in the UI. 7. Job status and cancellation A lot of the jobs now report their status and support true cancellation. 
Action Scripts also report a progress complete percentage since the amount of work is known ahead of time. 8. Action Script Options Action scripts support Option declarations at the top of the file so a script can self-describe (when specified in the file, the corresponding item in the file is ignored). For example: Option|DetectDelimiter Option|UsePropertyNames|true This will tell DRM to automatically detect the delimiter (a pipe symbol in this case) and that all references to properties are by Name, not by Label. Note that when you load a script in the UI, if you use Labels we automatically try to match them up if they are unique. Any duplicates are indicated and you are presented with a choice to pick which property you actually referred to. This is somewhat similar to Version substitution, but tailored for properties. There are other more minor changes and like I said earlier a lot of bug fixes and performance improvements. Hopefully I will get a chance to dig into some of these things in future blog posts.

    Read the article

  • Oracle VM RAC template - what it took

    - by wcoekaer
    In my previous posting I introduced the latest Oracle Real Application Cluster / Oracle VM template. I mentioned how easy it is to deploy a complete Oracle RAC cluster with Oracle VM. In fact, you don't need any prior knowledge at all to get a complete production-ready setup going. Here is an example... I built a 4 node RAC cluster, completely configured in just over 40 minutes - starting from import template into Oracle VM, create VMs to fully up and running Oracle RAC. And what was needed? 1 textfile with some hostnames and ip addresses and deploycluster.py. The setup is a 4 node cluster where each VM has 8GB of RAM and 4 vCPUs. The shared ASM storage in this case is 100GB, 5 x 20GB volumes. The VM names are racovm.0-racovm.3. The deploycluster script starts the VMs, verifies the configuration and sends the database cluster configuration info through Oracle VM Manager to the 4 node VMs. Once the VMs are up and running, the first VM starts the actual Oracle RAC setup inside and talks to the 3 other VMs. I did not log into any VM until after everything was completed. In fact, I connected to the database remotely before logging in at all. # ./deploycluster.py -u admin -H localhost --vms racovm.0,racovm.1,racovm.2,racovm.3 --netconfig ./netconfig.ini Oracle RAC OneCommand (v1.1.0) for Oracle VM - deploy cluster - (c) 2011-2012 Oracle Corporation (com: 26700:v1.1.0, lib: 126247:v1.1.0, var: 1100:v1.1.0) - v2.4.3 - wopr8.wimmekes.net (x86_64) Invoked as root at Sat Jun 2 17:31:29 2012 (size: 37500, mtime: Wed May 16 00:13:19 2012) Using: ./deploycluster.py -u admin -H localhost --vms racovm.0,racovm.1,racovm.2,racovm.3 --netconfig ./netconfig.ini INFO: Login password to Oracle VM Manager not supplied on command line or environment (DEPLOYCLUSTER_MGR_PASSWORD), prompting... Password: INFO: Attempting to connect to Oracle VM Manager... INFO: Oracle VM Client (3.1.1.305) protocol (1.8) CONNECTED (tcp) to Oracle VM Manager (3.1.1.336) protocol (1.8) IP (192.168.1.40) UUID (0004fb0000010000cbce8a3181569a3e) INFO: Inspecting /root/rac/deploycluster/netconfig.ini for number of nodes defined... INFO: Detected 4 nodes in: /root/rac/deploycluster/netconfig.ini INFO: Located a total of (4) VMs; 4 VMs with a simple name of: ['racovm.0', 'racovm.1', 'racovm.2', 'racovm.3'] INFO: Verifying all (4) VMs are in Running state INFO: VM with a simple name of "racovm.0" is in a Stopped state, attempting to start it...OK. INFO: VM with a simple name of "racovm.1" is in a Stopped state, attempting to start it...OK. INFO: VM with a simple name of "racovm.2" is in a Stopped state, attempting to start it...OK. INFO: VM with a simple name of "racovm.3" is in a Stopped state, attempting to start it...OK. INFO: Detected that all (4) VMs specified on command have (5) common shared disks between them (ASM_MIN_DISKS=5) INFO: The (4) VMs passed basic sanity checks and in Running state, sending cluster details as follows: netconfig.ini (Network setup): /root/rac/deploycluster/netconfig.ini buildcluster: yes INFO: Starting to send cluster details to all (4) VM(s)....... INFO: Sending to VM with a simple name of "racovm.0".... INFO: Sending to VM with a simple name of "racovm.1"..... INFO: Sending to VM with a simple name of "racovm.2"..... INFO: Sending to VM with a simple name of "racovm.3"...... INFO: Cluster details sent to (4) VMs... Check log (default location /u01/racovm/buildcluster.log) on build VM (racovm.0)... 
INFO: deploycluster.py completed successfully at 17:32:02 in 33.2 seconds (00m:33s) Logfile at: /root/rac/deploycluster/deploycluster2.log my netconfig.ini # Node specific information NODE1=db11rac1 NODE1VIP=db11rac1-vip NODE1PRIV=db11rac1-priv NODE1IP=192.168.1.56 NODE1VIPIP=192.168.1.65 NODE1PRIVIP=192.168.2.2 NODE2=db11rac2 NODE2VIP=db11rac2-vip NODE2PRIV=db11rac2-priv NODE2IP=192.168.1.58 NODE2VIPIP=192.168.1.66 NODE2PRIVIP=192.168.2.3 NODE3=db11rac3 NODE3VIP=db11rac3-vip NODE3PRIV=db11rac3-priv NODE3IP=192.168.1.173 NODE3VIPIP=192.168.1.174 NODE3PRIVIP=192.168.2.4 NODE4=db11rac4 NODE4VIP=db11rac4-vip NODE4PRIV=db11rac4-priv NODE4IP=192.168.1.175 NODE4VIPIP=192.168.1.176 NODE4PRIVIP=192.168.2.5 # Common data PUBADAP=eth0 PUBMASK=255.255.255.0 PUBGW=192.168.1.1 PRIVADAP=eth1 PRIVMASK=255.255.255.0 RACCLUSTERNAME=raccluster DOMAINNAME=wimmekes.net DNSIP= # Device used to transfer network information to second node # in interview mode NETCONFIG_DEV=/dev/xvdc # 11gR2 specific data SCANNAME=db11vip SCANIP=192.168.1.57 last few lines of the in-VM log file : 2012-06-02 14:01:40:[clusterstate:Time :db11rac1] Completed successfully in 2 seconds (0h:00m:02s) 2012-06-02 14:01:40:[buildcluster:Done :db11rac1] Build 11gR2 RAC Cluster 2012-06-02 14:01:40:[buildcluster:Time :db11rac1] Completed successfully in 1779 seconds (0h:29m:39s) From start_vm to completely configured : 29m:39s. The other 10m was the import template and create 4 VMs from template along with the shared storage configuration. This consists of a complete Oracle 11gR2 RAC database with ASM, CRS and the RDBMS up and running on all 4 nodes. Simply connect and use. Production ready. Oracle on Oracle.

    Read the article

  • Most efficient way to implement delta time

    - by Starkers
    Here's one way to implement delta time: /// init /// var duration = 5000, currentTime = Date.now(); // and create cube, scene, camera ect ////// function animate() { /// determine delta /// var now = Date.now(), deltat = now - currentTime, currentTime = now, scalar = deltat / duration, angle = (Math.PI * 2) * scalar; ////// /// animate /// cube.rotation.y += angle; ////// /// update /// requestAnimationFrame(render); ////// } Could someone confirm I know how it works? Here what I think is going on: Firstly, we set duration at 5000, which how long the loop will take to complete in an ideal world. With a computer that is slow/busy, let's say the animation loop takes twice as long as it should, so 10000: When this happens, the scalar is set to 2.0: scalar = deltat / duration scalar = 10000 / 5000 scalar = 2.0 We now times all animation by twice as much: angle = (Math.PI * 2) * scalar; angle = (Math.PI * 2) * 2.0; angle = (Math.PI * 4) // which is 2 rotations When we do this, the cube rotation will appear to 'jump', but this is good because the animation remains real-time. With a computer that is going too quickly, let's say the animation loop takes half as long as it should, so 2500: When this happens, the scalar is set to 0.5: scalar = deltat / duration scalar = 2500 / 5000 scalar = 0.5 We now times all animation by a half: angle = (Math.PI * 2) * scalar; angle = (Math.PI * 2) * 0.5; angle = (Math.PI * 1) // which is half a rotation When we do this, the cube won't jump at all, and the animation remains real time, and doesn't speed up. However, would I be right in thinking this doesn't alter how hard the computer is working? I mean it still goes through the loop as fast as it can, and it still has render the whole scene, just with different smaller angles! So this a bad way to implement delta time, right? Now let's pretend the computer is taking exactly as long as it should, so 5000: When this happens, the scalar is set to 1.0: angle = (Math.PI * 2) * scalar; angle = (Math.PI * 2) * 1; angle = (Math.PI * 2) // which is 1 rotation When we do this, everything is timsed by 1, so nothing is changed. We'd get the same result if we weren't using delta time at all! My questions are as follows Mostly importantly, have I got the right end of the stick here? How do we know to set the duration to 5000 ? Or can it be any number? I'm a bit vague about the "computer going too quickly". Is there a way loop less often rather than reduce the animation steps? Seems like a better idea. Using this method, do all of our animations need to be timesed by the scalar? Do we have to hunt down every last one and times it? Is this the best way to implement delta time? I think not, due to the fact the computer can go nuts and all we do is divide each animation step and because we need to hunt down every step and times it by the scalar. Not a very nice DSL, as it were. So what is the best way to implement delta time? Below is one way that I do not really get but may be a better way to implement delta time. Could someone explain please? // Globals INV_MAX_FPS = 1 / 60; frameDelta = 0; clock = new THREE.Clock(); // In the animation loop (the requestAnimationFrame callback)… frameDelta += clock.getDelta(); // API: "Get the seconds passed since the last call to this method." while (frameDelta >= INV_MAX_FPS) { update(INV_MAX_FPS); // calculate physics frameDelta -= INV_MAX_FPS; } How I think this works: Firstly we set INV_MAX_FPS to 0.01666666666 How we will use this number number does not jump out at me. 
    We then initialize frameDelta, which stores how long the last loop took to run. Come the first loop, frameDelta is not greater than or equal to INV_MAX_FPS, so the while loop is not run (0 < 0.01666666666). So nothing happens. Now I really don't know what would cause this to happen, but let's pretend that the loop we just went through took 2 seconds to complete. We set frameDelta to 2:

    frameDelta += clock.getDelta();
    frameDelta += 2.00

    Now we run an animation thanks to update(0.01666666666). Again, what is the relevance of 0.01666666666?? And then we take away 0.01666666666 from frameDelta:

    frameDelta -= INV_MAX_FPS;
    frameDelta = frameDelta - INV_MAX_FPS;
    frameDelta = 2 - 0.01666666666
    frameDelta = 1.98333333334

    So let's go into the second loop. Let's say it took 2 (? Why not 2? Or 12? I am a bit confused):

    frameDelta += clock.getDelta();
    frameDelta = frameDelta + clock.getDelta();
    frameDelta = 1.98333333334 + 2
    frameDelta = 3.98333333334

    This time we enter the while loop because 3.98333333334 >= 0.01666666666. We run update. We take away 0.01666666666 from frameDelta again:

    frameDelta -= INV_MAX_FPS;
    frameDelta = frameDelta - INV_MAX_FPS;
    frameDelta = 3.98333333334 - 0.01666666666
    frameDelta = 3.96666666668

    Now let's pretend the loop is super quick and runs in just 0.1 seconds, and continues to do this (because the computer isn't busy any more). Basically, the update function will be run, and every loop we take away 0.01666666666 from frameDelta until frameDelta is less than 0.01666666666. And then nothing happens until the computer runs slowly again? Could someone shed some light please? Does update() update the scalar or something like that, so that we still have to multiply everything by the scalar like in the first example?
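    A minimal re-labelled sketch of the same loop (assuming the usual three.js renderer, scene, camera and an update() function from the surrounding setup), which may make the role of INV_MAX_FPS (simply the length of one fixed physics step, 1/60 of a second) easier to see:

    // Sketch only: identical logic to the snippet above, with clearer names.
    var FIXED_STEP = 1 / 60;            // seconds of simulated time per update()
    var accumulator = 0;                // real time not yet "spent" on physics
    var clock = new THREE.Clock();

    function animate() {
        requestAnimationFrame(animate);

        accumulator += clock.getDelta();        // real seconds since the last frame
        while (accumulator >= FIXED_STEP) {     // run 0..n fixed-size steps to catch up
            update(FIXED_STEP);                 // physics always advances by exactly 1/60 s
            accumulator -= FIXED_STEP;
        }

        renderer.render(scene, camera);         // draw once per displayed frame
    }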

    Read the article

  • boot issues - long delay, then "gave up waiting for root device"

    - by chazomaticus
    I've had this issue on and off for about two years now. I noticed it on a new (custom built) machine running 10.04 when that first came out, but then it went away until a few months ago. I've gone through a number of hard drive changes but I can't say specifically what if anything I changed hardware-wise to make it stop or start happening. I had assumed upgrading to a modern Ubuntu version would fix the issue, so I installed 12.04 beta on a spare partition last night, but it's still happening. Here's the issue. After grub loads and I select a kernel to boot, the screen goes blank save for a blinking cursor. It sits in this state for many long minutes before it finally gives up and gives me an initramfs shell with the message gave up waiting for root device (and lists the /dev/disk/by-uuid/... path it was waiting for) but no other specific diagnostic information. Now, here's the tricky part. For one, the problem is intermittent - sometimes it progresses from the blinking cursor to the Ubuntu splash boot screen in a few seconds, and once it gets that far it always continues booting fine. The really bizarre thing is that I can "force" it to "find" the root device by repeatedly pressing the space bar and hitting the machine's power button. If I tap those enough, eventually I will notice the hard drive light coming on, at which point it will always continue the boot process after a few seconds. Interestingly, if I wait slightly too long before pressing the power button (30s?), as soon as I press it I get the gave up waiting message and the initramfs shell. I've tried setting up /etc/fstab (and the grub menu.lst or whatever it's called nowadays) to use device names (e.g. /dev/sda1) instead of UUIDs, but I get the same effect just with the device name, not UUID, in the error message. I should also mention that when I boot to Windows 7, there is no issue. It boots slowly all the time just by virtue of being Windows, but it never hangs indefinitely. This would seem to indicate it's a problem in Ubuntu, not the hardware. It's pretty annoying to have to babysit the computer every time it boots. Any ideas? I'm at a loss. Not even sure how to diagnose the issue. Thanks! EDIT: Here's some dmesg output from 10.04. The 15 second gap is where it was doing nothing. I pressed the power button and space bar a few times, and the stuff at 16 seconds happened. Not sure what any of it means. 
[ 1.320250] scsi18 : ahci
[ 1.320294] scsi19 : ahci
[ 1.320320] ata19: SATA max UDMA/133 abar m8192@0xfd4fe000 port 0xfd4fe100 irq 18
[ 1.320323] ata20: SATA max UDMA/133 abar m8192@0xfd4fe000 port 0xfd4fe180 irq 18
[ 1.403886] usb 2-4: new high speed USB device using ehci_hcd and address 4
[ 1.562558] usb 2-4: configuration #1 chosen from 1 choice
[ 16.477824] ata16: SATA link down (SStatus 0 SControl 300)
[ 16.477843] ata19: SATA link down (SStatus 0 SControl 300)
[ 16.477857] ata3: SATA link down (SStatus 0 SControl 300)
[ 16.477895] ata15: SATA link down (SStatus 0 SControl 300)
[ 16.477906] ata20: SATA link down (SStatus 0 SControl 300)
[ 16.477977] ata17: SATA link down (SStatus 0 SControl 300)
[ 16.478003] ata12: SATA link down (SStatus 0 SControl 300)
[ 16.478046] ata13: SATA link down (SStatus 0 SControl 300)
[ 16.478063] ata14: SATA link down (SStatus 0 SControl 300)
[ 16.478108] ata11: SATA link down (SStatus 0 SControl 300)
[ 16.478123] ata18: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[ 16.478127] ata6: SATA link down (SStatus 0 SControl 300)
[ 16.478157] ata5: SATA link down (SStatus 0 SControl 300)
[ 16.478193] ata18.00: ATAPI: MARVELL VIRTUALL, 1.09, max UDMA/66

After that, it took its sweet time, and I had to keep hitting space bar to coax it along. Here's some more dmesg output from a little later in the boot process:

[ 17.982291] input: BTC USB Multimedia Keyboard as /devices/pci0000:00/0000:00:13.0/usb5/5-2/5-2:1.0/input/input4
[ 17.982335] generic-usb 0003:046E:5506.0002: input,hidraw1: USB HID v1.10 Keyboard [BTC USB Multimedia Keyboard] on usb-0000:00:13.0-2/input0
[ 18.005211] input: BTC USB Multimedia Keyboard as /devices/pci0000:00/0000:00:13.0/usb5/5-2/5-2:1.1/input/input5
[ 18.005274] generic-usb 0003:046E:5506.0003: input,hiddev96,hidraw2: USB HID v1.10 Device [BTC USB Multimedia Keyboard] on usb-0000:00:13.0-2/input1
[ 22.484906] EXT4-fs (sda6): INFO: recovery required on readonly filesystem
[ 22.484910] EXT4-fs (sda6): write access will be enabled during recovery
[ 22.548542] EXT4-fs (sda6): recovery complete
[ 22.549074] EXT4-fs (sda6): mounted filesystem with ordered data mode
[ 32.516772] Adding 20482832k swap on /dev/sda5. Priority:-1 extents:1 across:20482832k
[ 32.742540] udev: starting version 151
[ 33.002004] Bluetooth: Atheros AR30xx firmware driver ver 1.0
[ 33.008135] parport_pc 00:09: reported by Plug and Play ACPI
[ 33.008186] parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE]
[ 33.012076] lp: driver loaded but no devices found
[ 33.037271] ppdev: user-space parallel port driver
[ 33.090256] lp0: using parport0 (interrupt-driven).

Any clues in there?

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was then to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time. When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required. As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g: -------------------------------------------------------------------------------| Id | Operation | Name | Bytes | Cost |-------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 108K| 939 || 1 | SORT ORDER BY | | 108K| 939 || 2 | NESTED LOOPS OUTER | | 108K| 938 ||* 3 | HASH JOIN RIGHT OUTER | | 103K| 762 || 4 | VIEW | ALL_EXTERNAL_LOCATIONS | 2058 | 3 ||* 20 | HASH JOIN RIGHT OUTER | | 73472 | 759 || 21 | VIEW | ALL_EXTERNAL_TABLES | 2097 | 3 ||* 34 | HASH JOIN RIGHT OUTER | | 39920 | 755 || 35 | VIEW | ALL_MVIEWS | 51 | 7 || 58 | NESTED LOOPS OUTER | | 39104 | 748 || 59 | VIEW | ALL_TABLES | 6704 | 668 || 89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS | 2025 | 5 || 106 | VIEW | ALL_PART_TABLES | 277 | 11 |------------------------------------------------------------------------------- And the same query on 9i: -------------------------------------------------------------------------------| Id | Operation | Name | Bytes | Cost |-------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 16P| 55G|| 1 | SORT ORDER BY | | 16P| 55G|| 2 | NESTED LOOPS OUTER | | 16P| 862M|| 3 | NESTED LOOPS OUTER | | 5251G| 992K|| 4 | NESTED LOOPS OUTER | | 4243M| 2578 || 5 | NESTED LOOPS OUTER | | 2669K| 1440 ||* 6 | HASH JOIN OUTER | | 398K| 302 || 7 | VIEW | ALL_TABLES | 342K| 276 || 29 | VIEW | ALL_MVIEWS | 51 | 20 ||* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS | 2043 | ||* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES | 1777K| ||* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K| ||* 96 | VIEW | ALL_PART_TABLES | 852K| |------------------------------------------------------------------------------- Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan. 
On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan: -------------------------------------------------------------------------------| Id | Operation | Name | Bytes | Cost |-------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 7587K| 3704 || 1 | SORT ORDER BY | | 7587K| 3704 ||* 2 | HASH JOIN OUTER | | 7587K| 822 ||* 3 | HASH JOIN OUTER | | 5262K| 616 ||* 4 | HASH JOIN OUTER | | 2980K| 465 ||* 5 | HASH JOIN OUTER | | 710K| 432 ||* 6 | HASH JOIN OUTER | | 398K| 302 || 7 | VIEW | ALL_TABLES | 342K| 276 || 29 | VIEW | ALL_MVIEWS | 51 | 20 || 50 | VIEW | ALL_PART_TABLES | 852K| 104 || 78 | VIEW | ALL_TAB_COMMENTS | 2043 | 14 || 93 | VIEW | ALL_EXTERNAL_LOCATIONS | 1744K| 31 || 106 | VIEW | ALL_EXTERNAL_TABLES | 1777K| 28 |------------------------------------------------------------------------------- That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input. To try and get some ideas, we asked some oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure & a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed. All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need. 
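As a rough illustration of the approach (sketched here in JavaScript purely for brevity; the real implementation is .NET, uses much more careful, memory-efficient structures, and the function and property names below are invented): hash the base rows once by (owner, name), then let each subsidiary result set fill in its gaps by probing that single hash table.

// Illustrative only: build one hash of the base rows keyed by (owner, name) and
// re-use it for every left join, instead of re-hashing the base table per join.
function populateTables(baseRows, subsidiaryResults) {
  var byKey = {};
  baseRows.forEach(function (row) {
    byKey[row.owner + "." + row.name] = row;        // the base row carries most of the data
  });

  // each subsidiary result set (partition info, mview info, ...) just fills in gaps
  subsidiaryResults.forEach(function (result) {
    result.rows.forEach(function (row) {
      var base = byKey[row.owner + "." + row.name];
      if (base) {
        base[result.name] = row;                    // left join: unmatched base rows stay as they are
      }
    });
  });

  return baseRows;
}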
This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for schemas with 10,000 tables in them, that means an extra 1.4MB of memory being used during population). Next: fun with the 9i dictionary views.

    Read the article

  • Concurrency Utilities for Java EE Early Draft (JSR 236)

    - by arungupta
    Concurrency Utilities for Java EE is being worked as JSR 236 and has released an Early Draft. It provides concurrency capabilities to Java EE application components without compromising container integrity. Simple (common) and advanced concurrency patterns are easily supported without sacrificing usability. Using Java SE concurrency utilities such as java.util.concurrent API, java.lang.Thread and java.util.Timer in a Java EE application component such as EJB or Servlet are problematic since the container and server have no knowledge of these resources. JSR 236 enables concurrency largely by extending the Concurrency Utilities API developed under JSR-166. This also allows a consistency between Java SE and Java EE concurrency programming model. There are four main programming interfaces available: ManagedExecutorService ManagedScheduledExecutorService ContextService ManagedThreadFactory ManagedExecutorService is a managed version of java.util.concurrent.ExecutorService. The implementations of this interface are provided by the container and accessible using JNDI reference: <resource-env-ref>  <resource-env-ref-name>    concurrent/BatchExecutor  </resource-env-ref-name>  <resource-env-ref-type>    javax.enterprise.concurrent.ManagedExecutorService  </resource-env-ref-type><resource-env-ref> and available as: @Resource(name="concurrent/BatchExecutor")ManagedExecutorService executor; Its recommended to bind the JNDI references in the java:comp/env/concurrent subcontext. The asynchronous tasks that need to be executed need to implement java.lang.Runnable or java.util.concurrent.Callable interface as: public class MyTask implements Runnable { public void run() { // business logic goes here }} OR public class MyTask2 implements Callable<Date> {  public Date call() { // business logic goes here   }} The task is then submitted to the executor using one of the submit method that return a Future instance. The Future represents the result of the task and can also be used to check if the task is complete or wait for its completion. Future<String> future = executor.submit(new MyTask(), String.class);. . .String result = future.get(); Another example to submit tasks is: class MyTask implements Callback<Long> { . . . }class MyTask2 implements Callback<Date> { . . . }ArrayList<Callable> tasks = new ArrayList<();tasks.add(new MyTask());tasks.add(new MyTask2());List<Future<Object>> result = executor.invokeAll(tasks); The ManagedExecutorService may be configured for different properties such as: Hung Task Threshold: Time in milliseconds that a task can execute before it is considered hung Pool Info Core Size: Number of threads to keep alive Maximum Size: Maximum number of threads allowed in the pool Keep Alive: Time to allow threads to remain idle when # of threads > Core Size Work Queue Capacity: # of tasks that can be stored in inbound buffer Thread Use: Application intend to run short vs long-running tasks, accordingly pooled or daemon threads are picked ManagedScheduledExecutorService adds delay and periodic task running capabilities to ManagedExecutorService. 
The implementations of this interface are provided by the container and accessible using JNDI reference: <resource-env-ref>  <resource-env-ref-name>    concurrent/BatchExecutor  </resource-env-ref-name>  <resource-env-ref-type>    javax.enterprise.concurrent.ManagedExecutorService  </resource-env-ref-type><resource-env-ref> and available as: @Resource(name="concurrent/timedExecutor")ManagedExecutorService executor; And then the tasks are submitted using submit, invokeXXX or scheduleXXX methods. ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS); This will create and execute a one-shot action that becomes enabled after 5 seconds of delay. More control is possible using one of the newly added methods: MyTaskListener implements ManagedTaskListener {  public void taskStarting(...) { . . . }  public void taskSubmitted(...) { . . . }  public void taskDone(...) { . . . }  public void taskAborted(...) { . . . } }ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS, new MyTaskListener()); Here, ManagedTaskListener is used to monitor the state of a task's future. ManagedThreadFactory provides a method for creating threads for execution in a managed environment. A simple usage is: @Resource(name="concurrent/myThreadFactory")ManagedThreadFactory factory;. . .Thread thread = factory.newThread(new Runnable() { . . . }); concurrent/myThreadFactory is a JNDI resource. There is lot of interesting content in the Early Draft, download it, and read yourself. The implementation will be made available soon and also be integrated in GlassFish 4 as well. Some references for further exploring ... Javadoc Early Draft Specification concurrency-ee-spec.java.net [email protected]

    Read the article

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent. When you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now. Why use HTML5 Local Storage? I use HTML5 Local Storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (You can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries). All of the entries are stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server. Retrieving Data from the Server In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps: Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:   using System.Data.Services; using System.Data.Services.Common; using System.Web; using JavaScriptReference.Models; namespace JavaScriptReference.Services { [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class EntryService : DataService<ReferenceDBEntities> { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { config.UseVerboseErrors = true; config.SetEntitySetAccessRule("*", EntitySetRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } // Define a change interceptor for the Products entity set. 
[ChangeInterceptor("Entries")] public void OnChangeEntries(Entry entry, UpdateOperations operations) { if (!HttpContext.Current.Request.IsAuthenticated) { throw new DataServiceException("Cannot update reference unless authenticated."); } } } }     The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceEntitites>. Because it derives from DataService<ReferenceEntities>, the data service exposes the contents of the ReferenceEntitiesDB database. In the code above, I defined a ChangeInterceptor to prevent un-authenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes. After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service: $.ajax({ dataType: "json", url: “/Services/EntryService.svc/Entries”, success: function (result) { var data = callback(result["d"]); } });     Notice that you must unwrap the data using result[“d”]. After you unwrap the data, you have a JavaScript array of the entries. I’m transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment: { "d" : [ { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" }, "Id": 1, "Name": "Global", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "object", "ShortDescription": "Contains global variables and functions", "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n", "LastUpdated": "634298578273756641", "IsDeleted": false, "OwnerId": null }, { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" }, "Id": 2, "Name": "eval(string)", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "function", "ShortDescription": "Evaluates and executes JavaScript code dynamically", "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>", "LastUpdated": "634298580913817644", "IsDeleted": false, "OwnerId": 1 } … ]} I worried about the amount of time that it would take to transfer the records. 
According to Google Chome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. And here are the estimated times using different types of connections using Fiddler: Notice that using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time. So, I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem. Adding Data to HTML5 Local Storage After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here’s the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx You access local storage by accessing the windows.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT I in Chrome) to view items added to local storage: After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You only can add strings to local storage and not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this: function Storage() { this.get = function (name) { return JSON.parse(window.localStorage.getItem(name)); }; this.set = function (name, value) { window.localStorage.setItem(name, JSON.stringify(value)); }; this.clear = function () { window.localStorage.clear(); }; }   If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this: var store = new Storage(); // Add array to storage var products = [ {name:"Fish", price:2.33}, {name:"Bacon", price:1.33} ]; store.set("products", products); // Retrieve items from storage var products = store.get("products");   Modern browsers support the JSON object natively. If you need the script above to work with older browsers then you should download the JSON2.js library from: https://github.com/douglascrockford/JSON-js The JSON2 library will use the native JSON object if a browser already supports JSON. Merging Server Changes with Browser Local Storage When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser. 
The OData query to get the latest updates looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L) If you remove URL encoding, the query looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L) This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:   merge: function (oldEntries, newEntries) { // concat (this performs the add) oldEntries = oldEntries || []; var mergedEntries = oldEntries.concat(newEntries); // sort this.sortByIdThenLastUpdated(mergedEntries); // prune duplicates (this performs the update) mergedEntries = this.pruneDuplicates(mergedEntries); // delete mergedEntries = this.removeIsDeleted(mergedEntries); // Sort this.sortByName(mergedEntries); return mergedEntries; },   The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method. I wrote the unit tests using server-side JavaScript. I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item. I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this: public override int SaveChanges(SaveOptions options) { var changes = this.ObjectStateManager.GetObjectStateEntries( EntityState.Modified | EntityState.Added | EntityState.Deleted); foreach (var change in changes) { var entity = change.Entity as IEntityTracking; if (entity != null) { entity.LastUpdated = DateTime.Now.Ticks; } } return base.SaveChanges(options); }   Notice that I assign Date.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted. Summary After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data then I recommend that you take advantage of this new feature included in the HTML5 standard.
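To make the flow concrete, here is a rough sketch of the start-up sync described above. It assumes the Storage wrapper shown earlier, a merge() helper like the one from EntriesHelper.js (referenced here through a made-up entriesHelper object), and jQuery; syncEntries and maxTick are names invented for this illustration:

// Sketch: fetch only the entries changed since the stored version timestamp,
// merge them into local storage, and remember the new version for next time.
function maxTick(a, b) {
  // LastUpdated values are large tick counts serialized as strings, so compare
  // them as strings (by length, then lexicographically) to avoid precision loss.
  if (a.length !== b.length) { return a.length > b.length ? a : b; }
  return a > b ? a : b;
}

function syncEntries(onReady) {
  var store = new Storage();
  var lastUpdated = store.get("entriesLastUpdated") || "0";   // "0" on the first visit: fetch everything

  $.ajax({
    dataType: "json",
    // in practice the $filter value is URL-encoded, as in the example above
    url: "/Services/EntryService.svc/Entries?$filter=(LastUpdated gt " + lastUpdated + "L)",
    success: function (result) {
      var changes = result["d"];                               // unwrap the OData payload
      var entries = entriesHelper.merge(store.get("entries"), changes);

      var newest = lastUpdated;
      for (var i = 0; i < changes.length; i++) {
        newest = maxTick(newest, changes[i].LastUpdated);
      }

      store.set("entries", entries);
      store.set("entriesLastUpdated", newest);
      onReady(entries);
    }
  });
}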

    Read the article

  • Why is HttpClient's GetStringAsync unbelievably slow?

    - by Jason94
    I have a Windows Phone 8 project where I've chosen to use a PCL (Portable Class Library) project too, since I'm going to build a Win8 app as well. However, while calling my API (in Azure), my HttpClient's GetStringAsync is very slow. I threw in a couple of debug statements with DateTime, and GetStringAsync took something like 14 seconds! And sometimes it takes longer. What I'm doing is retrieving simple JSON from my Azure API site. My Android client has no problem getting that same data in a split second... so is there something I'm missing? The setup is pretty straightforward:

    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    client.DefaultRequestHeaders.Add("X-Token", "something");
    string responseJSON = await client.GetStringAsync("url");

I've placed the debug times right before and after the await there; in between it is 14 seconds! Does someone know why?

    Read the article

  • MSDeploy: error using runCommand provider to call remote .cmd file (timeout)

    - by mjw.
    We are running into an error when trying to use the MSDeploy "runCommand" provider to execute a .cmd file on a remote machine. The expected run time should be about 10 seconds, but MSDeploy only runs it for about 2-3 seconds, after which time error details are returned. Here is the complete MSDeploy "runCommand" command line text I am using: msdeploy.exe -verb:sync -source:runCommand="D:\web deploy tester\test_cmd.cmd",dontUseCommandExe=false,waitAttempts=5,waitInterval=1000 -dest:auto,computername=http://test-machine:89/MsDeployAgentService/,userName=aaa,password=bbb Here are the error details returned: Error 'Error: (4/21/2010 12:19:25 PM) An error occurred when the request was processed on the remote computer. Error: The process 'C:\WINDOWS\system32\cmd.exe' (command line '/c "D:\web deploy tester\test_cmd.cmd"') was terminated because it exceeded the wait time. Error count: 1. ' occurred in call to RunCommand Any ideas as to why this is happening and how to resolve it?

    Read the article

  • Problem with waitable timers in Windows (timeSetEvent and CreateTimerQueueTimer)

    - by MusiGenesis
    I need high-resolution (more accurate than 1 millisecond) timing in my application. The waitable timers in Windows are (or can be made) accurate to the millisecond, but if I need a precise periodicity of, say, 35.7142857141 milliseconds, even a waitable timer with a 36 ms period will drift out of sync quickly. My "solution" to this problem (in ironic quotes because it's not working quite right) is to use a series of one-shot timers where I use the expiration of each timer to call the next timer. Normally a process like this would be subject to cumulative error over time, but in each timer callback I check the current time (with System.Diagnostics.Stopwatch) and use this to calculate what the period of the next timer needs to be (so if a timer happens to expire a little late, the next timer will automagically have a shorter period to compensate). This works as expected, except that after maybe 10-15 seconds the timer system seems to get bogged down, and a few timer callbacks here and there arrive anywhere from 25 to 100 milliseconds late. After a couple of seconds the problem goes away and everything runs smoothly again for 10-15 seconds, and then the stuttering again. Since I'm using Stopwatch to set each timer period, I'm also using it to monitor the arrival times of each timer callback. During the smooth-running periods, most (maybe 95%) of the intervals are either 35 or 36 milliseconds, and no intervals are ever more than 5 milliseconds away from the expected 35.7142857143. During the "glitchy" stretches, the distribution of intervals is very nearly identical, except that a very small number are unusually large (a couple more than 60 ms and one or two longer than 100 ms during maybe a 3-second stretch). This stuttering is very noticeable, and it's what I'm trying to fix, if possible. For the high-resolution timer, I was using the extremely antique timeSetEvent() multimedia timer from winmm.dll. In pursuit of this problem, I switched to using CreateTimerQueueTimer (along with timeBeginPeriod to set the high-resolution), but I'm seeing the same problem with both timer mechanisms. I've tried experimenting with the various flags for CreateTimerQueueTimer which determine which thread the timer runs on, but the stuttering appears no matter what. Is this just a fundamental problem with using timers in this way (i.e. using each one-shot timer to call the next)? If so, do I have any alternatives? One thing I was considering was to determine how many consecutive 1-millisecond-accuracy ticks would keep my within some arbitrary precision limit before I need to reset the timer. So, for example, if I wanted a 35.71428 period, I could let a 36 ms timer elapse 15 times before it was off by 5 milliseconds, then kill it and start a new one.
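For what it's worth, the scheme being described (each one-shot timer re-arms the next one with a period corrected against a monotonic clock) looks roughly like this, sketched in JavaScript with setTimeout/performance.now purely to illustrate the idea; the real code uses timeSetEvent/CreateTimerQueueTimer and Stopwatch, and tick/doWork are made-up names:

// Sketch of a self-correcting chain of one-shot timers: each callback schedules
// the next tick relative to where it *should* be on an ideal timeline, so any
// lateness in one callback is absorbed instead of accumulating.
var PERIOD_MS = 35.7142857143;            // desired period
var start = performance.now();
var ticks = 0;

function doWork() {
  // the per-tick work goes here
}

function tick() {
  ticks++;
  doWork();

  // where the *next* tick should land on the ideal timeline
  var idealNext = start + (ticks + 1) * PERIOD_MS;
  var nextDelay = Math.max(0, idealNext - performance.now());
  setTimeout(tick, nextDelay);
}

setTimeout(tick, PERIOD_MS);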

    Read the article
