Search Results

Search found 6670 results on 267 pages for 'speed dial'.


  • My server app works strangely. What could be the reason(s)?

    - by Poni
    Hi! I've written a server app (two parts actually: a proxy server and a game server) in C++ (it's a board game). It uses IOCP as the sockets interface. For that app I've also written a "client simulator" (hereafter "client") app that spawns many client connections, each of which plays at very high speed, driving the CPU to 100% utilization. Here is the topology: Game server - holds the game state. Real players do not connect to it directly but through the proxy server. When a player joins a game, the proxy asks for it on behalf of that player, the game server spawns a "player instance" for that player, and from then on every notification between the game server and the player is passed through the proxy. Proxy server - holds the TCP connections with the real players. Players communicate with the game server through it only. Client simulator - connects to the proxy only. When running the server (again, it's actually two server apps) and the client locally, it all works just fine. I'm talking about 40k+ player instances, all of them active in a game. On the other hand, when running the server remotely with, say, 1000 clients playing, things get strange. For example, I run it as described above, then kill the client simulator app with Task Manager ("End Process Tree"). Then it seems like a buffer on the remote server got modified by another thread - in other words, memory corruption has occurred. The server crashes because it got an unknown message id (it's a custom protocol where each message has its own unique number). To make things clear, here is how I run the apps: PC1 - game server and client simulator (because the clients connect to the proxy). PC2 - proxy server. The strangest thing is this: only the remote side gets "corrupted" - "remote" meaning it's not the PC I use to code the apps (VC++ 2008). Let's call the PC I use to code the apps "PC1". Now, for example, if this time I run the game server on PC1 (meaning the proxy server is on PC2 and the client simulator on PC1), then the proxy server crashes with an "unknown message id" error. Another variation: when I run the proxy server on PC1 (again, the dev machine) and the game server and client simulator on PC2, then the game server on PC2 crashes. As for the IOCP config: the servers' internal connections use the default receive/send buffer sizes. I even tried setting them to 1MB, but no luck. I have three PCs in total: 2 x Vista 64-bit (one of those is the dev machine; the other is connected through WiFi) and 1 x WinXP 32-bit. They're all connected in a "full duplex" manner. What could be the reason? I've tried about everything: stack tracing, recording some actions (like read/write logging), and so on. I want to stress that only the PC I'm not using to code the apps crashes (actually the server app "role" running on it - sometimes the game server and sometimes the proxy server). At first I thought that maybe the wireless PC has problems (it's wireless...), but TCP has its own mechanisms to make sure packets are delivered properly, and a crash also happens when trying it with the two PCs that are physically connected (Vista vs. XP). Another option is that the Windows DLL versions might have problems, but then again, one of the tests is Vista vs. Vista, and the other is Vista vs. XP. Any idea?
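
    A hedged side note: nothing in the description pins down a cause, but one classic way to get exactly this "another thread modified my buffer" symptom with IOCP is reusing or freeing a receive buffer while a posted overlapped operation is still pending; the machine under the heaviest load is then the one that appears corrupted first. Purely as an illustration (not the poster's code; error handling omitted), a per-operation context that owns its buffer for the whole lifetime of the call looks like this:

        #include <winsock2.h>

        // One context per outstanding operation. The buffer must stay alive and
        // untouched from the WSARecv call until the completion for this OVERLAPPED
        // is dequeued by a worker thread.
        struct IoContext {
            OVERLAPPED overlapped;
            WSABUF     wsabuf;
            char       buffer[4096];
        };

        void post_receive(SOCKET s)
        {
            IoContext* ctx = new IoContext();               // owned by this one operation
            ZeroMemory(&ctx->overlapped, sizeof(ctx->overlapped));
            ctx->wsabuf.buf = ctx->buffer;
            ctx->wsabuf.len = sizeof(ctx->buffer);
            DWORD flags = 0;
            WSARecv(s, &ctx->wsabuf, 1, NULL, &flags, &ctx->overlapped, NULL);
            // The worker that receives this completion from GetQueuedCompletionStatus
            // is the only code allowed to touch ctx->buffer, and it deletes ctx when done.
        }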

    Read the article

  • What is the best algorithm for this array-comparison problem?

    - by mark
    What is the most efficient for speed algorithm to solve the following problem? Given 6 arrays, D1,D2,D3,D4,D5 and D6 each containing 6 numbers like: D1[0] = number D2[0] = number ...... D6[0] = number D1[1] = another number D2[1] = another number .... ..... .... ...... .... D1[5] = yet another number .... ...... .... Given a second array ST1, containing 1 number: ST1[0] = 6 Given a third array ans, containing 6 numbers: ans[0] = 3, ans[1] = 4, ans[2] = 5, ......ans[5] = 8 Using as index for the arrays D1,D2,D3,D4,D5 and D6, the number that goes from 0, to the number stored in ST1[0] minus one, in this example 6, so from 0 to 6-1, compare each res array against each D array My algorithm so far is: I tried to keep everything unlooped as much as possible. EML := ST1[0] //number contained in ST1[0] EML1 := 0 //start index for the arrays D While EML1 < EML if D1[ELM1] = ans[0] goto two if D2[ELM1] = ans[0] goto two if D3[ELM1] = ans[0] goto two if D4[ELM1] = ans[0] goto two if D5[ELM1] = ans[0] goto two if D6[ELM1] = ans[0] goto two ELM1 = ELM1 + 1 return 0 //If the ans[0] number is not found in either D1[0-6], D2[0-6].... D6[0-6] return 0 which will then exclude ans[0-6] numbers two: EML1 := 0 start index for arrays Ds While EML1 < EML if D1[ELM1] = ans[1] goto three if D2[ELM1] = ans[1] goto three if D3[ELM1] = ans[1] goto three if D4[ELM1] = ans[1] goto three if D5[ELM1] = ans[1] goto three if D6[ELM1] = ans[1] goto three ELM1 = ELM1 + 1 return 0 //If the ans[1] number is not found in either D1[0-6], D2[0-6].... D6[0-6] return 0 which will then exclude ans[0-6] numbers three: EML1 := 0 start index for arrays Ds While EML1 < EML if D1[ELM1] = ans[2] goto four if D2[ELM1] = ans[2] goto four if D3[ELM1] = ans[2] goto four if D4[ELM1] = ans[2] goto four if D5[ELM1] = ans[2] goto four if D6[ELM1] = ans[2] goto four ELM1 = ELM1 + 1 return 0 //If the ans[2] number is not found in either D1[0-6], D2[0-6].... D6[0-6] return 0 which will then exclude ans[0-6] numbers four: EML1 := 0 start index for arrays Ds While EML1 < EML if D1[ELM1] = ans[3] goto five if D2[ELM1] = ans[3] goto five if D3[ELM1] = ans[3] goto five if D4[ELM1] = ans[3] goto five if D5[ELM1] = ans[3] goto five if D6[ELM1] = ans[3] goto five ELM1 = ELM1 + 1 return 0 //If the ans[3] number is not found in either D1[0-6], D2[0-6].... D6[0-6] return 0 which will then exclude ans[0-6] numbers five: EML1 := 0 start index for arrays Ds While EML1 < EML if D1[ELM1] = ans[4] goto six if D2[ELM1] = ans[4] goto six if D3[ELM1] = ans[4] goto six if D4[ELM1] = ans[4] goto six if D5[ELM1] = ans[4] goto six if D6[ELM1] = ans[4] goto six ELM1 = ELM1 + 1 return 0 //If the ans[4] number is not found in either D1[0-6], D2[0-6].... D6[0-6] return 0 which will then exclude ans[0-6] numbers six: EML1 := 0 start index for arrays Ds While EML1 < EML if D1[ELM1] = ans[5] return 1 ////If the ans[1] number is not found in either D1[0-6]..... if D2[ELM1] = ans[5] return 1 which will then include ans[0-6] numbers return 1 if D3[ELM1] = ans[5] return 1 if D4[ELM1] = ans[5] return 1 if D5[ELM1] = ans[5] return 1 if D6[ELM1] = ans[5] return 1 ELM1 = ELM1 + 1 return 0 As language of choice, it would be pure c
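
    Each stage of the goto chain only asks "does ans[k] appear anywhere in D1..D6 at indices 0..ST1[0]-1", so the whole routine reduces to six membership tests against one combined set. A sketch of that idea, written as C-style code (and assuming the values are small non-negative integers so a direct lookup table works; otherwise sort once and binary-search, or use a hash set):

        #include <string.h>

        #define MAX_VALUE 255   /* assumed upper bound on the stored numbers */

        /* Returns 1 if every ans[0..5] value occurs somewhere in D1..D6 at
           indices 0..count-1, otherwise 0. */
        int all_present(const int *D[6], int count, const int ans[6])
        {
            unsigned char seen[MAX_VALUE + 1];
            memset(seen, 0, sizeof(seen));

            for (int d = 0; d < 6; d++)          /* one pass over all six D arrays */
                for (int i = 0; i < count; i++)
                    seen[D[d][i]] = 1;

            for (int k = 0; k < 6; k++)          /* then six O(1) lookups */
                if (!seen[ans[k]])
                    return 0;
            return 1;
        }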

    Read the article

  • PHP mini-server download resume error! Resource id # 4

    - by snikolov
    $httpsock = @socket_create_listen("9090"); if (!$httpsock) { print "Socket creation failed!\n"; exit; } while (1) { $client = socket_accept($httpsock); $input = trim(socket_read ($client, 4096)); $input = explode(" ", $input); $range = $input[12]; $input = $input[1]; $fileinfo = pathinfo($input); switch ($fileinfo['extension']) { default: $mime = "text/html"; } if ($input == "/") { $input = "index.html"; } $input = ".$input"; if (file_exists($input) && is_readable($input)) { echo "Serving $input\n"; $contents = file_get_contents($input); $output = "HTTP/1.0 200 OK\r\nServer: APatchyServer\r\nConnection: close\r\nContent-Type: $mime\r\n\r\n$contents"; } else { //$contents = "The file you requested doesn't exist. Sorry!"; //$output = "HTTP/1.0 404 OBJECT NOT FOUND\r\nServer: BabyHTTP\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n$contents"; if(isset($range)) { list($a, $range) = explode("=",$range); str_replace($range, "-", $range); $size2 = $size-1; $new_length = $size-$range; $output = "HTTP/1.1 206 Partial Content\r\n"; $output .= "Content-Length: $new_length\r\n"; $output .= "Content-Range: bytes $range$size2/$size\r\n"; } else { $size2=$size-1; $output .= "Content-Length: $new_length\r\n"; } $chunksize = 1*(1024*1024); $bytes_send = 0; $file = "a.mp3"; $filesize = filesize($file); if ($file = fopen($file, 'r')) { if(isset($range)) $output = 'HTTP/1.0 200 OK\r\n'; $output .= "Content-type: application/octet-stream\r\n"; $output .= "Content-Length: $filesize\r\n"; $output .= 'Content-Disposition: attachment; filename="'.$file.'"\r\n'; $output .= "Accept-Ranges: bytes\r\n"; $output .= "Cache-Control: private\n\n"; fseek($file, $range); $download_rate = 1000; while(!feof($file) and (connection_status()==0)) { $var_stat = fread($file, round($download_rate *1024)); $output .= $var_stat;//echo($buffer); // is also possible flush(); sleep(1);//// decrease download speed } fclose($file); } /** $filename = "dada"; $file = fopen($filename, 'r'); $filesize = filesize($filename); $buffer = fread($file, $filesize); $send = array("Output"=$buffer,"filesize"=$filesize,"filename"=$filename); $file = $send['filename']; */ //@ob_end_clean(); // $output .= "Content-Transfer-Encoding: binary"; //$output .= "Connection: Keep-Alive\r\n"; } socket_write($client, $output); socket_close ($client); } socket_close ($httpsock); hey guys i have create a miniwebserver downloader it can download files from your server, however i am unable to resume my download when i download the file i get Resource id # 4 and also i cant resume the download,i would like to know how i can monitor record the client output how much bandwidth he has downloaded perl has something like this put its hardcore if possible kindly provide me with some pointers thank you :)

    Read the article

  • Why is my mdadm raid-1 recovery so slow?

    - by dimmer
    On a system I'm running Ubuntu 10.04. My raid-1 restore started out fast but quickly became ridiculously slow (at this rate the restore will take 150 days!): dimmer@paimon:~$ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid1 sdc1[2] sdb1[1] 1953513408 blocks [2/1] [_U] [====>................] recovery = 24.4% (477497344/1953513408) finish=217368.0min speed=113K/sec unused devices: <none> Eventhough I have set the kernel variables to reasonably quick values: dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_min 1000000 dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_max 100000000 I am using 2 2.0TB Western Digital Hard Disks, WDC WD20EARS-00M and WDC WD20EARS-00J. I believe they have been partitioned such that their sectors are aligned. dimmer@paimon:/sys$ sudo parted /dev/sdb GNU Parted 2.2 Using /dev/sdb Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p Model: ATA WDC WD20EARS-00M (scsi) Disk /dev/sdb: 2000GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 2000GB 2000GB ext4 (parted) unit s (parted) p Number Start End Size File system Name Flags 1 2048s 3907028991s 3907026944s ext4 (parted) q dimmer@paimon:/sys$ sudo parted /dev/sdc GNU Parted 2.2 Using /dev/sdc Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p Model: ATA WDC WD20EARS-00J (scsi) Disk /dev/sdc: 2000GB Sector size (logical/physical): 512B/4096B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 2000GB 2000GB ext4 I am beginning to think that I have a hardware problem, otherwise I can't imagine why the mdadm restore should be so slow. I have done a benchmark on /dev/sdc using Ubuntu's disk utility GUI app, and the results looked normal so I know that sdc has the capability to write faster than this. I also had the same problem on a similar WD drive that I RMAd because of bad sectors. I suppose it's possible they sent me a replacement with bad sectors too, although there are no SMART values showing them yet. Any ideas? Thanks. As requested, output of top sorted by cpu usage (notice there is ~0 cpu usage). iowait is also zero which seems strange: top - 11:35:13 up 2 days, 9:40, 3 users, load average: 2.87, 2.58, 2.30 Tasks: 142 total, 1 running, 141 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 3096304k total, 1482164k used, 1614140k free, 617672k buffers Swap: 1526132k total, 0k used, 1526132k free, 535416k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 45 root 20 0 0 0 0 S 0 0.0 2:17.02 scsi_eh_0 1 root 20 0 2808 1752 1204 S 0 0.1 0:00.46 init 2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd 3 root RT 0 0 0 0 S 0 0.0 0:00.02 migration/0 4 root 20 0 0 0 0 S 0 0.0 0:00.17 ksoftirqd/0 5 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0 6 root RT 0 0 0 0 S 0 0.0 0:00.02 migration/1 ... 
dmesg errors, definitely looking like hardware: [202884.000157] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [202884.007015] ata5.00: failed command: FLUSH CACHE EXT [202884.013728] ata5.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 [202884.013730] res 40/00:00:ff:59:2e/00:00:35:00:00/e0 Emask 0x4 (timeout) [202884.033667] ata5.00: status: { DRDY } [202884.040329] ata5: hard resetting link [202889.400050] ata5: link is slow to respond, please be patient (ready=0) [202894.048087] ata5: COMRESET failed (errno=-16) [202894.054663] ata5: hard resetting link [202899.412049] ata5: link is slow to respond, please be patient (ready=0) [202904.060107] ata5: COMRESET failed (errno=-16) [202904.066646] ata5: hard resetting link [202905.840056] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [202905.849178] ata5.00: configured for UDMA/133 [202905.849188] ata5: EH complete [203899.000292] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [203899.007096] ata5.00: failed command: IDENTIFY DEVICE [203899.013841] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [203899.013843] res 40/00:00:ff:f9:f6/00:00:38:00:00/e0 Emask 0x4 (timeout) [203899.041232] ata5.00: status: { DRDY } [203899.048133] ata5: hard resetting link [203899.816134] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [203899.826062] ata5.00: configured for UDMA/133 [203899.826079] ata5: EH complete [204375.000200] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [204375.007421] ata5.00: failed command: IDENTIFY DEVICE [204375.014799] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [204375.014800] res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout) [204375.044374] ata5.00: status: { DRDY } [204375.051842] ata5: hard resetting link [204380.408049] ata5: link is slow to respond, please be patient (ready=0) [204384.440076] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [204384.449938] ata5.00: configured for UDMA/133 [204384.449955] ata5: EH complete [204395.988135] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [204395.988140] ata5.00: failed command: IDENTIFY DEVICE [204395.988147] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [204395.988149] res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout) [204395.988151] ata5.00: status: { DRDY } [204395.988156] ata5: hard resetting link [204399.320075] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [204399.330487] ata5.00: configured for UDMA/133 [204399.330503] ata5: EH complete

    Read the article

  • My D-Link's Ethernet bridge downlink just got 10-30x slower?

    - by Jay Levitt
    TL;DR: I unplugged my network to move my desk, and now downloading via my DIR-655's Ethernet LAN bridge is 10-30x slower than the Ethernet switch it's plugged into. Background My network is SMC cable modem <-> Cisco firewall <-> Netgear switch <-> D-Link WiFi† | | | | SMC8014 ASA-5505 GS608v2 gigE DIR-655 rev A3 gigE †The DIR-655 is used as an access point, not a router (although what D-Link calls an access point, I'd call a bridge). The "WAN" port is unused; the Netgear connects to the built-in 4-port Ethernet LAN switch, inside the built- in router/firewall. Endpoints: MacBook Pro 17" mid-2010 iPhone 4S Fedora 12 Linux server running reasonably fast dual-Athlon X2, VelociRaptors, etc. All cables are <10 feet, mostly CAT-5e, some CAT-6, all premade. All WiFi endpoints are within three feet of the D-Link. Yesterday I unplugged and rearranged stuff, and now connecting via the D-Link - even through the wired switch, right next to the incoming network cable - is 30x slower than connecting directly to the Netgear switch, on both my MacBook and iPhone. How I'm measuring "slower" I'm mostly using http://speedtest.net, which of course only really measures broadband speeds. I've also installed http://www.speedtest.net/mini.php on my local server, but can't test the iPhone with that. Results Speedtest.net, closest server over Comcast business-class: CONFIG | PING (ms) | DOWN (Mbps) | UP (Mbps) Mac <-> Ethernet <-> Netgear | 9 | 31.6 | 6.8 Mac <-> Ethernet <-> D-Link | 8 | 4.1 | 6.0 Mac <-> WiFi <-> D-Link | 9 | 1.4 | 2.9 iPhone <-> WiFi <-> D-Link | 67 | 0.4 | 1.6 Speedtest Mini on Linux PC: CONFIG | DOWN (Mbps) | UP (Mbps) Mac <-> Ethernet <-> NetGear | 97.2 | 76.9 Mac <-> Ethernet <-> D-Link | 8.2 | 24.2 Mac <-> WiFi <-> D-Link | 1.0 | 8.6 Slow typing in SSH: Mac <-> Ethernet <-> Netgear <-> Linux PC: smooth Mac <-> Ethernet <-> D-Link <-> Linux PC: choppy Note that D-Link upload speeds are normal on broadband, slower locally (but I'd believe that's a D-Link limitation), and always faster than the downloads! Since ssh is choppy just with slow typing, I don't believe it's a throttling-type problem either; that's not a lot of bandwidth. What I've tried Swapping all "good" and "bad" cables Re-plugging "bad" cable from D-Link to Netgear and watching it be the "good" cable pulling cables away from power lines Verify that the Mac auto-detects the D-Link as gigE Try to verify the link speed of the D-Link <- Netgear connection, but the firmware doesn't report that Verify that the D-Link sees no TX/RX errors or collisions Use different Ethernet ports on both Netgear and D-Link Reset the D-Link to factory settings Upgrade the D-Link firmware from 1.21 to 1.35NA, 2010/11/12, the latest Reboot everything at least once On the Mac, disable Wi-Fi during the Ethernet tests, and unplug Ethernet during the Wi-Fi tests Using iStumbler, verify that the D-Link isn't picking overloaded Wi-Fi channels (usually just 1-5 neighbors on my and adjacent channels, average for my apt building) Verify that the only client connected to the Wi-Fi was the iPhone Verify that nothing was being chatty on my network according to the WISH log Enable and disable all sorts of D-Link settings, including forcing WAN auto-detect to gigE So. I don't mind buying a new access point—I wouldn't mind having a dual-link network—but as a guy who's been networking since gated v4 was a drastic rewrite, and who often used physical sniffers in the days before Wireshark, I'm baffled. I hate being baffled. 
What could I possibly have changed that would result in this? How can I measure it? All I can think of is a static zap—thick carpet, socks, HVAC—but I didn't feel one, and does that really happen anymore? Can I test if it's Ethernet vs. TCP layer slowness? I'm not familiar with modern network utilities; it's hard to Google without hitting "Q: Why is my network slow? A: Is your microwave on?" If I don't get an answer here, will someone big and powerful help me migrate it to serverfault without getting screamed back here? In the words of Inigo Montoya, "I must know." Don't get all Dread Pirate Roberts on me.

    Read the article

  • How do I get through proxy server environments for non-standard services?

    - by Ripred
    I'm not real hip on exactly what role(s) today's proxy servers can play and I'm learning so go easy on me :-) I have a client/server system I have written using a homegrown protocol and need to enhance the client side to negotiate its way out of a proxy environment. I have an existing client and server system written in C and C++ for the speed and a small amount of MFC in the client to handle the user interface. I have written both the server and client side of the system on Windows (the people I work for are mainly web developers using Windows everything - not a choice) sticking to Berkeley Sockets as it were via wsock32 for efficiency. The clients connect to the server through a nonstandard port (even though using port 80 is an option to get out of some environments but the protocol that goes over it isn't HTTP). The TCP connection(s) stay open for the duration of the clients participation in real time conferences. Our customer base is expanding to all kinds of networked environments. I have been able to solve a lot of problems by adding the ability to connect securely over port 443 and using secure sockets which allows the protocol to pass through a lot environments since the internal packets can't be sniffed. But more and more of our customers are behind a proxy server environment and my direct connections don't make it through. My old school understanding of proxy servers is that they act as a proxy for external HTML content over HTTP, possibly locally caching popular material for faster local access, and also allowing their IT staff to blacklist certain destination sites. Customer are complaining that my software doesn't recognize and easily navigate its way through their proxy environments but I'm finding it difficult to decide what my "best fit" solution should be. My software doesn't tear down the connection after each client request, and on top of that packets can come from either side at any time, basically your typical custom client/server system for a specific niche. My first reaction is "why can't they just add my servers addresses to their white list" but if there is a programmatic way I can get through without requiring their IT staff to help it is politically better and arguably a better solution anyway. Plus maybe I'm still not understanding the role and purpose of what proxy servers and environments have grown to be these days. My first attempt at a solution was to use WinInet with its various proxy capabilities to establish a connection over port 80 to my non-standard protocol server (which knows enough to recognize and answer a simple HTTP-looking GET request and answer it with a simple HTTP response page to get around some environments that employ initial packet sniffing (DPI)). I retrieved the actual SOCKET handle behind WinInet's HINTERNET request object and had hoped to use that in place of my software's existing SOCKET connection and hopefully not need to change much more on the client side. It initially seemed to be my solution but on further inspection it seems that the OS gets first-chance at the received data on this socket since when I get notified of events via the standard select(...) statement on the socket and query the size of the data available via ioctlsocket the call succeeds but returns 0 bytes available, the reads don't work and it goes downhill from there. Can someone tell me of a client-side library (commercial is fine) will let me get past these proxy server environments with as little user and IT staff help as possible? 
From what I read it has grown past SOCKS and I figure someone has to have solved this problem before me. Thanks for reading my long-winded question, Ripred
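
    For what it's worth, the standard mechanism for carrying an arbitrary long-lived TCP protocol through an HTTP proxy is the CONNECT method: the client asks the proxy for a raw tunnel to host:port, the proxy answers with a 2xx status line, and from then on the bytes pass through unmodified, so a custom protocol (or TLS on 443) rides on top unchanged; most proxies permit CONNECT at least to port 443. Below is only a sketch of that exchange on a socket already connected to the proxy, not a drop-in for the existing client (sendAll/recvSome are placeholder wrappers, and proxy authentication, SOCKS and PAC discovery are separate problems):

        #include <winsock2.h>
        #include <string>

        // Placeholder wrappers for whatever blocking send/recv helpers the client already has.
        bool sendAll(SOCKET s, const char* data, size_t len);
        int  recvSome(SOCKET s, char* buf, size_t len);

        // Ask the proxy (already connected on 'sock') for a raw tunnel to host:port.
        bool open_proxy_tunnel(SOCKET sock, const std::string& host, int port)
        {
            std::string target = host + ":" + std::to_string(port);
            std::string req = "CONNECT " + target + " HTTP/1.1\r\n"
                              "Host: " + target + "\r\n"
                              "Proxy-Connection: keep-alive\r\n\r\n";
            if (!sendAll(sock, req.data(), req.size()))
                return false;

            std::string reply;
            char buf[512];
            while (reply.find("\r\n\r\n") == std::string::npos) {  // read the proxy's status line + headers
                int n = recvSome(sock, buf, sizeof(buf));
                if (n <= 0) return false;
                reply.append(buf, n);
            }
            // "HTTP/1.0 200 Connection established" (any 2xx) means the tunnel is open;
            // everything written afterwards goes straight to the destination server.
            return reply.size() > 12 && reply.compare(0, 5, "HTTP/") == 0 && reply[9] == '2';
        }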

    Read the article

  • Windows 7: Search indexing is stuck

    - by Ricket
    When I open Indexing Options, it says: 4,317 items indexed Indexing in progress. Search results might not be complete during this time. It's stuck at 4,317 though; no more items have been indexed. Worst of all, SearchIndexer.exe is taking up 100% CPU (well, 50%, but I have a dual core CPU; it's taking up all processing power it can). It is not causing hard drive activity though. I tried clicking "Troubleshoot search and indexing" at the bottom of the Indexing Options window, but it couldn't find any problem. I've also tried the repair registry key that several websites suggest; I change HKLM\SOFTWARE\Microsoft\Windows Search SetupCompletedSuccessfully to 0 and restarted the computer, and it apparently repaired because it flipped back to 1, but the same problem continues to occur. It's reducing the battery life of my laptop and making it really hot so that my fans are running all the time. I've had to disable the Windows Search service. How can I fix this? Do I need to just flat-out reformat my computer? Update: I've tried rebuilding a couple times. There's nothing unusual about the locations I have to index, and I don't have any downloads in progress or anything like that. I don't see any reason why it stopped, and I noticed it much too late to do a system restore. At this point, I'm hoping someone will offer up some secret answer that will fix the problem, thus the bounty. Another update: I tried starting the service again, just to let it try yet again. It seemed okay at first (Indexing Options showed it operating at reduced speed due to user activity, and the number of files was going up). A while later I checked, and the service had stopped. Event viewer revealed some errors like this: Log Name: Application Source: Application Error Date: 2/1/2010 7:34:23 PM Event ID: 1000 Task Category: (100) Level: Error Keywords: Classic User: N/A Computer: ricky-win7 Description: Faulting application name: SearchIndexer.exe, version: 7.0.7600.16385, time stamp: 0x4a5bcdd0 Faulting module name: NLSData0007.dll, version: 6.1.7600.16385, time stamp: 0x4a5bda88 Exception code: 0xc0000005 Fault offset: 0x002141ba Faulting process id: 0x13a0 Faulting application start time: 0x01caa39f2a70ec02 Faulting application path: C:\Windows\system32\SearchIndexer.exe Faulting module path: C:\Windows\System32\NLSData0007.dll Report Id: b4f7a7ae-0f92-11df-87fc-e5d65d8794c2 Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Application Error" /> <EventID Qualifiers="0">1000</EventID> <Level>2</Level> <Task>100</Task> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2010-02-02T00:34:23.000000000Z" /> <EventRecordID>10689</EventRecordID> <Channel>Application</Channel> <Computer>ricky-win7</Computer> <Security /> </System> <EventData> <Data>SearchIndexer.exe</Data> <Data>7.0.7600.16385</Data> <Data>4a5bcdd0</Data> <Data>NLSData0007.dll</Data> <Data>6.1.7600.16385</Data> <Data>4a5bda88</Data> <Data>c0000005</Data> <Data>002141ba</Data> <Data>13a0</Data> <Data>01caa39f2a70ec02</Data> <Data>C:\Windows\system32\SearchIndexer.exe</Data> <Data>C:\Windows\System32\NLSData0007.dll</Data> <Data>b4f7a7ae-0f92-11df-87fc-e5d65d8794c2</Data> </EventData> </Event> If you are having the same error and arrived here from a Google search, please comment or add an answer detailing your progress on this, if any...

    Read the article

  • Optimizing Python code with many attribute and dictionary lookups

    - by gotgenes
    I have written a program in Python which spends a large amount of time looking up attributes of objects and values from dictionary keys. I would like to know if there's any way I can optimize these lookup times, potentially with a C extension, to reduce the time of execution, or if I need to simply re-implement the program in a compiled language. The program implements some algorithms using a graph. It runs prohibitively slowly on our data sets, so I profiled the code with cProfile using a reduced data set that could actually complete. The vast majority of the time is being burned in one function, and specifically in two statements, generator expressions, within the function: The generator expression at line 202 is neighbors_in_selected_nodes = (neighbor for neighbor in node_neighbors if neighbor in selected_nodes) and the generator expression at line 204 is neighbor_z_scores = (interaction_graph.node[neighbor]['weight'] for neighbor in neighbors_in_selected_nodes) The source code for this function of context provided below. selected_nodes is a set of nodes in the interaction_graph, which is a NetworkX Graph instance. node_neighbors is an iterator from Graph.neighbors_iter(). Graph itself uses dictionaries for storing nodes and edges. Its Graph.node attribute is a dictionary which stores nodes and their attributes (e.g., 'weight') in dictionaries belonging to each node. Each of these lookups should be amortized constant time (i.e., O(1)), however, I am still paying a large penalty for the lookups. Is there some way which I can speed up these lookups (e.g., by writing parts of this as a C extension), or do I need to move the program to a compiled language? Below is the full source code for the function that provides the context; the vast majority of execution time is spent within this function. def calculate_node_z_prime( node, interaction_graph, selected_nodes ): """Calculates a z'-score for a given node. The z'-score is based on the z-scores (weights) of the neighbors of the given node, and proportional to the z-score (weight) of the given node. Specifically, we find the maximum z-score of all neighbors of the given node that are also members of the given set of selected nodes, multiply this z-score by the z-score of the given node, and return this value as the z'-score for the given node. If the given node has no neighbors in the interaction graph, the z'-score is defined as zero. Returns the z'-score as zero or a positive floating point value. :Parameters: - `node`: the node for which to compute the z-prime score - `interaction_graph`: graph containing the gene-gene or gene product-gene product interactions - `selected_nodes`: a `set` of nodes fitting some criterion of interest (e.g., annotated with a term of interest) """ node_neighbors = interaction_graph.neighbors_iter(node) neighbors_in_selected_nodes = (neighbor for neighbor in node_neighbors if neighbor in selected_nodes) neighbor_z_scores = (interaction_graph.node[neighbor]['weight'] for neighbor in neighbors_in_selected_nodes) try: max_z_score = max(neighbor_z_scores) # max() throws a ValueError if its argument has no elements; in this # case, we need to set the max_z_score to zero except ValueError, e: # Check to make certain max() raised this error if 'max()' in e.args[0]: max_z_score = 0 else: raise e z_prime = interaction_graph.node[node]['weight'] * max_z_score return z_prime Here are the top couple of calls according to cProfiler, sorted by time. 
ncalls tottime percall cumtime percall filename:lineno(function) 156067701 352.313 0.000 642.072 0.000 bpln_contextual.py:204(<genexpr>) 156067701 289.759 0.000 289.759 0.000 bpln_contextual.py:202(<genexpr>) 13963893 174.047 0.000 816.119 0.000 {max} 13963885 69.804 0.000 936.754 0.000 bpln_contextual.py:171(calculate_node_z_prime) 7116883 61.982 0.000 61.982 0.000 {method 'update' of 'set' objects}
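
    Whether a C extension pays off depends on how much of that time is per-lookup interpreter overhead rather than the sheer number of neighbor visits, but the shape of such an extension is small: keep the graph objects as they are and push only the inner max-over-selected-neighbors loop behind the C API. A hedged sketch, written against the Python 2.x API to match the except ValueError, e syntax above (the helper name is made up and error paths are abbreviated):

        #include <Python.h>

        /* Hypothetical helper: max(node_dict[n]['weight'] for n in neighbors
           if n in selected_nodes), or 0.0 when there is no such neighbor. */
        static double max_weight_in_selected(PyObject *neighbors,      /* any iterable */
                                             PyObject *selected_nodes, /* a set */
                                             PyObject *node_dict)      /* interaction_graph.node */
        {
            double best = 0.0;
            PyObject *it = PyObject_GetIter(neighbors);
            if (it == NULL)
                return 0.0;
            PyObject *neighbor;
            while ((neighbor = PyIter_Next(it)) != NULL) {
                if (PySet_Contains(selected_nodes, neighbor) == 1) {
                    PyObject *attrs = PyObject_GetItem(node_dict, neighbor);      /* new ref */
                    if (attrs != NULL) {
                        PyObject *weight = PyDict_GetItemString(attrs, "weight"); /* borrowed */
                        if (weight != NULL) {
                            double w = PyFloat_AsDouble(weight);
                            if (w > best)
                                best = w;
                        }
                        Py_DECREF(attrs);
                    }
                }
                Py_DECREF(neighbor);
            }
            Py_DECREF(it);
            return best;
        }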

    Read the article

  • Cisco: unable to negotiate IP using IPCP with Windows server

    - by lnk
    I am connecting to Windows server using PPP (for vpn), I establish connection but server does not respond me for my address requests: *Mar 23 00:40:06.055: Vi1 MS-CHAP-V2: I CHALLENGE id 0 len 25 from "MSDC" *Mar 23 00:40:06.063: Vi1 MS CHAP V2: Using hostname from interface CHAP *Mar 23 00:40:06.063: Vi1 MS CHAP V2: Using password from interface CHAP *Mar 23 00:40:06.067: Vi1 MS-CHAP-V2: O RESPONSE id 0 len 69 from "XXX" *Mar 23 00:40:06.087: Vi1 PPP: I pkt type 0xC223, datagramsize 50 link[ppp] *Mar 23 00:40:06.087: Vi1 MS-CHAP-V2: I SUCCESS id 0 len 46 msg is "S=XXX" *Mar 23 00:40:06.087: Vi1 MS CHAP V2 No Password found for : XXX *Mar 23 00:40:06.091: Vi1 MS CHAP V2 Check AuthenticatorResponse Success for : XXX *Mar 23 00:40:06.091: Vi1 IPCP: O CONFREQ [Closed] id 1 len 20 *Mar 23 00:40:06.091: Vi1 IPCP: VSO OUI 0x00000C kind 1 (0x000A00000C0100000000) *Mar 23 00:40:06.091: Vi1 IPCP: Address 0.0.0.0 (0x030600000000) *Mar 23 00:40:07.091: %LINEPROTO-5-UPDOWN: Line protocol on Interface Virtual-Access1, changed state to up *Mar 23 00:40:07.091: Vi1 LCP: O ECHOREQ [Open] id 1 len 12 magic 0x194CAFCF *Mar 23 00:40:07.103: Vi1 LCP-FS: I ECHOREP [Open] id 1 len 12 magic 0x361B62E5 *Mar 23 00:40:07.103: Vi1 LCP-FS: Received id 1, sent id 1, line up *Mar 23 00:40:08.083: Vi1 IPCP: TIMEout: State REQsent *Mar 23 00:40:08.083: Vi1 IPCP: O CONFREQ [REQsent] id 2 len 20 *Mar 23 00:40:08.083: Vi1 IPCP: VSO OUI 0x00000C kind 1 (0x000A00000C0100000000) *Mar 23 00:40:08.083: Vi1 IPCP: Address 0.0.0.0 (0x030600000000) *Mar 23 00:40:10.099: Vi1 IPCP: TIMEout: State REQsent *Mar 23 00:40:10.099: Vi1 IPCP: O CONFREQ [REQsent] id 3 len 20 *Mar 23 00:40:10.099: Vi1 IPCP: VSO OUI 0x00000C kind 1 (0x000A00000C0100000000) *Mar 23 00:40:10.099: Vi1 IPCP: Address 0.0.0.0 (0x030600000000) *Mar 23 00:40:12.115: Vi1 IPCP: TIMEout: State REQsent *Mar 23 00:40:12.115: Vi1 IPCP: O CONFREQ [REQsent] id 4 len 20 *Mar 23 00:40:12.115: Vi1 IPCP: VSO OUI 0x00000C kind 1 (0x000A00000C0100000000) *Mar 23 00:40:12.115: Vi1 IPCP: Address 0.0.0.0 (0x030600000000) *Mar 23 00:40:12.211: Vi1 LCP: O ECHOREQ [Open] id 2 len 12 magic 0x194CAFCF *Mar 23 00:40:12.219: Vi1 LCP-FS: I ECHOREP [Open] id 2 len 12 magic 0x361B62E5 *Mar 23 00:40:12.219: Vi1 LCP-FS: Received id 2, sent id 2, line up *Mar 23 00:40:14.131: Vi1 IPCP: TIMEout: State REQsent *Mar 23 00:40:14.131: Vi1 IPCP: O CONFREQ [REQsent] id 5 len 20 *Mar 23 00:40:14.131: Vi1 IPCP: VSO OUI 0x00000C kind 1 (0x000A00000C0100000000) *Mar 23 00:40:14.131: Vi1 IPCP: Address 0.0.0.0 (0x030600000000) *Mar 23 00:40:16.147: Vi1 IPCP: TIMEout: State REQsent *Mar 23 00:40:16.147: Vi1 IPCP: O CONFREQ [REQsent] id 6 len 20 *Mar 23 00:40:16.147: Vi1 IPCP: VSO OUI 0x00000C kind 1 (0x000A00000C0100000000) *Mar 23 00:40:16.147: Vi1 IPCP: Address 0.0.0.0 (0x030600000000) *Mar 23 00:40:17.331: Vi1 LCP: O ECHOREQ [Open] id 3 len 12 magic 0x194CAFCF *Mar 23 00:40:17.343: Vi1 LCP-FS: I ECHOREP [Open] id 3 len 12 magic 0x361B62E5 *Mar 23 00:40:17.343: Vi1 LCP-FS: Received id 3, sent id 3, line up You see: My router asks for address, but only keepalives are on line. But the same server works with windows client!! ! version 12.4 no service pad service timestamps debug datetime msec service timestamps log datetime msec no service password-encryption service internal ! hostname Router ! boot-start-marker boot-end-marker ! ! no aaa new-model ! resource policy ! ip subnet-zero ! ! ip cef vpdn enable ! vpdn-group pptp request-dialin protocol pptp pool-member 1 initiate-to ip XXXX ! ! ! ! ! ! ! 
bridge irb ! ! interface ATM0 no ip address shutdown no atm ilmi-keepalive dsl operating-mode auto ! interface FastEthernet0 ! interface FastEthernet1 ! interface FastEthernet2 ! interface FastEthernet3 ! interface Dot11Radio0 no ip address shutdown speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0 12.0 18.0 24.0 36.0 48.0 54.0 station-role root ! interface Vlan1 no ip address bridge-group 1 ! interface Dialer0 ip address negotiated encapsulation ppp dialer pool 1 dialer idle-timeout 0 dialer string XXX dialer persistent dialer vpdn dialer-group 1 keepalive 5 3 no cdp enable ppp authentication ms-chap-v2 optional ppp eap refuse ppp chap hostname XXX ppp chap password 0 XXX ppp ipcp mask request ppp ipcp ignore-map ppp ipcp address accept ! interface BVI1 mac-address XXX.XXX.XXX ip address dhcp ! ip classless ip route 172.0.0.0 255.0.0.0 Dialer0 ! no ip http server no ip http secure-server ! dialer-list 1 protocol ip permit ! control-plane ! bridge 1 protocol vlan-bridge bridge 1 route ip ! line con 0 no modem enable line aux 0 line vty 0 4 login ! scheduler max-task-time 5000 end

    Read the article

  • Sporadic disk clicking sound

    - by Abdó
    Hi, I'm having some unusual and sporadic hard disk clicking issues. Here is a chronological description of the facts. I'm using an ASUS P6T-SE with an Intel Core i7, 6GB RAM, a 600W power supply and ATI 4670 graphics, running Ubuntu 10.10. About one month ago my hard disk (SATA II Seagate Barracuda 1TB 7200 rpm) started making a clicking sound: a sort of loud tic-tac, every second or so, when involved in disk activity. The system was clearly slower than before at disk access, but it was functional and I could not find any sign of trouble in the Linux logs. I disconnected the disk and tried an older SATA drive I had around: no problem with it. Then I reconnected the Seagate disk, and the problem was mysteriously gone. Ubuntu booted normally, usual speed, no clicking. A couple of weeks later, the problem reappeared. I tried disconnecting and reconnecting (as it somehow solved the problem before) without luck. So, even though it was a rather new drive, I assumed it was a hardware issue, made backups and bought a new drive. The new drive is a SATA II Seagate Barracuda 1.5TB 7200 rpm. I installed both drives at the same time, with the intention of transferring my files from one to the other. To my surprise, when I booted the computer with both drives, both started making the clicking sound! Even worse, I removed the old drive, leaving the unformatted new drive connected, and booted from a LiveCD. It kept clicking! Puzzled by this, I tried both drives on my laptop with a SATA-to-USB cable. The moment I connected either of them, they made one or two unusual clicks, immediately stopped doing that and worked normally. The old drive, which I had thought almost dead, was working like a charm as if nothing had happened. Then I thought: "OK, it must be the motherboard. Let's try again." So I reconnected the old drive to the ASUS P6T motherboard (the same cables and SATA port as before), and it worked as if nothing had happened! The problem was gone again. The new 1.5TB drive was also working fine: no clicking nor slowdown. So I left the old 1TB disk connected and kept using the computer daily for 3 weeks, until today, when it happened again. Now I don't really know what to do or check. I'm not even sure it is a hardware issue any more! This is rather annoying, as it seems to happen with a period of 2 or 3 weeks and I have no means of forcing it to happen. Does anyone have a clue what could cause this behaviour, or any suggestions of things I should check when it happens again? What I did today was check some SMART parameters. Error log: smartctl -l error /dev/sda. No errors. Short selftest: smartctl -t short /dev/sda. No errors. Disk health check: smartctl -H /dev/sda. Passed. And here are the vendor-specific parameters (smartctl -A /dev/sda), which I'm not quite sure how to interpret.
=== START OF READ SMART DATA SECTION === SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 120 099 006 Pre-fail Always - 235962588 3 Spin_Up_Time 0x0003 095 095 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 187 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0 7 Seek_Error_Rate 0x000f 072 060 030 Pre-fail Always - 16348045 9 Power_On_Hours 0x0032 096 096 000 Old_age Always - 3590 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 94 183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 0 184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0 187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0 188 Command_Timeout 0x0032 100 097 000 Old_age Always - 4295164029 189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0 190 Airflow_Temperature_Cel 0x0022 070 057 045 Old_age Always - 30 (Lifetime Min/Max 19/31) 194 Temperature_Celsius 0x0022 030 043 000 Old_age Always - 30 (0 18 0 0) 195 Hardware_ECC_Recovered 0x001a 037 026 000 Old_age Always - 235962588 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 73950746906346 241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 1832967731 242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 3294986902 Any clue to this mystery will be really welcome. Thank you very much !!

    Read the article

  • PHP mini-server download resume error! Resource id # 4

    - by snikolov
    <?php $httpsock = @socket_create_listen("9090"); if (!$httpsock) { print "Socket creation failed!\n"; exit; } while (1) { $client = socket_accept($httpsock); $input = trim(socket_read ($client, 4096)); $input = explode(" ", $input); $range = $input[12]; $input = $input[1]; $fileinfo = pathinfo($input); switch ($fileinfo['extension']) { default: $mime = "text/html"; } if ($input == "/") { $input = "index.html"; } $input = ".$input"; if (file_exists($input) && is_readable($input)) { echo "Serving $input\n"; $contents = file_get_contents($input); $output = "HTTP/1.0 200 OK\r\nServer: APatchyServer\r\nConnection: close\r\nContent-Type: $mime\r\n\r\n$contents"; } else { //$contents = "The file you requested doesn't exist. Sorry!"; //$output = "HTTP/1.0 404 OBJECT NOT FOUND\r\nServer: BabyHTTP\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n$contents"; if(isset($range)) { list($a, $range) = explode("=",$range); str_replace($range, "-", $range); $size2 = $size-1; $new_length = $size-$range; $output = "HTTP/1.1 206 Partial Content\r\n"; $output .= "Content-Length: $new_length\r\n"; $output .= "Content-Range: bytes $range$size2/$size\r\n"; } else { $size2=$size-1; $output .= "Content-Length: $new_length\r\n"; } $chunksize = 1*(1024*1024); $bytes_send = 0; $file = "a.mp3"; $filesize = filesize($file); if ($file = fopen($file, 'r')) { if(isset($range)) $output = 'HTTP/1.0 200 OK\r\n'; $output .= "Content-type: application/octet-stream\r\n"; $output .= "Content-Length: $filesize\r\n"; $output .= 'Content-Disposition: attachment; filename="'.$file.'"\r\n'; $output .= "Accept-Ranges: bytes\r\n"; $output .= "Cache-Control: private\n\n"; fseek($file, $range); $download_rate = 1000; while(!feof($file) and (connection_status()==0)) { $var_stat = fread($file, round($download_rate *1024)); $output .= $var_stat;//echo($buffer); // is also possible flush(); sleep(1);//// decrease download speed } fclose($file); } /** $filename = "dada"; $file = fopen($filename, 'r'); $filesize = filesize($filename); $buffer = fread($file, $filesize); $send = array("Output"=>$buffer,"filesize"=>$filesize,"filename"=>$filename); $file = $send['filename']; */ //@ob_end_clean(); // $output .= "Content-Transfer-Encoding: binary"; //$output .= "Connection: Keep-Alive\r\n"; } socket_write($client, $output); socket_close ($client); } socket_close ($httpsock); Hey guys, I have created a mini webserver downloader. It can download files from your server. However, when I download the file I get Resource id # 4, and I can't resume the download. I would also like to know how I can monitor and record the client output and how much bandwidth the client has downloaded. Perl has something like this, but it's hardcore; if possible, kindly provide me with some pointers. Thank you :)
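
    For resume to work, the reply to a request carrying Range: bytes=N- has to be a 206 whose Content-Range reads "bytes start-end/total" and whose Content-Length is end - start + 1, with the file seeked to start before streaming. In the code above the dash between the two offsets never makes it into the header, and the later if(isset($range)) branch overwrites the 206 status with a 200 (in single quotes, so the \r\n is sent literally). The bookkeeping itself is tiny; it is sketched here in C++ only to keep this page's added examples in one language, and translates line for line to PHP:

        #include <cstdio>
        #include <string>

        // start = first byte requested ("Range: bytes=start-"), total = filesize.
        std::string partial_content_headers(long long start, long long total)
        {
            long long end    = total - 1;          // serving through the end of the file
            long long length = end - start + 1;    // bytes actually sent in this response
            char buf[256];
            std::snprintf(buf, sizeof(buf),
                          "HTTP/1.1 206 Partial Content\r\n"
                          "Content-Length: %lld\r\n"
                          "Content-Range: bytes %lld-%lld/%lld\r\n"
                          "Accept-Ranges: bytes\r\n",
                          length, start, end, total);
            return std::string(buf);
        }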

    Read the article

  • Convert remote object result to ArrayCollection in Flex

    - by user364199
    HI guys, im using zend_amf and flex. My problem is i have to populate my advance datagrid using array collection. this array collection have a children. example: [Bindable] private var dpHierarchy:ArrayCollection = new ArrayCollection([ {trucks:"Truck", children: [ {trucks:"AMC841", total_trip:1, start_time:'3:46:40 AM'}, {trucks:"AMC841", total_trip:1, start_time:'3:46:40 AM'}]) ]}; but the datasource of my datagrid should come from a database, how can i convert the result from remote object to array collection that has the same format like in my example, or any other way. here is my advance datagrid <mx:AdvancedDataGrid id="datagrid" width="500" height="200" lockedColumnCount="1" lockedRowCount="0" horizontalScrollPolicy="on" includeIn="loggedIn" x="67" y="131"> <mx:dataProvider> <mx:HierarchicalData id="dpHierarchytest" source="{dp}"/> </mx:dataProvider> <mx:groupedColumns> <mx:AdvancedDataGridColumn dataField="trucks" headerText="Trucks"/> <mx:AdvancedDataGridColumn dataField="total_trip" headerText="Total Trip"/> <mx:AdvancedDataGridColumnGroup headerText="PRECOOLING"> <mx:AdvancedDataGridColumnGroup headerText="Before Loading"> <mx:AdvancedDataGridColumn dataField="start_time" headerText="Start Time"/> <mx:AdvancedDataGridColumn dataField="end_time" headerText="End Time"/> <mx:AdvancedDataGridColumn dataField="precooling_time" headerText="Precooling Time"/> <mx:AdvancedDataGridColumn dataField="precooling_temp" headerText="Precooling Temp"/> </mx:AdvancedDataGridColumnGroup> <mx:AdvancedDataGridColumnGroup headerText="Before Dispatch"> <mx:AdvancedDataGridColumn dataField="bd_start_time" headerText="Start Time"/> <mx:AdvancedDataGridColumn dataField="bd_end_time" headerText="End Time"/> <mx:AdvancedDataGridColumn dataField="bd_precooling_time" headerText="Precooling Time"/> <mx:AdvancedDataGridColumn dataField="bd_precooling_temp" headerText="Precooling Temp"/> </mx:AdvancedDataGridColumnGroup> <mx:AdvancedDataGridColumn dataField="remarks" headerText="Remarks"/> </mx:AdvancedDataGridColumnGroup> <mx:AdvancedDataGridColumnGroup headerText="Temperature Compliance"> <mx:AdvancedDataGridColumn dataField="total_hit" headerText="Total Hit"/> <mx:AdvancedDataGridColumn dataField="total_miss" headerText="Total Miss"/> <mx:AdvancedDataGridColumn dataField="cold_chain_compliance" headerText="Cold Chain Compliance"/> <mx:AdvancedDataGridColumn dataField="average_temp" headerText="Average Temp"/> </mx:AdvancedDataGridColumnGroup> <mx:AdvancedDataGridColumnGroup headerText="Productivity"> <mx:AdvancedDataGridColumn dataField="total_drop_points" headerText="Total Drop Points"/> <mx:AdvancedDataGridColumn dataField="total_delivery_time" headerText="Total Delivery Time"/> <mx:AdvancedDataGridColumn dataField="total_distance" headerText="Total Distance"/> </mx:AdvancedDataGridColumnGroup> <mx:AdvancedDataGridColumnGroup headerText="Trip Exceptions"> <mx:AdvancedDataGridColumn dataField="total_doc" headerText="Total DOC"/> <mx:AdvancedDataGridColumn dataField="total_eng" headerText="Total ENG"/> <mx:AdvancedDataGridColumn dataField="total_fenv" headerText="Total FENV"/> <mx:AdvancedDataGridColumn dataField="average_speed" headerText="Average Speed"/> </mx:AdvancedDataGridColumnGroup> </mx:groupedColumns> </mx:AdvancedDataGrid> Thanks, and i really need some help.

    Read the article

  • Parallel processing via multithreading in Java

    - by Robz
    There are certain algorithms whose running time can decrease significantly when one divides up a task and gets each part done in parallel. One of these algorithms is merge sort, where a list is divided into infinitesimally smaller parts and then recombined in a sorted order. I decided to do an experiment to test whether or not I could I increase the speed of this sort by using multiple threads. I am running the following functions in Java on a Quad-Core Dell with Windows Vista. One function (the control case) is simply recursive: // x is an array of N elements in random order public int[] mergeSort(int[] x) { if (x.length == 1) return x; // Dividing the array in half int[] a = new int[x.length/2]; int[] b = new int[x.length/2+((x.length%2 == 1)?1:0)]; for(int i = 0; i < x.length/2; i++) a[i] = x[i]; for(int i = 0; i < x.length/2+((x.length%2 == 1)?1:0); i++) b[i] = x[i+x.length/2]; // Sending them off to continue being divided mergeSort(a); mergeSort(b); // Recombining the two arrays int ia = 0, ib = 0, i = 0; while(ia != a.length || ib != b.length) { if (ia == a.length) { x[i] = b[ib]; ib++; } else if (ib == b.length) { x[i] = a[ia]; ia++; } else if (a[ia] < b[ib]) { x[i] = a[ia]; ia++; } else { x[i] = b[ib]; ib++; } i++; } return x; } The other is in the 'run' function of a class that extends thread, and recursively creates two new threads each time it is called: public class Merger extends Thread { int[] x; boolean finished; public Merger(int[] x) { this.x = x; } public void run() { if (x.length == 1) { finished = true; return; } // Divide the array in half int[] a = new int[x.length/2]; int[] b = new int[x.length/2+((x.length%2 == 1)?1:0)]; for(int i = 0; i < x.length/2; i++) a[i] = x[i]; for(int i = 0; i < x.length/2+((x.length%2 == 1)?1:0); i++) b[i] = x[i+x.length/2]; // Begin two threads to continue to divide the array Merger ma = new Merger(a); ma.run(); Merger mb = new Merger(b); mb.run(); // Wait for the two other threads to finish while(!ma.finished || !mb.finished) ; // Recombine the two arrays int ia = 0, ib = 0, i = 0; while(ia != a.length || ib != b.length) { if (ia == a.length) { x[i] = b[ib]; ib++; } else if (ib == b.length) { x[i] = a[ia]; ia++; } else if (a[ia] < b[ib]) { x[i] = a[ia]; ia++; } else { x[i] = b[ib]; ib++; } i++; } finished = true; } } It turns out that function that does not use multithreading actually runs faster. Why? Does the operating system and the java virtual machine not "communicate" effectively enough to place the different threads on different cores? Or am I missing something obvious?
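
    One detail worth checking in the threaded version: Merger.run() is invoked directly, and calling run() executes the body on the calling thread; it is Thread.start() that actually spawns a new one, and even then the while(!finished) loop would spin a core instead of blocking (join() is the usual way to wait). For reference, the usual fork/join shape is sketched below in C++ with std::async, only to keep this digest's added examples in one language, with a depth cutoff so that tiny sub-arrays are not worth a thread:

        #include <algorithm>
        #include <future>
        #include <vector>

        // Sorts v[lo, hi); 'depth' levels of the recursion may fork a helper thread.
        void merge_sort(std::vector<int>& v, std::size_t lo, std::size_t hi, int depth)
        {
            if (hi - lo < 2) return;
            std::size_t mid = lo + (hi - lo) / 2;
            if (depth > 0) {
                // Fork: sort the left half on another thread, the right half here, then join.
                auto left = std::async(std::launch::async,
                                       [&v, lo, mid, depth] { merge_sort(v, lo, mid, depth - 1); });
                merge_sort(v, mid, hi, depth - 1);
                left.get();                           // join before merging
            } else {
                merge_sort(v, lo, mid, 0);
                merge_sort(v, mid, hi, 0);
            }
            std::inplace_merge(v.begin() + lo, v.begin() + mid, v.begin() + hi);
        }

        // e.g. merge_sort(data, 0, data.size(), 2);   // two fork levels ~= 4 threads on a quad core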

    Read the article

  • Various problems with software raid1 array built with Samsung 840 Pro SSDs

    - by Andy B
    I am bringing to ServerFault a problem that is tormenting me for 6+ months. I have a CentOS 6 (64bit) server with an md software raid-1 array with 2 x Samsung 840 Pro SSDs (512GB). Problems: Serious write speed problems: root [~]# time dd if=arch.tar.gz of=test4 bs=2M oflag=sync 146+1 records in 146+1 records out 307191761 bytes (307 MB) copied, 23.6788 s, 13.0 MB/s real 0m23.680s user 0m0.000s sys 0m0.932s When doing the above (or any other larger copy) the load spikes to unbelievable values (even over 100) going up from ~ 1. When doing the above I've also noticed very weird iostat results: Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1589.50 0.00 54.00 0.00 13148.00 243.48 0.60 11.17 0.46 2.50 sdb 0.00 1627.50 0.00 16.50 0.00 9524.00 577.21 144.25 1439.33 60.61 100.00 md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 md2 0.00 0.00 0.00 1602.00 0.00 12816.00 8.00 0.00 0.00 0.00 0.00 md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 And it keeps it this way until it actually writes the file to the device (out from swap/cache/memory). The problem is that the second SSD in the array has svctm and await roughly 100 times larger than the second. For some reason the wear is different between the 2 members of the array root [~]# smartctl --attributes /dev/sda | grep -i wear 177 Wear_Leveling_Count 0x0013 094% 094 000 Pre-fail Always - 180 root [~]# smartctl --attributes /dev/sdb | grep -i wear 177 Wear_Leveling_Count 0x0013 070% 070 000 Pre-fail Always - 1005 The first SSD has a wear of 6% while the second SSD has a wear of 30%!! It's like the second SSD in the array works at least 5 times as hard as the first one as proven by the first iteration of iostat (the averages since reboot): Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 10.44 51.06 790.39 125.41 8803.98 1633.11 11.40 0.33 0.37 0.06 5.64 sdb 9.53 58.35 322.37 118.11 4835.59 1633.11 14.69 0.33 0.76 0.29 12.97 md1 0.00 0.00 1.88 1.33 15.07 10.68 8.00 0.00 0.00 0.00 0.00 md2 0.00 0.00 1109.02 173.12 10881.59 1620.39 9.75 0.00 0.00 0.00 0.00 md0 0.00 0.00 0.41 0.01 3.10 0.02 7.42 0.00 0.00 0.00 0.00 What I've tried: I've updated the firmware to DXM05B0Q (following reports of dramatic improvements for 840Ps after this update). I have looked for "hard resetting link" in dmesg to check for cable/backplane issues but nothing. I have checked the alignment and I believe they are aligned correctly (1MB boundary, listing below) I have checked /proc/mdstat and the array is Optimal (second listing below). root [~]# fdisk -ul /dev/sda Disk /dev/sda: 512.1 GB, 512110190592 bytes 255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00026d59 Device Boot Start End Blocks Id System /dev/sda1 2048 4196351 2097152 fd Linux raid autodetect Partition 1 does not end on cylinder boundary. /dev/sda2 * 4196352 4605951 204800 fd Linux raid autodetect Partition 2 does not end on cylinder boundary. 
/dev/sda3 4605952 814106623 404750336 fd Linux raid autodetect root [~]# fdisk -ul /dev/sdb Disk /dev/sdb: 512.1 GB, 512110190592 bytes 255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0003dede Device Boot Start End Blocks Id System /dev/sdb1 2048 4196351 2097152 fd Linux raid autodetect Partition 1 does not end on cylinder boundary. /dev/sdb2 * 4196352 4605951 204800 fd Linux raid autodetect Partition 2 does not end on cylinder boundary. /dev/sdb3 4605952 814106623 404750336 fd Linux raid autodetect /proc/mdstat root # cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sdb2[1] sda2[0] 204736 blocks super 1.0 [2/2] [UU] md2 : active raid1 sdb3[1] sda3[0] 404750144 blocks super 1.0 [2/2] [UU] md1 : active raid1 sdb1[1] sda1[0] 2096064 blocks super 1.1 [2/2] [UU] unused devices: Running a read test with hdparm root [~]# hdparm -t /dev/sda /dev/sda: Timing buffered disk reads: 664 MB in 3.00 seconds = 221.33 MB/sec root [~]# hdparm -t /dev/sdb /dev/sdb: Timing buffered disk reads: 288 MB in 3.01 seconds = 95.77 MB/sec But look what happens if I add --direct root [~]# hdparm --direct -t /dev/sda /dev/sda: Timing O_DIRECT disk reads: 788 MB in 3.01 seconds = 262.08 MB/sec root [~]# hdparm --direct -t /dev/sdb /dev/sdb: Timing O_DIRECT disk reads: 534 MB in 3.02 seconds = 176.90 MB/sec Both tests increase but /dev/sdb doubles while /dev/sda increases maybe 20%. I just don't know what to make of this. As suggested by Mr. Wagner I've done another read test with dd this time and it confirms the hdparm test: root [/home2]# dd if=/dev/sda of=/dev/null bs=1G count=10 10+0 records in 10+0 records out 10737418240 bytes (11 GB) copied, 38.0855 s, 282 MB/s root [/home2]# dd if=/dev/sdb of=/dev/null bs=1G count=10 10+0 records in 10+0 records out 10737418240 bytes (11 GB) copied, 115.24 s, 93.2 MB/s So sda is 3 times faster than sdb. Or maybe sdb is doing also something else besides what sda does. Is there some way to find out if sdb is doing more than what sda does? UPDATE Again, as suggested by Mr. Wagner, I have swapped the 2 SSDs. And as he thought it would happen, the problem moved from sdb to sda. So I guess I'll RMA one of the SSDs. I wonder if the cage might be problematic. What is wrong with this array? Please help!

    Read the article

  • [C#][XNA] Draw() 20,000 32 by 32 Textures or 1 Large Texture 20,000 Times

    - by Rudi
    The title may be confusing - sorry about that, it's a poor summary. Here's my dilemma. I'm programming in C# using the .NET Framework 4, and aiming to make a tile-based game with XNA. I have one large texture (256 pixels by 4096 pixels). Remember this is a tile-based game, so this texture is so massive only because it contains many tiles, which are each 32 pixels by 32 pixels. I think the experts will definitely know what a tile-based game is like. The orientation is orthogonal (like a chess board), not isometric. In the Game.Draw() method, I have two choices, one of which will be incredibly more efficient than the other. Choice/Method #1: Semi-Pseudocode: public void Draw() { // map tiles are drawn left-to-right, top-to-bottom for (int x = 0; x < mapWidth; x++) { for (int y = 0; y < mapHeight; y++) { SpriteBatch.Draw( MyLargeTexture, // One large 256 x 4096 texture new Rectangle(x, y, 32, 32), // Destination rectangle - ignore this, its ok new Rectangle(x, y, 32, 32), // Notice the source rectangle 'cuts out' 32 by 32 squares from the texture corresponding to the loop Color.White); // No tint - ignore this, its ok } } } Caption: So, effectively, the first method is referencing one large texture many many times, each time using a small rectangle of this large texture to draw the appropriate tile image. Choice/Method #2: Semi-Pseudocode: public void Draw() { // map tiles are drawn left-to-right, top-to-bottom for (int x = 0; x < mapWidth; x++) { for (int y = 0; y < mapHeight; y++) { Texture2D tileTexture = map.GetTileTexture(x, y); // Getting a small 32 by 32 texture (different each iteration of the loop) SpriteBatch.Draw( tileTexture, new Rectangle(x, y, 32, 32), // Destination rectangle - ignore this, its ok new Rectangle(0, 0, tileTexture.Width, tileTexture.Height), // Notice the source rectangle uses the entire texture, because the entire texture IS 32 by 32 Color.White); // No tint - ignore this, its ok } } } Caption: So, effectively, the second method is drawing many small textures many times. The Question: Which method and why? Personally, I would think it would be incredibly more efficient to use the first method. If you think about what that means for the tile array in a map (think of a large map with 2000 by 2000 tiles, let's say), each Tile object would only have to contain 2 integers, for the X and Y positions of the source rectangle in the one large texture - 8 bytes. If you use method #2, however, each Tile object in the tile array of the map would have to store a 32by32 Texture - an image - which has to allocate memory for the R G B A pixels 32 by 32 times - is that 4096 bytes per tile then? So, which method and why? First priority is speed, then memory-load, then efficiency or whatever you experts believe.
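
    Method #1 is the usual choice: a single atlas generally lets SpriteBatch keep batching instead of switching textures, and, as the question already works out, a map tile then stores a couple of integers rather than roughly 4096 bytes of pixel data. The only bookkeeping is turning a tile index into a source rectangle inside the 256 x 4096 atlas (eight 32-pixel tiles per row). The arithmetic is sketched in C++ only to keep this page's added examples in one language; it is identical in C#:

        struct Rect { int x, y, w, h; };

        // 256-pixel-wide atlas of 32x32 tiles => 8 tiles per row.
        Rect atlas_source_rect(int tileIndex)
        {
            const int tileSize    = 32;
            const int tilesPerRow = 256 / tileSize;        // 8
            Rect r;
            r.x = (tileIndex % tilesPerRow) * tileSize;    // column within the atlas
            r.y = (tileIndex / tilesPerRow) * tileSize;    // row within the atlas
            r.w = tileSize;
            r.h = tileSize;
            return r;
        }
        // Each map tile then only needs to store tileIndex (or the two ints r.x and r.y).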

    Read the article

  • Accessing a vector<vector<int>> as a flat array

    - by user1762276
    For this array: vector<vector<int> > v; v.push_back(vector<int>(0)); v.back().push_back(1); v.back().push_back(2); v.back().push_back(3); v.back().push_back(4); I can output {1, 2, 3, 4} easily enough: cout << v[0][0] << endl; cout << v[0][1] << endl; cout << v[0][2] << endl; cout << v[0][3] << endl; To access it as a flat array I can do this: int* z = (int*)&v[0].front(); cout << z[0] << endl; cout << z[1] << endl; cout << z[2] << endl; cout << z[3] << endl; Now, how do I access the multidimensional vector as a flat multidimensional array? I cannot use the same format as accessing a single-dimensional vector: // This does not work (outputs garbage) int** n = (int**)&v.front(); cout << n[0][0] << endl; cout << n[0][1] << endl; cout << n[0][2] << endl; cout << n[0][3] << endl; The workaround I've found is to do this: int** n = new int* [v.size()]; for (size_t i = 0; i < v.size(); i++) { n[i] = &v.at(i).front(); } cout << n[0][0] << endl; cout << n[0][1] << endl; cout << n[0][2] << endl; cout << n[0][3] << endl; Is there a way to access the entire multidimensional vector like a flat c-style array without having to dynamically allocate each dimension above the data before accessing it? Speed is not critical in the implementation and clarity for maintenance is paramount. A multidimensional vector is just fine for storing the data. However, I want to also expose the data as a flat c-style array in the SDK so that it can be easily accessible by other languages. This means that exposing the vectors as an STL object is a no go. The solution I came up with works fine for my needs as I only evaluate the array once at the very end of processing to "flatten" it. However, is there a better way to go about this? Or am I already doing it the best way I possibly can without re-implementing my own data structure (overkill since my flatten code is only a few lines). Thank you for your advice, friends!

    Read the article

  • Confusion on C++ Python extensions. Things like getting C++ values for python values.

    - by Matthew Mitchell
    I'm wanted to convert some of my python code to C++ for speed but it's not as easy as simply making a C++ function and making a few function calls. I have no idea how to get a C++ integer from a python integer object. I have an integer which is an attribute of an object that I want to use. I also have integers which are inside a list in the object which I need to use. I wanted to test making a C++ extension with this function: def setup_framebuffer(surface,flip=False): #Create texture if not done already if surface.texture is None: create_texture(surface) #Render child to parent if surface.frame_buffer is None: surface.frame_buffer = glGenFramebuffersEXT(1) glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, c_uint(int(surface.frame_buffer))) glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, surface.texture, 0) glPushAttrib(GL_VIEWPORT_BIT) glViewport(0,0,surface._scale[0],surface._scale[1]) glMatrixMode(GL_PROJECTION) glLoadIdentity() #Load the projection matrix if flip: gluOrtho2D(0,surface._scale[0],surface._scale[1],0) else: gluOrtho2D(0,surface._scale[0],0,surface._scale[1]) That function calls create_texture, so I will have to pass that function to the C++ function which I will do with the third argument. This is what I have so far, while trying to follow information on the python documentation: #include <Python.h> #include <GL/gl.h> static PyMethodDef SpamMethods[] = { ... {"setup_framebuffer", setup_framebuffer, METH_VARARGS,"Loads a texture from a Surface object to the OpenGL framebuffer."}, ... {NULL, NULL, 0, NULL} /* Sentinel */ }; static PyObject * setup_framebuffer(PyObject *self, PyObject *args){ bool flip; PyObject *create_texture, *arg_list,*pyflip,*frame_buffer_id; if (!PyArg_ParseTuple(args, "OOO", &surface,&pyflip,&create_texture)){ return NULL; } if (PyObject_IsTrue(pyflip) == 1){ flip = true; }else{ flip = false; } Py_XINCREF(create_texture); //Create texture if not done already if(texture == NULL){ arglist = Py_BuildValue("(O)", surface) result = PyEval_CallObject(create_texture, arglist); Py_DECREF(arglist); if (result == NULL){ return NULL; } Py_DECREF(result); } Py_XDECREF(create_texture); //Render child to parent frame_buffer_id = PyObject_GetAttr(surface, Py_BuildValue("s","frame_buffer")) if(surface.frame_buffer == NULL){ glGenFramebuffersEXT(1,frame_buffer_id); } glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, surface.frame_buffer)); glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, surface.texture, 0); glPushAttrib(GL_VIEWPORT_BIT); glViewport(0,0,surface._scale[0],surface._scale[1]); glMatrixMode(GL_PROJECTION); glLoadIdentity(); //Load the projection matrix if (flip){ gluOrtho2D(0,surface._scale[0],surface._scale[1],0); }else{ gluOrtho2D(0,surface._scale[0],0,surface._scale[1]); } Py_INCREF(Py_None); return Py_None; } PyMODINIT_FUNC initcscalelib(void){ PyObject *module; module = Py_InitModule("cscalelib", Methods); if (m == NULL){ return; } } int main(int argc, char *argv[]){ /* Pass argv[0] to the Python interpreter */ Py_SetProgramName(argv[0]); /* Initialize the Python interpreter. Required. */ Py_Initialize(); /* Add a static module */ initscalelib(); }

    Read the article

  • extensible database design: automatic ALTER TABLE or serialize() field BLOB ?

    - by mario
    I want an adaptable database scheme. But still use a simple table data gateway in my application, where I just pass an $data[] array for storing. The basic columns are settled in the initial table scheme. There will however arise a couple of meta fields later (ca 10-20). I want some flexibility there and not adapt the database manually each time, or -worse- change the application logic just because of new fields. So now there are two options which seem workable yet not overkill. But I'm not sure about the scalability or database drawbacks. (1) Automatic ALTER TABLE. Whenever the $data array is to be saved, the keys are compared against the current database columns. New columns get defined before the $data is INSERTed into the table. Actually seems simple enough in test code: function save($data, $table="forum") { // columns if ($new_fields = array_diff(array_keys($data), known_fields($table))) { extend_schema($table, $new_fields, $data); } // save $columns = implode("`, `", array_keys($data)); $qm = str_repeat(",?", count(array_keys($data)) - 1); echo ("INSERT INTO `$table` (`$columns`) VALUES (?$qm);"); function known_fields($table) { return unserialize(@file_get_contents("db:$table")) ?: array("id"); function extend_schema($table, $new_fields, $data) { foreach ($new_fields as $field) { echo("ALTER TABLE `$table` ADD COLUMN `$field` VARCHAR;"); Since it is mostly meta information fields, adding them just as VARCHAR seems sufficient. Nobody will query by them anyway. So the database is really just used as storage here. However, while I might want to add a lot of new $data fields on the go, they will not always be populated. (2) serialize() fields into BLOB. Any new/extraneous meta fields could be opaque to the database. Simply sorting out the virtual fields from the real database columns is simple. And the meta fields can just be serialize()d into a blob/text field then: function ext_save($data, $table="forum") { $db_fields = array("id", "content", "flags", "ext"); // disjoin foreach (array_diff(array_keys($data),$db_fields) as $key) { $data["ext"][$key] = $data[$key]; unset($data[$key]); } $data["ext"] = serialize($data["ext"]); Unserializing and unpacking this 'ext' column on read queries is a minor overhead. The advantage is that there won't be any sparsely filled columns in the database, so I guess it's compacter and faster than the AUTO ALTER TABLE approach. Of course, this method prevents ever using one of the new fields in a WHERE or GROUP BY clause. But I think none of the possible meta fields (user_agent, author_ip, author_img, votes, hits, last_modified, ..) would/should ever be used there anyway. So I currently prefer the 'ext' blob approach, even if it's a one-way ticket. How are such columns called usually? (looking for examples/doc) Would you use XML serialization for (very theoretical) in-database queries? The adapting table scheme seems a "cleaner" interface, even if most columns might remain empty then. How does that impact speed? How many such sparse VARCHAR fields can MySQL/innodb stomach? But most importantly: Is there any standard implementation for this? A pseudo ORM with automatic ALTER TABLE tricks? Storing a simple column list seems workable, but something like pdo::getColumnMeta would be more robust.

    Read the article

  • jQuery , Trigger change event on newly created elements

    - by kwhohasamullet
    Hi Guys, I have a button in a form that when clicked adds another set of form fields, In these form fields there are 2 drop downs where the contents of the 2nd dropdown rely on what is selected in the first dropdown... What i want to do is when the new form field button is clicked for the new items to be added and then the change event to be triggered on the drop down that was created so what only that drop down changes and not all the drop downs with the same name currently in that form. THe first drop down is called product Category The code for the addFormField function is: function addFormField() { var id = document.getElementById("field_id").value; $("#products").append("<table width='600' cellpadding='5' cellspacing='0' class='Add_Products' id='row" + id + "'><td width='250' class='left'><label>Select Product Category</label></td><td class='right' ><label><select name='" + id + "' id='ProductCategory'><?php foreach($categories as $key=>$category){ echo "<option value=".$key.">".$category."</option>"; } ?></select></label></td></tr><tr><td width='250' class='left'><label>Select Product Template</label></td><td class='right' ><label><select name='data[QuoteItem][" + id + "][product_id]' id='QuoteItem" + id + "product_id' class='Product' title='" + id + "'></select></label></td></tr><tr ><td class='left'>Name</td><td class='right'><label><input name='data[QuoteItem][" + id + "][name]' type='text' id='QuoteItem" + id + "name' size='50' /></label></td></tr><tr ><td class='left'>Price (ex GST)</td><td class='right'><input type='text' name='data[QuoteItem][" + id + "][price]' id='QuoteItem" + id + "price' onchange='totalProductPrice();' class='quote-item-price' value='0' /></td></tr><tr><td class='left'>Description</td><td class='right'><label><textarea name='data[QuoteItem][" + id + "][description]' cols='38' rows='5' id='QuoteItem" + id + "description'></textarea></label></td></tr><tr><td><a href='#' onClick='removeFormField(\"#row" + id + "\"); return false;'>Remove</a></td></tr></table>"); $('#row' + id).highlightFade({ speed:1000 }); id = (id - 1) + 2; document.getElementById("field_id").value = id; } The code that detects change in ProductCategory dropdown and triggers the AJAX is below: $("select#ProductCategory").live('change', function(){ var url = base + "/quotes/productList/" + $(this).val() + ""; var id = $(this).attr('name'); $.getJSON(url,{id: $(this).val(), ajax: 'true'}, function(j){ var options = ''; options += '<option value="0">None</option>'; $.each(j, function(key, value){ options += '<option value="' + key + '">' + value + '</option>'; }) $("select#QuoteItem" + id + "product_id").html(options); }) }).trigger('change'); I have been trying all afternoon to work this out and the closest one i got to work applied the returned ajax values to all items. Currently using the live function people can add new fields and are able to use the drops down independant of each other dropdown but its only when the field is first added that i have trouble getting is populated Thanks in advance for any help

    Read the article

  • what persistence layer (xml or mysql) should i use for this xml data?

    - by fayer
    I wonder how I could store an XML structure in a persistence layer, because the data looks like this:

    <entity id="1000070">
      <name>apple</name>
      <entities>
        <entity id="7002870">
          <name>mac</name>
          <entities>
            <entity id="7002907">
              <name>leopard</name>
              <entities>
                <entity id="7024080">
                  <name>safari</name>
                </entity>
                <entity id="7024701">
                  <name>finder</name>
                </entity>
              </entities>
            </entity>
          </entities>
        </entity>
        <entity id="7024080">
          <name>iphone</name>
          <entities>
            <entity id="7024080">
              <name>3g</name>
            </entity>
            <entity id="7024701">
              <name>3gs</name>
            </entity>
          </entities>
        </entity>
        <entity id="7024080">
          <name>ipad</name>
        </entity>
      </entities>
    </entity>

    As you can see, it has no static structure but a dynamic one: mac has 2 descendant levels while iphone has 1 and ipad has 0. What is the best way to store this data, and what are my options? It seems impossible to store it in a MySQL database due to this dynamic structure. Is storing it as an XML file the only way? Is the speed of getting information out of an XML file (XPath/XQuery/SimpleXML) better or worse than from MySQL? What are the pros and cons? Do I have other options? Is storing information in XML files suited to many users accessing it at the same time? Feedback would be great, thanks! EDIT: I've now noticed that I could use something called an XML database to store XML data. Could someone shed some light on this? Apparently it's not as simple as just storing the data in an XML file.

    Read the article

  • Converting switch statements to more elegant solution.

    - by masfenix
    I have a 9 x 9 matrix (think of sudoku):

    4 2 1 6 8 1 8 5 8
    3 1 5 8 1 1 7 5 8
    1 1 4 0 5 6 7 0 4
    6 2 5 5 4 4 8 1 2
    6 8 8 2 8 1 6 3 5
    8 4 2 6 4 7 4 1 1
    1 3 5 3 8 8 5 2 2
    2 6 6 0 8 8 8 0 6
    8 7 2 3 3 1 1 7 4

    Now I want to be able to get a "quadrant". For example (according to my code), quadrant 2, 2 returns the following:

    5 4 4
    2 8 1
    6 4 7

    As you can see, this is the 3 x 3 block from the very center of the 9 x 9. I've split everything up into groups of 3, if you know what I mean: the first "row" of quadrants covers rows 0-3, the second 3-6, the third 6-9. I hope this makes sense (I am open to alternate ways to go about this). Anyway, here's my code. I don't really like this approach, even though it works. I do want speed, though, because I am making a sudoku solver.

    // a quadrant returns the mini 3 x 3
    // row 1 has three quads, "1", "2", "3"
    // row 2 has three quads "1", "2", "3" etc.
    public int[,] GetQuadrant(int rnum, int qnum)
    {
        int[,] returnMatrix = new int[3, 3];
        int colBegin, colEnd, rowBegin, rowEnd, row, column;

        // this is so we can keep track of the new matrix
        row = 0;
        column = 0;

        switch (qnum)
        {
            case 1: colBegin = 0; colEnd = 3; break;
            case 2: colBegin = 3; colEnd = 6; break;
            case 3: colBegin = 6; colEnd = 9; break;
            default: colBegin = 0; colEnd = 0; break;
        }

        switch (rnum)
        {
            case 1: rowBegin = 0; rowEnd = 3; break;
            case 2: rowBegin = 3; rowEnd = 6; break;
            case 3: rowBegin = 6; rowEnd = 9; break;
            default: rowBegin = 0; rowEnd = 0; break;
        }

        for (int i = rowBegin; i < rowEnd; i++)
        {
            for (int j = colBegin; j < colEnd; j++)
            {
                returnMatrix[row, column] = _matrix[i, j];
                column++;
            }
            column = 0;
            row++;
        }

        return returnMatrix;
    }
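    A minimal sketch of one way to drop both switch statements, assuming the same 1-based rnum/qnum convention and the same _matrix field as above: since each quadrant starts at a multiple of 3, the begin indices can be computed arithmetically.

    public int[,] GetQuadrant(int rnum, int qnum)
    {
        int[,] quadrant = new int[3, 3];
        int rowBegin = (rnum - 1) * 3;  // 1 -> 0, 2 -> 3, 3 -> 6
        int colBegin = (qnum - 1) * 3;

        for (int row = 0; row < 3; row++)
            for (int col = 0; col < 3; col++)
                quadrant[row, col] = _matrix[rowBegin + row, colBegin + col];

        return quadrant;
    }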

    Read the article

  • Game logic dynamically extendable architecture implementation patterns

    - by Vlad
    When coding games there are a lot of cases where you need to inject your logic into an existing class dynamically, without creating unnecessary dependencies. For example, I have a Rabbit which can be affected by a freeze ability so it can't jump. It could be implemented like this:

    class Rabbit
    {
        public bool CanJump { get; set; }

        void Jump()
        {
            if (!CanJump) return;
            ...
        }
    }

    But what if I have more than one ability that can prevent it from jumping? I can't just set one property, because several circumstances can be active simultaneously. Another solution?

    class Rabbit
    {
        public bool Frozen { get; set; }
        public bool InWater { get; set; }

        bool CanJump { get { return !Frozen && !InWater; } }
    }

    Bad. The Rabbit class can't know all the circumstances it can run into. Who knows what else the game designer will want to add: maybe an ability that changes gravity over an area? Maybe a stack of bool values for the CanJump property? No, because abilities can be deactivated in a different order than they were activated. I need a way to separate the ability logic that prevents the Rabbit from jumping from the Rabbit itself. One possible solution is a special checking event:

    class Rabbit
    {
        class CheckJumpEventArgs : EventArgs
        {
            public bool Veto { get; set; }
        }

        public event EventHandler<CheckJumpEventArgs> OnCheckJump;

        void Jump()
        {
            var args = new CheckJumpEventArgs();
            if (OnCheckJump != null) OnCheckJump(this, args);
            if (args.Veto) return;
            ...
        }
    }

    But it's a lot of code! A real Rabbit class would have a lot of properties like this (health and speed attributes, etc.). I'm thinking of borrowing something from the MVVM pattern, where all the properties and methods of an object are implemented in a way that lets them be easily extended from outside. Then I want to use it like this:

    class FreezeAbility
    {
        void ActivateAbility() { _rabbit.CanJump.Push(ReturnFalse); }
        void DeactivateAbility() { _rabbit.CanJump.Remove(ReturnFalse); }

        // should be implemented as an instance member
        // so it can be "unsubscribed"
        bool ReturnFalse(bool previousValue) { return false; }
    }

    Is this approach good? What else should I consider? What are other suitable options and patterns? Any ready-to-use solutions? UPDATE: The question is not about how to add different behaviors to an object dynamically, but about how its (or its behavior's) implementation can be extended with external logic. I don't need to add a different behavior; I need a way to modify an existing one, and I also need the possibility to undo the changes.
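    Since the post already sketches _rabbit.CanJump.Push(...) / .Remove(...), one concrete way to flesh that out is a small condition holder that keeps a list of veto delegates: the Rabbit stays ignorant of which abilities exist, and abilities can be deactivated in any order. A minimal sketch follows; the name VetoableCondition and the exact shape are illustrative, not from the post.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class VetoableCondition
    {
        private readonly List<Func<bool>> _vetoes = new List<Func<bool>>();

        public void Push(Func<bool> veto) { _vetoes.Add(veto); }
        public void Remove(Func<bool> veto) { _vetoes.Remove(veto); }

        // True only while no registered veto says "blocked".
        public bool Value { get { return _vetoes.All(v => !v()); } }
    }

    public class Rabbit
    {
        public readonly VetoableCondition CanJump = new VetoableCondition();

        public void Jump()
        {
            if (!CanJump.Value) return;
            // ... actual jump logic
        }
    }

    public class FreezeAbility
    {
        private readonly Rabbit _rabbit;
        private readonly Func<bool> _veto = () => true;  // instance member, so the same delegate can be removed later

        public FreezeAbility(Rabbit rabbit) { _rabbit = rabbit; }

        public void Activate()   { _rabbit.CanJump.Push(_veto); }
        public void Deactivate() { _rabbit.CanJump.Remove(_veto); }
    }

    Compared with the event-based version, this keeps per-ability state out of the Rabbit and makes undoing a change trivial, at the cost of one small holder object per extensible property.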

    Read the article

  • What version-control system is most trivial to set up and use for toy projects?

    - by Norman Ramsey
    I teach the third required intro course in a CS department. One of my homework assignments asks students to speed up code they have written for a previous assignment. Factor-of-ten speedups are routine; factors of 100 or 1000 are not unheard of. (For a factor of 1000 speedup you have to have made rookie mistakes with malloc().) Programs are improved by a sequence is small changes. I ask students to record and describe each change and the resulting improvement. While you're improving a program it is also possible to break it. Wouldn't it be nice to back out? You can see where I'm going with this: my students would benefit enormously from version control. But there are some caveats: Our computing environment is locked down. Anything that depends on a central repository is suspect. Our students are incredibly overloaded. Not just classes but jobs, sports, music, you name it. For them to use a new tool it has to be incredibly easy and have obvious benefits. Our students do most work in pairs. Getting bits back and forth between accounts is problematic. Could this problem also be solved by distributed version control? Complexity is the enemy. I know setting up a CVS repository is too baffling---I myself still have trouble because I only do it once a year. I'm told SVN is even harder. Here are my comments on existing systems: I think central version control (CVS or SVN) is ruled out because our students don't have the administrative privileges needed to make a repository that they can share with one other student. (We are stuck with Unix file permissions.) Also, setup on CVS or SVN is too hard. darcs is way easy to set up, but it's not obvious how you share things. darcs send (to send patches by email) seems promising but it's not clear how to set it up. The introductory documentation for git is not for beginners. Like CVS setup, it's something I myself have trouble with. I'm soliciting suggestions for what source-control to use with beginning students. I suspect we can find resources to put a thin veneer over an existing system and to simplify existing documentation. We probably don't have resources to write new documentation. So, what's really easy to setup, commit, revert, and share changes with a partner but does not have to be easy to merge or to work at scale? A key constraint is that programming pairs have to be able to share work with each other and only each other, and pairs change every week. Our infrastructure is Linux, Solaris, and Windows with a netapp filer. I doubt my IT staff wants to create a Unix group for each pair of students. Is there an easier solution I've overlooked? (Thanks for the accepted answer, which beats the others on account of its excellent reference to Git Magic as well as the helpful comments.)

    Read the article

  • Help with optimizing C# function via C and/or Assembly

    - by MusiGenesis
    I have this C# method which I'm trying to optimize: // assume arrays are same dimensions private void DoSomething(int[] bigArray1, int[] bigArray2) { int data1; byte A1; byte B1; byte C1; byte D1; int data2; byte A2; byte B2; byte C2; byte D2; for (int i = 0; i < bigArray1.Length; i++) { data1 = bigArray1[i]; data2 = bigArray2[i]; A1 = (byte)(data1 >> 0); B1 = (byte)(data1 >> 8); C1 = (byte)(data1 >> 16); D1 = (byte)(data1 >> 24); A2 = (byte)(data2 >> 0); B2 = (byte)(data2 >> 8); C2 = (byte)(data2 >> 16); D2 = (byte)(data2 >> 24); A1 = A1 > A2 ? A1 : A2; B1 = B1 > B2 ? B1 : B2; C1 = C1 > C2 ? C1 : C2; D1 = D1 > D2 ? D1 : D2; bigArray1[i] = (A1 << 0) | (B1 << 8) | (C1 << 16) | (D1 << 24); } } The function basically compares two int arrays. For each pair of matching elements, the method compares each individual byte value and takes the larger of the two. The element in the first array is then assigned a new int value constructed from the 4 largest byte values (irrespective of source). I think I have optimized this method as much as possible in C# (probably I haven't, of course - suggestions on that score are welcome as well). My question is, is it worth it for me to move this method to an unmanaged C DLL? Would the resulting method execute faster (and how much faster), taking into account the overhead of marshalling my managed int arrays so they can be passed to the method? If doing this would get me, say, a 10% speed improvement, then it would not be worth my time for sure. If it was 2 or 3 times faster, then I would probably have to do it. Note: please, no "premature optimization" comments, thanks in advance. This is simply "optimization".
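    One data point before committing to a native DLL: the inner comparisons are exactly a per-byte unsigned maximum, which SSE2 provides as a single instruction. On a modern runtime (.NET Core 3.0+ / .NET 5+, so not the .NET Framework 4 the question targets - treat this purely as a sketch under that assumption) it can be written with hardware intrinsics and unsafe code:

    // Sketch only; requires <AllowUnsafeBlocks>true</AllowUnsafeBlocks> and .NET Core 3.0 or later.
    using System;
    using System.Runtime.Intrinsics.X86;

    static class ByteMax
    {
        public static unsafe void Apply(int[] bigArray1, int[] bigArray2)
        {
            int i = 0;
            if (Sse2.IsSupported)
            {
                fixed (int* p1 = bigArray1, p2 = bigArray2)
                {
                    for (; i + 4 <= bigArray1.Length; i += 4)  // 4 ints = 16 bytes per iteration
                    {
                        var v1 = Sse2.LoadVector128((byte*)(p1 + i));
                        var v2 = Sse2.LoadVector128((byte*)(p2 + i));
                        Sse2.Store((byte*)(p1 + i), Sse2.Max(v1, v2));  // per-byte unsigned max
                    }
                }
            }
            for (; i < bigArray1.Length; i++)  // scalar tail, and fallback when SSE2 is unavailable
            {
                uint a = (uint)bigArray1[i], b = (uint)bigArray2[i], result = 0;
                for (int shift = 0; shift < 32; shift += 8)
                {
                    uint byteA = (a >> shift) & 0xFF, byteB = (b >> shift) & 0xFF;
                    result |= Math.Max(byteA, byteB) << shift;
                }
                bigArray1[i] = (int)result;
            }
        }
    }

    If the native-DLL route is taken anyway, note that int[] is blittable, so P/Invoke pins the arrays rather than copying them; the marshalling overhead per call is small compared to the cost of scanning large arrays.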

    Read the article

  • Javascript and Twitter API rate limitation? (Changing variable values in a loop)

    - by Pablo
    Hello, I have adapted an script from an example of http://github.com/remy/twitterlib. It´s a script that makes one query each 10 seconds to my Twitter timeline, to get only the messages that begin with a musical notation. It´s already working, but I don´t know it is the better way to do this... The Twitter API has a rate limit of 150 IP access per hour (queries from the same user). At this time, my Twitter API is blocked at 25 minutes because the 10 seconds frecuency between posts. If I set up a frecuency of 25 seconds between post, I am below the rate limit per hour, but the first 10 posts are shown so slowly. I think this way I can guarantee to be below the Twitter API rate limit and show the first 10 posts at normal speed: For the first 10 posts, I would like to set a frecuency of 5 seconds between queries. For the rest of the posts, I would like to set a frecuency of 25 seconds between queries. I think if making somewhere in the code a loop with the previous sentences, setting the "frecuency" value from 5000 to 25000 after the 10th query (or after 50 seconds, it´s the same), that´s it... Can you help me on modify this code below to make it work? Thank you in advance. var Queue = function (delay, callback) { var q = [], timer = null, processed = {}, empty = null, ignoreRT = twitterlib.filter.format('-"RT @"'); function process() { var item = null; if (q.length) { callback(q.shift()); } else { this.stop(); setTimeout(empty, 5000); } return this; } return { push: function (item) { var green = [], i; if (!(item instanceof Array)) { item = [item]; } if (timer == null && q.length == 0) { this.start(); } for (i = 0; i < item.length; i++) { if (!processed[item[i].id] && twitterlib.filter.match(item[i], ignoreRT)) { processed[item[i].id] = true; q.push(item[i]); } } q = q.sort(function (a, b) { return a.id > b.id; }); return this; }, start: function () { if (timer == null) { timer = setInterval(process, delay); } return this; }, stop: function () { clearInterval(timer); timer = null; return this; }, empty: function (fn) { empty = fn; return this; }, q: q, next: process }; }; $.extend($.expr[':'], { below: function (a, i, m) { var y = m[3]; return $(a).offset().top y; } }); function renderTweet(data) { var html = ''; html += ''; html += twitterlib.ify.clean(data.text); html += ''; since_id = data.id; return html; } function passToQueue(data) { if (data.length) { twitterQueue.push(data.reverse()); } } var frecuency = 10000; // The lapse between each new Queue var since_id = 1; var run = function () { twitterlib .timeline('twitteruser', { filter : "'?'", limit: 10 }, passToQueue) }; var twitterQueue = new Queue(frecuency, function (item) { var tweet = $(renderTweet(item)); var tweetClone = tweet.clone().hide().css({ visibility: 'hidden' }).prependTo('#tweets').slideDown(1000); tweet.css({ top: -200, position: 'absolute' }).prependTo('#tweets').animate({ top: 0 }, 1000, function () { tweetClone.css({ visibility: 'visible' }); $(this).remove(); }); $('#tweets p:below(' + window.innerHeight + ')').remove(); }).empty(run); run();

    Read the article
