Search Results

Search found 13403 results on 537 pages for '2 epm performance tuning'.

Page 43/537 | < Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >

  • HTML 5 performance on Firefox?

    - by asksuperuser
    I tried this sample here: http://9elements.com/io/projects/html5/canvas/ After a few minutes, Firefox slows down so much that I can't even pop up any menu. When I close the tab, Firefox goes back to normal again. So is HTML 5 really a good choice right now?

    Read the article

  • C++ application as a service with high performance

    - by sand
    I need to provide a C++ application as a service. The client and the service can be on the same machine or distributed across different machines, depending on the load. The application takes a ~2 KB string as input and returns a string of roughly the same size after some processing. Turnaround time for the client should be very short. What is the best mechanism to implement this?

    Read the article

  • Sharepoint Web performance optimization

    - by hertzel
    We are running over SSL on the following server topology: 1 ISA server (SSL termination/cache/proxy + AD authentication), 1 SharePoint server, 1 IBM DB2 database as the enterprise/corporate DB, and 1 MS SQL Server as the local DB. We have recently optimized caching, compression, minification, and other ASP.NET best practices such as viewstate and cookie sizes, minimizing round trips, parallel connections/domain sharding, and a lot more. We are still not convinced that we are in an optimal position, because the network resources, i.e. bandwidth and especially latency, are out of our control: the path from client/browser to the SharePoint server is trans-Atlantic (Asia, USA, Europe). As far as I understand, the only ways to improve network latency are TCP/SSL optimization (hardware or software?) and CDNs (a commercial cloud or our own?). Your opinions and insights would be much appreciated. Best regards, Hertzel

    Read the article

  • Improve disk read performance (multiple files) with threading

    - by pablo
    I need to find a way to read a large number of small files (about 300k files) as fast as possible. Reading them sequentially using FileStream, with each file read in a single call, takes between 170 and 208 seconds (you know how it is: you re-run, the disk cache plays its role, and the time varies). I then tried using PInvoke with CreateFile/ReadFile and FILE_FLAG_SEQUENTIAL_SCAN, but I didn't see any change. I also tried several threads (dividing the big set into chunks and having every thread read its own part), and that improved speed only a little (not even 5% per additional thread, up to 4 threads). Any ideas on the most effective way to do this?

    Read the article

  • Performance of Serialized Objects in C++

    - by jm1234567890
    Hi everyone, I'm wondering if there is a fast way to dump an STL set to disk and then read it back later. The internal structure of a set is a binary tree, so if I serialize it naively, then when I read it back the program has to go through the process of inserting each element again. I think this is slow even if the elements are read back in the correct order; correct me if I am wrong. Is there a way to "dump" the memory containing the set to disk and then read it back later? That is, keep everything in binary format and avoid the re-insertion. Do the Boost serialization tools do this? Thanks! EDIT: I should probably read http://www.parashift.com/c++-faq-lite/serialization.html first. I have read it now... no, it doesn't really help.
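    For reference, a minimal sketch (not from the original post, and assuming a trivially copyable element type such as int64_t) of dumping the set as a flat, sorted binary array and rebuilding it with hinted inserts, which avoids the per-element tree search on reload:

        #include <cstdint>
        #include <fstream>
        #include <set>

        // Write the size followed by the elements; iterating a std::set
        // already yields them in ascending order.
        void dump_set(const std::set<int64_t>& s, const char* path) {
            std::ofstream out(path, std::ios::binary);
            uint64_t n = s.size();
            out.write(reinterpret_cast<const char*>(&n), sizeof n);
            for (std::set<int64_t>::const_iterator it = s.begin(); it != s.end(); ++it)
                out.write(reinterpret_cast<const char*>(&*it), sizeof *it);
        }

        // Because the values come back sorted, inserting each one with end()
        // as the hint is (amortized) constant time in practice instead of O(log n).
        std::set<int64_t> load_set(const char* path) {
            std::ifstream in(path, std::ios::binary);
            uint64_t n = 0;
            in.read(reinterpret_cast<char*>(&n), sizeof n);
            std::set<int64_t> s;
            for (uint64_t i = 0; i < n; ++i) {
                int64_t v;
                in.read(reinterpret_cast<char*>(&v), sizeof v);
                s.insert(s.end(), v);
            }
            return s;
        }

    As far as I know, Boost.Serialization also stores associative containers element by element, so loading still rebuilds the tree.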

    Read the article

  • postgres min function performance

    - by wutzebaer
    Hi, I need the lowest value of runnerId. This query:

        SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794';

    takes 80 ms (1968 result rows). This one:

        SELECT min("runnerId") FROM betlog WHERE "marketId" = '107416794';

    takes 1600 ms. Is there a faster way to find the minimum, or should I calculate the min in my Java program? The plan for the min() query is:

        Result  (cost=100.88..100.89 rows=1 width=0)
          InitPlan 1 (returns $0)
            ->  Limit  (cost=0.00..100.88 rows=1 width=9)
                  ->  Index Scan using runneridindex on betlog  (cost=0.00..410066.33 rows=4065 width=9)
                        Index Cond: ("runnerId" IS NOT NULL)
                        Filter: ("marketId" = 107416794::bigint)

    The existing index is:

        CREATE INDEX marketidindex ON betlog USING btree ("marketId" COLLATE pg_catalog."default");

    Another idea:

        SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ORDER BY "runnerId" LIMIT 1;  -- over 1600 ms
        SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ORDER BY "runnerId";          -- about 100 ms

    How can a LIMIT slow the query down?
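    A commonly suggested fix (a sketch with a hypothetical index name, not part of the original question) is a composite index, so the planner can descend directly to the smallest "runnerId" within one "marketId" instead of walking runneridindex and filtering:

        CREATE INDEX betlog_marketid_runnerid_idx
            ON betlog USING btree ("marketId", "runnerId");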

    Read the article

  • C# Performance on Errors

    - by pm_2
    It would appear that catching an error is slower than performing a check before the error can occur (for example, a TryParse). The related questions that prompted this observation are here and here. Can anyone tell me why this is so - why is it more costly to catch an error than to perform one or many checks on the data to prevent the error?
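    For context, a minimal sketch of the two patterns being compared (hypothetical `input` variable; not from the original post):

        // Pre-check: the failure case is just a boolean result.
        int value;
        if (!int.TryParse(input, out value))
        {
            // handle bad input
        }

        // Exception path: on failure the runtime builds a FormatException,
        // captures the stack trace and unwinds to the handler, which is the
        // expensive part.
        try
        {
            value = int.Parse(input);
        }
        catch (FormatException)
        {
            // handle bad input
        }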

    Read the article

  • Objective-C vs JavaScript loop performance

    - by micadelli
    I have a PhoneGap mobile application in which I need to generate an array of match combinations. On the JavaScript side, the code hung pretty quickly once the array that the combinations are generated from got a bit bigger. So I thought I'd write a plugin to generate the combinations, passing the array of JavaScript objects to the native side and looping over it there. To my surprise, the following code executes in 150 ms in JavaScript, whereas on the native side (Objective-C) it takes ~1000 ms. Does anyone have tips for speeding up those execution times? When the number of players exceeds 10, i.e. the length of the array of teams reaches 252, it gets really slow. The execution times mentioned above are for 10 players / 252 teams. Here's the JavaScript code:

        for (i = 0; i < GAME.teams.length; i += 1) {
            for (j = i + 1; j < GAME.teams.length; j += 1) {
                t1 = GAME.teams[i];
                t2 = GAME.teams[j];
                if ((t1.mask & t2.mask) === 0) {
                    GAME.matches.push({ Team1: t1, Team2: t2 });
                }
            }
        }

    ... and here's the native code:

        NSArray *teams = [[NSArray alloc] initWithArray:[options objectForKey:@"teams"]];
        NSMutableArray *t = [[NSMutableArray alloc] init];
        int mask_t1;
        int mask_t2;
        for (NSInteger i = 0; i < [teams count]; i++) {
            for (NSInteger j = i + 1; j < [teams count]; j++) {
                mask_t1 = [[[teams objectAtIndex:i] objectForKey:@"mask"] intValue];
                mask_t2 = [[[teams objectAtIndex:j] objectForKey:@"mask"] intValue];
                if ((mask_t1 & mask_t2) == 0) {
                    [t insertObject:[teams objectAtIndex:i] atIndex:0];
                    [t insertObject:[teams objectAtIndex:j] atIndex:1];
                    /*
                    NSArray *newCombination = [[NSArray alloc] initWithObjects:
                        [teams objectAtIndex:i], [teams objectAtIndex:j], nil];
                    */
                    [combinations addObject:t];
                }
            }
        }

    ... and the array in question (GAME.teams) looks like this:

        {
            count = 2;
            full = 1;
            list = (
                { index = 0; mask = 1; name = A; score = 0; },
                { index = 1; mask = 2; name = B; score = 0; }
            );
            mask = 3;
            name = A;
        },
        {
            count = 2;
            full = 1;
            list = (
                { index = 0; mask = 1; name = A; score = 0; },
                { index = 2; mask = 4; name = C; score = 0; }
            );
            mask = 5;
            name = A;
        },
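    One detail worth noting (not from the original post): the JavaScript loop compares plain properties, while the Objective-C loop repeats the objectForKey:/intValue unboxing inside the O(n²) inner loop. A rough sketch of hoisting that work out of the loop, assuming the same `teams` and `combinations` variables:

        NSUInteger n = [teams count];
        int *masks = malloc(n * sizeof(int));
        // Unbox each mask once instead of roughly n^2/2 times.
        for (NSUInteger i = 0; i < n; i++) {
            masks[i] = [[[teams objectAtIndex:i] objectForKey:@"mask"] intValue];
        }
        for (NSUInteger i = 0; i < n; i++) {
            for (NSUInteger j = i + 1; j < n; j++) {
                if ((masks[i] & masks[j]) == 0) {
                    [combinations addObject:[NSArray arrayWithObjects:
                        [teams objectAtIndex:i], [teams objectAtIndex:j], nil]];
                }
            }
        }
        free(masks);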

    Read the article

  • Python performance profiling (file close)

    - by user1853986
    First of all, thanks for your attention. My question is how to reduce the execution time of my code. Here is the relevant code; call_prism is called repeatedly from main:

        def call_prism(prism_input_file, random_length):
            prism_output_file = "path.txt"
            cmd = "prism %s -simpath %d %s" % (prism_input_file, random_length, prism_output_file)
            p = os.popen(cmd)
            p.close()
            return prism_output_file

        def main(prism_input_file, number_of_strings):
            ...
            for n in range(number_of_strings):
                prism_output_file = call_prism(prism_input_file, z[n])
            ...
            return

    I used the statistics from the "profile statistics browser" when I profiled my code. The "file close" system call took the most time (14.546 seconds). The call_prism routine is called 10 times, but number_of_strings is usually in the thousands, so my program takes a long time to complete. Let me know if you need more information. By the way, I tried subprocess too. Thanks.
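    For reference, a minimal sketch (not from the original post) of the same call via subprocess; note that closing an os.popen pipe waits for the child process to exit, so a profiler attributes prism's whole runtime to the close call:

        import subprocess

        def call_prism(prism_input_file, random_length):
            prism_output_file = "path.txt"
            # Blocks until prism exits, just like closing the os.popen pipe does.
            subprocess.call(
                ["prism", prism_input_file, "-simpath", str(random_length), prism_output_file]
            )
            return prism_output_file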

    Read the article

  • How can I improve the performance of this algorithm

    - by Justin
        // Checks whether the array contains two elements whose sum is s.
        // Input: A list of numbers and an integer s
        // Output: return True if the answer is yes, else return False
        public static boolean calvalue(int[] numbers, int s) {
            for (int i = 0; i < numbers.length; i++) {
                for (int j = i + 1; j < numbers.length; j++) {
                    if (numbers[i] < s) {
                        if (numbers[i] + numbers[j] == s) {
                            return true;
                        }
                    }
                }
            }
            return false;
        }
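    A sketch of the usual way to bring this down from O(n²) to O(n), using a hash set of the values seen so far (same signature as above; not from the original post):

        import java.util.HashSet;

        public static boolean calvalue(int[] numbers, int s) {
            HashSet<Integer> seen = new HashSet<Integer>();
            for (int x : numbers) {
                // If the complement of x was seen earlier, that earlier element and x sum to s.
                if (seen.contains(s - x)) {
                    return true;
                }
                seen.add(x);
            }
            return false;
        }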

    Read the article

  • PHP array performance

    - by dfo
    Hi, this is my first question on Stack Overflow, please bear with me. I'm testing an algorithm for 2D bin packing and I've chosen PHP to mock it up, as it's my bread-and-butter language nowadays. As you can see at http://themworks.com/pack_v0.2/oopack.php?ol=1 it works pretty well, but you need to wait around 10-20 seconds for 100 rectangles to pack, and for some hard-to-handle sets it hits PHP's 30 s runtime limit. I did some profiling and it shows that most of the time my script walks over different parts of a small 2D array of 0s and 1s: it either checks whether a certain cell equals 0/1 or sets it to 0/1. It can do such operations millions of times, and each one takes a few microseconds. I guess I could use an array of booleans in a statically typed language and things would be faster, or even an array of 1-bit values. I'm thinking of converting the whole thing to some compiled language. Is PHP just not good for this? If I do need to convert it to, let's say, C++, how good are the automatic converters? My script is just a lot of for loops with basic array and object manipulation. Thank you! Edit: this function gets called more than any other. It reads a few properties of a very simple object and goes through a very small part of a smallish array to check whether any element is non-zero.

        function fits($bin, $file, $x, $y) {
            $flag = true;
            $xw = $x + $file->get_width();
            $yh = $y + $file->get_height();
            for ($i = $x; $i < $xw; $i++) {
                for ($j = $y; $j < $yh; $j++) {
                    if ($bin[$i][$j] !== 0) {
                        $flag = false;
                        break;
                    }
                }
                if (!$flag) break;
            }
            return $flag;
        }
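    One PHP-level idea, sketched below under the assumption that each grid line could be stored as one string of '0'/'1' characters instead of a nested array (hypothetical rewrite, untested): a strpos over the relevant slice replaces the inner loop entirely.

        function fits($bin, $file, $x, $y) {
            $w = $file->get_width();
            $h = $file->get_height();
            for ($i = $x; $i < $x + $w; $i++) {
                // $bin[$i] is a string; any '1' in the slice means a collision.
                if (strpos(substr($bin[$i], $y, $h), '1') !== false) {
                    return false;
                }
            }
            return true;
        }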

    Read the article

  • C/C++ variable length automatic array performance

    - by aaa
    Hello. Is there significant CPU/memory overhead associated with using automatic (variable-length) arrays with g++/Intel compilers on a 64-bit x86 Linux platform?

        int function(int N) {
            double array[N];
            ...
        }

    Specifically, what is the overhead compared to allocating the array beforehand (assuming the function is called multiple times), compared to using new, and compared to using malloc? N ranges roughly from 1 KB to 16 KB, and stack overrun is not a problem. Thank you.
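    A minimal sketch of the variants being compared (not from the original post); the variable-length array is essentially a stack-pointer adjustment, while the heap variants pay for an allocator call, and std::vector additionally value-initialises its elements:

        #include <vector>

        static void use(double* p, int n) { (void)p; (void)n; }  // stand-in for real work

        void f_vla(int N) {
            double array[N];                // g++ extension in C++: one stack-pointer bump
            use(array, N);
        }

        void f_new(int N) {
            double* array = new double[N];  // heap allocation; elements left uninitialised
            use(array, N);
            delete[] array;
        }

        void f_vector(int N) {
            std::vector<double> array(N);   // heap allocation + zero-initialisation of N doubles
            use(&array[0], N);
        }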

    Read the article

  • Which should I use? (performance)

    - by Yim
    I want to know a simple thing: when setting up a style that is inherited by all of an element's children, is it recommended to use the most specific selector (even if you don't mind other elements getting the same style)? The structure (outermost to innermost) is: html, body, parent_content, wrapper, p. I don't care if parent_content or wrapper end up with the style, but I do care about not changing the html or body styles (or all p elements). So which should I use?

        #parent_content { color: #555; }
        #parent_content p { color: #555; }
        #wrapper { color: #555; }
        ...

    Also, some links to tutorials about this would be great.

    Read the article

  • Will logging debugging incur a performance hit if I don't turn debugging on?

    - by romandas
    On a Cisco device, I know that enabling debugging can incur a performance hit, since debugging has such a high priority on the CPU. I also know that to log debug output, you have to set logging to the debugging level (logging buffered 4096 debugging, for example) and also enable debugging on some feature. Does configuring logging at the debugging level incur the performance hit even if you don't enable debugging on any feature, or would it be safe (assuming you want, and can handle, all the logging events via syslog) to configure 'logging buffered 4096 debugging' so that maximum logging is available if/when someone uses debug?

    Read the article

  • Tuning performance of Ubuntu 10.04 on Compaq Evo W4000.

    - by Fantomas
    Hi, I got this computer free and installed Ubuntu 10.04 on it + updates, plus followed the following tutorial all the way: http://www.unixmen.com/linux-tutorials/937-things-to-do-after-installing-ubuntu-1004-lts-lucid-lynx I love the Docky which comes with it, but the computer has been running rather slowly.

    The system: kernel 2.6.32-22-generic, Gnome 2.30.0 (I like Gnome!), memory: 1 GB, processor: Intel(R) Pentium(R) 4 CPU 1700 MHz (needless to say, it is 32-bit). I think I dedicated 128 MB to video memory while installing, but cannot find this setting now. I did also install an NVidia driver for the 3D card, so I probably want to reclaim that memory back.

    I want to trim the fat but I also want to keep some of the sex appeal of Ubuntu 10.04. I will gift this computer to a friend, who will use it for Internet, music, videos, word processing, Skype and instant messaging - he is non-technical, so this hardware and Linux should work for him; I just need to speed it up while keeping the good software and having a nice UI. I sort of know my way around Linux, but not that well. Feel free to ask me to run particular commands if you want more info.

    For starters, here are the services below. Which ones can I kill and how? What else can go? There is no need to run ssh or ftp or http or ntp servers. As I said before, this computer is for a non-technical person. There is also absolutely no bluetooth or wireless networking needed - it will feed off a regular ethernet cable. What I do not want to do is reinstall some other distro or recompile a kernel. I want to make it 80% perfect spending 20% of the energy :) Thanks!

        $ service --status-all
        [ ? ] acpi-support
        [ ? ] acpid
        [ ? ] alsa-mixer-save
        [ ? ] anacron
        [ - ] apparmor
        [ ? ] apport
        [ ? ] atd
        [ ? ] avahi-daemon
        [ ? ] binfmt-support
        [ - ] bluetooth
        [ - ] bootlogd
        [ - ] brltty
        [ ? ] console-setup
        [ ? ] cron
        [ + ] cups
        [ ? ] dbus
        [ ? ] dmesg
        [ ? ] dns-clean
        [ ? ] failsafe-x
        [ - ] fancontrol
        [ ? ] gdm
        [ - ] grub-common
        [ ? ] hostname
        [ ? ] hwclock
        [ ? ] hwclock-save
        [ ? ] irqbalance
        [ - ] kerneloops
        [ ? ] killprocs
        [ - ] lm-sensors
        [ ? ] module-init-tools
        [ ? ] network-interface
        [ ? ] network-interface-security
        [ ? ] network-manager
        [ ? ] networking
        [ ? ] ondemand
        [ ? ] pcmciautils
        [ ? ] plymouth
        [ ? ] plymouth-log
        [ ? ] plymouth-splash
        [ ? ] plymouth-stop
        [ ? ] pppd-dns
        [ ? ] procps
        [ + ] pulseaudio
        [ ? ] rc.local
        [ - ] rsync
        [ ? ] rsyslog
        [ - ] saned
        [ ? ] screen-cleanup
        [ ? ] sendsigs
        [ ? ] speech-dispatcher
        [ ? ] stop-bootlogd
        [ ? ] stop-bootlogd-single
        [ ? ] udev
        [ ? ] udev-finish
        [ ? ] udevmonitor
        [ ? ] udevtrigger
        [ ? ] ufw
        [ ? ] umountfs
        [ ? ] umountnfs.sh
        [ ? ] umountroot
        [ ? ] unattended-upgrades
        [ - ] urandom
        [ + ] winbind
        [ ? ] wpa-ifupdown
        [ - ] x11-common
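    For what it's worth, on 10.04 a System V style service such as bluetooth can usually be stopped and kept from starting at boot with something like the following (hypothetical example; Upstart jobs are handled differently, and whether a given service is safe to disable depends on the machine):

        sudo service bluetooth stop
        sudo update-rc.d bluetooth disable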

    Read the article

  • Sysadmin 101: How can I figure out why my server crashes and monitor performance?

    - by bflora
    I have a Drupal-powered site that seems to have never-ending performance problems. It was butt-slow about 5 months ago. I brought in some guys who installed nginx for anonymous visitors, ajaxified a few queries so they wouldn't fire during page load, and helped me find a few bottlenecks in the code. For about a month, the site was significantly faster, though not "fast" by any stretch of the word. Meanwhile, I'm now shelling out $400/month to Slicehost to host a site that gets fewer than 5,000 uniques a day. Yes, you read that right. Go Drupal. Recently the site started crashing again and is slow again. I can't afford to hire people to come in, study my code from top to bottom, and make changes that may or may not help anymore, and I can't afford to throw more hardware at the problem. So I need to figure out what the problem is myself. Questions:

    1. When Apache crashes, is it possible to find out what caused it to crash? There has to be a way, right? If so, how can I do this? Is there software I can use that will tell me which process caused my server to die? (e.g. "Apache crashed because someone visited page X." or "Apache crashed because you were importing too many RSS items from feed X.") There's got to be a way to learn this, right?

    2. What's a good, noob-friendly way to monitor my current Apache performance? My developer friends tell me to "just use top, dude," but top shows me a bunch of numbers without any context. I have no clue what qualifies as a bad number or a good number in top, or which processes are relevant and which aren't. Are there any noob-friendly server monitoring tools out there? Ideally, I could have a page that would give me a color-coded indicator of how Apache is performing and then show me a list of processes or pages that are sucking up resources right now. This way, I could know when performance is bad and what's causing it to be so bad.

    3. Why does PHP memory matter? My site apparently has a 30 MB memory footprint. Will it run faster if I bring that number down?

    Thanks for any advice. I spent a year or so trying to boost my advertising income so I could hire a contractor to solve my performance woes. I didn't want to have to learn all this sysadmin voodoo, but I'm now resigned to the fact that I might not have a choice.

    Read the article

  • How to increase performance of Acer Aspire One 751h netbook?

    - by Wolfarian
    Hello! I bought my new netbook, an Acer Aspire One 751h, a few days ago and was very unhappy with its performance - video calling in Skype is almost unusable, watching videos on YouTube (even in standard definition) is like watching a slideshow, and the whole netbook lags incredibly if I run more than 4-5 programs at a time. So, can somebody tell me how to improve the performance of the netbook (OS: Windows XP SP3)? And can you tell me where to control power management, please? Thank you!

    Read the article

  • Configuration Tuning for PostgreSQL 9.1 PostGIS 1.5 Ubuntu 12.04 Server

    - by Martin
    My server performance is poor. At times SSH, top, and other commands are very slow to respond, taking several seconds or more, and a query that normally takes 5 minutes can sometimes take 30 minutes. The database is mostly used for a spatial query (grid and summarize) over approximately 500 GB of data spread across 4 tables. Restarting the server works as a temporary fix, but cannot be used as a long-term solution. Any suggestions for how to diagnose and solve my performance issues? Hardware and configuration: 3.3 GHz Intel quad-core i5, 16 GB DDR3 RAM, 6 TB software RAID 10 (6 x 2 TB drives), Ubuntu 12.04 64-bit, PostgreSQL 9.1, PostGIS 1.5.
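    As a point of reference, pgtune-style starting values for a dedicated 16 GB machine often look roughly like the sketch below (hypothetical numbers, not from the original post; they need to be validated against the real workload, and diagnosis should start with checking whether the box is swapping or I/O-bound during the slowdowns):

        # postgresql.conf - possible starting points for 16 GB RAM, PostgreSQL 9.1
        shared_buffers = 4GB
        effective_cache_size = 12GB
        work_mem = 64MB                      # per sort/hash node, so mind the connection count
        maintenance_work_mem = 1GB
        checkpoint_segments = 32
        checkpoint_completion_target = 0.9
        random_page_cost = 2.0               # spinning-disk RAID 10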

    Read the article

< Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >