Search Results

Search found 13206 results on 529 pages for 'performance measurement'.

Page 81 of 529

  • How does TopCoder evaluate code?

    - by Carlos
    If you are familiar with TopCoder, you know that your source code gets a final "grade/points" that depends on time, number of compiles, etc., with performance being one of the most heavily weighted factors. But how can they test that? Is there some sort of simple code (Java or C++) that you could share so I can evaluate it and hopefully write my own to test the programs I write for university? This is sort of a follow-up question to the one where I ask whether shorter code results in better performance. P.S.: I'm interested both in how TopCoder measures performance and in writing code to test performance.
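
    For the "writing code to test performance" part, a minimal Java timing harness along these lines is one way to get rough wall-clock numbers. This is only a sketch: the solve method and its workload are hypothetical stand-ins for whatever program is being measured, and it says nothing about how TopCoder itself judges submissions.

        import java.util.concurrent.TimeUnit;

        public class TimingHarness {
            // Hypothetical solution method standing in for the code under test.
            static long solve(int n) {
                long sum = 0;
                for (int i = 0; i < n; i++) {
                    sum += i;
                }
                return sum;
            }

            public static void main(String[] args) {
                // Warm up the JIT so the measured run is closer to steady state.
                for (int i = 0; i < 5; i++) {
                    solve(1_000_000);
                }
                long start = System.nanoTime();
                long result = solve(1_000_000);
                long elapsed = System.nanoTime() - start;
                System.out.printf("result=%d, took %d ms%n",
                        result, TimeUnit.NANOSECONDS.toMillis(elapsed));
            }
        }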

    Read the article

  • JSR 275 - Units, Percent per second

    - by I82Much
    Hi all, I need to represent the unit of percent per second using JScience.org's JSR 275 units and measures implementation. I am trying to do the following: Unit<Dimensionless> PERCENT_PER_SECOND = NonSI.PERCENT.divide(SI.SECOND).asType(Dimensionless.class) but I am getting a ClassCastException when I try to do that. The following works, but I'm not sure if there's a better way: public interface PercentOverTime extends Quantity {} public static Unit<PercentOverTime> PERCENT_PER_SECOND = new BaseUnit<PercentOverTime>("%/s"); Any thoughts? The closest I could find to this is the question on Cooking Measurements (which is where I saw how to define your own units).

    Read the article

  • Is it possible to pickle python "units" units?

    - by Ajaxamander
    I'm using the Python "units" package (http://pypi.python.org/pypi/units/) and I've run into some trouble when trying to pickle them. I've tried to boil it down to the simplest possible case to try to figure out what's going on. Here's my simple test:

        import logging
        import pickle

        from units import unit, named_unit
        from units.predefined import define_units
        from units.compatibility import compatible
        from units.registry import REGISTRY

        a = unit('m')
        a_p = pickle.dumps(a)
        a_up = pickle.loads(a_p)

        logging.info(repr(unit('m')))
        logging.info(repr(a))
        logging.info(repr(a_up))
        logging.info(a.is_si())
        logging.info(a_up.is_si())
        logging.info(compatible(a, a_up))
        logging.info(a(10) + a_up(10))

    The output I'm seeing when I run this is:

        LeafUnit('m', True)
        LeafUnit('m', True)
        LeafUnit('m', True)
        True
        True
        False
        IncompatibleUnitsError

    I'd understand if pickling units broke them, if it weren't for the fact that repr() returns identical results for them. What am I missing? This is using v0.04 of the units package, and Google App Engine 1.4 SDK 1

    Read the article

  • What is the correct stage to use for Google Guice in production in an application server?

    - by Yishai
    It seems like a strange question (the obvious answer would be Production, duh), but if you read the Javadocs:

        /**
         * We want fast startup times at the expense of runtime performance and some up front error
         * checking.
         */
        DEVELOPMENT,

        /**
         * We want to catch errors as early as possible and take performance hits up front.
         */
        PRODUCTION

    Assuming a scenario where you have a stateless call to an application server, and the initial receiving method (or thereabouts) creates the injector anew on every call: if not all of the module bindings are needed in a given call, then it would seem better to use the DEVELOPMENT stage (which is the default) and not take the performance hit up front, because you may never take it at all; here the distinction between "up front" and "runtime performance" is kind of moot, as it is one call. Of course the downside of this would appear to be that you lose the error checking, so potential code paths could cause a problem by surprise. So the question boils down to: are the assumptions above correct? Will you save performance on a large set of modules when the lifetime of an injector is one call?
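
    For reference, a hedged sketch of where the stage choice comes in when an injector is created per call. AppModule and its binding are hypothetical placeholders; only Guice.createInjector(Stage, Module...) is assumed from the real API.

        import com.google.inject.AbstractModule;
        import com.google.inject.Guice;
        import com.google.inject.Injector;
        import com.google.inject.Stage;

        public class InjectorPerCall {
            // Hypothetical module standing in for the application's real bindings.
            static class AppModule extends AbstractModule {
                @Override
                protected void configure() {
                    bind(Runnable.class).toInstance(() -> System.out.println("handling call"));
                }
            }

            static void handleCall() {
                // DEVELOPMENT defers most of the up-front work, which is the
                // trade-off discussed above when the injector lives for only
                // a single call.
                Injector injector = Guice.createInjector(Stage.DEVELOPMENT, new AppModule());
                injector.getInstance(Runnable.class).run();
            }

            public static void main(String[] args) {
                handleCall();
            }
        }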

    Read the article

  • Easily measure elapsed time

    - by hap497
    I am trying to use time() to measure various points of my program. What I don't understand is why the values before and after are the same. I understand this is not the best way to profile my program; I just want to see how long something takes.

        printf("**MyProgram::before time= %ld\n", time(NULL));
        doSomething();
        doSomethingLong();
        printf("**MyProgram::after time= %ld\n", time(NULL));

    I have tried:

        struct timeval diff, startTV, endTV;

        gettimeofday(&startTV, NULL);
        doSomething();
        doSomethingLong();
        gettimeofday(&endTV, NULL);

        timersub(&endTV, &startTV, &diff);

        printf("**time taken = %ld %ld\n", diff.tv_sec, diff.tv_usec);

    How do I read a result of **time taken = 0 26339? Does that mean 26,339 nanoseconds = 26.3 msec? What about **time taken = 4 45025 - does that mean 4 seconds and 25 msec?

    Read the article

  • nVidia performance with newer X and newer driver abysmal with Compiz

    - by Nakedible
    I recently upgraded Debian to Xorg 2.9.4 and installed nvidia-glx from experimental, version 260.19.21. This was somewhat of an uphill battle, as the dependencies for the experimental nvidia-glx package are still somewhat broken. I got it to work without forcing the installation of any packages and without modifying the packages. However, after the upgrade Compiz performance has been abysmal. I am using the desktop wall plugin and switching viewports is really slow - it takes a few seconds for each switch. In addition to this, every effect that Compiz does, such as zoom animations for icons when launching applications, takes seconds. The viewport switching speed changes relative to the number of windows on that virtual screen - empty screens switch almost at normal speed, single browser windows work almost decently, but just 4 rxvt terminals slow the switches down to a crawl. My Compiz configuration should be pretty basic. Xorg is likewise configured without anything special - the only "custom" configuration is forcing the driver name to be "nvidia". I've fiddled around with nvidia-settings and compizconfig trying different VSync settings, but none of those helped. My graphics card is: NVIDIA GPU NVS 3100M (GT218) at PCI:1:0:0 (GPU-0). This is a laptop GPU from the GeForce GTX 200 series. Graphics card performance should naturally be no problem. EDIT: In the end, nothing really worked, and I got really annoyed with the state of Compiz and its support in Debian. Many nVidia driver revisions have passed and I am using Gnome 3 now, so I am accepting the best answers to this question even though the issue was not resolved.

    Read the article

  • Linux iptables / conntrack performance issue

    - by tim
    I have a test setup in the lab with 4 machines: 2 old P4 machines (t1, t2), 1 Xeon 5420 DP 2.5 GHz with 8 GB RAM and an Intel e1000 (t3), and 1 Xeon 5420 DP 2.5 GHz with 8 GB RAM and an Intel e1000 (t4), to test Linux firewall performance, since we got bitten by a number of syn-flood attacks in the last months. All machines run Ubuntu 12.04 64-bit. t1, t2 and t3 are interconnected through a 1 Gbit/s switch; t4 is connected to t3 via an extra interface. So t3 simulates the firewall, t4 is the target, and t1 and t2 play the attackers generating a packet storm through it (192.168.4.199 is t4):

        hping3 -I eth1 --rand-source --syn --flood 192.168.4.199 -p 80

    t4 drops all incoming packets to avoid confusion with gateways, performance issues of t4, etc. I watch the packet stats in iptraf. I have configured the firewall (t3) as follows: stock 3.2.0-31-generic #50-Ubuntu SMP kernel, rhash_entries=33554432 as a kernel parameter, and sysctl as follows:

        net.ipv4.ip_forward = 1
        net.ipv4.route.gc_elasticity = 2
        net.ipv4.route.gc_timeout = 1
        net.ipv4.route.gc_interval = 5
        net.ipv4.route.gc_min_interval_ms = 500
        net.ipv4.route.gc_thresh = 2000000
        net.ipv4.route.max_size = 20000000

    (I have tweaked a lot to keep t3 running when t1+t2 are sending as many packets as possible). The results of these efforts are somewhat odd: t1+t2 each manage to send about 200k packets/s, and t4 in the best case sees around 200k in total, so half of the packets are lost. t3 is nearly unusable on the console while packets are flowing through it (high numbers of soft-irqs). The route cache garbage collector is nowhere near predictable and in the default setting is overwhelmed by very few packets/s (<50k packets/s). Activating stateful iptables rules makes the packet rate arriving on t4 drop to around 100k packets/s, effectively losing more than 75% of the packets. And this - here is my main concern - with two old P4 machines sending as many packets as they can, which means nearly everyone on the net should be capable of this. So here goes my question: did I overlook some important point in the config or in my test setup? Are there any alternatives for building a firewall system, especially on SMP systems?

    Read the article

  • My program is spending most of its time in objc_msgSend. Does that mean that Objective-C has bad performance?

    - by Paperflyer
    Hello Stack Overflow. I have written an application that has a number of custom views and generally draws a lot of lines and bitmaps. Since performance is somewhat critical for the application, I spent a good amount of time optimizing draw performance. Now, Activity Monitor tells me that my application is usually using about 12% CPU, and Instruments (the profiler) says that a whopping 10% CPU is spent in objc_msgSend (mostly in drawing-related system calls). On the one hand, I am glad about this, since it means that my drawing is about as fast as it gets and my optimizations were a huge success. On the other hand, it seems to imply that the only thing that is still using my CPU is the Objective-C overhead for messages (objc_msgSend). Hence, if I had written the application in, say, Carbon, its performance would be drastically better. Now I am tempted to conclude that Objective-C is a language with bad performance, even though Cocoa seems to be awfully efficient, since it can apparently draw faster than Objective-C can send messages. So, is Objective-C really a language with bad performance? What do you think about that?

    Read the article

  • How to quantify your "slow" development machine?

    - by lance
    ( Please provide the question this one duplicates. I'm disappointed I couldn't find it. ) My development machine is "slow". I wait on it "a lot". I've been asked by decision makers who want to help to fairly and accurately measure that time. How do you quantify the amount of time you spend waiting on the computer (during compiles, waiting for apps to open every day, etc.)? Is there software that effectively reports on this sort of thing? Is there an OS metric (I/O something something, pagefile swapping frequency, etc.) that captures and communicates this particularly well? Some sort of benchmark you'd recommend I test against?

    Read the article

  • Developing a high-performance, scalable Zend Framework website

    - by Daniel
    We are going to develop an ads website like http://www.gumtree.com/ (it will not be like this one, but just to give you an idea) and we are having some issues regarding performance and scalability. We are planning on using Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator -- I understand this will create problems with scalability on multiple servers) + jQuery, a CDN for static content, a server for images, and a scalable server for the requests and the rest. My questions are: What do you think about my approach? What solutions would you recommend in terms of the server approach (MySQL, NGINX, MongoDB or pgsql) for a scalable application expected to have a lot of traffic using PHP? I would be interested in your approach. Note: I'm a Zend Framework developer and don't have too much experience with the server side (to determine what would be the best solution for my scalable application).

    Read the article

  • Pattern Matching of Units of Measure in F#

    - by Oldrich Svec
    This function:

        let convert (v: float<_>) =
            match v with
            | :? float<m> -> v / 0.1<m>
            | :? float<m/s> -> v / 0.2<m/s>
            | _ -> failwith "unknown"

    produces an error:

        The type 'float<'u>' does not have any proper subtypes and cannot be used as the source of a type test or runtime coercion.

    Is there any way to pattern match on units of measure?

    Read the article

  • How do I make a "simple" throughput J2EE filter?

    - by Tommy
    I'm looking to create a filter that can give me two things: the number of requests per minute and the average response time per minute. I already have the individual readings; I'm just not sure how to add them up. My filter captures every request and records the time each request takes:

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            long start = System.currentTimeMillis();
            chain.doFilter(request, response);
            long stop = System.currentTimeMillis();
            String time = Util.getTimeDifferenceInSec(start, stop);
        }

    This information will be used to create some pretty Google Chart charts. I don't want to store the data in any database -- just a way to get the current numbers out when requested. As this is a high-volume application, low overhead is essential. I'm assuming my application server doesn't provide this information.
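
    A minimal sketch of the "adding them up" part, using plain JDK atomics inside the filter. The class name and the print-to-stdout hand-off are hypothetical placeholders, and the rolling-minute logic is deliberately simplistic rather than a production design.

        import java.io.IOException;
        import java.util.concurrent.atomic.AtomicLong;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;

        public class ThroughputFilter implements Filter {
            // Counters for the minute currently being accumulated.
            private final AtomicLong requestCount = new AtomicLong();
            private final AtomicLong totalMillis = new AtomicLong();
            private volatile long currentMinute = System.currentTimeMillis() / 60000;

            public void init(FilterConfig config) {}

            public void destroy() {}

            public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                    throws IOException, ServletException {
                long start = System.currentTimeMillis();
                chain.doFilter(request, response);
                long elapsed = System.currentTimeMillis() - start;

                long minute = System.currentTimeMillis() / 60000;
                if (minute != currentMinute) {
                    // A new minute has started: publish the previous minute and reset.
                    // A real version would guard this hand-off more carefully.
                    long count = requestCount.getAndSet(0);
                    long millis = totalMillis.getAndSet(0);
                    currentMinute = minute;
                    System.out.printf("requests/min=%d, avg response ms=%d%n",
                            count, count == 0 ? 0 : millis / count);
                }
                requestCount.incrementAndGet();
                totalMillis.addAndGet(elapsed);
            }
        }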

    Read the article

  • How to measure the time taken by C# NetworkStream.Read?

    - by publicENEMY
    I want to measure the time taken for a client to receive data over TCP using C#. I'm using NetworkStream.Read to read 100 megabits of data that are sent using NetworkStream.Write. I set the buffer to the same size as the data, so there's no buffer underrun problem, etc. Generally it looks like this:

        Stopwatch sw = new Stopwatch();
        sw.Start();
        stream.Read(bytes, 0, bytes.Length);
        sw.Stop();

    The problem is that there is a possibility the sender hasn't actually sent the data yet, but the stopwatch is already running. How can I accurately measure the time taken to receive the data? I did try to use the time lapse of the remote PC's stream.Write, but the time it takes to write is extremely small. By the way, is the Stopwatch the most accurate tool for this task?

    Read the article

  • How can I measure my (SAMP) server's bandwidth usage?

    - by enkrates
    I'm running a Solaris server to serve PHP through Apache. What tools can I use to measure the bandwidth my server is currently using? I use Google Analytics to measure traffic, but as far as I know it ignores file size. I have a rough idea of the average size of the pages I serve and can do a back-of-the-envelope calculation of my bandwidth usage by multiplying page views (from Google) by average page size, but I'm looking for a solution that is more rigorous and exact. Also, I'm not trying to throttle anything or implement usage caps or anything like that; I'd just like to measure the bandwidth usage so I know what it is. An example of what I'm after is the usage meter that Slicehost provides in their admin website for their users. They tell me (for another site I run) how much bandwidth I've used each month and also break the usage down into uploads and downloads. So it seems like this data can be measured, and I'd like to be able to do it myself. To put it simply, what is the conventional method for measuring the bandwidth usage of my server?

    Read the article

  • Should I enable "Intel NIC DMA Channels"?

    - by javapowered
    I have an HP DL360p Gen8 646902-xx1. I'm trying to optimize my config for low-latency trading. Should I enable "Intel NIC DMA Channels"? Will that help/affect my system? From the HP doc: Added a new ROM Based Setup Utility (RBSU) Advanced Performance Option menu that allows the user to enable Intel NIC DMA Channels (IOAT). This option is disabled by default. When enabled, certain networking devices may see an improvement in performance by utilizing Intel's DMA engine to offload network activity. Please consult documentation from the network adapter to determine if this feature is supported.

    Read the article

  • NSClient++ FAIL on Windows 2008 R2 -- PDHCollector.cpp(215) Failed to query performance counters

    - by John DaCosta
    I am attempting to monitor Windows Server 2008 R2 x64 Enterprise with Nagios. When I test/install the nsclient I get the following error: PDHCollector.cpp(215) Failed to query performance counters: \Processor(_total)\% Processor Time: PdhGetFormattedCounterValue failed: A counter with a negative denominator value was detected. (800007D6) Has anyone else encountered the same issue and/or resolved it or found a workaround?

    Read the article

  • Upgrading a PERC h310 to a PERC H710 mini RAID controller on a Dell R620

    - by Gregg Leventhal
    I have an ESXi 5.0 free-license host using an internal datastore (RAID 5, 5 disks) that was configured with a Dell PERC H310 RAID controller. The disk performance was very poor, so I upgraded to the PERC H710 Mini. The IT tech installed the controller and powered the host back on. I had to rescan the controller and the datastore appeared. Should any settings be changed in the RAID BIOS, or should the default settings be sufficient? Is there anything to be aware of when performing this type of upgrade in order to achieve the maximum performance?

    Read the article

  • Unix sort keys cause performance problems

    - by KenFar
    My data: it's a 71 MB file with 1.5 million rows. It has 6 fields, and all six fields combine to form a unique key - so that's what I need to sort on.

    Sort statement:

        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o output.csv input.csv

    The problem: if I sort without keys, it takes 30 seconds. If I sort with keys, it takes 660 seconds. I need to sort with keys to keep this generic and useful for other files that have non-key fields as well. The 30 second timing is fine, but the 660 is a killer. More details using unix time:

        sort input.csv -o output.csv  = 28 seconds
        sort -t ',' -k1 input.csv -o output.csv  = 28 seconds
        sort -t ',' -k1,1 input.csv -o output.csv  = 64 seconds
        sort -t ',' -k1,1 -k2,2 input.csv -o output.csv  = 194 seconds
        sort -t ',' -k1,1 -k2,2 -k3,3 input.csv -o output.csv  = 328 seconds
        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 input.csv -o output.csv  = 483 seconds
        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 input.csv -o output.csv  = 561 seconds
        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 input.csv -o output.csv  = 660 seconds

    I could theoretically move the temp directory to SSD, and/or split the file into 4 parts, sort them separately (in parallel), then merge the results, etc. But I'm hoping for something simpler, since it looks like sort is just picking a bad algorithm. Any suggestions?

    Testing improvements using buffer-size: with 2 keys I got a 5% improvement with 8, 20 and 24 MB and a best improvement of 8% with 16 MB, but 6% worse with 128 MB. With 6 keys I got a 5% improvement with 8, 20 and 24 MB and a best improvement of 9% with 16 MB.

    Testing improvements using dictionary order (just 1 run each):

        sort -d --buffer-size=8M -t ',' -k1,1 -k2,2 input.csv -o output.csv  = 235 seconds (21% worse)
        sort -d --buffer-size=8M -t ',' -k1,1 -k2,2 input.csv -o ouput.csv  = 232 seconds (21% worse)

    Conclusion: it makes sense that this would slow the process down; not useful.

    Testing with a different file system on SSD - I can't do this on this server now.

    Testing with code to consolidate adjacent keys:

        def consolidate_keys(key_fields, key_types):
            """
            Inputs:
               - key_fields - a list of numbers in quotes: ['1','2','3']
               - key_types  - a list of types of the key_fields: ['integer','string','integer']
            Outputs:
               - key_fields - a consolidated list: ['1,2','3']
               - key_types  - a list of types of the consolidated list: ['string','integer']
            """
            assert(len(key_fields) == len(key_types))

            def get_min(val):
                vals = val.split(',')
                assert(len(vals) <= 2)
                return vals[0]

            def get_max(val):
                vals = val.split(',')
                assert(len(vals) <= 2)
                return vals[len(vals)-1]

            i = 0
            while True:
                try:
                    if ((int(get_max(key_fields[i])) + 1) == int(key_fields[i+1])
                            and key_types[i] == key_types[i+1]):
                        key_fields[i] = '%s,%s' % (get_min(key_fields[i]), key_fields[i+1])
                        key_types[i] = key_types[i]
                        key_fields.pop(i+1)
                        key_types.pop(i+1)
                        continue
                    i = i+1
                except IndexError:
                    break  # last entry

            return key_fields, key_types

    While this code is just a work-around that will only apply to cases in which I've got a contiguous set of keys, it speeds up the code by 95% in my worst-case scenario.

    Read the article

  • What would be the optimal disk config for SQL Server 2008 R2?

    - by Kev
    We have a new Dell R710 server that came with the following storage configuration:

        8 x 146GB SAS 10k 6Gbps disks
        1 x PERC H700 Integrated Controller (2 x 4 disks - 2 ports, each supporting 4 disks)

    1. What would be the optimal configuration if we were just after performance?
    2. What would be the optimal configuration if we were after performance but wanted data resilience?
    3. As per 2 above, but with a hot standby disk?

    We plan to run Windows 2008 R2 and SQL Server 2008 R2. Maximising storage capacity isn't a prime concern.

    Read the article

  • Dell OpenManage Causing Periodic Slowness

    - by Zorlack
    Today we diagnosed the cause of a periodic slowness issue: see here. Dell OpenManage Server Administrator seems to have been causing hourly slowness; it would occasionally peg one of the CPUs for upwards of two minutes. Disabling it drastically improved the performance of the SQL Server. The server hardware:

        Dell R710
        Dual Quad Core 2.9 GHz Processors
        96 GB Memory
        2-Disk RAID 1 SAS System Disk (Internal)
        4-Disk RAID 10 SAS Log Disk (Internal)
        14-Disk RAID 10 SAS Data Disk (External DAS MD1000)
        Windows 2008 R2 Enterprise x64

    We installed the OS using Dell OpenManage Server Assistant, so I assume that it was correctly configured. For now we have disabled OMSA to alleviate the performance issues it was causing, but I'd like to be able to re-enable it. Has anyone had a similar experience that can shed a little light on the nature of this problem?

    Read the article

  • Justifying a memory upgrade

    - by AngryHacker
    My employer has over a thousand servers (running SQL Server 2005 x64 and a couple of other apps) all across the country, and in my opinion they are all massively underpowered for what they need to do. Specifically, I feel that the servers simply do not have enough RAM for the volume of work they are asked to do. All the servers currently have 6GB of RAM. The users are pretty much always complaining about performance (mostly, in my opinion, because the server dips into the paging file quite often). I finally convinced the powers that be to at least try out a memory upgrade on one box and see the results. However, they want before-and-after metrics, so that they can see that the expense will be justified. My question is: what metrics should I collect to see whether the performance truly improves on the box? I am a dev, so I am not sure how and what to collect (I have a passing knowledge of Perfmon).

    Read the article

  • What is acceptable datastore latency on a VMware ESXi host?

    - by BeowulfNode42
    Looking at the performance figures on our existing VMware ESXi 4.1 host, under the Datastore/Real-time performance data:

        Write Latency: Avg 14 ms, Max 41 ms
        Read Latency:  Avg 4.5 ms, Max 12 ms

    People don't seem to be complaining too much about it being slow with those numbers. But how much higher could they get before people found it to be a problem? We are reviewing our head office systems due to running low on storage space, and are tossing up between buying a 2nd VM host with DAS or buying some sort of NAS for SMB file shares in the near term, and maybe running VMs from it in the longer term. Currently we have just under 40 staff at head office, with 9 smaller branches spread across the country. Head office is running an MS RDS session-based environment with Linux ERP and mail systems. In total there are 22 VMs on a single host, with DAS made from a RAID 10 of 6x 15k SAS disks.

    Read the article

  • Overhead of Perfmon -> direct to SQL Database

    - by StuartC
    Hi all, first up, I'm a total newb at performance monitoring. I'm looking to set up central performance monitoring of some boxes:

        2K3 TS (monitor general OS perf & session-specific counters)
        2K8 R2 (XenApp 6 - monitor general OS perf & session-specific counters)
        File Server (standard file I/O)

    My ultimate aim is to get as many counters and as much information as possible, including counters specific to their sessions, without impacting the clients' session experience at all. I was thinking of logging directly to SQL on another server, instead of the two-part process of writing a .blg file and then relogging it to SQL. Would that work OK? Does anyone know the overhead of going straight to SQL from the client? I've searched around a bit, but the amount of information can be overwhelming. Thanks

    Read the article

  • Speed up VMware ESX guest HDD access

    - by Uwe
    Hello, we run several Windows servers and Windows clients on our VMware ESX. One of the Windows 2003 servers is our build server, with heavy HDD reads/writes. This machine was physical hardware before and was virtualized onto the ESX host. Is there any way to increase the HDD performance? Perhaps there are special Windows (guest) drivers? The files are stored on a RAID 6 base. The performance graph in the VMware Infrastructure Client shows reads of up to 650 KBps and writes of up to 4000 KBps. Thank you. Regards, Uwe

    Read the article
