Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.

  • How to store dynamically generated pages in HTML?

    - by Dharmik Bhandari
    I'm working on an ASP.NET MVC3 web application that is facing a scalability issue. To improve performance I want to store dynamically generated pages as HTML and serve them directly rather than querying the database for each page request. I'm sure this will dramatically increase performance. Can anyone share any hint / example / tutorial on how to do it? And what are the challenges? I would also like to know how others handle performance for large e-commerce sites with at least a thousand categories, 200k products and 200-500 concurrent visitors. What are the best approaches? Thanks in advance.
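
    One common approach in ASP.NET MVC is output caching, which keeps the rendered HTML in memory and serves repeat requests without re-running the controller or the database query. A minimal sketch, assuming a hypothetical CatalogController and data-access helper:

        using System.Web.Mvc;
        using System.Web.UI;

        public class CatalogController : Controller
        {
            // Cache the rendered HTML for 10 minutes, one cache entry per
            // category id, so repeat requests never touch the database.
            [OutputCache(Duration = 600, VaryByParam = "id",
                         Location = OutputCacheLocation.Server)]
            public ActionResult Category(int id)
            {
                var model = LoadCategoryFromDatabase(id); // hypothetical data-access call
                return View(model);
            }

            private object LoadCategoryFromDatabase(int id)
            {
                return new { Id = id }; // placeholder: real database query goes here
            }
        }

    For fully static pages the same idea can be pushed further (a reverse proxy, or writing rendered output to .html files), but output caching is usually the first thing to try and measure.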

    Read the article

  • Would using a MemoryMappedFile for IPC across AppDomains be faster than WCF/named pipes?

    - by Morten Mertner
    Context: I am loading and executing untrusted code in a separate AppDomain and am currently communicating between the two using WCF (with named pipes as the underlying transport). I am exchanging relatively simple object graphs through a reasonably coarse-grained API, but would like to use a more fine-grained API if it does not cost me performance-wise. I've noticed that .NET 4.0 adds a MemoryMappedFile class (which doesn't need a physical file, so it could be entirely memory based). What kind of performance gains could I expect to see (if any) by using this new class? I know that it would take some "infrastructure code" to get the request/response behavior of WCF, but for now I'm only interested in the performance difference.
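
    For reference, a minimal sketch of the .NET 4.0 API; the map name "IpcChannel" and the length-prefix framing are illustrative assumptions, and real request/response semantics would still need a synchronization primitive such as an EventWaitHandle:

        using System;
        using System.IO.MemoryMappedFiles;
        using System.Text;

        class SharedMemoryDemo
        {
            static void Main()
            {
                // A named, memory-backed mapping (no physical file). Another
                // AppDomain in the same process can open it with
                // MemoryMappedFile.OpenExisting("IpcChannel").
                using (var mmf = MemoryMappedFile.CreateNew("IpcChannel", 4096))
                using (var accessor = mmf.CreateViewAccessor())
                {
                    byte[] request = Encoding.UTF8.GetBytes("ping");
                    accessor.Write(0, request.Length);            // length prefix
                    accessor.WriteArray(4, request, 0, request.Length);

                    // The reader side does the reverse:
                    int len = accessor.ReadInt32(0);
                    var buffer = new byte[len];
                    accessor.ReadArray(4, buffer, 0, len);
                    Console.WriteLine(Encoding.UTF8.GetString(buffer)); // "ping"
                }
            }
        }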

    Read the article

  • Do bit operations cause programs to run slower?

    - by flashnik
    I'm dealing with a problem which needs to work with a lot of data. Currently its values are represented as an unsigned int. I know that the real values never exceed 1000.

    Questions:

    1. I can use unsigned short to store them. An upside is that it uses less storage space. Will performance suffer?
    2. If I decide to store the data as short, but all the calling functions use int, so conversions are needed whenever values are stored or extracted, will performance suffer? Will the loss in performance be dramatic?
    3. If I decide to not use short but instead pack 10 bits per value into an array of unsigned int (see the sketch after this list), what will happen in this case compared with the previous ones?
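
    On question 3, a sketch of what 10-bit packing looks like in C# (the shifts and masks are the same in C or C++); values can straddle two 32-bit words, which is where the extra work comes from:

        using System;

        static class TenBitPacking
        {
            // n values of 10 bits each fit in ceil(n * 10 / 32) uints.
            public static uint[] Allocate(int n)
            {
                return new uint[(n * 10 + 31) / 32];
            }

            public static void Set(uint[] data, int i, uint v)
            {
                int bit = i * 10;
                int word = bit >> 5;     // which uint the value starts in
                int offset = bit & 31;   // bit offset within that uint

                data[word] = (data[word] & ~(0x3FFu << offset))
                           | ((v & 0x3FFu) << offset);

                if (offset > 22)         // value straddles two uints
                {
                    uint hiMask = (1u << (offset - 22)) - 1; // spilled-over bits
                    data[word + 1] = (data[word + 1] & ~hiMask)
                                   | ((v & 0x3FFu) >> (32 - offset));
                }
            }

            public static uint Get(uint[] data, int i)
            {
                int bit = i * 10;
                int word = bit >> 5;
                int offset = bit & 31;

                uint v = data[word] >> offset;
                if (offset > 22)
                    v |= data[word + 1] << (32 - offset);
                return v & 0x3FFu;
            }
        }

    Whether the extra shifting pays for the roughly 3x density win depends on how memory-bound the workload is, so it is worth benchmarking all three variants.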

    Read the article

  • MS SQL 2005 - Understanding output of DBCC SHOWCONTIG

    - by user169743
    I'm seeing some slow performance on an MS SQL 2005 database. I've been doing some research regarding MS SQL performance, but I'm having difficulty fully understanding the output of SHOWCONTIG and would be very grateful if someone could have a look and offer some suggestions to improve performance.

        TABLE level scan performed.
        Pages Scanned................................: 19348
        Extents Scanned..............................: 2427
        Extent Switches..............................: 3829
        Avg. Pages per Extent........................: 8.0
        Scan Density [Best Count:Actual Count].......: 63.16% [2419:3830]
        Logical Scan Fragmentation ..................: 8.40%
        Extent Scan Fragmentation ...................: 35.15%
        Avg. Bytes Free per Page.....................: 938.1
        Avg. Page Density (full).....................: 88.41%

    Read the article

  • Common causes of slow performing jQuery and how to optimize the code?

    - by Polaris878
    Hello. This might be a bit of a vague or general question, but I figure it might serve as a good resource for other jQuery-ers. I'm interested in common causes of slow-running jQuery and how to optimize those cases. We have a good amount of jQuery/JavaScript performing actions on our page, and performance can really suffer with a large number of elements. What are some obvious performance pitfalls you know of with jQuery? What are some general optimizations a jQuery-er can do to squeeze every last bit of performance out of his/her scripts? One example: a developer may use a selector to access an element that is slower than some other way of reaching it. Thanks

    Read the article

  • C++0x optimizing compiler quality

    - by aaa
    Hello. I do some heavy number crunching, and for me floating-point performance is very important. I like the performance of the Intel compiler very much and am quite content with the quality of the assembly it produces. I am thinking at some point of trying C++0x, mainly for the sugar parts like auto, initializer lists, etc., but also lambdas; at this point I use those features in regular C++ by means of Boost. How good is the assembly code that C++0x compilers generate, specifically the Intel and GCC compilers? Do they produce SSE code? Is the performance comparable to regular C++? Are there any benchmarks? My Google search did not reveal much. Thank you.

    Read the article

  • Writing at the end of file

    - by user342534
    Hi, I'm working on a system that requires high file I/O performance (in C#). Basically, I'm filling up large files (~100MB) from the start of the file to the end. Every ~5 seconds I add ~5MB to the file (sequentially from the start), flushing the stream after every bulk write. Every few minutes I need to update a structure which I write at the end of the file (a kind of metadata). When flushing each of the bulk writes I have no performance issue. However, when updating the metadata at the end of the file I get really low performance. My guess is that when the file is created (which also has to happen extra fast), it doesn't really allocate the entire 100MB on disk, and when I flush the metadata it must allocate all the space up to the end of the file. Any idea how I can overcome this problem? Thanks a lot!
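
    That guess matches how NTFS behaves: the file system tracks a "valid data length", and the first write beyond it forces a zero-fill of everything in between. A sketch of one workaround, assuming Windows/NTFS and a process that holds SeManageVolumePrivilege (note that SetFileValidData skips the zero-fill, so stale disk contents become readable through the file; only use it where that is acceptable):

        using System.IO;
        using System.Runtime.InteropServices;
        using Microsoft.Win32.SafeHandles;

        static class Preallocate
        {
            // Advances the valid data length without zero-filling. Requires
            // SeManageVolumePrivilege (it may also need enabling in the
            // process token via AdjustTokenPrivileges).
            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool SetFileValidData(SafeFileHandle handle,
                                                long validDataLength);

            public static FileStream Create(string path, long size)
            {
                var fs = new FileStream(path, FileMode.Create, FileAccess.ReadWrite);
                fs.SetLength(size);  // reserve the space up front
                if (!SetFileValidData(fs.SafeFileHandle, size))
                {
                    // Fall back silently: the first write past the valid data
                    // length will then pay the zero-fill cost once.
                }
                return fs;
            }
        }

    With the file preallocated this way, the periodic metadata write at the end of the file no longer triggers an extend-and-zero-fill.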

    Read the article

  • Why would SQL be very slow when doing updates?

    - by ooo
    Updates to a few tables have suddenly become 10 times slower than they used to be. What are some good recommendations for determining the root cause and optimizing? Could indexes on certain columns be making the updates slow? Any other recommendations? More important than guesses would be help with the process of identifying the root cause, or metrics around performance. Is there anything in Fluent NHibernate that you can use to help identify the root cause of performance issues?

    Read the article

  • USB 3 vs. eSATA

    - by Robert Nickens
    Will the full-speed advantage of the upcoming USB 3.0 be negated by the fact that most hard drives being mass-produced are SATA 3 Gbit/s? If so, what would you suggest a person do? Should one go with eSATA or 1394 for external HDs for performance reasons? Why spend the money on USB 3.0 next year, even if the prices come down quickly, given that SATA 6 Gbit/s is not here yet and may be a while?

    Read the article

  • "Task Manager" addon for Firefox?

    - by eidylon
    Hello all... I'm wondering if there is an addon for Firefox that would basically replicate the performance monitoring of Task Manager in Windows - seeing memory and cpu used - but for all the tabs in your current Firefox session. I want to be able to see which tabs are taking up the most memory or hitting hardest on the CPU. Thanks in advance!

    Read the article

  • ASPNET WMI class not available

    - by Nexus
    I need to extract the ASPNET\Requests Queued performance counter from some IIS servers via WMI. The WMI class for this sort of thing appears to be Win32_PerfFormattedData_ASPNET_ASPNET. I've queried all available classes in root\cimv2 on my Win 2003/IIS6 servers, and it's not listed. It is, however, available on an unrelated Win2008/IIS7 box (which is interesting but doesn't really help me much). What gives? Why is this WMI class not available on my Windows 2003 servers?
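
    On Server 2003 the Win32_PerfFormattedData_* classes only appear after WMI's performance adapter has synced the installed counter libraries into the repository, so one common fix worth trying is running "wmiadap.exe /f" (and checking that the "WMI Performance Adapter" service is allowed to run), then re-querying. Once the class exists, a minimal C# sketch of the query; the server name MYSERVER is a placeholder, and the property name RequestsQueued should be verified locally:

        using System;
        using System.Management;  // reference System.Management.dll

        class QueuedRequests
        {
            static void Main()
            {
                var searcher = new ManagementObjectSearcher(
                    @"\\MYSERVER\root\cimv2",  // placeholder server name
                    "SELECT RequestsQueued FROM Win32_PerfFormattedData_ASPNET_ASPNET");

                foreach (ManagementObject mo in searcher.Get())
                    Console.WriteLine("Requests queued: {0}", mo["RequestsQueued"]);
            }
        }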

    Read the article

  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency, but occasionally (maybe every minute or two) we get one or two huge-latency writes (I have seen 10 seconds and above) without any real explanation.

    Platform: Win2003 Server with NTFS. Monitoring: Sysinternals Process Monitor (trace linked below) and our own application logging.

    We have tried multiple things to solve this, gleaned from a few websites, e.g.:

    - Making the first part of file names unique to aid 8.3 name generation
    - Writing files to multiple directories
    - Changing Intel Disk Write Caching
    - Windows File/Printer Sharing memory settings: Minimize memory used / Balance / Maximize data throughput for file sharing / Maximize data throughput for network applications
    - System > Advanced > Performance > Advanced
    - NtfsDisableLastAccessUpdate: "fsutil behavior set disablelastaccess 1"
    - Disabling 8.3 name generation: "fsutil behavior set disable8dot3 1" + restart
    - Enabling a large file system cache
    - Disabling paging of the kernel code
    - IO Page Lock Limit
    - Turning the Indexing Service off (and on)

    But nothing seems to make much difference. There is a whole host of things we haven't tried yet, but we wondered if anyone had come across the same problem, a reason, and a solution (programmatic or not)?

    We can reproduce the problem using IOMeter and a simple setup:

    1. Start IOMeter and remove all but the first worker thread in 'Topology' using the disconnect button.
    2. Select the worker thread, put a cross in the box next to the disk you want to use in the Disk Targets tab, and put '2000000' in Maximum Disk Size (NOTE: you must have at least 1GB free space; sector size is 512 bytes).
    3. Create a new access specification and add it to the worker thread: Transfer Request Size = 22kB, 100% Sequential, Percent of Access Spec = 100%, Percent Read/Write = 100% Write.
    4. Change Results Display Update Frequency to 5 seconds, Test Setup Run Time to 20 seconds, and both 'Number of Workers to Spawn Automatically' settings to zero.
    5. Select the worker thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings.
    6. Hit the 'Go' button (green flag) and monitor the Results tab. The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine.

    Our machine isn't exactly slow (Xeon 8-core rack server with 4GB and onboard RAID), and I'd be interested to see what other people get. We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files) and we are going to try a few things around this principle. But it is also related to disk performance, since we don't see it on a RAMDisk and it's not as severe on a RAID10 array. Any help is much appreciated.

    Richard

    ProcMon trace: ProcMon Result
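
    As a code-level alternative to the IOMeter repro, here is a rough C# harness in the same spirit (22 kB sequential writes with the worst flush latency recorded); it is only a sketch, and the file name, write count and single-threadedness are arbitrary choices rather than a faithful copy of the 60-worker setup:

        using System;
        using System.Diagnostics;
        using System.IO;

        class WriteLatencyProbe
        {
            static void Main()
            {
                var buffer = new byte[22 * 1024];  // 22 kB writes, as in the repro
                long worstMs = 0;
                var sw = new Stopwatch();

                using (var fs = new FileStream("probe.dat", FileMode.Create,
                                               FileAccess.Write, FileShare.None))
                {
                    for (int i = 0; i < 10000; i++)  // ~220 MB total
                    {
                        sw.Reset();
                        sw.Start();
                        fs.Write(buffer, 0, buffer.Length);
                        fs.Flush();
                        sw.Stop();
                        worstMs = Math.Max(worstMs, sw.ElapsedMilliseconds);
                    }
                }
                Console.WriteLine("Worst write+flush latency: {0} ms", worstMs);
            }
        }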

    Read the article

  • Why does JMeter not work?

    - by Foolish
    I use JMeter to record requests through its proxy server and then run a performance test that replays them. The recorded requests contain a POST form. When I run the test cases, the POST doesn't work: it never creates a record in the website's database. Before this I used WebLOAD and everything was OK. What could be the problem, and what can I do about it?

    Read the article

  • Any reason not to disable Windows kernel paging?

    - by Nathaniel
    So I'm planning on eventually going to 2 GB (mobo max) RAM from 1 GB, and I want to disable kernel paging once I do, because I've heard it can give a performance boost (and that I believe). Any reason not to do it or any general thoughts about it? Edit: for clarification, this is not disabling general RAM paging. This is disabling having kernel memory paged (or at least parts of it, as Charlls noted).

    Read the article

  • How to correctly partition usb flash drive and which filesystem to choose considering wear leveling?

    - by random1
    Two problems.

    First one: how to partition the flash drive? I shouldn't need to do this, but I'm no longer sure my partition is properly aligned, since I was forced to delete and create a new partition table after GParted complained when I tried to format the drive from FAT to ext4. The naive answer would be "just use the defaults and everything will be alright". However, if you read the following links you'll know things are not that simple: https://lwn.net/Articles/428584/ and http://linux-howto-guide.blogspot.com/2009/10/increase-usb-flash-drive-write-speed.html

    Then there is also the issue of cylinders, heads and sectors. Currently I get this:

        $ sfdisk -l -uM /dev/sdd
        Disk /dev/sdd: 30147 cylinders, 64 heads, 32 sectors/track
        Warning: The partition table looks like it was made for C/H/S=*/255/63
        (instead of 30147/64/32). For this listing I'll assume that geometry.
        Units = mebibytes of 1048576 bytes, blocks of 1024 bytes, counting from 0

           Device Boot Start    End    MiB   #blocks  Id  System
        /dev/sdd1          1  30146  30146  30869504  83  Linux

        $ fdisk -l /dev/sdd
        Disk /dev/sdd: 31.6 GB, 31611420672 bytes
        255 heads, 63 sectors/track, 3843 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00010c28

    So from my current understanding I should align partitions at 4 MiB (currently the partition is aligned at 1 MiB). But I still don't know how to set the heads and sectors properly for my device.

    Second problem: the file system. From the benchmarks I saw, ext4 provides the best performance; however, there is the issue of wear leveling. How can I know whether my Transcend JetFlash 700's microcontroller provides wear leveling, or will I just be killing my drive faster? I've seen a lot of posts on the web saying "don't worry, the newer drives already take care of that", but I've never seen a single piece of evidence backing that up, and at some point people start mixing up SSD and USB flash drive technology. The safe option would be to go for ext2, but a series of tests that I performed showed horrible performance. These values are from a real scenario, not some synthetic test (42 files, 3,429,415,284 bytes copied to the flash drive):

        original FAT32:                  15.1 MiB/s
        ext4 after new partition table:  10.2 MiB/s
        ext2 after new partition table:   1.9 MiB/s

    Please read the links I posted above before answering. I would also be interested in answers backed up with references, because a lot is said and re-said but facts are lacking. Thank you for the help.

    Read the article

  • Android emulator performance on linux

    - by Rado
    I installed the Android SDK and Eclipse plugin on my laptop, but I was surprised to find out that the emulator eats up 100% of one of my CPU cores. I have exactly the same setup on a desktop machine that does not have this issue. Both computers are running Arch Linux and both were updated yesterday. Granted, the desktop has better hardware than the laptop, but I was expecting to get closer to 50% CPU usage than 100% on the laptop.

    Both Android virtual devices have the same specs:

        CPU: ARM
        Target: Android 2.3.3 - API Level 10
        Skin: WVGA800
        SD Card: 512M
        hw.lcd.density: 240
        vm.heapSize: 24
        hw.ramSize: 256

    The laptop host has an Intel Core 2 T7200 @ 2GHz CPU with 2Gb RAM. The desktop host has an AMD Phenom II X4 940 @ 3GHz CPU with 8Gb RAM. The Android emulator uses only one core. Here are the CPU usage results.

    Laptop:

        Cpu0 : 22.8%us, 76.5%sy, 0.0%ni, 0.3%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
        Cpu1 : 11.2%us, 2.4%sy, 0.0%ni, 86.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 2055484k total, 1860304k used, 195180k free, 5276k buffers
        Swap: 2000088k total, 106872k used, 1893216k free, 350780k cached

        PID   USER  PR  NI  VIRT  RES   SHR   S  %CPU  %MEM  TIME+    COMMAND
        2026  xyz   20  0   396m  207m  7192  R  100   10.3  4:11.58  emulator-arm

    Desktop:

        Cpu0 : 0.7%us, 0.0%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu1 : 1.3%us, 0.0%sy, 0.0%ni, 98.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu2 : 5.0%us, 1.3%sy, 0.0%ni, 91.9%id, 1.7%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu3 : 0.3%us, 0.3%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 7666324k total, 6506808k used, 1159516k free, 1650960k buffers
        Swap: 8988348k total, 0k used, 8988348k free, 2867300k cached

        PID   USER  PR  NI  VIRT  RES   SHR   S  %CPU  %MEM  TIME+    COMMAND
        2811  xyz   20  0   392m  220m  6276  S  8     2.9   0:33.58  emulator-arm

    Is there any way I can improve the emulator performance on the laptop?

    [UPDATE] I ran the emulator with the same settings on the same laptop under Win7, and after starting up it didn't use 100% of a CPU core, unlike under Linux. Also, when I run the emulator from a terminal in Linux I get this message, which I don't get on the desktop Linux host:

        Could not configure '/dev/hpet' to have a 1024Hz timer. This is not a
        fatal error, but for better emulation accuracy type:
        'echo 1024 > /proc/sys/dev/hpet/max-user-freq' as root.

    I'm not really familiar with rtc or hpet, but the max-user-freq setting doesn't seem to do anything; I still get the same warning.

    Read the article

  • Is there any way to know if your supposedly fully dedicated server is really virtually resource-shared?

    - by siran
    Hi, sometimes I feel my server is not responding as smoothly as I would expect (I have an Intel(R) Xeon(TM) CPU 2.80GHz quad core), given that, for example, the 'top' command reports a low load (< 0.5) and the CPUs are almost completely idle. I may have internet connectivity issues, so I don't really know if it's me or the server itself. Is there any kind of benchmarking script (or something analogous) I could run to see the actual performance of the server?

    Read the article

  • SAS Array with or without expander

    - by tegbains
    Is it better to use a SAS expander backplane for 12 drives via one SAS connection, or a SAS backplane with 3 SAS connections? This is in terms of performance rather than expansion. The array will be set up with ZFS on OpenSolaris via an LSI SAS controller, as an iSCSI target. The two products being considered are the SuperMicro SuperChassis 826A-R1200LPB and the SuperChassis 826E2-R800LPB.

    Read the article

  • Slow INFORMATION_SCHEMA query

    - by Thomas
    We have a .NET Windows application that runs the following query on login to get some information about the database:

        SELECT t.TABLE_NAME,
               ISNULL(pk_ccu.COLUMN_NAME,'') PK,
               ISNULL(fk_ccu.COLUMN_NAME,'') FK
        FROM INFORMATION_SCHEMA.TABLES t
        LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk_tc
               ON pk_tc.TABLE_NAME = t.TABLE_NAME
              AND pk_tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
        LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE pk_ccu
               ON pk_ccu.CONSTRAINT_NAME = pk_tc.CONSTRAINT_NAME
        LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS fk_tc
               ON fk_tc.TABLE_NAME = t.TABLE_NAME
              AND fk_tc.CONSTRAINT_TYPE = 'FOREIGN KEY'
        LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE fk_ccu
               ON fk_ccu.CONSTRAINT_NAME = fk_tc.CONSTRAINT_NAME

    Usually this runs in a couple of seconds, but on one server running SQL Server 2000 it takes over four minutes. I ran it with the execution plan enabled; the results are huge, but this part caught my eye: http://img35.imageshack.us/i/plank.png/

    I then updated the statistics on all of the tables mentioned in the execution plan:

        update statistics sysobjects
        update statistics syscolumns
        update statistics systypes
        update statistics master..spt_values
        update statistics sysreferences

    But that didn't help. The index tuning wizard doesn't help either, because it doesn't let me select system tables. Nothing else is running on this server, so nothing else could be slowing it down. What else can I do to diagnose or fix the problem on that server?

    Read the article

  • Speed of TrueCrypt whole disk encryption

    - by Gareth
    I'm getting a new development laptop soon, and I'm thinking of using TrueCrypt to encrypt the whole disk. What kind of performance drop can I expect? 10%? 30%? more? Also, assuming the workload has an effect, would compiling/using Visual Studio be affected much? I cannot seem to find anything like this on the web.

    Read the article

  • How to monitor the total number of SQL Server logins

    - by Shiraz Bhaiji
    We have a SQL Server 2005 instance that is the backend of a web application. The application is partly SharePoint and partly web services accessing the database via Entity Framework. In Performance Monitor I am seeing average SQL logins of ca. 60 per second (max 170), but average logouts of less than 1 per second. Where can I see the total number of SQL Server logins? Does anyone have an idea what could be causing this?
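
    The same counters can be read programmatically; a small sketch using the "SQLServer:General Statistics" category, which applies to a default instance (a named instance would use "MSSQL$InstanceName:General Statistics" instead). Comparing these with the User Connections counter shows whether connections are accumulating, and a login rate far above the logout rate is often a hint that connection pooling is disabled or connections are not being reused:

        using System;
        using System.Diagnostics;
        using System.Threading;

        class LoginMonitor
        {
            static void Main()
            {
                using (var logins = new PerformanceCounter(
                           "SQLServer:General Statistics", "Logins/sec"))
                using (var logouts = new PerformanceCounter(
                           "SQLServer:General Statistics", "Logouts/sec"))
                {
                    // Rate counters need two samples; the first read is always 0.
                    logins.NextValue();
                    logouts.NextValue();
                    Thread.Sleep(1000);
                    Console.WriteLine("Logins/sec: {0}, Logouts/sec: {1}",
                                      logins.NextValue(), logouts.NextValue());
                }
            }
        }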

    Read the article
