Search Results

Search found 48586 results on 1944 pages for 'page performance'.

Page 22 of 1944

  • NAS Performance issues

    - by Markus
    I bought a Conceptronic CH3MNAS NAS and installed two Western Digital 1.5 TB Green drives, but I only get a write speed of about 6 MB/s over the LAN. The drives are configured as follows: RAID 0, EXT2. Is that a normal speed?

  • Vista startup performance

    - by PeterMmm
    After 2 years my Vista (32-bit) machine now boots quite slowly. The Event Viewer tells me two programs are coming up slowly: explorer.exe and svchost.exe. Fine. But what can I do so that these programs come up as quickly as before?

  • Webserver: Performance impact when storing session files on /dev/shm

    - by GetFree
    I have a website running on a typical setup: Linux, Apache, PHP, MySQL. What's not typical about it is that it gets tons of traffic (400,000+ visits a day), so efficiency is becoming more and more important to me. I'm constantly looking for things I could optimize and, right now, my attention is focused on PHP's session files. There are a huge number of session files constantly being read and created in the /tmp directory. So my question is: is it a good idea to store the session files in /dev/shm (tmpfs) in order to speed things up a bit?
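    For context, a minimal sketch of what that change can look like, assuming a dedicated tmpfs mount and a stock php.ini; the mount point, size and permissions below are illustrative choices, not values from the post:

        #!/bin/sh
        # Create a dedicated tmpfs mount for PHP session files (size is an assumption).
        mkdir -p /var/lib/php/sessions-tmpfs
        mount -t tmpfs -o size=512m,mode=1733 tmpfs /var/lib/php/sessions-tmpfs

        # Persist the mount across reboots.
        echo 'tmpfs /var/lib/php/sessions-tmpfs tmpfs size=512m,mode=1733 0 0' >> /etc/fstab

        # Then point PHP at it in php.ini and reload Apache:
        #   session.save_path = "/var/lib/php/sessions-tmpfs"
        apachectl graceful

    A dedicated mount, rather than writing into /dev/shm directly, keeps the session directory's size cap and sticky permissions separate from anything else that uses shared memory.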

  • Hard Drive Fundamentals And Verifying Disk Performance

    - by Agnel Kurian
    Over the past few months, my Windows XP machine has slowed down to a crawl. It takes about 10-15 minutes to go from power-up to reaching a responsive state. I have reason to believe this is a result of the hard disk slowing down. Questions: Do hard disks slow down as a result of mechanical wear and tear, or age? How do I check whether my disk has slowed down? Conversely, how can I verify that my disk is indeed running at the speed it is designed for? Could drivers be at fault here? Do hard disks come with drivers, or does Windows use a generic driver?

  • Methodologies for performance-testing a WAN link

    - by Chopper3
    We have a pair of new diversely-routed 1Gbps Ethernet links between locations about 200 miles apart. The 'client' is a new reasonably-powerful machine (HP DL380 G6, dual E56xx Xeons, 48GB DDR3, R1 pair of 300GB 10krpm SAS disks, W2K8R2-x64) and the 'server' is a decent enough machine too (HP BL460c G6, dual E55xx Xeons, 72GB, R1 pair of 146GB 10krpm SAS disks, dual-port Emulex 4Gbps FC HBA linked to dual Cisco MDS9509s then onto dedicated HP EVA 8400 with 128 x 450GB 15krpm FC disks, RHEL 5.3-x64). Using SFTP from the client we're only seeing about 40Kbps of throughput using large (2GB) files. We've performed server to 'other local server' tests and see around 500Mbps through the local switches (Cat 6509s), we're going to do the same on the client side but that's a day or so away. What other testing methods would you use to prove to the link providers that the problem is theirs?
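    One commonly used complement to the SFTP tests (not something the post itself describes) is to take disks and SSH out of the picture and measure the raw TCP path memory-to-memory with iperf; the hostname and options below are an illustrative sketch:

        # On the 'server' end of the link (hostname is a placeholder):
        iperf -s -w 4M

        # On the 'client' end: one stream, then several in parallel, 60 seconds each.
        iperf -c server.example.com -w 4M -t 60
        iperf -c server.example.com -w 4M -t 60 -P 8

    If a single stream is slow but eight parallel streams fill the pipe, the per-connection TCP window (the bandwidth-delay product over a ~200-mile path) is the likely limit rather than the circuit itself, which is useful evidence either way when talking to the provider.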

  • Solaris TCP/IP performance tuning

    - by Andy Faibishenko
    I am trying to tune a high message traffic system running on Solaris. The architecture is a large number (600) of clients which connect via TCP to a big Solaris server and then send/receive relatively small messages (.5 to 1K payload) at high rates. The goal is to minimize the latency of each message processed. I suspect that the TCP stack of the server is getting overwhelmed by all the traffic. What are some commands/metrics that I can use to confirm this, and in case this is true, what is the best way to alleviate this bottleneck? PS I posted this on StackOverflow originally. One person suggested snoop and dtrace. dtrace seems pretty general - are there any additional pointers on how to use it to diagnose TCP issues?
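    As a starting point, these are the sorts of Solaris counters that show whether the listen queue or the TCP stack is under pressure; the syntax below assumes Solaris 10, and the exact counter and tunable names can vary by release:

        #!/bin/sh
        # Listen-queue overflows show up as tcpListenDrop / tcpListenDropQ0.
        netstat -s -P tcp | egrep 'ListenDrop|RetransSegs'

        # Current listen-backlog tunables.
        ndd -get /dev/tcp tcp_conn_req_max_q
        ndd -get /dev/tcp tcp_conn_req_max_q0

        # How many of the ~600 client connections are actually established.
        netstat -an -P tcp | grep -c ESTABLISHED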

  • apache and ajp performance

    - by user12145
    I have an Apache server sitting in front of two Tomcat app servers (one on the same physical server, the other on a different one) that do time-consuming work (0.5 to 10 seconds per request). The Apache HTTP server is getting killed by an average of 1 to 2 concurrent requests per second. Both servers have about 2 GB of RAM. Is there a way to optimize Apache to handle the load? Any advice is welcome.

        BalancerMember ajp://localhost:8009/xxxxxx
        BalancerMember ajp://XXX.XX.XXX.XX:8009/xxxxxx

    I keep getting the following in the Apache 2.2 log:

        [Mon Dec 28 00:31:02 2009] [error] ajp_read_header: ajp_ilink_receive failed
        [Mon Dec 28 00:31:02 2009] [error] (120006)APR does not understand this error code: proxy: read response failed from 127.0.0.1:8009 (localhost)
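    One way to see whether it is the Apache worker pool, rather than the box, that saturates under those long-running requests is mod_status; the sketch below assumes ExtendedStatus On and an enabled /server-status location, and the URL is a placeholder:

        # Busy vs. idle workers while the slow requests are in flight.
        curl -s "http://localhost/server-status?auto" | egrep 'BusyWorkers|IdleWorkers'

        # Poll it every 5 seconds during load.
        watch -n 5 'curl -s "http://localhost/server-status?auto" | egrep "BusyWorkers|IdleWorkers"'

    If BusyWorkers sits at the configured ceiling while CPU stays low, raising the MPM worker limits (and the Tomcat AJP connector's thread pool) is the usual lever for slow, long-held requests.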

  • SamFS performance problem on file creation

    - by Gregor Longariva
    I have two SamFS filesystems (samfs1 and samfs2), both on the same 6130 and both with the same config/watermarks/timeouts etc. Creating a file on samfs2 works as it should; on samfs1 it does not. A simple loop shows that every now and then the file creation takes between 11 and 28 seconds:

        stan 12:32 [scratch]# while ( 1 )
        while? echo -
        while? time echo test > file
        while? time mv file file2
        while? echo +
        while? sleep 1
        while? end
        0.00u 0.00s 0:00.01 0.0%   0.00u 0.00s 0:00.00 0.0%   +
        0.00u 0.00s 0:00.00 0.0%   0.00u 0.00s 0:00.03 0.0%   +
        0.00u 0.00s 0:23.71 0.0%   0.00u 0.00s 0:00.14 0.0%   +
        0.00u 0.00s 0:00.18 0.0%   0.00u 0.00s 0:00.13 0.0%   +
        0.00u 0.00s 0:00.00 0.0%   0.00u 0.00s 0:00.05 0.0%   +
        0.00u 0.00s 0:00.00 0.0%   0.00u 0.00s 0:00.06 0.0%   +
        0.00u 0.00s 0:00.00 0.0%   0.00u 0.00s 0:00.05 0.0%   +
        0.00u 0.00s 0:00.00 0.0%   0.00u 0.00s 0:00.05 0.0%   +
        0.00u 0.00s 0:00.00 0.0%   0.00u 0.00s 0:00.05 0.0%   +
        0.00u 0.00s 0:00.00 0.0%   0.00u 0.00s 0:00.04 0.0%   +
        0.00u 0.00s 0:00.04 0.0%   0.00u 0.00s 0:00.05 0.0%   +
        0.00u 0.00s 0:00.00 0.0%   0.00u 0.00s 0:00.01 0.0%   +
        0.00u 0.00s 0:26.05 0.0%   0.00u 0.00s 0:00.50 0.0%   +
        0.00u 0.00s 0:00.00 0.0%   0.00u 0.00s 0:00.06 0.0%   +
        0.00u 0.00s 0:00.00 0.0%   0.00u 0.00s 0:00.12 0.0%   +

    (Each output line is one loop iteration: the first timing is the file creation, the second the mv.) Any idea where the problem could be?

  • poor performance when deleting many files

    - by choppy
    I've got two machines. The first is an IBM blade with 24 cores, 96 GB RAM and a single local hard drive of 278 GB divided into 4 partitions, formatted with GPT:
        1. c: - 40GB; 3GB free
        2. d: - 40GB; 37GB free
        3. e: - 198.32GB; 198.1GB free
        4. 100MB (EFI System Partition)
    The other is a pizza-box server with 4 cores, 8 GB RAM and a single local hard drive of 273 GB divided into 3 partitions, formatted with MBR:
        1. c: - 136.81GB; 20GB free
        2. d: - 88.74GB; 87.91GB free
        3. e: - 47.85GB; 46.91GB free
    I have two scripts: the first creates 20,000 files in one directory, each file 192 KB in size; the second deletes the folder (recursively) and prints how long it took to delete all the files. The problem is that on the first server (the blade) it takes about 2 minutes to delete the 20,000 files, while on the second (the pizza box) it takes about 4 seconds!? Both servers run a clean Windows Server 2008 R2 with no special applications running in the background. Any ideas what is going on?

  • Getting More Performance out of a MacBook Pro

    - by 5arx
    So I've got a mid-2009 MacBook Pro 13". Integrated GPU, so not a games machine, but fast enough for doing .NET development in VMs. I love the little thing and wanted to give it a Christmas present, so I thought I'd mod it up a bit and give it a boost. I'm thinking of swapping out the stock 5400 rpm HD for a faster hybrid drive (it has 4 GB of RAM and spins at 7200 rpm), but I was wondering if any of you had tried, or knew of, anything else I could change/upgrade/mod to squeeze more out of my laptop. Before you answer, though, please be aware that I'm not sure I can run to putting in the 8 GB of RAM Apple have suggested :-( Thanks in advance.

  • refresh windows network performance counters in command line

    - by michalv82
    I am testing a USB device connected to a Windows PC. When the device is connected, Windows has another network interface going through the device. I need to get the bytes transferred for that specific interface; basically I need the data shown in the Networking tab of Task Manager for my interface adapter. I found this question, which helped to get this info: ms windows network activity bytes send and receive in command line. However, I have a problem when I run multiple tests: each time I disconnect and reconnect the device there's another line for the interface, like below. In the Task Manager Networking tab I only see one record for my interface, but I don't know how to tell from the command line which is the latest and current instance (it's not like the first or last line is always the current interface; I noticed it's not consistent):

        wmic path Win32_PerfRawData_Tcpip_NetworkInterface Get Name,PacketsReceivedPerSec,PacketsSentPerSec,BytesReceivedPersec,BytesSentPersec

        BytesReceivedPersec  BytesSentPersec  Name                                          PacketsReceivedPersec  PacketsSentPersec
        422666370            6317989292       Intel[R] 82579LM Gigabit Network Connection  2715169                8109643
        49150                375973           My USB Device                                 432                    568
        0                    0                My USB Device _2                              0                      0
        0                    0                My USB Device _3                              0                      0
        0                    0                My USB Device _6                              0                      0
        0                    0                Local Area Connection* 9                      0                      0

  • monitoring TCP/IP performance on Solaris

    - by Andy Faibishenko
    I am trying to tune a high message traffic system running on Solaris. The architecture is a large number (600) of clients which connect via TCP to a big Solaris server and then send/receive relatively small messages (.5 to 1K payload) at high rates. The goal is to minimize the latency of each message processed. I suspect that the TCP stack of the server is getting overwhelmed by all the traffic. What are some commands/metrics that I can use to confirm this, and in case this is true, what is the best way to alleviate this bottleneck?

  • Joomla performance problems on AWS

    - by Bobby Jack
    I'm running a site on AWS with the following setup:
        - Single m1.small instance (web server)
        - Single RDS m1.small DB
        - Joomla 1.5
    Generally the site is performant, but it is fairly low-traffic - say around 50-100 visits/hour. However, at peak time we see about double that traffic, and during peak time, pretty much every day:
        - CPU usage on the web server slowly climbs to 100%
        - CPU usage on the RDS server climbs quite quickly to about 30%, from an average of about 15
        - Database connections shoot up to about 140, from a normal average of about 2 or 3
        - The site is then occasionally unreachable, certainly according to Pingdom monitoring
    Does anyone recognise this behaviour? Can you point me in the right direction to begin investigating? Of course, RDS makes it difficult to do things like slow-query logging, so I've started by regularly dumping the MySQL process list into a file to see if there's anything I can spot there, but it would be good to have something more concrete to investigate.
    UPDATE: At the least, can someone confirm that I'm right in saying that this level of traffic implies the problem must be a specific type of query taking way longer than it should to execute? That would happen if a table gets locked and many queries need to write to it, right? For this very reason, I've already changed the __session table type to InnoDB.
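    A minimal sketch of the process-list dumping described above, with a placeholder RDS endpoint and credentials and an assumed 10-second interval:

        #!/bin/sh
        HOST="mydb.xxxxxx.us-east-1.rds.amazonaws.com"   # placeholder endpoint
        USER="joomla"                                    # placeholder user
        PASS="secret"                                    # placeholder password
        LOG="/var/log/mysql-processlist.log"

        # Append a timestamped snapshot of the full process list every 10 seconds,
        # so long-running or locked queries during the peak window stand out.
        while true; do
            date >> "$LOG"
            mysql -h "$HOST" -u "$USER" -p"$PASS" -e "SHOW FULL PROCESSLIST" >> "$LOG" 2>&1
            sleep 10
        done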

  • SSD performance

    - by Tom
    I recently upgraded to a Kingston HyperX 120GB SSD, but when I run CrystalDiskMark my scores look really slow. My motherboard (a Gigabyte socket 775 board) does not have an option for AHCI in the BIOS, and I'm wondering if that's the issue. The scores were: Seq - read 233, write 176.8; 512K - read 224, write 175.8; 4K - read 25, write 80; 4K - read 23, write 102. This drive is rated at over 500 MB/s. Any help or input would be greatly appreciated.

  • performance monitoring

    - by Sunny
    I want to monitor CPU usage and disk read/write usage for a particular process, say ./myprocess. For monitoring CPU the top command seems to be a nice option, and for reads and writes iotop seems a handy one. For example, to monitor read/write every second I use the command iotop -tbod1 | grep "myprocess". My difficulty is that I only want to store three variables, namely read/sec, write/sec and CPU usage/sec. Could you help me with a script that combines the output of top and iotop so that those three variables are written to a log file? Thanks!
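    A rough sketch of such a script, assuming a Linux box with procps top and iotop installed (iotop generally needs root); the column positions quoted in the comments are assumptions about the default output layouts and may need adjusting for your versions:

        #!/bin/bash
        # Log per-second CPU%, disk read and disk write for one process.
        PROC=myprocess                  # placeholder process name
        LOG=./${PROC}-usage.log
        PID=$(pidof -s "$PROC")

        while sleep 1; do
            # In top's batch output, %CPU is column 9 of the process row.
            CPU=$(top -b -n 1 -p "$PID" | awk -v p="$PID" '$1 == p {print $9}')
            # In iotop's batch output (-k = KB/s), the read rate is column 4 and the
            # write rate column 6 of the row whose TID matches the pid.
            RW=$(iotop -b -n 1 -k -p "$PID" | awk -v p="$PID" '$1 == p {print $4, $6}')
            echo "$(date '+%F %T') cpu=${CPU}% read/write(KB/s)=${RW}" >> "$LOG"
        done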

  • Dynamically created controls and the ASP.NET page lifecycle

    - by Dirk
    I'm working on an ASP.NET project in which the vast majority of the forms are generated dynamically at run time (form definitions are stored in a DB for customizability). Therefore, I have to dynamically create and add my controls to the Page every time OnLoad fires, regardless of IsPostBack. This has been working just fine, and .NET takes care of managing ViewState for these controls.

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            RenderDynamicControls();
        }

        private void RenderDynamicControls()
        {
            //1. call service layer to retrieve form definition
            //2. create and add controls to page container
        }

    I have a new requirement: if a user clicks on a given button (this button is created at design time), the page should be re-rendered in a slightly different way. So in addition to the code that executes in OnLoad (i.e. RenderDynamicControls()), I have this code:

        protected void MyButton_Click(object sender, EventArgs e)
        {
            RenderDynamicControlsALittleDifferently();
        }

        private void RenderDynamicControlsALittleDifferently()
        {
            //1. clear all controls from the page container added in RenderDynamicControls()
            //2. call service layer to retrieve form definition
            //3. create and add controls to page container
        }

    My question is, is this really the only way to accomplish what I'm after? It seems beyond hacky to effectively render the form twice simply to respond to a button click. I gather from my research that this is simply how the page lifecycle works in ASP.NET: namely, that OnLoad must fire on every postback before child events are invoked. Still, it's worthwhile to check with the SO community before having to drink the kool-aid. On a related note, once I get this feature completed, I'm planning on throwing an UpdatePanel on the page to perform the page updates via Ajax. Any code/advice that would make that transition easier would be much appreciated. Thanks

  • Fixing predicated NSFetchedResultsController/NSFetchRequest performance with SQLite backend?

    - by Jaanus
    I have a series of NSFetchedResultsControllers powering some table views, and their performance on the device was abysmal, on the order of seconds. Since it all runs on the main thread, it's blocking my app at startup, which is not great. I investigated, and it turns out the predicate is the problem:

        NSPredicate *somePredicate = [NSPredicate predicateWithFormat:@"ANY somethings == %@", something];
        [fetchRequest setPredicate:somePredicate];

    I.e. the fetch entity, call it "things", has a many-to-many relation with the entity "something". This predicate is a filter that limits the results to only things that have a relation with a particular "something". When I removed the predicate for testing, fetch time (the initial performFetch: call) dropped (for some extreme cases) from 4 seconds to around 100 ms or less, which is acceptable. I am troubled by this, though, as it negates a lot of the benefit I was hoping to gain from Core Data and NSFRC, which otherwise seems like a powerful tool. So, my question is, how can I optimize this performance? Am I using the predicate wrong? Should I modify the model/schema somehow? What other ways are there to fix this? Is this kind of degraded performance to be expected? (There are on the order of hundreds of <1KB objects.)

    EDIT WITH DETAILS: Here's the code:

        [fetchRequest setFetchLimit:200];
        NSLog(@"before fetch");
        BOOL success = [frc performFetch:&error];
        if (!success) {
            NSLog(@"Fetch request error: %@", error);
        }
        NSLog(@"after fetch");

    Updated logs (previously, some application inefficiencies were degrading the performance here; these are the updated logs, which should be as close to optimal as possible in my current environment):

        2010-02-05 12:45:22.138 Special Ppl[429:207] before fetch
        2010-02-05 12:45:22.144 Special Ppl[429:207] CoreData: sql: SELECT DISTINCT 0, t0.Z_PK, t0.Z_OPT, <model fields> FROM ZTHING t0 LEFT OUTER JOIN Z_1THINGS t1 ON t0.Z_PK = t1.Z_2THINGS WHERE t1.Z_1SOMETHINGS = ? ORDER BY t0.ZID DESC LIMIT 200
        2010-02-05 12:45:22.663 Special Ppl[429:207] CoreData: annotation: sql connection fetch time: 0.5094s
        2010-02-05 12:45:22.668 Special Ppl[429:207] CoreData: annotation: total fetch execution time: 0.5240s for 198 rows.
        2010-02-05 12:45:22.706 Special Ppl[429:207] after fetch

    If I do the same fetch without the predicate (by commenting out the two lines at the beginning of the question):

        2010-02-05 12:44:10.398 Special Ppl[414:207] before fetch
        2010-02-05 12:44:10.405 Special Ppl[414:207] CoreData: sql: SELECT 0, t0.Z_PK, t0.Z_OPT, <model fields> FROM ZTHING t0 ORDER BY t0.ZID DESC LIMIT 200
        2010-02-05 12:44:10.426 Special Ppl[414:207] CoreData: annotation: sql connection fetch time: 0.0125s
        2010-02-05 12:44:10.431 Special Ppl[414:207] CoreData: annotation: total fetch execution time: 0.0262s for 200 rows.
        2010-02-05 12:44:10.457 Special Ppl[414:207] after fetch

    A 20-fold difference in times. 500 ms is not that great, and there does not seem to be a way to run it on a background thread or otherwise optimize it that I can think of. (Apart from going to a binary store, where this becomes a non-issue, so I might do that; binary store performance is consistently ~100 ms for the above 200-object predicated query.) (I nested another question here previously, which I have now moved away.)

  • one page filter results in new page in javascript

    - by Jake
    I have links set up on one page, and the relationship between the links is a parent-child relationship (for example, parent: All; children: Software, Hardware). These links of course lead the user to a new page that shows results from a populated table. Currently the links all go to essentially the same destination, with just a filter in the URL. The problem is that there is also a JavaScript filter on the new page that lets the user choose between All, Software, or Hardware. Basically, if the URL still says the user is on the Software page but they have just filtered the page to Hardware, that doesn't look good IMO. So what I was trying to do was make the links on the initial page all go to the exact same destination and still know, on the new page, which link was clicked, so I can run the JavaScript filter based on that. Is there a way to find that out from JavaScript? I guess I need a way to pass that value to the new page and retrieve it in JavaScript without showing it in the URL, so I can filter the table for the user based on that value.

  • Rails paginate array items one-by-one instead of page-by-page

    - by hnovick
    Hi guys, I have a group of assets; let's call them "practitioners". I'm displaying these practitioners in the header of a calendar interface. There are 7 columns in the calendar; 7 columns = 7 practitioners per view/page. Right now, if the first page shows you practitioners 1-7, when you go to the next page you will see practitioners 8-15, the next page 16-23, etc. I am wondering how to page the practitioners so that if the first page shows you practitioners 1-7, the next page will show you practitioners 2-8, then 3-9, and so on. I would greatly appreciate any help you can offer. Here is the Rails code I am working with. Best regards, Harris Novick

        # get the default sort order
        sort_order = RESOURCE_SORT_ORDER

        # if we've been given asset ids, start our list with them
        unless params[:asset_ids].blank?
          params[:asset_ids] = params[:asset_ids].values unless params[:asset_ids].is_a?(Array)
          sort_order = "#{params[:asset_ids].collect{|id| "service_provider_resources.id = #{id} DESC"}.join(",")}, #{sort_order}"
        end

        @asset_set = @provider.active_resources(:include => {:active_services => :latest_approved_version}).paginate(
          :per_page => RESOURCES_IN_DAY_VIEW,
          :page => params[:page],
          :order => sort_order
        )

  • [php] Firefox refreshes page 1, even after redirection to page 2

    - by Znarkus
    Hi! This is a quite weird and annoying problem, which is reproduced with the script below. Say we have two pages: script.php and script.php?second. Page 1 creates some database entries and redirects to page 2. On page 2, the user is presented with an editor for said entries. If page 1 for some reason crashes on the first try and prints some error message, a strange thing will happen. If we refresh page 1 (and this time it redirects fine), every consecutive refresh (of page 2) will actually refresh page 1 and again redirect to page 2. In the above example this would create new database entries on every refresh, which is the problem I want to circumvent by redirecting to page 2.

        <?php
        header('Content-type: text/plain');
        session_start();

        if (!isset($_GET['second'])) {
            $_SESSION['counter'] = isset($_SESSION['counter']) ? $_SESSION['counter'] + 1 : 1;

            /*$_SESSION['counter'] = 0;
            exit('asd');*/

            header("Location: {$_SERVER['PHP_SELF']}?second", true, 303);
            exit;
        }

        echo "Counter: {$_SESSION['counter']}";

    To try the complete script above, first run it with the commented code intact, then with the commented code enabled. I've tried 301, 302 and 303 redirects. Does someone know why this is happening?

  • How can I replicate Google Page Speed's lossless image compression as part of my workflow?

    - by Keefer
    I love that Google's Page Speed is able to losslessly compress a lot of my images, but I'd like to make that part of my workflow, prior to uploading a site and making it live. Is there anything I can run locally to give me the same lossless compression? I currently export images using Export for Web from Photoshop, and use a little application called PNGCrusher to reduce the file size of PNGs. I'd love to find a faster way, though, than saving out and replacing the individual images from Page Speed's results.
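    For reference, a small sketch of a local lossless pass over a site's image folder using the common open-source optimizers optipng and jpegtran (the directory path is a placeholder). It is not Page Speed's exact pipeline, just the same class of lossless recompression:

        #!/bin/sh
        # Losslessly recompress PNGs and JPEGs in place before upload.
        SITE_DIR="./public_html"   # placeholder path

        # PNG: optipng recompresses without changing pixel data.
        find "$SITE_DIR" -type f -iname '*.png' -exec optipng -o7 -quiet {} \;

        # JPEG: jpegtran strips metadata and optimizes Huffman tables losslessly,
        # writing back over the original only if it succeeds.
        find "$SITE_DIR" -type f -iname '*.jpg' | while read -r f; do
            jpegtran -copy none -optimize -outfile "$f.tmp" "$f" && mv "$f.tmp" "$f"
        done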

  • Can a site recover by itself after dropping google page rank for 404 errors?

    - by Jeff
    I recently redid a website and changed the directory/URL structure. I did some .htaccess redirects for the main landing pages; however, when reviewing Webmaster Tools I received 404 errors for the rest of the changed URLs and noticed that Google dropped my site from the #1 position to around the 5th page. I corrected all the 404s by providing redirects in the .htaccess, resubmitted the sitemap and tested the Google crawl bot. Will my page regain its rank by itself, or am I going to have to put some time into it like I originally did?
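    A quick way to sanity-check that the .htaccess rules are actually answering the old URLs with permanent redirects is to replay them with curl; old-urls.txt below is a placeholder file with one legacy URL per line, and anything other than a 301 to the new location is worth a second look:

        #!/bin/sh
        while read -r url; do
            # Print the HTTP status and where (if anywhere) the old URL redirects to.
            curl -s -o /dev/null -w "%{http_code} %{redirect_url} <- $url\n" "$url"
        done < old-urls.txt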

  • Is there a generally accepted maximum number of http requests for a web page load?

    - by MorganTiley
    I'm looking into optimizing my web application's client-side performance but cannot figure out a good goal for the number of HTTP requests per page. How does YSlow calculate the grade? This doesn't seem to be documented. Also, it seems many sites like linkedin.com and amazon.com get an F grade but the page still loads quite fast. How does it fail but still perform well? Gmail gets an A grade and it has 43 unprimed / 10 primed requests.
