Search Results

Search found 14643 results on 586 pages for 'performance comparison'.

  • Python Django sites on Apache+mod_wsgi with nginx proxy: highly fluctuating performance

    - by Halfgaar
    I have an Ubuntu 10.04 box running several dozen Python Django sites using mod_wsgi (embedded mode; the faster mode, if properly configured). Performance fluctuates wildly: sometimes fast, sometimes several seconds of delay. The smokeping graphs are all over the place. Recently I also added an nginx proxy for the static content, in the hope it would cure the fluctuating performance, but even though it significantly reduced the number of requests Apache has to process, it didn't help with the main problem. Clicking around on the websites while running htop shows that some requests are almost instant, whereas others make Apache consume 100% CPU for a few seconds. I really don't understand where this fluctuation comes from. I have configured mpm_worker for Apache like this:

        StartServers          1
        MinSpareThreads      50
        MaxSpareThreads      50
        ThreadLimit          64
        ThreadsPerChild      50
        MaxClients           50
        ServerLimit           1
        MaxRequestsPerChild   0
        MaxMemFree         2048

    That is 1 server process with 50 threads, max 50 clients. Munin and apache2ctl -t both show a consistent presence of workers; they are not destroyed and created all the time. Yet it behaves as if they were. This tells me that once a sub-interpreter is created, it should remain in memory, yet it seems sites have to reload all the time. I also have an nginx+gunicorn box, which performs quite well. I would really like to know why Apache is so random. This is a virtual host config:

        <VirtualHost *:81>
            ServerAdmin [email protected]
            ServerName example.com
            DocumentRoot /srv/http/site/bla
            Alias /static/ /srv/http/site/static
            Alias /media/ /srv/http/site/media
            WSGIScriptAlias / /srv/http/site/passenger_wsgi.py
            <Directory />
                AllowOverride None
            </Directory>
            <Directory /srv/http/site>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Versions: Ubuntu 10.04, Apache 2.2.14, mod_wsgi 2.8, nginx 0.7.65.

    Edit: I've put some code in the settings.py file of a site that writes the date to a tmp file whenever it's loaded. I can now see that the site is not randomly reloaded all the time, so Apache must be keeping it in memory. That's good, except it doesn't bring me closer to an answer...

    Edit: I just found an error that might also be related to this:

        File "/usr/lib/python2.6/subprocess.py", line 633, in __init__
            errread, errwrite)
        File "/usr/lib/python2.6/subprocess.py", line 1049, in _execute_child
            self.pid = os.fork()
        OSError: [Errno 12] Cannot allocate memory

    The server has 600 of 2000 MB free, which should be plenty. Is there a limit set on Apache or WSGI somewhere?
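
    A minimal sketch of the load-logging trick described in the first edit (the path and log format here are illustrative, not the asker's actual code): settings.py is imported once per interpreter (re)start, so appending a timestamp from it reveals how often a site actually reloads.

        import datetime

        # settings.py (excerpt): record every (re)load of this Django site
        with open('/tmp/django_site_loaded.log', 'a') as f:
            f.write('settings.py loaded at %s\n' % datetime.datetime.now())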

  • Can someone explain RAID-0 in plain English?

    - by Edward Tanguay
    I've heard and read about RAID over the years and understand it theoretically as a way to help e.g. server PCs reduce the chance of data loss, but now I am buying a new PC which I want to be as fast as possible, and I have learned that having two drives can considerably increase the perceived performance of your machine. In the question Recommendations for hard drive performance boost, the author says he is going to RAID-0 two 7200 RPM drives together. What does this mean in practical terms for me with Windows 7 installed? E.g. can I buy two drives, go into the device manager and "RAID-0 them together"? I am not a network administrator or a hardware guy, I'm just a developer who is going to have a computer store build me a super fast machine next week. I can read the Wikipedia page on RAID, but it is just way too many trees and not enough forest to help me build a faster PC:

        RAID-0: "Striped set without parity" or "Striping". Provides improved performance and additional storage but no redundancy or fault tolerance. Because there is no redundancy, this level is not actually a Redundant Array of Inexpensive Disks, i.e. not true RAID. However, because of the similarities to RAID (especially the need for a controller to distribute data across multiple disks), simple stripe sets are normally referred to as RAID 0. Any disk failure destroys the array, which has greater consequences with more disks in the array (at a minimum, catastrophic data loss is twice as severe compared to single drives without RAID). A single disk failure destroys the entire array because when data is written to a RAID 0 drive, the data is broken into fragments. The number of fragments is dictated by the number of disks in the array. The fragments are written to their respective disks simultaneously on the same sector. This allows smaller sections of the entire chunk of data to be read off the drive in parallel, increasing bandwidth. RAID 0 does not implement error checking, so any error is unrecoverable. More disks in the array means higher bandwidth, but greater risk of data loss.

    So in plain English, how can "RAID-0" help me build a faster Windows 7 PC that I am going to order next week?
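
    For intuition, here is a toy sketch of the striping idea itself. It is purely illustrative (RAID-0 is set up by a controller or the OS, not by application code): data is split into fixed-size stripes dealt round-robin across the disks, so large transfers hit both disks in parallel, and losing either disk destroys every striped file.

        # Toy model: deal fixed-size stripes round-robin across two "disks".
        STRIPE = 4  # bytes per stripe; real arrays use e.g. 64 KiB

        def stripe(data, n_disks=2):
            disks = [bytearray() for _ in range(n_disks)]
            for i in range(0, len(data), STRIPE):
                disks[(i // STRIPE) % n_disks] += data[i:i + STRIPE]
            return disks

        d0, d1 = stripe(b"ABCDEFGHIJKLMNOP")
        print(d0, d1)  # bytearray(b'ABCDIJKL') bytearray(b'EFGHMNOP')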

  • My server is slower than the average user's computer, should I still offload Access queries to SQL Server? [closed]

    - by andrewb
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases

    I have a database set up with MS Access 2007 front ends and an SQL Server 2005 back end. At the moment, all the queries are saved in the front end, as I've only recently moved to an SQL Server back end. I'm wondering how many of those queries I should save as stored procedures/views on SQL Server.

    About the system: the number of concurrent users is only a handful, though it could be as high as 25 at one time (very unlikely). The average computer has an Intel i3-2120 CPU running at 3.3 GHz, which gets a PassMark score of 3,987, whilst the server has an Intel Xeon E5335 running at 2.0 GHz, which gets a PassMark score of 2,637. Always an awkward situation when an i3 outperforms a Xeon... though the i3 is from Q1 2011 and the Xeon is from Q2 2009. There is potential for a server upgrade in the future, though it wouldn't come easy.

    I'm inclined to move the queries to the back end, as they are beginning to take noticeable time, and I figure that is a better way of doing things. I like the idea of throwing everything at the server, then pushing for a server upgrade. It makes more sense in my mind to upgrade one server rather than 30 PCs. Or am I being overzealous?

    Why my question isn't a duplicate: it seems my question has been misinterpreted and labelled a duplicate of quite a different question, one about testing and capacity planning. I'll try to explain how my question is very different from the linked one. The crux of my question is something like "Even though my server is technically slower, is it better to have it doing more of the queries?" There are two ways people could have answered this:

        1. I agree the server is going to be slower, but the extra benefits of such and such (like "the less Access, the better") mean you should move most queries to the server anyway (or: no, they don't outweigh the benefit, keep them in Access).
        2. Actually, the server will be faster, because of such and such.

    I'm hoping people out there can provide answers like these, and the question in the dupe link doesn't really provide either. OK, sure, I suppose I could do extensive performance testing to compare Access queries running on a local machine with SQL Server queries running on the server, but that sounds like a very hard task (particularly performance testing of Access) compared to someone giving some quick general guidance, and again, my question is looking for a lot more than immediate performance benefit.

  • Color Printer: Laser vs Inkjet

    - by Mike
    I am about to buy a color printer. I had a B&W LaserJet printer in the past, but since then I've used inkjets for decades. I need a printer that can deliver high quality like these photo inkjet printers, but I'm tired of paying for ink that costs $9,000 per gallon (1 gallon = 3.785 liters = 300 cartridges = $9,000). So I was thinking about buying a color laser printer, but I'm not sure these printers can deliver the same quality and are worth the investment in terms of toner consumption. I remember that my old LaserJet printer was able to print 1,100 pages per toner cartridge. The inkjet printers I have can print 500 pages per cartridge. Price by price, 2 inkjet cartridges cost more or less the same as one toner cartridge and in theory print almost the same. I am not sure if this holds for color lasers. What can you tell me about quality, toner cost and cost per page for laser vs. inkjet printers? Is it worth the change? (Keep in mind that an inkjet printer costs $50 and a laser printer costs $200.) Thanks.
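
    Plugging the question's own figures into a quick cost-per-page calculation (the cartridge prices are assumptions derived from those figures, for illustration only):

        # $9,000 / 300 cartridges = $30 per inkjet cartridge;
        # "2 inkjet cartridges ~ one toner cartridge" = $60 per toner cartridge.
        inkjet_cost, inkjet_pages = 30.0, 500
        toner_cost, toner_pages = 60.0, 1100

        print("inkjet: $%.3f per page" % (inkjet_cost / inkjet_pages))  # $0.060
        print("laser:  $%.3f per page" % (toner_cost / toner_pages))    # $0.055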

  • Difference between Resin and Resin Pro

    - by riteshmnayak
    I'm planning to deploy Resin for a project I am working on, but I cannot figure out which version of Resin I should use. The downloads page lists two products, Resin and Resin Pro, each with dev and stable snapshots. What is the difference between the Pro version and the plain version? Is Pro a paid version or something?

  • Oracle tuning: optimizer_index_cost_adj and optimizer_index_caching

    - by Darryl Braaten
    What is the correct way to set the optimizer_index_cost_adj parameter for Oracle? As a developer I have observed huge performance improvements as this parameter is lowered: common queries are reduced from 2 seconds to 200 ms. There are lots of warnings on the net that lowering this value will cause dire issues with the database, but no detail is given on what will start going wrong. I am currently seeing only an upside: much improved application performance and no downside. I need to better understand the possible negative repercussions of adjusting these parameters.
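
    One low-risk way to probe for downsides is to change the parameters at session level only, rerun representative queries, and compare plans and timings before touching the instance-wide setting. A sketch using cx_Oracle (connection details are placeholders):

        import cx_Oracle

        # Session-scoped change: affects only this connection, so other
        # sessions keep the instance-wide defaults while you measure.
        conn = cx_Oracle.connect("user", "password", "dbhost/orcl")
        cur = conn.cursor()
        cur.execute("ALTER SESSION SET optimizer_index_cost_adj = 20")
        cur.execute("ALTER SESSION SET optimizer_index_caching = 90")
        # ...rerun the queries that dropped from 2 s to 200 ms and inspect
        # their execution plans before considering any ALTER SYSTEM change.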

  • FTP vs. SFTP vs. FTPS

    - by susmits
    We're setting up a web server at our workspace. In conjunction, we're planning to install an FTP server, however I'm stuck at what protocol to employ -- FTP, SFTP or FTPS. I googled around, trying to see what protocol offers what, coming across articles like this, but I can't make up my mind. Only simple, once-in-a-while file transfer is desired; however, security is a concern since the file server is intended to be accessible from the internet. What protocol is the most apt for my use, and why?
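
    For what it's worth, SFTP transfers are also easy to script once the server is up. A minimal paramiko sketch (hostname, username and paths are placeholders; key-based authentication is assumed):

        import paramiko

        client = paramiko.SSHClient()
        client.load_system_host_keys()   # trust hosts from ~/.ssh/known_hosts
        client.connect("files.example.com", username="deploy")
        sftp = client.open_sftp()
        sftp.put("report.pdf", "/srv/files/report.pdf")  # upload a local file
        sftp.close()
        client.close()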

  • Recommendations for a USB flash drive that is fast at writing small files

    - by Andrew Bainbridge
    I want a drive that can be used as my work drive, storing a Subversion repo and sandbox for a small project. I'd also like it to be able to store a DVD rip. At the moment I've got a Super Talent Pico-C 8 GB. It's fast at reading and writing DVD rips, but its performance on small files (i.e. less than 4 KB) is utterly terrible; we're talking floppy-disk speeds here. This Ars review measured a similar Super Talent drive and pretty much confirmed my measurements (take a look at the random write speeds on page 5). So I'm looking for an 8 GB or bigger drive that doesn't suck at reading and writing small files and still has acceptable performance for very large files.
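
    Spec sheets rarely quote small-file numbers, so a crude benchmark run against a candidate drive's mount point answers the question quickly. A sketch (the mount path is illustrative):

        import os, time

        target = "/media/usbdrive/bench"    # directory on the flash drive
        os.makedirs(target, exist_ok=True)
        start = time.time()
        for i in range(1000):
            with open(os.path.join(target, "f%04d" % i), "wb") as f:
                f.write(b"x" * 4096)        # one 4 KiB file
                f.flush()
                os.fsync(f.fileno())        # force it onto the device
        elapsed = time.time() - start
        print("1000 x 4 KiB files in %.1f s (%.0f files/s)" % (elapsed, 1000 / elapsed))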

  • Identify differences between MP3 files

    - by Thingomy
    I have 2 old, similar directory trees with MP3 files in them. I am happily using tools like diff and rsync to identify and merge the files that are only present on one side, or are identical; I'm left with a bunch of files that are bitwise different. Running diff over a pair of actually-different files (with the -a flag to force text analysis) produces incomprehensible gibberish. I have listened to files from both sides, and they both seem to play fine (but at nearly 10 minutes per song, listening to them twice each, I haven't done many). I suspect the differences are due to some player in the past "enhancing" my collection by messing about with ID3 tags, but I can't be certain. Even if I identify differences in ID3 tags, I would like to confirm that no cosmic ray or file-copy error has damaged any of the files. One method that occurs to me is finding the byte locations of the differences and ignoring all changes in the first ~10 KB of each file, but I don't know how to do this. I have on the order of a hundred or so files that differ across the directory tree. I found How to compare mp3, flac audio data in a file, ignoring header data (ID3 tag) etc.? -- but I can't run alldup, since I'm on Linux and it's Windows-only, and from the sounds of it, it would only partially solve my issues anyway.
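
    A sketch of the "ignore the tags, hash the audio" approach in Python. It strips a leading ID3v2 block and a trailing ID3v1 block before hashing; less common layouts (ID3v2 footers, APE tags) would need extra handling:

        import hashlib, sys

        def audio_md5(path):
            data = open(path, "rb").read()
            if data[:3] == b"ID3":            # ID3v2: 10-byte header,
                size = 0                      # bytes 6-9 hold a syncsafe size
                for byte in data[6:10]:
                    size = (size << 7) | (byte & 0x7F)
                data = data[10 + size:]
            if data[-128:-125] == b"TAG":     # ID3v1 sits in the last 128 bytes
                data = data[:-128]
            return hashlib.md5(data).hexdigest()

        a, b = sys.argv[1], sys.argv[2]
        print("same audio" if audio_md5(a) == audio_md5(b) else "audio differs")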

  • subst performance

    - by pihentagy
    Does substing a directory affect the performance of creating/reading/updating many small files on the substed volume? (I will use svn there.) If yes, how serious is the "penalty"?

  • Shopping for Fast USB Flash Drives

    - by Jim McKeeth
    I would like to pick up some really fast USB flash drives in the 16-64 GB range. When looking at drives, they just list their size, their form factor (key-chain hook, slider, etc.) and the fact that they are all Hi-Speed USB 2.0. It seems like I have heard that different drives have different performance and life expectancy. The sales guy tells me they all perform the same these days, but it wouldn't be the first time a sales guy had the wrong technical details. Our objective is to run Virtual PC images off of them, so good speed and resilience to rewrites is important.

  • Is there a "rigorous" method for choosing a database?

    - by Andrew Martin
    I'm not experienced with NoSQL, but one person on my team is calling for its use. I believe our data and its usage aren't optimal for a NoSQL implementation. However, my understanding is based on reading various threads on various websites. I'd like to get some stronger evidence as to who's correct. My question is therefore: "Is there a technique for estimating the performance and requirements of a certain database, that I could use to confirm or modify my intuitions?" Is there, for example, a good book for calculating the performance of equivalent MongoDB/MySQL schemas? Is the only really reliable option to build the whole thing and take metrics?
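
    If it does come down to "build it and take metrics", even a small throughput probe of a candidate schema is informative. A minimal sketch with pymongo (assumes a local mongod; the document shape is illustrative):

        import time
        import pymongo

        coll = pymongo.MongoClient("localhost", 27017).bench.events
        start = time.time()
        for i in range(10000):
            coll.insert_one({"user_id": i % 100, "action": "click", "ts": time.time()})
        print("%.0f inserts/s" % (10000 / (time.time() - start)))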

  • Exchange 2007 + mailbox role - performance counters

    - by Ankh2054
    I have two Exchange 2007 servers in my org: one with the mailbox role, and one with the client access and transport roles. I am trying to monitor the following performance counter on the mailbox-role server: MSExchange Database(Information Store)\Database Page Fault Stalls/sec. But I can't find the counter anywhere. I have checked the version of Exchange and it's 8.3.6. I looked on the other server in case I had it mixed up, but it's not there either. Can anyone shed some light?

  • Could it be sane to use Windows Server 2012 as a desktop?

    - by nCdy
    What about using it on the desktop? I've got a strong enough PC, with an Intel Core i7 and 8 GB RAM, so why not? I looked at the major differences compared to Windows 8 and found few; for example, the new file system - could it affect me? In my usual day I need development tools like Visual Studio, virtualization tools, and some games. So far I can't find anything that should stop me; everything I need seems to work there. Tell me why I must not do it, or whether it is sane to do.

  • vimdiff: Jump to next difference inside line?

    - by sleske
    vimdiff is very handy for comparing files. However, I often use it on files with long lines and relatively few differences inside the lines. vimdiff will correctly highlight differences inside a line (whole line pink, differing characters red). In these cases, it would be nice to be able to jump to the next difference inside the line. You can jump to the "next difference" (]c), but this will jump to the next line with a difference. Is there a way to go to the next different character inside the current line?

  • SQL Server 2008 - Performance impact of transactional replication?

    - by cxfx
    I'm planning to set up transactional replication for a 100 GB SQL Server 2008 database. I have the distributor and publisher on the same server, and am using push subscription. Should there be a performance impact on my publisher server when it creates the initial snapshot and synchronises it with a subscriber? From what I've tried so far on a staging server, it seems to slow right down. Is there a better way to create the initial snapshot without impacting my production publisher server?

  • Recommended Setup

    - by Chris Ryan
    I have been running into speed issues with my MSSQL database setup. Here is my scenario:

        - About 100M rows
        - Average: 1k updates per second
        - Data drives: RAID 10 SSD (MDF) -- active time: 0
        - Log drive: 1 SSD (LDF), simple recovery -- active time: 99.9, queue: 8

    I do not need a backup of the log, so it is set to simple recovery, but my bottleneck is still the log. I get high WRITELOG wait times, and thus it cannot update any faster. I can't do bulk updates/transactions; each update needs to be one at a time. Is my only option to increase the write performance of the log drive, e.g. by adding RAID drives? Any suggestions on increasing the performance?

  • SCSI vs SATA? Is SCSI "actually" better?

    - by earlz
    Well, I was talking with a guy about servers the other day. I was a bit shocked when I asked him if there was any significant difference between SCSI and SATA and why he always uses SCSI (note: I'm not sure if by SCSI he meant SAS). He told me that SCSI is always faster and that the drives are always more reliable. I mean, this seems like a bold statement. He told me something about how SCSI will always be faster than SATA because the OS sends the SCSI (controller?) a request to get a file and it will build the file inside of the SCSI controller, instead of searching all over the disk... which I do not understand how that would work, so I figure it is BS. SAS and SATA currently have equivalent data rate speeds. Is there any true backing for his reasoning that SCSI is always faster and more reliable than SATA?

  • Intel Pentium 4 vs. Faster Celeron

    - by Synetech inc.
    A few months ago my motherboard died, so I bought a used computer with a 2.4 GHz Celeron. My old system had a 1.7 GHz Pentium 4, so now I'm trying to decide which CPU to use. Obviously a P4 is preferable to a Celeron, but this Celeron is (significantly?) faster than the P4. I'm wondering if the faster Celeron might be better for certain tasks (i.e., stronger but dumber is better at some things than smarter but weaker). I tried Googling for reviews and comparisons with graphs to get a clear picture of which is better overall, but found nothing that helped. (I did manage to find one page that indicates (apparently by poll, not benchmark) that the Celeron is better.) So which CPU should I use? Does anyone know of some graphs I can use to compare the two?

  • Best way to compare (diff) a full directory structure?

    - by Adam Matan
    Hi,

    What's the best way to compare directory structures? I have a backup utility which uses rsync. I want to tell the exact differences (in terms of file sizes and last-changed dates) between the source and the backup. Something like:

        Local file                      Remote file                     Compare
        /home/udi/1.txt (date)(size)    /home/udi/1.txt (date)(size)    EQUAL
        /home/udi/2.txt (date)(size)    /home/udi/2.txt (date)(size)    DIFFERENT

    Of course, the tool can be ready-made or an idea for a Python script. Many thanks! Udi
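
    Along the lines of the "idea for a Python script" option, a minimal sketch that walks the local tree and compares size and modification time against the backup (paths and output format are illustrative):

        import os, sys

        def compare_trees(src, dst):
            for root, _, files in os.walk(src):
                for name in files:
                    s = os.path.join(root, name)
                    d = os.path.join(dst, os.path.relpath(s, src))
                    if not os.path.exists(d):
                        print("%-40s MISSING" % s)
                        continue
                    ss, ds = os.stat(s), os.stat(d)
                    # compare whole seconds; some filesystems drop sub-second mtimes
                    same = (ss.st_size == ds.st_size
                            and int(ss.st_mtime) == int(ds.st_mtime))
                    print("%-40s %s" % (s, "EQUAL" if same else "DIFFERENT"))

        compare_trees(sys.argv[1], sys.argv[2])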
