Search Results

Search found 80052 results on 3203 pages for 'data load performance'.


  • Nginx / uWsgi / Django site can handle more traffic with rewrite URL

    - by Ludo
    Hi there. I'm running a Django app, using uWSGI behind Nginx. I've been doing some performance tuning and load testing with ApacheBench and have discovered something unexpected which I hope someone can explain. In my Nginx config I have a rewrite directive which catches lots of different URL permutations and then forwards them to the canonical URL I wish to use, e.g. it traps www.mysite.com/whatever and www.mysite.co.uk/whatever and forwards them all to http://mysite.com/whatever. If I load test against any of the URLs listed with a redirect (i.e. NOT the canonical URL which they are eventually forwarded to), the site can serve 15000 concurrent connections without breaking a sweat. If I load test against the canonical URL, which I would have expected the test above to end up hitting anyway, it can't handle nearly as much: it drops about 4000 of the 15000 requests and can only handle about 9000 reliably. These are the command lines I'm using to test:

        ab -c15000 -n15000 http://www.mysite.com/somepath/
        ab -c15000 -n15000 http://mysite.com/somepath/

    I've tried several variations and it makes no difference which order I run them in. This doesn't make sense to me - I could understand the requests involving a redirect handling fewer concurrent connections, but it's happening the other way round. Can anyone explain? I'd really prefer the canonical URL to be the one which can handle more traffic. My Nginx config is below. Thanks loads for any help!

        server {
            server_name www.somesite.com somesite.net www.somesite.net somesite.co.uk www.somesite.co.uk;
            rewrite ^(.*) http://somesite.com$1 permanent;
        }

        server {
            root /home/django/domains/somesite.com/live/somesite/;
            server_name somesite.com somesite-live.myserver.somesite.com;
            access_log /home/django/domains/somesite.com/live/log/nginx.log;
            location / {
                uwsgi_pass unix:////tmp/somesite-live.sock;
                include uwsgi_params;
            }
            location /media {
                try_files $uri $uri/ /index.html;
            }
            location /site_media {
                try_files $uri $uri/ /index.html;
            }
            location = /favicon.ico {
                empty_gif;
            }
        }
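
    One thing worth checking (a suggestion, not part of the original question): given the rewrite ... permanent directive above, every request to www.mysite.com is answered with a cheap 301 generated entirely by Nginx, whereas the canonical URL exercises uWSGI and Django, so the two ab runs measure very different code paths. A minimal Python sketch, standard library only, with the placeholder hostnames from the question, to confirm what each test is actually receiving:

        # compare_status.py - a rough sketch; hostnames and path are placeholders from the question
        import http.client

        def first_response(host, path="/somepath/"):
            """Issue a single GET without following redirects and report the status line."""
            conn = http.client.HTTPConnection(host, 80, timeout=10)
            conn.request("GET", path)
            resp = conn.getresponse()
            resp.read()  # drain the body so the connection closes cleanly
            conn.close()
            return resp.status, resp.reason, resp.getheader("Location")

        if __name__ == "__main__":
            for host in ("www.mysite.com", "mysite.com"):
                status, reason, location = first_response(host)
                # A 301 here means Nginx answered by itself; a 200 means uWSGI/Django did the work.
                print(f"{host}: {status} {reason}" + (f" -> {location}" if location else ""))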

    Read the article

  • How to reinstall Ubuntu keeping my data intact?

    - by Santosh Linkha
    I want to reinstall Ubuntu while keeping my data intact. I have a 160 GB hard drive (SATA or PATA, I don't know, but it's slim and made in China) with a 40 GB ext3 partition, a 4 GB swap partition, and three other partitions with a FAT32 file system. I have around 4 GB of free space on the partition where Linux is installed. I'd like to keep the data intact, especially the Downloads folder, the desktop, and /var/www; and I no longer have access to any other machines or external storage devices.
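
    Since /home and /var/www live on the ext3 partition the installer will reformat, one way to play it safe is to archive the important directories onto one of the FAT32 partitions first. A minimal Python sketch of that step (an illustration only; the mount point /media/fat1 and the user name are assumptions to adjust):

        # backup_before_reinstall.py - rough sketch; adjust the paths for your system.
        # Assumes one FAT32 partition is mounted at /media/fat1 and the user is "santosh".
        import os
        import tarfile

        SOURCES = ["/home/santosh/Downloads", "/home/santosh/Desktop", "/var/www"]
        DEST = "/media/fat1/pre-reinstall-backup.tar.gz"

        def backup(sources, dest):
            # tar keeps ownership and permissions inside the archive, which FAT32 itself cannot store.
            # Note: FAT32 cannot hold a file larger than 4 GB, so split the archive if it grows past that.
            with tarfile.open(dest, "w:gz") as tar:
                for path in sources:
                    if os.path.exists(path):
                        tar.add(path)
                    else:
                        print(f"skipping missing path: {path}")

        if __name__ == "__main__":
            backup(SOURCES, DEST)
            print(f"wrote {DEST} ({os.path.getsize(DEST) / 1e9:.2f} GB)")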

    Read the article

  • How many users are "many users"?

    - by kemp
    I need to find a solution for a website which is struggling under load. The site gets ~500 simultaneous connections during peak time and counts around 42k hits per day. It's a WordPress-based site bridged with a vBulletin forum, with a lot of content and a fairly complex structure which makes intensive use of the database. I have already implemented code-level full-page caching (without this the server just crashes) and configured all other caching directives, as well as combining CSS files and the like to limit HTTP requests as much as possible. I need to understand whether there is more that can be done via software, or whether the load is simply too much for the server to handle and it needs to be upgraded, because the server goes down occasionally during peak times. I can't access the server right now, but it's a dedicated CentOS machine (I think 4 GB RAM, can't say what CPU) running Apache/MySQL. So back to the main question: how can I know when the users are just too many?
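
    One way to turn "too many users" into a number is to compare the request rate seen at peak with the rate the server sustains in a benchmark such as ab or siege. A back-of-envelope sketch (not from the original post; the peak-hour share, requests per page view, and measured capacity are assumptions to replace with real figures):

        # capacity_check.py - rough arithmetic; every input is a guess to be replaced with real numbers
        hits_per_day = 42_000          # from the site's stats
        peak_hour_share = 0.15         # assume ~15% of a day's traffic lands in the busiest hour
        requests_per_hit = 10          # assume each page view triggers ~10 HTTP requests after caching

        peak_pages_per_sec = hits_per_day * peak_hour_share / 3600
        peak_requests_per_sec = peak_pages_per_sec * requests_per_hit

        measured_capacity_rps = 120    # requests/sec the server sustains in a benchmark before errors appear

        print(f"estimated peak load: {peak_requests_per_sec:.1f} req/s")
        print(f"measured capacity:   {measured_capacity_rps} req/s")
        print("headroom:", "OK" if measured_capacity_rps > 2 * peak_requests_per_sec
              else "too close - the users are 'too many' for this box")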

    Read the article

  • MySQL Linked Server and SQL Server 2008 Express Performance

    - by Jeffrey
    Hi all, I am currently trying to set up a MySQL linked server via SQL Server 2008 Express. I have tried two methods: creating a DSN using the MySQL 5.1 ODBC driver, and using the Cherry Software OLE DB driver. The method I prefer would be the ODBC driver, but both run horrendously slowly (one simple join takes about 5 minutes). Is there any way I can get better performance? We are trying to cross-query between multiple MySQL databases on different servers, and a linked server seems like the method that would work well for us. Any comments, suggestions, etc. would be greatly appreciated. Regards, Jeffrey
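
    A quick way to see whether the time goes into the ODBC driver itself or into the linked-server layer on top of it (a diagnostic sketch, not from the original question; the DSN name, credentials, query, and the pyodbc dependency are assumptions) is to run the same join straight through the DSN and time it:

        # time_dsn_join.py - rough sketch; DSN name, credentials and SQL are placeholders
        import time
        import pyodbc

        DSN = "MYSQL_LINKED"            # the same DSN the linked server uses
        QUERY = """
            SELECT a.id, b.name
            FROM orders AS a
            JOIN customers AS b ON b.id = a.customer_id
            LIMIT 100
        """

        conn = pyodbc.connect(f"DSN={DSN};UID=app_user;PWD=secret", autocommit=True)
        start = time.perf_counter()
        rows = conn.cursor().execute(QUERY).fetchall()
        elapsed = time.perf_counter() - start
        conn.close()

        # If this is fast, the driver is fine and the slowdown is in the linked-server layer,
        # which often copies whole remote tables locally before joining; if it is also slow,
        # the problem is the driver, the network, or the MySQL query itself.
        print(f"{len(rows)} rows in {elapsed:.2f}s")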

    Read the article

  • This Week on the Green Data Center Management Front

    Among the big news this week in green data center management: APC will demonstrate how to provide energy while addressing energy-efficiency legislation; Altruent Systems announced it has completed a new energy-efficient data center for one of its key clients; and Voonami is unveiling what it claims is the greenest data center in Utah.

    Read the article

  • How to programmatically bind to a Core Data model?

    - by Dave Gallagher
    Hello. I have a Core Data model, and was wondering if you know how to create a binding to an Entity programmatically? Normally you use bind:toObject:withKeyPath:options: to create a binding, but I'm having a little difficulty getting this to work with Core Data and couldn't find anything in Apple's docs about doing it programmatically. The Core Data model is simple: an Entity called Book, with an Attribute of Book called author (NSString). I have an object called BookController. It looks like so:

        @interface BookController : NSObject {
            NSString *anAuthor;
        }
        @property (nonatomic, retain) NSString *anAuthor;
        // @synthesize anAuthor; inside @implementation

    I'd like to bind anAuthor inside BookController to author inside a Book entity. This is how I'm attempting (wrongly) to do it; it partially works:

        // A custom class I made, providing an interface to the Core Data database
        CoreData *db = [[CoreData alloc] init];

        // Creating a Book entity, saving it
        [db addMocObject:@"Book"];
        [db saveMoc];

        // Fetching the Book entity we just created
        NSArray *books = [db fetchObjectsForEntity:@"Book" withPredicate:nil withSortDescriptors:nil];
        NSManagedObject *book = [books objectAtIndex:0];

        // Creating the binding
        BookController *bookController = [[BookController alloc] init];
        [bookController bind:@"anAuthor" toObject:book withKeyPath:@"author" options:nil];

        // Manipulating the binding
        [bookController setAnAuthor:@"Bill Gates"];

    Now, when updating from the perspective of bookController, things don't work quite right:

        // Testing the binding from the bookController's perspective
        [bookController setAnAuthor:@"Bill Gates"];

        // Prints: "bookController's anAuthor: Bill Gates"
        NSLog(@"bookController's anAuthor: %@", [bookController anAuthor]); // OK!

        // ERROR HERE - Prints: "Book's author: (null)"
        NSLog(@"Book's author: %@", [book valueForKey:@"author"]); // DOES NOT WORK! :(

    When updating from the perspective of the Book entity, things work fine:

        // Testing the binding from the Book's (Entity) perspective (this works perfectly)
        [book setValue:@"Steve Jobs" forKey:@"author"];

        // Prints: "bookController's anAuthor: Steve Jobs"
        NSLog(@"bookController's anAuthor: %@", [bookController anAuthor]); // OK!

        // Prints: "Book's author: Steve Jobs"
        NSLog(@"Book's author: %@", [book valueForKey:@"author"]); // OK!

    It appears that the binding is partially working. I can update it on the side of the Model and it propagates up to the Controller via KVO, but if I update it on the side of the Controller, it doesn't trickle down to the Model via KVC. Any idea what I'm doing wrong? Thanks so much for looking! :)

    Read the article

  • Determining distribution of NULL values

    - by AaronBertrand
    Today on the Twitter hash tag #sqlhelp, @leenux_tux asked: "How can I figure out the percentage of fields that don't have data?" After further clarification, it turns out he is after what proportion of each column's values are NULL. Some folks suggested using a data profiling task in SSIS. There may be some validity to that, but I'm still a fan of sticking to T-SQL when I can, so here is how I would approach it: create a #temp table or @table variable to store the results, then create a cursor that loops through all...(read more)
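
    The snippet cuts off before the T-SQL itself, but the idea it describes (walk the column list, count NULLs per column, store the percentages) can be sketched outside the article, for example from Python against SQL Server via pyodbc. Connection string, schema and table name below are placeholders, and this illustrates the approach rather than the author's own script:

        # null_profile.py - sketch of the "loop over columns, count NULLs" idea; placeholders throughout
        import pyodbc

        conn = pyodbc.connect("DSN=MY_SQLSERVER;Trusted_Connection=yes")
        cur = conn.cursor()
        schema, table = "dbo", "Customers"

        cur.execute(
            "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS "
            "WHERE TABLE_SCHEMA = ? AND TABLE_NAME = ?", schema, table)
        columns = [row.COLUMN_NAME for row in cur.fetchall()]

        for col in columns:
            # Dynamic SQL per column, like the cursor in the article; [brackets] guard odd column names.
            cur.execute(
                f"SELECT COUNT(*) AS total, SUM(CASE WHEN [{col}] IS NULL THEN 1 ELSE 0 END) AS nulls "
                f"FROM [{schema}].[{table}]")
            total, nulls = cur.fetchone()
            pct = 100.0 * (nulls or 0) / total if total else 0.0
            print(f"{col}: {pct:.1f}% NULL")

        conn.close()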

    Read the article

  • Octave: importing a large matrix in csv format

    - by Massagran
    I'm trying to import a matrix (about 80,000 rows) from a CSV file into Octave. The obvious solution seems to be something like:

        load("-ascii", "relative_directory/the_file.csv")

    or maybe renaming the file and trying:

        load("-ascii", "relative_directory/the_file.txt")

    Yet I keep getting the error:

        load: failed to read matrix from file "relative_directory/the_file.csv"

    (or .txt), without any more details. Any tips are appreciated.

    Read the article

  • Performance required to improve Windows Experience Index?

    - by Ian Boyd
    Is there a guide on the metrics required to obtain a certain Windows Experience Index? A Microsoft guy said in January 2009: "On the matter of transparency, it is indeed our plan to disclose in great detail how the scores are calculated, what the tests attempt to measure, why, and how they map to realistic scenarios and usage patterns." Has that amount of transparency happened? Is there a TechNet article somewhere?

    Suppose my score is limited by my Memory subscore of 5.9. A naive person would suggest: "Buy faster RAM." Which is wrong, of course. From the Windows help: "If your computer has a 64-bit central processing unit (CPU) and 4 gigabytes (GB) or less random access memory (RAM), then the Memory (RAM) subscore for your computer will have a maximum of 5.9." You can buy the fastest, overclocked, liquid-cooled, DDR5 RAM on the planet; you'll still have a maximum Memory subscore of 5.9. So in general the knee-jerk advice "buy better stuff" is not helpful. What I am looking for is the attributes required to achieve a certain score, or to move beyond a current limitation. The information I've been able to compile so far, chiefly from three Windows blog entries and an article:

    Memory subscore
    Score    Conditions
    =======  ================================
    1.0      < 256 MB
    2.0      < 500 MB
    2.9      <= 512 MB
    3.5      < 704 MB
    3.9      < 944 MB
    4.5      <= 1.5 GB
    5.9      < 4.0 GB - 64 MB on a 64-bit OS; Windows Vista highest score
    7.9      Windows 7 highest score

    Graphics subscore
    Score    Conditions
    =======  ======================
    1.0      doesn't support DX9
    1.9      doesn't support WDDM
    4.9      does not support Pixel Shader 3.0
    5.9      doesn't support DX10 or WDDM 1.1; Windows Vista highest score
    7.9      Windows 7 highest score

    Gaming graphics subscore
    Score    Result
    =======  =============================
    1.0      doesn't support D3D
    2.0      supports D3D9, DX9 and WDDM
    5.9      doesn't support DX10 or WDDM 1.1; Windows Vista highest score
    6.0-6.9  good framerates (e.g. 40-50 fps) at normal resolutions (e.g. 1280x1024)
    7.0-7.9  even higher framerates at even higher resolutions
    7.9      Windows 7 highest score

    Processor subscore
    Score    Conditions
    =======  ==========================================================================
    5.9      Windows Vista highest score
    6.0-6.9  many quad core processors will be able to score in the high 6 to low 7 ranges
    7.0+     many quad core processors will be able to score in the high 6 to low 7 ranges
    7.9      8-core systems will be able to approach 8.9; Windows 7 highest score

    Primary hard disk subscore (note)
    Score    Conditions
    =======  ========================================
    1.9      Limit for pathological drives that stop responding when pending writes
    2.0      Limit for pathological drives that stop responding when pending writes
    2.9      Limit for pathological drives that stop responding when pending writes
    3.0      Limit for pathological drives that stop responding when pending writes
    5.9      highest you're likely to see without an SSD; Windows Vista highest score
    7.9      Windows 7 highest score

    Bonus chatter: you can find your detailed WEI test results in C:\Windows\Performance\WinSAT\DataStore, e.g. 2011-11-06 01.00.19.482 Disk.Assessment (Recent).WinSAT.xml:

        <WinSAT>
            <WinSPR>
                <DiskScore>5.9</DiskScore>
            </WinSPR>
            <Metrics>
                <DiskMetrics>
                    <AvgThroughput units="MB/s" score="6.4" ioSize="65536" kind="Sequential Read">89.95188</AvgThroughput>
                    <AvgThroughput units="MB/s" score="4.0" ioSize="16384" kind="Random Read">1.58000</AvgThroughput>
                    <Responsiveness Reason="UnableToAssess" Kind="Cap">TRUE</Responsiveness>
                </DiskMetrics>
            </Metrics>
        </WinSAT>

    Pre-emptive snarky comment: "WEI is useless, it has no relation to reality." Fine; how do I increase my hard drive's random I/O throughput?

    Update - amount of memory limits the rating: some people don't believe Microsoft's statement that having less than 4 GB of RAM on a 64-bit edition of Windows limits the Memory subscore to 5.9. From xxx.Formal.Assessment (Recent).WinSAT.xml:

        <WinSPR>
            <LimitsApplied>
                <MemoryScore>
                    <LimitApplied Friendly="Physical memory available to the OS is less than 4.0GB-64MB on a 64-bit OS : limit mem score to 5.9" Relation="LT">4227858432</LimitApplied>
                </MemoryScore>
            </LimitsApplied>
        </WinSPR>

    References:
    Windows Vista Team Blog: Windows Experience Index: An In-Depth Look
    Understand and improve your computer's performance in Windows Vista
    Engineering Windows 7 Blog: Engineering the Windows 7 "Windows Experience Index"
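
    Since the detailed results live as XML under C:\Windows\Performance\WinSAT\DataStore, a small script can pull the subscores and any applied limits out of the most recent assessment. A rough Python sketch (the element names are taken from the snippets above; everything else is an assumption):

        # winsat_scores.py - rough sketch; parses the newest formal WinSAT assessment for its subscores
        import glob
        import os
        import xml.etree.ElementTree as ET

        DATASTORE = r"C:\Windows\Performance\WinSAT\DataStore"

        # The "Formal" assessments contain the WinSPR block with the per-component scores.
        candidates = glob.glob(os.path.join(DATASTORE, "*Formal*.xml"))
        latest = max(candidates, key=os.path.getmtime)
        root = ET.parse(latest).getroot()

        winspr = root.find(".//WinSPR")
        if winspr is not None:
            for child in winspr:
                # e.g. SystemScore, MemoryScore, CpuScore, GraphicsScore, DiskScore ...
                if child.text and child.text.strip():
                    print(f"{child.tag}: {child.text.strip()}")

        # Any caps that were applied (like the <4 GB memory limit above) appear as LimitApplied elements.
        for limit in root.findall(".//LimitApplied"):
            print("limit applied:", limit.get("Friendly"))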

    Read the article

  • Why better isolation level means better performance in MS SQL Server

    - by Oleg Zhylin
    When measuring the performance of my query I came across a dependency between isolation level and elapsed time that was surprising to me:

        READUNCOMMITTED - 409024
        READCOMMITTED   - 368021
        REPEATABLEREAD  - 358019
        SERIALIZABLE    - 348019

    The left column is the table hint, and the right column is the elapsed time in microseconds (sys.dm_exec_query_stats.total_elapsed_time). Why does a higher isolation level give better performance? This is a development machine and no concurrency whatsoever is happening. I would expect READUNCOMMITTED to be the fastest due to less locking overhead.
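
    A single number from sys.dm_exec_query_stats can be dominated by whichever run happened to pay the physical I/O, so it can help to repeat the query several times per hint on a warm cache and compare averages. A rough sketch of such a harness (a suggestion, not from the original post; the connection string, query, and the pyodbc dependency are assumptions):

        # hint_timing.py - rough sketch; connection string and query are placeholders
        import statistics
        import time
        import pyodbc

        HINTS = ["READUNCOMMITTED", "READCOMMITTED", "REPEATABLEREAD", "SERIALIZABLE"]
        QUERY = "SELECT COUNT(*) FROM dbo.BigTable WITH ({hint}) WHERE SomeColumn > 42"
        RUNS = 10

        conn = pyodbc.connect("DSN=DEV_SQLSERVER;Trusted_Connection=yes", autocommit=True)
        cur = conn.cursor()

        for hint in HINTS:
            sql = QUERY.format(hint=hint)
            cur.execute(sql).fetchall()            # warm-up run to populate the buffer pool
            samples = []
            for _ in range(RUNS):
                start = time.perf_counter()
                cur.execute(sql).fetchall()
                samples.append(time.perf_counter() - start)
            print(f"{hint:17s} mean {statistics.mean(samples)*1e6:9.0f} us  "
                  f"stdev {statistics.stdev(samples)*1e6:7.0f} us")

        conn.close()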

    Read the article

  • SQL Server Geography Data Type

    We are working on the migration to SQL Server 2008 and have geospatial data that we would like to move over as well. As part of our application we house information on locations across the globe. Which data type should we use?

    Read the article

  • Oracle acquires Pillar Data Systems

    - by Joerg Moellenkamp
    So far it was an investment of Larry Ellison's, but now it's part of Oracle: Oracle has acquired Pillar Data Systems. You will find more information in the press release. As I can already smell some of the comments: Pillar Data Systems is majority owned by Oracle CEO Larry Ellison. The evaluation and negotiation of the transaction was led by an independent committee of Oracle's Board of Directors. The transaction is structured as a 100% earn-out with no up-front payment.

    Read the article

  • Stairway to XML: Level 7 - Updating Data in an XML Instance

    To use the modify() method to update element and attribute values in either typed or untyped XML instances in an XML column, you need to provide the necessary keywords and define the XQuery and value expressions in your XML DML expression. Robert Sheldon explains how.
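
    As a flavour of what the article covers, a minimal sketch of a "replace value of" update issued from Python via pyodbc; the table, column, and XPath below are invented for illustration, while the XML DML syntax itself is SQL Server's:

        # xml_modify_example.py - rough sketch; table and column names are made up for illustration
        import pyodbc

        conn = pyodbc.connect("DSN=MY_SQLSERVER;Trusted_Connection=yes", autocommit=True)
        cur = conn.cursor()

        # modify() takes an XML DML expression; here "replace value of" rewrites one attribute.
        cur.execute("""
            UPDATE dbo.BookCatalog
            SET BookInfo.modify('
                replace value of (/book/@author)[1]
                with "Robert Sheldon"
            ')
            WHERE BookId = ?
        """, 42)

        print("rows updated:", cur.rowcount)
        conn.close()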

    Read the article

  • Oracle Data Warehouse Reference Architecture, from best practices

    - by Fekete Zoltán
    How should we build a data warehouse, and how should we connect it to our systems? What architecture gets us to the goal most reliably and with the least risk? The Oracle Data Warehouse Reference Architecture description answers these questions. The following document is available for download: Enabling Pervasive BI through a Practical Data Warehouse Reference Architecture

    Read the article

  • Does it make sense to resize a Hash Table down? And when?

    - by Nazgulled
    Hi, my hash table implementation has a function to resize the table when the load factor reaches about 70%. The table uses separate chaining for collisions. Does it make sense to resize the hash table down at any point, or should I just leave it as it is? In other words, if I increase the size (by roughly doubling it; actually I follow this: http://planetmath.org/encyclopedia/GoodHashTablePrimes.html) when the load reaches 70%, should I resize it down when the load drops to 30% or below?
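
    For illustration, a minimal separate-chaining table with the thresholds from the question (grow at 70%, shrink at 30%); the prime-sized steps from the linked list of good primes are simplified here to plain doubling and halving, so this is an assumption rather than the asker's exact scheme:

        # chained_hash.py - rough sketch of grow-at-70% / shrink-at-30% resizing with separate chaining
        class ChainedHashTable:
            MIN_BUCKETS = 8

            def __init__(self):
                self.buckets = [[] for _ in range(self.MIN_BUCKETS)]
                self.count = 0

            def _load(self):
                return self.count / len(self.buckets)

            def _resize(self, new_size):
                old_items = [(k, v) for bucket in self.buckets for (k, v) in bucket]
                self.buckets = [[] for _ in range(new_size)]
                self.count = 0
                for k, v in old_items:          # re-insert everything under the new bucket count
                    self.put(k, v)

            def put(self, key, value):
                bucket = self.buckets[hash(key) % len(self.buckets)]
                for i, (k, _) in enumerate(bucket):
                    if k == key:
                        bucket[i] = (key, value)
                        return
                bucket.append((key, value))
                self.count += 1
                if self._load() > 0.70:          # grow: keeps the chains short
                    self._resize(len(self.buckets) * 2)

            def remove(self, key):
                bucket = self.buckets[hash(key) % len(self.buckets)]
                for i, (k, _) in enumerate(bucket):
                    if k == key:
                        del bucket[i]
                        self.count -= 1
                        # shrink when the table is mostly empty; never below the minimum size
                        if len(self.buckets) > self.MIN_BUCKETS and self._load() < 0.30:
                            self._resize(len(self.buckets) // 2)
                        return True
                return False

            def get(self, key, default=None):
                for k, v in self.buckets[hash(key) % len(self.buckets)]:
                    if k == key:
                        return v
                return default

    Whether shrinking is worth it mostly depends on memory pressure: leaving the table large wastes buckets but avoids the cost of rehashing everything twice when the element count oscillates around a threshold, which is why the shrink threshold (30%) sits well below half of the grow threshold (70%).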

    Read the article

  • proxy.pac file performance optimization

    - by Tuinslak
    I reroute certain websites through a proxy with a proxy.pac file. It basically looks like this:

        if (shExpMatch(host, "www.youtube.com")) { return "PROXY proxy.domain.tld:8080; DIRECT" }
        if (shExpMatch(host, "youtube.com"))     { return "PROXY proxy.domain.tld:8080; DIRECT" }

    At the moment about 125 sites are rerouted using this method. However, I plan on adding quite a few more domains, and I'm guessing it will eventually be a list of 500-1000 domains. It's important not to reroute all traffic through the proxy. What's the best way to keep this file optimized, performance-wise? Thanks
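
    One common way to keep a large PAC file fast is to replace the long chain of shExpMatch() calls with a single object lookup on the host name, which the browser evaluates in roughly constant time regardless of how many domains are listed. A sketch of a small generator for such a file (a suggestion, not from the original post; the domain list, output path, and proxy address are placeholders):

        # make_pac.py - rough sketch: generate a proxy.pac that uses one object lookup instead of
        # hundreds of sequential shExpMatch() calls. Domains, proxy address and paths are placeholders.
        PROXY = "PROXY proxy.domain.tld:8080; DIRECT"
        DOMAINS = ["youtube.com", "www.youtube.com", "example.org"]  # in practice, read these from a file

        def build_pac(domains, proxy):
            entries = ",\n".join(f'        "{d}": 1' for d in sorted(set(domains)))
            return f"""var proxied = {{
        {entries}
            }};

        function FindProxyForURL(url, host) {{
            if (proxied[host] === 1) {{
                return "{proxy}";
            }}
            return "DIRECT";
        }}
        """

        if __name__ == "__main__":
            with open("proxy.pac", "w") as f:
                f.write(build_pac(DOMAINS, PROXY))
            print("wrote proxy.pac with", len(set(DOMAINS)), "hosts")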

    Read the article

  • Stairway to XML: Level 2 - The XML Data Type

    Robert Sheldon describes SQL Server's XML data type, and shows that configuring a variable, column, or parameter with the XML data type is as easy as configuring one of these objects with any other data type.

    Read the article

  • SQL SERVER - Configure Management Data Collection in Quick Steps - T-SQL Tuesday #005

    This article was written as a response to T-SQL Tuesday #005 - Reporting. The three most important components of any computer or server are the CPU, the memory, and the hard disk. This post talks about how to get more details on these three components using Management Data Collection. Management Data Collection generates the [...]

    Read the article

  • Very slow disk performance on Dell PowerEdge 2950 w/ PERC 6/i running RAID 10

    - by vocoder
    I recently set up a server running Ubuntu 10.04 LTS on a Dell PowerEdge 2950 - it has six 500 GB 7200 RPM SATA drives set up in a RAID 10 config. I am seeing extremely poor disk performance - the RAID array reports that all disks are normal, and according to MegaCLI the BBU looks fine. hdparm -tT /dev/sda reports:

        Timing cached reads:        90 MB in 2.05 seconds = 43.96 MB/sec
        Timing buffered disk reads: 24 MB in 3.11 seconds =  7.72 MB/sec

    So as you can see, it takes forever to do something as simple as an apt-get upgrade, or even to log into the server. How do I go about troubleshooting what is causing this? I upgraded the firmware on the PERC 6/i RAID controller to the latest version, but didn't see any improvement.

    Read the article

  • Lazy/deferred loading of a CollectionViewSource?

    - by Shimmy
    When you create a CollectionViewSource in the Resources section, is the Source you set loaded when the resources are initialized (i.e. when the Resources holder is initialized) or when the data is bound? Is there a XAML-only way to make a CollectionViewSource lazy-load, deferred-load, or explicit-load?

    Read the article

  • Stairway to XML: Level 4 - Querying XML Data

    You can extract a subset of data from an XML instance by using the query() method, and you can use the value() method to retrieve individual element and attribute values from an XML instance.
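
    To give a flavour of the two methods the level covers, a small sketch run from Python via pyodbc; the table, column, and XPath expressions are invented for illustration, while query() and value() themselves are the SQL Server methods named above:

        # xml_query_value.py - rough sketch; table and column names are made up for illustration
        import pyodbc

        conn = pyodbc.connect("DSN=MY_SQLSERVER;Trusted_Connection=yes")
        cur = conn.cursor()

        # query() returns an XML fragment; value() pulls a single scalar out of the instance.
        cur.execute("""
            SELECT
                BookInfo.query('/book/chapters/chapter')              AS ChaptersXml,
                BookInfo.value('(/book/@author)[1]', 'varchar(100)')  AS Author
            FROM dbo.BookCatalog
            WHERE BookId = ?
        """, 42)

        chapters_xml, author = cur.fetchone()
        print("author:", author)
        print("chapters fragment:", chapters_xml)
        conn.close()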

    Read the article
