Search Results

Search found 4580 results on 184 pages for 'faster'.


  • Sysadmin 101: How can I figure out why my server crashes and monitor performance?

    - by bflora
    I have a Drupal-powered site that seems to have neverending performance problems. It was butt-slow about 5 months ago. I brought in some guys who installed nginx for anonymous visitors, ajaxified a few queries so they wouldn't fire during page load, and helped me find a few bottlenecks in the code. For about a month, the site was significantly faster, though not "fast" by any stretch of the word. Meanwhile, I'm now shelling out $400/month to Slicehost to host a site that gets fewer than 5,000 uniques a day. Yes, you read that right. Go Drupal.

    Recently the site started crashing again and is slow again. I can't afford to hire people to come in, study my code from top to bottom, and make changes that may or may not help anymore. And I can't afford to throw more hardware at the problem. So I need to figure out what the problem is myself. Questions:

    1. When Apache crashes, is it possible to find out what caused it to crash? There has to be a way, right? If so, how can I do this?

    2. Is there software I can use that will tell me which process caused my server to die? (e.g. "Apache crashed because someone visited page X" or "Apache crashed because you were importing too many RSS items from feed X.") There's got to be a way to learn this, right?

    3. What's a good, noob-friendly way to monitor my current Apache performance? My developer friends tell me to "just use top, dude," but top shows me a bunch of numbers without any context. I have no clue what qualifies as a good or bad number in top, or which processes are relevant and which aren't. Are there any noob-friendly server monitoring tools out there? Ideally, I could have a page that would give me a color-coded indicator of how Apache is performing and then show me a list of the processes or pages that are sucking right now. That way I could know when performance is bad, and then what's causing it to be so bad.

    4. Why does PHP memory matter? My app apparently has a 30 MB memory footprint. Will it run faster if I bring that number down?

    Thanks for any advice. I spent a year or so trying to boost my advertising income so I could hire a contractor to solve my performance woes. I didn't want to have to learn all this sysadmin voodoo. I'm now resigned to the fact that I might not have a choice.
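
    A minimal starting point for questions 1-3, assuming a Debian/Ubuntu-style layout (log locations and package names vary by distro):

        # Watch the Apache error log live; segfaults and fatal PHP errors show up here
        tail -f /var/log/apache2/error.log

        # Check whether the kernel's out-of-memory killer has been killing processes
        grep -i "out of memory\|killed process" /var/log/syslog

        # A friendlier alternative to plain top, with color-coded per-process CPU/RAM
        sudo apt-get install htop
        htop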

    Read the article

  • PHP: Array access short-hand?

    - by motionman95
    In JavaScript, after executing a function I can immediately get an element of the array returned by the function, like so:

        myFunc("birds")[0] // gets element zero of the array returned by myFunc()

    This is much easier and faster than doing this:

        $myArray = myFunc("birds");
        echo $myArray[0];

    Does PHP have a similar shorthand to JavaScript? I'm just curious. Thanks in advance!
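
    For what it's worth, PHP 5.4 added exactly this shorthand, called function array dereferencing. A minimal sketch, using a hypothetical myFunc() standing in for the one in the question:

        <?php
        // hypothetical stand-in for the myFunc() in the question
        function myFunc($kind) {
            return array($kind . '-one', $kind . '-two');
        }

        echo myFunc('birds')[0]; // prints "birds-one"; requires PHP >= 5.4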

    Read the article

  • Are jQuery's :first and :eq(0) selectors functionally equivalent?

    - by travis
    I'm not sure whether to use :first or :eq(0) in a selector. I'm pretty sure they'll always return the same object, but is one speedier than the other? I'm sure someone here must have benchmarked these selectors before, and I'm not really sure of the best way to test whether one is faster.

    Update: here's the bench I ran:

        /* start bench */
        for (var count = 0; count < 5; count++) {
            var i = 0, limit = 10000;
            var start, end;

            start = new Date();
            for (i = 0; i < limit; i++) {
                var $radeditor = $thisFrame.parents("div.RadEditor.Telerik:eq(0)");
            }
            end = new Date();
            alert("div.RadEditor.Telerik:eq(0) : " + (end - start));

            start = new Date();
            for (i = 0; i < limit; i++) {
                var $radeditor = $thisFrame.parents("div.RadEditor.Telerik:first");
            }
            end = new Date();
            alert("div.RadEditor.Telerik:first : " + (end - start));

            start = new Date();
            for (i = 0; i < limit; i++) {
                var radeditor = $thisFrame.parents("div.RadEditor.Telerik")[0];
            }
            end = new Date();
            alert("(div.RadEditor.Telerik)[0] : " + (end - start));

            start = new Date();
            for (i = 0; i < limit; i++) {
                var $radeditor = $($thisFrame.parents("div.RadEditor.Telerik")[0]);
            }
            end = new Date();
            alert("$((div.RadEditor.Telerik)[0]) : " + (end - start));
        }
        /* end bench */

    I assumed that the 3rd would be the fastest and the 4th would be the slowest, but here are the results I came up with (times in milliseconds):

        FF3:      :eq(0)   :first   [0]      $([0])
        trial1    5275     4360     4107     3910
        trial2    5175     5231     3916     4134
        trial3    5317     5589     4670     4350
        trial4    5754     4829     3988     4610
        trial5    4771     6019     4669     4803
        Average   5258.4   5205.6   4270     4361.4

        IE6:      :eq(0)   :first   [0]      $([0])
        trial1    13796    15733    12202    14014
        trial2    14186    13905    12749    11546
        trial3    12249    14281    13421    12109
        trial4    14984    15015    11718    13421
        trial5    16015    13187    11578    10984
        Average   14246    14424.2  12333.6  12414.8

    I was correct that just returning the first native DOM object ([0]) is the fastest, but I can't believe that wrapping that object in the jQuery function was faster than both :first and :eq(0)! Unless I'm doing it wrong.

    Read the article

  • speeding up website load using multiple servers/domains

    - by Mohammad
    The Yahoo! developer guide says "Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user's perspective". As an explanation, I read somewhere that browsers will load up to 5 things simultaneously from the same domain. Would a subdomain, for example cdn.example.com, be considered a new domain in the previous statement?

    Read the article

  • C# immutable object usage best practices? Should I be using them as much as possible?

    - by Daniel
    Say I have a simple object such as:

        class Something
        {
            public int SomeInt { get; set; }
        }

    I have read that immutable objects are faster and a better way to build business objects. If that is so, should I strive to make all my objects like this:

        class ImmutableSomething
        {
            private int m_someInt = 0;

            public int SomeInt { get { return m_someInt; } }

            public void ChangeSomeInt(int newValue)
            {
                m_someInt = newValue;
            }
        }

    What do you reckon?
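
    Note that the ImmutableSomething above is not actually immutable, since ChangeSomeInt still mutates state. A sketch of a genuinely immutable version, where "changing" the value produces a new instance (naming is illustrative):

        class ImmutableSomething
        {
            private readonly int m_someInt; // readonly: can only be assigned in the constructor

            public int SomeInt { get { return m_someInt; } }

            public ImmutableSomething(int someInt)
            {
                m_someInt = someInt;
            }

            // Instead of mutating this object, hand back a fresh instance.
            public ImmutableSomething WithSomeInt(int newValue)
            {
                return new ImmutableSomething(newValue);
            }
        }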

    Read the article

  • How can I quickly enumerate directories on Win32?

    - by BillyONeal
    Hello everyone :) I'm trying to speed up directory enumeration in C++, where I recurse into subdirectories. I currently have an app which spends 95% of its time in the FindFirstFile/FindNextFile APIs, and it takes several minutes to enumerate all the files on a given volume. I know it's possible to do this faster, because there is an app that does: Everything. It enumerates my entire drive in seconds. How might I accomplish something like this?
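
    For the record, Everything gets its speed by reading the NTFS master file table and USN change journal directly rather than walking the directory tree, which requires administrative rights. Short of that, FindFirstFileEx with FindExInfoBasic (skips 8.3 short-name lookups) and FIND_FIRST_EX_LARGE_FETCH (larger kernel buffers) trims per-file overhead; a sketch, assuming Windows 7 or later:

        #include <windows.h>
        #include <iostream>
        #include <string>

        // Recursively enumerate a directory using the lower-overhead FindFirstFileEx path.
        void Enumerate(const std::wstring &dir)
        {
            WIN32_FIND_DATAW ffd;
            HANDLE h = FindFirstFileExW((dir + L"\\*").c_str(),
                                        FindExInfoBasic,             // skip 8.3 short names
                                        &ffd,
                                        FindExSearchNameMatch,
                                        nullptr,
                                        FIND_FIRST_EX_LARGE_FETCH);  // bigger fetch buffers
            if (h == INVALID_HANDLE_VALUE) return;
            do {
                const std::wstring name = ffd.cFileName;
                if (name == L"." || name == L"..") continue;
                if (ffd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
                    Enumerate(dir + L"\\" + name);
                else
                    std::wcout << dir << L"\\" << name << L"\n";
            } while (FindNextFileW(h, &ffd));
            FindClose(h);
        }

        int main()
        {
            Enumerate(L"C:\\Users");
            return 0;
        }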

    Read the article

  • iPhone: Speeding up a search that's polling 17,000 Core Data objects

    - by randombits
    I have a class that conforms to UISearchDisplayDelegate and contains a UISearchBar. This view is responsible for letting the user search a store of about 17,000 objects that are currently managed by Core Data. Every time the user types a character, I create an instance of SearchOperation (a subclass of NSOperation) that queries Core Data to find results that might match the search. The code in the search controller looks something like:

        - (void)filterContentForSearchText:(NSString *)searchText scope:(NSString *)scope
        {
            // Update the filtered array based on the search text and scope in a secondary thread
            if ([searchText length] < 3) {
                [filteredList removeAllObjects]; // First clear the filtered array.
                [self setFilteredList:nil];
                [self.tableView reloadData];
                return;
            }

            NSDictionary *searchdict = [NSDictionary dictionaryWithObjectsAndKeys:scope, @"scope", searchText, @"searchText", nil];
            [aSearchQueue cancelAllOperations];
            SearchOperation *searchOp = [[SearchOperation alloc] initWithDelegate:self dataDict:searchdict];
            [aSearchQueue addOperation:searchOp];
        }

    And my search is rather straightforward. SearchOperation is a subclass of NSOperation, and I overrode its main method with the following code:

        - (void)main
        {
            if ([self isCancelled]) {
                return;
            }

            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

            NSEntityDescription *entity = [NSEntityDescription entityForName:@"MyEntity" inManagedObjectContext:managedObjectContext];
            NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
            [fetchRequest setEntity:entity];

            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"(someattr contains[cd] %@)", searchText];
            [fetchRequest setPredicate:predicate];

            NSError *error = nil;
            NSArray *fetchResults = [managedObjectContext executeFetchRequest:fetchRequest error:&error];
            [fetchRequest release];

            if (self.delegate != nil)
                [self.delegate didFinishSearching:fetchResults];

            [pool drain];
        }

    This code works, but it has several issues. It's slow: even though the search happens on a thread other than the UI thread, querying 17,000 objects is clearly not optimal. And if I'm not careful, crashes can happen; I set the max concurrent operations on my NSOperationQueue to 1 to avoid this. What else can I do to make this search faster? I think preloading all 17,000 objects into memory might be risky. There has to be a smarter way to conduct this search and get results back to the user faster.
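
    A couple of commonly suggested tweaks for this sort of type-ahead search, sketched against the fetch above (a sketch, not the poster's actual fix): cap how many rows each keystroke can pull back, and prefer a prefix match, which the store can generally satisfy far more cheaply than a case- and diacritic-insensitive "contains":

        // Only fetch enough rows to fill a screen of results.
        [fetchRequest setFetchLimit:50];

        // A prefix match is typically much cheaper than "contains[cd]",
        // which forces a scan over every row.
        NSPredicate *predicate = [NSPredicate predicateWithFormat:
            @"someattr BEGINSWITH[cd] %@", searchText];
        [fetchRequest setPredicate:predicate];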

    Read the article

  • Highlighting company name in Eclipse stack traces

    - by Robin Green
    Is there a way to make Eclipse highlight com.company (where com.company stands for your employer's package prefix) in stack traces? It would make it fractionally easier and faster to home in on which parts of the stack trace are ours and which are third-party code. I tried the logview plugin here, and while it does work, it makes the location information in the stack traces unclickable, which means I would waste more time than I would save.

    Read the article

  • How do I jump to a breakpoint within gdb?

    - by Chris Smullian
    Hey folks! I'm new to gdb and I tried out the following: I set a breakpoint, which actually worked fine, but is there any way to jump immediately to that breakpoint without using "next" or "step"? Using "next" or "step" takes really long to reach the final breakpoint, and I wasn't able to find a faster way. Any help is appreciated! Regards, Chris Smullian
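
    For reference, gdb runs straight to the next breakpoint on its own: start the program with run, and resume with continue (short form c) whenever it stops. A minimal sequence, with a made-up function name:

        # in a gdb session (or a gdb command file, where # starts a comment)
        break my_function    # stop whenever my_function is reached
        run                  # start the program; it runs full speed until a breakpoint is hit
        continue             # resume full-speed execution until the next breakpoint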

    Read the article

  • MySQL index building performance

    - by Christian
    I tried to build an index over two columns of a table with 30,000,000 rows. I canceled the process after ~60 hours, as it didn't seem to be getting anywhere. For some reason MySQL takes only 22 MB of RAM instead of using the available RAM fully. Is index building an operation that needs no RAM, or is there some way to tell MySQL to use more RAM so it is faster?
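
    For what it's worth, the buffers MySQL uses for index builds are configurable; a sketch of the usual my.cnf knobs (values are illustrative, size them to your machine's RAM):

        [mysqld]
        key_buffer_size         = 512M   # MyISAM: cache for index blocks
        myisam_sort_buffer_size = 512M   # MyISAM: buffer used when building indexes by sorting
        innodb_buffer_pool_size = 2G     # InnoDB: main data/index cache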

    Read the article

  • network windows programming problem

    - by Jayjitraj
    I want to write a program which will connect to a network for some number of seconds and then disconnect from it, and the switch should be fast enough that other applications don't feel that the machine is disconnected. Which layer should I program at? I know how to disconnect from and connect to the network, so any suggestion will be helpful. Thanks in advance.

    Read the article

  • Entity Framework 4.0: Why Would One Use the Code Generated EntityObjects Over POCO Objects?

    - by senfo
    Aside from faster development time (Visual Studio 2010 beta 2 has no T4 templates for building POCO entity objects, that I'm aware of), are there any advantages to using the traditional EntityObject entities that Entity Framework creates by default? If Microsoft delivers a T4 template for building POCO objects, I'm trying to figure out why anybody would want to use the traditional method.

    Read the article

  • local web application vs desktop application speed?

    - by Josh
    Which one would be faster: a local web-app GUI made with something like qooxdoo, or a desktop app? How much of a speed difference would be expected? I would prefer creating a web app, which could in the future be shared, over creating a desktop GUI tied to particular GUI toolkits.

    Read the article

  • B-trees, B+ trees difference

    - by dta
    In a B-tree you can store both keys and data in the internal and leaf nodes, but in a B+ tree you have to store the data in the leaf nodes only. Is there any advantage to doing the above in a B+ tree? Why not use B-trees instead of B+ trees everywhere, since intuitively they seem much faster? I mean, why do you need to replicate the keys (data) in a B+ tree?

    Read the article

  • Will these optimizations to my Ruby implementation of diff improve performance in a Rails app?

    - by grg-n-sox
    <tl;dr> In source version control diff patch generation, would it be worth it to use the optimizations listed at the very bottom of this writing (see <optimizations>) in my Ruby implementation of diff for making diff patches? </tl;dr>

    <introduction>

    I am programming something I have never done before, and there might already be tools out there that do the exact thing I am programming, but at this point I am having too much fun to care, so I am still going to do it from scratch even if there is a tool for this.

    Anyway, I am working on a Ruby on Rails app and need a certain feature. Basically I want each entry in a table of mine, let's say for example a table of video games, to have a stored chunk of text that represents a review or something of the sort for that table entry. However, I want this text to be both editable by any registered user and to keep track of different submissions in a version control system. The simplest solution I could think of is to keep track of the text body and the diff patch history of different versions of the text body as objects in Ruby, and then serialize it, preferably in human-readable form (so I'll most likely use YAML for this), for editing if needed due to corruption by a software bug or a mistake made by an admin doing some version editing.

    So at first I just dove head first into this feature, only to find that the problem of generating a diff patch is more difficult than I thought to do efficiently. So I did some research and came across some ideas, some of which I have implemented already and some of which I have not. It all pretty much revolves around the longest common subsequence problem, as you would already know if you have done anything with diff or diff-like features, and optimizing the function that solves it.

    Currently I have it so it truncates the compared versions of the text body from the beginning and end until non-matching lines are found. It then solves the problem using a comparison matrix, but instead of incrementing the value stored in a cell when it finds a matching line, like in most longest-common-subsequence algorithms I have seen examples of, I increment when I have a non-matching line, so as to calculate edit distance instead of longest common subsequence. (As far as I can tell, the two approaches are essentially two sides of the same coin, so either could be used to derive an answer; a small sketch of this comparison-matrix approach appears after the list below.) It then back-traces through the comparison matrix, notes when there was an incrementation and in which adjacent cell (west, northwest, or north) to determine that line's diff entry, and assumes all other lines to be unchanged.

    Normally I would leave it at that, but since this is going into a Rails environment and not just some stand-alone Ruby script, I started getting worried about needing to optimize, at least enough that a spammer who somehow knew how I implemented the version control system and knew my worst-case-scenario entry still wouldn't be able to hit the server that badly. After some searching and reading of research papers and articles on the internet, I've come across several ideas that seem decent, but all seem to have pros and cons, and I am having a hard time deciding how well the pros and cons balance out in this situation. So are the ones listed here worth it? I have listed them with their known pros and cons.
    </introduction> <optimizations>

    1. Chop the compared sequences into multiple chunks of subsequences by splitting where lines are unchanged, then truncate the unchanged lines at the beginning and end of each section, and solve the edit distance of each subsequence.
       Pro: Changes the time growth as the changed area gets bigger from quadratic to something closer to linear.
       Con: Figuring out where to split already seems to require solving edit distance, except now you don't care how it changed. It would be fine if this were solvable by a process closer to Hamming distance, but a single insertion throws that off.

    2. Use a cryptographic hash function to both convert all sequence elements into integers and ensure uniqueness, then solve the edit distance comparing the hash integers instead of the sequence elements themselves.
       Pro: Comparing two integers is faster than comparing two strings, so there is a slight performance gain on every comparison, which can be a lot overall.
       Con: Hashing every sequence element takes time and may end up costing more than the integer comparisons gain back. You could use the built-in hash function for a string, but that does not guarantee uniqueness.

    3. Use lazy evaluation to calculate only the three center-most diagonals of the comparison matrix, then calculate additional diagonals only as needed, and use the same approach to possibly remove the need, on some comparisons, to compare all three adjacent cells, as described here.
       Pro: Can turn an algorithm that always takes O(n * m) time into one where that is only the worst case; the best case becomes practically linear and the average lands somewhere between the two.
       Con: It is an algorithm I've only seen implemented in functional programming languages, and I am having a difficult time working out how to convert the description at the site linked to above into Ruby.

    4. Make a C module that does the hard work at the native level, with a Ruby wrapper so Ruby can make all the calls to it that it needs.
       Pro: I have to imagine that evaluating something like this in C could be a LOT faster.
       Con: I have no idea how Rails handles apps whose Ruby code has C extensions, and it hurts the portability of the app.

    5. (An optimization for after the edit distance is solved.) Store additional combined diffs alongside the ones produced by each version, forming a delta-tree data structure with the most recently made diff as the root node, so getting to any version takes worst-case O(log n) time instead of O(n).
       Pro: Would make going back to an old version a lot faster.
       Con: Every new commit gives the delta-tree a new root node, which costs time to reorganize the tree, for an operation that happens far less often than committing, not to mention the unlikelihood that it will be an old version.

    </optimizations>

    So are these things worth the effort?
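
    A minimal Ruby sketch of the comparison-matrix approach described in the introduction, computing line-based edit distance with classic dynamic programming (illustrative and unoptimized):

        # dp[i][j] = number of edits to turn old_lines[0...i] into new_lines[0...j]
        def edit_distance(old_lines, new_lines)
          n, m = old_lines.length, new_lines.length
          dp = Array.new(n + 1) { |i| Array.new(m + 1) { |j| i.zero? ? j : (j.zero? ? i : 0) } }
          (1..n).each do |i|
            (1..m).each do |j|
              dp[i][j] = if old_lines[i - 1] == new_lines[j - 1]
                dp[i - 1][j - 1]                    # matching line: no edit
              else
                1 + [dp[i - 1][j],                  # delete a line (north)
                     dp[i][j - 1],                  # insert a line (west)
                     dp[i - 1][j - 1]].min          # substitute a line (northwest)
              end
            end
          end
          dp[n][m]  # back-tracing this matrix yields the actual diff entries
        end

        edit_distance(%w[a b c d], %w[a x c d e])  # => 2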

    Read the article

  • Fast or asynchronous AS3 JPEG encoding

    - by Bart van Heukelom
    I'm currently using the JPGEncoder from the AS3 core lib to encode a bitmap to JPEG:

        var enc:JPGEncoder = new JPGEncoder(90);
        var jpg:ByteArray = enc.encode(bitmap);

    Because the bitmap is rather large (3000 x 2000) the encoding takes a long while (about 20 seconds), causing the application to seemingly freeze while encoding. To solve this, I need either an asynchronous encoder, so I can keep updating the screen (with a progress bar or something) while encoding, or an alternative encoder which is simply faster. Is either possible?
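
    If targeting Flash Player 11.3 / AIR 3.3 or later is an option, the runtime has a native BitmapData.encode() that is substantially faster than the ActionScript JPGEncoder; a sketch, assuming the bitmap variable above is the BitmapData being encoded (which is what JPGEncoder.encode() takes):

        import flash.display.BitmapData;
        import flash.display.JPEGEncoderOptions;
        import flash.geom.Rectangle;
        import flash.utils.ByteArray;

        // Native JPEG encoding inside the runtime (Flash Player 11.3+ / AIR 3.3+).
        var jpg:ByteArray = bitmap.encode(
            new Rectangle(0, 0, bitmap.width, bitmap.height),
            new JPEGEncoderOptions(90));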

    Read the article

  • Mod_wsgi versus fapws3 - Django

    - by RadiantHex
    Hi folks, is there a difference between using FAPWS3 and mod_wsgi when dealing with Django? FAPWS3 seems a lot faster at serving requests to Python scripts. I would like to know if I'm missing out on anything. :) Any ideas?

    Read the article

  • Any ideas on How to search a 2D array quickly?

    - by Tattat
    I have a 2D array, just like a matrix:

        {{1, 2, 4, 5, 3, 6},
         {8, 3, 4, 4, 5, 2},
         {8, 3, 4, 2, 6, 2},
         // code skips...
         ...
        }

    I want to get the positions of all the "4"s. Instead of searching the array one element at a time and returning each position, how can I search it faster / more efficiently? Thanks in advance.
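
    For a single ad-hoc query on an unsorted matrix, nothing beats scanning every element. But if the same matrix is searched repeatedly, one standard trick is to pay a one-time O(rows * cols) pass that builds a value-to-positions index, after which each lookup is O(1). A sketch in Java (the brace literal above suggests Java or C syntax; all names are illustrative):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class MatrixIndex {
            // Maps each value to the list of {row, col} positions where it occurs.
            private final Map<Integer, List<int[]>> positions = new HashMap<>();

            public MatrixIndex(int[][] matrix) {
                for (int r = 0; r < matrix.length; r++)
                    for (int c = 0; c < matrix[r].length; c++)
                        positions.computeIfAbsent(matrix[r][c], k -> new ArrayList<>())
                                 .add(new int[] { r, c });
            }

            // O(1) lookup after the one-time build pass.
            public List<int[]> find(int value) {
                return positions.getOrDefault(value, List.of());
            }

            public static void main(String[] args) {
                int[][] m = { { 1, 2, 4, 5, 3, 6 }, { 8, 3, 4, 4, 5, 2 }, { 8, 3, 4, 2, 6, 2 } };
                for (int[] pos : new MatrixIndex(m).find(4))
                    System.out.println("row " + pos[0] + ", col " + pos[1]);
            }
        }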

    Read the article
