Search Results

Search found 8167 results on 327 pages for 'general'.

Page 269/327 | < Previous Page | 265 266 267 268 269 270 271 272 273 274 275 276  | Next Page >

  • How do you clear your mind after 8-10 hours per day of coding?

    - by Bryan
    Related question: Ways to prepare your mind before coding? I'm having a hard time taking my mind off of work projects in my personal time. It's not that I have a stressful job or tight deadlines; I love my job. I find that after spending the whole day writing code & trying to solve problems, I have an extremely hard time getting it out of my mind. I'm thinking about the current project/problem/task all the time. It's keeping me from relaxing, and in the long run it just builds stress. Personal projects help to some extent, but mostly just to distract me. I still have source code bouncing around my head 16 hours a day. I'm still relatively new to the workforce. Have you struggled with this, perhaps as a young developer? How did you overcome it? Can anyone offer general advice on winding down after a long programming session?

    Read the article

  • MySQL - What is wrong with this query or my database? Terrible performance.

    - by Moss
    SELECT * from `employees` a LEFT JOIN (SELECT phone1 p1, count(*) c FROM `employees` GROUP BY phone1) b ON a.phone1 = b.p1; I'm not sure if it is this query in particular that has the problem. I have been getting terrible performance in general with this database. The table in question has 120,000 rows. I have tried this particular query remotely and locally with the MyISAM and InnoDB engines, with different types of joins, and with and without an index on phone1. I can get this to complete in about 4 minutes on a 10,000 row table successfully but performance drops exponentially with larger tables. Remotely it will lose connection to the server and locally it brings my system to its knees and seems to go on forever. This query is only a smaller step I was trying to do when a larger query couldn't complete. Maybe I should explain the whole scenario. I have one big flat ugly table that lists a bunch of people and their contact info and the info of the companies they work for. I'm trying to normalize the database and intelligently determine which phone numbers apply to individual people and which apply to an office location. My reasoning is that if a phone number occurs multiple times and the number of occurrences equals the number of times that the street address it is attached to occurs then it must be an office number. So the first step is to count each phone number, grouping by phone number. Normally if you just use COUNT()...GROUP BY it will only list the first record it finds in that group, so I figured I have to join the full table to the count table where the phone number matches. This does work but as I said I can't successfully complete it on any table much larger than 10,000 rows. This seems pathetic and this doesn't seem like a crazy query to do. Is there a better way to achieve what I want or do I have to break my large table into 12 pieces or is there something wrong with the table or db?
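
    A minimal sketch of the kind of change that usually helps here (assuming the table really is called `employees` and phone1 is the column being counted; the index name is invented):

        -- An index on phone1 lets both the GROUP BY and the join walk the index
        -- instead of scanning all 120,000 rows repeatedly.
        CREATE INDEX idx_employees_phone1 ON employees (phone1);

        -- The same derived-table join as in the question.
        SELECT a.*, b.c
        FROM employees a
        LEFT JOIN (
            SELECT phone1 AS p1, COUNT(*) AS c
            FROM employees
            GROUP BY phone1
        ) b ON a.phone1 = b.p1;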

    Read the article

  • Removing a pattern from the beginning and end of a string in Ruby

    - by seaneshbaugh
    So I found myself needing to remove <br /> tags from the beginning and end of strings in a project I'm working on. I made a quick little method that does what I need it to do but I'm not convinced it's the best way to go about doing this sort of thing. I suspect there's probably a handy regular expression I can use to do it in only a couple of lines. Here's what I got: def remove_breaks(text) if text != nil and text != "" text.strip! index = text.rindex("<br />") while index != nil and index == text.length - 6 text = text[0, text.length - 6] text.strip! index = text.rindex("<br />") end text.strip! index = text.index("<br />") while index != nil and index == 0 text = text[6, text.length] text.strip! index = text.index("<br />") end end return text end Now the "<br />" could really be anything, and it'd probably be more useful to make a general use function that takes as an argument the string that needs to be stripped from the beginning and end. I'm open to any suggestions on how to make this cleaner because this just seems like it can be improved.
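
    One possible regex-based sketch of the same idea (a rough, untested-in-your-context version; it assumes the marker to strip is passed in as a plain string):

        # Repeatedly strips `pattern` (plus surrounding whitespace) from both ends.
        def strip_pattern(text, pattern = "<br />")
          return text if text.nil? || text.empty?
          escaped = Regexp.escape(pattern)
          text.gsub(/\A(\s*#{escaped}\s*)+|(\s*#{escaped}\s*)+\z/, "").strip
        end

        strip_pattern("<br /> <br />hello there<br />")  # => "hello there"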

    Read the article

  • RequestBuilder timeouts and browser connection limits per domain.

    - by WesleyJohnson
    This is specifically about GWT's RequestBuilder, but should apply to general XHR as well. My company is having me build a near realtime chat application over HTTP. Yes, I do realize there are better ways to do chat applications, but this is what they want. Eventually we want it working on the iPad/iPhone as well so flash is out, which rules out websockets and comet as well, I think? Anyway, I'm running into issues where I've set GWT's RequestBuilder timeout to 10 seconds and we get very random and sporadic timeouts. We've got error handling and emailing on the server side and never get any errors, which suggests the underlying XHR request that RequestBuilder is built on never gets to the server and times out after 10 seconds. We're using these requests to poll the server for new messages rather often, and also for sending new messages to the server, and also polling (less frequently) for other parts of the application. What I'm afraid of is that we're running into the browser's limit on concurrent connections to the same domain (2 for IE by default?). Now my question is - If I construct a RequestBuilder and call its send() method and the browser blocks it from sending until one of the 2 connections per domain is free, does the timeout still start while the request is being blocked or will it not start until the browser actually releases the underlying XHR? I hope that's clear, if not please let me know and I'll try to explain more.

    Read the article

  • how do you authenticate a user between two services, if they are both using a common third-party oauth service?

    - by urandom
    I'm currently experimenting with oauth logins on a website, using google oauth2. While I set that up without too many problems, I saw that there isn't any kind of permanent token that only google and the authorized service know for a given user. Also, from what I gathered, if I were to create a companion app on android, the preferred way is to go with AccountManager, which seems to handle giving oauth2 access tokens for google accounts. But if I authenticate myself from the android app using a google account, how do I now link that user to the same one in the web app? One way I think this could be done is if the user logs into the web app as well, so that the server receives a fresh access token, and the android and web ones are compared. But that seems like a huge hassle, and I haven't seen many other apps do that. Another is to use a refresh token on the server, but that would require extra permissions which might put off any potential visitors. So what is the general workflow for achieving this? Or am I thinking about it the wrong way?

    Read the article

  • Project Euler, Problem 10 Java solution not working

    - by Dennis S
    Hi, I'm trying to find the sum of the prime numbers < 2'000'000. This is my solution in java but I can't seem to get the correct answer. Please give some input on what could be wrong; general advice on the code is also appreciated. Printing 'sum' gives: 1308111344, which is incorrect. /* The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17. Find the sum of all the primes below two million. */ class Helper{ public void run(){ Integer sum = 0; for(int i = 2; i < 2000000; i++){ if(isPrime(i)) sum += i; } System.out.println(sum); } private boolean isPrime(int nr){ if(nr == 2) return true; else if(nr == 1) return false; if(nr % 2 == 0) return false; for(int i = 3; i < Math.sqrt(nr); i += 2){ if(nr % i == 0) return false; } return true; } } class Problem{ public static void main(String[] args){ Helper p = new Helper(); p.run(); } }
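
    For comparison, a hedged sketch of the same approach with the two most likely culprits changed: the running total kept in a long (the correct sum is far larger than Integer.MAX_VALUE, so an int/Integer accumulator overflows) and the trial-division bound made inclusive, so squares of primes such as 9 and 25 are not misclassified:

        class Problem10 {
            public static void main(String[] args) {
                long sum = 0; // long, because the answer does not fit in an int
                for (int i = 2; i < 2000000; i++) {
                    if (isPrime(i)) sum += i;
                }
                System.out.println(sum);
            }

            private static boolean isPrime(int nr) {
                if (nr < 2) return false;
                if (nr == 2) return true;
                if (nr % 2 == 0) return false;
                for (int i = 3; i <= Math.sqrt(nr); i += 2) { // inclusive bound
                    if (nr % i == 0) return false;
                }
                return true;
            }
        }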

    Read the article

  • What goes into the main function?

    - by Woltan
    I am looking for a best practice tip of what goes into the main function of a program using c++. Currently I think two approaches are possible. (Although the "margins" of those approaches can be arbitrarily close to each other) 1: Write a "Master"-class that receives the parameters passed to the main function and handles the complete program in that "Master"-class (Of course you also make use of other classes). Therefore the main function would be reduced to a minimum of lines. #include "MasterClass.h" int main(int args, char* argv[]) { MasterClass MC(args, argv); } 2: Write the "complete" program in the main function making use of user-defined objects of course! However there are also global functions involved and the main function can get somewhat large. I am looking for some general guidelines of how to write the main function of a program in c++. I came across this issue by trying to write some unit tests for the first approach, which is a little difficult since most of the methods are private. Thx in advance for any help, suggestion, link, ...
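
    If the unit-testing concern ends up deciding it, one hedged variation on approach 1 is to keep main trivial but give the master class a public run() that returns the exit code, so tests can drive it without going through main at all (the class name and run() body below are placeholders):

        #include <string>
        #include <utility>
        #include <vector>

        class Application {  // placeholder name for the "Master" class
        public:
            explicit Application(std::vector<std::string> args) : args_(std::move(args)) {}
            // Public entry point returning the exit code, callable from a unit test.
            int run() { return args_.empty() ? 1 : 0; }  // real program logic goes here
        private:
            std::vector<std::string> args_;
        };

        int main(int argc, char* argv[]) {
            Application app(std::vector<std::string>(argv, argv + argc));
            return app.run();
        }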

    Read the article

  • Ideal HTTP cache control headers for different types of resources

    - by chris_l
    I want to find a minimal set of headers that work with "all" caches and browsers (also when using HTTPS!) On my (GWT-based) web site, I'll have three kinds of resources: 1. Forever cacheable (public / equal for all users) These files don't ever change, and they get a filename based on the MD5 of their contents (this is GWT's approach). They should get cached as much as possible, even when using HTTPS (so I assume, I should set Cache-Control: public, especially for Firefox?) 2. Changing for every new version of the site (public / equal for all users) These files can be cached, but probably need to be revalidated every time. 3. Individual for each request (private / user specific) These resources (e.g. JSON responses) should never be cached unencrypted to disk under any circumstances. (Maybe I'll have a few specific requests that could be cached.) I have a general idea on which headers I would probably use for each type, but there's always something I could be missing.
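
    As a starting point only (exact values depend on your proxies and traffic), one common mapping of those three types onto headers looks roughly like this:

        # 1. Forever cacheable, content-hashed filenames
        Cache-Control: public, max-age=31536000

        # 2. Changes with each release, cacheable but revalidated
        Cache-Control: public, max-age=0, must-revalidate
        ETag: "<release-or-content-hash>"

        # 3. Private, user-specific responses (e.g. JSON)
        Cache-Control: private, no-store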

    Read the article

  • Table index design

    - by Swoosh
    I would like to add index(es) to my table. I am looking for general ideas on how to add more indexes to a table, other than the clustered PK. I would like to know what to look for when I am doing this. So, my example: This table (let's call it the TASK table) is going to be the biggest table of the whole application. Expecting millions of records. IMPORTANT: massive bulk-inserts add data to this table. The table has 27 columns (so far, and counting :D ): int x 9 columns = IDs, varchar x 10 columns, bit x 2 columns, datetime x 5 columns. INT COLUMNS: all of these are INT IDs, but from tables that are usually much smaller than the Task table (10-50 records max), for example a Status table (with values like "open", "closed") or a Priority table (with values like "important", "not so important", "normal"). There is also a column like "parent-ID" (a self-referencing ID). Join: all the "small" tables have a PK, the usual way ... clustered. STRING COLUMNS: there is a Company column (string!) that is something like "5 characters long all the time" and every user will be restricted using this one. If in Task there are 15 different "Companies", the logged-in user would only see one. So there's always a filter on this one. Might be a good idea to add an index to this column? DATE COLUMNS: I think they don't index these ... right? Or can/should they be?
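
    A hedged sketch of the index that usually pays off first here, given that every query filters on the Company column (the date column name is a placeholder; keep in mind that every extra index slows the massive bulk-inserts down a little):

        -- Composite index: equality filter on Company first, then a date column
        -- for range scans or ORDER BY on that column.
        CREATE INDEX IX_Task_Company_Created ON Task (Company, CreatedDate);

        -- Single-column indexes on the small lookup FKs (e.g. StatusId) are usually
        -- only worth adding if queries filter on them without the Company filter.
        CREATE INDEX IX_Task_Status ON Task (StatusId);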

    Read the article

  • JavaScript onClick() Display

    - by junaidkaps
    I have an array consisting of several objects containing Strings. I am successfully able to display the array by using: <td><p onclick="theMiddle(this)">The Middle</td> As you see from the td tag this is part of a table. Issue is that the browser opens up a new page to display my text. I have been trying to display the array above my table in a p tag. //JavaScript var arrayTheMiddle = new Array (showName.theMiddle, beginingTime.theMiddle, network.abc, duration.thirty, rating.general, description.theMiddle, showImage.theMiddle); function theMiddle(obj){ for(i=0; i < arrayTheMiddle.length; i++) { document.write(arrayTheMiddle[i] + "<br>"); } } //HTML File <p>Would like the array/function displayed here while the user clicks within the table below (entire table has not been listed)</p> <td><p onclick="theMiddle(this)">The Middle</td> Unfortunately I am constantly failing at utilizing getElementById to call my function which consists of an array. I have searched for all sorts of stuff, yet frankly I'm lost. Not even sure if my approach is correct at this point. I'm sure this is one of those simple things that are going right over my head!
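
    A small sketch of the in-page approach (document.write called after the page has finished loading replaces the whole document, which is why a new page appears; the "output" id below is invented, and arrayTheMiddle is assumed to be defined as in your code):

        <p id="output"></p>
        <table>
          <tr><td><p onclick="theMiddle(this)">The Middle</p></td></tr>
        </table>
        <script>
        function theMiddle(obj) {
            var lines = [];
            for (var i = 0; i < arrayTheMiddle.length; i++) {
                lines.push(arrayTheMiddle[i]);
            }
            // Write into the existing element instead of calling document.write().
            document.getElementById("output").innerHTML = lines.join("<br>");
        }
        </script>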

    Read the article

  • Over Optimistic Daily Productivity

    - by Dan Revell
    I'm a junior developer and have been working since I graduated last summer, so coming up to a year now. I have this issue that is starting to get to me. Every night I think back to what I did that day, feel bad that I didn't get as much done as I would have liked, and then tick off in my head all the things I'll get done the following day. Come the end of the following day, I haven't gotten through half of what I wanted to. Might this over-optimism I'm suffering from just be because I'm relatively new to the profession and not yet aware of how long things will actually take me? The work might be quick to think through in my head, but all sorts of time sinks can bleed away the hours. If not that, then perhaps it's the technology stack that I'm working on. SharePoint isn't the easiest thing to develop for and it's certainly something I came into not knowing a whole lot about. If it's because I'm not yet skilled enough to predict how long things will take me, is this trait of over-optimistic predictions universal to the profession? I'd appreciate any input from those experienced with working with younger developers and those that might have suffered from this themselves. [EDIT] Perhaps I worded the question badly. I'm interested in just general day-to-day work rather than overall project completion estimation.

    Read the article

  • Best way to implement plugin framework - are DLLs the only way (C/C++ project)?

    - by Microkernel
    Introduction: I am currently developing a document classifier software in C/C++ and I will be using a Naive-Bayesian model for classification. But I wanted the users to be able to use any algorithm that they want (or I want in the future), hence I decided to separate the algorithm part of the architecture into a plugin that will be attached to the main app @ app start-up. Hence any user can write his own algorithm as a plugin and use it with my app. Problem Statement: The way I am intending to develop this is to have each of the algorithms that a user wants to use be made into a DLL file and put into a specific directory. And at the start, my app will search for all the DLLs in that directory and load them. My Questions: (1) What if a malicious code is made as a DLL (and that will have the same functions mandated by the plugin framework) and put into my plugins directory? In that case, my app will think that it's a plugin and picks it up and calls its functions, so the malicious code can easily bring my entire app down (in the worst case it could turn my app into a malicious code launcher!!!). (2) Is using DLLs the only way available to implement the plugin design pattern? (Not only for the fear of a malicious plugin, but it's a generic question out of curiosity :) ) (3) I think a lot of software is written with a plugin model for extensibility; if so, how do they defend against such attacks? (4) In general what do you think about my decision to use a plugin model for extensibility (do you think I should look at any other alternatives?) Thank you -MicroKernel :)
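
    For question (2), DLLs are the most common approach on Windows but not the only one: statically registered plugins compiled into the app, or an embedded scripting language, avoid loading foreign binaries entirely; dlopen/dlsym is the POSIX equivalent of the sketch below. For question (1), the standard mitigation is to sign your plugin DLLs and verify the signature before loading. A minimal sketch of the usual DLL route, with every name invented for illustration:

        #include <windows.h>
        #include <cstdio>

        // Every plugin is expected to export a C-linkage factory with this signature.
        typedef void* (*CreateClassifierFn)();

        int main() {
            HMODULE lib = LoadLibraryA("plugins\\naive_bayes.dll");  // invented path
            if (!lib) { std::printf("failed to load plugin\n"); return 1; }

            CreateClassifierFn create =
                (CreateClassifierFn)GetProcAddress(lib, "CreateClassifier");
            if (!create) { std::printf("missing factory export\n"); FreeLibrary(lib); return 1; }

            void* classifier = create();  // hand this object to the application core
            // ... use the classifier, then clean up ...
            FreeLibrary(lib);
            return 0;
        }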

    Read the article

  • How can I abstract out the core functionality of several Rails applications?

    - by hornairs
    I'd like to develop a number of non-trivial Rails applications which all implement a core set of functionality but each have certain particular customizations, extensions, and aesthetic differences. How can I pull the core functionality (models, controllers, helpers, support classes, tests) common to all these systems out in such a way that updating the core will benefit every application based upon it? I've seen Rails Engines but they seem to be too detached, almost too abstracted to be built upon. I can see them being useful for adding one component to an existing app, for example bolting on a blog engine to your existing e-commerce site. Since engines seem to be mostly self contained, it seems difficult and inconvenient to override their functionality and views while keeping DRY. I've also considered abstracting the code into a gem, but this seems a little odd. Do I make the gem depend on the Rails gems, and then define models & controllers inside it, and then subclass them in my various applications? Or do I define many modules inside the gem that I include in the different spots inside my various applications? How do I test the gem and then test the set of customizations and overridden functionality on top of it? I'm also concerned with how I'll develop the gem and the Rails apps in tandem; can I vendor a git repository of the gem into the app and push from that so I don't have to build a new gem every iteration? Also, are there private gem hosts/can I set my own gem source up? Also, any general suggestions for this kind of undertaking? Abstraction paradigms to adhere to? Required reading? Comments from the wise who have done this before? Thanks!
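
    One hedged sketch of the gem route: keep the shared behaviour in modules inside the gem and include them from each application's thin model classes, so an app can override or extend individual methods without forking the core (gem and module names below are invented):

        # In the shared gem: lib/core_app/person_behaviour.rb
        module CoreApp
          module PersonBehaviour
            def full_name
              "#{first_name} #{last_name}"
            end

            def self.included(base)
              base.class_eval do
                validates_presence_of :first_name, :last_name
              end
            end
          end
        end

        # In each Rails application:
        class Person < ActiveRecord::Base
          include CoreApp::PersonBehaviour
          # app-specific customizations and overrides go here
        end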

    Read the article

  • pyOpenSSL and the WantReadError

    - by directedition
    I have a socket server that I am trying to move over to SSL on python 2.5, but I've run into a snag with pyOpenSSL. I can't find any good tutorials on using it, so I'm operating largely on guesses. Here is how my server sets up the socket: ctx = SSL.Context(SSL.SSLv23_METHOD) ctx.use_privatekey_file ("mykey.pem") ctx.use_certificate_file("mycert.pem") sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM)) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) addr = ('', int(8081)) sock.bind(addr) sock.listen(5) Here is how it accepts clients: sock.setblocking(0) while True: if len(select([sock], [], [], 0.25)[0]): client_sock, client_addr = sock.accept() client = ClientGen(client_sock) And here is how it sends/receives from the connected sockets: while True: (r, w, e) = select.select([sock], [sock], [], 0.25) if len(r): bytes = sock.recv(1024) if len(w): n_bytes = sock.send(self.message) It's compacted, but you get the general idea. The problem is, once the send/receive loop starts, it dies right away, before anything has been sent or received (that I can see anyway): Traceback (most recent call last): File "ClientGen.py", line 50, in networkLoop n_bytes = sock.send(self.message WantReadError The manual's description of the 'WantReadError' is very vague, saying it can come from just about anywhere. What am I doing wrong?
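
    On a non-blocking connection, WantReadError and WantWriteError are not fatal: they mean the TLS layer needs another read or write on the underlying socket (a handshake round-trip, for example) before your call can complete, so the usual pattern is to catch them, wait in select, and retry. A rough sketch (assuming the OpenSSL.SSL module is imported as SSL, as in your setup code):

        import select
        from OpenSSL import SSL

        def ssl_send(sock, data):
            """Keep retrying until the TLS layer accepts the write."""
            while True:
                try:
                    return sock.send(data)
                except SSL.WantReadError:
                    select.select([sock], [], [])   # wait until readable, then retry
                except SSL.WantWriteError:
                    select.select([], [sock], [])   # wait until writable, then retry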

    Read the article

  • Should a developer write their own test plan for Q/A?

    - by Mat Nadrofsky
    Who writes the test plans in your shop? Who should write them? I realize developers (like me) regularly do their own unit testing whilst developing and in some cases even their own Q/A depending on the size of the shop and the nature of the business, but in a big software shop with a full development team and Q/A team, who should be writing those official "my changes are done now" test plans? Soon, we'll be bringing on another Q/A member to our development team. My question is, going forward, is it a good practice to get your developers to write their own test plans? Something tells me that part of that might make sense but another part might not... What I like about that: Developer is very familiar with the changes made, thus it's easy to produce a document... What I don't like about that: Developer knows how it's supposed to work and might write a test plan that caters to this without knowing it. So, with the above in mind, what is the general stance on this topic? I'm of course already reading books like the Mythical Man-Month, Code Complete and a few others which really do help, but I'd like to get some input from the group as well.

    Read the article

  • Multiple C# Debug.Asserts use the same error message. Should I promote it to a static variable?

    - by Hamish Grubijan
    I love Asserts but not code duplication, and in several places I use a Debug.Assert which checks for the same condition like so: Debug.Assert(kosherBaconList.SelectedIndex != -1, "An error message along the lines - you should not ever be able to click on edit button without selecting a kosher bacon first."); This is in response to an actual bug, although the actual list does not contain kosher bacon. Anyhow, I can think of two approaches: private static readonly string mustSelectKosherBaconBeforeEditAssertMessage = "An error message along the lines - you should not ever be able to " + "click on edit button without selecting a something first."; ... Debug.Assert( kosherBaconList.SelectedIndex != -1, mustSelectKosherBaconBeforeEditAssertMessage); or: if (kosherBaconList.SelectedIndex == -1) { AssertMustSelectKosherBaconBeforeEdit(); } ... [Conditional("DEBUG")] private void AssertMustSelectKosherBaconBeforeEdit() { // Compiler will optimize away this variable. string errorMessage = "An error message along the lines - you should not ever be able to " + "click on edit button without selecting a something first."; Debug.Assert(false, errorMessage); } or is there a third way which sucks less than either one above? Please share. General helpful relevant tips are also welcome.

    Read the article

  • How should I return different types in a method based on the value of a string in Java?

    - by Siracuse
    I'm new to Java and I have come to having the following problem: I have created several classes which all implement the interface "Parser". I have a JavaParser, PythonParser, CParser and finally a TextParser. I'm trying to write a method so it will take either a File or a String (representing a filename) and return the appropriate parser given the extension of the file. Here is some pseudo-code of what I'm basically attempting to do: public Parser getParser(String filename) { String extension = filename.substring(filename.lastIndexOf(".")); switch(extension) { case "py": return new PythonParser(); case "java": return new JavaParser(); case "c": return new CParser(); default: return new TextParser(); } } In general, is this the right way to handle this situation? Also, how should I handle the fact that Java doesn't allow switching on strings? Should I use the .hashCode() value of the strings? I feel like there is some design pattern or something for handling this but it eludes me. Is this how you would do it?
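
    On Java versions without switch-on-String, an if/else chain (or a Map from extension to a factory) keyed on the extension is the usual substitute; a hedged sketch assuming the Parser interface and parser classes named above:

        public class ParserFactory {
            public Parser getParser(String filename) {
                int dot = filename.lastIndexOf('.');
                // substring(dot + 1) skips the dot itself, so the key is "py", not ".py".
                String ext = (dot >= 0) ? filename.substring(dot + 1).toLowerCase() : "";
                if ("py".equals(ext))   return new PythonParser();
                if ("java".equals(ext)) return new JavaParser();
                if ("c".equals(ext))    return new CParser();
                return new TextParser();
            }
        }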

    Read the article

  • Load page for validation but do not display it to user in ASP.NET

    - by Kevin
    We have a site requiring users pay $2 to view the details of a record. We occasionally get complaints because we send them to the payment page, and once they pay it turns out the record isn't valid, or it was lost, or the data couldn't be generated. So we want to add in a check to ensure the page constructs properly before the user is required to pay for it. However, we don't want the user to have access to the page until they pay for it. Is there anything in ASP.NET 3.5 or just general web design that would allow something like this? The data on the record is real time and computed on a backend server before being sent to the client. Occasionally this computation fails for whatever reason. Our alternative is to call all of the loading methods and validate the data, then redirect them to the payment page. The problem is A) this will be a relatively involved process rewriting all of these methods to return validation information, and B) it still doesn't guarantee us the page will load properly. Any thoughts?
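
    One hedged way to structure the alternative you describe: wrap the whole backend computation in a single "try build" method that the checkout flow calls before charging, and cache the successful result so the paid view doesn't have to recompute it (every name below is a placeholder):

        // Placeholder sketch: attempt the full, failure-prone computation up front.
        public bool TryBuildRecordDetails(int recordId, out RecordDetails details)
        {
            details = null;
            try
            {
                details = backendService.ComputeDetails(recordId); // the slow backend step
                return details != null;
            }
            catch (Exception)
            {
                return false;   // any backend failure means "don't take the money"
            }
        }

        // Before redirecting to the payment page:
        //   RecordDetails details;
        //   if (TryBuildRecordDetails(id, out details))
        //       Cache.Insert("record-" + id, details);   // serve this copy after payment
        //   else
        //       ShowRecordUnavailableMessage();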

    Read the article

  • How to structure this Symfony web project?

    - by James William
    I am new to Symfony and am not sure how to best structure my web project. The solution must accommodate 3 use cases: Public access to www.mydomain.com for general use Member only access to member.mydomain.com Administrator access to admin.mydomain.com All three virtual hosts point to the Symfony /web directory Questions: Are these 3 separate applications in my Symfony project (e.g. "frontend", "backend" and "admin" or "public", "member", "admin")? Is this a good approach if there is to be some duplicate code (e.g. generating a member list would be common across all 3 applications, but presented differently)? How would I route to the various applications based on the subdomain when a user accesses *.mydomain.com? Where in Symfony should this routing logic be placed? Or, is this one application with modules for each of the above use cases? EDIT: I do not have access to httpd.conf in apache to specify a default page for virtual hosts. I can only specify a directory for each subdomain using the hosting provider's cPanel.

    Read the article

  • Should I base my Embedded Linux product on Qt?

    - by Udi
    My company is developing a medical product. One of the components is a PDA-like platform that will run embedded Linux. We were considering Qt as the UI framework but found out that Qt is a lot more than that (we are not familiar with Qt). In general, the device needs to do the following: 1. Receive measurements over USB HID from another device (USB HID is used for convenience). 2. Process the measurements. 3. Store them in a database. 4. Interact with the user using the device's touch screen LCD. 5. Communicate (Wi-Fi, TCP/IP) with a central management station that collects the data and configures the device. 6. Include a web server to allow accessing the device via a browser. We intend to program in C++. My questions are: 1. Is that a good choice for such a device? 2. Assuming we choose Qt, how do we build our product? - Do we use Qt just as a GUI framework and write the application code in a separate process (passing messages between Qt and the application process)? - Do we write the entire application inside Qt, using all of the services the tool has to offer? - Another approach?

    Read the article

  • Generating jquery 'rules' from business model to UI in asp.net mvc

    - by jim
    Hi all, I've had a good look around and am certain that there's no matching question on SO, so here goes. Has anyone created a 'helper' method on their model that generates jquery (or plain javascript) validation rules dynamically, based on the criteria/rules that are contained within the object and taken from a repository (i.e. DB)? What I'm thinking of is a discrete set of partial views (and associated models) that have rules at the business logic 'level' and, rather than (or in combination with) validating the rule(s) at postback, translating the same rules into tightly focussed jquery methods that work identically at client (js) and server (c#) levels. I can see benefits here re performance. Also, the rules definitions could be created in a single place (in c#) and the jquery generated off of that, thus allowing single edits to update both code streams. I appreciate that there would be limitations imposed by language-specific constraints but the general principle could be quite interesting if used appropriately. I'm also aware that testability could be an issue when using two different language structures and hoping to achieve similar test outcomes - but those aside... any thoughts or experiences of similar out there?? cheers jimi
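
    A rough sketch of the single-source-of-truth idea: keep the rules as data on the server and emit them as JSON for the client-side validator to consume, so both sides read one definition (the types, property names and rule keys below are invented; mapping onto jquery.validate's exact option format is left out):

        using System.Collections.Generic;
        using System.Web.Script.Serialization;   // built-in JSON serializer in .NET 3.5

        // Invented rule definition shared by server-side checks and the emitted JS.
        public class FieldRule
        {
            public bool Required { get; set; }
            public int MaxLength { get; set; }
        }

        public static class RuleEmitter
        {
            // Produces e.g. {"Email":{"required":true,"maxlength":100}} for the view
            // to drop into a script block and hand to the client-side validator.
            public static string ToJson(IDictionary<string, FieldRule> rules)
            {
                var payload = new Dictionary<string, object>();
                foreach (var pair in rules)
                {
                    payload[pair.Key] = new Dictionary<string, object>
                    {
                        { "required", pair.Value.Required },
                        { "maxlength", pair.Value.MaxLength }
                    };
                }
                return new JavaScriptSerializer().Serialize(payload);
            }
        }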

    Read the article

  • asp.net, wcf authentication and caching

    - by andrew
    I need to place my app business logic into a WCF service. The service shouldn't be dependent on ASP.NET, and there is a lot of data regarding the authenticated user which is frequently used in the business logic, hence it's supposed to be cached (probably using a distributed cache). As for authentication, I'm going to use two-level authentication: front-end - forms authentication; back-end (WCF service) - message username authentication. For both authentications the same custom membership provider is supposed to be used. To cache the authenticated user data, I'm going to implement two service methods: 1) Authenticate - will retrieve the needed data and place it into the cache (where the username will be used as a key) 2) SignOut - will remove the data from the cache Question 1. Is it correct to perform authentication that way (in two places)? Question 2. Is this caching strategy worth using, or should I look at using an aspnet-compatible service and asp.net session? Maybe these questions are too general, but anyway I'd like to get any suggestions or recommendations.
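
    A minimal sketch of the two-method idea against a generic cache abstraction (all names below are invented; a real version would sit behind the custom membership provider and use whatever distributed cache is chosen):

        using System;

        public class UserData { /* the per-user fields the business logic needs */ }

        public interface IUserDataRepository { UserData Load(string username); }

        public interface IUserDataCache        // wraps the distributed cache
        {
            void Put(string username, UserData data, TimeSpan slidingExpiration);
            void Remove(string username);
        }

        public class AuthService
        {
            private readonly IUserDataCache cache;
            private readonly IUserDataRepository repository;

            public AuthService(IUserDataCache cache, IUserDataRepository repository)
            {
                this.cache = cache;
                this.repository = repository;
            }

            // Called once after the credentials check succeeds.
            public void Authenticate(string username)
            {
                cache.Put(username, repository.Load(username), TimeSpan.FromMinutes(30));
            }

            public void SignOut(string username)
            {
                cache.Remove(username);
            }
        }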

    Read the article

  • Reduce durability in MySQL for performance

    - by Paul Prescod
    My site occasionally has fairly predictable bursts of traffic that increase the throughput to 100 times the normal level. For example, we are going to be featured on a television show, and I expect in the hour after the show, I'll get more than 100 times more traffic than normal. My understanding is that MySQL (InnoDB) generally keeps my data in a bunch of different places: RAM buffers, the commit log, the binary log, the actual tables, and all of the above again on my DB slave. This is too much "durability" given that I'm on an EC2 node and most of the stuff goes across the same network pipe (file systems are network attached). Plus the drives are just slow. The data is not high value and I'd rather take a small chance of a few minutes of data loss than have a high probability of an outage when the crowd arrives. During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to just keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or the I/O channel is not too overloaded, then sure, I'd like things to go to the commit log or binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like to write back to the actual tables. In other words, I'd like MySQL/InnoDB to use a "write back" cache algorithm rather than a "write through" cache algorithm. Can I convince it to do that? If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users, I am creating accounts for all of them, so that's a write-heavy workload.
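
    The usual InnoDB knobs for trading durability for write throughput are real settings, though the exact values below are only a sketch to test against your workload, not a recommendation:

        # my.cnf sketch - relax durability ahead of the burst
        [mysqld]
        # Write the redo log at commit but fsync it only about once per second,
        # so a crash can lose roughly the last second of transactions.
        innodb_flush_log_at_trx_commit = 2

        # Let the OS decide when to flush the binary log (0 = no fsync per commit).
        sync_binlog = 0

        # Keep as much of the working set in RAM as the instance allows.
        innodb_buffer_pool_size = 4G

        # Larger redo log and log buffer batch more writes before touching table files.
        innodb_log_file_size   = 512M
        innodb_log_buffer_size = 64M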

    Read the article

  • Cassandra instead of MySQL for social networking app

    - by Christopher McCann
    I am in the middle of building a new app which will have very similar features to Facebook and although obviously it won't ever have to deal with the likes of 400,000,000 users it will still be used by a substantial user base and most of them will demand it run very very quickly. I have extensive experience with MySQL but a social app offers complexities which MySQL is not well suited to. I know Facebook, Twitter etc have moved towards Cassandra for a lot of their data but I am not sure how far to go with it. For example would you store such things as user data - usernames, passwords, addresses etc - in Cassandra? Would you store e-mails, comments, status updates etc in Cassandra? I have also read a lot that something like neo4j is much better for representing the friend relationships used by social apps as it is a graph database. I am only just starting down the NoSQL route so any guidance is greatly appreciated. Would anyone be able to advise me on this? I hope I am not being too general!

    Read the article

  • Can someone recommend a good tutorial on MySQL indexes, specifically when used in an order by clause

    - by Philip Brocoum
    I could try to post and explain the exact query I'm trying to run, but I'm going by the old adage of, "give a man a fish and he'll eat for a day, teach a man to fish and he'll eat for the rest of his life." SQL optimization seems to be very query-specific, and even if you could solve this one particular query for me, I'm going to have to write many more queries in the future, and I'd like to be educated on how indexes work in general. Still, here's a quick description of my current problem. I have a query that joins three tables and runs in 0.2 seconds flat. Awesome. I add an "order by" clause and it runs in 4 minutes and 30 seconds. Sucky. I denormalize one table so there is one fewer join, add indexes everywhere, and now the query runs in... 20 minutes. What the hell? Finally, I don't use a join at all, but rather a subquery with "where id in (...) order by" and now it runs in 1.5 seconds. Pretty decent. What in God's name is going on? I feel like if I actually understood what indexes were doing I could write some really good SQL. Anybody know some good tutorials? Thanks!
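
    The rule of thumb worth internalising first: MySQL will typically use one index per table in a query, and ORDER BY can be served from an index only when the sort columns follow the equality-filter columns in a single composite index; otherwise the server sorts the whole joined result (a filesort), which is usually where the minutes go. EXPLAIN tells you which case you're in. A small hedged sketch with placeholder table and column names:

        -- Filter and sort served by one composite index, so no filesort is needed.
        CREATE INDEX idx_posts_author_created ON posts (author_id, created_at);

        EXPLAIN
        SELECT *
        FROM posts
        WHERE author_id = 42
        ORDER BY created_at DESC
        LIMIT 20;
        -- In the Extra column, the absence of "Using filesort" means the index
        -- handled the ORDER BY.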

    Read the article
