Search Results

Search found 671 results on 27 pages for 'optimizing'.

Page 4 of 27

  • I need some help optimizing my database schema

    - by Steffan
    Here's a layout of my data:

        Heading 1: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading
        Heading 2: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading
        Heading 3: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading
        Heading 4: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading
        Heading 5: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading

    These headings need to have a 'Completion Status' boolean value which gets linked to a user ID. Currently, this is how my table looks:

        id | userID | field_1 | field_2 | field_3 | field_4 | etc...
        -------------------------------------------------------------
         1 |      1 |       0 |       0 |       1 |       0 |
         2 |      2 |       1 |       0 |       1 |       1 |

    Each field represents one Sub Heading. Having this many columns in my table looks awfully inefficient... How can I go about optimizing this? I can't think of any way to neaten it up :/

  • need help optimizing oracle query

    - by deming
    I need help in optimizing the following query. It is taking a long time to finish: almost 213 seconds. Because of some constraints, I can not add an index and have to live with the existing ones.

        INSERT INTO temp_table_1 (USER_ID, role_id, participant_code, status_id)
        WITH A AS (SELECT USER_ID user_id, ROLE_ID, STATUS_ID, participant_code
                     FROM USER_ROLE
                    WHERE participant_code IS NOT NULL),               --1
             B AS (SELECT ROLE_ID FROM CMP_ROLE WHERE GROUP_ID = 3),
             C AS (SELECT USER_ID FROM USER)                           --2
        SELECT USER_ID, ROLE_ID, PARTICIPANT_CODE, MAX(STATUS_ID)
          FROM A
         INNER JOIN B USING (ROLE_ID)
         INNER JOIN C USING (USER_ID)
         GROUP BY USER_ID, role_id, participant_code;

        --1 = this subquery, run alone, takes 100+ seconds
        --2 = this subquery, run alone, takes 19 seconds

        DELETE temp_table_1
         WHERE ROWID NOT IN (SELECT a.ROWID
                               FROM temp_table_1 a, USER_ROLE b
                              WHERE a.status_id = b.status_id
                                AND (b.ACTIVE IN (1)
                                     OR (b.ACTIVE IN (0, 3)
                                         AND SYSDATE BETWEEN b.effective_from_date
                                                         AND b.effective_to_date)));

    It seems like the person who wrote the query is trying to get everything into a temp table first and then delete records from it; whatever is left is the actual result. Can't it be done in such a way that there is no need for the delete, so we just get the rows we need and save that time?

  • Optimizing T-SQL where an array would be nice

    - by Polatrite
    Alright, first you'll need to grab a barf bag. I've been tasked with optimizing several old stored procedures in our database. This SP does the following:

    1) A cursor loops through a series of "buildings"
    2) A cursor loops through a week, Sunday-Saturday
    3) A huge set of IF blocks is responsible for counting how many Objects of what Types are present in a given building

    Essentially what you'll see in this code block is that, if there are 5 objects of type #2, it will increment @Type_2_Objects_5 by 1.

        IF @Number_Type_1_Objects = 0
        BEGIN
            SET @Type_1_Objects_0 = @Type_1_Objects_0 + 1
        END
        IF @Number_Type_1_Objects = 1
        BEGIN
            SET @Type_1_Objects_1 = @Type_1_Objects_1 + 1
        END
        IF @Number_Type_1_Objects = 2
        BEGIN
            SET @Type_1_Objects_2 = @Type_1_Objects_2 + 1
        END
        IF @Number_Type_1_Objects = 3
        BEGIN
            SET @Type_1_Objects_3 = @Type_1_Objects_3 + 1
        END

        [... Objects_4 through Objects_20 for Type_1]

        IF @Number_Type_2_Objects = 0
        BEGIN
            SET @Type_2_Objects_0 = @Type_2_Objects_0 + 1
        END
        IF @Number_Type_2_Objects = 1
        BEGIN
            SET @Type_2_Objects_1 = @Type_2_Objects_1 + 1
        END
        IF @Number_Type_2_Objects = 2
        BEGIN
            SET @Type_2_Objects_2 = @Type_2_Objects_2 + 1
        END
        IF @Number_Type_2_Objects = 3
        BEGIN
            SET @Type_2_Objects_3 = @Type_2_Objects_3 + 1
        END

        [... Objects_4 through Objects_20 for Type_2]

    In addition to being extremely hacky (and limited to a quantity of 20 objects), it seems like a terrible way of handling this. In a traditional language, this could easily be solved with a 2-dimensional array:

        objects[type][quantity] += 1;

    I'm a T-SQL novice, but since writing stored procedures often uses a lot of temporary tables (which could essentially be a 2-dimensional array), I was wondering if someone could illuminate a better way of handling a situation like this with two dynamic pieces of data to store.

    Requested in comments: the columns are simply Number_Type_1_Objects, Number_Type_2_Objects, Number_Type_3_Objects, Number_Type_4_Objects, Number_Type_5_Objects, and CurrentDateTime. Each row in the table represents 5 minutes. The expected output is to figure out what percentage of time a given quantity of objects is present throughout each day.

        Sunday - Object Type 1
        0 objects - 69 rows, 5:45, 34.85%
        1 object  - 85 rows, 7:05, 42.93%
        2 objects - 44 rows, 3:40, 22.22%

    On Sunday, there were 0 objects of type 1 for 34.85% of the day. There was 1 object for 42.93% of the day, and 2 objects for 22.22% of the day. Repeat for each object type.
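
    To make the 2-dimensional array idea concrete, here is a minimal sketch in Java (the names and input shape are assumptions, not the actual table): each 5-minute row contributes one increment per object type, and the percentages fall out at the end. The equivalent set-based T-SQL move is a COUNT(*) with GROUP BY over (object type, quantity) instead of one variable per combination.

        // A minimal sketch of the 2-D histogram, in Java; the input shape is
        // assumed (one int[5] of Number_Type_1..5_Objects per 5-minute row).
        static double[][] tally(java.util.List<int[]> rows) {
            final int NUM_TYPES = 5, MAX_QTY = 20; // MAX_QTY mirrors the SP's hard limit
            int[][] objects = new int[NUM_TYPES][MAX_QTY + 1];
            for (int[] row : rows) {
                for (int type = 0; type < NUM_TYPES; type++) {
                    int qty = Math.min(row[type], MAX_QTY);
                    objects[type][qty]++; // replaces the entire IF ladder
                }
            }
            // convert counts to "percentage of the day" per (type, quantity)
            double[][] pct = new double[NUM_TYPES][MAX_QTY + 1];
            for (int t = 0; t < NUM_TYPES; t++)
                for (int q = 0; q <= MAX_QTY; q++)
                    pct[t][q] = 100.0 * objects[t][q] / rows.size();
            return pct;
        }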

  • Feedback on Optimizing C# NET Code Block

    - by Brett Powell
    I just spent quite a few hours reading up on TCP servers and the protocol I was trying to implement, and finally got everything working great. I noticed the code looks like absolute bollocks (is that the correct usage? I'm not a Brit) and would like some feedback on optimizing it, mostly for reuse and readability. The packet formats are always int, int, int, string, string.

        try
        {
            BinaryReader reader = new BinaryReader(clientStream);
            int packetsize = reader.ReadInt32();
            int requestid = reader.ReadInt32();
            int serverdata = reader.ReadInt32();
            Console.WriteLine("Packet Size: {0} RequestID: {1} ServerData: {2}",
                              packetsize, requestid, serverdata);

            List<byte> str = new List<byte>();
            byte nextByte = reader.ReadByte();
            while (nextByte != 0)
            {
                str.Add(nextByte);
                nextByte = reader.ReadByte();
            }

            // Password sent to be authenticated
            string string1 = Encoding.UTF8.GetString(str.ToArray());
            str.Clear();

            nextByte = reader.ReadByte();
            while (nextByte != 0)
            {
                str.Add(nextByte);
                nextByte = reader.ReadByte();
            }

            // NULL string
            string string2 = Encoding.UTF8.GetString(str.ToArray());
            Console.WriteLine("String1: {0} String2: {1}", string1, string2);

            // Reply to authentication request
            MemoryStream stream = new MemoryStream();
            BinaryWriter writer = new BinaryWriter(stream);
            writer.Write((int)(1));         // Packet Size
            writer.Write((int)(requestid)); // Mirror RequestID if authenticated, -1 if failed
            byte[] buffer = stream.ToArray();
            clientStream.Write(buffer, 0, buffer.Length);
            clientStream.Flush();
        }

    I am going to be dealing with other packet types as well that are formatted the same way (int/int/int/str/str), but with different values. I could probably create a packet class, but this is a bit outside my scope of knowledge for how to apply it to this scenario. If it makes any difference, this is the protocol I am implementing: http://developer.valvesoftware.com/wiki/Source_RCON_Protocol
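
    On the packet-class question: the usual refactor is to hoist the null-terminated-string read into a helper and give the fixed int/int/int/str/str layout a type of its own. A minimal sketch follows, written in Java rather than C# and with all names invented; note the Source RCON wire format is little-endian, which Java's DataInputStream does not handle natively, hence the byte swaps.

        import java.io.ByteArrayOutputStream;
        import java.io.DataInputStream;
        import java.io.IOException;

        // A sketch only: one class per wire layout, plus a helper for the
        // repeated "read bytes until 0" loop. All names here are invented.
        final class RconPacket {
            final int size, requestId, serverData;
            final String body1, body2;

            RconPacket(int size, int requestId, int serverData, String body1, String body2) {
                this.size = size;
                this.requestId = requestId;
                this.serverData = serverData;
                this.body1 = body1;
                this.body2 = body2;
            }

            static RconPacket read(DataInputStream in) throws IOException {
                // RCON ints are little-endian; DataInputStream reads big-endian
                int size = Integer.reverseBytes(in.readInt());
                int requestId = Integer.reverseBytes(in.readInt());
                int serverData = Integer.reverseBytes(in.readInt());
                return new RconPacket(size, requestId, serverData,
                                      readCString(in), readCString(in));
            }

            private static String readCString(DataInputStream in) throws IOException {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                int b;
                while ((b = in.read()) > 0) {  // stop at the 0 terminator (or EOF)
                    buf.write(b);
                }
                return new String(buf.toByteArray(), "UTF-8");
            }
        }

    Each new packet type then gets its own small reader instead of another copy of the while loop.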

  • Optimizing Jaro-Winkler algorithm

    - by Pentium10
    I have this code for the Jaro-Winkler algorithm, taken from this website. I need to run it 150,000 times to get the distances between differences. It takes a long time, as I run it on an Android mobile device. Can it be optimized more?

        public class Jaro {
            /**
             * Gets the similarity of the two strings using Jaro distance.
             *
             * @param string1 the first input string
             * @param string2 the second input string
             * @return a value between 0-1 of the similarity
             */
            public float getSimilarity(final String string1, final String string2) {
                // get half the length of the string rounded up
                // (this is the distance used for acceptable transpositions)
                final int halflen = ((Math.min(string1.length(), string2.length())) / 2)
                        + ((Math.min(string1.length(), string2.length())) % 2);

                // get common characters
                final StringBuffer common1 = getCommonCharacters(string1, string2, halflen);
                final StringBuffer common2 = getCommonCharacters(string2, string1, halflen);

                // check for zero in common
                if (common1.length() == 0 || common2.length() == 0) {
                    return 0.0f;
                }

                // check for same length common strings returning 0.0f if not the same
                if (common1.length() != common2.length()) {
                    return 0.0f;
                }

                // get the number of transpositions
                int transpositions = 0;
                int n = common1.length();
                for (int i = 0; i < n; i++) {
                    if (common1.charAt(i) != common2.charAt(i))
                        transpositions++;
                }
                transpositions /= 2.0f;

                // calculate jaro metric
                return (common1.length() / ((float) string1.length())
                        + common2.length() / ((float) string2.length())
                        + (common1.length() - transpositions) / ((float) common1.length())) / 3.0f;
            }

            /**
             * Returns a string buffer of characters from string1 within string2 if they are of a
             * given distance separation from the position in string1.
             *
             * @param string1
             * @param string2
             * @param distanceSep
             * @return a string buffer of characters from string1 within string2 if they are of a
             *         given distance separation from the position in string1
             */
            private static StringBuffer getCommonCharacters(final String string1,
                    final String string2, final int distanceSep) {
                // create a return buffer of characters
                final StringBuffer returnCommons = new StringBuffer();
                // create a copy of string2 for processing
                final StringBuffer copy = new StringBuffer(string2);
                // iterate over string1
                int n = string1.length();
                int m = string2.length();
                for (int i = 0; i < n; i++) {
                    final char ch = string1.charAt(i);
                    // set boolean for quick loop exit if found
                    boolean foundIt = false;
                    // compare char with range of characters to either side
                    for (int j = Math.max(0, i - distanceSep);
                            !foundIt && j < Math.min(i + distanceSep, m - 1); j++) {
                        // check if found
                        if (copy.charAt(j) == ch) {
                            foundIt = true;
                            // append character found
                            returnCommons.append(ch);
                            // alter copied string2 for processing
                            copy.setCharAt(j, (char) 0);
                        }
                    }
                }
                return returnCommons;
            }
        }

    I mention that in the whole process I create just one instance of the class, so only once: jaro = new Jaro(). If you are going to test and need examples, so as not to break the script, you will find them here, in another thread for Python optimization.
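
    Most of the time in the version above goes to object churn: two StringBuffer allocations per call, a full copy of string2, and charAt calls through buffer indirection. A hedged sketch of the usual rewrite on plain char arrays with boolean match masks follows; note it uses the textbook one-pass matching, which can differ marginally from the two-pass getCommonCharacters approach above.

        // A sketch of an allocation-light Jaro similarity: char arrays plus
        // reusable boolean masks, no StringBuffer and no copy of string2.
        public final class FastJaro {
            public static float similarity(String s1, String s2) {
                final char[] a = s1.toCharArray();
                final char[] b = s2.toCharArray();
                final int halflen = (Math.min(a.length, b.length) / 2)
                        + (Math.min(a.length, b.length) % 2);

                final boolean[] matchedA = new boolean[a.length];
                final boolean[] matchedB = new boolean[b.length];
                int common = 0;
                for (int i = 0; i < a.length; i++) {
                    int from = Math.max(0, i - halflen);
                    int to = Math.min(i + halflen, b.length - 1); // mirrors the original bound
                    for (int j = from; j < to; j++) {
                        if (!matchedB[j] && a[i] == b[j]) {
                            matchedA[i] = true;
                            matchedB[j] = true;
                            common++;
                            break;
                        }
                    }
                }
                if (common == 0) return 0.0f;

                // count transpositions by walking both match masks in step
                int transpositions = 0;
                for (int i = 0, j = 0; i < a.length; i++) {
                    if (!matchedA[i]) continue;
                    while (!matchedB[j]) j++;
                    if (a[i] != b[j]) transpositions++;
                    j++;
                }
                int t = transpositions / 2;

                return (common / (float) a.length
                        + common / (float) b.length
                        + (common - t) / (float) common) / 3.0f;
            }
        }

    On Android, removing the per-call allocations matters as much as the asymptotics, since it also keeps GC pauses out of the 150,000-call loop.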

  • optimizing iPhone OpenGL ES fill rate

    - by NateS
    I have an OpenGL ES game on the iPhone. My framerate is pretty sucky, ~20fps. Using the Xcode OpenGL ES performance tool on an iPhone 3G, it shows:

        Renderer Utilization: 95% to 99%
        Tiler Utilization: ~27%

    I am drawing a lot of pretty large images with a lot of blending. If I reduce the number of images drawn, framerates go from ~20 to ~40, though the performance tool results stay about the same (renderer still maxed). I think I'm being limited by the fill rate of the iPhone 3G, but I'm not sure. My questions are:

    1) How can I determine with more granularity where the bottleneck is? That is my biggest problem; I just don't know what is taking all the time.
    2) If it is fill rate, is there anything I can do to improve it besides just drawing less?
    3) I am using texture atlases. I have tried to minimize image binds, though it isn't always possible (draw order, not everything fits on one 1024x1024 texture, etc). Every frame I do 10 image binds. This seems pretty reasonable, but I could be mistaken.
    4) I'm using vertex arrays and glDrawArrays. I don't really have a lot of geometry; I can try to be more precise if needed. Each image is 2 triangles, and I try to batch things where possible, though often (maybe half the time) images are drawn with individual glDrawArrays calls. Besides the images, I have ~60 triangles' worth of geometry being rendered in ~6 glDrawArrays calls. I often glTranslate before calling glDrawArrays. Would it improve the framerate to switch to VBOs? I don't think it is a huge amount of geometry, but maybe it is faster for other reasons?
    5) Are there certain things to watch out for that could reduce performance? E.g., should I avoid glTranslate, glColor4f, etc?
    6) I'm using glScissor in 3 places per frame. Each use consists of 2 glScissor calls, one to set it up and one to reset it to what it was. I don't know if there is much of a performance impact here.
    7) If I used PVRTC, would it be able to render faster? Currently all my images are GL_RGBA. I don't have memory issues.

    Here is a rough idea of what I'm drawing, in this order:

    1) Switch to perspective matrix.
    2) Draw a full screen background image.
    3) Draw a full screen image with translucency (this one has a scrolling texture).
    4) Draw a few sprites.
    5) Switch to ortho matrix.
    6) Draw a few sprites.
    7) Switch to perspective matrix.
    8) Draw sprites and some other textured geometry.
    9) Switch to ortho matrix.
    10) Draw a few sprites (e.g., the game HUD).

    Steps 1-6 draw a bunch of background stuff, 8 draws most of the game content, and 10 draws the HUD. As you can see, there are many layers, some of them full screen, and some of the sprites are pretty large (1/4 of the screen). The layers use translucency, so I have to draw them in back-to-front order. This is further complicated by needing to draw various layers in ortho and others in perspective. I will gladly provide additional information if requested. Thanks in advance for any performance tips or general advice on my problem!

  • Java: micro-optimizing array manipulation

    - by Martin Wiboe
    Hello all, I am trying to make a Java port of a simple feed-forward neural network. This obviously involves lots of numeric calculations, so I am trying to optimize my central loop as much as possible. The results should be correct within the limits of the float data type. My current code looks as follows (error handling & initialization removed):

        /**
         * Simple implementation of a feedforward neural network. The network supports
         * including a bias neuron with a constant output of 1.0 and weighted synapses
         * to hidden and output layers.
         *
         * @author Martin Wiboe
         */
        public class FeedForwardNetwork {
            private final int outputNeurons;    // No of neurons in output layer
            private final int inputNeurons;     // No of neurons in input layer
            private int largestLayerNeurons;    // No of neurons in largest layer
            private final int numberLayers;     // No of layers
            private final int[] neuronCounts;   // Neuron count in each layer, 0 is input layer.
            private final float[][][] fWeights; // Weights between neurons.
                                                // fWeight[fromLayer][fromNeuron][toNeuron]
                                                // is the weight from fromNeuron in fromLayer
                                                // to toNeuron in layer fromLayer+1.
            private float[][] neuronOutput;     // Temporary storage of output from previous layer

            public float[] compute(float[] input) {
                // Copy input values to input layer output
                for (int i = 0; i < inputNeurons; i++) {
                    neuronOutput[0][i] = input[i];
                }

                // Loop through layers
                for (int layer = 1; layer < numberLayers; layer++) {
                    // Loop over neurons in the layer and determine weighted input sum
                    for (int neuron = 0; neuron < neuronCounts[layer]; neuron++) {
                        // Bias neuron is the last neuron in the previous layer
                        int biasNeuron = neuronCounts[layer - 1];

                        // Get weighted input from bias neuron - output is always 1.0
                        float activation = 1.0F * fWeights[layer - 1][biasNeuron][neuron];

                        // Get weighted inputs from rest of neurons in previous layer
                        for (int inputNeuron = 0; inputNeuron < biasNeuron; inputNeuron++) {
                            activation += neuronOutput[layer - 1][inputNeuron]
                                    * fWeights[layer - 1][inputNeuron][neuron];
                        }

                        // Store neuron output for next round of computation
                        neuronOutput[layer][neuron] = sigmoid(activation);
                    }
                }

                // Return output from network = output from last layer
                float[] result = new float[outputNeurons];
                for (int i = 0; i < outputNeurons; i++)
                    result[i] = neuronOutput[numberLayers - 1][i];
                return result;
            }

            private final static float sigmoid(final float input) {
                return (float) (1.0F / (1.0F + Math.exp(-1.0F * input)));
            }
        }

    I am running the JVM with the -server option, and as of now my code is between 25% and 50% slower than similar C code. What can I do to improve this situation? Thank you, Martin Wiboe
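
    One thing to check, offered as a hedged suggestion rather than a measured fix: in the inner loop, fWeights[layer - 1][inputNeuron][neuron] holds neuron fixed while inputNeuron varies, so every iteration hops to a different row object and reads a single float from it, which is cache-hostile in Java's array-of-arrays layout. Storing the weights transposed lets the hot loop stream one contiguous float[]. A sketch with invented names:

        // A hedged sketch of the layout fix, with invented names: weights are
        // stored transposed as wT[toNeuron][fromNeuron] (bias weight last), so
        // the hot loop streams one contiguous float[] instead of hopping rows.
        static float[] layerForward(float[] prev, float[][] wT) {
            float[] out = new float[wT.length];
            for (int to = 0; to < wT.length; to++) {
                float[] w = wT[to];                // one contiguous row per output neuron
                float activation = w[prev.length]; // bias neuron, constant output 1.0
                for (int from = 0; from < prev.length; from++) {
                    activation += prev[from] * w[from];
                }
                out[to] = (float) (1.0 / (1.0 + Math.exp(-activation)));
            }
            return out;
        }

    The other usual suspect is Math.exp, which can dominate once the memory traffic is fixed; a table-driven or piecewise sigmoid is a common trade when exact float accuracy is not required.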

  • Tips on creating user interfaces and optimizing the user experience

    - by Saif Bechan
    I am currently working on a project where a lot of user interaction is going to take place. There is also a commercial side, as people can buy certain items and services. In my opinion, a good blend of user interface, speed and security is essential for these types of websites. It is fairly easy to use Ajax and JavaScript nowadays to do almost everything, as there are a lot of libraries available, such as jQuery and others, but this can bring performance and incompatibility issues, and those can lead to users just going to the next website.

    The overall look of the website is important too: where to place certain buttons, where to place certain types of articles such as FAQ and support, and where and how to display error messages so that the user sees them but is not bothered by them. An overall color scheme is important as well.

    The basic question is: how do you create an interface that triggers a user to buy/use your services?

    I know psychology also plays a huge role in how users interact with your website. The color scheme, for example, is important; when the colors on a website are irritating, you just want to click away. I have not found any articles that explain those concepts. Does anyone have any tips and/or resources where I can get articles that guide you in making the correct choices for your website?

  • Optimizing Levenshtein Distance Algorithm

    - by Matt
    I have a stored procedure that uses Levenshtein Distance to determine the result closest to what the user typed. The only thing really affecting the speed is the function that calculates the Levenshtein Distance for all the records before selecting the record with the lowest distance (I've verified this by putting a 0 in place of the call to the Levenshtein function). The table has 1.5 million records, so even the slightest adjustment may shave off a few seconds. Right now the entire thing runs over 10 minutes. Here's the method I'm using:

        ALTER FUNCTION dbo.Levenshtein
        (
            @Source nvarchar(200),
            @Target nvarchar(200)
        )
        RETURNS int
        AS
        BEGIN
            DECLARE
                @Source_len int, @Target_len int, @i int, @j int,
                @Source_char nchar, @Dist int, @Dist_temp int,
                @Distv0 varbinary(8000), @Distv1 varbinary(8000)

            SELECT
                @Source_len = LEN(@Source), @Target_len = LEN(@Target),
                @Distv1 = 0x0000, @j = 1, @i = 1, @Dist = 0

            WHILE @j <= @Target_len
            BEGIN
                SELECT @Distv1 = @Distv1 + CAST(@j AS binary(2)), @j = @j + 1
            END

            WHILE @i <= @Source_len
            BEGIN
                SELECT
                    @Source_char = SUBSTRING(@Source, @i, 1), @Dist = @i,
                    @Distv0 = CAST(@i AS binary(2)), @j = 1

                WHILE @j <= @Target_len
                BEGIN
                    SET @Dist = @Dist + 1
                    SET @Dist_temp = CAST(SUBSTRING(@Distv1, @j+@j-1, 2) AS int) +
                        CASE WHEN @Source_char = SUBSTRING(@Target, @j, 1) THEN 0 ELSE 1 END

                    IF @Dist > @Dist_temp
                    BEGIN
                        SET @Dist = @Dist_temp
                    END

                    SET @Dist_temp = CAST(SUBSTRING(@Distv1, @j+@j+1, 2) AS int) + 1

                    IF @Dist > @Dist_temp SET @Dist = @Dist_temp

                    BEGIN
                        SELECT @Distv0 = @Distv0 + CAST(@Dist AS binary(2)), @j = @j + 1
                    END
                END

                SELECT @Distv1 = @Distv0, @i = @i + 1
            END

            RETURN @Dist
        END

    Anyone have any ideas? Any input is appreciated. Thanks, Matt
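
    For this access pattern the big wins are usually algorithmic rather than T-SQL micro-tuning: skip candidates whose length difference already exceeds the best distance found so far (the length gap is a lower bound on Levenshtein distance), and abandon a computation as soon as a whole DP row exceeds that bound. A sketch of the two-row, early-exit formulation, in Java since the idea is language-independent:

        // Sketch: two-row Levenshtein with an early-exit threshold. Returns
        // limit + 1 as soon as the distance provably exceeds `limit`, so most
        // candidate rows can be abandoned after a few characters.
        static int levenshtein(String s, String t, int limit) {
            if (Math.abs(s.length() - t.length()) > limit) return limit + 1; // length lower bound
            int[] prev = new int[t.length() + 1];
            int[] curr = new int[t.length() + 1];
            for (int j = 0; j <= t.length(); j++) prev[j] = j;
            for (int i = 1; i <= s.length(); i++) {
                curr[0] = i;
                int rowMin = curr[0];
                for (int j = 1; j <= t.length(); j++) {
                    int cost = (s.charAt(i - 1) == t.charAt(j - 1)) ? 0 : 1;
                    curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1),
                                       prev[j - 1] + cost);
                    rowMin = Math.min(rowMin, curr[j]);
                }
                if (rowMin > limit) return limit + 1; // no later cell can drop below rowMin
                int[] tmp = prev; prev = curr; curr = tmp;
            }
            return prev[t.length()];
        }

    Driving the scan with a shrinking limit (the best distance seen so far) turns most of the 1.5 million evaluations into a few character comparisons.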

  • Optimizing performance of large ASP.NET applications

    - by NLV
    Hello, I'm building an ASP.NET web application with lots and lots of controls and huge volumes of data. My application is very slow, and it takes a large amount of time to load the data into .NET controls like the grid, tree view, etc. I also have some Ajaxified pages and controls in my application. I want to reduce the page load time on each postback. What are the standards/best practices to be followed while developing large ASP.NET applications? Thank you. NLV

  • Optimizing a lot of Scanner.findWithinHorizon(pattern, 0) calls

    - by darvids0n
    I'm building a process which extracts data from 6 CSV-style files and two poorly laid out .txt reports and builds output CSVs, and I'm fully aware that there's going to be some overhead searching through all that whitespace thousands of times, but I never anticipated converting about 50,000 records would take 12 hours. Excerpt of my manual matching code (I know it's horrible that I use lists of tokens like that, but it was the best thing I could think of):

        public static String lookup(List<String> tokensBefore, List<String> tokensAfter) {
            String result = null;
            while (_match(tokensBefore)) { // block until all input is read
                if (id.hasNext()) {
                    result = id.next(); // capture the next token that matches
                    if (_matchImmediate(tokensAfter)) // try to match tokensAfter to this result
                        return result;
                } else
                    return null; // end of file; no match
            }
            return null; // no matches
        }

        private static boolean _match(List<String> tokens) {
            return _match(tokens, true);
        }

        private static boolean _match(List<String> tokens, boolean block) {
            if (tokens != null && !tokens.isEmpty()) {
                if (id.findWithinHorizon(tokens.get(0), 0) == null)
                    return false;
                for (int i = 1; i <= tokens.size(); i++) {
                    if (i == tokens.size()) { // matches all tokens
                        return true;
                    } else if (id.hasNext() && !id.next().matches(tokens.get(i))) {
                        break; // break to blocking behaviour
                    }
                }
            } else {
                return true; // empty list always matches
            }
            if (block)
                return _match(tokens); // loop until we find something or nothing
            else
                return false; // return after just one attempted match
        }

        private static boolean _matchImmediate(List<String> tokens) {
            if (tokens != null) {
                for (int i = 0; i <= tokens.size(); i++) {
                    if (i == tokens.size()) { // matches all tokens
                        return true;
                    } else if (!id.hasNext() || !id.next().matches(tokens.get(i))) {
                        return false; // doesn't match, or end of file
                    }
                }
                return false; // we have some serious problems if this ever gets called
            } else {
                return true; // empty list always matches
            }
        }

    Basically I'm wondering how I would work in an efficient string search (Boyer-Moore or similar). My Scanner id is scanning a java.util.String; I figured buffering it to memory would reduce I/O, since the search here is being performed thousands of times on a relatively small file. The performance increase compared to scanning a BufferedReader(FileReader(File)) was probably less than 1%, and the process still looks to be taking a LONG time.

    I've also traced execution, and the slowness of my overall conversion process is definitely between the first and last line of the lookup method. In fact, so much so that I ran a shortcut process to count the number of occurrences of various identifiers in the CSV-style files (I use 2 lookup methods; this is just one of them), and that process completed indexing approx 4 different identifiers for 50,000 records in less than a minute. Compared to 12 hours, that's instant.

    Some notes (updated):

    1) I don't necessarily need the pattern-matching behaviour; I only get the first field of a line of text, so I need to match line breaks or use Scanner.nextLine().
    2) All ID numbers I need start at position 0 of a line and run through till the first block of whitespace, after which is the name of the corresponding object.
    3) I would ideally want to return a String, not an int locating the line number or start position of the result, but if it's faster then it will still work just fine. If an int is being returned, however, then I would now have to seek to that line again just to get the ID; storing the ID of every line that is searched sounds like a way around that.

    Anything to help me out, even if it saves 1ms per search, will help, so all input is appreciated. Thank you!

    Usage scenario 1: I have a list of objects in file A, which in the old-style system have an ID number that is not in file A. It is, however, POSSIBLY in another CSV-style file (file B) or possibly still in a .txt report (file C), which each also contain a bunch of other information that is not useful here. File B needs to be searched through for the object's full name (1 token, since it would reside within the second column of any given line), and then the first column should be the ID number. If that doesn't work, we then have to split the search token by whitespace into separate tokens before doing a search of file C for those tokens as well. Generalised code:

        String field;
        for (/* each record in file A */) {
            /* construct the rest of this object from file A info */
            // now to find the ID, if we can
            List<String> objectName = new ArrayList<String>(1);
            objectName.add(Pattern.quote(thisObject.fullName));
            field = lookup(objectSearchToken, objectName); // search file B
            if (field == null) // not found in file B
            {
                lookupReset(false); // initialise scanner to check file C
                objectName.clear(); // not using the full name
                String[] tokens = thisObject.fullName.split(id.delimiter().pattern());
                for (String s : tokens)
                    objectName.add(Pattern.quote(s));
                field = lookup(objectSearchToken, objectName); // search file C
                lookupReset(true); // back to file B
            } else {
                /* found it, file B specific processing here */
            }
            if (field != null) // found it in B or C
                thisObject.ID = field;
        }

    The objectName tokens are all uppercase words with possible hyphens or apostrophes in them, separated by spaces; much like a person's name. As per a comment, I will pre-compile the regex for my objectSearchToken, which is just [\r\n]+.

    What's ending up happening in file C is that every single line is being checked, even the 95% of lines which don't contain an ID number and object name at the start. Would it be quicker to use ^[\r\n]+.*(objectname) instead of two separate regexes? It might reduce the number of _match executions. The more general case of that would be: concatenate all tokensBefore with all tokensAfter and put a .* in the middle. It would need to be matching backwards through the file though, otherwise it would match the correct line but with a huge .* block in the middle spanning lots of lines. The above situation could be resolved if I could get java.util.Scanner to return the token previous to the current one after a call to findWithinHorizon.

    I have another usage scenario. Will put it up asap.
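
    Given note 2 above (IDs start at column 0 and run to the first whitespace, with the object name after), a hedged alternative to repeated findWithinHorizon scans is to read each file once and build a hash index from object name to ID; 50,000 lookups then cost one HashMap get apiece instead of a scan. A sketch, with invented names and the line layout assumed from the notes:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.HashMap;
        import java.util.Map;

        // Sketch: one pass over file B (or C) to index "name -> ID", instead of
        // re-scanning the file per record. Line layout assumed from the notes:
        // ID from position 0 up to the first whitespace, object name after it.
        public final class IdIndex {
            static Map<String, String> indexIdsByName(String path) throws IOException {
                Map<String, String> byName = new HashMap<String, String>();
                BufferedReader in = new BufferedReader(new FileReader(path));
                try {
                    String line;
                    while ((line = in.readLine()) != null) {
                        int ws = indexOfWhitespace(line);
                        if (ws <= 0) continue;          // no ID / name on this line
                        String id = line.substring(0, ws);
                        String name = line.substring(ws).trim();
                        if (!name.isEmpty()) byName.put(name, id);
                    }
                } finally {
                    in.close();
                }
                return byName;
            }

            private static int indexOfWhitespace(String s) {
                for (int i = 0; i < s.length(); i++)
                    if (Character.isWhitespace(s.charAt(i))) return i;
                return -1;
            }
        }

    For file C, the same pass can additionally index each whitespace-separated token of the name, so the fallback search also becomes a constant-time lookup.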

  • Query Optimizing Request

    - by mithilatw
    I am very sorry if this question is not structured in a very helpful manner, or if the question itself is not a very good one! I need to update an MSSQL table called component every 10 minutes, based on information from another table called materials_progress. I have nearly 60000 records in component and more than 10000 records in materials_progress. I wrote an update query to do the job, but it takes longer than 4 minutes to execute! Here is the query:

        UPDATE component
        SET stage_id = CASE
                           WHEN t.required_quantity <= t.total_received THEN 27
                           WHEN t.total_ordered < t.total_received THEN 18
                           ELSE 18
                       END
        FROM (
            SELECT mp.job_id, mp.line_no, mp.component, l.quantity AS line_quantity,
                   CASE WHEN mp.component_name_id = 2 THEN l.quantity * 2
                        ELSE l.quantity
                   END AS required_quantity,
                   SUM(ordered) AS total_ordered,
                   SUM(received) AS total_received,
                   c.component_id
            FROM line l
            LEFT JOIN component c ON c.line_id = l.line_id
            LEFT JOIN materials_progress mp ON l.job_id = mp.job_id
                 AND l.line_no = mp.line_no
                 AND c.component_name_id = mp.component_name_id
            WHERE mp.job_id IS NOT NULL
              AND (mp.cancelled IS NULL OR mp.cancelled = 0)
              AND (mp.manual_override IS NULL OR mp.manual_override = 0)
              AND c.stage_id = 18
            GROUP BY mp.job_id, mp.line_no, mp.component, l.quantity,
                     mp.component_name_id, component_id
        ) AS t
        WHERE component.component_id = t.component_id

    I am not going to explain the scenario, as it is too complex. Could somebody please tell me what makes this query this expensive, and a way to get around it? Thank you very very much in advance!!!

  • Linux PDF/Postscript Optimizing

    - by Sheldon Ross
    So I have a report system built using Java and iText. PDF templates are created using Scribus, the Java code merges the data into the document using iText, the files are then copied over to an NFS share, and a BASH script prints them. I use acroread to convert them to PS, then lpr the PS. The FOSS application pdftops is horribly inefficient.

    My main problem is that the PDFs generated using iText/Scribus are very large, and I've recently run into the problem where acroread pukes because it hits 4GB of memory usage on large (300+ page) documents. (Adobe is painfully slow at updating stuff to 64 bit.)

    Now, on Windows I can use Adobe Reader's "Create Print PDF" option (or whatever it's called), and it greatly (10x) reduces the size of the PDF (it appears to remove a lot of metadata about form fields and such) and produces a PDF that is basically a print image. My question is: does anyone know of a good solution/program for doing something similar on Linux? Ideally, it would optimize the PDF, reduce its size, and reduce the PS complexity so the printer could print faster, as it currently takes about 15-20 seconds per page to print.

  • Optimizing multiple dispatch notification algorithm in C#?

    - by Robert Fraser
    Sorry about the title; I couldn't think of a better way to describe the problem. Basically, I'm trying to implement a collision system in a game. I want to be able to register a "collision handler" that handles any collision of two objects (given in either order) that can be cast to particular types. So if Player : Ship : Entity and Laser : Particle : Entity, and handlers for (Ship, Particle) and (Laser, Entity) are registered, then for a collision of (Laser, Player) both handlers should be notified, with the arguments in the correct order, and a collision of (Laser, Laser) should notify only the second handler. A code snippet says a thousand words, so here's what I'm doing right now (naive method):

        public IObservable<Collision<T1, T2>> onCollisionsOf<T1, T2>()
            where T1 : Entity
            where T2 : Entity
        {
            Type t1 = typeof(T1);
            Type t2 = typeof(T2);
            Subject<Collision<T1, T2>> obs = new Subject<Collision<T1, T2>>();
            _onCollisionInternal += delegate(Entity obj1, Entity obj2)
            {
                if (t1.IsAssignableFrom(obj1.GetType()) && t2.IsAssignableFrom(obj2.GetType()))
                    obs.OnNext(new Collision<T1, T2>((T1) obj1, (T2) obj2));
                else if (t1.IsAssignableFrom(obj2.GetType()) && t2.IsAssignableFrom(obj1.GetType()))
                    obs.OnNext(new Collision<T1, T2>((T1) obj2, (T2) obj1));
            };
            return obs;
        }

    However, this method is quite slow (measurably; I lost ~2 FPS after implementing it), so I'm looking for a way to shave a couple of cycles/allocations off it. I thought about (as in, spent an hour implementing, then slammed my head against a wall for being such an idiot) a method that put the types in an order based on their hash code, then put them into a dictionary, with each entry being a linked list of handlers for pairs of that type, with a boolean indicating whether the handler wanted the order of arguments reversed. Unfortunately, this doesn't work for derived types, since if a derived type is passed in, it won't notify a subscriber for the base type. Can anyone think of a way better than checking every type pair (twice) to see if it matches? Thanks, Robert
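
    One way around the derived-type problem, offered as a sketch rather than a drop-in fix (and written in Java, though the idea maps directly to C# with Type keys): keep the slow assignability scan, but run it only once per concrete (class, class) pair and memoize the resulting handler list, so after warm-up every collision costs a single hash lookup.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Sketch: pairwise dispatch with a memoized handler list per concrete
        // type pair. Names are invented; E would be the game's Entity base type.
        final class CollisionDispatch<E> {
            interface Handler<E> { void collide(E a, E b); }

            private static final class Registration<E> {
                final Class<?> a, b; final Handler<E> h;
                Registration(Class<?> a, Class<?> b, Handler<E> h) {
                    this.a = a; this.b = b; this.h = h;
                }
            }

            private static final class Hit<E> {
                final Handler<E> h; final boolean swapped;
                Hit(Handler<E> h, boolean swapped) { this.h = h; this.swapped = swapped; }
            }

            private final List<Registration<E>> registered = new ArrayList<Registration<E>>();
            private final Map<String, List<Hit<E>>> cache = new HashMap<String, List<Hit<E>>>();

            void register(Class<?> a, Class<?> b, Handler<E> h) {
                registered.add(new Registration<E>(a, b, h));
                cache.clear(); // new registrations invalidate memoized lookups
            }

            void dispatch(E x, E y) {
                String key = x.getClass().getName() + '|' + y.getClass().getName();
                List<Hit<E>> hits = cache.get(key);
                if (hits == null) { // slow path: runs once per concrete class pair
                    hits = new ArrayList<Hit<E>>();
                    for (Registration<E> r : registered) {
                        if (r.a.isInstance(x) && r.b.isInstance(y))
                            hits.add(new Hit<E>(r.h, false));
                        else if (r.a.isInstance(y) && r.b.isInstance(x))
                            hits.add(new Hit<E>(r.h, true)); // handler wants arguments swapped
                    }
                    cache.put(key, hits);
                }
                for (Hit<E> hit : hits)
                    if (hit.swapped) hit.h.collide(y, x); else hit.h.collide(x, y);
            }
        }

    The memo trades a little memory per distinct type pair for removing the per-collision assignability scans; in C# the key could be a small struct of two Type references instead of a string.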

  • Optimizing website - minification, sprites, etc...

    - by nivlam
    I'm looking at the product Aptimize Website Accelerator, which is an ISAPI filter that will concatenate files, minify CSS/JavaScript, and more. Does anyone have experience with this product, or any other "all-in-one" solutions? I'm interested in knowing whether something like this would be good long-term, or whether manually setting up all the components (integrating YUI Compressor into the build process, setting up gzip compression, tweaking expiration headers, etc...) would be more beneficial. An all-in-one solution like this looks very tempting, as it could save a lot of time if our website is "less than optimal". But how efficient are these products? Would setting up the components manually generate better results? Or would the gap between the all-in-one solution and manually setting up the components be so small that it's negligible?

  • Optimizing code using PIL

    - by freakazo
    Firstly, sorry for the long piece of code pasted below. This is my first time actually having to worry about the performance of an application, so I haven't really ever worried about performance before. This piece of code pretty much searches for an image inside another image. It takes 30 seconds to run on my computer; converting the images to greyscale and other changes shaved off 15 seconds, and I need another 15 shaved off. I did read a bunch of pages and looked at examples, but I couldn't find the same problems in my code, so any help would be greatly appreciated. From the looks of it (cProfile), 25 seconds are spent within the Image module and only 5 seconds in my code.

        from PIL import Image
        import os, ImageGrab, pdb, time, win32api, win32con
        import cProfile

        def GetImage(name):
            name = name + '.bmp'
            try:
                print(os.path.join(os.getcwd(), "Images", name))
                image = Image.open(os.path.join(os.getcwd(), "Images", name))
            except:
                print('error opening image;', name)
            return image

        def Find(name):
            image = GetImage(name)
            imagebbox = image.getbbox()
            screen = ImageGrab.grab()
            #screen = Image.open(os.path.join(os.getcwd(), "Images", "Untitled.bmp"))
            YLimit = screen.getbbox()[3] - imagebbox[3]
            XLimit = screen.getbbox()[2] - imagebbox[2]
            image = image.convert("L")
            Screen = screen.convert("L")
            Screen.load()
            image.load()
            #print(XLimit, YLimit)
            Found = False
            image = image.getdata()
            for y in range(0, YLimit):
                for x in range(0, XLimit):
                    BoxCoordinates = x, y, x + imagebbox[2], y + imagebbox[3]
                    ScreenGrab = screen.crop(BoxCoordinates)
                    ScreenGrab = ScreenGrab.getdata()
                    if image == ScreenGrab:
                        Found = True
                        #print("woop")
                        return x, y
            if Found == False:
                return "Not Found"

        cProfile.run('print(Find("Login"))')
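
    The profile points at the crop()/getdata() pair inside the double loop: every candidate position allocates a fresh pixel sequence and compares it in full. The standard fix is to compare raw pixel rows directly and bail out at the first mismatch, which rejects almost every position after a handful of comparisons. That idea, sketched in Java on plain row-major grayscale arrays since the technique is language-independent:

        // Sketch: naive template matching with early exit, on row-major
        // grayscale byte arrays. Rejecting a position at the first differing
        // byte is what makes the scan cheap; no per-position allocation occurs.
        public final class TemplateFind {
            static int[] find(byte[] screen, int sw, int sh, byte[] tmpl, int tw, int th) {
                for (int y = 0; y + th <= sh; y++) {
                    for (int x = 0; x + tw <= sw; x++) {
                        if (matchesAt(screen, sw, tmpl, tw, th, x, y)) {
                            return new int[] { x, y };
                        }
                    }
                }
                return null; // not found
            }

            static boolean matchesAt(byte[] screen, int sw, byte[] tmpl,
                                     int tw, int th, int x, int y) {
                for (int row = 0; row < th; row++) {
                    int s = (y + row) * sw + x; // start of this row in the screen
                    int t = row * tw;           // start of this row in the template
                    for (int col = 0; col < tw; col++) {
                        if (screen[s + col] != tmpl[t + col]) return false; // early exit
                    }
                }
                return true;
            }
        }

    In the PIL version, the analogous move is to pull both images into flat pixel buffers once and compare slices, rather than calling crop() and getdata() per candidate position.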

  • Optimizing Oracle query

    - by Omnipresent
        SELECT MAX(verification_id)
          FROM VERIFICATION_TABLE
         WHERE head = 687422
           AND mbr = 23102
           AND RTRIM(LTRIM(lname)) = '.iq bzw'
           AND TO_CHAR(dob, 'MM/DD/YYYY') = '08/10/2004'
           AND system_code = 'M';

    This query is taking 153 seconds to run, and there are millions of rows in VERIFICATION_TABLE. I think the query is taking this long because of the functions in the WHERE clause. However, I need to do LTRIM/RTRIM on the column, and the date has to be matched in MM/DD/YYYY format. How can I optimize this query?

  • optimizing an sql query using inner join and order by

    - by Sergio B
    I'm trying to optimize the following query without success. Any idea where it could be indexed to prevent the temporary table and the filesort?

        EXPLAIN SELECT SQL_NO_CACHE `groups`.*
        FROM `groups`
        INNER JOIN `memberships` ON `groups`.id = `memberships`.group_id
        WHERE ((`memberships`.user_id = 1)
               AND (`memberships`.`status_code` = 1 AND `memberships`.`manager` = 0))
        ORDER BY groups.created_at DESC
        LIMIT 5;

        +----+-------------+-------------+--------+--------------------------+---------+---------+---------------------------------------------+------+----------------------------------------------+
        | id | select_type | table       | type   | possible_keys            | key     | key_len | ref                                          | rows | Extra                                        |
        +----+-------------+-------------+--------+--------------------------+---------+---------+---------------------------------------------+------+----------------------------------------------+
        |  1 | SIMPLE      | memberships | ref    | grp_usr,grp,usr,grp_mngr | usr     | 5       | const                                        |    5 | Using where; Using temporary; Using filesort |
        |  1 | SIMPLE      | groups      | eq_ref | PRIMARY                  | PRIMARY | 4       | sportspool_development.memberships.group_id |    1 |                                              |
        +----+-------------+-------------+--------+--------------------------+---------+---------+---------------------------------------------+------+----------------------------------------------+
        2 rows in set (0.00 sec)

    Indexes on `groups`:

        +--------+------------+-----------------------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+
        | Table  | Non_unique | Key_name                          | Seq_in_index | Column_name     | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
        +--------+------------+-----------------------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+
        | groups |          0 | PRIMARY                           |            1 | id              | A         |           6 |     NULL | NULL   |      | BTREE      |         |
        | groups |          1 | index_groups_on_name              |            1 | name            | A         |           6 |     NULL | NULL   | YES  | BTREE      |         |
        | groups |          1 | index_groups_on_privacy_setting   |            1 | privacy_setting | A         |           6 |     NULL | NULL   | YES  | BTREE      |         |
        | groups |          1 | index_groups_on_created_at        |            1 | created_at      | A         |           6 |     NULL | NULL   | YES  | BTREE      |         |
        | groups |          1 | index_groups_on_id_and_created_at |            1 | id              | A         |           6 |     NULL | NULL   |      | BTREE      |         |
        | groups |          1 | index_groups_on_id_and_created_at |            2 | created_at      | A         |           6 |     NULL | NULL   | YES  | BTREE      |         |
        +--------+------------+-----------------------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+

    Indexes on `memberships`:

        +-------------+------------+----------------------------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        | Table       | Non_unique | Key_name                                                 | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
        +-------------+------------+----------------------------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        | memberships |          0 | PRIMARY                                                  |            1 | id          | A         |           2 |     NULL | NULL   |      | BTREE      |         |
        | memberships |          0 | grp_usr                                                  |            1 | group_id    | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |
        | memberships |          0 | grp_usr                                                  |            2 | user_id     | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |
        | memberships |          1 | grp                                                      |            1 | group_id    | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |
        | memberships |          1 | usr                                                      |            1 | user_id     | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |
        | memberships |          1 | grp_mngr                                                 |            1 | group_id    | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |
        | memberships |          1 | grp_mngr                                                 |            2 | manager     | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |
        | memberships |          1 | complex_index                                            |            1 | group_id    | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |
        | memberships |          1 | complex_index                                            |            2 | user_id     | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |
        | memberships |          1 | complex_index                                            |            3 | status_code | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |
        | memberships |          1 | complex_index                                            |            4 | manager     | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |
        | memberships |          1 | index_memberships_on_user_id_and_status_code_and_manager |            1 | user_id     | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |
        | memberships |          1 | index_memberships_on_user_id_and_status_code_and_manager |            2 | status_code | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |
        | memberships |          1 | index_memberships_on_user_id_and_status_code_and_manager |            3 | manager     | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |
        +-------------+------------+----------------------------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+

  • Optimizing UITableView

    - by Daniel Granger
    I have a UITableView made up of a custom cell loaded from a nib. This custom cell has 9 UILabels, and that's all. When scrolling the table on my iPhone, the scrolling motion is slightly jerky; it's not as smooth as other table views! (On the simulator it scrolls fine, but I guess that's using the extra power of my Mac.) Are there any tips to help optimize this, or any methods to help find the bottleneck? Many thanks

  • Optimizing encrypted column search

    - by Sung Meister
    I have a table called tblClient with an encrypted column called SSN. Due to company policy, we encrypted SSN using a symmetric key (chosen over an asymmetric key for performance reasons) with a password. Here is a partial LIKE search on SSN:

        declare @SSN varchar(11)
        set @SSN = '111-22-%'

        open symmetric key SSN_KEY decrypt by password = 'secret'

        select Client_ID
        from tblClient (nolock)
        where convert(nvarchar(11), DECRYPTBYKEY(SSN)) like @SSN

        close symmetric key SSN_KEY

    Before encryption, searching through 150,000 records took less than 1 second, but with the decryption mixed in, the same search takes around 5 seconds. What strategy can I apply to try to optimize searching through the encrypted column?
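
    Since DECRYPTBYKEY in the WHERE clause forces a decrypt of every row, one widely used workaround (an assumption here, not something from the question) is a blind index: store a keyed hash, e.g. HMAC-SHA256, of the normalized SSN, or of a fixed prefix of it, in a separate indexed column, and filter on that before decrypting the few candidate rows. A sketch of computing such a tag, in Java:

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        // Sketch: compute a deterministic "blind index" tag for an SSN prefix.
        // The prefix length and key handling are hypothetical; the index key
        // must be protected as carefully as the encryption password itself.
        public final class BlindIndex {
            static byte[] blindIndexTag(String ssn, byte[] indexKey) throws Exception {
                String normalized = ssn.replace("-", "");    // "111-22-3333" -> "111223333"
                String prefix = normalized.substring(0, Math.min(5, normalized.length()));
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(indexKey, "HmacSHA256"));
                return mac.doFinal(prefix.getBytes("UTF-8")); // store in an indexed column
            }
        }

    The prefix length is a deliberate dial: a longer prefix narrows the candidate set further but leaks more equality information about the plaintext.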

  • Optimizing mathematics on arrays of floats in Ada 95 with GNATC

    - by mat_geek
    Consider the code below. It is supposed to process data at a fixed rate, in one-second batches; it is part of an overall system and can't take up too much time. When running over 100 lots of 1 second's worth of data, the program takes 35 seconds, or 35%. How do I improve the code to get the processing time down to a minimum? The code will be running on an Intel Pentium-M, which is a P3 with SSE2.

        package FF is new Ada.Numerics.Generic_Elementary_Functions(Float);

        N : constant Integer := 820;
        type A is array(1 .. N) of Float;
        type A3 is array(1 .. 3) of A;

        procedure F(state : in out A3; result : out A3; l : in A; r : in A) is
           s : Float;
           t : Float;
        begin
           for i in 1 .. N loop
              t := l(i) + r(i);
              t := t / 2.0;
              state(1)(i) := t;
              state(2)(i) := t * 0.25 + state(2)(i) * 0.75;
              state(3)(i) := t * 1.0/64.0 + state(2)(i) * 63.0/64.0;
              for r in 1 .. 3 loop
                 s := state(r)(i);
                 t := FF."**"(s, 6.0) + 14.0;
                 if t > MAX then
                    t := MAX;
                 elsif t < MIN then
                    t := MIN;
                 end if;
                 result(r)(i) := FF.Log(t, 2.0);
              end loop;
           end loop;
        end;
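
    A hedged observation rather than a measured fix: FF."**"(s, 6.0) is a floating-point exponentiation, which elementary-function packages typically evaluate via an exp/log round trip, while a sixth power needs only three multiplies. In the Ada itself that means writing s ** 6 with an integer exponent instead of calling FF."**" with 6.0. The arithmetic of the change, sketched in Java with max and min standing in for the original's MAX and MIN bounds:

        // Sketch: replace pow(s, 6.0) with integer-power multiplication.
        // s^6 = (s^2)^3 costs three multiplies instead of an exp/log round trip.
        public final class PowStep {
            static float step(float s, float max, float min) {
                float s2 = s * s;
                float t = s2 * s2 * s2 + 14.0f;   // s^6 + 14.0
                if (t > max) t = max;
                else if (t < min) t = min;
                // log base 2 via natural log; precompute 1/ln(2) in a hot loop
                return (float) (Math.log(t) / Math.log(2.0));
            }
        }

    The Log(t, 2.0) call can likewise be reduced to one natural log and a multiply by the precomputed constant 1/ln 2.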

  • Optimizing if-else /switch-case with string options

    - by cc
    What modifications would you bring to this piece of code? In the last lines, should I use more if-else structures instead of "if-if-if"?

        if (action.equals("opt1")) {
            // something
        } else {
            if (action.equals("opt2")) {
                // something
            } else {
                if ((action.equals("opt3")) || (action.equals("opt4"))) {
                    // something
                }
                if (action.equals("opt5")) {
                    // something
                }
                if (action.equals("opt6")) {
                    // something
                }
            }
        }

    Later edit: This is Java. I don't think that a switch-case structure will work with Strings.

    Later edit 2: A switch works with the byte, short, char, and int primitive data types. It also works with enumerated types (discussed in Classes and Inheritance) and a few special classes that "wrap" certain primitive types: Character, Byte, Short, and Integer (discussed in Simple Data Objects).
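
    One common pre-Java-7 restructuring (switching on Strings only arrived in Java 7) is to map the option strings onto an enum once, then switch on the enum. The names below are invented to match the snippet:

        // Sketch: map the strings onto an enum once, then switch.
        // Enum and method names are invented to match the snippet.
        public class ActionDispatch {
            enum Action {
                OPT1, OPT2, OPT3, OPT4, OPT5, OPT6;

                static Action parse(String s) {
                    return valueOf(s.toUpperCase()); // "opt1" -> OPT1; throws on unknown input
                }
            }

            static void handle(String action) {
                switch (Action.parse(action)) {
                    case OPT1: /* something */ break;
                    case OPT2: /* something */ break;
                    case OPT3:
                    case OPT4: /* something for opt3 or opt4 */ break;
                    case OPT5: /* something */ break;
                    case OPT6: /* something */ break;
                }
            }
        }

    A Map<String, Runnable> (or a small handler interface) achieves the same dispatch without the naming convention, at the cost of a little more setup code.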

  • Optimizing a large iteration of PHP objects (EAV-based)

    - by Aron Rotteveel
    I am currently working on a project that utilizes the EAV model. This turns out to work quite well, but like many others I am now stumbling upon some performance issues. The data set in this particular case consists of approximately 2500 entities, each with approx. 150 attributes. Each entity and each attribute is represented by a PHP object. Since most parts of the application only iterate through a filtered set of entities, we have not had very large issues yet. Now, however, I am working on an algorithm that requires iteration over the entire dataset, which causes a major impact on performance. This information is perhaps not very much to work with, but since this is an architectural problem, I am hoping for an architectural pattern to help me on the way as well. Each entity, including its attributes, takes up approx. 500KB of memory.
