Search Results

Search found 13249 results on 530 pages for 'virtualized performance'.


  • mysql subquery strangely slow

    - by aviv
    I have a query that selects from a subquery. Although the two queries below look almost the same, the second one runs much slower:

        SELECT user.id
               ,user.first_name
               -- user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type='user'
                            AND education.institute_id='58'
                            AND education.institute_type='1');

    This query takes 1.2s. EXPLAIN on this query gives:

        id  select_type         table      type            possible_keys                                            key         key_len  ref   rows    Extra
        1   PRIMARY             user       index                                                                    first_name  152            141192  Using where; Using index
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2   ref_id      4        func  1       Using where

    The second query:

        SELECT -- user.id
               -- user.first_name
               user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type='user'
                            AND education.institute_id='58'
                            AND education.institute_type='1');

    takes 45 sec to run, with EXPLAIN:

        id  select_type         table      type            possible_keys                                            key     key_len  ref   rows    Extra
        1   PRIMARY             user       ALL                                                                                              141192  Using where
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2   ref_id  4        func  1       Using where

    Why is it slower when I select only indexed fields? Why do both queries scan the full length of the user table? Any ideas how to improve? Thanks.

    Read the article

  • Prepending to a multi-gigabyte file.

    - by dafmetal
    What would be the most performant way to prepend a single character to a multi-gigabyte file (in my practical case, a 40GB file)? There is no limitation on the implementation: it can be a tool, a shell script, a program in any programming language, ...
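
    Since a file cannot grow at its start, any approach ends up rewriting the data. A minimal C# sketch of the straightforward route (write the new character, then stream-copy the original into a temporary file and swap it in); the paths are placeholders:

        using System.IO;

        class Prepend
        {
            static void Main()
            {
                const string source = "huge.dat";       // placeholder path
                const string temp   = "huge.dat.tmp";   // written next to the original

                using (var output = File.Create(temp))
                {
                    output.WriteByte((byte)'X');         // the single character to prepend
                    using (var input = File.OpenRead(source))
                    {
                        input.CopyTo(output);            // streams the 40GB in chunks, constant memory
                    }
                }
                File.Delete(source);
                File.Move(temp, source);                 // swap the rewritten file into place
            }
        }

    Whatever the language or tool, the dominant cost is the same: one full copy of the 40GB.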

    Read the article

  • SQL Server Express 2008 Stored Procedure execution time spikes periodically

    - by user156241
    I have a big stored procedure on a SQL Server 2008 Express SP2 database that gets run about every 200 ms. Normal execution time is about 50 ms. What I am seeing is large inconsistencies in this run time. It will execute for a while, say 50-100 times at 40-60 ms, which is expected; then, seemingly at random, the same stored procedure will take way longer, say 900 ms or 1.5 seconds, to run. Sometimes more than one call of the same procedure in a row will take longer too. It appears that something is causing SQL Server to slow down dramatically every minute or so, but I can't figure out what. There is no timing pattern between the occurrences. I have the same setup on two different computers, one of which is a clean XP Pro load with no virus checking and nothing installed except SQL Server. Also, the recovery options for all the databases are set to "Simple".

    Read the article

  • Using VirtualMode on a DataGridView when the number of rows/columns isn't known

    - by Nathan Baulch
    I need to display a sequence of dictionaries, of unknown length and with unknown keys, efficiently in a data grid. The sequence is the result of a potentially slow LINQ query that could contain any number of results. At first I thought that VirtualMode on DataGridView was what I was looking for, but it appears that the number of rows and columns must be known up front. I tried adding a single row and column and then adding more as needed from the CellValueNeeded event, but this doesn't work. Is this even possible with VirtualMode? Or do I need to estimate how many rows are visible on the screen and manually build up the rows/columns? And if so, how do I ensure that a vertical scrollbar is present and react appropriately when a user uses it?
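
    One pattern that fits VirtualMode is to buffer rows from the slow query as they arrive and grow RowCount to match, so the grid never asks for data it cannot have yet. A rough C# sketch, assuming the results are accumulated into an in-memory list on another thread (the buffer, column handling and form are illustrative, not a complete solution):

        using System.Collections.Generic;
        using System.Linq;
        using System.Windows.Forms;

        class ResultsForm : Form
        {
            readonly DataGridView grid = new DataGridView { Dock = DockStyle.Fill, VirtualMode = true };
            readonly List<Dictionary<string, object>> buffer = new List<Dictionary<string, object>>(); // filled by the slow LINQ query
            readonly List<string> columns = new List<string>();                                         // union of keys seen so far

            public ResultsForm()
            {
                Controls.Add(grid);
                grid.CellValueNeeded += (s, e) =>
                {
                    if (e.RowIndex >= buffer.Count || e.ColumnIndex >= columns.Count) return; // not fetched yet
                    object value;
                    e.Value = buffer[e.RowIndex].TryGetValue(columns[e.ColumnIndex], out value) ? value : null;
                };
            }

            // Call on the UI thread whenever another batch of dictionaries has been buffered.
            public void OnBatchArrived()
            {
                foreach (var key in buffer.SelectMany(d => d.Keys).Distinct())
                    if (!columns.Contains(key)) { columns.Add(key); grid.Columns.Add(key, key); }
                grid.RowCount = buffer.Count;   // the grid manages the scrollbar itself
            }
        }

    Because RowCount only ever grows, the vertical scrollbar appears and resizes automatically, and CellValueNeeded is raised only for cells that scroll into view; nothing special is needed to react to the user scrolling.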

    Read the article

  • Speed up math code in C# by writing a C dll?

    - by Projectile Fish
    I have a very large nested for loop in which some multiplications and additions are performed on floating point numbers:

        for (int i = 0; i < length1; i++)
        {
            s = GetS(i);
            c = GetC(i);
            for (int j = 0; j < length2; j++)
            {
                double oldU = u[j];
                u[j] = c * oldU + s * omega[i][j];
                omega[i][j] = c * omega[i][j] - s * oldU;
            }
        }

    This loop is taking up the majority of my processing time and is a bottleneck. Would I be likely to see any speed improvements if I rewrite this loop in C and interface to it from C#?
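
    For reference, calling into a native implementation from C# is usually done with P/Invoke. A sketch of what the managed side might look like; the DLL name and exported function are hypothetical, and the native rotation routine would still have to be written and built separately:

        using System.Runtime.InteropServices;

        static class NativeMath
        {
            // Hypothetical export: applies one rotation pass over u and a single omega row
            // of length 'length2', using the coefficients c and s.
            [DllImport("rotations.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern void RotateRow(double[] u, double[] omegaRow, double c, double s, int length2);
        }

        // Usage inside the outer loop (omega being a jagged array keeps each row contiguous):
        // for (int i = 0; i < length1; i++)
        //     NativeMath.RotateRow(u, omega[i], GetC(i), GetS(i), length2);

    Whether this beats the C# loop is not a given: the JIT already compiles the managed loop to native code, and per-call interop overhead can eat the gains unless each call does enough work, so it is worth measuring both versions.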

    Read the article

  • What's the good of the IDE's auto-generated @Override annotation?

    - by Tony
    I am using Eclipse. When I use the shortcut to generate override implementations, there is an @Override annotation above each method. I am using JDK 6, where this is all right, but under JDK 5 this annotation will cause an error. So I want to ask: is this annotation completely useless? Will the compiler do some kind of optimization using this annotation?

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2 hour period, 5-10 million inserts into a 34GB table within a single Master/Slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two hour period. So, I have a couple of general questions. 1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently, and approximating the order of insertion with something like a Datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem. So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID, though I don't really see what that achieves that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes? Thanks, Ben
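
    One way to keep client-side key generation (so each row's ID is known before a batch is sent) without the random-insert penalty is a time-prefixed GUID, so values still arrive in roughly ascending order. A rough C# sketch of the idea, assuming the key is stored as a 16-byte value compared byte-wise; this is an illustration of the scheme, not the asker's actual stack:

        using System;

        static class SequentialGuid
        {
            // First 8 bytes: current UTC ticks (big-endian) so keys sort by creation time;
            // last 8 bytes: random, to keep keys unique across processes.
            public static byte[] NewKey()
            {
                var key = new byte[16];
                byte[] ticks = BitConverter.GetBytes(DateTime.UtcNow.Ticks);
                if (BitConverter.IsLittleEndian) Array.Reverse(ticks);   // make the prefix sort correctly
                Buffer.BlockCopy(ticks, 0, key, 0, 8);

                byte[] random = Guid.NewGuid().ToByteArray();
                Buffer.BlockCopy(random, 0, key, 8, 8);
                return key;
            }
        }

        // Each process can assign keys up front, batch the inserts,
        // and still refer to every row immediately:  var id = SequentialGuid.NewKey();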

    Read the article

  • How to do some performance testing in asp.net mvc?

    - by chobo2
    Hi, I am using asp.net mvc 2.0 and I want to test how long some of my code takes. In one scenario I do this:

    1. Load the xml file up.
    2. Validate the xml file and deserialize.
    3. Validate all rows in the xml file with more advanced validation that cannot be done in the schema validation.
    4. Then I do a bulk insert.

    I want to know how long steps 1 to 3 take and how long step 4 takes. I tried using DateTime.UtcNow in various places and subtracting them, but it told me it took about 3 seconds, which I know is not right, as steps 1 to 4 take 2 minutes to do.
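
    System.Diagnostics.Stopwatch is the usual tool for this kind of wall-clock measurement and avoids the coarse resolution of DateTime. A sketch of how the two phases could be timed; the step method names are placeholders for the code described above:

        using System.Diagnostics;

        var sw = Stopwatch.StartNew();

        LoadXml();                  // step 1 (placeholder)
        ValidateAndDeserialize();   // step 2 (placeholder)
        ValidateRows();             // step 3 (placeholder)
        long prepareMs = sw.ElapsedMilliseconds;

        sw.Restart();
        BulkInsert();               // step 4 (placeholder)
        long insertMs = sw.ElapsedMilliseconds;

        Trace.WriteLine(string.Format("prepare: {0} ms, bulk insert: {1} ms", prepareMs, insertMs));

    If the DateTime-based numbers and the observed 2 minutes disagree this much, the timing calls may not be bracketing the code that actually runs, for example a deferred or lazily-evaluated operation that only executes later.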

    Read the article

  • What types of websites does memcached speed up?

    - by Saif Bechan
    I have read this article about a 400% boost for your website, achieved by a combination of nginx and memcached. The how-to part of the article is quite good, but I miss the part where it says what types of websites this applies to. I know nginx is an HTTP server, I need no explanation for that. I thought memcached had something to do with caching database results; however, I don't understand what this has to do with the HTTP request. Can someone please explain that to me? Another question I have is what types of websites this is used for. I have a website where the important part consists of data that changes often, often within minutes. Will this method still apply to me, or should I just stick with the basic boring setup of Apache and nothing else?

    Read the article

  • How do I make this nested for loop, testing sums of cubes, more efficient?

    - by Brian J. Fink
    I'm trying to iterate through all the combinations of pairs of positive long integers in Java and testing the sum of their cubes to discover if it's a Fibonacci number. I'm currently doing this by using the value of the outer loop variable as the inner loop's upper limit, with the effect that the outer loop runs a little slower each time. Initially it appeared to run very quickly; I was up to 10 digits within minutes. But now, after 2 full days of continuous execution, I'm only somewhere in the middle range of 15 digits. At this rate it may end up taking a whole year just to finish running this program. The code for the program is below:

        import java.lang.*;
        import java.math.*;

        public class FindFib
        {
            public static void main(String args[])
            {
                long uLimit = 9223372036854775807L;                          // long maximum value
                BigDecimal PHI = new BigDecimal((1D + Math.sqrt(5D)) / 2D);  // Golden Ratio
                for (long a = 1; a <= uLimit; a++)                           // Outer Loop, 1 to maximum
                    for (long b = 1; b <= a; b++)                            // Inner Loop, 1 to current outer
                    {
                        // Cube the numbers and add
                        BigDecimal c = BigDecimal.valueOf(a).pow(3).add(BigDecimal.valueOf(b).pow(3));
                        System.out.print(c + " ");                           // Output result
                        // Upper and lower limits of interval for Mobius test: [c*PHI-1/c, c*PHI+1/c]
                        BigDecimal d = c.multiply(PHI).subtract(BigDecimal.ONE.divide(c, BigDecimal.ROUND_HALF_UP)),
                                   e = c.multiply(PHI).add(BigDecimal.ONE.divide(c, BigDecimal.ROUND_HALF_UP));
                        // Mobius test: if integer in interval (floor values unequal), Fibonacci number!
                        if (d.toBigInteger().compareTo(e.toBigInteger()) != 0)
                            System.out.println();                            // Line feed
                        else
                            System.out.print("\r");                          // Carriage return instead
                    }
                // Display final message
                System.out.println("\rDone. ");
            }
        }

    Now the use of BigDecimal and BigInteger was deliberate; I need them to get the necessary precision. Is there anything other than my variable types that I could change to gain better efficiency?
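
    For context, the interval check the code relies on appears to be the standard closed-form Fibonacci test, which (assuming that is the intended criterion) can be stated as:

        n \text{ is a Fibonacci number} \iff \left[\, n\varphi - \tfrac{1}{n},\ n\varphi + \tfrac{1}{n} \,\right] \text{ contains a positive integer}, \qquad \varphi = \frac{1 + \sqrt{5}}{2}

    which is what the comparison of the floors of d and e implements.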

    Read the article

  • Fastest XML parser for small, simple documents in Java

    - by Varkhan
    I have to objectify very simple and small XML documents (less than 1k, and it's almost SGML: no namespaces, plain UTF-8, you name it...), read from a stream, in Java. I am using JAXP to process the data from my stream into a Document object. I have tried Xerces; it's way too big and slow. I am using Dom4j, but I am still spending way too much time in org.dom4j.io.SAXReader. Does anybody out there have any suggestion for a faster, more efficient implementation, keeping in mind I have very tough CPU and memory constraints? [Edit 1] Keep in mind that my documents are very small, so the overhead of starting the parser can be significant. For instance, I am spending as much time in org.xml.sax.helpers.XMLReaderFactory.createXMLReader as in org.dom4j.io.SAXReader.read. [Edit 2] The result has to be in DOM format, as I pass the document to decision tools that do arbitrary processing on it, like switching code based on the value of arbitrary XPaths, but also extracting lists of values packed as children of a predefined node. [Edit 3] In any case I eventually need to load/parse the complete document, since all the information it contains is going to be used at some point. (This question is related to, but different from, http://stackoverflow.com/questions/373833/best-xml-parser-for-java )

    Read the article

  • oprofile unable to produce call graph

    - by aaa
    Hello, I am trying to use oprofile to generate a call graph. The compiler is g++, the platform is Linux x86-64, the linker is gfortran. The C++ code is compiled with -fno-omit-frame-pointer. oprofile is started with --callgraph=25, and I run the report with --callgraph. The call graph is produced, but it only includes self time, which is not much use. What am I missing?

    Read the article

  • Efficient way to delete a line from a text file (C#)

    - by Valentin Vasilyev
    Hello. I need to delete a certain line from a text file. What is the most efficient way of doing this? The file can potentially be large (over a million records). Thank you. UPDATE: below is the code I'm currently using, but I'm not sure if it is good:

        internal void DeleteMarkedEntries()
        {
            string tempPath = Path.GetTempFileName();
            using (var reader = new StreamReader(logPath))
            using (var writer = new StreamWriter(File.OpenWrite(tempPath)))
            {
                int counter = 0;
                while (!reader.EndOfStream)
                {
                    // Read unconditionally so the reader always advances,
                    // then only write out lines that are not marked for deletion.
                    string line = reader.ReadLine();
                    if (!_deletedLines.Contains(counter))
                    {
                        writer.WriteLine(line);
                    }
                    ++counter;
                }
            }
            if (File.Exists(tempPath))
            {
                File.Delete(logPath);
                File.Move(tempPath, logPath);
            }
        }

    Read the article

  • How do polymorphic inline caches work with mutable types?

    - by kingkilr
    A polymorphic inline cache works by caching the actual method by the type of the object, in order to avoid expensive lookup procedures (usually a hashtable lookup). How does one handle the type comparison if the type objects are mutable (i.e. a method might be monkey patched into something different at run time)? The one idea I've come up with is a "class counter" that gets incremented each time a method is adjusted; however, this seems like it would be exceptionally expensive in a heavily monkey patched environment, since it would kill all the PICs for that class even if the methods they cached weren't altered. I'm sure there must be a good solution to this, as this issue is directly applicable to Javascript and AFAIK all 3 of the big JS VMs have PICs (wow, acronym ahoy).
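
    To make the "class counter" idea concrete, here is a small illustrative sketch (in C#, standing in for VM internals) of a cache entry guarded by both the receiver's class and the version that class had when the entry was filled. This is only a model of the scheme described above, not how any particular JS VM implements it:

        using System;
        using System.Collections.Generic;

        class RuntimeClass
        {
            public int Version;   // bumped on every monkey patch
            public readonly Dictionary<string, Func<object, object>> Methods =
                new Dictionary<string, Func<object, object>>();

            public void Redefine(string name, Func<object, object> body)
            {
                Methods[name] = body;
                Version++;        // invalidates every cache entry for this class
            }
        }

        class CallSiteCache
        {
            RuntimeClass cachedClass;
            int cachedVersion;
            Func<object, object> cachedMethod;

            public object Invoke(RuntimeClass klass, string name, object receiver)
            {
                // Fast path: same class AND same version as when the entry was cached.
                if (klass == cachedClass && klass.Version == cachedVersion)
                    return cachedMethod(receiver);

                // Slow path: full lookup, then refill the cache.
                cachedClass = klass;
                cachedVersion = klass.Version;
                cachedMethod = klass.Methods[name];
                return cachedMethod(receiver);
            }
        }

    A finer-grained variant versions each method (or each selector) instead of the whole class, so patching one method does not flush cache entries for unrelated ones.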

    Read the article

  • PostgreSQL: Why does this simple query not use the index?

    - by David
    I have a table t with a column c, which is an int and has a btree index on it. Why does the following query not utilize this index?

        explain select c from t group by c;

    The result I get is:

        HashAggregate  (cost=1005817.55..1005817.71 rows=16 width=4)
          ->  Seq Scan on t  (cost=0.00..946059.84 rows=23903084 width=4)

    My understanding of indexes is limited, but I thought such queries were the purpose of indexes.

    Read the article

  • .NET WebService IPC - Should it be done to minimise some expensive operations?

    - by Kyle
    I'm looking at a few different approaches to a problem: a client requests work, some stuff gets done, and a result (ok/error) is returned. A .NET web service definitely seems like the way to go; my only issue is that the "stuff" will involve building up and tearing down a session for each request. Does abstracting the "stuff" out to an app (which would keep a single session active and process the requests from the web service) seem like the right way to go? And if so, what communication method? The work time is negligible; my concern is the hammering the transaction servers in question will probably get if I create/drop a session for each job. Is some form of IPC or socket based communication a feasible solution here? Thoughts/comments/experiences much appreciated. Edit: After a bit more research, it seems like hosting a WCF service in a Windows Service is probably a better way to go...
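
    As a sketch of the direction mentioned in the edit: a WCF service hosted in a Windows Service can be configured as a single instance, so one long-lived backend session is shared across requests instead of being rebuilt per call. The BackendSession type below is hypothetical and stands in for whatever wraps the transaction-server session:

        using System.ServiceModel;

        [ServiceContract]
        public interface IWorkService
        {
            [OperationContract]
            string SubmitJob(string payload);
        }

        // One service object for the lifetime of the host, so the session is built once.
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class WorkService : IWorkService
        {
            private readonly BackendSession session = new BackendSession();   // hypothetical long-lived session
            private readonly object gate = new object();

            public string SubmitJob(string payload)
            {
                lock (gate)                            // serialize access if the session is not thread-safe
                {
                    return session.Process(payload);   // hypothetical call into the transaction server
                }
            }
        }

        // Inside the Windows Service's OnStart:
        //   host = new ServiceHost(new WorkService());
        //   host.Open();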

    Read the article

  • [N]Hibernate: view-like fetching properties of associated class

    - by chiccodoro
    (Felt quite helpless in formulating an appropriate title...) In my C# app I display a list of "A" objects, along with some properties of their associated "B" objects and properties of B's associated "C" objects:

        A.Name  B.Name  B.SomeValue  C.Name
        Foo     Bar     123          HelloWorld
        Bar     Hello   432          World
        ...

    To clarify: A has an FK to B, B has an FK to C (such as, e.g., BankAccount - Person - Company). I have tried two approaches to load these properties from the database (using NHibernate): a fast approach and a clean approach. My eventual question is how to do a fast and clean approach.

    Fast approach:

    - Define a view in the database which joins A, B, C and provides all these fields.
    - In the A class, define properties "BName", "BSomeValue", "CName".
    - Define a hibernate mapping between A and the view, where the needed B and C properties are mapped with update="false" insert="false"; they actually stem from the B and C tables, but Hibernate is not aware of that since it uses the view.

    This way, the listing only loads one object per "A" record, which is quite fast. If the code tries to access the actual associated property, "A.B", I issue another HQL query to get B, set the property and update the faked BName and BSomeValue properties as well.

    Clean approach:

    - There is no view. Class A is mapped to table A, B to B, C to C.
    - When loading the list of A, I do a double left-join-fetch to get B and C as well:

          from A a left join fetch a.B left join fetch a.B.C

    - B.Name, B.SomeValue and C.Name are accessed through the eagerly loaded associations.

    The disadvantage of this approach is that it gets slower and takes more memory, since it needs to create and map 3 objects per "A" record: an A, a B, and a C object.

    Fast and clean approach: I feel somehow uncomfortable using a database view that hides a join and treating it in NHibernate as if it were a table. So I would like to do something like:

    - Have no views in the database.
    - Declare properties "BName", "BSomeValue", "CName" in class "A".
    - Define the mapping for A such that NHibernate fetches A and these properties together using a join SQL query, as a database view would do.
    - The mapping should still allow for defining lazy many-to-one associations for getting A.B.C.

    My questions: Is this possible? Is it [un]artful? Is there a better way?

    Read the article

  • Fastest implementation of the frac function in C#

    - by user349937
    I would like to implement a frac function in C# (just like the one in HLSL, see http://msdn.microsoft.com/en-us/library/bb509603%28VS.85%29.aspx), but since it is for a very processor intensive application I would like the best version possible. I was using something like:

        public float Frac(float value)
        {
            return value - (float)Math.Truncate(value);
        }

    but I'm having precision problems; for example, for 2.6f the unit test reports:

        Expected: 0.600000024f
        But was:  0.599999905f

    I know that I can convert the value to decimal and then convert back to float at the end to obtain the correct result, something like this:

        public float Frac(float value)
        {
            return (float)((decimal)value - Decimal.Truncate((decimal)value));
        }

    But I wonder if there is a better way without resorting to decimals...
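
    Part of what the unit test shows is the representation of the inputs themselves rather than the subtraction: neither 2.6f nor 0.6f is exactly representable as a float. A small check that makes this visible (widening a float to double is exact, so it exposes the stored values):

        using System;

        class FracCheck
        {
            static void Main()
            {
                Console.WriteLine((double)2.6f);   // roughly 2.5999999046..., the value actually stored in 2.6f
                Console.WriteLine((double)0.6f);   // roughly 0.6000000238..., which is what 0.600000024f in the test really is
                Console.WriteLine(2.6f - (float)Math.Truncate(2.6f));   // 0.5999999..., the exact fractional part of the stored 2.6f
            }
        }

    In other words, 0.599999905f is the correct frac of the value the float actually holds; the decimal round-trip only looks better because the float-to-decimal conversion rounds to roughly seven significant digits first.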

    Read the article

  • Stored procedure optimization

    - by George Zacharia
    Hi, I have a stored procedure which takes a lot of time to execute. Can anyone suggest a better approach so that the same result set is achieved?

        ALTER PROCEDURE [dbo].[spFavoriteRecipesGET]
            @USERID INT,
            @PAGENUMBER INT,
            @PAGESIZE INT,
            @SORTDIRECTION VARCHAR(4),
            @SORTORDER VARCHAR(4),
            @FILTERBY INT
        AS
        BEGIN
            DECLARE @ROW_START INT
            DECLARE @ROW_END INT
            SET @ROW_START = (@PageNumber-1)*@PageSize+1
            SET @ROW_END = @PageNumber*@PageSize
            DECLARE @RecipeCount INT
            DECLARE @RESULT_SET_TABLE TABLE
            (
                Id INT NOT NULL IDENTITY(1,1),
                FavoriteRecipeId INT,
                RecipeId INT,
                DateAdded DATETIME,
                Title NVARCHAR(255),
                UrlFriendlyTitle NVARCHAR(250),
                [Description] NVARCHAR(MAX),
                AverageRatingId FLOAT,
                SubmittedById INT,
                SubmittedBy VARCHAR(250),
                RecipeStateId INT,
                RecipeRatingId INT,
                ReviewCount INT,
                TweaksCount INT,
                PhotoCount INT,
                ImageName NVARCHAR(50)
            )

            INSERT INTO @RESULT_SET_TABLE
            SELECT
                FavoriteRecipes.FavoriteRecipeId,
                Recipes.RecipeId,
                FavoriteRecipes.DateAdded,
                Recipes.Title,
                Recipes.UrlFriendlyTitle,
                Recipes.[Description],
                Recipes.AverageRatingId,
                Recipes.SubmittedById,
                COALESCE(users.DisplayName, users.UserName, Recipes.SubmittedBy) AS SubmittedBy,
                Recipes.RecipeStateId,
                RecipeReviews.RecipeRatingId,
                COUNT(RecipeReviews.Review),
                COUNT(RecipeTweaks.Tweak),
                COUNT(Photos.PhotoId),
                dbo.udfGetRecipePhoto(Recipes.RecipeId) AS ImageName
            FROM FavoriteRecipes
            INNER JOIN Recipes
                ON FavoriteRecipes.RecipeId = Recipes.RecipeId
                AND Recipes.RecipeStateId <> 3
            LEFT OUTER JOIN RecipeReviews
                ON RecipeReviews.RecipeId = Recipes.RecipeId
                AND RecipeReviews.ReviewedById = @UserId
                AND RecipeReviews.RecipeRatingId =
                (
                    SELECT MAX(RecipeReviews.RecipeRatingId)
                    FROM RecipeReviews
                    WHERE RecipeReviews.ReviewedById = @UserId
                      AND RecipeReviews.RecipeId = FavoriteRecipes.RecipeId
                )
                OR RecipeReviews.RecipeRatingId IS NULL
            LEFT OUTER JOIN RecipeTweaks
                ON RecipeTweaks.RecipeId = Recipes.RecipeId
                AND RecipeTweaks.TweakedById = @UserId
            LEFT OUTER JOIN Photos
                ON Photos.RecipeId = Recipes.RecipeId
                AND Photos.UploadedById = @UserId
                AND Photos.RecipeId = FavoriteRecipes.RecipeId
                AND Photos.PhotoTypeId = 1
            LEFT OUTER JOIN users
                ON Recipes.SubmittedById = users.UserId
            WHERE FavoriteRecipes.UserId = @UserId
            GROUP BY
                FavoriteRecipes.FavoriteRecipeId,
                Recipes.RecipeId,
                FavoriteRecipes.DateAdded,
                Recipes.Title,
                Recipes.UrlFriendlyTitle,
                Recipes.[Description],
                Recipes.AverageRatingId,
                Recipes.SubmittedById,
                Recipes.SubmittedBy,
                Recipes.RecipeStateId,
                RecipeReviews.RecipeRatingId,
                users.DisplayName,
                users.UserName,
                Recipes.SubmittedBy;

            WITH SortResults AS
            (
                SELECT
                    ROW_NUMBER() OVER
                    (
                        ORDER BY
                            CASE WHEN @SORTDIRECTION = 't'  AND @SORTORDER = 'a' THEN TITLE END ASC,
                            CASE WHEN @SORTDIRECTION = 't'  AND @SORTORDER = 'd' THEN TITLE END DESC,
                            CASE WHEN @SORTDIRECTION = 'r'  AND @SORTORDER = 'a' THEN AverageRatingId END ASC,
                            CASE WHEN @SORTDIRECTION = 'r'  AND @SORTORDER = 'd' THEN AverageRatingId END DESC,
                            CASE WHEN @SORTDIRECTION = 'mr' AND @SORTORDER = 'a' THEN RecipeRatingId END ASC,
                            CASE WHEN @SORTDIRECTION = 'mr' AND @SORTORDER = 'd' THEN RecipeRatingId END DESC,
                            CASE WHEN @SORTDIRECTION = 'd'  AND @SORTORDER = 'a' THEN DateAdded END ASC,
                            CASE WHEN @SORTDIRECTION = 'd'  AND @SORTORDER = 'd' THEN DateAdded END DESC
                    ) RowNumber,
                    FavoriteRecipeId, RecipeId, DateAdded, Title, UrlFriendlyTitle, [Description],
                    AverageRatingId, SubmittedById, SubmittedBy, RecipeStateId, RecipeRatingId,
                    ReviewCount, TweaksCount, PhotoCount, ImageName
                FROM @RESULT_SET_TABLE
                WHERE ((@FILTERBY = 1 AND SubmittedById = @USERID)
                    OR (@FILTERBY = 2 AND (SubmittedById <> @USERID OR SubmittedById IS NULL))
                    OR (@FILTERBY <> 1 AND @FILTERBY <> 2))
            )
            SELECT
                RowNumber, FavoriteRecipeId, RecipeId, DateAdded, Title, UrlFriendlyTitle, [Description],
                AverageRatingId, SubmittedById, SubmittedBy, RecipeStateId, RecipeRatingId,
                ReviewCount, TweaksCount, PhotoCount, ImageName
            FROM SortResults
            WHERE RowNumber BETWEEN @ROW_START AND @ROW_END

            print @ROW_START
            print @ROW_END

            SELECT @RecipeCount = dbo.udfGetFavRecipesCount(@UserId)
            SELECT @RecipeCount AS RecipeCount

            SELECT COUNT(Id) AS FilterCount
            FROM @RESULT_SET_TABLE
            WHERE ((@FILTERBY = 1 AND SubmittedById = @USERID)
                OR (@FILTERBY = 2 AND (SubmittedById <> @USERID OR SubmittedById IS NULL))
                OR (@FILTERBY <> 1 AND @FILTERBY <> 2))
        END

    Read the article

  • Which regular expression is more efficient?

    - by Vagnerr
    I'm parsing some big log files and have some very simple string matches, for example:

        if (m/Some String Pattern/o) {
            # Do something
        }

    It seems simple enough, but in fact most of the matches I have could be against the start of the line, where the match would be "longer", for example:

        if (m/^Initial static string that matches Some String Pattern/o) {
            # Do something
        }

    Obviously this is a longer regular expression and so more work to match. However, I can use the start-of-line anchor, which would allow an expression to be discarded as a failed match sooner. It is my hunch that the latter would be more efficient. Can anyone back me up/shoot me down :-)
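
    The hunch is easy to check empirically: run both patterns over a large batch of non-matching lines and compare timings. A quick illustration in C# (not the asker's Perl, but the same experiment translates directly; the sample line and counts are made up):

        using System;
        using System.Diagnostics;
        using System.Text.RegularExpressions;

        class AnchorBench
        {
            static void Main()
            {
                // A line that fails both patterns, like most lines in a big log file.
                string line = "2010-05-05 12:00:00 something entirely unrelated happened";
                var plain    = new Regex("Some String Pattern", RegexOptions.Compiled);
                var anchored = new Regex("^Initial static string that matches Some String Pattern", RegexOptions.Compiled);

                const int n = 1000000;
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < n; i++) plain.IsMatch(line);
                Console.WriteLine("unanchored: {0} ms", sw.ElapsedMilliseconds);

                sw.Restart();
                for (int i = 0; i < n; i++) anchored.IsMatch(line);
                Console.WriteLine("anchored:   {0} ms", sw.ElapsedMilliseconds);
            }
        }

    Results vary by engine, since fixed substrings are themselves heavily optimized, so only a measurement on real log data settles it.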

    Read the article

  • Approach for altering Primary Key from GUID to BigInt in SQL Server related tables

    - by Tom
    I have two tables with 10-20 million rows that have GUID primary keys, and at least 12 tables related via foreign key. The base tables have 10-20 indexes each. We are moving from GUID to BigInt primary keys. I'm wondering if anyone has any suggestions on an approach. Right now this is the approach I'm pondering:

    1. Drop all indexes and fkeys on all the tables involved.
    2. Add a 'NewPrimaryKey' column to each table.
    3. Make the key an identity on the two base tables.
    4. Script the data change: "update table x set NewPrimaryKey = y where OldPrimaryKey = z".
    5. Rename the original primary key to 'OldPrimaryKey'.
    6. Rename the 'NewPrimaryKey' column to 'PrimaryKey'.
    7. Script back all the indexes and fkeys.

    Does this seem like a good approach? Does anyone know of a tool or script that would help with this? TD: Edited per additional information. See this blog post that addresses an approach when the GUID is the Primary: http://www.sqlmag.com/blogs/sql-server-questions-answered/sql-server-questions-answered/tabid/1977/entryid/12749/Default.aspx

    Read the article
