Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.


  • Is there a Better Way to Retrieve Raw XML from a URL than WebClient or HttpWebRequest? [.NET]

    - by DaMartyr
    I am working on a Geocoding app where I put the address in the URL and retrieve the XML. I need the complete XML response for this project. Is there any other class for downloading the XML from a website that may be faster than using WebClient or HttpWebRequest? Can XmlReader be used to get the full XML without string manipulation, and would that be faster and/or more efficient?
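
    One possible shape of the XmlReader route asked about above, sketched under the assumption that the full response should end up in a DOM: XmlReader.Create can read straight from the URL, so the response is never buffered into an intermediate string. The URL is a placeholder; whether this beats WebClient in practice is something to measure.

        using System;
        using System.Xml;

        class GeocodeFetch
        {
            static void Main()
            {
                // Placeholder URL; substitute the real geocoding request.
                string url = "http://example.com/geocode?address=1600+Pennsylvania+Ave";

                var doc = new XmlDocument();
                using (XmlReader reader = XmlReader.Create(url))
                {
                    doc.Load(reader);   // full response as a DOM, no string handling
                }
                Console.WriteLine(doc.OuterXml);
            }
        }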

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2 hour period, 5-10 million inserts to a 34GB table within a single Master/Slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two hour period. So, I have a couple of general questions.

    1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently, and approximating the order of insertion with something like a Datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem.

    So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID, but I don't really see what that achieves that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes?

    Thanks, Ben
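
    A sketch of one way to batch and still learn each row's key, assuming the client uses the MySQL Connector/Net provider (MySql.Data.MySqlClient) and an InnoDB table with the default consecutive auto-increment lock mode, in which a single multi-row INSERT is assigned consecutive ids starting at the first generated value. Table and column names are placeholders.

        using System;
        using MySql.Data.MySqlClient;   // Connector/Net provider, assumed available

        class BatchedInserts
        {
            static void Main()
            {
                using (var conn = new MySqlConnection("server=localhost;database=test;uid=app;pwd=secret"))
                {
                    conn.Open();

                    // One round trip for several rows instead of one per row.
                    var cmd = new MySqlCommand(
                        "INSERT INTO events (payload) VALUES ('a'),('b'),('c')", conn);
                    cmd.ExecuteNonQuery();

                    // For a single multi-row INSERT under the consecutive lock mode,
                    // the generated keys start at LastInsertedId and increase by one per row.
                    long firstId = cmd.LastInsertedId;
                    for (int i = 0; i < 3; i++)
                        Console.WriteLine("row {0} received id {1}", i, firstId + i);
                }
            }
        }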

    Read the article

  • SQL Server Express 2008 Stored Procedure execution time spikes periodically

    - by user156241
    I have a big stored procedure on a SQL Server 2008 Express SP2 database that gets run about every 200 ms. Normal execution time is about 50 ms. What I am seeing is large inconsistencies in this run time. It will execute for a while, say 50-100 times at 40-60 ms, which is expected; then, seemingly at random, the same stored procedure will take way longer, say 900 ms or 1.5 seconds, to run. Sometimes more than one call of the same procedure in a row will take longer too. It appears that something is causing SQL Server to slow down dramatically every minute or so, but I can't figure out what. There is no timing pattern between the occurrences. I have the same setup on two different computers, one of which is a clean XP Pro load with no virus checking and nothing installed except SQL Server. Also, the recovery options for all the databases are set to "Simple".

    Read the article

  • What's the good of the IDE's auto-generated @Override annotation?

    - by Tony
    I am using Eclipse. When I use the shortcut to generate override implementations, there is an @Override annotation above each method. I am using JDK 6, where this is all right, but under JDK 5 this annotation will cause an error, so I want to ask: is this annotation completely useless? Will the compiler do some kind of optimization using this annotation?

    Read the article

  • What types of websites does memcached speed up?

    - by Saif Bechan
    I have read this article about a 400% boost for your website, achieved with a combination of nginx and memcached. The how-to part of this website is quite good, but I miss the part where it says what types of websites this applies to. I know nginx is an HTTP engine; I need no explanation for that. I thought memcached had something to do with caching database results, but I don't understand what this has to do with the HTTP request. Can someone please explain that to me? Another question I have is what types of websites this is used for. I have a website where the important part consists of data that changes often, often being minutes. Will this method still apply to me, or should I just stick with the basic boring setup of Apache and nothing else?

    Read the article

  • Fastest XML parser for small, simple documents in Java

    - by Varkhan
    I have to objectify very simple and small XML documents (less than 1k, and it's almost SGML: no namespaces, plain UTF-8, you name it...), read from a stream, in Java. I am using JAXP to process the data from my stream into a Document object. I have tried Xerces, it's way too big and slow... I am using Dom4j, but I am still spending way too much time in org.dom4j.io.SAXReader. Does anybody out there have any suggestion on a faster, more efficient implementation, keeping in mind I have very tough CPU and memory constraints?

    [Edit 1] Keep in mind that my documents are very small, so the overhead of starting the parser can be important. For instance I am spending as much time in org.xml.sax.helpers.XMLReaderFactory.createXMLReader as in org.dom4j.io.SAXReader.read.

    [Edit 2] The result has to be in DOM format, as I pass the document to decision tools that do arbitrary processing on it, like switching code based on the value of arbitrary XPaths, but also extracting lists of values packed as children of a predefined node.

    [Edit 3] In any case I eventually need to load/parse the complete document, since all the information it contains is going to be used at some point.

    (This question is related to, but different from, http://stackoverflow.com/questions/373833/best-xml-parser-for-java )

    Read the article

  • How do I make this nested for loop, testing sums of cubes, more efficient?

    - by Brian J. Fink
    I'm trying to iterate through all the combinations of pairs of positive long integers in Java and test the sum of their cubes to discover if it's a Fibonacci number. I'm currently doing this by using the value of the outer loop variable as the inner loop's upper limit, with the effect being that the outer loop runs a little slower each time. Initially it appeared to run very quickly--I was up to 10 digits within minutes. But now after 2 full days of continuous execution, I'm only somewhere in the middle range of 15 digits. At this rate it may end up taking a whole year just to finish running this program. The code for the program is below:

        import java.lang.*;
        import java.math.*;

        public class FindFib
        {
            public static void main(String args[])
            {
                long uLimit=9223372036854775807L;                   //long maximum value
                BigDecimal PHI=new BigDecimal(1D+Math.sqrt(5D)/2D); //Golden Ratio
                for(long a=1;a<=uLimit;a++)                         //Outer Loop, 1 to maximum
                    for(long b=1;b<=a;b++)                          //Inner Loop, 1 to current outer
                    {
                        //Cube the numbers and add
                        BigDecimal c=BigDecimal.valueOf(a).pow(3).add(BigDecimal.valueOf(b).pow(3));
                        System.out.print(c+" ");                    //Output result
                        //Upper and lower limits of interval for Mobius test: [c*PHI-1/c,c*PHI+1/c]
                        BigDecimal d=c.multiply(PHI).subtract(BigDecimal.ONE.divide(c,BigDecimal.ROUND_HALF_UP)),
                                   e=c.multiply(PHI).add(BigDecimal.ONE.divide(c,BigDecimal.ROUND_HALF_UP));
                        //Mobius test: if integer in interval (floor values unequal) Fibonacci number!
                        if (d.toBigInteger().compareTo(e.toBigInteger())!=0)
                            System.out.println();                   //Line feed
                        else
                            System.out.print("\r");                 //Carriage return instead
                    }
                //Display final message
                System.out.println("\rDone. ");
            }
        }

    Now the use of BigDecimal and BigInteger was deliberate; I need them to get the necessary precision. Is there anything other than my variable types that I could change to gain better efficiency?

    Read the article

  • oprofile unable to produce call graph

    - by aaa
    Hello, I am trying to use oprofile to generate a call graph. The compiler is g++, the platform is Linux x86-64, the linker is gfortran. The C++ code is compiled with -fno-omit-frame-pointer. oprofile is started with --callgraph=25, and the report is run with --callgraph. The call graph is produced, but it only includes self time, which is not much use. What am I missing?

    Read the article

  • How to do some performance testing in ASP.NET MVC?

    - by chobo2
    Hi, I am using ASP.NET MVC 2.0 and I want to test how long some of my code takes. In one scenario I do this:

    1. Load the XML file.
    2. Validate the XML file and deserialize.
    3. Validate all rows in the XML file with more advanced validation that cannot be done in the schema validation.
    4. Then I do a bulk insert.

    I want to know how long steps 1 to 3 take and how long step 4 takes. I tried using DateTime.UtcNow in places and subtracting them, but it told me it took about 3 seconds, which I know is not right as steps 1 to 4 take 2 minutes to do.
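
    A minimal sketch of timing the two phases with System.Diagnostics.Stopwatch, which sidesteps the coarse resolution of DateTime.UtcNow; the two methods are placeholders for the steps listed above.

        using System;
        using System.Diagnostics;

        class ImportTiming
        {
            static void Main()
            {
                var sw = Stopwatch.StartNew();
                LoadValidateDeserialize();                       // steps 1-3 (placeholder)
                TimeSpan prepTime = sw.Elapsed;

                sw.Restart();
                BulkInsert();                                    // step 4 (placeholder)
                TimeSpan insertTime = sw.Elapsed;

                Console.WriteLine("steps 1-3: {0}, step 4: {1}", prepTime, insertTime);
            }

            static void LoadValidateDeserialize() { /* real work goes here */ }
            static void BulkInsert() { /* real work goes here */ }
        }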

    Read the article

  • PostgreSQL: Why does this simple query not use the index?

    - by David
    I have a table t with a column c, which is an int and has a btree index on it. Why does the following query not utilize this index?

        explain select c from t group by c;

    The result I get is:

        HashAggregate  (cost=1005817.55..1005817.71 rows=16 width=4)
          ->  Seq Scan on t  (cost=0.00..946059.84 rows=23903084 width=4)

    My understanding of indexes is limited, but I thought such queries were the purpose of indexes.

    Read the article

  • How do polymorphic inline caches work with mutable types?

    - by kingkilr
    A polymorphic inline cache works by caching the actual method by the type of the object, in order to avoid the expensive lookup procedures (usually a hashtable lookup). How does one handle the type comparison if the type objects are mutable (i.e. the method might be monkey-patched into something different at run time)? The one idea I've come up with would be a "class counter" that gets incremented each time a method is adjusted; however, this seems like it would be exceptionally expensive in a heavily monkey-patched environment, since it would kill all the PICs for that class, even if the methods for them weren't altered. I'm sure there must be a good solution to this, as this issue is directly applicable to JavaScript and AFAIK all 3 of the big JS VMs have PICs (wow acronym ahoy).
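
    An illustrative sketch, not any real VM's implementation, of the per-class counter idea described above: each call-site cache entry records the version it was filled under and falls back to a full lookup once the class has been patched. All names are invented for the example.

        using System;
        using System.Collections.Generic;

        // A mutable "class": redefining a method bumps its version.
        class RuntimeClass
        {
            public int Version;
            public Dictionary<string, Func<object, object>> Methods =
                new Dictionary<string, Func<object, object>>();

            public void Define(string name, Func<object, object> body)
            {
                Methods[name] = body;
                Version++;                              // invalidates caches filled under older versions
            }
        }

        // A single-entry (monomorphic) inline cache for one call site.
        class CallSiteCache
        {
            RuntimeClass cachedClass;
            int cachedVersion;
            Func<object, object> cachedMethod;

            public object Invoke(RuntimeClass cls, string name, object receiver)
            {
                if (cls != cachedClass || cls.Version != cachedVersion)
                {
                    cachedMethod = cls.Methods[name];   // slow path: hashtable lookup
                    cachedClass = cls;
                    cachedVersion = cls.Version;
                }
                return cachedMethod(receiver);          // fast path: guarded direct call
            }
        }

        class Demo
        {
            static void Main()
            {
                var cls = new RuntimeClass();
                cls.Define("greet", self => "hello");

                var site = new CallSiteCache();
                Console.WriteLine(site.Invoke(cls, "greet", null));  // fills the cache

                cls.Define("greet", self => "patched");              // bumps Version
                Console.WriteLine(site.Invoke(cls, "greet", null));  // guard fails, re-looks up
            }
        }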

    Read the article

  • .NET WebService IPC - Should it be done to minimise some expensive operations?

    - by Kyle
    I'm looking at a few different approaches to a problem: a client requests work, some stuff gets done, and a result (ok/error) is returned. A .NET web service definitely seems like the way to go; my only issue is that the "stuff" will involve building up and tearing down a session for each request. Does abstracting the "stuff" out to an app (which would keep a single session active, and process the requests from the web service) seem like the right way to go? (And if so, what communication method?) The work time is negligible; my concern is the hammering the transaction servers in question will probably get if I create/drop a session for each job. Is some form of IPC or socket-based communication a feasible solution here? Thoughts/comments/experiences much appreciated.

    Edit: After a bit more research, it seems like hosting a WCF service in a Windows Service is probably a better way to go...
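
    A minimal sketch of the direction mentioned in the edit: a WCF service self-hosted inside a Windows Service over named pipes, so a single long-lived process can hold the expensive session between requests. The contract, addresses and binding are placeholder choices, not a recommendation specific to the transaction servers involved.

        using System;
        using System.ServiceModel;
        using System.ServiceProcess;

        [ServiceContract]
        public interface IWorkService
        {
            [OperationContract]
            string DoWork(string request);
        }

        public class WorkService : IWorkService
        {
            // Would use the shared, long-lived session internally.
            public string DoWork(string request) { return "ok"; }
        }

        public class WorkerWindowsService : ServiceBase
        {
            ServiceHost host;

            protected override void OnStart(string[] args)
            {
                host = new ServiceHost(typeof(WorkService),
                                       new Uri("net.pipe://localhost/worker"));   // named pipes for local IPC
                host.AddServiceEndpoint(typeof(IWorkService), new NetNamedPipeBinding(), "work");
                host.Open();
            }

            protected override void OnStop()
            {
                if (host != null) host.Close();
            }

            public static void Main() { ServiceBase.Run(new WorkerWindowsService()); }
        }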

    Read the article

  • Efficient way to delete a line from a text file (C#)

    - by Valentin Vasilyev
    Hello. I need to delete a certain line from a text file. What is the most efficient way of doing this? The file can potentially be large (over a million records). Thank you.

    UPDATE: below is the code I'm currently using, but I'm not sure if it is good.

        internal void DeleteMarkedEntries()
        {
            string tempPath = Path.GetTempFileName();
            using (var reader = new StreamReader(logPath))
            {
                using (var writer = new StreamWriter(File.OpenWrite(tempPath)))
                {
                    int counter = 0;
                    while (!reader.EndOfStream)
                    {
                        // Read the line unconditionally so deleted lines are actually
                        // skipped instead of being left behind in the stream.
                        string line = reader.ReadLine();
                        if (!_deletedLines.Contains(counter))
                        {
                            writer.WriteLine(line);
                        }
                        ++counter;
                    }
                }
            }
            if (File.Exists(tempPath))
            {
                File.Delete(logPath);
                File.Move(tempPath, logPath);
            }
        }

    Read the article

  • Fastest implementation of the frac function in C#

    - by user349937
    I would like to implement a frac function in C# (just like the one in HLSL, here: http://msdn.microsoft.com/en-us/library/bb509603%28VS.85%29.aspx), but since it is for a very processor-intensive application I would like the best version possible. I was using something like

        public float Frac(float value)
        {
            return value - (float)Math.Truncate(value);
        }

    but I'm having precision problems; for example, for 2.6f the unit test reports

        Expected: 0.600000024f
        But was:  0.599999905f

    I know that I can convert the value to decimal and then at the end convert back to float to obtain the correct result, something like this:

        public float Frac(float value)
        {
            return (float)((decimal)value - Decimal.Truncate((decimal)value));
        }

    But I wonder if there is a better way without resorting to decimals...
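
    A small check, assuming the unit test's expected value is simply 0.6f: since 2.6f is not exactly 2.6, the true fractional part of the stored value is not 0.6f, so the float version above may already be the accurate one. Printing through double makes the stored values visible (the commented values are approximate).

        using System;

        class FracPrecision
        {
            static float Frac(float value)
            {
                return value - (float)Math.Truncate(value);
            }

            static void Main()
            {
                float x = 2.6f;
                // Print through double so the actual stored values are visible.
                Console.WriteLine("2.6f is stored as  {0}", (double)x);        // ~2.5999999046
                Console.WriteLine("Frac(2.6f)         {0}", (double)Frac(x));  // ~0.5999999046
                Console.WriteLine("0.6f is stored as  {0}", (double)0.6f);     // ~0.6000000238
            }
        }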

    Read the article

  • [N]Hibernate: view-like fetching properties of associated class

    - by chiccodoro
    (Felt quite helpless in formulating an appropriate title...)

    In my C# app I display a list of "A" objects, along with some properties of their associated "B" objects and properties of B's associated "C" objects:

        A.Name   B.Name   B.SomeValue   C.Name
        Foo      Bar      123           HelloWorld
        Bar      Hello    432           World
        ...

    To clarify: A has an FK to B, B has an FK to C (such as, e.g., BankAccount - Person - Company). I have tried two approaches to load these properties from the database (using NHibernate): a fast approach and a clean approach. My eventual question is how to do a fast & clean approach.

    Fast approach: Define a view in the database which joins A, B, C and provides all these fields. In the A class, define properties "BName", "BSomeValue", "CName". Define a Hibernate mapping between A and the view, where the needed B and C properties are mapped with update="false" insert="false" and actually stem from the B and C tables, but Hibernate is not aware of that since it uses the view. This way, the listing only loads one object per "A" record, which is quite fast. If the code tries to access the actual associated property, "A.B", I issue another HQL query to get B, set the property, and update the faked BName and BSomeValue properties as well.

    Clean approach: There is no view. Class A is mapped to table A, B to B, C to C. When loading the list of A, I do a double left-join-fetch to get B and C as well:

        from A a left join fetch a.B left join fetch a.B.C

    B.Name, B.SomeValue and C.Name are accessed through the eagerly loaded associations. The disadvantage of this approach is that it gets slower and takes more memory, since it needs to create and map 3 objects per "A" record: an A, B, and C object each.

    Fast and clean approach: I feel somehow uncomfortable using a database view that hides a join and treating that in NHibernate as if it were a table. So I would like to do something like: have no views in the database; declare properties "BName", "BSomeValue", "CName" in class "A"; define the mapping for A such that NHibernate fetches A and these properties together using a join SQL query, as a database view would do. The mapping should still allow for defining lazy many-to-one associations for getting A.B.C.

    My questions: Is this possible? Is it [un]artful? Is there a better way?

    Read the article

  • Which regular expression is more efficient?

    - by Vagnerr
    I'm parsing some big log files and have some very simple string matches, for example

        if(m/Some String Pattern/o){ #Do something }

    It seems simple enough, but in fact most of the matches I have could be against the start of the line, though the match would be "longer", for example

        if(m/^Initial static string that matches Some String Pattern/o){ #Do something }

    Obviously this is a longer regular expression and so more work to match. However, I can use the start-of-line anchor, which would allow an expression to be discarded as a failed match sooner. It is my hunch that the latter would be more efficient. Can anyone back me up/shoot me down? :-)

    Read the article

  • one two-directed tcp socket OR two one-directed? (linux, high volume, low latency)

    - by osgx
    Hello, I need to send (interchange) a high volume of data periodically with the lowest possible latency between 2 machines. The network is rather fast (e.g. 1Gbit or even 2G+). The OS is Linux. Is it faster to use 1 TCP socket (for send and recv) or to use 2 uni-directed TCP sockets? The test for this task is very like the NetPIPE network benchmark - measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received 3 times at least (in the real task the number of sends is greater; both processes will be sending and receiving, like ping-pong maybe). The benefit of 2 uni-directed connections comes from Linux: http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847

        3847/*
        3848 * TCP receive function for the ESTABLISHED state.
        3849 *
        3850 * It is split into a fast path and a slow path. The fast path is
        3851 * disabled when:
        ...
        3859 * - Data is sent in both directions. Fast path only supports pure senders
        3860 * or pure receivers (this means either the sequence number or the ack
        3861 * value must stay constant)
        ...
        3863 *
        3864 * When these conditions are not satisfied it drops into a standard
        3865 * receive procedure patterned after RFC793 to handle all cases.
        3866 * The first three cases are guaranteed by proper pred_flags setting,
        3867 * the rest is checked inline. Fast processing is turned on in
        3868 * tcp_data_queue when everything is OK.

    All the other conditions for disabling the fast path are false; only a socket that is not uni-directed keeps the kernel off the fast path on receive.

    Read the article

  • Stored procedure optimization

    - by George Zacharia
    Hi, I have a stored procedure which takes a lot of time to execute. Can anyone suggest a better approach so that the same result set is achieved?

        ALTER PROCEDURE [dbo].[spFavoriteRecipesGET]
            @USERID INT, @PAGENUMBER INT, @PAGESIZE INT,
            @SORTDIRECTION VARCHAR(4), @SORTORDER VARCHAR(4), @FILTERBY INT
        AS
        BEGIN
            DECLARE @ROW_START INT
            DECLARE @ROW_END INT
            SET @ROW_START = (@PageNumber-1)* @PageSize+1
            SET @ROW_END = @PageNumber*@PageSize
            DECLARE @RecipeCount INT
            DECLARE @RESULT_SET_TABLE TABLE
            (
                Id INT NOT NULL IDENTITY(1,1), FavoriteRecipeId INT, RecipeId INT, DateAdded DATETIME,
                Title NVARCHAR(255), UrlFriendlyTitle NVARCHAR(250), [Description] NVARCHAR(MAX),
                AverageRatingId FLOAT, SubmittedById INT, SubmittedBy VARCHAR(250), RecipeStateId INT,
                RecipeRatingId INT, ReviewCount INT, TweaksCount INT, PhotoCount INT, ImageName NVARCHAR(50)
            )

            INSERT INTO @RESULT_SET_TABLE
            SELECT FavoriteRecipes.FavoriteRecipeId, Recipes.RecipeId, FavoriteRecipes.DateAdded,
                   Recipes.Title, Recipes.UrlFriendlyTitle, Recipes.[Description], Recipes.AverageRatingId,
                   Recipes.SubmittedById,
                   COALESCE(users.DisplayName,users.UserName,Recipes.SubmittedBy) As SubmittedBy,
                   Recipes.RecipeStateId, RecipeReviews.RecipeRatingId,
                   COUNT(RecipeReviews.Review), COUNT(RecipeTweaks.Tweak), COUNT(Photos.PhotoId),
                   dbo.udfGetRecipePhoto(Recipes.RecipeId) AS ImageName
            FROM FavoriteRecipes
            INNER JOIN Recipes ON FavoriteRecipes.RecipeId=Recipes.RecipeId AND Recipes.RecipeStateId <> 3
            LEFT OUTER JOIN RecipeReviews ON RecipeReviews.RecipeId=Recipes.RecipeId
                AND RecipeReviews.ReviewedById=@UserId
                AND RecipeReviews.RecipeRatingId=
                    ( SELECT MAX(RecipeReviews.RecipeRatingId) FROM RecipeReviews
                      WHERE RecipeReviews.ReviewedById=@UserId AND RecipeReviews.RecipeId=FavoriteRecipes.RecipeId )
                OR RecipeReviews.RecipeRatingId IS NULL
            LEFT OUTER JOIN RecipeTweaks ON RecipeTweaks.RecipeId = Recipes.RecipeId AND RecipeTweaks.TweakedById= @UserId
            LEFT OUTER JOIN Photos ON Photos.RecipeId = Recipes.RecipeId AND Photos.UploadedById = @UserId
                AND Photos.RecipeId = FavoriteRecipes.RecipeId AND Photos.PhotoTypeId = 1
            LEFT OUTER JOIN users ON Recipes.SubmittedById = users.UserId
            WHERE FavoriteRecipes.UserId=@UserId
            GROUP BY FavoriteRecipes.FavoriteRecipeId, Recipes.RecipeId, FavoriteRecipes.DateAdded, Recipes.Title,
                     Recipes.UrlFriendlyTitle, Recipes.[Description], Recipes.AverageRatingId, Recipes.SubmittedById,
                     Recipes.SubmittedBy, Recipes.RecipeStateId, RecipeReviews.RecipeRatingId,
                     users.DisplayName, users.UserName, Recipes.SubmittedBy;

            WITH SortResults AS
            (
                SELECT ROW_NUMBER() OVER
                       ( ORDER BY
                         CASE WHEN @SORTDIRECTION = 't' AND @SORTORDER='a' THEN TITLE END ASC,
                         CASE WHEN @SORTDIRECTION = 't' AND @SORTORDER='d' THEN TITLE END DESC,
                         CASE WHEN @SORTDIRECTION = 'r' AND @SORTORDER='a' THEN AverageRatingId END ASC,
                         CASE WHEN @SORTDIRECTION = 'r' AND @SORTORDER='d' THEN AverageRatingId END DESC,
                         CASE WHEN @SORTDIRECTION = 'mr' AND @SORTORDER='a' THEN RecipeRatingId END ASC,
                         CASE WHEN @SORTDIRECTION = 'mr' AND @SORTORDER='d' THEN RecipeRatingId END DESC,
                         CASE WHEN @SORTDIRECTION = 'd' AND @SORTORDER='a' THEN DateAdded END ASC,
                         CASE WHEN @SORTDIRECTION = 'd' AND @SORTORDER='d' THEN DateAdded END DESC
                       ) RowNumber,
                       FavoriteRecipeId, RecipeId, DateAdded, Title, UrlFriendlyTitle, [Description],
                       AverageRatingId, SubmittedById, SubmittedBy, RecipeStateId, RecipeRatingId,
                       ReviewCount, TweaksCount, PhotoCount, ImageName
                FROM @RESULT_SET_TABLE
                WHERE ((@FILTERBY = 1 AND SubmittedById= @USERID)
                    OR ( @FILTERBY = 2 AND (SubmittedById <> @USERID OR SubmittedById IS NULL))
                    OR ( @FILTERBY <> 1 AND @FILTERBY <> 2))
            )
            SELECT RowNumber, FavoriteRecipeId, RecipeId, DateAdded, Title, UrlFriendlyTitle, [Description],
                   AverageRatingId, SubmittedById, SubmittedBy, RecipeStateId, RecipeRatingId,
                   ReviewCount, TweaksCount, PhotoCount, ImageName
            FROM SortResults
            WHERE RowNumber BETWEEN @ROW_START AND @ROW_END

            print @ROW_START
            print @ROW_END

            SELECT @RecipeCount=dbo.udfGetFavRecipesCount(@UserId)
            SELECT @RecipeCount AS RecipeCount

            SELECT COUNT(Id) as FilterCount
            FROM @RESULT_SET_TABLE
            WHERE ((@FILTERBY = 1 AND SubmittedById= @USERID)
                OR (@FILTERBY = 2 AND (SubmittedById <> @USERID OR SubmittedById IS NULL))
                OR (@FILTERBY <> 1 AND @FILTERBY <> 2))
        END

    Read the article

  • Approach for altering Primary Key from GUID to BigInt in SQL Server related tables

    - by Tom
    I have two tables with 10-20 million rows that have GUID primary keys, and at least 12 tables related via foreign key. The base tables have 10-20 indexes each. We are moving from GUID to BigInt primary keys. I'm wondering if anyone has any suggestions on an approach. Right now this is the approach I'm pondering:

    1. Drop all indexes and fkeys on all the tables involved.
    2. Add a 'NewPrimaryKey' column to each table.
    3. Make the key identity on the two base tables.
    4. Script the data change: "update table x, set NewPrimaryKey = y where OldPrimaryKey = z".
    5. Rename the original primary key to 'OldPrimaryKey'.
    6. Rename the 'NewPrimaryKey' column to 'PrimaryKey'.
    7. Script back all the indexes and fkeys.

    Does this seem like a good approach? Does anyone know of a tool or script that would help with this?

    TD: Edited per additional information. See this blog post that addresses an approach when the GUID is the Primary: http://www.sqlmag.com/blogs/sql-server-questions-answered/sql-server-questions-answered/tabid/1977/entryid/12749/Default.aspx

    Read the article

  • Are closures in JavaScript recompiled?

    - by Discodancer
    Let's say we have this code (forget about prototypes for a moment):

        function A(){
          var foo = 1;
          this.method = function(){
            return foo;
          }
        }
        var a = new A();

    Is the inner function recompiled each time the function A is run? Or is it better (and why) to do it like this:

        var method = function(){
          return this.foo;
        };
        function A(){
          this.foo = 1;
          this.method = method;
        }
        var a = new A();

    Or are the JavaScript engines smart enough not to create a new 'method' function every time? Specifically Google's V8 and node.js. Also, any general recommendations on when to use which technique are welcome. In my specific example, it really suits me to use the first example, but I know that the outer function will be instantiated many times.

    Read the article

  • Preventing a heavy process from sinking in the swap file

    - by eran
    Our service tends to fall asleep during the nights on our client's server, and then has a hard time waking up. What seems to happen is that the process heap, which is sometimes several hundred MB, is moved to the swap file. This happens at night, when our service is not used and others are scheduled to run (DB backups, AV scans etc). When this happens, after a few hours of inactivity the first call to the service takes up to a few minutes (subsequent calls take seconds). I'm quite certain it's an issue of virtual memory management, and I really hate the idea of forcing the OS to keep our service in physical memory. I know doing that will hurt other processes on the server and decrease the overall server throughput. That said, our clients just want our app to be responsive. They don't care if nightly jobs take longer. I vaguely remember there's a way to force Windows to keep pages in physical memory, but I really hate that idea. I'm leaning more towards some internal or external watchdog that will initiate higher-level functionalities (there is already an internal scheduler that does very little and makes no difference). If there were a 3rd-party tool that provided that kind of service, it would be just as good. I'd love to hear any comments, recommendations and common solutions to this kind of problem. The service is written in VC2005 and runs on Windows servers.
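
    A minimal sketch of the external-watchdog idea mentioned above, assuming the service exposes some cheap operation that can be called periodically to keep its pages touched overnight; the interval and the ping itself are placeholders, and this masks the paging rather than preventing it.

        using System;
        using System.Threading;

        class KeepWarmWatchdog
        {
            static void Main()
            {
                // Every 15 minutes (placeholder interval), exercise a representative
                // code path so the service's heap pages keep getting touched overnight.
                using (var timer = new Timer(_ => PingService(), null,
                                             TimeSpan.Zero, TimeSpan.FromMinutes(15)))
                {
                    Console.WriteLine("Watchdog running; press Enter to stop.");
                    Console.ReadLine();
                }
            }

            static void PingService()
            {
                // Placeholder: issue whatever cheap request the service exposes
                // (the first real call after idle is the one that is slow today).
                Console.WriteLine("{0:u} pinged service", DateTime.UtcNow);
            }
        }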

    Read the article
