Search Results

Search found 3706 results on 149 pages for 'nano optimization'.


  • Stored procedure optimization

    - by George Zacharia
    Hi, I have a stored procedure which takes a lot of time to execute. Can anyone suggest a better approach that returns the same result set?

        ALTER PROCEDURE [dbo].[spFavoriteRecipesGET]
            @USERID INT, @PAGENUMBER INT, @PAGESIZE INT,
            @SORTDIRECTION VARCHAR(4), @SORTORDER VARCHAR(4), @FILTERBY INT
        AS
        BEGIN
            DECLARE @ROW_START INT
            DECLARE @ROW_END INT
            SET @ROW_START = (@PAGENUMBER - 1) * @PAGESIZE + 1
            SET @ROW_END = @PAGENUMBER * @PAGESIZE
            DECLARE @RecipeCount INT
            DECLARE @RESULT_SET_TABLE TABLE
            (
                Id INT NOT NULL IDENTITY(1,1),
                FavoriteRecipeId INT,
                RecipeId INT,
                DateAdded DATETIME,
                Title NVARCHAR(255),
                UrlFriendlyTitle NVARCHAR(250),
                [Description] NVARCHAR(MAX),
                AverageRatingId FLOAT,
                SubmittedById INT,
                SubmittedBy VARCHAR(250),
                RecipeStateId INT,
                RecipeRatingId INT,
                ReviewCount INT,
                TweaksCount INT,
                PhotoCount INT,
                ImageName NVARCHAR(50)
            )

            INSERT INTO @RESULT_SET_TABLE
            SELECT FavoriteRecipes.FavoriteRecipeId, Recipes.RecipeId, FavoriteRecipes.DateAdded,
                   Recipes.Title, Recipes.UrlFriendlyTitle, Recipes.[Description],
                   Recipes.AverageRatingId, Recipes.SubmittedById,
                   COALESCE(users.DisplayName, users.UserName, Recipes.SubmittedBy) AS SubmittedBy,
                   Recipes.RecipeStateId, RecipeReviews.RecipeRatingId,
                   COUNT(RecipeReviews.Review), COUNT(RecipeTweaks.Tweak), COUNT(Photos.PhotoId),
                   dbo.udfGetRecipePhoto(Recipes.RecipeId) AS ImageName
            FROM FavoriteRecipes
            INNER JOIN Recipes
                ON FavoriteRecipes.RecipeId = Recipes.RecipeId AND Recipes.RecipeStateId <> 3
            LEFT OUTER JOIN RecipeReviews
                ON RecipeReviews.RecipeId = Recipes.RecipeId
                   AND RecipeReviews.ReviewedById = @USERID
                   AND RecipeReviews.RecipeRatingId =
                       (SELECT MAX(RecipeReviews.RecipeRatingId)
                        FROM RecipeReviews
                        WHERE RecipeReviews.ReviewedById = @USERID
                          AND RecipeReviews.RecipeId = FavoriteRecipes.RecipeId)
                   OR RecipeReviews.RecipeRatingId IS NULL
            LEFT OUTER JOIN RecipeTweaks
                ON RecipeTweaks.RecipeId = Recipes.RecipeId
                   AND RecipeTweaks.TweakedById = @USERID
            LEFT OUTER JOIN Photos
                ON Photos.RecipeId = Recipes.RecipeId
                   AND Photos.UploadedById = @USERID
                   AND Photos.RecipeId = FavoriteRecipes.RecipeId
                   AND Photos.PhotoTypeId = 1
            LEFT OUTER JOIN users
                ON Recipes.SubmittedById = users.UserId
            WHERE FavoriteRecipes.UserId = @USERID
            GROUP BY FavoriteRecipes.FavoriteRecipeId, Recipes.RecipeId, FavoriteRecipes.DateAdded,
                     Recipes.Title, Recipes.UrlFriendlyTitle, Recipes.[Description],
                     Recipes.AverageRatingId, Recipes.SubmittedById, Recipes.SubmittedBy,
                     Recipes.RecipeStateId, RecipeReviews.RecipeRatingId,
                     users.DisplayName, users.UserName;

            WITH SortResults AS
            (
                SELECT ROW_NUMBER() OVER
                       (
                           ORDER BY
                               CASE WHEN @SORTDIRECTION = 't'  AND @SORTORDER = 'a' THEN Title END ASC,
                               CASE WHEN @SORTDIRECTION = 't'  AND @SORTORDER = 'd' THEN Title END DESC,
                               CASE WHEN @SORTDIRECTION = 'r'  AND @SORTORDER = 'a' THEN AverageRatingId END ASC,
                               CASE WHEN @SORTDIRECTION = 'r'  AND @SORTORDER = 'd' THEN AverageRatingId END DESC,
                               CASE WHEN @SORTDIRECTION = 'mr' AND @SORTORDER = 'a' THEN RecipeRatingId END ASC,
                               CASE WHEN @SORTDIRECTION = 'mr' AND @SORTORDER = 'd' THEN RecipeRatingId END DESC,
                               CASE WHEN @SORTDIRECTION = 'd'  AND @SORTORDER = 'a' THEN DateAdded END ASC,
                               CASE WHEN @SORTDIRECTION = 'd'  AND @SORTORDER = 'd' THEN DateAdded END DESC
                       ) RowNumber,
                       FavoriteRecipeId, RecipeId, DateAdded, Title, UrlFriendlyTitle, [Description],
                       AverageRatingId, SubmittedById, SubmittedBy, RecipeStateId, RecipeRatingId,
                       ReviewCount, TweaksCount, PhotoCount, ImageName
                FROM @RESULT_SET_TABLE
                WHERE ((@FILTERBY = 1 AND SubmittedById = @USERID)
                    OR (@FILTERBY = 2 AND (SubmittedById <> @USERID OR SubmittedById IS NULL))
                    OR (@FILTERBY <> 1 AND @FILTERBY <> 2))
            )
            SELECT RowNumber, FavoriteRecipeId, RecipeId, DateAdded, Title, UrlFriendlyTitle,
                   [Description], AverageRatingId, SubmittedById, SubmittedBy, RecipeStateId,
                   RecipeRatingId, ReviewCount, TweaksCount, PhotoCount, ImageName
            FROM SortResults
            WHERE RowNumber BETWEEN @ROW_START AND @ROW_END

            PRINT @ROW_START
            PRINT @ROW_END

            SELECT @RecipeCount = dbo.udfGetFavRecipesCount(@USERID)
            SELECT @RecipeCount AS RecipeCount

            SELECT COUNT(Id) AS FilterCount
            FROM @RESULT_SET_TABLE
            WHERE ((@FILTERBY = 1 AND SubmittedById = @USERID)
                OR (@FILTERBY = 2 AND (SubmittedById <> @USERID OR SubmittedById IS NULL))
                OR (@FILTERBY <> 1 AND @FILTERBY <> 2))
        END
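    A common rewrite for procedures like this is to page first and aggregate later: number and slice the favorites before running the expensive joins, COUNTs and dbo.udfGetRecipePhoto, so they touch only @PAGESIZE rows instead of the user's whole list. A minimal, untested sketch against the same schema (the sort CASEs and @FILTERBY branches are omitted for brevity):

        ;WITH PagedFavorites AS (
            SELECT fr.FavoriteRecipeId, fr.RecipeId, fr.DateAdded,
                   ROW_NUMBER() OVER (ORDER BY fr.DateAdded DESC) AS RowNumber
            FROM FavoriteRecipes fr
            WHERE fr.UserId = @USERID
        )
        SELECT p.RowNumber, p.FavoriteRecipeId, p.DateAdded, r.RecipeId, r.Title,
               -- per-row counts now run only for the visible page
               (SELECT COUNT(*) FROM RecipeReviews rr
                 WHERE rr.RecipeId = r.RecipeId AND rr.ReviewedById = @USERID) AS ReviewCount,
               (SELECT COUNT(*) FROM RecipeTweaks rt
                 WHERE rt.RecipeId = r.RecipeId AND rt.TweakedById = @USERID) AS TweaksCount,
               dbo.udfGetRecipePhoto(r.RecipeId) AS ImageName
        FROM PagedFavorites p
        INNER JOIN Recipes r ON r.RecipeId = p.RecipeId AND r.RecipeStateId <> 3
        WHERE p.RowNumber BETWEEN @ROW_START AND @ROW_END;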


  • ControlCollection extension method optimization

    - by Johan Leino
    Hi, got a question regarding an extension method that I have written. It looks like this:

        public static IEnumerable<T> FindControlsOfType<T>(this ControlCollection instance)
            where T : class
        {
            T control;
            foreach (Control ctrl in instance)
            {
                if ((control = ctrl as T) != null)
                {
                    yield return control;
                }
                foreach (T child in FindControlsOfType<T>(ctrl.Controls))
                {
                    yield return child;
                }
            }
        }

        public static IEnumerable<T> FindControlsOfType<T>(this ControlCollection instance, Func<T, bool> match)
            where T : class
        {
            return FindControlsOfType<T>(instance).Where(match);
        }

    The idea here is to find all controls that match a specific criterion (hence the Func<..>) in the controls collection. My question is: does the second method (the one that takes the Func) first call the first method to find all the controls of type T and then perform the where condition, or does the "runtime" optimize the call to perform the where condition on the "whole" enumeration (if you get what I mean)? Secondly, are there any other optimizations I can make so the code performs better? An example can look like this:

        var checkbox = this.Controls.FindControlsOfType<MyCustomCheckBox>(
                ctrl => ctrl.CustomProperty == "Test" )
            .FirstOrDefault();
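    Where does not materialize the first method's results: it composes lazily, so each control is tested by the predicate as it is yielded, and FirstOrDefault can stop the traversal early. The remaining cost is the nested yield return, which re-yields every element once per recursion level. A minimal sketch of a stack-based traversal that avoids that (note it visits controls in a different order):

        public static IEnumerable<T> FindControlsOfTypeIterative<T>(this ControlCollection instance)
            where T : class
        {
            // Explicit stack instead of recursive iterators: each control is
            // yielded at most once, regardless of nesting depth.
            var stack = new Stack<Control>();
            foreach (Control ctrl in instance)
                stack.Push(ctrl);
            while (stack.Count > 0)
            {
                Control current = stack.Pop();
                var match = current as T;
                if (match != null)
                    yield return match;
                foreach (Control child in current.Controls)
                    stack.Push(child);
            }
        }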


  • Database maintenance, big binary table optimization

    - by jgemedina
    Hi, I have a huge database, around 1 TB in size. Most of the space is consumed by a table which stores images; the table currently has almost 800k rows. Server response time has increased, and I would like to know which techniques you recommend. Partitioning? Or how should I reorganize the table? Every row is accessed by the image id column, which is also the clustered index. I reorganize the index every two days and rebuild it every 7 days, but that no longer seems to help. Any suggestions?
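    Partitioning by the id column is one option, since all access is by that key. A minimal T-SQL sketch (untested; the boundary values and the filegroups FG1, FG2, FG3 are placeholders) that splits the table into ranges so maintenance and scans touch one slice at a time:

        CREATE PARTITION FUNCTION pfImageId (INT)
            AS RANGE LEFT FOR VALUES (300000, 600000);

        CREATE PARTITION SCHEME psImageId
            AS PARTITION pfImageId TO (FG1, FG2, FG3);

        CREATE TABLE ImagesPartitioned (
            ImageId   INT NOT NULL,
            ImageData VARBINARY(MAX),
            CONSTRAINT PK_ImagesPartitioned PRIMARY KEY CLUSTERED (ImageId)
        ) ON psImageId (ImageId);

    With this layout, ALTER INDEX ... REBUILD can also be run one partition at a time instead of over the whole table.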


  • sequential minimal optimization C++

    - by Anton
    Hello. I want to implement an SVM, but the problem is in training it. I originally planned to use SMO (sequential minimal optimization), but I did not find any ready-made libraries for C++. If a ready one exists, please share it. Thanks in advance. The task is finding an object in a picture (probably a human).
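    LIBSVM is one ready-made option: its svm.h/svm.cpp sources compile in a C++ project, and its solver is an SMO variant. A minimal, untested sketch of the training API from memory (the parameter values are placeholders; for detection you would feed in feature vectors extracted from image windows):

        #include "svm.h"

        int main() {
            // Two 2-D training points with labels +1 and -1; each sparse
            // vector is terminated by an svm_node with index = -1.
            svm_node x0[] = {{1, 0.0}, {2, 0.0}, {-1, 0.0}};
            svm_node x1[] = {{1, 1.0}, {2, 1.0}, {-1, 0.0}};
            svm_node* rows[] = {x0, x1};
            double labels[] = {+1.0, -1.0};

            svm_problem prob = {};
            prob.l = 2;
            prob.x = rows;
            prob.y = labels;

            svm_parameter param = {};
            param.svm_type    = C_SVC;
            param.kernel_type = RBF;
            param.gamma       = 0.5;   // placeholder values; tune on real data
            param.C           = 1.0;
            param.cache_size  = 100;   // MB
            param.eps         = 1e-3;

            if (svm_check_parameter(&prob, &param) == 0) {    // 0 means "no error"
                svm_model* model = svm_train(&prob, &param);  // SMO runs in here
                svm_node query[] = {{1, 0.9}, {2, 0.8}, {-1, 0.0}};
                double predicted = svm_predict(model, query);
                (void)predicted;
                svm_free_and_destroy_model(&model);
            }
            svm_destroy_param(&param);
            return 0;
        }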


  • Could someone give me their two cents on this optimization strategy

    - by jimstandard
    Background: I am writing a matching script in Python that will match records of a transaction in one database to names of customers in another database. The complexity is that names are not unique and can be represented multiple different ways from transaction to transaction. Rather than doing multiple queries on the database (which is pretty slow), would it be faster to get all of the records where the last name (which in this case we will say never changes) is "Smith", load those records into memory, and go through each one looking for matches for a specific "John Smith" using various data points? Would this be faster, is it feasible in Python, and if so, does anyone have any recommendations for how to do it?
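    The candidate-block approach described above is both feasible and idiomatic in Python: one query per last name (or one pass over the whole table), then all comparisons happen in memory against a dict. A minimal sketch (table and column names are invented; any DB-API connection works the same way):

        from collections import defaultdict
        import sqlite3

        def build_index(conn):
            """One pass over the customers table, grouped by last name."""
            by_last_name = defaultdict(list)
            for cid, first, last in conn.execute(
                    "SELECT id, first_name, last_name FROM customers"):
                by_last_name[last.strip().lower()].append((cid, first, last))
            return by_last_name

        def candidates(index, last_name):
            """All customers sharing a last name; score these in memory."""
            return index.get(last_name.strip().lower(), [])

        conn = sqlite3.connect("customers.db")
        index = build_index(conn)
        for cid, first, last in candidates(index, "Smith"):
            pass  # compare first name, address, etc. against the transaction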


  • mysql select query optimization

    - by Saharsh Shah
    I have two tables, testa and testb.

        CREATE TABLE `testa` (
          `id` INT(10) NOT NULL AUTO_INCREMENT,
          `name` VARCHAR(50) DEFAULT NULL,
          PRIMARY KEY (`id`)
        );

        CREATE TABLE `testb` (
          `id` INT(10) NOT NULL AUTO_INCREMENT,
          `name` VARCHAR(50) DEFAULT NULL,
          `aid1` INT(10) DEFAULT NULL,
          `aid2` INT(10) DEFAULT NULL,
          `aid3` INT(10) DEFAULT NULL,
          PRIMARY KEY (`id`)
        );

    Currently I am running the query below, which retrieves all rows where id in the testa table matches any of the aid1, aid2, aid3 columns in testb. The query returns the correct result, but it takes at least 30 seconds to execute, which is far too long. I have also tried to optimize the query using UNION but failed.

        SELECT a.id, a.name, b.name, b.id
        FROM testb b
        INNER JOIN testa a
            ON b.aid1 = a.id OR b.aid2 = a.id OR b.aid3 = a.id;

    How do I optimize my query so its total execution time is within 2-3 seconds? Thanks in advance...
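    A standard rewrite for OR-ed join conditions, shown as an untested sketch: MySQL cannot use one index for three alternative columns, so split the join into three indexable branches (UNION rather than UNION ALL, so a row that matches on two columns still appears once, just as with the OR version):

        ALTER TABLE testb
            ADD INDEX idx_aid1 (aid1),
            ADD INDEX idx_aid2 (aid2),
            ADD INDEX idx_aid3 (aid3);

        SELECT a.id, a.name, b.name, b.id FROM testb b JOIN testa a ON b.aid1 = a.id
        UNION
        SELECT a.id, a.name, b.name, b.id FROM testb b JOIN testa a ON b.aid2 = a.id
        UNION
        SELECT a.id, a.name, b.name, b.id FROM testb b JOIN testa a ON b.aid3 = a.id;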


  • Optimization of SQL query regarding pair comparisons

    - by InfiniteSquirrel
    Hi, I'm working on a pair comparison site where a user loads a list of films and grades from another site. My site then picks two random movies and matches them against each other; the user selects the better of the two, and a new pair is loaded. This gives a complete list of movies ordered by whichever is best. The database contains three tables.

    fm_film_data contains all imported movies:

        fm_film_data(id int(11), imdb_id varchar(10), tmdb_id varchar(10), title varchar(255),
                     original_title varchar(255), year year(4), director text, description text,
                     poster_url varchar(255))

    fm_films contains all information related to a user: which movies the user has seen, what grades the user has given, and each film's wins/losses for that user:

        fm_films(id int(11), user_id int(11), film_id int(11), grade int(11), wins int(11), losses int(11))

    fm_log contains a record of every duel that has occurred:

        fm_log(id int(11), user_id int(11), winner int(11), loser int(11))

    To pick a pair to show the user, I've created a MySQL query that checks the log and picks a pair at random:

        SELECT pair.id1, pair.id2
        FROM (SELECT part1.id AS id1, part2.id AS id2
              FROM fm_films AS part1, fm_films AS part2
              WHERE part1.id <> part2.id
                AND part1.user_id = [!!USERID!!]
                AND part2.user_id = [!!USERID!!]) AS pair
        LEFT JOIN (SELECT winner AS id1, loser AS id2
                   FROM fm_log WHERE fm_log.user_id = [!!USERID!!]
                   UNION
                   SELECT loser AS id1, winner AS id2
                   FROM fm_log WHERE fm_log.user_id = [!!USERID!!]) AS log
            ON pair.id1 = log.id1 AND pair.id2 = log.id2
        WHERE log.id1 IS NULL
        ORDER BY RAND()
        LIMIT 1

    This query takes some time to load, about 6 seconds in our tests with two users with about 800 grades each. I'm looking for a way to optimize this while still ensuring each duel appears only once. The server runs MySQL version 5.0.90-community.
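    One direction to try, as an untested sketch: avoid materializing the full cross join and the UNION-ed log as derived tables (neither can use indexes once materialized), and instead let a NOT EXISTS probe fm_log per candidate pair. It assumes indexes on fm_films(user_id) and fm_log(user_id, winner, loser); since a duel is symmetric, p1.id < p2.id also halves the candidate pairs:

        SELECT p1.id AS id1, p2.id AS id2
        FROM fm_films p1
        JOIN fm_films p2
          ON p2.user_id = p1.user_id AND p1.id < p2.id
        WHERE p1.user_id = [!!USERID!!]
          AND NOT EXISTS (
              SELECT 1 FROM fm_log l
              WHERE l.user_id = p1.user_id
                AND ((l.winner = p1.id AND l.loser = p2.id)
                  OR (l.winner = p2.id AND l.loser = p1.id)))
        ORDER BY RAND()
        LIMIT 1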


  • ruby convert hundredth seconds to timestamp optimization

    - by Nik
    Hey! I want to convert "123456" to "00:20:34.56", where the two digits to the right of the decimal point are hundredths of a second. So 00:00:00.99 + 00:00:00.01 = 00:00:01.00. What I have:

        def to_hmsc(cent)
          cent = cent.to_i   # the examples pass a string
          h = (cent/360000).floor
          cent -= h*360000
          m = (cent/6000).floor
          cent -= m*6000
          s = (cent/100).floor
          cent -= s*100
          "#{h}:#{m}:#{s}.#{cent}"
        end

    does this:

        to_hmsc("123456") #=> "0:20:34.56"

    Question 1: I mean, this is Ruby; I find the 'cent -= ...' part rather clunky. Can you see any way to shorten the entire process?
    Question 2: This has been asked a million times before, but please share whatever you've got: what's the shortest way to add leading zeros to the digits, so that to_hmsc("123456") #=> "00:20:34.56". Thanks!
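    For both questions at once, a minimal sketch using divmod and a format string, which shortens the arithmetic and adds the leading zeros:

        def to_hmsc(cent)
          cent = cent.to_i
          h, cent = cent.divmod(360_000)   # hundredths per hour
          m, cent = cent.divmod(6_000)     # hundredths per minute
          s, c    = cent.divmod(100)
          format("%02d:%02d:%02d.%02d", h, m, s, c)
        end

        to_hmsc("123456")  # => "00:20:34.56"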


  • Does "Return value optimization" cause undefined behavior?

    - by 6pack kid
    Reading the Wikipedia article pointed to by one of the repliers to the following question: http://stackoverflow.com/questions/2323225/c-copy-constructor-temporaries-and-copy-semantics I came across this line: "Depending on the compiler, and the compiler's settings, the resulting program may display any of the following outputs". Doesn't this qualify as undefined behavior? I know the article says "depending on the compiler and settings", but I just want to clear this up.
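    For reference, the example under discussion is essentially the following (reconstructed from the usual copy-elision illustration). The standard explicitly permits the copy constructor's side effect to be elided here, so the program may print zero, one, or two copy messages; that makes the output one of a small set of allowed behaviors, which is implementation-dependent rather than undefined:

        #include <iostream>

        struct C {
            C() {}
            C(const C&) { std::cout << "A copy was made.\n"; }
        };

        C f() {
            return C();   // the temporary may be built directly in the target (RVO)
        }

        int main() {
            std::cout << "Hello World!\n";
            C obj = f();  // this copy may be elided as well
        }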


  • empty base class optimization

    - by FredOverflow
    Two quotes from the C++ standard, §1.8: An object is a region of storage. Base class subobjects may have zero size. I don't think a region of storage can be of size zero. That would mean that some base class subobjects aren't actually objects. Opinions?
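    A small example of what zero-size base subobjects mean in practice (the empty base optimization). A complete object of an empty type still has sizeof at least 1, but as a base subobject it may occupy no storage:

        #include <iostream>

        struct Empty {};
        struct Derived : Empty { int x; };

        int main() {
            std::cout << sizeof(Empty) << '\n';    // at least 1
            std::cout << sizeof(Derived) << '\n';  // commonly sizeof(int): the
                                                   // Empty base takes no space
        }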


  • DB optimization to use it as a queue

    - by anony
    We have a table called worktable which has some columns (key (primary key), ptime, aname, status, content). Something called a producer puts rows into this table, and a consumer does an ORDER BY on the key column and fetches the first row that has status 'pending'. The consumer does some processing on this row:

    1. updates status to "processing"
    2. does some processing using content
    3. deletes the row

    We are facing contention issues when we try to run multiple consumers (probably due to the ORDER BY, which does a full table scan). Using Advanced Queues would be our next step, but before we go there we want to check the maximum throughput we can achieve with multiple consumers and a producer on the table. What optimizations can we do to get the best numbers possible? Can we do in-memory processing where a consumer fetches 1000 rows at a time, processes them, and deletes them? Will that improve things? What are the other possibilities? Partitioning of the table? Parallelization? Index-organized tables?...
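    Since Advanced Queues points at Oracle: the usual table-as-queue pattern there is SELECT ... FOR UPDATE SKIP LOCKED together with an index on (status, key), so each consumer claims a different pending row instead of all of them colliding on the first one. A minimal, untested sketch (column names as in the question):

        -- supporting index so consumers do not scan the whole table
        CREATE INDEX worktable_pending_ix ON worktable (status, key);

        DECLARE
            CURSOR c IS
                SELECT key FROM worktable
                WHERE status = 'pending'
                ORDER BY key
                FOR UPDATE SKIP LOCKED;   -- competing consumers skip locked rows
            v_key worktable.key%TYPE;
        BEGIN
            OPEN c;
            FETCH c INTO v_key;
            IF c%FOUND THEN
                UPDATE worktable SET status = 'processing' WHERE CURRENT OF c;
                COMMIT;   -- row is claimed; process its content, then delete it
            END IF;
            CLOSE c;
        END;
        /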


  • NHibernate Performance Optimization | Suggestions invited!!!

    - by user336749
    Hi, I'm facing an issue with NHibernate performance; can you please suggest some optimizations? Here is a small summary of my application architecture. I have a Windows service which listens to a messaging bus. On receiving a message, the service creates an object, one property of which is the received xml snippet, and saves the message to the DB (using NH). There is a WPF UI with a read-only connection to the DB; on refresh, the UI displays the objects on the screen. During a refresh, the UI retrieves the xml and deserializes it, and the object's properties are derived from it and bound to the screen. For example, assume an xml XXX is received by the service: it deserializes the xml, creates the book object, and saves it to the DB, where a property/column SCHEMA contains the xml snippet. The UI, when refreshed, searches all book objects by ID and creates the book objects out of the saved xml (yes, the xml is the constructor param). Now my issue is that the refresh takes more than 2 minutes to display, say, 50 book objects. I analyzed it using the NHibernate profiler and found that the time spent in the DB is negligible, but the time spent creating the entities is proportionally huge (10ms vs 1990ms). I guess it's due to the fairly large xml snippet and its deserialization. My question is: how can I improve the performance? I dispose of sessions after every refresh and am not lazy loading (please note that the time spent in the DB is negligible). On every refresh it's possible that all objects have been updated by some downstream systems, or maybe just one of them. Can I implement some sort of caching mechanism in this case? Thanks in advance for any suggestions. Regards, -Mike
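    Since the profiler shows the cost is entity construction (the xml deserialization) rather than the database, one direction is to cache the deserialized objects across refreshes and rebuild only the ones whose rows changed. A minimal sketch with invented names; it assumes some version or timestamp column exists to detect changes:

        public class Book
        {
            public Book(string xml) { /* the expensive deserialization */ }
        }

        public class BookCache
        {
            private readonly Dictionary<int, KeyValuePair<int, Book>> cache =
                new Dictionary<int, KeyValuePair<int, Book>>();

            // version could be an NHibernate <version> column or a row timestamp
            public Book Get(int id, int version, string xml)
            {
                KeyValuePair<int, Book> entry;
                if (cache.TryGetValue(id, out entry) && entry.Key == version)
                    return entry.Value;        // row unchanged: reuse the object

                var book = new Book(xml);      // deserialize only when needed
                cache[id] = new KeyValuePair<int, Book>(version, book);
                return book;
            }
        }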


  • NHibernate: Using value tables for optimization AND dynamic join

    - by Kostya
    Hi all, my situation is this: there are two entities with a many-to-many relation, e.g. Products and Categories. Also, categories have a hierarchical structure, like a tree. I need to select all products that belong to some concrete category together with all its children (the whole branch). So I use the following sql statement to do that:

        SELECT *
        FROM Products p
        WHERE p.ID IN (
            SELECT DISTINCT pc.ProductID
            FROM ProductsCategories pc
            INNER JOIN Categories c ON c.ID = pc.CategoryID
            WHERE c.TLeft >= 1 AND c.TRight <= 33378
        )

    But with a big data set this query executes for a very long time, and I found a way to optimize it, look here:

        DECLARE @CatProducts TABLE
        (
            ProductID int NOT NULL
        )

        INSERT INTO @CatProducts
        SELECT DISTINCT pc.ProductID
        FROM ProductsCategories pc
        INNER JOIN Categories c ON c.ID = pc.CategoryID
        WHERE c.TLeft >= 1 AND c.TRight <= 33378

        SELECT *
        FROM Products p
        INNER JOIN @CatProducts cp ON cp.ProductID = p.ID

    This query executes very fast, but I don't know how to do that with NHibernate. Note that I need to use only ICriteria because of dynamic filtering/ordering. If someone knows a solution for that, it will be fantastic. I'll welcome any suggestions, of course. Thank you ahead, Kostya
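    ICriteria cannot create table variables, but the criteria-level equivalent of the inner SELECT is a DetachedCriteria fed to the outer query as a subquery, which leaves room for the dynamic filters and orderings. A minimal, untested sketch; it assumes Category maps a Products collection and that the identifier properties are named ID:

        var productIds = DetachedCriteria.For<Category>()
            .CreateAlias("Products", "p")    // the mapped many-to-many
            .Add(Restrictions.Ge("TLeft", 1))
            .Add(Restrictions.Le("TRight", 33378))
            .SetProjection(Projections.Distinct(Projections.Property("p.ID")));

        var products = session.CreateCriteria<Product>()
            .Add(Subqueries.PropertyIn("ID", productIds))
            // dynamic filtering/ordering can still be appended here
            .List<Product>();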


  • Most efficient algorithm for merging sorted IEnumerable<T>

    - by franck
    Hello, I have several huge sorted enumerable sequences that I want to merge. These lists are manipulated as IEnumerable but are already sorted. Since the input lists are sorted, it should be possible to merge them in one pass, without re-sorting anything. I would like to keep the deferred execution behavior. I tried to write a naive algorithm which does that (see below). However, it looks pretty ugly and I'm sure it can be optimized. There may be a more academic algorithm...

        IEnumerable<T> MergeOrderedLists<T, TOrder>(IEnumerable<IEnumerable<T>> orderedlists,
                                                    Func<T, TOrder> orderBy)
        {
            var enumerators = orderedlists.ToDictionary(l => l.GetEnumerator(), l => default(T));
            IEnumerator<T> tag = null;
            var firstRun = true;
            while (true)
            {
                var toRemove = new List<IEnumerator<T>>();
                var toAdd = new List<KeyValuePair<IEnumerator<T>, T>>();
                foreach (var pair in enumerators.Where(pair => firstRun || tag == pair.Key))
                {
                    if (pair.Key.MoveNext())
                        toAdd.Add(pair);
                    else
                        toRemove.Add(pair.Key);
                }
                foreach (var enumerator in toRemove)
                    enumerators.Remove(enumerator);
                foreach (var pair in toAdd)
                    enumerators[pair.Key] = pair.Key.Current;
                if (enumerators.Count == 0)
                    yield break;
                var min = enumerators.OrderBy(t => orderBy(t.Value)).FirstOrDefault();
                tag = min.Key;
                yield return min.Value;
                firstRun = false;
            }
        }

    The method can be used like this:

        // Person lists are already sorted by age
        MergeOrderedLists(orderedList, p => p.Age);

    assuming the following Person class exists somewhere:

        public class Person { public int Age { get; set; } }

    Duplicates should be conserved; we don't care about their order in the new sequence. Do you see any obvious optimization I could use?
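    A minimal sketch of a more conventional k-way merge: keep one live enumerator per sequence, advance only the one that produced the last minimum, and pick the next minimum with a direct scan of the k heads. This keeps deferred execution and drops the per-element OrderBy allocation; for many input lists the scan could be replaced by a binary heap for O(log k) selection:

        static IEnumerable<T> MergeOrdered<T, TKey>(IEnumerable<IEnumerable<T>> lists,
                                                    Func<T, TKey> keyOf)
        {
            var comparer = Comparer<TKey>.Default;
            var heads = new List<IEnumerator<T>>();
            foreach (var list in lists)
            {
                var e = list.GetEnumerator();
                if (e.MoveNext())
                    heads.Add(e);              // keep only non-empty sequences
            }
            while (heads.Count > 0)
            {
                int min = 0;                   // index of the smallest current head
                for (int i = 1; i < heads.Count; i++)
                    if (comparer.Compare(keyOf(heads[i].Current),
                                         keyOf(heads[min].Current)) < 0)
                        min = i;
                yield return heads[min].Current;
                if (!heads[min].MoveNext())
                    heads.RemoveAt(min);       // that sequence is exhausted
            }
        }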


  • Is there anything else I can do to optimize this MySQL query?

    - by Legend
    I have two tables, Table A with 700,000 entries and Table B with 600,000 entries. The structure is as follows:

    Table A:

        +-----------+---------------------+------+-----+---------+----------------+
        | Field     | Type                | Null | Key | Default | Extra          |
        +-----------+---------------------+------+-----+---------+----------------+
        | id        | bigint(20) unsigned | NO   | PRI | NULL    | auto_increment |
        | number    | bigint(20) unsigned | YES  |     | NULL    |                |
        +-----------+---------------------+------+-----+---------+----------------+

    Table B:

        +-------------+---------------------+------+-----+---------+----------------+
        | Field       | Type                | Null | Key | Default | Extra          |
        +-------------+---------------------+------+-----+---------+----------------+
        | id          | bigint(20) unsigned | NO   | PRI | NULL    | auto_increment |
        | number_s    | bigint(20) unsigned | YES  | MUL | NULL    |                |
        | number_e    | bigint(20) unsigned | YES  | MUL | NULL    |                |
        | source      | varchar(50)         | YES  |     | NULL    |                |
        +-------------+---------------------+------+-----+---------+----------------+

    I am trying to find whether any of the values in Table A are present in Table B using the following code:

        $sql = "SELECT number from TableA";
        $result = mysql_query($sql) or die(mysql_error());
        while($row = mysql_fetch_assoc($result)) {
            $number = $row['number'];
            $sql = "SELECT source, count(source) FROM TableB
                    WHERE number_s < $number AND number_e > $number
                    GROUP BY source";
            $re = mysql_query($sql) or die(mysql_error());
            while($ro = mysql_fetch_array($re)) {
                echo $number."\t".$ro[0]."\t".$ro[1]."\n";
            }
        }

    I was hoping that the query would be fast, but for some reason it isn't. My EXPLAIN on the inner select (with a particular value of "number") gives me the following:

        mysql> EXPLAIN SELECT source, count(source) FROM TableB
            -> WHERE number_s < 1812194440 AND number_e > 1812194440 GROUP BY source;
        +----+-------------+--------+------+-------------------+------+---------+------+--------+----------------------------------------------+
        | id | select_type | table  | type | possible_keys     | key  | key_len | ref  | rows   | Extra                                        |
        +----+-------------+--------+------+-------------------+------+---------+------+--------+----------------------------------------------+
        |  1 | SIMPLE      | TableB | ALL  | number_s,number_e | NULL | NULL    | NULL | 696325 | Using where; Using temporary; Using filesort |
        +----+-------------+--------+------+-------------------+------+---------+------+--------+----------------------------------------------+
        1 row in set (0.00 sec)

    Is there any optimization I can squeeze out of this? I tried writing a stored procedure for the same task, but it doesn't even seem to work. It doesn't give any syntax errors, but I tried running it for a day and it was still running, which felt odd.

        CREATE PROCEDURE Filter()
        BEGIN
            DECLARE number BIGINT UNSIGNED;
            DECLARE x INT;
            DECLARE done INT DEFAULT 0;
            DECLARE cur1 CURSOR FOR SELECT number FROM TableA;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
            CREATE TEMPORARY TABLE IF NOT EXISTS Flags(number bigint unsigned, count int(11));
            OPEN cur1;
            hist_loop: LOOP
                FETCH cur1 INTO number;
                SELECT count(*) FROM TableB WHERE number_s < number AND number_e > number INTO x;
                IF done = 1 THEN
                    LEAVE hist_loop;
                END IF;
                IF x IS NOT NULL AND x > 0 THEN
                    INSERT INTO Flags(number, count) VALUES(number, x);
                END IF;
            END LOOP hist_loop;
            CLOSE cur1;
        END
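    Two things usually help with this pattern, shown as an untested sketch: a composite index, so MySQL can at least seek on the number_s bound instead of scanning all 696k rows, and doing the whole probe as one join instead of 700,000 round trips from PHP:

        ALTER TABLE TableB ADD INDEX idx_range (number_s, number_e);

        SELECT a.number, b.source, COUNT(b.source)
        FROM TableA a
        JOIN TableB b
          ON a.number > b.number_s AND a.number < b.number_e
        GROUP BY a.number, b.source;

    Open-ended range conditions like this are inherently hard to index; if the (number_s, number_e) intervals never overlap, a common trick is a single ORDER BY number_s DESC LIMIT 1 lookup per value, followed by a check of number_e.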


  • Local Variables take 7x longer to access than global variables?

    - by ItzWarty
    I was trying to benchmark the gain/loss of "caching" Math.floor, in hopes that I could make calls faster. Here was the test:

        <html>
        <head>
        <script>
        window.onload = function() {
            var startTime = new Date().getTime();
            var k = 0;
            for (var i = 0; i < 1000000; i++) k += Math.floor(9.99);
            var mathFloorTime = new Date().getTime() - startTime;

            startTime = new Date().getTime();
            window.mfloor = Math.floor;
            k = 0;
            for (var i = 0; i < 1000000; i++) k += window.mfloor(9.99);
            var globalFloorTime = new Date().getTime() - startTime;

            startTime = new Date().getTime();
            var mfloor = Math.floor;
            k = 0;
            for (var i = 0; i < 1000000; i++) k += mfloor(9.99);
            var localFloorTime = new Date().getTime() - startTime;

            document.getElementById("MathResult").innerHTML = mathFloorTime;
            document.getElementById("globalResult").innerHTML = globalFloorTime;
            document.getElementById("localResult").innerHTML = localFloorTime;
        };
        </script>
        </head>
        <body>
            Math.floor: <span id="MathResult"></span>ms <br />
            var mathfloor: <span id="globalResult"></span>ms <br />
            window.mathfloor: <span id="localResult"></span>ms <br />
        </body>
        </html>

    My results from the test:

        [Chromium 5.0.308.0]             Math.floor: 49ms    var mathfloor: 271ms          window.mathfloor: 40ms
        [IE 8.0.6001.18702]              Math.floor: 703ms   var mathfloor: 9890ms [LOL!]  window.mathfloor: 375ms
        [Firefox (Minefield) 3.7a4pre]   Math.floor: 42ms    var mathfloor: 2257ms         window.mathfloor: 60ms
        [Safari 4.0.4 (531.21.10)]       Math.floor: 92ms    var mathfloor: 289ms          window.mathfloor: 90ms
        [Opera 10.10 build 1893]         Math.floor: 500ms   var mathfloor: 843ms          window.mathfloor: 360ms
        [Konqueror 4.3.90 (KDE 4.4 RC1)] Math.floor: 453ms   var mathfloor: 563ms          window.mathfloor: 312ms

    The variance is random, of course, but in most cases the ordering from slowest to fastest is: var mathfloor, then Math.floor, then window.mathfloor. Why is this? In my projects I've been using var mfloor = Math.floor, and according to my not-so-amazing benchmarks, my efforts to "optimize" actually slowed down the script by a lot... Is there any other way to make my code more "efficient"? I'm at the stage where I basically need to optimize, so no, this isn't "premature optimization"...


  • Optimizing Jaro-Winkler algorithm

    - by Pentium10
    I have this code for the Jaro-Winkler algorithm, taken from this website. I need to run it 150,000 times to get the distance between differences. It takes a long time, as I run it on an Android mobile device. Can it be optimized more?

        public class Jaro {
            /**
             * gets the similarity of the two strings using Jaro distance.
             *
             * @param string1 the first input string
             * @param string2 the second input string
             * @return a value between 0-1 of the similarity
             */
            public float getSimilarity(final String string1, final String string2) {
                // get half the length of the string rounded up
                // (this is the distance used for acceptable transpositions)
                final int halflen = ((Math.min(string1.length(), string2.length())) / 2)
                        + ((Math.min(string1.length(), string2.length())) % 2);

                // get common characters
                final StringBuffer common1 = getCommonCharacters(string1, string2, halflen);
                final StringBuffer common2 = getCommonCharacters(string2, string1, halflen);

                // check for zero in common
                if (common1.length() == 0 || common2.length() == 0) {
                    return 0.0f;
                }

                // check for same length common strings returning 0.0f is not the same
                if (common1.length() != common2.length()) {
                    return 0.0f;
                }

                // get the number of transpositions
                int transpositions = 0;
                int n = common1.length();
                for (int i = 0; i < n; i++) {
                    if (common1.charAt(i) != common2.charAt(i))
                        transpositions++;
                }
                transpositions /= 2.0f;

                // calculate jaro metric
                return (common1.length() / ((float) string1.length())
                        + common2.length() / ((float) string2.length())
                        + (common1.length() - transpositions) / ((float) common1.length())) / 3.0f;
            }

            /**
             * returns a string buffer of characters from string1 within string2 if they are of a given
             * distance seperation from the position in string1.
             *
             * @param string1
             * @param string2
             * @param distanceSep
             * @return a string buffer of characters from string1 within string2 if they are of a given
             *         distance seperation from the position in string1
             */
            private static StringBuffer getCommonCharacters(final String string1,
                    final String string2, final int distanceSep) {
                // create a return buffer of characters
                final StringBuffer returnCommons = new StringBuffer();
                // create a copy of string2 for processing
                final StringBuffer copy = new StringBuffer(string2);
                // iterate over string1
                int n = string1.length();
                int m = string2.length();
                for (int i = 0; i < n; i++) {
                    final char ch = string1.charAt(i);
                    // set boolean for quick loop exit if found
                    boolean foundIt = false;
                    // compare char with range of characters to either side
                    for (int j = Math.max(0, i - distanceSep);
                            !foundIt && j < Math.min(i + distanceSep, m - 1); j++) {
                        // check if found
                        if (copy.charAt(j) == ch) {
                            foundIt = true;
                            // append character found
                            returnCommons.append(ch);
                            // alter copied string2 for processing
                            copy.setCharAt(j, (char) 0);
                        }
                    }
                }
                return returnCommons;
            }
        }

    I should mention that in the whole process I create just one instance of the class, so only once: jaro = new Jaro(); If you are going to test it and need examples, so as not to break the script, you will find them here, in another thread about Python optimization.
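    A common allocation-free rewrite, as an untested sketch: work on char arrays with a reusable boolean "used" mask instead of building a StringBuffer plus a full copy of string2 on every call. The two buffers can be allocated once in the single Jaro instance and reused across all 150,000 calls. Note the window bound here also differs slightly from the code above, whose inner loop never examines the last character of string2:

        // commonOut and used are preallocated to the maximum string length
        // and reused between calls (safe because there is a single instance).
        private static int commonCharacters(char[] s1, char[] s2, int distanceSep,
                                            char[] commonOut, boolean[] used) {
            java.util.Arrays.fill(used, 0, s2.length, false);
            int count = 0;
            for (int i = 0; i < s1.length; i++) {
                final char ch = s1[i];
                final int from = Math.max(0, i - distanceSep);
                final int to = Math.min(i + distanceSep + 1, s2.length);
                for (int j = from; j < to; j++) {
                    if (!used[j] && s2[j] == ch) {
                        used[j] = true;           // replaces copy.setCharAt(j, 0)
                        commonOut[count++] = ch;  // replaces returnCommons.append
                        break;
                    }
                }
            }
            return count;  // commonOut[0..count) holds the common characters
        }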


  • Changing html <-> ajax <-> php/mysql to threaded approach

    - by Saif Bechan
    I have an application that needs to be updated in real time. There are various counters and other information that have to come from the database, and the system needs to be up to date for the user. My approach now is just a normal ajax request every second to get the new values from the database: a piece of JavaScript loops every second, getting the values through ajax. This works fine, but I think it's very inefficient.

    The problem
    There is an ajax script that loops every second requesting data from PHP, and on the server it has to load the PHP interpreter each time. The PHP file has to get the data and format it correctly: PHP has to make a connection with the MySQL database, work with the database (reads, never writes), format the data so it can be sent, send the data back to the browser, and close the database connection and the PHP interpreter. Last, the browser has to read these values and update the various HTML parts.

    Now with this approach it has to load the interpreter and make a DB connection every second. I was thinking of a way to make this more efficient, and maybe use a threaded approach.

    Threaded approach
    Do a post to PHP when you enter the page and keep the connection alive. In PHP, load the interpreter only once, and make a connection to the DB only once. Every second, send an ajax response to the JavaScript listener; the JavaScript listener then just changes values as each response from PHP arrives. I think this approach would be a great optimization for the server load and overall performance. But I can spot some weak points in the system and I need some help with these.

    Problems with the approach
    PHP execution time limit: I don't think PHP is designed for such a setup. I know there is a time limit on PHP script execution, and I don't know if an everlasting loop in PHP will cause any serious cpu/memory problems.
    Sending ajax requests without breaking: I don't know if it is possible to have just one ajax post action that stays open and keeps accepting data.
    User exits the page: What happens when the user exits the page and the PHP script is still going? Will it go on forever?
    Security issues: So far I can't think of any security issues. Almost every setup has some, though; maybe there are some with this solution that I do not know of.

    Open to other solutions
    I really want to change the setup as it is now and move to a threaded approach or better. If someone has a better approach to tackle this, I definitely want to hear it. Maybe some other script is better suited to an ongoing runtime. I only know PHP and Java, so any suggestions are welcome and I am willing to dig through them. I know there are things like Perl, Python etcetera that are used for this type of threaded work, but I don't know which one is best suited.

    When using another script
    If the best way is to go with another type of script like Perl, Python etcetera, I do have some criteria: the script has to be accessible via ajax post; if it accepts some kind of json encode/decode, that would be nice; and the script has to be able to access the session file. This is essential because I need to know if the user is logged in. The script also has to be able to easily talk to MySQL. All comments are welcome, and I hope this question is helpful to others as well. Cheers!
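    For the "keep one request open" idea without an everlasting loop, the usual PHP-only compromise is long polling: each ajax request stays open until the data changes or a timeout passes, then the client immediately reconnects. A minimal, untested sketch (connection details and table names are invented; note session_write_close so the sleeping script does not block other requests from the same user):

        <?php
        set_time_limit(0);                  // this request may outlive max_execution_time
        session_start();
        $loggedIn = isset($_SESSION['user_id']);
        session_write_close();              // keep the session readable, release its lock
        if (!$loggedIn) { exit; }

        $db   = new mysqli('localhost', 'user', 'pass', 'app');
        $last = isset($_GET['last']) ? $_GET['last'] : null;

        for ($i = 0; $i < 30; $i++) {       // give up after ~30s; the client re-polls
            $res = $db->query('SELECT counter FROM stats WHERE id = 1');
            $row = $res->fetch_assoc();
            if ($row['counter'] !== $last) {
                echo json_encode(array('counter' => $row['counter']));
                exit;                       // respond only when something changed
            }
            sleep(1);
        }
        echo json_encode(array('counter' => $last));  // timeout: client re-polls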


  • Accessing local variable doesn't improve performance

    - by NicMagnier
    The short version
    Why is this code:

        var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4;

    more performant than this one?

        var index = Math.floor(ref_index) * 4;

    The long version
    This week, the author of Impact js published an article about a rendering issue: http://www.phoboslab.org/log/2012/09/drawing-pixels-is-hard In the article there was the source of a function that scales an image by accessing pixels in the canvas. I wanted to suggest some traditional ways to optimize this kind of code so that the scaling would be shorter at loading time. But after testing, my result was most of the time worse than the original function. Guessing it was the JavaScript engine doing some smart optimization, I tried to understand a bit more what was going on, so I did a bunch of tests. But my results are quite confusing and I need some help understanding what's going on.

    I have a test page here: http://www.mx981.com/stuff/resize_bench/test.html
    jsPerf: http://jsperf.com/local-variable-due-to-the-scope-lookup
    To start the test, click the picture and the results will appear in the console. There are three different versions.

    The original code:

        for( var y = 0; y < heightScaled; y++ ) {
            for( var x = 0; x < widthScaled; x++ ) {
                var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4;
                var indexScaled = (y * widthScaled + x) * 4;
                scaledPixels.data[ indexScaled ] = origPixels.data[ index ];
                scaledPixels.data[ indexScaled+1 ] = origPixels.data[ index+1 ];
                scaledPixels.data[ indexScaled+2 ] = origPixels.data[ index+2 ];
                scaledPixels.data[ indexScaled+3 ] = origPixels.data[ index+3 ];
            }
        }

    jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance

    One of my attempts to optimize it:

        var ref_index = 0;
        var ref_indexScaled = 0;
        var ref_step = 1 / scale;
        for( var y = 0; y < heightScaled; y++ ) {
            for( var x = 0; x < widthScaled; x++ ) {
                var index = Math.floor(ref_index) * 4;
                scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index ];
                scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+1 ];
                scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+2 ];
                scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+3 ];
                ref_index += ref_step;
            }
        }

    jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance

    The same optimized code, but recalculating the index variable each time (hybrid):

        var ref_index = 0;
        var ref_indexScaled = 0;
        var ref_step = 1 / scale;
        for( var y = 0; y < heightScaled; y++ ) {
            for( var x = 0; x < widthScaled; x++ ) {
                var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4;
                scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index ];
                scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+1 ];
                scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+2 ];
                scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+3 ];
                ref_index += ref_step;
            }
        }

    jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance

    The only difference between the last two is the calculation of the 'index' variable, and to my surprise the optimized version is slower in most browsers (except Opera).

    Results of personal testing (not the jsPerf tests):

        Opera     Original: 8668ms   Optimized: 932ms   Hybrid: 8696ms
        Chrome    Original: 139ms    Optimized: 145ms   Hybrid: 136ms
        Safari    Original: 433ms    Optimized: 853ms   Hybrid: 451ms
        Firefox   Original: 343ms    Optimized: 422ms   Hybrid: 350ms

    After digging around, it seems the usual good practice is to access mainly local variables, because of scope lookup. Since the optimized version reads only one local variable, it should be faster than the hybrid code, which touches several variables and objects in addition to the extra arithmetic. So why is the "optimized" version slower? I thought it might be because some JavaScript engines don't optimize it when it is not hot enough, but after using --trace-opt in Chrome, it seems all versions are properly compiled by V8. At this point I am a bit clueless and wonder if somebody knows what is going on. I also put some more test cases on this page: http://www.mx981.com/stuff/resize_bench/index.html


  • How can I copy music to my iPod nano without iTunes on Mac?

    - by ron
    Title says it all. I'm on a Mac with the latest iTunes, and it doesn't recognize my iPod anymore, although it still mounts to the desktop. I've tried everything, but nothing works (the iPod itself is fine; it works on other Macs). How can I copy my music to the iPod without going through iTunes? Are there any tools for this, like there are on Windows? Thanks!


  • What is Search Engine Optimization - An Art Or Science?

    As ridiculous and as outrageous as this question might sound, there has been no evident and obvious answer to it. Whether the process of search engine optimization is an art or mere science is something that web scholars have been debating for a long time and, to people's amusement, have still not settled. One important step taken towards answering the question was to ask the service providers themselves how they think of SEO.


  • Search Engine Optimization and SEO Services - What Do They Offer?

    Search engine optimization involves using the Internet as a marketing tool. Companies that use the Internet to attract customers want to maximize their exposure to potential customers. To take best advantage of SEO, it is recommended that you hire a professional service that can implement programs to evaluate the optimum marketing potential of the Internet. When considering hiring a company to assist you in this process, it is important to understand how such services work.


  • Why Does On-Page Search Engine Optimization Work So Well?

    On-page search engine optimization has been around for an inordinately long time (it is probably the first kind of SEO that marketers began to use), but it is only lately that people have begun to understand its great efficacy in bolstering the prospects of any website. The term describes all the methods you use on the page of the website itself in order to enhance its prospects with the search engines.

