Search Results

Search found 5642 results on 226 pages for 'coding efficiency'.

Page 18 of 226

  • jQuery bind efficiency

    - by chelfers
    I'm having load-speed issues using multiple jQuery binds on a couple of thousand elements and inputs; is there a more efficient way of doing this? The site switches between product lists via ajax calls, and the page cannot refresh. Some lists have 10 items, some 100, some over 2000. The slowdown appears when I flip between the lists: each time the 2000+ item list is loaded, the page drags for about 10 seconds. Before I rebuild the list I set the target element's html to '' and unbind the two handlers below. I'm sure it has something to do with all the parent, prev, next, and children calls I make in the callbacks. Any help is much appreciated.

    The markup, repeated roughly 2500 times:

        <ul>
            <li><input type="text" class="product-code" /></li>
            <li>PROD-CODE</li>
            ...
            <li>PRICE</li>
        </ul>

    The two bindings:

        $('li.product-code').bind('click', function (event) {
            selector = '#p-' + $(this).prev('li').children('input').attr('lm');
            $(selector).val( $(selector).val() == '' ? 1 : parseFloat($(selector).val()) + 1 );
            Remote.Cart.lastProduct = selector;
            Remote.Cart.Products.Push(
                Remote.Cart.customerKey,
                {
                    code      : $(this).prev('li').children('input').attr('code'),
                    title     : $(this).next('li').html(),
                    quantity  : $('#p-' + $(this).prev('li').children('input').attr('lm')).val(),
                    price     : $(this).prev('li').children('input').attr('price'),
                    weight    : $(this).prev('li').children('input').attr('weight'),
                    taxable   : $(this).prev('li').children('input').attr('taxable'),
                    productId : $(this).prev('li').children('input').attr('productId'),
                    links     : $(this).prev('li').children('input').attr('productLinks')
                },
                '#p-' + $(this).prev('li').children('input').attr('lm'),
                false,
                parseFloat($(selector).val()) - 1
            );
            return false;
        });

        $('input.product-qty').bind('keyup', function () {
            Remote.Cart.lastProduct = '#p-' + $(this).attr('lm');
            Remote.Cart.Products.Push(
                Remote.Cart.customerKey,
                {
                    code      : $(this).attr('code'),
                    title     : $(this).parent().next('li').next('li').html(),
                    quantity  : $(this).val(),
                    price     : $(this).attr('price'),
                    weight    : $(this).attr('weight'),
                    taxable   : $(this).attr('taxable'),
                    productId : $(this).attr('productId'),
                    links     : $(this).attr('productLinks')
                },
                '#p-' + $(this).attr('lm'),
                false,
                previousValue
            );
        });
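
    A common way to cut this cost on large lists is event delegation: bind once to a container that survives the ajax swaps, let the clicks bubble up, and cache the repeated traversals inside the handler. A minimal sketch, assuming the lists are rendered into a container such as #product-list (that id is an assumption) and jQuery 1.7+ for .on() (older versions can use .delegate()):

        // One handler for the whole list; nothing to rebind when the list is rebuilt.
        $('#product-list').on('click', 'li.product-code', function (event) {
            var $input   = $(this).prev('li').children('input');   // look the input up once
            var selector = '#p-' + $input.attr('lm');
            var current  = $(selector).val();
            $(selector).val(current == '' ? 1 : parseFloat(current) + 1);
            // ... build the Remote.Cart.Products.Push() call from $input here instead of
            // repeating $(this).prev('li').children('input') for every attribute ...
            return false;
        });

    With delegation there is also nothing to unbind before a rebuild, so the per-list setup cost stops growing with the number of rows.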

    Read the article

  • Improving the efficiency of Kinect for Windows DTWGestureRecognition Application

    - by Ray
    Currently I am using the DTWGestureRecognition open source tool for Kinect SDK v1.5. I have recorded a few gestures and use them to navigate through Windows 7. I have also implemented voice control for simple things such as opening PowerPoint, Chrome, etc. My main issue is that the application uses quite a bit of CPU, which makes it slow. During gestures and voice commands, CPU usage sometimes spikes to 80-90%, and the application becomes unresponsive for a few seconds. I am running it on a 64-bit Windows 7 machine with an i5 processor and 8 GB of RAM. Has anyone with experience using this tool, or Kinect in general, managed to make it more efficient and less of a performance hog? I have already removed the sections that display the RGB and depth video, but even that did not make a big impact. Any help is appreciated, thanks!

    Read the article

  • Increase efficiency for an R simulator of the Monty Hall Puzzle

    - by jahan_m
    The Monty Hall Problem is a simple probability puzzle that stumps even professionals in careers dealing with heavy-duty math. Here's the basic problem: Suppose you're on a game show, and you're given the choice of three doors. Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice? You can find numerous explanations of the solution here: http://en.wikipedia.org/wiki/Monty_Hall_problem

    Goal of my simulation: show that a switching strategy wins the car 2/3 of the time. I got curious and wrote a little function that simulates the problem many times and returns the proportion of wins if you switched and the proportion of wins if you stayed with your first choice. The function then plots the cumulative wins. First and foremost, I'm interested in hearing whether my simulation really replicates the Monty Hall problem, or whether some aspect of the code gets it wrong. Secondly, the function takes a long time to run once I get to about 10,000 simulations. I know I don't need that many simulations to make the point, but I'd love to hear some ideas on how to make it more efficient. Thanks for your feedback!

        Monty_Hall = function(repetitions) {
          doors = c('A', 'B', 'C')
          stay_wins = 0
          switch_wins = 0
          series = data.frame(sim_num = seq(repetitions),
                              cum_sum_stay = replicate(repetitions, 0),
                              cum_sum_switch = replicate(repetitions, 0))
          for (i in seq(repetitions)) {
            winning_door = sample(doors, 1)
            contestant_chooses = sample(doors, 1)
            if (contestant_chooses == winning_door) stay_wins = stay_wins + 1 else switch_wins = switch_wins + 1
            series[i, 'cum_sum_stay'] = stay_wins
            series[i, 'cum_sum_switch'] = switch_wins
          }
          plot(series$sim_num, series$cum_sum_switch, col = 2,
               ylab = 'Cumulative # of wins', xlab = 'Simulation #',
               main = sprintf('%d Simulations of the Monty Hall Paradox', repetitions), type = 'l')
          lines(series$sim_num, series$cum_sum_stay, col = 4)
          legend('topleft',
                 legend = c('Cumulative wins from switching', 'Cumulative wins from staying'),
                 col = c(2, 4), lty = 1)
          result = list(series = series, stay_wins = stay_wins, switch_wins = switch_wins,
                        proportion_stay_wins = stay_wins / repetitions,
                        proportion_switch_wins = switch_wins / repetitions)
          return(result)
        }

        # Theory predicts that it is to the contestant's advantage to switch to the other
        # door. This function simulates the game many times and shows the proportion of
        # games in which staying or switching would win the car. It also plots the
        # cumulative wins for each strategy.
        Monty_Hall(100)
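
    On correctness: the simulation never models the host opening a door, but it doesn't need to. Under the standard rules, switching wins exactly when the first pick misses the car, so counting "first pick wrong" as a switch win does replicate the puzzle. On speed, the expensive part is writing into the data frame one row per iteration; a vectorized sketch in base R that produces the same quantities without the loop (the plotting code can be pointed at the cumulative vectors it returns):

        Monty_Hall_fast = function(repetitions) {
          doors = c('A', 'B', 'C')
          winning_door       = sample(doors, repetitions, replace = TRUE)  # car location per game
          contestant_chooses = sample(doors, repetitions, replace = TRUE)  # first pick per game
          stay_win   = contestant_chooses == winning_door  # staying wins only if the first pick was right
          switch_win = !stay_win                           # otherwise switching wins
          list(proportion_stay_wins   = mean(stay_win),
               proportion_switch_wins = mean(switch_win),
               cum_sum_stay   = cumsum(stay_win),
               cum_sum_switch = cumsum(switch_win))
        }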

    Read the article

  • Mysql Performance Question - Essentially about normalizing efficiency

    - by freqmode
    Hi there. Just a quick question about database performance; I'll outline the site's purpose below as background. I'm creating a dictionary site that saves the words users define to a database. What I'm wondering is whether to create a separate words table for each user or to keep one massive words table. The site will be used by entire schools, so a single words table would be huge! The database structure is as follows.

    A user table with:

        User_ID (PRIMARY KEY), Username, First, Last, Password, Email, Country, Research,
        Standings, SendInfo, Donated, JoinedOn, LastLogin, Logins, Correct, Attempts, Admin, Active

    And one word table with:

        User_ID (PRIMARY KEY), Word, Vocab, Spell, Defined, DefinedAttempted, Spelled,
        SpelledAttempted, Sentenced, SentencedAttempted

    So what I'm asking is, performance-wise, should I create a new table for each user when they join the site? Each user could have hundreds or thousands of words over time. Or is it better to have one massive table with thousands and thousands of records and filter by User_ID? I don't think I'll perform many table joins. My gut feeling is to create a new table for each user, but I thought I'd ask for expert advice! Thanks in advance.
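
    For what it's worth, the usual answer at this scale is a single words table for everyone with an index on User_ID, rather than a table per user; MySQL handles millions of rows in one indexed table far more gracefully than thousands of small tables. A rough sketch of the shape, assuming the table is named Words; the surrogate key and column types are assumptions, since User_ID by itself cannot remain the primary key once a user has many words:

        CREATE TABLE Words (
            Word_ID INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            User_ID INT NOT NULL,
            Word    VARCHAR(64) NOT NULL,
            -- ... the remaining columns from the question ...
            INDEX idx_words_user (User_ID)
        );

        -- A per-user lookup then reads only that user's slice of the table:
        SELECT Word, Vocab, Spell FROM Words WHERE User_ID = 42;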

    Read the article

  • Where to draw the line between efficiency and practicality

    - by dclowd9901
    I understand very well the need for a website's front end to be coded and compressed as tightly as possible; however, I feel I have more lax standards than others when it comes to practical applications. For instance, while I understand why some would object, I don't see anything wrong with putting selectors on the <html> or <body> tags of a site with an expected small visitation rate. I would only do this for a cheap website for a small client, because I can't really justify the time cost otherwise. So, that said, do you think it's okay to draw a line? Where do you draw yours?

    Read the article

  • Improving HTML scrapper efficiency with pcntl_fork()

    - by Michael Pasqualone
    With the help from two previous questions, I now have a working HTML scraper that feeds product information into a database. What I am now trying to do is improve efficiency by wrapping my brain around getting my scraper working with pcntl_fork. If I split my php5-cli script into 10 separate chunks, I improve total runtime by a large factor, so I know I am not I/O or CPU bound but just limited by the linear nature of my scraping functions. Using code I've cobbled together from multiple sources, I have this working test:

        <?php
        libxml_use_internal_errors(true);
        ini_set('max_execution_time', 0);
        ini_set('max_input_time', 0);
        set_time_limit(0);

        $hrefArray = array("http://slashdot.org", "http://slashdot.org",
                           "http://slashdot.org", "http://slashdot.org");

        function doDomStuff($singleHref, $childPid) {
            $html = new DOMDocument();
            $html->loadHtmlFile($singleHref);
            $xPath = new DOMXPath($html);
            $domQuery = '//div[@id="slogan"]/h2';
            $domReturn = $xPath->query($domQuery);
            foreach ($domReturn as $return) {
                $slogan = $return->nodeValue;
                echo "Child PID #" . $childPid . " says: " . $slogan . "\n";
            }
        }

        $pids = array();
        foreach ($hrefArray as $singleHref) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                // We are the parent
                $pids[] = $pid;
            } else {
                // We are the child
                $childPid = posix_getpid();
                doDomStuff($singleHref, $childPid);
                exit(0);
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);
        }

        // Clear the libxml buffer so it doesn't fill up
        libxml_clear_errors();

    Which raises the following questions:

    1) My hrefArray contains 4 URLs, but if the array contained, say, 1,000 product URLs, this code would spawn 1,000 child processes, correct? If so, what is the best way to limit the number of processes to, say, 10, and with 1,000 URLs as an example, split the child workload into 100 products per child (10 x 100)?

    2) I've learned that pcntl_fork creates a copy of the process and all variables, classes, etc. What I would like to do is replace my hrefArray variable with a DOMDocument query that builds the list of products to scrape, and then feed them off to child processes to do the processing, spreading the load across 10 child workers. My brain is telling me I need to do something like the following (obviously this doesn't work, so don't run it):

        <?php
        libxml_use_internal_errors(true);
        ini_set('max_execution_time', 0);
        ini_set('max_input_time', 0);
        set_time_limit(0);

        $maxChildWorkers = 10;

        $html = new DOMDocument();
        $html->loadHtmlFile('http://xxxx');
        $xPath = new DOMXPath($html);
        $domQuery = '//div[@id=productDetail]/a';
        $domReturn = $xPath->query($domQuery);
        $hrefsArray[] = $domReturn->getAttribute('href');

        function doDomStuff($singleHref) {
            // Do stuff here with each product
        }

        // To figure out: split the href array into $maxChildWorkers chunks:
        // workArray1, workArray2 ... workArray10.
        $pids = array();
        foreach ($workArray(1,2,3 ... 10) as $singleHref) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                // We are the parent
                $pids[] = $pid;
            } else {
                // We are the child
                $childPid = posix_getpid();
                doDomStuff($singleHref);
                exit(0);
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);
        }

        // Clear the libxml buffer so it doesn't fill up
        libxml_clear_errors();

    But what I can't figure out is how to build my hrefsArray[] in the master/parent process only and feed it off to the child processes. Currently everything I've tried causes loops in the child processes; i.e. my hrefsArray gets built in the master and in each subsequent child process. I am sure I am going about this totally wrong, so I would greatly appreciate a general nudge in the right direction.
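
    A sketch of one way to answer both questions at once: build the URL list once in the parent, split it with array_chunk(), and fork one child per chunk so the worker count is capped at $maxChildWorkers. doDomStuff() and $domReturn are the ones from the listings above; the chunking is the only new piece:

        <?php
        $maxChildWorkers = 10;

        // Parent only: build the full list of product URLs before any fork happens.
        $hrefsArray = array();
        foreach ($domReturn as $node) {
            $hrefsArray[] = $node->getAttribute('href');
        }

        // Split into at most $maxChildWorkers roughly equal chunks.
        $chunks = array_chunk($hrefsArray, (int) ceil(count($hrefsArray) / $maxChildWorkers));

        $pids = array();
        foreach ($chunks as $chunk) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid == 0) {
                // Child: work through this chunk only, then exit so it never
                // falls through into the parent's foreach and forks again.
                foreach ($chunk as $singleHref) {
                    doDomStuff($singleHref, posix_getpid());
                }
                exit(0);
            } else {
                // Parent: remember the child and keep forking.
                $pids[] = $pid;
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);
        }

    Because each child exits at the end of its chunk, the href list is built exactly once (in the parent) and only copied into the children by the fork, which is what keeps the list-building loop from re-running there.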

    Read the article

  • How to improve efficiency in loops?

    - by Jacob Worldly
    I have the following code, which translates the input string into Morse code. My code runs through every letter in the string and then through every character in the alphabet. This is very inefficient; what if I were reading from a very large file instead of a small alphabet string? Is there any way I could improve my code, maybe using the re module to match my string against the Morse code characters?

        morse_alphabet = ".- -... -.-. -.. . ..-. --. .... .. .--- -.- .-.. -- -. --- .--. --.- .-. ... - ..- ...- .-- -..- -.-- --.."
        ALPHABET = "abcdefghijklmnopqrstuvwxyz"

        morse_letters = morse_alphabet.split(" ")
        result = []
        count_character = 0

        def t(code):
            global count_character  # without this, reading count_character below raises UnboundLocalError
            for character in code:
                count_letter = 0
                for letter in ALPHABET:
                    lower_character = code[count_character].lower()
                    lower_letter = letter.lower()
                    if lower_character == lower_letter:
                        result.append(morse_letters[count_letter])
                    count_letter += 1
                count_character += 1
            return result
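
    A dictionary lookup removes the inner loop entirely (re is not needed for this). A minimal sketch built from the same two strings in the question:

        MORSE = dict(zip(ALPHABET, morse_letters))  # 'a' -> '.-', 'b' -> '-...', ...

        def to_morse(code):
            # Direct lookups; characters outside a-z (spaces, digits, punctuation) are skipped.
            return [MORSE[ch] for ch in code.lower() if ch in MORSE]

    Each character then costs a single hash lookup instead of a scan over the whole alphabet, which is what matters once the input is a large file read line by line.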

    Read the article

  • PHP Array Efficiency and Memory Clarification

    - by CogitoErgoSum
    When declaring an array in PHP, the indexes may be created out of order, i.e.

        Array[1]  = 1
        Array[19] = 2
        Array[4]  = 3

    My question: in creating an array like this, is the length 19, with nulls in between? If I attempted to get Array[3], would it come back as undefined or throw an error? Also, how does this affect memory: would memory be taken up for 3 indexes or for 19?

    Also, a developer currently wrote a script with three arrays:

        FailedUpdates[]
        FailedDeletes[]
        FailedInserts[]

    Is it more efficient to do it this way, or to use an associative array controlling several sub-arrays?

        "Failures" => array(
            "Updates" => array( 0 => 12, 1 => 41 ),
            "Deletes" => array( 0 => 122, 1 => 414, 2 => 43 ),
            "Inserts" => array( 0 => 12 )
        )
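
    For reference, PHP arrays are ordered hash maps rather than C-style arrays, so only the keys you actually set exist; a quick sketch that can be run to confirm:

        <?php
        $a = array();
        $a[1]  = 1;
        $a[19] = 2;
        $a[4]  = 3;

        var_dump(count($a));               // int(3): three elements, not 19
        var_dump(array_key_exists(3, $a)); // bool(false): key 3 was never created
        var_dump(isset($a[3]));            // bool(false): no error is thrown
        echo $a[3];                        // NULL plus an "undefined offset" notice, not a fatal error

    Memory is therefore consumed per element actually set (plus hash-table overhead), and the grouped "Failures" layout versus three separate arrays is mostly a readability choice: both store the same elements, the nested version just adds one small outer array.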

    Read the article

  • Efficiency of the .NET garbage collector

    - by Jonas B
    OK, here's the deal. Some people put their lives in the hands of .NET's garbage collector and some simply won't trust it. I am one of those who partially trusts it, as long as the code is not extremely performance critical (I know, I know: performance critical plus .NET is not the favored combination), in which case I prefer to manually dispose of my objects and resources. What I am asking is whether there are any facts about how efficient or inefficient, performance-wise, the garbage collector really is. Please don't share personal opinions or likely assumptions based on experience; I want unbiased facts. I also don't want any pro/con discussion, because it won't answer the question. Thanks
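
    One way to get facts for a specific workload is to let the runtime report them itself: per-generation collection counts plus wall-clock time around the code in question. A minimal sketch (the allocation loop is a placeholder for the real workload):

        using System;
        using System.Diagnostics;

        class GcProbe
        {
            static void Main()
            {
                int gen0 = GC.CollectionCount(0);
                int gen1 = GC.CollectionCount(1);
                int gen2 = GC.CollectionCount(2);
                Stopwatch sw = Stopwatch.StartNew();

                for (int i = 0; i < 1000000; i++)
                {
                    byte[] tmp = new byte[128];   // placeholder: lots of short-lived allocations
                }

                sw.Stop();
                Console.WriteLine("Elapsed: {0} ms", sw.ElapsedMilliseconds);
                Console.WriteLine("Gen0: {0}  Gen1: {1}  Gen2: {2}",
                    GC.CollectionCount(0) - gen0,
                    GC.CollectionCount(1) - gen1,
                    GC.CollectionCount(2) - gen2);
            }
        }

    Gen0 collections are cheap and frequent; it is the Gen2 count (and, for services, the "% Time in GC" performance counter) that usually explains a measurable slowdown.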

    Read the article

  • Improve Efficiency in Array comparison in Ruby

    - by user2985025
    Hi, I am working with Ruby/Cucumber and have a requirement to develop a comparison module to compare two files. The background and requirements:

    This is a migration project: data from one application is being moved to another, and I need to compare the data from the existing application against the new one.

    Solution so far: I have developed a comparison engine in Ruby that does the following.
    a) Gets the data, de-duplicated and sorted, from both databases.
    b) Puts the data in text files with "||" as the delimiter.
    c) Uses the key columns (numbers) that identify a unique record in the database to compare the two files.

    For example, file1 has 1,2,3,4,5,6 and file2 has 1,2,3,4,5,7, and columns 1 through 5 are key columns. I use those key columns to line the records up and then compare 6 against 7, which results in a fail.

    Issue: if the mismatches exceed roughly 70% for 100,000 records or more, the comparison time is very large; if the mismatches are under 40%, the comparison time is acceptable. Diff and diff-lcs will not work in this case because we need the key columns to get an accurate comparison between the two applications. Is there any other method to reduce the time when more than 70% of 100,000+ records mismatch? Thanks
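
    One approach whose cost does not grow with the mismatch rate is to index one file by its key columns in a hash and stream the other file against it, so the work is a single pass over each file. A rough Ruby sketch; the "||" delimiter and the five-key layout come from the example, everything else is an assumption:

        def compare(file_a, file_b, key_cols = 5)
          index = {}
          File.foreach(file_a) do |line|
            fields = line.chomp.split("||")
            index[fields.first(key_cols)] = fields.drop(key_cols)  # key columns => remaining columns
          end

          mismatches = []
          File.foreach(file_b) do |line|
            fields = line.chomp.split("||")
            key    = fields.first(key_cols)
            mismatches << key unless index[key] == fields.drop(key_cols)
          end
          mismatches
        end

    Because every record is looked up by key in roughly constant time, 70% mismatches cost about the same as 7%: the expensive part becomes reading the files, not comparing them.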

    Read the article

  • Efficiency: Creating an array of doubles incrementally?

    - by Alan
    Consider the following code:

        List<double> l = new List<double>();

        // Add an unknown number of values to the list
        // (assume we don't have these values ahead of time).
        l.Add(0.1);
        l.Add(0.11);
        l.Add(0.1);

        l.ToArray();  // ultimately we want an array of doubles

    Is there anything wrong with this approach? Is there a more appropriate way to build an array without knowing the size or the elements ahead of time?

    Read the article

  • Quick question about jQuery selector content efficiency

    - by serg
    I am loading an HTML page through ajax and then doing a bunch of searches on it using selectors:

        $.ajax({
            // ...
            dataType: "html",
            success: function (html) {
                $("#id1", html);
                $(".class", html);
                // ...
            }
        });

    Should I extract $(html) into a variable and use that as the context, or does it not matter from a performance point of view?

        success: function (html) {
            var $html = $(html);
            $("#id1", $html);
            $(".class", $html);
            // ...
        }

    Read the article

  • apache commons http client efficiency

    - by wo_shi_ni_ba_ba
    I use Apache Commons HttpClient to send data via POST every second. Is there a way to make the following code more efficient? I know HTTP is stateless, but is there anything I can do to improve this, given that the base URL is always the same and only the parameter value changes?

        private void sendData(String s) {
            try {
                HttpClient client = getHttpClient();
                HttpPost method = new HttpPost("http://192.168.1.100:8080/myapp");
                System.err.println("send to server " + s);

                List<NameValuePair> formparams = new ArrayList<NameValuePair>();
                formparams.add(new BasicNameValuePair("packet", s));
                UrlEncodedFormEntity entity = new UrlEncodedFormEntity(formparams, "UTF-8");
                method.setEntity(entity);

                HttpResponse resp = client.execute(method);
                String res = EntityUtils.toString(resp.getEntity());
                System.out.println(res);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        private HttpClient getHttpClient() {
            if (httpClient == null) {
                httpClient = new DefaultHttpClient();
            }
            return httpClient;
        }
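
    Since only the parameter value changes, one option is to create the client and the HttpPost once and swap just the entity on each call; keeping a single client alive lets HttpClient reuse the underlying TCP connection via keep-alive instead of opening a new one every second. A sketch against the HttpClient 4.x API the question already uses (the field layout is an assumption):

        private final HttpClient httpClient = new DefaultHttpClient();
        private final HttpPost post = new HttpPost("http://192.168.1.100:8080/myapp");

        private void sendData(String s) {
            try {
                List<NameValuePair> formparams = new ArrayList<NameValuePair>();
                formparams.add(new BasicNameValuePair("packet", s));
                post.setEntity(new UrlEncodedFormEntity(formparams, "UTF-8"));

                HttpResponse resp = httpClient.execute(post);
                String res = EntityUtils.toString(resp.getEntity());  // reading the whole entity releases the connection for reuse
                System.out.println(res);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

    The key detail is fully consuming the response entity every time; otherwise the connection cannot be reused and each request pays for a fresh TCP handshake.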

    Read the article

  • Database design efficiency with 1 to many relationships limited 1 to 3

    - by Joe
    This is in MySQL, but it's a database design issue. If you have a one-to-many relationship, like a bank customer to bank accounts, you would typically have the table that records the account information carry a foreign key tracking the relationship between account and customer. This follows third normal form and is a widely accepted way of doing it. Now let's say you are going to limit a user to having only 3 accounts. The current design supports this and nothing would need to change. But another way to do it would be to have 3 columns on the customer's row that hold the ids of the 3 respective accounts (which, by the way, violates first normal form). The question is: what would be the advantages and disadvantages of recording the customer-account relationship this way compared with the traditional approach?
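
    For reference, a sketch of the conventional (3NF) shape the question describes; the table and column names are assumptions, and the 3-account cap would be enforced in application code or a trigger, since MySQL of this era parses but does not enforce CHECK constraints:

        CREATE TABLE customer (
            customer_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name        VARCHAR(100) NOT NULL
        );

        CREATE TABLE account (
            account_id  INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            customer_id INT NOT NULL,
            balance     DECIMAL(12,2) NOT NULL DEFAULT 0,
            FOREIGN KEY (customer_id) REFERENCES customer (customer_id),
            INDEX idx_account_customer (customer_id)
        );

    The trade-off with three account-id columns on the customer row is that "all accounts for a customer" becomes a single-row read, but queries in the other direction (which customer owns this account?) need an OR across three columns, and raising the limit later means changing the schema rather than just adding rows.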

    Read the article

  • PHP ORM's, multiple tables and efficiency

    - by sunwukung
    Let's say I have a data mapper that aggregates multiple tables and generates an object instance from that data. The mapper has a typical save() method which delegates to update/insert. When the mapper executes save(), ideally it should isolate the object fields that have actually been modified, preventing the code from blanket-bombing the database with full-row updates. How would you go about this?
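
    The usual name for this is dirty tracking: the entity (or the mapper, via a snapshot of the loaded row) records which fields changed, and save() builds the UPDATE from that set only. A bare-bones sketch, not tied to any particular ORM:

        <?php
        class TrackedEntity
        {
            private $data  = array();
            private $dirty = array();

            public function set($field, $value)
            {
                if (!array_key_exists($field, $this->data) || $this->data[$field] !== $value) {
                    $this->dirty[$field] = $value;   // remember only what actually changed
                }
                $this->data[$field] = $value;
            }

            public function dirtyFields()
            {
                return $this->dirty;                 // the mapper builds "UPDATE ... SET" from this
            }

            public function markClean()
            {
                $this->dirty = array();              // called by the mapper after a successful save()
            }
        }

    The alternative is for the mapper to keep the original row as loaded from the database and diff it against the object at save() time, which keeps the entity free of tracking code at the cost of holding two copies of the data.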

    Read the article

  • Javascript for loop efficiency

    - by Misha Moroshko
    Is this:

        for (var i = 0, cols = columns.length; i < cols; i++) { ... }

    more efficient than this?

        for (var i = 0; i < columns.length; i++) { ... }

    In the second variant, is columns.length calculated each time the condition i < columns.length is checked?

    Read the article

  • C# GroupJoin efficiency

    - by bsnote
    Without using GroupJoin:

        var playersDictionary = players.ToDictionary(
            player => player.Id,
            element => new PlayerDto { Rounds = new List<RoundDto>() });

        foreach (var round in rounds)
        {
            PlayerDto playerDto;
            playersDictionary.TryGetValue(round.PlayerId, out playerDto);
            if (playerDto != null)
            {
                playerDto.Rounds.Add(new RoundDto { });
            }
        }

        var playerDtoItems = playersDictionary.Values;

    Using GroupJoin:

        var playerDtoItems = from player in players
                             join round in rounds on player.Id equals round.PlayerId into playerRounds
                             select new PlayerDto
                             {
                                 Rounds = playerRounds.Select(playerRound => new RoundDto { })
                             };

    Which of these two pieces is more efficient?

    Read the article

  • MySQL/PHP Search Efficiency

    - by iMaster
    Hi! I'm trying to create a small search for my site. I tried using a full-text index search, but I could never get it to work. Here is what I've come up with:

        if (isset($_GET['search'])) {
            $search = str_replace('-', ' ', $_GET['search']);
            $result = array();

            $titles = mysql_query("SELECT title FROM Entries WHERE title LIKE '%$search%'");
            while ($row = mysql_fetch_assoc($titles)) {
                $result[] = $row['title'];
            }

            $tags = mysql_query("SELECT title FROM Entries WHERE tags LIKE '%$search%'");
            while ($row = mysql_fetch_assoc($tags)) {
                $result[] = $row['title'];
            }

            $text = mysql_query("SELECT title FROM Entries WHERE entry LIKE '%$search%'");
            while ($row = mysql_fetch_assoc($text)) {
                $result[] = $row['title'];
            }

            $result = array_unique($result);
        }

    So basically, it searches the titles, tags, and body text of all the entries in the DB. This works decently well, but I'm wondering how efficient it is. It would only be for a small blog; either way, I'm just wondering whether it could be made any more efficient.
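
    Two easy wins while staying with the same mysql_* API the question uses: collapse the three queries into one with OR (DISTINCT replaces the array_unique() pass), and escape the search term so a quote in the query string cannot break or inject into the SQL. A sketch:

        if (isset($_GET['search'])) {
            $search = mysql_real_escape_string(str_replace('-', ' ', $_GET['search']));

            $sql = "SELECT DISTINCT title FROM Entries
                    WHERE title LIKE '%$search%'
                       OR tags  LIKE '%$search%'
                       OR entry LIKE '%$search%'";

            $result = array();
            $res = mysql_query($sql);
            while ($row = mysql_fetch_assoc($res)) {
                $result[] = $row['title'];
            }
        }

    A leading-wildcard LIKE still cannot use an index, so for anything bigger than a small blog the longer-term answer is getting MySQL's FULLTEXT index (MATCH ... AGAINST) working, which at this MySQL vintage means a MyISAM table, or moving to an external indexer.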

    Read the article

  • Image resizing efficiency in C# and .NET 3.5

    - by Matthew Nichols
    I have written a web service to resize user-uploaded images, and it works correctly from a functional point of view, but it causes CPU usage to spike every time it is used. It is running on Windows Server 2008 64-bit. I have tried compiling to 32 and 64 bit and get about the same results. The heart of the service is this function:

        private Image CreateReducedImage(Image imgOrig, Size NewSize)
        {
            var newBM = new Bitmap(NewSize.Width, NewSize.Height);
            using (var newGrapics = Graphics.FromImage(newBM))
            {
                newGrapics.CompositingQuality = CompositingQuality.HighSpeed;
                newGrapics.SmoothingMode = SmoothingMode.HighSpeed;
                newGrapics.InterpolationMode = InterpolationMode.HighQualityBicubic;
                newGrapics.DrawImage(imgOrig, new Rectangle(0, 0, NewSize.Width, NewSize.Height));
            }
            return newBM;
        }

    I put a profiler on the service, and it seemed to indicate that the vast majority of the time is spent in the GDI+ library itself and that there is not much to be gained in my own code. Questions:

    1) Am I doing something glaringly inefficient in my code here? It seems to conform to the examples I have seen.
    2) Are there gains to be had from using libraries other than GDI+? The benchmarks I have seen suggest GDI+ compares well with other libraries, but I didn't find enough of them to be confident.
    3) Are there gains to be had by using "unsafe code" blocks?

    Please let me know if I have not included enough of the code; I am happy to post as much as requested, but I don't want to be obnoxious in the post.

    Read the article

  • Improving the efficiency of multiple concurrent Core Animation animations

    - by Alex
    I have a view in my app that is very similar to the month view in the built-in Calendar app. There's a subview that holds the individual cells (a custom UIView subclass that draws text into its layer), and when the user navigates to the next "month", I create the new cells and slide the view to show them. When the animation stops, I remove the old, hidden cells and set things up so it's ready for the next animation. This all works nicely. However, I'd like to animate the cells' text color, as in the Calendar app, so that the outgoing cells transition to a lighter color and the incoming ones transition to a darker color. The problem is that I can have as many as 70 cells, so running individual animations is very slow: between 5 and 10 fps on my iPhone 3GS. I'm trying to find a less computationally intensive way of doing this. My reading of the Shark results is that the majority of the time is spent redrawing the text on every frame, which makes sense, since text rendering is hardly the cheapest operation. I've considered creating a second set of cells, with one view holding the "outgoing" state and one holding the "incoming" state, and using a single opacity animation to gradually reveal the updated cells while both are sliding. My concern is that instead of 70 cells I'll have 140, which seems like a lot of views. So, is that too many views, or is there a better way of doing this?
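
    For what it's worth, the two-layer approach usually wins here because it collapses roughly 70 text redraws per frame into one composited cross-fade that the GPU handles. A sketch of the idea, assuming two container views stacked in the same frame (the view names are placeholders):

        // outgoingGrid holds the cells rendered with the old text color,
        // incomingGrid the same cells rendered with the new color.
        incomingGrid.alpha = 0.0;
        [UIView animateWithDuration:0.3 animations:^{
            incomingGrid.alpha = 1.0;   // one opacity animation instead of ~70 per-cell color animations
            outgoingGrid.alpha = 0.0;
        } completion:^(BOOL finished) {
            [outgoingGrid removeFromSuperview];
        }];

    The 140 views only exist for the duration of the slide, and since neither set redraws its text during the animation, the per-frame cost drops to compositing two already-rendered layers.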

    Read the article

  • increasing speed and efficiency of ajax call

    - by user1048824
    I'm not the most disciplined dev, don't know all the standards, and am self-taught, so bear with me. I build things quickly and logically, but not always to 'programming standards'. I have a mobile app using the geolocation API. It gets thousands of places from my DB and makes Google Maps v3 markers for the ones around the user's current location. There is an ajax call from my JS to an aspx page that queries the database, builds a JSON string, and sends it back to the JavaScript, which then creates the map markers. Would I save time if the JSON string were in a flat file? I'm not sure whether, generally speaking, accessing a SQL DB from an aspx page is faster than C# file I/O on a flat file with pre-rendered JSON. (Of course, using the flat file, it would need to be updated every time the DB is updated.)
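
    A flat file is usually cheaper to serve than re-querying and re-serializing on every request, and the two can be combined: regenerate the file only when the data changes (or goes stale) and stream it straight to the response. A rough ASP.NET sketch; the path, the one-hour staleness test, and BuildJsonFromDatabase() are all placeholders for whatever the page already does:

        string cachePath = Server.MapPath("~/App_Data/places.json");

        // Rebuild the cached JSON only when it is missing or older than, say, an hour.
        if (!File.Exists(cachePath) ||
            File.GetLastWriteTimeUtc(cachePath) < DateTime.UtcNow.AddHours(-1))
        {
            string json = BuildJsonFromDatabase();   // the existing DB-to-JSON code
            File.WriteAllText(cachePath, json);
        }

        Response.ContentType = "application/json";
        Response.WriteFile(cachePath);               // stream the pre-rendered JSON as-is

    If the data changes rarely, regenerating the file from the code that updates the database (rather than on a timer) keeps it exactly in sync.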

    Read the article

  • key value coding-compliant for NSObject class?

    - by 4thSpace
    I've created a singleton class that loads a plist. I keep getting this error when I try to set a value:

        [ setValue:forUndefinedKey:]: this class is not key value coding-compliant for the key test.

    I have one key in the plist file; the key is named "test" and has no value associated with it. I set the value like this:

        [[PlistManager sharedManager].plist setValue:@"the title value" forKey:@"test"];

    Looking at the loaded plist dictionary from within PlistManager in the debugger, I see:

        po self.plistDictionary
        {
            test = "";
        }

    I get the error just as I'm leaving PlistManager in the debugger. PlistManager is an NSObject subclass, so there are no xibs involved. Any ideas on what I need to do?

    Read the article
