Search Results

Search found 7107 results on 285 pages for 'processing efficiency'.

  • Node.js vs PHP processing speed

    - by Cody Craven
    I've been looking into Node.js recently and wanted to see a true comparison of processing speed for PHP vs Node.js. In most of the comparisons I had seen, Node trounced Apache/PHP setups handily. However, all of the tests were small 'hello worlds' that would not accurately reflect any real webpage's markup. So I decided to create a basic HTML page with 10,000 "hello world" paragraph elements. In these tests, Node with Cluster was beaten to a pulp by PHP on Nginx using PHP-FPM. So I'm curious whether I am misusing Node somehow or whether Node is really just this bad at processing. Note that my results were equivalent when outputting "Hello world\n" as text/plain instead of the HTML, but I only included the HTML version as it's closer to the use case I was investigating.

    My testing box: Core i7-2600 Intel CPU (4 cores, 8 threads), 8GB DDR3 RAM, Fedora 16 64-bit, Node.js v0.6.13, Nginx v1.0.13, PHP v5.3.10 (with PHP-FPM).

    My test scripts:

    Node.js script (adapted from the Node.js documentation at http://nodejs.org/docs/latest/api/cluster.html):

      var cluster = require('cluster');
      var http = require('http');
      var numCPUs = require('os').cpus().length;

      if (cluster.isMaster) {
        // Fork workers.
        for (var i = 0; i < numCPUs; i++) {
          cluster.fork();
        }
        cluster.on('death', function (worker) {
          console.log('worker ' + worker.pid + ' died');
        });
      } else {
        // Worker processes have an HTTP server.
        http.Server(function (req, res) {
          res.writeHead(200, {'Content-Type': 'text/html'});
          res.write('<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n');
          for (var i = 0; i < 10000; i++) {
            res.write('<p>Hello world</p>\n');
          }
          res.end('</body>\n</html>');
        }).listen(80);
      }

    PHP script:

      <?php
      echo "<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n";
      for ($i = 0; $i < 10000; $i++) {
        echo "<p>Hello world</p>\n";
      }
      echo "</body>\n</html>";

    My results

    Node.js ($ ab -n 500 -c 20 http://speedtest.dev/):

      Server Hostname: speedtest.dev    Server Port: 80
      Document Path: /                  Document Length: 190070 bytes
      Concurrency Level: 20             Time taken for tests: 14.603 seconds
      Complete requests: 500            Failed requests: 0    Write errors: 0
      Total transferred: 95066500 bytes    HTML transferred: 95035000 bytes
      Requests per second: 34.24 [#/sec] (mean)
      Time per request: 584.123 [ms] (mean); 29.206 [ms] (mean, across all concurrent requests)
      Transfer rate: 6357.45 [Kbytes/sec] received
      Connection Times (ms)
                    min   mean [+/-sd]  median    max
      Connect:        0      0    0.2        0      2
      Processing:    94    547  405.4      424   2516
      Waiting:        0    331  399.3      216   2284
      Total:         95    547  405.4      424   2516
      Percentage of the requests served within a certain time (ms)
        50%  424   66%  607   75%  733   80%  813   90% 1084
        95% 1325   98% 1843   99% 2062  100% 2516 (longest request)

    PHP/Nginx ($ ab -n 500 -c 20 http://speedtest.dev/test.php):

      Server Software: nginx/1.0.13
      Server Hostname: speedtest.dev    Server Port: 80
      Document Path: /test.php          Document Length: 190070 bytes
      Concurrency Level: 20             Time taken for tests: 0.130 seconds
      Complete requests: 500            Failed requests: 0    Write errors: 0
      Total transferred: 95109000 bytes    HTML transferred: 95035000 bytes
      Requests per second: 3849.11 [#/sec] (mean)
      Time per request: 5.196 [ms] (mean); 0.260 [ms] (mean, across all concurrent requests)
      Transfer rate: 715010.65 [Kbytes/sec] received
      Connection Times (ms)
                    min   mean [+/-sd]  median    max
      Connect:        0      0    0.2        0      1
      Processing:     3      5    0.7        5      7
      Waiting:        1      4    0.7        4      7
      Total:          3      5    0.7        5      7
      Percentage of the requests served within a certain time (ms)
        50% 5   66% 5   75% 5   80% 6   90% 6   95% 6   98% 6   99% 6   100% 7 (longest request)

    Additional details: again, what I'm looking for is to find out whether I'm doing something wrong with Node.js or whether it is really just that slow compared to PHP on Nginx with FPM. I certainly think Node has a real niche it could fit well; however, these test results (which I really hope I made a mistake with, as I like the idea of Node) lead me to believe that it is a horrible choice for even a modest processing load when compared to PHP (let alone the JVM or various other fast solutions). As a final note, I also tried running an Apache Bench test against Node with $ ab -n 20 -c 20 http://speedtest.dev/ and consistently received a total test time of greater than 0.900 seconds.
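
    One likely contributor (an assumption about the cause, not something verified on the asker's setup) is that the Node worker issues 10,000 separate res.write() calls per request, each producing a small chunked write, while PHP's output is buffered and sent by Nginx in large pieces. A minimal sketch of the same worker body with the markup built into one string and sent with a single write:

      // Sketch: same per-request work, but one write instead of 10,000 small chunked writes.
      http.Server(function (req, res) {
        var body = '<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n';
        for (var i = 0; i < 10000; i++) {
          body += '<p>Hello world</p>\n';
        }
        body += '</body>\n</html>';
        res.writeHead(200, {'Content-Type': 'text/html'});
        res.end(body);
      }).listen(80);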

    Read the article

  • Reminder: For a Complete View Of Your Concurrent Processing Take A Look At The CP Analyzer!

    - by LuciaC
    For a complete view of your Concurrent Processing, take a look at the CP Analyzer! Doc ID 1411723.1 has the script to download and a 9-minute video. The Concurrent Processing Analyzer is a self-service health-check script which reviews the overall Concurrent Processing footprint, analyzes the current configurations and settings for the environment, and provides feedback and recommendations on best practices. This is a non-invasive script which provides recommended actions to be performed on the instance it was run on. For production instances, always apply any changes to a recent clone first to ensure an expected outcome.

    E-Business Applications Concurrent Processing Analyzer Overview
    E-Business Applications Concurrent Request Analysis
    E-Business Applications Concurrent Manager Analysis
    Identifies Concurrent System setup and configurations
    Identifies and recommends Concurrent best practices
    Easy-to-add tool for regular Concurrent maintenance
    Execute analysis anytime to compare trending from past outputs

    Feedback welcome!

    Read the article

  • DVD/CD burning .zip: is it more reliable, faster, longer lasting to burn a zip of files rather than the files as a folder?

    - by Rob
    Is it more reliable, faster, and longer lasting to burn a zip (or a few large zips) of files to CD/DVD rather than the files as a folder? I'm wondering whether thousands of small files would be recorded less efficiently than one or a few large zips. Also, even after the burning program verifies the disc, I use Beyond Compare to compare the files with those on the disc. The binary compare always reports them as identical, but I hear the drive stuttering, presumably as the head is shifted only slightly each time to seek the next file, which leads me to think it's best to make one or more zips and copy those locally to compare. Or is it that burning individual files to the disc produces something less readable, which causes the head to stutter? There aren't any problems - my disc burns are reliable - I'm just thinking about efficiency and longevity; the discs burn and verify fast enough on my 18x DVD burner. I'm using ImgBurn mostly, and used Nero in the past. I burn whole discs closed (finalised). I'm not sure which write mode, but I would think Disc At Once from a temporary cached image made by the burning program would be the most reliable.

    Read the article

  • Asynchronous daemon processing / ORM interaction with Django

    - by perrierism
    I'm looking for a way to do asynchronous data processing with a daemon that uses the Django ORM. However, the ORM isn't thread-safe: it's not safe to retrieve or modify Django objects from within threads. So I'm wondering what the correct way to achieve asynchrony is. Basically, what I need to accomplish is to take a list of users from the db, query a third-party API, and then make updates to the user-profile rows for those users, as a daemon or background process. Doing this in series, one user at a time, is easy, but it takes too long to be at all scalable. If the daemon is retrieving and updating the users through the ORM, how do I achieve processing 10-20 users at a time? I would use a standard threading / queue system for this, but you can't thread interactions like models.User.objects.get(id=foo)... Django itself serves many requests at once and makes ORM calls for each of them(?), so there should be a way to do it? I haven't found anything in the documentation so far. Cheers
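
    One pattern that sidesteps the ORM-in-threads concern is to keep all ORM reads and writes in the main thread and only fan the slow third-party API calls out to workers. A sketch (Python 3 syntax; fetch_profile() and profile_field are hypothetical stand-ins for the real API call and the real column):

      # Sketch: ORM access stays in the main thread; workers only call the external API.
      import queue
      import threading
      from django.contrib.auth.models import User   # assumes the stock User model

      NUM_WORKERS = 10
      jobs, results = queue.Queue(), queue.Queue()

      def worker():
          while True:
              user_id, username = jobs.get()
              try:
                  results.put((user_id, fetch_profile(username)))  # hypothetical API call
              finally:
                  jobs.task_done()

      for _ in range(NUM_WORKERS):
          threading.Thread(target=worker, daemon=True).start()

      for row in User.objects.values_list('id', 'username'):      # ORM read, main thread
          jobs.put(row)
      jobs.join()

      while not results.empty():                                   # ORM writes, main thread
          user_id, data = results.get()
          User.objects.filter(id=user_id).update(profile_field=data)  # hypothetical field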

    Read the article

  • C# Process Binary File, Multi-Thread Processing

    - by washtik
    I have the following code that processes a binary file. I want to split the processing workload by using threads and assigning each line of the binary file to threads in the ThreadPool. Processing time for each line is only small, but when dealing with files that might contain hundreds of lines, it makes sense to split the workload. My question is regarding the BinaryReader and thread safety. First of all, is what I am doing below acceptable? I have a feeling it would be better to pass only the binary for each line to the PROCESS_Binary_Return_lineData method. Please note the code below is conceptual. I'm looking for a bit of guidance on this, as my knowledge of multi-threading is in its infancy. Perhaps there is a better way to achieve the same result, i.e. split processing of each binary line.

      var dic = new Dictionary<DateTime, Data>();
      var resetEvent = new ManualResetEvent(false);
      using (var b = new BinaryReader(File.Open(Constants.dataFile, FileMode.Open,
                                                FileAccess.Read, FileShare.Read)))
      {
          var lByte = b.BaseStream.Length;
          var toProcess = 0;
          while (lByte >= DATALENGTH)
          {
              b.BaseStream.Position = lByte;
              lByte = lByte - AB_DATALENGTH;
              ThreadPool.QueueUserWorkItem(delegate
              {
                  Interlocked.Increment(ref toProcess);
                  var lineData = PROCESS_Binary_Return_lineData(b);
                  lock (dic)
                  {
                      if (!dic.ContainsKey(lineData.DateTime))
                      {
                          dic.Add(lineData.DateTime, lineData);
                      }
                  }
                  if (Interlocked.Decrement(ref toProcess) == 0)
                      resetEvent.Set();
              }, null);
          }
      }
      resetEvent.WaitOne();
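
    One way to avoid sharing the BinaryReader across threads is the approach the question already hints at: read each record's bytes sequentially on the main thread and hand only the byte array to the pool. A sketch (assuming fixed-length records of DATALENGTH bytes read front to back, and a hypothetical ParseLine(byte[]) standing in for PROCESS_Binary_Return_lineData):

      // Sketch: sequential reads, parallel parsing; the reader itself is never shared.
      var dic = new ConcurrentDictionary<DateTime, Data>();
      using (var countdown = new CountdownEvent(1))
      using (var b = new BinaryReader(File.Open(Constants.dataFile, FileMode.Open,
                                                FileAccess.Read, FileShare.Read)))
      {
          while (b.BaseStream.Position + DATALENGTH <= b.BaseStream.Length)
          {
              byte[] recordBytes = b.ReadBytes(DATALENGTH);   // read on the main thread
              countdown.AddCount();
              ThreadPool.QueueUserWorkItem(_ =>
              {
                  var lineData = ParseLine(recordBytes);      // hypothetical pure parse method
                  dic.TryAdd(lineData.DateTime, lineData);
                  countdown.Signal();
              });
          }
          countdown.Signal();    // release the initial count
          countdown.Wait();
      }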

    Read the article

  • Improve XPath efficiency for repeated, parameterized queries

    - by Chris Allan
    Hi, I am repeatedly performing the following XPath query (parameterized by 'keywordText') around 40,000 times:

      String query = SystemGlobal.YAHOO_KEYWORDSSUBNODE + "/" + SystemGlobal.YAHOO_KEYWORDNODE
          + "[" + SystemGlobal.YAHOO_ATTRKEYPHRASE + "='" + keywordText + "']";
      CachedXPathAPI cachedXPathAPI = new CachedXPathAPI();
      NodeIterator nl = cachedXPathAPI.selectNodeIterator(
          doc.getElementsByTagName(SystemGlobal.YAHOO_KEYWORDSROOT).item(0), query);
      Node n;
      if ((n = nl.nextNode()) != null) {
          keyword.setKeywordId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRKEYID).getTextContent()));
          keyword.setKeyPhrase(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRKEYPHRASE).getTextContent());
          keyword.setStatus(mapStatus(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRSTATUS).getTextContent()));
          keyword.setCampaignId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, "../../" + SystemGlobal.YAHOO_ATTRCAMPAIGNID).getTextContent()));
          keyword.setAdGroupId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, "../" + SystemGlobal.YAHOO_ATTRADGROUPID).getTextContent()));
      }

    On the first run of the script, all 40,000 runs of this piece of code have nl.nextNode() == null, and everything runs quite quickly. However, on the following runs, when nl.nextNode() != null, things slow down a lot - it takes around an additional 40 min to run (whereas the first run takes maybe 1 minute). Oh, and the doc is constructed like so:

      InputSource in = new InputSource(new FileInputStream(filename));
      DocumentBuilderFactory dfactory = DocumentBuilderFactory.newInstance();
      dfactory.setNamespaceAware(true);
      doc = dfactory.newDocumentBuilder().parse(in);

    I tried including the following lines

      reportEvaluator = new XPathEvaluatorImpl(reportDoc);
      reportResolver = reportEvaluator.createNSResolver(reportDoc);

    and, rather than creating a NodeIterator, creating an XPathResult instead:

      XPathResult result = (XPathResult) reportEvaluator.evaluate(query,
          doc.getElementsByTagName(SystemGlobal.YAHOO_KEYWORDSROOT).item(0),
          reportResolver, XPathResult.UNORDERED_NODE_ITERATOR_TYPE, null);

    however this ran even slower. Is there a way in which I can speed up the running of this script? I have seen references to precompiled queries, though I haven't seen many actual details. Also, as seen in the code, I am using CachedXPathAPI, though the benefit for this case is not so great. Any help is much appreciated! Chris Allan
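
    Since the same document is queried ~40,000 times with only the keyphrase changing, one option is to walk the keyword nodes once, build a HashMap index, and replace the repeated XPath lookups with map lookups. A sketch (assuming, as the selectSingleNode calls suggest, that the keyphrase is a child element of each keyword node):

      // Sketch: one pass over the DOM, then O(1) lookups per keywordText.
      Map<String, Element> keywordIndex = new HashMap<String, Element>();
      NodeList keywordNodes = doc.getElementsByTagName(SystemGlobal.YAHOO_KEYWORDNODE);
      for (int i = 0; i < keywordNodes.getLength(); i++) {
          Element kw = (Element) keywordNodes.item(i);
          NodeList phrase = kw.getElementsByTagName(SystemGlobal.YAHOO_ATTRKEYPHRASE);
          if (phrase.getLength() > 0) {
              keywordIndex.put(phrase.item(0).getTextContent(), kw);
          }
      }

      // Later, for each keywordText:
      Element n = keywordIndex.get(keywordText);
      if (n != null) {
          keyword.setKeyPhrase(keywordText);
          // read the remaining child/parent values from n as before
      }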

    Read the article

  • T/SQL Efficiency and Order of Execution

    - by Kyle Rozendo
    Hi All, In regards to the order of execution of statements in SQL, is there any difference between the following, performance-wise?

      SELECT * FROM Persons
      WHERE UserType = 'Manager'
        AND LastName IN ('Hansen','Pettersen')

    And:

      SELECT * FROM Persons
      WHERE LastName IN ('Hansen','Pettersen')
        AND UserType = 'Manager'

    If there is any difference, is there perhaps a link etc. that you may have where one can learn more about this? Thanks a ton, Kyle
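
    A quick way to check this on a given SQL Server instance (a sketch; it runs against whatever Persons table the asker already has) is to compare the plans the optimizer actually produces for the two orderings:

      -- Sketch: if the optimizer treats the two WHERE orderings the same,
      -- the showplan output for both statements will be identical.
      SET SHOWPLAN_TEXT ON;
      GO
      SELECT * FROM Persons WHERE UserType = 'Manager' AND LastName IN ('Hansen','Pettersen');
      GO
      SELECT * FROM Persons WHERE LastName IN ('Hansen','Pettersen') AND UserType = 'Manager';
      GO
      SET SHOWPLAN_TEXT OFF;
      GO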

    Read the article

  • F# currying efficiency?

    - by Eamon Nerbonne
    I have a function that looks as follows:

      let isInSet setElems normalize p =
          normalize p |> (Set.ofList setElems).Contains

    This function can be used to quickly check whether an element is semantically part of some set; for example, to check if a file path belongs to an html file:

      let getLowerExtension p = (Path.GetExtension p).ToLowerInvariant()
      let isHtmlPath = isInSet [".htm"; ".html"; ".xhtml"] getLowerExtension

    However, when I use a function such as the above, performance is poor since evaluation of the function body as written in isInSet seems to be delayed until all parameters are known - in particular, invariant bits such as (Set.ofList setElems).Contains are re-evaluated on each execution of isHtmlPath. How can I best maintain F#'s succinct, readable nature while still getting the more efficient behaviour in which the set construction is pre-evaluated? The above is just an example; I'm looking for a general pattern that avoids bogging me down in implementation details - where possible I'd like to avoid being distracted by details such as the implementation's execution order, since that's usually not important to me and kind of undermines a major selling point of functional programming.
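
    A common pattern here (a sketch of the usual fix, not taken from the original post) is to take only the "configuration" arguments first, do the expensive work, and return a closure, so the set is built once when isHtmlPath is defined rather than on every call:

      // Sketch: the Set is constructed once, at partial-application time.
      open System.IO

      let isInSet setElems normalize =
          let elems = Set.ofList setElems
          fun p -> elems.Contains(normalize p)

      let getLowerExtension (p: string) = (Path.GetExtension p).ToLowerInvariant()
      let isHtmlPath = isInSet [".htm"; ".html"; ".xhtml"] getLowerExtension
      // isHtmlPath "index.HTML" evaluates to true without rebuilding the set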

    Read the article

  • Increase efficiency of a loop with jQuery

    - by Pez Cuckow
    I have a game coded in jQuery where bots are moved around the screen. The below code is a loop that runs every 20ms, currently if you have over 15 bots you start to notice the browser lagging (simply because of all the advanced collision detection going on). Is there any way to reduce the lag, can I make it any more efficient? P.s. sorrry for just posting a block of code, I can't see a way to make my point clear enough without! $.playground().registerCallback(function(){ //Movement Loop if(!pause) { for (var i in bots) { //bots - color, dir, x, y, z, spawned?, spawnerid, prevd var self = $('#b' + i); var current = bots[i]; if(bots[i][5]==1) { var xspeed = 0, yspeed = 0; if(current[1]==0) { yspeed = -D_SPEED; } else if(current[1]==1) { xspeed = D_SPEED; } else if(current[1]==2) { yspeed = D_SPEED; } else if(current[1]==3) { xspeed = -D_SPEED; } var x = current[2] + xspeed; var y = current[3] + yspeed; var z = current[3] + 120; if(current[2]>0&&x>PLAYGROUND_WIDTH||current[2]<0&&x<-GRID_SIZE|| current[3]>0&&y>PLAYGROUND_HEIGHT||current[3]<0&&y<-GRID_SIZE) { remove_bot(i, self); } else { if(current[7]!=current[1]) { self.setAnimation(colors[current[0]][current[1]]); bots[i][7] = current[1]; } if(self.css({"left": ""+(x)+"px", "top": ""+(y)+"px", "z-index": z})) { bots[i][2] = x; bots[i][3] = y; bots[i][4] = z; bots[i][8]++; } } } } $("#debug").html(dump(arrows)); $(".bot").each(function(){ var b_id = $(this).attr("id").substr(1); var collision = false; var c_bot = bots[b_id]; var b_x = c_bot[2]; var b_y = c_bot[3]; var b_d = c_bot[1]; $(this).collision(".arrow,#arrows").each(function(){ //Many thanks to Selim Arsever for this fix! var a_id = $(this).attr("id").substr(1); var piece = arrows[a_id]; var a_v = piece[0]; if(a_v==1) { var a_x = piece[2]; var a_y = piece[3]; var d_x = b_x-a_x; var d_y = b_y-a_y; if(d_x>=4&&d_x<=5&&d_y>=1&&d_y<=2) { //bots - color, dir, x, y, z, spawned?, spawnerid, prevd bots[b_id][7] = c_bot[1]; bots[b_id][1] = piece[1]; collision = true; } } }); if(!collision) { $(this).collision(".wall,#level").each(function(){ var w_id = $(this).attr("id").substr(1); var piece = pieces[w_id]; var w_x = piece[1]; var w_y = piece[2]; d_x = b_x-w_x; d_y = b_y-w_y; if(b_d==0&&d_x>=4&&d_x<=5&&d_y>=27&&d_y<=28) { kill_bot(b_id); collision = true; } //4 // 33 if(b_d==1&&d_x>=-12&&d_x<=-11&&d_y>=21&&d_y<=22) { kill_bot(b_id); collision = true; } //-14 // 21 if(b_d==2&&d_x>=4&&d_x<=5&&d_y>=-9&&d_y<=-8) { kill_bot(b_id); collision = true; } //4 // -9 if(b_d==3&&d_x>=22&&d_x<=23&&d_y>=20&&d_y<=21) { kill_bot(b_id); collision = true; } //22 // 21 }); } if(!collision&&c_bot[8]>GRID_MOVE) { $(this).collision(".spawn,#level").each(function(){ var s_id = $(this).attr("id").substr(1); var piece = pieces[s_id]; var s_x = piece[1]; var s_y = piece[2]; d_x = b_x-s_x; d_y = b_y-s_y; if(b_d==0&&d_x>=4&&d_x<=5&&d_y>=19&&d_y<=20) { kill_bot(b_id); collision = true; } //4 // 33 if(b_d==1&&d_x>=-14&&d_x<=-13&&d_y>=11&&d_y<=12) { kill_bot(b_id); collision = true; } //-14 // 21 if(b_d==2&&d_x>=4&&d_x<=5&&d_y>=-11&&d_y<=-10) { kill_bot(b_id); collision = true; } //4 // -9 if(b_d==3&&d_x>=22&&d_x<=23&&d_y>=11&&d_y<=12) { kill_bot(b_id); collision = true; } //22 // 21*/ }); } if(!collision) { $(this).collision(".exit,#level").each(function(){ var e_id = $(this).attr("id").substr(1); var piece = pieces[e_id]; var e_x = piece[1]; var e_y = piece[2]; d_x = b_x-e_x; d_y = b_y-e_y; if(d_x>=4&&d_x<=5&&d_y>=1&&d_y<=2) { current_bots++; bots[b_id] = false; $("#current_bots").html(current_bots); $("#b" + 
b_id).setAnimation(exit[2], function(node){$(node).fadeOut(200)}); } }); } if(!collision) { $(this).collision(".bot,#level").each(function(){ var bd_id = $(this).attr("id").substr(1); if(bd_id!=b_id) { var piece = bots[bd_id]; var bd_x = piece[2]; var bd_y = piece[3]; d_x = b_x-bd_x; d_y = b_y-bd_y; if(d_x>=0&&d_x<=2&&d_y>=0&&d_y<=2) { kill_bot(b_id); kill_bot(bd_id); collision = true; } } }); } }); } }, REFRESH_RATE); Many thanks,
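
    A few generic things that tend to help in a loop like this (a sketch of the idea using the existing variable names, not a drop-in patch for the code above): avoid re-querying the DOM and re-reading attr() values inside the inner collision callbacks, and pull each bot's data from the bots array you already have rather than from element attributes:

      // Sketch: hoist repeated lookups out of the inner callbacks.
      $(".bot").each(function () {
          var $bot = $(this);                      // cache the jQuery wrapper once
          var b_id = this.id.substr(1);            // plain DOM property, no attr() call
          var c_bot = bots[b_id];
          var b_x = c_bot[2], b_y = c_bot[3], b_d = c_bot[1];

          $bot.collision(".arrow,#arrows").each(function () {
              var piece = arrows[this.id.substr(1)];
              // ... same collision maths as before, using the cached values ...
          });
      });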

    Read the article

  • Efficiency of Java "Double Brace Initialization"?

    - by Jim Ferrans
    In Hidden Features of Java the top answer mentions Double Brace Initialization, with a very enticing syntax:

      Set<String> flavors = new HashSet<String>() {{
          add("vanilla");
          add("strawberry");
          add("chocolate");
          add("butter pecan");
      }};

    This idiom creates an anonymous inner class with just an instance initializer in it, which "can use any [...] methods in the containing scope". Main question: Is this as inefficient as it sounds? Should its use be limited to one-off initializations? (And of course showing off!) Second question: The new HashSet must be the "this" used in the instance initializer... can anyone shed light on the mechanism? Third question: Is this idiom too obscure to use in production code? Summary: Very, very nice answers, thanks everyone. On question (3), people felt the syntax should be clear (though I'd recommend an occasional comment, especially if your code will pass on to developers who may not be familiar with it). On question (1), the generated code should run quickly. The extra .class files do cause jar file clutter and slow program startup slightly (thanks to coobird for measuring that). Thilo pointed out that garbage collection can be affected, and the memory cost of the extra loaded classes may be a factor in some cases. Question (2) turned out to be most interesting to me. If I understand the answers, what's happening in DBI is that the anonymous inner class extends the class of the object being constructed by the new operator, and hence has a "this" value referencing the instance being constructed. Very neat. Overall, DBI strikes me as something of an intellectual curiosity. Coobird and others point out you can achieve the same effect with Arrays.asList, varargs methods, Google Collections, and the proposed Java 7 collection literals. Newer JVM languages like Scala, JRuby, and Groovy also offer concise notations for list construction and interoperate well with Java. Given that DBI clutters up the classpath, slows down class loading a bit, and makes the code a tad more obscure, I'd probably shy away from it. However, I plan to spring this on a friend who's just gotten his SCJP and loves good-natured jousts about Java semantics! ;-) Thanks everyone!
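
    For comparison, the Arrays.asList alternative mentioned in the summary avoids the extra anonymous class entirely (a sketch; the unmodifiable wrapper is optional):

      Set<String> flavors = new HashSet<String>(
          Arrays.asList("vanilla", "strawberry", "chocolate", "butter pecan"));

      // or, if the set should never change after construction:
      Set<String> fixedFlavors = Collections.unmodifiableSet(
          new HashSet<String>(Arrays.asList("vanilla", "strawberry", "chocolate", "butter pecan")));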

    Read the article

  • NSXMLParser Memory Allocation Efficiency for the iPhone

    - by Staros
    Hello, I've recently been playing with code for an iPhone app to parse XML. Sticking to Cocoa, I decided to go with the NSXMLParser class. The app will be responsible for parsing 10,000+ "computers", each of which contains 6 other strings of information. For my test, I've verified that the XML is around 900KB-1MB in size. My data model is to keep each computer in an NSDictionary hashed by a unique identifier. Each computer is also represented by an NSDictionary with the information. So at the end of the day, I end up with an NSDictionary containing 10k other NSDictionaries. The problem I'm running into isn't about leaking memory or efficient data structure storage. When my parser is done, the total amount of allocated objects only goes up by about 1MB. The problem is that while the NSXMLParser is running, my object allocation jumps up by as much as 13MB. I could understand 2 (one for the object I'm creating and one for the raw NSData) plus a little room to work, but 13 seems a bit high. I can't imagine that NSXMLParser is that inefficient. Thoughts? The code to start parsing:

      NSXMLParser *parser = [[NSXMLParser alloc] initWithData:data];
      [parser setDelegate:dictParser];
      [parser parse];
      output = [[dictParser returnDictionary] retain];
      [parser release];
      [dictParser release];

    And the parser's delegate code:

      - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
          namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qualifiedName
          attributes:(NSDictionary *)attributeDict
      {
          if (mutableString) {
              [mutableString release];
              mutableString = nil;
          }
          mutableString = [[NSMutableString alloc] init];
      }

      - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string
      {
          if (self.mutableString) {
              [self.mutableString appendString:string];
          }
      }

      - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName
          namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName
      {
          if ([elementName isEqualToString:@"size"]) {
              // The initial key, tells me how many computers
              returnDictionary = [[NSMutableDictionary alloc] initWithCapacity:[mutableString intValue]];
          }
          if ([elementName isEqualToString:hashBy]) {
              // The unique identifier
              if (mutableDictionary) {
                  [mutableDictionary release];
                  mutableDictionary = nil;
              }
              mutableDictionary = [[NSMutableDictionary alloc] initWithCapacity:6];
              [returnDictionary setObject:[NSDictionary dictionaryWithDictionary:mutableDictionary]
                                   forKey:[NSMutableString stringWithString:mutableString]];
          }
          if ([fields containsObject:elementName]) {
              // Any of the elements from a single computer that I am looking for
              [mutableDictionary setObject:mutableString forKey:elementName];
          }
      }

    Everything is initialized and released correctly. Again, I'm not getting errors or leaking - just inefficient. Thanks for any thoughts!
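
    One thing that can inflate the high-water mark during a parse (a sketch of the idea, not a measured fix for this app): convenience constructors such as dictionaryWithDictionary: and stringWithString: return autoreleased objects, and under manual reference counting none of those temporaries are freed until the autorelease pool drains after [parser parse] returns. Using alloc/init and releasing immediately keeps the peak lower, e.g. in didEndElement:

      // Sketch (manual reference counting, as in the original code).
      NSDictionary *snapshot = [[NSDictionary alloc] initWithDictionary:mutableDictionary];
      NSString *key = [[NSString alloc] initWithString:mutableString];
      [returnDictionary setObject:snapshot forKey:key];  // the dictionary retains both
      [snapshot release];
      [key release];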

    Read the article

  • Efficiency of nested Loop

    - by didxga
    See the following snippet:

      //first nested loops
      for (int i = 0; i < 10; i++) {
          for (int j = 1; j < 1000000; j++) {
              //do some stuff
          }
      }

      //second nested loops
      for (int i = 0; i < 1000000; i++) {
          for (int j = 1; j < 10; j++) {
              //do some stuff
          }
      }

    I am wondering why the first nested loop runs slower than the second one? Regards!
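
    If the snippet is Java (an assumption - the code reads equally as C or C#), timing differences like this are often a JIT warm-up artifact, so it is worth measuring each shape only after warm-up passes. A crude harness sketch (a real comparison would use a benchmarking framework):

      public class LoopTiming {
          static long runFirst() {
              long sink = 0;
              for (int i = 0; i < 10; i++)
                  for (int j = 1; j < 1000000; j++)
                      sink += j;               // stand-in for "do some stuff"
              return sink;
          }

          static long runSecond() {
              long sink = 0;
              for (int i = 0; i < 1000000; i++)
                  for (int j = 1; j < 10; j++)
                      sink += j;
              return sink;
          }

          public static void main(String[] args) {
              for (int warmup = 0; warmup < 5; warmup++) { runFirst(); runSecond(); }
              long t0 = System.nanoTime(); long a = runFirst();
              long t1 = System.nanoTime(); long b = runSecond();
              long t2 = System.nanoTime();
              System.out.printf("first: %d ms, second: %d ms (%d %d)%n",
                                (t1 - t0) / 1000000, (t2 - t1) / 1000000, a, b);
          }
      }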

    Read the article

  • calloc v/s malloc and time efficiency

    - by yCalleecharan
    Hi, I've read with interest the post "c difference between malloc and calloc". I'm using malloc in my code and would like to know what difference using calloc instead would make. My present (pseudo)code with malloc:

    Scenario 1

      int main() {
          allocate large arrays with malloc
          INITIALIZE ALL ARRAY ELEMENTS TO ZERO
          for loop  // say 1000 times
              do something and write results to arrays
          end for loop
          FREE ARRAYS with free command
      } // end main

    If I use calloc instead of malloc, then I'll have:

    Scenario 2

      int main() {
          for loop  // say 1000 times
              ALLOCATION OF ARRAYS WITH CALLOC
              do something and write results to arrays
              FREE ARRAYS with free command
          end for loop
      } // end main

    I have three questions: Which of the scenarios is more efficient if the arrays are very large? Which of the scenarios will be more time efficient if the arrays are very large? In both scenarios, I'm just writing to arrays in the sense that, for any given iteration in the for loop, I'm writing each array sequentially from the first element to the last element. The important question: if I'm using malloc as in scenario 1, is it necessary to initialize the elements to zero? Say with malloc I have array z = [garbage1, garbage2, garbage3]. For each iteration, I'm writing elements sequentially, i.e. in the first iteration I get z = [some_result, garbage2, garbage3], in the second iteration I get z = [some_result, another_result, garbage3], and so on. Do I need to specifically initialize my arrays after malloc?
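
    A small sketch of the difference (an illustration, not the asker's actual code): calloc(n, size) is essentially malloc(n * size) plus zero-filling, and if every element is overwritten before it is ever read - as in the sequential writes described above - the zero-filling buys nothing:

      /* Sketch: equivalent allocations; zero-filling only matters if an element
         could be read before it is written. */
      #include <stdlib.h>
      #include <string.h>

      #define N 1000000

      int main(void) {
          double *a = malloc(N * sizeof *a);   /* uninitialized                  */
          double *b = calloc(N, sizeof *b);    /* zero-filled                    */
          double *c = malloc(N * sizeof *c);
          if (!a || !b || !c) return 1;
          memset(c, 0, N * sizeof *c);         /* roughly what calloc adds       */

          for (size_t i = 0; i < N; i++)       /* every element written before   */
              a[i] = (double)i;                /* any read: no init needed here  */

          free(a); free(b); free(c);
          return 0;
      }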

    Read the article

  • Efficiency of checking for null varbinary(max) column ?

    - by Moe Sisko
    Using SQL Server 2008. Example table:

      CREATE table dbo.blobtest (
          id int primary key not null,
          name nvarchar(200) not null,
          data varbinary(max) null)

    Example query:

      select id, name,
             cast((case when data is null then 0 else 1 end) as bit) as DataExists
      from dbo.blobtest

    Now, the query needs to return a "DataExists" column that returns 0 if the blob is null, else 1. This all works fine, but I'm wondering how efficient it is, i.e. does SQL Server need to read the whole blob into memory, or is there some optimization so that it just does enough reads to figure out whether the blob is null or not? (FWIW, the sp_tableoption "large value types out of row" option is set to OFF for this example.)
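
    One way to see how much the NULL check actually touches (a sketch; run it against the asker's own table) is to compare logical reads and LOB reads with statistics IO turned on:

      -- Sketch: if "lob logical reads" stays at 0, the blob itself is not being
      -- read in order to answer the IS NULL test.
      SET STATISTICS IO ON;
      select id, name,
             cast((case when data is null then 0 else 1 end) as bit) as DataExists
      from dbo.blobtest;
      SET STATISTICS IO OFF;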

    Read the article

  • Array sorting efficiency... Beginner need advice

    - by SoleSoft
    I'll start by saying I am very much a beginner programmer, this is essentially my first real project outside of using learning material. I've been making a 'Simon Says' style game (the game where you repeat the pattern generated by the computer) using C# and XNA, the actual game is complete and working fine but while creating it, I wanted to also create a 'top 10' scoreboard. The scoreboard would record player name, level (how many 'rounds' they've completed) and combo (how many buttons presses they got correct), the scoreboard would then be sorted by combo score. This led me to XML, the first time using it, and I eventually got to the point of having an XML file that recorded the top 10 scores. The XML file is managed within a scoreboard class, which is also responsible for adding new scores and sorting scores. Which gets me to the point... I'd like some feedback on the way I've gone about sorting the score list and how I could have done it better, I have no other way to gain feedback =(. I know .NET features Array.Sort() but I wasn't too sure of how to use it as it's not just a single array that needs to be sorted. When a new score needs to be entered into the scoreboard, the player name and level also have to be added. These are stored within an 'array of arrays' (10 = for 'top 10' scores) scoreboardComboData = new int[10]; // Combo scoreboardTextData = new string[2][]; scoreboardTextData[0] = new string[10]; // Name scoreboardTextData[1] = new string[10]; // Level as string The scoreboard class works as follows: - Checks to see if 'scoreboard.xml' exists, if not it creates it - Initialises above arrays and adds any player data from scoreboard.xml, from previous run - when AddScore(name, level, combo) is called the sort begins - Another method can also be called that populates the XML file with above array data The sort checks to see if the new score (combo) is less than or equal to any recorded scores within the scoreboardComboData array (if it's greater than a score, it moves onto the next element). If so, it moves all scores below the score it is less than or equal to down one element, essentially removing the last score and then places the new score within the element below the score it is less than or equal to. If the score is greater than all recorded scores, it moves all scores down one and inserts the new score within the first element. If it's the only score, it simply adds it to the first element. When a new score is added, the Name and Level data is also added to their relevant arrays, in the same way. What a tongue twister. Below is the AddScore method, I've added comments in the hope that it makes things clearer O_o. You can get the actual source file HERE. Below the method is an example of the quickest way to add a score to follow through with a debugger. public static void AddScore(string name, string level, int combo) { // If the scoreboard has not yet been filled, this adds another 'active' // array element each time a new score is added. The actual array size is // defined within PopulateScoreBoard() (set to 10 - for 'top 10' if (totalScores < scoreboardComboData.Length) totalScores++; // Does the scoreboard even need sorting? 
if (totalScores > 1) { for (int i = totalScores - 1; i > - 1; i--) { // Check to see if score (combo) is greater than score stored in // array if (combo > scoreboardComboData[i] && i != 0) { // If so continue to next element continue; } // Check to see if score (combo) is less or equal to element 'i' // score && that the element is not the last in the // array, if so the score does not need to be added to the scoreboard else if (combo <= scoreboardComboData[i] && i != scoreboardComboData.Length - 1) { // If the score is lower than element 'i' and greater than the last // element within the array, it needs to be added to the scoreboard. This is achieved // by moving each element under element 'i' down an element. The new score is then inserted // into the array under element 'i' for (int j = totalScores - 1; j > i; j--) { // Name and level data are moved down in their relevant arrays scoreboardTextData[0][j] = scoreboardTextData[0][j - 1]; scoreboardTextData[1][j] = scoreboardTextData[1][j - 1]; // Score (combo) data is moved down in relevant array scoreboardComboData[j] = scoreboardComboData[j - 1]; } // The new Name, level and score (combo) data is inserted into the relevant array under element 'i' scoreboardTextData[0][i + 1] = name; scoreboardTextData[1][i + 1] = level; scoreboardComboData[i + 1] = combo; break; } // If the method gets the this point, it means that the score is greater than all scores within // the array and therefore cannot be added in the above way. As it is not less than any score within // the array. else if (i == 0) { // All Names, levels and scores are moved down within their relevant arrays for (int j = totalScores - 1; j != 0; j--) { scoreboardTextData[0][j] = scoreboardTextData[0][j - 1]; scoreboardTextData[1][j] = scoreboardTextData[1][j - 1]; scoreboardComboData[j] = scoreboardComboData[j - 1]; } // The new number 1 top name, level and score, are added into the first element // within each of their relevant arrays. scoreboardTextData[0][0] = name; scoreboardTextData[1][0] = level; scoreboardComboData[0] = combo; break; } // If the methods get to this point, the combo score is not high enough // to be on the top10 score list and therefore needs to break break; } } // As totalScores < 1, the current score is the first to be added. Therefore no checks need to be made // and the Name, Level and combo data can be entered directly into the first element of their relevant // array. else { scoreboardTextData[0][0] = name; scoreboardTextData[1][0] = level; scoreboardComboData[0] = combo; } } } Example for adding score: private static void Initialize() { scoreboardDoc = new XmlDocument(); if (!File.Exists("Scoreboard.xml")) GenerateXML("Scoreboard.xml"); PopulateScoreBoard("Scoreboard.xml"); // ADD TEST SCORES HERE! AddScore("EXAMPLE", "10", 100); AddScore("EXAMPLE2", "24", 999); PopulateXML("Scoreboard.xml"); } In it's current state the source file is just used for testing, initialize is called within main and PopulateScoreBoard handles the majority of other initialising, so nothing else is needed, except to add a test score. I thank you for your time!
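
    For what it's worth, the parallel-array bookkeeping can usually be replaced by a small score type plus the built-in sort the question mentions. A sketch (with hypothetical type and field names rather than the game's real ones):

      // Sketch: keep one list of entries, sort by combo descending, keep the top 10.
      public class ScoreEntry
      {
          public string Name;
          public string Level;
          public int Combo;
      }

      private List<ScoreEntry> scores = new List<ScoreEntry>();

      public void AddScore(string name, string level, int combo)
      {
          scores.Add(new ScoreEntry { Name = name, Level = level, Combo = combo });
          scores.Sort((a, b) => b.Combo.CompareTo(a.Combo));   // highest combo first
          if (scores.Count > 10)
              scores.RemoveRange(10, scores.Count - 10);
      }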

    Read the article

  • NSDictionary, NSArray, NSSet and efficiency

    - by ryyst
    Hi, I've got a text file, with about 200,000 lines. Each line represents an object with multiple properties. I only search through one of the properties (the unique ID) of the objects. If the unique ID I'm looking for is the same as the current object's unique ID, I'm gonna read the rest of the object's values. Right now, each time I search for an object, I just read the whole text file line by line, create an object for each line and see if it's the object I'm looking for - which is basically the most inefficient way to do the search. I would like to read all those objects into memory, so I can later search through them more efficiently. The question is, what's the most efficient way to perform such a search? Is a 200,000-entries NSArray a good way to do this (I doubt it)? How about an NSSet? With an NSSet, is it possible to only search for one property of the objects? Thanks for any help! -- Ry
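
    Given that lookups are always by the unique ID, an NSDictionary keyed on that ID gives effectively constant-time lookups, so the 200,000 lines only have to be parsed once. A sketch (Record, parsedRecords, and uniqueId are hypothetical names for the parsed objects):

      // Sketch: build the index once...
      NSMutableDictionary *objectsById = [NSMutableDictionary dictionaryWithCapacity:200000];
      for (Record *record in parsedRecords) {
          [objectsById setObject:record forKey:record.uniqueId];
      }

      // ...then each search is a single hash lookup instead of a file scan.
      Record *match = [objectsById objectForKey:wantedId];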

    Read the article

  • Java: Efficiency of the readLine method of the BufferedReader and possible alternatives

    - by Luhar
    We are working to reduce the latency and increase the performance of a process written in Java that consumes data (XML strings) from a socket via the readLine() method of the BufferedReader class. The data is delimited by the end-of-line separator (\n), and each line can be of variable length (6Kbit - 32Kbit). Our code looks like:

      Socket sock = connection;
      InputStream in = sock.getInputStream();
      BufferedReader inputReader = new BufferedReader(new InputStreamReader(in));
      ...
      do {
          String input = inputReader.readLine();
          // Executor call to parse the input in a separate thread
      } while (true);

    So I have a couple of questions: Will the inputReader.readLine() method return as soon as it hits the \n character, or will it wait till the buffer is full? Is there a faster way of picking up data from the socket than using a BufferedReader? What happens when the size of the input string is smaller than the size of the socket's receive buffer? What happens when the size of the input string is bigger than the size of the socket's receive buffer? I am getting to grips (slowly) with Java's IO libraries, so any pointers are much appreciated. Thank you!
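
    On the first question: readLine() returns as soon as a line terminator has been read; it does not wait for the buffer to fill. One cheap knob worth trying (a sketch; the charset and buffer size are assumptions to tune, not measured values, and StandardCharsets needs Java 7+) is an explicit charset plus a buffer comfortably larger than the longest line, so fewer reads are issued against the socket - 32Kbit is only 4KB, so even the default 8KB buffer already holds a whole line:

      // Sketch: explicit charset decoding and a 64 KB read buffer.
      BufferedReader inputReader = new BufferedReader(
              new InputStreamReader(sock.getInputStream(), StandardCharsets.UTF_8),
              64 * 1024);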

    Read the article

  • jQuery bind efficiency

    - by chelfers
    I'm having issue with load speed using multiple jQuery binds on a couple thousands elements and inputs, is there a more efficient way of doing this? The site has the ability to switch between product lists via ajax calls, the page cannot refresh. Some lists have 10 items, some 100, some over 2000. The issue of speed arises when I start flipping between the lists; each time the 2000+ item list is loaded the system drags for about 10 seconds. Before I rebuild the list I am setting the target element's html to '', and unbinding the two bindings below. I'm sure it has something to do with all the parent, next, and child calls I am doing in the callbacks. Any help is much appreciated. loop 2500 times <ul> <li><input type="text" class="product-code" /></li> <li>PROD-CODE</li> ... <li>PRICE</li> </ul> end loop $('li.product-code').bind( 'click', function(event){ selector = '#p-'+ $(this).prev('li').children('input').attr('lm'); $(selector).val( ( $(selector).val() == '' ? 1 : ( parseFloat( $(selector).val() ) + 1 ) ) ); Remote.Cart.lastProduct = selector; Remote.Cart.Products.Push( Remote.Cart.customerKey, { code : $(this).prev('li').children('input').attr('code'), title : $(this).next('li').html(), quantity : $('#p-'+ $(this).prev('li').children('input').attr('lm') ).val(), price : $(this).prev('li').children('input').attr('price'), weight : $(this).prev('li').children('input').attr('weight'), taxable : $(this).prev('li').children('input').attr('taxable'), productId : $(this).prev('li').children('input').attr('productId'), links : $(this).prev('li').children('input').attr('productLinks') }, '#p-'+ $(this).prev('li').children('input').attr('lm'), false, ( parseFloat($(selector).val()) - 1 ) ); return false; }); $('input.product-qty').bind( 'keyup', function(){ Remote.Cart.lastProduct = '#p-'+ $(this).attr('lm'); Remote.Cart.Products.Push( Remote.Cart.customerKey, { code : $(this).attr('code') , title : $(this).parent().next('li').next('li').html(), quantity : $(this).val(), price : $(this).attr('price'), weight : $(this).attr('weight'), taxable : $(this).attr('taxable'), productId : $(this).attr('productId'), links : $(this).attr('productLinks') }, '#p-'+ $(this).attr('lm'), false, previousValue ); });
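
    With a couple of thousand rows, the usual first step is event delegation: bind one handler on a common ancestor instead of one handler per element, so switching lists no longer has to bind or unbind thousands of handlers. A sketch using .delegate(), which is available in jQuery versions of that era ('#product-list' is a hypothetical container id):

      // Sketch: two delegated handlers total, regardless of how many rows are rendered.
      $('#product-list')
          .delegate('li.product-code', 'click', function (event) {
              // same body as the per-element click handler, with $(this) as the <li>
          })
          .delegate('input.product-qty', 'keyup', function (event) {
              // same body as the per-input keyup handler
          });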

    Read the article

  • Improving the efficiency of Kinect for Windows DTWGestureRecognition Application

    - by Ray
    Currently I am using the DTWGestureRecognition open source tool for Kinect SDK v1.5. I have recorded a few gestures and use them to navigate through Windows 7. I also have implemented voice control for simple things such as opening PowerPoint, Chrome, etc. My main issue is that the application uses quite a bit of my CPU power which causes it to become slow. During gestures and voice commands, the CPU usage sometimes spikes to 80-90%, which causes the application to be unresponsive for a few seconds. I am running it on a 64 bit Windows 7 machine with an i5 processor and 8 GB of RAM. I was wondering if anyone with any experience using this tool or Kinect in general has made it more efficient and less performance hogging. Right now I removed sections which display the RGB video and the Depth video but even doing that did not make a big impact. Any help is appreciated, thanks!

    Read the article

  • Increase efficiency for an R simulator of the Monty Hall Puzzle

    - by jahan_m
    The Monty Hall Problem is a simple puzzle involving probability that even stumps professionals in careers dealing with some heavy-duty math. Here's the basic problem: Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice? You can find numerous explanations of the solution here: http://en.wikipedia.org/wiki/Monty_Hall_problem Goal of my simulation: prove that a switching strategy will win you the car 2/3 of the time. I got curious and wanted to write a little function that simulates the problem many times and returns the proportion of wins if you switched and the proportion of wins if you stayed with your first choice. The function then plots the cumulative wins. First and foremost, I'm interested in hearing whether my simulation is indeed replicating the Monty Hall problem, or if some aspect of the code got it wrong. Secondly, this function takes a long time to run once I get to about 10,000 simulations. I know I don't need this many simulations to prove this, but I'd love to hear some ideas on how to make it more efficient. Thanks for your feedback!

      Monty_Hall = function(repetitions) {
        doors = c('A', 'B', 'C')
        stay_wins = 0
        switch_wins = 0
        series = data.frame(sim_num = seq(repetitions),
                            cum_sum_stay = replicate(repetitions, 0),
                            cum_sum_switch = replicate(repetitions, 0))
        for (i in seq(repetitions)) {
          winning_door = sample(doors, 1)
          contestant_chooses = sample(doors, 1)
          if (contestant_chooses == winning_door) {
            stay_wins = stay_wins + 1
          } else {
            switch_wins = switch_wins + 1
          }
          series[i, 'cum_sum_stay'] = stay_wins
          series[i, 'cum_sum_switch'] = switch_wins
        }
        plot(series$sim_num, series$cum_sum_switch, col = 2, ylab = 'Cumulative # of wins',
             xlab = 'Simulation #',
             main = sprintf('%d Simulations of the Monty Hall Paradox', repetitions), type = 'l')
        lines(series$sim_num, series$cum_sum_stay, col = 4)
        legend('topleft', legend = c('Cumulative wins from switching',
                                     'Cumulative wins from staying'), col = c(2, 4), lty = 1)
        result = list(series = series, stay_wins = stay_wins, switch_wins = switch_wins,
                      proportion_stay_wins = stay_wins / repetitions,
                      proportion_switch_wins = switch_wins / repetitions)
        return(result)
      }

      # Theory predicts that it is to the contestant's advantage if he switches his
      # choice to the other door. This function simulates the game many times, and
      # shows you the proportion of games in which staying or switching would win
      # the car. It also plots the cumulative wins for each strategy.
      Monty_Hall(100)
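
    The slow parts are the 10,000 separate sample() calls and, especially, assigning into the data.frame one row per iteration. A vectorised sketch of the same experiment (same logic, drawing all doors at once and using cumsum(); the plotting is left out for brevity):

      # Sketch: one vectorised draw per strategy instead of a loop with per-row assignment.
      monty_hall_fast <- function(repetitions) {
        doors   <- c('A', 'B', 'C')
        winning <- sample(doors, repetitions, replace = TRUE)
        chosen  <- sample(doors, repetitions, replace = TRUE)
        stay    <- winning == chosen          # staying wins iff the first pick was right
        series  <- data.frame(sim_num        = seq_len(repetitions),
                              cum_sum_stay   = cumsum(stay),
                              cum_sum_switch = cumsum(!stay))
        list(series = series,
             proportion_stay_wins   = mean(stay),
             proportion_switch_wins = mean(!stay))
      }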

    Read the article

  • Mysql Performance Question - Essentially about normalizing efficiency

    - by freqmode
    Hi there. Just a quick question about database performance. I'll outline my site's purpose below as background. I'm creating a dictionary site that saves the words users define to a database. What I'm wondering is whether to create a words table for each user or to keep one massive words table. This site will be used by entire schools, so the single words table would be massive! The database structure is as follows.

    A user table with:
      User_ID PRIMARY KEY, Username, First, Last, Password, Email, Country, Research, Standings, SendInfo, Donated, JoinedOn, LastLogin, Logins, Correct, Attempts, Admin, Active

    And one word table with:
      User_ID PRIMARY KEY, Word, Vocab, Spell, Defined, DefinedAttempted, Spelled, SpelledAttempted, Sentenced, SentencedAttempted

    So what I'm asking is: performance-wise, should I create a new table for each user when they join the site - each user could have hundreds or thousands of words over time? Or is it better to have one massive table with thousands and thousands of records and filter by User_ID? I don't think I'll perform many table joins. My gut feeling is to create a new table for each user, but I thought I'd ask for expert advice! Thanks in advance.
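
    The usual answer is one shared words table with an index that leads on the user column, so per-user lookups stay fast no matter how many users there are. A sketch (column types and the users table name are assumptions; the word row also needs its own key, since User_ID alone cannot be the primary key once a user has more than one word):

      -- Sketch: one table for all users' words, indexed for "this user's words" queries.
      CREATE TABLE words (
          Word_ID   INT UNSIGNED NOT NULL AUTO_INCREMENT,
          User_ID   INT UNSIGNED NOT NULL,
          Word      VARCHAR(100) NOT NULL,
          Vocab     TINYINT NOT NULL DEFAULT 0,
          -- ... remaining columns as in the question ...
          PRIMARY KEY (Word_ID),
          KEY idx_user_word (User_ID, Word),
          FOREIGN KEY (User_ID) REFERENCES users (User_ID)
      ) ENGINE=InnoDB;

      -- A typical lookup stays index-driven:
      SELECT Word, Defined, Spelled FROM words WHERE User_ID = 42;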

    Read the article

  • Where to draw the line between efficiency and practicality

    - by dclowd9901
    I understand very well the need for websites' front ends to be coded and compressed as much as possible, however, I feel like I have more lax standards than others when it comes to practical applications. For instance, while I understand why some would, I don't see anything wrong with putting selectors in the <html> or <body> tags on a website with an expected small visitation rate. I would only do this for a cheap website for a small client, because I can't really justify the cost of time otherwise. So, that said, do you think it's okay to draw a line? Where do you draw yours?

    Read the article
