Search Results

Search found 3706 results on 149 pages for 'nano optimization'.


  • Which is faster in memory, ints or chars? And file-mapping or chunk reading?

    - by Nick
    Okay, so I've written a (rather unoptimized) program before to encode images to JPEGs; now I am working with MPEG-2 transport streams and the H.264 video encoded within them. Before I dive into programming all of this, I am curious what the fastest way to deal with the actual file is. Currently I am file-mapping the .mts file into memory to work on it, although I am not sure whether it would be faster to (for example) read 100 MB of the file into memory in chunks and deal with it that way. These files require a lot of bit-shifting and masking to read flags, so I am wondering, when I reference some of that memory, whether it is faster to read 4 bytes at once as an integer or 1 byte at a time as a character. I thought I read somewhere that x86 processors are optimized for 4-byte granularity, but I'm not sure if this is true... Thanks!
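
    A minimal C sketch of the word-at-a-time idea (read_u32_be is a made-up helper; the flag shown is the MPEG-TS transport_error_indicator, bit 23 of the 32-bit packet header; assumes a little-endian x86 host):

        #include <stdint.h>
        #include <string.h>

        /* Load 4 bytes as one big-endian 32-bit word; memcpy keeps the access
           legal even if the pointer is unaligned, and compiles to a single load. */
        static uint32_t read_u32_be(const uint8_t *p)
        {
            uint32_t v;
            memcpy(&v, p, sizeof v);
            return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
                   ((v << 8) & 0x00FF0000u) | (v << 24);
        }

        static int transport_error_indicator(const uint8_t *ts_packet)
        {
            return (read_u32_be(ts_packet) >> 23) & 0x1; /* one load, then shift/mask */
        }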

    Read the article

  • Creating objects makes the VM faster?

    - by Sudhir Jonathan
    Look at this piece of code:

        MessageParser parser = new MessageParser();
        for (int i = 0; i < 10000; i++) {
            parser.parse(plainMessage, user);
        }

    For some reason, it runs SLOWER (by about 100 ms) than:

        for (int i = 0; i < 10000; i++) {
            MessageParser parser = new MessageParser();
            parser.parse(plainMessage, user);
        }

    Any ideas why? The tests were repeated many times, so it wasn't just random. How could creating an object 10,000 times be faster than creating it once?
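
    Differences this small are often JIT warm-up artifacts rather than real costs. A rough harness (a sketch, reusing the question's parser, plainMessage, and user) that lets the JIT warm up and times both variants over several runs:

        // Time both variants back to back across several runs; only the later
        // runs, after JIT compilation has settled, are worth comparing.
        for (int run = 0; run < 5; run++) {
            long t0 = System.nanoTime();
            MessageParser shared = new MessageParser();
            for (int i = 0; i < 10000; i++) shared.parse(plainMessage, user);
            long t1 = System.nanoTime();
            for (int i = 0; i < 10000; i++) new MessageParser().parse(plainMessage, user);
            long t2 = System.nanoTime();
            System.out.printf("run %d: shared=%d us, fresh=%d us%n",
                    run, (t1 - t0) / 1000, (t2 - t1) / 1000);
        }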

    Read the article

  • Can I optimize this at all?

    - by Moshe
    I'm working on an iOS app, and I'm using the following code in one of my tables to return the number of rows in a particular section:

        return [[kSettings arrayForKey:@"views"] count];

    Is there any other way to write that line of code so that it is more memory efficient? EDIT: kSettings stands for [NSUserDefaults standardUserDefaults]. Is there any way to rewrite my line of code so that whatever memory it occupies is released sooner than it is now?
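
    One direction, sketched below (cachedViews is a hypothetical property, not from the question): read the array once instead of on every row-count call, and drop it under memory pressure:

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            if (self.cachedViews == nil) {
                self.cachedViews = [kSettings arrayForKey:@"views"];
            }
            return [self.cachedViews count];
        }

        - (void)didReceiveMemoryWarning {
            self.cachedViews = nil; // release the cached array when memory is tight
            [super didReceiveMemoryWarning];
        }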

    Read the article

  • Fast find object by string property

    - by Andrew Kalashnikov
    Hello, colleagues. My task is to find objects quickly by a string property. The object:

        class DicDomain {
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
        }

    For storage I currently use a List<T> named dictionary, where T is DicDomain. I have 5-10 such lists, each containing about 500-20,000 items. The task is to find objects by Name. This is the code I use now:

        List<T> entities = dictionary.FindAll(s => s.Name.Equals(word, StringComparison.OrdinalIgnoreCase));

    My questions: Is my search speed optimal? I think not. Is List a good data structure for this task? What about a hashtable, or something sorted? About the Find method: maybe I should use string interning? I don't have much experience with these tasks; can you give me good advice for increasing performance? Thanks
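
    A sketch of the hash-based alternative (uses LINQ, so it needs System.Linq and System.Collections.Generic): build a case-insensitive lookup once per list, after which each search is a single probe instead of a full scan:

        var byName = dictionary
            .GroupBy(d => d.Name, StringComparer.OrdinalIgnoreCase)
            .ToDictionary(g => g.Key, g => g.ToList(),
                          StringComparer.OrdinalIgnoreCase);

        // Per-word lookup is now O(1) on average:
        List<DicDomain> matches;
        if (!byName.TryGetValue(word, out matches))
            matches = new List<DicDomain>();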

    Read the article

  • How to optimize this javascript code?

    - by Andrija
    I have a JSP which uses a lot of JavaScript, and it's just not fast enough. I would like to optimize it, so here's a part of the code. In the JSP I have the initialization:

        window.onload = function () {
            formCollection.pageSize.value = "<%= pagingSize %>";
            elemCollection = iDom3.Table.all["spis"].XML.DOM;
            <% if (resultList != null) { %>
                elementsNumber = <%= resultList.size() %>;
            <% } else { %>
                elementsNumber = 0;
            <% } %>
            contextPath = "<%= request.getContextPath() %>";
        }

    In my JS file I have two types of functions:

        // gets the first element and sets its value to all the others;
        // selectSingleNode is used because the table is generated via an
        // XSLT transformation
        _setTehJed = function () {
            var resultId = formCollection.elements["idTehJedinice_spis_1"].value;
            var resultText = formCollection.elements["tehnicka_spis_1"].value;
            if (resultId != "") {
                var counter = 1;
                while (counter < elementsNumber) {
                    counter++;
                    if (formCollection.elements["idTehJedinice_spis_" + counter] != null) {
                        formCollection.elements["idTehJedinice_spis_" + counter].value = resultId;
                        formCollection.elements["tehnicka_spis_" + counter].value = resultText;
                    }
                    var node = elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_" + counter + "']/data[@col = 'tehnicka']/title");
                    node.text = resultText;
                    var node2 = elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_" + counter + "']/data[@col = 'idTehJedinice']/title");
                    node2.text = resultId;
                }
            }
        }

        // sets the element's checkbox to checked or unchecked
        _SelectCheckRokCuvanja = {
            all: [],
            Item: function (oItem, sId) {
                this.all["spis_" + sId] = oItem.value;
                if (oItem.checked) {
                    elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_" + sId + "']/data[@col = 'rokCheck']").setAttribute("default", "true");
                } else {
                    elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_" + sId + "']/data[@col = 'rokCheck']").setAttribute("default", "false");
                }
            }
        }

    I've used these tips:

        http://blogs.msdn.com/b/ie/archive/2006/08/28/728654.aspx
        http://code.google.com/speed/articles/optimizing-javascript.html

    but I still think something could be done, like defining the functions this way. In the JSP:

        window.onload = function () {
            iDom3.DigitalnaArhivaPrihvat.formCollection = document.forms["controller"];
            iDom3.DigitalnaArhivaPrihvat.formCollection.pageSize.value = "<%= pagingSize %>";
            iDom3.DigitalnaArhivaPrihvat.elemCollection = iDom3.Table.all["spis"].XML.DOM;
            <% if (resultList != null) { %>
                iDom3.DigitalnaArhivaPrihvat.elementsNumber = <%= resultList.size() %>;
            <% } else { %>
                iDom3.DigitalnaArhivaPrihvat.elementsNumber = 0;
            <% } %>
        }

    In the JS:

        iDom3.DigitalnaArhivaPrihvat = {
            formCollection: null,
            elemCollection: null,
            elementsNumber: null,
            _setTehJed: function () {
                var resultId = this.formCollection.elements.idTehJedinice_spis_1.value;
                var resultText = this.formCollection.elements.tehnicka_spis_1.value;
                if (resultId != "") {
                    var counter = 1;
                    while (counter < this.elementsNumber) {
                        counter++;
                        if (this.formCollection.elements["idTehJedinice_spis_" + counter] !== null) {
                            this.formCollection.elements["idTehJedinice_spis_" + counter].value = resultId;
                            this.formCollection.elements["tehnicka_spis_" + counter].value = resultText;
                        }
                        var node = this.elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_" + counter + "']/data[@col = 'tehnicka']/title");
                        node.text = resultText;
                        var node2 = this.elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_" + counter + "']/data[@col = 'idTehJedinice']/title");
                        node2.text = resultId;
                    }
                }
            },
            _SelectCheckRokCuvanja = {
                all: [],
                Item: function (oItem, sId) {
                    this.all["spis_" + sId] = oItem.value;
                    if (oItem.checked) {
                        this.elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_" + sId + "']/data[@col = 'rokCheck']").setAttribute("default", "true");
                    } else {
                        this.elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_" + sId + "']/data[@col = 'rokCheck']").setAttribute("default", "false");
                    }
                }
            }
        }

    but the problem is scoping (if I do it like this, the second function does not execute properly). Any suggestions?
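
    One detail worth checking in the proposed version (an observation about the snippet, not something from the original page): object-literal members must be declared with ':' rather than '=', and inside Item, 'this' refers to _SelectCheckRokCuvanja rather than the outer namespace, so elemCollection has to be reached through the full path. A minimal sketch:

        iDom3.DigitalnaArhivaPrihvat = {
            // ...
            _SelectCheckRokCuvanja: {        // ':' here, not '='
                all: [],
                Item: function (oItem, sId) {
                    // 'this' is _SelectCheckRokCuvanja in this scope, so go
                    // through the full namespace path to reach the DOM:
                    var dom = iDom3.DigitalnaArhivaPrihvat.elemCollection;
                    dom.selectSingleNode("/suite/table/rows/row[@id = 'spis_" + sId + "']/data[@col = 'rokCheck']")
                       .setAttribute("default", oItem.checked ? "true" : "false");
                }
            }
        };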

    Read the article

  • Inline function v. Macro in C -- What's the Overhead (Memory/Speed)?

    - by Jason R. Mick
    I searched Stack Overflow for the pros/cons of function-like macros vs. inline functions and found the following discussion: Pros and Cons of Different macro function / inline methods in C ...but it didn't answer my primary burning question: namely, what is the overhead in C of using a macro function (with variables, possibly other function calls) vs. an inline function, in terms of memory usage and execution speed? Are there any compiler-dependent differences in overhead? I have both icc and gcc at my disposal. The code snippet I'm modularizing is:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
        double RepulsiveTerm = AttractiveTerm * AttractiveTerm;
        EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    My reason for turning it into an inline function/macro is so I can drop it into a C file and then conditionally compile other similar, but slightly different, functions/macros, e.g.:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
        double RepulsiveTerm = pow(SigmaSquared/RadialDistanceSquared,9);
        EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    (note the difference in the second line...) This function is central to my code and gets called thousands of times per step in my program, and my program performs millions of steps. Thus I want the LEAST overhead possible, which is why I'm spending time worrying about the overhead of inlining vs. transforming the code into a macro. Based on the prior discussion I already understand the other pros/cons of macros (type independence and the errors that result from it)... but what I want to know most, and don't currently know, is the PERFORMANCE. I know some of you C veterans will have great insight for me!!
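
    For reference, a sketch of the inline-function form (the function name is invented; needs math.h):

        #include <math.h>

        static inline void AddPairEnergy(double SigmaSquared,
                                         double RadialDistanceSquared,
                                         double Epsilon,
                                         double *EnergyContribution)
        {
            double AttractiveTerm = pow(SigmaSquared / RadialDistanceSquared, 3);
            double RepulsiveTerm  = AttractiveTerm * AttractiveTerm;
            *EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);
        }

    With optimization on (gcc/icc at -O2 and above), a macro and an inline function like this typically compile to identical code, so the measurable difference is usually nil; inspecting the generated assembly for both variants is the only real arbiter.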

    Read the article

  • Execute a method as few times as possible - PHP

    - by serhio
    I have a site in multiple languages. I have a method that returns today's exchange rates in an array, and I then display those rates in a table.

        // --- en/index.php
        <?php
        include_once "../exchangeRates.php";
        $currencies = ReadExchangeRates();

        // --- fr/index.php
        <?php
        include_once "../exchangeRates.php";
        $currencies = ReadExchangeRates();

        ... // somewhere in the page
        <td><?php echo $currencies["eur"]["today"]; ?></td>

    So every time I load en/, fr/, or another language, I request the exchange rates from an external site. Can I optimize this behavior (reading once per day or per session)? Maybe store a global variable and check the update date?
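
    A sketch of a simple file-based cache in front of the existing function (the cache path and the one-day TTL are arbitrary choices):

        <?php
        include_once "exchangeRates.php";

        function ReadExchangeRatesCached($ttl = 86400) {
            $cacheFile = __DIR__ . '/rates.cache';
            // Serve the cached copy while it is younger than $ttl seconds.
            if (file_exists($cacheFile) && time() - filemtime($cacheFile) < $ttl) {
                return unserialize(file_get_contents($cacheFile));
            }
            $rates = ReadExchangeRates();   // the existing remote fetch
            file_put_contents($cacheFile, serialize($rates));
            return $rates;
        }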

    Read the article

  • How to simplify my code... 2D array in Objective C...?

    - by Tattat
    self.myArray = [NSArray arrayWithObjects:
                        [NSArray arrayWithObjects:[self d], [self generateMySecretObject], nil],
                        [NSArray arrayWithObjects:[self generateMySecretObject], [self generateMySecretObject], nil],
                        nil];

        for (int k = 0; k < [self.myArray count]; k++) {
            for (int s = 0; s < [[self.myArray objectAtIndex:k] count]; s++) {
                [[[self.myArray objectAtIndex:k] objectAtIndex:s] setAttribute:[self generateSecertAttribute]];
            }
        }

    As you can see this is a simple 2*2 array, but it takes a lot of code to build the NSArray in the first place, because I found that an NSArray's size can't be declared up front. Also, I want to set the attributes one by one. I can't imagine how long this would get if my array grew to 10*10. I hope you can give me some suggestions on making the code shorter and more readable. thz
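
    A sketch of a loop-built version, which scales to any n without more code (uses NSMutableArray and keeps the question's method names as-is):

        NSUInteger n = 2;   // or 10; the code does not change
        NSMutableArray *grid = [NSMutableArray arrayWithCapacity:n];
        for (NSUInteger row = 0; row < n; row++) {
            NSMutableArray *cols = [NSMutableArray arrayWithCapacity:n];
            for (NSUInteger col = 0; col < n; col++) {
                id obj = [self generateMySecretObject];
                [obj setAttribute:[self generateSecertAttribute]]; // set as we build
                [cols addObject:obj];
            }
            [grid addObject:cols];
        }
        self.myArray = grid;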

    Read the article

  • Animate screen while loading textures

    - by Omega
    My RPG-like game has random battles. When the player enters a random battle, the game needs to load the textures used within that battle (animated monsters, animations, etc.). There are a lot of textures, and they are big (the battles are very graphically intensive), so loading takes significant time, and while it happens the whole screen freezes. The game's map freezes, and the wait is long enough that I personally find it annoying. I can't afford to preload the textures because, after doing some math, I realized: if I preload all the textures at the beginning of the game, the application will definitely crash; if I preload the textures used in a specific map when the player enters that map, the application is very likely to crash as well. I can only afford to load the textures when I need them, and dispose of them as soon as the battle ends. I'd prefer not to use a "loading screen" image because it affects my game's design and concept, so I want to avoid that approach. If I could play some kind of animation while loading the textures, it would be great, which leads to my question: is that possible? What kind of animation, you ask? Well, remember how Final Fantasy used to distort the screen while apparently loading the battle? Something like that. But distortion is quite a time-consuming process as well, so maybe just a cool frame-by-frame animation or something. While writing this, I realized that I could make small pauses between textures (there are multiple textures), and during each pause update the screen to reflect the animation's state. However, this is unlikely to work well, because each texture is 2048x2048, so the animation would refresh at a rather laggy (and annoying) rate. I'd prefer to avoid that as well.
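
    One technique that addresses exactly the per-texture lag concern (a sketch; OpenGL ES is assumed, and drawLoadingFrame/presentFrameBuffer are hypothetical helpers): allocate each texture's storage up front, then upload the pixels in horizontal strips with glTexSubImage2D, rendering one frame of the loading animation between strips, so no single upload stalls the loop for a full 2048x2048 image:

        /* Allocate storage once with no data... */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2048, 2048, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        /* ...then stream the pixels in 256-row slices. */
        for (int y = 0; y < 2048; y += 256) {
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, y, 2048, 256,
                            GL_RGBA, GL_UNSIGNED_BYTE,
                            pixels + y * 2048 * 4);
            drawLoadingFrame();   /* hypothetical: draw one animation frame */
            presentFrameBuffer(); /* hypothetical: swap buffers */
        }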

    Read the article

  • File IO with Streams - Best Memory Buffer Size

    - by AJ
    I am writing a small IO library to assist with a larger (hobby) project. Part of this library performs various functions on a file, which is read/written via the FileStream object. On each StreamReader.Read(...) pass, I fire an event which the main app uses to display progress information. The processing that goes on in the loop is varied, but not too time-consuming (it could be a simple file copy, for example, or may involve encryption...). My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2 KB, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer that reads an entire FAT cluster at a time. I realize that with today's PCs I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP/HTTP servers (over a local network / fast-ish DSL). What would be the best memory buffer size for those (again, the best trade-off between perceived responsiveness and performance)?
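
    As a starting point, a sketch of the read loop with an 80 KB buffer (an arbitrary middle ground: large enough to amortize system calls, small enough to keep progress events frequent; only measurement against the real workload can settle the number). OnProgress is a hypothetical event raiser:

        const int BufferSize = 80 * 1024;
        byte[] buffer = new byte[BufferSize];
        long totalRead = 0;
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            destination.Write(buffer, 0, read);
            totalRead += read;
            OnProgress(totalRead);   // UI update cadence follows buffer size
        }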

    Read the article

  • How to batch retrieve documents with mongoDB?

    - by edude05
    Hello everyone, I have an application that queries data from MongoDB using the MongoDB C# driver, something like this:

        public void main() {
            foreach (int i in listOfKey) {
                list.Add(getObjFromDb(i));
            }
        }

        public myObject getObjFromDb(int primaryKey) {
            Document query = new Document();
            query["primKey"] = primaryKey;
            Document result = mongo["myDatabase"]["myCollection"].FindOne(query);
            return parseObject(result);
        }

    On my local (development) machine, getting 100 objects this way takes less than a second. However, I recently moved the database to a server on the internet, and the same query takes about 30 seconds for the same number of objects. Furthermore, looking at the MongoDB log, it seems to open about 8-10 connections to the DB to perform the query. So what I'd like to do is query the database with an array of primary keys and get them all back at once, then do the parsing in a loop afterwards, using one connection if possible. How can I optimize my query to do so? Thanks, --Michael
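
    MongoDB's $in operator fetches the whole batch in one round trip; a sketch in the style of the snippet above (the exact API shape of this older C# driver is an assumption, so treat the call names as illustrative):

        public List<myObject> getObjectsFromDb(List<int> primaryKeys)
        {
            // One query document matching any of the keys: { primKey: { $in: [...] } }
            Document query = new Document();
            query["primKey"] = new Document().Append("$in", primaryKeys.ToArray());

            var results = new List<myObject>();
            foreach (Document doc in mongo["myDatabase"]["myCollection"].Find(query).Documents)
            {
                results.Add(parseObject(doc));
            }
            return results;
        }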

    Read the article

  • Improving MySQL Update Query Efficiency

    - by Russell C.
    In our database tables we keep a number of counting columns to help reduce the number of simple lookup queries. For example, in our users table we have columns for the number of reviews written, photos uploaded, friends, followers, etc. To help keep these in sync, we have a script that runs periodically to check and update the counting columns. The problem is that now that our database has grown significantly, the queries we have been using take forever to run because they are totally inefficient. I would appreciate someone with more MySQL knowledge than I have recommending how to improve this one's efficiency:

        update users
        set photos = (select count(*) from photos
                      where photos.status = "A" and photos.user_id = users.id)
        where users.status = "A";

    If this were a SELECT statement I would just use a join, but I'm not sure whether that is possible with UPDATE. Thanks in advance for your help!
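
    It is possible: MySQL supports multi-table UPDATE. A sketch of the join form (the LEFT JOIN plus COALESCE keeps users with zero photos at 0 rather than leaving them untouched; compare results against the subquery version before switching):

        UPDATE users
        LEFT JOIN (
            SELECT user_id, COUNT(*) AS cnt
            FROM photos
            WHERE status = 'A'
            GROUP BY user_id
        ) p ON p.user_id = users.id
        SET users.photos = COALESCE(p.cnt, 0)
        WHERE users.status = 'A';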

    Read the article

  • How much faster is a database running in RAM?

    - by orokusaki
    I"m looking to run PostgreSQL in RAM for performance enhancement. The database isn't more than 1GB and shouldn't ever grow to more than 5GB. Is it worth doing? Are there any benchmarks out there? Is it buggy? My second major concern is: How easy is it to back things up when it's running purely in RAM. Is this just like using RAM as tier 1 HD, or is it much more complicated?

    Read the article

  • Common causes of slow performing jQuery and how to optimize the code?

    - by Polaris878
    Hello, this might be a bit of a vague or general question, but I figure it could serve as a good resource for other jQuery-ers. I'm interested in common causes of slow-running jQuery and how to optimize those cases. We have a good amount of jQuery/JavaScript performing actions on our page, and performance can really suffer with a large number of elements. What are some obvious performance pitfalls you know of with jQuery? What are some general optimizations a jQuery-er can do to squeeze every last bit of performance out of his/her scripts? One example: a developer may use a selector to access an element that is slower than some other way. Thanks
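
    Two of the most common pitfalls, sketched with hypothetical selectors:

        // 1. Re-querying inside loops; cache the jQuery object instead.
        var $rows = $('#results .row');        // queried once
        $rows.addClass('visible');
        $rows.first().text('updated');

        // 2. Broad selectors; rooting the query at an ID lets jQuery skip
        //    most of the DOM.
        $('#sidebar').find('.widget').hide();  // narrower than $('.widget') alone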

    Read the article

  • How can I optimize this loop?

    - by Moshe
    I've got a piece of code that returns a super-long string representing "search results". Each result is separated by a double HTML break symbol. For example:

        Result1<br><br>Result2<br><br>Result3

    I've got the following loop that takes each result and puts it into an array, stripping out the break indicator kBreakIndicator (<br><br>). The problem is that this loop takes far too long to execute. With a few results it's fine, but once you hit a hundred results it's about 20-30 seconds slower. That performance is unacceptable. What can I do to improve it? Here's my code (content is the original NSString):

        NSMutableArray *results = [[NSMutableArray alloc] init];
        // Loop through the string of results and put each one into an array
        while (![content isEqualToString:@""]) {
            NSRange rangeOfResult = [content rangeOfString:kBreakIndicator];
            NSString *temp = (rangeOfResult.location != NSNotFound) ? [content substringToIndex:rangeOfResult.location] : nil;
            if (temp) {
                [results addObject:temp];
                content = [[[content stringByReplacingOccurrencesOfString:[NSString stringWithFormat:@"%@%@", temp, kBreakIndicator] withString:@""] mutableCopy] autorelease];
            } else {
                [results addObject:[content description]];
                content = [[@"" mutableCopy] autorelease];
            }
        }
        // Do something with the results array.
        [results release];
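
    The loop is quadratic: every iteration rewrites the entire remaining string. For comparison, the standard Foundation call splits on the delimiter once, in a single linear pass:

        NSArray *results = [content componentsSeparatedByString:kBreakIndicator];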

    Read the article

  • optimize a string.Format + replace.

    - by acidzombie24
    I have this function. The Visual Studio profiler marked the line with string.Format as hot and as where much of the time is spent. How can I write this loop more efficiently?

        public string EscapeNoPredicate(string sz)
        {
            var s = new StringBuilder(sz);
            s.Replace(sepStr, sepStr + sepStr);
            foreach (char v in IllegalChars)
            {
                string s2 = string.Format("{0}{1:X2}", seperator, (Int16)v);
                s.Replace(v.ToString(), s2);
            }
            return s.ToString();
        }
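
    The per-character escape strings never change between calls, so they can be computed once and reused; a sketch (assumes IllegalChars and seperator are reachable from a static initializer; otherwise build the map lazily in the constructor):

        private static readonly Dictionary<char, string> Escapes =
            IllegalChars.ToDictionary(
                c => c,
                c => string.Format("{0}{1:X2}", seperator, (Int16)c));

        public string EscapeNoPredicate(string sz)
        {
            var s = new StringBuilder(sz);
            s.Replace(sepStr, sepStr + sepStr);
            foreach (var pair in Escapes)          // no string.Format per call
                s.Replace(pair.Key.ToString(), pair.Value);
            return s.ToString();
        }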

    Read the article

  • Why isn't the copy constructor elided here?

    - by Jesse Beder
    (I'm using gcc with -O2.) This seems like a straightforward opportunity to elide the copy constructor, since there are no side effects to accessing the value of a field in a bar's copy of a foo; but the copy constructor is called, since I get the output "meep meep!".

        #include <iostream>

        struct foo {
            foo(): a(5) { }
            foo(const foo& f): a(f.a) { std::cout << "meep meep!\n"; }
            int a;
        };

        struct bar {
            foo F() const { return f; }
            foo f;
        };

        int main() {
            bar b;
            int a = b.F().a;
            return 0;
        }
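
    A note on why: return-value elision applies to local temporaries (RVO/NRVO); here F() returns a copy of a member that must remain intact after the call, so the copy constructor has to run. A sketch of a signature that avoids the copy, at the price of exposing the member:

        struct bar {
            const foo& F() const { return f; }  // no copy; caller sees the member itself
            foo f;
        };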

    Read the article

  • MySQL query paralyzes site

    - by nute
    Once in a while, at random intervals, our website becomes completely paralyzed. Looking at SHOW FULL PROCESSLIST;, I've noticed that when this happens, a specific query has been "Copying to tmp table" for a very long time (sometimes 350 seconds), and almost all the other queries are "Locked". The part I don't understand is that 90% of the time this query runs fine: I see it go through the process list and finish quickly. The query is issued by an AJAX call on our homepage to display product recommendations based on your browsing history (a la Amazon). Just sometimes, randomly (but too often), it gets stuck at "Copying to tmp table". Here is a caught instance of the query that had been running for 109 seconds when I looked:

        SELECT DISTINCT product_product.id, product_product.name, product_product.retailprice,
                        product_product.imageurl, product_product.thumbnailurl, product_product.msrp
        FROM product_product, product_xref, product_viewhistory
        WHERE (
                (product_viewhistory.productId = product_xref.product_id_1
                 AND product_xref.product_id_2 = product_product.id)
             OR (product_viewhistory.productId = product_xref.product_id_2
                 AND product_xref.product_id_1 = product_product.id)
              )
          AND product_product.outofstock = 'N'
          AND product_viewhistory.cookieId = '188af1efad392c2adf82'
          AND product_viewhistory.productId IN (24976, 25873, 26067, 26073, 44949, 16209, 70528, 69784, 75171, 75172)
        ORDER BY product_xref.hits DESC
        LIMIT 10

    Of course the cookieId and the list of productIds change dynamically with each request. I use PHP with PDO.
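
    One direction worth trying (an untested sketch; verify with EXPLAIN and compare the result sets): splitting the OR into a UNION ALL often lets MySQL use an index on each xref column instead of building a temp table for the whole join:

        SELECT p.id, p.name, p.retailprice, p.imageurl, p.thumbnailurl, p.msrp, m.hits
        FROM (
            SELECT x.product_id_2 AS pid, x.hits
            FROM product_viewhistory v
            JOIN product_xref x ON v.productId = x.product_id_1
            WHERE v.cookieId = '188af1efad392c2adf82'
              AND v.productId IN (24976, 25873, 26067, 26073, 44949, 16209, 70528, 69784, 75171, 75172)
            UNION ALL
            SELECT x.product_id_1 AS pid, x.hits
            FROM product_viewhistory v
            JOIN product_xref x ON v.productId = x.product_id_2
            WHERE v.cookieId = '188af1efad392c2adf82'
              AND v.productId IN (24976, 25873, 26067, 26073, 44949, 16209, 70528, 69784, 75171, 75172)
        ) m
        JOIN product_product p ON p.id = m.pid AND p.outofstock = 'N'
        ORDER BY m.hits DESC
        LIMIT 10;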

    Read the article

  • Why is this query so slow?

    - by Silver Light
    This query appears in the MySQL slow query log: it takes 11 seconds.

        INSERT INTO record_visits (record_id, visit_day)
        VALUES ('567', NOW());

    The table has 501,043 records and its structure looks like this:

        CREATE TABLE IF NOT EXISTS `record_visits` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `record_id` int(11) DEFAULT NULL,
          `visit_day` date DEFAULT NULL,
          `visit_cnt` bigint(20) DEFAULT '1',
          PRIMARY KEY (`id`),
          UNIQUE KEY `record_id_visit_day` (`record_id`,`visit_day`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    What could be wrong? Why does this INSERT take so long?
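
    Separately from the speed question: the UNIQUE (record_id, visit_day) key suggests the table counts visits per record per day, and MySQL can do the insert-or-increment in one statement (CURDATE() also matches the DATE column better than NOW()):

        INSERT INTO record_visits (record_id, visit_day)
        VALUES (567, CURDATE())
        ON DUPLICATE KEY UPDATE visit_cnt = visit_cnt + 1;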

    Read the article

  • Composite primary keys in N-M relation or not?

    - by BerggreenDK
    Let's say we have 3 tables (actually I have 2 at the moment, but this example might illustrate the thought better):

        [Person]
            ID: int, primary key
            Name: nvarchar(xx)

        [Group]
            ID: int, primary key
            Name: nvarchar(xx)

        [Role]
            ID: int, primary key
            Name: nvarchar(xx)

        [PersonGroupRole]
            Person_ID: int, PRIMARY COMPOSITE OR NOT?
            Group_ID: int, PRIMARY COMPOSITE OR NOT?
            Role_ID: int, PRIMARY COMPOSITE OR NOT?

    Should any of the 3 IDs in the PersonGroupRole relation be marked as the PRIMARY key, or should all 3 be combined into one composite key? What's the real benefit of doing it or not? I can join either way as far as I know, so Person JOIN PersonGroupRole JOIN Group gives me which persons are in which groups, etc. I will be using LINQ/C#/.NET on top of SQL Express and SQL Server, so if there are any language/SQL reasons that make the choice clearer, that's the platform I'm asking about. Looking forward to the answers, as I have thought about these primary keys/indexes many times when making combined ones.
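
    For concreteness, a sketch of the composite-key variant in T-SQL (the key makes each person/group/role combination unique and doubles as the lookup index):

        CREATE TABLE PersonGroupRole (
            Person_ID int NOT NULL REFERENCES Person(ID),
            Group_ID  int NOT NULL REFERENCES [Group](ID),
            Role_ID   int NOT NULL REFERENCES Role(ID),
            PRIMARY KEY (Person_ID, Group_ID, Role_ID)
        );

    The usual alternative is a surrogate identity ID column plus a UNIQUE constraint over the three foreign keys, which some ORMs and tooling find easier to work with, at the cost of one extra column and index.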

    Read the article

  • Controlling read and write access width to memory mapped registers in C

    - by srking
    I'm using an x86-based core to manipulate a 32-bit memory-mapped register. My hardware behaves correctly only if the CPU generates 32-bit-wide reads and writes to this register. The register is aligned on a 32-bit address and is not addressable at byte granularity. What can I do to guarantee that my C (or C99) compiler will only generate full 32-bit-wide reads and writes in all cases? For example, if I do a read-modify-write operation like this:

        volatile uint32_t* p_reg = 0xCAFE0000;
        *p_reg |= 0x01;

    I don't want the compiler to get smart about the fact that only the bottom byte changes and generate 8-bit-wide reads/writes. Since the machine code is often denser for 8-bit operations on x86, I'm afraid of unwanted optimizations. Disabling optimizations in general is not an option.
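
    In practice (the C standard does not pin down access width, but compilers access volatile objects at their declared width), funneling every access through explicitly 32-bit helpers makes the intent unmistakable; a sketch:

        #include <stdint.h>

        /* All register access goes through these, so the access type is always
           a full volatile uint32_t and never a narrowed sub-object. */
        static inline uint32_t reg_read(volatile uint32_t *r)             { return *r; }
        static inline void     reg_write(volatile uint32_t *r, uint32_t v) { *r = v; }

        #define P_REG ((volatile uint32_t *)(uintptr_t)0xCAFE0000)

        void set_flag(void)
        {
            reg_write(P_REG, reg_read(P_REG) | 0x01); /* one 32-bit read, one 32-bit write */
        }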

    Read the article

  • Tree iterator, can you optimize this any further?

    - by Ron
    As a follow-up to my original question about a small piece of this code, I'd like to see if you can do better than what we came up with so far. The code below iterates over a binary tree (left/right = child/next). I do believe there is room for one less conditional in here (the down boolean). The fastest answer wins! The constraints:

        - The cnt statement can be multiple statements, so make sure it appears only once.
        - The child() and next() member functions are about 30x as slow as the hasChild() and hasNext() operations.
        - Keep it iterative <-- dropped this requirement, as the recursive solution presented was faster.
        - This is C++ code.
        - The visit order of the nodes must stay as in the example below (hit parents first, then the children, then the 'next' nodes).
        - BaseNodePtr is a boost::shared_ptr, thus assignments are slow; avoid any temporary BaseNodePtr variables.

    Currently this code takes 5897 ms to visit 62,200,000 nodes in a test tree, calling this function 200,000 times.

        void processTree (BaseNodePtr current, unsigned int & cnt )
        {
            bool down = true;
            while ( true )
            {
                if ( down )
                {
                    while (true) {
                        cnt++; // this can/will be multiple statements
                        if (!current->hasChild()) break;
                        current = current->child();
                    }
                }
                if ( current->hasNext() )
                {
                    down = true;
                    current = current->next();
                }
                else
                {
                    down = false;
                    current = current->parent();
                    if (!current) return; // done.
                }
            }
        }
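
    For reference, a sketch of the recursive shape the requirements allude to (passing the shared_ptr by const reference to avoid the copy cost called out above; it visits in the same parent / children / 'next' order):

        void processTreeRec(const BaseNodePtr &current, unsigned int &cnt)
        {
            cnt++;                                      // visit the parent first
            if (current->hasChild())
                processTreeRec(current->child(), cnt);  // then its child chain
            if (current->hasNext())
                processTreeRec(current->next(), cnt);   // then the 'next' chain
        }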

    Read the article
