Search Results

Search found 671 results on 27 pages for 'optimizing'.

  • x86 opcode alignment references and guidelines

    - by mrjoltcola
    I'm generating opcodes dynamically in a JIT compiler and I'm looking for guidelines on opcode alignment.
    1) I've read comments that briefly "recommend" alignment by adding nops after calls.
    2) I've also read about using nop for optimizing sequences for parallelism.
    3) I've read that alignment of ops is good for "cache" performance.
    Usually these comments don't give any supporting references. It's one thing to read a blog or a comment that says "it's a good idea to do such and such", but it's another to actually write a compiler that implements specific op sequences and realize that most material online, especially blogs, is not useful for practical application. So I'm a believer in finding things out myself (disassembly, etc., to see what real-world apps do). This is one case where I need some outside info. I notice compilers will usually start an odd-byte instruction immediately after whatever previous instruction sequence there was, so the compiler is not taking any special care in most cases. I see a nop here or there, but usually it seems nop is used sparingly, if at all. How critical is opcode alignment? Can you provide references for cases that I can actually use for implementation? Thanks.
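
    For what it's worth, a minimal sketch of the padding step itself, assuming code is emitted into a flat buffer (Java used purely for illustration; 0x90 is the single-byte x86 nop):

      // Pad with 0x90 (nop) until the next instruction starts on a 16-byte boundary.
      static int alignTo16(byte[] code, int pos) {
          while (pos % 16 != 0)
              code[pos++] = (byte) 0x90;
          return pos; // emit the next instruction at this aligned offset
      }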

  • Using Traveling Salesman Solver to Decide Hamiltonian Path

    - by Firas Assaad
    This is for a project where I'm asked to implement a heuristic for the traveling salesman optimization problem and also for the Hamiltonian path or cycle decision problem. I don't need help with the implementation itself, but I have a question about the direction I'm going in.
    I already have a TSP heuristic based on a genetic algorithm: it assumes a complete graph, starts with a set of random solutions as a population, and works to improve the population over a number of generations. Can I also use it to solve the Hamiltonian path or cycle problems? Instead of optimizing for the shortest path, I just want to check whether a path exists. Now, any complete graph has a Hamiltonian path in it, so the TSP heuristic would have to be extended to arbitrary graphs. This could be done by assigning some "infinity" weight to an edge wherever no edge exists between two cities, and returning the first path that is a valid Hamiltonian path.
    Is that the right way to approach it? Or should I use a different heuristic for Hamiltonian path? My main concern is whether it's a viable approach, since I can be somewhat sure that TSP optimization works (because you start with solutions and improve them) but not that a Hamiltonian path decider would find any path in a fixed number of generations. I assume the best approach would be to test it myself, but I'm constrained by time and thought I'd ask before going down this route... (I could find a different heuristic for Hamiltonian path instead.)
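
    A minimal sketch of that reduction, assuming an undirected graph given as a boolean adjacency matrix (the genetic algorithm itself stands in for the existing heuristic and is not shown):

      public class HamiltonianViaTsp {
          static final int PENALTY = 1000000; // stands in for "infinity" on missing edges

          // Complete the graph: real edges cost 1, missing edges cost PENALTY.
          static int[][] toTspWeights(boolean[][] adj) {
              int n = adj.length;
              int[][] w = new int[n][n];
              for (int i = 0; i < n; i++)
                  for (int j = 0; j < n; j++)
                      w[i][j] = adj[i][j] ? 1 : PENALTY;
              return w;
          }

          // A TSP tour on the completed graph is a Hamiltonian cycle exactly
          // when it avoids every penalty edge, i.e. its cost is n edges * 1 = n.
          static boolean isHamiltonianCycle(int[] tour, int[][] w) {
              int cost = 0, n = tour.length;
              for (int i = 0; i < n; i++)
                  cost += w[tour[i]][tour[(i + 1) % n]];
              return cost == n;
          }
      }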

  • Optimal Serialization of Primitive Types

    - by Greg Dean
    We are beginning to roll out more and more WAN deployments of our product (.NET fat client with an IIS-hosted Remoting backend). Because of this we are trying to reduce the size of the data on the wire. We have overridden the default serialization by implementing ISerializable (similar to this), and we are seeing anywhere from 12% to 50% gains. Most of our efforts focus on optimizing arrays of primitive types. I would like to know if anyone knows of any fancy way of serializing primitive types, beyond the obvious.
    For example, today we serialize an array of ints as follows: [4 bytes (array length)][4 bytes][4 bytes]... Can anyone do significantly better? The most obvious example of a significant improvement, for boolean arrays, is putting 8 bools in each byte, which we already do.
    Note: Saving 7 bits per bool may seem like a waste of time, but when you are dealing with large volumes of data (which we are), it adds up very fast.
    Note: We want to avoid general compression algorithms because of the latency associated with them. Remoting only supports buffered requests/responses (no chunked encoding). I realize there is a fine line between compression and optimal serialization, but our tests indicate we can afford very specific serialization optimizations at very little cost in latency, whereas reprocessing the entire buffered response into a new compressed buffer is too expensive.
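
    For illustration, a minimal sketch of the 8-bools-per-byte packing described above, written in Java rather than C# (the codebase in question is .NET); for int arrays, variable-length (varint) or delta encoding would be common next steps:

      public class BoolPacking {
          // Pack 8 booleans per byte, low bit first.
          static byte[] pack(boolean[] bits) {
              byte[] out = new byte[(bits.length + 7) / 8];
              for (int i = 0; i < bits.length; i++)
                  if (bits[i])
                      out[i / 8] |= (byte) (1 << (i % 8));
              return out;
          }

          // Unpack; the original element count must be carried alongside the bytes.
          static boolean[] unpack(byte[] bytes, int count) {
              boolean[] bits = new boolean[count];
              for (int i = 0; i < count; i++)
                  bits[i] = (bytes[i / 8] & (1 << (i % 8))) != 0;
              return bits;
          }
      }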

  • What do I need to develop for an ARM processor?

    - by claws
    Hello, I'm familiar with x86[-64] architecture & assembly. I want to start developing for an ARM processor, but unlike with desktop processors, I don't have an actual ARM processor, so I think I need an ARM simulator. http://www.armtutorial.com/ says: "An ARM assembly compiler will be required, the most accessible is the ARMulator." I thought of downloading the ARMulator but found from http://forums.arm.com/index.php?showtopic=13744 that: "It's not sold separately. But you can download an eval of RVDS - which includes RVISS/ARMulator." I've downloaded & installed RVDS, but it looks very complex and I'm unable to figure out what I need to do to write ARM assembly and run it. I want to write in assembly, not in C/C++, and I don't have an ARM processor.
    What is a good simulator? Can anyone briefly explain how to write assembly, then assemble and simulate it using RVDS? Are there any alternative ways? I can't afford to buy any kind of board. I always learn from books rather than tutorials, and I'm following these two:
    - ARM System Developer's Guide: Designing and Optimizing System Software (The Morgan Kaufmann Series in Computer Architecture and Design)
    - ARM System-on-Chip Architecture (2nd Edition)
    Do you have any better suggestions?

  • SEM/SEO tasks doubts

    - by Josemalive
    Hello, I think I have fairly strong knowledge of SEO, but I'm having some doubts about the following: in the next months I will have to improve the position in Google of certain product pages of a company. I suppose the following tasks will not be sufficient on their own:
    - Improve the usability of those pages.
    - Change the page titles.
    - Add meta description and keywords.
    - URLs in a REST style.
    - 301 HTTP headers so the new URLs don't lose PageRank.
    - Optimize content for Google.
    - Configure the links of the website (follow and nofollow attributes).
    - Get more inbound links (link-building tasks).
    - Create an RSS feed.
    - Put the main website on Twitter (using twitter feed) via the RSS feed.
    - Put the main website on Facebook.
    - Create a YouTube channel.
    - Invest in AdWords.
    - Invest in other online advertising companies.
    - Use sitemap.xml and Google Webmaster Tools.
    - Use Google Trends to analyze the search volume of certain keywords.
    - Use Google Analytics to analyze the weak and strong points of your site, and find new opportunities in keywords.
    - Use tools to find new keywords related to your content.
    Do you have some internet links, or knowledge, about all the tasks that an SEO expert should do? Could you share some knowledge about what kind of business could be done with other companies (B2B) to increase the search engine position of those product pages? Do you know more techniques for getting more inbound links? (I only know link exchange.) Thanks in advance. Best Regards, Jose.

  • Using StringBuilder to process csv files to save heap space

    - by portoalet
    I am reading a csv file that is about 50,000 lines and 1.1 MiB in size (and can grow larger). In Code1 I use String to process the csv, while in Code2 I use StringBuilder (only one thread executes the code, so there are no concurrency issues). Using StringBuilder makes the code a little bit harder to read than using the normal String class. Am I prematurely optimizing things with StringBuilder in Code2 to save a bit of heap space and memory?

    Code1

      fr = new FileReader(file);
      BufferedReader reader = new BufferedReader(fr);
      String line = reader.readLine();
      while (line != null) {
          int separator = line.indexOf(',');
          String symbol = line.substring(0, separator);
          int begin = separator;
          separator = line.indexOf(',', begin + 1);
          String price = line.substring(begin + 1, separator);
          // Publish this update
          publisher.publishQuote(symbol, price);
          // Read the next line of fake update data
          line = reader.readLine();
      }

    Code2

      fr = new FileReader(file);
      BufferedReader reader = new BufferedReader(fr);
      StringBuilder stringBuilder = new StringBuilder();
      String next = reader.readLine();
      while (next != null) {
          stringBuilder.replace(0, stringBuilder.length(), next);
          int separator = stringBuilder.indexOf(",");
          String symbol = stringBuilder.substring(0, separator);
          int begin = separator;
          separator = stringBuilder.indexOf(",", begin + 1);
          String price = stringBuilder.substring(begin + 1, separator);
          publisher.publishQuote(symbol, price);
          next = reader.readLine();
      }

  • Optimize existing code and list results alphabetically

    - by Brad
    I need help optimizing this code to run faster, unless it is already as optimized as it can be. I also want to alphabetize the list, and I am unsure how to do that. It should be alphabetized by $userinfo[0]["sn"][0]. I am using the adLDAP class: http://adldap.sourceforge.net/

      <?php
      require_once('adLDAP.php');
      //header('Content-type: text/json');
      $adldap = new adLDAP();
      $groupMembers = $adldap->group_members('STAFF');
      //print_r($groupMembers);
      $userinfo = $adldap->user_info($username, array("givenname","sn","telephonenumber"));
      $displayname = $userinfo[0]["givenname"][0]." ".$userinfo[0]["sn"][0];
      print "<ul>";
      foreach ($groupMembers as $i => $username) {
          $userinfo = $adldap->user_info($username, array("*"));
          $displayname = "<strong>".$userinfo[0]["givenname"][0]." ".$userinfo[0]["sn"][0]."</strong> - ".$userinfo[0]["telephonenumber"][0];
          if ($userinfo[0]["sn"][0] != "" && $userinfo[0]["givenname"][0] != "" && $userinfo[0]["telephonenumber"][0] != "") {
              print "<li>".$displayname."</li>";
          }
      }
      print "</ul>";
      ?>

  • Optimize an SQL statement

    - by kovshenin
    Hey, I'm running WordPress; the database diagram can be found here: http://codex.wordpress.org/Database_Description
    After doing tonnes of filters and applying some hooks to the core, I'm left with the following query:

      SELECT SQL_CALC_FOUND_ROWS wp_posts.*
      FROM wp_posts
      JOIN wp_postmeta ppmeta_beds ON (ppmeta_beds.post_id = wp_posts.ID AND ppmeta_beds.meta_key = 'pp-general-beds' AND ppmeta_beds.meta_value >= 2)
      JOIN wp_postmeta ppmeta_baths ON (ppmeta_baths.post_id = wp_posts.ID AND ppmeta_baths.meta_key = 'pp-general-baths' AND ppmeta_baths.meta_value >= 3)
      JOIN wp_postmeta ppmeta_furnished ON (ppmeta_furnished.post_id = wp_posts.ID AND ppmeta_furnished.meta_key = 'pp-general-furnished' AND ppmeta_furnished.meta_value = 'yes')
      JOIN wp_postmeta ppmeta_pool ON (ppmeta_pool.post_id = wp_posts.ID AND ppmeta_pool.meta_key = 'pp-facilities-pool' AND ppmeta_pool.meta_value = 'yes')
      JOIN wp_postmeta ppmeta_pool_type ON (ppmeta_pool_type.post_id = wp_posts.ID AND ppmeta_pool_type.meta_key = 'pp-facilities-pool-type' AND ppmeta_pool_type.meta_value IN ('tennis', 'voleyball', 'basketball', 'fitness'))
      JOIN wp_postmeta ppmeta_sport ON (ppmeta_sport.post_id = wp_posts.ID AND ppmeta_sport.meta_key = 'pp-facilities-sport' AND ppmeta_sport.meta_value = 'yes')
      JOIN wp_postmeta ppmeta_sport_type ON (ppmeta_sport_type.post_id = wp_posts.ID AND ppmeta_sport_type.meta_key = 'pp-facilities-sport-type' AND ppmeta_sport_type.meta_value IN ('tennis', 'voleyball', 'basketball', 'fitness'))
      JOIN wp_postmeta ppmeta_parking ON (ppmeta_parking.post_id = wp_posts.ID AND ppmeta_parking.meta_key = 'pp-facilities-parking' AND ppmeta_parking.meta_value = 'yes')
      JOIN wp_postmeta ppmeta_parking_type ON (ppmeta_parking_type.post_id = wp_posts.ID AND ppmeta_parking_type.meta_key = 'pp-facilities-parking-type' AND ppmeta_parking_type.meta_value IN ('street', 'off-street', 'garage'))
      JOIN wp_postmeta ppmeta_garden ON (ppmeta_garden.post_id = wp_posts.ID AND ppmeta_garden.meta_key = 'pp-facilities-garden' AND ppmeta_garden.meta_value = 'yes')
      JOIN wp_postmeta ppmeta_garden_type ON (ppmeta_garden_type.post_id = wp_posts.ID AND ppmeta_garden_type.meta_key = 'pp-facilities-garden-type' AND ppmeta_garden_type.meta_value IN ('private', 'communal'))
      JOIN wp_postmeta ppmeta_type ON (ppmeta_type.post_id = wp_posts.ID AND ppmeta_type.meta_key = 'pp-general-type' AND ppmeta_type.meta_value IN ('villa', 'apartment', 'penthouse'))
      JOIN wp_postmeta ppmeta_status ON (ppmeta_status.post_id = wp_posts.ID AND ppmeta_status.meta_key = 'pp-general-status' AND ppmeta_status.meta_value IN ('off-plan', 'resale'))
      JOIN wp_postmeta ppmeta_location_type ON (ppmeta_location_type.post_id = wp_posts.ID AND ppmeta_location_type.meta_key = 'pp-location-type' AND ppmeta_location_type.meta_value IN ('beachfront', 'countryside', 'town-center', 'near-the-sea', 'hillside', 'private-resort'))
      JOIN wp_postmeta ppmeta_price_range ON (ppmeta_price_range.post_id = wp_posts.ID AND ppmeta_price_range.meta_key = 'pp-general-price' AND ppmeta_price_range.meta_value BETWEEN 10000 AND 50000)
      JOIN wp_postmeta ppmeta_area_range ON (ppmeta_area_range.post_id = wp_posts.ID AND ppmeta_area_range.meta_key = 'pp-general-area' AND ppmeta_area_range.meta_value BETWEEN 50 AND 150)
      WHERE 1=1
        AND (((wp_posts.post_title LIKE '%fdsfsad%') OR (wp_posts.post_content LIKE '%fdsfsad%')))
        AND wp_posts.post_type = 'property'
        AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
      ORDER BY wp_posts.post_date DESC
      LIMIT 0, 10

    It's way too big. Could anybody please show me a way of optimizing all those joins into fewer statements? As you can see, they all use the same table but under different aliases. I'm not an SQL guru, but I think there should be a way, because this is insane ;) Thanks!
    Update: Here's what EXPLAIN returns: http://twitpic.com/1cd36p

  • Optimize GROUP BY & ORDER BY query

    - by Jan Hancic
    I have a web page where users upload & watch videos. Last week I asked what the best way to track video views was, so that I could display the most viewed videos this week (videos from all dates). Now I need some help optimizing the query with which I get the videos from the database. The relevant tables are these:

      video (~239,371 rows): VID (int), UID (int), title (varchar), status (enum), type (varchar), is_duplicate (enum), is_adult (enum), channel_id (tinyint)
      signup (~115,440 rows): UID (int), username (varchar)
      videos_views (~359,202 rows after 6 days of collecting data, so this table will grow rapidly): videos_id (int), views_date (date), num_of_views (int)

    The table video holds the videos, signup holds users, and videos_views holds data about video views (each video can have one row per day in that table). I have this query that does the trick, but it takes ~10 s to execute, and I imagine this will only get worse over time as the videos_views table grows in size:

      SELECT v.VID, v.title, v.vkey, v.duration, v.addtime, v.UID, v.viewnumber,
             v.com_num, v.rate, v.THB, s.username, SUM(vvt.num_of_views) AS tmp_num
      FROM video v
      LEFT JOIN videos_views vvt ON v.VID = vvt.videos_id
      LEFT JOIN signup s ON v.UID = s.UID
      WHERE v.status = 'Converted' AND v.type = 'public'
        AND v.is_duplicate = '0' AND v.is_adult = '0' AND v.channel_id <> 10
        AND vvt.views_date >= '2001-05-11'
      GROUP BY vvt.videos_id
      ORDER BY tmp_num DESC
      LIMIT 8

    And here is a screenshot of the EXPLAIN result. So, how can I optimize this?

  • Movie recommendation engine conceptual database design

    - by Supyxy
    I am working on a movie recommendation engine and I'm facing a DB design issue. My actual database looks like this:

      MOVIES [ID, TITLE]
      KEYWORDS_TABLE [ID, KEY_ID] - where ID is a foreign key to MOVIES.ID and KEY_ID is a key into a text keywords table

    This is not the entire DB, but I showed here what's important for my problem. I have about 50,000 movies and about 1.3 million keyword correlations, and basically my algorithm consists of extracting all the movies that share keywords with a given movie, then ordering them by the number of keyword correlations. For example, I looked for a movie similar to 'Cast Away' and it returned 'Six Days and Six Nights' because it had the most keyword correlations (4 keywords): Island, Airplane crash, Stranded, Pilot.
    The algorithm is based on more factors, but this one is the most important and the most difficult to handle. Basically what I do now is get all the movies that have at least one keyword in common with the given movie and then order them by other factors which are not important for the moment. There wouldn't be any problem if there weren't so many records: a query often takes up to 10-20 seconds, and some queries return even over 5,000 movies. Someone already helped me on here (thanks Mark Byers) with optimizing the query, but that's not enough because it still takes too long:

      SELECT DISTINCT M.title
      FROM keywords_table K1
      JOIN keywords_table K2 ON K2.key_id = K1.key_id
      JOIN movies M ON K2.id = M.id
      WHERE K1.id = 4

    So I thought it would be better if I pre-made those lists of movie recommendations for each movie, but I'm not sure how to design the tables. Is that a good idea, and how would you approach this?
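
    Precomputing those ranked lists offline is one way around the slow query. A minimal sketch of the counting-and-ranking step, assuming the keyword map fits in memory (all names here are illustrative; the real data lives in MySQL):

      import java.util.*;

      public class KeywordOverlap {
          // keywordsByMovie: movie ID -> set of keyword IDs, loaded from KEYWORDS_TABLE.
          static List<Map.Entry<Integer, Integer>> similarTo(
                  int movieId, Map<Integer, Set<Integer>> keywordsByMovie) {
              Set<Integer> base = keywordsByMovie.get(movieId);
              Map<Integer, Integer> overlap = new HashMap<Integer, Integer>();
              for (Map.Entry<Integer, Set<Integer>> e : keywordsByMovie.entrySet()) {
                  if (e.getKey().intValue() == movieId) continue; // skip the movie itself
                  int shared = 0;
                  for (Integer k : e.getValue())
                      if (base.contains(k)) shared++;             // count keyword correlations
                  if (shared > 0) overlap.put(e.getKey(), shared);
              }
              List<Map.Entry<Integer, Integer>> ranked =
                      new ArrayList<Map.Entry<Integer, Integer>>(overlap.entrySet());
              Collections.sort(ranked, new Comparator<Map.Entry<Integer, Integer>>() {
                  public int compare(Map.Entry<Integer, Integer> a, Map.Entry<Integer, Integer> b) {
                      return b.getValue() - a.getValue(); // most shared keywords first
                  }
              });
              return ranked;
          }
      }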

  • Testing performance of queries in MySQL

    - by Unreason
    I am trying to set up a script that would test the performance of queries on a development MySQL server. Here are more details:
    - I have root access
    - I am the only user accessing the server
    - I am mostly interested in InnoDB performance
    - The queries I am optimizing are mostly search queries (SELECT ... LIKE '%xy%')
    What I want to do is create a reliable testing environment for measuring the speed of a single query, free from dependencies on other variables. Until now I have been using SQL_NO_CACHE, but sometimes the results of such tests also show caching behaviour - taking much longer to execute on the first run and taking less time on subsequent runs. If someone can explain this behaviour in full detail I might stick to using SQL_NO_CACHE; I do believe that it might be due to the file system cache and/or caching of the indexes used to execute the query, as this post explains. It is not clear to me when the buffer pool and key buffer get invalidated or how they might interfere with testing. So, short of restarting the MySQL server, how would you recommend setting up an environment that is reliable in determining whether one query performs better than another?

  • Does a VCL OrgChart component with decent features exist? Is there a viable alternative?

    - by user193655
    I am using the DevExpress OrgChart component, which is still maintained but not developed since 2003 (fortunately bugs are fixed, but nothing more). Honestly this component, even if it is starting to look too old, still satisfies my requirements except for two things:
    1) It doesn't support the staff feature at all; to understand what I mean, see this image (where the items in staff are Administration, Communication, IT, Special Projects).
    2) It arranges the items without optimizing the space. For example, if there are 3 items at the top level and only the second item has 2 children, the top items are drawn more distantly because of the 2 children; there is no option for "shrinking" the diagram.
    Of course the component misses tons of the features one would expect from an OrgChart tool, but in my case those 2, and especially (1), are important; the rest is lack of eye candy. I am looking for VCL components, but if (as I fear, since I have never found one) such a component doesn't exist, I can see the following alternatives:
    i) using Hydra with .NET WinForms components
    ii) using ActiveX components
    iii) hiring a Delphi component developer who can customize the DevExpress component by adding feature (1) and maybe (2)
    Between the first two I would prefer ActiveX because of the .NET deployment hell (what I like about Delphi is that you ship the exe to the customer with Win2k and it works). Anyway, I have never used an ActiveX control, I don't know what the deployment issues are, and I fear I would lose the opportunity of upgrading the software by just replacing an exe. I am stuck.

  • Memory leak when changing Text field of a Scintilla object.

    - by PlaZmaZ
    I have a relatively large program that I'm optimizing for ASCII input files around 10-80 MB in size. The program reads every line of the file into a StringBuilder and then sets the Text field of the ScintillaNET object to the StringBuilder's contents. The StringBuilder is then set to null.

      private void ReloadFile(string sFile)
      {
          txt_log.ResetText();
          try
          {
              StringBuilder sLine = new StringBuilder("");
              using (StreamReader sr = new StreamReader(sFile))
              {
                  while (true)
                  {
                      string temp = sr.ReadLine();
                      if (temp == null)
                          break;
                      sLine.AppendLine(temp);
                  }
                  sr.Close();
              }
              txt_log.Text = sLine.ToString();
              sLine = null;
          }
          catch (Exception ex)
          {
              MessageBox.Show(this, "An error occurred opening this file.\n\n" + ex.Message,
                  "File Open Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
          }
          GC.Collect();
      }

    The program has an option to reload or open a file. This is irrelevant, as any assignment to txt_log.Text seems to hold on to the memory used for the previous .Text value. Commenting out the txt_log.Text line gives proper memory behavior. The GC.Collect() line seems pointless, and I have tried both with and without it. Is there something I'm missing here? I HIGHLY doubt it's a problem with the ScintillaNET component itself - rather something in this code.

  • Avoiding secondary selects or joins with Hibernate Criteria or HQL query

    - by Ben Benson
    I am having trouble optimizing Hibernate queries to avoid performing joins or secondary selects. When a Hibernate query is performed (Criteria or HQL), such as the following:

      return getSession()
          .createQuery("from GiftCard as card where card.recipientNotificationRequested=1")
          .list();

    ... and the where clause examines properties that do not require any joins with other tables, Hibernate still performs a full join with the other tables (or secondary selects, depending on how I set the fetch mode). The object in question (GiftCard) has a couple of ManyToOne associations that I would prefer to be lazily loaded in this case (but not necessarily in all cases). I want a solution where I can control what is lazily loaded when I perform the query. Here's what the GiftCard entity looks like:

      @Entity
      @Table(name = "giftCards")
      public class GiftCard implements Serializable
      {
          private static final long serialVersionUID = 1L;

          private String id_;
          private User buyer_;
          private boolean isRecipientNotificationRequested_;

          @Id
          public String getId() { return this.id_; }
          public void setId(String id) { this.id_ = id; }

          @ManyToOne
          @JoinColumn(name = "buyerUserId")
          @NotFound(action = NotFoundAction.IGNORE)
          public User getBuyer() { return this.buyer_; }
          public void setBuyer(User buyer) { this.buyer_ = buyer; }

          @Column(name = "isRecipientNotificationRequested", nullable = false, columnDefinition = "tinyint")
          public boolean isRecipientNotificationRequested() { return this.isRecipientNotificationRequested_; }
          public void setRecipientNotificationRequested(boolean isRecipientNotificationRequested) {
              this.isRecipientNotificationRequested_ = isRecipientNotificationRequested;
          }
      }
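
    For reference, a minimal sketch of making the association lazy at the mapping level with standard JPA annotations (@ManyToOne defaults to eager fetching, which is what forces the join); per-query control would then come from an HQL "join fetch" in the queries that do want the buyer loaded:

      import javax.persistence.FetchType;
      import javax.persistence.JoinColumn;
      import javax.persistence.ManyToOne;

      // Inside the GiftCard entity shown above: LAZY defers the SELECT on the
      // buyer's table until getBuyer() is actually called.
      @ManyToOne(fetch = FetchType.LAZY)
      @JoinColumn(name = "buyerUserId")
      public User getBuyer() { return this.buyer_; }

    Note that a lazy many-to-one relies on proxying, so the target entity class (User here) must not be final.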

  • MySQL FULLTEXT aggravation

    - by southof40
    Hi - I'm having problems with case sensitivity in MySQL FULLTEXT searches. I've just followed the FULLTEXT example in the MySQL docs at http://dev.mysql.com/doc/refman/5.1/en/fulltext-boolean.html. I'll post it here for ease of reference:

      CREATE TABLE articles (
          id INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
          title VARCHAR(200),
          body TEXT,
          FULLTEXT (title, body)
      );

      INSERT INTO articles (title, body) VALUES
          ('MySQL Tutorial', 'DBMS stands for DataBase ...'),
          ('How To Use MySQL Well', 'After you went through a ...'),
          ('Optimizing MySQL', 'In this tutorial we will show ...'),
          ('1001 MySQL Tricks', '1. Never run mysqld as root. 2. ...'),
          ('MySQL vs. YourSQL', 'In the following database comparison ...'),
          ('MySQL Security', 'When configured properly, MySQL ...');

      SELECT * FROM articles
      WHERE MATCH (title, body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

    ... my problem is that the example shows that SELECT returning the first and fifth rows ('..DataBase..' and '..database..'), but I only get one row ('database')! The example doesn't say what collation its table had, but I have ended up with latin1_general_cs on the title and body columns of my example table. My version of MySQL is 5.1.39-log and the connection collation is utf8_unicode_ci. I'd be really grateful if someone could suggest why my experience differs from the example in the manual! I'd be grateful for any advice.

  • When is someone else's code I use from the internet "mine"?

    - by robault
    I'm building a library from methods that I've found on the internet. Some are free to use or modify with no requirements; others say that if I leave a comment in the code it's okay to use; others say that when I use the code I have to attribute the use of someone's code in my application (in the credits for my app, I guess). What I've been doing is reorganizing classes, renaming methods, adding descriptions (code comments), renaming the parameters and names inside the methods to something meaningful, optimizing loops if applicable, changing return types, adding try/catch/throw blocks, adding parameter checks, and cleaning up resources in the methods.
    For example: I didn't come up with the algorithm for blurring a Bitmap, but I've taken the basic example of iterating through the pixels and turned it into a decent library method (applying the aforementioned modifications). I understand how to go about building it now myself, but I didn't actually hit the keystrokes to make it, and I couldn't have come up with it before learning from their example. What about code people get in answers on Stack Overflow or examples from CodeProject? At what point can I drop their requirements because at n% their code became mine? FWIW I intend on using the libraries to create products that I will sell.

  • How to increase the performance of a loop which runs every 'n' minutes

    - by GustlyWind
    Hi. To give a little background on my requirement and what I have accomplished so far: there are 18 scheduler tasks that run at regular intervals (the shortest interval being 30 mins). Each takes nearly 5,000 eligible employees as input, iterates over them in a static method, generates mail content for each employee, and mails it. An average task takes about 9 mins; multiplied by 18, that is roughly 162 mins, during which the next tasks will be waiting in a queue (I assume). So my plan centres on a loop like the one below:

      try {
          // Handle ArrayList of alert-eligible employees
          Iterator employee = empIDs.iterator();
          while (employee.hasNext()) {
              ScheduledImplementation.getInstance().sendAlertInfoToEmpForGivenAlertType(
                  (Long) employee.next(), configType, schedType);
          }
      } catch (Exception vEx) {
          _log.error("Exception Caught During sending " + configType + " messages:" + configType, vEx);
      }

    Since I know how many employees will come into my method, I will divide the while loop in two and perform simultaneous operations on two or three employees at a time, as in the sketch after this question. Is this possible? Or are there any other ways I can improve performance? Some of the things I have implemented so far:
    1. Wherever possible, made methods and variables static.
    2. Didn't bother to catch exceptions and send them back, because these are background tasks (and I assume this improves performance).
    3. Get the DB values in one query instead of multiple hits.
    If I am successful in optimizing the while loop, I think I can save a couple of minutes. Thanks
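
    A minimal sketch of that split, using a small fixed thread pool (this assumes sendAlertInfoToEmpForGivenAlertType is safe to call from several threads at once, which is worth verifying first; the configType/schedType parameters are assumed to be Strings):

      import java.util.List;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.TimeUnit;

      void sendAlerts(List<Long> empIDs, final String configType, final String schedType)
              throws InterruptedException {
          ExecutorService pool = Executors.newFixedThreadPool(3); // 3 employees at a time
          for (final Long empId : empIDs) {
              pool.submit(new Runnable() {
                  public void run() {
                      ScheduledImplementation.getInstance()
                          .sendAlertInfoToEmpForGivenAlertType(empId, configType, schedType);
                  }
              });
          }
          pool.shutdown();                              // accept no new work
          pool.awaitTermination(25, TimeUnit.MINUTES);  // finish before the next scheduled run
      }

    Since the work is mostly I/O-bound (database reads and mail sending), a few threads can overlap the waiting; the pool size is the knob to tune against what the mail server will tolerate.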

  • Compile Assembly Output generated by VC++?

    - by SDD
    I have a simple hello-world C program and compile it with /FA; as a consequence, the compiler also generates the corresponding assembly listing. Now I want to use MASM/LINK to assemble an executable from the generated .asm listing. The following command lines yield 3 linker errors:

      \masm32\bin\ml /I"C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include" /c /coff asm_test.asm
      \masm32\bin\link /SUBSYSTEM:CONSOLE /LIBPATH:"C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\lib" asm_test.obj

    indicating that the C runtime functions were not linked into the object file produced earlier:

      asm_test.obj : error LNK2001: unresolved external symbol @__security_check_cookie@4
      asm_test.obj : error LNK2001: unresolved external symbol _printf
      LINK : error LNK2001: unresolved external symbol _wmainCRTStartup
      asm_test.exe : fatal error LNK1120: 3 unresolved externals

    Here is the generated assembly listing:

  • Having trouble with confusing behaviour of error between debug and release modes in Xcode

    - by Cocorico
    Hi guys, I am confused over something (what is new!). I have an iPhone program I am writing, using some SQLite in a certain method, and there is some error which gives me a message that says "Program received signal: EXC_BAD_ACCESS".
    Okay, so I am trying to hunt down why this is happening, and I notice something: when I run the program in debug mode, it gives me this error every single time I access this method (I test on the device). However, when I run the program in release mode, I can access this method 2 times, and then it gives me this error the third time.
    So I mean, can someone just give me an explanation of what might cause this? I think that maybe, deep down, I am not that clear on the difference in Xcode between debug and release modes. I think that release mode does optimizing, and I guess the actual assembly machine code comes out different, yes? I AM A BIG NEWBIE USER UNFORTUNATELY! I am not clear on a lot of things like this, or whether I need to remove NSLog commands in the release build and such. Maybe I should just post the actual code in a separate Stack Overflow post and see if people can spot the error; maybe then this will all become clear to me.

  • What scenarios are possible where the VS C# compiler would not compile a reference of a reference?

    - by SuperKing
    Hello, I'm probably asking this question wrong (and that may be why Google isn't helping), but here goes:
    In Visual Studio I am compiling a C# project (let's call it Project A, the startup project) which has a reference to Project B. Project B has a reference to Project C, so when A gets built, the dll for B gets placed in the bin directory of A, as does the dll for C (because B requires C, and A requires B).
    However, I have apparently made some change recently so that the dll for Project C no longer goes into the bin directory of Project A when rebuilding the solution. I have no idea what I've done to make this happen. I have not modified the setup of the solution itself, and I have only added additional references to the project files. Code-wise, I have commented out most of the actual code in Project B that references classes in Project C, but did not remove the reference from the project itself (I don't think this matters). I was told that perhaps the C# compiler was optimizing somehow so that it was not building Project C, but really I'm out of ideas. I would think someone has run into something similar before. Any thoughts? Thanks!

  • Confused about UIView frame property

    - by slowfungus
    I'm building a prototype iPad app that draws diagrams. I have the following view hierarchy:

      UIView
        UIScrollView
          DiagramView : UIView
        TabBar
        NavigationBar

    and a UIViewController subclass holding all that together. Before drawing the diagram the first time, I calculate the dimensions of the diagram and set the DiagramView frame to that size, and the content size of the scroll view as well:

      - (void)recalculateBounds
      {
          [renderer diagram:diagram shouldDraw:NO];
          SQXRect diagramRect = SQXMakeRect(0.0, 0.0, [diagram bounds].size.width, [diagram bounds].size.height);
          self.frame = diagramRect;
          [(UIScrollView *)[self superview] setContentSize:diagramRect.size];
      }

    I should disclose that the frame is being set to about 1500 x 3500, which I know is ridiculous; I just want to focus on some other parts of the app before I get into optimizing the render code. This works beautifully, except that the rect being passed to drawRect: is not the size that I set, and my drawing is getting clipped at the bottom. It's close to the size I set, but wider and shorter. Also of note is the fact that if I force the frame to be much bigger than what I know the diagram needs, then the drawRect: rect is big enough and no clipping occurs. Of course this has me wondering if the frame size needs to take into account some other screen real estate like the toolbars, but my reading of the docs tells me the frame is in superview coordinates, which would be the scroll view, so I reckon I shouldn't need to worry about such things. Any idea what is causing this discrepancy?

  • Wrappers of primitive types in ArrayList vs arrays

    - by ismail marmoush
    Hi, in "Core Java 1" I've read:

      CAUTION: An ArrayList is far less efficient than an int[] array because each value is
      separately wrapped inside an object. You would only want to use this construct for small
      collections when programmer convenience is more important than efficiency.

    But in my software I've already used ArrayList instead of normal arrays due to some requirements. The software is supposed to have high performance, and after I read the quoted text I started to panic! One thing I can change is changing double variables to Double so as to prevent auto-boxing, though I don't know whether that is worth it or not, as in the next sample algorithm:

      public void multiply(final double val) {
          final int rows = getSize1();
          final int cols = getSize2();
          for (int i = 0; i < rows; i++) {
              for (int j = 0; j < cols; j++) {
                  this.get(i).set(j, this.get(i).get(j) * val);
              }
          }
      }

    My question is: does changing double to Double make a difference? Or is that a micro-optimization that won't affect anything? Keep in mind I might be using large matrices. Second, should I consider redesigning the whole program?
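
    For contrast, a minimal sketch of the primitive-array alternative the book is pointing at, assuming the matrix can be backed by a double[][] (the class and field names are illustrative):

      // Backing the matrix with double[][] keeps values unboxed: no per-element
      // Double object on the heap, and no boxing/unboxing in the hot loop.
      public class Matrix {
          private final double[][] data;

          public Matrix(int rows, int cols) {
              data = new double[rows][cols];
          }

          public void multiply(final double val) {
              for (int i = 0; i < data.length; i++) {
                  for (int j = 0; j < data[i].length; j++) {
                      data[i][j] *= val; // stays primitive end to end
                  }
              }
          }
      }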

  • Separation of business logic

    - by bruno
    Dear all,
    When I was optimizing the architecture of our applications on our website, I came across a problem that I don't know the best solution for. At the moment we have a small dll based on this structure:

      Database <-> DAL <-> BLL

    The DAL uses business objects to pass data to the BLL, which passes it on to the applications that use this dll. Only the BLL is public, so any application that includes this dll can see the BLL. In the beginning this was a good solution for our company. But as we add more and more applications onto that dll, the BLL keeps getting bigger, and now we don't want some applications to be able to see BLL logic that belongs to other applications. I don't know what the best solution is.
    The first thing I thought of was to move and separate the BLL into other dlls which I can include in my applications. But then the DAL must be public so the other dlls can get the data... and that seems like a good solution to me. My other solution is just to separate the BLL into different namespaces, and include only the namespaces you need in each application. But with this solution you can still get direct access to other BLLs if you want.
    So I'm asking for your opinions. Thx, Bruno

  • How can I optimize retrieving lowest edit distance from a large table in SQL?

    - by Matt
    Hey, I'm having trouble optimizing this Levenshtein distance calculation I'm doing. I need to do the following:
    - Get the record with the minimum distance for the source string as well as for a trimmed version of the source string
    - Pick the record with the minimum distance
    - If the min distances are equal (original vs trimmed), choose the trimmed one with the lowest distance
    - If there are still multiple records that fall under the above two categories, pick the one with the highest frequency

    Here's my working version:

      DECLARE @Results TABLE
      (
          ID int,
          [Name] nvarchar(200),
          Distance int,
          Frequency int,
          Trimmed bit
      )

      INSERT INTO @Results
      SELECT ID, [Name], (dbo.Levenshtein(@Source, [Name])) AS Distance, Frequency, 'False' AS Trimmed
      FROM MyTable

      INSERT INTO @Results
      SELECT ID, [Name], (dbo.Levenshtein(@SourceTrimmed, [Name])) AS Distance, Frequency, 'True' AS Trimmed
      FROM MyTable

      SET @ResultID = (SELECT TOP 1 ID FROM @Results ORDER BY Distance, Trimmed, Frequency)
      SET @Result = (SELECT TOP 1 [Name] FROM @Results ORDER BY Distance, Trimmed, Frequency)
      SET @ResultDist = (SELECT TOP 1 Distance FROM @Results ORDER BY Distance, Trimmed, Frequency)
      SET @ResultTrimmed = (SELECT TOP 1 Trimmed FROM @Results ORDER BY Distance, Trimmed, Frequency)

    I believe what I need to do here is:
    - Not dump the results into a temporary table
    - Do only 1 select from MyTable
    - Set the results right in the initial select statement (since SELECT can set multiple variables in one statement)

    I know there has to be a good implementation for this but I can't figure it out... this is as far as I got:

      SELECT TOP 1
          @ResultID = ID,
          @Result = [Name],
          (dbo.Levenshtein(@Source, [Name])) AS distOrig,
          (dbo.Levenshtein(@SourceTrimmed, [Name])) AS distTrimmed,
          Frequency
      FROM MyTable
      WHERE /* ... yeah I'm lost */
      ORDER BY distOrig, distTrimmed, Frequency

    Any ideas?
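
    For reference, a minimal sketch of the classic dynamic-programming form of the distance function being called twice per row (shown here in Java; the question's dbo.Levenshtein is a SQL UDF):

      // Classic O(m*n) Levenshtein distance with two rolling rows. Each call
      // scans both strings fully, which is why two calls per row over a large
      // table get expensive.
      static int levenshtein(String a, String b) {
          int[] prev = new int[b.length() + 1];
          int[] curr = new int[b.length() + 1];
          for (int j = 0; j <= b.length(); j++) prev[j] = j;
          for (int i = 1; i <= a.length(); i++) {
              curr[0] = i;
              for (int j = 1; j <= b.length(); j++) {
                  int cost = (a.charAt(i - 1) == b.charAt(j - 1)) ? 0 : 1;
                  curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1), prev[j - 1] + cost);
              }
              int[] tmp = prev; prev = curr; curr = tmp;
          }
          return prev[b.length()];
      }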

  • How to benchmark on multi-core processors

    - by Pascal Cuoq
    I am looking for ways to perform micro-benchmarks on multi-core processors.
    Context: At about the same time desktop processors introduced out-of-order execution, which made performance hard to predict, they, perhaps not coincidentally, also introduced special instructions for getting very precise timings. Examples of these instructions are rdtsc on x86 and mftb on PowerPC. These instructions gave timings that were more precise than a system call could ever allow, and let programmers micro-benchmark their hearts out, for better or for worse. On a yet more modern processor with several cores, some of which sleep some of the time, the counters are not synchronized between cores. We are told that rdtsc is no longer safe to use for benchmarking, but I must have been dozing off when the alternative solutions were explained.
    Question: Some systems may save and restore the performance counter and provide an API call to read the proper sum. If you know what this call is on any operating system, please let us know in an answer. Some systems may allow turning cores off, leaving only one running. I know Mac OS X Leopard does when the right preference pane is installed from the Developer Tools. Do you think that this makes rdtsc safe to use again?
    More context: Please assume I know what I am doing when trying to do a micro-benchmark. If you are of the opinion that an optimization whose gains cannot be measured by timing the whole application is not worth making, I agree with you, but I cannot time the whole application until the alternative data structure is finished, which will take a long time. In fact, if the micro-benchmark were not promising, I could decide to give up on the implementation now; I need figures to provide in a publication whose deadline I have no control over.
