Search Results

Search found 8219 results on 329 pages for 'less'.


  • What are all the optimization tricks you know for ASP.NET code?

    - by Aristos
    After some time spent programming in ASP.NET, I discovered the very big speed difference between string concatenation and StringBuilder. I know this is very common and well known, but I mention it as a starting point. The second thing I found to speed up code is to use const, rather than static, to declare my configuration constants (especially the strings). With const, the compiler does not create a new object but simply inlines the value at the point where you ask for it, whereas a static declaration creates an object and keeps it in memory.

    My third trick: when I search for strings, I use hash values rather than the strings themselves. For example, where I would need a List<string> SomeValues holding the strings I need to search, I prefer a List<int> SomeHashValues and use the hash values to locate the strings. My fourth thought was whether it is better to place a big string on one line, or to split it across several lines with the + symbol so it is easier to read. I ran some tests and saw that the compiler does a good job when the string is split across many lines using the + symbol.

    What other tricks/tips do you know and use in your programming to make code run faster, and maybe use less memory? I know that sometimes, to make something run faster, you need more memory, more cache. My priority is speed. Because speed counts.
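    Not from the original post, but a minimal C# sketch of two of the tricks described: StringBuilder for repeated appends, and a hash-based membership test instead of scanning a list of strings. All names are invented for illustration, and note that GetHashCode collisions mean a real lookup should still confirm string equality.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Text;

    static class OptimizationSketch
    {
        // const is inlined by the compiler; no field lookup happens at runtime.
        private const string CacheKeyPrefix = "cfg:";

        static string JoinFast(IEnumerable<string> parts)
        {
            // StringBuilder avoids allocating a new string for every concatenation.
            var sb = new StringBuilder();
            foreach (var part in parts)
                sb.Append(part);
            return sb.ToString();
        }

        static bool ContainsFast(HashSet<int> hashes, string candidate)
        {
            // O(1) set lookup on hash codes instead of scanning a List<string>.
            return hashes.Contains(candidate.GetHashCode());
        }

        static void Main()
        {
            Console.WriteLine(JoinFast(new[] { CacheKeyPrefix, "timeout", "=", "30" }));
            var hashes = new HashSet<int> { "alpha".GetHashCode(), "beta".GetHashCode() };
            Console.WriteLine(ContainsFast(hashes, "beta")); // True
        }
    }
    ```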

    Read the article

  • How can I force a ListView with a custom panel to re-measure when the ListView width goes below the previously measured width?

    - by Scott Whitlock
    Sorry for the long winded question (I'm including background here). If you just want the question, go to the end. I have a ListView with a custom Panel implementation that I'm using to implement something similar to a WrapPanel, but not quite. I'm overriding the MeasureOverride and ArrangeOverride methods in the custom panel. If I do the naive implementation of a WrapPanel in the MeasureOverride method it doesn't work when the ListView is resized. Let's say the custom panel does a measure and the constraint is a width of 100 and let's say I have 3 items that are 40 wide each. The naive approach is to return a size of 80,80 but when I resize the window that the ListView is in, down to say 75, it just turns on the horizontal scrollbar and never calls measure or arrange again (it does keep measuring and arranging if the width is greater than 80). To get around this, I hard coded the measurement to only have a width of the widest item. Then in the arrange, it gives me more space than I asked for and I use as much horizontal space as I can before wrapping. If I resize the window smaller than the smallest item in the ListView, then it turns on the scrollbar, which is great. Unfortunately this is causing a big problem when I have one of these ListViews with a custom panel nested inside of another one. The outside one works ok, but I can't get the inside one to "take as much as it needs". It always sizes to the smallest item, and the only way around it is to set the MinWidth to be something greater than zero. Anyway, stepping back for a second, I think the real way to fix this is to go back to the Naive implementation of the WrapPanel but force it to re-measure when the ListView width goes below the Size I previously returned as a measurement. That should solve my problem with the nested one. So, that's my question: I have a ListView with a custom panel If I return a measurement width on the panel and the ListView is resized to less than that width, it stops calling MeasureOverride How can I get it to continue calling MeasureOverride?
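    Not the asker's code, but a minimal sketch of the naive wrap measurement with the reported width clamped to the constraint. A common companion step (an assumption here, not something stated in the post) is setting ScrollViewer.HorizontalScrollBarVisibility="Disabled" on the ListView, so the items panel really is constrained to the ListView's width and keeps being re-measured as it shrinks instead of just getting a scrollbar.

    ```csharp
    using System;
    using System.Windows;
    using System.Windows.Controls;

    // Hypothetical panel name; the layout is a plain naive wrap.
    public class NaiveWrapPanel : Panel
    {
        protected override Size MeasureOverride(Size availableSize)
        {
            double x = 0, y = 0, rowHeight = 0, maxWidth = 0;
            foreach (UIElement child in InternalChildren)
            {
                child.Measure(availableSize);
                if (x > 0 && x + child.DesiredSize.Width > availableSize.Width)
                {
                    maxWidth = Math.Max(maxWidth, x);
                    x = 0;
                    y += rowHeight;
                    rowHeight = 0;
                }
                x += child.DesiredSize.Width;
                rowHeight = Math.Max(rowHeight, child.DesiredSize.Height);
            }
            maxWidth = Math.Max(maxWidth, x);

            // Never report more width than we were offered, so shrinking the ListView
            // keeps triggering measurement instead of simply turning on the scrollbar.
            double width = double.IsPositiveInfinity(availableSize.Width)
                ? maxWidth
                : Math.Min(maxWidth, availableSize.Width);
            return new Size(width, y + rowHeight);
        }

        protected override Size ArrangeOverride(Size finalSize)
        {
            double x = 0, y = 0, rowHeight = 0;
            foreach (UIElement child in InternalChildren)
            {
                if (x > 0 && x + child.DesiredSize.Width > finalSize.Width)
                {
                    x = 0;
                    y += rowHeight;
                    rowHeight = 0;
                }
                child.Arrange(new Rect(x, y, child.DesiredSize.Width, child.DesiredSize.Height));
                x += child.DesiredSize.Width;
                rowHeight = Math.Max(rowHeight, child.DesiredSize.Height);
            }
            return finalSize;
        }
    }
    ```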

    Read the article

  • Weird problem with string function

    - by wrongusername
    I'm having a weird problem with the following function, which returns a string with all the characters in it after a certain point: string after(int after, string word) { char temp[word.size() - after]; cout << word.size() - after << endl; //output here is as expected for(int a = 0; a < (word.size() - after); a++) { cout << word[a + after]; //and so is this temp[a] = word[a + after]; cout << temp[a]; //and this } cout << endl << temp << endl; //but output here does not always match what I want string returnString = temp; return returnString; } The thing is, when the returned string is 7 chars or less, it works just as expected. When the returned string is 8 chars or more, then it starts spewing nonsense at the end of the expected output. For example, the lines cout << after(1, "12345678") << endl; cout << after(1, "123456789") << endl; gives an output of: 7 22334455667788 2345678 2345678 8 2233445566778899 23456789?,?D~ 23456789?,?D~ What can I do to fix this error, and are there any default C++ functions that can do this for me?
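    The buffer in the snippet is never null-terminated (and a variable-length char array is not standard C++ in the first place), so printing or copying it runs past the end once the result gets longer; that is why short results happen to work. The standard-library answer is std::string::substr; a short sketch:

    ```cpp
    #include <iostream>
    #include <string>

    // std::string::substr already does exactly this job.
    std::string after(std::size_t pos, const std::string& word)
    {
        return word.substr(pos); // characters from index pos to the end
    }

    // If a manual copy is wanted, build a std::string instead of a raw char array,
    // so the length is tracked and no terminator is needed.
    std::string afterManual(std::size_t pos, const std::string& word)
    {
        std::string result;
        for (std::size_t i = pos; i < word.size(); ++i)
            result += word[i];
        return result;
    }

    int main()
    {
        std::cout << after(1, "12345678") << '\n';        // 2345678
        std::cout << afterManual(1, "123456789") << '\n'; // 23456789
    }
    ```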

    Read the article

  • Delphi Performance: Case Versus If

    - by Andreas Rejbrand
    I guess there might be some overlap with previous SO questions, but I could not find a Delphi-specific question on this topic. Suppose that you want to check if an unsigned 32-bit integer variable MyAction is equal to any of the constants ACTION1, ACTION2, ... ACTIONn, where n is, say, 1000. I guess that, besides being more elegant, case MyAction of ACTION1: {code}; ACTION2: {code}; ... ACTIONn: {code}; end; is much faster than if MyAction = ACTION1 then // code else if MyAction = ACTION2 then // code ... else if MyAction = ACTIONn then // code; I guess that the if variant takes time O(n) to complete (i.e. to find the right action) if the right action ACTIONi has a high value of i, whereas the case variant takes a lot less time (O(1)?). Am I correct that case is much faster? Am I correct that the time required to find the right action in the case statement is actually independent of n? I.e. is it true that it does not really take any longer to check a million cases than to check 10 cases? How, exactly, does this work?

    Read the article

  • Timeout error occurred trying to start MySQL Daemon. CentOS 5

    - by epema
    I ran into troubles with MySQL on my CentOS. I had some problems and backed up my database and removed mysql with all dependencies. After that I ran reinstalled: yum groupinstall "MySQL Database" Installed without errors. Running the mysql daemon: service mysqld start Timeout error occurred trying to start MySQL Daemon. Starting MySQL: [FAILED] I also ran # /usr/bin/mysql_install_db --user=mysql Installing MySQL system tables... 120112 1:49:44 [ERROR] Error message file '/usr/share/mysql/english/errmsg.sys' had only 480 error messages, but it should contain at least 481 error messages. Check that the above file is the right version for this program! 120112 1:49:44 [ERROR] Aborting Installation of system tables failed! Examine the logs in /var/lib/mysql for more information. You can try to start the mysqld daemon with: /usr/libexec/mysqld --skip-grant & and use the command line tool /usr/bin/mysql to connect to the mysql database and look at the grant tables: shell> /usr/bin/mysql -u root mysql mysql> show tables Try 'mysqld --help' if you have problems with paths. Using --log gives you a log in /var/lib/mysql that may be helpful. The latest information about MySQL is available on the web at http://www.mysql.com Please consult the MySQL manual section: 'Problems running mysql_install_db', and the manual section that describes problems on your OS. Another information source is the MySQL email archive. Please check all of the above before mailing us! And if you do mail us, you MUST use the /usr/bin/mysqlbug script! Checking the logs: less /var/log/mysqld.log Log file is empty. I don't even know how to debug it and not sure what to do. Any recommendations? Thank you

    Read the article

  • What Javascript graphing package will let me plot points against a user-selected coordinate system?

    - by wes
    My customer has some specific requirements for a graph to show in our web app. We use HighCharts elsewhere in the app for more traditional graphing, but it doesn't seem to work for this situation. Their requirements: Allow the user to select a background image, set the scale and origin of the coordinate system. We'll graph our points against the user-defined coordinates. Points can be color coded Mouse-over boxes show more detail about the points Support for zooming and panning, scaling the background appropriately Less importantly: Support for drawing vectors off the points Some of this seems basic, but looking around at different graph packages, I was unable to find any with an example of this kind of usage. I've entertained the thought of just hacking it together in canvas myself, but I've never worked with canvas before so I don't think it would be cost effective. The basics of plotting points with a scaled coordinate system against an image background wouldn't be too hard, but the mouse-over details, zooming and panning sound much more daunting to me. More info: Right now we use jQuery, HighCharts, and ExtJS for our app. We tried flot in the past but switched to HighCharts after flot didn't meet our needs.
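    No particular package is being endorsed here, but the coordinate mapping in the requirements is small enough to sketch. A hypothetical world-to-screen transform (all names invented) that any canvas- or SVG-based plot over a background image would build on:

    ```javascript
    // origin: pixel position of the user-defined origin on the background image
    // scale:  pixels per world unit
    function makeTransform(origin, scale) {
      return {
        toScreen: function (wx, wy) {
          // y axis is flipped: world y grows up, screen y grows down
          return { x: origin.x + wx * scale, y: origin.y - wy * scale };
        },
        toWorld: function (sx, sy) {
          return { x: (sx - origin.x) / scale, y: (origin.y - sy) / scale };
        }
      };
    }

    // Example: origin at pixel (400, 300), 10 pixels per world unit.
    var t = makeTransform({ x: 400, y: 300 }, 10);
    console.log(t.toScreen(5, 2));    // { x: 450, y: 280 }
    console.log(t.toWorld(450, 280)); // { x: 5, y: 2 }
    ```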

    Read the article

  • Windows FTP batch script to read and download from an external user list

    - by Will Sims
    I have several old, unused batch files that I'm redoing. I have a batch file for an old network architecture from several years ago; the main thing I'd like it to do now is read a list of files. I'll explain the setup. The server updates a complete list [CurrentMediaStores.txt] twice a day. The laptops can be set to download this list through their start.bat, which also runs add-ins and updates I apply to my PCs, to give my batches and myself a break from slavish folder assignments and to add a little more dynamics and less admin work. The bats now call on a list the user makes by simply copying a line from the CMS.txt file and pasting it into their [Grab_List.txt].

    My problem is that right now I have the branch commented out (::), along with the code that detects whether the LAN is connected or not and switches to an FTP connection. I'd like the FTP batch to call/use the Grab_List as well, but I just don't know how to pass it in and run a FOR loop within an FTP session to loop through however many files are in the user's request list. Any help would be greatly appreciated.
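    A rough sketch (placeholder host, credentials and paths) of one way to drive the stock Windows ftp.exe from Grab_List.txt: a FOR /F loop writes one get line per requested file into a temporary command script, which is then fed to ftp with -n (no auto-login) and -s (take commands from the script).

    ```bat
    @echo off
    rem Build an ftp command script from the user's request list (one file name per line).
    set "CMDFILE=%TEMP%\ftpcmds.txt"
    > "%CMDFILE%" echo open ftp.example.local
    >>"%CMDFILE%" echo user mediauser mediapass
    >>"%CMDFILE%" echo binary
    >>"%CMDFILE%" echo lcd "C:\Media\Downloads"

    rem One get line per entry in Grab_List.txt
    for /f "usebackq delims=" %%F in ("C:\Users\%USERNAME%\Grab_List.txt") do (
        >>"%CMDFILE%" echo get "%%F"
    )

    >>"%CMDFILE%" echo bye
    ftp -n -i -s:"%CMDFILE%"
    del "%CMDFILE%"
    ```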

    Read the article

  • Spring 3.0 vs J2EE 6.0

    - by StudiousJoseph
    Hi everybody, I'm confronted with a situation... I've been asked to give an advise regarding which approach to take, in terms of J2EE development between Spring 3.0 and J2EE 6.0. I was, and still am, a promoter of Spring 2.5 over classic J2EE 5 development, specially with JBoss, I even migrated old apps to Spring and influenced the re-definition of the development policy here to include Spring specific APIs, and helped the development of a strategic plan to foster more lightweight solutions like Spring + Tomcat, instead of the heavier ones of JBoss, right now, we're using JBoss merely as a Web container, having what i call the "container inside the container paradox", that is, having Spring apps, with most of its APIs, running inside JBoss, So we're in the process of migrating to tomcat. However, with the coming of J2EE 6.0 many features, that made Spring attractive at that time, easy deployment, less-coupling, even some sort of D.I, etc, seems to have been mimicked, in one way or the other. JSF 2.0, JPA 2.0, WebBeans, WebProfiles, etc. So, the question goes... From your point of view, how save, and logical, it is to continue to invest in a non-standard J2EE development framework like Spring given the new perspectives offered by J2EE 6.0? Can we talk about maybe 3 or 4 more years of Spring development, or do you recommend early adoption of J2EE 6.0 APIs and it's practices? I'll appreciate any insights with this...

    Read the article

  • Load SQL query result data into cache in advance

    - by Marc
    I have the following situation: .net 3.5 WinForm client app accessing SQL Server 2008 Some queries returning relatively big amount of data are used quite often by a form Users are using local SQL Express and restarting their machines at least daily Other users are working remotely over slow network connections The problem is that after a restart, the first time users open this form the queries are extremely slow and take more or less 15s on a fast machine to execute. Afterwards the same queries take only 3s. Of course this comes from the fact that no data is cached and must be loaded from disk first. My question: Would it be possible to force the loading of the required data in advance into SQL Server cache? Note My first idea was to execute the queries in a background worker when the application starts, so that when the user starts the form the queries will already be cached and execute fast directly. I however don't want to load the result of the queries over to the client as some users are working remotely or have otherwise slow networks. So I thought just executing the queries from a stored procedure and putting the results into temporary tables so that nothing would be returned. Turned out that some of the result sets are using dynamic columns so I couldn't create the corresponding temp tables and thus this isn't a solution. Do you happen to have any other idea?

    Read the article

  • C++0x rvalue references and temporaries

    - by Doug
    (I asked a variation of this question on comp.std.c++ but didn't get an answer.) Why does the call to f(arg) in this code call the const ref overload of f? void f(const std::string &); //less efficient void f(std::string &&); //more efficient void g(const char * arg) { f(arg); } My intuition says that the f(string &&) overload should be chosen, because arg needs to be converted to a temporary no matter what, and the temporary matches the rvalue reference better than the lvalue reference. This is not what happens in GCC and MSVC. In at least G++ and MSVC, any lvalue does not bind to an rvalue reference argument, even if there is an intermediate temporary created. Indeed, if the const ref overload isn't present, the compilers diagnose an error. However, writing f(arg + 0) or f(std::string(arg)) does choose the rvalue reference overload as you would expect. From my reading of the C++0x standard, it seems like the implicit conversion of a const char * to a string should be considered when considering if f(string &&) is viable, just as when passing a const lvalue ref arguments. Section 13.3 (overload resolution) doesn't differentiate between rvalue refs and const references in too many places. Also, it seems that the rule that prevents lvalues from binding to rvalue references (13.3.3.1.4/3) shouldn't apply if there's an intermediate temporary - after all, it's perfectly safe to move from the temporary. Is this: Me misreading/misunderstand the standard, where the implemented behavior is the intended behavior, and there's some good reason why my example should behave the way it does? A mistake that the compiler vendors have somehow all made? Or a mistake based on common implementation strategies? Or a mistake in e.g. GCC (where this lvalue/rvalue reference binding rule was first implemented), that was copied by other vendors? A defect in the standard, or an unintended consequence, or something that should be clarified?
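    A self-contained version of the example for experimenting. The comments reflect the behaviour discussed in the post and what compilers implementing the final C++11 rules do, so it is worth re-running on a current compiler rather than trusting them outright.

    ```cpp
    #include <iostream>
    #include <string>

    void f(const std::string&) { std::cout << "f(const std::string&)\n"; } // "less efficient"
    void f(std::string&&)      { std::cout << "f(std::string&&)\n"; }      // "more efficient"

    void g(const char* arg)
    {
        f(arg);              // the disputed case: the GCC/MSVC builds discussed here chose the
                             // const& overload; compilers following the final C++11 rules pick
                             // the && overload, since the converted temporary is an rvalue
        f(std::string(arg)); // explicit temporary: picks f(std::string&&) either way
    }

    int main()
    {
        g("hello");
        return 0;
    }
    ```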

    Read the article

  • Prevent two users from editing the same data

    - by Industrial
    Hi everyone, I have seen a feature in different web applications, including Wordpress (not sure?), that warns a user if he/she opens an article/post/page/whatever from the database while someone else is editing the same data simultaneously. I would like to implement the same feature in my own application and I have given this a bit of thought. Is the following example a good way to do this? It goes a little something like this:

    1) User A enters the editing page for the mysterious article X. The database table Events is queried to make sure that no one else is editing the same page at the moment, which no one is by then. A token is then randomly generated and inserted into the Events table.

    2) User B also wants to make updates to article X. Since User A is already editing the article, the Events table is queried and looks like this:

    | timestamp  | owner  | Origin    | token      |
    | 1273226321 | User A | article-x | uniqueid## |

    3) The timestamp is checked. If it is valid and less than, say, 100 seconds old, a message appears and the user cannot make any changes to the requested article X: Warning: User A is currently working with this article. In the meantime, editing cannot be done. Please do something else with your life.

    4) If User A decides to go on and save his changes, the token is posted along with all other data to update the database, and triggers a query to delete the row with token uniqueid##. If he decides to do something else instead of committing his changes, article X will still be available for editing by User B in 100 seconds.

    Let me know what you think about this approach! Wish everyone a great weekend!
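    A hedged SQL sketch of the lock check itself, reusing the table and columns from the example and the 100-second window proposed above (MySQL-style UNIX_TIMESTAMP() assumed):

    ```sql
    -- Is article-x currently locked by someone else? (timeout: 100 seconds)
    SELECT owner, token
    FROM   Events
    WHERE  Origin = 'article-x'
      AND  timestamp > UNIX_TIMESTAMP() - 100;

    -- If no row came back, acquire the lock for the current user.
    INSERT INTO Events (timestamp, owner, Origin, token)
    VALUES (UNIX_TIMESTAMP(), 'User B', 'article-x', 'uniqueid42');

    -- On save (or explicit cancel), release it.
    DELETE FROM Events WHERE token = 'uniqueid42';
    ```

    The check-then-insert pair is racy as written; a unique key on Origin, with the check and insert done in one statement or one transaction, closes that gap.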

    Read the article

  • Oracle Date Format Convert Hour-Minute to Interval and Disregard Year-Month-Day

    - by dlite922
    I need to compare an event's half-way midpoint between a start and stop time of day. Right now i'm converting the dates you see on the right, to HH:MM and the comparison works until midnight. the query says: WHERE half BETWEEN pStart and pStop. As you can see below, pStart and pStap have January 1st 2000 dates, this is because the year month day are not important to me... Valid Data: +-------+--------+-------+---------------------+---------------------+---------------------+ | half | pStart | pStop | half2 | pStart2 | pStop2 | +-------+--------+-------+---------------------+---------------------+---------------------+ | 19:00 | 19:00 | 23:00 | 2012-11-04 19:00:00 | 2000-01-01 19:00:00 | 2000-01-01 23:00:00 | | 20:00 | 19:00 | 23:00 | 2012-11-04 20:00:00 | 2000-01-01 19:00:00 | 2000-01-01 23:00:00 | | 21:00 | 19:00 | 23:00 | 2012-11-04 21:00:00 | 2000-01-01 19:00:00 | 2000-01-01 23:00:00 | | 23:00 | 20:00 | 23:00 | 2012-11-05 23:00:00 | 2000-01-01 20:00:00 | 2000-01-01 23:00:00 | +-------+--------+-------+---------------------+---------------------+---------------------+ Now observe what happens when pStop is midnight or later... Valid Data that breaks it: +-------+--------+-------+---------------------+---------------------+---------------------+ | half | pStart | pStop | half2 | pStart2 | pStop2 | +-------+--------+-------+---------------------+---------------------+---------------------+ | 23:00 | 22:00 | 00:00 | 2012-11-04 23:00:00 | 2000-01-01 22:00:00 | 2000-01-01 00:00:00 | | 23:30 | 23:00 | 02:00 | 2012-11-05 23:30:00 | 2000-01-01 23:00:00 | 2000-01-01 02:00:00 | +-------+--------+-------+---------------------+---------------------+---------------------+ Thus my where clause translates to: WHERE 19:00 BETWEEN 22:00 AND 00:00 ...which returns false and I miss those two correct rows above. Question: Is there a way to show those dates as integer interval so that saying half BETWEEN pStart and pStop are correct? I thought about adding 24 when pStop is less than pStart to make 00:00 into 24:00 but don't know an easy way to do that without long string concatenations and number conversions. This would solve the problem because pStart pStop difference will never be longer than 6 hours. Note: (The Query is much more complex. It has other irrelevant date calculations, but the result are show above. DATE_FORMAT(%H:%i) is applied to the first three columns and no formatting to the last three) Thanks for your help:
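    One way to stay in plain HH:MM and still cope with intervals that cross midnight is to branch on whether the interval wraps (pStop sorting before pStart). A sketch of the WHERE clause using the same column names; it works whether the operands are zero-padded HH:MM strings or TIME values:

    ```sql
    WHERE
      (pStart <= pStop AND half BETWEEN pStart AND pStop)      -- normal interval, e.g. 19:00-23:00
      OR
      (pStart >  pStop AND (half >= pStart OR half <= pStop))  -- wraps past midnight, e.g. 22:00-02:00
    ```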

    Read the article

  • Filtering on a left join in SQLalchemy

    - by Adam Ernst
    Using SQLalchemy I want to perform a left outer join and filter out rows that DO have a match in the joined table. I'm sending push notifications, so I have a Notification table. This means I also have a ExpiredDeviceId table to store device_ids that are no longer valid. (I don't want to just delete the affected notifications as the user might later re-install the app, at which point the notifications should resume according to Apple's docs.) CREATE TABLE Notification (device_id TEXT, time DATETIME); CREATE TABLE ExpiredDeviceId (device_id TEXT PRIMARY KEY, expiration_time DATETIME); Note: there may be multiple Notifications per device_id. There is no "Device" table for each device. So when doing SELECT FROM Notification I should filter accordingly. I can do it in SQL: SELECT * FROM Notification LEFT OUTER JOIN ExpiredDeviceId ON Notification.device_id = ExpiredDeviceId.device_id WHERE expiration_time == NULL But how can I do it in SQLalchemy? sess.query( Notification, ExpiredDeviceId ).outerjoin( (ExpiredDeviceId, Notification.device_id == ExpiredDeviceId.device_id) ).filter( ??? ) Alternately I could do this with a device_id NOT IN (SELECT device_id FROM ExpiredDeviceId) clause, but that seems way less efficient.
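    The missing filter is just an IS NULL test on the outer-joined table (the raw SQL version needs IS NULL as well; = NULL never matches). A sketch in the same style, querying only Notification since the expired side is NULL by definition for the rows we want:

    ```python
    results = (
        sess.query(Notification)
        .outerjoin(ExpiredDeviceId,
                   Notification.device_id == ExpiredDeviceId.device_id)
        .filter(ExpiredDeviceId.device_id == None)  # no matching expired row; .is_(None) also works
        .all()
    )
    ```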

    Read the article

  • Computing "average" of two colors

    - by Francisco P.
    This is only marginally programming related - has much more to do w/ colors and their representation. I am working on a very low level app. I have an array of bytes in memory. Those are characters. They were rendered with anti-aliasing: they have values from 0 to 255, 0 being fully transparent and 255 totally opaque (alpha, if you wish). I am having trouble conceiving an algorithm for the rendering of this font. I'm doing the following for each pixel: // intensity is the weight I talked about: 0 to 255 intensity = glyphs[text[i]][x + GLYPH_WIDTH*y]; if (intensity == 255) continue; // Don't draw it, fully transparent else if (intensity == 0) setPixel(x + xi, y + yi, color, base); // Fully opaque, can draw original color else { // Here's the tricky part // Get the pixel in the destination for averaging purposes pixel = getPixel(x + xi, y + yi, base); // transfer is an int for calculations transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.red + (float) pixel.red)/2); // This is my attempt at averaging newPixel.red = (Byte) transfer; transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.green + (float) pixel.green)/2); newPixel.green = (Byte) transfer; // transfer = (int) ((float) ((float) 255.0 - (float) intensity)/255.0 * (((float) color.blue) + (float) pixel.blue)/2); transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.blue + (float) pixel.blue)/2); newPixel.blue = (Byte) transfer; // Set the newpixel in the desired mem. position setPixel(x+xi, y+yi, newPixel, base); } The results, as you can see, are less than desirable. That is a very zoomed in image, at 1:1 scale it looks like the text has a green "aura". Any idea for how to properly compute this would be greatly appreciated. Thanks for your time!
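    The green fringe is what the division by 2 (a plain average) produces; standard anti-aliased text rendering instead weights the glyph colour by its coverage and the background by the remainder. A small integer-only C sketch, using the prose's convention that 0 is transparent and 255 is opaque (note the posted code's comments treat the values the other way round):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint8_t red, green, blue; } Pixel;

    /* "Source over" blend of the glyph colour onto the background pixel.
       alpha: 0 = fully transparent (keep the background), 255 = fully opaque (pure glyph colour). */
    static Pixel blend(Pixel glyph, Pixel background, uint8_t alpha)
    {
        Pixel out;
        out.red   = (uint8_t)((glyph.red   * alpha + background.red   * (255 - alpha) + 127) / 255);
        out.green = (uint8_t)((glyph.green * alpha + background.green * (255 - alpha) + 127) / 255);
        out.blue  = (uint8_t)((glyph.blue  * alpha + background.blue  * (255 - alpha) + 127) / 255);
        return out;
    }

    int main(void)
    {
        Pixel white = { 255, 255, 255 }, black = { 0, 0, 0 };
        Pixel half  = blend(black, white, 128);                 /* 50%-covered black glyph on white */
        printf("%d %d %d\n", half.red, half.green, half.blue);  /* roughly 127 127 127 */
        return 0;
    }
    ```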

    Read the article

  • How to display only one letter in Flex Text Layout Framework ContainerController?

    - by rattkin
    I'm trying to implement dropped initials feature into my Flex application. Since Text Layout Framework does not support floating, the only known solution is to create additional containers that will be linked together, displaying the same textflow. Width and positioning of these containers has to be set in such a way that it will pretend that it's float. I'm using the same solution for dropped initials. Basically, I'm creating three containers, one for the initial letter (first letter in text flow), the other for text floating around, and the 3rd one to display text below these two. All these containers share one textflow. I have big issues with forcing controller to display only one letter from the text flow, and size it accordingly, so that it wont take any unnecessary aditional space and won't get any more text into it. Using ContainerController.getContentBounds() returns me size of whole sprite of the letter (with ascent/descent empty parts), not the height/width of the actual rendered letter. I'm using textFlow.flowComposer.getLineAt(0).getTextLine().getAtomBounds(0), but i think it's still not right. Also, even if I set container for this dimensions, it sometimes display additional text in it, especially for bigger fonts. See screen : Also, if I set width to just 1px less that contentBounds, things are going crazy, containers are moved around, positioned with big margins, etc. How should I solve this? Is it a bug in TLF / Player? Can I fix it somehow? Can I detect the size of the letter, or make containercontroller autosize to fit just one letter only?

    Read the article

  • Should Java try blocks be scoped as tightly as possible?

    - by isme
    I've been told that there is some overhead in using the Java try-catch mechanism. So, while it is necessary to put methods that throw checked exception within a try block to handle the possible exception, it is good practice performance-wise to limit the size of the try block to contain only those operations that could throw exceptions. I'm not so sure that this is a sensible conclusion. Consider the two implementations below of a function that processes a specified text file. Even if it is true that the first one incurs some unnecessary overhead, I find it much easier to follow. It is less clear where exactly the exceptions come from just from looking at statements, but the comments clearly show which statements are responsible. The second one is much longer and complicated than the first. In particular, the nice line-reading idiom of the first has to be mangled to fit the readLine call into a try block. What is the best practice for handling exceptions in a funcion where multiple exceptions could be thrown in its definition? This one contains all the processing code within the try block: void processFile(File f) { try { // construction of FileReader can throw FileNotFoundException BufferedReader in = new BufferedReader(new FileReader(f)); // call of readLine can throw IOException String line; while ((line = in.readLine()) != null) { process(line); } } catch (FileNotFoundException ex) { handle(ex); } catch (IOException ex) { handle(ex); } } This one contains only the methods that throw exceptions within try blocks: void processFile(File f) { FileReader reader; try { reader = new FileReader(f); } catch (FileNotFoundException ex) { handle(ex); return; } BufferedReader in = new BufferedReader(reader); String line; while (true) { try { line = in.readLine(); } catch (IOException ex) { handle(ex); break; } if (line == null) { break; } process(line); } }
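    If Java 7's try-with-resources is available, there is a third option that keeps the guarded region tight without dismantling the read loop; a sketch with process and handle stubbed out (FileNotFoundException is a subclass of IOException, so one catch covers both):

    ```java
    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.IOException;

    class FileProcessor {
        void processFile(File f) {
            // The reader is closed automatically, even when an exception is thrown.
            try (BufferedReader in = new BufferedReader(new FileReader(f))) {
                String line;
                while ((line = in.readLine()) != null) {
                    process(line);
                }
            } catch (IOException ex) {
                handle(ex);
            }
        }

        void process(String line) { /* ... */ }
        void handle(IOException ex) { ex.printStackTrace(); }
    }
    ```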

    Read the article

  • Strategy for WCF server with .Net clients and Android clients?

    - by D.H.
    I am using WCF to write a server that should be able to communicate with .Net clients, Android clients and possibly other types of clients. The main type of client is a desktop application that will be written in .Net. This client will usually be on the same intranet as the server. It will make an initial call to the server to get the current state of the system and will then receive updates from the server whenever a value changes. These updates are frequent, perhaps once a second. The Android clients will connect over the Internet. This client is also interested in updates, but it is not as critical as for the desktop client so a (less frequent) polling scenario might be acceptable. All clients will have to login to use the services, and when connecting over the Internet the connection should be secure. I am familiar with WCF but I am not sure what bindings are most appropriate for the scenario and what security solution to use. Also, I have not used Android, but I would like to make it as simple as possible for the person implementing the Android client to consume my services. So, what is my strategy?

    Read the article

  • Why does a nested class have to be static in order to be instantiated in the enclosing class's constructor?

    - by Alex
    So, the question is more or less as I wrote. I understand that it's probably not clear at all so I'll give an example. I have class Tree and in it there is the class Node, and the empty constructor of Tree is written: public class RBTree { private RBNode head; public RBTree(RBNode head,RBTree leftT,RBTree rightT){ this.head=head; this.head.leftT.head.father = head; this.head.rightT.head.father = head; } public RBTree(RBNode head){ this(head,new RBTree(),new RBTree()); } public RBTree(){ this(new RBNode(),null,null); } public class RBNode{ private int value; private boolean isBlack; private RBNode father; private RBTree leftT; private RBTree rightT; } } Eclipse gives me the error: "No enclosing instance of type RBTree is available due to some intermediate constructor invocation" for the "new RBTree()" in the empty constructor. However, if I change the RBNode to be a static class, there is no problem. So why is it working when the class is static. BTW, I found an easy solution for the cunstructor: public RBTree(){ this.head = new RBNode(); } So, I have no idea what is the problem in the first piece of code.
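    A stripped-down illustration (hypothetical Outer/Inner/Nested names) of the rule behind the Eclipse message: a non-static nested class always needs an enclosing instance, and inside a this(...)/super(...) argument list the enclosing instance does not exist yet, so only a static nested class can be constructed there.

    ```java
    public class Outer {
        class Inner { }          // non-static: every Inner needs an enclosing Outer instance
        static class Nested { }  // static: no enclosing instance required

        Outer(Object payload) { }

        Outer() {
            // this(new Inner());  // would not compile: "new Inner()" really means
            //                     // "this.new Inner()", and 'this' cannot be referenced
            //                     // in a this(...)/super(...) argument list
            this(new Nested());    // fine: a static nested class needs no 'this'
        }

        public static void main(String[] args) {
            Outer o = new Outer();
            Inner inner  = o.new Inner(); // an inner class is created through an instance
            Nested plain = new Nested();  // a static nested class stands on its own
        }
    }
    ```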

    Read the article

  • Prototypal inheritance should save memory, right?

    - by Techpriester
    Hi Folks, I've been wondering: Using prototypes in JavaScript should be more memory efficient than attaching every member of an object directly to it for the following reasons: The prototype is just one single object. The instances hold only references to their prototype. Versus: Every instance holds a copy of all the members and methods that are defined by the constructor. I started a little experiment with this: var TestObjectFat = function() { this.number = 42; this.text = randomString(1000); } var TestObjectThin = function() { this.number = 42; } TestObjectThin.prototype.text = randomString(1000); randomString(x) just produces a, well, random String of length x. I then instantiated the objects in large quantities like this: var arr = new Array(); for (var i = 0; i < 1000; i++) { arr.push(new TestObjectFat()); // or new TestObjectThin() } ... and checked the memory usage of the browser process (Google Chrome). I know, that's not very exact... However, in both cases the memory usage went up significantly as expected (about 30MB for TestObjectFat), but the prototype variant used not much less memory (about 26MB for TestObjectThin). I also checked: The TestObjectThin instances contain the same string in their "text" property, so they are really using the property of the prototype. Now, I'm not so sure what to think about this. The prototyping doesn't seem to be the big memory saver at all. I know that prototyping is a great idea for many other reasons, but I'm specifically concerned with memory usage here. Any explanations why the prototype variant uses almost the same amount of memory? Am I missing something?

    Read the article

  • Upload using python script takes very long on one laptop as compared to another

    - by Engr Am
    I have a python 2.7 code which uses STORBINARY function for uploading files to an ftp server and RETRBINARY for downloading from this server. However, the issue is the upload is taking a very long time on three laptops from different brands as compared to a Dell laptop. The strange part is when I manually upload any file, it takes the same time on all the systems. The manual upload rate and upload rate with the python script is the same on the Dell Laptop. However, on every other brand of laptop (I have tried with IBM, Toshiba, Fujitsu-Siemens) the python script has a very low upload rate than the manual attempt. Also, on all these other laptops, the upload rate using the python script is the same (1Mbit/s) while the manual upload rate is approx. 8 Mbit/s. I have tried to vary the filesize for the upload to no avail. TCP Optimizer improved the download rate on all the systems but had no effect on the upload rate. Download rate using this script on all the systems is fine and same as the manual download rate. I have checked the server and it has more than 90% free space. The network connection is the same for all the laptops, and I try uploading only with one laptop at a time. All the laptops have almost the same system configurations, same operating system and approximately the same free drive space. If anything the Dell laptop is a little less in terms of processing power and RAM than 2 of the others, but I suppose this has no effect as I have checked many times to see how much was the CPU usage and network usage during these uploads and downloads, and I am sure that no other virus or program has been eating up my bandwidth. Here is the code ('ftp' and 'file_path' are inputs to the function): path,filename=os.path.split(file_path) filesize=os.path.getsize(file_path) deffilesize=(filesize/1024)/1024 f = open(file_path, "rb") upstart = time.clock() print ftp.storbinary("STOR "+filename, f) upende = time.clock()-upstart outname="Upload " f.close() return upende, deffilesize, outname
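    One knob worth ruling out is ftplib's transfer block size: storbinary defaults to 8 KB blocks, and trying a larger value is a cheap experiment when upload speed differs so much between machines. A sketch of the same upload with a bigger blocksize and a simple timer; host and credentials are placeholders.

    ```python
    import os
    import time
    from ftplib import FTP

    def upload(ftp, file_path, blocksize=256 * 1024):
        """Upload file_path with STOR and return (seconds, size in MiB)."""
        path, filename = os.path.split(file_path)
        size_mib = os.path.getsize(file_path) / 1024.0 / 1024.0
        with open(file_path, "rb") as f:
            start = time.time()
            ftp.storbinary("STOR " + filename, f, blocksize)
            elapsed = time.time() - start
        return elapsed, size_mib

    ftp = FTP("ftp.example.com")    # placeholder host
    ftp.login("user", "password")   # placeholder credentials
    seconds, size_mib = upload(ftp, "test.bin")
    print("%.1f MiB uploaded in %.1f s" % (size_mib, seconds))
    ftp.quit()
    ```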

    Read the article

  • How to get access to private members of a nested class?

    - by macias
    Background: I have enclosed (parent) class E with nested class N with several instances of N in E. In the enclosed (parent) class I am doing some calculations and I am setting the values for each instance of nested class. Something like this: n1.field1 = ...; n1.field2 = ...; n1.field3 = ...; n2.field1 = ...; ... It is one big eval method (in parent class). My intention is -- since all calculations are in parent class (they cannot be done per nested instance because it would make code more complicated) -- make the setters only available to parent class and getters public. And now there is a problem: when I make the setters private, parent class cannot acces them when I make them public, everybody can change the values and C# does not have friend concept I cannot pass values in constructor because lazy evaluation mechanism is used (so the instances have to be created when referencing them -- I create all objects and the calculation is triggered on demand) I am stuck -- how to do this (limit access up to parent class, no more, no less)? I suspect I'll get answer-question first -- "but why you don't split the evaluation per each field" -- so I answer this by example: how do you calculate min and max value of a collection? In a fast way? The answer is -- in one pass. This is why I have one eval function which does calculations and sets all fields at once.
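    C# has no friend, but since a nested class can see the parent's private types (while, as noted, the parent cannot reach the nested class's private members), one common workaround is a private interface declared in the enclosing class and implemented explicitly on the nested class: only code inside the parent can name that interface, so only the parent can call the setters. A hedged sketch with invented names:

    ```csharp
    using System;

    public class Enclosing
    {
        // Only code inside Enclosing can refer to this interface type.
        private interface INodeSetter
        {
            int Field1 { set; }
        }

        public class Nested : INodeSetter
        {
            public int Field1 { get; private set; }

            // Explicit implementation: callable only through an INodeSetter reference.
            int INodeSetter.Field1 { set { Field1 = value; } }
        }

        private readonly Nested n1 = new Nested();

        public Nested Node { get { return n1; } }

        public void Eval()
        {
            ((INodeSetter)n1).Field1 = 42;  // legal here; impossible outside Enclosing
        }
    }

    class Demo
    {
        static void Main()
        {
            var e = new Enclosing();
            e.Eval();
            Console.WriteLine(e.Node.Field1);  // 42
            // ((Enclosing.INodeSetter)e.Node).Field1 = 7;  // does not compile: interface is private
        }
    }
    ```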

    Read the article

  • Predicate crashing iPhone App!

    - by DVG
    To preface, this is a follow up to an inquiry made a few days ago: http://stackoverflow.com/questions/2981803/iphone-app-crashes-when-merging-managed-object-contexts Short Version: EXC_BAD_ACCESS is crashing my app, and zombie-mode revealed the culprit to be my predicate embedded within the fetch request embedded in my Fetched Results Controller. How does an object within an object get released without an explicit command to do so? Long Version: Application Structure Platforms View Controller - Games View Controller (Predicated upon platform selection) - Add Game View Controller When a row gets clicked on the Platforms view, it sets an instance variable in Games View for that platform, then the Games Fetched Results Controller builds a fetch request in the normal way: - (NSFetchedResultsController *)fetchedResultsController{ if (fetchedResultsController != nil) { return fetchedResultsController; } //build the fetch request for Games NSFetchRequest *request = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Game" inManagedObjectContext:context]; [request setEntity:entity]; //predicate NSPredicate *predicate = [NSPredicate predicateWithFormat:@"platform == %@", selectedPlatform]; [request setPredicate:predicate]; //sort based on name NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [request setSortDescriptors:sortDescriptors]; //fetch and build fetched results controller NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:request managedObjectContext:context sectionNameKeyPath:nil cacheName:@"Root"]; aFetchedResultsController.delegate = self; self.fetchedResultsController = aFetchedResultsController; [sortDescriptor release]; [sortDescriptors release]; [predicate release]; [request release]; [aFetchedResultsController release]; return fetchedResultsController; } At the end of this method, the fetchedResultsController's _fetch_request - _predicate member is set to an NSComparisonPredicate object. All is well in the world. By the time - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section gets called, the _predicate is now a Zombie, which will eventually crash the application when the table attempts to update itself. I'm more or less flummoxed. I'm not releasing the fetched results controller or any of it's parts, and the only part getting dealloc'd is the predicate. Any ideas?

    Read the article

  • Trouble with custom WPF Panel-derived class

    - by chaiguy
    I'm trying to write a custom Panel class for WPF, by overriding MeasureOverride and ArrangeOverride but, while it's mostly working I'm experiencing one strange problem I can't explain. In particular, after I call Arrange on my child items in ArrangeOverride after figuring out what their sizes should be, they aren't sizing to the size I give to them, and appear to be sizing to the size passed to their Measure method inside MeasureOverride. Am I missing something in how this system is supposed to work? My understanding is that calling Measure simply causes the child to evaluate its DesiredSize based on the supplied availableSize, and shouldn't affect its actual final size. Here is my full code (the Panel, btw, is intended to arrange children in the most space-efficient manner, giving less space to rows that don't need it and splitting remaining space up evenly among the rest--it currently only supports vertical orientation but I plan on adding horizontal once I get it working properly): protected override Size MeasureOverride( Size availableSize ) { foreach ( UIElement child in Children ) child.Measure( availableSize ); return availableSize; } protected override System.Windows.Size ArrangeOverride( System.Windows.Size finalSize ) { double extraSpace = 0.0; var sortedChildren = Children.Cast<UIElement>().OrderBy<UIElement, double>( new Func<UIElement, double>( delegate( UIElement child ) { return child.DesiredSize.Height; } ) ); double remainingSpace = finalSize.Height; double normalSpace = 0.0; int remainingChildren = Children.Count; foreach ( UIElement child in sortedChildren ) { normalSpace = remainingSpace / remainingChildren; if ( child.DesiredSize.Height < normalSpace ) // if == there would be no point continuing as there would be no remaining space remainingSpace -= child.DesiredSize.Height; else { remainingSpace = 0; break; } remainingChildren--; } extraSpace = remainingSpace / Children.Count; double offset = 0.0; foreach ( UIElement child in Children ) { //child.Measure( new Size( finalSize.Width, normalSpace ) ); double value = Math.Min( child.DesiredSize.Height, normalSpace ) + extraSpace; child.Arrange( new Rect( 0, offset, finalSize.Width, value ) ); offset += value; } return finalSize; }

    Read the article

  • What can cause my code to run slower when the server JIT is activated?

    - by durandai
    I am doing some optimizations on an MPEG decoder. To ensure my optimizations aren't breaking anything I have a test suite that benchmarks the entire codebase (both optimized and original) as well as verifying that they both produce identical results (basically just feeding a couple of different streams through the decoder and crc32 the outputs). When using the "-server" option with the Sun 1.6.0_18, the test suite runs about 12% slower on the optimized version after warmup (in comparison to the default "-client" setting), while the original codebase gains a good boost running about twice as fast as in client mode. While at first this seemed to be simply a warmup issue to me, I added a loop to repeat the entire test suite multiple times. Then execution times become constant for each pass starting at the 3rd iteration of the test, still the optimized version stays 12% slower than in the client mode. I am also pretty sure its not a garbage collection issue, since the code involves absolutely no object allocations after startup. The code consists mainly of some bit manipulation operations (stream decoding) and lots of basic floating math (generating PCM audio). The only JDK classes involved are ByteArrayInputStream (feeds the stream to the test and excluding disk IO from the tests) and CRC32 (to verify the result). I also observed the same behaviour with Sun JDK 1.7.0_b98 (only that ist 15% instead of 12% there). Oh, and the tests were all done on the same machine (single core) with no other applications running (WinXP). While there is some inevitable variation on the measured execution times (using System.nanoTime btw), the variation between different test runs with the same settings never exceeded 2%, usually less than 1% (after warmup), so I conclude the effect is real and not purely induced by the measuring mechanism/machine. Are there any known coding patterns that perform worse on the server JIT? Failing that, what options are available to "peek" under the hood and observe what the JIT is doing there?

    Read the article

  • Testing ActionFilterAttributes with MSpec

    - by Tomas Lycken
    I'm currently trying to grasp MSpec, mainly to learn new ways of (T/B)DD to be able to make an educated decision on which technology to use. Previously, I've mostly (read: only) used the built-in MSTest framework with Moq, so BDD is quite new for me. I'm writing an ASP.NET MVC app, and I want to implement PRG. Last time I did this, I used action filters to export and import ModelState via TempData, so that I could return a RedirectResult and the validation errors would still be there when the user got the view. I tested that scenario by verifying two things: a) That the ExportModelStateAttribute I had written was applied (among tests for my controller) b) That the attribute worked (among tests for action filter attributes) However, in BDD I've understood I should be even more concerned with behavior, and even less with implementation. This means I should probably just verify that the model state is in tempdata when the action has finished executing - not necessarily that it's done via an attribute. To further complicate things, attributes are not run when calling the action directly in the test, so I can't just call the action and see if the job's been done. How should I spec/test this in MSpec?

    Read the article
