Search Results

Search found 13692 results on 548 pages for 'bad practices'.

Page 136/548

  • Check if web form values have changed. Best practice

    - by Andrew Florko
    Hello everybody, I have a multi-step form and the user can navigate to any page to modify or add information. There is a menu that shows progress and the steps the user has completed; it allows navigation to any step the user has completed or is about to complete. Despite the big "Save and Continue" button, some users click this menu to navigate further. I have to check whether values in the form have changed and ask: "Save changes? Yes/No". What is the best way (with minimum code) to check if form values have changed? Thank you in advance!
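
    A low-code way to do this, whatever the UI stack, is to snapshot the step's values when it is rendered and compare that snapshot against the current values before navigating away. Below is a minimal sketch of that idea in Java; the class, method and field names are hypothetical and not part of the asker's form.

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Objects;

        // Minimal "dirty check" sketch: snapshot the values when the step is shown,
        // then compare against the values the user is about to leave behind.
        public class FormChangeTracker {
            private final Map<String, String> snapshot = new HashMap<>();

            // Call when the step is rendered.
            public void capture(Map<String, String> currentValues) {
                snapshot.clear();
                snapshot.putAll(currentValues);
            }

            // Call before navigating away; true means "ask: Save changes? Yes/No".
            public boolean hasChanges(Map<String, String> currentValues) {
                return !Objects.equals(snapshot, currentValues);
            }

            public static void main(String[] args) {
                FormChangeTracker tracker = new FormChangeTracker();
                Map<String, String> values = new HashMap<>();
                values.put("firstName", "Andrew");   // hypothetical field
                tracker.capture(values);

                values.put("firstName", "Andy");     // user edits the field
                System.out.println(tracker.hasChanges(values)); // prints true
            }
        }

    In a real web form the same comparison is usually wired to a dirty flag updated on change events, but the point is that one snapshot-and-compare routine replaces per-field checks.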

    Read the article

  • JAR files, don't they just bloat and slow Java down?

    - by Josamoto
    Okay, the question might seem dumb, but I'm asking it anyway. After struggling for hours to get a Spring + BlazeDS project up and running, I discovered that I was having problems with my project as a result of not including the right dependencies for Spring etc. There were .jars missing from my WEB-INF/lib folder; yes, silly me. After a while I managed to get all the .jar files where they belong, and they come to a whopping 12.5MB, and there are more than 30 of them! This concerns me, though it probably and hopefully shouldn't. How does Java operate in terms of these JAR files? They take up quite a bit of hard drive space, considering they hold compressed, compiled code, so it seems they could fill a lot of RAM very quickly. My questions are:

    Does Java load an entire .jar file into memory when, say, a class in that .jar is instantiated? What about the stuff in the .jar that never gets used? Do .jars get cached somehow, for optimized application performance? When a single .jar is loaded, I understand that it sits in memory and is available across multiple HTTP requests (i.e. for the lifetime of the running server instance), unlike PHP where objects are created on the fly with each request; is this assumption correct?

    When using Spring, I'm thinking: I had to include all those fiddly .jars; wouldn't I be better off just using plain Java, with at least an ORM solution like Hibernate? So far, Spring has just taken extra time to configure, extra hard drive space, extra memory and CPU, so I'm concerned that the framework will cost too much application performance just to get, for example, IoC implemented with my BlazeDS server. An ORM, a unit testing framework and bits and pieces here and there are still to come. It's just so easy to bloat up a project quickly and irresponsibly. Where do I draw the line?
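
    On the first question: class files inside a JAR are normally loaded lazily, one class at a time, the first time they are referenced; the JAR is read as an indexed archive rather than mapped wholesale into the heap, and classes that are never used never cost RAM. A small sketch that makes the lazy initialization visible (the Heavy class is made up for the demo):

        // Demonstrates lazy class initialization: the static initializer of Heavy
        // only runs when the class is first used, not when the application starts.
        public class LazyLoadingDemo {

            static class Heavy {
                static {
                    System.out.println("Heavy was just initialized by the class loader");
                }
                static void doWork() {
                    System.out.println("Heavy.doWork()");
                }
            }

            public static void main(String[] args) {
                System.out.println("Application started; Heavy not initialized yet");
                Heavy.doWork(); // first use: triggers loading/initialization and the static block
            }
        }

    What does stay resident for the life of the server is the metadata of classes that have actually been loaded, which is why a Java web application keeps its framework classes in memory across requests, unlike PHP's per-request model.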

    Read the article

  • best-practice on for loop's condition

    - by guest
    What is considered best practice in this case?

        for (i = 0; i < array.length(); ++i)

    or

        for (i = array.length(); i > 0; --i)

    Assume I don't care about the iteration direction, but simply want to iterate over the whole length of the array. Also, I don't plan to alter the array's size in the loop body. So, will array.length() be treated as a constant during compilation? If not, then the second approach would be the one to go for.
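
    Whether array.length() gets hoisted out of the condition depends on the language, the compiler and whether the call can be proven side-effect free, so the portable answer is to cache the length yourself once. A small Java sketch of the forward loop with the size read a single time (a List is used here purely for illustration):

        import java.util.Arrays;
        import java.util.List;

        public class LoopLengthDemo {
            public static void main(String[] args) {
                List<String> items = Arrays.asList("a", "b", "c");

                // Cache the length once so the loop condition is a plain comparison,
                // regardless of whether the compiler would have hoisted the call.
                for (int i = 0, n = items.size(); i < n; i++) {
                    System.out.println(items.get(i));
                }
            }
        }

    This keeps the natural forward order and removes the question of whether the call is constant-folded, which is usually clearer than reversing the loop purely for performance.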

    Read the article

  • MapReduce programming system in java-actionscript

    - by eco_bach
    Just finished reading chapter 23 in the excellent 'Beautiful Code' (http://oreilly.com/catalog/9780596510046) on Distributed Programming with MapReduce. I understand that MapReduce is a programming system designed for large-scale data processing problems, but I have a hard time getting my head around the basic examples given and how I might apply them in real-world situations. Can someone give a simple example of MapReduce implemented using either Java, JavaScript or ActionScript?
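
    Setting Hadoop aside, the programming model itself fits in a few lines of plain Java: a map step that emits key/value pairs, a shuffle that groups values by key, and a reduce step that folds each group. Below is a minimal in-memory word-count sketch (no framework; requires Java 9+ for Map.entry and List.of):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class WordCountMapReduce {

            // Map: turn one input record (a line) into (word, 1) pairs.
            static List<Map.Entry<String, Integer>> map(String line) {
                List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
                for (String word : line.toLowerCase().split("\\W+")) {
                    if (!word.isEmpty()) {
                        pairs.add(Map.entry(word, 1));
                    }
                }
                return pairs;
            }

            // Reduce: fold all values emitted for one key into a single result.
            static int reduce(String word, List<Integer> counts) {
                return counts.stream().mapToInt(Integer::intValue).sum();
            }

            public static void main(String[] args) {
                List<String> input = List.of("the quick brown fox", "the lazy dog", "the fox");

                // Shuffle phase: group every emitted value by its key.
                Map<String, List<Integer>> grouped = new HashMap<>();
                for (String line : input) {
                    for (Map.Entry<String, Integer> pair : map(line)) {
                        grouped.computeIfAbsent(pair.getKey(), k -> new ArrayList<>())
                               .add(pair.getValue());
                    }
                }

                // Reduce phase.
                grouped.forEach((word, counts) ->
                        System.out.println(word + " -> " + reduce(word, counts)));
            }
        }

    In a real MapReduce system the map calls run in parallel across many machines and the shuffle is a distributed sort, but the contract the programmer writes is exactly these two functions.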

    Read the article

  • Creating an interface and swappable implementations in python

    - by Blankman
    Hi, would it be possible to create a class interface in Python and various implementations of that interface? Example: I want to create a class for POP3 access (and all its methods etc.). If I go with a commercial component, I want to wrap it to adhere to a contract. In the future, if I want to use another component or code my own, I want to be able to swap things out and not have things very tightly coupled. Is this possible? I'm new to Python.

    Read the article

  • debug=true in web.config = BAD thing?

    - by MateloT
    We're seeing lots of virtual memory fragmentation and out-of-memory errors, and then the process hits the 3GB limit. Compilation debug is set to true in web.config, but I get different answers from everyone I ask: does setting debug to true cause each .aspx page to compile into random areas of RAM, fragmenting that RAM and eventually causing the out-of-memory problems?

    Read the article

  • Preprocessor #define vs. function pointer - best practice?

    - by Dustin
    I recently started a small personal project (an RGB-to-BGR value conversion program) in C, and I realised that a function that converts from RGB to BGR performs not only the conversion but also the inversion. Obviously that means I don't really need two functions rgb2bgr and bgr2rgb. However, does it matter whether I use a function pointer instead of a macro? For example:

        int rgb2bgr (const int rgb);

        /*
         * Should I do this because it allows the compiler to issue
         * appropriate error messages using the proper function name,
         * not to mention possible debugging benefits?
         */
        int (*bgr2rgb) (const int bgr) = rgb2bgr;

        /*
         * Or should I do this since it is merely a convenience
         * and they're really the same function anyway?
         */
        #define bgr2rgb(bgr) (rgb2bgr (bgr))

    I'm not necessarily looking for a change in execution efficiency; this is more a subjective question out of curiosity. I am well aware that type safety is neither lost nor gained with either method. Would the function pointer merely be a convenience, or are there more practical benefits to be gained of which I am unaware?
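
    For what it's worth, the conversion is its own inverse, which is the asker's point. Below is a sketch of the same idea transposed to Java, where the closest analogue of the function-pointer alias is a method reference bound to a second name (there is no macro option); this illustrates the aliasing question rather than giving C-specific advice, and the class name is invented.

        import java.util.function.IntUnaryOperator;

        public class ColorSwap {

            // Swap the red and blue bytes of a packed 0xRRGGBB value.
            // The operation is its own inverse, which is why one function covers both directions.
            static int rgb2bgr(int rgb) {
                int red   = (rgb >> 16) & 0xFF;
                int green = (rgb >> 8)  & 0xFF;
                int blue  =  rgb        & 0xFF;
                return (blue << 16) | (green << 8) | red;
            }

            // A second, more readable name for the same function, bound via a method reference.
            static final IntUnaryOperator bgr2rgb = ColorSwap::rgb2bgr;

            public static void main(String[] args) {
                int rgb = 0x112233;
                int bgr = rgb2bgr(rgb);
                System.out.printf("rgb=%06X bgr=%06X back=%06X%n",
                        rgb, bgr, bgr2rgb.applyAsInt(bgr));
            }
        }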

    Read the article

  • How to differentiate between exceptions I can show the user, and ones I can't?

    - by Ian Boyd
    I have some business logic that traps logically invalid situations, e.g. trying to reverse a transaction that was already reversed. In these cases the correct action is to inform the user:

        Transaction already reversed
        Cannot reverse a reversing transaction
        You do not have permission to reverse transactions
        This transaction is on a session that has already been closed
        This transaction is too old to be reversed

    The question is, how do I communicate these exceptional cases back to the calling code so it can show them to the user? Do I create a separate exception for each case:

        catch (ETransactionAlreadyReversedException)
            MessageBox.Show('Transaction already reversed')
        catch (EReversingAReversingTransactionException)
            MessageBox.Show('Cannot reverse a reversing transaction')
        catch (ENoPermissionToReverseTransactionException)
            MessageBox.Show('You do not have permission to reverse transactions')
        catch (ECannotReverseTransactionOnAlreadyClosedSessionException)
            MessageBox.Show('This transaction is on a session that has already been closed')
        catch (ECannotReverseTooOldTransactionException)
            MessageBox.Show('This transaction is too old to be reversed')

    The downside is that when there's a new logical case to show the user:

        Transactions created by NSL cannot be reversed

    I don't simply show the user a message; instead it leaks out as an unhandled exception, when really it should be handled with another MessageBox. The alternative is to create a single exception class, EReverseTransactionException, with the understanding that any exception of this type is a logical check that should be handled with a message box:

        catch (EReverseTransactionException)

    But it's still understood that any other exceptions, ones that involve, for example, a memory ECC parity error, continue unhandled. In other words, I don't convert all errors that can be thrown by the ReverseTransaction() method into EReverseTransactionException, only the ones that are logically invalid because of the user.
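
    One common middle ground is a single base type for failures whose message is already user-facing; specific cases can still subclass it for callers that care which rule was broken, but the catch site only needs the base type, so new rules don't leak out unhandled. A sketch of that shape in Java (the original snippets look like Delphi, so this only mirrors the structure, with invented class names):

        // Base type for failures whose message is meant for the end user.
        class UserVisibleException extends RuntimeException {
            UserVisibleException(String userMessage) {
                super(userMessage);
            }
        }

        // Specific cases can still exist for callers that care which rule was broken.
        class TransactionAlreadyReversedException extends UserVisibleException {
            TransactionAlreadyReversedException() {
                super("Transaction already reversed");
            }
        }

        public class ReversalDemo {

            static void reverseTransaction(boolean alreadyReversed) {
                if (alreadyReversed) {
                    throw new TransactionAlreadyReversedException();
                }
                // ... perform the reversal ...
            }

            public static void main(String[] args) {
                try {
                    reverseTransaction(true);
                } catch (UserVisibleException e) {
                    // One catch block covers every user-facing rule, present and future.
                    System.out.println("Show to user: " + e.getMessage());
                }
                // Anything that is not a UserVisibleException keeps propagating unhandled.
            }
        }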

    Read the article

  • Using typedefs (or #defines) on built-in types - any sensible reason?

    - by jb
    Well, I'm doing some Java - C integration, and throughout the C library weird type mappings are used (there are more of them):

        #define CHAR  char   /* 8 bit signed int */
        #define SHORT short  /* 16 bit signed int */
        #define INT   int    /* "natural" length signed int */
        #define LONG  long   /* 32 bit signed int */

        typedef unsigned char  BYTE;   /* 8 bit unsigned int */
        typedef unsigned char  UCHAR;  /* 8 bit unsigned int */
        typedef unsigned short USHORT; /* 16 bit unsigned int */
        typedef unsigned int   UINT;   /* "natural" length unsigned int */

    Is there any legitimate reason not to use the built-in types directly? It's not like char is going to be redefined anytime soon. I can think of: writing platform/compiler-portable code (the size of a type is underspecified in C/C++), and saving space and time on embedded systems - if you loop over an array shorter than 255 on an 8-bit microprocessor, writing

        for (uint8_t ii = 0; ii < len; ii++)

    will give a measurable speedup.

    Read the article

  • Access of private field of another object in copy constructors - Really a problem?

    - by DR
    In my Java application I have some copy constructors like this:

        public MyClass(MyClass src) {
            this.field1 = src.field1;
            this.field2 = src.field2;
            this.field3 = src.field3;
            ...
        }

    Now NetBeans 6.9 warns about this, and I wonder what is wrong with this code. My concerns: using the getters might introduce unwanted side effects, and the new object might no longer be considered a copy of the original. If using the getters is recommended, wouldn't it be more consistent to use setters for the new instance as well?
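
    For comparison, here is a small Java sketch of both shapes with placeholder fields: direct field access is legal inside the same class, while reading through the accessors picks up any validation or lazy computation they may gain later. This is just the trade-off spelled out, not the official answer to the NetBeans hint.

        public class MyClass {
            private String field1;
            private int field2;

            public MyClass(String field1, int field2) {
                this.field1 = field1;
                this.field2 = field2;
            }

            // Direct field access: fine inside the same class, but skips any logic
            // the getters might add later (defensive copies, lazy initialization, ...).
            public MyClass(MyClass src) {
                this.field1 = src.field1;
                this.field2 = src.field2;
            }

            // Accessor-based copy: values are read through the getters,
            // so any logic they gain later is honored.
            public static MyClass copyOf(MyClass src) {
                return new MyClass(src.getField1(), src.getField2());
            }

            public String getField1() { return field1; }
            public int getField2() { return field2; }
        }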

    Read the article

  • Why do I need to give my options a value attribute in my dropdown? JQuery.

    - by Alex
    So far in my web development experience, I've noticed that almost all web developers/designers choose to give their options in a select a value, like so:

        <select name="foo">
            <option value="bar">BarCheese</option>
            <!-- etc. -->
        </select>

    Is this because it is best practice to do so? I ask because I have done a lot of work with jQuery and dropdowns lately, and sometimes I get really annoyed when I have to check something like:

        $('select[name=foo]').val() == "bar"

    To me, that often seems less clear than just being able to check the val() against BarCheese. So why do most web developers/designers specify a value attribute instead of just letting the option's actual text be its value?

    Read the article

  • Do you ever make a code change and just test rather than trying to fully understand the change you've made?

    - by Clay Nichols
    I'm working in a 12-year-old code base on which I have been the only developer. There are times that I'll make a very small change based on an intuition (or a quantum leap in logic ;-). Usually I try to deconstruct that change and make sure I read the code thoroughly. Sometimes, though (more and more these days), I just test and make sure the change had the effect I wanted. (I'm a pretty thorough tester and would test even if I read the code.) This works for me, and we have surprisingly few bugs escape into the wild compared to most software I see. But what I'm wondering is whether this is just the "art" side of coding. Yes, in an ideal world you would exhaustively read every bit of code that your change modified, but in practice, if you're confident that it only affects a small section of code, is this a common practice? I can obviously see where this would be a disastrous approach in the hands of a poor programmer. But then, I've seen programmers who ostensibly are reading the code and still break stuff left and right (in their own code base which only they have been working on).

    Read the article

  • Finding databases for use in applications

    - by JonF
    Does anyone have recommendations on how I can find databases for random things that I might want to use in my application? For example, a database of zip code locations, area code cities, car engines, IP address locations, or whatever. I'm just asking generally: when you decide you need a bunch of data, where are some good places to start looking other than Google?

    Read the article

  • What is the reason not to use select * ?

    - by Chris Lively
    I've seen a number of people claim that you should specifically name each column you want in your select query. Assuming I'm going to use all of the columns anyway, why would I not use SELECT *? Even considering the question from 9/24, I don't think this is an exact duplicate as I'm approaching the issue from a slightly different perspective. One of our principles is to not optimize before it's time. With that in mind, it seems like using SELECT * should be the preferred method until it is proven to be a resource issue or the schema is pretty much set in stone. Which, as we know, won't occur until development is completely done. That said, is there an overriding issue to not use SELECT *?
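
    One concrete, non-performance argument for explicit column lists is that they keep calling code stable when the schema changes: positional reads and blanket mappings only shift underneath the code because of the *. A small JDBC sketch of the contrast, assuming the H2 in-memory driver is on the classpath (table and column names are invented):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class SelectListDemo {
            public static void main(String[] args) throws SQLException {
                try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo");
                     Statement stmt = conn.createStatement()) {

                    stmt.execute("CREATE TABLE users (id INT, name VARCHAR(50))");
                    stmt.execute("INSERT INTO users VALUES (1, 'Chris')");

                    // Explicit list: adding a column to the table later changes nothing here.
                    try (ResultSet rs = stmt.executeQuery("SELECT id, name FROM users")) {
                        while (rs.next()) {
                            System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                        }
                    }

                    // SELECT *: the result shape now tracks the table, so positional reads
                    // (getObject(2), ORM column mapping, INSERT ... SELECT *) can shift
                    // underneath the code when the schema evolves.
                    try (ResultSet rs = stmt.executeQuery("SELECT * FROM users")) {
                        while (rs.next()) {
                            System.out.println(rs.getObject(1) + " " + rs.getObject(2));
                        }
                    }
                }
            }
        }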

    Read the article

  • Why is 'virtual' optional for overridden methods in derived classes?

    - by squelart
    When a method is declared as virtual in a class, its overrides in derived classes are automatically considered virtual as well, and the C++ language makes the virtual keyword optional in this case:

        class Base {
            virtual void f();
        };

        class Derived : public Base {
            void f(); // 'virtual' is optional but implied.
        };

    My question is: what is the rationale for making virtual optional? I know that it is not absolutely necessary for the compiler to be told, but I would think that developers would benefit if such a constraint were enforced by the compiler. E.g., sometimes when I read others' code I wonder if a method is virtual and I have to track down its superclasses to determine that. And some coding standards (Google's) make it a must to put the virtual keyword in all subclasses.
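
    For comparison, this is the gap that explicit override markers close elsewhere: C++11 later added the override specifier for exactly this reason, and Java's @Override annotation plays the same role. A short Java sketch of the compiler-checked version of the constraint the asker wants:

        class Base {
            void f() {
                System.out.println("Base.f");
            }
        }

        class Derived extends Base {
            @Override   // compiler-checked: remove f() from Base and this fails to compile
            void f() {
                System.out.println("Derived.f");
            }

            // @Override on a method that overrides nothing (e.g. a typo like "void ff()")
            // is rejected at compile time, which is the guarantee being asked for.
        }

        public class OverrideDemo {
            public static void main(String[] args) {
                Base b = new Derived();
                b.f();   // prints Derived.f -- dynamic dispatch, no 'virtual' keyword in Java
            }
        }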

    Read the article

  • C++ Headers/Source Files

    - by incrediman
    (Duplicate of C++ Code in Header Files) What is the standard way to split up C++ classes between header and source files? Am I supposed to put everything in the header file? Or should I declare the classes in the header file and define them in a .cpp file (source file)? Sorry if I'm shaky on the terminology here (declare, define, etc). So what's the standard?

    Read the article

  • Simplest way to create a wrapper class around some strings for a WPF DataGrid?

    - by Joel
    I'm building a simple hex editor in C#, and I've decided to use each cell in a DataGrid to display a byte.* I know that the DataGrid will take a list and display each object in the list as a row, and each of that object's properties as columns. I want to display rows of 16 bytes each, which will require a wrapper with 16 string properties. While doable, it's not the most elegant solution. Is there an easier way? I've already tried creating a wrapper around a public string array of size 16, but that doesn't seem to work. Thanks.

    *The rationale for this is that I can have spaces between each byte without having to strip them all out when I want to save my edited file. Also, it seems like it'll be easier to label the rows and columns.

    Read the article

  • highlight text parts with jQuery, best practice

    - by helle
    Hey guys, I have a list of items containing names. Then I have an event listener which checks for a keypress event. If the user types, e.g., an A, all names starting with an A should be shown with the A in bold, so all starting As should be bold. What is the best way, using jQuery, to highlight only a part of a string? Thanks for your help.
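
    Whatever the jQuery plumbing looks like, the core step is splitting each name into the matched prefix and the remainder and wrapping the prefix in a bold tag before writing it back into the element. A minimal sketch of that string step in Java (the method and sample names are made up; the DOM update would then be a single .html(...) call per list item):

        public class HighlightPrefix {

            // Wrap the typed prefix in <b> tags if the name starts with it (case-insensitive).
            static String highlight(String name, String typed) {
                if (typed.isEmpty()
                        || !name.regionMatches(true, 0, typed, 0, typed.length())) {
                    return name; // no match: leave the name untouched
                }
                return "<b>" + name.substring(0, typed.length()) + "</b>"
                        + name.substring(typed.length());
            }

            public static void main(String[] args) {
                String[] names = {"Anna", "Andrew", "Bob"};
                for (String name : names) {
                    System.out.println(highlight(name, "A"));
                }
                // Prints: <b>A</b>nna, <b>A</b>ndrew, Bob
            }
        }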

    Read the article

  • How to track IIS server performance

    - by Chris Brandsma
    I have a recurring issue where a customer calls up and complains that the web site is too slow. Specifically, if they are inactive for a short period of time and then go back to the site, there is a one- to two-minute delay before the user sees a response (the standard browser is Firefox in this case). I have Perfmon up and running, the CPU utilization is usually below 20% (single proc... don't ask), the database is humming along, and I'm pulling my hair out. So, what metrics/tools do you find useful when evaluating IIS performance?

    Read the article

  • ASP.NET MVC Filters: How to set Viewdata for Dropdown based on action parameter

    - by CRice
    Hi, I'm loading an entity 'Member' from its id in the route data:

        [ListItemsForMembershipType(true)]
        public ActionResult Edit(Member someMember) { ... }

    The attribute on the action loads the membership type list items for a dropdown box and sticks them in ViewData. This is fine for add forms and search forms (it gets all active items), but I need the attribute to execute based on the value of someMember.MembershipTypeId, because its current value must always be present when loading the item (i.e. all active items, plus the one from the loaded record). So the question is, what is the standard pattern for this? How can my attribute accept the value, or should I be loading the ViewData for the dropdown in a controller supertype, during model binding, or something else? It is in an attribute now because the code to set the ViewData would otherwise be duplicated in each action.

    Read the article

  • Technical non-terminating condition in a loop

    - by Snarfblam
    Most of us know that a loop should not have a non-terminating condition. For example, this C# loop has a non-terminating condition: any even value of i. This is an obvious logic error.

        void CountByTwosStartingAt(byte i) { // If i is even, it never exceeds 254
            for (; i < 255; i += 2) {
                Console.WriteLine(i);
            }
        }

    Sometimes there are edge cases that are extremely unlikely but technically constitute non-terminating conditions (stack overflows and out-of-memory errors aside). Suppose you have a function that counts the number of sequential zeros in a stream:

        int CountZeros(Stream s) {
            int total = 0;
            while (s.ReadByte() == 0)
                total++;
            return total;
        }

    Now, suppose you feed it this thing:

        class InfiniteEmptyStream : Stream {
            // ... Other members ...
            public override int Read(byte[] buffer, int offset, int count) {
                Array.Clear(buffer, offset, count); // Output zeros
                return count;                       // Never returns -1 (end of stream)
            }
        }

    Or, more realistically, maybe a stream that returns data from external hardware, which in certain cases might return lots of zeros (such as a game controller sitting on your desk). Either way we have an infinite loop. This particular non-terminating condition stands out, but sometimes they don't. A completely real-world example from an app I'm writing: an endless stream of zeros will be deserialized into infinite "empty" objects (until the collection class or the GC throws an exception because I've exceeded two billion items). But this would be a completely unexpected circumstance (considering my data source).

    How important is it to have absolutely no non-terminating conditions? How much does this affect "robustness"? Does it matter if they are only "theoretically" non-terminating (is it okay if an exception represents an implicit terminating condition)? Does it matter whether the app is commercial? If it is publicly distributed? Does it matter if the problematic code is in no way accessible through a public interface/API?

    Edit: One of my primary concerns is unforeseen logic errors that can create a non-terminating condition. If, as a rule, you ensure there are no non-terminating conditions, you can identify or handle these logic errors more gracefully, but is it worth it? And when? This is a concern orthogonal to trust.
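
    When the input genuinely cannot be trusted to terminate, one defensive option is to turn the implicit assumption into an explicit bound and fail loudly when it is exceeded, converting a silent hang into a diagnosable exception. A sketch of the zero-counting loop with such a guard, written in Java rather than C# and with an arbitrary placeholder limit:

        import java.io.ByteArrayInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        public class BoundedZeroCounter {

            // Count leading zero bytes, but refuse to loop forever on a misbehaving stream.
            static int countZeros(InputStream in, int maxZeros) throws IOException {
                int total = 0;
                int b;
                while ((b = in.read()) == 0) {
                    total++;
                    if (total > maxZeros) {
                        throw new IllegalStateException(
                                "More than " + maxZeros + " consecutive zeros; refusing to continue");
                    }
                }
                return total; // loop also ends on end-of-stream (read() returns -1) or a nonzero byte
            }

            public static void main(String[] args) throws IOException {
                InputStream data = new ByteArrayInputStream(new byte[] {0, 0, 0, 7});
                System.out.println(countZeros(data, 1_000_000)); // prints 3
            }
        }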

    Read the article

  • What should I use to increase performance: View/Query/Temporary Table?

    - by Shantanu Gupta
    I want to know the relative performance of using views, temp tables and direct queries in a stored procedure. I have a table that gets created every time a trigger fires. I know this trigger will fire very rarely, and only once at setup time. Now I have to use that trigger-created table in many places for fetching data, and I have confirmed that no one makes any changes to it, i.e. it is a read-only table. I have to use this table's data, joined with multiple other tables, to fetch results for further queries, say:

        select * from triggertable

    By using a temp table:

        select ... into #tx from triggertable join t2 join t3 and so on
        select a, b, c from #tx   -- do something
        select d, e, f from #tx   -- do something
        -- and so on, around 6-7 queries in a row in a stored procedure

    By using a view:

        create view viewname as
            select ... from triggertable join t2 join t3 and so on

        select a, b, c from viewname   -- do something
        select d, e, f from viewname   -- do something
        -- and so on, around 6-7 queries in a row in a stored procedure

    This view can be used in other places as well, so I would create it in the database rather than in the stored procedure. By using a direct query:

        select a, b, c from (select ... from triggertable join t2 join t3 join ...)   -- do something
        select d, e, f from (select ... from triggertable join t2 join t3 join ...)   -- do something
        -- and so on, around 6-7 queries in a row in a stored procedure

    So I can use a view, a temporary table, or a direct query in all upcoming queries. What would be best to use in this case?

    Read the article
