Search Results

Search found 6690 results on 268 pages for 'worst practices'.

Page 68/268 | < Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >

  • JavaScript: Keeping track of eventListeners on DOM elements

    - by bobthabuilda
    What is the best way to keep track of eventListener functions on DOM elements? Should I add a property to the element which references the function, like this:

        var elem = document.getElementsByTagName( 'p' )[0];
        function clickFn(){};
        elem.listeners = { click: [clickFn, function(){}] };
        elem.addEventListener( 'click', function(e){ clickFn(e); }, false );

    Or should I store it in my own variable in my code, like below:

        var elem = document.getElementsByTagName( 'p' )[0];
        function clickFn(){};
        // Using window for the sake of brevity, otherwise I wouldn't =D
        // DOM elements and their listeners are referenced here in a paired array
        window.listeners = [elem, { click: [clickFn, function(){}] }];
        elem.addEventListener( 'click', function(e){ clickFn(e); }, false );

    Obviously the second method would be less obtrusive, but it seems it could get intensive iterating through all those possibilities. Which is the best way and why? Is there a better way?
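
    A third option, offered here only as a sketch (the registry name and helper functions are invented, not from the question): key a separate lookup structure on the element itself, for example a Map in engines that have one, so the element is neither polluted with extra properties nor scanned linearly:

        // Hypothetical helper: a Map keyed on the element, holding { type: [handlers] }.
        // Assumes an environment with Map; otherwise fall back to the paired-array idea above.
        var listenerRegistry = new Map();

        function addTrackedListener( elem, type, handler ) {
            var entry = listenerRegistry.get( elem ) || {};
            ( entry[type] = entry[type] || [] ).push( handler );
            listenerRegistry.set( elem, entry );
            elem.addEventListener( type, handler, false );
        }

        function removeTrackedListeners( elem, type ) {
            var entry = listenerRegistry.get( elem );
            if ( !entry || !entry[type] ) { return; }
            entry[type].forEach( function( handler ) {
                elem.removeEventListener( type, handler, false );
            });
            delete entry[type];
        }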

    Read the article

  • Best practice for submits redirecting to another page in MVC2?

    - by blesh
    I have a situation with my MVC2 app where I have multiple pages that need to submit different information, but all need to end up at the same page. In my old Web Forms app, I'd have just accomplished this in my btnSave_Click delegate with a Redirect. There are three different types of products, each of which needs to be saved to the cart in a completely different manner from its completely different product page. I'm not going to get into why or how they're different; suffice it to say, they're totally different. After they're saved to the cart, I need to "redirect" to the Checkout view. But it should be noted that you can also just browse straight to the Checkout view without having to submit any products to add to the cart. Here's a diagram of what I'm trying to accomplish, and how I think I need to handle it: Is this correct? It seems like a common scenario, but I haven't seen any examples of how I should handle this. Thank you all in advance.
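
    For what it's worth, the Web Forms style Redirect maps onto returning RedirectToAction from each product's POST action; below is a minimal sketch under assumed controller, view model, and action names (none of them from the question):

        using System.Web.Mvc;

        // ProductAViewModel and the cart call are placeholders, not from the question
        public class ProductAController : Controller
        {
            [HttpPost]
            public ActionResult AddToCart(ProductAViewModel model)
            {
                // ... save product A to the cart in its own product-specific way ...
                return RedirectToAction("Index", "Checkout");
            }
        }

        public class CheckoutController : Controller
        {
            // reachable both via the redirects above and by browsing to it directly
            public ActionResult Index()
            {
                return View();
            }
        }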

    Read the article

  • Asp.Net Program Architecture

    - by Pino
    I've just taken on a new ASP.NET MVC application, and after opening it up I find the following projects:

        [Project].Web
        [Project].Models
        [Project].BLL
        [Project].DAL

    Something that's become clear is that the data has to do a hell of a lot before it makes it to the View (Database -> DAL -> Repo -> BLL -> ConvertToModel -> Controller -> View). The DAL is SubSonic; the repositories in the DAL return the SubSonic entities to the BLL, which processes them, does crazy things and converts them into a Model (from the .Models project), sometimes with classes that look like this:

        public DataModel GetDataModel(EntityObject Src)
        {
            var ReturnData = new DataModel();
            ReturnData.ID = Src.ID;
            ReturnData.Name = Src.Name;
            // etc. etc.
            return ReturnData;
        }

    Now, the question is: "Is this complete overkill?" OK, the project is of a decent size and can only get bigger, but is it worth carrying on with all this? I don't want to use AutoMapper, as it just seems like it makes the complication worse. Can anyone shed any light on this?
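
    For comparison, the AutoMapper route the question dismisses would look roughly like this with the classic static API of that era; this is only a sketch, and it simply reuses the type names from the snippet above:

        using AutoMapper;

        // one-time configuration, typically done at application start-up
        Mapper.CreateMap<EntityObject, DataModel>();

        // later, wherever the BLL converts entities to models
        DataModel model = Mapper.Map<EntityObject, DataModel>(src);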

    Read the article

  • Objective C selector memory managment (does this leak memory)?

    - by James Jones
        - (IBAction) someButtonCall {
            if(!someCondition) {
                someButtonCallBack = @selector(someButtonCall);
                [self presentModalViewController:someController animated:YES];
            }
            else ...
        }

        // Called from someController
        - (void) someControllerFinished:(BOOL) ok {
            [self dismissModalViewControllerAnimated:YES];
            if(ok)
                [self performSelector:someButtonCallBack];
            else ...
        }

    I'm wondering, if the user keeps getting into the !someCondition clause, whether the selector is leaked by assigning a new selector each time (the code above is hypothetical and not what I'm doing). Any help is appreciated.

    Thanks,
    James Jones

    Read the article

  • What is the correct way to import and export out of Excel to SQL Server and back?

    - by Vecdid
    Looking for the correct way, and control, to import and export data into and out of Microsoft Excel programmatically. I am willing to get a 3rd-party control that supports this functionality, or I can create it myself, but I'm looking to get this project done fast. The data source will be offline. If there is a control that would merge the data when it is online for the upload/download, that would work also, but it's best to remain offline; I don't need the support headache. Security is also an issue. SSIS is not available on the shared database server. The website that hosts the ASP.NET application is not on the same machine as the SQL Server. Thank you.
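
    One approach that fits the stated constraints (no SSIS, web server separate from SQL Server) is to read the workbook with the ACE OLE DB provider and push the rows up with SqlBulkCopy. The sketch below assumes the file path, sheet name, connection strings, and destination table; none of those come from the question, and the ACE provider must be installed on the web server:

        using System.Data;
        using System.Data.OleDb;
        using System.Data.SqlClient;

        // Paths, sheet name, connection strings and table name are placeholders.
        var excelConn = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\\uploads\\data.xlsx;" +
                        "Extended Properties='Excel 12.0 Xml;HDR=YES'";
        var rows = new DataTable();

        using (var conn = new OleDbConnection(excelConn))
        using (var adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn))
        {
            adapter.Fill(rows);   // pull the worksheet into memory
        }

        using (var bulk = new SqlBulkCopy("Server=dbserver;Database=Shop;Integrated Security=True"))
        {
            bulk.DestinationTableName = "dbo.ImportedRows";
            bulk.WriteToServer(rows); // bulk-insert the rows into SQL Server
        }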

    Read the article

  • Timing the Linux Kernel boot-time optimisation

    - by CVS-2600Hertz-wordpress-com
    I am trying to optimise the boot-up time of Linux on an embedded device (not a PC). Currently, to profile the boot-up sequence, I have enabled the timing info on printk logs. Is this the best way? If not, how do I profile the boot-up sequence (with timing) with minimum overhead? PS: I have a terminal (on the device) over a serial connection, and I use TeraTerm on Windows XP to access it.
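
    Beyond printk timestamps, one low-overhead option (a suggestion, not from the question) is the kernel's own initcall instrumentation plus the bootgraph script shipped in kernel source trees of that era; roughly:

        # add to the kernel command line (how depends on your bootloader):
        initcall_debug printk.time=1 loglevel=7

        # after boot, capture the log over the serial console or with dmesg,
        # then on the host render a timing chart from the kernel source tree:
        dmesg > boot.log
        perl scripts/bootgraph.pl boot.log > bootgraph.svg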

    Read the article

  • Is it proper to get and especially set Perl module's global variables directly?

    - by DVK
    I was wondering what the best practice in Perl is regarding getting - or, more importantly, setting - a global variable of some module by directly accessing $Module::varName, in case the module didn't provide a getter/setter method for it. The reason it smells bad to me is the fact that it sort of circumvents encapsulation. Just because I can do it in Perl, I'm not entirely certain I should (assuming there actually is an alternative such as adding a getter/setter to the module). I'm asking this because I'm about to request the addition of a getter/setter for a global variable in one of the core Perl modules, and I would like to avoid having it soundly and unanimously rejected on the grounds of "Why the heck do you need one when you can access the variable in the package directly?" - in case doing the latter is actually considered perfectly OK by the community.

    Read the article

  • Reference for proper handling of PID file on Unix

    - by bignose
    Where can I find a well-respected reference that details the proper handling of PID files on Unix? On Unix operating systems, it is common practice to “lock” a program (often a daemon) by use of a special lock file: the PID file. This is a file in a predictable location, often ‘/var/run/foo.pid’. The program is supposed to check when it starts up whether the PID file exists and, if the file does exist, exit with an error. So it's a kind of advisory, collaborative locking mechanism. The file contains a single line of text, being the numeric process ID (hence the name “PID file”) of the process that currently holds the lock; this allows an easy way to automate sending a signal to the process that holds the lock. What I can't find is a good reference on expected or “best practice” behaviour for handling PID files. There are various nuances: how to actually lock the file (don't bother? use the kernel? what about platform incompatibilities?), handling stale locks (silently delete them? when to check?), when exactly to acquire and release the lock, and so forth. Where can I find a respected, most-authoritative reference (ideally on the level of W. Richard Stevens) for this small topic?
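
    While not a reference in itself, the pattern most daemon-writing guides converge on can be sketched in a few lines of C: open the PID file, take an advisory lock, and keep the descriptor open for the life of the process, so a stale file left by a crashed instance never blocks a restart. This is only an illustration of the idea, with error handling abbreviated:

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        /* Returns the open, locked descriptor on success, or -1 if another
         * live instance already holds the lock (or on error). */
        int acquire_pidfile(const char *path)
        {
            int fd = open(path, O_RDWR | O_CREAT, 0644);
            if (fd < 0)
                return -1;
            if (lockf(fd, F_TLOCK, 0) < 0) {    /* lock held by a running process */
                close(fd);
                return -1;
            }
            char buf[32];
            int len = snprintf(buf, sizeof buf, "%ld\n", (long)getpid());
            ftruncate(fd, 0);
            write(fd, buf, (size_t)len);
            /* keep fd open: the lock (and thus the claim) dies with the process */
            return fd;
        }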

    Read the article

  • PHP Equivalent of Java Type-Casting Solution

    - by stjowa
    Since PHP has no custom-class type-casting, how would I go about doing the PHP equivalent of this Java code:

        CustomBaseObject cusBaseObject = cusBaseObjectDao.readCustomBaseObjectById(id);
        ((CustomChildObject) cusBaseObject).setChildAttribute1(value1);
        ((CustomChildObject) cusBaseObject).setChildAttribute2(value2);

    In my case, it would be very nice if I could do this. However, trying it without type-casting support gives me an error that the methods do not exist for the object.

    Thanks,
    Steve

    Read the article

  • SVN: Release branch headaches, how to merge in website revisions as and when cleared to go live?

    - by Pete Duncanson
    I need a sanity check here if we can; any ideas on correcting/changing the following are very welcome! We've been getting ourselves in knots of late with our SVN and are trying to correct it by putting a trunk/release system in place. We have a large website that we develop on and we store it all in SVN. Here's what we had in mind:

    - We have trunk and a release branch.
    - All work gets checked into trunk.
    - When a feature is deemed ready for the next release it is merged into the Release branch.
    - We only have one release branch and just tag "Latest" when we do a push to live.
    - We hope to be able to get all the files changed from Latest to Head to give us a zip that we can upload (any ideas on an easy way to do this via scripting?).

    So we set all this up and were very happy with ourselves. Except it's not working, and here's why. We work on lots of different features/fixes/problems at once, and they don't all get checked in nicely feature-complete (but always working, at least). Then sometimes you have to wait for clients to sign off. As a result you end up with revisions which are "ready for live" scattered among ones which are "still being worked on" in trunk. That means the completed revisions are not getting merged in sequentially, but out of order. I thought SVN could handle this, clever little thing it is, but apparently not. Here's an example:

    1. Pete changes some CSS to make a new button look pretty (Revision 1).
    2. Dave adds some CSS to the bottom of the same CSS file as Pete's for a new feature (Revision 2).
    3. Dave's mod gets the nod, so he merges it into Release and commits it with a log message mentioning the revision number and bug-tracking id.
    4. Pete adds more buttons to finish his mod, no CSS changes here though (Revision 3).
    5. Pete then merges his mods (Revisions 1 and 3) into the Head of Release (which has Dave's merge in it), but this overwrites Dave's CSS additions, which now disappear completely.

    This leads to the site being broken and the Release branch being pretty much useless. So we tried some other ideas, like reverting Release back to "Latest" and then just merging in Revisions 1, 2 and 3 in order. This worked fine until we had Revision 4, which was not ready for live, and Revision 5, which was. Suddenly we are getting ourselves in knots again with exactly the same problem! OK, so take three: revert to Latest, merge in Revision 5, then do an update back to Head. Tree conflicts galore! So that's a no-no. I cracked in the end and built it all up manually, but it's not something I want to do regularly; ideally I want to script our deployment, but I can't while Release is in such a mess. HELP! What the heck are we doing wrong? I can't seem to find any solutions to this problem of wanting different, non-sequential revisions in Release. If it's not possible that's fine, but how the heck are we meant to get stuff live easily? We can't branch for every single change; the site takes 30+ minutes to check out, so it would take too long. Side note: we are using TortoiseSVN, so can we keep command-line examples to a minimum in any answers? Latest version of TSVN and SVN version 1.6, so we have the funky merge tracking etc.

    EDIT: An excellent blog post which deals with the dev/release cycle (although using Git, but still relevant) - thought everyone would like to read it if they found this question interesting: http://nvie.com/git-model

    EDIT 2: I wrote a blog post on how to show which branch you are working on in your website, which others have asked me about (http://www.offroadcode.com/2010/5/14/which-svn-branch-are-you-working-on.aspx). Hope that helps. In the meantime we are looking at Kiln and hoping to make the switch next month (gulp!).

    Read the article

  • How would you organize a large complex web application (see basic example)?

    - by Anurag
    How do you usually organize complex web applications that are extremely rich on the client side? I have created a contrived example to indicate the kind of mess it's easy to get into if things are not managed well for big apps. Feel free to modify/extend this example as you wish - http://jsfiddle.net/NHyLC/1/

    The example basically mirrors part of the comment posting on SO, and follows these rules:

    - Must have 15 characters minimum, after multiple spaces are trimmed down to one.
    - If Add Comment is clicked, but the size is less than 15 after removing multiple spaces, then show a popup with the error.
    - Indicate the amount of characters remaining and summarize with color coding. Gray indicates a small comment, brown indicates a medium comment, orange a large comment, and red a comment overflow.
    - One comment can only be submitted every 15 seconds. If a comment is submitted too soon, show a popup with an appropriate error message.

    A couple of issues I noticed with this example:

    - This should ideally be a widget or some sort of packaged functionality.
    - Things like one comment per 15 seconds and a minimum 15-character comment belong to application-wide policies rather than being embedded inside each widget.
    - Too many hard-coded values.
    - No code organization. Models, views and controllers are all bundled together. Not that MVC is the only approach for organizing rich client-side web applications, but there is none in this example.

    How would you go about cleaning this up? Applying a little MVC/MVP along the way? Here are some of the relevant functions, but it will make more sense if you see the entire code on jsfiddle:

        /**
         * Handle comment change.
         * Update character count.
         * Indicate progress
         */
        function handleCommentUpdate(comment) {
            var status = $('.comment-status');
            status.text(getStatusText(comment));
            status.removeClass('mild spicy hot sizzling');
            status.addClass(getStatusClass(comment));
        }

        /**
         * Is the comment valid for submission
         */
        function commentSubmittable(comment) {
            var notTooSoon = !isTooSoon();
            var notEmpty = !isEmpty(comment);
            var hasEnoughCharacters = !isTooShort(comment);
            return notTooSoon && notEmpty && hasEnoughCharacters;
        }

        // submit comment
        $('.add-comment').click(function() {
            var comment = $('.comment-box').val();

            // submit comment, fake ajax call
            if(commentSubmittable(comment)) {
                ..
            }

            // show a popup if comment is mostly spaces
            if(isTooShort(comment)) {
                if(comment.length < 15) {
                    // blink status message
                }
                else {
                    popup("Comment must be at least 15 characters in length.");
                }
            }
            // show a popup is comment submitted too soon
            else if(isTooSoon()) {
                popup("Only 1 comment allowed per 15 seconds.");
            }
        });

    Read the article

  • Preloading Winforms

    - by msarchet
    I am currently working on a project where we have a couple of very control-heavy user controls that are being used inside an MDI controller. This is a line-of-business app and it is very data driven. The problem we were facing was that the aforementioned controls would load very, very slowly. We dipped our toes into the waters of multi-threading for the control loading, but that was not a solution for a plethora of reasons. Our solution to increasing the performance of the controls ended up being to 'pre-load' the forms onto a hidden window, create a stack of the existing forms, and pop off of the stack as the user requested a form.

    Now, the current issue that I can see arising as we push this 'fix' out to our testers, and ultimately our users, is this: currently the 'hidden' window that contains the preloaded forms is visible in Task Manager, and can be shut down, thus causing all of the controls to be lost. Then you have to create them on the fly, losing the performance increase. Secondly, when the user uses up the stack we lose the performance increase (our current solution to this is discussed below).

    For the first problem, is there a way to hide this window from Task Manager, perhaps by creating a parent form that encapsulates both the main form for the program and the hidden form? Our current solution to the second problem is to have an inactivity timer that, when it fires, checks the stacks for the forms and loads a new form onto the stack if it isn't full. However, this still has the potential of causing a hang in the UI while it creates the forms. A possible solution for this would be to put 'used' forms back onto the stack, but I feel like there may be a better way.

    Read the article

  • WordPress: Image In Every Post

    - by Sarfraz
    Hello. If you visit this site: http://www.catswhocode.com/blog/ you will see that there is an image and a summary for each post. What is the proper way to implement that? Is this done using WordPress custom fields, or is this coded in the image.php file present in the theme folder? How do I do that? Thanks

    Read the article

  • .net code readability and maintainability

    - by george9170
    There is currently a local debate as to which code is more readable. We have one programmer who comes from a C background, and when that programmer codes it looks like:

        string foo = "bar";
        if (foo[foo.Length - 1] == 'r')
        {
        }

    We have another programmer who doesn't like this methodology and would rather use:

        if (foo.EndsWith("r"))
        {
        }

    Which way of doing these types of operations is better?

    Read the article

  • Advice on displaying and allowing editing of data using ASP.NET MVC?

    - by Remnant
    I am embarking upon my first ASP.NET MVC project and I would like to get some input on possible ways to display database data and general best practice. In short, the body of my webpage will show data from my database in a table-like format, with each table row showing similar data. For example:

        Name        Age  Position  Date Joined
        Jon Smith    23  Striker   18th Mar 2005
        John Doe     38  Defender  3rd Jan 1988

    In terms of functionality, primarily I'd like to give the user the ability to edit the data and, after the edit, commit the edit to the database and refresh the view. The reason I want to refresh the view is that the data is date-ordered, and I will need to re-sort if the user edits a date field.

    My main question is: what architecture/tools would be best suited to fulfil these requirements at a high level? From the research I have done so far, my initial conclusions were:

    - ADO.NET for data retrieval. This is something I have used before and feel comfortable with. I like the look of LINQ to SQL but don't want to make the learning curve any steeper for my first outing into MVC land just yet.
    - Partial Views to create a template and then iterate through a datatable that I have pulled back from my database model.
    - jQuery to allow the user to edit data in the table, error-check edited data entries, etc.

    Also, my initial view was that caching the data would not be a key requirement here. The only field a user will be able to update is the date field and, if they do, I will need to commit that data to the database immediately and then refresh the view (as the data is date-sorted). Any thoughts on this?

    Alternatively, I have seen some jQuery plug-ins that emulate a datagrid and provide associated functionality. My first thought is that I do not need all the functionality that comes with these plug-ins (e.g. zebra striping, the ability to sort by column using a sort glyph in column headers, etc.) and I don't really see any benefit to this over and above the solution I have outlined above. Again, is there reason to reconsider this view?

    Finally, when a user edits a date, I will need to refresh the view. In order to do this I had been reading about Html.RenderAction, and this seemed like it may be a better option than using Partial Views, as I can incorporate application logic into the action method. Am I right to consider Html.RenderAction, or have I misunderstood its usage?

    Hope this post is clear and not too long. I did consider separate posts for each topic (e.g. Partial View vs. Html.RenderAction, when to use a jQuery datagrid plug-in) but it feels like these issues are so intertwined that they need to be dealt with in the context of each other. Thanks
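
    On the last point, Html.RenderAction in MVC 2 invokes a child action and inlines its output, which makes a "refresh just the table" call straightforward. A rough sketch with invented controller, repository, and view names (none from the question):

        using System.Web.Mvc;

        public class PlayersController : Controller
        {
            // playerRepository is a hypothetical data-access abstraction
            [ChildActionOnly]
            public ActionResult PlayerTable()
            {
                var players = playerRepository.GetAllOrderedByDateJoined();
                return PartialView("_PlayerTable", players);
            }
        }

    And in the ASPX view, the child action is rendered inline with:

        <% Html.RenderAction("PlayerTable", "Players"); %>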

    Read the article

  • The woes of (sometimes) storing "date only" in datetimes

    - by Heinzi
    We have two fields from and to (of type datetime), where the user can store the begin time and the end time of a business trip, e.g.: From: 2010-04-14 09:00 To: 2010-04-16 16:30 So, the duration of the trip is 2 days and 7.5 hours. Often, the exact times are not known in advance, so the user enters the dates without a time: From: 2010-04-14 To: 2010-04-16 Internally, this is stored as 2010-04-14 00:00 and 2010-04-16 00:00, since that's what most modern class libraries (e.g. .net) and databases (e.g. SQL Server) do when you store a "date only" in a datetime structure. Usually, this makes perfect sense. However, when entering 2010-04-16 as the to date, the user clearly did not mean 2010-04-16 00:00. Instead, the user meant 2010-04-16 24:00, i.e., calculating the duration of the trip should output 3 days, not 2 days. I can think of a few (more or less ugly) workarounds for this problem (add "23:59" in the UI layer of the to field if the user did not enter a time component; add a special "dates are full days" Boolean field; store "2010-04-17 00:00" in the DB but display "2010-04-16 24:00" to the user if the time component is "00:00"; ...), all having advantages and disadvantages. Since I assume that this is a fairly common problem, I was wondering: Is there a "standard" best-practice way of solving it? If there isn't, have you experienced a similar requirement, how did you solve it and what were the pros/cons of that solution?
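
    As a concrete illustration of one of the workarounds listed above (storing the exclusive upper bound, i.e. the following midnight, when only a date was entered), here is a small sketch with invented names; it is one option, not the question's chosen solution:

        using System;

        static class TripDates
        {
            // If the user typed only a date for the "to" field, treat it as the whole
            // day: normalize to the exclusive bound (next midnight) before computing.
            public static DateTime NormalizeTo(DateTime to, bool dateOnly)
            {
                return dateOnly ? to.Date.AddDays(1) : to;
            }

            public static TimeSpan Duration(DateTime from, DateTime to, bool toIsDateOnly)
            {
                return NormalizeTo(to, toIsDateOnly) - from;
            }
        }

        // Duration(new DateTime(2010, 4, 14), new DateTime(2010, 4, 16), true)
        // yields 3 days, matching the user's intent in the example above.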

    Read the article

  • Variable naming for arrays - C#

    - by David Neale
    What should I call a variable instantiated with some type of array? Is it okay to simply use a pluralised form of the type being held?

        IList<Person> people = new List<Person>();

    Or should I append something like 'List' to the name?

        IList<Person> personList = new List<Person>();

    Read the article

  • Splitting assemblies - finding the balance (avoiding overkill)

    - by M.A. Hanin
    I'm writing a wide component infrastructure, to be used in my projects. Since not all projects will require every component created, I've been thinking of splitting the component into discrete assemblies, so that every application developed will only be deployed with the required assemblies. I assume that creating an assembly has some storage overhead (the assembly's code, wrapping whatever is inside). Therefore, there must be some limit to the advantage gained by splitting an assembly - a certain point where splitting the assembly is worse than keeping it united (storage-wise and performance-wise). Now, here is the question: how do I know when splitting an assembly is an overkill? P.S I guess there are other overheads to assembly splitting, aside from the storage overhead. If anyone can point out these overheads, it would be much appreciated.

    Read the article

  • When to address integer overflow in C

    - by Yktula
    Related question: http://stackoverflow.com/questions/199333/best-way-to-detect-integer-overflow-in-c-c

    In C code, should integer overflow be addressed whenever integers are added? It seems like pointers and array indexes, at least, should always be checked. When should integer overflow be checked for? When numbers are added in C without the type explicitly mentioned, or printed with printf, when will overflow occur? Is there a way to automatically detect when integer arithmetic overflows?
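
    For reference, the usual portable pre-condition check for signed addition (one of the options discussed in the linked question, sketched here from scratch) looks like this:

        #include <limits.h>

        /* Returns 1 and stores a + b in *out if the sum fits in an int,
         * returns 0 (and leaves *out untouched) if it would overflow. */
        int add_checked(int a, int b, int *out)
        {
            if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
                return 0;
            *out = a + b;
            return 1;
        }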

    Read the article

  • Makefiles - Compile all .cpp files in src/ to .o's in obj/, then link to binary in /

    - by Austin Hyde
    So, my project directory looks like this:

        /project
            Makefile
            main
            /src
                main.cpp
                foo.cpp
                foo.h
                bar.cpp
                bar.h
            /obj
                main.o
                foo.o
                bar.o

    What I would like my makefile to do is compile all .cpp files in the /src folder to .o files in the /obj folder, then link all the .o files in /obj into the output binary in the root folder, /project. The problem is, I have next to no experience with Makefiles, and am not really sure what to search for to accomplish this. Also, is this a "good" way to do this, or is there a more standard approach to what I'm trying to do?
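
    A common GNU Make pattern for exactly this layout (offered as a sketch; the compiler flags and variable names are arbitrary, header dependencies are not tracked, and recipe lines must start with a tab character) would be:

        # One conventional approach, not the only one.
        CXX      := g++
        CXXFLAGS := -Wall -Wextra -O2
        SRC      := $(wildcard src/*.cpp)
        OBJ      := $(patsubst src/%.cpp,obj/%.o,$(SRC))
        TARGET   := main

        $(TARGET): $(OBJ)
        	$(CXX) $(CXXFLAGS) -o $@ $^

        obj/%.o: src/%.cpp | obj
        	$(CXX) $(CXXFLAGS) -c $< -o $@

        obj:
        	mkdir -p obj

        clean:
        	rm -f $(TARGET) obj/*.o

        .PHONY: clean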

    Read the article

  • Metaprogramming - self-explanatory code - tutorials, articles, books

    - by elena
    Hello everybody, I am looking into improving my programming skills (actually, I try to do my best to suck less each year, as our Jeff Atwood put it), so I was thinking of reading up on metaprogramming and self-explanatory code. I am looking for something like an idiot's guide to this (free books for download, online resources). I also want more than your average wiki page, and something language-agnostic, or preferably with Java examples. Do you know of such resources that will let me efficiently put all of it into practice? (I know experience has a lot to say in all of this, but I'd like to build experience while avoiding the flow of bad decisions - experience - good decisions.) Thank you!

    Read the article

  • Declaring data types in SQLite

    - by dan04
    I'm familiar with how type affinity works in SQLite: you can declare column types as anything you want, and all that matters is whether the type name contains "INT", "CHAR", "FLOA", etc. But is there a commonly-used convention on what type names to use? For example, if you have an integer column, is it better to distinguish between TINYINT, SMALLINT, MEDIUMINT, and BIGINT, or just declare everything as INTEGER? So far, I've been using the following:

        INTEGER
        REAL
        CHAR(n)      -- for strings with a known fixed width
        VARCHAR(n)   -- for strings with a known maximum width
        TEXT         -- for all other strings
        BLOB
        BOOLEAN
        DATE         -- string in "YYYY-MM-DD" format
        TIME         -- string in "HH:MM:SS" format
        TIMESTAMP    -- string in "YYYY-MM-DD HH:MM:SS" format

    (Note that the last three are contrary to the type affinity.)

    Read the article

  • Why can I access private/protected methods using Object#send in Ruby?

    - by smotchkkiss
    The class:

        class A
          private
          def foo
            puts :foo
          end

          public
          def bar
            puts :bar
          end

          private
          def zim
            puts :zim
          end

          protected
          def dib
            puts :dib
          end
        end

    Instance of A:

        a = A.new

    Test:

        a.foo rescue puts :fail
        a.bar rescue puts :fail
        a.zim rescue puts :fail
        a.dib rescue puts :fail
        a.gaz rescue puts :fail

    Test output:

        fail
        bar
        fail
        fail
        fail

    .send test:

        [:foo, :bar, :zim, :dib, :gaz].each { |m| a.send(m) rescue puts :fail }

    .send output:

        foo
        bar
        zim
        dib
        fail

    The question: the section labeled "Test output" is the expected result. So why can I access private/protected methods simply by using Object#send? Perhaps more important: what is the difference between public/private/protected in Ruby, and when should I use each? Can someone provide real-world examples of private and protected usage?

    Read the article

  • PHP: Loop or no loop?

    - by Joseph Robidoux
    In this situation, is it better to use a loop or not?

        echo "0";
        echo "1";
        echo "2";
        echo "3";
        echo "4";
        echo "5";
        echo "6";
        echo "7";
        echo "8";
        echo "9";
        echo "10";
        echo "11";
        echo "12";
        echo "13";

    or

        $number = 0;
        while ($number != 13) {
            echo $number;
            $number = $number + 1;
        }

    Read the article
