Search Results

Search found 10115 results on 405 pages for 'coding practices'.


  • JavaScript-library-based Project Organization

    - by Laith J
    Hello, I'm very new to the JavaScript library world. I have used JS by itself before to create a mini social network, but this is the first time I've used a JS library, and I really don't know how to go about this. I'm planning to use Google Closure, and I'm really not sure how to organize the code. Should I put everything in one file, since it's a web app and should have one screen? Should I split the code into many chunks and put them in different files? Or should I put different dialogs (like settings) on a separate page, and thus in a separate file? Like all programmers I'm a perfectionist, so please help me out with this one. Thanks.
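
    For what it's worth, Closure Library's own convention is one namespace per source file, wired together with goog.provide/goog.require. A minimal sketch - the myapp names are made up for illustration:

        // settings_dialog.js -- one namespace per source file
        goog.provide('myapp.SettingsDialog');

        goog.require('goog.ui.Dialog');

        /** @constructor */
        myapp.SettingsDialog = function() {
          this.dialog_ = new goog.ui.Dialog();
          this.dialog_.setTitle('Settings');
        };

        myapp.SettingsDialog.prototype.show = function() {
          this.dialog_.setVisible(true);
        };

    Note that the Closure Compiler concatenates and minifies all the required files into a single script anyway, so splitting the source into many files is purely a source-organization choice and costs no extra HTTP requests.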


  • How to use a 3rd party control inside the viewmodel?

    - by Sander
    I have a 3rd-party control which, among other things, performs loading of some data. I want my viewmodel to keep track of this load operation and adjust its own state accordingly. If it were up to me, I'd do the data loading far away from the view, but it is not. So I seem to be in a situation where my viewmodel depends on my view. How do I best handle this? I feel rather dirty making the view publish events to the viewmodel, but I don't see any other reasonable way to get this information into the viewmodel. A similar situation might crop up with standard controls, too - imagine if your viewmodel depended on the events coming from a MediaElement. How do you properly model this? Do you put the MediaElement into the viewmodel? That doesn't sound right. If publishing the events to the viewmodel is indeed the most reasonable way, is there some common pattern used for this? How do you do it?
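
    One hedged way out, assuming nothing about the control beyond an ordinary CLR event: wrap the event in an attached property that forwards it to an ICommand on the viewmodel, so the viewmodel sees a command invocation rather than the view itself. A sketch - ThirdPartyControl and its LoadCompleted event are stand-ins for the real control:

        using System.Windows;
        using System.Windows.Input;

        public static class LoadCompletedBehavior
        {
            public static readonly DependencyProperty CommandProperty =
                DependencyProperty.RegisterAttached(
                    "Command", typeof(ICommand), typeof(LoadCompletedBehavior),
                    new PropertyMetadata(OnCommandChanged));

            public static void SetCommand(DependencyObject d, ICommand value) { d.SetValue(CommandProperty, value); }
            public static ICommand GetCommand(DependencyObject d) { return (ICommand)d.GetValue(CommandProperty); }

            private static void OnCommandChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                var control = d as ThirdPartyControl;      // hypothetical control type
                if (control == null) return;
                control.LoadCompleted += (sender, args) => // hypothetical event
                {
                    var command = GetCommand(control);
                    if (command != null && command.CanExecute(null))
                        command.Execute(null);
                };
            }
        }

    The viewmodel then exposes a plain command and never references the control; the view binds the attached Command property to it in XAML.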


  • If you were developing shareware software for Windows, would you target the .NET Framework or use native code?

    - by bohoo
    For the sake of the question, by 'shareware' I mean software which is relatively small in size (up to a few dozen MB) and available for download and evaluation through a web site. I'm asking this question because I don't understand something about the current state of commercial Windows desktop development. It seems to me that: there is no reliable statistic on the proportion of Windows systems with the .NET Framework installed; it makes no sense to force the end user to install the 20-60 MB .NET runtime for an application which may be smaller than that; applications matching the term 'shareware' above have a big share of the Windows market; and many of them don't need the capabilities of a low-level language like C++, so ideally they should be developed in a RAD environment. One would therefore expect a blossoming of RAD environments for native Windows code. But I know of only one - Delphi, and Delphi is so unpopular. How is that?


  • How to not over-use jQuery?

    - by Fedyashev Nikita
    Typical jQuery over-use:

        $('button').click(function() {
          alert('Button clicked: ' + $(this).attr('id'));
        });

    Which can be simplified to:

        $('button').click(function() {
          alert('Button clicked: ' + this.id);
        });

    Which is way faster. Can you give me any more examples of similar jQuery over-use?
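
    Another instance of the same pattern, offered as an illustration (not from the original question) - wrapping this just to read a plain DOM property:

        // Over-use: a jQuery wrapper and attr() call for a boolean property
        $('input').change(function() {
          if ($(this).attr('checked')) { /* ... */ }
        });

        // The DOM already exposes it directly:
        $('input').change(function() {
          if (this.checked) { /* ... */ }
        });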


  • What makes an effective UI for displaying versioning of structured hierarchical data

    - by Fadrian Sudaman
    Traditional version control systems display versioning information by grouping projects, folders and files in a tree view on the left and a details view on the right; you then click on each item to look at the revision history for that item. Assuming I have all the historical versioning information for a project available from an object-oriented model perspective (e.g. classes, methods, parameters, etc.), what do you think would be the most effective way to present such information in a UI, so that you can easily navigate it and access both a snapshot view of the project and the historical versioning information? Put yourself in the position of using a tool like this every day in your job, the way you currently use SVN, SourceSafe, Perforce or any other VCS: what would contribute to the usability, productivity and effectiveness of the tool? I personally find the classical way of displaying folders and files described above very restrictive and less effective for displaying deeply nested logical models. Assuming this is a greenfield project, not restricted to any specific technology, how do you think I should best approach this? I am looking for ideas and input here to add value to my research project. Feel free to make any suggestions you think are valuable. Thanks again to anyone who shares their thoughts.


  • What are the advantages of using StringBuilder versus XmlDocument or related classes to create XML documents?

    - by Rob
    This might be a bit of a code smell, but I have seen it in some production code, namely the use of StringBuilder as opposed to XmlDocument when creating XML documents. In some cases these are write-once operations (e.g. create the document and save it to disk), whereas in others the built string is passed to an XmlDocument to perform an XslTransform, the result of which is returned to the client. So, the obvious question: is there merit to doing things this way, is it something that should be decided case by case, or is this simply the wrong way of doing things?
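
    For comparison, a minimal sketch of building the same kind of document with XmlWriter, which costs little more code than string concatenation but guarantees escaping and well-formedness (the element names are illustrative):

        using System.Text;
        using System.Xml;

        static string BuildOrderXml()
        {
            var sb = new StringBuilder();
            var settings = new XmlWriterSettings { Indent = true };
            using (var writer = XmlWriter.Create(sb, settings))
            {
                writer.WriteStartElement("order");
                // '&' is escaped automatically; a raw StringBuilder version must remember to do this
                writer.WriteElementString("customer", "Smith & Sons");
                writer.WriteEndElement();
            }
            return sb.ToString();
        }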


  • Multiple REPLACE function in Oracle

    - by Adnan
    I am using the REPLACE function in Oracle to replace values in my string, like: SELECT REPLACE('THE NEW VALUE IS #VAL1#', '#VAL1#', '55') FROM dual. This is fine for replacing one value, but what about 20+? Should I nest 20+ REPLACE calls, or is there a more practical solution? All ideas are welcome.
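
    For a handful of tokens, nesting is the straightforward (if unlovely) route - a sketch:

        SELECT REPLACE(REPLACE(REPLACE(
                 'THE NEW VALUE IS #VAL1# AND #VAL2# AND #VAL3#',
                 '#VAL1#', '55'),
                 '#VAL2#', '66'),
                 '#VAL3#', '77')
        FROM dual;

    Beyond that, a common alternative is to keep the token/value pairs in a lookup table and apply them in a PL/SQL loop; and for single-character substitutions only, TRANSLATE can handle many at once.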


  • Win7: Right place to install a program that may be 'shared' with other computers

    - by robsoft
    We have an app that currently installs itself into 'program files\our app', and it puts its internal data files into the common Application Data folder. This means the program is available to any user on that particular PC. Now we want to make a multi-user version of this program, with multiple PCs accessing it at the same time across the network. In the bad old days, under XP, we'd just have the user who installed the app 'share' the app directory and off we'd go. In principle, is this still the 'right' way to do it under Vista/Windows 7? We'd like to do this 'properly' and be as compliant as possible! Is there a recommended Microsoft approach for doing this, or is it largely down to whatever we can get away with and subsequently support (hah!)? I've tried researching this on the MS websites but haven't found anything too helpful at all - it'd be really useful to have an 'if you're trying to install this kind of thing, put it here' type of guide for developers!
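
    As an aside on the per-machine data location: if the app is .NET, the framework resolves the common Application Data path for you, so it never needs to be hard-coded. A small sketch (the subfolder name is made up):

        using System;
        using System.IO;

        static string GetDataRoot()
        {
            // C:\ProgramData on Vista/Windows 7; All Users\Application Data on XP.
            string root = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
                "OurApp");   // illustrative subfolder name
            Directory.CreateDirectory(root);
            return root;
        }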


  • HTTP POST with URL query parameters -- good idea or not?

    - by Steven Huwig
    I'm designing an API to go over HTTP, and I am wondering whether using the HTTP POST command, but with URL query parameters only and no request body, is a good way to go. Considerations: "Good Web design" requires non-idempotent actions to be sent via POST, and this is a non-idempotent action; it is easier to develop and debug this app when the request parameters are present in the URL; and the API is not intended for widespread use. It seems like making a POST request with no body will take a bit more work, e.g. a Content-Length: 0 header must be explicitly added. It also seems to me that a POST with no body runs a bit counter to most developers' and HTTP frameworks' expectations. Are there any more pitfalls or advantages to sending parameters on a POST request via the URL query rather than the request body? Edit: The reason this is under consideration is that the operations are not idempotent and have side effects other than retrieval. See the HTTP spec: In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested. ... Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. The methods GET, HEAD, PUT and DELETE share this property. Also, the methods OPTIONS and TRACE SHOULD NOT have side effects, and so are inherently idempotent.
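
    For illustration, the request shape under discussion - every parameter in the query string, an empty body, and the length declared explicitly (host, path and parameter names are hypothetical):

        POST /orders/cancel?order_id=42&reason=duplicate HTTP/1.1
        Host: api.example.com
        Content-Length: 0

    Semantically this is still a POST - intermediaries and user agents treat it as unsafe - it just carries its arguments where a GET would.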


  • Should Factories Persist Entities?

    - by mxmissile
    Should factories persist entities they build? Or is that the job of the caller? Pseudo example:

        public class OrderFactory {
            public Order Build() {
                var order = new Order();
                ....
                return order;
            }
        }

        public class OrderController : Controller {
            public OrderController(IRepository repository) {
                this.repository = repository;
            }

            public ActionResult MyAction() {
                var order = factory.Build();
                repository.Insert(order);
                ...
            }
        }

    or

        public class OrderFactory {
            public OrderFactory(IRepository repository) {
                this.repository = repository;
            }

            public Order Build() {
                var order = new Order();
                ...
                repository.Insert(order);
                return order;
            }
        }

        public class OrderController : Controller {
            public ActionResult MyAction() {
                var order = factory.Build();
                ...
            }
        }

    Is there a recommended practice here?


  • What's the best way to write a maintainable web scraping app?

    - by Benj
    I wrote a Perl script a while ago which logged into my online banking and emailed me my balance and a mini-statement every day. I found it very useful for keeping track of my finances. The only problem is that I wrote it using just Perl and curl, and it was quite complicated and hard to maintain. After a few instances of my bank changing their web page, I got fed up with debugging it to keep it up to date. So what's the best way of writing such a program so that it's easy to maintain? I'd like to write a nice, well-engineered version in either Perl or Java which will be easy to update when the bank inevitably fiddles with their web site.
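
    In the Perl case, much of the maintainability comes from replacing hand-rolled curl calls with WWW::Mechanize and keeping every site-specific detail (URLs, form layout, regexes) in one clearly marked section, so a bank redesign means one edit. A sketch, with the bank's URL and field names obviously made up:

        use strict;
        use warnings;
        use WWW::Mechanize;

        # --- everything site-specific lives here ---
        my $LOGIN_URL  = 'https://bank.example.com/login';
        my %LOGIN_FORM = (username => $ENV{BANK_USER}, password => $ENV{BANK_PASS});
        my $BALANCE_RE = qr/Balance:\s*([\d,.]+)/;

        my $mech = WWW::Mechanize->new(autocheck => 1);   # die on any HTTP error
        $mech->get($LOGIN_URL);
        $mech->submit_form(form_number => 1, fields => \%LOGIN_FORM);

        my ($balance) = $mech->content =~ $BALANCE_RE
            or die "Balance pattern no longer matches - site layout changed?";
        print "Balance: $balance\n";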


  • Should old/legacy/unused code be deleted from the source control repository?

    - by Checkers
    I've encountered this in multiple projects. As the code base evolves, some libraries, applications and components get abandoned and/or deprecated. Most people prefer to keep them in. The usual argument is that the code doesn't really take up any space and can be left alone until needed again. So the repository slowly turns into a cesspool of legacy code where it's hard to find anything. Some people delete old code, since it creates clutter and raises questions for new people, and you can restore any old snapshot of the code base anyway. However, you can't always find old code if you don't know where to look: none of the (common) VCSs I know offer search over the entire repository including all historical revisions, so the only way to search old files is to check out a revision in which the deleted file still exists. What would be a good approach to repository management?


  • Alternatives to checking against the system time

    - by vikp
    Hi, I have an application whose license should expire after some period of time. I can check the time in the application against the system time, but the system time can be changed by the administrator, so in my opinion checking against the system time alone is not a good idea. What alternatives do I have? Thank you
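
    One common partial mitigation, sketched below under the assumption that the application can persist a value between runs: remember the latest timestamp ever observed, and treat any backwards jump as tampering. Pairing this with an occasional check against a network time source closes more of the holes. LoadLastSeenUtc/SaveLastSeenUtc are hypothetical helpers that store the value somewhere non-obvious (registry, encrypted file, ...).

        using System;

        static void EnforceMonotonicClock()
        {
            // Detect the system clock being rolled back between runs.
            DateTime lastSeen = LoadLastSeenUtc();   // hypothetical persistence helper
            DateTime now = DateTime.UtcNow;

            if (now < lastSeen)
            {
                // Clock moved backwards: refuse to run, or demand online re-activation.
            }

            SaveLastSeenUtc(now > lastSeen ? now : lastSeen);   // hypothetical helper
        }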


  • Tips for Using Multiple Development Systems

    - by Tim Lytle
    When I travel, I don't pack up the desktop I use in the office and take it with me. Maybe I should, but I don't. However, since I'm a contract programmer I like to be able to work wherever I am; I'm mostly thinking of web development here. Version control goes a long way towards staying sane while working on multiple projects on multiple systems (two or three computers); however, there remain the issues of: IDE settings - different display sizes mean the IDE settings can't be completely synced, if at all; the database - if the database is 'external' (even if it's running on the same system, it's not in version control), how do you keep its structure in sync?; and the development stack - some projects need non-standard extensions, libraries, etc. installed. That's just an overview of some of the hassle involved in developing on multiple systems. I'll probably end up asking some specific questions, but I thought CW-style tips might reveal some things I wouldn't even think to ask about. Update: I guess this should also address tips for making it easier to upgrade or replace your development system (something I've just done). So, one tip per answer please, so the 'top' tips are easy to find. How do you make it easier to develop on multiple systems, or to transfer work after upgrading/replacing a development system?


  • Is it OK to put links to SO questions in a program's comments?

    - by WizardOfOdds
    In quite a few codebases you can see comments stating things like:

        // Workaround for defect 'xxx' (see bug 1434594 on Sun's bugparade)

    So I've got a few questions, but they're all related. Is it OK to put links to SO questions in a program's comments?

        // We're now mapping from the "sorted-on column" to original indices.
        //
        // There's apparently no easy way to do this in Java, so we're
        // re-inventing a wheel.
        //
        // (see why here, in SO question: http://stackoverflow.com/questions/951848)

    Do you do it? And what are the drawbacks of doing so? (See my first comment for a terrible drawback.)


  • Preprocessor "macro function" vs. function pointer - best practice?

    - by Dustin
    I recently started a small personal project (an RGB-to-BGR value conversion program) in C, and I realised that a function that converts from RGB to BGR can not only perform the conversion but also the inversion. Obviously that means I don't really need two functions rgb2bgr and bgr2rgb. However, does it matter whether I use a function pointer instead of a macro? For example:

        int rgb2bgr (const int rgb);

        /*
         * Should I do this because it allows the compiler to issue
         * appropriate error messages using the proper function name,
         * not to mention possible debugging benefits?
         */
        int (*bgr2rgb) (const int bgr) = rgb2bgr;

        /*
         * Or should I do this since it is merely a convenience
         * and they're really the same function anyway?
         */
        #define bgr2rgb(bgr) (rgb2bgr (bgr))

    I'm not necessarily looking for a change in execution efficiency; it's more of a subjective question out of curiosity. I am well aware of the fact that type safety is neither lost nor gained using either method. Would the function pointer merely be a convenience, or are there more practical benefits to be gained of which I am unaware?


  • What lessons can you learn from software maintenance?

    - by Vasil Remeniuk
    Hello everyone, In a perfect world, all software developers would work with cutting-edge technologies, creating systems from scratch. In real life, almost all of us have to maintain software from time to time (the unlucky ones do it on a regular basis). Personally, the first two years of my career were spent fixing bugs at a company that no longer exists (it was taken over by Oracle). And probably the biggest lesson I learned in that time: despite the pressure, always try to get as much information about the domain as possible (even if it's irrelevant to fixing a specific bug or adding a feature) - abstract domain knowledge doesn't lose value as fast as knowledge of trendy frameworks or methodologies. What lessons have you learned from maintenance?


  • When should we use Views, Temporary Tables and Direct Queries? What are the performance issues in a stored procedure?

    - by Shantanu Gupta
    I want to know the relative performance of using views, temp tables and direct queries in a stored procedure. I have a table that gets created whenever a certain trigger fires. I know this trigger will fire rarely, and only once, at setup time. Now I have to use that trigger-created table in many places for fetching data, and I can confirm that no one makes any changes to it, i.e. it is a read-only table. I have to join its data with multiple other tables to fetch results for further queries, say:

        select * from triggertable

    By using a temp table:

        select ... into #tx from triggertable join t2 join t3 and so on

        select a, b, c from #tx   -- do something
        select d, e, f from #tx   -- do something
        -- and so on: around 6-7 queries in a row in the stored procedure

    By using a view:

        create view viewname as
        select ... from triggertable join t2 join t3 and so on

        select a, b, c from viewname   -- do something
        select d, e, f from viewname   -- do something
        -- and so on: around 6-7 queries in a row in the stored procedure

    This view could be used in other places as well, so I would create it in the database rather than in the SP.

    By using a direct query:

        select a, b, c from (select ... from triggertable join t2 join t3 join ...)   -- do something
        select d, e, f from (select ... from triggertable join t2 join t3 join ...)   -- do something
        -- and so on: around 6-7 queries in a row in the stored procedure

    So I could use a view, a temporary table, or a direct query in all the upcoming queries. What would be the best to use in this case?


  • How to make a jQuery plugin (the right way)?

    - by macek
    I know there are jQuery cookie plugins out there, but I wanted to write one for the sake of better learning the jQuery plugin pattern. I like the separation of "work" into small, manageable functions, but I feel like I'm passing the name, value, and options arguments around too much. Is there a way this can be refactored? I'm looking for snippets of code to help illustrate the examples provided within answers. Any help is appreciated. Thanks :)

    Example usage:

        $.cookie('foo', 'bar', {expires: 7});
        $.cookie('foo');    //=> bar
        $.cookie('foo', null);
        $.cookie('foo');    //=> undefined

    Edit: I did a little bit of work on this. You can view the revision history to see where this has come from. It still feels like more refactoring could be done to optimize the flow a bit. Any ideas?

    The plugin:

        (function ($) {
          // Get, set or unset a cookie, depending on the arguments given.
          $.cookie = function (name, value, options) {
            if (typeof value == 'undefined') {
              return get(name);
            } else {
              options = $.extend({}, $.cookie.defaults, options || {});
              return (value != null) ? set(name, value, options) : unset(name, options);
            }
          };

          $.cookie.defaults = {
            expires: null,
            path: '/',
            domain: null,
            secure: false
          };

          var set = function (name, value, options) {
            return document.cookie = options_string(name, value, options);
          };

          var get = function (name) {
            var cookies = {};
            $.map(document.cookie.split(';'), function (pair) {
              var c = $.trim(pair).split('=');
              cookies[c[0]] = c[1];
            });
            // A missing cookie comes back as undefined, not the string "undefined".
            return cookies[name] === undefined ? undefined : decodeURIComponent(cookies[name]);
          };

          var unset = function (name, options) {
            options.expires = -1;   // expire immediately
            set(name, '', options);
          };

          var options_string = function (name, value, options) {
            var pairs = [param.name(name, value)];
            $.each(options, function (k, v) {
              pairs.push(param[k](v));
            });
            // $.map drops nulls, so options that don't apply fall out here.
            return $.map(pairs, function (p) {
              return p === null ? null : p;
            }).join(';');
          };

          var param = {
            name: function (name, value) {
              return name + "=" + encodeURIComponent(value);
            },
            expires: function (value) {
              var d;
              if (value === null) {                    // no expiry: session cookie
                return null;
              } else if (typeof value == "number") {   // number of days from now
                d = new Date();
                d.setTime(d.getTime() + (value * 24 * 60 * 60 * 1000));
              } else if (value instanceof Date) {      // an explicit Date object
                d = value;
              }
              return "expires=" + d.toUTCString();
            },
            path: function (value) {
              return "path=" + value;
            },
            domain: function (value) {
              return value === null ? null : "domain=" + value;
            },
            secure: function (bool) {
              return bool ? "secure" : null;
            }
          };
        })(jQuery);


  • Passing HttpFileCollectionBase to the Business Layer - Bad?

    - by Terry_Brown
    Hopefully there's an easy solution to this one. I have my MVC2 project, which allows uploads of files on certain forms. I'm trying to keep my controllers lean and handle this sort of processing within the business layer. That said, HttpFileCollectionBase obviously lives in the System.Web assembly. Ideally I want to call something like:

        UserService.SaveEvidenceFiles(MyUser user, HttpFileCollectionBase files);

    or something similar, and have my business layer handle the logic of how and where these things are saved. But it feels a little icky to have my models layer hold a reference to System.Web, in terms of separation of concerns etc. So, the options I'm aware of are: have the web project handle this, with my controllers getting fatter; map the HttpFileCollectionBase to something my business layer likes (sketched below); or pass the collection through and accept that my business project references System.Web. I would love some feedback on best-practice approaches to this sort of thing - even if not specifically within the context of the above.
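
    A minimal sketch of the mapping option, assuming a hypothetical EvidenceFile DTO owned by the business layer (all names here are illustrative):

        using System.Collections.Generic;
        using System.IO;
        using System.Web;
        using System.Web.Mvc;

        // Business layer: a plain DTO, no System.Web reference required.
        public class EvidenceFile
        {
            public string FileName { get; set; }
            public Stream Content { get; set; }
        }

        // Web layer: the controller maps before calling down.
        public class EvidenceController : Controller
        {
            [HttpPost]
            public ActionResult Upload()
            {
                var files = new List<EvidenceFile>();
                for (int i = 0; i < Request.Files.Count; i++)
                {
                    HttpPostedFileBase posted = Request.Files[i];
                    files.Add(new EvidenceFile
                    {
                        FileName = posted.FileName,
                        Content = posted.InputStream
                    });
                }
                // userService.SaveEvidenceFiles(user, files);  // hypothetical business call
                return RedirectToAction("Index");
            }
        }

    The controller stays thin - it only translates between the web and business worlds - and the business layer's signature speaks its own language.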


  • C# Async call garbage collection

    - by Troy
    Hello. I am working on a Silverlight/WCF application and of course have numerous async calls throughout the Silverlight program. I was wondering about the best way to handle the creation of the client classes and the event subscriptions. Specifically, if I subscribe to an event in a method, does the subscription fall out of scope after the method returns?

        internal class MyClass
        {
            public void OnMyButtonClicked()
            {
                var wcfClient = new WcfClient();
                wcfClient.SomeMethodFinished += OnMethodCompleted;
                wcfClient.SomeMethodAsync();
            }

            private void OnMethodCompleted(object sender, EventArgs args)
            {
                // Do something with the result.
                // After this method, does the subscription to the event
                // fall out of scope for garbage collection?
            }
        }

    Will I run into problems if I call the function again and create another subscription? Thanks in advance to anyone who responds.
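
    One common way to make the lifetime explicit, sketched under the assumption that the proxy is kept in a field so the completion handler can detach itself (WcfClient and SomeMethodFinished are the names from the question):

        internal class MyClass
        {
            private WcfClient wcfClient;

            public void OnMyButtonClicked()
            {
                wcfClient = new WcfClient();
                wcfClient.SomeMethodFinished += OnMethodCompleted;
                wcfClient.SomeMethodAsync();
            }

            private void OnMethodCompleted(object sender, EventArgs args)
            {
                // Detach so the proxy no longer holds a reference to this instance.
                wcfClient.SomeMethodFinished -= OnMethodCompleted;
                // ... handle the result ...
            }
        }

    Remember that the reference runs from publisher to subscriber: it's the client that keeps MyClass reachable, not the other way round, so unsubscribing once the call completes frees both for collection.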


  • Where to draw the line between efficiency and practicality

    - by dclowd9901
    I understand very well the need for websites' front ends to be coded and compressed as tightly as possible; however, I feel I have laxer standards than others when it comes to practical applications. For instance, while I understand why some would object, I don't see anything wrong with putting selectors on the <html> or <body> tags of a website with an expected small visitor count. I would only do this for a cheap website for a small client, because I can't really justify the cost in time otherwise. So, that said, do you think it's okay to draw a line? Where do you draw yours?

