Search Results

Search found 10115 results on 405 pages for 'coding practices'.

Page 120/405

  • Handling input from a keyboard wedge

    - by JDibble
    Following on from Mykroft's question "Best way to handle input from a keyboard wedge" (http://stackoverflow.com/questions/42437/best-way-to-handle-input-from-a-keyboard-wedge): I need to write a class that intercepts keystrokes. If the input is determined to be from the keyboard wedge (as described in that post), the data will be directed to POS classes to handle; otherwise the keystrokes must be passed on to Windows to be handled in the normal manner. This raises two questions: How can I intercept keystrokes when not in a WinForm? And how can I pass the keypresses on to Windows? Thanks, JDibble
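
    A minimal sketch of one approach, in C#: a global low-level keyboard hook (WH_KEYBOARD_LL), which works without a WinForm. Returning a non-zero value from the callback swallows the keystroke, while CallNextHookEx passes it on to Windows for normal handling. LooksLikeWedgeInput is a hypothetical stand-in for the wedge-detection heuristic described in the linked post, not part of any real API.

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;

        static class WedgeHook
        {
            const int WH_KEYBOARD_LL = 13;

            delegate IntPtr HookProc(int nCode, IntPtr wParam, IntPtr lParam);

            [DllImport("user32.dll", SetLastError = true)]
            static extern IntPtr SetWindowsHookEx(int idHook, HookProc lpfn, IntPtr hMod, uint dwThreadId);

            [DllImport("user32.dll")]
            static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);

            [DllImport("kernel32.dll")]
            static extern IntPtr GetModuleHandle(string lpModuleName);

            // Keep a reference to the delegate so the GC cannot collect it while hooked.
            static readonly HookProc Proc = Callback;

            public static void Install()
            {
                using (var module = Process.GetCurrentProcess().MainModule)
                    SetWindowsHookEx(WH_KEYBOARD_LL, Proc, GetModuleHandle(module.ModuleName), 0);
            }

            static IntPtr Callback(int nCode, IntPtr wParam, IntPtr lParam)
            {
                if (nCode >= 0 && LooksLikeWedgeInput(lParam))
                {
                    // Route the scan data to the POS classes here ...
                    return (IntPtr)1;   // non-zero: Windows never sees the keystroke
                }
                // Everything else flows on to Windows for normal handling.
                return CallNextHookEx(IntPtr.Zero, nCode, wParam, lParam);
            }

            // Hypothetical placeholder for the timing/prefix heuristic.
            static bool LooksLikeWedgeInput(IntPtr lParam) { return false; }
        }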

  • Best practice -- Content Tracking Remote Data (cURL, file_get_contents, cron, et al.)?

    - by user322787
    I am attempting to build a script that will log data that changes every second. My initial thought was "just run a PHP file that does a cURL every second from cron", but I have a very strong feeling that this isn't the right way to go about it. Here are my specifications: There are currently 10 sites I need to gather data from and log to a database; this number will invariably increase over time, so the solution needs to be scalable. Each site spits its data out to a URL every second, but keeps only 10 lines on the page, and it can sometimes spit out up to 10 lines at a time, so I need to pick the data up every second to be sure I get all of it. As I will also be writing this data to my own DB, there will be I/O every second of every day for a considerably long time. Barring magic, what is the most efficient way to achieve this? It might help to know that the data I am getting every second is very small, under 500 bytes.
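
    A sketch of one alternative to spawning a process from cron every second: a single long-running worker with one polling loop per site, de-duplicating the rolling 10-line window before writing anywhere. C# is used here purely for illustration (the original mentions PHP); the unbounded 'seen' set and the console output are placeholders, not a production design.

        using System;
        using System.Collections.Generic;
        using System.Net.Http;
        using System.Threading.Tasks;

        class Poller
        {
            static readonly HttpClient Http = new HttpClient();

            static async Task PollForever(string url)
            {
                var seen = new HashSet<string>();   // naive de-dup of the rolling window
                while (true)
                {
                    try
                    {
                        string body = await Http.GetStringAsync(url);
                        foreach (var line in body.Split('\n'))
                            if (seen.Add(line))
                                Console.WriteLine(line);   // replace with a batched DB insert
                    }
                    catch (HttpRequestException) { /* log and keep polling */ }
                    await Task.Delay(TimeSpan.FromSeconds(1));
                }
            }

            // One loop per site URL passed on the command line; scaling up is
            // a matter of adding URLs rather than adding cron entries.
            static Task Main(string[] args) =>
                Task.WhenAll(Array.ConvertAll(args, PollForever));
        }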

  • Should old/legacy/unused code be deleted from source control repository?

    - by Checkers
    I've encountered this in multiple projects. As the code base evolves, some libraries, applications, and components get abandoned and/or deprecated. Most people prefer to keep them in. The usual argument is that the code does not really take up any space, and it can be left alone until needed again. So the repository slowly turns into a cesspool of legacy code, where it's hard to find anything. Some people delete old code, since it creates clutter and raises more questions for new people, and you can restore any old snapshot of the code base anyway. However, you can't always find the old code if you don't know where to look: none of the (common) VCSs I know offer search over the entire repository including all historical revisions, and the only way to search old files is to check out a revision in which the deleted file still exists. What would be a good approach to repository management?

  • Win7: Right place to install a program that may be 'shared' with other computers

    - by robsoft
    We have an app that currently installs itself into 'program files\our app', and it puts its internal data files into the common Application Data folder. This means the program is available to any user on that particular PC. Now we want to make a multi-user version of this program, with multiple PCs accessing it at the same time across the network. In the bad old days, under XP, we'd just have the user who installed the app 'share' the app directory and off we'd go. In principle, is this still the 'right' way to do it under Vista/Windows 7? We'd like to do this 'properly' and be as compliant as possible! Is there a recommended 'Microsoft' approach for doing this, or is it largely down to whatever we can get away with and subsequently support (hah!)? I've tried researching this on the MS websites but haven't found anything too helpful at all - it'd be really useful to have an 'if you're trying to install this kind of thing, put it here' type of guide for developers!

  • Is it OK to put links to SO questions in a program's comments?

    - by WizardOfOdds
    In quite a few codebases you can see comments stating things like:

        // Workaround for defect 'xxx' (see bug 1434594 on Sun's bugparade)

    So I've got a few questions, but they're all related. Is it OK to put links to SO questions in a program's comments:

        // We're now mapping from the "sorted-on column" to original indices.
        //
        // There's apparently no easy way to do this in Java, so we're
        // re-inventing a wheel.
        //
        // (see why here, in SO question: http://stackoverflow.com/questions/951848)

    Do you do it? And what are the drawbacks of doing so? (See my first comment for a terrible drawback.)

  • Alternatives to checking against the system time

    - by vikp
    Hi, I have an application whose license should expire after some period of time. I can check the time in the application against the system time, but the system time can be changed by the administrator, so in my opinion checking against the system time is not a good idea. What alternatives do I have? Thank you
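
    One partial mitigation, sketched in C# (the stamp-file location and all names are illustrative): persist the latest timestamp the application has ever observed, and treat a clock that later reads earlier than that high-water mark as evidence of tampering. A determined administrator can still delete the stamp file, so checking a trusted external source (e.g. an NTP server or your own licence server) remains the stronger alternative.

        using System;
        using System.IO;

        static class ClockGuard
        {
            static readonly string StampPath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                "app.stamp");

            // Returns true if the system clock appears to have been set back.
            public static bool LooksTampered()
            {
                DateTime now = DateTime.UtcNow;
                if (File.Exists(StampPath))
                {
                    long ticks = long.Parse(File.ReadAllText(StampPath));
                    if (now < new DateTime(ticks, DateTimeKind.Utc))
                        return true;                 // clock moved backwards
                }
                File.WriteAllText(StampPath, now.Ticks.ToString());
                return false;
            }
        }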

  • How to not over-use jQuery?

    - by Fedyashev Nikita
    Typical jQuery over-use:

        $('button').click(function() {
            alert('Button clicked: ' + $(this).attr('id'));
        });

    Which can be simplified to:

        $('button').click(function() {
            alert('Button clicked: ' + this.id);
        });

    Which is way faster. Can you give me any more examples of similar jQuery over-use?

  • Passing HttpFileCollectionBase to the Business Layer - Bad?

    - by Terry_Brown
    Hopefully there's an easy solution to this one. I have my MVC2 project, which allows uploads of files on certain forms. I'm trying to keep my controllers lean and handle this sort of processing within the business layer. That said, HttpFileCollectionBase is obviously in the System.Web assembly. Ideally I want to call something like:

        UserService.SaveEvidenceFiles(MyUser user, HttpFileCollectionBase files);

    or something similar, and have my business layer handle the logic of how and where these things are saved. But it feels a little icky for my models layer to hold a reference to System.Web, in terms of separation of concerns etc. So we have (that I'm aware of) a few options: have the web project handle this, and let my controllers get fatter; map the HttpFileCollectionBase to something my business layer likes; or pass the collection through and accept that my business project references System.Web. Would love some feedback on best-practice approaches to this sort of thing - even if not specifically within the context of the above.
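
    A sketch of the second option - mapping at the controller boundary so that only the web project references System.Web. EvidenceFile, IUserService and the shape of MyUser are hypothetical names for illustration, not from the original post.

        using System.Collections.Generic;
        using System.IO;
        using System.Web;
        using System.Web.Mvc;

        // Plain DTO the business layer can accept without touching System.Web.
        public class EvidenceFile
        {
            public string FileName { get; set; }
            public string ContentType { get; set; }
            public Stream Content { get; set; }
        }

        public interface IUserService
        {
            void SaveEvidenceFiles(MyUser user, IList<EvidenceFile> files);
        }

        public class MyUser { /* as in the original post */ }

        public class UploadController : Controller
        {
            readonly IUserService userService;

            public UploadController(IUserService userService)
            {
                this.userService = userService;
            }

            [HttpPost]
            public ActionResult Upload(MyUser user)
            {
                var files = new List<EvidenceFile>();
                foreach (string key in Request.Files)          // HttpFileCollectionBase stays here
                {
                    HttpPostedFileBase posted = Request.Files[key];
                    if (posted != null && posted.ContentLength > 0)
                        files.Add(new EvidenceFile
                        {
                            FileName = Path.GetFileName(posted.FileName),
                            ContentType = posted.ContentType,
                            Content = posted.InputStream
                        });
                }
                userService.SaveEvidenceFiles(user, files);    // business layer sees DTOs only
                return RedirectToAction("Index");
            }
        }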

  • Where to draw the line between efficiency and practicality

    - by dclowd9901
    I understand very well the need for websites' front ends to be coded and compressed as much as possible; however, I feel like I have laxer standards than others when it comes to practical applications. For instance, while I understand why some would object, I don't see anything wrong with putting selectors on the <html> or <body> tags on a website with an expected small visitation rate. I would only do this for a cheap website for a small client, because I can't really justify the cost in time otherwise. So, that said, do you think it's okay to draw a line? Where do you draw yours?

  • anti-if campaign

    - by Andrew Siemer
    I recently ran across a very interesting site that expresses a very interesting idea - the anti-if campaign. You can see it at www.antiifcampaign.com. I have to agree that complex nested IF statements are an absolute pain in the rear. I am currently on a project that, until very recently, had some crazy nested IFs that scrolled off to the right for quite a ways. We cured our issues in two ways: we used Windows Workflow Foundation to address routing (or workflow) concerns, and we are in the process of implementing all of our business rules using ILOG Rules for .NET (recently purchased by IBM!!). This has for the most part cured our nested-IF pains... but I find myself wondering how many people cure their pains in the manner the good folks at the Anti-If Campaign suggest (see an example here): by creating numerous abstract classes to represent a given scenario that was originally covered by the nested IF. I wonder if another way to remove this complexity might be to use an IoC container such as StructureMap to move in and out of different bits of functionality. Either way... Question: given a scenario where I have a complex nested IF or SWITCH statement that evaluates a given type of thing (say an enum) to decide how to process that thing by enum type, what are some ways to do the same form of processing without the IF or SWITCH hierarchical structure?

        public enum WidgetTypes
        {
            Type1,
            Type2,
            Type3,
            Type4
        }
        ...
        WidgetTypes _myType = WidgetTypes.Type1;
        ...
        switch (_myType)
        {
            case WidgetTypes.Type1:
                //do something
                break;
            case WidgetTypes.Type2:
                //do something
                break;
            //etc...
        }
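
    One switch-free shape for exactly this enum case - a sketch, not the campaign's own prescription: build a lookup from enum value to handler once, then dispatch by key. It reuses the WidgetTypes enum from the question above.

        using System;
        using System.Collections.Generic;

        static class WidgetProcessor
        {
            // Each former case branch becomes one entry, registered once.
            static readonly Dictionary<WidgetTypes, Action> Handlers =
                new Dictionary<WidgetTypes, Action>
                {
                    { WidgetTypes.Type1, () => { /* do something */ } },
                    { WidgetTypes.Type2, () => { /* do something */ } },
                    { WidgetTypes.Type3, () => { /* do something */ } },
                    { WidgetTypes.Type4, () => { /* do something */ } },
                };

            public static void Process(WidgetTypes type)
            {
                // Lookup replaces the switch; an unmapped value throws
                // KeyNotFoundException instead of silently falling through.
                Handlers[type]();
            }
        }

    The polymorphic route the campaign advocates pushes each branch body into its own class behind a common interface; the dictionary gives the same flat dispatch without a class per case, which often reads better for small handlers.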

  • JavaScript-library-based Project Organization

    - by Laith J
    Hello, I'm very new to the JavaScript library world. I have used JS by itself before to create a mini social network, but this is the first time I've used a JS library, and I really don't know how to go about it. I'm planning to use Google Closure, and I'm really not sure how I should organize the code. Should I put everything in one file, since it's a web app and should have one screen? Should I separate the code into many chunks and put them in different files? Or should I put different dialogs (like settings) on a separate page, and thus in a separate file? Like all programmers I'm a perfectionist, so please help me out with this one. Thanks.

  • What makes an effective UI for displaying versioning of structured hierarchical data

    - by Fadrian Sudaman
    Traditional version control systems display versioning information by grouping projects-folders-files in a tree view on the left and a details view on the right; you then click on each item to look at the revision history for that configuration item. Assuming that I have all the historical versioning information for a project available from an object-oriented model perspective (e.g. classes - methods - parameters and so on), what do you think would be the most effective way to present such information in a UI, so that you can easily navigate and access both the snapshot view of the project and the historical versioning information? Put yourself in the position of using a tool like this every day in your job, the way you currently use SVN, SS, Perforce or any VCS: what would contribute to the usability, productivity and effectiveness of the tool? I personally find the classical way of displaying folders and files described above very restrictive and less effective for deeply nested logical models. Assuming this is a greenfield project, not restricted to any specific technology, how do you think I should best approach it? I am looking for ideas and input here to add value to my research project. Feel free to make any suggestions you think are valuable. Thanks again to anyone who shares their thoughts.

  • What's the best way to write a maintainable web scraping app?

    - by Benj
    I wrote a Perl script a while ago which logged into my online banking and emailed me my balance and a mini-statement every day. I found it very useful for keeping track of my finances. The only problem is that I wrote it using just Perl and curl, and it was quite complicated and hard to maintain. After a few instances of my bank changing their web page, I got fed up with debugging it to keep it up to date. So what's the best way of writing such a program so that it's easy to maintain? I'd like to write a nice, well-engineered version in either Perl or Java which will be easy to update when the bank inevitably fiddles with its web site.

  • How do we name test methods where we are checking for more than one condition?

    - by Sandbox
    I follow the technique specified in Roy Osherove's The Art of Unit Testing when naming test methods - MethodName_Scenario_Expectation. It suits my 'unit' tests perfectly well. But for the tests I write against a 'controller' or 'coordinator' class, there isn't necessarily a single method I want to test. For these tests, I generate multiple conditions that make up one scenario and then verify the expectation. For example, I may set some properties on different instances, generate an event, and then verify that my expectation of the controller/coordinator is met. Now, my controller handles events using a private event handler. Here my scenario is that I set some properties - say condition1, condition2 and condition3 - and the scenario also includes an event being raised. I don't have a method name, as my event handler is private. How do I name such a test method?
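
    One possible shape - a sketch, not guidance from the book: drop the method-name slot, let the scenario segment carry the conditions, and keep the Scenario_Expectation tail intact. The NUnit attributes and all names below are illustrative.

        using NUnit.Framework;

        [TestFixture]
        public class OrderCoordinatorTests   // hypothetical coordinator under test
        {
            [Test]
            public void Condition1Condition2Condition3_WhenEventRaised_ExpectationIsMet()
            {
                // arrange: set the properties on the relevant instances
                // act:     raise the event the private handler listens for
                // assert:  verify the coordinator's observable outcome
            }
        }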

  • How to make a jQuery plugin (the right way)?

    - by macek
    I know there are jQuery cookie plugins out there, but I wanted to write one for the sake of better learning the jQuery plugin pattern. I like the separation of the work into small, manageable functions, but I feel like I'm passing the name, value, and options arguments around too much. Is there a way this can be refactored? I'm looking for snippets of code to help illustrate the examples provided within answers. Any help is appreciated. Thanks :)

    Example usage:

        $.cookie('foo', 'bar', {expires:7});
        $.cookie('foo');        //=> bar
        $.cookie('foo', null);
        $.cookie('foo');        //=> undefined

    Edit: I did a little bit of work on this. You can view the revision history to see where this has come from. It still feels like more refactoring can be done to optimize the flow a bit. Any ideas?

    The plugin:

        (function($){
            $.cookie = function(name, value, options) {
                if (typeof value == 'undefined') {
                    return get(name);
                } else {
                    options = $.extend({}, $.cookie.defaults, options || {});
                    return (value != null) ? set(name, value, options) : unset(name, options);
                }
            };

            $.cookie.defaults = {
                expires: null,
                path: '/',
                domain: null,
                secure: false
            };

            var set = function(name, value, options){
                console.log(options);
                return document.cookie = options_string(name, value, options);
            };

            var get = function(name){
                var cookies = {};
                $.map(document.cookie.split(';'), function(pair){
                    var c = $.trim(pair).split('=');
                    cookies[c[0]] = c[1];
                });
                return decodeURIComponent(cookies[name]);
            };

            var unset = function(name, options){
                options.expires = -1;
                set(name, '', options);
            };

            var options_string = function(name, value, options){
                var pairs = [param.name(name, value)];
                $.each(options, function(k, v){
                    pairs.push(param[k](v));
                });
                return $.map(pairs, function(p){
                    return p === null ? null : p;
                }).join(';');
            };

            var param = {
                name: function(name, value){
                    return name + "=" + encodeURIComponent(value);
                },
                expires: function(value){
                    var d;
                    // no expiry
                    if (value === null) {
                        return null;
                    }
                    // number of days
                    else if (typeof value == "number") {
                        d = new Date();
                        d.setTime(d.getTime() + (value * 24 * 60 * 60 * 1000));
                    }
                    // date object
                    else if (typeof value == "object" && value instanceof Date) {
                        d = value;
                    }
                    return "expires=" + d.toUTCString();
                },
                path: function(value){
                    return "path=" + value;
                },
                domain: function(value){
                    return value === null ? null : "domain=" + value;
                },
                secure: function(bool){
                    return bool ? "secure" : null;
                }
            };
        })(jQuery);

  • Should Factories Persist Entities?

    - by mxmissile
    Should factories persist the entities they build, or is that the job of the caller? A pseudo-example:

        public class OrderFactory
        {
            public Order Build()
            {
                var order = new Order();
                ....
                return order;
            }
        }

        public class OrderController : Controller
        {
            public OrderController(IRepository repository)
            {
                this.repository = repository;
            }

            public ActionResult MyAction()
            {
                var order = factory.Build();
                repository.Insert(order);
                ...
            }
        }

    or:

        public class OrderFactory
        {
            public OrderFactory(IRepository repository)
            {
                this.repository = repository;
            }

            public Order Build()
            {
                var order = new Order();
                ...
                repository.Insert(order);
                return order;
            }
        }

        public class OrderController : Controller
        {
            public ActionResult MyAction()
            {
                var order = factory.Build();
                ...
            }
        }

    Is there a recommended practice here?

  • Is it good practice to put private API in the .m files and public API in .h files in Cocoa?

    - by Paperflyer
    Many of the classes in my current project have several properties and methods that are only ever called from within the class itself. Also, they might mess with the workings of the class depending on its current state. Currently, all these interfaces are defined in the main interface declaration in the .h files. Is it considered good practice to put the “private” methods and properties at the top of the .m files? This won't ever affect anything, since I am very likely the only person who will ever look at this source code, but of course it would be interesting to know for future projects.

  • De-normalization for the sake of reports - Good or Bad?

    - by Travis
    What are the pros and cons of de-normalizing an enterprise application database because it will make writing reports easier?

    Pro: designing reports in SSRS will probably be "easier", since no joins will be necessary.

    Con: developing/maintaining the app to handle de-normalized data will become more difficult due to duplication of data and synchronization.

    Others?

  • What lessons can you learn from software maintenance?

    - by Vasil Remeniuk
    Hello everyone, In a perfect world, all software developers would work with cutting-edge technologies, creating systems from scratch. In real life, almost all of us have to maintain software from time to time (the unlucky ones do it on a regular basis). I personally spent the first two years of my career fixing bugs at a company that no longer exists (it was acquired by Oracle). And probably the biggest lesson I learned in that time: despite the pressure, always try to get as much information about the domain as possible (even if it's irrelevant to fixing a specific bug or adding a feature) - abstract domain knowledge doesn't lose value as fast as knowledge about trendy frameworks or methodologies. What lessons have you learned from maintenance?

  • When should we use Views, Temporary Tables and Direct Queries? What are the performance issues in a Stored Procedure?

    - by Shantanu Gupta
    I want to know the relative performance of using views, temp tables and direct queries in a stored procedure. I have a table that gets created whenever a certain trigger fires. I know this trigger will fire very rarely, and only once, at setup time. I then have to use that trigger-created table in many places to fetch data, and I have confirmed that no one makes any changes to it, i.e. it is a read-only table. I have to join its data with multiple other tables for further queries, say:

        select * from triggertable

    By using a temp table:

        select ... into #tx from triggertable join t2 join t3 -- and so on

        select a, b, c from #tx
        --do something
        select d, e, f from #tx
        --do something
        --and so on: around 6-7 queries in a row in the stored procedure

    By using a view:

        create view viewname as
        select ... from triggertable join t2 join t3 -- and so on

        select a, b, c from viewname
        --do something
        select d, e, f from viewname
        --do something
        --and so on: around 6-7 queries in a row in the stored procedure

    This view could be used in other places as well, so I would create it in the database rather than inside the stored procedure.

    By using a direct query:

        select a, b, c from (select ... from triggertable join t2 join t3 join ...)
        --do something
        select d, e, f from (select ... from triggertable join t2 join t3 join ...)
        --do something
        --and so on: around 6-7 queries in a row in the stored procedure

    So in all the upcoming queries I can use a view, a temporary table, or a direct query. Which would be best to use in this case?

  • Preprocessor "macro function" vs. function pointer - best practice?

    - by Dustin
    I recently started a small personal project in C (an RGB-value to BGR-value conversion program), and I realised that a function that converts from RGB to BGR can perform not only the conversion but also the inversion. Obviously that means I don't really need two functions, rgb2bgr and bgr2rgb. However, does it matter whether I use a function pointer instead of a macro? For example:

        int rgb2bgr (const int rgb);

        /*
         * Should I do this because it allows the compiler to issue
         * appropriate error messages using the proper function name,
         * not to mention possible debugging benefits?
         */
        int (*bgr2rgb) (const int bgr) = rgb2bgr;

        /*
         * Or should I do this since it is merely a convenience
         * and they're really the same function anyway?
         */
        #define bgr2rgb(bgr) (rgb2bgr (bgr))

    I'm not necessarily looking for a change in execution efficiency; it's more of a subjective question out of curiosity. I am well aware that type safety is neither lost nor gained using either method. Would the function pointer merely be a convenience, or are there more practical benefits to be gained of which I am unaware?
