Search Results

Search found 17731 results on 710 pages for 'programming practices'.

Page 195 of 710

  • Does it ever make sense to make a fundamental (non-pointer) parameter const?

    - by Scott Smith
    I recently had an exchange with another C++ developer about the following use of const:

        void Foo(const int bar);

    He felt that using const in this way was good practice. I argued that it does nothing for the caller of the function (since a copy of the argument is passed, there is no additional guarantee of safety with regard to overwrite). In addition, doing this prevents the implementer of Foo from modifying their private copy of the argument. So it both mandates and advertises an implementation detail. Not the end of the world, but certainly not something to be recommended as good practice. I'm curious what others think on this issue.

    Edit: OK, I didn't realize that the const-ness of the arguments doesn't factor into the signature of the function. So it is possible to mark the arguments as const in the implementation (.cpp) and not in the header (.h), and the compiler is fine with that. That being the case, I guess the policy should be the same as for making local variables const. One could argue that having different-looking signatures in the header and source file would confuse others (as it would have confused me). While I try to follow the Principle of Least Astonishment in whatever I write, I guess it's reasonable to expect developers to recognize this as legal and useful.
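
    A minimal sketch of the header/implementation split mentioned in the edit (Foo is the function from the question; top-level const on a by-value parameter does not participate in the function's signature, so the two spellings refer to the same function):

        // Foo.h -- the caller-facing declaration omits the const
        void Foo(int bar);

        // Foo.cpp -- the definition may add it to keep the local copy read-only
        void Foo(const int bar)
        {
            // bar = 42;  // would not compile here: bar is const in this definition
        }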

  • small scale web site - global javascript file style/format/pattern - improving maintainability

    - by yaya3
    I frequently create (and inherit) small to medium websites where I have the following sort of code in a single file (normally named global.js or application.js or projectname.js). If functions get big, I normally put them in a separate file and call them from the $(document).ready() section at the bottom of the file. If I have a few functions that are unique to certain pages, I normally have another switch statement for the body class inside the $(document).ready() section. How could I restructure this code to make it more maintainable? Note: I am less interested in the functions' innards and more in the structure, and how different types of functions should be dealt with. I've also posted the code here - http://pastie.org/999932 - in case it makes it any easier.

        var ProjectNameEnvironment = {};

        function someFunctionUniqueToTheHomepageNotWorthMakingConfigurable () {
          $('.foo').hide();
          $('.bar').click(function(){
            $('.foo').show();
          });
        }

        function functionThatIsWorthMakingConfigurable(config) {
          var foo = config.foo || 700;
          var bar = 200;
          return foo * bar;
        }

        function globallyRequiredJqueryPluginTrigger (tooltip_string) {
          var tooltipTrigger = $(tooltip_string);
          tooltipTrigger.tooltip({
            showURL: false
            ...
          });
        }

        function minorUtilityOneLiner (selector) {
          $(selector).find('li:even').not('li ul li').addClass('even');
        }

        var Lightbox = {};

        Lightbox.setup = function(){
          $('li#foo a').attr('href','#alpha');
          $('li#bar a').attr('href','#beta');
        }

        Lightbox.init = function (config){
          if (typeof $.fn.fancybox == 'function') {
            Lightbox.setup();
            var fade_in_speed = config.fade_in_speed || 1000;
            var frame_height = config.frame_height || 1700;
            $(config.selector).fancybox({
              frameHeight : frame_height,
              callbackOnShow: function() {
                var content_to_load = config.content_to_load;
                ...
              },
              callbackOnClose : function(){
                $('body').height($('body').height());
              }
            });
          } else {
            if (ProjectNameEnvironment.debug) {
              alert('the fancybox plugin has not been loaded');
            }
          }
        }

        // ---------- order of execution -----------

        $(document).ready(function () {
          urls = urlConfig();

          (function globalFunctions() {
            $('.tooltip-trigger').each(function(){
              globallyRequiredJqueryPluginTrigger(this);
            });
            minorUtilityOneLiner('ul.foo')
            Lightbox.init({
              selector : 'a#a-lightbox-trigger-js',
              ...
            });
            Lightbox.init({
              selector : 'a#another-lightbox-trigger-js',
              ...
            });
          })();

          if ( $('body').attr('id') == 'home-page' ) {
            (function homeFunctions() {
              someFunctionUniqueToTheHomepageNotWorthMakingConfigurable ();
            })();
          }
        });

  • Debugging a Browser Redirect Loop

    - by just_wes
    Hi all, I am using CakePHP with the Auth and ACL components. My page loads fine for non-registered users, but if I try to log in as a registered user I get an infinite redirect loop in the browser. I am sure that this is some sort of permissions problem, but the problem exists even for users who have permissions for everything. The only way to prevent this behavior is to allow '*' in my AppController's beforeFilter method. What is the best way to debug this sort of problem? Thanks!

  • Unit testing huge applications - Proven methodologies?

    - by NLV
    Hello, members. I've been working on Windows Forms and ASP.NET applications for the past 10 months, and I've always wondered how to perform proper unit testing on a complete application in a robust manner, covering all the scenarios. I have the following questions:
    - What are the standard mechanisms for performing unit testing and writing test cases?
    - Do the methodologies change based on the nature of the application, such as Windows Forms vs. web applications?
    - What is the best approach to make sure we cover all the scenarios?
    - Are there any popular books on this?
    - What are popular tools for performing unit testing?

  • How should I architect JasperReports with a PHP front+backend system

    - by Itay Moav
    Our system is written completely in PHP. For various business reasons (which are a given) I need to build the reports of the system using JasperReports. What architecture should I use? Should I set up Jasper as a stand-alone server (if possible) and let the PHP query against it, or should I have it generate the reports with a cron job and then let the PHP scoop up the files and send them to the web client/browser...

  • Representing xml through a single class

    - by Charles
    I am trying to abstract away the difficulties of configuring an application that we use. This application takes an XML configuration file, and it can be a bit bothersome to manually edit this file, especially when we are trying to set up some automatic testing scenarios. I am finding that reading XML is nice and pretty easy: you get a network of element nodes that you can just walk through and build your structures quite nicely. However, I am slowly finding that the reverse is not quite so nice. I want to be able to build an XML configuration file through a single easy-to-use interface, and because XML is composed of a system of nodes I am having a lot of struggle trying to maintain the 'easy' part. Does anyone know of any examples or samples that easily and intuitively build XML files without declaring a bunch of element-type classes and expecting the user to build the network themselves? For example, my desired XML output is like so:

        <cook version="1.1">
          <recipe name="chocolate chip cookie">
            <ingredients>
              <ingredient name="flour" amount="2" units="cups"/>
              <ingredient name="eggs" amount="2" units="" />
              <ingredient name="cooking chocolate" amount="5" units="cups" />
            </ingredients>
            <directions>
              <direction name="step 1">Preheat oven</direction>
              <direction name="step 2">Mix flour, egg, and chocolate</direction>
              <direction name="step 3">bake</direction>
            </directions>
          </recipe>
          <recipe name="hot dog">
          ...

    How would I go about designing a class to build that network of elements and make one easy-to-use interface for creating recipes? Right now I have a recipe object, an ingredient object, and a direction object. The user must create each one, set the attributes on the class, and attach them to the root object, which assembles the XML elements and outputs the formatted XML. It's not very pretty and I just know there has to be a better way. I am using Python, so bonus points for Pythonic solutions.
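
    One common approach (a rough sketch of my own, not the poster's code) is to drive a single recursive builder from plain dicts and lists with the standard library's xml.etree.ElementTree, so callers describe the tree as data instead of wiring element objects together; the tag and attribute names below are taken from the recipe example above:

        import xml.etree.ElementTree as ET

        def build(tag, attrib=None, children=None, text=None):
            """Recursively build an Element from plain dicts and lists."""
            elem = ET.Element(tag, attrib or {})
            if text is not None:
                elem.text = text
            for child in children or []:
                elem.append(build(**child))
            return elem

        cook = build('cook', {'version': '1.1'}, [
            {'tag': 'recipe', 'attrib': {'name': 'chocolate chip cookie'}, 'children': [
                {'tag': 'ingredients', 'children': [
                    {'tag': 'ingredient', 'attrib': {'name': 'flour', 'amount': '2', 'units': 'cups'}},
                ]},
                {'tag': 'directions', 'children': [
                    {'tag': 'direction', 'attrib': {'name': 'step 1'}, 'text': 'Preheat oven'},
                ]},
            ]},
        ])

        print(ET.tostring(cook, encoding='unicode'))

    A thin recipe-oriented wrapper could then accept keyword arguments and emit these dicts, keeping the node network entirely out of the caller's hands.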

  • Is this 2D array initialization a bad idea?

    - by Brendan Long
    I have something I need a 2D array for, but for better cache performance I'd rather have it actually be a normal array. Here's the idea I had, but I don't know if it's a terrible idea:

        const int XWIDTH = 10, YWIDTH = 10;

        int main(){
            int * tempInts = new int[XWIDTH * YWIDTH];
            int ** ints = new int*[XWIDTH];
            for(int i=0; i<XWIDTH; i++){
                ints[i] = &tempInts[i*YWIDTH];
            }
            // do things with ints
            delete[] ints[0];
            delete[] ints;
            return 0;
        }

    So the idea is that instead of new-ing a bunch of arrays (and having them placed in different places in memory), I just point into one array I allocated all at once. The reason for the delete[] ints[0]; is because I'm actually doing this in a class and it would save [trivial amounts of] memory to not keep the original pointer around. Just wondering if there are any reasons this is a horrible idea, or if there's an easier/better way. The goal is to be able to access the array as ints[x][y] rather than ints[x*YWIDTH+y].
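
    For comparison, a rough sketch (mine, not from the question) of the same contiguous-storage idea wrapped in a small class: one flat allocation, no row-pointer array and no manual delete[], at the cost of writing ints.at(x, y) instead of ints[x][y]:

        #include <cstddef>
        #include <vector>

        class Grid {
        public:
            Grid(int xwidth, int ywidth)
                : ywidth_(ywidth), data_(static_cast<std::size_t>(xwidth) * ywidth) {}

            // Row-major indexing into the single contiguous buffer.
            int& at(int x, int y)             { return data_[static_cast<std::size_t>(x) * ywidth_ + y]; }
            const int& at(int x, int y) const { return data_[static_cast<std::size_t>(x) * ywidth_ + y]; }

        private:
            int ywidth_;
            std::vector<int> data_;  // freed automatically when the Grid goes away
        };

        // usage: Grid ints(XWIDTH, YWIDTH); ints.at(3, 4) = 7;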

  • In ASP.NET MVC Should A Form Post To Itself Or Another Action?

    - by Sohnee
    Which of these two scenarios is best practice in ASP.NET MVC?

    1. Post to self. In the view you use:

        using (Html.BeginForm()) { ... }

    and in the controller you have:

        [HttpGet]
        public ActionResult Edit(int id)

        [HttpPost]
        public ActionResult Edit(EditModel model)

    2. Post from Edit to Save. In the view you use:

        using (Html.BeginForm("Save", "ControllerName")) {

    and in the controller you have:

        [HttpGet]
        public ActionResult Edit(int id)

        [HttpPost]
        public ActionResult Save(EditModel model)

    Summary: I can see the benefits of each of these. The former gives you a more RESTful style, with the same address being used in conjunction with the correct HTTP verb (GET, POST, PUT, DELETE and so on). The latter has a URL scheme that makes each address very specific. Which is the correct way to do this?

  • Is "for(;;)" faster than "while (TRUE)"? If not, why do people use it?

    - by Chris Cooper
        for (;;) {
            //Something to be done repeatedly
        }

    I have seen this sort of thing used a lot, but I think it is rather strange... Wouldn't it be much clearer to say while (TRUE), or something along those lines? I'm guessing that (as is the reason for many-a-programmer to resort to cryptic code) this is a tiny margin faster? Why, and is it REALLY worth it? If so, why not just define it this way:

        #DEFINE while(TRUE) for(;;)
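
    For what it's worth, a quick sketch of my own (not from the question): compilers generally emit the same unconditional jump for both spellings, so there is no speed difference, and a macro alias would have to be written with the new name on the left:

        #include <stdio.h>

        /* The macro name is the alias; for(;;) is the replacement text. */
        #define FOREVER for (;;)

        int main(void) {
            int i = 0;
            FOREVER {
                if (++i == 3) break;   /* identical loop either way */
            }
            printf("%d\n", i);
            return 0;
        }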

  • Atomic operations on several transactionless external systems

    - by simendsjo
    Say you have an application connecting three different external systems. You need to update something in all three. In case of a failure, you need to roll back the operations. This is not a hard thing to implement, but say operation 3 fails, and when rolling back, the rollback for operation 1 fails! Now the first external system is in an invalid state... I'm thinking a possible solution is to shut down the application and force a manual fix of the external system, but then again... it might already have used this information (and perhaps that's why it failed), or we might not have sufficient access. Or it might not even be a good way to roll back the action! Are there any good ways of handling such cases?

    EDIT: Some application details: it's a multi-user web application. Most of the work is done with scheduled jobs (through Quartz.Net), so most operations run in their own threads. Some user actions should trigger jobs that update several systems, though. The external systems are somewhat unstable. I was thinking of changing the application to use the Command and Unit of Work patterns.
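
    One language-agnostic way to frame this (a rough sketch in Python, though the poster's stack is .NET; all names are illustrative) is to pair every step with a compensating action and to treat a failed compensation as its own first-class event to be queued for repair, rather than something the rollback silently absorbs:

        def run_with_compensation(steps):
            """steps: list of (do, undo) pairs of callables against external systems."""
            done = []
            try:
                for do, undo in steps:
                    do()
                    done.append(undo)
            except Exception as failure:
                failed_undos = []
                for undo in reversed(done):
                    try:
                        undo()
                    except Exception as undo_error:
                        # The external system is now inconsistent; record it for
                        # manual or scheduled repair instead of hiding the error.
                        failed_undos.append(undo_error)
                raise RuntimeError(
                    f"rolled back after {failure!r}; "
                    f"{len(failed_undos)} compensation(s) failed and need repair"
                ) from failure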

  • Prims vs Polys: what are the pros and cons of each?

    - by Richard Inglis
    I've noticed that most 3d gaming/rendering environments represent solids as a mesh of (usually triangular) 3d polygons. However some examples, such as Second Life, or PovRay use solids built from a set of 3d primitives (cube, sphere, cone, torus etc) on which various operations can be performed to create more complex shapes. So my question is: why choose one method over the other for representing 3d data? I can see there might be benefits for complex ray-tracing operations to be able to describe a surface as a single mathematical function (like PovRay does), but SL surely isn't attempting anything so ambitious with their rendering engine. Equally, I can imagine it might be more bandwidth-efficient to serve descriptions of generalised solids instead of arbitrary meshes, but is it really worth the downside that SL suffers from (ie modelling stuff is really hard, and usually the results are ugly) - was this just a bad decision made early in SL's development that they're now stuck with? Or is it an artefact of what's easiest to implement in OpenGL?

  • The Java interface doesn't declare any exception. How to manage checked exceptions of the implementation?

    - by Frór
    Let's say I have the following Java interface that I may not modify:

        public interface MyInterface {
            public void doSomething();
        }

    And now the class implementing it is like this:

        class MyImplementation implements MyInterface {
            public void doSomething() {
                try {
                    // read file
                } catch (IOException e) {
                    // what to do?
                }
            }
        }

    I can't recover from not reading the file. A subclass of RuntimeException can clearly help me, but I'm not sure it's the right thing to do: the problem is that the exception would then not be documented in the class, and a user of the class could get that exception and know nothing about solving it. What can I do?
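
    One common answer (a sketch, not the only option) is to wrap the checked exception in a dedicated unchecked type - either a custom RuntimeException subclass or, on Java 8+, java.io.UncheckedIOException - and document it on the implementing class so callers at least know what may surface; the file name here is hypothetical:

        import java.io.IOException;
        import java.io.UncheckedIOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        class MyImplementation implements MyInterface {
            /**
             * @throws UncheckedIOException if the backing file cannot be read
             */
            @Override
            public void doSomething() {
                try {
                    Files.readAllLines(Paths.get("config.txt"));  // hypothetical file read
                } catch (IOException e) {
                    // Keep the original cause so the stack trace is not lost.
                    throw new UncheckedIOException("could not read config.txt", e);
                }
            }
        }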

  • Make a Method of the Business Layer secure. best practice / best pattern

    - by gsharp
    We are using ASP.NET with a lot of AJAX "Page Method" calls. The WebServices defined in the page invoke methods from our BusinessLayer. To prevent hackers from calling the Page Methods, we want to implement some security in the BusinessLayer. We are struggling with two different issues.

    First one:

        public List<Employees> GetAllEmployees()
        {
            // do stuff
        }

    This method should be called by authorized users with the role "HR".

    Second one:

        public Order GetMyOrder(int orderId)
        {
            // do stuff
        }

    This method should only be called by the owner of the order. I know it's easy to implement the security for each method, like:

        public List<Employees> GetAllEmployees()
        {
            // check if the user is in role HR
        }

    or

        public Order GetMyOrder(int orderId)
        {
            // check if order.Owner == user
        }

    What I'm looking for is some pattern/best practice to implement this kind of security in a generic way (without coding the if/then/else every time). I hope you get what I mean :-)

  • Removing a pattern from the beginning and end of a string in Ruby

    - by seaneshbaugh
    So I found myself needing to remove <br /> tags from the beginning and end of strings in a project I'm working on. I made a quick little method that does what I need it to do, but I'm not convinced it's the best way to go about doing this sort of thing. I suspect there's probably a handy regular expression I can use to do it in only a couple of lines. Here's what I've got:

        def remove_breaks(text)
          if text != nil and text != ""
            text.strip!
            index = text.rindex("<br />")
            while index != nil and index == text.length - 6
              text = text[0, text.length - 6]
              text.strip!
              index = text.rindex("<br />")
            end
            text.strip!
            index = text.index("<br />")
            while index != nil and index == 0
              text = text[6, text.length]
              text.strip!
              index = text.index("<br />")
            end
          end
          return text
        end

    Now, the "<br />" could really be anything, and it'd probably be more useful to make a general-purpose function that takes as an argument the string that needs to be stripped from the beginning and end. I'm open to any suggestions on how to make this cleaner, because this just seems like it can be improved.

  • how to minimize application downtime when updating database and application ORM

    - by yamspog
    We currently run an ecommerce solution for a leisure and travel company. Every time we have a release, we must bring the ecommerce site down as we update the database schema and the data access code. We are using a custom-built ORM where each data entity is responsible for its own CRUD operations. This is accomplished by dynamically generating the SQL based on attributes in the data entity. For example, the data entity for an address would be:

        [tableName="address"]
        public class address : dataEntity
        {
            [column="address1"]
            public string address1;

            [column="city"]
            public string city;
        }

    So, if we add a new column to the database, we must update the schema of the database and also update the data entity. As you can expect, the business people are not too happy about this outage as it puts a crimp in their cash flow. The operations people are not happy as they have to deal with a high-pressure time when the database and applications are upgraded. The programmers are upset as they are constantly getting in trouble for the legacy system that they inherited. Do any of you smart people out there have some suggestions?

  • web api programming with java

    - by radi
    Hi, I am new to web programming in Java. I want to know how to use web APIs such as the Google APIs or the Facebook API in my code, so I need to know how to begin and what I need to do. Thanks.
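
    As a rough starting point (a sketch of my own, not tied to any particular API): most web APIs come down to sending HTTP requests and parsing the JSON they return, so the first step is making a plain HTTP call. The URL below is a placeholder, and real APIs such as Google's or Facebook's also require an API key or OAuth token plus a JSON library (Gson, Jackson) for the response:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class ApiCallSketch {
            public static void main(String[] args) throws Exception {
                // Placeholder endpoint; each API documents its own URLs and auth scheme.
                URL url = new URL("https://api.example.com/v1/resource?q=hello");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("GET");
                conn.setRequestProperty("Accept", "application/json");

                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                    StringBuilder body = new StringBuilder();
                    String line;
                    while ((line = in.readLine()) != null) {
                        body.append(line);
                    }
                    System.out.println(body);  // typically JSON, to be parsed next
                }
            }
        }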

  • Delphi: How to avoid EIntOverflow underflow when subtracting?

    - by Ian Boyd
    Microsoft already says, in the documentation for GetTickCount, that you should never compare tick counts to check if an interval has passed, e.g.:

    Incorrect (pseudo-code):

        DWORD endTime = GetTickCount + 10000; //10 s from now
        ...
        if (GetTickCount > endTime)
            break;

    The above code is bad because it is susceptible to rollover of the tick counter. For example, assume that the clock is near the end of its range:

        endTime = 0xfffffe00 + 10000 = 0x00002510; //9,488 decimal

    Then you perform your check:

        if (GetTickCount > endTime)

    which is satisfied immediately, since GetTickCount is larger than endTime:

        if (0xfffffe01 > 0x00002510)

    The solution: instead you should always subtract the two time values:

        DWORD startTime = GetTickCount;
        ...
        if (GetTickCount - startTime) > 10000 //if it's been 10 seconds
            break;

    Looking at the same math:

        if (GetTickCount - startTime) > 10000
        if (0xfffffe01 - 0xfffffe00) > 10000
        if (1 > 10000)

    Which is all well and good in C/C++, where the compiler behaves a certain way. But what about Delphi? When I perform the same math in Delphi, with overflow checking on ({$Q+}, {$OVERFLOWCHECKS ON}), the subtraction of the two tick counts generates an EIntOverflow exception when the tick count rolls over:

        if (0x00000100 - 0xffffff00) > 10000
        0x00000100 - 0xffffff00 = 0x00000200

    What is the intended solution for this problem?

    Edit: I've tried to temporarily turn off OVERFLOWCHECKS:

        {$OVERFLOWCHECKS OFF}
        delta = GetTickCount - startTime;
        {$OVERFLOWCHECKS ON}

    but the subtraction still throws an EIntOverflow exception. Is there a better solution, involving casts and larger intermediate variable types?

  • Best way to store configuration settings outside of web.config

    - by Wil
    I'm starting to consider creating a class library that I want to make generic so others can use it. While planning it out, I got to thinking about the various configuration settings that I would need. Since the idea is to make it open/shared, I wanted to make things as easy on the end user as possible. What's the best way to set up configuration settings without making use of web.config/app.config?

  • Testing When Correctness is Poorly Defined?

    - by dsimcha
    I generally try to use unit tests for any code that has easily defined correct behavior given some reasonably small, well-defined set of inputs. This works quite well for catching bugs, and I do it all the time in my personal library of generic functions. However, a lot of the code I write is data mining code that basically looks for significant patterns in large datasets. Correct behavior in this case is often not well defined and depends on a lot of different inputs in ways that are not easy for a human to predict (i.e. the math can't reasonably be done by hand, which is why I'm using a computer to solve the problem in the first place). These inputs can be very complex, to the point where coming up with a reasonable test case is near impossible. Identifying the edge cases that are worth testing is extremely difficult. Sometimes the algorithm isn't even deterministic. Usually, I do the best I can by using asserts for sanity checks and creating a small toy test case with a known pattern and informally seeing if the answer at least "looks reasonable", without it necessarily being objectively correct. Is there any better way to test these kinds of cases?

  • Should HTTP POST be discouraged?

    - by Tomas Sedovic
    Quoting from the CouchDB documentation:

        It is recommended that you avoid POST when possible, because proxies and other network intermediaries will occasionally resend POST requests, which can result in duplicate document creation.

    To my understanding, this should not be happening at the protocol level (a confused user armed with a double-click is a completely different story). What is the best course of action, then? Should we really try to avoid POST requests and replace them with PUT? I don't like that, as they convey different meanings. Should we anticipate this and protect the requests with unique IDs where we want to avoid accidental duplication? I don't like that either: it complicates the code and prevents situations where multiple identical posts may be desired.
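
    To make the two options concrete, here is a small sketch of my own (Python with the requests library; the endpoint and document shape are made up, not CouchDB-specific): a POST lets the server pick the ID, so a resent request can create a second document, while a PUT to a client-generated ID targets the same resource on every retry:

        import uuid
        import requests

        BASE = "https://db.example.com/mydb"   # placeholder endpoint

        # POST: the server assigns the ID; a resend by a proxy may create a duplicate.
        requests.post(BASE, json={"type": "note", "text": "hello"})

        # PUT to a client-generated ID: a resend addresses the same document,
        # so it cannot create a second copy (at the cost of minting IDs client-side).
        doc_id = str(uuid.uuid4())
        requests.put(f"{BASE}/{doc_id}", json={"type": "note", "text": "hello"})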
