Search Results

Search found 8692 results on 348 pages for 'patterns practices'.

Page 135 of 348

  • Using free functions as pseudo-constructors to exploit template parameter deduction

    - by Poita_
    Is it a common pattern/idiom to use free functions as pseudo-constructors to avoid having to specify template parameters explicitly? For example, everyone knows about std::make_pair, which uses its arguments to deduce the pair's types:

        template <class A, class B>
        std::pair<A, B> make_pair(A a, B b) { return std::pair<A, B>(a, b); }

        // This allows you to call make_pair(1, 2)
        // instead of having to type std::pair<int, int>(1, 2),
        // since you can't get type deduction from the constructor.

    I find myself using this quite often, so I was just wondering if many other people use it, and whether there is a name for this pattern. (See the sketch after this entry for the same idea applied to a user-defined template.)

    Read the article
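    A minimal C++ sketch of the same idiom applied to a user-defined template; the Pair and make_my_pair names are hypothetical, chosen only for illustration. This style of helper is sometimes called an "object generator":

        // Hypothetical class template used only for illustration.
        template <class A, class B>
        struct Pair {
            A first;
            B second;
        };

        // Free-function "pseudo-constructor": the compiler deduces A and B
        // from the arguments, so callers never spell out the template parameters.
        template <class A, class B>
        Pair<A, B> make_my_pair(A a, B b) {
            return Pair<A, B>{a, b};
        }

        int main() {
            auto p = make_my_pair(1, 2.5);  // Pair<int, double> deduced automatically
            (void)p;                        // silence unused-variable warnings
            return 0;
        }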

  • Another answer to the CAPTCHA problem?

    - by Xeoncross
    Most sites employ at least server access-log checking and banning, along with some kind of bot-prevention measure like a CAPTCHA (those messed-up text images). The problem with CAPTCHAs is that they pose a threat to the user experience. Luckily, they now come with user-friendly features like refresh buttons and audio versions. Anyway, much like Linux vs. Windows, it isn't worth a spammer's time to customize and/or build a script to handle a custom CAPTCHA that only pertains to one site. Therefore, I was wondering if there might be better ways to handle the whole CAPTCHA thing. In "A Better CAPTCHA", Peter Bromberg mentions that one way would be to convert the image to HTML and display it embedded in the page. On http://shiflett.org/, Chris simply asks users to type his name into an input. Examples like this are ways of simplifying the CAPTCHA experience while decreasing its value to spammers. Does anyone know of more good examples I could use, or see any problem with the embedded-image idea?

    Read the article

  • Should I define a single "DataContext" and pass references to it around, or define multiple "DataContexts"?

    - by Nate Bross
    I have a Silverlight application that consists of a MainWindow and several classes which update and draw images on the MainWindow. I'm now expanding this to keep track of everything in a database. Without going into specifics, let's say I have a structure like this:

        MainWindow
          Drawing-Surface
          Class1 -- Supports Drawing
            DataContext + DataServiceCollection<T> w/events
          Class2 -- Manages "transactions" (add/delete objects from drawing)
          Class3

    Each "Class" is passed a reference to the Drawing Surface so they can interact with it independently. I'm starting to use WCF Data Services in Class1 and it's working well; however, the other classes are also going to need access to the WCF Data Services. (Should I define my "DataContext" in MainWindow and pass a reference to each child class?) Class1 will need READ access to the "transactions" data, and Class2 will need READ access to some of the drawing data. So my question is, where does it make the most sense to define my DataContext? Does it make sense to:

        1. Define a "global" WCF Data Service "Context" object and pass references to it in all of my subsequent classes?
        2. Define an instance of the "Context" for each of Class1, Class2, etc.?
        3. Have each method that requires access to data define its own instance of the "Context" and use closures to handle the async load/complete events?

    Would a structure like the one below make more sense? Is there any danger in keeping an active "DataContext" open for an extended period of time? A typical use case of this application could range from 1 minute to 40+ minutes.

        MainWindow
          Drawing-Surface
          DataContext
          Class1 -- Supports Drawing
            DataServiceCollection<DrawingType> w/events
          Class2 -- Manages "transactions" (add/delete objects from drawing)
            DataServiceCollection<TransactionType> w/events
          Class3
            DataServiceCollection<T> w/events

    Read the article

  • What are the worst examples of moral failure in the history of software engineering?

    - by Amanda S
    Many computer science curricula include a class, or at least a lecture, on disasters caused by software bugs, such as the Therac-25 incidents or Ariane 5 Flight 501. Indeed, Wikipedia has a list of software bugs with serious consequences, and a question on StackOverflow addresses some of them too. We study the failures of the past so that we don't repeat them, and I believe that rather than ignoring them or excusing them, it's important to look at these failures squarely and remind ourselves exactly how the mistakes made by people in our profession cost real money and real lives. By studying failures caused by uncaught bugs and bad process, we learn certain lessons about rigorous testing and accountability, and we make sure that our innocent mistakes are caught before they cause major problems.

    There are less innocent kinds of failure in software engineering, however, and I think it's just as important to study the serious consequences caused by programmers motivated by malice, greed, or just plain amorality. That way we can learn about the ethical questions that arise in our profession, and how to respond when we are faced with them ourselves. Unfortunately, it's much harder to find lists of these failures; the only one I can come up with is that apocryphal "DOS ain't done 'til Lotus won't run" story. What are the worst examples of moral failure in the history of software engineering?

    Read the article

  • Managing aesthetic code changes in git

    - by Ollie Saunders
    I find that I make a lot of small changes to my source code, often things that have almost no functional effect. For example:

        - Refining or correcting comments.
        - Moving function definitions within a class for a more natural reading order.
        - Spacing and lining up some declarations for readability.
        - Collapsing something that uses multiple lines onto one.
        - Removing an old piece of commented-out code.
        - Correcting some inconsistent whitespace.

    I guess I have a formidable attention to detail in my code. But the problem is that I don't know what to do about these changes, and they make it difficult to switch between branches etc. in git. I find myself not knowing whether to commit the minor changes, stash them, or put them in a separate branch of little tweaks and merge that in later. None of those options seems ideal. The main problem is that these sorts of changes are unpredictable. If I were to commit them, there would be many commits with the message "Minor code aesthetic change.", because the second I make such a commit I notice another similar issue. What should I do when I make a minor change, then a significant change, and then another minor change? I'd like to merge the three minor changes into one commit. It's also annoying seeing files marked as modified in git status when the change barely warrants my attention. I know about git commit --amend, but I also know that's bad practice as it makes my repo inconsistent with remotes.

    Read the article

  • Extension methods for encapsulation and reusability

    - by tzaman
    In C++ programming, it's generally considered good practice to "prefer non-member non-friend functions" over instance methods. This has been recommended by Scott Meyers in this classic Dr. Dobb's article, and repeated by Herb Sutter and Andrei Alexandrescu in C++ Coding Standards (item 44); the general argument being that if a function can do its job solely by relying on the public interface exposed by the class, it actually increases encapsulation to have it be external. While this confuses the "packaging" of the class to some extent, the benefits are generally considered worth it.

    Now, ever since I started programming in C#, I've had a feeling that here is the ultimate expression of the concept they're trying to achieve with "non-member, non-friend functions that are part of a class interface". C# adds two crucial components to the mix, the first being interfaces and the second extension methods:

        - Interfaces allow a class to formally specify its public contract, the methods and properties that it exposes to the world. Any other class can choose to implement the same interface and fulfill that same contract.
        - Extension methods can be defined on an interface, providing any functionality that can be implemented via the interface to all implementers automatically. And best of all, because of the "instance syntax" sugar and IDE support, they can be called the same way as any other instance method, eliminating the cognitive overhead!

    So you get the encapsulation benefits of "non-member, non-friend" functions with the convenience of members. Seems like the best of both worlds to me; the .NET library itself provides a shining example in LINQ. However, everywhere I look I see people warning against extension-method overuse; even the MSDN page itself states: "In general, we recommend that you implement extension methods sparingly and only when you have to." So what's the verdict? Are extension methods the acme of encapsulation and code reuse, or am I just deluding myself?

    Read the article

  • Am I abusing Policies?

    - by pmr
    I find myself using policies a lot in my code, and usually I'm very happy with that. But from time to time I find myself confronted with using that pattern in situations where the policies are selected at runtime, and I have developed habits to work around such situations. Usually I start with something like this:

        class DrawArrays {
        protected:
            void sendDraw() const;
        };

        class DrawElements {
        protected:
            void sendDraw() const;
        };

        template<class Policy>
        class Vertices : public Policy {
            using Policy::sendDraw;
        public:
            void render() const;
        };

    When the policy is picked at runtime, I have different choices of working around the situation. Different code paths:

        if (drawElements) {
            Vertices<DrawElements> vertices;
        } else {
            Vertices<DrawArrays> vertices;
        }

    Inheritance and virtual calls:

        class PureVertices {
        public:
            virtual void render() = 0;
        };

        template<class Policy>
        class Vertices : public PureVertices, public Policy {
            //..
        };

    Both solutions feel wrong to me. The first creates an unmaintainable mess, and the second introduces the overhead of the virtual calls that I tried to avoid by using policies in the first place. Am I missing the proper solutions, or am I using the wrong pattern to solve the problem? (One possible alternative is sketched after this entry.)

    Read the article
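    A minimal sketch, assuming C++17 is available, of one way to keep the runtime choice confined to a single spot with std::variant/std::visit while the policy-based class itself stays free of virtual calls; the sendDraw bodies below are placeholders rather than the question's real drawing code:

        #include <iostream>
        #include <variant>

        // Hypothetical stand-ins for the question's policies.
        struct DrawArrays   { void sendDraw() const { std::cout << "glDrawArrays\n"; } };
        struct DrawElements { void sendDraw() const { std::cout << "glDrawElements\n"; } };

        template <class Policy>
        class Vertices : public Policy {
        public:
            void render() const { this->sendDraw(); }  // still statically dispatched
        };

        // The runtime decision lives in one variant; each alternative keeps
        // using its policy without any virtual functions.
        using AnyVertices = std::variant<Vertices<DrawArrays>, Vertices<DrawElements>>;

        AnyVertices makeVertices(bool drawElements) {
            if (drawElements) return Vertices<DrawElements>{};
            return Vertices<DrawArrays>{};
        }

        int main() {
            AnyVertices v = makeVertices(true);
            std::visit([](const auto& vertices) { vertices.render(); }, v);
        }

    The trade-off is that std::visit still performs one dynamic dispatch at the boundary, but it happens once at the call site instead of being baked into the class hierarchy.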

  • Specification: Use cases for CRUD

    - by Mario Ortegón
    I am writing a product requirements specification. In this document I must describe, at a very high level, the ways that the user can interact with the system. Several of these operations are "Create-Read-Update-Delete" (CRUD) on some objects. The question is: when writing use cases for these operations, what is the right way to do so? Can I write only one use case called "Manage Object X" and then have these operations as included use cases? Or do I have to create one use case per operation, per object? The problem I see with the latter approach is that I would be writing quite a few pages that I feel do not really contribute to the understanding of the problem. What is the best practice?

    Read the article

  • SOAP - Why do I need to query for the values for an update?

    - by Phill Pafford
    I'm taking over a project and wanted to understand whether this is common practice when using SOAP. With the process that is currently in place, I have to query all the values before I do an update, because I need to pass back all the values that are not being updated. Does this sound right? Example values:

        fname=phill
        lname=pafford
        address=123 main
        phone=222-555-1212

    So if I just wanted to update the phone number, I need to query for the record, get all the values, and submit those values for an update. Example update values:

        fname=phill
        lname=pafford
        address=123 main
        phone=111-555-1212

    I just want to know if this is common practice, or should I change the functionality of this?

    Read the article

  • Designing a general database interface in PHP

    - by lamas
    I'm creating a small framework for my web projects in PHP so I don't have to do the basic work over and over again for every new website. It is not my goal to create a second CakePHP or CodeIgniter, and I'm also not planning to build my websites with any of the available frameworks, as I generally prefer to use things I've created myself. I have no problems designing that framework when it comes to parts like the core structure, request handling, and so on, but I'm getting stuck designing the database interface for my modules. I've already thought about using the MVC pattern but felt that it would be a bit of overkill. So the exact problem I'm facing is how my framework's modules (viewCustomers could be a module, for example) should interact with the database.

        - Is it a good idea to write SQL directly in PHP (mysql_query('SELECT firstname, lastname(.....))?
        - How could I abstract a query like SELECT firstname, lastname FROM customers WHERE id=X?
        - Would MySQL helper functions like $this->db->get(array('firstname', 'lastname'), array('id' => X)) be a good idea? I suppose not, because they actually make everything more complicated by requiring arrays to be created and passed.
        - Is the Model pattern from MVC my only real option?

    Read the article

  • Question about design (inheritance, polymorphism)

    - by Dan
    Hi, I have a question about a problem I'm struggling with. Hope you can bear with me. Imagine I have an Object class representing the base class of a hierarchy of physical objects. Later I inherit from it to create Object1D, Object2D and Object3D classes. Each of these derived classes will have some specific methods and attributes. For example, the 3D object might have functionality to download a 3D model to be used by a renderer. So I'd have something like this:

        class Object {};
        class Object1D : public Object { Point mPos; };
        class Object2D : public Object { ... };
        class Object3D : public Object { Model mModel; };

    Now I'd have a separate class called Renderer, which simply takes an Object as an argument and, well, renders it :-) In a similar way, I'd like to support different kinds of renderers. For instance, I could have a default one that every object could rely on, and then provide other specific renderers for some kinds of objects:

        class Renderer {};   // Default one
        class Renderer3D : public Renderer {};

    And here comes my problem. A renderer class needs to get an Object as an argument, for example in the constructor, in order to retrieve whatever data it needs to render the object. So far so good. But a Renderer3D would need to get an Object3D argument, in order to get not only the basic attributes but also the specific attributes of a 3D object. The constructors would look like this:

        CRenderer(Object& object);
        CRenderer3D(Object3D& object);

    Now how do I specify this in a generic way? Or better yet, is there a better way to design this? I know I could rely on RTTI or similar, but I'd like to avoid that if possible, as I feel there is probably a better way to deal with this. Thanks in advance! (A double-dispatch sketch follows after this entry.)

    Read the article
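    One conventional way out of this is double dispatch (the visitor pattern): the object hierarchy accepts a renderer and calls back with its concrete type, so no RTTI is needed. A minimal sketch under that assumption, with placeholder names rather than the poster's real classes:

        struct Object1D;
        struct Object3D;

        // Renderer interface: one overload per concrete object type.
        struct Renderer {
            virtual void render(const Object1D&) {}  // default behaviour
            virtual void render(const Object3D&) {}
            virtual ~Renderer() = default;
        };

        struct Object {
            // Each concrete object forwards to the overload for its own type.
            virtual void renderWith(Renderer& r) const = 0;
            virtual ~Object() = default;
        };

        struct Object1D : Object {
            void renderWith(Renderer& r) const override { r.render(*this); }
        };

        struct Object3D : Object {
            void renderWith(Renderer& r) const override { r.render(*this); }
            // Model mModel;  // 3D-specific data would live here
        };

        // A specialised renderer overrides only the overloads it cares about.
        struct Renderer3D : Renderer {
            void render(const Object3D& obj) override { (void)obj; /* use the 3D data */ }
        };

        int main() {
            Object3D cube;
            Renderer3D r3d;
            const Object& obj = cube;  // callers only see the base class
            obj.renderWith(r3d);       // ends up in Renderer3D::render(const Object3D&)
        }

    The cost is the usual visitor-pattern coupling: Renderer has to know about every concrete Object type, which may or may not be acceptable here.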

  • How to document object-oriented MATLAB code?

    - by jjkparker
    I'm writing a sizable application using object-oriented MATLAB, and this has got me thinking about how to document the code. If this were C, I would use Doxygen. For Java, I'd use Javadoc. Both have mostly agreed-upon standards for what class and method documentation should look like and what it should contain. But what about MATLAB code? The most I've seen in TMW's own classes is a short sentence or two at the top of the class, and I can't find any topics devoted to documenting sizable MATLAB applications. So how do you document your MATLAB classes? Any particular style issues or additional tools?

    Read the article

  • Generic data input form in asp.net mvc application

    - by Diego
    Hello, I have an application with 16 Entity Framework classes that share this characteristic: they are all classes with only a key field and a description. I think it would be a waste to write a controller with just one method for each of them, just to present a form for filling in these classes' data. So I was thinking of making a generic form (with key and description fields) and dynamically filling the right class based on some kind of selection. Any good suggestion or pattern for doing that? And where should the generic methods be located?

    Read the article

  • How do you go from an abstract project description to actual code?

    - by Jason
    Maybe it's because I've been coding for around two semesters now, but the major stumbling block that I'm having at this point is converting the professor's project description and requirements into actual code. Since I'm currently in Algorithms 101, I basically follow a bottom-up process, starting with a blank whiteboard and drawing out the object and method interactions, then translating that into classes and code. But now the prof has tossed interfaces and abstract classes into the mix. Intellectually, I can recognize how they work, but I'm stubbing my toes figuring out how to use these new tools with the current project (simulating a web server). In my professor's own words, mapping the abstract description to Java code is the real trick. So what steps are best used to go from English (or whatever your language is) to computer code? How do you decide where and when to create an interface, or use an abstract class? (A small illustration of the distinction follows after this entry.)

    Read the article
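    The question is about Java, but the interface-versus-abstract-class distinction can be sketched just as well in C++, where an "interface" is a class with only pure virtual functions and an abstract class can also carry shared state and partial behaviour. The web-server names below are made up for illustration:

        #include <iostream>
        #include <string>

        // "Interface": only a contract, no data, no implementation.
        class RequestHandler {
        public:
            virtual std::string handle(const std::string& request) = 0;
            virtual ~RequestHandler() = default;
        };

        // Abstract class: shared state and partial behaviour that every
        // concrete handler reuses; the interesting part stays pure virtual.
        class CountingHandler : public RequestHandler {
        public:
            std::string handle(const std::string& request) override {
                ++requestsSeen_;           // behaviour shared by all subclasses
                return doHandle(request);  // deferred to subclasses
            }
        protected:
            virtual std::string doHandle(const std::string& request) = 0;
            int requestsSeen_ = 0;
        };

        // Concrete class: fills in the remaining pure virtual function.
        class StaticFileHandler : public CountingHandler {
        protected:
            std::string doHandle(const std::string& request) override {
                return "contents of " + request;  // placeholder behaviour
            }
        };

        int main() {
            StaticFileHandler h;
            RequestHandler& handler = h;  // callers depend only on the contract
            std::cout << handler.handle("/index.html") << "\n";
        }

    A rough rule of thumb that falls out of this: reach for an interface when you only want to promise what a type can do, and for an abstract class when several implementations genuinely share how part of it is done.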

  • Should a Unit-test replicate functionality or Test output?

    - by Daniel Beardsley
    I've run into this dilemma several times. Should my unit tests duplicate the functionality of the method they are testing in order to verify its integrity? Or should unit tests strive to test the method with numerous manually created instances of inputs and expected outputs? I'm mainly asking the question for situations where the method you are testing is reasonably simple and its proper operation can be verified by glancing at the code for a minute. Simplified example (in Ruby):

        def concat_strings(str1, str2)
          return str1 + " AND " + str2
        end

    Simplified functionality-replicating test for the above method:

        def test_concat_strings
          10.times do
            str1 = random_string_generator
            str2 = random_string_generator
            assert_equal (str1 + " AND " + str2), concat_strings(str1, str2)
          end
        end

    I understand that most of the time the method you are testing won't be simple enough to justify doing it this way. But my question remains: is this a valid methodology in some circumstances (why or why not)? (A sketch of the second approach follows after this entry.)

    Read the article
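    For contrast, a minimal sketch of the second approach: hand-picked inputs with literal expected outputs, so the test never re-implements the production logic. It is written in C++ here purely for illustration, rather than the Ruby of the question:

        #include <cassert>
        #include <string>
        #include <vector>

        // Stand-in for the method under test.
        std::string concat_strings(const std::string& a, const std::string& b) {
            return a + " AND " + b;
        }

        struct Case { std::string a, b, expected; };

        int main() {
            // The expected values are written out by hand, not recomputed
            // with the same concatenation logic the implementation uses.
            const std::vector<Case> cases = {
                {"cats", "dogs", "cats AND dogs"},
                {"",     "dogs", " AND dogs"},
                {"a",    "",     "a AND "},
            };
            for (const auto& c : cases) {
                assert(concat_strings(c.a, c.b) == c.expected);
            }
            return 0;
        }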

  • Simplest PHP routing framework?

    - by David
    I'm looking for the simplest implementation of a routing framework in PHP, in a typical PHP environment (running on Apache, or maybe nginx). It's the implementation itself I'm mostly interested in, and how you'd accomplish it. I'm thinking it should handle URLs with the minimal rewriting possible (is it really a good idea to have the same entry point for all dynamic requests?!), and it should not mess with the query string, so I should still be able to fetch GET params with $_GET['var'] as you'd usually do. So far I have only come across .htaccess solutions that put everything through an index.php, which is sort of okay. Not sure if there are other ways of doing it. How would you "attach" which URLs map to which controllers, and the relation between them? I've seen different styles. One huge array, with regular expressions and other stuff to contain the mapping. The one I think I like best is where each controller declares what map it has, and thereby you won't have one huge "global" map, but a lot of small ones, each neatly separated. So you'd have something like:

        class Root {
            public $map = array(
                'startpage' => 'ControllerStartPage'
            );
        }

        class ControllerStartPage {
            public $map = array(
                'welcome' => 'WelcomeControllerPage'
            );
        }

        // Etc ...

    Where:

        'http://myapp/'                        // maps to the Root class
        'http://myapp/startpage'               // maps to the ControllerStartPage class
        'http://myapp/startpage/welcome'       // maps to the WelcomeControllerPage class
        'http://myapp/startpage/?hello=world'  // should of course have $_GET['hello'] == 'world'

    What do you think? Do you use anything yourself, or have any ideas? I'm not interested in huge frameworks that already solve this problem, but the smallest possible implementation you could think of. I'm having a hard time coming up with a solution satisfying enough for my own taste. There must be something pleasing out there that handles a sane bootstrapping process of a PHP application without trying to pull a big magic hat over your head and force you to use "their way", or the highway! ;)

    Read the article

  • Programming Concepts: What should be done when an exception is thrown?

    - by Dooms101
    This does not really apply to any language specifically, but if it matters I am using VB.NET in Visual Studio 2008. I can't seem to find anything really useful on this topic using Google, but I was wondering what common practice is when an exception is thrown and caught, yet the application cannot continue operating once it has been thrown. For example, I have exceptions that are thrown by my FileLoader class when a file cannot be found or when a file is deemed corrupt. The exception is only thrown within the class and is not really handled there. If the error is detected, the exception is thrown and whatever function it was thrown in basically quits. So in the code that tries to create that object or call one of its members, I use a Try...Catch statement. However, I was wondering, what should I even do when this exception is caught? My application needs these files to be intact, and if they are not, the application is almost useless. So far I just pop up a message box telling the user there is an error and to reinstall. What else can I do, or better, what's common practice in these situations?

    Read the article

  • Avoid writing SQL queries altogether in SSIS

    - by Jonn
    Working on a data warehouse project, the guy who gave us the tutorial advised that we stick to using SQL queries rather than defining a lot of data flow transformations, citing points like: it'll consume a lot of memory on the ETL box, so we'd rather leave the processing to the DB box. Is this really advisable? Where's the balance between relying on the GUI tools and executing a bunch of SQL scripts in your Integration Services package? And honestly, I'd like to avoid writing SQL queries as much as I can.

    Read the article

  • Use of (non) qualified names

    - by AProgrammer
    If I want to use the name baz defined in package foo|bar|quz, I have several choices:

        1. provide fbq as a short name for foo|bar|quz and use fbq|baz
        2. use foo|bar|quz|baz
        3. import baz from foo|bar|quz|baz and then use baz (or an alias given in the import process)
        4. import all public symbols from foo|bar|quz|baz and then use baz

    For the languages I know, my perception is that the best practice is to use the first two ways (I'll use one or the other depending on the specific package's full name and the number of symbols I need from it). I'd use the third only in a language which doesn't provide the first, and I'd hunt for supporting tools to write the import statements. And in my opinion the fourth should be reserved for packages designed with that kind of import in mind, for instance where all exported symbols start with a prefix or contain the name of the package. My questions:

        - What is, in your opinion, the best practice for your favorite languages?
        - What would you suggest in a new language?
        - What would you suggest in an old language adding such a feature?

    (A C++ rendering of the four options follows after this entry.)

    Read the article
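    For what it's worth, the four options map directly onto C++ namespaces; a minimal sketch, with foo::bar::quz standing in for the question's foo|bar|quz:

        #include <iostream>

        namespace foo { namespace bar { namespace quz {
            void baz() { std::cout << "baz\n"; }
        }}}

        // Option 1: a short alias for the package name.
        namespace fbq = foo::bar::quz;

        // Option 3: import just the one name into the current scope.
        using foo::bar::quz::baz;

        // Option 4: import every public name from the package.
        // using namespace foo::bar::quz;

        int main() {
            fbq::baz();            // option 1: short alias
            foo::bar::quz::baz();  // option 2: fully qualified name
            baz();                 // option 3: the imported single name
        }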

  • Rails Full Engine using a Full Engine

    - by SirLenz0rlot
    I've got this full Rails engine Foo with functionality X. I want to make another engine, engine Bar, that is pretty much the same, but overrides functionality X with Y (it basically does the same, but a few controller actions and views are implemented differently). (I might split this later into several mountable engines, but for now this will be the setup: project Baz, using engine Bar, which uses engine Foo.) I would like to know if there are any pitfalls. It doesn't seem like a pattern that is often used. Is anybody else using this sort of 'engine inheritance'?

    Read the article

  • Separate functionality depending on Role in ASP.NET MVC

    - by Andrew Bullock
    I'm looking for an elegant pattern to solve this problem: I have several user roles in my system, and for many of my controller actions I need to deal with slightly different data. For example, take /Users/Edit/1. This allows a Moderator to edit a user's email address, but Administrators to edit a user's email address and password. I'd like a design for separating the two different bits of action code for the GET and the POST. Solutions I've come up with so far are:

        - Switch inside each method; however, this doesn't really help when I want different model arguments on the POST :(
        - A custom controller factory which chooses a UsersController_ForModerators or UsersController_ForAdmins instead of just UsersController, based on the controller name and current user role
        - A custom action invoker which chooses the Edit_ForModerators method in a similar way to the above
        - Have an IUsersController and register a different implementation of it in my IoC container as a named instance based on role
        - Build an implementation of the controller at runtime using Castle DynamicProxy and manipulate the methods to those from role-based implementations

    I'm preferring the named IoC instance route at the moment, as it means all my URLs/routing will work seamlessly. Ideas? Suggestions?

    Read the article
