Search Results

Search found 11680 results on 468 pages for 'convenience methods'.

Page 105/468 | < Previous Page | 101 102 103 104 105 106 107 108 109 110 111 112  | Next Page >

  • Data classes: getters and setters or different method design

    - by Frog
    I've been trying to design an interface for a data class I'm writing. This class stores styles for characters, for example whether the character is bold, italic or underlined, but also the font size and the font family. So it has member variables of different types. The easiest way to implement this would be to add getters and setters for every member variable, but that just feels wrong to me. It feels far more logical (and more OOP) to call style.format(BOLD, true) instead of style.setBold(true) - that is, to use logical methods instead of getters/setters. But I am facing two problems while implementing these methods: I would need a big switch statement over all member variables, since you can't access a variable by the contents of a string in C++. Moreover, you can't overload by return type, which means you can't write one getter like style.getFormatting(BOLD) (I know there are some tricks to do this, but they don't allow for parameters, which I would obviously need). However, if I implement getters and setters, there are also issues. I would have to duplicate quite a lot of code, because a style can also have a parent style, which means the getters have to look not only at the member variables of this style but also at those of the parent styles. Because I wasn't able to figure out how to do this, I asked a question a couple of weeks ago (see Object Oriented Programming: getters/setters or logical names). But in that question I didn't stress that it would be just a data object and that I'm not making a text rendering engine, which is why one of the people who answered suggested I ask another question while making that clear (his solution, the decorator pattern, isn't suitable for my problem). So please note that I'm not creating my own text rendering engine; I just use these classes to store data. Because I still haven't been able to find a solution to this problem, I'd like to ask this question again: how would you design a styles class like this, and why? Thanks in advance!
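
    One possible shape for such a class, sketched in C# even though the question is about C++ (all names are hypothetical): store each attribute as an optional value and let a single private helper walk the parent chain, so the inherited-lookup logic is written once instead of being duplicated in every getter.

        public class CharacterStyle
        {
            private readonly CharacterStyle parent;   // null for a root style
            private bool? bold;
            private bool? italic;
            private int? fontSize;

            public CharacterStyle(CharacterStyle parent = null) { this.parent = parent; }

            public bool Bold     { get { return Resolve<bool>(s => s.bold) ?? false; }   set { bold = value; } }
            public bool Italic   { get { return Resolve<bool>(s => s.italic) ?? false; } set { italic = value; } }
            public int  FontSize { get { return Resolve<int>(s => s.fontSize) ?? 12; }   set { fontSize = value; } }

            // Walks up the parent chain until some style defines the requested attribute.
            private T? Resolve<T>(System.Func<CharacterStyle, T?> pick) where T : struct
            {
                for (var s = this; s != null; s = s.parent)
                {
                    var v = pick(s);
                    if (v.HasValue) return v;
                }
                return null;
            }
        }

    Whether the setters stay as individual properties or get folded into a single format(attribute, value) method is then a separate decision; the parent-lookup duplication goes away either way.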

    Read the article

  • Composing programs from small simple pieces: OOP vs Functional Programming

    - by Jay Godse
    I started programming when imperative programming languages such as C were virtually the only game in town for paid gigs. I'm not a computer scientist by training, so I was only exposed to Assembler and Pascal in school, and not Lisp or Prolog. Over the 1990s, Object-Oriented Programming (OOP) became more popular, because one of the marketing memes for OOP was that complex programs could be composed of loosely coupled but well-defined, well-tested, cohesive, and reusable classes and objects. And in many cases that is quite true. Once I learned object-oriented programming my C programs became better, because I structured them more like classes and objects. In the last few years (2008-2014) I have programmed in Ruby, an OOP language. However, Ruby has many functional programming (FP) features such as lambdas and procs, which enable a different style of programming using recursion, currying, lazy evaluation and the like. (Through ignorance I am at a loss to explain why these techniques are so great.) Very recently, I have written code using methods from Ruby's Enumerable library, such as map(), reduce(), and select(). Apparently this is a functional style of programming. I have found that using these methods significantly reduces code volume and makes my code easier to debug. Upon reading more about FP, one of the marketing claims made by advocates is that FP enables developers to compose programs out of small, well-defined, well-tested, and reusable functions, which leads to less buggy code and low code volume. QUESTIONS: Is composing a complex program using FP techniques contradictory to, or complementary to, composing it using OOP techniques? In which situations is OOP more effective, and when is FP more effective? Is it possible to use both techniques in the same complex program? Do the techniques overlap or contradict each other?
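
    A minimal sketch of that style in C# LINQ (the data is made up), only to mirror the Ruby Enumerable calls mentioned above: select() roughly corresponds to Where, map() to Select, and reduce() to Aggregate.

        using System;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                var orders = new[] { 12.50m, 99.00m, 7.25m, 150.00m };

                // "filter, then map, then fold" composed from small, reusable pieces
                var discountedTotal = orders
                    .Where(amount => amount > 10m)            // select { |a| a > 10 }
                    .Select(amount => amount * 0.9m)          // map { |a| a * 0.9 }
                    .Aggregate(0m, (sum, a) => sum + a);      // reduce(0) { |sum, a| sum + a }

                Console.WriteLine(discountedTotal);
            }
        }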

    Read the article

  • On developing deep programming knowledge

    - by Robert Harvey
    Occasionally I see questions about edge cases and other weirdness on Stack Overflow that are easily answered by the likes of Jon Skeet and Eric Lippert, demonstrating a deep knowledge of the language and its many intricacies, like this one: You might think that in order to use a foreach loop, the collection you are iterating over must implement IEnumerable or IEnumerable<T>. But as it turns out, that is not actually a requirement. What is required is that the type of the collection must have a public method called GetEnumerator, and that must return some type that has a public property getter called Current and a public method MoveNext that returns a bool. If the compiler can determine that all of those requirements are met then the code is generated to use those methods. Only if those requirements are not met do we check to see if the object implements IEnumerable or IEnumerable<T>. That's cool stuff to know. I can understand why Eric knows this; he's on the compiler team, so he has to know. But what about those who demonstrate such deep knowledge who are not insiders? How do mere mortals (who are not on the C# compiler team) find out about stuff like this? Specifically, are there methods these folks use to systematically root out such knowledge, explore it and internalize it (make it their own)?
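
    A small, self-contained C# illustration of the quoted point (the class and values are made up): the type below implements neither IEnumerable nor IEnumerable<T>, yet foreach compiles and runs because the compiler only looks for a public GetEnumerator() whose return type exposes a Current property and a bool-returning MoveNext().

        using System;

        class Countdown
        {
            public Enumerator GetEnumerator() => new Enumerator(3);

            public class Enumerator
            {
                private int current;
                public Enumerator(int start) { current = start + 1; }
                public int Current => current;
                public bool MoveNext() => --current >= 0;
            }
        }

        class Program
        {
            static void Main()
            {
                foreach (var n in new Countdown())   // prints 3 2 1 0
                    Console.WriteLine(n);
            }
        }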

    Read the article

  • how to properly implement alpha blending in a complex 3d scene

    - by Gajet
    I know this question might sound a bit easy to answer, but it's driving me crazy. There are too many possible situations that a good alpha blending mechanism should handle, and for each algorithm I can think of there is something missing. These are the methods I've thought about so far. First off, I thought about sorting objects by depth; this simply fails because objects are not simple shapes - they might have curves and might loop inside each other, so I can't always tell which one is closer to the camera. Then I thought about sorting triangles, but this also might fail; though I'm not sure how to implement it, there is a rare case that might again cause a problem, in which two triangles pass through each other. Again, no one can tell which one is nearer. The next thing was using the depth buffer; after all, the main reason we have a depth buffer is the sorting problems I mentioned, but now we get another problem. Since objects might be transparent, more than one object may be visible in a single pixel. So for which object should I store the pixel depth? I then thought maybe I could store only the front-most object's depth, and use that to determine how to blend subsequent draw calls at that pixel. But again there was a problem: think about two semi-transparent planes with a solid plane in the middle of them. If I render the solid plane at the end, one can still see the most distant plane. Note that I was going to merge every two planes until there is only one color left for that pixel. Obviously I can't use sorting methods either, for the same reasons I've explained above. Finally, the only thing I can imagine working is to render all objects into different render targets, then sort those layers and display the final output. But this time I don't know how I can implement this algorithm.
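
    A hedged sketch of the usual baseline (the IRenderable and IDevice abstractions are hypothetical, and this does not solve the intersecting-geometry cases described above): draw opaque geometry first with depth writes on, then sort transparent objects back to front by view-space distance and draw them with depth testing enabled but depth writes disabled.

        using System.Collections.Generic;
        using System.Linq;
        using System.Numerics;

        interface IRenderable
        {
            Vector3 Position { get; }
            bool IsTransparent { get; }
            void Draw();                       // assumed to set its own material/blend state
        }

        // Hypothetical abstraction over the graphics API's depth-stencil state.
        interface IDevice { bool DepthWriteEnabled { get; set; } }

        static class SceneRenderer
        {
            public static void DrawScene(IEnumerable<IRenderable> objects, Vector3 cameraPos, IDevice device)
            {
                var opaque      = objects.Where(o => !o.IsTransparent);
                var transparent = objects.Where(o =>  o.IsTransparent)
                                         .OrderByDescending(o => Vector3.DistanceSquared(o.Position, cameraPos));

                device.DepthWriteEnabled = true;
                foreach (var o in opaque) o.Draw();

                device.DepthWriteEnabled = false;   // keep the depth test, stop writing depth
                foreach (var o in transparent) o.Draw();
                device.DepthWriteEnabled = true;
            }
        }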

    Read the article

  • Copy-and-Pasted Test Code: How Bad is This?

    - by joshin4colours
    My current job is mostly writing GUI test code for various applications that we work on. However, I find that I tend to copy and paste a lot of code within tests. The reason is that the areas I'm testing tend to be similar enough to need repetition, but not quite similar enough to encapsulate the code into methods or objects. I find that when I try to use classes or methods more extensively, tests become more cumbersome to maintain and sometimes outright difficult to write in the first place. Instead, I usually copy a big chunk of test code from one section, paste it into another, and make whatever minor changes I need. I don't use more structured ways of coding, such as more OO principles or functions. Do other coders feel this way when writing test code? Obviously I want to follow the DRY and YAGNI principles, but I find that test code (automated test code for GUI testing, anyway) can make these principles tough to follow. Or do I just need more coding practice and a better overall system of doing things? EDIT: The tool I'm using is SilkTest, which uses a proprietary language called 4Test. These tests are mostly for Windows desktop applications, but I have also tested web apps with this setup.
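
    A hedged illustration in C# with NUnit (not 4Test, and the screen names are made up): one way to keep near-duplicate GUI tests maintainable is to push the "almost the same" parts into parameters, so each case becomes a data row instead of a pasted copy.

        using NUnit.Framework;

        [TestFixture]
        public class LoginScreenTests
        {
            [TestCase("admin",  "Dashboard")]
            [TestCase("viewer", "Reports")]
            public void Logging_in_lands_on_expected_screen(string role, string expectedTitle)
            {
                var app = new FakeApp();                  // stand-in for the real GUI driver
                app.Login(role, "secret");
                Assert.AreEqual(expectedTitle, app.CurrentWindowTitle);
            }

            // Minimal stub so the sketch compiles; a real suite would drive the actual UI here.
            private class FakeApp
            {
                public string CurrentWindowTitle { get; private set; } = "";
                public void Login(string role, string password) =>
                    CurrentWindowTitle = role == "admin" ? "Dashboard" : "Reports";
            }
        }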

    Read the article

  • "Programming error" exceptions - Is my approach sound?

    - by Medo42
    I am currently trying to improve my use of exceptions, and found the important distinction between exceptions that signify programming errors (e.g. someone passed null as an argument, or called a method on an object after it was disposed) and those that signify a failure in the operation that is not the caller's fault (e.g. an I/O exception). As far as I understand, it makes little sense for an immediate caller to actually handle programming-error exceptions; he should instead ensure that the preconditions are met. Only "outer" exception handlers at task boundaries should catch them, so they can keep the system running if a task fails. In order to ensure that client code can cleanly catch "failure" exceptions without catching error exceptions by mistake, I now create my own exception classes for all failure exceptions and document them in the methods that throw them. I would make them checked exceptions in Java. Now I have a few questions. Before, I tried to document all exceptions that a method could throw, but that sometimes creates an unwieldy list that needs to be documented in every method up the call chain until you can show that the error won't happen. Instead, I document the preconditions in the summary / parameter descriptions and don't even mention what happens if they are not met. The idea is that people should not try to catch these exceptions explicitly anyway, so there is no need to document their types. Would you agree that this is enough? Going further, do you think all preconditions even need to be documented for every method? For example, calling methods on IDisposable objects after calling Dispose is an error, but since IDisposable is such a widely used interface, can I just assume a programmer will know this? A similar case is with reference-type parameters where passing null makes no conceivable sense: should I document "non-null" anyway? IMO, documentation should only cover things that are not obvious, but I am not sure where "obvious" ends.
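
    A minimal C# sketch of the distinction described above (ReportSendException and IMailClient are hypothetical names): precondition violations throw standard "programming error" exceptions that nearby callers should not catch, while operational failures are wrapped in a documented, catchable exception type.

        using System;

        public interface IMailClient { void Send(string to, string body); }

        public class ReportSendException : Exception
        {
            public ReportSendException(string message, Exception inner) : base(message, inner) { }
        }

        public class ReportService
        {
            private readonly IMailClient mail;

            public ReportService(IMailClient mail)
            {
                // Programming error: the caller violated a precondition.
                // Don't catch this nearby; fix the calling code instead.
                this.mail = mail ?? throw new ArgumentNullException(nameof(mail));
            }

            public void Send(string recipient)
            {
                if (string.IsNullOrEmpty(recipient))
                    throw new ArgumentException("Recipient must be provided.", nameof(recipient));

                try
                {
                    mail.Send(recipient, "monthly report");
                }
                catch (Exception ex)   // I/O-style failure: not the caller's fault
                {
                    // Documented "failure" exception that an outer handler at the
                    // task boundary can catch to keep the system running.
                    throw new ReportSendException($"Could not send report to {recipient}.", ex);
                }
            }
        }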

    Read the article

  • Will new Twitter API 1.1 allow hashtag/tweet/trend queries without any authentication, i.e. for a client that does not use an user's account at all?

    - by P5music
    I see that, even without being logged in to Twitter with an account, if I google hashtags or Twitter accounts, Twitter shows them. I think it should also be possible to get those tweets programmatically, but I don't know that for sure, so I'm asking for confirmation here, especially with regard to the upcoming Twitter API restrictions. I mean: will it be possible to get tweets for hashtags or accounts without logging in to a user account - and so without wanting to access the user's settings, subscriptions, etc. (because I do not need them) - and thus without having to respect any per-token limit? I found these API 1.1 FAQs; do I have to be concerned? Will an application have to request user authorization just to make public API calls? When API v1.1 is released, user authorization (and access tokens) are required for all API 1.1 requests. In the weeks following release, some methods will require only application-based authentication for certain "userless" contexts. Will the Search API require authentication? The Search API is now part of the official REST API in version 1.1. In addition to serving results in a format consistent with other Tweet resources, usage will also require authentication.

    Read the article

  • Strategies for managing use of types in Python

    - by dave
    I'm a long time programmer in C# but have been coding in Python for the past year. One of the big hurdles for me was the lack of type definitions for variables and parameters. Whereas I totally get the idea of duck typing, I do find it frustrating that I can't tell the type of a variable just by looking at it. This is an issue when you look at someone else's code where they've used ambiguous names for method parameters (see the edit below). In a few cases, I've added asserts to ensure parameters comply with an expected type, but this goes against the whole duck typing thing. On some methods, I'll document the expected type of parameters (e.g. a list of user objects), but even this seems to go against the idea of just using an object and letting the runtime deal with exceptions. What strategies do you use to avoid typing problems in Python? Edit: Example of the parameter naming issues: in our code base we have a task object (an ORM object) and a task_obj object (a higher level object that embeds a task). Needless to say, many methods accept a parameter named 'task'. The method might expect a task or a task_obj or some other construct such as a dictionary of task properties - it is not clear. It is then up to me to look at how that parameter is used in order to work out what the method expects.

    Read the article

  • Models, collections...and then what? Processes?

    - by Dan
    I'm a LAMP-stack dev who's been more on the JavaScript side the last few years and really enjoying the Model + Collection approach to data entities that BackboneJS, etc. uses. It's helped me organize my code in such a way that it is extremely portable, keeping all my properties and methods in the scope (model, collection, etc.) to which they apply. One thing that keeps bugging me, though, is how to organize the next level up - the 'process layer' as you might call it - that can potentially operate on instances of either models or collections or whatever else. Where should methods like find() (which returns a collection) and create() (which returns a model) reside? I know some people would put create() in the Collection prototype, but while a collection operates on models I don't think it's exactly right for it to create them. And while find() would return a collection, I don't think it's correct to have that action within the collection prototype itself (it should be a layer up). Can anyone offer some examples of patterns that employ some kind of OOP-friendly 'process' layer? I'm sorry if this is a fairly well-known discussion, but I'm afraid I can't seem to find the terminology to search for.
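
    A hedged sketch in C# (the question is about Backbone-style JavaScript, so this only mirrors the idea; the names are hypothetical): a separate service object owns find() and create(), so neither the model nor the collection has to know how instances come into existence.

        using System.Collections.Generic;
        using System.Linq;

        public class Customer                       // model: state about one entity
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class CustomerService                // the "process layer" under discussion
        {
            private readonly List<Customer> store = new List<Customer>();   // stand-in for a backend
            private int nextId = 1;

            public Customer Create(string name)     // returns a model
            {
                var customer = new Customer { Id = nextId++, Name = name };
                store.Add(customer);
                return customer;
            }

            public IReadOnlyList<Customer> Find(string nameContains)        // returns a collection
            {
                return store.Where(c => c.Name.Contains(nameContains)).ToList();
            }
        }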

    Read the article

  • How to visualize code?

    - by gablin
    I've mostly only had to read my own code. As such, I've had no need to visualize the code as I already know how each and every class and module communicate with one another. But the few times I've had to read someone else's code - let us now assume we are talking about at least one larger module which contains several internal classes - I've almost always found myself wishing "This would have been so much easier to understand if I could just visualize it!" So what are the common methods or tools for enabling this? Which do you use, and why do you prefer them over the others? I've heard stuff like UML, module and class diagrams, but I imagine there are more. Furthermore, any of these is most likely better than anything I can devise on my own. EDIT: For those who answer with "Use pen and paper and just draw it": This isn't very helpful unless you explain this further. What exactly am I supposed to draw? A box for each class? Should I include the public methods? What about its fields? How should I draw connections that explain how one class uses another? What about modules? What if the language isn't object-oriented but functional or logical, or even just imperative (C, for instance)? What about global variables and functions? Is there an already-standardized way of drawing this, or do I need to think up of a method of my own? You get the drift.

    Read the article

  • Gathering IP address and workstation information; does it belong in a state class?

    - by p.campbell
    I'm writing an enterprisey utility that collects exception information, writes to the Windows Event Log, sends an email, etc. This utility class will be used by all applications in the corporation: web, BizTalk, Windows Services, etc. Currently this class: holds state given to it via public properties; calls out to .NET Framework methods to gather information about runtime details, including calls to various properties and methods of System.Environment, reflection details, etc. This implementation has the benefit that callers don't have to make these same calls themselves, which means less code for the caller to forget, screw up, etc. Should this state class (what's the phrase I'm looking for - something like DTO?) know how to resolve/determine runtime details (like the IP address and machine name that it's running on)? On second thought, it seems to me that it's meant to be a class that holds state, not one that knows how to call out to the .NET Framework to find information.

        var myEx = new AppProblem { MachineName = "Riker" };
        // Will get "Riker 10.0.0.1" from property MachineLongDesc
        Console.WriteLine("full machine details: " + myEx.MachineLongDesc);

        public class AppProblem
        {
            public string MachineName { get; set; }

            public string MachineLongDesc
            {
                get
                {
                    if (string.IsNullOrEmpty(this.MachineName))
                    {
                        this.MachineName = Environment.MachineName;
                    }
                    return this.MachineName + " " + GetCurrentIP();
                }
            }

            private string GetCurrentIP()
            {
                return System.Net.Dns.GetHostEntry(this.MachineName)
                    .AddressList.First().ToString();
            }
        }

    This code was written by hand from memory and presented for simplicity, just to illustrate the concept.

    Read the article

  • How to make the members of my Data Access Layer object aware of their siblings

    - by Graham
    My team currently has a project with a data access object composed like so:

        public abstract class DataProvider
        {
            public CustomerRepository CustomerRepo { get; protected set; }
            public InvoiceRepository InvoiceRepo { get; protected set; }
            public InventoryRepository InventoryRepo { get; protected set; }
            // a couple more like the above
        }

    We have non-abstract classes that inherit from DataProvider, and the type of CustomerRepo that gets instantiated is controlled by that child class.

        public class FloridaDataProvider : DataProvider
        {
            public FloridaDataProvider()
            {
                CustomerRepo = new FloridaCustomerRepo(); // derived from the base CustomerRepository
                InvoiceRepo = new InvoiceRepository();
                InventoryRepo = new InventoryRepository();
            }
        }

    Our problem is that some of the methods inside a given repo would really benefit from having access to the other repos. For example, a method inside InventoryRepository needs to get to Customer data to make some determinations, so I need to pass in a reference to a CustomerRepository object. What's the best way for these "sibling" repos to be aware of each other and be able to call each other's methods as needed? Virtually all the other repos would benefit from having the CustomerRepo, for example, because it is where names/phones/etc. are selected from, and these data elements need to be added to the various objects that are returned out of the other repos. I can't just new up a plain CustomerRepository object inside a method of a different repo, because it might not be the base CustomerRepository that actually needs to run.
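
    One possible approach, sketched against the classes in the question (GetById and the owner lookup are hypothetical): let the concrete provider wire the sibling in through the constructor, so InventoryRepository receives whichever CustomerRepository subclass the provider chose instead of newing one up itself.

        public class InventoryRepository
        {
            private readonly CustomerRepository customers;

            public InventoryRepository(CustomerRepository customers)
            {
                this.customers = customers;   // whatever subclass the provider chose
            }

            public string DescribeOwner(int customerId)
            {
                var customer = customers.GetById(customerId);   // hypothetical lookup
                return customer.Name;
            }
        }

        public class FloridaDataProvider : DataProvider
        {
            public FloridaDataProvider()
            {
                CustomerRepo  = new FloridaCustomerRepo();
                InventoryRepo = new InventoryRepository(CustomerRepo);   // sibling passed in
                InvoiceRepo   = new InvoiceRepository();
            }
        }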

    Read the article

  • Dependency Injection and method signatures

    - by sunwukung
    I've been using YADIF (yet another dependency injection framework) in a PHP/Zend app I'm working on to handle dependencies. This has brought some notable benefits in terms of testing and decoupling classes. However, one thing that strikes me is that despite the sleight of hand performed when using this technique, the method names still impart a degree of coupling. Probably not the best example, but these methods are distinct from, say, the PEAR Mailer's - the method names themselves are a (subtle) form of coupling:

        // example
        public function __construct($dic){
            $this->dic = $dic;
        }

        public function example(){
            // this line in itself indicates the YADIF origin of the DIC
            $Mail = $this->dic->getComponent('mail');
            $Mail->setBodyText($body);
            $Mail->setFrom($from);
            $Mail->setSubject($subject);
        }

    I could write a series of proxies/wrappers to hide these methods and thus promote decoupling, but this seems a bit excessive. You have to balance purity with pragmatism... How far would you go to hide the dependencies in your classes?
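
    A hedged sketch in C# (the question is PHP/Zend, so this only mirrors the idea; the names are hypothetical): injecting the collaborator itself rather than the container removes the container-specific getComponent() call, so the only remaining coupling is an interface you own.

        public interface IMailer
        {
            void Send(string from, string subject, string body);
        }

        public class WelcomeNotifier
        {
            private readonly IMailer mailer;

            public WelcomeNotifier(IMailer mailer)      // constructor injection of the dependency
            {
                this.mailer = mailer;
            }

            public void Example()
            {
                // No service-locator call here; any IMailer implementation will do.
                mailer.Send("noreply@example.com", "Welcome!", "Thanks for signing up.");
            }
        }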

    Read the article

  • AI agents with FSM: a question regarding this

    - by Prog
    Finite State Machines implemented with the State design pattern are a common way to design AI agents. I am familiar with the State design pattern and know how to implement it. However, I have a question regarding how this is used in games to design AI agents. Please consider a class Monster that represents an AI agent. Simplified, it looks like this:

        class Monster {
            State state;
            // other fields omitted

            public void update(){ // called every game-loop cycle
                state.execute(this);
            }

            public void setState(State state){
                this.state = state;
            }

            // irrelevant stuff omitted
        }

    There are several State subclasses that implement execute() differently. So far, the classic State pattern. Here's my question: AI agents are subject to environmental effects and to other objects communicating with them. For example, an AI agent might tell another AI agent to attack (i.e. agent.attack()). Or a fireball might tell an AI agent to fall down. This means that the agent must have methods such as attack() and fallDown(), or more generally some message-receiving mechanism to understand such messages. My question is divided into two parts: 1 - Please say if this is correct: with an FSM, the current State of the agent should be the one taking care of such method calls - i.e. the agent delegates to the current state upon every event. Correct? Or wrong? 2 - If correct, then how is this done? Are all states obligated (by their superclass) to implement methods such as attack(), fallDown() etc., so the agent can always delegate to them on almost every event? Or is it done in some other way?
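
    A hedged sketch of one common answer to part 2, written in C# with renamed members (all names are hypothetical): the abstract State supplies do-nothing defaults for every event, each concrete state overrides only the events it cares about, and the agent blindly delegates.

        public abstract class State
        {
            public abstract void Execute(Monster monster);           // called every update
            public virtual void OnAttacked(Monster monster) { }      // default: ignore
            public virtual void OnFallDown(Monster monster) { }      // default: ignore
        }

        public class FleeState : State
        {
            public override void Execute(Monster monster) { /* run away */ }
            public override void OnAttacked(Monster monster)
            {
                monster.SetState(new FightBackState());               // only this event matters here
            }
        }

        public class FightBackState : State
        {
            public override void Execute(Monster monster) { /* swing at the target */ }
        }

        public class Monster
        {
            private State state = new FleeState();
            public void SetState(State next) { state = next; }

            public void Update()   { state.Execute(this); }           // game-loop delegation
            public void Attack()   { state.OnAttacked(this); }        // event delegation
            public void FallDown() { state.OnFallDown(this); }
        }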

    Read the article

  • XNA - Obtaining depth from the scene's render target?

    - by user1423893
    I'm currently rendering my scene to a render target so it can be used for rendering methods such as post-processing and order-independent transparency.

        rtScene = new RenderTarget2D(
            GraphicsDevice,
            GraphicsDevice.PresentationParameters.BackBufferWidth,
            GraphicsDevice.PresentationParameters.BackBufferHeight,
            false,
            SurfaceFormat.Rgba64,
            DepthFormat.Depth24Stencil8, // Requires a depth format for objects to be drawn correctly (e.g. wireframe model surrounding model)
            0,
            RenderTargetUsage.PreserveContents
        );

    I am required to use RenderTargetUsage.PreserveContents so that the same render target can be rendered to multiple times, once for each of the draw methods below:

        DrawBackground
        DrawDeferred
        DrawForward
        DrawTransparent

    The problem is that DrawTransparent requires a copy of the scene's depth as a texture. Is there any way to obtain this from the scene render target above (rtScene)? I can't have more than one render target with RenderTargetUsage.PreserveContents, as this causes problems on hardware such as the XBOX 360, so rendering the depth to a separate render target at the same time as I render the scene isn't possible as far as I can tell. Would I be able to get around this problem by "ping-ponging" two render targets (using the more compatible RenderTargetUsage.DiscardContents) and using the result for the depth texture?

    Read the article

  • FP for simulation and modelling

    - by heaptobesquare
    I'm about to start a simulation/modelling project. I already know that OOP is used for this kind of project. However, studying Haskell made me consider using the FP paradigm for modelling a system of components. Let me elaborate: let's say I have a component of type A, characterised by a set of data (a parameter like temperature or pressure, a PDE and some boundary conditions, etc.) and a component of type B, characterised by a different set of data (a different or the same parameter, a different PDE and boundary conditions). Let's also assume that the functions/methods that are going to be applied to each component are the same (a Galerkin method, for example). If I were to use an OOP approach, I would create two objects that would encapsulate each type's data, the methods for solving the PDE (inheritance would be used here for code reuse), and the solution to the PDE. On the other hand, if I were to use an FP approach, each component would be broken down into data parts and the functions that act upon the data in order to get the solution for the PDE. This approach seems simpler to me, assuming that linear operations on data would be trivial and that the parameters are constant. What if the parameters are not constant (for example, temperature increases suddenly and therefore cannot be immutable)? In OOP, the object's (mutable) state can be used. I know that Haskell has monads for that. To conclude, would implementing the FP approach actually be simpler, less time consuming and easier to manage (adding a different type of component or a new method to solve the PDE) compared to the OOP one? I come from a C++/Fortran background, plus I'm not a professional programmer, so correct me on anything that I've got wrong.
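
    A hedged sketch in C# (only to mirror the FP idea discussed above; the "physics" is a made-up placeholder, not a real solver): the component is immutable data, and stepping it is a pure function that returns a new value instead of mutating state.

        public sealed class Component
        {
            public double Temperature { get; }
            public double Pressure { get; }

            public Component(double temperature, double pressure)
            {
                Temperature = temperature;
                Pressure = pressure;
            }

            // "Changing" a parameter produces a new value rather than mutating this one.
            public Component WithTemperature(double t) => new Component(t, Pressure);
        }

        public static class Solver
        {
            // Pure function: the same component and time step always give the same result.
            public static Component Step(Component c, double dt) =>
                new Component(c.Temperature + 0.1 * dt, c.Pressure);
        }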

    Read the article

  • Command Query Separation

    - by Liam McLennan
    Command query separation is a strategy, proposed by Bertrand Meyer, that each of an object’s methods should be either a command or a query. A command is an operation that changes the state of a system, and a query is an operation that returns a value. This is not the same thing as CQRS, hence why I think that CQRS is poorly named. An Example of Command Query Separation Consider a system that models books and shelves. There is a rule that a shelf may not be removed if it holds any books. One way to implement the removal is to write a method Shelf.Remove() that internally checks to make sure that the shelf is empty before removing it. If the shelf is not empty then it is not removed and an error is returned. To implement this feature following the principle of command query separation would require two methods, one to query the shelf and determine if it is empty and a second method to remove the shelf. Separating the query from the command makes the shelf class simpler to use because the state change is clear and explicit.
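
    A minimal sketch of the shelf example in C# (a hypothetical class, not taken from the original post): IsEmpty is the query, Remove is the command, and the caller combines them explicitly, e.g. if (shelf.IsEmpty) shelf.Remove();

        using System;
        using System.Collections.Generic;

        public class Shelf
        {
            private readonly List<string> books = new List<string>();

            public bool IsRemoved { get; private set; }

            public void Add(string title) { books.Add(title); }

            // Query: returns a value, changes nothing.
            public bool IsEmpty { get { return books.Count == 0; } }

            // Command: changes state, returns nothing; checking the precondition is the caller's job.
            public void Remove()
            {
                if (!IsEmpty) throw new InvalidOperationException("Shelf still holds books.");
                IsRemoved = true;
            }
        }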

    Read the article

  • Understanding interfaces [closed]

    - by user985482
    Possible Duplicate: When to use abstract classes instead of interfaces and extension methods in C#? Why are interfaces useful? What is the point of an interface? What other reasons are there to write interfaces rather than abstract classes? What is the point of having every service class have an interface? Is it bad habit not using interfaces? I am reading Microsoft Visual C# 2010 Step by Step, which I feel is a very good introduction to the C# language. I have just finished reading a chapter on interfaces, and although I understood the syntax of creating and using interfaces, I have trouble understanding why I should use them. Correct me if I am wrong, but in an interface you can only declare method names and parameters; the body of each method has to be declared in the class that implements the interface. So why should I declare an interface if I am going to write the entire method in the implementing class anyway? What is the point? Does this have something to do with the fact that a class can implement multiple interfaces?
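
    A small made-up illustration of why the interface earns its keep: SettingsScreen is written once against IStorage, and either a real implementation or a test fake can be swapped in without changing it.

        using System.Collections.Generic;

        public interface IStorage
        {
            void Save(string key, string value);
            string Load(string key);
        }

        public class InMemoryStorage : IStorage
        {
            private readonly Dictionary<string, string> data = new Dictionary<string, string>();
            public void Save(string key, string value) { data[key] = value; }
            public string Load(string key) { return data[key]; }
        }

        public class SettingsScreen
        {
            private readonly IStorage storage;
            public SettingsScreen(IStorage storage) { this.storage = storage; }   // any IStorage works

            public void RememberTheme(string theme) { storage.Save("theme", theme); }
        }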

    Read the article

  • Generating random tunnels

    - by IVlad
    What methods could we use to generate a random tunnel, similar to the one in this classic helicopter game? Other than that it should be smooth and allow you to navigate through it, while looking as natural as possible (not too symmetric but not overly distorted either), it should also: Most importantly - be infinite and allow me to control its thickness in time - make it narrower or wider as I see fit, when I see fit; Ideally, it should be possible to efficiently generate it with smooth curves, not rectangles as in the above game; I should be able to know in advance what its bounds are, so I can detect collisions and generate powerups inside the tunnel; Any other properties that let you have more control over it or offer optimization possibilities are welcome. Note: I'm not asking for which is best or what that game uses, which could spark extended discussion and would be subjective, I'm just asking for some methods that others know about or have used before or even think they might work. That is all, I can take it from there. Also asked on stackoverflow, where someone suggested I should ask here too. I think it fits in both places, since it's as much an algorithm question as it is a gamedev question, IMO.
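
    A hedged sketch of one common technique (not necessarily what the linked game uses; all names are made up): advance a midline with a damped random walk and derive the top/bottom bounds from a width you control over time. Having the bounds explicitly gives you collision tests and a region for placing power-ups inside the tunnel.

        using System;

        public struct TunnelSlice
        {
            public float Top;
            public float Bottom;
        }

        public class TunnelGenerator
        {
            private readonly Random rng = new Random();
            private float mid = 0.5f;        // midline in [0,1] screen space
            private float drift;             // smoothed vertical velocity of the midline

            // width is whatever you want at this moment, so narrowing over time is trivial.
            public TunnelSlice Next(float width)
            {
                drift += (float)(rng.NextDouble() - 0.5) * 0.02f;   // nudge the drift a little
                drift *= 0.9f;                                      // damping keeps the curve smooth
                mid = Math.Max(width / 2, Math.Min(1 - width / 2, mid + drift));

                return new TunnelSlice { Top = mid + width / 2, Bottom = mid - width / 2 };
            }
        }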

    Read the article

  • Is the Entity Component System architecture object oriented by definition?

    - by tieTYT
    Is the Entity Component System architecture object oriented, by definition? It seems more procedural or functional to me. My opinion is that it doesn't prevent you from implementing it in an OO language, but it would not be idiomatic to do so in a staunchly OO way. It seems like ECS separates data (E & C) from behavior (S). As evidence: The idea is to have no game methods embedded in the entity. And: The component consists of a minimal set of data needed for a specific purpose Systems are single purpose functions that take a set of entities which have a specific component I think this is not object oriented because a big part of being object oriented is combining your data and behavior together. As evidence: In contrast, the object-oriented approach encourages the programmer to place data where it is not directly accessible by the rest of the program. Instead, the data is accessed by calling specially written functions, commonly called methods, which are bundled in with the data. ECS, on the other hand, seems to be all about separating your data from your behavior.
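
    A minimal made-up sketch of the separation being described: components are plain data, and the system is a function-like class that operates on every entity holding the required components. This is only an illustration, not any particular engine's ECS.

        using System.Collections.Generic;

        public class PositionComponent { public float X, Y; }           // data only
        public class VelocityComponent { public float Dx, Dy; }         // data only

        public class Entity
        {
            public PositionComponent Position;     // null if the entity lacks the component
            public VelocityComponent Velocity;
        }

        public static class MovementSystem
        {
            // Behavior lives here, not on the entity or the components.
            public static void Update(IEnumerable<Entity> entities, float dt)
            {
                foreach (var e in entities)
                {
                    if (e.Position == null || e.Velocity == null) continue;
                    e.Position.X += e.Velocity.Dx * dt;
                    e.Position.Y += e.Velocity.Dy * dt;
                }
            }
        }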

    Read the article

  • Must all AI states be able to react to any event?

    - by Prog
    FSMs implemented with the State design pattern are a common way to design AI agents. I am familiar with the State design pattern and know how to implement it. How is this used in games to design AI agents? Consider a simplified class Monster, representing an AI agent:

        class Monster {
            State state;
            // other fields omitted

            public void update(){ // called every game-loop cycle
                state.execute(this);
            }

            public void setState(State state){
                this.state = state;
            }

            // irrelevant stuff omitted
        }

    There are several State subclasses implementing execute() differently. So far, classic State pattern. AI agents are subject to environmental effects and other objects communicating with them. For example, an AI agent might tell another AI agent to attack (i.e. agent.attack()). Or a fireball might tell an AI agent to fall down. This means that the agent must have methods such as attack() and fallDown(), or commonly some message receiving mechanism to understand such messages. With an FSM, the current State of the agent should be the one taking care of such method calls - i.e. the agent delegates to the current state upon every event. Is this correct? If correct, how is this done? Are all states obligated by their superclass to implement methods such as attack(), fallDown() etc., so the agent can always delegate to them on almost every event? Or is it done in some other way?

    Read the article

  • How can I reduce the amount of time it takes to fully regression test an application ready for release?

    - by DrLazer
    An app I work on is being developed with a modified version of Scrum. If you are not familiar with Scrum, it's just an alternative to the more traditional waterfall model, where a series of features are worked on for a set amount of time known as a sprint. The app is written in C# and makes use of WPF. We use Visual C# 2010 Express Edition as an IDE. If we work on a sprint and add in a few new features, but do not plan to release until a further sprint is complete, then regression testing is not an issue as such. We just test the new features and give the app a good once-over. However, if a release is planned that our customers can download, a full regression test is factored in. In the past this wasn't a big deal; it took 3 or 4 days, and the devs simply fixed up any bugs found in the regression phase. But now, as the app is getting larger and larger and incorporating more and more features, the regression is stretching out over weeks. I am interested in any methods that people know of or use that can decrease this time. At the moment the only ideas I have are to either start writing unit tests, which I have never fully tried in a commercial environment, or to research the possibility of UI automation APIs or tools that would allow me to write a program to perform a series of batch tests. I know literally nothing about the possibilities of UI automation, so any information would be valuable. I don't know that much about unit testing either - how complicated can the tests be? Is it possible to get unit tests to use the UI? Are there any other methods I should consider? Thanks for reading, and for any advice in advance.
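
    A hedged sketch of the usual first step for WPF apps (the ViewModel and the discount rule are made up): pull logic out of the window's code-behind into a plain class, and NUnit can then regression-test it in milliseconds without driving the UI at all.

        using NUnit.Framework;

        public class DiscountViewModel
        {
            public decimal OrderTotal { get; set; }
            public decimal DiscountedTotal
            {
                get { return OrderTotal >= 100m ? OrderTotal * 0.9m : OrderTotal; }
            }
        }

        [TestFixture]
        public class DiscountViewModelTests
        {
            [Test]
            public void Orders_of_100_or_more_get_ten_percent_off()
            {
                var vm = new DiscountViewModel { OrderTotal = 200m };
                Assert.AreEqual(180m, vm.DiscountedTotal);
            }

            [Test]
            public void Small_orders_are_not_discounted()
            {
                var vm = new DiscountViewModel { OrderTotal = 50m };
                Assert.AreEqual(50m, vm.DiscountedTotal);
            }
        }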

    Read the article

  • How to choose between using a Domain Event, or letting the application layer orchestrate everything

    - by Mr Happy
    I'm setting my first steps into domain driven design, bought the blue book and all, and I find myself seeing three ways to implement a certain solution. For the record: I'm not using CQRS or Event Sourcing. Let's say a user request comes into the application service layer. The business logic for that request is (for whatever reason) separated into a method on an entity and a method on a domain service. How should I go about calling those methods? The options I have gathered so far are: Let the application service call both methods. Use method injection/double dispatch to inject the domain service into the entity, letting the entity do its thing and then call the method on the domain service (or the other way around, letting the domain service call the method on the entity). Raise a domain event in the entity method, a handler of which calls the domain service. (The kind of domain events I'm talking about are: http://www.udidahan.com/2009/06/14/domain-events-salvation/) I think these are all viable, but I'm unable to choose between them. I've been thinking about this a long time and I've come to a point where I no longer see the semantic differences between the three. Do you know of some guidelines on when to use which?
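
    A hedged sketch of the third option, loosely following the static DomainEvents approach from the linked article (the names here are hypothetical): the entity announces what happened, and a handler registered elsewhere invokes the domain service.

        using System;
        using System.Collections.Generic;

        public class OrderPlaced { public int OrderId; }

        public static class DomainEvents
        {
            private static readonly List<Delegate> handlers = new List<Delegate>();

            public static void Register<T>(Action<T> handler) { handlers.Add(handler); }

            public static void Raise<T>(T @event)
            {
                foreach (var h in handlers)
                    if (h is Action<T> action) action(@event);
            }
        }

        public class Order
        {
            public int Id { get; private set; }
            public Order(int id) { Id = id; }

            public void Place()
            {
                // ... entity-level business logic ...
                DomainEvents.Raise(new OrderPlaced { OrderId = Id });   // announce, don't orchestrate
            }
        }

        // Somewhere in the composition root (hypothetical service):
        //   DomainEvents.Register<OrderPlaced>(e => pricingService.RecalculateFor(e.OrderId));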

    Read the article

  • Should this code/logic be included in Business Objects class or a separate class?

    - by aspdotnetuser
    I have created a small application which has a three-tier architecture, and I have business object classes to represent entities such as User, Orders, UserType etc. In these classes I have methods that are executed when the constructor of, for example, User is called. These methods perform calculations and generate details that set up data for attributes that are part of each User object. Here is some code from the business object class User.cs:

        public class User
        {
            public string Name { get; set; }
            public int RandomNumber { get; set; }
            // etc.

            public User()
            {
                Name = GetName();
                RandomNumber = GetRandomNumber();
            }

            public string GetName()
            {
                // ....
                return name;
            }

            public int GetRandomNumber()
            {
                // ...
                return randomNumber;
            }
        }

    Should this logic be included in the business object classes, or should it be included in a utilities class of some kind? Or in the business rules?
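
    One possible refactoring, sketched with hypothetical names (not the only answer): keep User as plain data and move the value-generation logic into a factory class, so the business object no longer computes its own attributes in its constructor.

        using System;

        public class User
        {
            public string Name { get; set; }
            public int RandomNumber { get; set; }
        }

        public class UserFactory
        {
            private readonly Random random = new Random();

            public User CreateUser(string name)
            {
                return new User
                {
                    Name = name,
                    RandomNumber = random.Next(1, 1000)   // generation logic lives here now
                };
            }
        }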

    Read the article

  • Motivation for service layer (instead of just copying dlls)?

    - by BornToCode
    I'm creating an application which has 2 different UIs, so I'm making it with a service layer, which I understood is appropriate for such a case. However, I found myself just creating web methods for every single method I have in the BL layer, so the services are basically built from methods that look like this: return customers_bl.Get_Customer_Prices(customer_id); I understood that a main point of the service layer is to prevent duplication of code, so I asked myself: well, why not just import the BL.dll (and the DAL.dll) into the other UI and re-copy the dll files whenever I make a change? It might not be so 'neat', but is the whole purpose of the service layer to prevent this? {I know something is wrong in my approach; I'm probably missing the importance of the service layer. I'd like to get more motivation to create another layer, especially because, as it is, I found that many of my BL functions ALREADY look like: return customers_dal.Get_Customer_Prices(cust_id), which led me to ask: was it really necessary to create the BL just because several functions actually have LOGIC inside the BL?} So I'm looking for more motivation for creating ONE MORE layer. I'm sure it's not just for the convenience of not having to re-copy the dlls on changes? Am I grasping it wrong? Any simple guidelines on how to design a service layer (corresponding to all the BL layer functions or not? any simple example?)? Any enlightenment on the subject?

    Read the article
