Search Results

Search found 60107 results on 2405 pages for 'data oriented'.

Page 68/2405 | < Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >

  • PHP - Internal APIs/Libraries - What makes sense?

    - by Mark Locker
    I've been having a discussion lately with some colleagues about the best way to approach a new project, and thought it'd be interesting to get some external thoughts thrown into the mix. Basically, we're redeveloping a fairly large site (written in PHP) and have differing opinions on how the platform should be set up.

    Requirements: The platform will need to support multiple internal websites, as well as external (non-PHP) projects which at the moment consist of a mobile app and a toolbar. We have no plans/need in the foreseeable future to open up an API externally (for use in products other than our own).

    My opinion: We should have a library of well-documented native model classes which can be shared between projects. These models will represent everything in our database and can take advantage of object-oriented features such as inheritance, traits, magic methods, etc., as well as employing ORM. We can then add an API layer on top of these models which can basically accept requests and route them to the appropriate methods, translating the response so that it can be used platform-independently. This routing for each method can be set up as and when it's required.

    Their opinion: We should have a single HTTP API which is used by all projects (internal PHP ones or otherwise).

    My thoughts: To me, there are a number of issues with using the sole HTTP API approach:

    - It will be very expensive performance-wise. One page request will result in several additional HTTP requests (which, although local, are still ones that Apache will need to handle).
    - You'll lose all of the best features PHP has for OO development, from simple inheritance to employing the likes of ORM, which can save you writing a lot of code.
    - For internal projects, the actual process makes me cringe. To get a user's name, for example, a request would go out of our box, over the LAN, back in, then run through a script which calls a method, JSON-encodes the output and feeds that back. That would then need to be JSON-decoded and presented as an array ready to use. Working with arrays, as opposed to objects, makes me sad in a modern PHP framework.

    Their thoughts (and my responses):

    - Having one way of doing things keeps things simple. (My response: you'd only do things differently if you were using a different language anyway.)
    - It will become robust. (My response: seeing as the API will run off the library of models, I think my option would be just as robust.)

    What do you think? I'd be really interested to hear the thoughts of others on this, especially as opinions on both sides are not founded on any past experience.
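
    A minimal sketch of the hybrid being argued for, with invented class, table, and endpoint names: internal PHP projects use the model library directly, while the mobile app and toolbar hit a thin controller that just forwards to the same models and JSON-encodes the result.

    ```php
    <?php
    // Hypothetical shared model class: internal PHP projects call it directly.
    class UserModel
    {
        private $db;

        public function __construct(PDO $db)
        {
            $this->db = $db;
        }

        public function findName($id)
        {
            $stmt = $this->db->prepare('SELECT name FROM users WHERE id = ?');
            $stmt->execute(array((int) $id));
            $name = $stmt->fetchColumn();
            return $name === false ? null : $name;
        }
    }

    // Thin API layer for the non-PHP clients (mobile app, toolbar): it only
    // translates an HTTP request into a model call and JSON-encodes the result.
    class UserApiController
    {
        private $users;

        public function __construct(UserModel $users)
        {
            $this->users = $users;
        }

        public function getName(array $request)
        {
            $id = isset($request['id']) ? (int) $request['id'] : 0;
            return json_encode(array('name' => $this->users->findName($id)));
        }
    }
    ```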

    Read the article

  • Creating Objects

    - by user1424667
    I have a general coding standard question. Is it bad practice to initialize and create an object in multiple methods, depending on the outcome of a user's choice? So, for example, if the user quits a poker game, create the poker hand with the cards the user has received (even if there are fewer than 5), and if the user played till the end, create the object to show the completed hand and the outcome of the game. The key is that the object will only be created once in actuality; there are just different paths and parameters it will receive depending on whether the user folded or played on to the showdown.

    Read the article

  • Am I violating LSP if the condition can be checked?

    - by William
    This base class for some shapes I have in my game looks like this. Some of the shapes can be resized, some of them can not. private Shape shape; public virtual void SetSizeOfShape(int x, int y) { if (CanResize()){ shape.Width = x; shape.Height = y; } else { throw new Exception("You cannot resize this shape"); } } public virtual bool CanResize() { return true; } In a sub class of a shape that I don't ever want to resize I am overriding the CanResize() method so a piece of client code can check before calling the SetSizeOfShape() method. public override bool CanResize() { return false; } Here's how it might look in my client code: public void DoSomething(Shape s) { if(s.CanResize()){ s.SetSizeOfShape(50, 70); } } Is this violating LSP?
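
    The question is C#, but the usual way to avoid the run-time CanResize() probe is language-neutral: split the optional capability into its own type, so the base contract never promises something a subclass has to take away. A hedged sketch of that interface-segregation shape, shown in PHP with illustrative names:

    ```php
    <?php
    interface Shape
    {
        public function area(): float;
    }

    // Only shapes that really support resizing advertise it.
    interface ResizableShape extends Shape
    {
        public function resize(int $width, int $height): void;
    }

    class Rectangle implements ResizableShape
    {
        private $width;
        private $height;

        public function __construct(int $width, int $height)
        {
            $this->width = $width;
            $this->height = $height;
        }

        public function area(): float { return $this->width * $this->height; }

        public function resize(int $width, int $height): void
        {
            $this->width = $width;
            $this->height = $height;
        }
    }

    // Client code declares the capability it needs instead of probing for it.
    function fitToSlot(ResizableShape $shape): void
    {
        $shape->resize(50, 70);
    }
    ```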

    Read the article

  • Is this the correct approach to an OOP design structure in php?

    - by Silver89
    I'm converting a procedural based site to an OOP design to allow more easily manageable code in the future and so far have created the following structure: /classes /templates index.php With these classes: ConnectDB Games System User User -Moderator User -Administrator In the index.php file I have code that detects if any $_GET values are posted to determine on which page content to build (it's early so there's only one example and no default): function __autoload($className) { require "classes/".strtolower($className).".class.php"; } $db = new Connect; $db->connect(); $user = new User(); if(isset($_GET['gameId'])) { System::buildGame($gameId); } This then runs the BuildGame function in the system class which looks like the following and then uses gets in the Game Class to return values, such as $game->getTitle() in the template file template/play.php: function buildGame($gameId){ $game = new Game($gameId); $game->setRatio(900, 600); require 'templates/play.php'; } I also have .htaccess so that actual game page url works instead of passing the parameters to index.php Are there any major errors of how I'm setting this up or do I have the general idea of OOP correct?

    Read the article

  • Should I build a multi-threaded system that handles events from a game and sorts them, independently, into different threads based on priority?

    - by JonathonG
    Can I build a multi-threaded system that handles events from a game and sorts them, independently, into different threads based on priority, and is it a good idea? Here's more info: I am about to begin work on porting a mid-sized game from Flash/AS3 to Java so that I can continue development with multi-threading capabilities. Here's a small bit of background about the game: The game contains numerous asynchronous activities, such as "world updating" (the game environment is constantly changing based on a set of natural laws and forces), procedural generation of terrain, NPCs, quests, items, etc., and on top of that, the effects of all of the player's interactions with his environment are programmatically calculated in real time, based on a set of constantly changing "stats" and once again, natural laws and forces. All of these things going on at once, in an asynchronous manner, seem to lend themselves to multi-threading very well. My question is: Can I build some kind of central engine that handles the "stacking" of all of these events as they are triggered, and dynamically sorts them out amongst the available threads, and would it be a good idea? As an example: Essentially, every time something happens (IE, a magic missile being generated by a spell, or a bunch of plants need to grow to their next stage), instead of just processing that task right then and adding the new object(s) to a list of managed objects, send a reference to that event to a core "event handler" that throws it into a stack of all other currently queued events, which then sorts them out and orders them according to urgency, splits them between a number of available threads for as-fast-as-possible multithreaded execution.

    Read the article

  • May I give a single class multiple responsibilities if only one will ever be reusable?

    - by lnluis
    To the extent that I understand the Single Responsibility Principle, a SINGLE class must only have one responsibility. We use this so that we can reuse other functionalities in other classes and not affect the whole class. My question is: what if the entity has only one purpose that really interacts with the system, and that purpose won't change? Do you have to separate the implementations of your methods into another class and just instantiate those from your entity class? Or to put it another way... Is it ok to break the SRP if you know those functions will not be reusable in the future? Or is it better to assume that we do not know if the functionalities of these methods will be reusable or not, and so just abstract them to other classes?
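
    One way to apply the principle is by "reasons to change" rather than by reuse: behaviour that belongs to the entity's single purpose stays on the entity even if it is never reused, and only concerns that change for different reasons get their own class. A hedged illustration with made-up classes:

    ```php
    <?php
    class Invoice
    {
        private $lines = array();

        public function addLine(string $description, float $amount): void
        {
            $this->lines[] = array('description' => $description, 'amount' => $amount);
        }

        // Core business rule: stays on the entity even if nothing else reuses it.
        public function total(): float
        {
            return array_sum(array_column($this->lines, 'amount'));
        }

        public function lines(): array
        {
            return $this->lines;
        }
    }

    // A genuinely separate responsibility (presentation) changes for different
    // reasons, so it lives in its own class regardless of reuse.
    class InvoiceTextFormatter
    {
        public function format(Invoice $invoice): string
        {
            $out = '';
            foreach ($invoice->lines() as $line) {
                $out .= sprintf("%-20s %8.2f\n", $line['description'], $line['amount']);
            }
            return $out . sprintf("%-20s %8.2f\n", 'TOTAL', $invoice->total());
        }
    }
    ```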

    Read the article

  • Advantages to Multiple Methods over Switch

    - by tandu
    I received a code review from a senior developer today asking "By the way, what is your objection to dispatching functions by way of a switch statement?" I have read in many places about how pumping an argument through switch to call methods is bad OOP, not as extensible, etc. However, I can't really come up with a definitive answer for him. I would like to settle this for myself once and for all. Here are our competing code suggestions (php used as an example, but can apply more universally): class Switch { public function go($arg) { switch ($arg) { case "one": echo "one\n"; break; case "two": echo "two\n"; break; case "three": echo "three\n"; break; default: throw new Exception("Unknown call: $arg"); break; } } } class Oop { public function go_one() { echo "one\n"; } public function go_two() { echo "two\n"; } public function go_three() { echo "three\n"; } public function __call($_, $__) { throw new Exception("Unknown call $_ with arguments: " . print_r($__, true)); } } Part of his argument was "It (switch method) has a much cleaner way of handling default cases than what you have in the generic __call() magic method." I disagree about the cleanliness and in fact prefer call, but I would like to hear what others have to say. Arguments I can come up with in support of Oop scheme: A bit cleaner in terms of the code you have to write (less, easier to read, less keywords to consider) Not all actions delegated to a single method. Not much difference in execution here, but at least the text is more compartmentalized. In the same vein, another method can be added anywhere in the class instead of a specific spot. Methods are namespaced, which is nice. Does not apply here, but consider a case where Switch::go() operated on a member rather than a parameter. You would have to change the member first, then call the method. For Oop you can call the methods independently at any time. Arguments I can come up with in support of Switch scheme: For the sake of argument, cleaner method of dealing with a default (unknown) request Seems less magical, which might make unfamiliar developers feel more comfortable Anyone have anything to add for either side? I'd like to have a good answer for him.
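
    A common middle ground between the two snippets is an explicit whitelist plus dynamic dispatch: the actions stay in separate, individually readable methods, but unknown requests are rejected in one obvious place instead of inside the __call() magic method. A sketch with illustrative names:

    ```php
    <?php
    class Dispatcher
    {
        private $actions = array('one', 'two', 'three');

        public function go(string $arg): void
        {
            // Single, explicit "default case": reject anything not whitelisted.
            if (!in_array($arg, $this->actions, true) || !method_exists($this, 'go_' . $arg)) {
                throw new InvalidArgumentException("Unknown call: $arg");
            }
            $this->{'go_' . $arg}();
        }

        private function go_one(): void { echo "one\n"; }
        private function go_two(): void { echo "two\n"; }
        private function go_three(): void { echo "three\n"; }
    }
    ```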

    Read the article

  • Is this a pattern? Proxy/delegation of interface to existing concrete implementation

    - by Ian Newson
    I occasionally write code like this when I want to replace small parts of an existing implementation: public interface IFoo { void Bar(); } public class Foo : IFoo { public void Bar() { } } public class ProxyFoo : IFoo { private IFoo _Implementation; public ProxyFoo(IFoo implementation) { this._Implementation = implementation; } #region IFoo Members public void Bar() { this._Implementation.Bar(); } #endregion } This is a much smaller example than the real life cases in which I've used this pattern, but if implementing an existing interface or abstract class would require lots of code, most of which is already written, but I need to change a small part of the behaviour, then I will use this pattern. Is this a pattern or an anti pattern? If so, does it have a name and are there any well known pros and cons to this approach? Is there a better way to achieve the same result? Rewriting the interfaces and/or the concrete implementation is not normally an option as it will be provided by a third party library.
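
    What the snippet shows is generally called the Proxy or Decorator pattern (Decorator when the point is to add or tweak behaviour, Proxy when it is to control access), and wrapping a third-party implementation this way is a normal use of it. A small PHP sketch of the same shape with an invented Logger interface:

    ```php
    <?php
    interface Logger
    {
        public function log(string $message): void;
        public function flush(): void;
    }

    class FileLogger implements Logger
    {
        public function log(string $message): void
        {
            file_put_contents('app.log', $message . "\n", FILE_APPEND);
        }

        public function flush(): void { /* nothing buffered */ }
    }

    // Decorator: changes one behaviour, forwards everything else unchanged.
    class TimestampedLogger implements Logger
    {
        private $inner;

        public function __construct(Logger $inner) { $this->inner = $inner; }

        public function log(string $message): void
        {
            $this->inner->log(date('c') . ' ' . $message);
        }

        public function flush(): void
        {
            $this->inner->flush();
        }
    }
    ```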

    Read the article

  • Best practice to collect information from child objects

    - by Markus
    I'm regularly seeing the following pattern: public abstract class BaseItem { BaseItem[] children; // ... public void DoSomethingWithStuff() { StuffCollection collection = new StuffCollection(); foreach(child c : children) c.AddRequiredStuff(collection); // do something with the collection ... } public abstract void AddRequiredStuff(StuffCollection collection); } public class ConcreteItem : BaseItem { // ... public override void AddRequiredStuff(StuffCollection collection) { Stuff stuff; // ... collection.Add(stuff); } } Where I would use something like this, for better information hiding: public abstract class BaseItem { BaseItem[] children; // ... public void DoSomethingWithStuff() { StuffCollection collection = new StuffCollection(); foreach(child c : children) collection.AddRange(c.RequiredStuff()); // do something with the collection ... } public abstract StuffCollection RequiredStuff(); } public class ConcreteItem : BaseItem { // ... public override StuffCollection RequiredStuff() { StuffCollection stuffCollection; Stuff stuff; // ... stuffCollection.Add(stuff); return stuffCollection; } }

    What are the pros and cons of each solution? For me, giving the implementation access to the parent's information is somehow disconcerting. On the other hand, initializing a new list just to collect the items is useless overhead... What is the better design? How would it change if DoSomethingWithStuff weren't part of BaseItem but of a third class?

    PS: there might be missing semicolons or typos; sorry for that! The above code is not meant to be executed, but is just for illustration.
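
    A hedged PHP sketch of the second variant, with illustrative names: each child returns its own small array and the parent merges them, so the children never touch the caller's collection and the per-call array stays cheap.

    ```php
    <?php
    abstract class BaseItem
    {
        /** @var BaseItem[] */
        protected $children = array();

        public function doSomethingWithStuff(): void
        {
            $stuff = array();
            foreach ($this->children as $child) {
                $stuff = array_merge($stuff, $child->requiredStuff());
            }
            // ... do something with $stuff
        }

        /** @return array the stuff contributed by this item */
        abstract public function requiredStuff(): array;
    }

    class ConcreteItem extends BaseItem
    {
        public function requiredStuff(): array
        {
            return array('stuff for ' . static::class);
        }
    }
    ```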

    Read the article

  • Object design problem for a simple school application

    - by Aragornx
    I want to create a simple school application that provides grades, notes, presence, etc. for students, teachers and parents. I'm trying to design objects for this problem and I'm a little bit confused, because I'm not very experienced in class design. Some of my present objects are: class PersonalData { private String name; private String surname; private Calendar dateOfBirth; [...] } class Person { private PersonalData personalData; } class User extends Person { private String login; private char[] password; } class Student extends Person { private ArrayList<Counselor> counselors = new ArrayList<>(); } class Counselor extends Person { private ArrayList<Student> children = new ArrayList<>(); } class Teacher extends Person { private ArrayList<SchoolClass> schoolClasses = new ArrayList<>(); private ArrayList<Subject> subjects = new ArrayList<>(); } This is of course a general idea, but I'm sure it's not the best way. For example, I want one person to be able to be a Teacher and also a Parent (Counselor), and the present approach forces me to have two Person objects. I want the user, after successfully logging in, to get all the roles that they have (Student, or Teacher, or (Teacher & Parent)). I think I should make and use some interfaces, but I'm not sure how to do this right. Maybe like this: interface Role { } interface TeacherRole extends Role { void addGrade( Student student, Grade grade, [...] ); } class Teacher implements TeacherRole { private Person person; [...] } class User extends Person { ArrayList<Role> roles = new ArrayList<>(); } Please, if anyone could help me to make this right, or maybe just point me to some literature/articles that cover practical object design.
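
    A hedged PHP sketch of the composition direction the asker is leaning towards: one User per human, with role objects attached, so the same account can be a teacher and a counselor at once. All class and method names are illustrative.

    ```php
    <?php
    interface Role
    {
        public function name(): string;
    }

    class TeacherRole implements Role
    {
        public function name(): string { return 'teacher'; }

        public function addGrade(string $studentId, int $grade): void
        {
            // ... persist the grade
        }
    }

    class CounselorRole implements Role
    {
        public function name(): string { return 'counselor'; }
    }

    class User
    {
        private $login;
        private $roles = array();

        public function __construct(string $login) { $this->login = $login; }

        public function addRole(Role $role): void { $this->roles[$role->name()] = $role; }

        public function hasRole(string $name): bool { return isset($this->roles[$name]); }
    }

    // The same person can hold several roles after logging in:
    $user = new User('jsmith');
    $user->addRole(new TeacherRole());
    $user->addRole(new CounselorRole());
    ```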

    Read the article

  • best practice for initializing class members in php

    - by rgvcorley
    I have lots of code like this in my constructors:- function __construct($params) { $this->property = isset($params['property']) ? $params['property'] : default_val; } Is it better to do this rather than specify the default value in the property definition? i.e. public $property = default_val? Sometimes there is logic for the default value, and some default values are taken from other properties, which was why I was doing this in the constructor. Should I be using setters so all the logic for default values is separated?
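
    One pattern that keeps this readable is merging the incoming parameters over a defaults array, and computing any default that depends on other properties at the end of the constructor; a sketch with invented property names:

    ```php
    <?php
    class Widget
    {
        private $width;
        private $height;
        private $label;

        public function __construct(array $params = array())
        {
            $defaults = array(
                'width'  => 100,
                'height' => 50,
                'label'  => null,
            );
            $params = array_merge($defaults, $params);

            $this->width  = $params['width'];
            $this->height = $params['height'];

            // A default that depends on other properties is computed last.
            $this->label = $params['label'] !== null
                ? $params['label']
                : $this->width . 'x' . $this->height;
        }
    }
    ```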

    Read the article

  • Does OO, TDD, and Refactoring to Smaller Functions affect Speed of Code?

    - by Dennis
    In the computer science field, I have noticed a notable shift in thinking when it comes to programming. The advice as it stands now is:

    - write smaller, more testable code
    - refactor existing code into smaller and smaller chunks of code until most of your methods/functions are just a few lines long
    - write functions that only do one thing (which makes them smaller again)

    This is a change compared to the "old" or "bad" code practices where you have methods spanning 2500 lines and big classes doing everything. My question is this: when it all comes down to machine code, to 1s and 0s, to assembly instructions, should I be at all concerned that my class-separated code with a variety of small-to-tiny functions generates too much extra overhead? While I am not exactly familiar with how OO code and function calls are handled in ASM in the end, I do have some idea. I assume that each extra function call, object call, or include call (in some languages) generates an extra set of instructions, thereby increasing the code's volume and adding various overhead without adding actual "useful" code. I also imagine that good optimizations can be done to ASM before it is actually run on the hardware, but that optimization can only do so much too. Hence my question: how much overhead (in space and speed) does well-separated code (split up across hundreds of files, classes, and methods) actually introduce compared to having "one big method that contains everything"?

    UPDATE for clarity: I am assuming that adding more and more functions and more and more objects and classes to a codebase will result in more and more parameter passing between smaller code pieces. It was said somewhere (quote TBD) that up to 70% of all code is made up of ASM's MOV instruction - loading CPU registers with proper variables, not the actual computation being done. In my case, you load up the CPU's time with PUSH/POP instructions to provide linkage and parameter passing between various pieces of code. The smaller you make your pieces of code, the more overhead "linkage" is required. I am concerned that this linkage adds to software bloat and slow-down, and I am wondering whether I should be concerned about this, and how much, if at all, because current and future generations of programmers who are building software for the next century will have to live with and consume software built using these practices.

    UPDATE: Multiple files. I am writing new code now that is slowly replacing old code. In particular I've noted that one of the old classes was a ~3000 line file (as mentioned earlier). Now it is becoming a set of 15-20 files located across various directories, including test files and not including the PHP framework I am using to bind some things together. More files are coming as well. When it comes to disk I/O, loading multiple files is slower than loading one large file. Of course not all files are loaded; they are loaded as needed, and disk caching and memory caching options exist, and yet I still believe that loading multiple files takes more processing than loading a single file into memory. I am adding that to my concern.

    Read the article

  • Passing class names or objects?

    - by nischayn22
    I have a switch statement switch ( $id ) { case 'abc': return 'Animal'; case 'xyz': return 'Human'; //many more } I am returning class names,and use them to call some of their static functions using call_user_func(). Instead I can also create a object of that class, return that and then call the static function from that object as $object::method($param) switch ( $id ) { case 'abc': return new Animal; case 'xyz': return new Human; //many more } Which way is efficient? To make this question broader : I have classes that have mostly all static methods right now, putting them into classes is kind of a grouping idea here (for example the DB table structure of Animal is given by class Animal and so for Human class). I need to access many functions from these classes so the switch needs to give me access to the class
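
    Either return value works with call_user_func(); the cheaper option is usually to keep returning class names from a lookup map and let a late static call do the rest, creating an object only when per-instance state is actually needed. A sketch with the example classes stubbed out:

    ```php
    <?php
    class Animal
    {
        public static function tableStructure(): array { return array('id', 'species'); }
    }

    class Human
    {
        public static function tableStructure(): array { return array('id', 'name'); }
    }

    function classFor(string $id): string
    {
        $map = array('abc' => 'Animal', 'xyz' => 'Human');
        if (!isset($map[$id])) {
            throw new InvalidArgumentException("Unknown id: $id");
        }
        return $map[$id];
    }

    // Static call on the returned class name, no instance needed:
    $class   = classFor('abc');
    $columns = call_user_func(array($class, 'tableStructure'));
    // Equivalent since PHP 5.3: $columns = $class::tableStructure();
    ```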

    Read the article

  • How to implement isValid correctly?

    - by Songo
    I'm trying to provide a mechanism for validating my object like this: class SomeObject { private $_inputString; private $_errors=array(); public function __construct($inputString) { $this->_inputString = $inputString; } public function getErrors() { return $this->_errors; } public function isValid() { $isValid = preg_match("/Some regular expression here/", $this->_inputString); if($isValid==0){ $this->_errors[]= 'Error was found in the input'; } return $isValid==1; } } Then when I'm testing my code I'm doing it like this: $obj = new SomeObject('an INVALID input string'); $isValid = $obj->isValid(); $errors=$obj->getErrors(); $this->assertFalse($isValid); $this->assertNotEmpty($errors); Now the test passes correctly, but I noticed a design problem here. What if the user called $obj->getErrors() before calling $obj->isValid()? The test will fail because the user has to validate the object first before checking the error resulting from validation. I think this way the user depends on a sequence of action to work properly which I think is a bad thing because it exposes the internal behaviour of the class. How do I solve this problem? Should I tell the user explicitly to validate first? Where do I mention that? Should I change the way I validate? Is there a better solution for this? UPDATE: I'm still developing the class so changes are easy and renaming functions and refactoring them is possible.
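
    One low-friction fix is to funnel both public methods through the same lazy validation step, so the call order stops mattering and getErrors() is always consistent with isValid(). A sketch of that shape (the regular expression is a placeholder):

    ```php
    <?php
    class SomeObject
    {
        private $inputString;
        private $errors = null;   // null = not validated yet

        public function __construct(string $inputString)
        {
            $this->inputString = $inputString;
        }

        public function isValid(): bool
        {
            return count($this->getErrors()) === 0;
        }

        public function getErrors(): array
        {
            if ($this->errors === null) {
                $this->errors = array();
                if (!preg_match('/^[a-z ]+$/', $this->inputString)) {   // placeholder rule
                    $this->errors[] = 'Error was found in the input';
                }
            }
            return $this->errors;
        }
    }
    ```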

    Read the article

  • Where should I put a method that returns a list of active entries of a table?

    - by darga33
    I have a class named GuestbookEntry that maps to the properties that are in the database table named "guestbook". Very simple! Originally, I had a static method named getActiveEntries() that retrieved an array of all GuestbookEntry objects. Each row in the guestbook table was an object that was added to that array. Then while learning how to properly design PHP classes, I learned some things: Static methods are not desirable. Separation of Concerns Single Responsibility Principle If the GuestbookEntry class should only be responsible for managing single guestbook entries then where should this getActiveEntries() method most properly go? Update: I am looking for an answer that complies with the SOLID acronym principles and allows for test-ability. That's why I want to stay away from static calls/standard functions. DAO, repository, ...? Please explain as though your explanation will be part of "Where to Locate FOR DUMMIES"... :-)
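
    The usual home for getActiveEntries() is a repository (sometimes called a DAO) that takes the database connection as a dependency and returns GuestbookEntry objects, leaving the entry class free of persistence concerns and the query easy to test through an injected (or in-memory) PDO. A sketch assuming a hypothetical guestbook schema:

    ```php
    <?php
    class GuestbookEntry
    {
        private $id;
        private $message;

        public function __construct(int $id, string $message)
        {
            $this->id = $id;
            $this->message = $message;
        }

        public function getMessage(): string { return $this->message; }
    }

    class GuestbookRepository
    {
        private $db;

        public function __construct(PDO $db) { $this->db = $db; }

        /** @return GuestbookEntry[] */
        public function findActive(): array
        {
            $stmt = $this->db->query('SELECT id, message FROM guestbook WHERE active = 1');
            $entries = array();
            foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
                $entries[] = new GuestbookEntry((int) $row['id'], $row['message']);
            }
            return $entries;
        }
    }
    ```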

    Read the article

  • Get and set accessors do they protect different instances of a variable?

    - by Chris Halcrow
    The standard method of implementing get and set accessors in C# and VB.NET is to use a public property to set and retrieve the value of a corresponding private variable. Am I right in saying that this has no effect across different instances of a variable? By this I mean: if there are different instantiations of an object, then those instances and their properties are completely independent, right? So is my understanding correct that the backing private variable is just a construct used to implement the get and set pattern? I've never been 100% sure about this.

    Read the article

  • PHP I am pulling my hair out trying to find the best! [closed]

    - by darga33
    I have heard of Martin Fowler's book PoEAA and of Head First OOA&D, but those are not in PHP. I desperately want to read books like them, but only in PHP, utilizing SOLID principles! Does anyone know of the absolute best, most recent PHP book that utilizes the SOLID principles and GRASP, and all the other best practices? I want to learn from the best possible source! Not beginner books! I already understand OOP. This seems like an almost impossible question to find the answer to, and so I thought, hey, might as well post on Stack Exchange! Surely someone out there must know! Or, if no one knows, maybe they know of an open-source application that utilizes these principles, that is relatively small, and that is not a framework: something I can go through class by class, spending time understanding the ins and outs of how the program was developed. Thanks so much in advance! I really appreciate it!

    Read the article

  • Combinatorial explosion of interfaces: How many is too many?

    - by mga
    I'm a relative newcomer to OOP, and I'm having a bit of trouble creating good designs when it comes to interfaces. Consider a class A with N public methods. There are a number of other classes, B, C, ..., each of which interacts with A in a different way, that is, accesses some subset (<= N) of A's methods. The maximum degree of encapsulation is achieved by implementing an interface of A for each other class, i.e. AInterfaceForB, AInterfaceForC, etc. However, if B, C, ... etc. also interact with A and with each other, then there will be a combinatorial explosion of interfaces (a maximum of n(n-1), to be precise), and the benefit of encapsulation becomes outweighed by code bloat. What is the best practice in this scenario? Is the whole idea of restricting access to a class's public functions in different ways for different classes just silly altogether? One could imagine a language that explicitly allows for this sort of encapsulation (e.g. instead of declaring a function public, one could specify exactly which classes it is visible to); since this is not a feature of C++, maybe it's misguided to try to do it through the back door with interfaces?

    Read the article

  • Override methods should call base method?

    - by Trevor Pilley
    I'm just running NDepend against some code that I have written and one of the warnings is Overrides of Method() should call base.Method(). The places this occurs are where I have a base class which has virtual properties and methods with default behaviour but which can be overridden by a class which inherits from the base class and doesn't call the overridden method. For example, in the base class I have a property defined like this: protected virtual char CloseQuote { get { return '"'; } } And then in an inheriting class which uses a different close quote: protected override char CloseQuote { get { return ']'; } } Not all classes which inherit from the base class use different quote characters hence my initial design. The alternatives I thought of were have get/set properties in the base class with the defaults set in the constructor: protected BaseClass() { this.CloseQuote = '"'; } protected char CloseQuote { get; set; } public InheritingClass() { this.CloseQuote = ']'; } Or make the base class require the values as constructor args: protected BaseClass(char closeQuote, ...) { this.CloseQuote = '"'; } protected char CloseQuote { get; private set; } public InheritingClass() base (closeQuote: ']', ...) { } Should I use virtual in a scenario where the base implementation may be replaced instead of extended or should I opt for one of the alternatives I thought of? If so, which would be preferable and why?

    Read the article

  • What is the correct way to implement Auth/ACL in MVC?

    - by WiseStrawberry
    I am looking into making a correctly laid out MVC Auth/ACL system. I think I want the authentication of a user (and the session handling) to be separate from the ACL system. (I don't know why but this seems a good idea from the things I've read.) What does MVC have to do with this question you ask? Because I wish for the application to be well integrated with my ACL. An example of a controller (CodeIgniter): <?php class forums extends MX_Controller { $allowed = array('users', 'admin'); $need_login = true; function __construct() { //example of checking if logged in. if($this->auth->logged_in() && $this->auth->is_admin()) { echo "you're logged in!"; } } public function add_topic() { if($this->auth->allowed('add_topic') { //some add topic things. } else { echo 'not allowed to add topic'; } } } ?> My thoughts $this->auth would be autoloaded in the system. I would like to check the $allowed array against the user currently (not) logged in and react accordingly. Is this a good way of doing things? I haven't seen much literature on MVC integration and Auth. I want to make things as easy as possible.
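
    A framework-agnostic sketch of the separation being described: an Auth object that only answers "who is logged in", an Acl object that only answers "may this role do that action", and the controller's $allowed idea expressed as rules keyed by action. None of these are CodeIgniter APIs; the names are illustrative and session handling is assumed to be already started.

    ```php
    <?php
    class Auth
    {
        // Assumes session_start() has been called elsewhere.
        public function isLoggedIn(): bool { return isset($_SESSION['user_id']); }
        public function role(): string     { return isset($_SESSION['role']) ? $_SESSION['role'] : 'guest'; }
    }

    class Acl
    {
        private $rules = array(
            'forums.add_topic' => array('users', 'admin'),
            'forums.view'      => array('guest', 'users', 'admin'),
        );

        public function allowed(string $role, string $action): bool
        {
            return isset($this->rules[$action]) && in_array($role, $this->rules[$action], true);
        }
    }

    // In a controller action:
    $auth = new Auth();
    $acl  = new Acl();
    if (!$acl->allowed($auth->role(), 'forums.add_topic')) {
        http_response_code(403);
        exit('not allowed to add topic');
    }
    ```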

    Read the article

  • REST API wrapper - class design for 'lite' object responses

    - by sasfrog
    I am writing a class library to serve as a managed .NET wrapper over a REST API. I'm very new to OOP, and this task is an ideal opportunity for me to learn some OOP concepts in a real-life situation that makes sense to me. Some of the key resources/objects that the API returns are returned with different levels of detail depending on whether the request is for a single instance, a list, or part of a "search all resources" response. This is obviously a good design for the REST API itself, so that full objects aren't returned (thus increasing the size of the response and therefore the time taken to respond) unless they're needed. So, to be clear: .../car/1234.json returns the full Car object for 1234, all its properties like colour, make, model, year, engine_size, etc. Let's call this full. .../cars.json returns a list of Car objects, but only with a subset of the properties returned by .../car/1234.json. Let's call this lite. ...search.json returns, among other things, a list of car objects, but with minimal properties (only ID, make and model). Let's call this lite-lite. I want to know what the pros and cons of each of the following possible designs are, and whether there is a better design that I haven't covered: Create a Car class that models the lite-lite properties, and then have each of the more detailed responses inherit and extend this class. Create separate CarFull, CarLite and CarLiteLite classes corresponding to each of the responses. Create a single Car class that contains (nullable?) properties for the full response, and create constructors for each of the responses which populate it to the extent possible (and maybe include a property that returns the response type from which the instance was created). I expect among other things there will be use cases for consumers of the wrapper where they will want to iterate through lists of Cars, regardless of which response type they were created from, such that the three response types can contribute to the same list. Happy to be pointed to good resources on this sort of thing, and/or even told the name of the concept I'm describing so I can better target my research.
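
    A hedged PHP sketch of the third option (one class with per-response constructors), with invented field names: every instance is a Car and can sit in the same list, and a detail-level marker records which properties were actually populated.

    ```php
    <?php
    class Car
    {
        const DETAIL_SEARCH = 'search';   // lite-lite
        const DETAIL_LIST   = 'list';     // lite
        const DETAIL_FULL   = 'full';

        public $id;
        public $make;
        public $model;
        public $colour;       // only present in full responses
        public $engineSize;   // only present in full responses
        public $detailLevel;

        public static function fromSearchResult(array $data): Car
        {
            $car = new self();
            $car->id    = $data['id'];
            $car->make  = $data['make'];
            $car->model = $data['model'];
            $car->detailLevel = self::DETAIL_SEARCH;
            return $car;
        }

        public static function fromFullResponse(array $data): Car
        {
            $car = self::fromSearchResult($data);
            $car->colour      = $data['colour'];
            $car->engineSize  = $data['engine_size'];
            $car->detailLevel = self::DETAIL_FULL;
            return $car;
        }
    }
    ```

    Consumers iterating a mixed list can check $car->detailLevel before touching the full-only properties, or the wrapper could lazily fetch the full resource on demand.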

    Read the article

  • How to model an address type in DDD?

    - by Songo
    I have a User entity that has a Set of Address, where Address is a value object:

        class User {
            ...
            private Set<Address> addresses;
            ...
            public setAddresses(Set<Address> addresses) {
                // set all addresses as a batch
            }
            ...
        }

    A User can have a home address and a work address, so I should have something that acts as a lookup in the database:

        tbl_address_type
        ----------------------------------
        | address_type_id | address_type |
        ----------------------------------
        | 1               | work         |
        | 2               | home         |
        ----------------------------------

    and correspondingly:

        tbl_address
        -------------------------------------------------------------------
        | address_id | address_description | address_type_id | user_id    |
        -------------------------------------------------------------------
        | 1          | 123 main street     | 1               | 100        |
        | 2          | 456 another street  | 1               | 100        |
        | 3          | 789 long street     | 2               | 200        |
        | 4          | 023 short street    | 2               | 200        |
        -------------------------------------------------------------------

    1. Should the address type be modeled as an Entity or a Value type? And why?
    2. Is it OK for the Address value object to hold a reference to the AddressType entity (in case it is modeled as an entity)? Is this something feasible using Hibernate/NHibernate?
    3. If a user can change his home address, should I expose a User.updateHomeAddress(Address homeAddress) function on the User entity itself? How can I enforce that the client passes a home address and not a work address in this case? (A sample implementation is most welcome.)
    4. If I want to get the User's home address via a User.getHomeAddress() function, must I load the whole addresses collection, loop over it, and check each element's type until I find the correct one and return it? Is there a more efficient way than this?

    Read the article

  • Placing advice on any parameter of a given type in AspectJ

    - by user12558
    Hi, I'm doing a POC using AspectJ. class BaseInfo{..} class UserInfo extends BaseInfo{..} class UserService { public void getUser(UserInfo userInfo){..} public void deleteUser(String userId){..} } I've defined an advice that gets invoked when I pass a UserInfo instance, but when I try to pass a BaseInfo, the advice is not getting invoked. The block below executes afterMethod as expected for getUser: <aop:pointcut id="aopafterMethod" expression="execution(* UserService.*(..,UserInfo,..))" /> <aop:after pointcut-ref="aopafterMethod" method="afterMethod" /> But when I try to give BaseInfo instead of UserInfo, the aspect is not getting triggered. Am I missing something? Kindly help me with this issue.

    Read the article

  • Help with design structure choice: Using classes or library of functions

    - by roverred
    So I have GUI Class that will call another class called ImageProcessor that contains a bunch functions that will perform image processing algorithms like edgeDetection, gaussianblur, contourfinding, contour map generations, etc. The GUI passes an image to ImageProcessor, which performs one of those algorithm on it and it returns the image back to the GUI to display. So essentially ImageProcessor is a library of independent image processing functions right now. It is called in the GUI like so Image image = ImageProcessor.EdgeDetection(oldImage); Some of the algorithms procedures require many functions, and some can be done in a single function or even one line. All these functions for the algorithms jam packed into ImageProcessor can be pretty messy, and ImageProcessor doesn't sound it should be a library. So I was thinking about making every algorithm be a class with a shared interface say IAlgorithm. Then I pass the IAlgorithm interface from the GUI to the ImageProcessor. public interface IAlgorithm{ public Image Process(); } public class ImageProcessor{ public Image Process(IAlgorithm TheAlgorithm){ return IAlgorithm.Process(); } } Calling in the GUI like so Image image = ImageProcessor.Process(new EdgeDetection(oldImage)); I think it makes sense in an object point of view, but the problem is I'll end up with some classes that are just one function. What do you think is a better design, or are they both crap and you have a much better idea? Thanks!
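
    What is being described is essentially the Strategy pattern, and algorithm classes that contain a single method are a normal outcome of it; where a class would hold nothing but one stateless function, passing a plain callable is a reasonable alternative. A hedged PHP sketch with placeholder image handling:

    ```php
    <?php
    interface ImageAlgorithm
    {
        // Returns the path of the processed image (placeholder contract).
        public function process(string $imagePath): string;
    }

    class EdgeDetection implements ImageAlgorithm
    {
        public function process(string $imagePath): string
        {
            // ... run edge detection, write the result next to the input
            return $imagePath . '.edges.png';
        }
    }

    class ImageProcessor
    {
        public function run(ImageAlgorithm $algorithm, string $imagePath): string
        {
            return $algorithm->process($imagePath);
        }
    }

    // Usage from the GUI layer:
    $processor = new ImageProcessor();
    $result = $processor->run(new EdgeDetection(), 'photo.png');
    ```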

    Read the article

  • How to learn to translate real world problems to code?

    - by StudioWorks
    I'm kind of a beginner to Java and OOP and I didn't quite get the whole concept of seeing a real world problem and translating it to classes and code. For example, I was reading a book on UML and at the beginning the author takes the example of a tic tac toe game and says: "In this example, it's natural to see three classes: Board, Player and Position." Then, he creates the methods in each class and explains how they relate. What I can't understand is how he thought all this. So, where should I start to learn how to see a real world problem and then "translate" it into code?
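
    It can help to see how little is needed for the book's example: each noun becomes a class that owns its own data, and the verbs land on whichever class holds the data they need. A hedged PHP skeleton of the Board, Player, and Position split:

    ```php
    <?php
    class Position
    {
        public $row;
        public $column;

        public function __construct(int $row, int $column)
        {
            $this->row = $row;
            $this->column = $column;
        }
    }

    class Player
    {
        public $symbol;   // 'X' or 'O'

        public function __construct(string $symbol) { $this->symbol = $symbol; }
    }

    class Board
    {
        private $cells = array();

        public function place(Player $player, Position $position): void
        {
            $this->cells[$position->row][$position->column] = $player->symbol;
        }

        public function isEmpty(Position $position): bool
        {
            return !isset($this->cells[$position->row][$position->column]);
        }
    }
    ```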

    Read the article
