Search Results

Search found 5045 results on 202 pages for 'delusional logic'.

  • c# vocabulary

    - by foxjazz
    I have probably seen and used the word "encapsulation" four times in my 20 years of programming. I now know what it is again, after an interview for a C# job, even though I have used the public, private, and protected keywords in classes for as long as C# has existed. I can still remember coming across the string.IndexOf function and thinking: why didn't they call it IndexAt?

    Now, with all the new items like lambdas, Rx, LINQ, map and pmap, etc., I think the more choices there are to do one or two things 10 or 15 differing ways, the more programmers stick with what works and try to leverage the new stuff only when it really becomes beneficial. For many, the new stuff is harder to read, because programmers aren't used to seeing declarative notation. I mean, I have probably used yield break twice in my project where it may have been possible to use it many more times. Or the using statement (not the declaration of namespace references, but the inline using) - I never really saw a big advantage to it, other than confusion. It is another form of local encapsulation (oh, there, five times used in my programming career), but who's counting? THE COMPUTERS ARE COUNTING!

    In business logic, most programming is about displaying lists, selecting items in a list, and sending those choices to some other system or database to keep track of those selections. What makes this difficult is how these items relate to, one, each other, and, two, externally listed items.

    Well, I probably need to go back to school and get a C# certification so I can say I am an expert in C#. Apparently using all aspects of C# (even unsafe code) in my programming life doesn't make me certified, just certifiable.

    This is a good time to sign off: Fox-jazzy

  • Am I missing something in these considerations about my leaderboard's database schema?

    - by misiMe
    I just finished developing a mobile game, and now I want to implement an online leaderboard using MySQL. I'm wondering about the database schema. I thought about some possibilities (I won't go into syntax detail, because my question is just about the logic of it):

    1. Leaderboard(ID, Name, Score), where ID is an auto-increment integer primary key, Name is a string, and Score is an integer. I thought to ask for the name just the first time; if you modify it in the future, it is just an update to the name associated with your ID. With this kind of idea, maybe the table will grow fast, because if you choose a different name for every score, it will add a new entry.

    2. Leaderboard(PhoneId, Name, Score), where PhoneId is the unique identifier of the phone and the primary key. A con of this choice is that if you want to play on your friend's phone, you can't put a different name on the score.

    3. Leaderboard(Name, Score), where Name is the primary key. With that, if you enter a name that already exists, you will be prompted to choose another one.

    Do you agree with these considerations? What would you do? Am I missing something?
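
    A minimal sketch of option 2 in MySQL, assuming PhoneId arrives as a device-supplied string; the table, column names, and sample values are illustrative only:

        -- One row per device; the name is simply overwritten if it changes.
        CREATE TABLE Leaderboard (
            PhoneId VARCHAR(64)  NOT NULL,   -- unique device identifier
            Name    VARCHAR(32)  NOT NULL,
            Score   INT UNSIGNED NOT NULL DEFAULT 0,
            PRIMARY KEY (PhoneId)
        );

        -- Record a result: insert the first time, update afterwards,
        -- keeping only the device's best score.
        INSERT INTO Leaderboard (PhoneId, Name, Score)
        VALUES ('device-42', 'misiMe', 500)
        ON DUPLICATE KEY UPDATE
            Name  = VALUES(Name),
            Score = GREATEST(Score, VALUES(Score));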

  • Java game design question (graphical objects)

    - by vemalsar
    Hello guys, I'm a beginner in game development, in Java, and here on this site too, and I have a game design question. Please comment on my idea. I have a main loop which calls an update and a draw method. I want to use an ArrayList which stores graphical objects; they have coordinates and an image or text to draw, and my game objects extend this class. In update, I can choose which objects should be put in the array, and in the draw method I'll display the elements of the array on the screen. I'm drawing to a buffer first, but that is not important now, I guess. Here is a simple (not full) version of the code, only the logic:

        public class GamePanel extends JPanel implements KeyListener {

            ArrayList<GraphicalObject> graphArray = new ArrayList<GraphicalObject>();

            public void update() {
                // change the game scene, update graphArray, process input, etc.
            }

            public void draw() {
                // draws every element of graphArray to the JPanel
            }

            public static void main(String[] args) {
                GamePanel panel = new GamePanel();
                while (true) {
                    panel.update();
                    panel.draw();
                }
            }
        }

    My questions:

    1. Should I use an interface or an abstract class for GraphicalObject?
    2. Are the GraphicalObject class and the ArrayList really needed, or is there a better solution?
    3. How should the objects be drawn? Do they draw themselves with their own method, or do I have to draw them manually in the draw method, based on each GraphicalObject's variables (x, y coordinates, image, etc.)?

    If this conception is wrong, please suggest another one! All comments are welcome, and sorry if this is a dumb question. Thanks!
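
    A hedged sketch of the "objects draw themselves" variant from question 3, using an abstract base class; the class and field names are invented for illustration:

        import java.awt.Graphics;
        import java.awt.Image;

        // Shared state lives in the base class; each subclass knows how to
        // render itself, so the panel's draw loop stays trivial.
        abstract class GraphicalObject {
            protected int x, y; // screen coordinates

            GraphicalObject(int x, int y) {
                this.x = x;
                this.y = y;
            }

            abstract void draw(Graphics g);
        }

        class Sprite extends GraphicalObject {
            private final Image image;

            Sprite(int x, int y, Image image) {
                super(x, y);
                this.image = image;
            }

            @Override
            void draw(Graphics g) {
                g.drawImage(image, x, y, null);
            }
        }

    The panel's draw method then reduces to one loop: for (GraphicalObject obj : graphArray) obj.draw(g);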

  • Do you write unit tests all the time in TDD?

    - by mcaaltuntas
    I have been designing and developing code TDD-style for a long time. What disturbs me about TDD is writing tests for code that does not contain any business logic or interesting behaviour. I know TDD is a design activity more than testing, but sometimes I feel it's useless to write tests in these scenarios. For example, I have a simple scenario like "when the user clicks the check button, it should check the file's validity". For this scenario I usually start by writing a test for the presenter/controller class, like the one below:

        @Test
        public void when_user_clicks_check_it_should_check_selected_file_validity() {
            MediaService service = mock(MediaService.class);
            View view = mock(View.class);
            when(view.getSelectedFile()).thenReturn("c:\\Dir\\file.avi");

            MediaController controller = new MediaController(service, view);
            controller.check();

            verify(service).check("c:\\Dir\\file.avi");
        }

    As you can see, there is no design decision or interesting behaviour to verify; I am just testing that the value from the view is passed to MediaService. I usually write these kinds of tests, but I don't like them. What do you do in these situations? Do you write tests all the time?

    UPDATE: I have changed the test name and code after complaints. Some users said that you should write tests for trivial cases like this so that in the future someone might add interesting behaviour. But what about "code for today, design for tomorrow"? If someone, including myself, adds more interesting code in the future, the test can be created for it then. Why should I do it now for the trivial cases?

  • Do cross-reference database tables have a place in domain-driven design?

    - by Mike Cellini
    First, some background. Let's say we have a system where a customer is placing an order in a web interface. The items that the customer is ordering can be priced in various ways, sometimes including the cost of delivery and sometimes not at all. That pricing effectively depends on a variety of factors, including the vendor's own pricing model, that vendor's individual contracts with customers, and that vendor's contracts with its own suppliers.

    Let's assume that once a customer places an order for a particular item and chooses a contract (if any), the method of delivery can be determined by variables on those contracts. Those delivery methods also live in their own table in the database and have various properties consumed downstream. It makes sense that a cross-reference or lookup table would store that information. That table would be loaded into the domain and could then be used to apply the appropriate delivery method while processing the order.

    Does this make sense in the context of domain-driven design? Or is my thinking too relational? Is this logic that should be built into its own class/method (I mean beyond applying the cross-reference table data)?
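
    One hedged way to picture it in C#: the lookup table surfaces in the domain as plain objects, and a small domain service owns the selection logic. Every name below is invented for illustration:

        using System.Collections.Generic;
        using System.Linq;

        public class DeliveryMethod
        {
            public string Code { get; set; }
            public bool IncludesDeliveryCost { get; set; }
        }

        public interface IDeliveryMethodRepository
        {
            IEnumerable<DeliveryMethod> GetAll(); // backed by the cross-reference table
        }

        public class DeliveryMethodSelector
        {
            private readonly IDeliveryMethodRepository _methods;

            public DeliveryMethodSelector(IDeliveryMethodRepository methods)
            {
                _methods = methods;
            }

            // The contract decides whether delivery cost is bundled in;
            // the selector turns that decision into a concrete method,
            // keeping the rule out of the calling code and out of the data.
            public DeliveryMethod SelectFor(bool contractIncludesDelivery)
            {
                return _methods.GetAll()
                               .First(m => m.IncludesDeliveryCost == contractIncludesDelivery);
            }
        }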

  • MVC pattern synchronisation

    - by Hariprasad
    I am facing a problem synchronizing my model and view threads. I have a view which is a table; in it, the user can select a few rows. I update the view as soon as the user clicks on any row, since I don't want the UI to be slow. This updating is done by logic which runs in the controller thread.

    At the same time, the controller updates the model data, which takes place in a different thread: the controller puts the query in a queue, which is then executed by the model thread, a single-threaded interface. As soon as the query executes, the controller gets a signal. Now, in order to keep the view and model synchronized, I update the view again based on the return value of the query (the data returned by the model), even though I already updated the view for that user action.

    But I am facing issues because it takes a lot of time for the model to return the result, and by that time the user may have performed multiple clicks. So, as a result of updating the view again based on the information from the model, the view sometimes goes back to the state of an earlier click. (Suppose the user clicks three times on different rows. I update the view as soon as each click happens. I also update the view when I get data back from the model, which is supposed to match the already-updated state of the view. Now, when the user clicks the third time, I get the data for the first click back from the model, and as a result the view goes back to the state generated by the first click.)

    Is there any way to handle such a synchronization issue?
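
    One common remedy, sketched here under the assumption that all model replies come back to the controller thread: tag each query with a monotonically increasing sequence number, and drop any reply that is not the latest. All names are invented for illustration:

        import java.util.concurrent.atomic.AtomicLong;

        class ClickController {
            private final AtomicLong latestRequest = new AtomicLong();

            // Called on the controller thread for each click.
            void onUserClick(int row) {
                long id = latestRequest.incrementAndGet();
                updateViewImmediately(row);   // optimistic update, keeps UI fast
                queueModelQuery(row, id);     // the id travels with the query
            }

            // Called on the controller thread when the model signals back.
            void onModelResult(String result, long id) {
                if (id != latestRequest.get()) {
                    return;                   // stale reply from an earlier click
                }
                reconcileViewWith(result);    // only the newest reply wins
            }

            // Stubs standing in for the real view/model plumbing.
            void updateViewImmediately(int row) { }
            void queueModelQuery(int row, long id) { }
            void reconcileViewWith(String result) { }
        }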

  • Is there any kind of established architecture for browser based MMO games?

    - by black_puppydog
    I am beginning the development of a browser-based game in which players take certain actions at any point in time. Big parts of gameplay will happen in real life and just have to be entered into the system. I believe a good comparison might be a platform for managing fantasy football, although I have virtually no experience playing that, so please correct me if I am mistaken here.

    The point is that some events happen in the program (i.e. on the server, out of reach of the players), like pulling new results from some data source, the starting of a new round by a game master, and such. Other events happen in real life (two players closing a deal on the transfer of some team member or whatnot - again, I have never played fantasy football) and have to be entered into the system. The first part is pretty easy, since the game masters will be "staff" and thus can be trusted to a certain degree not to mess with the system. But the second part bothers me quite a lot, especially since the actions may involve multiple steps and interactions with different players, like registering a deal with the system that then has to be approved by the other party, or denied and passed on to a game master to decide (see the sketch below).

    I would of course like to separate the game logic as far as possible from the presentation and basic form validation, but I am unsure how to do this in a clean fashion. Of course I could (and will) put some effort into making my own architectural decisions and prototype different ideas. But I am bound to make some stupid mistakes at some point, so I would like to avoid some of that by getting a little "book smart" beforehand. So the question is: is there any kind of architectural work that I can read up on? Papers, blogs, maybe design documents or even source code? Writing this down, this seems more like a business application with business rules, workflows and such... Any good entry points for that?
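
    For the multi-step deal flow described above, a minimal state-machine sketch (all names invented); the point is that the server, not the browser, owns the legal transitions:

        // The server holds each deal as an explicit state machine, so
        // clients can only request transitions, never forge a state.
        enum DealState { PROPOSED, APPROVED, DENIED, ESCALATED }

        class Deal {
            private DealState state = DealState.PROPOSED;

            // Counterparty accepts; only legal from PROPOSED.
            void approve() {
                require(state == DealState.PROPOSED);
                state = DealState.APPROVED;
            }

            // Counterparty refuses; a game master gets the final say.
            void deny() {
                require(state == DealState.PROPOSED);
                state = DealState.ESCALATED;
            }

            // Game master resolves an escalated deal either way.
            void resolve(boolean accepted) {
                require(state == DealState.ESCALATED);
                state = accepted ? DealState.APPROVED : DealState.DENIED;
            }

            private static void require(boolean legal) {
                if (!legal) throw new IllegalStateException("illegal transition");
            }
        }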

  • Is Moving Entity Framework objects over a webservice really the best way?

    - by aceinthehole
    I've inherited a .NET project that has close to two thousand clients out in the field that need to push data periodically up to a central repository. The clients wake up and attempt to push the data up via a series of WCF web services, passing each Entity Framework entity as a parameter. Once the service receives such an object, it performs some business logic on the data and then sticks it in its own database, which mirrors the database on the client machines.

    The trick is that this data is being transmitted over a metered connection, which is very expensive, so optimizing the data is a serious priority. Now, we are using a custom encoder that compresses the data (and decompresses it on the other end) while it is being transmitted, and this is reducing the data footprint. However, the amount of data that the clients are using seems ridiculously large, given the amount of information that is actually being transmitted.

    It seems to me that Entity Framework itself may be to blame. I suspect that the objects are very large when serialized to be sent over the wire, with a lot of context information and who knows what else, when what we really need is just the "new" inserts. Is using Entity Framework and WCF services as we have done so far the correct way, architecturally, of approaching this n-tiered, asynchronous, push-only problem? Or is there a different approach that could optimize the data use?
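
    One common alternative, sketched here with invented names rather than the project's actual types: map each entity to a slim data-transfer object at the service boundary, so only the fields the repository needs cross the metered link:

        using System;
        using System.Runtime.Serialization;

        // Stand-in for the EF entity (the real one carries much more).
        public class Reading
        {
            public int ClientId { get; set; }
            public DateTime TakenAt { get; set; }
            public double Value { get; set; }
        }

        // Slim DTO: only the fields the central repository actually needs.
        [DataContract]
        public class ReadingDto
        {
            [DataMember] public int ClientId { get; set; }
            [DataMember] public DateTime TakenAt { get; set; }
            [DataMember] public double Value { get; set; }
        }

        public static class ReadingMapper
        {
            // Convert at the WCF boundary so no EF baggage is serialized.
            public static ReadingDto ToDto(Reading entity)
            {
                return new ReadingDto
                {
                    ClientId = entity.ClientId,
                    TakenAt  = entity.TakenAt,
                    Value    = entity.Value
                };
            }
        }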

  • Storing editable site content?

    - by hmp
    We have a Django-based website for which we wanted to make some of the content (text, and business logic such as pricing plans) easily editable in-house, and so we decided to store it outside the codebase. Usually the reason is one of the following:

    1. It's something that non-technical people want to edit. One example is copywriting for a website: the programmers prepare a template with text that defaults to "Lorem ipsum...", and the real content is inserted later into the database.

    2. It's something that we want to be able to change quickly, without the need to deploy new code (which we currently do twice a week). An example would be the features currently available to customers at different tiers of pricing. Instead of hardcoding these, we read them from the database.

    The described solution is flexible, but there are some reasons why I don't like it:

    1. Because the content has to be read from the database, there is a performance overhead. We mitigate that by using a caching scheme, but this also adds some complexity to the system.

    2. Developers who run the code locally see the system in a significantly different state compared to how it runs on production. Automated tests also exercise the system in a different state. Situations like testing new features on a staging server also get trickier: if the staging server doesn't have a recent copy of the database, it can be unexpectedly different from production. We could mitigate that by committing the new state to the repository occasionally (e.g. by adding data migrations), but it seems like a wrong approach. Is it?

    Any ideas how best to solve these problems? Is there a better approach for handling this content that I'm overlooking?
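
    A hedged sketch of the caching side, using Django's standard cache API; the Snippet model and key names are invented for illustration:

        from django.core.cache import cache

        from content.models import Snippet  # hypothetical model storing editable text

        def get_snippet(key, default=u"", timeout=300):
            """Fetch an editable content snippet, caching DB reads briefly."""
            cache_key = "snippet:%s" % key
            text = cache.get(cache_key)
            if text is None:
                try:
                    text = Snippet.objects.get(key=key).text
                except Snippet.DoesNotExist:
                    text = default  # fall back to a code-supplied default
                cache.set(cache_key, text, timeout)
            return text

    The code-supplied default also softens the local-development problem: a fresh database still renders something sensible.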

  • TechEd 2012: Fast SQL Server

    - by Tim Murphy
    While I spend a certain amount of my time creating databases (coding around SQL Server, and setting up a server when I have to), it isn't my bread and butter. Since I have run into a number of times that SQL Server needed to be tuned, I figured I would step out of my comfort zone and see what I can learn.

    Brent Ozar packed a mountain of information into his session on making SQL Server faster. I'm not sure how he found time to hit all of his points, since he was allowing the audience to abuse him on Twitter instead of asking questions, but he managed it. I also questioned his sanity, since he appeared to be using a fruit laptop.

    He had my attention, though, when he stated that he had given up on telling people not to use "select *". He posited that it could be fixed with hardware by caching the data in memory. He continued by cautioning that having too many indexes could defeat this approach. His logic was sound, if not always practical, and it was a good place to start when determining the trade-offs you need to balance. He was moving pretty fast, but I believe he was prescribing this solution predominantly for OLTP databases, prior to moving on to data warehouse solutions.

    Much of the advice he gave for data warehouses is contained in the Microsoft Fast Track guidance, so I won't rehash it here. To summarize, the solution seems to be the proper balance of memory, disk access speed, and the speed of the pipes that get the data from storage to the CPU. It appears to be sound guidance, and the session gave enough information that, going forward, we should be able to find the details needed easily. Just what the doctor ordered.

  • Learning how to design knowledge and data flow [closed]

    - by max
    In designing software, I spend a lot of time deciding how the knowledge (algorithms / business logic) and data should be allocated between different entities; that is, which object should know what. I am asking for advice about books, articles, presentations, classes, or other resources that would help me learn how to do it better. I code primarily in Python, but my question is not really language-specific; even if some of the insights I learn don't work in Python, that's fine. I'll give a couple of examples to clarify what I mean.

    Example 1: I want to perform some computation. As a user, I will need to provide parameters to do the computation. I can have all those parameters sent to the "main" object, which then uses them to create other objects as needed. Or I can create one "main" object as well as several additional objects, and the additional objects would then be sent to the "main" object as parameters. What factors should I consider to make this choice?

    Example 2: Let's say I have a few objects of type A that can perform a certain computation. The main computation often involves using an object of type B that performs some interim computation. I can either "teach" A instances what exact parameters to pass to B instances (i.e., make B "dumb"), or I can "teach" B instances to figure out what needs to be done by looking at an A instance (i.e., make B "smart"). What should I think about when I'm making this choice? (A toy sketch of the two options follows below.)
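
    A hedged toy sketch of Example 2 in Python; all names and the arithmetic are invented purely to show where the knowledge lives:

        class DumbB(object):
            # Option 1: the caller supplies the exact parameters.
            def interim(self, exponent, offset):
                return 2 ** exponent + offset

        class SmartB(object):
            # Option 2: B inspects A and decides for itself.
            def interim_for(self, a):
                return 2 ** a.exponent + a.offset

        class A(object):
            def __init__(self, exponent, offset):
                self.exponent = exponent
                self.offset = offset

            def compute_with_dumb_b(self, b):
                # A carries the knowledge of what B needs.
                return 10 * b.interim(self.exponent, self.offset)

            def compute_with_smart_b(self, b):
                # The knowledge has moved into B; A just hands itself over.
                return 10 * b.interim_for(self)

        assert A(3, 1).compute_with_dumb_b(DumbB()) == A(3, 1).compute_with_smart_b(SmartB())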

  • Dealing with numerous, simultaneous sounds in Unity

    - by luxchar
    I've written a custom class that creates a fixed number of audio sources. When a new sound is played, it goes through the class, which creates a queue of sounds that will be played during that frame. The sounds that are closer to the camera are given preference. If new sounds arrive in the next frame, I have a complex set of rules that determines how to replace the old ones. Ideally, "big" or "important" sounds should not be replaced by small ones. Sound replacement is necessary, since the game can be fast-paced at times and should try to play new sounds by replacing old ones; otherwise, there can be "silent" moments when an old sound is about to stop playing and isn't replaced right away by a new sound. The drawback of replacing old sounds right away is that there is a harsh transition from the old sound clip to the new one.

    But I wonder if I could just remove that management logic altogether and create audio sources on the fly for new sounds. I could give "important" sounds more priority (closer to 0 in the corresponding property) as opposed to less important ones, and let Unity take care of culling out sound effects that exceed the channel limit. The only drawback is that it requires many heap allocations. I wonder what strategy people use here?
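
    A hedged sketch of the "create sources on the fly and let Unity cull" approach, leaning on AudioSource.priority (0 = most important, 256 = least); written for illustration, not as a drop-in replacement for the existing manager:

        using UnityEngine;

        public class OneShotAudio : MonoBehaviour
        {
            // Spawn a throwaway AudioSource per sound and let Unity's own
            // channel management cull the least important ones.
            public static void Play(AudioClip clip, Vector3 position, int priority)
            {
                GameObject go = new GameObject("OneShotAudio");
                go.transform.position = position;

                AudioSource source = go.AddComponent<AudioSource>();
                source.clip = clip;
                source.priority = priority; // 0 = most important, 256 = least
                source.Play();

                // Clean up the temporary object once the clip has finished,
                // which is where the per-sound heap allocation cost shows up.
                Object.Destroy(go, clip.length);
            }
        }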

  • Do the benefits of Resin/Quercus outweigh the overhead?

    - by Craige
    Lately, I've been looking more and more into Resin + Quercus as a technology to develop an application of mine. The reason I started looking into it was that this application has high reporting needs, a lot of which cannot (or, realistically, should not) be created in real time. Java would offer a nice backend to queue and generate reports. Also, with Quercus I would be able to develop my data models in Hibernate and use them "from PHP", thus effectively stretching these models across front- and back-end. This same concept would also apply to any front/back-end common business logic, which could be developed in Java libraries.

    Now, the downside is that whichever front-end (PHP) MVC framework I choose (my goal was Symfony 2), it is unlikely to work without some heavy modification, if it can work at all. Quercus is a pretty close implementation of PHP, and is supposed to be compatible with PHP 5.3, so namespaces and closures SHOULDN'T be a problem, but when I tried to run an existing Symfony 1.4 app, I failed miserably.

    So, my question to you is: do you think the benefits of Resin + Quercus outweigh the overhead of using a not-so-perfect/stable implementation of PHP? If this were your application, and your goal was an end product rather than educational purposes, what would you decide?

  • How to define implementation details?

    - by woni
    In our project, one assembly combines logic for the IoC container, the project internals, and the communication layer. The current version has evolved to have only internal classes in add-in assemblies. My main problem with this approach is that the entry point is only available through the IoC container; it is not possible to use anything other than reflection to initialize the assembly. Everything behind the IoC interface is defined as an implementation detail and therefore not intended for usage outside.

    It is well known that you should not test implementation details (such as private and internal methods), because they should be tested through the public interface. It is also well known that your tests should not use the IoC container to set up the SUTs, because that would result in too many dependencies. So we are using the InternalsVisibleTo attribute to make internals visible to our test assemblies, and we test the so-called implementation details.

    I recognize that one problem could be the mix-up of different concerns in that assembly; changing this would make this discussion moot, because the classes would have to be defined public. Ignoring my concerns with this: isn't the need to test a class reason enough to make it public? The usage of InternalsVisibleTo seems unintended and a little bit "hacky". The approach of testing only against the publicly available IoC container is too costly and would result in integration-style tests. The pros of using internals are that the usages are well known and do not have to be implemented the way a public method would have to be (documentation, completeness, versioning, ...).

    Is there a solution that avoids testing against internals but keeps their advantages over public classes, or do we have to redefine what an implementation detail is?
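
    For reference, the attribute in question is a one-liner in the production assembly; the assembly names here are invented:

        // In e.g. Properties/AssemblyInfo.cs of the production assembly.
        // Grants the named test assembly access to this assembly's internals.
        using System.Runtime.CompilerServices;

        [assembly: InternalsVisibleTo("MyProduct.Tests")]

        // For strong-named assemblies, the full public key must be appended:
        // [assembly: InternalsVisibleTo("MyProduct.Tests, PublicKey=0024...")]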

  • What is a better way to retrieve data in an application that handles a limited amount of data?

    - by Milanix
    I just moved this question from Stack Overflow, since adding my code snippets would make it really long. I am interested in better ways to retrieve data in an application that handles a limited amount of data which isn't updated regularly. Let's take this example: I am writing an application which gets a schedule as XML from a server. I have written logic that parses the XML version and updates the database only if that version is newer than the local version. Although the update is checked automatically or manually on a daily basis, based on user preference, the actual version update happens only about once every few months, since it is made by some other authority which doesn't provide an API but rather announces its changes publicly.

    The actual XML contains "(n number of groups) x (days in a week) x (n number of schedules)". The number of groups is usually 6 and the number of schedules is usually 2, so basically there would usually be only around 100 strings.

    Now, although I am using SQLite at the moment, I want to know how to handle the update. Should I show a progress dialog saying that the application is updating, and exit the app when it's done? Since my updates are infrequent, I don't think this will really harm the user experience, but are there better ways to do it? I don't want the update to be made while the user is searching (which is done using the database), as this would cause a database-already-open exception; at least, I have faced this problem before. Is it better to parse the XML every time the user wants to view something, or to use SQLite? And since I make a lot of use of adapters in my app to create lists, will that degrade performance?

  • Binding in the view or the controller?

    - by da_b0uncer
    I've seen two different approaches to MVC on the web. One, as in ExtJS, is to bind the callbacks to the view via the controller: find every element in the view and add the functionality to it. The other, as in angular.js (and in the Lift framework, server-side, too), is to bind in the views and just write the functionality in the controller. Which is better and cleaner?

    The ExtJS approach has dumb views and all the logic in the controller, which seems clean to me. I had problems with global IDs for GUI elements, and with relative navigation to GUI elements, in this approach: when I changed the view, the controller couldn't find the buttons anymore, or I had multiple instances of one button with the same ID in a single application because of the global IDs. But I solved this with IDs that are global only within a view and can appear in the application multiple times. That way I could mess with the (dumb) view's layout and design, and the functionality wouldn't break.

    The angular.js approach, with the bindings in the view, doesn't have the problem of global IDs. Also, the person who changes the view layout has to know the IDs anyway, so that the controller can put the data in the right spot. So writing <a ng-click="doThis()" /> for angular.js, versus writing <a lid="buttonwhichdoesthis" /> for ExtJS and then finding the element by its local ID and adding doThis() as a handler on the controller side, is not so different. The only thing is, the second one has one more layer of indirection, which seems cleaner, while the first one seems to cost less effort.

  • Integrating Amazon EC2 in Java via NetBeans IDE

    - by Geertjan
    Next, having looked at Amazon Associates services and Amazon S3, let's take a look at Amazon EC2, the elastic compute cloud which provides remote computing services. I started by launching an instance of Ubuntu Server 14.04 on Amazon EC2, which looks a bit like this in the online AWS Management Console, though I whitened out most of the details.

    Now that I have at least one running instance available on Amazon EC2, it makes sense to use the services that are integrated into NetBeans IDE. I created a new application with one class, named "AmazonEC2Demo". Then I dragged the "describeInstances" service, with the mouse, into the class. The IDE then automatically created all the other files: four Java classes and one properties file.

    In the properties file, register the access ID and secret keys. These are read by the other generated Java classes. Signing and authentication are done automatically by the code that is generated, i.e., there's nothing generic you need to do, and you can immediately begin working on your domain-specific code. Finally, you're now able to rewrite the code in "AmazonEC2Demo" to connect to Amazon EC2 and obtain information about your running instance:

        import java.io.IOException;
        import java.util.logging.Level;
        import java.util.logging.Logger;

        public class AmazonEC2Demo {

            public static void main(String[] args) {
                String instanceId1 = "i-something";
                RestResponse result;
                try {
                    result = AmazonEC2Service.describeInstances(instanceId1);
                    System.out.println(result.getDataAsString());
                } catch (IOException ex) {
                    Logger.getLogger(AmazonEC2Demo.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }

    From the above, you'll receive a chunk of XML with data about the running instance: its name, status, dates, etc. In other words, you're now ready to integrate Amazon EC2 features directly into the applications you're writing, without very much work to get started. Within about 5 minutes, you're working on your business logic rather than on the generic code that anyone needs when integrating with Amazon EC2.

  • PyQt application architecture

    - by L. De Leo
    I'm trying to give a sound structure to a PyQt application that implements a card game. So far I have the following classes:

    1. Ui_Game: describes the UI, of course, and is responsible for reacting to the events emitted by my CardWidget instances.
    2. MainController: responsible for managing the whole application - setup and all the subsequent states of the application (like starting a new hand, displaying notifications of state changes in the UI, or ending the game).
    3. GameEngine: a set of classes that implement the whole game logic.

    Now, the way I have concretely coded this in Python is the following:

        class CardWidget(QtGui.QLabel):
            def __init__(self, filename, *args, **kwargs):
                QtGui.QLabel.__init__(self, *args, **kwargs)
                self.setPixmap(QtGui.QPixmap(':/res/res/' + filename))

            def mouseReleaseEvent(self, ev):
                self.emit(QtCore.SIGNAL('card_clicked'), self)

        class Ui_Game(QtGui.QWidget):
            def __init__(self, window, *args, **kwargs):
                QtGui.QWidget.__init__(self, *args, **kwargs)
                self.setupUi(window)
                self.controller = None

            def place_card(self, card):
                cards_on_table = self.played_cards.count() + 1
                print cards_on_table
                if cards_on_table <= 2:
                    self.played_cards.addWidget(card)
                if cards_on_table == 2:
                    self.controller.play_hand()

        class MainController(object):
            def __init__(self):
                self.app = QtGui.QApplication(sys.argv)
                self.window = QtGui.QMainWindow()
                self.ui = Ui_Game(self.window)
                self.ui.controller = self
                self.game_setup()

    Is there a better way than injecting the controller into the Ui_Game class via the Ui_Game.controller attribute? Or am I totally off-road?

  • Call DB Stored Procedure using @NamedStoredProcedureQuery Injection

    - by anwilson
    An Oracle Database stored procedure can be called from the EJB business layer to perform complex, DB-specific operations. This approach avoids the overhead of frequent network hits, which could impact the end-user result. A stored procedure can be invoked from EJB session bean business logic using the org.eclipse.persistence.queries.StoredProcedureCall API, but that approach requires more code to handle the session and the arguments of the stored procedure, thereby increasing the maintenance effort. EJB 3.0 introduces the @NamedStoredProcedureQuery annotation to call a database stored procedure as a named query. This blog will take you through the steps to call an Oracle Database stored procedure using @NamedStoredProcedureQuery. The EMP_SAL_INCREMENT procedure, available in the HR schema, is used in this sample.

    1. Create an entity from the EMPLOYEES table.

    2. Add @NamedStoredProcedureQuery above @NamedQueries in Employees.java, with the definition given below. Observe how the stored procedure's arguments are handled easily in @NamedStoredProcedureQuery using @StoredProcedureParameter.

        @NamedStoredProcedureQuery(
            name = "Employees.increaseEmpSal",
            procedureName = "EMP_SAL_INCREMENT",
            resultClass = void.class,
            resultSetMapping = "",
            returnsResultSet = false,
            parameters = {
                @StoredProcedureParameter(name = "EMP_ID",   queryParameter = "EMPID"),
                @StoredProcedureParameter(name = "SAL_INCR", queryParameter = "SALINCR")
            })

    3. Expose the entity bean by creating a session facade. A business method needs to be added to the session bean to access the stored procedure exposed as a named query:

        public void salaryRaise(Long empId, Long salIncrease) throws Exception {
            try {
                Query query = em.createNamedQuery("Employees.increaseEmpSal");
                query.setParameter("EMPID", empId);
                query.setParameter("SALINCR", salIncrease);
                query.executeUpdate();
            } catch (Exception ex) {
                throw ex;
            }
        }

    4. Expose the business method through the session bean's remote interface:

        void salaryRaise(Long empId, Long salIncrease) throws Exception;

    5. A session bean client is required to invoke the method exposed through the remote interface. Call the exposed method in the client's main method:

        final Context context = getInitialContext();
        SessionEJB sessionEJB = (SessionEJB) context.lookup("Your-JNDI-lookup");
        sessionEJB.salaryRaise(new Long(200), new Long(1000));

    6. Deploy the session bean and run the session bean client. The salary of the employee with ID 200 will be increased by 1000.

  • How to choose between using a Domain Event, or letting the application layer orchestrate everything

    - by Mr Happy
    I'm taking my first steps into domain-driven design; I bought the blue book and all, and I find myself seeing three ways to implement a certain solution. For the record: I'm not using CQRS or event sourcing.

    Let's say a user request comes into the application service layer. The business logic for that request is (for whatever reason) separated into a method on an entity and a method on a domain service. How should I go about calling those methods? The options I have gathered so far are:

    1. Let the application service call both methods.
    2. Use method injection / double dispatch to inject the domain service into the entity, letting the entity do its thing and then call the method on the domain service (or the other way around, letting the domain service call the method on the entity).
    3. Raise a domain event in the entity method, a handler of which calls the domain service. (The kind of domain events I'm talking about are described here: http://www.udidahan.com/2009/06/14/domain-events-salvation/)

    I think these are all viable, but I'm unable to choose between them. I've been thinking about this a long time, and I've come to a point where I no longer see the semantic differences between the three. Do you know of some guidelines on when to use what?
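
    For option 3, a minimal sketch of the static-dispatcher style described in the linked article, with the handler wiring trimmed down and all names invented:

        using System;
        using System.Collections.Generic;

        public interface IDomainEvent { }

        public static class DomainEvents
        {
            private static readonly List<Delegate> Handlers = new List<Delegate>();

            public static void Register<T>(Action<T> handler) where T : IDomainEvent
            {
                Handlers.Add(handler);
            }

            // Entities call this; they never know about the domain service.
            public static void Raise<T>(T domainEvent) where T : IDomainEvent
            {
                foreach (Delegate handler in Handlers)
                {
                    Action<T> action = handler as Action<T>;
                    if (action != null) action(domainEvent);
                }
            }
        }

        public class OrderShipped : IDomainEvent
        {
            public int OrderId;
        }

        // At startup, a handler bridges the event to the domain service:
        // DomainEvents.Register<OrderShipped>(e => shippingService.Notify(e.OrderId));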

  • Preferred way of dealing with customer-defined data in an enterprise application

    - by Axarydax
    Let's say that we have a small enterprise web (intranet) application for managing data for car dealers. It has screens for managing customers, inventory, orders, warranties, and workshops. This application is installed at 10 customer sites for different car dealers.

    The first version of this application was created without any way to provide for customer-specific data. For example, if dealer A wanted to be able to attach a photo to a customer, dealer B wanted to add an e-mail contact to each workshop, and dealer C wanted to attach multiple PDF reports to a warranty, each and every feature like this was added to the application, so all of the customers received everything with each update. However, this will inevitably lead to conflicts as the number of customers grows, because their usage patterns are unique. If, for instance, a specific dealer requested the ability to attach (for some reason) a color to an inventory item, and to be able to search by this color, as a required field, other dealers really wouldn't need this feature and definitely would not want it to be required. Or one dealer would like to manage e-mail contacts for their employees on a separate screen of the application.

    I imagine that a solution for this is a kind of plugin system, where we would have a core application that provides the standard features (customers, inventory, etc.) plus each customer's installed plugins. There would be different kinds of plugins: standalone screens, like e-mail contacts for employees, with their own logic, and customer plugins which would extend or decorate inventory items (like the photo or the color). Inventory (customer, order, ...) plugins would need an installation procedure and hooks for plugging into the item editor, the item display, item filtering for searching, backup, and such. Is this the right way to solve this problem?
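
    A hedged sketch of what such a plugin contract might look like in C#; every name here is invented, and the real hook set would grow out of the concrete requirements:

        // Core surfaces the plugins can extend (stubs for illustration).
        public class ItemEditor  { /* core editor surface  */ }
        public class ItemDisplay { /* core display surface */ }

        public interface IPlugin
        {
            string Name { get; }
            void Install();   // run once per customer site at setup/upgrade
        }

        // Plugins that extend a core record type, e.g. inventory items.
        public interface IInventoryItemPlugin : IPlugin
        {
            void ExtendEditor(ItemEditor editor);     // add fields like "color"
            void ExtendDisplay(ItemDisplay display);  // render the extra data
            string FilterClause(string searchTerm);   // hook into item search
        }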

  • Motivation for service layer (instead of just copying dlls)?

    - by BornToCode
    I'm creating an application which has two different UIs, so I'm making it with a service layer, which I understood is appropriate for such a case. However, I found myself just creating web methods for every single method I have in the BL layer, so the services are basically built from methods that look like this:

        return customers_bl.Get_Customer_Prices(customer_id);

    I understood that a main point of the service layer is to prevent duplication of code, so I asked myself: well, why not just import the BL.dll (and the DAL.dll) into the other UI, and whenever I make a change, re-copy the DLL files? It might not be so "neat", but is the whole purpose of the service layer to prevent this?

    {I know something is wrong in my approach; I'm probably missing the importance of the service layer, and I'd like more motivation to create another layer, especially because, as it is, I found that many of my BL functions ALREADY look like:

        return customers_dal.Get_Customer_Prices(cust_id);

    which led me to ask: was it really necessary to create the BL just because several functions actually have LOGIC inside the BL?}

    So I'm looking for more motivation to create ONE MORE layer. I'm sure it's not just to make it more convenient, so that I won't have to re-copy the DLLs on changes? Am I grasping it wrong? Any simple guidelines on how to design a service layer (corresponding to all the BL layer functions or not? any simple example?)? Any enlightenment on the subject?

  • Access Control Service: Walkthrough Videos of Web Application, SOAP, REST and Silverlight Integration

    - by Your DisplayName here!
    Over the weekend I worked a little more on my ACS2 sample. Instead of writing it all down, I decided to quickly record four short videos that cover the relevant features and code. Have fun ;)

    Part 1 - Overview: a quick walkthrough of the solution, showing the web application part. This includes driving the sign-in UI via JavaScript (thanks, Matias) as well as the registration logic I wrote about here. (watch)

    Part 2 - SOAP Service and Client: the sample app also exposes a WCF SOAP service. This video shows how to wire the service up to ACS, and shows how to create a client that first requests a token from an IdP and then sends this token to ACS. (watch)

    Part 3 - REST Service and Client: this part shows how to set up a WCF REST service that consumes SWT tokens from ACS. Unfortunately there is currently no standard WIF plumbing for REST. For the service integration I had to combine a lot of code from different sources (kzu, zulfiq) as well as the WIF SDK and OAuth CTPs, but it is working. (watch)

    Part 4 - Silverlight and Web Identity Integration: this part took by far the most time to write. The Silverlight client shows how to sign in to the application using a registered identity provider (including web identities) and how to use the resulting SWT token to call our REST service. This is designed to be a desktop (OOB) client application (thanks to Jörg for the UI magic). (watch)

    (code download)

  • How do you plan your asynchronous code?

    - by NullOrEmpty
    I created a library that is an invoker for a web service somewhere else. The library exposes asynchronous methods, since web service calls are a good candidate for that. At the beginning everything was just fine: I had methods with easy-to-understand operations in a CRUD fashion, since the library is a kind of repository. But then the business logic started to become complex, and some of the procedures involve chaining many of these asynchronous operations, sometimes with different paths depending on the result value, etc., etc. Suddenly, everything is very messy; stopping the execution at a breakpoint is not very helpful, and finding out what is going on, or where in the process timeline you have stopped, becomes a pain. Development becomes less quick, less agile, and catching those bugs that happen once in a thousand runs becomes hell.

    From the technical point of view, a repository that exposes asynchronous methods looked like a good idea, because some persistence layers can have delays and you can use the async approach to make the most of your hardware. But from the functional point of view, things became very complex, and considering those procedures where a dozen different calls were needed... I don't know the real value of the improvement.

    After reading about the TPL for a while, it looked like a good idea for managing tasks, but the moment you have to combine them and start to reuse existing functionality, things become very messy. I have had a good experience using it for very concrete scenarios, but a bad experience using it broadly.

    How do you work asynchronously? Do you use it always, or just for long-running processes? Thanks.
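
    A hedged sketch of the chaining problem in TPL terms, with invented names; keeping each step a named method, rather than a pile of nested lambdas, is one way to keep the timeline readable at a breakpoint:

        using System.Threading.Tasks;

        class CustomerFlow
        {
            Task<string> FetchCustomerAsync(int id)
            {
                // Stands in for the real web service invocation.
                return Task<string>.Factory.StartNew(() => "customer:" + id);
            }

            Task<bool> ValidateAsync(string customer)
            {
                return Task<bool>.Factory.StartNew(() => customer.Length > 0);
            }

            // Each step is a named method, so a breakpoint in any of them
            // tells you exactly where in the timeline you are.
            Task<bool> FetchAndValidate(int id)
            {
                return FetchCustomerAsync(id)
                    .ContinueWith(fetch => ValidateAsync(fetch.Result))
                    .Unwrap();
            }
        }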

  • Should I use a separate class per test?

    - by user460667
    Taking the following simple method, how would you suggest I write a unit test for it? (I am using MSTest; however, the concepts are similar in other tools.)

        public void MyMethod(MyObject myObj, bool validInput)
        {
            if (!validInput)
            {
                // Do nothing
            }
            else
            {
                // Update the object
                myObj.CurrentDateTime = DateTime.Now;
                myObj.Name = "Hello World";
            }
        }

    If I try to follow the rule of one assert per test, my logic would be that I should have a class initialize method which executes the method under test, and then individual tests which check each property on myObj:

        [TestClass]
        public class MyTest
        {
            MyObject myObj;

            [TestInitialize]
            public void MyTestInitialize()
            {
                this.myObj = new MyObject();
                MyMethod(myObj, true);
            }

            [TestMethod]
            public void IsValidName()
            {
                Assert.AreEqual("Hello World", this.myObj.Name);
            }

            [TestMethod]
            public void IsDateNotNull()
            {
                Assert.IsNotNull(this.myObj.CurrentDateTime);
            }
        }

    Where I am confused is around TestInitialize. If I execute the method under TestInitialize, I will need separate classes per variation of parameter inputs. Is this correct? This would leave me with a huge number of files in my project (unless I put multiple classes per file). Thanks
