Search Results

Search found 25405 results on 1017 pages for 'document oriented db'.

  • How far should an entity take care of its property values by itself?

    - by Kharlos Dominguez
    Let's consider the following example of a class, an entity that I'm using through Entity Framework: InvoiceHeader, with three decimal properties, BilledAmount, PaidAmount and Balance. I'm trying to find the best approach to keep Balance updated, based on the values of the two other properties (BilledAmount and PaidAmount). I'm torn between two practices here: updating the balance amount every time BilledAmount or PaidAmount is updated (through their setters), or having an UpdateBalance() method that callers would run on the object when appropriate. I am aware that I could just calculate the Balance in its getter. However, that isn't really possible, because this is an entity field that needs to be saved back to the database, where it has an actual column to which the calculated amount should be persisted. My other worry about the automatic-update approach is that the calculated values might be slightly different from what was originally saved to the database, due to rounding (an older version of the software used floats, but it now uses decimals). So loading, say, 2000 entities from the database could change their state and make the ORM believe they have changed, so they would be persisted back to the database the next time SaveChanges() is called on the context. That would trigger a mass of updates I am not really interested in, or could cause problems if the calculation method changed (the fetched entities would lose their old values and be replaced by freshly recalculated ones, simply by being loaded). Then, let's take the example even further. Each invoice has some related invoice details, which also have BilledAmount, PaidAmount and Balance (I'm simplifying my actual business case for the sake of the example, so let's assume the customer can pay each item of the invoice separately rather than as a whole). If we consider that the entity should take care of itself, any change to the child details should cause the invoice totals to change as well. In a fully automated approach, a simple implementation would loop through each detail of the invoice to recalculate the header totals every time one of the properties changes. That would probably be fine for a single record, but if a lot of entities were fetched at once, it could create significant overhead, as the process would run every time a new invoice detail record is fetched. Possibly worse, if the details are not already loaded, it could cause the ORM to lazy-load them just to recalculate the balances. So far I have gone with the UpdateBalance() method approach, mainly for the reasons explained above, but I wonder if that was right. I'm noticing that I have to keep calling these methods quite often and in different places in my code, and it is a potential source of bugs. It also has a detrimental effect on data binding, because when the properties of the detail or header change, the other properties are left out of date and the method never gets called. What is the recommended approach in this case?
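
    A minimal sketch of the first option (setters keep the persisted field in sync), written here in plain Java rather than C#/Entity Framework and with hypothetical names; the point is only that the recalculation lives inside the entity and runs on every write, which is exactly what raises the "dirty on load" concern described above.

```java
import java.math.BigDecimal;

// Sketch: the setters keep the persisted Balance column in sync on every
// write. The alternative discussed above drops the recalculation from the
// setters and exposes updateBalance() for callers to invoke explicitly.
class InvoiceHeader {
    private BigDecimal billedAmount = BigDecimal.ZERO;
    private BigDecimal paidAmount = BigDecimal.ZERO;
    private BigDecimal balance = BigDecimal.ZERO;   // persisted, not derived on read

    public void setBilledAmount(BigDecimal value) {
        this.billedAmount = value;
        updateBalance();                            // every write recalculates
    }

    public void setPaidAmount(BigDecimal value) {
        this.paidAmount = value;
        updateBalance();
    }

    public BigDecimal getBalance() {
        return balance;
    }

    private void updateBalance() {
        this.balance = billedAmount.subtract(paidAmount);
    }
}
```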

    Read the article

  • Webcast: DB Enterprise User Security Integration with Oracle Directory Services

    - by B Shashikumar
    The typical enterprise has a large number of DBA (database administrator) accounts that are locally managed, which is often very costly, problematic and error-prone. Databases are a crucial component of your enterprise IT infrastructure, housing sensitive corporate data as well as database user accounts and privileges. To ensure the integrity of your enterprise's data, it's imperative to have a well-managed identity management system. This begins with centralized management of user accounts and access rights. Enterprise User Security (EUS), an Oracle Database Enterprise Edition feature, combined with Oracle Identity Management, gives you the ability to manage database users and their authorizations in one central place. The cost of user provisioning and password resets is dramatically reduced. This technology is a must for new application development and should be considered for existing applications as well. Join Oracle Advisors for a live webcast on Jul 11 at 8am Pacific Time, where Oracle experts will briefly introduce EUS, followed by a detailed discussion of the various directory options that are supported, including integration with Microsoft Active Directory. We'll conclude with how to avoid common pitfalls when deploying EUS with directory services. To register for this event, click here.

    Read the article

  • Are there design patterns or generalised approaches for particle simulations?

    - by romeovs
    I'm working on a project (for college) in C++. The goal is to write a program that can more or less simulate a beam of particles flying through the LHC synchrotron. Not wanting to rush into things, my team and I are thinking about how to implement this, and I was wondering if there are general design patterns that are used to solve this kind of problem. The general approach we have come up with so far is the following: there is a World that holds all objects; you can add objects such as Particle, Dipole and Quadrupole to this World; time is cut up into discrete steps, and at each point in time, for each Particle, the magnetic and electric forces that each object in the World generates are calculated and summed up (luckily electromagnetism is linear); each Particle then moves accordingly (using a simple estimation approach to solve the differential equations of motion); the Particle positions are saved; repeat. This seems a good approach, but, for instance, it is hard to take into account symmetries that might be present (such as the magnetic field of each Quadrupole), and it is thus suboptimal. To take into account symmetries like that of the Quadrupole field, it would be much easier to (also) make space discrete and somehow store the form of the Quadrupole field somewhere. (Since 2532 or so Quadrupoles are stored, this should lead to a massive performance gain, as each Quadrupole field would not have to be recalculated.) So, are there any design patterns? Is the World approach feasible, or is it old-fashioned, bad programming? What about symmetry, how is that generally taken into account?
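
    A bare-bones sketch of the World/time-step idea described above, written in Java for brevity, with hypothetical names and no real physics; symmetry or caching of, say, quadrupole fields could then sit behind the force-source abstraction without changing the main loop.

```java
import java.util.ArrayList;
import java.util.List;

// Every element of the machine contributes a force on a particle, the World
// sums the contributions each step (electromagnetism is linear), and
// particles are moved with a crude explicit Euler update.
interface ForceSource {
    double[] forceOn(Particle p);            // returns {fx, fy, fz}
}

class Particle {
    double[] position = new double[3];
    double[] velocity = new double[3];
    double mass = 1.0;
}

class World {
    private final List<Particle> particles = new ArrayList<>();
    private final List<ForceSource> sources = new ArrayList<>();

    void add(Particle p) { particles.add(p); }
    void add(ForceSource s) { sources.add(s); }

    // One discrete time step: sum forces, then integrate.
    void step(double dt) {
        for (Particle p : particles) {
            double[] total = new double[3];
            for (ForceSource s : sources) {
                double[] f = s.forceOn(p);
                for (int i = 0; i < 3; i++) total[i] += f[i];
            }
            for (int i = 0; i < 3; i++) {
                p.velocity[i] += dt * total[i] / p.mass;
                p.position[i] += dt * p.velocity[i];
            }
        }
    }
}
```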

    Read the article

  • Class Versus Struct

    - by Prometheus87
    In C++ and the languages it has influenced there is a construct called a structure (struct), and we all know the class. Both are capable of holding functions and variables. Some differences are: 1. a class is given memory on the heap and a struct is given memory on the stack; 2. in a class, variables are private by default, and in a struct they are public. My question is why struct was somehow abandoned in favor of class. Other than abstraction, a struct can do all the same stuff a class does, so why abandon it?

    Read the article

  • Is 'protected' ever reasonable outside of virtual methods and destructors?

    - by notallama
    So, suppose you have some fields and methods marked protected (non-virtual). Presumably you didn't mark them public because you don't want some nincompoop to accidentally call them in the wrong order or pass in invalid parameters, or because you don't want people to rely on behaviour that you're going to change later. So why is it okay for that nincompoop to use those fields and methods from a subclass? As far as I can tell, they can still screw up in the same ways, and the same compatibility issues still exist if you change the implementation. The cases for protected I can think of are: non-virtual destructors, so you can't break things by deleting a derived object through a base class pointer; virtual methods, so you can override 'private' methods called by the base class; and constructors in C++ (in Java/C#, marking the class as abstract does basically the same). Any other use cases?
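
    One further case that comes up in practice, sketched below in Java with hypothetical names: protected as a "subclass-only contract" around a template method, where a hook and a helper are meant for subclasses but kept out of the public API. (In Java every instance method is effectively virtual, so this only approximates the non-virtual case asked about.)

```java
// The public API stays small, subclasses get a hook to override and a
// helper they may call, and neither is visible to ordinary callers.
abstract class ReportExporter {

    // Public API: the only thing ordinary callers see.
    public final String export() {
        return header() + renderBody() + footer();
    }

    // Protected hook: subclasses override it; callers never see it.
    protected abstract String renderBody();

    // Protected final helper: subclasses may call it, but cannot change it,
    // and outside code cannot call it in the wrong order.
    protected final String header() {
        return "=== report ===\n";
    }

    private String footer() {
        return "=== end ===\n";
    }
}

class CsvExporter extends ReportExporter {
    @Override
    protected String renderBody() {
        return "a,b,c\n1,2,3\n";
    }
}
```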

    Read the article

  • Semantic coupling vs. large class

    - by user106587
    I have hardware I communicate with via TCP. This hardware accepts ~40 different commands/requests with about 20 different responses. I've created a HardwareProxy class which has a TcpClient to send and receive data. I didn't like the idea of having 40 different methods to send the commands/requests, so I started down the path of having a single SendCommand method which takes an ICommand and returns an IResponse; this results in 40 different SpecificCommand classes. The problem is that this requires semantic coupling, i.e. the method that invokes SendCommand receives an IResponse which it has to downcast to a SpecificResponse. I use a future map which I believe ensures the appropriate SpecificResponse, but I get the impression this code smells. Besides the semantic coupling, ICommand and IResponse are essentially empty abstract classes (marker interfaces), and this seems suspicious to me. If I go with the 40 methods, I don't think I have broken the single responsibility principle, as the responsibility of the HardwareProxy class is to act as the hardware, which has all of these commands. This route is just ugly, though, plus I'd like to have asynchronous versions, so there'd be about 80 methods. Is it better to bite the bullet and have a large class, or to accept the coupling and marker interfaces for a smaller solution, or am I missing a better way? Thanks.
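
    One way around the downcast, sketched here with Java generics and hypothetical names (the same shape works with C# generics): each command declares the response type it produces, so the single send method can return the specific response without the caller casting an IResponse.

```java
// Each command ties itself to its response type, so send() stays generic
// while the return type stays specific.
interface Response { }

interface Command<R extends Response> {
    // Each command knows how to decode its own reply bytes (assumed protocol).
    R decode(byte[] rawReply);
}

final class FirmwareVersionResponse implements Response {
    final String version;
    FirmwareVersionResponse(String version) { this.version = version; }
}

final class GetFirmwareVersion implements Command<FirmwareVersionResponse> {
    @Override
    public FirmwareVersionResponse decode(byte[] rawReply) {
        return new FirmwareVersionResponse(new String(rawReply));
    }
}

class HardwareProxy {
    // The single entry point: no downcast needed at the call site.
    <R extends Response> R send(Command<R> command) {
        byte[] rawReply = transmit(command);   // TCP details omitted in this sketch
        return command.decode(rawReply);
    }

    private byte[] transmit(Command<?> command) {
        return new byte[0]; // placeholder: real code would write to the TcpClient
    }
}
```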

    Read the article

  • What are startups expecting when they ask you to solve a programming challenge before interviewing? [closed]

    - by Swapnil Tailor
    I have applied to a couple of startups, and most of them ask you to solve a programming challenge before they start interviewing a candidate. I have submitted a couple of solutions, and every time I get rejected in the initial screening. What I think is that they look at my coding style, the algorithm, and the OOD concepts I have used to solve the problem. Can you give more input on what other details are taken into consideration, and how I can improve my coding to get selected? By the way, I did all my coding in either Java or Perl. EDIT: I feel the biggest reason for rejection is that the code didn't work for a couple of use cases.

    Read the article

  • Class design for calling "the same method" on different classes from one place

    - by betatester07
    Let me introduce my situation: I have a Java EE application, and in one package I want to have classes which will act primarily as caches for some data from the database, for example a class that will hold all articles for our website, a class that will hold all categories, etc. Every class should have some update() method, which will update the data for that class from the database, and also some other methods for data manipulation specific to that data type. Now, I would like to call the update() method on all class instances (there will be exactly one instance of every class) from one place. What is the best design?
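
    One common shape for this, sketched in Java with hypothetical names (certainly not the only design): every cache implements a small interface, and a registry holds the single instances and refreshes them all from one place.

```java
import java.util.List;

interface Updatable {
    void update();   // reload this cache from the database
}

class ArticleCache implements Updatable {
    @Override public void update() { /* query articles table, rebuild map */ }
}

class CategoryCache implements Updatable {
    @Override public void update() { /* query categories table, rebuild map */ }
}

class CacheRegistry {
    private final List<Updatable> caches;

    CacheRegistry(List<Updatable> caches) {
        this.caches = caches;
    }

    // The single place that refreshes everything.
    void updateAll() {
        caches.forEach(Updatable::update);
    }
}

// Usage (assumed wiring; in Java EE these could be CDI-managed singletons):
// new CacheRegistry(List.of(new ArticleCache(), new CategoryCache())).updateAll();
```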

    Read the article

  • How are objects modelled in a functional programming language?

    - by Giorgio
    In an answer to this question (written by Pete) there are some considerations about OOP versus FP. In particular, it is suggested that FP languages are not very suitable for modelling (persistent) objects that have an identity and a mutable state. I was wondering if this is true or, in other words, how one would model objects in a functional programming language. From my basic knowledge of Haskell I thought that one could use monads in some way, but I really do not know enough about this topic to come up with a clear answer. So, how are entities with an identity and a mutable persistent state normally modelled in a functional language? EDIT: Here are some further details to clarify what I have in mind. Take a typical Java application in which I can (1) read a record from a database table into a Java object, (2) modify the object in different ways, and (3) save the modified object to the database. How would this be implemented, e.g. in Haskell? I would initially read the record into a record value (defined by a data definition), perform different transformations by applying functions to this initial value (each intermediate value is a new, modified copy of the original record) and then write the final record value to the database. Is this all there is to it? How can I ensure that at each moment in time only one copy of the record is valid / accessible? One does not want different immutable values representing different snapshots of the same object to be accessible at the same time.
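
    The read-transform-write flow described in the edit, sketched below with an immutable Java record and hypothetical names rather than Haskell; in Haskell the value would be threaded through pure functions in much the same way, with the actual database read and write happening in IO.

```java
// Each "modification" yields a new value, and only the value finally handed
// to save() matters; older snapshots are simply never written out.
record Customer(long id, String name, boolean active) {

    Customer rename(String newName) {
        return new Customer(id, newName, active);   // copy, don't mutate
    }

    Customer deactivate() {
        return new Customer(id, name, false);
    }
}

class CustomerFlow {
    Customer load(long id) {
        return new Customer(id, "initial name", true);  // stand-in for a DB read
    }

    void save(Customer c) {
        // stand-in for a DB write: the latest value wins
    }

    void run() {
        Customer latest = load(42).rename("Acme Ltd").deactivate();
        save(latest);
    }
}
```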

    Read the article

  • Subscribe/Publish Model in Web-based Application (c#) - Best Practices for Event Handlers

    - by KingOfHypocrites
    I was recently exposed to a desktop application that uses a publish/subscribe model to handle commands, events, etc. I can't seem to find any good examples of using this in a web application, so I wonder if I am off base in trying to use it for web-based development (on the server side)? I'm using ASP.NET and C#. My main question in regard to the design is: when using a publish/subscribe model, is it better to have generic commands/events that pass no parameters and then have the subscribers look at static context objects that contain the data relevant to the event? Or is it better to create custom arguments for every event that contain the data related to the event? The whole concept of a global container seems so convenient, but at the same time it seems to break encapsulation. Any thoughts or best practices from anyone who has implemented this type of model in a web-based application? Even suggestions on this model outside the scope of my question are appreciated.
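
    A small sketch of the "custom arguments per event" option, in Java with hypothetical names: each event type carries its own data, so subscribers never need to reach into a shared static context object.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal in-process bus: handlers are keyed by event class, and each event
// object is self-describing.
class EventBus {
    private final Map<Class<?>, List<Consumer<Object>>> subscribers = new HashMap<>();

    <E> void subscribe(Class<E> eventType, Consumer<E> handler) {
        subscribers.computeIfAbsent(eventType, k -> new ArrayList<>())
                   .add(e -> handler.accept(eventType.cast(e)));
    }

    void publish(Object event) {
        subscribers.getOrDefault(event.getClass(), List.of())
                   .forEach(handler -> handler.accept(event));
    }
}

// Everything a subscriber needs rides along with the event.
record OrderPlaced(String orderId, double total) { }

class Demo {
    public static void main(String[] args) {
        EventBus bus = new EventBus();
        bus.subscribe(OrderPlaced.class,
                e -> System.out.println("email receipt for " + e.orderId()));
        bus.publish(new OrderPlaced("A-1001", 99.95));
    }
}
```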

    Read the article

  • Cookie access within an HTTP Class

    - by James Jeffery
    I have an HTTP class that has a Get and a Post method. It's a simple class I created to encapsulate Post and Get requests so I don't have to repeat the get/post code throughout the application. In C#: class HTTP { private CookieContainer cookieJar; private String userAgent = "..."; public HTTP() { this.cookieJar = new CookieContainer(); } public String get(String url) { // Make get request. Return the JSON } public String post(String url, String postData) { // Make post request. Return the JSON } } I've made the CookieJar a property because I want to preserve the cookie values throughout the session. If the user is logged into Twitter with my application, then for each request I make (be it get or post) I want to use the cookies so they remain logged in. That's the basics of it anyway. But I don't want to return a string in all instances. Sometimes I may want the cookie, or a header value, or something else from the request. Ideally I'd like to be able to do this in my code: Cookie cookie = http.get("http://google.com").cookie("g_user"); String g_user = cookie.value; or String source = http.get("http://google.com").body; My question: to do this, would I need to have a Get class and a Post class that are included within the HTTP class and are accessible via accessors? Within the Get and Post classes I would then have the cookie method, the body property, and whatever else is needed. Should I also use an interface, or create a Request class and have Post and Get extend it so that common methods and properties are available to both classes? Or am I thinking totally wrong?
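
    One direction for the fluent-result idea, sketched here in Java (java.net.http) rather than C#, with hypothetical names: get() returns a small result object that exposes both the body and the cookies accumulated in a shared jar, so the caller picks what it needs without the class committing to returning a bare string.

```java
import java.net.CookieManager;
import java.net.HttpCookie;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.Optional;

class Http {
    private final CookieManager cookieJar = new CookieManager();
    private final HttpClient client =
            HttpClient.newBuilder().cookieHandler(cookieJar).build();

    Result get(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // Cookies accumulated in the shared jar stay valid for later requests.
        List<HttpCookie> cookies = cookieJar.getCookieStore().getCookies();
        return new Result(response.body(), cookies);
    }

    // The result object exposes body and cookies instead of a bare String.
    record Result(String body, List<HttpCookie> cookies) {
        Optional<HttpCookie> cookie(String name) {
            return cookies.stream().filter(c -> c.getName().equals(name)).findFirst();
        }
    }
}

// Usage sketch:
// String gUser = new Http().get("https://example.com")
//         .cookie("g_user").map(HttpCookie::getValue).orElse(null);
```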

    Read the article

  • Finding the best practice for a game simulating tool

    - by Tougheart
    I'm studying Java right now, and I'm thinking of this tool as my practice project. The game is "League of Legends", in case anyone knows it. I'm not actually simulating the game as in simulating gameplay; I'm just trying to create a tool that can compare different champions to each other based on their own abilities and the items bought inside the game. The game basics are: every player has a champion in a team of 5 players playing against another team; each champion has a different set of abilities (usually 4) that s/he uses to do damage to opposing champions; each champion gets stronger by buying different items, increasing the attack it deals or decreasing the damage received. What I want to do is create a tool to be used outside the game that enables players to try out different builds for their champions and compare the figures against the champions they usually fight against. The goal is to enable players to get a deeper understanding of the different item combinations (builds) that can be used during games, instead of trying them out in real games, which can be very time-consuming. What I'm stuck on is the best practice I should follow to make this possible in Java: I can't figure out which classes should inherit from which, and whether champion and item specs should live in the code or be extracted from other files, especially since I'm talking about hundreds of items and champions to use in that tool. I'm self-studying Java and I don't have much practice with it, so I would really appreciate any broad guidelines, and sorry if my question doesn't fit here; I tried to follow the rules. English isn't my native language, so I'm really sorry if I wasn't clear enough; I would be more than happy to explain anything that's not understood.
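
    A rough sketch of the data-driven direction, in Java with hypothetical names: champions and items are plain data (ideally loaded from files rather than hard-coded as hundreds of classes), and a build's figures are computed by folding the item bonuses onto the champion's base stats.

```java
import java.util.List;

// Flat bonuses only in this sketch; percentage modifiers would need a
// slightly richer Stats type.
record Stats(double attackDamage, double armor, double health) {
    Stats plus(Stats other) {
        return new Stats(attackDamage + other.attackDamage,
                         armor + other.armor,
                         health + other.health);
    }
}

record Item(String name, Stats bonus) { }

record Champion(String name, Stats baseStats) { }

class Build {
    private final Champion champion;
    private final List<Item> items;

    Build(Champion champion, List<Item> items) {
        this.champion = champion;
        this.items = items;
    }

    // Effective stats = base stats + every item bonus.
    Stats effectiveStats() {
        return items.stream().map(Item::bonus)
                    .reduce(champion.baseStats(), Stats::plus);
    }
}
```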

    Read the article

  • Is avoiding the private access specifier in PHP justified?

    - by Tifa
    I come from a Java background and I have been working with PHP for almost a year now. I have worked with WordPress and Zend, and currently I'm using CakePHP. I was going through Cake's lib and I couldn't help noticing that Cake goes a long way to avoid the "private" access specifier. Cake says, in this tutorial: "Try to avoid private methods or variables, though, in favor of protected ones. The latter can be accessed or modified by subclasses, whereas private ones prevent extension or re-use." Why does Cake so thoroughly shun the "private" access specifier, while good OO design encourages its use, i.e. applying the most restrictive visibility to a class member that is not intended to be part of its exported API? I'm willing to believe that "private" functions are difficult to test, but is the rest of the convention justified outside Cake? Or is it just a Cake convention of OO design geared towards extensibility, at the expense of being a stickler for stringent (or traditional?) OO design?

    Read the article

  • FP for simulation and modelling

    - by heaptobesquare
    I'm about to start a simulation/modelling project. I already know that OOP is used for this kind of project. However, studying Haskell has made me consider using the FP paradigm for modelling a system of components. Let me elaborate: let's say I have a component of type A, characterised by a set of data (a parameter like temperature or pressure, a PDE and some boundary conditions, etc.), and a component of type B, characterised by a different set of data (a different or the same parameter, a different PDE and boundary conditions). Let's also assume that the functions/methods that are going to be applied to each component are the same (a Galerkin method, for example). If I were to use an OOP approach, I would create two objects that would encapsulate each type's data, the methods for solving the PDE (inheritance would be used here for code reuse) and the solution to the PDE. On the other hand, if I were to use an FP approach, each component would be broken down into its data parts and the functions that act upon the data in order to get the solution for the PDE. This approach seems simpler to me, assuming that linear operations on the data are trivial and that the parameters are constant. What if the parameters are not constant (for example, temperature increases suddenly and therefore cannot be immutable)? In OOP, the object's (mutable) state can be used. I know that Haskell has monads for that. To conclude, would implementing the FP approach actually be simpler, less time-consuming and easier to manage (adding a different type of component or a new method to solve the PDE) compared to the OOP one? I come from a C++/Fortran background, plus I'm not a professional programmer, so correct me on anything that I've got wrong.

    Read the article

  • Figuring out the Call chain

    - by BDotA
    Let's say I have an assemblyA that has a method which creates an instance of a class in assemblyB and calls its MethodFoo(). Now assemblyB also creates an instance of a class in assemblyC and calls its MethodFoo(). So no matter whether I start with assemblyB in the code flow or with assemblyA, in the end we are calling the MethodFoo() of assemblyC. My question is: when I am in MethodFoo(), how can I know who has called me? Did the call originally come from assemblyA, or was it from assemblyB? Is there any design pattern or a good OO way of solving this?
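
    One OO-flavoured option, sketched in Java with hypothetical names: make the originator explicit by passing a small context value down the chain, rather than inspecting the call stack at runtime (which works, but couples the callee to its callers).

```java
// The callee can branch on (or log) who started the chain without any
// stack-walking, and the dependency is visible in the method signatures.
enum Origin { ASSEMBLY_A, ASSEMBLY_B }

class ComponentC {
    void methodFoo(Origin origin) {
        System.out.println("methodFoo reached, call started in " + origin);
    }
}

class ComponentB {
    private final ComponentC c = new ComponentC();

    // B forwards the origin it was given...
    void methodFoo(Origin origin) {
        c.methodFoo(origin);
    }

    // ...or names itself when it is the entry point.
    void methodFoo() {
        c.methodFoo(Origin.ASSEMBLY_B);
    }
}

class ComponentA {
    private final ComponentB b = new ComponentB();

    void run() {
        b.methodFoo(Origin.ASSEMBLY_A);   // the chain knows A started it
    }
}
```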

    Read the article

  • Clarify the Single Responsibility Principle.

    - by dsimcha
    The Single Responsibility Principle states that a class should do one and only one thing. Some cases are pretty clear cut. Others, though, are difficult because what looks like "one thing" when viewed at a given level of abstraction may be multiple things when viewed at a lower level. I also fear that if the Single Responsibility Principle is honored at the lower levels, excessively decoupled, verbose ravioli code, where more lines are spent creating tiny classes for everything and plumbing information around than actually solving the problem at hand, can result. How would you describe what "one thing" means? What are some concrete signs that a class really does more than "one thing"?
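
    A small illustration of the "one thing per level of abstraction" point, in Java with hypothetical names: at the top level "notify the customer about an invoice" reads as one responsibility, while one level down it splits into rendering and delivery, each with its own reason to change.

```java
class InvoiceRenderer {
    String render(double amount) {
        return "You owe " + amount;        // formatting only
    }
}

class EmailSender {
    void send(String to, String body) {
        System.out.println("to " + to + ": " + body);   // delivery only
    }
}

// The coordinator's single job is wiring the two together; it has no reason
// to change when the template or the transport changes.
class InvoiceNotifier {
    private final InvoiceRenderer renderer = new InvoiceRenderer();
    private final EmailSender sender = new EmailSender();

    void notifyCustomer(String to, double amount) {
        sender.send(to, renderer.render(amount));
    }
}
```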

    Read the article

  • Should library classes be wrapped before using them in unit testing?

    - by Songo
    I'm doing unit testing, and in one of my classes I need to send a mail from one of the methods, so using constructor injection I inject an instance of the Zend_Mail class, which is in the Zend framework. Example: class Logger{ private $mail; function __construct(Zend_Mail $mail){ $this->mail=$mail; } function toBeTestedFunction(){ //Some code $this->mail->setTo('some value'); $this->mail->setSubject('some value'); $this->mail->setBody('some value'); $this->mail->send(); //Some } } However, unit testing demands that I test one component at a time, so I need to mock the Zend_Mail class. In addition, I'm violating the Dependency Inversion principle, as my Logger class now depends on a concretion, not an abstraction. Does that mean that I can never use a library class directly and must always wrap it in a class of my own? Example: interface Mailer{ public function setTo($to); public function setSubject($subject); public function setBody($body); public function send(); } class MyMailer implements Mailer{ private $mailer; function __construct(){ $this->mailer=new Zend_Mail(); //The class isn't injected this time } function setTo($to){ $this->mailer->setTo($to); } //implement the rest of the interface functions similarly } And now my Logger class can be happy :D class Logger{ private $mail; function __construct(Mailer $mail){ $this->mail=$mail; } //rest of the code unchanged } Questions: Although I solved the mocking problem by introducing an interface, I have created a totally new wrapper class (MyMailer) that now needs to be unit tested, although it only wraps Zend_Mail, which is already unit tested by the Zend team. Is there a better approach to all this? Zend_Mail's send() function can actually take a Zend_Transport object when called (i.e. public function send($transport = null)). Does this make the idea of a wrapper class more appealing? The code is in PHP, but answers don't have to be. This is more of a design issue than a language-specific one.

    Read the article

  • Creating Objects

    - by user1424667
    I have a general coding-standards question. Is it bad practice to initialize and create an object in multiple methods, depending on the outcome of a user's choice? So, for example, if the user quits a poker game, create the poker hand with the cards the user has received, even if there are fewer than 5; and if the user played till the end, create the object with the completed hand to show the outcome of the game. The key is that the object will in actuality only be created once; there are just different paths and different parameters it will receive, depending on whether the user folded or played on to the showdown.

    Read the article

  • What is the order of diagram drawing in a design?

    - by Manuel Malagon
    I'm new to OOP and UML and I have some confusion here. I'd like to know where to start. I mean, somebody comes to you and asks you to do something (which involves software design, of course); once you have determined what has to be done, in what order do you work on the software architecture? Is the use case diagram first, or the class diagram? Or should I draw some diagrams in parallel? But basically, where should I start, and which UML diagram goes first? Thanks for helping!

    Read the article

  • Python simulation-scripts architecture

    - by Beastcraft
    Situation: I have some scripts that simulate user activity on the desktop. I've defined a few cases (workflows) and implemented them in Python. I've also written some classes for interacting with the users' software (e.g. the web browser, etc.). Problem: I'm a total beginner in software design/architecture (coding isn't a problem). How could I structure what I described above? Should I provide a library which contains all the workflows as functions, or a separate class/module etc. for each workflow? I want to keep the workflows simple. The complexity should be hidden in the classes for interacting with the users' software. Are there any papers/books I could read about this, or could you provide some tips? Kind regards, B

    Read the article

  • Learning how to design knowledge and data flow [closed]

    - by max
    In designing software, I spend a lot of time deciding how the knowledge (algorithms/business logic) and data should be allocated between different entities; that is, which object should know what. I am asking for advice about books, articles, presentations, classes, or other resources that would help me learn how to do it better. I code primarily in Python, but my question is not really language-specific; even if some of the insights I learn don't work in Python, that's fine. I'll give a couple of examples to clarify what I mean. Example 1: I want to perform some computation. As a user, I will need to provide parameters to do the computation. I can have all those parameters sent to the "main" object, which then uses them to create other objects as needed. Or I can create one "main" object, as well as several additional objects; the additional objects would then be sent to the "main" object as parameters. What factors should I consider to make this choice? Example 2: Let's say I have a few objects of type A that can perform a certain computation. The main computation often involves using an object of type B that performs some interim computation. I can either "teach" A instances what exact parameters to pass to B instances (i.e., make B "dumb"), or I can "teach" B instances to figure out what needs to be done when looking at an A instance (i.e., make B "smart"). What should I think about when I'm making this choice?

    Read the article

  • Is wrapping third-party code the only solution to unit test its consumers? [closed]

    - by Songo
    I'm doing unit testing, and in one of my classes I need to send a mail from one of the methods, so using constructor injection I inject an instance of the Zend_Mail class, which is in the Zend framework. Now some people argue that if a library is stable enough and won't change often, then there is no need to wrap it. So, assuming that Zend_Mail is stable, won't change and fits my needs entirely, I won't need a wrapper for it. Now take a look at my class Logger, which depends on Zend_Mail: class Logger{ private $mail; function __construct(Zend_Mail $mail){ $this->mail=$mail; } function toBeTestedFunction(){ //Some code $this->mail->setTo('some value'); $this->mail->setSubject('some value'); $this->mail->setBody('some value'); $this->mail->send(); //Some } } However, unit testing demands that I test one component at a time, so I need to mock the Zend_Mail class. In addition, I'm violating the Dependency Inversion principle, as my Logger class now depends on a concretion, not an abstraction. Now, is wrapping Zend_Mail the only solution, or is there a better approach to this problem? The code is in PHP, but answers don't have to be. This is more of a design issue than a language-specific one.

    Read the article

  • Is it wrong to use a boolean parameter to determine behavior?

    - by Ray
    I have seen a practice from time to time that "feels" wrong, but I can't quite articulate what is wrong about it. Or maybe it's just my prejudice. Here goes: A developer defines a method with a boolean as one of its parameters, and that method calls another, and so on, and eventually that boolean is used, solely to determine whether or not to take a certain action. This might be used, for example, to allow the action only if the user has certain rights, or perhaps if we are (or aren't) in test mode or batch mode or live mode, or perhaps only when the system is in a certain state. Well there is always another way to do it, whether by querying when it is time to take the action (rather than passing the parameter), or by having multiple versions of the method, or multiple implementations of the class, etc. My question isn't so much how to improve this, but rather whether or not it really is wrong (as I suspect), and if it is, what is wrong about it.
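
    A tiny illustration of the pattern and one common alternative, sketched in Java with hypothetical names: the flag version forces every caller in the chain to thread the boolean along, while the split version lets callers state their intent in the method name.

```java
// The questionable shape: behaviour switched by a boolean parameter.
class ReportServiceWithFlag {
    void generate(String reportId, boolean sendEmail) {
        String report = "report " + reportId;     // build the report either way
        if (sendEmail) {
            System.out.println("emailing " + report);
        }
    }
}

// One alternative: two methods whose names say what happens; no flag
// threaded through intermediate calls.
class ReportService {
    void generate(String reportId) {
        build(reportId);
    }

    void generateAndEmail(String reportId) {
        System.out.println("emailing " + build(reportId));
    }

    private String build(String reportId) {
        return "report " + reportId;
    }
}
```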

    Read the article
