Search Results

Search found 676 results on 28 pages for 'polymorphic associations'.

Page 25/28 | < Previous Page | 21 22 23 24 25 26 27 28  | Next Page >

  • Clean way to use mutable implementation of Immutable interfaces for encapsulation

    - by dsollen
    My code is working on some composite relationship which creates a tree structure, class A has many children of type B, which has many children of type C etc. The lowest level class, call it bar, also points to a connected bar class. This effectively makes nearly every object in my domain inter-connected. Immutable objects would be problematic due to the expense of rebuilding almost all of my domain to make a single change to one class. I chose to go with an interface approach. Every object has an Immutable interface which only publishes the getter methods. I have controller objects which construct the domain objects and thus have references to the full objects, and so are capable of calling the setter methods; but they only ever publish the immutable interface. Any change requested will go through the controller. So something like this: public interface ImmutableFoo{ public Bar getBar(); public Location getLocation(); } public class Foo implements ImmutableFoo{ private Bar bar; private Location location; @Override public Bar getBar(){ return bar; } public void setBar(Bar bar){ this.bar=bar; } @Override public Location getLocation(){ return location; } } public class Controller{ private Map<Location, Foo> fooMap; public ImmutableFoo addBar(Bar bar){ Foo foo=fooMap.get(bar.getLocation()); if(foo!=null) foo.addBar(bar); return foo; } } I felt the basic approach was sensible; however, when I speak to others they always seem to have trouble envisioning what I'm describing, which leaves me concerned that I may have a larger design issue than I'm aware of. Is it problematic to have domain objects so tightly coupled, or to use the quasi-mutable approach to modifying them? Assuming that the design approach itself isn't inherently flawed, the particular discussion which left me wondering about my approach had to do with the presence of business logic in the domain objects. Currently I have my setter methods in the mutable objects do error checking and all other logic required to verify and make a change to the object. It was suggested that this should be pulled out into a service class, which applies all the business logic, to simplify my domain objects. I understand the advantage in mocking/testing and general separation of logic into two classes. However, with a service method/object it seems I lose some of the advantage of polymorphism: I can't override a base class to add in new error checking or business logic. It seems, if my polymorphic classes were complicated enough, I would end up with a service method that has to check a dozen flags to decide what error checking and business logic applies. So, for example, if I wanted to have a ChildFoo which also had a size field which should be compared to bar before adding bar, my current approach would look something like this. public class Foo implements ImmutableFoo{ public void addBar(Bar bar){ if(!getLocation().equals(bar.getLocation())) throw new LocationException(); this.bar=bar; } } public interface ImmutableChildFoo extends ImmutableFoo{ public int getSize(); } public class ChildFoo extends Foo implements ImmutableChildFoo{ private int size; @Override public int getSize(){ return size; } @Override public void addBar(Bar bar){ if(getSize()<bar.getSize()) throw new LocationException(); super.addBar(bar); } } My colleague was suggesting instead having a service object that looks something like this (over simplified, the 'service' object would likely be more complex). 
public interface ImmutableFoo{ // original interface, presumably used in other methods public Location getLocation(); public boolean isChildFoo(); } public interface ImmutableSizedFoo extends ImmutableFoo{ public int getSize(); } public class Foo implements ImmutableSizedFoo{ public Bar bar; public void addBar(Bar bar){ this.bar=bar; } @Override public int getSize(){ //default size if no size is known return 0; } @Override public boolean isChildFoo(){ return false; } } public class ChildFoo extends Foo{ private int size; @Override public int getSize(){ return size; } @Override public boolean isChildFoo(){ return true; } } public class Controller{ private Map<Location, Foo> fooMap; public ImmutableSizedFoo addBar(Bar bar){ Foo foo=fooMap.get(bar.getLocation()); Service.addBarToFoo(foo, bar); return foo; } } public class Service{ public static void addBarToFoo(Foo foo, Bar bar){ if(foo==null) return; if(!foo.getLocation().equals(bar.getLocation())) throw new LocationException(); if(foo.isChildFoo() && foo.getSize()<bar.getSize()) throw new LocationException(); foo.addBar(bar); } } Is the recommended approach of using services and inversion of control inherently superior, or superior in certain cases, to overriding methods directly? If so, is there a good way to go with the service approach while not losing the power of polymorphism to override some of the behavior?
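
    One way to keep the service approach without giving up polymorphic checks is the template-method shape: the entry point the service (or controller) calls lives in the base class and delegates to a protected, overridable validation hook, so subclasses extend the checks instead of the service accumulating flag tests. The question is Java, but the pattern is language-agnostic; below is a minimal, self-contained sketch in C++ whose names are invented for illustration and are not the poster's code.

    // Sketch: template-method validation hook; subclasses add checks by overriding validate().
    #include <stdexcept>
    #include <string>

    struct Bar { std::string location; int size; };

    class Foo {
    public:
        virtual ~Foo() = default;
        void addBar(const Bar& b) {        // single entry point a thin service can call
            validate(b);                   // polymorphic hook
            bar_ = b;
        }
    protected:
        virtual void validate(const Bar& b) const {
            if (b.location != location_) throw std::runtime_error("location mismatch");
        }
        std::string location_ = "here";
    private:
        Bar bar_{};
    };

    class ChildFoo : public Foo {
    protected:
        void validate(const Bar& b) const override {
            Foo::validate(b);                                    // keep the base checks
            if (size_ < b.size) throw std::runtime_error("bar too large");
        }
    private:
        int size_ = 10;
    };

    int main() {
        ChildFoo f;
        f.addBar(Bar{"here", 5});   // passes both the base and the derived checks
    }

    The service can still own orchestration (lookup, transactions, auditing) while each domain class keeps its own validation, which is the part that benefits from overriding.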

    Read the article

  • Windows Server 2003 IPSec Tunnel Connected, But Not Working (Possibly NAT/RRAS Related)

    - by Kevinoid
    Configuration I have setup a "raw" IPSec tunnel between a Windows Server 2003 (SBS) machine and a Netgear FVG318 according to the instructions in Microsoft KB816514. The configuration is as follows (using the same conventions as the article): NetA | SBS2003 | FVG318 | NetB 10.0.0.0/24 | 216.x.x.x | 69.y.y.y | 10.0.254.0/24 Both the Main Mode and Quick Mode Security Associations are successfully completed and appear in the IP Security Monitor. I am also able to ping the SBS2003 server on its private address from any computer on NetB. The Problem Any traffic sent from a computer on NetA to NetB, or from SBS2003 to NetB (excluding ICMP Ping responses), is sent out on the public network interface outside the IPSec tunnel (no encryption or header authentication, as if the tunnel were not there). Pings sent from a computer on NetB to a computer on NetA successfully reach computers on NetA, but the responses are silently discarded by SBS2003 (they do not go out in the clear and do not generate any encrypted traffic). Possible Solutions Incorrect Configuration I could have mistyped something, somewhere, or KB816514 could be incorrect in some way. I have tried very hard to eliminate the first option. Have re-created the configuration several times, tried tweaking and adjusting all the settings I could without success (most prevent the SA from being established). NAT/RRAS I have seen multiple posts elsewhere suggesting that this could be due to interaction between NAT and the IPSec filters. Possibly the NetA private addresses get rewritten to 216.x.x.x before being compared with the Quick Mode IPSec filters and don't get tunneled because of the mismatch. In fact, The Cable Guy article from June 2005 "TCP/IP Packet Processing Paths" suggests that this is the case, (see step 2 and 4 of the Transit Traffic path). If this is the case, is there a way to exclude NetA-NetB traffic from NAT? Any thoughts, ideas, suggestions, and/or comments are appreciated. Update (2011-06-26) After failing to solve the problem, I resorted to paid Microsoft support. They were unable to solve the problem. Since then I have implemented a solution based on Linux that is working quite well. I will attempt to evaluate any proposed answers as best I can, but current configurations and time constraints will make this slow...

    Read the article

  • opening adobe reader results in infinite explorer.exe process creation loop

    - by irrational John
    First, apologies if the answer to this is only a Google away. I tried, honest I did. But I wasn't able to find anything about this problem posted elsewhere. I'm using Adobe Reader v9.3.2 in Windows 7 Home Premium 64-bit. If you want more system details, then just request them. What happens is that when I attempt to open a PDF by clicking "Open" on it then (1) adobe reader never opens and (2) the explorer.exe program is (apparently) recursively opened. I base this on opening the Task Manager and seeing a long list of explorer.exe processes under the "Processes" tab. Usually there is only one. When I recreate this problem, the list of explorer.exe processes are at least a page or two long. (Too many to bother counting). I "correct" this problem by logging off and then logging back on. This kills all the explorer.exe tasks. Unfortunately I don't know another way to terminate them all. Now here's the curious part. This only happens when I attempt to "Open" a PDF file. If instead I use the context menu (right mouse click on the PDF) and select "Open with" and "Adobe Reader 9.3" then Adobe Reader opens the file with no problem. It seems that there is something wrong with the setting for the default open action for PDF files. However, I have been unable to fix this by changing the Windows setting. Here is what I have tried. When I open Control Panel > All Control Panel Items > Default Programs > Set Associations I do not find an entry for file type .pdf. There are only entries for .pdfxml and .pdx. When use "Open with" on a PDF file and select "Choose default program", the check box for "Always use the selected program to open this kind of file" is disabled (greyed out). I have uninstalled and reinstalled Adobe Reader but the problem persists. While obviously no lives are at stake here, this problem is annoying the frickin' heck out of me. If I forget and recreate this bug then I have to stop everything I'm doing to stop it. Any suggestions on how I might go about fixing this?

    Read the article

  • Windows 2008 R2 IPsec encryption in tunnel mode, hosts in same subnet

    - by fission
    In Windows there appear to be two ways to set up IPsec: The IP Security Policy Management MMC snap-in (part of secpol.msc, introduced in Windows 2000). The Windows Firewall with Advanced Security MMC snap-in (wf.msc, introduced in Windows 2008/Vista). My question concerns #2 – I already figured out what I need to know for #1. (But I want to use the ‘new’ snap-in for its improved encryption capabilities.) I have two Windows Server 2008 R2 computers in the same domain (domain members), on the same subnet: server2 172.16.11.20 server3 172.16.11.30 My goal is to encrypt all communication between these two machines using IPsec in tunnel mode, so that the protocol stack is: IP ESP IP …etc. First, on each computer, I created a Connection Security Rule: Endpoint 1: (local IP address), eg 172.16.11.20 for server2 Endpoint 2: (remote IP address), eg 172.16.11.30 Protocol: Any Authentication: Require inbound and outbound, Computer (Kerberos V5) IPsec tunnel: Exempt IPsec protected connections Local tunnel endpoint: Any Remote tunnel endpoint: (remote IP address), eg 172.16.11.30 At this point, I can ping each machine, and Wireshark shows me the protocol stack; however, nothing is encrypted (which is expected at this point). I know that it's unencrypted because Wireshark can decode it (using the setting Attempt to detect/decode NULL encrypted ESP payloads) and the Monitor Security Associations Quick Mode display shows ESP Encryption: None. Then on each server, I created Inbound and Outbound Rules: Protocol: Any Local IP addresses: (local IP address), eg 172.16.11.20 Remote IP addresses: (remote IP address), eg 172.16.11.30 Action: Allow the connection if it is secure Require the connections to be encrypted The problem: Though I create the Inbound and Outbound Rules on each server to enable encryption, the data is still going over the wire (wrapped in ESP) with NULL encryption. (You can see this in Wireshark.) When the arrives at the receiving end, it's rejected (presumably because it's unencrypted). [And, disabling the Inbound rule on the receiving end causes it to lock up and/or bluescreen – fun!] The Windows Firewall log says, eg: 2014-05-30 22:26:28 DROP ICMP 172.16.11.20 172.16.11.30 - - 60 - - - - 8 0 - RECEIVE I've tried varying a few things: In the Rules, setting the local IP address to Any Toggling the Exempt IPsec protected connections setting Disabling rules (eg disabling one or both sets of Inbound or Outbound rules) Changing the protocol (eg to just TCP) But realistically there aren't that many knobs to turn. Does anyone have any ideas? Has anyone tried to set up tunnel mode between two hosts using Windows Firewall? I've successfully got it set up in transport mode (ie no tunnel) using exactly the same set of rules, so I'm a bit surprised that it didn't Just Work™ with the tunnel added.

    Read the article

  • Automating silent software deployments on Solaris 10

    - by datSilencer
    Hello everyone. Essentially, the question I'd like to ask is related to the automation of software package deployments on Solaris 10. Specifically, I have a set of software components in tar files that run as daemon processes after being extracted and configured in the host environment. Pretty much like any server side software package out there, I need to ensure that a list of prerequisites are met before extracting and running the software. For example: Checking that certain users exists, and they are associated with one or many user groups. If not, then create them and their group associations. Checking that target application folders exist and if not, then create them with preconfigured path values defined when the package was assembled. Checking that such folders have the appropriate access control level and ownership for a certain user. If not, then set them. Checking that a set of environment variables are defined in /etc/profile, pointed to predefined path locations, added to the general $PATH environment variable, and finally exported into the user's environment. Other files include /etc/services and /etc/system. Obviously, doing this for many boxes (the goal in question) by hand can be slow and error prone. I believe a better alternative is to somehow automate this process. So far I have thought about the following options, and discarded them for one reason or another. 1) Traditional shell scripts. I've only troubleshooted these before, and I don't really have much experience with them. These would be my last resort. 2) Python scripts using the pexpect library for analyzing system command output. This was my initial choice since the target Solaris environments have it installed. However, I want to make sure that I'm not reinveting the wheel again :P. 3) Ant or Gradle scripts. They may be an option since the boxes also have java 1.5 enabled, and the fileset abstractions can be very useful. However, they may fall short when dealing with user and folder permissions checking/setting. It seems obvious to me that I'm not the first person in this situation, but I don't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this. I thank you for your time and help.

    Read the article

  • Another design-related C++ question

    - by Kotti
    Hi! I am trying to find some optimal solutions in C++ coding patterns, and this is one of my game engine - related questions. Take a look at the game object declaration (I removed almost everything, that has no connection with the question). // Abstract representation of a game object class Object : public Entity, IRenderable, ISerializable { // Object parameters // Other not really important stuff public: // @note Rendering template will never change while // the object 'lives' Object(RenderTemplate& render_template, /* params */) : /*...*/ { } private: // Object rendering template RenderTemplate render_template; public: /** * Default object render method * Draws rendering template data at (X, Y) with (Width, Height) dimensions * * @note If no appropriate rendering method overload is specified * for any derived class, this method is called * * @param Backend & b * @return void * @see */ virtual void Render(Backend& backend) const { // Render sprite from object's // rendering template structure backend.RenderFromTemplate( render_template, x, y, width, height ); } }; Here is also the IRenderable interface declaration: // Objects that can be rendered interface IRenderable { /** * Abstract method to render current object * * @param Backend & b * @return void * @see */ virtual void Render(Backend& b) const = 0; } and a sample of a real object that is derived from Object (with severe simplifications :) // Ball object class Ball : public Object { // Ball params public: virtual void Render(Backend& b) const { b.RenderEllipse(/*params*/); } }; What I wanted to get is the ability to have some sort of standard function, that would draw sprite for an object (this is Object::Render) if there is no appropriate overload. So, one can have objects without Render(...) method, and if you try to render them, this default sprite-rendering stuff is invoked. And, one can have specialized objects, that define their own way of being rendered. I think, this way of doing things is quite good, but what I can't figure out - is there any way to split the objects' "normal" methods (like Resize(...) or Rotate(...)) implementation from their rendering implementation? Because if everything is done the way described earlier, a common .cpp file, that implements any type of object would generally mix the Resize(...), etc methods implementation and this virtual Render(...) method and this seems to be a mess. I actually want to have rendering procedures for the objects in one place and their "logic implementation" - in another. Is there a way this can be done (maybe alternative pattern or trick or hint) or this is where all this polymorphic and virtual stuff sucks in terms of code placement?
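
    If the goal is simply to keep rendering code physically separate from the rest of an object's implementation, one plain-C++ option is to split member function definitions across translation units: each class keeps its virtual Render override, but all Render definitions are gathered into rendering-only .cpp files while Resize/Rotate and friends live elsewhere. A rough sketch of such a file layout, with Backend and the base class reduced to stubs so the snippet stands on its own (the real ones come from the engine described above):

    // --- object.h (stubs for the sketch) ---
    struct Backend { void RenderEllipse(float rx, float ry, float angle) { /* ... */ } };
    struct Object  { virtual ~Object() = default; virtual void Render(Backend&) const { /* default sprite path */ } };

    // --- ball.h ---
    class Ball : public Object {
    public:
        void Resize(float factor);                 // "logic" interface
        void Rotate(float degrees);
        void Render(Backend& b) const override;    // rendering interface
    private:
        float radius_ = 1.0f;
        float angle_  = 0.0f;
    };

    // --- ball_logic.cpp : game/state logic only ---
    void Ball::Resize(float factor)  { radius_ *= factor; }
    void Ball::Rotate(float degrees) { angle_ += degrees; }

    // --- render/ball_render.cpp : every Render() definition can be grouped here ---
    void Ball::Render(Backend& b) const { b.RenderEllipse(radius_, radius_, angle_); }

    Objects that are happy with the default sprite rendering simply never appear in the render/ directory, which leaves the Object::Render fallback from the question intact.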

    Read the article

  • Mixing policy-based design with CRTP in C++

    - by Eitan
    I'm attempting to write a policy-based host class (i.e., a class that inherits from its template class), with a twist, where the policy class is also templated by the host class, so that it can access its types. One example where this might be useful is where a policy (used like a mixin, really), augments the host class with a polymorphic clone() method. Here's a minimal example of what I'm trying to do: template <template <class> class P> struct Host : public P<Host<P> > { typedef P<Host<P> > Base; typedef Host* HostPtr; Host(const Base& p) : Base(p) {} }; template <class H> struct Policy { typedef typename H::HostPtr Hptr; Hptr clone() const { return Hptr(new H((Hptr)this)); } }; Policy<Host<Policy> > p; Host<Policy> h(p); int main() { return 0; } This, unfortunately, fails to compile, in what seems to me like circular type dependency: try.cpp: In instantiation of ‘Host<Policy>’: try.cpp:10: instantiated from ‘Policy<Host<Policy> >’ try.cpp:16: instantiated from here try.cpp:2: error: invalid use of incomplete type ‘struct Policy<Host<Policy> >’ try.cpp:9: error: declaration of ‘struct Policy<Host<Policy> >’ try.cpp: In constructor ‘Host<P>::Host(const P<Host<P> >&) [with P = Policy]’: try.cpp:17: instantiated from here try.cpp:5: error: type ‘Policy<Host<Policy> >’ is not a direct base of ‘Host<Policy>’ If anyone can spot an obvious mistake, or has successfuly mixing CRTP in policies, I would appreciate any help.
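
    The error comes from Policy<Host<Policy>> asking for Host's nested typedef (HostPtr) while Host<Policy> is still incomplete - it is in the middle of being defined, since it inherits from the policy. One workaround, assuming the policy only needs the host's type inside member bodies (where the host is complete by the time the body is instantiated) rather than at class scope, looks roughly like the following; the clone-specific names are invented:

    template <template <class> class P>
    struct Host : P<Host<P> > {
        typedef P<Host<P> > Base;
    };

    template <class H>
    struct ClonePolicy {
        // H is incomplete while this class template is instantiated, so the class
        // body must not reference H's nested types. Inside the member body it is
        // fine: by the time clone() is instantiated on first use, H is complete.
        H* clone() const {
            return new H(static_cast<const H&>(*this));
        }
    };

    int main() {
        Host<ClonePolicy> h;
        Host<ClonePolicy>* copy = h.clone();
        delete copy;
        return 0;
    }

    If the policy genuinely needs HostPtr and friends, a separate traits template specialized for the host (and consulted only inside member bodies) achieves the same deferral.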

    Read the article

  • C++ Virtual Constructor, without clone()

    - by Julien L.
    I want to perform "deep copies" of an STL container of pointers to polymorphic classes. I know about the Prototype design pattern, implemented by means of the Virtual Ctor Idiom, as explained in the C++ FAQ Lite, Item 20.8. It is simple and straightforward: struct ABC // Abstract Base Class { virtual ~ABC() {} virtual ABC * clone() = 0; }; struct D1 : public ABC { virtual D1 * clone() { return new D1( *this ); } // Covariant Return Type }; A deep copy is then: for( i = 0; i < oldVector.size(); ++i ) newVector.push_back( oldVector[i]->clone() ); Drawbacks As Andrei Alexandrescu states it: The clone() implementation must follow the same pattern in all derived classes; in spite of its repetitive structure, there is no reasonable way to automate defining the clone() member function (beyond macros, that is). Moreover, clients of ABC can possibly do something bad. (I mean, nothing prevents clients to do something bad, so, it will happen.) Better design? My question is: is there another way to make an abstract base class clonable without requiring derived classes to write clone-related code? (Helper class? Templates?) Following is my context. Hopefully, it will help understanding my question. I am designing a class hierarchy to perform operations on a class Image: struct ImgOp { virtual ~ImgOp() {} bool run( Image & ) = 0; }; Image operations are user-defined: clients of the class hierarchy will implement their own classes derived from ImgOp: struct CheckImageSize : public ImgOp { std::size_t w, h; bool run( Image &i ) { return w==i.width() && h==i.height(); } }; struct CheckImageResolution; struct RotateImage; ... Multiple operations can be performed sequentially on an image: bool do_operations( std::vector< ImgOp* > v, Image &i ) { std::for_each( v.begin(), v.end(), /* bind2nd(mem_fun(&ImgOp::run), i ...) don't remember syntax */ ); } int main( ... ) { std::vector< ImgOp* > v; v.push_back( new CheckImageSize ); v.push_back( new CheckImageResolution ); v.push_back( new RotateImage ); Image i; do_operations( v, i ); } If there are multiple images, the set can be split and shared over several threads. To ensure "thread-safety", each thread must have its own copy of all operation objects contained in v -- v becomes a prototype to be deep copied in each thread.
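
    One way to take clone() out of every derived class, assuming a copy constructor is all a deep copy needs, is a small CRTP layer between the abstract base and each concrete operation: it implements clone() once in terms of the derived type's copy constructor. A minimal sketch, with Image and the operation trimmed down so it compiles on its own:

    #include <cstddef>
    #include <memory>
    #include <vector>

    struct Image { int width = 0, height = 0; };          // stub for the sketch

    struct ImgOp {
        virtual ~ImgOp() {}
        virtual bool run(Image&) = 0;
        virtual ImgOp* clone() const = 0;
    };

    // Written once; every concrete operation inherits clone() from it.
    template <class Derived>
    struct ClonableImgOp : ImgOp {
        ImgOp* clone() const override {
            return new Derived(static_cast<const Derived&>(*this));
        }
    };

    // Client-defined operations carry no clone-related code at all.
    struct CheckImageSize : ClonableImgOp<CheckImageSize> {
        std::size_t w = 0, h = 0;
        bool run(Image& i) override {
            return w == static_cast<std::size_t>(i.width) && h == static_cast<std::size_t>(i.height);
        }
    };

    int main() {
        std::vector<std::unique_ptr<ImgOp>> ops;
        ops.push_back(std::unique_ptr<ImgOp>(new CheckImageSize));
        // Deep copy for another thread: no per-class code was needed to get here.
        std::vector<std::unique_ptr<ImgOp>> copies;
        for (std::size_t i = 0; i < ops.size(); ++i)
            copies.push_back(std::unique_ptr<ImgOp>(ops[i]->clone()));
    }

    The trade-off relative to the FAQ idiom is that clone() here returns ImgOp* rather than a covariant Derived*; if covariance matters to callers, a derived-specific helper can be layered on in the same way.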

    Read the article

  • Multiple layouts in rails [Newbie Q]

    - by BriteLite
    Hi. As a newb, I decided to build a "home inventory" application. I am now stuck on how to programmatically select a layout based on what type of item it is when viewing it in a browser. According to my planning, so far I should have created a few models to represent types of items I can find in my home: Furniture, Electronics and Books. class Book < ActiveRecord::Base end class Furniture < ActiveRecord::Base end class Electronic < ActiveRecord::Base end Now the Book model has things like isbn, pages, address, and category. The Furniture model has things like color, price, address, and category. Electronic has things like name, voltage, address, and category. Here is where I got confused. I know the property address is going to be the same for all of them. I also know that I will need to create multiple "layouts" for 3 different types of items to show the different properties of said items with appropriate graphics and stylesheets. But how will I go about deciding which category the item is so I can determine which layout to render? According to me, this is how I will do it: class DisplayController < ApplicationController def display @item = params[:item] if @item.category == "electronics" render :layout => 'electronics' end end end In my routes.rb map.display ':item', :controller => 'display', :action => 'display' I only seem to have one concern with this: I probably will add a lot of categories later on and think there should be a more DRY-esque way of dealing with them, rather than hardcoding them. I understand that I need to add into my layout html tags to display relevant information for that particular category. ----Questions---- Is this the right way to approach this type of problem? Will this approach be compatible when I decide to add a gem like *thinking_sphinx* to run search? What issues do you see with my approach and how can I make it better? I was reading something about "Polymorphic Assoc", does that apply in this case, since category exists for all items? Also, I was trying to get a route to render a URL like "http://localhost/living-room-tv"

    Read the article

  • current_user.user_type_id = @employer ID

    - by sscirrus
    I am building a system with a User model (authenticated using AuthLogic) and three user types in three models: one of these models is Employer. Each of these three models has_many :users, :as => :authenticable. I start by having a new visitor to the site create their own 'User' record with username, password, which user type they are, etc. Upon creation, the user is sent to the 'new' action for one of the three models. So, if they tell us they are an employer, we redirect_to :controller => "employers", :action => "new". Question: When the employer has submitted, I want to set the current_user.user_type_id equal to the employer ID. This should be simple... but it's not working. # Employers Controller / new def new @employer = Employer.new 1.times {@employer.addresses.build} render :layout => 'forms' end # Employers Controller / create def create @employer = Employer.new(params[:employer]) if @employer.save if current_user.blank? redirect_to :controller => "users", :action => "new" else current_user.user_type_id = @employer.id current_user.user_type = "Employer" redirect_to :action => "home", :id => current_user.user_type_id end else render :action => "new" end end ------UPDATE------ Hi guys. In response: I am using this table structure because each of my three user type models has lots of different fields and each has different relationships to the other models, which is why I've avoided STI. By 1.times {@employer.addresses.build} I'm connecting the employer model to the address polymorphic table in one form, so I'm asking the controller to build a new address to go along with the new employer. Averell: you mentioned encapsulating... something in the model using a 'setter' method. I have no idea what you mean by this - could you please explain how this works (or direct me to an example elsewhere)? With tsdbrown's answer I have managed to create the behavior I want... if there's a more elegant way to accomplish the same thing I'd love to learn how. Thanks very much. Thanks to tsdbrown for answering the current_user.save problem!

    Read the article

  • Is it possible to create a C++ factory system that can create an instance of any "registered" object

    - by chrensli
    Hello, I've spent my entire day researching this topic, so it is with some scattered knowledge on the topic that i come to you with this inquiry. Please allow me to describe what I am attempting to accomplish, and maybe you can either suggest a solution to the immediate question, or another way to tackle the problem entirely. I am trying to mimic something related to how XAML files work in WPF, where you are essentially instantiating an object tree from an XML definition. If this is incorrect, please inform. This issue is otherwise unrelated to WPF, C#, or anything managed - I solely mention it because it is a similar concept.. So, I've created an XML parser class already, and generated a node tree based on ObjectNode objects. ObjectNode objects hold a string value called type, and they have an std::vector of child ObjectNode objects. The next step is to instantiate a tree of objects based on the data in the ObjectNode tree. This intermediate ObjectNode tree is needed because the same ObjectNode tree might be instantiated multiple times or delayed as needed. The tree of objects that is being created is such that the nodes in the tree are descendants of a common base class, which for now we can refer to as MyBase. Leaf nodes can be of any type, not necessarily derived from MyBase. To make this more challenging, I will not know what types of MyBase derived objects might be involved, so I need to allow for new types to be registered with the factory. I am aware of boost's factory. Their docs have an interesting little design paragraph on this page: o We may want a factory that takes some arguments that are forwarded to the constructor, o we will probably want to use smart pointers, o we may want several member functions to create different kinds of objects, o we might not necessarily need a polymorphic base class for the objects, o as we will see, we do not need a factory base class at all, o we might want to just call the constructor - without #new# to create an object on the stack, and o finally we might want to use customized memory management. I might not be understanding this all correctly, but that seems to state that what I'm trying to do can be accomplished with boost's factory. But all the examples I've located, seem to describe factories where all objects are derived from a base type. Any guidance on this would be greatly appreciated. Thanks for your time!
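
    A common shape for this, assuming every registered type can be constructed from the parsed ObjectNode, is a registry that maps the node's type string to a creation function; new types register themselves at startup, so the factory never has to be edited when a type is added. All names below are invented for illustration, and only MyBase-derived nodes are covered (leaf values of other types would be handled separately):

    #include <functional>
    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>
    #include <utility>

    struct ObjectNode { std::string type; /* children, attributes, ... */ };
    struct MyBase { virtual ~MyBase() {} };

    class Factory {
    public:
        using Creator = std::function<std::unique_ptr<MyBase>(const ObjectNode&)>;
        static Factory& instance() { static Factory f; return f; }
        void add(const std::string& typeName, Creator c) { creators_[typeName] = std::move(c); }
        std::unique_ptr<MyBase> create(const ObjectNode& node) const {
            auto it = creators_.find(node.type);
            return it == creators_.end() ? nullptr : it->second(node);
        }
    private:
        std::map<std::string, Creator> creators_;
    };

    // A new type registers itself once, typically in its own .cpp file.
    struct Panel : MyBase { explicit Panel(const ObjectNode&) {} };
    namespace {
        const bool panelRegistered = [] {
            Factory::instance().add("Panel",
                [](const ObjectNode& n) { return std::unique_ptr<MyBase>(new Panel(n)); });
            return true;
        }();
    }

    int main() {
        ObjectNode node;
        node.type = "Panel";                                   // as read from the XML
        std::unique_ptr<MyBase> obj = Factory::instance().create(node);
        std::cout << (obj ? "created Panel\n" : "unknown type\n");
    }

    Recursing over each node's children, dispatching each child back through Factory::create and attaching it to its parent, then rebuilds the whole tree from the intermediate ObjectNode representation.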

    Read the article

  • What are the Rails best practices for javascript templates in restful/resourceful controllers?

    - by numbers1311407
    First, 2 common (basic) approaches: # returning from some FoosController method respond_to do |format| # 1. render the javascript directly format.js { render :json => @foo.to_json } # 2. render the default template, say update.js.erb format.js { render } end # in update.js.erb $('#foo').html("<%= escape_javascript(render(@foo)) %>") These are obviously simple cases but I wanted to illustrate what I'm talking about. I believe that these are also the cases expected by the default responder in rails 3 (either the action-named default template or calling to_#{format} on the resource.) The Issues With 1, you have total flexibility on the view side with no worries about the template, but you have to manipulate the DOM directly via javascript. You lose access to helpers, partials, etc. With 2, you have partials and helpers at your disposal, but you're tied to the one template (by default at least). All your views that make JS calls to FoosController use the same template, which isn't exactly flexible. Three Other Approaches (none really satisfactory) 1.) Escape partials/helpers I need into javascript beforehand, then inserting them into the page after, using string replacement to tailor them to the results returned (subbing in name, id, etc). 2.) Put view logic in the templates. For example, looking for a particular DOM element and doing one thing if it exists, another if it does not. 3.) Put logic in the controller to render different templates. For example, in a polymorphic belongs to where update might be called for either comments/foo or posts/foo, rendering commnts/foos/update.js.erb versus posts/foos/update.js.erb. I've used all of these (and probably others I'm not thinking of). Often in the same app, which leads to confusing code. Are there best practices for this sort of thing? It seems like a common enough use-case that you'd want to call controllers via Ajax actions from different views and expect different things to happen (without having to do tedious things like escaping and string-replacing partials and helpers client side). Any thoughts?

    Read the article

  • Book Review: Oracle ADF Real World Developer’s Guide

    - by Frank Nimphius
    Recently PACKT Publishing published "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman, a product manager in our team. Though already the sixth book dedicated to Oracle ADF, it has a lot of great information in it that none of the previous books covered, making it a safe buy even for those who own the other books published by Oracle Press (McGrwHill) and PACKT Publishing. More than the half of the "Oracle ADF Real World Developer’s Guide" book is dedicated to Oracle ADF Business Components in a depth and clarity that allows you to feel the expertise that Jobinesh gained in this area. If you enjoy Jobinesh blog (http://jobinesh.blogspot.co.uk/) about Oracle ADF, then, no matter what expert you are in Oracle ADF, this book makes you happy as it provides you with detail information you always wished to have. If you are new to Oracle ADF, then this book alone doesn't get you flying, but, if you have some Java background, accelerates your learning big, big, big times. Chapter 1 is an introduction to Oracle ADF and not only explains the layers but also how it compares to plain Java EE solutions (page 13). If you are new to Oracle JDeveloper and ADF, then at the end of this chapter you know how to start JDeveloper and begin your ADF development Chapter 2 starts with what Jobinesh really is good at: ADF Business Components. In this chapter you learn about the architecture ingredients of ADF Business Components: View Objects, View Links, Associations, Entities, Row Sets, Query Collections and Application Modules. This chapter also provides a introduction to ADFBC SDO services, as well as sequence diagrams for what happens when you execute queries or commit updates. Chapter 3 is dedicated to entity objects and  is one of many chapters in this book you will enjoy and never want to miss. Jobinesh explains the artifacts that make up an entity object, how to work with entities and resource bundles, and many advanced topics, including inheritance, change history tracking, custom properties, validation and cursor handling.  Chapter 4 - you guessed it - is all about View objects. Comparable to entities, you learn about the XM files and classes that make a view object, as well as how to define and work with queries. List-of-values, inheritance, polymorphism, bind variables and data filtering are interesting - and important topics that follow. Again the chapter provides helpful sequence diagrams for you to understand what happens internally within a view object. Chapter 5 focuses on advanced view object and entity object topics, like lifecycle callback methods and when you want to override them. This chapter is a good digest of Jobinesh's blog entries (which most ADF developers have in their bookmark list). Really worth reading ! Chapter 6 then is bout Application Modules. Beside of what application modules are, this chapter covers important topics like properties, passivation, activation, application module pooling, how and where to write custom logic. In addition you learn about the AM lifecycle and request sequence. Chapter 7 is about the ADF binding layer. If you are new to Oracle ADF and got lost in the more advanced ADF Business Components chapters, then this chapter is where you get back into the game. In very easy terms, Jobinesh explains what the ADF binding is, how it fits into the JSF request lifecycle and what are the metadata file involved. Chapter 8 then goes into building data bound web user interfaces. In this chapter you get the basics of JavaServer Faces (e.g. 
managed beans) and learn about the interaction between the JSF UI and the ADF binding layer. Later this chapter provides advanced solutions for working with tree components and list of values. Chapter 9 introduces bounded task flows and ADF controller. This is a chapter you want to read if you are new to ADF of have started. Experts don't find anything new here, which doesn't mean that it is not worth reading it (I for example, enjoyed the controller talk very much) Chapter 10 is an advanced coverage of bounded task flow and talks about contextual events  Chapter 11 is another highlight and explains error handling, trains, transactions and more. I can only recommend you read this chapter. I am aware of many documents that cover exception handling in Oracle ADF (and my Oracle Magazine article for January/February 2013 does the same), but none that covers it in such a great depth. Chapter 12 covers ADF best practices, which is a great round-up of all the tips provided in this book (without Jobinesh to repeat himself). Its all cool stuff that helps you with your ADF projects. In summary, "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman is a great book and addition for all Oracle ADF developers and those who want to become one. Frank

    Read the article

  • BPM 11g - Dynamic Task Assignment with Multi-level Organization Units

    - by Mark Foster
    I've seen several requirements to have a more granular level of task assignment in BPM 11g based on some value in the data passed to the process. Parametric Roles is normally the first port of call to try to satisfy this requirement, but in this blog we will show how a lot of use-cases can be satisfied by the easier to implement and flexible Organization Unit. The Use-Case Task assignment is to an approval group containing several users. At runtime, a location value in the input data determines which of the particular users the task is ultimately assigned to. In this case we use the Demo Community referenced in the SOA Admin Guide, and specifically the "LoanAnalyticGroup" which contains three users; "szweig", "mmitch" & "fkafka". In our scenario we would like to assign a task to "szweig" if the input data specifies that the location is "JapanCentral", to "fkafka" if the location is "JapanNorth" and to "mmitch" if "JapanSouth", and to all of them if the location is "Japan" i.e....   The Process Simple one human task process.... In the output data association of the "Start" activity we need to set the value of the "Organization Unit" predefined variable based on the input data (note that the  predefined variables can only be set on output data associations)....  ...and in the output data association of the human activity we will reset the "Organization Unit" to empty, always good practice to ensure that the Organization Unit will not be used for any subsequent human activities for which we do not require it.... Set Up the Organization Unit  Log in to the BPM Workspace with an administrator user (weblogic/welcome1 in our case) and choose the "Administration" option. Within "Roles" assign the "ProcessOwner" swim-lane for our process to "LoanAnalyticGroup".... Within "Organization Units" we can model our organization.... "Root Organization Unit" as "Japan" and "Child Organization Unit" as "Central", "South" & "North" as shown. As described previously, add user "szweig" to "Central", "mmitch" to "South" and "fkafka" to "North"....   Test the Process Invalid Data  First let us test with invalid data in the input to see what the consequences are, here we use "X" as input.... ...and looking at the instance we can see it has errored.... Organization Unit Root Level Assignment  Now let us see what happens if we have "Japan" in the input data.... ...looking in the "flow trace" we can see that the task has been assigned....  ... but who has the task been assigned to ? Let us look in the BPM Workspace for user "szweig"....  ...and for "mmitch"....  ... and for "fkafka"....  ...so we can see that with an Organization Unit at "Root" level we have successfully assigned the task to all users. Organization Unit Child Level Assignment  Now let us test with "Japan/North" in the input data.... ...and looking in "fkafka" workspace we see the task has been assigned, remember, he was associated with "JapanNorth"....   ... but what about the workspace of "szweig"....  ...no tasks assigned, neither has "mmitch", just as we expected. Summary  We have seen in this blog how to easily implement multi-level dynamic task routing using Organization Units, a common use-case and a simpler solution than Parametric Roles. 

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part II)

    I would now like to expand a little on what I stumbled through in part I of my Visual Studio 2010 post and touch on a few other features of VS 2010.  Specifically, I want to generate some code based off of an Entity Framework model and tie it up to an actual data source.  I'm not going to take the easy way and tie to a SQL Server data source, though; I will tie it to an XML data file instead.  Why?  Well, why not?  This is purely for learning; there are probably much better ways to get strongly-typed classes around XML, but it will force us to go down a path less travelled and maybe learn a few things along the way.  Once we get this XML data and the means to interact with it, I will revisit data binding to this data in a WPF form and see if I can't get reading, adding, deleting, and updating working smoothly with minimal code.  To begin, I will use what was learned in the first part of this blog topic and draw out a data model for the MFL (My Football League) - I don't want the NFL to come down and sue me for using their name in this totally football-related article.  The data model looks as follows, with Teams having Players, and Players having a position and statistics for each season they played: Note that when making the associations between these entities, I was given the option to create the foreign key but I only chose to select this option for the association between Player and Position.  The reason for this is that I am picturing the XML that will contain this data to look somewhat like this: <MFL> <Position/> <Position/> <Position/> <Team>     <Player>         <Statistic/>     </Player> </Team> </MFL> Statistic will be under its associated Player node, and Player will be under its associated Team node; no need to have an Id to reference it if we know it will always fall under its parent.  Position, however, is more of a lookup value that will not have any hierarchical relationship to the player.  In fact, the Position data itself may be in a completely different xml file (something I'd like to play around with), so in any case, a player will need to reference the position by its Id. So now that we have a simple data model laid out, I would like to generate two things based on it: a class for each entity with properties corresponding to each entity property, and an IO class with methods to get data for each entity, either all instances, by Id or by parent. Now my experience with code generation in the past has consisted of writing up little apps that use the code dom directly to regenerate code on demand (or using tools like CodeSmith).  Surely, there has got to be a more fun way to do this given that we are using the Entity Framework, which already has built-in code generation for SQL Server support.  Let's start with that built-in stuff to give us a base to work off of.  Right click anywhere in the canvas of our model and select Add Code Generation Item: So just adding that code item seemed to do quite a bit towards what I was intending: It apparently generated a class for each entity, but also a whole ton more.  I mean a TON more.  Way too much complicated code was generated. Now, that code is likely to be a black box anyway, so it shouldn't matter, but we need to understand how to make this work the way we want it to work, so let's get ready to do some stumbling through that text template (tt) file. When I open the .tt file that was generated, right off the bat I realize there is going to be trouble: there is no color coding, no intellisense, no nothing! That is going to make stumbling through more like groping blindly in the dark while handcuffed and hopping on one foot, which was one of the alternate titles I was considering for this blog.  Thankfully, the community comes to my rescue and I won't have to cast my mind back to the glory days of coding in VI (look it up, kids).  Using the Extension Manager (available under the Tools menu), I did a quick search for tt editor in the Online Gallery and quickly found the Tangible T4 Editor: Downloading and installing this was a breeze, and after doing so I got some color coding and intellisense while editing the tt files.  If you will be doing any customizing of tt files, I highly recommend installing this extension.  Next, we'll see if that is enough help for us to tweak that tt file to do the kind of code generation that we want.

    Read the article

  • Add objects to association in OnPreInsert, OnPreUpdate

    - by Dmitriy Nagirnyak
    Hi, I have an event listener (for Audit Logs) which needs to append audit log entries to the association of the object: public Company : IAuditable { // Other stuff removed for bravety IAuditLog IAuditable.CreateEntry() { var entry = new CompanyAudit(); this.auditLogs.Add(entry); return entry; } public virtual IEnumerable<CompanyAudit> AuditLogs { get { return this.auditLogs } } } The AuditLogs collection is mapped with cascading: public class CompanyMap : ClassMap<Company> { public CompanyMap() { // Id and others removed fro bravety HasMany(x => x.AuditLogs).AsSet() .LazyLoad() .Access.ReadOnlyPropertyThroughCamelCaseField() .Cascade.All(); } } And the listener just asks the auditable object to create log entries so it can update them: internal class AuditEventListener : IPreInsertEventListener, IPreUpdateEventListener { public bool OnPreUpdate(PreUpdateEvent ev) { var audit = ev.Entity as IAuditable; if (audit == null) return false; Log(audit); return false; } public bool OnPreInsert(PreInsertEvent ev) { var audit = ev.Entity as IAuditable; if (audit == null) return false; Log(audit); return false; } private static void LogProperty(IAuditable auditable) { var entry = auditable.CreateAuditEntry(); entry.CreatedAt = DateTime.Now; entry.Who = GetCurrentUser(); // Might potentially execute a query. // Also other information is set for entry here } } The problem with it though is that it throws TransientObjectException when commiting the transaction: NHibernate.TransientObjectException : object references an unsaved transient instance - save the transient instance before flushing. Type: CompanyAudit, Entity: CompanyAudit at NHibernate.Engine.ForeignKeys.GetEntityIdentifierIfNotUnsaved(String entityName, Object entity, ISessionImplementor session) at NHibernate.Type.EntityType.GetIdentifier(Object value, ISessionImplementor session) at NHibernate.Type.ManyToOneType.NullSafeSet(IDbCommand st, Object value, Int32 index, Boolean[] settable, ISessionImplementor session) at NHibernate.Persister.Collection.AbstractCollectionPersister.WriteElement(IDbCommand st, Object elt, Int32 i, ISessionImplementor session) at NHibernate.Persister.Collection.AbstractCollectionPersister.PerformInsert(Object ownerId, IPersistentCollection collection, IExpectation expectation, Object entry, Int32 index, Boolean useBatch, Boolean callable, ISessionImplementor session) at NHibernate.Persister.Collection.AbstractCollectionPersister.Recreate(IPersistentCollection collection, Object id, ISessionImplementor session) at NHibernate.Action.CollectionRecreateAction.Execute() at NHibernate.Engine.ActionQueue.Execute(IExecutable executable) at NHibernate.Engine.ActionQueue.ExecuteActions(IList list) at NHibernate.Engine.ActionQueue.ExecuteActions() at NHibernate.Event.Default.AbstractFlushingEventListener.PerformExecutions(IEventSource session) at NHibernate.Event.Default.DefaultFlushEventListener.OnFlush(FlushEvent event) at NHibernate.Impl.SessionImpl.Flush() at NHibernate.Transaction.AdoTransaction.Commit() As the cascading is set to All I expected NH to handle this. I also tried to modify the collection using state but pretty much the same happens. So the question is what is the last chance to modify object's associations before it gets saved? Thanks, Dmitriy.

    Read the article

  • CakePHP Multiple Nested Joins

    - by Paul
    I have an App in which several of the models are linked by hasMany/belongsTo associations. So for instance, A hasMany B, B hasMany C, C hasMany D, and D hasMany E. Also, E belongs to D, D belongs to C, C belongs to B, and B belongs to A. Using the Containable behavior has been great for controlling the amount of information comes back with each query, but I seem to be having a problem when trying to get data from table A while using a condition that involves table D. For instance, here is an example of my 'A' model: class A extends AppModel { var $name = 'A'; var $hasMany = array( 'B' => array('dependent' => true) ); function findDependentOnE($condition) { return $this->find('all', array( 'contain' => array( 'B' => array( 'C' => array( 'D' => array( 'E' => array( 'conditions' => array( 'E.myfield' => $some_value ) ) ) ) ) ) )); } } This still gives me back all the records in 'A', and if it's related 'E' records don't satisfy the condition, then I just get this: Array( [0] => array( [A] => array( [field1] => // stuff [field2] => // more stuff // ...etc ), [B] => array( [field1] => // stuff [field2] => // more stuff // ...etc ), [C] => array( [field1] => // stuff [field2] => // more stuff // ...etc ), [D] => array( [field1] => // stuff [field2] => // more stuff // ...etc ), [E] => array( // empty if 'E.myfield' != $some_value' ) ), [1] => array( // ...etc ) ) When If 'E.myfield' != $some_value, I don't want the record returned at all. I hope this expresses my problem clearly enough... Basically, I want the following query, but in a database-agnostic/CakePHP-y kind of way: SELECT * FROM A INNER JOIN (B INNER JOIN (C INNER JOIN (D INNER JOIN E ON D.id=E.d_id) ON C.id=D.c_id) ON B.id=C.b_id) ON A.id=B.a_id WHERE E.myfield = $some_value

    Read the article

  • How to add objects to association in OnPreInsert, OnPreUpdate

    - by Dmitriy Nagirnyak
    Hi, I have an event listener (for Audit Logs) which needs to append audit log entries to the association of the object: public Company : IAuditable { // Other stuff removed for bravety IAuditLog IAuditable.CreateEntry() { var entry = new CompanyAudit(); this.auditLogs.Add(entry); return entry; } public virtual IEnumerable<CompanyAudit> AuditLogs { get { return this.auditLogs } } } The AuditLogs collection is mapped with cascading: public class CompanyMap : ClassMap<Company> { public CompanyMap() { // Id and others removed fro bravety HasMany(x => x.AuditLogs).AsSet() .LazyLoad() .Access.ReadOnlyPropertyThroughCamelCaseField() .Cascade.All(); } } And the listener just asks the auditable object to create log entries so it can update them: internal class AuditEventListener : IPreInsertEventListener, IPreUpdateEventListener { public bool OnPreUpdate(PreUpdateEvent ev) { var audit = ev.Entity as IAuditable; if (audit == null) return false; Log(audit); return false; } public bool OnPreInsert(PreInsertEvent ev) { var audit = ev.Entity as IAuditable; if (audit == null) return false; Log(audit); return false; } private static void LogProperty(IAuditable auditable) { var entry = auditable.CreateAuditEntry(); entry.CreatedAt = DateTime.Now; entry.Who = GetCurrentUser(); // Might potentially execute a query. // Also other information is set for entry here } } The problem with it though is that it throws TransientObjectException when commiting the transaction: NHibernate.TransientObjectException : object references an unsaved transient instance - save the transient instance before flushing. Type: PropConnect.Model.UserAuditLog, Entity: PropConnect.Model.UserAuditLog at NHibernate.Engine.ForeignKeys.GetEntityIdentifierIfNotUnsaved(String entityName, Object entity, ISessionImplementor session) at NHibernate.Type.EntityType.GetIdentifier(Object value, ISessionImplementor session) at NHibernate.Type.ManyToOneType.NullSafeSet(IDbCommand st, Object value, Int32 index, Boolean[] settable, ISessionImplementor session) at NHibernate.Persister.Collection.AbstractCollectionPersister.WriteElement(IDbCommand st, Object elt, Int32 i, ISessionImplementor session) at NHibernate.Persister.Collection.AbstractCollectionPersister.PerformInsert(Object ownerId, IPersistentCollection collection, IExpectation expectation, Object entry, Int32 index, Boolean useBatch, Boolean callable, ISessionImplementor session) at NHibernate.Persister.Collection.AbstractCollectionPersister.Recreate(IPersistentCollection collection, Object id, ISessionImplementor session) at NHibernate.Action.CollectionRecreateAction.Execute() at NHibernate.Engine.ActionQueue.Execute(IExecutable executable) at NHibernate.Engine.ActionQueue.ExecuteActions(IList list) at NHibernate.Engine.ActionQueue.ExecuteActions() at NHibernate.Event.Default.AbstractFlushingEventListener.PerformExecutions(IEventSource session) at NHibernate.Event.Default.DefaultFlushEventListener.OnFlush(FlushEvent event) at NHibernate.Impl.SessionImpl.Flush() at NHibernate.Transaction.AdoTransaction.Commit() As the cascading is set to All I expected NH to handle this. I also tried to modify the collection using state but pretty much the same happens. So the question is what is the last chance to modify object's associations before it gets saved? Thanks, Dmitriy.

    Read the article

  • IDs necessary in update script not being stored (or even seen!?) (PHP MySQL)

    - by Derek
    Hi guys, I really need help with this one... have spent 3 hours trying to figure it out... Basically, I have 3 tables necessary for this function to work (the query and PHP)... Authors, Books and Users. An author can have many books, and a user can have many books - that's it. When the admin user selects to update a book, they are presented with a form, displaying the current data within the fields, very straightforward... However there is one tricky part: the admin user can change the author for a book (in case they make a mistake) and also change the user with which the book is associated. When I select to update the single book information, I am not getting any values whatsoever for author_id or user_id. Meaning that when the user updates the book info, the associations with the user and author are being scrapped altogether (when before there was an association)... I cannot see why this is happening because I can clearly see the IDs for the users and authors for my option values (this is because they are in select dropdowns). Here is the SQL I use to retrieve the user ID:

        SELECT user_id, name FROM users

    and then I have my select options which bring up all the users in the system:

        <label>This book belongs to:</label>
        <select name="name" id="name">
            <option value="<?php echo $row['user_id']?>" SELECTED><?php echo $row['name']?> - Current</option>
            <?php while($row = mysql_fetch_array($result)) { ?>
                <option value="<?php echo $row['user_id']; if (isset($_POST['user_id']));?>"><?php echo $row['name']?></option>
            <?php } ?>
        </select>

    In the presented HTML form, I can select the users (by name) and within the source code I can see the IDs (for the value) matching against the names of the users. Finally, in my script that performs the update, I have this:

        $book_id = $_POST['book_id'];
        $bookname = $_POST['bookname'];
        $booklevel = $_POST['booklevel'];
        $author_id = $_POST['author_id'];
        $user_id = $_POST['user_id'];

        $sql = "UPDATE books SET bookname= '".$bookname."', booklevel= '".$booklevel."', author_id='".$author_id."', user_id= '".$user_id."' WHERE book_id = ".$book_id;

    The result of this query returns no value for either author_id or user_id... Obviously in this question I have given the information for the user stuff (with the HTML being displayed) but I'm guessing that I have the same problem with authors as well... How can I get these IDs passed to the script so that the change can be acknowledged? :(
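
    From the markup shown, the dropdown is named "name", so nothing ever arrives in $_POST['user_id'] and the update writes empty values into user_id (the author dropdown presumably has the same issue). A sketch of the fix is below; $current_user_id and $current_user_name are placeholder variables for the currently assigned user, not names from the original code.

        <label>This book belongs to:</label>
        <select name="user_id" id="user_id">
            <option value="<?php echo $current_user_id; ?>" selected="selected"><?php echo $current_user_name; ?> - Current</option>
            <?php while ($row = mysql_fetch_array($result)) { ?>
                <option value="<?php echo $row['user_id']; ?>"><?php echo $row['name']; ?></option>
            <?php } ?>
        </select>

        <?php
        // In the update script the posted keys now match what the query expects.
        // Casting to int also closes the obvious SQL-injection hole in the UPDATE.
        $user_id   = (int) $_POST['user_id'];
        $author_id = (int) $_POST['author_id'];
        ?>

    The author dropdown needs the same treatment (name="author_id" with author_id values in the options).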

    Read the article

  • CakePHP HABTM: Editing one item causes HABTM rows to get recreated, destroying extra data

    - by leo-the-manic
    I'm having trouble with my HABTM relationship in CakePHP. I have two models like so: Department HABTM Location. One large company has many buildings, and each building provides a limited number of services. Each building also has its own webpage, so in addition to the HABTM relationship itself, each HABTM row also has a url field where the user can find additional information about the service they're interested in and how it operates at the building in question. I've set up the models like so:

        <?php
        class Location extends AppModel {
            var $name = 'Location';
            var $hasAndBelongsToMany = array(
                'Department' => array(
                    'with' => 'DepartmentsLocation',
                    'unique' => true
                )
            );
        }
        ?>

        <?php
        class Department extends AppModel {
            var $name = 'Department';
            var $hasAndBelongsToMany = array(
                'Location' => array(
                    'with' => 'DepartmentsLocation',
                    'unique' => true
                )
            );
        }
        ?>

        <?php
        class DepartmentsLocation extends AppModel {
            var $name = 'DepartmentsLocation';
            var $belongsTo = array(
                'Department',
                'Location'
            );

            // I'm pretty sure this method is unrelated. It's not being called when this error
            // occurs. Its purpose is to prevent having two HABTM rows with the same location
            // and department.
            function beforeSave() {
                // kill any existing rows with same associations
                $this->log(__FILE__ . ": killing existing HABTM rows", LOG_DEBUG);
                $result = $this->find('all', array("conditions" => array(
                    "location_id" => $this->data['DepartmentsLocation']['location_id'],
                    "department_id" => $this->data['DepartmentsLocation']['department_id'])));
                foreach($result as $row) {
                    $this->delete($row['DepartmentsLocation']['id']);
                }
                return true;
            }
        }
        ?>

    The controllers are completely uninteresting. The problem: if I edit the name of a Location, all of the DepartmentsLocations that were linked to that Location are re-created with empty URLs. Since the models specify that unique is true, this also causes all of the newer rows to overwrite the older rows, which essentially destroys all of the URLs. I would like to know two things: Can I stop this? If so, how? And, on a less technical and more whiny note: Why does this even happen? It seems bizarre to me that editing a field through Cake should cause so much trouble, when I can easily go through phpMyAdmin, edit the Location name there, and get exactly the result I would expect. Why does CakePHP touch the HABTM data when I'm just editing a field on a row? It's not even a foreign key!
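
    CakePHP's HABTM save strategy deletes and re-inserts the join rows whenever HABTM data is present in a save, which is why extra columns such as url keep getting wiped. Once the join table carries real data of its own, the usual recommendation is to stop treating it as pure HABTM and model the join table explicitly (hasMany/belongsTo "through" the join model). A sketch under that assumption, reusing the names from the question:

        <?php
        class Location extends AppModel {
            var $name = 'Location';
            var $hasMany = array('DepartmentsLocation');
        }

        class Department extends AppModel {
            var $name = 'Department';
            var $hasMany = array('DepartmentsLocation');
        }

        // The join row (department_id, location_id, url) is now a first-class record:
        // editing a Location never rebuilds it, and the url field is saved like any
        // other field through DepartmentsLocation->save() or saveAll().
        class DepartmentsLocation extends AppModel {
            var $name = 'DepartmentsLocation';
            var $belongsTo = array('Department', 'Location');
        }
        ?>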

    Read the article

  • How do I change a child's parent in NHibernate when cascade is delete-all-orphan?

    - by Daniel T.
    I have two entities in a bi-directional one-to-many relationship:

        public class Storage {
            public IList<Box> Boxes { get; set; }
        }

        public class Box {
            public Storage CurrentStorage { get; set; }
        }

    And the mapping:

        <class name="Storage">
            <bag name="Boxes" cascade="all-delete-orphan" inverse="true">
                <key column="Storage_Id" />
                <one-to-many class="Box" />
            </bag>
        </class>

        <class name="Box">
            <many-to-one name="CurrentStorage" column="Storage_Id" />
        </class>

    A Storage can have many Boxes, but a Box can only belong to one Storage. I have them mapped so that the one-to-many has a cascade of all-delete-orphan. My problem arises when I try to change a Box's Storage. Assuming I already ran this code:

        var storage1 = new Storage();
        var storage2 = new Storage();
        storage1.Boxes.Add(new Box());
        Session.Save(storage1);
        Session.Save(storage2);

    The following code will give me an exception:

        // get the first and only box in the DB
        var existingBox = Database.GetBox().First();
        // remove the box from storage1
        existingBox.CurrentStorage.Boxes.Remove(existingBox);
        // add the box to storage2 after it's been removed from storage1
        var storage2 = Database.GetStorage().Second();
        storage2.Boxes.Add(existingBox);
        Session.Flush(); // commit changes to DB

    I get the following exception:

        NHibernate.ObjectDeletedException : deleted object would be re-saved by cascade (remove deleted object from associations)

    This exception occurs because I have the cascade set to all-delete-orphan. The first Storage detected that I removed the Box from its collection and marked it for deletion. However, when I added it to the second Storage (in the same session), it attempted to save the box again and the ObjectDeletedException was thrown. My question is, how do I get the Box to change its parent Storage without encountering this exception? I know one possible solution is to change the cascade to just all, but then I lose the ability to have NHibernate automatically delete a Box by simply removing it from a Storage and not re-associating it with another one. Or is this the only way to do it, and I have to manually call Session.Delete on the box in order to remove it?
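
    A sketch of the trade-off the question already hints at: with cascade="all" instead of all-delete-orphan, moving a box is just re-pointing both sides of the association, and removals that should really delete a Box are made explicit. Names follow the question; this is one way to resolve it, not the only one.

        // Moving a box between storages, assuming cascade="all" and inverse="true":
        var existingBox = Database.GetBox().First();
        existingBox.CurrentStorage.Boxes.Remove(existingBox);
        storage2.Boxes.Add(existingBox);
        existingBox.CurrentStorage = storage2;   // the Box side owns the FK because the bag is inverse
        Session.Flush();

        // Deleting a box now has to be explicit, since orphans are no longer cascaded:
        // storage.Boxes.Remove(box);
        // Session.Delete(box);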

    Read the article

  • NHibernate : delete error

    - by MadSeb
    Hi, Model: I have a model in which one Installation can contain multiple "Computer Systems". Database: The table Installations has two columns, Name and Description. The table ComputerSystems has three columns: Name, Description and InstallationId. Mappings: I have the following mapping for Installation:

        <?xml version="1.0" encoding="utf-8"?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="myProgram.Core" namespace="myProgram">
            <class name="Installation" table="Installations" lazy="true">
                <id name="Id" column="Id" type="int">
                    <generator class="native" />
                </id>
                <property name="Name" column="Name" type="string" not-null="true" />
                <property name="Description" column="Description" type="string" />
                <bag name="ComputerSystems" inverse="true" lazy="true" cascade="all-delete-orphan">
                    <key column="InstallationId" />
                    <one-to-many class="ComputerSystem" />
                </bag>
            </class>
        </hibernate-mapping>

    I have the following mapping for ComputerSystem:

        <?xml version="1.0" encoding="utf-8"?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="myProgram.Core" namespace="myProgram">
            <class name="ComputerSystem" table="ComputerSystems" lazy="true">
                <id name="Id" column="ID" type="int">
                    <generator class="native" />
                </id>
                <property name="Name" column="Name" type="string" not-null="true" />
                <property name="Description" column="Description" type="string" />
                <many-to-one name="Installation" column="InstallationID" cascade="save-update" not-null="true" />
            </class>
        </hibernate-mapping>

    Classes: The Installation class is:

        public class Installation {
            public virtual String Description { get; set; }
            public virtual String Name { get; set; }

            public virtual IList<ComputerSystem> ComputerSystems {
                get {
                    if (_computerSystemItems == null) {
                        lock (this) {
                            if (_computerSystemItems == null)
                                _computerSystemItems = new List<ComputerSystem>();
                        }
                    }
                    return _computerSystemItems;
                }
                set { _computerSystemItems = value; }
            }
            protected IList<ComputerSystem> _computerSystemItems;

            public Installation() {
                Description = "";
                Name = "";
            }
        }

    The ComputerSystem class is:

        public class ComputerSystem {
            public virtual String Name { get; set; }
            public virtual String Description { get; set; }
            public virtual Installation Installation { get; set; }
        }

    The issue is that I get an error when trying to delete an installation that contains a ComputerSystem. The error is: "deleted object would be re-saved by cascade (remove deleted object from associations)". Can anyone help? Regards, Seb
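
    This error typically means something still referencing the deleted children (often the save-update cascade on the child's many-to-one) schedules them for saving in the same flush. One commonly suggested change, sketched below as an assumption rather than a verified fix for this exact schema, is to drop the cascade on the child-to-parent side and delete the parent inside a single session, letting the bag's all-delete-orphan remove the children.

        // Assumed mapping change: <many-to-one name="Installation" column="InstallationID" not-null="true" />
        // i.e. no cascade from ComputerSystem back up to Installation.
        using (var session = sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            var installation = session.Get<Installation>(installationId);
            session.Delete(installation);   // ComputerSystems are removed by all-delete-orphan on the bag
            tx.Commit();
        }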

    Read the article

  • NHibernate unintentional lazy property loading

    - by chiccodoro
    I introduced a mapping for a business object which has (among others) a property called "Name":

        public class Foo : BusinessObjectBase {
            ...
            public virtual string Name { get; set; }
        }

    For some reason, when I fetch "Foo" objects, NHibernate seems to apply lazy property loading (for simple properties, not associations): the following code generates n+1 SQL statements, of which the first only fetches the ids, and the remaining n fetch the Name for each record:

        ISession session = ...
        IQuery query = session.CreateQuery(queryString);
        ITransaction tx = session.BeginTransaction();
        List<Foo> result = new List<Foo>();
        foreach (Foo foo in query.Enumerable()) {
            result.Add(foo);
        }
        tx.Commit();
        session.Close();

    produces:

        NHibernate: select foo0_.FOO_ID as col_0_0_ from V1_FOO foo0_
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 81
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36470
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36473

    Similarly, the following code leads to a LazyLoadingException after the session is closed:

        ISession session = ...
        ITransaction tx = session.BeginTransaction();
        Foo result = session.Load<Foo>(id);
        tx.Commit();
        session.Close();
        Console.WriteLine(result.Name);

    Following this post, "lazy properties ... is rarely an important feature to enable ... (and) in Hibernate 3, is disabled by default." So what am I doing wrong? I managed to work around the LazyLoadingException by doing a NHibernateUtil.Initialize(foo), but the even worse part is the n+1 SQL statements, which bring my application to its knees. This is how the mapping looks:

        <class name="Foo" table="V1_FOO">
            ...
            <property name="Name" column="NAME"/>
        </class>

    BTW: The abstract "BusinessObjectBase" base class encapsulates the ID property which serves as the internal identifier.
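
    Worth noting about the two snippets: IQuery.Enumerable() behaves like Hibernate's iterate() — the initial query selects only identifiers and each entity is then initialized on demand, which looks exactly like lazy property loading; Load<T>() likewise returns an uninitialized proxy that cannot be filled in after the session closes. A sketch of the usual alternatives, offered as general NHibernate usage rather than a diagnosis of this particular mapping:

        // Fetch whole entities in one SELECT instead of iterating over identifiers:
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            IList<Foo> result = session.CreateQuery("from Foo").List<Foo>();
            tx.Commit();
            // result[0].Name is already populated here.
        }

        // Get<T>() hits the database immediately, unlike Load<T>(), which only returns a proxy:
        // Foo foo = session.Get<Foo>(id);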

    Read the article

  • JPA entity design / cannot delete entity

    - by timaschew
    I thought what I want is simple, but I cannot find any solution for my problem. I'm using playframework 1.2.3 and it's using Hibernate as JPA, so I think playframework has nothing to do with the problem. I have some classes (I omit the irrelevant fields):

        public class User { ... }

        public class Task {
            public DataContainer dataContainer;
        }

        public class DataContainer {
            public Session session;
            public User user;
        }

        public class Session { ... }

    So I have an association from Task to DataContainer and from DataContainer to Session, and the DataContainer belongs to a User. The DataContainers can always have the same User, but the Session has to be different for each instance. The DataContainer of each Task also has to be different in each instance. A DataContainer can have a Session or not (it's optional). I use only unidirectional associations; that should be sufficient. In other words: every Task must have one DataContainer. Every DataContainer must have one/the same User and can have one Session. To create a DB schema I use JPA annotations:

        @Entity
        public class User extends Model { ... }

        @Entity
        public class Task extends Model {
            @OneToOne(optional = false, cascade = CascadeType.ALL)
            public DataContainer dataContainer;
        }

        @Entity
        public class DataContainer extends Model {
            @OneToOne(optional = true, cascade = CascadeType.ALL)
            public Session session;

            @ManyToOne(optional = false, cascade = CascadeType.ALL)
            public User user;
        }

        @Entity
        public class Session extends Model { ... }

    BTW: Model is a play class and provides the primary id as long type. When I create an object for each entity and connect them through the associations, it works fine. But when I try to delete a Session, I get a constraint violation exception, because a DataContainer still refers to the Session I want to delete. I want the Session field of the DataContainer to be set to null, i.e. the foreign key (session_id) should be cleared in the database. That would be okay, because it's optional. I don't know, I think I have multiple problems. Am I using the right annotation, @OneToOne? I found some additional annotations and attributes on the internet: @JoinColumn, and a mappedBy attribute for the inverse relationship. But I don't have them, because it's not bidirectional. Or is a bidirectional association essential? Another try was to use @OnDelete(action = OnDeleteAction.CASCADE); then the constraint changed from NO ACTION on update/delete to:

        ADD CONSTRAINT fk4745c17e6a46a56 FOREIGN KEY (session_id) REFERENCES annotation_session (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;

    But in this case, when I delete a session, the DataContainer and User are deleted. That's not what I want. EDIT: I'm using postgresql 9, the jdbc stuff is included in play, my only db config is db=postgres://app:app@localhost:5432/app
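
    With only a unidirectional @OneToOne, nothing clears the reference automatically: the owning side (DataContainer) has to drop its pointer before, or instead of, deleting the Session row. A sketch of that order of operations; "em" is a hypothetical EntityManager handle standing in for however Play exposes JPA, and the field names follow the question.

        // Find the container that still points at the session, detach it, then remove the session.
        DataContainer dc = em.find(DataContainer.class, dataContainerId);
        Session obsolete = dc.session;
        dc.session = null;        // owning side: session_id is set to NULL on the next flush
        if (obsolete != null) {
            em.remove(obsolete);  // no foreign key violation now
        }

    The cascade = CascadeType.ALL on the @ManyToOne to User is also worth reconsidering: cascading REMOVE from many containers to one shared User is usually not wanted, and it is why the User disappears along with the container.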

    Read the article

< Previous Page | 21 22 23 24 25 26 27 28  | Next Page >