Search Results

  • MVC pattern implementation: what is the n-n relation between its components?

    - by Srodriguez
    Dear all, I'm working on a C# project and, in order to get some consistency across the different parts of our UI, we are trying to use the MVC pattern. The client is Windows Forms based and I'm trying to create a simple MVC implementation. It's been more challenging than expected, and I still have some questions about the pattern. The problem comes mostly from the n-n relationships between its components. Here is what I've understood, but I'm not at all sure of it; maybe someone can correct me?

    Model: can be shared among different Views; a 1-n relationship between Model and View.
    View: shows the state of the model; has only one controller (which can be shared among different views?); a 1-1 relationship with the Model and a 1-1 relationship with the controller.
    Controller: handles the user actions on the view and updates the model; one controller can be shared among different views, and a controller interacts with only one model?

    I'm not sure about the last two. Can a view have several controllers? Or can a view share a controller with another view? Or is it strictly a 1:1 relationship? Can a controller handle several views? Can it interact with several models?

    I'll also take advantage of this question to ask another MVC-related one. I've removed all the synchronous calls between the different members of the MVC triad by making use of events and delegates. One last call is still synchronous, and it is actually the most important one: the call between the view and the controller, because I need to know whether the controller has been able to handle the user's action or not. This is very bad, as it means I could block the UI thread (and hence the client itself) while the controller is processing or doing some work. How can I avoid this? I can use a callback, but then how do I know which event the callback comes from? PS: I can't change the pattern at this stage, so please avoid answers of the type "use MVP or MVVM", etc. ;) Thanks!
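    A note on the blocking view-to-controller call: one common way out, assuming the result may arrive asynchronously, is to give each request a correlation token and have the controller report completion through a callback that carries that token back, so the view knows which action finished. A minimal sketch (in Java for illustration; ActionRequest, ActionResult and the executor wiring are hypothetical names, and the same shape maps onto C# with delegates and Control.BeginInvoke):

        import java.util.UUID;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.function.Consumer;

        // Hypothetical request/result pair; the UUID lets the view match a
        // completion callback to the user action that triggered it.
        record ActionRequest(UUID id, String action) {}
        record ActionResult(UUID id, boolean handled) {}

        class Controller {
            private final ExecutorService worker = Executors.newSingleThreadExecutor();

            // The view passes a callback; the controller does its work off the
            // UI thread and reports back with the original request id.
            void handle(ActionRequest request, Consumer<ActionResult> onDone) {
                worker.submit(() -> {
                    boolean ok = process(request); // long-running work
                    onDone.accept(new ActionResult(request.id(), ok));
                });
            }

            private boolean process(ActionRequest request) {
                return true; // placeholder for real model updates
            }
        }

        class View {
            private final Controller controller = new Controller();

            void onSaveClicked() {
                ActionRequest req = new ActionRequest(UUID.randomUUID(), "save");
                controller.handle(req, result -> {
                    // Marshal back to the UI thread here (Control.BeginInvoke
                    // in WinForms); result.id() says which action completed.
                    if (!result.handled()) { /* show an error for req */ }
                });
            }
        }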

    Read the article

  • Starting company and getting a good deal

    - by Dan
    Hi, I have the possibility of starting a small company with someone else. I'm a software programmer and have been working in a field where there isn't much competition, at least on the platform I'm developing for, which is the one with the highest market share. The other person comes from marketing, so he would find and bring in clients and be in charge of the business side. So initially it would be me, the tech guy with the knowledge to develop our product and products based on my development, and this person, who would become CEO (basically a 50/50 share).

    The idea is to build a product of our own, but also to take on projects for others in order to bring in some cash. The problem is that his idea is to do this without any initial capital. He tells me he could get some customers from past business (which is mostly true, as he has been in the business for some time), and then, with the money obtained from that work, we could hire some freelance developers and build our own platform. I've already told him that I don't find this the best way, given that we would need to compete against other companies, some of which are VC-funded and have many developers working full time.

    But most of all, I'm wondering whether this is a fair deal, given that this person is not providing anything other than clients, and I'd be the one having to do the work and dedicate the time, thus also taking most of the risk. In all fairness, I'd expect him to put in some initial capital, given that I'm putting in some of my code, which is in a way money, considering the time I've dedicated to it.

    So the question is: does this seem fair to you? Is this the way it is usually done? My only concern is that I have no previous experience with such deals, and I don't know whether this is the usual arrangement. Certainly having a marketing person helps a lot; as you probably know, being a programmer doesn't make me the best-suited person for marketing ;-) but at the same time I'm not sure whether this is a good deal or a pretty inconvenient one. I hope I've made my point clear, but feel free to let me know if more details are needed. Thanks in advance.

    Read the article

  • JPA Inheritance and Relations - Clarification question

    - by Michael
    Here's the scenario: I have a unidirectional 1:N relation from a Person entity to an Address entity, and a bidirectional 1:N relation from a User entity to a Vehicle entity. Here is the Address class:

        @Entity
        public class Address implements Serializable {
            private static final long serialVersionUID = 1L;
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;
            ...

    The Vehicle class:

        @Entity
        public class Vehicle implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;
            @ManyToOne
            private User owner;
            ...
            @PreRemove
            protected void preRemove() {
                //this.owner.removeVehicle(this);
            }
            public Vehicle(User owner) {
                this.owner = owner;
            ...

    The Person class:

        @Entity
        @Inheritance(strategy = InheritanceType.JOINED)
        @DiscriminatorColumn(name = "PERSON_TYP")
        public class Person implements Serializable {
            @Id
            protected String username;
            @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
            @JoinTable(name = "USER_ADDRESS",
                       joinColumns = @JoinColumn(name = "USERNAME"),
                       inverseJoinColumns = @JoinColumn(name = "ADDRESS_ID"))
            protected List<Address> addresses;
            ...
            @PreRemove
            protected void prePersonRemove() {
                this.addresses = null;
            }
            ...

    The User class, which inherits from the Person class:

        @Entity
        @Table(name = "Users")
        @DiscriminatorValue("USER")
        public class User extends Person {
            @OneToMany(mappedBy = "owner", cascade = {CascadeType.PERSIST, CascadeType.REMOVE})
            private List<Vehicle> vehicles;
            ...

    When I try to delete a User who has an address, I have to use orphanRemoval = true on the corresponding relation (see above) and the preRemove function where the address list is set to null. Otherwise (no orphanRemoval and address list not set to null) a foreign key constraint fails. When I try to delete a user who has a vehicle, a ConcurrentModificationException is thrown if I do not uncomment the this.owner.removeVehicle(this); call in the vehicle's preRemove function.

    The thing I do not understand is that before I used this inheritance, there was only a User class, which had all the relations:

        @Entity
        @Table(name = "Users")
        public class User implements Serializable {
            @Id
            protected String username;
            @OneToMany(mappedBy = "owner", cascade = {CascadeType.PERSIST, CascadeType.REMOVE})
            private List<Vehicle> vehicles;
            @OneToMany(cascade = CascadeType.ALL)
            @JoinTable(name = "USER_ADDRESS",
                       joinColumns = @JoinColumn(name = "USERNAME"),
                       inverseJoinColumns = @JoinColumn(name = "ADDRESS_ID"))
            private List<Address> addresses;
            ...

    No orphanRemoval, and the Vehicle class used the statement above (now commented out) in its preRemove function. And I could delete a user who had an address, and I could delete a user who had a vehicle. So why doesn't everything work unchanged when I use inheritance? I use JPA 2.0, EclipseLink 2.0.2, MySQL 5.1.x and NetBeans 6.8.
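    A note on the ConcurrentModificationException: the usual cause in this kind of cascade is mutating a collection while something is still iterating over it. A defensive pattern is to break the association on both sides before calling remove(), rather than inside a lifecycle callback. Below is a minimal sketch of that idea; it assumes getter/setter accessors (getVehicles, setOwner) on the question's User and Vehicle classes, which are not shown in the original code:

        import javax.persistence.EntityManager;
        import java.util.ArrayList;
        import java.util.List;

        class UserCleanup {
            // Detach every vehicle from the user before deleting the user, so
            // no @PreRemove callback has to mutate a collection that JPA may
            // still be iterating during the cascade.
            static void deleteUser(EntityManager em, User user) {
                // Iterate over a copy: removing from the live list while
                // looping over it would itself throw
                // ConcurrentModificationException.
                List<Vehicle> copy = new ArrayList<>(user.getVehicles());
                for (Vehicle v : copy) {
                    user.getVehicles().remove(v); // owning collection stays consistent
                    v.setOwner(null);             // clear the inverse side too
                    em.remove(v);
                }
                em.remove(user);
            }
        }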

    Read the article

  • iPhone: Can't animate contentInset while animating Nav Bar show/hide

    - by Cuzog
    In my app, I have a table view. When the user clicks a button, a UIView overlays part of that table view; it's essentially a partial modal. The table view is intentionally still scrollable while the modal is active. To allow the user to scroll to the bottom of the table view, I change the contentInset and scrollIndicatorInsets values to adjust for the smaller area above the modal. When the modal is taken away, I reset those inset values.

    The problem is that when the user has scrolled to the bottom of the newly adjusted inset and then dismisses the modal, the table view jumps abruptly to a new scroll position because the inset is changed instantly. I would like to animate it so there is a transition, but the beginAnimations/commitAnimations methods aren't affecting it for some reason.

    Edit: More info. I found the conflict. When presenting the modal, I also hide the navigation bar. The navigation bar natively animates the table view up and down as it shows and hides. When I stop animating the navigation bar, the inset animation works fine. Does anyone know what I can do to work around this conflict? Do I have to wait for the navigation bar animation to finish before adjusting the inset? If so, how do I hook onto that? Any help is greatly appreciated! The relevant code from the table view controller is here:

        - (void)viewDidLoad {
            [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(modalOpened) name:@"ModalStartedOpening" object:nil];
            [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(modalDismissed) name:@"ModalStartedClosing" object:nil];
            [super viewDidLoad];
        }

        - (void)modalOpened {
            [UIView beginAnimations:nil context:NULL];
            [UIView setAnimationDuration:0.5];
            [UIView setAnimationDelegate:self];
            self.tableView.contentInset = UIEdgeInsetsMake(0, 0, 201, 0);
            self.tableView.scrollIndicatorInsets = UIEdgeInsetsMake(0, 0, 201, 0);
            [UIView commitAnimations];
        }

        - (void)modalDismissed {
            [UIView beginAnimations:nil context:NULL];
            [UIView setAnimationDuration:0.5];
            [UIView setAnimationDelegate:self];
            self.tableView.contentInset = UIEdgeInsetsMake(0, 0, 0, 0);
            self.tableView.scrollIndicatorInsets = UIEdgeInsetsMake(0, 0, 0, 0);
            [UIView commitAnimations];
        }

    Read the article

  • A better UPDATE method in LINQ to SQL

    - by Refracted Paladin
    The code below is a typical Update method for me in L2S. I am still fairly new to a lot of this (L2S and business app development), but this just FEELS wrong, like there MUST be a smarter way of doing it. Unfortunately, I am having trouble visualizing it and am hoping someone can provide an example or point me in the right direction. To take a stab in the dark: would I have a Person object that has all these fields as properties? Then what, though? Is that redundant, since L2S has already mapped my Person table to a class? Is this just 'how it goes', that you eventually end up passing 30 parameters (or MORE) to an UPDATE method at some point?

    For reference, this is a business app using C#, WinForms, .NET 3.5, and L2S over SQL Server 2005 Standard. Here is a typical Update call for me. It lives in a file (BLLConnect.cs) with the other CRUD methods; Connect is the name of the DB that holds tblPerson. When a user clicks Save, this is what is eventually called, with all of these fields having potentially been updated:

        public static void UpdatePerson(int personID, string userID, string titleID, string firstName,
            string middleName, string lastName, string suffixID, string ssn, char gender,
            DateTime? birthDate, DateTime? deathDate, string driversLicenseNumber,
            string driversLicenseStateID, string primaryRaceID, string secondaryRaceID,
            bool hispanicOrigin, bool citizenFlag, bool veteranFlag, short? residencyCountyID,
            short? responsibilityCountyID, string emailAddress, string maritalStatusID)
        {
            using (var context = ConnectDataContext.Create())
            {
                var personToUpdate = (from person in context.tblPersons
                                      where person.PersonID == personID
                                      select person).Single();

                personToUpdate.TitleID = titleID;
                personToUpdate.FirstName = firstName;
                personToUpdate.MiddleName = middleName;
                personToUpdate.LastName = lastName;
                personToUpdate.SuffixID = suffixID;
                personToUpdate.SSN = ssn;
                personToUpdate.Gender = gender;
                personToUpdate.BirthDate = birthDate;
                personToUpdate.DeathDate = deathDate;
                personToUpdate.DriversLicenseNumber = driversLicenseNumber;
                personToUpdate.DriversLicenseStateID = driversLicenseStateID;
                personToUpdate.PrimaryRaceID = primaryRaceID;
                personToUpdate.SecondaryRaceID = secondaryRaceID;
                personToUpdate.HispanicOriginFlag = hispanicOrigin;
                personToUpdate.CitizenFlag = citizenFlag;
                personToUpdate.VeteranFlag = veteranFlag;
                personToUpdate.ResidencyCountyID = residencyCountyID;
                personToUpdate.ResponsibilityCountyID = responsibilityCountyID;
                personToUpdate.EmailAddress = emailAddress;
                personToUpdate.MaritalStatusID = maritalStatusID;
                personToUpdate.UpdateUserID = userID;
                personToUpdate.UpdateDateTime = DateTime.Now;

                context.SubmitChanges();
            }
        }
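    A note on the long parameter list: a common refactor is to pass a single edit object and copy its fields across in one place, so the method signature stops growing with the table. A minimal sketch of the idea (in Java for illustration; PersonEdit, copyTo and the stand-in Person are all hypothetical names, and in L2S the edit object could simply be the generated entity itself):

        class Person {
            String titleId, firstName, middleName, lastName, ssn, updateUserId;
            java.time.LocalDate birthDate;
        }

        // Hypothetical parameter object: one value per editable column.
        record PersonEdit(String titleId, String firstName, String middleName,
                          String lastName, String ssn, java.time.LocalDate birthDate) {

            // The field-by-field assignments live here, once, instead of
            // being repeated in every update method's signature.
            void copyTo(Person target) {
                target.titleId = titleId;
                target.firstName = firstName;
                target.middleName = middleName;
                target.lastName = lastName;
                target.ssn = ssn;
                target.birthDate = birthDate;
            }
        }

        class PersonService {
            void updatePerson(int personId, String userId, PersonEdit edit) {
                Person person = findById(personId); // load the tracked entity
                edit.copyTo(person);                // one bulk assignment
                person.updateUserId = userId;
                save(person);                       // persist the changes
            }

            private Person findById(int id) { return new Person(); } // stand-in lookup
            private void save(Person p) { /* stand-in for SubmitChanges() */ }
        }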

    Read the article

  • How do I stack images to simulate depth using Core Animation?

    - by Jeffrey Berthiaume
    I have a series of UIImages with which I need to simulate depth. I can't use scaling because I need to be able to rotate the parent view, and the images should look like they're stacked visibly in front of each other, not on the same plane. I made a new view-controller-based project and put this in viewDidLoad (and attached three 120x120-pixel images named 1.png, 2.png, and 3.png):

        - (void)viewDidLoad {
            // display image 3
            UIImageView *three = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"3.png"]];
            three.center = CGPointMake(160 + 60, 240 - 60);
            [self.view addSubview:three];

            // rotate image 3 around the z axis
            // THIS IS INCORRECT
            CATransform3D theTransform = three.layer.transform;
            theTransform.m34 = 1.0 / -1000;
            three.layer.transform = theTransform;

            // display image 2
            UIImageView *two = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"2.png"]];
            two.center = CGPointMake(160, 240);
            [self.view addSubview:two];

            // display image 1
            UIImageView *one = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"1.png"]];
            one.center = CGPointMake(160 - 60, 240 + 60);
            [self.view addSubview:one];

            // rotate image 1 around the z axis
            // THIS IS INCORRECT
            theTransform = one.layer.transform;
            theTransform.m34 = 1.0 / 1000;
            one.layer.transform = theTransform;

            // release the images
            [one release];
            [two release];
            [three release];

            // rotate the parent view around the y axis
            theTransform = self.view.layer.transform;
            theTransform.m14 = 1.0 / -500;
            self.view.layer.transform = theTransform;

            [super viewDidLoad];
        }

    I have very specific reasons why I'm not using an EAGLView and why I'm not loading the images as CALayers (i.e., why I'm using UIImageViews for each one). This is just a quick demo that I can use to work out exactly what I need in my parent application. Is there some matrix way to translate these 2D images along the z axis so they will look like what I'm trying to represent? I've gone through the other Stack Overflow articles as well as the Wikipedia references, and have not found what I'm looking for, although I might not necessarily be using the right terms for what I'm trying to do.

    Read the article

  • Using object input/output streams with files and ArrayList

    - by soad el-hayek
    Hi everyone. I'm an IT student, and it's time to finish my final project in Java. I've faced too many problems; this one I couldn't solve, and I'm really upset! :S My code is like this.

    In the Admin class:

        public ArrayList cos_info = new ArrayList();
        public ArrayList cas_info = new ArrayList();
        public int cos_count = 0;
        public int cas_count = 0;

        void coustmer_acount() throws FileNotFoundException, IOException {
            String add = null;
            do {
                person p = new person();
                cos_info.add(cos_count, p);
                cos_count++;
                add = JOptionPane.showInputDialog("Do you want to add more coustmer..\n'y' for yes ..\n'n' for No ..");
            } while (add.charAt(0) == 'Y' || add.charAt(0) == 'y');
            writenew_cos();
            // add_acounts();
        }

        void writenew_cos() throws IOException {
            ObjectOutputStream aa = new ObjectOutputStream(new FileOutputStream("coustmer.txt"));
            aa.writeObject(cos_info);
            JOptionPane.showMessageDialog(null, "Added to file done sucessfuly..");
            aa.close();
        }

    In the Coustmer class:

        void read_cos() throws IOException, ClassNotFoundException {
            person p1 = null;
            int array_count = 0;
            ObjectInputStream d = new ObjectInputStream(new FileInputStream("coustmer.txt"));
            JOptionPane.showMessageDialog(null, d.available());
            for (int i = 0; d.available() == 0; i++) {
                a.add(array_count, (ArrayList) d.readObject());
                array_count++;
                JOptionPane.showMessageDialog(null, "Haaaaai :D");
                JOptionPane.showMessageDialog(null, array_count);
            }
            d.close();
            JOptionPane.showMessageDialog(null, array_count + "1111");
            for (int i = 0; i < a.size() && found != true; i++) {
                count++;
                p1 = (person) a.get(i);
                user = p1.user;
                pass = p1.pass;
                cas_checkpass();
            }
        }

    It just shows the JOptionPane.showMessageDialog(null, d.available()); dialog and then throws an exception at a.add(array_count, (ArrayList) d.readObject());

    P.S.: person is an object of my own class and it is Serializable.
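    A note on the failing read: writenew_cos() writes the whole ArrayList with a single writeObject call, so reading it back should be a single readObject call as well. Looping on available() is unreliable on an ObjectInputStream, since it reports buffered bytes, not remaining objects. A minimal sketch of the read side under that assumption; the Person class here is just a stand-in for the question's person class:

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.ObjectInputStream;
        import java.io.Serializable;
        import java.util.ArrayList;

        class CustomerStore {
            // Mirrors the write side: one writeObject(list), one readObject().
            @SuppressWarnings("unchecked")
            static ArrayList<Person> readCustomers(String path)
                    throws IOException, ClassNotFoundException {
                try (ObjectInputStream in =
                         new ObjectInputStream(new FileInputStream(path))) {
                    // The whole list comes back as one object; no loop and no
                    // available() check is needed.
                    return (ArrayList<Person>) in.readObject();
                }
            }
        }

        // Stand-in for the question's `person` class; it must be Serializable.
        class Person implements Serializable {
            String user;
            String pass;
        }

    Each element of the returned list is then a single person, so the password-check loop can cast list elements to person directly instead of to ArrayList.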

    Read the article

  • Problem setting row backgrounds in Android ListView

    - by zchtodd
    I have an application in which I'd like one row at a time to have a certain color. This seems to work about 95% of the time, but sometimes, instead of just one row having the color, multiple rows have it. Specifically, a row is set to the "special" color when it is tapped. In rare instances, the last row tapped will retain the color despite a call to setBackgroundColor attempting to make it otherwise.

        private OnItemClickListener mDirectoryListener = new OnItemClickListener() {
            public void onItemClick(AdapterView parent, View view, int pos, long id) {
                if (stdir.getStationCount() == pos) {
                    stdir.moreStations();
                    return;
                }
                if (playingView != null)
                    playingView.setBackgroundColor(Color.DKGRAY);
                view.setBackgroundColor(Color.MAGENTA);
                playingView = view;
                playStation(pos);
            }
        };

    I have confirmed with print statements that the code setting the row to gray is always called. Can anyone imagine a reason why this code might intermittently fail? If there is a pattern or condition that causes it, I can't tell. I thought it might have something to do with the activity lifecycle setting the playingView variable back to null, but I can't reliably reproduce the problem by switching activities or locking the phone.

        private class DirectoryAdapter extends ArrayAdapter {
            private ArrayList<Station> items;

            public DirectoryAdapter(Context c, int resLayoutId, ArrayList<Station> stations) {
                super(c, resLayoutId, stations);
                this.items = stations;
            }

            public int getCount() {
                return items.size() + 1;
            }

            public View getView(int position, View convertView, ViewGroup parent) {
                View v = convertView;
                LayoutInflater vi = (LayoutInflater) getContext().getSystemService(Context.LAYOUT_INFLATER_SERVICE);
                if (position == this.items.size()) {
                    v = vi.inflate(R.layout.morerow, null);
                    return v;
                }
                Station station = this.items.get(position);
                v = vi.inflate(R.layout.songrow, null);
                if (station.playing)
                    v.setBackgroundColor(Color.MAGENTA);
                else if (station.visited)
                    v.setBackgroundColor(Color.DKGRAY);
                else
                    v.setBackgroundColor(Color.BLACK);
                TextView title = (TextView) v.findViewById(R.id.title);
                title.setText(station.name);
                return v;
            }
        };
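    A note on the intermittent colors: ListView recycles row views, so the playingView reference captured in onItemClick can end up pointing at a view the list has since reused for a different row, which matches the extra-magenta symptom. A more robust pattern is to keep the selection state on the data (the Station.playing flag that getView already reads) and repaint through the adapter. A minimal sketch, reusing the question's Station and DirectoryAdapter types:

        import android.view.View;
        import android.widget.AdapterView;
        import java.util.List;

        // Drive row color from data state instead of cached View references;
        // getView already paints magenta/gray/black from the Station flags.
        class StationClickHandler implements AdapterView.OnItemClickListener {
            private final List<Station> stations;
            private final DirectoryAdapter adapter;

            StationClickHandler(List<Station> stations, DirectoryAdapter adapter) {
                this.stations = stations;
                this.adapter = adapter;
            }

            @Override
            public void onItemClick(AdapterView<?> parent, View view, int pos, long id) {
                if (pos >= stations.size()) return; // the extra "more" row
                for (Station s : stations) {        // clear the previous selection
                    if (s.playing) { s.playing = false; s.visited = true; }
                }
                stations.get(pos).playing = true;   // mark the tapped row
                adapter.notifyDataSetChanged();     // repaint visible rows via getView
            }
        }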

    Read the article

  • Spring MVC, REST, and HATEOAS

    - by SingleShot
    I'm struggling with the correct way to implement Spring MVC 3.x RESTful services with HATEOAS. Consider the following constraints: I don't want my domain entities polluted with web/REST constructs. I don't want my controllers polluted with view constructs. I want to support multiple views.

    Currently I have a nicely put-together MVC app without HATEOAS. Domain entities are pure POJOs without any view or web/REST concepts embedded. For example:

        class User {
            public String getName() {...}
            public String setName(String name) {...}
            ...
        }

    My controllers are also simple. They provide routing and status, and delegate to Spring's view resolution framework. Note that my application supports JSON, XML, and HTML, yet no domain entities or controllers have embedded view information:

        @Controller
        @RequestMapping("/users")
        class UserController {
            @RequestMapping
            public ModelAndView getAllUsers() {
                List<User> users = userRepository.findAll();
                return new ModelAndView("users/index", "users", users);
            }

            @RequestMapping("/{id}")
            public ModelAndView getUser(@PathVariable Long id) {
                User user = userRepository.findById(id);
                return new ModelAndView("users/show", "user", user);
            }
        }

    So, now my issue: I'm not sure of a clean way to support HATEOAS. Here's an example. Let's say that when the client asks for a User in JSON format, it comes out like this:

        { firstName: "John", lastName: "Smith" }

    Let's also say that when I support HATEOAS, I want the JSON to contain a simple "self" link that the client can then use to refresh the object, delete it, or something else. It might also have a "friends" link indicating how to get the user's list of friends:

        {
          firstName: "John",
          lastName: "Smith",
          links: [
            { rel: "self", ref: "http://myserver/users/1" },
            { rel: "friends", ref: "http://myserver/users/1/friends" }
          ]
        }

    Somehow I want to attach links to my object. I feel the right place to do this is in the controller layer, as the controllers all know the correct URLs. Additionally, since I support multiple views, I feel like the right thing to do is to somehow decorate my domain entities in the controller before they are converted to JSON/XML/whatever in Spring's view resolution framework. One way to do this might be to wrap the POJO in question in a generic Resource class that contains a list of links. Some view tweaking would be required to crunch it into the format I want, but it's doable. Unfortunately, nested resources could not be wrapped in this way. Other things that come to mind include adding links to the ModelAndView and then customizing each of Spring's out-of-the-box view resolvers to stuff links into the generated JSON/XML/etc. What I don't want is to be constantly hand-crafting JSON/XML/etc. to accommodate various links as they come and go during the course of development. Thoughts?
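    For reference, the generic wrapper the author describes can be sketched in a few lines. This is just the shape from the question, not an official Spring API (the later Spring HATEOAS project ships essentially this model under its own names):

        import java.util.ArrayList;
        import java.util.List;

        // A link as it appears in the example JSON: { rel: "...", ref: "..." }.
        class Link {
            public final String rel;
            public final String ref;
            Link(String rel, String ref) { this.rel = rel; this.ref = ref; }
        }

        // Generic wrapper: the domain POJO stays untouched and the links
        // ride alongside it for the view layer to serialize.
        class Resource<T> {
            public final T content;
            public final List<Link> links = new ArrayList<>();

            Resource(T content) { this.content = content; }

            Resource<T> withLink(String rel, String ref) {
                links.add(new Link(rel, ref));
                return this;
            }
        }

    A controller could then return new Resource<>(user).withLink("self", "/users/" + id) and leave serialization to the views; the nested-resource limitation the author mentions is exactly where this simple shape runs out of steam.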

    Read the article

  • NHibernate and objects with value-semantics

    - by Groo
    Problem: if I pass a class with value semantics (Equals method overridden) to NHibernate, NHibernate tries to save it to the DB even though it has just saved an entity equal by value (but not by reference) to the database. What am I doing wrong? Here is a simplified example model for my problem.

    Let's say I have a Person entity and a City entity. One thread (producer) is creating new Person objects which belong to a specific existing City, and another thread (consumer) is saving them to a repository (using NHibernate as the DAL). Since there are lots of objects being flushed at a time, I am using Guid.Comb IDs to ensure that each insert is made using a single SQL command. City is an object with value-type semantics (equal by name only, for this example's purposes only):

        public class City : IEquatable<City> {
            public virtual Guid Id { get; private set; }
            public virtual string Name { get; set; }

            public virtual bool Equals(City other) {
                if (other == null)
                    return false;
                return this.Name == other.Name;
            }

            public override bool Equals(object obj) {
                return Equals(obj as City);
            }

            public override int GetHashCode() {
                return this.Name.GetHashCode();
            }
        }

    The Fluent NHibernate mapping is something like:

        public class PersonMap : ClassMap<Person> {
            public PersonMap() {
                Id(x => x.Id).GeneratedBy.GuidComb();
                References(x => x.City).Cascade.SaveUpdate();
            }
        }

        public class CityMap : ClassMap<City> {
            public CityMap() {
                Id(x => x.Id).GeneratedBy.GuidComb();
                Map(x => x.Name);
            }
        }

    Right now (with my current NHibernate mapping config), my consumer thread maintains a dictionary of cities and replaces their references in incoming Person objects (otherwise NHibernate will see a new, non-cached City object and try to save it as well), and I need to do this for every produced Person object. Since I have implemented the City class to behave like a value type, I hoped that NHibernate would compare cities by value and not try to save them each time; i.e., I would only need to do a lookup once per session and not care about them anymore. Is this possible, and if yes, what am I doing wrong here?

    Read the article

  • Properly fitting an image larger than the screen

    - by madcat
    What I want to achieve here is simply to fit the image width to the screen in both orientations and use a UIScrollView to allow scrolling vertically to see the whole image. Both the view controller and the view are created programmatically. The image loaded is larger than the screen in both width and height. Here is the related code in my view controller:

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            return YES;
        }

        - (void)loadView {
            UIScreen *screen = [UIScreen mainScreen];
            CGRect rect = [screen applicationFrame];
            self.view = [[UIView alloc] initWithFrame:rect];
            self.view.contentMode = UIViewContentModeScaleAspectFill;
            self.view.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;

            UIImage *img = [[UIImage alloc] initWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"image" ofType:@"png"]];
            UIImageView *imgView = [[UIImageView alloc] initWithImage:img];
            [img release];
            imgView.contentMode = UIViewContentModeScaleAspectFill;
            imgView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
            [self.view addSubview:imgView];
            [imgView release];
        }

    I tried all combinations of both contentMode settings above; none gave me the correct result. The closest I'm getting now: if I manually resize imgView in loadView, portrait mode displays correctly (since the app always starts in portrait mode), but in landscape mode the width fits correctly while the image is centered vertically rather than top-aligned. If I add the imgView to a scroll view, in landscape mode it looks like contentSize is not set to the full image size, but when I scroll-bounce I can see the image is there in full size.

    Questions: why do I need to resize it manually? In landscape mode, how and where can I 'move' the imgView so that imgView.frame.origin is (0,0) and it works correctly with a scroll view? Thanks!

    UPDATE: I added imgView.clipsToBounds = YES; and found out that in landscape mode the image bounds are smaller than the screen in height. So the question becomes: how do I have the image view keep its original ratio (and thus always show the full image) when rotated to landscape? Do I need to manually resize it after rotation again?

    Read the article

  • Rendering problem with UITableView

    - by Spider-Paddy
    I have a very strange problem with a UITableView within a navigation controller on the iPhone simulator. Of the cells displayed, only some are correctly rendered. They are all supposed to look the same, but the majority are missing the accessory I've set; scrolling the view changes which cell has the accessory, so I suspect it's some sort of cell caching happening, although the contents are correct for each cell. I had also set an image as the background, and that was also only displaying sporadically, but I fixed that by changing

        cell.backgroundView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"yellow-bar_short.png"]];

    (which also rendered the background on only a random cell) to

        cell.backgroundColor = [[UIColor alloc] initWithPatternImage:[UIImage imageNamed:@"yellow-bar_short.png"]];

    I now need to fix the problem with the accessory only showing on a random cell. I tried moving the code from cellForRowAtIndexPath to willDisplayCell, but it made no difference. I put in a log command to confirm that it runs for each cell. Basically it's a table view (UITableViewCellStyleSubtitle) that gets its info from a server and is then updated by a delegate method calling reload. The code is:

        - (void)tableView:(UITableView *)tableView willDisplayCell:(UITableViewCell *)cell forRowAtIndexPath:(NSIndexPath *)indexPath {
            NSLog(@"%@", [NSString stringWithFormat:@"Setting colours for cell %i", indexPath.row]);

            // Set cell background
            // cell.backgroundView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"yellow-bar_short.png"]];
            cell.backgroundColor = [[UIColor alloc] initWithPatternImage:[UIImage imageNamed:@"yellow-bar_short.png"]];
            cell.textLabel.backgroundColor = [UIColor clearColor];
            cell.detailTextLabel.backgroundColor = [UIColor clearColor];

            // detailTableViewAccessory is a view containing an image view in this view's nib
            // i.e. nib <- view <- imageview <- image
            cell.accessoryView = detailTableViewAccessory;
        }

        // Called by the data-fetching object when done
        - (void)listDataTransferComplete:(ArticleListParser *)articleListParserObject {
            NSLog(@"Data parsed, reloading detail table");
            self.currentTotalResultPages = (((articleListParserObject.currentArticleCount - 1) / 10) + 1);
            self.detailTableDataSource = [articleListParserObject.returnedArray copy]; // make a local copy of the returned array

            // Render table again with returned array data (neither of these 2 fixed it)
            [self.detailTableView performSelectorOnMainThread:@selector(reloadData) withObject:nil waitUntilDone:NO];
            // [self.detailTableView reloadData];

            // Re-enable necessary buttons (including table cells)
            letUserSelectRow = TRUE;
            [btnByName setEnabled:TRUE];
            [btnByPrice setEnabled:TRUE];

            // Remove please wait message
            NSLog(@"Removing please wait view");
            [pleaseWaitViewControllerObject.view removeFromSuperview];
        }

    I only included the code that I thought was relevant; I can supply more if needed. I can't test it on an iPhone yet, so I don't know whether it's just a simulator anomaly or a bug in my code. I've always gotten good feedback on my questions; any ideas?

    Read the article

  • Zend Framework AjaxContext filters the results and Decorators not removable

    - by Janis Peisenieks
    OK, since this problem has two parts, it will be easier to explain them together. So here goes: I am trying to remove the default decorators from these elements, since I am using a slightly different way of styling them. But no matter what I do, the DtDdWrapper still shows up. If I try to remove all of the decorators, all of the fields below disappear.

        public function newfieldAction()
        {
            $ajaxContext = $this->_helper->getHelper('AjaxContext');
            $ajaxContext->addActionContext('newfield', 'html')->initContext();

            $id = $this->_getParam('id', null);
            $id1 = $id + 1;
            $id2 = $id + 2;

            $element = new Zend_Form_Element_Text("newTitle$id1");
            $element->setOptions(array('escape' => false));
            $element->setRequired(true)->setLabel('Vertiba')->removeDecorator('label');

            $tinyelement = new Zend_Form_Element_Text("newName$id");
            $tinyelement->setRequired(true)->setOptions(array('escape' => false))->setLabel('Vertiba')->removeDecorator('label');

            $textarea_element = new Zend_Form_Element_Textarea("newText$id2");
            $textarea_element->setRequired(true)->setOptions(array('escape' => false))->setLabel('Vertiba')->removeDecorator('label');

            $this->view->descriptionField = "<td>" . $textarea_element->__toString() . "</td>";
            $this->view->titleField = $element->__toString();
            $this->view->field = $tinyelement->__toString();
            $this->view->id = $id;
        }

    The context view script seems to trim my code one way or another. When I try to put a <td> or a <table> tag in the view script, it just skips the tags. Is there a way to stop this escaping from happening? My view script (which, as I mention below, the formatting system has mangled):

        id; ?" asdfasdfasdfasd field ? titleField ? descriptionField ? id ?"remove

    P.S. The code formatting system is barfing at me; could someone please help me with the formatting of the code?

    Read the article

  • jQuery content cycle next/previous not working

    - by user340745
    I have a definition list with a lot of text inside it. When a user comes to the page, I want the jQuery to hide two-thirds of that content, add next and previous buttons where appropriate, and fade the content in and out. The first third of the content is .issue-group-1, the second third is .issue-group-2, and the final third .issue-group-3. I am trying to set it up so that when the user hits the next button, the button changes classes and thus acts differently the next time the user clicks it (that is, it takes them to the third page instead of the second). Right now the next/previous buttons work on the first and second "pages", but the next button to the third page won't work. My code is probably too long, and this is maybe not the best way to do it; I'm new to jQuery. Any suggestions would be helpful.

        $(document).ready(function() {
            $('.issue-group-2, .issue-group-3').hide();
            $('a#next-button.page1').fadeIn(500);

            $('a#next-button.page1').click(function() {
                $(this).removeClass('page1').addClass('page2');
                $('.issue-group-1').fadeOut(500, function() {
                    $('.issue-group-2').fadeIn(500);
                });
                $('a#previous-button').fadeIn(500);
            });

            $('a#previous-button.page2').click(function() {
                $('#next-button.page2').removeClass('page2').addClass('page1');
                $('.issue-group-2').fadeOut(500, function() {
                    $('.issue-group-1').fadeIn(500);
                });
                $('a#previous-button').fadeOut(500);
            });

            $('a#next-button.page2').click(function() {
                $('a#previous-button').removeClass('page2').addClass('page3');
                $('.issue-group-2').fadeOut(500, function() {
                    $('.issue-group-3').fadeIn(500);
                });
                $('a#next-button').fadeOut(500);
            });

            $('a#previous-button.page3').click(function() {
                $(this).removeClass('page3').addClass('page2');
                $('.issue-group-2').fadeOut(500, function() {
                    $('.issue-group-2').fadeIn(500);
                });
                $('a#next-button').fadeIn(500);
            });
        });

    Read the article

  • RequestFactoryEditorDriver getting edited data after flush

    - by Deanna
    Let me start by saying I have a solution, but I don't really think it is elegant, so I am looking for a cleaner way to do this.

    I have an EntityProxy displayed in a view panel. The view panel is a RequestFactoryEditorDriver used in display mode only. The user clicks on a data element and opens a popup editor to edit a data element of the EntityProxy with a few more bits of data than are displayed in the view panel. When the user saves the element, I need the view panel to update the display.

    I ran into a problem because the popup editor's RequestFactoryEditorDriver flow doesn't let you get at the edited data. The driver uses the passed-in context and sends it to the server. The context returned from flush() only allows a Receiver, even if you cast it to the type of context you stored in the editor driver in the edit() call. It doesn't appear to send an EntityProxyChange event either, so I couldn't listen for that and update the display view.

    The solution I found was to change my domain object's persist to return the newly saved entity, and then create the popup editor like this:

        editor.getSaveButtonClickHandler().addClickHandler(createSaveHandler(driver, editor));

        // initialize the driver and edit the given text
        driver.initialize(rf, editor);
        PlayerProfileCtx ctx = rf.playerProfile();
        ctx.persist().using(playerProfile).with(driver.getPaths())
            .to(new Receiver<PlayerProfileProxy>() {
                @Override
                public void onSuccess(PlayerProfileProxy profile) {
                    editor.hide();
                    playerProfile = profile;
                    viewDriver.display(playerProfile);
                }
            });
        driver.edit(playerProfile, ctx);
        editor.centerAndShow();

    Then in the save handler I just fire the context I get from the flush. While this approach works, it doesn't seem right. It would seem I should subscribe to an EntityProxyChange event in the display view and update the entity and the view from there. Also, this approach saves the complete entity, not just the changed bits, which increases bandwidth usage.

    What I would expect to happen is that when you flush the entity, the RequestFactory-managed version of the entity is 'optimistically' updated and the EntityProxyChange event is fired, with the entity reverted only if something went wrong in the save. The actual save should send only the changed bits. That way there would be no need to refetch the whole entity and send the complete data over the wire twice. Is there a better solution?

    Read the article

  • iPhone iOS 5: need help getting my tab bar at the top to work

    - by Patrick
    I wanted to have the tab bar at the top, so I created a new project in Xcode, added a view, and then inside that view added a scroll view, text, and another view (see picture). What I wanted was to have my tab bar at the top; in the middle would be the contents from the tab bar, and below it a small copyright text (see picture). I have no idea how to do this correctly. I have tried to create the UITabBarController on the fly and then assign it into the view at the top (the top white space on the picture, dedicated to the tab bar). Here is my code to init the main window.

    MainWindow.h:

        #import <UIKit/UIKit.h>

        @class Intro;

        @interface MainWindow : UIViewController

        @property (strong, nonatomic) IBOutlet UIScrollView *mainContentFrame;
        @property (strong, nonatomic) IBOutlet UIView *mainTabBarView;
        @property (strong, nonatomic) UITabBarController *mainTabBar;
        @property (nonatomic, strong) Intro *intro; // trying to get this tab to show in the tab bar

        @end

    MainWindow.m:

        #import "MainWindow.h"
        #import "Intro.h"

        @interface MainWindow ()
        @end

        @implementation MainWindow

        @synthesize mainContentFrame = _mainContentFrame;
        @synthesize mainTabBarView = _mainTabBarView;
        @synthesize mainTabBar = _mainTabBar;
        @synthesize intro = _intro;

        - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
        {
            self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
            if (self) {
                // Custom initialization
            }
            return self;
        }

        - (void)viewDidLoad
        {
            _intro = [[Intro alloc] init];
            NSArray *allViews = [[NSArray alloc] initWithObjects:_intro, nil];

            [super viewDidLoad];
            // Do any additional setup after loading the view.
            _mainTabBar = [[UITabBarController alloc] init];
            [_mainTabBar setViewControllers:allViews];
            [_mainTabBarView.window addSubview:_mainTabBar.tabBarController.view];
        }

        - (void)viewDidUnload
        {
            [self setMainTabBar:nil];
            [self setMainContentFrame:nil];
            [self setMainContentFrame:nil];
            [super viewDidUnload];
            // Release any retained subviews of the main view.
        }

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
        {
            return (interfaceOrientation == UIInterfaceOrientationPortrait);
        }

        @end

    What am I missing to get this to work? I want the content to end up in the scroll view so that all tabs are scrollable.

    Read the article

  • UIButton does not respond to touch events after changing its position using setFrame

    - by Pranathi
    I have a view controller class (child) which extends a view controller class (parent). In the parent class's loadView() method I create a subview (named myButtonView) with two buttons (laid out horizontally in the subview) and add it to the main view. In the subclass I need to shift these two buttons up by 50 pixels, so I am shifting the buttonView by calling the setFrame method. This makes the buttons shift and render properly, but they no longer respond to touch events afterward. The buttons work properly in views of the parent class type. In the child class's view, if I comment out the setFrame() call, the buttons also work properly. How can I shift the buttons and still have them respond to touch events? Any help is appreciated. Following are snippets of the code.

    In the parent class:

        - (void)loadView {
            // Some code...
            CGRect buttonFrameRect = CGRectMake(0, yOffset + 1, screenRect.size.width, KButtonViewHeight);
            myButtonView = [[UIView alloc] initWithFrame:buttonFrameRect];
            myButtonView.backgroundColor = [UIColor clearColor];
            [self.view addSubview:myButtonView];
            // Some code...

            CGRect nxtButtonRect = CGRectMake(screenRect.size.width - 110, 5, 100, 40);
            myNxtButton = [UIButton buttonWithType:UIButtonTypeCustom];
            [myNxtButton setTitle:@"Submit" forState:UIControlStateNormal];
            myNxtButton.frame = nxtButtonRect;
            myNxtButton.backgroundColor = [UIColor clearColor];
            [myNxtButton addTarget:self action:@selector(nextButtonPressed:) forControlEvents:UIControlEventTouchUpInside];
            [myButtonView addSubview:myNxtButton];

            CGRect backButtonRect = CGRectMake(10, 5, 100, 40);
            myBackButton = [UIButton buttonWithType:UIButtonTypeCustom];
            [myBackButton setTitle:@"Back" forState:UIControlStateNormal];
            myBackButton.frame = backButtonRect;
            myBackButton.backgroundColor = [UIColor clearColor];
            [myBackButton addTarget:self action:@selector(backButtonPressed:) forControlEvents:UIControlEventTouchUpInside];
            [myButtonView addSubview:myBackButton];
            // Some code...
        }

    In the child class:

        - (void)loadView {
            [super loadView];
            // Some code...
            CGRect buttonViewRect = myButtonView.frame;
            buttonViewRect.origin.y = yOffset; // this is basically the original yOffset + 50
            [myButtonView setFrame:buttonViewRect];
            yOffset += KButtonViewHeight;
            // Add some other view below myButtonView...
        }

    Read the article

  • Microsoft and jQuery

    - by Rick Strahl
    The jQuery JavaScript library has been steadily getting more popular, and with recent developments from Microsoft, jQuery is also getting ever more exposure on the ASP.NET platform, now including directly from Microsoft. jQuery is a lightweight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information on jQuery at www.jquery.com.

    For me, jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading JavaScript development to actually looking forward to implementing client-side JavaScript functionality. It has also had a profound impact on my JavaScript skill level, by seeing how the library accomplishes things (and often reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on. Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins, and pure usefulness have made it easily the most popular JavaScript library available today.

    As a long-time jQuery user, I've been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET's core features, rather than relying on the ASP.NET AJAX library.

    Microsoft and jQuery – making Friends

    jQuery is an open source project, but in the last couple of years Microsoft has really thrown its weight behind supporting this open source library as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of their Product Support Services (PSS), as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in beta) also uses jQuery for a variety of client-side support features, including client-side validation, and we can look forward to more integration of client-side functionality via jQuery in both MVC and WebForms in the future. In other words, jQuery is becoming an optional but included component of the ASP.NET platform. PSS support means that support staff will answer jQuery-related support questions as part of any support incidents related to ASP.NET, which provides some peace of mind to corporate development shops that require end-to-end support from Microsoft.

    In addition to including jQuery and supporting it, Microsoft has also been getting involved in providing development resources for extending jQuery's functionality via plug-ins. Microsoft's last version of the Microsoft Ajax Library, the successor to the native ASP.NET AJAX Library, included some really cool functionality for client templates, databinding, and localization. As it turns out, Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted and approved for inclusion in the official jQuery plug-in repository and have been taken over by the jQuery team for further improvements and maintenance.
    Even more surprising: the jQuery Templates component has actually been approved for inclusion in the next major update of the jQuery core in jQuery 1.5, which means it will become a native feature that doesn't require additional script files to be loaded. Imagine this: an open source contribution from Microsoft that has been accepted into a major open source project for a core feature improvement. Microsoft has come a long way indeed!

    What the Microsoft Involvement with jQuery means to you

    For Microsoft, jQuery support is a strategic decision that affects their direction in client-side development, but nothing stopped you from using jQuery in your applications prior to Microsoft's official backing; in fact, a large chunk of developers did so readily before Microsoft's announcement. Official support from Microsoft brings a few benefits to developers, however. jQuery support in Visual Studio 2010 means built-in support for jQuery IntelliSense, automatically added jQuery scripts in many project types, and a common base for client-side functionality that actually uses what most developers are already using.

    If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library, worry no more. With official support and the change in direction toward jQuery, Microsoft is now following along with what most in the ASP.NET community had already been doing, which is likely the reason for Microsoft's shift in direction in the first place. ASP.NET AJAX and the Microsoft Ajax Library weren't bad technology; there was tons of useful functionality buried in those libraries. However, they never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality these libraries provided for control developers, they lacked useful and easily accessible functionality for day-to-day client-side application development. The result was that even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), other than the internal support in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit, and some third-party vendors, the Microsoft client libraries were largely ignored by the developer community, opening the door for other client-side solutions.

    Microsoft seems to be acknowledging developer choice in this case: many more developers were going down the jQuery path than were using the Microsoft-built libraries, and there seems to be little sense in continuing development of a technology that goes largely unused by the majority of developers. Kudos to Microsoft for recognizing this and gracefully changing direction. Note that even though there will be no further development of the Microsoft client libraries, they will continue to be supported, so if you're using them in your applications there's no reason to run for the exit in a panic and start rewriting everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side, so you can leave existing solutions untouched even as you enhance them with jQuery.
    The Microsoft jQuery Plug-ins – Solid Core Features

    One of the most interesting developments in Microsoft's embrace of jQuery is that Microsoft has started contributing to jQuery via the standard mechanism set up for jQuery developers: by submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library, rewrote those components for jQuery, and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team, and that's exactly what happened with the three plug-ins submitted by Microsoft, with the templating plug-in even getting slated to be published as part of the jQuery core in the next major release (1.5). The following plug-ins are provided by Microsoft:

    jQuery Templates – a client-side template rendering engine
    jQuery Data Link – a client-side databinder that can synchronize changes without code
    jQuery Globalization – provides formatting and conversion features for dates and numbers

    The first two are ports of functionality that was slated for the Microsoft Ajax Library, while the globalization library provides functionality that was already found in the original ASP.NET AJAX library. To me, all three plug-ins address a pressing need in client-side applications and provide functionality I've previously used in other incarnations, but with more complete implementations. Let's take a closer look at these plug-ins.

    jQuery Templates
    http://api.jquery.com/category/plugins/templates/

    Client-side templating is a key component for building rich JavaScript applications in the browser. Templating on the client lets you avoid manually creating markup by building DOM nodes and injecting them individually into the document via code. Rather, you can create markup templates, similar to the way you create classic ASP server markup, and merge data into those templates to render HTML, which you can then inject into the document or replace existing content with. Output from templates is rendered as a jQuery matched set and can then be easily inserted into the document as needed. Templating is key to minimizing client-side code and reducing repeated rendering logic: a single template can be used in many places for updating and adding content to existing pages. Further, if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to use a single markup template to handle all rendering of each specific HTML section/element.

    I've used a number of different client rendering template engines with jQuery in the past, including jTemplates (a PHP-style templating engine) and a modified version of John Resig's micro-templating engine, which I built into my own set of libraries because templating is such a commonly used feature in my client-side applications. jQuery Templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig's original micro-template engine, the core of the templating engine generates JavaScript code, which means templates can include JavaScript code. To give you a basic idea of how templates work, imagine I have an application that downloads a set of stock quotes based on a symbol list and then displays them in the document.
    To do this you can create an 'item' template that describes how each of the quotes is rendered, as a template inside of the document:

        <script id="stockTemplate" type="text/x-jquery-tmpl">
        <div id="divStockQuote" class="errordisplay" style="width: 500px;">
            <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
            <div class="label">Last Price:</div><div>${LastPrice}</div>
            <div class="label">Net Change:</div><div>
                {{if NetChange > 0}}
                <b style="color:green">${NetChange}</b>
                {{else}}
                <b style="color:red">${NetChange}</b>
                {{/if}}
            </div>
            <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div>
        </div>
        </script>

    The 'template' is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions, which reference data from the quote objects returned from an AJAX call on the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide rudimentary logic inside of your templates, as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find more about the full set of markup expressions available in the documentation. To load up this data you can use code like the following:

        <script type="text/javascript">
        //var Proxy = new ServiceProxy("../PageMethods/PageMethodsService.asmx/");
        $(document).ready(function () {
            $("#btnGetQuotes").click(GetQuotes);
        });

        function GetQuotes() {
            var symbols = $("#txtSymbols").val().split(",");
            $.ajax({
                url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes",
                data: JSON.stringify({ symbols: symbols }), // parameter map
                type: "POST", // data has to be POSTed
                contentType: "application/json",
                timeout: 10000,
                dataType: "json",
                success: function (result) {
                    var quotes = result.d;
                    var jEl = $("#stockTemplate").tmpl(quotes);
                    $("#quoteDisplay").empty().append(jEl);
                },
                error: function (xhr, status) {
                    alert(status + "\r\n" + xhr.responseText);
                }
            });
        };
        </script>

    In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects. The result is returned as an object whose .d property (in Microsoft service style) holds the actual array of quotes. The template is applied with:

        var jEl = $("#stockTemplate").tmpl(quotes);

    which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element on the page. The template is merged against an array in this example. When the result is an array, the template is automatically applied to each array item. If you pass a single data item, like say a stock quote, the template works exactly the same way but is applied only once. Templates also have access to a $data item, which provides the current data item and information about the template that is currently executing. This makes it possible to keep context within the template itself and also to pass context from a parent template to a child template, which is very powerful. Templates can be evaluated by selecting the template and calling the .tmpl() function on the jQuery matched set as shown above, or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or, more likely, to load templates from the server via AJAX calls.
    In short, there are options. The above shows off some of the basics, but there's much more functionality available in the template engine; check the documentation link for more information and links to additional examples. The plug-in download also comes with a number of examples that demonstrate the functionality. jQuery Templates will become a native component in jQuery core 1.5, so it's definitely worthwhile checking out the engine today and getting familiar with this interface. As stoked as I am about templating becoming part of the jQuery core, because it's such an integral part of many applications, there are also a couple of shortcomings in the current incarnation:

    Lack of error handling: currently, if you embed an expression that is invalid, it's simply not rendered. There's no error rendered into the template, nor do the various template functions throw errors, which leaves finding bugs as a runtime exercise. I would like some mechanism, optional if possible, to get error info on what is failing in a template when it's rendered.

    No string output: templates are always rendered into a jQuery matched set, and there's no way that I can see to render directly to a string. String output can be useful for debugging, as well as for opening up templating to non-HTML string output.

    Limited JavaScript access: unlike John Resig's original micro-templating engine, which was entirely based on JavaScript code generation, these templates are limited to a few structured commands that can 'execute'. There's no code execution inside of script code, which means you're limited to calling expressions available on global objects or the data item passed in. This may or may not be a big deal, depending on the complexity of your template logic.

    Error handling has been discussed quite a bit, and it's likely there will be some solution to that particular issue by the time jQuery Templates ships. The others are relatively minor issues, but something to think about anyway.

    jQuery Data Link
    http://api.jquery.com/category/plugins/data-link/

    jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object's properties. The typical scenario is linking a textbox to a property of an object, having the object updated when the text in the textbox changes, and having the textbox updated when the value in the object (or the entire object) changes. The plug-in also supports converter functions that provide the conversion logic from string to some other value, typically necessary for mapping textbox string input to, say, a number property, potentially applying additional formatting and calculations. In theory this sounds great; in reality, however, this plug-in has some serious usability issues. Using the plug-in you can do things like the following to bind data:

        person = { firstName: "rick", lastName: "strahl" };

        $(document).ready(function () {
            // provide for two-way linking of inputs
            $("form").link(person);

            // bind to non-input elements explicitly
            $("#objFirst").link(person, {
                firstName: {
                    name: "objFirst",
                    convertBack: function (value, source, target) {
                        $(target).text(value);
                    }
                }
            });
            $("#objLast").link(person, {
                lastName: {
                    name: "objLast",
                    convertBack: function (value, source, target) {
                        $(target).text(value);
                    }
                }
            });
        });

    This code hooks up two-way linking between a couple of textboxes on the page and the person object.
The first line in the .ready() handler provides mapping of the object to form fields with the same field names as properties on the object. Note that .link() does NOT bind items into the textboxes when you call .link() – changes are mapped only when values change and you move out of the field. Strike one. The two following commands allow manual binding of values to specific DOM elements, which is effectively a one-way bind. You specify the object and then an explicit mapping where name is an ID in the document. The converter is required to explicitly assign the value to the element. Strike two. You can also detect changes to the underlying object and cause updates to the bound input elements. Unfortunately the syntax to do this is not very natural, as you have to rely on the jQuery data object. To update an object's properties and get change notification looks like this:

function updateFirstName() {
    $(person).data("firstName", person.firstName + " (code updated)");
}

This works fine in causing any linked fields to be updated. In the bindings above both the firstName input field and the objFirst DOM element get updated. But the syntax requires you to use a jQuery .data() call for each property change to ensure that the changes are tracked properly. Really? Sure, you're binding through multiple layers of abstraction now, but how is that better than just manually assigning values? The code savings (if any) are going to be minimal. As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn't help much towards that goal in its current incarnation. While you can bind values, the ‘binder’ is too limited to be really useful. If initial values can't be assigned from the mappings you're going to end up duplicating work loading the data using some other mechanism. There's no easy way to re-bind data with a different object altogether since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that's fairly verbose and frankly may be more voluminous than what you might write by hand for manual binding and unbinding. Two-way binding can be very useful but it has to be easy and most importantly natural. If it's more work to hook up a binding than writing a couple of lines to do binding/unbinding, this sort of thing helps very little in most scenarios. In talking to some of the developers, the feature set for Data Link is not complete and they are still soliciting input for features and functionality. If you have ideas on how you want this feature to be more useful, get involved and post your recommendations. As it stands, it looks to me like this component needs a lot of love to become useful. For this component to really provide value, bindings need to be able to be refreshed easily and work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk rather than a tool where each link has to be mapped first (see the sketch below). I also find the choice of creating a jQuery plug-in questionable – it seems a standalone object – albeit one that relies on the jQuery library – would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft-created components this is by far the least useful and least polished implementation at this point.
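To make the "model binder" idea concrete, here's a rough sketch of what such an interface might look like. This is entirely hypothetical code – nothing like this ships with the plug-in – and the element IDs in the usage line are made up:

// map an object's properties to elements by selector in one call
function bindModel(model, mappings) {
    // mappings: { propertyName: "#elementSelector", ... }
    $.each(mappings, function (prop, selector) {
        var el = $(selector);
        if (el.is(":input")) {
            // push the initial value into the input up front
            el.val(model[prop]);
            // push edits back into the model when the input changes
            el.change(function () { model[prop] = el.val(); });
        } else {
            // non-input elements get one-way display binding
            el.text(model[prop]);
        }
    });
}

// usage: one call binds (and can later re-bind) the whole object
bindModel(person, { firstName: "#txtFirst", lastName: "#txtLast" });

Something along these lines would handle initial assignment, and re-binding a different object would be as simple as calling the function again.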
jQuery Globalization
http://github.com/jquery/jquery-global

Globalization in JavaScript applications often gets short shrift, and part of the reason for this is that natively in JavaScript there's little support for formatting and parsing of numbers and dates. There are a number of JavaScript libraries out there that provide some support for globalization, but most are limited to a particular portion of globalization. As .NET developers we're fairly spoiled by the richness of APIs provided in the framework, and when dealing with client development one really notices the lack of these features. While you may not necessarily need to localize your application, the globalization plug-in also helps with some basic tasks for non-localized applications: dealing with formatting and parsing of dates and time values. Dates in particular are problematic in JavaScript as there are no formatters whatsoever except the .toString() method, which outputs a verbose and next to useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server. You can write code like the following, for example, to format numbers and dates:

var date = new Date();
var output = $.format(date, "MMM. dd, yy") + "\r\n" +
             $.format(date, "d") + "\r\n" + // 10/25/2010
             $.format(1222.32213, "N2") + "\r\n" +
             $.format(1222.33, "c") + "\r\n";
alert(output);

This becomes even more useful if you combine it with templates, which can also include any JavaScript expressions. Assuming the globalization plug-in is loaded you can create template expressions that use the $.format function. Here's the template I used earlier for the stock quote again, with a couple of formats applied:

<script id="stockTemplate" type="text/x-jquery-tmpl">
    <div id="divStockQuote" class="errordisplay" style="width: 500px;">
        <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
        <div class="label">Last Price:</div>
        <div>${$.format(LastPrice,"N2")}</div>
        <div class="label">Net Change:</div><div>
            {{if NetChange > 0}}
                <b style="color:green" >${NetChange}</b>
            {{else}}
                <b style="color:red" >${NetChange}</b>
            {{/if}}
        </div>
        <div class="label">Last Update:</div>
        <div>${$.format(LastQuoteTime,"MMM dd, yyyy")}</div>
    </div>
</script>

There are also parsing methods that can parse dates and numbers from strings easily:

alert($.parseDate("25.10.2010"));
alert($.parseInt("12.222")); // de-DE uses . for thousands separators

As you can see, culture-specific options are taken into account when parsing. The globalization plug-in provides rich support for a variety of locales: you can get a list of all available cultures, query cultures for culture items (like currency symbol, separators, etc.), and retrieve localized string names for all calendar-related items (days of the week, months), all generated off of .NET's supported locales. In short, you get much of the same functionality that you already might be using in .NET on the server side. The plug-in includes a huge number of locales and a Globalization.all.min.js file that contains the text defaults for each of these locales, as well as small locale-specific script files that define each locale's settings. It's highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references to only those languages you explicitly care about. Overall this plug-in is a welcome helper; switching the active culture once a locale script is referenced is straightforward, as sketched below.
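A sketch of that culture switch (the $.preferCulture function and the jQuery.glob.de-DE.js file name follow the plug-in's conventions at the time of writing and may differ in later versions):

// assumes the de-DE locale script (jQuery.glob.de-DE.js) has been referenced
$.preferCulture("de-DE");
alert($.format(1222.33, "c"));    // currency with German separators and symbol
alert($.parseDate("25.10.2010")); // parsed with the day.month.year pattern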
Even if you use it with a single locale (like en-US) and do no other localization, you'll gain solid support for number and date formatting, which is a vital feature of many applications.

Changes for Microsoft
It's good to see Microsoft coming out of its shell and away from the ‘not-built-here’ mentality that has been so pervasive in the past. It's especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft's own internal efforts in terms of design, usage model and… popularity. It's great to see that Microsoft is paying attention to what customers prefer to use and supporting the customer sentiment – even if it meant drastically changing course on policy and moving into a more open and sharing environment in the process. The additional jQuery support that has been introduced in the last two years certainly has made lives easier for many developers on the ASP.NET platform. It's also nice to see Microsoft submitting proposals through the standard jQuery plug-in process and getting accepted for various very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many, especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos!

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in jQuery  ASP.NET

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
Quick guide to Oracle IRM 11g index

This is the final article in the quick guide to Oracle IRM. If you've followed everything prior you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide, as this next article is common to both.

Contents:
Why this is the most important part...
Understanding the classification and standard rights model
Identifying business use cases
Creating an effective IRM classification model
One single classification across the entire business
A context for each and every possible granular use case
What makes a good context?
Deciding on the use of roles in the context
Reviewing the features and security for context roles
Summary

Why this is the most important part...
Now the real work begins. Installing and getting an IRM system running is as simple as following instructions. However, to actually have an IRM technology easily protecting your most sensitive information without interfering with your users' existing daily workflows, and to be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle have over 10 years of experience in helping customers, and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports, and no matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve:

Secure the content at the earliest point possible, and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results in a more secure deployment.

K.I.S.S. (Keep It Simple, Stupid). Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured, which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience.

Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them of the value of the technology to both your business and them.

Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple-to-use IRM deployments.
No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in solutions that are difficult to use and manage, and ultimately insecure. The advice and information that follows was born with Martin, and he's still delivering IRM consulting with customers; he can be found at www.thinkers.co.uk. It is from Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions.

Understanding the classification and standard rights model
The goal of any successful IRM deployment is to balance the increase in security the technology brings without over-complicating the way people use secured content, and to avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is, until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try to print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between: a group of related documents; the people that use the documents; the roles that these people perform; and the rights that these people need to perform their role. The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts, but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification, and a user needs only one right assigned to be able to access all those documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts against which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails.

Identifying business use cases
Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, or financial documents. For this and every subsequent use case you must understand how users create and work with documents, to whom they are distributed and how the recipients should interact with them.
A successful IRM deployment should start with one well-identified use case (we go through some examples towards the end of this article); then, after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and whether the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further.

Creating an effective IRM classification model
Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built-in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates. The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity. The question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum...

One single classification across the entire business
Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this. Document security classification decisions are simple: you only have one context to choose from! User provisioning is simple: just make sure everyone has a role in the only context in the business. Administration is very low: if you assign rights to groups from the business user repository you probably never have to touch IRM administration again. There are however some obvious downsides to this model. All users have access to all IRM-secured content, so potentially a salesperson could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is. You cannot delegate control of different documents to different parts of the business, which may not satisfy your regulatory requirements for the separation and delegation of duties. Changing a user's role affects every single document ever secured. Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value.
Once secured, IRM-protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today; imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

A context for each and every possible granular use case
Now let's think about the total opposite of a single-context design. What if you created a context for each and every single defined business need, and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity: restricted, confidential and top secret. Then let's say that each group and department needs to define access to information for both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group.

Q1FY2010 Restricted Internal - Engineering Group 1 - Research
Q1FY2010 Restricted Internal - Engineering Group 1 - Design
Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Restricted External - Engineering Group 1 - Research
Q1FY2010 Restricted External - Engineering Group 1 - Design
Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
Q1FY2010 Confidential Internal - Engineering Group 1 - Research
Q1FY2010 Confidential Internal - Engineering Group 1 - Design
Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Confidential External - Engineering Group 1 - Research
Q1FY2010 Confidential External - Engineering Group 1 - Design
Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Top Secret External - Engineering Group 1 - Research
Q1FY2010 Top Secret External - Engineering Group 1 - Design
Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing

The 18 contexts above cover just one engineering group; multiply by 6 groups and you have 108. You are then creating/reviewing another 18 every 3 months, so after a year a single group alone has accumulated 72 contexts. What would be the advantages of such a complex classification model? You can satisfy very granular rights requirements; for example, only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis. Your business may have very complex rights requirements and mapping this directly to IRM may be an obvious exercise. The disadvantages of such a classification model are significant... Huge administrative overhead: someone in the business must manage, review and administer each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year. From an end user's perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts.
They may be able to print content from one but not another, or be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology. Increased synchronization complexity: imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all their offline rights. Hard to understand who can do what with what: imagine being the VP of engineering and, as part of an internal security audit, you are asked the question, "What rights do researchers have to our top secret information?". In this complex model the answer is not simple; it would depend on many roles in many contexts. Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable and too few contexts do not give fine enough granularity.

What makes a good context?
Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However, there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created or exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which helps you define a context. First ask these questions about a set of documents: What is the topic? Who are the legitimate contributors on this topic? Who is the authorized readership? If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors; therefore it is better sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try and tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business.

Deciding on the use of roles in the context
Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon.
For each context, identify:

Administrative roles:
Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase; they are usually the person with the most at risk when sensitive information is lost.
Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.

Document related roles:
Contributors: the people who create and edit documents in this context.
Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See later discussion on Published-work and Work-in-Progress.)
Readers: the people who read documents from this context.

Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles obviously map directly to roles available in Oracle IRM.

Reviewing the features and security for context roles
At this point we have decided on a classification of information, and we understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First, think about why you are protecting the documents in the first place. It is to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent authorized people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions authorized users will often find ways to circumvent the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However, I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community.
These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

The big print issue...
Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain; it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights. Oracle IRM sealed documents can contain watermarks that expose information about the user, time and location of access and the classification of the document. This information would reside in the printed copy, making it easier to trace who printed it. Printed documents are slower to distribute in comparison to their digital counterparts, so time-sensitive information in printed format may present a lower risk. Print activity is audited, therefore you can monitor and react to users abusing print rights.

Summary
In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group to be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is introducing the complexity to deliver them at the right level. In another article I'm working on, I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to ask to get your IRM service deployed and successfully protecting your most sensitive information.

    Read the article

  • The case of the phantom ADF developer (and other yarns)

    - by Chris Muir
A few years of ADF experience means I see common mistakes made by different developers, some I regularly make myself. This post is designed to assist beginners to Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall, the case of the phantom ADF developer [add Scooby-Doo music here].

ADF Business Components - triggers, default table values and instead-of views
Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice 'n safe demo schemas provided with the Oracle database, such as the HR demo schema. However it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects. This is where unexpected problems can sneak in.

The crime
Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error:

JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x]

...where X is the primary key value of the row at hand. In a production environment with multiple users this error may be legit: one of the other users has updated the row since you queried it. Yet in a development environment this error is just plain confusing. If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible? There are no other users? It must be the phantom ADF developer! [insert dramatic music here] The following picture is what you'll see in the Business Component Browser, and you'll receive a similar error message via an ADF Faces page.

A false conclusion
What can possibly cause this issue if it isn't our phantom ADF developer? Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle-tier by a user? How can our phantom ADF developer even take out a lock if this is the case? Maybe ADF has a bug, maybe ADF isn't implementing record locking at all? Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record? Why do we see JBO-25014? Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database. First we need to verify ADF's locking strategy. It is determined by the Application Module's jbo.locking.mode property. The default (as of JDev 11.1.1.4.0 if memory serves me correctly) and recommended value is optimistic, and the other valid value is pessimistic. Next we need a mechanism to check that ADF is issuing the LOCK statements to the database. We could ask DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting -Djbo.debugoutput=console. At runtime this option turns on instrumentation within the ADF BC layer which, among a lot of extra detail displayed in the log window, will show the actual SQL statement issued to the database, including the LOCK statement we're looking to confirm.
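As a quick sketch, the flag is passed as a Java option on the project's run configuration (the exact dialog location varies by JDeveloper version):

-Djbo.debugoutput=console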
Setting our locking mode to pessimistic, and opening the Business Component Browser or a JSF page allowing us to edit a record (say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206), upon editing the record we see among others the following log entries:

[421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[423] Where binding param 1: 1206

As can be seen on line 422, a LOCK-FOR-UPDATE is indeed issued to the database. Later when we commit the record we see:

[441] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)
[443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[445] Update binding param 1: N
[446] Where binding param 2: 1206
[447] BookingsView1 notify COMMIT ...
[448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
[449] EntityCache close prepared statement

...and as a result the changes are saved to the database, and the lock is released. Let's see what happens when we use the optimistic locking mode, this time to change the same BOOKINGS record CHARGEABLE column again. As soon as we edit the record we see little activity in the logs, nothing to indicate any SQL statement, let alone a LOCK, has been taken out on the row. However when we save our record by issuing a commit, the following is recorded in the logs:

[509] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)
[511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[513] Where binding param 1: 1205
[514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)
[515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[517] Update binding param 1: Y
[518] Where binding param 2: 1205
[519] BookingsView1 notify COMMIT ...
[520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
[521] EntityCache close prepared statement

Again, even though we're seeing the mid-tier delay the LOCK statement until commit time, it is in fact occurring on line 512, and released as part of the commit issued on line 519. Therefore with either optimistic or pessimistic locking a lock is indeed issued. Our conclusion at this point must be: unless there's the unlikely cause that the LOCK statement is never really hitting the database, or the even less likely cause that the database has a bug, ADF does in fact take out a lock on the record before allowing the current user to update it. So there's no way our phantom ADF developer could even modify the record if he tried without at least someone receiving a lock error. Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem.

Who is the phantom?
At this point we'll need to conclude that the error message "JBO-25014: Another user has changed" is somehow legit, even though we don't understand yet what's causing it.
This leads onto two further questions: how does ADF know another user has changed the row, and what's been changed anyway? To answer the first question, the Fusion Guide's section 4.10.11, "How to Protect Against Losing Simultaneously Updated Data", which details the Entity Object Change-Indicator property, gives us the answer:

At runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database.

The guide further suggests how to make this solution more efficient:

You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row.....To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values.

We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates; rather it keeps a copy of the original record fetched, separate from the user-changed version of the record, and it compares the original record against the one in the database when the lock is taken out. If values don't match, be it via the default compare-all-columns behaviour or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error. This leaves one last question. Now we know the mechanism under which ADF identifies a changed row, what we don't know is what changed and who changed it.

The real culprit
What's changed? We know the record in the mid-tier has been changed by the user; however, ADF doesn't use the changed record in the mid-tier to compare to the database record, but rather a copy of the original record before it was changed. This leaves us to conclude the database record has changed, but how and by whom? There are three potential causes:

Database triggers
The database trigger, among other uses, can be configured to fire PL/SQL code on a database table insert, update or delete. In particular, in an insert or update the trigger can override the value assigned to a particular column. The trigger execution is actioned by the database on behalf of the user initiating the insert or update action. Why this causes the issue specific to our ADF use: when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database. However the cached record is instantly out of date, as the database triggers have modified the record that was actually written to the database. Thus when we update the record we just inserted or updated for a second time, ADF compares its original copy of the record to that in the database, and it detects the record has been changed, giving us JBO-25014. This is probably the most common cause of this problem.

Default values
A second reason this issue can occur is another database feature, default column values.
When creating a database table the schema designer can define default values for specific columns. For example a CREATED_DATE column could default to SYSDATE, or a flag column to Y or N. Default values are only used by the database when a user inserts a new record and the specific column is assigned NULL. The database in this case will overwrite the column with the default value. As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can only specifically occur in an insert-commit-update-commit scenario, not the update-commit-update-commit scenario.

Instead-of-trigger views
I must admit I haven't double-checked this scenario but it seems plausible: the Oracle database's instead-of-trigger view (sometimes referred to as an instead-of view). A view in the database is based on a query and, dependent on the query's complexity, may support insert, update and delete functionality to a limited degree. In order to support fully insertable, updateable and deletable views, Oracle introduced the instead-of view, which gives the view designer the ability to define not only the view query, but a set of programmatic PL/SQL triggers where the developer can define their own logic for inserts, updates and deletes. While this provides the database programmer a very powerful feature, it can cause issues for our ADF application. On inserting or updating a record in the instead-of view, the data that goes in is not necessarily the data that comes out when ADF compares the records, as the view developer has the option to practically do anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view's underlying query for fetching the data. Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of one developer on an isolated database. The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion. Assuming none of the above features are the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are possible), JBO-25014 is quite feasible in a production ADF application if 2 users modify the same record. At this point, under project timeline pressure, the obvious fix for developers is to drop both database triggers and default values from the underlying tables. However we must be careful that these legacy constructs aren't used and assumed to be in place by other legacy systems. Dropping the database triggers or default values that the existing Oracle Forms applications assume and require to be in place could cause unexpected behaviour and bugs in the Forms application. Proficient software engineers would recognize such a change may require a partial or full regression test of the existing legacy system, a potentially costly and time-consuming exercise; not ideal.

Solving the mystery once and for all
Luckily ADF has built-in functionality to deal with this issue, though that's not a surprise, as Oracle as the author of ADF also built the database and is fully aware of the Oracle database's feature set: at the Entity Object attribute level, the Refresh After Insert and Refresh After Update properties.
Simply selecting these instructs ADF BC, after inserting or updating a record to the database, to expect the database to modify the said attributes, and to read a copy of the changed attributes back into its cached mid-tier record. Thus the next time the developer modifies the current record, the comparison between the mid-tier record and the database record matches, and "JBO-25014: Another user has changed" is no longer an issue. [Post edit - as per the comment from Oracle's Steven Davelaar below, as he correctly points out, the above solution will not work for instead-of-trigger views as it relies on the SQL RETURNING clause, which is incompatible with this type of view] Alternatively you can set the Change Indicator on one of the attributes. This will work as long as the relating column for the attribute in the database itself isn't inadvertently updated. In turn you're possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator back off the original issue will return. For reference, a sketch of the kind of database constructs that cause the problem in the first place follows below.
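The following is a minimal sketch of the two most common culprits described above; the table, column and trigger names here are made up for illustration:

CREATE TABLE bookings (
  booking_no   NUMBER PRIMARY KEY,
  chargeable   VARCHAR2(1) DEFAULT 'N',  -- default value: replaces a NULL on insert
  updated_date DATE
);

-- a trigger that silently modifies the row behind ADF's back
CREATE OR REPLACE TRIGGER bookings_biu
BEFORE INSERT OR UPDATE ON bookings
FOR EACH ROW
BEGIN
  :NEW.updated_date := SYSDATE;
END;
/

With either construct in place, the row ADF caches after its INSERT or UPDATE no longer matches what the database actually stored, and the next update from the same session raises JBO-25014 unless Refresh After Insert/Update or a Change Indicator is configured as described above.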

    Read the article

  • Nepotism In The SQL Family

    - by Rob Farley
There’s a bunch of sayings about nepotism. It’s unpopular, unless you’re the family member who is getting the opportunity. But of course, so much in life (and career) is about who you know. From the perspective of the person who doesn’t get promoted (when the family member is), nepotism is simply unfair; even more so when the promoted one seems less than qualified, or incompetent in some way. We definitely get a bit miffed about that. But let’s also look at it from the other side of the fence – the person who did the promoting. To them, their son/daughter/nephew/whoever is just another candidate, but one in whom they have more faith. They’ve spent longer getting to know that person. They know their weaknesses and their strengths, and have seen them in all kinds of situations. They expect them to stay around in the company longer. And yes, they may have plans for that person to inherit one day. Sure, they have a vested interest, because they’d like their family members to have strong careers, but it’s not just about that – it’s often best for the company as well. I’m not announcing that the next LobsterPot employee is one of my sons (although I wouldn’t be opposed to the idea of getting them involved), but I am admitting that almost all the LobsterPot employees are SQLFamily members…

…which makes this post good for T-SQL Tuesday, this month hosted by Jeffrey Verheul (@DevJef). You see, SQLFamily is the concept that the people in the SQL Server community are close. We have something in common that goes beyond ordinary friendship. We might only see each other a few times a year, at events like the PASS Summit and SQLSaturdays, but the bonds that are formed are strong, going far beyond typical professional relationships. And these are the people that I am prepared to hire. People that I have got to know. I get to know their skill level, how well they explain things, how confident people are in their expertise, and what their values are. Of course there are people that I wouldn’t hire, but I’m a lot more comfortable hiring someone that I’ve already developed a feel for. I need to trust the LobsterPot brand to people, and that means they need to have a similar value system to me. They need to have a passion for helping people and doing what they can to make a difference. Above all, they need to have integrity. Therefore, I believe in nepotism. All the people I’ve hired so far are people from the SQL community. I don’t know whether I’ll always be able to hire that way, but I have no qualms admitting that the things I look for in an employee are things that I can recognise best in those that are referred to as SQLFamily…

…like Ted Krueger (@onpnt), LobsterPot’s newest employee and the guy who is representing our brand in America. I’m completely proud of this guy. He’s everything I want in an employee. He’s an experienced consultant (even wrote a book on it!), loving husband and father, genuine expert, and incredibly respected by his peers. It’s not favouritism, it’s just choosing someone I’ve been interviewing for years.

@rob_farley

    Read the article

  • Web Matrix released

    - by TATWORTH
Microsoft have now released Web Matrix (and ASP.NET MVC3, if you are so inclined!). One significant utility is IIS Express, which will replace Cassini. It is worth noting that SP1 for VS2010 should be out in Q1.

Links:
http://www.hanselman.com/blog/ASPNETMVC3WebMatrixNuGetIISExpressAndOrchardReleasedTheMicrosoftJanuaryWebReleaseInContext.aspx
http://www.hanselman.com/blog/LinkRollupNewDocumentationAndTutorialsFromWebPlatformAndTools.aspx
http://arstechnica.com/microsoft/news/2011/01/microsoft-releases-free-webmatrix-web-development-tool.ars

I am impressed by the copious tutorials on MVC, which I include below:

Intro to ASP.NET MVC 3 onboarding series. Scott Hanselman and Rick Anderson collaboration, Mike Pope (Editor). Both C# and VB versions:
Intro to ASP.NET MVC 3
Adding a Controller
Adding a View
Entity Framework Code-First Development
Accessing your Model's Data from a Controller
Adding a Create Method and Create View
Adding Validation to the Model
Adding a New Field to the Movie Model and Table
Implementing Edit, Details and Delete
Source code for this series

MVC 3 updated and new tutorials/API Reference on MSDN. Rick Anderson (Lead Programming Writer), Keith Newman and Mike Pope (Editor):
ASP.NET MVC 3 Content Map
ASP.NET MVC Overview
MVC Framework and Application Structure
Understanding MVC Application Execution
Compatibility of ASP.NET Web Forms and MVC
Walkthrough: Creating a Basic ASP.NET MVC Project
Walkthrough: Using Forms Authentication in ASP.NET MVC
Controllers and Action Methods in ASP.NET MVC Applications
Using an Asynchronous Controller in ASP.NET MVC
Views and UI Rendering in ASP.NET MVC Applications
Rendering a Form Using HTML Helpers
Passing Data in an ASP.NET MVC Application
Walkthrough: Using Templated Helpers to Display Data in ASP.NET MVC
Creating an ASP.NET MVC View by Calling Multiple Actions
Models and Validation in ASP.NET MVC
How to: Validate Model Data Using DataAnnotations Attributes
Walkthrough: Using MVC View Templates
How to: Implement Remote Validation in ASP.NET MVC
Walkthrough: Adding AJAX Scripting
Walkthrough: Organizing an Application using Areas
Filtering in ASP.NET MVC
Creating Custom Action Filters
How to: Create a Custom Action Filter
Unit Testing in ASP.NET MVC Applications
Walkthrough: Using TDD with ASP.NET MVC
How to: Add a Custom ASP.NET MVC Test Framework in Visual Studio
ASP.NET MVC 3 Reference
System.Web.Mvc
System.Web.Mvc.Ajax
System.Web.Mvc.Async
System.Web.Mvc.Html
System.Web.Mvc.Razor

    Read the article

  • DirectX Unproject troubles

    - by pivotnig
I have an orthographic projection and I try to unproject a point from screen space. Following are the view and projection matrices:

var w2 = ScreenWidthInPixels / 2;
var h2 = ScreenHeightInPixels / 2;
view = Matrix.LookAtLH(new Vector3(0, 0, -1), new Vector3(0, 0, 0), Vector3.UnitY);
proj = Matrix.OrthoOffCenterLH(-w2, w2, -h2, h2, 0.1f, 10f);

Here is how I unproject a Point p; the point is given in screen pixels:

var m = Vector3.Unproject(p,
    0, 0,
    ScreenWidthInPixels, ScreenHeightInPixels,
    0.1f, 10f, // znear and zfar
    view * proj);

My code doesn't work: the vector m contains only NaN. When I try to invert view * proj I get back a matrix containing only zeros. So I suspect my problem has something to do with the orthographic projection matrix. Here are my questions:

Could the problem be caused by an underflow due to the large values in the OrthoOffCenterLH projection?
What parameters do I have to pass for x, y, width and height in Unproject(...)?
What significance do the minZ and maxZ parameters have in Unproject(...)?
Does it matter what I pass for p.Z in Unproject(...)?

    Read the article

  • SQL SERVER – sp_describe_first_result_set New System Stored Procedure in SQL Server 2012

    - by pinaldave
I might have said this earlier many times, but I will say it again – SQL Server never ceases to amaze me. Here is an example of it: sp_describe_first_result_set. I stumbled upon it when I was looking for something else on BOL. This new system stored procedure attracted me to experiment with it. This SP does exactly what its name suggests – it describes the first result set. Let us see a very simple example of the same. Please note that this will work only on SQL Server 2012.

EXEC sp_describe_first_result_set
N'SELECT * FROM AdventureWorks.Sales.SalesOrderDetail', NULL, 1
GO

Here is the partial resultset. Now let us take this simple example to the next level and learn one more interesting detail about this procedure. First I will create a view and then we will use the same procedure over the view.

USE AdventureWorks
GO
CREATE VIEW dbo.MyView
AS
SELECT [SalesOrderID] soi_v
      ,[SalesOrderDetailID] sodi_v
      ,[CarrierTrackingNumber] stn_v
FROM [Sales].[SalesOrderDetail]
GO

Now let us execute the above stored procedure with various options. You can notice I am changing the very last parameter which I am passing to the stored procedure. This option is known as @browse_information_mode.

EXEC sp_describe_first_result_set
N'SELECT soi_v soi, sodi_v sodi, stn_v stn FROM MyView', NULL, 0;
GO
EXEC sp_describe_first_result_set
N'SELECT soi_v soi, sodi_v sodi, stn_v stn FROM MyView', NULL, 1;
GO
EXEC sp_describe_first_result_set
N'SELECT soi_v soi, sodi_v sodi, stn_v stn FROM MyView', NULL, 2;
GO

Here is the result of all three queries together in a single image for easier comparison. You can see that when @browse_information_mode is set to 1 the resultset describes the details of the original source database, schema, as well as source table. When it is set to 2 the resultset describes the details of the view as the source. I found it really interesting that there now exists a system stored procedure which describes the resultset of a query.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, T SQL, Technology

    Read the article
