Search Results

Search found 9796 results on 392 pages for 'arrange items'.


  • NHibernate multilevel hierarchy save error?

    - by nisbus
    Hi, I have a database with a six-level hierarchy and a domain model on top of it, something like this:

        Category
          SubCategory
            Container
              DataDescription | MetaData
                Data

    The mapping I'm using follows this pattern:

        <class name="Category, Sample" table="Categories">
          <id name="Id" column="Id" type="System.Int32" unsaved-value="0">
            <generator class="native"/>
          </id>
          <property name="Name" access="property" type="String" column="Name"/>
          <property name="Metadata" access="property" type="String" column="Metadata"/>
          <bag name="SubCategories" cascade="save-update" lazy="true" inverse="true">
            <key column="Id" foreign-key="category_subCategory_fk"/>
            <one-to-many class="SubCategory, Sample" />
          </bag>
        </class>

        <class name="SubCategory, Sample" table="SubCategories">
          <id name="Id" column="Id" type="System.Int32" unsaved-value="0">
            <generator class="native"/>
          </id>
          <many-to-one name="Category" class="Category, Sample" foreign-key="subCat_category_fk"/>
          <property name="Name" access="property" type="String"/>
          <property name="Metadata" access="property" type="String"/>
          <bag name="Containers" inverse="true" cascade="save-update" lazy="true">
            <key column="Id" foreign-key="subCat_container_fk" />
            <one-to-many class="Container, Sample" />
          </bag>
        </class>

        <class name="Container, Sample" table="Containers">
          <id name="Id" column="Id" type="System.Int32" unsaved-value="0">
            <generator class="assigned"/>
          </id>
          <many-to-one name="SubCategory" class="SubCategory, Sample" foreign-key="container_subCat_fk"/>
          <property name="Name" access="property" type="String" column="Name"/>
          <bag name="DataDescription" cascade="all" lazy="true" inverse="true">
            <key column="Id" foreign-key="container_DataDescription_fk"/>
            <one-to-many class="DataDescription, Sample" />
          </bag>
          <bag name="MetaData" cascade="all" lazy="true" inverse="true">
            <key column="Id" foreign-key="container_metadata_cat_fk"/>
            <one-to-many class="MetaData, Sample" />
          </bag>
        </class>

    For some reason, when I try to save the category (with the subcategory, container, etc. attached) I get a foreign key violation from the database. The code is something like this (pseudocode):

        var category = new Category();
        var subCategory = new SubCategory();
        var container = new Container();
        var dataDescription = new DataDescription();
        var metaData = new MetaData();

        category.AddSubCategory(subCategory);
        subCategory.AddContainer(container);
        container.AddDataDescription(dataDescription);
        container.AddMetaData(metaData);

        Session.Save(category);

    Here is the log from this test:

        DEBUG NHibernate.SQL - INSERT INTO Categories (Name, Metadata) VALUES (@p0, @p1); select SCOPE_IDENTITY();
          @p0 = 'Unit test', @p1 = 'unit test'
        DEBUG NHibernate.SQL - INSERT INTO SubCategories (Category, Name, Metadata) VALUES (@p0, @p1, @p2); select SCOPE_IDENTITY();
          @p0 = '1', @p1 = 'Unit test', @p2 = 'unit test'
        DEBUG NHibernate.SQL - INSERT INTO Containers (SubCategory, Name, Frequency, Scale, Measurement, Currency, Metadata, Id) VALUES (@p0, @p1, @p2, @p3, @p4, @p5, @p6, @p7);
          @p0 = '1', @p1 = 'Unit test', @p2 = '15', @p3 = '1', @p4 = '1', @p5 = '1', @p6 = 'unit test', @p7 = '0'
        ERROR NHibernate.Util.ADOExceptionReporter - The INSERT statement conflicted with the FOREIGN KEY constraint "subCat_container_fk". The conflict occurred in database "Sample", table "dbo.SubCategories", column 'Id'.

    The methods for adding items to objects always follow this pattern:

        public void AddSubCategory(ISubCategory subCategory)
        {
            subCategory.Category = this;
            SubCategories.Add(subCategory);
        }

    What am I missing? Thanks, nisbus
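    A likely culprit, judging only from the mapping and the log (a guess, not a verified fix): every <bag> declares <key column="Id"/>, which tells NHibernate that the child table's foreign-key column is the child's own primary-key column. The <key> column should instead name the FK column in the child table; the insert log suggests those columns are "Category" on SubCategories and "SubCategory" on Containers. Note also that Container uses <generator class="assigned"/> but the test never assigns an Id, so 0 is inserted. A corrected bag might look like this (the column name is an assumption taken from the log):

        <bag name="Containers" inverse="true" cascade="save-update" lazy="true">
          <!-- the FK column in the Containers table, not the child's PK column -->
          <key column="SubCategory" foreign-key="subCat_container_fk"/>
          <one-to-many class="Container, Sample" />
        </bag>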


  • WPF ListBox and CheckBox

    - by Tan
    Hi, I am using a ListBox to show a list of items, and in the ListBox I have a ToggleButton on every item. When I click the toggle button, its state is pressed, but when I scroll down in the ListBox and scroll back up again, the ToggleButton state is no longer pressed. How can I prevent this? Please help. Here's my ItemTemplate:

        <ListBox.ItemTemplate>
          <DataTemplate>
            <StackPanel Margin="0,3,0,0">
              <Border BorderBrush="Black" BorderThickness="1,1,1,1">
                <Border.Background>
                  <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0" MappingMode="RelativeToBoundingBox">
                    <GradientStop Color="#FFECECEC" Offset="1"/>
                    <GradientStop Color="#FFE8E8E8"/>
                    <GradientStop Color="#FFBDBDBD" Offset="0.153"/>
                    <GradientStop Color="#FFE8E8E8" Offset="0.904"/>
                  </LinearGradientBrush>
                </Border.Background>
                <Border.Style>
                  <Style>
                    <Style.Triggers>
                      <DataTrigger Binding="{Binding Path=IsSelected, RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type ListBoxItem}}}" Value="True">
                        <Setter Property="Border.Height" Value="100"/>
                        <Setter Property="Border.Background">
                          <Setter.Value>
                            <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0" MappingMode="RelativeToBoundingBox">
                              <GradientStop Color="DarkGray" Offset="1"/>
                              <GradientStop Color="#FFE8E8E8"/>
                              <GradientStop Color="#FFBDBDBD" Offset="0.153"/>
                              <GradientStop Color="DarkGray" Offset="0.904"/>
                            </LinearGradientBrush>
                          </Setter.Value>
                        </Setter>
                      </DataTrigger>
                    </Style.Triggers>
                  </Style>
                </Border.Style>
                <StackPanel Orientation="Horizontal" VerticalAlignment="Center">
                  <Grid>
                    <Grid.ColumnDefinitions>
                      <ColumnDefinition Width="500"/>
                      <ColumnDefinition Width="100"/>
                      <ColumnDefinition Width="55"/>
                    </Grid.ColumnDefinitions>
                    <!-- Pick number -->
                    <StackPanel Grid.Column="0" VerticalAlignment="Center" Orientation="Vertical">
                      <TextBlock Text="{Binding Path=FtgNamn}" FontWeight="Bold" FontSize="22pt" FontFamily="Calibri"/>
                      <TextBlock Text="{Binding Path=LevsAttBeskr}" FontSize="18pt" FontFamily="Calibri"/>
                    </StackPanel>
                    <!-- Pick quantity -->
                    <StackPanel Grid.Column="1" VerticalAlignment="Center">
                      <TextBlock Text="{Binding Path=Antal}" FontSize="44pt" FontFamily="Calibri"/>
                    </StackPanel>
                    <!-- Checkbox -->
                    <StackPanel Grid.Column="2" VerticalAlignment="Center" HorizontalAlignment="Center">
                      <ToggleButton Name="Check" Width="40" Height="40" Click="Check_Click" Tag="{Binding Path=Plocklista}">
                        <ToggleButton.Style>
                          <Style TargetType="ToggleButton">
                            <Setter Property="Template">
                              <Setter.Value>
                                <ControlTemplate TargetType="{x:Type ToggleButton}">
                                  <Border x:Name="InnerBorder" Background="White" BorderBrush="Black" BorderThickness="1"/>
                                  <ControlTemplate.Triggers>
                                    <Trigger Property="IsChecked" Value="True">
                                      <Setter TargetName="InnerBorder" Property="Background">
                                        <Setter.Value>
                                          <ImageBrush ImageSource="/Images/button_ok.png"/>
                                        </Setter.Value>
                                      </Setter>
                                      <Setter TargetName="InnerBorder" Property="BorderThickness" Value="0"/>
                                    </Trigger>
                                  </ControlTemplate.Triggers>
                                </ControlTemplate>
                              </Setter.Value>
                            </Setter>
                          </Style>
                        </ToggleButton.Style>
                      </ToggleButton>
                    </StackPanel>
                  </Grid>
                  <Border BorderBrush="DarkGray" BorderThickness="0,0,1,0">
                  </Border>
                  <TextBlock Width="100" Text="{Binding Path=Quantity}" FontSize="44pt" FontFamily="Calibri"/>
                  <CheckBox Width="78"/>
                </StackPanel>
              </Border>
            </StackPanel>
          </DataTemplate>
        </ListBox.ItemTemplate>
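    By default a ListBox virtualizes its item containers: when a row scrolls out of view, its ToggleButton is discarded or recycled, and since IsChecked lives only in the control, the state is lost. The usual fix is to keep the state on the data item and bind to it. A sketch, assuming a Boolean property on the bound item (here called IsPicked, which is not in the original code) that raises change notifications:

        <ToggleButton Name="Check" Width="40" Height="40" Click="Check_Click"
                      IsChecked="{Binding Path=IsPicked, Mode=TwoWay}"
                      Tag="{Binding Path=Plocklista}">

    A blunter alternative is to turn virtualization off on the ListBox (VirtualizingStackPanel.IsVirtualizing="False"), at the cost of memory on long lists.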


  • jQuery featured content slider

    - by azz0r
    Hello, I have a content area that loops through divs and shows their content. I'm having trouble making it display the initial content: it waits 5000 milliseconds before triggering the very first content area to display. Can anyone spot an easy way to make it display the initial area immediately, then slide to the next area every 5000 milliseconds?

    JS:

        Model.FeatureBar = {
            current: 0,
            items: {},
            init: function init(options) {
                var me = this;
                me.triggers = [];
                me.slides = [];
                this.container = jQuery('#features');
                jQuery('.feature').each(function(i) {
                    me.triggers[i] = {
                        url: jQuery(this).children('.feature-title a').href,
                        title: jQuery(this).children('.feature-title'),
                        description: jQuery(this).children('.feature-description')
                    };
                    me.slides[i] = { image: jQuery(this).children('.feature-image') };
                });
                for (var i in this.slides) {
                    this.slides[i].image.hide();
                    this.triggers[i].description.hide();
                }
                setInterval(function(){ Model.FeatureBar.next(); }, 5000);
            },
            next: function next() {
                var i = (this.current + 1 < this.triggers.length) ? this.current + 1 : 0;
                this.goToItem(i);
            },
            previous: function previous() {
                var i = (this.current - 1 > 1) ? this.current - 1 : this.triggers.length;
                this.goToItem(i);
            },
            goToItem: function goToItem(i) {
                if (!this.slides[i]) throw 'Slide out of range';
                this.triggers[this.current].description.slideUp();
                this.triggers[i].description.slideDown();
                this.slides[this.current].image.hide();
                this.slides[i].image.show();
                this.current = i;
            }
        }

    HTML:

        <div id="features">
          <div class="feature current">
            <div class="movie-cover">
              <a href="/police/movie"><img title="" alt="" src="/design/images/four-oh-four.png"></a>
            </div>
            <div class="feature-image" style="display: none;">
              <img src="/design/images/four-oh-four.png">
            </div>
            <h2 class="feature-title"><a href="/police/movie">Police</a></h2>
            <p class="feature-description" style="overflow: hidden; display: block; height: 50.8604px; margin-top: 0px; margin-bottom: 0px; padding-top: 0px; padding-bottom: 0px;">DESCRIPTION</p>
          </div>
          <div class="feature">
            <div class="movie-cover">
              <a href="/rude/movie"><img title="" alt="" src="/design/images/four-oh-four.png"></a>
            </div>
            <div class="feature-image" style="display: none;">
              <img src="/design/images/four-oh-four.png">
            </div>
            <h2 class="feature-title"><a href="/rude/movie">Rude</a></h2>
            <p class="feature-description" style="overflow: hidden; display: block; height: 18.3475px; margin-top: 0px; margin-bottom: 0px; padding-top: 0px; padding-bottom: 0px;">DESCRIPTION</p>
          </div>
          <div class="feature">
            <div class="movie-cover">
              <a href="/brits/movie"><img title="" alt="" src="/design/images/four-oh-four.png"></a>
            </div>
            <div class="feature-image" style="display: block;">
              <img src="/design/images/four-oh-four.png">
            </div>
            <h2 class="feature-title"><a href="/brits/movie">Brits</a></h2>
            <p class="feature-description" style="overflow: hidden; display: block; height: 40.1549px; margin-top: 0px; margin-bottom: 0px; padding-top: 0px; padding-bottom: 0px;">DESCRIPTION</p>
          </div>
          <div class="feature">
            <div class="movie-cover">
              <a href="/indie/movie"><img title="" alt="" src="/design/images/four-oh-four.png"></a>
            </div>
            <div class="feature-image" style="display: none;">
              <img src="/design/images/four-oh-four.png">
            </div>
            <h2 class="feature-title"><a href="/indie/movie">Indie</a></h2>
            <p class="feature-description" style="overflow: hidden; display: block; height: 42.4247px; margin-top: 0px; margin-bottom: 0px; padding-top: 0px; padding-bottom: 0px;">DESCRIPTION</p>
          </div>
        </div>
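    A minimal fix sketch, assuming the markup above: show the first slide before starting the timer, at the end of init. The bounds logic in previous() also looks off, so a hedged correction is included as a comment.

        // at the end of init(), before setInterval(...):
        this.current = 0;
        this.slides[0].image.show();
        this.triggers[0].description.slideDown();
        setInterval(function(){ Model.FeatureBar.next(); }, 5000);

        // previous() should probably wrap like this, since the posted version
        // can pass this.triggers.length (out of range) to goToItem():
        // var i = (this.current - 1 >= 0) ? this.current - 1 : this.triggers.length - 1;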


  • C++ Sentinel/Count Controlled Loop beginning programming

    - by Bryan Hendricks
    Hello all, this is my first post. I'm working on a homework assignment with the following parameters.

    Piecework: workers are paid by the piece. Often, workers who produce a greater quantity of output are paid at a higher rate.

        1 - 199 pieces completed    $0.50 each
        200 - 399                   $0.55 each (for all pieces)
        400 - 599                   $0.60 each
        600 or more                 $0.65 each

    Input: for each worker, input the name and number of pieces completed.

        Name                Pieces
        Johnny Begood       265
        Sally Great         650
        Sam Klutz           177
        Pete Precise        400
        Fannie Fantastic    399
        Morrie Mellow       200

    Output: print an appropriate title and column headings. There should be one detail line for each worker, which shows the name, number of pieces, and the amount earned. Compute and print totals of the number of pieces and the dollar amount earned.

    Processing: for each person, compute the pay earned by multiplying the number of pieces by the appropriate price. Accumulate the total number of pieces and the total dollar amount paid.

    Sample program output:

        Piecework Weekly Report
        Name                Pieces      Pay
        Johnny Begood       265         145.75
        Sally Great         650         422.50
        Sam Klutz           177         88.5
        Pete Precise        400         240.00
        Fannie Fantastic    399         219.45
        Morrie Mellow       200         110.00
        Totals              2091        1226.20

    You are required to code, compile, link, and run a sentinel-controlled loop program that transforms the input to the output specifications shown above. The input items should be entered into a text file named piecework1.dat and the output stored in piecework1.out. The program filename is piecework1.cpp. Copies of these three files should be e-mailed to me in their original form. Read the name using a single variable as opposed to two different variables; to accomplish this, you must use the getline(stream, variable) function as discussed in class, except that you will replace cin with your text-file stream variable name. Do not forget to code the compiler directive #include <string> at the top of your program to acknowledge the use of the string variable, name. Your nested if-else statement, accumulators, and count-controlled loop should be properly designed to process the data correctly.

    The code below will run, but does not produce any output. I think it needs something around line 57, like a count control to stop the loop, something like this (and this is just an example, which is why it is not in the code):

        count = 1;
        while (count <= 4)

    Can someone review the code and tell me what kind of count I need to introduce, and whether there are any other changes that need to be made? Thanks.

        //COS 502-90
        //November 2, 2012
        //This program uses a sentinel-controlled loop that transforms input to output.
        #include <iostream>
        #include <fstream>
        #include <iomanip>   // output formatting
        #include <string>    // string variables
        using namespace std;

        int main()
        {
            double pieces;   // number of pieces made
            double rate;     // amount paid per amount produced
            double pay;      // amount earned
            string name;     // name of worker

            ifstream inFile;
            ofstream outFile;

            //*********** input statements ****************************
            inFile.open("Piecework1.txt");    // opens the input text file
            outFile.open("piecework1.out");   // opens the output text file

            outFile << setprecision(2) << showpoint;
            outFile << name << setw(6) << "Pieces" << setw(12) << "Pay" << endl;
            outFile << "_____" << setw(6) << "_____" << setw(12) << "_____" << endl;

            getline(inFile, name, '*');   // priming read
            inFile >> pieces >> pay >> rate;

            while (name != "End of File")   // while condition test
            {
                // beginning of loop
                pay = pieces * rate;
                getline(inFile, name, '*');   // get next name
                inFile >> pieces;             // get next pieces
            }   // end of loop

            inFile.close();
            outFile.close();
            return 0;
        }
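    A working sentinel-style sketch, under two assumptions that are mine rather than the assignment's: each record in piecework1.dat looks like "Johnny Begood*265" (a '*' terminates the name, matching the getline delimiter in the posted code), and the loop ends when a read fails at end of file instead of comparing against a literal sentinel name. Nothing is ever written inside the posted loop, which is why the output file stays empty; here each iteration writes its detail line.

        #include <fstream>
        #include <iomanip>
        #include <iostream>
        #include <string>
        using namespace std;

        int main()
        {
            ifstream inFile("piecework1.dat");
            ofstream outFile("piecework1.out");
            outFile << fixed << setprecision(2);
            outFile << "Piecework Weekly Report\n\n";
            outFile << left << setw(20) << "Name" << right << setw(8) << "Pieces" << setw(12) << "Pay" << '\n';

            string name;
            double pieces, totalPieces = 0.0, totalPay = 0.0;
            // read until a name or pieces value cannot be extracted (end of file)
            while (inFile >> ws && getline(inFile, name, '*') && inFile >> pieces)
            {
                double rate;                       // nested if-else picks the rate
                if (pieces >= 600)      rate = 0.65;
                else if (pieces >= 400) rate = 0.60;
                else if (pieces >= 200) rate = 0.55;
                else                    rate = 0.50;

                double pay = pieces * rate;
                totalPieces += pieces;             // accumulators
                totalPay += pay;
                outFile << left << setw(20) << name << right << setw(8) << pieces << setw(12) << pay << '\n';
            }
            outFile << left << setw(20) << "Totals" << right << setw(8) << totalPieces << setw(12) << totalPay << '\n';
            return 0;
        }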


  • Self-relation messes up contents in fetching

    - by holographix
    Hi folks, I'm dealing with an annoying problem in Core Data. I've got an entity named Character with a reflexive to-many relationship, charRel. I'm filling it in two steps: 1) fill the attributes of the entity; 2) fill the Character relation (charRel).

    I'm feeding the contents by pulling the data from an XML file; the feeding code is this:

        curStr = [[NSMutableString stringWithString:[curStr stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]] retain];
        NSLog(@"Parsing relation within these keys %@, in order to get'em associated", curStr);
        NSArray *chunks = [curStr componentsSeparatedByString:@","];
        for (NSString *relId in chunks) {
            NSLog(@"Associating %@ with id %@", [currentCharacter valueForKey:@"character_id"], relId);
            NSFetchRequest *request = [[NSFetchRequest alloc] init];
            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"character_id == %@", relId];
            [request setEntity:[NSEntityDescription entityForName:@"Character" inManagedObjectContext:[self managedObjectContext]]];
            [request setPredicate:predicate];
            NSError *error = nil;
            NSArray *results = [[self managedObjectContext] executeFetchRequest:request error:&error];
            // error handling code
            if (error != nil) {
                NSLog(@"[SYMBOL CORRELATION]: retrieving correlated symbol error: %@", [error localizedDescription]);
            }
            else if ([results count] > 0) {
                Character *relatedChar = [results objectAtIndex:0]; // grab the first result in the stack, could be done better!
                [currentCharacter addCharRelObject:relatedChar];
                // VICE VERSA RELATIONS
                NSArray *charRels = [relatedChar valueForKey:@"charRel"];
                BOOL alreadyRelated = NO;
                for (Character *charRel in charRels) {
                    if ([[charRel valueForKey:@"character_id"] isEqual:[currentCharacter valueForKey:@"character_id"]]) {
                        alreadyRelated = YES;
                        break;
                    }
                }
                if (!alreadyRelated) {
                    NSLog(@"\n\t\trelating %@ with %@", [relatedChar valueForKey:@"character_id"], [currentCharacter valueForKey:@"character_id"]);
                    [relatedChar addCharRelObject:currentCharacter];
                }
            }
            else {
                NSLog(@"[SYMBOL CORRELATION]: related symbol was not found! ##SKIPPING-->");
            }
            [request release];
        }
        NSLog(@"\t\t### TOTAL OF REALTIONS FOR ID %@: %d\n%@", [currentCharacter valueForKey:@"character_id"], [[currentCharacter valueForKey:@"charRel"] count], currentCharacter);
        error = nil;

        /* SAVE THE CONTEXT */
        if (![managedObjectContext save:&error]) {
            NSLog(@"Whoops, couldn't save the symbol record: %@", [error localizedDescription]);
            NSArray *detailedErrors = [[error userInfo] objectForKey:NSDetailedErrorsKey];
            if (detailedErrors != nil && [detailedErrors count] > 0) {
                for (NSError *detailedError in detailedErrors) {
                    NSLog(@"\n################\t\tDetailedError: %@\n################", [detailedError userInfo]);
                }
            }
            else {
                NSLog(@" %@", [error userInfo]);
            }
        }

    At this point, when I print out the values of currentCharacter, everything looks perfect; every relation is in its place. For example, in this log we can clearly see that this element has got 3 items in charRel:

        <Character: 0x5593af0> (entity: Character; id: 0x55938c0 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p86> ; data: {
            catRel = "<relationship fault: 0x9a29db0 'catRel'>";
            charRel = (
                "0x9a1f870 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p74>",
                "0x9a14bd0 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p109>",
                "0x558ba00 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p5>"
            );
            "character_id" = 254;
            examplesRel = "<relationship fault: 0x9a29df0 'examplesRel'>";
            meaning = "\n Left";
            pinyin = "\n zu\U01d2";
            "pronunciation_it" = "\n zu\U01d2";
            strokenumber = 5;
            text = "\n \n <p>The most ancient form of this symbol";
            unicodevalue = "\n \U5de6";
        })

    Then, when I need to retrieve this item, I perform an extraction like this:

        // first I get the single Character record
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        NSError *error;
        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"character_id == %@", self.char_id];
        [request setEntity:[NSEntityDescription entityForName:@"Character" inManagedObjectContext:_context]];
        [request setPredicate:predicate];
        NSArray *fetchedObjs = [_context executeFetchRequest:request error:&error];

    But when, for instance, I print out the contents of charRel:

        NSArray *correlations = [singleCharacter valueForKey:@"charRel"];
        NSLog(@"CHARACTER OBJECT \n%@", correlations);

    I get this:

        Relationship fault for (<NSRelationshipDescription: 0x5568520>), name charRel, isOptional 1, isTransient 0, entity Character, renamingIdentifier charRel, validation predicates (), warnings (), versionHashModifier (null), destination entity Character, inverseRelationship (null), minCount 1, maxCount 99 on 0x6937f00

    Hope I made myself clear. This thing is driving me insane; I've googled all over the world but couldn't find a solution (which makes me think it's an issue related to bad coding somehow :P). Thank you in advance, guys. k
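    That log output is not necessarily missing data: Core Data hands back to-many relationships as faults until they are touched, and NSLog simply prints the fault placeholder. A to-many relationship is also an NSSet, not an NSArray. A sketch of reading it back, assuming the entity and key names above:

        NSSet *correlations = [singleCharacter valueForKey:@"charRel"]; // to-many => NSSet, not NSArray
        NSLog(@"count: %lu", (unsigned long)[correlations count]);      // counting or enumerating fires the fault
        for (NSManagedObject *related in correlations) {
            NSLog(@"related character_id: %@", [related valueForKey:@"character_id"]);
        }

    One more thing worth checking in the model: the fault description says "inverseRelationship (null)", which suggests charRel has no inverse configured; reflexive relationships without a proper inverse are a common source of surprises like this.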


  • Core Data: Fetch all entities in a to-many relationship of a particular object?

    - by Björn Marschollek
    Hi there, in my iPhone application I am using a simple Core Data model with two entities, Item and Property:

        Item            Property
          name            name
          properties      value
                          item

    Item has one attribute (name) and one to-many relationship (properties); its inverse relationship is item. Property has two attributes and the corresponding inverse relationship.

    Now I want to show my data in table views on two levels. The first one lists all items; when a row is selected, a new UITableViewController is pushed onto my UINavigationController's stack. The new UITableView is supposed to show all properties (i.e. their names) of the selected item. To achieve this I use an NSFetchedResultsController stored in an instance variable. On the first level everything works fine when setting up the NSFetchedResultsController like this:

        -(NSFetchedResultsController *)fetchedResultsController {
            if (fetchedResultsController) return fetchedResultsController;

            // goal: tell the FRC to fetch all Item objects.
            NSFetchRequest *fetch = [[NSFetchRequest alloc] init];
            NSEntityDescription *entity = [NSEntityDescription entityForName:@"Item" inManagedObjectContext:self.moContext];
            [fetch setEntity:entity];

            NSSortDescriptor *sort = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES];
            [fetch setSortDescriptors:[NSArray arrayWithObject:sort]];
            [fetch setFetchBatchSize:10];

            NSFetchedResultsController *frController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetch managedObjectContext:self.moContext sectionNameKeyPath:nil cacheName:@"cache"];
            self.fetchedResultsController = frController;
            fetchedResultsController.delegate = self;

            [sort release];
            [frController release];
            [fetch release];
            return fetchedResultsController;
        }

    However, on the second-level UITableView I seem to be doing something wrong. I implemented the fetchedResultsController in a similar way:

        -(NSFetchedResultsController *)fetchedResultsController {
            if (fetchedResultsController) return fetchedResultsController;

            // goal: tell the FRC to fetch all Property objects that belong to the previously selected item
            NSFetchRequest *fetch = [[NSFetchRequest alloc] init];

            // fetch all Property entities.
            NSEntityDescription *entity = [NSEntityDescription entityForName:@"Property" inManagedObjectContext:self.moContext];
            [fetch setEntity:entity];

            // limit to those entities that belong to the particular item
            NSPredicate *predicate = [NSPredicate predicateWithFormat:[NSString stringWithFormat:@"item.name like '%@'", self.item.name]];
            [fetch setPredicate:predicate];

            // sort it. Boring.
            NSSortDescriptor *sort = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES];
            [fetch setSortDescriptors:[NSArray arrayWithObject:sort]];

            NSError *error = nil;
            NSLog(@"%d entities found.", [self.moContext countForFetchRequest:fetch error:&error]);
            // logs "3 entities found."; I added those properties before. See below for my saving "problem".
            if (error) NSLog(@"%@", error); // no error, thus nothing logged.

            [fetch setFetchBatchSize:20];

            NSFetchedResultsController *frController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetch managedObjectContext:self.moContext sectionNameKeyPath:nil cacheName:@"cache"];
            self.fetchedResultsController = frController;
            fetchedResultsController.delegate = self;

            [sort release];
            [frController release];
            [fetch release];
            return fetchedResultsController;
        }

    Now it's getting weird. The above NSLog statement reports the correct number of properties for the selected item. However, the UITableViewDataSource method tells me that there are no properties:

        -(NSInteger)tableView:(UITableView *)table numberOfRowsInSection:(NSInteger)section {
            id <NSFetchedResultsSectionInfo> sectionInfo = [[self.fetchedResultsController sections] objectAtIndex:section];
            NSLog(@"Found %d properties for item \"%@\". Should have found %d.", [sectionInfo numberOfObjects], self.item.name, [self.item.properties count]);
            // logs "Found 0 properties for item "item". Should have found 3."
            return [sectionInfo numberOfObjects];
        }

    The same implementation works fine on the first level. And it gets even weirder: I implemented some kind of UI to add properties. I create a new Property instance via

        Property *p = [NSEntityDescription insertNewObjectForEntityForName:@"Property" inManagedObjectContext:self.moContext];

    set up the relationships, and call [self.moContext save:&error]. This seems to work, as error is still nil and the object gets saved (I can see the number of properties when logging the Item instance, see above). However, the delegate methods are not fired. This seems to me to be due to the possibly messed-up fetch request (controller). Any ideas? Did I mess up the second fetch request? Is this the right way to fetch all entities in a to-many relationship for a particular instance at all?
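    One thing that stands out (a hypothesis, not a confirmed diagnosis): both fetched-results controllers are created with the same cacheName:@"cache" even though their fetch requests differ, and the FRC persists its section layout in that cache, so the second controller can come up with stale, empty section info even while countForFetchRequest: sees 3 objects. Comparing the relationship object directly is also more robust than formatting the item's name into the predicate string. A sketch:

        // compare the relationship itself rather than string-formatting the name
        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"item == %@", self.item];
        [fetch setPredicate:predicate];

        // either clear the shared cache ...
        [NSFetchedResultsController deleteCacheWithName:@"cache"];
        // ... or give each controller its own cache name (or nil to disable caching)
        NSFetchedResultsController *frController =
            [[NSFetchedResultsController alloc] initWithFetchRequest:fetch
                                                managedObjectContext:self.moContext
                                                  sectionNameKeyPath:nil
                                                           cacheName:nil];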


  • UserControl databinding within a databound DataGridView

    - by user328259
    Good day. I'm developing a Windows application using Windows Forms and .NET 2.0. I have an issue databinding a generic List of car rental companies to a DataGridView when that list contains (as one of its properties) another generic List of car makes. I also have a UserControl that I need to bind to this [inner] generic list.

    Class CarRentalCompany contains: string Name, string Location, List<CarMake> CarMakes.
    Class CarMake contains: string Name, bool isFord, bool isChevy, bool isOther.

    The UserControl has a label for CarMake.Name and three checkboxes, one for each of the booleans of the class. How do I make this user control bindable for the class? In my form I have a DataGridView bound to the CarRentalCompany objects. The CarMakes list could hold zero or more items, and I can add these as necessary. How do I establish the binding of CarRentalCompanies properly so CarMakes will bind accordingly? For example, I have:

        List<CarRentalCompany> CarRentalCompanies = new List<CarRentalCompany>();

        CarRentalCompany company1 = new CarRentalCompany();
        company1.Name = "Acme Rentals";
        company1.Location = "New York, NY";
        company1.CarMakes = new List<CarMake>();

        CarMake car1 = new CarMake();
        car1.Name = "The Yellow Car";
        car1.isFord = true;
        car1.isChevy = false;
        car1.isOther = false;
        company1.CarMakes.Add(car1);

        CarMake car2 = new CarMake();
        car2.Name = "The Blue Car";
        car2.isFord = false;
        car2.isChevy = true;
        car2.isOther = false;
        company1.CarMakes.Add(car2);

        CarMake car3 = new CarMake();
        car3.Name = "The Purple Car";
        car3.isFord = false;
        car3.isChevy = false;
        car3.isOther = true;
        company1.CarMakes.Add(car3);

        CarRentalCompanies.Add(company1);

        CarRentalCompany company2 = new CarRentalCompany();
        company2.Name = "Z-Auto Rentals";
        company2.Location = "Phoenix, AZ";
        company2.CarMakes = new List<CarMake>();

        CarMake car4 = new CarMake();
        car4.Name = "The Orange Car";
        car4.isFord = true;
        car4.isChevy = false;
        car4.isOther = false;
        company2.CarMakes.Add(car4);

        CarMake car5 = new CarMake();
        car5.Name = "The Red Car";
        car5.isFord = true;
        car5.isChevy = false;
        car5.isOther = false;
        company2.CarMakes.Add(car5);

        CarMake car6 = new CarMake();
        car6.Name = "The Green Car";
        car6.isFord = true;
        car6.isChevy = false;
        car6.isOther = false;
        company2.CarMakes.Add(car6);

        CarRentalCompanies.Add(company2);

    I load my form, and in my form load I have the following (note: CarDataGrid is a DataGridView):

        BindingSource bsTheRentals = new BindingSource();

        DataGridViewTextBoxColumn companyName = new DataGridViewTextBoxColumn();
        companyName.DataPropertyName = "Name";
        companyName.HeaderText = "Company Name";
        companyName.Name = "CompanyName";
        companyName.AutoSizeMode = DataGridViewAutoSizeColumnMode.AllCells;

        DataGridViewTextBoxColumn companyLocation = new DataGridViewTextBoxColumn();
        companyLocation.DataPropertyName = "Location";
        companyLocation.HeaderText = "Company Location";
        companyLocation.Name = "CompanyLocation";
        companyLocation.AutoSizeMode = DataGridViewAutoSizeColumnMode.AllCells;

        ArrayList carMakeColumnsToAdd = new ArrayList();

        // Loop through the CarMakes list to add each custom column
        for (int intX = 0; intX < CarRentalCompanies.CarMakes[0].Count; intX++)
        {
            // Custom column for user control
            string carMakeColumnName = "carColumn" + intX;
            CarMakeListColumn carMakeColumn = new DataGridViewComboBoxColumn();
            carMakeColumn.Name = carMakeColumnName;
            carMakeColumn.DisplayMember = CarRentalCompanies.CarMakes[intX].Name;
            carMakeColumn.DataSource = CarRentalCompanies.CarMakes; // this is the CarMakes list
            carMakeColumn.AutoSizeMode = DataGridViewAutoSizeColumnMode.Fill;
            carMakeColumnsToAdd.Add(carMakeColumn);
        }

        CarDataGrid.DataSource = bsTheRentals;
        CarDataGrid.Columns.AddRange(new DataGridViewColumn[] { companyName, companyLocation, carMakeColumnsToAdd });
        CarDataGrid.AutoGenerateColumns = false;

    The code I provided does not work because I am unfamiliar with UserControls and custom DataGridViewColumns and DataGridViewCells; I know I must derive from these classes in order to use my UserControl properly. I appreciate any advice/assistance/help with this. Thank you. Lawrence
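    A sketch of the usual starting point (the names and the text-only rendering are mine, not from the post): a custom column is mostly a shell around a CellTemplate, and the cell decides how the bound List<CarMake> is presented. This version proves the plumbing; a UserControl-style cell would override Paint (or host an editing control) to draw the name plus three checkboxes. Note that for binding to work, Name, Location and CarMakes must be public properties, and AddRange needs actual DataGridViewColumn references rather than an ArrayList.

        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Windows.Forms;

        public class CarMakeListColumn : DataGridViewColumn
        {
            public CarMakeListColumn() : base(new CarMakeListCell()) { }
        }

        public class CarMakeListCell : DataGridViewTextBoxCell
        {
            protected override object GetFormattedValue(object value, int rowIndex,
                ref DataGridViewCellStyle cellStyle, TypeConverter valueTypeConverter,
                TypeConverter formattedValueTypeConverter, DataGridViewDataErrorContexts context)
            {
                List<CarMake> makes = value as List<CarMake>;
                if (makes == null)
                    return base.GetFormattedValue(value, rowIndex, ref cellStyle,
                        valueTypeConverter, formattedValueTypeConverter, context);

                string[] names = new string[makes.Count];
                for (int i = 0; i < makes.Count; i++) names[i] = makes[i].Name;
                return string.Join(", ", names); // placeholder rendering; override Paint for real UI
            }
        }

    And a hedged version of the wiring:

        CarDataGrid.AutoGenerateColumns = false;       // set before assigning the DataSource
        bsTheRentals.DataSource = CarRentalCompanies;  // the BindingSource was never given a DataSource in the post
        CarDataGrid.DataSource = bsTheRentals;

        CarMakeListColumn carMakesColumn = new CarMakeListColumn();
        carMakesColumn.Name = "CarMakes";
        carMakesColumn.DataPropertyName = "CarMakes";  // binds the whole inner list to the cell
        carMakesColumn.HeaderText = "Car Makes";

        CarDataGrid.Columns.AddRange(new DataGridViewColumn[] { companyName, companyLocation, carMakesColumn });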


  • C++ run error: pointer being freed was not allocated

    - by Dale Reves
    I'm learning C++ and am working on a program that keeps giving me a 'pointer being freed was not allocated' error. It's a grocery store program that reads data from a txt file; the user can then enter an item number and quantity. I've read through similar questions, but what's throwing me off is the 'pointer' issue. I would appreciate it if someone could take a look and help me out. I'm using NetBeans IDE 7.2 on a Mac. I'll just post the whole piece I have so far. Thanks.

        #include <iostream>
        #include <fstream>
        #include <string>
        #include <vector>
        using namespace std;

        class Product {
        public:
            // PLU code
            int getiPluCode() { return iPluCode; }
            void setiPluCode(int iTempPluCode) { iPluCode = iTempPluCode; }
            // Description
            string getsDescription() { return sDescription; }
            void setsDescription(string sTempDescription) { sDescription = sTempDescription; }
            // Price
            double getdPrice() { return dPrice; }
            void setdPrice(double dTempPrice) { dPrice = dTempPrice; }
            // Type .. weight or unit
            int getiType() { return iType; }
            void setiType(int iTempType) { iType = iTempType; }
            // Inventory quantity
            double getdInventory() { return dInventory; }
            void setdInventory(double dTempInventory) { dInventory = dTempInventory; }
        private:
            int iPluCode;
            string sDescription;
            double dPrice;
            int iType;
            double dInventory;
        };

        int main()
        {
            Product paInventory[21];   // create inventory array
            Product paPurchase[21];    // create customer purchase array

            // constructor to open inventory input file
            ifstream InputInventory("inventory.txt", ios::in);

            // if ifstream could not open the file
            if (!InputInventory) {
                cerr << "File could not be opened" << endl;
                exit(1);
            }   // end if

            int x = 0;
            while (!InputInventory.eof()) {
                int iTempPluCode;
                string sTempDescription;
                double dTempPrice;
                int iTempType;
                double dTempInventory;
                InputInventory >> iTempPluCode >> sTempDescription >> dTempPrice >> iTempType >> dTempInventory;
                paInventory[x].setiPluCode(iTempPluCode);
                paInventory[x].setsDescription(sTempDescription);
                paInventory[x].setdPrice(dTempPrice);
                paInventory[x].setiType(iTempType);
                paInventory[x].setdInventory(dTempInventory);
                x++;
            }

            bool bQuit = false;
            // CREATE MY TOTAL VARIABLE HERE!
            int iUserItemCount;
            do {
                int iUserPLUCode;
                double dUserAmount;
                double dAmountAvailable;
                int iProductIndex = -1;
                // CREATE MY SUBTOTAL VARIABLE HERE!
                while (iProductIndex == -1) {
                    cout << "Please enter the PLU Code of the product." << endl;
                    cin >> iUserPLUCode;
                    for (int i = 0; i < 21; i++) {
                        if (iUserPLUCode == paInventory[i].getiPluCode()) {
                            dAmountAvailable = paInventory[i].getdInventory();
                            iProductIndex = i;
                        }
                    }
                    // PLU code entry validation
                    if (iProductIndex == -1) {
                        cout << "You have entered an invalid PLU Code.";
                    }
                }
                cout << "Enter the quantity to buy.\n" << "There are " << dAmountAvailable << " available.\n";
                cin >> dUserAmount;
                while (dUserAmount > dAmountAvailable) {
                    cout << "That's too many, please try again";
                    cin >> dUserAmount;
                }
                paPurchase[iUserItemCount].setiPluCode(iUserPLUCode);   // array-of-objects function calls
                paPurchase[iUserItemCount].setdInventory(dUserAmount);
                paPurchase[iUserItemCount].setdPrice(paInventory[iProductIndex].getdPrice());
                paInventory[iProductIndex].setdInventory(paInventory[iProductIndex].getdInventory() - dUserAmount);
                iUserItemCount++;
                cout << "Are you done purchasing items? Enter 1 for yes and 0 for no.\n";
                cin >> bQuit;
                // NOTE: Put Amount * quantity for subtotal
                // NOTE: Put code to update subtotal (total += subtotal)
                // NOTE: Need to create the output txt file!
            } while (!bQuit);
            return 0;
        }
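    The error message is about heap corruption, and the most likely source here (my reading, worth verifying) is that iUserItemCount is never initialized, so the first write to paPurchase[iUserItemCount] lands at a garbage index; the std::string members then stomp on memory the allocator later trips over when the strings are destroyed. The eof()-controlled read loop can also overrun paInventory. Two focused changes, keeping the surrounding declarations as posted:

        int iUserItemCount = 0;   // was uninitialized: indexing paPurchase with a
                                  // garbage value corrupts the heap, which later
                                  // shows up as "pointer being freed was not allocated"

        // read loop: stop on a failed extraction and respect the array bound,
        // instead of testing eof() (which can also store the last record twice)
        while (x < 21 &&
               InputInventory >> iTempPluCode >> sTempDescription
                              >> dTempPrice >> iTempType >> dTempInventory)
        {
            paInventory[x].setiPluCode(iTempPluCode);
            // ... remaining setters exactly as in the original ...
            x++;
        }

    A matching bounds check (iUserItemCount < 21) before each purchase is also worth adding, since the purchase array has the same fixed size.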


  • Error shown is NullPointerException [duplicate]

    - by user3659612
    This question already has an answer here: How to check a string against null in java? (11 answers)

    I was trying to code a Wi-Fi scanner that does 20 scans, but it throws a NullPointerException at

        if (bssid[j].equals(null)) {

    My code is slightly huge:

        package com.example.scanner;

        import java.io.File;
        import java.io.FileNotFoundException;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.OutputStreamWriter;
        import java.text.SimpleDateFormat;
        import java.util.Date;
        import java.util.List;

        import android.annotation.SuppressLint;
        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;
        import android.content.IntentFilter;
        import android.net.wifi.ScanResult;
        import android.net.wifi.WifiInfo;
        import android.net.wifi.WifiManager;
        import android.os.Bundle;
        import android.os.Environment;
        import android.support.v7.app.ActionBarActivity;
        import android.view.Menu;
        import android.view.View;
        import android.widget.ArrayAdapter;
        import android.widget.Button;
        import android.widget.ListView;
        import android.widget.Toast;

        public class MainActivity extends ActionBarActivity {

            WifiManager wifi;
            WifiScanReceiver wifireciever;
            WifiInfo info;
            Button scan, save;
            List<ScanResult> wifilist;
            ListView list;
            String wifis[];
            String name;
            String[] ssid = new String[100];
            String[] bssid = new String[100];
            int[] lvl = new int[100];
            int[] count = new int[100];

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.fragment_main);

                list = (ListView) findViewById(R.id.listView1);
                scan = (Button) findViewById(R.id.button1);
                save = (Button) findViewById(R.id.button2);

                scan.setOnClickListener(new View.OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        // TODO Auto-generated method stub
                        wifi = (WifiManager) getSystemService(Context.WIFI_SERVICE);
                        if (wifi.isWifiEnabled() == false) {
                            wifi.setWifiEnabled(true);
                        }
                        wifireciever = new WifiScanReceiver();
                        for (int i = 0; i < 20; i++) {
                            registerReceiver(wifireciever, new IntentFilter(WifiManager.SCAN_RESULTS_AVAILABLE_ACTION));
                            wifi.startScan();
                            if (i == 19) {
                                Toast.makeText(getBaseContext(), "Scan Finish", Toast.LENGTH_LONG).show();
                            }
                        }
                    }
                });

                save.setOnClickListener(new View.OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        // TODO Auto-generated method stub
                        savedata();
                    }
                });
            }

            protected void savedata() {
                try {
                    File sdcard = Environment.getExternalStorageDirectory();
                    File directory = new File(sdcard.getAbsolutePath() + "/WIFI_RESULT");
                    directory.mkdirs();
                    name = new SimpleDateFormat("yyyy-MM-dd HH mm ss").format(new Date());
                    File file = new File(directory, name + "wifi_data.txt");
                    FileOutputStream fou = new FileOutputStream(file);
                    OutputStreamWriter osw = new OutputStreamWriter(fou);
                    try {
                        for (int i = 0; i < list.getCount(); i++) {
                            osw.append(list.getItemAtPosition(i).toString());
                        }
                        osw.flush();
                        osw.close();
                        Toast.makeText(getBaseContext(), "Saved", Toast.LENGTH_LONG).show();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                }
            }

            class WifiScanReceiver extends BroadcastReceiver {
                @SuppressLint("UseValueOf")
                public void onReceive(Context c, Intent intent) {
                    int a = 0;
                    wifi.startScan();
                    List<ScanResult> wifilist = wifi.getScanResults();
                    if (a < wifilist.size()) {
                        a = wifilist.size();
                    }
                    for (int j = 0; j < wifilist.size(); j++) {
                        if (bssid[j].equals(null)) {
                            ssid[j] = wifilist.get(j).SSID.toString();
                            bssid[j] = wifilist.get(j).BSSID.toString();
                            lvl[j] = wifilist.get(j).level;
                            count[j]++;
                        } else if (bssid[j].equals(wifilist.get(j).BSSID.toString())) {
                            lvl[j] = lvl[j] + wifilist.get(j).level;
                            count[j]++;
                        }
                    }
                    wifis = new String[a];
                    for (int i = 0; i < a; i++) {
                        wifis[i] = ("\n" + ssid[i] + "\n AP Address" + bssid[i] + "\n Signal Strength:" + lvl[i] / count[i]).toString();
                    }
                    list.setAdapter(new ArrayAdapter<String>(getApplicationContext(), android.R.layout.simple_list_item_1, wifis));
                }
            }

            protected void onDestroy() {
                unregisterReceiver(wifireciever);
                super.onPause();
            }

            protected void onResume() {
                registerReceiver(wifireciever, new IntentFilter(WifiManager.SCAN_RESULTS_AVAILABLE_ACTION));
                super.onResume();
            }

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                // Inflate the menu; this adds items to the action bar if it is present.
                getMenuInflater().inflate(R.menu.main, menu);
                return true;
            }
        }

    The NullPointerException at that point means my bssid array isn't initialized. So I just want to know how to initialize it in the main activity so that I can use the bssid string array anywhere.
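    The array itself is initialized (new String[100]); it is the elements that start out null, and x.equals(null) throws exactly this NullPointerException whenever x is null (and is specified to return false otherwise, so it can never detect null). The null test belongs on the reference itself. A sketch of the corrected branch:

        if (bssid[j] == null) {                      // first sighting of this slot
            ssid[j]  = wifilist.get(j).SSID;         // SSID/BSSID are already Strings
            bssid[j] = wifilist.get(j).BSSID;
            lvl[j]   = wifilist.get(j).level;
            count[j]++;
        } else if (bssid[j].equals(wifilist.get(j).BSSID)) {
            lvl[j] = lvl[j] + wifilist.get(j).level; // accumulate for averaging
            count[j]++;
        }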


  • Custom Section Name Crashing NSFetchedResultsController

    - by Mike H.
    I have a managed object with a dueDate attribute. Instead of displaying an ugly date string as the section headers of my UITableView, I created a transient attribute called "category" and defined it like so:

        - (NSString *)category {
            [self willAccessValueForKey:@"category"];
            NSString *categoryName;
            if ([self isOverdue]) {
                categoryName = @"Overdue";
            } else if (self.finishedDate != nil) {
                categoryName = @"Done";
            } else {
                categoryName = @"In Progress";
            }
            [self didAccessValueForKey:@"category"];
            return categoryName;
        }

    Here is the NSFetchedResultsController setup:

        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Task" inManagedObjectContext:managedObjectContext];
        [fetchRequest setEntity:entity];

        NSMutableArray *descriptors = [[NSMutableArray alloc] init];
        NSSortDescriptor *dueDateDescriptor = [[NSSortDescriptor alloc] initWithKey:@"dueDate" ascending:YES];
        [descriptors addObject:dueDateDescriptor];
        [dueDateDescriptor release];
        [fetchRequest setSortDescriptors:descriptors];

        fetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                                                        managedObjectContext:managedObjectContext
                                                                          sectionNameKeyPath:@"category"
                                                                                   cacheName:@"Root"];

    The table initially displays fine, showing the unfinished items whose dueDate has not passed in an "In Progress" section. Now, the user can tap a row in the table view, which pushes a new details view onto the navigation stack. In this new view the user can tap a button to indicate that the item is now "Done". Here is the handler for the button (self.task is the managed object):

        - (void)taskDoneButtonTapped {
            self.task.finishedDate = [NSDate date];
        }

    As soon as the value of the finishedDate attribute changes, I'm hit with this exception:

        2010-03-18 23:29:52.476 MyApp[1637:207] Serious application error. Exception was caught during Core Data change processing: no section named 'Done' found with userInfo (null)
        2010-03-18 23:29:52.477 MyApp[1637:207] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'no section named 'Done' found'

    I've managed to figure out that the UITableView currently hidden by the new details view is trying to update its rows and sections, because the NSFetchedResultsController was notified that something changed in the data set. Here's my table update code (copied from either the Core Data Recipes sample or the CoreBooks sample; I can't remember which):

        - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller {
            [self.tableView beginUpdates];
        }

        - (void)controller:(NSFetchedResultsController *)controller didChangeObject:(id)anObject
               atIndexPath:(NSIndexPath *)indexPath forChangeType:(NSFetchedResultsChangeType)type
              newIndexPath:(NSIndexPath *)newIndexPath {
            switch (type) {
                case NSFetchedResultsChangeInsert:
                    [self.tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath]
                                          withRowAnimation:UITableViewRowAnimationFade];
                    break;
                case NSFetchedResultsChangeDelete:
                    [self.tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                                          withRowAnimation:UITableViewRowAnimationFade];
                    break;
                case NSFetchedResultsChangeUpdate:
                    [self configureCell:[self.tableView cellForRowAtIndexPath:indexPath] atIndexPath:indexPath];
                    break;
                case NSFetchedResultsChangeMove:
                    [self.tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                                          withRowAnimation:UITableViewRowAnimationFade];
                    // Reloading the section inserts a new row and ensures that titles are updated appropriately.
                    [self.tableView reloadSections:[NSIndexSet indexSetWithIndex:newIndexPath.section]
                                  withRowAnimation:UITableViewRowAnimationFade];
                    break;
            }
        }

        - (void)controller:(NSFetchedResultsController *)controller didChangeSection:(id <NSFetchedResultsSectionInfo>)sectionInfo
                   atIndex:(NSUInteger)sectionIndex forChangeType:(NSFetchedResultsChangeType)type {
            switch (type) {
                case NSFetchedResultsChangeInsert:
                    [self.tableView insertSections:[NSIndexSet indexSetWithIndex:sectionIndex]
                                  withRowAnimation:UITableViewRowAnimationFade];
                    break;
                case NSFetchedResultsChangeDelete:
                    [self.tableView deleteSections:[NSIndexSet indexSetWithIndex:sectionIndex]
                                  withRowAnimation:UITableViewRowAnimationFade];
                    break;
            }
        }

        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
            [self.tableView endUpdates];
        }

    I put breakpoints in each of these functions and found that only controllerWillChangeContent: is called. The exception is thrown before either controller:didChangeObject:atIndexPath:forChangeType:newIndexPath: or controller:didChangeSection:atIndex:forChangeType: is called. At this point I'm stuck. If I change my sectionNameKeyPath to just "dueDate", then everything works fine. I think that's because the dueDate attribute never changes, whereas the category will be different when read back after the finishedDate attribute changes. Please help!

    UPDATE: Here is my UITableViewDataSource code:

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
            return [[self.fetchedResultsController sections] count];
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            id <NSFetchedResultsSectionInfo> sectionInfo = [[self.fetchedResultsController sections] objectAtIndex:section];
            return [sectionInfo numberOfObjects];
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [self.tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
            }
            [self configureCell:cell atIndexPath:indexPath];
            return cell;
        }

        - (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section {
            id <NSFetchedResultsSectionInfo> sectionInfo = [[self.fetchedResultsController sections] objectAtIndex:section];
            return [sectionInfo name];
        }
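    Two things are worth ruling out here (hypotheses consistent with the symptom, not a confirmed fix). First, the controller was created with cacheName:@"Root": the FRC caches its section layout, and a cached layout that predates the new "Done" section is a classic trigger for "no section named ... found". Second, with a sectionNameKeyPath the first sort descriptor must keep rows of the same section contiguous; sorting only by dueDate no longer guarantees that once finishedDate changes an object's derived category. A sketch:

        // sort first on the persistent attribute that drives the transient "category"
        NSSortDescriptor *doneDescriptor = [[NSSortDescriptor alloc] initWithKey:@"finishedDate" ascending:YES];
        NSSortDescriptor *dueDateDescriptor = [[NSSortDescriptor alloc] initWithKey:@"dueDate" ascending:YES];
        [fetchRequest setSortDescriptors:[NSArray arrayWithObjects:doneDescriptor, dueDateDescriptor, nil]];

        // and avoid stale section caches while iterating on this
        [NSFetchedResultsController deleteCacheWithName:@"Root"]; // or pass cacheName:nil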


  • jQuery dynamic element - in DOM but unable to bind

    - by Grant80
    Hi all, I'm new to using jQuery, so bear with me. I had implemented some code, based on a js file I found online, that enables a series of div tags within a nested structure on my page to be stepped through, showing each one individually. This all works great when I define the div tags as static entries in the master page (I should add that this is being implemented in a SharePoint master page). Ultimately, though, a static collection of div tags, each ideally containing an image with some descriptive text and a hyperlink, is not very flexible. Roll on my changes to make this a little more configurable.

    I have implemented some additional code that reads from a SharePoint list via an AJAX call to the Lists web service. For each entry in the list I build a div tag containing the required information dynamically (for testing, I am only pulling the title through at present). I have used

        $('#beltDiv').append(divHTML)

    to append the divs created in the loop to my nested structure on the page. I figured this would cause the fade code to work as expected, but I was wrong: it doesn't do anything at all. When I check the source of the page, the div tags are not shown. They are, however, available in the DOM when viewed through the IE developer toolbar. The issue (I think) is that the initialization of the featureFade code is not working because the div tags are unavailable. Is there a way to address this? The code used is shown below:

        <script type="text/javascript">
        $(document).ready(function() {
            var soapEnv =
                "<soapenv:Envelope xmlns:soapenv='http://schemas.xmlsoap.org/soap/envelope/'> \
                    <soapenv:Body> \
                        <GetListItems xmlns='http://schemas.microsoft.com/sharepoint/soap/'> \
                            <listName>Carousel Items</listName> \
                            <viewFields> \
                                <ViewFields> \
                                    <FieldRef Name='Title' /> \
                                </ViewFields> \
                            </viewFields> \
                        </GetListItems> \
                    </soapenv:Body> \
                </soapenv:Envelope>";

            $.ajax({
                url: "_vti_bin/lists.asmx",
                type: "POST",
                dataType: "xml",
                data: soapEnv,
                complete: processResult,
                contentType: "text/xml; charset=\"utf-8\""
            });
        });

        function processResult(xData, status) {
            $(xData.responseXML).find("z\\:row").each(function() {
                var divHTML = "<div id=\"divPanel_" + $(this).attr("ows_Title") + "\" class=\"panel\" style=\"background:url('http://devSP2010/sites/SPSOPS/Style Library/SharePointOps/Images/01.jpg') no-repeat; width:650px; height:55px;\"><div><div class=\"content\"><div><P><A style=\"COLOR: #cc0000\" href=\"www.google.com\">" + $(this).attr("ows_Title") + "</A></P><P>&nbsp;</P><P>&nbsp;</P><P>&nbsp;</P><P>&nbsp;</P></div></div></div></div>";
                $("#beltDiv").append(divHTML);
            });
        }

        featureFade.setup({
            galleryid: 'headlines',
            beltclass: 'belt',
            panelclass: 'panel',
            autostep: { enable: true, moveby: 1, pause: 10000 },
            panelbehavior: { speed: 1000, wraparound: true },
            stepImgIDs: ["ftOne", "ftTwo", "ftThree", "ftFour", "ftFive"],
            defaultButtons: {
                itemOn: "Style Library/SharePointOps/Images/dotOn.png",
                itemOff: "Style Library/SharePointOps/Images/dotOff.png"
            }
        });

    The section where the div tags are dynamically appended is shown below. I've commented out the static div tags that work as expected; the only change is that these are now implemented by the jQuery logic:

        <div class="homeFeature" style="display:inline-block">
          <div id="headlines" class="headlines">
            <div id="beltDiv" class="belt">
              <!--
              <div id="divPanel_ct01" class="panel" style="position:absolute;background-image:url('http://devsp2010/sites/spsops/Style Library/SharePointOps/Images/01.jpg'); background-repeat:no-repeat">Static Test 1</div>
              <div id="divPanel_ct02" class="panel" style="position:absolute;background-image:url('http://devsp2010/sites/spsops/Style Library/SharePointOps/Images/02.jpg'); background-repeat:no-repeat">Static Test 2</div>
              -->
            </div>
          </div>
        </div>

    I'm stumped as to why it's not recognising the dynamically added elements in the DOM. Any help would be greatly appreciated; I'm happy to provide any further information. Thanks in advance, Grant

    Further to the answer received, I have modified the function call:

        function processResult(xData, status) {
            $(xData.responseXML).find("z\\:row").each(function() {
                /* alert($(this).attr("ows_ImagePath")); */
                var divHTML = "<div id=\"divPanel_" + $(this).attr("ows_Title") + "\" class=\"panel\" style=\"background:url('http://devSP2010/sites/SPSOPS/Style Library/SharePointOps/Images/ClydePort01big.jpg') no-repeat; width:650px; height:55px;\"><div><div class=\"content\"><div><P><A style=\"COLOR: #cc0000\" href=\"www.google.com\">" + $(this).attr("ows_Title") + "</A></P><P>&nbsp;</P><P>&nbsp;</P><P>&nbsp;</P><P>&nbsp;</P></div></div></div></div>";
                $("#beltDiv").append(divHTML);
            });

            featureFade.setup({
                galleryid: 'headlines',
                beltclass: 'belt',
                panelclass: 'panel',
                autostep: { enable: true, moveby: 1, pause: 10000 },
                panelbehavior: { speed: 1000, wraparound: true },
                stepImgIDs: ["ftOne", "ftTwo", "ftThree", "ftFour", "ftFive"],
                defaultButtons: {
                    itemOn: "Style Library/SharePointOps/Images/dotOn.png",
                    itemOff: "Style Library/SharePointOps/Images/dotOff.png"
                }
            });
        }
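    For anyone skimming: the root cause is ordering, not the DOM. $.ajax is asynchronous, so in the original version featureFade.setup ran before processResult had appended a single panel; moving setup to the end of processResult (as in the updated code above) makes it run against a populated belt. Reduced to its skeleton:

        $.ajax({
            url: "_vti_bin/lists.asmx",
            // ...
            complete: processResult       // fires later, after the response arrives
        });

        featureFade.setup({ /* ... */ }); // ran immediately, against an empty #beltDiv

        function processResult(xData, status) {
            // append the dynamic panels here ...
            featureFade.setup({ /* ... */ }); // ... and only then initialize the slider
        }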


  • Many-to-one relation exception due to closed session after loading

    - by Nick Thissen
    Hi, I am using NHibernate (version 1.2.1) for the first time, so I wrote a simple test application (an ASP.NET project) that uses it. In my database I have two tables, Persons and Categories, and each person gets one category. Seems easy enough:

        Persons           Categories
        -----------       ------------
        Id (PK)           Id (PK)
        Firstname         CategoryName
        Lastname          CreatedTime
        CategoryId        UpdatedTime
        CreatedTime       Deleted
        UpdatedTime
        Deleted

    The Id, CreatedTime, UpdatedTime and Deleted attributes are a convention I use in all my tables, so I have tried to pull this fact into an additional abstraction layer. I have a project DatabaseFramework with three important classes:

        Entity: an abstract class that defines these four properties. All 'entity objects' (in this case Person and Category) must inherit Entity.
        IEntityManager: a generic interface (type parameter an Entity) that defines methods like Load, Insert, Update, etc.
        NHibernateEntityManager: an implementation of this interface using NHibernate to do the loading, saving, etc.

    Now, the Person and Category classes are straightforward; they just define the attributes of the tables (keeping in mind that four of them are in the base Entity class). Since the Persons table is related to the Categories table via the CategoryId attribute, the Person class has a Category property that holds the related category. However, in my web page I will also need the name of that category (CategoryName), for databinding purposes for example. So I created an additional property CategoryName that returns the CategoryName property of the current Category property, or an empty string if the Category is null:

        Namespace Database
            Public Class Person
                Inherits DatabaseFramework.Entity

                Public Overridable Property Firstname As String
                Public Overridable Property Lastname As String
                Public Overridable Property Category As Category

                Public Overridable ReadOnly Property CategoryName As String
                    Get
                        Return If(Me.Category Is Nothing, _
                                  String.Empty, _
                                  Me.Category.CategoryName)
                    End Get
                End Property
            End Class
        End Namespace

    I am mapping the Person class using this mapping file; the many-to-one relation was suggested by Yads in another thread (I can't get it to show the root node, this forum hides it, and I don't know how to escape the HTML-like tags):

        <id name="Id" column="Id" type="int" unsaved-value="0">
          <generator class="identity" />
        </id>
        <property name="CreatedTime" type="DateTime" not-null="true" />
        <property name="UpdatedTime" type="DateTime" not-null="true" />
        <property name="Deleted" type="Boolean" not-null="true" />
        <property name="Firstname" type="String" />
        <property name="Lastname" type="String" />
        <many-to-one name="Category" column="CategoryId" class="NHibernateWebTest.Database.Category, NHibernateWebTest" />

    The final important detail is the Load method of the NHibernateEntityManager implementation (this is in C#, as it's in a different project, sorry about that). I simply open a new ISession (ISessionFactory.OpenSession) in the GetSession method and then use it to fill an EntityCollection(Of TEntity), which is just a collection inheriting System.Collections.ObjectModel.Collection(Of T):

        public virtual EntityCollection<TEntity> Load()
        {
            using (ISession session = this.GetSession())
            {
                var entities = session
                    .CreateCriteria(typeof(TEntity))
                    .Add(Expression.Eq("Deleted", false))
                    .List<TEntity>();

                return new EntityCollection<TEntity>(entities);
            }
        }

    Now, the idea of this Load method is that I get a fully functional collection of Persons, all their properties set to the correct values, including the Category property (and thus the CategoryName property should return the correct name). However, it seems that is not the case. When I try to data-bind the result of this Load method to a GridView in ASP.NET, it tells me this:

        Property accessor 'CategoryName' on object 'NHibernateWebTest.Database.Person' threw the following exception: 'Could not initialize proxy - the owning Session was closed.'

    The exception occurs on the DataBind method call here:

        public virtual void LoadGrid()
        {
            if (this.Grid == null) return;
            this.Grid.DataSource = this.Manager.Load();
            this.Grid.DataBind();
        }

    Well, of course the session is closed; I closed it via the using block. Isn't that the correct approach? Should I keep the session open, and for how long? Can I close it after the DataBind method has run? In any case, I'd really like my Load method to just return a functional collection of items. It seems to me that it is only fetching the Category when it is required (e.g. when the GridView wants to read CategoryName, which wants to read the Category property), but at that time the session is closed. Is that reasoning correct? How do I stop this behavior? Or shouldn't I? And what should I do otherwise? Thanks!
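    The reasoning is correct: a many-to-one is lazy by default, so Category comes back as a proxy that can only be initialized while its session is alive, and by the time the GridView reads CategoryName the using block has already disposed it. Two common ways out are keeping the session open for the whole request (session-per-request) or initializing the association before closing. A sketch of the latter, assuming the criteria fetch-mode API of this NHibernate version, with the association name hard-coded purely for illustration (a generic manager would need the association names per entity):

        public virtual EntityCollection<TEntity> Load()
        {
            using (ISession session = this.GetSession())
            {
                var entities = session
                    .CreateCriteria(typeof(TEntity))
                    .Add(Expression.Eq("Deleted", false))
                    .SetFetchMode("Category", FetchMode.Eager) // join-fetch the association now
                    .List<TEntity>();                          // safe to use after the session closes

                return new EntityCollection<TEntity>(entities);
            }
        }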


  • Databinding in combo box

    - by muralekarthick
    Hi, I have two forms and a class, and the queries are returned by a stored procedure.

    Stored procedure:

        ALTER PROCEDURE [dbo].[Payment_Join]
            @reference nvarchar(20)
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- Insert statements for procedure here
            SELECT p.iPaymentID, p.nvReference, pt.nvPaymentType, p.iAmount, m.nvMethod, u.nvUsers, p.tUpdateTime
            FROM Payment p, tblPaymentType pt, tblPaymentMethod m, tblUsers u
            WHERE p.nvReference = @reference
              and p.iPaymentTypeID = pt.iPaymentTypeID
              and p.iMethodID = m.iMethodID
              and p.iUsersID = u.iUsersID
        END

    payment.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Data;
        using System.Data.SqlClient;
        using System.Windows.Forms;

        namespace Finance
        {
            class payment
            {
                string connection = global::Finance.Properties.Settings.Default.PaymentConnectionString;

                #region Fields
                int _paymentid = 0;
                string _reference = string.Empty;
                string _paymenttype;
                double _amount = 0;
                string _paymentmethod;
                string _employeename;
                DateTime _updatetime = DateTime.Now;
                #endregion

                #region Properties
                public int paymentid
                {
                    get { return _paymentid; }
                    set { _paymentid = value; }
                }
                public string reference
                {
                    get { return _reference; }
                    set { _reference = value; }
                }
                public string paymenttype
                {
                    get { return _paymenttype; }
                    set { _paymenttype = value; }
                }
                public string paymentmethod
                {
                    get { return _paymentmethod; }
                    set { _paymentmethod = value; }
                }
                public double amount
                {
                    get { return _amount; }
                    set { _amount = value; }
                }
                public string employeename
                {
                    get { return _employeename; }
                    set { _employeename = value; }
                }
                public DateTime updatetime
                {
                    get { return _updatetime; }
                    set { _updatetime = value; }
                }
                #endregion

                #region Constructor
                public payment()
                {
                }

                public payment(string refer)
                {
                    reference = refer;
                }

                public payment(int paymentID, string Reference, string Paymenttype, double Amount, string Paymentmethod, string Employeename, DateTime Time)
                {
                    paymentid = paymentID;
                    reference = Reference;
                    paymenttype = Paymenttype;
                    amount = Amount;
                    paymentmethod = Paymentmethod;
                    employeename = Employeename;
                    updatetime = Time;
                }
                #endregion

                #region Methods
                public void Save()
                {
                    try
                    {
                        SqlConnection connect = new SqlConnection(connection);
                        SqlCommand command = new SqlCommand("payment_create", connect);
                        command.CommandType = CommandType.StoredProcedure;
                        command.Parameters.Add(new SqlParameter("@reference", reference));
                        command.Parameters.Add(new SqlParameter("@paymenttype", paymenttype));
                        command.Parameters.Add(new SqlParameter("@amount", amount));
                        command.Parameters.Add(new SqlParameter("@paymentmethod", paymentmethod));
                        command.Parameters.Add(new SqlParameter("@employeename", employeename));
                        command.Parameters.Add(new SqlParameter("@updatetime", updatetime));
                        connect.Open();
                        command.ExecuteScalar();
                        connect.Close();
                    }
                    catch
                    {
                    }
                }

                public void Load(string reference)
                {
                    try
                    {
                        SqlConnection connect = new SqlConnection(connection);
                        SqlCommand command = new SqlCommand("Payment_Join", connect);
                        command.CommandType = CommandType.StoredProcedure;
                        command.Parameters.Add(new SqlParameter("@Reference", reference));
                        //MessageBox.Show("ref = " + reference);
                        connect.Open();
                        SqlDataReader reader = command.ExecuteReader();
                        while (reader.Read())
                        {
                            this.reference = Convert.ToString(reader["nvReference"]);
                            // MessageBox.Show(reference);
                            this.paymenttype = Convert.ToString(reader["nvPaymentType"]);
                            // MessageBox.Show(paymenttype.ToString());
                            this.amount = Convert.ToDouble(reader["iAmount"]);
                            this.paymentmethod = Convert.ToString(reader["nvMethod"]);
                            this.employeename = Convert.ToString(reader["nvUsers"]);
                            this.updatetime = Convert.ToDateTime(reader["tUpdateTime"]);
                        }
                        reader.Close();
                    }
                    catch (Exception ex)
                    {
                        MessageBox.Show("Check it again" + ex);
                    }
                }
                #endregion
            }
        }

    I have already bound the combo box items through the designer. When I run the application, the reference is populated on form 2 and the combo box is merely populated with its items, not set to the particular value that was fetched. I'm new to C#, so please help me get familiar with this.
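    Load fills the payment object, but nothing ever pushes those values into the controls on the second form, and a combo box populated via the designer does not select a value by itself. After calling Load, either set the selection or bind the property. A sketch (the control names are mine, and the empty catch in Save is worth removing while debugging, since it hides any SQL errors):

        payment p = new payment();
        p.Load(reference);

        // option 1: select the fetched value among the designer-populated items
        cboPaymentType.SelectedItem = p.paymenttype;

        // option 2: bind the property so edits flow both ways
        cboPaymentType.DataBindings.Add("Text", p, "paymenttype");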

    Read the article

  • Adapting non-iterable containers to be iterated via custom templatized iterator

    - by DAldridge
    I have some classes, which for various reasons out of scope of this discussion, I cannot modify (irrelevant implementation details omitted): class Foo { /* ... irrelevant public interface ... */ }; class Bar { public: Foo& get_foo(size_t index) { /* whatever */ } size_t size_foo() { /* whatever */ } }; (There are many similar 'Foo' and 'Bar' classes I'm dealing with, and it's all generated code from elsewhere and stuff I don't want to subclass, etc.) [Edit: clarification - although there are many similar 'Foo' and 'Bar' classes, it is guaranteed that each "outer" class will have the getter and size methods. Only the getter method name and return type will differ for each "outer", based on whatever it's "inner" contained type is. So, if I have Baz which contains Quux instances, there will be Quux& Baz::get_quux(size_t index), and size_t Baz::size_quux().] Given the design of the Bar class, you cannot easily use it in STL algorithms (e.g. for_each, find_if, etc.), and must do imperative loops rather than taking a functional approach (reasons why I prefer the latter is also out of scope for this discussion): Bar b; size_t numFoo = b.size_foo(); for (int fooIdx = 0; fooIdx < numFoo; ++fooIdx) { Foo& f = b.get_foo(fooIdx); /* ... do stuff with 'f' ... */ } So... I've never created a custom iterator, and after reading various questions/answers on S.O. about iterator_traits and the like, I came up with this (currently half-baked) "solution": First, the custom iterator mechanism (NOTE: all uses of 'function' and 'bind' are from std::tr1 in MSVC9): // Iterator mechanism... template <typename TOuter, typename TInner> class ContainerIterator : public std::iterator<std::input_iterator_tag, TInner> { public: typedef function<TInner& (size_t)> func_type; ContainerIterator(const ContainerIterator& other) : mFunc(other.mFunc), mIndex(other.mIndex) {} ContainerIterator& operator++() { ++mIndex; return *this; } bool operator==(const ContainerIterator& other) { return ((mFunc.target<TOuter>() == other.mFunc.target<TOuter>()) && (mIndex == other.mIndex)); } bool operator!=(const ContainerIterator& other) { return !(*this == other); } TInner& operator*() { return mFunc(mIndex); } private: template<typename TOuter, typename TInner> friend class ContainerProxy; ContainerIterator(func_type func, size_t index = 0) : mFunc(func), mIndex(index) {} function<TInner& (size_t)> mFunc; size_t mIndex; }; Next, the mechanism by which I get valid iterators representing begin and end of the inner container: // Proxy(?) to the outer class instance, providing a way to get begin() and end() // iterators to the inner contained instances... template <typename TOuter, typename TInner> class ContainerProxy { public: typedef function<TInner& (size_t)> access_func_type; typedef function<size_t ()> size_func_type; typedef ContainerIterator<TOuter, TInner> iter_type; ContainerProxy(access_func_type accessFunc, size_func_type sizeFunc) : mAccessFunc(accessFunc), mSizeFunc(sizeFunc) {} iter_type begin() const { size_t numItems = mSizeFunc(); if (0 == numItems) return end(); else return ContainerIterator<TOuter, TInner>(mAccessFunc, 0); } iter_type end() const { size_t numItems = mSizeFunc(); return ContainerIterator<TOuter, TInner>(mAccessFunc, numItems); } private: access_func_type mAccessFunc; size_func_type mSizeFunc; }; I can use these classes in the following manner: // Sample function object for taking action on an LMX inner class instance yielded // by iteration... 
template <typename TInner> class SomeTInnerFunctor { public: void operator()(const TInner& inner) { /* ... whatever ... */ } }; // Example of iterating over an outer class instance's inner container... Bar b; /* assume populated with contained items ... */ ContainerProxy<Bar, Foo> bProxy( bind(&Bar::get_foo, b, _1), bind(&Bar::size_foo, b)); for_each(bProxy.begin(), bProxy.end(), SomeTInnerFunctor<Foo>()); Empirically, this solution functions correctly (minus any copy/paste typos I may have introduced when editing the above for brevity). So, finally, the actual question: I don't like requiring the caller to use bind() and the _1 placeholder. All the caller really cares about is: the outer type, the inner type, the outer type's method that fetches inner instances, and the outer type's method that fetches the count of inner instances. Is there any way to "hide" the bind in the body of the template classes somehow? I've been unable to find a way to supply template parameters for the types and for the member functions separately... Thanks! David
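For readers more at home in .NET, the same adapter idea can be expressed in C# by wrapping the getter/count pair in an iterator method; this is purely an illustrative analogue of the ContainerProxy design above (Bar and Foo are stand-ins for the generated classes), not an answer to the C++ template question:

    // Illustrative C# analogue: adapt an index-based getter/count pair into a sequence.
    using System;
    using System.Collections.Generic;

    static class ContainerAdapter
    {
        // getItem and getCount play the roles of Bar::get_foo and Bar::size_foo.
        public static IEnumerable<T> AsEnumerable<T>(Func<int, T> getItem, Func<int> getCount)
        {
            int count = getCount();
            for (int i = 0; i < count; i++)
            {
                yield return getItem(i);
            }
        }
    }

    // Usage: foreach (var foo in ContainerAdapter.AsEnumerable(i => bar.GetFoo(i), () => bar.SizeFoo())) { ... }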

    Read the article

  • How should I implement simple caches with concurrency on Redis?

    - by solublefish
    Background I have a 2-tier web service - just my app server and an RDBMS. I want to move to a pool of identical app servers behind a load balancer. I currently cache a bunch of objects in-process. I hope to move them to a shared Redis. I have a dozen or so caches of simple, small-sized business objects. For example, I have a set of Foos. Each Foo has a unique FooId and an OwnerId. One "owner" may own multiple Foos. In a traditional RDBMS this is just a table with an index on the PK FooId and one on OwnerId. I'm caching this in one process simply: Dictionary<int,Foo> _cacheFooById; Dictionary<int,HashSet<int>> _indexFooIdsByOwnerId; Reads come straight from here, and writes go here and to the RDBMS. I usually have this invariant: "For a given group [say by OwnerId], the whole group is in cache or none of it is." So when I cache miss on a Foo, I pull that Foo and all the owner's other Foos from the RDBMS. Updates make sure to keep the index up to date and respect the invariant. When an owner calls GetMyFoos I never have to worry that some are cached and some aren't. What I did already The first/simplest answer seems to be to use plain ol' SET and GET with a composite key and json value: SET( "ServiceCache:Foo:" + theFoo.Id, JsonSerialize(theFoo)); I later decided I liked: HSET( "ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo)); That lets me get all the values in one cache as HVALS. It also felt right - I'm literally moving hashtables to Redis, so perhaps my top-level items should be hashes. This works to first order. If my high-level code is like: UpdateCache(myFoo); AddToIndex(myFoo); That translates into: HSET ("ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo)); var myFoos = JsonDeserialize( HGET ("ServiceCache:FooIndex", theFoo.OwnerId) ); myFoos.Add(theFoo.OwnerId); HSET ("ServiceCache:FooIndex", theFoo.OwnerId, JsonSerialize(myFoos)); However, this is broken in two ways. Two concurrent operations can read/modify/write at the same time. The latter "wins" the final HSET and the former's index update is lost. Another operation could read the index in between the first and second lines. It would miss a Foo that it should find. So how do I index properly? I think I could use a Redis set instead of a json-encoded value for the index. That would solve part of the problem since the "add-to-index-if-not-already-present" would be atomic. I also read about using MULTI as a "transaction" but it doesn't seem like it does what I want. Am I right that I can't really MULTI; HGET; {update}; HSET; EXEC since it doesn't even do the HGET before I issue the EXEC? I also read about using WATCH and MULTI for optimistic concurrency, then retrying on failure. But WATCH only works on top-level keys. So it's back to SET/GET instead of HSET/HGET. And now I need a new index-like-thing to support getting all the values in a given cache. If I understand it right, I can combine all these things to do the job. Something like: while(!succeeded) { WATCH( "ServiceCache:Foo:" + theFoo.FooId ); WATCH( "ServiceCache:FooIndexByOwner:" + theFoo.OwnerId ); WATCH( "ServiceCache:FooIndexAll" ); MULTI(); SET ("ServiceCache:Foo:" + theFoo.FooId, JsonSerialize(theFoo)); SADD ("ServiceCache:FooIndexByOwner:" + theFoo.OwnerId, theFoo.FooId); SADD ("ServiceCache:FooIndexAll", theFoo.FooId); EXEC(); //TODO somehow set succeeded properly } Finally I'd have to translate this pseudocode into real code depending how my client library uses WATCH/MULTI/EXEC; it looks like they need some sort of context to hook them together. 
All in all this seems like a lot of complexity for what has to be a very common case; I can't help but think there's a better, smarter, Redis-ish way to do things that I'm just not seeing. How do I lock properly? Even if I had no indexes, there's still a (probably rare) race condition. A: HGET - cache miss B: HGET - cache miss A: SELECT B: SELECT A: HSET C: HGET - cache hit C: UPDATE C: HSET B: HSET ** this is stale data that's clobbering C's update. Note that C could just be a really-fast A. Again I think WATCH, MULTI, retry would work, but... ick. I know in some places people use special Redis keys as locks for other objects. Is that a reasonable approach here? Should those be top-level keys like ServiceCache:FooLocks:{Id} or ServiceCache:Locks:Foo:{Id}? Or make a separate hash for them - ServiceCache:Locks with subkeys Foo:{Id}, or ServiceCache:Locks:Foo with subkeys {Id} ? How would I work around abandoned locks, say if a transaction (or a whole server) crashes while "holding" the lock?
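One common way to deal with both the read-modify-write race and the abandoned-lock worry is a per-group lock key with an expiry, so a crashed holder simply times out. A minimal sketch, assuming the StackExchange.Redis client (the post does not name a client library, so these API calls and the key/expiry choices are assumptions):

    // Minimal sketch: per-owner lock with an expiry, assuming StackExchange.Redis.
    using System;
    using StackExchange.Redis;

    class FooCacheWriter
    {
        private readonly IDatabase _db;

        public FooCacheWriter(ConnectionMultiplexer redis)
        {
            _db = redis.GetDatabase();
        }

        public bool TryUpdateFoos(int ownerId, Action updateCacheAndIndex)
        {
            string lockKey = "ServiceCache:Locks:Foo:" + ownerId;   // illustrative key layout
            string token = Guid.NewGuid().ToString();               // identifies this lock holder

            // LockTake performs a SET ... NX with an expiry, so an abandoned lock expires on its own.
            if (!_db.LockTake(lockKey, token, TimeSpan.FromSeconds(30)))
            {
                return false;   // someone else holds the lock; the caller can retry or back off
            }

            try
            {
                updateCacheAndIndex();   // read the index, modify it, and write it back under the lock
                return true;
            }
            finally
            {
                _db.LockRelease(lockKey, token);   // released only if this token still owns the key
            }
        }
    }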

    Read the article

  • android listview loadmore button with xml parsing

    - by user1780331
    Hi i have to developed listview with load more button using xml parsing in android application. Here i have faced some problem. my xml feed is empty means how can hide the load more button on last page. i have used below code here. public class CustomizedListView extends Activity { // All static variables private String URL = "http://dev.mmm.com/xctesting/xcart444pro/retrieve.php?page=1"; // XML node keys static final String KEY_SONG = "Order"; static final String KEY_TITLE = "orderid"; static final String KEY_DATE = "date"; static final String KEY_ARTIST = "payment_method"; int current_page = 1; ListView lv; LazyAdapter adapter; ProgressDialog pDialog; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); lv = (ListView) findViewById(R.id.list); ArrayList<HashMap<String, String>> songsList = new ArrayList<HashMap<String, String>>(); XMLParser parser = new XMLParser(); String xml = parser.getXmlFromUrl(URL); // getting XML from URL Document doc = parser.getDomElement(xml); // getting DOM element NodeList nl = doc.getElementsByTagName(KEY_SONG); // looping through all song nodes <song> for (int i = 0; i < nl.getLength(); i++) { // creating new HashMap HashMap<String, String> map = new HashMap<String, String>(); Element e = (Element) nl.item(i); // adding each child node to HashMap key => value map.put(KEY_ID, parser.getValue(e, KEY_ID)); map.put(KEY_TITLE, parser.getValue(e, KEY_TITLE)); map.put(KEY_ARTIST, parser.getValue(e, KEY_ARTIST)); songsList.add(map); } Button btnLoadMore = new Button(this); btnLoadMore.setText("Load More"); btnLoadMore.setBackgroundResource(R.drawable.lgnbttn); // Adding Load More button to lisview at bottom lv.addFooterView(btnLoadMore); // Getting adapter adapter = new LazyAdapter(this, songsList); lv.setAdapter(adapter); btnLoadMore.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View arg0) { // Starting a new async task new loadMoreListView().execute(); } }); } private class loadMoreListView extends AsyncTask<Void, Void, Void> { @Override protected void onPreExecute() { // Showing progress dialog before sending http request pDialog = new ProgressDialog( CustomizedListView.this); pDialog.setMessage("Please wait.."); //pDialog.setIndeterminateDrawable(getResources().getDrawable(R.drawable.my_progress_indeterminate)); pDialog.setIndeterminate(true); pDialog.setCancelable(false); pDialog.show(); pDialog.setContentView(R.layout.custom_dialog); } protected Void doInBackground(Void... 
unused) { current_page += 1; // Next page request URL = "http://dev.mmm.com/xctesting/xcart444pro/retrieve.php?page=" + current_page; ArrayList<HashMap<String, String>> songsList = new ArrayList<HashMap<String, String>>(); XMLParser parser = new XMLParser(); String xml = parser.getXmlFromUrl(URL); // getting XML from URL Document doc = parser.getDomElement(xml); // getting DOM element NodeList nl = doc.getElementsByTagName(KEY_SONG); if (nl.getLength() == 0) { btnLoadMore.setVisibility(View.GONE); pDialog.dismiss(); } else // looping through all item nodes <item> for (int i = 0; i < nl.getLength(); i++) { // creating new HashMap HashMap<String, String> map = new HashMap<String, String>(); Element e = (Element) nl.item(i); // adding each child node to HashMap key => value map.put(KEY_ID, parser.getValue(e, KEY_ID)); map.put(KEY_TITLE, parser.getValue(e, KEY_TITLE)); map.put(KEY_ARTIST, parser.getValue(e, KEY_ARTIST)); songsList.add(map); } // get listview current position - used to maintain scroll position int currentPosition = lv.getFirstVisiblePosition(); // Appending new data to menuItems ArrayList adapter = new LazyAdapter( CustomizedListView.this, songsList); lv.setAdapter(adapter); lv.setSelectionFromTop(currentPosition + 1, 0); } }); return (null); } protected void onPostExecute(Void unused) { // closing progress dialog pDialog.dismiss(); } } } EDIT: When I run the app, the list view shows 4 items per page and the last page has only 1 item (screenshot: http://screencast.com/t/fTl4FETd). On that last page I still have to click the Load More button once more before the empty feed is detected and the button is hidden (screenshot: http://screencast.com/t/wyG5zdp3r). I already check for an empty page with: if (nl.getLength() == 0) { btnLoadMore.setVisibility(View.GONE); pDialog.dismiss(); } How can I write the condition for the last page itself, so that the button is hidden as soon as the final page of results is reached? Please help me work out this check, ideally with some example code.

    Read the article

  • Expand <div> tag to bottom of page with CSS

    - by typoknig
    Hi all, I know this question gets asked a lot because I have looked at many "solutions" trying to get this to work for me. I can get it to work if I hack up the html but I want to use all CSS. All I want is a header with two columns below it, and I want these three items to fill the entire page/screen, and I want to do it with CSS and without frames or tables. The XAMPP user interface looks exactly how I want my page to look, but again, I do not want to use frames. I cannot get the two orangeish colored columns to extend to the bottom of the screen. I do have it so it looks like the right column extends to the bottom of the screen just by changing the body background color to the same color as the background color of the right column, but I would like both columns to extend to the bottom so I didn't have to do that. Here is what I have so far: HTML <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html dir="ltr" xmlns="http://www.w3.org/1999/xhtml"> <head> <title>MY SITE</title> <meta content="text/html; charset=utf-8" http-equiv="Content-Type" /> <link href="stylesheet.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="container"> <div id="masthead"> MY SITE</div> <div id="left_col"> Employee Management<br /> <a href="Employee%20Management.php">Add New Employee</a><br /> <a href="Employee%20Management.php">Edit Existing Employee</a><br /> <br/> Load Management<br /> <a href="Load%20Management.php">Log New Load</a><br /> <a href="Load%20Management.php">Edit Existing Load</a><br /> <br/> Report Management<br /> <a href="Report%20Management.php">Employee Report</a><br /> <a href="Report%20Management.php">Load Report</a></div> <div id="page_content"> <div id="page_content_heading">Welcome!</div> Lots of words</div> </div> </body> </html> CSS #masthead { background-color:#FFFFFF; font-family:Arial,Helvetica,sans-serif; font-size:xx-large; font-weight:bold; padding:30px; text-align:center; } #container { min-width: 600px; min-height: 100%; } #left_col { padding: 10px; background-color: #339933; float: left; font-family: Arial,Helvetica,sans-serif; font-size: large; font-weight: bold; width: 210px; } #page_content { background-color: #CCCCCC; margin-left: 230px; padding: 20px; } #page_content_heading { font-family:Arial,Helvetica,sans-serif; font-size:large; font-weight:bold; padding-bottom:10px; padding-top:10px; } a { color:#0000FF; font-family:Arial,Helvetica,sans-serif; font-size:medium; font-weight:normal; } a:hover { color:#FF0000; } html, body { height: 100%; padding: 0; margin: 0; background-color: #CCCCCC; }

    Read the article

  • float addition 2.5 + 2.5 = 4.0? RPN

    - by AJ Clou
    The code below is my subprogram to do reverse polish notation calculations... basically +, -, *, and /. Everything works in the program except when I try to add 2.5 and 2.5 the program gives me 4.0... I think I have an idea why, but I'm not sure how to fix it... Right now I am reading all the numbers and operators in from command line as required by this assignment, then taking that string and using sscanf to get the numbers out of it... I am thinking that somehow the array that contains the three characters '2', '.', and '5', is not being totally converted to a float... instead i think just the '2' is. Could someone please take a look at my code and either confirm or deny this, and possibly tell me how to fix it so that i get the proper answer? Thank you in advance for any help! float fsm (char mystring[]) { int i = -1, j, k = 0, state = 0; float num1, num2, ans; char temp[10]; c_stack top; c_init_stack (&top); while (1) { switch (state) { case 0: i++; if ((mystring[i]) == ' ') { state = 0; } else if ((isdigit (mystring[i])) || (mystring[i] == '.')) { state = 1; } else if ((mystring[i]) == '\0') { state = 3; } else { state = 4; } break; case 1: temp[k] = mystring[i]; k++; i++; if ((isdigit (mystring[i])) || (mystring[i] == '.')) { state = 1; } else { state = 2; } break; case 2: temp[k] = '\0'; sscanf (temp, "%f", &num1); c_push (&top, num1); i--; k = 0; state = 0; break; case 3: ans = c_pop (&top); if (c_is_empty (top)) return ans; else { printf ("There are still items on the stack\n"); exit (0); case 4: num2 = c_pop (&top); num1 = c_pop (&top); if (mystring[i] == '+'){ ans = num1 + num2; return ans; } else if (mystring[i] == '-'){ ans = num1 - num2; return ans; } else if (mystring[i] == '*'){ ans = num1 * num2; return ans; } else if (mystring[i] == '/'){ if (num2){ ans = num1 / num2; return ans; } else{ printf ("Error: cannot divide by 0\n"); exit (0); } } c_push (&top, ans); state = 0; break; } } } } Here is my main program: #include <stdio.h> #include <stdlib.h> #include "boolean.h" #include "c_stack.h" #include <string.h> int main(int argc, char *argv[]) { char mystring[100]; int i; sscanf("", "%s", mystring); for (i=1; i<argc; i++){ strcat(mystring, argv[i]); strcat(mystring, " "); } printf("%.2f\n", fsm(mystring)); } and here is the header file with prototypes and the definition for c_stack: #include "boolean.h" #ifndef CSTACK_H #define CSTACK_H typedef struct c_stacknode{ char data; struct c_stacknode *next; } *c_stack; #endif void c_init_stack(c_stack *); boolean c_is_full(void); boolean c_is_empty(c_stack); void c_push(c_stack *,char); char c_pop(c_stack *); void print_c_stack(c_stack); boolean is_open(char); boolean is_brother(char, char); float fsm(char[]);
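Purely as an illustration (in C#, not the original assignment's C code), here is the same RPN evaluation done with a stack of doubles; note the stack element type, since a char-valued stack like the c_stack in the header above would truncate 2.5 to 2 on push:

    // Illustrative C# sketch of RPN evaluation with a double-valued stack.
    using System;
    using System.Collections.Generic;
    using System.Globalization;

    static class RpnDemo
    {
        static double Evaluate(string expression)
        {
            var stack = new Stack<double>();
            foreach (string token in expression.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
            {
                switch (token)
                {
                    case "+": { double b = stack.Pop(), a = stack.Pop(); stack.Push(a + b); break; }
                    case "-": { double b = stack.Pop(), a = stack.Pop(); stack.Push(a - b); break; }
                    case "*": { double b = stack.Pop(), a = stack.Pop(); stack.Push(a * b); break; }
                    case "/": { double b = stack.Pop(), a = stack.Pop(); stack.Push(a / b); break; }
                    default: stack.Push(double.Parse(token, CultureInfo.InvariantCulture)); break;
                }
            }
            return stack.Pop();
        }

        static void Main()
        {
            Console.WriteLine(Evaluate("2.5 2.5 +"));   // prints 5
        }
    }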

    Read the article

  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code. When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enable you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements -- without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything. For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#. There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has Dojo Objective Harness (DOH). The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically. Unfortunately, Visual Studio does not provide “out-of-the-box” support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own. The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests. The goal is to enable you to execute JavaScript unit tests in exactly the same way as server-side unit tests. You can download the source code described by this project by scrolling to the end of this blog entry. Rejected Approach: Browser Launchers One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers(). 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title>Using QUnit</title> <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" /> </head> <body> <h1 id="qunit-header">QUnit example</h1> <h2 id="qunit-banner"></h2> <div id="qunit-testrunner-toolbar"></div> <h2 id="qunit-userAgent"></h2> <ol id="qunit-tests"></ol> <div id="qunit-fixture">test markup, will be hidden</div> <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script> <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script> <script type="text/javascript"> // The function to test function addNumbers(a, b) { return a+b; } // The unit test test("Test of addNumbers", function () { equals(4, addNumbers(1,3), "1+3 should be 4"); }); </script> </body> </html> This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass. The idea is that you can quickly refresh this QUnit HTML JavaScript test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you can know very quickly whenever you have broken your JavaScript code. While easy to setup, there are several big disadvantages to this approach to executing JavaScript unit tests: You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don’t appear in a single location, you are more likely to introduce bugs into your code without noticing it. Because your unit tests are not integrated with Visual Studio – in particular, MS Test -- you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build. A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach: · Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/. LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501 jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/ TestSwam – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server. 
Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/ All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a “living and breathing” browser. If you create an ASP.NET Unit Test then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test -- versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure. Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code. Using Server-Side JavaScript for JavaScript Unit Tests A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser: Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/ V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/ JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages. Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests. Using the Microsoft Script Control There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET. 
There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher level abstraction over the Window Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac After you download the Microsoft Script Control, you need to add a reference to it to your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add the Microsoft Script Control 1.0. Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx Creating the JavaScript Code to Test To keep things simple, let’s imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together: MvcApplication1\Scripts\Math.js function addNumbers(a, b) { return 5; } Notice that the addNumbers() method always returns the value 5. Right-now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project’s Scripts folder (Save the file in your actual MVC application and not your MVC test application). Creating the JavaScript Test Helper Class To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods: LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests. ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test. Here’s the code for the JavaScriptTestHelper class: JavaScriptTestHelper.cs   using System; using System.IO; using Microsoft.VisualStudio.TestTools.UnitTesting; using MSScriptControl; namespace MvcApplication1.Tests { public class JavaScriptTestHelper : IDisposable { private ScriptControl _sc; private TestContext _context; /// <summary> /// You need to use this helper with Unit Tests and not /// Basic Unit Tests because you need a Test Context /// </summary> /// <param name="testContext">Unit Test Test Context</param> public JavaScriptTestHelper(TestContext testContext) { if (testContext == null) { throw new ArgumentNullException("TestContext"); } _context = testContext; _sc = new ScriptControl(); _sc.Language = "JScript"; _sc.AllowUI = false; } /// <summary> /// Load the contents of a JavaScript file into the /// Script Engine. /// </summary> /// <param name="path">Path to JavaScript file</param> public void LoadFile(string path) { var fileContents = File.ReadAllText(path); _sc.AddCode(fileContents); } /// <summary> /// Pass the path of the test that you want to execute. 
/// </summary> /// <param name="testMethodName">JavaScript function name</param> public void ExecuteTest(string testMethodName) { dynamic result = null; try { result = _sc.Run(testMethodName, new object[] { }); } catch { var error = ((IScriptControl)_sc).Error; if (error != null) { var description = error.Description; var line = error.Line; var column = error.Column; var text = error.Text; var source = error.Source; if (_context != null) { var details = String.Format("{0} \r\nLine: {1} Column: {2}", source, line, column); _context.WriteLine(details); } } throw new AssertFailedException(error.Description); } } public void Dispose() { _sc = null; } } }     Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (These are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests). Creating the JavaScript Unit Test Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder: MvcApplication1.Tests\JavaScriptTests\MathTest.js /// <reference path="JavaScriptUnitTestFramework.js"/> function testAddNumbers() { // Act var result = addNumbers(1, 3); // Assert assert.areEqual(4, result, "addNumbers did not return right value!"); }   The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptnitTestFramework.js to the same folder as the MathTest.js file: MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js var assert = { areEqual: function (expected, actual, message) { if (expected !== actual) { throw new Error("Expected value " + expected + " is not equal to " + actual + ". " + message); } } }; There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests. Deploying the JavaScript Test Files This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder or Visual Studio won’t be able to find your JavaScript files when you execute your unit tests. You will get an error that looks something like this when you attempt to execute your unit tests: You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run. 
Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project’s JavaScriptTest folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog. Creating the Visual Studio Unit Test The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add New Item and selecting the Unit Test project item (Do not select the Basic Unit Test project item): The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test: [TestMethod] public void TestAddNumbers() { var jsHelper = new JavaScriptTestHelper(this.TestContext); // Load JavaScript files jsHelper.LoadFile("JavaScriptUnitTestFramework.js"); jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js"); jsHelper.LoadFile("MathTest.js"); // Execute JavaScript Test jsHelper.ExecuteTest("testAddNumbers"); } This code uses the JavaScriptTestHelper to load three files: JavaScripUnitTestFramework.js – Contains the assert functions. Math.js – Contains the addNumbers() function from your MVC application which is being tested. MathTest.js – Contains the JavaScript unit test function. Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function. Running the Visual Studio JavaScript Unit Test After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests: (Unfortunately, the Run All Impacted Tests button won’t work correctly because Visual Studio won’t detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.) The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. For example, if you Run All Tests in Solution then you will get the following results: Notice that the TestAddNumbers() JavaScript test has failed. That is good because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed: Summary The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window. 
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip Before running this code, you need to first install the Microsoft Script Control which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac
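As a quick recap of the AddCode()/Run() pattern that the JavaScriptTestHelper wraps, here is a minimal standalone sketch; it assumes the same Microsoft Script Control COM reference (MSScriptControl) described above and mirrors the calls used in the helper class:

    // Minimal sketch of driving the JScript engine through the Microsoft Script Control.
    // Assumes a COM reference to Microsoft Script Control 1.0, as described in the post.
    using System;
    using MSScriptControl;

    class ScriptControlSketch
    {
        static void Main()
        {
            var sc = new ScriptControl();
            sc.Language = "JScript";
            sc.AllowUI = false;

            // Load the JavaScript under test, then invoke a function by name.
            sc.AddCode("function addNumbers(a, b) { return a + b; }");
            object result = sc.Run("addNumbers", new object[] { 1, 3 });

            Console.WriteLine(result);   // expected: 4
        }
    }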

    Read the article

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built in DataContractJsonSerializer or the older JavaScript serializer. The DataContractSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types which are quite common these days. The JavaScript Serializer that came before it actually does support non-typed objects for serialization but it can't do anything with untyped data coming in from JavaScript and it's overall model of extensibility was pretty limited (JavaScript Serializer is what MVC uses for JSON responses). JSON.NET provides a robust JSON serializer that has both high level and low level components, supports binary JSON, JSON contracts, Xml to JSON conversion, LINQ to JSON and many, many more features than either of the built in serializers. ASP.NET Web API now uses JSON.NET as its default serializer and is now pulled in as a NuGet dependency into Web API projects, which is great. Dynamic JSON Parsing One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is without mapping the JSON captured directly into a .NET type as DataContractSerializer or the JavaScript Serializers do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries etc), and other times you simply don't have the structures in place or don't want to create them to actually import the data. If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discus JToken, JObject and JArray which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types. Why Dynamic JSON? So, why Dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third party services it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes lot of sense to pull just a small amount of data out of large JSON document received from a service, because the third party service isn't directly related to your application's logic most of the time - and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to me (or rather the app's) location. The Google API returns a ton of information that my application had no interest in - all I needed was few values out of the data. Dynamic JSON parsing makes it possible to map this data, without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store it on my business entities that needed to receive the data - no need to map the entire Maps API structure. 
Getting JSON.NET The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project with: PM> Install-Package Newtonsoft.Json From the Package Manager Console or by using Manage NuGet Packages in your project References. As mentioned if you're using ASP.NET Web API or MVC 4 JSON.NET will be automatically added to your project. Alternately you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/ Creating JSON on the fly with JObject and JArray Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken derived JSON.NET objects. The most common JToken derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaProvider and so uses the dynamic  keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JObject for the base object and songs and JArray for the actual collection of songs:[TestMethod] public void JObjectOutputTest() { // strong typed instance var jsonObject = new JObject(); // you can explicitly add values here using class interface jsonObject.Add("Entered", DateTime.Now); // or cast to dynamic to dynamically add/read properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1976; album.Songs = new JArray() as dynamic; dynamic song = new JObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); } This produces a complete JSON structure: { "Entered": "2012-08-18T13:26:37.7137482-10:00", "AlbumName": "Dirty Deeds Done Dirt Cheap", "Artist": "AC/DC", "YearReleased": 1976, "Songs": [ { "SongName": "Dirty Deeds Done Dirt Cheap", "SongLength": "4:11" }, { "SongName": "Love at First Feel", "SongLength": "3:10" } ] } Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override with the JsonSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime without compile time restraints of structure for the JSON output. Here I use JObject to create a album 'object' and immediately cast it to dynamic. JObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo collections of key value pairs that are exposed as properties through the IDynamicMetaObject interface exposed in JSON.NET's JToken base class. For objects the syntax is very clean - you add simple typed values as properties. For objects and arrays you have to explicitly create new JObject or JArray, cast them to dynamic and then add properties and items to them. 
Always remember though these values are dynamic - which means no Intellisense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data driven JSON generation from other structures. Below you can see both styles of access next to each other:// strong type instance var jsonObject = new JObject(); // you can explicitly add values here jsonObject.Add("Entered", DateTime.Now); // expando style instance you can just 'use' properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; JContainer (the base class for JObject and JArray) is a collection so you can also iterate over the properties at runtime easily:foreach (var item in jsonObject) { Console.WriteLine(item.Key + " " + item.Value.ToString()); } The functionality of the JSON objects are very similar to .NET's ExpandObject and if you used it before, you're already familiar with how the dynamic interfaces to the JSON objects works. Importing JSON with JObject.Parse() and JArray.Parse() The JValue structure supports importing JSON via the Parse() and Load() methods which can read JSON data from a string or various streams respectively. Essentially JValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can be then referenced using familiar dynamic object syntax. Here's a simple example:public void JValueParsingTest() { var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"", ""Entered"":""2012-03-16T00:03:33.245-10:00""}"; dynamic json = JValue.Parse(jsonString); // values require casting string name = json.Name; string company = json.Company; DateTime entered = json.Entered; Assert.AreEqual(name, "Rick"); Assert.AreEqual(company, "West Wind"); } The JSON string represents an object with three properties which is parsed into a JObject class and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken and I have to cast them to their appropriate types first before I can do type comparisons as in the Asserts at the end of the test method. This is required because of the way that dynamic types work which can't determine the type based on the method signature of the Assert.AreEqual(object,object) method. I have to either assign the dynamic value to a variable as I did above, or explicitly cast ( (string) json.Name) in the actual method call. The JSON structure can be much more complex than this simple example. 
Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue():[TestMethod] public void JsonArrayParsingTest() { var jsonString = @"[ { ""Id"": ""b3ec4e5c"", ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"", ""Artist"": ""AC/DC"", ""YearReleased"": 1976, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"", ""Songs"": [ { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" } ] }, { ""Id"": ""7b919432"", ""AlbumName"": ""End of the Silence"", ""Artist"": ""Henry Rollins Band"", ""YearReleased"": 1992, ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"", ""Songs"": [ { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" }, { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" } ] } ]"; JArray jsonVal = JArray.Parse(jsonString) as JArray; dynamic albums = jsonVal; foreach (dynamic album in albums) { Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")"); foreach (dynamic song in album.Songs) { Console.WriteLine("\t" + song.SongName); } } Console.WriteLine(albums[0].AlbumName); Console.WriteLine(albums[0].Songs[1].SongName); } JObject and JArray in ASP.NET Web API Of course these types also work in ASP.NET Web API controller methods. If you want you can accept parameters using these object or return them back to the server. The following contrived example receives dynamic JSON input, and then creates a new dynamic JSON object and returns it based on data from the first:[HttpPost] public JObject PostAlbumJObject(JObject jAlbum) { // dynamic input from inbound JSON dynamic album = jAlbum; // create a new JSON object to write out dynamic newAlbum = new JObject(); // Create properties on the new instance // with values from the first newAlbum.AlbumName = album.AlbumName + " New"; newAlbum.NewProperty = "something new"; newAlbum.Songs = new JArray(); foreach (dynamic song in album.Songs) { song.SongName = song.SongName + " New"; newAlbum.Songs.Add(song); } return newAlbum; } The raw POST request to the server looks something like this: POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1User-Agent: FiddlerContent-type: application/jsonHost: localhostContent-Length: 88 {AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]} and the output that comes back looks like this: {  "AlbumName": "Dirty Deeds New",  "NewProperty": "something new",  "Songs": [    {      "SongName": "Problem Child New"    },    {      "SongName": "Squealer New"    }  ]} The original values are echoed back with something extra appended to demonstrate that we're working with a new object. 
When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON, so effectively the parameter and result type explicitly determines the input and output format which is nice. Dynamic to Strong Type Mapping You can also map JObject and JArray instances to a strongly typed object, so you can mix dynamic and static typing in the same piece of code. Using the 2 Album jsonString shown earlier, the code below takes an array of albums and picks out only a single album and casts that album to a static Album instance.[TestMethod] public void JsonParseToStrongTypeTest() { JArray albums = JArray.Parse(jsonString) as JArray; // pick out one album JObject jalbum = albums[0] as JObject; // Copy to a static Album instance Album album = jalbum.ToObject<Album>(); Assert.IsNotNull(album); Assert.AreEqual(album.AlbumName,jalbum.Value<string>("AlbumName")); Assert.IsTrue(album.Songs.Count > 0); } This is pretty damn useful for the scenario I mentioned earlier - you can read a large chunk of JSON and dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static. Strongly typed JSON Parsing With all this talk of dynamic let's not forget that JSON.NET of course also does strongly typed serialization which is drop dead easy. Here's a simple example on how to serialize and deserialize an object with JSON.NET:[TestMethod] public void StronglyTypedSerializationTest() { // Demonstrate deserialization from a raw string var album = new Album() { AlbumName = "Dirty Deeds Done Dirt Cheap", Artist = "AC/DC", Entered = DateTime.Now, YearReleased = 1976, Songs = new List<Song>() { new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" }, new Song() { SongName = "Love at First Feel", SongLength = "3:10" } } }; // serialize to string string json2 = JsonConvert.SerializeObject(album,Formatting.Indented); Console.WriteLine(json2); // make sure we can serialize back var album2 = JsonConvert.DeserializeObject<Album>(json2); Assert.IsNotNull(album2); Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap"); Assert.IsTrue(album2.Songs.Count == 2); } JsonConvert is a high level static class that wraps lower level functionality, but you can also use the JsonSerializer class, which allows you to serialize/parse to and from streams. It's a little more work, but gives you a bit more control. The functionality available is easy to discover with Intellisense, and that's good because there's not a lot in the way of documentation that's actually useful. Summary JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing from dynamic parsing to static serialization, to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, and pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go! 
Resources: Sample Test Project; http://json.codeplex.com/ © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, Web API, AJAX.

    Read the article

  • Introducing the Earthquake Locator – A Bing Maps Silverlight Application, part 1

    - by Bobby Diaz
    Update: Live demo and source code now available!  The recent wave of earthquakes (no pun intended) being reported in the news got me wondering about the frequency and severity of earthquakes around the world. Since I’ve been doing a lot of Silverlight development lately, I decided to scratch my curiosity with a nice little Bing Maps application that will show the location and relative strength of recent seismic activity. Here is a list of technologies this application will utilize, so be sure to have everything downloaded and installed if you plan on following along. Silverlight 3 WCF RIA Services Bing Maps Silverlight Control * Managed Extensibility Framework (optional) MVVM Light Toolkit (optional) log4net (optional) * If you are new to Bing Maps or have not signed up for a Developer Account, you will need to visit www.bingmapsportal.com to request a Bing Maps key for your application. Getting Started We start out by creating a new Silverlight Application called EarthquakeLocator and specify that we want to automatically create the Web Application Project with RIA Services enabled. I cleaned up the web app by removing the Default.aspx and EarthquakeLocatorTestPage.html. Then I renamed the EarthquakeLocatorTestPage.aspx to Default.aspx and set it as my start page. I also set the development server to use a specific port, as shown below. RIA Services Next, I created a Services folder in the EarthquakeLocator.Web project and added a new Domain Service Class called EarthquakeService.cs. This is the RIA Services Domain Service that will provide earthquake data for our client application. I am not using LINQ to SQL or Entity Framework, so I will use the <empty domain service class> option. We will be pulling data from an external Atom feed, but this example could just as easily pull data from a database or another web service. This is an important distinction to point out because each scenario I just mentioned could potentially use a different Domain Service base class (i.e. LinqToSqlDomainService<TDataContext>). Now we can start adding Query methods to our EarthquakeService that pull data from the USGS web site. Here is the complete code for our service class: using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.ServiceModel.Syndication; using System.Web.DomainServices; using System.Web.Ria; using System.Xml; using log4net; using EarthquakeLocator.Web.Model;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// Provides earthquake data to client applications.     /// </summary>     [EnableClientAccess()]     public class EarthquakeService : DomainService     {         private static readonly ILog log = LogManager.GetLogger(typeof(EarthquakeService));           // USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/         private const string FeedForPreviousDay =             "http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml";         private const string FeedForPreviousWeek =             "http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml";           /// <summary>         /// Gets the earthquake data for the previous week.         
/// </summary>         /// <returns>A queryable collection of <see cref="Earthquake"/> objects.</returns>         public IQueryable<Earthquake> GetEarthquakes()         {             var feed = GetFeed(FeedForPreviousWeek);             var list = new List<Earthquake>();               if ( feed != null )             {                 foreach ( var entry in feed.Items )                 {                     var quake = CreateEarthquake(entry);                     if ( quake != null )                     {                         list.Add(quake);                     }                 }             }               return list.AsQueryable();         }           /// <summary>         /// Creates an <see cref="Earthquake"/> object for each entry in the Atom feed.         /// </summary>         /// <param name="entry">The Atom entry.</param>         /// <returns></returns>         private Earthquake CreateEarthquake(SyndicationItem entry)         {             Earthquake quake = null;             string title = entry.Title.Text;             string summary = entry.Summary.Text;             string point = GetElementValue<String>(entry, "point");             string depth = GetElementValue<String>(entry, "elev");             string utcTime = null;             string localTime = null;             string depthDesc = null;             double? magnitude = null;             double? latitude = null;             double? longitude = null;             double? depthKm = null;               if ( !String.IsNullOrEmpty(title) && title.StartsWith("M") )             {                 title = title.Substring(2, title.IndexOf(',')-3).Trim();                 magnitude = TryParse(title);             }             if ( !String.IsNullOrEmpty(point) )             {                 var values = point.Split(' ');                 if ( values.Length == 2 )                 {                     latitude = TryParse(values[0]);                     longitude = TryParse(values[1]);                 }             }             if ( !String.IsNullOrEmpty(depth) )             {                 depthKm = TryParse(depth);                 if ( depthKm != null )                 {                     depthKm = Math.Round((-1 * depthKm.Value) / 100, 2);                 }             }             if ( !String.IsNullOrEmpty(summary) )             {                 summary = summary.Replace("</p>", "");                 var values = summary.Split(                     new string[] { "<p>" },                     StringSplitOptions.RemoveEmptyEntries);                   if ( values.Length == 3 )                 {                     var times = values[1].Split(                         new string[] { "<br>" },                         StringSplitOptions.RemoveEmptyEntries);                       if ( times.Length > 0 )                     {                         utcTime = times[0];                     }                     if ( times.Length > 1 )                     {                         localTime = times[1];                     }                       depthDesc = values[2];                     depthDesc = "Depth: " + depthDesc.Substring(depthDesc.IndexOf(":") + 2);                 }             }               if ( latitude != null && longitude != null )             {                 quake = new Earthquake()                 {                     Id = entry.Id,                     Title = entry.Title.Text,                     Summary = entry.Summary.Text,                     Date = entry.LastUpdatedTime.DateTime,                     Url = 
entry.Links.Select(l => Path.Combine(l.BaseUri.OriginalString,                         l.Uri.OriginalString)).FirstOrDefault(),                     Age = entry.Categories.Where(c => c.Label == "Age")                         .Select(c => c.Name).FirstOrDefault(),                     Magnitude = magnitude.GetValueOrDefault(),                     Latitude = latitude.GetValueOrDefault(),                     Longitude = longitude.GetValueOrDefault(),                     DepthInKm = depthKm.GetValueOrDefault(),                     DepthDesc = depthDesc,                     UtcTime = utcTime,                     LocalTime = localTime                 };             }               return quake;         }           private T GetElementValue<T>(SyndicationItem entry, String name)         {             var el = entry.ElementExtensions.Where(e => e.OuterName == name).FirstOrDefault();             T value = default(T);               if ( el != null )             {                 value = el.GetObject<T>();             }               return value;         }           private double? TryParse(String value)         {             double d;             if ( Double.TryParse(value, out d) )             {                 return d;             }             return null;         }           /// <summary>         /// Gets the feed at the specified URL.         /// </summary>         /// <param name="url">The URL.</param>         /// <returns>A <see cref="SyndicationFeed"/> object.</returns>         public static SyndicationFeed GetFeed(String url)         {             SyndicationFeed feed = null;               try             {                 log.Debug("Loading RSS feed: " + url);                   using ( var reader = XmlReader.Create(url) )                 {                     feed = SyndicationFeed.Load(reader);                 }             }             catch ( Exception ex )             {                 log.Error("Error occurred while loading RSS feed: " + url, ex);             }               return feed;         }     } }   The only method that will be generated in the client side proxy class, EarthquakeContext, will be the GetEarthquakes() method. The reason being that it is the only public instance method and it returns an IQueryable<Earthquake> collection that can be consumed by the client application. GetEarthquakes() calls the static GetFeed(String) method, which utilizes the built in SyndicationFeed API to load the external data feed. You will need to add a reference to the System.ServiceModel.Web library in order to take advantage of the RSS/Atom reader. The API will also allow you to create your own feeds to serve up in your applications. Model I have also created a Model folder and added a new class, Earthquake.cs. The Earthquake object will hold the various properties returned from the Atom feed. Here is a sample of the code for that class. Notice the [Key] attribute on the Id property, which is required by RIA Services to uniquely identify the entity. using System; using System.Collections.Generic; using System.Linq; using System.Runtime.Serialization; using System.ComponentModel.DataAnnotations;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     [DataContract]     public class Earthquake     {         /// <summary>         /// Gets or sets the id.         
/// </summary>         /// <value>The id.</value>         [Key]         [DataMember]         public string Id { get; set; }           /// <summary>         /// Gets or sets the title.         /// </summary>         /// <value>The title.</value>         [DataMember]         public string Title { get; set; }           /// <summary>         /// Gets or sets the summary.         /// </summary>         /// <value>The summary.</value>         [DataMember]         public string Summary { get; set; }           // additional properties omitted     } }   View Model The recent trend to use the MVVM pattern for WPF and Silverlight provides a great way to separate the data and behavior logic out of the user interface layer of your client applications. I have chosen to use the MVVM Light Toolkit for the Earthquake Locator, but there are other options out there if you prefer another library. That said, I went ahead and created a ViewModel folder in the Silverlight project and added a EarthquakeViewModel class that derives from ViewModelBase. Here is the code: using System; using System.Collections.ObjectModel; using System.ComponentModel.Composition; using System.ComponentModel.Composition.Hosting; using Microsoft.Maps.MapControl; using GalaSoft.MvvmLight; using EarthquakeLocator.Web.Model; using EarthquakeLocator.Web.Services;   namespace EarthquakeLocator.ViewModel {     /// <summary>     /// Provides data for views displaying earthquake information.     /// </summary>     public class EarthquakeViewModel : ViewModelBase     {         [Import]         public EarthquakeContext Context;           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         public EarthquakeViewModel()         {             var catalog = new AssemblyCatalog(GetType().Assembly);             var container = new CompositionContainer(catalog);             container.ComposeParts(this);             Initialize();         }           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         /// <param name="context">The context.</param>         public EarthquakeViewModel(EarthquakeContext context)         {             Context = context;             Initialize();         }           private void Initialize()         {             MapCenter = new Location(20, -170);             ZoomLevel = 2;         }           #region Private Methods           private void OnAutoLoadDataChanged()         {             LoadEarthquakes();         }           private void LoadEarthquakes()         {             var query = Context.GetEarthquakesQuery();             Context.Earthquakes.Clear();               Context.Load(query, (op) =>             {                 if ( !op.HasError )                 {                     foreach ( var item in op.Entities )                     {                         Earthquakes.Add(item);                     }                 }             }, null);         }           #endregion Private Methods           #region Properties           private bool autoLoadData;         /// <summary>         /// Gets or sets a value indicating whether to auto load data.         
/// </summary>         /// <value><c>true</c> if auto loading data; otherwise, <c>false</c>.</value>         public bool AutoLoadData         {             get { return autoLoadData; }             set             {                 if ( autoLoadData != value )                 {                     autoLoadData = value;                     RaisePropertyChanged("AutoLoadData");                     OnAutoLoadDataChanged();                 }             }         }           private ObservableCollection<Earthquake> earthquakes;         /// <summary>         /// Gets the collection of earthquakes to display.         /// </summary>         /// <value>The collection of earthquakes.</value>         public ObservableCollection<Earthquake> Earthquakes         {             get             {                 if ( earthquakes == null )                 {                     earthquakes = new ObservableCollection<Earthquake>();                 }                   return earthquakes;             }         }           private Location mapCenter;         /// <summary>         /// Gets or sets the map center.         /// </summary>         /// <value>The map center.</value>         public Location MapCenter         {             get { return mapCenter; }             set             {                 if ( mapCenter != value )                 {                     mapCenter = value;                     RaisePropertyChanged("MapCenter");                 }             }         }           private double zoomLevel;         /// <summary>         /// Gets or sets the zoom level.         /// </summary>         /// <value>The zoom level.</value>         public double ZoomLevel         {             get { return zoomLevel; }             set             {                 if ( zoomLevel != value )                 {                     zoomLevel = value;                     RaisePropertyChanged("ZoomLevel");                 }             }         }           #endregion Properties     } }   The EarthquakeViewModel class contains all of the properties that will be bound to by the various controls in our views. Be sure to read through the LoadEarthquakes() method, which handles calling the GetEarthquakes() method in our EarthquakeService via the EarthquakeContext proxy, and also transfers the loaded entities into the view model’s Earthquakes collection. Another thing to notice is what’s going on in the default constructor. I chose to use the Managed Extensibility Framework (MEF) for my composition needs, but you can use any dependency injection library or none at all. To allow the EarthquakeContext class to be discoverable by MEF, I added the following partial class so that I could supply the appropriate [Export] attribute: using System; using System.ComponentModel.Composition;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// The client side proxy for the EarthquakeService class.     /// </summary>     [Export]     public partial class EarthquakeContext     {     } }   One last piece I wanted to point out before moving on to the user interface, I added a client side partial class for the Earthquake entity that contains helper properties that we will bind to later: using System;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     public partial class Earthquake     {         /// <summary>         /// Gets the location based on the current Latitude/Longitude.         
/// </summary>         /// <value>The location.</value>         public string Location         {             get { return String.Format("{0},{1}", Latitude, Longitude); }         }           /// <summary>         /// Gets the size based on the Magnitude.         /// </summary>         /// <value>The size.</value>         public double Size         {             get { return (Magnitude * 3); }         }     } }   View Now the fun part! Usually, I would create a Views folder to place all of my View controls in, but I took the easy way out and added the following XAML code to the default MainPage.xaml file. Be sure to add the bing prefix associating the Microsoft.Maps.MapControl namespace after adding the assembly reference to your project. The MVVM Light Toolkit project templates come with a ViewModelLocator class that you can use via a static resource, but I am instantiating the EarthquakeViewModel directly in my user control. I am setting the AutoLoadData property to true as a way to trigger the LoadEarthquakes() method call. The MapItemsControl found within the <bing:Map> control binds its ItemsSource property to the Earthquakes collection of the view model, and since it is an ObservableCollection<T>, we get the automatic two way data binding via the INotifyCollectionChanged interface. <UserControl x:Class="EarthquakeLocator.MainPage"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:bing="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"     xmlns:vm="clr-namespace:EarthquakeLocator.ViewModel"     mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480" >     <UserControl.Resources>         <DataTemplate x:Key="EarthquakeTemplate">             <Ellipse Fill="Red" Stroke="Black" StrokeThickness="1"                      Width="{Binding Size}" Height="{Binding Size}"                      bing:MapLayer.Position="{Binding Location}"                      bing:MapLayer.PositionOrigin="Center">                 <ToolTipService.ToolTip>                     <StackPanel>                         <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />                         <TextBlock Text="{Binding UtcTime}" />                         <TextBlock Text="{Binding LocalTime}" />                         <TextBlock Text="{Binding DepthDesc}" />                     </StackPanel>                 </ToolTipService.ToolTip>             </Ellipse>         </DataTemplate>     </UserControl.Resources>       <UserControl.DataContext>         <vm:EarthquakeViewModel AutoLoadData="True" />     </UserControl.DataContext>       <Grid x:Name="LayoutRoot">           <bing:Map x:Name="map" CredentialsProvider="--Your-Bing-Maps-Key--"                   Center="{Binding MapCenter, Mode=TwoWay}"                   ZoomLevel="{Binding ZoomLevel, Mode=TwoWay}">             <bing:MapItemsControl ItemsSource="{Binding Earthquakes}"                                   ItemTemplate="{StaticResource EarthquakeTemplate}" />         </bing:Map>       </Grid> </UserControl>   The EarthquakeTemplate defines the Ellipse that will represent each earthquake, the Width and Height that are determined by the Magnitude, the Position on the map, and also the tooltip that will appear when we mouse over each data point. 
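    Before running it, the two helper properties the template binds to can be sanity checked outside of Silverlight. Below is a minimal console sketch (not part of the original project) that uses a simplified stand-in for the RIA Services-generated Earthquake proxy, reproducing only the members the EarthquakeTemplate consumes:

    using System;

    // Simplified stand-in for the generated Earthquake proxy; only the members
    // bound by the EarthquakeTemplate are reproduced here.
    public class Earthquake
    {
        public double Latitude { get; set; }
        public double Longitude { get; set; }
        public double Magnitude { get; set; }

        // Mirrors the client-side partial class helpers shown earlier.
        public string Location
        {
            get { return String.Format("{0},{1}", Latitude, Longitude); }
        }

        public double Size
        {
            get { return (Magnitude * 3); }
        }
    }

    public class TemplateBindingCheck
    {
        public static void Main()
        {
            var quake = new Earthquake { Latitude = 38.32, Longitude = -122.31, Magnitude = 4.2 };

            Console.WriteLine(quake.Location);  // "38.32,-122.31" feeds bing:MapLayer.Position
            Console.WriteLine(quake.Size);      // 12.6 feeds the Ellipse Width/Height
        }
    }

    Scaling the magnitude by three keeps the weakest quakes in the feed (M2.5) visible at roughly 7.5 pixels while letting stronger events stand out.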
    Running the application will give us the following result (shown with a tooltip example): That concludes this portion of our show, but I plan on implementing additional functionality in later blog posts. Be sure to come back soon to see the next installments in this series. Enjoy!

    Additional Resources
    USGS Earthquake Data Feeds
    Brad Abrams shows how RIA Services and MVVM can work together
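    Postscript: the write side of the same syndication API is worth a quick look, since the article notes that System.ServiceModel.Syndication can also be used to create your own feeds. Here is a minimal hedged sketch (the feed title, URIs, and item below are invented for illustration and are not part of the Earthquake Locator project):

    using System;
    using System.Collections.Generic;
    using System.ServiceModel.Syndication;
    using System.Xml;

    public class FeedWriterSketch
    {
        public static void Main()
        {
            // Illustrative feed metadata; replace with your own.
            var feed = new SyndicationFeed(
                "Recent Earthquakes", "Demo feed", new Uri("http://example.com/quakes"));

            feed.Items = new List<SyndicationItem>
            {
                new SyndicationItem(
                    "M 4.2, Example Region", "Summary text for the event.",
                    new Uri("http://example.com/quakes/1"))
            };

            // The same API family that GetFeed() reads with; here it writes Atom 1.0.
            using (var writer = XmlWriter.Create(Console.Out))
            {
                new Atom10FeedFormatter(feed).WriteTo(writer);
            }
        }
    }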

    Read the article

  • E-Business Suite Technology Sessions at OpenWorld 2012

    - by Max Arderius
    Oracle OpenWorld 2012 is almost here! We're looking forward to updating you on our products, strategy, and roadmaps. This year, the E-Business Suite Applications Technology Group (ATG) will participate in 25 speaker sessions, two Meet the Experts round-table discussions, and five demoground booths, and will appear as guest speakers at seven Special Interest Group meetings. We hope to see you at our sessions. Please join us to hear the latest news and connect with senior ATG development staff. Here's a downloadable listing of all Applications Technology Group-related sessions with times and locations: FOCUS ON Oracle E-Business Suite - Applications Tools and Technology (PDF)

    General Sessions

    GEN8474 - Oracle E-Business Suite - Strategy, Update, and Roadmap
    Cliff Godwin, SVP, Oracle
    Monday, Oct 1, 12:15 PM - 1:15 PM - Moscone West 2002/2004
    In this session, hear Oracle E-Business Suite General Manager Cliff Godwin deliver an update on the Oracle E-Business Suite product line. This session covers the value delivered by the current release of Oracle E-Business Suite, the momentum, and how Oracle E-Business Suite applications integrate into Oracle’s overall applications strategy. You’ll come away with an understanding of the value Oracle E-Business Suite applications deliver now and will deliver in the future.

    GEN9173 - Optimize and Extend Oracle Applications - The Path to Oracle Fusion Applications
    Nadia Bendjedou, Oracle; Corre Curtice, Bhavish Madurai (CSC)
    Tuesday, Oct 2, 10:15 AM - 11:15 AM - Moscone West 3002/3004
    One of the main objectives of this session is to help organizations build their IT roadmap for the next five years and be aligned with the Oracle Applications strategy in general and the Oracle Fusion Applications strategy in particular. Come hear about some of the common-sense, practical steps you can take to optimize the performance of your Oracle Applications today and prepare your path to Oracle Fusion Applications for when your organization is ready to embrace them. Each step you take in adopting Oracle Fusion technology gets you partway to Oracle Fusion Applications.

    Conference Sessions

    CON9024 - Oracle E-Business Suite Technology: Latest Features and Roadmap
    Lisa Parekh, Oracle
    Monday, Oct 1, 10:45 AM - 11:45 AM - Moscone West 2016
    This Oracle development session provides a comprehensive overview of Oracle’s product strategy for Oracle E-Business Suite technology, the capabilities and associated business benefits of recent releases, and a review of capabilities on the product roadmap. This is the cornerstone session for the Oracle E-Business Suite technology stack. Come hear about the latest new usability enhancements of the user interface; systems administration and configuration management tools; security-related updates; and tools and options for extending, customizing, and integrating Oracle E-Business Suite with other applications.

    CON9021 - Oracle E-Business Suite Future Directions: Deployment and System Administration
    Max Arderius, Oracle
    Monday, Oct 1, 3:15 PM - 4:15 PM - Moscone West 2016
    What’s coming in the next major version of Oracle E-Business Suite 12? This Oracle Development session covers the latest technology stack, including the use of Oracle WebLogic Server (Oracle Fusion Middleware 11g) and Oracle Database 11g Release 2 (11.2). Topics include an architectural overview of the latest updates, installation and upgrade options, new configuration options, and new tools for hot cloning and automated “lights-out” cloning.
    Come learn how online patching (based on the Oracle Database 11g Release 2 Edition-Based Redefinition feature) will reduce your database patching downtimes to however long it takes to bounce your database server.

    CON9017 - Desktop Integration in Oracle E-Business Suite 12.1
    Padmaprabodh Ambale, Gustavo Jimenez, Oracle
    Monday, Oct 1, 4:45 PM - 5:45 PM - Moscone West 2016
    This presentation covers the latest functional enhancements in Oracle Web Applications Desktop Integrator and Oracle Report Manager, enhanced Microsoft Office support, and greater support for building custom desktop integration solutions. The session also presents tips and tricks for upgrading from Oracle Applications Desktop Integrator to Oracle Web Applications Desktop Integrator and Oracle Report Manager.

    CON9023 - Oracle E-Business Suite Technology Certification Primer and Roadmap
    Steven Chan, Oracle
    Tuesday, Oct 2, 10:15 AM - 11:15 AM - Moscone West 2016
    Is your Oracle E-Business Suite technology stack up to date? Are you taking advantage of all the latest options and capabilities? This Oracle development session summarizes the latest certifications and roadmap for the Oracle E-Business Suite technology stack, including elements such as database releases and options, Java, Oracle Forms, Oracle Containers for J2EE, desktop operating systems, browsers, JRE releases, development and Web authoring tools, user authentication and management, business intelligence, Oracle Application Management Packs, security options, clouds, Oracle VM, and virtualization. The session also covers the most commonly asked questions about tech stack component support dates and upgrade implications.

    CON9028 - Minimizing Oracle E-Business Suite Maintenance Downtimes
    Santiago Bastidas, Elke Phelps, Oracle
    Tuesday, Oct 2, 11:45 AM - 12:45 PM - Moscone West 2016
    This Oracle development session features a survey of the best techniques sysadmins can use to minimize patching downtimes. It starts with an architectural-level review of Oracle E-Business Suite fundamentals and then moves to a practical view of the various tools and approaches for downtimes. Topics include patching shortcuts, merging patches, distributing worker processes across multiple servers, running ADPatch in noninteractive mode, staged APPL_TOPs, shared file systems, deferring systemwide database tasks, avoiding resource bottlenecks, and more. An added bonus: hear about the upcoming Oracle E-Business Suite 12 online patching capabilities based on the groundbreaking Oracle Database 11g Release 2 Edition-Based Redefinition feature.

    CON9116 - Extending the Use of Oracle E-Business Suite with the Oracle Endeca Platform
    Osama Elkady, Muhannad Obeidat, Oracle
    Tuesday, Oct 2, 11:45 AM - 12:45 PM - Moscone West 2018
    The Oracle Endeca platform includes a leading unstructured data correlation and analytics engine, together with a best-in-class catalog search and guided navigation solution, to improve the productivity of all types of users in your enterprise. This development session focuses on the details behind the Oracle Endeca platform’s integration into Oracle E-Business Suite. It demonstrates how easily you can extend the use of the Oracle Endeca platform into other areas of Oracle E-Business Suite and how you can bring in your own data and build new Oracle Endeca applications for Oracle E-Business Suite.
    CON9005 - Oracle E-Business Suite Integration Best Practices
    Veshaal Singh, Oracle; Jeffrey Hand, Zebra Technologies
    Tuesday, Oct 2, 1:15 PM - 2:15 PM - Moscone West 2018
    Oracle is investing across applications and technologies to make the application integration experience easier for customers. Today Oracle has certified Oracle E-Business Suite on Oracle Fusion Middleware 11g and provides a comprehensive set of integration technologies. Learn about Oracle’s integration offering across data- and process-centric integrations. These technologies can be used to address various application integration challenges and styles. In this session, you will get an understanding of how, when, and where you can leverage Oracle’s integration technologies to connect end-to-end business processes across your enterprise, including your Oracle Applications portfolio.

    CON9026 - Latest Oracle E-Business Suite 12.1 User Interface and Usability Enhancements
    Padmaprabodh Ambale, Oracle
    Tuesday, Oct 2, 1:15 PM - 2:15 PM - Moscone West 2016
    This Oracle development session details the latest UI enhancements to Oracle Application Framework in Oracle E-Business Suite 12.1. Developers will get a detailed look at new features to enhance usability, offer more capabilities for personalization and extensions, and support the development and use of dashboards and Web services. Topics include new rich UI capabilities such as new home page features, Navigator and Favorites pull-down menus, REST interface, embedded widgets for analytics content, Oracle Application Development Framework (Oracle ADF) task flows, third-party widgets, a look-ahead list of values, inline attachments, pop-ups, personalization and extensibility enhancements, business layer extensions, Oracle ADF integration, and mobile devices.

    CON8805 - Planning Your Oracle E-Business Suite Upgrade from 11i to Release 12.1 and Beyond
    Anne Carlson, Oracle
    Tuesday, Oct 2, 5:00 PM - 6:00 PM - Moscone West 3002/3004
    Attend this session to hear the latest Oracle E-Business Suite 12.1 upgrade planning tips from Oracle’s support, consulting, development, and IT organizations. You’ll get specific cross-product advice on how to understand the factors that affect your project’s duration, decide on your project’s scope, develop a robust testing strategy, leverage Oracle Support resources, and more. In a nutshell, this session tells you things you need to know before embarking upon your Release 12.1 upgrade project.

    CON9053 - Advanced Management of Oracle E-Business Suite with Oracle Enterprise Manager
    Angelo Rosado, Oracle
    Tuesday, Oct 2, 5:00 PM - 6:00 PM - Moscone West 2016
    The task of managing and monitoring Oracle E-Business Suite environments can be very challenging. Oracle Enterprise Manager is the only product on the market that is designed to monitor and manage all the different technologies that constitute Oracle E-Business Suite applications, including end user, midtier, configuration, host, and database management—to name just a few. Customers that have implemented Oracle Enterprise Manager have experienced dramatic improvements in system visibility and diagnostic capability as well as administrator productivity. The purpose of this session is to highlight the key features and benefits of Oracle Enterprise Manager and Oracle Application Management Suite for Oracle E-Business Suite.
    CON8809 - Oracle E-Business Suite 12.1 Upgrade Best Practices: Technical Insight
    Isam Alyousfi, Udayan Parvate, Oracle
    Wednesday, Oct 3, 10:15 AM - 11:15 AM - Moscone West 3011
    This session is ideal for organizations thinking about upgrading to Oracle E-Business Suite 12.1. It covers the fundamentals of upgrading to Release 12.1, including the technology stack components and supported upgrade paths. Hear from Oracle Development about the set of best practices for patching in general and executing the Release 12.1 technical upgrade, with special considerations for minimizing your downtime. Also get to know about relatively recent upgrade resources.

    CON9032 - Upgrading Your Customizations of Oracle E-Business Suite 12.1
    Sara Woodhull, Oracle
    Wednesday, Oct 3, 10:15 AM - 11:15 AM - Moscone West 2016
    Have you personalized Oracle Forms or Oracle Application Framework screens in Oracle E-Business Suite? Have you used mod_plsql in Release 11i? Have you extended or customized your Release 11i environment with other tools? The technical options for upgrading these customizations as part of your Oracle E-Business Suite Release 12.1 upgrade can be bewildering. Come to this Oracle development session to learn about selecting the best upgrade approach for your existing customizations. The session will help you understand customization scenarios and use cases, tools, and technologies to ensure that your Oracle E-Business Suite Release 12.1 environment fits your users’ needs closely and that any future customizations will be easy to upgrade.

    CON9259 - Oracle E-Business Suite Internationalization and Multilingual Features
    Maher Al-Nubani, Oracle
    Wednesday, Oct 3, 10:15 AM - 11:15 AM - Moscone West 2018
    Oracle E-Business Suite supports more countries, languages, and regions than ever. Come to this Oracle development session to get an overview of internationalization features and capabilities and see new Release 12 features such as calendar support for Hijra and Thai, new group separators, lightweight multilingual support (MLS) setup, new character sets such as AL32UTF8, newly supported languages, Mac certifications, Oracle iSetup support for moving MLS setups, new file export options for Unicode, new MLS number spelling options, and more.

    CON7188 - Mobile Apps for Oracle E-Business Suite with Oracle ADF Mobile and Oracle SOA Suite
    Srikant Subramaniam, Joe Huang, Veshaal Singh, Oracle
    Wednesday, Oct 3, 10:15 AM - 11:15 AM - Moscone West 3001
    Follow your mobile customers, employees, and partners with Oracle Fusion Middleware. See how native iPhone and iPad applications can easily be built for Oracle E-Business Suite with the new Oracle ADF Mobile and Oracle SOA Suite. Using Oracle ADF Mobile, developers can quickly develop native applications for Apple iOS and other mobile platforms. The Oracle SOA Suite/Oracle ADF Mobile combination can execute business transactions on Oracle E-Business Suite. This session includes a demo in which a mobile user approves a business transaction in Oracle E-Business Suite and a demo of the tools used to build a native on-device solution.
    These concepts for mobile applications also apply to other Oracle applications.

    CON9029 - Oracle E-Business Suite Directions: Slashing Downtimes with Online Patching
    Kevin Hudson, Oracle
    Wednesday, Oct 3, 11:45 AM - 12:45 PM - Moscone West 2016
    Oracle E-Business Suite will soon include online patching (based on the Oracle Database 11g Release 2 Edition-Based Redefinition feature), which will reduce your database patching downtimes to however long it takes to bounce your database server. This Oracle development session details how online patching works, with special attention to what’s happening at a database object level when database patches are applied to an Oracle E-Business Suite environment that’s still running. Come learn about the operational and system management implications for minimizing maintenance downtimes when applying database patches with this new technology and the related impact on customizations you might have built on top of Oracle E-Business Suite.

    CON8806 - Upgrading to Oracle E-Business Suite 12.1: Technical and Functional Panel
    Andrew Katz, Komori America Corporation; Sandra Vucinic, VLAD Group, Inc.; Srini Chavali, Cummins Inc.; Amrita Mehrok, Nadia Bendjedou, Anne Carlson, Oracle
    Wednesday, Oct 3, 1:15 PM - 2:15 PM - Moscone West 2018
    In this panel discussion, Oracle experts, customers, and partners share their experiences in upgrading to the latest release of Oracle E-Business Suite, Release 12.1. The panelists cover aspects of a typical Release 12 upgrade, technical (upgrading the technical infrastructure) as well as functional (upgrading to the new financial infrastructure). Hear directly from the experts who either develop the product or support, implement, or upgrade it, and find out how to apply their lessons learned to your organization.

    CON9027 - Personalize and Extend Oracle E-Business Suite Applications with Rich Mashups
    Gustavo Jimenez, Padmaprabodh Ambale, Oracle
    Wednesday, Oct 3, 1:15 PM - 2:15 PM - Moscone West 2016
    This session covers the use of several Oracle Fusion Middleware technologies to personalize and extend your existing Oracle E-Business Suite applications. The Oracle Fusion Middleware technologies covered include Oracle Application Development Framework (Oracle ADF), Oracle WebCenter, Oracle Endeca applications, and Oracle Business Intelligence Enterprise Edition with Oracle E-Business Suite Oracle Application Framework applications.

    CON9036 - Advanced Oracle E-Business Suite Architectures: Maximum Availability, Security, and More
    Elke Phelps, Oracle
    Wednesday, Oct 3, 3:30 PM - 4:30 PM - Moscone West 2016
    This session includes architecture diagrams and configuration instructions for building a maximum availability architecture (MAA) that will help you design a disaster recovery solution that fits the needs of your business. Database and application high-availability features it describes include Oracle Data Guard, Oracle Real Application Clusters (Oracle RAC), Oracle Active Data Guard, load-balancing Web and forms services, parallel concurrent processing, and the use of Oracle Exalogic and Oracle Exadata to provide a highly available environment. The session also covers the latest updates to systems management tools, AutoConfig, cloud computing, virtualization, and Oracle WebLogic Server and provides sneak previews of upcoming functionality.
    CON9047 - Efficiently Scaling Oracle E-Business Suite on Oracle Exadata and Oracle Exalogic
    Isam Alyousfi, Nishit Rao, Oracle
    Wednesday, Oct 3, 5:00 PM - 6:00 PM - Moscone West 2016
    Oracle Exadata and Oracle Exalogic are designed from the ground up with optimizations in software and hardware to deliver superfast performance for mission-critical applications such as Oracle E-Business Suite. Oracle E-Business Suite applications run three to eight times as fast on the Oracle Exadata/Oracle Exalogic platform in standard benchmark tests. Besides performance, customers benefit from simplified support, enhanced manageability, and the ability to consolidate multiple Oracle E-Business Suite instances. Attend this session to understand best practices for Oracle E-Business Suite deployment on Oracle Exalogic and Oracle Exadata through customer case studies. Learn how adopting the Exa* platform increases efficiency, simplifies scaling, and boosts performance for peak loads.

    CON8716 - Web Services and SOA Integration Options for Oracle E-Business Suite
    Rekha Ayothi, Veshaal Singh, Oracle
    Thursday, Oct 4, 11:15 AM - 12:15 PM - Moscone West 2016
    This Oracle development session provides a deep dive into a subset of the Web services and SOA-related integration options available to Oracle E-Business Suite systems integrators. It offers a technical look at Oracle E-Business Suite Integrated SOA Gateway, Oracle SOA Suite, Oracle Application Adapters for Data Integration for Oracle E-Business Suite, and other Web services options for integrating Oracle E-Business Suite with other applications. Systems integrators and developers will get an overview of the latest integration capabilities and technologies available out of the box with Oracle E-Business Suite and possibly a sneak preview of upcoming functionality and features.

    CON9030 - Recommendations for Oracle E-Business Suite Performance Tuning
    Isam Alyousfi, Samer Barakat, Oracle
    Thursday, Oct 4, 11:15 AM - 12:15 PM - Moscone West 2018
    Need to squeeze more performance out of your existing servers? This packed Oracle development session summarizes practical tips and lessons learned from performance-tuning and benchmarking the world’s largest Oracle E-Business Suite environments. Apps sysadmins will learn concrete tips and techniques for identifying and resolving performance bottlenecks on all layers, with special attention to application- and database-tier servers. Learn about tuning Oracle Forms, Oracle Concurrent Manager, Apache, and Oracle Discoverer. Track down memory leaks and other issues at the Java and JVM layers. The session also covers Oracle E-Business Suite product-level tuning, including Oracle Workflow, Oracle Order Management, Oracle Payroll, and other modules.

    CON3429 - Using Oracle ADF with Oracle E-Business Suite: The Full Integration View
    Siva Puthurkattil, Lake County; Juan Camilo Ruiz, Sara Woodhull, Oracle
    Thursday, Oct 4, 11:15 AM - 12:15 PM - Moscone West 3003
    Oracle E-Business Suite delivers functionality for handling the core business of your organization. However, user requirements and new technologies are driving an emerging need to implement new types of user interfaces for these applications. This session provides an overview of how to use Oracle Application Development Framework (Oracle ADF) to deliver cutting-edge Web 2.0 and mobile rich user interfaces that front existing Oracle E-Business Suite processes, and it also explores all the existing types of integration between the two worlds.
    CON9020 - Integrating Oracle E-Business Suite with Oracle Identity Management Solutions
    Sunil Ghosh, Elke Phelps, Oracle
    Thursday, Oct 4, 12:45 PM - 1:45 PM - Moscone West 2016
    Need to integrate Oracle E-Business Suite with Microsoft Windows Kerberos, Active Directory, CA Netegrity SiteMinder, or other third-party authentication systems? Want to understand your options when Oracle Premier Support for Oracle Single Sign-On ends in December 2011? This Oracle Development session covers the latest certified integrations with Oracle Access Manager 11g and Oracle Internet Directory 11g, which can be used individually or as bridges for integrating with third-party authentication solutions. The session presents an architectural overview of how Oracle Access Manager, its WebGate and AccessGate components, and Oracle Internet Directory work together, with implications for Oracle Discoverer, Oracle Portal, and other Oracle Fusion identity management products.

    CON9019 - Troubleshooting, Diagnosing, and Optimizing Oracle E-Business Suite Technology
    Gustavo Jimenez, Oracle
    Thursday, Oct 4, 2:15 PM - 3:15 PM - Moscone West 2016
    This session covers how you can proactively diagnose Oracle E-Business Suite applications, including extensions built with Oracle Fusion Middleware technologies such as Oracle Application Development Framework (Oracle ADF) and Oracle WebCenter, to catch potential issues in the middle tier before they become more serious. Topics include debugging, logging infrastructure, warning signs, performance tuning, information required when logging service requests, general JVM optimization, and an overall picture of all the moving parts that make it possible for Oracle E-Business Suite to isolate and fix problems. Also learn how Oracle Diagnostics Framework will help prevent downtime caused by failures.

    CON9031 - The Top 10 Things You Can Do to Secure Your Oracle E-Business Suite Instance
    Eric Bing, Erik Graversen, Oracle
    Thursday, Oct 4, 2:15 PM - 3:15 PM - Moscone West 2018
    Learn the top 10 things you can do to secure your applications and your sensitive data. This Oracle development session for system administrators and security professionals explores some of the most important and overlooked things you can do to secure your Oracle E-Business Suite instance. It also covers data masking and other mechanisms for protecting sensitive data.

    Special Interest Groups (SIG)

    Some of our most senior staff have been invited to participate in the following SIG meetings as guest speakers:

    SIG10525 - OAUG - Archive & Purge SIG
    Brian Bent - Pre-Sales Engineer, TierData, Inc.
    Sunday, Sep 30, 10:30 AM - 12:00 PM - Moscone West 3011
    The Archive and Purge SIG is an organization in which users can share their experiences and solicit functional and technical advice on archiving and purging data in Oracle E-Business Suite. This session provides an opportunity for users to network and share best practices, tips, and tricks.
    Guest: Oracle E-Business Suite Database Performance, Archive & Purging - Q&A Session
    Isam Alyousfi, Senior Director, Applications Performance, Oracle

    SIG10547 - OAUG - Oracle E-Business (EBS) Applications Technology SIG
    Srini Chavali - IT Director, Cummins Inc.
    Sunday, Sep 30, 10:30 AM - 12:00 PM - Moscone West 3018
    The general purpose of the EBS Applications Technology SIG is to inform and educate its members about current and future components of the tech stack as they relate to Oracle E-Business Suite. Attend this meeting for networking and education and to share best practices.
    Guest: Oracle E-Business Suite Technology Certification Roadmap - Presentation and Q&A
    Steven Chan, Sr. Director, Applications Technology Group, Oracle

    SIG10559 - OAUG - User Management SIG
    Susan Behn - VP of Oracle Delivery, Infosemantics, Inc.
    Sunday, Sep 30, 10:30 AM - 12:00 PM - Moscone West 3024
    The E-Business Suite User Management SIG focuses on the components of user management that enable Oracle E-Business Suite users to define administrative functions and manage users’ access to functions and data based on roles within an organization—rather than the user’s individual identity—which is referred to as role-based access control (RBAC). This meeting includes an introduction to Oracle User Management that covers the Oracle User Management building blocks and presents an example of creating a security policy.
    Guest: Security and User Management - Q&A Session
    Eric Bing, Sr. Director, EBS Security, Oracle
    Sara Woodhull, Principal Product Manager, Applications Technology Group, Oracle

    SIG10515 - OAUG - Upgrade SIG
    Barbara Matthews - Consultant, On Call DBA
    Sandra Vucinic, VLAD Group, Inc.
    Sunday, Sep 30, 12:00 PM - 2:00 PM - Moscone West 3009
    This Upgrade SIG session starts with a business meeting and then features a Q&A panel discussion on Oracle E-Business Suite upgrade topics. The session:
    • Reviews Upgrade SIG goals and objectives
    • Provides answers, during the Q&A session, to questions related to Oracle E-Business Suite upgrades
    • Shares “real world” experiences, tips, and techniques for Oracle E-Business Suite upgrades to Release 12.1.
    Guest: Oracle E-Business Suite Upgrade - Q&A Session
    Anne Carlson - Sr. Director, Oracle E-Business Suite Product Strategy, Oracle
    Udayan Parvate - Director, EBS Release Engineering, Oracle
    Suzana Ferrari, Sr. Principal Consultant, Oracle
    Isam Alyousfi, Sr. Director, Applications Performance, Oracle

    SIG10552 - OAUG - Oracle E-Business Suite SIG
    Donna Rosentrater - Manager, Global Sourcing & Procurement Systems, TJX
    Sunday, Sep 30, 12:15 PM - 1:45 PM - Moscone West 3020
    The E-Business Suite SIG, affiliated with OAUG, supports Oracle E-Business Suite users through networking, education, and sharing of best practices. This SIG meeting will feature a general discussion of Oracle E-Business Suite product strategies in Release 12 and migration to Oracle Fusion Applications.
    Guest: Oracle E-Business Suite - Q&A Session
    Jeanne Lowell, Vice President, EBS Product Strategy, Oracle
    Nadia Bendjedou, Sr. Director, Product Strategy, Oracle

    SIG10556 - OAUG - SysAdmin SIG
    Randy Giefer - Sr. Systems and Security Architect, Solution Beacon, LLC
    Sunday, Sep 30, 12:15 PM - 1:45 PM - Moscone West 3022
    The SysAdmin SIG provides a forum in which OAUG members and participants can share updates, tips, and successful practices relating to system administration in an Oracle applications environment. The SysAdmin SIG strives to enable system administrators to become more effective and efficient in their jobs by providing them with access to people and information that can increase their system administration knowledge and experience. Attend this meeting to network, share best practices, and benefit from educational content.
    Guest: Oracle E-Business Suite 12.2 Online Patching - Presentation and Q&A
    Kevin Hudson, Sr.
    Director, Applications Technology Group, Oracle

    SIG10553 - OAUG - Database SIG
    Michael Brown - Senior DBA, COLIBRI LTD LC
    Sunday, Sep 30, 2:00 PM - 3:15 PM - Moscone West 3020
    The OAUG Database SIG provides an opportunity for applications database administrators to learn from and share their experiences with supporting the various Oracle applications environments. This session will include a brief business meeting followed by a short presentation. It will end with an open discussion among the attendees about items of interest to those present.
    Guest: Oracle E-Business Suite Database Performance - Presentation and Q&A
    Isam Alyousfi, Sr. Director, Applications Performance, Oracle

    Meet the Experts

    We're planning two round-table discussions where you can review your questions with senior E-Business Suite ATG staff:

    MTE9648 - Meet the Experts for Oracle E-Business Suite: Planning Your Upgrade
    Jeanne Lowell - VP, EBS Product Strategy, Oracle
    John Abraham - Sr. Principal Product Manager, Oracle
    Nadia Bendjedou - Sr. Director - Product Strategy, Oracle
    Anne Carlson - Sr. Director, Applications Technology Group, Oracle
    Udayan Parvate - Director, EBS Release Engineering, Oracle
    Isam Alyousfi, Sr. Director, Applications Performance, Oracle
    Monday, Oct 1, 3:15 PM - 4:15 PM - Moscone West 2001A
    Don’t miss this Oracle Applications Meet the Experts session with experts who specialize in Oracle E-Business Suite upgrade best practices. This is the place where attendees can have informal and semistructured but open one-on-one discussions with Strategy and Development regarding Oracle Applications strategy and your specific business and IT strategy. The experts will be available to discuss the value of the latest releases and share insights into the best path for your enterprise, so come ready with your questions. Space is limited, so make sure you register.

    MTE9649 - Meet the Oracle E-Business Suite Tools and Technology Experts
    Lisa Parekh - Vice President, Technology Integration, Oracle
    Steven Chan - Sr. Director, Oracle
    Elke Phelps - Sr. Principal Product Manager, Applications Technology Group, Oracle
    Max Arderius - Manager, Applications Technology Group, Oracle
    Tuesday, Oct 2, 1:15 PM - 2:15 PM - Moscone West 2001A
    Don’t miss this Oracle Applications Meet the Experts session with experts who specialize in Oracle E-Business Suite technology. This is the place where attendees can have informal and semistructured but open one-on-one discussions with Strategy and Development regarding Oracle Applications strategy and your specific business and IT strategy. The experts will be available to discuss the value of the latest releases and share insights into the best path for your enterprise, so come ready with your questions. Space is limited, so make sure you register.

    Demos

    We have five booths in the exhibition demogrounds this year, where you can try ATG technologies firsthand and get your questions answered. Please stop by and meet our staff at the following locations:
    Advanced Architecture and Technology Stack for Oracle E-Business Suite (W-067)
    New User Productivity Capabilities in Oracle E-Business Suite (W-065)
    End-to-End Management of Oracle E-Business Suite (W-063)
    Oracle E-Business Suite 12.1 Technical Upgrade Best Practices (W-066)
    SOA-Based Integration for Oracle E-Business Suite (W-064)

    Read the article

  • Configuring UCM cache to check for external Content Server changes

    - by Martin Deh
    Recently, I was involved in a customer scenario in which the customer was modifying the Content Server's contributor data files directly through Content Server. This operation, of course, is completely supported. However, since the contributor data file was modified through the "backdoor", a running WebCenter Spaces page, which also used the same data file, would not get the updates immediately. This was due to two reasons. The first reason is that the Spaces page was using Content Presenter to display the contents of the data file. The second reason is that the Spaces application was using the "cached" version of the data file. Fortunately, there is a way to configure the cache so backdoor changes can be picked up more quickly and automatically.

    First, a brief overview of Content Presenter. The Content Presenter task flow enables WebCenter Spaces users with Page-Edit permissions to precisely customize the selection and presentation of content in a WebCenter Spaces application. With Content Presenter, you can select a single item of content, contents under a folder, a list of items, or query for content, and then select a Content Presenter-based template to render the content on a page in a Spaces application. In addition to displaying the folders and the files in a Content Server, Content Presenter integrates with Oracle Site Studio to allow you to create, access, edit, and display Site Studio contributor data files (Content Server documents) in either a Site Studio region template or a custom Content Presenter display template. More information about creating Content Presenter display templates can be found in the OFM Developers Guide for WebCenter Portal.

    The easiest way to configure the cache is to modify the WebCenter Spaces Content Server service connection setting through Enterprise Manager. From here, under the Cache Details, there is a section to set the Cache Invalidation Interval. Basically, this enables the cache to be monitored by the cache "sweeper" utility. The cache sweeper queries for changes in the Content Server and then "marks" the object in cache as "dirty". This in turn causes the application to get a new copy of the document from the Content Server, replacing the cached version. By default, the initial value for the Cache Invalidation Interval is set to 0 (minutes), which means that the sweeper is OFF. To turn the sweeper ON, just set a value (in minutes); the minimal value that can be set is 2 (minutes).

    Just a note: in some instances, once the value of the Cache Invalidation Interval has been set (and saved) in the Enterprise Manager UI, it becomes "sticky" and the interval value cannot be set back to 0 there. The good news is that this value can also be updated through a WLST command. The WLST command to run is as follows:

    setJCRContentServerConnection(appName, name, [socketType, url, serverHost, serverPort, keystoreLocation, keystorePassword, privateKeyAlias, privateKeyPassword, webContextRoot, clientSecurityPolicy, cacheInvalidationInterval, binaryCacheMaxEntrySize, adminUsername, adminPassword, extAppId, timeout, isPrimary, server, applicationVersion])

    One way to get the required information for executing the command is to use the listJCRContentServerConnections('webcenter',verbose=true) command.
    For example, this is the sample output from the execution:

    ------------------ UCM ------------------
    Connection Name: UCM
    Connection Type: JCR
    External Appliction ID:
    Timeout: (not set)
    CIS Socket Type: socket
    CIS Server Hostname: webcenter.oracle.local
    CIS Server Port: 4444
    CIS Keystore Location:
    CIS Private Key Alias:
    CIS Web URL:
    Web Server Context Root: /cs
    Client Security Policy:
    Admin User Name: sysadmin
    Cache Invalidation Interval: 2
    Binary Cache Maximum Entry Size: 1024
    The Documents primary connection is "UCM"

    From this information, the completed setJCRContentServerConnection would be:

    setJCRContentServerConnection(appName='webcenter', name='UCM', socketType='socket', serverHost='webcenter.oracle.local', serverPort='4444', webContextRoot='/cs', cacheInvalidationInterval='0', binaryCacheMaxEntrySize='1024', adminUsername='sysadmin', isPrimary=1)

    Note: The Spaces managed server must be restarted for the change to take effect. More information about using WLST for WebCenter can be found here.

    Once the sweeper is turned ON, only cache objects that have been changed will be invalidated. To test this out, I will go through a simple scenario. The first thing to do is configure the Content Server so it can monitor and report on events. Log into the Content Server console application and, under the Administration menu item, select System Audit Information. Note: If your console is using the left menu display option, the Administration link will be located there. Under the Tracing Sections Information, add only "system" and "requestaudit" to the Active Sections. Check Full Verbose Tracing, check Save, and then click the Update button. Once this is done, select the View Server Output menu option. This will change the browser view to display the log. This is all that is needed to configure the Content Server.

    For example, the following is the View Server Output with the cache invalidation interval set to 2 (minutes). Note the time stamps:

    requestaudit/6 08.30 09:52:26.001 IdcServer-68 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.016933999955654144(secs)
    requestaudit/6 08.30 09:52:26.010 IdcServer-69 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.006134999915957451(secs)
    requestaudit/6 08.30 09:52:26.014 IdcServer-70 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.004271999932825565(secs)
    ... other trace info ...
    requestaudit/6 08.30 09:54:26.002 IdcServer-71 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.020323999226093292(secs)
    requestaudit/6 08.30 09:54:26.011 IdcServer-72 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.017928000539541245(secs)
    requestaudit/6 08.30 09:54:26.017 IdcServer-73 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.010185999795794487(secs)

    Now that the tracing logs are reporting correctly, the next step is to set up the Spaces app to test the sweeper. I will use two different pages, each containing a Content Presenter task flow. Each task flow will use a different custom Content Presenter display template and will be assigned a different contributor data file (the documents that will be in the cache). The pages at run time appear as follows:

    Initially, when the Spaces pages containing the content are loaded in the browser for the first time, you can see the tracing information in the Content Server output viewer.
    requestaudit/6 08.30 11:51:12.030 IdcServer-129 CLEAR_SERVER_OUTPUT [dUser=weblogic] 0.029171999543905258(secs)
    requestaudit/6 08.30 11:51:12.101 IdcServer-130 GET_SERVER_OUTPUT [dUser=weblogic] 0.025721000507473946(secs)
    requestaudit/6 08.30 11:51:26.592 IdcServer-131 VCR_GET_DOCUMENT_BY_NAME [dID=919][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.21525299549102783(secs)
    requestaudit/6 08.30 11:51:27.117 IdcServer-132 VCR_GET_CONTENT_TYPES [dUser=sysadmin][IsJava=1] 0.5059549808502197(secs)
    requestaudit/6 08.30 11:51:27.146 IdcServer-133 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.03360399976372719(secs)
    requestaudit/6 08.30 11:51:27.169 IdcServer-134 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.008806000463664532(secs)
    requestaudit/6 08.30 11:51:27.204 IdcServer-135 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.013265999965369701(secs)
    requestaudit/6 08.30 11:51:27.384 IdcServer-136 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.18119299411773682(secs)
    requestaudit/6 08.30 11:51:27.533 IdcServer-137 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.1519480049610138(secs)
    requestaudit/6 08.30 11:51:27.634 IdcServer-138 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.10827399790287018(secs)
    requestaudit/6 08.30 11:51:27.687 IdcServer-139 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.059702999889850616(secs)
    requestaudit/6 08.30 11:51:28.271 IdcServer-140 GET_USER_PERMISSIONS [dUser=weblogic][IsJava=1] 0.006703000050038099(secs)
    requestaudit/6 08.30 11:51:28.285 IdcServer-141 GET_ENVIRONMENT [dUser=sysadmin][IsJava=1] 0.010893999598920345(secs)
    requestaudit/6 08.30 11:51:30.433 IdcServer-142 GET_SERVER_OUTPUT [dUser=weblogic] 0.017318999394774437(secs)
    requestaudit/6 08.30 11:51:41.837 IdcServer-143 VCR_GET_DOCUMENT_BY_NAME [dID=508][dDocName=113_ES][dDocTitle=Landing Home][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.15937699377536774(secs)
    requestaudit/6 08.30 11:51:42.781 IdcServer-144 GET_FILE [dID=326][dDocName=WEBCENTERORACL000315][dDocTitle=Duke][dUser=anonymous][RevisionSelectionMethod=LatestReleased][dSecurityGroup=Public][xCollectionID=0] 0.16288499534130096(secs)

    The highlighted sections show where the two data files, DF_UCMCACHETESTER (P1 page) and 113_ES (P2 page), were called by the (Spaces) VCR connection to the Content Server. The most important line to notice is the VCR_GET_DOCUMENT_BY_NAME invocation. On subsequent refreshes of these two pages, you will notice (after you refresh the Content Server's View Server Output) that there are no further traces of the same VCR_GET_DOCUMENT_BY_NAME invocations. This is because the pages are getting the documents from the cache.

    The next step is to go through the "backdoor" and change one of the documents through the Content Server console. This can be done by first locating the data file document and then, from the Content Information page, selecting the Edit Data File menu option. This invokes the Site Studio Contributor, where the modifications can be made.

    Refreshing the Content Server View Server Output, the tracing displays the operations performed on the document.
requestaudit/6 08.30 11:56:59.972 IdcServer-255 SS_CHECKOUT_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dUser=weblogic][dSecurityGroup=Public] 0.05558200180530548(secs)
requestaudit/6 08.30 11:57:00.065 IdcServer-256 SS_GET_CONTRIBUTOR_CONFIG [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.08632399886846542(secs)
requestaudit/6 08.30 11:57:00.470 IdcServer-259 DOC_INFO_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.02268899977207184(secs)
requestaudit/6 08.30 11:57:10.177 IdcServer-264 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007652000058442354(secs)
requestaudit/6 08.30 11:57:10.181 IdcServer-263 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.01868399977684021(secs)
requestaudit/6 08.30 11:57:10.187 IdcServer-265 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.009367000311613083(secs)
(internal)/6 08.30 11:57:26.118 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
(internal)/6 08.30 11:57:26.121 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
requestaudit/6 08.30 11:57:26.122 IdcServer-266 SS_SET_ELEMENT_DATA [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0][StatusCode=0][StatusMessage=Successfully checked in content item 'DF_UCMCACHETESTER'.] 0.3765290081501007(secs)
requestaudit/6 08.30 11:57:30.710 IdcServer-267 DOC_INFO_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.07942699640989304(secs)
requestaudit/6 08.30 11:57:30.733 IdcServer-268 SS_GET_CONTRIBUTOR_STRINGS [dUser=weblogic] 0.0044570001773536205(secs)

After a few moments and a refresh of the P1 page, the updates have been applied.

Note: The refresh time may vary, since the Cache Invalidation Interval (set to 2 minutes) is not tied to when changes happen. The sweeper simply runs every 2 minutes.

Refreshing the Content Server View Server Output, the tracing displays the important information:

requestaudit/6 08.30 11:59:10.171 IdcServer-270 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.00952600035816431(secs)
requestaudit/6 08.30 11:59:10.179 IdcServer-271 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.011118999682366848(secs)
requestaudit/6 08.30 11:59:10.182 IdcServer-272 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007447000127285719(secs)
requestaudit/6 08.30 11:59:16.885 IdcServer-273 VCR_GET_DOCUMENT_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.0786449983716011(secs)

After the specified interval, the sweeper is invoked, which is noted by the GET_ ... history calls. Since the history has recorded the change, the next call is VCR_GET_DOCUMENT_BY_NAME, which retrieves the new version of the (modified) data file. Navigating back to the P2 page and viewing the server output, there are no further VCR_GET_DOCUMENT_BY_NAME calls to retrieve that data file. This simply means that the data file was served from the cache.
Upon further review of the server output, we can see that there was only 1 request for the VCR_GET_DOCUMENT_BY_NAME:

requestaudit/6 08.30 12:08:00.021 Audit Request Monitor Request Audit Report over the last 120 Seconds for server webcenteroraclelocal16200****
requestaudit/6 08.30 12:08:00.021 Audit Request Monitor -Num Requests 8 Errors 0 Reqs/sec. 0.06666944175958633 Avg. Latency (secs) 0.02762500010430813 Max Thread Count 2
requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 1 Service VCR_GET_DOCUMENT_BY_NAME Total Elapsed Time (secs) 0.09200000017881393 Num requests 1 Num errors 0 Avg. Latency (secs) 0.09200000017881393
requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 2 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.054999999701976776 Num requests 1 Num errors 0 Avg. Latency (secs) 0.054999999701976776
requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 3 Service GET_FOLDER_HISTORY_REPORT Total Elapsed Time (secs) 0.028999999165534973 Num requests 2 Num errors 0 Avg. Latency (secs) 0.014499999582767487
requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 4 Service GET_SERVER_OUTPUT Total Elapsed Time (secs) 0.017999999225139618 Num requests 1 Num errors 0 Avg. Latency (secs) 0.017999999225139618
requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 5 Service GET_FILE Total Elapsed Time (secs) 0.013000000268220901 Num requests 1 Num errors 0 Avg. Latency (secs) 0.013000000268220901
requestaudit/6 08.30 12:08:00.021 Audit Request Monitor ****End Audit Report*****
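As a convenience, here is roughly what the complete WLST session for the connection change at the top of this post would look like. This is only a sketch: the admin server URL and the credentials are illustrative placeholders, not values taken from this environment.

# Sketch of a WLST session (run wlst.sh from the WebCenter Oracle home);
# the connect URL and credentials below are placeholders.
connect('weblogic', '<admin-password>', 't3://webcenter.oracle.local:7001')

setJCRContentServerConnection(appName='webcenter', name='UCM',
    socketType='socket',
    serverHost='webcenter.oracle.local', serverPort='4444',
    webContextRoot='/cs',
    cacheInvalidationInterval='2',
    binaryCacheMaxEntrySize='1024',
    adminUsername='sysadmin', isPrimary=1)

disconnect()
# Remember: the Spaces managed server must be restarted for the change to take effect.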

    Read the article

  • Making WCF Output a single WSDL file for interop purposes.

    - by Glav
By default, when WCF emits a WSDL definition for your services, it can often contain many links to other related schemas that need to be imported. For the most part, this is fine. WCF clients understand this type of schema without issue, and it conforms to the requisite standards as far as WSDL definitions go. However, some non-Microsoft stacks will only work with a single WSDL file and require that all definitions for the service(s) (port types, messages, operations, etc.) are contained within that single file. In other words, no external imports are supported. Some Java clients (to my knowledge) have this limitation. This obviously presents a problem when trying to create services exposed for consumption and interop by these clients. Note: You can download the full source code for this sample from here.

To illustrate this point, let's say we have a simple service that looks like:

Service Contract

public interface IService1
{
    [OperationContract]
    [FaultContract(typeof(DataFault))]
    string GetData(DataModel1 model);

    [OperationContract]
    [FaultContract(typeof(DataFault))]
    string GetMoreData(DataModel2 model);
}

Service Implementation/Behaviour

public class Service1 : IService1
{
    public string GetData(DataModel1 model)
    {
        return string.Format("Some Field was: {0} and another field was {1}", model.SomeField, model.AnotherField);
    }

    public string GetMoreData(DataModel2 model)
    {
        return string.Format("Name: {0}, age: {1}", model.Name, model.Age);
    }
}

Configuration File

<system.serviceModel>
  <services>
    <service name="SingleWSDL_WcfService.Service1" behaviorConfiguration="SingleWSDL_WcfService.Service1Behavior">
      <!-- ...std/default data omitted for brevity..... -->
      <endpoint address="" binding="wsHttpBinding" contract="SingleWSDL_WcfService.IService1">
      .......
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="SingleWSDL_WcfService.Service1Behavior">
        ........
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>

When WCF is asked to produce a WSDL for this service, it will produce a file that looks something like this (note: some sections omitted for brevity):

<?xml version="1.0" encoding="utf-8" ?>
<wsdl:definitions name="Service1" targetNamespace="http://tempuri.org/"
    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
    xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
    ...... namespace definitions omitted for brevity
  <wsp:Policy wsu:Id="WSHttpBinding_IService1_policy">
    ... multiple policy items omitted for brevity
  </wsp:Policy>
  <wsdl:types>
    <xsd:schema targetNamespace="http://tempuri.org/Imports">
      <xsd:import schemaLocation="http://localhost:2370/HostingSite/Service-default.svc?xsd=xsd0" namespace="http://tempuri.org/" />
      <xsd:import schemaLocation="http://localhost:2370/HostingSite/Service-default.svc?xsd=xsd3" namespace="Http://SingleWSDL/Fault" />
      <xsd:import schemaLocation="http://localhost:2370/HostingSite/Service-default.svc?xsd=xsd1" namespace="http://schemas.microsoft.com/2003/10/Serialization/" />
      <xsd:import schemaLocation="http://localhost:2370/HostingSite/Service-default.svc?xsd=xsd2" namespace="http://SingleWSDL/Model1" />
      <xsd:import schemaLocation="http://localhost:2370/HostingSite/Service-default.svc?xsd=xsd4" namespace="http://SingleWSDL/Model2" />
    </xsd:schema>
  </wsdl:types>
  <wsdl:message name="IService1_GetData_InputMessage">
    ....
  </wsdl:message>
  <wsdl:operation name="GetData">
    .....
  </wsdl:operation>
  <wsdl:service name="Service1">
    .......
  </wsdl:service>
</wsdl:definitions>

The above snippet from the WSDL shows the external links and references that are generated by WCF for a relatively simple service. Note the xsd:import statements that reference external XSD definitions, which are also generated by WCF. In order to get WCF to produce a single WSDL file, we first need to follow some good practices when it comes to WCF service definitions.

Step 1: Define a namespace for your service contract.

[ServiceContract(Namespace="http://SingleWSDL/Service1")]
public interface IService1
{
    ......
}

Normally you would not use a literal string and may instead define a constant to use in your own application for the namespace. When this is applied and we generate the WSDL, we get the following statement inserted into the document:

<wsdl:import namespace="http://SingleWSDL/Service1" location="http://localhost:2370/HostingSite/Service-default.svc?wsdl=wsdl0" />

All the previous imports have gone. If we follow this link, we will see that the XSD imports are now in this external WSDL file. Not really any benefit for our purposes.

Step 2: Define a namespace for your service behaviour.

[ServiceBehavior(Namespace = "http://SingleWSDL/Service1")]
public class Service1 : IService1
{
    ......
}

As you can see, the namespace of the service behaviour should be the same as that of the service contract interface it implements. Failure to do these tasks will cause WCF to emit its default http://tempuri.org namespace all over the place and cause WCF to still generate import statements. This is also true if the namespaces of the contract and behaviour differ. If you define one and not the other, defaults kick in, and you'll find extra imports generated. While neither of the previous 2 steps will cause any fewer import statements to be generated, you will notice that the namespace definitions within the WSDL now have identical, well-defined names.

Step 3: Define a binding namespace.

In the configuration file, modify the endpoint configuration line item to include a bindingNamespace attribute which is the same as that defined on the service behaviour and service contract:

<endpoint address="" binding="wsHttpBinding" contract="SingleWSDL_WcfService.IService1" bindingNamespace="http://SingleWSDL/Service1">

However, this does not completely solve the issue. What this will do is remove WSDL import statements like this one:

<wsdl:import namespace="http://SingleWSDL/Service1" location="http://localhost:2370/HostingSite/Service-default.svc?wsdl" />

from the generated WSDL.

Finally… the magic.

Step 4: Use a custom endpoint behaviour to read in external imports and include them in the main WSDL output.

In order to force WCF to output a single WSDL with all the required definitions, we need to define a custom WSDL export extension that can be applied to any endpoints. This requires implementing the IWsdlExportExtension and IEndpointBehavior interfaces, reading in any imported schemas, and adding that output to the main, flattened WSDL. Sounds like fun, right? Hmmm, well, maybe not. This step sounds a little hairy, but it's actually quite easy thanks to some kind individuals who have already done this for us. As far as I know, there are 2 available implementations that we can easily use to perform the import and "WSDL flattening": WCFExtras, which is on CodePlex, and FlatWsdl by Thinktecture. Both implementations do exactly the same thing with the imports and provide an endpoint behaviour; however, FlatWsdl does a little more work for us by providing a ServiceHostFactory that automatically attaches the requisite behaviour to our endpoints (a rough sketch of what such a flattening behaviour might look like internally follows at the end of this post). To use this in an IIS-hosted service, we can modify the .svc file to specify this new factory like so:

<%@ ServiceHost Language="C#" Debug="true" Service="SingleWSDL_WcfService.Service1" Factory="Thinktecture.ServiceModel.Extensions.Description.FlatWsdlServiceHostFactory" %>

Within a service application or another form of executable such as a console app, we can simply create an instance of the custom service host and open it as we normally would, as shown here:

FlatWsdlServiceHost host = new FlatWsdlServiceHost(typeof(Service1));
host.Open();

And we are done. WCF will now generate one single WSDL file that contains all the WSDL imports and data/XSD imports. You can download the full source code for this sample from here. Hope this has helped you.

Note: Please note that I have not extensively tested this in a number of different scenarios so no guarantees there.
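For the curious, here is a rough sketch of what such a flattening behaviour might look like internally. This is an illustrative reconstruction only, not the actual WCFExtras or FlatWsdl source: the class name FlatWsdlBehavior is made up, and the schema walking is simplified (it assumes every imported schema lives in the exporter's generated schema set and ignores nested wsdl:import of other WSDL documents).

using System.Collections.Generic;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;
using System.Xml.Schema;
// Alias to avoid the clash with System.ServiceModel.Description.ServiceDescription.
using WsdlDescription = System.Web.Services.Description.ServiceDescription;

public class FlatWsdlBehavior : IWsdlExportExtension, IEndpointBehavior
{
    public void ExportContract(WsdlExporter exporter, WsdlContractConversionContext context) { }

    public void ExportEndpoint(WsdlExporter exporter, WsdlEndpointConversionContext context)
    {
        // For each generated WSDL document, resolve every xsd:import against the
        // exporter's schema set and embed the real schemas into wsdl:types.
        foreach (WsdlDescription wsdl in exporter.GeneratedWsdlDocuments)
        {
            var resolved = new List<XmlSchema>();
            foreach (XmlSchema schema in wsdl.Types.Schemas)
                InlineImports(schema, exporter.GeneratedXmlSchemas, resolved);

            foreach (XmlSchema schema in resolved)
                if (!wsdl.Types.Schemas.Contains(schema))
                    wsdl.Types.Schemas.Add(schema);
        }
    }

    private static void InlineImports(XmlSchema schema, XmlSchemaSet allSchemas, List<XmlSchema> resolved)
    {
        foreach (XmlSchemaObject include in schema.Includes)
        {
            var import = include as XmlSchemaImport;
            if (import == null)
                continue;

            foreach (XmlSchema match in allSchemas.Schemas(import.Namespace))
            {
                if (resolved.Contains(match))
                    continue;
                resolved.Add(match);
                InlineImports(match, allSchemas, resolved); // imports can nest
            }
        }
        // Drop the xsd:import elements themselves; the schemas are now inline.
        schema.Includes.Clear();
    }

    // Nothing extra is needed at runtime; only WSDL generation is affected.
    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}

Attached by hand (instead of via the factory), it would be used something like this:

ServiceHost host = new ServiceHost(typeof(Service1));
foreach (ServiceEndpoint endpoint in host.Description.Endpoints)
    endpoint.Behaviors.Add(new FlatWsdlBehavior());
host.Open();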

    Read the article

