Search Results

Search found 65446 results on 2618 pages for 'projects data warehouse'.


  • Windows 8 Data Binding Bug - OnPropertyChanged Updates Wrong Object

    - by Andrew
    I'm experiencing some really weird behavior with data binding in Windows 8. I have a ComboBox set up like this:

        <ComboBox VerticalAlignment="Center" Margin="0,18,0,0" HorizontalAlignment="Right"
                  Height="Auto" Width="138" Background="{StaticResource DarkBackgroundBrush}"
                  BorderThickness="0"
                  ItemsSource="{Binding CurrentForum.SortValues}"
                  SelectedItem="{Binding CurrentForum.CurrentSort, Mode=TwoWay}">
            <ComboBox.ItemTemplate>
                <DataTemplate>
                    <TextBlock HorizontalAlignment="Right" Text="{Binding Converter={StaticResource SortValueConverter}}"/>
                </DataTemplate>
            </ComboBox.ItemTemplate>
        </ComboBox>

    inside a page whose DataContext is set to a statically located ViewModel. When I change that ViewModel's CurrentForm property, whose setter is implemented like this...

        public FormViewModel CurrentForm
        {
            get { return _currentForm; }
            set
            {
                _currentForm = value;
                if (!_currentForm.IsLoaded)
                {
                    _currentSubreddit.Refresh.Execute(null);
                }
                RaisePropertyChanged("CurrentForm");
            }
        }

    ...something really strange happens: the previous FormViewModel's CurrentSort property is changed to the new FormViewModel's CurrentSort value. This happens as the RaisePropertyChanged event is raised, through a managed-to-native transition, with native code invoking the CurrentSort setter on the previous FormViewModel. Does that sound like a bug in Win8's data binding? Am I doing something wrong?
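
    One defensive pattern worth trying, sketched here on the assumption that the stray write comes from the binding engine re-evaluating SelectedItem while the source object is being swapped (ViewModelBase, the SuppressSortWrites flag, and the simplified property types are illustrative, not from the original post):

        using System.ComponentModel;

        public abstract class ViewModelBase : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;
            protected void RaisePropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

        public class FormViewModel : ViewModelBase
        {
            private object _currentSort;
            internal bool SuppressSortWrites; // set by the owner while it is being swapped out

            public object CurrentSort
            {
                get { return _currentSort; }
                set
                {
                    if (SuppressSortWrites) return; // drop re-entrant writes from the binding engine
                    _currentSort = value;
                    RaisePropertyChanged("CurrentSort");
                }
            }
        }

        public class MainViewModel : ViewModelBase
        {
            private FormViewModel _currentForm;

            public FormViewModel CurrentForm
            {
                get { return _currentForm; }
                set
                {
                    // Shield the outgoing instance while bindings re-evaluate;
                    // the incoming instance accepts writes as normal.
                    if (_currentForm != null) _currentForm.SuppressSortWrites = true;
                    _currentForm = value;
                    RaisePropertyChanged("CurrentForm");
                }
            }
        }

    If the previous FormViewModel stops being mutated with this guard in place, that confirms the write is arriving through the TwoWay SelectedItem binding during the swap rather than from your own code.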

    Read the article

  • Core Data saving in text boxes in table view controller

    - by user1489709
    I have created a five-column (text boxes) cell (row) in a table view controller, with an Add button. When a user taps the Add button, a new row (cell) with five columns (text boxes) is added to the table view controller with null values. I want the data to be saved to the database when the user fills in the text boxes, and also when he changes any data in previously filled text boxes. This is my saving code:

        - (void)textFieldDidEndEditing:(UITextField *)textField
        {
            row1 = textField.tag;
            NSLog(@"Row is %d", row1);
            path = [NSIndexPath indexPathForRow:row1 inSection:0];
            Input_Details *inputDetailsObject1 = [self.fetchedResultsController objectAtIndexPath:path];
            /* Update the Input values from the values in the text fields. */
            EditingTableViewCell *cell;
            cell = (EditingTableViewCell *)[self.tableView cellForRowAtIndexPath:[NSIndexPath indexPathForRow:row1 inSection:0]];
            inputDetailsObject1.mh_Up = cell.cell_MH_up.text;
            inputDetailsObject1.mh_down = cell.cell_MH_dn.text;
            inputDetailsObject1.sewer_No = [NSNumber numberWithInt:[cell.cell_Sewer_No.text intValue]];
            inputDetailsObject1.mhup_gl = [NSNumber numberWithFloat:[cell.cell_gl_MHUP.text floatValue]];
            inputDetailsObject1.mhdn_gl = [NSNumber numberWithFloat:[cell.cell_gl_MHDN.text floatValue]];
            inputDetailsObject1.pop_Ln = [NSNumber numberWithInt:[cell.cell_Line_pop.text intValue]];
            inputDetailsObject1.sew_len = [NSNumber numberWithFloat:[cell.cell_Sewer_len.text floatValue]];
            [self saveContext];
            NSLog(@"Saving the MH_up value %@ is saved in save at index path %d", inputDetailsObject1.mh_Up, row1);
            [self.tableView reloadData];
        }

    Read the article

  • Grouping Categorized Data in WPF

    - by VoidDweller
    Here is what I am trying to do.

    Dynamic Category columns:
      - Can be 0 or more.
      - Must contain 1 or more Type columns.
      - Will only be displayed if any row contains Type column data associated with it.

    Data rows:
      - Will be added asynchronously.
      - Will be grouped by a common Category column.
      - Will add a Dynamic Category if it does not yet exist.
      - Will add a Type column if it does not yet exist within its appropriate Dynamic Category.

    Platform info: WPF, .NET 3.5 SP1, C#, MVVM.

    I have a few partially functional prototypes, but each has its own major set of problems. Can any of you give me some guidance on this? Envision this nicely styled. :-)

        --------------------------------------------------------------------------
        |[ Common Category    ]|[ Dynamic Category 0 ]|[ Dynamic Category N ]|
        --------------------------------------------------------------------------
        |[Header 1]|[Header 2]|[ Type 0 ]|[ Type N ]|[ Type 0 ]|[ Type N ]|
        --------------------------------------------------------------------------
        |[Data 2 Group]                                                          |
        --------------------------------------------------------------------------
        | Data A   | Data 2  || Null     | Data 1  || Data 0   | Data 1  ||
        | Data B   | Data 2  || Data 0   | Null    || Data 0   | Data 1  ||
        --------------------------------------------------------------------------
        |[Data 1 Group]                                                          |
        --------------------------------------------------------------------------
        | Data C   | Data 1  || Null     | Data 1  || Data 0   | Data 1  ||
        | Data D   | Data 1  || Null     | Null    || Data 0   | Null    ||
        --------------------------------------------------------------------------

    Edit: Sorting and paging are not necessary. I have looked at nested ListViews and DataGrids, and at dynamically building a Grid. Dynamically building a Grid and leveraging the SharedSizeGroup property seems the most promising strategy, but I am concerned about performance. Would a better approach be to consider this a dynamic report? If so, what should I be looking at? Thanks for your help.
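
    For what it's worth, a minimal sketch of the SharedSizeGroup strategy mentioned above (the method and group names are illustrative assumptions, not from the original post). Each dynamically added column registers a named shared size group, so the header grid and every row grid sharing that scope keep identical column widths:

        using System.Windows;
        using System.Windows.Controls;

        public static class DynamicColumns
        {
            // Adds one auto-sized column to a Grid and ties its width to a named
            // shared size group, so header and row grids using the same group
            // stay aligned as content changes.
            public static void AddSharedColumn(Grid grid, string category, int typeIndex)
            {
                grid.ColumnDefinitions.Add(new ColumnDefinition
                {
                    Width = GridLength.Auto,
                    SharedSizeGroup = category + "_Type" + typeIndex
                });
            }
        }

    The panel hosting the header grid and all the row grids must enable the shared size scope, e.g. Grid.SetIsSharedSizeScope(rowsPanel, true), otherwise the groups are ignored. SharedSizeGroup does add a measure-pass cost, so it is worth profiling with a realistic row count before committing to it.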

    Read the article

  • Stopping Filter Display in Dynamic Data Entity Web App

    - by bert
    I'm currently experimenting with the Dynamic Data Entity Web App project type in VS2008 SP1, and after reading many tutorials (which offer helpful advice for problems I have so far had no need to solve) I have fallen at the first hurdle. In the DB I made my entity model from, I decided to start small with a table called "Companies", just to see if I could tweak the display into a satisfactory shape for this small table. The Companies table has a column called "contactid" which leads to a record filled with various contact information in a "contacts" table. The default Entity Data Model has guessed that one company could have many contact records, so it tries to be helpful and adds a "Contact" filter to the page that lets you see all the companies that share a particular set of contact info, indexed by the "Contact Name" field. Unfortunately the contacts table is a multi-purpose one that also stores contact info for customers, and there are about 1000 times more customers than there are companies. So the dropdown makes the page load time balloon and produces no benefit. I'd like to just stop the filter from appearing; the only problem is I don't have a clue how to switch it off. Google is so far proving recalcitrant on the matter, so I wondered if anyone here knew how to get rid of a useless filter.
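
    One likely fix, sketched under the assumption that the entity is named Company with a Contact navigation property (the metadata class name is illustrative): hide the navigation property from scaffolding, which in .NET 3.5 SP1 Dynamic Data suppresses both the auto-generated column and its foreign-key filter.

        using System.ComponentModel.DataAnnotations;

        [MetadataType(typeof(CompanyMetadata))]
        public partial class Company
        {
        }

        public class CompanyMetadata
        {
            // Hiding the navigation property stops Dynamic Data from generating
            // the Contact column and, with it, the Contact dropdown filter.
            [ScaffoldColumn(false)]
            public object Contact { get; set; }
        }

    If you still want the column but not the filter, the alternative is to edit the FilterRepeater in the List page template and skip the offending column there.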

    Read the article

  • Store data in tableview to NSUserDefaults

    - by Jozef Vrana
    Tricks.h file:

        @interface Tricks : NSObject
        @property (strong, nonatomic) NSString *trickName;
        @end

    Tricks.m file:

        #import "Tricks.h"

        @implementation Tricks

        static NSMutableArray *trickList = nil;

        + (NSMutableArray *)trickList
        {
            if (!trickList) {
                trickList = [[NSMutableArray alloc] init];
            }
            return trickList;
        }

        @end

    Method for adding objects to the array:

        - (IBAction)saveAction:(id)sender
        {
            Tricks *trick = [[Tricks alloc] init];
            trick.trickName = self.trickLabel.text;
            [[Tricks trickList] insertObject:trick atIndex:0];
            [self.navigationController popViewControllerAnimated:YES];
        }

    In the .h file of my UITableView class I make a reference to the Tricks class, but I am sure there is an error on this line:

        @property (strong, nonatomic) Tricks *tricks;

    In the cellForRow method I am storing the data:

        _trick = [[NSMutableDictionary alloc] initWithObjectsAndKeys:trick, nil];
        NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
        [defaults setObject:_trick forKey:@"numberArray"];
        [defaults synchronize];
        NSLog(@"%@", _trick);

    In the .m file of the UITableView class, in viewDidLoad, I want to retrieve the data:

        if ([[NSUserDefaults standardUserDefaults] objectForKey:@"numberArray"] != nil) {
            _tricks = [[NSUserDefaults standardUserDefaults] objectForKey:@"numberArray"];
        }

    Thanks for any advice.

    Read the article

  • Dynamic Data Extract Tools

    - by Kevin McGovern
    I've been searching around for a few weeks now for a tool, either fully built or a direction for something I could build, for dynamically extracting data via a web interface. Basically, what I'm looking for is a way to give users a list of all available data objects from our database, let them pick the ones they'd like to view and set parameters, and then export the results to an Excel file. Right now we're doing it purely with SQL statements, but we have hundreds of objects, so as you might imagine those statements are really complex and prone to errors. It would be great if there was a tool available to do this, or if someone had an idea of an easy way to organize this. Any help would be greatly appreciated. We've looked at BI tools like QlikView and Tableau, but that is probably overkill for what we're trying to do. The open-source BI tools we've looked at seemed really primitive in their functionality. The other thing we looked at was MSAS (our DB is SQL Server), but I'd prefer something that is more database-agnostic and lives on a web server instead of on the database server.
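
    As a rough illustration of the middle road between hand-written SQL and a full BI suite (everything here is hypothetical: the view names, columns, and class are a sketch, not a product recommendation), keep a whitelist of exposed objects and columns and build only whitelist-checked SQL from the user's selections:

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Linq;

        public static class ExtractBuilder
        {
            // Whitelisted view -> columns the UI may offer. Anything outside this
            // map is refused, so user input never reaches the SQL text itself.
            private static readonly Dictionary<string, string[]> Catalog =
                new Dictionary<string, string[]>
                {
                    { "dbo.vwOrders",    new[] { "OrderId", "CustomerName", "OrderDate" } },
                    { "dbo.vwCustomers", new[] { "CustomerId", "CustomerName", "Region" } },
                };

            public static SqlCommand Build(string view, IEnumerable<string> columns)
            {
                string[] allowed;
                if (!Catalog.TryGetValue(view, out allowed))
                    throw new ArgumentException("Unknown object: " + view);

                var picked = columns.Where(allowed.Contains).ToArray();
                if (picked.Length == 0)
                    throw new ArgumentException("No valid columns selected.");

                // Only whitelisted identifiers are concatenated into the statement.
                return new SqlCommand(
                    "SELECT " + string.Join(", ", picked) + " FROM " + view);
            }
        }

    The command's result set can then be streamed to an Excel/CSV writer, and the same catalog drives the object and column lists shown in the web UI, so the two can't drift apart.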

    Read the article

  • Master Data

    - by david.butler(at)oracle.com
    Let's take a deeper look at what we mean when we talk about 'master' data. In its most general sense, master data is data that exists in more than one operational application. These are the applications that automate business processes, and they require significant amounts of data to function correctly. This includes data about the objects that are involved in transactions, as well as the transaction data itself. For example, when a customer buys a product, the transaction is managed by a sales application. The objects of the transaction are the Customer and the Product. The transactional data is the time, place, price, discount, payment methods, etc. used at the point of sale. Many thousands of transactional data attributes are needed within the application. These important data elements are local to the applications and have no bearing on other applications; harmonization and synchronization across applications is not necessary. The Customer and Product objects of the transaction also have a large number of attributes. Customer, for example, includes hierarchies, hierarchical and matrixed relationships, contacts, classifications, preferences, accounts, identifiers, profiles, and addresses galore for 'ship to', 'mail to', 'service at', etc. Dozens of attributes exist for individuals, hundreds for organizations, and thousands for products. This data has meaning beyond any particular application. It exists in many applications and drives the vital cross-application enterprise business processes. These are the processes that define and differentiate the organization. At every decision point, information about the objects of the process determines the direction of the process flow. This is the nature of the data that exists in more than one application, and this is why we call it 'master data'. Let me elaborate.

    Parties

    Oracle has developed a party schema to model all participants in your daily business operations. It models people, organizations, groups, customers, contacts, employees, and suppliers. It models their accounts, locations, classifications, and preferences. And most importantly, it models the vast array of hierarchical and matrixed relationships that exist between all the participants in your real-world operations. The model logically separates people and organizations from their relationships and accounts. This separation creates flexibility unmatched in the industry and accounts for the fact that the Oracle schema for Customers, Suppliers, and Accounts is a true superset of the wide variety of commercial and homegrown customer models in existence.

    Sites

    Sites are places where business is conducted. They can be addresses, clusters such as retail malls, locations within a cluster, floors within a building, places where meters are located, rooms on floors, etc. Fully understanding all attributes of a site is key to many business processes. Attributes such as the 'noise abatement policy' at a point of delivery, or the size of an oven in a business kitchen, drive day-to-day activities such as delivery schedules or food promotions. Typically this kind of data is siloed in departments and scattered across applications and spreadsheets. This leads to conflicting information and poor operational efficiency. Oracle's Global Single Schema can hold all site attributes in one place and enables a single version of authoritative site information across the enterprise.
    Products and Services

    The Oracle Global Single Schema also includes a number of entities that define the products and services a company creates and offers for sale. Key entities include Items organized into Catalogs and Price Lists. The Catalog structures provide the ability to capture different views of a product, such as engineering, manufacturing, and service, which are based on a unified product model. As a result, designers, manufacturing engineers, purchasers, and partners can work simultaneously on a common product definition. The Catalog schema allows for unlimited attributes, combines them into meaningful groups, and maps them to catalog categories to track these different types of information. The model also maps an unlimited number of functional structures for each item. For example, multiple Bills of Material (BOMs) can be constructed, representing a requirements BOM, a features BOM, and a packaging BOM for an item. The Catalog model also supports hierarchical information about each item and all standard Global Data Synchronization attributes.

    Business Processes Utilizing Linked Data Entities

    Each business entity codified into a centralized master data environment significantly improves the efficiency of the automated business processes that use the consolidated data. When all the key business entities used by an organization's processes are so consolidated, the advantages are multiplied. The primary cause of business process breakdowns (i.e. data errors across application boundaries) is eliminated. All processes are positively impacted and business process automation is itself automated. I like to use the "Call to Resolution" business process as an example to help illustrate this important point. It involves call center applications, service applications, RMA applications, transportation applications, inventory applications, etc. Customer, Site, Product, and Supplier master data must all be correct and consistent across these applications. What's more, the data relationships between customer and product, and between product and suppliers, must be right. This is the minimum quality needed to ensure the business process flows without error. But that is not the end of the story. Critical master data attributes such as customer loyalty, profitability, credit worthiness, and propensity to buy can optimize the call center point-of-contact component of the process. Critical product information such as alternative parts or equivalent products can optimize the resolution selected by the process. A comprehensive understanding of the 'service at' location can help ensure multiple trips are avoided in the process. Full supplier information on reliability, delivery delays, and potential alternates can prevent supplier exceptions and play a significant role in optimizing the process. In other words, these master data attributes enable the optimization of the "Call to Resolution" enterprise business process. Master data supports and guides business process flows; thus the phrase 'master data' is indeed appropriate. MDM is the software that houses, manages, and governs the master data that resides in all applications and controls the enterprise business processes. A complete master data solution starts with a data model that holds fully attributed master data entities and their inter-relationships. Oracle has this model. Oracle, with its deep understanding of application data, is the logical choice for managing all your master data within the enterprise, whether or not your organization actually runs any Oracle Applications.

    Read the article

  • Need help with testdisk output

    - by dan
    I had (note the past tense) an Ubuntu 12.04 system with separate partitions for the base and /home directories. It started acting wonky, so I decided to do a reinstall with 12.10, intending just to reinstall the base partition. After several seconds, I realized that the installer was repartitioning the drive before reinstalling, so I pulled the power cord. I'm now trying to recover as much as I can with testdisk, but it seems that testdisk is finding 100 unique partitions when I run it - they mostly tend to be HFS+ or "Solaris /home" (which I think is just ext4; I've never had Solaris on the machine). I've pasted an abbreviated version of the testdisk output below (the first ~100 lines, and then ~100 lines from the middle of the output). Is there a way to combine or recreate the partitions and then run data recovery, or some other way to maximize what I can recover (ideally as much of the file system as possible)? I really only care about what was in the /home directory - I'd rather not use photorec since I don't have another 2 TB HD lying around to recover to. Thanks, Dan

        Mon Dec 10 06:03:00 2012
        Command line: TestDisk

        TestDisk 6.13, Data Recovery Utility, November 2011
        Christophe GRENIER <[email protected]>
        http://www.cgsecurity.org
        OS: Linux, kernel 3.2.34-std312-amd64 (#2 SMP Sat Nov 17 08:06:32 UTC 2012) x86_64
        Compiler: GCC 4.4
        Compilation date: 2012-11-27T22:44:52
        ext2fs lib: 1.42.6, ntfs lib: libntfs-3g, reiserfs lib: 0.3.1-rc8, ewf lib: none
        /dev/sda: LBA, HPA, LBA48, DCO support
        /dev/sda: size 3907029168 sectors
        /dev/sda: user_max 3907029168 sectors
        /dev/sda: native_max 3907029168 sectors
        Warning: can't get size for Disk /dev/mapper/control - 0 B - CHS 1 1 1, sector size=512
        /dev/sr0 is not an ATA disk
        Hard disk list
        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63, sector size=512 - WDC WD20EARS-00J2GB0, S/N:WD-WCAYY0075071, FW:80.00A80
        Disk /dev/sdb - 1013 MB / 967 MiB - CHS 1014 32 61, sector size=512 - Generic Flash Disk, FW:8.07
        Disk /dev/sr0 - 367 MB / 350 MiB - CHS 179470 1 1 (RO), sector size=2048 - PLDS DVD+/-RW DH-16AAS, FW:JD12
        Partition table type (auto): Intel
        Disk /dev/sda - 2000 GB / 1863 GiB - WDC WD20EARS-00J2GB0
        Partition table type: EFI GPT
        Analyse Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        Current partition structure: Bad GPT partition, invalid signature.
        search_part() Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB
        Results
        P MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        P Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB
        interface_write()
         1 P MS Data 2048 3900753919 3900751872
         2 P Linux Swap 3900755968 3907028975 6273008
        search_part() Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        block_group_nr 1
        recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
        recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
        block_group_nr 1
        recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
        recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
        block_group_nr 1
        recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
        recover_EXT2: s_block_group_nr=1/14584, s_mnt_count=0/27, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 477915164
        recover_EXT2: part_size 3823321312
        MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
        block_group_nr 1
        ....snip......
        MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
        MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
        MS Data 4096 3823325407 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
        MS Data 7028840 7033383 4544 FAT12, 2326 KB / 2272 KiB
        Mac HFS 67856948 67862179 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 67862176 67867407 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67862244 67867475 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 67867404 67872635 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67867472 67872703 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67872700 67877931 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67937834 67948067 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB
        Mac HFS 67938012 67948155 10144 HFS+ found using backup sector!, 5193 KB / 5072 KiB
        Mac HFS 67948064 67958297 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB
        Mac HFS 67948070 67958303 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB
        Mac HFS 67948152 67958295 10144 HFS+, 5193 KB / 5072 KiB
        Mac HFS 67958292 67968435 10144 HFS+, 5193 KB / 5072 KiB
        Mac HFS 67958300 67968533 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB
        Mac HFS 67992596 67997827 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 67997824 68003055 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67997892 68003123 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 68003052 68008283 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 68003120 68008351 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 68008348 68013579 5232 HFS+, 2678 KB / 2616 KiB
        Solaris /home 84429840 123499141 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84429952 123499253 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84493136 123562437 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84493248 123562549 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84566088 123635389 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84566200 123635501 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84571232 123640533 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84571344 123640645 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84659952 123729253 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84660064 123729365 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84690504 123759805 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84690616 123759917 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84700424 123769725 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84700536 123769837 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84797720 123867021 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84797832 123867133 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84812544 123881845 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84812656 123881957 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84824552 123893853 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84824664 123893965 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84847528 123916829 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84847640 123916941 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84886840 123956141 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84886952 123956253 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84945488 124014789 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84945600 124014901 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84957992 124027293 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84958104 124027405 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84962240 124031541 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84962352 124031653 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84977168 124046469 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84977280 124046581 39069302 UFS1, 20 GB / 18 GiB
        MS Data 174395467 178483851 4088385
        ..... snip (it keeps going on for quite a while)

    Read the article

  • BRE (Business Rules Engine) Data Services is out...!!!

    - by Vishal
    A few months ago we at Tellago open-sourced the BizTalk Data Services. We were meanwhile working on other artifacts that come along with BizTalk Server, like the Business Rules Engine. We are happy to announce the first version of BRE Data Services. BRE Data Services follows the same concept we covered with the BTS Data Services: a RESTful, OData-based API for interacting with the Business Rules Engine via HTTP, using the Atom Publishing Protocol or JSON as the encoding mechanism.

    In this first release, we mainly focused on browsing, querying, and searching BRE artifacts via a RESTful interface. Along with that, we provide the functionality to execute business rules by inserting the facts for policies via the IUpdatable implementation of WCF Data Services.

    The BRE Data Services API provides a lightweight interface for managing Business Rules Engine artifacts such as Policies, Rules, Vocabularies, Conditions, Actions, Facts, etc. The following examples show some of the features available in the current version of the API.

    Basic querying:

        Querying BRE Policies:     http://localhost/BREDataServices/BREMananagementService.svc/Policies
        Querying BRE Rules:        http://localhost/BREDataServices/BREMananagementService.svc/Rules
        Querying BRE Vocabularies: http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies

    Navigation: the BRE Data Services API also leverages WCF Data Services to enable navigation across related BRE objects.

        Querying a specific Policy:                  http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')
        Querying a specific Rule:                    http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')
        Querying all Rules under a Policy:           http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Rules
        Querying all Facts under a Policy:           http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Facts
        Querying all Actions for a specific Rule:    http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Actions
        Querying all Conditions for a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Conditions
        Querying a specific Vocabulary:              http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies('VocabName')

    Execution: with BRE Data Services, we also provide the functionality of executing a particular policy via HTTP. There are a couple of ways you can do that through the API.

    - The first is through the Service Operations feature of WCF Data Services, in which you execute the facts by passing them in the URL itself. This is a very simple way of executing policies, due to the current limitations and restrictions of WCF Data Services' service operations (only primitive types can be passed as input parameters). (The code sample and a traced request/response message appear as images in the original post.)

    - The second is through the IUpdatable interface of WCF Data Services. With this method, you first query the rule you want to execute, then insert facts for that particular rule, and finally, when you call SaveChanges() on the IUpdatable API, it executes the policy with the facts you inserted at runtime. (The client-side code sample appears as an image in the original post.)
    Due to a limitation of the current version of WCF Data Services, there is no way to return updates happening on the service side back to the client via the SaveChanges() method. Here we are executing the rule passing a serialized XML document as facts, and no changes are made to any data that we could query back to fetch the changes. This is overcome by the first way of executing policies, which is to execute them as a Service Operation call. This call generates an AtomPub message like the one below:

        POST /Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/$batch HTTP/1.1
        User-Agent: Microsoft ADO.NET Data Services
        DataServiceVersion: 1.0;NetFx
        MaxDataServiceVersion: 2.0;NetFx
        Accept: application/atom+xml,application/xml
        Accept-Charset: UTF-8
        Content-Type: multipart/mixed; boundary=batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
        Host: localhost:8080
        Content-Length: 1481
        Expect: 100-continue

        --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
        Content-Type: multipart/mixed; boundary=changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf

        --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf
        Content-Type: application/http
        Content-Transfer-Encoding: binary

        MERGE http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy') HTTP/1.1
        Content-ID: 4
        Content-Type: application/atom+xml;type=entry
        Content-Length: 927

        <?xml version="1.0" encoding="utf-8" standalone="yes"?>
        <entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
               xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
               xmlns="http://www.w3.org/2005/Atom">
          <category scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" term="Tellago.BRE.REST.Resources.Fact" />
          <title />
          <author>
            <name />
          </author>
          <updated>2011-01-31T20:09:15.0023982Z</updated>
          <id>http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy')</id>
          <content type="application/xml">
            <m:properties>
              <d:FactInstance>&lt;ns0:LoanStatus xmlns:ns0="http://tellago.com"&gt;&lt;Age&gt;10&lt;/Age&gt;&lt;Status&gt;true&lt;/Status&gt;&lt;/ns0:LoanStatus&gt;</d:FactInstance>
              <d:FactType>TestSchema</d:FactType>
              <d:ID>TestPolicy</d:ID>
            </m:properties>
          </content>
        </entry>
        --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf--
        --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7--

    Installation: setting up BRE Data Services is pretty straightforward.

    - Create a new IIS website, say BREDataServices.
    - Download the source code from Tellago's Codeplex workspace and copy the content of Tellago.BRE.REST.ServiceHost to the physical location of the website created above.
    - The app pool account running the website should have admin access to the BizTalkRuleEngineDb database.
    - Right-click BREManagementService.svc in the IIS content view for the website, browse to it, and voilà.

    Conclusion: the BRE Data Services API is an experiment intended to bring the capabilities of RESTful/OData-based services to traditional BTS/BRE solutions. Future releases will target technologies like BAM and the ESB Toolkit. This version has been tested with various versions of BizTalk Server, and we have uploaded the source code to our Tellago DevLabs workspace at Codeplex. I hope you guys enjoy this release. Keep an eye on our new releases at Tellago's Codeplex; we are working on various other BizTalk artifacts like BAM and the ESB Toolkit.

    Till then, happy BizzRuling...!!!

    Thanks,

    Vishal Mody
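
    For readers who want to poke at the feed without the WCF Data Services client, here is a minimal sketch (the service URL matches the examples above; everything else is plain .NET, and nothing BRE-specific is assumed):

        using System;
        using System.Net;
        using System.Xml.Linq;

        class BreFeedDemo
        {
            static void Main()
            {
                // Fetch the Policies feed and print each entry's id.
                var uri = "http://localhost/BREDataServices/BREMananagementService.svc/Policies";
                using (var client = new WebClient())
                {
                    client.Headers[HttpRequestHeader.Accept] = "application/atom+xml";
                    var feed = XDocument.Parse(client.DownloadString(uri));
                    XNamespace atom = "http://www.w3.org/2005/Atom";
                    foreach (var entry in feed.Descendants(atom + "entry"))
                        Console.WriteLine((string)entry.Element(atom + "id"));
                }
            }
        }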

    Read the article

  • jQuery Templates, Data Link

    - by Renso
    jQuery Templates, Data Link, and Globalization

    I am sure you must have read Scott Guthrie's blog post about jQuery support and officially supporting jQuery's templating, data linking, and globalization; if not, here it is: jQuery Templating.

    Since we are an open source shop and, to say the least, use jQuery and jQuery plugins extensively, I decided to look into the templating a bit and see what data linking is all about. For those not familiar with those terms, here is the summary; there is plenty of material out there on what it is, but here is what, in my experience, it means:

    jQuery Templating: A templating engine that allows you to specify a client-side template where you indicate which properties/tags you want dynamically updated. You, in a sense, specify which parts of the html are dynamic, and since it is pluggable you are able to use tools like jQuery data linking and others to let it sync up your template with data. What makes it more powerful is that you can easily work with rows of data, adding and removing rows. Once the template has been generated, which you do dynamically on a client-side event, you then append/inject the resulting template somewhere in your DOM; for example, you might get a JSON object from the database, map it to your template (which populates the template with your data in the indicated places), and then append the result to a row in a table. I have not found it that useful for, let's say, a single record of data, since you could easily just get a partial view from the server via an html-type ajax call. It really shines when you dynamically add/remove rows from a list in the DOM. I have not found an alternative that meets the functionality of the jQuery template, and it helps of course that Microsoft officially supports it. In future versions of the jQuery plugin it may even ship as part of the standard jQuery library, and with future versions of Visual Studio.

    jQuery Data Linking: In short, I was initially fascinated by how, with one line of code, I can sync up my JSON object with my form elements. That's where my enthusiasm stopped. It was one line to let it deal with syncing up your form with your JSON object, but it is not bidirectional as they state, and I tried all the workarounds they suggested and none of them work. The problem is that when you update your JSON object it DOES NOT sync it up with your form. In an example, accounts are being edited client side by selecting the account from a list by clicking on the row; it then fetches the entire account JSON object via an ajax json-type call and refreshes the form with the account's details from the new JSON object. What is the use of syncing up my JSON with the form if I still have to programmatically sync up my new JSON object with each DOM property?! So you may ask: "what is the alternative?"
    Good question, and the same one I was pondering: maybe I can just use it for keeping my form in sync with my JSON object, so I can post that JSON object back to the server and update my database. That's when I discovered Knockout.

    Knockout

    It addresses the issues mentioned above and also supports event handling through the observer pattern. Not wanting to go into detail here, Steve Sanderson, the creator of Knockout, has already done a terrific job of that; thanks Steve for a great plug-in! Best of all, it integrates perfectly with the jQuery templating engine as well. I have not found an alternative to this plugin that supports the depth and width of its functionality, and I would recommend it to anyone. The only drawback is the embedded html attributes (data-bind="") that you have to add to the HTML, in my opinion tying your behavior to your HTML, where I like to separate behavior from HTML as well as CSS, so the HTML is purely to define content, not styling or behavior. But there are plusses to this as well, and also a nifty workaround that I will just briefly mention here with an example. Instead of data binding an html tag with Knockout event handling like so:

        <%=Html.TextBox("PrepayDiscount", String.Empty, new { @class = "number" })%>

    Do:

        <%=Html.DataBoundTextBox("PrepayDiscount", String.Empty, new { @class = "number" })%>

    The html extension above then takes care of the internals, and you could then swap Knockout for something else inside the extension if you want to, keeping the HTML plugin-agnostic. Here is what the extension looks like; you can easily build a whole library to support all kinds of data binding options from this:

        public static class HtmlExtensions
        {
            public static MvcHtmlString DataBoundTextBox(this HtmlHelper helper, string name, object value, object htmlAttributes)
            {
                var dic = new RouteValueDictionary(htmlAttributes);
                dic.Add("data-bind", String.Format("value: {0}", name));
                return helper.TextBox(name, value, dic);
            }
        }

    Hope this helps in making a decision on when and where to consider jQuery templating, data linking, and Knockout.

    Read the article

  • Constrained A* problem

    - by Ragekit
    I've got a little problem with an A* algorithm that I need to constrain a little bit. Basically: I use A* to find the shortest path between two randomly placed rooms in 3D space, and then build a corridor between them. The problem I found is that sometimes it makes chimney-like corridors that are not ideal, so I constrain the A* so that if the last movement was up or down, you have to go sideways next. Everything is fine, but in some corner cases it fails to find a path (when there is obviously one), like here between the blue and red dot (I'm in Unity btw, but I don't think it matters). Here is the code of the actual A* (a bit long, and somewhat redundant):

        while(current != goal)
        {
            //add stair up / stair down
            foreach(Node<GridUnit> test in current.Neighbors)
            {
                if(!test.Data.empty && test != goal)
                    continue; //bug at arrival;

                if(test == goal && penul != null)
                {
                    Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center;
                    if(!Mathf.Approximately(currentDiff.y, 0))
                    {
                        //wanna drop on the last
                        if(!coplanar(test.Data.bounds.center, current.Data.bounds.center,
                                     current.Data.parentUnit.bounds.center, to.Data.bounds.center))
                        {
                            continue;
                        }
                        else
                        {
                            if(Mathf.Approximately(to.Data.bounds.center.x, current.Data.parentUnit.bounds.center.x) &&
                               Mathf.Approximately(to.Data.bounds.center.z, current.Data.parentUnit.bounds.center.z))
                            {
                                continue;
                            }
                        }
                    }
                }

                if(current.Data.parentUnit != null)
                {
                    Vector3 previousDiff = current.Data.parentUnit.bounds.center - current.Data.bounds.center;
                    Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center;
                    if(!Mathf.Approximately(previousDiff.y, 0))
                    {
                        if(!Mathf.Approximately(currentDiff.y, 0))
                        {
                            //you wanna drop now:
                            continue;
                        }
                        if(current.Data.parentUnit.parentUnit != null)
                        {
                            if(!coplanar(test.Data.bounds.center, current.Data.bounds.center,
                                         current.Data.parentUnit.bounds.center, current.Data.parentUnit.parentUnit.bounds.center))
                            {
                                continue;
                            }
                            else
                            {
                                if(Mathf.Approximately(test.Data.bounds.center.x, current.Data.parentUnit.parentUnit.bounds.center.x) &&
                                   Mathf.Approximately(test.Data.bounds.center.z, current.Data.parentUnit.parentUnit.bounds.center.z))
                                {
                                    continue;
                                }
                            }
                        }
                    }
                }

                g = current.Data.g + HEURISTIC(current.Data, test.Data);
                h = HEURISTIC(test.Data, goal.Data);
                f = g + h;
                if(open.Contains(test) || closed.Contains(test))
                {
                    if(test.Data.f > f)
                    {
                        //found a shorter path passing through that point
                        test.Data.f = f;
                        test.Data.g = g;
                        test.Data.h = h;
                        test.Data.parentUnit = current.Data;
                    }
                }
                else
                {
                    //never encountered before
                    test.Data.f = f;
                    test.Data.h = h;
                    test.Data.g = g;
                    test.Data.parentUnit = current.Data;
                    open.Add(test);
                }
            }
            closed.Add(current);
            if(open.Count == 0)
            {
                Debug.Log("nothingfound");
                //nothing more to test, no path found, stay at from;
                List<GridUnit> r = new List<GridUnit>();
                r.Add(from.Data);
                return r;
            }
            //sort open from smallest to biggest travel cost
            open.Sort(delegate(Node<GridUnit> x, Node<GridUnit> y) {
                return (int)(x.Data.f - y.Data.f);
            });
            //get the smallest travel cost node;
            Node<GridUnit> smallest = open[0];
            current = smallest;
            open.RemoveAt(0);
        }

        //build the path going backward;
        List<GridUnit> ret = new List<GridUnit>();
        if(penul != null)
        {
            ret.Insert(0, to.Data);
        }
        GridUnit cur = goal.Data;
        ret.Insert(0, cur);
        do {
            cur = cur.parentUnit;
            ret.Insert(0, cur);
        } while(cur != from.Data);
        return ret;

    You can see at the start of the foreach how I constrain the A*, as I said. If you have any insight it would be cool. Thanks
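
    A common way out of this class of failure (a sketch of the general technique, not a drop-in patch for the code above): when a movement constraint depends on the previous step, fold that history into the search state itself, i.e. search over (cell, came-from-vertical) pairs instead of bare cells. Two different approach directions to the same cell then no longer overwrite each other in the open/closed sets, which is typically what makes paths like the blue-to-red one disappear.

        using System;
        using System.Collections.Generic;

        // Search state = grid cell + whether the previous move was vertical.
        // (cell, true) and (cell, false) are distinct nodes, so closing one
        // never blocks a path that reaches the same cell the other way.
        struct MoveState : IEquatable<MoveState>
        {
            public readonly int X, Y, Z;
            public readonly bool CameFromVertical;

            public MoveState(int x, int y, int z, bool vertical)
            {
                X = x; Y = y; Z = z; CameFromVertical = vertical;
            }

            public bool Equals(MoveState o)
            {
                return X == o.X && Y == o.Y && Z == o.Z
                    && CameFromVertical == o.CameFromVertical;
            }

            public override int GetHashCode()
            {
                return ((X * 397 ^ Y) * 397 ^ Z) * 2 + (CameFromVertical ? 1 : 0);
            }
        }

        static class ConstrainedNeighbors
        {
            // Successor generation enforcing "no two vertical moves in a row".
            public static IEnumerable<MoveState> Expand(MoveState s)
            {
                // Sideways moves are always allowed and reset the flag.
                yield return new MoveState(s.X + 1, s.Y, s.Z, false);
                yield return new MoveState(s.X - 1, s.Y, s.Z, false);
                yield return new MoveState(s.X, s.Y, s.Z + 1, false);
                yield return new MoveState(s.X, s.Y, s.Z - 1, false);

                // Vertical moves only if the previous move was not vertical.
                if (!s.CameFromVertical)
                {
                    yield return new MoveState(s.X, s.Y + 1, s.Z, true);
                    yield return new MoveState(s.X, s.Y - 1, s.Z, true);
                }
            }
        }

    The rest of the A* loop stays the same; only the node type and the neighbor function change, and the goal test accepts either flag value for the goal cell.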

    Read the article

  • Good way to backup code projects

    - by rkmax
    I work as a developer and I have many projects; for most of them I use Git for versioning the code. I also have some projects that are not code, such as interface sketches, ideas, etc. Currently my backups are a mess. I used to use Déjà Dup, but the downside comes when I want to restore my backups from Windows, since I periodically change operating systems (Windows or Linux). My question is: which is better, copying each folder (project) directly to my external disk, or creating a bare repository (git init --bare) on the disk for each project and pushing to it? Or what else should I do?

    Read the article

  • Problem displaying data on one page

    - by user318068
    Hi, I have a problem with the following code. It is supposed to display the invites for each member, so that if he has five invites they are all displayed on one page. But the code does not work properly: it only displays one invite on the page, and only after that invitation is approved or rejected does it display the next one. That is not what I want; I want to show them all on one page. I think the problem is in the ordering of the code. I wish I could solve the problem so I can view all invites on one page.

    My code:

        <?php
        session_start();
        if (!isset($_SESSION['user_id'])) {
            header("Location: login.php");
        }
        $id = $_SESSION['user_id'];
        ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Untitled Document</title>
        </head>
        <body>
        <center>
        <?php
        include("connect.php");
        $sql = mysql_query("select * from ninvite where recieverMemberID ='$id' and viwed= '0'");
        $num = mysql_num_rows($sql);
        echo $num;
        if ($num > 0) {
            while ($row = mysql_fetch_array($sql)) {
                $sender = $row['SenderMemberID'];
                $room   = $row['RoomID'];
                $sql  = mysql_query("select MemberName from members where MemberID ='$sender' ");
                $sql1 = mysql_query("select RoomName from rooms where RoomID ='$room' ");
                while ($row = mysql_fetch_array($sql)) {
                    $mem = $row['MemberName'];
                }
                while ($rows = mysql_fetch_array($sql1)) {
                    $Ro = $rows['RoomName'];
        ?>
        <form action="join.php" method="post">
        <label> </label>
        <br/>
        <label>
        <?php echo " you have invite from $mem to join $Ro"; ?>
        </label>
        <br/><br/>
        <label>accept</label>
        <input name="radio1" type="radio" value="accpet" />
        <label>reject</label>
        <input name="radio1" type="radio" value="Reject" /><br/>
        <input type="submit" name="submit" value="done" />
        </form>
        <?php
                }
            }
        }
        ?>
        </center>
        </body>
        </html>

    Thanks a lot.

    My SQL:

        -- phpMyAdmin SQL Dump
        -- version 3.2.4
        -- http://www.phpmyadmin.net
        -- Host: localhost
        -- Generation Time: May 07, 2010 at 12:50
        -- Server version: 5.1.41
        -- PHP Version: 5.3.1

        SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO";
        /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
        /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
        /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
        /*!40101 SET NAMES utf8 */;

        --
        -- Database: tr
        --

        -- Table structure for table joinroom

        CREATE TABLE IF NOT EXISTS joinroom (
          MemberID int(10) NOT NULL,
          RoomID int(10) NOT NULL,
          PRIMARY KEY (MemberID, RoomID)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

        -- Dumping data for table joinroom

        INSERT INTO joinroom (MemberID, RoomID) VALUES
        (28, 1);

        -- Table structure for table members

        CREATE TABLE IF NOT EXISTS members (
          MemberID int(10) unsigned NOT NULL AUTO_INCREMENT,
          MemberName varchar(20) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
          MemberPass varchar(10) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
          MemberEmail varchar(30) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
          MemberLocation text CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
          MemberImg text CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
          PRIMARY KEY (MemberID)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=34;

        -- Dumping data for table members

        INSERT INTO members (MemberID, MemberName, MemberPass, MemberEmail, MemberLocation, MemberImg) VALUES
        (28, 'marwa', '1234', '[email protected]', 'mmmmmm', 'dddddddddd'),
        (29, 'nora', '1234', '[email protected]', 'fffffffffffgg', 'gggggggggggggg'),
        (30, 'soso', '1234', '[email protected]', 'ffffffff', 'kkkkkkkkkkkkkkkkkk'),
        (31, 'gege', '1234', '[email protected]', 'kkkkkkkkkkkkkkkk', 'uuuuuuuuuuuuuuuuu'),
        (32, 'nono', '1234', '[email protected]', 'ggggggggggggaaaaa', 'aaaaaaaaaaaaaaa'),
        (33, 'nda', '1234', '[email protected]', 'kkkkkkkkkkkkkkkk', 'ooooooooooooooo');

        -- Table structure for table ninvite

        CREATE TABLE IF NOT EXISTS ninvite (
          SenderMemberID int(11) NOT NULL AUTO_INCREMENT,
          recieverMemberID varchar(30) NOT NULL,
          RoomID int(11) NOT NULL,
          viwed int(11) NOT NULL,
          PRIMARY KEY (SenderMemberID, recieverMemberID, RoomID)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=33;

        -- Dumping data for table ninvite

        INSERT INTO ninvite (SenderMemberID, recieverMemberID, RoomID, viwed) VALUES
        (28, '33', 1, 0),
        (28, '32', 1, 0),
        (28, '31', 1, 0);

        /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
        /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
        /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
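
    One thing worth checking here, offered as a likely culprit rather than a verified fix: inside the outer while loop, $sql is reassigned to the MemberName query result, which clobbers the invite result set the outer loop is iterating over, and the inner while loops then drain those inner result sets. Giving the inner queries their own variable names (for example $sqlName and $sqlRoom, hypothetical names) would let the outer loop keep walking the remaining invites, so all of them render on one page.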

    Read the article

  • From DBA to Data Analyst

    - by Denise McInerney
    Cross-posted from the PASS Blog.

    There is a lot changing in the data professional's world these days. More data is being produced and stored. More enterprises are trying to use that data to improve their products and services and understand their customers better. More data platforms and tools seem to be crowding the market. For a traditional DBA this can be a confusing and perhaps unsettling time. It's also a time that offers great opportunity for career growth. I speak from personal experience.

    We sometimes refer to the "accidental DBA", the person who finds herself suddenly responsible for managing the database because she has some other technical skills. While it was not accidental, six months ago I was unexpectedly offered a chance to transition out of my DBA role and become a data analyst. I have since come to view this offer as a gift, though at the time I wasn't quite sure what to do with it.

    Throughout my DBA career I've gotten support from my PASS friends and colleagues, and they were the first ones I turned to for counsel about this new situation. Everyone was encouraging, and I received two pieces of valuable advice: first, leverage what I already know about data and, second, work to understand the business' needs. Bringing the power of data to bear to solve business problems is really the heart of the job. The challenge is figuring out how to do that.

    PASS had been the source of much of my technical training as a DBA, so I naturally started there to begin my Business Intelligence education. Once again the Virtual Chapter webinars, local chapter meetings, and SQL Saturdays have been invaluable. I work in a large company where we are fortunate to have some very talented data scientists and analysts. These colleagues have been generous with their time and advice. I also took a statistics class through Coursera, where I got a refresher in statistics and an introduction to the R programming language. And that's not the end of the free resources available to someone wanting to acquire new skills. There are many knowledgeable Business Intelligence and Analytics professionals who teach through their blogs. Every day I can learn something new from one of these experts.

    Sometimes we plan our next career move and sometimes it just happens. Either way, a database professional who follows industry developments and acquires new skills will be better prepared when change comes. Take the opportunity to learn something about the changing data landscape and attend a Business Intelligence, Business Analytics, or Big Data Virtual Chapter meeting. And if you are moving into this new world of data, consider attending the PASS Business Analytics Conference in April, where you can meet and learn from those who are already on that road.

    It's been said that "the only thing constant is change." That's never been more true for the data professional than it is today. But if you are someone who loves data and grasps its potential, you are in the right place at the right time.

    Read the article

  • Blink-Data vs Instinct?

    - by Samantha.Y. Ma
    In his landmark bestseller Blink, well-known author and journalist Malcolm Gladwell explores how human beings every day make seemingly instantaneous choices, in the blink of an eye, and how we "think without thinking." These situations actually aren't as simple as they seem, he postulates; and throughout the book, Gladwell seeks answers to questions such as:

    1. What makes some people good at thinking on their feet and making quick, spontaneous decisions?
    2. Why do some people follow their instincts and win, while others consistently seem to stumble into error?
    3. Why are some of the best decisions often those that are difficult to explain to others?

    In Blink, Gladwell introduces us to the psychologist who has learned to predict whether a marriage will last based on a few minutes of observing a couple; the tennis coach who knows when a player will double-fault before the racket even makes contact with the ball; the antiquities experts who recognize a fake at a glance. Ultimately, Blink reveals that great decision makers aren't those who spend the most time deliberating or analyzing information, but those who focus on key factors among an overwhelming number of variables, i.e., those who have perfected the art of "thin-slicing."

    In Data vs. Instinct: Perfecting Global Sales Performance, a new report sponsored by Oracle, the Economist Intelligence Unit (EIU) explores the roles data and instinct play in decision-making by sales managers, and discusses how sales executives can increase sales performance through more effective territory planning and incentive/compensation strategies.

    If you are a sales executive, ask yourself this: do you rely on knowledge (data) when you plan out your sales strategy? If you rely on data, how do you ensure that your data sources are reliable, up-to-date, and complete? With the emergence of social media and the proliferation of both structured and unstructured data, how do you know that you are applying your information/data correctly and in context?

    Three key findings in the report are:

    - Six out of ten executives say they rely more on data than instinct to drive decisions.
    - Nearly one half (48 percent) of incentive compensation plans do not achieve the desired results.
    - Senior sales executives rely more on current and historical data than on forecast data.

    Strikingly similar to what Gladwell concludes in Blink, the report's authors succinctly sum up their findings: "The best outcome is a combination of timely information, insightful predictions, and support data." Applying this insight is crucial to creating a sound sales plan that drives alignment and results. In the area of sales performance management, "territory programs and incentive compensation continue to present particularly complex challenges in an increasingly globalized market," say the report's authors. "It behooves companies to get a better handle on translating that data into actionable and effective plans."

    To help solve this challenge, Oracle Fusion CRM integrates forecasting, quotas, compensation, and territories into a single system. For example, Oracle Fusion CRM provides a natural integration between territories, which define the sales targets (e.g., a collection of accounts) for the sales force, and quotas, which quantify those sales targets. In fact, territory hierarchy is a core analytic dimension for slicing and dicing sales results, using sales analytics and alerts to help you identify where problems are occurring.

    Start tapping into both data and instinct effectively today with Oracle Fusion CRM. Here is a short video to provide you with a snapshot of how it can help you optimize your sales performance.

    Read the article

  • Reaching to the Holy Grail of Data Management

    - by Irem Radzik
    Pervasive, continuous access to trusted data: that's the ultimate goal of data management. It enables an organization to leverage data as an asset to create value for customers and the business, and it creates the strong foundation needed to move the business forward. How you get there is also critical. As with all IT initiatives, using high-performance solutions with a low cost of ownership is another key requirement in today's IT world. Oracle's data integration product strategy focuses on helping customers achieve this ultimate goal with high performance and low TCO.

    At OpenWorld, we will be showing how Oracle Data Integration products help you reach your data management goals, considering new trends in information management such as big data and cloud computing. We will also provide an update on the latest product releases, such as Oracle GoldenGate 11gR2. If you will be at OpenWorld, please join us on Monday, Oct 1st, 10:45am at Moscone West – 3005 to hear our VP of Product Development, Brad Adelberg, present "Future Strategy, Direction, and Roadmap of Oracle's Data Integration Platform".

    The Data Integration track at OpenWorld covers a variety of topics and speakers. In addition to product management for Oracle GoldenGate, Oracle Data Integrator, and Enterprise Data Quality presenting product updates and roadmaps, we have several customer panels and stand-alone sessions featuring select customers such as St. Jude Medical, Raymond James, Aderas, Turkcell, Paychex, Comcast, Ticketmaster, Bank of America, and more. You can see an overview of Data Integration sessions here.

    If you are not able to attend OpenWorld, please check out our latest resources for Data Integration and Oracle GoldenGate. In the coming weeks you will see more blogs about our products' new capabilities and what to expect at OpenWorld. I hope to see you at OpenWorld, and stay in touch via our future blogs.

    Read the article

  • Core Data migration failing with error: Failed to save new store after first pass of migration

    - by unforgiven
    In the past I had already implemented automatic migration from version 1 of my data model to version 2 successfully. Now, using SDK 3.1.3, migrating from version 2 to version 3 fails with the following error:

        Unresolved error Error Domain=NSCocoaErrorDomain Code=134110 UserInfo=0x5363360
        "Operation could not be completed. (Cocoa error 134110.)", {
            NSUnderlyingError = Error Domain=NSCocoaErrorDomain Code=256 UserInfo=0x53622b0
            "Operation could not be completed. (Cocoa error 256.)";
            reason = "Failed to save new store after first pass of migration.";
        }

    I have tried automatic migration using NSMigratePersistentStoresAutomaticallyOption and NSInferMappingModelAutomaticallyOption, and also migration using only NSMigratePersistentStoresAutomaticallyOption, providing a mapping model from v2 to v3. I see the above error logged, and no object is available in the application. However, if I quit the application and reopen it, everything is in place and working. The Core Data methods I am using are the following:

        - (NSManagedObjectModel *)managedObjectModel
        {
            if (managedObjectModel != nil) {
                return managedObjectModel;
            }
            NSString *path = [[NSBundle mainBundle] pathForResource:@"MYAPP" ofType:@"momd"];
            NSURL *momURL = [NSURL fileURLWithPath:path];
            managedObjectModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:momURL];
            return managedObjectModel;
        }

        - (NSManagedObjectContext *)managedObjectContext
        {
            if (managedObjectContext != nil) {
                return managedObjectContext;
            }
            NSPersistentStoreCoordinator *coordinator = [self persistentStoreCoordinator];
            if (coordinator != nil) {
                managedObjectContext = [[NSManagedObjectContext alloc] init];
                [managedObjectContext setPersistentStoreCoordinator:coordinator];
            }
            return managedObjectContext;
        }

        - (NSPersistentStoreCoordinator *)persistentStoreCoordinator
        {
            if (persistentStoreCoordinator != nil) {
                return persistentStoreCoordinator;
            }
            NSURL *storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory]
                stringByAppendingPathComponent:@"MYAPP.sqlite"]];
            NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
                [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil];
            NSError *error = nil;
            persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc]
                initWithManagedObjectModel:[self managedObjectModel]];
            if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                                                          configuration:nil
                                                                    URL:storeUrl
                                                                options:options
                                                                  error:&error]) {
                // Handle error
                NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            }
            return persistentStoreCoordinator;
        }

    In the simulator, I see that this generates a MYAPP~.sqlite file and a MYAPP.sqlite file. I tried to remove the MYAPP~.sqlite file, but

        BOOL oldExists = [[NSFileManager defaultManager] fileExistsAtPath:
            [[self applicationDocumentsDirectory] stringByAppendingPathComponent:@"MYAPP~.sqlite"]];

    always returns NO. Any clue? Am I doing something wrong? Thank you in advance.

    Read the article

  • ABPerson in Core Data

    - by eman
    I'm trying to figure out how to store a reference to an ABPerson in a Core Data store in an iPhone app. Ultimately, I'd like to be able to sync with a Mac version of the app (I'm assuming ABRecordIDs wouldn't be the same for the iPhone and the Mac). I was thinking of storing the record ID, name, and email and checking against those. Is there a better way?
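    A hedged sketch of that lookup strategy (illustrative only; recordID and storedName are assumed to come from the Core Data store): try the stored record ID first, then fall back to matching on the stored name, and email if needed, when the ID no longer resolves.

        // Sketch: resolve a stored contact reference with a fallback search.
        ABAddressBookRef book = ABAddressBookCreate();
        ABRecordRef person = ABAddressBookGetPersonWithRecordID(book, recordID);
        NSArray *matches = nil;
        if (person == NULL) {
            // The ID is stale (e.g. after a sync); fall back to a name search.
            matches = (NSArray *)ABAddressBookCopyPeopleWithName(book, (CFStringRef)storedName);
            if ([matches count] > 0) {
                person = (ABRecordRef)[matches objectAtIndex:0];
                // Disambiguate here by comparing the stored email if needed.
            }
        }
        // ... use person, then release the owning objects:
        [matches release];
        CFRelease(book);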

    Read the article

  • Core Data - Calculated Fields

    - by Jacob
    Hi, I know how to use Core Data with UITableView, but how can I use NSFetchedResultsController to get calculated fields? Is there an example I can follow? Like, I want to go through all the NSManagedObjects and then add up their "mark" fields. Can this be done in an easier way, or do I have to do it all manually? Thanks
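    A hedged sketch of the non-manual route (assuming "mark" is a numeric attribute on the fetched entity): Key-Value Coding collection operators can sum the fetched objects in one call, with no explicit loop.

        // Sketch: sum the "mark" attribute of every fetched object via KVC.
        // Assumes the fetched results controller is already configured and
        // performFetch: has succeeded.
        NSArray *objects = [fetchedResultsController fetchedObjects];
        NSNumber *total = [objects valueForKeyPath:@"@sum.mark"];
        NSLog(@"Total mark: %@", total);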

    Read the article

  • System.Data.SQLite parameter issue

    - by CasperT
    I have the following code:

        try
        {
            //Create connection
            SQLiteConnection conn = DBConnection.OpenDB();

            //Verify user input; normally you give dbType a size, but Text is an exception
            var uNavnParam = new SQLiteParameter("@uNavnParam", SqlDbType.Text) { Value = uNavn };
            var bNavnParam = new SQLiteParameter("@bNavnParam", SqlDbType.Text) { Value = bNavn };
            var passwdParam = new SQLiteParameter("@passwdParam", SqlDbType.Text) { Value = passwd };
            var pc_idParam = new SQLiteParameter("@pc_idParam", SqlDbType.TinyInt) { Value = pc_id };
            var noterParam = new SQLiteParameter("@noterParam", SqlDbType.Text) { Value = noter };
            var licens_idParam = new SQLiteParameter("@licens_idParam", SqlDbType.TinyInt) { Value = licens_id };

            var insertSQL = new SQLiteCommand("INSERT INTO Brugere (navn, brugernavn, password, pc_id, noter, licens_id)" +
                "VALUES ('@uNameParam', '@bNavnParam', '@passwdParam', '@pc_idParam', '@noterParam', '@licens_idParam')", conn);
            insertSQL.Parameters.Add(uNavnParam); //replace parameter with verified user input
            insertSQL.Parameters.Add(bNavnParam);
            insertSQL.Parameters.Add(passwdParam);
            insertSQL.Parameters.Add(pc_idParam);
            insertSQL.Parameters.Add(noterParam);
            insertSQL.Parameters.Add(licens_idParam);
            insertSQL.ExecuteNonQuery(); //Execute query

            //Close connection
            DBConnection.CloseDB(conn);

            //Let the user know that it was changed successfully
            this.Text = "Succes! Changed!";
        }
        catch (SQLiteException e)
        {
            //Catch error
            MessageBox.Show(e.ToString(), "ALARM");
        }

    It executes perfectly, but when I view my "brugere" table, it has inserted the values '@uNameParam', '@bNavnParam', '@passwdParam', '@pc_idParam', '@noterParam', '@licens_idParam' literally, instead of replacing them. I set a breakpoint and checked the parameters; they do have the correct assigned values, so that is not the issue either. I have been tinkering with this a lot now with no luck. Can anyone help? For reference, here is the OpenDB method from the DBConnection class:

        public static SQLiteConnection OpenDB()
        {
            try
            {
                //Gets connection string from app.config
                const string myConnectString = "data source=data;";
                var conn = new SQLiteConnection(myConnectString);
                conn.Open();
                return conn;
            }
            catch (SQLiteException e)
            {
                MessageBox.Show(e.ToString(), "ALARM");
                return null;
            }
        }
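    A hedged reading of the likely cause (a general rule of parameterized SQL, not a confirmed answer for this exact driver version): placeholders wrapped in single quotes are treated as string literals and inserted verbatim, and the first placeholder name ('@uNameParam') does not match the bound parameter (@uNavnParam). A corrected command-text sketch:

        // Sketch: unquoted placeholders whose names match the bound parameters.
        var insertSQL = new SQLiteCommand(
            "INSERT INTO Brugere (navn, brugernavn, password, pc_id, noter, licens_id) " +
            "VALUES (@uNavnParam, @bNavnParam, @passwdParam, @pc_idParam, @noterParam, @licens_idParam)",
            conn);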

    Read the article

  • ASP.NET Dynamic Data - Field not rendered on List page

    - by Christo Fur
    I have created a Dynamic Data site against an Entity Framework model. I have 2 fields which are nvarchar(max) in the DB, and they do not get rendered on the List view. This is probably a sensible default, but how do I override it? I have tried adding various attributes to my metadata class, e.g. [ScaffoldColumn(true)] and [UIHint("RuleData")], but no joy with that. Any ideas?
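    A hedged sketch of one common workaround (this uses the standard Dynamic Data field-template mechanism; that it fixes this particular case is an assumption): point the column at a custom field template with [UIHint] and add a matching .ascx under DynamicData/FieldTemplates.

        // Sketch: metadata class routing the long-text column to a custom
        // field template. "Rule", "RuleData", and "LongText" are illustrative
        // names; a LongText.ascx template must exist for the UIHint to resolve.
        [MetadataType(typeof(RuleMetadata))]
        public partial class Rule { }

        public class RuleMetadata
        {
            [ScaffoldColumn(true)]
            [UIHint("LongText")]
            public object RuleData { get; set; }
        }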

    Read the article

  • Sharing data between graphics and physics engines in a game?

    - by PolGraphic
    I'm writing a game engine that consists of a few modules. Two of them are the graphics engine and the physics engine. I wonder if it's a good solution to share data between them. The two ways (sharing or not) look like this:

    Without sharing data:

        GraphicsModel {
            //some data common to graphics and physics, like position
            //some graphics-only data,
            //like textures and the detailed model vertices that physics doesn't need
        };

        PhysicsModel {
            //some data common to graphics and physics, like position
            //some physics-only data
            //(usually my physics data contains A LOT more information than the graphics data)
        };

        engine3D->createModel3D(...);
        physicsEngine->createModel3D(...);
        //connect graphics and physics data,
        //e.g. update the graphics model's position when the physics model's position changes

    I see two main problems: a lot of redundant data (like two positions, one each for the physics and graphics data), and a problem with updating data (I have to manually update the graphics data when the physics data changes).

    With sharing data:

        Model {
            //some data common to graphics and physics, like position
        };

        GraphicsModel : public Model {
            //some graphics-only data,
            //like textures and the detailed model vertices that physics doesn't need
        };

        PhysicsModel : public Model {
            //some physics-only data
            //(usually my physics data contains A LOT more information than the graphics data)
        };

        model = engine3D->createModel3D(...);
        physicsEngine->assingModel3D(&model);
        //will cast to PhysicsModel for its purposes??
        //when physics changes anything (like position) in the model
        //(which it treats as a PhysicsModel), the position in the graphics data
        //changes as well (because it's the same model)

    Problems here: physicsEngine cannot create new objects, just "assign" existing ones from engine3D (somehow that seems less independent to me); there is casting of data in the assingModel3D function; and physicsEngine and graphicsEngine must be careful: neither can delete data it no longer needs, because the other one may still need it. But that's a rare situation; moreover, they can just delete the pointer, not the object, or we can assume that graphicsEngine deletes the objects and physicsEngine just the pointers to them.

    Which way is better? Which will produce more problems in the future? I like the second solution more, but I wonder why most graphics and physics engines prefer the first one (maybe because they normally build only a graphics or only a physics engine, and somebody else connects them in the game?). Do they have any more hidden pros and cons?
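    A hedged third option worth sketching (an illustration of composition over inheritance; the names are illustrative, not taken from any particular engine): both modules hold a pointer to a small shared component such as a transform, so neither side owns, casts, or duplicates the other's data.

        // Sketch: one shared Transform referenced by both engines; each module
        // keeps its private data separate. shared_ptr also settles the deletion
        // question: the Transform lives as long as either side still uses it.
        #include <memory>

        struct Transform {
            float position[3];
            float rotation[4]; // quaternion
        };

        struct GraphicsModel {
            std::shared_ptr<Transform> transform; // same instance as physics
            // textures, detailed vertices, ...
        };

        struct PhysicsModel {
            std::shared_ptr<Transform> transform; // same instance as graphics
            // mass, velocity, collision shape, ...
        };

        // Creation: one Transform handed to both modules, e.g.
        //   auto t = std::make_shared<Transform>();
        //   engine3D->createModel3D(t, ...);
        //   physicsEngine->createModel3D(t, ...);

    Each engine can then create its own model objects independently, and a position update written through one pointer is immediately visible through the other.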

    Read the article

< Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >