Search Results

Search found 58470 results on 2339 pages for 'data visualisation'.

Page 46/2339 | < Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >

  • Connect to QuickBooks from PowerBuilder using RSSBus ADO.NET Data Provider

    - by dataintegration
    The RSSBus ADO.NET providers are easy-to-use, standards-based controls that can be used from any platform or development technology that supports Microsoft .NET, including Sybase PowerBuilder. In this article we show how to create a basic PowerBuilder application that performs CRUD operations using the RSSBus ADO.NET Provider for QuickBooks. A similar approach can be used from PowerBuilder with other RSSBus ADO.NET Data Providers to access data from Salesforce, SharePoint, Dynamics CRM, Google, OData, etc.

    Step 1: Open PowerBuilder and create a new WPF Window Application solution.
    Step 2: Add all the Visual Controls needed for the connection properties.
    Step 3: Add the DataGrid control from the .NET controls.
    Step 4: Configure the columns of the DataGrid control as shown below. The column bindings will depend on the table.

        <DataGrid AutoGenerateColumns="False" Margin="13,249,12,14" Name="datagrid1" TabIndex="70" ItemsSource="{Binding}">
          <DataGrid.Columns>
            <DataGridTextColumn x:Name="idColumn" Binding="{Binding Path=ID}" Header="ID" Width="SizeToHeader" />
            <DataGridTextColumn x:Name="nameColumn" Binding="{Binding Path=Name}" Header="Name" Width="SizeToHeader" />
            ...
          </DataGrid.Columns>
        </DataGrid>

    Step 5: Add a reference to the RSSBus ADO.NET Provider for QuickBooks assembly.
    Step 6 (optional): Set the QBXML Version to 6. Some of the tables in QuickBooks require a later version of QuickBooks to support updates and deletes. Please check the help for details.

    Connect the DataGrid: Once the visual elements have been configured, developers can use standard ADO.NET objects like Connection, Command, and DataAdapter to populate a DataTable with the results of a SQL query:

        System.Data.RSSBus.QuickBooks.QuickBooksConnection conn
        conn = create System.Data.RSSBus.QuickBooks.QuickBooksConnection(connectionString)

        System.Data.RSSBus.QuickBooks.QuickBooksCommand comm
        comm = create System.Data.RSSBus.QuickBooks.QuickBooksCommand(command, conn)

        System.Data.DataTable table
        table = create System.Data.DataTable

        System.Data.RSSBus.QuickBooks.QuickBooksDataAdapter dataAdapter
        dataAdapter = create System.Data.RSSBus.QuickBooks.QuickBooksDataAdapter(comm)

        dataAdapter.Fill(table)
        datagrid1.ItemsSource = table.DefaultView

    The code above can be used to bind data from any query (set this in command) to the DataGrid. The DataGrid should have the same columns as those returned from the SELECT statement.

    PowerBuilder Sample Project: The included sample project includes the steps outlined in this article. You will also need the QuickBooks ADO.NET Data Provider to make the connection. You can download a free trial here.

    Read the article

  • EBS Seed Data Comparison Reports Now Available

    - by Steven Chan (Oracle Development)
    Earlier this year we released a reporting tool that reports on the differences in E-Business Suite database objects between one release and another. That's a very useful reference, but EBS defaults are delivered as seed data within the database objects themselves. What about the differences in this seed data between one release and another?

    I'm pleased to announce the availability of a new tool that provides comparison reports of E-Business Suite seed data between EBS 11.5.10.2, 12.0.4, 12.0.6, 12.1.1, and 12.1.3. This new tool complements the information in the data model comparison tool. You can download the new seed data comparison tool here: EBS ATG Seed Data Comparison Report (Note 1327399.1)

    The EBS ATG Seed Data Comparison Report reports on the changes between different EBS releases based upon the seed data changes delivered by the product data loader files (.ldt extension), which are driven by EBS ATG loader control (.lct extension) files. You can use this new tool to report on the differences in the following types of seed data:
    - Concurrent Program definitions
    - Descriptive Flexfield entity definitions
    - Application Object Library profile option definitions
    - Application Object Library (AOL) key flexfield, function, lookup, and value set definitions
    - Application Object Library (AOL) menu and responsibility definitions
    - Application Object Library messages
    - Application Object Library request set definitions
    - Application Object Library printer style definitions
    - Report Manager / WebADI component and integrator entity definitions
    - Business Intelligence Publisher (BI Publisher) entity definitions
    - BIS Request Set Generator entity definitions
    ... and more

    Your feedback is welcome. This new tool was produced by our hard-working EBS Release Management team, and they're actively seeking your feedback. Please feel free to share your experiences with it by posting a comment here. You can also request enhancements to this tool via the distribution list address included in Note 1327399.1.

    Related Articles:
    - Oracle E-Business Suite Release 12.1.3 Now Available
    - New Whitepaper: Upgrading EBS 11i Forms + OA Framework Personalizations to EBS 12
    - EBS 12.0 Minimum Requirements for Extended Support Finalized
    - Five Key Resources for Upgrading to E-Business Suite Release 12
    - E-Business Suite Release 12.1.1 Consolidated Upgrade Patch 1 Now Available
    - New Whitepaper: Planning Your E-Business Suite Upgrade from Release 11i to 12.1

    Read the article

  • How long for data highlighter mark up to appear in structured data tool?

    - by Max
    I used the Data Highlighter in Webmaster Tools over 3 weeks ago to mark up some local business data, but there is still no structured data being detected in Webmaster Tools. Does anybody have any experience of approximately how long it takes for Google Webmaster Tools to start reporting structured data that has been marked up with their Data Highlighter? I'm asking specifically about reporting on it in the Webmaster Tools Structured Data section, as opposed to the markup actually appearing in the SERPs.

    Read the article

  • PASS Business Intelligence Virtual Chapter Upcoming Sessions (November 2013)

    - by Sergio Govoni
    Let me point out the upcoming live events, dedicated to Business Intelligence with SQL Server, that the PASS Business Intelligence Virtual Chapter has scheduled for November 2013.

    The "Accidental Business Intelligence Project Manager"
    Date: Thursday 7th November - 8:00 PM GMT / 3:00 PM EST / Noon PST
    Speaker: Jen Stirrup
    URL: https://attendee.gotowebinar.com/register/5018337449405969666
    You've watched The Apprentice with Donald Trump and Lord Alan Sugar. You know that the Project Manager is usually the one who gets fired. You've heard that Business Intelligence projects are prone to failure. You know that a quick Bing search for "why do Business Intelligence projects fail?" produces a search result of 25 million hits! Despite all this… you're now a Business Intelligence Project Manager – now what do you do? In this session, Jen will provide a "sparks from the anvil" series of steps and working practices in Business Intelligence Project Management. What about waterfall vs agile? What is a Gantt chart anyway? Is Microsoft Project your friend or a problematic aspect of being a BI PM? Jen will give you some ideas and insights that will help you set your BI project right: assess priorities, avoid conflict, empower the BI team and generally deliver the Business Intelligence project successfully!

    Dimensional Modelling Design Patterns: Beyond Basics
    Date: Tuesday 12th November - Noon AEDT / 1:00 AM GMT / Monday 11th November 5:00 PM PST
    Speaker: Jason Horner, Josh Fennessy and friends
    URL: https://attendee.gotowebinar.com/register/852881628115426561
    This session will provide a deeper dive into the art of dimensional modeling. We will look at the different types of fact tables and dimension tables, and how and when to use them. We will also look at some approaches to creating rich hierarchies that make reporting a snap. This session promises to be very interactive and engaging, so bring your toughest dimensional modeling quandaries.

    Data Vault Data Warehouse Architecture
    Date: Tuesday 19th November - 4:00 PM PST / 7:00 PM EST / Wednesday 20th November 11:00 PM AEDT
    Speaker: Jeff Renz and Leslie Weed
    URL: https://attendee.gotowebinar.com/register/1571569707028142849
    Data vault is a compelling architecture for an enterprise data warehouse using SQL Server 2012. A well-designed data vault data warehouse facilitates fast, efficient and maintainable data integration across business systems. In this session Leslie and I will review the basics of enterprise data warehouse design, introduce you to the data vault architecture, and discuss how you can leverage new features of SQL Server 2012 to help make your data warehouse solution provide maximum value to your users.

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data

    Have you been hearing about "big data", "map reduce" and other large-scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a "test" Hadoop environment on your machine? More recently, I have read about some of Microsoft's work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation when on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather more experienced. Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach, and it occurred to me that PowerShell's support for Windows Remote Management (WinRM), and some other inherent features of PowerShell, might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment.

    Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.

        C:\> winrm quickconfig
        WinRM is not set up to receive requests on this machine.
        The following changes must be made:
        Set the WinRM service type to auto start.
        Start the WinRM service.
        Make these changes [y/n]? y

    Alternatively, you could take the approach described in the Remotely enable PSRemoting post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node.

    Invoke-MapRedux
    Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:

        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

    Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell Module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined.

    Map
    Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is as follows: a simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function.

        $aMap =
        {
            Param
            (
                [PsObject] $dataset
            )

            # Indicate the job is running on the remote node.
            Write-Host ($env:computername + "::Map");

            # The hashtable to return
            $list = @{};

            # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject...
            # ... The $dataset has a single 'Data' property which contains an array of data rows
            #     which is a subset of the originally submitted data set.

            # Return the hashtable (Key, PSObject)
            Write-Output $list;
        }

    Reduce
    Likewise, with the Reduce function a simple prototype must be followed which takes a $key and a result $dataset from the MapRedux's partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section.

        $aReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )

            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)

            # The hashtable to return
            $redux = @{};

            # Return
            Write-Output $redux;
        }

    All Together Now
    When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem and call Invoke-MapRedux to get the process started:

        # Import the MapRedux and SQL Server providers
        Import-Module "MapRedux"
        Import-Module "sqlps" -DisableNameChecking

        # Query the database for a dataset
        Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb
        $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey";
        Write-Host "Query: $query"
        $dataset = Invoke-SqlCmd -query $query

        # Build the Map function
        $MyMap =
        {
            Param
            (
                [PsObject] $dataset
            )

            Write-Host ($env:computername + "::Map");
            $list = @{};
            foreach($row in $dataset.Data)
            {
                # Write-Host ("Key: " + $row.MyKey.ToString());
                if($list.ContainsKey($row.MyKey) -eq $true)
                {
                    $s = $list.Item($row.MyKey);
                    $s.Sum += $row.Value1;
                    $s.Count++;
                }
                else
                {
                    $s = New-Object PSObject;
                    $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey;
                    $s | Add-Member -type NoteProperty -Name Sum -Value $row.Value1;
                    $list.Add($row.MyKey, $s);
                }
            }

            Write-Output $list;
        }

        $MyReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )

            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)

            $redux = @{};
            $count = 0;
            foreach($s in $dataset.Data)
            {
                $sum += $s.Sum;
                $count += 1;
            }

            # Reduce
            $redux.Add($s.MyKey, $sum / $count);

            # Return
            Write-Output $redux;
        }

        # Create the item data
        $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce

        # Array of processing nodes...
        $MyNodes = ("node1", "node2", "node3", "node4", "localhost")

        # Run the Map Reduce routine...
        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

        # Show the results
        Set-Location C:\
        $MyMrResults | Out-GridView

    Conclusion
    I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done and PowerShell could prove to be the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic exercise at home or school.
    Follow me on Twitter to stay up to date on the continuing progress of my PowerShell MapRedux module, and thanks for reading! Daniel

    Read the article

  • EMEA Analytics & Data Integration Oracle Partner Forum

    - by Mike.Hallett(at)Oracle-BI&EPM
    MONDAY 12TH NOVEMBER, 2012 IN LONDON (UK)

    For Oracle Partners across Europe, Middle East and Africa: come to hear the latest news from Oracle OpenWorld about Oracle BI & Data Integration, and propel your business growth as an Oracle partner. This event should appeal to BI or Data Integration specialised partners, executives, sales, pre-sales and solution architects, with a choice of participation in the plenary day and then a set of special interest (technical) sessions. The follow-on breakout sessions from the 13th November provide deeper dives and technical training for those of you who wish to stay for more detailed and hands-on workshops.

    Keynote: Andrew Sutherland, SVP Oracle Technology

    Hot agenda items will include:
    - The Fusion Middleware Stack: Engineered to work together
    - A complete Analytics and Data Integration Solution Architecture: Big Data and Little Data combined
    - In-Memory Analytics for Extreme Insight
    - Latest Product Development Roadmap for Data Integration and Analytics

    Venue: Oracle's London City Moorgate offices. Places are limited; register from this link (see the Register button at the bottom right of the page). Note: registration for the conference and the deeper dives and technical training is free of charge to OPN member partners, but you will be responsible for your own travel and hotel expenses.

    Event Schedule
    During this event you can learn about partner success stories, participate in an array of break-out sessions, exchange information with other partners and enjoy a vibrant panel discussion.
    Nov. 12th: Day 1 Main Plenary Session: full day, starting 10.30 am. Oracle-hosted dinner in the evening.
    Nov. 13th onwards:
    - Architecture Masterclass: IM Reference Architecture – Big Data and Little Data combined (1 day)
    - BI-Apps Bootcamp (4 days)
    - Oracle GoldenGate workshop (1 day)
    - Oracle Data Integrator and Oracle Enterprise Data Quality workshop (1 day)

    For further information and detail download the Agenda (pdf) or contact Michael Hallett at [email protected].

    Read the article

  • How to recover data from NTFS partition that was made into a Swap partition?

    - by Raghav Mehta
    I have extremely important stuff on my Windows partition. During the Ubuntu 10.10 installation, when it said that I should create something called swap space, I selected that partition to be the swap space (without even knowing what that actually meant). GRUB 2 doesn't show up, so I don't get a choice to boot Ubuntu or Windows, and I don't see my Windows partition as a removable device in Ubuntu either. When I go to Disk Utility, select sda2 (i.e. my Windows partition), click Edit Partition, select HPFS/NTFS for the type, tick Bootable and click OK, the small processing sign keeps rotating at the bottom right of sda2 in the chart, and after about 10 to 15 minutes it gives an unknown error; thus, I am still unable to use Windows. I am even worse than a beginner who doesn't know a thing about Ubuntu, so please be patient and help me out.

    Read the article

  • Does using structured data semantic LocalBusiness schema markup work for local EMD URLs?

    - by ElHaix
    Based on what I have read about Google's recent Panda and Penguin updates, I'm getting the impression that using semantic markup may help improve SEO results. On an EMD (exact match domain) site, that may have been hit, we list location-based products. We are now going to be adding an itemtype="http://schema.org/Product" to each product, with relevant details. However, that product may be available in Los Angeles and also appear in a Seattle results page. We could add a LocalBusiness item type on each geo page to define the geo location for that page, although the definition states: "A particular physical business or branch of an organization. Examples of LocalBusiness include a restaurant, a particular branch of a restaurant chain, a branch of a bank, a medical practice, a club, a bowling alley, etc." We could also use the location property, which would simply include the city/state details. I realize that this looks like it is meant for a physical location; however, could this be done without seeming black-hat?

    Read the article

  • Swiss Re increases data warehouse performance and deploys in record time

    - by KLaker
    Great information on yet another data warehouse deployment on Exadata.

    A little background on Swiss Re: in 2002, Swiss Re established a data warehouse for its client markets and products to gather reinsurance information across all organizational units into an integrated structure. The data warehouse provided the basis for reporting at the group level with drill-down capability to individual contracts, while facilitating application integration and data exchange by using common data standards. Initially focusing on property and casualty reinsurance information only, it now includes life and health reinsurance, insurance, and nonlife insurance information.

    Key highlights of the benefits that Swiss Re achieved by using Exadata:
    - Reduced the time to feed the data warehouse and generate data marts by 58%
    - Reduced average runtime by 24% for standard reports, comfortably loading two data warehouse refreshes per day with incremental feeds
    - Freed up technical experts by significantly minimizing time spent on tuning activities

    Most importantly, this was one of the fastest project deployments in Swiss Re's history. They went from installation to production in just four months! What is truly surprising is that it only took two weeks between power-on and testing the machine with full data volumes! Business teams at Swiss Re are now able to fully exploit up-to-date analytics across property, casualty, life, health insurance, and reinsurance lines to identify successful products.

    These points are highlighted in the following quotes from Dr. Stephan Gutzwiller, Head of Data Warehouse Services at Swiss Re:

    "We were operating a complete Oracle stack, including servers, storage area network, operating systems, and databases that was well optimized and delivered very good performance over an extended period of time. When a hardware replacement was scheduled for 2012, Oracle Exadata was a natural choice—and the performance increase was impressive. It enabled us to deliver analytics to our internal customers faster, without hiring more IT staff."

    "The high quality data that is readily available with Oracle Exadata gives us the insight and agility we need to cater to client needs. We also can continue re-engineering to keep up with the increasing demand without having to grow the organization. This combination creates excellent business value."

    Our full press release is available here: http://www.oracle.com/us/corporate/customers/customersearch/swiss-re-1-exadata-ss-2050409.html. If you want more information about how Exadata can increase the performance of your data warehouse, visit our home page: http://www.oracle.com/us/products/database/exadata-database-machine/overview/index.html

    Read the article

  • Free Webinar - Using Enterprise Data Integration Dashboards

    - by andyleonard
    Join Kent Bradshaw and me as we present Using Enterprise Data Integration Dashboards on Tuesday, 11 Dec 2012, at 10:00 AM ET! If data is the life of the modern organization, data integration is the heart of an enterprise. Data circulation is vital. Data integration dashboards provide enterprise ETL (Extract, Transform, and Load) teams with near-real-time status, supported by historical performance analysis. Join Linchpins Kent Bradshaw and Andy Leonard as they demonstrate and discuss the benefits of data...(read more)

    Read the article

  • At what point should data be sent back to server?

    - by whamsicore
    A good example would be the Stack Exchange "rate" button. When a post is upvoted the arrow changes color immediately. However, there is a grace period in which one can edit one's vote decision (oops! voted by mistake?). Is the upvote action processed immediately, or is it only processed after a set time period, or when the user leaves the page? How exactly is this rating processed? What is the standard for handling dynamic page edits (e.g. Stack Exchange ratings, Facebook posts)?
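
    One common pattern behind this kind of behaviour is optimistic UI plus a deferred, cancellable commit: the control updates instantly, and the server call is only sent once a short grace period passes without the user reversing the action. This is only a sketch of the idea in C#; the class name, the 5-second window and the placeholder send method are all hypothetical, not taken from any particular site.

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class VoteSender
        {
            private CancellationTokenSource _pending;

            // Called from the click handler: the UI has already flipped the arrow colour.
            public void QueueVote(int postId, int delta)
            {
                _pending?.Cancel();                   // user re-clicked: drop the earlier send
                _pending = new CancellationTokenSource();
                var token = _pending.Token;

                Task.Run(async () =>
                {
                    try
                    {
                        await Task.Delay(TimeSpan.FromSeconds(5), token);  // grace period for "oops"
                        await SendToServerAsync(postId, delta);            // only now does the server hear about it
                    }
                    catch (OperationCanceledException)
                    {
                        // vote was undone or changed before the deadline; nothing is sent
                    }
                });
            }

            // Placeholder for the real HTTP call.
            private Task SendToServerAsync(int postId, int delta)
            {
                Console.WriteLine($"POST /vote?post={postId}&delta={delta}");
                return Task.CompletedTask;
            }
        }

    A site could equally send the vote immediately and let the server treat a reversal inside the window as a correction; the grace period just decides whether the buffering happens on the client or the server.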

    Read the article

  • Visit our Consolidated List of Mandatory Project Costing Code and Data Fixes

    - by SherryG-Oracle
    Projects Support has published a document with a consolidated listing of mandatory code and data fixes for Project Costing. Generic Data Fix (GDF) patches are created by development to fix data issues caused by bugs/issues in the application code. The GDF patches are released for download via My Oracle Support, and are then referenced in My Oracle Support documents and by Support to provide data fixes for known code fix issues. Consolidated root cause code fix and generic data fix patches will be superseded whenever any new version is created. These patches fix a number of critical code and data issues identified in the Project Costing flow.

    This document contains a consolidated list of code and data fixes for Project Costing. The note lists the following details:
    - Note ID
    - Component
    - Type (code or data)
    - Abstract
    - Patch

    Visit Doc ID 1538822.1 today!

    Read the article

  • Data Structures: What are some common examples of problems where "buffers" come into action?

    - by Dark Templar
    I was just wondering if there were some "standard" examples that everyone uses as a basis for explaining the nature of a problem that requires the use of a buffer. What are some well-known problems in the real world that can see great benefits from using a buffer? Also, a little background or explanation as to why the problem benefits from using a buffer, and how the buffer would be implemented, would be insightful for understanding the concept!
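
    One textbook case is a producer and a consumer that run at different speeds (keyboard input vs. the program reading it, disk reads vs. network writes): a fixed-size circular (ring) buffer absorbs short bursts so neither side has to block on every single item. Here is a minimal, illustrative C# sketch of such a ring buffer; it is not from the question, just one common way the structure is implemented.

        // Fixed-capacity circular buffer: writes wrap around the array,
        // reads trail behind, and the count tracks how full it is.
        class RingBuffer<T>
        {
            private readonly T[] _items;
            private int _head;   // next slot to write
            private int _tail;   // next slot to read
            private int _count;

            public RingBuffer(int capacity) { _items = new T[capacity]; }

            public bool TryWrite(T item)
            {
                if (_count == _items.Length) return false;   // full: producer must wait or drop
                _items[_head] = item;
                _head = (_head + 1) % _items.Length;
                _count++;
                return true;
            }

            public bool TryRead(out T item)
            {
                if (_count == 0) { item = default(T); return false; }  // empty: consumer waits
                item = _items[_tail];
                _tail = (_tail + 1) % _items.Length;
                _count--;
                return true;
            }
        }

    The same shape shows up in I/O buffering, audio/video playback (jitter buffers) and message queues: the buffer trades a little memory and latency for smoother throughput when the two sides cannot be kept in lockstep.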

    Read the article

  • What processes would make the selling of a hard drive that previously held sensitive data justifiable? [closed]

    - by user12583188
    Possible Duplicate: Securely erasing all data from a hard drive

    In my personal collection are an increasing number of relatively new drives, only put on the shelf due to upgrades. In the past I have never sold hard drives with used machines, for fear of having the encrypted password databases that have been stored on them compromised, but as their numbers increase I find myself more tempted to do so (due to the $$$ I know they're worth on the used market). What tools, then, exist to make the recovery of data from said drives difficult enough that selling them could be justified? Another way of saying this would be: what tools/methods exist for making attempts at recovery of any data previously stored on a certain drive impractical? I assume that it is always possible to recover data from a drive that is in working order. I also assume there are some methods for preventing recovery of data, given a program called DBAN, and one particular feature in Mac OS X that deals with permanently deleting data from a disk.

    Read the article

  • Associate a texture to an object (from a data-model, not graphical point of view).

    - by Raveline
    I'm writing a roguelike where objects and floors can be made of different materials. For instance, let's say we can have a wooden chair, an iron chair, a golden chair, and so on. I've got an Object class (I know, the name is terrible), which more or less uses a composite pattern, and a Material class. Materials have different important properties (noise, color...). For the time being, there are 5 different instances of materials, created at the initialization of the game. How would you connect an instance of Object with one of the 5 instances of materials? I see three simple solutions:
    - Using a pointer. Simple and brutal.
    - Using an integer material-id, then getting the material out of a table when the engine manipulates the object for various purposes (display, attack analysis, etc.). Not very beautiful, I think, and not very flexible.
    - Using an integer material-id, then getting the material out of a std::map. A bit more flexible, but still not perfect.
    Do you see other possibilities? If not, what would you choose (and why)? Thanks in advance!
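
    To make options 2 and 3 concrete: an integer (or enum) material id plus a small registry filled once at startup keeps game objects lightweight, and the engine resolves the id on demand. The question is C++, but the pattern is the same in any language; the brief sketch below is in C#, and all the property names and sample values are invented for illustration.

        using System.Collections.Generic;

        enum MaterialId { Wood, Iron, Gold, Stone, Glass }

        // Invented properties, just to give the registry something to hold.
        sealed class Material
        {
            public MaterialId Id { get; private set; }
            public string Noise { get; private set; }
            public string Color { get; private set; }
            public Material(MaterialId id, string noise, string color)
            {
                Id = id; Noise = noise; Color = color;
            }
        }

        // Options 2/3: objects store only a MaterialId; the engine looks it up when needed.
        static class MaterialRegistry
        {
            private static readonly Dictionary<MaterialId, Material> _materials =
                new Dictionary<MaterialId, Material>
                {
                    { MaterialId.Wood, new Material(MaterialId.Wood, "thud",  "brown")  },
                    { MaterialId.Iron, new Material(MaterialId.Iron, "clang", "grey")   },
                    { MaterialId.Gold, new Material(MaterialId.Gold, "clink", "yellow") },
                };

            public static Material Get(MaterialId id) { return _materials[id]; }
        }

        class GameObject
        {
            public MaterialId Material;   // cheap to copy, serialise and compare
            public string DisplayColor { get { return MaterialRegistry.Get(Material).Color; } }
        }

    The pointer option (1) is the quickest to write and is fine while materials live for the whole game; the id-plus-registry options mainly buy easier serialisation and the freedom to swap material data without touching every object.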

    Read the article

  • Populate a UIPickerView with results from a Core Data DB using an NSArray

    - by Chris
    I am trying to populate a UIPickerView with the results of an NSFetchRequest. The results of the NSFetchRequest are then stored in an NSArray. I am using Core Data to interact with the SQLite DB. I have a simple class file that contains a UIPickerView that is associated with a storyboard scene in my project. The header file for the class looks like the following, ViewControllerUsers.h:

        #import <UIKit/UIKit.h>
        #import "AppDelegate.h"

        @interface ViewControllerUsers : UIViewController <NSFetchedResultsControllerDelegate, UIPickerViewDelegate, UIPickerViewDataSource>
        {
            NSArray *dictionaries;
        }

        @property (nonatomic, strong) NSFetchedResultsController *fetchedResultsController;

        // Core Data
        @property (strong, nonatomic) NSManagedObjectContext *managedObjectContext;

        @property (nonatomic, strong) NSArray *users;
        @property (strong, nonatomic) IBOutlet UIPickerView *uiPickerViewUsers;
        @property (weak, nonatomic) IBOutlet UIBarButtonItem *btnDone;
        @property (weak, nonatomic) IBOutlet UIButton *btnChangePin;

        // added for testing purposes
        @property (nonatomic, strong) NSArray *usernames;

        - (IBAction)dismissScene:(id)sender;
        - (IBAction)changePin:(id)sender;

        @end

    The implementation file looks like the following, ViewControllerUsers.m:

        #import "ViewControllerUsers.h"

        @interface ViewControllerUsers ()
        @end

        @implementation ViewControllerUsers

        // Core Data
        @synthesize managedObjectContext = _managedObjectContext;
        @synthesize uiPickerViewUsers = _uiPickerViewUsers;
        @synthesize usernames = _usernames;

        - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
        {
            self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
            if (self) {
                // Custom initialization
            }
            return self;
        }

        - (void)viewDidLoad
        {
            [super viewDidLoad];
            // Do any additional setup after loading the view.

            // Core Data
            if (_managedObjectContext == nil) {
                _managedObjectContext = [(AppDelegate *)[[UIApplication sharedApplication] delegate] managedObjectContext];
                NSLog(@"After _managedObjectContext: %@", _managedObjectContext);
            }

            NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Account"];
            NSEntityDescription *entity = [NSEntityDescription entityForName:@"Account" inManagedObjectContext:_managedObjectContext];
            request.resultType = NSDictionaryResultType;
            request.propertiesToFetch = [NSArray arrayWithObject:[[entity propertiesByName] objectForKey:@"username"]];
            request.returnsDistinctResults = YES;
            _usernames = [_managedObjectContext executeFetchRequest:request error:nil];
            NSLog (@"names: %@",_usernames);
        }

        -(NSInteger)numberOfComponentsInPickerView:(UIPickerView *)pickerView
        {
            // One column
            return 1;
        }

        -(NSInteger)pickerView:(UIPickerView *)pickerView numberOfRowsInComponent:(NSInteger)component
        {
            // set number of rows
            return _usernames.count;
        }

        -(NSString *)pickerView:(UIPickerView *)pickerView titleForRow:(NSInteger)row forComponent:(NSInteger)component
        {
            // set item per row
            return [_usernames objectAtIndex:row];
        }

        - (void)didReceiveMemoryWarning
        {
            [super didReceiveMemoryWarning];
            // Dispose of any resources that can be recreated.
        }

        - (void)viewDidUnload
        {
            [self setBtnDone:nil];
            [self setUiPickerViewUsers:nil];
            [self setBtnChangePin:nil];
            [super viewDidUnload];
        }

        - (IBAction)dismissScene:(id)sender
        {
            [self dismissModalViewControllerAnimated:YES];
        }

        - (IBAction)changePin:(id)sender
        {
        }

        @end

    The current code is causing the app to crash, but the NSLog does show the results of the NSFetchRequest in the NSArray. I currently think that I am not formatting the results of the NSFetchRequest in the NSArray properly, if I had to take a guess. The crash log looks like the following:

        2013-06-26 16:49:24.219 KegCop[41233:c07] names: (
            { username = blah; },
            { username = chris; },
            { username = root; }
        )
        2013-06-26 16:49:24.223 KegCop[41233:c07] -[NSKnownKeysDictionary1 isEqualToString:]: unrecognized selector sent to instance 0xe54d9a0
        2013-06-26 16:49:24.223 KegCop[41233:c07] Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[NSKnownKeysDictionary1 isEqualToString:]: unrecognized selector sent to instance 0xe54d9a0'
        First throw call stack:

    Read the article

  • R concentrating data frame

    - by user1631503
    I have a data frame like this:

        > X_com
          Day_1 Day_2 Day_3 Day_4 Day_5 Day_6 Day_7 Day_8 Day_9 Day_10
        1     0     0     0     0     0     0     0     0     0      1
        2     0     0     0     0     0     0     0     0     0      0
        3     0     0     0     0     0     0     0     0     0      0
        4     0     0     0     0     0     0     0     0     0      0
        5     0     0     0     0     0     0     0     0     0      0
        6     0     0     0     0     0     0     0     0     0      0
        7     0     0     0     0     0     0     0     0     0      0
        8     0     0     0     0     0     0     0     0     0      0

    I need to concentrate all the values into one column and add another column with "1;", so I did this:

        > X_new = matrix(1, 8, 2)
        > X_new[1,] = paste(X_com[1,1], X_com[1,2], X_com[1,3], X_com[1,4], X_com[1,5], X_com[1,6], X_com[1,7], X_com[1,8], X_com[1,9], X_com[1,10], sep="")
        > X_new[2,] = paste(X_com[2,1], X_com[2,2], X_com[2,3], X_com[2,4], X_com[2,5], X_com[2,6], X_com[2,7], X_com[2,8], X_com[2,9], X_com[2,10], sep="")
        > X_new[3,] = paste(X_com[3,1], X_com[3,2], X_com[3,3], X_com[3,4], X_com[3,5], X_com[3,6], X_com[3,7], X_com[3,8], X_com[3,9], X_com[3,10], sep="")
        > X_new[4,] = paste(X_com[4,1], X_com[4,2], X_com[4,3], X_com[4,4], X_com[4,5], X_com[4,6], X_com[4,7], X_com[4,8], X_com[4,9], X_com[4,10], sep="")
        > X_new[5,] = paste(X_com[5,1], X_com[5,2], X_com[5,3], X_com[5,4], X_com[5,5], X_com[5,6], X_com[5,7], X_com[5,8], X_com[5,9], X_com[5,10], sep="")
        > X_new[6,] = paste(X_com[6,1], X_com[6,2], X_com[6,3], X_com[6,4], X_com[6,5], X_com[6,6], X_com[6,7], X_com[6,8], X_com[6,9], X_com[6,10], sep="")
        > X_new[7,] = paste(X_com[7,1], X_com[7,2], X_com[7,3], X_com[7,4], X_com[7,5], X_com[7,6], X_com[7,7], X_com[7,8], X_com[7,9], X_com[7,10], sep="")
        > X_new[8,] = paste(X_com[8,1], X_com[8,2], X_com[8,3], X_com[8,4], X_com[8,5], X_com[8,6], X_com[8,7], X_com[8,8], X_com[8,9], X_com[8,10], sep="")
        > X_new[1:8,2] = "1;"
        > as.data.frame(X_new)
                  V1 V2
        1 0000000001 1;
        2 0000000000 1;
        3 0000000000 1;
        4 0000000000 1;
        5 0000000000 1;
        6 0000000000 1;
        7 0000000000 1;
        8 0000000000 1;

    I believe there's definitely a faster way of achieving this, but I have no clue. The other problem is that I have over a thousand data frames like this that need to be concentrated. I'm still learning how to loop these repetitive steps but am progressing quite slowly. If the original data frames were named uniquely, does that mean I have no choice but to work on each of them individually? Thank you in advance.

    Read the article

  • Java: immutability, overuse of stack -- better data structure?

    - by HH
    I overused HashSets but it was slow, then changed to Stacks and got a speed boost. Poly's reply uses Collections.emptyList() as an immutable list, cutting out excess null-checkers. There is no Collections.emptyStack(). Combining the words stack and immutability, from the last experiences, gets "immutable stack" (probably not related to functional programming). The Java 5 API for the List interface shows that Stack is an implementing class alongside ArrayList, here. The java.util.concurrent package does not have any immutable Stack data structure. The first point hints at misusing Stack. The lack of immutability in the last, and Poly's book recommendation, lead the way to List. Something very primitive, fast, no extra layers, with methods like emptyThing().

    Overuse of Stack and where I use it:

        DataFile.java:     public Stack<DataFile> files;
        FileObject.java:   public Stack<String> printViews = new Stack<String>();
        FileObject.java:   // private static Stack<Object> getFormat(File f){return (new Format(f)).getFormat();}
        Format.java:       private Stack<Object> getLine(File[] fs,String s){return wF;}
        Format.java:       private Stack<Object> getFormat(){return format;}
        Positions.java:    public static Stack<Integer[]> getPrintPoss(String s,File f,Integer maxViewPerF)
        Positions.java:    Stack<File> possPrint = new Stack<File>();
        Positions.java:    Stack<Integer> positions=new Stack<Integer>();
        Record.java:       private String getFormatLine(Stack<Object> st)
        Record.java:       Stack<String> lines=new Stack<String>();
        SearchToUser.java: public static final Stack<File> allFiles = findf.getFs();
        SearchToUser.java: public static final Stack<File> allDirs = findf.getDs();
        SearchToUser.java: private Stack<Integer[]> positionsPrint=new Stack<Integer[]>();
        SearchToUser.java: public Stack<String> getSearchResults(String s, Integer countPerFile, Integer resCount)
        SearchToUser.java: Stack<File> filesToS=Fs2Word.getFs2W(s,50);
        SearchToUser.java: Stack<String> rs=new Stack<String>();
        View.java:         public Stack<Integer[]> poss = new Stack<Integer[4]>();
        View.java:         public static Stack<String> getPrintViewsFileWise(String s,Object[] df,Integer maxViewsPerF)
        View.java:         Stack<String> substrings = new Stack<String>();
        View.java:         private Stack<String> printViews=new Stack<String>();
        View.java:         MatchView(Stack<Integer> pss,File f,Integer maxViews)
        View.java:         Stack<String> formatFile;
        View.java:         private Stack<Search> files;
        View.java:         private Stack<File> matchingFiles;
        View.java:         private Stack<String> matchViews;
        View.java:         private Stack<String> searchMatches;
        View.java:         private Stack<String> getSearchResults(Integer numbResults)

    Easier with List: allDirs and allFiles are now looped with push, but List has more powerful methods such as addAll.

    [OLD] From Stack to some immutable data structure

    How do I get an immutable Stack data structure? Can I box it with a List? Should I switch my current implementations from Stacks to Lists to get immutability? Which immutable data structure is very fast, with about the same execution time as Stack?

    No immutability to Stack with final:

        import java.io.*;
        import java.util.*;

        public class TestStack {
            public static void main(String[] args) {
                final Stack<Integer> test = new Stack<Integer>();
                Stack<Integer> test2 = new Stack<Integer>();
                test.push(37707);
                test2.push(80437707);
                // WHY is there not an error when removing an element
                // from a FINAL stack?
                System.out.println(test.pop());
                System.out.println(test2.pop());
            }
        }
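
    For reference, the "immutable stack" the question is reaching for is usually a persistent linked stack: push and pop return a new stack that shares structure with the old one, and the old one is never modified (final in Java only fixes the reference, not the contents). The question is Java, where a hand-rolled cons list or a functional-collections library plays this role; the sketch below shows the shape of such a structure in C#, purely as an illustration.

        using System;

        // Persistent stack: every Push/Pop returns a new stack and leaves the
        // original untouched, so a reference can safely be shared or made read-only.
        public sealed class PersistentStack<T>
        {
            public static readonly PersistentStack<T> Empty = new PersistentStack<T>(default(T), null);

            private readonly T _head;
            private readonly PersistentStack<T> _tail;

            private PersistentStack(T head, PersistentStack<T> tail)
            {
                _head = head;
                _tail = tail;
            }

            public bool IsEmpty { get { return _tail == null; } }

            public PersistentStack<T> Push(T value)
            {
                return new PersistentStack<T>(value, this);
            }

            public T Peek()
            {
                if (IsEmpty) throw new InvalidOperationException("stack is empty");
                return _head;
            }

            public PersistentStack<T> Pop()
            {
                if (IsEmpty) throw new InvalidOperationException("stack is empty");
                return _tail;
            }
        }

    Usage: var s = PersistentStack<int>.Empty.Push(1).Push(2); s.Pop() returns the one-element stack while s itself still holds both values, which is exactly the guarantee that final on a java.util.Stack does not give you.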

    Read the article

  • iPhone - Core Data crash on migration

    - by Sergey Zenchenko
    I have a problem: when I install the app from Xcode everything works, but if I build the app and install it from iTunes I get an error with the database at launch. This happens only when I have changes in the Core Data model and need to migrate to a new version. At the first launch it crashes with this message:

        Thread 0:
        0   libSystem.B.dylib  0x00034588 pwrite + 20
        1   libsqlite3.dylib   0x000505ec _sqlite3_purgeEligiblePagerCacheMemory + 2808
        2   libsqlite3.dylib   0x000243d8 sqlite3_backup_init + 7712
        3   libsqlite3.dylib   0x000244ac sqlite3_backup_init + 7924
        4   libsqlite3.dylib   0x0000d418 sqlite3_file_control + 4028
        5   libsqlite3.dylib   0x000228b4 sqlite3_backup_init + 764
        6   libsqlite3.dylib   0x00022dd0 sqlite3_backup_init + 2072
        7   libsqlite3.dylib   0x000249a8 sqlite3_backup_init + 9200
        8   libsqlite3.dylib   0x00029800 sqlite3_open16 + 11360
        9   libsqlite3.dylib   0x0002a200 sqlite3_open16 + 13920
        10  libsqlite3.dylib   0x0002ab84 sqlite3_open16 + 16356
        11  libsqlite3.dylib   0x00049418 sqlite3_prepare16 + 54056
        12  libsqlite3.dylib   0x00002940 sqlite3_step + 44
        13  CoreData           0x00011958 _execute + 44
        14  CoreData           0x000113e0 -[NSSQLiteConnection execute] + 696
        15  CoreData           0x000994be -[NSSQLConnection prepareAndExecuteSQLStatement:] + 26
        16  CoreData           0x000be14c -[_NSSQLiteStoreMigrator performMigration:] + 244
        17  CoreData           0x000b6c60 -[NSSQLiteInPlaceMigrationManager migrateStoreFromURL:type:options:withMappingModel:toDestinationURL:destinationType:destinationOptions:error:] + 1040
        18  CoreData           0x000aceb0 -[NSStoreMigrationPolicy(InternalMethods) migrateStoreAtURL:toURL:storeType:options:withManager:error:] + 92
        19  CoreData           0x000ad6f0 -[NSStoreMigrationPolicy migrateStoreAtURL:withManager:metadata:options:error:] + 72
        20  CoreData           0x000ac9ee -[NSStoreMigrationPolicy(InternalMethods) _gatherDataAndPerformMigration:] + 880
        21  CoreData           0x0000965c -[NSPersistentStoreCoordinator addPersistentStoreWithType:configuration:URL:options:error:] + 1328

    On subsequent launches the app doesn't load data from the database.

    Read the article

  • WCF Data Service Exception

    - by Ravi
    Hi, I am working in C#.NET with ADO.NET Data Services (WCF Data Services). I am trying to update one record in a relational table, and when I reach context.SetLink() I get the exception "The context is not currently tracking the entity". I don't know how to solve this problem. My code is specified below:

        LogNote dbLogNote = logNote;
        LogSubSession dbLogSubSession = (from p in context.LogSubSession
                                         where p.UID == logNote.SubSessionId
                                         select p).First<LogSubSession>() as LogSubSession;
        context.AddToLogNote(dbLogNote);
        dbLogNote.LogSubSession = dbLogSubSession;
        context.SetLink(dbLogNote, "LogSubSession", dbLogSubSession);
        context.SaveChanges();

    Here LogSubSession is the primary table and LogNote is the foreign (child) table. I am updating data in the foreign table based on the primary key table. Thanks

    Read the article

  • C# System.Data.SQLite Designer Code

    - by Nathan
    I've been messing around with the SQLite Designer in Visual Studio 2008 and I have noticed that when I use the generated Insert/Update statements they run extremely slowly. Example: I have a data table with four columns and 5,700 rows, and it took ~5 minutes to insert the data into the database table. However, I wrote my own database connection and insert methods using parameters and a single transaction, and the same 5,700 rows were inserted in under 1 second. Why is the generated code so slow, and what is the benefit of even using it? Thanks. Nathan
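
    For context, the hand-rolled approach described above usually looks something like the following: one open connection, one explicit transaction, and one parameterised command reused for every row, so SQLite commits (and syncs the file) once for the whole batch instead of once per INSERT. This is only a sketch with System.Data.SQLite; the table name, column names and string-typed parameters are hypothetical, and overloads can vary slightly between provider versions.

        using System.Data;
        using System.Data.SQLite;

        static class FastInsert
        {
            // Insert many rows quickly: one transaction, one reusable parameterised command.
            public static void BulkInsert(string dbPath, string[][] rows)
            {
                using (var conn = new SQLiteConnection("Data Source=" + dbPath))
                {
                    conn.Open();
                    using (var tx = conn.BeginTransaction())
                    using (var cmd = new SQLiteCommand(
                        "INSERT INTO MyTable (ColA, ColB, ColC, ColD) VALUES (@a, @b, @c, @d)", conn))
                    {
                        cmd.Transaction = tx;
                        var pa = new SQLiteParameter("@a", DbType.String);
                        var pb = new SQLiteParameter("@b", DbType.String);
                        var pc = new SQLiteParameter("@c", DbType.String);
                        var pd = new SQLiteParameter("@d", DbType.String);
                        cmd.Parameters.Add(pa);
                        cmd.Parameters.Add(pb);
                        cmd.Parameters.Add(pc);
                        cmd.Parameters.Add(pd);

                        foreach (string[] row in rows)
                        {
                            pa.Value = row[0]; pb.Value = row[1]; pc.Value = row[2]; pd.Value = row[3];
                            cmd.ExecuteNonQuery();   // the statement is prepared once and re-executed per row
                        }
                        tx.Commit();                 // one commit for the whole batch
                    }
                }
            }
        }

    A common explanation for the slow designer output is that each generated insert runs in its own implicit transaction, so every row pays the full commit cost; the generated adapters mainly buy you typed DataSet plumbing rather than speed.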

    Read the article

  • Core Data Error "Fetch Request must have an entity"

    - by Graeme
    Hi, I've attempted to add the TopSongs parser and Core Data files into my application, and it now builds successfully, with no errors or warning messages. However, as soon as the app loads, it crashes, giving the following reason: *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'executeFetchRequest:error: A fetch request must have an entity.' I have renamed all files, including the .xcdatamodel file. Could this be the problem (the renaming of the .xcdatamodel)? I'm assuming this error means that no data can be found. Thanks.

    Read the article

  • WCF REST vs. ADO.NET Data Services

    - by ray247
    Hi there, Could someone compare and contrast WCF REST services vs. ADO.NET Data Services? What is the difference, and when should you use which? Thanks, Ray.

    Edit: thanks to the first answer; just to give a bit of background on what I'm looking to do: I have a web app I plan to put in the cloud (someday), and the DAL is built with ADO.NET Entity Framework. I need to figure out which web service data access technology would best fit my case.

    Read the article
