Search Results

Search found 65872 results on 2635 pages for 'core data migration'.

Page 34/2635 | < Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >

  • Five stars of open data - example and review

    - by Joe
    (there may be a better-suited SE site for this question, so feel free to shift it) I have some data I'd like to make open to the public - it's a synthesis of some related data retrieved from freedom of information requests over the last year. The data itself is at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.csv or, for fans of Excel, at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.xlsx . It's no more than a table with about five columns. I'd like to make this properly open data, so I was looking at the 5-star deployment scheme for Open Data. Much of it is fine, but I'm confused towards the end and could do with an explanation from people who know the answers. So to achieve the star levels I need: "make your stuff available on the Web (whatever format) under an open license" - trivial - all I have to do is put the notes up on the page that will give the provenance of the data. "make it available as structured data (e.g., Excel instead of image scan of a table)"… done… "use non-proprietary formats (e.g., CSV instead of Excel)" - done… "use URIs to identify things, so that people can point at your stuff" - this is where I start to get a bit hazy - does this mean there should be a URI for every line in the table? "link your data to other data to provide context" - this isn't massively clear to me - does it mean to give the provenance of the data? One column of the data I've put out is a link to where the data came from - is that the sort of thing we're looking at? Any and all information and answers welcome… EDIT - or if anyone wants to recommend an SE or other place to ask the question, that would be cool...

    Read the article

  • Using a "white list" for extracting terms for Text Mining, Part 2

    - by [email protected]
    In my last post, we set the groundwork for extracting specific tokens from a white list using a CTXRULE index. In this post, we will populate a table with the extracted tokens and produce a case table suitable for clustering with Oracle Data Mining. Our corpus of documents will be stored in a database table that is defined as create table documents(id NUMBER, text VARCHAR2(4000)); However, any suitable Oracle Text-accepted data type can be used for the text. We then create a table to contain the extracted tokens. The id column contains the unique identifier (or case id) of the document. The token column contains the extracted token. Note that a given document may have many tokens, so there will be one row per token for a given document. create table extracted_tokens (id NUMBER, token VARCHAR2(4000)); The next step is to iterate over the documents and extract the matching tokens using the index and insert them into our token table. We use the MATCHES function for matching the query_string from my_thesaurus_rules with the text. DECLARE cursor c2 is select id, text from documents; BEGIN for r_c2 in c2 loop insert into extracted_tokens select r_c2.id id, main_term token from my_thesaurus_rules where matches(query_string, r_c2.text)>0; end loop; END; Now that we have the tokens, we can compute the term frequency - inverse document frequency (TF-IDF) for each token of each document. create table extracted_tokens_tfidf as with num_docs as (select count(distinct id) doc_cnt from extracted_tokens), tf as (select a.id, a.token, a.token_cnt/b.num_tokens token_freq from (select id, token, count(*) token_cnt from extracted_tokens group by id, token) a, (select id, count(*) num_tokens from extracted_tokens group by id) b where a.id=b.id), doc_freq as (select token, count(*) overall_token_cnt from extracted_tokens group by token) select tf.id, tf.token, token_freq * ln(doc_cnt/df.overall_token_cnt) tf_idf from num_docs, tf, doc_freq df where df.token=tf.token; From the WITH clause, the num_docs query simply counts the number of documents in the corpus. The tf query computes the term (token) frequency by computing the number of times each token appears in a document and divides that by the number of tokens found in the document. The doc_freq query counts the number of times the token appears overall in the corpus. In the SELECT clause, we compute the tf_idf. Next, we create the nested table required to produce one record per case, where a case corresponds to an individual document. Here, we COLLECT all the tokens for a given document into the nested column extracted_tokens_tfidf_1. CREATE TABLE extracted_tokens_tfidf_nt NESTED TABLE extracted_tokens_tfidf_1 STORE AS extracted_tokens_tfidf_tab AS select id, cast(collect(DM_NESTED_NUMERICAL(token,tf_idf)) as DM_NESTED_NUMERICALS) extracted_tokens_tfidf_1 from extracted_tokens_tfidf group by id; To build the clustering model, we create a settings table and then insert the various settings.
    Most notable are the number of clusters (20), using cosine distance which is better for text, turning off auto data preparation since the values are ready for mining, the number of iterations (20) to get a better model, and the split criterion of size for clusters that are roughly balanced in number of cases assigned. CREATE TABLE km_settings (setting_name VARCHAR2(30), setting_value VARCHAR2(30)); BEGIN INSERT INTO km_settings (setting_name, setting_value) VALUES (dbms_data_mining.clus_num_clusters, 20); INSERT INTO km_settings (setting_name, setting_value) VALUES (dbms_data_mining.kmns_distance, dbms_data_mining.kmns_cosine); INSERT INTO km_settings (setting_name, setting_value) VALUES (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_off); INSERT INTO km_settings (setting_name, setting_value) VALUES (dbms_data_mining.kmns_iterations, 20); INSERT INTO km_settings (setting_name, setting_value) VALUES (dbms_data_mining.kmns_split_criterion, dbms_data_mining.kmns_size); COMMIT; END; With this in place, we can now build the clustering model. BEGIN DBMS_DATA_MINING.CREATE_MODEL( model_name => 'TEXT_CLUSTERING_MODEL', mining_function => dbms_data_mining.clustering, data_table_name => 'extracted_tokens_tfidf_nt', case_id_column_name => 'id', settings_table_name => 'km_settings'); END; To generate cluster names from this model, check out my earlier post on that topic.
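    Hedged follow-up (not part of the original post): once CREATE_MODEL completes, the cluster assignments can be read back with Oracle's SQL scoring functions. A minimal sketch run against the nested case table built above; the column aliases are illustrative.

        -- Sketch: report the most likely cluster (and its probability) for each document case.
        SELECT id,
               CLUSTER_ID(TEXT_CLUSTERING_MODEL USING *)          cluster_id,
               CLUSTER_PROBABILITY(TEXT_CLUSTERING_MODEL USING *) cluster_prob
          FROM extracted_tokens_tfidf_nt
         ORDER BY id;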

    Read the article

  • Multiple views and source list in a Core Data app

    - by Ellie P.
    I'm working on my first major Cocoa app for an undergraduate research project. The application is document-based and uses Core Data. One of the entities is an abstract entity, Page. Page is the parent of several types of pages: i.e. PageWithHeaderAndFooter, PageWithTwoColumns, BasicPage, etc. Page has attributes, such as title and author, that all pages have in common. Each specific type of page has a certain number of layout blocks (PageWithHeaderAndFooter has three: header, footer, body. BasicPage has one: body. etc.) Additionally, all Page subclasses define layout-specific implementations of certain methods. The other relevant entity is Style, which defines the visual look of a Page. (Think of Pages as HTML and Style as CSS.) I would like my app to have an iTunes/Mail-like source list with sections. (One section would be Pages, the other would be Styles.) I have a pretty good idea how to do the sectioned source list (this was a great help). However, after hours of headbanging and fruitless googling, here's what I can't figure out. I want Pages and Styles listed in the source list, and when you select one of them, all of the relevant fields for that object to appear at the right (mostly NSTextViews, pop up menus, etc). I laid that out and did all of the bindings in Interface Builder. The problem is, if my source list contains different types of pages, how do I get a different view to display at the right depending on the type of page selected? For example, if a BasicPage is selected, I want just what you see above: the general page stuff and one NSTextView that corresponds to the one field body of BasicPage. But if I select a PageWithHeaderAndFooter, I want to display the general page stuff plus three NSTextViews (one for header, body, and footer.) If I have a Style selected, I want to display various pop up menus, color wells, etc. For the pages at least, we're only talking about one or more NSTextViews, each of which corresponds to a String attribute of the respective entity. How would you do this? Thank you for your help!
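    One hedged sketch of a common pattern for this (not from the question; the container and factory names below are hypothetical): keep one NSViewController subclass per Page subclass and per Style, watch the source list selection, and swap the embedded view based on the selected object's entity.

        // Sketch: swap the detail view according to the selected managed object's entity.
        // detailBox (an NSBox used as a container) and viewControllerForEntityName: are
        // hypothetical; each entity gets its own nib whose bindings go through representedObject.
        - (void)sourceListSelectionDidChange:(NSManagedObject *)selectedObject
        {
            NSString *entityName = [[selectedObject entity] name];          // e.g. @"BasicPage", @"Style"
            NSViewController *vc = [self viewControllerForEntityName:entityName];
            [vc setRepresentedObject:selectedObject];                       // nib bindings read this
            [detailBox setContentView:[vc view]];                           // replace the previous detail view
        }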

    Read the article

  • Crash in OS X Core Data Utility Tutorial

    - by vinogradov
    I'm trying to follow Apple's Core Data utility Tutorial. It was all going nicely, until... The tutorial uses a custom sub-class of NSManagedObject, called 'Run'. Run.h looks like this: #import <Foundation/Foundation.h> #import <CoreData/CoreData.h> @interface Run : NSManagedObject { NSInteger processID; } @property (retain) NSDate *date; @property (retain) NSDate *primitiveDate; @property NSInteger processID; @end Now, in Run.m we have an accessor method for the processID variable: - (void)setProcessID:(int)newProcessID { [self willChangeValueForKey:@"processID"]; processID = newProcessID; [self didChangeValueForKey:@"processID"]; } In main.m, we use functions to set up a managed object model and context, instantiate an entity called run, and add it to the context. We then get the current NSProcessInfo, in preparation for setting the processID of the run object. NSManagedObjectContext *moc = managedObjectContext(); NSEntityDescription *runEntity = [[mom entitiesByName] objectForKey:@"Run"]; Run *run = [[Run alloc] initWithEntity:runEntity insertIntoManagedObjectContext:moc]; NSProcessInfo *processInfo = [NSProcessInfo processInfo]; Next, we try to call the accessor method defined in Run.m to set the value of processID: [run setProcessID:[processInfo processIdentifier]]; And that's where it's crashing. The object run seems to exist (I can see it in the debugger), so I don't think I'm messaging nil; on the other hand, it doesn't look like the setProcessID: message is actually being received. I'm obviously still learning this stuff (that's what tutorials are for, right?), and I'm probably doing something really stupid. However, any help or suggestions would be gratefully received! ===MORE INFORMATION=== Following up on Jeremy's suggestions: The processID attribute in the model is set up like this: NSAttributeDescription *idAttribute = [[NSAttributeDescription alloc]init]; [idAttribute setName:@"processID"]; [idAttribute setAttributeType:NSInteger32AttributeType]; [idAttribute setOptional:NO]; [idAttribute setDefaultValue:[NSNumber numberWithInteger:-1]]; which seems a little odd; we are defining it as a scalar type, and then giving it an NSNumber object as its default value. In the associated class, Run, processID is defined as an NSInteger. Still, this should be OK - it's all copied directly from the tutorial. It seems to me that the problem is probably in there somewhere. By the way, the getter method for processID is defined like this: - (int)processID { [self willAccessValueForKey:@"processID"]; NSInteger pid = processID; [self didAccessValueForKey:@"processID"]; return pid; } and this method works fine; it accesses and unpacks the default int value of processID (-1). Thanks for the help so far!
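    For what it's worth, a hedged debugging sketch (not the tutorial's code): set the attribute through key-value coding instead of the hand-written scalar accessor, and let Core Data's NSNumber-based machinery handle the boxing. If this succeeds where setProcessID: crashes, the problem is in the custom accessor or the property declaration rather than in the model.

        // Sketch: bypass -setProcessID: and set the scalar attribute via KVC.
        // NSInteger32AttributeType values are stored as NSNumber, so box the pid first.
        NSProcessInfo *processInfo = [NSProcessInfo processInfo];
        NSNumber *pid = [NSNumber numberWithInt:[processInfo processIdentifier]];
        [run setValue:pid forKey:@"processID"];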

    Read the article

  • Core Animation performance on iPhone

    - by nico
    I'm trying to do some animations using Core Animation on the iPhone. I'm using CABasicAnimation on CALayer. It's a straightforward animation from a random place at the top of the screen to the bottom of the screen at random speed, and I have 30 elements that do the same animation continuously until another action happens. But the performance on the iPhone 3G is very sluggish when the animations start. The image is only 8k. Is this the right approach? How should I change it so it performs better? // image cached somewhere else. CGImageRef imageRef = [[UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:name ofType:@"png"]] CGImage]; - (void)animate:(NSTimer *)timer { int startX = round(random() % 320); float speed = 1 / round(random() % 100 + 2); CALayer *layer = [CALayer layer]; layer.name = @"layer"; layer.contents = imageRef; // cached image layer.frame = CGRectMake(0, 0, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef)); int width = layer.frame.size.width; int height = layer.frame.size.height; layer.frame = CGRectMake(startX, self.view.frame.origin.y, width, height); [effectLayer addSublayer:layer]; CGPoint start = CGPointMake(startX, 0); CGPoint end = CGPointMake(startX, self.view.frame.size.height); float repeatCount = 1e100; CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"position"]; animation.delegate = self; animation.fromValue = [NSValue valueWithCGPoint:start]; animation.toValue = [NSValue valueWithCGPoint:end]; animation.duration = speed; animation.repeatCount = repeatCount; animation.autoreverses = NO; animation.removedOnCompletion = YES; animation.fillMode = kCAFillModeForwards; [layer addAnimation:animation forKey:@"position"]; } The animations are fired off using an NSTimer. animationTimer = [NSTimer timerWithTimeInterval:0.2 target:self selector:@selector(animate:) userInfo:nil repeats:YES]; [[NSRunLoop currentRunLoop] addTimer:animationTimer forMode:NSDefaultRunLoopMode];
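    One hedged observation plus a sketch (not the asker's code): each timer tick adds a new sublayer whose animation repeats effectively forever and is never removed, so the sublayer count grows without bound, which alone will drag the 3G down. A common mitigation is to create a fixed pool of layers once and recycle them; layerPool and nextIndex below are hypothetical ivars.

        // Sketch: recycle a fixed pool of layers instead of adding a new sublayer every 0.2 s.
        static const NSUInteger kPoolSize = 30;
        CALayer *layer;
        if ([layerPool count] < kPoolSize) {
            layer = [CALayer layer];
            layer.contents = (id)imageRef;                   // cached image, as in the question
            [effectLayer addSublayer:layer];
            [layerPool addObject:layer];
        } else {
            layer = [layerPool objectAtIndex:nextIndex];     // reuse the oldest layer
            nextIndex = (nextIndex + 1) % kPoolSize;
            [layer removeAllAnimations];
        }
        // ...then reposition the layer and add the CABasicAnimation exactly as in the original code.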

    Read the article

  • Updating UISearchDisplayController with Core Data results using GCD

    - by Brian Halpin
    I'm having trouble displaying the results from Core Data in my UISearchDisplayController when I implement GCD. Without it, it works, but obviously blocks the UI. In my SearchTableViewController I have the following two methods: - (BOOL)searchDisplayController:(UISearchDisplayController *)controller shouldReloadTableForSearchString:(NSString *)searchString { // Tell the table data source to reload when text changes [self filterContentForSearchText:searchString]; // Return YES to cause the search result table view to be reloaded. return YES; } // Update the filtered array based on the search text -(void)filterContentForSearchText:(NSString*)searchText { // Remove all objects from the filtered search array [self.filteredLocationsArray removeAllObjects]; NSPredicate *predicate = [CoreDataMaster predicateForLocationUsingSearchText:@"Limerick"]; CoreDataMaster *coreDataMaster = [[CoreDataMaster alloc] init]; // Filter the array using NSPredicate self.filteredLocationsArray = [NSMutableArray arrayWithArray: [coreDataMaster fetchResultsFromCoreDataEntity:@"City" UsingPredicate:predicate]]; } You can probably guess that my problem is with returning the array from [coreDataMaster fetchResultsFromCoreDataEntity]. Below is the method: - (NSArray *)fetchResultsFromCoreDataEntity:(NSString *)entity UsingPredicate:(NSPredicate *)predicate { NSMutableArray *fetchedResults = [[NSMutableArray alloc] init]; dispatch_queue_t coreDataQueue = dispatch_queue_create("com.coredata.queue", DISPATCH_QUEUE_SERIAL); dispatch_async(coreDataQueue, ^{ NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entityDescription = [NSEntityDescription entityForName:entity inManagedObjectContext:self.managedObjectContext]; NSSortDescriptor *nameSort = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES]; NSArray *sortDescriptors = [NSArray arrayWithObjects:nameSort, nil]; [fetchRequest setEntity:entityDescription]; [fetchRequest setSortDescriptors:sortDescriptors]; // Check if predicate is set if (predicate) { [fetchRequest setPredicate:predicate]; } NSError *error = nil; NSArray *fetchedManagedObjects = [self.managedObjectContext executeFetchRequest:fetchRequest error:&error]; for (City *city in fetchedManagedObjects) { [fetchedResults addObject:city]; } NSDictionary *userInfo = [NSDictionary dictionaryWithObject:[NSArray arrayWithArray:fetchedResults] forKey:@"results"]; [[NSNotificationCenter defaultCenter] postNotificationName:@"fetchResultsComplete" object:nil userInfo:userInfo]; }); return [NSArray arrayWithArray:fetchedResults]; } So the thread hasn't finished executing by the time it returns the results to self.filteredLocationsArray. I've tried adding an NSNotification which passes the NSDictionary to this method: - (void)updateSearchResults:(NSNotification *)notification { NSDictionary *userInfo = notification.userInfo; NSArray *array = [userInfo objectForKey:@"results"]; self.filteredLocationsArray = [NSMutableArray arrayWithArray:array]; [self.tableView reloadData]; } I've also tried refreshing the searchViewController like [self.searchDisplayController.searchResultsTableView reloadData]; but to no avail. I'd really appreciate it if someone could point me in the right direction and show me where I might be going wrong. Thanks
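    A hedged sketch of one common fix (the block-based signature below is an assumption, not an existing method): give the fetch a completion block and hop back to the main queue before touching the table view, rather than returning an array the background queue has not filled yet. Note also that a managed object context should only be used from the queue it was created on, so in practice the background work would go through its own context or performBlock:.

        // Sketch: deliver results through a completion block once the background fetch finishes.
        - (void)fetchResultsFromCoreDataEntity:(NSString *)entity
                                usingPredicate:(NSPredicate *)predicate
                                    completion:(void (^)(NSArray *results))completion
        {
            dispatch_queue_t coreDataQueue = dispatch_queue_create("com.coredata.queue", DISPATCH_QUEUE_SERIAL);
            dispatch_async(coreDataQueue, ^{
                NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
                [fetchRequest setEntity:[NSEntityDescription entityForName:entity
                                                    inManagedObjectContext:self.managedObjectContext]];
                if (predicate) [fetchRequest setPredicate:predicate];
                NSError *error = nil;
                NSArray *fetched = [self.managedObjectContext executeFetchRequest:fetchRequest error:&error];
                dispatch_async(dispatch_get_main_queue(), ^{
                    completion(fetched ?: [NSArray array]);   // UIKit work happens back on the main queue
                });
            });
        }

        // Caller side (in filterContentForSearchText:):
        // [coreDataMaster fetchResultsFromCoreDataEntity:@"City" usingPredicate:predicate
        //     completion:^(NSArray *results) {
        //         self.filteredLocationsArray = [NSMutableArray arrayWithArray:results];
        //         [self.searchDisplayController.searchResultsTableView reloadData];
        //     }];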

    Read the article

  • Checking an empty Core Data relationship (SQLite)

    - by rwat
    I have a to-many relationship in my data model, and I'd like to get all the objects that have no corresponding objects in the relationship. For example: Customer - Purchases I want to get all Customers that have 0 Purchases. I've read somewhere that I could use "Purchases[SIZE] = 0", but this gives me an unsupported function expression error, which I think means it doesn't work with a SQLite backing store (which I don't want to switch from, due to some performance constraints). Any ideas?
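    For reference, a hedged sketch of the predicate form generally reported to work against the SQLite store; the entity and relationship names follow the question's Customer/Purchases example and are otherwise assumptions (as is the context variable).

        // Sketch: fetch all Customer objects whose to-many 'purchases' relationship is empty.
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"Customer"
                                       inManagedObjectContext:context]];
        [request setPredicate:[NSPredicate predicateWithFormat:@"purchases.@count == 0"]];
        NSError *error = nil;
        NSArray *customersWithoutPurchases = [context executeFetchRequest:request error:&error];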

    Read the article

  • Building vs. Buying a Master Data Management Solution

    - by david.butler(at)oracle.com
    Many organizations prefer to build their own MDM solutions. The argument is that they know their data quality issues and their data better than anyone. Plus a focused solution will cost less in the long run than a vendor-supplied general purpose product. This is not unreasonable if you think of MDM as a point solution for a particular data quality problem. But this approach carries significant risk. We now know that organizations achieve significant competitive advantages when they deploy MDM as a strategic enterprise-wide solution, with the most common best practice being to deploy a tactical MDM solution and grow it into a full information architecture. A build-your-own approach most certainly will not scale to a larger architecture unless it is done correctly with the larger solution in mind. It is possible to build a home-grown point MDM solution in such a way that it will dovetail into broader MDM architectures. A very good place to start is to use the same basic technologies that Oracle uses to build its own MDM solutions. Start with the Oracle 11g database to create a flexible, extensible and open data model to hold the master data and all needed attributes. The Oracle database is the most flexible, highly available and scalable database system on the market. With its Real Application Clusters (RAC) it can even support the mixed OLTP and BI workloads that represent typical MDM data access profiles. Use Oracle Data Integration (ODI) for batch data movement between applications, MDM data stores, and the BI layer. Use Oracle GoldenGate for more real-time data movement. Use Oracle's SOA Suite for application integration with its: BPEL Process Manager to orchestrate MDM connections to business processes; Identity Management for managing users; WS Manager for managing web services; Business Intelligence Enterprise Edition for analytics; and JDeveloper for creating or extending the MDM management application. Oracle utilizes these technologies to build its MDM Hubs. Customers who build their own MDM solution using these components will easily migrate to Oracle-provided MDM solutions when the home-grown solution runs out of gas. But, even with a full stack of open flexible MDM technologies, creating a robust MDM application can be a daunting task.
For example, a basic MDM solution will need: a set of data access methods that support master data as a service as well as direct real-time access as well as batch loads and extracts; a data migration service for initial loads and periodic updates; a metadata management capability for items such as business entity matrixed relationships and hierarchies; a source system management capability to fully cross-reference business objects and to satisfy seemingly conflicting data ownership requirements; a data quality function that can find and eliminate duplicate data while ensuring correct data attribute survivorship; a set of data quality functions that can manage structured and unstructured data; a data quality interface to assist with preventing new errors from entering the system even when data entry is outside the MDM application itself; a continuing data cleansing function to keep the data up to date; an internal triggering mechanism to create and deploy change information to all connected systems; a comprehensive role-based data security system to control and monitor data access, update rights, and maintain change history; a flexible business rules engine for managing master data processes such as privacy and data movement; a user interface to support casual users and data stewards; a business intelligence structure to support profiling, compliance, and business performance indicators; and an analytical foundation for directly analyzing master data. Oracle's pre-built MDM Hub solutions are full-featured 3-tier Internet applications designed to participate in the full Oracle technology stack or to run independently in other open IT SOA environments. Building MDM solutions from scratch can take years. Oracle's pre-built MDM solutions can bring quality data to the enterprise in a matter of months. But if you must build, at least build with the world's best technology stack in a way that simplifies the eventual upgrade to Oracle MDM and to the full enterprise-wide information architecture that it enables.

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and there is so little time we all have. As we have little time, we need to be selective about what we learn. I believe I know quite a lot of things in SQL, but I still do not know what is around SQL. I have started to learn about NewSQL recently. If you wonder what NewSQL is, I encourage all of you to read my blog post about NewSQL over here: Big Data – Buzz Words: What is NewSQL – Day 10 of 21. NewSQL databases are quickly becoming popular – providing the scale of NoSQL with the SQL features and transactions. As a part of learning NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), available only in warehousing databases, to the transactional database. The reason I am more intrigued about learning ClustrixDB is their recent announcement on Oct 31. ClustrixDB was only available as an appliance, but now with their software release on Oct 31, everyone can use it. It is now available as forever free for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With the forever-free offer, I am indeed interested in ClustrixDB now. I know that a few of the leading eCommerce sites in the world use them for their transactional database. Here are a few of the details I have quickly noted for ClustrixDB. ClustrixDB allows users to: Scale by simply adding nodes to the cluster with a single command Run billions of transactions a day Run fast real-time analytics Achieve high availability with recovery from node failure Manage itself Easily migrate from MySQL as it is nearly plug-and-play compatible, using MySQL drivers, tools and replication. While I was going through the documentation I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts, and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution. ClustrixDB indeed sounds like a very promising solution, so I decided to dig a bit deeper to understand who the current customers of Clustrix are, as they have existed in the industry for quite a few years. Their client list is indeed very interesting and here is my quick research about them. Twoo.com – Europe's largest social discovery (dating) site runs 4.4 Billion Transactions a day with table sizes over a Terabyte, on a 168 core cluster. EngageBDR – Top 3 in the online advertising category uses ClustrixDB to serve 6.9 billion ads a day through its real-time bidding platform. Their reports went from 4 hours to 15 seconds. NoMoreRack – Top 2 fastest growing e-commerce company in the US used ClustrixDB for high availability and fast growth through the Amazon cloud. MakeMyTrip – India's leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore. Many enterprises such as AOL, CSC, Rakuten, and Symantec use ClustrixDB when their applications need scale. I must admit that I am impressed with the information I have learned so far, and now is the time to get some hands-on experience with their product. I want to learn this technology so that in the future, when the topic is NewSQL, I know what I am talking about.
Read more about why Clustrix thinks ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine, so that in the future when we discuss its technical aspects, we are all on the same page. The software can be downloaded here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Clustrix

    Read the article

  • Transferring DHCP using the Windows Server Migration Tools - Why is PowerShell crashing on the import of the .mig file?

    - by Mike
    I am migrating DHCP from a Windows Server 2003 R2 DC to a Windows Server 2008 R2 DC. I've followed this video and its predecessor (Installing Windows Server Migration Tools) http://technet.microsoft.com/en-us/video/migrating-dhcp-using-the-windows-server-2008-r2-migration-tools.aspx I went through everything smoothly until the last step. I have exported a .mig file with my DHCP configuration on the old 2003 R2 server. I transferred this .mig file over to my 2008 R2 server, and when running the import command, it will appear to work for a minute or two and then I get a generic Windows "PowerShell has stopped working" error and I have to close the program. Under the problem details I see the following: FileVersionOfSystemManagementAutomation: 6.1.7600.16385 InnermostExceptionType: System.AccessViolationException OutermostExceptionType: System.AccessViolationException DeepestPowerShellFrame: unknown OS Version: 6.1.7600.2.0.0.272.7 LocaleID: 1033 Maybe there are permissions issues? I am running PowerShell as an admin and am logged in to the server as a domain administrator. Any ideas? Thanks

    Read the article

  • Oracle Communications Data Model

    - by jean-pierre.dijcks
    I've mentioned OCDM in previous posts but found the following (see end of the post) podcast on the topic and figured it is worthwhile to spread the news some more. ORetailDM and OCommunicationsDM are the two data models currently available from Oracle. Both are intended to capture: Business best practices and industry knowledge Pre-built advanced analytics intended to predict future events before they happen (like the Churn model shown below) Oracle technology best practices to ensure optimal performance of the model All of this typically comes with a reduced time to implementation, or as the marketing slogan goes, reduced time to value. Here are the links: Podcast on OCDM OTN pages for OCDM and ORDM

    Read the article

  • Importing data from text file to specific columns using BULK INSERT

    - by Dinesh Asanka
    Bulk insert is much faster than using other techniques such as SSIS. However, when you are using bulk insert you can't insert into specific columns. If, for example, there are five columns in a table, you should have five values for each record in the text file you are importing from. This is an issue when you are expecting default values to be inserted into tables. Let us say you have a table as below: In this table, you are expecting ID, Status and CreatedDate to be updated automatically, so your text file may only have FirstName and LastName values, as below: Dinesh,Asanka Saman,Liyanage Ruwan,Silva Susantha,Bathige Jude,Peires Sanjeewa,Jayawickrama If you use bulk insert into this table as follows, you will be returned an error: Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 1 (ID). To avoid this you will need to create a view with the columns you are expecting to fill and use bulk insert against it. If you check the table now, you will see the table with the values from the text file and the default values.
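    A hedged sketch of what that looks like in T-SQL (the table, view and file names here are illustrative, not from the article):

        -- Sketch: expose only the two columns the text file supplies, then bulk insert
        -- through the view so ID, Status and CreatedDate fall back to their defaults.
        CREATE VIEW dbo.vCustomerNames
        AS
        SELECT FirstName, LastName
        FROM dbo.Customer;
        GO

        BULK INSERT dbo.vCustomerNames
        FROM 'C:\import\names.txt'        -- illustrative path
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');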

    Read the article

  • Javascript: Safely upload a client data file

    - by Jeffrey Sweeney
    I'm (still) working on a template-based XML editing program. It's a GUI-based XML editor that only allows users to add certain tags and attributes based off the requirements. You can see the current version here for an idea. Now, I'd like to allow users to upload their own data templates, but I'm concerned about potential XSS hacks. Currently, the template file is in Javascript object literal notation, which unsurprisingly is a security nightmare if the user can upload their own. I was thinking of using XML instead, but is there an even better alternative?

    Read the article

  • Reuse the data CRUD methods in data access layer, but they are updated too quickly

    - by ValidfroM
    I agree that we should put CRUD methods in a data access layer. However, in my current project I have some issues. It is a legacy system, and there are quite a lot of CRUD methods in some concrete manager classes. People including me seem to just add new methods to them, rather than reuse the existing methods, because: we don't know whether the existing method is what we need; even if we have the source code, do we really need to read others' code and then make a decision?; it is updated too quickly, and we do not have time to get familiar with the DAO API. Back to the question: how do you solve this in your project? If we say "reuse", it really needs to be reusable, rather than just an excuse.

    Read the article

  • Get aggregated view of data for entire website with Google Analytics

    - by crmpicco
    I have a website (www.ayrshireminis.com), which has three main sections under different directories; these are: /forum /galleries /contact I would like to have an aggregated view of the data for the whole website, but also for each section. What is the recommended approach for doing this? I believe I can create a web property that includes a profile for the entire website and duplicated filtered profiles, each section having an include filter. This is my gut instinct, but I'd like to know if there is another (better) way to do it? Maybe by having one account that includes a profile for the whole site and another profile with an include filter for the individual sections?

    Read the article

  • Design: How to model / where to store relational data between classes

    - by Walker
    I'm trying to figure out the best design here, and I can see multiple approaches, but none that seems "right." There are three relevant classes here: Base, TradingPost, and Resource. Each Base has a TradingPost which can offer various Resources depending on the Base's tech level. Where is the right place to store the minimum tech level a base must possess to offer any given resource? A database seems like overkill. Putting it in each subclass of Resource seems wrong--that's not an intrinsic property of the Resource. Do I have a mediating class, and if so, how does it work? It's important that I not be duplicating code; that I have one place where I set the required tech level for a given item. Essentially, where does this data belong? P.S. Feel free to change the title; I struggled to come up with one that fits.

    Read the article

  • Still no detected structured data in Google Webmaster Tools [on hold]

    - by user6211
    Can you give me some suggestions what's wrong with my structured data? Google still cannot read it. It looks like this: <div class="identity"> <div itemscope itemtype="http://schema.org/LocalBusiness"> <a itemprop="url" href="http://MYDOMAIN.co.uk/"><div itemprop="name"><strong>MY_COMPANY</strong></div></a> <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress"> <span itemprop="streetAddress">MY_ADDRESS</span>, <span itemprop="addressLocality">London</span>, <span itemprop="postalCode">SE5 MY_XYZ</span>, <span itemprop="addressCountry">UK</span> </div> </div> </div>

    Read the article

  • Consolidating hotels data from various booking sites with different IDs or reference

    - by Victor
    In one of my projects, I have data for hotels, and other booking sites are able to book these hotels. For example: Hotel A - Booking (ID = 4002), Expedia (ID = 123), Priceline (ID = 147) The three booking engines each use their own ID to reference Hotel A. I would need to check manually and make the right reference to the hotel. If I have 100,000 hotels, do I have to check manually 300,000 times (considering 3 booking sites)? They might provide APIs, so I could cross-check the name, address or latitude/longitude, but if they differ a little bit then I might give the wrong reference to the wrong hotel. I'm sure there are better ways to do this. There are many travel sites out there which do hotel price checking on many booking sites, but how do they make sure they are checking the right hotel on these booking sites? Does anyone have any experience with this?

    Read the article

  • Mount external HD ubuntu 12.10

    - by Luigi Tiburzi
    Although it's an abundantly treated matter, I'm unable to find an answer valid for my needs. I had a 12.04 installation of Ubuntu and I decided to install 12.10. I copied (using GParted) the partition where my system was to an external HD where there is a Windows partition. Then I installed the newest Ubuntu version and now I want to take back some files (for example my .emacs) from that partition, but when I try to mount it, it is not found as sdb, and if I mount it from /dev/usb/hddev0 I don't get any output, only a blinking cursor, no errors, no output. I even tried to mount it as an ntfs disk but the result was the same. It's like the HD cannot be detected. So how can I access the data on that disk? Could I get it from the GParted terminal instead of the Ubuntu one? Thanks

    Read the article

  • Using Ubuntu to recover data from a crashed Windows install

    - by user289391
    I was using Windows on my laptop when suddenly the blue screen of death appeared; then the laptop restarted and showed me this: Intel UNDI, PXE-2.1 (built 083) Copyright (C) 1997-200 Intel Corporation This Product is covered by one or more of the following patents: US5,307459, US5,434,872, US5732,094, US6579,884, US6115,776 and US6,327,625 Realtek PCIe FE Family Controller Series v120 (01/26/10) PXE-M0F: ExitingPXEROM. reboot failed I have Ubuntu on an external disk, so I have now booted into that. Two questions: Any theories on what happened? How can I use Ubuntu to recover my data from the Windows install?

    Read the article

  • disk not accessible

    - by user107044
    I formatted my hard drive yesterday and it was working well even after the formatting. But when I restarted my system again, it is showing that the space is allotted to my files but they are inaccessible. I have even tried to unhide the files and folders, in case they got hidden somehow. But nothing works. The hard drive is shown as empty, but the properties say that it still contains the data: http://imgur.com/ObjTE In the image, it shows that the directory has only 1 file of size 4.8 KB, but the space being used by the drive is 11.6 GB. Do suggest some solution.

    Read the article

  • SQL SERVER – Data Pages in Buffer Pool – Data Stored in Memory Cache

    This will drop all the clean buffers so we will be able to start again from there. Now, run the following script and check the execution plan of the query. Have you ever wondered what types of data are there in your cache? During SQL Server Trainings, I am usually asked if there is any [...]

    Read the article

  • Memory efficient way of inserting an array of objects with Core Data

    - by randombits
    I'm working on a piece of code for an iPhone application that fetches a bunch of data from a server and builds objects from it on the client. It ends up creating roughly 40,000 objects. They aren't displayed to the user; I just need to create instances of NSManagedObject and store them to persistent storage. Am I wrong in thinking that the only way to do this is to create a single object, then save the context? Is it best to create the objects all at once, then somehow save them to the context after they're created and stored in some set or array? If so, can someone show some example code for how this is done, or point me in the direction of code where this is done? The objects themselves are relatively straightforward models with string or integer attributes and don't contain any complex relationships.
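    A hedged sketch of the batching pattern usually suggested for imports like this (the batch size, entity name, and the recordDictionaries/context variables are illustrative): create objects in chunks, save between chunks, reset the context so saved objects can be released, and drain an autorelease pool so memory stays flat.

        // Sketch: import parsed server records in batches of 500 to keep memory bounded.
        static const NSUInteger kBatchSize = 500;
        NSUInteger count = 0;
        for (NSDictionary *record in recordDictionaries) {               // hypothetical parsed input
            @autoreleasepool {
                NSManagedObject *obj =
                    [NSEntityDescription insertNewObjectForEntityForName:@"Item"   // illustrative entity name
                                                  inManagedObjectContext:context];
                [obj setValuesForKeysWithDictionary:record];             // simple string/integer attributes
                if (++count % kBatchSize == 0) {
                    NSError *error = nil;
                    [context save:&error];                               // push the batch to the store
                    [context reset];                                     // drop the now-saved objects from memory
                }
            }
        }
        NSError *error = nil;
        [context save:&error];                                           // save the final partial batch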

    Read the article

  • iPhone OS: Fetching a random entity instance using NSPredicate, NSFetchRequest and Core Data

    - by nickthedude
    Working on an app where I have a large collection of managed objects against which I want to fetch a few random instances. My question is, is there any way I can use NSPredicate and NSFetchRequest to return several objects at random? I saw that you could actually add an NSFetchRequest into the entity using the data modeler; any way to do the random fetch using this? Also, what would be the best method for determining the "count" of a table so I can set the bounds of the random number generator? Let me know if you need more details. Thanks! Nick
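    A hedged sketch of one common approach (the entity name and context variable are illustrative): count the instances first, then fetch a single object at a random offset.

        // Sketch: pick one managed object at random by counting, then fetching at a random offset.
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"Quote"          // illustrative entity
                                       inManagedObjectContext:context]];

        NSError *error = nil;
        NSUInteger total = [context countForFetchRequest:request error:&error]; // the "count" of the table
        if (total > 0) {
            [request setFetchOffset:(arc4random() % total)];
            [request setFetchLimit:1];
            NSArray *result = [context executeFetchRequest:request error:&error];
            NSManagedObject *randomInstance = [result lastObject];
            // ...use randomInstance; repeat (or vary the offset) for several random picks.
        }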

    Read the article

  • flv video flex core

    - by user153506
    I have an FLV video file. I loaded the binary data of this FLV file into memory by using var myFile:File = File.documentsDirectory.resolvePath("AIR Test/video.flv"); var myFileStream:FileStream = new FileStream(); myFileStream.open(myFile, FileMode.READ); var bytes:ByteArray = new ByteArray(); myFileStream.readBytes(bytes); myFileStream.close(); Now I'd like to change some header fields of this loaded FLV in the bytes memory variable. After changing the header, the changed header is stored back into the bytes memory variable, overwriting the original. Now how can I play that FLV file from this memory (the bytes memory variable)?

    Read the article
