Search Results

Search found 59579 results on 2384 pages for 'data loss'.

  • MQ EOL Data conversion

    - by lemotdit
    We are sending data through MQ from a z/OS/CICS system to an AS/400. The original encoding of the message is CCSID 500 with an MQSTR format. The client application gets the message with the CONVERT option and CCSID 819. The data is converted almost correctly, except for the end-of-line character. Any idea? The z/OS side is sending 0D (CR) as the end-of-line character. If it sends 0D+0A (CR+LF), the CCSID automatically changes from 500 to 437, and the end of line is still not right on the client side.
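
    If the CCSID tables themselves can't be adjusted, one fallback is to normalize the end-of-line bytes on the receiving side after the MQGET. A minimal Python sketch of that idea (the helper is hypothetical, and this sidesteps rather than fixes the conversion):

        def normalize_eol(text, eol="\r\n"):
            """Collapse CR, LF, and CR+LF into one chosen end-of-line sequence."""
            return text.replace("\r\n", "\n").replace("\r", "\n").replace("\n", eol)

        # Example: a converted message that arrived with bare CRs.
        payload = "line one\rline two\rline three"
        assert normalize_eol(payload) == "line one\r\nline two\r\nline three"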

  • What is the best possible technology for pulling huge data from 4 remote servers

    - by Habib Ullah Bahar
    Hello, for one of our projects we need to pull huge amounts of real-time stock data from 4 remote servers across two countries. The trivial process here is to check the sources at a regular interval and save the updates to a database. But as this is real-time stock data for more than 1000 companies, I have to pull every second, which I think isn't good in terms of memory and bandwidth. Please give me a suggestion on which technology/platform [we are flexible here: PHP, Python, Java, Perl, any one of them will be OK for us] we should choose so that this can be achieved easily and with better performance.
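
    As a starting point, here is a minimal polling sketch in Python (one of the languages listed); the endpoint URLs and the save_to_database stub are assumptions, and one worker thread per source keeps a slow server from stalling the other three:

        import threading
        import time
        import urllib.request

        SOURCES = [  # hypothetical endpoints, one per remote server
            "http://feed1.example.com/quotes",
            "http://feed2.example.com/quotes",
            "http://feed3.example.com/quotes",
            "http://feed4.example.com/quotes",
        ]

        def save_to_database(blob):
            pass  # placeholder: parse the quotes and upsert into your database

        def poll(url, interval=1.0):
            """Fetch one source once per interval, absorbing slow responses."""
            while True:
                started = time.time()
                try:
                    with urllib.request.urlopen(url, timeout=5) as resp:
                        save_to_database(resp.read())
                except OSError as exc:
                    print(url, exc)  # log and keep polling
                time.sleep(max(0.0, interval - (time.time() - started)))

        for src in SOURCES:
            threading.Thread(target=poll, args=(src,), daemon=True).start()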

  • How do I do automatic data serialization of data objects in Haskell

    - by Adam Gent
    One of the huge benefits of languages that have some sort of reflection/introspection is that objects can be automatically constructed from a variety of sources. For example, in Java I can use the same objects for persisting to a DB (with Hibernate), serializing to XML (with JAXB), or serializing to JSON (json-lib). You can do the same in Ruby and Python, usually by following some simple rules for properties, or annotations in Java's case. Thus I don't need lots of "Domain Transfer Objects"; I can concentrate on the domain I am working in. It seems that in very strict FP languages like Haskell and OCaml this is not possible, particularly in Haskell. The only thing I have seen is some sort of preprocessing or meta-programming (OCaml). Is it just accepted that you have to do all the transformations from the bottom up? In other words, you have to do a lot of boring work to turn a Haskell data type into a JSON/XML/DB row object and back again into a data object.

  • Efficient data structure for fast random access, search, insertion and deletion

    - by Leonel
    I'm looking for a data structure (or structures) that would let me keep an ordered list of integers, with no duplicates, and with indexes and values in the same range. I need four main operations to be efficient, in rough order of importance:

    1. taking the value at a given index
    2. finding the index of a given value
    3. inserting a value at a given index
    4. deleting the value at a given index

    Using an array I get 1 in O(1), but 2 is O(N), and insertions and deletions are expensive (O(N) as well, I believe). A linked list has O(1) insertion and deletion (once you have the node), but 1 and 2 are O(N), which negates the gains. I tried keeping two arrays, a[index]=value and b[value]=index, which makes 1 and 2 O(1) but turns 3 and 4 into even more costly operations. Is there a data structure better suited for this?
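
    For reference, a minimal sketch of the two-structure idea described above, with the list and the dict kept in sync: operations 1 and 2 stay O(1), while 3 and 4 remain O(N) because the tail must be reindexed (an order-statistic tree is what would bring all four down to O(log N)):

        class IndexedSet:
            """List + dict kept in sync: O(1) reads, O(N) writes (sketch only)."""

            def __init__(self):
                self.values = []        # index -> value
                self.index_of = {}      # value -> index

            def value_at(self, i):      # operation 1: O(1)
                return self.values[i]

            def index(self, v):         # operation 2: O(1)
                return self.index_of[v]

            def insert(self, i, v):     # operation 3: O(N), reindexes the tail
                self.values.insert(i, v)
                for j in range(i, len(self.values)):
                    self.index_of[self.values[j]] = j

            def delete(self, i):        # operation 4: O(N), reindexes the tail
                v = self.values.pop(i)
                del self.index_of[v]
                for j in range(i, len(self.values)):
                    self.index_of[self.values[j]] = j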

  • Best data structure to use for a two-ended sorted list

    - by fmark
    I need a collection data structure that can do the following:

    • Be sorted
    • Allow me to quickly pop values off the front and back of the list
    • Remain sorted after I insert a new value
    • Allow a user-specified comparison function, as I will be storing tuples and want to sort on a particular value
    • Thread-safety is not required
    • Optionally allow efficient haskey() lookups (I'm happy to maintain a separate hash table for this, though)

    My thoughts at this stage are that I need a priority queue and a hash table, although I don't know if I can quickly pop values off both ends of a priority queue. I'm interested in performance for a moderate number of items (I would estimate fewer than 200,000). Another possibility is simply maintaining an OrderedDictionary and doing an insertion sort every time I add more data to it. Furthermore, are there any particular implementations of this in Python? I would really like to avoid writing this code myself.
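
    One Python option that appears to cover these points is the third-party sortedcontainers package (pip install sortedcontainers): a SortedKeyList stays sorted on insert, takes a user-supplied key, pops from either end, and does O(log n) membership tests. A small sketch, assuming tuples sorted on their second field:

        from sortedcontainers import SortedKeyList  # third-party package

        items = SortedKeyList(key=lambda t: t[1])  # sort tuples on field 1

        items.add(("b", 2.0))
        items.add(("a", 1.0))
        items.add(("c", 3.0))

        front = items.pop(0)    # ("a", 1.0) - pop from the front
        back = items.pop(-1)    # ("c", 3.0) - pop from the back
        print(front, back, ("b", 2.0) in items)  # membership is O(log n)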

  • Core Data and Relationships

    - by alku83
    I have two objects, a Trip and a Place. A Trip represents a journey from one Place to another Place, i.e. a Trip needs a fromPlace and a toPlace. So this is a 1-to-2 relationship, but I need to know which is the "from" and which is the "to". I am not sure how to model this in Core Data. I have created two entities (Trip, Place), and now I want to set up the relationship(s) so I have a fromPlace and a toPlace. Do I need to add an extra field on the Place entity called isFrom, or similar? If this were in a database, I would just have an id column on the Place table, and then two columns in the Trip table: fromPlaceId and toPlaceId. How do I achieve something similar in Core Data?

  • How can I compare Core Data models?

    - by Don
    I noticed while doing system testing that a feature of our app had been removed. It looks like at some point an older version of a file was checked into SVN that was missing a property. This specific file was generated from the Core Data model, and sure enough, the latest version of the model in SVN is missing the same attribute. I need to find out if any other attributes are missing, or if anything else in the model changed. However, the elements file in the .xcdatamodel folder appears to be binary, so I can't compare the revisions. Is there a way to find the differences between two Core Data models in SVN? Barring that, what would be the best way to accomplish this task?

  • Storing a bucket of numbers in an efficient data structure

    - by BlitzKrieg
    I have buckets of numbers, e.g. 1 to 4, 5 to 15, 16 to 21, 22 to 34, ... I have roughly 600,000 such buckets. The range of numbers that falls in each bucket varies. I need to store these buckets in a suitable data structure so that looking up which bucket a number belongs to is as fast as possible. So my question is: what is a suitable data structure and sorting mechanism for this type of problem? Thanks in advance.
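
    Assuming the buckets are non-overlapping, one common approach is to sort the bucket start values into an array once and binary-search it per lookup; a sketch with bisect and a few hypothetical buckets:

        import bisect

        # (start, end, payload) buckets, sorted by start value.
        buckets = [(1, 4, "A"), (5, 15, "B"), (16, 21, "C"), (22, 34, "D")]
        starts = [b[0] for b in buckets]

        def lookup(n):
            """Return the bucket containing n, or None. O(log B) per lookup."""
            i = bisect.bisect_right(starts, n) - 1
            if i >= 0 and buckets[i][0] <= n <= buckets[i][1]:
                return buckets[i]
            return None

        print(lookup(18))  # (16, 21, 'C')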

  • Queue-like data structure with fast search and insertion

    - by Max
    I need a data structure with the following properties: it contains integer numbers, with no duplicates, and after it reaches its maximal size the first (oldest) element is removed. So if the capacity is 3, this is how it would look when putting in sequential numbers: {}, {1}, {1, 2}, {1, 2, 3}, {2, 3, 4}, {3, 4, 5}, etc. Only two operations are needed: inserting a number into this container (INSERT) and checking whether a number is already in the container (EXISTS). The number of EXISTS operations is expected to be approximately 2 × the number of INSERT operations. I need these operations to be as fast as possible. What would be the fastest data structure, or combination of data structures, for this scenario?
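
    One sketch that seems to fit: pair a collections.deque (FIFO order and O(1) eviction of the oldest element) with a set (O(1) EXISTS), assuming that inserting a number already in the container is simply ignored:

        from collections import deque

        class BoundedUniqueQueue:
            def __init__(self, capacity):
                self.capacity = capacity
                self.order = deque()   # insertion order, oldest on the left
                self.members = set()   # O(1) membership checks

            def exists(self, n):
                return n in self.members

            def insert(self, n):
                if n in self.members:  # no duplicates
                    return
                if len(self.order) == self.capacity:
                    self.members.discard(self.order.popleft())  # evict oldest
                self.order.append(n)
                self.members.add(n)

        q = BoundedUniqueQueue(3)
        for n in (1, 2, 3, 4, 5):
            q.insert(n)
        print(q.exists(2), q.exists(4))  # False True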

  • Data Structure for a particular problem?

    - by AGeek
    Hi, which data structure can perform insertion, deletion, and searching in O(1) time in the worst case? We may assume the set of elements are integers drawn from a finite set 1, 2, ..., n, and that initialization can take O(n) time. I can only think of implementing a hash table. Implementing it with trees will not give O(1) time complexity for any of the operations. Or is it possible? Kindly share your views on this, or on any other data structure apart from these. Thanks.
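
    Because the keys come from a known range 1..n, a direct-address table (a plain boolean array indexed by key) gives worst-case O(1) for all three operations after the allowed O(n) initialization; a minimal sketch:

        class DirectAddressSet:
            """Worst-case O(1) insert/delete/search for keys in 1..n; O(n) init."""

            def __init__(self, n):
                self.present = [False] * (n + 1)  # index 0 unused

            def insert(self, k):
                self.present[k] = True

            def delete(self, k):
                self.present[k] = False

            def search(self, k):
                return self.present[k]

        s = DirectAddressSet(10)
        s.insert(7)
        print(s.search(7), s.search(3))  # True False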

  • Transferring Data Between Server and Client (Mobile)

    - by Byron
    Scenario:

    • Client (mobile): .NET CF 2.0, SQL CE 3.0
    • Server: .NET 2.0, SQL Server 2005, web service

    Client and server database schemas differ. From the server, only certain columns from certain tables need to be synced. From the client, everything will need to be synced once the client has made changes. The client will continually poll a web service to download and upload data. A framework will be developed to package and unpackage the data, used by both client and server. How would you develop the packaging and unpackaging? Use DataSets, or serialize strongly typed objects? All suggestions welcome. Thanks.

  • Adding single row one by one in tableview from core data of iPhone

    - by user336685
    I am working on RSS reader code where articles get downloaded and are viewable offline. The problem is that the tableview containing headlines only gets updated after all articles are downloaded. Core Data is used, so every time the NSManagedObjectContext is saved, [self.tableView beginUpdates] is called; the table is updated via a Core Data fetched results controller. I tried saving the NSManagedObjectContext every time an article is saved, but that is not updating the tableview. I want a mechanism similar to Instapaper's tableview, where articles get saved and the tableview gets updated immediately. Please help if you know the solution. Thanks in advance.

  • AutoComplete implementation - Interview Question

    - by user181218
    Hi, say you have a DB table with two columns: SearchPhrase (string) | Popularity (int). You need to initialize a data structure so that you can use it to implement an autocomplete feature (like Google Suggest) comfortably. The requirement: once the data from the DB is processed into the data structure, when you type a letter you get the 10 most popular search phrases from the DB starting with that letter; then when you type the next one you get the 10 most popular phrases starting with those two letters, and so on. The question only concerns planning the data structure and pseudocoding Insert, Search, etc. Note: YOU CANNOT USE A TRIE. Any ideas?
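
    One trie-free sketch: since the DB is processed into the structure up front, precompute a hash map from every prefix of every phrase to its 10 most popular completions, trading memory for O(1) lookups per keystroke. The sample rows are assumptions:

        import heapq
        from collections import defaultdict

        rows = [("data loss", 90), ("database", 75), ("dart", 40), ("dash", 55)]

        top10 = defaultdict(list)  # prefix -> min-heap of (popularity, phrase)

        for phrase, pop in rows:   # Insert: O(len(phrase) * log 10) per row
            for i in range(1, len(phrase) + 1):
                heap = top10[phrase[:i]]
                if len(heap) < 10:
                    heapq.heappush(heap, (pop, phrase))
                else:
                    heapq.heappushpop(heap, (pop, phrase))  # keep the top 10

        def suggest(prefix):  # Search: O(1) lookup, sort of at most 10 items
            return [p for _, p in sorted(top10.get(prefix, []), reverse=True)]

        print(suggest("da"))  # ['data loss', 'database', 'dash', 'dart']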

  • Core Data and BOOL setup

    - by John Valen
    I am working on an app that uses Core Data as its backend for managing SQLite records. I have everything working with strings and numbers, but have just tried adding BOOL fields and can't seem to get things to work. In the .xcdatamodel, I have added a field to my object called isCurrentlyForSale, which is not Optional, not Transient, and not Indexed. The attribute's type is set to Boolean with a default value of NO. When I created the class files from the data model, the boilerplate code added for this property in the .h header was:

        @property (nonatomic, retain) NSNumber * isCurrentlyForSale;

    along with @dynamic isCurrentlyForSale; in the .m implementation file. I've always worked with booleans as simple BOOLs. I've read that I could use NSNumber's numberWithBool: and boolValue methods, but this seems like an awful lot of extra code for something so simple. Can the @property in the header be changed to a simple BOOL? If so, is there anything to watch out for? Thanks -John

  • ADO.NET Data Service -- non .NET consumers

    - by SwampyFox
    Has anyone come across an example of a non-.NET consumer of an ADO.NET Data Service? I am on the second day of looking at what Astoria is and how it can be used. I am also trying to answer why I would use this instead of a web service. After getting my examples running, I kind of get the RESTful approach to getting data out of the system, and plugging it into a .NET client is incredibly easy. But then I wondered how a non-.NET consumer would go about it. Any ideas (pointers) definitely appreciated...
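
    Since the service speaks plain HTTP (Atom/XML by default, JSON if you ask for it), any language with an HTTP client can consume it. A Python sketch against a hypothetical service URL, assuming the service honours Accept: application/json:

        import json
        import urllib.request

        # Hypothetical endpoint: first five entries of a Customers entity set.
        url = "http://example.com/Northwind.svc/Customers?$top=5"
        req = urllib.request.Request(url, headers={"Accept": "application/json"})

        with urllib.request.urlopen(req) as resp:
            payload = json.load(resp)

        print(payload)  # the customer entries as plain dicts/lists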

  • Best way to collect and store data daily?

    - by mktb
    I have a bunch of statistics: # of users, # of families, ratio of users/families, etc. I'd like to store these daily so I can view the data historically. However, I'm looking for the most effective way to store it. Should I run a cron job that writes one row per day to the database (DATE: today, USERS: 123, FAMILIES: 456, RATIO: 7.89), or should I write multiple rows, like (DATE: today, DATATYPE: users, VALUE: 123)? Or is there another option that is more efficient or more effective? Thanks!
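
    For comparison, a sketch of the multiple-rows variant using SQLite (the table and metric names are assumptions); one appeal of this shape is that adding a new statistic later needs no schema change:

        import sqlite3
        from datetime import date

        conn = sqlite3.connect("stats.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS daily_stats (
                            stat_date TEXT NOT NULL,
                            metric    TEXT NOT NULL,
                            value     REAL NOT NULL,
                            PRIMARY KEY (stat_date, metric))""")

        # What the nightly cron job would write:
        today = date.today().isoformat()
        for metric, value in (("users", 123), ("families", 456), ("ratio", 7.89)):
            conn.execute("INSERT OR REPLACE INTO daily_stats VALUES (?, ?, ?)",
                         (today, metric, value))
        conn.commit()

        # Historical view of one metric:
        for row in conn.execute("SELECT stat_date, value FROM daily_stats "
                                "WHERE metric = 'users' ORDER BY stat_date"):
            print(row)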

  • iOS 5 - Core Data SQLite DB losing data after killing app

    - by Brian Boyle
    I'm using Core Data with a SQLite DB to persist data in my app. However, each time I kill my app I lose any data that was saved in the DB. I'm pretty sure it's because the .sqlite file for my DB is just being replaced by a fresh one each time my app starts, but I can't seem to find any code that will just use the existing one that's there. It would be great if anyone could point me towards some code that could handle this for me. Cheers, B

        - (NSPersistentStoreCoordinator *)persistentStoreCoordinator
        {
            if (__persistentStoreCoordinator != nil) {
                return __persistentStoreCoordinator;
            }

            NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                     [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
                                     [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil];
            NSURL *storeURL = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"FlickrCoreData.sqlite"];

            NSError *error = nil;
            __persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]];
            if (![__persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                                                            configuration:nil
                                                                      URL:storeURL
                                                                  options:options
                                                                    error:&error]) {
                NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
                abort();
            }

            return __persistentStoreCoordinator;
        }

  • Data Warehouse: One Database or many?

    - by drrollins
    At my new company, they keep all data associated with the data warehouse, including import, staging, audit, dimension, and fact tables, together in the same physical database. I've been a database developer for a number of years now, and this consolidation of function and form seems counter to everything I know. It seems to make security, backup/restore, and performance management more manually intensive. Is this something that is done in the industry? Are there substantial reasons for doing or not doing it? The platform is Netezza; the size is in terabytes, hundreds of millions of rows. What I'm looking to get from answers to this question is a solid understanding of how right or wrong this path is. From your experience, what are the issues I should focus on arguing if this is a path that will cause trouble for us down the road? If it is no big deal, then I'd like to know that as well.

  • iPhone app launch times and Core Data migration

    - by sehugg
    I have a Core Data application which I plan to update with a new schema. The lightweight migration seems to work, but it takes time proportional to the amount of data in the database, and it occurs in the didFinishLaunchingWithOptions phase of the app. I want to avoid "<app> failed to launch in time" problems, so I assume I cannot keep the migration in the didFinishLaunchingWithOptions method. I assume the best approach would be to perform the migration on a background thread, and that I'd also need to defer loading of the main ViewController until the migration completes, to avoid using the managedObjectContext before initialization finishes. Does this make sense, and is there example code (maybe in Apple sample projects) for this sort of initialization?

  • Core Data produces Analyzer warnings

    - by RickiG
    Hi, I am doing the final touch-ups on an app and am getting rid of every compiler/analyzer warning. I have a bunch of class methods that wrap my app's access to Core Data entities, and this is "provoking" the analyzer:

        + (CDProductEntity *)newProductEntity
        {
            return (CDProductEntity *)[NSEntityDescription insertNewObjectForEntityForName:@"CDProductEntity"
                                                                    inManagedObjectContext:[self context]];
        }

    This results in an analyzer warning: "Object with +0 retain counts returned to caller where a +1 (owning) retain count is expected". In the method that calls the above class method I have this:

        CDProductEntity *newEntity = [self newProductEntity];

    which results in the analyzer warning: "Method returns an Objective-C object with a +1 retain count (owning reference)". Explicitly releasing or autoreleasing a Core Data entity is usually very, very bad, but is that what it is asking me to do here? First it tells me the object has a +0 retain count and that is bad, then it tells me it has a +1, which is also bad. What can I do to ensure that I am either dealing with an analyzer hiccup or releasing correctly? Thanks in advance.

  • Better data-structure design

    - by Tempname
    Currently in my application I have a single table that is giving me a bit of trouble. The issue at hand is that I have a value object mapped to this table. When the data is returned to me as an array of value objects, I then have to loop through this array and begin my recursion by matching each ParentID to its parent's ObjectID. The ParentID column is either NULL (the row acts as a parent) or holds the value of an ObjectID. I know there has to be a better way to build this data structure so that I do not have to do recursive loops to match ParentIDs with their ObjectIDs. Any help with this is greatly appreciated. Here is the table in describe form:

        +----------------+------------------+------+-----+---------------------+-----------------------------+
        | Field          | Type             | Null | Key | Default             | Extra                       |
        +----------------+------------------+------+-----+---------------------+-----------------------------+
        | ObjectID       | int(11) unsigned | NO   | PRI | NULL                | auto_increment              |
        | ObjectHeight   | decimal(6,2)     | NO   |     | NULL                |                             |
        | ObjectWidth    | decimal(6,2)     | NO   |     | NULL                |                             |
        | ObjectX        | decimal(6,2)     | NO   |     | NULL                |                             |
        | ObjectY        | decimal(6,2)     | NO   |     | NULL                |                             |
        | ObjectLabel    | varchar(255)     | NO   |     | NULL                |                             |
        | TemplateID     | int(11) unsigned | NO   | MUL | NULL                |                             |
        | ObjectTypeID   | int(11) unsigned | NO   | MUL | NULL                |                             |
        | ParentID       | int(11) unsigned | YES  | MUL | NULL                |                             |
        | CreationDate   | datetime         | YES  |     | 0000-00-00 00:00:00 |                             |
        | LastModifyDate | timestamp        | YES  |     | NULL                | on update CURRENT_TIMESTAMP |
        +----------------+------------------+------+-----+---------------------+-----------------------------+
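
    One common way to avoid the recursive matching is to build the tree in a single pass over the flat rows: index every object by ObjectID first, then attach each row to its parent's child list. A Python sketch over hypothetical rows:

        # Hypothetical flat rows, as returned from the table.
        rows = [
            {"ObjectID": 1, "ObjectLabel": "root A", "ParentID": None},
            {"ObjectID": 2, "ObjectLabel": "child of 1", "ParentID": 1},
            {"ObjectID": 3, "ObjectLabel": "root B", "ParentID": None},
            {"ObjectID": 4, "ObjectLabel": "child of 2", "ParentID": 2},
        ]

        nodes = {r["ObjectID"]: dict(r, children=[]) for r in rows}  # pass 1: index
        roots = []
        for node in nodes.values():                                  # pass 2: link
            if node["ParentID"] is None:
                roots.append(node)
            else:
                nodes[node["ParentID"]]["children"].append(node)

        print(len(roots), len(roots[0]["children"]))  # 2 1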

  • Adding a column to a data.frame

    - by Susanne Dreisigacker
    I have the data.frame below. I want to add a column that classifies my data according to column 1 (h_no), in such a way that the first series of h_no (1, 2, 3, 4) is class 1, the second series of h_no (1 to 7) is class 2, and so on, as indicated in the last column.

        h_no h_freq  h_freqsq    class
        1    0.09091 0.008264628 1
        2    0.00000 0.000000000 1
        3    0.04545 0.002065702 1
        4    0.00000 0.000000000 1
        1    0.13636 0.018594050 2
        2    0.00000 0.000000000 2
        3    0.00000 0.000000000 2
        4    0.04545 0.002065702 2
        5    0.31818 0.101238512 2
        6    0.00000 0.000000000 2
        7    0.50000 0.250000000 2
        1    0.13636 0.018594050 3
        2    0.09091 0.008264628 3
        3    0.40909 0.167354628 3
        4    0.04545 0.002065702 3
