Search Results

Search found 67143 results on 2686 pages for 'complex data types'.


  • Explaining persistent data structures in simple terms

    - by Jason Baker
    I'm working on a library for Python that implements some persistent data structures (mainly as a learning exercise). However, I'm beginning to learn that explaining persistent data structures to people unfamiliar with them can be difficult. Can someone help me think of an easy (or at least the least complicated) way to describe persistent data structures to them? I've had a couple of people tell me that the documentation that I have is somewhat confusing. (And before anyone asks, no I don't mean persistent data structures as in persisted to the file system. Google persistent data structures if you're unclear on this.)
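
    One concrete description that tends to land: a persistent structure never changes in place - every "update" hands back a new version, and old versions stay valid because the versions share structure. A minimal sketch of that idea in Python (not the asker's library):

        class Node:
            # One cell of an immutable singly linked list.
            def __init__(self, value, rest=None):
                self.value = value
                self.rest = rest

        def cons(value, lst):
            # O(1) "update": the new version reuses the old list instead of copying it.
            return Node(value, lst)

        v1 = cons(1, cons(2, cons(3, None)))  # version 1: 1 -> 2 -> 3
        v2 = cons(0, v1)                      # version 2: 0 -> 1 -> 2 -> 3
        # v1 is untouched and still usable; v2 shares all of v1's nodes.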

    Read the article

  • Aging Data Structure in C#

    - by thelsdj
    I want a data structure that will allow querying how many items were added in the last X minutes. An item may be a simple identifier or a more complex data structure; preferably the timestamp of the item will be stored in the item itself rather than outside it (as in a hash or similar - I wouldn't want problems with multiple items having the same timestamp). So far it seems that with LINQ I could easily filter items with a timestamp greater than a given time and aggregate a count, though I'm hesitant to work .NET 3.5-specific stuff into my production environment yet. Are there any other suggestions for a similar data structure? The other part I'm interested in is aging old data out: if I'm only going to be asking for counts of items less than 6 hours old, I would like anything older than that to be removed from my data structure, because this may be a long-running program.
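
    The shape being described is a sliding-window counter. A sketch of the idea in Python (the question is about C#, where a Queue<T> would play the same role as the deque here): items arrive in timestamp order, and counting evicts expired entries from the front.

        from collections import deque
        import time

        class SlidingWindowCounter:
            # Counts items seen in the last `window_seconds`; older items age out.
            def __init__(self, window_seconds):
                self.window = window_seconds
                self.items = deque()  # (timestamp, item) pairs, oldest first

            def add(self, item, timestamp=None):
                # Items must be added in timestamp order for eviction to work.
                self.items.append((time.time() if timestamp is None else timestamp, item))

            def count(self, now=None):
                self._evict(time.time() if now is None else now)
                return len(self.items)

            def _evict(self, now):
                # Aged-out entries sit at the front, so pop until within the window.
                while self.items and self.items[0][0] < now - self.window:
                    self.items.popleft()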

    Read the article

  • Data mining logs to locate a bug

    - by gooli
    I'm working on a data distribution application which receives data from a source and distributes that data to multiple target applications. After successfully distributing several messages per second for 8 days, it missed a single message and did not deliver it properly to the clients. As I was looking at the logs I tried to find something that was special about the time the miss happened - either in the data, its rate, or some other condition - but couldn't find anything. Is there any data mining technique I can use to identify how that specific event differs from other events?
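
    One standard technique for this is outlier detection: bucket the log into time windows, compute a feature per window (message rate, sizes, error counts), and score how far the window containing the miss sits from the rest. A minimal sketch in Python with hypothetical per-minute message counts:

        import statistics

        def outlier_minutes(counts_per_minute, threshold=3.0):
            # counts_per_minute: {minute_label: message_count}, built from the logs.
            counts = list(counts_per_minute.values())
            mean = statistics.mean(counts)
            stdev = statistics.stdev(counts)
            # Return the minutes whose rate sits more than `threshold` sigmas from the mean.
            return {minute: round((count - mean) / stdev, 2)
                    for minute, count in counts_per_minute.items()
                    if stdev and abs(count - mean) > threshold * stdev}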

    Read the article

  • How to build a data flow?

    - by salvationishere
    I am running Visual Studio 2008 and following the SSIS tutorial described at: http://msdn.microsoft.com/en-us/library/ms167106.aspx I finished all of the tasks but am getting the following errors:

    Error 1: Validation error. Extract Sample Currency Data: Extract Sample Currency Data: input column "CurrencyAlternateKey" (123) has lineage ID 55 that was not previously used in the Data Flow task. Lesson 1.dtsx 0 0

    Error 2: Validation error. Extract Sample Currency Data SSIS.Pipeline: input column "CurrencyAlternateKey" (123) has lineage ID 55 that was not previously used in the Data Flow task. Lesson 1.dtsx 0 0

    Can you tell me what I need to do to make this build without errors?

    Read the article

  • Parsing the Content-Disposition header's filename in multipart/form-data

    - by Artyom
    Hello. According to the RFC, the filename field of the Content-Disposition header in multipart/form-data receives as its parameter an HTTP quoted string - a string between quotes in which the character '\' can escape any other ASCII character. The problem is that web browsers don't do this. IE6 sends:

        Content-Disposition: form-data; name="file"; filename="z:\tmp\test.txt"

    instead of the expected:

        Content-Disposition: form-data; name="file"; filename="z:\\tmp\\test.txt"

    The first form, parsed according to the quoting rules, yields z:tmptest.txt instead of z:\tmp\test.txt. Firefox, Konqueror and Chrome don't escape '"' characters, for example:

        Content-Disposition: form-data; name="file"; filename=""test".txt"

    instead of the expected:

        Content-Disposition: form-data; name="file"; filename="\"test\".txt"

    So... how would you suggest dealing with this issue?
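
    A tolerant parser can sidestep both browser bugs by ignoring the escaping rules entirely: capture everything between filename=" and the last closing quote, then strip any client-side path. A sketch in Python (an illustration of the strategy, not a full header parser):

        import re

        def parse_filename(content_disposition):
            # Capture greedily up to the LAST double quote, so unescaped inner
            # quotes (Firefox/Chrome/Konqueror) survive intact.
            m = re.search(r'filename="(.*)"', content_disposition)
            if m is None:
                return None
            raw = m.group(1)
            # IE sends a full client-side path; keep only the basename.
            return raw.rsplit('\\', 1)[-1].rsplit('/', 1)[-1]

        print(parse_filename('form-data; name="file"; filename="z:\\tmp\\test.txt"'))  # test.txt
        print(parse_filename('form-data; name="file"; filename=""test".txt"'))         # "test".txt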

    Read the article

  • Last Observation Carried Forward In a data frame?

    - by Tal Galili
    Hi all, I wish to implement "Last Observation Carried Forward" for a data set I am working on, which has missing values at the end of it. Here is a simple function to do it (question after it):

        LOCF <- function(x) {
          # Last Observation Carried Forward (for a left-to-right series)
          LOCF <- max(which(!is.na(x)))  # the location of the last observation to carry forward
          x[LOCF:length(x)] <- x[LOCF]
          return(x)
        }
        # examples:
        LOCF(c(1, 2, 3, 4, NA, NA))
        LOCF(c(1, NA, 3, 4, NA, NA))

    Now this works great for simple vectors. But if I were to try to use it on a data frame:

        a <- data.frame(rep("a", 4), 1:4, 1:4, c(1, NA, NA, NA))
        a
        t(apply(a, 1, LOCF))  # will make a mess

    it will turn my data frame into a character matrix. Can you think of a way to do LOCF on a data.frame without turning it into a matrix? (I could use loops and such to correct the mess, but would love a more elegant solution.) Cheers, Tal

    Read the article

  • How to preserve data integrity while minimizing the transmission size

    - by user1500578
    We have sensors in the wild that send their data to a server every day via TCP/IP, either through 3G or through satellite at the physical layer. The sensors can automatically switch from one to the other depending on their location and the quality of the signal with the local 3G operator. Given that 3G and satellite communications are very expensive, we want to minimize the amount of data we send, but we also want to protect ourselves from lost data. What would be the best strategy to ensure with reasonable certainty that the integrity of our data is preserved, while minimizing the amount of redundancy, i.e. the amount of data transmitted? I've read about the zfec codec, but I'm not sure if we need to transmit all the chunks, or if we need to send a hash code along with each chunk.
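
    For scale, here is what the hash-per-chunk option costs - a sketch in Python, assuming the application already splits uploads into chunks (TCP's own checksums catch most transmission errors, so the digest mainly guards against truncation and application-level loss). The 4-byte truncation is an arbitrary bandwidth/safety trade-off:

        import hashlib

        def frame(chunk: bytes, digest_bytes: int = 4) -> bytes:
            # Append a truncated SHA-256 digest: a few extra bytes per chunk.
            return chunk + hashlib.sha256(chunk).digest()[:digest_bytes]

        def unframe(framed: bytes, digest_bytes: int = 4) -> bytes:
            chunk, digest = framed[:-digest_bytes], framed[-digest_bytes:]
            if hashlib.sha256(chunk).digest()[:digest_bytes] != digest:
                raise ValueError("corrupt chunk - request retransmission")
            return chunk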

    Read the article

  • Merging MySQL Structure and Data

    - by Shahid
    I have a MySQL database running on a deployment machine which also contains data. Then I have another MySQL database which has evolved in terms of STRUCTURE + DATA over some time. I need a way to merge the changes (ONLY) for both structure and data into the DB on the deployment machine without disturbing the existing data. Does anyone know of a tool which can do this safely? I have had a look at a few comparison tools, but I need a tool which can automate the merge operation. Note also that most of the data in the tables is BINARY, so I can't use many file comparison tools. Does anyone know of a solution to this? Thanks.

    Read the article

  • JavaScript data grid for millions of rows

    - by Rudiger
    I need to present millions of rows of data to users in a grid using JavaScript. I don't want the user thinking about viewing finite pieces of data; it should appear that all of the data is available. Rather than downloading all the data at once, small chunks are downloaded as the user comes to them (by scrolling, keying in row numbers, searching). The rows will not be edited through this front end, so read-only answers are acceptable. What JavaScript data grids are best for this kind of seamless paging?

    Read the article

  • Letting a WCF data service implement another contract

    - by Wasim
    Hi all, I have a WCF data service with the standard configuration. I want to add another piece of functionality to it, so I built a contract interface and let my WCF data service implement it. Now I see in the service the InitializeService method and the contract interface methods. When I try to connect to the service, I get an error that there is no endpoint declared for the contract I added. How can I do that? Examples? Links? I chose to add the contract interface to the WCF data service rather than adding another service because the client application uses the WCF data service's generated objects, and I want to use those same objects for operations not related to data, for more complex processing. If I put the methods in another service, then I have type incompatibilities. Thanks in advance...

    Read the article

  • Advice on simple efficient way to store web form data when no db/auth required

    - by ted776
    Hi, I have a situation where I need to provide an efficient way to process and store comments submitted via a web form. I would normally use PHP and either MySQL or XML to store the data, but this is slightly different: this web form will only be temporarily available in a closed LAN environment, and all I need to do is process the form data and store it in a format which can be accessed by another application on the LAN (Adobe Director). Each request made by the Director app should pop the stack of data. I'm wondering how best to store the data for this type of situation, as it's not something I would normally do. I'm thinking possibly storing the data in an XML file, but any advice would be great!

    Read the article

  • JavaScript - question regarding data structure

    - by orokusaki
    I'm trying to calculate somebody's bowel health based on a points system. I can edit the data structure or logic in any way. I'm just trying to write a function and data structure to handle this ability. Pseudo calculator function:

        // Bowel health calculator
        var points = 0;
        if age is from 30 to 34: points += 1
        if age is from 35 to 40: points += 2
        if daily BMs is from 1 to 3: points -= 1
        if daily BMs is from 4 to 6: points -= 2
        return points;

    Pseudo data structure:

        var points_map = {
            age: { '30-34': 1, '35-40': 2 },
            dbm: { '1-3': -1, '4-6': -2 }
        };

    I have a full spreadsheet of data like this, and I am trying to write a DRY version of the code and a DRY version of this data (i.e., probably not a string for the '30-34', etc.) in order to handle this sort of thing without a humongous number of switch statements.
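
    One DRY alternative to the string keys is to store each rule as a (low, high, points) range and look the value up. Sketched in Python for brevity; the same table-of-ranges idea ports directly to JavaScript:

        POINTS_MAP = {
            'age': [(30, 34, 1), (35, 40, 2)],   # (low, high, points)
            'dbm': [(1, 3, -1), (4, 6, -2)],
        }

        def score(values, points_map=POINTS_MAP):
            points = 0
            for key, value in values.items():
                for low, high, pts in points_map.get(key, []):
                    if low <= value <= high:
                        points += pts
                        break   # ranges don't overlap, so stop at the first hit
            return points

        print(score({'age': 32, 'dbm': 5}))  # 1 + (-2) == -1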

    Read the article

  • Cloud HUGE data storage options?

    - by ToughPal
    Hi, does anyone have a good suggestion on how to do video recording? We have a camera that can record and then stream live video to a server, so this means we can have thousands of cameras sending data 24x7 for recording. We will store data for 7 / 14 / 30 days depending on the package. A camera sending data to the server stores 1.5 GB per day, which means traffic of 1.5 GB / day / camera, or 45 GB / month / camera (data + bandwidth for one camera). Please let me know the most cost-effective way to get this data stored. Thanks!

    Read the article

  • How to Embed/Link binary data into a C++ DLL

    - by CrimsonX
    So I have a Visual Studio 2008 project which references a large amount of binary data. I would like to package the binary data much like you can do with C# by adding it as a "resource" and compiling it into a DLL. Let's say all my data has an extension of ".data" and is currently being read from the Visual Studio project. Is there a way to compile or link the data into the DLL which is calling it? I've looked at some Google links for this and so far I haven't come up with anything - the only possible solution I've found is to use something like ResGen to create a .resources file and then link it using the Assembly Linker with the /Embed or /Link flags. I don't think it'd work properly though, because I don't have text files to create the .resources files from, but rather the binary files themselves. Any advice?

    Read the article

  • Managing test data for JUnit tests

    - by nobody
    Hi, we are facing a problem in managing test data (XMLs which are used to create mock objects). The data which we have currently has evolved over a long period of time. Each time we add new functionality or a test case, we add new data to test that functionality. Now the problem is, when a business requirement changes the format (like the length or format of a variable), or makes any change the test data doesn't support, we need to change the entire test data set, which is hundreds of MBs in size. Could anyone suggest a better method or process to overcome this problem? Any suggestion would be appreciated.

    Read the article

  • How to extract data from Google Analytics and build a data warehouse (webhouse) from it?

    - by nkaur301
    I have clickstream data such as referring URL, top landing pages, and top exit pages, and metrics such as page views, number of visits, and bounces, all in Google Analytics. I am required to build a data warehouse from scratch (which I believe is known as a web-house) from this data. My questions are: 1) Is it possible? The data increases every day (some in terms of metrics or measures such as visits, and some in terms of new referring sites); how would the process of loading the warehouse work? 2) What ETL tool would help me achieve this? Pentaho, I believe, has a way to pull data from Google Analytics - has anyone used it? How does that process go? Any references or links would be appreciated besides answers.

    Read the article

  • SQL SERVER – 5 Tips for Improving Your Data with expressor Studio

    - by pinaldave
    It’s no secret that bad data leads to bad decisions and poor results. However, how do you prevent dirty data from taking up residency in your data store? Some might argue that it’s the responsibility of the person sending you the data. While that may be true, in practice that will rarely hold up. It doesn’t matter how many times you ask, you will get the data however they decide to provide it.

    So now you have bad data. What constitutes bad data? There are quite a few valid answers, for example:

    - Invalid date values
    - Inappropriate characters
    - Wrong data
    - Values that exceed a pre-set threshold

    While it is certainly possible to write your own scripts and custom SQL to identify and deal with these data anomalies, that effort often takes too long and becomes difficult to maintain. Instead, leveraging an ETL tool like expressor Studio makes the data cleansing process much easier and faster. Below are some tips for leveraging expressor to get your data into tip-top shape.

    Tip 1: Build reusable data objects with embedded cleansing rules

    One of the new features in expressor Studio 3.2 is the ability to define constraints at the metadata level. Using expressor’s concept of Semantic Types, you can define reusable data objects that have embedded logic such as constraints for dealing with dirty data. Once defined, they can be saved as a shared atomic type and then re-applied to other data attributes in other schemas. For example, I can define a constraint on zip code and save the constraint rules as a shared atomic type called zip_type. The next time I get a different data source with a schema that also contains a zip code field, I can simply apply the shared atomic type and the previously defined constraints will be automatically applied.

    Tip 2: Unlock the power of regular expressions in Semantic Types

    Another powerful feature introduced in expressor Studio 3.2 is the option to use regular expressions as constraints. A regular expression is used to identify patterns within data. The patterns could be something as simple as a date format or something much more complex such as a street address. For example, I could define that a valid IP address should be made up of 4 numbers, each 0 to 255, separated by periods. So 192.168.23.123 might be a valid IP address whereas 888.777.0.123 would not be. How can I account for this using regular expressions? A very simple regular expression that would look for any 4 sets of 1 to 3 digits separated by periods would be:

        ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$

    Alternatively, the following would be the exact check for truly valid IP addresses as we defined them above:

        ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$

    In expressor, we would enter this regular expression as a constraint and select the corrective action to be ‘Escalate’, meaning that the expressor Dataflow operator will decide what to do. Some of the options include rejecting the offending record, skipping it, or aborting the dataflow.

    Tip 3: Email pattern expressions that might come in handy

    In the example schema that I am using, there’s a field for email. Email addresses are often entered incorrectly because people are trying to avoid spam. While there are a lot of different ways to define what constitutes a valid email address, a quick search online yields a couple of really useful regular expressions for validating them. This one is short and sweet (source: http://www.regular-expressions.info/):

        \b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b

    This one is more specific about which characters are allowed (source: http://regexlib.com/REDetails.aspx?regexp_id=26):

        ^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$

    Tip 4: Reject “dirty data” for analysis or further processing

    Yet another feature introduced in expressor Studio 3.2 is the ability to reject records based on constraint violations. To capture rejected records on input, simply specify Reject Record in the Error Handling setting for the Read File operator, then attach a Write File operator to the reject port of the Read File operator. Next, in the Write File operator, you can configure the expressor operator in a similar way to the Read File; the key difference is that the schema needs to be derived from the upstream operator. Once configured, expressor will output rejected records to the file you specified. In addition to the rejected records, expressor also captures some diagnostic information that will be helpful towards identifying why the record was rejected. This makes diagnosing errors much easier!

    Tip 5: Use a Filter or Transform after the initial cleansing to finish the job

    Sometimes you may want to predicate the data cleansing on a more complex set of conditions. For example, I may only be interested in processing data containing males over the age of 25 in certain zip codes. Using an expressor Filter operator, you can define the conditional logic which isolates the records of importance away from the others. Alternatively, the expressor Transform operator can be used to alter the input value via a user-defined algorithm or transformation. It also supports the use of conditional logic, and data can be rejected based on constraint violations.

    However, the best tip I can leave you with is to not constrain your solution design approach - expressor operators can be combined in many different ways to achieve the desired results. For example, in one expressor Dataflow, I can post-process the reject data from the Filter which did not meet my pre-defined criteria and, if successful, Funnel it back into the flow so that it gets written to the target table.

    I continue to be impressed that expressor offers all this functionality as part of their FREE expressor Studio desktop ETL tool, which you can download from here. Their Studio ETL tool is absolutely free, and they are very open about saying that if you want to deploy their software on a dedicated Windows Server, you need to purchase their server software, whose pricing is posted on their website.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
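
    The Tip 2 regexes are easy to sanity-check outside expressor before pasting them into a constraint. A quick check of the strict IP-address pattern using Python's re module (condensed here with a {3} repetition, which is equivalent to the four-octet version above):

        import re

        ip_pattern = re.compile(
            r'^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])'
            r'(\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])){3}$'
        )

        print(bool(ip_pattern.match('192.168.23.123')))  # True
        print(bool(ip_pattern.match('888.777.0.123')))   # False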

    Read the article

  • Problem when trying to configure Enterprise Library 5.0 (Data Access Application Block)

    - by Phil
    Hi there Stack Overflow, I am running into some problems while trying to get the DAAB from Enterprise Library 5.0 running. I have followed the steps as per the tutorial, but am getting errors...

    1) Download / install Enterprise Library
    2) Add references to the blocks I need (Common / Data)
    3) Add the imports:

        Imports Microsoft.Practices.EnterpriseLibrary.Common
        Imports Microsoft.Practices.EnterpriseLibrary.Data

    4) Through the Enterprise Library config software, I open up the web.config from my site. I then click Blocks, then Add data settings..., fill in my details and save / close.
    5) I then (thinking setup is complete) try to get an instance of the database via:

        Dim db As Database = DatabaseFactory.CreateDatabase()

    6) I compile and receive the following error:

        Could not load file or assembly 'Microsoft.Practices.EnterpriseLibrary.Data, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) (C:\site\web.config line 4)

    Line 4 of my web.config was generated by the config tool and is:

        <section name="dataConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="true" />

    Am I missing a required step? Have I done the steps in the wrong order? Have I made a mistake? Thanks a lot for the assistance.

    Read the article

  • Removing and adding persistent stores to a Core Data application

    - by mkko
    I'm using Core Data in an iPhone application. I have multiple persistent stores that I switch between so that only one of the stores can be active at a time. I have one managed object context, and the different persistent stores are similar in data format (SQLite) and share the same managed object model. I'm importing the data to each persistent store from a respective XML file. The first import works fine, but after I remove the imported data (the persistent store and the physical file) and then re-import, Core Data gives me an error:

        *** Terminating app due to uncaught exception 'NSObjectInaccessibleException', reason: 'The NSManagedObject with ID:0x3c14e00 <x-coredata://6D14F11E-2EA7-4141-9BE8-53747DE6FCC6/Book/p2> has been invalidated.'

    This error comes from the save: method of NSManagedObjectContext. Before re-importing, I'm removing the persistent store from the persistent store coordinator and removing the physical file, so everything should be as if the import were being done for the first time. Also, the objects in the managed object context are removed and the context is sent the reset: message (I don't know if this is actually needed). Could someone help me out here? How should the persistent store be switched? I'm basically using the same logic as tutored here: http://blog.sallarp.com/iphone-core-data-uitableview-drill-down/ Thanks in advance.

    Read the article

  • Core Data and @unionOfSets

    - by KevinD
    I'm having trouble using @unionOfSets on my Core Data objects.

        NSLog(@"%@", [NSApp valueForKeyPath:@"delegate.mainWindowController.sidebarViewController.arrayController.selection.list.listElement"]);

    prints the set of listElements as expected:

        2010-03-24 18:11:15.844 Pirouette[7459:80f] Relationship objects for {(
            (entity: PRPlaylistElement; id: 0x10a71b0 ; data: ),
            (entity: PRPlaylistElement; id: 0x10ac7d0 ; data: ),
            (entity: PRPlaylistElement; id: 0x10acf60 ; data: ),
            (entity: PRPlaylistElement; id: 0x10a6850 ; data: )

    However, when I try to get the set of file objects for each of the list elements:

        NSLog(@"%@", [NSApp valueForKeyPath:@"delegate.mainWindowController.sidebarViewController.arrayController.selection.list.listElement.@unionOfArrays.file"]);

    I get the following error:

        2010-03-24 18:16:45.843 Pirouette[7505:80f] An uncaught exception was raised
        2010-03-24 18:16:45.844 Pirouette[7505:80f] [<NSCFSet 0x10415e0> valueForKeyPath:]: this class does not implement the unionOfArrays operation.
        2010-03-24 18:16:45.847 Pirouette[7505:80f] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '[<NSCFSet 0x10415e0> valueForKeyPath:]: this class does not implement the unionOfArrays operation.'

    I'm confused, because I thought to-many relationships in Core Data were NSSets.

    Read the article

  • Handling incremental Data Modeling Changes in Functional Programming

    - by Adam Gent
    Most of the problems I have to solve in my job as a developer have to do with data modeling. For example, in an OOP web application world I often have to change the data properties that are in an object to meet new requirements. If I'm lucky I don't even need to programmatically add new "behavior" code (functions, methods). Instead I can declaratively add validation and even UI options by annotating the property (Java). In functional programming it seems that adding new data properties requires lots of code changes because of pattern matching and data constructors (Haskell, ML). How do I minimize this problem? This seems to be a recognized problem, as Xavier Leroy states nicely on page 24 of "Objects and Classes vs. Modules". To summarize for those who don't have a PostScript viewer, it basically says that FP languages are better than OOP languages for adding new behavior over data objects, but OOP languages are better for adding new data objects/properties. Are there any design patterns used in FP languages to help mitigate this problem? I have read Philip Wadler's recommendation of using monads to help with this modularity problem, but I'm not sure I understand how.

    Read the article

  • Archiver Securing SQLite Data without using Encryption on iPhone

    - by Redrocks
    I'm developing an iPhone app that uses Core Data with a SQLite data store and lots of images in the resource bundle. I want a "simple" way to obfuscate the file structure of the SQLite database and the image files to prevent the casual hacker/unscrupulous developer from gaining access to them. When the app is deployed, the database file and image files would be obfuscated. Upon launching, the app would read in and un-obfuscate the database file, write the un-obfuscated version to the user's "tmp" directory for use by Core Data, and read/un-obfuscate image files as needed. I'd like to apply a simple algorithm to the files that would scramble/manipulate the file data so that the SQLite database contents aren't discernible when the db is opened in a text editor, and so that neither file type is recognized by other applications (SQLite Manager, Photoshop, etc.). It seems, from the information I've read, that I could use NSFileManager, NSKeyedArchiver, and NSData to accomplish this, but I'm not sure how to proceed. I've been developing software for many years but I'm new to everything CocoaTouch, Mac and iPhone. I've also never had to secure/obfuscate my data, so this is new. Any thoughts, suggestions, or links to solutions are appreciated.
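
    The simplest file-scrambling transform that meets that bar is a repeating-key XOR. Sketched in Python for brevity (on the device the same loop would run over the bytes of an NSData); note this is obfuscation only, not encryption, and the key here is an arbitrary example:

        def xor_obfuscate(data: bytes, key: bytes = b'\x5a\xc3\x91\x2e') -> bytes:
            # Repeating-key XOR: enough to defeat file-type sniffing, not an attacker.
            return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

        sample = b'SQLite format 3\x00'            # the magic header tools recognize
        scrambled = xor_obfuscate(sample)          # no longer starts with the magic
        assert xor_obfuscate(scrambled) == sample  # the transform is its own inverse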

    Read the article

  • How to sort data in a table data structure in Java?

    - by rgksugan
    I need to sort data based on the third column of the table data structure. I tried based on the answers to the following question, but my sorting does not work. Please help me with this. Here goes my code:

        Object[] data = new Object[y];
        rst.beforeFirst();
        while (rst.next()) {
            int p_id = Integer.parseInt(rst.getString(1));
            String sw2 = "select sum(quantity) from tbl_order_detail where product_id=" + p_id;
            rst1 = stmt1.executeQuery(sw2);
            rst1.next();
            String sw3 = "select max(order_date) from tbl_order where tbl_order.`Order_ID` in (select tbl_order_detail.`Order_ID` from tbl_order_detail where product_id=" + p_id + ")";
            rst2 = stmt2.executeQuery(sw3);
            rst2.next();
            data[i] = new Object[]{rst.getString(2), rst.getString(3), Integer.valueOf(rst1.getString(1)), rst2.getString(1)};
            i++;
        }
        ColumnComparator cc = new ColumnComparator(2);  // sort on the third column
        Arrays.sort(data, cc);
        if (i == 0) {
            table.addCell("");
            table.addCell("");
            table.addCell("");
            table.addCell("");
        } else {
            for (int j = 0; j < y; j++) {
                Object[] theRow = (Object[]) data[j];
                table.addCell((String) theRow[0]);
                table.addCell((String) theRow[1]);
                table.addCell(String.valueOf(theRow[2]));  // the third column is an Integer, not a String
                table.addCell((String) theRow[3]);
            }
        }

    Read the article

  • Moving UITableView cells and maintaining consistent data

    - by Mark F
    I've enabled editing mode and moving cells around to allow users to position table view content in the order they please. I'm using Core Data as the data source, which sorts the content by the attribute "userOrder". When content is first inserted, userOrder is set to a random value. The idea is that when the user moves a cell, the userOrder of that cell changes to accommodate its new position. These are the problems I'm running into: 1) successfully saving the new location of the cell and adjusting the locations of all influenced cells; 2) keeping the data consistent - the table view handles the movement fine, but when I tap the new location of the cell, it displays data for the old cell that used to be at that location, and the data of all influenced cells gets messed up as well. I know I have to implement this in:

        - (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)sourceIndexPath toIndexPath:(NSIndexPath *)destinationIndexPath {}

    I just don't know how. The Apple docs are not particularly helpful if you are using Core Data, as in my situation. Any guidance greatly appreciated!
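
    The usual fix is to renumber userOrder for every affected row whenever a move happens, then save the context. Sketched in Python rather than Objective-C to show just the renumbering logic (Row stands in for the managed object):

        class Row:
            # Stand-in for the Core Data managed object with a userOrder attribute.
            def __init__(self, name, user_order):
                self.name, self.user_order = name, user_order

        def move_row(rows, source, destination):
            # rows: the fetched objects, already sorted by user_order.
            rows.insert(destination, rows.pop(source))
            for index, row in enumerate(rows):
                row.user_order = index  # renumber every affected row...
            return rows                 # ...then save the managed object context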

    Read the article

  • iPhone SDK / Core Data usage scenario, similar to GAE data store?

    - by boliva
    Hi all, I am currently rewriting a map-based app which I wrote in the past specifically for 2.2.1 devices. Originally I wrote it to make use of SQLite databases, but I would like to try to migrate it over to Core Data, now that it's available on 3.X (which I am rewriting for). I am fairly experienced in iPhone/Obj-C development, SQL, and server backend technologies, but I have never had the chance to work with Core Data, so I don't really know if it's the appropriate tool for what I am trying to accomplish. The app works on a limited area of a map over which there are about 4000 placemarks, with different kinds of icons and sizes. Of course not all 4000 placemarks are shown at once, but only those currently visible in the map viewport, depending on the zoom level. What I am doing right now is, after the user moves the map in any way (panning or zooming), requesting from the backend server the information for the placemarks that would be visible given the viewport coordinate boundaries and zoom level. However the process isn't as smooth as I'd like (the backend sends its response in XML and I am compressing it using gzip); it takes anywhere from 1 to 3 seconds to update the display of the placemarks after the user stops moving the map. What I would like to do is prefetch all the placemark data at app launch and use it through the app's lifetime - I don't mind storing it for later use because the data should be dynamic. The way I would do it right now is, after retrieving all the data, to store it in an SQLite db which I would query later, whenever the user moves the map, to return only the placemarks inside the viewport coordinate boundaries and specific to a given zoom level. Now, the question itself is whether there is some more "native", object-driven way to carry out these queries, which got me thinking about Core Data and whether it is in any way similar to what Google App Engine offers through its datastore, where you can fetch a number of objects from the backend given a certain query or criteria, without resorting to an SQL query itself. Like I said before, I don't have any experience with Core Data, but I have a pretty deep understanding of Obj-C and iPhone development, as well as SQL databases. Any guidance on how to achieve what I'm trying (if possible at all) would be greatly appreciated.
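
    For reference, the SQLite fallback described above is only a few lines. A sketch with Python's sqlite3 and a hypothetical placemarks table (the id, lat, lon, icon and min_zoom columns are assumptions for illustration):

        import sqlite3

        def placemarks_in_viewport(conn, south, north, west, east, zoom):
            # Bounding-box query; an index on (lat, lon) keeps this fast for ~4000 rows.
            return conn.execute(
                """SELECT id, lat, lon, icon FROM placemarks
                   WHERE lat BETWEEN ? AND ?
                     AND lon BETWEEN ? AND ?
                     AND min_zoom <= ?""",
                (south, north, west, east, zoom),
            ).fetchall()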

    Read the article
