Search Results

Search found 58576 results on 2344 pages for 'consolidate data'.

Page 35/2344 | < Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >

  • Android App Widget: Data storage

    - by Jeffrey
    Hello everyone, I'm implementing a home screen app widget. I was wondering which is better to store/read data: SharedPreferences or a SQLite database? The data is accessed from an AppWidgetProvider (similar to a BroadcastReceiver), and any given instance of the widget displays different data based on appWidgetId. Is one way or the other frowned upon? Thanks for your time.
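
     A minimal sketch of the SharedPreferences route, keying each value by appWidgetId so every widget instance gets its own data (the preferences file and key names here are illustrative, not from the post):

         // Persist and read one widget instance's state in SharedPreferences.
         // Callable from AppWidgetProvider.onUpdate() via its Context argument.
         static void saveWidgetText(Context context, int appWidgetId, String text) {
             SharedPreferences prefs =
                     context.getSharedPreferences("widget_prefs", Context.MODE_PRIVATE);
             prefs.edit().putString("text_" + appWidgetId, text).apply();
         }

         static String loadWidgetText(Context context, int appWidgetId) {
             SharedPreferences prefs =
                     context.getSharedPreferences("widget_prefs", Context.MODE_PRIVATE);
             return prefs.getString("text_" + appWidgetId, "");
         }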

    Read the article

  • Sending html data via $post fails

    - by Neil
     I am using the code below, which is fine, but when I use the second snippet in an attempt to send an HTML fragment to a processing page to save it as a file, I get nothing back. I have tried using ajax with processData set to false and dataTypes of html, text and xml, but nothing works. I can't find anything on this, so I guess I must be missing something fairly trivial, but I've been at it for 3 hours now.

     This works:

         $.post("SaveFile.aspx", {f: "test4.htm", c: "This is a test"}, function(data){
             alert(data);
         }, "text");

     This fails:

         $.post("SaveFile.aspx", {f: "test4.htm", c: "<h1>This is a test</h1>"}, function(data){
             alert(data);
         }, "text");
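
     A hedged aside: the only difference between the two calls is the markup in the posted value, and ASP.NET pages reject form fields containing markup by default (request validation). A minimal sketch of one common workaround - escaping the fragment client-side, assuming SaveFile.aspx decodes it before saving:

         // Escape angle brackets so request validation does not reject the
         // field; the server must reverse this before writing the file.
         var fragment = "<h1>This is a test</h1>";
         $.post("SaveFile.aspx",
                {f: "test4.htm", c: fragment.replace(/</g, "&lt;").replace(/>/g, "&gt;")},
                function(data){ alert(data); },
                "text");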

    Read the article

  • Core Data Null Relationship

    - by Dylan Copeland
     I have a to-one relationship in my data model with Core Data. I'm trying to set the value of the relationship, but Core Data keeps thinking that it's nil. The "creatorUser" relationship is not optional, so when I go to save my managed object context, Core Data gives errors because it thinks "creatorUser" is nil. Any help would be greatly appreciated.

         NSManagedObject *teamManagedObject =
             [NSEntityDescription insertNewObjectForEntityForName:@"DCTeam"
                                            inManagedObjectContext:_managedObjectContext];

         // Creator Properties
         NSManagedObject *creator = [self userForID:[ticketInfo objectForKey:@"userid"]];
         if (!creator) {
             creator = [NSEntityDescription insertNewObjectForEntityForName:@"DCUser"
                                                     inManagedObjectContext:_managedObjectContext];
             [creator setValue:[personInfo objectForKey:@"userid"] forKey:@"userid"];
             [creator setValue:[personInfo objectForKey:@"creatorName"] forKey:@"name"];
         }
         [teamManagedObject setValue:creator forKey:@"creatorUser"];
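
     An editorial observation, not a confirmed fix: the lookup key comes from ticketInfo while the fallback values come from personInfo, so logging all three inputs just before the save makes any mismatch (or a nil creator) visible immediately:

         // Hedged debugging sketch - verify what actually feeds the relationship.
         NSLog(@"ticketInfo userid: %@", [ticketInfo objectForKey:@"userid"]);
         NSLog(@"personInfo userid: %@", [personInfo objectForKey:@"userid"]);
         NSLog(@"creator: %@", creator);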

    Read the article

  • Reimplementing data structures in the real world

    - by Jason
     The topic of algorithms class today was reimplementing data structures, specifically ArrayList in Java. The fact that you can customize a structure in various ways definitely got me interested, particularly with variations of the add() & iterator.remove() methods. But is reimplementing and customizing a data structure something that is of more interest to academics vs. real-world programmers? Has anyone reimplemented their own version of a data structure in a commercial application/program, and why did you pick that route over your particular language's implementation?

    Read the article

  • Elegant way of parsing Data files for Simulation

    - by sc_ray
     I am working on a project where I need to read in a lot of data from .dat files and use the data to perform simulations. The data in my .dat files looks as follows:

         DeviceID  InteractingDeviceID  InteractionStartTime  InteractionEndTime
         1         2                    1101                  1105

     The fields are tab delimited, and the row above means Device 1 interacted with Device 2 starting at 1101 ms and ending at 1105 ms. I have trace data sets that compile thousands of such interactions, and my job is to analyze them. The first step is to parse the file. The language of choice is C++. The approach I was thinking of taking was to read the file and, for every line read, create a Device object. This Device object will contain the property DeviceId and an array/vector of structs holding a list of all the devices the given DeviceId interacted with over the course of the simulation. The struct will contain the Interacting Device Id, Interaction Start Time and Interaction End Time. I have a two-fold question here: Is my approach correct? If I am on the right track, how do I rapidly parse these tab-delimited data files and create Device objects without excessive memory overhead using C++? A push in the right direction will be much appreciated. Thanks
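
     A minimal parsing sketch along the lines the post describes, assuming the file layout above (all names are illustrative); streaming with ifstream and grouping by device id keeps memory proportional to the data actually kept:

         #include <fstream>
         #include <map>
         #include <sstream>
         #include <string>
         #include <vector>

         struct Interaction {
             int otherId;
             long startMs;
             long endMs;
         };

         int main() {
             std::ifstream in("trace.dat");                  // hypothetical file name
             std::map<int, std::vector<Interaction> > byDevice;
             std::string line;
             std::getline(in, line);                         // skip the header row
             while (std::getline(in, line)) {
                 std::istringstream fields(line);            // >> skips tabs like spaces
                 int id;
                 Interaction x;
                 if (fields >> id >> x.otherId >> x.startMs >> x.endMs)
                     byDevice[id].push_back(x);
             }
             return 0;
         }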

    Read the article

  • List of generic algorithms and data structures

    - by Jake Petroules
     As part of a library project, I want to include a plethora of generic algorithms and data structures. This includes algorithms for searching and sorting, data structures like linked lists and binary trees, path-finding algorithms like A*... the works. Basically, any generic algorithm or data structure you can think of that you think might be useful in such a library, please post or add it to the list. Thanks! (NOTE: Because there is no single right answer I've of course placed this in community wiki... and also, please don't suggest algorithms which are too specialized to be provided by a generic library.)

     The List:

         Data structures
             AVL tree
             B-tree
             B*-tree
             B+-tree
             Binary tree
             Binary heap
             Binary search tree
             Linked lists
                 Singly linked list
                 Doubly linked list
             Stack
             Queue
         Sorting algorithms
             Binary tree sort
             Bubble sort
             Heapsort
             Insertion sort
             Merge sort
             Quicksort
             Selection sort
         Searching algorithms

    Read the article

  • Finding the most common values from given data

    - by Ben Shelock
     I have some data that looks something like this...

         +----------+----------+----------+
         | Column 1 | Column 2 | Column 3 |
         +----------+----------+----------+
         | Red      | Blue     | Green    |
         | Yellow   | Blue     | Pink     |
         | Black    | Grey     | Blue     |
         +----------+----------+----------+

     I need to go through this data and find the 3 most common colours. The raw data is in CSV, and there are likely to be thousands more rows. (link) What's the best way of doing this?
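
     The post doesn't name a language, so here is a minimal sketch in Python, assuming a headerless CSV (the file name is invented):

         # Count every colour across all columns, then take the top 3.
         import csv
         from collections import Counter

         counts = Counter()
         with open("colours.csv", newline="") as f:
             for row in csv.reader(f):
                 counts.update(cell.strip() for cell in row if cell.strip())

         print(counts.most_common(3))   # e.g. [('Blue', 3), ...]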

    Read the article

  • Deploying App With Dummy/Initial Data

    - by mattmccomb
     I have a Core Data based application that stores hierarchical data displayed using a series of UITableViews. To illustrate my app's functionality to the user, I would like to pre-populate my database/app with some dummy values. This data would be available upon installation on the user's iPhone/iPod Touch. What is the best way to achieve this?
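
     One common approach, sketched under the assumption that a seed store is generated at development time and shipped in the app bundle ("Seed.sqlite" is an invented name): copy it into the Documents directory on first launch, before the persistent store coordinator opens it.

         // Copy the bundled seed store on first launch only.
         NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                           NSUserDomainMask, YES) lastObject];
         NSString *storePath = [docs stringByAppendingPathComponent:@"Seed.sqlite"];
         NSFileManager *fm = [NSFileManager defaultManager];
         if (![fm fileExistsAtPath:storePath]) {
             NSString *seed = [[NSBundle mainBundle] pathForResource:@"Seed"
                                                              ofType:@"sqlite"];
             [fm copyItemAtPath:seed toPath:storePath error:NULL];
         }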

    Read the article

  • How to add database layer in core data application

    - by aditya
     Hi all, I am fairly new to Core Data and I have searched a lot on how to add the database to a Core Data application, so can anybody guide me on how to integrate the database layer? I have seen the iPhone tutorial on Core Data (i.e. the Books example), but I am not able to understand how the .sqlite file is included in that application.
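
     A sketch of the relevant step, with the caveat that Core Data creates the .sqlite file itself the first time an NSSQLiteStoreType store is added - it is not a file you add to the project by hand ("Books.sqlite" and the model variable are illustrative; applicationDocumentsDirectory is the helper from the Xcode project template):

         NSURL *storeURL = [[self applicationDocumentsDirectory]
                            URLByAppendingPathComponent:@"Books.sqlite"];
         NSError *error = nil;
         NSPersistentStoreCoordinator *psc =
             [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];
         if (![psc addPersistentStoreWithType:NSSQLiteStoreType
                                configuration:nil
                                          URL:storeURL
                                      options:nil
                                        error:&error]) {
             NSLog(@"Store error: %@", error);
         }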

    Read the article

  • Import data from .xls File

    - by tek3
     Hi, I am developing an application in which I have to get data from an .xls file. I am fairly new to iPhone development, so any pointers toward getting started will be very much helpful. The steps I am thinking of are:

     1) First, convert the .xls file to .csv format.
     2) Import the data from the .csv file into an SQLite database or Core Data. (I am not familiar with either of them, so kindly suggest which one to choose. I am looking forward to using Core Data.)

     Am I thinking in the right direction? Will be grateful for any kind of assistance. Thanks in advance.
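
     A naive sketch of the reading side of step 2, assuming a plain comma-separated export with no quoted fields (real CSV with embedded commas needs a proper parser; "data.csv" is an invented name):

         NSString *path = [[NSBundle mainBundle] pathForResource:@"data" ofType:@"csv"];
         NSString *csv = [NSString stringWithContentsOfFile:path
                                                   encoding:NSUTF8StringEncoding
                                                      error:NULL];
         for (NSString *line in [csv componentsSeparatedByString:@"\n"]) {
             NSArray *fields = [line componentsSeparatedByString:@","];
             // ... create one managed object (or SQLite row) per line here
         }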

    Read the article

  • Data munging and data import scripting

    - by morpheous
     I need to write some scripts to carry out some tasks on my server (running Ubuntu Server 8.04 LTS). The tasks are to be run periodically, so I will be running the scripts as cron jobs. I have divided the tasks into "group A" and "group B", because (in my mind at least) they are a bit different.

     Task Group A

     1) Import data from a file and possibly reformat it. By reformatting, I mean things like sanitizing the data, possibly normalizing it, and/or running calculations on 'columns' of the data.
     2) Import the munged data into a database. For now, I am mostly using MySQL for the vast majority of imports, although some files will be imported into an SQLite database. Note: the files will be mostly text files, although some are in a binary format (my own proprietary format, written by a C++ application I developed).

     Task Group B

     1) Extract data from the database.
     2) Perform calculations on the data and either insert into or update tables in the database.

     My coding experience is primarily as a C/C++ developer, although I have been using PHP as well for the last 2 years or so. I am from a Windows background, so I am still finding my feet in the Linux environment. My question is this: I need to write scripts to perform the tasks described above. Although I suppose I could write a few C++ applications to be used in the shell scripts, I think it may be better to write them in a scripting language (maybe this is a flawed assumption?). My thinking is that it would be easier to modify things in a script - no need to rebuild for changes to functionality. Additionally, data munging in C++ tends to involve more lines of code than in "natural" scripting languages such as Perl, Python, etc.

     Assuming that the majority of people on here agree that scripting is the way to go, herein lies my dilemma: which scripting language should I use for the tasks above (given my background)? My gut instinct tells me that Perl (shudder) would be the most obvious choice, BUT (and that is a big BUT) the mere mention of Perl makes my toes curl, as I had a very, very bad experience with it a while back. The syntax seems quite unnatural to me, despite how many times I have tried to learn it, so if possible I would really like to give it a miss. PHP (which I already know) I am also not sure is a good candidate for scripting on the CLI (I have not seen many examples of how to do this, so I may be wrong).

     The last thing I must mention is that IF I have to learn a new language in order to do this, I cannot afford (time constraint) to spend more than a day learning the key commands/features required (I can always learn the details of the language later, once I have actually deployed the scripts). So, which scripting language would you recommend (PHP, Python, Perl, [insert your favorite here]) - and most importantly, WHY? Or should I just stick to writing little C++ applications that I call in a shell script? Lastly, if you have suggested a scripting language, can you please show with a FEW lines (Perl mongers - I'm looking in your direction [nothing too cryptic!] ;) ) how I can use the language you suggested to do what I want? Hopefully, the lines you present will convince me that it can be done easily and elegantly in your language of choice.
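
     To give the flavour the post asks for, here is a minimal "group A"-shaped sketch in Python (one of the candidates named above), using only the standard library; the file, table and column names are invented, and a MySQL target would swap sqlite3 for a MySQL driver:

         # Read a tab-delimited file, sanitize the fields, load into SQLite.
         import csv
         import sqlite3

         con = sqlite3.connect("imports.db")
         con.execute("CREATE TABLE IF NOT EXISTS readings (name TEXT, value REAL)")
         with open("input.txt", newline="") as f:
             for name, raw in csv.reader(f, delimiter="\t"):
                 con.execute("INSERT INTO readings VALUES (?, ?)",
                             (name.strip(), float(raw)))
         con.commit()
         con.close()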

    Read the article

  • Handling missing/incomplete data in R

    - by doug
     As you would expect from a DSL aimed at data analysts, R handles missing/incomplete data very well. Many R functions have an 'na.rm' flag that you can set to TRUE to remove the NAs, but if you want to deal with this before the function call, then:

         # to replace each NA with 0:
         vx <- ifelse(is.na(vx), 0, vx)

         # to remove each NA:
         vx <- vx[!is.na(vx)]

         # to remove each entire row that contains an NA from a data frame:
         dfx <- dfx[complete.cases(dfx), ]

     All of these remove 'NA' or rows with an 'NA' in them. Sometimes this isn't quite what you want, though - making an 'NA'-excised copy of the data frame might be necessary for the next step in the workflow, but in subsequent steps you often want those rows back (e.g., to calculate a column-wise statistic for a column that has missing rows caused by a prior call to 'complete.cases', yet that column itself has no 'NA' values in it).

     To be as clear as possible about what I'm looking for: Python/NumPy has a class, 'masked array', with a 'mask' method, which lets you conceal - but not remove - NAs during a function call. Is there an analogous function in R?
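
     For comparison, a tiny sketch of the closest built-in behaviour: na.rm masks the NAs for a single call without touching the underlying vector, which is the per-call part of what masked arrays do.

         vx <- c(1, 2, NA, 4)
         mean(vx, na.rm = TRUE)   # 2.333...; vx itself still contains the NA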

    Read the article

  • ways to store data in c#

    - by Audel
     I am looking for ways to store data in a Windows Forms application in C#. I want to make the input data of a system persistent, so that when I close my program and open it again, the data is retrieved. What ways are there of doing this besides creating a linked database? Examples are gladly appreciated. Regards
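
     A minimal sketch of one database-free option - serializing a plain object to an XML file with the framework's XmlSerializer (the class, property and path names are invented):

         using System.IO;
         using System.Xml.Serialization;

         public class AppData
         {
             public string LastInput { get; set; }

             public void Save(string path)
             {
                 using (var w = new StreamWriter(path))
                     new XmlSerializer(typeof(AppData)).Serialize(w, this);
             }

             public static AppData Load(string path)
             {
                 if (!File.Exists(path)) return new AppData();
                 using (var r = new StreamReader(path))
                     return (AppData)new XmlSerializer(typeof(AppData)).Deserialize(r);
             }
         }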

    Read the article

  • jQuery: Use of undefined constant data assumed 'data'

    - by morpheous
     I am trying to use jQuery to make a synchronous AJAX post to a server and get a JSON response back. I want to set a javascript variable msg upon successful return. This is what my code looks like:

         $(document).ready(function(){
             $('#test').click(function(){
                 alert('called!');
                 jQuery.ajax({
                     async: false,
                     type: 'POST',
                     url: 'http://www.example.com',
                     data: 'id1=1&id2=2,&id3=3',
                     dataType: 'json',
                     success: function(data){
                         msg = data.msg;
                     },
                     error: function(xrq, status, et){ alert('foobar\'d!'); }
                 });
             });
         });

     [Edit] I was accidentally mixing PHP and Javascript in my previous code (now corrected). However, I now get this even more cryptic error message:

         uncaught exception: [Exception... "Component returned failure code: 0x80070057
         (NS_ERROR_ILLEGAL_VALUE) [nsIXMLHttpRequest.open]" nsresult: "0x80070057
         (NS_ERROR_ILLEGAL_VALUE)" location: "JS frame ::
         http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js :: anonymous :: line 19"
         data: no]

     What the ... ?
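
     A hedged pointer rather than a confirmed diagnosis: a failure inside nsIXMLHttpRequest.open is the classic signature of a cross-domain XHR being blocked by the browser's same-origin policy. A sketch of the same call against a same-origin URL ('/handler.php' is an invented path; the object form of data also avoids the stray comma in 'id2=2,'):

         jQuery.ajax({
             async: false,
             type: 'POST',
             url: '/handler.php',
             data: { id1: 1, id2: 2, id3: 3 },   // jQuery encodes this form
             dataType: 'json',
             success: function(data){ msg = data.msg; }
         });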

    Read the article

  • Using R to download zipped data file, extract, and import data

    - by Jeromy Anglim
     @EZGraphs on Twitter writes: "Lots of online csvs are zipped. Is there a way to download, unzip the archive, and load the data to a data.frame using R? #Rstats"

     I was also trying to do this today, but ended up just downloading the zip file manually. I tried something like:

         fileName <- "http://www.newcl.org/data/zipfiles/a1.zip"
         con1 <- unz(fileName, filename = "a1.dat", open = "r")

     but I feel as if I'm a long way off. Any thoughts?
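
     A short sketch of the usual pattern: unz() expects a local zip file, so download to a temporary file first, then read the member straight into a data frame (header = TRUE is an assumption about a1.dat):

         url <- "http://www.newcl.org/data/zipfiles/a1.zip"
         tmp <- tempfile(fileext = ".zip")
         download.file(url, tmp)
         df <- read.table(unz(tmp, "a1.dat"), header = TRUE)
         unlink(tmp)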

    Read the article

  • Detaching all entities of T to get fresh data

    - by Goran
     Let's take an example where two types of entities are loaded: Product and Category (Product.CategoryId - Category.Id). We have CRUD operations available on Products (not Categories). If Categories are updated on another screen (or by another user on the network), we would like to be able to reload the Categories while preserving the context we currently use, since we could be in the middle of editing data and do not want changes to be lost (and we cannot depend on saving, since we have incomplete data). Since there is no easy way to tell EF to get fresh data (added, removed and modified), we thought of two possible ways:

     1) Keeping Products attached to the context and Categories detached from it. This would mean we lose the ability to access Product.Category.Name, which we do sometimes require, so we would need to resolve it manually (for example when printing data).

     2) Detaching/attaching all Categories from the current context:

         Context.ChangeTracker.Entries()
                .Where(x => x.Entity.GetType() == typeof(T))
                .ForEach(x => x.State = EntityState.Detached);

     and then reloading the Categories, which will get fresh data.

     Do you find any problem with this second approach? We understand that this will require all constraints to be put on foreign keys, not navigation properties, since when detaching all Categories the Product.Category navigation properties would be reset to null. Also, there could be a potential performance problem, which we did not test, since a couple of thousand Products could be loaded and all would need to resolve the navigation property when reloading. Which of the two do you prefer, and is there a better way (EF6 + .NET 4.0)?
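
     A third option worth weighing, sketched under the assumption that the Categories stay attached (how Reload treats rows deleted in the database would need testing):

         // Ask EF6 to refresh tracked Category entities in place, leaving
         // edited Products untouched. Requires System.Data.Entity and
         // System.Linq; 'MyDbContext' is an invented name for the post's context.
         static void RefreshCategories(MyDbContext context)
         {
             foreach (var entry in context.ChangeTracker.Entries<Category>().ToList())
                 entry.Reload();                 // re-reads current database values
             context.Set<Category>().Load();     // pulls in rows added elsewhere
         }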

    Read the article

  • function to find common rows between more than two data frames in R

    - by biohazard
     I have 4 data frames and would like to find the rows whose values in a certain column do not exist in any of the other data frames. I wrote this function:

         # function to test presence of $Name in 3 other datasets
         common <- function(a, b, c, d) {
             is.B <- is.numeric(a$Name %in% b$Name) == 1
             is.C <- is.numeric(a$Name %in% c$Name) == 1
             is.D <- is.numeric(a$Name %in% d$Name) == 1
             t <- as.numeric(is.B & is.C & is.D)
             t
         }

     However, the output is always t = 0. This means it tells me that there are no unique rows in any data sets, even though the data frames have very different numbers of rows. Since there are no duplicate rows in any of the data frames, I should be getting t = 1 for at least some rows in the biggest dataset. Can someone figure out what I got wrong?
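
     A sketch of the likely culprit: is.numeric() tests the type of its argument, and a logical vector is not numeric, so every comparison collapses to FALSE. Using the logical vectors from %in% directly gives a per-row result:

         common <- function(a, b, c, d) {
             as.numeric(a$Name %in% b$Name &
                        a$Name %in% c$Name &
                        a$Name %in% d$Name)
         }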

    Read the article

  • JSON parse data in javascript from php

    - by Stefania
     I'm trying to retrieve data in a javascript file from a php file using json.

         $items = array();
         while ($r = mysql_fetch_array($result)) {
             $rows = array(
                 "id_locale"   => $r['id_locale'],
                 "latitudine"  => $r['lat'],
                 "longitudine" => $r['lng']
             );
             array_push($items, array("item" => $rows));
         }
         echo json_encode($items);

     and in the javascript file I try to recover the data using an ajax call:

         $.ajax({
             type: "POST",
             url: "Locali.php",
             success: function(data){
                 alert("1");
                 //var obj = jQuery.parseJSON(data);
                 var json = JSON.parse(data);
                 alert("2");
                 for (var i = 0; i < json.length; i++) {
                     point = new google.maps.LatLng(json[i].item.latitudine,
                                                    json[i].item.longitudine);
                     alert(point);
                 }
             }
         });

     The first alert is printed, the latter is not; it gives me the error: Unexpected token <... but I do not understand what it is. Anyone have any idea where I am going wrong? I also tried to recover the data with jquery but with no positive results.
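
     A hedged pointer: "Unexpected token <" at JSON.parse time usually means the response body starts with HTML (for example a PHP notice or error page) rather than JSON. Logging the raw text before parsing shows what actually came back:

         $.ajax({
             type: "POST",
             url: "Locali.php",
             dataType: "text",        // take the body as-is, no auto-parsing
             success: function(raw){
                 console.log(raw);    // look for stray HTML before the JSON
                 var json = JSON.parse(raw);
             }
         });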

    Read the article

  • Retrieving selected data from DataGrid with IEnumerable<IDictionary> (Silverlight)

    - by RemiX
     I have an application that can dynamically load data into a DataGrid. What's needed is an object of IEnumerable<IDictionary>, and a List<Dictionary<string,object>> is supplied (each Dictionary in the list has exactly the same keys). The data is loaded into the DataGrid and shown, but now I want to retrieve the data the user has clicked on. Using datagrid.SelectedItem, Silverlight complains it cannot evaluate the variable, not even when type-cast. I tried keeping the List<Dictionary<string,object>> and retrieving the right data from it using datagrid.SelectedIndex, but this index changes when the DataGrid is sorted. Does anyone know a solution to this problem?
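
     A small sketch of the cast-based route, assuming the grid was bound with the same List<Dictionary<string,object>> ("ColumnKey" is an invented key name):

         // SelectedItem comes back typed as object; casting to the concrete
         // row type recovers the values regardless of the current sort order.
         var row = datagrid.SelectedItem as Dictionary<string, object>;
         if (row != null)
         {
             object value;
             row.TryGetValue("ColumnKey", out value);
         }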

    Read the article

  • Haskell data serialization of some data implementing a common type class

    - by Evan
     Let's start with the following:

         data A = A String deriving Show
         data B = B String deriving Show

         class X a where
             spooge :: a -> Q

         [ some implementations of X for A and B ]

     Now let's say we have custom implementations of show and read, named show' and read' respectively, which utilize Show as a serialization mechanism. I want show' and read' to have the types

         show' :: X a => a -> String
         read' :: X a => String -> a

     so I can do things like

         f :: String -> [Q]
         f d = map (\x -> spooge $ read' x) d

     where data could have been

         [show' (A "foo"), show' (B "bar")]

     In summary, I want to serialize stuff of various types which share a common typeclass, so I can call their separate implementations on the deserialized stuff automatically. Now, I realize you could write some Template Haskell which would generate a wrapper type, like

         data XWrap = AWrap A | BWrap B deriving (Show)

     and serialize the wrapped type, which would guarantee that the type info is stored with it, and that we'd be able to get ourselves back at least an XWrap... but is there a better way using Haskell ninja-ery?

     EDIT: Okay, I need to be more application specific. This is an API. Users will define their As, Bs and fs as they see fit. I don't ever want them hacking through the rest of the code updating their XWraps, or switches, or anything. The most I'm willing to compromise is one list somewhere of all the A, B, etc. in some format. Why? Here's the application. A is "download a file from an FTP server." B is "convert from flac to mp3". A contains username, password, port, etc. information. B contains file path information. A and B are Xs, and Xs shall be called "Tickets." Q is IO (). spooge is runTicket. I want to read the tickets off disk into their relevant data types and then write generic code that will runTicket on the stuff read' from disk. At some point I have to jam type information into the serialized data.
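
     A self-contained sketch of the "one list" compromise mentioned above: write a tag next to the Show output, and keep a single registry mapping tags to deserialize-and-run actions. Only this list needs to know every ticket type (all names here are invented; Q is taken as IO () per the edit):

         data A = A String deriving (Show, Read)
         data B = B String deriving (Show, Read)

         class X a where
             spooge :: a -> IO ()

         instance X A where
             spooge (A s) = putStrLn ("ftp download: " ++ s)

         instance X B where
             spooge (B s) = putStrLn ("flac -> mp3: " ++ s)

         -- serialize with a tag in front of the Show output
         show' :: Show a => String -> a -> String
         show' tag x = tag ++ " " ++ show x

         -- the one list: every ticket type appears here and nowhere else
         runners :: [(String, String -> IO ())]
         runners =
             [ ("A", spooge . (read :: String -> A))
             , ("B", spooge . (read :: String -> B))
             ]

         runTicket' :: String -> IO ()
         runTicket' s = case lookup tag runners of
             Just go -> go payload
             Nothing -> error ("unknown ticket tag: " ++ tag)
           where (tag, payload) = break (== ' ') s

         main :: IO ()
         main = mapM_ runTicket' [show' "A" (A "foo"), show' "B" (B "bar")]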

    Read the article

  • relating data stored in NoSQL DB to data stored in SQL DB

    - by seanbrant
     What's the best way to use a SQL DB alongside a NoSQL DB? I want to keep my users and other data in Postgres but have some data that would be better suited to a NoSQL DB like Redis. I see a lot of talk about switching to NoSQL but little talk on integrating it with existing systems. I think it would be foolish to throw the baby out with the bathwater and ditch SQL altogether, unless doing so makes things easier to maintain and develop. I'm wondering what the best approach is for relating data stored in SQL to my data in Redis. I was thinking of something along the lines of this:

         User object stored in SQL
         Book object in Redis; key: sha1 hash of the value; value: a JSON string
         Relations stored in Redis; key: User.pk:books; value: Redis set of sha1s

     Anyone have experience, tips, better ways?
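
     A minimal sketch of that layout in Python with redis-py (User.pk suggests Django, so Python seems a fair assumption; the function and key names are invented):

         import hashlib, json
         import redis

         r = redis.Redis()

         def add_book(user_pk, book):
             payload = json.dumps(book, sort_keys=True)
             sha = hashlib.sha1(payload.encode()).hexdigest()
             r.set(sha, payload)                  # Book object keyed by its sha1
             r.sadd("%s:books" % user_pk, sha)    # relation: user -> set of sha1s

         def books_for(user_pk):
             return [json.loads(r.get(sha))
                     for sha in r.smembers("%s:books" % user_pk)]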

    Read the article

  • Strategies for "Always-Connected" Windows Client Data Architecture

    - by magz2010
     Hi. Let me start by saying: this is my first post here, it is a bit lengthy, and I haven't done Windows Forms development in years... with that in mind, please excuse me if this isn't directly a programming question, and please bear with me, as I really need the help!

     I have been asked to develop a Windows Forms app for our company that talks to a central (local area network) Linux server hosting a PostgreSQL database. The app is to allow users to authenticate themselves into the system and thereafter conduct the usual transactions with the PG database. Ordinarily, I would propose writing a webforms app against Mono, but the clients need to utilise local resources such as USB peripheral devices, so that is out of the question. My questions are set out below.

     Dilemma #1: The application is meant to be always connected. How should I structure my DAL/BLL - should this reside on the server or with the client?

     Dilemma #2: I have been reading up on Client Application Services (CAS), and it seems like a great fit for authentication, as everything is exposed via URIs. I know that a .NET data provider exists for PostgreSQL, but I am not too sure if CAS will all work on a Linux (Debian) server. Believe me, I would get my hands dirty and try it myself, but I need to come up with a logical design first before resources are allocated to me for "trial purposes"!

     Dilemma #3: If the DAL/BLL is to reside on the server, is there any way I can create data services and expose only these services to authenticated clients? There is a (security) requirement whereby a connection string with username and password to the database cannot be present on any client machines, even if security on the database side is quite rigid. I'm guessing that the only way for this to work would be to create the various CRUD data service methods exposed by an ASP.NET app, have the Windows Forms app request or persist data through a URI, and have that return a resultset or value. Would I be correct in assuming this? Should I be looking into WCF Data Services? And will WCF work with a non-SQL Server database?

     Thank you for taking the time to read this, but know that I am desperately seeking any advice on this! THANKS A MILLION!!!!
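
     A sketch of the server-side shape implied by Dilemma #3: a WCF service contract exposing CRUD-style operations, so the connection string never leaves the server (the contract, type and member names are invented):

         using System.Runtime.Serialization;
         using System.ServiceModel;

         [DataContract]
         public class Customer
         {
             [DataMember] public int Id { get; set; }
             [DataMember] public string Name { get; set; }
         }

         [ServiceContract]
         public interface ICustomerService
         {
             [OperationContract]
             Customer GetCustomer(int id);

             [OperationContract]
             void SaveCustomer(Customer customer);
         }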

    Read the article

  • EXC_MEMORY_ACCESS when trying to delete from Core Data ($cash solution)

    - by llloydxmas
     I have an application that downloads an xml file, parses the file, and creates Core Data objects while doing so. In the parse code I have a method called 'emptyDataContext' that removes all items from Core Data before creating replacement items from the xml data. This method looks like this:

         -(void) emptyDataContext
         {
             NSFetchRequest *allCon = [[NSFetchRequest alloc] init];
             [allCon setEntity:[NSEntityDescription entityForName:@"Condition"
                                    inManagedObjectContext:managedObjectContext]];

             NSError *error = nil;
             NSArray *conditions = [managedObjectContext executeFetchRequest:allCon error:&error];
             DebugLog(@"ERROR: %@", error);
             DebugLog(@"RETRIEVED: %@", conditions);
             [allCon release];

             for (NSManagedObject *condition in conditions) {
                 [managedObjectContext deleteObject:condition];
             }

             // Update the data model, effectively removing the objects we deleted above.
             //NSError *error;
             if (![managedObjectContext save:&error]) {
                 DebugLog(@"%@", [error domain]);
             }
         }

     The first time this runs, it deletes all objects and functions as it should, creating new objects from the xml file. I created an 'update' button that starts the exact same process of retrieving the file and proceeding with the parse & build. All is well until it's time to delete the Core Data objects: the 'deleteObject' call produces an "EXC_BAD_ACCESS" error each time, and only on the second time through. Captured errors return null. If I log the 'conditions' array, I get a list of NSManagedObjects on the first run; on the second run, the log request causes a crash exactly as the deleteObject call does. I have a feeling it is something very simple I'm missing or not doing correctly that causes this behavior. The data works great in my table views - it's only when updating that I get the crashes. I have spent days & days on this, trying numerous alternative methods. What's left of my hair is falling out. I'd be willing to ante up some cash for anyone willing to look at my code and see what I'm doing wrong. Just need to get past this hurdle. Thanks in advance for the help!
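
     An editor's hedged note, not a confirmed diagnosis: a crash on the second pass - where even logging the fetched objects crashes - is the classic signature of something still holding pointers to the first batch of now-deleted, deallocated objects. Re-fetching after the rebuild instead of reusing a cached array avoids touching stale instances ('self.conditions' is an invented property standing in for wherever the old array is kept):

         NSFetchRequest *refetch = [[NSFetchRequest alloc] init];
         [refetch setEntity:[NSEntityDescription entityForName:@"Condition"
                                 inManagedObjectContext:managedObjectContext]];
         NSError *error = nil;
         self.conditions = [managedObjectContext executeFetchRequest:refetch error:&error];
         [refetch release];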

    Read the article
