Search Results

Search found 59036 results on 2362 pages for 'fake data'.

Page 152 of 2362

  • ExtJs store, how to load data using a MemoryProxy

    - by Miau
    Hi there, I'm trying to load a JSON store using a MemoryProxy (I need to use a proxy because I use different sources depending on the scenario). It looks roughly like this: var data = Ext.decode(gridArrayData); var proxy = new Ext.data.MemoryProxy(data); var store = new Ext.data.GroupingStore({ proxy: proxy }); However, when I inspect this I can see that the proxy has 10 rows of data, but the store does not. I'm lost as to why. Any pointers?

    Read the article

  • JQuery getJSON Callback Returning Null Data

    - by user338828
    I have a getJSON call that is called back correctly, but the data variable is null. The python code posted below is executed by the getJSON call to the demandURL. Any ideas? javascript: var demandURL = "/demand/washington/"; $.getJSON(demandURL, function(data) { console.log(data); }); python: data = {"demand_count":"one"} json = simplejson.dumps(data) return HttpResponse(json, mimetype="application/json")

    Read the article

  • Oracle sample data problems

    - by Jay
    So, I have this Java-based data transformation / masking tool, which I wanted to test out on Oracle 10g. The good part with Oracle 10g is that you get a load of sample schemas, with half a million records in some. The schemas are: SH, OE, HR, IX and so on. So, I installed 10g and found out that the installation scripts are under ORACLE_HOME/demo/scripts. I customized these scripts a bit to run in batch mode. That solves one half of my requirement - to create source data for testing my data transformation software. The second half of the requirement is that I create the same schemas under different names (TR_HR, TR_OE and so on...) without any data. These schemas would represent my target schemas. So, in short, my software would pick up data from a table in a schema and load it into the same table in a different schema. Now, I have two issues in creating my target schemas and emptying them. I would like to do this in a batch job, but in the Oracle scripts you get, the sample schema names are not configurable. So, I tried creating a script, replacing OE with TR_OE, HR with TR_HR and so on. However, this approach is irritating because the sample schemas are fairly complicated in the way they are created; Oracle creates synonyms, views, materialized views, data types and a lot of other objects. I would like the target schemas (TR_HR, TR_OE, ...) to be empty, but some of the schemas have circular references, which would not allow me to delete data. The only workaround seems to be removing certain foreign keys, deleting the data and then adding the constraints back. Is there any easy way to do all this, without all this fuss? I would need a complicated data set for my testing (complicated as in tables with triggers, multiple hierarchies... for instance, a child table that has children up to 5 levels, a parent table that refers to an IOT table, and an IOT table that refers to a non-IOT table, etc.). The sample schemas are just about perfect from a data set perspective. The only challenge I see is in automating this whole process of loading up the source schemas, and then creating the target schemas and emptying them. Appreciate your help and suggestions.

    Read the article

  • Beginner problem with posting data table to JsonResult

    - by ognjenb
    With this script I get data from a JsonResult (GetDevicesTable) and put it into the table (id="OrderDevices"): <script type="text/javascript"> $(document).ready(function() { $("#getDevices a").click(function() { var Id = $(this).attr("rel"); var rowToMove = $(this).closest("tr"); $.post("/ImportXML/GetDevicesTable", { Id: Id }, function(data) { if (data.success) { //remove row rowToMove.fadeOut(function() { $(this).remove() }); //add row to other table $("#OrderDevices").append("<tr><td>"+ data.DeviceId+ "</td><td>" + data.Id+ "</td><td>" + data.OrderId + "</td></tr>"); } else { alert(data.ErrorMsg); } }, "json"); }); }); <% using (Html.BeginForm()) {%> <table id="OrderDevices" class="data-table"> <tr> <th> DeviceId </th> <th> Id </th> <th> OrderId </th> </tr> </table> <p> <input type="submit" value="Submit" /> </p> <% } %> When I click on Submit I need something like this: $(document).ready(function() { $("#submit").click(function() { var Id = $(this).attr("rel"); var DeviceId = $(this).attr(???); var OrderId = $(this).attr(???); var rowToMove = $(this).closest("tr"); $.post("/ImportXML/DevicesActions", { Id: Id, DeviceId:DeviceId, OrderId:OrderId }, function(data) { }, "json"); }); }); I have a problem with this script because I do not know how to post data to this JsonResult: public JsonResult DevicesActions(int Id,int DeviceId,int OrderId) { orderdevice ord = new orderdevice(); ord.Id = Id; ord.OrderId = DeviceId; ord.DeviceId = OrderId; DBEntities.AddToorderdevice(ord); DBEntities.SaveChanges(); }

    Read the article

  • how to check if internal storage file has any data

    - by user3720291
    public class Save extends Activity { int levels = 2; int data_block = 1024; //char[] data = new char[] {'0', '0'}; String blankval = "0"; String targetval = "0"; String temp; String tempwrite; String string = "null"; TextView tex1; TextView tex2; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.save); Intent intent = getIntent(); Bundle b = intent.getExtras(); tex1 = (TextView) findViewById(R.id.textView1); tex2 = (TextView) findViewById(R.id.textView2); if(b!=null) { string =(String) b.get("string"); } loadprev(); save(); } public void save() { if (string.equals("Blank")) blankval = "1"; if (string.equals("Target")) targetval = "1"; temp = blankval + targetval; try { FileOutputStream fos = openFileOutput("data.gds", MODE_PRIVATE); fos.write(temp.getBytes()); fos.close(); } catch (FileNotFoundException e) {e.printStackTrace();} catch (IOException e) {e.printStackTrace();} tex1.setText(blankval); tex2.setText(targetval); } public void loadprev() { String final_data = ""; try { FileInputStream fis = openFileInput("data.gds"); InputStreamReader isr = new InputStreamReader(fis); char[] data = new char[data_block]; int size; while((size = isr.read(data))>0) { String read_data = String.copyValueOf(data, 0, size); final_data += read_data; data = new char[data_block]; } } catch (FileNotFoundException e) {e.printStackTrace();} catch (IOException e) {e.printStackTrace();} char[] tempread = final_data.toCharArray();; blankval = "" + tempread[0]; targetval = "" + tempread[1]; } } After much tinkering I have finally managed to get my save/load function to work, but it does have an error: I got it to work, then did a fresh reinstall that deleted data.gds, and afterwards the save/load function crashes because the data.gds file has no previous values. Can I use an if statement to check whether data.gds has any values in it? If so, how do I do it, and if not, what could I use instead?
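    One way to guard against this: check that the saved file exists and is non-empty before attempting to parse it. A minimal sketch in plain Java (the data.gds name follows the question; the relative path stands in for the internal-storage location an Android app would get from getFileStreamPath("data.gds")):

        import java.io.File;

        // Minimal sketch: only attempt to load when a previous save actually exists.
        public final class SaveFileCheck {

            // A fresh install has no file (or an empty one), so there is nothing to parse.
            static boolean hasSavedData(File saved) {
                return saved.exists() && saved.length() >= 2; // expects the two flag characters
            }

            public static void main(String[] args) {
                File saved = new File("data.gds"); // stand-in for getFileStreamPath("data.gds")
                if (hasSavedData(saved)) {
                    System.out.println("Previous save found - safe to call loadprev()");
                } else {
                    System.out.println("No save yet - keep the default \"0\" values");
                }
            }
        }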

    Read the article

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures to implement a simple open-source object temporal database, and currently I'm very fond of using persistent red-black trees to do it. My main reason for using persistent data structures is first of all to minimize the use of locks, so the database can be as parallel as possible. It will also be easier to implement ACID transactions and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes implementing temporal databases almost free, and that is something quite nice to have, especially for the web and for data analysis (e.g. trends). All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously, so a response is always immediate, I don't want to build the whole application under a false premise, only to realize it isn't really a good way to do it. Here's my line of thought: - Since all writes are done asynchronously, and a persistent data structure makes it possible not to invalidate the previous - and currently valid - structure, the write time isn't really a bottleneck. - There is some literature on structures like this that are designed exactly for disk usage, but it seems to me that these techniques add more read overhead to achieve faster writes, and I think exactly the opposite trade-off is preferable. Also, many of these techniques really do end up with multi-versioned trees, but they aren't strictly immutable, which is crucial to justify the persistence overhead. - I know there will still have to be some kind of locking when appending values to the database, and I also know there should be good garbage-collection logic if not all versions are to be maintained (otherwise the file size will surely rise dramatically). A delta compression system could also be considered. - Of all the search tree structures, I really think red-black trees are the closest to what I need, since they require the fewest rotations. But there are some possible pitfalls along the way: - Asynchronous writes could affect applications that need the data in real time, but I don't think that is the case with web applications most of the time. When real-time data is needed, other solutions could be devised, like a check-in/check-out system for specific data that needs to be worked on in a more real-time manner. - They could also lead to some commit conflicts, though I fail to think of a good example of when that could happen. Commit conflicts can occur in a normal RDBMS too, if two threads are working with the same data, right? - The overhead of an immutable interface like this will grow exponentially, and everything is doomed to fail soon, so this is all a bad idea. Any thoughts? Thanks! Edit: There seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure
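    For readers who have not met the idea before, the core of any persistent tree is path copying: an update allocates new nodes only along the path it touches and shares everything else with the previous version. A minimal sketch in Java (an unbalanced binary search tree for brevity, so without the red-black recolouring and rotations the question is actually about):

        // Minimal sketch of path copying: insert() returns a new root and shares
        // untouched subtrees with the previous version. Unbalanced BST for brevity;
        // a real red-black variant would also carry a colour bit and rebalance.
        final class PersistentTree {
            static final class Node {
                final int key;
                final Node left, right;
                Node(int key, Node left, Node right) { this.key = key; this.left = left; this.right = right; }
            }

            static Node insert(Node root, int key) {
                if (root == null) return new Node(key, null, null);
                if (key < root.key) return new Node(root.key, insert(root.left, key), root.right);
                if (key > root.key) return new Node(root.key, root.left, insert(root.right, key));
                return root; // key already present: the old version can be reused as-is
            }

            public static void main(String[] args) {
                Node v1 = insert(insert(null, 5), 3);
                Node v2 = insert(v1, 8);          // v1 stays intact and readable
                System.out.println(v1.right);     // null: the old version never saw 8
                System.out.println(v2.right.key); // 8
            }
        }

    The same sharing argument is what bounds the on-disk write amplification: a balanced variant writes only O(log n) new nodes per update, and every old root remains a readable snapshot for time-travel queries.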

    Read the article

  • Programmatically import CSV data to Access

    - by FrustratedWithFormsDesigner
    I have an Access database whose source data comes from generated CSV files. I'd like to have an easy way for the users to simply select the data file and import it. The import should append the new data to the data already in the table. Is there a way in Access to create a file selector and import using saved CSV import settings that are already in the file?

    Read the article

  • warning about data loss c++/c

    - by Dr Deo
    I am getting a benign warning about possible data loss: warning C4244: 'argument' : conversion from 'const int' to 'float', possible loss of data. My question: I remember that float has a larger precision than int, so how can data be lost if I convert from a smaller data type (int) to a larger data type (float)?
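    The warning comes from a C++ compiler, but the effect is easy to reproduce in Java, where int and float have the same widths: an int carries 31 value bits while a float has only a 24-bit significand, so not every large int survives the conversion. A small demonstration:

        // Demonstration: converting a large int to float silently rounds it
        // (same 32-bit int / 24-bit float significand as C++ on typical platforms).
        public final class IntToFloat {
            public static void main(String[] args) {
                int exact = 1 << 24;       // 16,777,216 - still exactly representable
                int lossy = exact + 1;     // 16,777,217 - needs 25 significand bits
                float f = lossy;           // implicit widening conversion
                System.out.println(lossy);     // 16777217
                System.out.println((int) f);   // 16777216 - the low bit was lost
            }
        }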

    Read the article

  • Why is cakephp form input stored in $this->data and POST data stored in $this->params['form'] ?

    - by spudnik1979
    The CakePHP REST tutorial says that POST data should be in $this->data, but I am finding that it is not; it is instead inside $this->params['form']. However, when using the CakePHP form helper in a view, the data is in $this->data. Am I correct that I have to check both locations in my controller? It just seems a bit of a waste of extra code. Shouldn't the data appear in one place regardless of whether it came from a REST request or a CakePHP form post? PS: I'm using CakePHP 1.3.

    Read the article

  • Passing data between blocks using sinatra

    - by Dan Galipo
    Hi all, I'm trying to pass data between blocks using Sinatra. For example: @data = Hash.new post "/" do @data[:test] = params.fetch("test").to_s redirect "/tmp" end get "/tmp" do puts @data[:test] end However, whenever I get to the /tmp block, @data is nil and throws an error. Why is that?

    Read the article

  • Grep and Extract Data in Perl

    - by syker
    I have HTML content stored in a variable. How do I extract data that is found between a set of common tags in the page? For example, I am interested in the data (represented by *DATA*) kept between a set of tags which appear one line after the other: ... <td class="jumlah">*DATA*</td> <td class="ud"><a href="">*DATA*</a></td> ...

    Read the article

  • DomainDataSource DataPager with silverlight 3 DataGrid & .Net RIA Services

    - by Dennis Ward
    I have a simple DataGrid example with Silverlight 3, and am populating it with .NET RIA Services using a DomainDataSource along with a DataPager declaratively (nothing in the code-behind), and am experiencing this problem: the LoadSize is 30 and the PageSize is 15, and when the page is loaded, the 1st and 2nd pages appear correctly, but when I go beyond the 2nd page, nothing shows up in the grid. This used to work in the Silverlight 3 beta with the Mix 2009 preview of .NET RIA Services, and I've got a really simple example and have verified that the service in the web project gets called to load a new batch, but the grid doesn't show any data. Can anyone shed any light on why the grid displays data only for the initial load and not for subsequent batches from the pager? Here's my XAML: <riaControls:DomainDataSource x:Name="ArtistSource" QueryName="GetArtist" AutoLoad="True" LoadSize="30" PageSize="15"> <riaControls:DomainDataSource.DomainContext> <domain:AdminContext /> </riaControls:DomainDataSource.DomainContext> </riaControls:DomainDataSource> <data:DataGrid Grid.Row="1" x:Name="ArtistDataGrid" ItemsSource="{Binding Data, ElementName=ArtistSource}"> </data:DataGrid> <StackPanel Grid.Row="2"> <data:DataPager Source="{Binding Data, ElementName=ArtistSource}" /> </StackPanel>

    Read the article

  • How to populate Java (web) application with initial data using Spring/JPA/Hibernate

    - by Tuukka Mustonen
    I want to set up my database with initial data programmatically. I want to populate my database for development runs, not for testing runs (that's easy). The product is built on top of Spring and JPA/Hibernate. The developer checks out the project; the developer runs a command/script to set up the database with initial data; the developer starts the application (server) and begins developing/testing; then the developer runs a command/script to flush the database and set it up with new initial data because the database structures or the initial data bundle were changed. What I want is to set up just the required parts of my environment in order to call my DAOs and insert new objects into the database. I do not want to create initial data sets in raw SQL or XML, take dumps of the database, or whatever. I want to programmatically create objects and persist them in the database as I would in normal application logic. One way to accomplish this would be to start up my application normally and run a special servlet that does the initialization, but is that really the way to go? I would love to execute the initial data setup as a Maven task, and I don't know how to do that if I take the servlet approach. There is a somewhat similar question. I took a quick glance at the suggested DBUnit and Unitils, but they seem to be heavily focused on setting up testing environments, which is not what I want here. DBUnit does initial data population, but only using XML/CSV fixtures, which is not what I'm after. Then, Maven has an SQL plugin, but I don't want to handle raw SQL. Maven also has a Hibernate plugin, but it seems to help only with Hibernate configuration and table schema creation (not with populating the database with data). How do I do this?
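    A minimal sketch of the "programmatic seed" idea, using plain JPA so it can run as its own main class (the persistence-unit name "dev-unit" and the User entity are hypothetical stand-ins for the project's own unit and domain classes):

        import javax.persistence.Entity;
        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.Persistence;

        // Minimal sketch: open the same persistence unit the application uses and
        // persist seed objects programmatically, just like normal application logic.
        public final class DevDataSeeder {
            public static void main(String[] args) {
                EntityManagerFactory emf = Persistence.createEntityManagerFactory("dev-unit");
                EntityManager em = emf.createEntityManager();
                try {
                    em.getTransaction().begin();
                    em.createQuery("DELETE FROM User").executeUpdate(); // flush old development data
                    em.persist(new User("alice", "Developer"));
                    em.persist(new User("bob", "Tester"));
                    em.getTransaction().commit();
                } finally {
                    em.close();
                    emf.close();
                }
            }
        }

        // Hypothetical seed entity standing in for the project's real domain classes.
        @Entity
        class User {
            @Id @GeneratedValue Long id;
            String name;
            String role;
            User() {}
            User(String name, String role) { this.name = name; this.role = role; }
        }

    Wiring such a class up through the exec-maven-plugin (the exec:java goal runs a main class) then gives the "run a command to (re)seed the database" step the question describes.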

    Read the article

  • R and version control for the solo data analyst

    - by Jeromy Anglim
    Many data analysts that I respect use version control. For example: http://github.com/hadley/ (see also the comments on http://permut.wordpress.com/2010/04/21/revision-control-statistics-bleg/ ). However, I'm evaluating whether adopting a version control system such as git would be worthwhile. A brief overview: I'm a social scientist who uses R to analyse data for research publications. I don't currently produce R packages. My R code for a project typically includes a few thousand lines of code for data input, cleaning, manipulation, analyses, and output generation. Publications are typically written using LaTeX. With regard to version control there are many benefits which I have read about, yet they seem to be less relevant to the solo data analyst. Backup: I have a backup system already in place. Forking and rewinding: I've never felt the need to do this, but I can see how it could be useful (e.g., you are preparing multiple journal articles based on the same dataset; you are preparing a report that is updated monthly, etc.). Collaboration: most of the time I am analysing data myself, thus I wouldn't get the collaboration benefits of version control. There are also several potential costs involved with adopting version control: the time to evaluate and learn a version control system, and a possible increase in complexity over my current file management system. However, I still have the feeling that I'm missing something. General guides on version control seem to be addressed more towards computer scientists than data analysts. Thus, specifically in relation to data analysts in circumstances similar to those listed above: Is version control worth the effort? What are the main pros and cons of adopting version control? What is a good strategy for getting started with version control for data analysis with R (e.g., examples, workflow ideas, software, links to guides)?

    Read the article

  • XML Schema Migration

    - by Corwin Joy
    I am working on a project where we need to save data in an XML format. The problem is, over time we expect the format / schema for our data to change. What we want to be able to do is to produce scripts to migrate our data across different schema versions. We distribute our product to thousands of customers, so we need to be able to run / apply these scripts at customer sites (we can't just do the conversions by hand). I think that what we are looking for is some kind of XML data migration tool. In my mind the ideal tool would: do an "XML diff" of two schemas to identify added/deleted/changed nodes, and allow us to specify transformation functions. So, for example, we might add a new element to our schema that is a function of the old elements (e.g., a new element C where C = A+B, and A and B are old elements). So I think I am looking for a kind of XML diff and patch tool which can also apply transformation functions. One tool I am looking at for this is Altova's MapForce. I'm sure others here have had to deal with XML data format migration. How did you handle it? Edit: one point of clarification. The "diff" I plan to do is on the schema or .xsd files. The actual changes will be made to particular data sets that follow a given schema; these data sets will be .xml files. So it's a "diff" of the schema to help figure out what changes need to be made to data sets to migrate them from one schema to another.
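    For the "apply the migration at customer sites" half, one common approach is to ship one stylesheet per schema transition and apply it to each customer's data files; transformation rules such as "new element C = A + B" live in the stylesheet. A minimal sketch in Java using the standard javax.xml.transform API (the file names are hypothetical):

        import java.io.File;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;

        // Minimal sketch: apply one version-to-version stylesheet to one data file.
        // "migrate-v1-to-v2.xsl" and the data file names are hypothetical placeholders.
        public final class XmlMigrator {
            public static void main(String[] args) throws Exception {
                Transformer t = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource(new File("migrate-v1-to-v2.xsl")));
                t.transform(new StreamSource(new File("data-v1.xml")),
                            new StreamResult(new File("data-v2.xml")));
            }
        }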

    Read the article

  • Enormous data and PHP errors

    - by salamis
    I am currently using the following Highcharts/Highstock chart: http://www.highcharts.com/stock/demo/data-grouping in order to display the data returned from the server. We retrieve the data from a MySQL database and it is really big; we are storing sensor metrics every second. After a while we got the following error: [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 4756882 bytes) in C:\\wamp\\www\\admin\\getTrends.php on line 156, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP Stack trace:, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 1. {main}() C:\\wamp\\www\\admin\\getTrends.php:0, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 2. getTrendsDataAI() C:\\wamp\\www\\admin\\getTrends.php:33, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 3. printResults() C:\\wamp\\www\\admin\\getTrends.php:102, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 4. createData() C:\\wamp\\www\\admin\\getTrends.php:230, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 5. implode() C:\\wamp\\www\\admin\\getTrends.php:156, referer: http://localhost/admin/trends.php What is the best solution for returning this data as a JSON object to Highstock for viewing? And how can we overcome the PHP limitation? Should we return a chunk of data each time? How do people usually present enormous amounts of data to users and create charts and reports from it? Another big problem that we need to overcome is that the returned JSON object is enormous. At this point it is around 20-30 MB and it will be much larger in the future. Is it OK to return this data to the user and perform everything client side? Any suggestions or thoughts are welcome.

    Read the article

  • How do you remove invalid hexadecimal characters from an XML-based data source prior to constructing

    - by Oppositional
    Is there any easy/general way to clean an XML-based data source prior to using it in an XmlReader so that I can gracefully consume XML data that is non-conformant to the hexadecimal character restrictions placed on XML? Note: the solution needs to handle XML data sources that use character encodings other than UTF-8, e.g. by specifying the character encoding in the XML document declaration. Not mangling the character encoding of the source while stripping invalid hexadecimal characters has been a major sticking point. The removal of invalid hexadecimal characters should only remove hexadecimal encoded values, as you can often find href values in data that happens to contain a string that would be a string match for a hexadecimal character. Background: I need to consume an XML-based data source that conforms to a specific format (think Atom or RSS feeds), but want to be able to consume data sources that have been published which contain invalid hexadecimal characters per the XML specification. In .NET, if you have a Stream that represents the XML data source and then attempt to parse it using an XmlReader and/or XPathDocument, an exception is raised due to the inclusion of invalid hexadecimal characters in the XML data. My current attempt to resolve this issue is to parse the Stream as a string and use a regular expression to remove and/or replace the invalid hexadecimal characters, but I am looking for a more performant solution.
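    The question targets .NET, but the legal character set is defined by XML 1.0 itself (#x9, #xA, #xD, #x20-#xD7FF, #xE000-#xFFFD, #x10000-#x10FFFF), so the filtering step looks the same in any language. A minimal sketch in Java that drops literal illegal characters from already-decoded text (decoding with the document's declared encoding first is what keeps the encoding from being mangled; numeric character references such as &#x8; would need a separate pass):

        // Minimal sketch: remove code points that XML 1.0 forbids from decoded text.
        // Only literal illegal characters are dropped, so ordinary content such as
        // href values is left untouched.
        public final class XmlCharFilter {
            static boolean isXmlChar(int cp) {
                return cp == 0x9 || cp == 0xA || cp == 0xD
                        || (cp >= 0x20 && cp <= 0xD7FF)
                        || (cp >= 0xE000 && cp <= 0xFFFD)
                        || (cp >= 0x10000 && cp <= 0x10FFFF);
            }

            static String stripInvalid(String decodedXml) {
                StringBuilder out = new StringBuilder(decodedXml.length());
                decodedXml.codePoints().filter(XmlCharFilter::isXmlChar).forEach(out::appendCodePoint);
                return out.toString();
            }

            public static void main(String[] args) {
                String dirty = "<a>ok\u0008bad</a>";      // contains an illegal backspace
                System.out.println(stripInvalid(dirty));   // <a>okbad</a>
            }
        }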

    Read the article

  • Working with images (CGImage), exif data, and file icons

    - by Nick
    Here is what I am trying to do (under 10.6). I have an image (JPEG) that includes an icon in the image file (that is, you see an icon based on the image in the file, as opposed to a generic JPEG icon, in file open dialogs in a program). I wish to edit the exif metadata and save it back to the image in a new file. Ideally I would like to save this back to an exact copy of the file (i.e. preserving any custom embedded icons created, etc.); however, in my hands the icon is lost. My code (some bits removed for ease of reading): // set up source ref I THINK THE PROBLEM IS HERE - NOT GRABBING THE INITIAL DATA CGImageSourceRef source = CGImageSourceCreateWithURL( (CFURLRef) URL,NULL); // snag metadata NSDictionary *metadata = (NSDictionary *) CGImageSourceCopyPropertiesAtIndex(source,0,NULL); // make metadata mutable NSMutableDictionary *metadataAsMutable = [[metadata mutableCopy] autorelease]; // grab exif NSMutableDictionary *EXIFDictionary = [[[metadata objectForKey:(NSString *)kCGImagePropertyExifDictionary] mutableCopy] autorelease]; << edit exif >> // add back edited exif [metadataAsMutable setObject:EXIFDictionary forKey:(NSString *)kCGImagePropertyExifDictionary]; // get source type CFStringRef UTI = CGImageSourceGetType(source); // set up write data NSMutableData *data = [NSMutableData data]; CGImageDestinationRef destination = CGImageDestinationCreateWithData((CFMutableDataRef)data,UTI,1,NULL); //add the image plus modified metadata PROBLEM HERE? NOT ADDING THE ICON CGImageDestinationAddImageFromSource(destination,source,0, (CFDictionaryRef) metadataAsMutable); // write to data BOOL success = NO; success = CGImageDestinationFinalize(destination); // save data to disk [data writeToURL:saveURL atomically:YES]; //cleanup CFRelease(destination); CFRelease(source); I don't know if this is really a question of image handling, file handling, post-save processing (I could use sip), or me just being thick (I suspect the last). Nick

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database and I'm looking for ways in which to speed up the process. I am already using the SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire which helped a lot, but I'm still looking for more. I have a simple table that looks like this: CREATE TABLE [BulkData]( [ContainerId] [int] NOT NULL, [BinId] [smallint] NOT NULL, [Sequence] [smallint] NOT NULL, [ItemId] [int] NOT NULL, [Left] [smallint] NOT NULL, [Top] [smallint] NOT NULL, [Right] [smallint] NOT NULL, [Bottom] [smallint] NOT NULL, CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED ( [ContainerIdId] ASC, [BinId] ASC, [Sequence] ASC )) I'm inserting data in chunks that average about 300 rows where ContainerId and BinId are constant in each chunk and the Sequence value is 0-n and the values are pre-sorted based on the primary key. The %Disk time performance counter spends a lot of time at 100% so it is clear that disk IO is the main issue but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I: Drop the Primary key while I am doing the inserting and recreate it later Do inserts into a temporary table with the same schema and periodically transfer them into the main table to keep the size of the table where insertions are happening small Anything else? -- Based on the responses I have gotten, let me clarify a little bit: Portman: I'm using a clustered index because when the data is all imported I will need to access data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts as opposed to dropping the constraint entirely for import? Chopeen: The data is being generated remotely on many other machines (my SQL server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine because it would then have to process 50 times as much input data to generate the output. Jason: I am not doing any concurrent queries against the table during the import process, I will try dropping the primary key and see if that helps. ~ Andrew

    Read the article

  • Urgent: Sort HashSet() function data in sequence

    - by vincent low
    I am new to Java. The function I would like to perform is something like this: I load a series of data from a file into my HashSet. The problem is, I am able to enter all the data in sequence, but I can't retrieve it in sequence based on the account name in the file. Can anyone help with a comment? Below is my code: public Set retrieveHistory(){ Set dataGroup = new HashSet(); try{ File file = new File("C:\\Documents and Settings\\vincent\\My Documents\\NetBeansProjects\\vincenttesting\\src\\vincenttesting\\vincenthistory.txt"); BufferedReader br = new BufferedReader(new FileReader(file)); String data = br.readLine(); while(data != null){ System.out.println("This is all the record:"+data); Customer cust = new Customer(); //break the data based on the , String array[] = data.split(","); cust.setCustomerName(array[0]); cust.setpassword(array[1]); cust.setlocation(array[2]); cust.setday(array[3]); cust.setmonth(array[4]); cust.setyear(array[5]); cust.setAmount(Double.parseDouble(array[6])); cust.settransaction(Double.parseDouble(array[7])); dataGroup.add(cust); //then proced to read next customer. data = br.readLine(); } br.close(); }catch(Exception e){ System.out.println("error" +e); } return dataGroup; } public static void main(String[] args) { FileReadDataModel fr = new FileReadDataModel(); Set customerGroup = fr.retrieveHistory(); System.out.println(e); for(Object obj : customerGroup){ Customer cust = (Customer)obj; System.out.println("Cust name :" +cust.getCustomerName()); System.out.println("Cust amount :" +cust.getAmount()); }
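    A HashSet makes no ordering guarantee, so the usual fix is a set that sorts as elements are added. A minimal sketch, assuming only that the Customer class from the question exposes getCustomerName():

        import java.util.Comparator;
        import java.util.Set;
        import java.util.TreeSet;

        // Minimal sketch: a TreeSet ordered by customer/account name keeps iteration
        // in sequence. Customer is a stand-in for the class in the question; only
        // getCustomerName() is assumed here.
        public final class SortedCustomers {
            static final class Customer {
                private final String customerName;
                Customer(String customerName) { this.customerName = customerName; }
                String getCustomerName() { return customerName; }
            }

            public static void main(String[] args) {
                Set<Customer> customers =
                        new TreeSet<>(Comparator.comparing(Customer::getCustomerName));
                customers.add(new Customer("vincent"));
                customers.add(new Customer("alice"));
                customers.add(new Customer("bob"));
                for (Customer c : customers) {
                    System.out.println(c.getCustomerName()); // alice, bob, vincent
                }
            }
        }

    If insertion order (rather than name order) is what matters, a LinkedHashSet preserves it without any comparator.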

    Read the article

  • Designing DAOs around a JSON API for iPhone Development

    - by Bob Spryn
    So I've been trying to design a clean way of grabbing data for my models in iPhone land. All the data for my application is coming from JSON API's. So right now when a VC needs some models, it does the JSON call itself (asynch) and when it receives the data, it builds the models. It works, but I'm trying to think of a cleaner method whereby the DAO's retrieve the information for me and return the models, all in an async manner. My initial thought is build a protocol for my DAOs, such that the VC would instantiate a DAO and make itself the delegate. When you requested data [DAOinstance getAllUsers] the DAO would do all the network request stuff, and then when it had the data, it would call a method on its delegate (the VC) to pass the data. So I think that's a cool solution, but realized that if I needed to use the same DAO for different purposes in the same VC, my delegate method would have to branch logic depending on which DAO instance initiated the request. So my second thought was to be able to pass 'handler' selectors to the DAO object a la typical javascript patterns. So instead of an official protocol, I would say something like [DAOinstance getAllUsersWithSelector:"TheHandlerFunctionOnMyVC:"] Then when the DAO completed its network activities, it would call the passed selector on the VC, and pass the data back. So am I headed in the wrong direction entirely here? Seems like maybe an ok way to go. Any pointers or articles on designing this kind of data layer would be sweet. Thanks! Bob

    Read the article

  • Storing arbitrary data in HTML

    - by Rob Colburn
    What is the best way to embed data in HTML elements for later use? As an example, let's say we have jQuery returning some JSON from the server, and we want to dump that data out to the user as paragraphs. However, we want to be able to attach metadata to these elements, so we can attach events to them later. The way I tend to handle this is with some ugly prefixing: function handle_response(data) { var html = ''; for (var i in data) { html += '<p id="prefix_' + data[i].id + '">' + data[i].message + '</p>'; } jQuery('#log').html(html).find('p').click(function(){ alert('The ID is: ' + $(this).attr('id').substr(7)); }); } Alternatively, one can build a form in the paragraph and store your metadata there, but that often feels like overkill. This has been asked before in different ways, but I do not feel it's been answered well: http://stackoverflow.com/questions/432174/how-to-store-arbitrary-data-for-some-html-tags http://stackoverflow.com/questions/209428/non-standard-attributes-on-html-tags-good-thing-bad-thing-your-thoughts

    Read the article

  • How to sort HashSet() function data in sequence?

    - by vincent low
    I am new to Java. The function I would like to perform is to load a series of data from a file into my HashSet. The problem is, I am able to enter all the data in sequence, but I can't retrieve it in sequence based on the account name in the file. Can anyone help? Below is my code: public Set retrieveHistory(){ Set dataGroup = new HashSet(); try{ File file = new File("C:\\Documents and Settings\\vincent\\My Documents\\NetBeansProjects\\vincenttesting\\src\\vincenttesting\\vincenthistory.txt"); BufferedReader br = new BufferedReader(new FileReader(file)); String data = br.readLine(); while(data != null){ System.out.println("This is all the record:"+data); Customer cust = new Customer(); //break the data based on the , String array[] = data.split(","); cust.setCustomerName(array[0]); cust.setpassword(array[1]); cust.setlocation(array[2]); cust.setday(array[3]); cust.setmonth(array[4]); cust.setyear(array[5]); cust.setAmount(Double.parseDouble(array[6])); cust.settransaction(Double.parseDouble(array[7])); dataGroup.add(cust); //then proced to read next customer. data = br.readLine(); } br.close(); }catch(Exception e){ System.out.println("error" +e); } return dataGroup; } public static void main(String[] args) { FileReadDataModel fr = new FileReadDataModel(); Set customerGroup = fr.retrieveHistory(); System.out.println(e); for(Object obj : customerGroup){ Customer cust = (Customer)obj; System.out.println("Cust name :" +cust.getCustomerName()); System.out.println("Cust amount :" +cust.getAmount()); }

    Read the article
