Search Results

Search found 60622 results on 2425 pages for 'linked data'.

Page 164/2425 | < Previous Page | 160 161 162 163 164 165 166 167 168 169 170 171  | Next Page >

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures for implementing a simple open-source temporal object database, and currently I'm very fond of using persistent Red-Black trees to do it. My main reason for using persistent data structures is to minimize the use of locks, so the database can be as parallel as possible. It should also make it easier to implement ACID transactions, and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes implementing a temporal database almost free, and that is something quite nice to have, especially for the web and for data analysis (e.g. trends). All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously so a response is always immediate, I don't want to build the whole application on a false premise, only to realize it wasn't really a good way to do it. Here's my line of thought:
    - Since all writes are done asynchronously, and using a persistent data structure means the previous (and currently valid) structure is never invalidated, the write time isn't really a bottleneck.
    - There is some literature on structures like this designed specifically for disk usage, but it seems to me that these techniques add read overhead to achieve faster writes, when exactly the opposite is what I would prefer. Also, many of these techniques end up with multi-versioned trees that aren't strictly immutable, which is crucial to justify the persistence overhead.
    - I know there will still have to be some kind of locking when appending values to the database, and there should be good garbage-collection logic if not all versions are to be maintained (otherwise the file size will surely grow dramatically). A delta-compression system could also be considered.
    - Of all the search-tree structures, Red-Black trees really seem closest to what I need, since they require the fewest rotations.
    But there are some possible pitfalls along the way:
    - Asynchronous writes could affect applications that need the data in real time, but I don't think that is the case for most web applications. When real-time data is needed, another solution could be devised, like a check-in/check-out system for the specific data that has to be worked on in a more real-time manner.
    - They could also lead to commit conflicts, though I fail to think of a good example of when that could happen. Then again, commit conflicts can occur in a normal RDBMS too, if two threads are working with the same data, right?
    - The overhead of having an immutable interface like this will grow exponentially, everything is doomed to fail soon, and this whole thing is a bad idea.
    Any thoughts? Thanks! Edit: There seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure
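
    As a point of reference for the write-path cost, here is a minimal sketch (in Java, purely for illustration) of the path-copying idea behind persistent trees, without the Red-Black rebalancing: each insert copies only the nodes along the search path, so every previous root remains a valid, immutable snapshot.

        // Simplified persistent BST: path copying only, no balancing.
        final class PNode {
            final int key;
            final PNode left, right;

            PNode(int key, PNode left, PNode right) {
                this.key = key;
                this.left = left;
                this.right = right;
            }

            // Returns a new root; nodes off the search path are shared, not copied.
            static PNode insert(PNode root, int key) {
                if (root == null) return new PNode(key, null, null);
                if (key < root.key) return new PNode(root.key, insert(root.left, key), root.right);
                if (key > root.key) return new PNode(root.key, root.left, insert(root.right, key));
                return root;   // key already present: the old version is reused unchanged
            }
        }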

    Read the article

  • Programmatically import CSV data to Access

    - by FrustratedWithFormsDesigner
    I have an Access database, and the source of its data is generated CSV files. I'd like to give the users an easy way to simply select a data file and import it. The import should append the new data to the data already in the table. Is there a way in Access to create a file selector and import using CSV import settings that have already been saved?
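
    A minimal VBA sketch of that flow, assuming a saved import specification named "MyCsvSpec" and a destination table "tblData" (both names are hypothetical; the FileDialog call needs a reference to the Microsoft Office Object Library):

        Public Sub ImportCsv()
            Dim dlg As Office.FileDialog
            Set dlg = Application.FileDialog(msoFileDialogFilePicker)
            dlg.Filters.Clear
            dlg.Filters.Add "CSV files", "*.csv"
            dlg.AllowMultiSelect = False
            If dlg.Show Then
                ' TransferText appends rows to tblData using the saved import specification
                DoCmd.TransferText acImportDelim, "MyCsvSpec", "tblData", _
                    dlg.SelectedItems(1), True   ' True = first row contains field names
            End If
        End Sub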

    Read the article

  • Warning about data loss (C++/C)

    - by Dr Deo
    I am getting a benign warning about possible data loss: warning C4244: 'argument' : conversion from 'const int' to 'float', possible loss of data. My question: I remember float as having greater precision than int, so how can data be lost when converting from a smaller data type (int) to a larger data type (float)?
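
    For context: float has the larger range, but only a 24-bit significand, so a 32-bit int can carry more significant digits than a float can represent. A short C++ sketch of where the rounding happens:

        // A 32-bit int has more significant bits than a float's 24-bit mantissa,
        // so large values are silently rounded on conversion.
        #include <iostream>

        int main() {
            const int i = 16777217;              // 2^24 + 1, exactly representable as int
            float f = static_cast<float>(i);     // the explicit cast also silences C4244
            std::cout << i << " -> " << static_cast<long>(f) << "\n";   // prints 16777217 -> 16777216
            return 0;
        }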

    Read the article

  • Why is cakephp form input stored in $this->data and POST data stored in $this->params['form'] ?

    - by spudnik1979
    The CakePHP REST tutorial says that POST data should be in $this->data, but I am finding that it is not; instead it is in $this->params['form']. However, when using the CakePHP form helper in a view, the data is in $this->data. Am I correct that I have to check both locations in my controller? It just seems a bit of a waste of extra code. Shouldn't the data appear in one place, whether it came from a REST requestor or a CakePHP form post? PS: I'm using CakePHP 1.3.
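
    A small sketch of the workaround described above, checking both locations in a controller action (the action and field handling are only illustrative):

        function add() {
            // Form-helper posts arrive in $this->data, raw REST posts in $this->params['form']
            $data = !empty($this->data) ? $this->data : $this->params['form'];
            if (!empty($data)) {
                // ... handle the submitted data here
            }
        }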

    Read the article

  • Do (statically linked) DLLs use a different heap than the main program?

    - by happy_emi
    I'm new to Windows programming and I've just "lost" two hours hunting a bug which everyone seems aware of: you cannot create an object on the heap in a DLL and destroy it in another DLL (or in the main program). I'm almost sure that on Linux/Unix this is NOT the case (if it is, please say so, but I'm pretty sure I have done that thousands of times without problems...). At this point I have a couple of questions:
    1) Do statically linked DLLs use a different heap than the main program?
    2) Is the statically linked DLL mapped into the same process space as the main program? (I'm quite sure the answer here is a big YES, otherwise it wouldn't make sense to pass pointers from a function in the main program to a function in a DLL.)
    I'm talking about plain/regular DLLs, not COM/ATL services. EDIT: By "statically linked" I mean that I don't use LoadLibrary to load the DLL but instead link against the import (stub) library.
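
    The usual convention that sidesteps the mismatched-CRT-heap problem is to let the same module both allocate and free each object. A hedged C++ sketch (names are hypothetical):

        // mywidget.h - shared between the DLL and the EXE
        #ifdef BUILDING_MYDLL
        #  define MYDLL_API __declspec(dllexport)
        #else
        #  define MYDLL_API __declspec(dllimport)
        #endif

        struct Widget;
        extern "C" MYDLL_API Widget* Widget_Create();            // allocates on the DLL's heap
        extern "C" MYDLL_API void    Widget_Destroy(Widget* w);  // frees on that same heap

        // mywidget.cpp - compiled into the DLL
        struct Widget { int value; };
        Widget* Widget_Create()           { return new Widget(); }
        void    Widget_Destroy(Widget* w) { delete w; }

    The caller in the EXE can hold and use the pointer freely; it just never calls delete on it itself. (Building every module against the same dynamic CRT also avoids the problem, since they then share one heap.)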

    Read the article

  • Passing data between blocks using Sinatra

    - by Dan Galipo
    Hi all, I'm trying to pass data between blocks using Sinatra. For example:

        @data = Hash.new

        post "/" do
          @data[:test] = params.fetch("test").to_s
          redirect "/tmp"
        end

        get "/tmp" do
          puts @data[:test]
        end

    However, whenever I get to the /tmp block, @data is nil and it throws an error. Why is that?
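
    Instance state set while handling one request is not visible to the next (each request runs in a fresh scope), so a common sketch is to enable sessions and carry the value there, roughly:

        require 'sinatra'

        enable :sessions                        # cookie-backed session store

        post "/" do
          session[:test] = params["test"].to_s  # survives the redirect
          redirect "/tmp"
        end

        get "/tmp" do
          "Value was: #{session[:test]}"
        end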

    Read the article

  • Grep and Extract Data in Perl

    - by syker
    I have HTML content stored in a variable. How do I extract data that is found between a set of common tags in the page? For example, I am interested in the data (represented by *DATA*) kept between sets of tags that appear one line after the other:
    ...
    <td class="jumlah">*DATA*</td>
    <td class="ud"><a href="">*DATA*</a></td>
    ...
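
    A quick sketch with non-greedy captures, assuming the page text is in $html (for messy real-world markup a parser such as HTML::TreeBuilder is more robust):

        my @jumlah = $html =~ m{<td class="jumlah">(.*?)</td>}gs;
        my @ud     = $html =~ m{<td class="ud"><a href="[^"]*">(.*?)</a></td>}gs;

        print "$_\n" for @jumlah, @ud;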

    Read the article

  • DomainDataSource DataPager with Silverlight 3 DataGrid & .NET RIA Services

    - by Dennis Ward
    I have a simple DataGrid example with Silverlight 3, and am populating it with .NET RIA Services using a DomainDataSource along with a DataPager declaratively (nothing in the code-behind), and am experiencing this problem: the LoadSize is 30 and the PageSize is 15, and when the page is loaded, the 1st and 2nd pages appear correctly, but when I go beyond the 2nd page, nothing shows up in the grid. This used to work in the Silverlight 3 beta with the MIX 2009 preview of .NET RIA Services. I've got a really simple example and have verified that the service on the web project gets called to load a new batch, but the grid doesn't show any data. Can anyone shed any light as to why the grid displays data only for the initial load of data and not for subsequent batches from the pager? Here's my XAML:

        <riaControls:DomainDataSource x:Name="ArtistSource" QueryName="GetArtist"
                                      AutoLoad="True" LoadSize="30" PageSize="15">
            <riaControls:DomainDataSource.DomainContext>
                <domain:AdminContext />
            </riaControls:DomainDataSource.DomainContext>
        </riaControls:DomainDataSource>

        <data:DataGrid Grid.Row="1" x:Name="ArtistDataGrid"
                       ItemsSource="{Binding Data, ElementName=ArtistSource}">
        </data:DataGrid>

        <StackPanel Grid.Row="2">
            <data:DataPager Source="{Binding Data, ElementName=ArtistSource}" />
        </StackPanel>

    Read the article

  • How to populate Java (web) application with initial data using Spring/JPA/Hibernate

    - by Tuukka Mustonen
    I want to set up my database with initial data programmatically. I want to populate my database for development runs, not for testing runs (that's easy). The product is built on top of Spring and JPA/Hibernate.
    - Developer checks out the project
    - Developer runs a command/script to set up the database with initial data
    - Developer starts the application (server) and begins developing/testing
    then:
    - Developer runs a command/script to flush the database and set it up with new initial data, because the database structures or the initial data bundle have changed
    What I want is to set up just enough of my environment to be able to call my DAOs and insert new objects into the database. I do not want to create initial data sets in raw SQL or XML, take database dumps, or whatever. I want to programmatically create objects and persist them in the database as I would in normal application logic. One way to accomplish this would be to start up my application normally and run a special servlet that does the initialization. But is that really the way to go? I would love to execute the initial data setup as a Maven task, and I don't know how to do that if I take the servlet approach. There is a somewhat similar question. I took a quick glance at the suggested DBUnit and Unitils, but they seem to be heavily focused on setting up testing environments, which is not what I want here. DBUnit does initial data population, but only using XML/CSV fixtures, which is not what I'm after either. Maven has an SQL plugin, but I don't want to handle raw SQL. Maven also has a Hibernate plugin, but it seems to help only with Hibernate configuration and table schema creation (not with populating the db with data). How to do this?
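
    A hedged sketch of the "programmatic fixture" part: a plain main() that boots the Spring context and persists objects through the existing DAOs (the context file, bean, DAO and entity names are hypothetical). Such a class can then be run from a Maven task, e.g. via the exec-maven-plugin:

        import org.springframework.context.support.ClassPathXmlApplicationContext;

        public class DevDataLoader {
            public static void main(String[] args) {
                // Reuse the production wiring so JPA/Hibernate and transactions behave as usual
                ClassPathXmlApplicationContext ctx =
                        new ClassPathXmlApplicationContext("applicationContext.xml");
                try {
                    UserDao userDao = (UserDao) ctx.getBean("userDao");
                    userDao.save(new User("demo", "demo@example.com"));
                    // ... build the rest of the initial object graph here
                } finally {
                    ctx.close();
                }
            }
        }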

    Read the article

  • R and version control for the solo data analyst

    - by Jeromy Anglim
    Many data analysts that I respect use version control. For example:
    http://github.com/hadley/
    See comments on http://permut.wordpress.com/2010/04/21/revision-control-statistics-bleg/
    However, I'm evaluating whether adopting a version control system such as git would be worthwhile. A brief overview: I'm a social scientist who uses R to analyse data for research publications. I don't currently produce R packages. My R code for a project typically includes a few thousand lines of code for data input, cleaning, manipulation, analyses, and output generation. Publications are typically written using LaTeX. With regard to version control there are many benefits which I have read about, yet they seem to be less relevant to the solo data analyst:
    - Backup: I have a backup system already in place.
    - Forking and rewinding: I've never felt the need to do this, but I can see how it could be useful (e.g., you are preparing multiple journal articles based on the same dataset; you are preparing a report that is updated monthly, etc.).
    - Collaboration: Most of the time I am analysing data myself, thus I wouldn't get the collaboration benefits of version control.
    There are also several potential costs involved with adopting version control:
    - Time to evaluate and learn a version control system
    - A possible increase in complexity over my current file management system
    However, I still have the feeling that I'm missing something. General guides on version control seem to be addressed more towards computer scientists than data analysts. Thus, specifically in relation to data analysts in circumstances similar to those listed above:
    - Is version control worth the effort?
    - What are the main pros and cons of adopting version control?
    - What is a good strategy for getting started with version control for data analysis with R (e.g., examples, workflow ideas, software, links to guides)?
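
    For the "getting started" part, a minimal per-project workflow sketch (shell commands; the file names are only examples):

        cd ~/projects/my-analysis
        git init
        printf '*.Rhistory\n.RData\noutput/\n' > .gitignore   # keep generated artefacts out of history
        git add .gitignore *.R report.tex
        git commit -m "Initial import of analysis scripts"

        # ... edit clean.R, rerun the analysis ...
        git add clean.R
        git commit -m "Handle missing ages in survey data"
        git log --oneline                                     # the project's history so far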

    Read the article

  • XML Schema Migration

    - by Corwin Joy
    I am working on a project where we need to save data in an XML format. The problem is that, over time, we expect the format/schema for our data to change. What we want to be able to do is produce scripts to migrate our data across different schema versions. We distribute our product to thousands of customers, so we need to be able to run/apply these scripts at customer sites (we can't just do the conversions by hand). I think what we are looking for is some kind of XML data migration tool. In my mind the ideal tool could:
    - Do an "XML diff" of two schemas to identify added/deleted/changed nodes.
    - Allow us to specify transformation functions. So, for example, we might add a new element to our schema that is a function of old elements (e.g. a new element C where C = A + B, and A and B are old elements).
    So I think I am looking for a kind of XML diff and patch tool which can also apply transformation functions. One tool I am looking at for this is Altova's MapForce. I'm sure others here have had to deal with XML data format migration. How did you handle it? Edit: One point of clarification. The "diff" I plan to do is on the schema or .xsd files. The actual changes will be made to particular data sets that follow a given schema. These data sets will be .xml files. So it's a "diff" of the schemas to help figure out what changes need to be made to data sets to migrate them from one schema to another.
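
    A migration step of that shape can be written as a plain XSLT identity transform plus one override; a hedged sketch (the element names Record, A, B and C are hypothetical):

        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <!-- identity template: copy everything unchanged by default -->
          <xsl:template match="@*|node()">
            <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
          </xsl:template>

          <!-- for each Record, keep its content and append the derived element C = A + B -->
          <xsl:template match="Record">
            <Record>
              <xsl:apply-templates select="@*|node()"/>
              <C><xsl:value-of select="A + B"/></C>
            </Record>
          </xsl:template>
        </xsl:stylesheet>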

    Read the article

  • Status of Data in Rollback of Large Transaction in SQL Server

    - by Lloyd Banks
    I have a data warehousing procedure that downloads and replaces dozens of tables from a linked server to a local database. Every once in a while, the code will get stuck on one of the tables on the linked server because that table is in a state of transition. I am under the assumption that since the entire procedure is considered one transaction commit, when the procedure gets stuck, none of the changes made by the procedure so far would have committed. But the opposite seems to be true: tables that were "downloaded" before the procedure got stuck have been updated with today's versions on the local server. Shouldn't SQL Server wait for the entire procedure to finish before the changes are durable?

        CREATE PROCEDURE MYIMPORT AS
        BEGIN
            SET NOCOUNT ON

            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE1')
                DROP TABLE TABLE1
            SELECT COLUMN1, COLUMN2, COLUMN3
            INTO TABLE1
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE1')

            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE2')
                DROP TABLE TABLE2
            SELECT COLUMN1, COLUMN2, COLUMN3
            INTO TABLE2
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE2')

            --IF THE PROCEDURE GETS STUCK HERE, THEN CHANGES TO TABLE1 WOULD HAVE BEEN MADE ON THE
            --LOCAL SERVER WHILE NO CHANGES WOULD HAVE BEEN MADE TO TABLE3 ON THE LOCAL SERVER
            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE3')
                DROP TABLE TABLE3
            SELECT COLUMN1, COLUMN2, COLUMN3
            INTO TABLE3
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE3')
        END
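
    For reference, a procedure body does not run as a single transaction by default (each statement autocommits on its own), so all-or-nothing behaviour has to be requested explicitly. A sketch along these lines:

        CREATE PROCEDURE MYIMPORT AS
        BEGIN
            SET NOCOUNT ON;
            SET XACT_ABORT ON;              -- roll back the whole transaction on most run-time errors
            BEGIN TRY
                BEGIN TRANSACTION;
                -- ... the DROP TABLE / SELECT ... INTO statements from above go here ...
                COMMIT TRANSACTION;
            END TRY
            BEGIN CATCH
                IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
                -- re-raise so the caller still sees the failure
                RAISERROR('MYIMPORT failed; all changes were rolled back.', 16, 1);
            END CATCH
        END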

    Read the article

  • Enormous data and PHP errors

    - by salamis
    I am currently using HighCharts/HighStock charts (http://www.highcharts.com/stock/demo/data-grouping) in order to display the data returned from the server. We retrieve the data from a MySQL database, and it is really big: we are storing sensor metrics every 1 second. After a while we got the following error:

        [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to allocate 4756882 bytes) in C:\\wamp\\www\\admin\\getTrends.php on line 156, referer: http://localhost/admin/trends.php
        [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP Stack trace:, referer: http://localhost/admin/trends.php
        [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP   1. {main}() C:\\wamp\\www\\admin\\getTrends.php:0, referer: http://localhost/admin/trends.php
        [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP   2. getTrendsDataAI() C:\\wamp\\www\\admin\\getTrends.php:33, referer: http://localhost/admin/trends.php
        [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP   3. printResults() C:\\wamp\\www\\admin\\getTrends.php:102, referer: http://localhost/admin/trends.php
        [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP   4. createData() C:\\wamp\\www\\admin\\getTrends.php:230, referer: http://localhost/admin/trends.php
        [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP   5. implode() C:\\wamp\\www\\admin\\getTrends.php:156, referer: http://localhost/admin/trends.php

    What is the best way to return this data as a JSON object to HighStock for viewing, and how can we overcome the PHP memory limitation? Should we return a chunk of data each time? How do people usually present enormous amounts of data to users and create charts and reports from it? Another big problem we need to overcome is that the returned JSON object is enormous: at this point it is around 20-30 MB, and it will be much larger in the future. Is it OK to return this data to the user and perform everything client side? Any suggestions or thoughts are welcome.
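
    Part of the memory problem is building the whole response string in memory (the implode() in the trace) before sending it. A hedged PHP sketch that streams the JSON array row by row instead, so memory stays flat regardless of the row count (the column names are hypothetical):

        header('Content-Type: application/json');
        echo '[';
        $first = true;
        while ($row = mysqli_fetch_assoc($result)) {   // $result is the sensor-data query result
            if (!$first) {
                echo ',';
            }
            // HighStock expects [timestamp_in_ms, value] pairs
            echo json_encode(array((int)$row['ts'] * 1000, (float)$row['value']));
            $first = false;
        }
        echo ']';

    For the 20-30 MB payload itself, the usual approach is to aggregate or downsample on the server (e.g. one point per minute for wide date ranges) and only fetch full-resolution data for the zoomed-in window, rather than shipping everything to the browser.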

    Read the article

  • Cocoa Core Data: cannot save created items in NSTableView

    - by Paul Rostorp
    Hello, I am a beginner in Mac OS X development and am trying to get started with all this. Here is my problem: I've created a non-document-based Cocoa app using Core Data as storage. I've added an entity and attributes to the data model. In IB I've created an NSArrayController and linked it properly, and created an NSTableView bound to the NSArrayController. Next I added a button linked to the NSArrayController with the "add:" method. When I try it out, I can add and edit the items in the table. Here comes the problem: Core Data is supposed to save everything automatically, but to make sure, I linked the "Save" menu item to the app delegate and to the File's Owner, First Responder, application... everything possible (with both "save:" and "saveAction:" methods). And still it doesn't save: when I restart, the rows I created (and renamed) are gone. Also, I haven't even edited the source code yet; for such simple tasks Core Data is supposed to need only Interface Builder. Please help me with this, I haven't found any threads resolving this problem. Thank you in advance.
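
    Core Data does not persist edits until the managed object context is actually saved. A minimal Objective-C sketch of a save action in the app delegate (the "Save" menu item should reach it through First Responder):

        - (IBAction)saveAction:(id)sender {
            NSError *error = nil;
            if (![[self managedObjectContext] save:&error]) {
                // Surface the validation or store error instead of silently losing edits
                [[NSApplication sharedApplication] presentError:error];
            }
        }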

    Read the article

  • How do you remove invalid hexadecimal characters from an XML-based data source prior to constructing

    - by Oppositional
    Is there any easy/general way to clean an XML-based data source prior to using it in an XmlReader so that I can gracefully consume XML data that is non-conformant to the hexadecimal character restrictions placed on XML? Note: The solution needs to handle XML data sources that use character encodings other than UTF-8, e.g. by specifying the character encoding in the XML document declaration. Not mangling the character encoding of the source while stripping invalid hexadecimal characters has been a major sticking point. The removal of invalid hexadecimal characters should only remove hexadecimal encoded values, as you can often find href values in data that happen to contain a string that would be a string match for a hexadecimal character. Background: I need to consume an XML-based data source that conforms to a specific format (think Atom or RSS feeds), but want to be able to consume data sources that have been published which contain invalid hexadecimal characters per the XML specification. In .NET if you have a Stream that represents the XML data source, and then attempt to parse it using an XmlReader and/or XPathDocument, an exception is raised due to the inclusion of invalid hexadecimal characters in the XML data. My current attempt to resolve this issue is to parse the Stream as a string and use a regular expression to remove and/or replace the invalid hexadecimal characters, but I am looking for a more performant solution.
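
    On .NET 4.0 and later, one hedged sketch for the raw-character part: decode the stream first (e.g. with a StreamReader, so the declared encoding is honoured), then filter with XmlConvert.IsXmlChar, which tests exactly the XML 1.0 character ranges. Numeric character references to invalid characters (e.g. &#x6;) would still need a separate pass.

        using System.Linq;
        using System.Xml;

        static string StripInvalidXmlChars(string decodedXml)
        {
            // Keep only characters that are legal in XML 1.0; this drops stray control
            // characters without touching ordinary text such as href values.
            return new string(decodedXml.Where(XmlConvert.IsXmlChar).ToArray());
        }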

    Read the article

  • Working with images (CGImage), exif data, and file icons

    - by Nick
    What I am trying to do (under 10.6): I have an image (JPEG) that includes an icon in the image file (that is, you see an icon based on the image in file-open dialogs, as opposed to a generic JPEG icon). I wish to edit the EXIF metadata and save it back to the image in a new file. Ideally I would like to save this back to an exact copy of the file (i.e. preserving any custom embedded icons created, etc.); however, in my hands the icon is lost. My code (some bits removed for ease of reading):

        // set up source ref -- I THINK THE PROBLEM IS HERE - NOT GRABBING THE INITIAL DATA
        CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef) URL, NULL);

        // snag metadata
        NSDictionary *metadata = (NSDictionary *) CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);

        // make metadata mutable
        NSMutableDictionary *metadataAsMutable = [[metadata mutableCopy] autorelease];

        // grab exif
        NSMutableDictionary *EXIFDictionary = [[[metadata objectForKey:(NSString *)kCGImagePropertyExifDictionary] mutableCopy] autorelease];

        << edit exif >>

        // add back edited exif
        [metadataAsMutable setObject:EXIFDictionary forKey:(NSString *)kCGImagePropertyExifDictionary];

        // get source type
        CFStringRef UTI = CGImageSourceGetType(source);

        // set up write data
        NSMutableData *data = [NSMutableData data];
        CGImageDestinationRef destination = CGImageDestinationCreateWithData((CFMutableDataRef)data, UTI, 1, NULL);

        // add the image plus modified metadata -- PROBLEM HERE? NOT ADDING THE ICON
        CGImageDestinationAddImageFromSource(destination, source, 0, (CFDictionaryRef) metadataAsMutable);

        // write to data
        BOOL success = NO;
        success = CGImageDestinationFinalize(destination);

        // save data to disk
        [data writeToURL:saveURL atomically:YES];

        // cleanup
        CFRelease(destination);
        CFRelease(source);

    I don't know if this is really a question of image handling, file handling, post-save processing (I could use sip), or me just being thick (I suspect the last). Nick
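
    On the icon specifically: a custom Finder icon lives in the file's resource fork/Finder info rather than in the JPEG data itself, so CGImageDestination will not carry it across. A hedged sketch of re-applying an icon after the write (here simply reusing the original file's icon):

        NSImage *icon = [[NSWorkspace sharedWorkspace] iconForFile:[URL path]];   // or build one from the image
        [data writeToURL:saveURL atomically:YES];
        [[NSWorkspace sharedWorkspace] setIcon:icon
                                       forFile:[saveURL path]
                                       options:0];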

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            )
        )

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk IO is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:
    - Drop the primary key while I am doing the inserting and recreate it later
    - Do inserts into a temporary table with the same schema and periodically transfer them into the main table to keep the size of the table where insertions are happening small
    - Anything else?
    Based on the responses I have gotten, let me clarify a little bit:
    Portman: I'm using a clustered index because when the data is all imported I will need to access the data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for the import?
    Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.
    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps. ~ Andrew
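
    A hedged C# sketch of the SqlBulkCopy options that usually matter for this pattern (the numbers are starting points to measure against, not recommendations):

        using System.Data;
        using System.Data.SqlClient;

        static void BulkInsert(string connectionString, DataTable dataTable)
        {
            using (var bulk = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock))
            {
                bulk.DestinationTableName = "BulkData";
                bulk.BatchSize = 5000;           // commit in larger batches than one 300-row chunk
                bulk.BulkCopyTimeout = 0;        // no timeout for long-running loads
                bulk.WriteToServer(dataTable);   // DataTable holding the pre-sorted rows
            }
        }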

    Read the article

  • Urgent: Sort HashSet() function data in sequence

    - by vincent low
    I am new to Java. The function I would like to perform is something like this: I load a series of records from a file into my HashSet. The problem is, I am able to enter all the data in sequence, but I can't retrieve it in sequence based on the account name in the file. Can anyone help or comment? Below is my code:

        public Set retrieveHistory() {
            Set dataGroup = new HashSet();
            try {
                File file = new File("C:\\Documents and Settings\\vincent\\My Documents\\NetBeansProjects\\vincenttesting\\src\\vincenttesting\\vincenthistory.txt");
                BufferedReader br = new BufferedReader(new FileReader(file));
                String data = br.readLine();
                while (data != null) {
                    System.out.println("This is all the record:" + data);
                    Customer cust = new Customer();
                    // break the data based on the ,
                    String array[] = data.split(",");
                    cust.setCustomerName(array[0]);
                    cust.setpassword(array[1]);
                    cust.setlocation(array[2]);
                    cust.setday(array[3]);
                    cust.setmonth(array[4]);
                    cust.setyear(array[5]);
                    cust.setAmount(Double.parseDouble(array[6]));
                    cust.settransaction(Double.parseDouble(array[7]));
                    dataGroup.add(cust);
                    // then proceed to read next customer.
                    data = br.readLine();
                }
                br.close();
            } catch (Exception e) {
                System.out.println("error" + e);
            }
            return dataGroup;
        }

        public static void main(String[] args) {
            FileReadDataModel fr = new FileReadDataModel();
            Set customerGroup = fr.retrieveHistory();
            for (Object obj : customerGroup) {
                Customer cust = (Customer) obj;
                System.out.println("Cust name :" + cust.getCustomerName());
                System.out.println("Cust amount :" + cust.getAmount());
            }
        }
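
    A HashSet has no defined iteration order, so neither insertion order nor name order can be recovered from it. A sketch of the usual fix: back the set with a TreeSet ordered by customer name (or a LinkedHashSet if file order is what matters); requires java.util.TreeSet and java.util.Comparator:

        Set<Customer> dataGroup = new TreeSet<Customer>(new Comparator<Customer>() {
            public int compare(Customer a, Customer b) {
                return a.getCustomerName().compareTo(b.getCustomerName());
            }
        });
        // ...load the customers exactly as before; iteration now follows account name.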

    Read the article

  • Designing DAOs around a JSON API for iPhone Development

    - by Bob Spryn
    So I've been trying to design a clean way of grabbing data for my models in iPhone land. All the data for my application comes from JSON APIs. Right now, when a VC needs some models, it does the JSON call itself (async) and builds the models when it receives the data. It works, but I'm trying to think of a cleaner method whereby the DAOs retrieve the information for me and return the models, all in an async manner. My initial thought is to build a protocol for my DAOs, such that the VC would instantiate a DAO and make itself the delegate. When you requested data [DAOinstance getAllUsers], the DAO would do all the network request stuff, and when it had the data, it would call a method on its delegate (the VC) to pass the data. So I think that's a cool solution, but I realized that if I needed to use the same DAO for different purposes in the same VC, my delegate method would have to branch logic depending on which DAO instance initiated the request. So my second thought was to pass 'handler' selectors to the DAO object, like typical JavaScript patterns. So instead of an official protocol, I would say something like [DAOinstance getAllUsersWithSelector:"TheHandlerFunctionOnMyVC:"]. Then when the DAO completed its network activities, it would call the passed selector on the VC and pass the data back. So am I headed in the wrong direction entirely here? Seems like maybe an OK way to go. Any pointers or articles on designing this kind of data layer would be sweet. Thanks! Bob
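
    A hedged Objective-C sketch of the block-based variant of the second idea (names are hypothetical): the DAO takes a completion block instead of a selector, so each call site carries its own handler and no branching is needed in a shared delegate method.

        typedef void (^UsersCompletion)(NSArray *users, NSError *error);

        @interface UserDAO : NSObject
        - (void)getAllUsersWithCompletion:(UsersCompletion)completion;
        @end

        // In the view controller:
        [userDAO getAllUsersWithCompletion:^(NSArray *users, NSError *error) {
            if (error) { /* surface the failure */ return; }
            self.users = users;
            [self.tableView reloadData];
        }];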

    Read the article

  • Storing arbitrary data in HTML

    - by Rob Colburn
    What is the best way to embed data in HTML elements for later use? As an example, let's say we have jQuery returning some JSON from the server, and we want to dump that data out to the user as paragraphs. However, we want to be able to attach metadata to these elements, so we can wire up events for them later. The way I tend to handle this is with some ugly prefixing:

        function handle_response(data) {
            var html = '';
            for (var i in data) {
                html += '<p id="prefix_' + data[i].id + '">' + data[i].message + '</p>';
            }
            jQuery('#log').html(html).find('p').click(function () {
                alert('The ID is: ' + $(this).attr('id').substr(7));
            });
        }

    Alternatively, one can build a form in the paragraph and store the metadata there, but that often feels like overkill. This has been asked before in different ways, but I do not feel it's been answered well:
    http://stackoverflow.com/questions/432174/how-to-store-arbitrary-data-for-some-html-tags
    http://stackoverflow.com/questions/209428/non-standard-attributes-on-html-tags-good-thing-bad-thing-your-thoughts
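
    A sketch of the same handler using jQuery's element data store instead of id prefixes, so the metadata never has to be encoded into the markup at all:

        function handle_response(data) {
            var log = jQuery('#log').empty();
            jQuery.each(data, function (i, item) {
                jQuery('<p/>')
                    .text(item.message)
                    .data('itemId', item.id)   // metadata lives alongside the element, not in the id
                    .click(function () {
                        alert('The ID is: ' + jQuery(this).data('itemId'));
                    })
                    .appendTo(log);
            });
        }

    HTML5 data-* attributes (e.g. data-item-id="42") are the markup-level equivalent when the value does need to appear in the HTML itself.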

    Read the article

  • How to sort HashSet() function data in sequence?

    - by vincent low
    I am new to Java. The function I would like to perform is to load a series of records from a file into my HashSet. The problem is, I am able to enter all the data in sequence, but I can't retrieve it in sequence based on the account name in the file. Can anyone help? Below is my code:

        public Set retrieveHistory() {
            Set dataGroup = new HashSet();
            try {
                File file = new File("C:\\Documents and Settings\\vincent\\My Documents\\NetBeansProjects\\vincenttesting\\src\\vincenttesting\\vincenthistory.txt");
                BufferedReader br = new BufferedReader(new FileReader(file));
                String data = br.readLine();
                while (data != null) {
                    System.out.println("This is all the record:" + data);
                    Customer cust = new Customer();
                    // break the data based on the ,
                    String array[] = data.split(",");
                    cust.setCustomerName(array[0]);
                    cust.setpassword(array[1]);
                    cust.setlocation(array[2]);
                    cust.setday(array[3]);
                    cust.setmonth(array[4]);
                    cust.setyear(array[5]);
                    cust.setAmount(Double.parseDouble(array[6]));
                    cust.settransaction(Double.parseDouble(array[7]));
                    dataGroup.add(cust);
                    // then proceed to read next customer.
                    data = br.readLine();
                }
                br.close();
            } catch (Exception e) {
                System.out.println("error" + e);
            }
            return dataGroup;
        }

        public static void main(String[] args) {
            FileReadDataModel fr = new FileReadDataModel();
            Set customerGroup = fr.retrieveHistory();
            for (Object obj : customerGroup) {
                Customer cust = (Customer) obj;
                System.out.println("Cust name :" + cust.getCustomerName());
                System.out.println("Cust amount :" + cust.getAmount());
            }
        }

    Read the article

  • Rtti data manipulation and consistency in Delphi 2010

    - by Coco
    Has anyone an idea how I can make TValue use a reference to the original data? In my serialization project I use (as suggested in XML Serialization) a generic serializer which stores TValues in an internal tree structure (similar to the MemberMap in the example). This member tree should also be used to create a dynamic setup form and manipulate the data. My idea was to define a property for the data:

        TDataModel<T> = class
          {...}
        private
          FData : TValue;
          function GetData : T;
          procedure SetData(Value : T);
        public
          property Data : T read GetData write SetData;
        end;

    The implementation of the GetData/SetData methods:

        procedure TDataModel<T>.SetData(Value : T);
        begin
          FData := TValue.From<T>(Value);
        end;

        function TDataModel<T>.GetData : T;
        begin
          Result := FData.AsType<T>;
        end;

    Unfortunately, the TValue.From method always makes a copy of the original data. So whenever the application makes changes to the data, the DataModel is not updated, and vice versa: if I change my DataModel in a dynamic form, the original data is not affected. Sure, I could always use the Data property before and after changing anything, but as I use a lot of RTTI inside my DataModel, I don't really want to do this everywhere. Perhaps someone has a better suggestion?
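
    A hedged Delphi sketch of one workaround: instead of caching a TValue copy, the model holds getter/setter closures (TFunc<T>/TProc<T> from SysUtils), so every read and write goes straight to the original data. Type and member names here are illustrative only:

        type
          TDataModel<T> = class
          private
            FGetter: TFunc<T>;
            FSetter: TProc<T>;
            function GetData: T;
            procedure SetData(const Value: T);
          public
            constructor Create(AGetter: TFunc<T>; ASetter: TProc<T>);
            property Data: T read GetData write SetData;
          end;

        constructor TDataModel<T>.Create(AGetter: TFunc<T>; ASetter: TProc<T>);
        begin
          inherited Create;
          FGetter := AGetter;
          FSetter := ASetter;
        end;

        function TDataModel<T>.GetData: T;
        begin
          Result := FGetter();   // always re-reads the live data
        end;

        procedure TDataModel<T>.SetData(const Value: T);
        begin
          FSetter(Value);        // writes straight through to the original
        end;

    The form and the application then share the same underlying storage, at the cost of wiring up the two anonymous methods when each model node is built.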

    Read the article

  • How to access different domain data using JavaScript

    - by shoaibmohammed
    Hello there. Here is the issue: suppose there is a DOMAIN A, which is going to be the server containing a PHP script file. The data from Domain A is to be accessed by a client at DOMAIN B. I know it cannot be accessed directly using JavaScript. So what I did is, in Domain A I created a JavaScript file as a front-end for the PHP script, which AJAXes the PHP and returns the data. But unfortunately it didn't work. I came across an example having PHP as a middle man on the client side, but I do not want to keep any server-side PHP code as a middle man on the client side. I just want to give the JavaScript to the client domain.
    http://stackoverflow.com/questions/578095/how-to-get-data-with-javascript-from-another-server
    DOMAIN A
    PHP - data.php:

        <?php echo "Server returns data"; ?>

    JS - example.js (does the Ajax call to the PHP):

        function getData() {
            // assume ajax is done for data.php and data is retrieved, now return the data
            return ajaxed_data;
        }

    DOMAIN B
    The client includes the example.js file from Domain A in his HTML:

        <script type="text/javascript" src="http://www.DomainA.com/example.js"></script>
        <script type="text/javascript">
            alert(getData());
        </script>

    I hope I have made myself understandable! Can this be established? It's something like Google Friend Connect, what I mean is: just provide JavaScript to the client and that's it, with everything carried out on the server side. Thanks for providing this forum.
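
    The same-origin restriction is usually worked around with JSONP: the script loaded from Domain A carries the data wrapped in a callback call, so no XMLHttpRequest to a foreign domain is needed. A hedged sketch (the callback name is hypothetical):

        // Domain A - data.php wraps its JSON in the callback name passed by the client:
        //   <?php echo $_GET['callback'] . '(' . json_encode("Server returns data") . ');'; ?>

        // Domain B - the client defines the callback and loads the script:
        function handleData(data) {
            alert(data);   // "Server returns data"
        }
        var s = document.createElement('script');
        s.src = 'http://www.DomainA.com/data.php?callback=handleData';
        document.getElementsByTagName('head')[0].appendChild(s);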

    Read the article

< Previous Page | 160 161 162 163 164 165 166 167 168 169 170 171  | Next Page >