Search Results

Search found 4705 results on 189 pages for 'export to csv'.

Page 10 of 189

  • Regular expression for parsing CSV in PHP

    - by Discodancer
    I already managed to split the CSV file using this regex: "/,(?=(?:[^\"]*\"[^\"]*\")*(?![^\"]*\"))/" But I ended up with an array of strings that contain the opening and ending double quotes. Now I need a regex that would strip those strings of the delimiting double quotes. As far as I know, the CSV format can encapsulate strings in double quotes, and all the double quotes that are already part of the string are doubled. For example: My "other" cat becomes "My ""other"" cat" What I basically need is a regex that will replace every sequence of N double quotes with a sequence of N/2 (rounded down) double quotes. Or is there a better way? Thanks in advance.
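
    A real CSV parser already implements the doubled-quote rule, so the strip-and-halve step disappears entirely. A minimal sketch (Python's csv module here; PHP's fgetcsv and str_getcsv follow the same quoting convention):

        import csv, io

        raw = '"My ""other"" cat",plain field,"quoted, with comma"\n'
        row = next(csv.reader(io.StringIO(raw)))
        print(row)  # ['My "other" cat', 'plain field', 'quoted, with comma']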

    Read the article

  • iPhone Accelerometer > csv > email

    - by Bradley Powers
    Hi all, I'm trying to collect data for a machine learning project I'm working on. What I'd like to do is collect accelerometer data from an iPhone, save it to a CSV and email it to myself. My app is currently able to acquire data from the accelerometer, but I'm at a bit of a loss as to how to proceed. First of all, I'd like to acquire data for a preset amount of time (after playing a sound to the user), which I don't really know how to do and can't find good documentation for. Also, I'd like to save that to a CSV, which there is some documentation on (specifically using the NSString writeToFile method). Any recommendations/ideas? Thanks!
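
    The buffer-then-write part is straightforward once samples are collected; on iOS the fixed recording window would come from something like an NSTimer, and the file from NSString's writeToFile: as the question mentions. A language-neutral sketch of just the CSV step (Python here, with made-up sample values):

        import csv

        samples = [(0.01, 0.02, -0.98, 0.05),   # (t, x, y, z) tuples from the accelerometer
                   (0.02, 0.03, -0.97, 0.04)]
        with open("accel.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["t", "x", "y", "z"])
            writer.writerows(samples)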

    Read the article

  • How to serve a View as CSV in ASP.NET Web Forms

    - by ChessWhiz
    Hi, I have an MS SQL view that I want to make available as a CSV download in my ASP.NET Web Forms app. I am using Entity Framework for other views and tables in the project. What's the best way to enable this download? I could add a HyperLink whose click handler iterates over the view, writes its CSV form to disk, and then serves that file. However, I'd prefer not to write to disk if it can be avoided, and that approach involves iteration code that some other solution might avoid. Any ideas?
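
    Writing to disk isn't necessary: the usual pattern is to stream the CSV straight into the HTTP response with Content-Type: text/csv and a Content-Disposition: attachment header (in Web Forms, that means writing to Response in the click handler and ending the request). A rough sketch of the in-memory build, in Python for illustration:

        import csv, io

        rows = [("a", 1), ("b", 2)]          # stand-in for iterating the EF view
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["Name", "Value"])
        writer.writerows(rows)
        payload = buf.getvalue()             # send as the response body; no temp file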

    Read the article

  • Parsing a CSV File to a Rails Database

    - by Schroedinger
    G'day guys, I'm using FasterCSV and a rake script to parse a CSV with about 30 columns into my Rails db for a 'Trade' item. The script works fine when all of the values are set to strings, but when I change a column to a decimal, int or other type, everything goes to hell. Wondering if FasterCSV has built-in int etc. parsing or whether I'll have to manage these within my model. Basically, I'm given a giant amount of trades data, need to import it, and then need to provide feedback with, say, the average trade volume, the times, etc. I understand I can do all that with the wonderful records provided to me by ActiveRecord, but wondered if there was an easier way to populate a rather large database from a given CSV? Several of the fields don't have values for certain rows; FasterCSV seems to work perfectly when they're all strings, but not when I try to use decimal or other types.
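
    FasterCSV does have built-in coercion (the :converters option, e.g. :converters => :numeric), but blank cells are the usual tripwire: an empty field won't convert, so each typed column needs a blank-tolerant cast. The same idea sketched in Python (the column name is hypothetical):

        import csv

        def to_decimal(s):
            s = (s or "").strip()
            return float(s) if s else None   # blanks become NULLs instead of raising

        with open("trades.csv", newline="") as f:
            for row in csv.DictReader(f):
                volume = to_decimal(row["volume"])   # hypothetical column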

    Read the article

  • Dealing with a badly formatted CSV file

    - by Josh K
    I have an exceptionally bad CSV file. Although I "solved" the problem in the end by manually writing scripts to process and reprocess this specific file, I wanted to know if there were any other solutions out there. You have a CSV file that has all the fields terminated by | (pipe) characters. Running a quick check shows you that there are 53 fields in the file. The person who gave you the file claims there are only 28 fields. Not all of the fields have information in them. For example, there are five custom_field_{num} fields which may or may not have data. How would you get this into a database nicely? The ideal solution (and one I searched high and low for) would be to just throw it all into a table with no column names or specifications, then remove any columns that were completely blank, and then give the rest titles and specifications.
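
    The "load everything, drop the empty columns, name what's left" plan works fine outside the database too. A rough sketch (Python; it pads ragged rows rather than assuming every line has all 53 fields, and if the file has a header row, slice it off before the blank test):

        import csv

        with open("bad.csv", newline="") as f:
            rows = list(csv.reader(f, delimiter="|"))

        width = max(len(r) for r in rows)
        rows = [r + [""] * (width - len(r)) for r in rows]          # pad ragged rows
        keep = [i for i in range(width) if any(r[i].strip() for r in rows)]
        cleaned = [[r[i] for i in keep] for r in rows]              # blank columns gone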

    Read the article

  • Using Regex to remove Carriage Returns in a CSV file in Notepad++

    - by Barry
    I have a CSV file I need to clean up. This is a one-time thing, so I'd like to do it in Notepad++ if possible. The CSV file has two fields, one of which is wrapped in quotes. I'd like to remove any carriage returns from within the quoted field. I was trying to use this pattern but can't get it quite right... (.*)\"(.*)\n(.*)\"(.*) Also correct me if I am wrong, but I presume the "replace with" value would be something along the lines of: \1\2\3\4 Thanks in advance. I'm also open to alternate solutions such as a quick and dirty Perl script.
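
    A single regex struggles here because "inside the quotes" is state, not pattern. A CSV-aware pass is simpler: the parser already knows a newline inside a quoted field belongs to the field, so flattening it is one replace per cell. A quick sketch (Python rather than the Perl the question mentions):

        import csv

        with open("in.csv", newline="") as src, open("out.csv", "w", newline="") as dst:
            writer = csv.writer(dst)
            for row in csv.reader(src):   # the reader keeps quoted newlines inside the field
                writer.writerow([cell.replace("\r", " ").replace("\n", " ") for cell in row])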

    Read the article

  • CSV File Content Display Issue

    - by Pankaj Khurana
    Hi, I want to retrieve the contents of a CSV file, for which I am using the following code:

        <?php
        $fo = fopen("record.csv", "rb+");
        while(!feof($fo)) {
            $contents[] = fgetcsv($fo,0,';');
        }
        print_r($contents);
        fclose($fo);
        ?>

    But my records are displayed in the following format:

        ????††???????†††??†††††????"Search Transactions Results" ††††??†???????††††?††††††??? ???????????? ?????????????? ??????????

    My csv file format:

        "Search Transactions Results"
        "Transaction ID","Reference Transaction ID","Date","Type","Subject","Item Number","Item Name","Invoice ID","Name","Email","Shipping Name","Shipping Address Line 1","Shipping Address Line 2","Shipping Address City","Shipping State/Province","Shipping Zip/Postal Code","Shipping Address Country","Shipping Method","Address Status","Contact Phone Number","Gross Amount","Receipt ID","Custom Field","Option 1 Name","Option 1 Value","Option 2 Name","Option 2 Value","Note","Auction Site","Auction User ID","Item URL","Auction Closing Date","Insurance Amount","Currency","Fees","Net Amount","Shipping & Handling Amount","Sales Tax Amount","To Email","Time","Time Zone"
        "1T","",5/5/2010 2:10:44 PM,"Payment Processed","CFP Self Study Kit","1","CFP Self Study Kit","","User1","[email protected]","","","","","","","","","N","","68.18","R1","","","","","","","","","",,"","USD","-2.62","65.56","0","0","[email protected]","01:40","Asia/Calcutta"
        "2T","",5/19/2010 4:04:08 PM,"Payment Processed","CFP Self Study Kit","1","CFP Self Study Kit","","User2","[email protected]","","","","","","","","","N","","68.18","R2","","","","","","","","","",,"","USD","-2.62","65.56","0","0","[email protected]","03:34","Asia/Calcutta"
        "3T","1RT",5/19/2010 5:28:45 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","17492.6","","","","","","","","","","",,"","INR","0","17492.6","0","0","","04:58","Asia/Calcutta"
        "4T","2RT",5/19/2010 5:28:45 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","-393.36","","","","","","","","","","",,"","USD","0","-393.36","0","0","","04:58","Asia/Calcutta"
        "5T","",5/19/2010 5:28:45 PM,"Transfer to Bank Initiated","P1006","","P1006",""," ","","","","","","","","","","N","","-17492.6","","","","","","","","","","",,"","INR","0","-17492.6","0","0","","04:58","Asia/Calcutta"
        "6T","",5/20/2010 5:38:02 PM,"Transfer to Bank Completed","P1006","","P1006",""," ","","","","","","","","","","N","","-17492.6","","","","","","","","","","",,"","INR","0","-17492.6","0","0","","05:08","Asia/Calcutta"
        "7T","",5/21/2010 12:32:37 PM,"Payment Processed","FP - LVC Plus","","FP - LVC Plus","","User3","[email protected]","User3","NEW DELHI","BEHIND KARNATAKA BANK LD","SOUTH","NEW DELHI","110023","IN","","N","","283.96","","","","","","","","","","",,"","USD","-9.95","274.01","0","0","[email protected]","00:02","Asia/Calcutta"
        "8T","",5/25/2010 4:40:48 PM,"Transfer to Bank Initiated","P1006","","P1006",""," ","","","","","","","","","","N","","-12569.85","","","","","","","","","","",,"","INR","0","-12569.85","0","0","","04:10","Asia/Calcutta"
        "9T","3RT",5/25/2010 4:40:48 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","-274.01","","","","","","","","","","",,"","USD","0","-274.01","0","0","","04:10","Asia/Calcutta"
        "10T","4RT",5/25/2010 4:40:48 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","12569.85","","","","","","","","","","",,"","INR","0","12569.85","0","0","","04:10","Asia/Calcutta"
        "11T","",5/26/2010 4:57:39 PM,"Transfer to Bank Completed","P1006","","P1006",""," ","","","","","","","","","","N","","-12569.85","","","","","","","","","","",,"","INR","0","-12569.85","0","0","","04:27","Asia/Calcutta"
        "Total","-247.05 USD","-15.19","-262.24"
        "Total","0.00 INR","0.00","0.00"

    I want to retrieve the records where "Type"="Payment Processed", in a key-value format, e.g. Transaction ID-1T, as I have to store these values in a database, but the display is not proper. I am unable to find out the reason for this; please help me with it. Thanks

    Read the article

  • How to Practically Split Values from CSV File into MySQL Database

    - by Ryan
    Let's suppose I have the following line in a CSV file (I removed the header row in this example): "500,000",2,50,2,90000 I have a PHP script that reads the CSV file, breaks the file into individual lines, and stores each line in an array called $linearray. Then, I use a foreach loop to look at each line individually. Within the foreach loop, I break the line into separate variables using the following function: $line = str_replace("'","\'",$line); From here, I insert the values into separate columns within a MySQL database. The script works. The values are inserted into a database, but I run into a problem. I want: "500,000" | 2 | 50 | 2 | 90000 But I get this: "500 | 000" | 2 | 50 | 2 | 90000 The script isn't smart enough to understand it should skip commas within quotation marks. Do you know how I can alter my script to make sure I get the output I'm looking for? Thanks.
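
    Splitting on every comma is exactly what a CSV-aware parse avoids; PHP's fgetcsv/str_getcsv honor the quotes, as does any real CSV reader. The behavior, sketched in Python:

        import csv, io

        line = '"500,000",2,50,2,90000'
        print(next(csv.reader(io.StringIO(line))))
        # ['500,000', '2', '50', '2', '90000'] -- the quoted comma survives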

    Read the article

  • Regex to match CSV file nested quotes

    - by user361970
    Hi, I know this has been discussed a million times. I tried searching through the forums and have seen some close regex expressions, and tried to modify them but to no avail. Say there is a line in a CSV file like this: "123", 456, "701 "B" Street", 910 Is there an easy regex to detect "B" (since it's a non-escaped set of quotes within the normal CSV quotes) and replace it with something like \"B\"? The final string would end up looking like this: "123", 456, "701 \"B\" Street", 910 Help would be greatly appreciated!
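
    There is no airtight regex for this, because a quote's meaning depends on its position, but a workable heuristic is: a quote next to a comma or at either end of the line is a field boundary, and everything else is nested. That rule sketched in Python (it misfires if a nested quote happens to sit right beside a comma):

        import re

        line = '"123", 456, "701 "B" Street", 910'

        def fix(m, s=line):
            before = s[:m.start()].rstrip()
            after = s[m.end():].lstrip()
            if m.start() == 0 or before.endswith(",") or after == "" or after.startswith(","):
                return '"'                # boundary quote: leave it alone
            return '\\"'                  # nested quote: escape it

        print(re.sub('"', fix, line))
        # "123", 456, "701 \"B\" Street", 910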

    Read the article

  • Python Parse CSV Correctly

    - by cornerstone
    I am very new to Python. I want to parse a CSV file such that it will recognize quoted values - for example 1997,Ford,E350,"Super, luxurious truck" should be split as ('1997', 'Ford', 'E350', 'Super, luxurious truck') and NOT ('1997', 'Ford', 'E350', '"Super', ' luxurious truck"') The above is what I get if I use something like str.split(). How do I do this? Also, would it be best to store these values in an array or some other data structure? Because after I get these values from the CSV I want to be able to easily choose, let's say, any two of the columns and store them as another array or some other data structure. Thanks in advance.
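
    The standard library's csv module does exactly this quote-aware split, and a list of row-lists is a natural structure for picking out columns afterwards. A minimal sketch, assuming a file named cars.csv:

        import csv

        with open("cars.csv", newline="") as f:
            rows = list(csv.reader(f))
        print(rows[0])    # ['1997', 'Ford', 'E350', 'Super, luxurious truck']

        make_model = [(r[1], r[2]) for r in rows]   # any two columns, as pairs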

    Read the article

  • Java BufferedReader behavior in CSV vs TXT file

    - by Gabriel
    If I try to read a CSV file called csv_file.csv, the problem is that when I read lines with BufferedReader.readLine() it skips the first line with months. But when I rename the file to csv_file.txt it reads it all right and doesn't skip the first line. Is there an undocumented "feature" of BufferedReader that I'm not aware of? Example of file:

        Months, SEP2010, OCT2010, NOV2010
        col1, col2, col3, col4, col5
        aaa,,sdf,"12,456",bla bla bla, xsaffadfafda and so on, and so on, "10,00", xxx, xxx

    The code:

        FileInputStream stream = new FileInputStream(UploadSupport.TEMPORARY_FILES_PATH+fileName);
        BufferedReader br = new BufferedReader(new InputStreamReader(stream, "UTF-8"));
        String line = br.readLine();
        String months[] = line.split(",");
        while ((line=br.readLine())!=null) {
            /*parse other lines*/
        }
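
    BufferedReader has no file-extension awareness, so the difference has to come from the bytes themselves; a UTF-8 byte order mark at the start of the .csv copy is a common invisible culprit (the first readLine() then returns "\ufeffMonths, ...", and any startsWith/equals check against "Months" fails). Dumping the first few bytes makes it visible (a quick diagnostic sketch, Python here):

        with open("csv_file.csv", "rb") as f:
            print(f.read(8))   # a UTF-8 BOM shows up as b'\xef\xbb\xbf' before 'M'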

    Read the article

  • excluding a column in csv file with regex

    - by JPro
    Is there any way to exclude/delete/replace one field from a CSV file with some regexp in Notepad++? I have a CSV file with some data like this:

        '1','data1','data2','data3','data4','data5','data6','data7','data8','data9',
        'data10','data11','data12','data13','data14','data15','data16','data17','data18',
        'data19','data20','data21','data22','data23','\'data24 with some commas, here and there and some "double quotes", and fullstops.','data25','data26'

    The only problem I am facing is with data24, where I encounter \' and then "" and some wild characters like , and . The problem is always at field 24. For the purpose of clarity, I have entered newlines here, but the entire text above is on just one line. Any ideas on how to solve this? Thanks.
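
    Quote-aware parsing handles this more reliably than a Notepad++ regex: Python's csv reader accepts a single-quote quotechar and a backslash escapechar, which covers both the \' sequences and the embedded commas/double quotes. A sketch that drops field 24 (assuming the whole file uses this quoting style):

        import csv

        with open("data.csv", newline="") as src, open("out.csv", "w", newline="") as dst:
            reader = csv.reader(src, quotechar="'", escapechar="\\")
            writer = csv.writer(dst, quotechar="'")
            for row in reader:
                del row[23]          # field 24, 0-based index 23
                writer.writerow(row)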

    Read the article

  • Excel isn't reading sql exported csv properly

    - by mhopkins321
    I have a batch file that calls sqlcmd to run a command and then export the data as a CSV. When viewed in a cell the transaction date, for example, shows 35:30.0, but if you click on it the formula bar shows 1/1/1900 2:45:00 PM. I need the full timestamp to show in the cell. Any ideas? The batch file is the following:

        sqlcmd -S server -U username -P password -d database -i "D:\path\sqlScript.sql" -s "," > D:\path\report.csv -I -W -k 1

    The script is the following. Now I currently have the dates cast as varchars, but that's simply because I've been trying to change things; varchar doesn't work either.

        SET NOCOUNT ON;
        select top(10) BO.Status,
            cast(tradeDate AS varchar) AS Trade_Date,
            CAST(closingTime AS varchar) AS Closing_Time,
            CAST(openingTime AS varchar) AS openingTime
        FROM GIANT COMPLICATED JOINS OF ALL SORTS OF TABLES
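
    35:30.0 in the cell with the full value in the formula bar is an Excel display issue rather than a data issue: Excel recognizes the text as a datetime and applies a minutes:seconds number format. Checking the raw file first separates the two failure modes (sketch below); if the raw text is complete, the fix is on the display side (format the column, or import it as Text) or an unambiguous string from SQL, e.g. CONVERT(varchar(23), tradeDate, 121) as a guess at the T-SQL side.

        import csv

        with open("report.csv", newline="") as f:
            print(next(csv.reader(f)))   # the text sqlcmd actually wrote, before Excel touches it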

    Read the article

  • Weird characters in exported csv files when converting

    - by Ahue
    Hey guys, I came across a problem I cannot solve on my own concerning the downloadable csv-formatted trends data files from Google Insights for Search. I'm too lazy to reformat the files I4S gives me manually, which means: extracting the section with the actual trends data and reformatting the columns so that I can use it with a modelling program I do for school. So I wrote a tiny script that should do the work for me: take a file, do some magic and give me a new file in the proper format. What it's supposed to do is read the file contents, extract the trends section, split it by newlines, split each line and then reorder the columns and maybe reformat them. When looking at an untouched I4S csv file it looks normal, containing CR LF characters at line breaks (maybe that's only because I'm using Windows). When just reading the contents and then writing them to a new file using the script, weird Asian characters appear between CR and LF. I tried the script with a manually written similar-looking file and even tried a csv file from Google Trends, and it works fine. I use Python, and the script (snippet) I used for the following example looks like this:

        # Read from an input file
        file = open(file,"r")
        contents = file.read()
        file.close()
        cfile = open("m.log","w+")
        cfile.write(contents)
        cfile.close()

    Has anybody an idea why those characters appear??? Thank you for your help! I'll give you an example. First few lines of an I4S csv file:

        Web Search Interest: foobar
        Worldwide; 2004 - present
        Interest over time
        Week foobar
        2004-01-04 - 2004-01-10 44
        2004-01-11 - 2004-01-17 44
        2004-01-18 - 2004-01-24 37
        2004-01-25 - 2004-01-31 40
        2004-02-01 - 2004-02-07 49
        2004-02-08 - 2004-02-14 51
        2004-02-15 - 2004-02-21 45
        2004-02-22 - 2004-02-28 61
        2004-02-29 - 2004-03-06 51
        2004-03-07 - 2004-03-13 48
        2004-03-14 - 2004-03-20 50
        2004-03-21 - 2004-03-27 56
        2004-03-28 - 2004-04-03 59

    Output file when reading and writing contents:

        Web Search Interest: foobar
        ??????????? ? ? ? ????????? ????????? ???? ??????
        Week foobar
        ?? ?? ?? ? ? ? ?? ??? ?????
        2004-01-11 - 2004-01-17 44
        ?? ?? ???? ? ? ?? ?????????
        2004-01-25 - 2004-01-31 40
        ?? ?? ?? ? ? ? ?? ?? ??????
        2004-02-08 - 2004-02-14 51
        ?? ?? ???? ? ? ?? ?????????
        2004-02-22 - 2004-02-28 61
        ?? ?? ???? ? ? ?? ?? ??????
        2004-03-07 - 2004-03-13 48
        ?? ?? ???? ? ? ?? ??? ?? ??
        2004-03-21 - 2004-03-27 56
        ?? ?? ???? ? ? ?? ?? ??????
        2004-04-04 - 2004-04-10 69
        ?? ?? ???? ? ? ?? ?????????
        2004-04-18 - 2004-04-24 51
        ?? ?? ???? ? ? ?? ?? ??????
        2004-05-02 - 2004-05-08 56
        ?? ?? ?? ? ? ? ?? ?????????
        2004-05-16 - 2004-05-22 54
        ?? ?? ???? ? ? ?? ?????????
        2004-05-30 - 2004-06-05 74
        ?? ?? ?? ? ? ? ?? ?????????
        2004-06-13 - 2004-06-19 50
        ?? ?? ??? ? ? ?? ?????????
        2004-06-27 - 2004-07-03 58
        ?? ?? ?? ? ? ? ?? ??? ?????
        2004-07-11 - 2004-07-17 59
        ?? ?? ???? ? ? ?? ?????????
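
    Paired-up "Asian" characters are the classic sign of UTF-16 data being copied byte-by-byte under a different encoding assumption (the I4S download may well be UTF-16, while the hand-written lookalike and the Trends file presumably weren't). Reading and writing with explicit encodings usually clears it up; a sketch, with UTF-16 as the guessed input encoding:

        src = "report.csv"                        # the I4S download
        with open(src, "r", encoding="utf-16") as f:
            contents = f.read()
        with open("m.log", "w", encoding="utf-8") as out:
            out.write(contents)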

    Read the article

  • Code to update HyperV Export file

    - by Andy Schneider
    I am using the HyperV module from CodePlex to do a "config only" export from a 2008 R2 Hyper-V server. In order to import the configuration on another Hyper-V server, I need to edit the value of CopyVMStorage in the EXP file, which is an XML file. I wrote the following PowerShell code to do the update for me; the variable $existing is the existing exp file.

        $xml = [xml](get-content $existing)
        $xpath = '//PROPERTY[@NAME ="CopyVmStorage"]'
        foreach ($node in $xml.SelectNodes($xpath)) {$node.Value = 'TRUE'}
        $xml.Save($existing)

    This code makes the correct changes to the XML. However, when I go to import the file on the Hyper-V server, I get an error that says the file format is incorrect. I am wondering if the encoding of the file is incorrect or if there is something else going on. If I edit the file manually in WordPad, it imports without an issue. The filename is a GUID with a .exp extension, and it appears that the file name is too long for Notepad to open; Notepad throws an error trying to open the file, which is why I went with WordPad. I have noticed that the file updated with PowerShell comes out formatted, whereas the raw file is XML all bunched together with no whitespace. Any ideas on what "file format" means in this Hyper-V error message, and how I might be able to use my code to automate this change in the XML without changing the file format?
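
    The reformatting is the giveaway: XmlDocument.Save re-serializes the document, changing whitespace and possibly the encoding/BOM, and a picky consumer can reject the result. Two less invasive routes: set PreserveWhitespace before loading (in PowerShell, $xml = New-Object xml; $xml.PreserveWhitespace = $true; $xml.Load($existing)), or skip re-serialization entirely with a targeted text substitution. The latter sketched in Python (the element layout and the UTF-16 encoding are guesses, not checked against a real .exp file):

        import re

        path = "export.exp"
        with open(path, "r", encoding="utf-16") as f:
            text = f.read()
        # flip just the one value; every other byte of the file stays untouched
        text = re.sub(r'(NAME="CopyVmStorage".*?<VALUE>)FALSE(</VALUE>)',
                      r'\1TRUE\2', text, flags=re.S)
        with open(path, "w", encoding="utf-16") as f:
            f.write(text)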

    Read the article

  • Thunderbird: export email account settings

    - by zpea
    I'd like to create a new profile for Thunderbird using the same mail accounts I already configured in my old profile. As it is quite a number of accounts, it would be great to have a way to export/import them instead of writing down the settings just to fill them in again in the new profile. Using web search and search here, I mainly found the following suggestions, which do not match what I need: Copy the whole profile: not possible for me, as I don't want to copy other settings, the downloaded mail data etc., and the old profile broke when running out of space in the home folder anyway. Use MozBackup: there seem to be several programs by that name (forks?). In any case, it's Windows-only and hence no option (I am mostly on Linux and prefer platform-independent solutions anyway). Use accountex: seems to do what I want, but it is not compatible with the current Thunderbird version (it supports only up to version 3.1). Posts with various tips from 4 years ago: top results in the web search with the G, but they do not work in current versions of Thunderbird either. Did I overlook anything? After all, it doesn't sound like I'm looking for something nobody ever looked for.
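
    For a manual but scriptable route: account definitions live in the profile's prefs.js as user_pref("mail.account...."), user_pref("mail.server...."), user_pref("mail.identity....") and user_pref("mail.smtpserver....") lines, so the account-related subset can be pulled out and carried over by hand. A rough extraction sketch (Python; passwords and certificates live elsewhere and won't come along):

        keys = ("mail.account", "mail.server", "mail.identity", "mail.smtpserver")
        with open("prefs.js", encoding="utf-8") as f:
            account_prefs = [line for line in f if any(k in line for k in keys)]
        print("".join(account_prefs))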

    Read the article

  • Cannot export a fusionchart with 'Embedding Charts Using <OBJECT>/<EMBED> Tags'

    - by zoom_pat277
    I am trying to export a FusionChart created using 'Embedding Charts Using <OBJECT>/<EMBED> Tags'. Export works just perfectly with a right click on the chart and choosing PDF as the export format, but I am not able to make this work via JavaScript. I have a button outside the chart which, upon clicking, calls the function below:

        function myexport() {
            var object = getChartFromId('myChartid');
            if( object.hasRendered() )
                object.exportChart({exportFormat: 'PDF'});
        }

    The object returned above is null, and this fails on the next line. Here is the full prototype:

        <html>
        <head>
            <title>My Chart</title>
            <script type="text/javascript" src="fusionCharts.debug.js"></script>
            <script type="text/javascript" src="fusionChartsExportComponent.js"></script>
            <script type="text/javascript">
                function ExportMyChart() {
                    var cObject = getChartFromId('Column3D');
                    if( cObject.hasRendered() )
                        cObject.exportChart({exportFormat: 'PDF'});
                }
            </script>
        </head>
        <body>
            <object width="400" height="400" id="Column3D"
                    classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000"
                    codebase="http://fpdownload.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=8,0,0,0" >
                <param name="testname" value="Column3D.swf" />
                <param name="FlashVars" value="&dataURL=testData.xml&chartWidth=400&chartHeight=300&DOMId=myChart1&registerWithJS=1&debugMode=0">
                <param name="quality" value="high" />
                <embed src="Column3D.swf"
                       flashVars="&dataURL=testData.xml&chartWidth=400&chartHeight=300&DOMId=myChart1&registerWithJS=1&debugMode=0"
                       width="400" height="300" name="Column3D" quality="high"
                       type="application/x-shockwave-flash"
                       pluginspage="http://www.macromedia.com/go/getflashplayer" />
            </object>
            <!-- We also create a DIV to contain the FusionCharts client-side exporter component -->
            <div id="holderDiv" align="center">FusionCharts Export Handler Component</div>
            <script type="text/javascript">
                var myExportComponent = new FusionChartsExportObject("testExporter1", "FCExporter.swf");
                //Render the exporter SWF in our DIV fcexpDiv
                myExportComponent.Render("holderDiv");
            </script>
            <input type="button" value="Export My Chart" onclick="ExportMyChart()" />

    Read the article

  • Reading CSV files in numpy where delimiter is ","

    - by monch1962
    Hello all, I've got a CSV file with a format that looks like this:

        "FieldName1", "FieldName2", "FieldName3", "FieldName4"
        "04/13/2010 14:45:07.008", "7.59484916392", "10", "6.552373"
        "04/13/2010 14:45:22.010", "6.55478493312", "9", "3.5378543"
        ...

    Note that there are double-quote characters at the start and end of each line in the CSV file, and the "," string is used to delimit fields within each line. When I try to read this into numpy via:

        import numpy as np
        data = np.genfromtxt(csvfile, dtype=None, delimiter=',', names=True)

    all the data gets read in as string values, surrounded by double-quote characters. Not unreasonable, but not much use to me, as I then have to go back and convert every column to its correct type. When I use delimiter='","' instead, everything works as I'd like, except for the 1st and last fields. As the start-of-line and end-of-line characters are a single double-quote character, this isn't seen as a valid delimiter for the 1st and last fields, so they get read in as e.g. "04/13/2010 14:45:07.008 and 6.552373" - note the leading and trailing double-quote characters respectively. Because of these redundant characters, numpy assumes the 1st and last fields are both String types; I don't want that to be the case. Is there a way of instructing numpy to read in files formatted in this fashion as I'd like, without having to go back and "fix" the structure of the numpy array after the initial read?
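
    genfromtxt accepts any iterable of lines, so the quotes can be stripped before its type inference ever sees them. A sketch of that approach (it assumes, as in the sample, that no field contains a comma):

        import numpy as np

        def unquoted(path):
            with open(path) as f:
                for line in f:
                    yield line.replace('"', '')   # safe here: fields never contain commas

        data = np.genfromtxt(unquoted("data.csv"), dtype=None, delimiter=',',
                             names=True, autostrip=True)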

    Read the article

  • Preserving hierarchy when converting .csv file to xml or json

    - by Simon Levinson
    Hello, I have a question concerning translating data from a CSV into XML or JSON where it is essential to preserve the hierarchy of the data. For example, if I have CSV data like this:

        type,brand,country,quantity
        apple,golden_delicious,english,1
        apple,golden_delicious,french,2
        apple,cox,,4
        apple,braeburn,,1
        banana,,carribean,6
        banana,,central_america,7
        clememtine,,,3

    What I want is to preserve the hierarchy in the XML so that I get something like:

        <fruit>
          <type name="apple">
            <brand name="golden_delicious">
              <country name="english" quantity="1"/>
              <country name="french" quantity="2"/>
            </brand>
            <brand name="cox" quantity="4"/>
            <brand name="braeburn" quantity="1"/>
          </type>
          <type name="banana">
            <country name="carribean" quantity="6"/>
            <country name="central_america" quantity="7"/>
          </type>
          <type name="clementine" quantity="3"/>
        </fruit>

    Is it best to try to use JAXP, or to convert the above into a table simply of parent and child and then write the data to an array of strings for processing, like this:

        parent,child
        fruit,apple
        apple,golden_delicious
        golden_delicious,english
        golden_delicious,french
        english,1
        french,2
        apple,cox
        cox,4
        apple,braeburn
        braeburn,1

    And so on. Or is there a better way? Thanks, Simon Levinson
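
    The parent/child table isn't needed if the hierarchy is walked directly while reading: for each row, descend through the non-empty levels in order, reusing nodes that already exist so siblings group together. A sketch with Python's ElementTree (assumes the header row type,brand,country,quantity from the sample):

        import csv
        from xml.etree.ElementTree import Element, SubElement, tostring

        root = Element("fruit")
        with open("fruit.csv", newline="") as f:
            for row in csv.DictReader(f):
                parent = root
                for level in ("type", "brand", "country"):   # the hierarchy, in order
                    if row[level]:
                        # reuse an existing node so rows sharing a prefix nest together
                        node = parent.find(f"{level}[@name='{row[level]}']")
                        if node is None:
                            node = SubElement(parent, level, name=row[level])
                        parent = node
                parent.set("quantity", row["quantity"])

        print(tostring(root, encoding="unicode"))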

    Read the article

  • Read alphanumeric characters from csv file in C#

    - by Prasad
    I am using the following code to read my CSV file:

        public DataTable ParseCSV(string path)
        {
            if (!File.Exists(path))
                return null;

            string full = Path.GetFullPath(path);
            string file = Path.GetFileName(full);
            string dir = Path.GetDirectoryName(full);

            //create the "database" connection string
            string connString = "Provider=Microsoft.ACE.OLEDB.12.0;"
                + "Data Source=\"" + dir + "\\\";"
                + "Extended Properties=\"text;HDR=Yes;FMT=Delimited;IMEX=1\"";

            //create the database query
            string query = "SELECT * FROM " + file;

            //create a DataTable to hold the query results
            DataTable dTable = new DataTable();

            //create an OleDbDataAdapter to execute the query
            OleDbDataAdapter dAdapter = new OleDbDataAdapter(query, connString);

            //fill the DataTable
            dAdapter.Fill(dTable);
            dAdapter.Dispose();
            return dTable;
        }

    But the above doesn't read the alphanumeric values from the CSV file; it reads only either numeric or alpha values. What fix do I need to make to read the alphanumeric values? Please suggest.
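
    The text OLE DB driver guesses each column's type by sampling the first rows (the TypeGuessRows setting), so a column that mixes numbers and text loses whichever kind is in the minority. Driver-side, a schema.ini beside the file declaring the column as Text is the usual fix; alternatively, a plain CSV parser avoids inference altogether, since everything arrives as a string. The latter idea sketched in Python:

        import csv

        with open("data.csv", newline="") as f:
            reader = csv.reader(f)
            header = next(reader)
            rows = [dict(zip(header, r)) for r in reader]   # every value stays a string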

    Read the article

  • Retrieve Records from paypal csv file

    - by Pankaj Khurana
    Hi, I want to retrieve all the records from a PayPal CSV file where Type='Payment Processed' and store them in a database table. Each record should be available in the following format: 'Heading':'Value'. The format of the csv is:

        "Transaction ID","Reference Transaction ID","Date","Type","Subject","Item Number","Item Name","Invoice ID","Name","Email","Shipping Name","Shipping Address Line 1","Shipping Address Line 2","Shipping Address City","Shipping State/Province","Shipping Zip/Postal Code","Shipping Address Country","Shipping Method","Address Status","Contact Phone Number","Gross Amount","Receipt ID","Custom Field","Option 1 Name","Option 1 Value","Option 2 Name","Option 2 Value","Note","Auction Site","Auction User ID","Item URL","Auction Closing Date","Insurance Amount","Currency","Fees","Net Amount","Shipping & Handling Amount","Sales Tax Amount","To Email","Time","Time Zone"
        "1T","",5/5/2010 2:10:44 PM,"Payment Processed","CFP Self Study Kit","1","CFP Self Study Kit","","User1","[email protected]","","","","","","","","","N","","68.18","R1","","","","","","","","","",,"","USD","-2.62","65.56","0","0","[email protected]","01:40","Asia/Calcutta"
        "2T","",5/19/2010 4:04:08 PM,"Payment Processed","CFP Self Study Kit","1","CFP Self Study Kit","","User2","[email protected]","","","","","","","","","N","","68.18","R2","","","","","","","","","",,"","USD","-2.62","65.56","0","0","[email protected]","03:34","Asia/Calcutta"
        "3T","1RT",5/19/2010 5:28:45 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","17492.6","","","","","","","","","","",,"","INR","0","17492.6","0","0","","04:58","Asia/Calcutta"
        "4T","2RT",5/19/2010 5:28:45 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","-393.36","","","","","","","","","","",,"","USD","0","-393.36","0","0","","04:58","Asia/Calcutta"
        "5T","",5/19/2010 5:28:45 PM,"Transfer to Bank Initiated","P1006","","P1006",""," ","","","","","","","","","","N","","-17492.6","","","","","","","","","","",,"","INR","0","-17492.6","0","0","","04:58","Asia/Calcutta"
        "6T","",5/20/2010 5:38:02 PM,"Transfer to Bank Completed","P1006","","P1006",""," ","","","","","","","","","","N","","-17492.6","","","","","","","","","","",,"","INR","0","-17492.6","0","0","","05:08","Asia/Calcutta"
        "7T","",5/21/2010 12:32:37 PM,"Payment Processed","FP - LVC Plus","","FP - LVC Plus","","User3","[email protected]","User3","NEW DELHI","BEHIND KARNATAKA BANK LD","SOUTH","NEW DELHI","110023","IN","","N","","283.96","","","","","","","","","","",,"","USD","-9.95","274.01","0","0","[email protected]","00:02","Asia/Calcutta"
        "8T","",5/25/2010 4:40:48 PM,"Transfer to Bank Initiated","P1006","","P1006",""," ","","","","","","","","","","N","","-12569.85","","","","","","","","","","",,"","INR","0","-12569.85","0","0","","04:10","Asia/Calcutta"
        "9T","3RT",5/25/2010 4:40:48 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","-274.01","","","","","","","","","","",,"","USD","0","-274.01","0","0","","04:10","Asia/Calcutta"
        "10T","4RT",5/25/2010 4:40:48 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","12569.85","","","","","","","","","","",,"","INR","0","12569.85","0","0","","04:10","Asia/Calcutta"
        "11T","",5/26/2010 4:57:39 PM,"Transfer to Bank Completed","P1006","","P1006",""," ","","","","","","","","","","N","","-12569.85","","","","","","","","","","",,"","INR","0","-12569.85","0","0","","04:27","Asia/Calcutta"
        "Total","-247.05 USD","-15.19","-262.24"
        "Total","0.00 INR","0.00","0.00"

    Please help me with this. Thanks

    Read the article

  • Delete Duplicate records from large csv file C# .Net

    - by Sandhurst
    I have created a solution which reads a large CSV file, currently 20-30 MB in size. I have tried to delete the duplicate rows based on certain column values that the user chooses at run time, using the usual technique of finding duplicate rows, but it's so slow that it seems the program is not working at all. What other technique can be applied to remove duplicate records from a CSV file? Here's the code; definitely I am doing something wrong.

        DataTable dtCSV = ReadCsv(file, columns); //columns is a list of string List column
        DataTable dt = RemoveDuplicateRecords(dtCSV, columns);

        private DataTable RemoveDuplicateRecords(DataTable dtCSV, List<string> columns)
        {
            DataView dv = dtCSV.DefaultView;
            string RowFilter = string.Empty;
            if (dt == null)
                dt = dv.ToTable().Clone();
            DataRow row = dtCSV.Rows[0];
            foreach (DataRow row in dtCSV.Rows)
            {
                try
                {
                    RowFilter = string.Empty;
                    foreach (string column in columns)
                    {
                        string col = column;
                        RowFilter += "[" + col + "]" + "='" + row[col].ToString().Replace("'", "''") + "' and ";
                    }
                    RowFilter = RowFilter.Substring(0, RowFilter.Length - 4);
                    dv.RowFilter = RowFilter;
                    DataRow dr = dt.NewRow();
                    bool result = RowExists(dt, RowFilter);
                    if (!result)
                    {
                        dr.ItemArray = dv.ToTable().Rows[0].ItemArray;
                        dt.Rows.Add(dr);
                    }
                }
                catch (Exception ex)
                {
                }
            }
            return dt;
        }
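
    Applying a RowFilter per input row rescans the table every time, which makes the whole pass quadratic; hashing the user-chosen key columns turns it into a single pass (in C#, a HashSet<string> of the joined key values plays the role of the set below). The idea sketched in Python:

        import csv

        key_cols = ["Name", "Email"]      # hypothetical: whatever columns the user picked
        seen = set()
        with open("big.csv", newline="") as src, open("deduped.csv", "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
            writer.writeheader()
            for row in reader:
                key = tuple(row[c] for c in key_cols)
                if key not in seen:       # one hash lookup replaces a full table scan
                    seen.add(key)
                    writer.writerow(row)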

    Read the article
