Search Results

Search found 1639 results on 66 pages for 'csv'.

  • How to get database table header information into a CSV file

    - by Rachel
    I am trying to connect to the database, get the current state of a table, and write that information to a CSV file. With the code below I am able to get the row data into the CSV file, but I am not able to get the header information from the database table into it. So my question is: how can I get database table header information into a CSV file?

        $config['database'] = 'sakila';
        $config['host'] = 'localhost';
        $config['username'] = 'root';
        $config['password'] = '';
        $d = new PDO('mysql:dbname='.$config['database'].';host='.$config['host'],
                     $config['username'], $config['password']);
        $query = "SELECT * FROM actor";
        $stmt = $d->prepare($query);
        // Execute the statement
        $stmt->execute();
        $data = fopen('file.csv', 'w');
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            // Export every row to a file
            fputcsv($data, $row);
        }

    By header information I mean: for a table like

        Vehicle  Build  Model
        car      2009   Toyota
        jeep     2007   Mahindra

    the header information would be Vehicle, Build, Model. Any guidance would be highly appreciated.
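
    A minimal sketch of one possible approach, assuming the same table and file names as above: fetch the first row as an associative array, write its keys as the header row via array_keys(), then write that row and the rest.

        $stmt->execute();
        $data = fopen('file.csv', 'w');
        $first = $stmt->fetch(PDO::FETCH_ASSOC);
        if ($first !== false) {
            // The column names of the result set become the CSV header row
            fputcsv($data, array_keys($first));
            fputcsv($data, $first);
            while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
                fputcsv($data, $row);
            }
        }
        fclose($data);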

    Read the article

  • Can a .csv file be used as a data source in Visual Studio 2008?

    - by Kevin
    I'm pretty new to C# and Visual Studio. I'm writing a small program that will read a .csv file and then write the records read to a SQL Server database table. I can manually parse the .csv file, but I was wondering if it is possible to somehow "describe" the .csv file to Visual Studio so that I can use it as a data source? I should mention that the first two lines in the .csv file contain header information and the following lines are the actual comma-delimited data. Also, I should mention that this program is a stand-alone console program with no user interface.
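
    The Jet/ACE text driver can indeed treat a folder of CSV files as a data source (optionally described by a schema.ini file), but for a small console program a hedged, simpler sketch is the TextFieldParser class from Microsoft.VisualBasic.FileIO, which C# can use by referencing the Microsoft.VisualBasic assembly. The path and the two skipped header lines below are assumptions based on the question:

        using Microsoft.VisualBasic.FileIO; // reference Microsoft.VisualBasic.dll

        using (var parser = new TextFieldParser(@"C:\data\input.csv"))
        {
            parser.TextFieldType = FieldType.Delimited;
            parser.SetDelimiters(",");
            parser.ReadLine(); // skip the two header lines
            parser.ReadLine();
            while (!parser.EndOfData)
            {
                string[] fields = parser.ReadFields();
                // insert fields into the SQL Server table here (e.g. SqlCommand or SqlBulkCopy)
            }
        }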

    Read the article

  • How to deal with extra space characters while reading a CSV file?

    - by Ravi Dutt
    I am reading a CSV file with an open-source CSV API, as shown below:

        CSVReader reader = new CSVReader(new FileReader(filePath), '\n');
        String[] read;
        String[] values;
        if ((read = reader.readNext()) != null) {
            values = read[0].split(" (?=([^\"]*\"[^\"]*\")*[^\"]*$)", -1);
        }
        // code ends here

    I read the CSV file line by line and split each line on the delimiter. After splitting, every value I get back contains an extra space character after each character of the string. For example, a value stored in the file as "ABC" comes back as " A B C ". I used replaceAll("\\s+", "") on each value; even that is not working. Thank you in advance.
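
    A hedged guess at the cause: a space after every single character usually means the file is UTF-16 (every other byte is a zero byte, rendered as a space or actually NUL, which \s does not match - which would explain why stripping whitespace doesn't help) but is being read with an 8-bit charset via FileReader. A sketch, assuming opencsv and a UTF-16 input file:

        // older opencsv uses au.com.bytecode.opencsv.CSVReader; newer releases use com.opencsv.CSVReader
        import au.com.bytecode.opencsv.CSVReader;
        import java.io.FileInputStream;
        import java.io.InputStreamReader;

        CSVReader reader = new CSVReader(
                new InputStreamReader(new FileInputStream(filePath), "UTF-16"));
        String[] row;
        while ((row = reader.readNext()) != null) {
            // row already contains the comma-separated fields, unquoted
        }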

    Read the article

  • JMeter CSV Data Set is corrupting Japanese strings stored as proper UTF-8; I get question marks instead

    - by Mark Bennett
    I read in search terms from a simple text file to send to a search engine. It works fine in English, but gives me ???? for any Japanese text. Text with mixed English and Japanese does show the English text, so I know it's reading the file.

    What I'm seeing:

        Input text:  Snow Leopard ???????????????
        Turns into:  Snow Leopard ???????????????

    This is in the POST field of an HTTP request. If I set JMeter to encode the data, it just puts in the percent sequences for question marks. Interesting note: in the example above there are 15 Japanese characters and then 15 question marks, so at some point the text is being treated as full characters, not just bytes.

    About the data: the CSV file is very simple in structure. There's only one field / one column, which I name TERM and later use as ${TERM}. I don't really need full CSV, because it's only one string per line; there are no commas or quotes. When I run the Unix "file" command on the file, it says UTF-8 text. I've also verified it in command-line and graphical mode on two machines.

    JMeter CSV Data Set config:

        Filename:          japanese-searches.csv
        File encoding:     UTF-8 (also tried without)
        Variable names:    TERM
        Delimiter:         ,
        Allow quoted data: False (I also tried True; different, but still wrong)
        Recycle at EOF:    True
        Stop at EOF:       False
        Sharing mode:      All threads

    A few things I've tried: allowing quoted data (it changed to other strange characters); -Dfile.encoding=UTF-8; encoding the POST, but it just turned into a bunch of %nn for question marks. And I'm not sure how to debug just after each line of the CSV is read in. I think it's corrupted right away, but I'm not sure. If it's only mangled when I reference it, then instead of ${TERM} perhaps there's some other "to bytes" function call; I'll start checking into that. I haven't done anything with the JMeter functions yet.
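
    A hedged suggestion: question marks appearing one-for-one per character usually mean a Unicode-to-single-byte conversion inside the JVM rather than a bad file. Besides the CSV Data Set's "File encoding" field, two settings worth trying (the property name comes from stock JMeter distributions; treat it as an assumption for your version): set "Content encoding" to UTF-8 on the HTTP Request sampler, and force the default encoding in jmeter.properties:

        # jmeter.properties (or pass with -J on the command line)
        sampleresult.default.encoding=UTF-8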

    Read the article

  • Uploading a CSV file to SQL Server - identity problem

    - by Doozer1979
    Given a column structure in a CSV file of:

        First_Name, Last_Name, Date_Of_Birth

    and a SQL Server table with a structure of:

        ID (PK) | First_Name | Last_Name | Date_Of_Birth

    where field ID is an identity with an auto-increment of 1, how do I arrange it so that SQL Server does not attempt to insert the First_Name column from the CSV file into the ID field? For info, the CSV is loaded into a DataTable and then copied to SQL Server using SqlBulkCopy. Should I be modifying the CSV file before the import to add the ID column? (The destination table is truncated prior to import, so there is no need to worry about duplicate key values.) Or perhaps should I add an ID column to the DataTable? Or is there a setting in SQL Server that I may have missed?
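
    A minimal sketch of one common fix, assuming the DataTable columns carry the same names as the CSV header: give SqlBulkCopy explicit column mappings, so the identity column is never targeted and SQL Server generates its values. The connection string and table name are placeholders.

        using System.Data.SqlClient;

        using (var bulkCopy = new SqlBulkCopy(connectionString))
        {
            bulkCopy.DestinationTableName = "dbo.People";
            // Map only the CSV-backed columns; ID is omitted, so SQL Server
            // assigns the identity value on insert.
            bulkCopy.ColumnMappings.Add("First_Name", "First_Name");
            bulkCopy.ColumnMappings.Add("Last_Name", "Last_Name");
            bulkCopy.ColumnMappings.Add("Date_Of_Birth", "Date_Of_Birth");
            bulkCopy.WriteToServer(dataTable);
        }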

    Read the article

  • In DBD::CSV what does a /r in the f_ext attribute mean?

    - by sid_com
    Why does only the second example append the extension to the filename, and what is the "/r" in ".csv/r" for?

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.012;
        use DBI;

        my $dbh = DBI->connect( "DBI:CSV:f_dir=/home/mm",
            { RaiseError => 1, f_ext => ".csv/r" } );
        my $table = 'new_1';
        $dbh->do( "DROP TABLE IF EXISTS $table" );
        $dbh->do( "CREATE TABLE $table ( id INT, name CHAR, city CHAR )" );
        my $sth_new = $dbh->prepare( "INSERT INTO $table ( id, name, city ) VALUES ( ?, ?, ? )" );
        $sth_new->execute( 1, 'Smith', 'Greenville' );
        $dbh->disconnect();

        # --------------------------------------------------------

        $dbh = DBI->connect( "DBI:CSV:f_dir=/home/mm", { RaiseError => 1 } );
        $dbh->{f_ext} = ".csv/r";
        $table = 'new_2';
        $dbh->do( "DROP TABLE IF EXISTS $table" );
        $dbh->do( "CREATE TABLE $table ( id INT, name CHAR, city CHAR )" );
        $sth_new = $dbh->prepare( "INSERT INTO $table ( id, name, city ) VALUES ( ?, ?, ? )" );
        $sth_new->execute( 1, 'Smith', 'Greenville' );
        $dbh->disconnect();
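
    A hedged reading of the DBD::CSV / DBD::File documentation: the /r suffix marks the extension as required, meaning only files ending in .csv are treated as tables. As for the difference between the two examples: the first likely never sets f_ext at all, because the attribute hash is passed where DBI->connect() expects the username; attributes belong in the fourth argument, or in the DSN itself. A sketch:

        # f_ext in the DSN ...
        my $dbh = DBI->connect( "DBI:CSV:f_dir=/home/mm;f_ext=.csv/r",
                                undef, undef, { RaiseError => 1 } );

        # ... or in the attribute hash, which is the FOURTH argument:
        my $dbh2 = DBI->connect( "DBI:CSV:f_dir=/home/mm", undef, undef,
                                 { RaiseError => 1, f_ext => ".csv/r" } );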

    Read the article

  • Piping Objects From Format-Table to Export-CSV Has Unexpected Results

    - by makerofthings7
    I'm running this command to get a list of all ActiveSync users and export them to C:\activesync.csv:

        Get-ActiveSyncDevice | Get-ActiveSyncDeviceStatistics |
            sort-object status, devicetype, lastsyncattempttime |
            ft FirstSyncTime, LastPolicyUpdateTime, LastSyncAttemptTime, LastSuccessSync,
                DeviceType, DeviceID, DeviceAccessState, Identity -a |
            Export-Csv c:\activesync.csv

    The problem is that the CSV data doesn't match the console output I see when I omit the trailing | Export-Csv c:\activesync.csv; the data and columns in the file don't match what is displayed. Is this a bug in PowerShell?
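
    A hedged explanation: Format-Table (ft) emits formatting instructions for the console rather than the original objects, so Export-Csv ends up serializing the properties of those formatting objects. A minimal sketch of the usual fix is to select the properties instead of formatting them:

        Get-ActiveSyncDevice | Get-ActiveSyncDeviceStatistics |
            Sort-Object Status, DeviceType, LastSyncAttemptTime |
            Select-Object FirstSyncTime, LastPolicyUpdateTime, LastSyncAttemptTime,
                LastSuccessSync, DeviceType, DeviceID, DeviceAccessState, Identity |
            Export-Csv C:\activesync.csv -NoTypeInformation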

    Read the article

  • How to find out whether a file is a CSV file?

    - by Mithun
    I have a scenario wherein the user uploads a file to the system. The only file type that the system understands is CSV, but the user can upload any type of file, e.g. JPEG, DOC, HTML. I need to throw an exception if the user uploads anything other than a CSV file. Can anybody let me know how I can find out whether the uploaded file is a CSV file or not?
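
    There is no definitive test, since CSV is plain text with no file signature; a hedged heuristic (sketched in Python here, as the question doesn't name a language) is to try parsing a sample and require that every row produce the same number of fields:

        import csv

        def looks_like_csv(path, sample_size=4096):
            """Heuristic only: CSV has no magic bytes, so this can be fooled."""
            try:
                with open(path, newline='') as f:
                    csv.Sniffer().sniff(f.read(sample_size), delimiters=',')  # raises csv.Error
                    f.seek(0)
                    rows = list(csv.reader(f))
                # a binary file usually fails decoding; a non-CSV text file
                # usually yields a ragged field count
                return bool(rows) and len({len(r) for r in rows}) == 1
            except (csv.Error, UnicodeDecodeError):
                return False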

    Read the article

  • Why do we keep using CSV?

    - by Stephen
    Why do we keep using CSV? I recently made a shift to working in the health domain, and despite the wonderful work on data-transfer standards, all data transfer is in CSV, both for reporting to external organisations and for data migrations when implementing new systems. Unfortunately the use of CSV is the cause of the endless repetition of the same stupid errors, with the same waste of developer time (bad escaping, failing to handle null fields, etc.). I know we can do better, and anything between JSON and XML (depending on the instance) would be fine. (Most of the time this is data going from one MS SQL Server 2005 database to another!) I feel as if each time I see this happening I am literally watching one developer waste another's time. So why do we keep shafting each other? When will we stop?

    Read the article

  • Downloading table data as a CSV file (export) ~ APEX for DBAs

    - by Yuichi.Hayashi
    (Translated from the Japanese; much of the original post is garbled, so only the recoverable outline is kept.) Oracle Application Express (Oracle APEX) makes it easy to build web applications on top of Oracle Database, and it is also handy for everyday DBA work. This post shows how to download (export) the data in a table from Oracle Database as a CSV file: 1. log in to Oracle APEX and open the relevant page (a report region defined by a SQL query); 2. display the target table (the example uses the EMP table); 3. open the report; 4. use the report's download link to save the contents as a CSV file. For more on Oracle APEX, see the "APEX for DBAs (Oracle Application Express)" series.

    Read the article

  • CSV is actually .... Semicolon Separated Values ... (Excel export on AZERTY)

    - by Bugz R us
    I'm a bit confused here. When I use Excel 2003 to export a sheet to CSV, it actually uses semicolons:

        Col1;Col2;Col3
        shfdh;dfhdsfhd;fdhsdfh
        dgsgsd;hdfhd;hdsfhdfsh

    Now when I read the CSV using the Microsoft drivers, it expects commas and sees the list as one big column. I suspect Excel is exporting with semicolons because I have an AZERTY keyboard (i.e. a locale whose list separator is the semicolon). However, doesn't the CSV reader then also have to take the different delimiter into account? How can I know the appropriate delimiter, and/or read the CSV properly?

        public static DataSet ReadCsv(string fileName)
        {
            DataSet ds = new DataSet();
            string pathName = System.IO.Path.GetDirectoryName(fileName);
            string file = System.IO.Path.GetFileName(fileName);
            OleDbConnection excelConnection = new OleDbConnection(
                @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + pathName +
                ";Extended Properties=Text;");
            try
            {
                OleDbCommand excelCommand = new OleDbCommand(@"SELECT * FROM " + file, excelConnection);
                OleDbDataAdapter excelAdapter = new OleDbDataAdapter(excelCommand);
                excelConnection.Open();
                excelAdapter.Fill(ds);
            }
            finally
            {
                if (excelConnection.State != ConnectionState.Closed)
                    excelConnection.Close();
            }
            return ds;
        }
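
    A hedged sketch of one way to steer the Jet text driver: a schema.ini file placed in the same folder as the CSV can declare the delimiter per file (the file name and header setting below are assumptions):

        [file.csv]
        Format=Delimited(;)
        ColNameHeader=True

    Alternatively, since Excel uses the locale's list separator when saving CSV, sniffing the first line for ';' versus ',' before opening the connection is another pragmatic option.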

    Read the article

  • How can I use io.StringIO() with the csv module?

    - by Tim Pietzcker
    I tried to backport a Python 3 program to 2.7, and I'm stuck with a strange problem:

        >>> import io
        >>> import csv
        >>> output = io.StringIO()
        >>> output.write("Hello!")    # Fail: io.StringIO expects Unicode
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unicode argument expected, got 'str'
        >>> output.write(u"Hello!")   # This works as expected.
        6L
        >>> writer = csv.writer(output)       # Now let's try this with the csv module:
        >>> csvdata = [u"Hello", u"Goodbye"]  # Look ma, all Unicode! (?)
        >>> writer.writerow(csvdata)          # Sadly, no.
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unicode argument expected, got 'str'

    According to the docs, io.StringIO() returns an in-memory stream for Unicode text, and it works correctly when I feed it a Unicode string manually. Why does it fail in conjunction with the csv module, even if all the strings being written are Unicode strings? Where does the str come from that causes the exception? (I do know that I can use StringIO.StringIO() instead, but I'm wondering what's wrong with io.StringIO() in this scenario.)
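
    A hedged explanation: Python 2's csv module is byte-oriented; csv.writer formats the row and writes it as a str (byte string) regardless of whether the inputs were unicode, and that str is what io.StringIO rejects. A minimal sketch of a workaround, assuming UTF-8 encoded data is acceptable:

        import csv
        import io

        output = io.BytesIO()   # a byte stream, which is what Python 2's csv actually writes to
        writer = csv.writer(output)
        writer.writerow([u"Hello".encode("utf-8"), u"Goodbye".encode("utf-8")])
        text = output.getvalue().decode("utf-8")   # back to unicode when needed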

    Read the article

  • Which type of file is easiest and most efficient to parse (HTML, PDF, CSV, text)?

    - by Harikrishna
    I want to parse HTML, PDF, CSV and plain-text files. Which of these formats is easiest and most efficient to parse? I am asking because I want to handle all of them through common parsing code if possible. For example, if HTML is the easiest to parse, I would write the parsing code for HTML and try to convert PDF, CSV and text files to HTML, so that the one HTML parser works for all four. Likewise, if PDF is the easiest to parse, I would convert HTML, CSV and text files to PDF and write only a PDF parser. So my questions are: (1) Which of these file types (PDF, CSV, HTML, text) is easiest and most efficient to parse? (2) Is converting the files (PDF, text, HTML, CSV) to each other feasible, e.g. PDF to HTML, text to HTML and CSV to HTML if HTML parsing turns out to be easiest?

    Read the article

  • How to edit a CSV file without changing or reformatting values (ideally in Neo/Open-Office)?

    - by Scott Saunders
    I often need to edit CSV files that will later be imported into databases: reordering columns, changing values, deleting lines, etc. I use NeoOffice for this now - it's basically OpenOffice with some Mac UI tweaks. Often, though, NeoOffice tries to be "helpful" and reformats fields it thinks are dates, rounds numbers to some number of decimals, etc. This breaks the file import and/or changes important data values. How can I prevent this from happening? I need to edit the fields exactly as they would appear in a text editor, with absolutely no changes to the data in the fields.

    Read the article

  • How can I convert an ordinary text file to a .csv file, and import it to Excel?

    - by Xavierjazz
    I have a group of names and addresses that I would like to import into Outlook. At the moment I have imported them into Excel, but each name and address sits in one long entry, with the values already separated by commas. How can I get Excel to split each value into a separate cell? Edit: I had already tried taking a text file and saving it as a .csv file; however, all contacts load into a single cell. I am using Excel 2003. Thanks.

    Read the article

  • How to plot 3D graphs in Excel from CSV data?

    - by Primx
    I have data formatted like this in a CSV file:

        a, 1, 4, 6.0
        a, 2, 42, 16.0
        a, 5, 14, 69.3
        a, 11, 4, 7.0
        b, 1, 45, 6.0
        b, 2, 45, 1.9
        b, 9, 2, 4.4
        b, 11, 4, 7.9

    Lines whose first parameter is a form one set of data, and those whose first parameter is b represent another set. My aim is to plot two lines on the same graph: one through the points (1, 4, 6.0), (2, 42, 16.0), (5, 14, 69.3), (11, 4, 7.0), and the other through (1, 45, 6.0), (2, 45, 1.9), (9, 2, 4.4), (11, 4, 7.9). I am able to import the data directly into MS Excel, but am not sure how to plot it. How can I plot this data?

    Read the article

  • What to do with CSV after export on the iPhone?

    - by alku83
    One of the requested features for my apps is an export feature. Most of the time the data is table-like in nature. As an example, users might enter every day what types of food they ate that day, and how many portions of each food type. As the data is table-like, I figure the most useful export format would be CSV, which can then be read in spreadsheet software. I'm confident I can get the data into CSV format without too much trouble, and found this post, which should help me: http://stackoverflow.com/questions/1512883/how-to-convert-data-to-csv-or-html-format What I'm wondering about is what I can do with the file once it has been created. Can I attach it to an email? How else can I make it available to the user so that it has some use? Alternatively, am I going about this the wrong way, and would there be a better way to offer an export function?
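
    A hedged sketch of one option: since iPhone OS 3.0, the MessageUI framework's MFMailComposeViewController can present an in-app mail sheet with the CSV attached (shown in Swift; the subject and file name are illustrative):

        import MessageUI
        import UIKit

        func emailCSV(_ csvData: Data,
                      from host: UIViewController & MFMailComposeViewControllerDelegate) {
            guard MFMailComposeViewController.canSendMail() else { return } // no mail account set up
            let mail = MFMailComposeViewController()
            mail.mailComposeDelegate = host
            mail.setSubject("Food diary export")
            mail.addAttachmentData(csvData, mimeType: "text/csv", fileName: "export.csv")
            host.present(mail, animated: true)
        }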

    Read the article

  • Why would a CSV attachment appear as text in the body?

    - by David
    Hi all. I've just implemented some code that emails a bunch of our clients with a CSV file attachment. Some (not many) have got back to us complaining that they don't get an attachment at all - just the CSV text inside the body of the email. Most, however, are fine. I suspect that different mail clients are treating the attachment differently, but I don't have enough info yet to be sure. I'm using .NET's MailMessage class with the Attachment.CreateAttachmentFromString() method. The MIME type I'm specifying for the attachment is text/csv. Anyone have any idea what the heck is going on? Ta muchly, David
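
    A hedged sketch of one thing worth checking: give the attachment an explicit file name and a Content-Disposition of "attachment", so clients that display text/* parts inline still treat it as a file. The overload and types below are from System.Net.Mail / System.Net.Mime; the values are illustrative:

        using System.Net.Mail;
        using System.Net.Mime;
        using System.Text;

        Attachment attachment = Attachment.CreateAttachmentFromString(
            csvText, "report.csv", Encoding.UTF8, "text/csv");
        attachment.ContentDisposition.DispositionType = DispositionTypeNames.Attachment;
        message.Attachments.Add(attachment);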

    Read the article

  • How to use PHP fgetcsv to create an array for each piece of data in csv file?

    - by Olivia
    I'm trying to import data from a CSV file into some HTML code to use it in a graph I have already coded. I'm trying to use PHP and fgetcsv to create arrays for each separate piece of data, and then use PHP to put it into the HTML code. I know how to open the CSV file and print it using PHP, and how to print each separate row, but not each piece of data separated by a comma. Is there a way to do this? If it helps, this is the CSV data I am trying to import:

        May 10,72,12,60
        May 11,86,24,62
        May 12,67,32,34
        May 13,87,12,75
        May 14,112,23,89
        May 17,69,21,48
        May 18,98,14,84
        May 19,115,18,97
        May 20,101,13,88
        May 21,107,32,75

    I hope that makes sense.
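
    A minimal sketch, assuming the data above is saved as data.csv: fgetcsv() already splits each line on commas, so each $row is an array of the four fields, which can then be collected per column.

        <?php
        $dates = $a = $b = $c = array();
        if (($fh = fopen('data.csv', 'r')) !== false) {
            while (($row = fgetcsv($fh)) !== false) {
                // $row is e.g. array('May 10', '72', '12', '60')
                $dates[] = $row[0];
                $a[] = (int) $row[1];
                $b[] = (int) $row[2];
                $c[] = (int) $row[3];
            }
            fclose($fh);
        }
        // e.g. hand the columns to the graph code:
        echo json_encode(array('dates' => $dates, 'a' => $a, 'b' => $b, 'c' => $c));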

    Read the article

  • Is there a way to include commas in CSV columns without breaking the formatting?

    - by editor
    I've got a two-column CSV with a name and a number. Some people's names contain commas, for example "Joe Blow, CFA." This comma breaks the CSV format, since it's interpreted as a new column. I've read up, and the most common prescription seems to be replacing that character, or replacing the delimiter, with a new value (e.g. "this|that|the, other"). I'd really like to keep the comma separator (I know Excel supports other delimiters, but other interpreters may not). I'd also like to keep the comma in the name, as "Joe Blow| CFA" looks pretty silly. Is there a way to include commas in CSV columns without breaking the formatting, for example by escaping them?
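
    A hedged note: standard CSV (as codified in RFC 4180, and as Excel reads it) already supports this without any replacement scheme - a field containing a comma is wrapped in double quotes, and a literal double quote inside such a field is doubled. A small illustrative sample:

        name,number
        "Blow, Joe, CFA",42
        "Jane ""JJ"" Doe, PhD",7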

    Read the article

  • Unix sort keys cause performance problems

    - by KenFar
    My data: it's a 71 MB file with 1.5 million rows. It has 6 fields, and all six fields combine to form a unique key - so that's what I need to sort on.

    Sort statement:

        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o output.csv input.csv

    The problem: if I sort without keys, it takes 30 seconds; if I sort with keys, it takes 660 seconds. I need to sort with keys to keep this generic and useful for other files that have non-key fields as well. The 30-second timing is fine, but the 660 is a killer. More details using unix time:

        sort input.csv -o output.csv                                               = 28 seconds
        sort -t ',' -k1 input.csv -o output.csv                                    = 28 seconds
        sort -t ',' -k1,1 input.csv -o output.csv                                  = 64 seconds
        sort -t ',' -k1,1 -k2,2 input.csv -o output.csv                            = 194 seconds
        sort -t ',' -k1,1 -k2,2 -k3,3 input.csv -o output.csv                      = 328 seconds
        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 input.csv -o output.csv                = 483 seconds
        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 input.csv -o output.csv          = 561 seconds
        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 input.csv -o output.csv    = 660 seconds

    I could theoretically move the temp directory to SSD, and/or split the file into 4 parts, sort them separately (in parallel), then merge the results, etc. But I'm hoping for something simpler, since it looks like sort is just picking a bad algorithm. Any suggestions?

    Testing improvements using buffer-size: with 2 keys I got a 5% improvement with 8, 20 and 24 MB, and the best improvement of 8% with 16 MB, but 6% worse with 128 MB. With 6 keys I got a 5% improvement with 8, 20 and 24 MB, and the best improvement of 9% with 16 MB.

    Testing improvements using dictionary order (just 1 run each):

        sort -d --buffer-size=8M -t ',' -k1,1 -k2,2 input.csv -o output.csv   = 235 seconds (21% worse)
        sort -d --buffer-size=8M -t ',' -k1,1 -k2,2 input.csv -o output.csv   = 232 seconds (21% worse)

    Conclusion: it makes sense that this would slow the process down; not useful.

    Testing with a different file system on SSD: I can't do this on this server now.

    Testing with code to consolidate adjacent keys:

        def consolidate_keys(key_fields, key_types):
            """ Inputs:
                  - key_fields - a list of numbers in quotes: ['1','2','3']
                  - key_types  - a list of types of the key_fields: ['integer','string','integer']
                Outputs:
                  - key_fields - a consolidated list: ['1,2','3']
                  - key_types  - a list of types of the consolidated list: ['string','integer']
            """
            assert len(key_fields) == len(key_types)

            def get_min(val):
                vals = val.split(',')
                assert len(vals) <= 2
                return vals[0]

            def get_max(val):
                vals = val.split(',')
                assert len(vals) <= 2
                return vals[len(vals) - 1]

            i = 0
            while True:
                try:
                    if ((int(get_max(key_fields[i])) + 1) == int(key_fields[i + 1])
                            and key_types[i] == key_types[i + 1]):
                        key_fields[i] = '%s,%s' % (get_min(key_fields[i]), key_fields[i + 1])
                        key_fields.pop(i + 1)
                        key_types.pop(i + 1)
                        continue
                    i = i + 1
                except IndexError:
                    break   # last entry
            return key_fields, key_types

    While this code is just a work-around that only applies to cases in which I've got a contiguous set of keys, it speeds up the code by 95% in my worst-case scenario.
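
    A hedged suggestion worth benchmarking: with keyed sorts, GNU sort spends much of its time in locale-aware collation, and forcing the C locale makes comparisons plain byte comparisons, which is often dramatically faster. Assuming byte order is an acceptable key order for this data:

        LC_ALL=C sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o output.csv input.csv

    Newer GNU coreutils versions also accept --parallel=N to use several cores.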

    Read the article

  • Why do multiline cells in my CSV file appear with a question mark at the end of each line in Excel?

    - by Chris Lindsay
    I'm currently working on a project where we'd like to allow a user to export their data to CSV. Some of the data we present has multiple values for a single cell, so we use the standard CSV method of putting each value on its own line inside a quoted field:

        Column A, Column B, Column C
        Value A, "Value B1
        Value B2", Value C

    Most of the time this works fine, but some people report seeing a small question-mark-in-a-box character at the end of each line when they load the file in Excel. Why is this happening?
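
    A hedged guess at the cause: mismatched line endings. If records are separated by one convention (say \r\n) while the text inside the quoted multiline cells uses the other (a bare \n, or a stray \r), Excel keeps the leftover control character in the cell and renders it as the boxed question mark. A small sketch of one normalization, assuming the cells are built in Python and \r\n is reserved for record separators:

        def normalize_cell(value):
            # use bare \n inside cells; \r\n only between records
            return value.replace("\r\n", "\n").replace("\r", "\n")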

    Read the article
