Search Results

Search found 1639 results on 66 pages for 'csv'.

  • Add zip files from one archive to another using command line

    - by Curious2learn
    I have two zip archives. Say, set1 has 10 CSV files created using the Mac OS X 10.5.8 compress option, and set2 has 4 CSV files created the same way. I want to take the 4 files from archive set2 and add them to the list of files in archive set1. Is there a way I can do that? I tried the following in Terminal:

        zip set1.zip set2.zip

    This adds the whole archive set2.zip to set1.zip, i.e., set1.zip now contains: file1.csv, file2.csv, ..., file10.csv, set2.zip. What I want instead is: file1.csv, file2.csv, ..., file10.csv, file11.csv, ..., file14.csv, where set2.zip is the archive containing file11.csv, ..., file14.csv. Thanks.
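
    One route is to stay in Terminal: unzip set2.zip into a directory and then zip the extracted files into set1.zip. Scripted, Python's standard zipfile module can copy the entries between archives directly; a minimal sketch, assuming both archives sit in the current directory:

        import zipfile

        # Mode "a" appends to set1.zip, keeping its existing entries.
        with zipfile.ZipFile("set2.zip") as src, zipfile.ZipFile("set1.zip", "a") as dst:
            for name in src.namelist():
                dst.writestr(name, src.read(name))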

  • java database backup and restore

    - by jawath
    How do I back up and restore any kind of database from inside my Java application to flat files? Are there any tools or frameworks available to back up a database to a flat file like CSV, XML, or a secure encrypted file, and to restore from CSV or XML files back into the database? It should also be capable of table-wise backup and restore.
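
    For the flat-file side, a generic "dump every table to CSV" loop only needs the driver's metadata calls; in Java that would be JDBC's DatabaseMetaData, and encryption would be a separate layer. A minimal sketch of the idea in Python with the standard-library sqlite3 module:

        import csv
        import sqlite3

        conn = sqlite3.connect("app.db")

        # Table-wise backup: one CSV per table, header row included.
        tables = conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
        for (table,) in tables.fetchall():
            rows = conn.execute("SELECT * FROM " + table)  # table names can't be bound
            with open(table + ".csv", "w", newline="") as f:
                out = csv.writer(f)
                out.writerow([col[0] for col in rows.description])
                out.writerows(rows)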

  • Excel techniques for perfmon csv log file analysis

    - by Aszurom
    I have perfmon running against several servers, where I'm outputting to a .csv file data like CPU %time, memory bytes free, and hard disk I/O metrics like s/write and writes/s. The ones graphing the SQL servers are also collecting SQL stats, and the web servers are collecting .NET-relevant stuff. I am aware of PAL, and actually used it as a template for what data to capture based on server type. I just don't think the output it generates is detailed or flexible enough, but it does a pretty remarkable job of parsing logs and making graphs. I'm borderline incompetent with Excel, so I'm hoping to be directed to some knowledge of how to take a perfmon output .csv and mine it in Excel to produce some numbers that are meaningful to me as a sysadmin. I could of course just pick a range of data, assemble a graph out of that, and look for spikes and trends, but I'm convinced there is some technique to this that makes it more manageable than looking at a monstrous spreadsheet of numbers and trying to make graphs of it. Plus, it's pretty time-consuming and not something I can do as a "take a glance at the servers" sort of routine. I'm graphing CPU, disk use, network b/sec, etc. in Cacti as well, which is nice for seeing big trends. The problem is that it uses 5-minute averages, so a server could have an intermittent problem that washes out in a 5-minute average. What do you do with perfmon data that I could learn from?
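
    In Excel the usual tools for this are PivotTables and conditional formatting rather than hand-picked ranges, but the same mining can also be scripted. A hedged Python/pandas sketch, assuming a relog/perfmon-style CSV whose first column is the sample timestamp, that surfaces exactly the short spikes a 5-minute average hides:

        import pandas as pd

        # Assumption: first column is the timestamp, remaining columns are counters.
        df = pd.read_csv("perfmon.csv", index_col=0, parse_dates=True)

        # The gap between the 5-minute max and the 5-minute mean exposes
        # intermittent spikes that the average alone washes out.
        gap = df.resample("5min").max() - df.resample("5min").mean()
        print(gap.max().sort_values(ascending=False).head(10))  # spikiest counters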

  • SqlBulkCopy From CSV to SQL Datatable

    - by Swapnil
    I'm using SQL Server 2005 and VB.NET 2005. I want to be able to import a very large Excel file into a SQL table called "XYZ". I've done this by doing the following:

        1. Save the Excel file as CSV (using the SaveAs xlCSV option).
        2. Build a DataTable "ABC" from the CSV (using an ODBC connection and a SELECT * FROM '*'.csv command).
        3. Copy the DataTable "ABC" into the database table "XYZ" (using SqlBulkCopy.WriteToServer()).

    It works fine without any error, but when I checked my database I found that the data type for some columns had been changed, and hence it didn't copy some of the records. Any help would be appreciated.
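
    A common culprit here is the ODBC text driver, which guesses each column's type from the first handful of rows, so later rows that don't fit get nulled or dropped; forcing every field to text (e.g., via a schema.ini file) and converting explicitly avoids the guessing. A sketch of that explicit-conversion step in Python rather than VB.NET, with hypothetical column names:

        import csv
        from decimal import Decimal, InvalidOperation

        def to_decimal(s):
            """Explicit per-column conversion instead of letting a driver guess."""
            try:
                return Decimal(s)
            except InvalidOperation:
                return None  # or log the bad row instead of silently losing it

        rows = []
        with open("xyz.csv", newline="") as f:
            for rec in csv.DictReader(f):       # every field arrives as a string
                rows.append({
                    "Account": rec["Account"],          # hypothetical columns
                    "Amount": to_decimal(rec["Amount"]),
                })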

  • Read only particular fields from CSV File in vb.net

    - by fireBand
    Hi, I have this code to read a CSV file. It reads each line, divides it by the delimiter ',', and stores the field values in the array strline(). How do I extract only the required fields from the CSV file? For example, if I have a CSV file like:

        Type,Group,No,Sequence No,Row No,Date
        0,Admin,3,345678,1,26052010
        1,Staff,5,78654,3,26052010

    I need only the values of the columns Group, Sequence No and Date. Thanks in advance for any ideas.

        Dim myStream As StreamReader = Nothing
        ' Hold the parsed data
        Dim strlines() As String
        Dim strline() As String
        Try
            myStream = File.OpenText(OpenFile.FileName)
            If (myStream IsNot Nothing) Then
                ' Hold the number of lines already read in a counter variable
                Dim placeholder As Integer = 0
                strlines = myStream.ReadToEnd().Split(Environment.NewLine)
                Do While placeholder < strlines.Length ' stop when no data exists on the next line
                    strline = strlines(placeholder).Split(",")
                    placeholder += 1
                Loop
            End If
        Catch ex As Exception
            LogErrorException(ex)
        Finally
            If (myStream IsNot Nothing) Then
                myStream.Close()
            End If
        End Try
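
    For picking named columns out of a delimited file, a dictionary-style reader keeps the code independent of column positions. The same idea sketched in Python (the question is VB.NET), assuming the header row shown above:

        import csv

        with open("input.csv", newline="") as f:
            for rec in csv.DictReader(f):   # maps each row to {header: value}
                print(rec["Group"], rec["Sequence No"], rec["Date"])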

  • Load data from CSV to mySQL database Java+hibernate+spring

    - by mona
    I am trying to load a CSV file into a MySQL database using Java + Hibernate + Spring. I am using the following query in the DAO to help me load into the database:

        entityManager.createQuery("LOAD DATA INFILE :fileName INTO TABLE test")
            .setParameter("fileName", "C:\\samples\\test\\abcd.csv")
            .executeUpdate();

    I got the idea to use this from http://dev.mysql.com/doc/refman/5.1/en/load-data.html and "how to import a csv file into a mysql from an hibernate+spring application?" But I am getting the error:

        java.lang.IllegalArgumentException: node to traverse cannot be null!

    Please help! Thanks
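
    LOAD DATA INFILE is MySQL-native SQL, not JPQL, so the HQL parser chokes on it ("node to traverse cannot be null"); with JPA the native route is entityManager.createNativeQuery rather than createQuery. For illustration, the statement issued as plain SQL from Python with mysql-connector (connection details are placeholders):

        import mysql.connector

        # Assumption: LOCAL INFILE is enabled on both server and client.
        conn = mysql.connector.connect(
            host="localhost", user="user", password="pass",
            database="testdb", allow_local_infile=True,
        )
        cur = conn.cursor()
        # The file name cannot be a bind parameter in LOAD DATA, so it is
        # spliced into the statement text.
        path = "C:/samples/test/abcd.csv"
        cur.execute("LOAD DATA LOCAL INFILE '%s' INTO TABLE test "
                    "FIELDS TERMINATED BY ','" % path)
        conn.commit()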

  • Online conversion to CSV using Perl

    - by Octopus
    I have an application generating logs every 5 seconds. The logs are in the format below:

        11:13:49.250,interface,0,RX,0
        11:13:49.250,interface,0,TX,0
        11:13:49.250,interface,1,close,0
        11:13:49.250,interface,4,error,593
        11:13:49.250,interface,4,idle,2994215

    and so on for other interfaces. I am working to convert these into the CSV format below:

        Time,interface.RX,interface.TX,interface.close,...
        11:13:49,0,0,0,...

    Simple enough so far, but the problem is that I have to produce the CSV online, i.e., as soon as the log file is updated, the CSV should also be updated. Is there any way to do this using Perl?
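
    One way to go "online" is to tail the log and pivot each timestamp's records into a single CSV row as it completes. The question asks for Perl (File::Tail covers the tailing part there); a rough Python sketch of the loop, with the column layout assumed from the sample above:

        import time

        def follow(path):
            """Yield lines appended to path, polling like `tail -f`."""
            with open(path) as f:
                f.seek(0, 2)                 # start at the end of the file
                while True:
                    line = f.readline()
                    if not line:
                        time.sleep(0.5)
                        continue
                    yield line.rstrip("\n")

        current_ts, row = None, {}
        for line in follow("interfaces.log"):
            ts, iface, idx, metric, value = line.split(",")
            ts = ts.split(".")[0]            # drop the milliseconds
            if ts != current_ts and row:     # timestamp changed: emit the row
                print(current_ts + "," + ",".join(row.values()))
                row = {}
            current_ts = ts
            row[iface + "." + metric] = value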

  • Validating Column Data Stored as CSV Against Another Table

    - by Jakkwylde
    I wanted to see what some suggested approaches would be to validate a field that is stored as a CSV list against a table containing the appropriate values. Although it would be desirable, it is NOT an option to split the CSV list into another related table. In the example data below I would be trying to capture the code 99 for widget D. Below is an example data representation.

        Table: Widgets

        WidgetName   WidgetCodeList
        A            1, 2, 3
        B            1
        C            2, 3
        D            99

        Table: WidgetCodes

        WidgetCode
        1
        2
        3

    An earlier approach was to query the CSV column as rows using various string manipulations and CONNECT_BY_LEVEL; however, the performance was not acceptable.
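
    Whatever the SQL ends up looking like (CONNECT_BY_LEVEL suggests Oracle here), the check itself is set membership per CSV element. A small sketch of that logic in Python, using the example data above:

        valid_codes = {"1", "2", "3"}        # contents of WidgetCodes

        widgets = {"A": "1, 2, 3", "B": "1", "C": "2, 3", "D": "99"}

        for name, code_list in widgets.items():
            codes = (c.strip() for c in code_list.split(","))
            bad = [c for c in codes if c not in valid_codes]
            if bad:
                print(name, "has invalid codes:", bad)   # flags 99 for widget D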

  • problem reading a csv file in python

    - by Hossein
    Hi, I am trying to read a very simple but somewhat large (800 MB) CSV file using the csv library in Python. The delimiter is a single tab and each line consists of some numbers. Each line is a record, and I have 20681 rows in my file. I had some problems during my calculations using this file; it always stops at a certain row, so I got suspicious about the number of rows in the file. I used the code below to count the rows:

        tfdf_Reader = csv.reader(open('v2-host_tfdf_en.txt'), delimiter='\t')
        c = 0
        for row in tfdf_Reader:
            c = c + 1
        print c

    To my surprise, c is printed with the value of 61722! Why is this happening? What am I doing wrong?
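
    With the delimiter genuinely a tab, the usual cause of an inflated count is newline handling: stray carriage returns or embedded newlines make the reader see extra records. The Python 2-era csv docs (matching the print statement above) say to open the file in binary mode so the csv module handles line endings itself; a sketch:

        import csv

        # 'rb' lets the csv module, not the file object, interpret \r and \n,
        # which avoids phantom extra rows on files with odd line endings.
        with open('v2-host_tfdf_en.txt', 'rb') as f:
            count = sum(1 for row in csv.reader(f, delimiter='\t'))
        print count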

  • Problem with a large CSV file

    - by moustafa
    I have a very large CSV file, 51427 lines to be exact. I need to import the entire file into a MySQL database; however, the script times out due to server settings and a slow connection (and maybe other reasons that I am not aware of). So I am now passing START and LIMIT parameters via the address bar to import, like this: http://my.server.address/import.php?...000&limit=1000. This reads the entire CSV file into an array, starts at line 10000 of the array, inserts into the database until it reaches line 11000, and then terminates the script. This works very nicely; however, I am not happy having to read the entire 51427 lines of the CSV file into an array before processing. Is there not a way where I can read only the required lines into an array? That would speed things up significantly.
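
    Reading only the requested window is what a streaming read gives you: skip the first START lines, stop after LIMIT. In PHP that is a loop over fgetcsv with counters; the same idea sketched in Python with itertools.islice:

        import csv
        from itertools import islice

        def rows_window(path, start, limit):
            """Yield rows [start, start + limit) without loading the whole file."""
            with open(path, newline="") as f:
                for row in islice(csv.reader(f), start, start + limit):
                    yield row

        for row in rows_window("big.csv", 10000, 1000):
            pass  # INSERT INTO ... for each row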

  • Performance issue when configuring non HA VM in cluster

    - by laiys
    Hi, I saw this article: http://technet.microsoft.com/en-us/library/cc764243.aspx. A quote taken from the link:

        "Important: It is recommended that you not deploy virtual machines that are not highly available on your host clusters. Although you can do this by using Hyper-V (VMM does not allow it), the non-highly available virtual machines will consume resources that otherwise would be available to the HAVMs."

    What kind of resources (CPU, memory, NIC, etc.) will a non-HA VM consume? Just curious, since not every production VM needs to be in a failover cluster with live migration. If I put the VM on a CSV but do not make it HA, what impact does that have, given that I allocate the same vCPU, vNIC and memory to the VM (not to mention that I lose the failover feature)? Curious to understand more about this. Please advise. Thanks

  • In SSIS Convert European Currency Format to United States Currency Format

    - by Rob
    I have an interesting problem. I have an SSIS package that processes account data, and we are now processing files from Europe. These files are in a CSV format using text qualifiers. An example of the problem: in the United States the currency format is 123456.99 (we purposely leave the thousands separator out). The files sent from Europe are coming in with two formats: one is 123456,99 and the other is 123.456,00. SSIS attempts to parse the text file and place the value into a NUMERIC(20,2) field. This causes a parsing error in SSIS even with the text qualifiers. If I change the field to CURRENCY, it raises a conversion error. I would like SSIS to deal with this directly, without requiring the data to be in the United States format. Has anyone had this problem? Any help will be greatly appreciated. Rob
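
    The normalization itself is mechanical once the convention is known: drop the dots used as thousands separators, then turn the decimal comma into a dot. In SSIS that transform would typically live in a Script Component ahead of the numeric column; the rule itself, sketched in Python and assuming European-formatted input:

        from decimal import Decimal

        def parse_european(amount):
            """'123.456,00' -> Decimal('123456.00'); '123456,99' -> Decimal('123456.99')."""
            return Decimal(amount.strip().replace(".", "").replace(",", "."))

        assert parse_european("123.456,00") == Decimal("123456.00")
        assert parse_european("123456,99") == Decimal("123456.99")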

  • How to Export the Result of a MySQL Query in phpMyAdmin 3.4.3?

    - by grape
    1) I've got a 30K-row table.
    2) When I run a long, 50-line query on that table, a GROUP BY reduces the number of rows to 7K.
    3) I want to export the grouped 7K rows as a new table, or save them as a CSV.

    When I attempt to export, instead of getting the grouped 7K rows, I get the old, pre-query 30K rows. What am I doing wrong, and what should I be doing? NOTE: I'm not a coder, so I'd really appreciate a solution that just uses the phpMyAdmin GUI. Thanks!

  • SED - Regular Expression over multiple lines

    - by herrherr
    Hi there, I'm stuck with this for several hours now and cycled through a wealth of different tools to get the job done. Without success. It would be fantastic, if someone could help me out with this. Here is the problem: I have a very large CSV file (400mb+) that is not formatted correctly. Right now it looks something like this: Alan Smithee ist ein Anagramm von „The [...] „Alan Smythee“, und „Adam Smithee“." ,Alan Smithee Die Aussagenlogik ist der Bereich der Logik, der sich mit [...] ihrer Teilaussagen bestimmen. ,Aussagenlogik As you can probably see the words ",Alan Smithee" and ",Aussagenlogik" should actually be on the same line as the foregoing sentence. Then it would look something like this: Alan Smithee ist ein Anagramm von „The Smitheeeee [...] „Alan Smythee“, und „Adam Smithee“.,Alan Smithee Die Aussagenlogik ist der Bereich der Logik, der sich mit [...] ihrer Teilaussagen bestimmen.,Aussagenlogik Please note that the end of the sentence can contain quotes or not. In the end they should be replaced too. Here is what I came up with so far: sed -n '1h;1!H;${;g;s/\."?.*,//g;p;}' out.csv > out1.csv This should actually get the job done of matching the expression over multiple lines. Unfortunately it doesn't :) The expression is looking for the dot at the end of the sentence and the optional quotes plus a newline character that I'm trying to match with .*. Help much appreciated. And it doesn't really matter what tool gets the job done (awk, perl, sed, tr, etc.). Thanks, Chris

  • ObjectDataSource.Select with Parameters Time Out

    - by MasterMax1313
    I'm using an ObjectDataSource with a 2008 ReportViewer control and LINQ to CSV. The ODS has two parameters (the SQL is spelled out in an XSD file with a TableAdapter). The ReportViewer takes a very long time to render the output after a button is clicked to generate the report. That's my first problem. Even though it works (most of the time), the processing time worries me, and subsequent requests don't seem to be changing the results shown on the screen. The next issue is that when I go to export the ODS to CSV, I'm getting a timeout exception on the Select method of the ODS (shown below). This works for an ODS without parameters, but it seems like now that I've added parameters it doesn't want to cooperate. I'm fresh out of ideas; any thoughts?

        <asp:ObjectDataSource ID="obsGetDataAllCustomers" runat="server"
            SelectMethod="GetDataAllCustomers"
            TypeName="my.myAdapter.AllCustomers"
            OldValuesParameterFormatString="original_{0}">
            <SelectParameters>
                <asp:ControlParameter ControlID="StartDate" Name="Start" PropertyName="Text" Type="DateTime" />
                <asp:ControlParameter ControlID="EndDate" Name="_End" PropertyName="Text" Type="DateTime" />
            </SelectParameters>
        </asp:ObjectDataSource>

    After the button click to view the report:

        rvAllCustomers.LocalReport.Refresh()

    Export to CSV (adding the items returned to a list, which is then processed by working code):

        For Each dr As DataRow In CType(obs.Select(), DataView).Table.Rows
            l.Add(New FullOrderOutput(dr))
        Next

  • CSV export task

    - by medecau
    Need a task that outputs a CSV text file of a couple of tables about every 5 minutes. The server is MSSQL 2008, and it is a production server. The requirements are:

        * UTF-8 output
        * '\t' or ';' cell separator
        * '\n' row terminator
        * the file should be overwritten each run
        * the output is a join of two tables (dbo.article and dbo.stock, the key being 'c_art')
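
    However the 5-minute schedule is driven (SQL Agent, Task Scheduler, cron), the export itself is one query streamed through a CSV writer. A hedged Python/pyodbc sketch matching the requirements above; the connection string and the join's column list are assumptions:

        import csv
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=prodserver;DATABASE=shop;Trusted_Connection=yes"
        )
        cur = conn.cursor()
        cur.execute(
            "SELECT a.*, s.* FROM dbo.article AS a "
            "JOIN dbo.stock AS s ON s.c_art = a.c_art"
        )

        # Mode "w" truncates, so the file is overwritten on every run.
        with open("export.csv", "w", newline="", encoding="utf-8") as f:
            out = csv.writer(f, delimiter="\t", lineterminator="\n")
            out.writerow([col[0] for col in cur.description])  # header row
            out.writerows(cur)  # pyodbc cursors iterate row by row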

  • Remove a line from a csv file with bash, sed, awk

    - by S1syphus
    I'm looking for a way to remove lines within multiple CSV files, in bash using sed, awk or anything appropriate, where the line ends in 0. There are multiple CSV files, and their format is:

        EXAMPLEfoo,60,6
        EXAMPLEbar,30,10
        EXAMPLElong,60,0
        EXAMPLEcon,120,6
        EXAMPLEdev,60,0
        EXAMPLErandom,30,6

    So each file would be amended to:

        EXAMPLEfoo,60,6
        EXAMPLEbar,30,10
        EXAMPLEcon,120,6
        EXAMPLErandom,30,6

    A problem which I can see arising is distinguishing between double digits that end in zero and 0 itself. Any ideas?
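
    Comparing the whole last field rather than the line's final character sidesteps the 10-versus-0 worry; in awk that is essentially awk -F, '$NF != "0"'. The same field-wise test sketched in Python, with the file names assumed:

        import csv

        def drop_zero_rows(path):
            with open(path, newline="") as f:
                rows = [row for row in csv.reader(f) if row[-1].strip() != "0"]
            with open(path, "w", newline="") as f:
                csv.writer(f).writerows(rows)  # rewrite the file in place

        for name in ["a.csv", "b.csv"]:        # the multiple files, names assumed
            drop_zero_rows(name)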

  • SSRS csv export

    - by Lijo
    Hi Team, I am working on SSRS 2005. I have a column header that needs a comma displayed in it. I write the comma in the header, but the stored procedure returns the column header without the comma. When I export the report to CSV, the column names take the name of the text box, which does not have the comma. Is there a way to display the comma in the header when the report is exported to CSV? Thanks, Lijo

  • parsing issue with comma separated csv file

    - by Andrei
    I am trying to extract the 4th column from a CSV file (comma-separated, skipping the first 2 header lines) using this command:

        awk 'NR < 2 {next} {FS = ","} {print $4}' filename.csv | more

    However, it doesn't work, because the first column contains commas, and thus the 4th field is not really the 4th column. Below is an example of a row:

        "sdfsdfsd, sfsdf", 454,fgdfg, I_want_this_column,sdfgdg
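
    Quoted fields with embedded commas are exactly what a real CSV parser handles and naive field splitting does not. A short Python sketch, skipping two header lines as described above:

        import csv

        with open("filename.csv", newline="") as f:
            reader = csv.reader(f, skipinitialspace=True)  # respects "..." quoting
            next(reader)
            next(reader)                                   # skip the 2 header lines
            for row in reader:
                print(row[3])                              # the 4th column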

  • converting nested DIVs into CSV format

    - by wefwgeweg
    Okay, there are already solutions for finding a TABLE or LIST and converting that to CSV. However, what about DIVs? There are some sites that use DIV + CSS to display data. I am using Nokogiri; I wonder how I will be able to automatically find the nested DIVs and convert them to CSV format?
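
    Once the repeating structure is identified (say, one div per record containing one div per cell), the flattening step is short. The question uses Nokogiri (Ruby); the approach illustrated in Python with BeautifulSoup, where the selectors are assumptions about the page:

        import csv
        import sys
        from bs4 import BeautifulSoup

        soup = BeautifulSoup(open("page.html").read(), "html.parser")

        out = csv.writer(sys.stdout)
        for record in soup.select("div.row"):      # hypothetical record marker
            cells = [d.get_text(strip=True) for d in record.find_all("div")]
            out.writerow(cells)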

  • MySQL DUMP as CSV

    - by swt83
    I've looked around and nothing seems to work:

        $file = '/path/to/file.csv';
        $cmd = 'mysqldump DATABASE TABLE > '.$file.' --host=localhost --user=USER --password=PASS';
        $cmd .= ' --lock-tables=false --no-create-info --tab=/tmp --fields-terminated-by=\',\'';
        exec($cmd);

    Everything I try creates an empty CSV file. Any ideas? Thanks much.
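
    A likely culprit: with --tab, mysqldump writes the data files into the given directory on the server itself, so the shell redirect to $file captures nothing. One alternative is to skip mysqldump and stream the query straight to CSV; a hedged sketch in Python, reusing the credentials from the command above:

        import csv
        import mysql.connector

        conn = mysql.connector.connect(
            host="localhost", user="USER", password="PASS", database="DATABASE"
        )
        cur = conn.cursor()
        cur.execute("SELECT * FROM TABLE")  # table name from the command above

        with open("/path/to/file.csv", "w", newline="") as f:
            out = csv.writer(f)
            out.writerow([col[0] for col in cur.description])  # header row
            out.writerows(cur)  # the cursor yields row tuples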

  • Opening a huge .csv file with Excel Interop using C#

    - by user262102
    Hi, I have an application that writes huge .csv files, ranging in size from 1 GB to 2 GB. I need to color-code the data and save it as .xlsx. I have tried using Excel Interop, and it works great for small files, but when I try to open a 1.3 GB .csv file with Excel I get an HRESULT error. Any ideas as to how I could accomplish this task, either with Excel or in any other way? Thanks!
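
    One caveat before any tooling choice: an .xlsx worksheet holds at most 1,048,576 rows, which a 1.3 GB CSV may well exceed, so the data may need splitting across sheets regardless. For producing large .xlsx files without automating Excel itself, a streaming writer keeps memory flat; a sketch with Python's openpyxl in write-only mode, where the color-coding rule is a stand-in:

        import csv
        from openpyxl import Workbook
        from openpyxl.cell import WriteOnlyCell
        from openpyxl.styles import PatternFill

        wb = Workbook(write_only=True)       # streams rows instead of holding all
        ws = wb.create_sheet("data")
        red = PatternFill("solid", fgColor="FFC7CE")

        with open("huge.csv", newline="") as f:
            for row in csv.reader(f):
                cells = []
                for value in row:
                    c = WriteOnlyCell(ws, value=value)
                    if value == "0":         # hypothetical color-coding rule
                        c.fill = red
                    cells.append(c)
                ws.append(cells)

        wb.save("huge.xlsx")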

  • PHP code to convert a MySQL query to CSV

    - by Reilly
    What is the most efficient way to convert a MySQL query to CSV in PHP, please? It would be best to avoid temp files, as they reduce portability (directory paths and file-system permission setup are required). The CSV should also include one top line of field names. Cheers.
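
    Streaming straight from the result set to the output avoids temp files entirely, and the header line falls out of the result metadata; in PHP the usual trick is fputcsv() writing to the php://output stream. The shape of it, sketched in Python with placeholder credentials and query:

        import csv
        import sys
        import mysql.connector

        conn = mysql.connector.connect(
            host="localhost", user="user", password="pass", database="db"
        )
        cur = conn.cursor()
        cur.execute("SELECT id, name, price FROM products")  # hypothetical query

        out = csv.writer(sys.stdout)            # no temp file: straight to output
        out.writerow([col[0] for col in cur.description])    # field names on top
        out.writerows(cur)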
