Search Results

Search found 1639 results on 66 pages for 'csv'.


  • Make a usable Join relationship with LINQ on top of a database CSV design error

    - by jdk
    I'm looking for a way to fix, or abstract away, a comma-separated values (CSV) list stored in a database field, in order to reconstruct a usable relationship so that I can properly join the two tables below and query them using LINQ and its Join method. The following sample shows the Person table, whose CsvArticleIds field holds a CSV value representing a one-to-many association with Article records.

        TABLE [dbo].[Person]
        Id  Name    CsvArticleIds
        --  ------  -------------
        1   Joe     "15,22"
        5   Ed      "22"
        10  Arnie   "8,15,22"

    (Of course a link table should have been created; nonetheless the relationship with articles is trapped inside that list of CSV values.)

        TABLE [dbo].[Article]
        Id  Title
        --  -----
        8   Beginning C#
        15  A Historic look at Programming in the 90s
        22  Gardening in January

    Additional info:
    - The fix can be at any level: C#.NET or SQL Server.
    - Something easy, because I will be repeating the solution for many other CSV values in other tables. Elegant is nice too.
    - I'm not looking for efficiency, because this is part of a one-time data migration task and can take as long as it wants to run.
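
    One way to think about the migration is to explode the CSV column into a proper link table of (PersonId, ArticleId) pairs and join through that. Since the question targets C#/SQL, treat the following Python sketch only as an illustration of the approach; the sample data mirrors the tables above:

        people = [
            (1, "Joe", "15,22"),
            (5, "Ed", "22"),
            (10, "Arnie", "8,15,22"),
        ]
        articles = {
            8: "Beginning C#",
            15: "A Historic look at Programming in the 90s",
            22: "Gardening in January",
        }

        # Explode the CSV column into a proper link "table" of id pairs.
        person_article = [
            (person_id, int(article_id))
            for person_id, _, csv_ids in people
            for article_id in csv_ids.split(",")
        ]

        # Join person names with article titles through the link pairs.
        names = {person_id: name for person_id, name, _ in people}
        for person_id, article_id in person_article:
            print(names[person_id], "->", articles[article_id])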

    Read the article

  • Pass Memory in GB Using Import-CSV Powershell to New-VM in Hyper-V Version 3

    - by PowerShell
    I created the function below to pass memory values from a CSV file when creating a VM in Hyper-V version 3:

        Function Install-VM {
            param (
                [Parameter(Mandatory=$true)]
                [int64]$Memory = 512MB
            )
            $VMName = "dv.VMWIN2K8R2-3.Hng"
            $vmpath = "c:\2012vms"
            New-VM -MemoryStartupBytes ([int64]$memory*1024) -Name $VMName -Path $VMPath -Verbose
        }

        Import-Csv "C:\2012vms\Vminfo1.csv" | ForEach-Object {
            Install-VM -Memory ([int64]$_.Memory)
        }

    But when I try to create the VM there is a mismatch with the memory parameter passed from Import-Csv, and I receive the error below:

        VERBOSE: New-VM will create a new virtual machine "dv.VMWIN2K8R2-3.Hng".
        New-VM : 'dv.VMWIN2K8R2-3.Hng' failed to modify device 'Memory'. (Virtual machine ID CE8D36CA-C8C6-42E6-B5C6-2AA8FA15B4AF)
        Invalid startup memory amount assigned for 'dv.VMWIN2K8R2-3.Hng'. The minimum amount of memory you can assign
        to a virtual machine is '8' MB. (Virtual machine ID CE8D36CA-C8C6-42E6-B5C6-2AA8FA15B4AF)
        A parameter that is not valid was passed to the operation.
        At line:48 char:9
        +         New-VM -ComputerName $HyperVHost -MemoryStartupBytes ([int64]$memory*10 ...
        +         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            + CategoryInfo          : InvalidArgument: (Microsoft.HyperV.PowerShell.VMTask:VMTask) [New-VM], VirtualizationOperationFailedException
            + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.NewVMCommand

    Also, please note that in the CSV file I'm passing memory as 1, 2, 4, etc. (as shown below), and converting to MB by multiplying by 1024 later:

        Memory
        1

    Can anyone help me out on how to format and pass the memory details to the function?
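
    Note the arithmetic the error hints at: -MemoryStartupBytes expects bytes, and if the CSV stores gigabytes, multiplying by 1024 yields only 1024 bytes for a value of 1, far below the 8 MB minimum. Gigabytes-to-bytes needs a factor of 1024^3 (PowerShell's 1GB constant). A minimal Python sketch of that conversion, assuming the CSV values are whole gigabytes:

        GIB = 1024 ** 3  # bytes per gigabyte (binary)

        def gb_to_startup_bytes(gb_value: str) -> int:
            # Convert a CSV memory value given in GB to bytes for the hypervisor.
            return int(gb_value) * GIB

        # 1 GB from the CSV becomes 1073741824 bytes, well above the 8 MB minimum.
        print(gb_to_startup_bytes("1"))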

    Read the article

  • Macro - To create one [.csv] file from/using multiple workbooks, kept in a folder, containing multiple worksheets

    - by AJ
    Hello, I have multiple Excel workbooks, each containing multiple worksheets. I would like a macro to help me combine the information from all the worksheets into one pipe [|] delimited [.csv] file. The sheets should be combined/appended into the [.csv] file in the same order the workbooks appear in a folder, and in the order the sheets appear within those workbooks. The macro should ask me for the delimiter/separator and for the input and output paths based on my selection. It would be great if the output [.csv] file were named "foldername" + "Output.csv". Thank you, Best Regards - AJ
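
    The question asks for an Excel/VBA macro; as a sketch of the same folder walk in another language, here is the combining logic in Python (openpyxl is an assumed third-party dependency, and the folder path is a placeholder):

        import csv
        from pathlib import Path

        from openpyxl import load_workbook  # assumed dependency: openpyxl

        folder = Path(r"C:\workbooks")  # placeholder input path
        out_path = folder / (folder.name + "Output.csv")

        with open(out_path, "w", newline="", encoding="utf-8") as out:
            writer = csv.writer(out, delimiter="|")
            # Workbooks in folder (name) order, then sheets in workbook order.
            for book_path in sorted(folder.glob("*.xlsx")):
                wb = load_workbook(book_path, read_only=True, data_only=True)
                for ws in wb.worksheets:
                    for row in ws.iter_rows(values_only=True):
                        writer.writerow("" if v is None else v for v in row)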

    Read the article

  • Encoding in SQL to CSV

    - by Z77
    When I execute a COPY TO ... CSV query, I create a CSV file. But when I open it in Excel, the columns with names that should contain national characters do not appear as they should. So my question is: is it possible, within the SQL query, to change this encoding to UTF-8? Or something else? I want the newly created CSV file to be the final product for users on the web. I hope someone understands what I want :)
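
    COPY TO ... CSV suggests PostgreSQL, whose COPY writes in whatever encoding the server/client is configured for; a frequent Excel wrinkle is that Excel only detects UTF-8 when the file starts with a byte-order mark. A hedged Python sketch of a post-processing step, assuming the exported file is in some known source encoding (iso-8859-2 is just a placeholder):

        SOURCE_ENCODING = "iso-8859-2"  # placeholder; use whatever COPY actually produced

        with open("export.csv", encoding=SOURCE_ENCODING) as src:
            data = src.read()

        # "utf-8-sig" prepends the BOM that Excel uses to detect UTF-8.
        with open("export-utf8.csv", "w", encoding="utf-8-sig") as dst:
            dst.write(data)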

    Read the article

  • Export CSV from Mysql

    - by ss888
    Hi, I'm having a bit of trouble exporting a CSV file that is created from one of my MySQL tables using PHP. The code I'm using prints the correct data, but I can't see how to download this data in a CSV file, providing a download link to the created file. I thought the browser was supposed to provide the file for download automatically, but it doesn't. (Could it be because the code below is called using AJAX?) Any help greatly appreciated - code below, S.

        include('../config/config.php'); //db connection settings

        $query = "SELECT * FROM isregistered";
        $export = mysql_query($query) or die("Sql error : " . mysql_error());
        $fields = mysql_num_fields($export);

        $header = ''; // initialized to avoid an undefined-variable notice
        for ($i = 0; $i < $fields; $i++) {
            $header .= mysql_field_name($export, $i) . "\t";
        }

        $data = ''; // initialized to avoid an undefined-variable notice
        while ($row = mysql_fetch_row($export)) {
            $line = '';
            foreach ($row as $value) {
                if ((!isset($value)) || ($value == "")) {
                    $value = "\t";
                } else {
                    $value = str_replace('"', '""', $value);
                    $value = '"' . $value . '"' . "\t";
                }
                $line .= $value;
            }
            $data .= trim($line) . "\n";
        }
        $data = str_replace("\r", "", $data);
        if ($data == "") {
            $data = "\n(0) Records Found!\n";
        }

        //header("Content-type: application/octet-stream"); //have tried all of these at some time
        //header("Content-type: text/x-csv");
        header("Content-type: text/csv");
        //header("Content-type: application/csv");
        header("Content-Disposition: attachment; filename=export.csv");
        //header("Content-Disposition: attachment; filename=export.xls");
        header("Pragma: no-cache");
        header("Expires: 0");

        echo '<a href="">Download Exported Data</a>'; //want my link to go in here...
        print "$header\n$data";
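
    Two things in this setup work against the download: an AJAX response generally cannot trigger the browser's save dialog (the browser has to navigate to the export URL, e.g. via a normal link or window.location), and anything echoed before or alongside the CSV body, such as that anchor tag, becomes part of the downloaded file. As a sketch of the clean shape of such an endpoint, here is the same header/body pattern in Python using Flask; the framework choice and the rows() helper are assumptions, not part of the original code:

        from flask import Flask, Response

        app = Flask(__name__)

        def rows():
            # Placeholder for the real database query.
            return [("id", "name"), (1, "Joe"), (2, "Ed")]

        @app.route("/export.csv")
        def export():
            body = "\n".join("\t".join(str(v) for v in row) for row in rows())
            # The response contains ONLY the CSV body; the download link lives
            # on the page that points at this URL, never inside the response.
            return Response(
                body,
                mimetype="text/csv",
                headers={"Content-Disposition": "attachment; filename=export.csv"},
            )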

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months.

    With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.  There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example.

    Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

        EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
        102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
        101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
        103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
        304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
        333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
        100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
        334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
        400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century, since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data.

    Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts.

    First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file.  Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
    To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table.

    I click New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button.  In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact.

    Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job IDs to integers, and convert the salary to a decimal value.

    Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Delimited Schema wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard.  I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next.

    Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row.”  Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

    Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
    Then for the EmployeeID and DepartmentID attributes I select Integer as the data type, for the StartingDate and TerminationDate attributes I select Datetime as the data type, and for the FinalSalary attribute I select the Decimal type.

    But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date).

    As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file.  I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values.  For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes.

    And now to the Salary attribute.  I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the pound or euro sign), which should be removed.  And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” text box.  These entries allow the string to be properly converted into a decimal value.

    By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself.

    Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator.  Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped.  I then select the Write Table operator and in its Properties panel specify the database connection, Normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.
    If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator.

    The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu.

    The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column.  I also provide the name for the table.

    This completes development of the application.  The entire application was created through the use of wizards, and the required data transformations were specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want, and I urge you to try this product if you have a need to move data between files or from files to database tables.

    Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
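
    For readers who want to see the same reshaping spelled out, here is a short Python sketch of the transformations the article’s schema directives perform: two-digit years expanded to a full century, the time-bearing termination date parsed, and the currency string converted to a decimal, run against the sample file above (the file name is a placeholder):

        import csv
        from datetime import datetime
        from decimal import Decimal

        with open("terminated_employees.csv", newline="") as f:
            for rec in csv.DictReader(f):
                emp_id = int(rec["EMP_ID"])
                # %y applies a century pivot, turning 93 into 1993 and 01 into 2001.
                start = datetime.strptime(rec["STRT_DATE"], "%d-%b-%y")
                # The termination date also carries hours and minutes.
                end = datetime.strptime(rec["END_DATE"], "%d-%b-%y %H:%M")
                # Strip the currency symbol and grouping commas; keep exact decimals.
                salary = Decimal(rec["SALARY"].lstrip("$").replace(",", ""))
                print(emp_id, start.date(), end, rec["JOB_ID"], salary)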

    Read the article

  • How to Map a CSV or Tab Delimited File to MySQL Multi-Table Database [migrated]

    - by Keefer
    I've got a pretty substantial XLS file a client provided: 830 total tabs/sheets. I've designed a multi-table database with phpMyAdmin (MySQL, obviously) to house the information that's in there, and have populated about 5 of those sheets by hand to ensure the data will fit into the designed database. Is there a piece of software or some sort of tool that will help me format this XLS document and map it to the right places in the database?
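
    Short of a dedicated mapping tool, scripting is a common route. A hedged sketch in Python with pandas, assuming each sheet first lands in a staging table named after the sheet, from which the real multi-table mapping can then be done in SQL; the connection string and file name are placeholders:

        import pandas as pd
        from sqlalchemy import create_engine

        engine = create_engine("mysql+pymysql://user:password@localhost/mydb")  # placeholder DSN

        # sheet_name=None loads every sheet into a dict of DataFrames
        # (reading .xls may require an Excel engine such as xlrd installed).
        sheets = pd.read_excel("client_workbook.xls", sheet_name=None)

        for sheet_name, frame in sheets.items():
            # One staging table per sheet; normalize into the real tables afterwards.
            frame.to_sql(f"staging_{sheet_name}", engine, if_exists="replace", index=False)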

    Read the article

  • "t-sql" tool for csv files?

    - by adolf garlic
    Is there a tool that allows you to query files based on the following criteria, and change them?

    - creation date
    - filename (e.g. filetypeA_YYYYMMDD_data2.blah)
    - file content

    For example, the workflow could be the following: get all files created between date x and date y, where the filename contains z and the file itself contains value x under column zzz, then update it to a different value/another column.
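
    No single standard tool covers all three criteria, but the workflow is easy to script. A minimal Python sketch, assuming CSV files, a filename pattern, a timestamp window, and an in-place rewrite of matching rows; every concrete value is a placeholder:

        import csv
        from datetime import datetime
        from pathlib import Path

        start = datetime(2010, 1, 1).timestamp()
        end = datetime(2010, 12, 31).timestamp()

        for path in Path("data").glob("*_data2.blah"):      # filename criterion
            # st_mtime is modification time; true creation time is platform-specific.
            if not (start <= path.stat().st_mtime <= end):  # date criterion
                continue
            with open(path, newline="") as f:
                rows = list(csv.DictReader(f))
                fields = list(rows[0].keys()) if rows else []
            changed = False
            for row in rows:
                if row.get("zzz") == "x":                   # content criterion
                    row["zzz"] = "new value"                # the update
                    changed = True
            if changed:
                with open(path, "w", newline="") as f:
                    writer = csv.DictWriter(f, fieldnames=fields)
                    writer.writeheader()
                    writer.writerows(rows)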

    Read the article

  • enclosing double quotes in array

    - by Jared
    Hi all, I might be looking at this the wrong way, but I have a form that does its thing (sends emails etc.), and I also put in some code to make a simple flat-file CSV log with some of the user-entered details. If a user accidentally puts in, for instance, 'himynameis","bob', this would either break the CSV row (because the quotes weren't encapsulated) or, if I use htmlspecialchars() and stripslashes() on the data, I end up with an ugly data value of 'himynameis&quot;,&quot;bob'. My question is: how can I handle the incoming data to cater for '"' being put in the form without breaking my CSV file? This is my code for creating the CSV log file:

        @$name = htmlspecialchars(trim($_POST['name']));
        @$emailCheck = htmlspecialchars(trim($_POST['email']));
        @$title = htmlspecialchars(trim($_POST['title']));
        @$phone = htmlspecialchars(trim($_POST['phone']));

        function logFile($logText)
        {
            $path = 'D:\logs';
            $filename = '\Log-' . date('Ym', time()) . '.csv';
            $file = $path . $filename;
            if (!file_exists($file)) {
                $logHeader = array('Date', 'IP_Address', 'Title', 'Name', 'Customer_Email', 'Customer_Phone', 'file');
                $fp = fopen($file, 'a');
                fputcsv($fp, $logHeader); // fixed: was fputcsv($fp, $line); $line is undefined here
            }
            $fp = fopen($file, 'a');
            foreach ($logText as $record) {
                fputcsv($fp, $record);
            }
        }

        //Log submission to file
        $date = date("Y/m/d H:i:s");
        $clientIp = getIpAddress(); //get client's IP address
        $nameLog = stripslashes($name);
        $titleLog = stripslashes($title);
        if ($_FILES['uploadedfile']['error'] == 4)
            $filename = "No file attached."; //check if file uploaded and return
        $logText = array(array("$date", "$clientIp", "$titleLog", "$nameLog", "$emailCheck", "$phone", "$filename"));
        logFile($logText); //write form details to log

    Here is a sample of the incoming array data:

        Array
        (
            [0] => Array
                (
                    [0] => 2010/05/17 10:22:27
                    [1] => xxx.xxx.xxx.xxx
                    [2] => title
                    [3] => """"himynameis","bob"
                    [4] => [email protected]
                    [5] => 346346
                    [6] => No file attached.
                )
        )

    TIA, Jared
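
    PHP's fputcsv() generally handles the quoting itself, enclosing fields and doubling embedded quote characters, so feeding it the raw value (without an htmlspecialchars() pass first) should keep the row intact. The same round trip shown in Python, as an illustration of how a CSV writer escapes an embedded '"' and how reading recovers the original value:

        import csv
        import io

        buf = io.StringIO()
        csv.writer(buf).writerow(["2010/05/17", 'himynameis","bob', "346346"])
        print(buf.getvalue())  # 2010/05/17,"himynameis"",""bob",346346

        # Reading it back recovers the original value unchanged.
        value = next(csv.reader(io.StringIO(buf.getvalue())))[1]
        assert value == 'himynameis","bob'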

    Read the article

  • Question about DBD::CSV statement functions

    - by sid_com
    From the SQL::Statement::Functions documentation:

        Function syntax

        When using SQL::Statement/SQL::Parser directly to parse SQL, functions (either built-in or user-defined)
        may occur anywhere in a SQL statement that values, column names, table names, or predicates may occur.
        When using the modules through a DBD or in any other context in which the SQL is both parsed and executed,
        functions can occur in the same places except that they can not occur in the column selection clause of a
        SELECT statement that contains a FROM clause.

        # valid for both parsing and executing
        SELECT MyFunc(args);
        SELECT * FROM MyFunc(args);
        SELECT * FROM x WHERE MyFuncs(args);
        SELECT * FROM x WHERE y < MyFuncs(args);

        # valid only for parsing (won't work from a DBD)
        SELECT MyFunc(args) FROM x WHERE y;

    Reading this, I would expect that the first SELECT statement of my example shouldn't work and the second should, but it is quite the contrary.

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.010;
        use DBI;

        open my $fh, '>', 'test.csv' or die $!;
        say $fh "id,name";
        say $fh "1,Brown";
        say $fh "2,Smith";
        say $fh "7,Smith";
        say $fh "8,Green";
        close $fh;

        my $dbh = DBI->connect('dbi:CSV:', undef, undef, {
            RaiseError => 1,
            f_ext      => '.csv',
        });

        my $table = 'test';

        say "\nSELECT 1";
        my $sth = $dbh->prepare("SELECT MAX( id ) FROM $table WHERE name LIKE 'Smith'");
        $sth->execute();
        $sth->dump_results();

        say "\nSELECT 2";
        $sth = $dbh->prepare("SELECT * FROM $table WHERE id = MAX( id )");
        $sth->execute();
        $sth->dump_results();

    This outputs:

        SELECT 1
        '7'
        1 rows

        SELECT 2
        Unknown function 'MAX' at /usr/lib/perl5/site_perl/5.10.0/SQL/Parser.pm line 2893.
        DBD::CSV::db prepare failed: Unknown function 'MAX' at /usr/lib/perl5/site_perl/5.10.0/SQL/Parser.pm line 2894.
         [for Statement "SELECT * FROM test WHERE id = MAX( id )"] at ./so_3.pl line 30.
        DBD::CSV::db prepare failed: Unknown function 'MAX' at /usr/lib/perl5/site_perl/5.10.0/SQL/Parser.pm line 2894.
         [for Statement "SELECT * FROM test WHERE id = MAX( id )"] at ./so_3.pl line 30.

    Could someone explain this behavior to me?

    Read the article

  • WinSCP putting multiple files on sftp site

    - by NewToWinSCP
    WinSCP 5.2. I wanted to put multiple files with a .pgp extension onto an SFTP site. When I tested my original command line (see below), it only placed the first *.pgp file in alphabetical order (D:\a.csv.pgp) on the SFTP site. I tried specifying *.PGP and *.pgp without any change: only one file (D:\a.csv.pgp) would be copied each time. I got it to work for all files only if I specified a put command for each .pgp file. Any ideas on how to put all *.pgp files on the SFTP site?

    Original command line - does not work:

        d:\winscp\winscp /command "option echo off" "option batch on" "option confirm off" "open sftp" "put D:\*.pgp" "close" "exit"

    Works:

        d:\winscp\winscp /command "option echo off" "option batch on" "option confirm off" "open sftp" "put D:\a.csv.pgp" "put D:\b.csv.pgp" "put D:\c.csv.pgp" "put D:\d.csv.pgp" "put D:\e.csv.pgp" "put D:\f.csv.pgp" "put D:\g.csv.pgp" "put D:\h.csv.pgp" "put D:\i.csv.pgp" "close" "exit"

    Read the article

  • mysql: storing arbitrary data

    - by Hailwood
    Background: I was asking a question on Stack Overflow about creating tables on the fly, where this conversation ensued:

        "This smells like a terrible idea! In fact, it smells just like this one. What in the world do you want to use this for?" - deceze

        "@deceze: very true. However, how else would you store the contents of these CSV files? They must be stored in MySQL for indexing. The only solid fact about them is that they all have a mobile column with a standard format. The CSV can have an arbitrary number of columns with an arbitrary number of rows. They can (with no exaggeration) range from a single-row, 35-column CSV to an 80k-row, single-column CSV. I am open to other ideas." - Hailwood

        "There are many solutions for this, from attribute-value schemas to JSON storage and NoSQL storage. Open a new question about it. Whatever you do though, don't dynamically create tables!" - deceze

    Question: So my question is, what would you say is the best way to store this data? Are you in agreement with deceze about not creating dynamic tables?
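
    As a concrete picture of the attribute-value option mentioned in that exchange, here is a minimal Python/SQLite sketch (all table and column names are made up for illustration; the idea carries over to MySQL) that keeps the one guaranteed field, mobile, as a real column and stores everything else as key-value rows:

        import csv
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE upload_row (id INTEGER PRIMARY KEY, mobile TEXT);
            CREATE TABLE upload_cell (
                row_id   INTEGER REFERENCES upload_row(id),
                col_name TEXT,
                value    TEXT
            );
            CREATE INDEX idx_cell ON upload_cell (col_name, value);
        """)

        def store_csv(path):
            # Each CSV row becomes one upload_row plus N upload_cell rows,
            # regardless of how many columns the file happens to have.
            with open(path, newline="") as f:
                for rec in csv.DictReader(f):
                    cur = conn.execute(
                        "INSERT INTO upload_row (mobile) VALUES (?)",
                        (rec.pop("mobile", None),),
                    )
                    conn.executemany(
                        "INSERT INTO upload_cell VALUES (?, ?, ?)",
                        [(cur.lastrowid, k, v) for k, v in rec.items()],
                    )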

    Read the article

  • Delete cell content in Libre (Open) Office based on the cell value

    - by take2
    I have a huge CSV file (tens of thousands of rows) that I need to filter based on different criteria. After trying to find a proper CSV editor, I decided to use LibreOffice Calc. CSVed is great, but it supports neither UTF-8 nor macros for advanced filtering. So, there are 4 columns, 3 of which contain (decimal) numbers and 1 of which contains text. I'm trying to find a way to delete rows with macro code. I can achieve the desired behavior with filters too, but it's annoying to type all of the filtering values over and over again, and there doesn't seem to be a way to export the filter and use it repeatedly.

    These rows should be deleted: the ones that don't contain certain words in the textual column (column A). There are a few thousand different words used in that column and I want to keep only the rows that contain one of about 30 words. Additionally, the numbers in the other columns should be bigger than 3.8 (column B) and 4.5 (column C), and smaller than 20 (column C). The row-deletion type is "Shift up".

    Hopefully I have explained it well. Thanks a lot in advance for your help!
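
    Since the data is ultimately a CSV file, the same filter can also be applied before it ever reaches Calc. A minimal Python sketch, assuming column A holds the text and columns B and C the constrained numbers, with a placeholder keyword list standing in for the ~30 real words:

        import csv

        KEEP_WORDS = {"apple", "pear", "plum"}  # placeholder for the ~30 real words

        with open("huge.csv", newline="", encoding="utf-8") as src, \
             open("filtered.csv", "w", newline="", encoding="utf-8") as dst:
            reader = csv.reader(src)
            writer = csv.writer(dst)
            writer.writerow(next(reader))  # copy the first row; drop this if there is no header
            for a, b, c, d in reader:
                keep = (
                    any(word in a for word in KEEP_WORDS)  # column A criterion
                    and float(b) > 3.8                     # column B criterion
                    and 4.5 < float(c) < 20                # column C criteria
                )
                if keep:
                    writer.writerow([a, b, c, d])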

    Read the article

  • Rake don't know how to build task?

    - by Schroedinger
    Using a rake task to import data into a database; the file is as follows:

        namespace :db do
          desc "load imported data from csv"
          task :load_csv_data => :environment do
            require 'fastercsv'
            require 'chronic'
            FasterCSV.foreach("input.csv", :headers => true) do |row|
              Trade.create(
                :name     => row[0],
                :type     => row[4],
                :price    => row[6].to_f,
                :volume   => row[7].to_i,
                :bidprice => row[10].to_f,
                :bidsize  => row[11].to_i,
                :askprice => row[14].to_f,
                :asksize  => row[15].to_i
              )
            end
          end
        end

    When attempting to use this, with the right CSV files and the other elements in place, it will say:

        Don't know how to build task 'db:import_csv_data'

    (Note that the task defined above is db:load_csv_data, while the error mentions db:import_csv_data.) I know this structure works because I've tested it; I'm just trying to get it to convert to the new values on the fly. Suggestions?

    Read the article

  • Dynamically generating a file with javascript?

    - by gct
    I'm working on a web app for my company that will let us do some image-tagging stuff, and I'd like to be able to generate results in the form of a CSV file. I can do this easily enough by dumping the CSV data to a div or something on the page and having the user copy it out. I'd rather have them hit a generate button and have a CSV file downloaded as though they had clicked a link to the result, so they can more easily save the file somewhere convenient. Is it possible to simulate this kind of thing with JavaScript? I basically want to dynamically generate the file and then let them download it, client side.

    Read the article

  • How to view a DataTable while debugging

    - by Eric
    I'm just getting started using ADO.NET and DataSets and DataTables. One problem I'm having is that it seems pretty hard to tell what values are in the data table when trying to debug. What are some of the easiest ways of quickly seeing what values have been saved in a DataTable? Is there some way to see the contents in Visual Studio while debugging, or is the only option to write the data out to a file?

    I've created a little utility function that will write a DataTable out to a CSV file, yet the resulting CSV file was cut off. About 3 lines from what should have been the last line, in the middle of writing out a System.Guid, the file just stops. I can't tell if this is an issue with my CSV conversion method or the original population of the DataTable.

    Update: Forget the last part. I just forgot to flush my stream writer.

    Read the article

  • Efficient way to access a mapping of identifiers in Python

    - by sixbelo
    I am writing an app to do a file conversion, and part of that is replacing old account numbers with new account numbers. Right now I have a CSV file mapping the old and new account numbers, with around 30K records. I read this in and store it as a dict, and when writing the new file I grab the new account from the dict by key. My question is: what is the best way to do this if the CSV file increases to 100K+ records? Would it be more efficient to convert the account mappings from a CSV to a SQLite database rather than storing them as a dict in memory?
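
    For scale, a 100K-entry string-to-string dict typically costs only tens of megabytes, so in-memory lookup usually stays both simpler and faster; SQLite mainly pays off when the mapping no longer fits in memory or needs to persist between runs. A small sketch of both shapes (file and column names are placeholders):

        import csv
        import sqlite3

        # Option 1: in-memory dict (fine at 100K+ rows on modern hardware).
        with open("account_map.csv", newline="") as f:
            mapping = {old: new for old, new in csv.reader(f)}
        print(mapping.get("12345"))

        # Option 2: SQLite lookup table, indexed via the primary key.
        conn = sqlite3.connect("accounts.db")
        conn.execute("CREATE TABLE IF NOT EXISTS map (old TEXT PRIMARY KEY, new TEXT)")
        with open("account_map.csv", newline="") as f:
            conn.executemany("INSERT OR REPLACE INTO map VALUES (?, ?)", csv.reader(f))
        conn.commit()
        row = conn.execute("SELECT new FROM map WHERE old = ?", ("12345",)).fetchone()
        print(row[0] if row else None)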

    Read the article

  • Creating interruptible process in python

    - by Glycerine
    I'm creating a Python script that parses a large (but simple) CSV. It'll take some time to process. I would like the ability to interrupt the parsing of the CSV so I can continue at a later stage. Currently I have this, which lives in a larger class: (unfinished)

    Edit: I have some changed code. But the system will parse over 3 million rows.

        def parseData(self):
            reader = csv.reader(open(self.file))
            for id, title, disc in reader:
                print "%-5s %-50s %s" % (id, title, disc)
                l = LegacyData()
                l.old_id = int(id)
                l.name = title
                l.disc_number = disc
                l.parsed = False
                l.save()

    This is the old code:

        def parseData(self):
            # first line start
            fields = self.data.next()
            for row in self.data:
                items = zip(fields, row)
                item = {}
                for (name, value) in items:
                    item[name] = value.strip()
                self.save(item)

    Thanks guys.
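
    One common shape for an interruptible pass is a checkpoint file: remember how many rows have been handled, skip that many on restart, and update the counter as you go. A minimal standalone sketch in modern Python (the post is Python 2); the file names and the handle_row() callback are placeholders:

        import csv
        import itertools
        import os

        CHECKPOINT = "parse.checkpoint"

        def handle_row(row):
            pass  # placeholder for the real per-row work (e.g. LegacyData().save())

        def parse_resumable(path):
            done = int(open(CHECKPOINT).read()) if os.path.exists(CHECKPOINT) else 0
            n = done
            with open(path, newline="") as f:
                reader = csv.reader(f)
                try:
                    # Skip rows finished in a previous run, then continue counting.
                    for n, row in enumerate(itertools.islice(reader, done, None), done + 1):
                        handle_row(row)
                        if n % 1000 == 0:  # checkpoint periodically, not every row
                            with open(CHECKPOINT, "w") as cp:
                                cp.write(str(n))
                except KeyboardInterrupt:
                    with open(CHECKPOINT, "w") as cp:
                        cp.write(str(n))
                    raise
            if os.path.exists(CHECKPOINT):
                os.remove(CHECKPOINT)  # finished cleanly; the next run starts fresh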

    Read the article

  • Google Docs not importing CSVs consistently

    - by nick
    Hey everyone, I'm trying to import some CSV data into a Google Docs spreadsheet. The data I am entering is all made up of 16-digit integers. About 90% of them are imported perfectly, but 10% are rewritten automatically into scientific notation. How do I turn this feature off? I just want all the numbers kept in their standard form. Kind regards, Nick

    Read the article

  • Exporting data from MySQL to Excel using PHP also returns HTML code

    - by pmms
    Hi all, the following is code for generating an Excel sheet from MySQL using PHP:

        if ($_POST['Submit'] == 'generateexcel') {
            $tblname = $_GET['generateexcel'];
            global $obj_mysql;
            $table = "tbl_js_login"; // this is the table that you want to export to CSV from MySQL

            function exportMysqlToCsv($table, $filename = 'export.csv')
            {
                $csv_terminated = "\n";
                $csv_separator = ",";
                $csv_enclosed = '"';
                $csv_escaped = "\\"; // fixed: was "\", an unterminated string
                $sql_query = "select fld_id, fld_fname, fld_lname from $table";

                // Gets the data from the database
                $result = mysql_query($sql_query);
                $fields_cnt = mysql_num_fields($result);

                $schema_insert = '';
                for ($i = 0; $i < $fields_cnt; $i++) {
                    $l = $csv_enclosed . str_replace($csv_enclosed, $csv_escaped . $csv_enclosed,
                        stripslashes(mysql_field_name($result, $i))) . $csv_enclosed;
                    $schema_insert .= $l;
                    $schema_insert .= $csv_separator;
                } // end for

                $out = trim(substr($schema_insert, 0, -1));
                $out .= $csv_terminated;

                // Format the data
                while ($row = mysql_fetch_array($result)) {
                    $schema_insert = '';
                    for ($j = 0; $j < $fields_cnt; $j++) {
                        if ($row[$j] == '0' || $row[$j] != '') {
                            if ($csv_enclosed == '') {
                                $schema_insert .= $row[$j];
                            } else {
                                $schema_insert .= $csv_enclosed
                                    . str_replace($csv_enclosed, $csv_escaped . $csv_enclosed, $row[$j])
                                    . $csv_enclosed;
                            }
                        } else {
                            $schema_insert .= '';
                        }
                        if ($j < $fields_cnt - 1) {
                            $schema_insert .= $csv_separator;
                        }
                    } // end for
                    $out .= $schema_insert;
                    $out .= $csv_terminated;
                    $out1 = strip_tags($out);
                } // end while

                header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
                header("Content-Length: " . strlen($out1));
                // Output to browser with appropriate mime type, you choose ;)
                header("Content-type: text/x-csv");
                //header("Content-type: text/csv");
                //header("Content-type: application/csv");
                header("Content-Disposition: attachment; filename=$filename");
                echo $out1;
                exit;
            }

            exportMysqlToCsv($table);
        }

        include_once $path . "includes/jobseeker_form.php";
        /*
        function is_duplicate($login_name)
        {
            global $obj_mysql;
            $sql = "SELECT * FROM tbl_admin_details WHERE fld_login ='$login_name'";
            $num = $obj_mysql->get_num_rows($sql);
            if ($num == 0)
                return false;
            else
                return true;
        }
        */
        ?>

    We are using the above code to generate the Excel sheet, but along with the sheet we are getting HTML code at the top. Please provide some help on how to remove the HTML code from the Excel sheet.

    Read the article

  • Combining loading from a database and from a CSV file in Java Swing in 5 minutes, by Thierry Leriche-Dessirier

    Hello everyone, I'd like to share a short article, titled "Load and display data from the database and from a simple CSV file in 5 minutes", available at http://thierry-leriche-dessirier.dev...-db-csv-5-min/

    Synopsis: This short article shows (by example) how to load data from a simple CSV file (with Open-CSV) and from a MySQL database (with JDBC), merging the values to display them in a (Swing) interface as a table (JTable and table model) and as graphs, all in only a few minutes. You can also find the other articles in the seri...

    Read the article

  • JMeter: how to assign a single distinct value from CSV Data Set Config to each thread in a thread group?

    - by JohnnyM
    I have to make a load test for a relatively large number of users, so I can't really use the User Parameters pre-processor to parametrize each thread with custom user data. I've read that I should use CSV Data Set Config instead. However, I've run into a problem with how JMeter interprets the input of this config.

    Example: I have a thread group of 3 threads with Loop Count: 10, containing one HTTP Request sampler with server www.example.com and path ${user}. The CSV file for the CSV Data Set Config to extract the user parameter contains one value per line:

        1
        2
        3
        4
        5

    The expected output is that for thread 1-x the path of each request should be \x, so the output file should consist of 10 samples per thread, namely:

    - for thread 1-1: 10 requests to www.example.com\1
    - for thread 1-2: 10 requests to www.example.com\2
    - for thread 1-3: 10 requests to www.example.com\3

    But instead I get requests to each of \1 - \5 and then to EOF. Does anyone know how to achieve the expected effect with CSV Data Set Config in JMeter 2.9?

    Read the article

  • Does Chrome always adds an XLS extension on a vnd.ms-excel mime type?

    - by Claudio
    It seems that a simple download of a PHP-generated CSV file (served with a vnd.ms-excel mime type for the sake of opening it with (Open)Office readily upon the "Save as..." dialog, when present) always gets an unwanted .xls extension when the UA is Google Chrome. The file then ends up named myfile.csv.xls. Firefox behaves correctly. I wonder if it is a bug, a feature, or a misunderstanding of some references. Thank you.

    Read the article

  • BULK INSERT problem in mysql

    - by kartiku
    Hi, I get an error with the following SQL command for bulk insert; any help would be appreciated.

        BULK INSERT libra.faculty
        FROM 'd:\faculty.csv'
        WITH (
            FIELDTERMINATOR = ',',
            ROWTERMINATOR = '\n'
        );

    Here's the error message:

        ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL
        server version for the right syntax to use near 'BULK INSERT libra.faculty FROM 'd:\faculty.csv' WITH (
        FIELDTERMINATOR = ',', RO' at line 1
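
    The syntax error is expected: BULK INSERT is SQL Server (T-SQL) syntax, and MySQL does not implement it; MySQL's counterpart is LOAD DATA INFILE. A hedged sketch of the equivalent statement, here issued from Python with mysql-connector-python (the connection details are placeholders, and LOCAL requires the server to permit local infile):

        import mysql.connector  # assumed driver: mysql-connector-python

        conn = mysql.connector.connect(
            host="localhost", user="user", password="secret",
            database="libra", allow_local_infile=True,  # needed for LOCAL below
        )
        cur = conn.cursor()
        cur.execute(r"""
            LOAD DATA LOCAL INFILE 'd:/faculty.csv'
            INTO TABLE faculty
            FIELDS TERMINATED BY ','
            LINES TERMINATED BY '\n'
        """)
        conn.commit()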

    Read the article

  • java database backup restore

    - by jawath
    How do I back up/restore any kind of database inside my Java application to flat files? Are there any tools or frameworks available to back up a database to a flat file like CSV, XML, or a secure encrypted file, or to restore from CSV or XML files to a database? It should also be capable of table-wise backup and restore.

    Read the article
