Search Results

Search found 71879 results on 2876 pages for 'file handling'.

Page 31/2876 | < Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >

  • Uploading and Importing CSV file to SQL Server in ASP.NET WebForms

    - by Vincent Maverick Durano
    A few weeks ago I was working on a small internal project that involved importing a CSV file into a SQL Server database, and I thought I'd share the simple implementation I used. In this post I will demonstrate how to upload a CSV file and import it into a SQL Server database. As some may already know, importing a CSV file into SQL Server is easy and simple, but difficulties arise when the CSV file contains many columns with different data types. Basically, the provider cannot differentiate data types between the columns or the rows; it blindly infers a data type from the first few rows and leaves out any data that does not match it. To overcome this problem, I used a schema.ini file to define the data types of the CSV file and allow the provider to read it and recognize the exact data type of each column.

    Now what is schema.ini? Taken from the documentation: Schema.ini is an information file used to define the data structure and format of each column that contains data in the CSV file. If a schema.ini file exists in the directory, the Microsoft.Jet.OLEDB provider automatically reads it and recognizes the data type information of each column in the CSV file. Thus, the provider avoids misinterpreting data types before inserting the data into the database. For more information see: http://msdn.microsoft.com/en-us/library/ms709353%28VS.85%29.aspx

    Points to remember before creating schema.ini:
      1. The schema information file must always be named 'schema.ini'.
      2. The schema.ini file must be kept in the same directory as the CSV file.
      3. The schema.ini file must be created before reading the CSV file.
      4. The first line of schema.ini must be the name of the CSV file, followed by the properties of the CSV file, and then the properties of each column in the CSV file.

    Here's an example of what the schema looks like:

      [Employee.csv]
      ColNameHeader=False
      Format=CSVDelimited
      DateTimeFormat=dd-MMM-yyyy
      Col1=EmployeeID Long
      Col2=EmployeeFirstName Text Width 100
      Col3=EmployeeLastName Text Width 50
      Col4=EmployeeEmailAddress Text Width 50

    To get started, let's go ahead and create a simple blank database. Just for the purpose of this demo I created a database called TestDB. After creating the database, fire up Visual Studio and create a new WebApplication project. Under the application root create a folder called UploadedCSVFiles and place the schema.ini in that folder; the uploaded CSV files will be stored in this folder after the user imports them. Now add a WebForm to the project, set up the HTML markup, and add one (1) FileUpload control, one (1) Button and three (3) Label controls. After that we can proceed with the code for uploading and importing the CSV file to SQL Server.
    Here is the full code block below:

      using System;
      using System.Data;
      using System.Data.SqlClient;
      using System.Data.OleDb;
      using System.IO;
      using System.Text;

      namespace WebApplication1
      {
          public partial class CSVToSQLImporting : System.Web.UI.Page
          {
              private string GetConnectionString()
              {
                  return System.Configuration.ConfigurationManager.ConnectionStrings["DBConnectionString"].ConnectionString;
              }

              private void CreateDatabaseTable(DataTable dt, string tableName)
              {
                  string sqlQuery = string.Empty;
                  string sqlDBType = string.Empty;
                  string dataType = string.Empty;
                  int maxLength = 0;
                  StringBuilder sb = new StringBuilder();

                  sb.AppendFormat(string.Format("CREATE TABLE {0} (", tableName));

                  for (int i = 0; i < dt.Columns.Count; i++)
                  {
                      dataType = dt.Columns[i].DataType.ToString();
                      if (dataType == "System.Int32")
                      {
                          sqlDBType = "INT";
                      }
                      else if (dataType == "System.String")
                      {
                          sqlDBType = "NVARCHAR";
                          maxLength = dt.Columns[i].MaxLength;
                      }

                      if (maxLength > 0)
                      {
                          sb.AppendFormat(string.Format(" {0} {1} ({2}), ", dt.Columns[i].ColumnName, sqlDBType, maxLength));
                      }
                      else
                      {
                          sb.AppendFormat(string.Format(" {0} {1}, ", dt.Columns[i].ColumnName, sqlDBType));
                      }
                  }

                  sqlQuery = sb.ToString();
                  sqlQuery = sqlQuery.Trim().TrimEnd(',');
                  sqlQuery = sqlQuery + " )";

                  using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                  {
                      sqlConn.Open();
                      SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                      sqlCmd.ExecuteNonQuery();
                      sqlConn.Close();
                  }
              }

              private void LoadDataToDatabase(string tableName, string fileFullPath, string delimeter)
              {
                  string sqlQuery = string.Empty;
                  StringBuilder sb = new StringBuilder();

                  sb.AppendFormat(string.Format("BULK INSERT {0} ", tableName));
                  sb.AppendFormat(string.Format(" FROM '{0}'", fileFullPath));
                  sb.AppendFormat(string.Format(" WITH ( FIELDTERMINATOR = '{0}' , ROWTERMINATOR = '\n' )", delimeter));

                  sqlQuery = sb.ToString();

                  using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                  {
                      sqlConn.Open();
                      SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                      sqlCmd.ExecuteNonQuery();
                      sqlConn.Close();
                  }
              }

              protected void Page_Load(object sender, EventArgs e)
              {
              }

              protected void BTNImport_Click(object sender, EventArgs e)
              {
                  if (FileUpload1.HasFile)
                  {
                      FileInfo fileInfo = new FileInfo(FileUpload1.PostedFile.FileName);
                      if (fileInfo.Name.Contains(".csv"))
                      {
                          string fileName = fileInfo.Name.Replace(".csv", "").ToString();
                          string csvFilePath = Server.MapPath("UploadedCSVFiles") + "\\" + fileInfo.Name;

                          // Save the CSV file on the server inside 'UploadedCSVFiles'
                          FileUpload1.SaveAs(csvFilePath);

                          // Fetch the location of the CSV file
                          string filePath = Server.MapPath("UploadedCSVFiles") + "\\";
                          string strSql = "SELECT * FROM [" + fileInfo.Name + "]";
                          string strCSVConnString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + filePath + ";" + "Extended Properties='text;HDR=YES;'";

                          // Load the data from the CSV into a DataTable
                          OleDbDataAdapter adapter = new OleDbDataAdapter(strSql, strCSVConnString);
                          DataTable dtCSV = new DataTable();
                          DataTable dtSchema = new DataTable();

                          adapter.FillSchema(dtCSV, SchemaType.Mapped);
                          adapter.Fill(dtCSV);

                          if (dtCSV.Rows.Count > 0)
                          {
                              CreateDatabaseTable(dtCSV, fileName);
                              Label2.Text = string.Format("The table ({0}) has been successfully created in the database.", fileName);

                              string fileFullPath = filePath + fileInfo.Name;
                              LoadDataToDatabase(fileName, fileFullPath, ",");

                              Label1.Text = string.Format("({0}) records have been loaded into the table {1}.", dtCSV.Rows.Count, fileName);
                          }
                          else
                          {
                              LBLError.Text = "File is empty.";
                          }
                      }
                      else
                      {
                          LBLError.Text = "Unable to recognize file.";
                      }
                  }
              }
          }
      }

    The code above consists of three (3) private methods: GetConnectionString(), CreateDatabaseTable() and LoadDataToDatabase(). GetConnectionString() is a method that returns a string; it simply gets the connection string that is configured in the web.config file (a sample entry is sketched at the end of this post). CreateDatabaseTable() is a method that accepts two (2) parameters, the DataTable and the file name. As the method name suggests, it automatically creates a table in the database based on the source DataTable and the file name of the CSV file. LoadDataToDatabase() is a method that accepts three (3) parameters: the tableName, the fileFullPath and the delimiter value. This method is where the actual saving or importing of data from the CSV to SQL Server happens. The code in the BTNImport_Click event handles the uploading of the CSV file to the specified location, and this is also where CreateDatabaseTable() and LoadDataToDatabase() are called. If you notice, I also added some basic error trapping and validation within that event.

    Now, to test the import utility, let's create some simple data in CSV format. Just for the simplicity of this demo, create a CSV file named "Employee" and add some data to it. Here's an example:

      1,VMS,Durano,[email protected]
      2,Jennifer,Cortes,[email protected]
      3,Xhaiden,Durano,[email protected]
      4,Angel,Santos,[email protected]
      5,Kier,Binks,[email protected]
      6,Erika,Bird,[email protected]
      7,Vianne,Durano,[email protected]
      8,Lilibeth,Tree,[email protected]
      9,Bon,Bolger,[email protected]
      10,Brian,Jones,[email protected]

    Now save the newly created CSV file somewhere on your hard drive. Okay, let's run the application and browse to the CSV file we have just created. Take a look at the sample screenshots below (captions from the original post): after browsing the CSV file; after clicking the Import button. Now if we look at the database that we created earlier, you'll notice that the Employee table has been created with the imported data in it. See the screenshot below.

    That's it! I hope someone finds this post useful!

    Technorati Tags: ASP.NET,CSV,SQL,C#,ADO.NET
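
    One last note: GetConnectionString() assumes a connection string named DBConnectionString exists in the web.config. A minimal sketch of what that entry might look like is below; the server name and authentication settings are only placeholders for this demo, so adjust them to match your own environment (only the name and the TestDB database come from the steps above):

      <connectionStrings>
        <add name="DBConnectionString"
             connectionString="Data Source=localhost;Initial Catalog=TestDB;Integrated Security=True"
             providerName="System.Data.SqlClient"/>
      </connectionStrings>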

    Read the article

  • Is there an established convention for separating Windows file names in a string?

    - by Heinzi
    I have a function which needs to output a string containing a list of file paths. I can choose the separation character but I cannot change the data type (e.g. I cannot return a List<string> or something like that). Wanting to use some well-established convention, my first intuition was to use the semicolon, similar to what Windows's PATH and Java's CLASSPATH (on Windows) environment variables do: C:\somedir\somefile.txt;C:\someotherdir\someotherfile.txt However, I was surprised to notice that ; is a valid character in an NTFS file name. So, is the established best practice to just ignore this fact (i.e. "no sane person should use ; in a file name and if they do, it's their own fault") or is there some other established character for separating Windows paths or files? (The pipe (|) might be a good choice, but I have not seen it used anywhere yet for this purpose.)
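
    If I do end up going with the pipe, the round trip seems simple enough; since | is one of the characters Windows does not allow in file names, it cannot collide with a path. Here's a minimal C# sketch I put together (the paths are just made-up examples):

      using System;

      class PathListDemo
      {
          static void Main()
          {
              string[] paths =
              {
                  @"C:\somedir\somefile.txt",
                  @"C:\someotherdir\someotherfile.txt"
              };

              // Producer side: join the paths into the single string the function must return.
              string joined = string.Join("|", paths);

              // Consumer side: split the string back into the individual paths.
              string[] parts = joined.Split('|');

              Console.WriteLine(joined);       // C:\somedir\somefile.txt|C:\someotherdir\someotherfile.txt
              Console.WriteLine(parts.Length); // 2
          }
      }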

    Read the article

  • How can I open an .xps file in Evince?

    - by Jakob
    On projects.gnome.org I read that Evince/Document Viewer supports xps files. But when I try to open an xps file I get the error message "Unable to open document: File type Zip archive (application/zip) is not supported". Reading "the full list of supported document formats" on live.gnome.org, I can't find xps there. Now I ask myself (and you): isn't Document Viewer able to open xps files, or is there something wrong with the xps file I am trying to open? I specifically want to do this with Ubuntu 11.10 Oneiric. The PPA ppa:medigeek/evince-xps has no solution for 11.10, and the xpstopdf utility mixes up the letters from my xps file totally; the resulting PDF isn't usable. I want to see a solution for Evince or GNOME in general, not get a recommendation for a KDE application like here.

    Read the article

  • Java multiple connections downloading file

    - by weulerjunior
    Hello friends, I was wanting to add multiple connections in the code below to be able to download files faster. Could someone help me? Thanks in advance.

      public void run() {
          RandomAccessFile file = null;
          InputStream stream = null;
          try {
              // Open connection to URL.
              HttpURLConnection connection = (HttpURLConnection) url.openConnection();
              // Specify what portion of file to download.
              connection.setRequestProperty("Range", "bytes=" + downloaded + "-");
              // Connect to server.
              connection.connect();
              // Make sure response code is in the 200 range.
              if (connection.getResponseCode() / 100 != 2) {
                  error();
              }
              // Check for valid content length.
              int contentLength = connection.getContentLength();
              if (contentLength < 1) {
                  error();
              }
              /* Set the size for this download if it hasn't been already set. */
              if (size == -1) {
                  size = contentLength;
                  stateChanged();
              }
              // Open file and seek to the end of it.
              file = new RandomAccessFile("C:\\" + getFileName(url), "rw");
              file.seek(downloaded);
              stream = connection.getInputStream();
              while (status == DOWNLOADING) {
                  /* Size buffer according to how much of the file is left to download. */
                  byte buffer[];
                  if (size - downloaded > MAX_BUFFER_SIZE) {
                      buffer = new byte[MAX_BUFFER_SIZE];
                  } else {
                      buffer = new byte[size - downloaded];
                  }
                  // Read from server into buffer.
                  int read = stream.read(buffer);
                  if (read == -1) {
                      break;
                  }
                  // Write buffer to file.
                  file.write(buffer, 0, read);
                  downloaded += read;
                  stateChanged();
              }
              /* Change status to complete if this point was reached because downloading has finished. */
              if (status == DOWNLOADING) {
                  status = COMPLETE;
                  stateChanged();
              }
          } catch (Exception e) {
              error();
          } finally {
              // Close file.
              if (file != null) {
                  try {
                      file.close();
                  } catch (Exception e) {
                  }
              }
              // Close connection to server.
              if (stream != null) {
                  try {
                      stream.close();
                  } catch (Exception e) {
                  }
              }
          }
      }

    Read the article

  • File sizing issue in DOS/FAT

    - by Heather
    I've been tasked with writing a data collection program for a Unitech HT630, which runs a proprietary DOS operating system that can run executables compiled for 16-bit MS DOS with some restrictions. I'm using the Digital Mars C/C++ compiler, which is working well thus far. One of the application requirements is that the data file must be human-readable plain text, meaning the file can be imported into Excel or opened by Notepad. I'm using a variable length record format much like CSV that I've successfully implemented using the C standard library file I/O functions. When saving a record, I have to calculate whether the updated record is larger or smaller than the version of the record currently in the data file. If larger, I first shift all records immediately after the current record forward by the size difference calculated before saving the updated record. EOF is extended automatically by the OS to accommodate the extra data. If smaller, I shift all records backwards by my calculated offset. This is working well, however I have found no way to modify the EOF marker or file size to ignore the data after the end of the last record. Most of the time records will grow in size because the data collection program will be filling some of the empty fields with data when saving a record. Records will only shrink in size when a correction is made on an existing entry, or on a normal record save if the descriptive data in the record is longer than what the program reads in memory. In the situation of a shrinking record, after the last record in the file I'm left with whatever data was sitting there before the shift. I have been writing an EOF delimiter into the file after a "shrinking record save" to signal where the end of my records are and space-filling the remaining data, but then I no longer have a clean file until a "growing record save" extends the size of the file over the space-filled area. The truncate() function in unistd.h does not work (I'm now thinking this is for *nix flavors only?). One proposed solution I've seen involves creating a second file and writing all the data you wish to save into that file, and then deleting the original. Since I only have 4MB worth of disk space to use, this works if the file size is less than 2MB minus the size of my program executable and configuration files, but would fail otherwise. It is very likely that when this goes into production, users would end up with a file exceeding 2MB in size. I've looked at Ralph Brown's Interrupt List and the interrupt reference in IBM PC Assembly Language and Programming and I can't seem to find anything to update the file size or similar. Is reducing a file's size without creating a second file even possible in DOS?

    Read the article

  • Silverlight file upload - file is in use by another process (Excel, Word)

    - by walkor
    Hi all. I have a problem with uploading a file in a Silverlight application. Here is a code sample. When the file is open in another application (Excel or Word, for example), it fails to open it; otherwise it works fine. I'm using OpenFileDialog to choose the file and pass it to this function.

      private byte[] GetFileContent(FileInfo file)
      {
          var result = new byte[] { };
          try
          {
              using (var fs = file.OpenRead())
              {
                  result = new byte[file.Length];
                  fs.Read(result, 0, (int)file.Length);
              }
          }
          catch (Exception e)
          {
              // File is in use
          }
          return result;
      }

    Is there any way I can access this file, or should I just notify the user that the file is locked?

    Read the article

  • Is it possible to save output from a command to a file after the command already has been executed?

    - by NES
    Does an elegant way exist to save the output of a command to a file after the command has been run, while the terminal window is still open? I mean once the command has been executed in the terminal, the output is still there in the terminal. Now I could copy & paste all the lines and save them to a file. But perhaps a method exists to somehow write the output buffer of a terminal window to a file, or even better, the output of an already executed command?

    Read the article

  • Implementing Qt File Dialog with a Different File System Library (boost)

    - by knight
    Hi, I am writing an application which requires me to use different file system and file engine handlers rather than Qt's default ones. Basically, what I want is to be able to use Qt's file dialog but have an underlying file system handler of mine (for example one built using the Boost filesystem library) handling all the file and directory operations within that dialog. I have already written a custom file engine which handles some of the operations, but I am now stuck with Qt's file system model and the file system watcher engine, as I need to have the signals transmitted for this custom file engine. It seems like I have a daunting task ahead. Am I heading in the right direction? Is there any other, simpler way that I could implement this? Can anyone give me any idea of how to proceed? I was thinking of looking into proxy models but am not sure if that would work. Thanks in advance for any help.

    Read the article

  • How can I set 'Print to File' as my default printing option?

    - by edm
    At the moment when I print, my Deskjet-3050 is selected as the default printer. I would like 'Print to File' to be the default 'printer', without using cups-pdf. I specifically do not want to use cups-pdf because of the way it renders text (see below). I am not entirely sure what it is doing, but it seems as though it renders the text as bitmaps and embeds them in the PDF (I am not able to highlight/copy/search the embedded text the way I can with a standard Print to File PDF). N.B. this is not a dupe of: Can I make PDF the default for 'print to file'

    Read the article

  • Android: Read in binary data and write it to a file

    - by Shpongle
    Hi all, I'm trying to read in an image file from a server with the code below. It keeps going into the exception. I know the correct number of bytes are being sent, as I print them out when received. I'm sending the image file from Python like so:

      #open the image file and read it into an object
      imgfile = open (marked_image, 'rb')
      obj = imgfile.read()
      #get the no of bytes in the image and convert it to a string
      bytes = str(len(obj))
      #send the number of bytes
      self.conn.send( bytes + '\n')
      if self.conn.sendall(obj) == None:
          imgfile.flush()
          imgfile.close()
          print 'Image Sent'
      else:
          print 'Error'

    Here is the Android part; this is where I'm having the problem. Any suggestions on the best way to go about receiving the image and writing it to a file?

      //read the number of bytes in the image
      String noOfBytes = in.readLine();
      Toast.makeText(this, noOfBytes, 5).show();
      byte bytes [] = new byte [Integer.parseInt(noOfBytes)];
      //create a file to store the retrieved image
      File photo = new File(Environment.getExternalStorageDirectory(), "PostKey.jpg");
      DataInputStream dis = new DataInputStream(link.getInputStream());
      try {
          os = new FileOutputStream(photo);
          byte buf[] = new byte[1024];
          int len;
          while ((len = dis.read(buf)) > 0)
              os.write(buf, 0, len);
          Toast.makeText(this, "File recieved", 5).show();
          os.close();
          dis.close();
      } catch (IOException e) {
          Toast.makeText(this, "An IO Error Occured", 5).show();
      }

    EDIT: I still can't seem to get it working. I have been at it since, and the result of all my efforts has been either a file that is not the full size, or the app crashing. I know the file is not corrupt before sending, on the server side. As far as I can tell it's definitely sending too, as the sendall method in Python sends everything or throws an exception in the event of an error, and so far it has never thrown an exception. So the client side is messed up. I have to send the file from the server, so I can't use the suggestion from Brian.

    Read the article

  • What file format/database format does Picasa use?

    - by Raymond
    I am trying to figure out what file format the .db file and .pmp files are. I tried using db_dump (Berkeley DB) for the .db files, but it seems that they are not Berkeley DB, or are of an older version. I have no idea what the .PMP files are.

      Directory of C:\Users\me\AppData\Local\Google\Picasa2\db3
      6/09/2010  08:07 PM         303,748  imagedata_uid64.pmp
      1/18/2010  10:34 PM           4,885  imagedata_unification_lhlist.pmp
      6/09/2010  10:55 PM         155,752  imagedata_width.pmp
      6/09/2010  10:55 PM   1,286,346,614  previews_0.db
      6/10/2010  10:06 AM         467,168  previews_index.db

    Any help appreciated.

    Read the article

  • robocopy transfer file and not folder

    - by Bill McKay
    I'm trying to use robocopy to transfer a single file from one location to another, but robocopy seems to think I'm always specifying a folder. Here is an example:

      robocopy "c:\transfer_this.txt" "z:\transferred.txt"

    But I get this error instead:

      2009/08/11 15:21:57 ERROR 123 (0x0000007B) Accessing Source Directory c:\transfer_this.txt\

    (note the '\' at the end of transfer_this.txt) But if I treat it like an entire folder:

      robocopy "c:\folder" "z:\folder"

    It works, but then I have to transfer everything in the folder. How can I only transfer a single file with robocopy?
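
    I'm guessing from that directory error that robocopy wants a source directory, a destination directory and then the file name(s) as separate arguments, so a single-file copy would presumably look something like the line below (the folder names here are just examples, and as far as I can tell the file keeps its own name; robocopy doesn't appear to rename on copy):

      robocopy c:\source_folder z:\dest_folder transfer_this.txt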

    Read the article

  • Digital Asset Management, iPhoto / Aperture server... alternative

    - by Sisyphus
    Afternoon,

    Clients: 10, all Apple machines running either Leopard or Snow Leopard.
    Server: Snow Leopard Server (and I have an old Dell PowerEdge 650 at home running Gentoo 2.6, if anybody has a Linux solution).

    The situation: I work in a small design company with 8 people. At present we are looking to consolidate all our image files into one location; right now we each use our preferred single-user DAM solution, be it Adobe Bridge or iPhoto/Aperture (some don't bother at all). The file types commonly used are .psd, .pdf, .eps, .tiff, .jpg and RAW image files.

    Ideally what is needed:
      - Centralised on one server, but allowing us to search via Spotlight (not essential, but would be nice)
      - Searchable metadata such as date, location, title
      - Open source, or as low cost as possible
      - Simultaneous users able to import files

    So far I have looked at a few open-source DAM systems, such as Razuna, Gallery (not strictly DAM), ResourceSpace and Notre-DAM; while these are brilliant and open source, they don't integrate as smoothly with the desktop as iPhoto and Aperture. For iPhoto and Aperture, I have tried creating a shared library on the server (a tad laggy), and also using a drive with no permissions, putting a library on it and letting each client read from it; however, if they want to put images into the library, it only supports one user at a time writing to it... Any ideas what could fulfil our needs? Or is it time to bite the bullet for Final Cut Server? Thanks in advance.

    Read the article

  • Free-as-in-beer binary file format inspector

    - by fbrereto
    I am looking for a utility that gives me the ability to specify a binary file format and then interpret a file of bytes according to that format. (Something along the lines of the 010 Editor, but infinitely more cost-effective). Something that runs on Mac OS X would be preferred, but I'm interested to see what all is out there in general (while more of a hassle I'd be willing to run a tool on Windows if it were superior.) What's your preference?

    Read the article

  • Utility to unmap a network drive when the screen saver starts

    - by JimR
    I'm looking for a way to unmap network drives when the screen saver turns on. I have a few users that share an external, encrypted drive (Samba share, not windows) and they have a requirement to disconnect the drive mapping when the local machine is idle. I'd also like it to warn them if there are open files on the mapped drive, if possible. There is also a requirement to force the password to be reentered before mapping when the machine comes back from idle. Is there a Windows setting or utility out there in the wild that meets these requirements?

    Read the article

  • SQL Server 2005, Huge LDF file.

    - by Scott Jackson
    Hi, I have a database running on SQL Server 2005. The database is 20 GB and the LDF file is 35 GB! I'm now running low on disk space and want to shrink the log file. How can I do this, and how can I stop it happening again? Many thanks, Scott

    Read the article

  • What OS would you use for a simple fileserver and why?

    - by barfoon
    Hey everyone, I'm looking to set up a simple fileserver:
      - 5-7 clients - mixed Windows, Linux, Mac OS X - connecting over wireless and wired
      - Serving ~200GB of content - photos, MP3s, ISOs etc.

    What OS would you recommend for this fileserver? I understand XP limits the number of concurrent connections to different shares, so this probably isn't the best choice. Any recommendations are appreciated. Thank you,

    Read the article

  • Mac keyboard shortcut to rm file

    - by MattDiPasquale
    What's the Mac keyboard shortcut to rm a file? I know command + delete sends it to trash, but I want to permanently delete it, say with command + fn + delete. UPDATE: It doesn't look like there is one. So, I want to create a service with Automator and then assign a keyboard shortcut to it from System Preferences. I can get to Automator - Service - Service receives selected files or folders in Finder.app, but how do I write the script that then runs rm -rf #{file/folder name}?
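
    I'm imagining something like the following inside a 'Run Shell Script' action, with 'Pass input' set to 'as arguments', though I haven't verified it; the loop just runs rm -rf over each selected item that Finder hands to the service:

      for f in "$@"; do
        rm -rf "$f"
      done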

    Read the article
