Search Results

Search found 85647 results on 3426 pages for 'file write'.


  • Old hard drive file permissions still there

    - by blsub6
    I have a new hard drive, put Windows 7 on it and want to get all the files off of my old hard drive. I put in my old hard drive as a slave drive. I can see the files but when I try to move 'em, it tells me that I'm not the owner of the file. I try to take ownership of the file and it doesn't work (it doesn't tell me that I can't take ownership of it, it goes through, just gives me the same error when I try and open the file again). I've tried modding the permissions, no dice. Anything else I can try?
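    One route that often works here (a sketch, not a guaranteed fix) is to take ownership and re-grant permissions for the whole tree from an elevated prompt, wrapping the built-in takeown and icacls tools; the drive letter, folder and account below are placeholders, not values from the question:

        import subprocess

        OLD_DRIVE_DIR = r"E:\Users\OldProfile"   # placeholder: folder on the old (slave) drive
        NEW_OWNER = r"NEWPC\YourUser"            # placeholder: machine\account on the Windows 7 install

        # Take ownership of every file and folder, answering "yes" where listing is denied.
        subprocess.run(["takeown", "/F", OLD_DRIVE_DIR, "/R", "/D", "Y"], check=True)

        # Grant the new account full control over the whole tree.
        subprocess.run(["icacls", OLD_DRIVE_DIR, "/grant", f"{NEW_OWNER}:F", "/T"], check=True)

    Run it from an Administrator session, or run the two commands directly in an elevated cmd window.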

    Read the article

  • Fixing mac user file permissions, not the system

    - by Cawas
    Usually those files get the wrong permissions when they come from the network, even when I copy them from it, but mostly through "file sharing". So, definitely not talking about Disk Utility repair here, please. But regardless of how a file got the wrong permissions, I know of two clumsy ways to fix them: one is Cmd+I and the other is chown/chmod. The command line isn't all bad, but it isn't practical either. Sometimes it's just one file I need to repair, sometimes it's a bunch of them. By "repair" I mean 644 for files, 755 for folders, and the current user:group for all of them. Isn't there an app, script or Automator workflow out there to do that?
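    Absent a ready-made app, one practical middle ground is a small script you can point at a file or a folder. A minimal sketch of the 644/755 part (it does not touch ownership, which may need sudo):

        #!/usr/bin/env python3
        # Normalize permissions: 644 for files, 755 for directories.
        import os, sys

        def repair(path):
            if os.path.isdir(path):
                os.chmod(path, 0o755)
                for root, dirs, files in os.walk(path):
                    for d in dirs:
                        os.chmod(os.path.join(root, d), 0o755)
                    for f in files:
                        os.chmod(os.path.join(root, f), 0o644)
            else:
                os.chmod(path, 0o644)

        for arg in sys.argv[1:]:
            repair(arg)

    Saved as, say, repairperms.py, it can be run on one file or a bunch of them at once, or wrapped in an Automator "Run Shell Script" action so it shows up as a Finder service.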

    Read the article

  • File apparently doesn't exist when attempting to delete it

    - by Alex Yan
    A month or so back, I untarred the Linux source in a folder in Cygwin (I was curious whether or not it would compile with MinGW, because my other computer running Linux is a slow single-core Sempron). I tried deleting it, but there's one file left that will not delete. Cygwin resides in C:\cygwin, and I untarred the source in C:\cygwin\src\linux-3.7.1. It didn't compile, so I tried deleting the folder. It was going well until the end, when I realized not all the files had been deleted. I tried deleting the linux-3.7.1 folder again, and an error popped up (screenshot not reproduced here). I opened the folder and found that there is one source file left: aux.c, located at C:\cygwin\src\linux-3.7.1\drivers\gpu\drm\nouveau\core\subdev\i2c\aux.c. It will not delete, open or move. (The file's General and Security properties dialogs were attached as screenshots.) How do I remove this file?
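    For what it's worth, aux is one of the reserved DOS device names (like con, prn, nul), which is usually why Explorer and cmd refuse to touch such a file; prefixing the path with \\?\ bypasses that legacy name parsing. A sketch of that workaround, using the path from the question:

        import os

        # The \\?\ prefix tells Windows to skip reserved-device-name parsing,
        # so a file literally named aux.c can be deleted.
        path = r"\\?\C:\cygwin\src\linux-3.7.1\drivers\gpu\drm\nouveau\core\subdev\i2c\aux.c"
        os.remove(path)

    Deleting it from inside a Cygwin shell with rm may also work, since Cygwin does not use the legacy Win32 name rules.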

    Read the article

  • Recover open but deleted file on Linux using ln instead of cp

    - by Yang
    Say I have a file that's downloading (from a source that's hard to re-download from), but accidentally deleted from the filesystem namespace (/tmp/blah), and I'd like to recover this file. Normally I could just cp /proc/$PID/fd/$FD /tmp/blah, but in this case that would only get me a partial snapshot, since the file is still downloading. Furthermore, once the download completes, the downloading process (e.g. Chrome) will close the FD. Any way to recover by inode/create a hard link? Any other solutions? If it makes any difference, I'm mainly concerned with ext4. Thanks in advance.
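    One workaround that avoids filesystem surgery: open the /proc fd path yourself, which gives you your own handle on the same (deleted) inode and keeps the data reachable even after Chrome closes its descriptor; then copy everything out once the download stops growing. A rough sketch, with PID and FD as placeholders (find them via ls -l /proc/<pid>/fd or lsof):

        #!/usr/bin/env python3
        import os, shutil, time

        PID, FD = 12345, 42                       # placeholders: downloader's pid and fd number
        src = open(f"/proc/{PID}/fd/{FD}", "rb")  # our own handle pins the inode alive

        prev = -1
        while True:
            size = os.fstat(src.fileno()).st_size
            if size == prev:                      # unchanged for one interval: assume download finished
                break
            prev = size
            time.sleep(10)

        src.seek(0)
        with open("/tmp/blah", "wb") as dst:
            shutil.copyfileobj(src, dst)

    This is not a true re-link of the inode (there is generally no supported way to hard-link a deleted inode back into the namespace from userspace), but it does recover the complete file.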

    Read the article

  • How to run an AFS file server on a specific Ethernet card (in Debian)

    - by listboss
    I have a Linux box running Debian server with a minimal set of packages (so no GUI for network management). The box has two Ethernet cards, one of which (eth0) is connected to a Mac OS X computer with a crossover cable. I can bring up eth0 and assign a static IP (10.10.11.16) to it, which lets me SSH to the box through the crossover cable. This is what I run on the Linux box: ifconfig eth0 10.10.11.16 netmask 255.255.255.0 up I also installed and started an AFS file server on Debian. So far, the file server can only be accessed through eth1, which is exposed to my home LAN and the Internet. My goal is to set up the file server so that it's only visible through eth0. Is this possible, and if yes, how can I do it?

    Read the article

  • Why is my hosts file not working?

    - by elliot100
    I've been using the hosts file for local website development, and it's recently stopped working: no entries other than localhost resolve. I've simplified it to test, so it now contains only:

        127.0.0.1 localhost
        ::1 localhost
        127.0.0.1 test.dev

    localhost responds to ping; test.dev does not. The file is called hosts with no extension. It has no trailing spaces. It's saved in C:\WINDOWS\System32\drivers\etc, which matches the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\DataBasePath. Oddly, despite UAC being on, I can edit, delete and save the file without admin permissions. No proxy is being used, and the PC is not connected to a network for testing. Stopping the DNS Client service seemed to resolve the issue for a few minutes: test.dev briefly resolved but doesn't any more. The only firewall is Windows'. The machine has been restarted. Is there anything else I should try?
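    One more thing that may be worth ruling out (a guess, not a diagnosis): a hosts file saved with a Unicode encoding or a byte-order mark is reported to be silently ignored on some setups. A quick check:

        # Inspect the hosts file's first bytes for a BOM / UTF-16 encoding.
        path = r"C:\WINDOWS\System32\drivers\etc\hosts"
        raw = open(path, "rb").read()

        if raw.startswith(b"\xef\xbb\xbf"):
            print("UTF-8 BOM present - re-save as plain ANSI/UTF-8 without BOM")
        elif raw.startswith((b"\xff\xfe", b"\xfe\xff")):
            print("File is UTF-16 - re-save as plain ANSI text")
        else:
            print("No BOM found - encoding is probably not the culprit")

    Flushing the resolver cache afterwards (ipconfig /flushdns) makes sure a stale negative entry isn't masking the fix.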

    Read the article

  • Linux - File was deleted and then reappeared when folder was zipped

    - by davee9
    Hello, I am using Backtrack 4 Final, which is a Linux distro that is Ubuntu based. I had a directory that contained around 5 files. I deleted one of the files, which sent it to the trash. I then zipped the directory up (now containing 4 files), using this command: zip -r directory.zip directory/ When I then unzipped directory.zip, the file I deleted was in there again. I couldn't believe this, so I zipped up the directory again, and the file reappeared again but this time could not be opened because the operating system said it didn't exist or something. I don't remember the exact error, and I cannot make this happen again. Would anyone happen to know why a file that was deleted from a directory would reappear in that directory after it was zipped up? Thank you.

    Read the article

  • Opening NBF backup file?

    - by ellisgeek
    I have a backup file from before I reinstalled Windows, but I am unable to open it because the file is an NBF. It was created with Acer Backup Manager, which is a proprietary version of NTI's backup software. Is there any way to open this? I have tried using NTI Backup Now! 4.x, but it says the file is invalid. Acer Backup Manager will only let me restore the ENTIRE image (not what I want), and many hours of googling have left me empty-handed.

    Read the article

  • Batch edit (not rename) file properties in windows

    - by Jay
    I have a large directory of downloaded shareware. I keep track of what I have by individually editing the properties of each program. However, some of the programs are multipart .rar archives, and I have at least a few hundred programs so far. I am looking for a utility that will let me batch-edit file properties such as Title, Author, Summary and Comments, so I don't have to edit each file or file part individually. Windows doesn't let me do this in Explorer. PowerDesk has a proprietary system, but it isn't preserved when moving or copying files. Any suggestions?

    Read the article

  • nginx static file buffer

    - by Philip
    I have an NFS share to which several front-end servers are connected, to make the files stored on it available for HTTP downloads. It looks like I have problems with the way Apache is serving the files: there seems to be a very small buffer, or no buffer at all, which results in a lot of disk seeks. I did some testing with loading the whole requested file into memory at once and serving it to the client from memory; with this technique I need fewer disk seeks per download stream. Since I don't want to implement this myself for production use, I thought I could maybe use nginx for that, because the documentation says it uses buffers for static file serving. Is it possible to increase the buffer size to a few MB, and if so, which config parameter do I have to change? Does anyone have experience with large buffers for static file serving? Is there a better way to reduce disk seeks?

    Read the article

  • Reset Photoshop File Associations

    - by Rev
    Is there a way to reset Photoshop's file associations without having to reinstall? I had CS6 and CS5.5 installed side by side, and when I uninstalled CS5.5 it removed the file associations. I tried searching around, but everyone seems to have the opposite problem (wanting to remove Photoshop's file associations). Oh, and just doing Open With > Photoshop and setting that as the default doesn't really work right: it displays the wrong icons (which really gets on my nerves). Running Windows 8 RP (but the fix should be the same as in Windows 7).

    Read the article

  • How to download a URL as a file?

    - by Michelle
    A website URL has "hidden" some MP3 files by embedding them as Shockwave files, as follows. <span class="caption"><!-- Odeo player --><embed src="http://odeo.com/flash/audio_player_tiny_gray.swf"quality="high" name="audio_player_tiny_gray" align="middle" allowScriptAccess="always" wmode="transparent" type="application/x-shockwave-flash" flashvars="valid_sample_rate=true external_url=http://podcast.cbc.ca/mp3/sundayeditionstream_20081125_9524.mp3" pluginspage="http://www.macromedia.com/go/getflashplayer"></embed></span> How can I download the files for off-line listening? I've found two methods: 1. The Stack Overflow Method Create a new local HTML file with just the links, for example: <a href="http://podcast.cbc.ca/mp3/sundayeditionstream_20081125_9524.mp3">Sunday Edition 25Nov2008</a> Open the file in the browser, right click the link and File Save Link As. 2. The Super User Method Install the Firefox addin Iget. (Be sure to use the right version for your Firefox version.) Tools Downloads Enter URL in the field. Are there any other ways?
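    A third option is to script the fetch: the real MP3 address is sitting in the embed's flashvars, so any HTTP client can pull it directly. A minimal sketch in Python:

        from urllib.request import urlretrieve

        # URL taken from the external_url value in the flashvars above.
        url = "http://podcast.cbc.ca/mp3/sundayeditionstream_20081125_9524.mp3"
        urlretrieve(url, "sundayedition_20081125.mp3")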

    Read the article

  • Windows HOSTS File

    - by jdp
    I have a network of computers and a server. I often need to edit the hosts file on all the computers to add a new entry or change an existing one, e.g. 192.168.1.101 temporary-internal-site How can I get all the machines (all Windows) to 'import' or use a hosts file on the server in addition to the normal one? That way I could just edit that one file and all the machines would pick up the new entry. Feel free to ask questions if you don't understand.
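    Windows has no built-in "include" mechanism for the hosts file, so the usual workaround is a small script, pushed out as a startup or scheduled task, that merges a server-side file into each machine's local copy. A rough sketch (the UNC path and marker comments are placeholders; it must run elevated):

        BEGIN, END = "# BEGIN shared entries", "# END shared entries"
        shared = open(r"\\server\share\hosts.shared").read().rstrip("\n")
        hosts_path = r"C:\Windows\System32\drivers\etc\hosts"

        lines = open(hosts_path).read().splitlines()
        if BEGIN in lines and END in lines:
            # Drop the previously merged block, markers included.
            lines = lines[:lines.index(BEGIN)] + lines[lines.index(END) + 1:]
        lines += [BEGIN, *shared.splitlines(), END]

        open(hosts_path, "w").write("\n".join(lines) + "\n")

    Edit \\server\share\hosts.shared once, and every machine picks up the change the next time the task runs.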

    Read the article

  • How to open a text file and write to it append-style with PHP?

    - by Chris_45
    How do you open a text file with PHP and write to it append-style? Writing to textFile.txt:

        // caught these variables
        $var1 = $_POST['string1'];
        $var2 = $_POST['string2'];
        $var3 = $_POST['string3'];

        $handle = fopen("textFile.txt", "w");
        fwrite($handle, sprintf("%s %s %s\n", $var1, $var2, $var3)); // not the way to append to the text file
        fclose($handle);

    Read the article

  • Cannot write to Folder mounted with SSHFS

    - by JM at Work
    I just mounted a folder with SSHFS, following the Ubuntu docs:

        sudo apt-get install sshfs
        sudo gpasswd -a jm fuse
        sshfs -o idmap=user [email protected]:/path/to/folder folder

    Then I found that the folder is mounted, but I cannot write to it. The permissions seem fine: http://pastie.org/1969299 I even tried chmod -R 777 ./folder, and still no go. UPDATE: It seems I can't write using NetBeans only; it works with Leafpad, for example.

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science, but it may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains... upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results... we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes... this will be detailed separately. We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps yield a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following... less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). (A diagram of the process accompanied the original post.) We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size, including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and this is supported in the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB. The first chart (not reproduced here) illustrates some interesting points about the results:
    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.
    The second chart (not reproduced here) is another view of the same data with the axes changed: the x-axis represents file size and the plotted data shows improvement by block size. It again highlights the fact that the 1MB block size is probably the best overall size, but it also highlights the benefits of some of the other block sizes at different source file sizes.
    The last chart (not reproduced here) shows the change in total duration of the file uploads based on different block sizes for the various source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files. Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions. Related Resources:
    - Source code for the upload test application
    - Source code for the random file generator
    - OData feed of raw data from the non-optimized transfer tests (Experiment Metadata, Experiment Datasets: 2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads, Raw Data)
    - OData feeds of raw data from the blocked/parallelized transfer tests (Experiment Metadata, Experiment Datasets, Raw Data: 256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks)
    - Excel worksheet showing summarizations and comparisons
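    For readers who only want the shape of the optimization without reading the linked C# harness, here is a rough sketch of the block-and-parallelize pattern; put_block and put_block_list below are hypothetical stand-ins for the storage client's block-upload and commit calls, not a real API:

        import base64, os
        from concurrent.futures import ThreadPoolExecutor

        BLOCK_SIZE = 1 * 1024 * 1024   # 1MB: the best fixed choice per the results above

        def upload_blocked(path, put_block, put_block_list, workers=8):
            size = os.path.getsize(path)
            offsets = list(range(0, size, BLOCK_SIZE))
            block_ids = [base64.b64encode(f"block-{i:08d}".encode()).decode() for i in range(len(offsets))]

            def send(i, offset):
                # Each worker opens its own handle to avoid seek races.
                with open(path, "rb") as f:
                    f.seek(offset)
                    put_block(block_ids[i], f.read(BLOCK_SIZE))

            with ThreadPoolExecutor(max_workers=workers) as pool:
                futures = [pool.submit(send, i, off) for i, off in enumerate(offsets)]
                for fut in futures:
                    fut.result()           # surface any upload failure before committing

            put_block_list(block_ids)      # assemble/commit the blocks into the final blob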

    Read the article

  • Give write access to USB and Serial devices automatically

    - by Saeid87
    I am working with some USB and serial microcontrollers. Every time I plug in a device I have to run the following command (and enter my password) to give it write access: sudo chmod 666 /dev/ttyUSB0 Can I set up Ubuntu to grant write access to plugged-in devices automatically? If not, how can I make a small script so that I can easily grant access to the port I want? For example, running the following command would do the job: giveaccess -usb0
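    Adding your user to the dialout group (sudo adduser $USER dialout, then log out and back in) is usually enough on Ubuntu, since /dev/ttyUSB* and /dev/ttyACM* belong to that group. If you really want the devices to come up world-writable automatically, a udev rule does it; a sketch that installs such a rule (run once with sudo):

        # Install a udev rule so USB-serial devices are created with mode 0666.
        rule = ('KERNEL=="ttyUSB[0-9]*", MODE="0666"\n'
                'KERNEL=="ttyACM[0-9]*", MODE="0666"\n')

        with open("/etc/udev/rules.d/99-usb-serial.rules", "w") as f:
            f.write(rule)

        # Afterwards: sudo udevadm control --reload-rules, then re-plug the device.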

    Read the article

  • Motivating yourself to actually write the code after you've designed something

    - by dpb
    Does it happen only to me or is this familiar to you too? It's like this: you have to create something - a module, a feature, an entire application, whatever. It is something interesting that you have never done before; it is challenging. So you start to think about how you are going to do it. You draw some sketches. You write some prototypes to test your ideas. You put different pieces together to get the complete view. You finally end up with a design that you like, something that is simple, clear to everybody, easily maintainable... you name it. You covered every base, you thought of everything. You know that you are going to have this class and that file and that database schema. Configure this here, adapt this other thingy there, etc. But now, after everything is settled, you have to sit down and actually write the code for it. And it's not challenging anymore... Been there, done that! Writing the code now is just "formalities" and feels like reiterating what you've just finished. At my previous job I sometimes got away with it because someone else did the coding based on my specifications, but at my new gig I'm in charge of the entire process, so I have to do this too ('cause I get paid to do it). But I have a pet project I'm working on at home, after work, and there it's just me and no one is paying me to do it. I do the creative work, and then when the time comes to write it down I just don't feel like it (let's browse the web a little, see what's new on P.SE, on SO, etc.). I just want to move on to the next challenging thing, and then the next, and the next... Does this happen to you too? How do you deal with it? How do you convince yourself to go in and write the freaking code? I'll take any answer.

    Read the article

  • Write Your Own Microsoft Expression Blend 3 Addin

    Although there have been numerous articles introducing how to write a Microsoft Expression Blend 2 add-in, Microsoft Expression Blend 3/4 related ones are few. In this article, you will learn what a Microsoft Expression Blend add-in is and walk through the common routine for writing one. Finally, you will master the inner workings of the add-in through a concrete sample application.

    Read the article

  • Is a 1:* write:read thread system safe?

    - by Di-0xide
    Theoretically, thread-safe code should fix race conditions. Race conditions, as I understand it, occur because two threads attempt to write to the same location at the same time. However, what about a threading model in which a single thread is designed to write to a location, and several slave/worker threads simply read from the location? Assuming the value/timing at which they read the data isn't relevant/doesn't hinder the worker thread's outcome, wouldn't this be considered 'thread safe', or am I missing something in my logic?
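    The honest answer is "it depends on the language's memory model": a single-writer/multi-reader design avoids write-write conflicts, but without some form of synchronization (volatile/atomic variables, a lock, or publishing immutable snapshots) readers may still observe stale or half-updated state. A small illustration of the snapshot-publishing idea, which sidesteps torn reads because the writer never mutates shared data in place (a sketch; the atomicity of the rebind is a CPython detail):

        import threading, time

        snapshot = {"value": 0}              # writer swaps in a whole new dict, never edits this one

        def writer():
            global snapshot
            for i in range(1, 6):
                snapshot = {"value": i}      # readers see either the old or the new snapshot, never a mix
                time.sleep(0.1)

        def reader(name):
            for _ in range(5):
                s = snapshot                 # take one consistent snapshot, then work with it
                print(name, s["value"])
                time.sleep(0.1)

        threads = [threading.Thread(target=writer)]
        threads += [threading.Thread(target=reader, args=(f"r{i}",)) for i in range(3)]
        for t in threads: t.start()
        for t in threads: t.join()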

    Read the article

  • Windows 8 Stuck on Start Screen File Search

    - by baturayd
    I have been searching for someone else with the same issue but couldn't find any. Here is my problem: I'm using Windows 8 Pro with Media Center. The file search screen does not close itself after I make a file search from the Start screen. It gets stuck on that screen. I can't go back to the desktop, so Task Manager is inaccessible. The only way to get out of it is to sign out. It also appears non-functional: it doesn't give me any results at all, just a blank screen with a "Files" title. It used to work perfectly. Things I have done before coming here: a Safe Mode minimal boot to ensure no 3rd-party software interferes; sfc hasn't found any inconsistencies; the search index has been rebuilt; a normal boot with all 3rd-party services and start-up items disabled; a System Restore to a date when I know this feature was functional. And by the way, I have installed all updates released so far. I used file search via the Start menu heavily back in Windows 7, so this is an absolute game changer for me. I'm curious what causes this. I'll do a "system refresh" if I can't fix it. I'm working on it, and I'll keep this thread updated should I find a fix. First update: I just discovered that the file search screen gets stuck only when it's invoked by typing a query directly on the Start screen. It doesn't get stuck when invoked from the Win+X menu, and I was able to "escape" from the stuck screen by invoking it again via the Win+X menu. After rebuilding the search index again, search suggestions started to appear again, but there is still no file search functionality. Second update: the "Results for" text appears for only a second next to the "Files" title when a search is initiated. Third update: as a last resort, I finally tried a "System Refresh", which also failed, giving an error after waiting almost 20 minutes at 99%. (Seriously?) After a cold boot it began to undo the changes. After the changes were reverted, I booted the machine without doing anything further and, bingo, search began acting normally again. This is a totally weird solution. It apparently fixed certain system files during the failed refresh and kept those changes because they were supposed to be that way in the first place. I'll keep this thread alive should anyone come up with a more logical explanation. Fourth update: after a Windows update, search stopped responding again. This is the first time it has happened right after a system update. Now I have a pretty good suspicion about Windows Update, though I have no solid proof of its involvement with this problem.

    Read the article

  • FileInput Help/Advice

    - by user559142
    I have a FileInput class. It has a String parameter in the constructor to load the file name supplied. However, it just exits if the file doesn't exist. I would like it to output a message if the file doesn't exist, but I'm not sure how. Here is the class:

        public class FileInput extends Input {

            /**
             * Construct <code>FileInput</code> object given a file name.
             */
            public FileInput(final String fileName) {
                try {
                    scanner = new Scanner(new FileInputStream(fileName));
                } catch (FileNotFoundException e) {
                    System.err.println("File " + fileName + " could not be found.");
                    System.exit(1);
                }
            }

            /**
             * Construct <code>FileInput</code> object given a file stream.
             */
            public FileInput(final FileInputStream fileStream) {
                super(fileStream);
            }
        }

    And its usage:

        private void loadFamilyTree() {
            out.print("Enter file name: ");
            String fileName = in.nextLine();
            FileInput input = new FileInput(fileName);
            family.load(input);
            input.close();
        }

    Read the article

  • Uploading and Importing CSV file to SQL Server in ASP.NET WebForms

    - by Vincent Maverick Durano
    A few weeks ago I was working on a small internal project that involved importing a CSV file into a SQL Server database, and I thought I'd share the simple implementation that I did for the project. In this post I will demonstrate how to upload and import a CSV file into a SQL Server database. As some of you may already know, importing a CSV file into SQL Server is easy and simple, but difficulties arise when the CSV file contains many columns with different data types. Basically, the provider cannot differentiate data types between the columns or the rows; it blindly infers a data type from the first few rows and leaves out any data which does not match that type. To overcome this problem, I used a schema.ini file to define the data types of the CSV file, allowing the provider to read it and recognize the exact data type of each column. Now what is schema.ini? Taken from the documentation: Schema.ini is an information file, used to define the data structure and format of each column that contains data in the CSV file. If a schema.ini file exists in the directory, the Microsoft.Jet.OLEDB provider automatically reads it and recognizes the data type information of each column in the CSV file. Thus, the provider intelligently avoids the misinterpretation of data types before inserting the data into the database. For more information see: http://msdn.microsoft.com/en-us/library/ms709353%28VS.85%29.aspx Points to remember before creating schema.ini:
    1. The schema information file must always be named 'schema.ini'.
    2. The schema.ini file must be kept in the same directory where the CSV file exists.
    3. The schema.ini file must be created before reading the CSV file.
    4. The first line of the schema.ini must be the name of the CSV file, followed by the properties of the CSV file, and then the properties of each column in the CSV file.
    Here's an example of what the schema looks like:

        [Employee.csv]
        ColNameHeader=False
        Format=CSVDelimited
        DateTimeFormat=dd-MMM-yyyy
        Col1=EmployeeID Long
        Col2=EmployeeFirstName Text Width 100
        Col3=EmployeeLastName Text Width 50
        Col4=EmployeeEmailAddress Text Width 50

    To get started, let's go ahead and create a simple blank database. Just for the purpose of this demo I created a database called TestDB. After creating the database, let's fire up Visual Studio and create a new WebApplication project. Under the application root, create a folder called UploadedCSVFiles and place the schema.ini in that folder. The uploaded CSV files will be stored in this folder after the user imports them. Now add a WebForm to the project, set up the HTML markup, and add one (1) FileUpload control, one (1) Button and three (3) Label controls. After that we can proceed with the code for uploading and importing the CSV file into the SQL Server database.
    Here is the full code block below:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Data.OleDb;
    using System.IO;
    using System.Text;

    namespace WebApplication1
    {
        public partial class CSVToSQLImporting : System.Web.UI.Page
        {
            private string GetConnectionString()
            {
                return System.Configuration.ConfigurationManager.ConnectionStrings["DBConnectionString"].ConnectionString;
            }

            private void CreateDatabaseTable(DataTable dt, string tableName)
            {
                string sqlQuery = string.Empty;
                string sqlDBType = string.Empty;
                string dataType = string.Empty;
                int maxLength = 0;
                StringBuilder sb = new StringBuilder();

                sb.AppendFormat(string.Format("CREATE TABLE {0} (", tableName));

                for (int i = 0; i < dt.Columns.Count; i++)
                {
                    dataType = dt.Columns[i].DataType.ToString();
                    if (dataType == "System.Int32")
                    {
                        sqlDBType = "INT";
                    }
                    else if (dataType == "System.String")
                    {
                        sqlDBType = "NVARCHAR";
                        maxLength = dt.Columns[i].MaxLength;
                    }

                    if (maxLength > 0)
                    {
                        sb.AppendFormat(string.Format(" {0} {1} ({2}), ", dt.Columns[i].ColumnName, sqlDBType, maxLength));
                    }
                    else
                    {
                        sb.AppendFormat(string.Format(" {0} {1}, ", dt.Columns[i].ColumnName, sqlDBType));
                    }
                }

                sqlQuery = sb.ToString();
                sqlQuery = sqlQuery.Trim().TrimEnd(',');
                sqlQuery = sqlQuery + " )";

                using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                {
                    sqlConn.Open();
                    SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                    sqlCmd.ExecuteNonQuery();
                    sqlConn.Close();
                }
            }

            private void LoadDataToDatabase(string tableName, string fileFullPath, string delimeter)
            {
                string sqlQuery = string.Empty;
                StringBuilder sb = new StringBuilder();

                sb.AppendFormat(string.Format("BULK INSERT {0} ", tableName));
                sb.AppendFormat(string.Format(" FROM '{0}'", fileFullPath));
                sb.AppendFormat(string.Format(" WITH ( FIELDTERMINATOR = '{0}' , ROWTERMINATOR = '\n' )", delimeter));

                sqlQuery = sb.ToString();

                using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                {
                    sqlConn.Open();
                    SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                    sqlCmd.ExecuteNonQuery();
                    sqlConn.Close();
                }
            }

            protected void Page_Load(object sender, EventArgs e)
            {
            }

            protected void BTNImport_Click(object sender, EventArgs e)
            {
                if (FileUpload1.HasFile)
                {
                    FileInfo fileInfo = new FileInfo(FileUpload1.PostedFile.FileName);
                    if (fileInfo.Name.Contains(".csv"))
                    {
                        string fileName = fileInfo.Name.Replace(".csv", "").ToString();
                        string csvFilePath = Server.MapPath("UploadedCSVFiles") + "\\" + fileInfo.Name;

                        // Save the CSV file on the server inside 'UploadedCSVFiles'
                        FileUpload1.SaveAs(csvFilePath);

                        // Fetch the location of the CSV file
                        string filePath = Server.MapPath("UploadedCSVFiles") + "\\";
                        string strSql = "SELECT * FROM [" + fileInfo.Name + "]";
                        string strCSVConnString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + filePath + ";" + "Extended Properties='text;HDR=YES;'";

                        // Load the data from the CSV into a DataTable
                        OleDbDataAdapter adapter = new OleDbDataAdapter(strSql, strCSVConnString);
                        DataTable dtCSV = new DataTable();
                        DataTable dtSchema = new DataTable();

                        adapter.FillSchema(dtCSV, SchemaType.Mapped);
                        adapter.Fill(dtCSV);

                        if (dtCSV.Rows.Count > 0)
                        {
                            CreateDatabaseTable(dtCSV, fileName);
                            Label2.Text = string.Format("The table ({0}) has been successfully created in the database.", fileName);

                            string fileFullPath = filePath + fileInfo.Name;
                            LoadDataToDatabase(fileName, fileFullPath, ",");

                            Label1.Text = string.Format("({0}) records have been loaded into the table {1}.", dtCSV.Rows.Count, fileName);
                        }
                        else
                        {
                            LBLError.Text = "File is empty.";
                        }
                    }
                    else
                    {
                        LBLError.Text = "Unable to recognize file.";
                    }
                }
            }
        }
    }

    The code above consists of three (3) private methods: GetConnectionString(), CreateDatabaseTable() and LoadDataToDatabase(). GetConnectionString() is a method that returns a string; it simply fetches the connection string that is configured in the web.config file. CreateDatabaseTable() is a method that accepts two (2) parameters, the DataTable and the file name. As the method name suggests, it automatically creates a table in the database based on the source DataTable and the file name of the CSV file. LoadDataToDatabase() is a method that accepts three (3) parameters: the tableName, the fileFullPath and the delimiter value. This method is where the actual saving or importing of data from the CSV to SQL Server happens. The code in the BTNImport_Click event handles uploading the CSV file to the specified location and, at the same time, is where CreateDatabaseTable() and LoadDataToDatabase() are called. I also added some basic trapping and validation within that event. Now, to test the importing utility, let's create some simple data in CSV format. Just for the simplicity of this demo, let's create a CSV file named "Employee" and add some data to it. Here's an example below:

        1,VMS,Durano,[email protected]
        2,Jennifer,Cortes,[email protected]
        3,Xhaiden,Durano,[email protected]
        4,Angel,Santos,[email protected]
        5,Kier,Binks,[email protected]
        6,Erika,Bird,[email protected]
        7,Vianne,Durano,[email protected]
        8,Lilibeth,Tree,[email protected]
        9,Bon,Bolger,[email protected]
        10,Brian,Jones,[email protected]

    Now save the newly created CSV file somewhere on your hard drive. Okay, let's run the application and browse to the CSV file that we have just created. (The original post included sample screenshots here: one after browsing to the CSV file, and one after clicking the Import button.) Now if we look at the database that we created earlier, you'll notice that the Employee table has been created with the imported data in it (screenshot omitted). That's it! I hope someone finds this post useful! Technorati Tags: ASP.NET,CSV,SQL,C#,ADO.NET

    Read the article

  • Is there an established convention for separating Windows file names in a string?

    - by Heinzi
    I have a function which needs to output a string containing a list of file paths. I can choose the separation character but I cannot change the data type (e.g. I cannot return a List<string> or something like that). Wanting to use some well-established convention, my first intuition was to use the semicolon, similar to what Windows's PATH and Java's CLASSPATH (on Windows) environment variables do: C:\somedir\somefile.txt;C:\someotherdir\someotherfile.txt However, I was surprised to notice that ; is a valid character in an NTFS file name. So, is the established best practice to just ignore this fact (i.e. "no sane person should use ; in a file name and if they do, it's their own fault") or is there some other established character for separating Windows paths or files? (The pipe (|) might be a good choice, but I have not seen it used anywhere yet for this purpose.)
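    For what it's worth, the pipe is at least guaranteed not to collide with real paths: | is in the set of characters Windows rejects in file names (< > : " / \ | ? *), unlike the semicolon. A small sketch of using it as the separator:

        # '|' cannot appear in an NTFS file name, so it is an unambiguous separator;
        # ';' is legal in names and therefore is not.
        SEP = "|"

        def join_paths(paths):
            return SEP.join(paths)

        def split_paths(s):
            return [p for p in s.split(SEP) if p]

        joined = join_paths([r"C:\somedir\somefile.txt", r"C:\someotherdir\someotherfile.txt"])
        print(split_paths(joined))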

    Read the article
