Search Results

Search found 59543 results on 2382 pages for 'solution files'.

  • vcxproj file won't load into solution.

    - by John C
    We've just recently switched to VS 2010, and I had a solution that was working fine. This morning, when I try to load the solution, I get the error: "An item with the same key has already been added." This occurs while it is trying to load one of our main projects, and that project is left unloaded. I assumed the problem was with my solution, so I created a brand-new empty solution and tried to load the same vcxproj, and got exactly the same error. When I revert the project file to a previous version it works, so apparently it's something in the vcxproj file. However, it also appears that I'm the only one in the office who is affected, so some combination of the vcxproj file and my computer seems to be the issue. Has anyone seen anything like this before? Any ideas on a solution? Thanks
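
    One known trigger for that particular error (unconfirmed here, so treat this as a guess) is a duplicated key somewhere in the project file, such as a repeated entry in the ProjectConfigurations item group of the .vcxproj:

        <ItemGroup Label="ProjectConfigurations">
          <ProjectConfiguration Include="Debug|Win32">
            <Configuration>Debug</Configuration>
            <Platform>Win32</Platform>
          </ProjectConfiguration>
          <ProjectConfiguration Include="Debug|Win32">  <!-- duplicate key -->
            <Configuration>Debug</Configuration>
            <Platform>Win32</Platform>
          </ProjectConfiguration>
        </ItemGroup>

    Since the reverted file loads fine, diffing the working and broken revisions of the .vcxproj should narrow down the offending entry.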

    Read the article

  • How best to use XPath with very large XML files in .NET?

    - by glenatron
    I need to do some processing on fairly large XML files (large here being potentially upwards of a gigabyte) in C#, including performing some complex XPath queries. The problem I have is that the standard way I would normally do this through the System.Xml libraries likes to load the whole file into memory before it does anything with it, which can cause memory problems with files of this size. I don't need to update the files at all, just read them and query the data contained in them. Some of the XPath queries are quite involved and go across several levels of parent-child relationships; I'm not sure whether this will affect the ability to use a stream reader rather than loading the data into memory as a block. One way I can see of making it work is to perform the simple analysis using a stream-based approach, and perhaps wrap the XPath statements into XSLT transformations that I could run across the files afterward, although it seems a little convoluted. Alternately, I know that there are some elements that the XPath queries will not run across, so I guess I could break the document up into a series of smaller fragments based on its original tree structure, which could perhaps be small enough to process in memory without causing too much havoc. I've tried to explain my objective here, so if I'm barking up totally the wrong tree in terms of general approach, I'm sure you folks can set me right...
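
    For what it's worth, here is a minimal sketch of the fragment-based idea: stream the document with XmlReader and hand one subtree at a time to an in-memory XPathDocument. The element name "record" and the query are placeholders, not part of the original question.

        using System;
        using System.Xml;
        using System.Xml.XPath;

        class FragmentQuery
        {
            static void Main()
            {
                using (XmlReader reader = XmlReader.Create("huge.xml"))
                {
                    // Jump from one repeating element to the next without
                    // ever materializing the whole document.
                    while (reader.ReadToFollowing("record"))
                    {
                        using (XmlReader sub = reader.ReadSubtree())
                        {
                            // Only this fragment is loaded into memory.
                            XPathDocument doc = new XPathDocument(sub);
                            XPathNavigator nav = doc.CreateNavigator();
                            XPathNodeIterator it = nav.Select("descendant::item[@flag='y']");
                            while (it.MoveNext())
                                Console.WriteLine(it.Current.Value);
                        }
                    }
                }
            }
        }

    Queries that stay inside one fragment work unchanged; only XPath that crosses fragment boundaries would need the XSLT (or multi-pass) treatment described above.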

    Read the article

  • Read from one large file and write to many (tens, hundreds, or thousands) files in Java?

    - by Rudiger
    I have a large-ish file (4-5 GB compressed) of small messages that I wish to parse into approximately 6,000 files by message type. Messages are small; anywhere from 5 to 50 bytes depending on the type. Each message starts with a fixed-size type field (a 6-byte key). If I read a message of type '000001', I want to append its payload to 000001.dat, etc. The input file contains a mixture of messages; I want N homogeneous output files, where each output file contains only the messages of a given type. What's an efficient, fast way of writing these messages to so many individual files? I'd like to use as much memory and processing power as it takes to get it done as fast as possible. I can write compressed or uncompressed files to the disk. I'm thinking of using a hashmap with a message type key and an outputstream value, but I'm sure there's a better way to do it. Thanks!
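
    A minimal sketch of the hashmap-of-streams idea, with lazily opened, buffered appenders (buffer size and names are arbitrary choices, not recommendations):

        import java.io.*;
        import java.util.HashMap;
        import java.util.Map;

        public class MessageSplitter {
            private final Map<String, BufferedOutputStream> outputs =
                    new HashMap<String, BufferedOutputStream>();

            BufferedOutputStream streamFor(String type) throws IOException {
                BufferedOutputStream out = outputs.get(type);
                if (out == null) {
                    // Open 000001.dat etc. in append mode, buffered so small
                    // payloads don't each cost a system call.
                    out = new BufferedOutputStream(
                            new FileOutputStream(type + ".dat", true), 1 << 16);
                    outputs.put(type, out);
                }
                return out;
            }

            void write(String type, byte[] payload) throws IOException {
                streamFor(type).write(payload);
            }

            void closeAll() throws IOException {
                for (BufferedOutputStream out : outputs.values()) out.close();
            }
        }

    One caveat: 6,000 simultaneously open files can exceed the operating system's file-descriptor limit, in which case an LRU cache that closes the least recently used streams would be needed.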

    Read the article

  • How can I use Perl to determine whether the contents of two files are identical?

    - by Zaid
    This question comes from a need to ensure that changes I've made to code don't affect the values it outputs to a text file. Ideally, I'd roll a sub that takes in two filenames and returns 1 or 0 depending on whether the contents are identical or not, whitespace and all. Given that text processing is Perl's forté, it should be quite easy to compare two files and determine whether they are identical or not (code below untested):

        use strict;
        use warnings;

        sub files_match {
            my ( $fileA, $fileB ) = @_;
            open my $file1, '<', $fileA;
            open my $file2, '<', $fileB;
            while ( my $lineA = <$file1> ) {
                next if $lineA eq <$file2>;
                return 0;
            }
            return 1;
        }

    The only way I can think of (sans CPAN modules) is to open the two files in question and read them in line by line until a difference is found. If no difference is found, the files must be identical. But this approach is limited and clumsy. What if the total lines differ in the two files? Should I open and close to determine line count, then re-open to scan the texts? Yuck. I don't see anything in perlfaq5 relating to this. I want to stay away from modules unless they come with the core Perl 5.6.1 distribution.
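
    For reference, File::Compare has shipped with the Perl core since 5.004, so it falls within the stated constraint; a sketch:

        use File::Compare;

        # compare() returns 0 when the files are identical,
        # 1 when they differ, and -1 on error.
        my $identical = ( compare($fileA, $fileB) == 0 );

    This also sidesteps the unequal-line-count problem, since it compares byte streams rather than lines.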

    Read the article

  • How can I copy files with names containing spaces and UNICODE, when using a shell script?

    - by LOlliffe
    I have a list of files that I'm trying to copy and move (using cp and mv) in a bash shell script. The problem I'm running into is that I can't get either command to recognize a huge number of files, seemingly because the filenames contain spaces and/or Unicode characters. I couldn't find any switches to decode/re-encode these characters. Instead, for example, if I copy "file name.xml", I get "*.xml" and a script error that the file wasn't found. Does anyone know settings or commands that will deal with these files?
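
    The usual culprit in scripts like this is word-splitting on unquoted variables rather than the filenames themselves. A sketch, with the list file and destination as placeholders:

        # Quote every expansion so spaces survive.
        while IFS= read -r f; do
            cp -- "$f" /some/dest/
        done < files.txt

        # Or let find emit NUL-delimited names, which is safe
        # for spaces, newlines, and any Unicode:
        find . -name '*.xml' -print0 |
        while IFS= read -r -d '' f; do
            cp -- "$f" /some/dest/
        done

    Unicode itself generally needs no decoding as long as the names are never re-tokenized by the shell.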

    Read the article

  • Setup.exe files downloading without cab files over poor connections

    - by Colin
    We have customers who are trying to download a setup.exe file over mobile connections that appear to be very slow. They have reported that when they click on the downloaded setup.exe, the install wizard starts up, but partway through the wizard they get an error message indicating that a cab file is corrupt or missing. They couriered a problem tablet to us, and we downloaded the file without a problem, but I could replicate the problem by using https to download the file (https is normally used to access the rest of the site, although it is not necessary for the download). When I did this, the downloaded file was 2.8MB; it should be 8MB. I don't think that https is the root cause of the problem, because I can see the download link in the browser history using http, so I know the customer tried to download using http. I think the poor connection is preventing a complete download, but the browser is acting as if it is complete. Is there a way to ensure the file is downloaded fully, or not at all? Why does the browser not indicate that the download is incomplete?
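
    A low-tech safeguard, assuming a checksum can be published next to the download link, is to have customers verify the file before running it, e.g. on Windows:

        certutil -hashfile setup.exe SHA256

    and compare the output against the published value; a truncated download will not match. Serving the file with an accurate Content-Length header is also what allows well-behaved browsers to detect a short transfer in the first place.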

    Read the article

  • Windows 7 home backup solution, with offsite provision

    - by Richard E
    I am looking for a home backup solution for my single Windows 7 (Home Premium) PC. I have about 500GB of data to back up, and I would like to spend less than GBP 300 on the solution. I don't see the need to back up the whole PC, rather specific folder branches (iTunes, photos, documents, Outlook files, user folders such as desktop, favorites, etc.). I would like a solution that enables me to maintain backups in two separate physical locations (e.g. home and work). To facilitate this, I am imagining a storage unit with slots for two removable drives, along with three separate drives. At any one time, two of the drives will be being backed up to in the storage unit; the third will be located at my work. Periodically, I will take one of the drives into work and leave it there, then bring the drive that was there back home and plug it into the storage unit. It will then be backed up along with the other drive that was left in the storage unit. This approach should cover scenarios such as virus attack and fire or theft at one location. Thoughts and comments on the sanity of this approach, please...

    Read the article

  • Large amounts of data in many text files - how to process?

    - by Stephen
    Hi, I have large amounts of data (a few terabytes) and accumulating... They are contained in many tab-delimited flat text files (each about 30MB). Most of the task involves reading the data and aggregating (summing/averaging plus additional transformations) over observations/rows based on a series of predicate statements, and then saving the output as text, HDF5, or SQLite files, etc. I normally use R for such tasks, but I fear this may be a bit large for it. Some candidate solutions are to:

        1) write the whole thing in C (or Fortran);
        2) import the files (tables) into a relational database directly and then pull off chunks in R or Python (some of the transformations are not amenable to pure SQL solutions);
        3) write the whole thing in Python.

    Would (3) be a bad idea? I know you can wrap C routines in Python, but in this case, since there isn't anything computationally prohibitive (e.g., optimization routines that require many iterative calculations), I think I/O may be as much of a bottleneck as the computation itself. Do you have any recommendations on further considerations or suggestions? Thanks
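
    A sketch of what option (3) might look like: stream each file once and keep only running aggregates in memory, so the working set stays small regardless of total data size (column positions and the predicate are placeholders):

        import csv
        import glob
        from collections import defaultdict

        sums = defaultdict(float)
        counts = defaultdict(int)

        for path in glob.glob("data/*.txt"):
            with open(path, newline="") as fh:
                for row in csv.reader(fh, delimiter="\t"):
                    key = row[0]            # grouping column (assumed)
                    value = float(row[1])   # measurement column (assumed)
                    if value >= 0:          # stand-in for the real predicates
                        sums[key] += value
                        counts[key] += 1

        for key in sorted(sums):
            print(key, sums[key] / counts[key])

    Since each 30MB file is read sequentially and then discarded, I/O rather than Python should dominate the running time, which speaks to the question's bottleneck concern.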

    Read the article

  • remuxing m4v files from h264 AVI files

    - by crankharder
    I have a bunch of, I think, x264-encoded AVIs that I'd like to convert to m4v so that I can play them with QuickTime. Here's how I created them. First I dump the VOB from the DVD with this:

        $ mplayer -dumpstream -dumpfile new.vob dvd://1

    Then I compress it:

        $ mencoder -oac copy -o new.avi -ovc x264 -x264encopts crf=18 new.vob

    I tried converting them to m4v, but it's blowing up. I tried dumping the h264/AAC streams:

        $ mplayer new.avi -dumpvideo -dumpfile new.h264
        $ mplayer new.avi -dumpaudio -dumpfile new.aac

    And remuxing(?) with MP4Box, but I'm getting an error:

        $ MP4Box -add new.h264#video -add new.aac#audio new.m4v
        Cannot find H264 start code
        Error importing new.h264#video: BitStream Not Compliant

    So not sure what to do now...
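
    The "Cannot find H264 start code" error suggests the mplayer video dump isn't the raw Annex-B H.264 stream MP4Box expects. One route worth trying (untested here) is to skip the raw dumps and remux the AVI in a single step:

        $ ffmpeg -i new.avi -vcodec copy -acodec copy new.m4v

    or let MP4Box extract the track itself:

        $ MP4Box -aviraw video new.avi

    which should write a new_video.h264 file that MP4Box can then import cleanly.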

    Read the article

  • VBScript - Compare and copy files from a folder if newer than the destination files

    - by Kenny Bones
    Hi, I'm trying to design this script that's supposed to be used as part of a logon script for a lot of users. This script is basically supposed to take a source folder and a destination folder, and make sure that the destination folder has the exact same content as the source folder, but only copy if the date-modified stamp of the source file is newer than that of the destination file. I have been thinking out this basic pseudo code, and I'm just trying to make sure it is valid and solid:

        Dim strSourceFolder, strDestFolder
        strSourceFolder = "C:\Users\User\SourceFolder\"
        strDestFolder = "C:\Users\User\DestFolder\"

        For Each file In strSourceFolder
            ReplaceIfNewer (file, strDestFolder)
        Next

        Sub ReplaceIfNewer (SourceFile, DestFolder)
            Dim DateModifiedSourceFile, DateModifiedDestFile
            DateModifiedSourceFile = SourceFile.DateModified()
            DateModifiedDestFile = DestFolder & "\" & SourceFile.DateModified()
            If DateModifiedSourceFile < DateModifiedDestFile
                Copy SourceFile to SourceFolder
            End If
        End Sub

    Would this work? I'm not quite sure how it can be done, but I could probably spend all day figuring it out. The people here are generally so amazingly smart that with your help it would take a lot less time :)
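
    A working sketch of the same idea using FileSystemObject, which is what VBScript actually provides for this (paths are examples; DateLastModified is the real property name):

        Dim fso, srcFolder, dstPath, f, dst
        Set fso = CreateObject("Scripting.FileSystemObject")
        Set srcFolder = fso.GetFolder("C:\Users\User\SourceFolder")
        dstPath = "C:\Users\User\DestFolder\"

        For Each f In srcFolder.Files
            dst = dstPath & f.Name
            If Not fso.FileExists(dst) Then
                f.Copy dst
            ElseIf f.DateLastModified > fso.GetFile(dst).DateLastModified Then
                f.Copy dst, True   ' True = overwrite the older copy
            End If
        Next

    Note the comparison runs source-newer-than-destination, and files missing from the destination are copied unconditionally.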

    Read the article

  • How can I read pcap files in a friendly format?

    - by Tony
    A simple cat on the pcap file looks terrible:

        $ cat tcp_dump.pcap
        ?ò????YVJ? JJ ?@@.?E<??@@ ?CA??qe?U?????h? .Ceh?YVJ?? JJ ?@@.?E<??@@ CA??qe?U?????z? .ChV?YVJ$?JJ ?@@.?E<-/@@A?CA??9????F???A&? .Ck??YVJgeJJ@@.??#3E<@3{n??9CA??P???F???<K? ??`.Ck??YVJgeBB ?@@.?E4-0@@AFCA??9????F?P????? .Ck???`?YVJ?""@@.??#3E?L@3?I??9CA??P???F????? ???.Ck?220-rly-da03.mx etc.

    I tried to make it prettier with:

        $ sudo tcpdump -ttttnnr tcp_dump.pcap
        reading from file tcp_dump.pcap, link-type EN10MB (Ethernet)
        2009-07-09 20:57:40.819734 IP 67.23.28.65.49237 > 216.239.113.101.25: S 2535121895:2535121895(0) win 5840 <mss 1460,sackOK,timestamp 776168808 0,nop,wscale 5>
        2009-07-09 20:57:43.819905 IP 67.23.28.65.49237 > 216.239.113.101.25: S 2535121895:2535121895(0) win 5840 <mss 1460,sackOK,timestamp 776169558 0,nop,wscale 5>
        2009-07-09 20:57:47.248100 IP 67.23.28.65.42385 > 205.188.159.57.25: S 2644526720:2644526720(0) win 5840 <mss 1460,sackOK,timestamp 776170415 0,nop,wscale 5>
        2009-07-09 20:57:47.288103 IP 205.188.159.57.25 > 67.23.28.65.42385: S 1358829769:1358829769(0) ack 2644526721 win 5792 <mss 1460,sackOK,timestamp 4292123488 776170415,nop,wscale 2>
        2009-07-09 20:57:47.288103 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 1 win 183 <nop,nop,timestamp 776170425 4292123488>
        2009-07-09 20:57:47.368107 IP 205.188.159.57.25 > 67.23.28.65.42385: P 1:481(480) ack 1 win 1448 <nop,nop,timestamp 4292123568 776170425>
        2009-07-09 20:57:47.368107 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 481 win 216 <nop,nop,timestamp 776170445 4292123568>
        2009-07-09 20:57:47.368107 IP 67.23.28.65.42385 > 205.188.159.57.25: P 1:18(17) ack 481 win 216 <nop,nop,timestamp 776170445 4292123568>
        2009-07-09 20:57:47.404109 IP 205.188.159.57.25 > 67.23.28.65.42385: . ack 18 win 1448 <nop,nop,timestamp 4292123606 776170445>
        2009-07-09 20:57:47.404109 IP 205.188.159.57.25 > 67.23.28.65.42385: P 481:536(55) ack 18 win 1448 <nop,nop,timestamp 4292123606 776170445>
        2009-07-09 20:57:47.404109 IP 67.23.28.65.42385 > 205.188.159.57.25: P 18:44(26) ack 536 win 216 <nop,nop,timestamp 776170454 4292123606>
        2009-07-09 20:57:47.444112 IP 205.188.159.57.25 > 67.23.28.65.42385: P 536:581(45) ack 44 win 1448 <nop,nop,timestamp 4292123644 776170454>
        2009-07-09 20:57:47.484114 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 581 win 216 <nop,nop,timestamp 776170474 4292123644>
        2009-07-09 20:57:47.616121 IP 67.23.28.65.42385 > 205.188.159.57.25: P 44:50(6) ack 581 win 216 <nop,nop,timestamp 776170507 4292123644>
        2009-07-09 20:57:47.652123 IP 205.188.159.57.25 > 67.23.28.65.42385: P 581:589(8) ack 50 win 1448 <nop,nop,timestamp 4292123855 776170507>
        2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 589 win 216 <nop,nop,timestamp 776170516 4292123855>
        2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: P 50:56(6) ack 589 win 216 <nop,nop,timestamp 776170516 4292123855>
        2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: F 56:56(0) ack 589 win 216 <nop,nop,timestamp 776170516 4292123855>
        2009-07-09 20:57:47.668124 IP 67.23.28.65.49239 > 216.239.113.101.25: S 2642380481:2642380481(0) win 5840 <mss 1460,sackOK,timestamp 776170520 0,nop,wscale 5>
        2009-07-09 20:57:47.692126 IP 205.188.159.57.25 > 67.23.28.65.42385: P 589:618(29) ack 57 win 1448 <nop,nop,timestamp 4292123893 776170516>
        2009-07-09 20:57:47.692126 IP 67.23.28.65.42385 > 205.188.159.57.25: R 2644526777:2644526777(0) win 0
        2009-07-09 20:57:47.692126 IP 205.188.159.57.25 > 67.23.28.65.42385: F 618:618(0) ack 57 win 1448 <nop,nop,timestamp 4292123893 776170516>
        2009-07-09 20:57:47.692126 IP 67.23.28.65.42385 > 205.188.159.57.25: R 2644526777:2644526777(0) win 0

    Well... that is much prettier, but it doesn't show the actual messages. I can actually extract more information just viewing the raw file. What is the best (and preferably easiest) way to just view all the contents of the pcap file?

    UPDATE: Thanks to the responses below, I made some progress. Here is what it looks like now:

        $ tcpdump -qns 0 -A -r blah.pcap
        20:57:47.368107 IP 205.188.159.57.25 > 67.23.28.65.42385: tcp 480
        0x0000: 4500 0214 834c 4000 3306 f649 cdbc 9f39 E....L@.3..I...9
        0x0010: 4317 1c41 0019 a591 50fe 18ca 9da0 4681 C..A....P.....F.
        0x0020: 8018 05a8 848f 0000 0101 080a ffd4 9bb0 ................
        0x0030: 2e43 6bb9 3232 302d 726c 792d 6461 3033 .Ck.220-rly-da03
        0x0040: 2e6d 782e 616f 6c2e 636f 6d20 4553 4d54 .mx.aol.com.ESMT
        0x0050: 5020 6d61 696c 5f72 656c 6179 5f69 6e2d P.mail_relay_in-
        0x0060: 6461 3033 2e34 3b20 5468 752c 2030 3920 da03.4;.Thu,.09.
        0x0070: 4a75 6c20 3230 3039 2031 363a 3537 3a34 Jul.2009.16:57:4
        0x0080: 3720 2d30 3430 300d 0a32 3230 2d41 6d65 7.-0400..220-Ame
        0x0090: 7269 6361 204f 6e6c 696e 6520 2841 4f4c rica.Online.(AOL
        0x00a0: 2920 616e 6420 6974 7320 6166 6669 6c69 ).and.its.affili
        0x00b0: 6174 6564 2063 6f6d 7061 6e69 6573 2064 ated.companies.d
        etc.

    This looks good, but it still makes the actual message on the right difficult to read. Is there a way to view those messages in a more friendly way?

    UPDATE: This made it pretty:

        tcpick -C -yP -r tcp_dump.pcap

    Thanks!
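
    For what it's worth, two other readers handle this use case: Wireshark opens pcap files directly with full protocol decoding in a GUI, and its command-line companion prints the same decoded view:

        tshark -r tcp_dump.pcap -V

    (-V expands every protocol layer, including the payload text of application protocols such as the SMTP session above.)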

    Read the article

  • Is there a distributed VCS that can manage large files?

    - by joelhardi
    Is there a distributed version control system (git, bazaar, mercurial, darcs, etc.) that can handle files larger than available RAM? I need to be able to commit large binary files (i.e., datasets, source video/images, archives), but I don't need to be able to diff them, just to commit and then update when the file changes. I last looked at this about a year ago, and none of the obvious candidates allowed it, since they're all designed to diff in memory for speed. That left me with a VCS for managing code and something else ("asset management" software, or just rsync and scripts) for large files, which is pretty ugly when the directory structures of the two overlap.

    Read the article

  • Dreamweaver creating temp PHP files instead of working directly from the local test server files

    - by Dan382
    Trying to get MAMP running with Dreamweaver. When I preview a PHP file inside Dreamweaver's 'Live View' mode, instead of working directly from my file:

        http://localhost/php_test/timetest.php

    it creates its own temp file, which looks like:

        http://127.0.0.1/php_test/TMPWY5ZEM.php

    I know the localhost runs correctly, because if I type the URL directly into a browser it runs fine. I've set up the Dreamweaver site correctly to the best of my knowledge; the details are below:

        Local site folder: /Applications/MAMP/htdocs/php_test
        Server Folder: /Applications/MAMP/htdocs/php_test
        Web URL: http://localhost/php_test/
        Testing Server: PHP MySQL

    Any help?

    Read the article

  • Macro to collapse all nodes in Solution Explorer, Visual Studio 2010

    - by rod
    Hi All, I found the CollapseAll macro online, and it has worked for me in VS 2005 and VS 2008. However, it only half works in VS 2010: it looks like it only collapses the top nodes and not any subnodes that may be expanded. Any ideas? Thanks, rod.

        Sub CollapseAll()
            ' Get the Solution Explorer tree
            Dim UIHSolutionExplorer As UIHierarchy
            UIHSolutionExplorer = DTE.Windows.Item(Constants.vsext_wk_SProjectWindow).Object()
            ' Check if there is any open solution
            If (UIHSolutionExplorer.UIHierarchyItems.Count = 0) Then
                ' MsgBox("Nothing to collapse. You must have an open solution.")
                Return
            End If
            ' Get the top node (the name of the solution)
            Dim UIHSolutionRootNode As UIHierarchyItem
            UIHSolutionRootNode = UIHSolutionExplorer.UIHierarchyItems.Item(1)
            UIHSolutionRootNode.DTE.SuppressUI = True
            ' Collapse each project node
            Dim UIHItem As UIHierarchyItem
            For Each UIHItem In UIHSolutionRootNode.UIHierarchyItems
                'UIHItem.UIHierarchyItems.Expanded = False
                If UIHItem.UIHierarchyItems.Expanded Then
                    Collapse(UIHItem)
                End If
            Next
            ' Select the solution node, or else when you click on the solution
            ' window scrollbar, it will synchronize the open document with the
            ' tree and pop out the corresponding node, which is probably not
            ' what you want.
            UIHSolutionRootNode.Select(vsUISelectionType.vsUISelectionTypeSelect)
            UIHSolutionRootNode.DTE.SuppressUI = False
        End Sub

        Private Sub Collapse(ByVal item As UIHierarchyItem)
            For Each eitem As UIHierarchyItem In item.UIHierarchyItems
                If eitem.UIHierarchyItems.Expanded AndAlso eitem.UIHierarchyItems.Count > 0 Then
                    Collapse(eitem)
                End If
            Next
            item.UIHierarchyItems.Expanded = False
        End Sub

    Read the article

  • How to get files from one PC to another using C#.NET?

    - by shruti
    I'm using C#.NET. I want to get files that are on the server PC onto my PC; both PCs are connected through the network. I have given the IP address of that PC in the path, but it's not copying the files to my folder. I'm using the following code, but it's not working; kindly help me out:

        File.Copy(Path.GetFileName(sourceFile), Path.GetDirectoryName(targetpath));

    In sourceFile I have given the IP address plus the folder path of the server PC, and in targetpath I have given the path of the folder on my PC to which I want to copy the files.
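
    Two hedged observations: File.Copy expects two full file paths (Path.GetFileName strips the directory off, and Path.GetDirectoryName yields a folder, not a file), and a network location must be a UNC path to a shared folder (\\server\share\file.ext), not an IP address glued to a local path. A sketch with placeholder names:

        using System.IO;

        class CopyFromServer
        {
            static void Main()
            {
                // UNC path to a folder that is actually shared on the server
                // (share name is a placeholder).
                string sourceDir = @"\\10.0.4.123\SharedFolder";
                string targetDir = @"C:\LocalCopy";

                Directory.CreateDirectory(targetDir);
                foreach (string sourceFile in Directory.GetFiles(sourceDir))
                {
                    string target = Path.Combine(targetDir, Path.GetFileName(sourceFile));
                    File.Copy(sourceFile, target, true); // true = overwrite
                }
            }
        }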

    Read the article

  • Where do I put the links to my JavaScript/jQuery files in my HTML file?

    - by Qlidnaque
    I recently noticed that some (not all) of my JavaScript and jQuery scripts wouldn't work unless I put the link for the .js files nearer the bottom of the page, instead of the head area where I put the links for my .css files. From what I understand, JavaScript can go in either place, and it is recommended not to put it in the header, as that slows down the page-loading process. At the same time, if I put it in the body tag of the HTML file, it looks somewhat messy, and I was wondering what the best practice is for putting .js files in a clean place. Should I always put them at the very bottom, right before the ending body tag? How do professional web developers handle this?
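
    A minimal sketch of the common convention (file names are examples):

        <!DOCTYPE html>
        <html>
        <head>
            <link rel="stylesheet" href="styles.css">
        </head>
        <body>
            <p>Page content...</p>
            <!-- Scripts last: the DOM above already exists when they run,
                 and loading them doesn't block the page render. -->
            <script src="jquery.js"></script>
            <script src="site.js"></script>
        </body>
        </html>

    Scripts that run as they load and touch elements above them explain the "some but not all" behaviour: placed in the head, those elements don't exist yet. The alternative is keeping scripts in the head and wrapping the work in jQuery's $(document).ready(...), which defers it until the DOM is complete.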

    Read the article

  • VBA Solution to VLOOKUP with Hyperlinks

    - by Emily2
    I am looking for some help with a VBA solution for preserving hyperlinks when using VLOOKUP in Excel (2010). I have a load of data on Sheet 1 for internal use only, and a cut-down version of this on Sheet 2. Instead of recreating Sheet 2 every time, I am looking to have a working version which updates every time Sheet 1 is updated. Thus, I have used VLOOKUP on Sheet 2 so that only the desired info is returned there. However, the problem was that Sheet 1 contained hyperlinks to external websites in many cells, and these would not pull through to Sheet 2 using VLOOKUP. With some help, however, using the following VBA solution, the hyperlinks now pull through:

        Function GetHyperLink(r As Range) As String
            If r.Hyperlinks.Count Then
                GetHyperLink = r.Hyperlinks(1).Address
            End If
        End Function

    And I am using the following formula in the relevant cell(s) on Sheet 2:

        =HYPERLINK(GetHyperLink(INDEX('Sheet 1'!$B$1:$B$10001,MATCH(A4,'Sheet 1'!$A$1:$A$10001,0))),(VLOOKUP(A4,'Sheet 1'!$A$1:$B$10001,2,FALSE)))

    However, the problem is with formatting: every cell on Sheet 2 is formatted blue and underlined, even though some of them do not contain a hyperlink! Is someone able to help with a VBA solution/formula to fix this last piece of the puzzle? Many thanks, in anticipation.
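
    One possible angle, sketched under the assumption that the formulas live in column B of Sheet 2 starting at row 4 (adjust the range to suit): re-run the same lookup in VBA and strip the link styling wherever the resolved address is empty.

        Sub FixLinkFormatting()
            Dim c As Range, addr As String, m As Variant
            For Each c In Worksheets("Sheet2").Range("B4:B1000") ' placeholder range
                ' Mirror the formula's MATCH against column A of Sheet 1,
                ' using the A-column value next to this cell as the key.
                m = Application.Match(c.Offset(0, -1).Value, _
                        Worksheets("Sheet 1").Range("A1:A10001"), 0)
                addr = ""
                If Not IsError(m) Then
                    ' Reuse the question's own GetHyperLink UDF.
                    addr = GetHyperLink(Worksheets("Sheet 1").Cells(m, "B"))
                End If
                If addr = "" Then
                    c.Font.Underline = xlUnderlineStyleNone
                    c.Font.ColorIndex = xlColorIndexAutomatic
                Else
                    c.Font.Underline = xlUnderlineStyleSingle
                    c.Font.Color = RGB(0, 0, 255)
                End If
            Next c
        End Sub

    The cell's neighbouring A-column value (c.Offset(0, -1)) stands in for the A4 of the formula, so this mirrors the MATCH logic rather than assuming the two sheets are row-aligned.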

    Read the article

  • trying to prune down a list of files

    - by romunov
    I have a list of files, and I'm trying to extract all layer1_*.grd files. Is there a way of doing this in one grep expression?

        lof <- c("layer1_1.grd", "layer1_1.gri", "layer1_2.grd", "layer1_2.gri",
                 "layer1_3.grd", "layer1_3.gri", "layer1_4.grd", "layer1_4.gri",
                 "layer1_5.grd", "layer1_5.gri", "layer2_1.grd", "layer2_1.gri",
                 "layer2_2.grd", "layer2_2.gri", "layer2_3.grd", "layer2_3.gri",
                 "layer2_4.grd", "layer2_4.gri", "layer2_5.grd", "layer2_5.gri",
                 "layer3_1.grd", "layer3_1.gri", "layer3_2.grd", "layer3_2.gri",
                 "layer3_3.grd", "layer3_3.gri", "layer3_4.grd", "layer3_4.gri",
                 "layer3_5.grd", "layer3_5.gri", "layer4_1.grd", "layer4_1.gri",
                 "layer4_2.grd", "layer4_2.gri", "layer4_3.grd", "layer4_3.gri",
                 "layer4_4.grd", "layer4_4.gri", "layer4_5.grd", "layer4_5.gri")

    I tried doing this in two steps:

        list.of.files <- list.files(pattern = c("1_"))
        list.of.files <- list.of.files[grep(".grd", list.of.files)]

    Can someone enlighten me how to do this with grep in one step? I naively tried passing list() and c() to grep but, as you can imagine, it doesn't work:

        list.of.files <- list.files()
        list.of.files <- list.of.files[grep(list("1_", ".grd"), list.of.files)]
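
    For reference, a single regular expression can encode both conditions, anchoring the prefix and the extension:

        # names that start with "layer1_" and end in ".grd"
        grep("^layer1_.*\\.grd$", lof, value = TRUE)

        # or filter while listing the directory
        list.files(pattern = "^layer1_.*\\.grd$")

    value = TRUE makes grep return the matching strings themselves rather than their indices.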

    Read the article

  • How to copy database files from a network server to a client PC in C#.NET?

    - by zoya
    I'm using a piece of code to copy files from the database folder of a server PC, accessing that server PC through its IP address, but it is giving me an error and not copying the files into the folder on my PC (the client PC). This is the code I'm using; can you tell me where I'm wrong? The file path is given on my ListView in the WinForm:

        public string RecordingFileCopy(string recordpath, string ipadd)
        {
            string strFinalPath;
            strFinalPath = String.Format("\\{0}'{1}'", ipadd, recordpath);
            return strFinalPath;
        }

    On the button click event:

        try
        {
            foreach (ListViewItem item in listView1.Items)
            {
                string sourceFile = item.SubItems[5].Text;
                RecordingFileCopy(sourceFile, "10.0.4.123");
                File.Copy(sourceFile, Path.Combine(@"E:\name\MyDir", Path.GetFileName(sourceFile)));
            }
        }
        catch
        {
            MessageBox.Show("Files are not copied to folder", _strMsg,
                MessageBoxButtons.OK, MessageBoxIcon.Error);
        }
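
    Two things stand out, offered as hedged observations rather than a confirmed fix: the return value of RecordingFileCopy is never used (File.Copy still receives the original sourceFile), and the format string does not build a valid UNC path. A sketch of the likely intent, with the share name as a placeholder:

        public string RecordingFileCopy(string recordpath, string ipadd)
        {
            // Build the \\10.0.4.123\share\file form; assumes recordpath is
            // relative to a folder that is actually shared on the server.
            return String.Format(@"\\{0}\{1}", ipadd, recordpath);
        }

        // in the click handler:
        string remotePath = RecordingFileCopy(sourceFile, "10.0.4.123");
        File.Copy(remotePath, Path.Combine(@"E:\name\MyDir", Path.GetFileName(remotePath)));

    Logging the caught exception's Message instead of swallowing it would also reveal the real failure (for example, access denied versus path not found).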

    Read the article