Search Results

Search found 40999 results on 1640 pages for 'duplicate files'.

  • IO operation taking long time for files in remote server

    - by user841311
    I have files of size 150 MB each on a remote server in a different domain on the network. I am accessing them through a UNC path. I want to read the file content and perform a basic string search. When I try reading the files line by line, the operation just doesn't finish and takes a long time, more than 30 minutes. However, when I copy those files to my local machine, the same code reads and performs the string search in less than 5 seconds. I don't have the .NET framework installed on the server, so I have to do this from my machine. I want to perform all this through C# code in .NET framework 3.5, so I don't want to explicitly FTP all the files to my machine before performing this operation. Sample Code: DirectoryInfo dir = new DirectoryInfo(strFilePath); FileInfo[] fiArray = dir.GetFiles("*.txt"); foreach (FileInfo fi in fiArray) { //reading file content from server takes long time but fast in local machine //perform string search } Let me know if my requirement is not clear. Thanks in advance!
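
    A plausible cause (an assumption; the question doesn't profile it): reading line by line through a default-sized stream buffer turns a 150 MB file into a very large number of small SMB round trips, and network latency multiplies accordingly. A minimal sketch that keeps the same line-by-line search but gives FileStream an explicitly large buffer, so each round trip fetches a megabyte at a time:

        using System;
        using System.IO;

        class RemoteSearch
        {
            static void Search(string strFilePath, string needle)
            {
                DirectoryInfo dir = new DirectoryInfo(strFilePath);
                foreach (FileInfo fi in dir.GetFiles("*.txt"))
                {
                    // 1 MB buffer: far fewer network round trips than the small default
                    using (FileStream fs = new FileStream(fi.FullName, FileMode.Open,
                               FileAccess.Read, FileShare.Read, 1024 * 1024))
                    using (StreamReader reader = new StreamReader(fs))
                    {
                        string line;
                        while ((line = reader.ReadLine()) != null)
                        {
                            if (line.Contains(needle))
                                Console.WriteLine("{0}: {1}", fi.Name, line);
                        }
                    }
                }
            }
        }

    If that is still slow, copying each file to a local temp folder with File.Copy and scanning it there is the next cheapest option, since File.Copy streams in large blocks.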

  • Created files on Archos 5 invisible on Windows XP

    - by user352042
    I am fairly new to Android and this is my first post, so I apologise in advance if I am breaking protocol or posting to the wrong board. Please feel free to move this post somewhere more appropriate if required. I am developing for the 160 GB Archos 5 Internet tablet. Not ideal as a development platform, I know, but customer requirements mean we have no choice. It is running Android 1.6. I have updated the device firmware to the most recent available. Updating the version of Android is not an option at this point. Part of my app's requirement is to write information out to .txt files on the external storage directory so that these can be copied over the USB connection to a Windows XP PC using the Mobile media device (MTP) mode. I have followed all instructions I have come across carefully, e.g. I check that the storage is available using the technique described at http://developer.android.com/guide/topics/data/data-storage.html#filesExternal. However, although the files are created successfully on the device (I can browse them and open them using the device's File Explorer - they are fine), when I connect the device to a Windows XP computer none of the directories or files I created appear, and the size of their parent folders suggests they do not exist. I have tried running over the ADB, checked logcat, tried a (signed) release version and even written a second test application which just creates a folder (this behaves the same, i.e. it creates the folder but this is not visible in Windows Explorer) - nothing anywhere gives me any suggestion as to what the problem might be. If anyone has heard of this before or has any ideas as to what else I could try to fix it, please get in touch! We do not have any other devices to test on at the moment, although I hope to remedy this soon, customer permitting.
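
    One avenue worth ruling out (an assumption; the question doesn't say whether a reboot makes the files appear): in MTP mode the device exposes its media database rather than the raw file system, so files the media scanner has not indexed yet can be invisible to Windows while looking fine in the on-device file explorer. A minimal sketch that asks Android 1.6 to rescan external storage after the .txt files are written:

        // inside an Activity or Service, after writing the files; forces a
        // full rescan of external storage so MTP clients see the new files
        sendBroadcast(new Intent(Intent.ACTION_MEDIA_MOUNTED,
                Uri.parse("file://" + Environment.getExternalStorageDirectory())));

    (Imports: android.content.Intent, android.net.Uri, android.os.Environment.) If a reboot also makes the files show up, the scanner is almost certainly the cause.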

  • Stage untracked files for commit without staging tracked file changes

    - by Blair Holloway
    Oftentimes, when using git, I find myself in this situation: I have changes to several files, but I only want to commit parts of them. I have added several untracked files, which I want to track and commit. Solving the first part is easy; I run: git add -p Then, I choose which hunks to stage, and which hunks remain in my working tree, but unstaged. However, git's patch mode skips over untracked files. What I would like to do is something like: git add --untracked But no such option appears to exist. If I have, say, six untracked files, I could stage them using add in interactive mode and the add untracked option, like so: git add -i a<CR> 1<CR> 2<CR> 3<CR> 4<CR> 5<CR> 6<CR> <CR> q<CR> I feel like there is, or should be, a quicker way of doing this, though. What am I missing?
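
    A hedged suggestion (plain git, nothing version-specific as far as I know): git ls-files can enumerate exactly the untracked-but-not-ignored files, and its output can be piped straight into git add:

        git ls-files --others --exclude-standard -z | xargs -0 git add

    The -z/-0 pair keeps file names containing spaces intact, and --exclude-standard honours .gitignore, so ignored files stay ignored.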

  • Specifying both multiple targets and multiple build files with ant or subant in 1.6

    - by Paul Marshall
    I'm trying to unify a build process, running one build to get multiple packages. My first shot at this is just having a central build script call <ant> or <subant> on each project's build.xml file. I'm using Ant 1.6, and I've run into a funny problem: either I use the <ant> task, and I can specify multiple targets but not multiple build files, or I use the <subant> task, and I can specify multiple build files but not multiple targets. I realize there are a few solutions here already: Just upgrade to Ant 1.7 already; <antcall> can do multiple targets there. Edit the separate project build files to have a variety of top-level targets, so I can call each individual file with just one target, and use <antcall>. Copy-paste a lot of <ant> tasks, with a little help from <macrodef> to help the sanity (see the sketch below). Is there something I've missed that will allow me to do what I want from this single central build.xml without a) editing individual project files, b) writing lots of repetitive code, or c) upgrading Ant, and that d) doesn't require editing every time I add a new project?
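
    For the <macrodef> route, a sketch for the central build.xml (the target names clean and dist are placeholders, and this assumes a 1.6.x point release of at least 1.6.3, where <ant> gained nested <target> elements). The macro fixes the target list once, so adding a project costs one line:

        <macrodef name="build-project">
          <attribute name="dir"/>
          <sequential>
            <ant dir="@{dir}" inheritAll="false">
              <target name="clean"/>
              <target name="dist"/>
            </ant>
          </sequential>
        </macrodef>

        <target name="all">
          <build-project dir="projectA"/>
          <build-project dir="projectB"/>
        </target>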

  • Problem with filefield module after migrating Drupal site to a new server: can't upload files

    - by oalo
    We have a content type with two imagefield / filefield fields, and after migrating our site to a new server we have the following problem: when we submit a new item for this content type, with two images for those fields, Drupal gives us the following error and does not upload the images: warning: fopen(sites/default/files/.htaccess) [function.fopen]: failed to open stream: Permission denied in /websites/sitename/data/sites/all/modules/filefield/field_file.inc on line 349. warning: fopen(sites/default/files/.htaccess) [function.fopen]: failed to open stream: Permission denied in /websites/sitename/data/sites/all/modules/filefield/field_file.inc on line 349. An image thumbnail was not able to be created. warning: fopen(sites/default/files/.htaccess) [function.fopen]: failed to open stream: Permission denied in /websites/sitename/data/sites/all/modules/filefield/field_file.inc on line 349. warning: fopen(sites/default/files/.htaccess) [function.fopen]: failed to open stream: Permission denied in /websites/sitename/data/sites/all/modules/filefield/field_file.inc on line 349. An image thumbnail was not able to be created. I understand this is a permissions error, but it is not clear to me where I have to change permissions. Line 349 of field_file.inc has the following code: if (($fp = fopen("$directory/.htaccess", 'w')) && fputs($fp, $htaccess_lines)) { fclose($fp); chmod($directory .'/.htaccess', 0664); } else { $repl = array('%directory' => $directory, '!htaccess' => nl2br(check_plain($htaccess_lines))); form_set_error($form_item, t("Security warning: Couldn't write .htaccess file. Please create a .htaccess file in your %directory directory which contains the following lines: !htaccess", $repl));
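
    A hedged starting point (the web server account name varies by distribution - apache on Red Hat-style systems, www-data on Debian/Ubuntu): the files directory must be writable by whatever user PHP runs as on the new server, which a migration often resets. From the Drupal root:

        chown -R www-data:www-data sites/default/files
        chmod -R 775 sites/default/files

    After fixing ownership, re-visit Administer > Site configuration > File system so Drupal can confirm the directory is writable.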

  • Parsing multiple files at a time in Perl

    - by sfactor
    I have a large data set (around 90 GB) to work with. There are data files (tab delimited) for each hour of each day, and I need to perform operations on the entire data set. For example, get the share of OSes which are given in one of the columns. I tried merging all the files into one huge file and performing the simple count operation, but it was simply too huge for the server memory. So, I guess I need to perform the operation on each file at a time and then add up in the end. I am new to Perl and am especially naive about the performance issues. How do I do such operations in a case like this? As an example, two columns of the file are: ID OS 1 Windows 2 Linux 3 Windows 4 Windows Let's do something simple, counting the share of the OSes in the data set. So, each .txt file has millions of these lines and there are many such files. What would be the most efficient way to operate on the entire data set?
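
    A minimal sketch of the one-file-at-a-time approach (assuming the OS sits in the second tab-delimited column, as in the example): stream every file through one small hash of counters, so memory use stays constant however large the data set grows.

        #!/usr/bin/perl
        use strict;
        use warnings;

        my %count;
        for my $file (@ARGV) {
            open my $fh, '<', $file or die "Can't open $file: $!";
            while (my $line = <$fh>) {
                next if $. == 1;   # skip the header row of each file
                chomp $line;
                my (undef, $os) = split /\t/, $line;   # column 1 is ID, column 2 is OS
                $count{$os}++ if defined $os;
            }
            close $fh;
        }
        my $total = 0;
        $total += $_ for values %count;
        printf "%-10s %d (%.2f%%)\n", $_, $count{$_}, 100 * $count{$_} / $total
            for sort keys %count;

    Run it as perl count_os.pl data/*.txt, or via find -exec if the file list is too long for the shell.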

  • Configuring TeamCity + NUnit unit tests so files can be loaded properly

    - by Dave
    In a nutshell, I have a solution that builds fine in the IDE, and the unit tests all run fine with the NUnit GUI (via the NUnitit VS2008 plugin). However, when I execute my TeamCity build runner, all unit tests that require file access (e.g. tests that run against specific XML files) fail with System.IO.DirectoryNotFoundException. The reason for this is clear: it's looking for the supporting XML files loaded by various unit tests in the wrong folder. The way my unit tests are structured looks like this: +-- project folder +-- unit tests folder +-- test.xml +-- test.cs +-- project file.xaml +-- project file.xaml.cs All of my projects own their own UnitTests folder, which contains the .cs file and any XML files, XML schemas, etc. that are necessary to run the tests. So when I write my test.cs, I have it look for "test.xml" in the code because they are in the same folder (actually, I do something like ....\unit tests\test.xml, but that's kind of silly). As I said before, the tests run great in NUnit. But that's because the unit tests are part of the project. When running the unit tests from TeamCity, I am executing them against the assemblies that get copied to the main app's output folder. These unit test XML files should not be copied willy-nilly to the output folder just to make the tests pass. Can anyone suggest a better method of organizing my unit tests in each project (which are dependencies for the main app), such that I can execute the unit tests from NUnit and from the TeamCity build runner? The only other option I can come up with is to just put the testing XML data in code, rather than loading it from a file. I would rather not do this.
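
    One approach that makes the tests runner-independent (a sketch; it assumes the XML files are marked as Content with "Copy to Output Directory", so they end up next to the test assembly): resolve data files relative to the assembly itself rather than the current directory, which is what differs between the NUnit GUI and TeamCity.

        using System;
        using System.IO;
        using System.Reflection;

        static class TestData
        {
            // Resolves a data file relative to the test assembly, so the path
            // works whatever the runner's working directory happens to be.
            public static string PathFor(string fileName)
            {
                // CodeBase points at the original build output even when the
                // runner shadow-copies the assembly; Location would not.
                string assemblyDir = Path.GetDirectoryName(
                    new Uri(Assembly.GetExecutingAssembly().CodeBase).LocalPath);
                return Path.Combine(assemblyDir, fileName);
            }
        }

    Usage inside a test would then be along the lines of XDocument.Load(TestData.PathFor("test.xml")).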

  • Installing a group of .deb files in Ubuntu

    - by p00ya
    Hi. I have a directory of .deb files which I copied from the cache folder of apt. There are many applications and Ubuntu updates among them, but there's no dependency failure because they were all downloaded by 'add/remove applications' and 'update manager' automatically. Now I have installed the same version of Ubuntu (9.04) and I want to install those apps and updates again (though they are not new versions). In other words, I want to make this fresh Ubuntu install exactly like the old one, but without downloading anything and using only those .deb files that I copied. All I have is an archive folder containing the .deb files and a 'pkgcache.bin' file. I know I can double-click the .deb files and install them manually, but then I have to find out and follow the dependencies one by one from the installer errors. I have also tried adding an offline repository, but it didn't work - I think because all of my .deb's are in one folder, and there is no separate 'main', 'restricted', ... folder?! Is there a way to do all of this automatically? Thanks.
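
    A sketch of one standard recipe (assuming the dpkg-dev package is installed, which provides dpkg-scanpackages): a flat folder is fine for a local repository as long as it has a Packages index, which is what apt was missing.

        cd /path/to/debs
        dpkg-scanpackages . /dev/null | gzip -9 > Packages.gz

    Then add this line to /etc/apt/sources.list (note the trailing ./):

        deb file:///path/to/debs ./

    and run sudo apt-get update; after that apt can resolve dependencies against the folder, so installs work offline. Alternatively, copying all the .deb files into /var/cache/apt/archives lets apt use them from its cache instead of downloading.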

  • Best way to associate data files with particular tests in RSpec / Ruby

    - by Bill T
    For my RSpec tests I would like to automatically associate data files with each test. To clarify: if my tests each require an XML file as input data, and then some XPath statements to validate the responses they get back, I would like to externalize the XML and XPath as files and have the testing framework easily associate them with the particular test being run, by using the unique ID of the test as the file(s) name. I tried to get this behavior but the solution isn't very clean. I wrote a helper method that takes the value of "description" and combines it with __FILE__ to create a unique identifier which is set into a global variable that other utilities can access. The unique identifier is used to associate the data files I need. I have to call this helper method as the first line of every test, which is ugly. If I have an RSpec example that looks like this: describe "Basic functions of this server I'm testing" do it "should give me back a response" do # Sets a global var to: "my_tests_spec.rb_should_give_me_back_a_response" TestHelper::who_am_i __FILE__, description ... end end Is there some better/cleaner/slicker way I can get a unique ID for each test that I could use to associate data files with? Perhaps something built into RSpec I'm unaware of? Thank you, -Bill
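
    A hedged alternative (this relies on example metadata as RSpec 2 exposes it; the RSpec 1 API differs): a global before hook can compute the identifier for every example, so no test has to call the helper itself.

        RSpec.configure do |config|
          config.before(:each) do
            # full_description combines the describe and it strings,
            # which is unique per example
            $data_id = example.metadata[:full_description].gsub(/\W+/, '_')
          end
        end

    The spec file path is also available in example.metadata[:file_path] if the file name should stay part of the ID.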

  • Finding missing files by checksum

    - by grw
    Hi there, I'm doing a large data migration between two file systems (let's call them F1 and F2) on a Linux system which will necessarily involve copying the data verbatim into a differently-structured hierarchy on F2 and changing the file names. I'd like to write a script to generate a list of files which are in F1 but not in F2, i.e. the ones which weren't copied by the migration script into the new hierarchy, so that I can go back and migrate them manually. Unfortunately for reasons not worth going into, the migration script can't be modified to list files that it doesn't migrate. My question differs from this previously answered one because of the fact that I cannot rely on filenames as a comparison. I know the basic outline of the process would be: Generate a list of checksums for all files, recursing through F1 Do the same for F2 Compare the lists and generate a negative intersection of the checksums, ignoring the file names, to find files which are in F1 but not in F2. I'm kind of stuck getting past that stage, so I'd appreciate any pointers on which tools to use. I think I need to use the 'comm' command to compare the list of file checksums, but since md5sum, sha512sum and the like put the file name next to the checksum, I can't see a way to get it to bring me a useful comparison. Maybe awk is the way to go? I'm using Red Hat Enterprise Linux 5.x. Thanks.
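
    A sketch with plain md5sum, sort and awk (the mount points are placeholders): hash both trees, then keep only the F1 entries whose checksum never appears anywhere in F2 - awk sidesteps comm's need to strip the file names first.

        find /mnt/F1 -type f -exec md5sum {} + | sort > f1.sums
        find /mnt/F2 -type f -exec md5sum {} + | awk '{print $1}' | sort -u > f2.hashes
        awk 'NR==FNR { seen[$1]; next } !($1 in seen)' f2.hashes f1.sums > missing.txt

    The last command loads the F2 hashes into an array on its first pass (NR==FNR), then prints each f1.sums line - checksum plus original file name - whose hash F2 never produced. One caveat: files duplicated within F1 under different names collapse to one checksum, so if only one copy was migrated the others will look migrated too.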

  • Redeploying an ASP.NET site in IIS7 without files in use interfering

    - by fyjham
    Hey, We've got a process currently which causes ASP.NET websites to be redeployed. The code is itself an ASP.NET application. The current method, which has worked for quite a while, is simply to loop over all the files in one folder and copy them over the top of the files in the webroot. The problem that's arisen is that occasionally files end up being in use and hence can't be copied over. This has in the past been intermittent to the point it didn't matter but on some of our higher traffic sites it happens the majority of the time now. I'm wondering if anyone has a workaround or alternative approach to this that I haven't thought of. Currently my ideas are: Simply retry each file until it works. That's going to cause errors for a short time though which isn't really that good. Deploy to a new folder and update IIS's webroot to the new folder. I'm not sure how to do this short of running the application as an administrator and running batch files, which is very untidy. Does anyone know what the best way to do this is, or if it's possible to do #2 without running the publishing application as a user who has admin access (Willing to grant it special privileges, but I'd prefer to stop short of administrator)?
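
    A third option, sketched under assumptions (the paths and the copy routine are placeholders): ASP.NET treats a file named app_offline.htm in the webroot specially - while it exists, the runtime unloads the AppDomain and serves that page, which releases the locks on the site's files so they can be overwritten. No rights are needed beyond the write access to the webroot the process already has.

        using System;
        using System.IO;
        using System.Threading;

        class Redeployer
        {
            static void Deploy(string sourceDir, string webroot)
            {
                string offline = Path.Combine(webroot, "app_offline.htm");
                File.WriteAllText(offline,
                    "<html><body>Updating - back in a moment.</body></html>");
                try
                {
                    Thread.Sleep(2000);   // let the AppDomain unload and drop its locks
                    foreach (string src in Directory.GetFiles(sourceDir))
                    {
                        string dest = Path.Combine(webroot, Path.GetFileName(src));
                        File.Copy(src, dest, true);   // the existing single-folder copy
                    }
                }
                finally
                {
                    File.Delete(offline);   // bring the site back online
                }
            }
        }

    Visitors see a maintenance page for a couple of seconds instead of intermittent errors, which is usually an acceptable trade.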

  • Git add not working with .png files?

    - by D Lawson
    I have a dirty working tree, dirty because I made changes to source files and touched up some images. I was trying to add just the images to the index, so I ran this command: git add *.png But, this doesn't add the files. There were a few new image files that were added, but none of the ones that were modified/pre-existing were added. What gives? Edit: Here is some relevant terminal output $ git status # On branch master # # Changed but not updated: # (use "git add <file>..." to update what will be committed) # (use "git checkout -- <file>..." to discard changes in working directory) # # modified: src/main/java/net/plugins/analysis/FormMatcher.java # modified: src/main/resources/icons/doctor_edit_male.png # modified: src/main/resources/icons/doctor_female.png # # Untracked files: # (use "git add <file>..." to include in what will be committed) # # src/main/resources/icons/arrow_up.png # src/main/resources/icons/bullet_arrow_down.png # src/main/resources/icons/bullet_arrow_up.png no changes added to commit (use "git add" and/or "git commit -a") Then executed "git add *.png" (no output after command) Then: $ git status # On branch master # # Changes to be committed: # (use "git reset HEAD <file>..." to unstage) # # new file: src/main/resources/icons/arrow_up.png # new file: src/main/resources/icons/bullet_arrow_down.png # new file: src/main/resources/icons/bullet_arrow_up.png # # Changed but not updated: # (use "git add <file>..." to update what will be committed) # (use "git checkout -- <file>..." to discard changes in working directory) # # modified: src/main/java/net/plugins/analysis/FormMatcher.java # modified: src/main/resources/icons/doctor_edit_female.png # modified: src/main/resources/icons/doctor_edit_male.png
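
    Two hedged things to check. First, an unquoted *.png is expanded by the shell against the current directory before git ever sees it, so quoting the pattern lets git itself match it across the whole tree:

        git add '*.png'

    Second, if the git version in use only stages new files for a bare add, the modified-but-tracked images can be staged explicitly with the update flag plus the same pathspec:

        git add -u -- '*.png'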

  • Python: problem with tiny script to delete files

    - by Rosarch
    I have a project that used to be under SVN, but now I'm moving it to Mercurial, so I want to clear out all the .svn files. It's a little too big to do it by hand, so I decided to write a Python script to do it. But it isn't working.

        def cleandir(dir_path):
            print "Cleaning %s\n" % dir_path
            toDelete = []
            files = os.listdir(dir_path)
            for filedir in files:
                print "considering %s" % filedir
                # continue
                if filedir == '.' or filedir == '..':
                    print "skipping %s" % filedir
                    continue
                path = dir_path + os.sep + filedir
                if os.path.isdir(path):
                    cleandir(path)
                else:
                    print "not dir: %s" % path
                if 'svn' in filedir:
                    toDelete.append(path)
            print "Files to be deleted:"
            for candidate in toDelete:
                print candidate
            print "Delete all? [y|n]"
            choice = raw_input()
            if choice == 'y':
                for filedir in toDelete:
                    if os.path.isdir(filedir):
                        os.rmdir(filedir)
                    else:
                        os.unlink(filedir)
            exit()

        if __name__ == "__main__":
            cleandir(dir)

    The print statements show that it's only "considering" the filedirs whose names start with ".". However, if I uncomment the continue statement, all the filedirs are "considered". Why is this? Or is there some other utility that already exists to recursively de-SVN-ify a directory tree?
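
    Two observations, then a sketch. os.listdir never returns '.' or '..', so that guard is dead code, and the exit() at the end of cleandir fires as soon as the first recursive call finishes, which is likely why the walk stops early. Also, os.rmdir only removes empty directories, so it cannot delete a populated .svn folder. A shorter version using the standard library's own walker (a sketch; it deletes immediately rather than prompting):

        import os
        import shutil

        def remove_svn_dirs(root):
            # walk top-down so pruning dirnames stops os.walk from
            # descending into the directories we are about to delete
            for dirpath, dirnames, filenames in os.walk(root):
                if '.svn' in dirnames:
                    target = os.path.join(dirpath, '.svn')
                    print "removing %s" % target
                    shutil.rmtree(target)   # handles non-empty dirs, unlike os.rmdir
                    dirnames.remove('.svn')

        if __name__ == "__main__":
            remove_svn_dirs(os.curdir)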

  • How to handle too many files in Qt

    - by mree
    I'm not sure how to ask this, but here goes the question: I'm migrating from J2SE to Qt. After creating some small applications in Qt, I noticed that I've created way too many files compared to what I would've created if I was developing in Java (I use NetBeans). For example, for a GUI for Orders, I'd have to create: Main Order Search Window, Edit Order Dialog, Manage Order Dialog, maybe some other dialogs... In Java, I don't have to create a new file for every new dialog; the dialog can be created in the JFrame class itself. So, I will only be seeing 1 file for Orders which has the other dialogs in it. However, in Qt, I'd have to create 1 ui file, 1 header file and 1 cpp file for each of the dialogs (I know I can just put the cpp in the header, but it's easier to view code in separate files). So, in the end, I might end up with 3 (if there are 3 dialogs) x 3 files = 9 files for the GUI in Qt, compared to Java which is only 1 file. I do know that I can create a GUI by coding it manually, but that seems easy only on small GUIs and not on complicated GUIs with lots of inputs, tabs and so on. So, is there any suggestion on how to minimize the number of files created in Qt?

  • download large files using servlet

    - by niks
    I am using Apache Tomcat Server 6 and Java 1.6 and am trying to write large mp3 files to the ServletOutputStream for a user to download. Files range from 50-750 MB at the moment. The smaller files aren't causing too much of a problem, but with the larger files I am getting a broken pipe socket exception. File fileMp3 = new File(objDownloadSong.getStrSongFolder() + "/" + strSongIdName); FileInputStream fis = new FileInputStream(fileMp3); response.setContentType("audio/mpeg"); response.setHeader("Content-Disposition", "attachment; filename=\"" + strSongName + ".mp3\";"); response.setContentLength((int) fileMp3.length()); OutputStream os = response.getOutputStream(); try { int byteRead = 0; while ((byteRead = fis.read()) != -1) { os.write(byteRead); } os.flush(); } catch (Exception excp) { downloadComplete = "-1"; excp.printStackTrace(); } finally { os.close(); fis.close(); }
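
    A hedged first step (same APIs, just chunked I/O): fis.read() with no arguments moves one byte per call, which over hundreds of megabytes is enormously slow and gives clients far more time to drop the connection mid-download. A buffer cuts the call count by several orders of magnitude:

        byte[] buffer = new byte[8192];
        int bytesRead;
        while ((bytesRead = fis.read(buffer)) != -1) {
            os.write(buffer, 0, bytesRead);
        }
        os.flush();

    The broken pipe itself usually just means the client went away (cancelled, or the player closed the stream), so it is normally caught and logged quietly rather than treated as a server error.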

  • Process a set of files from a source directory to a destination directory in Python

    - by Spoike
    Being completely new to Python, I'm trying to run a command over a set of files. The command requires both source and destination file (I'm actually using the ImageMagick convert command, as in the example below). I can supply both source and destination directories, however I can't figure out how to easily retain the directory structure from the source to the destination directory. E.g. say srcdir contains the following: srcdir/ file1 file3 dir1/ file1 file2 Then I want the program to create the following destination files in destdir: destdir/file1, destdir/file3, destdir/dir1/file1 and destdir/dir1/file2. So far this is what I came up with:

        import os
        from subprocess import call

        srcdir = os.curdir  # just use the current directory
        destdir = 'path/to/destination'

        for root, dirs, files in os.walk(srcdir):
            for filename in files:
                sourceFile = os.path.join(root, filename)
                destFile = '???'
                cmd = "convert %s -resize 50%% %s" % (sourceFile, destFile)
                call(cmd, shell=True)

    The walk method doesn't directly provide what directory the file is under srcdir, other than concatenating the root directory string with the file name. Is there some easy way to get the destination file, or do I have to do some string manipulation in order to do this?
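
    A sketch of the missing piece (os.path.relpath requires Python 2.6 or newer): take root relative to srcdir, graft it onto destdir, and make sure the directory exists before converting.

        rel = os.path.relpath(root, srcdir)        # e.g. 'dir1' for srcdir/dir1
        destRoot = os.path.join(destdir, rel)
        if not os.path.isdir(destRoot):
            os.makedirs(destRoot)
        destFile = os.path.join(destRoot, filename)

    Passing the command as a list - call(['convert', sourceFile, '-resize', '50%', destFile]) - also avoids shell-quoting trouble with file names containing spaces.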

  • Best way to convert log files (*.txt) to web-friendly files (*.html, *.jsp, etc)?

    - by prometheus
    I have a bunch of log files which are pure text. Here is an example of one... Overall Failures Log SW Failures - 03.09.2010 - /logs/swfailures.txt - 23 errors - 24 warnings HW Failures - 03.09.2010 - /logs/hwfailures.txt - 42 errors - 25 warnings SW Failures - 03.10.2010 - /logs/swfailures.txt - 32 errors - 27 warnings HW Failures - 03.10.2010 - /logs/hwfailures.txt - 11 errors - 31 warnings These files can get quite large and contain a lot of other information. I'd like to produce an HTML file from this log that will add links to key portions and allow the user to open other log files as a result... SW Failures - 03.09.2010 - <a href="/logs/swfailures.txt">/logs/swfailures.txt</a> - 23 errors - 24 warnings This is greatly simplified, as I would like to add many more links and other HTML elements. My question is: what is the best way to do this? If the files are large, should I generate the HTML before serving it to the user, or will JSP do? Should I use Perl or other scripting languages to do this? What are your thoughts and experiences?
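
    For the link-wrapping shown above, a one-line sketch (the path pattern is an assumption based on the sample lines):

        perl -pe 's{(/logs/\S+\.txt)}{<a href="$1">$1</a>}g' overall.log > overall.html

    On the serving question: for large logs that change only when new entries are appended, pre-generating static HTML once per log rotation is generally cheaper than re-parsing the text file inside a JSP on every request.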

  • Scalable way to store files on server (PHP)?

    - by Nathaniel Bennett
    I'm creating my first web application - a really simplistic online text editor. What I need to do is find the best way to store text-based files - a lot of them. These text files can be over 10,000 words in size (text words, not computer words); in essence I want the text documents to be limitless in size. I was thinking about storing the text files in my MySQL database, but thought there was a better way. Instead I'm planning on storing the text files in XML-based format in a directory on my server. The rows in the database define the name of the XML-based text file and the user who created the text, along with basic metadata. An ID is generated using a V4 GUID generator, which gives the text an ID, and the text is stored in the "/store" directory on my server. The text definitions on my server contain this ID, and the Android app I'm developing gets the contents of the text file by retrieving the text definition and then downloading the text to the local device using the GUID in the text definition. I just think this is a botch job - how can I improve this system? There have been cases of GUIDs colliding. I don't want this to happen. A "slim" possibility isn't good enough - I need to make sure there is absolutely no chance of a GUID collision. I was planning on checking the database for texts that have the same ID before storing the text with a particular ID; however, I believe that with over 20,000 pieces of text in my database this would take a long time and produce unneeded stress on the server. How can I make GUIDs safe? What happens when a GUID collides? The server backend is going to be written in PHP.
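
    On the collision worry, a sketch (the table and column names are invented; assumes PDO with exceptions enabled): rather than a SELECT-then-INSERT, which has a race window anyway, declare the id column UNIQUE and let the database enforce it atomically, regenerating on the rare duplicate-key error. An indexed uniqueness check is fast regardless of row count, so 20,000 rows are no concern.

        $stmt = $db->prepare('INSERT INTO texts (id, owner, name) VALUES (?, ?, ?)');
        do {
            $id = generate_guid_v4();   // your existing V4 generator
            try {
                $stmt->execute(array($id, $userId, $name));
                break;                  // insert succeeded, so the id is unique
            } catch (PDOException $e) {
                if ($e->getCode() != '23000') {
                    throw $e;           // 23000 = integrity violation (duplicate key)
                }
            }
        } while (true);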

  • ‘Publish…’ Resulting in Directory With No Files

    - by ToStringTheory
    I was pulling my hair out with this one…  Which isn’t good considering I have so little of it left!  I had just upgraded to the Windows Azure 1.7 SDK the day before with no problems, and used the upgraded ‘Publish…’ dialog to successfully publish a website to my hard disk for hosting on an internal development server.  However, when trying to deploy another project to my file system, it said it was successful, but there were no files in the directory.  The only difference: the first project was an Azure project, the second was a standard ASP.Net Web Application.  If you installed the Windows Azure 1.7 SDK, you may want to read this. The Problem At first it appears that there is no problem: However you may remember that when publishing a web application, the output window will generally iterate through each of the directories as it copies the files from that directory over.  Sure enough, when looking at the output directory – there are no files, no bin directory, no nothing… Troubleshooting Since one site published and the other did not, I believed that the failure may have been due to a failed SQL Server 2012 installation that happened between publishes.  I rolled back the installation, but that did not work either.  I also checked the Configuration Manager dialog, and ensured that the projects were selected to actually build (just checking, even though the output said it built them…)  I checked the properties of the solution and the projects, and a selection of files in the project to make sure that they were selected for content…  Nothing seemed to work. I then decided to uninstall the Azure 1.7 SDK to see if that was the culprit.  When I opened the Windows 7 ‘Uninstall a Program’ dialog, I noticed that the Azure SDK came with 2 extra packages that just so happen to be in a Release Candidate state from Microsoft – ‘Microsoft Web Deploy 3.0’ and ‘Microsoft Web Publish – Visual Studio 2010’.  It dawned on me that the publish dialog must not be just for Azure, since it appeared when I tried to deploy the regular web application as well.  Therefore, it must have been an upgrade to the publish mechanism in Visual Studio.  I uninstalled both of the programs and received my old publish dialog once again, and was able to successfully publish the solution above as I had done before. After celebrating solving the problem, I tried reinstalling the Azure package, to see if it would repair the publishing process. Even though it brought back the updated dialogs, it did not publish any files. Instead of uninstalling and retreating, I now KNEW what the cause was, and these were packages not just for Azure. I now knew a product name to search for. The Solution Sure enough, with the correct search term in Google – ‘microsoft web publish no files’, and setting the timeline to 1 week, I found what I needed - Microsoft Connect - Publish Web Application FAILS! (by Andrew Rits). I am surprised that I missed something that ended up being so simple…  In the Configuration Manager, I had the following settings: This is how I had been building and debugging the solution always…  However, apparently when installing the new Web Publishing package, it does things a little differently in its configuration for publishing: You see the difference?  The configuration here is set to ‘x86’ instead of ‘Any CPU’.  Sure enough, as soon as I switched the configuration to ‘Release – Any CPU’, the deployment built and published all of my files as I expected. 
    Conclusion It was a small change, but apparently the new ‘Publish web application’ defaults to the x86 configuration, thereby not copying any of the project/bin files to the publish target directory.  I spent forever trying things, but this small drop-down eluded me until I was able to tell that the dialog was actually working; apparently, I just didn’t have the correct configuration. I hope that this saves you the hours of frustration and hastened hair loss that it caused me…  I also hope that before Microsoft brings this publishing package out of RC status, they change the behavior of that menu to default to the settings of the old publish menu the first time. Happy Coding!

  • Where is my app.config for SSIS?

    Sometimes when working with SSIS you need to add or change settings in the .NET application configuration file, which can be a bit confusing when you are building a SSIS package, not an application. First of all let's review a couple of examples where you may need to do this. You are referencing an assembly in a Script Task that uses Enterprise Library (aka EntLib), so you need to add the relevant configuration sections and settings, perhaps for the logging application block. You are using Enterprise Library in a custom task or component, and again you need to add the relevant configuration sections and settings. You are using a web service with Microsoft Web Services Enhancements (WSE) 3.0 and hosting the proxy in SSIS, in an assembly used by your package, and need to add the configuration sections and settings. You need to change behaviours of the .NET framework which can be influenced by a configuration file, such as the System.Net.Mail default SMTP settings. Perhaps you wish to configure System.Net and the httpWebRequest header for parsing unsafe headers (useUnsafeHeaderParsing), which will change the way the HTTP Connection manager behaves. You are consuming a WCF service and wish to specify the endpoint in configuration. There are no doubt plenty more examples, but each of these requires us to identify the correct configuration file and make the relevant changes. There are actually several configuration files, each used by a different execution host depending on how you are working with the SSIS package. The folders we need to look in will vary depending on the version of SQL Server as well as the processor architecture, but most are in what we can call the Binn folder. The SQL Server 2005 Binn folder is at C:\Program Files\Microsoft SQL Server\90\DTS\Binn\, compared to C:\Program Files\Microsoft SQL Server\100\DTS\Binn\ for SQL Server 2008. If you are on a 64-bit machine then you will see C:\Program Files (x86)\Microsoft SQL Server\90\DTS\Binn\ for the 32-bit executables and C:\Program Files\Microsoft SQL Server\90\DTS\Binn\ for 64-bit, so be sure to check all relevant locations. Of course SQL Server 2008 may have a C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\ on a 64-bit machine too. To recap, the version of SQL Server determines whether you look in the 90 or 100 sub-folder under SQL Server in Program Files (C:\Program Files\Microsoft SQL Server\nn\). If you are running a 64-bit operating system then you will have two instances of Program Files, C:\Program Files (x86)\ for 32-bit and C:\Program Files\ for 64-bit. You may wish to check both depending on what you are doing, but this is covered more under each section below. There are a total of five specific configuration files that you may need to change, each one detailed below: DTExec.exe.config DTExec.exe is the standalone command line tool used for executing SSIS packages, and therefore it is an execution host with an app.config file, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTExec.exe.config. The file can be found in both the 32-bit and 64-bit Binn folders. DtsDebugHost.exe.config DtsDebugHost.exe is the execution host used by Business Intelligence Development Studio (BIDS) / Visual Studio when executing a package from the designer in debug mode, which is the default behaviour, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DtsDebugHost.exe.config. The file can be found in both the 32-bit and 64-bit Binn folders.
    This may surprise some people, as Visual Studio is only 32-bit, but thankfully the debugger supports both. This can be set in the project properties; see the Run64BitRuntime property (true or false) in the Debugging pane of the Project Properties. dtshost.exe.config dtshost.exe is the execution host used by what I think of as the built-in features of SQL Server, such as SQL Server Agent, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\dtshost.exe.config. This file can be found in both the 32-bit and 64-bit Binn folders. devenv.exe.config Something slightly different is devenv.exe, which is Visual Studio. This configuration file may also need changing if you need a feature at design-time, such as in a Task Editor or Connection Manager editor. Visual Studio 2005 for SQL Server 2005 - C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\devenv.exe.config. Visual Studio 2008 for SQL Server 2008 - C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe.config. Visual Studio is only available for 32-bit, so on a 64-bit machine you will have to look in C:\Program Files (x86)\ only. DTExecUI.exe.config The DTExecUI tool can also have a configuration file, and these can be found under the Tools folders for SQL Server as shown below. C:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\DTExecUI.exe C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\DTExecUI.exe A configuration file may not exist, but if you can find the matching executable you know you are in the right place, so you can go ahead and add a new file yourself. In summary, we have covered the assembly configuration files for all of the standard methods of building and running a SSIS package, but obviously if you are working programmatically you will need to make the relevant modifications to your program's app.config as well.
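
    As a concrete example, the useUnsafeHeaderParsing change mentioned above would look like this sketch, placed inside the root <configuration> element of whichever host's file applies (DTExec.exe.config, DtsDebugHost.exe.config, etc.):

        <configuration>
          <system.net>
            <settings>
              <httpWebRequest useUnsafeHeaderParsing="true" />
            </settings>
          </system.net>
        </configuration>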

  • Workspace.PendEdit not checking out files

    - by MasterMax1313
    I'm using the TFS 2010 SDK to programmatically check in edits to files in TFS 2010. The documentation on the TFS 2010 SDK is sparse at best. When I call the method Workspace.PendEdit() passing in an array of files I want to mark as having a pending edit, nothing is actually checked out. So when I call Workspace.CheckIn() passing in Workspace.GetPendingChanges() and some comments, I get an exception that there must be at least one thing that has a pending change (which should be whatever I passed into PendEdit). Any thoughts on why the app isn't marking the files as having a pending edit in the workspace?
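
    A hedged sketch of things worth checking (common causes, not confirmed by the question): PendEdit silently pends nothing for items the workspace has never downloaded, so a Get() first ensures local versions exist, and PendEdit's int return value says how many files it actually checked out.

        workspace.Get();   // make sure the items have local versions in this workspace
        int pended = workspace.PendEdit(paths);   // paths: your array of file paths
        Console.WriteLine("{0} file(s) checked out", pended);

        PendingChange[] changes = workspace.GetPendingChanges();
        if (changes.Length > 0)
        {
            workspace.CheckIn(changes, "automated edit");
        }

    If pended comes back as 0, the usual suspects are paths that fall outside the workspace's mappings, or items not yet in the workspace's local version table.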

  • VSS: Move file from one folder to another?

    - by shoosh
    Is there a way in Visual SourceSafe to move a file from one directory to another while retaining its history? Edit: I actually found a roundabout way to do this. First I drag the file I want to move to the directory I want to move it to, which creates a "link" to the file there, and then I "permanently destroy" the file in its original location. Does this actually do what I think it does?

  • IOException reading a large file from a UNC path into a byte array using .NET

    - by Matt
    I am using the following code to attempt to read a large file (280 MB) into a byte array from a UNC path: public void ReadWholeArray(string fileName, byte[] data) { int offset = 0; int remaining = data.Length; log.Debug("ReadWholeArray"); FileStream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read); while (remaining > 0) { int read = stream.Read(data, offset, remaining); if (read <= 0) throw new EndOfStreamException (String.Format("End of stream reached with {0} bytes left to read", remaining)); remaining -= read; offset += read; } } This is blowing up with the following error: System.IO.IOException: Insufficient system resources exist to complete the requested If I run this using a local path it works fine; in my test case the UNC path is actually pointing to the local box. Any thoughts on what is going on here?
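
    A hedged workaround (the 1 MB figure is arbitrary): over a UNC path, asking Read for the entire 280 MB remainder in one call pushes a single enormous request at the network redirector, which is a known trigger for this error. Capping each call keeps every individual read modest while the loop's bookkeeping stays identical:

        const int MaxChunk = 1024 * 1024;   // assumed cap per read; tune as needed
        while (remaining > 0)
        {
            int read = stream.Read(data, offset, Math.Min(remaining, MaxChunk));
            if (read <= 0)
                throw new EndOfStreamException(
                    String.Format("End of stream reached with {0} bytes left to read", remaining));
            remaining -= read;
            offset += read;
        }

    Wrapping the FileStream in a using block would also guarantee the handle is released if the method throws.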

  • JavaScript and CSS files for ASP.NET MVC 2 EditorTemplate user controls

    - by Zack Peterson
    I'm using an EditorTemplate DateTime.ascx in my ASP.NET MVC 2 project. <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<DateTime>" %> <%: Html.TextBox(String.Empty, Model.ToString("M/dd/yyyy h:mm tt")) %> <script type="text/javascript"> $(function () { $('#<%: ViewData.TemplateInfo.GetFullHtmlFieldId(String.Empty) %>').AnyTime_picker({ format: "%c/%d/%Y %l:%i %p" }); }); </script> This uses the Any+Time™ JavaScript library for jQuery by Andrew M. Andrews III. I've added those library files (anytimec.js and anytimec.css) to the <head> section of my master page. Rather than include these JavaScript and Cascading Style Sheet files on every page of my web site, how can I instead include the .js and .css files only on pages that need them--pages that edit a DateTime type value?
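
    One common pattern, sketched under assumptions (the helper names are invented; MVC 2 ships nothing equivalent built in): let any view or editor template "register" the assets it needs in HttpContext.Items, and have the master page emit the distinct set once per request.

        using System;
        using System.Collections.Generic;
        using System.Text;
        using System.Web.Mvc;

        public static class AssetHelper
        {
            private const string Key = "__RequiredAssets";   // hypothetical storage key

            // From a view or template: <% Html.RequireAsset("/Content/anytimec.css"); %>
            public static void RequireAsset(this HtmlHelper html, string url)
            {
                var items = html.ViewContext.HttpContext.Items;
                var set = items[Key] as HashSet<string>;
                if (set == null)
                    items[Key] = set = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
                set.Add(url);
            }

            // From the master page, near the end of <body>: <%= Html.RenderAssets() %>
            public static MvcHtmlString RenderAssets(this HtmlHelper html)
            {
                var set = html.ViewContext.HttpContext.Items[Key] as HashSet<string>;
                if (set == null)
                    return MvcHtmlString.Empty;
                var sb = new StringBuilder();
                foreach (string url in set)
                {
                    if (url.EndsWith(".css", StringComparison.OrdinalIgnoreCase))
                        sb.AppendFormat("<link rel=\"stylesheet\" type=\"text/css\" href=\"{0}\" />\n", url);
                    else
                        sb.AppendFormat("<script type=\"text/javascript\" src=\"{0}\"></script>\n", url);
                }
                return MvcHtmlString.Create(sb.ToString());
            }
        }

    The render call has to sit after the templates have run - the end of <body> works - which is also fine for the picker script, since the template's own inline script only fires on document ready.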
