Search Results

Search found 3168 results on 127 pages for 'directories'.


  • How to import a module from PyPI when I have another module with the same name

    - by kuzzooroo
    I'm trying to use the lockfile module from PyPI. I do my development within Spyder. After installing the module from PyPI, I can't import it by doing import lockfile. I end up importing anaconda/lib/python2.7/site-packages/spyderlib/utils/external/lockfile.py instead. Spyder seems to want to have the spyderlib/utils/external directory at the beginning of sys.path, or at least none of the polite ways I can find to add my other paths get me in front of spyderlib/utils/external. I'm using python2.7 but with from __future__ import absolute_import. Here's what I've already tried: Writing code that modifies sys.path before running import lockfile. This works, but it can't be the correct way of doing things. Circumventing the normal mechanics of importing in Python using the imp module (I haven't gotten this to work yet, but I'm guessing it could be made to work) Installing the package with something like pip install --install-option="--prefix=modules_with_name_collisions" package_name. I haven't gotten this to work yet either, but I'm guessing it could be made to work. It looks like this option is intended to create an entirely separate lib tree, which is more than I need. (Source) Using pip install --target=lockfile_from_pip. The files show up in the directory where I tell them to go, but import doesn't find them. And in fact pip uninstall can't find them either. I get Cannot uninstall requirement lockfile-from-pip, not installed and I guess I will just delete the directories and hope that's clean. (Source) So what's the preferred way for me to get access to the PyPI lockfile module?
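
    A minimal sketch of the explicit-path idea mentioned above, assuming an Anaconda-style site-packages location (the path is an assumption, not from the question): load the PyPI copy by searching site-packages directly, so the copy bundled under spyderlib/utils/external cannot shadow it.

        # Sketch: locate and load the PyPI lockfile explicitly (Python 2.7 imp API).
        # The site-packages path below is an assumption; adjust to the actual install.
        import imp
        import os
        import sys

        site_packages = os.path.join(sys.prefix, 'lib', 'python2.7', 'site-packages')
        lockfile = imp.load_module('lockfile',
                                   *imp.find_module('lockfile', [site_packages]))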

    Read the article

  • How to mock/stub a directory of files and their contents using RSpec?

    - by John Topley
    A while ago I asked "How to test obtaining a list of files within a directory using RSpec?" and although I got a couple of useful answers, I'm still stuck, hence a new question with some more detail about what I'm trying to do. I'm writing my first RubyGem. It has a module that contains a class method that returns an array containing a list of non-hidden files within a specified directory. Like this: files = Foo.bar :directory => './public' The array also contains an element that represents metadata about the files. This is actually a hash of hashes generated from the contents of the files, the idea being that changing even a single file changes the hash. I've written my pending RSpec examples, but I really have no idea how to implement them: it "should compute a hash of the files within the specified directory" it "shouldn't include hidden files or directories within the specified directory" it "should compute a different hash if the content of a file changes" I really don't want to have the tests dependent on real files acting as fixtures. How can I mock or stub the files and their contents? The gem implementation will use Find.find, but as one of the answers to my other question said, I don't need to test the library. I really have no idea how to write these specs, so any help much appreciated!
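
    A hedged sketch of what such a spec might look like: Foo.bar and the ./public directory are taken from the question, while the yielded file names, the stubbed contents and the RSpec 3 allow/receive syntax are assumptions (the gem under test is assumed to require 'find').

        require 'find'

        describe Foo do
          it "computes a hash of the files within the specified directory" do
            # Stub the filesystem so no real fixture files are needed.
            allow(Find).to receive(:find).with('./public')
              .and_yield('./public/a.txt')
              .and_yield('./public/.hidden')
            allow(File).to receive(:file?).and_return(true)
            allow(File).to receive(:read).with('./public/a.txt').and_return('contents')
            allow(File).to receive(:read).with('./public/.hidden').and_return('secret')

            files = Foo.bar :directory => './public'
            expect(files).to include('./public/a.txt')
            expect(files).not_to include('./public/.hidden')
          end
        end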

    Read the article

  • Process a set of files from a source directory to a destination directory in Python

    - by Spoike
    Being completely new to Python, I'm trying to run a command over a set of files in Python. The command requires both source and destination file (I'm actually using imagemagick convert as in the example below). I can supply both source and destination directories, however I can't figure out how to easily retain the directory structure from the source to the destination directory. E.g. say the srcdir contains the following: srcdir/ file1 file3 dir1/ file1 file2 Then I want the program to create the following destination files on destdir: destdir/file1, destdir/file3, destdir/dir1/file1 and destdir/dir1/file2 So far this is what I came up with: import os from subprocess import call srcdir = os.curdir # just use the current directory destdir = 'path/to/destination' for root, dirs, files in os.walk(srcdir): for filename in files: sourceFile = os.path.join(root, filename) destFile = '???' cmd = "convert %s -resize 50%% %s" % (sourceFile, destFile) call(cmd, shell=True) The walk method doesn't directly provide what directory the file is under srcdir other than concatenating the root directory string with the file name. Is there some easy way to get the destination file, or do I have to do some string manipulation in order to do this?
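
    One way to derive the destination path is os.path.relpath (Python 2.6+); a sketch built on the code above, with the ImageMagick call passed as a list so no shell quoting is needed:

        import os
        from subprocess import call

        srcdir = os.curdir                    # just use the current directory
        destdir = 'path/to/destination'

        for root, dirs, files in os.walk(srcdir):
            # path of root relative to srcdir, e.g. '.' or 'dir1'
            rel = os.path.relpath(root, srcdir)
            destRoot = os.path.normpath(os.path.join(destdir, rel))
            if not os.path.isdir(destRoot):
                os.makedirs(destRoot)         # recreate the directory structure
            for filename in files:
                sourceFile = os.path.join(root, filename)
                destFile = os.path.join(destRoot, filename)
                call(['convert', sourceFile, '-resize', '50%', destFile])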

    Read the article

  • [javascript] Populating jsTree based on XML data uploaded to server folder

    - by PFM
    tl;dr: How can I populate jsTree based on a folder location instead of an exact XML URL? I'm looking for a little direction on this project. Currently I am trying to copy file structures of hard drives as XML files and recreate them using jsTree on the webserver for a completely independent version of the file structure. I have a Python script that outputs XML files formatted for jsTree and automatically uploads them to a folder on the server. The problem is that I am now a little lost because I have to manually enter each XML file into the jsTree code for it to display, so I have multiple entries like this: $("#tree1") .jstree({ "plugins" : [ "themes", "xml_data", "ui", "search", "types" ], "xml_data" : { "ajax" : { "url" : "./XML_DATA/DRIVE1.xml" }, "xsl" : "nest" }, I see in the documentation that, instead of being populated by a direct file, the folders are populated by "server.php", but nowhere in the PHP code does it point to any directories or files. After considering the problem I thought of a few solutions and could use some advice on them: Should I be trying to write PHP code to automatically look through my XML_DATA folder to upload each XML file? Should I just upload all the XML to MySQL and populate my tree based on that? Should the JavaScript be the code looking through the server's folder for XML files? All the XML is formed the same way but the number of XML files on the server will increase and will have to be refreshed as well as they will be overwritten with changes. Any direction would be appreciated, thanks.
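
    One hedged option for the first idea: a small PHP endpoint that lists whatever XML files are in the folder, which the page then loops over to build one tree per file. The file name list_xml.php and the tree-per-file layout are assumptions.

        <?php
        // list_xml.php (hypothetical name): return the XML files found in
        // XML_DATA as a JSON array, so the JavaScript never hard-codes them.
        $files = glob(__DIR__ . '/XML_DATA/*.xml');
        header('Content-Type: application/json');
        echo json_encode(array_map('basename', $files));

    The page could then call $.getJSON('list_xml.php', ...) and invoke .jstree(...) once per returned filename, pointing xml_data.ajax.url at "./XML_DATA/" + name, so new drive files appear without editing the JavaScript.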

    Read the article

  • Javascript/Jquery pop-up window in Asp.Net MVC4

    - by Mark
    Below I have a "button" (just a span with an icon) that creates a pop-up view of a div in my application to allow users to compare information in seperate windows. However, I get and Asp.Net Error as follows: **Server Error in '/' Application. The resource cannot be found. Requested URL: /Home/[object Object]** Does anyone have an Idea of why this is happending? Below is my code: <div class="module_actions"> <div class="actions"> <span class="icon-expand2 pop-out"></span> </div> </div> <script> $(document).ajaxSuccess(function () { var Clone = $(".pop-out").click(function () { $(this).parents(".module").clone().appendTo("#NewWindow"); }); $(".pop-out").click(function popitup(url) { LeftPosition = (screen.width) ? (screen.width - 400) / 1 : 0; TopPosition = (screen.height) ? (screen.height - 700) / 1 : 0; var sheight = (screen.height) * 0.5; var swidth = (screen.width) * 0.5; settings = 'height=' + sheight + ',width=' + swidth + ',top=' + TopPosition + ',left=' + LeftPosition + ',scrollbars=yes,resizable=yes,toolbar=no,status=no,menu=no, directories=no,titlebar=no,location=no,addressbar=no' newwindow = window.open(url, '/Index', settings); if (window.focus) { newwindow.focus() } return false; }); });

    Read the article

  • Compiling my Boost/NTL program with c++ on Linux.

    - by Martin Lauridsen
    Hi SO, I wrote a client program and a server program, that uses the NTL library and Boost::Asio, to do client/server communication for an integer factorization application, in C++. Both sides consist of several headers and cpp files. Both project compile fine individually on Windows in Visual Studio. All I did, was add the include path of NTL and Boost to both projects: Additional include paths: "D:\Downloads\WinNTL-5_5_2\include";D:\boost_1_42_0 Furthermore, for both projects, I added the two library paths to both projects in VS: Additional library directories: D:\boost_1_42_0\stage\lib;"D:\Documents\Visual Studio 2008\Projects\ntl\Debug" And added under Additional dependencies: ntl.lib As said, it compiles fine on Windows. But when I put the code on a Linux machine provided by university, I try to compile with the following statement c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include -I/appl/htopopt/Linux_x86_64/boost_1_43_0/include client_protocol.cpp mpqs_client.cpp mpqs_sieve.cpp mpqs_helper.cpp -o mpqs_helper -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm -L/appl/htopopt/Linux_x86_64/boost_1_43_0/lib -lboost_system -static Upon doing this, I get a huuuge error, which I posted here. Any idea how to fix this, please??

    Read the article

  • Problems installing a package from PyPI: root files not installed

    - by intuited
    After installing the BitTorrent-bencode package, either via easy_install BitTorrent-bencode or pip install BitTorrent-bencode, or by downloading the tarball and installing that via easy_install $tarball, I discover that /usr/local/lib/python2.6/dist-packages/BitTorrent_bencode-5.0.8-py2.6.egg/ contains EGG-INFO/ and test/ directories. Although both of these subdirectories contain files, there are no files in the BitTorr* directory itself. The tarball does contain bencode.py, which is meant to be the actual source for this package, but it's not installed by either of those utils. I'm pretty new to all of this so I'm not sure if this is a problem with the package or with what I'm doing. The package was packaged a while ago (2007), so perhaps it's using some deprecated configuration aspect that I need to supply a command-line flag for. I'm more interested in learning what's wrong with either the package or my procedures than in getting this particular package installed; there is another package called hunnyb that seems to do a decent enough job of decoding bencoded data. Mostly I'd like to know how to deal with such problems in other packages.
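
    A diagnostic sketch rather than a fix (the tarball name is assumed from the version shown): unpack the tarball, check whether setup.py actually declares bencode.py, then install from the unpacked source and record what gets copied.

        tar xzf BitTorrent-bencode-5.0.8.tar.gz
        cd BitTorrent-bencode-5.0.8
        grep -n "py_modules\|packages" setup.py        # is bencode.py declared at all?
        python setup.py install --record installed-files.txt
        cat installed-files.txt                        # what was actually installed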

    Read the article

  • Behavior of Struts2 and convention-plugin when there is Index(extends ActionSupport)

    - by hanishi
    We have an Action class named 'Index' immediately under com.example.common.action; it is annotated @ParentPackage('default'), which is declared in the package directive in struts.xml, has "/" for its namespace and extends "struts-default". It also declares @Result so that it responds with JSP files corresponding to the string values returned by its execute() method. In our struts.xml, the following struts setting is configured along with the other configurations that are needed for convention-plugin. <constant name="struts.action.extension" value=","/> When accessing /my_context/none_existing_path, the request apparently hits this Index class and the contents of the jsp declared in the Index's @Result section gets returned. However, if we provide /my_context/, we receive the following error: HTTP Status 404-There is no Action mapped for namespace[/] and action name [] associated with context path [/my_context]. We want to know the reason why accessing /my_context/none_existing_path, where none_existing_path has no matching action, can fall back to the Index class, but an error is returned when the URL requested is just /my_context/. Currently, our convention-plugin settings are declared as follows: <constant name="struts.convention.package.locators.basePackage" value="com.example"/> <constant name="struts.convention.package.locators" value="action"/> Strangely, if we change the value of struts.convention.package.locators.basePackage to com.example.common, in which the aforementioned Index file can be immediately found by narrowing the search scope, requesting /my_context/ displays the content of the jsps declared in the @Result section of the Index class. However, as our action classes are distributed throughout the com.example.[a-z].action packages, where [a-z] represents the large volume of directories we have in our package structure, we cannot use this trick as a workaround. We have also tried placing index.jsp at the top level of the class path, and have the index.jsp redirect to /my_context/index, which worked but is not what we want. Could this be a bug? We appreciate your responses. Thank you in advance. EDIT: JIRA registered, problem solved (from Struts 2.3.12 up)
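
    One hedged workaround, for versions before the fix mentioned in the EDIT, is an explicit action for the root namespace so a bare /my_context/ resolves; the result path and the reuse of the convention-scanned Index class below are assumptions based on the question:

        <package name="root" namespace="/" extends="struts-default">
            <default-action-ref name="index"/>
            <action name="index" class="com.example.common.action.Index">
                <result>/WEB-INF/content/index.jsp</result>
            </action>
        </package>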

    Read the article

  • jsf and eclipse setup. Lib in Explorer is different than lib in tomcat. How come?

    - by user384706
    Hi, I am a starter in JSF 2.0, and I have a question related to Eclipse (I am using Helios). 1) I create a Dynamic Project 2) I add the JSF project facet. 3) I choose a JSF user library (I have created it using MyFaces) All ok so far. I notice though that in the Project Explorer in WebContent/WEB-INF/lib the lib directory is empty instead of having the MyFaces jars. The application works fine though. I looked into this and the jars are actually being placed in the corresponding lib directory of the app deployed under wtpwebapps of the Tomcat instance of Eclipse in the .plugins directory. Ok, it works but IMHO it is incorrect to have the Project Explorer inconsistent with the directories actually deployed. I.e. lib is shown empty in Project Explorer but with jars under wtpwebapps. Am I wrong to dislike this inconsistency? Is this how it should work or am I doing something wrong in the way I am setting up my project? Thanks!

    Read the article

  • Are there any free Xml Diff/Merge tools available?

    - by Russell
    I have several config files in my .net applications which I would like to merge application settings elements etc. I was about to begin doing it manually as I usually do, however thought there must be an XML diff GUI tool available somewhere. The tool would be able to go to the element level to compare and display the differences etc. However Google gave no substantive free tool results and no hints for anything of value. Is such a tool available? That is very useful? For free? Thanks in advance. :) Edit: Here is a bit of clarification of the functionality that would turn my error-prone, tedious manual job into a 1-minute simpler task (and potential to automate): In KDiff3, you can do a diff/merge of entire directories. There is a hierarchical diff which is very accurate, user-friendly and clear. I was interested in finding a similar solution, however instead of directory hierarchy, an XML element hierarchy. If there is no such open source software, I am considering creating one on CodePlex to provide this functionality.

    Read the article

  • how to seamlessly integrate subversion and git?

    - by mattv
    I'm looking for tips on how to seamlessly integrate subversion and git, for deploying web sites by a small team of web developers. We each have our own development versions of our sites on our local machines. We also have dev, staging, and live servers. As our team has grown, we haven't updated our revision control and deployment strategies accordingly. We had all been checking into the trunk of a shared Subversion repository. Both the dev & staging servers ran from a checkout of the trunk, so updating them involved running "svn update" while the live server ran as an export from trunk which required an "svn export" to get the latest code. In either case, we would often update just certain files by updating or exporting just those files or directories. That worked okay when there was just one or two developers. However, a big downside was that we couldn't point to an individual tag that represented what was currently on live at any given time. In keeping with corporate policy, we'd like to continue to use Subversion to store what we're now calling our "production branch," which will be what goes onto staging and live. However, we would like to use Git on our local and development sites. We especially like the idea of easier merges and being able to "cherry pick" updates that need to go live. We had initially planned on using git-svn, but it doesn't seem to work well in a shared environment such as our dev or staging servers. Anyone else doing something like this? What's the best way to make it work? Or are we making it more difficult than it should be?
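
    A sketch of one common bridge setup (the repository URL, layout flag and branch names are placeholders): a git-svn clone per developer, local feature branches for day-to-day work, and dcommit of only the commits that should reach the Subversion production branch.

        git svn clone -s https://svn.example.com/repo project   # placeholder URL; -s = standard layout
        cd project
        git checkout -b feature-x        # day-to-day work happens in git
        # ...commit locally, merge or cherry-pick what should go live...
        git svn rebase                   # pull the latest Subversion changes
        git svn dcommit                  # push the selected commits back to svn

    In this arrangement each developer keeps their own git-svn clone rather than sharing one, which may be why the shared dev/staging git-svn setup described above felt awkward.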

    Read the article

  • Mercurial repository narrow clone?

    - by Berry Langerak
    Hi. I'm currently in the process of moving from Subversion to Mercurial, and I have to say I don't regret that decision. However, when trying to convert my project, I ran into a problem with Mercurial, which I can't seem to get fixed. I have two distinct projects: one is a framework, and the other is an application that relies on that framework. Here's what the repositories look like: The Framework repository: docs/ deploy/ lib/ tests/ The Application repository: application/ config/ lib/ tests/ www/ What I'd like is for the application's lib directory to contain a copy of the framework's lib/ directory. I used to do this using svn:externals. Now, I am aware that Mercurial supports the concept of subrepositories, but that doesn't seem like the "correct" solution, as it doesn't actually pull in the lib/ directory like I wanted, as you'll still have to pull and push changes manually. That, plus once you clone the framework repository, you'll get all of it, not just the lib/ directory. I only need the lib/ directory, not the tests, or the docs. Now, I thought up two different solutions to this problem, but I wonder which is the best. The first solution would be to clone the framework in a different directory altogether and create a symlink in the application's lib/ directory which points to the framework's lib/ directory. Putting the symlink in .hgignore should make sure all is well, I think? That means that you could edit the framework's code, and commit that, and you could edit the application's code and commit that, too. The other option is to have multiple repositories. The framework gets pulled as a whole, which means you'll get the docs/, deploy/, test/ etc. directories, which are not needed for usage of the framework. I thought maybe creating a repository purely for the library might be a solution, although I sincerely doubt it, as the unit tests are very dependent upon the library itself. Does anyone know a decent solution for this problem?
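
    A sketch of the symlink option described above (repository URLs and local paths are placeholders); the .hgignore line uses Mercurial's default regexp syntax:

        hg clone https://hg.example.com/framework ~/src/framework
        hg clone https://hg.example.com/application ~/src/application
        ln -s ~/src/framework/lib ~/src/application/lib
        # keep the link out of the application repository
        echo '^lib$' >> ~/src/application/.hgignore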

    Read the article

  • Installing Image_Graph and using with BASE

    - by PPowerHouseK
    Hello all, I have Windows 7 with the latest XAMPP installation. I configured BASE to work, for the most part. My problem is that in BASE, when I click on the graph alerts button, I get this error: Error loading the Graphing library: Check your Pear::Image_Graph installation! * Image_Graph can be found here:at http://pear.veggerby.dk/. Without this library no graphing operations can be performed. * Make sure PEAR libraries can be found by php at all: pear config-show | grep "PEAR directory" PEAR directory php_dir /usr/share/pear This path must be part of the include path of php (cf. /etc/php.ini): php -i | grep "include_path" include_path => .:/usr/share/pear:/usr/share/php => .:/usr/share/pear:/usr/share/php</code> I think it may have to do with the include path of the php.ini so here is what it currently says: ;;;;;;;;;;;;;;;;;;;;;;;;; ; Paths and Directories ; ;;;;;;;;;;;;;;;;;;;;;;;;; ; UNIX: "/path1:/path2" ;include_path = ".:/php/includes" ; ; Windows: "\path1;\path2" ;include_path = ".;c:\php\includes" ; ; PHP's default setting for include_path is ".;/path/to/php/pear" ; http://php.net/include-path include_path = ".;C:\xampp\php\PEAR" I am really at a loss as to how to resolve this. I searched for a while for some documentation but most referred to installing on ubuntu. I don't know any pear or php, so if you know how to fix this please explain thoroughly. I am willing to supply as much information as needed.
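
    A hedged checklist sketch for Windows/XAMPP; the channel and package names are assumptions to verify against the Image_Graph documentation, not confirmed commands for this exact setup:

        cd C:\xampp\php
        pear config-show                  REM "PEAR directory" should match php.ini's include_path
        pear list                         REM Image_Graph (and Image_Canvas) should be listed
        pear channel-discover pear.veggerby.dk
        pear install pear.veggerby.dk/Image_Graph

    If "PEAR directory" reported by pear config-show differs from C:\xampp\php\PEAR, either reinstall the package into that directory or add the reported directory to include_path in php.ini and restart Apache.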

    Read the article

  • .htaccess - alias all www-only requests to subdirectory

    - by CodeMoose
    Trying to install the wonderful Concrete5 CMS to use as my main site engine - the problem is it has about 15 different files and directories, and they clutter up my root. I'd really like to move it to a /_concrete/ subdirectory, and still maintain it in the domain root. htaccess has never been my strong suit - after a lot of research and learning, and a lot of error 500s, my frustration is overriding my pride and I'm posting here. Here's exactly what I'm trying to accomplish: Any requests that come through www.domain.com are forwarded to www.domain.com/_concrete/, except in the case of an existing file. The end-user URL shouldn't change - they will still see the site as www.domain.com, even though they're being served www.domain.com/_concrete/. Multiple subdomains exist on this site as sub-folders within the root - thus, only requests coming through www.domain.com should be redirected. Here's the closest I got with my htaccess, which produces an error 500: RewriteEngine On RewriteCond %{HTTP_HOST} ^(www\.)?domain\.com [NC] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !^_concrete RewriteRule ^(.*)$ _concrete/$1 [L,QSA] This is the result of 4 hours of sweat and blood (mostly blood), so I have to be close. I'm hoping one of your fine minds can point out a stupid mistake and put this thing to rest swiftly. Thanks for your time! Addendum: I previously posted .htaccess - alias domain root to subfolder a while ago, which got me started. Please don't fall into the trap of thinking it's a duplicate.
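
    A hedged sketch of the usual pattern (untested against this exact layout): REQUEST_FILENAME is a filesystem path, so matching it against ^_concrete never succeeds and the rule can loop into a 500; testing REQUEST_URI instead, and also skipping real directories, is the common fix.

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?domain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/_concrete/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ _concrete/$1 [L,QSA]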

    Read the article

  • Folders and their files do not get copied: please help with the given code

    - by OM The Eternity
    I have a Joomla folder, and I have a script which has to copy the complete Joomla folder to another new folder. Below is the code, which copies only the files contained in the main folder but NOT the other directories existing in the Joomla folder. I know that I have to place a check with a dir-exists function and create the directory if it does not exist; I also want this code to overwrite the previously existing files and folders. How can I accomplish this? <?php $source = '/var/www/html/pranav_test/'; $destination = '/var/www/html/parth/'; $sourceFiles = glob($source . '*'); foreach($sourceFiles as $file) { $baseFile = basename($file); if (file_exists($destination . $baseFile)) { $originalHash = md5_file($file); $destinationHash = md5_file($destination . $baseFile); if ($originalHash === $destinationHash) { continue; } } copy($file, $destination . $baseFile); } ?> Thanks to @alex who helped me to get the code, but I need more support. Please help...
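
    A minimal recursive sketch along the same lines (paths taken from the script above, function name hypothetical): descend into subdirectories, create them in the destination when missing, and overwrite files whose hash differs.

        <?php
        function copyTree($source, $destination)
        {
            if (!is_dir($destination)) {
                mkdir($destination, 0755, true);        // create missing folders
            }
            foreach (scandir($source) as $entry) {
                if ($entry === '.' || $entry === '..') {
                    continue;
                }
                $src = $source . '/' . $entry;
                $dst = $destination . '/' . $entry;
                if (is_dir($src)) {
                    copyTree($src, $dst);               // recurse into subfolders
                } elseif (!file_exists($dst) || md5_file($src) !== md5_file($dst)) {
                    copy($src, $dst);                   // new or changed file
                }
            }
        }

        copyTree('/var/www/html/pranav_test', '/var/www/html/parth');
        ?>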

    Read the article

  • Crash in C++ Code

    - by Ankuj
    I am trying to list all files in a directory recursively, but my code crashes without giving any error. When the given file is a directory I recursively call the function, otherwise I print its name. I am using dirent.h int list_file(string path) { DIR *dir; struct dirent *ent; char *c_style_path; c_style_path = new char[path.length()]; c_style_path = (char *)path.c_str(); dir = opendir (c_style_path); if (dir != NULL) { /* print all the files and directories within directory */ while ((ent = readdir (dir)) != NULL) { if(ent->d_type == DT_DIR && (strcmp(ent->d_name,".")!=0) && (strcmp(ent->d_name,"..")!=0)) { string tmp = path + "\\" + ent->d_name; list_file(tmp); } else { cout<<ent->d_name<<endl; } } closedir (dir); } else { /* could not open directory */ perror (""); return EXIT_FAILURE; } delete [] c_style_path; return 0; } I can't work out what I am doing wrong here. Any clues?
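
    A hedged sketch of the usual shape of this function: opendir() accepts path.c_str() directly, so the new[]/cast/delete[] sequence above is not needed. In the original, the new[]'d buffer is leaked, c_style_path is re-pointed at path.c_str(), and delete[] is then called on memory the string owns, which is a likely cause of the crash; note also that d_type is not filled in on every filesystem.

        #include <dirent.h>
        #include <cstdlib>
        #include <cstring>
        #include <iostream>
        #include <string>
        using namespace std;

        int list_file(const string &path)
        {
            DIR *dir = opendir(path.c_str());      // no copy of the path needed
            if (dir == NULL) {
                perror("");                        // could not open directory
                return EXIT_FAILURE;
            }
            struct dirent *ent;
            while ((ent = readdir(dir)) != NULL) {
                if (ent->d_type == DT_DIR &&
                    strcmp(ent->d_name, ".") != 0 &&
                    strcmp(ent->d_name, "..") != 0) {
                    list_file(path + "/" + ent->d_name);   // recurse into subdirectory
                } else {
                    cout << ent->d_name << endl;
                }
            }
            closedir(dir);
            return 0;
        }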

    Read the article

  • Doing a virus check on a file from a build script

    - by the_mandrill
    I would like to be be able to invoke a virus check as the final stage of the build process (please don't question why a dev machine would get a virus, it's just a belt-and-braces approach to avoid the risk of getting sued by customers...). Also I'd like the option of having AV on a machine but switching the auto file system protection off (at least for the build directories). What I would like is a generic way of scanning a file using whatever AV system is in place. I'm assuming that there's an Windows API to do this, given that Windows detects the presence of an AV system, and browsers such as Firefox invoke a virus scan whenever a file is downloaded. So what's the API that they're using? There's the Microsoft AntiVirus API but that seems to be specific to Office documents. Does the approach involve using WMI? (and if you can detect the AV provider from there, how do you then invoke it to scan a file?) I know that I could write the script to manually call the AV scanner that I know to be installed, but as an intellectual exercise I'm more interested to know how apps like Firefox are doing this.

    Read the article

  • find, excluding dir, not descending into dir, AND using maxdepth and mindepth

    - by user1680819
    This is RHEL 5.6 and GNU find 4.2.27. I am trying to exclude a directory from find, and want to make sure that directory isn't descended into. I've seen plenty of posts saying -prune will do this - and it does. I can run this command: find . -type d -name "./.snapshot*" -prune -o -print and it works. I run it through strace and verify it is NOT descending into .snapshot. I also want to find directories ONLY at a certain level. I can use mindepth and maxdepth to do this: find . -maxdepth 8 -mindepth 8 -type d and it gives me all the dirs 8 levels down, including what's in .snapshot. If I combine the prune and mindepth and maxdepth options: find . -maxdepth 8 -mindepth 8 -type d \( -path "./.snapshot/*" -prune -o -print \) the output is right - I see all the dirs 8 levels down except for what's in .snapshot, but if I run that find through strace, I see that .snapshot is still being descended into - to levels 1 through 8. I've tried a variety of different combinations, moving the precedence parens around, reording expression components - everything that yields the right output still descends into .snapshot. I see in the man page that -prune doesn't work with -depth, but doesn't say anything about mindepth and maxdepth. Can anyone offer any advice? Thanks... Bill
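
    A hedged reading of the GNU find documentation is that -mindepth suppresses all tests and actions, including -prune, at levels shallower than the given depth, which would explain the strace observation; on top of that, -path "./.snapshot/*" matches the entries inside .snapshot rather than the directory itself. A sketch of a workaround that keeps -maxdepth (a global option), prunes the directory itself, and selects the exact depth afterwards by counting path components:

        find . -maxdepth 8 \( -path ./.snapshot -prune \) -o -type d -print |
            awk -F/ 'NF == 9'      # "./a/b/c/d/e/f/g/h" has 9 fields = depth 8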

    Read the article

  • hash fragments and collisions cont.

    - by Mark
    For this application of mine I feel like I can get away with a 40-bit hash key, which seems awfully low, but see if you can confirm my reasoning (I want a small key because I want a small filename and the key will be converted to a filename): (Note: only accidental collisions are a concern - no security issues.) A key point here is that the population in question is divided into groups, and a collision is only relevant if it occurs within the same group. A "group" is a directory on a user's system (the contents of files are hashed and a collision is only relevant if it occurs for files within the same directory). So with speculating roughly 100,000 potential users, say 2^17, that corresponds to 2^18 "groups" assuming 2 directories per user on average. So with a 40-bit key I can expect 2^(20+9) files created (among all users) before a collision occurs for some user somewhere. (Or IOW 2^((40+18)/2), due to the "birthday effect".) That's an average of 4096 unique files created per user, for 2^17 users, before a single collision occurs for some user somewhere. And then that long again before another collision occurs somewhere (right?)
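
    A quick numeric check of that reasoning (a sketch, not a proof): with a b-bit hash and g independent groups, the birthday bound puts the expected first within-group collision near sqrt(2^b * g) files in total, which matches the 2^29 and 4096-per-user figures above.

        from math import sqrt

        hash_bits = 40
        groups = 2 ** 18            # ~2 directories per user, 2**17 users
        users = 2 ** 17

        total_files = sqrt(2 ** hash_bits * groups)   # birthday bound, ~2**29 files overall
        print(total_files)                            # ~5.4e8
        print(total_files / users)                    # ~4096 files per user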

    Read the article

  • Make an abstract class or use a processor?

    - by Tim Murphy
    I'm developing a class to compare two directories and run an action when a directory/file in the source directory does not exist in destination directory. Here is a prototype of the class: public abstract class IdenticalDirectories { private DirectoryInfo _sourceDirectory; private DirectoryInfo _destinationDirectory; protected abstract void DirectoryExists(DirectoryInfo sourceDirectory, DirectoryInfo destinationDirectory); protected abstract void DirectoryDoesNotExist(DirectoryInfo sourceDirectory, DirectoryInfo destinationDirectory); protected abstract void FileExists(DirectoryInfo sourceDirectory, DirectoryInfo destinationDirectory); protected abstract void FileDoesNotExist(DirectoryInfo sourceDirectory, DirectoryInfo destinationDirectory); public IdenticalDirectories(DirectoryInfo sourceDirectory, DirectoryInfo destinationDirectory) { ... } public void Run() { foreach (DirectoryInfo sourceSubDirectory in _sourceDirectory.GetDirectories()) { DirectoryInfo destinationSubDirectory = this.GetDestinationDirectoryInfo(subDirectory); if (destinationSubDirectory.Exists()) { this.DirectoryExists(sourceSubDirectory, destinationSubDirectory); } else { this.DirectoryDoesNotExist(sourceSubDirectory, destinationSubDirectory); } foreach (FileInfo sourceFile in sourceSubDirectory.GetFiles()) { FileInfo destinationFile = this.GetDestinationFileInfo(sourceFile); if (destinationFile.Exists()) { this.FileExists(sourceFile, destinationFile); } else { this.FileDoesNotExist(sourceFile, destinationFile); } } } } } The above prototype is an abstract class. I'm wondering if it would be better to make the class non-abstract and have the Run method receiver a processor? eg. public void Run(IIdenticalDirectoriesProcessor processor) { foreach (DirectoryInfo sourceSubDirectory in _sourceDirectory.GetDirectories()) { DirectoryInfo destinationSubDirectory = this.GetDestinationDirectoryInfo(subDirectory); if (destinationSubDirectory.Exists()) { processor.DirectoryExists(sourceSubDirectory, destinationSubDirectory); } else { processor.DirectoryDoesNotExist(sourceSubDirectory, destinationSubDirectory); } foreach (FileInfo sourceFile in sourceSubDirectory.GetFiles()) { FileInfo destinationFile = this.GetDestinationFileInfo(sourceFile); if (destinationFile.Exists()) { processor.FileExists(sourceFile, destinationFile); } else { processor.FileDoesNotExist(sourceFile, destinationFile); } } } } What do you see as the pros and cons of each implementation?
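
    For the processor variant, a sketch of the interface it presupposes (not shown in the question); note the file callbacks take FileInfo parameters, since Run passes FileInfo arguments, whereas the abstract-class version above declares them with DirectoryInfo:

        public interface IIdenticalDirectoriesProcessor
        {
            void DirectoryExists(DirectoryInfo source, DirectoryInfo destination);
            void DirectoryDoesNotExist(DirectoryInfo source, DirectoryInfo destination);
            void FileExists(FileInfo source, FileInfo destination);
            void FileDoesNotExist(FileInfo source, FileInfo destination);
        }

    One way to frame the tradeoff: the interface version keeps the traversal in a single concrete class and makes the behaviours swappable, testable implementations, while the abstract class requires a subclass per behaviour and couples it to the traversal.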

    Read the article

  • Compiling 32-bit Program on VS 2008

    - by gordonwd
    I've been developing on VC++ 2003 on an XP PC but am now on Windows 7 and bought a cheap legal copy of VS 2008 to continue work on the same project. My product has to continue to run on customers' XP systems, so I'm strictly interested in a 32-bit executable. The first issue I ran into was the PRJ0003 error "spawning cl.exe". I had to add the path to this file to the VC++ Directories settings (it appears in both a bin\amd64 and bin\x86_amd64 directory, but I don't think it matters output-wise which I use?). The issue I now have (not counting a tedious cleanup to convert strcpy to strcpy_s, etc.) is that I'm not clear on whether I'm generating a 32-bit or 64-bit exe out of this. My project properties are set to a target of "Win32", so I assume that all is well. Is this correct? I have read some discussions about this, but it's never quite clear if they are talking about whether the compiler itself is running x64 vs. x86, or whether the compiled code is x64 vs. x86, and how this is differentiated. So am I doing the right thing to generate a 32-bit, Win32, x-86 program?

    Read the article

  • Background loading javascript into iframe with using jQuery/Ajax?

    - by user210099
    I'm working on an offline-only help system which requires loading a large amount of search-related data into an iframe before the search functionality can be used. Due to the folder structure of the project, I am unable to use Ajax-related background load methods, since the files I need are loaded a few directories "up and over." I have written some code which delays the loading of the help data until the rest of the webpage is loaded. The help data consists of a bunch of JavaScript files which have information about the terms, etc. that exist in the help books which are installed on the system. The webpage works fine, until I start to load this help data into a hidden iframe. While the JavaScript files are loading, I can not use any of the webpage. Links that require a small file to be downloaded for hover-over effects don't show up, JavaScript (switching tabs on the page) has no effect. I'm wondering if this is just a limitation of the way JavaScript works, or if there's something else going on here. Once all the files are loaded for the help system, the webpage works as expected. function test(){ var MGCFrame = eval("parent.parent"); if((ALLFRAMESLOADED == true)){ t2 = MGCFrame.setTimeout("this.IHHeader.frames[0].loadData()",1); } else{ t1 = MGCFrame.setTimeout("this.IHHeader.frames[0].test()",1000); } } loadData simply starts the data loading process. Thanks for any help you can provide.
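
    A hedged sketch of one alternative (file names are hypothetical): inject the help-data scripts asynchronously, one at a time, so the browser can keep servicing the rest of the page between files; the same pattern can be run against the iframe's document instead of the top document.

        function loadHelpData(files, done) {
            if (files.length === 0) { done(); return; }
            var script = document.createElement('script');
            script.src = files[0];
            script.async = true;
            script.onload = function () {
                // yield to the UI before fetching the next data file
                setTimeout(function () { loadHelpData(files.slice(1), done); }, 0);
            };
            document.getElementsByTagName('head')[0].appendChild(script);
        }

        loadHelpData(['helpdata/terms1.js', 'helpdata/terms2.js'], function () {
            // enable the search UI here
        });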

    Read the article

  • How can I diff against a revision of a single file using the default Git GUI tools?

    - by Rich
    I want to view the history of a single file, and then compare a single revision from that history against the current version. On the command line, this is easy: Run: git log -- <filename> Locate the version you want to compare, Run: git diff <commitid> -- <filename> But how can this be done in the default Git gui tools, git gui and gitk? I know of two methods using gitk, but they're both horribly clunky: Either: Select the New View option from the View menu, Type in the full path to your file into the box labelled Enter files and directories to include, one per line, Locate the version you want to compare by looking at the highlighted items in the top pane, and click on it to select it, Right-click on the current version and select Diff selected - this, Or: Select Tree in the bottom right-hand pane, Locate the file you want to look at, right-click on it, and select Highlight this only, Locate the version you want to compare by looking at the highlighted items in the top pane, and click on it to select it, Right-click on the current version and select Diff selected - this, Click on the file in the bottom right-hand pane to jump to it in the diff output, or scroll manually. Is a better method than this?
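
    A hedged aside: gitk accepts the same pathspec limiting as git log, which skips the New View dialog entirely; the diff step is then the same right-click as described above.

        gitk -- path/to/filename
        # click the old commit, then right-click the current one
        # and choose "Diff selected -> this"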

    Read the article

  • .htaccess redirect noticably increasing load time

    - by GTCrais
    I set up a SEF link via .htaccess RewriteRule to one of the articles on my website just to see how that works, and it does work but it considerably increases the load time of that particular page. On average the articles (including the one I'm talking about, when not using the rewrite rule) load in about 1.3 seconds. With the rewrite rule, the load time is 3.3 seconds on average until the page displays, and the loader thingy in the firefox tab keeps spinning for another 2 seconds. I have WAMP setup on my computer, and the website is being accessed through no-ip.com. Here is the .htaccess config (very simple, as you can see): Options +FollowSymLinks RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^o-sw-liji /NewSWL/o-nama.php?body=o-sw-liji In httpd.conf I have this (somewhere I read this might affect the load time for some reason - searching for files through all the directories or something, I don't remember exactly what I read): <Directory /> Options None AllowOverride None Order deny,allow Deny from all </Directory> DocumentRoot "Z:/Program Files (x86)/wamp/www/" <Directory "Z:/Program Files (x86)/wamp/www/"> Options None AllowOverride All Order allow,deny Allow from all </Directory> Any ideas why .htaccess redirect increases the load time by so much? UPDATE: so I put a session based counter in the "o-nama.php" script. Apparently when I access the web via the 'normal' link i.e. 'o-nama.php?body=o-sw-liji', the counter increases by one, as it should - it's one page load. But when the page is accessed through the redirected link, i.e. 'o-nama/o-sharewood-liji' the counter increases by 6-8, which naturally makes the load time a lot longer, since it's loading the same page for 6-8 times. I have no idea why this is happening. Any help is appreciated.
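
    A hedged first step rather than a diagnosis (the real cause isn't obvious from the question): anchor the pattern so only the exact SEF URL matches, add the [L] flag so rewriting stops there, and watch the access log to see which requests are reaching o-nama.php during the extra counter increments.

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^o-sw-liji$ /NewSWL/o-nama.php?body=o-sw-liji [L]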

    Read the article

  • Can git avoid storing the history of specific folders when working with git-svn?

    - by Timofey Basanov
    In short: Is there a way to disable storing the full history of specific folders in a git-svn repo? We have a pretty large SVN repo with a big checkout. I would like to migrate it to Git for my local development, because Git speeds up update and status commands by orders of magnitude. When I simply do git svn clone it creates a very big repo. Big enough to be bigger than my whole HDD. The problem lies in binary directories for which the history is too large. The latest binaries are required for a proper local build, but history is not required at all for my development process. I will never change them myself. I would like to store only the latest versions for specific folders, or maybe a history, but for no more than a week. I could only find a filter for git svn fetch, which excludes specific folders entirely. This is not exactly what I need. It's OK with me to have a cron task which deletes history from specific folders, but I do not know how to make one. Also, cron does not solve the problem of the first git svn clone. P.S. The SVN repository structure cannot be changed by any means.
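
    A hedged sketch using git-svn's --ignore-paths option (a regex of repository paths to skip; the directory names and URL below are placeholders). It excludes those folders entirely rather than truncating their history, so the current binaries would still come from a plain svn export or a small script run after each update:

        git svn clone --ignore-paths='^(binaries|third_party/blobs)' \
            https://svn.example.com/repo project
        # fetch just the current binaries outside of git:
        svn export https://svn.example.com/repo/binaries project/binaries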

    Read the article
