Search Results

Search found 17710 results on 709 pages for 'portable home directories'.

Page 453/709

  • File mkdirs() method not working in Android/Java

    - by Leif Andersen
    I've been pulling out my hair on this for a while now. The following method is supposed to download a file and save it to the specified location on the hard drive. private static void saveImage(Context context, boolean backgroundUpdate, URL url, File file) { if (!Tools.checkNetworkState(context, backgroundUpdate)) return; // Get the image try { // Make the file file.getParentFile().mkdirs(); // Set up the connection URLConnection uCon = url.openConnection(); InputStream is = uCon.getInputStream(); BufferedInputStream bis = new BufferedInputStream(is); // Download the data ByteArrayBuffer baf = new ByteArrayBuffer(50); int current = 0; while ((current = bis.read()) != -1) { baf.append((byte) current); } // Write the bits to the file OutputStream os = new FileOutputStream(file); os.write(baf.toByteArray()); os.close(); } catch (Exception e) { // Any exception is probably a network failure, bail return; } } Also, if the file doesn't exist, it is supposed to make the directory for the file. (And if there is another file already in that spot, it should just not do anything.) However, for some reason, the mkdirs() method never makes the directory. I've tried everything from explicit parentheses to explicitly creating the parent File, and nothing seems to work. I'm fairly certain that the drive is writable, as the method is only called after that has been determined, and that also holds when stepping through it while debugging. So the method fails because the parent directories aren't made. Can anyone tell me if there is anything wrong with the way I'm calling it? Also, if it helps, here is the source for the file I'm calling it in: https://github.com/LeifAndersen/NetCatch/blob/master/src/net/leifandersen/mobile/android/netcatch/services/RSSService.java Thank you

  • How can I add SOAP headers to a WSDL-generated Borland C++ Builder 6 application?

    - by MJG
    Using a WSDL that requires a SOAP header for authentication (fragment below), the code that gets generated when creating a web service client via the "WSDL Importer" has no concept of the authentication headers. There are no examples in the BCB6 C++ Examples/WebServices directories that show how to add them, and I can find nothing on the Web. Does anyone with BCB6 C++ (not Delphi) have an example of adding SOAP headers to a TRemotable subclass? <s:element name="AuthenticationHeader" type="tns:AuthenticationHeader"/> <s:complexType name="AuthenticationHeader"> <s:complexContent mixed="false"> <s:extension base="tns:UserAuthHeader"> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="Function" type="s:string"/> <s:element minOccurs="1" maxOccurs="1" name="TimeOutMilliSec" type="s:int"/> </s:sequence> </s:extension> </s:complexContent> </s:complexType> <s:complexType name="UserAuthHeader"> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="Username" type="s:string"/> <s:element minOccurs="0" maxOccurs="1" name="Password" type="s:string"/> </s:sequence> <s:anyAttribute/> </s:complexType>

  • Repository layout and sparse checkouts

    - by chuanose
    My team is considering moving from ClearCase to Subversion, and we are thinking of organising the repository like this: \trunk\project1 \trunk\project2 \trunk\project3 \trunk\staticlib1 \trunk\staticlib2 \trunk\staticlib3 \branches\.. \tags\.. The issue here is that we have lots of projects (1000+) and each project is a DLL that links in several common static libraries. Therefore checking out everything in trunk is a non-starter, as it will take way too long (~2 GB), and is unwieldy for branching. Using svn:externals to pull out relevant folders for each project doesn't seem ideal because it results in several working copies for each static library folder. We also cannot do an atomic commit if the changes span the project and some static libraries. Sparse checkouts sound very suitable for this, as we can write a script to pull out only the required directories. However, when we want to merge changes from a branch back to the trunk, we will need to check out a full trunk first. I'd welcome advice on 1) a better repository organization, or 2) a way to merge branch changes over to a trunk working copy that is sparse.

  • How do I delete folders in bash after a successful copy (Mac OS X)?

    - by cohortq
    Hello! I recently created my first bash script, and I am having problems perfecting its operation. I am trying to copy certain folders from one local drive to a network drive. I am having trouble deleting folders once they are copied over (and also really verifying that they were copied over). Is there a better way to delete folders after rsync is done copying? I was trying to exclude the live TV buffer folder, but really, I can blow it away without consequence if need be. Any help would be great! Thanks! #!/bin/bash network="CBS" useracct="tvcapture" thedate=$(date "+%m%d%Y") folderToBeMoved="/users/$useracct/Documents" newfoldername="/Volumes/Media/TV/$network/$thedate" ECHO "Network is $network" ECHO "date is $thedate" ECHO "source is $folderToBeMoved" ECHO "dest is $newfoldername" mkdir $newfoldername rsync -av $folderToBeMoved/"EyeTV Archive"/*.eyetv $newfoldername --exclude="Live TV Buffer.eyetv" # this fails when there is more than one *.eyetv folder if [ -d $newfoldername/*.eyetv ]; then #this deletes the contents of the directories find $folderToBeMoved/"EyeTV Archive"/*.eyetv \( ! -path $folderToBeMoved/"EyeTV Archive"/"Live TV Buffer.eyetv" \) -delete #remove empty directory find $folderToBeMoved/"EyeTV Archive"/*.eyetv -type d -exec rmdir {} \; fi

  • Why does the Maven goal "package" include the resources in the jar, but the goal "jar:jar" doesn't?

    - by Bernhard V
    Hi, when I package my project with the Maven goal "package", the resources are included as well. They are originally located in the directory "src/main/resources". Because I want to create an executable jar and add the classpath to the manifest, I'm using maven-jar-plugin. I've configured it as follows: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <version>2.2</version> <configuration> <archive> <manifest> <addClasspath>true</addClasspath> <mainClass>at.sozvers.stp.zpv.ekvkumsetzer.Main</mainClass> </manifest> </archive> </configuration> </plugin> Why won't the jar file created with "jar:jar" include my resources as well? As far as I can tell, it should use the same directories as the "package" goal (which are in my case inherited from the Maven Super POM).

  • Importing symbols from a Python package into the caller's namespace

    - by Paul C
    I have a little internal DSL written in a single Python file that has grown to a point where I would like to split the contents across a number of different directories + files. The new directory structure currently looks like this: dsl/ __init__.py types/ __init__.py type1.py type2.py and each type file contains a class (e.g. Type1). My problem is that I would like to keep the implementation of code that uses this DSL as simple as possible, something like: import dsl x = Type1() ... This means that all of the important symbols should be available directly in the user's namespace. I have tried updating the top-level __init__.py file to import the relevant symbols: from types.type1 import Type1 from types.type2 import Type2 ... print globals() the output shows that the symbols are imported correctly, but they still aren't present in the caller's code (the code that's doing the import dsl). I think that the problem is that the symbols are actually being imported to the 'dsl' namespace. How can I change this so that the classes are also directly available in the caller's namespace?
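
    A minimal sketch of how this is commonly wired up, re-exporting the classes from the package's top-level __init__.py (package, module, and class names are taken from the question; the leading-dot relative imports are an assumption that holds for Python 2.6+ and 3.x):

        # dsl/__init__.py
        # Re-export the public classes at package level; __all__ controls what
        # "from dsl import *" hands to the caller.
        from .types.type1 import Type1
        from .types.type2 import Type2

        __all__ = ["Type1", "Type2"]

    With that in place, a plain import dsl still only binds the name dsl in the caller (so the class is used as dsl.Type1()); it is from dsl import Type1 or from dsl import * that actually puts Type1 into the caller's own namespace, which appears to be what the question is after.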

  • Resources for setting up a Visual Studio/C++ development environment

    - by Tom H.
    I haven't done much "front-end" development in about 15 years since moving to database development. I'm planning to start work on a personal project using C++ and since I already have MSDN I'll probably end up doing it in Visual Studio 2010. I'm thinking about using Subversion as a version control system eventually. Of course, I'd like to get up and running as quickly as I can, but I'd also like to avoid any pitfalls from a poorly organized project environment. So, my question is, are there any good resources with common best practices for setting up a development environment? I'm thinking along the lines of where to break down a solution into multiple projects if necessary, how to set up a unit testing process, organizing resources, directories, etc. Are there any great add-ons that I should make sure I have set up from the start? Most tutorials just have one simple project, type in your code and click on build to see that your new application says, "Hello World!". This will be a Windows application with several DLLs as well (no web development), so there doesn't need to be a deploy to a web server kind of process. Mostly I just want to make sure that I don't miss anything big and then have to extensively refactor because of it. Thanks!

  • How to configure an index.htm file in IIS?

    - by salvationishere
    I am running IIS 6.0 on an XP OS using VS 2008 and SQL Server 2008 (full install). I developed two web apps. Both of these I can run from IIS by setting them to the default website. However, now I tried adding an index.htm file. Real simple; all it has is two hyperlinks to these web apps. But now only the first web app works. The first web app is pure VS. The second web app modifies an AdventureWorks database table. But now when I click the hyperlink for the second web app, it gives me the error below. However, this error doesn't make sense to me, because I have the two web apps configured as two virtual directories beneath C:\inetpub\ and the index.htm file is also beneath C:\inetpub. And the default website is set to home directory C:\inetpub\ with index.htm at the top of the Documents list. Also, why does the first web app work and not the second now? Server Error in '/AddFileToSQL' Application. The path '/AddFileToSQL/App_GlobalResources/' maps to a directory outside this application, which is not supported. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Web.HttpException: The path '/AddFileToSQL/App_GlobalResources/' maps to a directory outside this application, which is not supported. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

  • How to mock/stub a directory of files and their contents using RSpec?

    - by John Topley
    A while ago I asked "How to test obtaining a list of files within a directory using RSpec?" and although I got a couple of useful answers, I'm still stuck, hence a new question with some more detail about what I'm trying to do. I'm writing my first RubyGem. It has a module that contains a class method that returns an array containing a list of non-hidden files within a specified directory. Like this: files = Foo.bar :directory => './public' The array also contains an element that represents metadata about the files. This is actually a hash of hashes generated from the contents of the files, the idea being that changing even a single file changes the hash. I've written my pending RSpec examples, but I really have no idea how to implement them: it "should compute a hash of the files within the specified directory" it "shouldn't include hidden files or directories within the specified directory" it "should compute a different hash if the content of a file changes" I really don't want to have the tests dependent on real files acting as fixtures. How can I mock or stub the files and their contents? The gem implementation will use Find.find, but as one of the answers to my other question said, I don't need to test the library. I really have no idea how to write these specs, so any help much appreciated!

  • What is the proper syntax for getting a Makefile to print the output directory of one of its output zip files?

    - by 9exceptionThrower9
    I'm trying to edit an Android Makefile in the hopes of getting it to print out the directory (path) location of one of the ZIP files it creates. Ideally, since the build process is long and does many things, I would like it to print out the path to the ZIP file into a text file in a different directory that I can access later: Pseudo-code idea: # print the desired pathway to output file print(getDirectoryOf(variable-name.zip)) > ~/Desktop/location_of_file.txt The Makefile snippet where I would like to insert this new bit of code is shown below. I am interested in finding the directory of $(name).zip (that is the specific file I want to locate): # ----------------------------------------------------------------- # A zip of the directories that map to the target filesystem. # This zip can be used to create an OTA package or filesystem image # as a post-build step. # name := $(TARGET_PRODUCT) ifeq ($(TARGET_BUILD_TYPE),debug) name := $(name)_debug endif name := $(name)-target_files-$(FILE_NAME_TAG) intermediates := $(call intermediates-dir-for,PACKAGING,target_files) BUILT_TARGET_FILES_PACKAGE := $(intermediates)/$(name).zip $(BUILT_TARGET_FILES_PACKAGE): intermediates := $(intermediates) $(BUILT_TARGET_FILES_PACKAGE): \ zip_root := $(intermediates)/$(name) # $(1): Directory to copy # $(2): Location to copy it to # The "ls -A" is to prevent "acp s/* d" from failing if s is empty. define package_files-copy-root if [ -d "$(strip $(1))" -a "$$(ls -A $(1))" ]; then \ mkdir -p $(2) && \ $(ACP) -rd $(strip $(1))/* $(2); \ fi endef

  • Post parameters to a frame of a new window

    - by st.stoqnov
    I have to modify an existing web search page. There is a page where all the search filters are (a form with the name "searchform"). When the search button is pressed, results are shown in a new window. Because the search takes up to 30 seconds, and while searching the window stays blank, I have to add a label "Searching. Please wait..." to the newly created window with the results. So the search window is created, and I set its location to a frameset. The first frame will show the label, and the second will show the results. But I can't manage to update the second frame with the results. The results window is created as: var left = (screen.width/2) - 750/2; var top = (screen.height/2) - 600/2-100; var styleStr = 'toolbar=no,location=no,directories=no,status=no,menubar=no,scrollbars=yes,resizable=yes,copyhistory=yes,width=750,height=600,left='+left+',top='+top+',screenX='+left+',screenY='+top; var msgWindow = window.open('ex_search_frameset.php', 'results_exi', styleStr); msgWindow.focus(); Then the search is requested: f = document.searchform; f.action = 'ex_search_results.php'; f.target = msgWindow.document.res_frame; // here i can't figure out what the target must be // or how to post the params from "f" to the second frame "res_frame" f.submit(); Here is the frameset: <frameset rows="200,*" border="1" name="SearchFrame"> <frame name="wait_frame" src="ex_search_wait.php" target="right"> <frame name="res_frame" src="ex_search_results.php" target="_self"> </frameset> Any idea how to do this?

  • Populating jsTree based on XML data uploaded to a server folder

    - by PFM
    tl;dr: How can I populate jsTree based on a folder location instead of an exact XML URL? I'm looking for a little direction on this project. Currently I am trying to copy the file structures of hard drives as XML files and recreate them using jsTree on the web server, for a completely independent version of the file structure. I have a Python script that outputs XML files formatted for jsTree and automatically uploads them to a folder on the server. The problem is that I am now a little lost, because I have to manually enter each XML file into the jsTree code for it to display, so I have multiple entries like this: $("#tree1") .jstree({ "plugins" : [ "themes", "xml_data", "ui", "search", "types" ], "xml_data" : { "ajax" : { "url" : "./XML_DATA/DRIVE1.xml" }, "xsl" : "nest" }, I see in the documentation that, instead of being populated from a direct file, the folders are populated by "server.php", but nowhere in the PHP code does it point to any directories or files. After considering the problem I thought of a few solutions and could use some advice on them: Should I be trying to write PHP code to automatically look through my XML_DATA folder to upload each XML file? Should I just upload all the XML to MySQL and populate my tree based on that? Should the JavaScript be the code looking through the server's folder for XML files? All the XML is formed the same way, but the number of XML files on the server will increase, they will have to be refreshed, and they will be overwritten with changes. Any direction would be appreciated, thanks.
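
    One of the options raised above (having server-side code look through the XML_DATA folder instead of hard-coding each file) could be sketched in Python, the language already generating the XML. The folder name comes from the question; everything else is a hypothetical illustration:

        # list_xml.py - emit the XML files currently in XML_DATA as a JSON array,
        # so the page can build one jsTree block per file instead of hard-coding them.
        import json
        import os

        XML_DIR = "./XML_DATA"  # folder name taken from the question

        def list_xml_files(directory=XML_DIR):
            # Collect every non-hidden .xml file uploaded to the folder
            return sorted(
                name for name in os.listdir(directory)
                if name.lower().endswith(".xml") and not name.startswith(".")
            )

        if __name__ == "__main__":
            print(json.dumps(list_xml_files()))

    A small PHP endpoint doing the equivalent would work just as well; either way the page loops over the returned list and creates one .jstree() call per XML file, so new uploads appear without editing the JavaScript by hand.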

  • Process a set of files from a source directory to a destination directory in Python

    - by Spoike
    Being completely new to Python, I'm trying to run a command over a set of files. The command requires both a source and a destination file (I'm actually using ImageMagick convert, as in the example below). I can supply both source and destination directories; however, I can't figure out how to easily retain the directory structure from the source to the destination directory. E.g. say the srcdir contains the following: srcdir/ file1 file3 dir1/ file1 file2 Then I want the program to create the following destination files on destdir: destdir/file1, destdir/file3, destdir/dir1/file1 and destdir/dir1/file2 So far this is what I came up with: import os from subprocess import call srcdir = os.curdir # just use the current directory destdir = 'path/to/destination' for root, dirs, files in os.walk(srcdir): for filename in files: sourceFile = os.path.join(root, filename) destFile = '???' cmd = "convert %s -resize 50%% %s" % (sourceFile, destFile) call(cmd, shell=True) The walk method doesn't directly tell me which directory under srcdir the file is in, other than by concatenating the root directory string with the file name. Is there some easy way to get the destination file, or do I have to do some string manipulation in order to do this?
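
    A minimal sketch of one way to fill in destFile: take each directory's path relative to srcdir with os.path.relpath and re-join it under destdir, creating the directory when needed. The names mirror the snippet above, and the convert command line is unchanged:

        import os
        from subprocess import call

        srcdir = os.curdir                 # just use the current directory
        destdir = 'path/to/destination'

        for root, dirs, files in os.walk(srcdir):
            # Path of this directory relative to srcdir, e.g. '.' or 'dir1'
            relative = os.path.relpath(root, srcdir)
            targetdir = os.path.normpath(os.path.join(destdir, relative))
            if not os.path.isdir(targetdir):
                os.makedirs(targetdir)
            for filename in files:
                sourceFile = os.path.join(root, filename)
                destFile = os.path.join(targetdir, filename)
                cmd = "convert %s -resize 50%% %s" % (sourceFile, destFile)
                call(cmd, shell=True)

    os.path.relpath is available from Python 2.6 onwards; on older interpreters the same value can be derived by stripping the srcdir prefix from root.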

  • Compiling my Boost/NTL program with c++ on Linux.

    - by Martin Lauridsen
    Hi SO, I wrote a client program and a server program that use the NTL library and Boost::Asio to do client/server communication for an integer factorization application, in C++. Both sides consist of several headers and cpp files. Both projects compile fine individually on Windows in Visual Studio. All I did was add the include paths of NTL and Boost to both projects: Additional include paths: "D:\Downloads\WinNTL-5_5_2\include";D:\boost_1_42_0 Furthermore, I added the two library paths to both projects in VS: Additional library directories: D:\boost_1_42_0\stage\lib;"D:\Documents\Visual Studio 2008\Projects\ntl\Debug" And added under Additional dependencies: ntl.lib As said, it compiles fine on Windows. But when I put the code on a Linux machine provided by the university, I try to compile with the following statement c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include -I/appl/htopopt/Linux_x86_64/boost_1_43_0/include client_protocol.cpp mpqs_client.cpp mpqs_sieve.cpp mpqs_helper.cpp -o mpqs_helper -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm -L/appl/htopopt/Linux_x86_64/boost_1_43_0/lib -lboost_system -static Upon doing this, I get a huge error, which I posted here. Any idea how to fix this, please?

  • How to import a module from PyPI when I have another module with the same name

    - by kuzzooroo
    I'm trying to use the lockfile module from PyPI. I do my development within Spyder. After installing the module from PyPI, I can't import it by doing import lockfile. I end up importing anaconda/lib/python2.7/site-packages/spyderlib/utils/external/lockfile.py instead. Spyder seems to want to have the spyderlib/utils/external directory at the beginning of sys.path, or at least none of the polite ways I can find to add my other paths get me in front of spyderlib/utils/external. I'm using Python 2.7, but with from __future__ import absolute_import. Here's what I've already tried: Writing code that modifies sys.path before running import lockfile. This works, but it can't be the correct way of doing things. Circumventing the normal mechanics of importing in Python using the imp module (I haven't gotten this to work yet, but I'm guessing it could be made to work). Installing the package with something like pip install --install-option="--prefix=modules_with_name_collisions" package_name. I haven't gotten this to work yet either, but I guess it could be made to work. It looks like this option is intended to create an entirely separate lib tree, which is more than I need. Source Using pip install --target=lockfile_from_pip. The files show up in the directory where I tell them to go, but import doesn't find them. And in fact pip uninstall can't find them either. I get Cannot uninstall requirement lockfile-from-pip, not installed and I guess I will just delete the directories and hope that's clean. Source So what's the preferred way for me to get access to the PyPI lockfile module?
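
    For the second item in that list (going through the imp module), a heavily hedged sketch of what it could look like: load the PyPI copy by explicit path under a distinct name, bypassing whatever "import lockfile" resolves to first. The site-packages path is an assumption and has to be adjusted to the local Anaconda install, and whether the PyPI lockfile is a single lockfile.py or a lockfile/__init__.py there determines which file to point at:

        # Python 2.7, matching the setup described above; imp is deprecated in Python 3.
        import imp

        PYPI_LOCKFILE = "/path/to/anaconda/lib/python2.7/site-packages/lockfile.py"  # assumed path
        pypi_lockfile = imp.load_source("pypi_lockfile", PYPI_LOCKFILE)

        lock = pypi_lockfile.FileLock("/tmp/example")  # FileLock is part of the PyPI lockfile API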

  • JavaScript/jQuery pop-up window in ASP.NET MVC 4

    - by Mark
    Below I have a "button" (just a span with an icon) that creates a pop-up view of a div in my application to allow users to compare information in separate windows. However, I get an ASP.NET error as follows: **Server Error in '/' Application. The resource cannot be found. Requested URL: /Home/[object Object]** Does anyone have an idea of why this is happening? Below is my code: <div class="module_actions"> <div class="actions"> <span class="icon-expand2 pop-out"></span> </div> </div> <script> $(document).ajaxSuccess(function () { var Clone = $(".pop-out").click(function () { $(this).parents(".module").clone().appendTo("#NewWindow"); }); $(".pop-out").click(function popitup(url) { LeftPosition = (screen.width) ? (screen.width - 400) / 1 : 0; TopPosition = (screen.height) ? (screen.height - 700) / 1 : 0; var sheight = (screen.height) * 0.5; var swidth = (screen.width) * 0.5; settings = 'height=' + sheight + ',width=' + swidth + ',top=' + TopPosition + ',left=' + LeftPosition + ',scrollbars=yes,resizable=yes,toolbar=no,status=no,menu=no, directories=no,titlebar=no,location=no,addressbar=no' newwindow = window.open(url, '/Index', settings); if (window.focus) { newwindow.focus() } return false; }); });

  • JSF and Eclipse setup: lib in Project Explorer is different from lib in Tomcat. How come?

    - by user384706
    Hi, I am a beginner with JSF 2.0, and I have a question related to Eclipse (I am using Helios). 1) I create a Dynamic Web Project. 2) I add the JSF project facet. 3) I choose a JSF user library (I have created it using MyFaces). All OK so far. I notice, though, that in the Project Explorer in WebContent/WEB-INF/lib the lib directory is empty instead of having the MyFaces jars. The application works fine though. I looked into this, and the jars are actually being placed in the corresponding lib directory of the app deployed under wtpwebapps of the Tomcat instance of Eclipse in the .plugins directory. OK, it works, but IMHO it is incorrect to have the Project Explorer inconsistent with the directories actually deployed, i.e. lib is shown empty in Project Explorer but with jars under wtpwebapps. Am I wrong to dislike this inconsistency? Is this how it should work, or am I doing something wrong in the way I am setting up my project? Thanks!

  • Problems installing a package from PyPI: root files not installed

    - by intuited
    After installing the BitTorrent-bencode package, either via easy_install BitTorrent-bencode or pip install BitTorrent-bencode, or by downloading the tarball and installing that via easy_install $tarball, I discover that /usr/local/lib/python2.6/dist-packages/BitTorrent_bencode-5.0.8-py2.6.egg/ contains EGG-INFO/ and test/ directories. Although both of these subdirectories contain files, there are no files in the BitTorr* directory itself. The tarball does contain bencode.py, which is meant to be the actual source for this package, but it's not installed by either of those utils. I'm pretty new to all of this so I'm not sure if this is a problem with the package or with what I'm doing. The package was packaged a while ago (2007), so perhaps it's using some deprecated configuration aspect that I need to supply a command-line flag for. I'm more interested in learning what's wrong with either the package or my procedures than in getting this particular package installed; there is another package called hunnyb that seems to do a decent enough job of decoding bencoded data. Mostly I'd like to know how to deal with such problems in other packages.
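
    One quick check that can separate "the egg installed no code" from "my procedure is wrong" is to ask the interpreter what, if anything, it resolves for the module. This is a hypothetical session written for the Python 2.6 install mentioned above, and it assumes the importable name is bencode, after the bencode.py in the tarball:

        # Where does 'bencode' come from, if anywhere?
        import imp

        try:
            handle, pathname, description = imp.find_module("bencode")
            print("bencode found at: " + pathname)
        except ImportError:
            print("bencode is not importable; the egg shipped no top-level module")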

  • Behavior of Struts2 and convention-plugin when there is Index(extends ActionSupport)

    - by hanishi
    We have an Action class named 'Index' immediately under com.example.common.action; it is annotated @ParentPackage('default'), which is declared in a package directive in struts.xml, has "/" for its namespace, and extends "struts-default". It also declares @Result so that it responds with JSP files corresponding to the string values returned by its execute() method. In our struts.xml, the following setting is configured along with the other configurations that are needed for the Convention plugin: <constant name="struts.action.extension" value=","/> When accessing /my_context/none_existing_path, the request apparently hits this Index class and the contents of the JSP declared in the Index's @Result section get returned. However, if we provide /my_context/, we receive the following error: HTTP Status 404 - There is no Action mapped for namespace [/] and action name [] associated with context path [/my_context]. We want to know why accessing /my_context/none_existing_path, where none_existing_path has no matching action, can fall back to the Index class, but an error is returned when the URL requested is just /my_context/. Currently, our convention-plugin settings are declared as follows: <constant name="struts.convention.package.locators.basePackage" value="com.example"/> <constant name="struts.convention.package.locators" value="action"/> Strangely, if we change the value of struts.convention.package.locators.basePackage to com.example.common, in which the aforementioned Index class can be found immediately by narrowing the search scope, requesting /my_context/ displays the content of the JSPs declared in the @Result section of the Index class. However, as our action classes are distributed throughout the com.example.[a-z].action packages, where [a-z] represents the large volume of directories we have in our package structure, we cannot use this trick as a workaround. We have also tried placing index.jsp at the top level of the classpath and having that index.jsp redirect to /my_context/index, which worked but is not what we want. Could this be a bug? We appreciate your responses. Thank you in advance. EDIT: JIRA issue registered; problem solved (from Struts 2.3.12 up).

  • How to seamlessly integrate Subversion and Git?

    - by mattv
    I'm looking for tips on how to seamlessly integrate subversion and git, for deploying web sites by a small team of web developers. We each have our own development versions of our sites on our local machines. We also have dev, staging, and live servers. As our team has grown, we haven't updated our revision control and deployment strategies accordingly. We had all been checking into the trunk of a shared Subversion repository. Both the dev & staging servers ran from a checkout of the trunk, so updating them involved running "svn update" while the live server ran as an export from trunk which required an "svn export" to get the latest code. In either case, we would often update just certain files by updating or exporting just those files or directories. That worked okay when there was just one or two developers. However, a big downside was that we couldn't point to an individual tag that represented what was currently on live at any given time. In keeping with corporate policy, we'd like to continue to use Subversion to store what we're now calling our "production branch," which will be what goes onto staging and live. However, we would like to use Git on our local and development sites. We especially like the idea of easier merges and being able to "cherry pick" updates that need to go live. We had initially planned on using git-svn, but it doesn't seem to work well in a shared environment such as our dev or staging servers. Anyone else doing something like this? What's the best way to make it work? Or are we making it more difficult than it should be?

  • Mercurial repository narrow clone?

    - by Berry Langerak
    Hi. I'm currently in the process of moving from Subversion to Mercurial, and I have to say I don't regret that decision. However, when trying to convert my project, I ran into a problem with Mercurial which I can't seem to get fixed. I have two distinct projects: one is a framework, and the other is an application that relies on that framework. Here's what the repositories look like: The Framework repository: docs/ deploy/ lib/ tests/ The Application repository: application/ config/ lib/ tests/ www/ What I'd like is for the application's lib directory to contain a copy of the framework's lib/ directory. I used to do this using svn:externals. Now, I am aware that Mercurial supports the concept of subrepositories, but that doesn't seem like the "correct" solution, as it doesn't actually pull in the lib/ directory like I wanted, as you'll still have to pull and push changes manually. That, plus once you clone the framework repository, you'll get all of it, not just the lib/ directory. I only need the lib/ directory, not the tests or the docs. Now, I thought up two different solutions to this problem, but I wonder which is the best. The first solution would be to clone the framework in a different directory altogether and create a symlink in the application's lib/ directory which points to the framework's lib/ directory. Putting the symlink in .hgignore should make sure all is well, I think? That means that you could edit the framework's code, and commit that, and you could edit the application's code and commit that, too. The other option is to have multiple repositories. The framework gets pulled as a whole, which means you'll get the docs/, deploy/, test/ etc. directories, which are not needed for usage of the framework. I thought maybe creating a repository purely for the library might be a solution, although I sincerely doubt it, as the unit tests are very dependent upon the library itself. Does anyone know a decent solution for this problem?

  • Are there any free XML diff/merge tools available?

    - by Russell
    I have several config files in my .NET applications for which I would like to merge application settings elements, etc. I was about to begin doing it manually as I usually do, but thought there must be an XML diff GUI tool available somewhere. The tool would be able to go to the element level to compare and display the differences, etc. However, Google gave no substantive free tool results and no hints for anything of value. Is such a tool available? That is very useful? For free? Thanks in advance. :) Edit: Here is a bit of clarification of the functionality that would turn my error-prone, tedious manual job into a simple one-minute task (with the potential to automate it): In KDiff3, you can do a diff/merge of entire directories. There is a hierarchical diff which is very accurate, user-friendly and clear. I was interested in finding a similar solution, but for an XML element hierarchy instead of a directory hierarchy. If there is no such open-source software, I am considering creating one on CodePlex to provide this functionality.
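
    As an illustration of what an element-level comparison involves, a rough sketch using only Python's standard library: it walks two documents in parallel and reports tags, attributes, and text that differ. It matches children by position only, so it is a toy next to a real diff/merge tool, and the file names are placeholders:

        # Element-level XML comparison sketch (position-based child matching only).
        import xml.etree.ElementTree as ET

        def diff_elements(a, b, path="/"):
            if a.tag != b.tag:
                yield "%s: tag %r vs %r" % (path, a.tag, b.tag)
                return
            here = "%s%s" % (path, a.tag)
            if a.attrib != b.attrib:
                yield "%s: attributes differ" % here
            if (a.text or "").strip() != (b.text or "").strip():
                yield "%s: text differs" % here
            for i, (child_a, child_b) in enumerate(zip(list(a), list(b))):
                for line in diff_elements(child_a, child_b, "%s[%d]/" % (here, i)):
                    yield line
            if len(a) != len(b):
                yield "%s: child count %d vs %d" % (here, len(a), len(b))

        old_root = ET.parse("app.config.old").getroot()   # placeholder file names
        new_root = ET.parse("app.config.new").getroot()
        for difference in diff_elements(old_root, new_root):
            print(difference)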

  • Folders and their files don't get copied. Please help me with the given code.

    - by OM The Eternity
    I have a Joomla folder, and I have a script which has to copy the complete Joomla folder to another new folder. Below is the code, which copies only the files contained in the main folder but NOT the other directories existing in the Joomla folder. I know that I have to place some check for a dir_exist function and create the directory if it does not exist. I also want this code to overwrite the previously existing files and folders. How can I accomplish this? <?php $source = '/var/www/html/pranav_test/'; $destination = '/var/www/html/parth/'; $sourceFiles = glob($source . '*'); foreach($sourceFiles as $file) { $baseFile = basename($file); if (file_exists($destination . $baseFile)) { $originalHash = md5_file($file); $destinationHash = md5_file($destination . $baseFile); if ($originalHash === $destinationHash) { continue; } } copy($file, $destination . $baseFile); } ?> Thanks to @alex who helped me to get the code, but I need more support. Please help.

  • Installing Image_Graph and using it with BASE

    - by PPowerHouseK
    Hello all, I have Windows 7 with the latest XAMPP installation. I configured BASE to work, for the most part. My problem is that in BASE, when I click on the graph alerts button, I get this error: Error loading the Graphing library: Check your Pear::Image_Graph installation! * Image_Graph can be found here at http://pear.veggerby.dk/. Without this library no graphing operations can be performed. * Make sure PEAR libraries can be found by php at all: pear config-show | grep "PEAR directory" PEAR directory php_dir /usr/share/pear This path must be part of the include path of php (cf. /etc/php.ini): php -i | grep "include_path" include_path => .:/usr/share/pear:/usr/share/php => .:/usr/share/pear:/usr/share/php I think it may have to do with the include path in php.ini, so here is what it currently says: ;;;;;;;;;;;;;;;;;;;;;;;;; ; Paths and Directories ; ;;;;;;;;;;;;;;;;;;;;;;;;; ; UNIX: "/path1:/path2" ;include_path = ".:/php/includes" ; ; Windows: "\path1;\path2" ;include_path = ".;c:\php\includes" ; ; PHP's default setting for include_path is ".;/path/to/php/pear" ; http://php.net/include-path include_path = ".;C:\xampp\php\PEAR" I am really at a loss as to how to resolve this. I searched for a while for some documentation, but most of it referred to installing on Ubuntu. I don't know any PEAR or PHP, so if you know how to fix this, please explain thoroughly. I am willing to supply as much information as needed.

  • .htaccess - alias all www-only requests to subdirectory

    - by CodeMoose
    Trying to install the wonderful Concrete5 CMS to use as my main site engine - the problem is it has about 15 different files and directories, and they clutter up my root. I'd really like to move it to a /_concrete/ subdirectory, and still maintain it in the domain root. htaccess has never been my strong suit - after a lot of research and learning, and a lot of error 500s, my frustration is overriding my pride and I'm posting here. Here's exactly what I'm trying to accomplish: Any requests that come through www.domain.com are forwarded to www.domain.com/_concrete/, except in the case of an existing file. The end-user URL shouldn't change - they will still see the site as www.domain.com, even though they're being served www.domain.com/_concrete/. Multiple subdomains exist on this site as sub-folders within the root - thus, only requests coming through www.domain.com should be redirected. Here's the closest I got with my htaccess, which produces an error 500: RewriteEngine On RewriteCond %{HTTP_HOST} ^(www\.)?domain\.com [NC] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !^_concrete RewriteRule ^(.*)$ _concrete/$1 [L,QSA] This is the result of 4 hours of sweat and blood (mostly blood), so I have to be close. I'm hoping one of your fine minds can point out a stupid mistake and put this thing to rest swiftly. Thanks for your time! Addendum: I previously posted .htaccess - alias domain root to subfolder a while ago, which got me started. Please don't fall into the trap of thinking it's a duplicate.
