Search Results

Search found 18333 results on 734 pages for 'temporary directory'.

Page 265/734 | < Previous Page | 261 262 263 264 265 266 267 268 269 270 271 272  | Next Page >

  • autotools: no rule to make target all

    - by Raffo
    I'm trying to port an application I'm developing to autotools. I'm not an expert at writing makefiles, and being able to use autotools is a requirement for me. The structure of the project is the following:

    ```
    ..
    ../src/Main.cpp
    ../src/foo/
    ../src/foo/x.cpp
    ../src/foo/y.cpp
    ../src/foo/A/k.cpp
    ../src/foo/A/Makefile.am
    ../src/foo/Makefile.am
    ../src/bar/
    ../src/bar/z.cpp
    ../src/bar/w.cpp
    ../src/bar/Makefile.am
    ../inc/foo/
    ../inc/bar/
    ../inc/foo/A
    ../configure.in
    ../Makefile.am
    ```

    The root folder of the project contains a "src" folder with the main of the program and a number of subfolders containing the other sources. The root also contains an "inc" folder with the .h files, which are nothing more than the definitions of the classes in "src"; "inc" therefore mirrors the structure of "src". I have written the following configure.in in the root:

    ```
    AC_INIT([PNAME], [1.0])
    AC_CONFIG_SRCDIR([src/Main.cpp])
    AC_CONFIG_HEADER([config.h])
    AC_PROG_CXX
    AC_PROG_CC
    AC_PROG_LIBTOOL
    AM_INIT_AUTOMAKE([foreign])
    AC_CONFIG_FILES([Makefile src/Makefile src/foo/Makefile src/foo/A/Makefile src/bar/Makefile])
    AC_OUTPUT
    ```

    The following is ../Makefile.am:

    ```
    SUBDIRS = src
    ```

    Then in ../src, where the main of the project is contained:

    ```
    bin_PROGRAMS = pname
    gsi_SOURCES = Main.cpp
    AM_CPPFLAGS = -I../../inc/foo \
                  -I../../inc/foo/A \
                  -I../../inc/bar/
    pname_LDADD = foo/libfoo.a bar/libbar.a
    SUBDIRS = foo bar
    ```

    And in ../src/foo:

    ```
    noinst_LIBRARIES = libfoo.a
    libfoo_a_SOURCES = \
        x.cpp \
        y.cpp
    AM_CPPFLAGS = \
        -I../../inc/foo \
        -I../../inc/foo/A \
        -I../../inc/bar
    ```

    The analogous file is in src/bar. The problem is that after calling automake and autoconf, "make" fails. The build enters the directory src, then foo, and creates libfoo.a, but the same step fails for libbar.a with the following error:

    ```
    Making all in bar
    make[3]: Entering directory `/user/Raffo/project/src/bar'
    make[3]: *** No rule to make target `all'.  Stop.
    ```

    I have read the autotools documentation, but I'm not able to find an example similar to the one I am working on. Unfortunately I can't change the directory structure, as this is a fixed requirement of the project I'm working on. I don't know if you can help me or give me any hint, but maybe you can guess the error or point me to a similarly structured example. Thank you.
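
    For reference, a minimal sketch of what the "analogous" src/bar/Makefile.am mentioned above would be expected to contain, mirroring the foo one (the source file names are taken from the directory listing; treat this as an assumption to check against the real file, since a bar Makefile.am that deviates from this shape would leave that directory without usable build rules):

    ```
    # src/bar/Makefile.am -- sketch, assuming bar mirrors foo
    noinst_LIBRARIES = libbar.a
    libbar_a_SOURCES = \
        z.cpp \
        w.cpp
    AM_CPPFLAGS = \
        -I../../inc/foo \
        -I../../inc/foo/A \
        -I../../inc/bar
    ```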

    Read the article

  • Qt4Dotnet on Mac OS X

    - by Tony
    Hello everyone. I'm using the Qt4Dotnet project in order to port an application originally written in C# to Linux and Mac. The port to Linux hasn't taken much effort and works fine, but Mac (10.4 Tiger) is a bit more stubborn. The problem is: when I try to start my application, it throws an exception. The exception states that com.trolltech.qt.QtJambi_LibraryInitializer is unable to find all the necessary libraries. The QtJambi library initializer uses the java.library.path VM environment variable. This variable includes the current working directory, and I put all the necessary libraries in the working directory. When I try to run the application from the MonoDevelop IDE, the initializer is able to load one library, but the other libraries are 'missing':

    ```
    An exception was thrown by the type initializer for com.trolltech.qt.QtJambi_LibraryInitializer ---
    java.lang.RuntimeException: Loading library failed, progress so far:
    No 'qtjambi-deployment.xml' found in classpath, loading libraries via 'java.library.path'
    Loading library: 'libQtCore.4.dylib'...
     - using 'java.library.path'
     - ok, path was: /Users/chin/test/bin/Debug/libQtCore.4.dylib
    Loading library: 'libqtjambi.jnilib'...
     - using 'java.library.path'
    ```

    Both libQtCore.4.dylib and libqtjambi.jnilib are in the same directory. When I try to run it from the command prompt, the initializer is unable to load even libQtCore.4.dylib. I'm using Qt4Dotnet v4.5.0 (currently the latest) with QtJambi v4.5.2 libraries. This might be the source of the problem, but I'm neither able to compile Qt4Dotnet v4.5.2 myself nor to find QtJambi v4.5.0 libraries. The project's page states that some sort of patch should be applied to QtJambi's source code in order to be compatible with the Mono framework, but this patch hasn't been released yet. Without this patch the application crashes in a strange manner (other than the library lookup failure). I must note that the original QtJambi loads all the necessary libraries perfectly, so this might be an issue with the IKVM compiler used to translate QtJambi into a .NET library. Any suggestions on how I can overcome this problem?
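
    One thing that may be worth trying (a sketch only, not a confirmed fix; the application name and path are placeholders): when launching from a terminal, make sure the macOS dynamic linker itself can resolve the Qt dylibs, since DYLD_LIBRARY_PATH is consulted in addition to whatever java.library.path emulation the IKVM-translated initializer relies on:

    ```sh
    # sketch: run from the directory holding the dylibs and point the dynamic
    # linker at it explicitly; "MyApp.exe" stands in for the real binary
    cd /Users/chin/test/bin/Debug
    export DYLD_LIBRARY_PATH="$PWD:$DYLD_LIBRARY_PATH"
    mono MyApp.exe
    ```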

    Read the article

  • Unit Testing a rails 2.3.5 plugin

    - by brad
    I'm writing a new plugin for a Rails 2.3.5 app. I've included an app directory (which makes it an engine) so I can easily load some extra routes. Not sure if that affects anything. Anyway, in the test directory I have two files: test_helper.rb and my_plugin_test.rb. These files were generated automatically using script/generate plugin my_plugin. When I go to the vendor/plugins/my_plugin directory and run rake test, they don't seem to run. I get the following console output:

    ```
    (in /Users/me/Repos/my_app/source/trunk/vendor/plugins/my_plugin)
    /Users/me/.rvm/rubies/jruby-1.4.0/bin/jruby -I"lib:lib:test" "/Users/me/.rvm/gems/jruby-1.4.0/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/my_plugin_test.rb"
    ```

    So it obviously sees my test file, but none of the tests inside get run; I just get back to my console prompt. What am I missing here? I figured the generated code would work out of the box. Here are the two files.

    test_helper.rb:

    ```ruby
    require 'rubygems'
    require 'active_support'
    require 'active_support/test_case'
    ```

    my_plugin_test.rb:

    ```ruby
    require 'test_helper'

    class MyPluginTest < ActiveSupport::TestCase
      # Replace this with your real tests.
      test "the truth" do
        assert true
      end

      test "Factories are supported" do
        assert_not_nil Factory
      end
    end
    ```

    File structure:

    ```
    vendor
    - plugins
      - my_plugin
        - app
        - config
          - routes.rb
        - generators
          - my_plugin
            - some generator files.rb
        - lib
          - my_plugin.rb
          - my_plugin
            - my_plugin_lib_file.rb
        - rails
          - init.rb
        - Rakefile
        - tasks
          - my_plugin_tasks.rake
        - test
          - test_helper.rb
          - my_plugin_test.rb
    ```
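
    For reference, a minimal sketch of the test task a plugin Rakefile conventionally defines (paths assumed from the layout above); if the generated Rakefile deviates from this, rake can end up invoking the test loader without actually running the suite:

    ```ruby
    # Rakefile -- sketch of the conventional plugin test task
    require 'rake'
    require 'rake/testtask'

    Rake::TestTask.new(:test) do |t|
      t.libs << 'lib' << 'test'
      t.pattern = 'test/**/*_test.rb'
      t.verbose = true
    end

    task :default => :test
    ```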

    Read the article

  • Use an audio/video file from a Linux laptop via USB to be played by Magic Sing ET-23H

    - by AisIceEyes
    I am one of the technical directors of a regular karaoke contest event. For the contest itself, due to a tight budget, we are using what one of the sponsors is providing: a Magic Sing ET-23H. The video output of the Magic Sing ET-23H is broadcast on two big screens shown to the audience and event attendees. When a karaoke contestant provides his or her karaoke video, the video itself is on a readable USB flash drive that is attached to the USB input of the Magic Sing ET-23H. What really bugs me is that the interface of the Magic Sing ET-23H is also broadcast on the big-screen video feeds: the interface for choosing the video file is seen on the Magic Sing ET-23H and also on the big video screens seen by the audience and event-goers. I will post in the comments (if my less-than-10 reputation allows me) a picture of the USB input of the device. I always bring my laptop, an Acer AS5742-7653, to the regular karaoke event. I'm using the laptop for tallying the scores from the judges and also for playing audio files from contestants that did not provide a karaoke video. I personally use different Linux distros, but I nearly always use my Ubuntu Studio 12.04.3 64-bit partition during the regular karaoke contest event. My question is this: is there a way I can share a temporary video/audio file directly from the laptop I'm using to the Magic Sing ET-23H so that it can broadcast both the video and the audio, just like how, on Windows, Avisynth AVS files or VirtualDub's temporary AVI files work, or like using ffplay (from ffmpeg), etc.? I have researched the matter somewhat and found links on SuperUser.com, though I can only provide the links in the comments section of this post if my reputation of less than 10 allows me. I have a hunch it is possible, but I have not fully understood the device being used at the event, the Magic Sing ET-23H, and whether there are other ways for it to accept video and audio besides its USB input. Any help with my current predicament is highly appreciated. Thank you. PS: Since I need at least 10 reputation to post more than 2 links and also post images, I will try to post the image & links in the comments (if my below-10 reputation allows me).

    Read the article

  • Write a vim file as super-user?

    - by zimbatm
    This is a usability problem that often happens to me: I open a read-only system file with vim and even edit it, because I'm not attentive enough, or because the vim on the system is badly configured. Once my changes are done, I'm stuck having to write them to a temporary file or losing them, because :w! won't work. Is there a vim command (:W!!!) that allows you to write the current buffer as a super-user? (Vim would naturally ask for your sudo or su password.)
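
    A commonly used workaround is to pipe the buffer through sudo tee, which prompts for the sudo password and writes the file in place; the mapping name below is just an example:

    ```vim
    " write the current buffer as root; % expands to the current file name
    :w !sudo tee % > /dev/null

    " optional: make it available as  :w!!  from command-line mode
    cnoremap w!! w !sudo tee % > /dev/null
    ```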

    Read the article

  • Why does first call to java.io.File.createTempFile(String,String,File) take 5 seconds on Citrix?

    - by Ben Roling
    While debugging slow startup of an Eclipse RCP app on a Citrix server, I came to find out that java.io.File.createTempFile(String,String,File) is taking 5 seconds. It does this only on the first execution and only for certain user accounts. Specifically, I am noticing it with Citrix anonymous user accounts. I have not tried many other types of accounts, but this behavior is not exhibited with an administrator account. Also, it does not matter whether the user has access to write to the given directory or not. If the user does not have access, the call will take 5 seconds to fail. If they do have access, the call will take 5 seconds to succeed. This is on a Windows 2003 Server. I've tried Sun's 1.6.0_16 and 1.6.0_19 JREs and see the same behavior. I googled a bit expecting this to be some sort of known issue, but didn't find anything. It seems like someone else would have had to have run into this before. The Eclipse Platform uses File.createTempFile() to test various directories to see if they are writeable during initialization, and this issue adds 5 seconds to the startup time of our application. I imagine somebody has run into this before and might have some insight. Here is sample code I executed to see that it is indeed this call that is consuming the time. I also tried it with a second call to createTempFile and noticed that subsequent calls return nearly instantaneously.

    ```java
    public static void main(final String[] args) throws IOException {
        final File directory = new File(args[0]);
        final long startTime = System.currentTimeMillis();
        File file = null;
        try {
            file = File.createTempFile("prefix", "suffix", directory);
            System.out.println(file.getAbsolutePath());
        } finally {
            System.out.println(System.currentTimeMillis() - startTime);
            if (file != null) {
                file.delete();
            }
        }
    }
    ```

    Sample output of this program is the following:

    ```
    C:\java.exe -jar filetest.jar C:/Temp
    C:\Temp\prefix8098550723198856667suffix
    5093
    ```
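
    Since only the first call pays the cost, one pragmatic workaround (a sketch that sidesteps rather than explains the delay) is to warm the call up on a background thread during application startup so initialization never blocks on it:

    ```java
    import java.io.File;
    import java.io.IOException;

    public final class TempFileWarmup {
        // sketch: pay the one-time createTempFile cost off the critical path
        public static void warmUpAsync() {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    try {
                        // the first call is the slow one; later calls return quickly
                        File f = File.createTempFile("warmup", ".tmp");
                        f.delete();
                    } catch (IOException ignored) {
                        // failure is harmless here -- this is only a warm-up
                    }
                }
            }, "tempfile-warmup");
            t.setDaemon(true);
            t.start();
        }
    }
    ```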

    Read the article

  • How do I make my application handle errors for a few different scenarios?

    - by NightsEVil
    So I have this code to extract a program to the temp directory and then run it. The problem is that it doesn't work perfectly on every computer (for some reason it sometimes hits an error or exception):

    ```csharp
    string tempFolder = System.IO.Path.Combine(System.IO.Path.GetTempPath(), "");
    System.Diagnostics.Process defrag1 = System.Diagnostics.Process.Start(@"Programs\Optimize\AusLogics_Defrag.exe", string.Format(" -o{0} -y", tempFolder));
    defrag1.WaitForExit();
    string executableDirectoryName = Path.GetDirectoryName(Application.ExecutablePath);
    System.Diagnostics.Process defrag2 = System.Diagnostics.Process.Start(tempFolder + "\\" + "AusLogics_Defrag" + "\\" + "DiskDefrag.exe", "");
    defrag2.WaitForExit();
    System.IO.Directory.Delete(tempFolder + "\\" + "AusLogics_Defrag", true);
    ```

    What I want to know is: is there a way that, if it starts to extract but hits an error (no matter what it is), it will automatically switch to the following code, but if it DOESN'T hit an error it continues as it was meant to?

    ```csharp
    string tempFolder = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
    System.Diagnostics.Process defrag1 = System.Diagnostics.Process.Start(@"Programs\Optimize\AusLogics_Defrag.exe", string.Format(" -o{0} -y", tempFolder));
    defrag1.WaitForExit();
    string executableDirectoryName = Path.GetDirectoryName(Application.ExecutablePath);
    System.Diagnostics.Process defrag2 = System.Diagnostics.Process.Start(tempFolder + "\\" + "AusLogics_Defrag" + "\\" + "DiskDefrag.exe", "");
    defrag2.WaitForExit();
    System.IO.Directory.Delete(tempFolder + "\\" + "AusLogics_Defrag", true);
    ```

    That version points the path at the application data folder. And if THAT hits an error, it would change the path to this, but if it DOESN'T hit an error it continues as it was meant to:

    ```csharp
    string tempFolder = System.Environment.GetEnvironmentVariable("HomeDrive");
    ```
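
    A sketch of one way to express that fallback chain (the ExtractAndRun helper is hypothetical and simply wraps the extract-and-launch code above; catching the base Exception matches the "no matter what it is" requirement):

    ```csharp
    // sketch: try each candidate folder in turn, falling through on any exception
    private static void RunDefragWithFallback()
    {
        string[] candidates = new string[]
        {
            System.IO.Path.GetTempPath(),
            System.Environment.GetFolderPath(System.Environment.SpecialFolder.ApplicationData),
            System.Environment.GetEnvironmentVariable("HomeDrive")
        };

        foreach (string tempFolder in candidates)
        {
            try
            {
                ExtractAndRun(tempFolder);   // hypothetical helper wrapping the code above
                return;                      // success -- stop trying other folders
            }
            catch (Exception)
            {
                // extraction or launch failed with this folder; try the next one
            }
        }
    }
    ```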

    Read the article

  • How do I know which include path will be used in PHP?

    - by Joe Majewski
    When I run phpinfo() and look under the Configuration category in PHP Core, I see a directive titled include_path, with a local value and a master value. In this case, my local value is set to:

    ```
    .:./include:../include:/usr/share/php:/usr/share/php/smarty:/usr/share/pear
    ```

    and my master value is set to:

    ```
    .:/usr/share/php:/usr/share/pear:/usr/share/php/pear:/usr/share/php/smarty
    ```

    The reason I am trying to learn how this works is that there is a file in the system I am working on titled Smarty.class.php, which I'm sure sounds very familiar to anyone who uses the Smarty templating engine. One of the PHP files has the following includes:

    ```php
    require_once("Smarty.class.php");
    require_once("user_info_class.inc");
    ```

    The file user_info_class.inc is in the same directory as the file making the include, which makes perfect sense to me and is the way I've always referenced files. I decided that I wanted to open up the Smarty.class.php file and had assumed it would be in the same directory, but it was not. After a bit of digging, I discovered those php.ini variables and was finally able to locate the file in the directory /usr/share/php/smarty/. So it would seem that when making an include, PHP follows some sort of order between the local and master values for the include_path. Assuming my deductions were correct thus far, can someone explain the order in which PHP searches for the files to be included?
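
    For reference, only one include_path is in effect for a given request: the local value (whatever the script, an ini override, or an .htaccess directive has set) is what PHP actually searches, and the master value is just the php.ini default. The directories are tried in the order listed, and if nothing matches, PHP finally checks the calling script's own directory and the current working directory. A short sketch of inspecting and adjusting the path at runtime (the directory added here is only an example):

    ```php
    <?php
    // sketch: inspect and override the include path for the current script only
    $current = get_include_path();          // this is the "local value" in effect
    echo $current . PHP_EOL;

    // prepend an extra directory ahead of the existing entries
    set_include_path('/usr/share/php/smarty' . PATH_SEPARATOR . $current);

    require_once 'Smarty.class.php';        // now resolved via the modified path
    ```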

    Read the article

  • Can the Subversion client (svn) dereference symbolic links as if they were files?

    - by Ryan B. Lynch
    I have a directory on a Linux system that mostly contains symlinks to files on a different filesystem. I'd like to add the directory to a Subversion repository, dereferencing the symlinks in the process (treating them as the files they point to, rather than as links). Generally, I'd like to be able to handle any working-copy operations with this behavior, but the 'svn add' command is where it starts, I think. The SVN client utility doesn't appear to have any options related to symlink dereferencing in the working copy. I didn't find any references to this in the manual (http://svnbook.red-bean.com/en/1.5/index.html), either. I found a poster on the SVN users mailing list who asked the same question but never received an answer: http://markmail.org/message/ngchfnzlmm43yj7h (That poster ended up using hard links instead of symlinks. That technique is not an option in my case, because the real underlying files reside on a separate filesystem.) I'm using Subversion v1.6.1 on Fedora 11. For what it's worth, I know there are alternative tools/techniques that could approximate this behavior, but I have had to discard them for various reasons. I've already considered (and dust-binned) these possibilities:

    - a "union" mount, merging all of the directories containing the real files, with the SVN working-copy directory as the "top" layer in the union;
    - copying/moving the real files to the same filesystem as the SVN working copy, and using hard links instead of symlinks;
    - non-SVN version control systems.

    These were all neat ideas, and I'm sure they are good solutions to other problems, but they won't work given the constraints of this environment and situation.
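
    One workaround that may be worth sketching (it assumes an occasional re-copy of the targets is acceptable, which the question doesn't state): materialise the link targets as regular files inside the working copy before adding them, since cp can dereference symlinks. The paths below are placeholders:

    ```sh
    # sketch: replace the symlinks with copies of their targets inside the working copy
    # -L dereferences symbolic links, so the real file contents are copied
    cp -rL /path/with/symlinks/. /path/to/svn-working-copy/
    svn add /path/to/svn-working-copy/*
    ```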

    Read the article

  • SVN 409 conflict on commits and updates

    - by bhefny
    We have been using SVN for the past year, and when we migrated to an online server we started getting this error:

    ```
    Commit: Commit failed (details follow):
    File or directory 'x.php' is out of date; try updating
    resource out of date; try updating
    CHECKOUT of '/!svn/ver/491/x.php': 409 Conflict (http://svn.example.com)
    ```

    We are currently using SmartSVN 6.5, and we have also tested with RapidSVN and Syncro (we can't use TortoiseSVN because we have a lot of Ubuntu users). At the beginning I thought the question "How do you fix an SVN 409 Conflict Error" would help, but it didn't; we are still facing the same error, and it's even more absurd now. The main problem is that after you get the error, you can't shake it off. Updating doesn't solve it, reverting doesn't solve it. You are just stuck with the error. The only thing that could work is removing the file from SVN and adding your version, but that would defeat the point of using SVN in the first place. This is our Apache config (and yes, autoversioning is ON):

    ```apache
    <Location />
      DAV svn
      SVNPath /home/example/svn
      SVNAutoversioning on
      AuthType Basic
      AuthName "Access Restricted"
      AuthUserFile /home/example/svn-auth-file
      Require valid-user
    </Location>

    <Directory />
      <Files ~ "^\.ht">
        Order allow,deny
        Allow from all
        Satisfy All
      </Files>
      <Files ~ "^error_log">
        Order allow,deny
        Allow from all
        Satisfy All
      </Files>
    </Directory>
    ```

    And here are some observations:

    - We don't receive conflicts anymore; we just get this 409 conflict.
    - You can somehow avoid the error if you always update before committing.
    - When committing a modified file plus a newly added file, you get the error. It is as if the added file incremented the version by one and then you are committing another file with an older version.

    Please advise, we are about to go insane.
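
    One configuration detail that may be worth isolating (an assumption, since the question doesn't confirm the cause): SVNAutoversioning lets plain WebDAV writes create revisions on their own, which can leave regular SVN clients permanently "out of date". If autoversioning isn't actually required, a test with it disabled would look like:

    ```apache
    # sketch: the same location block with autoversioning turned off
    <Location />
      DAV svn
      SVNPath /home/example/svn
      # SVNAutoversioning on    (disabled for testing)
      AuthType Basic
      AuthName "Access Restricted"
      AuthUserFile /home/example/svn-auth-file
      Require valid-user
    </Location>
    ```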

    Read the article

  • Windows Azure: Creating subdirectories inside the blob

    - by veda
    I wanted to create some subdirectories inside my blob container, but it is not working out well. Here is my code:

    ```csharp
    protected void ButUpload_click(object sender, EventArgs e)
    {
        // store the uploaded file as a blob
        if (uplFileUpload.HasFile)
        {
            name = uplFileUpload.FileName;
            // get a reference to the cloud blob container
            CloudBlobContainer blobContainer = cloudBlobClient.GetContainerReference("documents");
            if (textbox.Text != "")
            {
                name = textbox.Text + "/" + name;
            }
            // set the name for the uploaded file
            string UploadDocName = name;
            // get the blob reference and set the metadata properties
            CloudBlockBlob blob = blobContainer.GetBlockBlobReference(UploadDocName);
            blob.Metadata["FILETYPE"] = "text";
            blob.Properties.ContentType = uplFileUpload.PostedFile.ContentType;
            // upload the blob to storage
            blob.UploadFromStream(uplFileUpload.FileContent);
        }
    }
    ```

    What I do is: if I have to create a subdirectory, I enter the name of the subdirectory in the textbox. For example, if I need to create a file named "test.txt" inside the subdirectory "files", then textbox.Text = files and uplFileUpload.FileName = test.txt. I concatenate them and upload to the blob, but it is not working well. I am getting just https://test.core.windows.net/documents/files/ — I am not getting the entire thing. I was expecting https://test.core.windows.net/documents/files/test.txt. What am I doing wrong? How do I create subdirectories inside the blob container?

    Read the article

  • How can I split my conkeror-rc config over multiple files?

    - by Ryan Thompson
    Short version: can you help me fill in this code?

    ```javascript
    var conkeror_settings_dir = ".conkeror.mozdev.org/settings";

    function load_all_js_files_in_dir (dir) {
        var full_path = get_home_directory().appendRelativePath(dir);
        // YOUR CODE HERE
    }

    load_all_js_files_in_dir(conkeror_settings_dir);
    ```

    Background: I'm trying out Conkeror for web browsing. It's an emacs-like browser running on Mozilla's rendering engine, using JavaScript as its configuration language (filling the role that elisp plays for emacs). In my emacs config, I have split my customizations into a series of files, where each file is a single unit of related options (for example, all my Perl-related settings might be in perl-settings.el). All these settings files are loaded automatically by a function in my .emacs that simply loads every elisp file under my "settings" directory. I am looking to structure my Conkeror config in the same way, with my main conkeror-rc file basically being a stub that loads all the js files under a certain directory relative to my home directory. Unfortunately, I am much less literate in javascript than I am in elisp, so I don't even know how to "source" a file.
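
    A sketch of one way the body might look; it leans on Mozilla's nsIFile directory enumerator and the subscript loader, and it assumes the conkeror rc runs with chrome privileges (verify the exact calls against the Conkeror API before relying on them):

    ```javascript
    // sketch -- not verified against a live Conkeror install
    var conkeror_settings_dir = ".conkeror.mozdev.org/settings";

    function load_all_js_files_in_dir (dir) {
        var full_path = get_home_directory();      // nsIFile for the home directory
        full_path.appendRelativePath(dir);         // descend into the settings dir

        var loader = Components.classes["@mozilla.org/moz/jssubscript-loader;1"]
                               .getService(Components.interfaces.mozIJSSubScriptLoader);

        var entries = full_path.directoryEntries;  // nsISimpleEnumerator of nsIFile
        while (entries.hasMoreElements()) {
            var file = entries.getNext()
                              .QueryInterface(Components.interfaces.nsIFile);
            if (file.isFile() && /\.js$/.test(file.leafName))
                loader.loadSubScript("file://" + file.path);   // "source" the file
        }
    }

    load_all_js_files_in_dir(conkeror_settings_dir);
    ```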

    Read the article

  • I have a problem sharing a folder programmatically using C#

    - by moon
    Here is my code. It shares the folder, but the share does not work correctly: when I want to access it, it shows "access denied". Help required.

    ```csharp
    private static void ShareFolder(string FolderPath, string ShareName, string Description)
    {
        try
        {
            // Create a ManagementClass object
            ManagementClass managementClass = new ManagementClass("Win32_Share");
            // Create ManagementBaseObjects for in and out parameters
            ManagementBaseObject inParams = managementClass.GetMethodParameters("Create");
            ManagementBaseObject outParams;
            // Set the input parameters
            inParams["Description"] = Description;
            inParams["Name"] = ShareName;
            inParams["Path"] = FolderPath;
            inParams["Type"] = 0x0; // Disk Drive
            // Other types:
            //   DISK_DRIVE = 0x0;
            //   PRINT_QUEUE = 0x1;
            //   DEVICE = 0x2;
            //   IPC = 0x3;
            //   DISK_DRIVE_ADMIN = 0x80000000;
            //   PRINT_QUEUE_ADMIN = 0x80000001;
            //   DEVICE_ADMIN = 0x80000002;
            //   IPC_ADMIN = 0x8000003;
            // inParams["MaximumAllowed"] = int maxConnectionsNum;

            // Invoke the method on the ManagementClass object
            outParams = managementClass.InvokeMethod("Create", inParams, null);
            // Check to see if the method invocation was successful
            if ((uint)(outParams.Properties["ReturnValue"].Value) != 0)
            {
                throw new Exception("Unable to share directory. Because Directory is already shared or directory not exist");
            } // end if
        } // end try
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message, "error!");
        } // end catch
    } // End Method
    ```
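
    One avenue that may be worth checking (an assumption, since the share's security settings aren't shown): Win32_Share.Create also takes an Access parameter, a Win32_SecurityDescriptor; when it is omitted, the share-level permissions can end up too restrictive, and the folder's NTFS permissions still apply on top. A rough illustration of passing one, to be tightened for real use (the "Everyone" trustee may need an explicit SID on some systems):

    ```csharp
    // sketch: grant Everyone full control at the share level before InvokeMethod("Create", ...)
    ManagementObject trustee = new ManagementClass("Win32_Trustee").CreateInstance();
    trustee["Name"] = "Everyone";

    ManagementObject ace = new ManagementClass("Win32_ACE").CreateInstance();
    ace["AccessMask"] = 2032127;   // full control
    ace["AceFlags"]   = 3;         // inherit to files and subfolders
    ace["AceType"]    = 0;         // access allowed
    ace["Trustee"]    = trustee;

    ManagementObject sd = new ManagementClass("Win32_SecurityDescriptor").CreateInstance();
    sd["ControlFlags"] = 4;        // DACL present
    sd["DACL"]         = new object[] { ace };

    inParams["Access"] = sd;       // set alongside Name/Path/Type in the code above
    ```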

    Read the article

  • What can I use to journal writes to a file system?

    - by Dmitry
    Hello, all. I need to track all writes to files in order to keep a synchronized version of the files in a different place (a server, or just another directory; it doesn't matter). Assume the following:

    - all files are located in the same directory;
    - feel free to create some system files (e.g. SomeFileName.Ext~temp-data);
    - no one has concurrent access to the synced directory; nobody spoils our meta-files or changes the real files before we do the postponed writes (like commits);
    - we do not care about recovering "local" changes in case of a crash; the system can just be rolled back to the state of the "server" by a simple copy from it;
    - it is important that it be transparent to use (so the programmer can just call ordinary fopen(), read(), write()).

    It must be guaranteed that the copy of the files which the "server" has is consistent; that is, the whole set of files as it existed at some moment in time. It may be significantly outdated, but it must be a fair snapshot of all files at some point. As I understand it, I should overload the writing logic to collect data in order to send changes to the "server", for example by writing to a temporary File~tmp. And so I also have to overload reads so the program can read the actual data of a file. It would be great if you could suggest an existing library (Java or C++, it is unimportant) or solution (VCS customizing?), or give hints on how I should write it myself. Edit: After some reading I have more precise requirements: I need a COW (copy-on-write) wrapper for fopen(), fwrite(), ..., or an interceptor (hook) for WriteFile() and the other FS API system calls. A log-structured file system in userspace would be an alternative too.

    Read the article

  • Windows 64bit Sandboxing software alternatives

    - by Pacifika
    As you might know, sandboxing software doesn't work on 64-bit Windows due to PatchGuard. What are the alternatives for a person looking to test untrusted / temporary software? Edit: @Nick I'd prefer an alternative to VMs, as I'm not happy with the extended startup time, the extra login sequences and the memory overhead that accompany booting a VM solution just to test something out occasionally as a home user. Also, it's another system that needs to be kept secure and up to date.

    Read the article

  • Apache is looking for htaccess and htpasswd files that aren't there

    - by user1094092
    Having an issue where Apache is requesting authentication and looking for an .htpasswd file, based on instructions from an .htaccess file that's no longer in DocumentRoot. Background: in my DocumentRoot, I'd previously copied an .htaccess and .htpasswd file from another machine (along with all of the other website files). .htaccess contents:

    ```apache
    AuthType Basic
    AuthName "Password is required"
    AuthUserFile /some/directory/that/was/on/the/other/server/not/this/one/.htpasswd
    Require valid-user
    ```

    Here's the catch: I moved the .htaccess and .htpasswd out of DocumentRoot and even renamed the files. There is no longer an .htaccess file in DocumentRoot at all. But when I try to access localhost from a browser, I am prompted to enter the login and password. When I enter the login and password (from the old, not-in-DocumentRoot .htpasswd file), I get a 500 Internal Server error and the log shows:

    ```
    [error] [client 127.0.0.1] (2)No such file or directory: Could not open password file: /some/directory/that/was/on/the/other/server/not/this/one/.htpasswd
    ```

    This has been quite a puzzle, because there's no longer an .htaccess or .htpasswd file anywhere in DocumentRoot!! I have tried several Apache restarts and also tried using a blank .htaccess file in DocumentRoot. I even grepped the entire machine for references to AuthType Basic to see if I missed anything. httpd.conf looks normal enough... I can post that if needed, but this question seems long enough as it is :) Thanks for any assistance you can provide.
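
    A couple of checks that may help narrow this down (paths assume a typical layout and may need adjusting): the stale directives could live in an included conf file or in an .htaccess in a parent directory of DocumentRoot rather than in DocumentRoot itself, since Apache honours .htaccess files in every directory along the path wherever AllowOverride permits:

    ```sh
    # sketch: find where the stale auth directives are really coming from
    apachectl -S                                            # which vhosts / config files are active
    grep -Rni "AuthUserFile\|AuthType" /etc/httpd /etc/apache2 2>/dev/null
    find / -maxdepth 4 -name ".htaccess" 2>/dev/null        # stray per-directory overrides
    ```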

    Read the article

  • nginx rule to capture header and append value as query string

    - by John Schulze
    I have an interesting problem I need to solve in nginx: one of the sites I'm building receives inbound traffic on port 80 (and only port 80), which may have a certain header set in the request. If this header is present, I need to capture its value and append it as a query-string parameter before doing a temporary redirect (rewrite) to a different (secure) server, while passing the parameter and any other query-string params along. This should be very doable, but how? Many thanks, JS
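
    A sketch of one way to express this (the header name X-Custom-Token and the target host are placeholders): nginx exposes any request header as $http_<lowercased_name_with_underscores>, and returning 302 gives the temporary redirect:

    ```nginx
    # sketch: redirect to the secure server, appending the header value when present
    server {
        listen 80;

        location / {
            # $http_x_custom_token carries the value of the "X-Custom-Token" request header
            if ($http_x_custom_token) {
                # $args keeps any existing query string; with no query string this
                # produces "?&token=...", which most backends tolerate
                return 302 https://secure.example.com$uri?$args&token=$http_x_custom_token;
            }
            return 302 https://secure.example.com$request_uri;
        }
    }
    ```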

    Read the article

  • Apache content negotiation returns wrong Content-Type with MultiViews :(

    - by edbras
    I am using Apache web server 2.2 with content negotiation through MultiViews. I have:

    ```apache
    <Directory /home/develop/web/prodBuild>
      Options +MultiViews
      AddEncoding x-gzip .gz
    </Directory>
    ```

    This directory contains the gzipped files. So if a browser requests the file bla.js, the server will return the file bla.js.gz, as the file bla.js doesn't exist. However, the Content-Type on my Ubuntu server is set to "application/x-gzip", which is wrong: it's a JavaScript file, so it has to be "application/javascript". I have this working on my local Windows Apache server, but I don't seem to get it working on the remote Ubuntu server :( ... No idea why... Who/what is setting this wrong Content-Type? BTW: I don't have a line like "AddType application/x-gzip .gz .tgz" anywhere in my config, because then I would understand why it goes wrong. My workaround: add the following line under the "AddEncoding" above:

    ```apache
    AddType "" .gz
    ```

    It will then set an empty string as the Content-Type, and then it works. I think the browser will try to discover the content type itself... I hope somebody can explain to me why this is not working and why this Content-Type is set :(
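
    One place worth checking (an assumption based on how Debian/Ubuntu package Apache): the distribution-level mime configuration may declare a type for .gz even though the site config doesn't, and RemoveType can undo that for this directory so the type comes from the .js extension while the encoding still comes from .gz:

    ```apache
    # sketch: inside the <Directory /home/develop/web/prodBuild> block
    RemoveType .gz                      # drop any inherited "AddType application/x-gzip .gz"
    AddEncoding x-gzip .gz              # keep advertising the content encoding
    AddType application/javascript .js  # bla.js.gz then negotiates as javascript, gzip-encoded
    ```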

    Read the article

  • Python FTP grabbing and saving images issue

    - by PylonsN00b
    OK, so I have been messing with this all day long. I am fairly new to Python FTP. I have searched through here and came up with this:

    ```python
    images = notions_ftp.nlst()
    for image_name in image_names:
        if found_url == False:
            try:
                for image in images:
                    ftp_image_name = "./%s" % image_name
                    if ftp_image_name == image:
                        found_url = True
                        image_name_we_want = image_name
            except:
                pass

    # We failed to find an image for this product, it will have to be done manually
    if found_url == False:
        log.info("Image ain't there baby -- SKU: %s" % sku)
        return False

    # Hey we found something! Open the image....
    notions_ftp.retrlines("RETR %s" % image_name_we_want, open(image_name_we_want, "rb"))
    1/0
    ```

    So I have narrowed the error down to the line before I divide by zero. Here is the error:

    ```
    Traceback (most recent call last):
      File "<console>", line 6, in <module>
      File "<console>", line 39, in insert_image
    IOError: [Errno 2] No such file or directory: '411483CC-IT,IM.jpg'
    ```

    If you follow the code you will see that the image IS in the directory, because image_name_we_want is only set if it is found in the directory listing obtained on the first line of my code. And I KNOW it's there because I am looking at the FTP site myself and... it's freakin' there. At some point during all of this I got the image to save locally, which is what I most want, but I have long since forgotten what I did to make that happen. Either way, why does it think that the image isn't there when it clearly finds it in the listing?
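
    For what it's worth, the traceback points at a local problem rather than the FTP listing: open(image_name_we_want, "rb") tries to open a local file for reading before the transfer even starts, and retrlines expects a callback function rather than a file object (and is meant for text, not images). A sketch of the usual ftplib pattern for saving a binary file, reusing the names from the snippet above:

    ```python
    # sketch: download the image as binary data and write it to a local file
    with open(image_name_we_want, "wb") as local_file:
        notions_ftp.retrbinary("RETR %s" % image_name_we_want, local_file.write)
    ```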

    Read the article

  • ASP.NET on Linux/Mono broken? Weird load then 404/resource error

    - by acidzombie24
    I am testing my ASP.NET app on Mono's VMware image, using openSUSE. The home page says:

    "Trying out your own code: You can test your own applications by connecting with the file manager on this machine to the machine hosting your application, and copying over the directory containing the application and its associated files. To test your ASP.NET applications, copy your code to a new directory in /srv/www/htdocs, then visit the following url: http://localhost/directoryname/page.aspx, where directoryname is the directory where you deployed your application, and page.aspx is the initial page for your software, typically Default.aspx."

    Mirror: http://tortillaflatscafe.com/ I saw my app had errors. After I restarted the VM, I can no longer run my app. Instead of getting error messages or anything, I get this error instead:

    ```
    Server Error in '/myfoldername' Application

    The resource cannot be found.
    Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.

    Requested URL: /Default.aspx
    ```

    I tried /default.aspx, used sudo to set permissions to 777 recursively, and restarted Apache via the terminal, but I could not get this error to go away. How do I fix this?

    Read the article

  • Can IBM IHS server support more than one WebSphere Application Server plug-in location?

    - by Spike Williams
    We want http://webserver.com/foo to point to an instance of WAS 6.0, and http://webserver.com/foo2 to point to an instance of WAS 7.0, running on the same server but with different port numbers. This is a temporary arrangement, as we need both servers running while we transition our applications from 6.0 to 7.0. The web server is IBM IHS (an Apache variant), and it needs to use the WebSphere plug-in to connect to the WAS servers. Will this work? Any drawbacks?

    Read the article

  • How secure is TeXShop+Truecrypt on OS X?

    - by trolle3000
    Hello. All of the question I would like to ask is pretty much contained in the title... I have a couple of TrueCrypt-encrypted folders on OS X, where I keep some .tex documents I edit in TeXShop, and the question is: does TeXShop 'bleed' information? Does it store temporary files anywhere else on the system? Filenames, file contents, etc. Cheers!

    Read the article

  • Haskell lazy I/O and closing files

    - by Jesse
    I've written a small Haskell program to print the MD5 checksums of all files in the current directory (searched recursively). Basically a Haskell version of md5deep. All is fine and dandy except if the current directory has a very large number of files, in which case I get an error like:

    ```
    <program>: <currentFile>: openBinaryFile: resource exhausted (Too many open files)
    ```

    It seems Haskell's laziness is causing it not to close files, even after its corresponding line of output has been completed. The relevant code is below. The function of interest is getList.

    ```haskell
    import qualified Data.ByteString.Lazy as BS

    main :: IO ()
    main = putStr . unlines =<< getList "."

    getList :: FilePath -> IO [String]
    getList p =
        let getFileLine path = liftM (\c -> (hex $ hash $ BS.unpack c) ++ " " ++ path)
                                     (BS.readFile path)
        in mapM getFileLine =<< getRecursiveContents p

    hex :: [Word8] -> String
    hex = concatMap (\x -> printf "%0.2x" (toInteger x))

    getRecursiveContents :: FilePath -> IO [FilePath]
    -- ^ Just gets the paths to all the files in the given directory.
    ```

    Are there any ideas on how I could solve this problem? The entire program is available here: http://haskell.pastebin.com/PAZm0Dcb
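
    A sketch of one way to make each per-file read strict so the handle is closed before the next file is touched; it swaps the lazy ByteString module for the strict one and assumes the same hash and hex helpers as the program above:

    ```haskell
    import qualified Data.ByteString as S   -- strict ByteString

    -- sketch: S.readFile reads the whole file and closes the handle right away,
    -- so only one file is open at a time inside the mapM
    getFileLine :: FilePath -> IO String
    getFileLine path = do
        contents <- S.readFile path
        return (hex (hash (S.unpack contents)) ++ " " ++ path)
    ```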

    Read the article

  • How might one cope with the ambiguous value produced by GetDllDirectory?

    - by Integer Poet
    GetDllDirectory produces an ambiguous value. When the string this call produces is empty, it means one of the following:

    - nobody has called SetDllDirectory
    - somebody passed NULL to SetDllDirectory
    - somebody passed an empty string to SetDllDirectory

    The first two cases are equivalent for my purposes, but the third case is a problem. If I want to write save/restore code (call GetDllDirectory to save the "old" value, SetDllDirectory to set a "new" value temporarily, and later SetDllDirectory again to restore the "old" value), I run the risk of reversing some other programmer's intent. If the other programmer intended for the current working directory to be in the DLL search order (in other words, one of the first two bullets is true), and I pass an empty string to SetDllDirectory, I will be taking the current working directory out of the DLL search order, reversing the other programmer's intent. Can anyone suggest an approach to eliminate or work around this ambiguity? P.S. I know having the current working directory in the DLL search order could be interpreted as a security hole. Nevertheless, it is the default behavior, and my code is not in a position to undo that; my code needs to be compatible with the expectations of all potential callers, many of which are large and old and beyond my control.

    Read the article

  • Visual Studio 2010 / ASP.NET MVC 2 / Publish

    - by SevenCentral
    I just did a clean install on Windows 7 x64 Professional with the final release of Visual Studio 2010 Premium. In order to duplicate what I'm experiencing, do the following:

    1. Create a new ASP.NET MVC 2 Web Application.
    2. Right-click the project and select Properties.
    3. On the Web tab, select "Use Local IIS Web Server".
    4. Click on Create Virtual Directory.
    5. Save all.
    6. Unload the project.
    7. Edit the project file.
    8. Change MvcBuildViews to true.
    9. Save all.
    10. Reload the project.
    11. Right-click the project and select Publish.
    12. Choose the file system publish method.
    13. Enter a target location.
    14. Choose "Delete all existing files".
    15. Select Publish.
    16. Right-click the project.
    17. Select Publish.

    Each time I do the above I get the following error: "It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level..." The error originates from obj\debug\package\packagetmp\web.config, relative to the project directory. I can repeat this all day long with any MVC 2 project I've built. In order to fix this problem, I need to set MvcBuildViews to false in the project file. That's not really an option. This wasn't a problem in Visual Studio 2008, and it seems to be an issue with the way the Publish command stages files beneath the project directory. Can anyone else duplicate this error? Is this a bug or by design? Is there a fix, workaround, etc...? Thanks.
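
    A sketch of one commonly suggested workaround (an assumption; the property and target names should be checked against the actual .csproj): clear the staged publish output under obj\ before the view compiler runs, so AspNetCompiler never sees the extra packagetmp web.config:

    ```xml
    <!-- sketch: in the .csproj, in place of the stock MvcBuildViews target -->
    <Target Name="MvcBuildViews" AfterTargets="AfterBuild" Condition="'$(MvcBuildViews)'=='true'">
      <!-- remove the publish staging area (obj\Debug\Package\...) left behind by Publish -->
      <RemoveDir Directories="$(BaseIntermediateOutputPath)$(Configuration)\Package"
                 Condition="Exists('$(BaseIntermediateOutputPath)$(Configuration)\Package')" />
      <AspNetCompiler VirtualPath="temp" PhysicalPath="$(WebProjectOutputDir)" />
    </Target>
    ```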

    Read the article

< Previous Page | 261 262 263 264 265 266 267 268 269 270 271 272  | Next Page >