Search Results

Search found 14454 results on 579 pages for 'unc path'.

Page 89/579 | < Previous Page | 85 86 87 88 89 90 91 92 93 94 95 96  | Next Page >

  • Why is sudo bash different from regular bash

    - by cyberjar09
    Problem description: I am using the Play framework in my development, which requires making the Python script play available on the PATH, so I created a symbolic link in /usr/local/bin ... I have written a shell script (call it status.sh) that calls this Python script as follows: play status <some values here related to my app> &> /tmp/xyz.txt and this shell script then sends me the file via email. This works perfectly when I execute the script as ./status.sh. However, when the script is executed every day from a cron expression, I get output on stderr saying 'play: command not found'. So I did some digging on my own, and here are my findings: echo $PATH when I am on the shell shows that I have /usr/local/bin available to me, hence I can successfully execute the command play status. However, when I type sudo bash and then echo $PATH, I no longer have /usr/local/bin; it is a limited set of directories (one of them being /usr/bin). Q: Why this behavior? I fail to understand why the path is different. Also, as a workaround, would you suggest I: create a new symbolic link from /usr/bin to /usr/local/bin (what are the side effects of this?), remove the /usr/local/bin symlink altogether and only use /usr/bin, or is there a convention that I am not following here for linking new programs and executing them from $PATH? Thanks.
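    To illustrate what I am seeing, here is a small diagnostic sketch (it assumes sudo is configured so it will not prompt here) that prints the PATH each environment actually receives:

    ```python
    # Print the PATH of the current shell and the PATH that `sudo bash` receives.
    # sudo commonly resets PATH to the `secure_path` value from /etc/sudoers,
    # which usually omits /usr/local/bin; cron uses an even smaller default PATH.
    import os
    import subprocess

    print("interactive shell PATH:", os.environ.get("PATH", ""))

    result = subprocess.run(
        ["sudo", "bash", "-c", "echo $PATH"],
        capture_output=True, text=True, check=True,
    )
    print("sudo bash PATH:        ", result.stdout.strip())
    ```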

    Read the article

  • Check if all files in a directory exist elsewhere

    - by aioobe
    I'm about to remove an old backup directory, but before doing so I'd like to make sure that all of these files exist in a newer directory. Is there a tool for this? Or am I best off doing this "manually" using find, md5sum, sorting, comparing, etc.? Clarification: if I have the following directory listings
    /path/to/old_backup/dir1/fileA
    /path/to/old_backup/dir1/fileB
    /path/to/old_backup/dir2/fileC
    and
    /path/to/new_backup/dir1/fileA
    /path/to/new_backup/dir2/fileB
    /path/to/new_backup/dir2/fileD
    then fileA and fileB exist in new_backup (fileA in its original directory, and fileB has moved from dir1 to dir2). fileC, on the other hand, is missing from new_backup, and fileD has been created. In this situation I'd like the output to be something like: fileC exists in old_backup, but not in new_backup.
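    In other words, the "manual" approach I have in mind would look roughly like this sketch (using the example paths above, not my real ones):

    ```python
    # Hash every file under new_backup, then report any file in old_backup whose
    # content is not present anywhere under new_backup, regardless of location.
    import hashlib
    import os

    def content_hashes(root):
        hashes = set()
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                with open(os.path.join(dirpath, name), "rb") as f:
                    hashes.add(hashlib.md5(f.read()).hexdigest())
        return hashes

    new_hashes = content_hashes("/path/to/new_backup")

    for dirpath, _, filenames in os.walk("/path/to/old_backup"):
        for name in filenames:
            with open(os.path.join(dirpath, name), "rb") as f:
                digest = hashlib.md5(f.read()).hexdigest()
            if digest not in new_hashes:
                print(name + " exists in old_backup, but not in new_backup")
    ```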

    Read the article

  • Issue with percona-xtrabackup-2.0.0 hot backup on MyISAM tables

    - by arn
    I am trying to implement hot backups for MyISAM tables with "percona-xtrabackup-2.0.0" and am getting the following error. Since all the tables are MyISAM, I wonder whether I am even using the correct package. Backup:
    ./innobackupex --user="root" --password=<pass> --defaults-file="<path>/my.cnf" --ibbackup="<path>/percona-xtrabackup-2.0.0/bin/xtrabackup" <path>/backup/
    innobackupex: fatal error: no 'mysqld' group in MySQL options
    innobackupex: fatal error: OR no 'datadir' option in group 'mysqld' in MySQL options
    apply-log:
    ./innobackupex-1.5.1 --apply-log --defaults-file=<path>/backup/2012-06-02_09-59-30/backup-my.cnf --ibbackup=<path>/percona-xtrabackup-2.0.0/bin/xtrabackup <path>/backup/2012-06-02_09-59-30/

    Read the article

  • How can I diff two Redhat Linux servers?

    - by Stuart Woodward
    I have two servers that should have the same setup except for known differences. By running:
    find / \( -path /proc -o -path /sys -o -path /dev \) -prune -o -print | sort > allfiles.txt
    I can get a list of all the files on one server and compare it against the list of files on the other server. This will show me the differences in the names of the files that reside on the servers. What I really want to do is run a checksum on all the files on both of the servers and compare them, to also find where the contents are different, e.g.
    find / \( -path /proc -o -path /sys -o -path /dev \) -prune -o -print | xargs /usr/bin/sha1sum
    Is this a sensible way to do this? I was thinking that rsync already has most of this functionality, but can it be used to provide the list of differences?
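    For example, if I save the sha1sum output from both servers to files, a small script could report the differences (a sketch assuming each line looks like '<hash>  <path>', as sha1sum prints it):

    ```python
    # Compare two sha1sum listings (one per server) and report files that are
    # missing on one side or whose checksums differ.
    def load_sums(filename):
        sums = {}
        with open(filename) as f:
            for line in f:
                digest, path = line.rstrip("\n").split(None, 1)
                sums[path] = digest
        return sums

    server_a = load_sums("server_a_sums.txt")   # output of the find|xargs sha1sum run
    server_b = load_sums("server_b_sums.txt")

    for path in sorted(set(server_a) | set(server_b)):
        if path not in server_b:
            print("only on server A: " + path)
        elif path not in server_a:
            print("only on server B: " + path)
        elif server_a[path] != server_b[path]:
            print("contents differ: " + path)
    ```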

    Read the article

  • How to get which file was requested to open in a Mac application?

    - by ramsey
    I have created a Mac application which can open my file extensions. But when I tested it, I didn't get the path of the file requested to open with the application; instead I got "psn_0_151589". I checked this for iTunes, TextEdit, Xcode and other applications. Below is my app's sample main code where I process the path of the opened file (Python code):
    import sys
    import os.path
    print("File opened with this app :: ", sys.argv[1])
    if os.path.exists(sys.argv[1]):
        print("valid file :: { do something...}\n")
    else:
        print("Invalid file path received :: { do nothing }\n")
    OUTPUT:
    File opened with this app :: psn_0_151589
    Invalid file path received :: { do nothing }
    Hope someone knows how to get the file path which was opened using any application. Any help would be greatly appreciated. -ramsey
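    From what I can gather, documents opened through the Finder are delivered via the application:openFile: delegate callback (Apple Events) rather than argv, which only carries the -psn_... flag. A minimal PyObjC sketch of that approach (assuming the app is packaged as a .app bundle, e.g. with py2app, and that PyObjC is available):

    ```python
    # Receive opened files through the app delegate instead of sys.argv.
    from Foundation import NSObject
    from AppKit import NSApplication
    from PyObjCTools import AppHelper

    class AppDelegate(NSObject):
        def application_openFile_(self, app, filename):
            # Called once per document the app is asked to open.
            print("File opened with this app ::", filename)
            return True

    app = NSApplication.sharedApplication()
    delegate = AppDelegate.alloc().init()   # keep a reference so it isn't collected
    app.setDelegate_(delegate)
    AppHelper.runEventLoop()
    ```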

    Read the article

  • Password protect an alias virtual directory

    - by Jason
    I have a main domain being hosted through CPanel. I also have a sub-domain that I would like to appear as a path under the main domain instead of as a sub-domain. So I have: http://example.com/ pointing to the main hosted file. http://example.com/mydir pointing to the subdomain files. This is achieved by a httpd.conf include from the main domain section to set an alias: alias /mydir /path/to/subdomain/files/ Now, that works fine so far. The problem is that if a .htaccess file under /path/to/the/subdomain/files/ contains an error, the alias is completely skipped, and /mydir goes instead to the main host files. That is kind of surprising to me - I would expect an error to return an error instead. Now the killer: if I try to password protect /path/to/subdomain/files/, then trying to access http://example.com/mydir will again attempt to deliver from under the main hosted files and not from /path/to/subdomain/files/ I am not seeing any errors reported on the .htaccess file in the apache error log, so I am assuming the .htaccess is valid: AuthUserFile /path/to/valid/readable/.htpasswd AuthName "Secure Access" AuthType Basic Require valid-user This kind of behaviour does not seem right to me. Is there something obvious that could be causing it? Or is this just the way it works? Perhaps using an alias is the wrong way to go?

    Read the article

  • Will adding q&a help my site's rankings, and if so, what are the implications of a sub-domain for q&a rather than a path on the site? [closed]

    - by ElHaix
    Possible Duplicate: Subdomain versus subdirectory One of our web properties is doing quite well without any additional links being created on the site, and our link inventory is tightly managed - no user-generated links. To introduce a community aspect to the site, we want to implement a q&a forum. Once in place, new links will populate our link inventory with keywords that are not necessarily targeted to the site. With the q&a on a sub-domain, would that not affect the main site's rankings? What's the best approach for this?

    Read the article

  • Git can no longer open emacs as its editor

    - by mwilliams
    I'm running Git version 1.7.3.2 that I built from source, zsh is my shell, and emacs is my editor. Recently I started seeing the following:
    /usr/local/Cellar/git/1.7.3.2/libexec/git-core/git-sh-setup: line 106: emacs: command not found
    Could not execute editor
    My zshrc looks like the following so I can use the Cocoa build and the console binary provided with it:
    EMACS_HOME="/Applications/Emacs.app/Contents/MacOS"
    function e() { PATH=$EMACS_HOME/bin:$PATH $EMACS_HOME/Emacs -nw $@ }
    function ec() { PATH=$EMACS_HOME/bin:$PATH emacsclient -t $@ }
    function es() { e --daemon=$1 && ec -s $1 }
    function el() { ps ax|grep Emacs }
    function ek() { $EMACS_HOME/bin/emacsclient -e '(kill-emacs)' -s $1 }
    function ecompile() { e -eval "(setq load-path (cons (expand-file-name \".\") load-path))" \
        -batch -f batch-byte-compile $@ }
    alias emacs=e
    alias emacsclient=ec
    I also have export EDITOR="emacs" and have tried adding export GIT_EDITOR="emacs" (and swapping that out with "e"), but whatever I try I can't get git to open emacs whenever I need to do a commit or an interactive rebase, etc.

    Read the article

  • Tried to install some software, it says some packages are damaged, cannot fix them

    - by lempira
    So, I go to the Ubuntu Software Center, as soon as it opens, a window pops up with the following text: "Items cannot be installed or removed until the package catalog is repaired. Do you want to repair it now?" Then I click the "Repair" button, then a new window pops up with the following text: "Package operation failed. The installation or removal of a software package failed." Then I click on the "Details" button, which returns me the following text: installArchives() failed: Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16. Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17. Preconfiguring packages ... Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16. Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17. Preconfiguring packages ... Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16. Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17. Preconfiguring packages ... dpkg: warning: 'ldconfig' not found in PATH or not executable. dpkg: error: 1 expected program not found in PATH or not executable. Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin. Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (2) dpkg: warning: 'ldconfig' not found in PATH or not executable. dpkg: error: 1 expected program not found in PATH or not executable. Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin. What should I do?

    Read the article

  • sudo prompts for password over ssh

    - by Joe Watkins
    I have sudo set up for a shell script as follows on "hostname" (sudo -l output):
    (suser) NOPASSWD: /path/script*
    The sudoers content is:
    myuser ALL=(suser) NOPASSWD: /path/script*
    This works fine, so I can run the following, logged in locally on hostname, without needing a password:
    sudo -u suser /path/script
    However, when I use ssh (with keys set up, so no password required) to log in and run it, as follows:
    ssh hostname sudo -u suser /path/script
    I get prompted for a password, and when the password is entered I get:
    Sorry, user myuser is not allowed to execute '/path/script' as suser on hostname.
    Why? NB the following does not prompt for a password at any point:
    $ ssh hostname
    $ sudo -u suser /path/script

    Read the article

  • How to generalize a method call in Java (to avoid code duplication)

    - by dln385
    I have a process that needs to call a method and return its value. However, there are several different methods that this process may need to call, depending on the situation. If I could pass the method and its arguments to the process (like in Python), then this would be no problem. However, I don't know of any way to do this in Java. Here's a concrete example. (This example uses Apache ZooKeeper, but you don't need to know anything about ZooKeeper to understand the example.) The ZooKeeper object has several methods that will fail if the network goes down. In this case, I always want to retry the method. To make this easy, I made a "BetterZooKeeper" class that inherits the ZooKeeper class, and all of its methods automatically retry on failure. This is what the code looked like: public class BetterZooKeeper extends ZooKeeper { private void waitForReconnect() { // logic } @Override public Stat exists(String path, Watcher watcher) { while (true) { try { return super.exists(path, watcher); } catch (KeeperException e) { // We will retry. } waitForReconnect(); } } @Override public byte[] getData(String path, boolean watch, Stat stat) { while (true) { try { return super.getData(path, watch, stat); } catch (KeeperException e) { // We will retry. } waitForReconnect(); } } @Override public void delete(String path, int version) { while (true) { try { super.delete(path, version); return; } catch (KeeperException e) { // We will retry. } waitForReconnect(); } } } (In the actual program there is much more logic and many more methods that I took out of the example for simplicity.) We can see that I'm using the same retry logic, but the arguments, method call, and return type are all different for each of the methods. Here's what I did to eliminate the duplication of code: public class BetterZooKeeper extends ZooKeeper { private void waitForReconnect() { // logic } @Override public Stat exists(final String path, final Watcher watcher) { return new RetryableZooKeeperAction<Stat>() { @Override public Stat action() { return BetterZooKeeper.super.exists(path, watcher); } }.run(); } @Override public byte[] getData(final String path, final boolean watch, final Stat stat) { return new RetryableZooKeeperAction<byte[]>() { @Override public byte[] action() { return BetterZooKeeper.super.getData(path, watch, stat); } }.run(); } @Override public void delete(final String path, final int version) { new RetryableZooKeeperAction<Object>() { @Override public Object action() { BetterZooKeeper.super.delete(path, version); return null; } }.run(); return; } private abstract class RetryableZooKeeperAction<T> { public abstract T action(); public final T run() { while (true) { try { return action(); } catch (KeeperException e) { // We will retry. } waitForReconnect(); } } } } The RetryableZooKeeperAction is parameterized with the return type of the function. The run() method holds the retry logic, and the action() method is a placeholder for whichever ZooKeeper method needs to be run. Each of the public methods of BetterZooKeeper instantiates an anonymous inner class that is a subclass of the RetryableZooKeeperAction inner class, and it overrides the action() method. The local variables are (strangely enough) implicitly passed to the action() method, which is possible because they are final. In the end, this approach does work and it does eliminate the duplication of the retry logic. However, it has two major drawbacks: (1) it creates a new object every time a method is called, and (2) it's ugly and hardly readable. 
    Also, I had to work around the 'delete' method, which has a void return value. So here is my question: is there a better way to do this in Java? This can't be a totally uncommon task, and other languages (like Python) make it easier by allowing methods to be passed. I suspect there might be a way to do this through reflection, but I haven't been able to wrap my head around it.
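    To show what I mean by passing the method, here is the Python shape of it (a sketch only; the zk calls in the comments are illustrative, not real ZooKeeper bindings):

    ```python
    # Retry wrapper that takes the operation as a callable, so the retry logic is
    # written once while the method, its arguments and its return type all vary.
    import time

    def retry(action, *args, retries=3, delay=1.0, **kwargs):
        last_error = None
        for _ in range(retries):
            try:
                return action(*args, **kwargs)
            except ConnectionError as error:   # stand-in for KeeperException
                last_error = error
                time.sleep(delay)              # stand-in for waitForReconnect()
        raise last_error

    # Hypothetical usage -- no per-method wrapper class is needed:
    #   stat = retry(zk.exists, path, watcher)
    #   data = retry(zk.get_data, path, watch, stat)
    #   retry(zk.delete, path, version)
    ```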

    Read the article

  • Group Policy: Block access to \\localhost\C$

    - by Ryan R
    We have a restricted Windows 7 computer that hides the C drive and prevents non-admin users from accessing it. However, they are able to circumvent this by typing the following into Explorer: \\localhost\C$ How can I disable this path but allow other UNC paths? For example, they are allowed to access a shared folder on a different computer, e.g. \\192.168.2.1\SharedTransfer. Note: simply enabling the Group Policy "Remove Run menu from Start Menu" will not work, as this blocks all UNC paths.

    Read the article

  • Hard Disk DRDY error: is it a crash?

    - by pranjal
    I am using IBM Thinkpad, 1.7GHz, 512 RAM with Linux Mint 9 installed. I have two partitions in addition to root. One of the partitions became read-only yesterday, after which I rebooted my system. It is extremely slow along with DRDY Error : Is my Hard disk crashed ? Error Log while booting. Differences between boot sector and its backup. failed command : READ DMA BMDMA : stat 0X25 ata 1.00 : status : { DRDY ERR } ata 1.00 : status :{ UNC } Buffer I/O error on logical device, logical block 65467 smartctl output for the partition: mint mint # smartctl -a /dev/sda1 smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Device Model: TOSHIBA MK4026GAX RoHS Serial Number: X5LY1623T Firmware Version: PA107E User Capacity: 40,007,761,920 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 6 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Thu Feb 17 06:48:25 2011 UTC SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x84) Offline data collection activity was suspended by an interrupting command from host. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 153) seconds. Offline data collection capabilities: (0x1b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. No Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. No General Purpose Logging support. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 30) minutes. 
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0 2 Throughput_Performance 0x0005 100 100 050 Pre-fail Offline - 0 3 Spin_Up_Time 0x0027 100 100 001 Pre-fail Always - 310 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 3968 5 Reallocated_Sector_Ct 0x0033 100 100 050 Pre-fail Always - 40 7 Seek_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0 8 Seek_Time_Performance 0x0005 100 100 050 Pre-fail Offline - 0 9 Power_On_Hours 0x0032 082 082 000 Old_age Always - 7257 10 Spin_Retry_Count 0x0033 179 100 030 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 3484 192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 489 193 Load_Cycle_Count 0x0032 064 064 000 Old_age Always - 367150 194 Temperature_Celsius 0x0022 100 100 000 Old_age Always - 36 (Lifetime Min/Max 14/57) 196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 33 197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 82 198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 1 199 UDMA_CRC_Error_Count 0x0032 200 253 000 Old_age Always - 0 220 Disk_Shift 0x0002 100 100 000 Old_age Always - 101 222 Loaded_Hours 0x0032 085 085 000 Old_age Always - 6146 223 Load_Retry_Count 0x0032 100 100 000 Old_age Always - 0 224 Load_Friction 0x0022 100 100 000 Old_age Always - 0 226 Load-in_Time 0x0026 100 100 000 Old_age Always - 227 240 Head_Flying_Hours 0x0001 100 100 001 Pre-fail Offline - 0 SMART Error Log Version: 1 ATA Error Count: 2371 (device log contains only the most recent five errors) CR = Command Register [HEX] FR = Features Register [HEX] SC = Sector Count Register [HEX] SN = Sector Number Register [HEX] CL = Cylinder Low Register [HEX] CH = Cylinder High Register [HEX] DH = Device/Head Register [HEX] DC = Device Command Register [HEX] ER = Error register [HEX] ST = Status register [HEX] Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days. Error 2371 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:03:10.061 READ DMA f8 00 00 00 00 00 e0 00 00:03:10.061 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:03:10.053 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:03:10.053 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:03:10.053 READ NATIVE MAX ADDRESS Error 2370 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:03:03.328 READ DMA f8 00 00 00 00 00 e0 00 00:03:03.327 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:03:03.320 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:03:03.319 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:03:03.319 READ NATIVE MAX ADDRESS Error 2369 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:02:56.582 READ DMA f8 00 00 00 00 00 e0 00 00:02:56.582 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:02:56.574 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:02:56.574 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:02:56.574 READ NATIVE MAX ADDRESS Error 2368 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:02:49.809 READ DMA f8 00 00 00 00 00 e0 00 00:02:49.809 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:02:49.801 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:02:49.801 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:02:49.801 READ NATIVE MAX ADDRESS Error 2367 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:02:43.056 READ DMA f8 00 00 00 00 00 e0 00 00:02:43.056 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:02:43.048 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:02:43.048 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:02:43.047 READ NATIVE MAX ADDRESS SMART Self-test log structure revision number 1 No self-tests have been logged. [To run self-tests, use: smartctl -t] Device does not support Selective Self Tests/Logging Do I need to get a new Hard Disk my PC ?

    Read the article

  • Ubuntu hard disk problem

    - by Henadzy
    Hello! I have got the error with a hard disk on Ubuntu 9.10. It slows down my system, applications have not been responding for a long time. But when I mount and use filesystem which placed on this hard disk at other computer it works properly. disk: SAMSUNG HD161HJ (SATA) syslog: Apr 25 00:28:25 vare6gin kernel: [ 885.773839] ata3.00: exception Emask 0x1 SAct 0x1e SErr 0x0 action 0x6 frozen Apr 25 00:28:25 vare6gin kernel: [ 885.773845] ata3.00: Ata error. fis:0x21 Apr 25 00:28:25 vare6gin kernel: [ 885.773861] ata3.00: cmd 60/08:08:3f:00:ad/00:00:10:00:00/40 tag 1 ncq 4096 in Apr 25 00:28:25 vare6gin kernel: [ 885.773864] res 51/40:24:67:c8:91/40:00:05:00:00/40 Emask 0x9 (media error) Apr 25 00:28:25 vare6gin kernel: [ 885.773871] ata3.00: status: { DRDY ERR } Apr 25 00:28:25 vare6gin kernel: [ 885.773877] ata3.00: error: { UNC } Apr 25 00:28:25 vare6gin kernel: [ 885.773890] ata3.00: cmd 60/18:10:9f:6b:ed/00:00:0e:00:00/40 tag 2 ncq 12288 in Apr 25 00:28:25 vare6gin kernel: [ 885.773893] res 51/40:24:67:c8:91/40:00:05:00:00/40 Emask 0x9 (media error) Apr 25 00:28:25 vare6gin kernel: [ 885.773900] ata3.00: status: { DRDY ERR } Apr 25 00:28:25 vare6gin kernel: [ 885.773904] ata3.00: error: { UNC } Apr 25 00:28:25 vare6gin kernel: [ 885.773918] ata3.00: cmd 60/08:18:3f:5f:ed/00:00:0e:00:00/40 tag 3 ncq 4096 in Apr 25 00:28:25 vare6gin kernel: [ 885.773921] res 51/40:24:67:c8:91/40:00:05:00:00/40 Emask 0x9 (media error) Apr 25 00:28:25 vare6gin kernel: [ 885.773927] ata3.00: status: { DRDY ERR } Apr 25 00:28:25 vare6gin kernel: [ 885.773932] ata3.00: error: { UNC } Apr 25 00:28:25 vare6gin kernel: [ 885.773946] ata3.00: cmd 60/08:20:67:c8:91/00:00:05:00:00/40 tag 4 ncq 4096 in Apr 25 00:28:25 vare6gin kernel: [ 885.773948] res 51/40:24:67:c8:91/40:00:05:00:00/40 Emask 0x9 (media error) Apr 25 00:28:25 vare6gin kernel: [ 885.773955] ata3.00: status: { DRDY ERR } Apr 25 00:28:25 vare6gin kernel: [ 885.773960] ata3.00: error: { UNC } Apr 25 00:28:25 vare6gin kernel: [ 885.773970] ata3: hard resetting link Apr 25 00:28:25 vare6gin kernel: [ 885.773974] ata3: nv: skipping hardreset on occupied port Apr 25 00:28:25 vare6gin kernel: [ 886.240073] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 25 00:28:25 vare6gin kernel: [ 886.256277] ata3.00: configured for UDMA/133 Apr 25 00:28:25 vare6gin kernel: [ 886.256305] ata3: EH complete Apr 25 00:28:27 vare6gin kernel: [ 888.176088] ata3: EH in SWNCQ mode,QC:qc_active 0xF sactive 0xF Apr 25 00:28:27 vare6gin kernel: [ 888.176099] ata3: SWNCQ:qc_active 0xF defer_bits 0x0 last_issue_tag 0x3 Apr 25 00:28:27 vare6gin kernel: [ 888.176102] dhfis 0xF dmafis 0x1 sdbfis 0x0 Apr 25 00:28:27 vare6gin kernel: [ 888.176109] ata3: ATA_REG 0x51 ERR_REG 0x40 Apr 25 00:28:27 vare6gin kernel: [ 888.176113] ata3: tag : dhfis dmafis sdbfis sacitve Apr 25 00:28:27 vare6gin kernel: [ 888.176120] ata3: tag 0x0: 1 1 0 1 Apr 25 00:28:27 vare6gin kernel: [ 888.176126] ata3: tag 0x1: 1 0 0 1 Apr 25 00:28:27 vare6gin kernel: [ 888.176131] ata3: tag 0x2: 1 0 0 1 Apr 25 00:28:27 vare6gin kernel: [ 888.176136] ata3: tag 0x3: 1 0 0 1

    Read the article

  • Accessing Network Printers from a Citrix Session:

    - by Harry
    We have an application that uses Active Reports documents. You pass a document the UNC of the printer and away it goes. We have a group that runs this application within a Citrix session; the truly networked printers function perfectly, but shared printers that work well outside of Citrix become unreachable. Printers do not need to be defined on the machine running the report for the system to work. There is something in the way Citrix passes the information to the destination UNC that I don't understand.

    Read the article

  • Web Application Problems (web.config errors) HTTP 500.19 with IIS7.5 and ASP.NET v2

    - by Django Reinhardt
    This is driving the whole team crazy. There must be some simple mis-configured part of IIS or our Web Server, but every time we try to run out ASP.NET Web Application on IIS 7.5 we get the following error... Here's the error in full: HTTP Error 500.19 - Internal Server Error The requested page cannot be accessed because the related configuration data for the page is invalid. `Detailed Error Information` Module IIS Web Core Notification Unknown Handler Not yet determined Error Code 0x8007000d Config Error Config File \\?\E:\wwwroot\web.config Requested URL http://localhost:80/Default.aspx Physical Path Logon Method Not yet determined Logon User Not yet determined Config Source -1: 0: The machine is running Windows Server 2008 R2. We're developing our Web Application using Visual Studio 2008. According to Microsoft the code 8007000d means there's a syntax error in our web.config -- except the project builds and runs fine locally. Looking at the web.config in XML Notepad doesn't bring up any syntax errors, either. I'm assuming it must be some sort of poor configuration on my part...? Does anyone know where I might find further information about the error? Nothing is showing in EventViewer, either :( Not sure what else would be helpful to mention... Assistance is greatly appreciated. Thanks! UPDATES! - POSTED WEB.CONFIG BELOW Ok, since I posted the original question above, I've tracked down the precise lines in the web.config that were causing the error. Here are the lines (they appear between <System.webServer> tags)... <httpHandlers> <remove verb="*" path="*.asmx"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> </httpHandlers> Note: If I delete the lines between the <httpHandlers> I STILL get the error. I literally have to delete <httpHandlers> (and the lines inbetween) to stop getting the above error. Once I've done this I get a new 500.19 error, however. Thankfully, this time IIS actually tells me which bit of the web.config is causing a problem... <handlers> <remove name="WebServiceHandlerFactory-Integrated"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory,System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> </handlers> Looking at these lines it's clear the problem has migrated further within the same <system.webServer> tag to the <handlers> tag. The new error is also more explicit and specifically complains that it doesn't recognize the attribute "validate" (as seen on the third line above). Removing this attribute then makes it complain that the same line doesn't have the required "name" attribute. Adding this attribute then brings up ASP.NET error... Could not load file or assembly 'System.web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56' or one of its dependencies. The system cannot find the file specified. 
Obviously I think these new errors have just arisen from me deleting the <httpHandlers> tags in the first place -- they're obviously needed by the application -- so the question remains: Why would these tags kick up an error in IIS in the first place??? Do I need to install something to IIS to make it work with them? Thanks again for any help. WEB.CONFIG Here's the troublesome bits of our web.Config... I hope this helps someone find our problem! <system.Web> <!-- stuff cut out --> <httpHandlers> <remove verb="*" path="*.asmx"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56" validate="false"/> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> </httpModules> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules> <add name="ScriptModule" preCondition="integratedMode" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> </modules> <remove verb="*" path="*.asmx"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> <handlers> <remove name="WebServiceHandlerFactory-Integrated"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory,System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=f2cb5667dc123a56"/> </handlers> </system.webServer>

    Read the article

  • How to import the parent module in GAE Python

    - by zjm1126
    main:.
    +-a
    ¦ +-__init__.py
    ¦ +-aa.py
    +-b
    ¦ +-__init__.py
    ¦ +-bb.py
    +-cc.py
    If I am in aa.py, how do I import cc.py? This is my code, but it gives an error:
    from main import cc
    What should I do? Thanks.
    Update: in a normal Python file (not on GAE), I can use this code:
    import os,sys
    dirname=os.path.dirname
    path=os.path.join(dirname(dirname(__file__)))
    sys.path.insert(0,path)
    import cc
    print cc.c
    but on GAE it shows the error: ImportError: No module named cc
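    For reference, a cleaned-up version of the sys.path approach (a sketch; it assumes __file__ is set for aa.py and that cc.py sits one directory above the package containing it):

    ```python
    # aa.py -- make the directory that contains cc.py importable, then import it.
    import os
    import sys

    # /.../main/a/aa.py -> /.../main
    parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    if parent_dir not in sys.path:
        sys.path.insert(0, parent_dir)

    import cc
    print(cc.c)   # assumes cc.py defines a module-level name `c`, as in the question
    ```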

    Read the article

  • Java - Problem with the classpath on Eclipse.

    - by Amokrane
    I'm trying to recompile a project I've been working on, and I keep getting an error message when trying to load a property file: "The system cannot find the path specified." I guess this has to do with the classpath, but I've added the path to the file in Properties > Java Build Path > Libraries (external class). I also checked the .classpath file generated by Eclipse, and the path is really there! Why isn't Eclipse looking at the right path?

    Read the article

  • How to call my method from a Google App Engine template

    - by zjm1126
    The model is:
    class someModel(db.Model):
        name = db.StringProperty()
        def name_is_sss(self):
            return self.name=='sss'
    The view is:
    a=someModel()
    a.name='sss'
    path = os.path.join(os.path.dirname(__file__), os.path.join('templates', 'blog/a.html'))
    self.response.out.write(template.render(path, {'a':a}))
    and the HTML is:
    {{ a.name_is_sss }}
    The page shows: True
    So I want to make it more useful, like this. The model:
    class someModel(db.Model):
        name = db.StringProperty()
        def name_is_x(self,x):
            return self.name==x
    The HTML is:
    {% a.name_is_x 'www' %} or {{ a.name_is_x 'www' }}
    but the error is:
    TemplateSyntaxError: Invalid block tag: 'a.name_is_x'
    or
    TemplateSyntaxError: Could not parse the remainder: 'www'
    So how do I make my method work? Thanks.
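    As far as I can tell, a plain {{ a.method }} lookup cannot take arguments, so one option may be a custom template filter. A sketch assuming a standard Django template setup (App Engine's bundled template module needs its own registration step, but the filter body would be the same):

    ```python
    # templatetags/model_extras.py (hypothetical module name)
    # Lets a template pass an argument to the check: {{ a|name_is:"www" }}
    from django import template

    register = template.Library()

    @register.filter
    def name_is(obj, value):
        return obj.name == value
    ```

    The template would then use {% load model_extras %} and {{ a|name_is:"www" }} instead of calling the method directly.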

    Read the article

  • Why does my json_encode output get corrupted

    - by Cullen SUN
    $model = new XUploadForm; $model->file = CUploadedFile::getInstance( $model, 'file' ); //We check that the file was successfully uploaded if( $model->file !== null ) { //Grab some data $model->mime_type = $model->file->getType( ); $model->size = $model->file->getSize( ); $model->name = $model->file->getName( ); $file_extention = $model->file->getExtensionName( ); //(optional) Generate a random name for our file $file_tem_name = md5(Yii::app( )->user->id.microtime( ).$model->name); $file_thumb_name = $file_tem_name.'_thumb.'.$file_extention; $file_image_name = $file_tem_name.".".$file_extention; if( $model->validate( ) ) { //Move our file to our temporary dir $model->file->saveAs( $path.$file_image_name ); if(chmod($path.$file_image_name, 0777 )){ // Yii::import("ext.EPhpThumb.EPhpThumb"); // $thumb_=new EPhpThumb(); // $thumb_->init(); // $thumb_->create($path.$file_image_name) // ->resize(110,80) // ->save($path.$file_thumb_name); } //here you can also generate the image versions you need //using something like PHPThumb //Now we need to save this path to the user's session if( Yii::app( )->user->hasState( 'images' ) ) { $userImages = Yii::app( )->user->getState( 'images' ); } else { $userImages = array(); } $userImages[] = array( "filename" => $file_image_name, 'size' => $model->size, 'mime' => $model->mime_type, "path" => $path.$file_image_name, // "thumb" => $path.$file_thumb_name, ); Yii::app( )->user->setState('images', $userImages); //Now we need to tell our widget that the upload was succesfull //We do so, using the json structure defined in // https://github.com/blueimp/jQuery-File-Upload/wiki/Setup echo json_encode( array( array( "type" => $model->mime_type, "size" => $model->size, "url" => $publicPath.$file_image_name, //"thumbnail_url" => $publicPath.$file_thumb_name, //"thumbnail_url" => $publicPath."thumbs/$filename", "delete_url" => $this->createUrl( "upload", array( "_method" => "delete", "file" => $file_image_name ) ), "delete_type" => "POST" ) ) ); Above code give me correct response, [{"type":"image/jpeg","size":2266,"url":"/uploads/tmp/0b00cbaee07c6410241428c74aae1dca.jpeg","delete_url":"/api/imageUpload/upload?_method=delete&file=0b00cbaee07c6410241428c74aae1dca.jpeg","delete_type":"POST"}] but if I uncomment the following // Yii::import("ext.EPhpThumb.EPhpThumb"); // $thumb_=new EPhpThumb(); // $thumb_->init(); // $thumb_->create($path.$file_image_name) // ->resize(110,80) // ->save($path.$file_thumb_name); it gave me corrupted response: Mac OS X 2??ATTR?dA??Y?Ycom.apple.quarantine0001;50655994;Google\x20Chrome.app;2599ECF9-69C5-4386-B3D9-9F5CC7E0EE1D|com.google.ChromeThis resource fork intentionally left blank ??[{"type":"image/jpeg","size":1941,"url":"/uploads/tmp/409c5921c6d20944e1a81f32b12fc380.jpeg","delete_url":"/api/imageUpload/upload?_method=delete&file=409c5921c6d20944e1a81f32b12fc380.jpeg","delete_type":"POST"}]

    Read the article

  • check if directory exists c#

    - by Ant
    I am trying to see if a directory exists based on an input field from the user. When the user types in the path, I want to check if the path actually exists. I have some c# code already. It returns 1 for any local path, but always returns 0 when I am checking a network path. static string checkValidPath(string path) { //Insert your code that runs under the security context of the authenticating user here. using (ImpersonateUser user = new ImpersonateUser(user, "", password)) { //DirectoryInfo d = new DirectoryInfo(quotelessPath); bool doesExist = Directory.Exists(path); //if (d.Exists) if(doesExist) { user.Dispose(); return "1"; } else { user.Dispose(); return "0"; } } } public class ImpersonateUser : IDisposable { [DllImport("advapi32.dll", SetLastError = true)] private static extern bool LogonUser(string lpszUsername, string lpszDomain, string lpszPassword, int dwLogonType, int dwLogonProvider, out IntPtr phToken); [DllImport("kernel32", SetLastError = true)] private static extern bool CloseHandle(IntPtr hObject); private IntPtr userHandle = IntPtr.Zero; private WindowsImpersonationContext impersonationContext; public ImpersonateUser(string user, string domain, string password) { if (!string.IsNullOrEmpty(user)) { // Call LogonUser to get a token for the user bool loggedOn = LogonUser(user, domain, password, 9 /*(int)LogonType.LOGON32_LOGON_NEW_CREDENTIALS*/, 3 /*(int)LogonProvider.LOGON32_PROVIDER_WINNT50*/, out userHandle); if (!loggedOn) throw new Win32Exception(Marshal.GetLastWin32Error()); // Begin impersonating the user impersonationContext = WindowsIdentity.Impersonate(userHandle); } } public void Dispose() { if (userHandle != IntPtr.Zero) CloseHandle(userHandle); if (impersonationContext != null) impersonationContext.Undo(); } } Any help is appreciated. Thanks! EDIT 3: updated code to use BrokenGlass's impersonation functions. However, I need to initialize "password" to something... EDIT 2: I updated the code to try and use impersonation as suggested below. It still fails everytime. I assume I am using impersonation improperly... EDIT: As requested by ChrisF, here is the function that calls the checkValidPath function. Frontend aspx file... $.get('processor.ashx', { a: '7', path: x }, function(o) { alert(o); if (o=="0") { $("#outputPathDivValid").dialog({ title: 'Output Path is not valid! Please enter a path that exists!', width: 500, modal: true, resizable: false, buttons: { 'Close': function() { $(this).dialog('close'); } } }); } }); Backend ashx file... public void ProcessRequest (HttpContext context) { context.Response.Cache.SetExpires(DateTime.Now); string sSid = context.Request["sid"]; switch (context.Request["a"]) {//a bunch of case statements here... case "7": context.Response.Write(checkValidPath(context.Request["path"].ToString())); break;

    Read the article

  • organizing external libraries and include files

    - by stijn
    Over the years my projects use more and more external libraries, and the way I did it starts feeling more and more awkward (although, that has to be said, it does work flawlessly). I use VS on Windows, CMake on others, and CodeComposer for targetting DSPs on Windows. Except for the DSPs, both 32bit and 64bit platforms are used. Here's a sample of what I am doing now; note that as shown, the different external libraries themselves are not always organized in the same way. Some have different lib/include/src folders, others have a single src folder. Some came ready-to-use with static and/or shared libraries, others were built /path/to/projects /projectA /projectB /path/to/apis /apiA /src /include /lib /apiB /include /i386/lib /amd64/lib /path/to/otherapis /apiC /src /path/to/sharedlibs /apiA_x86.lib -->some libs were built in all possible configurations /apiA_x86d.lib /apiA_x64.lib /apiA_x64d.lib /apiA_static_x86.lib /apiB.lib -->other libs have just one import library /path/to/dlls -->most of this directory also gets distributed to clients /apiA_x86.dll and it's in the PATH /apiB.dll Each time I add an external libary, I roughly use this process: build it, if needed, for different configurations (release/debug/platform) copy it's static and/or import libraries to 'sharedlibs' copy it's shared libraries to 'dlls' add an environment variable, eg 'API_A_DIR' that points to the root for ApiA, like '/path/to/apis/apiA' create a VS property sheet and a CMake file to state include path and eventually the library name, like include = '$(API_A_DIR)/Include' and lib = apiA.lib add the propertysheet/cmake file to the project needing the library It's especially step 4 and 5 that are bothering me. I am pretty sure I am not the only one facing this problem, and would like see how others deal with this. I was thinking to get rid of the environment variables per library, and use just one 'API_INCLUDE_DIR' and populating it with the include files in an organized way: /path/to/api/include /apiA /apiB /apiC This way I do not need the include path in the propertysheets nor the environment variables. For libs that are only used on windows I even don't need a propertysheet at all as I can use #pragmas to instruct the linker what library to link to. Also in the code it will be more clear what gets included, and no need for wrappers to include files having the same name but are from different libraries: #include <apiA/header.h> #include <apiB/header.h> #include <apiC_version1/header.h> The withdrawal is off course that I have to copy include files, and possibly** introduce duplicates on the filesystem, but that looks like a minor price to pay, doesn't it? ** actually once libraries are built, the only thing I need from them is the include files and thie libs. Since each of those would have a dedicated directory, the original source tree is not needed anymore so can be deleted..
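    Populating that single include directory could then be a small post-build step, for example along these lines (a sketch; the paths and library names are made up):

    ```python
    # Copy each library's public headers into one shared include root, so projects
    # only ever add API_INCLUDE_DIR to their include path and can write
    # #include <apiA/header.h> style includes.
    import os
    import shutil

    API_INCLUDE_DIR = "/path/to/api/include"            # hypothetical destination
    LIBRARIES = {                                        # hypothetical sources
        "apiA": "/path/to/apis/apiA/include",
        "apiB": "/path/to/apis/apiB/include",
    }

    for name, include_dir in LIBRARIES.items():
        destination = os.path.join(API_INCLUDE_DIR, name)
        if os.path.isdir(destination):
            shutil.rmtree(destination)                   # refresh any stale copy
        shutil.copytree(include_dir, destination)
    ```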

    Read the article
