Search Results

Search found 3168 results on 127 pages for 'directories'.

Page 66 of 127

  • Overcrowded Windows XP Folders

    - by BlairHippo
    I know that, technically, an individual Windows XP directory can hold an immense number of files (over 4.29 billion, according to a quick Google search). However, is there a practical ceiling where too many files in one directory start having an impact on reads of those files? If so, what factors would exacerbate or help the issue? I ask because my employer has several hundred XP machines in the field at client sites, and the performance on some of the older ones is getting "sludgy." The machines download and display client-defined images, and my supervisor and I suspect that our slacktastic approach to cache management could be to blame. (Some of the directories have tens of thousands of images in them.) I'm trying to gather evidence to support or contest the theory before spending time on a coding fix.

  • How to make FileZilla open all the required files with one click

    - by Omar Tariq
    Is there any way of configuring FileZilla so that I can open all the files I edit on a server with just one click? For example, if the files are:

        /home/abc/def/one.txt
        /home/abc/def/yet/another/directory/two.txt
        /home/abc/def/ghi/yet/another/directory/three.txt

    then it is very time-consuming to navigate through each directory and open the required files. These are only 3 files, but what if we have around 10 to 20? Yes, copying the paths of the directories is one option, but something built in, say a button like "open all the required files of this connection" that opens every file in the editor (as set in FileZilla preferences), would be great!
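
    A possible command-line workaround, outside FileZilla itself, is to pull all the files down in one shot with sftp's batch mode and open them locally. This is only a sketch; the batch file and the user@server address are assumptions based on the paths above:

        # fetch.batch lists one get per file to open (paths from the question)
        printf '%s\n' \
            'get /home/abc/def/one.txt' \
            'get /home/abc/def/yet/another/directory/two.txt' \
            'get /home/abc/def/ghi/yet/another/directory/three.txt' > fetch.batch
        sftp -b fetch.batch user@server    # runs every get in a single connection
        $EDITOR one.txt two.txt three.txt  # then open the local copies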

  • How to copy files pointed to (not the shortcut files themselves)

    - by Ivo Bosticky
    I have a folder containing shortcuts that point to files located in various directories and drives. I would like to copy the files pointed to (NOT the shortcut files themselves) to a single destination folder. Is there a way in Windows (XP, Vista, 7), a file manager, or some utility I can use to do this? I've heard you can do this with various multi-step custom scripts. However, I've also heard rumors that there is a one-click way to do it without having to fabricate a custom script each time: regardless of where the shortcuts point, I select the group of shortcuts and do a copy operation that grabs the files they point to, then paste or otherwise put the actual files (not the shortcuts) into one directory. It would be very time-consuming to manually find each file pointed to by a shortcut and copy them to the target folder one by one. Note that I've seen this question asked before on the internet but haven't seen a good answer.

  • Adding git branch to bash prompt on snow leopard

    - by crayment
    I am using this in my prompt:

        $(__git_ps1 '(%s)')

    It works; however, it does not update when I change directories or check out a new branch. I also have this alias:

        alias reload='. ~/.bash_profile'

    Sample run:

        user@machine:~/dev/rails$ cd git_folder/
        user@machine:~/dev/rails/git_folder$ reload
        user@machine:~/dev/rails/git_folder(test)$ git checkout master
        Switched to branch 'master'
        user@machine:~/dev/rails/git_folder(test)$ reload
        user@machine:~/dev/rails/git_folder(master)$

    As you can see, it is being set correctly, but only if I reload bash_profile. I have wasted way too much time on this. I am using bash on Snow Leopard. Please help!
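
    For what it's worth, a common cause of this symptom is the command substitution being expanded once, at the time PS1 is assigned (e.g. inside double quotes), rather than at every prompt. A minimal sketch of a prompt that re-evaluates the branch each time, assuming git's prompt script is already sourced (the source path below is a placeholder):

        # ~/.bash_profile (sketch)
        source ~/git-completion.bash            # placeholder path; provides __git_ps1
        PS1='\u@\h:\w$(__git_ps1 "(%s)")\$ '    # single quotes defer expansion to prompt time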

  • Can't delete a directory on external drive (OS X)

    - by Martin Tóth
    I have a brand new Transcend StoreJet 25M3 (external HDD) mounted to a MacBook (Leopard 10.5.8) at /Volumes/Transcend. I copied some data from my old Windows (XP) machine onto it, and now, after cleaning some stuff up, I wanted to delete some directories, but this is what happened:

        $ rmdir My\ Pictures/
        rmdir: My Pictures/: Operation not permitted

    Using Finder just asks for a password but does not delete the directory (the "moved to Trash" sound is played). I thought it was some permission "thing", but:

        $ ls -l
        drwxrwxrwx  1 martin  staff  32768  5 jan 16:11 My Pictures/
        $ sudo rm -rf My\ Pictures
        rm: My Pictures: Operation not permitted

    I re-mounted and rebooted (thinking there was some file lock), but that did not help. What might have happened here? How do I delete it?
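
    One possible culprit, offered only as a guess, is a BSD file flag (such as uchg, the user-immutable flag) that survived the copy; OS X refuses to remove files carrying that flag even as root until it is cleared. A quick sketch for checking and clearing it:

        ls -ldO "My Pictures"              # capital O lists BSD flags; look for uchg/schg
        chflags -R nouchg "My Pictures"    # clear the user-immutable flag recursively
        rm -rf "My Pictures"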

  • Is there any way to use arrays in a puppet module (not in template)?

    - by KARASZI István
    I want to use Puppet to manage a Hadoop cluster. On the machines we have several directories which must be created with the right permissions, but I'm unable to pass array values to defined types.

        define hdfs_site( $dirs ) {
            file { $dirs:
                ensure => directory,
                owner  => "hadoop",
                group  => "hadoop",
                mode   => 755;
            }
            file { "/opt/hadoop/conf/hdfs-site.xml":
                content => template("hdfs-site.xml.erb"),
                owner   => "root",
                group   => "root",
                mode    => 644;
            }
        }

        define hadoop_slave( $mem, $cpu, $dirs ) {
            hadoop_base { mem => $mem, cpu => $cpu, }
            hdfs_site { dirs => $dirs, }
        }

    hadoop_base is similar to hdfs_site. Thanks!

  • Persistent data in Tor Browser Bundle?

    - by Snesticle
    What sort of persistent data is generated by the bundled Tor? I recently did an experiment using the Tor Browser Bundle for GNU/Linux. I created two directories, A and B, and placed an identical copy of Tor in each one. Next I placed a simple Python script in directory A that both launched the vidalia package and, when exiting the network, deleted the entire contents of A (with the exception of itself) and rebuilt the bundle from the original archive. What surprises me is that after about ten hours of browsing each, A and B now show a distinct difference in startup time. Also curious is that I get a message in the log of B that never shows up in A:

        new control connection open

    which is a notice-level advisory. This has nothing to do with what I was originally testing, but now I'm interested in what exactly is going on. By the way, I do not have to rely on Tor for my personal safety, as many are forced to do, so even if you just have a hunch I'd be interested in hearing it.

  • Postfix tutorial inconsistency

    - by Desmond Hume
    I'm following this tutorial to set up a Postfix/Dovecot mail server with Postfix Admin as a web front end. As regards the directory structure for virtual mail users, the author of the tutorial writes: "Virtual mail users are those that do not exist as Unix system users. They thus don't use the standard Unix methods of authentication or mail delivery and don't have home directories. That is how we are managing things here: mail users are defined in the database created by Postfix Admin rather than existing as system users. Mail will be kept in subfolders per domain and account under /var/vmail - e.g. me@example.com will have a mail directory of /var/vmail/example.com/me." But when he gives instructions about configuring Postfix Admin, he suggests this should go in Postfix Admin's config.inc.php:

        // Mailboxes
        // If you want to store the mailboxes per domain set this to 'YES'.
        // Examples:
        //   YES: /usr/local/virtual/domain.tld/user@domain.tld
        //   NO:  /usr/local/virtual/user@domain.tld
        $CONF['domain_path'] = 'NO';

    Is there an inconsistency?

  • I need a few minutes of dedicated server time a week, but not for hosting, just to convert mp3 to ogg etc.

    - by talkingnews
    I'm completely happy with my web hosting; it's just that I need to do one little thing they won't allow, and that's run an instance of SoX to convert about 30 mp3s to ogg files, in various directories, a couple of times a week, done automatically in response to the detection of an mp3 upload. I'm probably looking at a minute of server time over the whole week. I've had unhelpful suggestions on other forums like "why not leave your home PC on 24 hours a day and then use all your ISP bandwidth to do this", which doesn't work for me. I know that I can host files on, say, Amazon S3, but is there something similar for my needs? All it would need to do would be: wget/ftp the mp3 files, convert them to ogg, ftp the files back to my hosting. Of course, none of this would be needed if there were such a thing as a compiled binary of SoX (or any mp3-to-ogg converter) for CentOS which I could upload without needing root access, but I've given up asking for that one. Always open to suggestions, though!
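
    For illustration, the whole weekly job described above amounts to something like the sketch below. Host names, paths and credentials are placeholders, and it assumes a SoX build with MP3 and Vorbis support:

        #!/bin/sh
        # fetch the mp3s from the web host (placeholder URL and credentials)
        wget -r -nH -A '*.mp3' -P work/ ftp://user:pass@host.example/audio/
        # convert each one next to the original
        find work/ -name '*.mp3' | while read -r f; do
            sox "$f" "${f%.mp3}.ogg"
        done
        # push the oggs back (flat upload; preserving subdirectories would need more work)
        find work/ -name '*.ogg' -exec curl -T '{}' ftp://user:pass@host.example/audio/ \;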

  • Physically moving a hard drive from older iMac (c2d) to new iMac (i7) ?

    - by Inshim
    Instead of my usual habit of using SuperDuper to mirror my drive to a new computer, I just physically moved the hard drive from an older iMac to a new one. But... it now doesn't boot, getting stuck at the Apple logo screen. Since the hard drive that came with the new iMac works well, and my old drive works well when I return it to the older iMac, I conclude that there is some problem at the system/kernel level due to the different hardware. In the past I have done similar things (e.g. starting a C2D machine from a Core Duo in target disk mode), so perhaps the change in architecture to the i5/i7 is too problematic? The main point: do you know of any way to get the system to rebuild the proper versions of the system components for itself when booting? Are there certain directories that I can safely delete to make that happen? Thanks

  • Bash Shell Hangs on ?+Tab-complete

    - by michaelmichael
    I often use tab completion in Bash when completing directory names, but I find that it hangs for an unacceptable amount of time if I accidentally include a question mark in the path. I'd like to know why, and how to prevent it if possible. Here's the scenario: I start a command and use the ~ key to represent home:

        ls ~?Desktop/co

    Oops! I held down Shift for a split-second too long. I had intended for ? to be /. But (oh no!) muscle memory has already kicked in. I've hit Tab before I noticed the mistake. Now I'm stuck waiting for the shell to beep angrily at me, usually for a minute or two. What happened? Why did the question mark cause it to hang and eventually beep? Any way to stop it from hanging?

  • Sub-process /usr/bin/dpkg returned an error code (1)

    - by rohit
    Hey friends, I am getting the following error when I try to purge shorewall:

        root@aptosid:/etc# apt-get purge shorewall
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          shorewall*
        0 upgraded, 0 newly installed, 1 to remove and 3 not upgraded.
        1 not fully installed or removed.
        After this operation, 1,843 kB disk space will be freed.
        Do you want to continue [Y/n]?
        (Reading database ... 212702 files and directories currently installed.)
        Removing shorewall ...
        : not found/shorewall: 25: /etc/default/shorewall: :q
        Stopping "Shorewall firewall": not done (check /var/log/shorewall-init.log).
        invoke-rc.d: initscript shorewall, action "stop" failed.
        dpkg: error processing shorewall (--purge):
         subprocess installed pre-removal script returned error exit status 1
        configured to not write apport reports
        Errors were encountered while processing:
         shorewall
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        root@aptosid:/etc#

    Please help me out.
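
    Reading the transcript, the shell error points at line 25 of /etc/default/shorewall (the stray ":q" looks like an editor command accidentally saved into the file), which makes the pre-removal script fail. A sketch of how one might check that guess and retry:

        sed -n '25p' /etc/default/shorewall   # inspect the offending line
        # delete the stray ':q' (or fix whatever is on that line) in an editor, then:
        apt-get purge shorewall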

  • rsyslog from Heroku drain creates empty log files

    - by Jeff Lee
    I'm sending logs from my Heroku app to an rsyslog server, but the resulting log files seem to come up empty. The rsyslog configuration for receiving remote messages is as follows:

        $template RemoteDailyLog,"/var/log/remote/%hostname%/%$year%/%$month%/%$day%.log"
        :fromhost-ip, !isequal, "127.0.0.1" -?RemoteDailyLog
        & ~

    My complete rsyslog configuration is available in this paste. This configuration appears to create the directories correctly. I see the Heroku app's logging hostname (of the form "d.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx") appear in /var/log on the rsyslog host, which implies that log messages are successfully making it to the logging daemon, but the resulting logfiles are zero-size. I'm guessing the issue is with rsyslog, rather than Heroku, but I'm not sure where to look next.
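
    One way to separate rsyslog problems from Heroku problems, offered only as a suggestion, is to hand-craft a syslog message from another machine and see whether it lands in the expected file. The host name, port and protocol below are assumptions; adjust them to match the listener defined in the full configuration:

        printf '<13>%s testhost testapp: hello from netcat\n' "$(date '+%b %e %H:%M:%S')" \
            | nc -u -w1 rsyslog.example.com 514    # drop -u if the drain listens on TCP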

  • WebDAV "PROPFIND" exception in IIS due to network share?

    - by jacko
    We keep finding the following exception in the event viewer on our live box (snippet):

        Process information:
          Process ID: 3916
          Process name: w3wp.exe
          Account name: NT AUTHORITY\NETWORK SERVICE

        Exception information:
          Exception type: HttpException
          Exception message: Path 'PROPFIND' is forbidden.

        Thread information:
          Thread ID: 14
          Thread account name: OURDOMAIN\Account
          Is impersonating: True
          Stack trace:
            at System.Web.HttpMethodNotAllowedHandler.ProcessRequest(HttpContext context)
            at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
            at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    Other specs: Windows Server 2003 R2 and IIS 6.0. We've narrowed it down to occurring when people try to access shares on the box from within the network, and have discovered (we think) that it's due to the WebDAV web service extension having been disabled by past staff. The exceptions are thrown when trying to access directories that are virtual dirs in IIS as well as plain old UNC network shares. What are the implications of enabling the WebDAV extension on our live web server? And will this solve our problems with the exceptions in our event log?

  • Problems while applying an svn patch to a mercurial repository

    - by user26453
    The patch file was made with TortoiseSVN's "Create Patch..." command. I am attempting to import the patch into the Mercurial repository using hg import patchfile. The problem I'm running into is that there seem to be problems with how hg looks for the files referenced in the patch file:

        unable to find 'gui/gui/RemoteFramework.cpp' for patching
        2 out of 2 hunks FAILED -- saving rejects to file gui/gui/RemoteFramwork.cpp.rej

    It seems to be an issue of where the patch was made in terms of directories and where it should be applied. I have tried playing with the --base option for hg import, but haven't gotten anywhere just yet. Anyone have any tips?
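
    Not a confirmed fix, but since the doubled gui/gui/ prefix suggests a path-depth mismatch between where the patch was created and the repository root, it may be worth importing from the repository root and experimenting with the strip level (hg import's -p/--strip option):

        cd /path/to/repo                        # run from the repository root
        hg import --no-commit -p0 patchfile     # try -p0 or -p1 until the paths resolve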

  • MSDeploy - possible to call setAcl on multiple destinations in one go?

    - by growse
    I'm building a nice little continuous integration environment for our development team, based on TeamCity. It's working rather nicely, as it can build a mix of .NET and PHP projects and push them to our internal and external platforms. I'm primarily using MSDeploy to push everything to the internal platform, as that's all IIS based. However, there are a number of builds where I need to set directory permissions on the destination directory. I can use the setAcl operator just fine, but it only seems to take a single destination as an argument. Therefore, if I need to alter the permissions on 5 destination directories, I need to call MSDeploy 5 times, which seems like a lot of overhead. Is there a sensible way around this? Reading the documentation, I don't think MSDeploy takes more than a single argument for the setAcl operator, but I could be wrong. Is there a better way for a build server to set multiple directory permissions in one go?

  • nginx 1.2.3 installed but remains at 1.1.19

    - by Nyxynyxx
    I've installed nginx 1.2.3 by adding a new PPA:

        sudo add-apt-repository ppa:nginx/stable
        sudo apt-get update
        sudo apt-get install nginx

    However, nginx -v still gives me 1.1.19. What happened? Output:

        The following packages will be upgraded:
          nginx
        1 upgraded, 0 newly installed, 0 to remove and 46 not upgraded.
        Need to get 61.8 kB of archives.
        After this operation, 3,072 B of additional disk space will be used.
        Get:1 http://ppa.launchpad.net/nginx/stable/ubuntu/ precise/main nginx all 1.2.3-0ubuntu0ppa3~precise [61.8 kB]
        Fetched 61.8 kB in 0s (89.7 kB/s)
        (Reading database ... 79914 files and directories currently installed.)
        Preparing to replace nginx 1.1.19-1 (using .../nginx_1.2.3-0ubuntu0ppa3~precise_all.deb) ...
        Unpacking replacement nginx ...
        Setting up nginx (1.2.3-0ubuntu0ppa3~precise) ...
        root@precise64:/var/www/apadment# nginx -v
        nginx version: nginx/1.1.19
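
    A few hedged checks that might narrow this down: the "nginx ... all" line and the tiny 61.8 kB download suggest that only an architecture-independent meta-package was upgraded, while the package that actually ships the binary may still be at 1.1.19 (that interpretation is a guess):

        dpkg -l 'nginx*'                   # list every installed nginx-* package and its version
        which nginx && nginx -V            # confirm which binary is on PATH and its build info
        sudo apt-get install nginx-full    # assumption: upgrade the flavour that provides the binary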

  • Sharing storage on Linux and Solaris

    - by devlearn
    I'm looking for a solution for sharing a SAN-mounted volume between several hosts running Linux (RHEL) and/or Solaris (SPARC). Note that I basically need to share a set of directories containing large binary files that are accessed in random R/W mode. I have the following requirements:

    - keep the data on the SAN
    - suitable I/O performance, as the software is pretty demanding on IOPS
    - stick to a shared file system, as I can't afford a cluster FS (lack of MDS/OSS infrastructure)
    - compression could be really useful

    For now I've found only the following candidates:

    - GFS2: supports Linux only, no compression
    - VxFS: supports Linux and Solaris, compression supported

    So if you have any suggestions for this list, I'll really welcome them. Thanks in advance.

  • Blocking a specific URL by IP (a URL created by mod-rewrite)

    - by Alex
    We need to block a specific URL for anyone not on a local IP (anyone without a 192.168.x.x address). We however cannot use Apache's

        <Directory /var/www/foo/bar>
            Order allow,deny
            Allow from 192.168
        </Directory>

        <Files /var/www/foo/bar>
            Order allow,deny
            Allow from 192.168
        </Files>

    because these would block specific files or directories, whereas we need to block a specific URL that is created by mod_rewrite, with the page dynamically generated by PHP. Any ideas would be greatly appreciated.

  • Ubuntu + latest Samba version, symlinks no longer work on share mounted in Windows

    - by Roy Rico
    I just apt-getted (apt-got?) the latest software for my Ubuntu 9.10 Linux box, and I noticed that Samba was included in the update. After the install, the symlinks in my home directory no longer work when mounted as a drive in my Linux box. They worked literally seconds before I did the update. All my normal directories work just fine. Viewing the directory listing on the command line, all the files, dirs and links have the exact same permissions, yet this is the error I get:

        Location is not available
        L:\LinkDir is not accessible. Access is denied.

    I looked on the forums, and I saw this option for smb.conf:

        follow symlinks = yes
        wide symlinks = yes
        unix extensions = no

    I put those in, but they had no effect. Has anyone had this problem yet?

  • Create 8.3 name for an existing directory

    - by Chris Karcher
    I have a machine that initially had 8.3 filename creation disabled. However, this was causing issues with some legacy software, so it was re-enabled. I'm wondering if it's possible to go back and "add" 8.3 filenames to certain existing directories. For example, say I have a directory named "C:\name with spaces" and I get the following output when I run "dir /x":

        C:\>dir /x
         Volume in drive C has no label.
         Volume Serial Number is 6873-65B8

         Directory of C:\

        04/09/2010  01:57 PM    <DIR>                       name with spaces
        ...

    I'd like to somehow add an 8.3 name for the directory without recreating it, and then get the following:

        C:\>dir /x
         Volume in drive C has no label.
         Volume Serial Number is 6873-65B8

         Directory of C:\

        04/09/2010  01:57 PM    <DIR>          NAMEWI~1     name with spaces
        ...

    I tried the 'rename' command but it didn't do the trick.

  • Preparing a new physical system with VMWare

    - by Max
    I need to create a new installation of Windows, but at the same time I need this computer. So I decided to create a new physical disk from within VMware, install Windows/drivers/software, and then just swap the HDD into the computer. I've bought a new HDD, split it into two partitions, and installed Windows 7 using VMware's ability to use physical disks. I can see the Windows files and directories that have been created on this partition, but when I put the HDD into the host machine it cannot boot from it. Why is that? Is it at all possible to create a bootable physical disk with VMware, or should I create a virtual disk and then use some HDD imaging tool to copy the image to a physical disk? Maybe there's a better way of installing a new system while working on the computer at the same time?

  • Can't connect using Jail SFTP account

    - by Fazal
    I've been following the tutorial "Limiting Access with SFTP Jails on Debian and Ubuntu", and whilst I've had no errors setting it up, I've had issues on Ubuntu 10.04 LTS logging in as a user on a virtual host. I've changed my SSH port to 22022 and enter all the credentials when attempting to log in. I ran these commands to add a user to the virtual host:

        # useradd -d /srv/www/[domain] [username]
        # passwd [username]
        # usermod -G filetransfer [username]
        # chown [username]:[username] /srv/www/[domain]/public_html

    I should add that this is the only place I've set this user up; they have no other /home directory or such. The directory that does exist is at /srv/www/example.com/public_html. When I try using a desktop client such as Cyberduck to log in to the site, I keep getting a "Login failed with this username or password" error. I am completely lost as to what to do next... The reason I'm trying this method is that I want my clients to use SFTP and not FTP to upload files to their websites. Any help or direction is appreciated.
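
    One low-cost way to get more detail than the client's generic failure message, offered as a suggestion, is to watch sshd's log on the server during a login attempt; and if the jail uses OpenSSH's ChrootDirectory, note that sshd requires the chroot path and every directory above it to be owned by root and not group-writable, or it rejects the login:

        tail -f /var/log/auth.log               # watch sshd's messages while attempting to log in
        ls -ld /srv /srv/www /srv/www/[domain]  # with ChrootDirectory, each component must be root-owned, not group-writable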

  • Accurate Windows equivalent of the Unix which(1) command

    - by SamB
    It's easy enough to write a simple script that works like the which(1) command from unix, which searches for a given command along the PATH. Unfortunately, the CreateProcess function is not so simple, so this type of script does not give accurate results: CreateProcess looks in a number of directories not in the PATH, looks for files with all of the extensions listed in PATHEXT, etc. Worse, who knows what might be added in future versions of Windows? Anyway, my question is: is there a robust, accurate which(1) equivalent for Windows, which always tells you what file CreateProcess would find?

  • htaccess rewrite and auth conflict

    - by Michael
    I have 2 directories, each with a .htaccess file. html/.htaccess has a rewrite in it that sends almost everything to url.php:

        RewriteCond %{REQUEST_URI} !(exported/?|\.(php|gif|jpe?g|png|css|js|pdf|doc|xml|ico))$
        RewriteRule (.*)$ /url.php [L]

    and html/exported/.htaccess:

        AuthType Basic
        AuthName "exported"
        AuthUserFile "/home/siteuser/.htpasswd"
        require valid-user

    If I remove html/exported/.htaccess, the rewriting works fine and the exported directory can be accessed. If I remove html/.htaccess, the authentication works fine. However, when I have both .htaccess files, exported/ is being rewritten to /url.php. Any ideas how I can prevent that?
