Search Results

Search found 46908 results on 1877 pages for 'managing files and folder'.

Page 322/1877 | < Previous Page | 318 319 320 321 322 323 324 325 326 327 328 329  | Next Page >

  • What's the easiest way to 'cat' groups of files together?

    - by rajitha
    I have files named following this pattern:

      bond_7.LEU.CA.1.dat
      bond_7.LEU.CA.2.dat
      bond_7.LEU.CA.3.dat
      bond_12.ALA.CB.1.dat
      bond_12.ALA.CB.2.dat
      bond_12.ALA.CB.3.dat
      ...

    I want to concatenate all files of the same group into a single file. For example:

      cat bond_7.LEU.CA.*.dat > ../bondvalues/bond_7.LEU.CA.1_3.dat

    There is a large number of these files. How can I achieve this with a bash script?
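
    A minimal sketch, assuming bash and file names without spaces: derive each group prefix (e.g. bond_7.LEU.CA) from the file names, then concatenate that group's numbered parts. The ".all.dat" output name is a hypothetical choice; adjust it to your own naming convention.

      for prefix in $(ls bond_*.dat | sed 's/\.[0-9]*\.dat$//' | sort -u); do
          # each bond_X.RES.ATOM group becomes one concatenated file in ../bondvalues
          cat "$prefix".*.dat > "../bondvalues/${prefix}.all.dat"
      done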

    Read the article

  • Index a low-cost NAS on Windows 7

    - by JcMaco
    Has anyone found a way to index files stored on network-attached storage (NAS) under Windows 7, so that the files are available in Windows Search and Libraries? I am referring to cheap, readily available NAS devices like the Western Digital My Book series, which run an embedded Linux server. Similar question: http://windows7forums.com/windows-7-networking/6700-indexing-nas-drive-libraries.html

    EDIT: Windows Help proposes making the files stored on the NAS available offline. This is obviously not a good solution if the NAS holds more data than the client can store:

      "If the folder is on a network device that is not part of your homegroup, it can be included as long as the content of the folder is indexed. If the folder is already indexed on the device where it is stored, you should be able to include it directly in the library. If the network folder is not indexed, an easy way to index it is to make the folder available offline. This will create offline versions of the files in the folder, and add these files to the index on your computer. Once you make a folder available offline, you can include it in a library. When you make a network folder available offline, copies of all the files in that folder will be stored on your computer's hard disk. Take this into consideration if the network folder contains a large number of files."

    Read the article

  • Where do Outlook folders go when moved?

    - by balexandre
    I have an account with external user mailboxes opened, and I accidentally moved a folder; now I can't find it anywhere.

    Action: I clicked on a folder and dragged it into another one.
    Result: the moved folder is nowhere to be found.

    A screenshot (not reproduced here) shows the folders I currently have in Outlook 2010 (via Exchange 2010), under an AD network. How can I, having admin rights over the network, retrieve the missing folder?

    Attempts: the folder I actually need was moved accidentally, but I created a test folder named "poi" and dragged it the same way, with the same result: the folder went missing. I also rebooted the client machine and accessed the same mailbox from OWA, with no luck on either attempt. Any ideas on how I can retrieve the missing folder and its emails?

    Read the article

  • How to rename and move files according to directory names?

    - by Shan
    I have a bunch of directories, each containing a file with the same given name. I want to move these files into another directory, renaming each one after the directory it came from so that they are distinguishable and are not overwritten.

    EDIT: All the directories sit in the same parent directory. The destination is a single directory somewhere on the system, which could be anywhere. For each directory, we take the named file from it, rename it exactly after the directory, and put it in the destination. An important constraint is that the file name is given and is present in every directory; the directories might contain other files, but always also the given one. Thanks a lot.
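
    A minimal sketch, assuming bash; the paths and the common file name "data.dat" are hypothetical placeholders for the real ones:

      SRC=/path/to/source           # parent directory holding the subdirectories
      DEST=/path/to/destination
      FILE=data.dat                 # the common file name present in every directory
      for d in "$SRC"/*/; do
          name=$(basename "$d")
          # prefix the copy with the directory name so nothing gets overwritten
          [ -f "$d/$FILE" ] && mv "$d/$FILE" "$DEST/${name}_$FILE"
      done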

    Read the article

  • Ping Unknown Host on CentOS at EC2

    - by organicveggie
    Weird problem. We have a collection of servers running CentOS 5 on EC2. The setup includes two DNS servers and two LDAP servers, and DNS has a CNAME pointing at the primary LDAP server. One machine (and only one machine) is giving me problems: I can ssh into the server using LDAP authentication, but once I'm on the machine, ping won't resolve the LDAP host even though DNS seems to work fine.

    Here's ping:

      $ ping ldap.mycompany.ec2
      ping: unknown host ldap.mycompany.ec2

    Here's the output of dig:

      $ dig ldap.mycompany.ec2

      ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> ldap.mycompany.ec2
      ;; global options:  printcmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2893
      ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

      ;; QUESTION SECTION:
      ;ldap.mycompany.ec2.            IN      A

      ;; ANSWER SECTION:
      ldap.mycompany.ec2.     3600    IN      CNAME   ec2-hostname.compute-1.amazonaws.com.
      ec2-hostname.compute-1.amazonaws.com. 55 IN A   aaa.bbb.ccc.ddd

      ;; Query time: 12 msec
      ;; SERVER: 10.32.159.xxx#53(10.32.159.xxx)
      ;; WHEN: Tue May 31 11:16:30 2011
      ;; MSG SIZE  rcvd: 107

    And here is resolv.conf:

      $ cat /etc/resolv.conf
      search mycompany.ec2
      nameserver 10.32.159.xxx
      nameserver 10.244.19.yyy

    And here is my hosts file:

      $ cat /etc/hosts
      10.122.15.zzz   bamboo4 bamboo4.mycompany.ec2
      127.0.0.1       localhost localhost.localdomain

    And here's nsswitch.conf:

      $ cat /etc/nsswitch.conf
      passwd:     files ldap
      shadow:     files ldap
      group:      files ldap
      sudoers:    ldap files
      hosts:      files dns
      bootparams: nisplus [NOTFOUND=return] files
      ethers:     files
      netmasks:   files
      networks:   files
      protocols:  files
      rpc:        files
      services:   files
      netgroup:   files ldap
      publickey:  nisplus
      automount:  files ldap
      aliases:    files nisplus

    So DNS works the way I would expect, I can ping the LDAP server by IP address, and I can even access the box over SSH using LDAP authentication. Any suggestions?
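
    One diagnostic sketch (reusing the host name above): ping resolves names through the NSS "hosts" line (files, then dns), while dig talks to DNS directly, so comparing the two lookup paths narrows down where the failure sits.

      getent hosts ldap.mycompany.ec2   # what ping and ssh actually see (files + dns via libc)
      dig +short ldap.mycompany.ec2     # DNS only, bypassing nsswitch
      # if getent fails while dig succeeds, suspect the resolver path on this one machine
      # rather than DNS itself, e.g. a stale nscd cache:
      sudo /etc/init.d/nscd status && sudo /etc/init.d/nscd restart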

    Read the article

  • How can I maximally compress .gz files in Nautilus?

    - by Takkat
    When selecting Compress... from the right-click context menu in Nautilus, I am able to quickly compress files to .gz format. However, by default Nautilus does not use maximum compression. Can I make Nautilus use maximum compression, like gzip -9? Using gconftool or gconf-editor to set the compression_level for File Roller to maximum seems like the right approach, but unfortunately it does not have the desired effect and does not produce maximally compressed files. As this is the expected way to set compression levels, a bug report has been filed upstream. Any ideas for a workaround are welcome.
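
    One possible workaround, as a sketch: bypass File Roller entirely with a small Nautilus script that always runs gzip -9 on the selected files. The script name and location are assumptions (on GNOME 2 the scripts folder is usually ~/.gnome2/nautilus-scripts); save it there, mark it executable, and it appears under the Scripts submenu of the context menu.

      #!/bin/bash
      # gzip-max: compress each selected file at maximum level, keeping the original
      for f in "$@"; do
          gzip -9 -c -- "$f" > "$f.gz"
      done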

    Read the article

  • Backing up files on Ubuntu for a reinstall. Will there be problems with permissions?

    - by adam
    I have some very important files I want to back up before I reinstall Ubuntu, going back to 9.04 from 9.10 (which is causing me all sorts of problems). The total size of the files is small, so I'm just going to copy them over to Dropbox. I'm wondering: when I reinstall Ubuntu and copy them back, will there be any issues with the permissions of those files, given that the old user account that created them and the new user I'll set up on the fresh install will be different?
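
    A minimal sketch of one way to handle it, assuming the files live in a folder called "important-files" in your home directory (a hypothetical name): archive them with permissions preserved, and after the reinstall hand everything to the new account with a single chown so the old UID no longer matters.

      # before the reinstall: archive with permissions preserved and drop it in Dropbox
      tar czpf ~/Dropbox/important-files.tar.gz -C ~ important-files

      # after the reinstall, logged in as the new user:
      tar xzpf ~/Dropbox/important-files.tar.gz -C ~
      chown -R "$USER":"$USER" ~/important-files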

    Read the article

  • What to do with ca.crt, name.crt, name.key, name.ovpn files?

    - by tipu
    I was given these four files to access the office's VPN server. I am on Ubuntu 12.04 and am unsure how to begin using them. I tried the VPN connection tab under Network Connections, but my files didn't specify a username and the import forced me to save one anyway, so attempting to connect didn't yield any results. What am I supposed to do with these four files to connect to the VPN?
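
    A sketch of the quickest test, assuming the .ovpn file references ca.crt, name.crt and name.key by relative path, so it must be run from the folder holding all four files (the folder name below is hypothetical):

      sudo apt-get install openvpn
      cd ~/office-vpn
      sudo openvpn --config name.ovpn

    If that connects, the same files should import cleanly into Network Manager: install network-manager-openvpn-gnome, choose "Import a saved VPN configuration", and select the .ovpn file; a certificate-only setup like this one should not need a username there.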

    Read the article

  • Why is it good to have website content files on a separate drive from the system (OS) drive?

    - by Jeffrey
    I am wondering what benefits I would gain from moving all website content files out of the default inetpub directory on C: to something like D:\wwwroot. By default IIS creates a separate application pool for each website, and I am using the built-in user and group (IUSR / IIS_IUSRS) as the authentication method. I've made sure each site directory has the appropriate permission settings, so I am not sure what I would gain. The environment is:

      VMware
      Windows 2008 R2 64-bit
      IIS 7.5
      C:\inetpub\site1
      C:\inetpub\site2

    Also, as the article "Moving the IIS7 inetpub directory to a different drive" points out, I'm not sure it's worth the trouble to migrate the files to a different drive:

      "Please be aware of the following: Windows servicing events (i.e. hotfixes and service packs) would still replace files in the original directories. The likelihood that files in the inetpub directories have to be replaced by servicing is low, but for this reason deleting the original directories is not possible."

    Read the article

  • Lubuntu 14.04: problem starting lxsession-default-apps

    - by user278179
    I have a problem: I can't run lxsession-default-apps on Lubuntu 14.04 because it tells me "The database is updating, please wait". When I try to run lxsession-default-apps from a terminal, I get this output:

      ** Message: utils.vala:30: config_path_directory: /home/USER/.config/lxsession-default-apps
      ** Message: desktop-files-backend.vala:171: test config_path: /home/USER/.config/lxsession-default-apps/settings.conf
      ** Message: desktop-files-backend.vala:237: Scanning folder: /usr/share/applications
      ** Message: desktop-files-backend.vala:278: Start scanning
      ** Message: desktop-files-backend.vala:257: Scanning folder: /usr/share/app-install/desktop
      ** Message: desktop-files-backend.vala:278: Start scanning
      Error: list_files failed: No such file or directory
      ** Message: desktop-files-backend.vala:333: Finishing scanning
      ** Message: desktop-files-backend.vala:189: Signal finish scanning with mode: write
      ** Message: desktop-files-backend.vala:333: Finishing scanning

    Any help would be appreciated. Thanks. Regards.
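
    A guess based on the trace above, so treat it as an assumption rather than a confirmed fix: the error appears right after the scan of /usr/share/app-install/desktop, a directory that normally only exists when the app-install-data package is installed. A sketch of the two obvious things to try:

      sudo apt-get install app-install-data     # provides /usr/share/app-install/desktop
      # or, at minimum, create the directory the scanner expects
      sudo mkdir -p /usr/share/app-install/desktop
      lxsession-default-apps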

    Read the article

  • Why does Windows Media Center try to open zip files?

    - by gpryatel
    Notes: the OS is Windows 7 and the browser is the latest Firefox. After I save a zip file to the desktop, Windows Media Center opens up. I looked around its configuration settings but could not find anything related to zip files. How do I turn that off?

    Also (I don't know if this should be a separate question or not): unless I right-click and choose "Save link as..." for zip files, I don't get a Firefox dialog asking what to do with the file (Open/Save); the files get saved to some place like c:\users\namegoeshere\appdata. This only happens on the Win7 computer. I looked through Firefox's settings for saving files, and I do have "ask me where to download..." enabled. I can get more exact path names when I get home.

    Read the article

  • Can I use a wildcard to denote subdirectories, as opposed to just files, in the Windows Command Prompt?

    - by Dinosaurus
    I know I can use a wildcard to list the files in a single directory: dir *.java. However, does anyone know if it is possible to denote a subdirectory with a wildcard as well? I would like to do something like dir classes/*/*.java, which would list all the Java files in every subdirectory beneath the classes directory. So, if there is:

      classes/cs1100/
      classes/cs1200/
      classes/cs1500/

    it would list all the Java files within these. Note that I'm not using this specifically for the dir command, but for another command-line tool that accepts a list of files; still, if it works for dir, it should work in my other program as well.

    Read the article

  • 7ZIP - Command Line Compression | Can Never Keep it Simple

    - by OneTwoYou
    I've been Googling for a few hours on how to compress just a file inside a directory, and I can't find anything; I only found how to compress a folder in general. What I want to know is how to compress a folder within a folder, along with a file. Current code:

      7zG.exe a -tzip "test.zip" dontcompressme/compressme/new.txt
      pause

    As you can see above, I don't want to include the first folder in the archive, only the second one and whatever is within it. I have 7zG.exe sitting in the main folder, and the files I want are three folders in, but I don't know how to compress only those. Here is my directory layout:

      Folder One (don't compress)
        Folder Two (don't compress)
          Folder Three (okay to compress)
            Document One.txt (okay to compress)
            Document Two.txt (okay to compress)
            Index.html (okay to compress)

    Does anyone know how I can do this in the simplest way ever invented by man? Whenever I search with Google, every website goes through all these methods of compressing a folder, but never the way I want to do it. It makes me kind of upset that I can't get a simple and straightforward answer. Thank you if you answer my question.
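
    A sketch of the usual trick, assuming the command-line 7z (7z.exe on Windows, or p7zip's 7z elsewhere) accepts the same a/-tzip syntax as 7zG.exe: archive paths are stored relative to the current directory, so change into the folders you want to leave out before adding.

      # store only "Folder Three" and its contents, without the two parent folders
      cd "Folder One/Folder Two"
      7z a -tzip ../../test.zip "Folder Three"

      # or, to store the three files with no folder prefix at all:
      cd "Folder One/Folder Two/Folder Three"
      7z a -tzip ../../../test.zip "Document One.txt" "Document Two.txt" Index.html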

    Read the article

  • Why are the 'libgnomevfs' files under /usr/include/gnome-vfs-2.0?

    - by George Edison
    Most applications, including the gnomevfs headers themselves, expect the files to be under /usr/include/libgnomevfs, but Ubuntu has them under /usr/include/gnome-vfs-2.0/libgnomevfs. Why? The package I'm referring to is called libgnomevfs2. Inside /usr/include/gnome-vfs-2.0/libgnomevfs/gnome-vfs.h we find:

      #include <libgnomevfs/gnome-vfs-acl.h>
      #include <libgnomevfs/gnome-vfs-address.h>
      #include <libgnomevfs/gnome-vfs-async-ops.h>
      #include <libgnomevfs/gnome-vfs-cancellation.h>
      ...

    This means that even the headers themselves expect the files to be in that location, and nothing that includes this file will compile. Am I missing something, or is this a glitch?
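
    For context, the headers are normally located through pkg-config rather than a hard-coded /usr/include path. A sketch, assuming the libgnomevfs2-dev package is installed, of how a build is expected to pick up the versioned include directory:

      pkg-config --cflags gnome-vfs-2.0
      # typically prints something like: -I/usr/include/gnome-vfs-2.0 -I/usr/lib/gnome-vfs-2.0/include ...
      gcc myprog.c $(pkg-config --cflags --libs gnome-vfs-2.0) -o myprog

    With -I/usr/include/gnome-vfs-2.0 on the compile line, an include such as <libgnomevfs/gnome-vfs.h> resolves exactly as the headers expect.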

    Read the article

  • How do I change the default FTP folder in Mac OS X 10.6?

    - by Wild_Eep
    I'm running WordPress 2.9.1 from a Mac running 10.6.3. WordPress is installed to the /Library/WebServer/Documents folder. WordPress has a feature called AutoUpdate. Clicking an autoupdate button will download and install updated versions of the WordPress software, or third-party plugin tools. It's a convenient way to keep things up to date. WordPress uses FTP to download the files. I've enabled FTP and set up a user account and opened the requisite ports in my firewall for FTP traffic. This doesn't seem to be enough for my self-hosted installation, though. I'm sure this feature was originally designed for someone who has access to a remote shared webserver, and that it's merely a configuration challenge related to the FTP setup. I feel that if I can adjust the initial directory that the FTP service presents to the AutoUpdate feature, everything else will work properly. So, my question is, how do I adjust what folder is presented when a given user connects to a Mac running 10.6.3 via FTP?
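
    One possible approach, strictly as a sketch: Snow Leopard's bundled ftpd drops each user into their home directory, so pointing a dedicated FTP account's home directory at the WordPress docroot changes what AutoUpdate sees on connect. The account name "wpftp" is hypothetical, and note that this really does change that account's home directory, so use a dedicated user rather than your own login account.

      sudo dscl . -create /Users/wpftp NFSHomeDirectory /Library/WebServer/Documents
      sudo chown -R wpftp /Library/WebServer/Documents   # so WordPress can write its updates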

    Read the article

  • How can I configure Samba to share (read/write) any folder with root permissions?

    - by Mike Toews
    I have a CentOS 5 VirtualBox guest on a Windows 7 x64 host. I am attempting to set up a read/write share of a directory owned by root with my Windows host using Samba, but I'm having no luck after running around in circles. To simplify matters I've disabled the firewall (/etc/init.d/iptables stop). As security and permissions are irrelevant for this purpose, I'd rather not have to set up another Unix user/group/password.

    Here is the output from testparm:

      Load smb config files from /etc/samba/smb.conf
      rlimit_max: rlimit_max (1024) below minimum Windows limit (16384)
      Processing section "[Guest Share]"
      Loaded services file OK.
      Server role: ROLE_STANDALONE

    and the source of /etc/samba/smb.conf:

      [global]
      workgroup = WRKGRP
      netbios name = SMBSERVER
      security = SHARE
      load printers = No

      [Guest Share]
      comment = Guest access share
      path = /root/src
      read only = No
      guest ok = Yes

    Running /etc/init.d/smb restart shows an OK status. However, on my Windows host I can only see the share folder on the guest (\\IPv4); I cannot go into "Guest Share" and get the common "The network name cannot be found" error message, whose likely cause is that the user accessing the share does not have sufficient permissions on the share's path: both read (r) and traverse (x) access should be possible. Am I effectively trying to use root as a passwordless Samba guest? I'd like to; is it possible? How can I configure Samba to share (read/write) any folder with root permissions?
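
    A sketch of one likely fix, under two assumptions: the guest connection is mapped to the "nobody" account, which cannot even traverse /root (normally mode 0700), and your Samba build accepts "force user = root" (many do, though it is obviously insecure and only sensible on a throwaway VM). Either loosen /root, or add a share section that forces the effective user:

      # option 1: let other users traverse /root (quick and dirty)
      chmod o+x /root

      # option 2: append a share that acts as root for every guest connection, then restart
      cat <<'EOF' >> /etc/samba/smb.conf
      [Guest Share RW]
          path = /root/src
          read only = No
          guest ok = Yes
          guest only = Yes
          force user = root
      EOF
      /etc/init.d/smb restart
      testparm -s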

    Read the article

  • How can I protect files on my NGiNX server?

    - by Jean-Nicolas Boulay Desjardins
    I am trying to protect files of multiple types on my server with nginx and PHP. Basically, I want people to have to sign in to the website before they can access static files such as images. Dropbox does this very well: it forces you to sign in before you can access any static file you have put on its servers. I thought about using the nginx Perl module and writing a Perl script that checks the session to see whether the user is signed in before granting access to a static file. However, I would prefer to use PHP, because all my code runs under PHP and I am not sure how to read a session created by PHP from Perl. So my question is: how can I protect static files of any type so that they can only be reached by a user who has signed in and has a valid session created by a PHP script?

    Read the article

  • How can I send super large files directly to another computer in the Internet for free?

    - by Cruise
    I regularly need to transfer very large files (30 GB of financial statistics) to my friend. I don't have any problem with bandwidth: it is very broad here. I did some research in the area, so:

      1. I would not use FTP, as it is very tricky to get working behind a NAT.
      2. I would not use Skype/MSN/ICQ, as they are not designed for file transfer and underperform on huge files.
      3. I would not use file-sharing services, as I would need to pay for big files (30 GB is a problem here) and I don't like keeping any piece of my data on a third-party server.

    So I need some smart tool that does what I want: sending files directly browser-to-browser rather than browser-server-browser. Is that so complex? Is there a web application on the Internet that can do this?

    Read the article

  • Why are some recovery tools still able to find deleted files after I purge the Recycle Bin, defrag the disk and zero-fill free space?

    - by Ivan
    As far as I understand, when I delete a file (without using the Recycle Bin), its record is removed from the file system's table of contents (FAT/MFT/etc.), but the contents of the disk sectors the file occupied remain intact until those sectors are reused to write something else. When I use a deleted-file recovery tool, it reads those sectors directly and tries to rebuild the original file. What I can't understand is why recovery tools are still able to find deleted files (even with a reduced chance of rebuilding them) after I defragment the drive and overwrite all the free space with zeros. Can you explain this? I thought zero-overwritten deleted files could only be found with specialized forensic-lab magnetic scanning hardware, and that the complex wiping algorithms (overwriting free space multiple times with random and non-random patterns) only make sense for preventing such a physical scan from succeeding; but in practice it seems that a plain zero-fill is not enough to wipe all traces of deleted files. How can this be?

    Read the article

  • On Linux, how can I make a list of files that are owned by a particular owner and then fix the group and owner?

    - by Stuart Woodward
    I have a deep and complex file system where some files have accidentally been written by root. I want to change the ownership of those files back to the original owner in one go. I am playing with commands like:

      find /folder -type f | xargs ls -l | grep "root root"

    but a lot of garbage comes out too. I want to build a list first, and then change only the files in that list after confirming it.
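
    A sketch of one way to do this with find's own ownership tests, which avoids the ls/grep noise; the owner and group name "stuart" is an assumption, so substitute the real original owner:

      # build the list of regular files still owned by root, for review
      find /folder -type f -user root -group root > /tmp/root-owned.txt
      less /tmp/root-owned.txt

      # after confirming the list, fix owner and group in one go
      sudo xargs -d '\n' chown stuart:stuart < /tmp/root-owned.txt

    The -d '\n' keeps file names containing spaces intact; it does assume no names contain newlines.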

    Read the article

  • Accessing a webpage folder with .htaccess in it via Apache WebDAV?

    - by pingo
    I have setup webdav access in order to enable an external user to upload the content of his web page to his folder on my server that is served by apache to the web. This way he could update his web page via webdav. Now the problem is that the user requires a .htaccess file and of course .htaccess breaks webdav probably because it overrides settings. (new files cannot be uploaded anymore via webdav if below specified .htaccess exists) I am running Apache2.2.17 and this is my webdav config: Alias /folderDAV "d:/wamp/www/somewebsite/" <Location /folderDAV> Order Allow,Deny Allow from all Dav On AuthType Digest AuthName DAV-upload AuthUserFile "D:/wamp/passtore/user.passwd" AuthDigestProvider file require valid-user </Location> This config is part of my naive solution to fixing this problem. The idea was to specify an alias to the web page folder where webdav would be enabled and then set AllowOverride to none so that the .htaccess would have no effect. Of course I then found out that in <Location /> AllowOverride directive is not valid. The .htaccess file looks like this: #opencart settings Options +FollowSymlinks Options -Indexes <FilesMatch "\.(tpl|ini)"> Order deny,allow Deny from all </FilesMatch> RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA] ErrorDocument 403 /403.html deny from 1.1.1.1/19 allow from 2.2.2.2 What would be the solution here? I would like to have the web page accessible from the web but at the same time be able to access and modify it via apache's webdav (with digest auth). How would I do that? Also if possible I would like a solution that permits the existence of the .htaccess so that the user still has the power to setup access rules for his web page.

    Read the article

  • 403 Forbidden serving static files from VirtualBox shared folder with nginx (Ubuntu 10.04LTS guest, Windows 7 host)

    - by Chris Pratt
    I'm working on a local development VM and trying to test serving my site with gunicorn, using nginx as a reverse proxy for static resources only. With "user nginx;" in nginx.conf the site loads, minus the static resources; attempting to load a static resource individually reveals a 403 Forbidden error.

    For background: the static resources are in a shared folder under /media/sf_work. All files are owned by root:vboxsf (the VirtualBox default). My own account on the system has been added to the vboxsf group and I have full access to the shared folder. For comparison, I tried changing the nginx.conf user to my own account: in that scenario the static files did load, but then the homepage itself gave a 403 Forbidden error. I then tried adding the nginx user to the vboxsf group, but then everything gave a 403 Forbidden error. After further investigation it seems that if the nginx.conf user is in any group, the result is a 403 Forbidden. Any idea what could possibly be going on here?
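
    A few diagnostic steps, as a sketch (the share name "work" is an assumption based on the sf_ prefix VirtualBox adds to mount points): vboxsf does not honour per-file ownership changes the way a normal filesystem does, so access is governed by the uid/gid/mode options the share was mounted with, and group membership changes only take effect once the nginx processes are fully restarted.

      id nginx                                   # is nginx really in vboxsf after the restart?
      sudo -u nginx ls /media/sf_work            # can the worker user actually read the share?

      # one workaround: remount the share owned by the nginx user with world-readable modes
      sudo umount /media/sf_work
      sudo mount -t vboxsf -o uid=$(id -u nginx),gid=$(id -g nginx),fmode=0644,dmode=0755 work /media/sf_work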

    Read the article

  • Does saving my progress on a U1-synced file/folder put unnecessary strain on the servers?

    - by Chauncellor
    I love Ubuntu One and I use it all the time. I have my documents and music composition folders set to sync. It's been a real boon. However, sometimes I feel that constantly saving my progress forces the file to sync dozens and dozens of times to the servers. It seems wasteful to me so I've been disconnecting U1 until I'm finished working on a project. Is this an unnecessary action that I am taking? I know it's using Amazon's storage but I'm still paranoid that I'm costing Canonical money when I constantly save my progress.

    Read the article
