Search Results

Search found 506 results on 21 pages for 'mnt'.

  • Can't create a valid symlink under VMWare HGFS

    - by Alexander Gladysh
    Host: OS X 10.6.5 VMWare Fusion: 3.1.2 Guest: Ubuntu x86 10.10 $ uname -a Linux ubuntu 2.6.35-24-generic #42-Ubuntu SMP Thu Dec 2 01:41:57 UTC 2010 i686 GNU/Linux I cannot create a symlink readable from the guest OS anywhere in a directory mounted with hgfs: /mnt/hgfs/projects/tmp$ touch aaa /mnt/hgfs/projects/tmp$ ln -s aaa bbb /mnt/hgfs/projects/tmp$ less bbb bbb: No such file or directory /mnt/hgfs/projects/tmp$ ls -la total 6 drwxr-xr-x 1 501 users 136 2010-12-28 18:12 . drwxr-xr-x 1 501 users 8602 2010-12-28 18:12 .. -rw-r--r-- 1 501 users 0 2010-12-28 18:12 aaa lrwxr-xr-x 1 501 users 3 2010-12-28 18:12 bbb -> aaa /mnt/hgfs/projects/tmp$ readlink bbb aaa The same symlink is perfectly accessible on the OS X host. Is there a workaround for this?
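
    If hgfs itself refuses to serve guest-created symlinks, one hedged workaround sketch is to bypass hgfs and share the folder over NFS instead (the export path and host IP below are assumptions, not from the question):

        # on the OS X host, after adding the folder to /etc/exports:
        sudo nfsd enable
        # in the Ubuntu guest, mount it where the hgfs share used to be:
        sudo mount -t nfs <host-ip>:/Users/alex/projects /mnt/projects

    Symlinks behave normally over NFS, at the cost of losing the hgfs integration.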

  • Copying files to a zfs mountpoint doesn't work - the files aren't actually copied to the other filesystem

    - by user113904
    I have 3 x 4 TB disks in a NAS that I want to group together and access as if they were one whole 'unit' of some kind. I also have a 250GB disk containing the OS - this is full of films and TV shows currently. I thought zfs sounded good, so after installing the PPA I created a raidz zpool: sudo zpool create store raidz /dev/sdb /dev/sdc /dev/sdd and set the mountpoint to /mnt/store: sudo zfs set mountpoint=/mnt/store /store I checked it was successful - I think it was: sudo zfs list NAME USED AVAIL REFER MOUNTPOINT store 266K 7.16T 170K /mnt/store Then I wanted to move over a whole load of files from my home directory. I went to where the to-be-copied folder was (called media) and entered sudo cp -R * /mnt/store cp: cannot create directory `/mnt/store/media': No space left on device It seems like it's not copying over to the new filesystem I made (or thought I did). I've never really done this type of thing until a few days ago so may be running before I can walk... Is this not the right way to copy files across? I've only used Windows before, so the idea of mountpoints is a bit mind-boggling. I'm using XBMCbuntu 12 beta 2.0, which is based on 12.04. Will retry with normal Ubuntu 12.04 desktop to see if that's the problem. Thanks for the help!
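
    A hedged sketch for confirming where the pool actually ended up mounted ("store" is the pool name from above):

        sudo zfs get mounted,mountpoint store
        df -h /mnt/store    # if this reports the 250GB root disk, writes are landing there, not in the pool

    One known gotcha: ZFS refuses to mount a dataset over a non-empty directory, so if /mnt/store already had files in it, running sudo zfs mount store after emptying the directory may be all that is missing.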

  • GRUB bootloader fails to boot in dual-boot system after installing Ubuntu 12.10

    - by K.K Patel
    I had 12.04 on my dual-boot system. Yesterday I downloaded Ubuntu 12.10, made a bootable USB and chose the Upgrade option in the installer. After installation GRUB failed to boot my machine. I tried the following to fix the GRUB bootloader. I fixed the same problem on Ubuntu 12.04 using a live USB, but that solution does not work for Ubuntu 12.10. Here is exactly where the solution fails. I followed these steps after booting the live USB and opening a terminal: 1) sudo fdisk -l to see where Linux is installed 2) sudo mount /dev/sda9 /mnt where sda9 is my Linux partition 3) sudo mount /dev/sda9 /mnt/boot 4) sudo mount --bind /dev /mnt/dev 5) sudo chroot /mnt (no problems, these steps completed fine) 6) grub-install /dev/sda - when I typed the command I got an error that source_dir doesn't exist, "Please specify --target or --directory". How can I solve this?
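
    For comparison, a commonly used chroot sequence for GRUB 2 repair (a hedged sketch, assuming /dev/sda9 holds / and /boot together - note step 3 above remounts the same partition onto /mnt/boot, which is probably wrong, and the /proc and /sys binds are missing):

        sudo mount /dev/sda9 /mnt
        sudo mount --bind /dev /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys /mnt/sys
        sudo chroot /mnt
        apt-get install --reinstall grub-pc   # likely restores /usr/lib/grub/i386-pc, the "source_dir" grub-install complains about
        grub-install /dev/sda
        update-grub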

  • Cannot Restore GRUB (Ubuntu 11.04 + Win 7)

    - by Benny
    I'm trying to fix GRUB on my PC, but I'm running into serious issues doing so. Any help would be greatly appreciated as I'm completely crippled right now. Here is the sequence of events for this PC: installed Windows 7; split the full disk into two partitions (one for Win7 and one for multimedia); a long time passed; split one of the partitions into two again; installed Ubuntu 11.04 on the new partition; a little time passed; Windows 7 acting up, so reinstalled it - Ubuntu GRUB gone; tried restoring GRUB by mounting and grub-install from a live USB; tried switching to a live CD instead of USB (thinking it might be the drive). Now I don't see GRUB and I'm getting "input/output" errors. An example I/O error: ubuntu@ubuntu:~$ sudo fdisk -l Disk /dev/sda: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xbe86aff6 Device Boot Start End Blocks Id System /dev/sda1 * 1 48727 391393280 7 HPFS/NTFS /dev/sda2 48727 77063 227612647+ 7 HPFS/NTFS /dev/sda3 77063 91202 113566721 5 Extended /dev/sda5 77063 90622 108908544 83 Linux /dev/sda6 90622 91202 4657152 82 Linux swap / Solaris ubuntu@ubuntu:~$ sudo mount /dev/sda5 /mnt ubuntu@ubuntu:~$ sudo grub-install --root-directory=/mnt /dev/sda mkdir: cannot create directory `/mnt/boot': Input/output error ubuntu@ubuntu:~$ cd /mnt ubuntu@ubuntu:/mnt$ ls ls: cannot access etc: Input/output error
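
    Input/output errors immediately after mounting usually point at filesystem damage rather than at GRUB, so a hedged first step from the live session would be to check the partition before retrying the install:

        sudo umount /mnt
        sudo fsck -f /dev/sda5      # repair the Linux root partition
        sudo mount /dev/sda5 /mnt
        sudo grub-install --root-directory=/mnt /dev/sda

    If fsck also reports read errors, checking the disk itself (smartctl, or ddrescue to a healthy drive) comes first.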

  • Can no longer boot with rEFIt and Grub on early 2006 MacBook Pro

    - by Don Quixote
    I don't know what happened to cause this. I have Snow Leopard, Ubuntu 11.04 Natty Narwhal and Windows XP SP3 on my early 2006 MacBook Pro. It is a Core Duo unit, NOT Core 2 Duo, so it is 32-bit only - Model Identifier MacBookPro1,1. I use rEFIt 0.14 for my boot menu. For some reason neither XP nor Ubuntu would boot anymore. I'd just get a black screen with a rapidly flashing underscore in the top-left corner. Having both those OSes failing to boot suggested a problem with the boot loader in my MBR. The rEFIT partition tool verified that my MBR partitions were still synced with my GPT partitions, so I rewrote my MBR partition table with fdisk while booted from Parted Magic: # fdisk /dev/sda (fdisk warns about the disk having a GPT. I press on anyway.) p (Print the existing partition table to make sure it's OK.) w (Write the old partition table back to disk. This also writes a new MBR boot loader.) After this XP would boot but Ubuntu would not, with the same symptom. Now I used update-grub while chrooted into Ubuntu from Parted Magic: # mount /dev/sda3 /mnt # mount --bind /dev /mnt/dev # mount --bind /sys /mnt/sys # mount --bind /proc /mnt/proc # chroot /mnt Chroot issues some warnings about not being able to identify some group IDs. I don't know why that happens, or whether it is a problem. At this point while I am still booted off of Parted Magic's kernel, I am running from Natty's filesystem. # update-grub Update-grub detects each of my operating systems then claims to complete successfully, but still won't boot. I asked this same question over at rEFIt's Sourceforge support forum but there have been no replies yet. I also Googled quite a bit, and see many who have the same black screen problem, but none of their situations seem quite like mine. Thanks for any help you can give me. -- Don Quixote
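
    One hedged addition to the chroot session above: update-grub only regenerates the menu, it does not rewrite the boot code, so reinstalling GRUB to the Ubuntu partition (the usual arrangement when rEFIt chainloads it) may be the missing step:

        grub-install --force /dev/sda3   # --force is required when targeting a partition instead of the MBR
        update-grub

    This assumes GRUB was originally installed to /dev/sda3's boot sector rather than the MBR, which a rEFIt setup typically implies.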

  • mount fstab partition with public access

    - by Mikhail
    How do I specify that an fstab mount-point should be public? I want /mnt/windows to be accessible to normal users. I believe I am using ntfs-3g. If I set /mnt/windows to 777, will it be publicly accessible without changing the permissions on the NTFS disk? /dev/sdb4 /mnt/windows ntfs noatime 0 1 /dev/sdb5 / ext4 noatime 0 1 UUID=5AA4-168D /boot/efi vfat defaults 0 1 and localhost my_computer # stat /mnt/windows/ File: '/mnt/windows/' Size: 12288 Blocks: 24 IO Block: 512 directory Device: 814h/2068d Inode: 5 Links: 1 Access: (0700/drwx------) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2014-08-21 18:29:13.597722200 -0500 Modify: 2014-08-21 18:29:13.597722200 -0500 Change: 2014-08-21 18:29:13.597722200 -0500 Birth: -
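
    chmod on a mounted NTFS volume generally has no lasting effect; with ntfs-3g the ownership and modes are fixed at mount time through options. A hedged fstab sketch (uid/gid 1000 is assumed to be the normal desktop user):

        /dev/sdb4  /mnt/windows  ntfs-3g  noatime,uid=1000,gid=1000,umask=022  0  0

    umask=022 produces world-readable rwxr-xr-x entries owned by that user; dmask/fmask can split directory and file modes if finer control is needed.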

  • How to mount a drive for a user other than root?

    - by Ondra Žižka
    I've attached an SSD disk through USB. Then: sudo su - mkdir /mnt/hx chown ondra /mnt/hx mount /dev/sdb1 /mnt/hx # It's FAT32 now, but was the same with EXT4 The last command changes the dir owner to root. Whenever I create a file in the root dir, I need to be root and root is the owner. Can I set a different user as owner of the mounted dir? Or, simply put, ensure that user XY can freely read/write on the drive.
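
    The two filesystems need different treatment, roughly like this (a hedged sketch reusing the names from the question):

        # FAT32 has no Unix ownership, so the owner is a mount option:
        mount -o uid=ondra,gid=ondra /dev/sdb1 /mnt/hx
        # ext4 stores real ownership, so mount first, then chown the mounted root:
        mount /dev/sdb1 /mnt/hx && chown ondra:ondra /mnt/hx

    The chown done before mounting (as in the question) only affects the empty directory underneath the mount, which the mounted filesystem then hides.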

  • How to make an Ubuntu LiveCD able to use a USB flash drive and external hard drive?

    - by ????
    I am booting up an Ubuntu 12.04 LiveCD... and was able to do sudo mount /dev/sda1 /mnt and see files in /mnt, which is the main hard drive that can't boot up any more. So to copy files from /mnt to an external hard drive or USB flash drive, I connected a 1TB external hard drive and 2 USB flash drives to the computer, but for some reason, in "File Systems", I can't drag and drop files from /mnt into those external hard drives or USB flash drives. I can't open or look into those drives either... How do I make it work?
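
    Since /mnt was mounted as root, files copied from it may only be readable by root, while the desktop mounts the USB drives as the live-session user. A hedged command-line sketch that avoids the drag-and-drop problem entirely (the device name is an assumption; check sudo fdisk -l first):

        sudo mkdir -p /media/backup
        sudo mount /dev/sdb1 /media/backup
        sudo cp -a /mnt/home /media/backup/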

  • Heroku app throws "Internal Server Error"

    - by picardo
    This app works just fine on my local computer. After pushing it to Heroku, static pages appear to be working but the blog section throws an Internal Server Error. I pulled the logs by running "heroku logs" and this is what I get: ==> production.log <== /usr/ruby1.8.7/lib/ruby/gems/1.8/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run' /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/backends/base.rb:57:in `start' /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/server.rb:156:in `start' /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/controllers/controller.rb:80:in `start' /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/runner.rb:177:in `send' /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/runner.rb:177:in `run_command' /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/lib/thin/runner.rb:143:in `run!' /home/slugs/215194_e5b887e_c999/mnt/.bundle/gems/gems/thin-1.2.7/bin/thin:6 Something wrong with the eventmachine gem, I suppose....but it works fine on my machine. So I'm not sure what's going on or how to debug it.
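
    The frames shown are just thin's normal startup path, so the real exception is probably higher up in the log than this excerpt. A hedged way to surface it is to boot the same stack locally in production mode:

        RAILS_ENV=production bundle exec thin start

    If it reproduces, the full backtrace (including whatever differs on Heroku, often a missing config var or gem) appears in the local console.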

  • CentOS - mdadm raid1 drive won't mount to default location

    - by danny
    I'm running CentOS 5.5; the system, boot, swap, etc. are all on /dev/sda, and I have two identical single-partition drives /dev/sdb1 /dev/sdc1 that are configured in RAID1 (using mdadm). It was working fine (configured to mount to /mnt/data in the fstab file) and I recently let yum install a couple of automatic updates without paying attention to what they were, and now it doesn't work. RAID is working fine (dmesg shows it gets loaded correctly). mdstat shows: # cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sdc1[1] sdb1[0] XXXX blocks [2/2] [UU] unused devices: <none> Additionally, I can mount it anywhere other than its default directory (i.e. the following works, and I can read data off the drives): # mount /dev/md0 /mnt/data2 EXT3-fs warning: mounting fs with errors, running e2fsck is recommended But when I run the following I get: # mount -a mount: /dev/sdb1 already mounted or /mnt/data busy It says nothing is mounted when I try to umount /dev/sdb1 or umount /mnt/data, so I assume it's the second of those errors. However, lsof | grep mnt shows nothing. The weird thing is that I can save files in /mnt/data. So something is obviously mounted there, but when I try to umount it I get the error that nothing is mounted. /etc/mtab doesn't mention any of the partitions or files I am trying to work with, and fstab just has that one line I mentioned above that is supposed to mount my raid partition. Again, it was all working fine until I let yum install those updates. On Google I've found a few things about dmraid interfering with mdadm after an update, but I yum remove'd dmraid and rebooted and it didn't help. I'm really confused and need to get this working to get on with my work!
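
    The error naming /dev/sdb1 (a member disk) rather than /dev/md0 hints that something is now trying to mount the raw member instead of the array. A hedged sketch of what to verify: the fstab line should reference the array device, and the EXT3-fs warning suggests an fsck is due anyway:

        # /etc/fstab - mount the md device, never a member like /dev/sdb1
        /dev/md0  /mnt/data  ext3  defaults  0  2

        umount /mnt/data 2>/dev/null
        e2fsck -f /dev/md0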

  • Permission denied problem in FreeNAS + Transmission

    - by Torbjörn Karlsson
    Running FreeNAS 0.7.2 (5543) and Transmission 2.11. The problem is that I cannot save a torrent wherever I want. For example, I can save in /mnt/1-500gb/Tv/dexter but I cannot save in /mnt/4-1000gb/tv/Lost. When I try to save in the Lost folder I get a permission denied error in the web interface. But when I try to save the same torrent file in the dexter folder everything works fine... This is probably an easy thing to fix, but I'm new to FreeNAS. The user name for Transmission is TorrentUser, if that helps. Now I find that I cannot browse the disks in Quixplorer either: I can browse /mnt/4-1000gb/ but not /mnt/1-500gb. When I try to browse /mnt/1-500gb/ I get Unable to read directory $ mount /dev/md0 on / (ufs, local) devfs on /dev (devfs, local) procfs on /proc (procfs, local) /dev/fuse1 on /mnt/5 - 500gb (fusefs, local, synchronous) /dev/fuse2 on /mnt/2 - 1000gb (fusefs, local, synchronous) /dev/fuse3 on /mnt/3 - 1000gb (fusefs, local, synchronous) /dev/fuse4 on /mnt/4 - 1000gb (fusefs, local, synchronous) /dev/fuse5 on /mnt/320GB - USB (fusefs, local, synchronous) /dev/md1 on /var (ufs, local) /dev/da0a on /cf (ufs, local, read-only) /dev/fuse0 on /mnt/1 - 500gb (fusefs, local, synchronous) Don't work: 1 - 500gb 2 - 1000gb 3 - 1000gb Works: 320GB - USB 4 - 1000gb 5 - 500gb And these 3 disks are the same disks that I can save my torrents to. PS: Every disk works perfectly when I use FTP...
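
    A hedged first check, given that the daemon runs as TorrentUser: whether that account owns (or can write to) the directories that fail. Note the real mount points contain spaces, so they must be quoted:

        ls -ld "/mnt/4 - 1000gb/tv/Lost"
        chown -R TorrentUser "/mnt/4 - 1000gb/tv"

    The split between working and broken disks lining up by mount rather than by folder also suggests comparing how each fusefs volume was created or imported in the FreeNAS UI.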

  • heroku mongohq and mongoid Mongo::ConnectionFailure

    - by Ole Morten Amundsen
    I have added the MongoHQ addon for MongoDB at Heroku. It crashes with something like this: connect_to_master': failed to connect to any given host:port (Mongo::ConnectionFailure) The descriptions online (heroku mongohq) are more directed towards MongoMapper, as I see it. I'm running Ruby 1.9.1 and Rails 3 beta with Mongoid. My feeling is that there's something with ENV['MONGOHQ_URL'], which it says the MongoHQ addon sets, but I haven't set MONGOHQ_URL anywhere in my app. I guess the problem is in my mongoid.yml? defaults: &defaults host: localhost development: <<: *defaults database: aliado_development test: <<: *defaults database: aliado_test # set these environment variables on your prod server production: <<: *defaults host: <%= ENV['MONGOID_HOST'] %> port: <%= ENV['MONGOID_PORT'] %> username: <%= ENV['MONGOID_USERNAME'] %> password: <%= ENV['MONGOID_PASSWORD'] %> database: <%= ENV['MONGOID_DATABASE'] %> It works fine locally, but fails at Heroku. More stack trace: ==> crashlog.log <== Cannot write to outdated .bundle/environment.rb to update it /disk1/home/slugs/176479_b14df52_b875/mnt/.bundle/gems/gems/rack-1.1.0/lib/rack.rb:14: warning: already initialized constant VERSION /disk1/home/slugs/176479_b14df52_b875/mnt/.bundle/gems/gems/mongo-0.20.1/lib/mongo/connection.rb:435:in `connect_to_master': failed to connect to any given host:port (Mongo::ConnectionFailure) from /disk1/home/slugs/176479_b14df52_b875/mnt/.bundle/gems/gems/mongo-0.20.1/lib/mongo/connection.rb:112:in `initialize' from /disk1/home/slugs/176479_b14df52_b875/mnt/.bundle/gems/gems/mongoid-2.0.0.beta4/lib/mongoid/railtie.rb:32:in `new' from /disk1/home/slugs/176479_b14df52_b875/mnt/.bundle/gems/gems/mongoid-2.0.0.beta4/lib/mongoid/railtie.rb:32:in `block (2 levels) in <class:Railtie>' from /disk1/home/slugs/176479_b14df52_b875/mnt/.bundle/gems/gems/mongoid-2.0.0.beta4/lib/mongoid.rb:110:in `configure' from /disk1/home/slugs/176479_b14df52_b875/mnt/.bundle/gems/gems/mongoid-2.0.0.beta4/lib/mongoid/railtie.rb:21:in `block in <class:Railtie>' from /disk1/home/slugs/176479_b14df52_b875/mnt/.bundle/gems/gems/railties-3.0.0.beta3/lib/rails/initializable.rb:25:in `instance_exec' ..... It all works locally, both tests and app. I'm out of ideas... Any suggestions? PS: Would somebody with high rep mind creating the tag 'mongohq'?
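
    MongoHQ hands the whole connection string over in the single MONGOHQ_URL config var, so a hedged mongoid.yml sketch for the production block (Mongoid understands a uri key and parses host, port, credentials and database out of it) would be:

        production:
          uri: <%= ENV['MONGOHQ_URL'] %>

    That removes the need to define MONGOID_HOST, MONGOID_PORT and the rest by hand on Heroku.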

  • Dynamic mass hosting using mod_wsgi

    - by Virgil Balibanu
    Hi, I am trying to configure an Apache server using mod_wsgi for dynamic mass hosting. Each user will have its own instance of a Python application located in /mnt/data/www/domains/[user_name] and there will be a vhost.map telling me which domain maps to each user's directory (the directory will have the same name as the user). What I do not know is how to write the WSGIScriptAliasMatch line so that it also takes the path from the vhost.map file. What I want to do is something like this: I can have on my server different domains like www.virgilbalibanu.com or virgil.balibanu.com and flaviu.balibanu.com, where each domain would belong to another user, the user name having no necessary connection to the domain name. I want to do this because when a user makes an account he receives something like virgil.mydomain.com, but if he has his own domain he can change it later to that, for example www.virgilbalibanu.ro, and this way I would only need to change the line in the vhost.map file. So far I have something like this: Alias /media/ /mnt/data/www/iitcms/media/ #all media is taken from here RewriteEngine on RewriteMap lowercase int:tolower # define the map file RewriteMap vhost txt:/mnt/data/www/domains/vhost.map #this does not work either, can't say why atm RewriteCond %{REQUEST_URI} ^/uploads/ RewriteCond ${lowercase:%{SERVER_NAME}} ^(.+)$ RewriteCond ${vhost:%1} ^(/.*)$ RewriteRule ^/(.*)$ %1/media/uploads/$1 #---> this I have no idea how I could do WSGIScriptAliasMatch ^([^/]+) /mnt/data/www/domains/$1/apache/django.wsgi <Directory "/mnt/data/www/domains"> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny Allow from all </Directory> <DirectoryMatch ^/mnt/data/www/domains/([^/]+)/apache> AllowOverride None Options FollowSymLinks ExecCGI Order deny,allow Allow from all </DirectoryMatch> <Directory /mnt/data/www/iitcms/media> AllowOverride None Options Indexes FollowSymLinks MultiViews Order allow,deny Allow from all </Directory> <DirectoryMatch ^/mnt/data/www/domains/([^/]+)/media/uploads> AllowOverride None Options Indexes FollowSymLinks MultiViews Order allow,deny Allow from all </DirectoryMatch> I know the part I did with mod_rewrite doesn't work - I couldn't really say why - but that's not as important so far; I am curious how I could write the WSGIScriptAliasMatch line so as to accomplish my objective. I would be very grateful for any help, or any other ideas related to how I can deal with this. Also it would be great if I'd manage to get each site to run in wsgi daemon mode, though that is not as important. Thanks, Virgil

  • Heroku and Refinerycms: Application failed to start ~ attachment_fu problem

    - by John Deely
    Ok, so I'm trying to get Refinerycms working with Heroku, and I'm new at all of this. I've set up an Amazon S3 account and added keys and ids to the amazon_s3.yml files. When launched on Heroku at gart.heroku.com I get the following error: App failed to start /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu/backends/s3_backend.rb:187:in `read': No such file or directory - /disk1/home/slugs/141557_e8490b3_d5eb/mnt/config/amazon_s3.yml (Errno::ENOENT) from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu/backends/s3_backend.rb:187:in `included' from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu.rb:123:in `include' from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu.rb:123:in `has_attachment' from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/app/models/image.rb:13 from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require' from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:265:in `require_or_load' ... 42 levels... from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `instance_eval' from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `initialize' from /home/heroku_rack/heroku.ru:1:in `new' from /home/heroku_rack/heroku.ru:1 The s3_backend.rb line 187 contains: @@s3_config = @@s3_config = YAML.load(ERB.new(File.read(@@s3_config_path)).result)[RAILS_ENV].symbolize_keys Any help would be great!
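
    The Errno::ENOENT says it plainly: the slug contains no config/amazon_s3.yml (that file is commonly kept out of git). A hedged fix sketch is to commit a version that reads Heroku config vars instead of literal keys (the variable names here are assumptions):

        # config/amazon_s3.yml
        production:
          bucket_name: <%= ENV['S3_BUCKET'] %>
          access_key_id: <%= ENV['S3_KEY'] %>
          secret_access_key: <%= ENV['S3_SECRET'] %>

    with the three values set once via heroku config:add S3_BUCKET=... S3_KEY=... S3_SECRET=...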

  • Moses v1.0 multi-language ini file

    - by Milan Kocic
    I was working with mosesserver 0.91 and everything worked fine, but now there is version 1.0 and nothing is the same as before. Here is my situation: I want to have multi-language translation from Arabic to English and from English to Arabic. All the data and configuration files I have work with version 0.91 of mosesserver. Here is my config file: ------------------------------------------------- ######################### ### MOSES CONFIG FILE ### ######################### # D - decoding path, R - reordering model, L - language model [translation-systems] ar-en D 0 R 0 L 0 en-ar D 1 R 1 L 1 # input factors [input-factors] 0 # mapping steps [mapping] 0 T 0 1 T 1 # translation tables: table type (hierarchical(0), textual (0), binary (1)), source-factors, target-factors, number of scores, file # OLD FORMAT is still handled for back-compatibility # OLD FORMAT translation tables: source-factors, target-factors, number of scores, file # OLD FORMAT a binary table type (1) is assumed [ttable-file] 1 0 0 5 /mnt/models/ar-en/phrase-table/phrase-table 1 0 0 5 /mnt/models/en-ar/phrase-table/phrase-table # no generation models, no generation-file section # language models: type(srilm/irstlm), factors, order, file [lmodel-file] 1 0 5 /mnt/models/ar-en/language-model/en.qblm.mm 1 0 5 /mnt/models/en-ar/language-model/ar.lm.d1.blm.mm # limit on how many phrase translations e for each phrase f are loaded # 0 = all elements loaded [ttable-limit] 20 # distortion (reordering) files [distortion-file] 0-0 wbe-msd-bidirectional-fe-allff 6 /mnt/models/ar-en/reordering-table/reordering-table.wbe-msd-bidirectional-fe.gz 0-0 wbe-msd-bidirectional-fe-allff 6 /mnt/models/en-ar/reordering-model/reordering-table.wbe-msd-bidirectional-fe.gz # distortion (reordering) weight [weight-d] 0.3 0.3 # lexicalised distortion weights [weight-lr] 0.3 0.3 0.3 0.3 0.3 0.3 0.3 0.3 0.3 0.3 0.3 0.3 # language model weights [weight-l] 0.5000 0.5000 # translation model weights [weight-t] 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 # no generation models, no weight-generation section # word penalty [weight-w] -1 -1 [distortion-limit] 12 --------------------------------------------------------- So please, can someone help me and rewrite this config file so it can work in version 1.0? I also need some Python sample code for translation. I am using xmlrpc in Python, and earlier I sent HTTP requests with: import xmlrpclib client = xmlrpclib.ServerProxy('http://localhost:8080') client.translate({'text': 'some text', 'system': 'en-ar'}) but now it seems there is no more 'system' parameter and Moses always uses the default settings.
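
    Until a multi-model ini works, one hedged workaround commonly used with mosesserver is to run two server instances, one per direction, each with a single-system moses.ini, and pick the direction client-side; the ports here are assumptions:

        import xmlrpclib

        # one mosesserver process per translation direction
        clients = {
            'ar-en': xmlrpclib.ServerProxy('http://localhost:8080/RPC2'),
            'en-ar': xmlrpclib.ServerProxy('http://localhost:8081/RPC2'),
        }

        result = clients['en-ar'].translate({'text': 'some text'})
        print result['text']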

  • Help recovering broken OS (permissions issue)

    - by Guandalino
    (At the bottom there is an important update.) I was doing experiments in order to back up a remote account to my local system, Ubuntu 12.04 LTS. I'm not confident with duplicity and probably, due to wrong syntax, some local files have been replaced with remote files. This is just a supposition; I'm not sure this is the real cause of the OS corruption. The corruption happened after experimenting with backups, so I think I did something wrong in this regard. I became aware there was a problem when I tried to access a command using sudo: $ sudo ls sudo: unable to open /etc/sudoers: Permission denied sudo: no valid sudoers sources found, quitting sudo: unable to initialize policy plugin This is how /etc/sudoers looks: $ ls -ald /etc/sudoers -r--r----- 1 root root 788 Oct 2 18:30 /etc/sudoers At this point I tried to reboot and now this is the message I get: The system is running in low graphics mode. Your screen, graphics card and input device settings could not be detected correctly. You will need to configure these yourself. I tried to follow the wizard to configure these settings, but without luck (the system prevents me going on when I press "Next"). The thing that makes me a bit less worried is that all the data on the disk seems readable and I'm able to access it using a live CD. I ran memtest and RAM seems to be OK. Do you have any idea how to recover my system? I'm very glad to provide further information, just let me know what info could be helpful. UPDATE. The issue is about wrong permissions, and this is how I discovered it: I mounted the root partition of the broken OS on /mnt/broken/ (live CD) and did ls /mnt/broken/. I got a permission denied error, while I expected to get the directory listing. I had to do sudo ls /mnt/broken/ and this worked. Thus without root permission via sudo it's impossible to access the root of the broken OS. The current output of ls -ld /mnt/broken/ is: drwxr-x--- 29 1000 812 4096 2012-12-08 21:58 /mnt/broken Any thoughts on how to restore the old (working) set of permissions?
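
    Given the ls -ld output above, the damage actually observed is on / itself (owner 1000:812, no world access), while /etc/sudoers already shows the correct 0440 root:root. A hedged repair sketch from the live CD, limited to what the question shows:

        sudo chown root:root /mnt/broken
        sudo chmod 755 /mnt/broken    # / needs world read+execute or nothing under it is reachable

    If other top-level directories were hit too, comparing ls -ld /mnt/broken/* against a healthy 12.04 install shows which ones to fix the same way.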

  • How do I mount a "DiskSecure Multiboot" partition?

    - by ????
    For a hard drive that has 4 or 5 partitions, I was able to mount one of them using an Ubuntu LiveCD: sudo mount /dev/sda1 /mnt but is there a way to mount the other partitions? (if using sudo fdisk -l, it only shows /dev/sda) GParted's snapshot: [screenshot omitted]. Right now, the fdisk info is as follows: ubuntu@ubuntu:~$ sudo fdisk -l /dev/sda Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x1aca8ea5 Device Boot Start End Blocks Id System /dev/sda1 284993226 350602558 32804666+ 7 HPFS/NTFS/exFAT and then ubuntu@ubuntu:/mnt$ sudo fdisk -l /dev/sda1 Disk /dev/sda1: 33.6 GB, 33591978496 bytes 255 heads, 63 sectors/track, 4083 cylinders, total 65609333 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x2052474d This doesn't look like a partition table Probably you selected the wrong device. Device Boot Start End Blocks Id System /dev/sda1p1 ? 6579571 1924427647 958924038+ 70 DiskSecure Multi-Boot /dev/sda1p2 ? 1953251627 3771827541 909287957+ 43 Unknown /dev/sda1p3 ? 225735265 225735274 5 72 Unknown /dev/sda1p4 2642411520 2642463409 25945 0 Empty Partition table entries are not in disk order Per @lgarzo's request, parted info is: ubuntu@ubuntu:/mnt$ sudo parted /dev/sda print Model: ATA ST3320820AS (scsi) Disk /dev/sda: 320GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 146GB 180GB 33.6GB primary ntfs boot The command sudo mount /dev/sda1p2 /mnt won't work.
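
    Since fdisk itself warns that the table inside /dev/sda1 is bogus, a hedged next step is a scanner that searches for lost partition boundaries rather than trusting the table:

        sudo apt-get install testdisk
        sudo testdisk /dev/sda    # Analyse -> Quick Search lists recoverable partitions

    If TestDisk finds real filesystems, it can rewrite a fixed table, or at least report their offsets for a loop mount via losetup -o.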

  • Mount SMB / AFP 13.10

    - by Jeffery
    I cannot seem to get Ubuntu to mount a Mac share via SMB or AFP. I've tried the following... AFP: apt-get install afpfs-ng-utils mount_afp afp://user:password@localip/share /mnt/share Error given: "Could not connect, never got a reponse to getstatus, Connection timed out". Which is odd, as I can access the share just fine from a Mac. SMB: apt-get install cifs-utils nano /etc/fstab added the following line "//localip/share /mnt/share cifs username=user,password=pass,iocharset=utf8,sec=nltm 0 0" mount -a Error given: root@Asrock:~# mount -a -vvv mount: fstab path: "/etc/fstab" mount: mtab path: "/etc/mtab" mount: lock path: "/etc/mtab~" mount: temp path: "/etc/mtab.tmp" mount: UID: 0 mount: eUID: 0 mount: spec: "//10.0.1.3/NAS" mount: node: "/mnt/NAS" mount: types: "cifs" mount: opts: "username=user,password=pass,iocharset=utf8,sec=nltm" mount: external mount: argv[0] = "/sbin/mount.cifs" mount: external mount: argv[1] = "//10.0.1.3/NAS" mount: external mount: argv[2] = "/mnt/NAS" mount: external mount: argv[3] = "-v" mount: external mount: argv[4] = "-o" mount: external mount: argv[5] = "rw,username=user,password=pass,iocharset=utf8,sec=nltm" mount.cifs kernel mount options: ip=10.0.1.3,unc=\\10.0.1.3\NAS,iocharset=utf8,sec=nltm,user=user,pass=* mount error(22): Invalid argument Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) I don't really care which it uses, I just want it to work! Am I doing something wrong?
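
    One detail stands out in the kernel mount options echoed above: sec=nltm is a typo for sec=ntlm, and an unrecognized sec= value is exactly the kind of thing that yields mount error(22): Invalid argument. A hedged corrected fstab line (credentials as in the question):

        //10.0.1.3/NAS  /mnt/NAS  cifs  username=user,password=pass,iocharset=utf8,sec=ntlm  0  0

    If the Mac runs a recent OS X, sec=ntlmssp may be required instead, since newer SMB stacks refuse plain NTLM.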

  • 12.04 ext4 - cannot create regular file/No space left - with a lot of space and inodes

    - by user1434058
    This seems similar to EXT4 "No space left on device (28)" incorrect, but there is no explanation there. I created an ext4 filesystem on a RAID 1 array with: mke2fs -t ext4 -T small /dev/md0 Copying a single directory with many tiny files, I get: cp: cannot create regular file `/mnt/raid1_new/pics/pic3412.jpg': No space left on device space used 5% inodes used 1% I manually tried: cp /source/test1.jpg /mnt/raid1_new/pics/test1.jpg --- error cp /source/test1.jpg /mnt/raid1_new/pics/test2.jpg --- ERROR cp /source/test1.jpg /mnt/raid1_new/pics/test3.jpg --- no error Notes: RAID 1 disks are error free. I tried mv instead of cp and got the same thing. I tried omitting -T small with no effect. Can somebody help me understand this magic?
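
    One candidate that matches these symptoms (ENOSPC with space and inodes free, and some names failing while others succeed) is the ext4 directory hash index filling up or colliding: dir_index directories have a fixed-depth htree, and a failure depends on the hash of the new name, which is why test2.jpg can fail while test3.jpg works. A hedged check-and-experiment sketch:

        dmesg | grep -i 'index full'    # ext4 logs a warning when an htree directory is full
        tune2fs -l /dev/md0 | grep features
        # as an experiment only, with the filesystem unmounted: disable dir_index and re-test
        tune2fs -O ^dir_index /dev/md0 && e2fsck -fD /dev/md0

    Splitting the files across several subdirectories avoids the limit regardless.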

  • Wildcards not being substituted

    - by user21463
    #!/bin/bash loc=`echo ~/.gvfs/*/DCIM/100_FUJI` rm -f /mnt/fujifilmA100 ln -s "$loc" /mnt/fujifilmA100 For some reason the * wildcard doesn't get substituted with the only possible value, and the variable gets given the value /home/chris/.gvfs/*/DCIM/100_FUJI. Does anyone have an idea why? Please note: if glob expansion fails, the pattern is not substituted. I ran the commands: chris@comp2008:~$ loc=`echo ~/.gvfs/*/DCIM/100_FUJI ` chris@comp2008:~$ echo $loc /home/chris/.gvfs/gphoto2 mount on usb%3A001,008/DCIM/100_FUJI So we can see the expansion should work. I have now switched to using: loc=`find ~/.gvfs -name 100_FUJI` I am just curious why it doesn't work as is. Debugging output using sh -x: echo /home/chris/.gvfs/*/DCIM/100_FUJI loc=/home/chris/.gvfs/*/DCIM/100_FUJI rm -f /mnt/fujifilmA100 ln -s /home/chris/.gvfs/*/DCIM/100_FUJI /mnt/fujifilmA100
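
    A hedged explanation consistent with the trace: ~/.gvfs is a FUSE mount, and FUSE mounts are by default visible only to the user who mounted them, so running the script as anyone else (sudo, cron, or a root shell, which writing to /mnt suggests) makes the glob match nothing, and the shell leaves the pattern as-is. A sketch that makes the failure loud instead of silent:

        #!/bin/bash
        shopt -s failglob            # an unmatched glob now aborts with an error
        loc=(~/.gvfs/*/DCIM/100_FUJI)
        ln -sfn "${loc[0]}" /mnt/fujifilmA100

    The find-based version works around it for the same reason it hides the problem: find simply prints nothing when it cannot see the mount.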

  • Yum install in chroot directory

    - by pulegium
    I'm trying to install the Base group on a mounted volume. Here's the custom yum.conf that I'm using: [main] cachedir=/var/cache/yum/ debuglevel=2 logfile=/var/log/yum.log exclude=*-debuginfo obsoletes=1 gpgcheck=0 reposdir=/dev/null [base] name=Fedora 13 - i386 baseurl=file:///media/Fedora\ 13\ i386\ DVD/ enabled=1 [updates] name=Fedora 13 - i386 - Updates baseurl=http://mirror.sov.uk.goscomb.net/fedora/linux/updates/13/i386/ enabled=1 When I run # yum -c yum.conf --installroot=/mnt groupinstall Base I would expect yum to install everything under /mnt. But it keeps on saying: [...] Package irda-utils-0.9.18-10.fc12.i686 already installed and latest version Package time-1.7-37.fc12.i686 already installed and latest version Package man-pages-3.23-6.fc13.noarch already installed and latest version Package talk-0.17-33.2.4.i686 already installed and latest version Package pam_passwdqc-1.0.5-6.fc13.i686 already installed and latest version [...] I tried rpm --base=/mnt --initdb and then used rpm to install fedora-release (which worked and installed the package under /mnt). But yum keeps on saying that all packages are installed. Any ideas?...
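
    Yum decides "already installed" from the rpm database inside the install root, so the state of /mnt/var/lib/rpm is what matters here. A hedged bootstrap sketch (note the rpm flag for this is --root, not --base):

        rm -rf /mnt/var/lib/rpm      # destructive: only on a fresh target, to clear a polluted rpmdb
        rpm --root=/mnt --initdb
        yum -c yum.conf --installroot=/mnt groupinstall Base

    If an rpmdb was ever copied into /mnt (or the earlier initdb attempt seeded odd entries), that alone could explain every package reporting as installed.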

  • Variable directory names over SCP

    - by nedm
    We have a backup routine that previously ran from one disk to another on the same server, but have recently moved the source data to a remote server and are trying to replicate the job via scp. We need to run the script on the target server, and we've set up key-based scp (no username/password required) between the two servers. Using scp to copy specific files and directories works perfectly: scp -r -p -B [email protected]:/mnt/disk1/bsource/filename.txt /mnt/disk2/btarget/ However, our previous routine iterates through directories on the source disk to determine which files to copy, then runs them individually through gpg encryption. Is there any way to do this only by using scp? Again, this script needs to run from the target server, and the user the job runs under only has scp (no ssh) access to the target system. The old job would look something like this: #Change to source dir cd /mnt/disk1 #Create variable to store # directories named by date YYYYMMDD j="20000101/" #Iterate though directories in the current dir # to get the most recent folder name for i in $(ls -d */); do if [ "$j" \< "$i" ]; then j=${i%/*} fi done #Encrypt individual files from $j to target directory cd ./${j%%}/bsource/ for k in $(ls -p | grep -v /$); do sudo /usr/bin/gpg -e -r "Backup Key" --batch --no-tty -o "/mnt/disk2/btarget/$k.gpg" "$/mnt/disk1/$j/bsource/$k" done Can anyone suggest how to do this via scp from the target system? Thanks in advance.
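
    If a dated directory is created every day, the newest name can be computed locally instead of listed remotely, which keeps the whole job inside scp. A hedged sketch (the staging path is an assumption):

        #!/bin/bash
        j=$(date +%Y%m%d)        # assumes today's YYYYMMDD directory exists on the source
        stage=/tmp/bstage
        mkdir -p "$stage"
        scp -r -p -B [email protected]:/mnt/disk1/$j/bsource/ "$stage/"
        for k in "$stage"/bsource/*; do
            [ -f "$k" ] || continue
            sudo /usr/bin/gpg -e -r "Backup Key" --batch --no-tty \
                -o "/mnt/disk2/btarget/$(basename "$k").gpg" "$k"
        done
        rm -rf "$stage"

    If the newest directory cannot be predicted from the date, scp alone cannot list the remote side; that genuinely needs ssh (or sftp, which can ls) on some account.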

  • VirtualBox - How to add disks and move var, opt and home to them?

    - by Jarrod Roberson
    I created a CentOS 5.6 guest OS virtual machine. I made the first disk 10GB, and I am rapidly outgrowing it. It was suggested that I make disks for my /var, /opt and /home directories and move them, so I can better manage the disks for backing up and whatnot. This sounds like a good idea. I know how to create the disks in VirtualBox. I have dug around Google and the internet in general and all my attempts at doing this have failed. Snapshots are awesome! I can get the drives fdisked, and I have had limited success mounting them to /mnt/var, /mnt/home and /mnt/opt, but even in single-user mode (init 1) I can't get the entire contents of the directories to move over, and then the machine won't reboot correctly. cd /var cp * -ax /mnt/var The /var directory in particular is not wanting to move everything to the new location. How do I format, mount and move the /var, /opt and /home to my new disks?
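
    One likely culprit in the snippet above: a bare * glob skips dotfiles, so hidden entries never get copied. A hedged sketch of the usual procedure for one directory (/dev/sdb1 is an assumed device name; repeat per disk for /opt and /home):

        mount /dev/sdb1 /mnt/var
        cp -ax /var/. /mnt/var/     # /var/. includes hidden entries, unlike *
        mv /var /var.old            # keep the original until the new layout proves out
        mkdir /var
        echo '/dev/sdb1 /var ext3 defaults 0 2' >> /etc/fstab
        mount /var

    After a clean reboot, /var.old can be removed to reclaim the space.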

  • Is there a way to do something like LVM over NFS?

    - by warren
    I realize that since NFS is not block-level, LVM can't be used directly. However: is there a way to combine multiple NFS exports (from, say, 3 servers) into one mount point on a different server? Specifically, I'd like to be able to do this on RHEL 4 (or 5, and re-export the combined mount to my RHEL 4 server). Expansion: the reason I pegged LVM is that I want a bunch of exported mounts (servera:/mnt/export, serverb:/mnt/export, serverc:/mnt/export, etc) to all mount at /mnt/space, so that /mnt/space on this server (serverx) appears as one large filesystem. Yes, I know that re-exporting is generally a Bad Thing™ but thought it might work, if there was a way to accomplish this on a newer release as opposed to an older one. From reading the unionfs docs, it appears that I can't use it over a remote connection - have I misread it? More accurately, since UnionFS merges the contents of multiple branches, but makes them appear as one, it doesn't seem to go in reverse: I'm trying to mount a bunch of NFS points in a merged fashion, then write to them - not caring where the data goes, à la LVM.
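
    Union filesystems do run on top of NFS mounts - the branches just have to be mounted locally first. A hedged sketch with mergerfs (a current tool; whether anything comparable installs on RHEL 4 is doubtful - mhddfs was the era-appropriate equivalent):

        mount servera:/mnt/export /mnt/a
        mount serverb:/mnt/export /mnt/b
        mount serverc:/mnt/export /mnt/c
        mergerfs /mnt/a:/mnt/b:/mnt/c /mnt/space

    Writes land on whichever branch the create policy picks, which matches the "not caring where data goes" requirement, though unlike LVM a single file can never exceed one branch's free space.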

  • Why does apache throw 403 on the index file after install?

    - by den-javamaniac
    Hi. I've just installed apache and php from sources using the following commands: ./configure --prefix="/mnt/workspace/servers/web/apache-2.2.17" \ --enable-info --enable-rewrite --enable-usertrack --enable-mime-magic for apache and ./configure --with-apxs2=/mnt/workspace/servers/web/apache-2.2.17/bin/apxs \ --prefix=/mnt/workspace/servers/web/apache-2.2.17/php \ --with-config-file-path=/mnt/workspace/servers/web/apache-2.2.17/php \ --with-mysql=mysqlnd for php. After adjusting the configuration (httpd.conf) and starting apache, it gives a 403 response for http://localhost:8060/index.html (port 8060 is the one in use). There are the following directory settings in httpd.conf: <Directory "/mnt/workspace/servers/web/apache-2.2.17/htdocs"> ... Order allow,deny Allow from all ... </Directory> <IfModule dir_module> DirectoryIndex index.html index.php </IfModule> It should be noted that I've got apache on a mounted partition (the default auto-mount configured while installing Ubuntu). Log Files Access log: ::1 - - [12/Feb/2011:17:48:30 +0200] "GET / HTTP/1.1" 403 202 ::1 - - [12/Feb/2011:17:48:31 +0200] "GET /favicon.ico HTTP/1.1" 403 213 ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /index.html HTTP/1.1" 403 212 ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /favicon.ico HTTP/1.1" 403 213 ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /index.html HTTP/1.1" 403 212 ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /favicon.ico HTTP/1.1" 403 213 Error log: [Sat Feb 12 18:59:13 2011] [notice] Apache/2.2.17 (Unix) PHP/5.3.5 configured -- resuming normal operations [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to / denied [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to /favicon.ico denied [Sat Feb 12 18:59:36 2011] [error] [client ::1] (13)Permission denied: access to /index.html denied
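
    (13)Permission denied in the error log comes from the OS, not from the Allow/Deny directives: the user apache runs as needs execute (+x) on every directory leading down to htdocs, which easily breaks on a separately mounted partition. A hedged check-and-fix sketch:

        namei -m /mnt/workspace/servers/web/apache-2.2.17/htdocs/index.html
        # give traversal rights to each component namei flags as lacking o+x, e.g.:
        chmod o+x /mnt/workspace /mnt/workspace/servers /mnt/workspace/servers/web

    If the partition itself is mounted with restrictive ownership or options, fixing the mount options is the alternative.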
