Search Results

Search found 2282 results on 92 pages for 'filesystem'.

Page 78 of 92

  • How to keep subtree removal (`rm -rf`) from starving other processes for Disk I/O?

    - by David Eyk
    We have a very large (multi-GB) Nginx cache directory for a busy site, which we occasionally need to clear all at once. I've solved this in the past by moving the cache folder to a new path, making a new cache folder at the old path, and then rm -rfing the old cache folder. Lately, however, when I need to clear the cache on a busy morning, the I/O from rm -rf starves my server processes of disk access, as both Nginx and the server it fronts are read-intensive. I can watch the load average climb while the CPUs sit idle and rm -rf takes 98-99% of disk I/O in iotop. I've tried ionice -c 3 when invoking rm, but it seems to have no appreciable effect on the observed behavior. Is there any way to tame rm -rf to share the disk more fairly? Do I need to use a different technique that will take its cues from ionice? Update: the filesystem in question is an AWS EC2 instance store (the primary disk is EBS). The /etc/fstab entry looks like this:

      /dev/xvdb  /mnt  auto  defaults,nobootwait,comment=cloudconfig  0 2
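
    A hedged sketch of the batching workaround (the paths and pause length are illustrative; note also that ionice classes only take effect under the CFQ I/O scheduler, which EC2 instance-store volumes often don't use, and that would explain why -c 3 did nothing):

      # check which scheduler the device actually uses
      cat /sys/block/xvdb/queue/scheduler
      # delete in small bursts, yielding the disk between files
      find /mnt/cache-old -type f -print0 |
      while IFS= read -r -d '' f; do
          rm -f -- "$f"
          sleep 0.01     # tune upward for less I/O pressure
      done
      rm -rf /mnt/cache-old    # clean up the now-empty directory tree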

    Read the article

  • How do I prevent a tar pipe from causing swapping?

    - by Jeff Shattock
    I have a rather large filesystem that I need to transfer from one Linux server to another. I figured the best way to do this was via a tar/netcat pipe arrangement, something like tar c . | pv | nc blah blah blah And it works great, the network stays fairly saturated, life is good. Until the source machine starts swapping. The files are on a RAID on the source system, so the read speed is much faster than the write speed on the other end. Since the dest machine hasn't picked up the data yet, the source machine needs to stick it somewhere, so into RAM it goes, until there is no more free RAM. It then starts swapping, which is horribly painful since that machine has its OS installed on a somewhat slow CF card. Both machines have 4GB of physical RAM and 64-bit Ubuntu 9.04 server, with a GigE link between them. How do I prevent this swapping? Can I put a "speed limit" on the tar or netcat process so that the transfer speed doesn't overwhelm the write throughput on the destination end? The man pages didn't list anything, but there might be something I'm overlooking.
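
    For what it's worth, pv itself can be the speed limit via its -L flag; a hedged one-liner (the 30m cap and host/port are assumptions standing in for the destination's sustained write rate):

      tar c . | pv -L 30m | nc desthost 9000

    Capping the pipe at or just below what the destination can actually write keeps the source from buffering gigabytes ahead of the receiver, which is what fills RAM and triggers the swapping.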

    Read the article

  • Virtualbox Networking: XP Guest, Ubuntu Host: Connecting to Windows servers & local network?

    - by user51833
    Here's what I have: Windows XP running in VirtualBox 3.0.8_OSE r53138; host OS = Ubuntu 9.10 "Karmic Koala"; a Windows network in my office with smb fileservers; the guest OS is connected to the internet and is sharing folders with the host OS; limited networking expertise. Here's what I actually need to do: use MS Outlook in my XP guest with all its calendar-sharing features and stuff (if this is all done through the internet then great) - or find a Linux app that can do the same stuff; map Windows network servers, e.g. smb://server01/ in my XP guest (I can already access these in Ubuntu). Here's what I've tried with no luck: entering the server address (example above) in my XP guest's Windows Explorer address bar (got a "could not access the file, path or drive" error message - maybe if I could enter login/pass information? But I don't know how); mapping the server as a network drive (Windows could not find the path); mounting the server as one of my shared folders (I couldn't find it through the shared folders browser in VirtualBox - is there somewhere in the Linux filesystem that Ubuntu keeps links to mounted servers?).
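
    A hedged first thing to try inside the XP guest (DOMAIN, the share name, and the drive letter are placeholders; Windows wants UNC paths, not smb:// URLs):

      C:\> net use Z: \\server01\share /user:DOMAIN\yourlogin *

    The trailing * makes Windows prompt for the password, which covers the "maybe if I could enter login/pass information" guess above.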

    Read the article

  • Multi-authentication scenario for a public internet service using Kerberos

    - by StrangeLoop
    I have a public web server which has users coming from the internet (via HTTPS) and from a corporate intranet. I wish to use Kerberos authentication for the intranet users so that they are automatically logged in to the web application without needing to provide any login/password (assuming they are already logged in to the Windows domain). For users coming from the internet I want to provide traditional basic/form-based authentication. User/password data for these users would be stored internally in a database used by the application. The web application will be configured to use Kerberos authentication for users coming from specific intranet IP networks, and basic/form-based authentication for the rest. From a security perspective, are there risks involved in this kind of setup, or is it a generally accepted solution? My understanding is that the server doesn't need access to the KDC (see Kerberos authentication, service host and access to KDC) and can be completely isolated from AD and the corporate intranet. The server has a keytab file stored locally that is used to decrypt tickets sent by users coming from the intranet. The tickets only contain the username and domain of the incoming user; the server never sees the passwords of authenticated users. If the server were hacked and the keytab file compromised, an attacker could forge tickets for any domain user and get access to the web application as any user. But typically this is the case anyway if a hacker gains access to the keytab file on the local filesystem. The encryption key contained in the keytab file is derived from the service account password in AD and is in hashed form; I guess it is very difficult to brute-force this password if strong Kerberos encryption like AES256-SHA1 is used. As the server has no network access to the intranet, even the compromised service account couldn't be directly used for anything.

    Read the article

  • Moving a site from IIS6 to IIS7.5

    - by Sukotto
    I need to move a site off of IIS6 (Win Server 2003) and onto IIS7.5 (Win Server 2008) as soon as possible. Preferably tomorrow. The site itself is a delightful mix of classic ASP (VBScript) and one-off ASP.NET (C#) applications (each ASP.NET app is in its own virtual dir and has a self-contained web.config). In case it's relevant, this is a sort of research site made up of 40 or 50 unconnected microsites. Each microsite is typically a simple form whose submission runs a stored proc on a SQL Server db and displays a chart and/or table of the results. There is very little security to worry about. The database connection info is in a central file (in the case of the classic ASP) or in each app's individual web.config (lots of duplication there). To add a little spice to the exercise: I have no idea how to admin IIS; the company no longer employs the sysadmin or the guys who set this thing up (they're not going to employ me much longer either, but my sense of professional pride does not permit me to just walk away from this task); and the servers are on mutually firewalled networks, so I have to perform a convoluted, multi-step process to copy anything from one to the other. Would someone please point me to a crash-course tutorial for accomplishing the above? I have: a complete copy of the site's filesystem on the new box; the 3rd-party charting tool installed on the new system; and a config.xml file from the "All Tasks - Save configuration to a file" right-click menu. There doesn't seem to be a way to import it on the new system, however. The newer IIS Manager has a completely different UI and I'm totally lost. Please help.
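
    One hedged route is Microsoft's Web Deploy (msdeploy) tool, which can package an IIS6 site with its content and restore it on IIS7.5; this assumes Web Deploy can be installed on both boxes and that W3SVC/1 is the right metabase ID for the site:

      rem on the IIS6 / Server 2003 box: package site 1
      msdeploy -verb:sync -source:metaKey=lm/w3svc/1 -dest:package=c:\site1.zip
      rem carry site1.zip across the firewalls, then on the IIS7.5 box:
      msdeploy -verb:sync -source:package=c:\site1.zip -dest:metaKey=lm/w3svc/1

    The config.xml saved from the IIS6 MMC cannot be imported by IIS7.5's manager, which is why that route dead-ends.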

    Read the article

  • Why is there no /usr/bin/ in Windows? Would it be dangerous to add the entire Program Files tree to the path?

    - by dotancohen
    I am a Linux user spending some time in Windows, and I'm trying to understand some of the Windows paradigms instead of fighting them. I notice that each program installed in the traditional manner (i.e. via orgasmic installers: Yes, Yes, Yes, Finish) adds its executables to C:/Program Files/foo/bar.exe and then adds a shortcut to the Desktop / Start Menu containing the entire path. However, there is no common directory with links to the software, i.e. no C:/bin/bar.exe which would link to C:/Program Files/foo/bar.exe. Therefore, after installing an application, the only way to use it is via the clicky-clicky menus or by navigating to the executable in the filesystem. One cannot simply Win-R to open the Run dialogue and then type bar or bar.exe, as is possible with notepad or mspaint. I realize that Windows 8 improves on this with the otherwise horrendous Start Screen, which does support typing the name of the app, but again this depends on the app having registered itself for such. Would I be doing any harm by adding C:/Program Files recursively to the Windows path? I do realize that there will be name collisions (e.g. uninstall.exe), but could there be other issues?
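
    For what it's worth, Windows' closest analogue to /usr/bin is not PATH but the App Paths registry key, which is exactly what makes notepad and mspaint resolvable from Win-R; a hedged example registering the hypothetical bar.exe by hand:

      reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\App Paths\bar.exe" /ve /d "C:\Program Files\foo\bar.exe" /f

    Apps registered there launch from the Run dialogue without any PATH changes, which sidesteps the name-collision worry entirely.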

    Read the article

  • What's a good solution for file-tagging in linux?

    - by julien
    I've been looking for a way to tag my files and search/filter them based on those tags. Here are my (updated) requirements:

      - any file readable by the user can be tagged freely
      - a user can search for files matching one or several tags
      - files can be moved around without losing the previously associated tags
      - the system could be backed up easily
      - no dependencies on any desktop environment
      - if any gui is involved, there must be a cli fallback

    I've been hoping for some basic filesystem & coreutils hackery to handle this, but I haven't thought about it hard enough yet. Meanwhile I'll review Beagle and metatracker, which have been mentioned here, and see how they perform. OK, so Beagle has huge GNOME dependencies, and tracker is okay-ish but still has some dependencies I don't like... Been doing some more research, and the way to go could very well be extended file attributes. That's a native solution for most recent filesystems, but they aren't very well supported yet (most coreutils destroy them by default; cp, for example, needs the -a flag to preserve them). I would like to hear some thoughts on using them while I try my hand at some hacks myself, even though this might warrant a new question.
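
    A minimal sketch of the xattr approach (needs the attr package; user.tags is an arbitrary naming convention, not a standard):

      setfattr -n user.tags -v "project-x,todo" report.pdf    # tag a file
      getfattr -n user.tags report.pdf                        # read tags back
      # brute-force tag search: slow, but pure find + attr
      find ~ -type f -exec sh -c \
        'getfattr -n user.tags --only-values "$1" 2>/dev/null | grep -q todo' _ {} \; -print

    As noted above, cp needs -a (or --preserve=xattr) to keep the tags, and backup tools vary (rsync needs -X), so the "files can be moved without losing tags" requirement only holds with care.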

    Read the article

  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but nginx is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), but simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf:

      location / {
          root /var/www/htdocs/;
          index index.html;
          autoindex on;
      }

      location /hg {
          fastcgi_pass unix:/var/run/hg-fastcgi.socket;
          include fastcgi_params;
          if ($request_uri ~ ^/hg([^?#]*)) {
              set $rewritten_uri $1;
          }
          limit_except GET {
              allow all;
              deny all;
              auth_basic "hg secured repos";
              auth_basic_user_file /var/trac.htpasswd;
          }
          fastcgi_param SCRIPT_NAME "/hg";
          fastcgi_param PATH_INFO $rewritten_uri;
          # for authentication
          fastcgi_param AUTH_USER $remote_user;
          fastcgi_param REMOTE_USER $remote_user;
          #fastcgi_pass_header Authorization;
          #fastcgi_intercept_errors on;
      }

    GETs work fine, but a POST produces this error in the error_log:

      2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com"

    What could possibly be the issue? I'm trying to allow read-only access via GETs to the page, but require authorization when using hg push to the same url, which sends a POST request.
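
    Not a diagnosis, but a hedged restructuring worth trying: the if block with a regex capture is a classic source of surprises inside a location, and nginx's fastcgi_split_path_info can compute the same values without it:

      location /hg {
          fastcgi_split_path_info ^(/hg)(.*)$;
          fastcgi_param SCRIPT_NAME $fastcgi_script_name;
          fastcgi_param PATH_INFO $fastcgi_path_info;
          include fastcgi_params;
          fastcgi_pass unix:/var/run/hg-fastcgi.socket;
      }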

    Read the article

  • GRUB refuses to install to RAID array

    - by ronno
    I have a software RAID 0 setup dual-booting Windows 7 and Ubuntu 12.04. The GRUB bootloader that is already on the hard drive seems to work fine. However, since the latest package update for grub, it refuses to install the new version to the hard disk. grub-install throws the following error:

      /usr/sbin/grub-probe: error: cannot find a GRUB drive for /dev/mapper/<raid name>_RAID0p9. Check your device.map.
      Auto-detection of a filesystem of /dev/mapper/<raid name>_RAID0p9 failed.
      Try with --recheck.
      If the problem persists please report this together with the output of "/usr/sbin/grub-probe --device-map=/boot/grub/device.map --target=fs -v /boot/grub" to <[email protected]>

    update-grub prints the same "/usr/sbin/grub-probe: error: cannot find a GRUB drive for /dev/mapper/<raid name>_RAID0p9. Check your device.map." on every alternate line. I don't understand what exactly is going on. I'm afraid to reinstall the grub package because it might mess up the boot, which currently works fine. Is it safe to just ignore this?
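
    A hedged first step before reinstalling anything: the message points at a stale device.map, which can be regenerated (keep a backup so the currently working boot stays recoverable):

      cp /boot/grub/device.map /boot/grub/device.map.bak
      grub-mkdevicemap
      grub-install --recheck /dev/sda    # /dev/sda is an assumption; substitute the real boot device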

    Read the article

  • P2V Wouldn't Boot, Rebuilt initrd, Need to Clean Up

    - by Mike Soule
    We have a CentOS 5.4 server (build 2.6.18-164.el5xen). We went to P2V this server so we can have redundancy; the physical box only has one PSU. The P2V only completed 99% of the way. We have a VMware ticket open, but they marked it low priority. I was able to boot into a rescue disc of Red Hat 5.4 and rebuild the initrd with the help of this blog post. Now the only issue is that the original server had a modified initrd, which was also from a different OS build and made by an outside provider. We do not have a document outlining the modifications. My question is: is it at all possible to copy the initrd off the physical server, replace it on the virtual, and somehow have the virtual machine boot? Thanks for any input. Edit: I copied the initrd img from the physical and it recreated the original issue. Here is a screen capture of the error: http://i.imgur.com/MqC73.jpg Edit2: the relevant part of the init script reads:

      echo Scanning logical volumes
      lvm vgscan --ignorelockingfailure
      echo Activating logical volumes
      lvm vgchange -ay --ignorelockingfailure VolGroup00
      resume /dev/VolGroup00/LogVol01
      echo Creating root device.
      mkrootdev -t ext3 -o defaults,ro /dev/VolGroup00/LogVol00
      echo Mounting root filesystem.
      mount /sysroot
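
    A hedged middle path: rather than transplanting the physical initrd verbatim, rebuild one on the VM that contains the storage drivers the virtual hardware needs (the module names below are assumptions that depend on which SCSI controller VMware presents):

      mkinitrd -f --with=mptspi --with=mptscsih \
          /boot/initrd-2.6.18-164.el5xen.img 2.6.18-164.el5xen

    The outside provider's site-specific logic would still be absent, but this at least separates "missing driver" failures from "missing customization" failures.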

    Read the article

  • Relayout LVM Disk

    - by Tom
    I have an Ubuntu 11.10 system with two 500GB disks. The partition tables look like this:

      /dev/sda1     primary    465.52GB
      /dev/sda2     extended   243.17MB
        /dev/sda5   logical    243.14MB
      /dev/sdb1     primary    465.76GB

    sda1 and sdb1 are in a single LVM volume group containing a single logical volume holding a single filesystem, which is mounted as /. sda5 is mounted as /boot. The problem comes when I want to upgrade to Ubuntu 12.04, which requires at least 247MB free on /boot. So I need to reduce the size of sda1 so that I can increase the size of sda2 and sda5. How the heck do I do that? I can find how to shrink a logical volume, but I'm not at all clear on how to clear out the end part of sda1 so that I can shrink the physical volume. Does pvresize just deal with this automagically? Or is that wild wishful thinking? I guess the alternatives are to back everything up onto something or other and recreate the thing from scratch, or find out whether GRUB2 supports using LVM for /boot.
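
    For what it's worth, pvresize can shrink a PV, but only if no allocated extents sit beyond the new boundary, so it is not fully automagic; a hedged outline (sizes are examples, and all of this wants a verified backup first):

      pvs -v --segments /dev/sda1          # see where the allocated extents live
      pvmove --alloc anywhere /dev/sda1    # nudge extents off the tail, if the VG has room
      pvresize --setphysicalvolumesize 464G /dev/sda1
      # then shrink sda1 with parted/fdisk and grow sda2/sda5

    If the volume group has no free extents, the filesystem and logical volume have to be shrunk first (resize2fs, then lvreduce) to free some.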

    Read the article

  • Is it a good idea to switch to an SSD to use less battery?

    - by Walter Maier-Murdnelch
    I am thinking of buying an SSD for my laptop, mainly for the purpose of extended operating time when running on battery. At the moment I use a Hitachi HTS545032B9A300 (320GB) (datasheet) as the main drive and a Seagate Momentus 5400.3 120GB as a secondary drive. I dual-boot Windows and Linux, but I don't need the Windows partition any longer; a 120GB SSD would be more than sufficient space-wise. Speed is not an issue for me: I make heavy use of tmpfs (ramdisk) within Linux, and transfers of bigger files mostly go through some network filesystem anyway, so a cheaper SSD should do. For the purpose of comparison I chose the OCZ Vertex Plus 120GB. Power consumption is always a big promotional point the industry uses to make me want to buy their SSDs, and some sheet on the OCZ page provides an astonishing comparison of desktop HDDs and SSDs. The numbers I got comparing my laptop HDD and their SSD were not really astonishing any longer:

      Hitachi 320GB HDD:
        Startup (W, peak, max.)     4.5
        Seek (W, avg.)              1.7
        Read / Write (W, avg.)      1.4
        Performance idle (W, avg.)  1.3
        Active idle (W, avg.)       0.8
        Low power idle (W, avg.)    0.5
        Standby (W, avg.)           0.2
        Sleep                       0.1

      OCZ 120GB SSD:
        Active   1.5W
        Standby  0.3W

    I see that there are differences, but they don't seem as large as I thought they were. And compared to the power consumption of the rest of my system, I wonder if it makes a difference at all. Have I just taken the wrong look at the whole thing, or would I be better off buying another battery for my laptop?
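
    A rough sanity check, with every number an assumption rather than a measurement: if the laptop as a whole draws about 12 W and the SSD saves roughly 0.5-1 W averaged over a mixed workload, that is only a 4-8% reduction, i.e. perhaps 10-20 extra minutes on a 4-hour battery, which is far less than a second battery would buy.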

    Read the article

  • How do I add a second disk to my ZFS root pool?

    - by ankimal
    I am trying to add a new disk to my ZFS root pool. Here is my current config:

      # zpool status
        pool: rpool
       state: ONLINE
       scrub: none requested
      config:
              NAME      STATE     READ WRITE CKSUM
              rpool     ONLINE       0     0     0
                c0d0s0  ONLINE       0     0     0
      errors: No known data errors

      bash-3.00# df -h
      Filesystem                      Size  Used  Avail Use% Mounted on
      rpool/ROOT/s10x_u7wos_08        311G   18G   293G   6% /
      swap                             14G  384K    14G   1% /etc/svc/volatile
      /usr/lib/libc/libc_hwcap1.so.1  311G   18G   293G   6% /lib/libc.so.1
      swap                             14G   52K    14G   1% /tmp
      swap                             14G   40K    14G   1% /var/run
      rpool/export                    293G   19K   293G   1% /export
      rpool/export/home               430G  138G   293G  32% /export/home
      rpool                           293G   36K   293G   1% /rpool

      # format
      Searching for disks...done
      AVAILABLE DISK SELECTIONS:
        0. c0d0 <DEFAULT cyl 60797 alt 2 hd 255 sec 63>
           /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
        1. c2d0 <Hitachi-JK1181YAHL0YK-0001-16777216.>
           /pci@0,0/pci-ide@1f,5/ide@1/cmdk@0,0

    Disk 1 above is the new disk I need to attach to expand my root pool (to give /export/home some extra space). If I try to attach the new disk to the pool:

      # zpool attach -f rpool c0d0s0 c2d0s0
      cannot attach c2d0s0 to c0d0s0: new device must be a single disk

      # uname -a
      SunOS dsol1 5.10 Generic_139556-08 i86pc i386 i86pc Solaris

    Any ideas?
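
    A hedged note on that error: a Solaris root pool must live on a slice carrying an SMI (not EFI) label, and root pools can only be mirrored, never striped, so the usual sequence is roughly:

      format -e c2d0                        # relabel SMI, make slice 0 span the disk
      zpool attach -f rpool c0d0s0 c2d0s0   # attaches as a mirror of the first disk

    Note this mirrors rpool rather than enlarging it; actually adding capacity for /export/home would mean creating a separate pool on c2d0, since a striped root pool is not supported.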

    Read the article

  • GRUB reporting wrong partition type

    - by plok
    It all started when I had to replace one of the disks in the software RAID 1 on this machine. From that moment on I have not been able to boot the Windows XP that is installed on the fourth hard drive, /dev/sdd. I am almost positive the problem is related not to Windows but to GRUB: if I unplug all the other hard drives so that the Windows XP disk becomes /dev/sda, it boots with no problem. The problem seems to be that GRUB detects the wrong partition type, which I understand suggests that something is really messed up. This is what I get when I try to follow the steps that until now had worked like a charm:

      grub> map (hd0) (hd3)
      grub> map (hd3) (hd0)
      grub> root (hd3,0)
       Filesystem type is ext2fs, partition type 0xfd

    0xfd? That doesn't make sense. /dev/sdb and sdc are 0xfd (Linux raid), but not /dev/sdd:

      edel:~# fdisk -l
      [...]
      Disk /dev/sdd: 250.0 GB, 250059350016 bytes
      255 heads, 63 sectors/track, 30401 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0x00048d89

         Device Boot  Start    End     Blocks  Id  System
      /dev/sdd1   *       1  30400  244187968+  7   HPFS/NTFS

      edel:/boot/grub# cat device.map
      (hd0) /dev/sda
      (hd1) /dev/sdb
      (hd2) /dev/sdc
      (hd3) /dev/sdd

    I have been trying to work this out for hours, to no avail. Can anyone point me in the right direction?
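
    One hedged check from the GRUB shell itself: GRUB legacy's geometry command prints the partition table it sees for a drive, which would show directly whether the BIOS drive behind (hd3) really is /dev/sdd or some other disk entirely:

      grub> geometry (hd3)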

    Read the article

  • Attempted hack on VPS, how to protect in future, what were they trying to do?

    - by Moin Zaman
    UPDATE: They're still here. Help me stop or trap them! Hi SF'ers, I've just had someone hack one of my clients' sites. They managed to change a file so that the checkout page on the site writes payment information to a text file. Fortunately or unfortunately they stuffed up: they had a typo in the code, which broke the site, so I came to know about it straight away. I have some inkling as to how they managed to do this: my website CMS has a file upload area where you can upload images and files to be used within the website. The uploads are limited to 2 folders. I found two suspicious files in these folders, and on examining the contents it looks like these files allow the hacker to view the server's filesystem, upload their own files, modify files, and even change registry keys?! I've deleted some files and changed passwords, and am in the process of trying to secure the CMS and limit file uploads by extension. Anything else you guys can suggest I do to try and find out more details about how they got in, and what else can I do to prevent this in future?
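
    A hedged starting point for the hunt (the paths and the two-week window are assumptions):

      # recently changed web-accessible files: candidates for planted shells
      find /var/www -type f -mtime -14 -ls
      # common obfuscation markers inside the upload folders
      grep -rIl -e 'base64_decode' -e 'eval(' -e 'shell_exec' /var/www/uploads

    It is also worth grepping the web server access logs for requests to the two suspicious files; that usually reveals the attacker's IP and how long they have had access.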

    Read the article

  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old; we have two hardware RAIDs set up on it with LVM over those. The second server is 64 disks divided over 4 enclosures, each a hardware RAID 6, connected via external SAS; we use XFS with LVM over that to create 100TB of usable storage. All of this works pretty well, but we are outgrowing these systems. Having built two such servers, and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, that behaves better under disk failure (checking the larger filesystem can take a day or more), and that can stand up in a heavily concurrent environment (think small computer cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab). So what we seek is a relatively low-cost, acceptable-performance storage solution that will allow future growth and flexible configuration (think ZFS with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on OpenIndiana, for example) or btrfs per server, with GlusterFS running on top of that, if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3Par storage solutions. Any suggestions or experiences are appreciated.
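
    As a sketch of the "different pools with different operating characteristics" idea in ZFS terms (device names and vdev widths are placeholders, not a recommendation):

      zpool create scratch mirror c1t0d0 c1t1d0       # mirrors for concurrent small I/O
      zpool create archive raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
      zfs set compression=on archive                  # genomics text compresses well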

    Read the article

  • Decrease in disk performance after partitioning and encryption, is this much of a drop normal?

    - by Biohazard
    I have a server that I only have remote access to. Earlier in the week I repartitioned the 2-disk RAID as follows:

      Filesystem              Size  Used Avail Use% Mounted on
      /dev/mapper/sda1_crypt  363G  1.8G  343G   1% /
      tmpfs                   2.0G     0  2.0G   0% /lib/init/rw
      udev                    2.0G  140K  2.0G   1% /dev
      tmpfs                   2.0G     0  2.0G   0% /dev/shm
      /dev/sda5               461M   26M  412M   6% /boot
      /dev/sda7               179G  8.6G  162G   6% /data

    The RAID consists of 2 x 300GB SAS 15k disks. Prior to the changes I made, it was being used as a single unencrypted root partition, and hdparm -t /dev/sda was giving readings around 240MB/s, which I still get if I run it now:

      /dev/sda:
       Timing buffered disk reads: 730 MB in 3.00 seconds = 243.06 MB/sec

    Since the repartition and encryption, I get the following on the separate partitions:

      Unencrypted /dev/sda7:
       Timing buffered disk reads: 540 MB in 3.00 seconds = 179.78 MB/sec
      Unencrypted /dev/sda5:
       Timing buffered disk reads: 476 MB in 2.55 seconds = 186.86 MB/sec
      Encrypted /dev/mapper/sda1_crypt:
       Timing buffered disk reads: 150 MB in 3.03 seconds = 49.54 MB/sec

    I expected a drop in performance on the encrypted partition, but not that much, and I didn't expect a drop in performance on the other partitions at all. The other hardware in the server is 2 x quad-core Intel Xeon E5405 @ 2.00GHz and 4GB RAM.

      $ cat /proc/scsi/scsi
      Attached devices:
      Host: scsi0 Channel: 00 Id: 32 Lun: 00
        Vendor: DP       Model: BACKPLANE        Rev: 1.05
        Type:   Enclosure                        ANSI SCSI revision: 05
      Host: scsi0 Channel: 02 Id: 00 Lun: 00
        Vendor: DELL     Model: PERC 6/i         Rev: 1.11
        Type:   Direct-Access                    ANSI SCSI revision: 05
      Host: scsi1 Channel: 00 Id: 00 Lun: 00
        Vendor: HL-DT-ST Model: CD-ROM GCR-8240N Rev: 1.10
        Type:   CD-ROM                           ANSI SCSI revision: 05

    I'm guessing this means the server has a PERC 6/i RAID controller? The encryption was done with default settings during the Debian 6 installation. I can't recall the exact specifics and am not sure how to go about finding them. Thanks
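
    The encryption parameters you can't recall are recoverable from the running system; a hedged pair of commands (the mapping name is taken from the df output above):

      cryptsetup status sda1_crypt      # cipher, key size, underlying device
      cryptsetup luksDump /dev/sda1     # full LUKS header, including key slots

    If the cipher turns out to be the installer-era default (something like aes-cbc-essiv:sha256 with a 256-bit key), note that an E5405 has no AES-NI, so decryption is purely CPU-bound and a large sequential-read penalty on the encrypted mapping is plausible.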

    Read the article

  • Deploying multiple identical copies of a virtual machine for compute tasks

    - by Reid
    I have a compute task which has a large number of library dependencies. I would like to deploy it on some of my company's large Linux clusters, where I do not have root. I could probably track down, compile, and install the right versions of all the libraries, but this looks to be quite tedious and would have to be repeated if I deployed it again somewhere else. On the other hand, it's pretty easy to install on current Ubuntu. This led me to wonder about a virtual machine approach. Could I put together a virtual machine which booted up, ran the computation (with parameters from and results to the host), and then shut down? In other words, I'd like a command like this that I could run on the host: $ ./run-vm --ram N --task /path/on/host/foo.sh --results /another/host/dir/ This would boot the VM, run foo.sh, and put the (relatively small) results of the computation in /another/host/dir/. It's important to start up many instances of the VM simultaneously, both on a single node and multiple nodes of the cluster. So it would be nice if I didn't have to make many copies of the VM virtual disk and metadata. As the task instances are completely independent, the VMs would not need any network support once deployed, or any outside communications beyond reading and writing the host filesystem. Is this possible, and if so, how might I go about doing it? Are there assumptions I've made above which are bogus?
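
    This maps fairly naturally onto QEMU/KVM backing files plus a 9p passthrough of a host directory; a hedged sketch in which the image names, memory size, and mount tag are all invented:

      # one read-only base image shared by everyone, one tiny overlay per instance
      qemu-img create -f qcow2 -b base.qcow2 instance42.qcow2
      qemu-system-x86_64 -m 2048 -drive file=instance42.qcow2,if=virtio \
          -virtfs local,path=/another/host/dir,mount_tag=results,security_model=none \
          -nographic

    Inside the guest, an init script would mount the tag (mount -t 9p results /results), run foo.sh, write output to /results, and power off; the per-instance overlays stay tiny because all common blocks live in base.qcow2.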

    Read the article

  • How to recover deleted files on ext3 fs

    - by Mike
    I have a drive which was using the ext3 filesystem. I am told that about 10GB of data was deleted off the drive (probably via rm). The drive is currently mounted read-only to preserve all data. Does anyone know of a method to restore some or all of the data? Also, if it helps, the OS was Fedora. I've also been told that the data is mostly ASCII Fortran source code and Matlab files. Conclusion: I have finally managed to get the data back, and with the simplest means ever! After weeks of trying and failing to bring back much of any data, I brought someone in today to take a look at it and offer suggestions; he simply cd'd to the directory and everything was there! It was never lost in the first place!!! Needless to say I feel really dumb right now, but I learned quite a lot from this whole fiasco. At any rate, while I was looking through data forensics solutions, I found that Autopsy, or more specifically the Sleuth Kit, was the most helpful. So I will accept that as the final answer. I would also like to note, for anyone who comes across this later, that the most up-voted (currently) answer by sekenre was also helpful and I learned a lot, but ultimately it did not help with the type (very many, and some being very large) of files I was dealing with. So thanks to all of you who provided suggestions, and I wish you all the best!
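
    For future readers, the Sleuth Kit workflow for an ext3 undelete attempt looks roughly like this (device name assumed; always work on a read-only device or an image):

      fls -rd /dev/sdb1                           # list deleted entries with inode numbers
      icat /dev/sdb1 12345 > recovered_file.f90   # dump the contents of inode 12345

    A caveat: ext3 zeroes a file's block pointers on deletion, so icat often recovers nothing for large files, and carving tools (foremost, photorec) become the fallback.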

    Read the article

  • How to safely use grub rescue> in Fedora 16? System does not boot anymore

    - by YumYumYum
    When I boot my PC, I get this in my Fedora 16 distro. I have tried the following, but none of it lets me boot anymore. Any help please? I am completely blocked.

      Grub loading.
      Welcome to GRUB!
      error: file not found.
      Entering rescue mode...
      grub rescue>

      grub rescue> ls
      (hd0) (hd0,gpt3) (hd0,gpt2) (hd0,gpt1)
      grub rescue> ls (hd0,gpt2)/
      ./ ../ lost+found/ memtest86+-4.20 elf-memtest86+-4.20 grub/ grub2/
      System.map-3.1.0-0.rc3.git0.0.fc16.i686 config-3.1.0-0.rc3.git0.0.fc16.i686
      vmlinuz-3.1.0-0.rc3.git0.0.fc16.i686 initramfs-3.1.0-0.rc3.git0.0.fc16.i686.img
      initramfs-3.1.0-0.rc4.git0.0.fc16.i686.img
      grub rescue> set prefix=(hd0,gpt2)/boot/grub
      grub rescue> set root=(hd0,gpt2)
      grub rescue> insmod normal
      error: unknown filesystem.     (or sometimes: error: file not found.)
      grub rescue> normal
      Unknown command 'normal'
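
    A hedged reading of the listing above: (hd0,gpt2) appears to be the /boot partition itself (grub2/ sits at its top level), so the prefix should not include /boot. The usual sequence would then be:

      grub rescue> set prefix=(hd0,gpt2)/grub2
      grub rescue> set root=(hd0,gpt2)
      grub rescue> insmod normal
      grub rescue> normal

    Once booted, reinstalling the bootloader should make it stick: grub2-install /dev/sda followed by grub2-mkconfig -o /boot/grub2/grub.cfg (Fedora's names for these tools; /dev/sda is an assumption).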

    Read the article

  • CentOS install / partitioning

    - by ServerSideX
    I'm using NOC-PS to remotely install CentOS 6.2 via KVM / IPMI. I'm going to install cPanel as well, and they recommend this layout:

      /boot  (99MB)
      swap   (2x server RAM)
      /      (remainder)

    In the OS install profile within the NOC-PS software, it shows as this:

      part /boot --fstype ext2 --size 250
      part pv.01 --size 1 --grow
      volgroup vg pv.01
      logvol / --vgname=vg --size=1 --grow --fstype ext4 --fsoptions=discard,noatime --name=root
      logvol /tmp --vgname=vg --size=1024 --fstype ext4 --fsoptions=discard,noatime --name=tmp
      logvol swap --vgname=vg --recommended --name=swap

    By the time the default partition setup was done installing CentOS, I get this:

      [root@server005 ~]# df -h
      Filesystem           Size  Used Avail Use% Mounted on
      /dev/mapper/vg-root  532G  907M  504G   1% /
      tmpfs                7.8G     0  7.8G   0% /dev/shm
      /dev/sda1            243M   28M  202M  13% /boot
      /dev/mapper/vg-tmp  1008M   34M  924M   4% /tmp

      [root@server005 ~]# cat /etc/fstab
      # /etc/fstab
      # Created by anaconda on Fri Dec  7 18:47:24 2012
      #
      # Accessible filesystems, by reference, are maintained under '/dev/disk'
      # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
      #
      /dev/mapper/vg-root  /     ext4   discard,noatime  1 1
      UUID=58b31aaf-5072-4fb1-a858-33bc316fa793  /boot  ext2  defaults  1 2
      /dev/mapper/vg-tmp   /tmp  ext4   discard,noatime  1 2
      /dev/mapper/vg-swap  swap  swap   defaults         0 0
      tmpfs    /dev/shm  tmpfs   defaults        0 0
      devpts   /dev/pts  devpts  gid=5,mode=620  0 0
      sysfs    /sys      sysfs   defaults        0 0
      proc     /proc     proc    defaults        0 0

    My question is: how should the NOC-PS install profile look to get the recommended cPanel partitioning? The server has 16GB RAM, dual 600GB SAS drives, and will be used for cPanel shared hosting.
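
    A hedged profile matching cPanel's layout (the explicit swap figure assumes 2 x 16GB RAM; every number here is an example, not a cPanel requirement):

      part /boot --fstype ext2 --size 250
      part swap --size 32768
      part pv.01 --size 1 --grow
      volgroup vg pv.01
      logvol / --vgname=vg --size=1 --grow --fstype ext4 --name=root

    The main divergences in the original profile are logvol swap --recommended, which sizes swap by Anaconda's formula rather than cPanel's "2x RAM" rule, and the separate /tmp volume, which is a policy choice (cPanel can also manage /tmp itself).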

    Read the article

  • Cannot destroy ZFS snapshot: dataset already exists

    - by Morven
    I have a server (a T5220, though I doubt it matters) running Solaris 10 8/07, and I have a ZFS pool, "mysql", on internal disk. Within it I have a filesystem "mysql/data/4.1.12", which I snapshot hourly with a script from cron. I have one snapshot, created as one of those hourly snaps, that will not destroy. I have renamed it out of sequence to "mysql/data/4.1.12@wibble" so that my script will not try (and fail) to destroy it, but it was originally within the sequence, though I doubt that matters. It renames successfully. The snapshot can be successfully navigated and read from through the .zfs/snapshot directory. It has no clones based on it. Trying to destroy it does this:

      (265) root@web-mysql4:/# zfs destroy mysql/data/4.1.12@wibble
      cannot destroy 'mysql/data/4.1.12@wibble': dataset already exists
      (266) root@web-mysql4:/#

    which is apparently nonsensical: of course it already exists, that's the point! Has anyone seen anything like this before? Web searches show nothing obviously similar. I can provide the patches installed if necessary.
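
    A hedged avenue to check: this symptom is classically caused by a hidden dependent dataset, either an unlisted clone or a leftover temporary from an interrupted zfs receive, and zdb can surface datasets that zfs list hides:

      zdb -d mysql | grep %          # %-named temporaries left by interrupted send/recv
      zfs promote <clone-name>       # if a real clone appears, promote it, then retry the destroy

    Both the grep pattern and the promote step are illustrative, and on a 2007-era Solaris 10 the tooling details may differ.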

    Read the article

  • Windows 7 - ignore security when reading external drive

    - by w-
    Hi, the system hard drive on an XP computer of mine kind of failed (random corrupt sectors). So I got a new hard drive and am trying to recover the files. The filesystem is NTFS. The system I'm using for the recovery is Windows 7, and I'm obviously an admin on this box. The last data I'm trying to recover is the stuff in the Documents and Settings folder. I'm using a SATA-to-USB cable thingy so that I just plug the old drive in as an external hard drive. The problem: in Windows Explorer, when I try to copy the data, I keep getting prompted with security warnings and error messages. It keeps telling me I have to change the owner permissions of the folder and all its contents. If I tell it to change all the file and folder permissions, it takes a really long time because it has to recurse through all the folder contents. Is there a way for me to ignore the file permissions when doing this? Thanks
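
    A hedged shortcut from an elevated Command Prompt (E: is an assumed drive letter; this rewrites ownership and ACLs on the old disk, which is fine for a recovery drive):

      takeown /f E:\ /r /d y
      icacls E:\ /grant Administrators:F /t

    It still recurses over everything once, but without the per-folder prompting, and plain copies work afterwards. Alternatively, copying from a Linux live CD ignores NTFS permissions entirely.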

    Read the article

  • How to fix the TrueCrypt MBR using Command Prompt or a Linux live USB?

    - by Michal Stefanow
    I was playing with TrueCrypt and decided to make a fresh installation of Windows 7 from a USB stick. Unfortunately the Windows 7 installer says "setup was unable to create a new system partition". My entire HDD has been formatted and is visible as 320GB of unallocated space, but neither fdisk nor the Windows 7 installer nor the Windows XP installer could help. (Windows XP doesn't even see the HDD; it sees only the USB stick and says there is not enough space to install.) It may be related to TrueCrypt's pre-boot authentication, boot loader and/or MBR. As I have no optical drive, I could not create a rescue disk. Right now I need a rescue of some kind, supposedly by erasing/fixing the MBR using a Linux live USB or the Command Prompt. Another approach is to click "Repair your computer" in the Windows 7 installer, then "Restore your computer", then click OK upon the error and get access to a Command Prompt. If I start the computer without the Linux USB, I receive this:

      error: unknown filesystem.
      grub rescue>

    Any help would be greatly appreciated, as my laptop is kind of not fully operational now. UPDATE: This was asked a long time ago; I ended up formatting everything (eventually it worked using a different bootable USB).
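
    From the installer's repair Command Prompt, the standard way to evict TrueCrypt's loader from the MBR is:

      bootrec /fixmbr
      bootrec /fixboot

    If the installer still cannot create a partition, zeroing the boot sector from a Linux live USB is the blunter fallback (this destroys the partition table and everything on the disk; /dev/sda is an assumption): dd if=/dev/zero of=/dev/sda bs=512 count=1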

    Read the article

  • Size of modules within initrd

    - by LiKao
    I am currently trying to manually replace the kernel within ubuntu-core on an embedded device with a custom kernel. However, when I update the initrd, it becomes much bigger. Here is what I did:

      - extract the initrd that was shipped with Ubuntu
      - make a list of all modules within the old initrd
      - get the same modules from the new module directory at /lib/modules/<new_kernel_version>
      - add these modules to the initrd and remove the old ones

    Doing this, my initrd becomes quite a bit bigger than the original, so I checked the individual modules. I picked the btrfs.ko filesystem driver as an example, and by comparing the two modules I noticed the one I had just put into the initrd was much bigger, which caused the difference in size.

      -rw-r--r-- 1 root root 999K Nov 14 15:06 btrfs.ko    (btrfs.ko within the shipped initrd)
      -rw-r--r-- 1 root root 7.2M Nov 14 15:08 btrfs.ko    (the new btrfs.ko)

    What is different between these two modules? Could this be caused by some faulty setting for the new kernel? When producing the kernel I copied /proc/config.gz and used make oldconfig to update it, so all optimisations should be the same for both kernels. Or is there something else being done to the modules before they are put into the initrd? Maybe there is even some better way to build a new initrd for the new kernel in Ubuntu altogether. Update: I also tested with an initrd which I created from scratch using the mkinitramfs command within Ubuntu, and it has the same size difference that I found for the initrd I manually updated.
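
    The size gap strongly suggests debug info compiled into the freshly built modules; a hedged check and fix:

      file btrfs.ko                    # a debug build reports "not stripped"
      strip --strip-debug btrfs.ko     # strip one module to confirm the theory
      # or have the build strip every module at install time:
      make INSTALL_MOD_STRIP=1 modules_install

    A CONFIG_DEBUG_INFO=y that crept in via make oldconfig would explain the difference even with an otherwise identical .config.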

    Read the article
