Search Results

Search found 36498 results on 1460 pages for 'linux usb drive'.

Page 439/1460

  • Why can't I mount an image hosted on a read-only HFS+ partition via Boot Camp?

    - by deceze
    I have come across the following phenomenon and would like to know how leaky Windows' file system abstraction is, or if there's something else involved.

    I partitioned the hard disk of my MacBook Pro and installed Windows 7 (64 bit). The Boot Camp driver package includes file system drivers that enable Windows to access the Mac OS HFS+ partition. It's read-only access, but it works. Now, I have some disk images of stuff I usually install, so I grabbed a copy of Daemon Tools to mount them. When I mount an image saved on the HFS+ partition, about two out of three installers on these disks (usually InstallShield) crash with all sorts of weird errors. Most are just gibberish that lead to all sorts of non-solutions on Google; one was "This application is not the right type for your computer, check if you need 32 or 64 bit versions." When moving the image files to another Windows 7 computer on the network and mounting them from the network share, they work fine.

    My question now is: why do applications behave differently depending on whether the read-only image file, which should be abstracted away through the read-only virtual Daemon Tools drive, is located on a read-only HFS+ partition or on a Windows network share? And I'll just roll this into the question as well, since I was wondering: does the file system of a network share matter? Does the client system need to understand the file system of the share host, or is that abstracted away in SMB?

    Read the article

  • Family server setup [closed]

    - by Manny
    Hi all, I really hope some of you can give me some direction. I have set up a Linux server at home, and through Samba I can access files from different computers in my home. I would like to use this server as a file server for my family (brothers, sisters and parents who all live in their own homes). I really like the way it is set up right now with user and permission controls, but I've read that it is a bad idea to open up the Samba port to the world.

    The requirements are simple:

    1) It should be easy to access, using standard web browsers or by mounting the drive (we shouldn't have to use any VPN setup, PuTTY, etc.).
    2) It should be somewhat secure. We just want to share family pictures instead of putting them on Facebook, Picasa or some other web server; nothing top secret.

    Here is what I've looked into:

    1) WebDAV. It seems decent, but it seems like Windows 7 doesn't like it very much, even with digest mode authentication. User controls and permissions are not as flexible as Samba's (or at least to my knowledge). I really like the user and group permissions in Samba, but I could live with WebDAV if it worked seamlessly with Windows; it should just work, shouldn't it?
    2) I read somewhere to stay away from FTP as it is outdated, and that there are newer and better Internet file-server setups. Was that a reference to WebDAV?

    I am so confused, please help... Manny

    Read the article

  • Where do vendors publish internal transfer rates of HDDs?

    - by red888
    So I've started to dig into storage fundamentals and found that in order to calculate the IOPS of a HDD you need to know the internal transfer rate of the drive (the time it takes data to move from the platters to the disk's internal cache). I went on Newegg and even a few vendor sites and could not find this info published for any HDDs. Is it sometimes called something else? Take this link to a Seagate HDD for instance. Nowhere do I see "internal transfer rate", but I do see something called "Sustained Data Rate OD"; is that the same thing?

    Just so you know where I'm getting this info (book: "Information Storage and Management: Storing, Managing..."), consider an example with the following specifications provided for a disk:

    - The average seek time is 5 ms in a random I/O environment; therefore, T = 5 ms.
    - Disk rotation speed of 15,000 revolutions per minute, or 250 revolutions per second, from which rotational latency (L) can be determined: one-half of the time taken for a full rotation, or L = (0.5/250) s expressed in ms.
    - 40 MB/s internal data transfer rate, from which the internal transfer time (X) is derived based on the block size of the I/O; for example, for an I/O with a block size of 32 KB, X = 32 KB / 40 MB/s.

    Consequently, the time taken by the I/O controller to serve an I/O of block size 32 KB is TS = 5 ms + (0.5/250) s + 32 KB / 40 MB/s = 7.8 ms. Therefore, the maximum number of I/Os serviced per second, or IOPS, is 1/TS = 1/(7.8 × 10^-3) = 128 IOPS.
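
    As a sanity check, the book's arithmetic can be reproduced from the shell; a quick sketch with bc, all times converted to seconds:

        # TS = seek + rotational latency + transfer time for a 32 KB block
        # 5 ms + (0.5/250) s + 32 KB / (40 MB/s), with 1 MB = 1024 KB
        echo "scale=6; 1 / (0.005 + 0.5/250 + 32/40960)" | bc
        # prints ~128.5, matching the book's ~128 IOPS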

    Read the article

  • a brand new FS based on a database without using fuse

    - by Devrim
    Hi all. To serve millions of files out of a single directory, to be able to connect to a drive from hundreds of endpoints, and for some other reasons (to avoid gluster/NFS/all FS-based networking solutions), I want to evaluate the possibility of making a filesystem that's based on MongoDB (or any other database). Basically, it works like fusefs: every single file is kept in Mongo GridFS. In theory, I do

        mount mongodbfs /mountPoint mongodb://localhost

    then when I say

        touch /mountPoint/test.txt

    this file is inserted into MongoDB. This FS will also store uid/gid and perms with the file; we can throw hundreds of servers at it, and no useradd will be necessary. I'm not thinking of including all the features of a FS, just the ones we need.

    My question is: how do I start my quest in finding resources, books, links, people, developers who'd help me implement this, at least as a proof of concept? Is it feasible? What should I expect as a timeline for such an undertaking? Please only think about a gazillion small files and folders.
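
    For what it's worth, the GridFS half of this can already be exercised from the shell with the stock mongofiles utility that ships with MongoDB, which gives a feel for the insert that touch would have to perform behind the scenes (a sketch only; the database name is made up):

        # store an empty file in GridFS and list it back ('fsdb' is a placeholder db name)
        touch test.txt
        mongofiles --host localhost --db fsdb put test.txt
        mongofiles --host localhost --db fsdb list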

    Read the article

  • Problem with running a script at startup as root?

    - by Usman Ajmal
    Hi. The main question: is there a way I can run one of my scripts 'completely' when Ubuntu's desktop appears, no matter whether root, an administrator, a desktop user or an unprivileged user is logged in?

    What does the script do? The script mounts a partition, looks for a file in that partition, and finally, on the basis of that file, a decision is made to copy a partition to another partition. That copying is done via

        dd if=/dev/sda2 of=/dev/sda5

    When does the script run fine? The script runs smoothly when I run it from the terminal with

        sudo ./my_copying_script

    This command asks me for the password of the currently logged in user. I enter the password and the script starts working.

    When does the script NOT run fine? I want to run the script at startup. I set it as a startup program using the Startup Applications utility of Ubuntu. The script ran at startup but exited at the dd command, returning the following error:

        dd: opening '/dev/sda2': Permission denied

    On edk's suggestion I set the owner of my_copying_script to root and set the SUID bit. Now the permissions of my_copying_script are (-rwsr-sr-x). edk's point of view was that once I set the SUID bit, the startup program would run with the permissions of its owner. I did that, but the same /dev/sda2 permission denied error came up. I then prefixed the dd with sudo, as mentioned below:

        sudo dd if=/dev/sda2 of=/dev/sda5

    but this returned the following error:

        sudo: no tty present and no askpass program specified

    In other words, the mounting failed. If I run the script using sudo ./myProgram I don't face this problem and the drive gets mounted successfully.
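
    For reference, one common way around the "no tty present and no askpass" error (a sketch; the user name and script path below are placeholders, not from the question) is a passwordless sudo rule scoped to just this one script, so the Startup Applications entry can invoke it with sudo and no prompt:

        # /etc/sudoers.d/copy-script -- create with: visudo -f /etc/sudoers.d/copy-script
        someuser ALL=(root) NOPASSWD: /usr/local/bin/my_copying_script

        # the Startup Applications command then becomes:
        sudo /usr/local/bin/my_copying_script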

    Read the article

  • kill a hung mount process

    - by John P
    I have a virtual machine drive that ran out of space, so I shut down the VM and extended the volume using lvextend. After resizing the partition (ext3), I ran e2fsck on it, and it found and corrected errors. Unfortunately, when I ran e2fsck one more time, there were more errors that had to be fixed. I went through 3 rounds of e2fsck before I decided to try mounting it to clean up some space manually. I tried mounting it, but the mount process hung. I tried to "kill -9" the mount process, but that did not kill it. I killed the parent process, but that did not kill it either. Any ideas on how to kill a rogue mount process?

    Some evidence:

        ps -l 13292
        F S   UID   PID  PPID  C PRI  NI ADDR SZ WCHAN  TTY        TIME CMD
        4 R     0 13292     1 99  85   0 - 17964 -      ?        11:27 mount /dev/mapper/xen7-123p3 /tmp/p3/

        lsof -p 13292
        COMMAND   PID USER   FD   TYPE DEVICE  SIZE/OFF     NODE NAME
        mount   13292 root  cwd    DIR    9,2      4096 25264129 /root
        mount   13292 root  rtd    DIR    9,2      4096        2 /
        mount   13292 root  txt    REG    9,2     61656  2916434 /bin/mount
        mount   13292 root  mem    REG    9,2    144776 31457282 /lib64/ld-2.5.so
        mount   13292 root  mem    REG    9,2   1718232 31457284 /lib64/libc-2.5.so
        mount   13292 root  mem    REG    9,2     23360 31457291 /lib64/libdl-2.5.so
        mount   13292 root  mem    REG    9,2     43808 31457783 /lib64/libblkid.so.1.0
        mount   13292 root  mem    REG    9,2    247496 31457331 /lib64/libsepol.so.1
        mount   13292 root  mem    REG    9,2     95464 31457337 /lib64/libselinux.so.1
        mount   13292 root  mem    REG    9,2    154640 31457491 /lib64/libdevmapper.so.1.02
        mount   13292 root  mem    REG    9,2     17936 31457472 /lib64/libuuid.so.1.2
        mount   13292 root  mem    REG    9,2  56438208 12684878 /usr/lib/locale/locale-archive
        mount   13292 root    0u   CHR 136,11       0t0       13 /dev/pts/11 (deleted)
        mount   13292 root    1u   CHR 136,11       0t0       13 /dev/pts/11 (deleted)
        mount   13292 root    2u   CHR 136,11       0t0       13 /dev/pts/11 (deleted)

        umount -f /tmp/p3/
        umount2: Invalid argument
        umount: /tmp/p3/: not mounted
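
    Two generally suggested things to try on a stuck mount, offered as hedged suggestions rather than a sure fix; a process blocked inside the kernel (state D) cannot be killed by any signal, though the ps output above shows this one spinning at 99% CPU instead:

        # confirm the process state; 'D' = uninterruptible sleep, unkillable
        ps -o pid,stat,wchan -p 13292
        # a lazy unmount detaches the mount point even while it is busy,
        # if the mount did half-complete
        umount -l /tmp/p3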

    Read the article

  • Allowing access to company files across the internet

    - by Renaud Bompuis
    The premise: I've been tasked with finding a solution to the following scenario:

    - our main file server is a Linux machine
    - on the LAN, users simply access the files using SMB
    - each user has an account on the file server and his/her own access rights
    - user accounts are simple passwd/group security accounts, not NIS/LDAP

    The problem: we want to give users (or at least some of them, say if they belong to a particular group) the ability to access the files from the Internet while travelling. Ideally I'd like a seamless solution. Maybe something that allows the user to access a mapped drive would be ideal. A web-oriented solution is also good, but it should present files in a way that is familiar to users, in an explorer-like fashion for instance. Security is a must, of course; users would be expected to log in, and the connection to the server should be encrypted. Anyone have some pointers to neat solutions? Any experiences?

    Edit: the client machines are Windows only.

    Read the article

  • SSD suddenly full

    - by Daniel
    Today the hard drive of our server was suddenly full. The disk usage had always stayed around 50% in the weeks and months before (old data is regularly expunged from the server). I deleted 10 GB of files in /tmp, which strangely freed 51 GB. Here is what I did:

        root@***:~# df -h
        Dateisystem          Size  Used Avail Use% Eingehängt auf
        /dev/sda3            139G  137G     0 100% /
        tmpfs                3,9G     0  3,9G   0% /lib/init/rw
        udev                 3,9G  116K  3,9G   1% /dev
        tmpfs                3,9G     0  3,9G   0% /dev/shm
        /dev/sda1            985M   25M  910M   3% /boot

        root@***:/var# du -hs *
        3,3M  backups
        438M  cache
        9,4G  lib
        4,0K  local
        12K   lock
        76M   log
        24K   mail
        4,0K  opt
        88K   run
        184K  spool
        10G   tmp
        12K   www

        root@***:/var/tmp# find -type f -print0 | xargs -0 rm

        root@***:/var/tmp# df -h
        Dateisystem          Size  Used Avail Use% Eingehängt auf
        /dev/sda3            139G   81G   51G  62% /
        tmpfs                3,9G     0  3,9G   0% /lib/init/rw
        udev                 3,9G  116K  3,9G   1% /dev
        tmpfs                3,9G     0  3,9G   0% /dev/shm
        /dev/sda1            985M   25M  910M   3% /boot

    Any explanation as to why deleting 10 GB in /tmp gave me back 51 GB on the disk? Could this point to an SSD failure? Are there any tools for Debian to test SSD health? I have already checked syslog. The first entry relating to this incident is a MySQL message:

        1:22:02 [ERROR] /usr/sbin/mysqld: Disk is full writing...

    So I have absolutely no idea what caused this.
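
    One classic explanation for df and du disagreeing like this, offered as a hedged suggestion rather than a diagnosis: a process (mysqld, say) still holds deleted files open, so their blocks are only returned to the filesystem when the handles close. This can be checked with lsof:

        # list open files that have already been unlinked from the filesystem
        lsof +L1
        # or, more loosely:
        lsof | grep '(deleted)'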

    Read the article

  • pcfg_openfile: unable to check htaccess file, ensure it is readable

    - by rxt
    After moving a website folder on my local development machine to another drive, then moving it back, I got a 403 error. Most of this problem probably had to do with rights that got messed up. After deleting the code and restoring it from SVN, the rights seemed all right. The error stayed, however.

    The setup is a bit complex, as follows:

    - I have Ubuntu 10.04 as development machine, trying to mimic the server as much as possible
    - We use Eclipse + SVN and I create all projects in a local folder under my user account
    - In /var/www-vhosts I create folders for each vhost, like this one: test.localhost
    - test.local/index.php includes the index file of the project
    - test.local/.htaccess is a symbolic link to the htaccess file in a project subfolder

    I get the following error in the Apache error log:

        [Thu Jul 08 15:55:56 2010] [crit] [client 127.0.0.1] (13)Permission denied: /var/www-vhosts/test.localhost/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    The problem seems to be the .htaccess file, or the link to it:

    - When I empty the htaccess, nothing changes
    - When I remove the link, the index include produces some output (in the Apache error log)
    - When I remove the link and replace it with the actual file, I get another error:

        [Thu Jul 08 16:47:54 2010] [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /var/www-vhosts/test.localhost/test

    I'm lost here and don't know what to do next. Do you have any ideas what I can try? This setup has worked before, but I don't know what is different now.
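
    Two things commonly checked for this pair of errors, as suggestions rather than a confirmed diagnosis: Apache needs search (execute) permission on every directory along the path, and the vhost must allow symlinks for a symlinked .htaccess to be followed:

        # show the permissions of every component of the path
        namei -l /var/www-vhosts/test.localhost/.htaccess

        # and in the vhost configuration:
        # <Directory /var/www-vhosts/test.localhost>
        #     Options +FollowSymLinks
        # </Directory>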

    Read the article

  • Basic connectivity issues between Win 7 and XP on a mixed wired/wireless network [Solved]

    - by Pulse
    Setup:

    - Windows 7 x64 Ultimate desktop, hard wired to an Asus WL500gp router (WL500gpv2-1.9.2.7-d-r1445 firmware)
    - Several bridged VirtualBox VMs running XP, 7, Ubuntu Server 10.04, Mint 9 and SuSE 11.2
    - Win XP Pro SP3 notebook with a D-Link AirPlus wireless network card
    - No firewall or other security software currently running on either platform (at least for the duration of the test)

    Situation:

    - The router is acting as DHCP server
    - Clients are receiving correct addresses and additional parameters
    - Internet connectivity is available from all clients
    - Windows 7 sharing is set to network type = work (not home group)
    - NetBT is disabled on all clients, using SMB over TCP

    What I can do:

    - I can ping the router and Internet addresses from the wireless XP notebook
    - I can ping the Win 7 desktop and any VM from the XP wireless notebook
    - I can ping all devices from the router
    - All VMs and 7 can ping each other and the router, as well as Internet addresses

    What I can't do: I cannot ping the XP wireless notebook from either the Win 7 desktop or the VMs; it always returns a destination host unreachable error. Tracert resolves the name of the XP notebook but also returns a destination host unreachable.

    From the above it would seem that something is blocking connectivity in a single direction (from the Win 7 box to the Win XP notebook) only, but the router can ping the XP notebook. Some fresh input would be most welcome, as this is beginning to drive me batty. Thanks

    Read the article

  • What does it mean to install two OS's alongside each other?

    - by Josh
    I currently have Windows 7 installed on my PC. However, I just tried out Ubuntu via booting from a disc and I love it. I want to install it onto my HDD, but I don't want to get rid of Windows 7. I know HOW to do this, but I am a little unsure what the consequences might be. What does it mean to install Ubuntu alongside Windows? Do they share the same resources? Also, I have my HDD already partitioned into two sections, a 70 GB section where Windows is installed and then another 400 GB section where all my data is stored. There is currently 26 GB free on the 70GB partition. I know Ubuntu doesn't take up much space. However, if I install Ubuntu in that space, will I still be able to install programs on Windows in the future? My main concern is that I am going to short-change my hard drive space for future installations. EDIT: I guess another big question I have is if I install a program on one OS, will the other be able to use it?

    Read the article

  • Matched or unmatched drives for RAID arrays?

    - by Will
    Looking around, there is conflicting information on this, with some strongly suggesting one or the other. From my understanding, the issue with matched drives is that the wear on both drives is more or less the same, so the potential for the second drive failing with, or very soon after, the first is pretty high.

    People also claim matched drives give substantially higher performance. However, assuming the unmatched drives are more or less the same (e.g. two 1 TB SATA II 7200 rpm drives with 32 MB cache), would the minor differences between, say, a Seagate and a Western Digital one (say one has a 128 MB/s read rate and the other a 150 MB/s read rate, as well as I guess various other minor differences) actually cause any notable performance loss, i.e. potentially worse than two matched 128 MB/s drives? Or does RAID not really care and give you essentially an optimal solution (e.g. up to 278 MB/s total read speed for RAID 0 and 1), and similar for other RAID levels with more "unmatched" drives (5 and 1+0 come to mind as possibilities)?

    Also, I couldn't find much info on how this differs across RAID setups, e.g. RAID 0 or RAID 1, software or hardware RAID, etc. I'm assuming such things have an effect, and that it's not all the same for RAID in general?

    Read the article

  • Safe to remove Python2.6 files?

    - by darkfeline
    I'm using Linux Mint 11 (will upgrade soon), and I've noticed that, even though I don't have any python2.6 packages installed with apt, there's a bunch of residual python2.6 files scattered around my drive, including, but not limited to, dist-packages in /usr/lib/python2.6 and various /usr/share stuff. Is there any way to test whether these files are still being used? I'm tempted to sudo rm -rf the lot of them, but I'm scared it'll break stuff.

    Also, does anyone have any idea where these files could have come from? I believe I had python2.6 installed once upon a time, but I made sure to --purge it, so there shouldn't be any trace of it left, right?

    EDIT: after using a quick script to check all of the files, it appears most of them belong to important packages, so I won't try weeding out the few which I know are probably useless. Although I am curious why so many packages have python2.6 files when I don't even have it installed. These files are not associated with any packages and I'm not sure if they are safe to remove:

        /usr/bin/ipython2.6
        /usr/lib/python2.6/dist-packages/distribute-0.6.15.egg-info
        /usr/lib/python2.6/dist-packages/easy_install.py
        /usr/lib/python2.6/dist-packages/IPython
        /usr/lib/python2.6/dist-packages/ipython-0.10.1.egg-info
        /usr/lib/python2.6/dist-packages/setuptools
        /usr/lib/python2.6/dist-packages/setuptools.egg-info
        /usr/lib/python2.6/dist-packages/setuptools.pth
        /usr/lib/python2.6/dist-packages/site.py
        /usr/lib/python2.6/dist-packages/wx.pth
        /usr/local/lib/python2.6
        /usr/local/lib/python2.6/dist-packages
        /usr/local/lib/python2.6/site-packages
        /usr/share/man/man1/ipython2.6.1.gz
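
    The ownership test alluded to above ("a quick script to check all of the files") presumably reduces to asking dpkg who owns each path; dpkg -S prints the owning package and fails for orphaned files, so something like this separates the two groups:

        # files dpkg knows nothing about are the removal candidates
        dpkg -S /usr/bin/ipython2.6 2>/dev/null || echo "not owned by any package"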

    Read the article

  • Fedora 19 no longer bootable

    - by Parisa
    I had Fedora dual-booted with Windows on my laptop for a while, but with a Windows refresh, GRUB was gone and my system booted directly into Windows. I booted Fedora with my system's boot options and, following this tutorial: https://fedoraproject.org/wiki/GRUB_2 I reinstalled grub2, but then my system booted into an empty GRUB prompt:

        grub>

    So I found the drive containing vmlinuz and initramfs (completely sure about their location and versions) and tried to boot it manually, but after the boot command it said:

        no suitable video mode found
        booting in blind mode

    and nothing happened. Such a tragedy... I have already tried the live disk's rescue system. Funny, but the troubleshooting options don't appear on my laptop while they do on my desktop PC. I can't even get to a boot prompt on my Lenovo IdeaPad Z400 laptop. I also tried EasyBCD so maybe I could boot it through Windows, but it comes up with this error:

        missing AutoNeoGrub().mbr

    Now I have removed the GRUB prompt (don't know why) and it's really hard for me to reinstall my dearly customized Fedora. If anyone knows a way to help boot it again, or reinstall it keeping my files and installations, I really need it. Thanks

    PS: I have already tried Boot-Repair-Disk, but it asks me to enable the repo containing grub-efi on my Fedora to reinstall grub2 and fix the boot for me (how could I?).
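
    For the record, the usual route back from a live session is to chroot into the installed system and reinstall grub2 from there; a sketch, with /dev/sdaX standing in for the unknown Fedora root partition (an EFI setup would need the grub2-efi packages instead):

        # from a Fedora live session
        sudo mount /dev/sdaX /mnt
        for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
        sudo chroot /mnt
        grub2-install /dev/sda
        grub2-mkconfig -o /boot/grub2/grub.cfg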

    Read the article

  • Virtual Server HDD shrinks without apparent reason

    - by Christian
    We have a virtual hosted Linux server, and in the last few months, every now and then the HDD shrinks from 400 GB down to the exact byte count that is in use. All existing data can be downloaded and displayed without a problem, but we can't upload or edit any files because of the "full" hard drive. Here is a screenshot, where "size" should be 400 GB:

    This has happened twice before, and again today. The previous times, when I reported the issue to the host, they said "that isn't possible, you must be doing it wrong", but soon after the call the problem vanished without us doing anything, so I suppose they have some kind of problem they're not willing to admit. Even after the fact, they acted like nothing was wrong and wrote me a mail in which they explained that I can use "df -h" to view available disk space (well duh, how do you think I noticed this particular issue?). Questions about if and what they had done were ignored. It has happened around the 25th to 28th of the month, so I suspect that they might have a cronjob running every 30 days or so which wreaks havoc with some VM configs. I just want to understand the problem, but the host's support hasn't been very helpful in that regard. I have tried Googling the issue, but any combination of search terms I can come up with just gives me tutorials on how to change HDD size in a virtual machine.

    a) What could be the cause of shrinking HDD size in an Ubuntu 12.04.3 LTS server? Could there be anything in our virtual machine, or is it more likely to be an issue with the VM host?
    b) Can I do anything about it without needing to contact the host's support?
    c) Is there any way I can prevent this from happening at all?

    Read the article

  • System Issues and Major Malfunctions after Failed Hibernation Exit

    - by Sarah Seguin
    I have an HP G71-340US that went into hibernation mode for a while, and when I tried coming out of it, I got an error message:

        Your computer cannot come out of hibernation.
        Status: 0xc000009a
        Info: A fatal error occurred processing the restoration data.
        File: \hiberfil.sys
        Any information that was not saved before the computer went into hibernation will be lost.
        Enter=Continue

    So I hit continue and it ran soooo super slow. It was seriously crawling. Finally I gave up and turned it off manually (i.e. pressed and held the button). It's been a week or two since then, and EVERY SINGLE TIME I have tried to do ANYTHING, it takes forever. When I say forever, I literally mean it takes 5-7 minutes to load the internet, then the page itself, then to click a link, and so on, so forth. Eventually everything just goes not responding and I have to give up (4-6 HOURS later). I also cannot access my thumb/jump drives once I've managed to load Windows.

    I was going to try running Malwarebytes in case of a virus, but Windows Explorer develops errors and goes not responding on me. Currently I'm running chkdsk, and just about every file is coming back unreadable. I let it run for the last 2 hours straight and I'm only at 6 percent, with around 500+ errors and still going. Yes, I've taken logs of the errors via cell phone camera and patience.

    A week or two prior to this happening, I had to change out the hard drive due to blunt force trauma next to the mouse. OH! Running on Windows 7 :) And I've tried loading the computer in safe mode and it makes absolutely no difference. Any and all help would be appreciated. I really don't know what to do from here and I'm kind of freaking out. I've googled different parts of the error and things that I've done/seen, and there are so many different answers/topics that I thought it best to just post the questions.

    Read the article

  • solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then put the projects up on this server for testing with my actual users (non-techies) before deploying to a production server.

    My problem is that we have a very poor internet connection (almost negligible) at work, and the usual apt-get/yum/ports (make, clean, install) processes for setting up servers always get their packages from online repositories somewhere. I know I could probably download the source and then compile it myself, but that's going to be too much of a hassle for me. I'm thinking about two solutions:

    Plan A: Run a server VM on my Mac and then use this VM as the source repository for the offline server. I've read about Ubuntu's apt-proxy and it seems to be good enough, though I haven't tried it yet. I'm not sure if this is possible, but can I simply do apt-get install nginx --downloadonly so that the package and its dependencies will be downloaded into my VM, and my server can use the VM as the source repo for apt-get?

    Plan B: Run a server VM on my Mac (which I can set up/update easily when I'm home) and then clone the VM to the offline development server. Maybe I should simply make the server a VM host so I can simply copy the VM over. I think this is okay for the first-time setup, but subsequent updates will take too long (cloning the VM image).

    If I was working on Windows, I imagine it'd be easier, because most services have an installer file that I can download and then run at the server. If you could suggest another way, it would be much appreciated.

    Update: From Michael Hampton's answer, I found a possible solution: apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.
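
    One small correction worth noting on Plan A: the apt-get flag is spelled --download-only (short form -d), and the fetched packages land in /var/cache/apt/archives:

        # download nginx and any not-yet-installed dependencies without installing
        apt-get install --download-only nginx
        ls /var/cache/apt/archives/*.deb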

    Read the article

  • Question marks showing in ls of directory. IO errors too.

    - by jaymoo
    Has anyone seen this before? I've got a RAID 5 mounted on my server and for whatever reason it started showing this:

        jason@box2:/mnt/raid1/cra$ ls -alh
        ls: cannot access e6eacc985fea729b2d5bc74078632738: Input/output error
        ls: cannot access 257ad35ee0b12a714530c30dccf9210f: Input/output error
        total 0
        drwxr-xr-x 5 root root 123 2009-08-19 16:33 .
        drwxr-xr-x 3 root root  16 2009-08-14 17:15 ..
        ?????????? ? ?    ?      ?                ? 257ad35ee0b12a714530c30dccf9210f
        drwxr-xr-x 3 root root  57 2009-08-19 16:58 9c89a78e93ae6738e01136db9153361b
        ?????????? ? ?    ?      ?                ? e6eacc985fea729b2d5bc74078632738

    The md5 strings are actual directory names, not part of the error. The question marks are odd, and any directory with a question mark throws an I/O error when you attempt to use/delete/etc. it. I was unable to umount the drive due to "busy". Rebooting the server "fixed" it, but it was throwing some RAID errors on shutdown. I have configured two RAID 5 arrays and both started doing this on random files. Both are using the following:

        mkfs.xfs -l size=128m -d agcount=32
        mount -t xfs -o noatime,logbufs=8

    Nothing too fancy, but part of an optimized config for this box. We're not partitioning the drives, and that was suggested as a possible issue. Could this be the culprit?
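
    Since these are XFS filesystems, the usual first step (a hedged suggestion; the device name below is a placeholder) is a no-modify check pass with the filesystem unmounted:

        umount /dev/sdb1          # xfs_repair refuses to run on a mounted fs
        xfs_repair -n /dev/sdb1   # -n: report problems only, change nothing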

    Read the article

  • Get more space for redis on a Unix server

    - by DevTraveler
    In our app we have only Windows servers, except where we use Redis queues. In this case, we use a Unix server created by Amazon. As you can see, we do not have a lot of space available, and we want to make sure Redis has enough space to work without getting stuck. I am a little bit new to Unix, and after reading some material about the Unix file system I was still not fully sure how to give the Redis drive (it is in the home directory) more space. I see the /mnt that has a lot of space, but read that it is temporary, for CD-ROM and network drives. Can you help me figure out how to get more space for my Redis? If possible, I'd prefer not to reinstall the Redis server.

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvda1      7.9G  6.7G  880M  89% /
        udev            7.4G  4.0K  7.4G   1% /dev
        tmpfs           3.0G  152K  3.0G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            7.4G     0  7.4G   0% /run/shm
        /dev/xvdb       414G   30G  364G   8% /mnt

    Thanks.
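
    A sketch of one option, under the assumption that the Redis data directory can simply be relocated: point Redis's working directory at the large /mnt volume via the dir directive in redis.conf. One caveat worth checking first: on Amazon EC2, /mnt is often ephemeral instance storage, so anything placed there can vanish when the instance is stopped.

        sudo mkdir /mnt/redis && sudo chown redis:redis /mnt/redis
        # in redis.conf:
        #   dir /mnt/redis
        sudo service redis-server restart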

    Read the article

  • Can I recover a non-system disk deleted during 2008 R2 setup?

    - by serialhobbyist
    I've done a truly stupid thing and 'deleted' the data disk on a Server 2008 R2 box. Can I recover it? If so, how?

    I was rebuilding the box because a motherboard change had broken things. I've built loads of boxes and was going through the standard stuff without much concentration. I got to the disk screen, which normally displays the two partitions on the drive: the recovery one and the system one. As normal, I deleted the two things I saw. It was only when the two lots of unallocated space didn't merge into one that the full horror of what I'd done hit me.

    Yes, I've got backups... of the stuff I have space to back up. The real irony is that, earlier in the day, I'd ordered two 1 TB disks to deal with the problem. So, anyway, I'd really like to get this partition back because it'll save me a lot of time. How can I do it?
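
    If nothing has been written over that area of the disk since, a partition-recovery tool such as TestDisk (which also runs on Windows) can often rediscover the lost entry and rewrite the partition table; a hedged sketch, with no guarantee of success:

        # scan the disk for lost partitions and offer to restore them
        testdisk /dev/sda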

    Read the article

  • Seeking faster access/transfer times for accounting application

    - by Markaway
    Our accounting software, Sage 50, has been getting slower to open on workstations and to read the company file. The company file only contains 2 years' worth of transactions, and we just cleared out 2011, so the file size has gotten a lot smaller. There are 10 users, 6 of which are on it all day; 4 are on and off throughout the day. Our network is entirely GbE and the switches are set to prioritize traffic on that port number. Watching network traffic, we barely use 40% of the network capacity on the workstation, so I don't think that is our bottleneck.

    Our server contains two older Raptors, SATA 2 (3 Gb/s), 150 GB, in RAID 1. We were considering switching to SSDs, but a lot of what I read says to stay away from MLCs, especially in a production environment, and to definitely avoid putting them in a RAID config. So would upgrading to newer Raptors with SATA 3 (6 Gb/s) offer noticeable benefits? What other options are out there that aren't so expensive? We're trying to keep it to 200-300 per drive. We need at least 150 GB, but going to 250-300 GB would be better, as it gives us more room to grow. We have about 30% space remaining on what we have now.

    Read the article

  • How to create an EFI System Partition?

    - by Alex Popov
    TL;DR: How do I create an EFI System Partition from scratch? How do I put the EFI firmware on it once it is created?

    Long version: I have a Toshiba T430 laptop. I received it with Windows 7 installed (but I think it originally shipped with Windows 8). I installed Ubuntu on it, but deleted some partitions on the disk, so that I ended up wiping out Windows and only having Ubuntu. Among the deleted partitions was the EFI System Partition. I discovered that Ubuntu now boots in legacy mode (and not UEFI). I am trying to follow this guide on converting my Ubuntu installation from legacy to UEFI.

    The problem: since there is no EFI partition, whenever I choose from the BIOS to boot using UEFI, I cannot boot. That goes not only for the hard drive, but USB and DVD as well. I think this is logical: it expects an EFI partition, and since it can't find one, it cannot continue booting further, be it from HDD or DVD. So how do I recreate the EFI partition? The guide above says:

    Creating an EFI partition: If you are manually partitioning your disk in the Ubuntu installer, you need to make sure you have an EFI partition set up. If your disk already contains an EFI partition (e.g. if your computer had Windows 8 preinstalled), it can be used for Ubuntu too. Do not format it. It is strongly recommended to have only 1 EFI partition per disk. An EFI partition can be created via a recent version of GParted (the GParted version included in the 12.04 disk is OK), and must have the following attributes:

    - Mount point: /boot/efi (remark: no need to set this mount point when using manual partitioning, the Ubuntu installer will detect it automatically)
    - Size: minimum 100 MiB, 200 MiB recommended
    - Type: FAT32
    - Other: needs a "boot" flag

    I had some trouble creating this partition: I boot from a live Ubuntu DVD, open GParted, create a 200 MB partition and format it to FAT32. In GParted I cannot set the mount point and thus cannot set the boot flag. I didn't set the mount point in /etc/fstab, since it's a live CD and fstab looked quite different from what I expected compared to a normal boot. Anyway, I just didn't know what values to set. I booted again via the live DVD and then chose to install Ubuntu. I then created a partition with the mentioned criteria: mount point, 200 MB, FAT32, boot flag. However, I continue to have this problem, and I suppose it's because there is no EFI firmware on that partition; it's just an empty partition which is suitable for EFI firmware. So again, how do I create an EFI partition which has the EFI software, so that the laptop can once again boot in UEFI mode?
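
    One clarification, stated as an assumption about what the guide intends: the ESP never holds firmware (that lives on the motherboard); it only holds bootloader binaries, and grub-install writes those for you once the partition exists. A sketch of creating and populating one from a live session, with placeholder partition numbers:

        # create a 200 MiB FAT32 partition and mark it as the ESP
        sudo parted /dev/sda mkpart ESP fat32 1MiB 201MiB
        sudo parted /dev/sda set 1 boot on
        sudo mkfs.vfat -F32 /dev/sda1
        # with the installed system mounted/chrooted, populate it:
        sudo mount /dev/sda1 /boot/efi
        sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi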

    Read the article
