Search Results

Search found 3973 results on 159 pages for 'boost filesystem'.

  • How does the process of disk partitioning actually work on most HDDs?

    - by Dark Templar
    From what I know of most laptops, you are able to "partition" your disk into as many drives as you please. The more you cut it up, the smaller your partitions are, but from an organizational point of view this may be desirable. I was wondering how the disk itself is divided up underneath the partitions visible to the user. For instance, a laptop disk usually consists of platters, each with two surfaces, and the surfaces are further divided into "tracks". I guess what I am asking is: is it possible to identify how the disk itself keeps track of partitions? Does each partition get its own platter, or its own set of adjacent tracks, or some other configuration, or is the data from different partitions just interleaved and scattered throughout the disk?
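
    In case it helps frame the question: the partition table simply records each partition as a contiguous range of logical block addresses, and the drive's firmware decides how those map onto platters and tracks. A minimal sketch for inspecting this on Linux (the device name /dev/sda is an assumption):

        # Each partition is reported as a start/end sector range in one
        # linear address space -- no platters or tracks are involved.
        sudo fdisk -l /dev/sda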

  • MySQL ERROR 1045 Access Denied

    - by winarm
    Hello, I recently installed MySQL on Fedora 13. Now, when I try to create a database, it denies me access. I tried resetting the password, and it does not recognize my system root. I tried resetting the password with an init file containing: UPDATE mysql.user SET Password=PASSWORD('MyNewPass') WHERE User='root'; FLUSH PRIVILEGES; I tried uninstalling and then reinstalling, and it is still not working. I am new to Linux and not comfortable with the filesystem. Talk to me like I'm four. Thank you, kindly.
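
    For reference, a minimal sketch of how the init-file reset is usually run (the mysqld service name and the /tmp path are assumptions for Fedora; the init file holds the two statements above):

        sudo service mysqld stop
        sudo mysqld_safe --init-file=/tmp/mysql-init &   # runs the UPDATE/FLUSH before any client connects
        mysql -u root -p -e 'SELECT 1'                   # verify the new password works
        # delete /tmp/mysql-init afterwards and restart mysqld normally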

  • Should I format USB sticks and SD cards to FAT, FAT32, exFAT or NTFS? (Windows files, live Linux distros)

    - by superuser
    Does which one to choose depend on the media size, or on some other parameters? In Windows 7, FAT16 is the default; in pendrivelinux.com's Universal USB Installer, it is FAT32. Which one to choose? How about NTFS for Windows use? How about exFAT? It is the Microsoft-designed filesystem for removable media. Is there a difference between USB sticks and SD cards in this regard? Edit: seeing developments in the other thread, should I still use something like exFAT if I don't want Recycle Bins created on every single machine I plug my USB thumb drive into?
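
    For what it's worth, a minimal sketch of formatting a stick either way on Linux (assumes the partition is /dev/sdX1 -- check with lsblk first -- and that the exFAT userspace tools are installed):

        sudo mkfs.vfat -F 32 -n USBSTICK /dev/sdX1   # FAT32: 4 GB per-file limit
        sudo mkfs.exfat /dev/sdX1                    # exFAT: no 4 GB limit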

  • Transparently decompressing data in archive to allow greater compression later

    - by Vi.
    I have, for example, a filesystem image which has some compressed files (with weak compression such as gzip), for example manpages, or archives with the same uncompressed content nearby. How can I pre-filter the data to "expand" the compressed data to plain form (to re-compress it with strong compression) and then post-filter after decompression to restore the original "semi-compressed" image? A SHA-1 match is advised but not strictly required (but the resulting image must work, e.g. re-compressed files should not grow too much, should be decompressible, etc.). It is like improving the compression ratio by reversing weak compression algorithms. Are there programs for this?
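
    A minimal sketch of the pre-filter half, assuming the image is unpacked into a directory tree. The hard part -- re-gzipping byte-identically on restore -- is not handled here, since gzip output varies with compression level and header fields, so a SHA-1 match is not guaranteed:

        find image/ -type f -name '*.gz' -exec gunzip {} +   # expand the weak compression
        tar -cf - image/ | xz -9e > image.tar.xz             # recompress the tree strongly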

  • how to stop powershell mangling command line options for a program executed from the shell?

    - by kem
    From the PowerShell prompt, when I try to run a program and feed it a command line option, PowerShell ends up mangling the option. Why does this happen? Is there any way to stop it besides enclosing the option in quotes? For example, from the PowerShell prompt:

        PS Microsoft.PowerShell.Core\FileSystem::\\mach\share> .\myprog.exe -file=input.txt

    myprog.exe ends up getting two arguments: 1) -file=input and 2) .txt. I need to run it like .\myprog.exe "-file=input.txt" or .\myprog.exe '-file=input.txt' to force it to be one argument. No other shell does this.
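
    Besides quoting, PowerShell 3.0 and later also has the stop-parsing symbol, which passes the rest of the line to the program verbatim -- a sketch:

        .\myprog.exe --% -file=input.txt   # everything after --% is not parsed by PowerShell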

  • Hiding a directory through the FAT table

    - by hennobal
    I've looked into the FAT file system, trying to find a way to make a directory hidden from the user's view. This has been done by malware previously, so it should be possible. The SpyEye trojan hid inside a directory C:\cleansweep.exe\ which was only reachable through the command line. I know deletion is possible by substituting the first character of the directory's entry in the FAT table with 0xE5, but then it will not be accessible. Any ideas on how the SpyEye scenario can be recreated? Any filesystem is interesting, but ideally FAT or NTFS.

  • Creating disk snapshots in Windows 7

    - by Puneet Arora
    Does anyone know of a command or tool to create disk snapshots on Windows 7 (client SKU)? I see vssadmin.exe has a "create shadow" option, but that is available only on server SKUs: http://technet.microsoft.com/en-us/library/cc788055(v=ws.10).aspx I have a backup tool that replicates changes (creations, modifications and deletions of files and directories) since the last backup to my backup volume. Before each run I want to create a persistent snapshot on the backup volume. I could then mount previous snapshots to view previous backups, achieving behavior similar to that of Time Machine in OS X. This question has been asked before, but unfortunately there weren't any good answers: Taking snapshots of filesystem/volume in Windows 7?
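
    One possibility worth sketching: the Win32_ShadowCopy WMI class is present on client SKUs even though vssadmin's create option is not, so an elevated PowerShell prompt can create a snapshot ('D:\' as the backup volume is an assumption):

        (Get-WmiObject -List Win32_ShadowCopy).Create('D:\', 'ClientAccessible')
        vssadmin list shadows   # listing, unlike creating, works on client SKUs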

  • stat and ls show wrong file size (terabytes wrong)

    - by WolleTD
    OK, I have a bunch of vCard files, all about 200 to 300 bytes in size. While trying to archive them, I wondered why that takes so long and discovered that there is one file with a wrong size. Both ls and stat show a size of about 8.1 terabytes. That's amazing, because my SSD is only about 250 gigabytes in size. There are some other files with wrong sizes too, but this is clearly the biggest one. I already ran fsck, but there seem to be no errors in the (ext4) filesystem. How can I get rid of this wrong size? Thanks, Wolle
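
    A small diagnostic sketch (weird.vcf is a placeholder name): comparing the apparent size with the allocated blocks shows whether the file is merely sparse, in which case rewriting it drops the bogus size without touching the content:

        stat -c 'apparent=%s bytes, allocated=%b blocks' weird.vcf
        du -h --apparent-size weird.vcf                    # huge
        du -h weird.vcf                                    # tiny if the file is sparse
        cp weird.vcf fixed.vcf && mv fixed.vcf weird.vcf   # rewrite if the content is intact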

  • What is needed for 'Previous Versions' to be visible on the client OS?

    - by Zoredache
    I have servers with Shadow Copies enabled, taking snapshots a couple of times a day. From the server, if you look at the local devices, you can see Previous Versions being populated reliably. But from remote clients, the ability of an end user to see Previous Versions seems to be very hit-or-miss. For the sake of this question you can assume that all my clients are Windows 7 and the servers are Windows Server 2008 R2. Is there an exhaustive list of everything that is required for an end user to see Previous Versions? Are there any requirements for a certain level of share or filesystem permissions, other than read access? Does something need to be open on the firewall, other than what is already in place for normal Windows networking?

  • Apache 2 Symbolic link not allowed or link target not accessible

    - by astropanic
    My Apache server runs as user foo. I have some Rails applications in /home/foo/app1 and /home/foo/app2. Each of them has a vhost:

        <VirtualHost *:80>
          ServerName app1.foobar.com
          ServerAlias www.app1.foobar.com
          DocumentRoot /var/www/html/app1/current/public
          RailsEnv production
          <Directory /var/www/html/app1/current/public>
            AllowOverride all
            Options -MultiViews
          </Directory>
        </VirtualHost>

    I have a symlink in /var/www/html/app1: current -> /home/foo/app1/tmp_20102611. All file permissions are set correctly (user foo, group foo), and I can traverse the filesystem from a shell. SELinux is disabled; the distro is CentOS 5.5. With the above symlink I get a 403 and an error entry in error_log: Symbolic link not allowed or link target not accessible: /var/www/html/app1/current. When I symlink my app into a subdirectory of /var/www/html instead of /home/foo, it works. How can I avoid this error while still placing my app in my /home/foo directory?
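
    Two usual suspects for exactly this error, sketched under the assumption that the paths above are accurate -- Apache cannot traverse /home/foo, or symlink following is disabled for the target:

        chmod o+x /home/foo /home/foo/app1   # the apache user needs execute (traverse) rights on the path
        # and in the vhost's <Directory> block:
        #   Options -MultiViews +FollowSymLinks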

  • Windows 7 installer doesn't recognize NTFS partition.

    - by ifesdjeen
    Hi, I'm trying to install Windows 7 on my MacBook. I've created an NTFS partition, but when I start the Windows 7 installation, it says that I can't install Windows on this partition, since the drive already contains the maximum number of partitions with this filesystem type. I haven't heard of any such limit on filesystems, but still I can't even format this drive from the Win7 installer. I've found access to a command line from the Win7 installation CD, but I can't find fdisk there to format with. Do you have any idea how to deal with this?
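
    The installer's command prompt (Shift+F10) ships diskpart rather than fdisk; a sketch of formatting the target partition with it (the disk and partition numbers are placeholders -- verify them with the list commands first):

        diskpart
        list disk
        select disk 0
        list partition
        select partition 3
        format fs=ntfs quick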

  • Windows server backup fails at 40%

    - by Abraham Borbujo
    I'm configuring Windows Server Backup as a full system backup. It starts fine, but when it's backing up the system drive (C:) it stops at 40% every time I try. It only backs up 7.28 GB of the total 18.19 GB. I tried changing the destination drive and also checking the C: filesystem in order to find any problem, but it seems to be OK and the problem is still the same. I get a message telling me that the backup completed with warnings. The warning details say that it didn't complete the backup because of an input/output error in the source or destination. Thanks for your help.

  • Resize the /var directory in Red Hat Enterprise Linux 4

    - by Sri
    I am running NDB MySQL. The log files fill up the /var directory, so I can't start the ndbd service now. As a temporary fix I deleted the log files and it worked again, but the log files keep filling up /var. I have plenty of space in another partition, so I would like to move that space over to /var. Here is my output from df -h:

        Filesystem                      Type    Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00 ext3     54G  2.9G   49G   6% /
        /dev/cciss/c0d0p1               ext3     99M   14M   81M  14% /boot
        none                            tmpfs  1013M     0 1013M   0% /dev/shm
        /dev/cciss/c0d0p2               ext3    9.7G  9.7G     0 100% /var

    There is plenty of space in /dev/mapper/VolGroup00-LogVol00, so I would like to move 10 GB from it to /var. Could you please help me solve this problem?
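
    Since /var is a plain partition (/dev/cciss/c0d0p2) while the free space sits inside the root LV's filesystem, one hedged approach is to put /var on a new logical volume. This sketch assumes VolGroup00 still has unallocated extents (check vgdisplay; if not, the root filesystem and LV would have to be shrunk first) and should be done with services stopped:

        lvcreate -L 10G -n LogVolVar VolGroup00
        mkfs.ext3 /dev/VolGroup00/LogVolVar
        mount /dev/VolGroup00/LogVolVar /mnt
        cp -a /var/* /mnt/    # copy the current contents across
        # then point /var at the new LV in /etc/fstab and reboot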

  • Sharing / replicating EBS across AWS nodes

    - by skrat
    I would like to use a single EBS volume across multiple EC2 nodes (web/app servers). I've read some articles on snapshot sharing, but that doesn't suit what we need. We use the filesystem for storing DB record attachments, so if one such attachment gets created, we need it to be immediately available to all nodes (to serve). So far only NFS seems viable, but it's a pain to configure and maintain. Another option could be storing those attachments on S3 instead, but that would cut us off from doing any analysis on that data. This must be quite a common problem when scaling in AWS; what solutions are there?

  • Experience with MooseFS?

    - by brown.2179
    Anyone have any experience using MooseFS? I want an easy distributed storage platform to store a static data archive of about 10 TB and serve it to 20-40 nodes. I also want to be able to add storage as the archive grows without having to rebuild the filesystem. I don't care if it's a bit slow; I just want it to be simple and stable. Basically, from what I can see, for OS X it's between MooseFS and Gluster. Any other suggestions?

  • df -h command in Ubuntu

    - by Esha Sharma
    I am a new user of Ubuntu. When I type df -h in a terminal, it gives me a list of all storage devices and their space usage. On my system I get this:

        Filesystem   Size  Used Avail Use% Mounted on
        /cow         934M  173M  761M  19% /
        udev         925M  4.0K  925M   1% /dev
        tmpfs        374M  856K  373M   1% /run
        /dev/sdb1    7.5G  2.8G  4.8G  37% /cdrom
        /dev/loop0   1.5G  1.5G     0 100% /rofs
        tmpfs        934M   16K  934M   1% /tmp
        none         5.0M     0  5.0M   0% /run/lock
        none         934M   76K  934M   1% /run/shm
        /dev/sda     299G   74M  299G   1% /media/q

    I understand that /dev/sda is my hard drive, which is 320 GB (299 GiB, which is hopefully what is being displayed), and that /dev/sdb1 is the 8 GB pen drive from which I am running the live CD. My question is: what are the other filesystems, and where do they physically live, given that the whole disk is taken up by /dev/sda?
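
    A small sketch that helps answer this: findmnt shows what backs each mount point. On a live CD, /cow (the writable overlay) and the tmpfs entries live in RAM, and /dev/loop0 is the compressed system image on the pen drive -- none of them touch /dev/sda:

        findmnt -o TARGET,SOURCE,FSTYPE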

  • Splitting an archive on multiple media

    - by Robert Munteanu
    I'm generating archives which are larger than my current physical media (DVD). I'd like to split those archives: automatically, instead of generating mini-archives by hand; and consistently, so that each archive can be extracted independently of the others. For instance, for a 24 GB tree which would be archived into 10 GB, I would get 3 archives, all of them under 4.7 GB and each of them extractable without the other two. I'm using dirvish, so I'm archiving a filesystem tree. Update: I'm using Linux.
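
    A minimal sketch of one way to do it: greedily bin the files into DVD-sized groups, then build one self-contained tar per group. This is first-fit in directory order, so it won't pack optimally, and the names and the 4.5 GB ceiling are assumptions:

        find tree -type f -printf '%s\t%p\n' |
        awk -F'\t' -v max=4500000000 '
          { if (used + $1 > max) { part++; used = 0 }    # start a new bin
            used += $1
            print $2 >> sprintf("group-%03d.txt", part) }'
        for list in group-*.txt; do
          tar -czf "${list%.txt}.tar.gz" -T "$list"      # each tar extracts on its own
        done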

  • How can I specify custom folders for file-browsing in Metro Apps?

    - by klyonrad
    Whenever you use a Metro app and you want to import some files, there is a little file browser with a handful of folders to pick from. However, there is a folder that is very important to me: my personal Dropbox. How can I add this folder as a "favorite" in this view? Always browsing through the whole filesystem is slow in the Metro interface. I realize I could make symlinks for all the typical Dropbox folders, but that's simply annoying, and there has to be another way (just like it's possible to "hack" the "Send To..." options in the context menu).

  • Linux Raid: Can mdadm --grow a raid1 while mounted?

    - by Chris
    I have two 500 GB drives in a RAID1 setup that I needed to upgrade for more space. I mdadm --fail'ed each drive in turn, used dd to copy each drive to its respective larger drive (2 TB each), removed the smaller drives and replaced them with the larger ones, then reassembled the array and forced a resync. So now I've got a 500 GB RAID1 sitting on 2 TB drives, and I wish to grow it. The plan is to use mdadm --manage /dev/md0 --grow to grow the array, then boot a rescue CD, assemble the array under that environment, and run resize2fs on it. Can I use mdadm --grow on a mounted and live filesystem? Also, do I need more options to make sure the grow operation stays RAID1?
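
    For the record, a sketch of the fully online path: md can grow a RAID1 while it is mounted, and ext3/4 can then be grown online as well, so the rescue CD may be unnecessary. The array stays RAID1 -- --size changes the per-device size, not the RAID level:

        mdadm --grow /dev/md0 --size=max   # use all the space on the 2 TB members
        resize2fs /dev/md0                 # grow the mounted filesystem to match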

  • Best alternatives to recover lost directories in FAT32 external hard drive?

    - by Sergio
    I have a 320 GB ADATA CH91 external hard drive. I guess it has some problems with the connector of the USB jack. The point is that on certain occasions it fails on write operations, generating data loss. Right now I have lost a directory with several GBs of very useful information. Since then I have not attempted to write to the disk any more. What tool would you recommend to recover the lost data? The disk is FAT32-formatted (only one partition) and I use both Linux and Windows. What filesystem format would you recommend to avoid future data loss? I currently only use this external hard drive in Linux, so there are several available choices (FAT, NTFS, ext3, ext4, ReiserFS, etc.).
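
    A hedged sketch of the usual recovery route: image the partition first with GNU ddrescue (doubly important given the flaky USB connector), then let TestDisk work on the image; the device and file names are placeholders:

        sudo ddrescue /dev/sdb1 fat32.img fat32.map   # copy everything, mapping bad areas
        testdisk fat32.img                            # then try its undelete / repair menus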

  • My hard disk doesn't get recognized

    - by SteveL
    For a few days now I have had a problem with my 500 GB internal hard disk. I am on Linux Mint 13, but I have the same problem with my Windows installation. When running fdisk -l I can see the hard disk (same in the BIOS), but I can't mount it, even via the Disk Utility program. In Windows XP I can see it in the My Computer menu, but when I click it, it says: D:\ is not accessible. The file or directory is corrupted and unreadable. Is there a way to fix it, or at least save some of my files and format it? Should I be thinking about the worst-case scenario, e.g. that my HDD is dead? Edit: The filesystem is NTFS.
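
    A cautious sketch, assuming the partition shows up as /dev/sdb1 in fdisk -l: ntfsfix from ntfs-3g applies minor repairs and clears the dirty flag, but the authoritative repair for a corrupted NTFS volume is chkdsk from Windows:

        sudo ntfsfix /dev/sdb1   # Linux side: minor fixes only
        # Windows side, from a working install:
        #   chkdsk D: /f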

  • Tell the linux kernel to put a file in the disk cache?

    - by Rory
    Is there any command to force a file to be read in and loaded into the Linux disk cache? This is on an up-to-date Debian system. I know that in the general case it's better to let the Linux kernel figure this out. But I have an edge case: I have a laptop with an NFS directory mounted, and I want to play a long video file without a network problem interrupting the playing. I know that (largish) file will be read in its entirety later on. I know that nothing else (really) will be running while playing this video. There is enough free memory to store this file. (I know I could just copy the file into a new tmpfs filesystem, but I'm curious whether there's an even shorter way to do it.)
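
    Two sketches (the path is a placeholder): any sequential read populates the page cache, and the third-party vmtouch tool can do it explicitly and even pin the pages:

        cat /mnt/nfs/video.mkv > /dev/null   # simplest: read it once
        vmtouch -t /mnt/nfs/video.mkv        # touch the file into cache
        vmtouch -l /mnt/nfs/video.mkv        # optionally lock it there with mlock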

  • Replicate a big, dense Windows volume over a WAN -- too big for DFS-R

    - by Jesse
    I've got a server with a LOT of small files -- many millions of files, and over 1.5 TB of data. I need a decent backup strategy. Any filesystem-based backup takes too long: just enumerating which files need to be copied takes a day. Acronis can do a disk image in 24 hours, but fails when it tries to do a differential backup the next day. DFS-R won't replicate a volume with this many files. I'm starting to look at Double Take, which seems to be able to do continuous replication. Are there other solutions that can do continuous replication at a block or sector level -- not file-by-file -- over a WAN?

  • What is the secure way to isolate ftp server users on unix?

    - by djs
    I've read the documentation for various FTP daemons and various long threads about the security implications of using a chroot environment for an FTP server when giving users write access. The vsftpd documentation, in particular, implies that using chroot_local_user is a security hazard, while not using it is not. There seems to be no coverage of the implications of allowing the user access to the entire filesystem (as permitted by their user and group membership), nor of the confusion this can create. So, I'd like to understand what the correct method is in practice. Should an FTP server with authenticated write-access users provide a non-chroot environment, a chroot environment, or some other option? Given that Windows FTP daemons don't have the option to use chroot, they must implement isolation otherwise. Do any Unix FTP daemons do something similar?
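
    For vsftpd specifically, a hedged sketch of a per-user jail whose root stays non-writable (the /home/ftp layout is an assumption; keeping the chroot root unwritable by the user and giving them a writable subdirectory is what avoids the hazard the docs warn about):

        # /etc/vsftpd.conf
        chroot_local_user=YES
        user_sub_token=$USER
        local_root=/home/ftp/$USER   # owned by root, not writable by the user
        write_enable=YES             # users write only inside subdirectories they own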

  • How to use ccache selectively?

    - by Anonymous
    I have to compile multiple versions of an app written in C++, and I am thinking of using ccache to speed up the process. ccache howtos have examples which suggest creating symlinks named gcc, g++, etc. and making sure they appear in PATH before the original gcc binaries, so that ccache is used instead. So far so good, but I'd like to use ccache only when compiling this particular app, not always. Of course, I could write a shell script that creates these symlinks every time I want to compile the app and deletes them when the app is compiled. But this looks like filesystem abuse to me. Are there better ways to use ccache selectively, not always? For compilation of a single source file, I could just manually call ccache instead of gcc and be done, but I have to deal with a complex app that uses an automated build system for multiple source files.
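
    A minimal sketch that avoids the symlinks entirely: most make- and autotools-based builds honor the CC/CXX variables, so ccache can be enabled for just this app's build:

        make CC="ccache gcc" CXX="ccache g++"
        # or, for a configure-based build:
        CC="ccache gcc" CXX="ccache g++" ./configure && make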
