Search Results

Search found 4355 results on 175 pages for 'filesystem notification'.

Page 103/175 | < Previous Page | 99 100 101 102 103 104 105 106 107 108 109 110  | Next Page >

  • Best practice? Consumer data in MySQL on Amazon EBS (Elastic Block Store)

    - by jeff7091
    This is a consumer app, so I care about storage costs - I don't want to have 5x copies of data lying about. The app shards very well, so I can use MySQL and not have scaling issues. Amazon EBS has a nice baseline+snapshot backup capability that uses S3, which should have a light footprint (in terms of storage cost). BUT: the magnolia.com story scares the crap out of me: a basically flawless block-level backup of a corrupt DB or filesystem. Is there anything at the MySQL level that is nearly as storage-efficient as EBS?
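
    One commonly described way to keep an EBS snapshot from capturing a mid-write database - a sketch only, with illustrative names (XFS mounted at /vol, the EC2 API tools CLI, a placeholder volume ID), not something from the original post - is to quiesce MySQL and freeze the filesystem around the moment the snapshot is initiated:

        mysql <<'EOF'
        FLUSH TABLES WITH READ LOCK;
        system xfs_freeze -f /vol
        system ec2-create-snapshot vol-xxxxxxxx
        system xfs_freeze -u /vol
        UNLOCK TABLES;
        EOF

    The snapshot itself completes in the background to S3; only its initiation has to happen inside the freeze window, so the lock is held very briefly.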

    Read the article

  • Best alternatives to recover lost directories on a FAT32 external hard drive?

    - by Sergio
    Hi: I have a 320 GB ADATA CH91 external hard drive. I guess it has some problem with the USB connector. The point is that on certain occasions it fails during write operations, causing data loss. Right now I have lost a directory with several GB of very useful information, and since then I have not attempted to write to the disk any more. What tool would you recommend to recover the lost data? The disk is FAT32 formatted (only one partition) and I use both Linux and Windows. What filesystem format would you recommend to avoid future data loss? I currently only use this external hard drive in Linux, so there are several available choices (FAT, NTFS, ext3, ext4, reiser, etc.). Regards, Sergio
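
    A common first step with a flaky drive - a sketch that assumes the disk shows up as /dev/sdb and that there is room for a full image elsewhere; none of this is from the original post - is to image it read-only and run any recovery tool against the image rather than the failing hardware:

        ddrescue -r3 /dev/sdb disk.img rescue.log    # copy everything readable, retrying bad areas
        testdisk disk.img                            # then hunt for the lost directory in the image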

    Read the article

  • XenServer: Editing clone configuration before boot

    - by Jeff Ferland
    Upon cloning a base image, I need to reconfigure basic settings: regenerating the SSH host key, changing static IP assignments, setting the hostname, and so on. Because of the network setup, DHCP is not an option. That more or less rules out SSHing in with a predefined key, or running a startup script, since I can't provide the IP externally. I'd most like to mount the filesystem of the new machine on Dom0, but the LVM volumes are exported and it appears to be Bad Form to import them so the Dom0 machine can see them. What's your best suggestion for altering files in a cloned VM before boot? It must be non-interactive, and I'm going to guess out of the gate that scripting access via xe console is not going to work well.
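
    For reference, a rough sketch of the dom0-mount approach (VG/LV names are illustrative, and this assumes the clone is shut down while its disk is mapped):

        lvchange -ay /dev/VG_Xen/clone-disk          # activate the clone's LV in dom0
        kpartx -av /dev/VG_Xen/clone-disk            # map its partitions
        mount /dev/mapper/clone-disk1 /mnt/clone     # edit hostname, IPs, host keys here
        umount /mnt/clone
        kpartx -dv /dev/VG_Xen/clone-disk
        lvchange -an /dev/VG_Xen/clone-disk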

    Read the article

  • Customizing post-commit messages in svn for different users

    - by Suresh
    I have an svn repository that users can access (read/write) using their account OR via tunneling over ssh with svnserve. I also have a post-commit hook that sends mail to specific users for different projects via svnnotify; the typical command is

        svnnotify <params> --to-regex-map <list of email IDs> <regex>

    For users who have accounts on the system, the notification email is sent from <username>@machine.domain, which is fine. For users coming in via tunneling, the email gets sent from <tunnel-user>@machine.domain, which is a fake address, since these users don't have an account - the only reason I specify a tunnel user ID is to keep track of who made which update. So my question (finally) is: is there a way to pass a parameter (the "true" email address) to svnserve so that when the post-commit mail is sent, it can be sent "from" the correct email address? P.S. This is my first post here - if I haven't provided sufficient information, apologies; I'm happy to provide more details.
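
    One approach worth sketching (svnnotify does accept a --from option; the lookup file and fallback address here are hypothetical): have the post-commit hook map the recorded author to a real address and pass it explicitly:

        #!/bin/sh
        REPOS="$1" REV="$2"
        AUTHOR=$(svnlook author -r "$REV" "$REPOS")
        # /etc/svn-emails maps "author real.address@example.com", one pair per line
        FROM=$(awk -v u="$AUTHOR" '$1 == u { print $2 }' /etc/svn-emails)
        svnnotify --repos-path "$REPOS" --revision "$REV" \
            --from "${FROM:-$AUTHOR@machine.domain}" \
            --to-regex-map <list of email IDs> <regex>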

    Read the article

  • check_mk IPMI PCH sensor reading randomly fails

    - by Julian Kessel
    I use check_mk_agent for monitoring a server with IPMI and the freeipmi-tools installed. As far as I can see, the monitoring randomly detects no value returned for the IPMI sensor "Temperature_PCH_Temp". That's a problem, since it results in a CRITICAL state that triggers a notification. The interruption only ever lasts for one check; the following one is always OK. The temperature is nowhere near a limit, and neither the readings before the failure nor those after show a temperature trending toward a threshold. Does anyone have an idea what the reason for this behaviour could be, and how to prevent it?
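
    Not an explanation of the root cause, but one way such one-off blips are commonly damped - sketched here on the assumption that this is check_mk 1.x configured via main.mk, and that the service description matches the sensor name shown above - is to require several consecutive failures before the state goes HARD and notifies:

        # main.mk: tolerate transient read failures for this sensor
        extra_service_conf["max_check_attempts"] = [
            ("3", ALL_HOSTS, ["Temperature_PCH_Temp"]),
        ]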

    Read the article

  • 1TB disk formatted on Linux won't mount on Windows or Mac

    - by Pedro MC
    I have an external Western Digital HD with 1TB. I use Linux, but I wanted to reserve a cross-platform partition on the disk, so I decided to create two partitions and used the "Disks" application to do it: one partition with LUKS (version 1) encryption, and the other one, cross-platform, with an NTFS filesystem. Things work fine on my OS, but when I try to use the disk (the cross-platform partition) on both Windows and Mac, the device is not recognized. What could it be? Next, the output of "sfdisk -l /dev/sdb":

        Disk /dev/sdb: 121600 cylinders, 255 heads, 63 sectors/track
        Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

           Device Boot  Start     End    #cyls     #blocks  Id  System
        /dev/sdb1           0+  36473-  36473-  292968750   83  Linux
        /dev/sdb2       36473+ 121600-  85128-  683789062+  83  Linux
        /dev/sdb3           0       -       0           0    0  Empty
        /dev/sdb4           0       -       0           0    0  Empty
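
    One observation about the output above (mine, not from the post): both partitions carry MBR partition type 83 (Linux), and Windows generally ignores partitions whose type it does not recognize, even when the contents are NTFS. A sketch of retyping the NTFS partition, assuming it is /dev/sdb2:

        sfdisk --change-id /dev/sdb 2 7    # 7 = HPFS/NTFS; fdisk's 't' command does the same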

    Read the article

  • SVN multiple repositories in subfolders

    - by fampinheiro
    I'm using apache+svn. Apache config file:

        LoadModule dav_module modules/mod_dav.so
        LoadModule dav_svn_module modules/mod_dav_svn.so
        LoadModule authz_svn_module modules/mod_authz_svn.so

        <Location /code>
            DAV svn
            SVNParentPath "c:/repositories"
        </Location>

    Imagine I have this file structure (in every t? I have one svn repository):

        c:/repositories/
            uc1/
                0809v/
                    t1/  t2/  t3/
                0809i/
                    t1/  t2/
            uc2/
                t1/  t2/
            t1/

    I can access the repositories using:

        svn://domain.com/code/uc1/0809v/t1
        svn://domain.com/code/uc1/0809v/t2
        svn://domain.com/code/uc1/0809v/t3

    I want to access them using the URLs:

        http://domain.com/code/uc1/0809v/t1
        http://domain.com/code/uc1/0809v/t2
        http://domain.com/code/uc1/0809v/t3

    and see the content of the repository in the browser. If I create a repository at the root of the svn folder, I can see it (http://domain.com/code/t1); when I try the other URLs I get the error "Could not open the requested SVN filesystem". My question is: is it possible to have a search done in all subfolders looking for svn repositories?
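
    SVNParentPath only treats the immediate children of the configured directory as repositories, so one workaround often sketched (my suggestion, not from the post; the paths reuse the poster's layout) is a Location block per parent folder, with directory listing enabled:

        <Location /code/uc1/0809v>
            DAV svn
            SVNParentPath "c:/repositories/uc1/0809v"
            SVNListParentPath on
        </Location>
        # ...repeat for /code/uc1/0809i, /code/uc2, and so on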

    Read the article

  • OSX 10.8 Corrupted User Account Using Launchctl

    - by Scott
    I used the following command:

        launchctl unload -w /System/Library/LaunchAgents/com.apple.notificationcenterui.plist

    in an attempt to disable Notification Center. I'm not sure that I got all of the commands right, and I appear to have corrupted the account I executed it from - I get a grey screen when I try to log in on that account. Fortunately I have another account on the machine with admin privileges, so I can still use the machine. I would however like to restore the account to working condition, preferably without having to resort to a complete system restore from my Time Machine backup. Is there a way of diagnosing the current status of this LaunchAgent and returning it to its original state?
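
    A sketch of undoing the unload (on 10.8 the disabled flag written by -w lives in launchd's per-user overrides; the uid below is illustrative):

        # from the broken account (e.g. via su in Terminal), re-enable the agent:
        launchctl load -w /System/Library/LaunchAgents/com.apple.notificationcenterui.plist
        # the recorded override can be inspected at:
        #   /var/db/launchd.db/com.apple.launchd.peruser.<uid>/overrides.plist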

    Read the article

  • Default permissions for courier imap folders

    - by JoeCoder
    I'm using Courier IMAP. When a mail client creates a new folder, the folder is created on the filesystem with 640 permissions. I need it to be writable by the group, i.e. 660. I currently have IMAP_UMASK=007 in /etc/courier/imapd, but that's not enough. I'm not sure what else to try. Any ideas? I'm using Ubuntu Server 12.04. EDIT: I added a 50pt bounty to this. For an acceptable answer, I need a way to make it work from a package in a standard repo. If I download source and compile it myself, it won't be automatically kept up to date with security fixes. If I don't find a better answer, I'll add code to the admin script to call another sudo-approved script to chmod -R the whole directory before every change. But this is kind of hackish.
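
    For completeness, a minimal sketch of the fallback described above (the maildir path and group name are assumptions, not from the post):

        #!/bin/sh
        # fix-maildir-perms.sh - invoked via sudo before each change
        find /var/mail/virtual -type d -exec chmod 770 {} +
        find /var/mail/virtual -type f -exec chmod 660 {} +
        chgrp -R mailgroup /var/mail/virtual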

    Read the article

  • performance of vmware-machine on different computers

    - by bxshi
    I'm working on a filesystem-improvement project, and found a paper that describes cheating on benchmarks; it suggests that using VMs could help others reproduce our results. The question is: if I have made a specific VMware virtual machine, will it run the same on different computers and platforms? For example, I have a virtual machine with 1GB RAM, a 4GB HD and a one-core 2GHz CPU. Will that run the same on a quad-core 3GHz CPU as on a 2.4GHz P4? What if the computer has 4GB RAM - will VMware use some buffering mechanism to improve performance? If that's true, does it mean the VM runs slower on a 2GB RAM host than on a 4GB host? Hope you can help me with that, or just tell me where I could find the answer.

    Read the article

  • How do you create large, growable, shared filesystems on Linux at AWS?

    - by Reece
    What are acceptable/reasonable/best ways to provide large, growable, shared storage at AWS, exposed as a single filesystem? We're currently making 1TB EBS volumes ~biweekly and NFS-exporting them with no_subtree_check and nohide. In this setup, distinct exports appear under a single mount on the client. This arrangement does not scale well. The options we've considered:

    - LVM2 with ext4: resize2fs is too slow.
    - Btrfs on Linux: not obviously ready for prime time yet.
    - ZFS on Linux: not obviously ready for prime time yet (although LLNL uses it).
    - ZFS on Solaris: the future of this combo is uncertain (to me), and it adds a new OS to the mix.
    - glusterfs: heard mostly good things, but two scary (and maybe old?) stories.

    The ideal solution would provide sharing, a single fs view, easy expandability, snapshots, and replication. Thanks for sharing ideas and experience.
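
    Since resize2fs speed is the sticking point in the LVM option, one sketch worth adding (my assumption, not the poster's setup: XFS instead of ext4, which grows online and nearly instantly; device names are illustrative):

        pvcreate /dev/xvdf                        # the newly attached EBS volume
        vgextend datavg /dev/xvdf
        lvextend -l +100%FREE /dev/datavg/data
        xfs_growfs /export/data                   # online grow of the mounted filesystem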

    Read the article

  • Fsck stuck on "Clone Multiply-claimed blocks"

    - by user3436581
    Update: I fixed the issue, but now I don't see an eth0 directory in /sys/class/net. Any idea how to fix that? I cannot bring up eth0, and I need it badly so that I can back everything up over the network, since I'm working on a VM console.

    This virtual machine's sda1 is stuck. I've tried e2fsck and fsck, and both get stuck after "Clone multiply-claimed blocks? yes". I've waited for around 5 to 8 hours and it's still the same. I cannot mount the filesystem without fixing these errors. I'm doing this after unmounting all filesystems in rescue mode. Rebooting does not help. Any suggestions? Screenshot: http://i.stack.imgur.com/lgixr.jpg Alternative screenshot URL: http://s27.postimg.org/grk4p9eeb/error.png
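
    On the eth0 update, a quick triage sketch (generic commands, nothing specific to this VM):

        lspci | grep -i ether            # is the virtual NIC visible on the bus at all?
        dmesg | grep -i -e eth -e net    # did a driver bind, or rename the interface?
        ip link show                     # the interface may exist under another name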

    Read the article

  • How to backup/restore excluding filestream varbinary in SQL Server 2008?

    - by fdierre
    There is an application used at a production site that uses SQL Server 2008 as its DBMS. The database schema uses FILESTREAM varbinary columns to save binary data on the filesystem instead of directly in the DB tables. The point is that now and then it would be useful to copy the production database onto development machines, mostly for troubleshooting. The database is too big to move around comfortably, but it would be OK if it could be moved leaving out the FILESTREAM varbinary fields. In other words, I am trying to make an "imperfect" copy of a database: i.e., on the destination database, it is OK to have NULL values instead of the varbinary data. Is this possible? I tried looking for the feature in SQL Server Management Studio and did a backup that excludes the filegroup containing the FILESTREAM varbinary data, but I cannot restore it: Management Studio complains that the restore cannot be done because the backup is incomplete (of course). Is it possible to achieve what I am trying to do in some way?
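
    A sketch of the T-SQL route (database and filegroup names are illustrative; a caveat: after such a restore, queries touching the FILESTREAM columns error out until their filegroup is restored - the values do not come back as NULL):

        -- back up only the named filegroups, leaving the FILESTREAM filegroup out:
        BACKUP DATABASE AppDb
            FILEGROUP = 'PRIMARY'
            TO DISK = 'D:\backup\AppDb_partial.bak';

        -- piecemeal restore on the development machine:
        RESTORE DATABASE AppDb
            FILEGROUP = 'PRIMARY'
            FROM DISK = 'D:\backup\AppDb_partial.bak'
            WITH PARTIAL, RECOVERY;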

    Read the article

  • Please explain my fio results - is O_SYNC|O_DIRECT misbehaving on Linux?

    - by Zoltan
    I'm going mad trying to figure out what the problem could be with one of our storage boxes. With a simple fio script I'm testing random writes using bs=1M and direct=1. The SSD is a Samsung 840 Pro attached to an LSI HBA (3Gbit/s ports). This is the result I'm getting under FreeBSD 9.1:

        WRITE: io=13169MB, aggrb=224743KB/s, minb=224743KB/s, maxb=224743KB/s, mint=60002msec, maxt=60002msec

    This is regardless of sync being set to 0 or 1. On Linux, this is the result with sync=0:

        WRITE: io=14828MB, aggrb=253060KB/s, minb=253060KB/s, maxb=253060KB/s, mint=60001msec, maxt=60001msec

    and with sync=1:

        WRITE: io=6360.0MB, aggrb=108542KB/s, minb=108542KB/s, maxb=108542KB/s, mint=60001msec, maxt=60001msec

    My understanding is that since I'm operating on the raw block device, O_SYNC should not make any difference - there's no filesystem, no barrier, nothing between the writes and the drive itself, especially with O_DIRECT|O_SYNC set. Any ideas? For reference, here's the fio script I'm testing with:

        [global]
        bs=1M
        ioengine=sync
        iodepth=4
        size=16g
        direct=1
        runtime=60
        filename=/dev/sdh
        sync=1

        [rand-write]
        rw=randwrite
        stonewall

    Read the article

  • ZFS: Redistribute zvol over all disks in the zpool?

    - by growse
    Is there a way in which ZFS can be prompted to redistribute a given filesystem over all of the disks in its zpool? I'm thinking of a scenario where I have a fixed-size ZFS volume that's exported as a LUN over FC. The current zpool is small, just two 1TB mirrored disks, and the zvol is 750GB in total. If I were to suddenly expand the size of the zpool to, say, 12 1TB disks, I believe the zvol would still effectively be 'housed' on the first two spindles only. Given that more spindles = more IOPS, what method could I use to 'redistribute' the zvol over all 12 spindles to take advantage of them?
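
    A frequently sketched approach (zvol and snapshot names are illustrative, not from the post) is to rewrite the zvol with send/receive after the pool has grown, so the new copy is striped across all vdevs:

        zfs snapshot pool/lun@rebalance
        zfs send pool/lun@rebalance | zfs receive pool/lun-new
        # repoint the FC LUN export at pool/lun-new, then destroy the old zvol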

    Read the article

  • zram trimming by writing zero pages

    - by qdot
    I'm using zram as a backing block device for the /tmp filesystem in the following manner:

        echo 8000000000 > /sys/block/zram0/disksize
        mkfs.ext4 -O dir_nlink,extent,extra_isize,flex_bg,^has_journal,uninit_bg -m0 \
            -b 4096 -L "zram0" /dev/zram0
        mount -o barrier=0,commit=240,noatime,nodev,nosuid /dev/zram0 /tmp
        chmod aogu+rwx /tmp

    It works out reasonably well for me - however, there is an issue here: when files are removed, they are not zeroed, so zram does not remove the compressed pages. Obviously, running

        dd if=/dev/zero of=/tmp/ZERO bs=1M count={free-space-some-rest}; rm /tmp/ZERO

    clears it up in zram - it gets notified of zero pages and shrinks the store. How can I get ext4 to zero used pages on delete? Also, any other suggestions on how to optimize this?
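
    A possibility worth sketching - with the big assumption that the zram driver in the running kernel honors discard requests from the block layer:

        # online discard at mount time...
        mount -o barrier=0,commit=240,noatime,nodev,nosuid,discard /dev/zram0 /tmp
        # ...or batch-trim the free space periodically instead:
        fstrim -v /tmp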

    Read the article

  • Mac OS X Server add server user

    - by Meltemi
    What's the recommended way to add a user to Mac OS X Server that doesn't need all the hoopla associated with Workgroup Manager? There are many users pre-configured in Mac OS X Server (www, root, ldapadmin, etc.) that don't have a "Full Name", mail accounts, etc. I'd like to create an 'svn' user to be the owner of our Subversion repository as per this tutorial:

        If you've decided to use either Apache or stock svnserve, create a single svn user on your system and run the server process as that user. Be sure to make the repository directory wholly owned by the svn user as well. From a security point of view, this keeps the repository data nicely siloed and protected by operating system filesystem permissions, changeable by only the Subversion server process itself.

    Wondering if there's a way outside of Workgroup Manager and Open Directory, as this account will be entirely server-based. Is this still sound advice under OS X Server? If so, what's the easiest way to create the user (Mac OS X Server doesn't seem to respond to useradd)?
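
    A sketch of creating such a system account with dscl (the UID/GID values are arbitrary picks below 500 so the account stays out of the login window; verify they are unused first):

        sudo dscl . -create /Users/svn
        sudo dscl . -create /Users/svn UniqueID 450
        sudo dscl . -create /Users/svn PrimaryGroupID 450
        sudo dscl . -create /Users/svn UserShell /usr/bin/false
        sudo dscl . -create /Users/svn NFSHomeDirectory /var/empty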

    Read the article

  • Using GlusterFS for simple replication

    - by k7k0
    Hi, newbie question. I need to build this: a /shared folder with ~500GB of files, ~1MB each, and two boxes (server1 and server2) connected by a 1Gb/s LAN. Every box needs to get r/w access to the files, so they are both clients. I want the files replicated on both boxes: every time a file is written on one server, the same file should be present on the other one. My questions regarding GlusterFS: Will it duplicate the files on the same box? For example, if the files are in /shared and the mount is /mnt/shared, will it take 1GB of space on every server? Instead, should I use the filesystem directly, locally writing to /shared? Does the replication work that way, without mounting a client? Also, if anyone knows any other way to accomplish this setup I'll be very grateful. Thanks in advance.
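
    For reference, a minimal two-node replica sketch with the gluster CLI (hostnames and brick paths are illustrative; note that reads and writes are expected to go through the client mount, not directly into the brick directories):

        gluster peer probe server2                              # run once on server1
        gluster volume create shared replica 2 \
            server1:/export/shared server2:/export/shared
        gluster volume start shared
        mount -t glusterfs localhost:/shared /mnt/shared        # on each box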

    Read the article

  • How to unmount a VHD in Windows 7. There is no unmount option.

    - by Triynko
    I mounted a VHD file in Windows 7 using the Disk Manager. Once mounted, there is no option to unmount it. The only thing close to such an option that I can find is the taskbar notification icon I use to remove USB devices: it offers to eject the virtual hard disk, but when I click that, it says the disk is in use and cannot be ejected - even though it's not in use; I never even browsed the drive. The Disk Manager is closed, and the only open file handles on the drive (according to disk performance in Task Manager) belong to SYSTEM. Ejecting devices cleanly has been a problem since Windows XP, and it sickens me to see it persist into Windows 7.
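
    For what it's worth, a sketch of detaching from an elevated command prompt with diskpart (the .vhd path is illustrative):

        diskpart
        DISKPART> select vdisk file="C:\VMs\disk.vhd"
        DISKPART> detach vdisk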

    Read the article

  • How can I tell whether an interrupted rm -r removed any files?

    - by Jake Petroules
    I installed sshfs on a Linux box and then mounted my Mac home directory. In the middle of troubleshooting a configuration issue, I did an ls -l on the mount directory (as a normal user), receiving:

        total 0
        d????????? ? ? ? ? ? sl

    I then ran sudo rm -r on that directory but pressed Ctrl+C to terminate it immediately, before it looks like the command did anything. I notice no files missing, but I want to be sure - is there a way I can somehow inspect a filesystem log on my Mac to see if any files were actually removed?
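
    One check worth sketching, under the assumption that a reasonably recent Time Machine backup exists (tmutil ships with 10.7 and later):

        tmutil compare    # lists what differs between the latest backup and the live filesystem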

    Read the article

  • Sudden restart, now Chrome kills the computer

    - by Kai
    My computer suddenly restarted itself the other day, and when it came back up, so much as clicking on the icon for Google Chrome freezes everything. I uninstalled Chrome and tried reinstalling it, but as soon as the download finished, the computer froze again. I tried installing an earlier version (from about a week prior) and it froze differently, but still froze. I am also getting a notification that the battery needs to be replaced. At the moment I am running it sans battery and using Firefox, and everything seems to be fine. I have an HP dv4t running Windows 7.

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic, etc.) without transferring the data into a new boot system - partly because of the issue I am about to describe, but mostly because the majority of the data is there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list:

    - Make a complete filesystem clone with Antonio Diaz's ddrescue.
    - Run Disk Warrior on the copy, repairing whatever errors occurred.
    - Wipe out all ACLs on the entire drive.
    - Set all permissions to the same value - wide open, 777.
    - Remove any system data (applications, system files, including hidden files, to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive.
    - Transfer the data to a newly formatted drive folder by folder, resetting the Spotlight index in between adding each one to watch for issues. (Interesting here is that no issues occurred except with the Documents folder - yet when I transferred only the Documents folder to a newly formatted drive on its own, there was no trouble. It appears almost as though it may not be the content but the quantity or a specific combination of data that results in problems.)
    - Use DataRescue to transfer the data to yet another newly formatted drive, to expose any missed hidden files.

    Between each of the above steps I stopped Spotlight (searching for anything beginning with "md" in Activity Monitor - All Processes - and quitting it), deleted the .Spotlight-V100 directory from the affected drive, and restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it again.

    In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md* processes from Activity Monitor to be able to eject it without Force Eject. Once I disconnect the drive after the "4 hours remaining" situation, if I reattach it, Spotlight forever estimates the remaining time and never gets going again.

    So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used both USB and FW drives). I have tried this on several machines (3, to be precise) and on 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option, because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas?

    ---update 2-6-11

    Since I have not received any responses except the one below, which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID, where PID is the mdworker process ID, to try to determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: after indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes", and the Terminal window with the opensnoop result stops moving.

    I then proceeded to execute the same command on mds to see what it was doing, and here's what I get, repeatedly:

        501 57 mds 21 /
        501 57 mds 21 /Volumes/Sno Leppard
        501 57 mds 21 /Volumes/Tiger
        501 57 mds 21 /Volumes/Leppard
        501 57 mds 21 /Volumes/Disk Warrior
        501 57 mds 21 /Volumes/ONM Data

    These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are excluded from Spotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions: what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M

    __final edit__

    I finally resolved the issue, and here is how I did it. I used the terminal command sudo opensnoop -p PID, where PID is the process ID of the processes I was monitoring - all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support. They were flabbergasted as well, and advised me to install yet another default 10.6.6 system and try again. The same pattern repeated: mds and the mdworker(s) would start indexing, and eventually the Spotlight icon would say 6 hours remaining with all mdworkers gone and mds at 90% or so of CPU. But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search, and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again.

    I still have no clue what is causing this behavior, but I did find a functional solution to the problem. Anyone with a similar problem: run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and the solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find an actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio-related files - Performer, Live, SD2, things like that), I will edit again to let you all know. Thanks for any attempts to help, M
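
    A small sketch of the diagnostic described above, capturing the trace so the last-touched file survives mdworker's exit (the pid lookup is illustrative):

        PID=$(ps ax -o pid,comm | awk '/mdworker/ { print $1; exit }')
        sudo opensnoop -p "$PID" | tee /tmp/mdworker.trace
        # once mdworker disappears, the culprit is near the end of the trace:
        tail -n 5 /tmp/mdworker.trace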

    Read the article

  • Possible to get SSD TRIM (discard) working on ext4 + LVM + software RAID in Linux?

    - by Don MacAskill
    We use RAID1+0 with md on Linux (currently 2.6.37) to create an md device, then use LVM to provide volume management on top of the device, and then use ext4 as our filesystem on the LVM volume groups. With SSDs as the drives, we'd like to see the TRIM commands propagate through the layers (ext4 - LVM - md - SSD) to the devices. It looks like recent 2.6.3x kernels have had a lot of new SSD-related TRIM support added, including lots more coverage of Device Mapper scenarios, but we still can't seem to get it to cascade down properly. Is this possible yet? If so, how? If not, is any progress being made?
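
    A quick way to see where discards stop propagating - a sketch with illustrative device names, assuming a kernel new enough to expose these queue attributes:

        cat /sys/block/sda/queue/discard_max_bytes    # non-zero: the SSD itself advertises TRIM
        cat /sys/block/md0/queue/discard_max_bytes    # zero here would mean md is the blocker
        cat /sys/block/dm-0/queue/discard_max_bytes   # likewise for the LVM (device-mapper) layer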

    Read the article

  • OpenAFS on Fedora/CentOS

    - by Michael Pliskin
    I am trying to see if OpenAFS fits my needs as a distributed filesystem and am a bit stuck. There are docs, but they're all quite hard to understand, so I'm asking for some expert advice here. My questions:

    - Which version should I install? I need Windows client support, so I need 1.5 - right? But it is not stable... or is it? And I don't see any pre-built RPMs for it, so compile from source?
    - I tried to compile and it worked, but it created a non-"mp" kernel module while my kernel needs an "mp" one - how do I work around that?
    - Do I really need a fresh new partition to start with, or can I re-use an existing one and just make it available via AFS?
    - Any nice HOWTOs around?

    Read the article

  • Plesk file permissions - Apache/PHP conflicting with user accounts.

    - by hfidgen
    Hiya, I'm building a Drupal site which performs various automatic disk operations as the apache user (id=40). The problem is that the site was set up on a subdomain belonging to user ID 10001 (i.e. my main FTP account), so the filesystem belongs to that user ID, and I keep getting errors like this:

        warning: move_uploaded_file() [function.move-uploaded-file]: SAFE MODE Restriction in effect. The script whose uid is 10001 is not allowed to access /var/www/vhosts/domain.com/httpdocs/sites/default/files/images/user owned by uid 48 in /var/www/vhosts/domain.com/httpdocs/includes/file.inc on line 579.

    I've tried changing the Apache group in httpd.conf to apache:psacln, psacln being the default group for all web users, but that hasn't helped. The situation now is:

        ..../files/images      = 777, chown = ftplogin:psacln
        ..../files/images/user = 775, chown = apache:psacln
        ..../files/tmp         = 777, chown = ftplogin:psacln

    So apparently uid 40 and uid 10001 both have permission to write to any of the three directories involved, but still can't. Am I missing something here? Can anyone help? Thanks!
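
    One observation (mine, not the poster's): the quoted warning is PHP's safe_mode UID check, which compares file-owner UIDs and ignores the permission bits entirely, so no amount of chmod will satisfy it. A sketch of switching it off for just this vhost, assuming Plesk's per-domain vhost.conf mechanism and mod_php:

        # /var/www/vhosts/domain.com/conf/vhost.conf
        <Directory /var/www/vhosts/domain.com/httpdocs>
            php_admin_flag safe_mode off
        </Directory>
        # then have Plesk rebuild its web server configuration (websrvmng on older versions)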

    Read the article
