Search Results

Search found 10076 results on 404 pages for 'high volume'.


  • Run a MongoDB configuration server without 3GB of journal files

    - by Thilo
    For a production sharded MongoDB installation we need 3 configuration servers. According to the documentation, "the config server mongod process is fairly lightweight and can be run on machines performing other work". However, in the default configuration they all have journalling enabled, and with preallocation this takes up 3 GB of disk space. I assume the actual data and transaction volume of a config server is quite small, so this seems excessive. Is there a way to (safely!) run these config servers with much less disk use for the journal? Do I need journalling at all on config servers? Can I set the journal size to be smaller?
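
    One option, sketched here against the 2.x ini-style mongod.conf (option names are real, the combination is a suggestion, not an official recommendation): keep journalling on but set smallfiles, which caps each journal file at 128 MB instead of 1 GB and cuts the preallocation dramatically.

        # mongod.conf for a config server -- a sketch
        configsvr = true
        journal = true
        smallfiles = true   # journal files capped at 128 MB each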


  • Using NDMP as an alternative to CIFS mount

    - by user138922
    I have a weird but interesting use case. I use CIFS to mount shares from a file server (NetApp, EMC, etc.) to an application server (the Windows or Linux server where my application runs). My application needs to process each of the files from the shares that I mount via CIFS. It also needs access to the metadata of these files, such as name, size, ACLs, etc. I would like to see if I can achieve the same via NDMP. I have some very basic questions about this use case; it would be great if you could help me out. Is this even achievable? Can I transfer just the shares that interest me instead of the entire volume?
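
    For what it's worth, at least NetApp's NDMP tooling can operate below the volume level; a sketch using the 7-mode ndmpcopy command (filer names, credentials, and paths are all hypothetical). Note this copies data between filers rather than exposing per-file metadata to an application, so it would only cover part of the use case:

        ndmpcopy -sa root:secret -da root:secret srcfiler:/vol/vol1/share1 dstfiler:/vol/vol1/share1_copy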


  • Best way to back up and restore millions of files

    - by bongo
    I'm facing a rebuild of the volume on which I host the mail storage (Kerio mail server, which uses maildirs). I need to back up and restore, as quickly as possible, the 3.5+ million small files (about 600 GB) of the store directory. It takes more than 12 hours via rsync to an NFS share, but I also have a 1 TB FireWire 800 RAID 1 disk that I can use (from some preliminary tests it's faster). I'm working off an Intel Xserve. What is the fastest way to do it? rsync? Finder copy? tar?
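
    For millions of small files on locally attached disks, a tar pipe often beats rsync because it streams the data without per-file delta negotiation; a sketch with hypothetical volume names:

        # stream the maildir store straight onto the FireWire RAID
        tar -C /Volumes/MailStore -cf - . | tar -C /Volumes/FW800Raid/MailStore -xf -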


  • Lost disk space in Windows 7, cannot find the missing space

    - by Tsanders
    My hard drive is complaining it is low on disk space, but a strange thing seems to be happening: Explorer reports 10 GB of available space (on a 120 GB hard disk), and chkdsk in the command prompt does the same, but if I use a disk space tool such as SpaceSniffer or WinDirStat, only 50 GB of data is found. My guess is that there is somehow a hold on a large block of disk space (but that's just a guess) because of a prior very large (40 GB) download attempt that didn't complete. There isn't 40 GB of files on the drive (hidden or visible), yet Explorer insists that something is there. How can I reclaim this disk space (without formatting my hard disk)? SpaceMonger is providing a clue, reporting four unscannable folders which add up to 43 GB:
    C:\RRBackups
    C:\System Volume Information
    C:\Windows\Csc\v2.06
    C:\Windows\System32\LogFiles\Wmi\RtBackup
    Does anybody know what these folders are for, and how I can reclaim at least some space? Restore points claim about 4 GB, so that doesn't seem to be the main problem.
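
    Since System Volume Information largely holds Volume Shadow Copy (restore point) storage, the built-in vssadmin tool can show how much is allocated and cap it; the commands are standard on Windows 7, the 5 GB cap is only illustrative:

        vssadmin list shadowstorage
        vssadmin resize shadowstorage /for=C: /on=C: /maxsize=5GB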


  • BitLocker To Go on a fixed drive

    - by Unsigned
    Scenario: two drives are connected to a computer, one via a SATA-to-USB interface, the other directly via a SATA-to-eSATA cable. The drive on USB appears as a removable drive; the drive on eSATA appears as a fixed drive. Both use NTFS. The USB drive offers BitLocker To Go; the eSATA drive only offers BitLocker. Question: it is my understanding that drives encrypted with BitLocker To Go include an app to allow Windows XP read-only access to the volume. Is this the only difference, and is there a way to use BitLocker To Go on the eSATA drive? Update: another difference is found here: "The recovery key is required when a BitLocker-protected fixed data drive configured for automatic unlocking is moved to another computer."[1] Presumably that does not apply to removable drives.
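
    For reference, plain BitLocker can still be enabled on the fixed (eSATA) volume from an elevated prompt; it is only the XP-readable To-Go reader that is tied to the removable classification. A sketch with a hypothetical drive letter:

        manage-bde -status E:
        manage-bde -on E: -recoverypassword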


  • VSS Not Creating Shadow Set

    - by Jeff Leyser
    I'm trying to set up backup scripts on Windows XP that use volume shadow copies. I downloaded the VSS 7.2 SDK from Microsoft and used the included vshadow.exe to create a shadow set:
    vshadow -script=vss-setvar.cmd f:
    (note that I've tried both f: and c:). vshadow executes just fine, giving no errors and reporting that the shadow is created. However, executing vshadow -q as the very next command results in "There are no shadows on the system" and, indeed, if I use dosdev to try to map the shadow set named in vss-setvar.cmd, it does not work. Am I missing a step?
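
    A likely explanation, assuming stock XP behaviour: Windows XP only supports non-persistent, auto-release shadow copies, so the shadow is deleted the moment vshadow.exe exits - before the follow-up vshadow -q ever runs. The usual pattern is to do the backup work while vshadow is still alive, via its -exec flag (backup-from-shadow.cmd is a hypothetical script that reads the SHADOW_DEVICE_1 variable set by vss-setvar.cmd):

        vshadow -script=vss-setvar.cmd -exec=backup-from-shadow.cmd f: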


  • How to recover an HDD formatted by the "Create a Recovery Drive" tool of Windows 8.1?

    - by ide
    I have a 2 TB USB HDD which had these drives:
    F: about 1 TB, with 750 GB of data
    H: about 120 GB, with 60 GB of data
    I: about 780 GB, with 250 GB of data (for TV: it was raw in Windows but visible on the Smart TV)
    I took 521 MB from the last part of H: to create a new G: drive. Then I ran the "Create a Recovery Drive" tool of Windows 8.1 and chose the G: drive. It said all data on the drive would be deleted. I thought that meant just the G: drive, but it deleted my whole HDD. It created a new 32 GB F: drive, wrote 337 MB to it, and left the rest of the HDD unallocated. I tried these programs to get my first three drives back, but none of them helped recover the first partition:
    TestDisk
    MiniTool Partition Wizard Home Edition
    EaseUS Partition Master 9.2.2 (I deleted the new F: drive volume because it scans only the unallocated part)
    Recuva
    PC Inspector File Recovery


  • Creating a mirror software RAID with a bad-blocks HDD: how to check data integrity?

    - by rumburak
    There is an error in the System event log like this one: "The device, \Device\Harddisk1\DR1, has a bad block." Because of the above, I created a RAID 1 from this disk and another one, using Windows Server 2008 R2 software RAID volumes. The volume in Disk Manager is marked as "Failed Redundancy" and "At Risk". I can run "Reactivate Disk" and it starts to re-sync, but after a while it stops and returns to the previous state: the re-sync halts at a bad block on the old disk and logs the same error in the System event log. The old disk's status is Errors; the new disk's status is Online. How can I check that there is an exact copy of the old disk on the new one? It is a server machine, so I would prefer to keep it running during this check.


  • Deleting a tape from the ACSLS library

    - by Senthil Kumar
    Can anyone help me out with the issue below? There was a stuck tape in the drive, and the stuck tape was removed, but I need to logically delete the tape entry from NetBackup so that the same media can be inserted back for operations. NetBackup thinks that the tape is still in that location, so the entry should be removed so that NetBackup no longer records the tape there and the same tape can be taken in through inventory. The NetBackup version is 5.1. Is there any command to delete this entry? This is a cluster-based environment (Active/Passive), and we use an ACSLS library (physical) as well as a Switch SN6000 (logical). Kindly help me out; when we tried to delete the media from the GUI it said: "Could not delete - Cannot delete assigned volume (92)".
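
    The "Cannot delete assigned volume (92)" error suggests the media must be unassigned (expired) before Media Manager will delete it. A sketch using the standard NetBackup command-line tools, with a hypothetical media ID:

        /usr/openv/netbackup/bin/admincmd/bpexpdate -m A00001 -d 0
        /usr/openv/volmgr/bin/vmdelete -m A00001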


  • Sound does not work on the administrator profile, but works on a non-administrator profile, on Windows XP

    - by Sharjeel Sayed
    Initially I suspected a missing driver, but sound (for movies, songs, etc.) works fine on the other, non-administrator account, yet does not work when I log in to the Administrator account. And yes, I have checked the sound volume and mute status as well. Details of my system:
    OS: Windows XP Professional Service Pack 3 (build 2600)
    Processor: 2.00 GHz AMD Athlon 64
    Memory: 448 MB usable installed memory
    Board: ASUSTeK Computer Inc. K8V-MX
    Bus clock: 200 MHz
    BIOS: American Megatrends Inc. 0112, 07/18/2005
    Multimedia: SoundMAX Integrated Digital Audio
    Any help would be appreciated. Thanks in advance.


  • How to disable the second partition without unmounting it on a Mac?

    - by bagusflyer
    I've installed OS X Yosemite on another partition on my Mac, but there is a problem. For example, I installed iBooks on both partitions; when I right-click one of my EPUB or PDF files, both copies of iBooks are shown in my context menu. This is not what I want: I want only the apps in Yosemite to be shown. Of course I can hide the apps on my old Mavericks partition by unmounting the volume, but again this is not what I want, because it hides the partition when I boot my machine, so I can't boot into my Mavericks partition. Can anybody suggest a better idea? Thanks.
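
    One approach that avoids unmounting, as a sketch: unregister the offending apps from Launch Services so they stop appearing in the "Open With" menu. The volume name and app below are hypothetical; the lsregister path is the usual one:

        /System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister -u "/Volumes/Mavericks/Applications/iBooks.app"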


  • When connecting to a Unix box, how does it know you have SSH set up on your desktop?

    - by Blankman
    When you use something like PuTTY to connect to a Linux box and you have set up your SSH keys, how does the client tell the server, when connecting, that you want to authenticate using your SSH keys? Is SSH running as a service on a particular port, or does the client simply pass your private key so that the login service sees it and tries to connect using it? Just looking for a fairly high-level understanding (with maybe some detail if you want to...).
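
    In short: sshd listens on the server (TCP port 22 by default); the client offers the public half of each key, the server checks ~/.ssh/authorized_keys and then issues a challenge that only the matching private key can answer - the private key itself is never transmitted. You can watch the negotiation directly (OpenSSH shown; PuTTY's event log shows the same steps):

        ssh -v -i ~/.ssh/id_rsa user@host
        # look for "Offering public key" and "Server accepts key" in the debug output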


  • Disable all but RC4 in Apache

    - by Daniel
    Our PCI compliance vendor requires that we disable all but RC4 encryption on our web server. Currently our Apache config file looks like this:
    SSLHonorCipherOrder On
    SSLCipherSuite RC4-SHA:HIGH:!ADH:!AES256-SHA:!ECDHE-RSA-AES256-SHA384:!AES128-SHA:!DES-CBC:!aNull:!eNull:!LOW:!SSLv2
    However, https://www.ssllabs.com reports the following ciphers are allowed:
    TLS_RSA_WITH_RC4_128_SHA
    TLS_DHE_RSA_WITH_AES_256_CBC_SHA
    TLS_DHE_RSA_WITH_AES_128_CBC_SHA
    TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
    TLS_RSA_WITH_3DES_EDE_CBC_SHA
    How can I configure Apache to allow only RC4?
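
    A plausible culprit is the HIGH keyword, which re-admits the DHE/AES and 3DES suites despite the later exclusions. A sketch that lists the RC4 suites explicitly instead (verify the result against your build with `openssl ciphers -v '...'`; drop RC4-MD5 if MD5 is also disallowed):

        SSLHonorCipherOrder On
        SSLCipherSuite RC4-SHA:RC4-MD5:!aNULL:!eNULL:!LOW:!EXP:!SSLv2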


  • ZFS: Redistribute zvol over all disks in the zpool?

    - by growse
    Is there a way in which ZFS can be prompted to redistribute a given filesystem over all of the disks in its zpool? I'm thinking of a scenario where I have a fixed-size ZFS volume that's exported as a LUN over FC. The current zpool is small, just two 1 TB mirrored disks, and the zvol is 750 GB in total. If I were to suddenly expand the zpool to, say, twelve 1 TB disks, I believe the zvol would still effectively be 'housed' on the first two spindles only. Given that more spindles = more IOPS, what method could I use to 'redistribute' the zvol over all 12 spindles to take advantage of them?
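
    One commonly suggested approach, sketched with hypothetical dataset names: rewrite the zvol through a send/receive pass, which forces the allocator to spread the blocks over all current vdevs.

        zfs snapshot tank/lun@rebalance
        zfs send tank/lun@rebalance | zfs recv tank/lun_new
        # re-point the FC target at the new zvol, verify, then retire the old one
        zfs rename tank/lun tank/lun_old
        zfs rename tank/lun_new tank/lun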


  • Right-button drag-drop function in Gnome

    - by HorusKol
    While I don't really miss the annoyances that go with working on Windows, one thing I do wish I had in GNOME is the ability to hold the right mouse button down on a file and drag it to get a context menu asking whether I want to move or copy the file. I realise the default tries to be sensible (like in Windows: it defaults to move if on the same volume, or copy if on different volumes) and that it can be overridden with <ctrl> or <shift>, but I'm still used to the right-mouse drag option and keep getting frustrated when it doesn't work...


  • "Please wait for the User Profile Service..." on Windows 7 takes around 1-2 minutes to pass

    - by Chris
    When logging into our domain, after entering account credentials the login process takes around 1-2 minutes to get past the User Profile Service; the rest of the process takes 2-3 seconds. This affects all machines running Windows 7 Enterprise 32-bit, on fairly high-spec laptops (SSD drives, 2.93 GHz i5 CPU, 4 GB memory). Is there any way to speed this up, or is this delay acceptable? Thanks in advance.
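
    To at least see which phase of logon is consuming the time, verbose status messages can be enabled; this is a standard registry value, deployable via GPO or directly:

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v VerboseStatus /t REG_DWORD /d 1 /f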


  • Disabling RAID feature on HP Smart Array P400

    - by Arie K
    I'm planning to use ZFS on my system (HP ML370 G5, Smart Array P400, 8 SAS disks). I want ZFS to manage all disks individually, so it can utilize better scheduling (i.e. I want to use the software RAID features in ZFS). The problem is, I can't find a way to disable the RAID feature on the controller. Right now, the controller aggregates all of the disks into one big RAID 5 volume, so ZFS can't see the individual disks. Is there any way to accomplish this setup?
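
    The P400 has no true JBOD/HBA mode, so the usual workaround is one single-disk RAID 0 logical drive per physical disk; ZFS then sees eight separate devices (controller-cache caveats still apply). A sketch using HP's hpacucli, with illustrative slot and bay IDs:

        hpacucli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
        hpacucli ctrl slot=0 create type=ld drives=1I:1:2 raid=0
        # ...repeat for each of the eight bays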


  • Error message "ialmrnt5 display driver stopped working"

    - by user68705
    Whenever I play a high-resolution (720p) movie, my desktop PC shows a black screen and after a while shows the message "ialmrnt5 display driver stopped working". I have the following configuration:
    RAM: 1 GB DDR1
    OS: Windows XP
    Processor: Intel Pentium 4
    Motherboard: Srm
    Graphics: Intel® 845G detected; current driver installed: 6.14.10.3889
    Intel Chipset Software Installation Utility (Chipset INF): version 9.1.0.1012
    Kindly help me fix this problem.


  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic, etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things; here is a list:
    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run DiskWarrior on the copy, repairing whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value: wide open, 777
    - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer the data to a newly formatted drive folder by folder, resetting the Spotlight index between each addition to observe for issues (interestingly, no issues occurred except with the Documents folder; when I transferred only the Documents folder to a newly formatted drive on its own, there was no trouble. It appears almost as though it may not be the content but the quantity or a specific combination of data that results in problems)
    - use Data Rescue to transfer the data to yet another newly formatted drive to expose any missed hidden files
    Between each of the above steps I stopped Spotlight (searched for anything beginning with md in Activity Monitor - All Processes and quit it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it. In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md* processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4-hours-remaining situation, if I reattach it, Spotlight forever estimates the remaining time and never gets going again.
    So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used USB and FireWire drives). I have tried this on several machines (3 to be precise) and on 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas?
    --- update 2-6-11 ---
    Since I have not received any responses except the one below, which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID, where PID is the mdworker process ID, to try to determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: after indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes" and the Terminal window with the opensnoop output stops moving.
    I then proceeded to execute the same command on mds to see what it was doing, and here's what I get, repeatedly:
    501 57 mds 21 /
    501 57 mds 21 /Volumes/Sno Leppard
    501 57 mds 21 /Volumes/Tiger
    501 57 mds 21 /Volumes/Leppard
    501 57 mds 21 /Volumes/Disk Warrior
    501 57 mds 21 /Volumes/ONM Data
    These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are excluded from Spotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions: what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M
    --- final edit ---
    I finally resolved the issue, and here is how I did it. I used the terminal command "sudo opensnoop -p PID", where PID is the process ID of the processes I was monitoring: all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see above), I contacted Apple and got to their highest level of support; they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated: mds and mdworker(s) would start indexing, eventually the Spotlight icon would say 6 hours remaining, all mdworkers were gone, and mds sat at 90% or so of CPU. But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I still have no clue what is causing this behavior, but I did find a functional solution to the problem. Anyone with a similar problem: run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit, then look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and the solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find the actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio-related files: Performer, Live, SD2, things like that), I will edit again to let you all know.
    Thanks for any attempts to help, M
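
    For anyone retracing these steps, the per-volume indexing commands involved are the standard mdutil ones (volume name as in the post):

        sudo mdutil -i off "/Volumes/ONM Data"   # stop indexing the volume
        sudo mdutil -E "/Volumes/ONM Data"       # erase the index so it rebuilds from scratch
        sudo mdutil -i on "/Volumes/ONM Data"    # re-enable indexing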


  • Keepalived alternative for Solaris 10

    - by antispam
    We are considering an architecture like the one in the picture for Solaris 10: high-availability software load balancers in front of web and application servers. Unfortunately, Keepalived is not available for Solaris at the moment. Is there an equivalent tool that can substitute for Keepalived and is supported on Solaris 10? Is there an equivalent architecture for Solaris using HA software load balancing? Thank you.


  • Plugging GlusterFS and Openfiler together

    - by lpfavreau
    Has anyone had experience plugging GlusterFS and Openfiler together, or something similar? Here is the motivation:
    Disk space on multiple servers regrouped using GlusterFS
    Centralized access using LDAP/AD and quota management using Openfiler as the GlusterFS client
    SMB/CIFS server for easy sharing with multiple users on Mac and Windows
    I know I can install Gluster on Openfiler (rPath Linux) successfully, but Openfiler seems to be very picky about what it can use as a shared drive. Mounting the Gluster volume inside an existing share does not seem to allow quotas based on the mounted folder's free space. If this is not possible, is there any alternative that provides the same capabilities?
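
    If it helps, the native-client mount that would sit underneath such a share looks like this (server and volume names are hypothetical):

        mount -t glusterfs server1:/musicvol /mnt/gluster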


  • Hyper-V: multiple virtual machines with >2 TB volumes from one RAID

    - by wurlog
    I have a server with two RAIDs:
    RAID 0: 2x 1 TB
    RAID 6: 8x 2 TB
    I used the first RAID for the Hyper-V installation itself. The virtual machines should use the RAID 6, but how can I configure it? I need at least one file server with most of the disk space (maybe a second). But every VHD has a maximum of 2 TB, and I can't use the volume directly because other virtual machines also have to access the RAID 6. What do I do?
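
    One sketch, since VHDs on 2008 R2 top out just under 2 TB: create several sub-2 TB VHDs on the RAID 6 volume, attach them all to the file-server VM, and span them into one volume inside the guest (file names and sizes below are illustrative):

        diskpart
        create vdisk file="E:\VMs\fileserver-data1.vhd" maximum=2040000 type=expandable
        create vdisk file="E:\VMs\fileserver-data2.vhd" maximum=2040000 type=expandable
        rem attach both to the VM, then inside the guest convert the disks
        rem to dynamic and create a single spanned volume across them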


  • Helicon ISAPI_REWRITE 3 - Not Logging Anything

    - by Brian
    The Helicon ISAPI_Rewrite product does not log anything... I set up logging as:
    [ISAPI_Rewrite]
    RewriteEngine on
    # enabling rewrite.log
    RewriteLogLevel 9
    # enabling error.log
    LogLevel debug
    But nothing is getting logged. Is it something I'm doing? Is it working at all? It is installed and given high priority in IIS (I do see it visibly present). Any ideas why it isn't logging? Should it log even if it is not rewriting?
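
    A common cause worth checking: the IIS worker process has no write permission on the ISAPI_Rewrite install folder, which is where rewrite.log and error.log are created. A sketch assuming a default install path and the IIS 7 worker identity:

        icacls "C:\Program Files\Helicon\ISAPI_Rewrite3" /grant "IIS_IUSRS:(OI)(CI)M"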


  • OS X 10.6 Snow Leopard no longer mounting an external USB drive

    - by Brant Bobby
    I have a 1 TB generic external hard drive containing a single HFS partition. I originally formatted it using Disk Utility and it worked fine. Now, for some reason, it's not auto-mounting when I start up. Using mount at the command line gives the following error:
    $ sudo mount /dev/disk1s2 /Volumes/Test
    /dev/disk1s2 on /Volumes/Test: Incorrect super block.
    ...but if I use the mount_hfs command, it works fine, mounts, and is readable:
    $ mount_hfs /dev/disk1s2 /Volumes/Test/
    fsck gives me an error about a bad super block:
    $ fsck /dev/disk1
    ** /dev/rdisk1 (NO WRITE)
    BAD SUPER BLOCK: MAGIC NUMBER WRONG
    ...but fsck_hfs -fn /dev/disk1s2 doesn't find any problems and reports that the volume appears to be OK. In Disk Utility, the drive appears to have a single MS-DOS partition, with a curious notice that it appears to be partitioned for Boot Camp. I have the Boot Camp HFS driver installed in Windows 7, and that OS sees the drive/partition normally. What's wrong with my disk?
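
    To compare what partition scheme the disk actually carries against how OS X interprets it, a quick check on 10.6:

        sudo fdisk /dev/disk1    # prints the MBR partition table, including type bytes
        diskutil list disk1      # shows the partitions as OS X sees them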

