Search Results

Search found 16174 results on 647 pages for 'disk space'.

  • No device file for partition on logical volume (Linux LVM)

    - by Brian
    I created a logical volume (scandata) containing a single ext3 partition. It is the only logical volume in its volume group (case4t). Said volume group comprises three physical volumes, which are three primary partitions on a single block device (/dev/sdb). When I created it, I could mount the partition via the block device /dev/mapper/case4t-scandatap1. Since the last reboot, that block device file has disappeared. It may be of note -- I'm not sure -- that my superior (a college professor) had prompted this reboot by running sudo chmod -R [his name] /usr/bin, which obliterated all suid bits in its path, preventing both of us from sudo-ing. That issue has been (temporarily) rectified via this operation. Now I'll cut the chatter and get started with the terminal dumps:

        $ sudo pvs; sudo vgs; sudo lvs
        Logging initialised at Sat Jan 8 11:42:34 2011
        Set umask to 0077
        Scanning for physical volume names
          PV         VG     Fmt  Attr PSize   PFree
          /dev/sdb1  case4t lvm2 a-   819.32G    0
          /dev/sdb2  case4t lvm2 a-   866.40G    0
          /dev/sdb3  case4t lvm2 a-    47.09G    0
        Wiping internal VG cache
        Logging initialised at Sat Jan 8 11:42:34 2011
        Set umask to 0077
        Finding all volume groups
        Finding volume group "case4t"
          VG     #PV #LV #SN Attr   VSize VFree
          case4t   3   1   0 wz--n- 1.69T    0
        Wiping internal VG cache
        Logging initialised at Sat Jan 8 11:42:34 2011
        Set umask to 0077
        Finding all logical volumes
          LV       VG     Attr   LSize Origin Snap% Move Log Copy% Convert
          scandata case4t -wi-a- 1.69T
        Wiping internal VG cache

        $ sudo vgchange -a y
        Logging initialised at Sat Jan 8 11:43:14 2011
        Set umask to 0077
        Finding all volume groups
        Finding volume group "case4t"
        1 logical volume(s) in volume group "case4t" already active
        1 existing logical volume(s) in volume group "case4t" monitored
        Found volume group "case4t"
        Activated logical volumes in volume group "case4t"
        1 logical volume(s) in volume group "case4t" now active
        Wiping internal VG cache

        $ ls /dev | grep case4t
        case4t

        $ ls /dev/mapper
        case4t-scandata  control

        $ sudo fdisk -l /dev/case4t/scandata
        Disk /dev/case4t/scandata: 1860.5 GB, 1860584865792 bytes
        255 heads, 63 sectors/track, 226203 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x00049bf5
                       Device Boot  Start  End     Blocks      Id  System
        /dev/case4t/scandata1       1      226203  1816975566  83  Linux

        $ sudo parted /dev/case4t/scandata print
        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/case4t-scandata: 1861GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos
        Number  Start   End     Size    Type     File system  Flags
         1      32.3kB  1861GB  1861GB  primary  ext3

        $ sudo fdisk -l /dev/sdb
        Disk /dev/sdb: 1860.5 GB, 1860593254400 bytes
        255 heads, 63 sectors/track, 226204 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x00000081
        Device Boot  Start   End     Blocks     Id  System
        /dev/sdb1    1       106955  859116006  83  Linux
        /dev/sdb2    113103  226204  908491815  83  Linux
        /dev/sdb3    106956  113102  49375777+  83  Linux
        Partition table entries are not in disk order

        $ sudo parted /dev/sdb print
        Model: DELL PERC 6/i (scsi)
        Disk /dev/sdb: 1861GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos
        Number  Start   End     Size    Type     File system  Flags
         1      32.3kB  880GB   880GB   primary  reiserfs
         3      880GB   930GB   50.6GB  primary
         2      930GB   1861GB  930GB   primary

    I find it a bit strange that partition one above is said to be reiserfs -- it was previously reiserfs, if that matters -- yet LVM recognizes it as a PV. To reiterate: neither /dev/mapper/case4t-scandatap1 (which I had used previously) nor /dev/case4t/scandata1 (as printed by fdisk) exists. And /dev/case4t/scandata (no partition number) cannot be mounted:

        $ sudo mount -t ext3 /dev/case4t/scandata /mnt/new
        mount: wrong fs type, bad option, bad superblock on /dev/mapper/case4t-scandata,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    All I get in syslog is:

        [170059.538137] VFS: Can't find ext3 filesystem on dev dm-0.

    Thanks in advance for any help you can offer. -- Brian

    P.S. I am on Ubuntu GNU/Linux 2.6.28-11-server (Jaunty). (Out of date, I know -- that's on the laundry list.)
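
    The usual fix for this situation, sketched below: partition device nodes for a partitioned logical volume are created by device-mapper via kpartx, not by the kernel's ordinary partition scanning, so after a reboot they vanish unless something recreates them. A minimal sketch using the device names from the question; that kpartx is packaged and available on Jaunty is an assumption:

        # recreate device-mapper nodes for the partitions inside the LV
        sudo apt-get install kpartx        # if not already installed
        sudo kpartx -a /dev/mapper/case4t-scandata

        # depending on the kpartx version the node is named with or without a "p"
        ls /dev/mapper                     # expect case4t-scandatap1 (or case4t-scandata1)
        sudo mount -t ext3 /dev/mapper/case4t-scandatap1 /mnt/new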

    Read the article

  • Why, after deleting a 110+ GB collection, does my /var/lib/mongodb directory still have the same size?

    - by tunnuz
    I am having some trouble with MongoDB and space usage. In particular, I once had a large collection of about 600 million records, totaling 110+ GB on disk. Recently I decided to drop it because the data was outdated; to do so, I dropped the collection through RockMongo's web interface. Accordingly, RockMongo no longer shows me the collection, but my disk usage hasn't changed at all. Is there a cleanup operation I am not aware of that must be run to synchronize the database with the database files on disk? I have tried to perform a "repair", but the system complains that there's not enough space on disk ... because it is all used by MongoDB.
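
    For context: MongoDB's storage engine of that era preallocates data files and never returns space to the OS when a collection is dropped; the space is only reclaimed by rewriting the files (repair) or resyncing from scratch. Repair needs scratch space roughly equal to the data size, and that scratch space can live on a different volume. A minimal sketch, with paths illustrative rather than taken from the system:

        # stop mongod, then repair using scratch space on another volume
        sudo service mongodb stop
        mongod --dbpath /var/lib/mongodb --repair --repairpath /mnt/spare/mongo_repair
        sudo service mongodb start

    If no second volume is available at all, the remaining option is to dump, wipe the dbpath, and restore.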

    Read the article

  • btrfs: can I create a btrfs file system with data as JBOD and metadata mirrored?

    - by Yogi
    I am trying to build a home server that will be my NAS/media server as well as the XBMC front end. I am planning on using Ubuntu with btrfs for the NAS part of it. The current setup consists of a 1TB hdd for the OS etc. and two 2TB hdds for data. I plan to have the 2TB hdds used as a JBOD btrfs system to which I can add hdds as needed later, basically growing the filesystem online. The way I had set up the file system for testing was: while installing the OS, have only one of the HDDs connected, with btrfs on it mounted as /data; later on, add the other hdd to this file system. When the second disk was added, btrfs made data RAID 0, with metadata being RAID 1. However, this presents a problem: if either disk fails I lose all my data (mostly media). Also, most of the time the server will be running without doing any disk access, i.e. the HDDs can be spun down; with the current RAID 0 setup, when an access request comes in, both disks will spin up, whereas with a JBOD only the disk that has the file needs to spin up. This should hopefully reduce the wear on each disk. So, is there a way to set up btrfs such that metadata is mirrored but data stays in a JBOD formation? One more question: I understand that a full drive failure in JBOD loses the data on that drive, but with metadata mirrored across all drives, will this help the filesystem correct errors that might creep in (e.g. bit rot), and is btrfs capable of doing this?
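
    btrfs does support exactly this split: the data and metadata profiles are set independently, with "single" giving JBOD-style allocation and "raid1" keeping two copies of metadata on different devices. A sketch with placeholder device names; the balance-filter syntax assumes a reasonably recent kernel (3.3+):

        # create the filesystem with unstriped data and mirrored metadata
        mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc
        mount /dev/sdb /data

        # later: grow the pool online, then rebalance onto the same profiles
        btrfs device add /dev/sdd /data
        btrfs balance start -dconvert=single -mconvert=raid1 /data

    On the second question: mirrored metadata plus btrfs checksumming means metadata corruption can be detected and repaired from the other copy; data bit rot is detected by checksum, but with a "single" data profile there is no second copy to repair it from.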

    Read the article

  • Estimate compressed file size in tar.gz

    - by liori
    I've got a set of .tar.gz files, which are duplicity backup files (either full backups or incremental ones). I'd like to compute which directories take the most space on backups. This will most probably be a different figure from which directories take the most space on a live filesystem, because I need to account for how often files change (and therefore take space on incremental backups) and how compressible they are. I know that while many other archive formats store compressed files as separate entities inside the archive file, .tar.gz files do not, and therefore it is impossible to get the exact amount of storage taken in the archive by a single file after compression. Are there any tools to calculate at least some estimate?
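
    One low-tech estimate, sketched under the assumption that GNU tar and gzip are available: re-compress each member on its own and count the bytes. Because gzip resets its state per member, this slightly overstates each file's true share of the solid stream, but it preserves the relative ranking, which is what matters for finding the biggest directories:

        # rank members of backup.tar.gz by their standalone compressed size
        # (slow: re-reads the archive once per member)
        tar tzf backup.tar.gz | while read -r f; do
            size=$(tar xzf backup.tar.gz -O "$f" | gzip -c | wc -c)
            printf '%s\t%s\n' "$size" "$f"
        done | sort -rn | head -20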

    Read the article

  • Tomcat web application intermittent freeze

    - by tinny
    I have a Grails web application (just a standard war file) deployed on an Ubuntu 10.10 server running Tomcat 6. My database is PostgreSQL. The problem is that every so often (once or twice a day, after inactivity), when I try to log into this web application, it just freezes. I can navigate to the login page, but when I try to log in (the first time the DB is hit -- might be a clue?) the application freezes indefinitely, with no 500 response code... the browser just waits and waits. I followed the instructions detailed here because the problem described sounded the same as mine. My GC logging showed no long-running GC; all collections were sub-second. When the application freezes, the jmap heap output is:

        using parallel threads in the new generation.
        using thread-local object allocation.
        Concurrent Mark-Sweep GC

        Heap Configuration:
           MinHeapFreeRatio = 40
           MaxHeapFreeRatio = 70
           MaxHeapSize      = 536870912 (512.0MB)
           NewSize          = 21757952 (20.75MB)
           MaxNewSize       = 87228416 (83.1875MB)
           OldSize          = 65404928 (62.375MB)
           NewRatio         = 7
           SurvivorRatio    = 8
           PermSize         = 21757952 (20.75MB)
           MaxPermSize      = 85983232 (82.0MB)

        Heap Usage:
        New Generation (Eden + 1 Survivor Space):
           capacity = 19595264 (18.6875MB)
           used     = 11411976 (10.883308410644531MB)
           free     = 8183288 (7.804191589355469MB)
           58.23843965562291% used
        Eden Space:
           capacity = 17432576 (16.625MB)
           used     = 9249296 (8.820816040039062MB)
           free     = 8183280 (7.8041839599609375MB)
           53.05754009046053% used
        From Space:
           capacity = 2162688 (2.0625MB)
           used     = 2162680 (2.0624923706054688MB)
           free     = 8 (7.62939453125E-6MB)
           99.99963008996212% used
        To Space:
           capacity = 2162688 (2.0625MB)
           used     = 0 (0.0MB)
           free     = 2162688 (2.0625MB)
           0.0% used
        concurrent mark-sweep generation:
           capacity = 101556224 (96.8515625MB)
           used     = 83906080 (80.01907348632812MB)
           free     = 17650144 (16.832489013671875MB)
           82.62032270912317% used
        Perm Generation:
           capacity = 85983232 (82.0MB)
           used     = 62866832 (59.95448303222656MB)
           free     = 23116400 (22.045516967773438MB)
           73.1152232100324% used

    Anyone know what "From Space:" is? Any further fault-finding ideas? I don't have much experience with this type of fault finding.
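
    Two notes that may help. "From Space" is simply one of the two survivor spaces of the young generation (the other is "To Space"); one of them being 100% full between collections is normal and not a fault in itself. Given that the freeze happens after inactivity, on the first database hit, a common culprit with this stack is the connection pool handing out TCP connections that a firewall or the database has silently dropped; the standard confirmation is a thread dump taken during the freeze. A sketch (the pool-validation remedy named here is the usual fix for this symptom, not something confirmed from the question):

        # take a thread dump while the app is frozen
        jstack -l <tomcat_pid> > /tmp/threads.txt

        # threads parked in java.net.SocketInputStream.socketRead0 inside the
        # PostgreSQL JDBC driver point at dead pooled connections; enabling
        # validation in the pool configuration (testOnBorrow plus a validation
        # query such as SELECT 1) is the usual remedy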

    Read the article

  • How to unmount a VHD in Windows 7. There is no unmount option.

    - by Triynko
    I mounted a VHD file in Windows 7 using Disk Management. Once mounted, there is no option to unmount it. The only thing close to such an option that I can find is the taskbar notification icon I use to remove USB devices... it offers an option to eject the virtual hard disk. However, when I click that, it says the disk is in use and cannot be ejected. Even though... it's not in use; I never even browsed the drive. Disk Management is closed... and the only open file handles to the drive (according to disk performance in Task Manager) belong to SYSTEM. Ejecting devices cleanly has been a problem since Windows XP, and it sickens me to see it persist into Windows 7.
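
    For reference: in Windows 7's Disk Management the option is attached to the disk rather than the volume -- right-clicking the disk's gray header area on the left (where "Disk 1" / "Basic" appears) should offer "Detach VHD". The diskpart equivalent, sketched with an illustrative path:

        diskpart
        DISKPART> select vdisk file="C:\VHDs\disk.vhd"
        DISKPART> detach vdisk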

    Read the article

  • Win7 - DVD drive spins up but fails to read or write

    - by MA
    Running Windows 7 x64. The DVD drive is a BenQ DC DQ60 ATA dual-layer DVD±RW. Everything functions correctly in Linux, and I can boot from CDs/DVDs, so the drive itself does work. Symptom: when I insert any CD or DVD (burned or retail), the drive spins up the disc and (usually) displays the disc title in My Computer, but then just continues to spin indefinitely. I cannot browse the disc in the drive, install from it, or read anything.
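
    When a drive reads fine under Linux but not under Windows, a commonly suggested fix is removing leftover CD/DVD filter drivers (installed by burning software) from the registry. A hedged sketch -- the GUID is the standard CD/DVD device class, and one or both values may not exist on a given system:

        REM back up the key first
        reg export "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" %TEMP%\cdclass.reg
        reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" /v UpperFilters /f
        reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" /v LowerFilters /f
        REM then reboot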

    Read the article

  • Partial-stroking / Short-stroking / Half-stroking Hard Drives?

    - by Daniel Magliola
    Could anyone here explain to me what is implied by this term? (I've seen the same thing mentioned under all three names.) At first when I read about it, for some reason I understood it as some way of splitting bytes across the platters of the disk, which sounded like a good idea but obviously doesn't make sense, because that wouldn't cut disk size in half (and disks probably already split bytes across platters)... The best understanding I've come to is that basically, instead of creating one partition the whole size of the disk, you create two partitions and use only one of them -- either the one at the "center" or the one at the "rim" of the platters -- and since one of the two is faster (people didn't seem to agree on which), that makes everything better. Am I understanding this correctly? Has anyone tried this with their drives and had a good outcome? Thanks!
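
    That reading is essentially right, with one detail pinned down: the outer tracks are the fast ones, and they map to the lowest LBAs, i.e. the start of the disk, because more sectors pass under the head per revolution out there; confining the head to that narrow band also keeps seeks short. A sketch of the idea on Linux (the device name is a placeholder, and this destroys existing data):

        # short-stroke: partition only the first (outermost, fastest) 20%
        sudo parted /dev/sdX mklabel gpt
        sudo parted /dev/sdX mkpart primary 0% 20%

        # rough sequential-read comparison, before and after
        sudo hdparm -t /dev/sdX1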

    Read the article

  • Is putting the swapfile & temp folder on a ramdisk a good idea in Windows 7 64-bit with lots of RAM?

    - by Tony_Henrich
    I want my Windows to run as fast as possible. If I have 12GB of RAM in Windows 7 64-bit, a quad-core CPU, and all apps fit in memory, will the swap file ever be used for anything? The question is whether it's a good idea to put the swap file on a RAM disk. Would a RAM disk help in any way, or will Windows intelligently use all the available memory for its work anyway? I am also thinking of putting the temp folder on a RAM disk. I know a RAM disk is volatile memory and I don't care about its contents if they get lost.
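
    A point worth keeping in mind: Windows only pages out under memory pressure, so with 12 GB and a working set that fits, the pagefile is barely touched -- and carving RAM out for a RAM disk just shrinks the pool Windows would otherwise use for file cache. If the temp folder is to be moved anyway, a sketch (the drive letter R: is hypothetical, provided by whatever RAM-disk driver is installed):

        REM point the per-user temp folders at the RAM disk
        setx TEMP R:\Temp
        setx TMP R:\Temp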

    Read the article

  • Defragment a file or folder? Windows 7

    - by acidzombie24
    Is there a built-in way to defragment a single file or folder? I am using VM Player, so I would like my 3 GB disk image to be defragmented if possible. FYI, the disk partition the image lies on has 12 GB left, with roughly 90% of the disk used. I probably don't need a full defrag, but I would like to do it if it's possible.
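
    Not built in, but Sysinternals Contig does exactly this: it defragments one file at a time. A sketch with an illustrative path; note that Contig needs a contiguous free region at least as large as the file, which may be scarce on a 90%-full partition:

        REM report the file's fragmentation only
        contig.exe -a "C:\VMs\disk.vmdk"

        REM defragment just that file
        contig.exe "C:\VMs\disk.vmdk"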

    Read the article

  • I/O intensive MySql server on Amazon AWS

    - by rhossi
    We recently moved from a traditional data center to cloud computing on AWS. We are developing a product in partnership with another company, and we need to create a database server for the product we'll release. I have been using Amazon Web Services for the past 3 years, but this is the first time I've received a spec with this very specific hardware configuration. I know there are trade-offs, and that real hardware will always be faster than virtual machines; knowing that fact beforehand, what would you recommend? 1) Amazon EC2? 2) Amazon RDS? 3) Something else? 4) Forget it baby, stick to the real hardware. Here are the hardware requirements:

    Server 1 -- focused on I/O and MySQL for the statistics; memory size and disk space for the images hosting.

        I/O: The main part of this server's load will be I/O processing. FusionIO cards have proven themselves extremely efficient; this is currently the best you can have in this domain.
          o Fusion ioDrive2 MLC 365GB (http://www.fusionio.com/load/-media-/1m66wu/docsLibrary/FIO_ioDrive2_Datasheet.pdf)
        CPU: MySQL will use fewer CPU cores than Apache but will use them very hard; the E7 family has 30M of L3 cache, which provides a performance boost.
          o 1x Intel E7-2870 will be OK.
        Storage: SAS will be good enough in terms of performance, especially considering the space required.
          o RAID 10 of 4 x SAS 10k or 15k for a total available space of 512 GB.
        Memory:
          o 64 GB minimum is required on this server considering the size of the statistics database. Warning: the statistics database will grow quickly; if possible consider starting with 128 GB directly, it will help.

    Server 2 -- identical specification to Server 1 (the spec repeats the same I/O, CPU, storage, and memory requirements verbatim).

    Thanks in advance. Best,
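
    A hedged sketch of the closest EC2 analogue to the FusionIO requirement: an EBS-optimized instance with a Provisioned IOPS (io1) volume, which buys guaranteed IOPS, though still well short of a local PCIe flash card. Volume size, IOPS figure, and IDs below are illustrative, using the modern AWS CLI:

        aws ec2 create-volume --size 400 --volume-type io1 --iops 4000 \
            --availability-zone us-east-1a
        aws ec2 attach-volume --volume-id vol-xxxxxxxx \
            --instance-id i-xxxxxxxx --device /dev/sdf

    RDS removes the ability to tune the filesystem and I/O path under MySQL, so for a spec written around a specific I/O device, plain EC2 (or conceding point 4 and colocating one real box) is the more faithful translation.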

    Read the article

  • Resize a RAID 1 volume on OSX Snow Leopard - how? (Note: software raid)

    - by Emmel
    I've scoured the Internet in search of an answer to this question, and as usual with OSX-related topics, I rarely find the deep-dive technical explanations needed to feel confident doing dangerous things. Here is my question: I have a Mac Pro running OSX 10.6.2. I have, as my main root/boot disk, a RAID 1 volume called "Mirror1". Mirror1 is comprised of two 1 TB disks. Mirror1, however, is fixed at 640 GB. That's because I originally took a 640GB disk, bought a terabyte disk, and mirrored them (using diskutil appleraid enable...); when it synced, I removed the 640GB disk, replaced it with a second 1 TB disk, and synced again. Voila! A single 640 GB disk replaced by two 1 TB disks in a mirror... Actually, no. There's still something missing from the equation: Mirror1 needs to be expanded from 640GB to 1 TB to match the partition sizes on each of those disks. How do I do this? Perhaps the diskutil output will help:

        -> diskutil list
        /dev/disk0
           #:                    TYPE NAME         SIZE       IDENTIFIER
           0:   GUID_partition_scheme              *1.0 TB    disk0
           1:                     EFI              209.7 MB   disk0s1
           2:              Apple_RAID              999.9 GB   disk0s2
           3:              Apple_Boot Boot OSX     134.2 MB   disk0s3
        /dev/disk1
           #:                    TYPE NAME         SIZE       IDENTIFIER
           0:   GUID_partition_scheme              *1.0 TB    disk1
           1:                     EFI              209.7 MB   disk1s1
           2:              Apple_RAID              999.9 GB   disk1s2
           3:              Apple_Boot Boot OSX     134.2 MB   disk1s3
        /dev/disk2
           #:                    TYPE NAME         SIZE       IDENTIFIER
           0:   GUID_partition_scheme              *640.1 GB  disk2
           1:                     EFI              209.7 MB   disk2s1
           2:               Apple_HFS Mac Disk 2   536.7 GB   disk2s2
           3:    Microsoft Basic Data BOOTCAMP     103.1 GB   disk2s3
        /dev/disk3
           #:                    TYPE NAME         SIZE       IDENTIFIER
           0:               Apple_HFS Mirror1      *639.8 GB  disk3

        -> diskutil appleraid list
        AppleRAID sets (1 found)
        ===============================================================================
        Name:                 Macintosh HD
        Unique ID:            1953F864-B474-4EB6-8E69-41834EBD0247
        Type:                 Mirror
        Status:               Online
        Size:                 639.8 GB (639791038464 Bytes)
        Rebuild:              manual
        Device Node:          disk3
        -------------------------------------------------------------------------------
        #  Device Node  UUID                                  Status
        -------------------------------------------------------------------------------
        0  disk1s2      25109BAE-5697-40EA-B612-0217851444F7  Online
        1  disk0s2      11B83AB0-8148-4DB6-8761-DEF08C855F8D  Online
        ===============================================================================

    Thanks in advance.
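
    A hedged sketch only -- not verified against AppleRAID on 10.6, so treat it as a starting point and have a current backup first. diskutil can grow HFS+ volumes in place, and the hopeful path is to ask it to grow the volume sitting on the RAID device; if it refuses to touch an AppleRAID set, the fallback is rebuilding the set at full size and restoring:

        # attempt to grow the HFS+ volume on the RAID device toward 1 TB
        diskutil resizeVolume disk3 999G

        # fallback (destroys the set; restore from backup afterwards):
        # diskutil appleraid delete 1953F864-B474-4EB6-8E69-41834EBD0247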

    Read the article

  • Windows 8.1 Insufficient storage available to create shadow copy

    - by Bob.at.SBS
    [Note: After I entered the problem statement, I found this question, which is apparently the same problem. Maybe one of us will get a good answer...]

    I used the "Windows 7 File Recovery" tool under Windows 8 to create system image backups to an external USB hard drive. I built a new Windows 8.1 machine, and I want to create my first system image backup of that machine to the same USB hard drive. The "Windows 7 File Recovery" tool is gone in Windows 8.1, but wbadmin is alive and well:

        wbadmin start backup -backupTarget:\\?\Volume{2a2b...994f} -allCritical -quiet

    It fails with this text displayed:

        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2013 Microsoft Corporation. All rights reserved.

        Retrieving volume information...
        This will back up (EFI System Partition),(C:),Recovery (300.00 MB) to
        \\?\Volume{2a2b1255-3a86-11e3-be86-b8ca3a83994f}.
        The backup operation to F: is starting.
        Creating a shadow copy of the volumes specified for backup...

        Summary of the backup operation:
        The backup operation stopped before completing.
        The backup operation stopped before completing.
        Detailed error: ERROR - A Volume Shadow Copy Service operation error has
        occurred: (0x8004231f)
        Insufficient storage available to create either the shadow copy storage
        file or other shadow copy data.

    The EFI System Partition is 100 MB. The Recovery Partition is 300 MB. The C: partition is 1.72 TB NTFS, 218 GB used, 1.51 TB free. The destination drive is 1.81 TB NTFS, 678 GB used, 1.15 TB free. I've fiddled with vssadmin resize shadowstorage, with no change in the error. vssadmin list shadowstorage displays:

        Shadow Copy Storage association
           For volume: (C:)\\?\Volume{37a0...263}\
           Shadow Copy Storage volume: (C:)\\?\Volume{37a0...263}\
           Used Shadow Copy Storage space: 2.39 GB (0%)
           Allocated Shadow Copy Storage space: 2.81 GB (0%)
           Maximum Shadow Copy Storage space: 531 GB (30%)

        Shadow Copy Storage association
           For volume: (F:)\\?\Volume{2a2...94f}\
           Shadow Copy Storage volume: (F:)\\?\Volume{2a2...94f}\
           Used Shadow Copy Storage space: 334 GB (17%)
           Allocated Shadow Copy Storage space: 337 GB (18%)
           Maximum Shadow Copy Storage space: UNBOUNDED (922154758%)

    (Yeah, the "percent calculation" for UNBOUNDED is seriously bogus.) I've run SFC /verifyonly and it seems happy. I've verified that the "Volume Shadow Copy" service starts when I start the backup operation. Any suggestions?
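
    A hedged note on the usual cause: error 0x8004231f on -allCritical backups is frequently the small recovery partition, which VSS must snapshot but which rarely has the roughly 50 MB of free space required on volumes under 500 MB (about 320 MB on larger ones). Two directions to try, the second of which trades away the recovery partition in the image:

        REM see whether the 300 MB recovery volume is nearly full
        diskpart
        DISKPART> list volume

        REM or back up the OS volume explicitly instead of -allCritical
        wbadmin start backup -backupTarget:F: -include:C: -quiet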

    Read the article

  • Safer RAID5 rebuilds using partially failed disks?

    - by arcticmac
    There have been lots of articles posted recently about how RAID5 is dangerous because of long resilver times, and in particular because of the increasing chance of encountering a URE during the resilver. Obviously this is a significant concern. However, it seems that in many cases of interest (as long as you're keeping some kind of eye on your disks), when it comes time to rebuild the array, the disk that I'm replacing is still mostly readable. If you try to explain this predicament to the average layperson, they are typically very confused as to why you have two almost completely functional disks but can't produce one working array. It seems to me that there ought to be some way to take advantage of this to make rebuilds safer, as long as I'm willing to have the RAID5 be read-only for a couple of days while it rebuilds. Conceptually, what I have in mind looks something like this:

    1. When a disk fails, immediately take the RAID5 offline or mount it read-only.
    2. Attach a new disk (either in a spare bay, or externally via eSATA) and begin rebuilding it to replace the failed one. If known, perhaps start with the stripes in which the failure occurred, to minimize the chances of losing those if another disk fails.
    3. In the event that a second disk experiences a URE or other failure during the rebuild, try to source that data from the disk that is being replaced. Presumably if this happens, more rebuilding would be necessary.
    4. When complete, shut down the server, swap the replacement drive into the original bay if desired, and bring the array back up.

    Obviously such a process would not be appropriate for applications where uptime is critical or data loss cannot be tolerated, but it seems to me that this could help considerably to improve the reliability of RAID5. I assume that there's not a good way to implement a recovery like this at present, given that I haven't seen any indication of tools designed to do this, and that it seems like it would be rather obtuse to work out manually. Are there also technical issues with it that I haven't thought of (I'm still fairly new to RAID stuff)? Any thoughts on how hard something like this would be to implement (e.g. in Linux md raid)?
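
    The manual version of this does exist in practice: clone the failing member onto the replacement with GNU ddrescue (which retries and logs bad sectors instead of kicking the disk out), then assemble the array from the clone. A sketch with placeholder device names; every name here must be triple-checked before running, since ddrescue happily overwrites the target:

        mdadm --stop /dev/md0                          # take the array offline
        ddrescue -f /dev/sd_failing /dev/sd_new rescue.log
        mdadm --assemble /dev/md0 /dev/sd_new /dev/sdb1 /dev/sdc1

    Closer to the original idea, newer md kernels also support a hot-replace operation that rebuilds onto a spare while still reading from the failing member, falling back to parity only for unreadable sectors -- worth checking whether the running kernel's md version has it.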

    Read the article

  • Reduce the I/O priority of Windows Backup (Windows Server 2008 R2)

    - by HelloSam
    I have PostgreSQL running on a Windows Server 2008 R2 x64 box, and I have scheduled a backup every day from the RAID 1 DB disk to a dedicated standalone disk. The drives are SAS 15k on a Dell PERC 6i. I am using the built-in Windows Server Backup for this purpose. The problem is that whenever the backup process kicks in, database performance is hogged -- I would say almost a 10x reduction. From Resource Monitor, the disk queue is in the double-digit range when backing up, and less than 1 during the day. Disk activity is ~30-50MB/s during backup, so I guess the hardware is acting normally, though wbengine.exe takes up most of it. I think reducing the I/O priority of the backup process would be an answer, but I couldn't find a way to do it. Tuning the process CPU priority does not seem to help.

    Read the article

  • Natively boot a VirtualBox image

    - by isync
    I am faced with a Windows hardware/software problem left over from another person, and it's on me to resolve it. It's a mission-critical setup. The situation: I've got a physical server machine with:

    - Disk C:\ (one disk) containing a basic install of Windows Server 2008 R2 (formerly Win Vista Pro, now gone).
    - Disk D:\ (software RAID) containing a VirtualBox disk image of a configured Windows Server 2008 R2 running SQL Server R2, among others.

    What shall I do now? Migrate all the stuff from the configured VM to the basic but natively installed Windows Server 2008 R2 on C:\ (with the possibility of breaking stuff)? Or set the machine up to "natively boot" the VM with the help of bcdedit.exe (something I've read about but have never done; I don't know whether it works, whether it hurts performance, or whether it is stable for production)? For me, being old skool, I am in the process of de-virtualising everything (option 1). But I'd be happy if someone suggests I am OK to go down the "natively boot" route.
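
    Native VHD boot is a supported Windows 7 / Server 2008 R2 feature, so the second route is at least plausible. A hedged sketch of the mechanics -- the paths are illustrative, the image is assumed to be convertible to VHD, and the guest must carry drivers for the physical storage controller (and should have VirtualBox Guest Additions removed) before it will boot on bare metal:

        REM 1) convert the VirtualBox image to VHD format
        VBoxManage clonehd "D:\vm\server.vdi" "D:\vm\server.vhd" --format VHD

        REM 2) add a native-VHD boot entry; bcdedit prints the new entry's GUID
        bcdedit /copy {current} /d "Server 2008 R2 (VHD)"
        bcdedit /set {new-guid} device vhd=[D:]\vm\server.vhd
        bcdedit /set {new-guid} osdevice vhd=[D:]\vm\server.vhd
        bcdedit /set {new-guid} detecthal on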

    Read the article

  • Resize primary partition

    - by telebog
    I have a hdd with the following partition table:

        12GB  Primary Partition (NTFS)
        140GB Extended Partition (NTFS)

    I want to install Windows 7 and need more space on the primary partition. The problem is that when I resize the partitions, I end up with:

        12GB  Primary Partition (NTFS)
        110GB Extended Partition (NTFS)
        30GB  Free Space

    So I can't allocate the free space to the primary partition, because the free space is at the end of the disk. Is there a way to extend the primary partition to get:

        42GB  Primary Partition (NTFS)
        110GB Extended Partition (NTFS)

    without repartitioning the entire disk? I used Partition Magic, gparted-live-0.4.6-4 and others, with no success. With Disk Management in Vista I managed to extend the primary partition, but it made my partitions dynamic.

    Read the article

  • DPM 2010 iSCSI Mirror

    - by Thermionix
    We're using DPM 2010 for Exchange backups. The backup disks are iSCSI-attached drives from multiple NAS boxes. We'd like to mirror iqn.2009-07.com.example.example:RAID.iscsi4.vg0.iscsi05 onto iqn.2012-3.com.example.example:RAID.iscsi4.vg0.iscsi05. DPM 2010 requires the disk for itself and handles volume creation, therefore we can't just create a mirrored volume in Disk Management, and DPM itself doesn't seem to have any ability to mirror the disks in its storage pool. Any tips on how to mirror the volumes from one drive to the other?

    Read the article

  • Force Finder to log in as Guest to an SMB share

    - by slhck
    I have a QNAP NAS that offers a few SMB shares. As I'm in a trusted environment, my shares are accessible as guest rather than with a combination of username and password.

    Problem: When I click the name of the device in Finder's sidebar, I get the black "Connection failed" bar with the option "Connect as...". When I click that, I receive an error. I can however press ⌘ + K and enter the server's name manually, which gets me to the connection dialog. There, I have to select "guest". Now I can select one of the shares to connect to, and I'm finally connected to the server. If I then select it in the sidebar, I get a list of all available shares, because I'm connected as "guest", obviously.

    What I need: As soon as I unmount all shares, I have to go through the same procedure of manually logging in as "guest" again, which I find quite annoying. Is there any way I could get Finder (or the underlying SMB client) to know which credentials to use? Or should I look for the solution on the server side? (I know that other SMB shares seem to work fine in my network.)

    Diagnostics: The only thing I can get out of Console.app is:

        5/15/11 7:36:40 PM /System/Library/CoreServices/Finder.app/Contents/MacOS/Finder[200] SharePointBrowser::handleOpenCallBack returned 64

    This message occurs when I click the name of the SMB server in the Finder sidebar. Here's the output of smbclient -L meredith -U guest -d=2:

        charon:~ werner$ smbclient -L meredith -U guest -d=2
        added interface ip=192.168.100.11 bcast=192.168.100.255 nmask=255.255.255.0
        tdb(unnamed): tdb_open_ex: could not open file /private/var/samba/gencache.tdb: Permission denied
        Got a positive name query response from 192.168.100.100 ( 192.168.100.100 )
        Password:
        Domain=[MEREDITH] OS=[Unix] Server=[Samba 3.5.2]

                Sharename       Type      Comment
                ---------       ----      -------
                music           Disk
                movies          Disk
                photos          Disk
                software        Disk
                archive         Disk
                backups         Disk
                IPC$            IPC       IPC Service (NAS Server)
        Got a positive name query response from 192.168.100.100 ( 192.168.100.100 )
        Domain=[MEREDITH] OS=[Unix] Server=[Samba 3.5.2]

                Server               Comment
                ---------            -------

                Workgroup            Master
                ---------            -------
                WORKGROUP            MEREDITH

    Also, things I've tried: There is no relevant entry in the Keychain (but why would there be, I'm only connecting as guest). Connecting with user name "Guest" and an empty password logs me in, but after ejecting the last share I still get the same "Connection failed" error as before; the appropriate entry is made in the Keychain but obviously has no effect.
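
    One client-side workaround, sketched under the assumption that the share names from the smbclient listing are the ones wanted: mount the share as guest from a script or login item, so Finder never has to prompt. The empty password after the colon in the URL form is what marks the guest login:

        # mount_smbfs with an explicit guest user
        mkdir -p /Volumes/music
        mount_smbfs //GUEST@meredith/music /Volumes/music

        # or let LaunchServices do it (note the empty password after the colon)
        open 'smb://guest:@meredith/music'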

    Read the article

  • How can I permanently remove default root hints from a Server 2008 DNS server?

    - by TonyD
    My network exists in private address space, and I am unable to perform DNS lookups against DNS servers on the internet directly (blocked by firewall). There are other networks that exist in the same private address space as mine, and I need to be able to perform DNS lookups for devices in those networks as well. There are two main internal DNS servers in this private address space, but not on my network. I can perform DNS lookups against both of these servers for devices internal to our address space and for names on the internet. I would like to permanently remove the root hints from our Server 2008 R2 DNS server and replace them with these two internal DNS servers. I have removed them from the dnsmgmt console, from the C:\Windows\System32\DNS\cache.dns file, and from the RootDNSServers folder under the System folder in ADUC. Even so, they repopulate into the root hints tab in the server's DNS properties after roughly an hour. Does anyone know how to permanently remove these entries?
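
    In an AD-integrated setup the root hints tend to come back from the directory, so a commonly recommended alternative is to stop fighting them: configure the two internal servers as forwarders and mark the server as forward-only ("slave"), which makes it ignore root hints entirely. A sketch with placeholder addresses:

        REM forward everything to the two internal DNS servers, never use root hints
        dnscmd /ResetForwarders 10.0.0.1 10.0.0.2 /slave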

    Read the article

  • What is the keyboard shortcut to clear formatting on Office 2008 Mac?

    - by Laurent Bourgault-Roy
    I recently switched from a PC to a brand new MacBook Pro. I was happy to see that most of the keyboard shortcuts in Word 2008 were the same or almost the same as in Word 2007. However, there is one keyboard shortcut that I dearly miss: the clear-formatting shortcut (ctrl + space on Windows). I know the feature exists, since it's in the formatting bar, but I can't find the keyboard shortcut. I tried cmd + space, but that is the system shortcut to switch the keyboard type. alt + space deletes the text without clearing the formatting. ctrl + space doesn't seem to do anything. Does anyone know what the keyboard shortcut for the clear formatting command may be? I hope there's one, because it's really painful to reach for the mouse every time I type a title and want to switch back to the normal paragraph style...

    Read the article

  • Hyper-V 2012 and P2000 SAS SAN

    - by user155950
    Hi, I am having major problems setting up a Hyper-V 2012 cluster on a P2000 SAS SAN. Running System Center VMM 2012 SP1, I am unable to see any storage with which to create my cluster. Has anyone experienced anything similar? Under Fabric and Storage I can't add the P2000; all I can do is use Storage Spaces in Server Manager to create a storage pool and virtual disk. This lets me create a file share, which I can add to VMM, but I still can't see any disk to create a cluster with. I am just about at the point where I want to tear my hair out, wipe the servers, and stick VMware on them, because I know that works -- I have set up several systems like this in the past. The Hyper-V servers can see the storage, and Server Manager on my management machine seems to know both servers can see the same disk. VMM is running on the same machine and it can't see any disk. Help..... Thanks, Mike
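
    One hedged suggestion: VMM 2012 SP1 only "sees" array storage through an SMI-S provider, so a shared-SAS P2000 LUN that VMM can't manage directly can still be clustered the traditional way -- present the LUN to both hosts, cluster it in Failover Cluster Manager or PowerShell, and VMM will then pick up the CSV as available storage. A sketch, run on one of the Hyper-V nodes:

        # identify the shared LUN (before initialization it shows as RAW)
        Get-Disk | Where-Object PartitionStyle -eq 'RAW'

        # after initializing and formatting it once on this node, cluster it
        Get-ClusterAvailableDisk | Add-ClusterDisk
        Add-ClusterSharedVolume -Name "Cluster Disk 1"

    Storage Spaces layered on top of the P2000's own RAID is generally the wrong layer to add here; the array already provides the redundancy.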

    Read the article

  • md5sum repeatedly gives different checksum for same file on same machine

    - by Joel
    I have a very small and quite old hard disk, about 32G. Onto this disk I have copied a largish tar file, about 5G. When I run md5sum to generate a checksum on this file, I repeatedly get different results (on the same machine and the same file). This obviously should not happen. If I repeat the experiment with a much smaller file, the checksum is the same each time, as expected. I can only assume that because the large file spans most of the disk, and it is an old drive, I am experiencing a lot of read errors -- and the drive needs replacing? Could there be any other good reason for this? Something I can do to fix the problem other than buying a new disk? Update: sha1sum also produces inconsistent results.
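
    Worth noting before condemning the disk: inconsistent checksums on large files are at least as often bad RAM as a failing drive, since every read passes through main memory and the page cache -- a memtest pass is cheap insurance. Some quick checks, assuming smartmontools is installed:

        # drive health and error counters (reallocated/pending sectors etc.)
        sudo smartctl -H -A /dev/sda

        # a failing disk usually also logs I/O errors
        dmesg | grep -i -E 'ata|error'

        # do two raw reads of the same region even agree?
        cmp <(dd if=big.tar bs=1M count=1024 2>/dev/null) \
            <(dd if=big.tar bs=1M count=1024 2>/dev/null)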

    Read the article

  • MySQL slave server not removing old relay binlogs

    - by MKzero
    I have a MySQL server with slave replication on another host. Today I stumbled across high disk usage on the slave host and investigated what was taking up all the space. As it turns out, this space is occupied by the slave's relay logs. I tried turning the expire_logs_days variable down and restarted the MySQL daemon, but the reported disk space stays the same. I couldn't really find anything except that FLUSH LOGS should clear old logs; I tried that, with no result. Is there any way I can reduce the disk space that the relay logs take?
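
    A likely explanation: expire_logs_days only governs binary logs on a master; relay logs are normally deleted by the slave's SQL thread as soon as it has executed them, provided relay_log_purge is enabled. A large relay-log backlog therefore usually means the SQL thread is stopped or far behind, or purging has been switched off. A few checks, sketched as shell one-liners:

        # how far behind is the SQL thread, and how big is the backlog?
        mysql -e "SHOW SLAVE STATUS\G" | egrep 'Slave_SQL_Running|Seconds_Behind_Master|Relay_Log_Space'

        # make sure automatic purging is on, then rotate the relay logs
        mysql -e "SET GLOBAL relay_log_purge = 1; FLUSH RELAY LOGS;"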

    Read the article
