Search Results

Search found 7632 results on 306 pages for 'volume label'.


  • How to enable connection security for WMI firewall rules when using VAMT 2.0?

    - by Ondrej Tucny
    I want to use VAMT 2.0 to install product keys and activate software on remote machines. Everything works fine as long as the ASync-In, DCOM-In, and WMI-In Windows Firewall rules are enabled and the action is set to Allow the connection. However, when I try using Allow the connection if it is secure (regardless of the connection security option chosen), VAMT won't connect to the remote machine. I tried using wbemtest and the error is always “The RPC server is unavailable”, error code 0x800706ba. How do I set up at least some level of connection security for remote WMI access so that VAMT works? I googled for the correct VAMT setup and read the Volume Activation 2.0 Step-by-Step guide, but found nothing about connection security.
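
    For reference, the built-in rules in question can be switched from plain allow to require-authentication from an elevated prompt. This is only a sketch: it assumes the default English rule names, and the IPsec rule at the end is an assumption about what the "secure" option needs (Kerberos machine authentication in a domain):

      netsh advfirewall firewall set rule name="Windows Management Instrumentation (WMI-In)" new enable=yes action=allow security=authenticate
      netsh advfirewall firewall set rule name="Windows Management Instrumentation (DCOM-In)" new enable=yes action=allow security=authenticate
      netsh advfirewall firewall set rule name="Windows Management Instrumentation (ASync-In)" new enable=yes action=allow security=authenticate
      rem a matching connection security rule must exist on both machines, or the traffic is dropped
      netsh advfirewall consec add rule name="WMI IPsec" endpoint1=any endpoint2=any action=requireinrequestout auth1=computerkerb

    Without a connection security rule that both ends can satisfy, an authenticated firewall rule simply drops the traffic, which is consistent with the 0x800706ba "RPC server is unavailable" symptom.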

    Read the article

  • connect 2.1 stereo speakers to LG LCD-TV (5500 series)

    - by rMaero
    I bought a pair of speakers for my dad's TV, an LG 32LE5500. When I connected them, they sounded worse than the integrated speakers: the subwoofer didn't work at all, and both satellites were quieter than the internal ones. The audio output jack is labelled "H/P" (standing for headphones, with a matching symbol). Before buying, I checked this output with my phone's headphones and it worked, so I figured it would work with a set of speakers, since it's a standard audio output. I guess it really is only meant for headphones and not any other kind of audio gear. The only other audio output is the optical digital one, which I can't use, at least not with these speakers. Am I stuck, or is there a workaround?

    Read the article

  • How should I host a site that could potentially get a short spike in traffic of 1000%+

    - by James Simpson
    This is a purely theoretical question, but suppose I had a site that normally gets only a couple of thousand hits a day, yet for a few days each month traffic could shoot to several hundred thousand or even several million hits over a period of 1-3 days. The site would be pretty bare-bones (2-3 total pages, with 1-2 MySQL queries per page and some PHP), so bandwidth wouldn't be the issue; sheer request volume taking down the site is the main concern. Cloud hosting seems like the best way to go, but would something like Amazon EC2, Media Temple, or another provider be the right choice in this case?

    Read the article

  • Can you upgrade OEM Office with an OEM Upgrade

    - by LuckyLindy
    We have a bunch of computers at work that came with OEM Office 2000. We have all the original media, CDs, etc., and amazingly the computers still work well (they were top of the line when purchased in 2002). However, we'd like to upgrade to Office 2003, our corporate standard. We've found OEM Office 2003 upgrade software online for about $60 apiece, which would save us thousands over retail upgrades or volume licenses. But can we do this? I haven't been able to get a clear answer from Microsoft or anyone else as to whether OEM upgrades can be applied to OEM Office by anyone other than a system builder.

    Read the article

  • Function keys on Dell laptop double as OEM keys

    - by Factor Mystic
    I'm working with a new Dell Studio 1555, and the F1-F12 keys at the top of the keyboard double as OEM keys for things like volume and screen brightness. The problem is that the OEM functions are the default, so you have to press the Fn key to get the F-key behaviour. For example, you have to hit Alt+Fn+F4 to close a window instead of the regular Alt+F4, which is really annoying. Is there a way to reverse the default behaviour of the F-keys in Windows, ideally without some kind of third-party hotkey manager?

    Read the article

  • File Sharing: User-created folders are read-only to others on Mac 10.6 Server

    - by Anriëtte Combrink
    Hi there. We recently got a new Mac Mini Server with Mac OS X Server 10.6 on it. It has two 500GB volumes; we use the second one (Macintosh HD2, the volume other than the boot disk) to share our work files. I have added an account for each user in the Users pane of Server Preferences, and all our staff (the users added to the system) are members of a new group called toolboxstaff. Now, when a user creates a new folder on this volume, the folder is created with read-only access for everyone besides the owner. How do I set things up so that when a user creates a folder, it is created with read/write access for the toolboxstaff group? Thanks in advance.
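
    One common approach on 10.6 is to put an inherited ACL on the share point itself, so that anything created underneath it grants the group full access. A minimal sketch, assuming the share lives at /Volumes/Macintosh HD2/Work (adjust the path and the rights list to taste):

      # grant toolboxstaff read/write on the share and on everything created inside it
      sudo chmod -R +a "group:toolboxstaff allow list,add_file,search,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit" "/Volumes/Macintosh HD2/Work"

    The same inheritance can also be configured from Server Admin's File Sharing permissions editor, if you prefer a GUI over the command line.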

    Read the article

  • Alignment requirements: converting basic disk to dynamic disk in order to set up software RAID?

    - by 0xC0000022L
    On Windows 7 x64 Professional I am struggling to convert a basic disk to a dynamic one. Under Disk Management in the MMC the conversion is supposed to be offered automatically, but it isn't. My guess: because third-party partitioning tools were used, there isn't enough space before and after the partitions (system-reserved/boot + system volume) to store the required metadata. When demoting a dynamic disk to a basic disk manually, I noticed that some space seems to be required before and after the partitions. What are the exact alignment requirements that allow the built-in Windows tools to do the conversion?
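
    For what it's worth, the conversion can also be attempted from the command line, which sometimes gives a more explicit error than the Disk Management GUI; a quick sketch (disk 1 is just an example, pick the right disk from list disk):

      diskpart
      DISKPART> list disk
      DISKPART> select disk 1
      DISKPART> convert dynamic

    The LDM database that dynamic disks need lives in roughly the last 1 MB of the disk, so if a third-party tool has extended the last partition all the way to the final sector, the conversion has nowhere to put it.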

    Read the article

  • Driver to split audio to 2 different devices?

    - by ThantiK
    I recently bought one of these USB headsets against my better judgement, and it's really costing me my sanity at this point. Previously, with a standard jack, I just used a splitter so the same output went to both my TV and my headset; I could turn the TV off or the headset volume down when I wanted to use only one at a time. Now, with this USB headset, I find that I can't send the sound of one application to two different devices in Windows. How can I solve this? Does any software exist for this purpose?

    Read the article

  • radius traffic accounting - what attributes do I use for traffic (and how)

    - by Mark Regensberg
    We are building a web front end for an internet access token management system that uses RADIUS (FreeRADIUS), queried from a captive portal. The reason for building this part is integration with the accounting and billing platform that operates behind the scenes (all other parts are currently available as open source software). The structure is fairly standard, and setting up the basic bits was easy enough (authentication, traffic updates from the captive portal, account expiry dates/times), but I seem to have run out of ability when it comes to limiting an account by traffic consumed. So we can: set up usernames/passwords; set expiry dates/times for a given user; see the traffic for that user being accurately updated in the radacct table. But we can't figure out the correct way/attribute to expire a user when they have consumed X octets of traffic. What attributes are used, or, maybe more accurately, what would be the correct way to use these attributes to limit an account to a certain volume of traffic? Any links to documentation appreciated; the FreeRADIUS documentation doesn't seem to address the issue directly, or I'm looking in the wrong place... --mark
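
    The mechanism most setups reach for here is the rlm_sqlcounter module, which sums octets out of radacct and rejects the user once a per-user check attribute is exceeded. A very rough sketch of a FreeRADIUS 2.x style counter; the attribute names (Total-Octets, Max-Total-Octets) are assumptions that have to exist in your local dictionary, and the exact key names differ between FreeRADIUS versions:

      sqlcounter totaloctets {
          counter_name = Total-Octets
          check_name = Max-Total-Octets
          key = User-Name
          sqlmod_inst = sql
          reset = never
          query = "SELECT SUM(acctinputoctets + acctoutputoctets) FROM radacct WHERE username = '%{%k}'"
      }

    The per-account byte quota then lives as a check item in radcheck, and the counter is listed in the authorize section so it is evaluated on every login from the captive portal.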

    Read the article

  • Why does my custom Amazon EC2 AMI have limited instance type options?

    - by John
    The Basic 64-bit Amazon Linux AMI has the following instance type options available: Micro, Large, Extra-Large, High-Memory Extra Large, etc. I booted this AMI as a micro instance, made customizations, shut it down, detached the volume, took a snapshot, and registered my own custom AMI: ec2-register --snapshot [snapshot_id] --description "my description" --name "my name" --kernel aki-427d952b. That worked. HOWEVER, when I try to launch an instance from my custom AMI, only the following instance types are available: Micro, Small, High-CPU Medium, which coincidentally are the same instance types available if you try to boot the 32-bit Amazon image. Why do the available instance types of my custom image differ from those of the image I based it on?
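
    One thing worth checking, offered as a guess based on the symptom: ec2-register defaults the image architecture to i386 when it isn't given explicitly, and 32-bit images only offer the smaller instance types. Re-registering the same snapshot with the architecture spelled out would look roughly like this (the snapshot ID is a placeholder):

      ec2-register --snapshot snap-xxxxxxxx --architecture x86_64 --kernel aki-427d952b --name "my name" --description "my description"

    If the kernel image itself is a 32-bit AKI, a matching 64-bit AKI would have to be used instead.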

    Read the article

  • Using NDMP as an alternative to CIFS mount

    - by user138922
    I have a weird but interesting use case. I use CIFS to mount shares from a file server (NetApp, EMC, etc.) onto an application server (a Windows/Linux server where my application runs). My application needs to process each file from the shares that I mount via CIFS. It also needs access to the metadata of these files, such as name, size, ACLs, etc. I would like to see whether I can achieve the same via NDMP. I have some very basic questions about this use case, and it would be great if you could help me out: Is this even achievable? Can I transfer just the shares that interest me instead of the entire volume?

    Read the article

  • Bitlocker-to-go on fixed drive

    - by Unsigned
    Scenario: Two drives are connected to a computer, one via a SATA-to-USB interface, the other directly via a SATA-to-eSATA cable. The drive on USB appears as a removable drive; the drive on eSATA appears as a fixed drive. Both use NTFS. The USB drive offers BitLocker To Go; the eSATA drive only offers BitLocker. Question: It is my understanding that drives encrypted with BitLocker To Go include an app that allows Windows XP read-only access to the volume. Is this the only difference, and is there a way to use BitLocker To Go on the eSATA drive? Update: Another difference is noted here: "The recovery key is required when a BitLocker-protected fixed data drive configured for automatic unlocking is moved to another computer." [1] Presumably that does not apply to removable drives.

    Read the article

  • Run a MongoDB configuration server without 3GB of journal files

    - by Thilo
    For a production sharded MongoDB installation we need 3 config servers. According to the documentation, "the config server mongod process is fairly lightweight and can be ran on machines performing other work". However, in the default configuration they all have journalling enabled, and with preallocation this takes up 3 GB of disk space. I assume that the actual data and transaction volume of a config server is quite small, so this seems excessive. Is there a way to (safely!) run these config servers with much less disk use for the journal? Do I need journalling at all on config servers? Can I set the journal size to be smaller?
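
    A commonly suggested knob for this, worth verifying against the MongoDB version in use, is smallfiles, which among other things caps the journal files at 128 MB each instead of 1 GB; on a config server it would look something like (the dbpath is an example):

      # command line
      mongod --configsvr --smallfiles --dbpath /data/configdb

      # or the equivalent mongod.conf entries
      configsvr = true
      smallfiles = true

    Journalling itself is generally worth keeping on for config servers, since losing or corrupting the cluster metadata costs far more than the disk space.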

    Read the article

  • Lost disk space in Windows 7, cannot find the missing space

    - by Tsanders
    My hard drive is complaining it is low on disk space, but something strange seems to be happening: Explorer reports 10 GB of available space (on a 120 GB hard disk), and chkdsk at the command prompt agrees, but if I use a disk space tool such as SpaceSniffer or WinDirStat, only 50 GB of data is found. My guess is that something is holding on to a large block of disk space (but that's just a guess) because of an earlier, very large (40 GB) download attempt that didn't complete. There isn't 40 GB of files on the drive (hidden or visible), yet Explorer insists the space is in use. How can I claim back this disk space (without formatting my hard disk)? SpaceMonger provides a clue, reporting four unscannable folders which add up to 43 GB: C:\RRBackups, C:\System Volume Information, C:\Windows\Csc\v2.06, C:\Windows\System32\LogFiles\Wmi\RtBackup. Does anybody know what these folders are for, and how I can claim back at least some space? System Restore claims about 4 GB, so that doesn't seem to be the main problem.
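
    Two of those paths can at least be inspected from the command line. System Volume Information is mostly shadow copy / System Restore storage, which can be listed and capped from an elevated prompt (the 5 GB cap below is just an example value):

      vssadmin list shadowstorage
      vssadmin resize shadowstorage /for=C: /on=C: /maxsize=5GB

    C:\Windows\CSC is the Offline Files cache, which can be checked and reset from the Offline Files control panel, and C:\RRBackups is the ThinkVantage Rescue and Recovery backup store, managed by that application rather than by Windows.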

    Read the article

  • Best way to backup and restore millions of files

    - by bongo
    Hi, I'm facing a rebuild of the volume on which I host the mail storage (Kerio MailServer, which uses maildirs). I need to back up and restore, as quickly as possible, the 3.5+ million small files (about 600 GB) of the store directory. It takes more than 12 hours via rsync to an NFS share, but I also have a 1 TB FireWire 800 RAID 1 disk that I can use (from some preliminary tests it's faster). I'm working off an Intel Xserve. What is the fastest way to do it? rsync? Finder copy? tar?
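
    For a one-shot bulk copy of millions of small files, a tar pipe avoids both rsync's per-file bookkeeping and the Finder's overhead. A rough sketch, assuming the store is at /Volumes/MailStore and the FireWire disk is mounted at /Volumes/Backup (both paths are placeholders):

      # stream the tree through tar instead of copying file by file, preserving permissions
      cd /Volumes/MailStore && sudo tar cf - . | (cd /Volumes/Backup && sudo tar xpf -)

    rsync still earns its keep for a final incremental pass just before the cutover, when only the files changed since the bulk copy need to move.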

    Read the article

  • VSS Not Creating Shadow Set

    - by Jeff Leyser
    I'm trying to set up backup scripts on Windows XP that use Volume Shadow Copy. I downloaded the VSS 7.2 SDK from Microsoft and used the included vshadow.exe to create a shadow set: vshadow -script=vss-setvar.cmd f: (note that I've tried both f: and c:). vshadow executes just fine, gives no errors, and reports that the shadow is created. However, running vshadow -q as the very next command results in "There are no shadows on the system", and indeed, if I use dosdev to try to map the shadow set named in vss-setvar.cmd, it does not work. Am I missing a step?
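
    One vshadow behaviour worth knowing about, and a likely explanation for the symptom: by default it creates a non-persistent shadow copy that is deleted automatically the moment vshadow.exe exits, so by the time vshadow -q runs there is genuinely nothing left to list. Windows XP only supports non-persistent shadows, so the usual pattern there is to let vshadow run the backup itself while the shadow still exists; a sketch, where backup.cmd is a placeholder for your own script:

      rem create the shadow, write its variables to vss-setvar.cmd, and run backup.cmd before tear-down
      vshadow -script=vss-setvar.cmd -exec=backup.cmd f:

      rem inside backup.cmd: pick up the device name and copy from it (e.g. after mapping it with dosdev)
      call vss-setvar.cmd
      echo Shadow device is %SHADOW_DEVICE_1%

    On Windows Server 2003 and later, the -p switch creates a persistent shadow that survives after vshadow exits.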

    Read the article

  • How to delete a tape from the ACSLS library

    - by Senthil Kumar
    Hi, can anyone help me out with the following issue? A tape was stuck in a drive and has been physically removed, but I now need to logically delete the tape entry from NetBackup so that the same media can be re-inserted for operations. NetBackup still thinks the tape is in its old location, so that entry has to be removed; once NetBackup no longer recognises the tape at that location, the same tape can be taken back in through an inventory. The version in use is NetBackup 5.1. Is there any command to delete this entry? This is a clustered environment (active/passive), and we use an ACSLS library (physical) as well as a Switch SN6000 (logical). Kindly help me out: when we tried to delete the media from the GUI it said "Could not delete - Cannot delete assigned volume (92)".
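
    Error 92 usually means the media is still assigned (it has unexpired images on it), and assigned media cannot be deleted until it is unassigned. The usual sequence, run on the master server and offered only as a sketch since NetBackup 5.1 is old and the media ID below is a placeholder, is to expire the images first and then remove the volume record:

      # expire all images on the media so it becomes unassigned
      bpexpdate -m A00001 -d 0
      # delete the volume record from the volume database
      vmdelete -m A00001

    After that, a robot inventory (from the GUI or vmupdate) should pick the tape up again as new media.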

    Read the article

  • Created a mirrored software RAID with a bad-block HDD. How to check data integrity?

    - by rumburak
    There is an error in the System event log like this one: "The device, \Device\Harddisk1\DR1, has a bad block." Because of the above I created a RAID 1 set from this disk and another one, using Windows Server 2008 R2 software RAID volumes. The volume in Disk Management is marked as "Failed Redundancy" and "At Risk". I can tell it to "Reactivate Disk" and it starts to re-sync, but after a while it stops and returns to the previous state: the re-sync halts on a bad block on the old disk and logs the same error in the System event log. The old disk's status is Errors, the new disk's status is Online. How can I check that there is an exact copy of the old disk on the new one? It is a server machine, so I would prefer to keep it running during this check.

    Read the article

  • How to Recover HDD Formatted by "Create a Recovery Drive" Tool of Windows 8.1?

    - by ide
    I have a 2 TB USB HDD which had these drives: F:, about 1 TB with 750 GB of data; H:, about 120 GB with 60 GB of data; I:, about 780 GB with 250 GB of data (for the TV: it showed as raw in Windows but was visible on the smart TV). I took 521 MB from the last part of H: to create a new G: drive. Then I ran the "Create a recovery drive" tool of Windows 8.1 and chose the G: drive. It said all data on the drive would be deleted. I thought that meant just G:, but it wiped my whole HDD: it created a new 32 GB F: drive with 337 MB written to it, and the rest of the HDD is unallocated. I tried these programs to get my first 3 drives back, but none of them helped with the first partition: TestDisk; MiniTool Partition Wizard Home Edition; EaseUS Partition Master 9.2.2 (I deleted the new F: drive volume because it scans only unallocated space); Recuva; PC Inspector File Recovery.

    Read the article

  • How to disable the second partition without unmounting it on a Mac?

    - by bagusflyer
    I've installed OS X Yosemite on another partition on my Mac, but there is a problem. For example, I have iBooks installed on both partitions, so when I right-click an epub or pdf file, both copies of iBooks show up in the context menu. That's not what I want: I only want the apps from the Yosemite partition to be shown. Of course I could hide the apps on my old Mavericks partition by unmounting that volume, but that's not what I want either, because the partition then stays hidden when I boot the machine and I can't boot into Mavericks. Can anybody suggest a better approach? Thanks
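
    One thing that sometimes helps with duplicate "Open With" entries is rebuilding the Launch Services database, which is what populates that context menu; this is only a guess for the cross-partition case, and the long framework path below is simply the usual location of the lsregister tool:

      /System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister -kill -r -domain local -domain system -domain user

    The duplicates tend to come back once the other volume's applications are scanned again, so keeping the Mavericks partition mounted but excluding it from Spotlight may also be worth a try.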

    Read the article

  • Sound does not work on the administrator profile but works on a non-administrator profile on Windows XP

    - by Sharjeel Sayed
    Initially I suspected a missing driver, but sound (for movies, songs, etc.) works fine on another, non-administrator account; it just does not work when I log in to the Administrator account. And yes, I have checked the sound volume and mute status as well. Details of my system: OS: Windows XP Professional Service Pack 3 (build 2600); Processor: 2.00 GHz AMD Athlon 64; Memory: 448 MB usable installed memory; Board: ASUSTeK Computer Inc. K8V-MX; Bus clock: 200 MHz; BIOS: American Megatrends Inc. 0112 07/18/2005; Multimedia: SoundMAX Integrated Digital Audio. Any help would be appreciated. Thanks in advance.

    Read the article

  • ZFS: Redistribute zvol over all disks in the zpool?

    - by growse
    Is there a way to prompt ZFS to redistribute a given filesystem over all of the disks in its zpool? I'm thinking of a scenario where I have a fixed-size ZFS volume that's exported as a LUN over FC. The current zpool is small, just two 1 TB mirrored disks, and the zvol is 750 GB in total. If I were to suddenly expand the zpool to, say, twelve 1 TB disks, I believe the zvol would still effectively be 'housed' on the first two spindles only. Given that more spindles = more IOPS, what method could I use to 'redistribute' the zvol over all 12 spindles to take advantage of them?
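
    ZFS has no rebalance command, so the usual workaround is to rewrite the data after the pool has grown, which naturally spreads the newly written blocks across all vdevs. A sketch using send/receive; the dataset names are examples:

      # snapshot the zvol and copy it into a new dataset, which allocates fresh blocks pool-wide
      zfs snapshot tank/lun0@rebalance
      zfs send tank/lun0@rebalance | zfs recv tank/lun0_new
      # after re-pointing the FC target at the new zvol, drop the old one
      zfs destroy -r tank/lun0

    The price is needing enough free space for a second copy of the zvol and a brief outage while the LUN is re-pointed at the new volume.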

    Read the article

  • Right-button drag-drop function in Gnome

    - by HorusKol
    While I don't really miss the annoyances that go with working on Windows, one thing I do wish I had in GNOME is the ability to hold the right mouse button down on a file and drag it to get the context menu asking whether I want to move or copy the file. I realise the default tries to be sensible (like in Windows, it defaults to move on the same volume and copy across different volumes) and that it can be overridden with <ctrl> or <shift>, but I'm still used to the right-button drag option and keep getting frustrated when it doesn't work...

    Read the article

  • Disabling RAID feature on HP Smart Array P400

    - by Arie K
    I'm planning to use ZFS on my system (HP ML370 G5, Smart Array P400, 8 SAS disks). I want ZFS to manage all disks individually so it can schedule I/O better (i.e. I want to use the software RAID features in ZFS). The problem is that I can't find a way to disable the RAID functionality on the controller. Right now the controller aggregates all of the disks into one big RAID 5 volume, so ZFS can't see the individual disks. Is there any way to accomplish this setup?
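
    The P400 has no true JBOD/passthrough mode, so the closest approximation is deleting the RAID 5 array and creating one single-drive RAID 0 logical drive per physical disk. A hedged sketch using hpacucli; the slot and drive addresses are examples, so check yours with show config first:

      hpacucli ctrl slot=0 show config
      # one logical drive per physical disk
      hpacucli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
      hpacucli ctrl slot=0 create type=ld drives=1I:1:2 raid=0
      # ...repeat for the remaining disks

    ZFS then sees eight separate devices, although the controller's cache and error handling still sit between ZFS and the spindles, which is worth keeping in mind.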

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now used only to access music files (Sibelius, audio, MIDI, Live, Logic, etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things; here is a list: make a complete filesystem clone with Antonio Diaz's ddrescue; run DiskWarrior on the copy and repair whatever errors occurred; wipe out all ACLs on the entire drive; set all permissions to the same value (wide open, 777); remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive; transfer the data to a newly formatted drive folder by folder, resetting the Spotlight index after adding each one to watch for issues (interestingly, no issues occurred except in the Documents folder, and when I transferred only the Documents folder to a newly formatted drive on its own there was no trouble, so it appears it may not be the content but the quantity or specific combination of data that causes problems); use Data Rescue to transfer the data to yet another newly formatted drive to expose any missed hidden files. Between each of the above steps I stopped Spotlight (by searching for anything beginning with md in Activity Monitor - All Processes and quitting it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it again. In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated time increases, usually to 4 hours remaining. This is where it gets stuck; it keeps predicting 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md* processes from Activity Monitor to be able to eject it without Force Eject. Once I disconnect the drive after the 4-hours-remaining situation, if I reattach it, Spotlight forever estimates the remaining time and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used both USB and FireWire drives). I have tried this on several machines (3 to be precise) and on 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before; he needs to be able to query for results. Anyone got any ideas? ---update 2-6-11 Since I have not received any responses except the one below, which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID, where PID is the mdworker process ID, to try to determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: after indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes", and the Terminal window with the opensnoop output stops moving.
    I then proceeded to execute the same command on mds to see what it was doing, and here's what I get, repeatedly: 501 57 mds 21 / 501 57 mds 21 /Volumes/Sno Leppard 501 57 mds 21 /Volumes/Tiger 501 57 mds 21 /Volumes/Leppard 501 57 mds 21 /Volumes/Disk Warrior 501 57 mds 21 /Volumes/ONM Data These represent all the volumes currently mounted on the system. All except ONM Data, which is the one I am trying to index, are excluded from Spotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions: what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M __final edit__ I finally resolved the issue and here is how I did it. I used the terminal command "sudo opensnoop -p PID", where PID is the process ID of the processes I was monitoring; I was looking at all instances of mds and mdworker running on the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support - they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated: mds and the mdworker(s) would start indexing, and eventually the Spotlight icon would say 6 hours remaining, with all mdworkers gone and mds at 90% or so of CPU. But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I still have no clue what is causing this behavior, but I did find a functional solution to the problem. Anyone with a similar problem: run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find an actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio-related files: Performer, Live, SD2, things like that), I will edit again to let you all know. Thanks for any attempts to help, M
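
    For anyone following the same debugging path, the indexing state of an individual volume can also be driven from the command line with mdutil, which is sometimes handier than toggling the Privacy list (the volume name is taken from the post above):

      # check indexing status, erase the existing index, and turn indexing back on for just that volume
      mdutil -s "/Volumes/ONM Data"
      sudo mdutil -E "/Volumes/ONM Data"
      sudo mdutil -i on "/Volumes/ONM Data"

    Folder-level exclusions, like the one that ultimately worked here, still have to go through System Preferences > Spotlight > Privacy.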

    Read the article
