Search Results

Search found 20358 results on 815 pages for 'disk management'.


  • Assistance in setting up a new APC Smart-UPS RT in a new VMware environment

    - by user38085
    I'm new to setting up an APC Smart-UPS RT 8000VA UPS with a network management card (AP9618). The project calls for upgrading the network card's firmware to the latest version. It also calls for installing the PowerChute Business Edition software and configuring email notifications for temperature, shutdown, and low battery. I know I'll have to use the serial cable to flash the firmware, and I'll install the software on one Server 2003 box; on that same server I'll also set up the web GUI (IP address) interface. What confuses me most is the overall process and the steps to follow without taking down the network, which would be very bad. Does flashing the firmware take down the UPS? Do I have to run BOOTP commands to set up the network card? Also, no agents will be installed on any of the VMware guest OSes and no SNMP traps will be used.
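
    If BOOTP isn't available, one commonly documented way to give the management card an initial IP address is APC's ARP/ping method, run from a Windows machine on the same subnet; the sketch below is only an illustration, and the IP and MAC addresses are placeholders (the card's MAC is printed on its label):

      :: bind a temporary IP to the card's MAC address
      arp -s 192.168.1.50 00-c0-b7-12-34-56

      :: a ping with a 113-byte payload tells the card to adopt that address temporarily
      ping 192.168.1.50 -l 113

      :: finish the configuration (permanent IP, gateway, users) over telnet
      telnet 192.168.1.50

    Upgrading the AP9618 firmware normally restarts only the management card, not the UPS output, but treat that as an assumption to verify against APC's release notes for your firmware version before touching a production unit.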

  • Disable Windows key hotkeys when using VirtualBox

    - by statenjason
    I'm currently running an Arch Linux guest in VirtualBox with Windows 7 x64 as the host. In Arch, I use dwm for window management. As dwm is heavily dependent on hotkeys, I've used the ALT key as its META key to prevent conflicts with the Windows 7 host. However, when using Emacs (which also relies heavily on hotkeys) within dwm, there are conflicts because Emacs also uses ALT as its own META. I'd like to change either dwm or Emacs to use the Windows key as META, but shortcuts such as Win+L are captured by the host machine and lock my system. Is there any way to prevent these hotkeys from being triggered on the host while I'm working inside VirtualBox?
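
    One workaround (a sketch, not a complete answer) is to disable the host's Win+L lock shortcut via the DisableLockWorkstation policy value, so the guest can receive that combination; note this disables workstation locking for the whole host session, which may not be acceptable, and a log off may be needed before it takes effect:

      :: disable Win+L on the Windows 7 host (set /d 0 to re-enable)
      reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\System" ^
          /v DisableLockWorkstation /t REG_DWORD /d 1 /f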

  • Hyper-V Boot failure on VHD made with Acronis?

    - by gary
    Hoping someone can advise on my problem. I am running Hyper-V Server Core and trying to create my first VM for testing purposes. Using Acronis True Image Echo Server with Universal Restore, I converted a Server 2000 .tib to a VHD. I then copied it across to the Hyper-V local drive and created a new VM, pointing its hard drive at the VHD image. When I boot it up, all I get is "Boot failure. Reboot and Select proper Boot device or Insert Boot media in selected Boot device". The original server had SCSI disks and the Hyper-V VM doesn't, but I have ensured that it boots from an IDE disk and that it is in fact booting from that and not the CD. I can only imagine this is caused by the SCSI disks behind the VHD, but I cannot for the life of me work out how to fix it, and I have several of these to do, so I'm starting to worry! I can confirm that converting the same .tib to a VMDK worked first time using VMware on a laptop. Any help very much appreciated. Gary

  • mkisofs creates an ISO file with no errors or warnings, but the ISO is corrupted

    - by user1291203
    I'm trying to make a DVD from MPEG-2 files. First of all, I'm on Windows 7. I'm using the following binaries: jpeg2yuv, mpeg2enc, mplex, spumux, and dvdauthor. Everything is fine up to this point, with absolutely no errors, and then I use mkisofs to make the ISO file, again with no errors or warnings. It creates the ISO file, but I cannot burn it to DVD; the burning tool says: "The selected disk image file isn't valid." I tried it on Mac OS X as well, and there the ISO worked fine. It is an NTSC ISO. I'm totally stuck with this problem; any help is really appreciated.
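
    One thing worth checking (a guess based on the symptoms, not a confirmed diagnosis) is whether mkisofs is being invoked with the DVD-Video/UDF options; an ISO built from a dvdauthor output directory is usually created along these lines, where the output name, volume label, and directory are placeholders:

      :: dvd\ is the dvdauthor output directory containing VIDEO_TS (and AUDIO_TS)
      mkisofs -dvd-video -udf -V "MY_DVD" -o movie.iso dvd\

    Without -dvd-video/-udf, some burning tools and players reject the resulting image even though mkisofs itself reports no errors.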

  • Overriding destination directory from ROBOCOPY Job file?

    - by marc_s
    I am using Robocopy to back up my project directories to an external disk, and it works like a charm, except for one little issue: sometimes I wish I could override the destination directory specified in my Robocopy job file (myproject.rcj) and send the files somewhere else. So if I have this in my myproject.rcj:

      :: Robocopy Job MYPROJECT.RCJ
      ::
      :: Source Directory :
            /SD:d:\MyProject                :: Source Directory.
      ::
      :: Destination Directory :
            /DD:f:\MyDefaultDestination     :: Destination Directory.

    is there any way I can instruct Robocopy to use a different destination when executing the job? I execute Robocopy like this:

      robocopy /job:myproject.rcj

    and I wish I could override the default destination directory with:

      robocopy /job:myproject.rcj /DD:X:\OtherDestination

    but that doesn't seem to work:

      ERROR : Invalid Parameter #2 : "/DD:X:\OtherDestination"

    Any ideas? See also the sketch below for a possible workaround.
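
    A possible workaround, sketched under the assumption that the /SAVE and /NODD switches behave as documented, is to save the job without a destination and then pass the destination as an ordinary command-line argument each time you run it (the /MIR switch here just stands in for whatever copy options you actually use):

      :: save a job file that records the source and options but no destination
      robocopy d:\MyProject /MIR /NODD /SAVE:myproject

      :: supply whichever destination you want at run time
      robocopy /job:myproject f:\MyDefaultDestination
      robocopy /job:myproject X:\OtherDestination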

  • Which process is using my NAS?

    - by sethu
    I have a NAS connected to my cluster. The NAS holds all our home directories. When I ran a set of experiments last week, saving a 1 GB file to the NAS took around 30 seconds; the same write to a local disk takes 18 seconds. But when I tried the same process today, it took 150 seconds. I am unsure what the problem is. Can someone help me pinpoint the issue? Is it possible to find out which processes are accessing the NAS, or how much NAS bandwidth is being used? Thanks for your help. -Sethu
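
    A few standard Linux tools can narrow this down; the sketch below assumes the NAS is an NFS mount at /home and that the listed packages (iotop, nfs-utils, iftop; names vary slightly by distribution) can be installed on the cluster nodes:

      # which processes have files open under the mount point
      lsof /home
      fuser -vm /home

      # per-process I/O, showing only processes currently doing I/O
      iotop -o

      # per-mount NFS latency and throughput statistics, every 5 seconds
      nfsiostat 5

      # live network traffic to and from the NAS
      iftop -i eth0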

  • Troubleshooting Amazon EC2 reboot

    - by tgm
    We've had a server (CentOS) running in EC2 for a few months. It had been going pretty smoothly until today, when we got an alarm that the server was unavailable (the HTTP service couldn't be reached). I tried SSHing into the box, but that timed out as well. I logged into the EC2 console; it said the instance was running, and there wasn't anything in the system log. One odd thing I noticed is that even though we have an Elastic IP attached to it (which shows in the Elastic IP management area), the instance detail does not show an EIP associated with the instance. I looked through the message log, and the last thing I see around the time we got our alert is dhclient renewing its lease. I'm guessing there may have been some sort of networking issue. How might I check whether that was the problem, or whether there were any other issues that may have caused our instance to stop responding?
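
    For post-mortem checks like this, the EC2 API tends to be more informative than the console. The commands below use the current AWS CLI purely as an illustration (at the time, the ec2-api-tools equivalents such as ec2-describe-instance-status and ec2-get-console-output did the same job), and the instance ID is a placeholder:

      # system and instance reachability checks as seen by AWS
      aws ec2 describe-instance-status --instance-ids i-0123456789abcdef0

      # kernel/boot console output, useful when SSH is dead
      aws ec2 get-console-output --instance-id i-0123456789abcdef0

      # confirm which instance (if any) the Elastic IP is currently associated with
      aws ec2 describe-addresses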

  • Problem using the sysprep tool to run Windows on different hardware

    - by Usman Ajmal
    Hi, I am using the sysprep tool to move Windows 7 to different hardware. What I do is run sysprep on a computer, select System Audit mode, check the Generalize check box, select Shutdown, click OK, and wait for the computer to shut down. When the system shuts down, I remove the hard disk from the computer and plug it into another computer with different hardware. Then I turn on the computer and, after a series of operations (including one reboot), I eventually get to the Windows desktop on the new hardware, BUT the problem is that the System Preparation Tool starts up automatically. I rebooted the computer, but the System Preparation Tool starts up every time. One more thing I noted: at each reboot, before loading the desktop, the computer shows the message "System is now preparing your computer for first use". Any idea how I can get a clean desktop after performing sysprep? Or is there a step I am missing? Thanks a lot.
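
    The behaviour described is what audit mode does: the machine keeps returning to audit mode (and reopening the System Preparation Tool) until sysprep is told to exit to OOBE. A minimal sketch of the usual final step, run either on the new hardware or as the original generalize pass instead of audit mode:

      :: leave audit mode and boot into the normal first-run (OOBE) experience
      C:\Windows\System32\Sysprep\sysprep.exe /oobe /shutdown

      :: or, to generalize and hand the disk to new hardware in one step
      C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown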

  • Windows 7 CHKDSK log - What is "Internal Info"?

    - by Ron Klein
    If I run Check Disk (CHKDSK) on Windows 7, I get its log in the Event Viewer. If I look inside it, I can see some kind of binary dump:

      Internal Info:
      00 4f 05 00 53 4a 05 00 ec 46 09 00 00 00 00 00  .O..SJ...F......
      fa 03 00 00 5c 00 00 00 00 00 00 00 00 00 00 00  ....\...........
      48 93 42 00 50 01 41 00 f8 1f 41 00 00 00 41 00  H.B.P.A...A...A.

    Is there any meaningful information in that field, other than debug info for the programmers who developed the tool?

  • Unable to see External HDD in Windows Explorer

    - by Jamie Keeling
    I have bought a 320 GB external HDD which I want to use with my PlayStation 3. I knew it would only work with a FAT32 file system, so I formatted it using some free HP software, not knowing that it would only work up to 32 GB. After seeing this I panicked, downloaded Partition Wizard Home Edition, and deleted the partition. As I was about to create a new partition to put it back to NTFS (at this point I just wanted to be able to use it at all), I accidentally knocked the HDD's cable out of my computer, and since reconnecting it the external HDD is no longer recognised under My Computer. Disk Management asks me to initialise the disk using MBR, but that fails with "copy protected". Even the partitioning software I mentioned can't do anything about it; all it says is "Bad Condition", and I can't perform any operations on the drive. Would anybody be able to guide me in getting this sorted? I'm terrified I've wasted a perfectly good 320 GB HDD.
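
    If the data on the drive doesn't matter, one recovery path worth trying is diskpart from an elevated command prompt. This is only a sketch: it assumes the external drive shows up as disk 1 in the "list disk" output (double-check, because "clean" irreversibly wipes the selected disk's partition table), and it formats the drive as NTFS, since Windows itself won't create FAT32 volumes larger than 32 GB; a third-party tool would be needed afterwards to make a large FAT32 partition for the PS3.

      diskpart
      list disk
      select disk 1
      clean
      create partition primary
      format fs=ntfs quick
      assign
      exit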

  • Keyboard doesn't work after upgrade to Debian Wheezy

    - by mikhail
    After upgrading from Lenny to Wheezy, the keyboard and mouse don't work in X (the keyboard works fine before X starts). I looked around the internet for this issue and found some suggested solutions: remove xorg.conf (http://forums.debian.net/viewtopic.php?f=7&t=62880); update udev and base-files (http://forums.debian.net/viewtopic.php?f=6&t=64927&p=376136#p376136); remove the /run directory (http://forums.debian.net/viewtopic.php?f=6&t=64927&p=376136#p376136); reinstall xserver and xorg. But nothing helped me :( The X server logs don't contain any messages about keyboard or mouse errors. Below is the configuration of my system:

      krestyaninov@xxx# uname -a
      Linux xxx 3.0.0-1-686-pae #1 SMP Sat Aug 27 16:41:03 UTC 2011 i686 GNU/Linux
      krestyaninov@xxx# dpkg -l | grep udev
      ii  libgudev-1.0-0   172-1    GObject-based wrapper library for libudev
      ii  libudev0         172-1    libudev shared library
      ii  udev             172-1    /dev/ and hotplug management daemon
      krestyaninov@xxx# dpkg -l | grep base-files
      ii  base-files       6.5      Debian base system miscellaneous files
      krestyaninov@xxx# dpkg -l | grep xorg
      ii  xorg             1:7.6+8  X.Org X Window System
      ...
      ii  xserver-xorg     1:7.6+8  X.Org X server
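
    One thing the reinstall list above doesn't cover is the X input drivers themselves: Wheezy's X server hands input devices off to udev and the evdev driver, so if the input driver packages were removed or left behind during the dist-upgrade, X has no keyboard or mouse at all while the console still works. A sketch of what to check, assuming standard Debian package names (replace gdm3 with whatever display manager you actually use):

      # are any input drivers installed at all?
      dpkg -l | grep xserver-xorg-input

      # install the standard set (evdev plus the legacy kbd/mouse drivers)
      apt-get install xserver-xorg-input-all xserver-xorg-input-evdev

      # then restart the display manager (or reboot)
      service gdm3 restart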

  • Can Windows log CryptoAPI CRL timeouts?

    - by makerofthings7
    We have several .NET applications that occasionally "act slow" with no CPU or disk activity. I suspect that they are hanging on authentication while trying to validate a certificate, since the delay is almost 20 seconds. As per this MSFT article: "Most applications do not specify to CryptoAPI to use a cumulative time-out. If the cumulative time-out option is not enabled, CryptoAPI uses its default setting, which is a time-out of 15 seconds per URL. If the cumulative time-out option is specified by the application, then CryptoAPI will use a default setting of 20 seconds as the cumulative time-out. The first URL receives a maximum time-out of 10 seconds. Each subsequent URL time-out is half of the remaining balance in the cumulative time-out value." Since this runs as a service, how can I detect and log CryptoAPI hangs, both for applications I have source code to and for third-party applications?
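
    One built-in option, independent of whether you have source code, is the CAPI2 operational event log, which records certificate chain building and revocation retrieval, including the URLs involved and their timings. A sketch of enabling it and of reproducing a slow fetch from the command line, where the certificate file name is a placeholder:

      :: enable the CAPI2 operational log (Event Viewer > Applications and Services Logs >
      :: Microsoft > Windows > CAPI2 > Operational) and enlarge it so entries aren't lost
      wevtutil sl Microsoft-Windows-CAPI2/Operational /e:true
      wevtutil sl Microsoft-Windows-CAPI2/Operational /ms:33554432

      :: reproduce a chain build with URL retrieval to see which CRL/OCSP URL is slow
      certutil -verify -urlfetch someservice.cer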

  • Adding MySQL SQL/data nodes to a MySQL Cluster without restarting the cluster

    - by Dwayne Johnson
    I currently have MySQL Cluster up and running. For high scalability, is there a way to add MySQL (SQL) nodes, data nodes, or management nodes without restarting the entire cluster? I wish to understand how this is implemented, or whether there is documentation I can read. I believe only the latest version supports this; I am running NDB 7.0. I am aware that I am able to add nodes online, but it requires me to perform a rolling restart. What other approach can I take to implement this without restarting nodes in my cluster?
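
    For reference, the online add-node procedure documented for MySQL Cluster NDB 7.0 still involves a rolling restart of the existing nodes (there is no way around that in this version). Roughly, and treating the node IDs and table name below as placeholders, it looks like this:

      # 1. add the new [ndbd] sections to config.ini, then rolling-restart the
      #    management and existing data/SQL nodes so they pick up the new config
      # 2. start each new data node empty
      ndbd --initial

      # 3. group the new nodes and redistribute existing data onto them
      ndb_mgm -e "CREATE NODEGROUP 3,4"
      mysql -e "ALTER ONLINE TABLE mydb.mytable REORGANIZE PARTITION;"
      mysql -e "OPTIMIZE TABLE mydb.mytable;"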

  • Deploy to JBoss 7 using Hudson Deploy plugin

    - by Uluk Biy
    I have two machines: one runs Hudson CI and the other JBoss 7 AS. In Hudson, I have installed the Deploy Plugin, created a new job, and filled in the required JBoss manager user connection fields. When I run the job, the project builds successfully; however, the deployment to the remote JBoss AS is never triggered. There are no errors or messages about the deployment in the log. What should I do?
    EDIT: The deployment is configured (or at least expected) as a "Post-build Action" with these parameters:

      [x] Deploy war/ear to a container
          WAR/EAR files             : **/*.war
          Container                 : JBoss 7.x
          Manager user name         : test
          Manager password          : * * * *
          JBoss URL                 : http://192.168.1.2
          JBoss JMX Management port : 9990

    It is not a separate job.
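
    As a way of isolating the problem, it can help to try the same deployment by hand with the JBoss AS 7 command-line interface from the Hudson machine; if that works but the plugin doesn't, the issue is in the plugin configuration. Note that AS 7's native management interface listens on port 9999, while 9990 is the HTTP management console, which is a common point of confusion. A sketch, with paths and credentials as placeholders:

      # run from $JBOSS_HOME/bin on, or copied to, the Hudson machine
      ./jboss-cli.sh --connect --controller=192.168.1.2:9999 \
          --user=test --password=secret \
          --command="deploy /path/to/workspace/target/myapp.war"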

  • Poor PHP performance on a dedicated server

    - by Pierre Espenan
    I just subscribed to a dedicated server offer and am encountering poor PHP execution performance: execution times can be twice what they were on my old shared server! I'm definitely not an expert in server management, so I'm wondering what I missed. Here is some information that may help you understand what's wrong:

      My server (page in French, but easy to understand): http://www.online.net/fr/serveur-dedie/dedibox-sc
      phpinfo() output: http://jsfiddle.net/E8b7W/embedded/result/
      PHP bench script (dedicated server): http://jsfiddle.net/EhXzK/embedded/result/
      PHP bench script (old shared server): http://jsfiddle.net/ANbWt/embedded/result/

    Is it normal to get such poor performance after a kernel update and a basic "apt-get install" of Apache 2 and PHP? Thanks!
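
    A frequent culprit in this situation is simply that the old shared host had an opcode cache and the freshly installed dedicated box doesn't. A quick check and fix, sketched for a Debian/Ubuntu-style system of that era (the package name is an assumption and differs on newer PHP versions, which bundle OPcache):

      # is an opcode cache loaded at all?
      php -m | grep -i -E 'apc|opcache|xcache|eaccelerator'

      # if not, install and enable APC, then restart Apache
      apt-get install php-apc
      service apache2 restart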

  • How to move Linux executables to a ramdisk?

    - by alfa64
    I've made a ramdisk this way:

      mkdir -p /media/ramdisk
      mount -t tmpfs -o size=512M tmpfs /media/ramdisk/

    The reason is that I run a lot of Node.js scripts whose execution time is very small, but I suspect the overhead comes from reloading the node executable from disk and throwing it away on each subsequent run. So I think this might be a way to gain a bit of performance, if not much. How can I move a program like node to the ramdisk and run it from there? The idea is to have a startup script that creates the ramdisk and puts the node files inside it. Note that I'm currently using Fedora 16, for what it's worth. Thanks in advance.
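
    A minimal sketch of such a startup script, assuming node is a single binary on your PATH (if your install has supporting library files, copy those alongside it too):

      #!/bin/sh
      # create the ramdisk and put a copy of the node binary on it
      mkdir -p /media/ramdisk
      mountpoint -q /media/ramdisk || mount -t tmpfs -o size=512M tmpfs /media/ramdisk
      cp "$(command -v node)" /media/ramdisk/

      # make the ramdisk copy win PATH lookups for this shell and its children
      export PATH=/media/ramdisk:$PATH
      node --version

    Whether this actually helps is worth measuring first: after the first run the binary is normally already in the kernel's page cache, so the gain may be close to zero.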

  • RAID 0 performance gains?

    - by NickAldwin
    I'm building a new computer over the summer. I'm fairly competent with computer hardware and am thus building the computer from scratch. I have everything planned out, but I was wondering about RAID. I asked earlier which RAID level I should use, but now that it's pretty clear RAID 1 isn't really that great for my case, I think I'll go with cloud backup instead of disk redundancy. However, I still face a choice: use two 1 TB drives as two separate 1 TB drives, or combine them into a RAID 0 striped array. Is there any performance gain at all? I know that if one drive dies everything is gone, so is the performance gain worth it? I'm building a pretty advanced computer, with SLI video cards and a fast CPU, so I'm thinking RAID 0 would give me some good hard drive performance. In your experience, is RAID 0 viable?

  • Recover lost NTFS partition on SSD

    - by Emil
    Hello. About two months ago I upgraded my Dell Latitude E6500 laptop with a Corsair Force F120 SSD. Everything worked well until about a week ago. I started the computer and was faced with a beep and a message saying "No boot sector on Internal HDD (IRRT). No bootable devices". Since I figured the boot sector had somehow become corrupt, I tried booting from the Windows 7 DVD to repair it, but the Windows 7 installer only found a blank drive with 111 GB of unallocated space. I panicked and brought the drive to work to let a colleague have a look at it. We made a disk image of the entire drive and ran it through TestDisk on Linux. TestDisk did not find any partitions. It appears that the drive has been completely erased... What happened? What could cause this behaviour on an SSD?

  • Linux partitioning problem

    - by Claudiu
    I am using cfdisk to repartition my HDD, since after the OS install I only got one big partition plus swap. I wanted to resize the big partition, create a 1 GB /boot, and use the rest of the space for an extended partition. After running cfdisk, I recheck the partitions with fdisk -l and get this:

      Disk /dev/sda: 320 GB, 320070320640 bytes
      255 heads, 63 sectors/track, 38913 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes

         Device Boot      Start       End     Blocks  Id  System
      /dev/sda3               1     38455  308881755   f  Extended LBA
      Warning: Partition 3 does not end on cylinder boundary.
      /dev/sda2           38455     38698    1951897  82  Linux swap
      /dev/sda1   *       38699     38913  311349654  83  Linux

    My problem is the warning message. I think I know the cause: the Blocks value for sda1. How could that be so big if its Start-End interval is so small?
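
    To see what cfdisk actually wrote, it can help to cross-check the table with tools that report exact sector positions rather than rounded cylinders; a couple of read-only commands (they only print, they don't modify anything):

      # partition boundaries in sectors, straight from the partition table
      sfdisk -l -uS /dev/sda

      # the same information as parted sees it
      parted /dev/sda unit s print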

  • Single/multiple LUNs for VMware VM hosting

    - by Yucong Sun
    I'm building an iSCSI storage system for hosting about 500 VMware VMs running concurrently, and I have a disk array with 15 disks. I only need moderate write performance, but preferably without a single point of failure, so that leaves me with RAID 1 / RAID 10. I have a couple of choices: 1) 3 LUNs, each a 4-disk RAID 10, plus 3 hot spares; 2) 1 LUN as a 14-disk RAID 10, plus 1 hot spare; 3) 7 LUNs, each a 2-disk RAID 1, plus 1 hot spare. Which way is better? Is there a real problem with running 500 VMs on a single LUN? And would it be better to go with 7 LUNs so that each VM is better isolated from the others?

  • Windows Installer using a USB drive for temp storage

    - by Douglas Anderson
    When installing apps that are packaged with Windows Installer, it appears that the installer often uses my external USB hard disk (when it's connected) as the temporary location while it expands and installs the application (it creates a folder with a GUID name off the root of the drive). Is there any way to change this so it always defaults to a specific drive? This appears to be the case on Windows Vista and 7; I'm not sure about previous releases.
    EDIT: My current environment variables look like this:

      TEMP=C:\Users\<me>\AppData\Local\Temp
      TMP=C:\Users\<me>\AppData\Local\Temp

    EDIT: I have a funny suspicion that it's using the drive with the largest amount of free space.

  • PostgreSQL 9.1 Database Replication Between Two Production Environments with Load Balancer

    - by littleK
    I'm investigating different solutions for database replication between two PostgreSQL 9.1 databases. The setup will consist of two production servers in the cloud (Amazon EC2 extra-large instances) behind an Elastic Load Balancer. What is the typical database implementation for this type of setup? Master-master replication (with Bucardo or rubyrep)? Or perhaps only one shared database between the two environments, with shared-disk failover? I've been getting some ideas from http://www.postgresql.org/docs/9.0/static/different-replication-solutions.html. Since I don't have a lot of experience with database replication, I figured I would ask the experts. What would you recommend for the described setup?
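
    For comparison with the master-master options, the configuration burden of PostgreSQL 9.1's built-in streaming replication is small; it gives you one writable primary plus a read-only hot standby, so the load balancer would have to send writes to one node only. A minimal sketch of the relevant settings, with host addresses, user, and password as placeholders:

      # primary: postgresql.conf
      wal_level = hot_standby
      max_wal_senders = 3
      wal_keep_segments = 64

      # primary: pg_hba.conf - allow the standby to connect for replication
      host  replication  replicator  10.0.0.2/32  md5

      # standby: postgresql.conf
      hot_standby = on

      # standby: recovery.conf
      standby_mode = 'on'
      primary_conninfo = 'host=10.0.0.1 user=replicator password=secret'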

  • Corporate Wiki Organization - Technical Documentation

    - by Dave Jarvis
    Corporations have documents describing various aspects of their technical systems, including:

      Custom Applications
      Custom Development Frameworks
      Third Party Applications
          Accounting
          Bug Tracking
          Network Management
      How To Guides
      User Manuals
      Web Browsers
      Software Tools
          Development IDEs
          Graphics
              GIMP
              xv
          Text Editing
          File Transfer
              ncFTP
              WinSCP
      Hardware
          Servers
              Web
              Database
              Exchange
              File
          Network Devices
          Printers
      Drawings

    If you had to use a wiki to manage the documentation, what other items would you add to the list, and how would you organize it? (For example, would Software Tools make more sense under Third Party Applications?) A few constraints: the structure should not go beyond three levels deep; avoid the word "and" in favour of two separate categories; keep the structure general, so it applies as broadly as possible; the target audience is primarily technical, but the wiki could be visible to anyone.

  • Execute Backup-SqlDatabase cmdlet remotely

    - by Maxim V. Pavlov
    When I run the following script line locally on the SQL Server machine, it executes perfectly:

      Backup-SqlDatabase -ServerInstance $serverName -Database $sqldbname -BackupFile "$($backupFolder)$($dbname)_db_$($addinionToName).bak"

    $serverName contains the short name of the SQL Server instance. The SQL Server is 2012, so these new cmdlets work like a charm. On the other hand, when I try to perform a DB backup from a TeamCity agent machine like this (through the Invoke-Command cmdlet):

      function BackupDB([String] $serverName, [String] $sqldbname, [String] $backupFolder, [String] $addinionToName) {
          Import-Module SQLPS -DisableNameChecking
          Backup-SqlDatabase -ServerInstance $serverName -Database $sqldbname -BackupFile "$($backupFolder)$($dbname)_db_$($addinionToName).bak"
      }

      Invoke-Command -ComputerName $SQLComputerName -Credential $credentials -ScriptBlock ${function:BackupDB} -ArgumentList $SQLInstanceName, $DatabaseName, $BackupDirectory, $BakId

    it results in this error:

      Failed to connect to server $serverName.
          + CategoryInfo          : NotSpecified: (:) [Backup-SqlDatabase], ConnectionFailureException
          + FullyQualifiedErrorId : Microsoft.SqlServer.Management.Common.ConnectionFailureException,Microsoft.SqlServer.Management.PowerShell.BackupSqlDatabaseCommand

    What is the correct way to execute the Backup-SqlDatabase cmdlet remotely?

  • Preventing out-of-office storms: Exchange 2010, OWA, and auto-forwarding to a group

    - by Simon McLaren
    In my organization we have a group mailbox for a particular function. The actual function is performed by 15-20 individuals on a rotating basis, and the group mailbox serves as a record of all e-mail sent to that function. Individual access to the mailbox is granted by adding a user to an A/D group. For convenience, the members of the group would prefer not to have to check this group/non-entity mailbox separately, so I want to forward all mail arriving in the group mailbox to that group. So far I am not seeing any consistency in the way an out-of-office response looks, which I would need in order to build an exception to the forwarding rule. We have not turned the forwarding on for the group yet; we are waiting until we are sure this will not be an issue. How do I prevent out-of-office replies sent to the group mailbox from being forwarded to the group? Management of the mailbox is done via OWA. Exchange 2010.
