Search Results

Search found 3463 results on 139 pages for 'physical'.

Page 80 of 139

  • ZFS Configuration advice

    - by rbarrette
    I need some advice on configuring ZFS. Here is what I have. Physical disks: 4x 3 TB, 2x 2 TB, 2x 1 TB. What is the best configuration for my vdevs and storage pool? I want to maximize space but still maintain redundancy. Should I just get two more 3 TB drives and create two raidz2 storage pools of three 3 TB drives each? Or create a single raidz2 vdev from the four 3 TB drives? Can I put redundancy at the pool level, create individual vdevs for each drive, and then add two striped 1 TB + 2 TB vdevs to keep all vdevs the same size? Keep in mind that I do need to migrate data from the smaller drives, and I am planning on adding more 3 TB drives later on. What do you think?
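
    For reference, a minimal sketch of the "single raidz2 vdev from the four 3 TB drives" option; the device names are placeholders, and the zpool add line assumes the extra 3 TB drives mentioned above are bought later:

        # One raidz2 vdev across the four 3 TB drives (placeholder device names)
        zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

        # Later, grow the pool by adding a second raidz2 vdev of new 3 TB drives
        zpool add tank raidz2 /dev/sdf /dev/sdg /dev/sdh /dev/sdi

        # Check layout and redundancy
        zpool status tank

    Note that a raidz2 vdev cannot have disks added to it after creation (at least in ZFS releases of that era); a pool grows by adding whole vdevs, which is why the migrate-then-expand plan matters here.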

    Read the article

  • How to identify who is using Hardware Reserved Memory in Windows 7

    - by blasteralfred
    I run Windows 7 x86 Home Premium. I have 4 GB of installed physical memory, of which 2.96 GB is usable (My Computer Properties). I checked the memory usage using Resource Monitor and found 3036 MB of 4096 MB available. I noticed that 1060 MB is unavailable because it is reserved by some "hardware component(s)". I would like to know which hardware component is using this 1060 MB. Is there any way or tool to identify this? Note: I know that Windows 7 Home Premium x86 supports a maximum of 4 GB of RAM.

    Read the article

  • Remotely Managing Storage on Hyper-V 2012 Core

    - by Vazgen
    I have a Hyper-V Server 2012 core installation that I am managing remotely from a Windows 8 client. I can connect with Hyper-V Manager, Server Manager, and MMC. However, I don't understand how I can manage the physical hard drive (for example, deleting .vhdx files, creating folders, etc.) from my Windows 8 client. I tried to attach the remote share as follows: q: \\MyServer\c$ It said the command completed successfully, but I don't see the drive in Explorer on my client. I can get to it in cmd.exe on the client, but how can I manage it in a GUI? Running explorer q: throws an error:

    Read the article

  • Citrix Xen VMs lose networking

    - by Ash
    My client has a XenServer 6.0.2 installation with two Windows Server 2008 R2 virtual machines. Whenever the virtual machines are rebooted they lose their IP settings (IP address, subnet, gateway). After each reboot I need to log in to each VM via XenCenter and re-apply the required static IP settings. This causes issues with the iSCSI drives connected within each VM - the drives need to be reconnected after every reboot. For example, a network adapter has the following settings pre-reboot:

        Description . . . . . . . . . . . : Citrix PV Ethernet Adapter #0
        Physical Address. . . . . . . . . : C6-FB-A2-4F-2C-F3
        IPv4 Address. . . . . . . . . . . : 10.101.0.101(Preferred)
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Default Gateway . . . . . . . . . : 10.101.0.10
        DNS Servers . . . . . . . . . . . : 10.101.0.100
        NetBIOS over Tcpip. . . . . . . . : Enabled

    Post-reboot:

        Description . . . . . . . . . . . : Citrix PV Ethernet Adapter #0
        Physical Address. . . . . . . . . : C6-FB-A2-4F-2C-F3
        Autoconfiguration IPv4 Address. . : 169.254.153.174(Preferred)
        Subnet Mask . . . . . . . . . . . : 255.255.0.0
        Default Gateway . . . . . . . . . :
        DNS Servers . . . . . . . . . . . : 10.101.0.100
        NetBIOS over Tcpip. . . . . . . . : Enabled

    Under XenCenter -> Virtual Network Interfaces, each adapter is set to a static MAC address (i.e. "Use this MAC address"). I have tried the following commands within one VM, but this had no effect:

        netsh winsock reset catalog
        netsh int ip reset

    Can someone please help?

    Read the article

  • Overheating on Ubuntu 12.04

    - by mati
    I have a Dell Inspiron Q17R with two graphics cards and I noticed that it is overheating. I installed Bumblebee, Jupiter and Flashblock, and I followed this guide as well, but it still got up to 74 °C. Is there anything more I can do? It still doesn't really seem to be working well and the fan keeps spinning really fast. After running sensors in a terminal, this is what I got:

        Adapter: Virtual device
        temp1:          +75.0°C  (crit = +100.0°C)
        temp2:          +75.0°C  (crit = +100.0°C)

        coretemp-isa-0000
        Adapter: ISA adapter
        Physical id 0:  +68.0°C  (high = +86.0°C, crit = +100.0°C)
        Core 0:         +68.0°C  (high = +86.0°C, crit = +100.0°C)
        Core 1:         +65.0°C  (high = +86.0°C, crit = +100.0°C)

    It doesn't look good.
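
    As a follow-up check, a short sketch of commands that help confirm whether the discrete GPU is really being powered down and what else is generating heat; it assumes Bumblebee's bbswitch module is in use:

        # Is the discrete GPU powered off while nothing runs through optirun?
        cat /proc/acpi/bbswitch          # expect a line ending in "OFF"

        # Watch temperatures change over time while the machine is idle
        watch -n 2 sensors

        # powertop highlights processes and devices that keep the CPU busy
        sudo apt-get install powertop
        sudo powertop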

    Read the article

  • Procedure for dual booting (2 copies of Win-7) off 2 partitions on same disk

    - by Sam Holder
    What procedure should I follow to set up a dual boot (both Windows 7 x64) on a machine where, ideally: both operating systems are installed on the same physical disk in different partitions; each operating system, when booted, cannot see the contents of the other OS's partition (this just seems safer); and the other hard drives in the system are visible to both OSes. One copy of Windows 7 is already installed. Is it as simple as shrinking the existing volume, creating a new partition, booting from the installation DVD, formatting the new partition, and installing another copy of Windows onto it? Or will that not work? Are there gotchas?

    Read the article

  • What could be causing LVM errors on first boot after install in Debian?

    - by ianfuture
    Hi, I've installed Debian (lenny) on a machine at home. It was set up during the install to have a /boot partition, with the rest encrypted, LVM on top of that, and all the other partitions inside LVM. After the install completed, on first boot it asked for the password to decrypt (the same password for both drives) and then showed an error saying LVM could not find a physical device with a particular UUID, or something similar. The LVM install spans two hard drives: one 120 GB and one 40 GB. The 120 GB drive is master on its IDE cable and has /boot on it; the 40 GB drive is slave on the other IDE cable. Is there anything that can be done to rescue this install, or to diagnose the problem? It took ages to get installed due to the time spent encrypting the drives and I'd rather not go through that again. :( Thanks, Ian
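
    For diagnosing this, a minimal sketch of the usual checks from a rescue shell or the initramfs prompt; the device names are placeholders and assume both encrypted containers still unlock with the same passphrase:

        # Unlock each encrypted container (device names are placeholders)
        cryptsetup luksOpen /dev/hda2 crypt0
        cryptsetup luksOpen /dev/hdc1 crypt1

        # See which physical volumes LVM can find and compare against the UUID
        # named in the boot-time error
        pvscan
        vgscan
        vgdisplay -v

        # If the volume group turns out to be complete, activate it
        vgchange -ay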

    Read the article

  • Nginx load distribution and multi-domain SSL

    - by Steve Clark
    I'm researching the best methods for two new parts of our infrastructure, hopefully finding a single solution for both. 1) We're currently running a single application server, and we're going to add a second one and load balance between the two. 2) We handle a few thousand domains across the application server(s), and we're looking to support SSL. The best method I've come across so far is using nginx both for load distribution, to spread requests across the application servers, and for its SSL support: if a request uses SSL, nginx accepts it, terminates the SSL, and proxies it to Apache (the app servers). Now, that's all good, but I'm yet to figure out how we can let nginx handle multiple domains using SSL. We're potentially looking at using UCC SSL certs, so we can support 150 domains on a single certificate, with each cert on a single IP. I'm new to all of this (my experience is just with physical load balancers and a single domain on SSL), so any advice would be very much appreciated.

    Read the article

  • VPN server on Windows Server 2008 for a small office

    - by cmbrnt
    I'm going to refurbish the IT infrastructure for a small organization with a single office, and I'm not sure what VPN server to use. In your opinion, would the built-in Windows Server 2008 VPN server suffice, or are there specific problems with it as opposed to, for example, OpenVPN? I'd rather run a Windows-native VPN server, but if there are a few good (preferably free) alternatives, I could install VMware ESXi and virtualize both Windows and an OpenVPN server. By the way, because of a low budget this office runs on only one physical server. Any advice would be great to help me grasp this field, in which I'm quite a novice. Thank you!

    Read the article

  • USB Dongle detected but connect option greyed out

    - by GrimSweeper48
    Hey, I have just converted one of our physical license servers into a VMware virtual machine. It uses a USB key as a license dongle (WIBU-Key) for some software we have. VMware Server detects that the USB key is plugged in (it shows up as "Wibu-box" under devices), but when I click on it all the options are greyed out, including Connect, so the VM can't read it. The VM is running Windows Server 2003. Anyone have any ideas? I've attached an image so you can see more clearly.

    Read the article

  • Install Windows with QEMU

    - by Radium
    I want to understand whether it is possible to install Windows from QEMU onto a physical HDD. I was trying to do that with something like this:

        qemu-system-x86_64 -m 1024 -vnc :1 -hda /dev/sda -cdrom .../Windows.iso

    The installation completed successfully, but when I tried to boot the disk natively I got a blue screen telling me, in effect, that the hardware had changed, so go away. I guess the problem is not QEMU's CPU emulation or anything like that, but a change in the HDD's ID, which Windows keeps in its registry. Am I right? If so, how do I work around this? Maybe I need to prepare Windows before the reboot? I have successfully installed FreeBSD via QEMU and thought Windows would go the same way...

    Read the article

  • Win7 Professional x64 16GB (4.99GB usable)

    - by Killrawr
    I've installed Corsair Vengeance CMZ16GX3M2A1600C10 memory (2x 8 GB, DDR3-1600, PC3-12800, CL10, DIMM). My BIOS reports 16 GB, Windows says there is 16 GB, and CPU-Z says there is 16 GB, but Windows also says only 4.99 GB of the 16 GB is usable. The motherboard is a P55-GD65 (MS-7583), which supports four unbuffered DIMMs of 1.5 V DDR3 1066/1333/1600*/2000*/2133* (OC) DRAM, 16 GB max. The system properties screenshot shows "System type: 64-bit OS", and Microsoft says the physical memory limit on 64-bit Windows 7 Professional is 192 GB. (Screenshots attached: Windows system properties, CPU-Z, dxdiag, and two BIOS screens.) Why is my OS limiting me to just over a quarter of the available memory? Is there any way to increase it?

    Read the article

  • Default Program With Multiple Versions Installed

    - by Optimal Solutions
    I have multiple versions of Excel installed: Excel 2010, 2007 and 2003. They are installed on one hard drive, with Windows 7 Ultimate as the OS. When I double-click an XLS file, Excel 2007 opens; I would like Excel 2010 to open. I read and followed the instructions to go to Control Panel\All Control Panel Items\Default Programs and set the default programs, and I changed the default to the actual Excel 2010 EXE in the folder where it is installed. When I double-click XLS files, Excel 2007 still opens. I then tried changing the default to Excel 2003 just to see if that would take, and it still opens Excel 2007. What am I missing? I would really like the file extension to open Excel 2010, but I cannot seem to make that happen.

    Read the article

  • How to access vm inside a vm via VNC?

    - by can.
    For various reasons I installed a virtual machine inside a virtual machine, like this: A(B(C)), where A is the physical machine, B is a VM whose network type is NAT, and C is a VM running inside B whose network type is bridged. The OSes are Ubuntu 12.04 and the hypervisors are KVM. I can access B from A via VNC and via SSH, but I can't use SSH for C because C has no IP address at startup, so I assume I can only access C via VNC. I tried something like this (on A):

        iptables -t nat -A PREROUTING -d $ip-of-A -p tcp --dport 6500 -j DNAT --to-destination $ip-of-B:5900

    (I referred to this.) But it doesn't work. I'm reading the iptables man pages and hope someone can help :)
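
    For comparison, a minimal sketch of a complete port-forward on A, with placeholder addresses; it assumes C's VNC display is actually served by the kvm process running on B (so the target is B's 5900+N port) and that forwarding is permitted on A:

        # Placeholders - adjust to your addresses and to C's VNC display number
        A_IP=192.168.1.10        # the address clients use to reach A
        B_IP=192.168.122.50      # B's address on A's NAT network
        VNC_PORT=5901            # 5900 + display number of C's VNC server on B

        sysctl -w net.ipv4.ip_forward=1

        # Redirect connections hitting A:6500 to B's VNC port
        iptables -t nat -A PREROUTING -d "$A_IP" -p tcp --dport 6500 \
            -j DNAT --to-destination "$B_IP:$VNC_PORT"

        # Allow the forwarded traffic and give replies a valid return path
        iptables -A FORWARD -d "$B_IP" -p tcp --dport "$VNC_PORT" -j ACCEPT
        iptables -t nat -A POSTROUTING -d "$B_IP" -p tcp --dport "$VNC_PORT" -j MASQUERADE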

    Read the article

  • Do memory cards have any max file size limitation?

    - by Dmitriy R
    I am not sure where to ask this question, so perhaps it is a physical limitation. I have an 8 GB microSD flash memory card. When I copy any file of up to a few gigabytes, the copy works normally, but if I try to copy a file over 4 GB, the system tells me there is insufficient memory on the card, although 8 GB is available. So is only a 32-bit value used to store a file's size on a microSD card, or is my card defective?

    Read the article

  • Enlarge partition on SD card

    - by chenwj
    I have followed Cloning an SD card onto a larger SD card to clone a 2 GB SD card to a 32 GB SD card; the file system is ext4. However, on the 32 GB SD card I can only see 2 GB of available space. Is there a way to use the rest of it? Here is the output of fdisk:

        Command (m for help): p

        Disk /dev/sdb: 32.0 GB, 32026656768 bytes
        64 heads, 32 sectors/track, 30543 cylinders, total 62552064 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e015a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *           32      147455       73712    c  W95 FAT32 (LBA)
        /dev/sdb2           147456     3994623     1923584   83  Linux

    I want to make /dev/sdb2 use up the remaining space. I tried resize2fs /dev/sdb after the dd, but got the message below:

        $ sudo resize2fs /dev/sdb
        resize2fs 1.42 (29-Nov-2011)
        resize2fs: Bad magic number in super-block while trying to open /dev/sdb
        Couldn't find valid filesystem superblock.

    Any idea on what I am doing wrong? Thanks.
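
    As a side note, resize2fs operates on a filesystem, so it has to be pointed at the partition (/dev/sdb2 here) rather than the whole disk, and the partition itself has to be enlarged first. A minimal sketch, assuming /dev/sdb2 stays the last partition and keeps its start sector of 147456:

        # 1) Grow the partition: in fdisk, delete partition 2 and recreate it
        #    with the SAME start sector (147456 above) and the default, i.e.
        #    maximum, end sector, then write the table with 'w'.
        sudo fdisk /dev/sdb

        # 2) Make the kernel re-read the partition table (or re-plug the card)
        sudo partprobe /dev/sdb

        # 3) Check the filesystem, then grow it to fill the enlarged partition
        sudo e2fsck -f /dev/sdb2
        sudo resize2fs /dev/sdb2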

    Read the article

  • Is it possible to put only the boot partition on a usb stick?

    - by Steve V.
    I've been looking at system encryption with Arch Linux and I think I have it pretty much figured out, but I have a question about the /boot partition. Once the system is booted up, is it possible to unmount the /boot partition and allow the system to continue to run? My thought was to install /boot to a USB stick, since it can't be left encrypted, and then boot from the USB stick, which would bring up the encrypted hard disk. Then I could take the USB key out and just use the system as normal. The reason I want to do this is that if an attacker got physical access to the machine, they could modify the /boot partition with a keystroke logger and steal the key; if they already had a copy of the encrypted data, they could just sit back and wait for it. I guess I could come up with a system for verifying that /boot has been untouched at each startup. Has this been done before? Any guidance for implementing it on my own?
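
    For the verification idea in the last few sentences, a minimal sketch of my own: record checksums of /boot once, keep the list on the encrypted root, and compare at every startup (the paths and the startup hook are up to you):

        # One-time: record checksums of everything under /boot, stored on the
        # encrypted root so an attacker who alters /boot cannot also fix the list
        find /boot -type f -exec sha256sum {} + | sort > /root/boot.sha256

        # At each startup (e.g. from a systemd unit or rc.local): re-check
        sha256sum --quiet -c /root/boot.sha256 || echo "WARNING: /boot has changed"

    Of course, if a tampered kernel or bootloader has already run by the time the check executes, the check itself can be lied to, so this raises the bar rather than closing the hole completely.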

    Read the article

  • Preventing apps from accessing info from the wifi device?

    - by heaosax
    Browsers like Chrome and Firefox can use my wifi device to get information about the surrounding APs and pinpoint my physical location using Google Location Services. I know these browsers always ask for permission to do this, and that these features can also be turned off, but I was wondering if there is a better way to prevent ANY application from accessing this information from my wifi device. I don't like anyone on the internet knowing where I live, and I am worried some software could do the same as these browsers but without asking for permission. I am using Ubuntu 10.04.

    Read the article

  • Load Balancing and High Availability for Web Site

    - by nzgirl
    We're developing a database-driven website (roughly 70%/30% read/write load) using C#.NET, IIS and MS SQL Server 2008, to be hosted on Windows Server 2008. For contractual reasons our setup has to be hosted on our own physical/virtual servers rather than a cloud solution, at least at this stage. Could someone outline, or link to, some best practices that would provide both high availability (the priority at the moment) and eventually load balancing for our site? We're probably looking at starting with some sort of two-server mirrored SQL setup and two IIS web servers. Thanks in advance.

    Read the article

  • Creating bootable Fedora USB with persistent storage

    - by dooffas
    I am attempting to burn the full Fedora 19 x86_64 DVD ISO to a USB drive and have a separate partition on it for a kickstart file and other media to be installed during the kickstart process. With the Ubuntu Server 12 ISO you can simply dd the ISO to the USB drive:

        dd if=/path/to/iso of=/dev/sdb

    Once the ISO has been written, you open GParted and create an ext2 partition in the remaining free space. However, this does not seem to work with the Fedora ISO. When loading the USB drive in GParted I get a warning and an error:

        Warning: The driver descriptor says the physical block size is 2048 bytes, but Linux says it is 512 bytes.
        Error: The partition's data region doesn't occupy the entire partition.

    Ignoring both of these allows GParted to load the USB drive; however, it shows a blank drive with no partition table. Has anyone come across this before? From what I have found, it may have something to do with the fact that Fedora uses isohybrid.

    Read the article

  • unable to format external drive to HFS+ for Mac

    - by dtlussier
    I have an external hard drive (1 TB Western Digital) that is currently formatted as FAT32, and I want to reformat it as a single HFS+ partition for my Mac. I realize that I can read FAT32 from my Mac, but I want HFS+ because it has other features, such as permissions, that I'd like to have. I have tried using Disk Utility to format the drive as I have done in the past, but when I go through the process it fails with an error stating that it is unable to reformat the drive to HFS+. What might be the reasons for this? Are there any diagnostics I could run on the physical disk to check whether it is healthy?
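
    For troubleshooting, the command-line diskutil often gives a more specific error than the Disk Utility GUI; a minimal sketch, assuming the external drive turns out to be /dev/disk2 (confirm with diskutil list first, since erasing the wrong disk is unrecoverable):

        # Identify the external drive and check its current partition map
        diskutil list
        diskutil verifyDisk disk2

        # Attempt the erase from the command line: journaled HFS+ on a GPT map
        diskutil eraseDisk JHFS+ "External" GPT disk2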

    Read the article

  • Device that connects to a switch via RJ45, that emulates a PC

    - by Mike Christiansen
    One of my co-workers once saw a device that plugged into an RJ45 jack and emulated a PC. It could be configured with an IP address and would respond to pings. I was wondering if anyone knows about these, or even what they are called? It will be used to simulate a PC in a classroom environment. Thanks in advance. Edit: This is a CCNA classroom; we are looking to simulate a PC connected to an Ethernet port on a router, with the simulated PCs on different subnets, etc. This might be doable with a VM through VLANs and virtual switches, but then we are getting away from configuring the physical ports on the router the way we want to.

    Read the article

  • Is it possible to change a user's home directory permissions in OS X?

    - by Sosiska
    Most of our staff use OS X as their main operating system. The problem is that we were recently attacked with some odd malware: users get a zip file via email, and when they open it they execute a binary keylogger that is inside the zip file (one click is enough). We have some non-technical limitations, and because of them we can't configure the users' mail servers, but we do have physical access to their laptops. As far as I know, it is possible to mount a user's home directory without the "x" (execute) permission in Linux and *BSD, so users can't run a binary inside their home directory. Is it possible to configure OS X so that users can't execute files inside /Users/?

    Read the article

  • Pixus MP990 Rejecting US Ink Cartridges

    - by QRohlf
    I recently acquired a Canon Pixus MP990 printer. It is from Japan (hence "Pixus" rather than "Pixma"). It has been working well; however, I had to change an ink cartridge today, and every time I attempt to do so I get a U140 "ink tank cannot be recognized" error. These are genuine Canon ink cartridges purchased from Canon's USA website. Is a region issue causing the rejection, and if so, is there a way to change the region of the printer to the US? Is there anything else I can do? (I have made sure the physical connection is not the problem - I've tried two different ink cartridges in their respective slots and still get the same error.)

    Read the article

  • After moving our servers to a virtual environment using VMware, SQL timeouts appeared - why?

    - by RayofCommand
    We moved our servers to a virtual cloud (VMware) in which only our servers run, but as soon as we finished migrating everything we started fighting SQL timeouts and machine slowdowns we can't explain, even though we roughly doubled the servers' capacity when switching from physical to virtual. I googled and found that we are not alone: people complain about poor performance after moving to a cloud managed by VMware. Are there any known issues? Sometimes our services can't access a disk, or SQL hits a timeout, and we have no idea why.

    Read the article
