Search Results

Search found 5287 results on 212 pages for 'physical computing'.

Page 139/212

  • What could be causing LVM errors on first boot after install in Debian?

    - by ianfuture
    Hi, I've installed Debian (Lenny) on a machine at home. During the install I set up a /boot partition, encrypted the rest, put LVM on top of that, and created all the other partitions inside LVM. After the install completed, the first boot asked for the passphrase to decrypt (the same passphrase for both drives) and then showed an error saying LVM could not find a physical device with a particular UUID, or something similar. The LVM setup spans two hard drives: one 120 GB and one 40 GB. The 120 GB drive is master on its IDE cable and holds /boot; the 40 GB drive is slave on the other IDE cable. Is there anything that can be done to rescue this install, or at least diagnose the problem? It took ages to install because of the time spent encrypting the drives, and I'd rather not go through that again. :( Thanks, Ian
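
    A minimal diagnostic sketch for this situation, assuming you can boot the Debian installer in rescue mode (or any live CD with cryptsetup and LVM); the device names below are assumptions and need adjusting to the actual layout:

        # Unlock both encrypted containers first
        cryptsetup luksOpen /dev/hda2 crypt_hda2
        cryptsetup luksOpen /dev/hdb1 crypt_hdb1

        # Then see what LVM can find
        pvscan                # lists physical volumes and their UUIDs
        vgscan                # searches for volume groups
        vgchange -ay          # activates any volume groups found
        lvs                   # shows logical volumes if activation worked

    If pvscan reports only one of the two physical volumes, the missing one was probably not unlocked or not detected at boot, which would explain the missing-UUID error without the data itself being lost.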

    Read the article

  • ZFS Configuration advice

    - by rbarrette
    I need some advice on configuring ZFS. Here is what I have: physical disks: 4x 3 TB, 2x 2 TB, 2x 1 TB. What is the best configuration for my vdevs and storage pool? I want to maximize space but still maintain redundancy. Should I just get two more 3 TB drives and create two raidz2 pools of three 3 TB drives each? Or create a single raidz2 vdev of four 3 TB drives? Or can I put the redundancy at the pool level, create an individual vdev for each drive, and then add two striped 1 TB + 2 TB vdevs so that all vdevs are the same size? Keep in mind I do need to migrate data from the smaller drives, and I am planning on adding more 3 TB drives later on. What do you think?
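
    For reference, a minimal sketch of one of the layouts mentioned above - a single raidz2 vdev of the four 3 TB disks - with an assumed pool name and placeholder device IDs:

        # Four-disk raidz2: usable capacity of two disks, any two disks may fail
        zpool create tank raidz2 \
            /dev/disk/by-id/ata-3TB-1 /dev/disk/by-id/ata-3TB-2 \
            /dev/disk/by-id/ata-3TB-3 /dev/disk/by-id/ata-3TB-4

        zpool status tank     # confirm the layout before migrating data

    Note that a raidz2 vdev cannot be widened after creation; 3 TB drives added later would form a new vdev in the same pool.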

    Read the article

  • Citrix Xen VMs lose networking

    - by Ash
    My client has a XenServer 6.0.2 installation with two Windows Server 2008 R2 virtual machines. Whenever the virtual machines are rebooted they lose their IP settings (IP address, subnet mask, gateway). After each reboot I need to log in to each VM via XenCenter and re-apply the required static IP settings. This causes issues with the iSCSI drives connected within each VM, which need to be reconnected after every reboot. For example, a network adapter has the following settings pre-reboot:

        Description . . . . . . . . . . . : Citrix PV Ethernet Adapter #0
        Physical Address. . . . . . . . . : C6-FB-A2-4F-2C-F3
        IPv4 Address. . . . . . . . . . . : 10.101.0.101(Preferred)
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Default Gateway . . . . . . . . . : 10.101.0.10
        DNS Servers . . . . . . . . . . . : 10.101.0.100
        NetBIOS over Tcpip. . . . . . . . : Enabled

    Post-reboot:

        Description . . . . . . . . . . . : Citrix PV Ethernet Adapter #0
        Physical Address. . . . . . . . . : C6-FB-A2-4F-2C-F3
        Autoconfiguration IPv4 Address. . : 169.254.153.174(Preferred)
        Subnet Mask . . . . . . . . . . . : 255.255.0.0
        Default Gateway . . . . . . . . . :
        DNS Servers . . . . . . . . . . . : 10.101.0.100
        NetBIOS over Tcpip. . . . . . . . : Enabled

    Under XenCenter -> Virtual Network Interfaces, each adapter is set to a static MAC address (i.e. "Use this MAC address"). I have tried the following commands within one VM, but they had no effect:

        netsh winsock reset catalog
        netsh int ip reset

    Can someone please help?

    Read the article

  • AD password not synchronising properly

    - by Kaczmar
    I have 600+ users in AD, but only one causes me trouble. I can reset his password from AD and he can then log in to his machine. After that he changes his password from Windows 7, and the change completes without errors. But once he logs out or locks the workstation, he cannot get back in using either the old or the new password, so I have to reset the password again, and he can only use the one I provide for him. All our machines are in the same physical location, on the same subnet. The domain functional level is 2003. I'm totally out of ideas. I could create a new user account for him, but I'd like to know what causes this. I suspect some sort of synchronisation problem, but other accounts work fine, and I don't know how to dig deeper into this. Thanks, Piotr

    Read the article

  • Procedure for dual booting (2 copies of Win-7) off 2 partitions on same disk

    - by Sam Holder
    What procedure should I follow to set up a dual boot (both Windows 7 x64) on a machine where, ideally: both operating systems are installed on the same physical disk, in different partitions; when booted into either operating system, the contents of the other OS's partition are not visible (this just seems safer); and other hard drives in the system are visible to both OSes. One copy of Windows 7 is already installed. Is it as simple as shrinking the existing volume to create the new partition, then booting off the CD, formatting the new partition, and installing another copy of Windows onto it? Or will that not work? Are there gotchas?

    Read the article

  • How to identify what is using hardware-reserved memory in Windows 7

    - by blasteralfred
    I run Windows 7 x86 Home Premium. I have 4 GB of installed physical memory, of which 2.96 GB is usable (per Computer Properties). I checked memory usage with Resource Monitor and found 3036 MB of 4096 MB listed, and noticed that 1060 MB is unavailable because it is reserved by some "hardware component(s)". I would like to know which hardware component is using this 1060 MB. Is there any way or tool to identify it? Note: I know that Windows 7 Home Premium x86 supports a maximum of 4 GB of RAM.

    Read the article

  • Win7 Professional x64 16GB (4.99GB usable)

    - by Killrawr
    I've installed Corsair Vengeance CMZ16GX3M2A1600C10 memory (2x 8 GB, DDR3-1600, PC3-12800, CL10, DIMM). The BIOS picks up that there is 16 GB, Windows says there is 16 GB, and CPU-Z says there is 16 GB, but Windows reports that only 4.99 GB of the 16 GB is usable. The motherboard is an MSI P55-GD65 (MS-7583), which supports four unbuffered 1.5 V DDR3 DIMMs at 1066/1333/1600*/2000*/2133* (OC), 16 GB max. A screenshot of the Windows System properties (not included here) shows System type: 64-bit Operating System, and Microsoft says the physical memory limit for 64-bit Windows 7 Professional is 192 GB. Screenshots of CPU-Z, dxdiag, the Run command, and two BIOS screens are also omitted here. Why is my OS limiting me to just over a quarter of the installed memory, and is there any way to increase it?

    Read the article

  • VPN server on Windows Server 2008 for a small office

    - by cmbrnt
    I'm going to refurbish the IT infrastructure for a small organization with a single office, and I'm not sure what VPN server to use. In your opinion, would the built-in Windows Server 2008 VPN server suffice, or are there specific problems with it as opposed to, for example, OpenVPN? I'd rather run a Windows-native VPN server, but if there are good (preferably free) alternatives, I could install VMware ESXi and virtualize both Windows and an OpenVPN server. By the way, because of a low budget this office runs on only one physical server. Any advice would be great; I'm quite a novice in this area. Thank you!

    Read the article

  • Install Windows with QEMU

    - by Radium
    I want to understand whether it is possible to install Windows from QEMU onto a physical HDD. I tried to do it with something like this: qemu-system-x86_64 -m 1024 -vnc :1 -hda /dev/sda -cdrom .../Windows.iso. The installation completed successfully, but when I tried to boot the disk natively I got a blue screen telling me, in effect, that the hardware had changed. I guess the problem is not QEMU's CPU emulation or anything like that, but rather that the HDD's ID, which is stored in the Windows registry, changes between the emulated and the real hardware. Am I right? If so, how do I work around it? Maybe I need to prepare Windows before the reboot? I have successfully installed FreeBSD via QEMU and assumed Windows would go the same way...
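
    A hedged sketch of one common workaround, assuming the guess about the disk/controller change is right: boot the freshly installed disk under QEMU once more and generalize it with sysprep, so Windows re-detects its storage hardware on the next (native) boot. Only the /dev/sda path comes from the question; the rest is an assumption:

        # Boot the already-installed disk under QEMU again, without the CD
        qemu-system-x86_64 -m 1024 -vnc :1 -hda /dev/sda

        # Inside the Windows guest, run:
        #   C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
        # then power the VM off and boot the machine from the disk natively.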

    Read the article

  • Enlarge partition on SD card

    - by chenwj
    I followed "Cloning an SD card onto a larger SD card" to clone a 2 GB SD card onto a 32 GB SD card; the file system is ext4. However, on the 32 GB card I can only see 2 GB of available space. Is there a way to use all of it? Here is the output of fdisk:

        Command (m for help): p

        Disk /dev/sdb: 32.0 GB, 32026656768 bytes
        64 heads, 32 sectors/track, 30543 cylinders, total 62552064 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e015a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          32      147455       73712    c  W95 FAT32 (LBA)
        /dev/sdb2          147456     3994623     1923584   83  Linux

    I want to make /dev/sdb2 use up the remaining space. I tried resize2fs /dev/sdb after the dd, but got the message below:

        $ sudo resize2fs /dev/sdb
        resize2fs 1.42 (29-Nov-2011)
        resize2fs: Bad magic number in super-block while trying to open /dev/sdb
        Couldn't find valid filesystem superblock.

    Any idea what I am doing wrong? Thanks.
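
    A sketch of one way to grow the second partition, assuming /dev/sdb2 really is the ext4 filesystem (resize2fs works on a partition, not the whole disk, which is what the superblock error above is complaining about). Back the card up first; rewriting the partition table by hand is unforgiving:

        # Re-create partition 2 with the same start sector (147456 above) but a
        # larger end. In fdisk: d, 2 (delete), then n, p, 2, 147456, <Enter>
        # (accept the default last sector), then w (write the table).
        fdisk /dev/sdb

        e2fsck -f /dev/sdb2       # check the filesystem first
        resize2fs /dev/sdb2       # grow it to fill the enlarged partition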

    Read the article

  • Do memory cards have any max file size limitation?

    - by Dmitriy R
    I am not sure where to ask this question, so perhaps it is a physical limitation. I have an 8 GB microSD flash memory card. When I copy any file of up to a few gigabytes, copying works normally, but if I try to copy a file over 4 GB, the system tells me there is insufficient space on the card, even though 8 GB is available. Is it that only a 32-bit value is used to store a file's size on the card, or is my microSD defective?
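
    For context, most cards of this size ship formatted as FAT32, whose maximum file size is 4 GiB minus one byte regardless of free space, so this behaviour is expected rather than a defect. A hedged sketch of checking, and optionally reformatting (which erases the card), assuming a Linux machine where the card appears as /dev/sdX1:

        lsblk -f /dev/sdX1            # show the filesystem type on the card

        # Reformatting destroys everything on the card - back it up first.
        # exFAT (from exfat-utils) or ext4 both allow files larger than 4 GiB:
        mkfs.exfat /dev/sdX1
        # mkfs.ext4 /dev/sdX1         # alternative, less portable to cameras/phones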

    Read the article

  • USB Dongle detected but connect option greyed out

    - by GrimSweeper48
    Hey, I have just converted one of our physical license servers into a VMware server. It uses a USB key as a license dongle (WIBU-Key) for some software we have. VMware Server detects that the USB key is plugged in (it shows up as "Wibu-box" under devices), but when I click on it all the options are greyed out, including Connect, so the VM can't read it. The VM is running Windows Server 2003. Anyone have any ideas? (I had attached an image for clarity, but it is not included here.)

    Read the article

  • Default Program With Multiple Versions Installed

    - by Optimal Solutions
    I have multiple versions of Excel installed: Excel 2010, 2007, and 2003, all on one hard drive with Windows 7 Ultimate as the OS. When I double-click an XLS file, Excel 2007 opens, but I would like Excel 2010 to open. I read and followed the instructions to go to "Control Panel\All Control Panel Items\Default Programs" and set the default program, and I changed the default to the actual EXE for Excel 2010 in the folder where it is installed. When I double-click XLS files, Excel 2007 still opens. I then tried changing the default to Excel 2003, just to see if that would take effect, and it still opens Excel 2007. What am I missing? I would really like the file extension to open Excel 2010, but I cannot seem to make that happen.

    Read the article

  • How to access a VM inside a VM via VNC?

    - by can.
    For various reasons I installed virtual machines inside a virtual machine, like this: A(B(C)), where A is the physical machine, B is a VM whose network type is NAT, and C is a VM inside B whose network type is bridged. The OSes are Ubuntu 12.04 and the hypervisors are KVM. I can access B via VNC and via SSH from A, but I can't use SSH for C because C has no IP address at first, so I assume I can only access C via VNC. I tried something like this on A: iptables -t nat -A PREROUTING -d $ip-of-A -p tcp --dport 6500 -j DNAT --to-destination $ip-of-B:5900 (I referred to this). But it doesn't work. I'm reading the iptables man pages and hope someone can help :)
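
    A minimal sketch of the pieces a DNAT setup like this usually needs on A, assuming C's VNC console is served by the KVM process running on B (display :0, port 5900) and keeping the placeholder addresses from the question:

        # 1. Let A forward packets at all
        sysctl -w net.ipv4.ip_forward=1

        # 2. Rewrite the destination of connections arriving at A:6500
        iptables -t nat -A PREROUTING -d <ip-of-A> -p tcp --dport 6500 \
                 -j DNAT --to-destination <ip-of-B>:5900

        # 3. Make sure the FORWARD chain lets the rewritten traffic through
        iptables -A FORWARD -p tcp -d <ip-of-B> --dport 5900 -j ACCEPT

        # 4. Masquerade so B's replies go back through A rather than directly
        iptables -t nat -A POSTROUTING -p tcp -d <ip-of-B> --dport 5900 -j MASQUERADE

    Note that connections made from A itself do not traverse PREROUTING; test from another machine, or add a matching rule in the nat OUTPUT chain.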

    Read the article

  • Nginx load distribution and multi-domain SSL

    - by Steve Clark
    I'm researching the best methods for two new parts of our infrastructure, hopefully finding a single solution for both. 1) We're currently running a single application server, and we're going to add a second one and load balance between the two. 2) We handle a few thousand domains across the application server(s), and we're looking to support SSL. The best method I've come across so far is using nginx both for load distribution, to pass requests to the application servers, and for its SSL support: if a request uses SSL, nginx accepts it, terminates SSL, and proxies it to Apache (the app servers). That's all good, but I've yet to figure out how we can let nginx handle multiple domains over SSL. We're potentially looking at using UCC SSL certs, so we can support 150 domains on a single certificate, with each cert on a single IP. I'm new to all of this (my experience is just with physical load balancers and a single domain on SSL), so any advice would be very much appreciated.
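
    One check that is often useful here: whether the clients involved support SNI, which lets a single nginx listener on one IP serve a different certificate per hostname - the UCC route is mostly needed for clients that don't. A small sketch with openssl, using placeholder hostnames:

        # Request the certificate while sending an SNI hostname
        openssl s_client -connect lb.example.com:443 -servername www.example.com </dev/null \
            | openssl x509 -noout -subject

        # Repeat with a different -servername value; if the subject changes,
        # the endpoint is already serving per-hostname certificates via SNI.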

    Read the article

  • Is it possible to put only the boot partition on a USB stick?

    - by Steve V.
    I've been looking at full-system encryption with Arch Linux and I think I have it pretty much figured out, but I have a question about the /boot partition. Once the system is booted, is it possible to unmount the /boot partition and let the system continue to run? My thought was to install /boot on a USB stick, since it can't be encrypted, and boot from the USB stick, which would then unlock the encrypted hard disk. Afterwards I could take the USB key out and just use the system as normal. The reason I want to do this is that if an attacker got physical access to the machine, they could plant a keystroke logger in the /boot partition and steal the passphrase, and if they already had a copy of the encrypted data they could just sit back and wait for it. I guess I could instead come up with a way of verifying at each startup that /boot has been untouched (see the sketch below). Has this been done before? Any guidance for implementing it on my own?
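
    On the verification idea, a minimal sketch of checking /boot against a stored checksum manifest at each startup; where the manifest lives (here, on the encrypted root) and how tamper-proof that is are assumptions to think through:

        # One-time, while /boot is known-good: record checksums of every file
        find /boot -type f -exec sha256sum {} + > /root/boot.manifest

        # At each startup: any modified or missing file is reported
        sha256sum --quiet -c /root/boot.manifest \
            || echo "WARNING: /boot has changed since the manifest was made"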

    Read the article

  • Preventing apps from accessing info from the wifi device?

    - by heaosax
    Browsers like Chrome and Firefox can use my wifi device to gather information about the surrounding APs and pinpoint my physical location using Google Location Services. I know these browsers always ask for permission to do this, and that the feature can also be turned off, but I was wondering whether there's a better way to prevent ANY application from accessing this information from my wifi device. I don't like anyone on the internet knowing where I live, and I am worried some software could do the same as these browsers without asking for permission. I am using Ubuntu 10.04.

    Read the article

  • How do I profile memory usage in my project?

    - by Gacek
    Are there any good, free tools for profiling memory usage in C#? Details: I have a visualization project that uses quite large collections, and I would like to check which parts of the project - the data-processing side or the visualization side - use most of the memory, so I can optimize it. I know that computing the size of a collection is quite simple and I can do it on my own, but there are also certain elements whose memory usage I cannot estimate so easily. The memory usage is quite big; for example, when processing a 35 MB file my program uses a little over 250 MB of RAM.

    Read the article

  • Creating bootable Fedora USB with persistent storage

    - by dooffas
    I am attempting to write the full Fedora 19 x86_64 DVD ISO to a USB drive and have a separate partition on it for a kickstart file and other media that will be installed during the kickstart process. With the Ubuntu Server 12 ISO you can simply dd the ISO to the USB drive:

        dd if=/path/to/iso of=/dev/sdb

    Once the ISO has been written, you open gparted and create an ext2 partition in the unallocated space. However, this does not seem to work with the Fedora ISO. When loading the USB drive in gparted I get a warning and an error:

        Warning: The driver descriptor says the physical block size is 2048 bytes, but Linux says it is 512 bytes.
        Error: The partition's data region doesn't occupy the entire partition.

    Ignoring both of these allows gparted to load the USB drive, but it shows a blank drive with no partition table. Has anyone come across this before? From what I have found, it may have something to do with the fact that Fedora uses isohybrid.
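
    A hedged sketch of one way around the gparted problem: the isohybrid image confuses gparted's probing, but fdisk will usually still read the MBR partition table the image carries, so an extra partition can be appended with fdisk and formatted directly. The device name, the free space after the image, and the new partition number are all assumptions; check the table before writing it:

        fdisk -l /dev/sdb        # inspect the table the isohybrid image created

        # Append a new primary partition in the remaining free space.
        # In fdisk: n (new), p (primary), accept the defaults, then w (write).
        fdisk /dev/sdb

        partprobe /dev/sdb                 # make the kernel re-read the table
        mkfs.ext2 -L KICKSTART /dev/sdb3   # adjust the number fdisk assigned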

    Read the article

  • Make user uploads go to a different hard drive?

    - by Andrew Fashion
    I am using a pre-made social networking script where all user uploads go to site.com/public/user/. How can I make /public/user/ live on my secondary hard drive, so that all user uploads land on the second drive and not the primary one? I have over 100 GB of images and I want them on the other HDD now. Thank you. I am running CentOS 5.5 64-bit with Apache and PHP, and I have two 250 GB SATA HDDs.

        sudo parted /dev/sda print

        Model: ATA WDC WD2500KS-00M (scsi)
        Disk /dev/sda: 250GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type      File system  Flags
         1      32.3kB  107MB   107MB   primary   ext3         boot
         2      107MB   8595MB  8488MB  primary   linux-swap
         3      8595MB  10.7GB  2147MB  primary   ext3
         4      10.7GB  250GB   239GB   extended
         5      10.7GB  250GB   239GB   logical   ext3

        Information: Don't forget to update /etc/fstab, if necessary.
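
    A minimal sketch of moving the uploads and mounting the second drive at the upload path, assuming the second disk is /dev/sdb with a usable ext3 partition at /dev/sdb1 and a web root of /var/www/site.com (all assumptions; substitute the real devices and paths):

        # Copy the existing uploads onto the second drive
        mkdir -p /mnt/uploads
        mount /dev/sdb1 /mnt/uploads
        rsync -a /var/www/site.com/public/user/ /mnt/uploads/

        # Re-mount the second drive directly over the upload directory
        umount /mnt/uploads
        mount /dev/sdb1 /var/www/site.com/public/user

        # Make the mount permanent across reboots
        echo "/dev/sdb1  /var/www/site.com/public/user  ext3  defaults  0 2" >> /etc/fstab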

    Read the article

  • Pixus MP990 Rejecting US Ink Cartridges

    - by QRohlf
    I recently acquired a Canon Pixus MP990 printer. It is from Japan (hence "Pixus" rather than "Pixma"). It has been working well; however, I had to change an ink cartridge today, and every time I attempt to do so I get a u140 "ink tank cannot be recognized" error. These are genuine Canon ink cartridges purchased from Canon's USA website. Is a region issue causing the rejection, and if so, is there a way to change the region of the printer to the US? Is there anything else I can do? (I have made sure the physical connection is not the problem - I've tried two different ink cartridges in their respective slots and still get the same error.)

    Read the article

  • Windows Load Balancing Services and File Shares

    - by cbkadel
    We are using Windows Load Balancing Services (WLBS). One thing I notice is that if I create a file share on one of the physical hosts, I am able to browse to that share using the clustered IP address. This might be an 'opinion' question, but I haven't been able to find much literature on file shares with WLBS in particular. Is this a recommended configuration? Are there any limitations? What about when the share contains different sets of content on the two hosts? For instance, with three 'hostnames' - host1 (physical1), host2 (physical2), and cluster - I create the following shares: \\physical1\myshare and \\physical2\myshare, and what I notice is that I can also see \\cluster\myshare. I'm guessing that this is read-only and that there's no file synchronization, but if the two are in fact out of sync, what would a network browser see? Thanks for your time!

    Read the article

  • Trouble loading an ISO

    - by crocyson
    I've created a boot disk and ISO image with Paragon. I boot up the virtual PC with the disk in and get to my recovery page, but when I am supposed to select an ISO image, the virtual machine doesn't see any of the physical hard drives on my host PC. I've also tried copying the files into my datastore, but they do not show up. I've tried the option during setup to start with the ISO, but again I am not able to browse to the external hard drive I stored the ISO on. VMware acknowledges the drive and says it is connected, but I can't browse to it. Am I doing something wrong? I created the backup disk and ISO image on the external drive with Paragon.

    Read the article

  • Why are mainframes still around?

    - by ThaDon
    It's a question you've probably asked or been asked several times: what's so great about mainframes? The answer you've probably been given is "they are fast" and "normal computers can't process as many transactions per second as they do". Jeez, it's not as if Google is running a bunch of mainframes, and look how many transactions per second they handle! The real question here is "why?". When I ask the mainframe devs I know, they can't answer; they simply restate "it's fast". With the advent of cloud computing, I can't imagine mainframes being able to compete cost-wise or mindshare-wise (aren't all the COBOL devs going to retire at some point, or will offshore just pick up the slack?). And yet I know a few companies that still pump out net-new COBOL/mainframe apps, even for things we could do easily in, say, .NET or Java. Does anyone have a really good answer as to why "the mainframe is faster", or can you point me to some good articles on the topic?

    Read the article

  • What to use to wait on an indeterminate number of tasks?

    - by Scott Chamberlain
    I am still fairly new to parallel computing, so I am not sure which tool to use for the job. I have a System.Threading.Tasks.Task that needs to wait for n other tasks to finish before starting. The tricky part is that some of its dependencies may start after this task starts (you are guaranteed never to hit zero dependent tasks until they are all done). Here is roughly what happens:

        1. The parent thread creates somewhere between 1 and (NUMBER_OF_CPU_CORES - 1) worker tasks.
        2. The parent thread creates a task to be run when all of the worker tasks are finished.
        3. The parent thread creates a monitoring thread.
        4. The monitoring thread may kill a worker task or spawn a new one depending on load.

    I can figure out everything up to step 4. How do I get the task from step 2 to wait until any new worker tasks created in step 4 also finish?

    Read the article
