Search Results

Search found 20049 results on 802 pages for 'virtual drive'.


  • Why is Apache ignoring VirtualHost directive for first name in hosts file?

    - by Peter Taylor
    Standard pre-emptive disclaimer: host names, IP addresses, and directories are anonymised.

    Problem

    We have a server with Apache 2.2 (WAMP) listening on one IP and IIS listening on another. An ASP.Net application running under IIS needs to do some simple GETs from the PHP applications running under Apache to build a unified search results page. This is a virtual server, so the internal IPs are mapped somehow to external ones. The internal DNS system doesn't resolve the publicly published names under which the applications are accessed externally, so the obvious solution was to add them to etc/hosts with the internal IP address:

        127.0.0.1  localhost
        # 10.0.1.17 is the IP address Apache listens on
        10.0.1.17  phpappone.example.com
        10.0.1.17  phpapptwo.example.com

    After restarting Apache, phpappone.example.com stopped working. Instead of returning pages from that app, Apache was returning pages from the default site. The other PHP apps worked fine.

    Relevant configuration

    httpd.conf, summarised, says:

        ServerAdmin [email protected]
        ServerRoot "c:/server/Apache2"
        ServerName www.example.com
        Listen 10.0.1.17:80
        Listen 10.0.1.17:443

        # Not obviously related config options elided
        # Nothing obviously non-standard
        # If you want more details, post a comment

        DocumentRoot "c:/server/Apache2/htdocs"

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
        </Directory>

        # Fallback for unknown host names
        <Directory "c:/server/Apache2/htdocs">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        # PHP apps common config
        <Directory "C:/Inetpub/wwwroot/phpapps">
            Options FollowSymLinks -Indexes +ExecCGI
            AllowOverride All
            Order Allow,Deny
            Allow from All
        </Directory>

        # Virtual hosts
        NameVirtualHost 10.0.1.17:80
        NameVirtualHost 10.0.1.17:443

        <VirtualHost _default_:80>
        </VirtualHost>

        <VirtualHost _default_:443>
            SSLEngine On
            SSLCertificateFile "certs/example.crt"
            SSLCertificateKeyFile "certs/example.key"
        </VirtualHost>

        Include conf/vhosts/*.conf

    and the vhosts files are e.g.

        <VirtualHost 10.0.1.17:80>
            ServerName phpappone.example.com
            DocumentRoot "c:/Inetpub/wwwroot/phpapps/phpappone"
        </VirtualHost>

        <VirtualHost 10.0.1.17:443>
            ServerName phpappone.example.com
            DocumentRoot "c:/Inetpub/wwwroot/phpapps/phpappone"
            SSLEngine On
            SSLCertificateFile "certs/example.crt"
            SSLCertificateKeyFile "certs/example.key"
        </VirtualHost>

    Buggy behaviour or our misunderstanding?

    The documentation for name-based virtual hosts says that

        Now when a request arrives, the server will first check if it is using an IP address that matches the NameVirtualHost. If it is, then it will look at each <VirtualHost> section with a matching IP address and try to find one where the ServerName or ServerAlias matches the requested hostname. If it finds one, then it uses the configuration for that server. If no matching virtual host is found, then the first listed virtual host that matches the IP address will be used.

    Yet that isn't what we observe. It seems that if the hostname is the first hostname listed against the IP address in etc/hosts, then Apache uses the configuration from the main server and skips the virtual host lookup.

    Workarounds

    The workaround we've put in place for the time being is to add a fake line to the hosts file:

        127.0.0.1  localhost
        # 10.0.1.17 is the IP address Apache listens on
        10.0.1.17  fakename.example.com
        10.0.1.17  phpappone.example.com
        10.0.1.17  phpapptwo.example.com

    This fixes the problem, but it's not very elegant. In addition, it seems a bit brittle: reordering lines in the hosts file (or deleting the nonsense value) can break it. The other obvious workaround is to make the main server configuration match that of the troublesome virtual host, but that is equally brittle. A third option, which is just ugly, would be to change the ASP.Net code to take separate config items for the IP address and the hostname and to implement HTTP manually. Ugh.

    The question

    Is there a good solution to this problem which localises any "Do not touch this!" explanations to the Apache config files?
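    One quick way to see which virtual host Apache itself treats as the default for 10.0.1.17, and to rule out client-side name resolution, is to dump the parsed vhost table and then request the app with an explicit Host header. A minimal sketch, assuming the WAMP build's httpd.exe (and curl) are on the PATH:

        # Dump the parsed virtual host table; the entry flagged as the default
        # server is what Apache falls back to for unrecognised hostnames.
        httpd -t -D DUMP_VHOSTS

        # Request the app with an explicit Host header, bypassing hosts-file lookups.
        curl -H "Host: phpappone.example.com" http://10.0.1.17/

    If the second command returns the right site, the vhost matching is working and the problem lies in how the requesting side resolves the name.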

    Read the article

  • Announcing StorageTek VSM 6 and VLE Capacity Increase

    - by uwes
    Announcing increased capacity on StorageTek Virtual Storage Manager System 6 (VSM 6) and StorageTek Virtual Library Extension (VLE)! StorageTek VSM 6 and the StorageTek VLE make data management simple for the mainframe data center - simple to deploy, simple to manage, and simple to scale. With this announcement, capacity scaling increases by 33% for StorageTek VSM 6 and by 21% for StorageTek VLE. This significant capacity increase can provide greater consolidation potential for multiple VSM 4/5s into a single VSM 6. In addition to the StorageTek VSM 6 and VLE capacity increases, we are announcing End of Life (EOL) for previous-generation StorageTek VSM 6 and VLE part numbers. Please read the Sales Bulletin on Oracle HW TRC for more details. (If you are not registered on Oracle HW TRC, click here ... and follow the instructions.) For more information go to: Oracle.com Tape Page, Oracle Technology Network Tape Page.

    Read the article

  • Strategy for using snapshots to back up Ubuntu Linux server?

    - by MountainX
    I need some backup advice for my home file server. Here are the mount points, volume groups, logical volumes and used/total space of all the volumes on my Ubuntu 8.10 home file server:

        /         vgA/lvRoot                   [7.5G/50G]
        /tmp      vgB/lvTmp                    [195M/30G]
        /var      vgB/lvVar                    [780M/30G]
        swap      vgB/lvSwap                   [16.00 GB]
        /media1   vgC/lvMedia1                 [400G/975G]
        /media2   vgC/lvMedia2                 [75G/295G]
        /boot     partition (no volume group)  [95M/200M]
        /video    partition (no volume group)  [450G/950G]
        /backups  vgD/lvBackupTarget           [800G/925G]
        /home     vgE/lvHome                   [85G/200G]

    I have just added a 2.0 TB external USB drive that I would like to use to back up everything. (It will be a close fit to get it all on one 2.0 TB drive. I actually have a 2nd external USB drive if needed.) I'd like to back up "/", /var, /media1, /media2 and /home. I'll deal with /boot and /video separately since they are not logical volumes. For all the logical volumes I'm anticipating taking snapshots and then copying those snapshots to the 2.0 TB external USB drive. I have never done a task like that before. If I do that, I could use the tutorial I found here: http://www.howtoforge.com/linux_lvm_snapshots

    My questions are:

    1. What is the best overall strategy? Is it LVM snapshots, as I'm assuming?
    2. How should I prepare, subdivide and mount the 2.0 TB external USB drive?
       2.a. Should I create one or more regular partitions, or should I create a physical volume with one or more logical volumes?
       2.b. Would it be advisable to exactly mirror the source pv/lv layout on the external drive, and if so, is this a good strategy?
    3. What's the best way to get the snapshots onto the external drive? dd?

    Even though this is a strategy question, feedback with actual commands is appreciated. I need step-by-step cookbook-style help because I don't do much server admin work. (Background: This is a home file server that I have rarely had to touch in about 2 years. It has done its job without much intervention. The really old PC that I used to back everything up recently failed, so I'm replacing that with the external USB drive(s) and I'd like to upgrade my backup strategy at the same time. Previously, I just copied stuff from /backups over to the other computer, and that would not have made things very easy in a real restore situation. The /backups mount point contains backup copies of "most" of the important data on a file-by-file basis, but it does not contain copies of /boot, etc. BTW, the actual internal HDD that holds /backups is separate from the other storage devices.)

    EDIT: I'll propose a strategy... The idea came from a comment here: LVM mirroring VS RAID1. "LVM mirrors are for replication of a logical volume to a different physical volume. It's essentially meant to 'move the data to a different disk'. The mirror is then broken..." That would fit my requirements well. Here is an ideal situation:

    1. establish the LV mirror on the external drive
    2. break the link with the mirror
    3. create a (persistent) snapshot on the mirror
    4. after a week, resync the mirror with the original source and update the mirror
    5. break the link and create another snapshot on the mirror

    Obviously, the mirror will be like a weekly full backup, and the snapshots on the mirror will represent earlier points in time. If this would work and if it would be time efficient, it would give a nice full & differential type backup on the external drive based on LVM. I have not heard of a strategy like this before. Will it work? Could it be scripted? Thoughts?

    EDIT 2: Creating Portable DiskSafes With LoopbackFS And LVM Snapshots. This article seems intriguing: http://www.howtoforge.com/creating-portable-disksafes-with-loopbackfs-and-lvm-snapshots Unfortunately, I don't understand exactly how to map those ideas to the strategy I'm proposing above. I'm going to ask this last bit as a separate question. I will leave my original question in place because I still desire feedback on the overall best strategy. At this moment I'm assuming it is LVM mirroring in the style of "Creating Portable DiskSafes with LVM Snapshots", but that might be wrong.
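    For question 3, a minimal sketch of what one volume's snapshot-and-copy run might look like, using /home as the example (the 5G snapshot size and the /mnt/usbbackup mount point for the external drive are assumptions, not taken from the question):

        # Reserve copy-on-write space to hold changes made while the copy runs.
        sudo lvcreate --snapshot --size 5G --name lvHome-snap /dev/vgE/lvHome

        # Mount the frozen view read-only and copy it to the external drive.
        sudo mkdir -p /mnt/snap
        sudo mount -o ro /dev/vgE/lvHome-snap /mnt/snap
        sudo rsync -aHAX --delete /mnt/snap/ /mnt/usbbackup/home/

        # Drop the snapshot as soon as the copy finishes so it cannot fill up.
        sudo umount /mnt/snap
        sudo lvremove -f /dev/vgE/lvHome-snap

    The same pattern, repeated per logical volume, is straightforward to wrap in a shell script and run from cron.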

    Read the article

  • How to "un-automount" external harddrive?

    - by Timon
    So I dualboot 12.10 and Win7. Both OSs are on the primary SSD while all commonly used data (documents, movies, music, profiles etc) is on a secondary NTFS-formatted HDD. Since I needed the NTFS drive to automatically mount in Ubuntu right at startup, I downloaded ntfs-config and set it to automount my NTFS drive. Problem is, I also accidentally told it to automount my external hard drive (which is also NTFS formatted). When booting up Ubuntu, it now checks for the presence of that drive every single time, which is getting annoying 'cause I don't always have it connected. I've tried un- and reinstalling ntfs-config, telling it to not automount the external HD, but to no avail. Any suggestions?
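    If ntfs-config wrote an entry for the external disk into /etc/fstab, removing that line or marking it optional should stop the boot-time check. A minimal sketch of how to find it and what a safer entry might look like (the UUID and mount point below are placeholders, not taken from the question):

        # Show the lines ntfs-config added for NTFS volumes.
        grep ntfs /etc/fstab

        # Example of an entry that no longer blocks booting:
        #   UUID=0123456789ABCDEF  /media/external  ntfs-3g  noauto,nofail  0  0
        # "noauto" stops the automatic mount at boot; "nofail" tells the boot
        # process not to wait for (or complain about) a drive that is absent.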

    Read the article

  • Ubuntu Server 12.04 LTS on Hyper-V 2012

    - by user137533
    I have the following scenario: a Hyper-V 2012 Server Core installation. On top of this I created a virtual machine on which I tried installing Ubuntu Server 12.04, which should not have any compatibility issues according to what Microsoft and Ubuntu are saying (although it is not officially supported). I start and run the installation and everything is OK: no problems detecting the network device or the hard drive (unlike Debian, which didn't even detect the hard drive). Once the installation is complete it asks me to reboot, unmounts the "DVD drive" and reboots. When it tries to start again I get the following error: "Boot failure. Reboot and Select proper Boot device or Insert Boot Media in the selected Boot device." It seems not to be booting from the virtual hard drive. The hard drive is set up in SCSI mode, with nothing mounted on the IDE controller (no ISO image or anything else). Does anyone have any ideas on what I can do to solve this?
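    One thing worth checking from the install media's rescue mode is whether GRUB actually ended up on the disk the VM boots from; a minimal sketch (the /dev/sda device name is an assumption for the virtual disk):

        # Boot the Ubuntu Server media, choose "Rescue a broken system",
        # open a shell in the installed root filesystem, then:
        grub-install /dev/sda
        update-grub

        # Afterwards, confirm the first sector now contains a boot loader:
        dd if=/dev/sda bs=512 count=1 2>/dev/null | file -

    If GRUB is present and the VM still will not boot, the next thing to look at is which controller the virtual firmware is willing to boot from.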

    Read the article

  • Windows 7 won't read from NAS on LAN

    - by Alfy
    I've got a Linkstation NAS drive on a local network. Having just got a new laptop with Windows 7 Home Professional, I can no longer read anything off the drive. I've tried accessing the drive using \\192.168.1.55\share, using FTP programs such as WinSCP and FileZilla, and even using Firefox to hit ftp://192.168.1.55. The really annoying thing is that through these methods I can see the files on the drive, ruling out any kind of connection issue. I can navigate through the NAS file system, but as soon as I try to copy a file off the NAS, things just stop working. Accessing the drive through a Windows XP machine works fine. So far I've tried:

    - Disabling firewalls
    - Adding the LmCompatibilityLevel key to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
    - Using the 40-56 bit encryption instead of the 128 bit

    Has anyone got any suggestions of what I can check or try? This is driving me crazy and I'm totally out of ideas.

    Read the article

  • How to control admission policy in vmware HA?

    - by John
    Simple question: I have 3 hosts running 4.1 Essentials Plus with VMware HA. I tried to create several virtual machines that filled 90% of each server's memory capacity. I know that VMware has really sophisticated memory management within virtual machines, but I do not understand how vCenter can allow me even to power on virtual machines that exceed the critical memory level at which a host failover can still be handled. Is it because the virtual machines are not actually using the memory, so it is still considered free and the virtual machines can be powered on? But what would happen if all the VMs were really using their RAM before a host failure - they could not be migrated to the other hosts after the failure. The default behaviour in XenServer is that it automatically calculates the maximum memory level that can be used within the cluster so that a host failure is still protected against. Does VMware do the same thing? The admission policy is enabled. VMware HA is enabled.

    Read the article

  • RAID1 rebuild fails due to disk errors

    - by overlord_tm
    Quick info: Dell R410 with 2x500GB drives in RAID1 on an H700 adapter.

    Recently one of the drives in the RAID1 array on the server failed; let's call it Drive 0. The RAID controller marked it as faulty and put it offline. I replaced the faulty disk with a new one (same series and manufacturer, just bigger) and configured the new disk as a hot spare. A rebuild from Drive 1 started immediately, and after 1.5 hours I got a message that Drive 1 had failed. The server was unresponsive (kernel panic) and required a reboot. Given that half an hour before this error the rebuild was at about 40%, I estimated that the new drive was not yet in sync and tried to reboot with just Drive 1. The RAID controller complained a bit about missing RAID arrays, but it found a foreign RAID array on Drive 1 and I imported it. The server booted and it runs (from the degraded RAID).

    Here is the SMART data for the disks. Drive 0 (the one that failed first):

        ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
          1 Raw_Read_Error_Rate     POSR-K  200   200   051    -    1
          3 Spin_Up_Time            POS--K  142   142   021    -    3866
          4 Start_Stop_Count        -O--CK  100   100   000    -    12
          5 Reallocated_Sector_Ct   PO--CK  200   200   140    -    0
          7 Seek_Error_Rate         -OSR-K  200   200   000    -    0
          9 Power_On_Hours          -O--CK  086   086   000    -    10432
         10 Spin_Retry_Count        -O--CK  100   253   000    -    0
         11 Calibration_Retry_Count -O--CK  100   253   000    -    0
         12 Power_Cycle_Count       -O--CK  100   100   000    -    11
        192 Power-Off_Retract_Count -O--CK  200   200   000    -    10
        193 Load_Cycle_Count        -O--CK  200   200   000    -    1
        194 Temperature_Celsius     -O---K  112   106   000    -    31
        196 Reallocated_Event_Count -O--CK  200   200   000    -    0
        197 Current_Pending_Sector  -O--CK  200   200   000    -    0
        198 Offline_Uncorrectable   ----CK  200   200   000    -    0
        199 UDMA_CRC_Error_Count    -O--CK  200   200   000    -    0
        200 Multi_Zone_Error_Rate   ---R--  200   198   000    -    3

    And Drive 1 (the drive which the controller reported as healthy until the rebuild was attempted):

        ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
          1 Raw_Read_Error_Rate     POSR-K  200   200   051    -    35
          3 Spin_Up_Time            POS--K  143   143   021    -    3841
          4 Start_Stop_Count        -O--CK  100   100   000    -    12
          5 Reallocated_Sector_Ct   PO--CK  200   200   140    -    0
          7 Seek_Error_Rate         -OSR-K  200   200   000    -    0
          9 Power_On_Hours          -O--CK  086   086   000    -    10455
         10 Spin_Retry_Count        -O--CK  100   253   000    -    0
         11 Calibration_Retry_Count -O--CK  100   253   000    -    0
         12 Power_Cycle_Count       -O--CK  100   100   000    -    11
        192 Power-Off_Retract_Count -O--CK  200   200   000    -    10
        193 Load_Cycle_Count        -O--CK  200   200   000    -    1
        194 Temperature_Celsius     -O---K  114   105   000    -    29
        196 Reallocated_Event_Count -O--CK  200   200   000    -    0
        197 Current_Pending_Sector  -O--CK  200   200   000    -    3
        198 Offline_Uncorrectable   ----CK  100   253   000    -    0
        199 UDMA_CRC_Error_Count    -O--CK  200   200   000    -    0
        200 Multi_Zone_Error_Rate   ---R--  100   253   000    -    0

    In the extended SMART error logs I found that Drive 0 has only one error:

        Error 1 [0] occurred at disk power-on lifetime: 10282 hours (428 days + 10 hours)
          When the command that caused the error occurred, the device was active or idle.
          After command completion occurred, registers were:
          ER -- ST COUNT  LBA_48  LH LM LL DV DC
          -- -- -- == -- == == == -- -- -- -- --
          10 -- 51 00 18 00 00 00 6a 24 20 40 00  Error: IDNF at LBA = 0x006a2420 = 6956064
          Commands leading to the command that caused the error were:
          CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time   Command/Feature_Name
          -- == -- == -- == == == -- -- -- -- --  ---------------   --------------------
          61 00 60 00 f8 00 00 00 6a 24 20 40 00  17d+20:25:18.105  WRITE FPDMA QUEUED
          61 00 18 00 60 00 00 00 6a 24 00 40 00  17d+20:25:18.105  WRITE FPDMA QUEUED
          61 00 80 00 58 00 00 00 6a 23 80 40 00  17d+20:25:18.105  WRITE FPDMA QUEUED
          61 00 68 00 50 00 00 00 6a 23 18 40 00  17d+20:25:18.105  WRITE FPDMA QUEUED
          61 00 10 00 10 00 00 00 6a 23 00 40 00  17d+20:25:18.104  WRITE FPDMA QUEUED

    But Drive 1 has 883 errors. I can see only the last few, and all the errors I can see look like this:

        Error 883 [18] occurred at disk power-on lifetime: 10454 hours (435 days + 14 hours)
          When the command that caused the error occurred, the device was active or idle.
          After command completion occurred, registers were:
          ER -- ST COUNT  LBA_48  LH LM LL DV DC
          -- -- -- == -- == == == -- -- -- -- --
          01 -- 51 00 80 00 00 39 97 19 c2 40 00  Error: AMNF at LBA = 0x399719c2 = 966203842
          Commands leading to the command that caused the error were:
          CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time   Command/Feature_Name
          -- == -- == -- == == == -- -- -- -- --  ---------------   --------------------
          60 00 80 00 00 00 00 39 97 19 80 40 00  1d+00:25:57.802   READ FPDMA QUEUED
          2f 00 00 00 01 00 00 00 00 00 10 40 00  1d+00:25:57.779   READ LOG EXT
          60 00 80 00 00 00 00 39 97 19 80 40 00  1d+00:25:55.704   READ FPDMA QUEUED
          2f 00 00 00 01 00 00 00 00 00 10 40 00  1d+00:25:55.681   READ LOG EXT
          60 00 80 00 00 00 00 39 97 19 80 40 00  1d+00:25:53.606   READ FPDMA QUEUED

    Given those errors, is there any way I can rebuild the RAID, or should I make a backup, shut down the server, replace the disks with new ones and restore? What about dd-ing the faulty disk to the new one from a Linux system running from USB/CD? Also, if anyone has more experience here: what could be the cause of those errors? A flaky controller, or the disks? The disks are about a year old, and it seems pretty unbelievable to me that both would die within such a short time span.
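    On the dd idea: GNU ddrescue copes much better with a disk that throws read errors than plain dd, because it copies the readable areas first, retries the bad spots later, and keeps a map file so the run can be resumed. A minimal sketch, assuming the old disk appears as /dev/sdb and the new one as /dev/sdc in the live environment (the device names are placeholders - triple-check them before running anything):

        # Optional: run a long self-test first to gauge how bad the source disk is.
        sudo smartctl -t long /dev/sdb

        # First pass: copy everything that reads cleanly, logging progress to a map file.
        sudo ddrescue -f -n /dev/sdb /dev/sdc rescue.map

        # Second pass: go back and retry the areas that failed, a few times each.
        sudo ddrescue -f -r3 /dev/sdb /dev/sdc rescue.map

    Whether the H700 will accept such a cloned disk back into the array is a separate question for the controller, so treat this as a data-preservation step rather than a repair.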

    Read the article

  • How do I refresh Disk Utility?

    - by detly
    I do a lot of live system building, which eventually involves imaging a USB drive with the built binary image: dd if=binary.img of=/dev/sdX sync ...where /dev/sdX is a USB drive. As part of my workflow, I like to have Ubuntu's Disk Utility open so I can verify the drive letter and unmount anything that gets mounted automatically. I also use it to create extra partitions for persistence. The trouble is, after writing the image to the device — and even after the sync operation — Disk Utility doesn't show the new partition. It just shows free space. GParted sees it and fdisk sees it. Even after closing and opening Disk Utility, it still shows only free space. If I click "Safe Removal" and physically unplug and replug the USB drive, Disk Utility will then see the partition. Why do I need to remove and re-insert the drive for Disk Utility to see the partitions on it? Can I force Disk Utility to update its information without needing to do this? (using Disk Utility 3.0.2 under Ubuntu 11.10.)
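    A couple of commands can ask the kernel (and, through udev, the desktop layer) to re-read the partition table without unplugging anything; a minimal sketch, using /dev/sdX as in the question:

        # Ask the kernel to re-read the partition table on that device.
        sudo partprobe /dev/sdX

        # Equivalent request via the block layer.
        sudo blockdev --rereadpt /dev/sdX

        # Replay the udev events for block devices so udisks/Disk Utility notices.
        sudo udevadm trigger --subsystem-match=block

    Whether Disk Utility 3.0.2 refreshes its view from these events without being reopened is not guaranteed, but the kernel-level rescan is the part that removing and re-inserting the drive currently does for you.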

    Read the article

  • Do I need to rebuild the array after putting in a new hot spare?

    - by Shade34321
    My experience with RAID is minimal, so I figured I'd come and ask here. We have a 16-drive RAID system with 15 drives in RAID 5 and a hot spare left over. Recently one of the drives in the RAID was giving errors, so I cloned it over to the hot spare and put a new drive in its spot. I made the new drive the hot spare, as I was told to. I was also told to rebuild the array after putting in the new drive as a hot spare, so I tried and wasn't able to. So my questions are: do I need to rebuild it, and if so, why did it tell me I couldn't? Thanks! UPDATE: I've come back in to work and looked at the RAID, and it has pulled the hot spare into the array and kicked out another drive.

    Read the article

  • VS2010 SP1 Installed NAD! (and another useful utility)

    - by TATWORTH
    I have installed VS2010 SP1 on my home development PC and No Abnormalities were Detected. I downloaded the ISO image of the service pack and mounted it using Virtual Clone Drive - download page at http://www.slysoft.com/en/virtual-clonedrive.html. It is usually unwise to install VS2010 like this - normally you should copy all the files to a temporary directory on your hard disk - however it worked with the service pack image mounted as a virtual DVD! So far I have successfully recompiled 2 solutions - more about that later.

    Read the article

  • "Disk boot failure" error after installing Windows 7 on SSD

    - by Tony_Henrich
    I have a system with 3 SATA drives which runs fine. I got a new SSD and wanted to install a fresh Windows 7 on it, so I removed the boot drive and replaced it with the SSD. I installed Windows and, when it was done, rebooted - and now I get the "Disk boot failure. Insert system disk and press enter" error message. I reinstalled and still got the same message. I removed the SSD and put back the original drive, and I got the same message!! I checked the BIOS and things look good. Something is wrong. Two questions: 1. Why isn't the new Windows installation booting from the SSD? 2. Why isn't the machine booting with the previous working configuration anymore, after removing the SSD? I did have that drive connected during the second Windows installation, but it was on the last SATA connector. Would the Windows installer mess with its MBR sector?

    Read the article

  • Good Free Backup Tool - with provisos

    - by vaccano
    I have seen some backup questions around, but they are not quite what I am looking for. I would like a backup of my entire hard drive (to an external drive). I would like it to be the kind that takes a base backup and then just backs up the changes since the last backup. I would like it to give a fully restorable image of my hard drive (not just key files). Lastly, I would like it to be free (or super cheap). (The above requirements are important, but I will have to drop them if they up the price, as my boss will not pay for them.) I have a 250 GB solid-state drive backing up to a 1 TB external hard drive, using Windows XP.

    Read the article

  • usb vs firewire for connecting two RAID 0 disks

    - by Arne
    I have a 2TB and a 4TB RAID 0 external drives (both have two physical hard drives in them). Both have a FW800, FW400, and USB port. My MacBook Pro has one FW800 port and two USB ports. I want to copy data from the 4TB drive to the 2TB drive. Is it better to A - connect both directly to the laptop, one with USB and one with FW800 or B - connect the 4TB drive to laptop with FW800 and the 2TB drive to the 4TB drive using a FW400 cable? Anyone have problems daisy-chaining RAID 0 disks using FW? Thanks!

    Read the article

  • BI Applications 7.9.6.3 and EBS 12.1.3 Vision: Integrated Demo Environments

    - by Mike.Hallett(at)Oracle-BI&EPM
    If you need a combined BI-Applications + eBusiness Suite Applications demonstration environment, or for proof of concept work for your customers, then these versions of images on Oracle Virtual Box are now available for partners to download and use. To get access to these images, Partners must be OPN members, specialised in OBI or BI-Apps.

    This is an integrated Demo/Test Drive/POC/Self Enablement environment including two separate images (in English) representing the entire Oracle Stack – Applications, Middleware, Database, Operating System and Virtual Machine.

    Minimum hardware requirements:
    - Each image run separately: 4GB RAM
    - Both images run concurrently: 8GB RAM, dual CPU, 64 bit OS

    BI Applications 7.9.6.3 - Linux based and running on Oracle Virtual Box and compatible with OVM. Image content:
    - BI Application Analytics demo data extracted from EBS 12.1.3 Vision for Financials and HR using EBS 12.1.3 Vision (image supplied)
    - Built integration to EBS 12.1.3 Vision image (provided)
    - Fully functional BI Applications 7.9.6.3 software install and configuration
    - Image can be connected to load any data from any other compatible source system
    - BI Apps demo data is based on OOTB EBS Vision 12.1.3
    - Configured to run BI Apps data load for all other modules of EBS 12.1.3 Vision
    - Includes OBIEE Sample demonstration content
    - Documented scripts for running presentations, demonstrations and Test Drives
    Image Size: 34GB zip, 84GB unzip. Min Hardware 4GB RAM.

    EBS Vision 12.1.3 - Linux based and running on Oracle Virtual Box and compatible with OVM. Image content:
    - eBusiness Suite (EBS) Applications Vision 12.1.3
    - Standard Vision instance with all given setups, configurations and data
    - Source system for BI Apps 7.9.6.3
    Image Size: 76GB zip, 300GB unzip. Min Hardware 4GB RAM.

    Distribution: The Virtual Box images are posted on an external FTP server @ BI Applications 7.9.6.3 and EBS12.1.3. To download, Partners need to request the current password to access the images. To request the current ftp.oracle.com password and the password required to unzip the images, please email Marek Winiarski.

    Support Contact = Marek Winiarski: Oracle Partner Solution Consultant

    Read the article

  • How to rescan and remount drives on Ubuntu Hardy or Jaunty?

    - by pts
    When I connect a USB drive to an Ubuntu Hardy or Jaunty system, the system mounts the partitions found on the drive and opens a Nautilus window for each mounted partition. Within Nautilus, I am able to unmount partitions. What I need is a command or action which forces the system to rescan the available drives and partitions, and automount each unmounted partition, including those which I've manually unmounted from Nautilus. sudo /etc/init.d/udev restart (or ... reload) doesn't do this. As of now, I just unplug the USB drive and connect it again, which forces a scan and a mount on that drive. But I want to force the rescan and remount without unplugging anything, preferably without the user having to know device or drive names.
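    A sketch of the kind of event replay that can nudge the automounter, assuming a Jaunty-era udev that ships udevadm (on Hardy the older udevtrigger binary plays the same role); whether GNOME will remount a partition it watched you unmount by hand is not guaranteed:

        # Re-emit "add" events for all block devices so the desktop
        # automount machinery gets another chance to act on them.
        sudo udevadm trigger --subsystem-match=block --action=add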

    Read the article

  • Ubuntu 12.10 + Windows 7 - No option to install alongside windows 7

    - by user1828314
    I have a 64-bit Windows 7 OS installed at the moment. I used GParted to shrink the current Windows 7 partition on my 720GB HDD to 200GB, and then made a new 200GB NTFS partition which I will keep for later as a shared drive between Windows 7 and Ubuntu. So in GParted I now have the 3 partitions which were all automatically there from the Windows 7 installation (I only shrank the 3rd one from the 698GB or so that it was to 200GB) plus the 200GB I created for the shared drive. I first tried creating another 200GB partition at this stage to install Ubuntu to, but when I burnt the DVD and loaded it, Ubuntu gave me no option to install alongside Windows, only the option to erase the entire disk and install Ubuntu on the blank drive... not what I want to do. So I tried installing it by clicking 'Something else'; it downloaded all the install files but didn't install. I then had a lot of problems getting the DVD drive to work and whatnot, but now I have this fixed so I can use Windows again. So now I've used GParted to delete the extra partitions, and again I'm left with the 3 partitions which Windows 7 automatically installs and a 200GB NTFS partition I will later use as a shared drive. Booting from the Ubuntu disc again, there is still no option to install alongside Windows 7. How do I get it to do so? All I would like is Windows 7 and Ubuntu in a dual boot, with a 200GB NTFS partition to dump my work onto so that I can access it from both OSs. Thanks.
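    One common reason the "install alongside" option disappears in a layout like this is the MBR limit of four primary partitions: three partitions from the Windows 7 installation plus the new shared NTFS partition leaves no free slot for the installer to create an Ubuntu partition in. A quick way to check from the Ubuntu live session (assuming the disk is /dev/sda):

        # Count the existing partitions and note their types.
        sudo fdisk -l /dev/sda

        # parted reports the partition table type (msdos vs gpt) explicitly.
        sudo parted /dev/sda print

    If the table is msdos and four primary partitions already exist, one of them has to go (or be recreated as a logical partition inside an extended one) before the installer can offer anything beyond "erase disk" or manual partitioning.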

    Read the article

  • The Hot-Add Memory Hogs

    - by Andrew Clarke
    One of the more difficult tasks, when virtualizing a server, is to determine the amount of memory that the hypervisor should assign to the virtual machine. This requires accurate monitoring and, because of the consequences of setting the value too low, there is a great temptation to err on the side of over-provisioning. This results in fewer guest VMs and, in fact, with more accurate memory provisioning, many virtual environments could support 30% more VMs. In order to achieve a better consolidation (aka VM density) ratio, Windows Server 2008 R2 SP1 has introduced what Microsoft calls ‘Dynamic Memory’. This means that the start-up RAM assigned to guest virtual machines can be allowed to vary according to demand, changing dynamically while the VM is running, based on the workload of applications running inside. If demand outstrips supply, then memory can be rationed according to the ‘memory weight’ assigned to the guest VM. By this mechanism, memory becomes a shared resource that can be reallocated automatically as demand patterns vary. Unlike VMware’s Memory Overcommit technology, the sum of all the memory allocations to each virtual machine will not exceed the total memory of the host computer. This is fine for applications that are self-regulating in their demands for memory, releasing memory back into the 'pool' when not under peak load. Other applications, however, such as SQL Server Standard and Enterprise, are by nature memory hogs under high workload; they can grab hot-add memory whilst running under load and then never release it. This requires more careful setting-up, and the SQLOS team have provided some guidelines for configuring SQL Server in virtual environments. Whereas VMware’s Memory Overcommit is well proven in a number of different configurations, Hyper-V’s ‘Dynamic Memory’ is new. So far, the indications are that it will improve the business case for virtualizing, and it is probably a far more intuitive technology for the average IT professional to grasp. It is certainly worth testing to see whether it works for you.

    Read the article

  • More Great Improvements to the Windows Azure Management Portal

    - by ScottGu
    Over the last 3 weeks we’ve released a number of enhancements to the new Windows Azure Management Portal. These new capabilities include:

    - Localization support for 6 languages
    - Operation log support
    - Support for SQL Database metrics
    - Virtual Machine enhancements (quick create Windows + Linux VMs)
    - Web Site enhancements (support for creating sites in all regions, private GitHub repo deployment)
    - Cloud Service improvements (deploy from storage account, configuration support of dedicated cache)
    - Media Services enhancements (upload, encode, publish, stream all from within the portal)
    - Virtual networking usability enhancements
    - Custom CNAME support with storage accounts

    All of these improvements are now live in production and available to start using immediately. Below are more details on them.

    Localization Support

    The Windows Azure Portal now supports 6 languages – English, German, Spanish, French, Italian and Japanese. You can easily switch between languages by clicking on the avatar bar on the top right corner of the Portal. Selecting a different language will automatically refresh the UI within the portal in the selected language.

    Operation Log Support

    The Windows Azure Portal now supports the ability for administrators to review the “operation logs” of the services they manage – making it easy to see exactly what management operations were performed on them. You can query for these by selecting the “Settings” tab within the Portal and then choosing the “Operation Logs” tab within it. This displays a filter UI that enables you to query for operations by date and time. As of the most recent release we now show logs for all operations performed on Cloud Services and Storage Accounts. You can click on any operation in the list and click the “Details” button in the command bar to retrieve detailed status about it. This now makes it possible to retrieve details about every management operation performed. In future updates you’ll see us extend the operation log capability to apply to all Windows Azure Services – which will enable great post-mortem and audit support.

    Support for SQL Database Metrics

    You can now monitor the number of successful connections, failed connections and deadlocks in your SQL databases using the new “Dashboard” view provided on each SQL Database resource. Additionally, if the database is added as a “linked resource” to a Web Site or Cloud Service, monitoring metrics for the linked SQL database are shown along with the Web Site or Cloud Service metrics in the dashboard. This helps with viewing and managing aggregated information across both resources in your application.

    Enhancements to Virtual Machines

    The most recent Windows Azure Portal release brings with it some nice usability improvements to Virtual Machines, including an integrated Quick Create experience for Windows and Linux VMs. Creating a new Windows or Linux VM is now easy using the new “Quick Create” experience in the Portal. In addition to Windows VM templates you can also now select Linux image templates in the quick create UI. This makes it incredibly easy to create a new Virtual Machine in only a few seconds.

    Enhancements to Web Sites

    Prior to this past month’s release, users were forced to choose a single geographical region when creating their first site. After that, subsequent sites could only be created in that same region. This restriction has now been removed, and you can now create sites in any region at any time and have up to 10 free sites in each supported region. One of the new regions we’ve recently opened up is the “East Asia” region. This allows you to now deploy sites to North America, Europe and Asia simultaneously.

    Private GitHub Repository Support

    This past week we also enabled Git based continuous deployment support for Web Sites from private GitHub and BitBucket repositories (previous to this you could only enable this with public repositories).

    Enhancements to Cloud Services Experience

    The most recent Windows Azure Portal release brings with it some nice usability improvements to Cloud Services.

    Deploy a Cloud Service from a Windows Azure Storage Account: the Windows Azure Portal now supports deploying an application package and configuration file stored in a blob container in Windows Azure Storage. The ability to upload an application package from storage is available when you custom create, or upload to, or update a cloud service deployment. To upload an application package and configuration, create a Cloud Service, then select the file upload dialog, and choose to upload from a Windows Azure Storage Account. To upload an application package from storage, click the “FROM STORAGE” button and select the application package and configuration file to use from the new blob storage explorer in the portal.

    Configure Windows Azure Caching in a caching enabled cloud service: if you have deployed the new dedicated cache within a cloud service role, you can also now configure the cache settings in the portal by navigating to the configuration tab for your Cloud Service deployment. The configuration experience is similar to the one in Visual Studio when you create a cloud service and add a caching role. The portal now allows you to add or remove named caches and change the settings for the named caches – all from within the Portal and without needing to redeploy your application.

    Enhancements to Media Services

    You can now upload, encode, publish, and play your video content directly from within the Windows Azure Portal. This makes it incredibly easy to get started with Windows Azure Media Services and perform common tasks without having to write any code. Simply navigate to your media service and then click on the “Content” tab. All of the media content within your media service account will be listed here. Clicking the “upload” button within the portal now allows you to upload a media file directly from your computer. This will cause the video file you chose from your local file-system to be uploaded into Windows Azure. Once uploaded, you can select the file within the content tab of the Portal and click the “Encode” button to transcode it into different streaming formats. The portal includes a number of pre-set encoding formats that you can easily convert media content into. Once you select an encoding and click the ok button, Windows Azure Media Services will kick off an encoding job that will happen in the cloud (no need for you to stand up or configure a custom encoding server). When it’s finished, you can select the video in the “Content” tab and then click PUBLISH in the command bar to set up an origin streaming end-point to it. Once the media file is published you can point apps against the public URL and play the content using Windows Azure Media Services – no need to set up or run your own streaming server. You can also now select the file and click the “Play” button in the command bar to play it using the streaming endpoint directly within the Portal. This makes it incredibly easy to try out and use Windows Azure Media Services and test out an end-to-end workflow without having to write any code. Once you test things out you can of course automate it using script or code – providing you with an incredibly powerful Cloud Media platform that you can use.

    Enhancements to Virtual Network Experience

    Over the last few months, we have received feedback on the complexity of the Virtual Network creation experience. With these most recent Portal updates, we have added a Quick Create experience that makes the creation experience very simple. All that an administrator now needs to do is to provide a VNET name, choose an address space and the size of the VNET address space. They no longer need to understand the intricacies of the CIDR format or walk through a 4-page wizard or create a VNET / subnet. This makes creating virtual networks really simple. The portal also now has a “Register DNS Server” task that makes it easy to register DNS servers and associate them with a virtual network.

    Enhancements to Storage Experience

    The portal now lets you register custom domain names for your Windows Azure Storage Accounts. To enable this, select a storage resource and then go to the CONFIGURE tab for a storage account, and then click MANAGE DOMAIN on the command bar. Clicking “Manage Domain” will bring up a dialog that allows you to register any CNAME you want.

    Summary

    The above features are all now live in production and available to use immediately. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it. One of the other cool features that is now live within the portal is our new Windows Azure Store – which makes it incredibly easy to try and purchase developer services from a variety of partners. It is an incredibly awesome new capability – and something I’ll be doing a dedicated post about shortly.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Where is custom icon information stored in Mac OS X Snow Leopard?

    - by AmazingRobie
    I have an external LaCie hard drive connected via USB to my MacBook Pro, which is running Snow Leopard. I have nothing but music on the external drive, with every album sorted into its own folder, and I have changed all of the individual folder icons to display the album art of the songs inside. I want to reformat my laptop, but I'm afraid that if I do, the album art will disappear if it's stored in a system file on the main hard drive. My question is this: is the information which tells the OS to display the album art kept in a hidden system file on the external LaCie drive or on my laptop's hard drive? If I reformat, will I have to reassociate all of the album art with the folders on the external drive, or will it keep its associations? Thanks in advance.
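    For what it's worth, custom folder icons on Mac OS X are normally stored inside the folder itself: Finder creates an invisible file whose name is "Icon" followed by a carriage return, and the icon image lives in that file's resource fork, so it travels with the external drive. A way to check from Terminal (the volume and folder names below are placeholders):

        # List the folder including invisible entries and extended attributes;
        # a custom icon shows up as a file named "Icon?" carrying a resource fork.
        ls -la@ "/Volumes/LaCie/Some Album/"

        # Inspect the attributes on the Icon file itself ($'\r' is the
        # carriage return that ends its name).
        xattr -l "/Volumes/LaCie/Some Album/Icon"$'\r'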

    Read the article

  • MediaTomb permission denied on my truecrypt mount

    - by sarveshlad
    I want to install MediaTomb. I have two HDDs: a small 120 GB drive and a 1 TB drive. The 1 TB drive has 3 partitions and is encrypted with TrueCrypt. When I run MediaTomb it can read the data on the 120 GB drive but not on the 1 TB one. The 1 TB drive is mounted on startup using a script, and I have also added truecrypt to the sudoers permissions, if that helps. The owner shown on all of the 1 TB TrueCrypt mounts is my username, whereas the 2 partitions from the 120 GB disk show "nobody". I just got an Asus TF300T and I want to stream media to it using DLNA/UPnP.
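    A hedged diagnostic sketch for the permission-denied symptom: check which user the MediaTomb daemon actually runs as, and whether that user can traverse the TrueCrypt mount points (the /media/truecrypt1 path is a placeholder for wherever the startup script mounts the volumes):

        # Which account is the mediatomb daemon running under?
        ps -o user= -C mediatomb

        # Can that account enter and list the mounted volume?
        ls -ld /media/truecrypt1
        sudo -u mediatomb ls /media/truecrypt1

    If the last command fails, the fix is usually in the mount options the startup script passes to the mount (ownership/umask of the mounted filesystem) rather than in MediaTomb itself.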

    Read the article
