Search Results

Search found 40915 results on 1637 pages for 'virtual method'.

  • IIS 7 - The virtual path 'null' maps to another application, which is not allowed

    - by Miro
    I have run into an issue when setting up an IIS 7 farm for load balancing. I added 4 servers to the IIS farm with the appropriate ports (8080, 8081, 8082, 8083) and also added an inbound rule for the farm. The Tomcat instances listen on these ports. When I open the URL (which I set in the inbound rule), I get the following exception: The virtual path 'null' maps to another application, which is not allowed. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [ArgumentException: The virtual path 'null' maps to another application, which is not allowed.] System.Web.CachedPathData.GetVirtualPathData(VirtualPath virtualPath, Boolean permitPathsOutsideApp) +8839122 System.Web.HttpContext.GetFilePathData() +36 System.Web.HttpContext.GetConfigurationPathData() +26 System.Web.Configuration.RuntimeConfig.GetConfig(HttpContext context) +43 System.Web.Configuration.CustomErrorsSection.GetSettings(HttpContext context, Boolean canThrow) +41 System.Web.HttpResponse.ReportRuntimeError(Exception e, Boolean canThrow, Boolean localExecute) +101 System.Web.HttpContext.ReportRuntimeErrorIfExists(RequestNotificationStatus& status) +538 How can I solve this issue?

  • Using virtual IP with stunnel and haproxy

    - by beardtwizzle
    Hi there, We have a load-balancer setup, in which an HTTPS Request flows through the following steps:- Client -> DNS -> stunnel on Load-Balancer -> HAProxy on LB -> Web-Server This setup works perfectly when stunnel is listening to the local IP of the Load-Balancer. However in our setup we have 2 load-balancers and we want to be able to listen to a virtual IP, which only ever exists on one LB at a time (keepalived flips the IP to the second LB if the first one falls over). HAProxy has no problem in doing this (and I can ping the assigned virtual IP on the load-balancer I'm testing), but it seems stunnel hates the concept. Has anyone achieved this before (below is my stunnel config - as you can see I'm actually listening for ALL traffic on 443):- cert= /etc/ssl/certs/mycert.crt key = /etc/ssl/certs/mykey.key ;setuid = nobody ;setgid = nogroup pid = /etc/stunnel/stunnel.pid debug = 3 output = /etc/stunnel/stunnel.log socket=l:TCP_NODELAY=1 socket=r:TCP_NODELAY=1 [https] accept=443 connect=127.0.0.1:8443 TIMEOUTclose=0 xforwardedfor=yes Sorry for the long-winded question!
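    If the issue is that stunnel refuses to start or bind when the virtual IP is not currently assigned to the box, the usual remedy on Linux is the ip_nonlocal_bind sysctl, which lets a daemon bind an address the host does not yet own (the same trick is commonly used for haproxy under keepalived). A minimal sketch, assuming a sysctl.d-style layout; the file name is an assumption:

        # /etc/sysctl.d/60-stunnel-vip.conf  (hypothetical file name)
        # allow stunnel/haproxy to bind the keepalived-managed address
        # even while it lives on the other load balancer
        net.ipv4.ip_nonlocal_bind = 1

    After loading it with "sysctl -p /etc/sysctl.d/60-stunnel-vip.conf", stunnel can be pointed at the VIP explicitly, e.g. "accept = <virtual-ip>:443", instead of listening on every address.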

  • How to setup a virtual host in Ubuntu?

    - by Rade
    I have an app that's accessible via 1.2.3.4/myapp. The app is installed in /var/www/myapp. I've set up a subdomain(apps.mydomain.com) that points to 1.2.3.4. I want the server to point to var/www/myapp if I type apps.mydomain.com/myapp, how do I do that? I have experience creating virtual hosts(lots of them) locally but I'm lost because it's now in production and it's a little different. Here's my virtual host config: <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName apps.mydomain.com/myapp DocumentRoot /var/www/myapp/public <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride All Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> Any idea why I still see the files instead of pointing me to the document root? Just in case someone might ask, the app is based on Laravel 4 framework. It's really bad right now because anyone can access the files from the browser.
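    One likely culprit: ServerName takes a hostname only, so the "/myapp" part of "apps.mydomain.com/myapp" is not valid and the vhost will not behave as intended. A minimal sketch of an alternative, assuming the goal is to serve the Laravel public/ directory under the /myapp path of apps.mydomain.com:

        <VirtualHost *:80>
            ServerName apps.mydomain.com
            # map only the /myapp URL path onto the app's public directory
            Alias /myapp /var/www/myapp/public
            <Directory /var/www/myapp/public>
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Leaving Indexes out of the Options line also stops visitors from browsing the raw file listing, which addresses the last concern above.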

  • Slow upload speeds with pfsense virtual appliance

    - by Justin Shin
    I have a pfSense virtual appliance set up in front of a Windows server. The pfSense appliance has been configured with two L2L IPSec VPN sites and not too much else. The appliance has two vNics which both exist on the same VLAN, but one is "WAN" and the other is "LAN." When I run speedtest.net on my Windows server when I have configured it to use a static WAN address and gateway, I get great speeds - maybe around 50 down, 15 up. However, when I configure it with a private IP address, I get similar download speeds but terrible upload speeds - around 2 or 3 Mbps consistently. I used Wireshark to see what gives but there didn't appear to be too much helpful information there, or I just could not find it. Besides the L2L VPNs, other configurations include: Automatic Outbound NAT Virtual P-ARP IP for the Windows Server WAN Firewall rule to allow * to * on RDP WAN Firewall rule to allow * to * (enabled this just for testing... didn't help!) No DHCP or any other services besides IPSec VPN No Errors LAN or WAN No collisions LAN or WAN I would be happy to post the full config file if it would help. I've been scratching my head at this one all day!

  • Virtual bridged networking with VLAN, could not ping

    - by v.yegy
    I need to build a virtual network with VLANs between two virtual hosts, which can be LXC containers or VirtualBox guests (Ubuntu or Windows XP). I tried LXC and VirtualBox with Ubuntu and found it hard to get working even without VLANs, but was successful with VirtualBox and XP. vbox-xp1 --- br1 ---------------- br2 ---- vbox-xp2 The config is: brctl addbr br1; brctl addbr br2 ifconfig br1 up; ifconfig br2 up stp br1 off; stp br2 off ip link add name br1-br2-l0 type veth peer name br1-br2-l1 sudo brctl addif br1 br1-br2-l0 sudo brctl addif br2 br1-br2-l1 VirtualBox XP guests 1 and 2 have their networking bridged to br1 and br2 respectively. The adapter is an Intel PRO/1000 MT Server and the driver is installed in the guests. I configured IPs and the two hosts could ping each other! VLAN config: ip link add link br1 name br1-2.5 type vlan id 5 brctl addif br2 br1-2.5 then create VLAN 5 in XP 1 and 2 and assign IP addresses. With this config, ping does not work. A Wireshark trace on interface br1-br2-l1 / br1-2.5 shows that a single ping results in ~240 ping packets, each growing by 4 bytes (only the first is correct at 60 bytes); the ping does not reach the other host, and the MAC is not learnt [arp -a]. If br1-2.5 is not configured, I see untagged packets on br1-br2-l1/0, but they still do not reach the other host since the MAC is not learnt. If br1-br2-l0/1 is brought down, even with br1-2.5 up, I could not see any packets at all. I tried ebtables, but still could not come up with a working configuration. If anyone here is aware of a configuration that works, please let me know. I need to build a network of switches, so it seems I have a long way to go. Sorry for the very long question. Thanks and regards, vy
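    For what it is worth, classic brctl bridges know nothing about VLANs, so tagged frames passing through two bridges and a veth pair are easy to mangle. A sketch of the newer VLAN-aware bridge approach, assuming a reasonably recent kernel and iproute2; the tap interface names are assumptions:

        # one VLAN-aware bridge instead of two bridges joined by a veth pair
        ip link add br0 type bridge
        ip link set br0 type bridge vlan_filtering 1
        ip link set br0 up
        # attach the tap interfaces of the two VirtualBox guests (names assumed)
        ip link set tap-xp1 master br0 up
        ip link set tap-xp2 master br0 up
        # permit tagged VLAN 5 on both ports
        bridge vlan add dev tap-xp1 vid 5
        bridge vlan add dev tap-xp2 vid 5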

  • Where's the Swap File/Partition?

    - by chrisbunney
    I'm investigating the virtual memory configuration of a Debian based Amazon EC2 instance, and as my background isn't in system admin, I'm slightly confused by what I'm seeing. We're using MongoDB, and the monitoring server we have indicates that the Mongo process is using about 20GB of swap space, however I can't figure out where this is located on the server. As far as I can tell from using the various suggested methods from Google, there is either a much smaller amount, or none at all. top indicates that there is 1.8GB of swap memory: top - 15:35:21 up 6 days, 3:23, 1 user, load average: 1.60, 1.43, 1.37 Tasks: 47 total, 2 running, 45 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 1.3%sy, 0.0%ni, 14.7%id, 83.8%wa, 0.0%hi, 0.0%si, 0.1%st Mem: 3928924k total, 2855572k used, 1073352k free, 640564k buffers Swap: 0k total, 0k used, 0k free, 1887788k cached swapon -s doesn't seem to think there's any swap space: Filename Type Size Used Priority free -m doesn't think there's any swap either: total used free shared buffers cached Mem: 3836 3663 172 0 626 2701 -/+ buffers/cache: 336 3500 Swap: 0 0 0 And neither does vmstat: procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 3 0 66224 641372 2874744 0 0 21 5012 21 33 2 2 76 19 But cat /etc/fstab thinks there is a swap partition: /dev/xvda1 / ext3 defaults 1 1 /dev/xvda2 /mnt ext3 defaults 0 0 /dev/xvda3 swap swap defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 However df -k gives no indication of the xvda3 partition: Filesystem 1K-blocks Used Available Use% Mounted on /dev/xvda1 16513960 15675324 0 100% / tmpfs 1964460 8 1964452 1% /lib/init/rw udev 1914148 28 1914120 1% /dev tmpfs 1964460 4 1964456 1% /dev/shm So I really don't know what to make of this, because I appear to have a process using about 10 times more virtual memory than what might be available, and I have no idea where this virtual memory is on the system. I'm probably misinterpreting the output of the tools, so I'd be grateful if someone would be able to set me straight: What have I got wrong, what's the right interpretation, and how do you reach that interpretation? EDIT0: We use 10gen's MMS for monitoring the database, the relevant section for memory from the last data point is: "mem": { "virtual": 20749, "bits": 64, "supported": true, "mappedWithJournal": 20376, "mapped": 10188, "resident": 1219 }, This JSON is specific to the database process (I believe) rather than the system as a whole. fdisk -l /dev/xvda outputs... nothing? 
I tried each of the 3 xvda entries in /etc/fstab as well: root@ip:~# fdisk -l /dev/xvda1 Disk /dev/xvda1: 34.4 GB, 34359738368 bytes 255 heads, 63 sectors/track, 4177 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/xvda1 doesn't contain a valid partition table root@ip:~# fdisk -l /dev/xvda2 root@ip:~# fdisk -l /dev/xvda3 root@ip:~# Edit1: Output of cat /proc/meminfo for the sake of completeness: MemTotal: 3928924 kB MemFree: 726600 kB Buffers: 648368 kB Cached: 2216556 kB SwapCached: 0 kB Active: 1945100 kB Inactive: 994016 kB Active(anon): 60476 kB Inactive(anon): 12952 kB Active(file): 1884624 kB Inactive(file): 981064 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 0 kB SwapFree: 0 kB Dirty: 387180 kB Writeback: 0 kB AnonPages: 73380 kB Mapped: 1188260 kB Shmem: 48 kB Slab: 149768 kB SReclaimable: 146076 kB SUnreclaim: 3692 kB KernelStack: 1104 kB PageTables: 16096 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 1964460 kB Committed_AS: 305572 kB VmallocTotal: 34359738367 kB VmallocUsed: 16760 kB VmallocChunk: 34359721448 kB HardwareCorrupted: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 3932160 kB DirectMap2M: 0 kB
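    For the record, SwapTotal: 0 in /proc/meminfo together with the empty swapon -s output means the system really has no active swap; the xvda3 line in /etc/fstab appears to point at a device this instance does not have, and MongoDB's "virtual" figure in MMS measures the process's virtual address space (dominated by memory-mapped data files and the journal), not swap usage, which is why it can dwarf both RAM and swap. If real swap is wanted, a minimal sketch (size and path are assumptions):

        # create and enable a 4 GB swap file
        dd if=/dev/zero of=/swapfile bs=1M count=4096
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile
        swapon -s    # should now list /swapfile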

  • Networking 2 Virtual PC with one VPC as DHCP server

    - by vivek
    My host OS is Windows XP Professional. The host has a real network connection via DSL, and I created a second network connection using the Microsoft Loopback Adapter with Internet Connection Sharing enabled. The Microsoft Loopback Adapter has an IP address of 192.168.0.1. I have one Virtual PC running Windows Server 2003; its network connection is set to use the Microsoft Loopback Adapter, and I set it up as the domain controller, DNS server and DHCP server with a static IP address of 192.168.0.2 (on the same subnet as the loopback adapter). I have a second Virtual PC, also running Windows Server 2003, whose network connection is set to "Local Only". I want this VPC to get its IP address from the first VPC, which I set up as the DHCP server. In short, the two VPCs should be on one network, with one of them acting as the domain controller, DNS server and DHCP server; the second VPC should get its IP address from the first and join its domain. When I try to make the second VPC get its IP address from the first, I am not succeeding. Can somebody post some suggestions on how to go about this?

  • IIS 6 ASP.NET default handler-mappings and virtual directories

    - by mlauter
    I'm having a problem with setting a default mapping in IIS 6. I want to secure *.HTML files with ASP.NET forms authentication. The problem seems to have something to do with using virtual directories to hold the html files. Here's how it's setup: sample directory tree c:/inetpub/ (nothing in here) d:/web_files/my_web_apps d:/web_files/my_web_apps/app1/ d:/web_files/my_web_apps/app2/ d:/web_files/my_web_apps/html_files/ app1 and app2 both access the same html_files directory, so html_files is set as a virtual directory in the web apps in IIS... sample web directory tree //app1/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/) //app2/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/) If I put a file called test.html in the root of //app1/ and then add the default mapping to the asp.net dll and setup my security on the root folder with deny="?", then accessing test.html works exactly as expected. If I'm not authenticated, it takes me to the login.aspx page, and if I am authenticated then it displays test.html. If I put the test.html file in the html_files directory I get a totally different behavior. Now the login.aspx page loads and I stuck some code in to check if I was still authenticated: <p>autheticated: <%=User.Identity.IsAuthenticated%></p> I figured it would say false because why else would it bother to load the login page? Nope, it says true - so it knows i'm authenticated, but it won't give me access to the test.html file. I've spent several hours on this and haven't been able to solve it. I'm going to spend some more time on google to see if I've missed something. Fingers crossed.
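    If it helps to narrow things down, the authorization rules for the virtual directory can be spelled out explicitly from the application's root web.config with a location element, which makes it obvious whether the deny rule or the handler mapping is the part that misbehaves. A sketch, assuming the rule should admit any authenticated user; it belongs inside the <configuration> element:

        <location path="html_files">
          <system.web>
            <authorization>
              <deny users="?" />   <!-- anonymous users are sent to login.aspx -->
              <allow users="*" />  <!-- everyone else gets through -->
            </authorization>
          </system.web>
        </location>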

  • Dynamic virtual host configuration in Apache

    - by Kostas Andrianopoulos
    I want to make a virtual host in Apache with dynamic configuration for my websites. For example something like this would be perfect. <VirtualHost *:80> AssignUserId $domain webspaces ServerName $subdomain.$domain.$tld ServerAdmin admin@$domain.$tld DocumentRoot "/home/webspaces/$domain.$tld/subdomains/$subdomain" <Directory "/home/webspaces/$domain.$tld/subdomains/$subdomain"> .... </Directory> php_admin_value open_basedir "/tmp/:/usr/share/pear/:/home/webspaces/$domain.$tld/subdomains/$subdomain" </VirtualHost> $subdomain, $domain, $tld would be extracted from the HTTP_HOST variable using regex at request time. No more loads of configuration, no more apache reloading every x minutes, no more stupid logic. Notice that I use mpm-itk (AssignUserId directive) so each virtual host runs as a different user. I do not intend to change this part. Since now I have tried: - mod_vhost_alias but this allows dynamic configuration of only the document root. - mod_macro but this still requires the arguments of the vhost to be declared explicitly for each vhost. - I have read about mod_vhs and other modules which store configuration in a SQL or LDAP server which is not acceptable as there is no need for configuration! Those 3 necessary arguments can be generated at runtime. - I have seen some Perl suggestions like this, but as the author states $s->add_config would add a directive after every request, thus leading to a memory leak, and $r->add_config seems not to be a feasible solution.

  • Use shared 404 page for virtual hosts in Nginx

    - by Choy
    I'd like to have a shared 404 page to use across my virtual hosts. The following is my setup. Two sites, each with their own config file in /sites-available/ and /sites-enabled/ www.foo.com bar.foo.com The www directory is set up as: www/ foo.com/ foo.com/index.html bar.foo.com/ bar.foo.com/index.html shared/ shared/404.html Both config files in /sites-available are the same except for the root and server name: root /var/www/bar.foo.com; index index.html index.htm index.php; server_name bar.foo.com; location / { try_files $uri $uri/ /index.php; } error_page 404 /404.html; location = /404.html { root /var/www/shared; } I've tried the above code and also tried setting error_page 404 /var/www/shared/404.html (without the following location block). I've also double checked to make sure my permissions are set to 775 for all folders and files in www. When I try to access a non-existent page, Nginx serves the respective index.php of the virtual host I'm trying to access. Can anyone point out what I'm doing wrong? Thanks!
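    Note that with "try_files $uri $uri/ /index.php" a missing page never produces a 404 inside Nginx; the request is handed to index.php and whatever that script returns is what the client sees, which matches the behaviour described. A minimal sketch of the static-only variant, assuming missing files really should 404 at the Nginx level:

        location / {
            # fall through to a real 404 instead of the PHP front controller
            try_files $uri $uri/ =404;
        }

        error_page 404 /404.html;
        location = /404.html {
            root /var/www/shared;
            internal;
        }

    For sites that need the PHP front controller, the application has to emit the 404 itself, or fastcgi_intercept_errors can be switched on so that error_page handles the upstream 404.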

  • How to add a specific method to a particular scope in Visual Studio 2005

    - by pragadheesh
    Hi, In my Visual Studio project (C++), when I copy a method (meth1) from a particular scope, say 'scope1', and paste it in the same code area, it gets pasted into the general scope. In other words, I want to add a method to a particular scope, but whenever I try, it is added to the general scope instead. How can I solve this? For example, there is an existing method: void add(int a, int b) { .... } This method is in file scope, i.e. limited to that file. Now I want to add another method, add2, in the same file scope, so I copied the existing add method and pasted it: void add2(int a, int b) { .... } But this method ends up in the global scope and not in the file scope.
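    If "file scope" here means functions visible only inside that source file (internal linkage), the scope is decided by how the function is declared, not by where the text is pasted. A minimal sketch of the two usual ways to keep add2 out of the global scope:

        // declared static: add2 has internal linkage, i.e. file scope
        static void add2(int a, int b)
        {
            // ...
        }

        // or placed in an unnamed namespace, the more idiomatic C++ form
        namespace
        {
            void add3(int a, int b)
            {
                // ...
            }
        }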

  • IIS6 Virtual SMTP server isn't coming back up automatically after a system restart

    - by Julian James
    I've got a virtual server running Windows Server 2008 R2. I've set up IIS 6 with a virtual SMTP server on it to be the mail provider for the websites I'm hosting there. It all works great, but if for some reason the server reboots (auto updates are still enabled; I'm trying to make this as little work as possible as we've got a lot of clients), IIS doesn't restart the SMTP server. The failure causes 500 errors on the current setup, so I'm spending half the day apologising. Any ideas? In Services I've set everything to come back up automatically, but still no dice. As soon as I restart the SMTP server, no problems, all the mail gets sent. It works perfectly, it just won't restart on its own. I'd really rather not turn auto updates off, as we're such a small company that I can't spare the time to manually update 15 copies of Windows every time Microsoft decides there's a security patch. All advice appreciated! BTW, I am a complete newbie to these forums. I searched but couldn't find an answer, so please be nice. But firm. I've got to learn here.
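    Marking the service Automatic is not always enough if it dies or starts before its dependencies are ready; a recovery policy on the SMTP service (SMTPSVC) makes Windows retry it by itself. A sketch from an elevated command prompt; the retry delays are assumptions:

        rem make sure the SMTP service is set to start automatically
        sc config SMTPSVC start= auto
        rem restart it automatically, up to three times, a minute after each failure
        sc failure SMTPSVC reset= 86400 actions= restart/60000/restart/60000/restart/60000

    This only helps when the service actually fails rather than never being asked to start; in the latter case a scheduled task that runs "net start SMTPSVC" at system startup is a blunt but effective fallback.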

  • ServerName wildcards in Apache name-based virtual hosts?

    - by Martijn Heemels
    On our LAN I've set up several 'fake' TLDs in the DNS server, with the intention of using them for Apache name-based virtual hosting. I'd like to combine this with mass-virtual-hosting (i.e. VirtualDocumentRoot) on an Ubuntu 10.04 LAMP server. However, I can't get it to select the right vhost! Here is a summary of the Apache config: NameVirtualHost 10.10.0.205 <VirtualHost 10.10.0.205> ServerName *.test VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/ CustomLog /var/log/apache2/access.log vhost_combined </VirtualHost> <VirtualHost 10.10.0.205> ServerName *.dev VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/ CustomLog /var/log/apache2/access.log vhost_combined </VirtualHost> A hostname such as www.domain.com.dev, correctly resolves to 10.10.0.205, but always selects the top vhost, instead of the bottom one, which matches more closely. I was under the impression that Apache would first try to match the ServerName before defaulting to the top vhost for a given IP. What am I doing wrong? Or is this not possible and must I use another IP for each TLD? apachectl -S outputs (trimmed): 10.10.0.205:* is a NameVirtualHost default server *.test port * namevhost *.test port * namevhost *.dev
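    The likely catch is that ServerName does not accept wildcards, so "*.test" is treated as a literal hostname, neither vhost ever matches by name, and the first one for the address wins. Wildcards belong in ServerAlias; a minimal sketch, with the literal ServerName values as placeholders:

        <VirtualHost 10.10.0.205>
            ServerName catchall.test
            ServerAlias *.test
            VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/
        </VirtualHost>

        <VirtualHost 10.10.0.205>
            ServerName catchall.dev
            ServerAlias *.dev
            VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/
        </VirtualHost>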

  • ESXi - change to thin - virtual disk filesize is the same

    - by sven
    running ESXi 5.5 here with a datastore on a single SSD. Now, I thought about changing to thin disks from thick and found that I could use a tool on the ESXi host to do that. However, the file size of the new created virtual disk is not changing. I run: vmkfstools -i loader.vmdk -d 'thin' thinloader.vmdk Destination disk format: VMFS thin-provisioned Cloning disk 'loader.vmdk'... Clone: 100% done. After that I compared the virtual disksizes: ls -la *.vmdk -rw------- 1 root root 32212254720 Jun 10 08:25 loader-flat.vmdk -rw------- 1 root root 467 May 21 17:04 loader.vmdk -rw------- 1 root root 32212254720 Jun 10 08:27 thinloader-flat.vmdk -rw------- 1 root root 520 Jun 10 08:33 thinloader.vmdk Stats on the original file: stat loader.vmdk File: loader.vmdk Size: 467 Blocks: 0 IO Block: 131072 regular file Device: 8bf64d175e27544ch/10085333178302026828d Inode: 419443780 Links: 1 Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2014-01-25 10:17:34.000000000 Modify: 2014-05-21 17:04:06.000000000 Change: 2014-05-21 17:04:06.000000000 and on the thin file: stat thinloader.vmdk File: thinloader.vmdk Size: 520 Blocks: 0 IO Block: 131072 regular file Device: 8bf64d175e27544ch/10085333178302026828d Inode: 432026692 Links: 1 Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2014-06-10 08:27:45.000000000 Modify: 2014-06-10 08:33:30.000000000 Change: 2014-06-10 08:33:30.000000000 Anyone an idea why the disk is not providing any more space (tried with multiple VM's already - all the same)? Also, I have noticed that the newly created file "autoappend" "-flat" to the disk ... Thanks Sven Update - diff of the vmdk config* --- loader.vmdk +++ thinloader.vmdk @@ -7,15 +7,17 @@ createType="vmfs" -RW 62914560 VMFS "loader-flat.vmdk" +RW 62914560 VMFS "thinloader-flat.vmdk" ddb.adapterType = "lsilogic" +ddb.deletable = "true" ddb.geometry.cylinders = "3916" ddb.geometry.heads = "255" ddb.geometry.sectors = "63" ddb.longContentID = "6d95855805dfa0079327dfee29b48dca" -ddb.uuid = "60 00 C2 98 d5 7d 17 bf-ac 54 70 b1 2d 39 43 d5" +ddb.thinProvisioned = "1" +ddb.uuid = "60 00 C2 93 c4 13 6c cf-bb 7b 34 c9 2c b4 dc 1e" ddb.virtualHWVersion = "8"
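    Worth noting: on VMFS, ls reports the provisioned size of a -flat.vmdk whether the disk is thin or thick, so identical sizes there do not prove the clone is thick; the space actually allocated shows up with du. The "-flat" suffix is simply how ESXi names the data extent of every disk, with the small .vmdk acting as the descriptor. A quick check, as a sketch:

        # ls shows provisioned size; du shows blocks actually allocated on VMFS
        du -h loader-flat.vmdk thinloader-flat.vmdk
        # a genuinely thin clone should report far less than 30G for the second file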

  • Call a macro every time any method is called - Objective C

    - by Jacob Relkin
    Hi, I wrote a debug macro that prints the passed-in string to the console whenever the global kDebug flag == YES. I need to print the name of a method and its class whenever any method is called. That works fine when I painstakingly go through every method and write the class and method name into a string. Is there any special handler that gets called whenever any Objective-C method is invoked, and if so, is there a way I can override it to call my debug macro? The entire purpose of this is so that I don't have to go through every method in my code and hand-code the method signature in the debug macro call. Thanks
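    There is no supported hook that fires on every Objective-C method call (short of swizzling or DTrace), but the class and selector names never have to be hand-coded: every method body already has self and _cmd in scope. A minimal sketch of a macro built on that, reusing the existing kDebug flag:

        // prints e.g. "MyViewController viewDidLoad" when kDebug is YES
        #define DLogMethod() do { \
            if (kDebug) { \
                NSLog(@"%@ %@", NSStringFromClass([self class]), \
                                NSStringFromSelector(_cmd)); \
            } \
        } while (0)

    NSLog(@"%s", __PRETTY_FUNCTION__) is an even shorter equivalent inside an Objective-C method, since the compiler expands it to the full method signature.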

  • Mindtouch broke my Apache2 virtual host configuration.

    - by grenade
    I installed mindtouch using the instructions here and it seems to have broken my Virtual Host configuration. I have several domains running off the same apache instance and this was working fine but now all my domain names resolve to the virtualhost where mindtouch was installed. So mindtouch made all my domain names point to the new mindtouch instance. Grrr! I use debians default virtual host mechanisms (sites-enabled, etc). Does anyone know what apache directive mindtouch is using to ruin my vh setup? I've scoured all the conf files and there is nothing obvious in apache2.conf or httpd.conf that would cause the behaviour. Did it create a sym-link somewhere that I should destroy? I should add that I uninstalled the mindtouch packages already but apache persists in redirecting all domains to the first one mentioned in the sites-enabled folder. thini:~# apache2ctl -S [Wed Jan 05 13:39:11 2011] [warn] NameVirtualHost *:80 has no VirtualHosts VirtualHost configuration: wildcard NameVirtualHosts and _default_ servers: *:* www.openancestry.org (/etc/apache2/sites-enabled/openancestry.org:1) *:* www.pragmantra.com (/etc/apache2/sites-enabled/pragmantra.com:1) *:* services.pragmantra.com (/etc/apache2/sites-enabled/services.pragmantra.com:1) *:* www.subversionreports.com (/etc/apache2/sites-enabled/subversionreports.com:1) *:* www.thijssen.ch (/etc/apache2/sites-enabled/thijssen.ch:1) Syntax OK
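    The apachectl -S output is the clue: the sites are listed under *:* rather than under the *:80 NameVirtualHost, which is what typically happens when a <VirtualHost> opens without a port (or ports.conf was changed), so name-based matching never engages and the first site in sites-enabled wins. A minimal sketch of what the Debian defaults should look like; the DocumentRoot is an assumption:

        # /etc/apache2/ports.conf
        NameVirtualHost *:80
        Listen 80

        # /etc/apache2/sites-enabled/openancestry.org
        <VirtualHost *:80>
            ServerName www.openancestry.org
            DocumentRoot /var/www/openancestry
        </VirtualHost>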

  • Virtual host redirects to localhost in Ubuntu

    - by Salman
    I have recently configured virtual hosts on my Ubuntu 11.10 machine, but whatever site I type, it always redirects to the localhost page. This is my "our-test-site" file: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/zftut/public <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/zftut/public/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> and this is my "/etc/hosts" file: 127.0.0.1 localhost 127.0.0.1 our-test-site.local 127.0.0.1 zftut.local 127.0.1.1 System.B System Now when I try to go to "zftut.local", it redirects me to the localhost page, showing me this: It works! This is the default web page for this server. The web server software is running but no content has been added, yet. What am I doing wrong? I referred to "this" tutorial for setting up the virtual host.
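    As pasted, the our-test-site vhost has no ServerName line, so Apache can never select it by name and every request falls through to the default site, which is exactly the "It works!" page being served. A minimal sketch of the same vhost with the missing piece added:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName zftut.local
            DocumentRoot /var/www/zftut/public
            <Directory /var/www/zftut/public/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    The site then still has to be enabled and Apache reloaded, e.g. "sudo a2ensite our-test-site && sudo service apache2 reload", and each additional name (our-test-site.local, zftut.local) needs its own vhost with its own ServerName.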

  • Gathering buslogic SCSI hardware and virtual machine operating system

    - by Julian
    I'm trying to use Powershell to get SCSI hardware from several virtual servers and get the operating system of each specific server. I've managed to get the specific SCSI hardware that I want to find with my code, however I'm unable to figure out how to properly get the operating system of each of the servers. Also, I'm trying to send all the data that I find into a csv log file, however I'm unsure of how you can make a powershell script create multiple columns. Here is my code (almost works but something's wrong): $log = "C:\Users\me\Documents\Scripts\ScsiLog.csv" Get-VM | Foreach-Object { $vm = $_ Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" } | Foreach-Object { get-VMGuest -VM $vm } | Foreach-Object{ Write-output $vm.Guest.VmName >> $log } } I don't receive any errors when I run this code however whenever I run it I'm only getting the name of the servers and not the OS. Also I'm not sure what I need to do to make the OS appear in a different column from the name of the server in the csv log that I'm creating. What do I need to change in my code to get the OS version of each virtual machine and output it in a different column in my csv log file? EDIT: Here's a more in depth look at things I've tried that have all failed: Get-VM | Foreach-Object { $vm = $_ $svm = Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" } Foreach-Object {get-VMGuest -VM $svm } | Foreach-Object{Write-output $svm >> $log} } #Get-VM | Foreach-Object { # $vm = $_ # Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic"} #| write-host $vm # | Foreach-Object { # # #get-VMGuest -VM $_ | # #write-host $vm # #get-VMGuest -VM $vm } | Foreach-Object{ # #write-output $vm.VmName >> $log # #write-output $vm.guest.VmName, get-VmGuest -VM $vm >> $log NO GOOD # # Write-host $vm.Guest.VmName #+ get-vmGuest -vm $VM >> $log # # # } # } I'm not sure why get-VmGuest fails though. I'm getting the scsi hardware, filtering the hardware to only get buslogic, and then wanting to get the operating system of just the filtered VMs. I don't see where my code fails though.
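    A hedged sketch of one way to do both things at once: the guest OS string comes from VMware Tools via the VM's Guest.OSFullName property, and Select-Object builds the second column so that Export-Csv writes a proper two-column file. Property names are assumptions where they differ from the script above:

        $log = "C:\Users\me\Documents\Scripts\ScsiLog.csv"

        Get-VM |
            Where-Object { Get-ScsiController -VM $_ |
                           Where-Object { $_.Type -eq "VirtualBusLogic" } } |
            Select-Object Name,
                          @{ Name = "GuestOS"; Expression = { $_.Guest.OSFullName } } |
            Export-Csv -Path $log -NoTypeInformation

    If VMware Tools is not running in a guest, OSFullName comes back empty; the configured OS type can be read from $_.ExtensionData.Config.GuestFullName instead.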

  • Bacula virtual backup job doesn't run, no output?

    - by Zoredache
    I am trying to get Virtual Backups working, but when I try to run a virtual backup job, it appears to get created, but then never seems to actually run. I have a full, and a couple incremental backups. status director JobId Level Files Bytes Status Finished Name ==================================================================== 1283 Full 10,565 1.963 G OK 21-Dec-12 09:47 nms-Job 1284 Incr 314 129.6 M OK 21-Dec-12 09:49 nms-Job 1285 Incr 230 147.2 M OK 21-Dec-12 09:51 nms-Job 1288 Incr 525 138.8 M OK 21-Dec-12 11:25 nms-Job I attempt to start a job from bconsole like this. *run job=nms-Job level=VirtualFull Using Catalog "MySQL" Run Backup job JobName: nms-Job Level: VirtualFull Client: nms-FileDaemon FileSet: nms-FileSet Pool: nms-pool (From Job resource) Storage: File_d1 (From Pool resource) When: 2012-12-21 13:07:54 Priority: 10 OK to run? (yes/mod/no): Job queued. JobId=1291 Then my new job, just sits there, doing nothing. The JobStatus shows that the job was created, but it appears to never run? All the full, and incremental backups are terminating normally. *llist jobid=1291 JobId: 1,291 Job: nms-Job.2012-12-21_13.07.56_07 Name: nms-Job PurgedFiles: 0 Type: B Level: F ClientId: 4 Name: nms-FileDaemon JobStatus: C SchedTime: 2012-12-21 13:07:54 StartTime: 2012-12-21 13:07:56 EndTime: 0000-00-00 00:00:00 RealEndTime: 0000-00-00 00:00:00 JobTDate: 1,356,124,076 VolSessionId: 0 VolSessionTime: 0 JobFiles: 0 JobErrors: 0 JobMissingFiles: 0 PoolId: 19 PooLname: nms-pool PriorJobId: 0 FileSetId: 11 FileSet: nms-FileSet I am getting very frustrated, that this isn't working, mostly because it isn't giving me any error logs, or output at all. I submit the job, and as far as I can tell nothing happens. Is there some status, or debugging level that I can set to get a useful information about why this isn't working? What can I do to make this work? I was originally running Bacula 5.0.2 on Debian Squeeze, out of frustration, I upgraded to the 5.2.6 in the backports repository, hoping that a new version might give me better results.
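    bconsole does have a knob for this: the setdebug command raises the debug level (and can write a trace file) on the director, storage daemon or file daemon, which is usually enough to see why a queued job never progresses. A sketch of the sort of thing to try; the levels are arbitrary:

        *setdebug level=100 trace=1 dir
        *setdebug level=100 trace=1 storage=File_d1
        *messages

    One common cause worth checking while the traces are running: a VirtualFull has to read the existing volumes and write the consolidated one at the same time, so it generally needs a Next Pool and a second storage device; with a single File device the job can sit in the queue waiting forever.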

  • VirtualBox: using physical partition as virtual drive

    - by Hamman Samuel
    Background: I am using VirtualBox installed on Windows 7. From within VirtualBox I am using Xubuntu as a virtual OS. The reason I chose this approach is so that I don't have to keep turning off Windows and rebooting from Xubuntu every time I needed to switch OSes. And VirtualBox's seamless mode is pretty amazing to allow me see Xubuntu and Windows 7 all in one screen. Issue: Now I am thinking of a way to have Xubuntu more integrated into my system. By this I mean I want to have a physical partition for Xubuntu. But I want to still have the feeling of the seamless mode. Question: So finally, my question is: is it possible to load a partition in VirtualBox as a virtual OS? Case examples: Ideal scenario would be: I physically boot up and login to Windows 7. Now I want to access Xubuntu, so I load VirtualBox and access my Xubuntu partition without rebooting. And the other way around too, i.e. I boot up the system, login to Xubuntu, and can access the actual Windows 7 partition through VirtualBox. Other info: Please note that I am not talking about getting access to files, as I have a completely separate partition for my files, and am very familiar with VirtualBox's Shared Folders option.
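    VirtualBox can do this through a raw host-disk VMDK: a small descriptor file points at the physical partition and the VM boots from it while Windows stays the host. A minimal sketch from an elevated Windows command prompt; the disk and partition numbers are assumptions, so check them with diskpart first, and never boot the OS that is currently acting as the host from inside a VM:

        rem create a descriptor that exposes only partition 3 of physical disk 0
        VBoxManage internalcommands createrawvmdk ^
            -filename C:\VMs\xubuntu-raw.vmdk ^
            -rawdisk \\.\PhysicalDrive0 ^
            -partitions 3

    The resulting xubuntu-raw.vmdk is then attached to the VM as its hard disk. The reverse direction (booting the Windows 7 partition as a guest while that same Windows 7 is the running host) is not workable, so seamless access in both directions at once is not realistic.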

  • Flickering dual screens in Virtual Box Ubuntu 13.10 Guest

    - by alexleonard
    I have Ubuntu 13.10 x64 installed as a guest in VirtualBox (under a Windows 8.1 host) and have the settings for the virtual machine setup to run with a monitor count of 2, 128MB video memory and 3D acceleration enabled. In my guest I have the virtual box additions installed (which allowed me to have two 1920x1080 screens). Here's a screenshot of my VM settings. My laptop is an Asus N550JV which has both Intel's HD Graphics 4600 GPU and Nvidia's GeForce GT 750M. By default though I believe the Intel GFX card is being used to render the VM. When I boot up the VM it loads perfectly on dual screens, however whenever I move the mouse from one screen to the other (I have a Dell S2340L running over a HDMI connection as a second screen) the screen flickers. I've tried a variety of settings changes in both Ubuntu and the VM settings, but cannot seem to stop this screen flicker. I also used the NVidia control panel in Windows to force the dedicated graphics card to always be used but found that the display driver sometimes crashed whilst working in the VM, resulting in my VM session being destroyed, so I figured it's better to stick with the Intel GFX as that appears to be more stable. I also tried without 3D acceleration but that was much worse, and if I ran the VM with a low amount of graphics memory it really struggled. Here's my dmesg output: http://pastebin.com/1LJuYWMj (not sure if this is helpful in this situation). I read some posts suggesting changes to /etc/X11/xorg.conf but I don't appear to have an xorg.conf file. There were also a few posts (though related to Synergy) suggesting running xset -dpms but this command doesn't appear to have had any effect for me. As an additional note, I'm finding that window drawing in the guest is a little laggy/glitchy. For example, quickly scrolling through a web page may result in parts of the viewport displaying original content. Certainly I notice drawing issues most in the web browser, but it also impacts other software with parts of the window not being drawn when, say, switching between accounts in thunderbird. Any suggestions greatly appreciated!

  • Apache Named Virtual Hosts and HTTPS

    - by Freddie Witherden
    I have an SSL certificate which is valid for multiple (sub-) domains. In Apache I have configured this as follows: In /etc/apache2/apache2.conf NameVirtualHost <my ip>:443 Then for one named virtual host I have <VirtualHost <my ip>:443> ServerName ... SSLEngine on SSLCertificateFile ... SSLCertificateKeyFile ... SSLCertificateChainFile ... SSLCACertificateFile ... </VirtualHost> Finally, for every other site I want to be accessible over HTTPS I just have a <VirtualHost <my ip>:443> ServerName ... </VirtualHost> The good news is that it works. However, when I start Apache I get warning messages [warn] Init: SSL server IP/port conflict: Domain A:443 (...) vs. Domain B:443 (...) [warn] Init: SSL server IP/port conflict: Domain C:443 (...) vs. Domain B:443 (...) [warn] Init: You should not use name-based virtual hosts in conjunction with SSL!! So, my question is: how should I be configuring this? Clearly from the warning messages I am doing something wrong (although it does work!), however, the above configuration was the only one I could get to work. It is somewhat annoying as the configuration files have an explicit dependence on my IP address.
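    For what it is worth, those two warnings are typically what mod_ssl prints when it cannot use SNI (roughly, Apache older than 2.2.12 or OpenSSL without TLS extension support); in that case every client is handed the first vhost's certificate, which only works here because that one certificate covers all the names. Independently of that, each *:443 vhost can carry its own SSL directives instead of relying on the first one. A minimal sketch for one of the additional sites, with the certificate paths assumed to be the same files used in the first vhost:

        <VirtualHost <my ip>:443>
            ServerName other.example.org
            SSLEngine on
            SSLCertificateFile      /etc/ssl/certs/multi-domain.crt
            SSLCertificateKeyFile   /etc/ssl/private/multi-domain.key
            SSLCertificateChainFile /etc/ssl/certs/multi-domain-chain.crt
        </VirtualHost>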
