Search Results

Search found 13904 results on 557 pages for 'host on demand'.

Page 9/557

  • Bridging LXC containers to host eth0 so they can have a public IP

    - by Vianney Stroebel
    UPDATE: I found the solution here: http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge#No_traffic_gets_trough_.28except_ARP_and_STP.29 # cd /proc/sys/net/bridge # ls bridge-nf-call-arptables bridge-nf-call-iptables bridge-nf-call-ip6tables bridge-nf-filter-vlan-tagged # for f in bridge-nf-*; do echo 0 > $f; done But I'd like to have expert opinions on this: is it safe to disable all bridge-nf-*? What are they here for? END OF UPDATE I need to bridge LXC containers to the physical interface (eth0) of my host, having read numerous tutorials, documents and blog posts on the subject. I need the containers to have their own public IP (which I've previously done with KVM/libvirt). After two days of searching and trying, I still can't make it work with LXC containers. The host runs a freshly installed Ubuntu Server Quantal (12.10) with only libvirt (which I'm not using here) and lxc installed. I created the containers with: lxc-create -t ubuntu -n mycontainer So they also run Ubuntu 12.10. Content of /var/lib/lxc/mycontainer/config is: lxc.utsname = mycontainer lxc.mount = /var/lib/lxc/test/fstab lxc.rootfs = /var/lib/lxc/test/rootfs lxc.network.type = veth lxc.network.flags = up lxc.network.link = br0 lxc.network.name = eth0 lxc.network.veth.pair = vethmycontainer lxc.network.ipv4 = 179.43.46.233 lxc.network.hwaddr= 02:00:00:86:5b:11 lxc.devttydir = lxc lxc.tty = 4 lxc.pts = 1024 lxc.arch = amd64 lxc.cap.drop = sys_module mac_admin mac_override lxc.pivotdir = lxc_putold # uncomment the next line to run the container unconfined: #lxc.aa_profile = unconfined lxc.cgroup.devices.deny = a # Allow any mknod (but not using the node) lxc.cgroup.devices.allow = c *:* m lxc.cgroup.devices.allow = b *:* m # /dev/null and zero lxc.cgroup.devices.allow = c 1:3 rwm lxc.cgroup.devices.allow = c 1:5 rwm # consoles lxc.cgroup.devices.allow = c 5:1 rwm lxc.cgroup.devices.allow = c 5:0 rwm #lxc.cgroup.devices.allow = c 4:0 rwm #lxc.cgroup.devices.allow = c 4:1 rwm # /dev/{,u}random lxc.cgroup.devices.allow = c 1:9 rwm lxc.cgroup.devices.allow = c 1:8 rwm lxc.cgroup.devices.allow = c 136:* rwm lxc.cgroup.devices.allow = c 5:2 rwm # rtc lxc.cgroup.devices.allow = c 254:0 rwm #fuse lxc.cgroup.devices.allow = c 10:229 rwm #tun lxc.cgroup.devices.allow = c 10:200 rwm #full lxc.cgroup.devices.allow = c 1:7 rwm #hpet lxc.cgroup.devices.allow = c 10:228 rwm #kvm lxc.cgroup.devices.allow = c 10:232 rwm Then I changed my host /etc/network/interfaces to: auto lo iface lo inet loopback auto br0 iface br0 inet static bridge_ports eth0 bridge_fd 0 address 92.281.86.226 netmask 255.255.255.0 network 92.281.86.0 broadcast 92.281.86.255 gateway 92.281.86.254 dns-nameservers 213.186.33.99 dns-search ovh.net When I try command line configuration ("brctl addif", "ifconfig eth0", etc.) my remote host becomes inaccessible and I have to hard reboot it. I changed the content of /var/lib/lxc/mycontainer/rootfs/etc/network/interfaces to: auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 179.43.46.233 netmask 255.255.255.255 broadcast 178.33.40.233 gateway 92.281.86.254 It takes several minutes for mycontainer to start (lxc-start -n mycontainer). I tried replacing gateway 92.281.86.254 with: post-up route add 92.281.86.254 dev eth0 post-up route add default gw 92.281.86.254 post-down route del 92.281.86.254 dev eth0 post-down route del default gw 92.281.86.254 My container then starts instantly.
But whatever configuration I set in /var/lib/lxc/mycontainer/rootfs/etc/network/interfaces, I cannot ping from mycontainer to any IP (including the host's) : ubuntu@mycontainer:~$ ping 92.281.86.226 PING 92.281.86.226 (92.281.86.226) 56(84) bytes of data. ^C --- 92.281.86.226 ping statistics --- 6 packets transmitted, 0 received, 100% packet loss, time 5031ms And my host cannot ping the container: root@host:~# ping 179.43.46.233 PING 179.43.46.233 (179.43.46.233) 56(84) bytes of data. ^C --- 179.43.46.233 ping statistics --- 5 packets transmitted, 0 received, 100% packet loss, time 4000ms My container's ifconfig: ubuntu@mycontainer:~$ ifconfig eth0 Link encap:Ethernet HWaddr 02:00:00:86:5b:11 inet addr:179.43.46.233 Bcast:255.255.255.255 Mask:0.0.0.0 inet6 addr: fe80::ff:fe79:5a31/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:64 errors:0 dropped:6 overruns:0 frame:0 TX packets:54 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4070 (4.0 KB) TX bytes:4168 (4.1 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:32 errors:0 dropped:0 overruns:0 frame:0 TX packets:32 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2496 (2.4 KB) TX bytes:2496 (2.4 KB) My host's ifconfig: root@host:~# ifconfig br0 Link encap:Ethernet HWaddr 4c:72:b9:43:65:2b inet addr:92.281.86.226 Bcast:91.121.67.255 Mask:255.255.255.0 inet6 addr: fe80::4e72:b9ff:fe43:652b/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1453 errors:0 dropped:18 overruns:0 frame:0 TX packets:1630 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:145125 (145.1 KB) TX bytes:299943 (299.9 KB) eth0 Link encap:Ethernet HWaddr 4c:72:b9:43:65:2b UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3178 errors:0 dropped:0 overruns:0 frame:0 TX packets:1637 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:298263 (298.2 KB) TX bytes:309167 (309.1 KB) Interrupt:20 Memory:fe500000-fe520000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:6 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:300 (300.0 B) TX bytes:300 (300.0 B) vethtest Link encap:Ethernet HWaddr fe:0d:7f:3e:70:88 inet6 addr: fe80::fc0d:7fff:fe3e:7088/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:54 errors:0 dropped:0 overruns:0 frame:0 TX packets:67 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4168 (4.1 KB) TX bytes:4250 (4.2 KB) virbr0 Link encap:Ethernet HWaddr de:49:c5:66:cf:84 inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) I have disabled lxcbr0 (USE_LXC_BRIDGE="false" in /etc/default/lxc). root@host:~# brctl show bridge name bridge id STP enabled interfaces br0 8000.4c72b943652b no eth0 vethtest I have configured the IP 179.43.46.233 to point to 02:00:00:86:5b:11 in my hosting provider (OVH) config panel. (The IPs in this post are not the real ones.) Thanks for reading this long question! :-) Vianney
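
    A minimal sketch of making the bridge-nf change from the update permanent, assuming the bridge sysctls are exposed on this kernel (they are on stock Ubuntu 12.10); the values simply stop iptables/ip6tables/arptables from being applied to bridged traffic:

      # /etc/sysctl.conf (or a file under /etc/sysctl.d/), applied with: sysctl -p
      net.bridge.bridge-nf-call-iptables = 0
      net.bridge.bridge-nf-call-ip6tables = 0
      net.bridge.bridge-nf-call-arptables = 0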

    Read the article

  • Slow http traffic between VMWare guest and host.

    - by toluju
    I have a web application running as an http server inside the VMWare guest OS, and I'm trying to access the content from the host OS. The guest is running Ubuntu, and the host is running Windows XP. The problem is, when I try to access the application from a browser in the host OS, the content takes a very long time to load (up to a minute for a single page). A browser in the guest OS can access the application with no problems. I've tried using both NAT and bridged networking, but the results are the same. The Windows firewall is turned off. The connection itself appears fine, as ping requests from guest to host as well as host to guest complete without errors or delays. Both guest and host can access the external Internet connection without a problem. I'm using VMWare Player. Any ideas?

    Read the article

  • Multiple SSH private keys for the same host

    - by Sencha
    How can I store 2 different private SSH keys for the same host? I have tried 2 entries in /etc/ssh/ssh_config for the same host with different keys, and I've also tried putting both keys in the same file and referencing it from one Host setting, but neither works. More detail: I'm running Ubuntu server (12.04) and I want to connect to GitHub via SSH to download the latest source for my projects. There are multiple projects running on the same server and each project has a GitHub repo with its own unique deployment key pair. So the host is always the same (github.com) but the keys need to be different depending on which repo I'm using. Different /etc/ssh/ssh_config versions I have tried: Host github.com IdentityFile /etc/ssh/my_project_1_github_deploy_key StrictHostKeyChecking no Host github.com IdentityFile /etc/ssh/my_project_2_github_deploy_key StrictHostKeyChecking no and this with both keys in the same file: Host github.com IdentityFile /etc/ssh/my_project_github_deploy_keys StrictHostKeyChecking no I've had no luck with either. Any help would be greatly appreciated!
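
    An approach commonly suggested for this situation, as a sketch (the alias names are hypothetical): give each deploy key its own Host alias in the SSH config, keep HostName pointing at github.com, and use the alias in the Git remote URL so SSH picks the matching key:

      Host github-project1
          HostName github.com
          User git
          IdentityFile /etc/ssh/my_project_1_github_deploy_key
          IdentitiesOnly yes
          StrictHostKeyChecking no

      Host github-project2
          HostName github.com
          User git
          IdentityFile /etc/ssh/my_project_2_github_deploy_key
          IdentitiesOnly yes
          StrictHostKeyChecking no

    A clone would then use git@github-project1:account/repo.git instead of git@github.com:account/repo.git.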

    Read the article

  • nmap reports host up when it isn't

    - by martianway
    On an Ubuntu VM I ran: sudo nmap -sP 192.168.0.* This returned: Starting Nmap 5.00 ( http://nmap.org ) at 2010-12-28 22:46 PST Host 192.168.0.0 is up (0.00064s latency). Host 192.168.0.1 is up (0.00078s latency). Host 192.168.0.2 is up (0.00011s latency). . . . Host 192.168.0.254 is up (0.00068s latency). Host 192.168.0.255 is up (0.00066s latency). The problem is I only have 4 live machines on 192.168.0.*, so why did nmap report every IP in the subnet as having a live host? The IP address of the Ubuntu machine is 192.168.28.131. From this VM I can ping the live systems on my internal subnet 192.168.0.* and get the expected response. And if I ping a machine that doesn't exist, I get no response, as expected.
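
    A cross-check sketch (addresses and probe types assumed): because 192.168.0.* is not the VM's own subnet, discovery uses routed ICMP/TCP probes rather than ARP, and a NAT device or firewall in the path may be answering on behalf of every address; asking nmap why it marked a host up, and pinning discovery to one probe type, usually narrows this down:

      sudo nmap -sP --reason 192.168.0.1-10   # show which probe produced each "up" verdict (if this nmap build supports --reason)
      sudo nmap -sP -PE 192.168.0.0/24        # restrict discovery to ICMP echo only
      ping -c 2 192.168.0.200                 # compare with a plain ping to an address that should be unused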

    Read the article

  • Why can't all zeros in the host portion of IP address be used for a host?

    - by Grezzo
    I know that if I have a network 83.23.159.0/24 then I have 254 usable host IP addresses because: 83.23.159.0 (in binary: host portion all zeros) is the subnet address; 83.23.159.1-254 are host addresses; 83.23.159.255 (in binary: host portion all ones) is the broadcast address. I understand the use for a broadcast address, but I don't understand what the subnet address is ever used for. I can't see any reason that an IP packet's destination address would be set to the subnet address, so why does the subnet itself need an address if it is never going to be the endpoint for an IP flow? To me it seems like a waste to not allow this address to be used as a host address. To summarise, my questions are: Is an IP packet's destination ever set to the subnet IP address? If yes, in what cases and why? If no, then why not free up that address for any host to use?
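
    For reference, a worked version of the arithmetic behind the question (same /24 as above): the subnet address is simply what remains when the mask zeroes every host bit, and the broadcast address is the same bits all set:

      address   83.23.159.0     = 01010011.00010111.10011111.00000000
      netmask   255.255.255.0   = 11111111.11111111.11111111.00000000
      network   = address AND mask       -> 83.23.159.0   (host bits all 0)
      broadcast = network OR (NOT mask)  -> 83.23.159.255 (host bits all 1)
      usable    = 83.23.159.1 - 83.23.159.254   (2^8 - 2 = 254 addresses)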

    Read the article

  • How can I tell if a host is bridged and acting as a router

    - by makerofthings7
    I would like to scan my DMZ for hosts that are bridged between subnets and have routing enabled. Since I have everything from VMware servers to load balancers on the DMZ, I'm unsure if every host is configured correctly. What IP, ICMP, or SNMP (etc.) tricks can I use to poll the hosts and determine if a host is acting as a router? I'm assuming this test would presume I know the target IP, but in a large network with many subnets, I'd have to test many different combinations of networks and see if I get success. Here is one example (ping): for each IP in the DMZ, ARP for the host MAC, then send an ICMP reply message to that host directed at an online host on each subnet. I think that there is a more optimal way to get the information, namely from within ICMP/IP itself, but I'm not sure what low-level bits to look for. I would also be interested if it's possible to determine the "router" status without knowing the subnets that the host may be connected to. This would be useful to know when improving our security posture.
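
    On the SNMP side, one direct signal (a sketch, assuming the DMZ hosts expose SNMP to the scanning box; IP and community string are placeholders): the standard IP-MIB carries an ipForwarding object that states whether the host forwards packets, so nothing has to be inferred from ping behaviour:

      # IP-MIB::ipForwarding.0 -- 1 = forwarding (acting as a router), 2 = not forwarding
      snmpget -v2c -c public 10.0.0.5 .1.3.6.1.2.1.4.1.0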

    Read the article

  • Host file in browser?

    - by acidzombie24
    Instead of changing my host file, I'd like Firefox to think my domain is on my own server while my other browser uses the real IP address. I used to edit my host file, but that forces both browsers to change the IP address. I found "change host" but it doesn't appear to use the alternative host file. I also saw a comment asking when it will work on Firefox 6+. I tried Host Admin and it fails. It works but the alternative IP address must be in your host file already (which I don't want), and it lets you deselect a domain so the host file is ignored. Not what I want.

    Read the article

  • apache virtual host concept and dns

    - by Subhransu
    I want to have around 60 repositories of projects and I want to serve them from a dedicated remote server (Ubuntu) with the help of a Mercurial server, so that all my developers will be able to update their changes. I have followed this article in order to do that but got stuck at the Apache Configuration step (section 2.5 2.5.4). I have the following questions: What are the steps I need to follow to make Apache serve /home/hg/repositories/private/hgweb.cgi when I enter dev.example.com/private? Is my virtual host file correct or do I need to change anything? I bought example.com; how do I make it serve dev.example.com/private? Do I need to add an A record (like subdomain.example.com pointing to the IP of my server) in the cPanel of the hosting company? ServerAdmin webmaster@localhost ServerName dev.example.com ScriptAlias /private /home/hg/repositories/private/hgweb.cgi <Directory /home/hg/repositories/private/> Options ExecCGI FollowSymlinks AddHandler cgi-script .cgi DirectoryIndex hgweb.cgi AuthType Basic AuthName "Mercurial repositories" AuthUserFile /home/hg/tools/hgusers Require valid-user </Directory> ErrorLog ${APACHE_LOG_DIR}/dev.example.com_error.log # Possible values include: debug, info, notice, warn, error, cr$ # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/dev.example.com_ssl_access.lo$ SSLEngine on SSLCertificateFile "/etc/apache2/ssl/dev.example.com.crt" SSLCertificateKeyFile "/etc/apache2/ssl/dev.example.com.k$ NOTE: The above is my virtual host file. I have not enabled the site yet and also have not changed any host or hostname or httpd.conf file.
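
    For the DNS part (the A record question), only a record for the dev host is needed; a sketch of the zone-file form (the IP is a placeholder), which is the same thing as adding an "A" record for host "dev" in the hosting panel:

      ; zone file for example.com
      dev     IN  A   203.0.113.10    ; public IP of the Ubuntu server
      ; the /private path is then handled entirely by the ScriptAlias in the vhost above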

    Read the article

  • How to get more information about worldwide nodes?

    - by Aubergine
    The context: six hosts around the world were traced over a week from the UK. Tens of thousands of lines to be parsed and analysed. Then I try to find any clue about geographical information and the path - from where it jumps to where. Then after Austria or Germany (each time different) I have the mysterious 62.208.72.6, which a geo-location lookup places in the Falkland Islands (which is where my target host is, by the way, but before the target host I still have 5 other nodes). Then I do a whois for this 62.208.72.6: route: 62.208.0.0/16 descr: DE-ECRC-62-208-0-0 origin: AS1273 mnt-by: CW-EUROPE-GSOC source: RIPE # Filtered Why does it say Europe now? How do I understand this enigma code? I want to confirm more or less whether this is in Europe or in the Falkland Islands. But it can't be in FK yet, as after the next two hosts I get New York? Could you also tell me what this CW-EUROPE-GSOC abbreviation means. (To preserve your sanity, better not google it, unless you already know it :-D) And the actual whois for the destination/target host, which completely destroys my head: route: 195.248.193.0/24 descr: HORIZON descr: Cable and Wireless Falkland Islands descr: Via Cable and Wireless Communications UK origin: AS5551 mnt-by: AS5551-MNT source: RIPE # Filtered How is it "Via Cable and Wireless Communications UK" if two nodes before I was in New York? Thank you guys,

    Read the article

  • Dreamweaver CS5 Test server works but cannot connect to host server through files window

    - by Toni
    I've been managing this site for a long time and update coupons on it approximately every 60 days. For some reason, I'm now having problems: I opened DW CS5 today and made the changes necessary to update coupons. I was able to connect to the host server with no problem but most of my coupon images were not showing up. DW tells me I have 70 broken links, which can't be the case because I've reviewed them. Some links work and are the same as the broken links other than the file name. Unable to figure it out, I thought maybe restarting my Mac would help. However, upon logging back into DW, I am now unable to connect to the host server. I get an FTP error notice that the file doesn't exist or there is a permissions problem. Funny thing is, I can connect successfully if I test the connection through the Site Management window. I have connected to my host server through FileZilla and see all the files there, but unfortunately I still can't get the web pages to display the coupons. Has anyone else had this issue and if so, what is the solution? I feel like this is probably a simple fix, but I cannot for the life of me determine what it is! If anyone knows a solution, I'd really appreciate the help! -Toni

    Read the article

  • How to solve: "Connect to host some_hostname port 22: Connection timed out"

    - by Aufwind
    I have two Ubuntu machines. Both have openssh-client and openssh-server installed on them. ssh-ing from machine G (fresh Ubuntu 11.10 installation) to machine K works great. But ssh-ing from machine K to machine G always results in the error: Connect to host some_hostname port 22: Connection timed out I went through the troubleshooting section of help.ubuntu.com and I got the following results: ps -A | grep sshd # results in 848 ? 00:00:00 sshd - sudo ss -lnp | grep sshd # results in 0 128 :::22 :::* users:(("sshd",848,4)) 0 128 *:22 *:* users:(("sshd",848,3)) - ssh -v localhost # works! - sudo ufw status verbose # yields: "Status: inactive" I haven't changed anything in the config file. What can I do to locate the problem and solve it? I'd be glad about any hint! Edit: ping was successful in both directions! I did a telnet <machineK> 22 from machine G which resulted in Trying and then in telnet: Unable to connect to remote host: Connection timed out. But telnet the other way around worked just fine! Edit 2: sudo service ssh status # yields: ssh start/running, process 966 /etc/hostname # contains my hostname, let's call it blubb /etc/hosts # contains the following 127.0.0.1 localhost # 127.0.1.1 blubb 129.26.68.74 blubb # I added this! - sudo service ufw status # yields: ufw start/running I installed Gufw and set it to ON. Then I selected from Incoming the option ALLOW. Then I sshed to another machine from where I sshed back to my machine. Still the same error as above: connect to host blubb port 22: Connection timed out Any more hints on what I can check?
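
    A layer-by-layer sketch for narrowing this down (hostname and interface assumed): "Connection timed out" means the SYN is being silently dropped rather than refused, so check raw TCP reachability, firewall counters, and whether the packets arrive at all:

      nc -vz -w 5 blubb 22            # from machine K: TCP reachability with an explicit timeout
      sudo iptables -L -n -v          # on machine G: any DROP/REJECT rules whose counters grow during the attempt?
      sudo tcpdump -ni eth0 port 22   # on machine G: do machine K's SYN packets arrive on the wire at all?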

    Read the article

  • Best way to setup hosts, subdomains, and IPs [closed]

    - by LynnOwens
    I own a domain, let's call it mydomain.com. I need to host the following off it: forums.mydomain.com www.mydomain.com blog.mydomain.com objects.mydomain.com I believe I can get 5 static IPs. I plan on assigning one each to those four hosts. Then I need to adhoc create names, all below objects.mydomain.com. For instance: one.objects.mydomain.com two.objects.mydomain.com three.objects.mydomain.com I need to create these names programatically, and without human intervention. Preferably, they would not get their own IPs. They would use the IP of objects.mydomain.com. First question: Does this mean that I need to host my own DNS? Second question: I'm using Apache as a web server. What does the virtual host configuration look like? I was experimenting with the following to understand how routing on domain names works and I always ended up at www. <VirtualHost *:80> ServerAdmin [email protected] ServerName www.mydomain.com ServerAlias www.mydomain.com DocumentRoot "E:/Static/www" RewriteEngine On RewriteRule ^(/www/.*) /www$1 </Virtualhost> <VirtualHost *:80> ServerAdmin [email protected] ServerName forums.mydomain.com ServerAlias forums.mydomain.com DocumentRoot "E:/Static/forums" RewriteEngine On RewriteRule ^(/forums/.*) /forums$1 </Virtualhost>
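
    For the programmatically created names, a catch-all sketch in the same style as the config above (paths assumed): a wildcard ServerAlias on the Apache side plus a wildcard record on the DNS side means no per-name vhosts, no per-name IPs, and no human intervention:

      <VirtualHost *:80>
          ServerAdmin [email protected]
          ServerName objects.mydomain.com
          ServerAlias *.objects.mydomain.com
          DocumentRoot "E:/Static/objects"
      </VirtualHost>
      # DNS: a wildcard A record for *.objects.mydomain.com pointing at the objects IP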

    Read the article

  • Guest (and occasional co-host) on Jesse Liberty's Yet Another Podcast

    - by Jon Galloway
    I was a recent guest on Jesse Liberty's Yet Another Podcast talking about the latest Visual Studio, ASP.NET and Azure releases. Download / Listen: Yet Another Podcast #75–Jon Galloway on ASP.NET/ MVC/ Azure Co-hosted shows: Jesse's been inviting me to co-host shows and I told him I'd show up when I was available. It's a nice change to be a drive-by co-host on a show (compared with the work that goes into organizing / editing / typing show notes for Herding Code shows). My main focus is on Herding Code, but it's nice to pop in and talk to Jesse's excellent guests when it works out. Some shows I've co-hosted over the past year: Yet Another Podcast #76–Glenn Block on Node.js & Technology in China Yet Another Podcast  #73 - Adam Kinney on developing for Windows 8 with HTML5 Yet Another Podcast #64 - John Papa & Javascript Yet Another Podcast #60 - Steve Sanderson and John Papa on Knockout.js Yet Another Podcast #54–Damian Edwards on ASP.NET Yet Another Podcast #53–Scott Hanselman on Blogging Yet Another Podcast #52–Peter Torr on Windows Phone Multitasking Yet Another Podcast #51–Shawn Wildermuth: //build, Xaml Programming & Beyond And some more on the way that haven't been released yet. Some of these I'm pretty quiet, on others I get wacky and hassle the guests because, hey, not my podcast so not my problem. Show notes from the ASP.NET / MVC / Azure show: What was just released Visual Studio 2012 Web Developer features ASP.NET 4.5 Web Forms Strongly Typed data controls Data access via command methods Similar Binding syntax to ASP.NET MVC Some context: Damian Edwards and WebFormsMVP Two questions from Jesse: Q: Are you making this harder or more complicated for Web Forms developers? Short answer: Nothing's removed, it's just a new option History of SqlDataSource, ObjectDataSource Q: If I'm using some MVC patterns, why not just move to MVC? Short answer: This works really well in hybrid applications, doesn't require a rewrite Allows sharing models, validation, other code between Web Forms and MVC ASP.NET MVC Adaptive Rendering (oh, also, this is in Web Forms 4.5 as well) Display Modes Mobile project template using jQuery Mobile OAuth login to allow Twitter, Google, Facebook, etc. login Jon (and friends') MVC 4 book on the way: Professional ASP.NET MVC 4 Windows 8 development Jesse and Jon announce they're working on a new book: Pro Windows 8 Development with XAML and C# Jon and Jesse agree that it's nice to be able to write Windows 8 applications using the same skills they picked up for Silverlight, WPF, and Windows Phone development. Compare / contrast ASP.NET MVC and Windows 8 development Q: Does ASP.NET and HTML5 development overlap? Jon thinks they overlap in the MVC world because you're writing HTML views without controls Jon describes how his web development career moved from a preoccupation with server code to a focus on user interaction, which occurs in the browser Jon mentions his NDC Oslo presentation on Learning To Love HTML as Beautiful Code Q: How do you apply C# / XAML or HTML5 skills to Windows 8 development? Q: If I'm a XAML programmer, what's the learning curve on getting up to speed on ASP.NET MVC? Jon describes the difference in application lifecycle and state management Jon says it's nice that web development is really interactive compared to application development Q: Can you learn MVC by reading a book? Or is it a lot bigger than that? What is Azure, and why would I use it? 
Jon describes the traditional Azure platform mode and how Azure Web Sites fits in Q: Why wouldn't Jesse host his blog on Azure Web Sites? Domain names on Azure Web Sites File hosting options Q: Is Azure just another host? How is it different from any of the other shared hosting options? A: Azure gives you the ability to scale up or down whenever you want A: Other services are available if or when you want them

    Read the article

  • Passing data between the VirtualBox Host and the Guest

    - by Fat Bloke
    Here's a good question: "How can you figure out the VM name from within the VM itself?" While this data is not automatically available, the general purpose, and very powerful VirtualBox "GuestProperty" APIs can be used from the host and guest to pass arbitrary data, in key/value pairs format, in and out of the guest. Note that this does require that the VirtualBox Guest Additions have been installed in the guest. To play with this, try using the "VBoxManage" command line on your VirtualBox host machine, and "VBoxControl" in the guest. Host syntax VBoxManage guestproperty get <vmname>|<uuid> <property> [--verbose] VBoxManage guestproperty set <vmname>|<uuid> <property> [<value> [--flags <flags>]] VBoxManage guestproperty enumerate <vmname>|<uuid> [--patterns <patterns>] VBoxManage guestproperty wait <vmname>|<uuid> <patterns> [--timeout <msec>] [--fail-on-timeout]   Guest syntax VBoxControl.exe guestproperty        get <property> [-verbose] VBoxControl.exe guestproperty        set <property> [<value> [-flags <flags>]] VBoxControl.exe guestproperty        enumerate [-patterns <patterns>] VBoxControl.exe guestproperty        wait <patterns>                                      [-timestamp <last timestamp>]                                      [-timeout <timeout in ms>  So to solve our problem above, we set the vm name in the Host system on an arbitrary key like this: $ VBoxManage guestproperty set "Windows 7 (x64)" /MyData/VMname "Windows 7 (x64)" And within the guest we can use: C:\Program Files\Oracle\VirtualBox Guest Additions>VBoxControl.exe guestproperty get /MyData/VMname Oracle VM VirtualBox Guest Additions Command Line Management Interface Version 4.1.14 (C) 2008-2012 Oracle Corporation All rights reserved. Value: Windows 7 (x64) The GuestProperty API is pretty powerful, so for the interested, get more info in the User Manual. - FB 

    Read the article

  • How to VPN on demand Mac OS X?

    - by Kami
    I'm trying to configure Snow Leopard's VPN on demand service, without success. I've tried the following domain+configuration pairs but none of them have worked: domain.net default *.domain.net default My goal is that each time I go to www.domain.net with Safari, ssh to server1.domain.net, or do anything else on domain.net, the connection will be established through the VPN! I've tried plenty of different configs but it has never worked so far... Edit : R

    Read the article

  • On-demand RHEL/Centos Linux admin and MySQL admin

    - by user1322092
    Could you share with me a few reputable businesses/websites where I can quickly onboard a RHEL/CentOS Linux admin or even a MySQL admin (say, if I need help with disaster recovery)? I have a cloud server, and I would like to task an admin with performing specific or even periodic maintenance. With the abundance of solely-run cloud servers, I would imagine there's a demand for this type of service (certainly for me).

    Read the article

  • Why do ICMP Redirect Host messages happen?

    - by El Barto
    I'm setting up a Debian box as a router for 4 subnets. For that I have defined 4 virtual interfaces on the NIC where the LAN is connected (eth1). eth1 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.1.1 Bcast:10.1.1.255 Mask:255.255.255.0 inet6 addr: fe80::960c:6dff:fe82:d98/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:6026521 errors:0 dropped:0 overruns:0 frame:0 TX packets:35331299 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:673201397 (642.0 MiB) TX bytes:177276932 (169.0 MiB) Interrupt:19 Base address:0x6000 eth1:0 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.2.1 Bcast:10.1.2.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:19 Base address:0x6000 eth1:1 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.3.1 Bcast:10.1.3.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:19 Base address:0x6000 eth1:2 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.4.1 Bcast:10.1.4.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:19 Base address:0x6000 eth2 Link encap:Ethernet HWaddr 6c:f0:49:a4:47:38 inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::6ef0:49ff:fea4:4738/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:199809345 errors:0 dropped:0 overruns:0 frame:0 TX packets:158362936 errors:0 dropped:0 overruns:0 carrier:1 collisions:0 txqueuelen:1000 RX bytes:3656983762 (3.4 GiB) TX bytes:1715848473 (1.5 GiB) Interrupt:27 eth3 Link encap:Ethernet HWaddr 94:0c:6d:82:c8:72 inet addr:192.168.2.5 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::960c:6dff:fe82:c872/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:110814 errors:0 dropped:0 overruns:0 frame:0 TX packets:73386 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:16044901 (15.3 MiB) TX bytes:42125647 (40.1 MiB) Interrupt:20 Base address:0x2000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:22351 errors:0 dropped:0 overruns:0 frame:0 TX packets:22351 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2625143 (2.5 MiB) TX bytes:2625143 (2.5 MiB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:41358924 errors:0 dropped:0 overruns:0 frame:0 TX packets:23116350 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:3065505744 (2.8 GiB) TX bytes:1324358330 (1.2 GiB) I have two other computers connected to this network. One has IP 10.1.1.12 (subnet mask 255.255.255.0) and the other one 10.1.2.20 (subnet mask 255.255.255.0). I want to be able to reach 10.1.1.12 from 10.1.2.20. Since packet forwarding is enabled in the router and the policy of the FORWARD chain is ACCEPT (and there are no other rules), I understand that there should be no problem to ping from 10.1.2.20 to 10.1.1.12 going through the router. 
However, this is what I get: $ ping -c15 10.1.1.12 PING 10.1.1.12 (10.1.1.12): 56 data bytes Request timeout for icmp_seq 0 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 81d4 0 0000 3f 01 e2b3 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 1 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 899b 0 0000 3f 01 daec 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 2 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 78fe 0 0000 3f 01 eb89 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 3 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 14b8 0 0000 3f 01 4fd0 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 4 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 8ef7 0 0000 3f 01 d590 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 5 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 ec9d 0 0000 3f 01 77ea 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 6 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 70e6 0 0000 3f 01 f3a1 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 7 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 b0d2 0 0000 3f 01 b3b5 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 8 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 f8b4 0 0000 3f 01 6bd3 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 9 Request timeout for icmp_seq 10 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 1c95 0 0000 3f 01 47f3 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 11 Request timeout for icmp_seq 12 Request timeout for icmp_seq 13 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 62bc 0 0000 3f 01 01cc 10.1.2.20 10.1.1.12 Why does this happen? From what I've read the Redirect Host response has something to do with the fact that the two hosts are in the same network and there being a shorter route (or so I understood). They are in fact in the same physical network, but why would there be a better route if they are not on the same subnet (they can't see each other)? What am I missing? 
Some extra info you might want to see: # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.8.0.2 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo 192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth3 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth2 10.1.4.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 10.1.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 10.1.3.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth2 0.0.0.0 192.168.2.1 0.0.0.0 UG 100 0 0 eth3 # iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination # iptables -L -n -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- !10.0.0.0/8 10.0.0.0/8 MASQUERADE all -- 10.0.0.0/8 !10.0.0.0/8 Chain OUTPUT (policy ACCEPT) target prot opt source destination
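
    If the goal is just to silence these while keeping all four subnets on the one physical segment, the relevant knobs are the redirect sysctls; a sketch (interface name taken from the output above):

      # on the Debian router: stop announcing "there is a more direct path"
      sysctl -w net.ipv4.conf.all.send_redirects=0
      sysctl -w net.ipv4.conf.eth1.send_redirects=0
      # on the clients, optionally ignore any redirects they do receive
      sysctl -w net.ipv4.conf.all.accept_redirects=0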

    Read the article

  • Show & Hide hidden files on demand in Windows

    - by matt wilkie
    A little while ago I discovered that on Ubuntu Linux one can display or conceal hidden files on demand at run-time by pressing [Ctrl-H]. This is really handy. Is there any way this can be had on Windows? I'm guessing Windows Explorer is not amenable to this, so I'm open to 3rd-party file managers provided they're open source, freeware, or cheap (in order of preference). Thanks

    Read the article

  • KVM with one host IP and a different subnet for machines

    - by Jguy
    I've already setup a KVM host with proper IP configurations, but my host had me create DHCP and use that to assign the IP's to the machines. I want to see if there's an easier way to do it (or better). Upon my first setting out on this, I didn't find anything that pointed me in the right direction. I'm coming off a fresh install of Debian 6.0 x64, so I have nothing installed. I've logged in, queried for the below information and changed the password from my host set one. I have a Debian 6.0 x64 system with the following initial network configuration (substituted 255 in place of my real first octave): # tail /etc/network/interfaces auto eth0 iface eth0 inet static address 255.9.24.80 broadcast 255.9.24.95 netmask 255.255.255.224 gateway 255.9.24.65 # default route to access subnet up route add -net 255.9.24.64 netmask 255.255.255.224 gw 255.9.24.65 eth0 I have a /29 subnet that I want the virtual machines to use from my host: IP: 255.46.187.152 /29 Mask: 255.255.255.248 Broadcast: 255.46.187.159 Usable IP addresses: 255.46.187.153 to 255.46.187.158 I like the interface of Cloudmin, so I want to try and use that if I can to administrate my guests. So, my questions: How do I set this up on the host system the best so that I can use the additional Subnet IP's on the guests and have them accessible from the internet? I also need to host a DNS server, which means one of these VM's has to have two IP's assigned to it and accessable from the outside world. How can I do that using Cloudmin? I had a question about this here: Multiple IP addresses assigned to one KVM VM But I just reformatted the entire server and am trying to figure out a better way of doing this. Machine information: # ip route show 255.9.24.64/27 via 255.9.24.65 dev eth0 255.9.24.64/27 dev eth0 proto kernel scope link src 255.9.24.80 default via 255.9.24.65 dev eth0 brctl is empty # ip addr list 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether c8:60:00:54:b5:d8 brd ff:ff:ff:ff:ff:ff inet 255.9.24.80/27 brd 255.9.24.95 scope global eth0 inet6 fe80::ca60:ff:fe54:b5d8/64 scope link valid_lft forever preferred_lft forever Thank you for any help you can provide me. EDIT: I've installed kvm and cloudmin: aptitude install qemu-kvm libvirt-bin wget http://cloudmin.virtualmin.com/gpl/scripts/cloudmin-kvm-debian-install.sh ./cloudmin-kvm-debian-install.sh Rebooted and now my network configuration looks like this: # device: eth0 iface eth0 inet manual # default route to access subnet iface br0 inet static address 255.9.24.80 netmask 255.255.255.224 broadcast 255.9.24.95 network 255.9.24.64 bridge_ports eth0 gateway 255.9.24.65 I setup in Cloudmin the Start IP as 255.46.187.153 and End IP as 255.46.187.158. The CIDR is 29 and the gateway is 255.46.187.152. I've installed a guest with ubuntuserver 12.04 x64, which was able to get and retrieve internet resources during installation, but now cannot reach anything nor can it be reached from anything. Its network configuration is: iface eth0 inet static address 255.46.187.153 netmask 255.255.255.224 broadcast 255.46.187.159 gateway 255.46.187.152 dns-nameservers <host provided nameservers> And is not able to ping google.com through DNS or direct IP, I can't ping the VM from the outside or the host. any ideas now?

    Read the article

  • iMac OSX "no route to host"

    - by jairo
    I have an issue with one of the computers on my network. It is an iMac running OS X 10.5.8. The issue is accessing certain websites. For instance, one of the websites that the computer is unable to connect to is farmville.com. When I ping farmville.com it returns "no route to host": $ ping farmville.com PING farmville.com (50.16.253.102): 56 data bytes ping: sendto: No route to host ping: sendto: No route to host ping: sendto: No route to host When I traceroute farmville: $ traceroute farmville.com traceroute: Warning: farmville.com has multiple addresses; using 50.16.253.109 traceroute to farmville.com (50.16.253.109), 64 hops max, 40 byte packets traceroute: sendto: No route to host 1 traceroute: wrote farmville.com 40 chars, ret=-1 tracerouting the farmville IP address 50.16.253.109: $ traceroute 50.16.253.109 traceroute to farmville.com (50.16.253.109), 64 hops max, 40 byte packets traceroute: sendto: No route to host 1 traceroute: wrote farmville.com 40 chars, ret=-1 Now the interesting part is that on another computer (running Ubuntu 10.10) I have no issues at all accessing this website, which tells me that it's not the internet connection. I've also disabled the firewall on the router, to no avail. The /etc/hosts file on the Mac is the following. The /private/etc/hosts file is empty: ## # Host Database # # localhost is used to configure the loopback interface # when the system is booting. Do not change this entry. ## 127.0.0.1 localhost #255.255.255.255 broadcasthost ::1 localhost fe80::1%lo0 localhost Any help is appreciated. Many thanks
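
    Since "no route to host" is generated locally by the sending machine, the routing table on the iMac (rather than DNS or the remote site) is the usual suspect; a sketch of things to check, using the destination from the output above:

      route -n get 50.16.253.102   # which interface/gateway does this destination resolve to?
      netstat -rn | grep 50.16     # any stale host-specific routes lying around?
      sudo route -n flush          # clear learned routes (renew DHCP or re-toggle the interface afterwards)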

    Read the article

  • OpenVZ: Choosing right MySQL-Server depending on host

    - by Scheintod
    What I have: two servers running Wheezy/OpenVZ with: one MySQL container on each host, master/master replicated (mysql1/mysql2); replicated DNS on each host (dns1/dns2); different web containers on each host but regularly backed up to the other. What I want: Each container should use the "local" MySQL server (the one which runs on the same hardware node). I'd like to be able to move the web containers between the two hosts. Each container should choose the MySQL server (semi) automatically. This scheme should continue working if one host is down. What I tried: Currently I'm keeping track of which container should run on which host by DNS entries which are queried by scripts, e.g. for questions like: "Which container should be backed up on/to which host." For choosing the right MySQL server I have one extra entry like "mysql.container_abc" which resolves to either mysql1/mysql2. So in the applications in the container I can use "mysql.container_abc" for e.g. mysql_connect, and if I want to move the container around I just need to change the DNS. Now I notice one problem with this approach: every mysql_connect generates one DNS query because the DNS is not cached, and this slows the request down unnecessarily. What I would like better: Some way of passing the information on which host we are running to the container and using it directly instead of using DNS. E.g. some way of setting a custom /etc/hosts entry in the container. Or any other great idea. Doesn't have to include DNS but shouldn't require too much special "magic" inside the container.
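
    A low-tech sketch of the "/etc/hosts entry in the container" idea (container IDs, paths and addresses are assumptions, and a simfs layout visible under /var/lib/vz/private is assumed): each hardware node writes the same alias, pointing at its own local MySQL container, into every container it hosts, so applications connect to one fixed name with no DNS lookup per connection:

      # on hardware node 1, whose local MySQL container is 10.0.0.11 (node 2 runs the same with 10.0.0.12)
      for ct in $(vzlist -H -o ctid); do
          grep -q mysql.local /var/lib/vz/private/$ct/etc/hosts || \
              echo "10.0.0.11 mysql.local" >> /var/lib/vz/private/$ct/etc/hosts
      done

    Re-running the loop after a container has been moved keeps the alias pointing at the new node's local server.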

    Read the article

  • Virtual box host-only adapter configuration

    - by Xoundboy
    I have VirtualBox 4 running on Win 7 with a Centos 6 guest VM set up for hosting my dev server. When I'm connected to my home network the guest can be accessed via a static IP address that I configured (192.168.56.2), but not when I'm in the office. I'm guessing that the DHCP server in the office doesn't have a gateway configured for the 192.168.56.x IP range. I read something about the VB host-only adapter that should allow me to set this guest VM up in such a way that I don't need to be on any network to be able to access the guest from the host using a static IP. I've not been able to find out exactly how to configure this though. Can anyone give me an example configuration, thanks. UPDATE: Thanks for your responses. I've now set up a single virtual network adapter in VirtualBox and set it to host-only: C:\Users\Ben>vboxmanage list hostonlyifs Name: VirtualBox Host-Only Ethernet Adapter GUID: d419ef62-3c46-4525-ad2d-be506c90459a Dhcp: Disabled IPAddress: 192.168.56.2 NetworkMask: 255.255.255.0 IPV6Address: fe80:0000:0000:0000:78e3:b200:5af3:2a57 IPV6NetworkMaskPrefixLength: 64 HardwareAddress: 08:00:27:00:94:e8 MediumType: Ethernet Status: Up VBoxNetworkName: HostInterfaceNetworking-VirtualBox Host-Only Ethernet Adapter On the guest I've set up eth0 to use the same IP address as the host-only adapter (192.168.56.2) but when I try to log in using Putty I still get "Network Error : connection refused". VirtualBox DHCP servier is enabled but I can't ping the gateway (192.168.56.1) from either host nor guest. There's no firewall running on either OS. What next?
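
    A sketch of the guest side in CentOS ifcfg form (addresses assumed): the guest must not reuse the host-only adapter's own 192.168.56.2, any other free address in that /24 will do, and no gateway is needed just for host-to-guest SSH:

      # /etc/sysconfig/network-scripts/ifcfg-eth0 on the CentOS 6 guest
      DEVICE=eth0
      ONBOOT=yes
      BOOTPROTO=none
      IPADDR=192.168.56.10
      NETMASK=255.255.255.0

    After a "service network restart", ssh 192.168.56.10 from the host should work regardless of which office or home network the host is on, since host-only traffic never leaves the machine.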

    Read the article

  • Macvlan based interface pings from host but not from namespace

    - by jtlebi
    My setup: Private network vboxnet1 10.0.7.0/24, 1 host (Ubuntu desktop), 1 VM (Ubuntu server, VirtualBox). Addressing layout: HOST: 10.0.7.1 VM: 10.0.7.101 VM MAC NAMESPACE: 10.0.7.102 On the VM, I ran the following commands: ip netns add mac # create a new namespace ip link add link eth0 mac0 type macvlan # create a new macvlan interface ip link set mac0 netns mac On the mac namespace, inside the VM: ip link set lo up ip link set mac up ip addr add 10.0.7.102/24 dev mac0 So that we basically end up with: (Like Inception?) +------------------------+ | Host: 10.0.7.1 | | | | +--------------------+ | | | VM: 10.0.7.101 | | | | | | | | +----------------+ | | | | | NS: 10.0.7.102 | | | | | | | | | | | +----------------+ | | | +--------------------+ | +------------------------+ What works: Ping between Host and VM Ping between NS and NS dhclient from NS What does not work: ping between NS and VM ping between NS and Host Where I started to go nuts: tcpdump on host (the real machine) actually shows ARP requests AND replies tcpdump on NS shows ARP requests sent to the host tcpdump on VM makes the whole mess work (!) -- ping starts to get answers when tcpdump is started on the VM ?!? So, I bet you were eager for it, my question is: how do I make it work? I suspect something's wrong with ARP on the macvlan inside the NS but can't figure out what exactly... Btw, I did the same experiments with the mac0 interface directly on the VM (no namespace) and it worked flawlessly.
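
    For what it's worth, a workaround sketch that is often used for exactly this symptom (addresses and interface names assumed): traffic between a macvlan child and its own parent interface is filtered by design, so the VM can be given a small macvlan "shim" of its own and reach the namespace through that instead of through eth0 (ideally both macvlan interfaces are created with mode bridge):

      # on the VM, outside the namespace
      ip link add link eth0 macshim type macvlan mode bridge
      ip addr add 10.0.7.103/32 dev macshim
      ip link set macshim up
      ip route add 10.0.7.102/32 dev macshim   # reach the NS address via the shim, not via eth0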

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated about how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature in Windows Azure platform. Since it’s low-cost, and it provides IIS and IISNode components so that we can host our Node.js application though Git, FTP and WebMatrix without any configuration and component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on worker role. Below are some benefits of using worker role. - WAWS leverages IIS and IISNode to host Node.js application, which runs in x86 WOW mode. It reduces the performance comparing with x64 in some cases. - WACS worker role does not need IIS, hence there’s no restriction of IIS, such as 8000 concurrent requests limitation. - WACS provides more flexibility and controls to the developers. For example, we can RDP to the virtual machines of our worker role instances. - WACS provides the service configuration features which can be changed when the role is running. - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site while in WACS we can have up to 20 instances in a subscription. - Since when using WACS worker role we starts the node by ourselves in a process, we can control the input, output and error stream. We can also control the version of Node.js.   Run Node.js in Worker Role Node.js can be started by just having its execution file. This means in Windows Azure, we can have a worker role with the “node.exe” and the Node.js source files, then start it in Run method of the worker role entry class. Let’s create a new windows azure project in Visual Studio and add a new worker role. Since we need our worker role execute the “node.exe” with our application code we need to add the “node.exe” into our project. Right click on the worker role project and add an existing item. By default the Node.js will be installed in the “Program Files\nodejs” folder so we can navigate there and add the “node.exe”. Then we need to create the entry code of Node.js. In WAWS the entry file must be named “server.js”, which is because it’s hosted by IIS and IISNode and IISNode only accept “server.js”. But here as we control everything we can choose any files as the entry code. For example, I created a new JavaScript file named “index.js” in project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the context menu “Add new item”. We have to create a text file, and then rename it to JavaScript extension. After we added these two files we should set their “Copy to Output Directory” property to “Copy Always”, or “Copy if Newer”. Otherwise they will not be involved in the package when deployed. Let’s paste a very simple Node.js code in the “index.js” as below. As you can see I created a web server listening at port 12345. 1: var http = require("http"); 2: var port = 12345; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then we need to start “node.exe” with this file when our worker role was started. This can be done in its Run method. I found the Node.js and entry JavaScript file name, and then create a new process to run it. Our worker role will wait for the process to be exited. 
If everything is OK once our web server was opened the process will be there listening for incoming requests, and should not be terminated. The code in worker role would be like this. 1: public override void Run() 2: { 3: // This is a sample worker implementation. Replace with your logic. 4: Trace.WriteLine("NodejsHost entry point called", "Information"); 5:  6: // retrieve the node.exe and entry node.js source code file name. 7: var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe"); 8: var js = "index.js"; 9:  10: // prepare the process starting of node.exe 11: var info = new ProcessStartInfo(node, js) 12: { 13: CreateNoWindow = false, 14: ErrorDialog = true, 15: WindowStyle = ProcessWindowStyle.Normal, 16: UseShellExecute = false, 17: WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot") 18: }; 19: Trace.WriteLine(string.Format("{0} {1}", node, js), "Information"); 20:  21: // start the node.exe with entry code and wait for exit 22: var process = Process.Start(info); 23: process.WaitForExit(); 24: } Then we can run it locally. In the computer emulator UI the worker role started and it executed the Node.js, then Node.js windows appeared. Open the browser to verify the website hosted by our worker role. Next let’s deploy it to azure. But we need some additional steps. First, we need to create an input endpoint. By default there’s no endpoint defined in a worker role. So we will open the role property window in Visual Studio, create a new input TCP endpoint to the port we want our website to use. In this case I will use 80. Even though we created a web server we should add a TCP endpoint of the worker role, since Node.js always listen on TCP instead of HTTP. And then changed the “index.js”, let our web server listen on 80. 1: var http = require("http"); 2: var port = 80; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then publish it to Windows Azure. And then in browser we can see our Node.js website was running on WACS worker role. We may encounter an error if we tried to run our Node.js website on 80 port at local emulator. This is because the compute emulator registered 80 and map the 80 endpoint to 81. But our Node.js cannot detect this operation. So when it tried to listen on 80 it will failed since 80 have been used.   Use NPM Modules When we are using WAWS to host Node.js, we can simply install modules we need, and then just publish or upload all files to WAWS. But if we are using WACS worker role, we have to do some extra steps to make the modules work. Assuming that we plan to use “express” in our application. Firstly of all we should download and install this module through NPM command. But after the install finished, they are just in the disk but not included in the worker role project. If we deploy the worker role right now the module will not be packaged and uploaded to azure. Hence we need to add them to the project. On solution explorer window click the “Show all files” button, select the “node_modules” folder and in the context menu select “Include In Project”. But that not enough. We also need to make all files in this module to “Copy always” or “Copy if newer”, so that they can be uploaded to azure with the “node.exe” and “index.js”. This is painful step since there might be many files in a module. 
So I created a small tool that updates a C# project file and marks all of its items as "Copy always". The code is very simple.

static void Main(string[] args)
{
    if (args.Length < 1)
    {
        Console.WriteLine("Usage: copyallalways [project file]");
        return;
    }

    var proj = args[0];
    File.Copy(proj, string.Format("{0}.bak", proj));

    var xml = new XmlDocument();
    xml.Load(proj);
    var nsManager = new XmlNamespaceManager(xml.NameTable);
    nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003");

    // add the output setting to copy always
    var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager);
    UpdateNodes(contentNodes, xml, nsManager);
    var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager);
    UpdateNodes(noneNodes, xml, nsManager);
    xml.Save(proj);

    // remove the namespace attributes
    var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>");
    xml.LoadXml(content);
    xml.Save(proj);
}

static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager)
{
    foreach (XmlNode node in nodes)
    {
        var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager);
        if (copyToOutputDirectoryNode == null)
        {
            var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null);
            n.InnerText = "Always";
            node.AppendChild(n);
        }
        else
        {
            if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0)
            {
                copyToOutputDirectoryNode.InnerText = "Always";
            }
        }
    }
}

Please be careful when using this tool. I created it only for this demo, so do not use it directly in a production environment. Unload the worker role project, run the tool with the worker role project file name as the command-line argument, and it will mark all items as "Copy always". Then reload the worker role project.

Now let's change "index.js" to use express.

var express = require("express");
var app = express();

var port = 80;

app.configure(function () {
});

app.get("/", function (req, res) {
    res.send("Hello Node.js!");
});

app.get("/User/:id", function (req, res) {
    var id = req.params.id;
    res.json({
        "id": id,
        "name": "user " + id,
        "company": "IGT"
    });
});

app.listen(port);

Finally, let's publish it and have a look in the browser.

Use Windows Azure SQL Database

We can use Windows Azure SQL Database (a.k.a. WASD) from Node.js when hosting in a worker role as well. Since we control the version of Node.js, here we can use the x64 version of "node-sqlserver", which is not possible when hosting Node.js on WAWS, since WAWS only supports x86. Just install the "node-sqlserver" module from NPM, copy "sqlserver.node" from the "Build\Release" folder to the "Lib" folder, include them in the worker role project, and run my tool to mark them as "Copy always". Finally, update "index.js" to use WASD.
var express = require("express");
var sql = require("node-sqlserver");

var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
var port = 80;

var app = express();

app.configure(function () {
    app.use(express.bodyParser());
});

app.get("/", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/text/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/sproc/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "EXEC GetItem '" + key + "', '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.post("/new", function (req, res) {
    var key = req.body.key;
    var culture = req.body.culture;
    var val = req.body.val;

    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot insert the record.");
                }
                else {
                    res.send(200, "Inserted successfully.");
                }
            });
        }
    });
});

app.listen(port);

Publish it to Azure, and now we can see our Node.js application working with WASD through the x64 version of "node-sqlserver".

Summary

In this post I demonstrated how to host Node.js in a Windows Azure Cloud Service worker role. By using a worker role we can control the version of Node.js as well as the entry code, and it is possible to do some preparation work before the Node.js application starts. It also removes the IIS and IISNode limitations. I personally recommend using a worker role for Node.js hosting.

But there are some problems with the approach I described here. The first is that we need to set all JavaScript files and module files to "Copy always" or "Copy if newer" manually. The second is that, this way, we cannot retrieve the cloud service configuration information.
For example, we defined the endpoint in the worker role properties, but we also hard-coded the listening port in the Node.js code. Ideally the Node.js code should retrieve the endpoint itself, but I can tell you that it will not work with the approach described here. In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can read the cloud service configuration from Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.

Hope this helps,
Shaun

All documents, related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
