Search Results

Search found 18363 results on 735 pages for 'external ip'.


  • Can't connect to my server from outside my wireless network. Server is OpenERP running on Ubuntu 12.04 desktop; router is a Cisco small business router

    - by user2613541
    I've read up on port forwarding and successfully forwarded port 8069 to my server's IP address. I can access OpenERP when I'm connected to my office's network, but not from outside it. What am I missing? My computer's IP address starts with 192... Do I first have to look up the router's public IP address and then my server's IP address to reach my server from outside? What should I type into my browser? I searched all day yesterday.
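    A minimal sketch of how to check each piece (ifconfig.me is just one of several what-is-my-IP services; <public-ip> stands for whatever address it reports):

        # run on a machine inside the office to learn the router's public IP
        curl ifconfig.me
        # then, from a machine outside the office, test whether the forwarded port answers
        nc -zv <public-ip> 8069
        # if it does, browse to http://<public-ip>:8069

    If the port test works from inside but not from outside, the forwarding rule (or an ISP block on that port) is the usual suspect.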

    Read the article

  • LAN full of public IPv4 addresses - How to filter it?

    - by sparc86
    The answer to my question may not be that hard, but I don't know what to do. I just started a new job at a university and found that the network (the LAN) is full of public IP addresses. Seriously, the whole LAN (probably more than 150 hosts) has its own Internet-routable IP address per host, and I don't know how to manage it. I have good experience using iptables (the Linux firewall) in a NATed environment. But how should I proceed in an environment where my whole LAN works with a bunch of public IP addresses? Should I just use FORWARD rules and ignore the NAT rules, or is there any other issue in such an environment that I should take care of? Can I add a firewall between the router and the LAN to filter packets for these public IP addresses in my LAN, or will this just not work? Thanks!
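    A routed (non-NAT) firewall really does come down to the FORWARD chain. A minimal sketch, assuming the hypothetical layout where eth0 faces the upstream router and eth1 faces the LAN:

        # default-deny for traffic routed through the box
        iptables -P FORWARD DROP
        # let replies to established connections back through
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # LAN hosts may initiate connections outward
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        # example inbound allowance to one public host (placeholder address)
        iptables -A FORWARD -i eth0 -o eth1 -d 203.0.113.10 -p tcp --dport 80 -j ACCEPT

    The NAT table is simply never used for address rewriting here; the box routes the public addresses as-is.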

    Read the article

  • Website is not accessible from a server that is behind a proxy

    - by Bhoot
    I hosted a website on a Windows 2008 R2 server that runs in a private domain. I set up bindings on ports 80 and 443 for HTTP and HTTPS respectively, and created inbound rules for ports 80 and 443 in Windows Firewall. After doing all this, I am still not able to access my website from a remote machine. IE: "Internet Explorer cannot display the webpage". Chrome: "Oops! Google Chrome could not find xxxxxx". I tried accessing the website by IP address, but no luck. I tried to ping the server, but it says "TTL expired in transit". I then found some advice online to check whether the server is using some kind of proxy in between. www.getip.com reports one IP address for me, but ipconfig /all gives a different one. Is it really a problem if we use a proxy? I am not sure I have concluded correctly, but is there any way to resolve this issue? Update: I figured it out. I have to call the website by its external IP address; because of the proxy settings I was not able to reach it by the server's internal IP or machine name.

    Read the article

  • Iptables and counters

    - by mehturt
    I'm trying to use iptables counters with Munin to monitor traffic of hosts on my local subnet. For each host I set up a rule like this: iptables -I OUTPUT -d $ip. This should count the packets going from the firewall to $ip, correct? I found that this does not seem to count all packets. I start tcpdump on my router (Linux) and I see packets to $ip that are not counted. For example, I check the number of packets for the rule matching my phone's IP: I start tcpdump, refresh Gmail on my phone, and I see packets in tcpdump's output, but the iptables rule counters are not incremented. Then I open a web page on the same phone and the counters are incremented. What could be the reason?
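    One plausible explanation: the OUTPUT chain only sees packets generated by the router itself, while traffic the router forwards on behalf of LAN hosts traverses the FORWARD chain. A sketch of counting rules covering both directions ($ip as above):

        # pure counting rules: no -j target, so they match, count, and fall through
        iptables -I FORWARD -d $ip
        iptables -I FORWARD -s $ip
        # read the per-rule packet/byte counters with exact (unabbreviated) values
        iptables -L FORWARD -v -n -x

    Gmail traffic from the phone is routed through the box, so it would only ever hit FORWARD; only connections terminating on or originating from the router itself show up in OUTPUT.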

    Read the article

  • Destination host unreachable

    - by user1010101
    I have two routers connected to one switch that carries two VLANs; one router holds the routing table for VLAN 1 and the other the routing table for VLAN 2, and I have trunked both routers' cables to the switch. On each router I set one route, corresponding to the PC that uses that router as its gateway. When I ping from 192.168.100.2 to 192.168.200.2 it tells me the destination host is unreachable, and the message comes from the router 192.168.100.1. So I guess the router for 192.168.100.x does not see the router for 192.168.200.x, right? Or am I wrong? What are good troubleshooting steps? http://i.imgur.com/b94Ir.png is a diagram of the network; I cannot post the image inline since I don't have enough reputation.
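    A hedged set of first steps, walking the path hop by hop (addresses as in the question; use tracert instead of traceroute on Windows hosts):

        ping 192.168.100.1        # can the PC reach its own gateway?
        ping 192.168.200.1        # can it reach the far router's VLAN interface?
        traceroute 192.168.200.2  # where exactly does the path stop?

    "Destination host unreachable" from 192.168.100.1 usually means that router has no route toward 192.168.200.0/24, so checking each router's routing table for a route (static or learned) to the other VLAN's subnet would be the next step.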

    Read the article

  • A really unique case of Load Balancing

    - by Shamshun
    I have an internet connection with limited up/down bandwidth per IP address. What I want to do is get multiple IP addresses on a single LAN interface and use a load balancer to distribute traffic across them. I was successful at getting 100 IP addresses on a single interface. My problem is that Linux and Windows use the first IP address of an interface by default, so my bandwidth is not increased. I would appreciate it if someone could tell me what load-balancing software can distribute load between multiple IPs on a single interface. I have tried to do so on both Windows 7 and BackTrack Linux R2.
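    On Linux, one hedged sketch is to let Netfilter rewrite the source address per connection instead of using separate balancer software; the SNAT target accepts an address range and spreads connections across it (all addresses below are hypothetical):

        # add secondary addresses to the interface (repeat, or script, for the full range)
        ip addr add 10.0.0.11/24 dev eth0
        # source-NAT outgoing connections across the whole block
        iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 10.0.0.10-10.0.0.110

    Each new connection gets mapped to one source address from the range, so a per-IP cap applies per connection's source rather than to one address; whether this actually raises throughput depends on how the ISP enforces the limit.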

    Read the article

  • How to set up a VPN tunnel (IPsec) connection

    - by Alfwed
    I'm working with a client who wants to set up a VPN tunnel between their network and ours. They're in charge of the tunnel, and to give us access they are asking me for my public IP and my LAN IP. This is what I get from ifconfig on the server I will use to connect to the VPN (output translated from French):

        $ ifconfig
        eth0      Link encap:Ethernet  HWaddr d4:ae:52:cd:xx:xx
                  inet addr:62.210.xxx.xxx  Bcast:62.210.xxx.xxx  Mask:255.255.255.0
                  inet6 addr: fe80::d6ae:52ff:xxxx:xx/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:55255032 errors:0 dropped:779628 overruns:0 frame:0
                  TX packets:5419527 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:5598164393 (5.5 GB)  TX bytes:1034297288 (1.0 GB)
                  Interrupt:16 Memory:c0000000-c0012800

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:45923382 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:45923382 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0

    The inet addr 62.210.xxx.xxx is my public IP, but it seems I don't have any LAN IP. Can the connection work without a LAN IP, or should I create a private network somehow?

    Read the article

  • Mac OS X - Time Machine backup fails verification - What can I do to save the history?

    - by usermac75
    How do I make Time Machine create a new complete backup without losing older versions of backed-up files? Verbose: I am using Time Machine on OS X (Snow Leopard) to back up the whole computer to an external drive. I especially like the "history", i.e. the feature that lets you restore an older version of a file. Problem: I had some data corruption on my external backup drive; I had Disk Utility repair it, and it found and fixed some faults. After that, the external drive was OK and I could use Time Machine again, so I let it do one more backup. I then verified the backup following http://superuser.com/questions/47628/verifying-time-machine-backups, namely with sudo diff -qr . $HOME/Desktop 2>&1 | tee $HOME/timemachine-diff.log. However: the command reported several differences and missing files, approximately 200 in total. While some of the missing files were caches or excluded directories, the differences do bother me, especially as some of my important documents are listed as differing. How can I make sure that the data on the external drive is synced correctly? Is it possible to have Time Machine do a complete new backup without losing the history? Or to have Time Machine compare all files for differences and rewrite those that differ? Or can I set some flag on the mismatched files to have them copied again (like the archive flag in Windows/DOS)? I'd rather not touch the files themselves, because I'd like to keep their modification and creation dates. Thank you for your thoughts!

    Read the article

  • 2 servers, high availability and faster response

    - by user17886
    I recently bought a second web server because I worry about hardware failure of my old server. Now that I have it, I wish to do a little more than have one server stand by and replicate all day; as long as it's there, I might as well get some advantage out of it! I have a website powered by Ubuntu 12.04, nginx, php-fpm, APC, MySQL (5.5) and CouchDB. I'm currently testing configurations where I can achieve failover AND make good use of the extra hardware for faster responses / distributed load. The setup I am testing now involves Heartbeat for IP failover and two identical servers. Of the two servers only one has a public IP address; if one server crashes, the other takes over the public IP. On an incoming request, nginx forwards the request to php-fpm on either server A or server B (50/50 if both servers are alive). Once the request has been handed to php-fpm, both servers look at localhost for the MySQL server; I use master-master MySQL replication for this, and the file system is synced with lsyncd. This works pretty well, but I've read that master-master replication is discouraged by the (MySQL) community. Another option I could think of is to use one server as a MySQL master and one as a web/PHP server. The servers would still sync their file systems and still run the same software (nginx, MySQL), but master-slave MySQL replication could be used. As long as both servers are alive, I could simply prefer nginx to listen on IP A and MySQL on IP B; if one server goes down, the other could take over its task simply by switching IPs. But I'm completely new at this, so I would greatly value your expert advice. Is either of the two setups any good? If you have any thoughts on this, please let me know! PS: virtualisation, hosting in different locations, or active/passive setups are not the solutions I'm looking for. I find virtual servers either too slow or too expensive, and I already have a passive failover at another location, but in case of a crash I found the site was still unreachable for too long due to DNS caching.
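    For the 50/50 php-fpm split specifically, a hedged sketch of the nginx side (addresses are hypothetical; this is nginx's standard upstream round-robin, not a prescription for the whole design):

        upstream php_backends {
            server 127.0.0.1:9000;       # php-fpm on this machine
            server 192.168.0.2:9000;     # php-fpm on the peer server
        }
        server {
            listen 80;
            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass php_backends;
            }
        }

    Marking the peer with the backup parameter instead would give pure failover rather than load sharing, which is one way to split the difference between the two goals.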

    Read the article

  • Site down, SSH and FTP work fine

    - by Euskadi
    I cannot access my site through my web browser, using either the domain name or the IP address (which also pings fine). My site was working last night when I went to bed, so I can't think what might have changed. I have tried restarting Apache, to no avail. Update: I've noticed that http://[IP Address]/phpmyadmin works but [domain name]/phpmyadmin does not, and the same happens for Webmin (https://[IP]:1000 works, the domain does not). Could this be a problem with BIND?
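    A hedged first check, comparing what different resolvers return and what Apache thinks it is serving (example.com stands in for the real domain):

        # what the server's own resolver returns vs. a public resolver
        dig example.com +short
        dig @8.8.8.8 example.com +short
        # list the virtual hosts Apache actually loaded
        apachectl -S

    If the IP works but the name does not while DNS still resolves correctly, the name-based virtual host configuration is the other usual suspect besides BIND.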

    Read the article

  • GRUB file stolen (Windows and openSUSE)

    - by NESIRSUSEJ
    I have a problem with my computer. I installed openSUSE on my external drive, and I still have Windows on my internal HDD. openSUSE put the GRUB bootloader on my external drive, so now I have to boot openSUSE to get into Windows. I never got a Windows CD when I bought my computer (I live in South Africa :) )... I want to install Ubuntu 12.04 on my external drive, but then I will have to format it, in which case I will lose GRUB and with it the ability to boot Windows, which I can't afford to do... yet. Does anyone have an idea what I can do?

    Read the article

  • TagLib link errors

    - by Vihaan Verma
    I'm using TagLib for one of my projects. The Debug/Release library was built using MSVC 10. When compiling my code against the library in taglib/taglib/Release, some linker errors are thrown:

        id3.cpp.1.o : error LNK2019: unresolved external symbol "__declspec(dllimport) public: class TagLib::AudioProperties * __cdecl TagLib::FileRef::audioProperties(void)const " (__imp_?audioProperties@FileRef@TagLib@@QEBAPEAVAudioProperties@2@XZ)
        id3.cpp.1.o : error LNK2019: unresolved external symbol "__declspec(dllimport) public: virtual __cdecl TagLib::String::~String(void)" (__imp_??1String@TagLib@@UEAA@XZ)
        id3.cpp.1.o : error LNK2019: unresolved external symbol "__declspec(dllimport) public: class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __cdecl TagLib::String::to8Bit(bool)const " (__imp_?to8Bit@String@TagLib@@QEBA?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@_N@Z)
        id3.cpp.1.o : error LNK2019: unresolved external symbol "__declspec(dllimport) public: virtual __cdecl TagLib::FileRef::~FileRef(void)" (__imp_??1FileRef@TagLib@@UEAA@XZ)
        id3.cpp.1.o : error LNK2019: unresolved external symbol "__declspec(dllimport) public: class TagLib::Tag * __cdecl TagLib::FileRef::tag(void)const " (__imp_?tag@FileRef@TagLib@@QEBAPEAVTag@2@XZ)
        id3.cpp.1.o : error LNK2019: unresolved external symbol "__declspec(dllimport) public: bool __cdecl TagLib::FileRef::isNull(void)const " (__imp_?isNull@FileRef@TagLib@@QEBA_NXZ)
        id3.cpp.1.o : error LNK2019: unresolved external symbol "__declspec(dllimport) public: __cdecl TagLib::FileRef::FileRef(class TagLib::FileName,bool,enum TagLib::AudioProperties::ReadStyle)" (__imp_??0FileRef@TagLib@@QEAA@VFileName@1@_NW4ReadStyle@AudioProperties@1@@Z)
        id3.cpp.1.o : error LNK2019: unresolved external symbol "__declspec(dllimport) public: __cdecl TagLib::FileName::FileName(char const *)" (__imp_??0FileName@TagLib@@QEAA@PEBD@Z)

    Every one of these is reported as referenced in function "struct MetaData __cdecl ID3::getMetaDataOfFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >)" (?getMetaDataOfFile@ID3@@YA?AUMetaData@@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z). I'm only linking tag.lib from the taglib/taglib/Release folder. Is there some other library I'm missing?

    Read the article

  • VirtualBox, MAAS: help needed

    - by Roberto Attias
    OK, I made some progress with respect to the original question (still below). I found that /etc/maas/dhcpd.conf contained option domain-name-servers 10.0.3.15, and changed it to 192.168.0.11. After restarting the daemon, I now see "node" getting the right DNS; unfortunately this doesn't fix the main problem, which I believe is the reference to 169.254.169.254. It does introduce a new question: while the remaining information from /etc/maas/dhcpd.conf is present in the MAAS GUI, there is no field to enter the DNS address. Why? Anyway, my original problem still stands... any idea? Original question follows. In VirtualBox, I have:

        master VM: Ubuntu 12.04.3 server
          eth0: Internal Network, IP 192.168.0.11
          eth1: NAT,              IP 10.0.3.15
          eth2: Host-only,        IP 192.168.56.102
          running the MAAS region and cluster controller, with DHCP and DNS enabled
        node VM:
          eth0: Internal Network

    The node VM boots via PXE. DHCP succeeds and the boot process starts, but during boot I see some issues. One of them is "disk drive not ready yet or not present" for / and /tmp. I've googled this issue, and some people say it happens when the physical disk is an SSD, which is my case; anyway, the system seems to recover from this eventually. Immediately after, it starts printing a lot of messages of the form:

        2013-10-01 16:52:37,142 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [x/y]: url error [[Errno 113] No route to host]

    That IP address is clearly bogus; I'm not sure where it came from. Before that point, I had seen the following network configuration:

        address:   192.168.0.100
        broadcast: 192.168.0.255
        netmask:   255.255.255.0
        gateway:   192.168.0.1
        dns0:      10.0.3.15
        dns1:      0.0.0.0

    Not sure if it's related, but the DNS doesn't seem right, as the node doesn't have an interface that can reach 10.0.3.15. If that's the problem, what should I change to have the DNS point to 192.168.0.11? Thanks, Roberto

    Read the article

  • GPLv2 - Multiple AI chess engines to bypass GPL

    - by Dogbert
    I have gone through a number of GPL-related questions, the most recent being this one: http://stackoverflow.com/questions/3248823/legal-question-about-the-gpl-license-net-dlls/3249001#3249001. I'm trying to see how this would work, so bear with me. I have a simple GUI interface for a game of chess. It essentially sends/receives commands to/from an external chess engine (e.g. Tong, Fruit). The application/GUI is similar in nature to XBoard (http://www.gnu.org/software/xboard/), but was independently designed. After going through a number of threads on this topic, it seems the FSF considers dynamically linking against a GPLv2 library a derivative work, so that the GPLv2 extends to my proprietary code and I must release the source of my entire project. Other legal precedents indicate the opposite: that dynamic linking doesn't cause the "viral" effect of the GPL to propagate to proprietary code. Since there is no official consensus giving a hard-and-fast answer to the dynamic-linking question, would this be an acceptable alternative:

    - I build my chess GUI so that it sends/receives the chess-engine AI logic as text commands through an external interface library that I write.
    - The interface library I wrote is itself released under the GPL.
    - The interface library is only used to communicate via a generic text pipe with external command-line chess engines.
    - The chess engine itself is built as a command-line utility rather than as a library of any sort, and just exchanges strings in the Universal Chess Interface or Chess Engine Communication Protocol format (http://en.wikipedia.org/wiki/Chess_Engine_Communication_Protocol).

    The one "gotcha" is that the interface library should not be specific to one single GPL'ed chess engine, otherwise the entire GUI would be "entirely dependent" on it. So I make my interface library able to connect to any command-line chess engine that speaks a given format, rather than just one unique engine, and I could then include pre-built command-line versions of any of the chess engines I'm using. Would that sort of approach allow me to:

    - NOT release the source for my UI;
    - release the source of the interface library I built (if necessary);
    - use one or more chess engines bundled as external command-line utilities that ship with a binary version of my UI?

    Thank you.
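    The text-pipe separation is easy to picture. A minimal sketch, assuming a hypothetical UCI-speaking engine binary named ./engine:

        # the GUI would do the equivalent of this over the engine process's stdin/stdout
        printf 'uci\nisready\nposition startpos\ngo depth 10\nquit\n' | ./engine

    The engine replies with plain-text lines (id, uciok, readyok, bestmove ...), so the only coupling between GUI and engine is the protocol's text grammar, which is exactly the architectural point the list above relies on.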

    Read the article

  • Securing ClickOnce hosted with Amazon S3 Storage

    - by saifkhan
    Well, since my post on hosting ClickOnce with Amazon S3 storage, I've received quite a few emails asking how to secure the deployment. At the time of this post, I regret to say there is no way to secure a ClickOnce deployment hosted with Amazon S3. S3 storage is secured by ACLs, meaning a username and password have to be provided before access. Amazon CloudFront, which sits on top of S3, lets you apply security settings to a CloudFront distribution by: applying encryption to the URL, or restricting by IP. The problem with CloudFront is that the URL encryption is mandatory, and ClickOnce provides no way to pass the "Amazon Public Key" to the CloudFront URL (you probably could if you started editing the XML and HTML files ClickOnce generates, but that defeats the purpose of ClickOnce altogether). What would be nice is if Amazon allowed users to restrict by IP addresses or IP blocks. I sent them an email and received a response that this is something they are looking into... I won't hold my breath, though. Alternative: I suggest you look at Rackspace Cloud hosting (http://www.rackspacecloud.com); they have very competitive pricing and recently started hosting Windows virtual servers. What you can do is rent a virtual server and set up IIS to host your ClickOnce applications; you can then use IIS security settings to restrict which IPs/blocks can access your ClickOnce payloads. Note: you don't really need Windows Server to host ClickOnce. Any web server will do. If you are familiar with Linux, you can run that VM with Rackspace for half the price of Windows. I hope you found this information helpful.

    Read the article

  • confusion about installing/using git; how to undo

    - by dan
    I'm very new to Ubuntu, so I'm sure this is a dumb question. I wanted to install some source code that was on git. I don't really know what that means, as I've never used git before, but I figured it was time to learn, so I first installed git. Next I tried to clone the git directory of the software I want to install. I got a message saying "the authenticity of host IP:IP:IP:IP can't be established". I went ahead and ended up with another message warning that such-and-such would be added to known hosts. I went ahead again, and it said something about the connection hanging up. After searching the internet for a while, I realized I didn't need git to install the software, but now I have it installed and have added some host to some file or other. I'm concerned I've created a security issue I need to fix. I know this is stupid, but can anyone help me undo what I've done, or better understand what I've done? Did adding a git project open up my system? Beyond that, can anyone tell me how git works? Everything I've found assumes I know things that I don't yet. Thanks. Dan
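    For the "undo" part, a hedged sketch (example.com stands in for whatever host the clone actually contacted):

        # see what was recorded: one line per host key you accepted
        cat ~/.ssh/known_hosts
        # remove the entry for that host again
        ssh-keygen -R example.com
        # and, if you want git gone as well
        sudo apt-get remove git

    Accepting a host key only records the remote server's public key so later connections can detect tampering; it does not grant that host any access to your system.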

    Read the article

  • Google Chrome 10 stable crashes on every page

    - by Achu
    I installed Google Chrome today; when I open any page, including Ask Ubuntu, I get this error. My memory usage looks normal (memory 56% and swap 4.8%). I reload and go to another page: same problem. What is the problem? The last dmesg output:

        [26612.341865] lo: Disabled Privacy Extensions
        [29651.852476] chrome[15472] general protection ip:1528e26 sp:7fff514a9dc0 error:0 in chrome[400000+3082000]
        [31447.190586] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=15939 PROTO=UDP SPT=4243 DPT=161 LEN=49
        [31451.250190] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=16180 PROTO=UDP SPT=4243 DPT=161 LEN=49
        [31454.260150] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=16322 PROTO=UDP SPT=4243 DPT=161 LEN=49
        [31458.648164] [UFW BLOCK] IN=eth1 OUT= MAC=00:1c:25:a1:e7:67:00:16:3e:28:5a:b7:08:00 SRC=172.23.100.6 DST=172.23.20.128 LEN=69 TOS=0x00 PREC=0x00 TTL=128 ID=16513 PROTO=UDP SPT=4243 DPT=161 LEN=49
        [33124.300112] lo: Disabled Privacy Extensions
        [33601.021406] Skipping EDID probe due to cached edid
        [34594.043501] chrome[15746]: segfault at 0 ip 0000000000d5cdd0 sp 00007fff5149ec20 error 6 in chrome[400000+3082000]
        [34597.395334] chrome[18112] general protection ip:17c85bf sp:7fff514aa4f0 error:0 in chrome[400000+3082000]
        [34616.786643] chrome[18124]: segfault at 1007 ip 00000000017c849f sp 00007fff514aabd0 error 4 in chrome[400000+3082000]
        [37277.436207] lo: Disabled Privacy Extensions
        [38549.501390] e1000e: eth1 NIC Link is Down
        [38551.122253] e1000e: eth1 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
        [38551.122263] e1000e 0000:00:19.0: eth1: 10/100 speed: disabling TSO

    Read the article

  • Help with IPTables - Masquerading + Forwarding, 1-to-1?

    - by Artiom Chilaru
    I've got a clean Ubuntu Server 10.10 with OpenSSH, OpenVPN and vsftpd installed. The server runs as a VM on a Hyper-V server (the hypervisor), has two network interfaces mapped to physical adapters (eth0 and eth1), and a virtual interface with a direct connection to the hypervisor (eth2). The VPN creates a tun0 interface when a client connects. What I want is for the remote user, connecting over the VPN, to be able to reach the hypervisor (all ports, ping, etc.). The initial idea was to make the VPN create a tap0 interface and bridge eth2 to tap0, but this didn't work, unfortunately, as it seems the adapters don't want to go into promiscuous mode (partially confirmed by MS). At the same time, both the hypervisor and the remote client over the VPN can successfully ping/connect to the Ubuntu server with no problems. So my plan right now is to try some 1-to-1 masquerading, if possible. Basically, I want every request sent from the VPN client to the Ubuntu server to be redirected to the hypervisor instead (with IP translation, of course), and every request from the hypervisor to the Ubuntu machine sent to the VPN client (IP translated too). Only one client will be connected to the VPN at a time, so I can limit it to a single IP at all times if necessary. Is this the right way to go, and if so, how can it be achieved? It's almost like a special case of port forwarding, except every single port on tun0 is forwarded to a machine on eth2, and every port on the eth2 side forwards to an IP on tun0. I guess it could be done with iptables, but I'm rather new to Linux, so I can't do it myself... help? :(
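    A hedged sketch of the tun0-to-eth2 direction with iptables (all addresses hypothetical: 10.8.0.1 as the server's own VPN address, 192.168.2.10 as the hypervisor):

        # let the kernel route between interfaces at all
        sysctl -w net.ipv4.ip_forward=1
        # anything the VPN client sends to the server's tun0 address goes to the hypervisor
        iptables -t nat -A PREROUTING -i tun0 -d 10.8.0.1 -j DNAT --to-destination 192.168.2.10
        # rewrite the source so the hypervisor's replies come back via the Ubuntu VM
        iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE

    The hypervisor-to-client direction would be the mirror image (a DNAT on eth2 toward the client's tun0 address); conntrack automatically handles the reverse translation of replies in both cases.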

    Read the article

  • Exempt programs from using active VPN connection

    - by Oxwivi
    When I connect to a VPN, all my network traffic is automatically routed through it. Is there a way to add exemptions to that? I don't know if adding exceptions has anything to do with the VPN protocol, but the VPN I'm using is OpenVPN. Speaking of OpenVPN, why is it not installed by default on Ubuntu, unlike PPTP? I could not get the list of IRCHighWay's servers, and this is the result I get trying to connect in XChat with the bash script running:

        * Looking up irc.irchighway.net
        * Connecting to irc.irchighway.net (65.23.153.98) port 6667...
        * Connected. Now logging in...
        * You have been K-Lined.
        * *** You are not welcome on this network.
        * *** K-Lined for Open proxies are not allowed. (2011/02/26 01.21)
        * *** Your IP is 173.0.14.9
        * *** For assistance, please email [email protected] and include everything shown here.
        * Closing Link: 0.0.0.0 (Open proxies are not allowed. (2011/02/26 01.21))
        * Disconnected (Remote host closed socket).

    The IP 173.0.14.9 is the one from my VPN. I had forgotten to check ip route list before running the script; this is the output after running it:

        ~$ ip route list
        99.192.193.241 dev ppp0  proto kernel  scope link  src 173.0.14.9
        173.0.14.2 via 192.168.1.1 dev eth1  proto static
        173.0.14.2 via 192.168.1.1 dev eth1  src 192.168.1.3
        192.168.1.0/24 dev eth1  proto kernel  scope link  src 192.168.1.3  metric 2
        169.254.0.0/16 dev eth1  scope link  metric 1000
        default dev ppp0  proto static

    Oh, and running the script returned this output:

        ~$ sudo bash irc_route.sh
        Usage: inet_route [-vF] del {-host|-net} Target[/prefix] [gw Gw] [metric M] [[dev] If]
               inet_route [-vF] add {-host|-net} Target[/prefix] [gw Gw] [metric M] [netmask N]
                          [mss Mss] [window W] [irtt I] [mod] [dyn] [reinstate] [[dev] If]
               inet_route [-vF] add {-host|-net} Target[/prefix] [metric M] reject
               inet_route [-FC] flush    NOT supported

    I ran the script after connecting to the VPN.
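    Per-host exemptions don't need anything VPN-specific: a more specific route wins over the VPN's default route. A hedged sketch using addresses already shown above (the IRC server's address and the LAN gateway from the ip route output):

        # send traffic for this one host via the LAN gateway instead of the tunnel
        sudo ip route add 65.23.153.98/32 via 192.168.1.1 dev eth1
        # undo it later
        sudo ip route del 65.23.153.98/32

    Note this exempts traffic by destination, not by program; exempting a specific application would instead need policy routing with marked packets (iptables -j MARK plus ip rule), which is a larger setup.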

    Read the article

  • How do I get Google to crawl my content when it's only displayed when you fill in a form?

    - by Sarang Patil
    I have a webpage. It has a form, and the "results" section is initially blank. When the user searches for items, a list pops up; the user chooses one option from the list, and the corresponding results are displayed in the results section. I once decided to log the IP, URL, and time of every visit to my page. One IP was 66.249.73.26, and a Google search told me it is the IP of the Google bot (link for whatmyipaddress Google bot). When I looked at the URLs this IP visited, they were like this: search?id=100, search?id=110, ... search?id=200, ... and afterwards it incremented in steps of 1, like 400, 401... But people search for strings, not numbers. Because Googlebot searches for numbers like this, I think the corresponding content is never displayed, and so my page content is never indexed, even though it is rich content. So I want to ask: in order to show the Google bot all the content the webpage has, should I list all the results on the index page and ask users to enter a string to filter them?

    Read the article

  • Site migration and SEO impact

    - by John Smith
    I'd greatly appreciate a response to the following question relating to site migration and SEO impact. Here's some background on how my domain name and site are currently configured. My domain name provider has the following settings:

        host name @ is an A record and points to IP address x.x.x.x
        host name www is an A record and points to IP address x.x.x.x
        sub-domain host name new.example.com is an A record and points to IP address x.x.x.x

    My hosting provider has the following settings:

        host record @ is an A record, points to IP address x.x.x.x, folder home/public_html/old
        host record www is a CNAME record and points to example.com
        sub-domain host record new.example.com points to home/public_html/new

    I want to: point the domain (example.com AND www.example.com) to the content hosted under folder home/public_html/new, which is currently the content directory for new.example.com; retire the content hosted under folder home/public_html/old; and retire the sub-domain host record new.example.com. I believe the easiest method of doing this is removing the sub-domain host record new.example.com and changing the following line in the .htaccess file in home/public_html from

        # Change 'subdirectory' to be the directory you will use for your main domain.
        RewriteCond %{REQUEST_URI} !^/old/

    to

        # Change 'subdirectory' to be the directory you will use for your main domain.
        RewriteCond %{REQUEST_URI} !^/new/

    But I don't understand how this will impact my SERP position; ideally, I'd like it to remain the same. Research on this topic turned up the following Google page, which was no help, and this related StackExchange question, which suggests that this should not affect my SERP position (at least, not permanently). But I wanted to make certain with a more specific example, and hopefully contribute to the community at the same time. I'd appreciate any feedback on this. Is there a better/recommended method to migrate sites this way? Is there an SEO impact?
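    One hedged addition that search engines do treat kindly: a permanent redirect from the retired subdomain to the canonical host, so any existing links or index entries for new.example.com transfer rather than 404. A sketch in the same .htaccess style as the snippet above (example.com standing in for the real domain):

        RewriteEngine On
        # send anything still arriving for the old subdomain to the main host, preserving the path
        RewriteCond %{HTTP_HOST} ^new\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

    This assumes the subdomain's DNS record is kept alive long enough for crawlers to see the 301s; removing the record immediately would make the redirect unreachable.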

    Read the article

  • Can't install Ubuntu on Asus Eee 1015pem

    - by Peter
    I'm having trouble installing Ubuntu. I use an ASUS Eee 1015PEM netbook. Recently, my netbook got wet: I had it inside my backpack and all my things got soaked. The netbook boots up fine, but it will not load the OS. I downloaded Ubuntu onto my external hard drive and changed the settings in my BIOS to boot from a removable device. Nothing happens. When I plug in my external hard drive, I'm not able to get to the boot menu; I have to unplug the external hard drive, set my boot settings (I tried both Removable and CD-ROM), then plug the external drive back in, and nothing happens with either setting. My Asus never came with a recovery disk; it's supposed to have built-in recovery by pressing F9 at boot. I also need to disable Boot Booster in the BIOS, but Boot Booster is not even an option there. My friend told me to try installing Ubuntu, but now I'm having no luck with Ubuntu either. Any suggestions?

    Read the article

  • Mapping XFCE4/xRDP sessions to users

    - by garrilla
    I have Ubuntu 13.10 with the Xubuntu desktop (XFCE4). I'm trying to use xRDP to allow MS Windows users to log in to the machine with their own user. I've been around the houses with this and found two halfway solutions, but can't get them to work as I'd like...

    1) In /etc/xrdp/xrdp.ini I set the port to -1:

        [xrdp1]
        name=sesman-Xvnc
        lib=libvnc.so
        username=ask
        password=ask
        ip=127.0.0.1
        port=-1

    Each time any user logs on they get a new session; they can never get back to their original session.

    2) In /etc/xrdp/xrdp.ini I set the port to 5912 (e.g.):

        [xrdp1]
        name=sesman-Xvnc
        lib=libvnc.so
        username=ask
        password=ask
        ip=127.0.0.1
        port=5912

    Each time any user logs on they always connect to the same session, irrespective of their logon details.

    I found a midway solution: create many sessions by adding additional sections to xrdp.ini, e.g.

        [xrdp8]
        name=Bob's Logon
        lib=libvnc.so
        username=ask
        password=ask
        ip=127.0.0.1
        port=5913

        [xrdp9]
        name=Jill's Logon
        lib=libvnc.so
        username=ask
        password=ask
        ip=127.0.0.1
        port=5914

    and so on, but the problem with this is that Jill can still log in to Bob's remote session. Is it possible to do what I'm trying to do? Maybe I have to use different tools?

    Read the article

  • How can I make multiple displays work on my Asus UX32VD?

    - by oKtosiTe
    Original title: Why do I have two trash icons in the Unity Launcher? Whether I run Ubuntu as a live USB or install it, I always have two trash bins on the Unity Launcher. Both work, and both open the same location. This seems a bit redundant; what can be done about it? Update: turning auto-hide on made it obvious that I have multiple Launchers showing. With auto-hide off, they simply overlap, making it look like there's a double trash icon; with auto-hide enabled, I can display one Launcher (and therefore one trash icon) at a time. Still, two are running simultaneously. Second update: this problem appears to be caused by the way Ubuntu handles multiple displays on my Asus UX32VD Ultrabook. Somehow, the laptop display cannot be used while my external display is connected: it is shown in the Displays list, but remains black no matter how I configure it. The external display runs at 1920x1200; the laptop monitor should run at 1920x1080. It therefore becomes obvious that the Launcher that's supposed to run on the laptop display is actually displayed on the external monitor. Using nomodeset as a kernel parameter, as indicated here, makes the laptop display inaccessible altogether, detecting the external monitor as the laptop display and making resolutions other than 1920x1200 inaccessible. That is not an option.
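    A hedged diagnostic to try from a terminal (the output names eDP1/HDMI1 are hypothetical; use whatever the first command actually lists):

        # list connected outputs and the modes the driver believes each supports
        xrandr
        # try to light up both panels side by side explicitly
        xrandr --output eDP1 --mode 1920x1080 --output HDMI1 --mode 1920x1200 --right-of eDP1

    If xrandr can drive both displays but the Displays dialog cannot, that points at the desktop's configuration rather than the driver, and it would also explain the second Launcher: one per configured display, both drawn on the only monitor actually lit.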

    Read the article

  • Subdomain still times out after being set up a month ago

    - by user8137
    I'm a newbie at this, and it has probably been asked already, but the answers I found online were close yet too vague, so I've probably really messed this up. I would really appreciate specific step-by-step instructions. This is what I'd like to do: have the subdomain www.high-res.domain.com accessed by external customers with specific permissions (like FTP). We use Network Solutions to host domain.com. We recently added a new IP address to point to www.high-res.domain.com, and I gave the IP address to the company that hosts our website. When I ping www.high-res.domain.com it resolves to the correct IP address but still times out. It's been a few weeks now, and when you ping it, it still times out:

        C:\>ping XXX.XXX.X.XXX
        Pinging XXX.XXX.X.XXX with 32 bytes of data:
        Request timed out.
        Request timed out.
        Request timed out.
        Request timed out.
        Ping statistics for XXX.XXX.X.XXX:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss)

    Tracert times out as well. I even checked with DNS tools and a few other sites, and they show the same thing. I recently went into DNS management on our server (Win2k3 SP1) and created an A record under DomainDnsZones, which shows as a CNAME when you look at it. Under the domain it has two entries, one for the subdomain and the other for the website host, each with a separate IP address. Is this correct? The website people are too busy on another project to research it further, and my friends haven't gotten back to me. Please help. Thanks, KK
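    A hedged pair of checks that separate the DNS question from the reachability question (the name is the one above; 8.8.8.8 is just a public resolver):

        REM does the public DNS agree with your internal DNS about the address?
        nslookup www.high-res.domain.com 8.8.8.8
        REM ping can be blocked by a firewall even when the service is up, so test the actual port
        telnet www.high-res.domain.com 80

    If the name resolves publicly but the port never answers, the record is fine and the host (or a firewall in front of it) is the problem; if only the internal lookup resolves, the record was never published to public DNS at the registrar.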

    Read the article
