Search Results

Search found 15835 results on 634 pages for 'static routes'.


  • Discover the public ip of a network without being connected

    - by Martin Trigaux
    Let's say I'm next to a network and can see its traffic (with airodump or a similar tool) but cannot decipher it (because I am not connected to the network). Is it possible to discover the public IP address of the network? I know the MAC addresses of the users connected to the network, but do I know the router's? If so, maybe there is a way to do the matching. I know IP addresses are not forever, but some addresses are static and never change. Maybe there is a database of MAC addresses that has recorded this; Google has a database that matches MAC addresses to geographical coordinates, so why not with IP addresses? Another idea: if I know where I am, I can maybe guess the IP range used in the city by the ISP (is that findable?) and then try to "ping" each IP in the range (if it is a /24 that's feasible, maybe even a /16). Would I get some information like the MAC of the box, or see some traffic on the network? Those are the two ideas I had. I don't know if they are doable, and they are certainly not perfect. Can you think of others? By trying several methods, maybe I can make a decent guess with a bit of luck. Thank you
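
    As a side note on the second idea, sweeping a candidate range is usually done with a ping scan rather than pinging each address by hand. A minimal sketch, assuming 203.0.113.0/24 is only a placeholder for a guessed ISP allocation:

        # ping sweep of a guessed range (203.0.113.0/24 is a placeholder)
        nmap -sn 203.0.113.0/24
        # whois on any responding address shows which ISP the block belongs to
        whois 203.0.113.17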

    Read the article

  • VPN into multiple LAN Subnets

    - by Rain
    I need to figure out a way to allow access to two LAN subnets on a SonicWall NSA 220 through the built-in SonicWall GlobalVPN server. I've Googled and tried everything I can think of, but nothing has worked. The SonicWall NSA management web interface is also very disorganized; I'm probably missing something simple or obvious. There are two networks, called Network A and Network B for simplicity, with two different subnets. A SonicWall NSA 220 is the router/firewall/DHCP server for Network A, which is plugged into the X2 port. Some other router is the router/firewall/DHCP server for Network B. Both of these networks need to be managed through a VPN connection. I set up the X3 interface on the SonicWall with a static IP in the Network B subnet and plugged it in. Network A and Network B should not be able to access each other, which appears to be the default configuration. I then configured and enabled VPN. The SonicWall currently has the X1 interface set up with a subnet of 192.168.1.0/24 and a DHCP server enabled, although it is not plugged in. When I VPN into the SonicWall, I get an IP address supplied by the DHCP server on the X1 interface and I can access Network A remotely, although I do not have access to Network B. How can I give VPN clients access to both Network A and Network B while keeping devices on Network B from accessing Network A and vice versa? Is there some way to create a VPN-only subnet (something like 10.100.0.0/24) on the SonicWall that can access Network A and Network B without changing the current network configuration or letting devices on the two networks "see" each other? How would I go about setting this up? Diagram of the network (hopefully this helps):

           WAN1                                    WAN2
            |                                       |
        [ SonicWall NSA 220 ]-(X3)-----------------[ Router 2 ]
            |                                       |
           (X2)                                     |
        192.168.2.0/24                         10.1.1.0/24

    Any help would be greatly appreciated!

    Read the article

  • Nginx + PHPBB3 reverse proxy images problem

    - by siberiano
    Hello all, I have a problem with my nginx frontend + Apache2 backend + phpBB3 setup. It doesn't load the CSS or the images. I get constant errors like these: 2010/04/14 16:57:25 [error] 13365#0: *69 open() "/var/www/foo/styles/styles/coffee_time/theme/large.css" failed (2: No such file or directory), client: 83.44.175.237, server: www.foo.com, request: "GET /styles/coffee_time/theme/large.css HTTP/1.1", host: "www.foo.com", referrer: "http://www.foo.com/viewforum.php?f=43" This is my config for the site:

        server {
            listen 80;
            server_name www.foo.com;
            access_log /var/log/nginx/foo.access.log;
            # serve static files directly
            location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ {
                access_log off;
                expires 30d;
                root /var/www/trasteando/;
            }
            location / {
                root /var/www/foo/;
                index /var/www/foo/index.php;
            }
            # proxy the PHP scripts to the predefined "apache" upstream
            location ~ .php$ {
                proxy_pass http://apache;
            }
            location /styles/ {
                root /var/www/foo/styles/;
            }
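
    Note the doubled path in the error (/var/www/foo/styles/styles/...): with a location /styles/ block, nginx appends the full request URI to the root directive, so root /var/www/foo/styles/ maps /styles/... onto .../styles/styles/.... A minimal sketch of the corrected block, using the paths from the post:

        location /styles/ {
            root /var/www/foo;
            # or, equivalently:
            # alias /var/www/foo/styles/;
        }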

    Read the article

  • bond0:0 + define virtual IP

    - by yael
    Hi all, on my Linux server I have the following (Linux version: RedHat Linux 5.3.0.0; this server has only one LAN interface): more /etc/sysconfig/network-scripts/ifcfg-bond0:0 DEVICE=bond0:0 ONBOOT=yes BOOTPROTO=static IPADDR=10.10.10.12 NETMASK=255.255.255.0 ifconfig -a bond0 Link encap:Ethernet HWaddr 00:00:00:00:00:00 UP BROADCAST MASTER MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) bond0:0 Link encap:Ethernet HWaddr 00:00:00:00:00:00 inet addr:10.10.10.12 Bcast:1.1.1.255 Mask:255.255.255.0 UP BROADCAST MASTER MULTICAST MTU:1500 Metric:1 eth0 Link encap:Ethernet HWaddr 00:0E:0C:C7:F8:92 inet addr:1.1.1.1 Bcast:1.1.1.255 Mask:255.255.255.0 inet6 addr: fe80::20e:cff:fec7:f892/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:8600 errors:0 dropped:0 overruns:0 frame:0 TX packets:4764 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:717979 (701.1 KiB) TX bytes:598620 (584.5 KiB) Memory:b8820000-b8840000 My problems: (1) Why do I get HWaddr 00:00:00:00:00:00 and not the real MAC address? (2) I can't ping the other server (10.10.10.11) from my server. (3) Is it possible to define bond0:0 when I have only one LAN interface (eth0)? Other info: more /etc/modprobe.conf alias eth0 e1000e alias eth1 e1000e alias eth2 e1000e alias eth3 e1000e alias scsi_hostadapter mptbase alias scsi_hostadapter1 mptsas alias scsi_hostadapter2 ata_piix alias bond0 bonding alias bond1 bonding
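
    A hedged observation: bond0 reports HWaddr 00:00:00:00:00:00 (and passes no traffic) when no physical NIC has actually been enslaved to it, which would also explain the failed pings. A bond over a single NIC is possible, but eth0 then has to be configured as the slave and gives up its own address. A minimal sketch in the RHEL 5 network-scripts style (addresses taken from the post; on older RHEL 5 the mode/miimon options go in /etc/modprobe.conf instead of BONDING_OPTS):

        # /etc/sysconfig/network-scripts/ifcfg-bond0
        DEVICE=bond0
        ONBOOT=yes
        BOOTPROTO=none
        IPADDR=10.10.10.12
        NETMASK=255.255.255.0
        BONDING_OPTS="mode=active-backup miimon=100"

        # /etc/sysconfig/network-scripts/ifcfg-eth0
        DEVICE=eth0
        ONBOOT=yes
        BOOTPROTO=none
        MASTER=bond0
        SLAVE=yes

    Note this moves eth0's 1.1.1.1 address onto the bond (or onto a bond0:0 alias). If the goal is simply a second IP on the one NIC, a plain eth0:0 alias with no bonding at all may be the simpler route.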

    Read the article

  • Systemd Service Start With Dynamic Port Value From Docker

    - by Sheriffen
    Using CoreOS, Docker and systemd to manage my services, I want to do service discovery properly. Since CoreOS ships etcd (a distributed key-value store), there is a very convenient way to do this: in a systemd ExecStartPost I can just insert the started service into etcd without problems. My use case needs something like this: ExecStartPost=/usr/bin/etcdctl set /services/myServiceName '{ \"host\": \"%H\", \"port\": 5555 }' which works like a charm. But this is where my idea popped up. Docker can randomly assign a port if I just run docker run -p 5555, which is awesome since I don't have to set it statically in the *.service file and I could possibly run multiple instances on the same host. What if I could get the randomly assigned port and insert it instead of the static 5555? It turns out I can use the docker port command to get the port, and with some formatting we can extract just the port with $(echo $(/usr/bin/docker port my-container-name 5555) | cut -d':' -f2), which works if I set it (using bash) like this: /usr/bin/etcdctl set /services/myServiceName '{ \"host\": \"%H\", \"port\": '$(echo $(/usr/bin/docker port my-container-name 5555) | cut -d':' -f2)' }' But using systemd I just can't get it to work. This is the line I'm using: ExecStartPost=/usr/bin/etcdctl set /services/myServiceName '{ \"host\": \"%H\", \"port\": '$(echo $(/usr/bin/docker port my-container-name 5555) | cut -d':' -f2)'}' Somehow I've got something wrong, but it's hard to debug since it works when typed into a terminal.
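
    The likely culprit: systemd does not run Exec* lines through a shell, so the $(...) command substitution is never performed. The usual workaround is to wrap the command in /bin/bash -c and escape the dollar signs as $$ so systemd passes them through to bash literally. A minimal sketch (container name and etcd key taken from the post; the nested quoting is finicky and may need adjustment):

        ExecStartPost=/bin/bash -c "port=$$(/usr/bin/docker port my-container-name 5555 | cut -d: -f2); /usr/bin/etcdctl set /services/myServiceName '{\"host\": \"%H\", \"port\": '$$port'}'"

    If the escaping fights back, an even simpler route is to put the whole command in a small shell script on disk and call that script from ExecStartPost.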

    Read the article

  • What are the mysql-5.5 compilation configuration arguments on Ubuntu 10.04?

    - by photon
    I want to install MySQL 5.5 on my Ubuntu 10.04 desktop system, but I'm not sure what arguments to pass to the cmake command. I've already seen these articles: https://wikis.oracle.com/display/mysql/Cmake Building mysql-5.5.19 from source on ubuntu 11.10 with the static flag Compile MySQL 5.5.15 from source using autorun.sh and cmake, unable to start MySQL after Would anyone like to share the MySQL 5.5 compilation configuration arguments for Ubuntu 10.04? $ cmake # what arguments should I pass to this command? Update: cmake . -DBUILD_CONFIG=mysql_release -DCMAKE_INSTALL_PREFIX=/path/to/mysql_installation_dir -DWITH_SSL=no The official website says the source package has to be compiled with cmake, but according to a tech blog it doesn't need to be compiled from source, so which one is correct? When I use cmake, I also get the following error message: $ sudo cmake . -DBUILD_CONFIG=mysql_release -DCMAKE_INSTALL_PREFIX=/usr/local/mysql_community_5.5 -- The CXX compiler identification is unknown CMake Error: your CXX compiler: "CMAKE_CXX_COMPILER-NOTFOUND" was not found. Please set CMAKE_CXX_COMPILER to a valid compiler path or name. CMake Error at cmake/build_configurations/mysql_release.cmake:126 (MESSAGE): Clarification: I'm not an expert and I'm a slow learner, and I cannot find anyone around me to help, so I'm hoping someone here is kind and generous enough to take the time to post the details.
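
    The "CXX compiler identification is unknown" error just means no C++ compiler is installed, so cmake stops before it ever gets to MySQL's options. A hedged sketch of the prerequisite packages and a typical cmake invocation for MySQL 5.5 on Ubuntu 10.04 (the prefix and option set are illustrative, not the only valid ones):

        sudo apt-get install build-essential cmake libncurses5-dev bison libaio-dev
        cmake . -DBUILD_CONFIG=mysql_release \
                -DCMAKE_INSTALL_PREFIX=/usr/local/mysql_community_5.5 \
                -DMYSQL_DATADIR=/usr/local/mysql_community_5.5/data \
                -DWITH_SSL=no
        make
        sudo make install

    On the "do I have to compile?" question: only the source tarball needs cmake; the precompiled binary tarballs and the distribution packages do not, so both statements can be true depending on which download is used.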

    Read the article

  • Remote network traffic not passing through VPN

    - by John Virgolino
    We have the following topology:

        LAN A                    LAN B                                LAN C
        10.14.0.0/16  <-VPN->  10.18.0.0/16 --- SONICWALL  <-VPN->  M0N0WALL --- 10.32.0.0/16

    Traffic between LAN A and LAN B works perfectly. Traffic between LAN C and LAN B works perfectly. Traffic between LAN A and LAN C, not so much. LAN A's gateway has a route to LAN C that points to the SonicWall. The SonicWall has a route to LAN A pointing to the VPN gateway connecting LAN B to LAN A. Tracing packets on the SonicWall shows the LAN C-destined traffic arriving on the SonicWall, but it does not forward the traffic; it dies there. Traffic from LAN B does get forwarded. Tracing packets on the SonicWall while sending traffic from LAN C destined for LAN A shows nothing. This tells me that the M0N0WALL is not forwarding traffic for the 10.14.0.0 network and the SonicWall is not forwarding traffic from 10.14.0.0. The SA on the SonicWall terminates on the WAN zone and is defined to use an address group that incorporates both the 10.14.0.0 and 10.18.0.0 networks. The M0N0WALL is configured for the 10.18.0.0 network, and I have tried both with and without a static route to 10.14.0.0 on the M0N0WALL. I tried manually adding the 10.14.0.0 network to the SA on the M0N0WALL, but that really aggravated it and the SA never came up, so I reverted. I have checked all the firewall rules to make sure nothing is blocked; all of the SonicWall auto-added rules look right. Specs: SonicWall TZ200, Enhanced OS; M0N0WALL v1.32. I'm at a loss at this point. Any help would be appreciated.

    Read the article

  • Force request to miss cache but still store the response

    - by Tom Marthenal
    I have a slow web app that I've placed Varnish in front of. All of the pages are static (they don't vary for different users), but they need to be updated every 5 minutes so they contain recent data. I have a simple script (wget --mirror) that crawls the entire website every 15 minutes. Each crawl takes about 5 minutes. The point of the crawl is to update every page in the Varnish cache so that a user never has to wait for a page to generate (since all pages have been generated recently, thanks to the spider). The timeline looks like this: 00:00:00: Cache flushed 00:00:00: Spider starts crawling to update cache with new pages 00:05:00: Spider finishes crawling, all pages are updated until 1:15 A request that comes in between 0:00:00 and 0:05:00 might hit a page that hasn't been updated yet and will be forced to wait a few seconds for a response. This isn't acceptable. What I'd like to do, perhaps using some VCL magic, is to always forward requests from the spider to the backend, but still store the response in the cache. This way, a user never has to wait for a page to generate, since there is no 5-minute window in which parts of the cache are empty (except perhaps at server startup). How can I do this?
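
    One way to get exactly this behaviour (a sketch, assuming Varnish 2.1.3 or newer, and that the spider can be recognised by its User-Agent, which wget sets by default): force the spider's requests to miss the cache while still letting the fetched response be inserted.

        sub vcl_recv {
            # "Wget" is wget's default User-Agent; adjust if the crawler identifies itself differently.
            if (req.http.User-Agent ~ "Wget") {
                set req.hash_always_miss = true;
            }
        }

    Unlike a pass, hash_always_miss still performs a cache insert, so ordinary users are served the freshly stored object.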

    Read the article

  • Permission denied (publickey,gssapi-with-mic,password) ssh error

    - by zentenk
    Heads up: I'm a noob with Linux and networking. I set up an Ubuntu server and I have a static IP for my network. When I try to connect to the server from home (externally), it prompts me to log in. Whether I supply the correct password or an incorrect one, I get the error "Permission denied, please try again." and after 3 attempts I get "Permission denied (publickey,gssapi-with-mic,password)". I am, however, able to connect with SSH from another computer on the same network with ssh <internal ip of server>. I'm connecting from Mac OS X and my config file is vanilla. Note: During installation of Ubuntu it said I didn't have a default route or something while doing automatic network configuration, but I ignored it and continued the installation; could this be the problem? EDIT: I have tried the suggestions below: I have nothing in hosts.allow, and iptables shows the ports I have allowed, which is 22. I checked auth.log and there is nothing when I connect remotely (even when it says permission denied). When I connect internally, the expected authentication log entries show up. Any idea what's wrong?
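
    The telling detail is that auth.log shows nothing at all during the external attempts: the password prompt is probably not coming from this server but from whatever the router forwards port 22 to (possibly the router's own SSH daemon). A couple of hedged checks:

        # compare the SSH banners offered internally vs. externally -- they should match
        ssh -v user@<external-ip> 2>&1 | grep -i 'remote software version'
        ssh -v user@<internal-ip-of-server> 2>&1 | grep -i 'remote software version'

        # on the server, watch whether an external attempt ever arrives on port 22
        sudo tcpdump -ni any tcp port 22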

    Read the article

  • why won't php 5.3.3 compile libphp5.so on redhat ent

    - by spatel
    I'm trying to upgrade from PHP 5.2.13 to PHP 5.3.3. However, the Apache module, libphp5.so, does not get compiled. Below is the output I got along with the configure options I used. The configure statement is a reduced version of what I normally use. ========== './configure' '--disable-debug' '--disable-rpath' '--with-apxs2=/usr/local/apache2/bin/apxs' ... ** ** ** Warning: inter-library dependencies are not known to be supported. ** ** ** All declared inter-library dependencies are being dropped. ** ** ** Warning: libtool could not satisfy all declared inter-library ** ** ** dependencies of module libphp5. Therefore, libtool will create ** ** ** a static module, that should work as long as the dlopening ** ** ** application is linked with the -dlopen flag. copying selected object files to avoid basename conflicts... Generating phar.php Generating phar.phar PEAR package PHP_Archive not installed: generated phar will require PHP's phar extension be enabled. clicommand.inc pharcommand.inc directorytreeiterator.inc directorygraphiterator.inc invertedregexiterator.inc phar.inc Build complete. Don't forget to run 'make test'. ============= PHP 5.2.13 recompiles just fine, so something is up with 5.3.3. Any help would be greatly appreciated!!

    Read the article

  • How to speed up request/response to django using apache or another solution?

    - by jbcurtin
    Hey all, I'm mainly a developer, but every now and again I jump into the sysadmin role. For the most part I've gotten away with deploying PHP and Python apps using Apache. I write today because I'm starting to research faster alternatives to Apache while still keeping some of the core features I require, like PUT and DELETE methods and the ability to connect to a socket (this I have not tried, but it might be a nice bonus if I ever use Comet in my apps). As you've probably guessed, I use JavaScript extensively on all my websites, with deep linking for SEO support. The main area where I'm looking to increase performance is the path from the Django apps through the web server to the client response. Every day I do my best to keep the smallest memory footprint possible, but I'm getting to the end of my rope with Apache. In general, keep in mind that I'm just starting this research, so I'm looking more for material to read than for solutions at this moment. My main questions: Am I missing something about Apache that makes it faster than everything else? What would be a good server environment for serving just static files? What are some of the leading open-source and paid alternatives?
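
    As a starting point for the reading list: the pattern most people land on today is nginx in front of a WSGI server (gunicorn or uWSGI), with nginx serving static files itself and proxying everything else to the app. A minimal sketch under assumed paths and ports (none of these names come from the post):

        # hypothetical nginx site config
        server {
            listen 80;
            server_name example.com;

            location /static/ {
                alias /srv/myproject/static/;        # nginx serves static files directly
                expires 30d;
            }

            location / {
                proxy_pass http://127.0.0.1:8000;    # gunicorn bound here
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

        # hypothetical gunicorn invocation for the Django project
        gunicorn myproject.wsgi:application --bind 127.0.0.1:8000 --workers 3

    nginx happily proxies PUT and DELETE to the app, which covers the feature list mentioned above.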

    Read the article

  • Debian Wheezy (testing) df reported volume size

    - by TheRoadrunner
    I am a bit confused about the /dev/sda* references since I installed Wheezy instead of Squeeze on a testing box. fdisk -l returns: Disk /dev/sda: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000e9623 Device Boot Start End Blocks Id System /dev/sda1 * 2048 480278527 240138240 83 Linux /dev/sda2 480280574 488396799 4058113 5 Extended /dev/sda5 480280576 488396799 4058112 82 Linux swap / Solaris This seems correct. But df -h /dev/sda (and /dev/sda1, /dev/sda2 and /dev/sda5) returns: Filesystem Size Used Avail Use% Mounted on udev 10M 0 10M 0% /dev The same happens with every entry under /dev/disk/by-id and /dev/disk/by-path. Only one of the two entries under /dev/disk/by-uuid returns the correct volume size: df -h /dev/disk/by-uuid/cacdbad6-7e6b-4e80-84ba-e3c77ef48796 Filesystem Size Used Avail Use% Mounted on /dev/disk/by-uuid/cacdbad6-7e6b-4e80-84ba-e3c77ef48796 229G 22G 196G 11% / Contents of /etc/fstab: # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> # / was on /dev/sda1 during installation UUID=cacdbad6-7e6b-4e80-84ba-e3c77ef48796 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=45840d13-ee36-4e77-8e73-16cbdff25eb1 none swap sw 0 0 /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0 /dev/fd0 /media/floppy0 auto rw,user,noauto 0 0 It seems all references other than that one UUID point somewhere else entirely. Is this because Wheezy is in testing, and should it be reported as an error?
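
    This is expected df behaviour rather than a Wheezy bug: df only reports a mounted filesystem for a device argument when that exact device string appears in the mount table, and since the root filesystem was mounted by UUID, only the matching /dev/disk/by-uuid/... name matches. For every other spelling, df falls back to the filesystem that the device node itself lives on, which is the udev tmpfs mounted on /dev (the 10M line). To query the real sizes, ask by mount point:

        df -h /            # size of the root filesystem, regardless of how it was mounted
        findmnt /          # shows the source device, fstype and options
        readlink -f /dev/disk/by-uuid/cacdbad6-7e6b-4e80-84ba-e3c77ef48796    # resolves to /dev/sda1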

    Read the article

  • MBR seems to be gone

    - by bobobobo
    So, a horror story for everyone. I bought two spanking new HDDs. Mmm, gigabytes! I removed all my old HDDs, physically labelled them, and was preparing to install the new ones (fresh system install included!). To make sure which HDD was which, I popped each OLD HDD (full of data!) into a Thermaltake BlacX toaster... surprisingly, BOTH couldn't be read. I didn't have static on my hands, I'm certain of it; I touched metal and touched wood before beginning all this. Thinking that was strange, I brought up the new system, installed Windows XP (of course!) on a new HDD, and now the two OLD HDDs (full of data!) that went into the toaster cannot be read, and they had tons of data on them. I've read about MBRs being nuked and it sounds like that's what this is, but I'm at a loss what to do. There are so many MBR recovery programs out there that I feel a bit overwhelmed; I don't want to lose my data by just picking one, yet it seems so close within reach that I'm not panicking anymore. Does anybody have a play-by-play I could follow? I just don't want to spend $900 at a data recovery centre if I can do this myself.
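
    If it really is just the MBR / partition table that got wiped, TestDisk is the usual free, play-by-play tool: it scans the disk for the old partitions and can rewrite the table, and it changes nothing until you explicitly choose Write. A hedged sketch (the device name is an example; there is also a Windows build on cgsecurity.org):

        sudo apt-get install testdisk      # from a Linux live CD, or use the Windows build
        sudo testdisk /dev/sdb             # Analyse -> Quick Search -> Deeper Search -> Write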

    Read the article

  • How to set up ping between an XP guest and a Win8 host using a Hyper-V virtual switch

    - by rism
    Client Hyper-V is installed on a Windows 8 Pro 64-bit box, and an XP VM has been created within it, attached to an internal virtual switch. The VM boots and is accessible, and its default virtual NIC had a dynamic IP of 169.254.x.x, which I have changed to a static IP of 192.168.0.12 / 255.255.255.0, confirmed via ipconfig on the XP guest. The host has IP 192.168.0.7 / 255.255.255.0. Both host and guest have their firewalls disabled for simplicity. I can't ping the guest from the host or the host from the guest: TTL timeout. As far as Hyper-V and VMs go, I don't know what to do next. Both are in the same workgroup (by name), but since they can't ping each other, I guess that means nothing. My objective is to share a folder on the VM so I can install a 32-bit accountancy app that won't run on Windows 8/7, so if there is a simpler way I'm all ears, but typically peer-to-peer is very simple.
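
    One detail that often bites with an internal virtual switch: the host reaches the guest not through its physical NIC (192.168.0.7) but through the "vEthernet (...)" adapter Hyper-V creates for that switch, and that adapter needs an IP the guest can reach. Since 192.168.0.x is already in use on the physical NIC, it is usually cleaner either to put the internal switch on its own subnet, or to use an External switch so the guest simply joins the existing LAN. A hedged PowerShell sketch for the separate-subnet option (switch name and addresses are examples; run elevated on the host, then give the XP guest a static IP in the same range, e.g. 192.168.137.2/24):

        # list the Hyper-V virtual adapters on the host
        Get-NetAdapter | Where-Object { $_.Name -like "vEthernet*" }

        # address the internal-switch adapter so host and guest share a subnet
        New-NetIPAddress -InterfaceAlias "vEthernet (InternalSwitch)" -IPAddress 192.168.137.1 -PrefixLength 24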

    Read the article

  • Win 7 Remote Desktop connection failure when already logged in.

    - by Andy E
    I have a bit of a strange problem, magnified recently by my broadband dropouts. I wasn't sure whether to post this on SU or SF, so I thought I'd start here as more users would be likely to know what the problem is. In short, when I try to connect to my server (Windows Server 2008) from my laptop running Windows 7, I can only connect if my remote account was previously logged out. If I'm still logged in, I get the error message: "Windows cannot connect to the remote server." No explanation or anything. If my IP address is the same, I don't have this problem. If I boot up Windows XP Mode and use XP's Remote Desktop Connection, it works just fine; I think the difference is that it takes me to the remote server's logon screen. With the Windows 7 RDC you never see the logon screen: it asks for credentials before entering full-screen mode. The real problem is that I'm having random broadband dropouts and my IP isn't static. If I log on via the XP RDC, log out, and then run the Windows 7 RDC, it works fine. I realize I can just use XP's RDC for now, but I don't really like keeping XP Mode open if I can help it. Does anyone know a way around this problem? Maybe forcing the Windows 7 RDC to go to the logon screen, or changing some server-side settings to work around the IP address issue?
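
    One thing that may be worth trying (a hedged suggestion, not something the post confirms fixes it): the Windows 7 client prompts for credentials up front because of CredSSP / Network Level Authentication, while the XP client hands you the remote logon screen. Adding this line to the .rdp file you connect with (or to %USERPROFILE%\Documents\Default.rdp) turns that off on the client side:

        enablecredsspsupport:i:0

    That makes mstsc behave like the XP client and land on the server's logon screen, which may sidestep whatever the stale session plus changed IP is tripping over.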

    Read the article

  • Router that allows custom Dynamic DNS server [closed]

    - by Thuy
    I've made my own DDNS service and it works fine using an application running on clients to update the IP. But if for some reason I can't use my software and instead need a router to update the IP, it becomes troublesome. For example, I needed to set up IPsec from a customer to me, and the customer's router/firewall (a Netgear SRX5308) has a dynamic IP from an ISP that can't offer static IPs, so dynamic DNS is required for it to work. In this case there really isn't a client machine to run my software on, since it's a router/firewall. Unfortunately, most routers seem rather unfriendly towards custom DDNS solutions and offer only dyndns.com or similar templates, which was the case with this router too, leaving me with no way to use my own dynamic DNS server. I do have the option of replacing the customer's router, and I've been looking around for alternatives, so I was wondering if anyone on this great site might have been in a similar situation or might know of a router/firewall that is friendlier towards custom DDNS solutions that I might be able to use. Thanks in advance for any help or guidance!

    Read the article

  • Home Network Stopped Working

    - by James
    I have a home network where a machine running Windows 7 Ultimate N acts as a central hub for other devices to access media. This has been running for around 2 years and there have been no recent configuration changes. The machine has a static IP address (192.168.0.3), which also has not changed. A few laptops, a Sonos music system and some mobiles use the machine for music/video, mostly. Additionally, port 3389 was open for RDP; I used a no-ip agent to map a hostname so I could RDP to the machine from the internet. As of yesterday, when I try to ping the machine I get "PING: Transmit failed. General error." I noticed, however, that the IP it is pinging is 0.0.2.233. All shares etc. are no longer functioning, including RDP. On the machine itself, ipconfig shows nothing has changed; it still shows the expected IP. If I ping it by its own hostname from the machine itself, I get the same error as above. The machine has been rebooted, and so has the router. Any ideas where to even start with this?
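
    An address like 0.0.2.233 points more at broken name resolution than at the network itself: something (a hosts-file entry, a stale DNS/NetBIOS record, or the no-ip hostname) is handing back garbage when the machine's name is looked up. A few hedged checks from one of the other machines (and on the Win7 box itself):

        rem does the raw IP still answer?
        ping 192.168.0.3
        rem what address does the name actually resolve to?
        nslookup <machine-hostname>
        rem clear cached DNS entries and the NetBIOS name cache
        ipconfig /flushdns
        nbtstat -R
        rem look for a stale or bogus entry for the hostname
        type %SystemRoot%\System32\drivers\etc\hosts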

    Read the article

  • Wireless Access Point stopped working

    - by Alex Pritchard
    I have a simple LAN set up at home using a Linksys WRT54GS v4 as my primary router and an Encore ENHWI-2AN3 as an access point. I connect the Encore to the Linksys by running a cable from one of the Linksys LAN ports into the Encore WAN input. I originally configured this using the Encore setup wizard, setting the device up in AP Router mode. It detected the upstream network and worked about as expected, creating a second network that used my primary network to connect to the internet. It worked fine for about 2 weeks, then abruptly cut out today. I checked that the network is still live on the cable going into the Encore (it provides internet when connected directly to a laptop) and that devices are still able to connect to the network being broadcast by the Encore. When I try to rerun the connection wizard on the Encore, I receive the message "No Services found in WAN port." The WAN settings no longer pick up a dynamic IP from the line. I tried setting a static IP instead, assigning an unused address within my primary router's subnet and pointing the default gateway at the Linksys IP, but this did not work either. When I plug the cable into the WAN port, an internet light comes on that is not lit when a live network is not connected. I've tried doing a hard reset on the Encore (held down the reset button until the lights flashed, then reconfigured from scratch), but the WAN settings are still not detected. I've also tried power-cycling the modem, the Linksys and the Encore. Any suggestions would be appreciated!

    Read the article

  • Nginx: Loopback connection via PHP's getimagesize crashes server (Magento's CMS)

    - by Alex
    We were able to trace down a problem that is crashing our NGINX server running Magento until the following point: Background info: Magento Backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in magento (cms/directive). When we set the NGINX error_log level to info, we get the following lines (line break inserted for better readability): 2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection, so upstream connection is closed too while sending request to upstream, client: XXXXXXXXX, server: test.local, request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9024", host: "test.local" When checking the code in the debugger, the following call does never return (in ´Varien_Image_Adapter_Abstract::getMimeType()` # $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif` # $_SERVER['REQUEST_URI'] = http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) = getimagesize($this->_fileName); The filename requests is an URL to the same server which is requesting the script a link to a static .gif that is not existing. Sample URL: http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif When the above line executed, any subsequent request to the NGNIX server does not respond any more. After waiting for around 10 minutes, the NGINX server starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL - but this not crash. It simple leads to an exception saying that the URL could not be loaded (which is fine as the URL is wrong)

    Read the article

  • How to determine if my AWS/EC2 server has been compromised / resolution?

    - by ElHaix
    I have recently seen an increase in network in/out activity on my server and am trying to determine if my AWS/EC2 instance has been compromised, and if so, how to resolve? In my security group I have: Inbound: 80 (HTTP) 0.0.0.0/0 Outbound: 80 (HTTP) 0.0.0.0/0 443 (HTTPS) 0.0.0.0/0 Using TCP-UDP Endpoint Viewer: I see a lot of w3wp.exe TCP processes with varying local ports http and numbered, as well as varying remote ports. Some processes go red/yellow/green on updates . I see Remote address for most w3wp processes are my ec2 instance, however I am seeing several to *.deploy.akamaitechnologies.com and *.deploy.static.akamaitechnologies.com with received bytes varying between 4-11 megs. I also see Ec2Config.exe, remote address: 169.254.169.254 System Process Remote Address: fetcher4-4.p.mail.ru (how can I get rid of this one?!) local port: http remote port: 33432 I am also seeing some system processes from 114.216-244-93-rdns.wowrack.com: Protocol: TCP local port: http remote port: varying As well as some baiduspider "System Process"'s. I'm afraid that my system may have been compromised, and wondering if these results are any indication of that. If so, how can I get eliminate these possible threats? I have MS Security Essentials installed.

    Read the article

  • Multiple Routers, Failover, DHCP and multiple gateways. NOT WAN-failover

    - by u_b
    I've had a look around Google and this forum but could not find an answer to my question, so hopefully one of you can help me a little. My intended setup is: Router R1: WAN connection to the ISP; connected backup server; provides a wireless SSID; other connected devices like printer, laptop, etc., both wired and wireless. Router R2: no WAN connection to the ISP, but connected to R1; connects an MP3 streamer and a music server; also serves as a wireless access point with the same SSID; apart from the described connections, only wireless clients. I would like to be able to control music even if R1 is off, i.e. with no internet connection. On the other hand, I would like to access the internet even if R2 is off, i.e. with no music access. Last but not least, I would like to stream internet radio: R1 and R2 are both on, and music is streamed from the internet to R1, to R2, to the streamer. I would like to realize all this using DHCP (also using static assignments) so I don't have to configure things statically on Android, laptops, etc. So my question is: Can I make DHCP provide a list of two default gateways, R1 and R2, so that clients fall back to the other gateway if the currently assigned one is turned off? Thanks in advance, u_b
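
    On the specific DHCP question: option 3 (routers) can carry a list of gateways, but most client stacks only ever use the first entry and will not fail over to the second when it goes down, so DHCP alone won't give automatic failover. A hedged ISC dhcpd sketch of the syntax (addresses are examples):

        subnet 192.168.1.0 netmask 255.255.255.0 {
            range 192.168.1.100 192.168.1.200;
            option routers 192.168.1.1, 192.168.1.2;    # R1 first, R2 second -- most clients use only the first
            option domain-name-servers 192.168.1.1;
        }

    If the routers support it, a virtual gateway IP shared via VRRP (e.g. keepalived) is the more robust way to get the fallback behaviour.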

    Read the article

  • Changing the View of a Folder When It Is Opened from Finder

    - by user60044
    When I was running OS X 10.4 I had all my folders beautifully organized, with positions set and with some folders opening in list view and others in icon view, etc. When I transitioned to 10.6 I found that the OS ignored all that information and imposes the awful behaviour of a view change in one folder propagating up to all parent folders that don't have their views locked. The only way I can think of to get this functionality back is to write a program that, given a root directory, enumerates it and sets the views based on a static template I have. The other way I thought I could accomplish this is with Folder Actions. Aside from the fact that I don't know AppleScript, it seems folder actions cannot be inherited, and since I have thousands of folders involved here, all under a single root, I could not possibly add that action to each of them manually. What I would like is a folder action such that whenever I open a folder from Finder, if it contains any JPG, GIF, etc. image files, it automatically opens in icon view at a size reasonable for the number of images, and if the folder contains only folders, it opens in list view. Does anyone have anything that could do this for me? Thank you, -- Mark

    Read the article

  • Mounted NFS directory not writable by Apache / PHP

    - by phpfour
    Need some help here with NFS. Here's what I have (all servers running CentOS 5.6 with SELinux): 172.17.20.1 - Primary server with static IP; Varnish redirects requests to the web servers. 172.17.20.2 - Web server 1. 172.17.20.3 - Web server 2. The application on the web servers is Drupal, and I need both of them to share the same files directory. I have created a folder on 172.17.20.1 called /var/nfs, owned by root. Here is my /etc/exports content: /var/nfs 172.17.20.2(rw,sync,no_root_squash) 172.17.20.3(rw,sync,no_root_squash) On both web servers (172.17.20.2/3) I have it mounted like below: [root@web2 ~]# mount ... 172.17.20.1:/var/nfs on /mnt/nfs/var/nfs type nfs (rw,sync,hard,intr,addr=172.17.20.1) On all the servers I've added the user apache to the root group to get the desired write access: [root@main ~]# cat /etc/group root:x:0:root,apache .... .... apache:x:48: [root@web1 ~]# cat /etc/group root:x:0:root,apache .... .... apache:x:48: Despite all this, when I try to write files into the /mnt/nfs/var/nfs folder from Drupal/PHP, it cannot write to it. I even tried a simple PHP upload script, but that doesn't work either, so the problem is not with Drupal. Any help you can give is much appreciated; I've spent hours and hours on it without any success. Thanks in advance.
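
    Two things typically block this setup, and neither is fixed by adding apache to the root group (group membership doesn't help unless the export directory is group-writable). A hedged sketch, relying on apache being uid/gid 48 on every box as the /etc/group output above suggests:

        # on the NFS server (172.17.20.1): make the export writable by the apache account
        chown apache:apache /var/nfs
        chmod 775 /var/nfs
        # or, alternatively, squash all client access to apache in /etc/exports:
        # /var/nfs 172.17.20.2(rw,sync,all_squash,anonuid=48,anongid=48) 172.17.20.3(rw,sync,all_squash,anonuid=48,anongid=48)
        exportfs -ra

        # on each web server: SELinux blocks httpd from writing to NFS unless this boolean is on
        setsebool -P httpd_use_nfs 1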

    Read the article

  • Why does domain.com appear as theplanet.com when hosted on HostGator?

    - by silow
    I have a script that's supposed to detect the URL of its caller website. If the caller is another website, it should give something like http://callersite.com. I'm using this line of PHP code (though I suspect this won't matter for sysadmins): gethostbyaddr($_SERVER['REMOTE_ADDR']). I'm testing with a caller site that's hosted on HostGator. What I'm noticing, though, is that I don't get callersite.com; I get something like 1a.12.12ab.static.theplanet.com. I don't know what theplanet.com is or why I'm not getting callersite.com. Also, what do I need to do to really get the domain of the site making a call to my script? -- Thanks for the explanation. Some have advised I use $_SERVER['HTTP_REFERER'], but that's not what I'm after. My script acts as an API: another website makes a curl request to it, gets an output, and later presents it to the user. So the HTTP referrer comes back empty, since callersite.com is making a direct call to me. Is there any hope?
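
    What is happening: gethostbyaddr() does a reverse (PTR) DNS lookup on the connecting IP, and the PTR record is set by whoever owns the IP block (here The Planet, the datacenter behind that HostGator box), not by the sites hosted on it. A single shared-hosting IP can carry hundreds of domains, so the caller's domain simply cannot be recovered from the IP alone. The practical fix is to make API callers identify themselves (an API key or a domain parameter) and then verify the claim, for example by checking that the claimed domain resolves to $_SERVER['REMOTE_ADDR']. Equivalent lookups from the shell:

        # what gethostbyaddr() sees: the PTR name chosen by the IP's owner (the IP is an example)
        dig -x 192.0.2.10 +short
        # verifying a caller's claimed domain against the connecting address
        dig +short callersite.com A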

    Read the article
