Search Results

Search found 5709 results on 229 pages for 'behind the scenes'.

Page 170 of 229

  • Connect USB hard drive to wireless router on RJ45 port? Possible?

    - by lawphotog
    Just a quick story behind this. I am trying to set up a wireless networked hard drive at home. My wireless router doesn't take USB, so I am considering a few options. First I considered getting something like a WD My Cloud. My router is an old one provided by my service provider and only has 10/100 Ethernet, while the WD My Cloud has a Gigabit interface, so unless I change to a new router, data transfer will be slow. Upgrading the router is therefore a must if I want fast transfer speeds. Plus, I already own an external hard drive with a USB 3.0 interface, so if I get a router like the Netgear D6300 I can get a decent-speed wireless shared drive at home and use my existing HDD instead of a WD My Cloud. But that router isn't cheap, so I am saving up for it. In the meantime I found out that USB-to-RJ45 adaptors exist. I read the reviews: some say it works for them and for some it doesn't, but they didn't really say what they were trying to do, so I'm confused. If I bought an adaptor like this, could I connect my existing HDD (USB) to my existing router (RJ45) and use it as a shared drive for data transfer? I know it will be slow, as the adaptor will only have USB 2.0 and 10/100 Ethernet, but that's fine as it's only temporary until I get my new router.

    Read the article

  • Cisco ASA 5505 - InterVLAN NAT Exemptions Implementation not working

    - by Brandon Bearden
    Short version: we cannot communicate between our subnets. We have a Cisco ASA 5505 that we are using as our network router, and a Netgear L3 switch behind it with 10 VLANs. Each VLAN is on its own subnet (10.0.10.x/24, 10.0.11.x/24, etc.), so the topology is ASA -> switch -> hosts. We have PAT for each subnet to our outside interface, and each subnet NATs out properly. I have NAT exemption enabled for 2 of the subnets (eventually I will need it for all of them, but I am just testing at the moment). The config is here: http://pastebin.com/pDsG7hsh I have tried multiple ways of getting the NAT exemption to allow all traffic from our inside VLANs. At this point I am trying to get "Engineering" to communicate with all hosts on "AuthUser". I can ping some hosts, but not as many as when I am directly on that interface. I can reach a service on port 80, but not 443, and I cannot access anything via hostname or NetBIOS. What am I missing to allow higher-security-level interfaces to fully communicate with lower-security-level interfaces? Thanks!
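
    For reference, on a pre-8.3 ASA the usual shape of NAT exemption is a "nat 0 access-list" per interface. A rough sketch along those lines - the subnets and the 8.2-era syntax are assumptions, not taken from the pastebin config, though the interface names match the ones mentioned above - would be:

      ! exempt Engineering-to-AuthUser traffic from translation (ASA 8.2-style syntax)
      access-list ENG_NONAT extended permit ip 10.0.10.0 255.255.255.0 10.0.11.0 255.255.255.0
      nat (Engineering) 0 access-list ENG_NONAT
      ! mirror rule so connections initiated from the AuthUser side are also exempt
      access-list AUTH_NONAT extended permit ip 10.0.11.0 255.255.255.0 10.0.10.0 255.255.255.0
      nat (AuthUser) 0 access-list AUTH_NONAT

    Separately, hostname and NetBIOS lookups failing across subnets is usually a broadcast/WINS/DNS issue rather than a NAT one, so that symptom may persist even once the exemption itself works.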

    Read the article

  • What steps should I take to secure Tomcat 6.x?

    - by PAS
    I am in the process of setting up a new Tomcat deployment and want it to be as secure as possible. I have created a 'jakarta' user and have jsvc running Tomcat as a daemon. Any tips on directory permissions and such to limit access to Tomcat's files? I know I will need to remove the default webapps - docs, examples, etc. Are there any best practices I should be using here? What about all the config XML files? Any tips there? Is it worth enabling the security manager so that webapps run in a sandbox? Has anyone had experience setting this up? I have seen examples of people running two instances of Tomcat behind Apache. It seems this can be done using mod_jk or with mod_proxy - any pros/cons of either? Is it worth the trouble? In case it matters, the OS is Debian Lenny. I am not using apt-get because Lenny only offers Tomcat 5.5 and we require 6.x. Thanks!
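
    On the permissions side, a rough sketch of the usual lockdown - the /opt/tomcat path is an assumption, adjust for where CATALINA_HOME actually lives - is:

      # run as root; assumes Tomcat is unpacked in /opt/tomcat and runs as the 'jakarta' user
      chown -R root:jakarta /opt/tomcat
      chmod -R g-w,o-rwx /opt/tomcat
      # only the directories Tomcat must write to stay writable by the service account
      chown -R jakarta:jakarta /opt/tomcat/logs /opt/tomcat/temp /opt/tomcat/work
      # drop the default webapps (docs, examples, manager apps, default ROOT)
      rm -rf /opt/tomcat/webapps/docs /opt/tomcat/webapps/examples \
             /opt/tomcat/webapps/manager /opt/tomcat/webapps/host-manager /opt/tomcat/webapps/ROOT
      # config stays readable by the service account but not writable
      chown -R root:jakarta /opt/tomcat/conf
      chmod 750 /opt/tomcat/conf
      chmod 640 /opt/tomcat/conf/*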

    Read the article

  • Setting up a test and live environment - how?

    - by Sean
    I am a bit new to servers and such, so I had a question. I have my development team working on my website. They are in different countries and currently they put all the work live on the test site, but the test site is open to anyone who knows the URL. It is behind a directory, but this affects my QA process because I cannot use the accurate URL structures to prevent the general public from seeing it. So what I want to do is: have my site live on the net but only for me and my team, like an internal network. I will also need to mirror this to my live site when I put it live. So I guess this is something like setting up a staging and live environment. How do I do it, and are both environments on the same physical server or do I need to buy two servers? And if I set up a staging environment, how will my team and I access it, since we are all spread out - I assume we need to log into something to access it? What about the URL - do I need a different URL for the test site or can I use the same live URL for the test site? I plan to get a dedicated server + CDN for my site.
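
    If the servers end up running Apache, one common way to keep a staging copy reachable for a distributed team but closed to the public is to protect it at the web server with a password and/or an IP allow-list; a minimal sketch (hostname, paths and IPs are placeholders, Apache 2.2 syntax assumed) is:

      <VirtualHost *:80>
          ServerName staging.example.com
          DocumentRoot /var/www/staging
          <Directory /var/www/staging>
              Order deny,allow
              Deny from all
              # team members' office/home IPs
              Allow from 203.0.113.10 198.51.100.22
              # or a shared password as a fallback
              AuthType Basic
              AuthName "Staging"
              AuthUserFile /etc/apache2/staging.htpasswd
              Require valid-user
              # grant access on either a matching IP or valid credentials
              Satisfy Any
          </Directory>
      </VirtualHost>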

    Read the article

  • How to automate downloading files?

    - by Damon
    I got a book which had a pass to access digital versions of hi-res scans of much of the artwork in the book. Amazing! Unfortunately the presentation of all of these is 177 pages of 8 images each, with links to zip files of jpgs. It is extremely tedious to browse, and I would love to be able to get all the files at once rather than sitting and clicking through each one separately. The pages are archive_bookname/index.1.htm - archive_bookname/index.177.htm, and each of those pages has 8 links to files such as <snip>/downloads/_Q6Q9265.jpg.zip, <snip>/downloads/_Q6Q7069.jpg.zip, <snip>/downloads/_Q6Q5354.jpg.zip, which don't quite go in order. I cannot get a directory listing of the parent /downloads/ folder. Also, the files are behind a login wall, so using a non-browser tool might be difficult without knowing how to recreate the session info. I've looked into wget a little, but I'm pretty confused and have no idea if it will help me with this. Any advice on how to tackle this? Can wget do this for me automatically?
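
    wget can usually be talked into this if you hand it the session cookie from a browser that is already logged in; a rough sketch - the host name and cookie file are placeholders, and the accept pattern assumes the links really all end in .jpg.zip and live on the same host as the index pages - looks like:

      # first export the logged-in session's cookies from the browser to cookies.txt
      for i in $(seq 1 177); do
        wget --load-cookies cookies.txt \
             --recursive --level=1 --accept='*.jpg.zip' \
             --no-directories --directory-prefix=downloads \
             "http://example.com/archive_bookname/index.$i.htm"
      done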

    Read the article

  • D-Link wireless router losing outbound data

    - by gsteinert
    I have a Linux box running the Apache web server behind a D-Link wireless router (nothing fancy, just the standard kit that comes with Virgin Media broadband). My issue is that when requesting web pages (from within the network or via the web), the end of the page seems to be dropped. For example, I tried to display a text-only file, and all I could get was the first 40-70% of the file (it changed slightly with each refresh). The Apache access logs show that only part of the data was sent (~6000 bytes instead of the 12000+ bytes of the file). Removing my router from the equation fixes the issue and I can download any files, no matter the size, with no problems. My theory is that the uploaded packets are either being dropped or held up by the config of the router. Is there anything I can do to alleviate the problem? (Perhaps a way of reconfiguring the router to upload packets harder/better/faster/stronger, or an option in Apache that provides a workaround.) As a last resort I will get a second NIC for my Linux box and turn it into a router, but that would mean the box will be on 24/7... not the most ideal of circumstances. Gary
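
    A cheap thing to rule out before replacing hardware is the router mangling large frames; a quick MTU probe from the Linux box (the gateway address is a placeholder) would be:

      # 1472 bytes of ICMP payload + 28 bytes of headers = a full 1500-byte packet, sent with DF set
      ping -M do -s 1472 -c 4 192.168.0.1     # to the router itself
      ping -M do -s 1472 -c 4 8.8.8.8         # out through the router
      # if the big pings fail, step -s down until they pass to find where fragmentation breaks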

    Read the article

  • Autossh startup on Ubuntu 10.04 - fails after powering off

    - by grant
    I'm using upstart to keep a reverse SSH tunnel alive using autossh, similar to Using Upstart to Manage AutoSSH Reverse Tunnel. This works fine, except after a manual power-down I can no longer connect to the machine through the "central server" using the tunnel; I receive "ssh_exchange_identification: Connection closed by remote host". The autossh process is running on the client, and I can connect again after restarting networking. I'm trying to figure out why this fails consistently after a manual shutdown. Is it possible that I need to do some cleanup on startup that would allow the tunnel to work in this situation, or are there some other debugging/troubleshooting steps I can take to determine the problem? Machine A is the client machine, using autossh. This machine sits behind a firewall and uses the following command in upstart to create an SSH tunnel: /usr/bin/autossh -fN -i /keyfile -o StrictHostKeyChecking=no -R 20098:localhost:22 user@centralserver Machine B, which we'll call the "central server", sits in the cloud and is the host; it is "centralserver" in the command above. When Machine A is hard powered off and back on, I cannot connect to it by SSH'ing from my machine (C) to Machine B in the cloud and then using the following command to get to Machine A: ssh -p 2098 user@localhost Again, after a reboot of the client (A) this works fine; it is only after a hard power-down that the problem occurs. There are autossh processes running on the client machine (A) after powering down and back up, but they just don't seem to be doing their job.
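
    For what it's worth, this symptom (autossh alive but the forward dead until networking is bounced) is often down to ssh never noticing the broken forward; a hedged variant of the upstart command that makes ssh exit on a dead tunnel so autossh can rebuild it would be:

      # -M 0 disables autossh's monitor port and relies on ssh's own keepalives instead
      /usr/bin/autossh -M 0 -fN -i /keyfile \
          -o StrictHostKeyChecking=no \
          -o ExitOnForwardFailure=yes \
          -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
          -R 20098:localhost:22 user@centralserver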

    Read the article

  • How can I optimize my AJAX calls to deliver in 60 ms?

    - by Quintin Par
    I am building autocomplete functionality for my site and Google's instant results are my benchmark. When I look at Google, the 50-60 ms response times baffle me. They look insane. In comparison, here's how mine looks. To give you an idea, my results are cached on the load balancer and served from a machine that has httpd slow-start and initcwnd fixed. My site is also behind Cloudflare. From a server-side perspective I don't think I can do anything more. Can someone help me take this 500 ms response time down to 60 ms? What more should I be doing to achieve Google-level performance? Edit: People, you seem to be angry that I made a comparison to Google and that the question is very generic. Sorry about that. To rephrase: how can I bring the response time down from 500 ms to 60 ms, given that my server response time is just a fraction of a millisecond? Assume the results are served from Nginx - Varnish with a cache hit. Here are some answers I would give myself, assuming the response sizes remain more or less the same: ensure the results are HTTP compressed; ensure SPDY if you are on HTTPS; ensure you have initcwnd set to 10 and disable slow start on Linux machines; etc. I don't think I'll end up with 60 ms at Google's level, but your collective expertise can easily help shave off 100 ms, and that's a big win.
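
    To make that list concrete on the nginx side, a sketch of the compression and initcwnd pieces (all values are illustrative, and the SPDY line only applies to an HTTPS vhost on an SPDY-capable build) might be:

      # nginx http {} block: compress the autocomplete responses
      gzip              on;
      gzip_types        application/json text/plain;
      gzip_comp_level   5;

      # nginx server {} block, HTTPS only, if the build has SPDY support
      # listen 443 ssl spdy;

      # on the Linux host: raise the initial congestion window on the default route (run as root)
      # ip route change default via 203.0.113.1 dev eth0 initcwnd 10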

    Read the article

  • Remotely port forward/launch process or a client-less remote desktop app?

    - by DC177E
    I have an XP box running LogMeIn at a remote location behind a Linksys router, which was running well for all of four days until we had a power failure. Our ISP gave us a new IP, the machine restarted, and LogMeIn did not autorun (or, at least, it did not automatically sign in), and our service (which may or may not be a Minecraft server with non-backed-up save files) also did not run on startup. LogMeIn does not register the new IP (it still displays the old one). I have a DDNS updater service, so I do know the new dynamic address. I have tried using the built-in XP remote desktop service, but, as with almost all non-cloud-based remote desktop services, it requires a port forward. Thus, I would appreciate it if anyone has any ideas as to: A: Any way of accessing our router remotely to forward the remote desktop port. I've seen the Remote Management option (forwarding the setup page to port 8080), but I do not have it enabled. I've tried UPnP, but again, the setup page for our router is not forwarded. B: Any way of remotely launching a process that does not require port forwarding (or uses ports 255XX, 18XXX, or 9000), such as a remote console service built into XP. I realize this is a near impossibility. C: A way to remotely start LogMeIn and sign in, which is likely a definite impossibility. Sorry if this is too specific for Stack Exchange, or if I've put it into the wrong section (is Super User the correct place for this?). Ideas would, again, be much appreciated, shot-in-the-dark as this may be.

    Read the article

  • nginx reverse proxy slows down my throughput by half

    - by Isaac A Mosquera
    I'm currently using nginx to proxy back to gunicorn with 8 workers, on an Amazon extra-large instance with 4 virtual cores. When I connect to gunicorn directly I get about 10K requests/sec. When I serve a static file from nginx I get about 25 requests/sec. But when I place gunicorn behind nginx on the same physical server I get about 5K requests/sec. I understand there will be some latency from nginx, but I think there might be a problem since it's a 50% drop. Has anybody heard of something similar? Any help would be great! Here is the relevant nginx conf: worker_processes 4; worker_rlimit_nofile 30000; events { worker_connections 5120; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; } sites-enabled/default: upstream backend { server 127.0.0.1:8000; } server { server_name api.domain.com ; location / { proxy_pass http://backend; proxy_buffering off; } }
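
    One knob that often accounts for a gap like this is that, by default, nginx opens a brand-new connection to the upstream for every request; a hedged tweak to the config above (it needs nginx >= 1.1.4 and a gunicorn worker class that actually honours keep-alive) is:

      upstream backend {
          server 127.0.0.1:8000;
          # pool of idle connections kept open to gunicorn
          keepalive 64;
      }
      server {
          server_name api.domain.com;
          location / {
              proxy_pass http://backend;
              # upstream keep-alive requires HTTP/1.1 and an empty Connection header
              proxy_http_version 1.1;
              proxy_set_header Connection "";
              proxy_buffering off;
          }
      }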

    Read the article

  • Java games applet not connecting to Yahoo

    - by Steve
    Hi. I am trying to play a Y! game which uses a Java applet. The applet displays the message: "Alert. Unable to connect to server. One of four things could have caused this: 1) You are behind a firewall. 2) You are not connected to the internet. 3) The games server is down. 4) You have a stale page in your cache." I have added an exception to the Windows firewall for java.exe. I am obviously connected to the Internet okay. The games server is not down when I am at home, and I doubt it is down when I am at work. I have never successfully loaded this page before, so I doubt I have a stale page in the cache. Could it be the corporate firewall? Nothing else in my web browser has been blocked before. Maybe the Java applet connects on a different port than the browser does. What should I test?
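
    One way to narrow it down is to learn which port the applet actually uses from a machine where it works (home), then test whether that port is reachable from work; the host name and port below are placeholders:

      rem at home, in an elevated command prompt while the game is loading (-b shows the owning exe)
      netstat -bn
      rem note the remote address:port listed for java.exe, then from the work machine try:
      telnet games.example.com 11999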

    Read the article

  • Exchange 2007 Backup - For a newbie

    - by mew3900
    I am trying to set up an Exchange 2007 backup solution. After doing a lot of reading: with Server 2008, Microsoft has decided that unless you are willing to spend a great deal on a 3rd-party solution, you are pretty stuck! Essentially what I have been asked to do is perform an offline file backup of our current Exchange server and replicate this onto a new second server. The reasoning behind this is that we need to upgrade our current installation of Exchange 2007 to SP2 so that the Exchange plug-in for Windows Server Backup will be available to us. From this I can then actually take an Exchange-aware backup weekly and take it off site. Ideally we can then also migrate to this new server and keep the old one as a failover. Is there a way I can copy the required files across onto a second server? Although I doubt very much it is that simple. I may be barking up completely the wrong tree; however, I have very limited knowledge of Exchange, and any help and advice on how I would resolve this would be much appreciated. Thanks in advance
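
    Once SP2 and the Windows Server Backup plug-in are in place, the weekly Exchange-aware backup itself is usually a one-liner; a hedged example (drive letters are placeholders, and -vssfull is what lets the registered plug-in truncate the Exchange logs) is:

      rem run in an elevated prompt on the Exchange 2007 SP2 server (Server 2008)
      wbadmin start backup -backupTarget:E: -include:C:,D: -vssfull -quiet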

    Read the article

  • What other ways can I load balance EC2 servers without using Elastic Load Balancing?

    - by undefined
    I have a web application that consists of a web server managed by a web hosting firm, a set of EC2 instances in Amazon's cloud, and a MySQL database (hosted on the web server). MySQL is behind a firewall and is set to allow access from localhost and from a single IP address, which is an Amazon Elastic IP address attached to the EC2 instance I have been running up to now. The problem is that I want to look at my scaling-up and load-balancing strategy for my EC2 instances. To this end I have been investigating the Elastic Load Balancers and Auto Scaling tools that Amazon provides and have managed to set this up fine, except for one thing - connecting to the MySQL database running on my web server. I realised (thanks to answers on Server Fault) that I needed to check the firewall settings and add the IP address for the load balancer; however, Elastic Load Balancers give you a DNS name, not an IP address, and in fact the IP addresses change over time, so this will not work. I have been told by the company hosting the database that the way the firewall works is to look up the IP address of the DNS name and store the IP rather than the DNS name, so basically this will not work and the only way to allow access would be to open up the SQL port to allow access from anyone! Is this a viable idea? Should I look at moving my database into the cloud? Is there another firewall that the server company can use? Should I find another way of load balancing (if so, what)? Tricky one, eh? Any help appreciated!
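
    On the "another way of load balancing" question: one common pattern is to run your own balancer (for example HAProxy) on a small instance with an Elastic IP attached, so the inbound address never changes even as the web instances behind it come and go; a minimal sketch (addresses and the health-check path are placeholders) is:

      # /etc/haproxy/haproxy.cfg on a small EC2 instance holding an Elastic IP
      defaults
          mode http
          timeout connect 5s
          timeout client  30s
          timeout server  30s

      frontend www
          bind *:80
          default_backend app

      backend app
          balance roundrobin
          option httpchk GET /health
          server web1 10.0.1.11:80 check
          server web2 10.0.1.12:80 check

    Note that the web instances still make their outbound MySQL connections from their own addresses, so for the database firewall itself a NAT instance with a fixed Elastic IP (or moving the database nearer the app) may still be needed.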

    Read the article

  • Apache 2 settings for high traffic website

    - by Harry
    I'm having problems with the load on my website. It's an Amazon EC2 server with 15 GB RAM and 4 CPUs behind an LB. apachetop says I'm getting around 80 reqs per second, which seems really low for this kind of server, and the load (given by top) is usually around 15 but does increase to about 150 over 24 hrs. I'm seeing about 100 active Apache processes at any time. Apache is in prefork mode. MySQL is used very little on the server and there are almost no static files. Here are my Apache settings: Timeout 20 KeepAlive Off MaxKeepAliveRequests 0 KeepAliveTimeout 3 <IfModule mpm_prefork_module> StartServers 40 MinSpareServers 25 MaxSpareServers 40 ServerLimit 400 MaxClients 400 MaxRequestsPerChild 4 </IfModule> Can anyone advise on how to tweak the settings? Thanks! Edit: The config was arrived at by trial and error. Any change to these lines - and I mean by any amount - makes the load skyrocket in like 5 minutes. It literally jumps to 200-300 in a matter of minutes, especially MaxRequestsPerChild. I've tried it with 10, 15, 100, 1000 and the load just skyrockets. About PHP - there are actually only a few PHP files, which aren't really that expensive at all; they just spit some simple stuff out. If I turn on KeepAlive, the load also goes through the roof.
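
    Before tuning the numbers further it is usually worth seeing what those ~100 processes are actually doing (busy in PHP, writing to the network, or idle); a minimal mod_status stanza for that - the allowed IP is a placeholder, and mod_status must be loaded - is:

      ExtendedStatus On
      <Location /server-status>
          SetHandler server-status
          Order deny,allow
          Deny from all
          # your own IP only
          Allow from 203.0.113.5
      </Location>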

    Read the article

  • Reverse NAT Setup for Hyper-V on Win 2008 R2

    - by sukru
    I'm trying to set up a Linux server behind a Windows Hyper-V host that will supply some of the services (SSH, HTTPS, etc.). However, getting RRAS configured for reverse NAT (port forwarding) turned out to be a non-trivial task. As a starting point, I tried forwarding port 22 (SSH) to the virtual machine. The virtual machine is on a public interface (i.e. it also has a visible IP on the same network as the host). On the RRAS management console I tried to add a rule by adding "Local Area Connection" to the NAT pool (Public Interface - Enable NAT), and an incoming rule for port 22 - :22. I also tried with the same port enabled on Windows Firewall (and without). The NAT management page says there are "1 mappings" and "30+ Outbound packets translated", but all the other counters (Inbound packets translated, and the respective rejected ones) are always zero. (I'm trying to access the server from an external machine.) I can reach the service directly if I use the VM's public IP, but not the host's. Is there a way to enable this on RRAS?
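
    If RRAS keeps refusing to translate inbound traffic, one fallback that avoids RRAS entirely is the port proxy built into Windows; a hedged example on the 2008 R2 host (the VM's address is a placeholder) is:

      rem run in an elevated prompt on the Hyper-V host
      netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=22 connectaddress=192.168.1.50 connectport=22
      rem allow the listener through the host firewall
      netsh advfirewall firewall add rule name="SSH to Linux VM" dir=in action=allow protocol=TCP localport=22
      rem verify
      netsh interface portproxy show v4tov4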

    Read the article

  • How to find the reason for a weekly downtime on an Ubuntu web server hosted by AWS?

    - by IceSheep
    We started monitoring our web server using Pingdom and found out that we have a downtime of a few minutes every Sunday at 0:00 UTC. The test runs every minute and checks whether a successful HTTP response (code 200) is returned on port 80; the test fails due to a timeout (no response after 30 seconds). Here's what we've already checked - without success: Since we run our web server behind a load balancer, I've pointed the Pingdom test at both the load balancer's public DNS and the web server's public DNS in order to find out if there's a problem with the AWS load balancer - both tests return the same result. We set up Munin on our web server; everything looked fine even after the failure. Since the last failure lasted only 2 minutes I suppose Munin couldn't capture a potential problem (it only checks every 5 minutes). I have checked /var/log/apache2/error.log and /var/log/syslog for suspicious entries. I have checked /etc/cron.weekly and /etc/crontab for suspicious entries. I have searched for files created or last modified between 0:00 and 0:15 using this method: touch -t 201209020000 start touch -t 201209020015 end find / -newer start -and ! -newer end (nothing found). Has anybody experienced a similar problem? Any proposals on how to find the reason for this behavior? It's Ubuntu 10.04 LTS running on an AWS m1.large instance. Thanks!
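
    Since the window is known in advance, another option is to let the box record its own state as the failure happens; a throwaway sketch (the log path is a placeholder, and the day/hour fields assume the outage really does start at Sunday 00:00 UTC) is:

      # /etc/cron.d/sunday-probe - sample state once a minute from Sat 23:58 to Sun 00:03 UTC
      58-59 23 * * 6 root { date; uptime; ss -s; top -bn1 | head -25; } >> /var/log/sunday-probe.log 2>&1
      0-3   0 * * 0 root { date; uptime; ss -s; top -bn1 | head -25; } >> /var/log/sunday-probe.log 2>&1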

    Read the article

  • Can I use squid (or anything) to do this?

    - by user269334
    I have a really crappy VPS, and a really good computer at my office (with a really good internet connection), but it is behind a NAT. Is it possible to expose my good computer by doing this: 1. The good computer connects to the VPS (and keeps the connection alive). 2. A user connects to the VPS and sends http(s) requests to the VPS. 3. The VPS just passes those http(s) requests to the good computer (including some identification, so the servers can distinguish connections). 4. The good computer passes the http(s) response to the VPS. 5. In turn, the VPS receives the http(s) response and passes it back to the client. Is it possible to do this? (By the way, the VPS and the good computer are located in different countries.) And also, is this a "reverse proxy"? I heard that a reverse proxy is for protecting the internal network by putting a server in the middle. And will this affect SSL configurations (or make SSL impossible)? I'm intending to run nginx on the good computer. Thanks in advance : )
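
    Yes - that is essentially a reverse proxy fed through a reverse tunnel, and SSL is still possible: it terminates wherever you put the certificate (on the VPS if it proxies HTTPS, or on the good computer if the VPS just forwards raw TCP on 443). A minimal sketch using an SSH reverse tunnel plus nginx on the VPS (host names and ports are placeholders) is:

      # on the good computer: keep a reverse tunnel up to the VPS (autossh, or plain ssh)
      autossh -M 0 -fN -o ServerAliveInterval=30 -o ExitOnForwardFailure=yes \
          -R 8080:localhost:80 user@vps.example.com

      # on the VPS: nginx hands public requests into the tunnel
      server {
          listen 80;
          server_name app.example.com;
          location / {
              proxy_pass http://127.0.0.1:8080;
              proxy_set_header Host $host;
              proxy_set_header X-Forwarded-For $remote_addr;
          }
      }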

    Read the article

  • Tunnel network requests with Windows 7

    - by mark
    I have a Windows 7 64-bit Pro client in a private LAN behind a Netgear WGR614v7 router. I also have a remote Debian server machine outside. I'd like to tunnel all traffic (or specified ports/protocols) over this outside server, so that when I'm on the Windows machine and I request serverfault.com, it would appear to come not from the WGR614v7's public IP but from the server. And it's not only about HTTP traffic - it's basically about everything I'd like to: other TCP ports, even UDP, etc. It must be transparent to the applications, i.e. they shouldn't be aware of it; all their requests just appear as coming from the server, and the tunnel between them takes care of the packets. I'm aware of e.g. PuTTY and forwarding individual ports or using it as a SOCKS proxy; however, not many applications support this, and the support in Windows itself looks non-existent to me. I might add it should be something reasonably easy to set up. I've heard about PPTP but I'm unsure about its security implications (by design). Should I go for a VPN? There seem to be two common solutions for Linux (Openswan and strongSwan); why would I pick one over the other? I also fear that setting up a VPN might be quite complex; OTOH maybe it's the only sane way to do things right? Or is OpenVPN sufficient? I'm looking for open (source) solutions; what other options do I have, or which direction should I head in?

    Read the article

  • Daylight saving time not changing on some Active Directory domain clients

    - by Nick Gorbikoff
    We just had the daylight saving time change in the US, and PCs on my network are behaving strangely - some of them changed their time and some didn't. My network: two locations, both in the Midwest in the same time zone. Location 1: 120 PCs (Windows XP & Windows 2000), with 1 Active Directory Domain Controller on Windows 2003 Standard, and a couple of Windows 2000 servers (they are up to date); the rest of the servers are Xen or Debian machines (all up to date). The second location is connected through an OpenVPN link; all its PCs are running fine, but they all connect to our AD domain controller. Location 2: 10 PCs and a shared LAN NAS. Both of the routers/firewalls in both locations are pfSense boxes with the NTP service running, and they are up to date. Tried all the usual suspects: I have all the latest updates installed; I restarted the machines; the domain controller is running fine; most computers are running fine; I have only one domain controller on my network; my firewall (pfSense) also serves as the NTP server, and it's up to date. All of the Linux machines are fine since they query the firewall/router for the time. About 1/3 of my PCs are 1 hour behind. If I change them manually they just change back (the way domain PCs are supposed to). I've tried everything but I can't think of anything else to try.

    Read the article

  • Routing Traffic With OpenVPN

    - by user224277
    A few minutes ago I configured my VPN server, and I can actually connect to my VPN, but all traffic is still going through my normal home network. In my OpenVPN application I've got this information: Server IP: **.185.***.*10 Client IP: 10.8.0.6 Traffic: 7.3 KB in, 5.6 KB out Connected: 10 June 2014 19:21:59 So everything is connected, but how can I set up Windows 7 so that all traffic goes through the OpenVPN network adapter? Client settings: client dev tun proto udp # enter the server's hostname # or IP address here, and port number remote **.185.***.*10 1194 resolv-retry infinite nobind persist-key persist-tun # Use the full filepaths to your # certificates and keys ca ca.crt cert user1.crt key user1.key ns-cert-type server comp-lzo verb 6 Server settings: port 1194 proto udp dev tun # the full paths to your server keys and certs ca /etc/openvpn/keys/ca.crt cert /etc/openvpn/keys/server.crt key /etc/openvpn/keys/server.key dh /etc/openvpn/keys/dh2048.pem cipher BF-CBC # Set server mode, and define a virtual pool of IP # addresses for clients to use. Use any subnet # that does not collide with your existing subnets. # In this example, the server can be pinged at 10.8.0.1 server 10.8.0.0 255.255.255.0 # Set up route(s) to subnet(s) behind # OpenVPN server push "dhcp-option DNS 8.8.8.8" push "dhcp-option DNS 8.8.4.4" ifconfig-pool-persist /etc/openvpn/ipp.txt keepalive 10 120 status openvpn-status.log verb 6 and sysctl: net.ipv4.ip_forward=1 Thank you for your time and help.
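
    The piece that usually makes clients route everything through the tunnel is a redirect-gateway push from the server, plus NAT on the server's public interface so the tunnelled traffic can get back out; a hedged addition to the config above (eth0 is an assumption for the server's public interface) is:

      # server.conf - add next to the existing push lines
      push "redirect-gateway def1 bypass-dhcp"

      # on the Linux server, masquerade the VPN subnet out of the public interface
      iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE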

    Read the article

  • Passive mode FTP file download hangs from specific machine

    - by chiptuned
    I have a server, an AWS instance, that just cannot download files from one specific FTP server. I can connect to the FTP server fine and run some commands, but when I request a file it just hangs. Here is the debug output of the basic Linux ftp client after login: ---> SYST 215 UNIX Type: Apache FtpServer Remote system type is UNIX. ftp> get outgoing/catalog.gz catalog.gz local: catalog.gz remote: outgoing/catalog.gz ---> PASV 227 Entering Passive Mode (64,156,167,125,135,191) ---> RETR outgoing/catalog.gz 150 File status okay; about to open data connection. That's it. Then it just sits there and nothing transfers. I have verified that a data connection is made, but the client gets no data. ? ss -nt dst 64.156.167.125 State Recv-Q Send-Q Local Address:Port Peer Address:Port ESTAB 0 0 10.185.147.150:41190 64.156.167.125:21 ESTAB 0 0 10.185.147.150:48871 64.156.167.125:48557 The FTP server is not in my control, and downloads from other FTP servers in passive mode have worked. Active mode does not work as the system is behind a firewall. Every FTP client I've tried has the same problem. The download works from other systems, even from other AWS instances I have with the same security group - not necessarily the same distro or config, though. I understand it may be some issue on the server side, but I want to know what it is about my particular machine that makes the transfer hang when it works on every other machine I can get my hands on. Please let me know what the culprit on the client side could be, or give me ideas on what else to look at.
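
    To see whether any data ever arrives on the data connection (as opposed to being dropped somewhere along the path), it can help to capture on the client during one hung RETR; the interface name is a placeholder, and the MTU check is worth doing because oversized frames combined with a path-MTU blackhole can stall exactly this kind of bulk transfer:

      # capture control and data traffic to the FTP server while retrying the download
      sudo tcpdump -ni eth0 -c 300 host 64.156.167.125
      # check whether this instance is using jumbo frames (mtu 9001) versus a 1500-byte path
      ip link show eth0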

    Read the article

  • HAProxy appsession vs cookie precedence

    - by user1139473
    I am trying to find the best solution for balancing and keeping persistence on our application behind HAProxy. Here is our basic configuration: https://gist.github.com/endzyme/1804046b23c37beba520 After playing around with taking members down and up and also reloading haproxy (with -sf), I have noticed that appsession isn't 100% effective; it would appear that it doesn't always 'request-learn'. I also tried adding a cookie JSESSION prefix to the balancing in case request-learn didn't take. Unfortunately that produced scenarios where the prefix would list svr2 but the request was balanced to a different server. I am assuming that's because the appsession table is consulted first and sticks on that before using the cookie parameter. I have not tested using the cookie as an inserted option (rather than a prefix on the existing cookie), but I am thinking it would yield similar results. My question is: which one is checked first, appsession or cookie, and is it an immediate match after it reads the first one, or a fall-through? Also, as a follow-up: is it not recommended to use both in the same backend? Cookie, as I understand it, takes less memory, is agnostic to reloads and has far better persistence reliability; appsession I assume takes less CPU, since it's reading rather than writing. (Bonus question: is there a way to inspect the appsession/cookie table map? "show table" on the socket doesn't show anything except stick-tables.) Many thanks in advance, -Nick
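
    For comparison, a cookie-insert-only backend (no appsession, no prefix on JSESSIONID) sidesteps the precedence question entirely, since the chosen server is written into its own cookie and so survives reloads; a rough sketch with placeholder addresses is:

      backend app
          balance roundrobin
          cookie SRVID insert indirect nocache
          server svr1 10.0.0.11:8080 check cookie svr1
          server svr2 10.0.0.12:8080 check cookie svr2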

    Read the article

  • I can get in, but I can't get out

    - by robwilkerson
    Like most technical folks, I suppose, I'm my family's primary source of tech support. I'm a developer--not a sysadmin--by trade and tonight I bumped into something I've never seen before. I'm hoping someone here has. In order to better help my Mom, I have her set up on a home network behind a Linksys router (WRT54G). She's got a Mac, so I have her router set up to forward SSH requests to her laptop's internal IP. I also have her router running DDNS through DynDns. Tonight she called to tell me that she can't access the Internet. Assuming it was one of the many simple, stupid problems most of us encounter with parents, I logged into the router admin remotely and took a look around. Everything looked normal. Then I SSH'd into her machine to check out her IP, DNS, etc. settings. Everything still looked fine. Then I noticed something weird. When SSH'd into her machine, I can't ping her router. In other words, I seem to be able to access her computer through her router, but not access her router from her computer. A traceroute dies immediately as well. Any ideas what I might try next? I've bounced her computer and even unplugged her router (it was plugged back in, of course). Thanks.

    Read the article

  • Print over the internet from a remote Linux session to printers shared locally (on a Windows 7 machine)?

    - by obeliksz
    I'm trying to use a Linux virtual machine as a file server for Windows clients. I have successfully implemented remote file sharing (Samba + SSH), with which I am able to print locally using a little program that I made for this purpose (JetForm style)... but I would like to hear about a somewhat more direct approach. How can I attach the printers to the server, so that I can, for example, open a file in the remote session and see my local printers (on the machine from which I have established the remote session) in the print dialog box? I guess there should be some kind of PuTTY tunnelling, but I don't know how. I have a Windows 7 machine locally; there is a CentOS 6 VM over the internet, which has SSH, CUPS, and Samba. I have found a question which asks the opposite: there is a Windows-based server to connect to from Linux, but mine is just a simple Windows workstation that is behind NAT and has a dynamic IP. That question is: Print from Linux to Windows networked printer.
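
    If the local printers are ordinary network printers, one fairly direct route is to reverse-forward the raw print port through the SSH session and give CUPS on the CentOS box a queue that points at the tunnel; the printer address, queue name and server name below are placeholders, and a USB printer shared from the Windows machine would need the Samba route instead:

      rem on the Windows 7 machine: expose a local network printer to the server as localhost:9100
      plink -N -R 9100:192.168.1.50:9100 user@centos-server.example.com

      # on the CentOS 6 server: a raw queue that prints into the tunnel
      lpadmin -p localprinter -E -v socket://localhost:9100 -m raw
      lpr -P localprinter document.pdf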

    Read the article
