Search Results

Search found 18580 results on 744 pages for 'wireless connection'.


  • What does opening a serial port do?

    - by reve_etrange
    What does opening the standard PC serial port do, in electrical terms (i.e. what voltages appear on which pins)? For example, the ancient VB6 program which controls an apparatus I am tasked with maintaining toggles .PortOpen to control some TTL logic. The connection only used 2 pins (the bad solder joints fell apart), so which pins do I solder back to? The only labels / documentation refer to pins 7 and 9, noting 0V and 5V parenthetically, but does .PortOpen really just put 5V between RI and RTS?

    As a postscript, this isn't the weirdest thing about the setup. The TTL line I referred to above also connects to an instrument via a BNC-to-DB9 cable (!), with only 1 pin used. I guess there was an assumption about a common ground, since the BNC shielding isn't connected to the GND pin? The connection is to the instrument's 'foot pedal' pin; it was a way to remotely trigger the device.

    Update: According to this page, the DTR and RTS pins can go high when the port is opened. If they were so configured, they will subsequently go low when the port is closed. If DTR and RTS are not enabled, opening the port should set both to low (and keep them low).

    Read the article

  • Strange ssh key issue

    - by user55714
    Scenario 1: I am doing this from the /home/deploy directory. I am trying to set up ssh with GitHub for Capistrano deployment, and this has been an absolute nightmare.

    When I do ssh [email protected] as the deploy account, I get Permission denied (publickey). So maybe the key is not being found. If I do ssh-add /home/deploy/.ssh/id_rsa, I get "Could not open a connection to your authentication agent" (I did verify that ssh-agent was running). If I do exec ssh-agent bash and then repeat the ssh-add, the key does get added and I can ssh into GitHub. But when I exit from the ssh connection to my server and ssh back in, I can't ssh into GitHub anymore!

    Scenario 2: If I log in to my remote server, cd into my .ssh directory, and then ssh into GitHub, it all works fine.

    I guess there is a problem with locating the key, and for some reason the agent isn't functioning correctly. Any ideas? Here is a pastie with more details (my .bashrc, permissions, etc.): http://pastie.org/pastes/1190557/
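
    One low-risk workaround, sketched below, is to bypass the agent entirely and point ssh at the key with a per-host entry in /home/deploy/.ssh/config. The host block and paths are assumptions based on the question, not the poster's actual setup.

        # minimal sketch of /home/deploy/.ssh/config, assuming the deploy key is ~/.ssh/id_rsa
        Host github.com
            User git
            IdentityFile ~/.ssh/id_rsa
            IdentitiesOnly yes

    With this in place the key is offered directly on every connection, so it no longer matters whether ssh-agent survives between logins.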

    Read the article

  • xen + debian network after upgrade from squeeze to wheezy

    - by rush
    I've got a Debian + Xen server. After a system upgrade to the stable version, the network doesn't come up after boot; every time after a reboot I need to bring it up manually. The network configuration was not changed during the upgrade. Here is /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 11.22.33.44
            netmask 255.255.255.0
            gateway 11.22.33.1
            nameserver 8.8.8.8

    After boot, ip r shows no route and eth0 has no IP address. Setting the IP and route manually works fine and the network starts working. These are the network-related messages I found in dmesg (looks like nothing interesting):

        [    3.894401] ACPI: Fan [FAN3] (off)
        [    3.894444] ACPI: Fan [FAN4] (off)
        [    4.178348] e1000e 0000:00:19.0: eth0: (PCI Express:2.5GT/s:Width x1) 00:1e:67:14:66:c9
        [    4.178351] e1000e 0000:00:19.0: eth0: Intel(R) PRO/1000 Network Connection
        [    4.178392] e1000e 0000:00:19.0: eth0: MAC: 10, PHY: 11, PBA No: 0100FF-0FF
        [    4.178413] e1000e 0000:02:00.0: Disabling ASPM L0s L1
        [    4.178432] xen: registering gsi 16 triggering 0 polarity 1
        --
        [    4.223667] ata5: DUMMY
        [    4.223668] ata6: DUMMY
        [    4.289153] e1000e 0000:02:00.0: eth1: (PCI Express:2.5GT/s:Width x1) 00:1e:67:14:66:c8
        [    4.289155] e1000e 0000:02:00.0: eth1: Intel(R) PRO/1000 Network Connection
        [    4.289245] e1000e 0000:02:00.0: eth1: MAC: 3, PHY: 8, PBA No: 1000FF-0FF
        [    4.506908] usb 1-1: new high-speed USB device number 2 using ehci_hcd
        [    4.542920] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        --
        [   10.362999] EXT4-fs (dm-23): mounted filesystem with ordered data mode. Opts: (null)
        [   10.419103] EXT4-fs (dm-3): mounted filesystem with ordered data mode. Opts: (null)
        [   10.988255] ADDRCONF(NETDEV_UP): eth1: link is not ready
        [   13.175533] Event-channel device installed.
        [   13.287555] XENBUS: Unable to read cpu state
        --
        [   13.288670] XENBUS: Unable to read cpu state
        [   13.965939] Bridge firewalling registered
        [   14.134048] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
        [   14.283862] ADDRCONF(NETDEV_UP): peth0: link is not ready
        [   14.284543] ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
        [   17.800627] e1000e: peth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
        [   17.801377] ADDRCONF(NETDEV_CHANGE): peth0: link becomes ready
        [   18.307278] device peth0 entered promiscuous mode
        [   24.538899] eth1: no IPv6 routers present
        [   28.570902] peth0: no IPv6 routers present

    I've upgraded two servers and I see this behaviour on both of them. How can I fix this so that the network comes up automatically on boot?
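
    One thing worth checking on a dom0 like this: the dmesg output mentions a bridge and a peth0 device, which suggests the physical NIC is being renamed and enslaved by Xen's network scripts after ifupdown has already tried (and failed) to configure eth0. A common wheezy-era approach is to define the bridge yourself in /etc/network/interfaces and leave xend's network-bridge script out of it. The sketch below is illustrative only; the bridge name and addresses are assumptions, not taken from the poster's configuration, and it requires the bridge-utils package.

        # sketch: dom0 bridge configured directly by ifupdown (bridge-utils installed)
        auto xenbr0
        iface xenbr0 inet static
            bridge_ports eth0
            address 11.22.33.44
            netmask 255.255.255.0
            gateway 11.22.33.1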

    Read the article

  • How can I stop SipVicious ('friendly-scanner') from flooding my SIP server?

    - by a1kmm
    I run an SIP server which listens on UDP port 5060, and needs to accept authenticated requests from the public Internet. The problem is that occasionally it gets picked up by people scanning for SIP servers to exploit, who then sit there all day trying to brute force the server. I use credentials that are long enough that this attack will never feasibly work, but it is annoying because it uses up a lot of bandwidth.

    I have tried setting up fail2ban to read the Asterisk log and ban IPs that do this with iptables, which stops Asterisk from seeing the incoming SIP REGISTER attempts after 10 failed attempts (which happens in well under a second at the rate of attacks I'm seeing). However, SipVicious-derived scripts do not immediately stop sending after getting an ICMP Destination Host Unreachable - they keep hammering the connection with packets. The time until they stop is configurable, but unfortunately it seems that the attackers doing these types of brute force attacks generally set the timeout to be very high (attacks continue at a high rate for hours after fail2ban has stopped them from getting any SIP response back once they have seen initial confirmation of an SIP server).

    Is there a way to make it stop sending packets at my connection?
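
    For reference, since the scanner ignores rejections anyway, the usual advice is to ban with a silent DROP rather than a REJECT, so the attacker gets neither SIP nor ICMP feedback. A minimal sketch of the kind of rule fail2ban would insert (the address is a placeholder, and the chain name depends on your fail2ban action):

        # sketch: silently drop a banned scanner instead of sending ICMP back
        iptables -I INPUT -s 203.0.113.45 -p udp --dport 5060 -j DROP

    This does not stop the remote host from transmitting; nothing on your side can. It only stops your server from answering, which is usually what makes the better-behaved scanners give up sooner.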

    Read the article

  • Software RAID 1 Configuration

    - by Corve
    I created a software RAID 1 quite a while ago and it has always seemed to work for me. However, I am not completely sure that I have configured everything right, and I don't have the experience to check, so I would be very grateful for some advice or just verification that all seems right so far. I am using Linux Fedora 20 (32-bit, with plans to upgrade to 64-bit). The RAID 1 should consist of two 1TB SATA hard drives.

    This is the output of mdadm --detail /dev/md0:

        /dev/md0:
                Version : 1.2
          Creation Time : Sun Jan 29 11:25:18 2012
             Raid Level : raid1
             Array Size : 976761424 (931.51 GiB 1000.20 GB)
          Used Dev Size : 976761424 (931.51 GiB 1000.20 GB)
           Raid Devices : 2
          Total Devices : 1
            Persistence : Superblock is persistent
            Update Time : Sat Jun 7 10:38:09 2014
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0
                   Name : argo:0 (local to host argo)
                   UUID : 1596d0a1:5806e590:c56d0b27:765e3220
                 Events : 996387

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8        0        1      active sync   /dev/sda

    The RAID is mounted successfully:

        friedrich@argo:~ ? sudo mount -l | grep md0
        /dev/md0 on /mnt/raid type ext4 (rw,relatime,data=ordered)

    Basically my questions are: Why do I only have 1 active device? What does the "removed" state at the bottom mean?

    Also, I noticed some strange error messages that I see on the console at system start and shutdown, and always repeating in the background when I switch with Ctrl + Alt + F2:

        ...
        ata2: irq_stat 0x00000040 connection status changed
        ata2: SError: { CommWake DevExch }
        ata2: COMRESET failed (errno=-32)
        ata2: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen
        ata2: irq_stat 0x00000040 connection status changed
        ata2: SError: { CommWake DevExch }
        ata2: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen
        ...

    Are these errors related to the RAID? Something seems wrong with the SATA devices. All together the system works (I can read and write to the mounted RAID), but I have always had these strange errors at startup and shutdown (and probably always in the background). Thanks for your help.
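
    A short diagnostic sketch for this situation. The mdadm output says the array is degraded and running on /dev/sda alone, and the ata2 resets suggest the second disk keeps dropping off the SATA link; the device names below are assumptions, so check which disk is actually missing before re-adding anything.

        # sketch: confirm the array state, check the health of the absent disk,
        # and re-add it so the mirror resyncs (only if SMART looks clean)
        cat /proc/mdstat
        smartctl -a /dev/sdb          # assuming the missing member is /dev/sdb
        mdadm /dev/md0 --add /dev/sdb # triggers a full resync onto the re-added disk

    If the disk or its cable is genuinely failing, re-adding it will just degrade again, so the cabling and SMART attributes are worth checking first.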

    Read the article

  • Failed none and iptables

    - by Michael
    The problem is that when I ssh to my host with PuTTY and enter the user name, the password prompt is delayed. I found that this is directly related to my iptables rules and can be solved by changing the default policy to ACCEPT. If the default INPUT policy is ACCEPT, the password prompt comes up immediately:

        Mar 13 00:05:01 server-ubuntu sshd[6154]: Connection from 192.168.0.10 port 26304
        Mar 13 00:05:06 server-ubuntu sshd[6154]: Failed none for acid from 192.168.0.10 port 26304 ssh2

    However, if the default INPUT policy is DROP, I get a delay before the password prompt appears after I enter the username:

        Mar 13 00:07:12 server-ubuntu sshd[6177]: Connection from 192.168.0.10 port 26333
        Mar 13 00:07:35 server-ubuntu sshd[6177]: Failed none for acid from 192.168.0.10 port 26333 ssh2

    For the second case, I tried setting the default policy for the FORWARD and OUTPUT chains to ACCEPT, but it didn't help. The only rule in this case is:

        -A INPUT -i eth1 -m mac --mac-source 00:26:XX:XX:XX:XX -j ACCEPT

    00:26:XX:XX:XX:XX is the MAC address from which I am trying to ssh to the server's LAN (eth1). I'm sure there has to be some rule I can use, while the default INPUT chain policy stays DROP, in order to get the password prompt immediately. I realize that the error message in the log is normal and part of the authentication procedure.
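
    A sketch of the kind of rule that usually removes this exact delay. With a default DROP policy, the replies to the server's own DNS lookups about the connecting client never make it back in, and sshd waits for them to time out; letting established/related traffic return (or disabling the lookup in sshd_config) are the usual fixes. This is a general suggestion, not taken from the poster's ruleset.

        # sketch: allow return traffic for connections the server itself initiated
        iptables -I INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        # alternative: set "UseDNS no" in /etc/ssh/sshd_config and restart sshd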

    Read the article

  • Which is the best smart automatic file replication solution for cloud storage based systems?

    - by TORr0t
    I am looking for a solution for a project I am working on. We are developing a web system where people can upload their files and other people can download them (similar to the rapidshare.com model). The problem is that some files are in much higher demand than others.

    The scenario is like this: I have uploaded my birthday video and shared it with all of my friends. I uploaded it to myproject.com and it was stored on one of the cluster nodes, which has a 100 Mbit connection. The problem is, once all of my friends want to download the file, they can't, since the bottleneck here is the 100 Mbit link, which is 15 MB per second; but I've got 1000 friends, so they can only download 15 KB per second each. I am not even taking into account that the HDD is serving the same files.

    My network infrastructure is as follows: a 1 Gbit server (client) connected to 4 storage nodes that have 100 Mbit connections. The 1 Gbit server can handle the traffic of 1000 users if one of the storage nodes can stream more than 15 MB per second to my 1 Gbit (client) server, and visitors will then stream directly from the client server instead of the storage nodes. I could do this by replicating the file onto 2 nodes, but I don't want to replicate all files uploaded to my network since that costs much more.

    So I need a cloud-based system which will push files onto replicated nodes automatically when demand for those files is high, and, when the demand is low, delete them from the other nodes so they stay on only 1 node. I have looked at Gluster and asked in their IRC channel; Gluster can't do such a thing. It is only able to replicate all of the files or none of them, but I need the cluster software to do this automatically. Any solutions? (Other than recommending me Amazon S3.)

    Read the article

  • Raspberry pi slows down my entire network

    - by gnusouth
    Whenever my Raspberry Pi is connected to the network (via ethernet), the entire network is slowed to a crawl. On my main computer, ping times for google.com go from ~10ms to ~200ms and it takes forever to load web pages. Connections are also slow on the Pi, with an apt-get update showing pathetic speeds in the order of 1KB/s. Turning off the Pi completely removes the drag from the network.

    I've tried static and dynamic IP addresses for the Pi, but both have the same problems. I'm currently using Raspbian (downloaded today), but I also had this problem with Arch Linux. I've checked the connection's duplex with dmesg | grep -i duplex, which shows that the Pi's connection is running at 100Mbps, full-duplex, as expected.

    My modem/router is a Billion 7404VNPX (an Australian thing); relatively high-end, albeit a bit buggy at times (it will occasionally delete all its firewall settings). It assigns IPs in the range 192.168.1.1 to 192.168.1.20 and has 192.168.1.254 as its own IP. When I assign static IPs I tend to use the 192.168.1.200 area.

    Does anyone have any idea as to what could be causing this weird slowdown? Or any tests I could try? Thanks
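
    A few quick tests that tend to narrow this kind of problem down, sketched below; the interface name and MAC address are placeholders, not values from the question.

        # sketch: run on another machine on the LAN to see whether the Pi is
        # flooding the segment (broadcast storms, ARP or IPv6 traffic) ...
        sudo tcpdump -n -i eth0 ether host b8:27:eb:00:00:00
        # ... and on the Pi itself, look for interface errors and link renegotiation
        ifconfig eth0
        dmesg | grep -i eth0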

    Read the article

  • Linux: CIFS/Samba mount hangs for several minutes

    - by Pistos
    I have a small local network which has a Gentoo box and a Windows box. I mount a share originating on the Windows box onto the Gentoo box with a command like:

        mount -t cifs -o username=WindowsUsername,password=thepassword,uid=pistos //192.168.0.103/Users /mnt/windowsbox

    Most of the time, everything Just Works, and I can read and write without problems. However, every few weeks or so, the connection or the mount point seems to go dead or hang, such that any process that tries to access the mount point gets stuck in D state (disk, or I/O wait). These processes become impervious to TERM and KILL signals. Disconnecting and reconnecting the Windows box from the network does not help. The frozen state lasts for 5+ minutes. It's really frustrating and gets in the way of normal work, because it freezes Save As dialogues, ls commands, etc.

    If I issue a umount on the mount point, it either hangs also, or reports that the mount point is in use. Eventually, the dead state resolves itself, and the mount point gets unmounted, or it becomes possible to umount with no delay. My guess is that this happens when the connection/mount has gone idle, or when the Windows machine has been idle. I am not really sure.

    Why is this happening, and what can I do to prevent it? Or how can I successfully kill these D-state processes at will? Possibly related: CIFS mounts hang on read
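
    One commonly suggested mitigation, shown as a sketch only: mount the share "soft" so that a dead server eventually returns an I/O error instead of leaving processes stuck in uninterruptible sleep. The extra option is an assumption to test, not part of the poster's command, and soft mounts can surface errors to applications mid-write.

        # sketch: same mount with a soft option so hung I/O fails instead of blocking forever
        mount -t cifs -o username=WindowsUsername,password=thepassword,uid=pistos,soft //192.168.0.103/Users /mnt/windowsbox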

    Read the article

  • Network structure --> Server 2k8r2 <--> Livebox <--> Router <--> Other PCs

    - by Yusuf
    I have a Livebox connection to the Internet and I have set up my network as follows:

        Livebox <--> Win2k8R2 Server
        Livebox <--> Netgear N150 Router
        Router  <--> Other PCs

    Therefore, in my LAN:

        - the Livebox has IP address 192.168.1.1,
        - the Router is 192.168.1.12 (when accessed from the Livebox or the server),
        - the Router is 10.0.0.1 (when accessed from the PCs connected to the Router),
        - the server is 192.168.1.2,
        - the PCs are 10.0.0.x

    I was using a previous configuration, which was as follows:

        Livebox <--> Netgear N150 Router
        Router  <--> Win2k8R2 Server
        Router  <--> Other PCs

    Everything was simple, and I just had to forward all ports for incoming connections on the Livebox to the Router, and then forward the specific ports to the Server as needed (it must be noted, however, that any server I run is hosted on the Win2k8R2 server itself). In this previous configuration, the IP addresses were as follows:

        - Livebox 192.168.1.1
        - Router 192.168.1.12 (when seen from the Livebox)
        - Router 10.0.0.1 (when seen from the server & PCs connected to it)
        - Server 10.0.0.2
        - PCs 10.0.0.x

    So now, of course, my port forwarding does not work anymore since the server is not connected (directly) to the Router. What I would like to know is how to configure the Livebox and Router so that I still have the same features as before. From what I understand of networks (which is very limited, btw), I see these options:

        - Make the router assign IPs like 192.168.1.x (but then I want the forwarding to be done from the router itself; is that possible?)
        - The forwarding on the router to the server uses IP address 10.0.0.2. I could change it to 192.168.1.2 (Is that even possible? Does it work?)
        - Forward all ports from the Livebox itself to the server, and then manage them there (Is software-based port forwarding as secure as hardware-based?)

    Read the article

  • Two internet connections at once in Windows 7

    - by webmasters
    I have a 3G wireless modem and I have a LAN; right now they are both connected. I need a way to choose which applications will use the 3G connection and which applications will use the LAN. My operating system is Windows 7. How can I do this? Any ideas?

    Here is a route print (the 3G modem's IP is 10.81.132.96). Let's say, for example, I want to map google.com to the 3G internet connection.

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0      192.168.2.1    192.168.2.102      20
                  0.0.0.0          0.0.0.0     10.81.132.97    10.81.132.111     286
             10.81.132.96  255.255.255.224          On-link    10.81.132.111     286
            10.81.132.111  255.255.255.255          On-link    10.81.132.111     286
            10.81.132.127  255.255.255.255          On-link    10.81.132.111     286
                127.0.0.0        255.0.0.0          On-link        127.0.0.1     306
                127.0.0.1  255.255.255.255          On-link        127.0.0.1     306
          127.255.255.255  255.255.255.255          On-link        127.0.0.1     306
              192.168.2.0    255.255.255.0          On-link    192.168.2.102     276
            192.168.2.102  255.255.255.255          On-link    192.168.2.102     276
            192.168.2.255  255.255.255.255          On-link    192.168.2.102     276
                224.0.0.0        240.0.0.0          On-link        127.0.0.1     306
                224.0.0.0        240.0.0.0          On-link    192.168.2.102     276
                224.0.0.0        240.0.0.0          On-link    10.81.132.111     286
          255.255.255.255  255.255.255.255          On-link        127.0.0.1     306
          255.255.255.255  255.255.255.255          On-link    192.168.2.102     276
          255.255.255.255  255.255.255.255          On-link    10.81.132.111     286
        ===========================================================================
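
    Windows routes by destination, not by application, so the closest built-in tool is a static route that pins particular destination addresses to the 3G gateway. A sketch, using a placeholder destination network; the gateway 10.81.132.97 is taken from the route print above:

        :: sketch: send one destination network via the 3G modem instead of the LAN
        route add 203.0.113.0 mask 255.255.255.0 10.81.132.97 metric 1

    Per-application routing (for example "google.com over 3G for the browser only") needs third-party software such as a proxy or VPN bound to one interface, since the IP routing table has no notion of which process generated a packet.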

    Read the article

  • Nginx Ubuntu Postfix Config - Can't connect to incoming IMAP server 'server not responding' but can send mail via outgoing using same details?

    - by daveaspinall
    I'm pretty new to server admin, and especially to nginx, but I seem to be getting on fine apart from accessing my mail via my iPhone. (I've changed my domain to 'domain.com' below.) The thing is, I can send mail via my outgoing mail server but can't connect to the incoming one; I just get the message "the mail server at mail.domain.com is not responding".

    /etc/postfix/main.cf:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        home_mailbox = Maildir/
        inet_interfaces = all
        inet_protocols = all
        mailbox_command =
        mailbox_size_limit = 0
        mydestination = domain.com, mail.domain.com, localhost.com, , localhost, localhost.localdomain
        mydomain = domain.com
        myhostname = mail.domain.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        recipient_delimiter = +
        relayhost =
        smtp_tls_note_starttls_offer = yes
        smtp_tls_security_level = may
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain =
        smtpd_sasl_security_options = noanonymous
        smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
        smtpd_tls_auth_only = no
        smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
        smtpd_tls_key_file = /etc/ssl/private/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_security_level = may
        smtpd_tls_session_cache_timeout = 3600s
        tls_random_source = dev:/dev/urandom

    Output of telnet localhost 25 followed by ehlo localhost:

        250-mail.domain.com
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-AUTH LOGIN PLAIN
        250-AUTH=LOGIN PLAIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN

    I am using the following details to connect: username, password, hostname mail.domain.com, port 25.

    iptables --list:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    I also sent mail to the server as a test and got this message, if it helps:

        Technical details of temporary failure: [mail.domain.com. (10): Connection refused]

    I also looked in /var/log/mail.log and it has multiple entries like:

        postfix/smtpd[12239]: connect from 5acefc9a.bb.sky.com[90.206.252.xxx]
        Mar 23 06:47:09 new-domain postfix/smtpd[12239]: lost connection after CONNECT from 5acefc9a.bb.sky.com[90.206.252.154]

    Notice "new-domain", which is incorrect, even though the server hostname and the hostname in the configs are correct. I recently moved servers and the host has set the primary domain on the service to new-domain.com, so this may be the issue? Like I said, connecting to the outgoing server works, but the incoming one gets the "not responding" error. Any ideas would be much appreciated!
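
    Worth noting, as a sketch of where to look rather than a definitive fix: Postfix only provides SMTP, so the "outgoing" side working proves nothing about IMAP. For the iPhone's incoming account there needs to be an IMAP or POP3 daemon (for example Dovecot) listening on ports 143/993, and the incoming server port should not be 25. The commands below are illustrative assumptions for an Ubuntu box, not taken from the question.

        # sketch: check whether anything is listening for IMAP, and install Dovecot if not
        netstat -tlnp | grep -E ':143|:993'
        apt-get install dovecot-imapd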

    Read the article

  • Windows 2008 Server can't connect to FTP

    - by stivlo
    I have Windows 2008 Server R2, and I am trying to install FTP services. My problem is I can't connect from outside; FileZilla complains with:

        Error: Connection timed out
        Error: Could not connect to server

    Here is what I did. With the Server Manager, I've installed the roles FTP Server, FTP Service and FTP Extensibility. In Internet Information Services version 7.5, I've chosen Add FTP Site, enabled Basic Authentication, and allowed a user to connect with Read and Write. In FTP Firewall Support on the main server, just after the start page, I've set the Data Channel Port Range to 49100-49250 and set the external IP address as the one I see from outside.

    If I click on FTP IPv4 Address and Domain Restrictions, and click on Edit Feature Settings, I see that access for unspecified clients is set to Allow, so I click OK without changing those defaults. In FTP SSL Policy, I've set it to Require SSL connection; the certificate is self-signed.

    I tried to connect with FileZilla from the same host and it works, however it doesn't work remotely, as I said above. I've enabled pfirewall.log, but apparently nothing gets logged. The server is in Amazon EC2, and in the security group inbound firewall rules, I've set ports 21 and 49100-49250 to accept connections from everywhere. What else should I be checking to solve the problem?
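
    Two Windows-side settings often bite in exactly this setup (IIS FTP with Require SSL behind a firewall); the commands below are a sketch of things to verify, not a known fix for this server. With FTPS the firewall cannot inspect the control channel, so stateful FTP filtering does not help and the passive range must be opened explicitly, and the Microsoft FTP Service needs a restart after changing the data channel port range.

        :: sketch: run in an elevated prompt on the server
        netsh advfirewall set global StatefulFtp disable
        netsh advfirewall firewall add rule name="FTP passive range" dir=in action=allow protocol=TCP localport=49100-49250
        net stop ftpsvc && net start ftpsvc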

    Read the article

  • ssh initial prompt hangs for 10 minutes but console login and initial prompt is very responsive - why?

    - by rfreytag
    I have been running an ESXi 4.0 server for months with a couple of WinServer2003 and several Ubuntu Server 10.04 VMs. The performance has been impressive on 6GB i7 Asus P6T hardware.

    Suddenly, a week ago, ssh logins to the Ubuntu VMs started taking 10 minutes when connecting over the LAN (over a WAN the connection (pipe) is broken long before that). When logging in to these VMs the password prompt arrives immediately, and failed passwords are responded to immediately. But the moment I log in and the shell prompt should appear, I hang for many minutes. Sometimes the connection hangs before the shell prompt appears, and sometimes I can type in a command, but the moment I hit return the machine hangs. 10 full minutes later control returns and the VM is responsive.

    NOTE: there are several Ubuntu VMs on the same host machine that are identical in all ways that I can tell. However, only one of the VMs displays this behavior. That is why I mention the ESXi host in passing - I don't think it has anything to do with the problem. This behavior is never seen when I connect with the troubled VM's console (through vSphere Client). From the console the Ubuntu VMs all respond beautifully.

    I have seen: http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1003496&sliceId=1&docTypeID=DT_KB_1_1&dialogID=229586372&stateId=1%200%20229588522 ...and since that relates to delays in seeing the password prompt, it does not appear to be the solution here. Any other suggestions very welcome - thank you.
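
    A diagnostic sketch for narrowing down where the post-authentication hang happens; nothing here is specific to the poster's VM, and the hostnames are placeholders. Because the hang comes after authentication, the usual suspects are name-service or GSSAPI lookups on the server, a blocking command in the shell startup files, or a dead NFS/automount reference touched at login.

        # sketch: watch where the client side stalls
        ssh -vvv user@troubled-vm
        # on the VM (via the working vSphere console), watch what a new login blocks on
        tail -f /var/log/auth.log
        # settings to test one at a time in /etc/ssh/sshd_config, restarting ssh after each:
        #   UseDNS no
        #   GSSAPIAuthentication no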

    Read the article

  • 'The rpc server is unavailable' or 'access is denied' error when using Remote desktop Services Manager on Windows 7 (but mstsc.exe works!)

    - by tbone
    I am trying to connect to a Windows XP workstation from a Windows 7 Ultimate workstation using Remote Desktop Services Manager. I am able to run a Remote Desktop (mstsc.exe) session from the Win7 machine to the WinXP machine with no problem at all. When running the Remote Desktops Admin (tsmmc.msc) tool on a Windows XP box, I can also connect with no problem. However, when I use the new Remote Desktop Services Manager on Windows 7 and try to connect, I get the error: "The RPC server is unavailable". What could cause this? Has there been some fundamental change in Remote Desktop Services Manager; does it connect in a different way somehow?

    Update #1: I turned off the firewall on the Windows XP box and the "The RPC server is unavailable" error went away; so RDSM seems to be using an entirely new port/connection/service compared to mstsc.exe or the old Remote Desktops Admin tool.

    Now... after disabling the firewall, I get a new error: "Access is Denied". After doing some googling, I found some articles discussing this; basically, the error is very misleading - the actual problem is that if either side of the connection has dual monitors, and they are not both Win7 Ultimate, then you cannot connect using Remote Desktop Services Manager. The reason is that by default it uses the /multimon switch, and this switch requires a certain level of Windows license - and there seems to be no way of changing this default (if anyone knows of a way to change this default, please post an answer or comment!). Nice going Microsoft.

    http://social.technet.microsoft.com/Forums/en-US/windowsserver2008r2rds/thread/4d06278f-e0f4-4f8e-a8e1-3697ee967ef4
    http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Windows/Windows_7/Q_26225743.html

    Read the article

  • BIND having trouble resolving service.graphicly.com

    - by Keith Burgoyne
    Since about two weeks ago, we haven't been able to resolve service.graphicly.com:

        dig @192.168.0.12 service.graphicly.com

        ; <<>> DiG 9.3.4-P1 <<>> @192.168.0.12 service.graphicly.com
        ; (1 server found)
        ;; global options:  printcmd
        ;; connection timed out; no servers could be reached

    Digging on the name servers listed for graphicly.com shows that service.graphicly.com is a CNAME to takecomicsadmin.cloudapp.net. Digging on cloudapp.net's name servers seems to fail:

        dig @NS1.LIVEDNS.MSFT.NET takecomicsadmin.cloudapp.net

        ; <<>> DiG 9.3.4-P1 <<>> @NS1.LIVEDNS.MSFT.NET takecomicsadmin.cloudapp.net
        ; (1 server found)
        ;; global options:  printcmd
        ;; connection timed out; no servers could be reached

    Somehow, my home ISP's name servers can resolve service.graphicly.com without issue. Has anyone else noticed this problem? Does anyone know what the cause of this problem could be? Thanks!
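
    A few follow-up queries that usually separate a local resolver/firewall problem from a broken delegation; these are generic diagnostics, not taken from the question.

        # sketch: walk the delegation from the root, then try the authoritative
        # server over TCP and without EDNS (both are common firewall casualties)
        dig +trace service.graphicly.com
        dig +tcp @NS1.LIVEDNS.MSFT.NET takecomicsadmin.cloudapp.net
        dig +noedns @192.168.0.12 service.graphicly.com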

    Read the article

  • Redmine does not return the web page

    - by m0skit0
    I migrated a Redmine installation from an Ubuntu machine to a Debian one (both 32-bit), and now, for some reason, for some users it doesn't return the page but only a 200 OK response. Here is the flow (from Wireshark):

        GET /issues/142 HTTP/1.1
        Host: debian:3000
        User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:17.0) Gecko/20100101 Firefox/17.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-US,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive
        Cookie: _redmine_session=BAh7DCIQX2NzcmZfdG9rZW4iMStIM1RBNTlNelZVUXlUazgrR1pUNGUvNGdEbytUZzRyMVFSUnBvNGhlSDg9Ihd0aW1lbG9nX2luZGV4X3NvcnQiEnNwZW50X29uOmRlc2MiD3Nlc3Npb25faWQiJThiMDk0MzVhOTEzYTI0MzVjOGEzYTRmNDU0NzcwMTAwIgx1c2VyX2lkaQoiFmlzc3Vlc19pbmRleF9zb3J0IgxpZDpkZXNjIg1wZXJfcGFnZWlpIgpxdWVyeXsHOg9wcm9qZWN0X2lkaQc6B2lkaQo%3D--8588c221c0642a12f396239455fb702aec14c9c9; my_wiki_session=f70ae11e1c533c86f0e039d63cf3f69c; my_wikiUserID=1; my_wikiUserName=Yasin
        Cache-Control: max-age=0

        HTTP/1.1 200 OK
        Connection: Keep-Alive
        Date: Wed, 12 Dec 2012 16:30:16 GMT
        Server: WEBrick/1.3.1 (Ruby/1.8.7/2010-08-16)
        Content-Length: 0

    As you can see, I get nothing from the server. This is mostly random: the blank page happens sometimes for some users, and for other users it almost never returns the page... I'm absolutely lost here. Any idea about what could be the cause? Thanks in advance!

    Read the article

  • Win 8.1 Hyper-v Full Screen and VPN problems

    - by tr0users
    I need to connect to my office using Cisco VPN software (RSA). Once connected, all my internet traffic goes through the employer's VPN, and this prevents me from listening to Spotify. As a way around this I created a Win 2012 VM that I run in Hyper-V from my Windows 8.1 client.

    First I RDP to the VM, then I connect to the VPN. This forces the RDP session between my host laptop and the VM to close. I then open the Hyper-V Manager and double-click my VM to get a connection back (not great, because I don't get the use of copy & paste this way).

    Previously when I opened my VM this way I would get full screen. I'm using a 1920x1080 monitor. Today when I re-open my connection to the VM, it is displayed in a window that uses maybe 75% of the full screen. I have tried the menu option View\Full Screen Mode, but it only centres the screen and applies black borders around the outside.

    Could anyone please suggest how I might solve the VPN or full screen problems? Thanks, Rob.

    Read the article

  • Send command through PuTTY automatic login

    - by Arthur
    I am using the following to log in automatically to a remote server and then run the commands listed in a commands.txt, like this:

        C:\path to\putty.exe -ssh adreese.ip -l user -pw Password -m C:\Path to\command.txt

    commands.txt contains the following:

        wakeonlan -i <broadcast address> <MAC address>

    However, when I try this, a new PuTTY window appears, but it closes and exits instantly after login. As a result, I cannot see the output of the command(s). After several tests, it appears that the command is not executed, because my computer doesn't wake on LAN.

    I don't understand what's going on here. I cannot use the plink.exe program because I cannot make the connection with a public key (there are too many distant sites to register all the keys in PuTTY). Can someone help me with this? Or can I use another program to make an ssh connection and send commands with a script from a Windows OS?

    Edit: I also tried making a bash file on the distant server with the same command and executing it from the session like this:

        C:\path to\putty.exe -ssh adreese.ip -l user -pw Password \home\user\script.sh

    I have the same problem... Need help please :/
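
    For the record, plink (from the same PuTTY package) accepts a password on the command line just like putty.exe does, so a public key is not required; it runs the command over ssh and prints the output in the console instead of opening a window that closes. A sketch, reusing the placeholders from the question (note that passwords on the command line are visible to other local users):

        :: sketch: run the remote command with plink and keep the output in the console
        plink.exe -ssh adreese.ip -l user -pw Password "wakeonlan -i <broadcast address> <MAC address>"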

    Read the article

  • Cisco configuration for public library internet

    - by AlternateZ
    I'm a C/C++ computer programmer turned IT support guy working for a public library. My day is usually spent helping random grandparents learn how to use email, so my networking knowledge is limited to what I can glean from google. Here's the situation. We have a public library with 20 PCs on a LAN and also public wifi access. Previously we were running all of this on 1 ADSL connection and people complained about low speeds. We hired a networking company to set up a Cisco dual-WAN router for us, and purchased an additional ADSL connection. The intention was to give the LAN PCs a guaranteed amount of bandwidth each, and then let the wifi users split the rest. The results were far worse than what we expected, and all we got from the company was excuses and they've since washed their hands of us. During busy periods, net performance on the LAN PCs are so poor that attaching files to gmail etc often times out and fails - far from the "guaranteed amount of bandwidth each" that we hope for! Sometimes it feels like performance is worse than before when we had 1 ADSL link and an unconfigured router? Anyways, surely this is a problem encountered a million times over across the world? (Sharing internet across many users effectively.) What are standard solutions for something like this? I admit to not even knowing the right jargon to google for (load balancing?) I'd appreciate any links to resources/guides that might help me get a better understanding of the problem/solutions, and perhaps some stories of your own experience in solving similar problems. This will help us evaluate and negotiate with network consultants in the future. If its relevant, our router config contains a section "policy-map" with "bandwidth percent" for each class of user (LAN, wifi), and "fair queue".

    Read the article

  • tftpd starts randomly

    - by Mutant
    A few days ago my Little Snitch filter starts popping up tftpd. I'd never seen this before, so I immediately start freaking out thinking my Mac has been compromised. I can't find anything unusual on the system. The process usually dies before I can trace it (little snitch never allowed the connection just left the popup up). I finally caught it once, and found this: [10:32]: sudo lsof -nlP | fgrep tftp Password: tftpd 1924 18446744 cwd DIR 1,3 1326 2 / tftpd 1924 18446744 txt REG 1,3 29856 163979456 /usr/libexec/tftpd tftpd 1924 18446744 txt REG 1,3 600576 163686622 /usr/lib/dyld tftpd 1924 18446744 txt REG 1,3 303300608 189014898 /private/var/db/dyld/dyld_shared_cache_x86_64 tftpd 1924 18446744 0u IPv4 0x34a76100fcbb06e3 0t0 UDP *:55818 tftpd 1924 18446744 2u IPv4 0x34a76100f1113c53 0t0 UDP *:69 [10:32]: ps ax | fgrep 1924 1924 ?? S 0:00.00 /usr/libexec/tftpd -i /private/tftpboot 1949 s000 S+ 0:00.00 fgrep 1924 For the life of me I can't figure out what is starting this. Nothing in cron, launchdaemons, etc. Google searches haven't yielded much either. The connection IP is different each time. So my question is: Has anyone seen anything like this before?

    Read the article

  • MySQL stops accepting connections over 3306, still working on localhost

    - by Ben Dilts
    I have a MySQL database that stopped accepting connections from my web server altogether. So I SSH'ed into the server and started checking its vitals. The hard disks had plenty of open space, and there was plenty of available memory and swap space. Nothing was eating up the CPU (close to 100% idle). I even connected to MySQL locally and ran a few queries without any issues. But SHOW PROCESSLIST only showed my own connection, no others.

    Worst of all, in the MySQL log, no errors even remotely coincided with the unavailability of the server. On the web server, I got an error saying "Lost connection to MySQL server during query" at the moment the unavailability started, followed by a bunch of "MySQL server has gone away" errors.

    There's only one other application on the server that accepts network connections, and I killed that one (in case it was holding too many open connections or something), but it didn't help. Finally I just restarted the MySQL process, and everything is (for now) working again.

    What else should I check in these circumstances? Any idea what the problem might be? And how might I verify that it is in fact the problem?
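
    A sketch of things worth capturing the next time it happens, from the local connection that still works; these are generic checks, not derived from the poster's setup.

        # sketch: is MySQL still listening on 3306, and is something exhausting it?
        netstat -tlnp | grep 3306
        netstat -ant | grep ':3306' | wc -l      # open TCP connections to MySQL
        mysql -e "SHOW GLOBAL STATUS LIKE 'Max_used_connections'"
        mysql -e "SHOW GLOBAL VARIABLES LIKE 'max_connections'"
        mysql -e "SHOW GLOBAL STATUS LIKE 'Aborted_connects'"

    If the web server and database are on different hosts, a firewall or connection-tracking table between them silently dropping idle connections would also produce "MySQL server has gone away" with nothing in the MySQL log.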

    Read the article

  • Server-side SSH jump hosts

    - by Dan Sosedoff
    I'm trying to figure out the logic for server-side SSH jump hosts. Current network schema:

        [Client] <--> [Server A: hostname: a.com] <--> [Server B]
        [Client] <--> [Server A: hostname: b.com] <--> [Server C]

    Server A responds to both DNS records. Possible flow:

        - Client opens an ssh connection with ssh [email protected]. Server A accepts it and should automatically jump the user onto Server B with ssh user2@server_b.com.
        - Client opens an ssh connection with ssh [email protected]. Server A accepts it and should automatically jump the user onto Server C with ssh user2@server_c.com.

    In other words, the client should be able to connect to the target without performing any local configuration, assuming that we have a stock ssh config. The problem with ssh jumps is that the user has to define hosts in the local ~/.ssh/config file, which is not acceptable in my case. It needs to be default sshd behavior. I'm aware that you can define a custom command in ~/.ssh/authorized_keys on the server, but I don't think there is a way to properly detect the hostname the user typed when connecting. Is it possible at all?
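
    One hedged sketch of how this can work: sshd never sees which name the client typed (SSH has no SNI equivalent), but it does know the local address the connection arrived on, so if a.com and b.com resolve to two different IPs on Server A you can branch on that with Match blocks in /etc/ssh/sshd_config. The addresses below are documentation placeholders, and ForceCommand replaces whatever command the user asked to run.

        # sketch for Server A's sshd_config, assuming a.com -> 203.0.113.10 and b.com -> 203.0.113.11
        Match LocalAddress 203.0.113.10
            ForceCommand ssh -t user2@server_b.com
        Match LocalAddress 203.0.113.11
            ForceCommand ssh -t user2@server_c.com

    If both names resolve to the same address, the information simply is not available to sshd, and the decision has to be made some other way (different ports, different users, or client-side configuration after all).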

    Read the article

  • Setting up ssh config file with id_rsa through tunnel

    - by Rubens
    I've been struggling to set up a valid configuration to open a connection to a third machine, passing through an intermediate one, and using an id_rsa key (which asks me for a password) to connect to that third machine. I've asked this question in another forum, but I've received no answer that could be considered very helpful.

    The problem, better described, goes as follows:

        Local machine:        user1@localhost
        Intermediary machine: user1@inter
        Remote target:        user2@final

    I'm able to make the entire connection using a pseudo-tty:

        ssh -t inter ssh user2@final

    (this will ask me for the password for the id_rsa file I have on machine "inter").

    However, to speed things up, I'd like to set up my .ssh/config file so that I can simply connect to machine "final" using:

        ssh final

    What I've got so far, which does not work, is this in my .ssh/config file:

        Host inter
            User user1
            HostName inter.com
            IdentityFile ~/.ssh/id_rsa

        Host final
            User user2
            HostName final.com
            IdentityFile ~/.ssh/id_rsa_2
            ProxyCommand ssh inter nc %h %p

    The id_rsa file is used to connect to the middle machine (this requires no password typing), and the id_rsa_2 file is used to connect to machine "final" (this one requests a password). I've tried mixing in some LocalForward and/or RemoteForward fields, and putting the id_rsa files on both the first and second machines, but I could not get any configuration to work.

    Hope somebody can help me here! Regards!

    P.S.: the thread where I've tried to get some help: http://www.linuxquestions.org/questions/linux-general-1/proxycommand-on-ssh-config-file-4175433750/
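
    A sketch of the detail that usually trips this setup up: with ProxyCommand, the connection to "final" is authenticated by the local ssh client, so ~/.ssh/id_rsa_2 (and its passphrase prompt) must exist on the local machine, not on "inter". With a reasonably recent OpenSSH you can also drop the nc dependency by using -W. This is a suggested variation, not a confirmed fix for the poster's machines.

        # sketch for the local ~/.ssh/config
        Host final
            User user2
            HostName final.com
            IdentityFile ~/.ssh/id_rsa_2        # this path is on the local machine
            ProxyCommand ssh -W %h:%p inter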

    Read the article

  • How to handle OpenVPN client as a service, when the laptop is physically on the network already?

    - by James
    The Setup I've gotten OpenVPN working on our Windows XP laptops. Users are limited, so I went ahead and set OpenVPN client to run as a service, which is great anyway because that means they are on the VPN before logging in, so login scripts work, plus we can do remote support even if the user can not log in (such as connecting via VNC or resetting passwords). It is also configured to send all traffic over the tunnel, so when, for example, they browse the internet it is just like browsing from our corporate network. The Qestion(s) So, I'm wondering how does the OpenVPN client act when the computer is already physically on the same network as the OpenVPN server? Right now, the client is configured to connect the the public dns name which will resolve to the public ip address which will NOT get reflected back to the OpenVPN server, so it is affectively blocked from connecting to the OpenVPN server while on the network. Is that a good thing? Or will it constantly try to connect, using up system resources and network resources? We will likely have hundreds of laptops regularly on the physical network with this, so it could contribute to a lot of unnecessary network chatter. Alternatively Would it be better to have the firewall reflect the port back to the OpenVPN server and let it connect? Or have our internal dns resolve the name to the private ip and allow them to connect directly? Would traffic then go over the vpn connection (which I do not want, when already on the physical network)? Or is it possible to tell it to ignore the connection when the client and server are already on the same network? TLDR What's a sane way of handling OpenVPN client running as an always-on service when the client and server will often be on the same network?

    Read the article
