Search Results

Search found 37414 results on 1497 pages for 'open port'.

  • Looking for a new, free firewall (Sunbelt has a huge hole)

    - by Jason
    I've been using Sunbelt Personal Firewall v4.5 (previously Kerio). I've discovered that blocking Firefox connections in the configuration doesn't stop EXISTING Firefox connections. (See my post here yesterday: http://superuser.com/questions/132625/sunbelt-firewall-4-5-wont-block-firefox) The "stop all traffic" option may work on existing connections, but I'm done testing, as I need to be able to be selective at any time. I was using the free version, so the "web filtering" option quit working after some time (it mostly blocked ads and popups), but I didn't use that anyway. I used the last free version of Kerio before finally having to go to Sunbelt, because Kerio had an unfixed bug where you'd eventually get a BSOD and have to reset Kerio's configuration and start over (configure everything again). So I'm looking for a new firewall. I don't like ZoneAlarm at all (no offense to its users who may be here - personal taste). I need the following (Sunbelt has all of these, except the items marked with *):

    1. Be able to block in/out to localhost (trusted)/internet selectively for each application with a click (so there are 4 click boxes for each application) [*that affects everything immediately, regardless of what's already connected]. When a new application attempts a connection, you get an allow/deny/remember window.
    2. Be able to easily set up filter rules for 'individual application'/'all applications', by protocol, port/address (range), local, remote, in, out. [*Adding a filter rule also doesn't block existing connections in Sunbelt. That needs to work too.]
    3. Have an easy-to-get-to way to "stop all traffic" (like a right-click option on the running icon in the task bar).
    4. Be able to set trusted/internet in/out block/allowed (4 things per item) for each of IGMP, ping, DNS, DHCP, VPN, and broadcasts.
    5. Define localhost as trusted/untrusted, and define adapter connections as trusted/untrusted.
    6. Block incoming connections during boot-up and shutdown.
    7. Show existing connections, including local & remote IP/port, protocol, current speed, total bytes transferred, and local ports opened for listening.
    8. An Intrusion Prevention System which blocks (optionally selecting each one) known intrusions (a long list).
    9. Block/allow applications from starting other applications (deny/allow/remember window).

    Wish list: a way of knowing what svchost.exe is doing - who is actually using it/calling it. I allowed it for localhost, and selectively allowed it for internet each time the allow/deny window came up. Thanks for any help/suggestions. (I'm using Windows XP SP3.)

  • IPTABLES route, redirect, forward traffic

    - by Anthony
    I am trying to redirect traffic that reaches one IP on a specific port to a website. For example, I have two external IPs, let's say 194.145.63.1 and 194.145.63.2, set on one network card as 194.145.63.1 - eth0 and 194.145.63.2 - eth0:1. mywebsite.com allows access only from 194.145.63.1, and I want to set my rules so that if I hit http://194.145.63.2:8080, mywebsite.com is opened through 194.145.63.1. Thanks in advance!
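
    A hedged sketch of the kind of NAT rules this usually involves; 203.0.113.10 is only a stand-in for whatever mywebsite.com actually resolves to, and is not taken from the question:

      # enable forwarding, since the DNAT'ed packets are no longer addressed to this box
      sysctl -w net.ipv4.ip_forward=1
      # requests hitting the second address on 8080 get rewritten to the website...
      iptables -t nat -A PREROUTING -d 194.145.63.2 -p tcp --dport 8080 -j DNAT --to-destination 203.0.113.10:80
      # ...and leave the box with the first (allowed) address as their source
      iptables -t nat -A POSTROUTING -d 203.0.113.10 -p tcp --dport 80 -j SNAT --to-source 194.145.63.1

    A small reverse proxy (nginx, or Squid in accelerator mode) bound to 194.145.63.2:8080 would achieve the same effect at the HTTP level.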

  • How do I check to see if an apt lock file is locked?

    - by gondoi
    I am writing a script to run some apt commands, but I am running into potential issues with the apt/dpkg databases being locked, so my script bails. I want to check the lock files (e.g. /var/lib/dpkg/lock) before doing anything, just like apt does when it runs its commands, but I can't figure out how apt is performing the lock. The lock file is always there, and apt-get is not doing a flock on the file. How else would it be checking to see if it is in use? From an strace I see that apt-get opens the file, but that is it. In my script, I can open the file while apt-get has it open as well.
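
    A hedged sketch of one way to test this from a script, assuming the psmisc package (fuser) is installed: apt and dpkg keep the lock file open for as long as they hold it, so "some process has it open" is a reasonable proxy for "the lock is held":

      # bail out if anything currently has the dpkg or apt list lock files open
      if fuser -s /var/lib/dpkg/lock /var/lib/apt/lists/lock 2>/dev/null; then
          echo "apt/dpkg appears to be running, try again later" >&2
          exit 1
      fi
      apt-get update   # ...rest of the script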

  • Log ports opened by an application

    - by Simon A. Eugster
    I'm searching for something like:

      tcpdump -p PID       # But tcpdump does not know the PID

    or

      lsof -i --continuous # But lsof just runs and exits, no «live logging»

    to log which connections an application opens. In my case, I want to find out to which port git connects when committing. This happens in a fraction of a second, so I cannot use lsof. If there is a lot of traffic, filtering by PID or process name would be useful.
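
    A hedged sketch of the strace route, which catches even very short-lived connections (git push is just an example command):

      # log only the connect() calls made by git and anything it forks
      strace -f -e trace=connect -o /tmp/git-connects.log git push
      # the connect() lines in the log include the address and port each connection went to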

  • Why does my downloads folder take so long to load?

    - by msbg
    When I open my downloads folder, no files appear at first, just a message saying "This folder is empty". I see the address bar progress slowly creeping along, and after about thirty seconds the files and folders appear. This problem only occurs in the downloads folder. If I search for a file in the downloads folder, it comes up, and if I right-click on it and select "Open file location", everything shows up instantly. I am using Windows 8, but I think I had a similar problem once in Windows 7. Sadly, I can't remember how I fixed it.

  • Benefits of setting up a web server on Linux

    - by John Kent
    I wonder what the main purposes and benefits are of setting up a web server on Linux inside a VM hosted by Windows, so that the web server can be used as a local host for Windows. That is, I have a Java application that is a web server built on Linux. I set up the virtual machine so that the Windows client apps talk to its (Linux) local IP address and port, e.g. 192.168.50.50:11111. When my web server runs, I can use http://192.168.50.50/ as the Windows localhost address (instead of 127.0.0.1 as I usually do). Thank you

  • Setting Up Local Repository with TortoiseSVN in Windows

    - by Teno
    I'm trying to set up a local repository so that all commits are copied to a local destination, not a remote server. I followed this tutorial. What I did:

    1. Created a folder named "SVN_Repo" under C:\Documents and Settings\[user-name]\My Documents\
    2. Right-clicked on the folder and chose TortoiseSVN -> Create repository here
    3. Clicked OK in the pop-up dialog asking whether to create a directory structure.
    4. Created a folder named Repos for the local destination, under E:\
    5. Right-clicked on the SVN_Repo folder and chose SVN Checkout...
    6. Typed file:///E:\repos in the "URL of repository" field and clicked the OK button.

    What I got:

    Checkout from file:///E:/repos, revision HEAD, Fully recursive, Externals included
    Unable to connect to a repository at URL 'file:///E:/repos'
    Unable to open an ra_local session to URL
    Unable to open repository 'file:///E:/repos'

    I must be doing something wrong. Could somebody point it out? Thanks.
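
    For reference, a hedged sketch of the equivalent steps with the Subversion command-line client; the point is that a file:// URL uses forward slashes and must point at a folder that was itself created as a repository. The paths below are examples only, not taken from the question:

      rem create the repository first, then check out from it with a forward-slash URL
      svnadmin create E:\Repos
      svn checkout file:///E:/Repos C:\work\checkout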

  • Strange behavior from Firefox and Paypal Plugin

    - by Fake Name
    Ok, so I use the PayPal plugin with Firefox, which is really nice because you can generate single-use credit cards on the fly. However, recently it's been acting strange. Normally, when you use the plugin, a drop-down window opens in Firefox. However, at some point (I'm not sure exactly when it happened, because I thought it wasn't working for a while) the drop-down window started appearing in Thunderbird. As in, I click on the plugin button in Firefox, and the plugin window immediately opens in Thunderbird. If Thunderbird is closed, the plugin does not open at all. I can use it normally this way (everything works except the receipt-saving function), but the whole affair seems a little odd to me. Is there any reason that could cause things to be redirected from one Mozilla application window to another? Everything is up to date: Firefox 3.6.3, Thunderbird 3.0.5, PayPal Plugin 2.2.26.0.

  • ssh client problem: Connection reset by peer

    - by yonix
    I'm having a really annoying problem on my Ubuntu laptop. I noticed it today, after upgrading to Ubuntu 11.04, although I'm not entirely sure this is the cause as I played with my ssh keys a few days ago. The problem is, whenever I try to ssh to ANY host I get the following error:

    Read from socket failed: Connection reset by peer

    Running with -vvv gives the following output:

    OpenSSH_5.8p1 Debian-1ubuntu3, OpenSSL 0.9.8o 01 Jun 2010
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: Applying options for *
    debug2: ssh_connect: needpriv 0
    debug1: Connecting to hostname [10.0.0.2] port 22.
    debug1: Connection established.
    debug1: permanently_set_uid: 0/0
    debug1: identity file /root/.ssh/id_rsa type -1
    debug1: identity file /root/.ssh/id_rsa-cert type -1
    debug1: identity file /root/.ssh/id_dsa type -1
    debug1: identity file /root/.ssh/id_dsa-cert type -1
    debug1: identity file /root/.ssh/id_ecdsa type -1
    debug1: identity file /root/.ssh/id_ecdsa-cert type -1
    debug1: Remote protocol version 1.99, remote software version OpenSSH_4.2
    debug1: match: OpenSSH_4.2 pat OpenSSH_4*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-1ubuntu3
    debug2: fd 3 setting O_NONBLOCK
    debug3: load_hostkeys: loading entries for host "hostname" from file "/root/.ssh/known_hosts"
    debug3: load_hostkeys: loaded 0 keys
    debug1: SSH2_MSG_KEXINIT sent
    Read from socket failed: Connection reset by peer

    My /etc/ssh/ssh_config:

    Host *
        SendEnv LANG LC_*
        HashKnownHosts yes
        GSSAPIAuthentication no
        GSSAPIDelegateCredentials no

    I can connect to my laptop from any other server via ssh, and I can also ssh localhost from my laptop successfully. I can connect to all these other servers from other laptops, and I don't see anything in the logs of the other servers regarding my failed attempt. I tried to stop iptables, didn't help. I tried several tricks I could find online with my /etc/ssh/ssh_config, but I was unsuccessful in solving the problem... Any ideas?

    Edit: This is the log from one of the hosts I try to connect to:

    May 1 19:15:23 localhost sshd[2845]: debug1: Forked child 2847.
    May 1 19:15:23 localhost sshd[2845]: debug3: send_rexec_state: entering fd = 8 config len 577
    May 1 19:15:23 localhost sshd[2845]: debug3: ssh_msg_send: type 0
    May 1 19:15:23 localhost sshd[2845]: debug3: send_rexec_state: done
    May 1 19:15:23 localhost sshd[2847]: debug1: rexec start in 5 out 5 newsock 5 pipe 7 sock 8
    May 1 19:15:23 localhost sshd[2847]: debug1: inetd sockets after dupping: 3, 3
    May 1 19:15:23 localhost sshd[2847]: Connection from 10.0.0.7 port 55747
    May 1 19:15:23 localhost sshd[2847]: debug1: Client protocol version 2.0; client software version OpenSSH_5.8p1 Debian-1ubuntu3
    May 1 19:15:23 localhost sshd[2847]: debug1: match: OpenSSH_5.8p1 Debian-1ubuntu3 pat OpenSSH*
    May 1 19:15:23 localhost sshd[2847]: debug1: Enabling compatibility mode for protocol 2.0
    May 1 19:15:23 localhost sshd[2847]: debug1: Local version string SSH-2.0-OpenSSH_5.3
    May 1 19:15:23 localhost sshd[2847]: debug2: fd 3 setting O_NONBLOCK
    May 1 19:15:23 localhost sshd[2847]: debug2: Network child is on pid 2848
    May 1 19:15:23 localhost sshd[2847]: debug3: preauth child monitor started
    May 1 19:15:23 localhost sshd[2847]: debug3: mm_request_receive entering
    May 1 19:15:23 localhost sshd[2848]: debug3: privsep user:group 74:74
    May 1 19:15:23 localhost sshd[2848]: debug1: permanently_set_uid: 74/74
    May 1 19:15:23 localhost sshd[2848]: debug1: list_hostkey_types: ssh-rsa,ssh-dss
    May 1 19:15:23 localhost sshd[2848]: debug1: SSH2_MSG_KEXINIT sent
    May 1 19:15:23 localhost sshd[2848]: debug3: Wrote 784 bytes for a total of 805
    May 1 19:15:23 localhost sshd[2848]: fatal: Read from socket failed: Connection reset by peer

  • Where to find URLs for sources.list for debian for running apt-get update?

    - by Boda Cydo
    Can anyone tell me where to find URLs to put in /etc/apt/sources.list for Debian so that I can run apt-get update? I couldn't find the precise answer by searching Google. When I currently try running apt-get update I get:

    W: Failed to fetch ftp://ftp.debian.org/debian/dists/lenny/contrib/binary-i386/Packages Unable to fetch file, server said 'Failed to open file. ' [IP: 130.89.148.12 21]
    W: Failed to fetch ftp://ftp.debian.org/debian/dists/lenny/non-free/binary-i386/Packages Unable to fetch file, server said 'Failed to open file. ' [IP: 130.89.148.12 21]

    I have no idea how to solve this. Here is what my current sources.list looks like:

    deb ftp://ftp.debian.org/debian lenny main contrib non-free
    deb-src ftp://ftp.debian.org/debian lenny main contrib non-free
    deb ftp://ftp.debian.org/debian lenny/updates main contrib non-free
    deb-src ftp://ftp.debian.org/debian lenny/updates main contrib non-free

    I'm running debian_version 5.0.8:

    # cat /etc/debian_version
    5.0.8

    Thanks!
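
    For what it's worth, a hedged example of sources.list lines for lenny as it stands today - lenny has since been moved off the main mirrors onto the archive host, with security updates under debian-security (treat the exact paths as assumptions to verify):

      deb http://archive.debian.org/debian/ lenny main contrib non-free
      deb-src http://archive.debian.org/debian/ lenny main contrib non-free
      deb http://archive.debian.org/debian-security/ lenny/updates main contrib non-free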

  • Rejecting new HTTP requests when server reaches a certain throughput

    - by Sam
    I have a requirement to run an HTTP server that rejects new HTTP requests (with a 503, or similar) when the global transfer rate of current HTTP responses exceeds a certain level. For example, if the web server is transferring at 98Mbps, and a new HTTP request arrives, we would want to reject this (as we couldn't guarantee a good speed). I've had a look at mod_cband for Apache, limit_req for nginx, and lighttpd's rate limiting features, but none of them seem to handle my (rather contrived, granted) use case. I should add that I'm open to using pretty much any web server, and am open to implementing this in iptables rules if someone can craft such a rule! (Refusing the TCP connection is fine, it doesn't have to respond with an HTTP 503). Any suggestions?

  • Exchange not delivering mail

    - by wolfvilleian
    I'm having an issue where my Exchange Edge Transport server receives mail (found in the logs) and then it vanishes, never ending up in the user's mailbox. I have an edge subscription set up between it and the main Exchange server. How can I go about tracing the message to figure out what is broken? I have also found records of the message in the logs on the main Exchange server. Thanks a ton for any help. Edit: If I change port 25 on my main router to point to the main Exchange server instead of the Edge Transport, email from external domains comes through fine and is delivered to the correct mailbox.
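
    A hedged sketch of the kind of message-tracking query that helps here, run in the Exchange Management Shell on each hop (the sender address is only a placeholder):

      # follow a test message across the servers; run on the Edge Transport and on the Hub/Mailbox server
      Get-MessageTrackingLog -Sender "someone@example.com" -Start (Get-Date).AddHours(-2) |
          Select-Object Timestamp, EventId, Source, Recipients, MessageSubject | Format-Table -AutoSize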

  • Automatically connect to 3G dialup connection

    - by Philipp
    I have a notebook with an internal 3G modem (a Huawei Mobile Connect) which appears to the system as a dial-up modem. When I want to connect to the Internet, I need to open the charms bar, click Settings, Network, the modem, and "Connect" to open the dial-up dialog on the desktop. When the connection is interrupted for some reason, I need to repeat these steps. Is there a way to configure Windows 8 to automatically (and preferably silently in the background) connect with this modem at system startup and automatically restore the connection should it get interrupted? As an operating system developed for mobile devices, I would expect it to be able to manage an always-on 3G connection somehow.
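
    A hedged sketch of one common workaround, assuming the modem shows up as an ordinary RAS/dial-up phonebook entry with saved credentials; "MyMobile" is a placeholder name. The batch file below could be run at logon (or as a Task Scheduler task) and keeps redialling in the background:

      @echo off
      rem redial the mobile-broadband entry whenever it is not listed as connected
      :loop
      rasdial | find "MyMobile" >nul || rasdial "MyMobile"
      timeout /t 30 /nobreak >nul
      goto loop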

  • Remove USB device from command line

    - by Luke
    I'm constructing a backup script for Windows 7, and the last action I want it to perform is to safely "remove" the USB drive that it is backing up to. I am under the impression that plugging the drive into the SAME USB port all the time will keep the same DEV_ID; correct me if I'm wrong. From the command line (or PowerShell), how can I tell Windows to safely remove the hardware automatically, without user input? Just as a placeholder, other OSes that may have a way to do this would be great to know about as well.
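
    One frequently suggested approach is to ask the Windows shell to "Eject" the volume through its COM object; a hedged PowerShell sketch (the drive letter E: is a placeholder, and some devices ignore the verb):

      # Namespace 17 is the Drives ("My Computer") folder; InvokeVerb("Eject") triggers safe removal on many USB drives
      $shell = New-Object -ComObject Shell.Application
      $shell.NameSpace(17).ParseName("E:").InvokeVerb("Eject")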

  • Squid on Windows load balancing only to one server

    - by Martin L.
    After thousands of Google searches and days of trying, I can't get the load balancer/failover in Squid on Windows to work. I am using Squid 2.7. My web servers are two single-NIC lighttpd servers and one dual-NIC lighttpd server. server1 in this example is running Squid on port 80 and lighttpd on port 8080 (just to test).

    Requirements: All 3 web servers running lighttpd should be balanced. Two options for load balancing: best would be that if server1 is busy, server2 takes over, if server2 is busy, server3 takes over, etc.; or round-robin style, evenly distributed load, e.g. server1 takes the first call, server2 the second, etc. All requests should be treated the same way (no URL rewriting or the like). Sent host headers have to be passed to every server as the HTTP Host header, meaning "server1", "server1.company.internal" and "10.211.1.1".

    My approach:

    acl all src all
    acl manager proto cache_object
    http_port 80 accel defaultsite=server1.company.internal vhost

    #reverse proxy entries
    cache_peer 10.211.2.1 parent 8080 0 no-query originserver round-robin login=PASS name=server1_nic1
    cache_peer 10.211.1.2 parent 80 0 no-query originserver round-robin login=PASS name=server2_nic1
    cache_peer 10.211.2.3 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic1
    cache_peer 10.211.2.4 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic2

    #decl of names of squid host
    acl registered_name_hostdomain dstdomain server1.company.internal
    acl registered_name_host dstdomain server1
    #ip of squid host
    acl registered_name_ip dstdomain 10.211.2.1

    # access: redirects the correct squid hostname
    http_access allow registered_name_hostdomain
    http_access allow registered_name_host
    http_access allow registered_name_ip
    http_access deny all

    cache_peer_access server1_nic1 allow registered_name_hostdomain
    cache_peer_access server1_nic1 allow registered_name_host
    cache_peer_access server1_nic1 allow registered_name_ip
    cache_peer_access server2_nic1 allow registered_name_hostdomain
    cache_peer_access server2_nic1 allow registered_name_host
    cache_peer_access server2_nic1 allow registered_name_ip
    cache_peer_access server3_nic1 allow registered_name_hostdomain
    cache_peer_access server3_nic1 allow registered_name_host
    cache_peer_access server3_nic1 allow registered_name_ip
    cache_peer_access server3_nic2 allow registered_name_hostdomain
    cache_peer_access server3_nic2 allow registered_name_host
    cache_peer_access server3_nic2 allow registered_name_ip
    cache_peer_access server1_nic1 deny all
    cache_peer_access server2_nic1 deny all
    cache_peer_access server3_nic1 deny all
    cache_peer_access server3_nic2 deny all

    never_direct allow all

    Problems: The load balancer does not balance to anything other than the first server. Only if the first server is killed in some way does the second take over. I have seen the others working at some point, but definitely not with the intended load balancing described above. If cache_peer_access is not defined, sometimes the wrong hostname is sent to the backend web server, and this always depends on the defaultsite= parameter - probably because the Host header on the request to Squid is not set and is replaced by defaultsite. Leaving out defaultsite didn't solve the problem. The only workaround I found for this is the current approach with cache_peer_access.

    Questions: Does cache_peer_access influence the round-robin? Is there a better workaround to pass the Host header to the backend web servers? Which parameters increase the speed of load balancing, or does anyone have a better approach? -Martin

  • FC SAN network high-error rate simulation

    - by Wieslaw Herr
    Is there a way to simulate a malfunctioning device or a faulty cable in an FC SAN network? Edit: I know shutting down a port on a switch is an option, but I'd like to simulate high error rates. In an Ethernet network it would be a simple case of adding a transparent bridge that discards a given percentage of the packets, but I have absolutely no idea how to tackle that in a Fibre Channel environment...

  • Can only bring up one of two interfaces

    - by mstaessen
    I'm having a bizarre issue with my HP ProLiant DL360 G4p server. It has two gigabit Ethernet interfaces, but I can bring up only one of them. This is starting to freak me out and that's why I turned here. I'm running the x64 Ubuntu 11.10 server edition. lshw -c network shows that the second interface is disabled. I have no idea why and how to enable it.

    $ sudo lshw -c network
    *-network:0
        description: Ethernet interface
        product: NetXtreme BCM5704 Gigabit Ethernet
        vendor: Broadcom Corporation
        physical id: 2
        bus info: pci@0000:02:02.0
        logical name: eth0
        version: 10
        serial: 00:18:71:e3:6d:26
        size: 100Mbit/s
        capacity: 1Gbit/s
        width: 64 bits
        clock: 66MHz
        capabilities: pcix pm vpd msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
        configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.119 duplex=full firmware=5704-v3.27b, ASFIPMIc v2.36 ip=10.48.8.x latency=64 link=yes mingnt=64 multicast=yes port=twisted pair speed=100Mbit/s
        resources: irq:25 memory:fdf70000-fdf7ffff
    *-network:1 DISABLED
        description: Ethernet interface
        product: NetXtreme BCM5704 Gigabit Ethernet
        vendor: Broadcom Corporation
        physical id: 2.1
        bus info: pci@0000:02:02.1
        logical name: eth1
        version: 10
        serial: 00:18:71:e3:6d:25
        capacity: 1Gbit/s
        width: 64 bits
        clock: 66MHz
        capabilities: pcix pm vpd msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
        configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.119 firmware=5704-v3.27b latency=64 link=no mingnt=64 multicast=yes port=twisted pair
        resources: irq:26 memory:fdf60000-fdf6ffff

    If I try to ifup eth1, then I get

    $ sudo ifup eth1
    Ignoring unknown interface eth1=eth1.

    I figured that's what happens when there is no eth1 listed in /etc/network/interfaces. But when I add the configuration for eth1, I still can't ifup:

    $ sudo ifup eth1
    RTNETLINK answers: File exists
    Failed to bring up eth1.

    I've also tried ifconfig eth1 up, but without any result. For clarity, I have added a masked version of /etc/network/interfaces. I don't think it is the cause of the problem though.

    $ cat /etc/network/interfaces
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).

    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The primary network interface
    auto eth0
    iface eth0 inet static
        address 10.48.8.x
        netmask 255.255.255.y
        network 10.48.8.z
        broadcast 10.48.8.t
        gateway 10.48.8.u

    auto eth1
    iface eth1 inet static
        address 193.190.253.x
        netmask 255.255.255.y
        network 193.190.253.z
        broadcast 193.190.253.t
        gateway 193.190.253.u

    I really need some help fixing this. It's driving me crazy. Thanks.
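
    For context, "RTNETLINK answers: File exists" from ifup very often just means it tried to add a route that is already in the table; with two static stanzas that both carry a gateway line, the second default route collides with the first. A hedged sketch of the usual workaround, keeping the masked placeholders from the question:

      # eth1 stanza without a second plain "gateway" line; if eth1 really needs its own
      # default route, add it with a distinct metric instead (sketch only)
      auto eth1
      iface eth1 inet static
          address 193.190.253.x
          netmask 255.255.255.y
          post-up ip route add default via 193.190.253.u dev eth1 metric 100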

  • Opening NBF backup file?

    - by ellisgeek
    I have a backup file from before I reinstalled Windows, but I am unable to open it because the file is an NBF. It was created with Acer Backup Manager, which is a proprietary version of NTI's backup software. Is there any way to open this? I have tried using NTI Backup Now! 4.x, but it says the file is invalid. Acer Backup Manager will only let me restore the ENTIRE image (not what I want), and many hours of googling have left me empty-handed.

  • Windows 7 clean install, freezes when starting programs

    - by madmaze
    Hello everyone, I have a problem on a brand new Windows 7 Lenovo T510 with an Intel i5 and 4 GB of RAM. When I got the laptop, right off the bat I noticed that it would freeze up often on the most simple tasks. After a while I got fed up and reinstalled. All that is installed currently is AVG/Firefox/Office. I rebooted after installing those things, and when I try to open Firefox it freezes while just trying to open. I've tried a few other things and it freezes all the time. Any ideas what it could be?

  • TCP Proxy with multiple clients?

    - by Daphna
    I am looking for a TCP proxy - a utility that will connect to a port, read a TCP stream, and write it to clients that connect to it. The key point here is that there may be more than one client, and each client should receive a copy of the stream. Preferably a Windows solution, but Linux can be useful as well.

  • Missing swf plugin in Google Chrome [Ubuntu]

    - by Bakhtiyor
    How do I play .swf files in Google Chrome? When I open .swf files it says "Missing Plug-in", even though chrome://plugins shows that the Shockwave Flash plugin is already installed. My OS is Ubuntu 10 and I would be happy if some Linux geek could suggest a solution to my problem. Thanks, and sorry for the stupid question.

    Update 1: Here is the version information for the applications I use: Chrome 5.0.375.70 (48679) Ubuntu, Shockwave Flash 9.0 r999.

    Update 2: I solved the problem in the following way: opened the Synaptic package manager, deleted all the packages concerning Flash and SWF, opened the Ubuntu Software Center, searched for "flash", which found the Adobe Flash Plugin, and installed that application.
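
    For reference, a hedged command-line equivalent of that last step on Ubuntu releases of that era (package names vary between releases, so treat these as assumptions to check):

      sudo apt-get purge flashplugin-nonfree flashplugin-installer
      sudo apt-get install flashplugin-installer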

  • Monit Webmin service is not activating

    - by Nagaraj
    I have written a check in the monit interface for the Webmin service. I can execute the process, but I am unable to restart the service.

    check process webmin with pidfile /var/webmin/miniserv.pid
        start = "/etc/init.d /webmin start"
        stop = "/etc/init.d /webmin stop"
        if failed host in1.miracletel.com port 10000 then restart
        if 5 restarts within 5 cycles then timeout
        #if changed pid 2 times within 2 cycles then alert

    Would you please look into this and let me know whether I can consider the service configuration correct or not?
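
    For comparison, a hedged sketch of how such a check is normally written for monit - the documented syntax uses "start program =" / "stop program =" rather than bare "start =", and the init script path would not normally contain a space:

      check process webmin with pidfile /var/webmin/miniserv.pid
          start program = "/etc/init.d/webmin start"
          stop program = "/etc/init.d/webmin stop"
          if failed host in1.miracletel.com port 10000 then restart
          if 5 restarts within 5 cycles then timeout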

  • How do I implement per-VLAN rate limiting or QoS on a Cisco 2960?

    - by evolvd
    I have a 2960 on which I need to limit the uplink port to 50 Mbps for 3 VLANs and 350 Mbps for another VLAN. Would the following config achieve that, or is this even possible on a 2960?

    class-map match-any VLAN50-51-52
     match vlan 50-52
    class-map match-any VLAN53
     match vlan 53
    policy-map 50MB_RATE_LIMIT
     class VLAN50-51-52
      police 50000000 5000000 exceed-action drop
     class VLAN53
      police 350000000 35000000 exceed-action drop
    !
    interface GigabitEthernet0/23
     service-policy output 50MB_RATE_LIMIT
     service-policy input 50MB_RATE_LIMIT
