Search Results

Search found 12283 results on 492 pages for 'tcp port'.

Page 33/492

  • Port Compiler Options (8 replies)

    I currently own a license for Crossworks for ARM and would like to compile the port using it. With the code for the CLR now available, is it possible to compile the port with any ARM compiler or are we still restricted to the Keil ARM gcc compilers?

    Read the article

  • Need to open port 10000 for Webmin and 21 for FTP in CentOS?

    - by Abir Sepahvand
    Hi, how can I open these two ports in CentOS? I have used Webmin with Ubuntu before, but I never had to manually open any port. When I enter iptables -L I get output like this:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
        [root@sachinvasudev test]#
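
    A minimal command-line sketch of the usual CentOS approach, assuming the default filter table and the stock iptables service (note that the output pasted above shows empty chains with an ACCEPT policy, so iptables itself may not be what is blocking Webmin or FTP):

        # allow Webmin (TCP 10000) and the FTP control channel (TCP 21)
        iptables -I INPUT -p tcp --dport 10000 -j ACCEPT
        iptables -I INPUT -p tcp --dport 21 -j ACCEPT

        # persist the rules across reboots (written to /etc/sysconfig/iptables)
        service iptables save

    For passive-mode FTP you would additionally load the ip_conntrack_ftp / nf_conntrack_ftp helper so the data connections are tracked.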

    Read the article

  • Iptables --gw parameter

    - by schoen
    I want to copy TCP traffic. I want to use these commands:

        iptables -A PREROUTING -t mangle -p tcp --dport 7 -j ROUTE --gw 1.2.3.4 --tee
        iptables -A POSTROUTING -t mangle -p tcp --sport 7 -j ROUTE --gw 1.2.3.4 --tee

    as stated here: http://stackoverflow.com/questions/7247668/duplicate-tcp-traffic-with-a-proxy, but iptables keeps telling me "iptables v1.4.8: unknown option '--gw'". What can I do to fix this? Kind regards
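
    The ROUTE target (and its --gw/--tee options) was a patch-o-matic extension that never made it into mainline iptables, which is why 1.4.8 reports an unknown option. A sketch of the equivalent using the mainline TEE target instead, assuming a reasonably recent kernel (2.6.36+) with the xt_TEE module available:

        # duplicate inbound and outbound TCP port 7 traffic to 1.2.3.4
        iptables -t mangle -A PREROUTING  -p tcp --dport 7 -j TEE --gateway 1.2.3.4
        iptables -t mangle -A POSTROUTING -p tcp --sport 7 -j TEE --gateway 1.2.3.4

    TEE sends an unmodified copy of each matching packet to the given gateway, which must be on a directly attached network segment; this covers what the ROUTE --tee trick was used for.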

    Read the article

  • Microsoft Silverlight 3 cannot create service reference to localhost:port

    - by Monte
    Windows Server 2003 (IIS 6), Visual Studio 2008, .NET Framework 3.5 SP1. I am a .NET developer for a living and I have over 40 hours in this problem. Project type = "Silverlight Navigation Application" / "ASP.NET Web Site" (when I tried it as an "ASP.NET Web Application Project" I could not copy it to the production web site - well, I could copy it, but I could not make it run).

    I created a service.cs on the .Web side of the application and created a reference to that service.cs on the Silverlight side. For a time all is good, as I can reference the service as localhost:port (e.g. localhost:1374) in Visual Studio and debug both the Silverlight side and service.cs. To access the application in production mode (from IE) I update the service reference and replace localhost:port with the IP address. The problem with the IP address is that I cannot debug service.cs, so I have to change it back to localhost:port to debug.

    Now to the problem: after a period of time, localhost:port just plain breaks, and I get an error message that there is no service at the other end. Yes, I know the port can change - that is not the problem - the port on the service side just plain breaks! For example, from the Silverlight side of the project in Visual Studio, right-click "Service Reference", then "Add Service Reference". It finds 1 service in the application on a port. But when I click that service under "Services:" in the "Add Service Reference" modal dialog box I get an error: There was an error downloading 'http://localhost:1377/SehaleCSS.Web/Service.svc'. The request failed with the error message: -- Could not load file or assembly 'App_Web_tipnndfq,

    If I go back to the IP address, the service is responding (with the right answer). The service responds as localhost:port for a while and then fails; even with NO change to service.cs, it works for a while and then fails as localhost:port. It is not IIS environmental, as I can go back to a prior saved version of the code and it works. Something is happening that makes the .Web side of the application fail: it still works as an IP and it still exposes itself as localhost:port, but it fails to respond properly as localhost:port.

    Read the article

  • Network Access: I can't access 192.168.1.101 from 192.168.1.102.

    - by takpar
    Hi, I'm running Ubuntu 10.04 on my PC with IP 192.168.1.101. Everything works fine, e.g. my web server is running and I can see http://localhost/ or http://192.168.1.101 properly. But the problem is that I cannot see my PC from my laptop at 192.168.1.102, e.g. on my laptop http://192.168.1.101 gives "Connection timed out" in the browser, and trying to telnet on any port leads to: telnet: Unable to connect to remote host: Connection timed out. The laptop is running a fresh install of Ubuntu as well, and there is no firewall setup on either computer. PS: Both computers can ping each other fine. The router is a Cisco Linksys wireless ADSL modem. Currently, I can connect to the FTP server on the Windows machine running on 192.168.1.102 from 192.168.1.101 without a problem.

    These are commands run on my PC, 192.168.1.101. ifconfig:

        adp@adp-desktop:~$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:26:18:e1:8e:cf
                  inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe70::226:18ff:fee1:8ecf/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1831935 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1493786 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1996855925 (1.9 GB)  TX bytes:215288238 (215.2 MB)
                  Interrupt:27 Base address:0xa000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:951742 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:951742 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:494351095 (494.3 MB)  TX bytes:494351095 (494.3 MB)

        vmnet1    Link encap:Ethernet  HWaddr 00:50:46:c0:00:01
                  inet addr:192.168.91.1  Bcast:192.168.91.255  Mask:255.255.255.0
                  inet6 addr: fe70::250:56ff:fec0:1/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:50 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        vmnet8    Link encap:Ethernet  HWaddr 00:50:46:c0:00:08
                  inet addr:192.168.156.1  Bcast:192.168.156.255  Mask:255.255.255.0
                  inet6 addr: fe70::250:56ff:fec0:8/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:51 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    Port 80 is bound to 0.0.0.0 as well:

        adp@adp-desktop:~$ netstat -ln | grep 'LISTEN '
        tcp   0  0 127.0.0.1:52815   0.0.0.0:*   LISTEN
        tcp   0  0 0.0.0.0:4559      0.0.0.0:*   LISTEN
        tcp   0  0 0.0.0.0:80        0.0.0.0:*   LISTEN
        tcp   0  0 0.0.0.0:4369      0.0.0.0:*   LISTEN
        tcp   0  0 127.0.0.1:7634    0.0.0.0:*   LISTEN
        tcp   0  0 0.0.0.0:21        0.0.0.0:*   LISTEN
        tcp   0  0 0.0.0.0:5269      0.0.0.0:*   LISTEN
        tcp   0  0 127.0.0.1:631     0.0.0.0:*   LISTEN
        tcp   0  0 0.0.0.0:25        0.0.0.0:*   LISTEN
        tcp   0  0 0.0.0.0:5280      0.0.0.0:*   LISTEN
        tcp   0  0 127.0.1.1:7777    0.0.0.0:*   LISTEN
        tcp   0  0 0.0.0.0:33601     0.0.0.0:*   LISTEN
        tcp   0  0 0.0.0.0:5222      0.0.0.0:*   LISTEN
        tcp   0  0 127.0.0.1:3306    0.0.0.0:*   LISTEN
        tcp6  0  0 :::139            :::*        LISTEN
        tcp6  0  0 ::1:631           :::*        LISTEN
        tcp6  0  0 :::445            :::*        LISTEN

    /etc/hosts.deny is empty:

        adp@adp-desktop:~$ cat /etc/hosts.deny
        # /etc/hosts.deny: list of hosts that are _not_ allowed to access the system.
        #                  See the manual pages hosts_access(5) and hosts_options(5).
        #
        # Example:    ALL: some.host.name, .some.domain
        #             ALL EXCEPT in.fingerd: other.host.name, .other.domain
        #
        # If you're going to protect the portmapper use the name "portmap" for the
        # daemon name. Remember that you can only use the keyword "ALL" and IP
        # addresses (NOT host or domain names) for the portmapper, as well as for
        # rpc.mountd (the NFS mount daemon). See portmap(8) and rpc.mountd(8)
        # for further information.
        #
        # The PARANOID wildcard matches any host whose name does not match its
        # address.
        #
        # You may wish to enable this to ensure any programs that don't
        # validate looked up hostnames still leave understandable logs. In past
        # versions of Debian this has been the default.
        # ALL: PARANOID

    netstat -l:

        adp@adp-desktop:~$ netstat -l
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address            Foreign Address   State
        tcp        0      0 localhost:52815          *:*               LISTEN
        tcp        0      0 *:hylafax                *:*               LISTEN
        tcp        0      0 *:www                    *:*               LISTEN
        tcp        0      0 *:4369                   *:*               LISTEN
        tcp        0      0 localhost:7634           *:*               LISTEN
        tcp        0      0 *:ftp                    *:*               LISTEN
        tcp        0      0 *:xmpp-server            *:*               LISTEN
        tcp        0      0 localhost:ipp            *:*               LISTEN
        tcp        0      0 *:smtp                   *:*               LISTEN
        tcp        0      0 *:5280                   *:*               LISTEN
        tcp        0      0 adp-desktop:7777         *:*               LISTEN
        tcp        0      0 *:33601                  *:*               LISTEN
        tcp        0      0 *:xmpp-client            *:*               LISTEN
        tcp        0      0 localhost:mysql          *:*               LISTEN
        tcp6       0      0 [::]:netbios-ssn         [::]:*            LISTEN
        tcp6       0      0 localhost:ipp            [::]:*            LISTEN
        tcp6       0      0 [::]:microsoft-ds        [::]:*            LISTEN
        udp        0      0 *:bootpc                 *:*
        udp        0      0 *:mdns                   *:*
        udp        0      0 *:47467                  *:*
        udp        0      0 192.168.1.10:netbios-ns  *:*
        udp        0      0 192.168.91.1:netbios-ns  *:*
        udp        0      0 192.168.156.:netbios-ns  *:*
        udp        0      0 *:netbios-ns             *:*
        udp        0      0 192.168.1.1:netbios-dgm  *:*
        udp        0      0 192.168.91.:netbios-dgm  *:*
        udp        0      0 192.168.156:netbios-dgm  *:*
        udp        0      0 *:netbios-dgm            *:*
        raw        0      0 *:icmp                   *:*               7

    netstat -rn:

        adp@adp-desktop:~$ netstat -rn
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 eth0
        192.168.91.0    0.0.0.0         255.255.255.0   U         0 0          0 vmnet1
        192.168.156.0   0.0.0.0         255.255.255.0   U         0 0          0 vmnet8
        169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
        0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 eth0

    Commands run on the laptop, 192.168.1.102. ifconfig:

        root@fakeuser-laptop:~# ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:1c:33:a2:31:15
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:21

        eth1      Link encap:Ethernet  HWaddr 00:2d:d9:3e:1f:6c
                  inet addr:192.168.1.102  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe70::21d:d9ff:fe3e:1f6c/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:5681 errors:0 dropped:0 overruns:0 frame:10313
                  TX packets:6717 errors:6 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:4055251 (4.0 MB)  TX bytes:779308 (779.3 KB)
                  Interrupt:18

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:206 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:206 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:15172 (15.1 KB)  TX bytes:15172 (15.1 KB)

    netstat -rn:

        root@fakeuser-laptop:~# netstat -rn
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 eth1
        169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth1
        0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 eth1
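
    A few generic checks that usually narrow this down (a sketch, assuming tcpdump and nmap are installed; the interface and addresses should be adjusted to match):

        # on the PC (192.168.1.101): confirm no local firewall rules are filtering inbound traffic
        sudo iptables -L -n -v

        # on the PC: watch whether the laptop's SYNs ever arrive while you retry from the laptop
        sudo tcpdump -ni eth0 'tcp port 80 and host 192.168.1.102'

        # on the laptop: probe the PC's open ports from the outside
        nmap -p 21,80,3306 192.168.1.101

    If the SYNs show up in tcpdump but the browser still times out, the block is on the PC; if they never arrive, look at the router, e.g. wireless client isolation on the Linksys access point.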

    Read the article

  • Book about TCP, HTTP, named pipes, shared memory, WCF and other inter-process communication protocols

    - by Samuel
    Recently, I had to create a program to send messages between two WinForms executables. I used a tool with simple built-in functionality to avoid having to figure out all the ins and outs of the vast number of protocols that exist. But now I'm ready to learn more about the internal differences between these protocols. I googled a couple of them, but it would be greatly appreciated to have a good reference book that gives me a clear idea of how each protocol works and what its pros and cons are in various contexts. Here is a list of the mechanisms I found: shared memory, TCP, named pipes, file mapping, mailslots, MSMQ (Microsoft Message Queuing), WCF. I know that none of these is specific to one language, but it would be nice if the examples could be in .NET. Thank you very much.

    Read the article

  • Exploring TCP throughput with DTrace (2)

    - by user12820842
    Last time, I described how we can use the overlap in the distributions of unacknowledged byte counts and send window to determine whether the peer's receive window may be too small, limiting throughput. Let's combine that comparison with a comparison of congestion window and slow start threshold, all on a per-port/per-client basis. This will help us:

    - Identify whether the congestion window or the receive window are limiting factors on throughput, by comparing the distributions of congestion window and send window values to the distribution of outstanding (unacked) bytes. This gives us a visual sense for how often we are thwarted in our attempts to fill the pipe due to congestion control versus the peer not being able to receive any more data.
    - Identify whether slow start or congestion avoidance predominates, by comparing the overlap in the congestion window and slow start threshold distributions. If the slow start threshold distribution overlaps with the congestion window, we know that we have switched between slow start and congestion avoidance, possibly multiple times.
    - Identify whether the peer's receive window is too small, by comparing the distribution of outstanding unacked bytes with the send window distribution (i.e. the peer's receive window). I discussed this here.

        # dtrace -s tcp_window.d
        dtrace: script 'tcp_window.d' matched 10 probes
        ^C

          cwnd                                      80    10.175.96.92
                   value  ------------- Distribution ------------- count
                    1024 |                                         0
                    2048 |                                         4
                    4096 |                                         6
                    8192 |                                         18
                   16384 |                                         36
                   32768 |@                                        79
                   65536 |@                                        155
                  131072 |@                                        199
                  262144 |@@@                                      400
                  524288 |@@@@@@                                   798
                 1048576 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@             3848
                 2097152 |                                         0

          ssthresh                                  80    10.175.96.92
                   value  ------------- Distribution ------------- count
               268435456 |                                         0
               536870912 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543
              1073741824 |                                         0

          unacked                                   80    10.175.96.92
                   value  ------------- Distribution ------------- count
                      -1 |                                         0
                       0 |                                         1
                       1 |                                         0
                       2 |                                         0
                       4 |                                         0
                       8 |                                         0
                      16 |                                         0
                      32 |                                         0
                      64 |                                         0
                     128 |                                         0
                     256 |                                         3
                     512 |                                         0
                    1024 |                                         0
                    2048 |                                         4
                    4096 |                                         9
                    8192 |                                         21
                   16384 |                                         36
                   32768 |@                                        78
                   65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  5391
                  131072 |                                         0

          swnd                                      80    10.175.96.92
                   value  ------------- Distribution ------------- count
                   32768 |                                         0
                   65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543
                  131072 |                                         0

    Here we are observing a large file transfer via HTTP on the webserver. Comparing these distributions, we can observe:

    - Slow start congestion control is in operation. The distribution of congestion window values lies below the range of slow start threshold values (which are in the 536870912+ range), so the connection is in slow start mode.
    - Both the unacked byte count and the send window values peak in the 65536-131071 range, but the send window value distribution is narrower. This tells us that the peer TCP's receive window is not closing.
    - The congestion window distribution peaks in the 1048576-2097152 range while the receive window distribution is confined to the 65536-131071 range. Since the cwnd distribution ranges as low as 2048-4095, we can see that for some of the time we have been observing the connection, congestion control has been a limiting factor on transfer, but for the majority of the time the receive window of the peer would more likely have been the limiting factor. However, we know the window has never closed, as the distribution of swnd values stays within the 65536-131071 range.

    So all in all we have a connection that has been mildly constrained by congestion control, but for the bulk of the time we have been observing it, neither congestion control nor the peer's receive window has limited throughput.
    Here's the script:

        #!/usr/sbin/dtrace -s

        tcp:::send
        / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
        {
                @cwnd["cwnd", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_cwnd);
                @ssthresh["ssthresh", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_cwnd_ssthresh);
                @unacked["unacked", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_snxt - args[3]->tcps_suna);
                @swnd["swnd", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize((args[4]->tcp_window) * (1 << args[3]->tcps_snd_ws));
        }

    One surprise here is that slow start is still in operation - one would assume that for a large file transfer, acknowledgements would push the congestion window up past the slow start threshold over time. The slow start threshold is in fact still close to its initial (very high) value, so that would suggest we have not experienced any congestion (the slow start threshold is adjusted when congestion occurs). Also, the above measurements were taken early in the connection lifetime, so the congestion window did not get a chance to be bumped up to the level of the slow start threshold.

    A good strategy when examining these sorts of measurements for a given service (such as a webserver) would be to start by examining the distributions above aggregated by port number only, to get an overall feel for service performance, i.e. is congestion control or peer receive window size an issue, or are we unconstrained and free to fill the pipe? From there, the overlap of distributions will tell us whether to drill down into specific clients. For example, if the send window distribution has multiple peaks, we may want to examine whether particular clients show issues with their receive window.

    Read the article

  • C# TCP Hole Punch (NAT Traversal) Library or something?

    - by user293531
    I want to do TCP hole punching (NAT traversal) in C#. It can be done with a rendezvous server if needed. I found http://sharpstunt.codeplex.com/ but cannot get it to work. Ideally I need some method that takes a port number (int) as a parameter, such that after a call to this method the port is available ("port forwarded") at the NAT. It would also be OK if the method just returned some port number which is then available at the NAT. Has anybody done this in C#? Can you give me working examples for sharpstunt, or something else? Thank you

    Read the article

  • What alternative is there to writing a TCP/IP data relay?

    - by LoudNPossiblyRight
    I am about to write a TCP/IP data relay - an application that passes a one-way stream of data from one host/port to another host/port. Initially it will be generic, but later on I will customize it to the needs of a specific business request. I am guessing that something generic already exists out there, so my question is: has anyone used a third-party (preferably open source) data relay in a production environment? If so, what is it, and do you recommend it? Any platform is fine. Thanks.
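
    Generic relays of exactly this kind already exist; a minimal sketch using socat (assuming socat is acceptable in your environment, and with the hostname and ports as placeholders):

        # accept connections on local port 5000 and relay each one to backend.example.com:6000
        socat TCP-LISTEN:5000,fork,reuseaddr TCP:backend.example.com:6000

    For a strictly one-way stream you can add -u so data flows only from the first address to the second. Other common off-the-shelf options include rinetd, HAProxy in TCP mode, and plain iptables DNAT.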

    Read the article

  • How do you get the host address and port from a System.Net.EndPoint?

    - by cyclotis04
    I'm using a TcpClient passed to me from a TcpListener, and for the life of me I can't figure out a simple way to get the address and port it's connected to. The best I have so far is _client.Client.RemoteEndPoint.ToString(); which returns a string in the form FFFF::FFFF:FFFF:FFF:FFFF%00:0000. I've managed to extract the address and port using Regular Expressions, but this seems like overkill. What am I missing?

    Read the article

  • User input in Perl with IO::Socket

    - by David
    I am trying to make a Perl program which allows a user to input the host and the port number of a foreign host to connect to using IO::Socket. It lets me run the program and input a host and a port, but it never connects and says "Could not connect to [host] at c:\users\USER\Documents\code\perl\sql.pl line 18, line 2." What am I doing wrong with the code shown below? And also, how can I have input validation on my host, which can be either a host name or an IP address? Thanks a bunch! Code below:

        use IO::Socket

        print "Host to connect to: ";
        chomp ($host = <STDIN>);
        print "Port to connect with: ";
        chomp ($port = <STDIN>);
        while(($port > 65535) || ($port <= 0)){
            print "Port to connect with [Port > 0 < 65535] : ";
            chomp ($port = <STDIN>);
        }
        print "\nConnecting to host $host on port $port\n";
        $socket = new IO::Socket::INET (
            LocalHost => '$host',
            LocalPort => '$port',
            Proto     => 'tcp',
            Listen    => 5,
            Reuse     => 1
        );
        die "Could not connect to $host";

    Read the article

  • How to forward UDP Wake-on-Lan port to broadcast IP with IPTABLES?

    - by Nazgulled
    I'm trying to set up Wake-on-LAN for some of the LAN computers at home, and it seems that I need to open a UDP port (7 or 9 being the most common) and forward all requests to the broadcast IP, which in my case is 192.168.1.255. The problem is that my router does not allow me to forward anything to the broadcast IP. I can connect to my router through telnet and it seems this router uses iptables, but I don't know much about it or how to use it. Can someone help me out with the proper iptables commands to do what I want? Also, in case it doesn't work, the commands to put everything back would be nice too. One last thing: will rebooting the router keep those manually added iptables entries, or would I need to run them every time?
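
    A sketch of the usual workaround, assuming the router's LAN bridge is br0 and that 192.168.1.254 is an unused address (adjust both): Linux routers typically refuse to DNAT to the broadcast address directly, so the trick is to DNAT to a spare address and give that address a static ARP entry pointing at the Ethernet broadcast MAC.

        # forward incoming WoL packets (UDP 9) to a spare LAN address
        iptables -t nat -A PREROUTING -p udp --dport 9 -j DNAT --to-destination 192.168.1.254

        # map that address to the broadcast MAC so the frame reaches every host on the LAN
        arp -s 192.168.1.254 ff:ff:ff:ff:ff:ff
        # (some firmwares need the equivalent: ip neigh add 192.168.1.254 lladdr ff:ff:ff:ff:ff:ff nud permanent dev br0)

    To undo it, replace -A with -D in the iptables command and run "arp -d 192.168.1.254". Rules added by hand live only in the running kernel, so they are lost on reboot unless the router re-applies them from a startup script.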

    Read the article

  • Is it possible to make a persistent USB serial port regardless of the existence of the USB device?

    - by Keith Nicholas
    USB serial ports, especially devices which emulate serial ports, don't quite behave the same as old serial ports, which causes a few problems with some software. Old serial ports always existed; they never came and went out of existence. USB devices that emulate serial ports come and go depending on whether they are powered, reset, etc. Is there a way under Windows to make a USB serial port exist permanently regardless of the presence of the device (not just have it come back with the same name it had before)?

    Read the article

  • Change the default port of IIS and let another process listen on port 80 (Windows Server 2008)

    - by aleroot
    I have an installation of Windows Server 2008 running IIS 6 with a website listening on port 8080. Even though I have moved the website to listen on 8080, port 80 is still kept in use by IIS (in truth, by the kernel process: System, PID 4). I want to let another process listen on port 80 without uninstalling or disabling IIS; I want to keep IIS listening on port 8080 and another service on port 80. Is there a way to do it? I saw another similar thread here on Server Fault, but the solution (using httpcfg.exe delete iplisten -i 0.0.0.0:80) can only work on 2003, because on 2008 the httpcfg.exe utility doesn't exist and it seems that it cannot be installed... Does anyone have a solution to stop the kernel listening on port 80 on Windows Server 2008 with IIS running?

    Read the article

  • Latency in TCP/IP-over-Ethernet networks

    - by aix
    What resources (books, Web pages, etc.) would you recommend that: explain the causes of latency in TCP/IP-over-Ethernet networks; mention tools for looking out for things that cause latency (e.g. certain entries in netstat -s); and suggest ways to tweak the Linux TCP stack to reduce TCP latency (Nagle, socket buffers, etc.)? The closest I am aware of is this document, but it's rather brief. Alternatively, you're welcome to answer the above questions directly. Edit: To be clear, the question isn't just about "abnormal" latency, but about latency in general. Additionally, it is specifically about TCP/IP-over-Ethernet and not about other protocols (even if they have better latency characteristics).
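
    A sketch of the stock Linux tools the question alludes to (the address is a placeholder, and the sysctls are shown for inspection rather than as recommended values):

        # kernel-wide TCP counters that often point at latency sources
        netstat -s | grep -iE 'retransmit|delayed|lost'

        # per-connection RTT, congestion window and retransmission details
        ss -ti dst 192.168.1.50

        # socket buffer autotuning limits (min/default/max, bytes)
        sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem

    Note that Nagle (TCP_NODELAY) is a per-socket option set by the application, not a sysctl, which is worth keeping in mind when tuning.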

    Read the article

  • How to enable Jetty to support cometd/reverse Ajax while letting it listen on port 80?

    - by janetsmith
    Hi, I would like to use the cometd / reverse Ajax capability of Jetty 7. I tried to configure it so it listens on port 80 instead of 8080. However, according to http://jetty.mortbay.org/jetty5/faq/faq%5Fs%5F200-General%5Ft%5Fapache.html , Apache can be configured as an HTTP/1.1 proxy to pass selected requests to Jetty using the HTTP/1.1 protocol. This is simple to configure and use, but current versions of the Apache mod_proxy do not support persistent connections. As far as I know, reverse Ajax in Jetty depends on continuations (I guess that means persistent connections). So how can I let Jetty support reverse Ajax while coexisting with an Apache server? Thanks.
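
    One way to sidestep the mod_proxy limitation entirely is to leave Jetty on 8080 and have the kernel redirect port 80 to it, so the long-lived comet connections never pass through Apache. A sketch, assuming a Linux host and that Apache does not also need port 80 on the same address:

        # redirect external port 80 to Jetty's 8080 (Jetty keeps running unprivileged on 8080)
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080

        # optionally do the same for connections originating on the host itself
        iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-ports 8080

    If Apache must stay in front, mod_proxy_http in Apache 2.2+ can keep backend connections open, but long-polling comet traffic is generally happier talking to Jetty directly.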

    Read the article

  • Implementing a form of port knocking + Phone Factor = 2 Factor auth for RDP?

    - by jshin47
    I have been looking into how to secure a publicly-available RDP endpoint and want to use our two-factor authentication RADIUS server, PhoneFactor. I would like to implement the following process:

    1. User opens the web app in a browser.
    2. In the web app, the user enters username + password, initiating RADIUS authentication.
    3. PhoneFactor calls the user to complete authentication.
    4. Once the user is authenticated, port 3389 is opened for the user's IP on the pfSense firewall.
    5. After some amount of time, the firewall rule is removed for that IP.

    I would like to know the following: Is this a typical setup? If it is a bad idea, please explain why. If it is possible, are there any packages that assist with this? Specifically, with the step where the appropriate firewall rule would need to be added... Edit: I am aware of TS Web Gateway, but I want the users to be able to use the traditional RDP client...
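
    This is essentially a web-driven variant of single-packet authorization (tools such as fwknop implement the auth-then-open pattern, though not with RADIUS/PhoneFactor). On pfSense the underlying firewall is pf, and the piece the web app needs is an address table referenced by the RDP rule; a sketch, with the table name and addresses made up for illustration:

        # pf rule (normally created through the pfSense GUI, where an alias becomes a pf table):
        #   pass in on $wan proto tcp from <rdp_allowed> to $rdp_host port 3389

        # after a successful RADIUS/PhoneFactor authentication, add the client's IP
        pfctl -t rdp_allowed -T add 203.0.113.25

        # later, when the grace period expires, remove it again
        pfctl -t rdp_allowed -T delete 203.0.113.25

    In practice you would drive these through pfSense's configuration layer rather than hand-editing pf, but the table add/delete is the underlying mechanism.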

    Read the article

  • How to set up a SonicWall TZ 210 to port forward packets received from an external IP to another external IP

    - by lplp
    I have a SonicWall TZ 210 on a fixed IP, say ip1. Then I have, let's say, a legacy server with external IP ip2, which sends data to ip1 (and I have another server on ip1, behind the SonicWall, which receives and processes that data). I would like to set up a new server on a different external IP, ip3, that will receive and process data from the legacy server. How can I set up the SonicWall so that the packets received from the legacy server (from an external IP) are port forwarded to the external IP address ip3?

    Read the article

  • How to port forward https traffic via ssh and/or remote desktop through several networks and PCs?

    - by donttellya
    I have the following environment:

    - In company X, I develop an application on PC A in network A with IP address 192.168.100.50, which has to make an HTTPS request to an HTTP server located in the intranet of company Y.
    - In company X there is another PC, B, in network B with IP address 192.168.200.100.
    - PC B (of company X) can access the intranet of company Y via an SSH tunnel (PuTTY).
    - PC A (of company X) can ping PC B (of company X). (Note: PC A can also open a Remote Desktop connection to PC B.)
    - PC B can ping the HTTP server.
    - PC A cannot ping the HTTP server.

    How can the HTTPS request from PC A of company X get to the HTTP server of company Y? On which PC must PuTTY be configured? And which settings (host, port forwarding, etc.) have to be made in PuTTY? So finally the HTTPS request should go from PC A -> PC B -> HTTP server in company Y.
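
    A sketch of the usual approach (hostnames are placeholders): configure the forward on PC B, since it is the machine that already has SSH access into company Y, and let it accept connections from PC A.

        # on PC B (192.168.200.100): local forward to the intranet server, listening for other hosts (-g)
        ssh -g -L 8443:intranet-server.company-y.example:443 user@ssh-gateway.company-y.example

        # on PC A: send the HTTPS request to PC B's forwarded port
        curl https://192.168.200.100:8443/

    In PuTTY this corresponds to a tunnel with source port 8443 and destination intranet-server:443, with "Local ports accept connections from other hosts" ticked so PC A can reach it. Note that the server certificate will not match the name you connect to, so the client may need to be told which hostname to verify.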

    Read the article

  • Windows Server 2008: Limit UDP/TCP packets per IP or ban

    - by WBAR
    How can I limit the UDP/TCP packets per IP sent to my host (or better, per port) per second or minute? It would be nice to ban such an IP for 12/24 hours, or even forever. I have Windows Server 2008 and I'm very poor at Windows administration, but quite good with Linux. Edit: My basic problem is that they are sending a lot of rubbish UDP and TCP packets: TCP packets without SYN, fragmented UDP packets, and so on, so my servers stop responding. So I need to cut off users (IPs) sending more than X packets per second. I need a solution which provides me, somehow configurably: X packets of a certain type (UDP, TCP, or both - let's say a parameter named Z) are allowed to be received per IP on port Y; otherwise the packet should be DROPPED. My virtual hosts are hosted by VirtualBox and I'm able to forward all incoming packets of a certain type and certain port to a specific virtual host, but I need to DROP them before VirtualBox receives them.
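
    Windows' built-in firewall has no per-source-IP rate limiting, so this usually ends up needing either a third-party filter driver on the Windows host or a Linux box/router placed in front of it. Since you are comfortable with Linux, here is a sketch of how the requirement maps onto iptables on such a front-end box (the port, threshold and rule names are placeholders):

        # drop UDP to port 7777 from any source IP sending more than 50 packets/second
        iptables -A FORWARD -p udp --dport 7777 -m hashlimit \
            --hashlimit-above 50/second --hashlimit-mode srcip \
            --hashlimit-name udp7777 -j DROP

        # drop TCP packets that claim to start a new connection but lack SYN, and anything conntrack considers invalid
        iptables -A FORWARD -p tcp ! --syn -m conntrack --ctstate NEW -j DROP
        iptables -A FORWARD -m conntrack --ctstate INVALID -j DROP

    The 12/24-hour ban can be layered on with the "recent" match, which records offending source IPs and drops them for a configurable time.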

    Read the article

  • How do I make a virtualised WAN?

    - by EnchantedEggs
    I want to create a virtualised WAN. That is, I want to have a couple of VMs (VirtualBox) on one physical host machine that exist on separate LANs but can talk to each other. Do I make the VMs, set them up with different IP addresses (e.g. 1.2.3.4 and 5.6.7.8) and then configure port forwarding between them somehow? I've seen articles that set up port forwarding on port 2222, but I don't really understand why this works. How is setting up the VM to listen on port 2222 and then port forwarding from there to, say, port 80 any different from just telling the VM to listen on port 80 in the first place? FYI, the VMs run Ubuntu Desktop 14.x.
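
    Two points may clear up the port 2222 confusion, plus a sketch (the VM and network names are made up): with VirtualBox NAT, the forward is configured on the host side of the NAT, so "host port 2222 -> guest port 22" just means the host relays its own port 2222 to the guest's normal SSH port; the guest never listens on 2222. For the "separate LANs that can still talk" part, internal networks plus a VM acting as a router is the usual layout.

        # put each VM on its own internal network, with a NAT adapter left for host access
        VBoxManage modifyvm "vm-a" --nic1 intnet --intnet1 lan-a --nic2 nat
        VBoxManage modifyvm "vm-b" --nic1 intnet --intnet1 lan-b --nic2 nat

        # NAT port forwarding on vm-a's second adapter: host port 2222 -> guest port 22
        VBoxManage modifyvm "vm-a" --natpf2 "ssh,tcp,,2222,,22"

    A third VM (or the host) with one adapter in lan-a and one in lan-b, with IP forwarding enabled, then routes traffic between the two "sites" much like a WAN link would.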

    Read the article
