Search Results

Search found 4990 results on 200 pages for 'traffic measurement'.


  • Can a virtual MikroTik box bridge a Hyper-V internal network with a Hyper-V external network?

    - by mcfrosty
    I am trying to set up a MikroTik router as a transparent firewall on my network. I got this working on a hardware MikroTik box, but my boss wants the MikroTik virtualized. I have been trying a setup where my virtual Windows box talks to the MikroTik via a private or internal network on the Hyper-V host. I can get the two machines to talk, but as soon as I set up a bridge on the MikroTik, all traffic between the two ceases. Is it possible to create a bridge for this purpose (having the MikroTik sit silently in front of my firewalled server)? I could really use some help.
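
    For what it's worth, bridging inside a Hyper-V guest commonly fails because Hyper-V drops frames with a forged source MAC unless "MAC address spoofing" is enabled on the guest's virtual NICs, so that is worth checking first. A minimal RouterOS sketch of the transparent bridge itself (interface names are assumptions):

        # bridge the two virtual NICs together (names assumed)
        /interface bridge add name=bridge1
        /interface bridge port add bridge=bridge1 interface=ether1
        /interface bridge port add bridge=bridge1 interface=ether2
        # let /ip firewall filter rules see bridged traffic
        /interface bridge settings set use-ip-firewall=yes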

    Read the article

  • Out Of Memory Error - Magento

    - by robobobobo
    Ok, normally I understand when my server is giving me out of memory errors, but this one has me stumped! I'm running a Magento based site, with one or two plugins in it, and the rest is pretty basic. The site runs and loads fine with no issues. However, in the backend under Configuration - Payment Methods it gives me the following out of memory error: Fatal error: Out of memory (allocated 39059456) (tried to allocate 85 bytes) in ########/Varien/Simplexml/Element.php on line 84. Now this is where I'm confused: it has allocated far more than it then tried to allocate, am I correct there? So how is it running out of memory? My server has 6 GB RAM, an SSD and 2 CPUs running WHM with a few other low traffic sites on it. I set my PHP memory limit to 100 MB, 1000 MB and finally unlimited, but all to no avail! I'm completely lost here and would really appreciate some expertise on this. Cheers
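
    As an aside: PHP distinguishes "Allowed memory size exhausted" (the memory_limit setting) from a bare "Out of memory" like this one, which means the operating system refused the allocation, so raising memory_limit will not help. A sketch of where to look instead (the RLimitMEM value is an assumption, and that directive only applies if PHP runs via CGI/suPHP, which is common on WHM/cPanel builds):

        # check per-process limits for the web-server user
        ulimit -v        # virtual memory cap, in KB ("unlimited" is healthy)
        # if PHP runs as CGI/suPHP, Apache's RLimitMEM can cap allocations;
        # as a test, raise it in httpd.conf:
        #   RLimitMEM 536870912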

    Read the article

  • WordPress Installation on Two Servers - Load Balancing

    - by rihatum
    Hi All, I have to install WordPress (one blog, one domain, e.g. mycompany.com/blog) on two servers behind a load balancer, sharing one database that sits on a third server. We are planning this way due to high traffic. I have done standalone WordPress installations on a single server, on Windows 2003 and 2008 with IIS 6, 7, etc. I am just researching how I would implement this. What would be the steps to achieve it? While searching I saw some posts about keeping the wp-content/uploads directory synced between the servers at regular intervals. Your help is much appreciated. Thanks for reading
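
    On the uploads point: the usual low-tech answer, assuming Linux web servers and treating one node as the source of truth (hostname and paths are assumptions), is a periodic rsync pull on the other node; a shared network mount for wp-content/uploads avoids the sync entirely:

        # cron entry on the second web server: pull new uploads every 5 minutes
        */5 * * * * rsync -az web1.example.com:/var/www/blog/wp-content/uploads/ /var/www/blog/wp-content/uploads/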

    Read the article

  • Test service on the Internet for testing an incoming INVITE

    - by leiflundgren
    I am trying to set up Asterisk at home. I think I'm having trouble configuring my firewall so that inbound traffic is accepted, but I am not sure. It occurred to me that, perhaps, there is a service out on the Internet where I can, through a web browser, initiate an incoming call, an INVITE, and then see the SIP trace that the remote party experiences. Does anyone know of such a service? Note: I have a SIP-PSTN provider, so I can generate inbound calls. But I cannot see the SIP logs from my provider...
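
    Failing a browser-based service, a small SIP tool run from any outside shell account gives much the same visibility; a sketch using sipsak (the target URI is an assumption — it sends an OPTIONS request rather than a full INVITE, but it exercises the same inbound SIP path through the firewall):

        # send a SIP OPTIONS ping to the Asterisk box and show the exchange
        sipsak -vv -s sip:100@203.0.113.5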

    Read the article

  • NAT Policy Inbound Source Problem on SonicWall TZ-210 with Multiple DSL Lines

    - by HK1
    We recently added three more DSL connections to our SonicWall TZ-210. My NAT Policies work fine as long as I leave them set with an inbound interface of X1, which hosts our original DSL connection. However, I'd like to change some of the NAT Policies to use inbound source/interface X2, X3, X4 or Any. In my initial tests, when I change one of the policies to use an inbound interface of X2, that port forward policy does not work at all. Traffic never makes it to the internal destination. What could be the problem?

    Read the article

  • Force a browser to load the 'https' edition of a website, not the 'http'?

    - by warren
    This is similar to this previous question, but I believe it's a bit different*. Sites like GMail support a preference that pushes all traffic through the SSL edition of the site rather than the plain-text protocol. For sites that don't offer such a preference (or ones that may, but where I have been unable to find it, like Facebook), is there a way, using only the browser (perhaps with a plugin or addon), to always try SSL first and fall back to plain text only if SSL fails? Is such a solution available on Windows, Mac OS X, and Linux? Just one? * The previous question was looking for external applications that would accomplish this goal.

    Read the article

  • How to limit the number of open TCP streams from the same IP to a local port?

    - by JMW
    Hi, I would like to limit the number of concurrent open TCP streams from the same IP to the server's (local) port, let's say to 4 concurrent connections. How can this be done with iptables? The closest thing that I've found was in "In Apache, is there a way to limit the number of new connections per second/hour/day?":

        iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set
        iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --update --seconds 86400 --hitcount 100 -j REJECT

    But this limitation just measures the number of new connections over time. That might be good for controlling HTTP traffic, but it is not a good solution for me, since my TCP streams usually have a lifetime between 5 minutes and 2 hours. Thanks a lot in advance for any reply :)
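
    A sketch of the closer fit for concurrent (rather than per-interval) limits, using iptables' connlimit match with the numbers from the question:

        # reject the 5th and later simultaneous connections from any one source IP
        iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 4 -j REJECT --reject-with tcp-reset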

    Read the article

  • Route multiple subdomains on one external IP to multiple internal IPs

    - by Abenil
    I have several subdomains (git.example.org, build.example.org, etc.), a router with one external IP, and several virtual machines with internal IPs on a host computer. Now I want to route git.example.org to internal IP 10.0.2.1 and build.example.org to internal IP 10.0.2.2. How can I do this? I set up the router so that all traffic on port 80 goes to my host computer with internal IP 10.0.2.3, and installed Squid on that computer. I added the following lines to the squid.conf file:

        cache_peer 10.0.2.1 parent 80 0 no-query originserver name=server_1
        cache_peer_domain server_1 git.example.org
        cache_peer 10.0.2.2 parent 80 0 no-query originserver name=server_2
        cache_peer_domain server_2 build.example.org

    But this is not working for me. :( Any help appreciated. Regards, Nils
    Update: Here is the solution for Apache http://serverfault.com/a/273693
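
    One line the quoted config does not show is Squid's accelerator-mode listener; without it, Squid behaves as a forward proxy and the cache_peer_domain routing never kicks in. A sketch of the likely missing pieces (Squid 2.7/3.x syntax):

        # listen on port 80 as a reverse proxy, routing on the Host: header
        http_port 80 accel vhost
        # and permit requests for the published sites
        acl our_sites dstdomain git.example.org build.example.org
        http_access allow our_sites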

    Read the article

  • Blocking IPs in Nginx behind a proxy

    - by FunkyChicken
    I'm running an Nginx 1.2.4 webserver here, and I'm behind a proxy of my hoster to prevent DDoS attacks. The downside of being behind this proxy is that I need to get the REAL IP information from an extra header. In PHP it works great by doing $_SERVER['HTTP_X_REAL_IP'] for example. Before I was behind this proxy, I had a very effective way of blocking certain IPs: include /etc/nginx/block.conf, with allow/deny rules for IPs in that file. But now, due to the proxy, Nginx sees all traffic coming from one IP. Is there a way I can get Nginx to read the IPs the way PHP does, from the X-Real-IP header?
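
    Nginx's ngx_http_realip_module does exactly this, rewriting the client address before allow/deny rules are evaluated (it must be compiled in, which stock packages usually do). A sketch, with the hoster's proxy address as an assumption:

        # trust X-Real-IP, but only when the request arrives from the hoster's proxy
        set_real_ip_from 203.0.113.10;
        real_ip_header   X-Real-IP;
        # block.conf's allow/deny directives now match the restored client IP
        include /etc/nginx/block.conf;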

    Read the article

  • Cheapest Highly Available Web Server [closed]

    - by xyz
    I would like to create a highly available setup (e.g. a small cluster) for a webserver, i.e. it will run Apache, PHP and MySQL. There will be between 2 and 8 small websites running with only very little traffic and workload. High availability is however very important. I don't want to be dependent on one datacenter, so there must be a minimum of 2 servers placed in different datacenters, and if one server goes down, the user must experience no downtime (or only a minimum of it) and no data loss. I have considered Amazon AWS using their Elastic Load Balancing, since it is possible to buy 2 EC2 instances in 2 availability zones and set up load balancing and RDS (Multi-AZ). However this seems rather expensive. Using the AWS price calculator http://calculator.s3.amazonaws.com/calc5.html it totals $185/month for the first year (including the free tier). Are my calculations incorrect, or is there a cheaper way to make this HA setup? Best regards

    Read the article

  • What are the best customizable monitoring tools for a cluster / distributed system?

    - by Adil
    I am working on a system having multiple servers. I am interested in monitoring some server-specific data like CPU/memory usage, disk/filesystem usage, network traffic, system load, etc., plus some data specific to my own processes. What open source tools are available that can serve my purpose? Ideally one that lets me customize the parameters to be monitored, and monitor my own data by creating a plugin / agent. Any suggestions? I have heard of Nagios, Zabbix and Pandora, but I am not sure if they provide such an interface.
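
    All three of those support custom checks. In Nagios' case the plugin contract is just an exit code (0 OK, 1 warning, 2 critical) plus one line of output, so a custom metric can be a few lines of shell; a sketch with assumed thresholds:

        #!/bin/sh
        # check_load5: Nagios-style plugin for the 5-minute load average
        LOAD=$(awk '{print $2}' /proc/loadavg)
        WARN=4; CRIT=8
        if [ "$(echo "$LOAD > $CRIT" | bc)" -eq 1 ]; then
            echo "CRITICAL - load5 is $LOAD"; exit 2
        elif [ "$(echo "$LOAD > $WARN" | bc)" -eq 1 ]; then
            echo "WARNING - load5 is $LOAD"; exit 1
        fi
        echo "OK - load5 is $LOAD"; exit 0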

    Read the article

  • Linux - Block SSH users from accessing other machines on the network

    - by Sam
    I have set up a virtual machine on my network for uni project development. I have 6 team members, and I don't want them to SSH in and start sniffing my network traffic. I have already set the firewall on my Windows 7 PCs to ignore any connection attempts from the virtual machine, but I would like to go a step further and not allow any network access from the VM to other machines on my network. Team members will be accessing the VM by SSH; the only external port forwarded is to vm:22. The VM is running in VirtualBox on a bridged network connection, running the latest Debian. If someone could tell me how to do this I would be much obliged.
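
    A sketch of doing this on the VM itself with iptables (the subnet and gateway addresses are assumptions; note that anyone with root on the VM could undo it, so filtering at the router is sturdier):

        # allow replies on connections opened from outside (the SSH sessions)
        iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # keep the default gateway reachable for Internet access
        iptables -A OUTPUT -d 192.168.1.1 -j ACCEPT
        # drop everything else destined for the local subnet
        iptables -A OUTPUT -d 192.168.1.0/24 -j DROP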

    Read the article

  • Web Site Monitoring/Tracking Freeware

    - by jsmith
    I need to be able to track Web Sites visited on a computer and send them to an email address on a daily basis. Keylogger software seems like too much, I want something lightweight that simply monitors websites visited and forwards them on. I was hoping for freeware, but if it's cheap/simple and easy to use I'm willing to pay. I know similar questions have been asked about website traffic monitoring, but it's not quite the same thing, and I can't seem to find an answer to this question anywhere. Thank you ahead of time.

    Read the article

  • Load balancing two web servers, each on a different ISP?

    - by Scott
    I have two ISPs that provide me hosting via Apache / PHP / MySQL. I am running Drupal on them. On occasion the MySQL server will go away (crash), so I was hoping to find a reasonable way to have a failover: if server A's SQL is down, all traffic is sent to server B. I know traditionally this is handled in DNS, where an alternate IP is given out if there is a problem, or similar. But I do not have control over the ISP, other than being able to run PHP, Perl and the usual Apache stuff. Also, I have static IPs on each ISP, and I can create DNS entries (A/CNAME/TXT). So I was hoping there might be a way for me to have a script that checks whether Drupal has a problem and, if so, somehow alters DNS. Or any other ideas? (Other than spending lots more money on a better ISP.)
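
    The health-check half of that is easy to script; the DNS half depends entirely on whether the DNS host offers an update API (plus a low TTL on the record), which varies by provider and is left out below. A sketch with an assumed URL:

        #!/bin/sh
        # poll Drupal on server A; a healthy front page should return HTTP 200
        STATUS=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 http://serverA.example.com/)
        if [ "$STATUS" != "200" ]; then
            echo "server A unhealthy (HTTP $STATUS), switch DNS to server B here"
            # provider-specific DNS update call goes here (not shown)
        fi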

    Read the article

  • Apt Stalls When Using HTTP Sources

    - by UltraNurd
    I was getting some (to me) inexplicable behavior from apt-get/aptitude on an admittedly crusty old webserver. While it was otherwise running fine, as soon as I tried a package upgrade, after downloading a few updates it would stall completely, then my SSH session hung (and I was unable to reconnect), thus requiring a hard restart. First, I switched to a different package source in /etc/apt/sources.list, but still got the same behavior. At this point I was assuming the NIC was dying in some weird way... but as soon as I changed the package source to use FTP instead of HTTP, everything worked fine, and I was able to upgrade. For now I'm not too concerned since I have an easy workaround, but it implies that there's something very weird with my network setup, since it seems to be protocol (or port?) specific. I didn't think any of my NAT setup would affect outbound traffic, but I could be crazy. Any ideas what I should look for?
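
    One HTTP-specific knob worth ruling out: apt pipelines HTTP requests by default, and pipelining is known to misbehave through some NAT devices and transparent proxies. Disabling it is a cheap test (the file name is an assumption):

        # /etc/apt/apt.conf.d/99-no-pipeline
        Acquire::http::Pipeline-Depth "0";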

    Read the article

  • scp to remote servers stalls, unable to isolate cause

    - by Rolf
    When I copy a large file (100+ MB) to a remote server using scp, it slows down from 2.7 MB/s to 100 KB/s and lower, and then stalls. The problem is that I can't seem to isolate the cause. I've tried 2 different remote servers, 2 local machines (1 OS X, 1 Windows/Cygwin), 2 different networks/ISPs and 2 different scp clients. All combinations show the problem except copying between the two remote servers (scp). Using Wireshark I could not detect any traffic volume that would congest the network (although there were about 7 packets/sec of NBNS requests from the OS X machine). What in the world could be going on? Given the combinations I've tried, there doesn't seem to be any overlap in the things that could be causing the trouble.
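
    Two cheap isolation steps that fit this situation: cap the transfer rate to see whether the stall is load-dependent, and run the client verbosely to capture the last protocol message before the hang (the numbers are arbitrary):

        # limit scp to roughly 4 Mbit/s; if the stall vanishes, suspect buffering
        scp -l 4096 bigfile user@remote:/tmp/
        # verbose output shows where in the session the transfer freezes
        scp -v bigfile user@remote:/tmp/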

    Read the article

  • Remote Software Solution that Acts as a Client

    - by Richard
    I am looking for something that I am not sure exists. I have a remote computer that will not accept incoming traffic due to the ISP blocking ports (basically a double-NAT situation that I am unable to get around). I am wondering: if I have a computer acting as a client, is there any solution out there that will allow remote access to it? I do have other servers on the net with static IPs that the computer could initiate a connection with. I am thinking of using Debian Linux; however, the computer is not built yet, so the OS is not overly important at this point.
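
    This exists, and the classic form is a reverse SSH tunnel: the NATed machine dials out to one of the static-IP servers and carries an inbound port back with it (hosts and ports below are assumptions; a tool like autossh can keep the tunnel up):

        # on the NATed machine: publish its local sshd as port 2222 on the server
        ssh -N -R 2222:localhost:22 user@static-server.example.com
        # then, from a shell on static-server, reach the hidden machine:
        ssh -p 2222 user@localhost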

    Read the article

  • ASA Slow IPSec Performance

    - by Brent
    I have an IPsec link between two sites over ASA 5520s running 8.4(3), and I am getting extremely poor performance when traffic passes over the VPN. CPU on the device is 13%, memory at 408 MB, and active VPN sessions 2, so the load on the device is particularly low. Screenshot of a Wireshark capture of a file transfer between the two hosts over the VPN: (screenshot not included). The large number of header checksum failures is alarming, but I am not sure what to check now. iperf shows around 4-5 Mbit/sec with differing TCP window sizes. Show run on the ASA: http://pastebin.com/uKM4Jh76 Show cry accelerator stats: http://pastebin.com/xQahnqK3
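
    Two hedged observations: "header checksum failure" flags in a capture taken on one of the endpoints are very often an artifact of checksum offloading on the capturing NIC rather than real corruption; and slow transfers over IPsec frequently come down to fragmentation from ESP overhead, for which clamping TCP MSS is the standard test (the value here is an assumption; the ASA default is 1380):

        ! on the ASA: clamp TCP MSS so ESP overhead does not force fragmentation
        sysopt connection tcpmss 1350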

    Read the article

  • Windows Proxy Server advice

    - by Scott
    I have a webserver that currently has about 10 IP addresses. I have various clients that require a proxy server to route their internal traffic through. The load is not that great, so I'd like to have this one server act as a proxy server for 10 different clients, each client having their own unique IP on the server. The hardware is already set up, but I'm wondering what software solutions you recommend? I've looked at WinGate, Squid-Proxy, etc., but am pretty green with this. Maybe there's even a way to have Windows do this natively? I'm running Windows Server 2008, 32-bit.
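
    Squid (which also runs on Windows) can express this mapping directly: one ACL per client, pinned to one of the server's addresses with tcp_outgoing_address. A sketch, with all addresses assumed:

        # squid.conf sketch: give each client its own outgoing IP
        acl client_a src 198.51.100.10
        acl client_b src 198.51.100.11
        tcp_outgoing_address 203.0.113.1 client_a
        tcp_outgoing_address 203.0.113.2 client_b
        http_access allow client_a
        http_access allow client_b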

    Read the article

  • How to do port forwarding on a D-Link GLB-802C?

    - by Manish
    I have some questions about port forwarding on my D-Link GLB-802C router. For example: my local machine's IP is 117.1.1.81, my router's IP is 117.1.1.1, and my public (web) IP is 117.16.1.1. My questions are: What should my Global Address 'To' be? What should my Global Address 'From' be? In Destination Port 'From' and 'To', what do I select in the drop-down list, and which port number, for forwarding HTTP traffic (for my website)? In Local Port, what do I select in the drop-down list, and which port number?

    Read the article

  • Redundant Router and Load Balancing vs. DDoS attack

    - by colgatta
    With a small server farm at a hoster with great support and conditions, I worry about the increasing number of DDoS attacks against this hoster (not against my web project, but against other clients at the same location). I have booked a redundant router and load balancer as a managed service with this hoster to share the load among all the dedicated servers. However, I was lost again today because another client's project was hit with a DDoS for hours :-( Each hour that my adserver and tracking are unreachable means hundreds of dollars lost. I even have to pay for timed-out advertising myself, but it cannot be resold to my clients while the servers are unavailable. The whole time, the servers, the load, and the traffic are OK and healthy, but there is no chance of keeping this stable/online if the hoster is vulnerable. Does anyone have ideas or suggestions on how to protect against this, even against DDoS?

    Read the article

  • I would like to have a publicly accessible Linux box hosted elsewhere. Who provides this service?

    - by Eric Wilson
    I would like to have a general purpose Linux server available and publicly accessible. I understand that there is no lack of web-hosting companies, but I might want more control over the machine than is typical. I would want the ability to install software, such as an SVN server, and I would like to be able to expose various port numbers, as I may have a variety of extremely low traffic sites that I would want to have available. Obviously, one option is to host such a machine in my home. Is that my only option? Or is what I describe out there, perhaps as a virtual machine on a larger server?

    Read the article

  • In Varnish, is it normal for the number of freed bytes to be 60% of those allocated?

    - by user331397
    I have an installation of Varnish 3.02 on an Amazon EC2 Medium Linux instance in front of two relatively low-traffic websites. After an uptime of 2 hours, there are 3400 objects in the cache. Using varnishstat, I checked the variables SMA.s0.c_bytes and SMA.s0.c_freed, which I assume correspond to the total number of bytes allocated since startup and the number freed, respectively. No objects should have had time to expire during these two hours, but still about 60% of the memory allocated since startup (330MB out of 560MB) has already been freed. Do you know if this is normal? If not, do you know what kind of configuration could be wrong?
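
    For reference, those two counters can be watched next to the eviction counters, which helps tell ordinary frees (expiry, LRU nuking, short-lived objects) apart from something pathological; a one-shot sketch for Varnish 3.x:

        # dump allocation, free, expiry and LRU counters once and exit
        varnishstat -1 | egrep 'SMA.s0.c_bytes|SMA.s0.c_freed|n_expired|n_lru_nuked'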

    Read the article

  • What type of Amazon instance should I use, and do I need auto scaling and load balancing?

    - by Navetz
    Hi, I am looking to release a website that will initially receive large amounts of uploads from users. The first will be 65 GB, and the rest will probably total close to 1 TB. They could happen simultaneously. My question is: what type of Amazon instance would be best for this? The website is just being released, so the traffic won't be very high. I have been using a micro instance for development, but it is time to launch and I need more power. Should I use auto scaling and a load balancer to increase the number of instances when I need them, or will a small or medium instance do the trick? If I do use auto scaling and load balancing, how do I handle things like sessions and database/file lookups? Does one instance become the primary and the rest become clones?

    Read the article

  • Startup Cassandra layout

    - by davidkomer
    We've got a relatively low-traffic site (~1K pageviews/day) hosted on a single server, and expect it to grow significantly over the next few years. I'm thinking of moving over to Rackspace CloudServer or EC2 and firing up 3 nodes (all on CentOS):
    2 x Web (Apache), behind a load balancer
    1 x MySQL (for the WordPress-powered part)
    The question is where to put Cassandra right now... Should it sit on each web node, or on the MySQL node? My thought right now is to put it on the web nodes. It's my understanding that Cassandra has the benefit of fault tolerance (i.e. if we take a node down, the site is still operational), so even with only 2 nodes we'd have that benefit, as opposed to just putting it on the MySQL node. Also, as we scale up and add another node, a Cassandra instance can come along with it, and the PHP code can always run its queries against localhost. Is this a good idea?

    Read the article
