Search Results

Search found 1464 results on 59 pages for 'blocking'.

Page 4 of 59

  • Azure Website Blocking Cloudflare IPs

    - by neuhoffm
    I've been using CloudFlare with my Azure web sites for a couple of months. Yesterday my website started showing HTTP 520 errors. After contacting CloudFlare support, it seems that Azure may be blocking the CloudFlare IPs (https://www.cloudflare.com/ips). I am able to connect to my site directly using the azurewebsites.net domain name, but anything mapped via CloudFlare results in the 520 error. There's nothing in the Azure error logs, but the CloudFlare error logs seem to indicate that Azure is blocking the CloudFlare IPs. Does anyone know the process for getting IPs whitelisted for Azure sites?
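
    A quick way to narrow down where the 520 comes from is to compare the direct and proxied paths with curl; a sketch (the hostnames and AZURE_IP are placeholders):

        curl -sI http://mysite.azurewebsites.net/ | head -1    # direct to Azure: expect 200
        curl -sI http://www.mysite.com/ | head -1              # via CloudFlare: 520 expected here
        # send the CloudFlare-mapped hostname straight to the Azure IP, bypassing CloudFlare:
        curl -sI --resolve www.mysite.com:80:AZURE_IP http://www.mysite.com/ | head -1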

    Read the article

  • MySQL blocking new connections, and mysqladmin flush-hosts

    - by aidan
    I'm running MySQL on a remote server, and it suddenly started rejecting all connections: $ mysql -h 192.168.1.10 -u root -p ERROR 1129 (00000): Host 'web' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts' So, I try this flush-hosts command... $ mysqladmin flush-hosts -h 192.168.1.10 -u root -p mysqladmin: connect to server at '192.168.1.10' failed error: 'Host 'web' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'' I.e. it's blocking the very un-blocking tool it recommends. Am I doing it wrong, or will I have to resort to ssh/cpanel/physical access?
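
    The usual way out of this chicken-and-egg situation is to run the flush from a host that is not blocked - typically the database server itself. A minimal sketch, assuming SSH access to the server (hostname is a placeholder):

        ssh dbserver                         # any host that isn't blocked
        mysqladmin -u root -p flush-hosts    # clears the blocked-host cache locally
        # optionally raise the threshold so the block doesn't recur immediately:
        mysql -u root -p -e "SET GLOBAL max_connect_errors = 10000;"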

    Read the article

  • NetDiag + TCP Blocking?

    - by CrazyNick
    We are facing an issue with the SharePoint 2007 timer jobs every day at a specific time, so we decided to track TCP blocking information during those hours using the NetDiag tool. We are not able to find the required information when we use "netdiag /test:ipsec". What command can be used to pull the TCP blocking information, and how do we configure it? When I run "netdiag /test:ipsec /debug" it returns "IP Security test . . . . . . . . . : Skipped" - what does that mean?
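
    The "Skipped" result likely just means the IPSec test did not run (for example, when no IPSec policy is assigned), not that it failed. For a raw view of TCP state during the problem window, netstat may be the more direct tool; a sketch (Windows command line):

        rem snapshot every TCP connection with its state and owning process ID
        netstat -ano > tcp_snapshot.txt
        rem repeat the snapshot periodically through the problem window and diff the files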

    Read the article

  • SQL Server 2005 Blocking Problem (ASYNC_NETWORK_IO)

    - by ivankolo
    I am responsible for a third-party application (no access to source) running on IIS and SQL Server 2005 (500 concurrent users, 1TB data, 8 IIS servers). We have recently started to see significant blocking on the database, after months of running this application in production with no problems. It occurs at random intervals during the day, approximately every 30 minutes, and affects between 20 and 100 sessions each time. All of the sessions eventually hit the application timeout and abort. The problem disappears and then gradually re-emerges. The SPID responsible for the blocking always has the following features:

    - WAIT TYPE = ASYNC_NETWORK_IO.
    - The SQL being run is "(@claimid varchar(15))SELECT claimid, enrollid, status, orgclaimid, resubclaimid, primaryclaimid FROM claim WHERE primaryclaimid = @claimid AND primaryclaimid < claimid)". This is relatively innocuous SQL that should return only one or two records, not a large dataset. No other SQL statements have been implicated in the blocking, only this one. It is parameterized SQL for which an execution plan is cached in sys.dm_exec_cached_plans.
    - The SPID holds an object-level S lock on the claim table, so all UPDATEs/INSERTs to the claim table are also blocked.
    - HOST ID varies: different web servers are responsible for the blocking sessions (sometimes we trace back to web server 1, sometimes web server 2).

    When we trace back to the web server implicated in the blocking, there is always some sort of application-related error in its Event Log, linked to the Host ID and Host Process ID from the SQL session. The error messages vary, usually some sort of SystemOutOfMemory. (These errors seem similar to ones we have seen in the past without such dramatic consequences; we think this was happening before but didn't lead to blocking. Why now?) There are no known problems with the network adapters on either the web servers or the SQL server, and in any event the record set returned by the offending query would be small.

    Things ruled out: indexes are regularly defragmented; statistics are regularly updated; we increased the sample size of the statistics on claim.primaryclaimid; we forced recompilation of the cached execution plan; we created a compound index with (primaryclaimid, claimid); there are no networking problems, no known issues on the web servers, and no changes to the application software on the web servers.

    We hypothesize that the chain of events goes something like this: the web server process submits the SQL above; SQL Server executes it, acquiring a lock on the claim table; the web server process gets an error and dies; the SQL Server session hangs waiting for the web server process to read the result set; sessions that need X locks on parts of the claim table (anyone processing claims) are blocked by the table lock and remain blocked until they all hit the application timeout.

    Any suggestions for troubleshooting while waiting for the vendor's assistance would be most welcome. Is there a way to force SQL Server to lock at the row/page level for this particular SQL statement only? Is there a way to set a threshold on ASYNC_NETWORK_IO waits only?
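
    To watch the blocking chain as it happens, a minimal sqlcmd sketch (the server name is a placeholder; sys.dm_exec_requests and sys.dm_exec_sessions both exist in SQL Server 2005):

        sqlcmd -S MYSERVER -E -Q "SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, s.host_name FROM sys.dm_exec_requests r JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id WHERE r.blocking_session_id <> 0 OR r.wait_type = 'ASYNC_NETWORK_IO'"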

    Read the article

  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious perl scripts and OpenBSD's pf. pf is great in that you can hand it nice tables of IP addresses and it will efficiently handle blocking based on them. However, for various reasons (before my time) the decision was made to switch to CentOS. iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5000+), and I'm a bit cautious about adding that many individual rules to iptables. ipt_recent would be awesome for doing this, plus it provides a lot of flexibility for just severely slowing down access, but there is a bug in the CentOS kernel that is stopping me from using it (reported, but awaiting a fix). Using ipset would entail compiling a more up-to-date version of iptables than ships with CentOS, which, whilst I'm perfectly capable of doing it, I'd rather avoid from a patching, security and consistency perspective. Other than those two, nfblock looks like a reasonable alternative. Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded?
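
    For what it's worth, this is roughly the shape an ipset-based setup takes if you do end up building a newer iptables/ipset (the syntax below is the newer ipset tool; older releases spell the first command "ipset -N scrapers iphash"):

        # one hash lookup per packet instead of traversing 5000+ individual rules
        ipset create scrapers hash:ip maxelem 65536
        while read ip; do ipset add scrapers "$ip"; done < blocked_ips.txt
        iptables -I INPUT -m set --match-set scrapers src -j DROP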

    Read the article

  • Blocking HTTPS and P2P Traffic

    - by Genboy
    I have a Debian server running at the gateway level on a LAN. It runs squid for creating block lists of websites - e.g. blocking social networking on the LAN - and also uses iptables. I am able to do a lot of things with squid and iptables, but a few things seem difficult to achieve:

    1) If I block facebook through its HTTP URL, people can still access https://www.facebook.com because squid doesn't handle HTTPS traffic by default. However, if users set the gateway IP address as the proxy in their web browser, then HTTPS is blocked as well. So one option is to use iptables to drop all outgoing 443 traffic, forcing people to set the proxy in their browser in order to browse any HTTPS site. Is there a better solution? (See the sketch after this list.)

    2) As the number of blocked URLs in squid grows, I am planning to integrate squidguard. However, the good squidguard lists are not free for commercial use. Does anyone know of a good squidguard list that is free?

    3) Blocking Yahoo Messenger, GTalk, etc. These instant-messaging programs work on so many ports that you need to drop lots of outgoing ports in iptables, and new ports keep getting added, so you have to keep adding them. Even if your list of ports is current, people can still use the web version of GTalk etc.

    4) Blocking P2P. I haven't been able to figure out how to do this yet.
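
    For point 1, a minimal iptables sketch of the force-the-proxy approach (the LAN subnet, interface and squid port are placeholders; squid must be configured for interception for the port-80 redirect to work):

        # drop forwarded HTTPS from the LAN so browsers must be pointed at squid
        iptables -A FORWARD -s 192.168.0.0/24 -p tcp --dport 443 -j DROP
        # transparently redirect plain HTTP to squid on the gateway
        iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128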

    Read the article

  • hosting company blocking google bots and crawlers [closed]

    - by Jayapal Chandran
    Hi, I have had a site for the past three years, and it has been very active for the past two. The site was working well until the hosting company started blocking Google bots. Many of my pages used to appear on the first page of Google search results; after they started blocking, my links no longer appeared on the first page - they showed up five pages in, or not at all. Can hosting companies really be so careless as to block the bots without mentioning it to their users? They protect themselves while putting their customers' websites at stake. I display Google ads, and this month I earned only half as much in the first ten days. I have made requests to other hosting companies, like Bluehost and Monster Host, to transfer my domain on the condition that they will not block Google bots, which indirectly stops the business. Any kind of help would be appreciated: how can I claim what I lost from the hosting company, and which hosting companies are considerate of their users (by announcing events like changing an IP or blocking Google bots)? I worked really hard to bring up my site, and these people crashed it in a few days. :-(
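
    One way to check what the host is actually doing, before assigning blame, is to request a page with Googlebot's user-agent string and compare it with a normal request (the domain is a placeholder; a matching response is suggestive, not proof of how Google's own crawler IPs are treated):

        curl -sI http://www.example.com/ | head -1
        curl -sI -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://www.example.com/ | head -1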

    Read the article

  • 10035 error on a blocking socket

    - by Andrew
    Does anyone have any idea what could cause a 10035 error (EWOULDBLOCK) when reading on a blocking socket with a timeout? This is under Windows XP using the .NET Framework version 3.5 socket library. I've never managed to reproduce it myself, but one of my colleagues gets it all the time. He's sending reasonably large amounts of data to a much slower device and then waiting for a response, which often gives a 10035 error. I'm wondering if there could be issues with TCP buffers filling up, but in that case I would expect the read to wait or time out. The socket is definitely blocking, not non-blocking.

    Read the article

  • A non-blocking server with java.io

    - by Jon
    Everybody knows that Java IO is blocking and Java NIO is non-blocking: with IO you have to use the thread-per-client pattern, while with NIO you can use one thread for all clients. Now my question: is it possible to make a non-blocking design using only the Java IO API (not NIO)? I was thinking about a pattern like this (obviously very simplified):

    List<Socket> li;
    for (Socket s : li) {
        InputStream in = s.getInputStream();
        int n = in.available();     // bytes readable without blocking
        if (n > 0) {
            byte[] data = new byte[n];
            in.read(data);
            processData(data);      // decoding packets, encoding outgoing packets
        }
    }

    Also note that the client will always be ready to read data. What are your opinions on this? Would it be suitable for a server that should hold at least a few hundred clients without major performance issues?

    Read the article

  • RabbitMQ and persistence (blocking writes?)

    - by daharon
    I want to create a RabbitMQ server on a virtual machine (VMware) to be used in production. It will contain persistent queues. I'm wondering if it is a bad idea to store the server on a NAS that's accessed over NFS. Basically my questions are: Will RabbitMQ's writes be blocking? Will the entire queue's operation halt on a write? How much performance degradation should I expect when persisting over NFS?

    Read the article

  • Want to use apache, ISP blocking port 80

    - by Will
    I am attempting to set up a small web server on my home network, but my ISP is blocking incoming port 80 (and no, I'm not paying $50/month extra for them to unblock it). I am looking for ways around this. Obviously I can change the port number, but I don't find that ideal. I'd really appreciate any ideas.
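
    One common workaround, if you have access to a cheap external VPS with port 80 open, is to let the VPS answer on 80 and forward to the home server on an unblocked port; a sketch (HOME_IP and the port are placeholders):

        # on the VPS: forward incoming web traffic to the home server's port 8080
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination HOME_IP:8080
        iptables -t nat -A POSTROUTING -j MASQUERADE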

    Read the article

  • Fast (non-blocking) way to transfer many files to another server

    - by Nyxynyx
    I am currently attempting to transfer over 1 million files from one server to another. Using wget seems to be extremely slow, probably because it starts a new transfer after the previous one has completed. Question: is there a faster, non-blocking (asynchronous) way to do the transfer? I do not have enough space on the first server to compress the files into a tar.gz and transfer that over. Thanks!
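
    Much of wget's slowness here is one-request-per-file overhead. rsync reuses a single connection for the whole tree, and several rsyncs can run in parallel over top-level entries; a sketch (paths and host are placeholders):

        # single stream, no per-file connection setup
        rsync -a --partial /data/ user@dest:/data/
        # or 8 parallel streams, one per top-level entry under /data
        ls /data | xargs -n1 -P8 -I{} rsync -a /data/{} user@dest:/data/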

    Read the article

  • Blocking specific IP requests

    - by user42908
    Hi, I own a VPS running Ubuntu with Apache. Recently I have been getting continuous requests from the IP static-195.22.94.120.addr.tdcsong.se.54303 : 12337. I have already installed 'arno-iptables-firewall' and have iptables blocking 195.22.94.120, yet I still see requests from that IP in tcpdump. What else can I do to protect my VPS? Thank you.
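
    Worth knowing when reading those dumps: tcpdump captures packets at the interface, before netfilter processes them, so traffic that iptables drops still shows up in tcpdump. The rule's own packet counters are the better test of whether the block works:

        # make sure the DROP sits at the very top of INPUT, ahead of any ACCEPT rules
        iptables -I INPUT 1 -s 195.22.94.120 -j DROP
        # the pkts counter on that rule should grow while the requests continue
        iptables -L INPUT -v -n --line-numbers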

    Read the article

  • Blocking IP addresses Load Balanced Cluster

    - by Dom
    Hi, we're using HAProxy as a front-end load balancer/proxy and are looking for solutions to block random IP addresses from jamming the cluster. Is anyone familiar with a conf for HAProxy that can block requests if they exceed a certain threshold from a single IP within a defined period of time? Or can anyone suggest a software solution which could be placed in front of HAProxy to handle this kind of blocking? Thanks, Dom
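
    As an in-front-of-HAProxy option, the iptables "recent" module can throttle per-source connection rates on the box itself; a sketch, with the thresholds as placeholders to tune:

        # drop any source opening more than 20 new connections in 10 seconds
        iptables -A INPUT -p tcp --dport 80 --syn -m recent --name web --update --seconds 10 --hitcount 20 -j DROP
        iptables -A INPUT -p tcp --dport 80 --syn -m recent --name web --set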

    Read the article

  • Non blocking IO call from Django controller from a Windows service

    - by Anders
    Hi all, I have a CherryPy server with a Django application running as a Windows service. Inside a controller I need to make a call to wmic; the problem is that so far I have only been able to implement it as a blocking operation. Does anyone have a recommendation for a non-blocking approach, so that at least more than one person at a time can access this controller and extract information from wmic? Thanks in advance, Anders

    Read the article

  • Generate a proper 404 page for blocked sites via /etc/hosts instead of redirect to localhost

    - by Mixhael
    I have blocked some websites by editing /etc/hosts, adding several newline-separated entries in the following manner: 0.0.0.0 www.domain.com. And it works. The only thing is: when a blocked website is visited, the browser is redirected to my http://localhost, resulting in a directory listing or whatever website presentation is running in my localhost root environment. It's not a very big problem, but I would prefer a standard error stating that the website cannot be visited (for instance, a 404 page). Is this possible?
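
    Assuming a local web server is what answers those redirected requests, the cleanest fix is a catch-all default vhost that returns 404 for everything. A sketch for nginx (config paths vary by distro; Apache can do the same with a default VirtualHost):

        cat > /etc/nginx/conf.d/blackhole.conf <<'EOF'
        server {
            listen 80 default_server;
            return 404;    # every blocked hostname lands here and gets a plain 404
        }
        EOF
        nginx -s reload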

    Read the article

  • Block a website on HTTPS [closed]

    - by momo1729
    I would like to block some websites on their HTTPS version while allowing them over HTTP. The main websites involved are YouTube and Google Images/Videos. This is because on the HTTP version I can enforce the SafeSearch filter on those platforms, whereas on HTTPS I cannot. For me this is a serious issue that spoils many of the great things about the SafeSearch features Google offers. Is there any software/config that can do this? P.S.: I'm not sure this is the right place to post this question; maybe you could redirect me to another SE site?
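
    Google does document a DNS-based way to keep SafeSearch enforced even over HTTPS: resolve its web hosts to forcesafesearch.google.com (and YouTube to restrict.youtube.com) on the local resolver. A hedged sketch with dnsmasq - look up the current target IPs yourself rather than trusting the example address below:

        host forcesafesearch.google.com    # note the returned address; 216.239.38.120 is only an example
        host restrict.youtube.com          # likewise
        cat >> /etc/dnsmasq.conf <<'EOF'
        address=/www.google.com/216.239.38.120
        address=/www.youtube.com/216.239.38.120
        EOF
        service dnsmasq restart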

    Read the article

  • What should we tell our unsupported IE6 users?

    - by Dan Fabulich
    In the upcoming version of our web app, we've broken IE6, and we don't intend to fix it. We've had a clear warning posted for IE6 users for some months; we've decided it's time not to support it. My question is: how should we communicate this to our users? Some people here feel that we should block IE6 users who would try to access the web app, because it's not going to work for them. Others feel that we should just leave up a warning, saying "This doesn't work in IE6," but not block them; instead, if they click to dismiss the warning, just let them in to the broken site to see for themselves that it doesn't work. Who is right? Is there a better way?

    Read the article
