Search Results

Search found 6172 results on 247 pages for 'limit choices to'.


  • Remote Desktop connection issues in Windows Server 2003

    - by rboorgapally
    Hi all, we are running Windows Server 2003 at our workplace. I have enabled Remote Desktop connections and added all the users who connect to the server (three to four people) to the Remote Desktop Users group. Many of these users also have Administrator access. The problem we are facing is that the connection is suddenly lost while we are working on something. Also, at times the system restarts by itself. Is this issue related to the limit on the number of users who can connect to the system? If so, why does the system accept new connections and/or terminate the existing ones? Does this have anything to do with the users having Administrator access, so that all have equal priority and that is why existing connections are dropped? Also, please explain the difference between a console Remote Desktop session and a non-console one.
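
    One hedged note on the last point: on Server 2003, the console session (session 0, the one shown on the physical monitor) is reached with the older RDP client's /console switch (renamed /admin in later clients), and it is separate from the two remote administration sessions that ordinary Remote Desktop connections share. The server name below is a placeholder:

      mstsc /v:yourserver /console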

    Read the article

  • Is there an Apache module to slow down site scans?

    - by florin
    I am administering a few web servers. Each night, random hosts from the Internet probe them for various vulnerabilities in php, phpadmin, horde, mysqladmin, etc. Is there a way (an Apache module?) to slow down the rate of attack? For SSH, I have a rate-limiting rule on the firewall which does not allow more than three connections per minute, but I don't want to rate-limit all HTTP access, only the requests that return 404s. Is there such an Apache module?
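
    A minimal sketch of one approach, assuming fail2ban is an acceptable substitute for an Apache module (the filter name, regex, log path and thresholds are all illustrative): it tails the access log and firewalls any host that racks up too many 404s in a short window, without touching legitimate traffic.

      # /etc/fail2ban/filter.d/apache-404.conf
      [Definition]
      failregex = ^<HOST> .* "(GET|POST|HEAD).*" 404
      ignoreregex =

      # /etc/fail2ban/jail.local
      [apache-404]
      enabled  = true
      port     = http,https
      filter   = apache-404
      logpath  = /var/log/apache2/access.log
      maxretry = 20
      findtime = 60
      bantime  = 600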

    Read the article

  • Nginx Static Content Server Maxing Out?

    - by Harry
    I use nginx to serve the static content for a decently busy website of mine. I have logging disabled and 4 worker processes enabled with 5,000 connections per worker (which should yield a maximum of 20,000 connections). The server is only at about 10% CPU usage and 50% RAM, but it's very laggy, and sometimes nginx is so slow to respond to requests that it times out. For a small number of connections it's fine, but once any load starts occurring (~2,500 connections), it backs up and bogs down. Are there any other bottlenecks or limits that I might be hitting? This is a FreeBSD server, and all the static files are located locally (not on NFS). The NIC is an unmetered gigabit link, and it's only pushing around 75 Mb/s. Any insight would be appreciated. Thanks.
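
    For reference, a hedged sketch of the knobs usually involved (values are illustrative). On FreeBSD the per-process open-file limit and kern.maxfiles often become the ceiling well before worker_connections does, so worker_rlimit_nofile and the kqueue event method are worth checking alongside the worker settings:

      worker_processes      4;
      worker_rlimit_nofile  20480;

      events {
          use kqueue;
          worker_connections  5000;
      }

      http {
          sendfile           on;
          tcp_nopush         on;
          keepalive_timeout  15;
      }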

    Read the article

  • Driver for writing to UDF partitions from Windows XP?

    - by davr
    I'm considering using a UDF partition to share data between Windows XP, Windows 7, and Linux. It's more efficient than FAT32 and avoids the 4 GB maximum file size limit. I've found it will also work with Mac OS X; more details are in this question. However, in Windows XP it is read-only, and I'd like to write to it too. Are there any drivers that will allow this? I've found a few that support writing UDF, but they are designed for writing to CDs or DVDs, not specifically for HDDs or USB flash drives: DLA, InCD, Drag-To-Disc. Will any of those three drivers work for HDDs/USB flash drives? Or is there another driver that will do what I want? Thanks.

    Read the article

  • Apache stopping downloads part way through

    - by Ben Smiley
    On my site there are some digital files which can be downloaded through a PHP script. The script works fine for small files, but large files (e.g. 115 MB) cannot be downloaded successfully. The connection dies after around 15 minutes, but it's not consistent - sometimes longer, sometimes shorter. I don't think it's a problem with the script timing out, because the download time isn't consistent. Equally, it doesn't seem like a memory limit problem, because the amount downloaded varies each time. Does anyone know of any Apache or PHP related settings which could cause this kind of problem?
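
    A hedged sketch of the download loop itself, assuming the script currently uses readfile() or builds the response in a buffer (the path and filename are placeholders): streaming the file in small chunks with output buffering turned off keeps PHP's memory use flat and makes execution-time and buffer limits easier to rule out.

      <?php
      set_time_limit(0);                          // rule out max_execution_time
      while (ob_get_level()) { ob_end_clean(); }  // drop any output buffers

      $file = '/path/to/file.zip';
      header('Content-Type: application/octet-stream');
      header('Content-Length: ' . filesize($file));
      header('Content-Disposition: attachment; filename="' . basename($file) . '"');

      $fh = fopen($file, 'rb');
      while (!feof($fh)) {
          echo fread($fh, 8192);
          flush();
      }
      fclose($fh);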

    Read the article

  • How soon does nginx's token bucket replenish when limiting at requests per minute?

    - by Michael Gorsuch
    Hi all. We've decided that we want to experiment with limiting requests per minute instead of requests per second on our sites. However, I am confused by the burst parameter in this context. I am under the impression that when you use the 'nodelay' flag, the rate-limiting facility acts like a token bucket instead of a leaky bucket. That being the case, the bucket size is equal to the burst parameter, and every time you violate the policy (say 1 req/s), you have to put a token in the bucket. Once the bucket is full (i.e. equal to the burst setting), you are given a 503 error page. I am also under the impression that once a violator stops going against the policy, a token is removed from the bucket at a rate of 1 token/s, allowing him to regain access to the site. Assuming I have the above correct, my question is: what happens when I start regulating access per minute? If we choose 60 requests per minute, at what rate does the token bucket replenish?
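
    For concreteness, a hedged sketch of a per-minute policy (the zone name and burst value are illustrative). As far as I know, nginx converts 60r/m into an internal rate of one request per second, so the burst bucket drains at roughly one slot per second rather than refilling in a single lump at the end of the minute:

      http {
          limit_req_zone  $binary_remote_addr  zone=perip:10m  rate=60r/m;

          server {
              location / {
                  limit_req  zone=perip  burst=20  nodelay;
              }
          }
      }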

    Read the article

  • Send one invalid-address response per sender

    - by Kafuka
    I discovered that my Postfix/Dovecot configuration isn't rejecting emails. If a person sends an email to an invalid address, it just drops it. I am fine with this behavior, since I think it discourages spammers from mining addresses (I have had some success). Recently a person I did not want to talk to emailed an address I had cut off and didn't receive a response back. It would have saved me some problems if they had known to call me instead of sending 50+ emails. How would I configure Dovecot/Postfix to send a message back to the sender of an email to an invalid address, and then limit this to one per domain or unique address?

      Debian Stable, Linux 3.6.5-linode47
      Dovecot 1.2.15
      Postfix 2.7.1
      PostgreSQL backend, if that matters
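
    A hedged sketch for /etc/postfix/main.cf, assuming the silent drop comes from an empty local_recipient_maps or a catch-all alias: if Postfix rejects unknown recipients during the SMTP conversation, the sending server generates the non-delivery notice itself, so there is nothing to rate-limit on your side. (With a PostgreSQL-backed virtual domain, the equivalent check is done through virtual_mailbox_maps / virtual_alias_maps.)

      local_recipient_maps = proxy:unix:passwd.byname $alias_maps
      unknown_local_recipient_reject_code = 550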

    Read the article

  • ColdFusion multiserver instance hangs

    - by David Sedeño
    I have a ColdFusion 8 multiserver setup with IIS on Windows 2008 Standard SP2, and when one instance "hangs" (I can't connect to the instance from FusionReactor), the web server throws a "503 Service Unavailable". The remaining instances seem to work OK in FusionReactor, but the website only returns the 503. I have to restart the JVM processes and IIS to get the website working again. The JVM processes have the option -Xmx2048m and the instances have 2.5 GB allocated. Maybe the JVM process reaches the 2 GB heap limit and stops working? Could it be a problem between IIS and the CF instances? I'm new to the CF debugging process; how can I find out why the instance hangs? Thanks

    Read the article

  • Simulating a low-bandwidth, high-latency network connection on Linux

    - by Justin L.
    I'd like to simulate a high-latency, low-bandwidth network connection on my Linux machine. Limiting bandwidth has been discussed before, e.g. here, but I can't find any posts which address limiting both bandwidth and latency. I can get either high latency or low bandwidth using tc. But I haven't been able to combine these into a single connection. In particular, the example rate control script here doesn't work for me:

      # tc qdisc add dev lo root handle 1:0 netem delay 100ms
      # tc qdisc add dev lo parent 1:1 handle 10: tbf rate 256kbit buffer 1600 limit 3000
      RTNETLINK answers: Operation not supported

    How can I create a low-bandwidth, high-latency connection, using tc or any other readily-available tool?
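
    A hedged sketch of one combination that avoids nesting tbf under netem: an HTB class provides the bandwidth cap and netem hangs under it for the latency (run as root; the interface and numbers are illustrative).

      tc qdisc add dev lo root handle 1: htb default 12
      tc class add dev lo parent 1: classid 1:12 htb rate 256kbit ceil 256kbit
      tc qdisc add dev lo parent 1:12 handle 20: netem delay 100ms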

    Read the article

  • SPF include: too many IP addresses

    - by sprezzatura
    I've hit a snag with SPF. The SPF record for my domain will contain four or five entries, plus it will contain:

      include:sgizmo.com

    The SPF record for sgizmo.com contains eleven entries! This, plus mine, is way over the maximum of ten allowed by the RFC (and probably by most servers). I realize that there has to be a limit in order to prevent DoS attacks. However, in the real world it is probably not unreasonable for large companies to have many server addresses. Furthermore, must I now monitor my 'include:' counterparts for changes and additions? Must I check weekly, or daily, to ensure that some combination of changes doesn't suddenly put me over the top? It doesn't seem to me that SPF is suitable for prime time. Is there another way to do this?
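
    A hedged illustration of the usual workaround, assuming the ceiling being hit is the RFC's limit of ten DNS-querying mechanisms (the domain and addresses below are placeholders): "flattening" the include into literal ip4:/ip6: mechanisms keeps the record cheap to evaluate, at the cost of having to track the other party's address changes yourself.

      example.com.  IN  TXT  "v=spf1 mx ip4:192.0.2.10 ip4:198.51.100.0/24 ip4:203.0.113.0/24 -all"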

    Read the article

  • Public Facing Recursive DNS Servers - iptables rules

    - by David Schwartz
    We run public-facing recursive DNS servers on Linux machines. We've been abused in DNS amplification attacks. Are there any recommended iptables rules that would help mitigate these attacks? The obvious solution is just to limit outbound DNS packets to a certain traffic level, but I was hoping to find something a little more clever, so that an attack only blocks off traffic to the victim IP address. I've searched for advice and suggestions, but they all seem to be "don't run public-facing recursive name servers". Unfortunately, we are backed into a situation where things that are not easy to change will break if we don't run them, due to decisions made more than a decade ago, before these attacks were an issue.
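
    A hedged sketch using the hashlimit match (thresholds are illustrative, the rules assume responses leave via the OUTPUT chain, and on older iptables the first option is spelled --hashlimit rather than --hashlimit-upto): keyed on destination IP, it throttles answers to any single victim address once they exceed a per-target rate, without capping overall DNS traffic.

      iptables -A OUTPUT -p udp --sport 53 -m hashlimit \
          --hashlimit-name dns-out --hashlimit-mode dstip \
          --hashlimit-upto 20/second --hashlimit-burst 100 -j ACCEPT
      iptables -A OUTPUT -p udp --sport 53 -j DROP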

    Read the article

  • Disk wipe preferences

    - by hmvm123
    I manage a pool of systems that are loaded with software and sent to potential customers for evaluations, which often leaves sensitive information on the drives. Before shipping them back, customers typically like a standard wipe to be run to clean the drives. Most are familiar with DBAN, so I try to make sure it can work on my systems. Unfortunately, this means I'm usually in RAID-driver hell trying to make sure that the versions out there support the controllers my systems ship with: various kinds of 3ware and LSI. Consequently, I have DBAN 1.0.7 working on some, a beta version of 2.0 on others, and 2.2.6 on some of the latest SSD-based ones. Now, with the LSI controllers on my IBM x3550 M3s (1064/1068), I'm getting no love at all. Is there a way out? Do you build DBAN with buildroot and try to piece the drivers together? Are there any other tools, free or commercial, that stay updated? I'm trying to walk people of varying technical proficiencies through this, so a boot disk with simple choices is preferable.

    Read the article

  • Connect server to router with workstation in between?

    - by letseatfood
    I have a server (a 9-year-old Dell with Ubuntu 10.04 Server Edition) and a workstation (a laptop with Vista), both connected to a router. The workstation is able to access the server via a web browser for testing my web development projects. Since the server is old and does not have a wireless card, I am wondering if it is possible to connect the server directly to the workstation while maintaining the ability to connect to the server from the workstation. The reason is that the computers are in a separate room from the router, and I am trying to limit the amount of wires on the floor, along with trying to avoid buying more hardware (like a wireless card). Thanks, and I will be more than happy to clarify!

    Read the article

  • Recommendation for robust, customizable, open source, Java servlet-based forum software?

    - by Erik Hermansen
    There is a lot of forum software out there, but it seems to me that a lot of the popular choices are PHP-based. And for my project, I'd like something based on Java servlets so my team can make customizations to it. Another important feature is that I can completely change the pages to hide unwanted elements without too much work. So I'm looking either for a template system or easily editable scripts (i.e. JSPs) that have a clean view separation. Just having skin changes or CSS customization is not enough. I understand that if I have open source, I can change anything I want, but my point is that it should be easy and not requiring mastery of a complex code base. Finally, I want something that has been around for at least a year and deployed on some high-traffic sites. Clustering support (one database, multiple web servers) is highly desirable. Up-time is crucial since I have an SLA to support. What do you think?

    Read the article

  • Game Server Colocation

    - by Linuz
    Hello, I am new to colocation. I am looking for a good place to host my server, which would run around 5-10 Source servers (Team Fortress 2). I am looking at maybe around 90 players at a time for now, with players coming from all over the United States. Could I get more info on what exactly I am supposed to look for? Would it be correct for me to get a 100 Mb/s line, and would the 100 Mb/s line in the following example actually mean 100 Mb/s of upload bandwidth? Example: package 4 of FDC's services: http://www.fdcservers.net/server_colocation.php Also, I want to get something unmetered; I do not want to ever have to worry about going over some bandwidth limit or any DDoS attacks killing me. And if anyone has any other recommendations as to what network configuration I should get, or any other good colocation providers that are cheap in price, I would REALLY appreciate the help. Thanks

    Read the article

  • Anti-virus service does not run - Windows XP SP3 32-bit Home

    - by Stefan Fassel
    I have a somewhat strange problem here. I am trying to run anti-virus software on my Windows XP Home 32-bit system. After a serious crash I had to fall back to an outdated copy of my initial installation and had Windows install five years of updates. So far so good. After installing new anti-virus software (Bitdefender 2012) everything seemed to be fine; the initial scan went fine and configuration was working. But after restarting the system, the virus scanner was unable to start up again. Even the configuration console of the AV software did not start. I tried scanning the system for malware, but nothing was found. Then I tried different AV software (MS Security Essentials), but in the end it failed to start too. I have tried to start the service manually, but I seem to be missing the privilege to do so. I am logged in as a non-"Administrator" user with admin privileges (not many choices there on an XP Home system). I cannot switch to the Administrator account outside of safe mode, and when running Windows in safe mode I am unable to start the AV software, because it does not run in safe mode. I am a bit at a loss now...

    Read the article

  • Suppress log messages about 3ware disk temperature changes on CentOS?

    - by Stefan Lasiewski
    I have a number of CentOS 5 servers which use 3ware RAID controllers. These servers are bugging my team with messages about minor temperature changes, like this:

      Jun 8 12:32:39 HOST smartd[1231]: Device: /dev/twa0 [3ware_disk_01], SMART Usage Attribute: 194 Temperature_Celsius changed from 119 to 118
      Jun 8 12:32:39 HOST smartd[1231]: Device: /dev/twa0 [3ware_disk_03], SMART Usage Attribute: 194 Temperature_Celsius changed from 122 to 121

    How can I suppress these messages? According to man smartd.conf: "To disable any of the 3 reports, set the corresponding limit to 0. Trailing zero arguments may be omitted. By default, all temperature reports are disabled ('-W 0')." On my systems, smartd is reporting temperature changes by default. I tried a manual approach. In /etc/smartd.conf, I have the following:

      /dev/twa0 -d 3ware,1 -a -W 0
      /dev/twa0 -d 3ware,3 -a -W 0

    But this still does not suppress the messages. Since these messages show up in /var/log/messages, LogWatch is sending unnecessary emails every night.
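
    One hedged possibility: the stock CentOS /etc/smartd.conf starts with a DEVICESCAN line (the exact default options may differ), and when DEVICESCAN is the first active entry smartd ignores every line after it, so per-device -W 0 settings never take effect. A sketch of the file with that accounted for:

      # DEVICESCAN -H -m root   (comment out the distribution default)
      /dev/twa0 -d 3ware,1 -a -W 0
      /dev/twa0 -d 3ware,3 -a -W 0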

    Read the article

  • How to show only the number in a picture cross-reference in a Word 2007 document?

    - by kornelijepetak
    I have many pictures in a document and I reference them very often in the text. I don't want to lose the order, so I am using Insert - Cross-reference. This opens the cross-reference dialog, where I can set the Reference type to Picture. For "Insert reference to" there are 5 choices:

      - Entire caption
      - Only label and number
      - Only caption text
      - Page number
      - Above/below

    What I need is a reference inserted like this: [4], and not like this: [Picture 4]. None of these options lets me do that. Is there any way to make Word 2007 insert a reference to only the caption number? Note: the document is written in Croatian, which has 7 declension cases, so using "Picture 4" would not be valid in all cases. The caption label is actually set to the Croatian word "Slika", and when I need to say "in the picture" I can't, because it would have to be "na Slici 5." and not "na Slika 5." (as Word would make me write it). That's why I need to reference only the caption number. Is that possible in Word 2007?

    Read the article

  • Bridge and OpenVPN with shorewall

    - by Javier Martinez
    I have this scenario and everything is working OK, but I want to configure Shorewall and I can't get it right. My interfaces are:

      br0 (bridge of eth0)
      tun0 (OpenVPN)
      vnet* (one per bridged interface, with public IPs)

    The public main IP is 188.165.X.Y, the OpenVPN IPs are 172.28.0.x, and the bridge carries the public IPs. So I have the following configuration for Shorewall:

      /etc/shorewall/zones
      #ZONE   TYPE       OPTIONS   IN OPTIONS   OUT OPTIONS
      fw      firewall
      inet    ipv4
      road    ipv4

      /etc/shorewall/interfaces
      #ZONE   INTERFACE   BROADCAST   OPTIONS
      inet    br0         detect      routeback
      road    tun+        detect      routeback

      /etc/shorewall/policy
      #SOURCE   DEST   POLICY   LOG LEVEL   LIMIT:BURST   CONNLIMIT:MASK
      $FW       all    ACCEPT
      inet      $FW    DROP     info
      road      all    DROP
      inet      road   DROP

      /etc/shorewall/tunnels
      #TYPE                ZONE   GATEWAY     GATEWAY ZONE
      openvpnserver:1194   inet   0.0.0.0/0

    The problem is that even with Shorewall running, I am able to ping and connect to the virtual machines behind the bridge.

    Read the article

  • MySQL too many connections

    - by Webnet
    On my server I have 7 databases. Our server has 512 MB of RAM, which I'm getting upgraded this evening to 2 GB, and a single 2.4 GHz processor. I've gotten an error about the connection limit being exceeded. Given the RAM increase, is it OK to also increase the number of connections? Currently it's set to 200, but a single page may connect to 3-4 databases, considering JOINs and such. We've set up so many databases purely for organization. We have a total of about 250-300 tables across all of the databases. Any advice would be appreciated :)
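
    A hedged sketch for the [mysqld] section of my.cnf (values are illustrative): max_connections can go up with the extra RAM, but each connection carries per-thread buffers, so it pays to size it against the 2 GB rather than just raising it, and to reclaim idle connections sooner.

      [mysqld]
      max_connections = 400
      wait_timeout    = 60      # close idle connections after a minute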

    Read the article

  • Maximizing TCP connections on HAProxy load balancer

    - by imaginative
    I am currently using HAProxy to load-balance TCP connections from clients to my Erlang app server. The connection is persistent, which means I'm limited to roughly 64K clients on an optimized server (I'm currently running HAProxy on an m1.large EC2 instance). My app server is designed to scale horizontally based on the number of TCP connections. What's worrying me, though, is that I'll need an equal number of HAProxy servers and app servers, since it's a 1:1 connection. Is there currently a way to "proxy" the TCP connection to the app server so that once HAProxy sends the client off to my Erlang server, it can free up the connection, ready to serve another client? Are there any papers or existing solutions out there I can read, so that I only have to worry about the 64K limit on my app servers, and not on the load-balancing servers themselves?
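
    Not a direct answer to handing the socket off, but a hedged sketch of how the ~64K ceiling itself is usually raised (the addresses are placeholders): the limit is the ephemeral source-port range for one source/destination pair, so listing the backend several times with different source addresses multiplies the outbound ports HAProxy can use.

      backend erlang_app
          mode tcp
          server app1_a 10.0.0.10:5000 source 10.0.1.1
          server app1_b 10.0.0.10:5000 source 10.0.1.2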

    Read the article

  • PHP, MySQL & AJAX: Unable to use sessions across scripts in the same domain

    - by Devner
    Hi all, I have the following pages: page1.php, page2.php and page3.php. The code in each of them is below.

      page1.php

      <script type="text/javascript">
      $(function(){
          $('#imgID').upload({
              submit_to_url: "page2.php",
              file_name: 'myfile1',
              description : "Image",
              limit : 1,
              file_types : "*.jpg",
          })
      });
      </script>
      <body>
      <form action="page3.php" method="post" enctype="multipart/form-data" name="frm1" id="frm1">
      //Some other text fields
      <input type="submit" name="submit" id="submit" value="Submit" />
      </form>
      </body>

      page2.php

      <?php
      session_start();
      $a = $_SESSION['a'];
      $b = $_SESSION['b'];
      $c = $_SESSION['c'];
      $res = mysql_query("SELECT col FROM table WHERE col1 = $a AND col2 = $b AND col3 = $c LIMIT 1");
      $num_rows = mysql_num_rows($res);
      echo $num_rows; // echoes 0 when in fact it should have been 1, because the data in the session exists
      // OK, let's proceed further
      // ... Do some stuff ...
      // Store some more values and create new session variables
      // (and assume that page1.php is going to be able to use them)
      $_SESSION['d'] = 'd';
      $_SESSION['e'] = 'e';
      $_SESSION['f'] = 'f';
      if (move_uploaded_file($_FILES['file']['tmp_name'], $file)) {
          echo "success";
      } else {
          echo "error ".$_FILES['file']['error'];
      }
      ?>

      page3.php

      <?php
      session_start();
      if( isset($_POST['submit']) ) {
          // These sessions are non-existent, although the AJAX request
          // to page2.php may have created them when called from within page1.php
          echo $_SESSION['d'].$_SESSION['e'].$_SESSION['f'];
      ?>
      }
      ?>

    As the code shows, I am posting some info via an AJAX call from page1.php to page2.php. page2.php is supposed to be able to use the session values from page1.php, i.e. $_SESSION['a'], $_SESSION['b'] and $_SESSION['c'], but it does not. Why, and how can I fix this? page2.php creates some more session variables after some processing is done, and a response is sent back to page1.php. The submit button of the form on page1.php is then hit and the page gets POSTed to page3.php. But when the session info that was created in page2.php is echoed, it's blank, which suggests that the sessions from page2.php are not being used. How can I fix this? I have looked over a lot of information and spent about 50 hours trying different things with my scripts before arriving at the above conclusions. My app is custom made using functions (not OOP) and does not use any PHP frameworks, and I am not about to use any, as my knowledge of OOP concepts is limited and many frameworks are object oriented. I came across race conditions, but the solutions provided don't help much. The other suggested solution, using the DB to hold sessions and retrieving them from the DB, is the last thing on my mind; I really want to avoid creating a table and writing and maintaining code for a task as simple as keeping sessions across pages in the same domain. So my request is: is there a way I can solve the above problem(s) with simple coding under the present conditions? Any help is appreciated. Thank you.
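
    A hedged guess at the cause plus a sketch (assuming the upload plugin posts from Flash, which does not send the browser's PHPSESSID cookie, so page2.php always starts a fresh, empty session): page1.php would need to call session_start() itself and append its session_id() to submit_to_url (say as a sid query parameter), and page2.php can then adopt that id before starting the session.

      <?php
      // top of page2.php
      if (isset($_GET['sid']) && preg_match('/^[A-Za-z0-9,-]+$/', $_GET['sid'])) {
          session_id($_GET['sid']);   // reuse the browser's session instead of creating a new one
      }
      session_start();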

    Read the article

  • Hacking prevention, forensics, auditing and countermeasures

    - by tmow
    Recently (though it is also a recurring topic) we saw three interesting threads about hacking and security:

      - My server's been hacked EMERGENCY
      - Finding how a hacked server was hacked
      - File permissions question

    The last one isn't directly related, but it highlights how easy it is to mess up web server administration. As there are several things that can be done before something bad happens, I'd like to have your suggestions on good practices to limit the side effects of an attack and on how to react in the sad case that one does happen. It's not just a matter of securing the server and the code, but also of auditing, logging and countermeasures. Do you have a list of good practices, or do you prefer to rely on software or on experts who continuously analyze your web server(s) (or on nothing at all)? If so, can you share your list and your ideas/opinions?

    Read the article

  • How can I set up OpenVPN to accept more than 60 connections?

    - by Robin
    Greetings! We're using OpenVPN and today hit an unexpected connection limit of 60, even though max-clients is set to the source-code default of 1024. Server log:

      Tue Dec 21 13:49:41 2010 MULTI: new incoming connection would exceed maximum number of clients (60)

    We're slowly adding new clients to the VPN and expect to hit 200 some time next year, if we can get it working. We're running the server on Windows Server 2003 R2 with OpenVPN 2.0.9. Server config as follows:

      local 192.168.10.211
      port 1195
      proto tcp
      dev tun
      dev-node OpenVPN_Vision
      ca vision_ca.crt
      cert vision_server.crt
      key vision_server.key  # This file should be kept secret
      dh vision_dh1024.pem
      server 192.168.211.0 255.255.255.0
      ifconfig-pool-persist vision_ipp.txt
      ;server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100
      ;client-to-client
      keepalive 10 120
      comp-lzo
      ;max-clients 100  # Default in source code is 1024
      persist-key
      persist-tun
      status openvpn-status-vision.log
      log vision.log
      verb 3

    I would greatly appreciate any help or input on this one. Thanks! Best regards, Robin

    Read the article

  • Apache mpm-itk Performance

    - by Matt Beckman
    I manage a bunch of VPSs with memory ranging from 1 GB to 8 GB. Most of these websites are Joomla sites, and the servers must support multiple sites/users/SFTP. I use mpm-itk almost exclusively (mostly due to its convenience in these shared environments). However, I'm aware it isn't known for performance, so I need some advice on making it faster. Due to the lack of documentation when I first went the way of mpm-itk, I included only one setting in the config, limiting each user to 50 clients (the rest I left at the defaults):

      <IfModule mpm_itk_module>
          MaxClientsVHost 50
      </IfModule>

    Are there any better alternatives available? Are there any settings supported in mpm-prefork or mpm-worker that are also supported in mpm-itk? Thanks!

    Read the article
