Search Results

Search found 10693 results on 428 pages for 'max requests'.

Page 235/428

  • Unable to do Port Forwarding in Virtual Box

    - by dewbot
    I'm using Mac OS X 10.6 with VirtualBox 4.1.0 installed. My guest OS is Ubuntu Server 11.04. I have added a port-forwarding rule in VirtualBox: "guestssh" TCP 127.0.1.1 8080 127.0.0.1 1337. Inside the guest OS I'm running a Node.js server; the code is nothing but the simple hello-world example from their site http://nodejs.org/, so in short the server is listening on 127.0.0.1, port 1337. According to the rule I have created, all requests from the host to 127.0.1.1:8080 should be forwarded to 127.0.0.1:1337 in the guest. But when I run curl http://127.0.1.1:8080 from the host I get curl: (7) couldn't connect to host. Is there something I'm doing wrong? Note: please don't suggest SSH tunneling and the like; my ISP does not provide an internal LAN, so that isn't possible in my case. All I can do is port forwarding.
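
    For reference, the same NAT rule can also be created from the host's command line with VBoxManage - a minimal sketch, assuming the VM is named "UbuntuServer" (the VM name is a placeholder; the IPs and ports mirror the rule above):

        VBoxManage modifyvm "UbuntuServer" --natpf1 "guestssh,tcp,127.0.1.1,8080,127.0.0.1,1337"

    One thing to double-check (an assumption, not a diagnosis): with NAT, a forward aimed at the guest's 127.0.0.1 is a common gotcha, since the NAT engine delivers packets to the guest's network interface rather than its loopback. Binding the Node.js server to 0.0.0.0 and leaving the guest-IP field of the rule empty is a frequently used variation.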

    Read the article

  • Windows 8 taking 4+ mins to shut down

    - by arnab321
    I did a fresh installation of Windows 8 64-bit, build 9200 (released on Aug 16th). I installed the drivers and some basic software like NetBeans, MinGW, IIS and PHP. The first few times it restarted normally, but then at shutdown it would show the shutdown screen for a few seconds and then turn black for about 4 minutes (similar to what happens during hibernation). I disabled the "fast startup" option in Power Options, but the problem persists. Windows 7 and Ubuntu shut down normally. Specs: 4 GB RAM, 750 GB SATA HDD. Update: solved by installing the Windows updates released during October; it was a serious bug in the OS, afaik. Now even hibernate takes up to 30 seconds max. Still, Win 8 is too buggy for release.

    Read the article

  • Mailman aggregate mailing lists

    - by s1nny
    Using CentOS 5.10, WHM and cPanel X, I'm setting up mailing lists for my company (Microsoft Exchange costs too much for the number of users we have). I've got a few mailing lists set up right now, @Naples, @Cayman, @ACK, @Managers, etc., which are all working nicely. Now I need to create an aggregate list, @Stores, which sends to some of the other lists, in this case @Naples, @Cayman and @ACK, leaving out @Managers. I keep getting authorization requests from each store list when I send to @Stores, but if I send to each store list individually it's fine. I've got them all set up to "accept non-member postings for which no explicit action is defined", and I've set "accept_these_nonmembers" to ^.*
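
    Worth knowing: Mailman's accept_these_nonmembers also understands an "@listname" entry, meaning "members of that other list on this installation", which is more precise than ^.* for umbrella setups. A hedged sketch using the command-line config_list tool (list names assumed; which list gets which entry depends on the direction you want to whitelist, and the bin path varies by install):

        # whitelist.cfg - accept posts from members of the stores umbrella list (sketch)
        accept_these_nonmembers = ['@stores']

        # apply it to one of the child lists from Mailman's bin directory
        ./config_list -i whitelist.cfg naples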

    Read the article

  • How to find malicious IPs?

    - by alfish
    Cacti shows irregular but pretty steadily high bandwidth to my server (40x normal), so I guess the server is under some sort of DDoS attack. The incoming traffic has not paralyzed the server, but it is consuming bandwidth and hurting performance, so I am keen to figure out the likely culprit IPs and add them to my deny list or otherwise counter them. When I run: netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n I get a long list of IPs with up to 400 connections each. I checked the most frequently occurring IPs, but they come from my CDN. So I am wondering what is the best way to monitor the requests each IP makes in order to pinpoint the malicious ones. I am using Ubuntu Server. Thanks
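
    Besides live connection counts, the web server's access log gives a per-request view. A quick sketch (the log path, the CDN range being excluded, and the blocked address are all placeholders):

        # top talkers by request count, ignoring the CDN's range
        awk '{print $1}' /var/log/nginx/access.log | grep -v '^203\.0\.113\.' \
            | sort | uniq -c | sort -rn | head -20

        # drop a single offender once identified (hypothetical address)
        ufw deny from 198.51.100.7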

    Read the article

  • Help w/ error "Account exceeded bandwidth limits" Thunderbird 3 + Gmail

    - by boo
    Over the past two days I have been receiving the following message on a new computer running Windows 7 whenever I try to access my email through Thunderbird: Account exceeded bandwidth limits. (Failure) This is followed by: Login to server imap.gmail.com failed. The credentials are correct, as I have access via the web interface. It then prompts me to re-enter my password (I have also unlocked the captcha on this account, but that didn't stop the error messages). I'm looking for details on why this is happening and how to prevent it from recurring, such as whether it is specific to something in Thunderbird 3 or to Google.

    Read the article

  • Strange PHP output buffering

    - by radek-k
    PHP:

        header('Content-type: text/plain');
        for ($i = 0; $i < 10; $i++) {
            echo "$i\r\n";
            ob_flush();
            flush();
            sleep(1);
        }

    I tried the script above on 2 different servers. Both respond with the numbers 0...9, one per line. On the first server each number is received every second; on the second server there is no output for 10 seconds and then the entire output is displayed at once. What might be wrong in the second case? I tried various output control functions but they didn't help. The set of response headers is pretty much the same in both cases:

        HTTP/1.1 200 OK
        Date: Mon, 03 Jan 2011 19:21:21 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.14
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/plain
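
    Buffering like this usually happens outside the script itself (zlib output compression, mod_deflate, or a leftover output buffer). A minimal, untested sketch of the knobs worth checking at the top of the script; note that apache_setenv() only exists under mod_php:

        <?php
        // make sure no userland output buffers are still active
        while (ob_get_level() > 0) {
            ob_end_flush();
        }
        // compression layers hold output until the response is complete
        @ini_set('zlib.output_compression', '0');
        @apache_setenv('no-gzip', '1'); // asks mod_deflate to skip this request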

    Read the article

  • nginx php-fpm keeps downloading files

    - by Sam Williams
    vhost:

        server {
            listen *:8080;

            location / {
                root /var/www/default/pub;
                index index.php;
                # if file exists return it right away
                if (-f $request_filename) {
                    break;
                }
                if (!-e $request_filename) {
                    rewrite ^(.+)$ /index.php$1 last;
                    break;
                }
            }

            # serve static files directly
            location ~* \.(jpg|jpeg|gif|css|png|js|ico|html)$ {
                access_log off;
                expires max;
            }

            location ~* \.php$ {
                # By all means use a different server for the fcgi processes if you need to
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                include /usr/local/nginx/conf/fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    http://192.168.135.128/index.php loads just fine... http://192.168.135.128/public_/html/index.php downloads...

    Read the article

  • Citrix Mixed Server Session problems

    - by oldskool
    Hello, we run a Citrix farm with 4 x Windows 2008 x64 (XenApp 5.0 FP3) and one Windows 2003 x86 server (XenApp 4.6). Yesterday we had a strange problem with session logins: existing sessions kept working without problems, but new logins were not possible, and in the AMC we could not see the sessions on the servers. After a reboot everything worked fine again. The 32-bit server has only been in the farm for 3 days (for one application). Could this behavior be caused by the mixed farm? (During this time we also got a lot of WMI warnings that the provider CitrixEvemtProv has been registered but does not correctly impersonate user requests.)

    Read the article

  • moving my site, IP change worries...

    - by Sherif Buzz
    Hi all, my site has outgrown the shared hosting account it's on and I've set up a VPS that I'll be moving to soon. I cannot keep the same IP between the new account and the old one, and I'm a bit at a loss as to how to minimize user downtime while the new IP propagates to all DNS caches. Note that I cannot have the site running on both accounts at the same time, as it's a dating site and that would cause data inconsistency. Here's what I am planning to do: (1) put up an 'under maintenance' page on the old host; (2) get the site up and running on the new host and update the domain to point to it; (3) hope the downtime isn't too long. Would it be a good idea to have a link on the page in (1) that opens the new site by its IP? Or even redirect all requests at the old host to the new one (again by IP)? Any advice much appreciated.

    Read the article

  • Distributed website server redundancy

    - by Keith Lion
    Assume a website infrastructure is very complicated and fully distributed (probably like most large web companies). Am I right in thinking that although there are all these extra web servers to handle multiple client requests, there is still a single "machine" through which users must enter? I am guessing this machine will be the one physically associated with the IP address. I ask because I need to know whether, in places where distributed systems exist, there is still a single point of failure, usually the control node or, in this example, the machine connected to the public internet. Surely there cannot be two machines connected to the internet, as they would have to have different IP addresses? This "machine" may not be a server per se; maybe it is a piece of Cisco equipment. I just need to know whether, in the real world, these distributed systems still have a particular section where they depend on the integrity of one electronic device.

    Read the article

  • Possible to redirect from HTTPS to HTTP behind load-balancer?

    - by Derek Hunziker
    I have a basic ASP.NET application that sits behind an F5 load-balancer. Incoming SSL requests (over HTTPS) terminate at the load-balancer, and all internal communication between the load-balancer and my application servers is unsecured (over HTTP). When an unsecured request comes in, my app is able to use Response.Redirect("https://...") to redirect to a secure URL with no problems. However, the other direction appears to be impossible: I cannot redirect from HTTPS to HTTP using Response.Redirect() from my application. The URL remains HTTPS for the client and does not change. Could the F5 be preventing the redirect from ever reaching the client? Is there any special configuration necessary to make this happen?

    Read the article

  • hosting simple python scripts in a container to handle concurrency, configuration, caching, etc.

    - by Justin Grant
    My first real-world Python project is to write a simple framework (or re-use/adapt an existing one) which can wrap small Python scripts (used to gather custom data for a monitoring tool) with a "container" that handles boilerplate tasks like:

    - fetching a script's configuration from a file (keeping that info up to date if the file changes, and handling decryption of sensitive config data)
    - running multiple instances of the same script in different threads instead of spinning up a new process for each one
    - exposing an API for caching expensive data and storing persistent state from one script invocation to the next

    Today, script authors must handle the issues above themselves, which usually means that most script authors don't handle them correctly, causing bugs and performance problems. Besides avoiding bugs, we want a solution which lowers the bar to create and maintain scripts, especially given that many script authors may not be trained programmers. Below are examples of the API I've been thinking of, which I'm looking to get your feedback about. A scripter would need to build a single method which takes (as input) the configuration that the script needs to do its job, and either returns a Python object or calls a method to stream back data in chunks. Optionally, a scripter could supply methods to handle startup and/or shutdown tasks. HTTP-fetching script example (in pseudocode, omitting the actual data-fetching details to focus on the container's API):

        def run(config, context, cache):
            results = http_library_call(config.url, config.http_method, config.username, config.password, ...)
            return { html: results.html, status_code: results.status, headers: results.response_headers }

        def init(config, context, cache):
            config.max_threads = 20                 # up to 20 URLs at one time (per process)
            config.max_processes = 3                # launch up to 3 concurrent processes
            config.keepalive = 1200                 # keep process alive for 10 mins without another call
            config.process_recycle.requests = 1000  # restart the process every 1000 requests (to avoid leaks)
            config.kill_timeout = 600               # kill the process if any call lasts longer than 10 minutes

    A database-data fetching script might look like this (in pseudocode):

        def run(config, context, cache):
            expensive = context.cache["something_expensive"]
            for record in db_library_call(expensive, context.checkpoint, config.connection_string):
                context.log(record, "logDate")   # log all properties, optionally specify name of timestamp property
                last_date = record["logDate"]
                context.checkpoint = last_date   # persistent checkpoint, used next time through

        def init(config, context, cache):
            cache["something_expensive"] = get_expensive_thing()

        def shutdown(config, context, cache):
            expensive = cache["something_expensive"]
            expensive.release_me()

    Is this API appropriately "pythonic", or are there things I should do to make it more natural for the Python scripter? (I'm more familiar with building C++/C#/Java APIs, so I suspect I'm missing useful Python idioms.) Specific questions:

    - Is it natural to pass a "config" object into a method and ask the callee to set various configuration options, or is there another preferred way to do this?
    - When a callee needs to stream data back to its caller, is a method like context.log() (see above) appropriate, or should I be using yield instead? (yield seems natural, but I worry it would be over the head of most scripters; see the sketch below)
    - My approach requires scripts to define functions with predefined names (e.g. "run", "init", "shutdown"). Is this a good way to do it? If not, what other mechanism would be more natural?
    - I'm passing the same config, context, cache parameters into every method. Would it be better to use a single "context" parameter instead? Would it be better to use global variables instead?
    - Finally, are there existing libraries you'd recommend to make this kind of simple "script-running container" easier to write?
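
    As a concrete illustration of the yield option mentioned above, here is a minimal sketch in the same pseudocode style; names like config, context and db_library_call are the hypothetical ones from the question, not a real library:

        def run(config, context, cache):
            """Generator variant: the container drives the loop and can handle batching and back-pressure."""
            expensive = cache["something_expensive"]
            for record in db_library_call(expensive, context.checkpoint, config.connection_string):
                context.checkpoint = record["logDate"]   # persistent checkpoint, saved by the container
                yield record                             # streamed back one record at a time

        # the container would consume it roughly like this:
        # for record in run(config, context, cache):
        #     write_to_monitoring_tool(record)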

    Read the article

  • IIS App Pool Identity Internet Settings

    - by Programming Hero
    How does an IIS app pool determine its internet settings? I'm specifying a custom identity under which to host a .NET web application: a service account that is part of our Active Directory domain. When the application runs, it needs to make HTTP requests to other servers. This causes it to read web and proxy settings from some location, but I can't understand where it goes for this information. Does it look at:

    - the default account's settings on that box?
    - the default profile on the AD server?
    - its own local/roaming profile?
    - a combination of the above?
    - somewhere completely different?
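
    For background, .NET's HttpWebRequest resolves its proxy through the defaultProxy configuration section; by default that falls back to the WinINET (Internet Explorer) settings of the identity the code runs as, which is why the account's profile matters. One way to take the profile lookup out of the equation, sketched below with a hypothetical proxy address, is to pin the proxy explicitly in web.config:

        <!-- web.config sketch: explicit proxy, so the app pool identity's profile no longer matters -->
        <configuration>
          <system.net>
            <defaultProxy>
              <proxy usesystemdefault="false"
                     proxyaddress="http://proxy.example.local:8080"
                     bypassonlocal="true" />
            </defaultProxy>
          </system.net>
        </configuration>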

    Read the article

  • How to add recently set cookies to nginx's access log

    - by etoleb
    I'd like to include cookie data in an nginx access log, like so (simplified example):

        log_format foo '$remote_addr "$request" $cookie_bar';
        access_log /var/log/nginx/access.log foo;

    This works great on requests that already have a cookie "bar", but for the first request to my server nginx will report "-" as the value of "bar". It seems like my problem is that nginx is looking at the request headers for the cookie value. Is there a way to check for a Set-Cookie in the response and use that as a fallback?
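
    One thing worth noting: nginx exposes response headers to the logging phase as $sent_http_* variables, so an untested sketch of a fallback would be to log the outgoing Set-Cookie header alongside the request cookie:

        log_format foo '$remote_addr "$request" cookie=$cookie_bar set_cookie="$sent_http_set_cookie"';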

    Read the article

  • .htaccess rewrite issue

    - by Jessica
    Below is my .htaccess file for my website on shared hosting. I'm attempting to send all requests to index.php with parameter 'q', where I wish to parse the data etc., unless the request is for an existing file or directory. My issue: if I browse to an actual directory, expecting to get an unauthorized notice (Options -Indexes), it sends me to index.php instead, and I also noticed that print_r($_GET) gives me this: Array ( [q] => 403.shtml )

        AddDefaultCharset UTF-8
        Options -Indexes +FollowSymLinks
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteCond %{HTTP_HOST} !^$
            RewriteCond %{HTTP_HOST} !^www\. [NC]
            RewriteCond %{HTTPS}s ^on(s)|
            RewriteRule ^ http%1://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
        </IfModule>

    Help would be much appreciated if possible :)
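
    For what it's worth, the q=403.shtml value suggests the rewrite is also being applied to Apache's internal subrequest for the 403 ErrorDocument. A common guard, untested here, is to skip rewriting when the request is such an internal error-document redirect, for example by adding one condition in front of the catch-all rule:

        # skip the rewrite when Apache is internally redirecting to an ErrorDocument
        RewriteCond %{ENV:REDIRECT_STATUS} ^$
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]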

    Read the article

  • Cannot connect to HTTPS port on Ubuntu

    - by Simpleton
    I've installed a new SSL certificate and set up Nginx to use it, but requests time out when trying to hit HTTPS on the site. When I telnet to my domain on port 80 it connects, but on port 443 it times out. I'm not sure if there are some defaults on Ubuntu preventing the connection. UFW status shows: 443 ALLOW Anywhere. netstat -a shows: tcp 0 0 *:https *:* LISTEN. nmap localhost shows: 443/tcp open https. The relevant block in the Nginx config is:

        server {
            listen 443;
            listen [::]:80 ipv6only=on;
            listen 80;
            root /path/to/app;
            server_name mydomain.com ssl on;
            ssl_certificate /etc/nginx/ssl/ssl-bundle.crt;
            ssl_certificate_key /etc/nginx/ssl/server.key;
            location / {
                proxy_pass http://mydomain.com;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }
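
    Since nmap against localhost reports 443 as open while remote telnet times out, it can help to test whether a TLS handshake actually completes, both locally and from outside. A quick check with OpenSSL (replace mydomain.com, or use localhost on the server itself):

        # connects to the port and attempts a TLS handshake, printing the certificate chain
        openssl s_client -connect mydomain.com:443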

    Read the article

  • Wordpress serving PHP but not CSS or JS

    - by Jason
    I'm trying to set up an Amazon EC2 instance to run a Django app and a WordPress instance side by side, differing only by the incoming URL. Initially, accessing the site via mysite.com/wordpress worked, but I also needed to catch incoming requests for the subdomain blog.mysite.com. To do that, I created a default file in /etc/apache2/sites-enabled and included two VirtualHost directives, one of which was:

        <VirtualHost *:80>
            ServerName www.blog.mysite.com
            <Directory /var/www/wordpress>
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>

    This created some errors with the other virtual host, so I restored the default 000-default configuration and restarted. Now accessing mysite.com/wordpress takes forever, and even then the CSS and JS files are not loading. Inside the Firebug Net tab I can see the HTML response, but the CSS and JS files are not loading at all. What happened here?
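
    When juggling several VirtualHost blocks like this, a quick way to see which one Apache is actually matching is its built-in vhost dump (on Ubuntu the wrapper is usually apache2ctl):

        # lists all parsed VirtualHosts, the default server, and any overlap warnings
        apache2ctl -S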

    Read the article

  • Several devices in Device Manager have Code 3 error

    - by John Straka
    One of our users' machines (Dell Optiplex 380 running Windows 7 32-bit) is having a weird driver issue. A bunch of devices have Error Code 3: The driver for this device might be corrupted, or your system may be running low on memory or other resources. (Code 3) The system isn't low on memory (only 1 GB in use out of whatever the 32-bit maximum is). A screenshot of the affected devices accompanied the question (not reproduced here). I tried reinstalling the chipset and audio drivers to no avail. I have no idea what prompted it, and I'm not sure how the basic Windows Generic PnP Monitor driver could even be corrupted. What might be causing this error?

    Read the article

  • A good SQL database to process a lot of data?

    - by Dorian
    I have to process something like 10-100 million records, and I have to hand the data over to the client when it's finished. The data is delivered as SQL statements to execute in the database. He has a powerful server with MySQL, so I think it will be fast enough on his side. The issue is that my computer is not as powerful as his server, so I would like to use another SQL server that is compatible with MySQL (I export his database and import it on my machine) but copes better with the load. What should I use? Or am I doomed to use MySQL?

    Read the article

  • How to use radiusclient-ng?

    - by Muhammad Gelbana
    A guy on my team compiled the radiusclient and radlogin executables found on that page, but installing them is getting more and more problematic and I can't seem to get anywhere! I received these files from him: radclient, libfreeradius-client.so.2, servers, radiusclient.conf, dictionary.dat and radlogin. What I'm trying to do is install this client on a Linux box and then: access that box remotely using SSH, and issue authentication/accounting requests to another remote RADIUS server. But nothing about this seems intuitive, and I have very little experience with Linux and the RADIUS protocol! Has anyone successfully installed that client? Thank you.
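
    For whatever it's worth, if the radclient binary in that bundle behaves like FreeRADIUS's radclient (an assumption - radiusclient-ng's own tools differ), a basic authentication request looks roughly like this, with the server address and shared secret as placeholders:

        # send a single Access-Request to 192.0.2.10:1812 with shared secret "testing123"
        printf 'User-Name = bob\nUser-Password = hello\n' | radclient -x 192.0.2.10:1812 auth testing123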

    Read the article

  • mod_proxy failing as forward proxy in simple configuration

    - by Stabledog
    (On Mac OS X 10.6, Apache 2.2.11.) Following the oft-repeated googled advice, I've set up mod_proxy on my Mac to act as a forward proxy for HTTP requests. My httpd.conf contains this:

        <IfModule mod_proxy>
            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Allow from all
            </Proxy>

    (Yes, I realize that's not ideal, but I'm behind a firewall trying to figure out why the thing doesn't work at all.) When I point my browser's proxy settings at the local server (ip_address:80), here's what happens:

    1. I browse to http://www.cnn.com
    2. I see via a sniffer that the request is sent to Apache on the Mac
    3. Apache responds with its default home page ("It works!" is all this page says)

    So Apache is not doing what I expect: it is not forwarding my browser's request out onto the Internet to cnn. Nothing in the logfile indicates an error or problem, and Apache returns a 200 header to the browser. Clearly there is some very basic configuration step I'm not understanding... but what?
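
    One frequent gotcha worth ruling out (an assumption here, not a diagnosis): ProxyRequests only works when the protocol handler modules are loaded alongside mod_proxy, so httpd.conf also needs lines along these lines uncommented (paths as shipped with OS X's Apache):

        LoadModule proxy_module libexec/apache2/mod_proxy.so
        LoadModule proxy_http_module libexec/apache2/mod_proxy_http.so
        LoadModule proxy_connect_module libexec/apache2/mod_proxy_connect.so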

    Read the article

  • Default maximum heap size -- Ubuntu 10.04 LTS, openjdk6-jre

    - by sachin
    I just installed openjdk6-jre on Ubuntu 10.04:

        java version "1.6.0_20"
        OpenJDK Runtime Environment (IcedTea6 1.9.2) (6b20-1.9.2-0ubuntu1~10.04.1)
        OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)

    Every time I run "java" I get this error:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    This can be solved by specifying a maximum heap size and running "java -Xmx256m", but is there any way to fix this permanently (i.e. set the default heap size to 256 MB so that I do not need to specify the max heap size every time I run the command)?
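
    One common way to make a default stick, sketched here for a per-user shell (not necessarily the cleanest route), is to export the options the JVM picks up from the environment, e.g. in ~/.bashrc:

        # picked up by every JVM started from this environment; JAVA_TOOL_OPTIONS works similarly
        export _JAVA_OPTIONS="-Xmx256m"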

    Read the article

  • Running 32 bit SQL Server 2005 on 64 bit Windows Server?

    - by TooFat
    If I have a 32-bit version of SQL Server 2005 running on a 64-bit Windows Server, does the maximum amount of memory available to the SQL Server process increase from 2 GB to 4 GB? In a blog entry by Mark Russinovich he states that "All Microsoft server products and data intensive executables in Windows are marked with the large address space awareness flag" and "Because the address space on 64-bit Windows is much larger than 4GB, something I’ll describe shortly, Windows can give 32-bit processes the maximum 4GB that they can address and use the rest for the operating system’s virtual memory." That leads me to believe the answer is "yes", but I'm not totally confident.

    Read the article

  • Throwing TRIM support in Ubuntu guest at Win7-Virtualbox host

    - by user141472
    I have VirtualBox 4.1.14 on Windows 7 as the host and Ubuntu Server 11.10 as the guest. The system was installed on a traditional HDD years ago (and upgraded later), but now it sits on an SSD as a dynamically expanding drive. The "AHCI" and "is SSD" features are enabled on the SATA controller. The problem is that this expanding drive has grown to almost its maximum size (90% of it), while inside the VM only about 50% is actually used. Also, the guest does not recognize /dev/sda as an SSD: /sys/block/sda/queue/rotational says "1" and /sys/block/sda/queue/discard_* all say "0". And, of course, I cannot run fstrim /; it says the operation is not supported. Is there some trick to enable TRIM support in my guest system without reinstalling it?
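
    For reference, the SSD and discard flags can also be set per disk from the host with VBoxManage. A sketch with placeholder VM, controller and image names; note that the --discard option only appeared in VirtualBox releases newer than 4.1.14:

        VBoxManage storageattach "UbuntuServer" --storagectl "SATA" --port 0 --device 0 \
            --type hdd --medium "ubuntu.vdi" --nonrotational on --discard on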

    Read the article

  • aireplay - reading but not sending

    - by oneat
    I'm trying aircrack injection. Everything else is working and I have authenticated, but aireplay is not sending anything:

        aireplay-ng -3 -b 00:12:2A:01:74:05 -h 78:e4:00:87:71:8b mon0
        18:53:03  Waiting for beacon frame (BSSID: 00:12:2A:01:74:05) on channel 7
        Saving ARP requests in replay_arp-0817-185303.cap
        You should also start airodump-ng to capture replies.
        Read 4988 packets (0 ARPs, 4 ACKs), sent 0 packets...(0 pps)

    Why isn't it working? Why isn't it sending packets? The card is:

        03:00.0 Network controller: Atheros Communications Inc. AR928X Wireless Network Adapter (PCI-Express) (rev 01)

    I tested injection with the injection test from the aircrack tutorial, even though the driver wasn't patched.
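
    For completeness, the injection test referred to above is aireplay-ng's attack 9; a minimal form, assuming the same mon0 monitor interface:

        # basic injection test against any AP the card can hear
        aireplay-ng -9 mon0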

    Read the article
