Search Results

Search found 1087 results on 44 pages for 'serving'.

Page 17/44 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • Display a flash message for a specific logged-in user upon receiving a request.

    - by ShenoudaB
    Hi, how can I trigger a prompt (or a flash message with links) on the screen of one specific logged-in agent in my web application when a request arrives from a client, using Rails and jQuery? My application serves a call center, and my client request ... when a call comes into the system, a prompt should appear on the screen of the agent the call is dedicated to, with some information from the call.
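
    One way to sketch this (assuming a hypothetical /notifications endpoint exposed by the Rails app that returns pending call info as JSON for the currently logged-in agent) is to poll from the agent's screen with jQuery:

        // Minimal sketch, not the asker's code: poll a hypothetical
        // /notifications endpoint and show a flash box when a call
        // arrives for this agent. '#flash' is an assumed placeholder.
        setInterval(function () {
          $.getJSON('/notifications', function (data) {
            if (data && data.message) {
              $('#flash').html(data.message).show();
            }
          });
        }, 5000); // poll every 5 seconds

    A push-style alternative (Comet/WebSockets) avoids the polling delay, but polling is the simplest fit for a plain Rails + jQuery stack.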

    Read the article

  • How do I identify which MySQL slave has responded?

    - by Kynth
    I have a MySQL master server replicating to three slaves. A legacy website is performing load-balanced reads from the slaves. Is there a method of identifying, from the website, which of the slaves is serving a read request? I'd prefer a function that I can use to return a server name or IP address as part of the SELECT, but any reasonable method will do. Thank you in advance.
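
    A minimal sketch of one approach: MySQL (5.0.38+) exposes the @@hostname system variable, which names the server that actually executed the statement, so it can be selected alongside the data (the table and columns here are placeholders):

        -- Which slave answered this read? @@hostname is evaluated on
        -- the responding server; 'customers' is a hypothetical table.
        SELECT id, name, @@hostname AS served_by
        FROM customers
        LIMIT 10;

    @@server_id works the same way if the slaves are easier to tell apart by their replication server IDs than by hostname.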

    Read the article

  • Google App Engine - The most awaited feature

    - by systempuntoout
    This list is taken from the official Google App Engine roadmap:

    - SSL for third-party domains
    - Background servers capable of running for longer than 30s
    - Ability to reserve instances to reduce application loading overhead
    - Ability to select different availability vs. latency options for Datastore
    - Support for mapping operations across datasets
    - Datastore dump and restore facility
    - Raise request/response size limits for some APIs
    - Improved monitoring and alerting of application serving
    - Support for Browser Push (Comet) communication
    - Built-in support for OAuth & OpenID

    What is your most awaited feature, and why?

    Read the article

  • Subversion with Apache and permissions issues

    - by djechelon
    Hello, I have set up an SVN repository for use with Apache 2 via the svnadmin create command and an appropriate vhost configuration. I found that, in order to use the repository correctly, it must be owned by the wwwrun user (or the www group) or chmodded to 777. I would like to ask if it's possible to explicitly tell Apache to impersonate another user when serving requests to a certain path (from vhost.conf), as with the suphp extension, so I won't have to mess with permissions every time I create a repository. Thank you in advance.
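
    One commonly suggested route, sketched here only (it assumes the third-party mpm-itk module is installed; names and paths are placeholders): pin the whole vhost to a dedicated user, so repositories can simply be owned by that user:

        <VirtualHost *:80>
            ServerName svn.example.com
            # mpm-itk directive: run requests for this vhost as svnuser
            AssignUserID svnuser svngroup
            <Location /repos>
                DAV svn
                SVNParentPath /srv/svn
            </Location>
        </VirtualHost>

    Note that mpm-itk replaces the stock MPM and has performance and security trade-offs of its own.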

    Read the article

  • Is nginx / node.js / postgres a very scalable architecture?

    - by Luc
    I have an app running with:

    - one instance of nginx as the frontend (serving static files)
    - a cluster of node.js processes for the backend (using the cluster and expressjs modules)
    - one instance of Postgres as the DB

    Is this architecture sufficient if the application needs to scale (this is only for HTTP / REST requests) to:

    - 500 requests per second (each request only fetches data from the DB; the data could be a few KB, with no big computation needed after the fetch)
    - 20,000 users connected at the same time

    Where could the bottlenecks be?
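
    For reference, a minimal sketch of the backend tier described above (Express behind node's built-in cluster module; the route and port are placeholders):

        // Fork one worker per core; each worker serves small
        // DB-backed JSON responses on a shared port.
        var cluster = require('cluster');
        var os = require('os');

        if (cluster.isMaster) {
          for (var i = 0; i < os.cpus().length; i++) {
            cluster.fork();
          }
        } else {
          var express = require('express');
          var app = express();
          app.get('/data/:id', function (req, res) {
            // a few-KB fetch from Postgres would go here
            res.json({ id: req.params.id });
          });
          app.listen(3000); // workers share the port via cluster
        }

    With this shape, the first bottleneck is usually the Postgres connection pool rather than nginx or node itself, so that is the place to start measuring.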

    Read the article

  • How do I stop Opera from caching a page?

    - by nishkarr
    I am trying to get Opera to re-request a page every time instead of just serving it from the cache. I'm sending the 'Cache-Control: no-cache' and 'Pragma: no-cache' response headers, but it seems as if Opera is just ignoring them. It works fine in other browsers - Chrome, IE, Firefox. How do I stop Opera from caching pages? What I want is for Opera to re-request the page when the user clicks the Back button in the browser.
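
    A sketch of the header combination commonly suggested for this: the Back button usually hits the browser's history cache, which is more aggressive than the ordinary cache, and 'no-store' is the directive aimed at it:

        Cache-Control: no-cache, no-store, must-revalidate
        Pragma: no-cache
        Expires: 0

    Whether a given Opera version honours no-store for history navigation is worth testing directly; the behaviour has varied between releases.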

    Read the article

  • Embedding a script with a timeout in case the server goes down

    - by vsync
    I need a way to make sure my script won't block the viewed page if the server serving the script is down (port 80 blocked for some reason). Currently, when I test this by taking down the server (Apache) or closing the firewall, I see in the browser that it tries to load the resource (the script, in this case) without success for many seconds until it aborts. Is there a nice way to get past this issue?
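
    A minimal sketch of one approach (the script URL and timeout value are placeholders): inject the script tag dynamically so the page keeps rendering, and stop waiting after a fixed timeout if the host never answers:

        // Load the third-party script asynchronously with a give-up timer.
        var script = document.createElement('script');
        script.src = 'http://thirdparty.example.com/widget.js'; // assumed URL
        script.async = true;
        var timer = setTimeout(function () {
          if (script.parentNode) {
            script.parentNode.removeChild(script); // give up after 5s
          }
        }, 5000);
        script.onload = function () { clearTimeout(timer); };
        document.getElementsByTagName('head')[0].appendChild(script);

    Removing the tag does not abort an in-flight request in every browser, but it does keep the page itself from appearing blocked.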

    Read the article

  • Serve static Play 2.x content from a CDN

    - by Adam Lane
    In development mode I would like to have assets served locally, and when deployed in production I would like them served from the CDN. Is anyone using Play 2 and serving content from a CDN willing to share how they are doing it?

        // Thinking of something like this in the routes file...
        @if(play.Play.isDev()) {
          GET  /assets/*file         controllers.Assets.at(path="/public", file)
        } else {
          GET  CDNPATH/assets/*file  controllers.Assets.at(path="CDNPATH", file)
        }

    (Note: using 2.0.2 because of the headers fix https://github.com/playframework/Play20/pull/276)
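
    Since the routes file itself isn't templated, a common workaround is a small helper that templates call instead of the Assets reverse route directly. A sketch only (the object name and CDN host are placeholders, untested):

        // RemoteAssets.scala - prepend the CDN host only in production.
        object RemoteAssets {
          private val cdn = "https://cdn.example.com" // placeholder host
          def at(file: String): String = {
            val local = controllers.routes.Assets.at(file).url
            if (play.Play.isProd()) cdn + local else local
          }
        }

    Templates would then use @RemoteAssets.at("stylesheets/main.css") instead of the plain reverse route.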

    Read the article

  • WinSock best accept() practices

    - by Meta
    Imagine you have a server which can handle only one client at a time. The server uses WSAAsyncSelect to be notified of new connections. In this case, what is the best way of handling FD_ACCEPT messages:

    (a) accept the connection attempt right away, but queue the client until its turn, or
    (b) not accept the next connection attempt until we are done serving the currently connected client?

    Which do you think is the most efficient?
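
    A sketch of option (a) inside the window procedure (the message name and queue helper are hypothetical; WSAGETSELECTEVENT and accept are the real WinSock pieces):

        // On FD_ACCEPT, take the connection immediately and queue it.
        case WM_SOCKET_EVENT:                   // assumed user message
            if (WSAGETSELECTEVENT(lParam) == FD_ACCEPT) {
                SOCKET client = accept((SOCKET)wParam, NULL, NULL);
                if (client != INVALID_SOCKET)
                    enqueue_client(client);     // hypothetical queue helper
            }
            break;

    Accepting promptly also empties the listen backlog, so pending clients aren't refused while one request is being served.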

    Read the article

  • How to serve static files for multiple Django projects via nginx to same domain

    - by thanley
    I am trying to set up my nginx conf so that I can serve the relevant files for my multiple Django projects. Ultimately I want each app to be available at www.example.com/app1, www.example.com/app2, etc. They all serve static files from a 'static-files' directory located in their respective project root. The project structure:

        home/ubuntu/web/www.example.com/
            ref/
            logs/
            app/
                app1/
                    app1/
                        static/
                        bower_components/
                        templatetags/
                    app1_project/
                    templates/
                    static-files/
                app2/
                    app2/
                        static/
                        templates/
                        templatetags/
                    app2_project/
                    static-files/
                app3/
                    tests/
                    templates/
                    static-files/
                    static/
                    app3_project/
                    app3/
            venv/

    When I use the conf below, there are no problems serving the static files for the app that I designate in the /static/ location, and I can access the different apps at their locations. However, I cannot figure out how to serve all of the static files for all the apps at the same time. I have looked into using the 'try_files' directive for the static location, but cannot figure out how to tell whether it is working or not.

    Nginx conf - only serving static files for one app:

        server {
            listen 80;
            server_name example.com;
            server_name www.example.com;

            access_log /home/ubuntu/web/www.example.com/logs/access.log;
            error_log /home/ubuntu/web/www.example.com/logs/error.log;

            root /home/ubuntu/web/www.example.com/;

            location /static/ {
                alias /home/ubuntu/web/www.example.com/app/app1/static-files/;
            }

            location /media/ {
                alias /home/ubuntu/web/www.example.com/media/;
            }

            location /app1/ {
                include uwsgi_params;
                uwsgi_param SCRIPT_NAME /app1;
                uwsgi_modifier1 30;
                uwsgi_pass unix:///home/ubuntu/web/www.example.com/app1.sock;
            }

            location /app2/ {
                include uwsgi_params;
                uwsgi_param SCRIPT_NAME /app2;
                uwsgi_modifier1 30;
                uwsgi_pass unix:///home/ubuntu/web/www.example.com/app2.sock;
            }

            location /app3/ {
                include uwsgi_params;
                uwsgi_param SCRIPT_NAME /app3;
                uwsgi_modifier1 30;
                uwsgi_pass unix:///home/ubuntu/web/www.example.com/app3.sock;
            }

            # what to serve if upstream is not available or crashes
            error_page 400 /static/400.html;
            error_page 403 /static/403.html;
            error_page 404 /static/404.html;
            error_page 500 502 503 504 /static/500.html;

            # Compression
            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 5;
            gzip_proxied any;
            gzip_min_length 1100;
            gzip_buffers 16 8k;
            gzip_types text/plain text/css application/x-javascript text/xml
                       application/xml application/xml+rss text/javascript;

            # Some versions of IE 6 don't handle compression well on some
            # mime-types, so just disable it for them
            gzip_disable "MSIE [1-6].(?!.*SV1)";

            # Set a Vary header so downstream proxies don't send cached
            # gzipped content to IE6
            gzip_vary on;
        }

    Essentially I want to have something like this (I know this won't work):

        location /static/ {
            alias /home/ubuntu/web/www.example.com/app/app1/static-files/;
            alias /home/ubuntu/web/www.example.com/app/app2/static-files/;
            alias /home/ubuntu/web/www.example.com/app/app3/static-files/;
        }

    or this (where it can serve the static files based on the URI):

        location /static/ {
            try_files $uri $uri/ =404;
        }

    So basically: if I use try_files like above, is the problem in my project directory structure? Or am I totally off base on this, and do I need to put each app in a subdomain instead of going this route? Thanks for any suggestions.

    TL;DR: I want to go to www.example.com/APP_NAME_HERE and have nginx serve the static location /home/ubuntu/web/www.example.com/app/APP_NAME_HERE/static-files/.
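
    One approach worth sketching (untested; app names hardcoded): a regex location that captures the app name from the URI and maps it onto that app's static-files directory, which matches the TL;DR directly:

        # /app1/static/css/site.css ->
        #   /home/ubuntu/web/www.example.com/app/app1/static-files/css/site.css
        location ~ ^/(?<app>app1|app2|app3)/static/(?<path>.*)$ {
            alias /home/ubuntu/web/www.example.com/app/$app/static-files/$path;
        }

    With a layout like this, each app's STATIC_URL in Django would need to be '/appN/static/' so the generated links line up.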

    Read the article

  • Routing not working correctly using the Laravel framework

    - by samayres1992
    I'm using the book written by one of the guys who created Laravel, so I'd like to think that for the most part this code isn't horribly wrong. My server is set up with nginx serving all static files and apache2 serving PHP. My config for each is the following.

    apache2:

        <VirtualHost *>
            # Host that will serve this project.
            ServerName litl.it

            # The location of our project's public directory.
            DocumentRoot /var/www/litl.it/laravel/public

            # Useful logs for debug.
            CustomLog /var/log/apache.access.log common
            ErrorLog /var/log/apache.error.log

            # Rewrites for pretty URLs, better not to rely on .htaccess.
            <Directory /var/www/litl.it/laravel/public>
                <IfModule mod_rewrite.c>
                    Options -MultiViews
                    RewriteEngine On
                    RewriteCond %{REQUEST_FILENAME} !-f
                    RewriteRule ^ index.php [L]
                </IfModule>
            </Directory>
        </VirtualHost>

    nginx:

        server {
            # Port that the web server will listen on.
            listen 80;

            # Host that will serve this project.
            server_name litl.it *.litl.it;

            # Useful logs for debug.
            access_log /var/log/nginx.access.log;
            error_log /var/log/nginx.error.log;
            rewrite_log on;

            # The location of our project's public directory.
            root /var/www/litl.it/laravel/public;

            # Point index to the Laravel front controller.
            index index.php;

            location / {
                # URLs to attempt, including pretty ones.
                try_files $uri $uri/ /index.php?$query_string;
            }

            # Remove trailing slash to please routing system.
            if (!-d $request_filename) {
                rewrite ^/(.+)/$ /$1 permanent;
            }

            # PHP FPM configuration.
            location ~* \.php$ {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/proxy_params;
                try_files index index.php $uri =404;
                include /etc/nginx/fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root/php/$fastcgi_script_name;
            }

            # We don't need .ht files with nginx.
            location ~ /\.ht {
                deny all;
            }

            location @proxy {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/proxy_params;
            }

            error_page 403 /error/403.html;
            error_page 404 /error/404.html;
            error_page 405 /error/405.html;
            error_page 500 501 502 503 504 /error/5xx.html;

            location ^~ /error/ {
                internal;
                root /var/www/litl.it/lavarel/public/error;
            }
        }

    I'm including these server configs as I feel they may be the issue. Here is my incredibly basic routing file that should return "routing is working" on domain.com/test, but instead it just returns the homepage:

        <?php

        Route::get('/', function() {
            return View::make('hello');
        });

        Route::get('/test', function() {
            return "routing is working";
        });

    Any ideas where I'm going wrong? I'm following this tutorial very closely, and I'm confused about why there is an issue. Thanks!
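
    One guess, sketched from the config above (untested): the try_files fallback in location / rewrites every non-file URI to /index.php?$query_string, which throws away the /test path before the request ever reaches Apache, so Laravel only ever sees /. Handing non-file URIs to the existing @proxy block instead preserves the full path:

        location / {
            # Fall back to the named @proxy location (which forwards to
            # Apache on 8080) instead of rewriting to /index.php.
            try_files $uri $uri/ @proxy;
        }

    The fastcgi_* lines inside the \.php$ location are also likely doing nothing once proxy_pass is in play, so they can probably be removed.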

    Read the article

  • Why are my httpd mpm_prefork processes being reaped so quickly?

    - by Dan Pritts
    We've got a system running RHEL6, x64, with a local installation of Apache 2.2.22 built from source. We serve primarily:

    - mod_perl applications (with a local installation of Perl 5.16.0)
    - Tomcat applications proxied with mod_jk

    Here is some context; the main question is below. All of this talks to an Oracle backend. We are having issues with Oracle becoming unresponsive. We think this is because we're hitting the maximum process limit in Oracle. We've upped the process limit, but now we are hitting memory pressure on the Oracle server. We have tons of Oracle sessions sitting idle. I can trace a bunch of them back to the httpd processes. We have mod_perl's Apache::DBI start up a new connection to the database with each httpd child that's spawned. We are concerned that these are not always getting closed out properly when the httpd's exit... and the httpd's are exiting very frequently. I know that it would be good to modify the mod_perl applications to use some better form of DB connection pooling; we plan to pursue that, but would like to solve our immediate problem sooner.

    So here's the main question. We are using the prefork MPM. The Apache child processes are lasting at most a few minutes. Log analysis shows that each one is serving fewer than 50 clients before exiting; the last request each child serves is OPTIONS * HTTP/1.0 on some sort of internal connection; I'm under the impression that this is a "ping" from the master process. I've adjusted the MPM config as follows. I didn't want to raise MinSpareServers too high because, after all, I'm trying to minimize the number of sessions to Oracle.

        MinSpareServers       5
        MaxSpareServers      30
        MaxClients          150
        MaxRequestsPerChild 10000

    Right now we're serving 250-300 requests per minute. We've got 21 httpd's running, the eldest (other than the master, owned by root) being 3 minutes old. This rate of reaping of the Apache children really seems excessive. What could be causing it?

    Apache was built with:

        $ ./configure --prefix=/opt/apache --with-ssl=/usr/lib \
            --enable-expires --enable-ext-filter --enable-info \
            --enable-mime-magic --enable-rewrite --enable-so \
            --enable-speling --enable-ssl --enable-usertrack \
            --enable-proxy --enable-headers --enable-log-forensic

    Apache config info:

        % /opt/apache/bin/httpd -V
        Server version: Apache/2.2.22 (Unix)
        Server built:   Jul 23 2012 22:30:13
        Server's Module Magic Number: 20051115:30
        Server loaded:  APR 1.4.5, APR-Util 1.4.1
        Compiled using: APR 1.4.5, APR-Util 1.4.1
        Architecture:   64-bit
        Server MPM:     Prefork
          threaded:     no
            forked:     yes (variable process count)
        Server compiled with....
         -D APACHE_MPM_DIR="server/mpm/prefork"
         -D APR_HAS_SENDFILE
         -D APR_HAS_MMAP
         -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
         -D APR_USE_SYSVSEM_SERIALIZE
         -D APR_USE_PTHREAD_SERIALIZE
         -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
         -D APR_HAS_OTHER_CHILD
         -D AP_HAVE_RELIABLE_PIPED_LOGS
         -D DYNAMIC_MODULE_LIMIT=128
         -D HTTPD_ROOT="/opt/apache"
         -D SUEXEC_BIN="/opt/apache/bin/suexec"
         -D DEFAULT_PIDLOG="logs/httpd.pid"
         -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
         -D DEFAULT_LOCKFILE="logs/accept.lock"
         -D DEFAULT_ERRORLOG="logs/error_log"
         -D AP_TYPES_CONFIG_FILE="conf/mime.types"
         -D SERVER_CONFIG_FILE="conf/httpd.conf"

    Modules are compiled into Apache rather than shared libs:

        % /opt/apache/bin/httpd -l
        Compiled in modules:
          core.c mod_authn_file.c mod_authn_default.c mod_authz_host.c
          mod_authz_groupfile.c mod_authz_user.c mod_authz_default.c
          mod_auth_basic.c mod_ext_filter.c mod_include.c mod_filter.c
          mod_log_config.c mod_log_forensic.c mod_env.c mod_mime_magic.c
          mod_expires.c mod_headers.c mod_usertrack.c mod_setenvif.c
          mod_version.c mod_proxy.c mod_proxy_connect.c mod_proxy_ftp.c
          mod_proxy_http.c mod_proxy_scgi.c mod_proxy_ajp.c
          mod_proxy_balancer.c mod_ssl.c prefork.c http_core.c mod_mime.c
          mod_status.c mod_autoindex.c mod_asis.c mod_info.c mod_cgi.c
          mod_negotiation.c mod_dir.c mod_actions.c mod_speling.c
          mod_userdir.c mod_alias.c mod_rewrite.c mod_so.c

    One final note: the Red Hat httpd, apr, and perl packages are all installed, but ldd shows that none of those libraries are linked with the running httpd.
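
    A hedged guess at a mitigation, sketched below: in prefork, the parent kills children whenever the idle count exceeds MaxSpareServers, so at 250-300 requests/minute a narrow 5-30 spare window can cause constant churn. Widening it reduces reaping without changing MaxClients:

        # Sketch, assuming the reaping is idle-spare kill-off rather than
        # MaxRequestsPerChild being hit (each child dies well before
        # 10000 requests, so the spare logic is the likelier culprit).
        MinSpareServers      15
        MaxSpareServers      75
        MaxClients          150
        MaxRequestsPerChild 10000

    mod_status (compiled in above) can confirm this: watch the scoreboard for children flipping between idle and dying.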

    Read the article

  • HTTP response time profiling

    - by Sparsh Gupta
    Hello, I have an nginx reverse proxy. The server is serving close to 600-700 requests per second. I have a Munin HTTP load time plugin which is outputting this: http://monitor.wingify.com/munin/visualwebsiteoptimizer.com/lb1.visualwebsiteoptimizer.com-http_loadtime.html Now, the problem is that I am seeing some spikes in the graph. Expected response times should always be under 200ms. I am keeping an eye on syslog and messages, but I am unable to figure out the actual cause. I was wondering if there is any good HTTP response time profiling system which I can install / embed with this nginx server to get detailed reports / logs on the breakdown of time taken by different things and what exactly is causing the spikes. The profiling system would also help me understand bottlenecks and how I can further optimize the latency. Most important right now is to investigate the cause of the spikes in the HTTP load time graphs (a similar pattern is reported by external monitors - Pingdom) and to fix it to get consistent response times. Thanks
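
    As a first step that needs no extra software, a sketch of an nginx log_format (placed in the http block; the log file path is a placeholder) using nginx's own timing variables, which splits total time from upstream time per request:

        # $request_time = full time nginx spent on the request;
        # $upstream_response_time = time the proxied backend took.
        log_format timing '$remote_addr [$time_local] "$request" $status '
                          'req=$request_time upstream=$upstream_response_time';
        access_log /var/log/nginx/timing.log timing;

    If spikes show a high $request_time but a normal $upstream_response_time, the delay is in nginx or the network to the client rather than the backend.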

    Read the article

  • Rotate haproxy logs

    - by Jagbir
    I tried a few things but am still not able to rotate haproxy logs efficiently. I need to rotate the logs when a log file crosses 500 MB in size. Since haproxy is serving a large number of persistent TCP connections, I cannot restart the haproxy process, though a reload is doable. The daily haproxy log file size normally crosses 3 GB on my machine. Here's a sample from one of the newer machines, where the log file size is growing beyond the limit set:

        ubuntu@server:/mnt/log/haproxy$ ls -lsh
        total 4.3G
         85M -rw-r----- 1 syslog adm  85M Jun  2 07:13 haproxy.log
        2.9G -rw-r----- 1 syslog adm 2.9G Jun  2 06:37 haproxy.log.1
        460M -rw-r----- 1 syslog adm 460M Jun  1 06:32 haproxy.log.2.gz
        469M -rw-r----- 1 syslog adm 469M May 31 06:42 haproxy.log.3.gz
        384M -rw-r----- 1 syslog adm 384M May 30 06:49 haproxy.log.4.gz

        ubuntu@server:/mnt/log/haproxy$ cat /etc/logrotate.d/haproxy
        /mnt/log/haproxy/haproxy.log {
            missingok
            copytruncate
            notifempty
            rotate 50
            size 500M
            compress
            delaycompress
        }
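
    One likely explanation, offered as a sketch: logrotate is typically driven by a daily cron job, so "size 500M" is only evaluated once a day - by which point the file has already grown far past the limit. Checking this config more often makes size-based rotation effective:

        #!/bin/sh
        # /etc/cron.hourly/logrotate-haproxy (path is an assumption)
        # Run logrotate against just the haproxy config every hour; it
        # rotates only if the 500 MB size condition is met.
        /usr/sbin/logrotate /etc/logrotate.d/haproxy

    copytruncate already avoids touching the haproxy process, so no restart or reload is needed on rotation.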

    Read the article

  • How much RAM is required by Varnish?

    - by Gobind Singh Deo
    Hi, I'm using Apache for serving static files, and Apache2 requires too much RAM. I want to reduce the RAM usage. I don't have experience with Varnish; it's said to be faster, but I don't know how it works. So: how much RAM is needed for running Apache2 + Varnish? Will Apache2 + Varnish have higher RAM usage than Apache2 without Varnish? Thanks.
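
    For context, a sketch of how Varnish's memory use is bounded: the cache size is set explicitly at startup, so the RAM cost is roughly that cap plus per-connection overhead (the values here are placeholders):

        # Listen on port 80, fetch misses from Apache on 8080, and cap
        # the in-memory cache at 256 MB.
        varnishd -a :80 -b 127.0.0.1:8080 -s malloc,256m

    Combined usage is higher than Apache alone at first glance, but because Varnish absorbs most hits, Apache can run with far fewer workers, which is where the RAM saving usually comes from.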

    Read the article

  • IIS application pool failed to respond to ping

    - by Zimmy-DUB-Zongy-Zong-DUBBY
    I have been investigating a problem that occurred on a Windows 2003 server a few days ago. There are about 15 app pools, and within a few minutes they all produced the error below in the system log:

        A process serving application pool 'Pool 31x' failed to respond to a ping. The process id was '7144'.

    The pools were then restarted automatically, but timed out during startup, leaving all sites down. My question is: what would cause a "ping timeout" in all of the app pools around the same time, and then why would they start up too slowly? The app in each pool is a WCMS which uses the .NET 1.1 framework. It connects to a remote DB but is otherwise independent of other machines.

    Read the article

  • Set up a LAN to serve web pages and VoIP, and access the website from inside the LAN by domain name

    - by Mauricio Arias
    I'd like to know if this will work: I have my domain, and I'm serving a webpage to the internet with nginx, but if I type my domain on my laptop inside the LAN, I reach my modem/router's configuration page; I cannot reach the web server unless I type its IP address. I would like to add a BIND server behind the modem/router (with ports 80 and 5060 forwarded): if the request is for www.mydomain.com, BIND should resolve it to the nginx IP address so nginx serves it; if it is a VoIP request, it should be directed to the VoIP server; and from inside the LAN I'd like to reach the website by typing mydomain.com. Could I do this with this configuration? Do I need something else? Thanks in advance!
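
    The usual name for this is split-horizon DNS: LAN clients query the internal BIND, which answers with private addresses. A minimal sketch of an internal zone file (all names and addresses are placeholders):

        ; db.mydomain.com.internal - answers served only to LAN clients
        $TTL 3600
        @       IN  SOA  ns1.mydomain.com. admin.mydomain.com. (
                         1 ; serial
                         3600 900 604800 86400 )
                IN  NS   ns1.mydomain.com.
                IN  A    192.168.1.10   ; nginx box, for "mydomain.com"
        ns1     IN  A    192.168.1.2
        www     IN  A    192.168.1.10
        sip     IN  A    192.168.1.11   ; VoIP server

    Note that BIND only answers DNS queries; it doesn't route the forwarded ports 80 and 5060 - that part stays with the router's port forwarding.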

    Read the article

  • How to restart php-cgi automatically with spawn-fcgi

    - by mrm8
    I'm running nginx with PHP as FastCGI. It's working just fine; however, php-cgi keeps exit()ing after serving 500 requests. I tried increasing that value (PHP_FCGI_MAX_REQUESTS), and that worked, but it seems like a workaround. Then I set it to 0, and it hasn't exit()ed yet. But I think there's a reason why php-cgi should be restarted periodically. At the moment I'm running php-cgi with spawn-fcgi: when the PHP process exits, spawn-fcgi exits too. Now, is there a way to automatically restart PHP (without dirty hacks like while [ 1 ]; do spawn-fcgi; done, etc.)?
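
    One clean option, sketched here (assuming supervisord is installed; paths and the port are placeholders): let a process supervisor own the respawning, with spawn-fcgi kept in the foreground via -n so the supervisor can watch it:

        ; /etc/supervisor/conf.d/php-cgi.conf
        [program:php-cgi]
        command=/usr/bin/spawn-fcgi -n -a 127.0.0.1 -p 9000 -- /usr/bin/php-cgi
        autorestart=true

    PHP_FCGI_MAX_REQUESTS exists to curb slow memory leaks in long-running php-cgi workers, so keeping a modest value and letting the supervisor respawn the process preserves that safety net.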

    Read the article

  • Using Squid with Apache?

    - by ajsie
    So I have set up Apache serving my PHP pages. I read about Squid but don't understand why/how I should use it to speed up my web server. From what I've learned, Squid sits in the same network (or another one) and caches content requested by web browsers; when another browser wants the same page, Squid returns the locally cached copy, so it never sends a request to the Apache server (faster response time for the client, and reduced load for the server). So it seems that Squid is for the client side (web browser) and has nothing to do with the server side (Apache). But then some people tell others how they have sped up Apache using Squid, so I'm confused. Could Squid be used on the server side too? And how would that work?
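
    For reference, the server-side mode is called a reverse proxy or "accelerator". A minimal sketch of a squid.conf for it (assuming Squid 2.6+; the hostname and ports are placeholders): Squid takes port 80, answers cache hits itself, and fetches misses from Apache moved to port 8080 on the same box:

        http_port 80 accel defaultsite=www.example.com
        cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=apache

    In this arrangement Squid sits in front of the server rather than near the clients, so it speeds things up for every visitor, not just those behind a particular caching proxy.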

    Read the article

  • How to make MAMP PRO / XAMPP secure enough to serve as a production webserver? Is it possible?

    - by Andrei
    Hi, my task is to set up a MAMP web server for our website in the easiest way possible, so it can be managed by my colleagues, who have no experience in server administration. MAMP PRO is an excellent solution, but some people advise against using it to serve external requests. Could you explain why it is bad (in detail, if possible) and how to make it secure enough to be a full-scale, not-only-local web server? Is there a better solution?

    Update: There is a discussion on the MAMP website. The XAMPP developers say that one can make their product secure:

        The default configuration is not good from a security point of view, and it's not secure enough for a production environment - please don't use XAMPP in such an environment. Since LAMPP 0.9.5 you can make your XAMPP installation secure by calling »/opt/lampp/lampp security«.

    Could you comment on this?

    Read the article

  • How to get nginx to serve up on an elastic IP

    - by geekbri
    I have an EC2 instance which is serving up PHP pages with nginx and php-fpm. This works perfectly fine when accessed through the instance's public DNS. However, if I try to access the site via the Elastic IP which is bound to it, it serves up a generic "Welcome to nginx" page, even though in my server block I have "listen 80" (which I thought listened on all incoming IPs on port 80). Here is my nginx config:

        server {
            listen 80;
            access_log /var/log/nginx/access.log;
            root "/var/www/clipperz/";
            index index.html index.php;

            # Default location
            location / {
                try_files $uri $uri/ index.html;
            }

            # Parse all .php files in the $document_root directory
            location ~ .php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }
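
    A sketch of one likely fix (a guess, since it depends on what else is in /etc/nginx): the stock "Welcome to nginx" page comes from a packaged default server block, and when a request arrives under a name or IP that matches no server_name, nginx hands it to whichever block is the default for that port. Marking this block as the default makes it win regardless of how the instance is addressed:

        server {
            listen 80 default_server;   # "default" on nginx < 0.8.21
            ...
        }

    Removing or disabling the packaged default site (often /etc/nginx/sites-enabled/default) achieves the same thing.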

    Read the article

  • Comparison of cloud hosting providers

    - by Abel
    Is there a place where we can compare* the many newly arising cloud hosting providers? From reading about each of them, they seem very different, ranging from hosting applications only (Google) to a semi-full enterprise web-serving framework (Rackspace). Comparing "by hand" takes a lot of time. All have limitations and different pricing, but what are those, and how do they compare? I'm looking for an unbiased comparison site rather than a discussion of "which is the best".

    * I don't mean a hosting-provider comparison site, of which there are many. The properties of cloud hosting providers are remarkably different and don't compare well on classical hosting-provider comparison charts.

    Read the article
