Search Results

Search found 64048 results on 2562 pages for 'http post'.


  • Website Content via remote mysql or API for clients

    - by Kannan
    I would like to know whether there will be problems for shared hosting/VPS/dedicated users if they request content via remote MySQL or via an HTTP API. What are the issues if they make every request to the production server for delivery of the website articles along with image URLs etc.? Will it cause disk wait for shared/VPS/dedicated hosting users? Will remote MySQL, an HTTP API or curl cause high CPU usage, I/O wait or memory usage? Which is best for speed: remote MySQL or an HTTP web API? Or any alternatives? Thank you, Kannan
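
    A minimal sketch of the HTTP API route with a short-lived local cache, which is the usual way to keep per-page-view load off the production server; the endpoint URL, cache path and TTL below are assumptions for illustration, not anything from the question:

        import json
        import os
        import time
        import urllib.request

        URL = "http://production.example.com/api/articles/42"  # hypothetical endpoint
        CACHE = "/tmp/article_42.json"                         # hypothetical cache path
        TTL = 300                                              # re-fetch after 5 minutes

        if os.path.isfile(CACHE) and time.time() - os.path.getmtime(CACHE) < TTL:
            with open(CACHE) as f:
                body = f.read()              # served locally, no remote I/O
        else:
            with urllib.request.urlopen(URL, timeout=5) as resp:
                body = resp.read().decode("utf-8")
            with open(CACHE, "w") as f:
                f.write(body)                # refresh the cache

        article = json.loads(body)           # assumes the API returns JSON
        print(article["title"])

    Whichever transport is chosen, it is the per-request round trip to the production box that produces the CPU, I/O wait and memory cost on both ends, so caching generally matters more than remote MySQL versus HTTP.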

    Read the article

  • what firewall linux distro appliance could track internet usage per device in my home?

    - by GregH
    Hello, does anyone know of a community edition/open source/free firewall/gateway software product that I could install onto an old PC to act as my firewall/gateway/proxy etc., BUT which has the power to track internet usage per device in my home? So:
    a) Mandatory - Track internet usage for devices on my home network on a per-device basis (e.g. various PCs/Xbox etc.)
    b) Mandatory - Report/graph that would give a breakdown of internet usage, per device (e.g. IP address), per day
    c) Desirable - as in b) above, but per hour
    d) Desirable - realtime graph (e.g. 5-minute measurement intervals or something) that shows current internet usage per device
    e) Mandatory - Handles all internal-to-internet requests for all protocols (e.g. HTTP, HTTPS, Xbox etc.)
    f) Mandatory - No explicit settings in clients required, i.e. transparent monitoring (for both HTTP and non-HTTP traffic like Xbox, Skype etc.)
    g) Mandatory - easy "appliance"-like installation onto a dedicated low-spec PC
    Thanks in advance

    Read the article

  • cisco 6500 crash enabling netflow

    - by bleomycin
    Hello everyone, I have a Cisco 6503 running IOS 12.2(33)SXI5 and I'm trying to enable NetFlow, following the instructions here: http://www.manageengine.com/products/netflow/help/cisco-netflow/cisco-ios-netflow.html After enabling it for interface Vlan 3, shortly after ip flow-export version 5 the console outputs: CPU_MONITOR-6-NOT_HEARD: CPU monitor messages have not been heard for 30 seconds Crash log here: http://pastebin.com/Niv2H8xD It then writes a crash log and reloads the router. Has anyone else experienced anything like this before? Here is my running config prior to adding the options in the above link: http://pastebin.com/AgNb1ahG Thank you for any help!
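
    For reference, a minimal sketch of the NetFlow v5 export that guide walks through; the collector address/port and source interface are assumptions, and given the crash it is worth applying one line at a time (on a 6500, hardware-switched flows additionally go through NDE):

        ! software-path capture on the SVI
        interface Vlan3
         ip flow ingress
        !
        ! export settings (collector address/port assumed)
        ip flow-export source Loopback0
        ip flow-export version 5
        ip flow-export destination 192.0.2.10 9996
        !
        ! NDE for hardware-switched traffic on the 6500
        mls nde sender version 5
        mls flow ip interface-full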

    Read the article

  • UEC - Can the Cluster Controller and Storage Controller be separate systems?

    - by Jeremy Hajek
    My department is implementing an Ubuntu Enterprise Cloud. I have done the testing and am quite comfortable with the 4 pieces: CC/SC, CLC, WS, NC. Looking at the various documents below, it appears that the Storage Controller and Cluster Controller (eucalyptus-sc and eucalyptus-cc) are always installed on the same system. My question is this: can I install the storage controller and the cluster controller on separate systems?
    http://open.eucalyptus.com/wiki/EucalyptusAdvanced_v2.0 - the picture indicates that CC and SC are two different machines
    http://www.canonical.com/sites/default/files/active/Whitepaper-UbuntuEnterpriseCloudArchitecture-v1.pdf - p. 10, 1st paragraph uses the word "machine(s)"
    http://software.intel.com/file/31966 - p. 8 indicates the same separate architecture
    BUT... https://help.ubuntu.com/community/UEC/PackageInstallSeparate indicates that the SC and CC are to be on the same system.

    Read the article

  • Online backup solution

    - by Petah
    I am looking for a backup solution to back up all my data (about 3-4TB). I have looked at many services out there, such as: http://www.backblaze.com/ http://www.crashplan.com/ Those services look very good and reasonably priced, but I am worried about them because of incidents like this: http://jeffreydonenfeld.com/blog/2011/12/crashplan-online-backup-lost-my-entire-backup-archive/ I am wondering if there is any online backup solution that offers a service level agreement (SLA) with compensation for data loss at a reasonable price (under $30 per month). Or is there a good solution that offers a high enough level of redundancy to mitigate the risk? Required:
    Offsite backup to prevent data loss in the event of fire/theft.
    Redundancy to protect the backup from corruption.
    A reasonable cost (< $30 per month).
    An SLA in case the service provider defaults on its agreements.

    Read the article

  • Redirecting to a URL that has a question mark in it?

    - by dkmojo
    I have a somewhat strange problem. A client has moved their site to WordPress. They use a service for link exchanges that has a WordPress plugin. The issue is that the new links pages use a query string to display the correct content and I cannot figure out how to redirect the old URLs correctly. Old URLs look like this: domain.com/link/category-name.html The plugin makes them look like this in WP: domain.com/links/?page=category-name.html How in the world can I get the redirect to work properly? Here's what I have tried:

        Redirect 301 /link/actors.html http://www.artisticimages.biz/links/?page=actors.html
        Redirect 301 /link/actors.html http://www.artisticimages.biz/links/%3Fpage=actors.html
        Redirect 301 /link/actors.html http://www.artisticimages.biz/links/\?page=actors.html

    But none of those have worked. Any help is greatly appreciated!
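
    A hedged sketch of the mod_rewrite route that is usually suggested for this, since one pattern covers every old category page and makes the query string explicit (assumes mod_rewrite is available and, if WordPress's own rewrite block lives in the same .htaccess, that this rule is placed above it):

        RewriteEngine On
        # /link/<name>.html  ->  /links/?page=<name>.html
        RewriteRule ^link/(.+)\.html$ http://www.artisticimages.biz/links/?page=$1.html [R=301,L]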

    Read the article

  • Boot XenServer through iPXE

    - by Ghassen Telmoudi
    I want to install XenServer 6.2 through iPXE. I tried different configurations, with no luck making it work until now. I found several examples for booting from PXE using a TFTP server; here is one:

        default xenserver-auto
        label xenserver-auto
        kernel mboot.c32
        append xenserver/xen.gz dom0_max_vcpus=1-2 dom0_mem=752M,max:752M com1=115200,8n1 console=com1,vga --- xenserver/vmlinuz xencons=hvc console=hvc0 console=tty0 answerfile=http://[pxehost]/answerfile.xml remotelog=[SYSLOG] install --- xenserver/install.img

    The problem is that iPXE uses a different syntax, and I could not figure out how to convert this configuration to work with iPXE. Here is my iPXE file so far:

        #!ipxe
        echo "XEN Server is booting up"
        initrd http://server-ip/pxe/xen/boot/xen.gz
        kernel http://server-ip/pxe/xen/boot/pxelinux/mboot.c32
        boot

    Can anyone supply the correct configuration?
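
    A hedged sketch of the usual translation: iPXE can load multiboot kernels natively, so xen.gz becomes the kernel, the pxelinux helper mboot.c32 drops out, and the remaining images are pulled in with module. The paths and the answerfile URL are carried over from the question and untested:

        #!ipxe
        echo XenServer is booting up
        kernel http://server-ip/pxe/xen/boot/xen.gz dom0_max_vcpus=1-2 dom0_mem=752M,max:752M console=vga
        module http://server-ip/pxe/xen/boot/vmlinuz xencons=hvc console=hvc0 console=tty0 answerfile=http://[pxehost]/answerfile.xml install
        module http://server-ip/pxe/xen/boot/install.img
        boot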

    Read the article

  • I get a Segmentation fault when doing apt-get util-linux

    - by Adam
    I've found that a lot of upgrade commands and Apache on my system are failing with Segmentation faults. I don't know if this is the main one, but a lot of packages depend on util-linux:

        root@myUbuntuHardyHeronServer:~# apt-get install util-linux
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be upgraded:
          util-linux
        1 upgraded, 0 newly installed, 0 to remove and 72 not upgraded.
        20 not fully installed or removed.
        Need to get 0B/441kB of archives.
        After this operation, 0B of additional disk space will be used.
        (Reading database ... 20547 files and directories currently installed.)
        Preparing to replace util-linux 2.13.1-5ubuntu2 (using .../util-linux_2.13.1-5ubuntu3.1_i386.deb) ...
        Unpacking replacement util-linux ...
        Segmentation fault
        dpkg: warning - old post-removal script returned error exit status 139
        dpkg - trying script from the new package instead ...
        Segmentation fault
        dpkg: error processing /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb (--unpack):
         subprocess new post-removal script returned error exit status 139
        Segmentation fault
        dpkg: error while cleaning up:
         subprocess post-removal script returned error exit status 139
        Errors were encountered while processing:
         /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • nginx points the sub-directory of an alias folder to the base directory

    - by Starry
    I am new to nginx. Now I have a question about nginx configuration. My web site serves folders from different locations:

        location / { root /Path1; }
        location ^~ /personal { alias /Path2; }

    When I query http://mysite/personal, I am accessing the content of /Path2 instead of /Path1. Now I want to add a sub-directory of /personal with a specific configuration, so I added:

        location /personal/download { autoindex on; }

    But I get a 404 error when querying http://mysite/personal/download. According to the error log, I am directed to /Path1/personal/download, which is not correct. How can I configure nginx such that all access to http://mysite/personal/* is directed to the same directory under /Path2?
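
    A sketch of one common fix: give the download location its own prefix match and its own alias, because alias only applies inside the location block that declares it; a bare location /personal/download falls back to the root of location /, which is exactly the /Path1/personal/download seen in the error log:

        location ^~ /personal {
            alias /Path2;
        }

        # the longest prefix match wins, so this block takes /personal/download
        location ^~ /personal/download {
            alias /Path2/download;
            autoindex on;
        }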

    Read the article

  • Setup asp.net mvc application as subdomain website

    - by a_m0d
    I'm trying to setup a local application on a subdomain on our company server. There is already an installation of sharepoint running on http://companyweb/, but I would like my application to run on http://orders.companyweb/. I tried creating a new website, leaving the IP address the same as it is for http://companyweb, and just changing the host header value to orders.companyweb. However, no matter where I try to access the site from (different computers around the network, including the server itself), I keep getting 404 errors. I then tried setting up a simple index.html and serving that up as the highest priority; however, I still got 404 errors. This makes me think that I have actually setup the site itself wrong. What should I change to be able to access this application correctly on all the local computers?
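
    One hedged first check before touching IIS again: a host-header site only answers if the name actually resolves to the server's IP, and orders.companyweb may have no DNS entry even though companyweb does. Pinning it on a test client (placeholder IP below) and retrying separates a name-resolution failure from a binding problem:

        # added to C:\Windows\System32\drivers\etc\hosts on a test client
        192.0.2.25    orders.companyweb

    If the 404s persist even with the name resolving, the next suspect is the SharePoint site on the same IP and port swallowing the requests.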

    Read the article

  • Vanilla TeX Live 2009 on Ubuntu

    - by reprogrammer
    I installed TeX Live 2009 by following the instructions at http://www.tug.org/texlive/quickinstall. Then, to make my local TeX Live installation work with the Ubuntu package management system, I followed the instructions at http://www.tug.org/texlive/debian.html. That is, I performed the following steps:

        $ sudo aptitude install equivs
        $ mkdir /tmp/tl-equivs && cd /tmp/tl-equivs
        $ equivs-control texlive-local
        # I replaced the contents of texlive-local with http://www.tug.org/texlive/debian-control-ex.txt
        $ equivs-build texlive-local
        $ sudo dpkg -i texlive-local_2009-1~1_all.deb

    However, when I go about installing kile through the Ubuntu package management system, it requires me to install a lot of dependencies that are already provided by my texlive-local package. Does anyone have a suggestion to fix this problem?
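
    A hedged way to pin down the mismatch: kile only stays lean if the dummy package Provides exactly the TeX Live package names that kile depends on, so list those names and extend the control file where they are missing:

        $ apt-cache depends kile | grep Depends
        # make sure each texlive-* name printed there appears on the
        # Provides: line of the texlive-local control file, then rebuild
        # and reinstall the dummy package:
        $ equivs-build texlive-local
        $ sudo dpkg -i texlive-local_2009-1~1_all.deb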

    Read the article

  • Configuring a Jetty web application on a different port

    - by sHz
    Hi folks, I'm brand new to Jetty. I'd like to ask if it's possible to have Jetty listening on port 8080, however, where specified, serve a specific web application under say /var/jetty/webapps/<appname> (default on CentOS) on say port 10000 instead of http://localhost:8080/<appname>, i.e. http://localhost:10000/ = http://localhost:8080/<appname>? If so, what configuration changes would be required to make this work without an additional proxy server? I've googled away, but haven't found a solution (perhaps I've missed something obvious?).
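
    A hedged sketch for a Jetty 6.x layout (the CentOS default of that era, so the class names and the connector name are assumptions): add a named connector in jetty.xml, then pin the webapp to contextPath / on that connector only via a context descriptor:

        <!-- in jetty.xml: a second listener on port 10000 -->
        <Call name="addConnector">
          <Arg>
            <New class="org.mortbay.jetty.nio.SelectChannelConnector">
              <Set name="port">10000</Set>
              <Set name="name">app10000</Set>
            </New>
          </Arg>
        </Call>

        <!-- in contexts/appname.xml: serve the app at / on that connector only -->
        <Configure class="org.mortbay.jetty.webapp.WebAppContext">
          <Set name="contextPath">/</Set>
          <Set name="war"><SystemProperty name="jetty.home"/>/webapps/appname</Set>
          <Set name="connectorNames">
            <Array type="String"><Item>app10000</Item></Array>
          </Set>
        </Configure>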

    Read the article

  • Program for drawing with a pen tablet, like Salman Khan's

    - by Halst
    I do a lot of sketching with my pen tablet. I use Microsoft Paint in Windows 7, and it is just perfect except for bad anti-aliasing. I found some videos by Salman Khan where his sketching is really smooth and anti-aliased. Do you know what program he might use? You can see a bit of its interface here: http://www.khanacademy.org/press/chronicle.HTML and some more: http://www.khanacademy.org/ http://khanexercises.appspot.com/video?v=GW8ZPjGlk24 Otherwise, you can recommend me something else. I hope to find something like Microsoft Paint in Windows 7, but anti-aliased, or whatever.

    Read the article

  • [Apache] mod_rewrite www.site.com/dir/ --> www.site.com/dir/2009/

    - by Casey
    I'm having trouble with this rewrite. I've never really used mod_rewrite before and don't have much experience with regex. Any help is appreciated!

        <IfModule mod_rewrite.c>
        Options +FollowSymLinks
        RewriteEngine on
        #prevent nested looping
        RewriteCond %{ENV:REDIRECT_STATUS} ^$
        #re-route incoming requests
        RewriteRule ^(.*)$ %{REQUEST_URI}2009/$1 [L,NE]
        </IfModule>

    This partially works: http://www.site.com/dir/ is routed to http://www.site.com/dir/2009/, but a request like http://www.site.com/dir/css/theme.css fails. I'm hoping to rewrite all requests to the parent directory into the 2009 subdirectory, but I keep encountering infinite loops and server error messages. I haven't found any useful examples out there. I figured this would be a common rewrite... Thanks in advance!
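
    A hedged version of the usual fix, assuming the rules live in /dir/.htaccess: test the request path instead of the redirect status and stop once it already points into 2009/, which is what breaks the loop for requests like css/theme.css:

        <IfModule mod_rewrite.c>
        Options +FollowSymLinks
        RewriteEngine on
        RewriteBase /dir
        # skip anything already rewritten into the 2009 subdirectory
        RewriteCond %{REQUEST_URI} !^/dir/2009/
        RewriteRule ^(.*)$ 2009/$1 [L]
        </IfModule>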

    Read the article

  • Python coding with VLC player (quite a basic query I expect)

    - by Todd
    I'm fairly new to the whole coding realm so my knowledge is fairly limited, and I can't seem to find any basic tutorials on how to use scripts with VLC player. More specifically, the reason I'm asking here is because I stumbled across a post on this site about playing random clips from random videos on VLC player automatically. This is the forum post: Playback random section from multiple videos changing every 5 minutes My situation is similar to this lovely gentleman's was, though he clearly knows a lot more about coding than I do. In short, I'd like to copy this coding into a file of some sort and apply it to VLC player myself. Only I'm not sure what file type I'd have to save it as (I have Python by the way, and I tried saving it as a .py file but I didn't know if it was correct or where to go from there). Additionally, I'm not sure how to get VLC to "read" the script, so to speak - is there a specific location the file needs to be, and do I run the script from another program or through VLC? I'll reiterate that I'm relatively new to this, so if anybody would be so kind as to post a quick list of steps on how to save/place the file and use it with VLC player I really would appreciate it! P.S. I'm not computer illiterate, I'm fine with most programs and I'd understand if you just said things like "C:\Program Files (x86)\VideoLAN\VLC\plugins" or "in VLC, select Tools Plugins and extensions", I just wouldn't catch on to anything about adding a line of coding that does something without being told exactly what to write! Many thanks in advance! :) Todd
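
    For the saving-and-running part, one hedged route is to drive VLC from Python instead of making VLC read the script: save a sketch like the one below as randomclip.py, install the python-vlc bindings (pip install python-vlc, with desktop VLC already installed), and run it with python randomclip.py from a command prompt. The video path and clip length are placeholders:

        import random
        import time
        import vlc  # pip install python-vlc

        player = vlc.MediaPlayer("C:/videos/example.mp4")  # placeholder path
        player.play()
        time.sleep(0.5)                      # give VLC a moment to open the file
        length_ms = player.get_length()      # total running time in milliseconds
        start = random.randint(0, max(0, length_ms - 30000))
        player.set_time(start)               # jump to a random point
        time.sleep(30)                       # let the 30-second clip play
        player.stop()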

    Read the article

  • puppet master REST API returns 403 when running under passenger, works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in the puppet install for the puppet master, which is running through passenger under Nginx. However, for most of the catalog, file and certificate requests I get a 403 response.

        ### Authenticated paths - these apply only when the client
        ### has a valid certificate and is thus authenticated

        # allow nodes to retrieve their own catalog
        path ~ ^/catalog/([^/]+)$
        method find
        allow $1

        # allow nodes to retrieve their own node definition
        path ~ ^/node/([^/]+)$
        method find
        allow $1

        # allow all nodes to access the certificates services
        path ~ ^/certificate_revocation_list/ca
        method find
        allow *

        # allow all nodes to store their reports
        path /report
        method save
        allow *

        # unconditionally allow access to all file services
        # which means in practice that fileserver.conf will
        # still be used
        path /file
        allow *

        ### Unauthenticated ACL, for clients for which the current master doesn't
        ### have a valid certificate; we allow authenticated users, too, because
        ### there isn't a great harm in letting that request through.

        # allow access to the master CA
        path /certificate/ca
        auth any
        method find
        allow *

        path /certificate/
        auth any
        method find
        allow *

        path /certificate_request
        auth any
        method find, save
        allow *

        path /facts
        auth any
        method find, search
        allow *

        # this one is not strictly necessary, but it has the merit
        # of showing the default policy, which is deny everything else
        path /
        auth any

    Puppet master however does not seem to be following this, as I get this error on the client:

        [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
        [sudo] password for amisr1:
        Starting Puppet client version 3.0.1
        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
        Info: Retrieving plugin
        Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
        Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
        Using cached catalog
        Error: Could not retrieve catalog; skipping run
        Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver conf file is as follows (and going by what they say on the Puppet site, it is better to regulate access in auth.conf for reaching the file server and then allow the file server to serve all):

        [files]
        path /apps/puppet/files
        allow *

        [private]
        path /apps/puppet/private/%H
        allow *

        [modules]
        allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

        nginx version: nginx/1.3.9
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and the standard nginx puppet master conf:

        server {
            ssl on;
            listen 8140 ssl;
            server_name _;
            passenger_enabled on;
            passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
            passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
            passenger_min_instances 5;
            access_log logs/puppet_access.log;
            error_log logs/puppet_error.log;
            root /apps/nginx/html/rack/public;
            ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
            ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
            ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
            ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
            ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
            ssl_prefer_server_ciphers on;
            ssl_verify_client optional;
            ssl_verify_depth 1;
            ssl_session_cache shared:SSL:128m;
            ssl_session_timeout 5m;
        }

    Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet:

        [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
        async_storeconfigs = false
        authconfig = /etc/puppet/namespaceauth.conf
        autosign = /etc/puppet/autosign.conf
        catalog_cache_terminus = store_configs
        confdir = /etc/puppet
        config = /etc/puppet/puppet.conf
        config_file_name = puppet.conf
        config_version = ""
        configprint = all
        configtimeout = 120
        dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
        deviceconfig = /etc/puppet/device.conf
        fileserverconfig = /etc/puppet/fileserver.conf
        genconfig = false
        hiera_config = /etc/puppet/hiera.yaml
        localconfig = /var/lib/puppet/state/localconfig
        name = config
        rest_authconfig = /etc/puppet/auth.conf
        storeconfigs = true
        storeconfigs_backend = puppetdb
        tagmap = /etc/puppet/tagmail.conf
        thin_storeconfigs = false

    I checked the firewall rules on this VM; 80, 443, 8140 and 3000 are allowed. Do I still have to tweak any specifics in auth.conf to get this to work?
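
    One hedged thing to check, given that every authenticated path is denied: when the master runs behind Nginx, Puppet only trusts the two passenger_set_cgi_param headers above if puppet.conf on the master names them, e.g.

        [master]
        ssl_client_header = HTTP_X_CLIENT_DN
        ssl_client_verify_header = HTTP_X_CLIENT_VERIFY

    Without these, the master never sees the client certificate's DN, treats every agent as unauthenticated, and auth.conf denies the requests exactly as logged.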

    Read the article

  • Using .htaccess to protect direct access of files

    - by claydough
    We need to prevent direct access of files on our site from someone just entering a URL in their browser. I got this to work by using an .htaccess file and it is fine in IE & Safari, but for some reason Firefox doesn't cooperate. I think it has something to do with the way Firefox reports referrers. Here is my code in the .htaccess file:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_REFERER} !^http://(my\.)?bigtimbermedia\.com/.*$ [NC]
        RewriteRule \.(swf|gif|png|jpg|doc|xls|pdf|html|htm|xlsx|docx)$ http://my.bigtimbermedia.com/ [R,L]

    If you want to see an example of this, try accessing this first: http://my.bigtimbermedia.com/books/bpGreyWolvesflip/index.html It blocks it properly in all browsers. Now if you go to this URL and click on the link, it works in IE and Safari, but Firefox chokes and seems to get stuck in a loop. Any ideas how I can get this to work in Firefox? Thanks!
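
    A hedged variant of the rule set that also lets requests with a blank referrer through; Firefox privacy settings and some extensions strip the Referer header even on legitimate clicks, which makes the negated match fire:

        RewriteEngine On
        RewriteBase /
        # allow requests that carry no referrer at all
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://(my\.)?bigtimbermedia\.com/ [NC]
        RewriteRule \.(swf|gif|png|jpg|doc|xls|pdf|html|htm|xlsx|docx)$ http://my.bigtimbermedia.com/ [R,L]

    The trade-off is that URLs typed directly into the address bar also arrive without a referrer, so this loosens the protection; referrer checks are best-effort by nature.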

    Read the article

  • Recommendations for client-side performance monitoring (boomerang/jiffy/episodes)

    - by Yasei No Umi
    There are a few client-side JavaScript libraries that check web-site performance on the client side: Jiffy (http://code.google.com/p/jiffy-web/) Episodes (http://stevesouders.com/episodes/) by Steve Souders Boomerang (http://yahoo.github.com/boomerang/doc/) by Yahoo! Have you used any of them, or a similar tool? What did you use for the server side? For reporting? Is this a recommended approach? If not, how should I monitor my web-site performance from the end user's view?
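
    For scale, wiring up boomerang is only a couple of lines; the script path and the beacon URL below are placeholders, and the beacon endpoint is whatever server-side collector you write for it:

        <script src="/js/boomerang.js"></script>
        <script>
          BOOMR.init({
            beacon_url: "/perf/beacon"  // placeholder collection endpoint
          });
        </script>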

    Read the article

  • How do I set a default host for nginx?

    - by ulf
    I'm trying to figure out how to set a default host for my nginx installation. I found this article in the nginx Wiki: http://wiki.nginx.org/NginxVirtualHostExample#A_Default_Catchall_Virtual_Host Unfortunately, this doesn’t work. After restarting I get this: Restarting nginx: nginx: [emerg] unknown directive "http" in /etc/nginx/sites-enabled/catchall:1 nginx: configuration file /etc/nginx/nginx.conf test failed After removing the http directive I get this: Restarting nginx: nginx: [emerg] unknown log format "main" in /etc/nginx/sites-enabled/catchall:7 nginx: configuration file /etc/nginx/nginx.conf test failed I’m on Ubuntu 10.04.3 where I’m using the official nginx PPA. Version 1.0.9 of nginx is running.
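
    Both errors point the same way: the wiki example is a complete nginx.conf with its own http { } wrapper and a "main" log format that Ubuntu's layout never defines, while files in sites-enabled are included inside an existing http { } block. A hedged catchall for sites-enabled is therefore just the server block:

        server {
            listen 80 default_server;
            server_name _;
            access_log /var/log/nginx/catchall.access.log;  # default log format
            return 444;  # close requests that match no other vhost
        }

    If another vhost already claims default_server (or default) on port 80, it has to give that flag up first.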

    Read the article

  • route port 3000 to apache2 alias

    - by user223470
    I have a Meteor application running on port 3000. I can successfully connect to the program at www.myurl.com:3000, but would rather connect to it via www.myurl.com/myapp. I started with the instructions on this web site: http://www.andrehonsberg.com/article/deploy-meteorjs-vhosts-ubuntu1204-mongodb-apache-proxy and I have the following Apache configuration file:

        <VirtualHost *:80>
            ServerName myurl.com
            ProxyRequests off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            <Location />
                ProxyPass http://localhost:3000/
                ProxyPassReverse http://localhost:3000/
            </Location>
        </VirtualHost>

    I do not know how to continue from here to get the program at www.myurl.com/myapp. In other situations I would use an Alias within the Apache configuration file, but that doesn't seem like the right direction to go in this case. How do I configure Apache to send port 3000 to www.myurl.com/myapp?
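
    A hedged variant of the vhost for a sub-path instead of /: proxy only /myapp through, and start Meteor with a matching ROOT_URL so the asset and DDP URLs it generates carry the prefix; otherwise the app's own links still point at /:

        <VirtualHost *:80>
            ServerName myurl.com
            ProxyRequests off
            ProxyPass /myapp http://localhost:3000/
            ProxyPassReverse /myapp http://localhost:3000/
        </VirtualHost>

        # when starting the app:
        # export ROOT_URL=http://www.myurl.com/myapp

    Websocket/long-polling traffic may need extra proxy rules depending on the Apache and Meteor versions; that part is untested here.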

    Read the article

  • Monitoring mongrel with monit

    - by matnagel
    I wrote a monit.d file for mongrels which works in this version:

        check process redmine with pidfile /home/redmine/service/redmine.pid
          group webservice
          start program = "/usr/bin/mongrel_rails start -p 41328 -e production -d --pid /home/redmine/service/redmine.pid --user redmine --group redmine -a 127.0.0.1 -c /home/redmine/app"
          stop program = "/usr/bin/mongrel_rails stop --pid /home/redmine/service/redmine.pid -c /home/redmine/app && rm /home/redmine/service/redmine.pid > /dev/null 2>&1
          if cpu greater 50% for 2 cycles then alert
          if cpu greater 80% for 3 cycles then restart
          if totalmem greater 60.0 MB for 5 cycles then restart
          if loadavg (5min) greater 4 for 8 cycles then restart
          if 3 restarts within 5 cycles then timeout

        $ Checking monit control file syntax...
        $ Control file syntax OK

    I want to also monitor the http response, so I add this line at the end:

        if failed port 41328 protocol http with timeout 10 seconds then restart

    Now monit complains:

        $ Checking monit control file syntax...
        $ /etc/monit.d/redmine:16: Error: exceeded maximum number of program arguments 'http'
        $ ERROR: CHECK MONIT CONFIG FILE SYNTAX

    How do I correctly monitor the port?
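
    A hedged reading of that error: the stop program line never closes its opening double quote, so monit keeps consuming the following text as part of the command string, and the added check finally pushes the tokenized command over monit's argument limit right at 'http'. Closing the quote should let the port check parse:

        stop program = "/usr/bin/mongrel_rails stop --pid /home/redmine/service/redmine.pid -c /home/redmine/app && rm /home/redmine/service/redmine.pid > /dev/null 2>&1"
        if failed port 41328 protocol http with timeout 10 seconds then restart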

    Read the article

  • php file downloads instead of being processed with ajax on apache

    - by eagleon
    I have a small website where some content is displayed within an HTML tag using AJAX. The content is simply taken from another page on the same web site. However, sometimes instead of loading the parsed PHP file, the browser displays a download box. I downloaded the file, and this is what it looks like: a text file mixed with binary or gzipped data. I can't paste the binary stuff here, but here are some of the headers:

        Jul 2012 18:52:16 GMT
        Server: Apache/2
        X-Powered-By: PHP/5.3.10
        Content-Encoding: gzip
        Vary: Accept-Encoding,User-Agent
        Keep-Alive: timeout=1, max=95
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html

        HTTP/1.1 304 Not Modified
        Date: Sun, 01 Jul 2012 18:52:16 GMT
        Server: Apache/2
        Connection: Keep-Alive
        Keep-Alive: timeout=1, max=93
        ETag: "2fc857-409-4c39691c59b40"

        HTTP/1.1 304 Not Modified
        Date: Sun, 01 Jul 2012 18:52:16 GMT
        Server: Apache/2
        Connection: Keep-Alive
        Keep-Alive: timeout=1, max=92
        ETag: "2fc854-3e5-4c39691b65900"

        HTTP/1.1 304 Not Modified
        Date: Sun, 01 Jul 2012 18:52:16 GMT
        Server: Apache/2
        Connection: Keep-Alive
        Keep-Alive: timeout=1, max=91
        ETag: "2fc847-3e3-4c3969197d480"

    and large blocks of stuff like this: µàl]&BaËÜk#ìÏ

    Read the article

  • VPS nameserver setup?

    - by user41010
    Hi, I bought a VPS a few days back and had a domain name registered. It gave me 2 nameservers. I only have shell access (no cPanel/WHM) and it's running CentOS 5. I can visit my site at http://IP/ but not at http://domain.com. What changes do I need to make so that I can visit my site at http://domain.com? I'm really new at this and any help would be greatly appreciated. Thanks.
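
    A hedged sketch of the pieces involved: the registrar must point domain.com at the two nameservers, and the zone those nameservers serve needs A records for the bare domain and www, pointing at the VPS (placeholder IP below). The web server's vhost must then also answer for that name.

        ; zone file fragment (BIND syntax, placeholder IP)
        domain.com.      IN  A  203.0.113.10
        www.domain.com.  IN  A  203.0.113.10

    If the two nameservers belong to the VPS provider rather than to software on the VPS itself, the records are usually added in the provider's DNS panel instead of a local zone file.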

    Read the article

  • Existing connections on Apache with mod_proxy_balancer don't fail over to the second JBoss node

    - by Jean-Rémy Revy
    I have a JBoss farm, load-balanced by Apache HTTP + mod_proxy_balancer and mod_proxy_ajp, with the following configuration:

        <VirtualHost *:80>
            ServerName web-gui-acceptance.myorg.com
            ServerAlias web-gui-acceptance
            ProxyRequests Off
            ProxyPass /web-gui balancer://jbosscluster/web-gui stickysession=JSESSIONID nofailover=On
            ProxyPassReverse /web-gui http://srvlnx01.myorg.com:8080/web-gui
            ProxyPassReverse /web-gui http://srvlnx02.myorg.com:8080/web-gui

            <Proxy *>
                AuthType Kerberos
                [...]
            </Proxy>

            <Proxy balancer://jbosscluster>
                BalancerMember ajp://srvlnx01.myorg.com:8009 route=SRVLNX01_node1
                BalancerMember ajp://srvlnx01.myorg.com:8009 route=SRVLNX02_node1
                ProxySet lbmethod=byrequests
            </Proxy>
        </VirtualHost>

    When the first JBoss node fails (the hosting VM is down), my existing connections don't fail over to the second node ... the first route is kept (in the table / .shm?) and that gives me 503 errors. Can someone tell me what I missed?
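
    Two things in that configuration stand out. The second BalancerMember repeats ajp://srvlnx01 where srvlnx02 was presumably intended, so there is no real second node to fail over to; and nofailover=On explicitly tells mod_proxy_balancer to keep sticky sessions on their original route even when it is dead. A hedged correction:

        ProxyPass /web-gui balancer://jbosscluster/web-gui stickysession=JSESSIONID nofailover=Off

        <Proxy balancer://jbosscluster>
            BalancerMember ajp://srvlnx01.myorg.com:8009 route=SRVLNX01_node1
            BalancerMember ajp://srvlnx02.myorg.com:8009 route=SRVLNX02_node1
            ProxySet lbmethod=byrequests
        </Proxy>

    Failed-over users will still lose their JBoss session unless the webapp replicates sessions across the farm.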

    Read the article
