Search Results

Search found 1902 results on 77 pages for 'nginx'.


  • Should I use a separate 'admin' user as my "root sudo" or grant sudo to my 'app' user?

    - by AJB
    I'm still wrapping my brain around the Ubuntu 'nullify root' user management philosophy (and Linux in general), and I'm wondering if I should 'replace' my root user with a user called 'admin' (which, via sudo, has all the powers of root) and create another user called 'app' that will be the primary user for my app. Here's the context:

      - I'll be running a LNMP stack on Ubuntu 12.04 Server LTS.
      - There will be only one app running on the server.
      - The 'app' user needs to have SUPER privileges for MySQL.
      - PHP will need to be able to exec() shell commands.
      - The 'app' user will need to be able to transfer files via SFTP.

    I'm thinking this would be the best approach:

      - Nullify the 'root' user.
      - Create a user called 'admin' that will be a full sudoer; this will be the new "root" user for NGINX, PHP, and MySQL (and all system software).
      - Grant SUPER privileges to 'app' in MySQL.
      - Grant SFTP privileges to only the 'app' user.

    As I'm new to this, and the information I've found in researching it tends to be of a more general nature, I'm wondering if this is a solid approach, or if it's unorthodox in a way that would cause issues down the road. Thanks in advance for any help.
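    A minimal sketch of that plan on Ubuntu 12.04 (the user names come from the question; the SFTP chroot details are assumptions and require Subsystem sftp internal-sftp in sshd_config):

      # root is already locked on stock Ubuntu; membership in the 'sudo'
      # group makes 'admin' a full sudoer
      adduser admin
      adduser admin sudo
      adduser app

      # in a mysql shell, grant the app user SUPER:
      #   GRANT SUPER ON *.* TO 'app'@'localhost';

      # /etc/ssh/sshd_config - limit 'app' to SFTP file transfer only
      #   Match User app
      #       ForceCommand internal-sftp
      #       ChrootDirectory /home/app   # must be root-owned for a chroot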

    Read the article

  • How do web servers enforce the same-origin policy?

    - by BBnyc
    I'm diving deeper into developing RESTful APIs and have so far worked with a few different frameworks to achieve this. Of course I've run into the same-origin policy, and now I'm wondering how web servers (rather than web browsers) enforce it. From what I understand, some enforcement happens on the browser's end (e.g., honoring an Access-Control-Allow-Origin header received from a server). But what about the server? For example, let's say a web server hosts a JavaScript web app that accesses an API, also hosted on that server. I assume that server would enforce the same-origin policy, so that only the JavaScript hosted on that server would be allowed to access the API. This would prevent someone else from writing a JavaScript client for that API and hosting it on another site, right? So how would a web server be able to stop a malicious client that makes AJAX requests to its API endpoints while claiming to run JavaScript that originated from that same web server? How do the most popular servers (Apache, nginx) protect against this kind of attack? Or is my understanding of this somehow off the mark? Or is the same-origin policy only enforced on the client end?
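    For reference, the server's role in CORS is purely declarative; a minimal nginx sketch (the origin value is an example). The header only tells browsers what to allow; enforcement happens in the browser, and a non-browser client is free to ignore it:

      location /api/ {
          # browsers honor this; curl or a custom script will not
          add_header Access-Control-Allow-Origin "https://app.example.com";
      }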

    Read the article

  • nagios NRPE: Unable to read output

    - by user555854
    I set up a script to restart my HTTP servers + php5-fpm but can't get it to work. From googling I've found that permissions are the usual cause of this error, but I can't figure it out. I start my script using:

      /usr/lib/nagios/plugins/check_nrpe -H bart -c restart_http

    This is the output in my syslog on the node I want to restart:

      Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 port 25028
      Jun 27 06:29:35 bart nrpe[8926]: Host address is in allowed_hosts
      Jun 27 06:29:35 bart nrpe[8926]: Handling the connection...
      Jun 27 06:29:35 bart nrpe[8926]: Host is asking for command 'restart_http' to be run...
      Jun 27 06:29:35 bart nrpe[8926]: Running command: /usr/bin/sudo /usr/lib/nagios/plugins/http-restart
      Jun 27 06:29:35 bart nrpe[8926]: Command completed with return code 1 and output:
      Jun 27 06:29:35 bart nrpe[8926]: Return Code: 1, Output: NRPE: Unable to read output
      Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 closed.

    If I run the command myself (as the nagios user) it runs fine, but it asks for a password. These are the script permissions and the script contents:

      -rwxrwxrwx 1 nagios nagios 142 Jun 26 21:41 /usr/lib/nagios/plugins/http-restart

      #!/bin/bash
      echo "ok"
      /etc/init.d/nginx stop
      /etc/init.d/nginx start
      /etc/init.d/php5-fpm stop
      /etc/init.d/php5-fpm start
      echo "done"

    I also added this line via visudo:

      nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/

    My local nagios nrpe.cfg:

      #############################################################################
      # Sample NRPE Config File
      # Written by: Ethan Galstad ([email protected])
      #
      # NOTES:
      # This is a sample configuration file for the NRPE daemon. It needs to be
      # located on the remote host that is running the NRPE daemon, not the host
      # from which the check_nrpe client is being executed.
      #############################################################################

      # LOG FACILITY
      # The syslog facility that should be used for logging purposes.
      log_facility=daemon

      # PID FILE
      # The name of the file in which the NRPE daemon should write its process ID
      # number. The file is only written if the NRPE daemon is started by the root
      # user and is running in standalone mode.
      pid_file=/var/run/nagios/nrpe.pid

      # PORT NUMBER
      # Port number we should wait for connections on.
      # NOTE: This must be a non-privileged port (i.e. > 1024).
      # NOTE: This option is ignored if NRPE is running under either inetd or xinetd
      server_port=5666

      # SERVER ADDRESS
      # Address that nrpe should bind to in case there are more than one interface
      # and you do not want nrpe to bind on all interfaces.
      # NOTE: This option is ignored if NRPE is running under either inetd or xinetd
      #server_address=127.0.0.1

      # NRPE USER
      # This determines the effective user that the NRPE daemon should run as.
      # You can either supply a username or a UID.
      # NOTE: This option is ignored if NRPE is running under either inetd or xinetd
      nrpe_user=nagios

      # NRPE GROUP
      # This determines the effective group that the NRPE daemon should run as.
      # You can either supply a group name or a GID.
      # NOTE: This option is ignored if NRPE is running under either inetd or xinetd
      nrpe_group=nagios

      # ALLOWED HOST ADDRESSES
      # This is an optional comma-delimited list of IP address or hostnames
      # that are allowed to talk to the NRPE daemon.
      # Note: The daemon only does rudimentary checking of the client's IP
      # address. I would highly recommend adding entries in your /etc/hosts.allow
      # file to allow only the specified host to connect to the port
      # you are running this daemon on.
      # NOTE: This option is ignored if NRPE is running under either inetd or xinetd
      allowed_hosts=127.0.0.1,192.168.133.17

      # COMMAND ARGUMENT PROCESSING
      # This option determines whether or not the NRPE daemon will allow clients
      # to specify arguments to commands that are executed. This option only works
      # if the daemon was configured with the --enable-command-args configure script
      # option.
      # *** ENABLING THIS OPTION IS A SECURITY RISK! ***
      # Read the SECURITY file for information on some of the security implications
      # of enabling this variable.
      # Values: 0=do not allow arguments, 1=allow command arguments
      dont_blame_nrpe=0

      # COMMAND PREFIX
      # This option allows you to prefix all commands with a user-defined string.
      # A space is automatically added between the specified prefix string and the
      # command line from the command definition.
      # *** THIS EXAMPLE MAY POSE A POTENTIAL SECURITY RISK, SO USE WITH CAUTION! ***
      # Usage scenario:
      # Execute restricted commands using sudo. For this to work, you need to add
      # the nagios user to your /etc/sudoers. An example entry for allowing
      # execution of the plugins might be:
      #
      # nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/
      #
      # This lets the nagios user run all commands in that directory (and only them)
      # without asking for a password. If you do this, make sure you don't give
      # random users write access to that directory or its contents!
      command_prefix=/usr/bin/sudo

      # DEBUGGING OPTION
      # This option determines whether or not debugging messages are logged to the
      # syslog facility.
      # Values: 0=debugging off, 1=debugging on
      debug=1

      # COMMAND TIMEOUT
      # This specifies the maximum number of seconds that the NRPE daemon will
      # allow plugins to finish executing before killing them off.
      command_timeout=60

      # CONNECTION TIMEOUT
      # This specifies the maximum number of seconds that the NRPE daemon will
      # wait for a connection to be established before exiting. This is sometimes
      # seen where a network problem stops the SSL being established even though
      # all network sessions are connected. This causes the nrpe daemons to
      # accumulate, eating system resources. Do not set this too low.
      connection_timeout=300

      # WEAK RANDOM SEED OPTION
      # This directive allows you to use SSL even if your system does not have
      # a /dev/random or /dev/urandom (on purpose or because the necessary patches
      # were not applied). The random number generator will be seeded from a file
      # which is either a file pointed to by the environment variable $RANDFILE
      # or $HOME/.rnd. If neither exists, the pseudo random number generator will
      # be initialized and a warning will be issued.
      # Values: 0=only seed from /dev/[u]random, 1=also seed from weak randomness
      #allow_weak_random_seed=1

      # INCLUDE CONFIG FILE
      # This directive allows you to include definitions from an external config file.
      #include=<somefile.cfg>

      # INCLUDE CONFIG DIRECTORY
      # This directive allows you to include definitions from config files (with a
      # .cfg extension) in one or more directories (with recursion).
      #include_dir=<somedirectory>
      #include_dir=<someotherdirectory>

      # COMMAND DEFINITIONS
      # Command definitions that this daemon will run. Definitions
      # are in the following format:
      #
      # command[<command_name>]=<command_line>
      #
      # When the daemon receives a request to return the results of <command_name>
      # it will execute the command specified by the <command_line> argument.
      #
      # Unlike Nagios, the command line cannot contain macros - it must be
      # typed exactly as it should be executed.
      #
      # Note: Any plugins that are used in the command lines must reside
      # on the machine that this daemon is running on! The examples below
      # assume that you have plugins installed in a /usr/local/nagios/libexec
      # directory. Also note that you will have to modify the definitions below
      # to match the argument format the plugins expect. Remember, these are
      # examples only!

      # The following examples use hardcoded command arguments...
      command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
      command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
      command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1
      command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
      command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200

      # The following examples allow user-supplied arguments and can
      # only be used if the NRPE daemon was compiled with support for
      # command arguments *AND* the dont_blame_nrpe directive in this
      # config file is set to '1'. This poses a potential security risk, so
      # make sure you read the SECURITY file before doing this.
      #command[check_users]=/usr/lib/nagios/plugins/check_users -w $ARG1$ -c $ARG2$
      #command[check_load]=/usr/lib/nagios/plugins/check_load -w $ARG1$ -c $ARG2$
      #command[check_disk]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$
      #command[check_procs]=/usr/lib/nagios/plugins/check_procs -w $ARG1$ -c $ARG2$ -s $ARG3$

      command[restart_http]=/usr/lib/nagios/plugins/http-restart

      # local configuration:
      # if you'd prefer, you can instead place directives here
      include=/etc/nagios/nrpe_local.cfg

      # you can place your config snippets into nrpe.d/
      include_dir=/etc/nagios/nrpe.d/

    My sudoers file:

      # /etc/sudoers
      #
      # This file MUST be edited with the 'visudo' command as root.
      #
      # See the man page for details on how to write a sudoers file.
      #
      Defaults env_reset

      # Host alias specification

      # User alias specification

      # Cmnd alias specification

      # User privilege specification
      root ALL=(ALL) ALL
      nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/

      # Allow members of group sudo to execute any command
      # (Note that later entries override this, so you might need to move
      # it further down)
      %sudo ALL=(ALL) ALL
      #
      #includedir /etc/sudoers.d

    Hopefully someone can help!
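    Two things worth checking here, as a hedged sketch (the tighter sudoers entry and the test command are suggestions, not a confirmed fix): reproduce exactly what NRPE runs while acting as the nagios user, and try a sudoers entry that names the script explicitly:

      # reproduce the NRPE invocation as the nagios user:
      su - nagios -s /bin/bash -c 'sudo /usr/lib/nagios/plugins/http-restart'

      # tighter sudoers entry (edit with visudo), naming the exact script:
      nagios ALL=(root) NOPASSWD: /usr/lib/nagios/plugins/http-restart

    If the su test still prompts for a password, the sudoers entry is not matching, which would also explain why NRPE gets no output back.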

    Read the article

  • Redirect with htaccess for images onto another server without redirect looping

    - by Jeff
    Hey guys, I currently have a host where my main site lives. I have set up nginx on another server to mirror/cache requested files if it doesn't already have them, in particular images and flv videos. For example, www.domain.com is my main site:

      www.domain.com/video/video.flv
      www.domain.com/images/1.png

    I would like Apache to redirect these to imgserv.domain.com (imgserv.domain.com points to another server IP):

      imgserv.domain.com/video/video.flv
      imgserv.domain.com/images/1.png

    Basically, redirect everything with certain filetypes (flv etc.) while preserving the structure of the URL. I tried something, but I am getting a redirect looping error. Could someone help me out? Thank you!
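    A minimal .htaccess sketch of that redirect (the extension list and the 302 status are assumptions; the host check is the loop guard, since the rule only fires for requests that arrive at the main hostname):

      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^www\.domain\.com$ [NC]
      RewriteRule ^(.+\.(flv|png|jpe?g|gif))$ http://imgserv.domain.com/$1 [R=302,L]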

    Read the article

  • git push problem - argh!

    - by phil swenson
    I don't know what's going on; no response from GitHub on this problem, so I'm asking here. I tried a git push for the first time in a month or so and got an error. I turned on export GIT_CURL_VERBOSE=1, did a push, and got this:

      localhost:send2mobile_rails phil$ git push
      Password:
      * Couldn't find host github.com in the .netrc file; using defaults
      * About to connect() to github.com port 443 (#0)
      * Trying 207.97.227.239...
      * Connected to github.com (207.97.227.239) port 443 (#0)
      * SSL connection using DHE-RSA-AES256-SHA
      * Server certificate:
      * subject: O=*.github.com; OU=Domain Control Validated; CN=*.github.com
      * start date: 2009-12-11 05:02:36 GMT
      * expire date: 2014-12-11 05:02:36 GMT
      * subjectAltName: github.com matched
      * issuer: C=US; ST=Arizona; L=Scottsdale; O=GoDaddy.com, Inc.; OU=http://certificates.godaddy.com/repository; CN=Go Daddy Secure Certification Authority; serialNumber=07969287
      * SSL certificate verify ok.
      > GET /303devworks/send2mobile_rails.git/info/refs?service=git-receive-pack HTTP/1.1
      User-Agent: git/1.7.1
      Host: github.com
      Accept: */*
      Pragma: no-cache

      < HTTP/1.1 401 Authorization Required
      < Server: nginx/0.7.61
      < Date: Tue, 01 Jun 2010 10:53:13 GMT
      < Content-Type: text/html; charset=iso-8859-1
      < Connection: keep-alive
      < Content-Length: 0
      < WWW-Authenticate: Basic realm="Repository"
      <
      * Connection #0 to host github.com left intact
      * Issue another request to this URL: 'https://[email protected]/MYUSERHERE/send2mobile_rails.git/info/refs?service=git-receive-pack'
      * Couldn't find host github.com in the .netrc file; using defaults
      * Re-using existing connection! (#0) with host github.com
      * Connected to github.com (207.97.227.239) port 443 (#0)
      * Server auth using Basic with user '303devworks'
      > GET /303devworks/send2mobile_rails.git/info/refs?service=git-receive-pack HTTP/1.1
      Authorization: Basic MzAzZGVfd29sa3M6Y29nbmwzNzIw
      User-Agent: git/1.7.1
      Host: github.com
      Accept: */*
      Pragma: no-cache

      < HTTP/1.1 200 OK
      < Server: nginx/0.7.61
      < Date: Tue, 01 Jun 2010 10:53:13 GMT
      < Content-Type: application/x-git-receive-pack-advertisement
      < Connection: keep-alive
      < Status: 200 OK
      < Pragma: no-cache
      < Content-Length: 153
      < Expires: Fri, 01 Jan 1980 00:00:00 GMT
      < Cache-Control: no-cache, max-age=0, must-revalidate
      <
      * Expire cleared
      * Connection #0 to host github.com left intact
      Counting objects: 166, done.
      Delta compression using up to 4 threads.
      Compressing objects: 100% (133/133), done.
      * Couldn't find host github.com in the .netrc file; using defaults
      * About to connect() to github.com port 443 (#0)
      * Trying 207.97.227.239...
      * connected
      * Connected to github.com (207.97.227.239) port 443 (#0)
      * SSL re-using session ID
      * SSL connection using DHE-RSA-AES256-SHA
      * old SSL session ID is stale, removing
      * Server certificate:
      * subject: O=*.github.com; OU=Domain Control Validated; CN=*.github.com
      * start date: 2009-12-11 05:02:36 GMT
      * expire date: 2014-12-11 05:02:36 GMT
      * subjectAltName: github.com matched
      * issuer: C=US; ST=Arizona; L=Scottsdale; O=GoDaddy.com, Inc.; OU=http://certificates.godaddy.com/repository; CN=Go Daddy Secure Certification Authority; serialNumber=07969287
      * SSL certificate verify ok.
      * Server auth using Basic with user 'MYUSERHERE'
      > POST /303devworks/send2mobile_rails.git/git-receive-pack HTTP/1.1
      Authorization: Basic JzAzZGV1d29ya3M6Y25nb29zNzIq
      User-Agent: git/1.7.1
      Host: github.com
      Accept-Encoding: deflate, gzip
      Content-Type: application/x-git-receive-pack-request
      Accept: application/x-git-receive-pack-result
      Expect: 100-continue
      Transfer-Encoding: chunked

      * The requested URL returned error: 411
      * Closing connection #0
      error: RPC failed; result=22, HTTP code = 411
      Writing objects: 100% (140/140), 2.28 MiB | 1.93 MiB/s, done.
      Total 140 (delta 24), reused 0 (delta 0)
      ^C
      localhost:send2mobile_rails phil$
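    One detail stands out in the trace: the final POST is sent with Transfer-Encoding: chunked, and the server answers 411 (Length Required). Git falls back to chunked uploads when the pack exceeds http.postBuffer, and some front ends reject that. A hedged workaround sketch (the size is arbitrary, just comfortably larger than the ~2.3 MiB pack):

      git config http.postBuffer 524288000   # buffer up to 500 MB so the
                                             # push is sent with Content-Length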

    Read the article

  • HTML encode UTF-8 string gets mangled into latin1

    - by Ken Mayer
    I'm parsing my nginx logs, and I want to discover some details from the HTTP_REFERER string, for example the query string used to find the web site. One user typed in "México", which gets encoded in the log as "query=M%E9xico". Passing this through Rack::Utils.parse_query('query=M%E9xico') you get a hash, {"query" => "M?xico"}. When you stuff "M?xico" into Postgres (but not the more forgiving SQLite), it pukes because the string isn't proper UTF-8. Looking at http://rack.rubyforge.org/doc/Rack/Utils.html#M000324, unescape is packing a hex string. How can I convert the string back to UTF-8, or can I get parse_query to return UTF-8 in the first place?
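    A sketch of the re-encoding step, assuming the referer really is Latin-1 (which %E9, "é" in ISO-8859-1, suggests):

      require 'cgi'

      raw = CGI.unescape('M%E9xico')   # Latin-1 bytes for "México"

      # Ruby 1.9+: relabel the bytes as ISO-8859-1, then transcode
      utf8 = raw.force_encoding('ISO-8859-1').encode('UTF-8')

      # Ruby 1.8 equivalent:
      #   require 'iconv'
      #   utf8 = Iconv.conv('UTF-8', 'ISO-8859-1', raw)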

    Read the article

  • Could somebody give me a high-level technical overview of WSGI details behind the scenes vs other we

    - by orokusaki
    Firstly:

      - I understand what WSGI is and how to use it
      - I understand what the "other" methods (Apache mod_python, FastCGI, et al.) are, and how to use them
      - I understand their practical differences

    What I don't understand is how each of the various "other" methods works behind the scenes, compared to something like uWSGI. Does your server (nginx, etc.) route the request to your WSGI application, with uWSGI creating a new Python interpreter for each request routed to it? How different is WSGI from the other, more traditional / monkey-patched methods (aside from the different, easier Python interface that WSGI offers)? What light-bulb moment am I missing?
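    For concreteness, the whole WSGI contract is a callable like the following minimal sketch; servers such as uWSGI typically keep a pool of long-lived worker processes that call it once per request, rather than starting a new interpreter every time:

      def application(environ, start_response):
          # environ: dict describing the request; start_response: sets status and headers
          start_response('200 OK', [('Content-Type', 'text/plain')])
          return [b'Hello from WSGI\n']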

    Read the article

  • What's a good Minimal Server-Side Javascript Framework?

    - by Nick Retallack
    So I was writing a web app with web.py that uses plenty of client-side JavaScript, and my database is CouchDB, so the queries are in JavaScript too. Eventually I got to thinking: why not skip the Python and go all JavaScript? Besides, some functions need to run once on the client and again on the server to make sure you're not spoofing, so why translate between JavaScript and Python? So I'm looking for a simple, lightweight JavaScript web framework. All I really need is the URL routing and request/response stuff (standard WSGI-style?), and a way to hook into a big HTTP server like nginx. What do you guys recommend?
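    As a baseline for how little is needed, Node.js's built-in http module already provides the request/response plumbing; a minimal sketch (routing here is just a URL check, with nginx assumed in front as a reverse proxy):

      var http = require('http');

      http.createServer(function (req, res) {
        if (req.url === '/hello') {
          res.writeHead(200, { 'Content-Type': 'text/plain' });
          res.end('hi\n');
        } else {
          res.writeHead(404);
          res.end();
        }
      }).listen(8000);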

    Read the article

  • What is the easiest way to get an embedded upload progress bar using Ruby/Sinatra/Haml/Passenger/nginx?

    - by mmr
    I have a website where people can upload 30+ MB of data in a single block, and I want to be able to show them the progress of their upload without the web page becoming unresponsive, similar to how Flash uploads work in Gmail. There's this question here, but I don't know if that progress bar is embedded in the page or if it's using the browser's progress bar. I'm also a bit of a web newb, so I'm not sure if it's the 'easiest'. I asked the swfupload guys how to do this here, and the answer I got was 'this tool requires some knowledge to use it', without much help in figuring out where to get started. I also asked this question on ServerFault and got no response, so maybe that was the wrong place to ask. I'm all for learning new things and so forth, but there are a lot of potential pathways to take here. Where should I start, and what do I need to know to make everything work with Sinatra, Haml, Ruby, Passenger, and nginx? Thanks!

    Read the article

  • Upload 1GB files using chunking in PHP

    - by rjha94
    I have a web application that accepts file uploads of up to 4 MB. The server-side script is PHP and the web server is nginx. Many users have requested that this limit be increased drastically to allow uploads of video etc. However, there seems to be no easy solution for this problem with PHP. First, on the client side I am looking for something that would allow me to chunk files during transfer. SWFUpload does not seem to do that. I guess I can stream uploads using JavaFX (http://blogs.sun.com/rakeshmenonp/entry/javafx_upload_file), but I can not find any equivalent of request.getInputStream in PHP. Increasing browser client_post limits or php.ini upload or max_execution times is not really a solution for really large files (~1 GB), because the browser may time out, and think of all those blobs stored in memory. Is there any way to solve this problem using PHP on the server side? I would appreciate your replies.
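    On the server side, a hedged sketch of a chunk receiver (the parameter name and paths are hypothetical; it assumes the client posts each piece as a raw request body, in order). php://input is the closest PHP analogue to request.getInputStream, and it streams a non-multipart body instead of loading it all into memory:

      <?php
      // append each posted chunk to a per-upload temp file
      $name = basename($_GET['name']);                   // hypothetical upload id
      $in   = fopen('php://input', 'rb');                // raw POST body stream
      $out  = fopen("/tmp/uploads/{$name}.part", 'ab');  // directory must exist
      while (!feof($in)) {
          fwrite($out, fread($in, 8192));
      }
      fclose($in);
      fclose($out);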

    Read the article

  • Setting outbound 'Expires:' in Squid server's HTTP header

    - by IkeaPimp
    I'm having a problem where items served by my Squid server are being cached by Limelight for too long, sometimes days. It happens when a piece of content has been static for a long time (weeks) and then undergoes numerous changes in a matter of hours. Limelight gets its content from our Squid server, and I'm told that if I can add 'Expires: 15m' to the HTTP header the Squid server sends, Limelight will not cache the image for more than 15 minutes. Unfortunately, I can find no setting in Squid that will allow me to add this to the header. Here's the HTTP header as presently sent:

      HTTP/1.0 200 OK
      Date: Tue, 15 Dec 2009 23:57:33 GMT
      Server: nginx/0.5.26
      Content-Type: image/jpeg
      Content-Length: 83843
      Last-Modified: Tue, 15 Dec 2009 23:52:00 GMT
      Accept-Ranges: bytes
      Age: 450
      X-Cache: HIT from squid01.prod.mydomain
      X-Cache-Lookup: HIT from squid01.prod.mydomain:3128
      Via: 1.0 squid01.prod.mydomain:3128 (squid/2.6.STABLE14)
      Connection: close
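    Since the Server/Via headers show the origin behind Squid is nginx, one hedged option (assuming you can touch the origin config) is to emit the header there and let Squid pass it through; nginx's expires directive adds both Expires and Cache-Control:

      location /images/ {
          expires 15m;   # Expires: now+15m, plus Cache-Control: max-age=900
      }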

    Read the article

  • 404 not found in telnet, works fine in browser

    - by Viranch Mehta
    I am having a very irritating problem: when I open a URL ( http://celebs.widewallpapers.net/md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg ) in a browser, it works fine, but when I try to access it via telnet from bash, I get 404 Not Found! My exact terminal session:

      $ telnet celebs.widewallpapers.net 80
      HEAD /md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg HTTP/1.0 [enter]
      [enter]
      HTTP/1.1 404 Not Found
      Server: nginx
      Date: Sun, 23 May 2010 21:36:05 GMT
      Content-Type: text/html; charset=windows-1251
      Content-Length: 166
      Connection: close

    Please help me with this, as I'm trying to make a C batch-downloader that works much the same way as the telnet session.
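    The likely culprit (name-based virtual hosting is the standard explanation for browser-vs-telnet 404s, though this site's exact setup is an assumption): the browser sends a Host header, while a bare HTTP/1.0 request does not, so the server cannot pick the right virtual host and serves its default site's 404. The same session with a Host line:

      $ telnet celebs.widewallpapers.net 80
      HEAD /md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg HTTP/1.0
      Host: celebs.widewallpapers.net
      [enter]

    A C batch-downloader would need to send the same Host header.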

    Read the article

  • Technology stack for very frequent gps data collection

    - by gvaswani
    I am working on a project that involves GPS data collection from many users (say 1000) every second (while they move). I am planning to use a dedicated database instance on EC2 with MySQL on persistent block storage (EBS), and run a Ruby on Rails application with an nginx frontend. I haven't worked on such a data collection application before. Am I missing something here? I will have another instance which will act as the application server and use the data from the same EBS. If anybody has dealt with such a system before, any advice would be much appreciated.

    Read the article

  • Writing a Web Application in haXe without Apache and PHP?

    - by stesch
    haXe has Apache httpd modules and can compile to PHP code. These are the 2 options I know of to make a web application that runs on the server. You can start an HTTP server with nekotools, but this is supposed to be used for development only. Are there any more options? I can always use the NekoVM from within a C or C++ program, running a web server or interfacing with FastCGI. Or compile to C++, using a FastCGI or web server library. But I want to hear about solutions that are actually used. I have a VPS with nginx, so no mod_neko or mod_tora. PHP isn't a problem, but I'd rather not use it (for irrational reasons).

    Read the article

  • Generate canonical / real URL based on base.href or location

    - by blueyed
    Is there a method/function to get the canonical / transformed URL, respecting any base.href setting of the page?

      - I can get the base URL (in jQuery) via $("base").attr("href"), and I could use string methods to parse the URL meant to be made relative to this, but
      - $("base").attr("href") has no host, path, etc. attributes (like window.location has), and
      - manually putting this together is rather tedious.

    E.g., given a base.href of "http://example.com/foo/" and a relative URL "/bar.js", the result should be "http://example.com/bar.js". If base.href is not present, the URL should be made relative to window.location; that is, it should handle a non-existing base.href (using location as the base in that case). Is there a standard method available for this already?

    (I'm looking for this since jQuery.getScript fails when using a relative URL like "/foo.js" and a BASE tag is being used (FF3.6 makes an OPTIONS request, and nginx cannot handle this). When using the full URL (base.href.host + "/foo.js"), it works.)
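    One standard trick is to let the browser's own resolver do the work, since assigning to an anchor's href resolves it against <base> (or location when no BASE tag exists); a small sketch:

      function resolveUrl(relative) {
        var a = document.createElement('a');
        a.href = relative;   // assignment triggers resolution against <base> or location
        return a.href;       // absolute URL; a.host, a.pathname etc. are also populated
      }

      // with <base href="http://example.com/foo/">:
      resolveUrl('/bar.js');   // => "http://example.com/bar.js"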

    Read the article

  • PHP does not return errors at all

    - by ErJab
    I am running PHP with nginx, using a production version of php.ini, so the variable display_errors is set to Off, and there is a good reason I want it this way. But for certain files I need to enable error reporting, so I used ini_set to turn it on. But a simple code snippet like:

      <?php
      error_reporting(E_ALL);
      ini_set("display_errors", 1);
      echo "hi"errorrrrr
      ?>

    does not trace the error. It simply returns an HTTP 500 Internal Server Error message. What should I do to enable error reporting?
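    The usual explanation: ini_set() runs at execution time, but a syntax error like the one above aborts the script at compile time, before any statement runs, so display_errors is never switched on for that file. A sketch of the standard workaround, a wrapper that itself compiles cleanly (file names hypothetical):

      <?php
      // debug.php - no syntax errors here, so these lines always run
      error_reporting(E_ALL);
      ini_set('display_errors', '1');
      include 'broken.php';   // parse errors in the included file now get displayed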

    Read the article

  • Can an HTTP server detect that a client has cancelled their request?

    - by Nick Retallack
    My web app must process and serve a lot of data to display certain pages. Sometimes, the user closes or refreshes a page while the server is still busy processing it. This means the server will continue to process data for several minutes, only to send it to a client who is no longer listening. Is it possible to detect that the connection has been broken and react to it? In this particular project, we're using Django and nginx, or Apache. I assumed this is possible because the Django development server appears to react to cancelled requests by printing Broken pipe exceptions. I'd love to have it raise an exception that my application code could catch. Alternatively, I could register an unload event handler on the page in question, have it do a synchronous XHR requesting that the previous request from this user be cancelled, and do some kind of inter-process communication to make it so. Perhaps if the slower data processing were handed to another process that I could more easily identify and kill, without killing the responding process...
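    For the streaming case there is at least a hook in WSGI, sketched below with hypothetical helpers (behavior varies by server): when the client disconnects mid-response, the server stops iterating the response and calls close() on it, which surfaces inside a generator as GeneratorExit:

      def application(environ, start_response):
          start_response('200 OK', [('Content-Type', 'text/plain')])

          def body():
              try:
                  for chunk in expensive_pipeline():   # hypothetical data source
                      yield chunk
              except GeneratorExit:
                  cancel_pipeline()                    # hypothetical cleanup hook
                  raise

          return body()

    This only detects the disconnect once you start writing, which is why handing long pre-computation to a separate, killable worker process (as you suggest) is a common design.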

    Read the article

  • What is the maximum number of virtualhosts Apache can handle?

    - by FractalizeR
    Hello. What is the maximum number of VirtualHosts Apache can handle on a single machine? (I don't mean anything related to load; let's suppose it's irrelevant to the question.) And we take only Apache, without any proxying layer such as nginx. I am asking because on one forum a guy reported that his Apache becomes unstable with more than 400 sites on a single machine. If you have a config that handles more than 400, please tell me here. Thanks.

    Read the article

  • Invalid AuthenticityToken everywhere

    - by bwizzy
    I have a Rails app that I just deployed, and it is generating Invalid AuthenticityToken errors anywhere a form is submitted. The app uses subdomains as account names and will also eventually allow a custom domain to be entered. I have an entry in production.rb to allow cross-subdomain session handling. The problem is that you can't log in or submit any form, because everything raises an Invalid AuthenticityToken error. The issue looks similar, but not identical, to http://stackoverflow.com/questions/1201901/rails-invalid-authenticity-token-after-deploy, except that I'm not using Mongrel. I've tried clearing cookies in the browser and restarting Passenger, but no luck. Anyone have any ideas? The server is running nginx + Passenger 2.3.11, and Rails 2.3.5.

      # production.rb
      config.action_controller.session[:domain] = '.domain.com'

      # environment.rb
      config.action_controller.session = {
        :session_key => '_app_session',
        :secret      => '.... nums and chars .....'
      }

    Read the article

  • How reliable are URIs like /index.php/seo_path

    - by Boldewyn
    I noticed that sometimes (especially where mod_rewrite is not available) this path scheme is used:

      http://host/path/index.php/clean_url_here
      --------------------------^

    This seems to work, at least in Apache, where index.php is called and one can query the /clean_url_here part via $_SERVER['PATH_INFO']. PHP even kind of advertises this feature. Also, e.g., the CodeIgniter framework uses this technique as the default for its URLs. The question: How reliable is the technique? Are there situations where Apache doesn't call index.php but tries to resolve the path? What about lighttpd, nginx, IIS, AOLserver? A ServerFault question? I think it's got more to do with using this feature inside PHP code, therefore I ask here.
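    For reference, a minimal front controller reading that segment (a sketch; it assumes the server populates PATH_INFO for /index.php/... requests, which is exactly the part that varies between servers):

      <?php
      // index.php
      $path = isset($_SERVER['PATH_INFO']) ? $_SERVER['PATH_INFO'] : '/';
      $segments = array_values(array_filter(explode('/', $path)));
      // /path/index.php/clean_url_here  =>  array("clean_url_here")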

    Read the article

  • Getting "Location" header from NSHTTPURLResponse

    - by aspcartman
    Can't get the "Location" header from the response at all. Wireshark says there is one:

      Location: http://*/index.html#0;sid=865a84f0212a3a35d8e9d5f68398e535

    But

      NSHTTPURLResponse *hr = (NSHTTPURLResponse *)response;
      NSDictionary *dict = [hr allHeaderFields];
      NSLog(@"HEADERS : %@", [dict description]);

    produces this:

      HEADERS : {
          Connection = "keep-alive";
          "Content-Encoding" = gzip;
          "Content-Type" = "text/html";
          Date = "Thu, 25 Mar 2010 08:12:08 GMT";
          "Last-Modified" = "Sat, 29 Nov 2008 15:50:54 GMT";
          Server = "nginx/0.7.59";
          "Transfer-Encoding" = Identity;
      }

    No Location anywhere. How do I get it? I need this "sid" thing.
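    One hedged possibility, assuming the sid arrives via a redirect: NSURLConnection follows redirects transparently, so the final NSHTTPURLResponse no longer carries the intermediate Location header; it is only visible from the willSendRequest:redirectResponse: delegate method:

      - (NSURLRequest *)connection:(NSURLConnection *)connection
                   willSendRequest:(NSURLRequest *)request
                  redirectResponse:(NSURLResponse *)redirectResponse
      {
          if (redirectResponse) {   // nil on the initial request
              NSDictionary *headers =
                  [(NSHTTPURLResponse *)redirectResponse allHeaderFields];
              NSLog(@"Location: %@", [headers objectForKey:@"Location"]);
          }
          return request;   // continue following the redirect
      }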

    Read the article

  • Permission denied - /tmp/.ruby_inline/Inline_ImageScience_cdab.c

    - by Ikaros
    I have a Ruby on Rails app that I've recently deployed to a remote server (Ubuntu 9.10, nginx, Passenger, Ruby Enterprise Edition) and I'm getting this error (it works fine locally):

      Permission denied - /var/www/project_name/tmp/.ruby_inline/Inline_ImageScience_cdab.c

    First, the folder /tmp/.ruby_inline/ is empty. Should it be? Is it trying to create Inline_ImageScience_cdab.c or read it? I think I have all the required gems installed: 'gem list' shows image_science and RubyInline, and libfreeimage3 and libfreeimage-dev are also installed. I've run chmod 755 on /tmp/.ruby_inline/ to match the permissions on surrounding folders, but I cannot go any higher than that, or I get another error:

      /var/www/project_name/tmp/.ruby_inline is insecure (40777). It may not be group or world writable. Exiting.

    And I guess second, why am I getting this error? :) Thanks
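    A hedged sketch of one way out (INLINEDIR is a real RubyInline setting; the path and the www-data user are assumptions about this deployment): give RubyInline a directory owned by the app's runtime user, with permissions it accepts, i.e. not group or world writable:

      mkdir -p /var/www/project_name/tmp/ruby_inline
      chown www-data:www-data /var/www/project_name/tmp/ruby_inline
      chmod 0700 /var/www/project_name/tmp/ruby_inline
      # then export INLINEDIR=/var/www/project_name/tmp/ruby_inline
      # in the app's environment so RubyInline compiles there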

    Read the article

  • How to set up the Rails authenticity token to work with multiple domains?

    - by bwizzy
    I'm building an app that uses subdomains as account handles (myaccount.domain.com), and I have my sessions configured to work across the subdomains like so:

      config.action_controller.session = { :domain => '.domain.com' }

    In addition to the subdomain, a user can input a real domain name when creating their account. My nginx config is set up to watch for *.com, *.net, etc., and this works for serving the pages. The problem comes when a site visitor submits a comment form on a custom domain that was input by the user: the code throws an "Invalid AuthenticityToken" exception. I'm 99% sure this is because the domain the visitor is on isn't the domain specified in config.action_controller.session, so the authenticity token can't be matched up because Rails can't find their session. So, the question is: can you set config.action_controller.session to more than one domain, and if so, can you add / remove from that value at runtime without restarting the app?
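    One blunt workaround sometimes used in this situation (a sketch with a hypothetical controller; weigh the CSRF trade-off before copying it) is to skip token verification just for the public action that custom-domain visitors hit:

      class CommentsController < ApplicationController
        # visitors on customer-owned domains can't share the .domain.com
        # session cookie, so the token can never match for them
        skip_before_filter :verify_authenticity_token, :only => :create
      end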

    Read the article

  • What tech stack/platform to use for a project?

    - by danny z
    Hey guys, this is a bit of a weird meta-programming question, but I've realized that my new project doesn't need a full MVC framework, and being a Rails guy, I'm not sure what to use now. To give you the gist of the necessary functionality: this website will display static pages, but users will be able to log in and 'edit their current plans'. All purchasing and credit-card editing is being handled by a recurring-payments provider; I just need a page to edit their current plan. All of that will be done through (dynamic) XML API calls, so no database is necessary. Should I stick with my typical Rails/nginx stack, or is there something I could use that would lighten the load, since I don't need the Rails heft? I'm familiar with Python and PHP but would prefer not to go that route. Is Sinatra a good choice here? tl;dr: What's a good way to quickly serve mostly static pages, preferably in Ruby, with some pages requiring dynamic XML rendering?
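    For scale, the Sinatra version of that could look roughly like this sketch (the route and the XML helper are hypothetical; static pages are served from ./public automatically):

      require 'sinatra'

      get '/plan' do
        content_type 'application/xml'
        fetch_plan_xml(session[:user_id])   # hypothetical upstream XML API call
      end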

    Read the article

  • Reverse proxy for Tomcat

    - by aauser
    I have the following infrastructure. Site A is Tomcat, accessed via the URL www.sitea.com. Site B is a PHP backend (or probably just static HTML pages) that can't be accessed directly. I want to forward all requests coming to www.sitea.com/doforward/... (Tomcat) to the PHP backend, while all requests for other URLs are handled by Tomcat itself. I know I can put another web server in front of Tomcat, for example nginx, and forward requests to the PHP or Tomcat backend based on the URL. But I want Tomcat to serve requests itself and forward the rest to the other backend. Probably there are ready-made implementations for servlet containers, like mod_proxy for Apache. Thank you.
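    A sketch of the "Tomcat forwards it itself" approach (backend host, servlet name, and the /doforward/* web.xml mapping are assumptions; headers, POST bodies, and error handling are left out for brevity):

      import java.io.IOException;
      import java.io.InputStream;
      import java.io.OutputStream;
      import java.net.HttpURLConnection;
      import java.net.URL;
      import javax.servlet.http.*;

      public class ForwardServlet extends HttpServlet {
          @Override
          protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                  throws IOException {
              // with a /doforward/* mapping, getPathInfo() is the trailing path
              String path = req.getPathInfo() == null ? "" : req.getPathInfo();
              URL target = new URL("http://internal-php-backend" + path);
              HttpURLConnection con = (HttpURLConnection) target.openConnection();
              resp.setStatus(con.getResponseCode());
              resp.setContentType(con.getContentType());
              InputStream in = con.getInputStream();
              OutputStream out = resp.getOutputStream();
              byte[] buf = new byte[8192];
              int n;
              while ((n = in.read(buf)) > 0) {
                  out.write(buf, 0, n);
              }
              in.close();
          }
      }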

    Read the article
