Search Results

Search found 7694 results on 308 pages for 'mediaelement js'.

  • How to avoid index.php in Zend Framework route using Nginx rewrite

    - by Adam Benayoun
    I am trying to get rid of index.php in the default Zend Framework route. I think this should be corrected at the server level rather than in the application (correct me if I am wrong, but I believe doing it server-side is more efficient). I run Nginx 0.7.1 and php-fpm 5.3.3. This is my nginx configuration:

        server {
            listen *:80;
            server_name domain;
            root /path/to/http;
            index index.php;
            client_max_body_size 30m;
            location / {
                try_files $uri $uri/ /index.php?$args;
            }
            location /min {
                try_files $uri $uri/ /min/index.php?q=;
            }
            location /blog {
                try_files $uri $uri/ /blog/index.php;
            }
            location /apc {
                try_files $uri $uri/ /apc.php$args;
            }
            location ~ \.php {
                include /usr/local/etc/nginx/conf.d/params/fastcgi_params_local;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param SERVER_NAME $http_host;
                fastcgi_pass 127.0.0.1:9000;
            }
            location ~* ^.+\.(ht|svn)$ {
                deny all;
            }
            # Static files location
            location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
                expires max;
            }
        }

    Basically www.domain.com/index.php/path/to/url and www.domain.com/path/to/url serve the same content. I'd like to fix this with an nginx rewrite. Any help will be appreciated.
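    One possible direction (a minimal sketch, untested against this setup): a single rewrite at server level that permanently redirects any request still carrying the /index.php prefix. The capture pattern is an assumption about how the duplicate URLs look; the existing location / already routes the clean URL back to index.php internally.

        # Hedged sketch: send /index.php/path/to/url to /path/to/url (301).
        # Placed at server level so it runs before location matching.
        rewrite ^/index\.php(/.+)$ $1 permanent;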

    Read the article

  • Disable Certain Firefox Plugins System-wide by Default

    - by Andrew Case
    I have Firefox installed system-wide for all our users. Unfortunately the Adobe Reader plug-in is rather flaky and doesn't work some of the time. As a result I want to disable the plug-in by default for all our users, but still allow them to enable it if they want via the standard Tools > Add-ons > Plug-ins menu. How can I make the plug-in's enabled/disabled status default to disabled? I've been able to set system-wide configuration before by putting preferences in defaults/pref/all.js under the Mozilla root folder, but the enabled/disabled state of plugins doesn't appear to be stored in the preferences.
    [edit 1]: I found 'How to manage firefox plugins in pluginreg.dat file', which explains some of the formatting of the pluginreg.dat file. From there I could see that the flags are masked as follows (from nsPluginHostImpl.h):

        #define NS_PLUGIN_FLAG_ENABLED     0x0001 // is this plugin enabled?
        #define NS_PLUGIN_FLAG_OLDSCHOOL   0x0002 // is this a pre-xpcom plugin?
        #define NS_PLUGIN_FLAG_FROMCACHE   0x0004 // this plugintag info was loaded from cache
        #define NS_PLUGIN_FLAG_UNWANTED    0x0008 // this is an unwanted plugin
        #define NS_PLUGIN_FLAG_BLOCKLISTED 0x0010 // this is a blocklisted plugin

    But is there a way to add this to the defaults so that NS_PLUGIN_FLAG_ENABLED is removed by default?

    Read the article

  • Configure Nginx to render static files and rewrite file extension or proxy_pass

    - by Pardoner
    I've set up Nginx to handle all my static files and proxy_pass everything else to a Node.js server. It's working fine, but I'm having difficulty rewriting the URL so that it removes the .html file extension.

        upstream my_upstream {
            server 127.0.0.1:8000;
            keepalive 64;
        }
        server {
            listen 80;
            server_name staging.mysite.com;
            root /var/www/staging.mysite.org/public;
            access_log /var/logs/staging.mysite.org.access.log;
            error_log /var/logs/staging.mysite.org.error.log;
            location ~ ^/(images/|javascript/|css/|robots.txt|humans.txt|favicon.ico) {
                rewrite (.*)\.html $1 permanent;
                try_files $uri.html $uri/ /index.html;
                access_log off;
                expires max;
            }
            location / {
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_set_header Connection "";
                proxy_http_version 1.1;
                proxy_cache one;
                proxy_cache_key sfs$request_uri$scheme;
                proxy_pass http://my_upstream;
            }
        }
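    One thing worth noting is that the rewrite sits in the static-assets location, which requests ending in .html never reach. A minimal sketch of an alternative (untested; the @node name and the capture are my own): strip the extension with a redirect in the proxy location, then let try_files look for the .html file before falling back to Node.

        location / {
            # Hedged sketch: /foo.html -> 301 -> /foo
            if ($uri ~ ^(/.+)\.html$) {
                return 301 $1$is_args$args;
            }
            # serve foo.html for /foo if it exists, otherwise hand off to Node
            try_files $uri $uri.html @node;
        }
        location @node {
            # the proxy_* settings from the original location / block would move here
            proxy_http_version 1.1;
            proxy_set_header Host $http_host;
            proxy_pass http://my_upstream;
        }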

    Read the article

  • Developing high-performance and scalable zend framework website [on hold]

    - by Daniel
    We are going to develop an ads website like http://www.gumtree.com/ (it will not be exactly like it, just to give you an idea) and we have some concerns regarding performance and scalability. We are planning to use Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator -- I understand this will create problems with scalability across multiple servers) + jQuery, or maybe throw Backbone.js in there, plus a CDN for static content, a server for images, and a scalable server for the requests and the rest. My questions are:
    - What do you think about my approach?
    - What solutions would you recommend for developing a high-performance, scalable application expected to have a lot of traffic using PHP (Zend Framework 2)? I would be interested in your approach.
    I should note that I'm a Zend developer and have been working with Zend for over 3 years; this is why I'm choosing it.

    Read the article

  • Merely installing PHP5 causes my AWS Ubuntu server to die minutes later from a massive CPU spike

    - by Mark Amery
    I have an AWS server with Ubuntu 11.04 as the OS, running an Apache2 webserver (incidentally Python-based, using Django). We recently needed to add PHP5 support so we could use a third-party PHP library (incidentally for serving minified versions of JS and CSS files). However, for no reason any of us can discern, if we simply run sudo apt-get install php5 on the server, the install appears to finish successfully, but a few minutes later the server becomes impossible to connect to, even though we take no further action (we have not yet run sudo apt-get install libapache2-mod-php5, which I think would be the next step, nor run any PHP scripts on the server). Looking at the 'Monitoring' tab for the server in the EC2 Management Console reveals that a while after the installation, CPU usage spikes to 100% and stays there permanently (until we reboot the server from the AWS Console). After rebooting, the server also reliably dies within a few (between 0 and 10) minutes. We restored the server to a pre-PHP state from an AMI image, observed that it was stable, and then tried installing PHP5 again and watched the server die in exactly the same way, so we're pretty much certain that installing PHP5 is what causes the symptoms. What on earth could be causing this behaviour, and how can we get PHP installed on the server without it dying?

    Read the article

  • How to see the properties of a DOM element as they change in realtime?

    - by allquixotic
    JavaScript code can update the properties/attributes of DOM elements in real time by responding to events and so on. Here is an example. In the table on that page, move your mouse over the cells. Notice how they change color when the mouse is on them, and the color goes away when you move the mouse to another cell. Now, using Firefox or Chrome (but not IE, Opera, etc.), I want to examine the background color, expressed in RGB or hex or whatever, of the cells updated in real time, as the mouse cursor enters and leaves the region and causes the JS to do its thing. The behavior that I observe, currently, is that the Inspect Element functionality of both Firefox and Chrome does not update the value of the properties as they are updated by JavaScript. So, in order to view the latest value of the property, I have to inspect the element again, and it takes a momentary "snapshot" of the values. But since the values only change while I have the mouse on them, I can't take a snapshot of the value I want while my mouse cursor is over the cell, because I have to remove my mouse from the cell to select the "Inspect Element" item in the right-click list! If it is possible to have the values updated in real time using either Firefox or Chrome, or an extension, on any recent version of the software (up to the latest stable), please provide instructions for doing so.

    Read the article

  • nginx giving 404 when accessing php from alias directory

    - by code90
    I am trying to migrate from Apache to nginx. The PHP sites that I am hosting need to access a shared library, which turns out to be an aliased directory. Below is the configuration I came up with. HTML files work fine, but PHP files give a 404. I have read through and tried most (if not all) of the answers to similar questions without any success. Any hint on what might be causing the issue in my case?

        location /wtlib/ {
            alias /var/www/shared/wtlib_4/;
            index index.php;
        }
        location ~ /wtlib/.*\.php$ {
            alias /var/www/shared/wtlib_4/;
            try_files $uri =404;
            if ($fastcgi_script_name ~ /wtlib(/.*\.php)$) {
                set $valid_fastcgi_script_name $1;
            }
            fastcgi_pass 127.0.0.1:9013;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /var/www/shared/wtlib_4$valid_fastcgi_script_name;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }

    Thanks all!
    Update: The following seems to be working fine:

        location /wtlib/ {
            alias /usr/share/php/wtlib_4/;
            location ~* .*\.php$ {
                try_files $uri @php_wtlib;
            }
            location ~* \.(html|htm|js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ {
                expires 7d;
                access_log off;
            }
        }
        location @php_wtlib {
            if ($fastcgi_script_name ~ /wtlib(/.*\.php)$) {
                set $valid_fastcgi_script_name $1;
            }
            fastcgi_pass $byr_pass;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4$valid_fastcgi_script_name;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }

    Read the article

  • Good HTTP Monitoring tools

    - by ffffff
    I am looking for a tool to monitor HTTP traffic (and ideally other protocols) on a Linux server. Is there one available as free software? For example, when I dump 80/tcp with a packet monitor:

        # tethereal -i ppp0 port 80 -x
        Capturing on ppp0
        1244206390.030474 219.111.xx.xx -> 74.125.xx.xx HTTP GET /search?output=js&num=0&dt=1244206414703&client=pub-3031568651010206&q=Cagliari%20Flight&ad=n3&ie=utf8&oe=utf8&channel=0091594208&adtest=off HTTP/1.1
        0000  00 04 02 00 00 00 00 00 00 00 00 00 00 00 08 00   ................
        0010  45 00 01 e5 ee 82 40 00 40 06 d2 b5 db 6f 02 5b   E.....@.@....o.[
        0020  4a 7d 4f 93 d4 29 00 50 3e df 4c 63 4b 6b 42 e0   J}O..).P>.LcKkB

    that kind of output is produced, but it contains too much unnecessary information, such as SYN packets and headers. What I want is:
    - the IP address of the client and the request string (the GET query, or the contents of a POST);
    - from the server's output, only the HTML, i.e. responses whose Content-Type is text/html.
    Ideally I would be able to set a filter so that only the information I want accumulates in the log.

    Read the article

  • Attempting to emulate Apache MultiViews with Nginx try_files

    - by Samuel Bierwagen
    I want a request to http://example.com/foobar to return http://example.com/foobar.jpg (or .gif, .html, .whatever). This is trivial to do with Apache MultiViews, and it seems like it would be equally easy in Nginx. This question seems to imply that it'd be as easy as try_files $uri $uri/ index.php; in the location block, but that doesn't work. try_files $uri $uri/ =404; doesn't work, nor does try_files $uri =404; or try_files $uri.* =404; Moving it between my location / { block and the regexp which matches images has no effect. Crucially, try_files $uri.jpg =404; does work, but only for .jpg files, and it throws a configuration error if I use more than one try_files rule in a location block! The current server { block:

        server {
            listen 80;
            server_name example.org www.example.org;
            access_log /var/log/nginx/vhosts.access.log;
            root /srv/www/vhosts/example;
            location / {
                root /srv/www/vhosts/example;
            }
            location ~* \.(?:ico|css|js|gif|jpe?g|es|png)$ {
                expires max;
                add_header Cache-Control public;
                try_files $uri =404;
            }
        }

    Nginx version is 1.1.14.
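    For what it's worth, try_files has no wildcard support, but it does take several candidates, so one rough stand-in for MultiViews (a sketch only; the extension list is an assumption) is to probe each extension explicitly:

        location / {
            # probe the bare name, then each candidate extension, then give up with 404
            try_files $uri $uri.html $uri.jpg $uri.gif $uri.png $uri/ =404;
        }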

    Read the article

  • Nginx + WordPress + HHVM: Why isn't Batcache working? Would Varnish help even more?

    - by javipas
    I've heard great things about HHVM, so I've set up a copy of a WordPress blog (on another domain) with Nginx (with the Pagespeed module) and HHVM. Right now the benefits are obvious: on the same config, load times are between two and three times faster. I'm trying to speed things up a little more, so I've also installed Memcached and Batcache. I installed the memcached package, copied object-cache.php (Pastebin) into the root folder of the WordPress blog, then installed the Batcache plugin and copied the advanced-cache.php (Pastebin) file into the wp-content folder. I've also included the line define('WP_CACHE', true); in the wp-config.php file. It seems it doesn't work, though. If I quickly reload the page several times, Batcache should serve the cached page, but it doesn't. It's easy to check by reloading the page several times (Cmd+R on Chrome on OS X) and then viewing the page's source: under the <head> section I should see some Batcache stats, but they aren't there. I wonder if someone could give me a hint on this. On a side note, I don't know if I could add some other component to push performance even further. I'm thinking about Varnish, but I'm not sure whether it would be useless, just another way of doing what I'm already doing. Any other components worth adding? (I'll test a CDN for images, minifying JS, etc. and some other tricks as well, but I'm asking from the server perspective.)

    Read the article

  • Nginx + PHPBB3 reverse proxy images problem

    - by siberiano
    Hello all. I have a problem with my Nginx frontend + Apache2 backend + phpBB3 setup. It doesn't load the CSS or the images. I get constant errors like these:

        2010/04/14 16:57:25 [error] 13365#0: *69 open() "/var/www/foo/styles/styles/coffee_time/theme/large.css" failed (2: No such file or directory), client: 83.44.175.237, server: www.foo.com, request: "GET /styles/coffee_time/theme/large.css HTTP/1.1", host: "www.foo.com", referrer: "http://www.foo.com/viewforum.php?f=43"

    This is my config for the site:

        server {
            listen 80;
            server_name www.foo.com;
            access_log /var/log/nginx/foo.access.log;
            # serve static files directly
            location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ {
                access_log off;
                expires 30d;
                root /var/www/trasteando/;
            }
            location / {
                root /var/www/foo/;
                index /var/www/foo/index.php;
            }
            # proxy the PHP scripts to predefined upstream .apache.
            location ~ .php$ {
                proxy_pass http://apache;
            }
            location /styles/ {
                root /var/www/foo/styles/;
            }
        }
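    The doubled /styles/styles/ path in the error log points at the usual root-vs-alias confusion: root is concatenated with the full request URI, so root /var/www/foo/styles/ plus a request for /styles/... becomes /var/www/foo/styles/styles/.... A minimal sketch of the usual fix, assuming the theme files really live under /var/www/foo/styles/:

        location /styles/ {
            # root is appended with the URI, so /styles/coffee_time/...
            # now maps to /var/www/foo/styles/coffee_time/...
            root /var/www/foo;
        }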

    Read the article

  • Nginx proxy upstream cached?

    - by Julian H. Lam
    Attempting to resolve an issue that's been annoying me for a bit. I've distilled the symptoms into a set of reproducible steps: I have two sites, siteA and siteB. They are both Node.js applications running on different ports (for the sake of example, 4567 and 4568). Both applications have their own file in sites_available (plus a symlink from sites_enabled), which contain the directives proxy_pass http://node_siteA/ and proxy_pass http://node_siteB/ respectively, inside of a location block. They also each have an upstream block (defined globally?):

        upstream node_siteA {
            server 127.0.0.1:4567;
        }
        upstream node_siteB {
            server 127.0.0.1:4568;
        }

    Site A and Site B have nothing to do with each other. Yes, I am restarting (reloading, actually) nginx every time I make a change. If I take down site B and attempt to access it via the web, I am served site A. Why is this?
    Thoughts:
    - Other times, when I create a new Site C, for example, nginx refuses to show me anything except "Welcome to nginx!" for ~5 minutes. This suggests a resolver timeout, perhaps?
    - When I access Site B after its config has been deleted and it sends me to Site A, this sounds like nginx sending me to servers in a round-robin fashion...
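    A hedged guess at the usual explanation (an assumption about this setup, not a diagnosis): when no server_name matches the requested host on a given port, nginx falls back to the default server, often whichever site was defined first, so a broken or removed site B can silently land on site A. An explicit catch-all makes that fallback visible:

        # Hedged sketch: a catch-all so requests for unmatched host names
        # don't fall through to whichever site loads first.
        server {
            listen 80 default_server;
            server_name _;
            return 444;   # drop requests for unknown hosts
        }

    If the catch-all starts answering once site B is taken down, the issue is server_name matching rather than any upstream caching.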

    Read the article

  • 'Future-proof' Live Audio Capture & Broadcast [migrated]

    - by maxpowers
    I'm looking to implement some live audio broadcasting functionality within a Ruby on Rails site for a client and was hoping I could get some input from people who have tackled this type of thing before. Essentially what I need to do is capture and record a user's audio (via microphone, line in, etc.), then stream that to 1,000+ listeners with very little latency, ideally under 2 seconds. So it looks like we've got 3 parts:
    - Web-based audio capture (likely with Flash or JS)
    - Server to accept the audio feed and stream it to listeners (likely Icecast or Wowza)
    - Actual audio player (maybe HTML5 with Flash as a fallback? Maybe this jPlayer fork)
    Does RTMP make sense here? Or maybe HTTP? What's the most 'future-proof' way to make this happen? Building with mobile in mind, but still want to be able to stream to anyone. I've found lots of potentially helpful threads and software, but I'm struggling to get an idea of how it all fits together. I'm a front end guy and way out of my comfort zone, so if anyone has insights to offer, I'd love to hear them.

    Read the article

  • Export files to remote server using TortoiseSVN

    - by Matt
    Hi, I'm using TortoiseSVN to keep revisions of my code. When I commit changes, I take note of what files have changed and upload them to my server using FTP. Here's my workflow:
    1. Edit files on the local computer (e.g. files in C:\Users\Me\web).
    2. Commit changes to the local repository using right-click > TortoiseSVN > SVN Commit.
    3. Take the files, open FileZilla (FTP client) and upload the files to a remote server.
    I was wondering if there was a way in which I could omit step 3 from my workflow. Basically I would like the changed files to be automatically uploaded to the remote server when I commit a version to the repository. Information about my computer environment:
    - Windows 7 Ultimate x64 with TortoiseSVN x64
    - Notepad++ text editor
    - Files edited are PHP, CSS, JS, HTML, etc.
    - Server is running Linux with PHP 5.2 and MySQL. FileZilla is used to upload files. I can connect to the server via SSH if that is needed.
    Thank you in advance.

    Read the article

  • Nginx server 301 Moved permanently

    - by user145714
    When I did a curl -v http://site-wordpress.com:81 I received this result:

        About to connect() to site-wordpress.com port 81 (#0)
        Trying ip... connected
        Connected to site-wordpress.com (ip) port 81 (#0)
        GET / HTTP/1.1
        User-Agent: curl/7.19.7 (x86_64-unknown-linux-gnu) libcurl/7.19.7 NSS/3.12.6.2 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
        Host: site-wordpress.com:81
        Accept: */*
        < HTTP/1.1 301 Moved Permanently
        < Server: nginx/1.2.4
        < Date: Fri, 16 Nov 2012 16:28:19 GMT
        < Content-Type: text/html; charset=UTF-8
        < Transfer-Encoding: chunked
        < Connection: keep-alive
        < X-Pingback: The URL above/xmlrpc.php
        < Location: The URL above

    Seems like this line in my fastcgi_params is causing grief:

        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    If I remove this line, I get HTTP/1.1 200 OK but a blank page. This is my config:

        server {
            listen 81;
            server_name site-wordpress.com;
            root /var/www/html/site;
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
            index index.php;
            if (!-e $request_filename){
                rewrite ^(.*)$ /index.php break;
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000; # port where FastCGI processes were spawned
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
                include /etc/nginx/mime.types;
            }
            location ~ \.css {
                add_header Content-Type text/css;
            }
            location ~ \.js {
                add_header Content-Type application/x-javascript;
            }
        }

    This config works with the IP and port 80. But now I need to use a domain name and port 81, which doesn't work. Could someone please help. Thanks.

    Read the article

  • nginx with fail2ban and mod_security

    - by Mahesh
    I forgot to update my fail2ban config for nginx; I just moved to nginx from Apache. Today, I got a lot of calls from a single IP:
    - the IP tried to access login pages with POST and GET methods;
    - the IP tried to use nginx as a proxy (GET http:/...);
    - the IP searched the images, js, and css folders;
    - the IP tried to inject -d url_allow_fopen =1 and something similar.
    Most of the calls ended with 404.

        http {
            limit_req_zone $binary_remote_addr zone=app:10m rate=5r/s;
            ...
            server {
                ...
                location / {
                    limit_req zone=app burst=50;
                }

    I got approximately 50 requests from that IP per second, so I updated my nginx config as above. Will it avoid too many connections per second now? I have updated my fail2ban jail.local to support nginx. I am confused by nginx-noscript.conf:

        [Definition]
        failregex = ^<HOST> -.*GET.*(\.php|\.asp|\.exe|\.pl|\.cgi|\scgi)
        ignoreregex =

    I am serving PHP with nginx. I checked Apache's noscript.conf, which has the .php extension in it too. I tested the settings above before restarting fail2ban and got thousands of IPs matched; I removed php and nothing matched. Do I need .php| in nginx-noscript.conf? Does using mod_security and fail2ban together bring any problems? When I was searching today, I came to know that mod_security is available for nginx too, so I am planning to use it as well.

    Read the article

  • Restricting access to one controller of an MVC app with Nginx

    - by kgb
    I have an MVC app where one controller needs to be accessible only from several IPs (this controller is an OAuth token callback trap for Google/FB API tokens). My conf looks like this:

        geo $oauth {
            default 0;
            87.240.156.0/24 1;
            87.240.131.0/24 1;
        }
        server {
            listen 80;
            server_name some.server.name.tld default_server;
            root /home/user/path;
            index index.php;
            location /oauth {
                deny all;
                if ($oauth) {
                    rewrite ^(.*)$ /index.php last;
                }
            }
            location / {
                if ($request_filename !~ "\.(phtml|html|htm|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|xlsx)$") {
                    rewrite ^(.*)$ /index.php last;
                    break;
                }
            }
            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }

    It works, but does not look right. The following seems more logical to me:

        location /oauth {
            allow 87.240.156.0/24;
            deny all;
            rewrite ^(.*)$ /index.php last;
        }

    But this way the rewrite happens all the time, and the allow and deny directives are ignored. I don't understand why...
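    A hedged sketch of one workaround (the socket path and fastcgi params are copied from the config above; the rest is an assumption): with rewrite ... last the request leaves /oauth and is re-matched against the \.php$ location, which carries no access rules, so allow/deny never fire. rewrite ... break keeps the request inside /oauth, where the access checks still apply, but the location then has to hand the script to PHP itself.

        location /oauth {
            allow 87.240.156.0/24;
            allow 87.240.131.0/24;
            deny  all;
            # stay in this location so the allow/deny above are enforced
            rewrite ^ /index.php break;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root/index.php;
        }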

    Read the article

  • Manual Http error response code in non-existent folder via routing

    - by Slytherin
    Apache server running on an Ubuntu-like Linux. I am getting unexpected behaviour when I try to manually send an error response. If my .htaccess is responsible for the error response, then the appropriate error document is loaded and displayed, with the corresponding response code in the browser console. However, if my router is the origin of the response code, then I get a blank screen, but the correct response code. My .htaccess looks like this:

        RewriteEngine On
        # RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule !\.(css|js|icon|zip|rar|png|jpg|gif|pdf)$ index.php [L]
        ErrorDocument 404 /err/404.html
        ErrorDocument 403 /err/403.html
        ErrorDocument 500 /err/500.html

    The part of my router that sends the response is the following:

        header("HTTP/1.1 403 Forbidden");

    Trying this format didn't help either:

        header("HTTP/1.1 403 Forbidden", TRUE, 403);

    I also tried HTTP/1.0. Furthermore, I was thinking that maybe the relative path to the error page might be an issue, but discarded this idea after attempting to access a document that is forbidden via .htaccess.
    EDIT: I should also point out that this scenario happens when a URL for a non-existent article is requested. Is it possible that the server is looking for a .htaccess file in a folder based on the URL? E.g. for domain/blog/non-existent, is the server looking for a blog folder? I am specifically asking this because there is no blog folder.

    Read the article

  • Searching for just files

    - by M Schenkel
    I have a couple of questions about searching for files on Windows 7. I find the XP method much easier than this new Windows 7 search. Note: I am only concerned with finding files whose names match a search term, not all files containing the search term.
    1. Is there a way to search for files by name only? When I use the search, it seems to be searching within files and returning instances where the name of the file is used. Example: I have a whole web directory and want to find the JavaScript files. But if I enter "myjavascript.js" in the search, it also returns all the HTML files which reference the JavaScript file. This is both slow and makes it difficult to actually find the file itself.
    2. Is there a way to search for an exact match? The search seems to implicitly use wildcards. For instance, say I have a bunch of files in a folder: file1.txt, file11.txt, file12.txt, file13.txt. If I enter "file1.txt" in the searcher, it returns results as if I were using a wildcard like file1*.txt.
    I miss XP!!!!

    Read the article

  • Viewing local websites on my iOS device over Wi-fi

    - by John
    Trying to view some local html/css/js files in a mobile browser on my iOS device. I thought maybe file sharing would be an option, and it is, but I'm not completely satisfied with it; any time I try to do the following, an error occurs. Web sharing is on and available at http://192.168.1.101/~user, but I have to manually copy the files in. If I try to symlink a folder in, so that the address could be viewed at ~user/some_dir, by issuing

        $ ln -s /Users/user/dev/some_dir ~/Sites/

    then I get a 403 Forbidden error. I've tried to remedy this by modifying a user.conf file in /private/etc/apache2/ and using the following syntax:

        <Directory "/Users/user/Sites/">
            Options Indexes MultiViews SymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    but nope, still doesn't work; I get a 403 error. If I try to symlink each individual file in instead of using a directory as a sub-directory, same error. Any help would be greatly appreciated! I'd just like to symlink directories into the ~/Sites one and browse them on my iOS device over wifi. I'm on OS X 10.7 Lion trying to connect with iOS 5.

    Read the article

  • Nginx Removes the index.php from URL

    - by codeHead
    I have a CodeIgniter PHP application on nginx. It works as expected on Apache, but after moving to nginx I noticed that index.php is automatically removed from the URL in all my links. In fact, when I try using index.php it does not go to the desired URL but gets redirected to my default controller. Below is a copy of my nginx.conf file:

        server {
            listen 80;
            server_name mydomainname.com;
            root /var/www/domain/current;
            # index index.php;
            error_log /var/log/nginx/error.log;
            access_log /var/log/nginx/access.log main;
            location / {
                # Check if a file or directory index file exists, else route it to index.php.
                try_files $uri $uri/ /index.php;
            }
            location ~* \.php {
                fastcgi_pass backend;
                include fastcgi.conf;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 4 256k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_read_timeout 500;
                #fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                add_header Expires "Thu, 01 Jan 1970 00:00:01 GMT";
                add_header Cache-Control "no-cache, no-store, private, proxy-revalidate, must-revalidate, post-check=0, pre-check=0";
                add_header Pragma no-cache;
                add_header X-Served-By $hostname;
            }
            location ~* ^.+\.(css|js)$ {
                expires 7d;
                add_header Pragma public;
                add_header Cache-Control "public";
            }
            # set expiration of assets to MAX for caching
            location ~* \.(ico|gif|jpe?g|png)(\?[0-9]+)?$ {
                expires max;
                log_not_found on;
            }
        }

    I need my URLs to work with index.php; please help.
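    One direction that may be worth trying (a sketch only, assuming the goal is for /index.php/controller/method URLs to reach index.php with the trailing segment as PATH_INFO; backend and fastcgi.conf are taken from the config above): split the path info explicitly so the PHP location doesn't treat the whole URI as the script name.

        location ~ ^(.+\.php)(/.*)?$ {
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            # $fastcgi_script_name is now /index.php and $fastcgi_path_info
            # carries /controller/method for CodeIgniter to route on
            fastcgi_param PATH_INFO $fastcgi_path_info;
            include fastcgi.conf;
            fastcgi_pass backend;
        }

    Depending on the CodeIgniter version, $config['uri_protocol'] in application/config/config.php may also need to be set to PATH_INFO for the trailing segment to be used for routing.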

    Read the article

  • Skipping nginx PHP cache for certain areas of a site?

    - by DisgruntledGoat
    I have just set up a new server with nginx (which I am new to) and PHP. On my site there are essentially 3 different types of files:
    - static content like CSS, JS, and some images (most images are on an external CDN);
    - the main PHP/MySQL database-driven website, which essentially acts like a static site;
    - a dynamic PHP/MySQL forum.
    It is my understanding from this question and this page that the static files need no special treatment and will be served as fast as possible. I followed the answer from the above question to set up caching for PHP files, and now I have a config like this:

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_cache one;
            fastcgi_cache_key $scheme$host$request_uri;
            fastcgi_cache_valid 200 302 304 30m;
            fastcgi_cache_valid 301 1h;
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/var/run/php-fastcgi/php-fastcgi.socket;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /srv/www/example$fastcgi_script_name;
            fastcgi_param HTTPS off;
        }

    However, now I want to prevent caching on the forum (either for everyone or only for logged-in users - I haven't checked whether the latter is feasible with the forum software). I've heard that "if is evil" inside location blocks, so I am unsure how to proceed. With the if inside the location block, I would probably add this in the middle:

        if ($request_uri ~* "^/forum/") {
            fastcgi_cache_bypass 1;
        }

        # or possibly this, if I'm able to cache pages for anonymous visitors
        if ($request_uri ~* "^/forum/" && $http_cookie ~* "loggedincookie") {
            fastcgi_cache_bypass 1;
        }

    Will that work fine, or is there a better way to achieve this?
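    One hedged sketch of the usual if-free pattern (the $skip_cache name is my own, and this is untested against this layout): compute a flag with map at the http level and feed it to fastcgi_cache_bypass and fastcgi_no_cache, which skip the cache whenever the value is non-empty and not "0".

        # http context
        map $request_uri $skip_cache {
            default     0;
            ~^/forum/   1;
        }

        # inside the existing PHP location
        location ~ \.php$ {
            # ...existing fastcgi_* directives...
            fastcgi_cache_bypass $skip_cache;   # don't serve the forum from cache
            fastcgi_no_cache     $skip_cache;   # don't store forum responses either
        }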

    Read the article

  • Strategy to isolate multiple nginx ssl apps with single domain via suburi's?

    - by icpu
    Warning: so far I have only learnt how to use nginx to serve apps with their own domain and server block, but I think it's time to dive a little deeper. To mitigate the need for multiple SSL certificates or expensive wildcard certificates, I would like to serve multiple apps (e.g. Rails apps, PHP apps, Node.js apps) from one nginx server_name, e.g.:
    - rooturl/railsapp
    - rooturl/nodejsapp
    - rooturl/phpshop
    - rooturl/phpblog
    I am unsure of the ideal strategy. Some examples I have seen and/or thought about:
    1. Multiple location rules. This seems to cause conflicts between the individual apps' config requirements, e.g. differing rewrite and access requirements.
    2. Isolated apps by backend internal port. Is this possible? Each port routing to its own config, so config is isolated and can be bespoke to each app's requirements?
    3. Reverse proxy. I am a little ignorant of how this works; is this what I need to research? Is this actually 2 above? Help online always seems to proxy to another server, e.g. Apache.
    What is an effective way to isolate config requirements for apps served from a single domain via sub-URIs?
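    For what it's worth, a rough sketch of option 3 (all ports, paths and names here are placeholders): give each sub-URI its own location that proxies to that app's backend listener, so each app keeps an isolated config on its own port while the single certificate stays on the front server.

        server {
            listen 443 ssl;
            server_name rooturl;            # placeholder
            # ssl_certificate ...;          # single certificate covering every app

            location /railsapp/ {
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_pass http://127.0.0.1:3000/;   # trailing slash strips /railsapp/
            }
            location /nodejsapp/ {
                proxy_set_header Host $host;
                proxy_pass http://127.0.0.1:3001/;
            }
        }

    The trailing slash on proxy_pass strips the sub-URI prefix before the request reaches the backend, which is usually what a per-app router expects.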

    Read the article

  • How to reduce memory consumption an AWS EC2 t1.micro instance (free tier) ubuntu server 14.04 LTS EBS

    - by CMPSoares
    Hi, I'm working on my bachelor thesis, and for that I need to host a Node.js web application on AWS. In order to avoid costs I'm using a t1.micro instance with 30 GB of disk space (from what I know, the maximum I get in the free tier), which is barely used. Instead I have problems with memory consumption: it's using all of it. I tried the approach of creating a virtual swap area, as mentioned at "Why don't EC2 ubuntu images have swap?", with these commands:

        sudo dd if=/dev/zero of=/var/swapfile bs=1M count=2048 &&
        sudo chmod 600 /var/swapfile &&
        sudo mkswap /var/swapfile &&
        echo /var/swapfile none swap defaults 0 0 | sudo tee -a /etc/fstab &&
        sudo swapon -a

    But this swap area isn't used somehow. Is something missing in this approach, or is there another way of reducing memory consumption on this type of AWS instance? Bottom line: this causes server freezes and crashes, and that's what I want to stop, either by using the swap, reducing memory usage, or both.

    Read the article
