Search Results

Search found 3270 results on 131 pages for 'git mv'.


  • Ubuntu server and services

    - by Vicenç Gascó
    I've been using a Linux + Plesk virtual server as a web server for a while, but I want to try doing it manually, so my question is this. I'll have a server with: 80 GB HDD, 4 GB RAM, 1 TB bandwidth, 1 dedicated IP. On my current virtual server I use: a mail server, a DNS server, Apache + PHP 5.5 + MySQL, FTP, and SSH. Without Plesk, can I set up all of those services manually (keeping in mind that I am not a terminal pro), and ideally upgrade the stack on Ubuntu Server to: a mail server (with a nice webmail included), a DNS server, nginx + PHP 5.5 + MySQL + MongoDB, FTP + SFTP, SSH, and a Git server? Which Ubuntu Server release should I choose? [EDIT] I almost forgot: I'd also like to know how much bandwidth and CPU each of my web apps (usually one per domain) is using, plus the overall usage (not just the web apps, but also mail, DNS, etc.). Plesk usually does that for me, and I don't know how to measure it without Plesk!

    Read the article

  • How to Identify and Backup the Latest SQL Server Database in a Series

    I have to support a third-party application that periodically creates a new database on the fly. This obviously causes issues with our backup mechanisms. The databases follow a particular naming pattern, so I can identify the set of databases; however, I need to make sure I'm always backing up the newest one. Read this tip to ensure you are backing up the latest database in a series.
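
    A minimal sketch of the idea from the shell, assuming sqlcmd is available and the databases are named like AppDb_<something> on a server called MYSERVER (all three names are placeholders standing in for the tip's actual pattern):

        # find the most recently created database matching the naming pattern...
        latest=$(sqlcmd -S MYSERVER -h -1 -W -Q "SET NOCOUNT ON; SELECT TOP 1 name FROM sys.databases WHERE name LIKE 'AppDb%' ORDER BY create_date DESC;")
        # ...and back up exactly that one
        sqlcmd -S MYSERVER -Q "BACKUP DATABASE [$latest] TO DISK = N'C:\backups\$latest.bak';"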

    Read the article

  • Release Notes for 4/12/2012

    Here are the notes for this week's release: Fixed an issue where users could not expand a particular subfolder in the ASP.NET source code tree. Fixed an issue where incorrect Git branches would appear in the branch selection dropdown on the source control page. Fixed an issue where colons would appear HTML-encoded in users' activity feeds. Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.

    Read the article

  • Is it advisable to ask employees to create 'work' GitHub accounts?

    - by fiorenti
    I've moved all our company Git repositories to GitHub and now I want to add employees to the projects. Since most employees already have personal GitHub accounts, I'm wondering whether I should ask them to create a separate work GitHub account. The reason I'm considering this is to decrease the chance of unauthorized access to our code base: their personal accounts may be well publicized through their personal activity on the site, increasing the chance of targeted attacks, and if a personal account is ever compromised, the hijacker won't gain access to the entire company codebase. Since this puts the burden of maintaining two accounts on the employees, I'm wondering whether it is the correct approach and whether it even makes sense. I would love to hear your opinions on this.

    Read the article

  • NetBeans: starting the IDE with an English UI

    - by user13137856
    NetBeans IDE ships with a localized (Japanese) user interface, but sometimes you want to run it in English instead, for example to match English documentation or to report a problem. You can switch the language at startup with the --locale option: % netbeans --locale en. NetBeans caches settings in its user directory ($HOME/.netbeans and $HOME/.cache/netbeans), so if the language does not appear to change, start with a fresh user directory via the --userdir option: % netbeans --locale en --userdir /tmp/my_newuserdir. NetBeans loads its translations from localized jar modules, so another approach is to remove the Japanese localization jars for a module. For the Git support, for example, there are two such files: ide/modules/locale/org-netbeans-libs-git_ja.jar and ide/modules/locale/org-netbeans-modules-git_ja.jar. See also the FAQ entry on changing the language of the NetBeans UI.

    Read the article

  • rename file names from lower case to upper case

    - by Adnan
    Hello, I have about 2k files whose names are currently lower case, like file_one.cfr, file_two.cfr, and so on. I am searching for a fast way to rename them to upper case, so they become FILE_ONE.cfr, FILE_TWO.cfr, etc. If I use the following from my shell:

        for i in *; do mv $i `echo $i | tr [:lower:] [:upper:]`; done

    I get the file names and the file extensions in upper case. But the extension should remain lower case, so this approach does not work. Any programming language is welcome.
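
    A minimal sketch of one way to uppercase only the base name, assuming bash and that every file ends in .cfr:

        for f in *.cfr; do
            base=${f%.cfr}                # name without the extension
            mv -- "$f" "$(echo "$base" | tr '[:lower:]' '[:upper:]').cfr"
        done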

    Read the article

  • How can I copy files with names containing spaces and UNICODE, when using a shell script?

    - by LOlliffe
    I have a list of files that I'm trying to copy and move (using cp and mv) in a bash shell script. The problem I'm running into is that I can't get either command to recognize a huge number of the files, seemingly because the filenames contain spaces and/or Unicode characters. I couldn't find any switches to decode/re-encode these characters. Instead, for example, if I copy "file name.xml", I get "*.xml" and a script error that the file wasn't found. Does anyone know settings or commands that will deal with these files?
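
    The usual culprit here is the shell splitting unquoted variables on whitespace, not cp or mv themselves; double-quoting every expansion normally fixes it. A minimal sketch, assuming the list lives in a file called filelist.txt (a placeholder name):

        while IFS= read -r f; do
            # "$f" keeps spaces and Unicode intact; -- guards against odd leading characters
            cp -- "$f" /destination/
        done < filelist.txt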

    Read the article

  • How to make automake less ugly?

    - by Brendan Long
    I recently learned how to use automake, and I'm somewhat annoyed that my compile commands went from a bunch of:

        g++ -O2 -Wall -c fileName.cpp

    to a bunch of:

        depbase=`echo src/Unit.o | sed 's|[^/]*$|.deps/&|;s|\.o$||'`;\
        g++ -DHAVE_CONFIG_H -I. -I./src -g -O2 -MT src/Unit.o -MD -MP -MF $depbase.Tpo -c -o src/Unit.o src/Unit.cpp &&\
        mv -f $depbase.Tpo $depbase.Po

    Is there any way to clean this up? I can usually pick out warning messages easily, but now the wall of text to read through is 3x bigger and much weirder. I know what my flags are, so having it just say "Compiling xxx.cpp" for each file would be perfect.
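
    If the project uses automake 1.11 or later, silent rules give almost exactly that: a one-line "CXX src/Unit.o" summary per file. A sketch of the two ways to enable them:

        # in configure.ac, next to AM_INIT_AUTOMAKE:
        #   AM_SILENT_RULES([yes])
        # or per build, without touching configure.ac:
        ./configure --enable-silent-rules
        make            # terse output:  "  CXX      src/Unit.o"
        make V=1        # back to the full command lines when debugging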

    Read the article

  • Problem with pgfplot label

    - by harper
    I want to draw an x-y diagram with axis labels. Unfortunately, the ylabel is misplaced, and the misplacement appears to depend on the actual data: when the commented-out data line in the sample below is used instead of the first uncommented one, it looks better. How can I move the label to the left, or (more desirable) how can I tell pgfplots to place it correctly?

        % !TEX TS-program = pdflatex
        % !TEX encoding = UTF-8 Unicode
        \documentclass{scrartcl}
        \usepackage[utf8]{inputenc}
        \usepackage{tikz}
        \usepackage{pgfplots}
        \begin{document}
        \begin{tikzpicture}
        \begin{axis}[width=13cm,height=8cm, xlabel={I in mA}, ylabel={U in mV}]
        \addplot[only marks,mark=star] coordinates {
        % (1.36, -0.0177)
        (45.38, 0.0273)
        (74.19, 0.0413)
        (100.88, 0.0533)
        (134.80, 0.0683)
        (195.27, 0.1073)
        };
        \end{axis}
        \end{tikzpicture}
        \end{document}

    Read the article

  • Does CodeIgniter have to load view in the final step?

    - by Peter
    I have a function:

        function do_something()
        {
            // process
            $this->load->view('some_view', $data);
            exec('mv /path/to/folder1/*.mp3 /path/to/folder2/');
        }

    My intention is to move the files after outputting the view, but apparently the move happens before the view is rendered. My question is: does $this->load->view() have to be the final step in a function? I did a little research, and it seems my question is similar to this topic. Correct?

    Read the article

  • can't login to phpmyadmin

    - by user574383
    Hi, I am new to Linux, but I need phpMyAdmin on my CentOS server. I did this:

        cd /var/www/html/        # document root of apache
        wget http://sourceforge.net/projects/phpmyadmin/path/to/latest/version
        tar xvfz phpMyAdmin-3.3.9-all-languages.tar.gz
        mv phpMyAdmin-3.3.9-all-languages phpmyadmin
        rm phpMyAdmin-3.3.9-all-languages.tar.gz
        cd phpmyadmin/
        cp config.sample.inc.php config.inc.php

    Then I go to a web browser, visit www.$ip/phpmyadmin, and I am presented with a login screen asking for a username and password. How can I get credentials to log in? I'd like to log in as root, I guess, but I don't know how to set up a root account and create a password for root using the CLI and MySQL. Please help? Thanks.
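
    A minimal sketch of creating those credentials from the CLI, assuming a fresh MySQL install whose root account still has an empty password (the password below is a placeholder):

        # set a password for MySQL's root account...
        mysqladmin -u root password 'NewPassword'
        # ...or run the interactive first-run hardening script instead
        mysql_secure_installation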

    Read the article

  • Generate all project dependencies in a single file using gcc -MM flag

    - by Jabez
    Hi all, I want to generate a single dependency file containing the dependencies of all source files, using gcc's -M flags from a Makefile. I googled for a solution, but everything I found generates multiple dependency files for multiple objects.

        DEPS = make.dep

        $(OBJS): $(SOURCES)
            @$(CC) -MM $(SOURCEs) > $(DEPS)
            @mv -f $(DEPS) $(DEPS).tmp
            @sed -e 's|.$@:|$@:|' < $(DEPS).tmp > $(DEPS)
            @sed -e 's/.*://' -e 's/\\$$//' < $(DEPS).tmp | fmt -1 | \
                sed -e 's/^ *//' -e 's/$$/:/' >> $(DEPS)
            @rm -f $(DEPS).tmp

    But it is not working properly. Please tell me where I'm making the mistake.
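
    For comparison, a minimal sketch of the usual single-file scheme, with the generation step reduced to one shell command run from the project directory (make.dep matches the question's DEPS name):

        # one pass over all sources produces one combined dependency file
        gcc -MM *.c > make.dep
        # then pull it into the Makefile with a line like:  -include make.dep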

    Read the article

  • Committing file deletions to svn repository whilst ignoring some other local mods

    - by TheJuice
    I have an svn repository where I have scheduled some files and folders to be moved with svn mv. I also have some peer files with local modifications, and I only want a subset of those modified files committed along with the moves. e.g. the output of svn st looks like:

        D       foo/bar
        D       foo/bar/a.txt
        D       foo/bar/b.txt
        M       foo/exclude.txt
        M       foo/include.txt
        A       foo/whiz/bar
        A  +    foo/whiz/bar/c.txt
        A  +    foo/whiz/bar/d.txt

    To commit the moves to the repository, I would need to commit foo, but that would also commit the modifications to foo/exclude.txt and foo/include.txt. How would I commit only the deletions/additions resulting from the move, plus the mods to foo/include.txt, whilst excluding foo/exclude.txt? I have a feeling the answer lies with the --depth argument to svn ci, but it's not clear to me how it operates.
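
    A minimal sketch of one way to get that commit: name the targets explicitly so svn never touches foo/exclude.txt (paths are the ones from the question; listing both foo/bar and foo/whiz keeps the two halves of the move in one revision):

        svn commit -m "move bar under whiz, plus include.txt tweaks" \
            foo/bar foo/whiz foo/include.txt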

    Read the article

  • SED, using variables with an array

    - by S1syphus
    What I am trying to do is run sed on multiple files in the directory Server_Upload, using variables AB${count}, which correspond to some variables I made that look like:

        echo " AB1 = 2010-10-09Three "
        echo " AB2 = 2009-3-09Foo "
        echo " AB3 = Bar "

    Each of these corresponds to a line containing a word in master.ta that needs changing in all the text files in Server_Upload. If you get what I mean... great; I have tried to explain it the best I can, but if you are still miffed I'll give it another go, as I found it really hard to convey what I mean.

        cd Server_Upload
        for fl in *.UP; do
            mv $fl $fl.old
        done
        count=1
        saveIFS="$IFS"
        IFS=$'\n'
        array=($(<master.ta))
        IFS="$saveIFS"
        for i in "${array[@]}"
        do
            sed "s/$i/AB${count}/g" $fl.old > $fl
            (( count++ ))
        done

    It runs and doesn't give me any errors, but it doesn't do what I want, so any ideas?
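
    A minimal sketch of what the loops may be aiming for, assuming bash and GNU sed (the AB1, AB2, ... variables come from the question; ${!ref} is bash indirect expansion, i.e. the value of the variable whose name is stored in ref):

        cd Server_Upload
        for fl in *.UP; do
            mv -- "$fl" "$fl.old"
            cp -- "$fl.old" "$fl"        # keep the .old backup, edit the copy
            count=1
            while IFS= read -r pattern; do
                ref="AB${count}"
                sed -i "s/${pattern}/${!ref}/g" "$fl"   # GNU sed in-place
                (( count++ ))
            done < master.ta
        done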

    Read the article

  • What is an example of MVC in PHP?

    - by waiwai933
    I'm trying to understand the MVC pattern. Here's what I think MV is. Model:

        <?php
        if ($a == 2) {
            $variable = 'two';
        } else {
            $variable = 'not two';
        }
        $this->output->addContent($variable);
        $this->output->displayContent();
        ?>

    View:

        <?php
        class output {
            private $content;
            public function addContent($var) {
                $this->content = 'The variable is ' . $var;
            }
            public function displayContent() {
                include 'header.php';
                echo $content;
                include 'footer.php';
            }
        }
        ?>

    Is this right? If so, what is the controller?

    Read the article

  • Error: Cannot parse function definition from ' hello()' in Mytest.xs, line 9

    - by Nikole
    Hi, I am trying to use Perl XS on RHEL 5, but a simple program is giving an error. I followed the same code as Example 1 in perldoc perlxstut. Can anyone help me correct the following error?

        [root@localhost Mytest]# pwd
        /home/nikole/perlcode/Mytest
        [root@localhost Mytest]# ls
        blib  lib  MANIFEST  Mytest.xs  pm_to_blib  README  Changes  Makefile.PL  Mytest.c  Mytest.xsc  ppport.h  t
        [root@localhost Mytest]# perl Makefile.PL
        Checking if your kit is complete...
        Looks good
        Writing Makefile for Mytest
        [root@localhost Mytest]# make
        /usr/bin/perl /usr/lib/perl5/5.8.8/ExtUtils/xsubpp -typemap /usr/lib/perl5/5.8.8/ExtUtils/typemap Mytest.xs > Mytest.xsc && mv Mytest.xsc Mytest.c
        Error: Cannot parse function definition from ' hello()' in Mytest.xs, line 9
        Please specify prototyping behavior for Mytest.xs (see perlxs manual)
        make: *** [Mytest.c] Error 1

    Thanks
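
    For what it's worth, xsubpp is strict about layout in the XS section: the return type sits on a line of its own and the function name must begin in column 0, so a leading space is enough to trigger exactly this "Cannot parse function definition" error (note the ' hello()' in the message). The XSUB from perlxstut Example 1 looks like:

        void
        hello()
            CODE:
                printf("Hello, world!\n");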

    Read the article

  • Redirect output from sed 's/c/d/' myFile to myFile

    - by sixtyfootersdude
    I am using sed in a script to do a replace, and I want the replaced file to overwrite the original. Normally I think you would use this:

        % sed -i 's/cat/dog/' manipulate
        sed: illegal option -- i

    However, as you can see, my sed does not have that option. I tried this:

        % sed 's/cat/dog/' manipulate > manipulate

    but that just turns manipulate into an empty file (makes sense). This works:

        % sed 's/cat/dog/' manipulate > tmp; mv tmp manipulate

    but I was wondering if there is a standard way to redirect output into the same file the input was taken from.
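
    For the record, no redirection can do this safely: the shell truncates manipulate before sed ever reads it, so the tmp-file dance is the portable idiom. Two common alternatives, assuming GNU sed or perl is installed:

        sed -i 's/cat/dog/' manipulate        # GNU sed's in-place flag
        perl -pi -e 's/cat/dog/' manipulate   # perl's equivalent, available nearly everywhere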

    Read the article

  • Consolidating files in a single directory before you link them into the final executable

    - by David
    I am working on Solaris 10 with Sun Studio 11. I am refactoring some old code and trying to write unit tests for it. My makefile looks like:

        my_model.o: my_model.cc
            CC -c my_model.cc -I/../../include -library=stlport4 -instances=extern

        unit_test: unit_test.o my_model.o symbol_dictionary.o
            CC -o unit_test unit_test.o my_model.o symbol_dictionary.o -I../../include \
                -library=stlport4 -instances=extern

        unit_test.o: unit_test.cc
            CC -c unit_test.cc -I/../../include -library=stlport4 -instances=extern

        symbol_dictionary.o:
            cd ../../test-fixtures && $(MAKE) symbol_dictionary.o
            mv ../../test-fixtures/symbol_dictionary.o .

    In the ../../test-fixtures makefile, I have the following target:

        symbol_dictionary.o:
            CC -c symbol_dictionary.cc -I/../../include -library=stlport4 -instances=extern

    I use -instances=extern because I had linking problems before, and this was the recommended solution. The consequence is that in each directory being compiled, a SunWS_Cache directory is created to store the template instances. This is the long way to get to my question: is it standard practice to consolidate object files in a single directory before you link them into the final executable?

    Read the article

  • Why does this script work in the current directory but fail when placed in the path?

    - by kiloseven
    I wish to replace my failing memory with a very small shell script:

        #!/bin/sh
        if ! [ -a $1.sav ]; then
            mv $1 $1.sav
            cp $1.sav $1
        fi
        nano $1

    It is intended to save the original version of a script: if the original has been preserved before, it skips the move-and-copy-back (and I use move-and-copy-back to preserve the original timestamp). This works as intended if, after making it executable with chmod, I launch it from within the directory where I am editing, e.g. with ./safe.sh filename. However, when I move it into /usr/bin and then try to run it in a different directory (without the leading ./), it fails with:

        -bash: /usr/bin/safe.sh: /bin/sh: bad interpreter: Text file busy

    My question is: when I move this script into the path (verified by echo $PATH), why does it then fail? D'oh? Inquiring minds want to know how to make this work.

    Read the article

  • Trying to simplify some Javascript with closures

    - by mvalente
    Hi, I'm trying to simplify some JS code that uses closures, but I am getting nowhere (probably because I'm not grokking closures). I have some code that looks like this:

        var server = http.createServer(function (request, response) {
            var httpmethods = {
                "GET": function() { alert('GET') },
                "PUT": function() { alert('PUT') }
            };
        });

    and I'm trying to simplify it this way:

        var server = http.createServer(function (request, response) {
            var httpmethods = {
                "GET": function() { alertGET() },
                "PUT": function() { alertPUT() }
            };
        });

        function alertGET() { alert('GET'); }
        function alertPUT() { alert('PUT'); }

    Unfortunately, that doesn't seem to work. Thus: what am I doing wrong? Is it possible to do this? How? TIA -- MV
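
    One hedged observation: http.createServer is Node's API, and Node has no alert function, so both versions would throw a ReferenceError when a request actually arrives; with a console-based logger the refactoring itself works, and the handlers can be plain function references. A sketch:

        var http = require('http');

        function logGET() { console.log('GET'); }
        function logPUT() { console.log('PUT'); }

        var server = http.createServer(function (request, response) {
            // reference the named functions directly; no wrapper closures needed
            var httpmethods = { "GET": logGET, "PUT": logPUT };
            (httpmethods[request.method] || function () {})();
            response.end();
        });
        server.listen(8080);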

    Read the article

  • Question about the String.replaceAll() and String.replaceFirst() methods

    - by Java Doe
    I need to do a simple string replace operation on a segment of a string. I ran into the following issue and hope to get some advice. In the original string I got, I can replace a short string to something else. BUT, in the same original string, if I want to replace a much longer string such as the following, it won't work; nothing gets replaced after the call:

        <div class="more"><a href="http://SERVER_name/profiles/atom/mv/theboard/entries/related.do?email=xyz.com&ps=20&since=1273518953218&sinceEntryId=abc-def-123-456">More...</a></div>

    I tried these two methods:

        originalString.replaceFirst(moreTag, newContent);
        originalString.replaceAll(moreTag, newContent);

    Thanks in advance.
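
    One likely culprit, for what it's worth: replaceFirst and replaceAll treat their first argument as a regular expression, and the URL above is full of regex metacharacters (., ?, &). A minimal sketch of the literal-string alternatives, with moreTag and newContent as in the question:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        // replace() treats both arguments as literal text and replaces every occurrence
        String all = originalString.replace(moreTag, newContent);

        // to keep replaceFirst(), quote both the pattern and the replacement
        String first = originalString.replaceFirst(
                Pattern.quote(moreTag), Matcher.quoteReplacement(newContent));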

    Read the article

  • Build multiple sources into multiple targets in a directory

    - by Taschetto
    Folks, I'm learning about GNU Make and I have the following project structure:

        ~/projects
            /sysCalls
                ex1.c  ex2.c  ex3.c  ex4.c
                ex5.c  ex6.c  ex7.c

    Each .c source is very simple, has its own main function, and must be built into a corresponding binary (preferably named after its source), but I want the binaries to land in a bin directory (added to my .gitignore file). My current Makefile is:

        CC := gcc
        CFLAGS := -Wall -g
        SRC := $(wildcard *.c)
        TARGET := $(SRC:.c=)

        all: bin $(TARGET)
            mv $(TARGET) bin/

        bin:
            mkdir bin

        clean:
            rm -fr bin/

    It works as expected, but it always rebuilds every source, and I don't like moving everything to bin "manually". Any tips or ideas on how this Makefile could be improved?
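
    A minimal sketch of one common improvement, assuming GNU Make: a pattern rule that compiles each source straight into bin/, so only changed sources rebuild and nothing needs moving (the | makes bin an order-only prerequisite; recipe lines must be tab-indented):

        CC := gcc
        CFLAGS := -Wall -g
        SRC := $(wildcard *.c)
        BIN := $(SRC:%.c=bin/%)

        all: $(BIN)

        bin/%: %.c | bin
        	$(CC) $(CFLAGS) -o $@ $<

        bin:
        	mkdir -p bin

        clean:
        	rm -fr bin/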

    Read the article

  • How to model file system operations with REST?

    - by massive
    There are obvious counterparts for some of a file system's basic operations (e.g. ls and rm), but how would you implement not-so-straightforwardly-RESTful actions such as cp or mv? As the answers to the question "REST services - exposing non-data actions" suggest, the preferred way of implementing cp would be to GET the resource, DELETE it, and PUT it back again under a new name. But what if I need to do it efficiently, for instance when the resource is huge? How would I eliminate the superfluous transmission of the resource's payload to the client and back to the originating server? Here is an illustration. I have a resource:

        /videos/my_videos/2-gigabyte-video.avi

    and I want to copy it into a new resource:

        /videos/johns_videos/copied-2-gigabyte-video.avi

    How would I implement copy, move, or other file system actions the RESTful way? Or is there even a proper way? Am I doing it all wrong?
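
    For reference, WebDAV (RFC 4918) extends HTTP with COPY and MOVE methods plus a Destination header, which models exactly this without shipping the payload through the client. A sketch of the copy from the question (the host name is a placeholder):

        COPY /videos/my_videos/2-gigabyte-video.avi HTTP/1.1
        Host: media.example.com
        Destination: /videos/johns_videos/copied-2-gigabyte-video.avi

        HTTP/1.1 201 Created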

    Read the article

  • Notify via email if something wrong got happened in the shell script

    - by Nevzz03
    Below is the shell script, in which the for loop moves some files:

        fileexist=0
        for i in $( ls /data/read-only/clv/daily/Finished-HADOOP_EXPORT_&processDate#.done ); do
            mv /data/read-only/clv/daily/Finished-HADOOP_EXPORT_&processDate#.done /data/read-only/clv/daily/archieve-wip/
            fileexist=1
        done
        # --some other script below

    I want to notify myself via email if anything goes wrong in the moving process. I am running this script on a Hadoop cluster, so it is possible that the cluster goes down while it is running, etc. How can I build a better error-handling mechanism into this shell script? Any thoughts?
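
    A minimal sketch of one error-handling pattern (the glob stands in for the question's &processDate# placeholder, the address is fake, and a working mail/mailx is assumed to exist on the cluster):

        src=/data/read-only/clv/daily
        if ! mv "$src"/Finished-HADOOP_EXPORT_*.done "$src/archieve-wip/"; then
            # mv exits non-zero on any failure; report it before bailing out
            echo "moving HADOOP_EXPORT .done files failed on $(hostname) at $(date)" |
                mail -s "clv daily move FAILED" admin@example.com
            exit 1
        fi
        fileexist=1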

    Read the article

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous, unrelated request has finished. The following code is only broken in this way on nginx; it runs perfectly on Apache. This call starts a background process and waits for it to complete so it can display the final result:

        $.ajax({
            type: 'GET',
            async: true,
            url: $(this).data('route'),
            data: $('input[name=data]').val(),
            dataType: 'json',
            success: function (data) { /* do stuff */ },
            error: function (data) { /* handle errors */ }
        });

    The below is called after the above; on Apache it takes about 100ms to execute and repeats itself, showing progress for the data being written in the background:

        checkStatusInterval = setInterval(function () {
            $.ajax({
                type: 'GET',
                async: false,
                cache: false,
                url: '/process-status?process=' + currentElement.attr('id'),
                dataType: 'json',
                success: function (data) { /* update progress bar and status message */ }
            });
        }, 1000);

    Unfortunately, when this script is run from nginx, the progress request never finishes even a single request until the first AJAX request that sent the data is done. If I change async to true in the above, one executes every interval, but none of them complete until that very first AJAX request finishes. Here is the main nginx conf file:

        #user nobody;
        worker_processes 1;

        #error_log logs/error.log;
        #error_log logs/error.log notice;
        #error_log logs/error.log info;

        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            server_names_hash_bucket_size 64;

            # configure temporary paths
            # nginx is started with param -p, setting nginx path to serverpack installdir
            fastcgi_temp_path temp/fastcgi;
            uwsgi_temp_path temp/uwsgi;
            scgi_temp_path temp/scgi;
            client_body_temp_path temp/client-body 1 2;
            proxy_temp_path temp/proxy;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            #access_log logs/access.log main;

            # Sendfile copies data between one FD and other from within the kernel.
            # More efficient than read() + write(), since that requires transferring data to and from user space.
            sendfile on;

            # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet,
            # instead of using partial frames. This is useful for prepending headers before calling sendfile,
            # or for throughput optimization.
            tcp_nopush on;

            # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time.
            tcp_nodelay on;

            types_hash_max_size 2048;

            # Timeout for keep-alive connections. Server will close connections after this time.
            keepalive_timeout 90;

            # Number of requests a client can make over the keep-alive connection. This is set high for testing.
            keepalive_requests 100000;

            # allow the server to close the connection after a client stops responding. Frees up socket-associated memory.
            reset_timedout_connection on;

            # send the client a "request timed out" if the body is not loaded by this time. Default 60.
            client_header_timeout 20;
            client_body_timeout 60;

            # If the client stops reading data, free up the stale client connection after this much time. Default 60.
            send_timeout 60;

            # Size Limits
            client_body_buffer_size 64k;
            client_header_buffer_size 4k;
            client_max_body_size 8M;

            # FastCGI
            fastcgi_connect_timeout 60;
            fastcgi_send_timeout 120;
            fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value
            fastcgi_buffer_size 64k;
            fastcgi_buffers 4 64k;
            fastcgi_busy_buffers_size 128k;
            fastcgi_temp_file_write_size 128k;

            # Caches information about open FDs, frequently accessed files.
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # Turn on gzip output compression to save bandwidth.
            # http://wiki.nginx.org/HttpGzipModule
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            gzip_http_version 1.1;
            gzip_vary on;
            gzip_proxied any;
            #gzip_proxied expired no-cache no-store private auth;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

            # show all files and folders
            autoindex on;

            server {
                # access from localhost only
                listen 127.0.0.1:80;
                server_name localhost;
                root www;

                # the following default "catch-all" configuration, allows access to the server from outside.
                # please ensure your firewall allows access to tcp/port 80. check your "skype" config.
                # listen 80;
                # server_name _;

                log_not_found off;
                charset utf-8;
                access_log logs/access.log main;

                # handle files in the root path /www
                location / {
                    index index.php index.html index.htm;
                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html
                #
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root www;
                }

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
                #
                location ~ \.php$ {
                    try_files $uri =404;
                    fastcgi_pass 127.0.0.1:9100;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }

                # add expire headers
                location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ {
                    expires 30d;
                }

                # deny access to .htaccess files (if Apache's document root concurs with nginx's one)
                # deny access to git & svn repositories
                location ~ /(\.ht|\.git|\.svn) {
                    deny all;
                }
            }

            # include config files of "enabled" domains
            include domains-enabled/*.conf;
        }

    Here is the enabled domain conf file:

        access_log off;
        access_log C:/server/www/test.dev/logs/access.log;
        error_log C:/server/www/test.dev/logs/error.log;

        # HTTP Server
        server {
            listen 127.0.0.1:80;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
        }

        # HTTPS server
        server {
            listen 443 ssl;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
            include domains-common/ssl.conf;
        }

    Contents of ssl.conf:

        # OpenSSL for HTTPS connections.
        ssl on;
        ssl_certificate C:/server/bin/openssl/certs/cert.pem;
        ssl_certificate_key C:/server/bin/openssl/certs/cert.key;
        ssl_session_timeout 5m;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_param HTTPS on;
            fastcgi_pass 127.0.0.1:9100;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Contents of location.conf:

        # Remove trailing slash to please Laravel routing system.
        if (!-d $request_filename) {
            rewrite ^/(.+)/$ /$1 permanent;
        }

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # We don't need .ht files with nginx.
        location ~ /(\.ht|\.git|\.svn) {
            deny all;
        }

        # Added cache headers for images.
        location ~* \.(png|jpg|jpeg|gif)$ {
            expires 30d;
            log_not_found off;
        }

        # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks.
        location ~* \.(js|css)$ {
            expires 3h;
            log_not_found off;
        }

        # Add expire headers.
        location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ {
            expires 30d;
        }

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri /index.php =404;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9100;
        }

    Any ideas where this is going wrong?
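
    One thing worth ruling out (a guess, not a diagnosis): both server blocks hand PHP to a single FastCGI listener on 127.0.0.1:9100, and a lone php-cgi process serves exactly one request at a time, which would serialize the AJAX calls just like this regardless of the async flag. A sketch of starting it with worker children (PHP_FCGI_CHILDREN is php-cgi's standard environment variable):

        # let the FastCGI daemon prefork workers so requests can overlap
        PHP_FCGI_CHILDREN=4 PHP_FCGI_MAX_REQUESTS=1000 php-cgi -b 127.0.0.1:9100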

    Read the article
