Search Results


  • How can I add config options for a specific hostname outside <VirtualHost>?

    - by Boldewyn
    I'm using Apache 2.2 to serve the domains foo.example.com and bar.example.com with <VirtualHost> blocks:

        <VirtualHost 127.0.0.1:80>
            ServerName foo.example.com
        </VirtualHost>
        <VirtualHost 127.0.0.1:80>
            ServerName bar.example.com
        </VirtualHost>

    My problem is that I need to add configuration options targeted only at foo.example.com, in a separate file (say, /etc/apache/sites-enabled/foo.conf). This file is included before the VirtualHost statement is issued, and it can't be embedded inside it. Can I (and if so, how) target configuration settings at foo.example.com requests only, outside the VirtualHost container?
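
    A hedged note, not from the question itself: Apache 2.4 gained a global <If> directive that can match the requested host from outside any <VirtualHost>; it does not exist on the asker's 2.2. A minimal sketch, assuming 2.4 and mod_headers for the example directive:

        <If "%{HTTP_HOST} == 'foo.example.com'">
            # directives here apply only to foo.example.com requests
            Header set X-Served-For "foo"
        </If>

    On 2.2 the usual workaround runs the other way: keep foo.conf as a fragment and pull it into foo's own <VirtualHost> with an Include line.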

  • Elevate the weight of browsing history in Google Chrome's autocomplete

    - by maayank
    Google Chrome auto-completes web addresses as you type them in the address bar. Alas, it gives absurdly more weight to Google's own auto-suggest than to my own browsing history, which seems a bit foolish - if I regularly (say, twice a week) check a certain website with the keywords "foo bar ponies" in its URL, it is reasonable to expect that I will want to visit that site again and not other sites. While this is a bit subjective, at the very least I would expect such URLs to appear in the list Chrome suggests, even if not at the top. Is there some plugin/secret option that alters the default behavior?

  • Apply SetEnvIf after Apache RewriteRule

    - by coneybeare
    I have a working Apache rewrite rule:

        RewriteCond %{HTTP_HOST} ^.*foo.com
        RewriteRule (.*) http://bar.com$1 [R=301,QSA,L]

    and some working dontlog SetEnvIf lines:

        SetEnvIf Request_URI "^/server-status$" dontlog
        SetEnvIf Request_URI "^/home/ping$" dontlog
        SetEnvIf Request_URI "^/haproxy-status$" dontlog
        SetEnvIf User-Agent ".*internal dummy connection.*" dontlog
        CustomLog /var/log/apache2/access.log combined env=!dontlog

    but I can't figure out how to stop the RewriteRule from logging a duplicate line. foo.com and bar.com are both on the same machine. I would expect this rule to work, but it did not:

        SetEnvIf Host "foo.com" dontlog

    I still get duplicates in the Apache log:

        10.250.18.97 - - [06/Apr/2012:16:57:12 +0000] "GET / HTTP/1.1" 200 732 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/534.55.3 (KHTML, like Gecko) Version/5.1.5 Safari/534.55.3"
        68.194.30.42 - - [06/Apr/2012:16:57:12 +0000] "GET / HTTP/1.1" 200 732 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/534.55.3 (KHTML, like Gecko) Version/5.1.5 Safari/534.55.3"

    where 10.250.18.97 is the server's IP. How can I prevent that RewriteRule from logging?
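
    A hedged guess, going only on what's quoted above: the first duplicate comes from the server's own address, so filtering on the client IP may be simpler than filtering on the Host header:

        SetEnvIf Remote_Addr "^10\.250\.18\.97$" dontlog

    If the goal really is to drop everything addressed to foo.com, note that SetEnvIf Host matches the Host request header as a regex, so it only fires when the browser actually sent that host - the follow-up request after the 301 arrives with Host: bar.com and will never match "foo.com".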

  • Frequent and weird wifi disconnections

    - by Sidou
    How would you explain, troubleshoot (and solve) the following problem? A D-Link 2640R wifi ADSL modem router is installed in the living room at about 1.8 m height, working fine, synchronising and serving a stable internet connection.

    First situation: Laptop 01 is at the other end of the house, in room01, about 15 m south of the living room. It gets a stable signal of good to very good quality, with no disconnections. Laptop 02 is in room02, opposite room01 (5 m west), which puts it at almost the same distance and direction from the router, 15 m to the north. It also gets a stable signal of good to very good quality, with no disconnections.

    Second situation: Laptop 01 is moved to room03, north of the living room (actually just 3 m behind the wall where the router sits). It gets a stable signal of excellent quality, with no disconnections. Laptop 02 is still in room02 but now experiences frequent disconnections - it is actually almost impossible to get on the Internet, even though the signal level is still very good. Either there is no Internet while the wifi icon shows a connection to the access point, or no connection is established at all; this happens every 2 minutes, which means virtually no Internet, since I get a window of about a minute to load any website or even reach the router's web-based control panel.

    If Laptop 01 is shut down completely, or its wifi adapter is disabled, or it is running but its wifi MAC address is forbidden, then Laptop 02 has no problem at all. If Laptop 02 is moved nearer to the router, into the living room for instance, then no connection problem occurs even if Laptop 01 is also connected. And if we move Laptop 01 back to its original location (room01), there is no problem either.

    I'm completely lost and don't know how to address this issue. I tried changing the wifi channel and even the auto channel scan, but that didn't solve it. The problem probably comes from Laptop 01 being in its new location, or from some sort of interference, since it occurs only under the described conditions, but I have no idea how to solve it! I also scanned the neighborhood for wifi congestion using inSSIDer; there are a few other access points but they don't seem to affect the situation. Any ideas about the steps to follow or tools to use?

  • Backup Script - Could Not Open Input File

    - by Iestyn
    This is the backup script that I've got going: http://pastebin.com/4g4E6wUz

    This is the cron entry:

        /usr/local/bin/php /home/backups/backup-db.php --filename-dated ALL

    No matter what I do, I keep getting this error:

        Could not open input file: /home/backups/backup-db.php

    That's the correct location of the file. I just don't know what else to try. I feel I've been working on this for so long now that I've explored every avenue; on the other hand, sometimes I think the time I've spent on it is clouding my thoughts and I'm missing something stupidly obvious. Just wondering if someone can give me a few pointers? Also, on a last note, does anyone know of a way/article to auto-generate a full backup of cPanel every n days and store it in a location that I want? Kind regards.
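
    A hedged first step, assuming cron runs the job as a different user than the one who tested it: reproduce the exact command as that user and check that every directory on the path is readable by it (the user name below is hypothetical):

        # run the exact command as the cron user
        sudo -u backupuser /usr/local/bin/php /home/backups/backup-db.php --filename-dated ALL

        # PHP prints "Could not open input file" when it cannot read the script,
        # so verify ownership and permissions along the whole path
        ls -ld /home /home/backups /home/backups/backup-db.php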

  • mod_rewrite for specific domains in a mappings file

    - by scott
    I have a bunch of domains that I want to send to one domain, but to various parts of that domain.

        # this is what I currently have
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^.*\.?foo\.com$ [NC]
        RewriteRule ^.*$ ${domainmappings:www.foo.com} [L,R=301]

        # rewrite map file
        www.foo.com www.domain.com/domain/foo.com.php
        www.bar.com www.domain.com/domain/bar.com.php
        www.baz.com www.domain.com/other/baz.php.foo

    The problem is that I don't want each domain to have to be part of the RewriteCond. I tried

        RewriteCond %{HTTP_HOST} ^www\.(.*)
        RewriteRule (.*) http://%1/$1 [R=301,L]

    but that will do it for EVERY domain. I only want the domains that are in the mappings file to redirect, and then continue on to other rewrites if the host doesn't match any domain in the mappings file.
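
    A hedged sketch, assuming the map is declared with RewriteMap in server/vhost context: look the incoming host itself up in the map, with a sentinel default so non-mapped hosts fall through to later rules. Map keys must match the incoming Host exactly, so bare-domain entries may need adding alongside the www. ones:

        RewriteMap domainmappings txt:/etc/apache2/domainmappings.txt
        RewriteCond ${domainmappings:%{HTTP_HOST}|NOT_FOUND} !=NOT_FOUND
        RewriteRule ^.*$ http://${domainmappings:%{HTTP_HOST}} [L,R=301]

    Hosts absent from the map leave the RewriteCond false, so the request continues to whatever rewrites follow.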

  • IIS: redirect everything to another URL, except for one Directory

    - by DrStalker
    I have an IIS server (IIS 6, Windows Server 2003) that hosts the site http://www.foo.com. I want any request to http://foo.com (no matter what path/filename is used) to redirect to http://www.bar.org/AwesomePage.html, UNLESS the request is for http://www.foo.com/specialdir, in which case the HTML files in the local directory specialdir should be used.

    The problem I have is that once the redirect is set, it also affects /specialdir - even if I right-click on that directory and select "content should come from ... local directory", the change does not take effect, and the directory still shows as redirecting to http://www.bar.org/AwesomePage.html. The same thing happens if I try to set individual files to load from the local system instead of redirecting - IIS gives no error, but the change does not take effect and the files still show as being redirected. How can I set specialdir to override the redirection to the new URL?
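
    A hedged idea, assuming the MMC snap-in is silently re-inheriting the parent's redirect: IIS 6 stores the redirect as the HttpRedirect metabase property, so deleting it explicitly on the child node sometimes sticks where the GUI does not. The site ID "1" below is an assumption; check it in the metabase first:

        REM adsutil.vbs ships in Inetpub\AdminScripts
        cscript adsutil.vbs DELETE W3SVC/1/ROOT/specialdir/HttpRedirect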

  • nginx+django serving static files

    - by avalore
    I have followed the instructions for setting up Django with nginx from the Django wiki (https://code.djangoproject.com/wiki/DjangoAndNginx) and have nginx set up as follows (with a few name changes to fit my setup):

        user nginx nginx;
        worker_processes 2;
        error_log /var/log/nginx/error_log info;

        events {
            worker_connections 1024;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] '
                            '"$request" $status $bytes_sent '
                            '"$http_referer" "$http_user_agent" '
                            '"$gzip_ratio"';
            client_header_timeout 10m;
            client_body_timeout 10m;
            send_timeout 10m;
            connection_pool_size 256;
            client_header_buffer_size 1k;
            large_client_header_buffers 4 2k;
            request_pool_size 4k;
            gzip on;
            gzip_min_length 1100;
            gzip_buffers 4 8k;
            gzip_types text/plain;
            output_buffers 1 32k;
            postpone_output 1460;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 75 20;
            ignore_invalid_headers on;
            index index.html;

            server {
                listen 80;
                server_name localhost;

                location /static/ {
                    root /srv/static/;
                }

                location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|mov) {
                    access_log off;
                    expires 30d;
                }

                location / {
                    # host and port to fastcgi server
                    fastcgi_pass 127.0.0.1:8080;
                    fastcgi_param PATH_INFO $fastcgi_script_name;
                    fastcgi_param REQUEST_METHOD $request_method;
                    fastcgi_param QUERY_STRING $query_string;
                    fastcgi_param CONTENT_TYPE $content_type;
                    fastcgi_param CONTENT_LENGTH $content_length;
                    fastcgi_pass_header Authorization;
                    fastcgi_intercept_errors off;
                    fastcgi_param REMOTE_ADDR $remote_addr;
                }

                access_log /var/log/nginx/localhost.access_log main;
                error_log /var/log/nginx/localhost.error_log;
            }
        }

    Static files aren't being served (nginx returns 404). If I look in the access log, it seems nginx is looking in /etc/nginx/html/static... rather than in /srv/static/ as specified in the config. I've no clue why it's doing this; any help would be hugely appreciated.
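
    A hedged reading of the config above: regex locations beat plain prefix locations in nginx, so the extension block (which also matches .css/.jpg files under /static/, and has no root of its own) is probably serving those requests from the compiled-in default root. A sketch of one fix:

        # "^~" makes the prefix match final, so the extension regex never fires for /static/
        location ^~ /static/ {
            root /srv;   # /static/foo.css -> /srv/static/foo.css (root appends the URI)
        }

    The original "root /srv/static/;" also looks suspect: with root, nginx appends the full URI, so /static/x.css would map to /srv/static/static/x.css; "alias /srv/static/;" is the other common way out.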

  • How can I switch a running Linux OS from disk to running from RAM without restarting?

    - by vfclists
    Is it possible to switch to running Linux from RAM or a RAM disk after starting initially from disk? E.g. you need to make an image of your hard disk and FTP it to a remote location; some time later you want the image back, so you start the system from disk as usual and restore the image you FTP'd from the remote location back into place. More like a CloneZilla backup and restore, but without booting the server from CD or USB disk - starting from the normal hard disk.

    Notes on environment: I should have mentioned it earlier. It is a remotely hosted VM where I cannot boot into a recovery console mode or do a netinstall. It will always boot from the same disk. That means that if there is serious corruption I can't repair it offline, which is why being able to FTP a previously saved backup into place is so important.
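
    For what it's worth, a heavily hedged sketch of the classic trick - copy the root filesystem into a tmpfs and pivot into it, so the real disk can be unmounted and overwritten. Every step here is an assumption about the distribution; daemons holding files open on the old root must be restarted, and this should be rehearsed on a throwaway VM first:

        mkdir /tmp/tmproot
        mount -t tmpfs -o size=2G none /tmp/tmproot    # RAM must fit the root fs
        cp -ax / /tmp/tmproot                          # copy root, this filesystem only
        mkdir /tmp/tmproot/oldroot
        mount --make-rprivate /
        pivot_root /tmp/tmproot /tmp/tmproot/oldroot
        # restart sshd and friends from the new root, then umount /oldroot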

  • apache2 Webdav using VirtualDocumentRoot

    - by picca
    I'm trying to get dynamic WebDAV up on my virtual hosts:

        <VirtualHost *:80>
            # http://www.example.com/test.txt -> /var/www/example.com/www/test.txt
            VirtualDocumentRoot /var/www/%-2.0.%-1.0/%-3+/
            <Location /webdav>
                Dav On
                AuthType Basic
                AuthName "example.com"
                AuthUserFile /var/www/[PROBLEM-1]/passwd.dav
                Require valid-user
            </Location>
        </VirtualHost>

    Is there any way I can set the PROBLEM-1 placeholder dynamically, based on whatever comes in with HTTP_HOST - more precisely, part of it? Example:

        HTTP_HOST = www.example.com -> PROBLEM-1 = example.com
        HTTP_HOST = example.com     -> PROBLEM-1 = example.com

    What I'm trying to do here is load the DAV passwd file dynamically, based on which domain is requested. It is something like "groups", if you wish, so that the owner of domainA is not allowed to access files of domainB. So maybe there is some other solution, based on the AuthGroupFile directive?
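
    A hedged alternative, since AuthUserFile does not expand variables: mod_macro (a separate module for Apache 2.2) can stamp out one small vhost per domain while keeping a single template. A sketch, assuming mod_macro is installed:

        <Macro DavHost $domain>
            <VirtualHost *:80>
                ServerName $domain
                ServerAlias www.$domain
                DocumentRoot /var/www/$domain/www
                <Location /webdav>
                    Dav On
                    AuthType Basic
                    AuthName "$domain"
                    AuthUserFile /var/www/$domain/passwd.dav
                    Require valid-user
                </Location>
            </VirtualHost>
        </Macro>

        Use DavHost example.com
        Use DavHost otherdomain.com

    You lose the fully wildcard behaviour of VirtualDocumentRoot, but each domain gets its own passwd file, which is exactly the per-domain isolation asked for.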

  • disable browser localization

    - by broiyan
    How do I get the websites that I visit to stop localizing their language, presumably according to my IP location? This is a website-specific issue: for example, economist.com and superuser.com don't do it, but Google Checkout and craigslist.org do. Is there a way to set up Ubuntu and Firefox so that English will always be used for all web pages displayed?

    Edit: Of course, many web pages have a link to an English version, but sometimes they don't. Such links usually appear on the root resource, but sometimes I see non-English languages on child resources where no such links appear. Example: most Blogger.com blogs appear in English, but when I go to a blogger's profile ("view my complete profile"), it appears in another language matching my geographic location.
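
    A hedged partial fix: make sure the browser only ever asks for English, since some sites honour the Accept-Language header before falling back to IP geolocation. In Firefox this is the intl.accept_languages preference (about:config, or Preferences > Content > Languages):

        // user.js / prefs.js form of the same setting - request English only
        user_pref("intl.accept_languages", "en-us,en");

    Sites that localize purely by IP (Google often does) ignore this; for Google specifically, appending ?hl=en to the URL or visiting google.com/ncr are the usual workarounds.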

  • Server 2008 Active Directory file sharing restriction

    - by Usman
    My network scenario is: two locations connected to each other through wireless towers. The first network's IP addresses are 10.10.10.1 to 10.10.10.150; the second network's are 10.10.10.200 to 10.10.10.254. Both locations use their own Active Directory (MS Server 2008) and share their data with the other network through file sharing.

    My problem is that every single client can explore the whole network by simply typing \\computername. I want to implement some restriction on the network regarding file sharing: I want to restrict the first network's clients from opening the second network's computers or servers. Is there any group policy or folder-permission scenario that I can implement on both networks?
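
    One hedged option, since share permissions alone won't stop browsing: block SMB at the Windows firewall by source address range, pushed out via Group Policy. A sketch of the underlying rule, with the address range taken from the question:

        netsh advfirewall firewall add rule name="Block SMB from network 2" ^
            dir=in action=block protocol=TCP localport=445 ^
            remoteip=10.10.10.200-10.10.10.254

    Applied to the first network's machines, this stops clients from the second range opening their shares; NTFS/share permissions scoped to domain groups are the cleaner long-term answer if a trust between the two directories exists.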

  • Django fails to find static files served by nginx

    - by Simon
    I know this is a really noobish question, but I can't find any solution despite finding the problem trivial. I have a Django application deployed with gunicorn. The static files are served by the nginx server at a URL such as myserver.com/static/admin/css/base.css. However, my Django application keeps looking for the static files at myserver.com:8001/static/admin/css/base.css and is obviously failing (404). I don't know how to fix this. Is it a Django or an nginx problem? Here is my nginx configuration file:

        server {
            server_name myserver.com;
            access_log off;

            location /static/ {
                alias /home/myproject/static/;
            }

            location / {
                proxy_pass http://127.0.0.1:8001;
                proxy_set_header X-Forwarded-Host $server_name;
                proxy_set_header X-Real-IP $remote_addr;
                add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
            }
        }

    Thanks for the help!
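
    A hedged guess from the config above: Django rebuilds absolute URLs from the Host header it receives, and the proxied request carries the backend's host:port. Forwarding the browser's Host is the usual fix:

        location / {
            proxy_pass http://127.0.0.1:8001;
            proxy_set_header Host $host;        # pass the original Host through
            proxy_set_header X-Real-IP $remote_addr;
        }

    If the X-Forwarded-Host header is kept instead, Django needs USE_X_FORWARDED_HOST = True in settings.py for it to be honoured.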

  • How to disable auto-assume in Google Chrome

    - by Ieyasu Sawada
    I've noticed that whenever I'm signed in to Google Chrome (version 21.0), it automatically assumes it knows what I want to search for. I basically use the address bar to search (my default search engine is Google); I don't type google.com into the address bar and then search from there. So what happens is that when I type something - for example "vernier", for vernier caliper - I'm automatically redirected to my Facebook account, which has a user account name of vern.ancheta. It's getting really annoying; this happens for every search term I use, maybe even ones I haven't used before in my entire Google search history. It always assumes, as if it knows what I'm really thinking. What's the solution for this? Is this a bug in Google Chrome or just one of its annoying features? Please enlighten me on this.

  • route to vpn based on destination

    - by inquam
    I have a VPN connection on a Windows 7 machine, set up to connect to a server in the US. Is it possible, and if so how, to set things up so that .com destinations use the VPN interface and .se destinations use the "normal" connection?

    Edit (clarification): This is for outbound connections, i.e. the machine connects to a server on foo.com and uses the VPN, and the machine connects to bar.se and uses the "normal" interface. Let's say foo.com has an IP filter that ensures users are located in the USA; if I go through the VPN I get a US IP and everything is fine. But if all traffic goes this way, the bar.se server, which has an IP filter ensuring users are in Sweden, will complain. So I want to route the traffic depending on server location: US servers through the VPN and others through the normal interface.
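
    Routing happens per destination IP, not per TLD, so a hedged sketch of the usual approach: disable the VPN's default route and add explicit routes for the networks that must look American. In the VPN connection's TCP/IPv4 advanced settings, untick "Use default gateway on remote network", then (the addresses below are placeholders):

        REM resolve the US-only service, then pin a route to it via the VPN gateway
        nslookup foo.com
        route add 203.0.113.0 mask 255.255.255.0 10.8.0.1 metric 5

    There is no built-in way to do this by domain suffix; anything domain-based needs the routes refreshed when the service's IPs change, or a local proxy/PAC-style split instead.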

  • Double try_files to solve nginx's "No input file specified" issue

    - by Howard
    I am following the nginx wiki (http://wiki.nginx.org/WordPress) to set up my WordPress:

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

    With the above lines, when a static file is not found the request falls through to WordPress's index.php - that is okay, but...

    Problem: when I request a non-existent PHP script, e.g. http://www.example.com/foo.php, nginx gives me "No input file specified". I want nginx to return a 404 instead of the above message, so in the main fcgi config I added a second try_files:

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            include /etc/nginx/fastcgi_params;
            ...
        }

    This works, but I am wondering whether there is a better way to handle it?
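
    For what it's worth, try_files $uri =404 inside the PHP location is the commonly recommended guard rather than a hack: "No input file specified" comes from PHP itself when FPM is handed a path that doesn't exist, so stopping the request in nginx is the right layer. A hedged companion setting on the PHP side:

        ; php.ini - refuse path-info guessing on nonexistent scripts
        cgi.fix_pathinfo = 0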

  • nginx public webdav server

    - by Gert Cuykens
    Can you check the user group of a $remote_user?

        location ~ ^/home/(.*)$ {
            alias /home/$remote_user/$1;
            auth_pam "Restricted";
            auth_pam_service_name "nginx";
            dav_methods PUT DELETE MKCOL COPY MOVE;
            dav_access group:rw all:r;
            create_full_put_path on;
        }

        location ~ ^/get/(.*)$ {
            alias /home/$1;
            # check the group of the $remote_user
        }

        curl -T test.txt 'http://gert:[email protected]/home/'
        curl 'http://friend:[email protected]/get/gert/test.txt'
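
    nginx has no built-in notion of unix groups, so a hedged sketch using the auth_request module (an add-on in older nginx releases): delegate the group check to a tiny internal backend that answers 2xx or 403. The /groupcheck endpoint and its backend on port 9001 are hypothetical:

        location ~ ^/get/(.*)$ {
            auth_request /groupcheck;
            alias /home/$1;
        }

        location = /groupcheck {
            internal;
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
            proxy_set_header X-User $remote_user;
            proxy_set_header X-Original-URI $request_uri;
            proxy_pass http://127.0.0.1:9001;   # small script that tests group membership
        }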

  • Adding text to the beginning and end of a number of files?

    - by John Feminella
    I have a number of files in a directory hierarchy. For each file, I'd like to add "abcdef" to the beginning, on its own line, and "ghijkl" to the end, on its own line. For example, if the files initially contained:

        # one/foo.txt
        apples
        bananas

        # two/three/bar.txt
        coconuts

    then afterwards, I'd expect them to contain:

        # one/foo.txt
        abcdef
        apples
        bananas
        ghijkl

        # two/three/bar.txt
        abcdef
        coconuts
        ghijkl

    What's the best way to do this? I've gotten as far as:

        # put stuff at start of file
        find . -type f -print0 | xargs -0 sed -i 's/.../abcdef/g'

        # put stuff at end of file
        find . -type f -print0 | xargs -0 sed -i 's/.../ghijkl/g'

    but I can't seem to figure out what to put in the ellipses.
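
    A hedged answer, assuming GNU sed (the bare -i above suggests it): substitution isn't the right tool here, because s/// edits text within lines. sed's insert and append commands address the first and last lines directly:

        # insert "abcdef" before line 1 of each file
        find . -type f -print0 | xargs -0 sed -i '1i abcdef'

        # append "ghijkl" after the last line of each file
        find . -type f -print0 | xargs -0 sed -i '$a ghijkl'

    Note the single quotes around '$a ghijkl' so the shell doesn't try to expand $a.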

  • Restoring file properties but not the complete files, from backup

    - by Jon
    While copying data from my old storage on a Linux computer to the new (Linux-based) NAS, I accidentally failed to carry the file properties (most importantly, the modification dates) along to the new location. I have also continued to use/modify the files at the new location, and hence cannot just copy everything over again. What I would like to do is a diff between the files in the old vs. the new storage and, for those that are identical, restore the properties from the Linux storage files to the NAS storage files. Is there a clever way, such as a script or a tool, to do this? I could run it either on the Linux box or, in the worst case, from a remote Windows computer. Grateful for any suggestions. /Jon
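
    A hedged sketch, assuming both trees are mounted on the Linux box (the paths below are placeholders): compare byte-for-byte, and copy the timestamp across only when the contents still match.

        #!/bin/sh
        OLD=/mnt/old-storage
        NEW=/mnt/nas
        cd "$OLD" || exit 1
        find . -type f | while IFS= read -r f; do
            if cmp -s -- "$f" "$NEW/$f"; then
                # contents identical: stamp the NAS copy with the old mtime
                touch -r "$f" -- "$NEW/$f"
            fi
        done

    Files modified on the NAS fail the cmp test and keep their new dates, which is exactly what's wanted here.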

  • Nginx fastcgi problems with django (double slashes in url?)

    - by wizard
    I'm deploying my first Django app. I'm familiar with nginx and FastCGI from deploying php-fpm, but I can't get Python to recognize the URLs, and I'm at a loss as to how to debug this further. I'd welcome solutions to this problem and tips on debugging FastCGI problems. Currently I get a 404 page regardless of the URL, and for some reason a double slash. For http://www.site.com/admin/ :

        Page not found (404)
        Request Method: GET
        Request URL: http://www.site.com/admin//

    My urls.py from the debug output - these work in the dev server:

        Using the URLconf defined in ahrlty.urls, Django tried these URL patterns, in this order:
        ^listings/
        ^admin/
        ^accounts/login/$
        ^accounts/logout/$

    My nginx config:

        server {
            listen 80;
            server_name beta.ahrlty.com;
            access_log /home/ahrlty/ahrlty/logs/access.log;
            error_log /home/ahrlty/ahrlty/logs/error.log;

            location /static/ {
                alias /home/ahrlty/ahrlty/ahrlty/static/;
                break;
            }

            location /media/ {
                alias /usr/lib/python2.6/dist-packages/django/contrib/admin/media/;
                break;
            }

            location / {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:8001;
                break;
            }
        }

    and my fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;

        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    And lastly, I'm running FastCGI from the command line with Django's manage.py:

        python manage.py runfcgi method=threaded host=127.0.0.1 port=8080 pidfile=mysite.pid minspare=4 maxspare=30 daemonize=false

    I'm having a hard time debugging this one. Does anything jump out at anybody?

    Notes: nginx version nginx/0.7.62, Django svn trunk rev 13013.
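
    Two hedged observations from the config above, neither confirmed beyond what's quoted. First, nginx passes to port 8001 while runfcgi listens on 8080, so one of the two numbers is presumably a transcription slip or the actual bug. Second, the doubled slash is the classic symptom of handing Django's flup handler both SCRIPT_NAME and PATH_INFO as the full script name; blanking SCRIPT_NAME is the usual cure:

        location / {
            include /etc/nginx/fastcgi_params;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_param SCRIPT_NAME "";   # avoid // in the URL Django resolves
            fastcgi_pass 127.0.0.1:8080;    # must match runfcgi's port=
        }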

  • puppet: propagate variable from node to ERB template?

    - by picca
    Is it possible to declare a variable in a node and then propagate it all the way down to an ERB template? Example:

        node basenode {
            $myvar = "bar" # default
            include myclass
        }

        node mynode extends basenode {
            $myvar = "foo"
        }

        class myclass {
            file { "/root/myfile":
                content => template("myclass/mytemplate.erb"),
                ensure  => present,
            }
        }

    Source of mytemplate.erb:

        myvar has value: <%= myvar %>

    I know that my example might be complicated, but I'm trying to propagate a file to (almost) all my nodes, and I want its content to be altered depending on the node that requests the file. The $myvar = "bar" statement should be the default when a node does not override its value. Is there a solution to my problem? I'm using Puppet 0.24.5
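
    A hedged note on the template side: in Puppet of that era, variables reach templates through dynamic scoping, and an explicit scope lookup makes the intent clearer and tolerates an unset variable:

        myvar has value: <%= scope.lookupvar('myvar') %>

    One caveat from how node inheritance worked around 0.24: an override in a subnode may not be visible where the parent node already evaluated the include, so if "foo" never shows up, moving the include into the subnode after the assignment is the usual workaround.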

  • Rsync-like Windows backup tool

    - by Halfgaar
    I need to back up some Windows machines and have been unable to find the proper tool. What I need is a tool that does efficient copying of changed files to a Windows network location, like rsync does. In turn, the server will then back that up using rdiff-backup, a tool which does very clever incremental backups. Right now I'm using Windows 7's included backup feature, but I really don't get it. It's too much off-topic, but it doesn't suffice (and seems buggy as well). I looked into Amanda, but as soon as it wanted to install MySQL, I aborted. I also tried DeltaCopy, but unfortunately I don't remember what the problem with that was... Any advice for an rsync-like tool that just does daily syncs to a network location?
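
    For what it's worth, Windows ships one candidate: robocopy can mirror only changed files to a UNC path and is scriptable from Task Scheduler. A hedged sketch (the paths are placeholders):

        robocopy C:\data \\nas\backup\data /MIR /Z /R:2 /W:5 /NP /LOG:C:\logs\backup.log

    /MIR mirrors the tree (including deletions on the target - mind how rdiff-backup sees that), /Z copies in restartable mode, and /R with /W keeps it from hanging on locked files. Native rsync via cwRsync is the other common route if rsync semantics are wanted end-to-end.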

  • Nginx not working properly on subdomains [SOLVED]

    - by javipas
    I've been trying to set up a SugarCRM instance. I've got a domain that has its main site on one server (www.domain.com), and I've created a subdomain (sugar.domain.com), but I want this subdomain to be hosted on another server. This second server has nginx installed, and there's a working WordPress blog there on a virtual host, so I would need to set up a second site. To do this I've created the directory structure, and I've created an /etc/nginx/sites-enabled/sugar.domain.com configuration file that contains the following:

        server {
            listen 80;
            server_name sugar.domain.com *.domain.com;
            access_log /var/www/sugar/log/access.log;
            error_log /var/www/sugar/log/error.log info;

            location / {
                root /var/www/sugar;
                index index.php;
            }

            location ~ .php$ {
                fastcgi_split_path_info ^(.+\.php)(.*)$;
                fastcgi_pass backend;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/sugar/$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_param QUERY_STRING $query_string;
                fastcgi_param REQUEST_METHOD $request_method;
                fastcgi_param CONTENT_TYPE $content_type;
                fastcgi_param CONTENT_LENGTH $content_length;
                fastcgi_intercept_errors on;
                fastcgi_ignore_client_abort on;
                fastcgi_read_timeout 180;
            }

            ## Disable viewing .htaccess & .htpassword
            location ~ /\.ht {
                deny all;
            }
        }

        upstream backend {
            server 127.0.0.1:9000;
        }

    As far as I know, I need the *.domain.com parameter in the server_name directive, but something is breaking here: I get either a 403 Forbidden error, or I get PHP code (I can read the PHP file's code in the browser, like normal text) that somehow is not executed. I've tried setting permissions to 755 inside the /var/www/sugar/ directory, and I've also set the owner:group with chown -R www-data:www-data /var/www/sugar/.

    The thing is, I don't know if my mistake is in the nginx site configuration, in my folder permissions, or somewhere else :( Could it be because the main domain (www.domain.com) is hosted on another server? Do they have to be together necessarily?
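
    Two hedged suspects in the config as quoted. The regex location ~ .php$ has an unescaped dot, so it matches more than intended; and, more importantly, seeing raw PHP in the browser usually means some requests never reach the fastcgi block at all. The root also lives inside location /, so the .php location has none of its own, which can feed FPM a bad SCRIPT_FILENAME. A sketch of the conventional shape:

        server {
            listen 80;
            server_name sugar.domain.com;
            root /var/www/sugar;          # server-wide, not per-location
            index index.php;

            location ~ \.php$ {           # escaped dot
                fastcgi_pass backend;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    The two servers don't need to be together: as long as sugar.domain.com's DNS record points at the second machine, nginx there will match it by server_name.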

  • Diff and ignore lines missing in one file

    - by Millianz
    I want to diff two files and ignore lines that are present in one file but missing in the other. For example:

        File1:
        foo
        bar
        baz
        bat

        File2:
        foo
        ball
        bat

    I'm currently running the following diff command:

        diff File1 File2 --changed-group-format='%>' --unchanged-group-format=''

    which in this case produces

        bar
        baz

    as the output, i.e. only missing or conflicting lines. I would like to print only conflicting lines, i.e. ignore cases where a line is missing from File2 but present in File1 (not the other way around). Is there any way to do something like this using diff, or do I have to resort to other tools? If so, what would you recommend?
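
    A hedged suggestion, assuming GNU diff: the old/new group formats distinguish pure deletions and insertions from genuine changes, so emptying the first two leaves only conflicting lines:

        diff File1 File2 \
            --old-group-format='' \
            --new-group-format='' \
            --changed-group-format='%<' \
            --unchanged-group-format=''

    --old-group-format covers lines only in File1, --new-group-format lines only in File2, and --changed-group-format (%< prints File1's version, %> File2's) fires only where the two files genuinely disagree.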
