Search Results

Search found 6253 results on 251 pages for 'apache2 ssl'.


  • Protect all XML-RPC calls with HTTP basic auth but one

    - by bodom_lx
    I set up a Django project for smartphones, serving XML-RPC methods over HTTPS and using basic auth. All XML-RPC methods require a username and password. I would like to implement an XML-RPC method that provides registration to the system. Obviously, this method should not require a username and password. The following is the Apache conf section responsible for basic auth:

        <Location /RPC2>
            AuthType Basic
            AuthName "Login Required"
            Require valid-user
            AuthBasicProvider wsgi
            WSGIAuthUserScript /path/to/auth.wsgi
        </Location>

    This is my auth.wsgi:

        import os
        import sys

        sys.stdout = sys.stderr
        sys.path.append('/path/to/project')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings'

        from django.contrib.auth.models import User
        from django import db

        def check_password(environ, user, password):
            """
            Authenticates apache/mod_wsgi against Django's auth database.
            """
            db.reset_queries()
            kwargs = {'username': user, 'is_active': True}
            try:
                # checks that the username is valid
                try:
                    user = User.objects.get(**kwargs)
                except User.DoesNotExist:
                    return None
                # verifies that the password is valid for the user
                if user.check_password(password):
                    return True
                else:
                    return False
            finally:
                db.connection.close()

    There are two dirty ways to achieve my aim in the current situation:

    - Have a dummy username/password to be used when trying to register to the system
    - Have a separate Django/XML-RPC application on another URL (i.e. /register) that is not protected by basic auth

    Both are very ugly, as I would also like to define a standard protocol to be used for services like mine (it's an open Dynamic Ridesharing Architecture). Is there a way to unprotect a single XML-RPC call (i.e. a defined POST request) even if all XML-RPC calls over /RPC2 are protected?
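
    One possibility worth sketching (an assumption-laden sketch, not a verified answer): Apache applies basic auth per URL path, so if the registration method can also be exposed on a sub-path of /RPC2 on the Django side, that one path can be exempted. Apache 2.2-style access control is assumed below, and /RPC2/register is a hypothetical path:

        # Exempt a single sub-path from the <Location /RPC2> auth block above.
        <Location /RPC2/register>
            # "Satisfy Any" grants access if the request passes EITHER the
            # inherited auth requirement OR the host-based Allow rule below.
            Satisfy Any
            Order allow,deny
            Allow from all
        </Location>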

  • Understanding RewriteCond in .htaccess files

    - by Paulo Bu
    I'm having problems understanding how the RewriteCond directive works. So far, it's pretty clear that it compares two strings to decide whether to apply a RewriteRule. I have this file:

        <IfModule rewrite_module>
            RewriteEngine on
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ app_dev.php
        </IfModule>

    This works for me, but I don't know why it works. So far I read the RewriteCond directive as: if the value of REQUEST_FILENAME is NOT a file on the hard drive, then apply the rule. This doesn't make sense, because app_dev.php, after the substitution, is a file on the hard drive. Anyway, could someone enlighten me on this issue? I am having a very hard time figuring out how this works.
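
    For what it's worth, the piece that usually resolves this confusion is the per-directory rewrite loop: after a .htaccess rewrite, Apache re-injects the new URL and evaluates the ruleset again. A commented walk-through of the same file:

        # Pass 1: request /some/path
        #   RewriteCond %{REQUEST_FILENAME} !-f  -> not a file, condition holds
        #   RewriteRule ^(.*)$ app_dev.php       -> internal rewrite
        #
        # Apache then re-runs the per-directory ruleset on the new URL.
        #
        # Pass 2: request /app_dev.php
        #   RewriteCond %{REQUEST_FILENAME} !-f  -> app_dev.php IS a file,
        #                                           so the condition fails
        #   The rule is skipped, the loop ends, and app_dev.php is served.
        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ app_dev.php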

  • CentOS + cPanel results in php always running as fcgi, instead of cli as expected

    - by quamis
    I'm having problems with a server configured by someone else. It uses cPanel, and it has Apache+PHP. For some reason, when running php -v as a user, I get "cli" as the handler:

        # php -v | head -n 1
        PHP 5.3.27 (cli) (built: Oct 15 2013 16:06:48)

    If I make a PHP script with echo shell_exec("php -v | tail -n 1"), I get:

        PHP 5.3.27 (cgi-fcgi) (built: Oct 15 2013 16:22:16)

    Why the cli/fcgi difference? I need to fix this so that scripts run by the apache user are running as "cli", not "fcgi". As a side note, I'm not sure how PHP got installed, because in the package list I see another version installed instead of the one reported by php -v:

        # rpm -qa | grep php
        [.... other packages...]
        cpanel-php53-5.3.17-5.cp1136.x86_64

    rebuild_phpconf --current:

        /usr/local/cpanel/bin/rebuild_phpconf --current
        Available handlers: suphp dso fcgi cgi none
        DEFAULT PHP: 5
        PHP4 SAPI: none
        PHP5 SAPI: dso
        SUEXEC: enabled
        RUID2: not installed

    redhat-release:

        # cat /etc/redhat-release
        CentOS release 6.4 (Final)

    Possibly related to http://superuser.com/questions/665809/centos-6-4-running-php-for-root-as-cgi-fcgi-not-cli
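
    A first diagnostic worth trying (a sketch; the binary locations below are guesses at typical paths): the interactive shell and the Apache-spawned process are probably resolving `php` to different binaries through $PATH.

        # What does the interactive shell resolve?
        which php

        # What does a script run by Apache resolve? In PHP:
        #   echo shell_exec('which php');

        # Compare the SAPI of each candidate binary directly:
        /usr/bin/php -v | head -n 1         # assumed location
        /usr/local/bin/php -v | head -n 1   # assumed location

        # If Apache's PATH hits a CGI binary first, calling the CLI binary
        # by absolute path from your scripts sidesteps the difference.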

  • Nginx proxy to Apache - resolve HTTP ORIGIN

    - by Fratyr
    I have a server setup with nginx serving static content and proxying all PHP/dynamic requests to apache on 127.0.0.1. I'm building an API for my databases, and I need to allow clients by their origin (domain name) rather than just IP, based on CORS rules. So when I send an HTTP header

        header("Access-Control-Allow-Origin: www.client-requesting.myapi.com");

    from my API server, I have to tell it which origin I allow; otherwise client-side requests to my API won't work, due to the same-origin policy. The question is: how can I know which domain name (if any) called my API? What should the nginx and apache configuration be to pass the origin parameter? I tried to google, and all I found is a possible solution with mod_rpaf, but I wanted to be sure. Thanks!
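
    One point worth checking first (a sketch, not a verified setup): browsers send the calling page's origin in the Origin request header on cross-site requests, and nginx's proxy_pass forwards ordinary request headers by default, so PHP behind the proxy should already see it as HTTP_ORIGIN. A minimal whitelist check, with hypothetical domain names:

        <?php
        // Hypothetical whitelist of origins allowed to call the API.
        $allowed = array('http://www.client-requesting.myapi.com',
                         'http://other-client.example.com');

        // The browser-supplied origin of the calling page, if any.
        $origin = isset($_SERVER['HTTP_ORIGIN']) ? $_SERVER['HTTP_ORIGIN'] : '';

        if (in_array($origin, $allowed, true)) {
            // Echo back only origins we trust.
            header('Access-Control-Allow-Origin: ' . $origin);
        }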

  • Fatal error: Out of memory (allocated ...) (tried to allocate ... bytes) not due to memory_limit setting

    - by Lorenz Meyer
    For a few days now, I have been getting the following error on my server:

        Fatal error: Out of memory (allocated 262144) (tried to allocate 393216 bytes)

    Usually this error is due to memory consumption exceeding the configured memory_limit, but in my case there is no relation: memory_limit is set to 128MB, and here we don't even reach 1MB. The server also does not have a big load; in fact, it is an intranet server, and only a few people are connected to it.

    System: Windows Server 2003, 1 GB RAM (only 600 MB used), Apache 2.2.4, PHP 5.2.3.

    The error appears randomly, and the memory limit reached also varies randomly between a few kB and a few MB. Sometimes restarting Apache is required to get rid of the error; sometimes it disappears by itself. Restarting Apache or the entire server helps only temporarily. Where could this problem come from? How could I narrow down the error source?

  • Have apache choose a php version based on the extension in the url, but with a single file on the filesystem

    - by Somejan
    I want to configure a local apache server to serve php files with different php versions. In my document root I have phpinfo.php; if I go to http://localhost/phpinfo.php4, I want to see the phpinfo.php file processed with php4, and if I go to http://localhost/phpinfo.php5, I want to see the same file processed with php5. Note: both php 4 and 5 are already installed side by side, and I have no problem configuring apache to process files that actually have a .php4 or .php5 extension on the filesystem with the correct php version. What I want is for apache to do the following:

    - If the url-path ends in .php5, serve the file which has a .php extension on the filesystem, using the application/x-httpd-php5 handler.
    - If the url-path ends in .php4, serve the same file with the .php extension on the filesystem, using the application/x-httpd-php4 handler.
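
    A mod_rewrite sketch of that mapping (assuming the two PHP versions are bound to the application/x-httpd-php4/php5 types, e.g. via AddType, as the question implies):

        RewriteEngine On
        # Map /foo.php5 in the URL to foo.php on disk; the T= flag forces
        # the MIME type under which the handler is selected.
        RewriteRule ^(.+)\.php5$ $1.php [T=application/x-httpd-php5,L]
        # Same file on disk, processed by php4 when the URL ends in .php4.
        RewriteRule ^(.+)\.php4$ $1.php [T=application/x-httpd-php4,L]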

  • What are the risks in putting website files in the "root" folder of a shared web hosting server?

    - by Obay Ouano
    A site I've been asked to manage is hosted (shared) on GoDaddy, with this folder structure:

        /
        public_html
        public_ftp
        mail
        stats
        logs
        etc...

    However, the website files are stored in the / folder, and NOT in public_html. I'm not sure if this is how GoDaddy sets up their customers' accounts, or if the old web developer accidentally changed it from public_html to root. But when we call up GoDaddy to tell them to correct this (move the files to public_html), they won't change it and insist that there is no security risk unless someone gets a hold of the FTP password. Is this true? (I have always read that website files should be inside public_html.) If not, where could this setting be changed? The .htaccess is empty.

  • Load balancing between physical and virtual machines

    - by fefe
    First of all, sorry if my question is not relevant; I'm quite a beginner. In short: I have 2 physical machines. The first runs Windows Server 2007 with Apache 2.2; on the second, ESXi is installed to host virtual machines. I have converted my physical machine (1) to a virtual machine on the ESXi host (2), and as the next step I would like to deploy a load balancer between the physical and the virtual machine. Would this approach be appropriate? If yes, what are the steps to follow?
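
    One common option (a sketch only, with hypothetical backend addresses): put Apache's mod_proxy_balancer in front and list the physical and the virtual machine as backend members.

        # Modules needed (names as in a stock Apache 2.2 install):
        # LoadModule proxy_module modules/mod_proxy.so
        # LoadModule proxy_http_module modules/mod_proxy_http.so
        # LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

        <Proxy balancer://mycluster>
            # Hypothetical addresses of the physical and the virtual machine.
            BalancerMember http://192.168.1.10:8080
            BalancerMember http://192.168.1.20:8080
        </Proxy>

        ProxyPass        / balancer://mycluster/
        ProxyPassReverse / balancer://mycluster/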

  • Apache LDAP with local groups

    - by Greg Ogle
    I have a server that currently uses htpasswd to authenticate users. I'm migrating to LDAP, but my LDAP server is only for user authentication; it does not allow me to add groups. I still need to use groups, as they are used for access control via the Apache Directory tags in my configuration. The alternative is to revisit the access control altogether, using php or something of the sort to limit access.

    This works for 'basic' authentication:

        <Directory /misc/www/html/site>
            #LDAP & other config stuff irrelevant to issue
            Require ldap-group cn=<service>,ou=Groups,dc=<service>,dc=<org>,dc=com
        </Directory>

    This is what I attempted:

        <Directory /misc/www/html/site>
            #LDAP & other config stuff irrelevant to issue
            #groups file from previous configuration using htpasswd
            #tried to tweak to match new user format, but I don't think it looks up in here
            AuthGroupFile /misc/www/htpasswd/groups
            #added the group, which is how it works when using htpasswd
            Require ldap-group cn=<service>,ou=Groups,dc=<service>,dc=<org>,dc=com group xyz
        </Directory>
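
    A sketch of the usual Apache 2.2 way to combine LDAP authentication with a local group file (the group name xyz and file path are taken from the question; the key points are a separate "Require group" line and letting non-LDAP authorization have a say via AuthzLDAPAuthoritative Off):

        <Directory /misc/www/html/site>
            AuthType Basic
            AuthName "Login Required"
            AuthBasicProvider ldap
            # ... AuthLDAPURL etc. as in the existing config ...

            # Let mod_authz_groupfile handle group checks that
            # mod_authnz_ldap cannot satisfy.
            AuthzLDAPAuthoritative Off
            AuthGroupFile /misc/www/htpasswd/groups

            # "Require group" (not "Require ldap-group") consults the file.
            Require group xyz
        </Directory>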

  • Syntax for piping varnish logs to rotatelogs

    - by jetboy
    Ubuntu 12.04 Server x64, Varnish 3.0.2.

    I'm trying to pipe varnishncsa's logs through Apache's rotatelogs, and running from the shell, things work fine:

        sudo varnishncsa -a -P /var/run/varnishncsa/varnishncsa.pid |/usr/sbin/rotatelogs /var/log/varnish/varnish.log.%Y%m%d%H 3600

    This creates a new logfile in /var/log/varnish, with rotation every hour (3600 seconds). However, I'm struggling to get things working the same way inside /etc/init.d/varnishncsa:

        PATH=/sbin:/bin:/usr/sbin:/usr/bin
        DAEMON=/usr/bin/$NAME
        PIDFILE=/var/run/$NAME/$NAME.pid
        LOGFILE=/var/log/varnish/varnishncsa.log
        USER=varnishlog
        DAEMON_OPTS="-a -P ${PIDFILE}"
        DAEMON_PIPE="|/usr/sbin/rotatelogs /var/log/varnish/varnish.log.%Y%m%d%H 3600"
        ...
        start_varnishncsa() {
            output=$(/bin/tempfile -s.varnish)
            log_daemon_msg "Starting $DESC" "$NAME"
            create_pid_directory
            if start-stop-daemon --start --verbose --pidfile ${PIDFILE} \
                --chuid $USER --exec ${DAEMON} -- ${DAEMON_OPTS} \
                > ${output} 2>&1; then
                log_end_msg 0
            else
                log_end_msg 1
                cat $output
                exit 1
            fi
            rm $output
        }

    Where should I put DAEMON_PIPE in the above code? I've tried at the end of:

        if start-stop-daemon --start --verbose --pidfile ${PIDFILE}

    which is where additional command line parameters usually go, but it isn't creating a logfile.
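
    For what it's worth, start-stop-daemon executes its --exec target directly, so a pipe embedded in a variable is never interpreted by a shell. A common workaround (a sketch, untested here) is to make the shell itself the executed program and put the whole pipeline inside its -c string:

        # The pipe now lives inside the -c string, where a shell expands it.
        start-stop-daemon --start --verbose --pidfile ${PIDFILE} \
            --chuid $USER --exec /bin/sh -- -c \
            "${DAEMON} ${DAEMON_OPTS} | /usr/sbin/rotatelogs /var/log/varnish/varnish.log.%Y%m%d%H 3600"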

  • Amazon EC2: Instances, IPs and a wordpress blog (LAMP)

    - by JustinXXVII
    I had a link to my blog posted on Reddit yesterday, and MySQL crashed on my EC2 Micro instance. I know I didn't have that many visitors, because I used a marketing link that tracks hits. The link got 167 hits over the course of the last 18 hours, and MySQL crashed twice. So anyway, 167 visits is not a lot, and I've done some short-term optimizations, like restricting the number of Apache threads to limit the MySQL calls. I also set up WP Super Cache to serve static content. Soon I'm going to offload all of my images to S3 or CloudFront. So this leads me to my question: if this doesn't seem to help, and if I have another traffic "spike", how do AMIs work when you have a MySQL database? I think I understand that if you have more than one instance and assign the same Elastic IP to both of them, the incoming traffic gets distributed among both. But what happens when the MySQL database gets updated on one of the instances? I just need to wrap my mind around what happens when I create an AMI and then launch a new instance to help with traffic. Thanks for your suggestions.

  • Troubleshooting "connection reset" error on my linux server

    - by Chris
    I fervently hope someone here can help me with the problem I am experiencing. I am a programmer, and I have very little understanding of linux sysadmin terminology and concepts. I am attempting to troubleshoot a problem with my website. It is a Facebook app, and whenever I try to connect using Chrome, I get an error stating that the "connection was reset". I have been Googling for four days straight trying to find a solution to this problem, but no joy. A big part of the problem is that I do not understand the terminology being employed, and the output from many of the tools referenced is likewise indecipherable to me. I am running a VPS with CentOS 5, apache, PHP, and MySQL. I could spam this post with a ton of information from my iptables, apache, etc., but if anyone needs information from my server, please let me know how to get it, and I will post it here. Thank you for any help you can offer!
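
    Some generic first steps that usually narrow a "connection reset" down (a sketch; the domain is a placeholder and the log path is the CentOS default):

        # Does the TCP connection even open? -v prints each step.
        curl -v http://your-app.example.com/

        # Watch Apache's error log while reproducing the problem.
        tail -f /var/log/httpd/error_log

        # Is Apache (and only Apache) listening on port 80?
        netstat -tlnp | grep ':80'

        # Any firewall rules that could be cutting connections short?
        iptables -L -n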

  • Node.js apps and wordpress on the same vps

    - by Msencenb
    So currently my linode (ubuntu 11.10) serves up three node.js apps for me, using connect's vhost middleware listening on port 80. Here is an example of how vhost sets up a domain:

        var express = require('express');  // implied by the calls below
        var portfolio = require('./bootstrap-portfolio/lib/app.js');
        var server = express();
        server.use(express.vhost('sencedev.com', portfolio));
        server.use(express.vhost('www.sencedev.com', portfolio));
        server.listen(80);

    However, I would now like to add a wordpress installation to my vps as well. In the past this has meant a traditional apache installation for me, but I'm a bit unsure of how node.js + a different webserver (apache or nginx) should interact. Any thoughts on how I should approach hosting wordpress + node.js on the same box?
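
    One common arrangement (a sketch with hypothetical hostnames and ports): make nginx the only public listener on port 80, move the node apps and Apache+WordPress to local ports, and route by Host header.

        # nginx owns port 80 and dispatches by hostname.
        server {
            listen 80;
            server_name sencedev.com www.sencedev.com;
            location / {
                proxy_pass http://127.0.0.1:3000;   # node app, moved off 80
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

        server {
            listen 80;
            server_name blog.example.com;            # hypothetical WP host
            location / {
                proxy_pass http://127.0.0.1:8080;   # Apache+WordPress
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }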

  • Multiple Rails apps on same subdomain?

    - by Derek
    I recently decided to try out Rails. When working with PHP, I simply had all of my PHP projects in the same directory; for example, I might have http://ubuntu/app1, http://ubuntu/app2, etc. I created a subdomain for Rails (http://ruby.ubuntu), installed Rails and Passenger, and everything is working. However, I may be wrong, but it looks like I can only have one Rails app per subdomain? My VirtualHost is as follows:

        <VirtualHost *:80>
            ServerName ruby.ubuntu
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/ruby/blog/public
            <Directory /var/www/ruby/blog/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
                RailsEnv development
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    All of my PHP and misc. files are stored in /var/www/main. I want to be able to store all of my Rails apps in /var/www/ruby. I tried changing DocumentRoot to /var/www/ruby, but I don't think it's as simple as that. When I browse to a Rails app's Welcome Aboard page and click on "About my application's environment," I get a 404 page; but when the DocumentRoot is set to the public directory, I get the expected result. I don't want to have to create a new subdomain every time I create a new project. Is there any way I can make it so I can store all of my apps in /var/www/ruby, and browsing to http://ruby.ubuntu will let me access all of my Rails apps there? That way, if I want to create a new app, all I have to do is run rails new app, with no Apache .htaccess or VirtualHost configuration required.
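
    Passenger's documented way to serve several apps under one host is a sub-URI per app (a sketch; the app names and docroot path are hypothetical): symlink each app's public directory into the DocumentRoot and declare it with RailsBaseURI.

        # Shell: expose each app's public dir inside the docroot.
        # ln -s /var/www/ruby/app1/public /var/www/ruby_docroot/app1
        # ln -s /var/www/ruby/app2/public /var/www/ruby_docroot/app2

        <VirtualHost *:80>
            ServerName ruby.ubuntu
            DocumentRoot /var/www/ruby_docroot

            # One line per app; Passenger mounts each at its sub-URI.
            RailsBaseURI /app1
            RailsBaseURI /app2

            <Directory /var/www/ruby_docroot>
                Options -MultiViews     # MultiViews confuses Passenger
                AllowOverride All
            </Directory>
        </VirtualHost>

    This still means one symlink and one RailsBaseURI line per new app, but no new subdomain or vhost.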

  • Removing logrotate's delaycompress option: should I compress the last log myself?

    - by user120006
    I'm removing the delaycompress option from my logrotate script. Before running logrotate again, should I compress the last log myself? This is the actual situation:

        -rw-r----- 1 root adm 4,7M  5 mag 18:38 access.log
        -rw-r----- 1 root adm 5,2M 29 apr 05:44 access.log.1
        -rw-r----- 1 root adm 473K 22 apr 05:45 access.log.2.gz
        -rw-r----- 1 root adm 605K 15 apr 05:44 access.log.3.gz
        -rw-r----- 1 root adm 588K  8 apr 05:44 access.log.4.gz

    The question is: should I compress "access.log.1" and THEN launch logrotate? Or will logrotate understand that I removed the "delaycompress" option and fix things itself?
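
    A conservative approach (a sketch, on the assumption that logrotate only compresses the rotation it has just produced, so a leftover uncompressed .1 is simplest to handle by hand):

        # In the log directory: compress the leftover rotation yourself.
        gzip access.log.1

        # Then let the next scheduled logrotate run proceed as normal;
        # with delaycompress removed, each new rotation is compressed
        # immediately, and the existing .gz files are simply shifted.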

  • Relation between Apache and DNS configuration

    - by KayKay
    I configured my DNS (bind9) to accept every subdomain, using a wildcard 'A' record:

        *.mydomain.tld. IN A xx.xx.xx.xx

    I configured Apache to serve some specific subdomains using virtual hosts:

        <VirtualHost *:80>
            ServerName sub1.mydomain.tld
            ServerAlias sub1.mydomain.tld
            JkMount / sub1JK
            JkMount /* sub1JK
        </VirtualHost>

    When I ping from a remote computer to a subdomain configured in apache, I get a response. When I ping a subdomain that is not configured in apache, the host is not found. I don't understand why the apache configuration would affect DNS resolution like this. I would appreciate any information that helps me understand this. Thanks a lot.
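
    A quick way to separate the two layers (a sketch; substitute your real nameserver and zone): ping never talks to Apache, so query the authoritative server directly and check whether the wildcard itself resolves.

        # Ask the authoritative server itself, bypassing any caches:
        dig @ns1.mydomain.tld random-name.mydomain.tld A +short

        # Compare with what the remote machine's resolver returns:
        dig random-name.mydomain.tld A +short

        # If the first query answers and the second does not, the problem
        # is on the resolver/caching/delegation path, not in Apache.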

  • Error configuring virtual hosts

    - by user148351
    I have a problem using my virtual hosts. When I try to connect to my server by direct IP address, for example http://111.11.11.111/, I see the following error in the apache error log:

        script '/var/www/html/mmm/public/index.php' not found or unable to stat

    The file index.php exists and has the correct access rights. I have virtual hosts configured:

        <VirtualHost *:80>
            DocumentRoot /var/www/html/mmm/public
            ServerName example.com
            ServerAlias example.com www.example..com
            <Directory var/www/html/mmm/public>
                AllowOverride All
            </Directory>
        </VirtualHost>

    Why, when I connect by IP address, does it look for index.php not in the server's root directory, but in the root directory of the virtual host?
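
    For reference (a sketch): with name-based virtual hosts, a request that matches no ServerName/ServerAlias, such as one made by bare IP, is served by the first vhost Apache loads. Declaring an explicit catch-all first keeps IP requests out of the site's vhost:

        # Must be the first <VirtualHost *:80> Apache loads.
        <VirtualHost *:80>
            ServerName catchall.invalid    # hypothetical placeholder name
            DocumentRoot /var/www/html     # plain default docroot
        </VirtualHost>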

  • Secure against c99 and similar shells

    - by Amit Sonnenschein
    I'm trying to secure my server as much as I can without limiting my options, so as a first step I've disabled dangerous functions with php's disable_functions:

        disable_functions = "apache_child_terminate, apache_setenv, define_syslog_variables, escapeshellarg, escapeshellcmd, eval, exec, fp, fput, ftp_connect, ftp_exec, ftp_get, ftp_login, ftp_nb_fput, ftp_put, ftp_raw, ftp_rawlist, highlight_file, ini_alter, ini_get_all, ini_restore, inject_code, mysql_pconnect, openlog, passthru, php_uname, phpAds_remoteInfo, phpAds_XmlRpc, phpAds_xmlrpcDecode, phpAds_xmlrpcEncode, popen, posix_getpwuid, posix_kill, posix_mkfifo, posix_setpgid, posix_setsid, posix_setuid, posix_setuid, posix_uname, proc_close, proc_get_status, proc_nice, proc_open, proc_terminate, shell_exec, syslog, system, xmlrpc_entity_decode"

    But I'm still fighting directory traversal: using a shell script like c99, I can travel from my /home/dir to anywhere on the disk. How can I limit it once and for all?
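
    The usual complement to disable_functions is open_basedir, which confines PHP's file access to the listed prefixes (a sketch; the paths are examples):

        ; php.ini (or the per-vhost/per-user equivalent): PHP may only
        ; touch files under these prefixes; anything else fails.
        open_basedir = "/home/username/:/tmp/"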

  • REMOTE_USER through Apache reverse proxy

    - by Laurent
    I have an Apache webserver with mod_proxy enabled and a VirtualHost, proxy.domain.com. This proxy is configured to prompt the user for credentials with AuthType Basic. The content of web.domain.com is then made available through the proxy with ProxyPass and ProxyPassReverse. However, the REMOTE_USER variable is empty. I have read about different ways to achieve this with mod_rewrite and mod_headers, but all my attempts have failed. Has anybody been luckier than me? Thanks.
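
    The classic mod_rewrite/mod_headers recipe for this (a sketch, placed in the proxy vhost; the backend then reads the user from a request header instead of REMOTE_USER):

        RewriteEngine On
        # %{LA-U:REMOTE_USER} forces a look-ahead, so the authenticated
        # username is available while the request is still being rewritten.
        RewriteCond %{LA-U:REMOTE_USER} (.+)
        RewriteRule . - [E=RU:%1]

        # Forward it to the proxied backend as X-Forwarded-User.
        RequestHeader set X-Forwarded-User %{RU}e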

  • APC PHP cache size does not exceed 32MB, even though settings allow for more

    - by hardy101
    I am setting up APC (v 3.1.9) on a high-traffic WordPress installation on CentOS 6.0 64-bit. I have figured out many of the quirks of APC, but something is still not quite right. No matter what settings I change, APC never actually caches more than 32MB. I'm trying to bump it up to 256MB. 32MB is the default for apc.shm_size, so I am wondering if it's stuck there somehow.

    I have run the following to increase my system's shared memory to 2G (half of my 4G box):

        echo '2147483648' > /proc/sys/kernel/shmmax

    Then ran ipcs -lm, which returns:

        ------ Shared Memory Limits --------
        max number of segments = 4096
        max seg size (kbytes) = 2097152
        max total shared memory (kbytes) = 8388608
        min seg size (bytes) = 1

    I also made the change in /etc/sysctl.conf, then ran sysctl -p to make the settings stick on the server. Rebooted, too, for good measure.

    In my APC settings, I have mmap enabled (which happens by default in recent versions of APC). php.ini looks like:

        apc.stat=0
        apc.shm_size="256M"
        apc.max_file_size="10M"
        apc.mmap_file_mask="/tmp/apc.XXXXXX"
        apc.ttl="7200"

    I am aware that mmap mode will ignore references to apc.shm_segments, so I have left it at the default of 1. phpinfo() indicates the following about APC:

        Version                 3.1.9
        APC Debugging           Disabled
        MMAP Support            Enabled
        MMAP File Mask          /tmp/apc.bPS7rB
        Locking type            pthread mutex Locks
        Serialization Support   php
        Revision                $Revision: 308812 $
        Build Date              Oct 11 2011 22:55:02

        Directive                     Local Value
        apc.cache_by_default          On
        apc.canonicalize              On
        apc.coredump_unmap            Off
        apc.enable_cli                Off
        apc.enabled                   On
        apc.file_md5                  Off
        apc.file_update_protection    2
        apc.filters                   no value
        apc.gc_ttl                    3600
        apc.include_once_override     Off
        apc.lazy_classes              Off
        apc.lazy_functions            Off
        apc.max_file_size             10M
        apc.mmap_file_mask            /tmp/apc.bPS7rB
        apc.num_files_hint            1000
        apc.preload_path              no value
        apc.report_autofilter         Off
        apc.rfc1867                   Off
        apc.rfc1867_freq              0
        apc.rfc1867_name              APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix            upload_
        apc.rfc1867_ttl               3600
        apc.serializer                default
        apc.shm_segments              1
        apc.shm_size                  256M
        apc.slam_defense              On
        apc.stat                      Off
        apc.stat_ctime                Off
        apc.ttl                       7200
        apc.use_request_time          On
        apc.user_entries_hint         4096
        apc.user_ttl                  0
        apc.write_lock                On

    apc.php shows the same graph no matter how long the server runs: the cache size fluctuates and hovers at just under 32MB (see image: http://i.stack.imgur.com/2bwMa.png). You can see that the cache is trying to allocate 256MB, but the brown piece of the pie keeps getting recycled at 32MB. This is confirmed by the fact that refreshing the apc.php page shows cached file counts that move up and down (implying that the cache is not holding onto all of its files).

    Does anyone have an idea of how to get APC to use more than 32MB for its cache size? Note that the identical behavior occurs for eaccelerator, xcache, and APC. I read here: http://www.litespeedtech.com/support/forum/archive/index.php/t-5072.html that suEXEC could cause this problem.
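
    A quick check worth running (a sketch; the probe path is hypothetical): under CGI/suEXEC-style setups, the web SAPI can end up reading a different ini chain than the CLI, so confirm what each actually sees for apc.shm_size.

        # What the CLI sees, and which ini files it loaded:
        php -i | grep -E 'apc\.shm_size|Loaded Configuration|additional \.ini'

        # What the web SAPI sees: drop a one-line probe in the docroot
        # (hypothetical path) and fetch it through Apache.
        echo '<?php echo php_sapi_name(), " ", ini_get("apc.shm_size");' \
            > /var/www/html/apc-probe.php
        curl http://localhost/apc-probe.php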

  • How do I remove the ServerSignature added by mod_fcgid?

    - by matthew
    I'm running Mod_Security, and I'm using SecServerSignature to customize the Server header that Apache returns. This part works fine; however, I'm also running mod_fcgid, which appends "mod_fcgid/2.3.5" to the header. Is there any way I can turn this off? Setting ServerSignature off doesn't do anything. I was able to get it to go away by changing ServerTokens, but that removed the customization I had added.

  • CPanel: Every url is being redirected to http://:2083

    - by Frank
    On my cpanel server, I restored about 50 accounts from a crashed cpanel server. All of the sites were working fine, but suddenly, without anything being changed, every site started to be redirected to the url "http://:2083/". There is nothing in the logs, no errors. When I do a wget, it says:

        wget grinfeld.com.br
        --2012-09-04 13:18:23-- http://grinfeld.com.br/
        Resolving grinfeld.com.br... 198.101.221.254
        Connecting to grinfeld.com.br|198.101.221.254|:80... connected.
        HTTP request sent, awaiting response... 301 Moved
        Location: https://:2083/ [following]
        https://:2083/: Invalid host name.

  • Redirect rss feed users

    - by Jeremy Love
    I made a redirect, but when I subscribe to it, it doesn't get the feed from my new url; it gets the one from my old url. Here's what I have:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        RewriteCond %(REQUEST_URI) ^/articles$ [NC]
        RewriteRule ^(.*)$ htp://newsite.mysite.com/articles [R=301,L]
        RewriteCond %(REQUEST_URI) /(.)
        RewriteRule ^(.*)$ htp://newsite.mysite.com [R=302]
        RewriteCond %{HTTP_HOST} ^www\.oldsite.mysite\.com$
        RewriteRule ^(.*)$ http://newsite.mysite.com [R=301,L]
        Redirect 301 / http://newsite.mysite.com/
        </IfModule>

    Any help is greatly appreciated. Also, due to me having no points, I had to rename 2 of the urls to htp instead of http.
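
    Two things stand out (a sketch of a corrected version; the domain names are kept from the question): mod_rewrite variables use braces, not parentheses (%{REQUEST_URI}), and the WordPress catch-all rule ends with [L], so any redirect placed after it never runs. Reordering so the redirects come first:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /

        # Redirect the old host before WordPress swallows the request.
        RewriteCond %{HTTP_HOST} ^www\.oldsite\.mysite\.com$ [NC]
        RewriteRule ^(.*)$ http://newsite.mysite.com/$1 [R=301,L]

        # Feed/articles redirect, with braces around the variable name.
        RewriteCond %{REQUEST_URI} ^/articles$ [NC]
        RewriteRule ^(.*)$ http://newsite.mysite.com/articles [R=301,L]

        # Standard WordPress front-controller block, last.
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>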

  • Windows share mounted then symlinked on LAMP server. Serves up html, but not images.

    - by Samuurai
    This has really got me befuddled... I've mounted a share like this:

        //srv1/UserUploads /mount/UserUploads cifs rw,user,exec,uid=wwwrun,gid=www,username=shareuser,password=sharepw 0 0

    I then have a symlink here:

        WEBSVR:/Web/htdocs/public_html # ls -l useruploads
        lrwxrwxrwx 1 wwwrun www 18 Dec  7 09:18 useruploads -> /mount/UserUploads

    Oddly, if I ls inside the mounted area, items appear with a capital S:

        -rwxrwSrwx 1 wwwrun www 4077 Dec 30 14:54 prop9.jpg
        -rwxrwSrwx 1 wwwrun www    4 Jan 12 15:57 test.html

    If I bring up test.html in a browser, it works fine, but if I go to prop9.jpg, chrome gives me this error:

        This web page is not available.
        The web page at http://10.1.64.100/useruploads/webteam/help2let/prop6-1.jpg
        might be temporarily down or it may have moved permanently to a new web address.
        ...
        Error 100 (net::ERR_CONNECTION_CLOSED): Unknown error.

    Has anyone seen this behaviour, where the binary files (images) aren't displayed but html/text is?
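
    A frequent culprit when Apache serves files from CIFS/NFS mounts (a sketch; worth ruling out before anything else): sendfile() and mmap() do not always work reliably on network filesystems, and small text files can slip through while larger binaries fail mid-transfer. Disabling both for the mounted path costs little:

        # In the vhost or server config (Apache 2.x):
        <Directory /mount/UserUploads>
            EnableSendfile Off   # don't hand files to the kernel's sendfile()
            EnableMMAP Off       # don't mmap() files on the network mount
        </Directory>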
