Search Results

Search found 26555 results on 1063 pages for 'active directory explorer'.


  • nginx: SSI working on Apache backend, but not on gunicorn backend

    - by j0nes
    I have nginx in front of an Apache server and a gunicorn server for different parts of my website. I am using the SSI module in nginx to display a snippet on every page; the pages pull the snippet in with a standard SSI include directive. For static pages served by nginx everything is working fine, and the same goes for the Apache-generated pages - the SSI include is evaluated and the snippet is filled in. However, for requests to my gunicorn backend running a Python app in Django, the SSI include does not get evaluated. Here is the relevant part of the nginx config:

        location /cgi-bin/script.pl {
            ssi on;
            proxy_pass http://default_backend/cgi-bin/script.pl;
            include sites-available/aspects/proxy-default.conf;
        }
        location /directory/ {
            ssi on;
            limit_req zone=directory nodelay burst=3;
            proxy_pass http://django_backend/directory/;
            include sites-available/aspects/proxy-default.conf;
        }

    Backends:

        upstream django_backend {
            server dynamic.mydomain.com:8000 max_fails=5 fail_timeout=10s;
        }
        upstream default_backend {
            server dynamic.mydomain.com:80;
            server dynamic2.mydomain.com:80;
        }

    proxy-default.conf:

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    What is the cause of this behaviour? How can I get SSI includes working for pages generated by gunicorn? How can I debug this further?
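
    One likely culprit (an assumption, not confirmed by the poster): nginx's SSI filter cannot parse responses that arrive from the upstream compressed, and a Django stack may gzip its responses (e.g. via GZipMiddleware) while the Apache pages are served plain. Forcing the backend to respond uncompressed is a quick test:

        location /directory/ {
            ssi on;
            # Ask the backend for an uncompressed response so the SSI filter
            # can actually see and evaluate the include directives.
            proxy_set_header Accept-Encoding "";
            proxy_pass http://django_backend/directory/;
            include sites-available/aspects/proxy-default.conf;
        }

    If that fixes it, re-enable compression in nginx itself (gzip on) so clients still get compressed pages after SSI processing.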


  • Have an unprivileged non-account user ssh into another box?

    - by Daniel Quinn
    I know how to get a user to ssh into another box with a key:

        ssh -l targetuser -i path/to/key targethost

    But what about non-account users like apache? As this user doesn't have a home directory to which it can write a .ssh directory, the whole thing keeps failing with:

        $ sudo -u apache ssh -o StrictHostKeyChecking=no -l targetuser -i path/to/key targethost
        Could not create directory '/var/www/.ssh'.
        Warning: Permanently added '<hostname>' (RSA) to the list of known hosts.
        Permission denied (publickey).

    I've tried variations using -o UserKnownHostsFile=/dev/null and setting $HOME to /dev/null, and none of these have done the trick. I understand that sudo could probably fix this for me, but I'm trying to avoid requiring manual server configuration, since this code will be deployed on a number of different environments. Any ideas? Here are a few examples of what I've tried that don't work:

        $ sudo -u apache export HOME=path/to/apache/writable/dir/ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost
        $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost
        $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l deploy -i path/to/key targethost

    Eventually, I'll be using this solution to run rsync as the apache user.
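
    Two details worth separating out. First, the `sudo -u apache export HOME=...` variant cannot work: export is a shell builtin, not a command. Second, the known_hosts complaint is only a warning (OpenSSH resolves "~" from the passwd database, not from $HOME, which is why overriding HOME changed nothing); the actual failure is "Permission denied (publickey)", which points at the key file itself. A hedged sketch, assuming the key was created by another account:

        # ssh refuses private keys with loose permissions, and the apache
        # user must be able to read the key in the first place
        sudo chown apache path/to/key
        sudo chmod 600 path/to/key

        # Pin every file ssh wants to touch, so no home directory is needed
        sudo -u apache ssh \
            -o UserKnownHostsFile=/dev/null \
            -o StrictHostKeyChecking=no \
            -i path/to/key -l targetuser targethost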


  • Ubuntu VM "read only file system" fix?

    - by David
    I was going to install VMware Tools on an Ubuntu Server virtual machine, but I ran into the issue of not being able to create a cdrom directory in the /mnt directory. I then tested to see if it was just a permissions issue, but I couldn't even create a folder in the home directory. It keeps stating that it is a read-only file system. I know a little about Linux, but I'm not comfortable with it yet. Any advice would be much appreciated. Requested information from a comment:

        username@servername:~$ mount
        /dev/sda1 on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type tmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
        binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)

    And, to be sure, the output as root:

        root@server01:~# mount
        (output identical to the listing above)
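
    Note that mount reports the options the filesystem was mounted with, not its current state: with errors=remount-ro, ext4 flips the root filesystem to read-only after an I/O error without updating that listing. A hedged diagnostic sketch (device name taken from the mount output above; run the fsck from recovery mode or a live CD, not on the mounted root):

        # Confirm the kernel actually remounted / read-only
        dmesg | grep -iE 'ext4|remount|read-only'
        grep ' / ' /proc/mounts        # "ro" here confirms it

        # From recovery mode: check the filesystem, then remount read-write
        fsck -f /dev/sda1
        mount -o remount,rw /

    If the errors recur, suspect the virtual disk or the underlying datastore rather than the guest.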


  • Static route works on one computer, not the other

    - by Dan
    I have been struggling with this for a couple of days now; maybe I just need some people with a fresh perspective to figure out what the issue is. Basically, I have a bunch of computers that are routed through a specific gateway in order to access a web page that is hosted internally on a separate subnet. I set up static routes on all of the computers, and they all work... except one. Here's what route print -4 looks like on a working computer (Windows 7):

        ===========================================================================
        Interface List
         14...xx xx xx xx xx xx ......Broadcom 802.11n Network Adapter
         11...xx xx xx xx xx xx ......Realtek PCIe GBE Family Controller
          1...........................Software Loopback Interface 1
         12...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter
         13...00 00 00 00 00 00 00 e0  Microsoft 6to4 Adapter
         17...00 00 00 00 00 00 00 e0  Teredo Tunneling Pseudo-Interface
        ===========================================================================

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0   10.xxx.xxx.230    10.xxx.xxx.94      20
             10.zzz.zzz.0    255.255.255.0   10.xxx.xxx.147    10.xxx.xxx.94      21
             10.xxx.xxx.0    255.255.255.0          On-link     10.xxx.xxx.94    276
            10.xxx.xxx.94  255.255.255.255          On-link     10.xxx.xxx.94    276
           10.xxx.xxx.255  255.255.255.255          On-link     10.xxx.xxx.94    276
                127.0.0.0        255.0.0.0          On-link         127.0.0.1    306
                127.0.0.1  255.255.255.255          On-link         127.0.0.1    306
          127.255.255.255  255.255.255.255          On-link         127.0.0.1    306
                224.0.0.0        240.0.0.0          On-link         127.0.0.1    306
                224.0.0.0        240.0.0.0          On-link     10.xxx.xxx.94    276
          255.255.255.255  255.255.255.255          On-link         127.0.0.1    306
          255.255.255.255  255.255.255.255          On-link     10.xxx.xxx.94    276
        ===========================================================================
        Persistent Routes:
          Network Address          Netmask  Gateway Address  Metric
             10.zzz.zzz.0    255.255.255.0   10.xxx.xxx.147       1
        ===========================================================================

    And here's a route print -4 from the station that doesn't work (also Windows 7):

        ===========================================================================
        Interface List
         10...xx xx xx xx xx xx ......Realtek PCIe GBE Family Controller
          1...........................Software Loopback Interface 1
         12...00 00 00 00 00 00 00 e0  Microsoft 6to4 Adapter
         14...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #2
         16...00 00 00 00 00 00 00 e0  Teredo Tunneling Pseudo-Interface
        ===========================================================================

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0   10.xxx.xxx.230   10.xxx.xxx.132     276
             10.zzz.zzz.0    255.255.255.0   10.xxx.xxx.147   10.xxx.xxx.132      21
             10.xxx.xxx.0    255.255.255.0          On-link    10.xxx.xxx.132    276
           10.xxx.xxx.132  255.255.255.255          On-link    10.xxx.xxx.132    276
           10.xxx.xxx.255  255.255.255.255          On-link    10.xxx.xxx.132    276
                127.0.0.0        255.0.0.0          On-link         127.0.0.1    306
                127.0.0.1  255.255.255.255          On-link         127.0.0.1    306
          127.255.255.255  255.255.255.255          On-link         127.0.0.1    306
                224.0.0.0        240.0.0.0          On-link         127.0.0.1    306
                224.0.0.0        240.0.0.0          On-link    10.xxx.xxx.132    276
          255.255.255.255  255.255.255.255          On-link         127.0.0.1    306
          255.255.255.255  255.255.255.255          On-link    10.xxx.xxx.132    276
        ===========================================================================
        Persistent Routes:
          Network Address          Netmask  Gateway Address  Metric
             10.zzz.zzz.0    255.255.255.0   10.xxx.xxx.147       1
        ===========================================================================

    So essentially what I am trying to do here is route all traffic to the 10.zzz.zzz.0 subnet through the 10.xxx.xxx.147 gateway, and everything else through the 10.xxx.xxx.230 gateway. This is the intended behavior, and again, it is working everywhere but that one station. I noticed that the Active Routes metric costs differ between the two stations, but I am new to the routing table and I am not sure how that impacts the behavior. I hope I have explained the situation clearly. Any help would be much appreciated, and I can provide any additional information if needed!
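
    For what it's worth, the metric difference should not matter here: on both stations the static route's metric (21) is lower than the default route's, so it should win for 10.zzz.zzz.0 traffic. A hedged experiment is to re-add the persistent route pinned explicitly to the LAN interface and then watch which gateway actually gets used (run from an elevated prompt; the interface index 10 comes from the broken station's interface list above):

        route delete 10.zzz.zzz.0
        route -p add 10.zzz.zzz.0 mask 255.255.255.0 10.xxx.xxx.147 metric 1 if 10

        :: then verify the path actually taken
        route print -4
        tracert -d <host-on-10.zzz.zzz.0>

    If tracert shows the first hop is still 10.xxx.xxx.230, suspect something outside the routing table (ARP, or a VPN/security client) on that one machine.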


  • apache using mod_auth_kerb always asks for the password twice

    - by DrStalker
    (Debian Squeeze) I'm trying to set Apache up to use Kerberos authentication to allow AD users to log in. It is working, but it prompts the user twice for a username and password, and the first prompt is ignored no matter what is put into it. Only the second prompt includes the AuthName string from the config (i.e. the first window is a generic username/password one; the second includes the title "Kerberos Login"). I'm not worried about integrated Windows authentication working at this stage; I just want users to be able to log in with their AD account so we don't need to maintain a second repository of user accounts. How do I fix this to eliminate that first useless prompt? The directives in the apache2.conf file:

        <Directory /var/www/kerberos>
            AuthType Kerberos
            AuthName "Kerberos Login"
            KrbMethodNegotiate On
            KrbMethodK5Passwd On
            KrbAuthRealms ONEVUE.COM.AU.LOCAL
            Krb5KeyTab /etc/krb5.keytab
            KrbServiceName HTTP/[email protected]
            require valid-user
        </Directory>

    krb5.conf:

        [libdefaults]
            default_realm = ONEVUE.COM.AU.LOCAL
        [realms]
            ONEVUE.COM.AU.LOCAL = {
                kdc = SYD01PWDC01.ONEVUE.COM.AU.LOCAL
                master_kdc = SYD01PWDC01.ONEVUE.COM.AU.LOCAL
                admin_server = SYD01PWDC01.ONEVUE.COM.AU.LOCAL
                default_domain = ONEVUE.COM.AU.LOCAL
            }
        [login]
            krb4_convert = true
            krb4_get_tickets = false

    The access log when accessing the secured directory (note the two separate 401s):

        192.168.10.115 - - [24/Aug/2012:15:52:01 +1000] "GET /kerberos/ HTTP/1.1" 401 710 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1"
        192.168.10.115 - - [24/Aug/2012:15:52:06 +1000] "GET /kerberos/ HTTP/1.1" 401 680 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1"
        192.168.10.115 - [email protected] [24/Aug/2012:15:52:10 +1000] "GET /kerberos/ HTTP/1.1" 200 375 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1"

    And one line in error.log:

        [Fri Aug 24 15:52:06 2012] [error] [client 192.168.0.115] gss_accept_sec_context(2) failed: An unsupported mechanism was requested (, Unknown error)
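
    The error-log line is the clue: the browser answers the first 401 with a Negotiate token mod_auth_kerb cannot accept (typically NTLM rather than Kerberos, common on machines not joined to the domain), that attempt fails, and only the follow-up Basic prompt carries the AuthName. If Negotiate/SSO is not needed yet, a hedged workaround is to disable it so the module goes straight to the password dialog:

        <Directory /var/www/kerberos>
            AuthType Kerberos
            AuthName "Kerberos Login"
            # Skip the Negotiate round-trip; browsers that would answer it
            # with NTLM fail it and cause the extra generic prompt
            KrbMethodNegotiate Off
            KrbMethodK5Passwd On
            KrbAuthRealms ONEVUE.COM.AU.LOCAL
            Krb5KeyTab /etc/krb5.keytab
            require valid-user
        </Directory>

    Negotiate can be turned back on later once the clients are set up to send real Kerberos tokens.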


  • Moving a lotus database causes incoming mail failure

    - by Logman
    Hey all, I moved a couple of Lotus Notes databases to another hard drive (we ran out of space) by using a directory link/pointer on the Domino server: a text file with the .DIR extension containing the desired new path. That part works fine. I open a Lotus Notes client, use the ID, and OPEN the database; the directory link works, and the database at its new location opens with no problems. We then found out that we couldn't FORWARD any mail, and did the following to make it work: go to menu FILE | MOBILE | EDIT CURRENT LOCATION, go to the MAIL tab, and enter the correct path under "Mail File:". For example, it should read "mail\morespace\flabor" and not "mail\flabor" ("morespace" being the directory link/pointer). We can forward and reply to emails fine after that fix. The remaining problem is that the user receives no incoming email: Domino is still trying to deliver the user's incoming mail to the old database location "mail\flabor" instead of "mail\morespace\flabor", and the delivery error says the user does not exist. Is this a cache problem? We have reset the server ("Q" at the prompt), though we have not completely shut it down. Thanks, Frank


  • Can't get rsync over sftp to work

    - by Patrik
    I'm trying to set up a backup system from an Ubuntu server to a Synology NAS (DS413j) using rsync and sftp. I have created a user for this that we can call ubuntu-backup, with a directory in its home directory called www where the backup will be saved. I have enabled Network Backup in DSM, and ubuntu-backup has full access to its home directory. Here is my rsync config file on the Synology NAS:

        #motd file = /etc/rsyncd.motd
        #log file = /var/log/rsyncd.log
        pid file = /var/run/rsyncd.pid
        lock file = /var/run/rsync.lock
        use chroot = no

        [NetBackup]
        path = /var/services/NetBackup
        comment = Network Backup Share
        uid = root
        gid = root
        read only = no
        list = yes
        charset = utf-8
        auth users = root
        secrets file = /etc/rsyncd.secrets

        [ubuntu-backup]
        path = /volume1/homes/ubuntu-backup/www
        comment = Ubuntu Backup
        uid = ubuntu-backup
        gid = users
        read only = false
        auth users = ubuntu-backup
        secrets file = /etc/rsyncd.secrets

    The permissions on /volume1/homes/ubuntu-backup/www are ubuntu-backup:users 777. Here is the command I'm running:

        rsync -aHvhiPb /var/www/ [email protected]:./

    The result:

        sending incremental file list
        ERROR: module is read only
        rsync error: syntax or usage error (code 1) at main.c(1034) [Receiver=3.0.9]
        rsync: connection unexpectedly closed (9 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(605) [sender=3.0.9]

    If I run this instead:

        rsync -aHvhiPb /var/www/ [email protected]

    it looks like it's sending files, with no errors. But I can't find anything on the NAS.
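
    Worth noting that the two invocations do different things: a single colon runs rsync over ssh against a plain path (the rsyncd.conf modules are never consulted, which is presumably why the second run "succeeds" - the files most likely land relative to the login user's home, e.g. under /volume1/homes/ubuntu-backup/), while module access needs a double colon or an rsync:// URL. A hedged sketch of the daemon-mode call the [ubuntu-backup] module above is written for:

        # Talk to rsyncd directly; "ubuntu-backup" here is the module name
        # from rsyncd.conf, not a filesystem path (default port 873)
        rsync -aHvhiPb /var/www/ ubuntu-backup@192.168.1.200::ubuntu-backup/

        # Equivalent URL form
        rsync -aHvhiPb /var/www/ rsync://ubuntu-backup@192.168.1.200/ubuntu-backup/

    The "module is read only" error on the colon-syntax run suggests Synology's ssh entry point for rsync also consults rsyncd.conf and applies a read-only default there, but that part is an assumption.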


  • File Replication Service Errors

    - by ekamtaj
    Hey guys, we have a Windows 2003 R2 server, and a couple of users are reporting that they cannot scan files to the Windows server; they are getting Out of Space errors. I took a look at the server and we have 600 GB of free disk space on that partition. But while looking at the event log I found a lot of errors like the following (event IDs 13552 and 13555):

        The File Replication Service is unable to add this computer to the following replica set:
            "DOMAIN SYSTEM VOLUME (SYSVOL SHARE)"

        This could be caused by a number of problems such as:
            -- an invalid root path,
            -- a missing directory,
            -- a missing disk volume,
            -- a file system on the volume that does not support NTFS 5.0

        The information below may help to resolve the problem:
            Computer DNS name is "server.domain.local"
            Replica set member name is "server"
            Replica set root path is "c:\windows\sysvol\domain"
            Replica staging directory path is "c:\windows\sysvol\staging\domain"
            Replica working directory path is "c:\windows\ntfrs\jet"
            Windows error status code is
            FRS error status code is FrsErrorMismatchedJournalId

        Other event log messages may also help determine the problem. Correct the problem and the service will attempt to restart replication automatically at a later time. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
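
    FrsErrorMismatchedJournalId usually means the NTFS USN journal no longer matches what FRS was tracking (often after a volume filled up or the journal wrapped). The textbook remedy is a non-authoritative restore via the BurFlags registry value - hedged: read up on BurFlags before running this in production, and note that D2 re-syncs this member's SYSVOL from a partner, so it only helps on a DC that has at least one healthy replication partner:

        net stop ntfrs
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup" /v BurFlags /t REG_DWORD /d 0xD2 /f
        net start ntfrs

    The scanning "Out of Space" complaint may be unrelated to FRS (quotas or share-level limits are worth checking separately).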


  • How can I associate html/htm files with Chrome in Windows 7 64 bit?

    - by matt
    I want Chrome to open all .html files. It is currently set as my default browser, yet html files open in IE9. When I go to Control Panel\Programs\Default Programs\Set Associations I see that .html and .htm files are associated with IE. When I choose to change the default program, I'm presented with a list of programs, but Chrome is not one of them. I browse to and select Chrome.exe (C:\Users\Matt\AppData\Local\Google\Chrome\Application\chrome.exe), but it goes right back to IE. This is the first time I've seen anything like this. I'm running Windows 7 64-bit; I never had this problem on Windows 7 32-bit. Is this because Chrome by default installs in the user directory, not the Program Files directory? How can I fix these file associations? EDIT: It's not that things revert to IE after being associated with Chrome. When I browse to Chrome in the file association window and select it, the selection simply doesn't take: Chrome never appears in the list of programs, despite my pointing to the Chrome.exe location. I really think this has something to do with Chrome not installing into the Program Files directory.
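
    When the Browse picker silently ignores a selection like this, the per-user Chrome install has usually not registered a usable ProgID. A hedged sketch from an elevated command prompt, assuming Chrome's standard ChromeHTML ProgID exists (re-running the Chrome installer, or using its own "make default browser" flow, normally creates it):

        :: Inspect what .html currently maps to
        assoc .html

        :: Point .html/.htm at Chrome's ProgID
        assoc .html=ChromeHTML
        assoc .htm=ChromeHTML

        :: Check the ProgID actually resolves to chrome.exe
        ftype ChromeHTML

    Caveat: on Windows 7, Explorer's per-user "UserChoice" selection (set via Default Programs) can override assoc, so this may need to be paired with reinstalling Chrome so it appears in that dialog properly.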


  • 2010 cgi script failure

    - by Barry F
    Hi. I hope you can help; I'm just a beginner! I have listed a few extra details which may not be relevant. I upload CGI scripts to a local/personal directory on an Apache/2.2.10 server, using FTP95Pro in ASCII mode. The scripts execute correctly on my web server in a terminal session, so my code has no fatal syntax errors. Webpages 'action' each CGI script at /cgi-bin/. There are symbolic links which link system directory files to my local directory files. FollowSymLinks is enabled (I'm unsure how). Permissions are correct (755). This setup hasn't changed, apparently. The scripts executed perfectly for years, up to 2010. But now, in 2010, I have replaced working scripts with new files containing exactly the same text, filename and permissions; only the date (last modified) has changed. Now I receive a 500 Internal Server Error and cannot determine why. My server administrator assumes I have code errors, but the code is unchanged since last year, and it runs fine (albeit with no arguments) on the web server console using perl myscript.cgi. Is there anything you can think of that may have changed? I'm suspicious of the new decade. I think the server swapped from Linux to a Windows OS last year, but my server administrator got it all working OK. Is there something unusual he may have missed, related to 2010? Thank you in advance.
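
    Running a script by hand with perl bypasses most of the things that cause a 500 (the shebang line, line endings, execute bits), so "it runs in the terminal" rules out less than it seems. A hedged checklist to run on the server (log path is a placeholder):

        # 1. The actual reason is almost always spelled out in the error log
        tail -20 /path/to/apache/logs/error_log

        # 2. CRLF line endings break the #! line even though 'perl script.cgi'
        #    still works; an ASCII-mode FTP transfer from Windows can add them
        head -1 myscript.cgi | od -c     # look for \r \n at the end
        sed -i 's/\r$//' myscript.cgi    # strip them if present

        # 3. Re-check the interpreter path in the shebang and the exec bit
        head -1 myscript.cgi
        chmod 755 myscript.cgi

    Given the suspected OS swap on the server, a now-wrong interpreter path in the shebang would produce exactly this symptom.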


  • Windows 7 library nightmare

    - by Lobuno
    In our Active Directory we deploy a policy to our clients where the personal directory (My Documents) is redirected to a file server of ours: \\server\share\username\Documents. On older systems everything worked fine. In Windows 7, some users are experiencing the following symptoms:

    - The Documents library is EMPTY.
    - Where the Documents library should appear in Explorer, an empty white icon is displayed, with no caption.
    - Right-clicking the Documents library to edit the folders that are part of the library brings the dialog up; however, that dialog is unusable: no folder is listed, and clicking Add folder does nothing.
    - Deleting the library and letting it auto-create does not solve the problem.
    - The shared directory can be accessed via UNC paths and can be mounted as a shared drive as well; the library is still broken.
    - The shared drives are on a Windows 2008 indexed server.
    - Using the Windows Library tool utility does not solve the problem.

    What can the cause of this problem be, and how can it be solved?
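
    One further reset worth trying, hedged: it recreates the default library definitions rather than fixing the underlying redirection, and the .library-ms file name below is the standard Windows 7 one. If the definition file itself is corrupt, deleting it and letting Explorer rebuild it can clear the blank-icon state:

        :: Run as the affected user
        del "%APPDATA%\Microsoft\Windows\Libraries\Documents.library-ms"
        :: Then in Explorer: right-click Libraries -> "Restore default libraries"

    If the rebuilt library still refuses the redirected folder, check that the file server location is indexed (or offline-available), since Windows 7 libraries reject remote folders they cannot index.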


  • Host Primary Domain from a subfolder

    - by TandemAdam
    I am having a problem making a subdirectory act as the public_html for my main domain, with a solution that also works for that domain's subdirectories. My hosting allows me to host multiple sites, which are all working great. I have set up a subfolder under my ~/public_html/ directory called /domains/, where I create a folder for each separate website, e.g.:

        public_html
          domains
            websiteone
            websitetwo
            websitethree
            ...

    This keeps my sites nice and tidy. The only issue was getting my "main domain" to fit into this system. It seems my main domain is somehow tied to my account (or to Apache, or something), so I can't change the "document root" of this domain. I can define the document roots for any other domains ("Addon Domains") that I add in cPanel, no problem, but the main domain is different. I was told to edit the .htaccess file to redirect the main domain to a subdirectory. This seemed to work great, and my site works fine on its home/index page. The problem is that if I navigate my browser to, say, the images folder (just for example) of my main site:

        www.yourmaindomain.com/images/

    then it seems to ignore the redirect and shows the entire server directory in the URL:

        www.yourmaindomain.com/domains/yourmaindomain/images/

    It still actually shows the correct "Index of /images" page and lists all my images. Here is an example of the .htaccess file that I am using:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^(www.)?yourmaindomain.com$
        RewriteCond %{REQUEST_URI} !^/domains/yourmaindomain/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /domains/yourmaindomain/$1
        RewriteCond %{HTTP_HOST} ^(www.)?yourmaindomain.com$
        RewriteRule ^(/)?$ domains/yourmaindomain/index.html [L]

    Does this .htaccess file look correct? I just need my main domain to behave like an addon domain, with its subdirectories adhering to the redirect rules.
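
    A hedged explanation of the leak: when a directory request gets internally rewritten to /domains/yourmaindomain/..., mod_dir may then issue an external trailing-slash redirect, which exposes the real path in the address bar. A common variant of this pattern handles that case and also rewrites directories (the original rule's !-d condition skips them):

        RewriteEngine on

        # 1. If the client itself asked for the internal path (e.g. after a
        #    leaked redirect), bounce it back to the clean URL. THE_REQUEST
        #    is the raw request line, so internal rewrites cannot re-trigger
        #    this rule and loop.
        RewriteCond %{HTTP_HOST} ^(www\.)?yourmaindomain\.com$ [NC]
        RewriteCond %{THE_REQUEST} \s/domains/yourmaindomain/ [NC]
        RewriteRule ^domains/yourmaindomain/(.*)$ /$1 [R=301,L]

        # 2. Internally serve everything, directories included, from the
        #    subfolder (note the [L] flag the original first rule lacked)
        RewriteCond %{HTTP_HOST} ^(www\.)?yourmaindomain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/domains/yourmaindomain/
        RewriteRule ^(.*)$ /domains/yourmaindomain/$1 [L]

    This is a sketch of the usual cPanel main-domain workaround, not a tested config for this exact host; the escaped dots in the hostname conditions are also a small correctness fix over the original.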


  • Anonymous FTP upload on CentOS 5.2

    - by Craig
    I need to allow users to upload files to an FTP server anonymously. They should not be able to see any other files, or download files. It is a CentOS 5.2 server, with a separate partition for the upload area (mounted at /ftp). I have tried to set up vsftpd and followed all the instructions/advice I could find, but when a user logs in and tries to transfer a file it throws a "553 Could not create file." error. If I do a pwd, it shows the directory as "/" rather than the anon_root of "/ftp/anonymous", and any attempt to change the remote directory ends with "550 Failed to change directory.". I have a subdirectory /ftp/anonymous/incoming that is writable for the uploads. SELinux is in permissive mode, and I am running version 2.0.5, release 16.el5, of vsftpd. Here is the vsftpd.conf file:

        anonymous_enable=YES
        local_enable=YES
        write_enable=YES
        local_umask=002
        anon_umask=007
        file_open_mode=0666
        anon_upload_enable=YES
        anon_mkdir_write_enable=NO
        dirmessage_enable=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chown_uploads=YES
        chown_username=inftpadm
        xferlog_std_format=YES
        nopriv_user=nobody
        listen=YES
        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
        ftp_username=inftpadm
        anon_root=/ftp/anonymous
        anon_other_write_enable=NO
        anon_mkdir_write_enable=NO
        anon_world_readable_only=NO
        dirlist_enable=YES

    Can anyone help?
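
    A couple of hedged observations: pwd showing "/" is normal, because vsftpd chroots anonymous sessions into anon_root, so the upload directory appears as /incoming from inside the session, not /ftp/anonymous/incoming. The 553 error then points at filesystem ownership: anonymous sessions run as the ftp_username account (inftpadm here), vsftpd refuses a writable chroot root, and the drop-box itself must be writable by that user. A permissions sketch:

        # Chroot root: owned by root, NOT writable by the ftp user
        chown root:root /ftp/anonymous
        chmod 755 /ftp/anonymous

        # Drop-box: writable but not listable, for a blind upload area
        chown inftpadm /ftp/anonymous/incoming
        chmod 730 /ftp/anonymous/incoming

    After that, clients should cd /incoming (not /ftp/anonymous/incoming) before uploading.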


  • Load Balancing Rails on Apache 2.x

    - by revgum
    My situation is that I need to proxy traffic to the root of my web server to port 81 for IIS, and then any traffic to a subdirectory needs to be directed to the Rails app:

        my-server.com/      - needs to proxy to port 81
        my-server.com/myapp - needs to point to the Rails app

    This seems to be working alright for the Rails application, but the images, javascripts, and stylesheets are not actually working (proxied). I've tried to fiddle with the ProxyPass lines but it still doesn't work for me. Can anyone help? Here's my complete VirtualHost portion of the config:

        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so

        ProxyRequests off

        <Proxy balancer://myapp_cluster>
            BalancerMember http://127.0.0.1:3001
            BalancerMember http://127.0.0.1:3002
        </Proxy>

        <VirtualHost *:80>
            DocumentRoot "c:\ruby\apps\myapp\public"
            <Directory /myapp >
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            ProxyPass /myapp/images !
            ProxyPass /myapp/stylesheets !
            ProxyPass /myapp/javascripts !
            ProxyPass /myapp/ balancer://myapp_cluster/
            ProxyPassReverse /myapp/ balancer://myapp_cluster/
            ProxyPreserveHost on
            ProxyPass / http://localhost:81/
            ErrorLog "c:\ruby\apps\myapp\log\error.log"
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog "c:\ruby\apps\myapp\log\access.log" combined
        </VirtualHost>
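
    The `ProxyPass ... !` exclusions stop the proxying, but they leave /myapp/images mapping onto the DocumentRoot, i.e. c:\ruby\apps\myapp\public\myapp\images, a path that does not exist, since Rails keeps its assets directly under public\. A hedged fix is to alias the asset URLs onto the real directories:

        # Serve static assets straight from the Rails public/ tree
        Alias /myapp/images      "c:\ruby\apps\myapp\public\images"
        Alias /myapp/stylesheets "c:\ruby\apps\myapp\public\stylesheets"
        Alias /myapp/javascripts "c:\ruby\apps\myapp\public\javascripts"

        <Directory "c:\ruby\apps\myapp\public">
            # Apache 2.2-era access syntax, matching the rest of this config
            Order allow,deny
            Allow from all
        </Directory>

    The Alias directives must sit alongside the existing ProxyPass exclusions inside the same VirtualHost.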


  • Samba between Ubuntu server 10.10 and Windows Vista, Windows 7

    - by chepukha
    I have a Linux box running Ubuntu Server 10.10, with Samba installed, and I want to share files with my laptops, which run Windows Vista Home and Windows 7 Home. I have been struggling with the setup for almost a month but couldn't get it right. If I try to access the share folder from Windows Vista, I get the message "Windows cannot access \\server_ip_address", error code 0x80070035, "The network path was not found." If I access it from Windows 7, then after entering the password to log in I can see the list of shared folders on the Linux box, but if I click on a shared folder, I get the same error message as above. Tailing /var/log/samba/log.windows7-pc I get the following message:

        [2011/03/16 00:17:41.427238, 0] smbd/service.c:988(make_connection_snum)
          canonicalize_connect_path failed for service sharemedia, path /root/sharemedia

    Here are my settings in smb.conf:

        [global]
        share modes = yes
        netbios name = Samba
        workgroup = WORKGROUP
        wins support = yes
        encrypt passwords = true

        [sharemedia]
        comment = Testing sharing using Samba
        path = /root/sharemedia/
        public = yes
        valid users = samba_usr_name
        ; make sure all files have sensible permissions
        create mask = 0660
        force create mask = 0660
        directory mask = 2770
        force directory mask = 2770
        directory security mask = 0000
        ; Normal share parameters
        read only = no
        browseable = yes
        writable = yes
        guest ok = no
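
    The log line is the lead here: canonicalize_connect_path failing for /root/sharemedia fits a share living under /root, which is mode 700, so the smbd process acting as the connecting user cannot even traverse into it. A hedged relocation sketch (the /srv path is an arbitrary choice):

        # Move the share outside root's home directory
        sudo mkdir -p /srv/sharemedia
        sudo mv /root/sharemedia/* /srv/sharemedia/
        sudo chown -R samba_usr_name /srv/sharemedia

        # Update smb.conf accordingly:
        #   path = /srv/sharemedia/
        sudo smbcontrol all reload-config

    Also make sure samba_usr_name has a Samba password set (smbpasswd -a samba_usr_name); a Unix account alone is not enough with passdb backend = tdbsam.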


  • IIS6 Sending a 404 for ".exe" files.

    - by Tracker1
    Recently a bunch of files I had set up for download via IIS6's web server stopped working. They are a number of setup files ending in ".exe", and they were working until a few months ago. I have the file permissions set properly, and even enabled directory browsing in IIS to confirm that the paths are indeed correct. I'm not sure if it is related, but directories with a period in the name stopped working as well, e.g. ~/download/ApplicationName/0.9/AppName-setup-0.9.123b2.exe. When I rename the directory to, say, 0_9, the browsing works, but the file itself still gets a 404 from IIS. For now, I've set up FileZilla FTP for anonymous access to these files, but I would prefer to continue using IIS. I've considered creating an HTTP handler to serve the .exe files, but would really prefer a configuration solution. I just can't figure out why it isn't working, as all the settings are correct: the directory is set up for read access, "Everyone" has read permissions on the files themselves, and directory browsing (aside from the "0.9" to "0_9" folder rename) shows the files.


  • How do I prevent spawning of zombie-like apache2 processes on Dreamhost VPS?

    - by Jonathan Hayward
    I have had a website on a DreamHost VPS for months or longer, and I have vague memories, from the initial setup, of having to turn off a customized Apache under /dh to get a standard Apache 2.x to work. Things had been going along on an even keel until I started making some changes lately: when I tried to bounce Apache (/usr/sbin/apachectl restart), it couldn't bind to port 80, and my site had been converted from a big literature site to a small parking site. I checked what was listening on port 80, and it was a DreamHost-customized Apache that had spawned. I killed all of those processes, restarted Apache, and changed the parent directory under /dh to mode 000. That was a day or two ago. Then, while bouncing Apache again to get a new site to load under HTTPS, I found that the DreamHost Apache had once more spawned from the directory I had set to mode 000, again converting my site to a parking page. I renamed the directory, but I am very skeptical that I have permanently killed the DreamHost-customized Apache. Besides duct-tape options like a crontab that kills and deletes it every minute, how can I permanently turn off the Apache processes that spawn from a location under /dh and interfere with standard Apache? What should I be doing that I am not? Can DreamHost's technical support stop the interference? Thanks,


  • Is TortoiseSVN really this buggy?

    - by John Isaacks
    I have been using TortoiseSVN for a couple of weeks now, and I get errors very often; almost everything I do creates an error. This happens with repositories on the internet, locally on my machine, and on machines on the network. So I started to keep track. Some examples are below:

        12/31/2010  Can't move 'C:\Users\jisaacks\Desktop\my branch test\.svn\tmp\entries' to 'C:\Users\jisaacks\Desktop\my branch test\.svn\entries': The file or directory is corrupted and unreadable.

        01/04/2011  Commit failed (details follow): Server sent unexpected return value (405 Method Not Allowed) in response to MKCOL request for '/svn/kranichs-svn/!svn/wrk/b316f15e-0869-4644-9c53-87aa0103506b/branches'

        01/06/2011  Can't move 'C:\Users\jisaacks\Desktop\DVD Catalog\vendors\.svn\tmp\entries' to 'C:\Users\jisaacks\Desktop\DVD Catalog\vendors\.svn\entries': The file or directory is corrupted and unreadable.

        01/06/2011  Can't move 'C:\Users\jisaacks\Desktop\DVD Catalog\cake\tests\test_app\views\layouts\.svn\tmp\entries' to 'C:\Users\jisaacks\Desktop\DVD Catalog\cake\tests\test_app\views\layouts\.svn\entries': The file or directory is corrupted and unreadable.

        01/06/2011  Commit failed (details follow): attempt to write a readonly database
                    attempt to write a readonly database

    That last one, about the read-only database, happens every time I commit. Say I am working on the head revision (7) in a working copy. I make a change and commit it. It gives me this error. But if I look at the log, it tells me there is now a revision 8 (the commit I just made) while I am still on revision 7, so I need to run update to be on the revision I just committed. I hope I explained that clearly. Anyway, with all these errors I wonder: is TSVN just this unstable? Does everyone have these issues, or is it just me? And if it's just me, what could I be doing wrong?
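
    Two hedged observations. Errors like "corrupted and unreadable" inside .svn\tmp and "attempt to write a readonly database" on every commit are classic signs of another process grabbing locks inside the working copy, most often real-time antivirus or the Windows indexing service, rather than TortoiseSVN itself; excluding the working copies from scanning/indexing is the usual first step, followed by repair:

        :: Repair any half-finished working-copy operations
        svn cleanup "C:\Users\jisaacks\Desktop\DVD Catalog"

        :: Clear stray read-only flags left on the SVN metadata
        attrib -R "C:\Users\jisaacks\Desktop\DVD Catalog\.svn\*" /S /D

    Separately, needing to update after a commit is normal Subversion behaviour, not a bug: a commit only advances the revision of the items committed, leaving the rest of the working copy at the older revision.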


  • What's the issue with this Samba setup?

    - by Dan Nestor
    I asked this on Super User, but I realized that may be the wrong place, so I am duplicating the question here; I hope this is allowed. I am trying to share a directory through Samba. In smb.conf I have the following:

        [global]
        workgroup = WORKGROUP
        security = user
        passdb backend = tdbsam
        netbios name = <hostname>

        [share_name]
        path = </path/to/share>
        writable = yes
        valid users = <username>

    <username>, the user in question, is the owner of the directory /path/to/share. Permissions on the directory are 755. If I try to connect from another computer, the connection attempt is unsuccessful (I assume it's an authentication error, because it re-prompts me for the password). The client requires a domain name for authentication; I tried both WORKGROUP and the hostname/netbios name of the Samba server. Samba logs on the server have no mention of the failed connection attempt, and the firewall on the server is down. What am I doing wrong? Update: I have since run smbpasswd -a <username> and now I am getting a clear error message: "not enough permissions to view contents of share".
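
    The update narrows it down: authentication now succeeds (Samba keeps its own password database, hence the need for smbpasswd -a), and the remaining failure is filesystem- or SELinux-level access to the share path. A hedged checklist; the SELinux lines apply only on an enforcing distribution, which the question does not state:

        # Every path component on the way to the share must be traversable
        # (the x bit) by the connecting user
        ls -ld /path /path/to /path/to/share

        # On SELinux systems, the share also needs the Samba file context
        sudo chcon -R -t samba_share_t /path/to/share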


  • Running python script in incrontab in Debian

    - by WilliamMayor
    I have a user, dropbox, that runs the Dropbox daemon, and I want to monitor the directories in the Dropbox directory for new files and run a Python script when they appear. I have the Python script, which I know works:

        $ /home/dropbox/monitor.py
        Trying to get lock
        Got lock, waiting for Dropbox to be idle
        Dropbox idle
        Finding instructions
        Done, releasing lock

    I have an incrontab entry:

        $ incrontab -l
        /home/dropbox/Dropbox IN_CREATE /home/dropbox/monitor.py | logger
        /home/dropbox/test IN_CREATE logger "$$ $@ $# $% $&"

    When I add a file to the test directory, I see the output in /var/log/syslog:

        $ touch /home/dropbox/test/a
        $ tail /var/log/syslog
        ...
        Nov  9 10:18:27 vps incrond[1354]: (dropbox) CMD (logger "$ /home/dropbox/test a IN_CREATE 256")
        Nov  9 10:18:27 vps logger: "$ /home/dropbox/test a IN_CREATE 256"
        ...

    However, when I add a file to the Dropbox directory, the command doesn't seem to run:

        $ touch /home/dropbox/Dropbox/a
        $ tail /var/log/syslog
        ...
        Nov  9 10:24:16 vps incrond[1354]: (dropbox) CMD (/home/dropbox/monitor.py | logger)
        ...

    So the incron daemon notices the new file and finds the correct command to execute, but the command never actually runs, and there are no error messages. It seems as if incrontab can only be used to run the simplest of commands. This might be a similar question to "Incrond running but not executing commands CentOS 6.4", but I don't think I have environment problems; every path is absolute. I tried changing .../monitor.py to /usr/bin/python2.7 .../monitor.py just in case, but it didn't make any difference.
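
    A plausible culprit, hedged but consistent with the symptoms: incron executes commands directly rather than through a shell, so the "|" and "logger" are passed to monitor.py as literal arguments instead of forming a pipeline, and the script may be bailing out silently on unexpected arguments. (The test entry works because logger is a single command with plain arguments.) Wrapping the pipeline in a script restores shell semantics; the wrapper name below is an invention for illustration:

        #!/bin/sh
        # /home/dropbox/monitor-wrapper.sh - gives incron one argv entry
        # while keeping the pipeline; remember: chmod +x this file
        /home/dropbox/monitor.py 2>&1 | logger -t dropbox-monitor

    and the incrontab line becomes:

        /home/dropbox/Dropbox IN_CREATE /home/dropbox/monitor-wrapper.sh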


  • Why does Windows XP (during a rename operation) report file already exists when it doesn't?

    - by Hawk
    From the command line:

        E:\menu\html\tom\val\.svn\tmp\text-base>ver
        Microsoft Windows [Version 5.2.3790]

        E:\menu\html\tom\val\.svn\tmp\text-base>dir
         Volume in drive E is DATA
         Volume Serial Number is F047-F44B

         Directory of E:\menu\html\tom\val\.svn\tmp\text-base

        12/23/2010  04:36 PM    <DIR>          .
        12/23/2010  04:36 PM    <DIR>          ..
        12/23/2010  04:01 PM                 0 wtf.com3.csv.svn-base
                       1 File(s)              0 bytes
                       2 Dir(s)  170,780,262,400 bytes free

        E:\menu\html\tom\val\.svn\tmp\text-base>rename wtf.com3.csv.svn-base com3.csv.svn-base
        A duplicate file name exists, or the file cannot be found.

        E:\menu\html\tom\val\.svn\tmp\text-base>dir
        (unchanged: wtf.com3.csv.svn-base is still the only file)

    I don't know what to do about this, as there is no other file in this directory. Why does Windows XP report that there is already a file here named com3.csv.svn-base when there is clearly no other file here?
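
    The target name is the problem, not a duplicate: names beginning with a reserved DOS device name (CON, PRN, AUX, NUL, COM1-COM9, LPT1-LPT9) followed by a dot are parsed as that device, so com3.csv.svn-base is read as the COM3 device and the rename fails with a misleading error. A hedged workaround is the \\?\ path prefix, which bypasses Win32 name parsing:

        :: Perform the rename via the literal NT path form
        rename \\?\E:\menu\html\tom\val\.svn\tmp\text-base\wtf.com3.csv.svn-base com3.csv.svn-base

        :: The same prefix is needed to delete such a file later
        del \\?\E:\menu\html\tom\val\.svn\tmp\text-base\com3.csv.svn-base

    Note the resulting file stays awkward for ordinary tools to open; since this is inside .svn, the more durable fix is probably to rename the versioned file in the repository away from the reserved name.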


  • NFS confusion - writing many small files

    - by Antonis Christofides
    I have a Debian Squeeze amd64 machine which is at the same time an NFSv4 server and client (it mounts itself through NFSv4). The local directory that leads directly to disk is /nfs4exports/mydir, whereas /nfs4mounts/mydir is the same thing mounted through NFS, using the machine's external IP address. Here is the line from fstab:

        176.9.116.102:/mydir /nfs4mounts/mydir nfs4 soft 0 0

    I have an application that writes many small files. If I write directly to /nfs4exports/mydir, it writes thousands of files per second; but if I write to /nfs4mounts/mydir, it writes only about 4 files per second. I can greatly increase the speed by adding async to /etc/exports. (Writing a single large file to the NFS directory goes at more than 100 MB/s.) I am confused by the description of async in NFS. If my application accesses the local directory, system calls like write and close return even if caches have not been flushed to permanent storage. Apparently this is not true with NFS sync behaviour; however, with NFS async behaviour, even calls like fsync are ignored. Isn't it possible to work like local files do, i.e. generally work asynchronously, but honour fsync and O_SYNC?
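
    For reference, the export-side switch under discussion, as a sketch (the client network and fsid are assumptions): with sync, the server must commit each metadata-changing operation to stable storage before replying, which for a small-file workload means synchronous disk round trips per file created; with async, the server acknowledges from cache, at the cost that a client fsync no longer guarantees the data is on the server's disk.

        # /etc/exports - the sync/async choice is made per export
        # (fsid=0 marks the NFSv4 pseudo-root; the netmask is an assumption)
        /nfs4exports 176.9.116.0/24(rw,async,fsid=0,no_subtree_check)

        # apply the change without restarting:
        # exportfs -ra

    The middle ground being asked about (async in general, honoured fsync) is essentially what sync already provides on the client side via write-back caching plus COMMIT; the per-file cost comes from NFS's close-to-open semantics forcing a flush on every close.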


  • arrays in puppet

    - by paweloque
    I'm wondering how to solve the following Puppet problem: I want to create several files based on an array of strings. The complication is that I want to create multiple directories containing files with the same names:

        dir1/
          fileA
          fileB
        dir2/
          fileA
          fileB
          fileC

    The problem is that file resource titles must be unique. So if I keep the file names in an array, I need to iterate over the array in a custom way to be able to prefix the file names with the directory name:

        $file_names = ['fileA', 'fileB']
        $file_names_2 = [$file_names, 'fileC']

        file { 'dir1': ensure => directory }
        file { 'dir2': ensure => directory }

        file { $file_names:
          path   => 'dir1',
          ensure => present,
        }
        file { $file_names_2:
          path   => 'dir2',
          ensure => present,
        }

    This won't work, because the file resource titles clash. So I need to append, e.g., the dir name to the file title; however, this causes the array of files to be concatenated into a single string rather than treated as multiple files... argh:

        file { "${file_names}-dir1":
          path   => 'dir1',
          ensure => present,
        }
        file { "${file_names_2}-dir2":
          path   => 'dir2',
          ensure => present,
        }

    How do I solve this problem without repeating the file resource itself? Thanks
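
    Since a file resource's title can simply be its full path, one common answer is to build arrays of full paths and let the titles carry the uniqueness. A sketch using the prefix() function from the puppetlabs-stdlib module (an assumed dependency; the /tmp paths are placeholders):

        $file_names   = ['fileA', 'fileB']
        $file_names_2 = ['fileA', 'fileB', 'fileC']

        file { ['/tmp/dir1', '/tmp/dir2']:
          ensure => directory,
        }

        # prefix() turns each file name into a unique full-path title
        $dir1_files = prefix($file_names,   '/tmp/dir1/')
        $dir2_files = prefix($file_names_2, '/tmp/dir2/')

        file { $dir1_files:
          ensure  => present,
          require => File['/tmp/dir1'],
        }
        file { $dir2_files:
          ensure  => present,
          require => File['/tmp/dir2'],
        }

    On Puppet 4 and later, the built-in each() iteration makes the same thing possible without stdlib.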


  • Apache Virtual host (SSL) Doc Root issue

    - by Steve Hamber
    I am having issues with the SSL document root of my vhosts configuration. HTTP works fine and serves pages from the root directory specified in my vhost config, DocumentRoot /var/www/html/websites/ssl.domain.co.uk/. However, HTTPS seems to be looking for files in the main Apache document root found further up in httpd.conf, and is not being overridden by the vhost config. (I assume that the vhost config does override the default DocumentRoot?)

        # DocumentRoot: The directory out of which you will serve your
        # documents. By default, all requests are taken from this directory,
        # but symbolic links and aliases may be used to point to other
        # locations.
        DocumentRoot "/var/www/html/websites/"

    Here is my config. I am quite new to Linux, so any advice on why this is happening is appreciated!

        NameVirtualHost *:80
        NameVirtualHost *:443

        <VirtualHost *:443>
            ServerAdmin root@localhost
            DocumentRoot /var/www/html/websites/https_domain.co.uk/
            ServerName ssl.domain.co.uk
            ErrorLog /etc/httpd/logs/ssl.domain.co.uk/ssl.domain.co.uk-error_log
            CustomLog /etc/httpd/logs/ssl.domain.co.uk/ssl.domain.o.uk-access_log common
            SSLEngine on
            SSLOptions +StrictRequire
            SSLCertificateFile /var/www/ssl/ssl_domain_co_uk.crt
            SSLCertificateKeyFile /var/www/ssl/domain.co.uk.key
            SSLCACertificateFile /var/www/ssl/ssl_domain_co_uk.ca-bundle
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin root@localhost
            DocumentRoot /var/www/html/websites/ssl.domain.co.uk/
            ServerName ssl.domain.co.uk
            ErrorLog /etc/httpd/logs/ssl.domain.co.uk/ssl.domain.xo.uk-error_log
            CustomLog /etc/httpd/logs/ssl.domain.co.uk/ssl.domain.xo.uk-access_log common
        </VirtualHost>
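
    Two hedged things to check against the config above: the :443 vhost's DocumentRoot points at .../https_domain.co.uk/ while the :80 vhost uses .../ssl.domain.co.uk/, so the two protocols genuinely serve different trees; and if HTTPS requests are still falling through to the global DocumentRoot, a competing default SSL vhost (e.g. the stock <VirtualHost _default_:443> shipped in ssl.conf on Red Hat-style layouts) may be catching them first. Quick diagnostics:

        # Show how Apache resolved the vhosts and which vhost is the
        # default for port 443
        apachectl -S     # or: httpd -S

        # If the roots were meant to match, align them:
        # DocumentRoot /var/www/html/websites/ssl.domain.co.uk/

    If apachectl -S lists another :443 vhost ahead of this one, disable it or merge the SSL settings into the intended vhost.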


  • issue in installing postgresql 9.3.4 on Windows server 2003 x64

    - by randydom
    Hello, I really did everything I know to install PostgreSQL 9.3.4 on my Windows Server 2003 x64 machine, but I am always stopped by this error (screenshot): http://oi57.tinypic.com/s4tb8i.jpg. If I click OK and then go to the Windows services list, I don't find the PostgreSQL service, so I can't start it. Can anyone please help me install it correctly? PS: I've followed all the steps in wiki.postgresql.org/wiki/Troubleshooting_Installation. Many thanks. Here's the installer log, where I get "Failed to initialise the database cluster with initdb":

        Called IsVistaOrNewer()...
        'winmgmts' object initialized...
        Version:5.2 MajorVersion:5
        Ensuring we can write to the data directory (using cacls):
        Executing batch file 'rad22ADE.bat'...
        processed dir: C:\Program Files\PostgreSQL\9.2\data
        Executing batch file 'rad22ADE.bat'...
        The files belonging to this database system will be owned by user "Administrator".
        This user must also own the server process.
        The database cluster will be initialized with locale "English_United States.1252".
        The default text search configuration will be set to "english".
        fixing permissions on existing directory C:/Program Files/PostgreSQL/9.2/data ...
        initdb: could not change permissions of directory "C:/Program Files/PostgreSQL/9.2/data": Permission denied
        Called Die(Failed to initialise the database cluster with initdb)...
        Failed to initialise the database cluster with initdb
        Script stderr: Program ended with an error exit code

        Error running cscript //NoLogo "C:\Program Files\PostgreSQL\9.2/installer/server/initcluster.vbs" "NT AUTHORITY\NetworkService" "postgres" "****" "C:\Program Files\PostgreSQL\9.2" "C:\Program Files\PostgreSQL\9.2\data" 5432 "DEFAULT" 0 : Program ended with an error exit code
        Problem running post-install step. Installation may not complete correctly
        The database cluster initialisation failed.

        Creating Uninstaller
        Creating uninstaller 25%
        Creating uninstaller 50%
        Creating uninstaller 75%
        Creating uninstaller 100%
        Installation completed
        Log finished 05/02/2014 at 04:04:04
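
    The log shows initdb dying while changing permissions on an *existing* data directory, and it references a 9.2 path even though the installer is 9.3.4, which suggests leftovers from an earlier attempt. A hedged cleanup using 2003-era tooling: remove the stale directory, pre-create it with full rights for the service account the installer passes to initcluster.vbs, then re-run the installer:

        rd /s /q "C:\Program Files\PostgreSQL\9.2\data"
        mkdir "C:\Program Files\PostgreSQL\9.2\data"
        cacls "C:\Program Files\PostgreSQL\9.2\data" /E /T /G "NT AUTHORITY\NetworkService":F

    If the installer then creates a 9.3 data directory instead, apply the same grant to that path.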

