Search Results

Search found 21348 results on 854 pages for 'active directory lds'.

  • Using Apache2 to set up a basic webpage

    - by mrhatter
    I am having a problem with a test page I set up for my website. The page itself (index.html) looks like this:

        <html>
        <head>
        <title>Welcome to website.net!</title>
        </head>
        <body>
        <h1>Success! The website.net virtual host is working!</h1>
        </body>
        </html>

    which should display "Welcome to website.net! Success! The website.net virtual host is working!" in my browser when I navigate to www.mywebsite.net. However, I get a 403 "Forbidden" error when I navigate to the page. What am I missing? The file is installed at /var/www/mywebsite.net/public_html/index.html. I have the permissions of the /var/www directory set to 755 so that others can read and execute it, but it does not seem to be working. I also have port 80 open in my iptables. The server is a VPS, if that makes a difference, but I have added a DNS record for the IP address. Any help is appreciated! UPDATE: Here is my virtual host configuration file "mywebsite.net.conf":

        <VirtualHost *:80>
            # Admin email, Server Name (domain name), and any aliases
            ServerAdmin [email protected]
            ServerName www.mywebsite.net
            ServerAlias mywebsite.net
            # Index file and Document Root (where the public files are located)
            DirectoryIndex index.html index.php
            DocumentRoot /home/myusername/public/mywebsite.net/public
            # Log file locations
            LogLevel warn
            ErrorLog /home/mysuername/public/mywebsite.net/log/error.log
            CustomLog /home/myusername/public/mywebsite.net/log/access.log combined
            <Directory /home/myusername/public/mywebsite.net/public>
                Options Indexes ExecCGI Includes FollowSymLinks MultiViews
                AllowOverride All
                Order Deny,Allow
                Allow from all
            </Directory>
        </VirtualHost>
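
    A likely culprit, going only by the paths quoted above: the file lives under /var/www/mywebsite.net/public_html, but the vhost's DocumentRoot points at /home/myusername/public/mywebsite.net/public, so Apache never serves the directory that actually holds index.html, and a missing or unreadable DocumentRoot is a classic source of 403s. A minimal sketch of one way to line the two up, assuming the files really are under /var/www as described:

        # point the vhost at the directory that actually exists:
        #   DocumentRoot /var/www/mywebsite.net/public_html
        #   <Directory /var/www/mywebsite.net/public_html> ... </Directory>
        # then make sure Apache can traverse every parent directory and read the file
        sudo chmod o+x /var/www /var/www/mywebsite.net /var/www/mywebsite.net/public_html
        sudo chmod o+r /var/www/mywebsite.net/public_html/index.html
        sudo apachectl configtest && sudo service apache2 restart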

  • Piping perfmon logs over DFS

    - by Sal
    I'm running perfmon on several servers, and I'd like all of the output to be piped to one particular server. I'm trying to do this over DFS by modifying the Root directory arg on each of the servers and placing a DFS path like so: Root Directory: \\PERFMON_LOG_REPOSITORY\[MY_COMP_NAME] The trouble is that when I make the Root directory dump the logs to a file over DFS, I always get the following error upon starting up the Collector Set: when attempting to start the data collector set the following system error occurred: access is denied
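
    A hedged note rather than a confirmed fix: Data Collector Sets run as SYSTEM by default, so writes to a remote path are made with the collecting server's computer account, and "access is denied" usually means the share and NTFS permissions on the DFS target do not include those machine accounts. One sketch, where the server and share names are placeholders standing in for the real ones:

        icacls \\TARGETSERVER\PERFMON_LOG_REPOSITORY /grant "DOMAIN\Domain Computers":(OI)(CI)M

    Alternatively, each collector set's "Run As" account can be changed to a domain user that already has write access to the share.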

  • Apache, Rewrite Rule and Directories

    - by milo5b
    my sites-available/ file looks something like the following:

        <VirtualHost *:80>
            ServerAdmin webmaster@mysite
            ServerName mysite.co.uk
            ServerAlias www.mysite.co.uk
            DocumentRoot /home/mysite.co.uk/htdocs/
            <Directory /home/mysite.co.uk/htdocs/>
                Options -Indexes FollowSymlinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/mysite.co.uk/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/mysite.co.uk/access.log combined
        </VirtualHost>

    In .htaccess (in htdocs/), I have (amongst others) the following rewrite rule:

        RewriteRule ^enquiries$ /enquiries.php

    I also have a directory named "enquiries" (/home/mysite.co.uk/htdocs/enquiries/), and when I hit the URL www.mysite.co.uk/enquiries I get:

        HTTP/1.1 301 Moved Permanently
        Date: Mon, 10 Dec 2012 18:53:37 GMT
        Server: Apache/2.2.16 (Debian)
        Location: http://www.mysite.co.uk/enquiries/
        Vary: Accept-Encoding
        Content-Type: text/html; charset=iso-8859-1

    and a browser displays the directory's contents. Now, I could easily rename the folder and get it sorted, but I would like to understand what is going on here. What would be the correct way to configure Apache so that it does not behave this way and instead honours the rewrite rule? If I did not explain myself clearly, please feel free to ask more questions, I'd be happy to answer them. Thanks!
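
    The 301 with the trailing slash is mod_dir's doing: when a request maps to an existing directory, it redirects to the slash-terminated URL before the .htaccess rewrite gets a useful say. A sketch of one way around it, assuming the rule stays in htdocs/.htaccess as quoted (DirectorySlash is a mod_dir directive and needs AllowOverride Indexes, which AllowOverride All already grants):

        DirectorySlash Off
        RewriteEngine On
        RewriteRule ^enquiries$ /enquiries.php [L]

    Renaming or removing the enquiries/ directory, as suggested in the question, remains the simpler fix.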

  • Does visual source safe take .cvsignore as configuration ?

    - by superuser
    "An easy way to tell CVS to ignore these directories is to create a file named .cvsignore (note the leading period) in your top-level source directory." Has anyone verified this with VSS? Plus, does VSS have commands similar to these?

    * To refresh the state of your source code to that stored in the source repository, go to your project source directory and execute cvs update -dP.
    * When you create a new subdirectory in the source code hierarchy, register it in CVS with a command like cvs add {subdirname}.
    * When you first create a new source code file, navigate to the directory that contains it and register the new file with a command like cvs add {filename}.
    * If you no longer need a particular source code file, navigate to the containing directory and remove the file. Then deregister it in CVS with a command like cvs remove {filename}.
    * While you are creating, modifying, and deleting source files, changes are not yet reflected in the server repository. To save your changes in their current state, go to the project source directory and execute cvs commit. You will be asked to write a brief description of the changes you have just completed, which will be stored with the new version of any updated source file.

  • `:Zone.Identifier` files keep on appearing in Windows XP virtual machine

    - by Jonathan Reno
    I have a Windows XP Home Edition guest and a Linux Mint 13 host. I use VirtualBox, and the ~/Public directory is shared with the guest. It sometimes happens that I use IE on the guest system to download files (until I get a better Windows browser). All of the downloaded files go to the L:\ drive (the ~/Public directory). When they are finished downloading, Windows Explorer adds a :Zone.Identifier file for each file I download. When I extract a downloaded ZIP archive on the guest (on drive L:\), Windows creates a :Zone.Identifier file for every file in the extracted directory. This even occurs if I use the host to move a file to the ~/Public directory. The shared ~/Public directory is on an ext4 partition, and the colon character is supposed to be illegal in file names on Windows, but not on the ext4 partition. Is there any way to stop Windows from putting all this rubbish on my filesystem? (I might have to create a shell script to clean up after Windows' act.) Here is what I see in Windows Explorer. By the way, if I were running a Mac OS X host (where colons are illegal file name characters) this would be even more horrendous.
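
    On the cleanup side (since the guest keeps recreating these as long as IE's Attachment Manager is preserving zone information), a small sketch for the Linux host, matching the ~/Public share described above:

        find ~/Public -type f -name '*:Zone.Identifier' -delete

    Whether the generation itself can be switched off depends on the guest: the relevant Windows policy is "Do not preserve zone information in file attachments", and whether it can be applied on XP Home is an assumption worth verifying before relying on it.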

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would try a reinstall based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM I get errors which I believe are related to kernel modules not being loaded in the VM from the /vz/XXX.conf template model. I have been testing with a few posts I found, but I was stuck with the error:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this:

        IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

    As the error remained, I read that I should rebuild the module dependencies on the virtual machine with depmod -a, but this returned an error:

        WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory
        FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory

    So I read about creating the directory empty and re-running depmod -a. I no longer get the dependencies error, but now I get the following and I don't have a clue how to proceed:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Module ip_tables not found.
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I understand that iptables rules have to be different on the VM and that perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to have available. I have heard that there are no such issues with the CentOS template, so I understand this is to do with the VM config. Any help would be greatly appreciated.
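
    For what it's worth, in OpenVZ the netfilter modules live in the host node's kernel, not in the container, so depmod/modprobe inside the container cannot fix this. A rough sketch of the usual approach, assuming you (or your provider) have root on the hardware node (the CTID 101 below is just a placeholder):

        # on the hardware node, not in the container
        modprobe -a ip_tables iptable_filter iptable_nat ip_conntrack ipt_state ipt_REJECT
        vzctl set 101 --iptables "ip_tables iptable_filter iptable_nat ip_conntrack ipt_state ipt_REJECT" --save
        vzctl restart 101

    If the container is rented from a provider, the equivalent request has to go to them, since the IPTABLES= line in the container's conf file only takes effect on the node.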

  • Raid 1 array won't assemble after power outage. How do I fix this ext4 mirror?

    - by Forkrul Assail
    Two ext4 drives on Raid 1 with mdadm won't reassemble after the power went out for an extended period (UPS drained). After turning the machine back on, mdadm said that the array was degraded, after which it took about 2 days for a full resync, which completed without problems. On trying to remount the array I get: mount: you must specify the filesystem type cat /etc/fstab lines relevant to setup: /dev/md127 /media/mediapool ext4 defaults 0 0 dmesg | tail (on trying to mount) says: [ 1050.818782] EXT3-fs (md127): error: can't find ext3 filesystem on dev md127. [ 1050.849214] EXT4-fs (md127): VFS: Can't find ext4 filesystem [ 1050.944781] FAT-fs (md127): invalid media value (0x00) [ 1050.944782] FAT-fs (md127): Can't find a valid FAT filesystem [ 1058.272787] EXT2-fs (md127): error: can't find an ext2 filesystem on dev md127. cat /proc/mdstat says: Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md127 : active (auto-read-only) raid1 sdj[2] sdi[0] 2930135360 blocks super 1.2 [2/2] [UU] unused devices: <none> fsck /dev/md127 says: fsck from util-linux 2.20.1 e2fsck 1.42 (29-Nov-2011) fsck.ext2: Superblock invalid, trying backup blocks... fsck.ext2: Bad magic number in super-block while trying to open /dev/md127 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> mdadm -E /dev/sdi gives me: /dev/sdi: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 37ac1824:eb8a21f6:bd5afd6d:96da6394 Name : sojourn:33 Creation Time : Sat Nov 10 10:43:52 2012 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 5860271016 (2794.40 GiB 3000.46 GB) Array Size : 2930135360 (2794.39 GiB 3000.46 GB) Used Dev Size : 5860270720 (2794.39 GiB 3000.46 GB) Data Offset : 262144 sectors Super Offset : 8 sectors State : clean Device UUID : 3e6e9a4f:6c07ab3d:22d47fce:13cecfd0 Update Time : Tue Nov 13 20:34:18 2012 Checksum : f7d10db9 - correct Events : 27 Device Role : Active device 0 Array State : AA ('A' == active, '.' == missing) boot@boot ~ $ sudo mdadm -E /dev/sdj /dev/sdj: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 37ac1824:eb8a21f6:bd5afd6d:96da6394 Name : sojourn:33 Creation Time : Sat Nov 10 10:43:52 2012 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 5860271016 (2794.40 GiB 3000.46 GB) Array Size : 2930135360 (2794.39 GiB 3000.46 GB) Used Dev Size : 5860270720 (2794.39 GiB 3000.46 GB) Data Offset : 262144 sectors Super Offset : 8 sectors State : clean Device UUID : 7fb84af4:e9295f7b:ede61f27:bec0cb57 Update Time : Tue Nov 13 20:34:18 2012 Checksum : b9d17fef - correct Events : 27 Device Role : Active device 1 Array State : AA ('A' == active, '.' == missing) machine@user ~ dmesg | tail [ 61.785866] init: alsa-restore main process (2736) terminated with status 99 [ 68.433548] eth0: no IPv6 routers present [ 534.142511] EXT4-fs (sdi): ext4_check_descriptors: Block bitmap for group 0 not in group (block 2838187772)! [ 534.142518] EXT4-fs (sdi): group descriptors corrupted! [ 546.418780] EXT2-fs (sdi): error: couldn't mount because of unsupported optional features (240) [ 549.654127] EXT3-fs (sdi): error: couldn't mount because of unsupported optional features (240) Since this is Raid 1 it was suggested that I try and mount or fsck the drives separately. 
After a long fsck on one drive, it ended with this as tail: Illegal double indirect block (2298566437) in inode 39717736. CLEARED. Illegal block #4231180 (2611866932) in inode 39717736. CLEARED. Error storing directory block information (inode=39717736, block=0, num=1092368): Memory allocation failed Recreate journal? yes Creating journal (32768 blocks): Done. *** journal has been re-created - filesystem is now ext3 again *** The drive however still doesn't want to mount: dmesg | tail [ 170.674659] md: export_rdev(sdc) [ 170.675152] md: export_rdev(sdc) [ 195.275288] md: export_rdev(sdc) [ 195.275876] md: export_rdev(sdc) [ 1338.540092] CE: hpet increased min_delta_ns to 30169 nsec [26125.734105] EXT4-fs (sdc): ext4_check_descriptors: Checksum for group 0 failed (43502!=37987) [26125.734115] EXT4-fs (sdc): group descriptors corrupted! [26182.325371] EXT3-fs (sdc): error: couldn't mount because of unsupported optional features (240) [27083.316519] EXT4-fs (sdc): ext4_check_descriptors: Checksum for group 0 failed (43502!=37987) [27083.316530] EXT4-fs (sdc): group descriptors corrupted! Please help me fix this. I never in my wildest nightmares thought a complete mirror would die this badly. Am I missing something? Suggestions on fixing this? Could someone explain why it would resync after the powerout, only to seemingly nuke the drive? Thanks for reading. Any help much appreciated. I've tried everything I can think of, including booting and filesystem checking with SystemRescue and Ubuntu liveboot discs.
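
    A hedged observation from the output above: the later fsck and mount attempts are aimed at the member disks (sdi, sdc) rather than at the assembled array, and with a 1.2 superblock the data sits 262144 sectors into each member, so a member will never look like a clean ext4 filesystem on its own. A cautious sketch of how one might proceed (read-only steps first; if the data matters, image the disks before any repair):

        mdadm --stop /dev/md127
        mdadm --assemble /dev/md127 /dev/sdi /dev/sdj
        fsck.ext4 -n /dev/md127              # -n: report only, change nothing
        # only if the main superblock really is gone, try a backup superblock, e.g.:
        # fsck.ext4 -b 32768 -B 4096 /dev/md127

    The device names are the ones quoted in the question and may differ after a reboot.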

  • How to bind old user's SID to new user to remain NTFS file ownership and permissions after freshly reinstall of Windows?

    - by LiuYan ??
    Each time we reinstalled Windows, it will create a new SID for user even the username is as same as before. // example (not real SID format, just show the problem) user SID -------------------- liuyan S-old-501 // old SID before reinstall liuyan S-new-501 // new SID after reinstall The annoying problem after reinstall is NTFS file owership and permissions on hard drive disk are still associated with old user's SID. I want to keep the ownership and permission setting of NTFS files, then want to let the new user take the old user's SID, so that I can access files as before without permission problem. The cacls command line tool can't be used in such situation, because the file does belongs to new user, so it will failed with Access is denied error. and it can't change ownership. Even if I can change the owership via SubInACL tool, cacls can't remove the old user's permission because the old user does not exist on new installation, and can't copy the old user's permission to new user. So, can we simply bind old user's SID to new user on the freshly installed Windows ? Sample test batch @echo off REM Additional tools used in this script REM PsGetSid http://technet.microsoft.com/en-us/sysinternals/bb897417 REM SubInACL http://www.microsoft.com/en-us/download/details.aspx?id=23510 REM REM make sure these tools are added into PATH set account=MyUserAccount set password=long-password set dir=test set file=test.txt echo Creating user [%account%] with password [%password%]... pause net user %account% %password% /add psgetsid %account% echo Done ! echo Making directory [%dir%] ... pause mkdir %dir% dir %dir%* /q echo Done ! echo Changing permissions of directory [%dir%]: only [%account%] and [%UserDomain%\%UserName%] has full access permission... pause cacls %dir% /G %account%:F cacls %dir% /E /G %UserDomain%\%UserName%:F dir %dir%* /q cacls %dir% echo Done ! echo Changing ownership of directory [%dir%] to [%account%]... pause subinacl /file %dir% /setowner=%account% dir %dir%* /q echo Done ! echo RunAs [%account%] user to write a file [%file%] in directory [%dir%]... pause runas /noprofile /env /user:%account% "cmd /k echo some text %DATE% %TIME% > %dir%\%file%" dir %dir% /q echo Done ! echo Deleting and Recreating user [%account%] (reinstall simulation) ... pause net user %account% /delete net user %account% %password% /add psgetsid %account% echo Done ! %account% is recreated, it has a new SID now echo Now, use this "same" account [%account%] to access [%dir%], it will failed with "Access is denied" pause runas /noprofile /env /user:%account% "cmd /k cacls %dir%" REM runas /noprofile /env /user:%account% "cmd /k type %dir%\%file%" echo Done ! echo Changing ownership of directory [%dir%] to NEW [%account%]... pause subinacl /file %dir% /setowner=%account% dir %dir%* /q cacls %dir% echo Done ! As you can see, "Account Domain not found" is actually the OLD [%account%] user echo Deleting user [%account%] ... pause net user %account% /delete echo Done ! echo Deleting directory [%dir%]... pause rmdir %dir% /s /q echo Done !
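
    To the core question: there is no supported way to hand an old SID to a new local account, so the practical route is usually the other direction, re-taking ownership and re-granting the new account. A sketch using the placeholder names from the test script above (run from an elevated prompt; this is a workaround, not SID binding):

        takeown /F test /R /D Y
        icacls test /setowner MyUserAccount /T
        icacls test /grant MyUserAccount:(OI)(CI)F /T
        icacls test /remove:g *S-old-501 /T

    The *S-old-501 value stands in for the orphaned SID left behind by the previous installation; icacls accepts SIDs prefixed with an asterisk.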

  • tag structured Filesystems

    - by A.Rashad
    I hope this is the correct site; I lose my way between the four sister sites :) Let me ask the question this way: all file systems I have seen before are hierarchical, meaning a root directory with branched directories, and so on, until we have files residing in those directories. The exception is the AS/400 file structure, where the concept of a library serves somewhat like a directory, but only one level deep. Why not have directory-less filesystems, where files are placed in a single location and the file identifiers are referenced by a database of tag/file relationships? This way there would be no need for symbolic links, and one file could have multiple relationships to multiple subjects rather than a single parent directory containing it. I hope the idea is clear.

  • Using mod_rewrite to mask /cgi-bin/abc as /def

    - by Alois Mahdal
    I have a seemingly easy task, but somehow I just can't get it to work. Some interesting lines from my httpd.conf:

        ...
        DocumentRoot "D:/opt/apache/htdocs"
        ...
        ScriptAlias /cgi-bin/ "D:/opt/apache/cgi-bin/"
        ...
        <Directory "D:/opt/apache/htdocs">
            Options Indexes FollowSymLinks ExecCGI
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>
        <Directory "D:/opt/apache/cgi-bin/">
            AllowOverride None
            Options ExecCGI
            Order allow,deny
            Allow from all
        </Directory>

    (I know it's dumb but it's only a testing machine :D.) Now, I have d:\opt\apache\cgi-bin\expired.pl and I expect GET /licensecheck.php?code=123456. I wish to fake the client into thinking it speaks with /licensecheck.php, but actually return data from \expired.pl. What I tried was setting the following at the end of httpd.conf:

        RewriteEngine on
        RewriteRule ^/licensecheck.php$ /cgi-bin/expired.pl [T=application/x-httpd-cgi,L]

    ...but it keeps 404-ing me, looking for a cgi-bin directory (not cgi-bin\expired.pl) in my DocumentRoot!

        [error] [client 127.0.0.1] script not found or unable to stat: D:/opt/apache/htdocs/cgi-bin

    /cgi-bin/expired.pl and all other scripts in /cgi-bin/ work as expected. The only way I could make it work was actually putting \expired.pl in DocumentRoot, but I don't want this; I want my cgi-bin neatly separated :)
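
    This looks like the usual server-context pitfall: once mod_rewrite rewrites the URL in httpd.conf context, the result is treated as a file path under DocumentRoot unless another mapping module gets a second pass at it. A sketch of the commonly suggested fix, passing the rewritten URI back through mod_alias so the existing ScriptAlias can map it (untested against this exact setup):

        RewriteEngine on
        RewriteRule ^/licensecheck\.php$ /cgi-bin/expired.pl [PT,L]

    The [PT] (pass-through) flag is what lets ScriptAlias resolve /cgi-bin/expired.pl to D:/opt/apache/cgi-bin/expired.pl.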

  • Running phpmyadmin and suphp

    - by thor
    I have a Debian Lenny web server running apache2 with libapache2-mod-suphp. Unfortunately, suPHP makes it impossible to use phpMyAdmin, as phpMyAdmin is installed in /usr/share/phpmyadmin and owned by root, and suPHP disables its engine in this directory:

        $ cat /etc/apache2/mods-enabled/suphp.conf
        <IfModule mod_suphp.c>
            AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
            suPHP_AddHandler application/x-httpd-php
            <Directory />
                suPHP_Engine on
            </Directory>
            # By default, disable suPHP for debian packaged web applications as files
            # are owned by root and cannot be executed by suPHP because of min_uid.
            <Directory /usr/share>
                suPHP_Engine off
            </Directory>
        </IfModule>

    Is there a way to enable the system phpMyAdmin (perhaps through the standard libapache2-mod-php5) while still using suPHP? How?
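
    One commonly used arrangement, sketched here on the assumption that libapache2-mod-php5 is installed and enabled alongside suPHP: leave suPHP off under /usr/share and let mod_php handle just that tree, for example in the phpMyAdmin conf snippet:

        <Directory /usr/share/phpmyadmin>
            suPHP_Engine off
            <IfModule mod_php5.c>
                php_admin_flag engine on
            </IfModule>
        </Directory>

    Whether the php_admin_flag line is needed depends on how the rest of the suPHP setup disables mod_php elsewhere; treat this as a starting point rather than a drop-in answer.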

  • httpd, vsftpd and the annoying selinux

    - by Christian
    I have CentOS 6.3 installed with httpd and vsftpd running, but I am unable to strike a balance between permissions that let the user upload over FTP and permissions that let their website work. What I do:

    * create a user with their home directory as /home/username
    * create a subfolder called html for their website
    * chown their directory: chown -R username:apache /home/username
    * chmod their directory: chmod -R 750 /home/username
    * chcon their directory: chcon -R -t httpd_sys_rw_content_t /home/username

    With that, their website loads fine but they are unable to FTP. If instead I run the following, they can FTP but their website doesn't load:

        chcon -R -t user_home_dir_t /home/username

    If I disable SELinux, the user can FTP and the website loads. So what is the answer that keeps SELinux enabled?
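
    A sketch of one way to keep SELinux on, assuming the site content lives in /home/username/html and FTP logins are ordinary local users: scope the web context to the html tree only, and let vsftpd at home directories via its boolean rather than flipping the whole home directory back and forth.

        semanage fcontext -a -t httpd_sys_rw_content_t "/home/username/html(/.*)?"
        restorecon -Rv /home/username/html
        setsebool -P ftp_home_dir on
        setsebool -P httpd_enable_homedirs on

    The paths and the exact booleans (ftp_home_dir, httpd_enable_homedirs) are worth verifying on the box with getsebool -a before relying on them.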

  • Virtualmin deactivating PHP on new virtual servers

    - by Josh
    This is related to my other question... but the situation is much worse now. After updating to the most recent version of Virtualmin, when I create new accounts, Virtualmin sets up their VirtualHost entries as follows:

        <Directory /home/username/public_html>
            Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
            allow from all
            AllowOverride All
            AddHandler fcgid-script .php
            FCGIWrapper /home/username/fcgi-bin/php.fcgi .php
        </Directory>
        <Directory /home/username/cgi-bin>
            allow from all
        </Directory>
        [...]
        RemoveHandler .php

    Now, not only is it specifically inserting AddHandler fcgid-script and FCGIWrapper... which I do not want because I am using mod_fastcgi, but it's also setting up PHP in such a way that it will never work! It's adding a RemoveHandler .php after setting up the handler for PHP! Where is this behavior configured and how can I stop it? Better yet, how can I make Virtualmin not include any PHP commands at all in the VirtualHost section?

  • Apache AliasMatch and DirectoryMatch not working?

    - by Alex
    I have the following config. Please notice the Alias and Directory equivalent: uncommented, they work as expected, but the dynamic/regex-based versions don't. Any ideas?

        <VirtualHost *:80>
            ServerName temp.dev.local
            ServerAlias temp.dev.local
            DocumentRoot "C:\wamp\www\temp\public"
            <Directory "C:\wamp\www\temp\public">
                AllowOverride all
                Order Allow,Deny
                Allow from all
            </Directory>

            # Alias /private/application/core/page/assets/images/ "C:/wamp/www/temp/private/application/core/page/assets/images/"
            # <Directory "C:/wamp/www/temp/private/application/core/page/assets/images/">
            AliasMatch ^/private/application/(.*)/(.*)/assets/images/ /private/application/$1/$2/assets/images/
            <DirectoryMatch "^/private/application/(.*)/(.*)/assets/images/">
                Options Indexes FollowSymlinks MultiViews Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </DirectoryMatch>
        </VirtualHost>
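
    One detail worth checking, offered as a sketch rather than a verified fix: AliasMatch must map to a filesystem path (the commented-out Alias does exactly that), and DirectoryMatch matches filesystem directories, not URLs, so both regex versions need the C:/wamp/... prefix:

        AliasMatch ^/private/application/([^/]+)/([^/]+)/assets/images/(.*)$ "C:/wamp/www/temp/private/application/$1/$2/assets/images/$3"
        <DirectoryMatch "^C:/wamp/www/temp/private/application/[^/]+/[^/]+/assets/images/">
            Options Indexes FollowSymlinks MultiViews Includes
            AllowOverride None
            Order allow,deny
            Allow from all
        </DirectoryMatch>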

  • How to fix boot and mount failed drops to initramfs prompt in Ubuntu 12.04?

    - by msPeachy
    Ubuntu partition does not boot. This started after a power interruption during system boot. The next time I boot, I encountered the following error message: mount: mounting /dev/disk/by-uuid/3f7f5cd9d-6ea3-4da7-b5ec-**** on /root failed: Invalid argument mount: mounting /sys on /root/sys failed: No such file or directory mount: mounting /dev on /root/dev failed: No such file or directory mount: mounting /sys on /root/sys failed: No such file or directory mount: mounting /proc on /root/proc failed: No such file or directory Target file system doesn't have /sbin/init. No init found. Try passing init= bootarg. Busybox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash) Enter 'help' for a list of built-in commands. (initramfs) _ I've searched for similar posts here and most of the recommended solution is to reboot to the Ubuntu LiveCD. That's another problem because I cannot boot to a LIVEUSB, this is the error message I get when booting to a LiveUSB: Busybox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash) Enter 'help' for a list of built-in commands. (initramfs) mount: mounting /dev/sda2 on /isodevice failed: Invalid argument Could not find the ISO /ubuntu-12.04-desktop-i386.iso. This could also happen if the file system is not clean because of an operating system crash, an interrupted boot process, an improper shutdown, or unplugging of a removable device without first unmounting or ejecting it. To fix this, simply reboot into Windows, let it fully start, log in, run 'chkdsk /r', then gracefully shut down and reboot back into Windows. After this you should be able to reboot again and resume the installation. I cannot boot into Windows because I don't have a Windows partition. Do I have to install Windows to fix this problem? Is there a way to fix this in the (initramfs) prompt? Please help. Thank you!

  • File access with hostname or ip only - no domain?

    - by Jonathon
    It seems likely that this is an obvious question, but I'm having trouble tracking down any useful information. Normally, when accessing files in a particular directory on a server, I'm able to create a virtual host, assign a domain, set the root directory location, and so on. However, I am in a situation where I have server space and need to access files with only a hostname. Is this possible? For example, let's say the hostname is 123hostname.com, and the file I want access to is in /home/sub-directory/filename.php. How do I get at it via a browser? I've tried http://123hostname.com/home/sub-directory/filename.php and some other variations on that theme (that I can't post because new users are restricted to one link in messages), but I am generally stuck. Any help, even if it's just to let me know that this isn't possible without some additional configuration, would be great. Thank you!
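
    If the box runs Apache and you can add configuration (an assumption; the question does not say what the server runs), the usual route is to map the path explicitly, since /home/sub-directory is outside any DocumentRoot:

        Alias /filename.php /home/sub-directory/filename.php
        <Directory /home/sub-directory>
            Order allow,deny
            Allow from all
        </Directory>

    After a reload, http://123hostname.com/filename.php would reach the file; without access to the server configuration this is not something a URL alone can do.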

  • Linux And NTFS Permissions

    - by VGE IT
    I am trying to restrict a folder within a directory created on a Linux filesystem. I have changed the permissions to: root rwx, a special Active Directory group rwx, and all others r. Even so, people who are not in the special AD group can access the directory and modify files, and the group changes to "Domain Users" when a user modifies documents within the directory. I have to manually change the documents' default group back to my AD group. I have also tried creating another AD group and modifying permissions to deny write access; when I do so through Windows Explorer, the settings seem to take effect until I go back and look at the permissions for the restricted group a second time, at which point no permissions show. Please assist. Samba share properties:

        [MyShare]
        comment = "blah blah blah"
        browseable = yes
        guest ok = no
        read only = no
        path = /xxx/xxxxx/
        create mask = 0640
        directory mask = 0750
        admin users = @"domain\Domain Admins", @"domain\group A", @"domain\group B"
        valid users = @"domain\Domain Admins", @"domain\group A", @"domain\group B"
        nt acl support = Yes
        inherit acls = yes
        inherit owner = yes
        inherit permissions = yes
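
    A hedged suggestion for the "Domain Users" symptom: new files pick up the creating user's primary group unless the share or the directory forces otherwise. Two common ways to pin the group, using placeholder names in place of the masked ones above:

        # in smb.conf, inside [MyShare]:
        #   force group = "DOMAIN\group A"
        # or on the filesystem, so new files inherit the directory's group:
        chgrp "DOMAIN\group A" /xxx/xxxxx/
        chmod g+s /xxx/xxxxx/

    Note that "inherit owner = yes" in the share can also confuse matters here, since it makes new files take the parent directory's owner.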

  • Apache ScriptAlias and access error?

    - by Parhs
    After much pain I figured out how to make this work with Apache 2.4 on Windows. Here is my configuration, which works successfully for git clone, push, and everything else. The problem: my configuration works, there is a "Require all denied" on the / directory, and I want only git functionality and nothing else. Example request from a git client:

        192.168.100.252 - - [07/Oct/2012:04:44:51 +0300] "GET /git/simple/info/refs?service=git-upload-pack HTTP/1.1" 200 264

    Error caused by that request:

        [Sun Oct 07 04:44:51.903334 2012] [authz_core:error] [pid 6988:tid 956] [client 192.168.100.252:13493] AH01630: client denied by server configuration: C:/git-server/web/simple

    There isn't any error at the git client; everything works fine, but I get this in the error log. Is there any solution so this error does not appear? I worry about log size.

        <VirtualHost *:80>
            DocumentRoot "C:\git-server\web"
            ServerName git.****censored****
            DirectoryIndex index.php
            SetEnv GIT_PROJECT_ROOT c:/git-server/repositories
            SetEnv GIT_HTTP_EXPORT_ALL
            SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
            ScriptAlias /git/ "C:/Program Files (x86)/Git/libexec/git-core/git-http-backend.exe/"
            <LocationMatch "^/.*/git-receive-pack$">
                Options +ExecCGI
                AuthType Basic
                AuthName intranet
                AuthUserFile "C:/git-server/config/users"
                Require valid-user
            </LocationMatch>
            <Directory />
                Options All
                Require all denied
            </Directory>
            <Directory "C:\Program Files (x86)\Git\libexec\git-core">
                Options +ExecCGI
                Options All
                Require all granted
            </Directory>
        </VirtualHost>
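
    If the goal is only to keep the log from growing, one low-risk option (a sketch relying on Apache 2.4's per-module log levels) is to quiet authz_core without changing the access rules that already work:

        LogLevel warn authz_core:crit

    That hides the AH01630 notices along with other authz_core errors, so it is a trade-off rather than a fix for whatever subrequest is being denied.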

  • Apache 2.2.14: SSLCARevocation location

    - by Doc
    I am installing a .crl in my Apache config. It looks like this:

        VirtualHost default
            DocumentRoot "web"
            ServerName example.com
            SSLEngine on
            SSLCertificateFile "cert.crt"
            SSLCertificateKeyFile "key.key"
            SSLCertificateChainFile "cert.ca-bundle"
            SSLProtocol -all +SSLv3
            SSLCipherSuite SSLv3:+HIGH:+MEDIUM
            Directory
                Order deny,allow
                Allow from all
                SSLCACertificateFile "ClientRootCert.crt"
                SSLVerifyClient require
                SSLVerifyDepth 3
                SSLCARevocationFile "CRLList.crl"
            Directory
        VirtualHost

    When Apache is started, I get the error: SSLCARevocationFile not allowed here. When I place SSLCARevocationFile above the Directory tag, Apache starts, but all client certs are rejected with the message ssl_error_expired_cert_alert (both revoked and active certs). How do I solve this?
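
    A hedged guess at the second symptom: in 2.2, SSLCARevocationFile belongs in server or virtual-host context (hence the first error), and an "expired cert" alert for every client, revoked or not, is a classic sign of a CRL whose nextUpdate has passed. Checking the CRL's validity window is cheap:

        openssl crl -in CRLList.crl -noout -lastupdate -nextupdate

    If nextUpdate is in the past, regenerating the CRL from the CA and reloading Apache may clear it; this is an inference from the symptom, not from the poster's files.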

  • debian/ubuntu locales and language settings

    - by AndreasT
    This self-answered question solves the following issues:

        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory

    and some other locale-related problems.
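
    The answer itself is not reproduced here, but the usual Debian/Ubuntu remedy for these messages looks roughly like the following (the locale name is an example; pick the one you actually use):

        sudo locale-gen en_US.UTF-8
        sudo update-locale LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8
        # or interactively:
        sudo dpkg-reconfigure locales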

  • OpenLDAP RHEL 6

    - by AndyM
    Hi all. I've been configuring OpenLDAP on RHEL 6, and it seems you have to run the following to rebuild the config directories. I'm OK with that, but my issue is: say I want to change the server password, do I have to go through the whole process every time I change the config? Is there a way of changing the slapd config after it has been built using the RHEL 6 method? Below is the advice I've found on the net, from http://www.linuxtopia.org/online_books/rhel6/rhel_6_migration_guide/rhel_6_migration_ch07s03.html:

        # This example assumes that the file to convert from the old slapd configuration is
        # located at /etc/openldap/slapd.conf and the new directory for OpenLDAP
        # configuration is located at /etc/openldap/slapd.d/.

        # Remove the contents of the new /etc/openldap/slapd.d/ directory:
        rm -rf /etc/openldap/slapd.d/*

        # Run slaptest to check the validity of the configuration file and specify the
        # new configuration directory:
        slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d

        # Configure permissions on the new directory:
        chown -R ldap:ldap /etc/openldap/slapd.d
        chmod -R 000 /etc/openldap/slapd.d
        chmod -R u+rwX /etc/openldap/slapd.d
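
    For ongoing changes you generally should not have to rebuild: the slapd.d directory is the cn=config database, and it can be edited online with ldapmodify. A sketch for the rootdn password case (the {2}bdb database name is typical of RHEL 6 but worth confirming first):

        NEWHASH=$(slappasswd -s 'NewSecretHere')
        ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
        dn: olcDatabase={2}bdb,cn=config
        changetype: modify
        replace: olcRootPW
        olcRootPW: $NEWHASH
        EOF

    Running ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config olcRootPW will show which database entry currently holds a rootdn password on a given box.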

  • How can I call a URL as a cron job in Webmin?

    - by EmmyS
    (Possibly this belongs on Stack Overflow, although it's not really a programming issue since the code works when run directly. If it needs to be moved, though, no problem.) I have a PHP file (which consumes a National Weather Service web service via SOAP, if it matters) that I need to run on a scheduled basis. I'm trying to set up a cron job in Webmin. If I use an absolute path to the file in the Command field, when I run it I get some strange errors:

        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 1: ?php: No such file or directory
        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 2: //: is a directory
        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 3: //DOCUMENTATION: No such file or directory
        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 4: //: is a directory
        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 5: syntax error near unexpected token `"running client code",'
        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 5: `error_log("running client code", 1, "[email protected]");'

    The actual code in my file for those 5 lines looks like this:

        <?php
        // ***************************************************************************
        //DOCUMENTATION FROM WEATHER.GOV ALL STORED IN xmlClientComments.txt
        // ***************************************************************************
        error_log("running client code", 1, "[email protected]");

    The code runs perfectly fine when I run it directly in my browser, so why doesn't Webmin recognize it as code? (The same thing happens if I enter the actual URL in the Command field - http://mysite.com/test/ndfdXMLclient.php.) I've never worked with Webmin before; most of our hosts' cron control panels allow cron jobs to run PHP files like this with no issue. Is there some trick to getting Webmin to read PHP as actual PHP?
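
    Those errors are the shell trying to execute the PHP source line by line, which is what cron does with a bare file path. Two hedged ways to phrase the Command field instead, assuming the PHP CLI binary lives at /usr/bin/php (adjust to the real path, e.g. from which php):

        /usr/bin/php /var/www/html/mysite.com/test/ndfdXMLclient.php

    or fetch it over HTTP, which also keeps the web server's PHP configuration in play:

        wget -q -O /dev/null http://mysite.com/test/ndfdXMLclient.php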

  • How to download all file content from a folder using wget and http

    - by user1526912
    I am trying to use wget and http to download all contents from folderAA below to directory /root/sstest wget -r --directory-prefix="/root/sstest" -o /root/sstest2.log http://site.com/folder1/folder2/folderAA/ When I submit the above command nothing is downloaded. If I submit a wget request for a specific file from folderAA the file is actually downloaded to /root/sstest: wget -r --directory-prefix="/root/sstest" -o /root/sstest2.log http://site.com/folder1/folder2/folderAA/file.txt Can someone tell me why I cannot download all file content from folderAA at once using the first wget request?
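
    For comparison, a variant that often behaves better for "grab everything under this folder" jobs, assuming the server exposes an index page for folderAA (wget -r can only follow links it can see) and that three leading path components should be dropped:

        wget -r -np -nH --cut-dirs=3 --directory-prefix=/root/sstest \
             -o /root/sstest2.log http://site.com/folder1/folder2/folderAA/

    -np keeps the recursion from wandering up to folder2, and if nothing is downloaded even with this, the log will usually show whether the directory listing itself returned a 403/404 or an empty index.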

  • What is the ideal way to set up multiple FTP enabled web accounts on Fedora?

    - by Nicholas Flynt
    I'm setting up a test server for use as a web development platform, and I'd like to mimic as closely as I can a typical shared hosting setup. That is, I'd like my server to have multple user FTP accounts, each of which links to a directory containing the webroot of the site, and I'd like apache to be able to easily see and manupulate these files. I'll admit: I'm not as familiar with Fedora as I'd like, I run Ubuntu on my home box and SElinux is giving me some grief. My initial plan was to have each user FTP into their home directory, and put the web directory there as well, but SElinux throws a hissy fit when apache tries to access anything outside of its web directory, so that plan was a no go. Would it be wise to continue this route, and perhaps mount web directories in user home folders so that FTP could still be used to access them, even though apache saw them in var/www like it expects? Would it make more sense to set up custom FTP accounts and use a single FTP user on the server box? What's the general course of action on something like this? I'm using vsftpd right now to host web directories, which is why I'm liking the home directory approach (it's simple and secure) but of course there's bound to be a better way to go about it. Thanks. (I'll leave other things, like restricted DB access and such, to another post. I'm interested right now with just getting FTP and apache to play nice in a multi-user environment.) PS: For the record, an issue I ran into when doing all of this was that if apache isn't running as the same user as the FTP account is saving as, there are permissions errors when FTP creates files, requiring the remote user to chmod the files to fix it. A logical fix would be to run apache in a special group, put all web users in this group, and have FTP access default to giving this group read/write access to everything like apache would expect, but I never could figure out how to accomplish this. Bonus points and cake if you know a solution.
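
    On the PS about the shared group: a rough sketch of the usual arrangement, with placeholder names (webdev, alice) standing in for whatever you pick:

        groupadd webdev
        usermod -a -G webdev apache
        usermod -a -G webdev alice
        chgrp -R webdev /home/alice/public_html
        chmod -R g+rwX /home/alice/public_html
        find /home/alice/public_html -type d -exec chmod g+s {} \;
        # and in /etc/vsftpd/vsftpd.conf, so uploads arrive group-writable:
        #   local_umask=002

    The setgid bit keeps new files in the webdev group, and the umask keeps Apache (via the group) able to write them; SELinux contexts still need to be set with semanage/restorecon as usual on Fedora.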

  • Are my Linux symbolic links acting correctly?

    - by Andy Castles
    Hi all I've been using Linux on and off for the last 15 years and today I came across something in bash that surprised me. Setup the following directory structure: $ cd /tmp $ mkdir /tmp/symlinktest $ mkdir /tmp/symlinktest/dir $ mkdir /tmp/symlinktarget Now create two sym links in symlinktest pointing to symlinktarget: $ cd /tmp/symlinktest $ ln -s ../symlinktarget Asym $ ln -s ../symlinktarget Bsym Now, in bash, the following tab completion does strange things. Type the following: $ cd dir $ cd ../A[TAB] Pressing the tab key above completes the line to: $ cd ../Asym/ as I expected. Now press enter to change into Asym and type: $ cd ../B[TAB] This time pressing the tab key completes the link to: $ cd ../Bsym[space] Note that there is now a space after the Bsym and there is no trailing slash. My question is, why when changing from the physical directory "dir" to Asym it recognises that Asym is a link to a directory, but when changing from one sym link to another, it doesn't recognise that it's a link to a directory? In addition, if I try to create a new file within Asym, I get an error message: $ cd /tmp/symlinktest/Asym $ cat hello > ../Bsym/file.txt -bash: ../Bsym/file.txt: No such file or directory I always thought that symlinks were mostly transparent except to programs that need to manipulate them. Is this normal behaviour? Many thanks, Andy
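
    What is happening here is the gap between bash's logical working directory and the kernel's physical one: once you are inside Asym, the kernel resolves ".." against /tmp/symlinktarget, so "../Bsym" physically means /tmp/Bsym, which does not exist. That is why completion will not mark it as a directory and why the redirection fails; it is normal behaviour, not broken symlinks. A small illustration (a sketch to run, not output from the original session):

        cd /tmp/symlinktest/Asym
        pwd -L        # /tmp/symlinktest/Asym  (the logical path bash tracks)
        pwd -P        # /tmp/symlinktarget     (the physical path the kernel uses for ..)
        ls ../Bsym    # fails: /tmp has no Bsym
        cd -P .       # switch to the physical view; now .. behaves consistently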
