Search Results

Search found 28288 results on 1132 pages for 'home directory'.

Page 85/1132 | < Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >

  • I want to add a Quality Assurance domain. How do I handle DNS servers?

    - by Tim
    I'm advising a large client on how to isolate their dev and testing from their production. They already have one domain, let's say xyz.net, with the Active Directory domain as "XYZ01". I want to add a second domain, say QAxyz.net, and make its Active Directory domain "QA01". All development and QA servers would be moved to the QAxyz.net domain, and the machines would be part of the QA01 domain. Note: some of these servers will have the same name as the production servers, for testing purposes. I believe we would have separate DNS servers for each domain. If I am logged into the QA01 domain, to access the production domain I would qualify my access like so: \PRODSERVER.xyz.net login: XYZ01\username Do I need to add a forwarder to my QAxyz.net DNS server so that it can see xyz.net? Would I need to do the same to the xyz.net DNS server to see QAxyz.net? I don't know how to advise them on this. Does anyone have any other recommendations for isolating a QA domain? Many thanks in advance! Tim
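
    A minimal sketch of one common answer -- a conditional forwarder on each DNS server pointing at the other domain's DNS server. The IP addresses below are placeholders:

        rem On the QAxyz.net DNS server: forward lookups for xyz.net
        dnscmd /ZoneAdd xyz.net /Forwarder 10.0.1.10

        rem On the xyz.net DNS server: forward lookups for QAxyz.net
        dnscmd /ZoneAdd QAxyz.net /Forwarder 10.0.2.10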

    Read the article

  • Linux authentication via ADS -- allowing only specific groups in PAM

    - by Kenaniah
    I'm taking the samba / winbind / PAM route to authenticate users on our Linux servers from our Active Directory domain. Everything works, but I want to limit which AD groups are allowed to authenticate. Winbind / PAM currently allows any enabled user account in the Active Directory, and pam_winbind.so doesn't seem to heed the require_membership_of=MYDOMAIN\\mygroup parameter. It doesn't matter if I set it in the /etc/pam.d/system-auth or /etc/security/pam_winbind.conf files. How can I force winbind to honor the require_membership_of setting? Using CentOS 5.5 with up-to-date packages. Update: it turns out that PAM always allows root to pass through auth, by virtue of the fact that it's root. So as long as the account exists, root will pass auth. Any other account is subjected to the auth constraints. Update 2: require_membership_of seems to be working, except for when the requesting user has the root uid. In that case, the login succeeds regardless of the require_membership_of setting. This is not an issue for any other account. How can I configure PAM to force the require_membership_of check even when the current user is root? require_membership_of is currently set in the /etc/security/pam_winbind.conf file, and is working (except for the root case outlined above). The current PAM config is below:
        auth        sufficient  pam_winbind.so
        auth        sufficient  pam_unix.so nullok try_first_pass
        auth        requisite   pam_succeed_if.so uid >= 500 quiet
        auth        required    pam_deny.so
        account     sufficient  pam_winbind.so
        account     sufficient  pam_localuser.so
        account     required    pam_unix.so broken_shadow
        password    .....       (excluded for brevity)
        session     required    pam_winbind.so
        session     required    pam_mkhomedir.so skel=/etc/skel umask=0077
        session     required    pam_limits.so
        session     required    pam_unix.so
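
    A sketch of where require_membership_of normally lives; the group SID below is a placeholder, and specifying the SID (from wbinfo -n) rather than the name tends to be more reliable:

        # /etc/security/pam_winbind.conf (sketch)
        [global]
        # SID obtained with: wbinfo -n 'MYDOMAIN\mygroup'
        require_membership_of = S-1-5-21-0000000000-0000000000-0000000000-1234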

    Read the article

  • Error setting up Data Protection Manager 2010 Agents / Network "Unauthenticated" in network settings

    - by Bowsa
    I'm not sure if the two are connected, but I suspect they are. Basically I'm trying to set up Data Protection Manager 2010 on a fresh install of Server 2008 R2 in an SBS 2003 domain. Everything went fine until trying to install agents across the network. Upon clicking Add, I get the following error message: Unable to connect to the Active Directory Domain Services database. Make sure that the DPM server is a member of a domain and that the controller is running. Also verify that there is network connectivity between the DPM server and the domain controller. ID: 7 As usual (worryingly) the MSDN support for 2010 products is nearly nonexistent; clicking the error ID simply gives a page not found error. So after 2 days of Googling and trying various fixes (DNS settings, adding permissions to AD objects, rejoining the domain and many more) I thought I'd ask here in the hope that someone out there may have had this issue before. Any help greatly appreciated! Some further info: firewalls are disabled on the Server 2008, SBS, and client machines. Manually installing and adding the client also fails, as the DPM server tries to contact the DC first. Edit: I tried creating a new protection group instead, and it gives a different error upon adding the machines: The following machines are not found in AD: COMPUTERNAME.COMPANYNAME.LOCAL Is there a certain directory structure it follows in AD?
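
    Two quick checks worth running from the DPM server before digging deeper; the domain name below is a placeholder for the real SBS domain:

        rem Can this server locate a domain controller at all?
        nltest /dsgetdc:company.local

        rem Are the AD locator SRV records resolvable from here?
        nslookup -type=SRV _ldap._tcp.dc._msdcs.company.local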

    Read the article

  • IIS 7.5 401 - Unauthorized Access on a Virtual Directory

    - by Jimmy
    I have set up a website in IIS 7.5 on a Windows 2008 machine. The website is sitting on C:/websites/. Then I added a virtual directory called "/uploads" that points to "d:/websites/uploads". This directory holds all the images/media. When I browse the website in a browser, I don't see any images etc. When I browse an image directly I notice that it's throwing a 401 error. 401 - Unauthorized: Access is denied due to invalid credentials. I have searched Google quite a lot and I am pretty sure I have all the permissions set up correctly. Can anyone tell me what I could be doing wrong here?
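
    Since the virtual directory lives on a second drive, one likely culprit is NTFS permissions on D: rather than IIS itself. A sketch of granting read access, assuming default anonymous authentication with the built-in IUSR account:

        icacls "D:\websites\uploads" /grant "IIS_IUSRS:(OI)(CI)RX"
        icacls "D:\websites\uploads" /grant "NT AUTHORITY\IUSR:(OI)(CI)RX"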

    Read the article

  • Windows Domain Chaos - Any Approach to Solving It?

    - by Chake
    We are running an old Windows 2003 Server as Domain Controller (DC2003). To safely migrate to Windows 2008 R2 we added a 2008 R2 server (DC2008R2) to the domain as a domain controller (adprep etc.). After dcpromo on DC2008R2 everything seemed to be OK. The new DC appeared under the "Domain Controllers" node. It wasn't checked at this time whether DC2008R2 could REALLY act as a domain controller. Later we tried to shut down DC2003 and ran into a total mess with non-functional Exchange and Team Foundation Services. After that I got the job to fix it... First I thought it could be a problem with DC2008R2. So I removed it as a domain controller and installed a new Windows 2008 R2 server, DC2008R2-2. I ran into similar problems. I tried a bunch of stuff, but nothing helped. I won't list it; maybe I made a mistake, so I'm willing to redo it with your suggestions. To have a starting point I tried the Best Practices Analyzer, which ended up with 24 "Compatible" and 26 "Not Compatible" tests. Of these 26 tests, 19 read the same. (I'm translating from German, so that may not be the exact wording.) Problem: Using the Best Practices Analyzer for Active Directory Domain Services (AD DS BPA), no data can be gathered using the name of the forest and the domain controller DC2008R2-2. I appreciate any suggestions; this really bothers me.
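
    A starting point for this kind of mess is usually replication and FSMO health rather than the BPA output; a sketch of the standard checks, run from either DC:

        repadmin /replsummary
        dcdiag /v /c /e > dcdiag-report.txt
        netdom query fsmo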

    Read the article

  • Making one of the folders default in Apache

    - by OmerO
    Hello, the file & directory structure of my website is as follows:
        /Library/WebServer/mysite/joomla
        /Library/WebServer/mysite/wiki
        /Library/WebServer/mysite/forum
        /Library/WebServer/mysite/index.php
    As you see, there are various applications, each residing in a separate folder. Now, in order to define this structure, I have made this entry in the Apache http-vhosts.config file:
        ServerName mysite.com
        DocumentRoot "/Library/WebServer/mysite"
    And I already have the DirectoryIndex defined: DirectoryIndex index.html index.php, and so on. So far so good, but I want this specific functionality: when someone visits mysite, he/she should automatically be directed to /Library/WebServer/mysite/joomla (and therefore /Library/WebServer/mysite/joomla/index.php). I don't want to achieve that functionality by putting redirection code inside /Library/WebServer/mysite/index.php or /Library/WebServer/mysite/index.htm because that causes time delays (because of the redirection, of course). But in this case, the only proper way of achieving it seems to be to set DocumentRoot this way: DocumentRoot "/Library/WebServer/mysite/joomla" But when I set it that way, the other folders (/wiki, /forum, etc.) are simply not served by Apache. To work around it, I put directives like:
        Alias /wiki /Library/WebServer/mysite/wiki
        Alias /forum /Library/WebServer/mysite/forum
    and it did actually work the way I wanted. But... I still cannot use it that way because in this case I just couldn't manage to make the wiki use short URLs (as described in link text). So, I have to set the DocumentRoot back to /Library/WebServer/mysite and should be able to assign /Library/WebServer/mysite/joomla as the "default directory" (my own terminology :) Can I do it in Apache? Is there any other way you might suggest? Thanks.
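
    One possible direction is an internal rewrite instead of moving DocumentRoot, so the browser URL stays / while Apache serves /joomla/; a sketch for the vhost, untested against the wiki's short-URL rules:

        RewriteEngine On
        # serve /joomla/ for requests to the site root, with no external redirect
        RewriteRule ^/?$ /joomla/ [PT,L]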

    Read the article

  • Cannot set target directory when extracting an archive using tar

    - by palto
    I'm trying to extract a tar archive to a specific directory. I've tried using the -C flag but it doesn't work as expected. Here is the command line I'm using:
        tar xvf myarchive.tar -C mydirectory/
    This gives me the following error:
        tar: file -C: not present in archive
        tar: file mydirectory/: not present in archive
    I've also tried setting the -C flag before the archive file, but it just says this:
        tar xvf -C mydirectory/ myarchive.tar
        tar: -C: No such file or directory
    What am I doing wrong? EDIT: tar -tf shows that the tar archive does not have full path names:
        tar -tf myarchive.tar
        herareport/
        herareport/bin/
        ...
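
    Two sketches that usually sidestep this, assuming GNU tar is available for the second form:

        # extract from inside the target directory, avoiding -C entirely
        cd mydirectory/ && tar xvf ../myarchive.tar

        # or spell the options out so -C cannot be read as a member name
        tar --extract --verbose --file=myarchive.tar --directory=mydirectory/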

    Read the article

  • Migrating to Windows Server 2008 R2 Domain Controllers - a few Questions/Issues

    - by Chris
    Ok so here's our setup: we have 2 Windows 2003 domain controllers. I am trying to replace them with Windows 2008 R2. The 2003 servers are named DC01 and DC02. The 2008 R2 servers are DC1 and DC2. I prepared the Windows Server 2003 forest schema for a domain controller that runs Windows Server 2008 or Windows Server 2008 R2. Then, with both of the new servers up as member servers, I ran dcpromo on DC1 using the advanced option and added it successfully to my existing domain. Its roles are GC, DNS and Active Directory Domain Services. I transferred the PDC Emulator, RID Pool Manager, and Infrastructure Master roles to DC1. The Schema Master and Domain Naming Master are still on DC01. The first issue that I'm encountering is that when I dcpromo DC2, select "Replicate data over the network from an existing domain controller" and choose to replicate from DC1, I get the following error: Failed to identify the requested replica partner (dc1.xxx.org) as a valid domain controller with a machine account for (DC2$). This is likely due to either the machine account not being replicated to this domain controller because of replication latency or the domain controller not advertising the Active Directory Domain Services. Please consider retrying the operation with \dc01.xxx.org as the replica partner. "The server is unwilling to process the request." Is this because the Schema Master and Domain Naming Master roles are still on the old DC01? And if so, if I transfer the Schema Master and Domain Naming Master roles to DC1, what is the risk of breaking my AD? I'm a little paranoid because this process HAS to be transparent. ANY downtime or interruption will result in me getting a verbal ass kicking from my I.T. Director. Both of the new servers' DNS settings point to the old DNS servers (DC01 and DC02), not themselves, by the way.
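
    Transferring the two remaining roles before retiring DC01 is a normal, reversible step; a sketch of the ntdsutil session run on DC1 (older ntdsutil versions spell the second command "transfer domain naming master"):

        ntdsutil
          roles
            connections
              connect to server dc1.xxx.org
              quit
            transfer schema master
            transfer naming master
            quit
          quit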

    Read the article

  • Create FTP accounts with access to just some folders in the web directory

    - by Karevan
    I own a VPS server. At the moment I haven't installed any FTP server on it; I am using SSH and SFTP only. I am using Debian 6 Squeeze and the Apache2 service. The web directory is in /var/www/. Well, I wanted to create different FTP accounts and give some people access to them (one account per user). In my web directory I have a structure like this: /var/www/mtaplugins/music/mplayer/music/ /var/www/mapuploader/ and more folders inside. I want to create an FTP account which should be able to access just one of those folders and the folders inside it. I would appreciate some recommendations or steps to follow before installing or doing anything, because I don't have any experience with this. I was thinking of using ProFTPD, but as I saw in the documentation it would just create an account for each user on my server, and I don't want to create more system users (I always use root). Thanks in advance
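
    A sketch of the usual ProFTPD approach -- virtual FTP users stored in a passwd-style file, each jailed to one folder. The user name, uid/gid and paths below are placeholders:

        # /etc/proftpd/proftpd.conf
        DefaultRoot ~
        RequireValidShell off
        AuthUserFile /etc/proftpd/ftpd.passwd
        AuthOrder mod_auth_file.c

        # create one virtual user whose "home" (= jail) is the only folder they may see
        ftpasswd --passwd --file=/etc/proftpd/ftpd.passwd --name=mtauser \
                 --uid=33 --gid=33 --home=/var/www/mtaplugins --shell=/bin/false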

    Read the article

  • How to recover a www directory and included files in Ubuntu 9.04

    - by Al Mubarak
    Hi, I'm using Ubuntu 9.04 for Drupal development. This morning I accidentally removed my www folder. The folder has so many of my web development documents. I just restarted my system after it happened, and installed some recovery software like gpart. Is there any possibility of recovering my www directory and files? It includes most of my web development documents. I'm very worried about this issue, so please let me know as soon as possible. Thanks in advance,
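
    If the partition has not been written to much since the deletion, extundelete is the usual tool for ext3; a sketch, with /dev/sda1 standing in for the real partition and ideally run from a live CD:

        umount /dev/sda1
        extundelete /dev/sda1 --restore-directory var/www
        # recovered files are written under ./RECOVERED_FILES/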

    Read the article

  • "Directory index forbidden by Options directive" when deleting or renaming folders through webdav

    - by sandwiches
    I am trying to delete folders through WebDAV but all I get is a 403 on the client and "Directory index forbidden by Options directive" in the Apache error log. I enabled "Options Indexes" for the folder and I stopped getting the errors in either the client or the log, but I still can't rename or delete folders through WebDAV. Any ideas why I'm unable to edit folders through WebDAV? I am running WAMP, a default installation with Apache 2.2.17. I can connect, create files, delete files, rename them, etc. I can create folders but not delete or rename them once they're created. In the access log, whenever I try to delete, I get this: "DELETE /uploads/shahs HTTP/1.1" 301 243 In the error log, I get: Directory index forbidden by Options directive: The WebDAV client gives a 403 when trying to delete or rename folders. Once I added "Options Indexes", I stopped getting the error message in the Apache error log and the 403 on the WebDAV client, but now deleting or renaming does nothing. No error messages, but nothing happens at all.
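
    The 301 on the DELETE suggests mod_dir is redirecting the slash-less collection URL, which many WebDAV clients will not follow; a sketch of one thing to try in the DAV section (the path is taken from the example above):

        <Location /uploads>
            Dav On
            # stop the trailing-slash redirect that breaks DELETE/MOVE on collections
            DirectorySlash Off
        </Location>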

    Read the article

  • AFP/SSH stopped working on OS X Server

    - by churnd
    I have 3 Mac OS X servers all bound to AD, all configured in the Golden Triangle setup. All 3 are completely separate from each other in terms of services, but all reside on the same internal network and are all bound to the same Active Directory domain. Two are 10.5.x (latest updates) and one is 10.6.3. Last weekend, all 3 simultaneously stopped allowing Active Directory users access to certain services, specifically AFP & SSH. SMB still works fine on all 3. I asked the AD admin if anything changed, and he said "Yes, we made a change to user accounts to toughen up security", and suggested I use [email protected] instead of just username. This still didn't work. I have completely removed one of my servers from AD, and re-joined, but this didn't work either. I can do kinit from command line and get a Kerberos ticket. sudo klist -ke shows all services are configured to use the correct Kerberos principles. I have been scavenging the logs for any useful info. The AFP log just shows that I'm connecting and disconnecting. The DirectoryService.log shows stuff about misconfigured Kerberos hashes, but my research is showing that's not uncommon. /var/log/system.log isn't showing anything useful that I can see. I'm not sure where to go from here. Any help/ideas appreciated.
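
    Some quick checks from one of the affected servers, as a sketch rather than a fix (the user name is a placeholder):

        sudo dsconfigad -show            # is the AD bind / computer account still valid?
        id DOMAIN\\someaduser            # can Directory Services still resolve AD users?
        sudo killall DirectoryService    # restart Directory Services (10.5/10.6) and re-test AFP/SSH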

    Read the article

  • w2k3 AD DC Demotion fails with "no other AD DC for that domain can be contacted"

    - by Kstro21
    I have a small office with a single W2K3 SP2 DC (bad idea, but it is real). Now I want to do a clean install of that PC, so I got another one, installed W2K3 SP2, added it to the domain, ran dcpromo and set it to be a GC. Up to now everything was OK; then I tried to dcpromo the primary DC down, but it fails with: The box indicating that this domain controller is the last controller for the domain mydomain.com is unchecked. However, no other Active Directory domain controllers for that domain can be contacted. Do you wish to proceed anyway? If you click Yes, any Active Directory changes that have been made on this domain controller will be lost. So, I started to move all the roles to the new server as described here, and when all was OK with the roles, I tried doing the same, but got the same result. I tried moving the DNS to the new server, but it doesn't make a difference. I shut down the old server, then tried to log into a workstation, but it fails saying the domain is not available; I also couldn't add a new workstation to the domain, so I had to power on the old server again. So, if I successfully move all the roles and DNS to the new server: why does dcpromo give such a message on the old server? Why is the domain not available if I shut down the old server? If I successfully move all the roles and DNS to the new server, and I click Yes when dcpromo gives the warning on the old server, will I lose all users, computers, OUs, etc.? Am I missing some steps to make this work? Hope you can help me. Thanks
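
    Before forcing anything, it is worth confirming the old DC can actually replicate with the new one; a sketch of the checks, with forced removal strictly as a last resort once the new DC is known to be healthy:

        repadmin /showrepl
        dcdiag /test:replications /test:fsmocheck

        rem last resort only: force the demotion, then run ntdsutil "metadata cleanup"
        dcpromo /forceremoval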

    Read the article

  • SharePoint AD imported users are becoming sporadically corrupted, causing us to have to create a new account

    - by TrevJen
    SharePoint 2007 MOSS with AD imported users. All servers are 2008. I have around 50 users, and over the past 2 months a handful of them have suddenly become unable to log in to SharePoint. When they log in, they either get a blank screen or are reprompted. These users are using accounts that have been used for many months; sometimes the problem originates with a password change. In all cases, the user's account works on every other Active Directory authenticated resource (domain, Exchange, LDAP). In the most recent case, last night I was forced to delete a user ("John Smith") because of corruption. The original account name was jsmith. I deleted him from Active Directory, then deleted him from the profile list in SharePoint Shared Services. I could not find a way to delete him from the SharePoint user list, but I reran the import after recreating his account (renamed to "smithj" just to be sure). At first this did not work; the user could still access all other resources but SharePoint. Then, some 30 minutes later, it inexplicably started working. This morning the user changed passwords, which immediately broke the login on SharePoint again. I am at a loss on how to troubleshoot this.
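
    For MOSS 2007, one possible alternative to deleting and re-importing is remapping the existing SharePoint user to the recreated AD account; a sketch with placeholder logins:

        stsadm -o migrateuser -oldlogin MYDOMAIN\jsmith -newlogin MYDOMAIN\smithj -ignoresidhistory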

    Read the article

  • How to block a user in Apache httpd from accessing *.php files inside a directory, so the user can only access it using the directory name

    - by Oxi
    My requirement looks simple, but Googling has not helped me yet. I want to throw a 404 page at a user (not redirect to another folder or file) who is trying to access *.php files on my website. For example: when a client asks for www.example.com/home/ I want to show the content, but when the user asks for www.example.com/home/index.php I want to show a 404 page. I tried different methods; nothing worked for me. One of the things I tried is shown below:
        <Directory "C:/xampp/htdocs/*">
          <FilesMatch "^\.php">
            Order Deny,Allow
            Deny from all
            ErrorDocument 403 /test/404/
            ErrorDocument 404 /test/404/
          </FilesMatch>
        </Directory>
    Thanks in advance.
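
    One possible direction is mod_rewrite keyed on THE_REQUEST, so only URLs the client actually asked for are rejected while the internal DirectoryIndex lookup of /home/ still works; a sketch (the R=404 form needs a reasonably recent Apache -- on older builds [F] or [G] are the built-in alternatives):

        RewriteEngine On
        RewriteCond %{THE_REQUEST} \.php[?\s]
        RewriteRule .* - [R=404,L]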

    Read the article

  • Problems installing Windows service via Group Policy in a domain

    - by CraneStyle
    I'm reasonably new to Group Policy administration and I'm trying to deploy an MSI installer via Active Directory to install a service. In reality, I'm a software developer trying to test how my service will be installed in a domain environment. My test environment: Server 2003 Domain Controller About 10 machines (between XP SP3, and server 2008) all joined to my domain. No real other setup, or active directory configuration has been done apart from things like getting DNS right. I suspect that I may be missing a step in Group Policy that says I need to grant an explicit permission somewhere, but I have no idea where that might be or what it will say. What I've done: I followed the documentation from Microsoft in How to Deploy Software via Group Policy, so I believe all those steps are correct (I used the UNC path, verified NTFS permissions, I have verified the computers and users are members of groups that are assigned to receive the policy etc). If I deploy the software via the Computer Configuration, when I reboot the target machine I get the following: When the computer starts up it logs Event ID 108, and says "Failed to apply changes to software installation settings. Software changes could not be applied. A previous log entry with details should exist. The error was: An operations error occurred." There are no previous log entries to check, which is weird because if it ever actually tried to invoke the windows installer it should log any sort of failure of my application's installer. If I open a command prompt and manually run: msiexec /qb /i \\[host]\[share]\installer.msi It installs the service just fine. If I deploy the software via the User Configuration, when I log that user in the Event Log says that software changes were applied successfully, but my service isn't installed. However, when deployed via the User configuration even though it's not installed when I go to Control Panel - Add/Remove Programs and click on Add New Programs my service installer is being advertised and I can install/remove it from there. (this does not happen when it's assigned to computers) Hopefully that wall of text was enough information to get me going, thanks all for the help.
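
    Two things that usually narrow this down on a target machine: confirming the assignment is actually reaching it, and turning on verbose Windows Installer logging so the startup install leaves a trace (logs should then appear under %windir%\Temp\MSI*.log):

        gpupdate /force
        gpresult /v > gpresult.txt

        reg add "HKLM\Software\Policies\Microsoft\Windows\Installer" /v Logging /t REG_SZ /d voicewarmupx /f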

    Read the article

  • xf86OpenConsole: Cannot open /dev/tty0 (No such file or directory)

    - by mazgalici
        root@mazgalici:~# startx
        X.Org X Server 1.7.6
        Release Date: 2010-03-17
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 2.6.24-28-server i686 Ubuntu
        Current Operating System: Linux mazgalici 2.6.18-194.26.1.el5.028stab079.2PAE #1 SMP Fri Dec 17 19:34:22 MSK 2010 i686
        Kernel command line: quiet
        Build Date: 10 November 2010 11:25:26AM
        xorg-server 2:1.7.6-2ubuntu7.4 (For technical support please see )
        Current version of pixman: 0.16.4
        Before reporting problems, check to make sure that you have the latest version.
        Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.0.log", Time: Tue Jan 11 01:28:48 2011
        (==) Using config directory: "/usr/lib/X11/xorg.conf.d"
        Fatal server error:
        xf86OpenConsole: Cannot open /dev/tty0 (No such file or directory)
        Please consult the The X.Org Foundation support at http://wiki.x.org for help.
        Please also check the log file at "/var/log/Xorg.0.log" for additional information.
        ddxSigGiveUp: Closing log
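
    The 028stab kernel suggests an OpenVZ container, which normally has no physical console devices, so startx has nothing to open; a sketch of the two usual workarounds (the program name is a placeholder):

        # create the missing console node (may still not be enough inside a container)
        mknod /dev/tty0 c 4 0

        # or run against a virtual framebuffer instead of a real console
        apt-get install xvfb
        Xvfb :1 &
        DISPLAY=:1 someprogram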

    Read the article

  • Zsh, directory tab-completion with prefix

    - by nifty
    I have a directory where I put all my projects in, let's say it's ~/projects as an example. I've made a command called s which takes one argument, and moves me into that directory. E.g.: s foo moves me to ~/projects/foo. What I'd like is to have a completion command of some sorts, which would act like cd so I could do keep hitting tab to go further into the ~/projects/... directories. Basically, cd with a prefix which is always present. I've looked into zstyle completion in man zshcompsys, but realized I just don't know enough about it to understand it properly.
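
    A sketch of such a completion, assuming s itself just changes into ~/projects/$1:

        # in ~/.zshrc, after compinit
        s() { cd ~/projects/$1 }

        _s() { _files -W ~/projects -/ }   # offer directories under ~/projects; tab again to go deeper
        compdef _s s

        # a simpler alternative for plain cd: cdpath=(~/projects)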

    Read the article

  • Users are getting a temporary profile

    - by Serhiy
    A bit about the current setup: it is a Windows 2008 R2 AD environment (all DCs are 2008 R2) and a couple of locations which are set up as Sites. Each location has DFS on its AD server. Roaming profiles are not used or configured. Users have their home folder configured as a mapped S: drive to a DFS shared folder. For example, in the profile tab a user has: Home Folder - connect - S: to \\domain.com\dc\users\%username% We also have redirected the Desktop, Documents and Downloads folders to \\domain.com\dc\users. Everything was fine. Suddenly (today), users in most locations lost their local profile (on both XP and W7 desktops) and got temporary profiles. Also, it looks like the local profile was created today (from the folder properties). I checked the event logs on a couple of machines and there are no errors related to profiles or the logon process. I do not see issues in the event logs on the servers either. Basically, I have run out of ideas as to what is wrong and why the machines lost their local profiles. PS: Laptop users do not have their folders redirected, but lost their profiles as well.
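
    One common cause of temporary profiles is a stale or .bak-renamed entry under the ProfileList key; a sketch of the check to run on an affected machine:

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s | findstr /i "bak ProfileImagePath"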

    Read the article

  • One samba directory is very slow

    - by Tim Rosser
    We have a RHEL server running as a Samba server for our Windows network which has been running fine for ages. All of a sudden this morning one specific folder became really slow and sometimes inaccessible. It's the /home/ directory (containing all of the user-specific stuff, like their Windows desktops, documents etc.). It's not only really slow over the network, but when I try to use ls to view the directory it just hangs. I'm getting loads of messages like the following in /var/log/messages:
        Mar 20 09:53:32 zeus smbd[32378]: [2012/03/20 09:53:32, 0] smbd/service.c:set_current_service(184)
        Mar 20 09:53:32 zeus smbd[32378]:   chdir (/opt/shares/home/tim.rosser) failed
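
    The chdir() failure points at the filesystem under the share rather than Samba itself; a few quick checks on the server, as a sketch:

        dmesg | tail -50      # disk or ext3 errors?
        df -h && df -i        # out of space, or out of inodes?
        ls -f /home | head    # unsorted listing; if even this hangs, suspect the directory or the disk
        smbstatus | head      # stuck smbd processes holding locks?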

    Read the article

  • Deny directory browsing in a Proftpd / Ubuntu Installation

    - by skylarking
    I used this guide to set up a ProFTPD installation on an Ubuntu 8.04 server. It works well, but the generic user (userftp) can run ls and is able to change to any directory and browse freely on the server, from the root / upwards. I added the line /bin/false to /etc/shells in hopes that that would prevent this. I really only want the userftp account to be able to upload to the generic /home/FTP-Shared directory, and be able to do nothing else on the server. How is this accomplished? This is a headless Ubuntu box and I am using the CLI only, no GUI admin tools.
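
    /etc/shells only controls which shells are considered valid, not where an FTP user may browse; the usual ProFTPD answer is a chroot jail, sketched below on the assumption that userftp's home directory is /home/FTP-Shared as in the guide:

        # /etc/proftpd/proftpd.conf
        DefaultRoot ~
        RequireValidShell off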

    Read the article

  • Add directory to $PATH if it's not already there

    - by Doug Harris
    Has anybody written a bash function to add a directory to $PATH only if it's not already there? I typically add to PATH using something like: export PATH=/usr/local/mysql/bin:$PATH If I construct my PATH in .bash_profile, then it's not read unless the session I'm in is a login session -- which isn't always true. If I construct my PATH in .bashrc, then it runs with each subshell. So if I launch a Terminal window and then run screen and then run a shell script, I get: $ echo $PATH /usr/local/mysql/bin:/usr/local/mysql/bin:/usr/local/mysql/bin:.... I'm going to try building a bash function called add_to_path() which only adds the directory if it's not there. But, if anybody has already written (or found) such a thing, I won't spend the time on it.
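
    A sketch of such a function for .bashrc; it prepends the directory only when it is not already present:

        add_to_path() {
            case ":$PATH:" in
                *":$1:"*) ;;                     # already there, do nothing
                *) export PATH="$1:$PATH" ;;
            esac
        }

        # usage
        add_to_path /usr/local/mysql/bin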

    Read the article

  • Only allow the POST method for a specific file in a directory

    - by Dave Chen
    I have one file that should only be accessible via the POST method: /var/www/folder/index.php. The document root is /var/www/ and index.php is nested inside a folder. My configuration is as follows:
        <Directory "/var/www/folder">
          <Files "index.php">
            Order deny,allow
            Allow from all
            <LimitExcept POST>
              Deny from all
            </LimitExcept>
          </Files>
        </Directory>
    I visit my server at 127.0.0.1/folder but I can GET and POST the file just like normal. I've also tried reversing the order (Order allow,deny), and using Require, LimitExcept and Limit. How can I only allow POST requests to be processed by one file in a folder?
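
    A sketch of the same block with the blanket Allow from all removed -- under Order deny,allow that Allow wins for every method, which is why GET still gets through:

        <Directory "/var/www/folder">
            <Files "index.php">
                Order deny,allow
                <LimitExcept POST>
                    Deny from all
                </LimitExcept>
            </Files>
        </Directory>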

    Read the article
