Search Results

Search found 14339 results on 574 pages for 'domain rename'.


  • Can't access IIS 7 server URL from the same IIS 7 server.

    - by Kevin Raffay
    We have an intranet site, i.e. xxx.yyy.com, that users access by entering http://xxx.yyy.com. Our problems started when we migrated to IIS 7 running on a new 2008 server. We got rid of our single sign-on code and implemented a security model where we capture a user's domain credentials, which we then authenticate against a DB. In order to get the domain credentials passed to our ASP.NET app, we have the following settings: Anonymous Authentication: Disabled; ASP.NET Impersonation: Enabled; Basic/Digest/Forms Authentication: Disabled; Windows Authentication: Enabled. We allow "*" and deny "?" in the web.config. Browsing http://xxx.yyy.com from any client PC results in a domain login prompt, and if you enter a proper user/pwd, you can get in. However, browsing http://xxx.yyy.com while remoting into the server results in 3 domain login prompts and eventually a 401 error - unauthorized. We have traced this behavior to problems with our web site where we have pages doing "screen scraping" using an HttpRequest that calls a URL on the same server. When doing an HttpRequest from any other client, using a test harness that passes authorized credentials, all is good. So internal HttpRequest calls on the server fail, just like attempts to browse that server's URL from within a remote session. Why would a request to http://xxx.yyy.com on server xxx.yyy.com fail authentication?
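
    For reference, the authorization scheme described ("allow * / deny ?") normally looks like this in web.config; a minimal sketch assuming standard ASP.NET Windows authentication (rule order matters, since the first match wins):

        <configuration>
          <system.web>
            <authentication mode="Windows" />
            <identity impersonate="true" />
            <authorization>
              <deny users="?" />   <!-- reject anonymous requests -->
              <allow users="*" />  <!-- allow any authenticated user -->
            </authorization>
          </system.web>
        </configuration>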


  • Remotely managing Scheduled Tasks on another computer: Access Denied

    - by Eptin
    I need to remotely create new scheduled tasks from a Windows 7 computer in my company (which, according to this Microsoft TechNet article, I should be able to do: http://technet.microsoft.com/en-us/library/cc766266.aspx ). From within Task Scheduler, on the menu I click Action > Connect to Another Computer. I browse for the remote computer's name (I use Check Names to verify that the name is correct), then I check 'Connect as another user' and enter \Administrator and the local admin password. Whenever I try this, I get the error message "Task Scheduler: You do not have permission to access this computer". Firewall isn't the problem: I am able to use Remote Desktop with this username & password combo, so I would expect it to work when remotely managing as well. The remote computer has firewall exceptions for Remote Scheduled Tasks Management, Remote Service Management, and Remote Desktop, among other things. Heck, I even tried turning off the firewall for that individual computer and it still didn't work. More details: I have administrative remote access to several other Windows 7 Enterprise computers, though I log in as the local Administrator (whose administrative rights are only recognized by that local machine, not by the domain). The computer I am managing from is on the domain, and my own account's administrative rights are recognized on the domain. More experimentation: If I go the other way around, remote-desktop into the other machine, and from there open Task Scheduler then 'connect to another computer', I am able to connect back to my main computer using the username & password that is recognized as an administrator on the domain, and successfully schedule a task on my main computer. So it's not a company firewall issue that's preventing anything from working. The only permissions requirement Microsoft mentions is "The user credentials that you use to connect to the remote computer must be a member of the Administrators group on the remote computer". I'm logging in as an Administrator on each of the local machines, so why doesn't it work?
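
    A quick way to reproduce the same check outside the GUI is schtasks' remote options (a sketch; the host name and password are placeholders):

        schtasks /Query /S remotepc01 /U remotepc01\Administrator /P AdminPassword

    If this also returns "Access is denied", the problem is the remote connection/authorization itself rather than anything specific to the Task Scheduler console.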


  • ISA Server 2006 SSL Certificate Dilemma

    - by JohnyD
    I'm making great headway in offering our services over https with help from a Go Daddy certificate, later to be upgraded to Thawte SSL123 certs. But I've just run into one whopper of a problem. Here's my setup: I run an ISA 2006 firewall. Our web services are distributed over 2 servers. One is Windows 2000 (www.domain.com) and the other is Windows 2003 (services.domain.com). So, I'll need to purchase 2 certs for both www and services, import them into IIS6 on their respective machines, then export them with the private key (making sure to Include all certificates in the certification path if possible... that had me stumped for a while), and then finally import them into ISA's local computer Personal store. The problem I've just run into is that I have separate firewall rules for services.domain.com and www.domain.com, because requests need to be forwarded to different web servers. Each of these firewall rules uses the same httplistener. I have just found out that you can only use one certificate per httplistener. To make matters worse, you can only have a single httplistener per IP/port. Is this correct? I can only use a single certificate for a single IP address? This would seem to be a severe limitation. Am I wrong? If I'm not, then I've got a whole lot more work ahead of me, as I'll have to set up extra IPs, add them to the firewall's network interface, create new listeners using those IPs, etc... Can someone please confirm that I'm doing this correctly / incorrectly? Once I got my head wrapped around it all it seemed easy... then this. Thanks in advance.


  • Can't start Bind9 on Ubuntu 10.04 + Plesk 10.1 - "named: no process found"

    - by bradley.ayers
    I've installed a fresh copy of Ubuntu 10.04 64-bit; I didn't select BIND when choosing packages in the Ubuntu installer. I downloaded the auto-installer for Plesk 10.1 and installed it successfully. When I logged into the Plesk control panel and tried to change the password, it failed because it couldn't restart bind. I SSH'd into the box, tried sudo /etc/init.d/bind9 restart, and got the following: brad@ws01:/root# sudo /etc/init.d/bind9 restart * Stopping domain name service... bind9 WARNING: key file (/etc/bind/rndc.key) exists, but using default configuration file (/etc/bind/rndc.conf) rndc: connect failed: 127.0.0.1#953: connection refused named: no process found [ OK ] * Starting domain name service... bind9 [fail] Looking at tail /var/log/messages reveals a whole bunch of: Feb 23 16:08:21 ws01 kernel: [ 3840.065851] type=1503 audit(1298441301.831:31): operation="open" pid=5565 parent=5563 profile="/usr/sbin/named" requested_mask="::r" denied_mask="::r" fsuid=108 ouid=0 name="/var/named/run-root/etc/named.conf" Edit: After following ooshro's advice, bind runs; however, I still get the named: no process found error: brad@ws01:/etc/apparmor.d$ sudo /etc/init.d/bind9 restart * Stopping domain name service... bind9 WARNING: key file (/etc/bind/rndc.key) exists, but using default configuration file (/etc/bind/rndc.conf) named: no process found [ OK ] * Starting domain name service... bind9 [ OK ]
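
    The audit line in /var/log/messages is an AppArmor denial against /usr/sbin/named, which matches bind failing to read Plesk's chrooted config. A minimal sketch of the kind of override that addresses it, assuming your setup supports local profile additions (otherwise edit /etc/apparmor.d/usr.sbin.named directly; the path comes straight from the denial above):

        # /etc/apparmor.d/local/usr.sbin.named
        /var/named/run-root/** rw,

    followed by reloading the profile:

        sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.named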


  • bind9 "error sending response: host unreachable"

    - by wolfgangsz
    I have a number of DNS servers, all running bind9 (9.5.1, to be specific) under Fedora. Four of them are slaves, fed by a common master for our public DNS. These are all located on the public gateways of our various offices. One of them has tons of messages in its log files similar to these: Jul 21 17:26:18 gateway named[3487]: client 10.171.3.8#52500: view internal: error sending response: host unreachable I wonder where that comes from. The firewall is open on port 53 between the two machines (10.171.3.8 is an internal DNS server located on a Windows Domain Controller). The internal domains do NOT list the gateway as a name server (so there should not be any attempts to replicate the domains), and the gateway does not handle any internal DNS. The clients in these messages vary between the two domain controllers on the internal network and a third internal name server (running bind9 on Debian in a different segment of the network). Any pointers are highly welcome. In response to the first reply: The issue with this really is that tcpdump doesn't show any problems. Here is an extract from "tcpdump -i any port 53": 09:13:38.283308 IP valine.aminocom.com.61815 > ns-pri.ripe.net.domain: 14075 PTR? 166.225.58.95.in-addr.arpa. (44) 09:13:42.007410 IP gateway-eng.aminocom.com.37047 > alanine.aminocom.com.domain: 35410+ PTR? 12.3.172.10.in-addr.arpa. (42) At the same time, the DNS log shows: Jul 22 09:13:38 gateway named[3487]: client 10.171.3.6#61300: view internal: error sending response: host unreachable Jul 22 09:13:40 gateway named[3487]: client 10.172.3.12#56230: view internal: error sending response: host unreachable Jul 22 09:13:40 gateway named[3487]: client 10.171.3.8#55221: view internal: error sending response: host unreachable Jul 22 09:13:49 gateway named[3487]: client 10.171.3.8#51342: view internal: error sending response: host unreachable So clearly at 09:13:40 there were two unsuccessful attempts to connect to internal machines (10.172.3.12 and 10.171.3.8, both are DNS servers), but nothing in the tcpdump output.


  • Can't make virtual host work

    - by sica07
    I have to create a virtual host on a server which previously hosted a single website (domain name). Now I'm trying to add a second domain on this server (using the same nameserver). What I've done so far: Initially there was no virtual host, so I made one for the second domain: NameVirtualHost *:80 <VirtualHost *:80> DocumentRoot /var/www/bla ServerName www.blabla.com ServerAlias blabla.com <Directory /var/www/blabla> Order deny,allow Allow from all AllowOverride All </Directory> </VirtualHost> Because nothing happened, I changed the DocumentRoot of the Apache server to /var/www (initially it was the document root of the first website, /var/www/html) and created a virtual host for the first domain too: <VirtualHost *:80> DocumentRoot /var/www/html ServerName www.first.com ServerAlias first.com <Directory /var/www/html> Order deny,allow Allow from all AllowOverride All </Directory> </VirtualHost> In this case, first.com is working OK, but blabla.com is not. When I ping blabla.com I get the "unknown host" response. What am I doing wrong? Do I have to modify something in the DNS settings too? Thank you.
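
    Since ping blabla.com already fails with "unknown host", the request never reaches Apache; the missing piece is DNS, not the vhost. The new name needs records in the zone along the lines of this sketch (illustrative zone-file syntax; the IP is a placeholder):

        blabla.com.      IN  A      203.0.113.10
        www.blabla.com.  IN  CNAME  blabla.com.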


  • Issues resolving DNS entries for multi-homed servers

    - by I.T. Support
    This is difficult to explain, so bear with me. We have 2 domain controllers, each multi-homed to straddle 2 internal subnets (subnet A and subnet B) and provide DNS, DHCP, and LDAP authentication. Each domain controller has two DNS entries; both entries have identical host names but correspond to subnet A and subnet B respectively (example entries shown): dc1 host 192.168.8.1 dc1 host 192.168.9.1 dc2 host 192.168.8.2 dc2 host 192.168.9.2 We also have a third subnet for our DMZ (subnet C), which neither domain controller has an IP address on; our firewall/routing tables provide access to subnet A from subnet C and vice versa, but don't allow access to subnet B from subnet C. Here's my issue. How can I force/determine which DNS entry is used when a server on subnet C queries either domain controller by host name? Right now it seems to randomly pick one of the two entries, swap the name for the IP address, and that's that. The problem is that if it randomly selects the entry that corresponds to the 9.x subnet B (no access from subnet C), then the server fails to resolve. If it picks the entry for the 8.x subnet A, then it resolves (firewall/routing tables are defined for communication between these two subnets). Here's what I'd like to know: What are best practices (if any) for dealing with DNS resolution on subnets that the DNS servers don't have a presence on? Can I control something akin to a metric value to force an order of DNS resolution when there are multiple entries for the same host name that correspond to different IP subnets? Should I even have two DNS host entries for the same name? Here's what I'd like to avoid: Making edits to the HOSTS files of servers on subnet C to force DNS resolution of the hostname to the appropriate subnet. Adding NICs to the DCs to have them straddle the DMZ as well, thus obtaining a third DNS entry that corresponds to subnet C. Again, my apologies if this was too verbose / unclear. Thanks!
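
    One built-in knob worth knowing about here is netmask ordering on the Windows DNS server, which prefers A records whose subnet matches the querying client (a sketch; since subnet C clients sit on neither of the DCs' subnets, verify whether it actually changes the ordering for them before relying on it):

        dnscmd /Config /LocalNetPriority 1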


  • Problem with creation of scheduled tasks from IIS6 on Server 2003

    - by Morten Louw Nielsen
    Hi, I have also posted this question on Stack Overflow, but will also try here, since it might be more system-related. I am writing a web application using .NET. The web app creates scheduled tasks using the System.Diagnostics.Process class, calling SCHTASKS.EXE with parameters. I have changed the identity on the app pool to a specific domain user. The domain user is a local administrator on all four webservers. From webserver01 I am creating tasks on webserver01 through webserver04. It works perfectly for 3-5 days, but then it breaks. It gives me the following error message in a message box: "The application failed to initialize properly (0xc0000142). Click on OK to terminate the application." If I have the system in the broken state and I change the identity of the app pool to Domain Administrator, it works. As I change it back to my domain user, it breaks again. If I reboot the server, it works again for the same number of days, but will break again. It seems like a permission-related problem. I just don't understand why it works sometimes and sometimes doesn't. I hope someone out there has seen this problem! Looking forward to hearing from you! Kind regards, Morten, Denmark
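
    For context, the kind of call the web app is making boils down to something like this (a sketch; server, credentials, schedule, task name, and command are placeholders):

        SCHTASKS /Create /S webserver02 /U MYDOMAIN\taskuser /P secret /SC DAILY /ST 02:00 /TN "NightlyExport" /TR "C:\jobs\export.cmd"

    One hedged observation: 0xc0000142 is a process-initialization failure, and when processes spawned from a service identity start failing with it after days of uptime and a reboot fixes it, exhaustion of the non-interactive desktop heap is a classic suspect alongside permissions.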


  • Why do weekly tasks created via PowerShell using a different user fail with error 0x41306

    - by Danny Tuppeny
    We have some scripts that create scheduled jobs using PowerShell as part of our application. When testing them recently, I noticed that some of them always failed immediately, and no output is ever produced (they don't even appear in the Get-Job list). After many days of tweaking, we've managed to isolate it to any jobs that are set to run weekly. Below is a script that creates two jobs that do exactly the same thing. When we run this on our domain, and provide credentials of a domain user, then force both jobs to run in the Task Scheduler GUI (right-click - Run), the daily one runs fine (0x0 result) and the weekly one fails (0x41306). Note: If I don't provide the -Credential param, both jobs work fine. The jobs only fail if the task is both weekly and running as this domain user. I can't find information on why this is happening, nor think of any reason it would behave differently for weekly jobs. The "History" tab in the Task Scheduler has almost no useful information, just "Task stopping due to user request" and "Task terminated", both of which have no useful info: Task Scheduler terminated "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" instance of the "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" task. Task Scheduler stopped instance "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" of task "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" as request by user "MyDomain\SomeUser" . What's up with this? Why do weekly tasks run differently, and how can I diagnose this issue? This is PowerShell v3 on Windows Server 2008 R2. I've been unable to reproduce this locally, but I don't have a user set up in the same way as the one in our production domain (I'm working on this, but I wanted to post this ASAP in the hope someone knows what's happening!). Import-Module PSScheduledJob $Action = { "Executing job!" } $cred = Get-Credential "MyDomain\SomeUser" # Remove previous versions (to allow re-running this script) Get-ScheduledJob Test1 | Unregister-ScheduledJob Get-ScheduledJob Test2 | Unregister-ScheduledJob # Create two identical jobs, with different triggers Register-ScheduledJob "Test1" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Weekly -At 1:25am -DaysOfWeek Sunday) Register-ScheduledJob "Test2" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Daily -At 1:25am)


  • Bind9 zone files

    - by user42780
    Well, for the better part of the last two hours I've tried to figure out what is actually wrong, but I can't seem to find anything obvious. What I'm trying to do is set up my DNS for, say (per example), domain.com. This should include two NS records, namely ns1.domain.com and ns2.domain.com. Along with that there should be a mail record, as well as a CNAME record for www. I've been through roughly 20 how-tos in the last two hours, rewrote everything from scratch four times, and I still can't seem to find what's wrong. My only suspicions are two things: the error I get from the bind9 daemon when I stop the service, and the named.conf file. The error I get from the bind9 daemon when stopping the service is: * Stopping domain name service... bind9 rndc: connection to remote host closed This may indicate that * the remote server is using an older version of the command protocol, * this host is not authorized to connect, * the clocks are not synchronized, or * the key is invalid. I honestly don't know what this means, apart from the key defined in /etc/bind/rndc.key not being in the named.conf file (yes, I did try to add it, to no avail). Here are all the zone files and configuration files: http://208.77.101.5/bind9/ If anyone could help, it would be greatly appreciated.
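
    For reference, the usual way to wire rndc into named.conf is to include the generated key file and declare a matching control channel; a minimal sketch, assuming the Debian/Ubuntu default key name "rndc-key":

        include "/etc/bind/rndc.key";

        controls {
            inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; };
        };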


  • E-mail hosting provider that can set up aliases with wildcards

    - by Richard Downer
    I am looking for an e-mail hosting provider that allows e-mail aliases containing wildcards. In more detail: I own my own domain. I want an e-mail hosting provider to manage e-mail for my domain. Now, to help deal with spam, I often give different e-mail addresses to different organisations. These e-mail addresses always start with the same prefix, but then differ. So, for example, I might give out these e-mail addresses: [email protected] [email protected] [email protected] I want to be able to go to the e-mail provider's control panel and set up an e-mail alias like this: [email protected] -- bounce/discard (because this address has been sold to spammers) joe-*@sample.com -- redirect to [email protected] What I don't want to do is set up every single e-mail address individually (because I make them up whenever I need them), nor do I want to have a general catch-all for any unrecognised address in my domain (because I don't want to be carpet-bombed with spam when a spammer runs a dictionary attack against my domain name). Although this seems like a useful feature to have, it seems to be little known, and I've not seen anybody advertise it. My current hosting provider offers this, but I want to move away from them, so I need another provider that will continue to work with all the e-mail addresses I've been using for years. Alternatively, I could use mail server software that runs on Windows - I have seen some commercial packages offering this feature, but they cost more than I can afford - are there any suggestions for low-cost software packages?
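
    For the self-hosted route mentioned at the end, wildcard aliases are a one-liner with a regexp alias table; e.g. in Postfix (an illustrative sketch, not a provider recommendation):

        # main.cf
        virtual_alias_maps = regexp:/etc/postfix/virtual_regexp

        # /etc/postfix/virtual_regexp
        /^joe-.*@sample\.com$/    [email protected]

    Regexp tables are first-match, so a burned address like [email protected] can be listed above the wildcard and routed to a discard address.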


  • How to give a user NTFS rights to a folder, via Powershell

    - by Don
    I'm trying to build a script that will create a folder for a new user on our file server, then take the inherited rights away from that folder and add specific rights back in. I have it successfully adding the folder (if I give it a static entry in the script), giving domain admin rights, removing inheritance, etc... but I'm having trouble getting it to use a variable I set as the user. I don't want there to be a static user each time; I want to be able to run this script, have it ask me for a username, then have it go out and create the folder and give that same user full rights to that folder based on the username I've supplied it. I can use Smithd as a user, like this: New-Item \\fileserver\home$\Smithd -Type Directory But can't get it to reference the user like this: New-Item \\fileserver\home$\$username -Type Directory Here's what I have: Creating a new folder and setting NTFS permissions. $username = read-host -prompt "Enter User Name" New-Item \\fileserver\home$\$username -Type Directory Get-Acl \\fileserver\home$\$username $acl = Get-Acl \\fileserver\home$\$username $acl.SetAccessRuleProtection($True, $False) $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Administrators","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow") $acl.AddAccessRule($rule) $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain\Domain Admins","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow") $acl.AddAccessRule($rule) $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain\"+$username,"FullControl", "ContainerInherit, ObjectInherit", "None", "Allow") $acl.AddAccessRule($rule) Set-Acl \\fileserver\home$\$username $acl I've tried several ways to get it to work, but no luck. Any ideas or suggestions would be welcome, thanks.
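
    A hedged guess worth testing: with home$ sitting right next to $username, quoting the path removes any ambiguity about where variable expansion starts (inside double quotes, the $ before the backslash stays literal while $username still expands):

        $username = Read-Host -Prompt "Enter User Name"
        New-Item -Path "\\fileserver\home$\$username" -ItemType Directory
        $acl = Get-Acl -Path "\\fileserver\home$\$username"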


  • Netlogon errors

    - by rorr
    I have two instances of MSSQL 2005 and am using CA XOSoft replication. The master is a failover cluster and the replica is a standalone server. They are all running Server 2003 SP2 x64, with the same patch levels on all servers. This setup worked great for several months until we recently restricted the RPC ports on both nodes of the master (5000-6000, using rpccfg.exe). We have to implement egress filtering, thus the limiting of the ports. We began receiving login errors for SQL Windows authentication and NETLOGON Event ID 5719: This computer was not able to set up a secure session with a domain controller in domain due to the following: Not enough storage is available to process this command. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator. We also see group policies failing to update and cluster file shares going offline at the same time. The RPC ports were set back to default when we started seeing these problems, and the servers were rebooted, but the problems persist. The domain controllers are not showing any errors. Running dcdiag and netdiag shows everything is fine. We have noticed that the XOSoft service ws_rep.exe is using a lot of handles (8-9k), about the same number as sqlserver. As soon as XOSoft replication is stopped, the login errors cease and everything functions correctly. I have opened a ticket with CA for XOSoft, but I'm not sure that the problem is actually XOSoft; it may just be what brings the problem to light. I'm looking for tips on debugging RPC problems, specifically on limiting the ports and then reverting the changes.
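
    When verifying that the port restriction really was reverted, it's worth checking the registry key that RPC port restrictions live under (the value names below are the standard ones for this mechanism):

        reg query "HKLM\SOFTWARE\Microsoft\Rpc\Internet"

    The restriction is held in the Ports (REG_MULTI_SZ), PortsInternetAvailable, and UseInternetPorts values; deleting the whole Internet key and rebooting restores the default dynamic-port behavior.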


  • How do I use .htaccess conditional redirects for multiple domains?

    - by John
    I'm managing about 15 or so domains for a particular promotion. Each domain has specific redirects in place, as shown below. Rather than make 15 different .htaccess files that I would later have to manage separately, I'd like to use a single .htaccess file and use a symbolic link into each website's directory. The trouble is that I can't figure out how to make the rules apply only for a specific domain. Every time I visit www.redirectsite2.com, it sends me to www.targetsite.com/search.html?state=PA&id=75, when it should instead be sending me to www.targetsite.com/search.html?state=NJ&id=68. How exactly do I make multiple RewriteRules apply for a given domain and only that domain? Is this even possible to do within a single .htaccess file? Options +FollowSymlinks # redirectsite1.com RewriteEngine On RewriteBase / # start processing rules for www.redirectsite1.com RewriteCond %{QUERY_STRING} ^$ RewriteCond %{HTTP_HOST} ^www\.redirectsite1\.com$ # rule for organic visit first RewriteRule ^$ http://targetsite.com/search.html?state=PA&id=75 [QSA,R,L] RewriteRule ^PGN$ http://targetsite.com/search.html?state=PA&id=26 [QSA,R,NC,L] RewriteRule ^NS$ http://targetsite.com/search.html?state=PA&id=27 [QSA,R,NC,L] RewriteRule ^INQ$ http://targetsite.com/search.html?state=PA&id=28 [QSA,R,NC,L] RewriteRule ^AA$ http://targetsite.com/search.html?state=PA&id=29 [QSA,R,NC,L] RewriteRule ^PI$ http://targetsite.com/search.html?state=PA&id=30 [QSA,R,NC,L] RewriteRule ^GV$ http://targetsite.com/search.html?state=PA&id=31 [QSA,R,NC,L] # catch-all rule, using the same id as the organic visit RewriteRule ^([a-z]+)?$ http://targetsite.com/search.html?state=PA&id=75 [QSA,R,NC,L] # end processing rules for www.redirectsite1.com # begin rules for redirectsite2.com RewriteCond %{QUERY_STRING} ^$ RewriteCond %{HTTP_HOST} ^www\.redirectsite2\.com$ # rule for organic visit first RewriteRule ^$ http://targetsite.com/search.html?state=NJ&id=68 [QSA,R,L] RewriteRule ^SL$ http://targetsite.com/search.html?state=NJ&id=6 [QSA,R,NC,L] RewriteRule ^APP$ http://targetsite.com/search.html?state=NJ&id=8 [QSA,R,NC,L] # catch-all rule, using the same id as the organic visit RewriteRule ^([a-z]+)?$ http://targetsite.com/search.html?state=NJ&id=68 [QSA,R,NC,L] Thanks for any help you may be able to provide!
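
    The likely explanation, which matches mod_rewrite's documented behaviour: RewriteCond lines apply only to the single RewriteRule that immediately follows them, so every rule after the first one runs unconditionally for every host. For a shared file to work, the host condition has to be restated before each rule, along these lines:

        RewriteCond %{HTTP_HOST} ^www\.redirectsite2\.com$
        RewriteRule ^SL$ http://targetsite.com/search.html?state=NJ&id=6 [QSA,R,NC,L]

        RewriteCond %{HTTP_HOST} ^www\.redirectsite2\.com$
        RewriteRule ^APP$ http://targetsite.com/search.html?state=NJ&id=8 [QSA,R,NC,L]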


  • Squid, authentication, Outlook Anywhere, Windows 7 and HTTP 1.1 = NIGHTMARE

    - by Massimo
    I'm running a Squid proxy (latest version, 3.1.4) on Linux CentOS 5.4 with Samba 3.5.4, in order to allow authenticated web access for domain users; everything works fine, and even Windows 7 clients are fully supported. Authentication is transparent for domain users, while it is explicitly requested for non-domain ones, and it works if the user can provide valid domain credentials. All nice and good. Then, Outlook Anywhere kicks in and pain and suffering ensue. When Outlook (be it 2007 or 2010, it doesn't matter) runs on Windows XP clients, it connects gracefully through the Squid proxy to its remote Exchange server. When it runs on Windows 7, it doesn't. If the authentication requirement is lifted from the proxy, everything works on Windows 7 too, so the problem is obviously related to NTLM authentication with Squid. Digging more deeply (Wireshark), I discovered Outlook Anywhere uses HTTP 1.1 when it runs on Windows 7, while it uses HTTP 1.0 when on Windows XP. And it looks like Squid, even in its latest incarnation, still has some serious trouble handling HTTP 1.1 properly, particularly when SSL and proxy authentication are thrown into the mix. While waiting for Squid to fully and officially support HTTP 1.1 (and it looks like this could take quite a long time), I'm looking for one of the following solutions: Make Squid handle this correctly, if it is at all possible. Identify Outlook Anywhere connections and have Squid not require authentication for them. But it isn't easy: again, the behaviour of Outlook differs when running on Windows XP and Windows 7, and while on Windows XP Outlook sends a really nice user-agent string of "MSRPC", on Windows 7 it doesn't send any (why? WHY?!?). Force Outlook Anywhere to use HTTP 1.0 even when running on Windows 7. And no, this is not as simple as deselecting "use HTTP 1.1" in Internet Explorer; it looks like Outlook ignores that setting and chooses on its own which protocol to use. Any other feasible solution which doesn't involve whitelisting specific destination Exchange servers, which is the last-resort solution I'm trying to avoid.
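
    On option 2, one hedged idea: Outlook Anywhere is RPC over HTTP, which uses the distinctive RPC_IN_DATA and RPC_OUT_DATA request methods, so a method-based ACL may identify it even without a user-agent string. An untested sketch for squid.conf, placed before the rules that require authentication:

        acl rpc_http method RPC_IN_DATA RPC_OUT_DATA
        http_access allow rpc_http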


  • Mod_Perl configuration for multiple domains

    - by daliaessam
    Reading the mod_perl module documentation: can we configure it on a per-domain basis? That is, can we configure it to run for every domain, or for a specific domain only? What I see in the docs is: Registry Scripts To enable registry scripts add to httpd.conf: Alias /perl/ /home/httpd/2.0/perl/ <Location /perl/> SetHandler perl-script PerlResponseHandler ModPerl::Registry PerlOptions +ParseHeaders Options +ExecCGI </Location> and now assuming that we have the following script: #!/usr/bin/perl print "Content-type: text/plain\n\n"; print "mod_perl 2.0 rocks!\n"; saved in /home/httpd/httpd-2.0/perl/rock.pl. Make the script executable and readable by everybody: % chmod a+rx /home/httpd/httpd-2.0/perl/rock.pl Of course the path to the script should be readable by the server too. In the real world you probably want to have tighter permissions, but for the purpose of testing that things are working, this is just fine. From what I understand, this runs Perl scripts only from the one specific folder the directive points at. So the question again: can we apply this directive per domain, either for all domains or for a specific set of domains?
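
    In principle yes: the same directives can be scoped inside a <VirtualHost> block, giving each domain its own registry directory. A minimal sketch with placeholder names:

        <VirtualHost *:80>
            ServerName www.example-one.com
            Alias /perl/ /home/httpd/example-one/perl/
            <Location /perl/>
                SetHandler perl-script
                PerlResponseHandler ModPerl::Registry
                PerlOptions +ParseHeaders
                Options +ExecCGI
            </Location>
        </VirtualHost>

    Domains whose <VirtualHost> blocks omit the <Location> section would serve /perl/ as ordinary content, so mod_perl effectively becomes opt-in per domain.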


  • RRAS VPN on Windows 2k3 AD, can access RRAS server only

    - by nopsax
    I'm setting up a test lab and here is the current configuration: 192.168.86.201 - a Windows 2003 machine acting as PDC with AD/DNS/DHCP/WINS. 192.168.86.62 - a Windows 2003 machine acting as the RRAS server with IAS, also a file/print server. 192.168.86.6 - gateway/router to the internet. 192.168.86.21 - a Windows XP workstation. Everything works on the internal network: file/print, AD, etc. Whenever a user connects via VPN to the RRAS server remotely using their domain credentials, they are assigned an IP address from the 192.168.86.201 machine along with the WINS server address etc. The VPN user can then ping/access resources on the RRAS server, but cannot ping/access resources on any other machine by name or IP. However, if I ping by name, it does resolve to the correct IP address; there are just no replies. I did notice that on the RRAS server the 'internal' interface gets an IP address of 192.168.86.75 when a remote user connects, and the remote user is assigned, for example, 192.168.86.71. The RRAS server responds on both the .62 and .75 IP addresses. The client also unchecks the 'use remote default gateway' option. Also, I tried connecting a laptop to the physical network, joining the domain, then going remote and dialing the connection before domain login, and everything seems to work, e.g. browsable shares via Network Neighborhood. But I can't really join the domain remotely if I cannot access any other resources. I really need to monitor traffic to see what's happening to those packets but won't be able to until this weekend. Any help is appreciated; I will provide whatever configurations are needed.


  • Error 53 - The network path was not found.

    - by Jack
    I have a machine in my Active Directory domain that I can no longer "net view" from other machines in the domain. This is a Windows XP Pro machine. It is hosting a VMware virtual of my Domain Controller. If I attempt to net view [machine name], I get system error 53, "The network path was not found." This is not a DNS issue; the same thing happens with the machine's IP. I don't think it's a firewall issue; I turned the firewall off on this machine. As I mentioned, it has worked in the past and then stopped for no reason that I can see. I didn't (intentionally) change the software. I CAN get to the VMs hosted on this machine, can connect to their shares, net view them, etc. All other machines can see each other. In fact, the problem machine can see other machines and access their shares just fine. I tried removing the machine from the domain and re-adding it. I tried deleting the shares and recreating them. Not sure how to troubleshoot this any further. Any ideas?


  • GoDaddy SSL on Shared Hosting

    - by Jon
    So I'm very new to using SSL certificates, and I have been trying to install one on a site for a client. He is using shared hosting for multiple domains through GoDaddy, and the site we're working on is not the primary domain. He purchased a UCC certificate for multiple domains and I installed it on the shared hosting account. My thought was that since the domains were under the same hosting account, they would each be protected under the certificate. This was not the case... apparently. I checked both domains with an SSL checker; the primary domain checked out, but the domain that we wanted the SSL on showed the following errors: "None of the common names in the certificate match the name that was entered (www.CLIENTDOMAIN.com). You may receive an error when accessing this site in a web browser." I'm not sure how to fix this. It was just purchased yesterday, so if necessary I guess I could uninstall it or re-key it (???). Is there a way to just change the common name to www.CLIENTDOMAIN.com (the correct domain)?


  • Apache debugging: where to find error logs?

    - by AP257
    I'm new to Apache and web serving generally, so apologies if this is a very stupid question. I want to configure a new sub-domain on a working site and install a forum there. I'm using a Debian server that already has Apache, mod_wsgi and a bunch of virtual hosts successfully running on it. I first installed my forum app (Django's OSQA). Following the OSQA instructions, I then created an Apache config file that specified ServerName as the new sub-domain. I also created a .wsgi file for the app, and pointed WSGIScriptAlias at it. I then restarted Apache. However, when I go to the new sub-domain, I get a 404 error message. Two questions: Is there a step missing above? Or is simply creating a new Apache config file in sites-available enough to 'tell' Apache about a new sub-domain? If there's something else going wrong, how can I debug it? The ErrorLog and CustomLog specified in the config file are both blank. apache2.conf, which I guess is Apache-wide configuration, specifies ErrorLog /var/log/apache2/error.log, but this is yet another blank file.
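
    On Debian, creating the file in sites-available is not quite enough on its own; the site also has to be enabled and Apache reloaded, and Apache can report which virtual hosts it actually parsed (standard Debian/Apache tooling; the site name is a placeholder):

        a2ensite forum.example.com     # symlink from sites-available into sites-enabled
        apache2ctl configtest          # catch syntax errors before reloading
        apache2ctl -S                  # list the vhosts and ServerNames Apache knows about
        /etc/init.d/apache2 reload

    If the per-site logs stay empty, apache2ctl -S is also the quickest way to see whether requests for the sub-domain are being swallowed by a different vhost that logs elsewhere.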


  • Read access to Active Directory property (uSNCreated)

    - by Tom Ligda
    I have an issue with read access to the uSNCreated property when doing LDAP searches. If I do an LDAP search with a user that is a member of the Domain Admins group (UserA), I can see the uSNCreated property for every user. The problem is that if I do an LDAP search with a user (UserB) that is not a member of the Domain Admins group, I can see the uSNCreated property for some users (UserGroupA) and not for others (UserGroupB). When I look at the users in UserGroupA and compare them to the users in UserGroupB, I see a crucial difference in the "Security" tab. The users in UserGroupA have the "Include inheritable permissions from this object's parent" option unchecked. The users in UserGroupB have that option checked. I also noticed that the users in UserGroupA are users that were created earlier; the users in UserGroupB are users created recently. It's difficult to quantify, but I estimate the border in creation time between the users in UserGroupA and UserGroupB is about 6 months ago. What can cause user creation to default to having that security property checked as opposed to unchecked? A while back (maybe around 6 months ago?) I changed the domain functional level from Windows Server 2003 to Windows Server 2008 R2. Would that have had this effect? (I can't exactly downgrade the domain functional level to test it out.) Is this security property actually the cause of the issue with read access to the uSNCreated property on LDAP searches? It seems correlated, but I'm not sure about causation. What I want in the end is for all authenticated users to have read access to the uSNCreated property for all users when doing an LDAP search. I would also be OK if I could grant read access for that property to an AD group. Then I can control access by adding members to the group.
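
    If it does come down to missing property-level rights, granting read on a single attribute to a group is possible with dsacls; a hedged sketch (container, domain, and group names are placeholders, and the syntax is worth verifying against dsacls /? before running):

        dsacls "OU=People,DC=int,DC=example,DC=net" /I:S /G "MYDOMAIN\LDAP-Readers:RP;uSNCreated;user"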


  • How do I determine whether this email bounce is my fault?

    - by David Zaslavsky
    I use Google Apps to handle email for my personal website, so I have an email address [email protected] through that, and I also have a Gmail account [email protected]. Now, I've been trying to send emails to a particular recipient who shall be known as [email protected]. When I send the email from my Gmail account with the @gmail.com address, it works fine. However, when I send it from my Google Apps account with the @ellipsix.net address, I get a bounce message which includes the following text: Delivery to the following recipient failed permanently: [email protected] Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 554 554 mail server permanently rejected message (#5.3.0) (state 17). The bounce message suggests that it is up to the mail administrator of the recipient domain example.com to fix the problem, whatever it is. But I would like to be as sure as possible that nothing needs to be fixed on my end. I already have DKIM signatures enabled for my domain, and I have published an SPF DNS record. Is there something else I should check or do, or can I be confident that it's up to the recipient to fix this issue? Does the "state 17" in the bounce message mean something relevant? I've included my domain name in the question so people who know more than me about this stuff can independently check the relevant DNS records or other information. This other question seems similar, but I've already investigated everything suggested in the answers there (except for contacting Google, which I don't want to do unless I suspect it's their issue to fix).
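
    One sender-side check that costs nothing: confirm the published records actually look right from the outside (the DKIM selector below is an assumption; Google Apps commonly uses "google"):

        dig +short TXT ellipsix.net                      # the SPF record should appear here
        dig +short TXT google._domainkey.ellipsix.net    # DKIM public key, if the selector is "google"

    If both come back as expected, the "(state 17)" detail appears to be internal to the receiving server's SMTP conversation rather than something the sending domain can act on.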


  • Virtualmin & git integration

    - by weby3456
    I've installed Virtualmin on my VPS to manage my websites. It has worked perfectly, as expected, for nearly a year now. Recently I wanted to add some features to one of my sites, and I need git integration. I've correctly installed git & gitweb on my server, and I can create repositories and view them under http://sub.domain.com/git/gitweb.cgi Here is the current relevant directory tree: /home/user/domains/sub.domain.com/public_html/git/ drwxr-sr-x user user . drwxr-x--- user user .. -rw-r--r-- user user git-favicon.png -rw-r--r-- user user git-logo.png -rwxr-xr-x user user gitweb.cgi -rw-r--r-- user user gitweb.css drwxrwx--- apache user reponame.git /home/user/domains/sub.domain.com/public_html/git/reponame.git/ drwxrwx--- apache user . drwxr-sr-x user user .. drwxrwx--- apache user branches -rwxrwx--- apache user config -rwxrwx--- user user description -rwxrwx--- apache user HEAD drwxrwx--- apache user hooks drwxrwx--- apache user info drwxrwx--- apache user objects drwxrwx--- apache user refs But I have some questions: When I visit http://sub.domain.com/git/gitweb.cgi, the owner is listed as 'Apache'. Why? How can I change that? Usually, to create a new git repository, I'll do something like: $ mkdir proj $ cd proj $ git init Initialized empty Git repository in /home/user/proj/.git/ // here I'm creating the files or copying them from somewhere else $ git add *.php $ git add README $ git commit -m 'initial version' But after creating the repository in Virtualmin, I find a new dir named 'reponame.git' but not the '.git' dir. When I try to run any git command (e.g. git status) I receive "fatal: This operation must be run in a work tree". How can I work with that repository? Currently I need to explicitly grant access for users to be able to view the repositories via gitweb. How can I make certain repositories public?
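
    On the missing '.git' dir: reponame.git is a bare repository (a repository without a working tree), which is the normal layout for a hosted repo and exactly why git status reports "must be run in a work tree". You work against it through a clone; a sketch with placeholder paths:

        # clone the bare repo into a working directory
        git clone /home/user/domains/sub.domain.com/public_html/git/reponame.git ~/proj
        cd ~/proj
        # edit files, then:
        git add .
        git commit -m 'initial version'
        git push origin master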


  • VPN Error 691 but server says authenticated on server

    - by Andy
    Hello all, I have a problem with a VPN connection on Windows XP SP3 that appears to be related to an account (maybe privileges, or an option that I have missed). When connecting using my account, which is a domain administrator account, it will connect through the VPN fine. However, using an account created for another person, they receive Error 691: Username or Password is not valid for this domain. On the domain controller (Windows 2003) I see a logon successful message: User DOMAIN\user was granted access. Fully-Qualified-User-Name = int.company.net.au/People/Management/User NAS-IP-Address = 10.30.0.3 NAS-Identifier = not present Client-Friendly-Name = MelbourneCore Client-IP-Address = Router-ip Calling-Station-Identifier = not present NAS-Port-Type = Virtual NAS-Port = 77 Proxy-Policy-Name = Use Windows authentication for all users Authentication-Provider = Windows Authentication-Server = undetermined Policy-Name = Remote VPN Access Authentication-Type = MS-CHAPv1 EAP-Type = Does anyone have any ideas as to where else I should look for a solution? If I use the wrong password, it gives a logon failure error in the event viewer. Also, removing them from the remote access group gives a logon failure error. Nothing appears in the event viewer on the local machine. In the past, all that was required was to add them to our Remote Access Users group. Any help?


  • What are the minimal steps to set up a client-server network using Windows Server 2008 R2 Standard?

    - by Motivated Student
    Background: I have one server with Windows Server 2008 R2 Standard installed, but it has not been configured. This server has 2 LAN adapters. One adapter is connected to the ISP and the other is connected to a hub/switch. Other computers working as clients are connected to the same hub/switch as the server. IP printers, IP scanners, and IP cameras are also connected to the same hub/switch. Note: I am a newbie. I only know how to plug in RJ-45 connectors and assemble computer peripherals. I have no prior experience with Windows Server at all. Please teach me from a newbie's point of view. Objective: I want to establish the following: Each client can access the internet, printers, and scanners after it has been successfully authenticated by the server. Unauthenticated clients cannot access the internet, printers, etc. The server hosts a local site. Clients can browse internally using a private domain www.company.com. If the same domain name has been used by others on the internet, my private domain must override the public one.

