Search Results

Search found 18315 results on 733 pages for 'cross domain policy'.

Page 21 of 733

  • The Message Queuing service failed to join the computer's domain 'DOMAIN' Error 0xc00e0025

    - by SimonGoldstone
    Struggling to get MSMQ installed in Domain Integration mode on Windows 2012 (Azure). So far, I've provisioned a brand new Windows Server 2012 (R2) machine on the Azure platform, installed the Active Directory role and promoted the machine to a domain controller. Once AD was in place, I then added the MSMQ feature, along with the Directory Integration add-on. However, it will not install in AD integration mode; it will only work in Workgroup mode. I can verify this by running the following PowerShell command: New-MsmqQueue -Name Queue1 -QueueType Public When I run this command, I get the following error: New-MsmqQueue : A workgroup installation computer does not support the operation. The event viewer reveals two errors in the Application log: 1. The Message Queuing service failed to join the computer's domain 'DOMAIN'. Error 0xc00e0025: 2. Message Queuing was unable to create the msmq (MSMQ Configuration) object in Active Directory Domain Services. Error c00e0025h: I'm struggling here. Any advice?
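
    A couple of quick, read-only checks (a sketch only; the registry value name is from memory and worth verifying) can help confirm which mode MSMQ actually ended up in before digging into AD:

        # list the MSMQ sub-features and whether Directory Services Integration is installed
        Get-WindowsFeature MSMQ* | Format-Table Name, InstallState

        # MSMQ records workgroup mode in the registry; a value of 1 means it fell back to a workgroup installation
        Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\MSMQ\Parameters' -Name Workgroup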

    Read the article

  • Mod_rewrite display subdomain.domain.com and call domain.com/subdomain/ for SSL

    - by Jeff H.
    I have a website secured by a standard SSL certificate, securing a few different shops under different subdirectories. Ex. domain.com/shop1/ The shops are also accessible via a subdomain e.g. shop1.domain.com. What I'm trying to accomplish: display shop1.domain.com to the user, while keeping all of the actual server calls as domain.com/shop1, so that the secure pages will continue to work properly. (Not sure if I'm using the proper language, exactly, I hope my point is clear.) To be clear: my SSL is working fine, and I don't need help with that, and I don't need or want to purchase a UCC cert. It can't be that difficult for anyone with experience with Apache. (I've spent 3 hours trying to learn about mod_rewrite. It's just not clicking.) I'm on a GoDaddy secure shared server, so please keep in mind that I'm not able to reset the server or anything.
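
    For what it's worth, a minimal .htaccess sketch of the internal-mapping idea (shop1 and domain.com are stand-ins for the real names; GoDaddy shared hosting would need mod_rewrite enabled, which it normally is):

        RewriteEngine On
        # requests arriving on shop1.domain.com that are not already under /shop1/
        # are served internally from /shop1/ without changing the URL in the browser
        RewriteCond %{HTTP_HOST} ^shop1\.domain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/shop1/
        RewriteRule ^(.*)$ /shop1/$1 [L]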

    Read the article

  • Sending email from an alternative domain to protect my "core" domain from spam filters

    - by Jack7890
    I run a website (seatgeek.com) that sends a lot of transactional email to users--account updates, alerts, etc. It's important to us that our domain remains clean in the eyes of spam filters. We'd like to roll out an email marketing campaign. It's nothing particularly spammy, but this would be the first time we ever emailed people who hadn't expressly asked to receive email from us. It's to market a new product we built to a specific niche of professionals. In order to protect our domain in the eyes of spam filters, we're considering sending the marketing email from an alternative domain. The alternative domain hosts an alternative landing page we sometimes use for this new product. Is there any way this could backfire on us? Does it seem like a particularly poor idea?

    Read the article

  • HTTPS in sub domain redirects to main domain

    - by Amitabh
    We recently bought a wildcard certificate and installed it for a domain. It works fine for the main domain but does not seem to work at all for any sub-domains. What's happening is that we can access the sub-domains fine over HTTP, but whenever we try HTTPS for the same sub-domain URL we are redirected back to the main domain. So if I put up a test folder "httpstest" in a sub-domain with an index.html file in it, the following happens: mysubdomain.mywebsite.com/httpstest/index.html or mysubdomain.mywebsite.com/httpstest/ works perfectly fine with http://, but mysubdomain.mywebsite.com/httpstest/ or mysubdomain.mywebsite.com/httpstest/index.html does not work with https:// and redirects to the main domain. Any help on this is greatly appreciated. The site is not the main site used for setting up the VPS; it was added from WHM. Environment: we are on a Linux VPS, cPanel 11.30.6, Apache 2.2.22, PHP 5.3.13. The VirtualHost entry looks like:

        <VirtualHost xx.xx.xxx.xx:443>
            ServerName my-own-website.com
            ServerAlias www.my-own-website.com
            DocumentRoot /home/amitabh/public_html
            ServerAdmin [email protected]
            UseCanonicalName Off
            CustomLog /usr/local/apache/domlogs/my-own-website.com combined
            CustomLog /usr/local/apache/domlogs/my-own-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
            ## User amitabh # Needed for Cpanel::ApacheConf
            <IfModule mod_suphp.c>
                suPHP_UserGroup amitabh amitabh
            </IfModule>
            <IfModule !mod_disable_suexec.c>
                SuexecUserGroup amitabh amitabh
            </IfModule>
            ScriptAlias /cgi-bin/ /home/amitabh/public_html/cgi-bin/
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/my-own-website.com.crt
            SSLCertificateKeyFile /etc/ssl/private/my-own-website.com.key
            SSLCACertificateFile /etc/ssl/certs/my-own-website.com.cabundle
            CustomLog /usr/local/apache/domlogs/my-own-website.com-ssl_log combined
            SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
            <Directory "/home/amitabh/public_html/cgi-bin">
                SSLOptions +StdEnvVars
            </Directory>
            # To customize this VirtualHost use an include file at the following location
            # Include "/usr/local/apache/conf/userdata/ssl/2/amitabh/my-own-website.com/*.conf"
        </VirtualHost>

    Thank you. Update: I posted the same question in a Linux forum as well; I would really appreciate any pointers on it.
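
    The redirect-to-main-domain behaviour is what Apache does when no port-443 VirtualHost matches the sub-domain, so the request lands in the first (main) SSL vhost. Purely as a sketch (the paths and certificate file names below are placeholders, and on cPanel these vhosts are normally generated through WHM rather than edited by hand), a matching sub-domain vhost would look roughly like:

        <VirtualHost xx.xx.xxx.xx:443>
            ServerName mysubdomain.mywebsite.com
            DocumentRoot /home/amitabh/public_html/mysubdomain
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/wildcard.mywebsite.com.crt
            SSLCertificateKeyFile /etc/ssl/private/wildcard.mywebsite.com.key
            SSLCACertificateFile  /etc/ssl/certs/wildcard.mywebsite.com.cabundle
        </VirtualHost>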

    Read the article

  • McAfee ePolicy-Orchestrator (ePO) - policy ownership by groups?

    - by bkr
    Is there a way to grant ownership of an ePO policy to a group? Alternatively, is there a permission that can be set that would allow owners of an ePO policy to add other owners to that policy without making them ePO admin? In the case I'm looking at, ePO is deployed within a large heterogeneous organization with a large amount of delegation in the form of create/modify policy rights to allow multiple IT departments to customize to their needs for their sections of the system tree. The problem is that the policies are owned by the creator of the policy. This causes problems when they leave (staff turnover) or when other people on their teams need the ability to modify the existing policy. Unfortunately, as far as I can see, only someone who is an ePO admin can change the owners. Even the owner of the policy cannot add other owners (unless they are also an ePO admin). Ideally, I should be able to assign ownership of a policy to a group - since that would be easier to manage than me or another admin having to continually fix policy ownership or remove orphaned policies. Even just allowing the owners of the policies to add other owners would be sufficient. How are other people handling policy ownership when dealing with a large amount of delegated control of policies? Is there a way to delegate this out without making users full ePO admins?

    Read the article

  • New AD-DC in a new Site is refusing cross-site IPv4 connections

    - by sysadmin1138
    We just added a new Server 2008 (sp2) Domain Controller in a new Site, our first such config. It's over a VPN gateway WAN (10Mbit). Unfortunately it is displaying a strange network symptom. Connections to the SMB ports (TCP/139 and TCP/445) are being actively refused... if the connection is coming in on pure IPv4. If the incoming connection is coming by way of the 6to4 tunnel those connections establish and work just fine. It isn't the Firewall, since this behavior can be replicated with the firewall turned off. Also, it's actually issuing RST packets to connection attempts; something that only happens with a Windows Firewall if there is a service behind a port and the service itself denies access. I doubt it's some firewall device on the wire, since the server this one replaced was running Samba and access to it from our main network functioned just fine. I'm thinking it might have something to do with the Subnet lists in AD Sites & Services, but I'm not sure. We haven't put any IPv6 addresses in there, just v4, and it's the v4 connections that are being denied. Unfortunately, I can't figure this out. We need to be able to talk to this DC from the main campus. Is there some kind of site-based SMB-level filtering going on? I can talk to the DC's on campus just fine, but that's over that v6 tunnel. I don't have access to a regular machine on that remote subnet, which limits my ability to test.

    Read the article

  • 1 domain, 2 servers and 2 applications

    - by basit.
    I have a site like twitter.com on server one, and on server two I have a forum whose path is domain.com/forum. On server one I wanted to implement wildcard DNS and put the main domain on it, but on server two I wanted to keep the forum separate. I can't give it the sub-domain forum.domain.com, because all its links are already indexed in search engines and point back to domain.com/forum. So I was wondering: how can I put the domain and wildcard DNS on server one and still serve domain.com/forum (as a sub-folder) from server two? Any ideas? Do you think .htaccess can do that job? If yes, then how?
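
    If mod_proxy is available on server one (often not the case on shared hosting), a sketch of the reverse-proxy approach in server one's VirtualHost would be along these lines (the server-two hostname is a stand-in):

        # keep domain.com/forum working while the forum actually lives on server two
        ProxyPass        /forum http://server-two.internal.example/forum
        ProxyPassReverse /forum http://server-two.internal.example/forum

    .htaccess alone can only issue redirects to another host (which would change the visible URL), so the proxying has to happen in the main server config, or via a RewriteRule with the [P] flag with mod_proxy loaded.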

    Read the article

  • Accessing resources on localhost using domain credentials

    - by jas
    I'm trying to set up Team Foundation Server 2010, SharePoint Server 2010 and Report Server 2008 R2. I apologize for how long my question/problem is, but I'm really lost on where to even look, so I am being as descriptive as possible in the hope that I'm making sense. The goal: since developers can be inside or outside the firewall, there needs to be a single HTTP point of entry to TFS that works regardless of which side of the firewall you are on, and it needs to work with external access to SharePoint and Report Server. Meaning we have it set up in DNS so buildserver.mydomain.com points to the build service box which contains all of the services listed at the top of this post, and specific services are defined/located by the port number. This is working great on every machine inside and out except for from the build server itself. All services must be able to work using external URLs. If I use http://buildserver.mydomain.com:4800/tfs (the external URL) from my notebook, which is behind the firewall, I'm able to log in with my domain credentials as expected. If the other developer points to the same URL from their home, which isn't on the domain, they are also able to log in using their domain credentials. However, if I am directly on buildserver and call SharePoint, TFS or Reporting Server from the server itself using the external URL (i.e. http://buildserver.mydomain.com:4800), I am prompted for a username and password. Entering my domain credentials results in another prompt to enter my credentials again. It will prompt three times regardless of which credentials are used (I have rights as a domain admin) and then, after the third prompt, directs me to a blank white page as though access was denied. There are no errors displayed on the page and nothing ends up in the event viewer. From buildserver, if I use just the host name (the internal URL), then I'm prompted a single time for credentials and it works, i.e. http://buildserver:4800/tfs works from the server itself. The behavior is identical for any service requiring authentication. Meaning from the box itself SharePoint Central Admin, the SharePoint WebApp, TFS, TFS Web Access, Report Server and Report Manager all fail using the external URL but will succeed if called using the internal URL. So the problem comes into play when configuring all of the services to work together. The only way to configure TFS is locally from the server, which means I must point to the internal Report Server URLs (http://buildserver:4800/reports and reportServer respectively, instead of http://buildserver.domainname.com:4800 like they need to be), since external URLs aren't working from the server itself. If I configure TFS to use the internal URL for Report Server, then creating team projects or working in the SharePoint site for the team project fails for anyone not inside the domain, since their machines have no idea who http://buildserver/reports even is or how to resolve it. I have configured SharePoint with Alternate Access Mappings as well as set up Report Server to listen for external URLs. The external URLs simply aren't working when called from the server itself. I hope this makes sense. Thanks for taking the time to read this rather verbose plea for help.
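
    For what it's worth, being prompted three times and then dumped on a blank page only when using the external FQDN from the server itself is often the signature of the Windows loopback check. A minimal sketch of one commonly cited workaround is below; the registry path is standard, but whether a blanket disable is appropriate here (versus the more targeted BackConnectionHostNames approach) should be weighed first:

        # disable the loopback check so the server accepts its own external FQDN
        # (sketch only; consider BackConnectionHostNames instead of a blanket disable)
        New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" `
            -Name DisableLoopbackCheck -PropertyType DWord -Value 1 -Force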

    Read the article

  • CrossDomain error

    - by Jayesh
    Hi, I have hosted my Silverlight application in IIS. Now when I try to access the application I get the following error: System.ServiceModel.CommunicationException: an error occurred while trying to make a request to URI ... This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services... I have placed the cross-domain policy properly in wwwroot as well as in the virtual directory:

        <?xml version="1.0"?>
        <cross-domain-policy>
            <allow-http-request-headers-from domain="*" headers="*"/>
        </cross-domain-policy>

    Please help! Thanks
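
    For reference, the file Silverlight looks for first is clientaccesspolicy.xml at the root of the service's domain; crossdomain.xml is only consulted as a fallback and then needs an allow-access-from element. A permissive sketch of clientaccesspolicy.xml (tighten the domain list for production) looks like this:

        <?xml version="1.0" encoding="utf-8"?>
        <access-policy>
          <cross-domain-access>
            <policy>
              <allow-from http-request-headers="SOAPAction">
                <domain uri="*"/>
              </allow-from>
              <grant-to>
                <resource path="/" include-subpaths="true"/>
              </grant-to>
            </policy>
          </cross-domain-access>
        </access-policy>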

    Read the article

  • GPO refresh error - Policy Refresh has not completed in the expected time. Exiting...

    - by Albert Widjaja
    Hi all, I'm having a problem with GPO changes that I'd like to force out to my terminal server users. Here's what I've done: I made the necessary changes on one of the domain controllers to disable the GPO which applies to my Terminal Server user OU, and then went to the Terminal Server console (mstsc /admin) to perform the GPO refresh using the /force parameter. However, I got this error instead:

        C:\Documents and Settings\Administrator>gpupdate /force
        Refreshing Policy...
        User Policy Refresh has not completed in the expected time. Exiting...
        User Policy Refresh has completed.
        Computer Policy Refresh has not completed in the expected time. Exiting...
        Computer Policy Refresh has completed.

    But the changes still have no effect when I log in to the terminal server. Is there any way to make them take effect immediately? Thanks

    Read the article

  • Developing and Enforcing a BYOD Policy

    - by Darin Pendergraft
    On October 23, SANS released Part 1 of their Mobile Access Policy Survey (webcast link) and Part 2 was presented on October 25th (webcast link). Join us this Thursday, November 15th, as SANS and Oracle present a follow-up webcast that will review the survey findings and present guidance on how to create a mobile access policy for employee-owned devices, and how to enforce it using Oracle IDM. Click this link to register: Developing and Enforcing a BYOD Policy. This will be an excellent opportunity to get the latest updates on how organizations are handling BYOD policies and managing mobile access. We will have three speakers: Tony DeLaGrange, a security expert from Secure Ideas, will review the main findings of the SANS Mobile Access Survey; Ben Wright, a SANS instructor, attorney and technology law expert, will present guidance on how to create a BYOD policy; and Lee Howarth, from Oracle Product Management, will review IDM technology that can be used to support and enforce BYOD policies. Join us Thursday to hear about best practices and to get your BYOD questions answered.

    Read the article

  • Unlocking High Performance with Policy Administration Replacement

    - by helen.pitts(at)oracle.com
    It is clear the insurance industry is undergoing significant changes as it consolidates and prepares for growth. The increasing focus on customer centricity, enhanced and speedier product development capabilities, and compliance with regulatory changes has forced companies to rethink well-entrenched policy administration processes. In previous Oracle Insurance blogs I've highlighted industry research pointing to policy administration replacement as a top IT priority for carriers. It is predicted that by 2013, the global IT spend on policy administration alone is likely to be almost 22 percent of the total insurance IT spend. To achieve growth, insurers are adopting new pricing models, enhancing distribution reach, and quickly launching new products and services, all of which depend on agile and effective policy administration processes and technologies. Next month speakers from Oracle Insurance and Capgemini Financial Services will discuss how insurers can competitively drive high performance through policy administration replacement during a free, one-hour webcast hosted by LOMA. Roger Soppe, Oracle senior director, Insurance Strategy, together with Capgemini's Lars Ernsting, leader, Life & Pensions COE, and Scott Mampre, vice president, Insurance, will be the speakers. Specifically, they'll be highlighting: how replacing a legacy policy administration system with a modern, flexible platform optimizes IT and operations costs, creates consistent processes and eliminates resource redundancies; how selecting the right partner, with the best blend of technology, operational, and consulting capabilities, is an important pre-requisite to unlock high performance from policy administration transformation and achieve product, operational, and cost leadership; and the value of outsourcing closed block operations. We look forward to your participation on Thursday, July 14, 11:00 a.m. ET. Please register now. Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article

  • How can I move a load of zone records from a web-based system to a text-based one?

    - by Chris Adams
    Hi there, I have a few domains with Dreamhost where I have set up a load of records using their web-based DNS management system, and I'd like to move them to another provider that lets me enter the info directly as a text file for their name server (BIND 9) to use. (If you're interested, I'm moving them to Gandi.net.) Previously, when I used a cPanel-based system to do something similar, there was a tool that let me simply enter a domain name, and any available records were automagically entered into the system, saving me typing them myself (and bringing down sites with silly typos in the process). What open source tool can I use to query a domain for all the relevant subdomains and records and list them in a format like a zone file that I can use with other name servers?
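
    Not an answer to the tooling question as such, but a minimal sketch of how the raw records can be pulled with dig, assuming the provider permits zone transfers (many hosted DNS services refuse them; the name server and domain below are stand-ins):

        # try a full zone transfer into a zone-file-like listing (frequently refused by hosted DNS)
        dig @ns1.dreamhost.com mydomain.example AXFR > mydomain.example.zone

        # fall back to querying the common record types one at a time
        for t in A AAAA CNAME MX NS TXT SRV; do
            dig +noall +answer mydomain.example "$t"
        done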

    Read the article

  • Sharepoint Expiration Policy not working

    - by spano
    I have a SharePoint 2010 list with a custom content type that has an associated retention policy. The policy consists of a custom formula and then the Send to Recycle Bin action. However, I realized that the items were not being deleted. I verified the list settings and the retention policy was configured. I ran the Expiration Policy job several times, but no items were deleted and no errors were found in the logs. I also added logging to the custom formula, but no logging was found either. Finally, I found...(read more)

    Read the article

  • Keeping it Legal - Privacy Policy & PCI Compliance

    Among the most overlooked aspects of a website are the legal disclaimers, such as the Privacy Policy and Terms of Use. This article is designed to help you put together these important web documents to keep you in compliance with federal law as well as Google's (and other search engines') best practices. Privacy Policy: The Privacy Policy is extremely important.

    Read the article

  • How do I cross-compile my application for Ubuntu 12.04 armhf architecture on a Ubuntu 12.04 i386 host?

    - by Jonathan Cave
    I have a large application that I have written. I can successfully compile the application in the following scenarios:
      - in a native compilation for the i386 host running Ubuntu 12.04
      - natively on a PandaBoard running Ubuntu 12.04 (this takes a long time)
      - using QEMU and a chroot on the host PC for the armhf PandaBoard target (this takes a very long time)
    I would like to cross-compile the application on the i386 host to run on a target such as the PandaBoard, so that builds complete in a timely fashion. So far, attempts made using the arm-linux-gnueabihf toolchain in the repositories have produced binaries that do not run correctly. At this stage, I have no plans to package the software. What is the recommended way to achieve a successful cross-compile?
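
    As a rough sketch only (the package name and flags are the stock Ubuntu ones and are assumptions here, not taken from the post; on 12.04 the toolchain may come from universe or a toolchain PPA), a straight cross-build with the distro toolchain looks something like:

        # install the armhf cross toolchain on the i386 host
        sudo apt-get install g++-arm-linux-gnueabihf

        # compile with the hard-float ABI that the PandaBoard/armhf userland expects
        arm-linux-gnueabihf-g++ -O2 -march=armv7-a -mfpu=neon -mfloat-abi=hard \
            -o myapp main.cpp

        # sanity-check the result; it should report an ARM, hard-float ELF binary
        file myapp

    If the build links against third-party libraries, armhf versions of those libraries also need to be visible to the cross linker, which is often where "binaries that do not run correctly" come from.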

    Read the article

  • Servers at remote sites vs. centralized servers?

    - by Boden
    Looking for some opinions here. We've got three physical locations and site-to-site VPN between all three. Currently we've got Windows domain controllers at each location, with roughly 50 clients at each. The domains are currently separate, and we're looking at integrating the three sites. Email (Exchange) will be located at the primary site, and RDP is already being used at the secondary branches to hit the app servers also located at the primary site. The bulk of the local user load at the other two sites is just file sharing. What would the main benefits and drawbacks be of replacing the local domain controllers with NAS devices, and only keeping the domain controller(s) at the primary site? (Assuming upgrades are coming regardless.) Under what circumstances would you choose one setup over the other?

    Read the article

  • HTG Explains: What Group Policy Is and How You Can Use It

    - by Chris Hoffman
    Group Policy is a Windows feature that contains a variety of advanced settings, particularly for network administrators. However, local Group Policy can also be used to adjust settings on a single computer. Group Policy isn't designed for home users, so it's only available on Professional, Ultimate, and Enterprise versions of Windows.

    Read the article

  • How to explain that writing universally cross-platform C++ code and shipping products for all OSes is not that easy?

    - by sharptooth
    Our company ships a range of desktop products for Windows, and lots of Linux users complain on forums that we should have written versions of our products for Linux years ago, and that the reason we don't do that is:
      - we're a greedy corporation
      - all our technical specialists are underqualified idiots
    Our average product is something like 3 million lines of C++ code. My colleagues' and my analysis is the following:
      - writing cross-platform C++ code is not that easy
      - preparing a lot of distribution packages and maintaining them for all widespread versions of Linux takes time
      - our estimate is that the Linux market is something like 5-15% of all users, and those users will likely not want to pay for our effort
    When this is brought up, the response is again that we're greedy underqualified idiots and that when everything is done right all this is easy and painless. How reasonable are our evaluations that writing cross-platform code and maintaining numerous distribution packages takes a lot of effort? Where can we find some easy yet detailed analysis with real-life stories that show beyond the shadow of a doubt what amount of effort exactly it takes?

    Read the article

  • General website publishing questions involving a domain forwarding issue

    - by Gorgeousyousuf
    Even though I have a certain level of knowledge and experience in web development, I had never been interested in obtaining a domain and publishing a website from my own server. Since today I have been struggling with getting my own domain and configuring it using web resources. I started by learning the outline of the web publishing process, including web server installation, deploying a website for testing purposes, router port forwarding, getting a domain and forwarding the domain to my router, which will also forward HTTP requests to my web server. I am confused about some parts and so far could not get the website accessed from outside of the network. Everything I am trying to do is just for learning purposes, so I am not paying much attention to security issues for now. I have Server 2008 and IIS 7.5 installed. I use a laptop and have access to the modem over wireless, and my modem is a Zoom X6 5590. I will continue by explaining what I have done so far and what I expect to happen after each step. I have successfully had access to my website on any local computer by entering the internal IP address and port pair of the host machine in a browser. Next, I forwarded port 80 of my host machine by creating a virtual server like 10.0.0.x (internal (static) IP of the host) - TCP - start port: 80 - end port: 80 in the router options. Now I suppose every request that comes to the public IP on port 80 will be forwarded to my host machine (10.0.0.x) over port 80. So if everything went as desired, the website listening on port 80 will accept the request, process it and finally respond. I expected to be able to access my website from outside of the network by entering http://MyPublicIp:80 in a browser, but I could not accomplish this so far, despite using GoDaddy's domain forwarding tool. I do see a small view of my website when I click the "preview" button that checks whether the address (http://publicip/Index.aspx) I entered, where my domain will be forwarded, is available or not. I am sure that configuring the domain does not play a role in solving this problem, since using the public IP and port matching does not help. So here is the first question: why am I facing this problem? After that, I have a couple of questions regarding domain forwarding using the GoDaddy tool. Can I forward my domain to any port, for example port 8080, other than the default HTTP port 80? Additionally, can I use a sub-domain to forward to a different port of the host? What I want to design is: if the client enters www.mydomain.com, website1 will respond over a specified port, and when a client enters info.mydomain.com, another website which listens on a different port will respond. I tried to add a sub-domain and forward it to an address like http://www.mydomain.com:8080/Index.aspx with no success. Can I really do that? Finally, what if I have an FTP site listening on the default port 21 and I create a domain like ftp.mydomain.com that will forward to that FTP site address? Is it possible to use sub-domains for FTP site access? I know I am more than confused, but however you reply, you will help me get a clearer view of this subject. Thank you very much in advance.

    Read the article

  • Unable to set password in IIS 8 for Domain User as ApplicationPool Identity

    - by Niels R.
    I'm trying to set a Domain User account as the ApplicationPool identity in IIS 8 (Windows 2012). When trying this using the IIS Management Console I always get an error: Value does not fall within the expected range. When trying to set the identity using appcmd.exe, it fails both on the command that sets the username and password together and on the command that sets only the password; setting the username alone is no problem.
    Trying to set both the username and password [FAIL]:
        >appcmd set config /section:applicationPools /[name='AppPoolName'].processModel.identityType:SpecificUser /[name='AppPoolName'].processModel.userName:DOMAIN\Username /[name='AppPoolName'].processModel.password:P4ssW0rd
        Applied configuration changes to section "system.applicationHost/applicationPools" for "MACHINE/WEBROOT/APPHOST" at configuration commit path "MACHINE/WEBROOT/APPHOST"
        ERROR ( hresult:80070057, message:Failed to commit configuration changes. The parameter is incorrect. )
    Trying to set only the username [SUCCESS]:
        >appcmd set config /section:applicationPools /[name='AppPoolName'].processModel.identityType:SpecificUser /[name='AppPoolName'].processModel.userName:DOMAIN\Username
        Applied configuration changes to section "system.applicationHost/applicationPools" for "MACHINE/WEBROOT/APPHOST" at configuration commit path "MACHINE/WEBROOT/APPHOST"
    Trying to set the password after successfully setting the username [FAIL]:
        >appcmd set config /section:applicationPools /[name='AppPoolName'].processModel.identityType:SpecificUser /[name='AppPoolName'].processModel.password:P4ssW0rd
        Applied configuration changes to section "system.applicationHost/applicationPools" for "MACHINE/WEBROOT/APPHOST" at configuration commit path "MACHINE/WEBROOT/APPHOST"
        ERROR ( hresult:80070057, message:Failed to commit configuration changes. The parameter is incorrect. )
    I added the Domain User to the IIS_IUSRS group and allowed it to "Log on as a service". Any suggestions what I might be doing wrong?
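
    Purely as a sketch of an alternative invocation (same credentials, hypothetical pool name; whether it sidesteps the 80070057 error in this particular case is not certain), the apppool object can be targeted directly instead of the applicationPools config section:

        appcmd set apppool "AppPoolName" /processModel.identityType:SpecificUser /processModel.userName:DOMAIN\Username /processModel.password:P4ssW0rd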

    Read the article

  • How to specify search domain name of nginx resolver for proxy_pass

    - by myjpa
    Assuming my server is www.mydomain.com, on nginx 1.0.6 I'm trying to proxy all requests to http://www.mydomain.com/fetch to other hosts; the destination URL is specified as a GET parameter named "url". For instance, when a user requests either one of:
        http://www.mydomain.com/fetch?url=http://another-server.mydomain.com/foo/bar
        http://www.mydomain.com/fetch?url=http://another-server/foo/bar
    it should be proxied to http://another-server.mydomain.com/foo/bar. I'm using the following nginx config, and it works fine only if the url parameter contains a domain name, like http://another-server.mydomain.com/...; but it fails on http://another-server/... with the error: another-server could not be resolved (3: Host not found). nginx.conf is:
        http {
            ...
            # the DNS server
            resolver 171.10.129.16;
            server {
                listen 80;
                server_name localhost;
                root /path/to/site/root;
                location = /fetch {
                    proxy_pass $arg_url;
                }
            }
        }
    Here, I'd like every URL without a domain name to be resolved as a host name in mydomain.com. In /etc/resolv.conf it's possible to specify a default search domain for the whole Linux system, but it doesn't affect the nginx resolver:
        search mydomain.com
    Is this possible in nginx? Or alternatively, how can I "rewrite" the url parameter so that I can add the domain name?
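
    A minimal sketch of one way to splice the search domain in before proxying (the regex and the host/path split are assumptions, and "if" inside a location has well-known caveats):

        location = /fetch {
            resolver 171.10.129.16;
            set $target $arg_url;
            # if the host portion contains no dot, append the search domain before proxying
            if ($arg_url ~ "^(https?://)([^/.]+)(/.*)$") {
                set $target "$1$2.mydomain.com$3";
            }
            proxy_pass $target;
        }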

    Read the article

  • .com domain transfer failing

    - by digital
    Hi, I'm trying to transfer one of my .com addresses between registrars. I'm down as the owner contact (confirmed working) and the losing registrar is down as the tech and admin contact. Last week I received an email stating that the domain transfer had been rejected by the losing registrar. I contacted the losing registrar and they denied that. My money from the winning registrar was refunded and I was told to try again. I've initiated the transfer again and received confirmation of the pending transfer; I gave the correct EPP code and confirmed the transfer. Currently the status on the domain is set to OK; should it not be pending transfer? According to my name.com transfer page, if the transfer is not auth'd in 5 days it will auto-transfer anyway. I don't believe this will happen. Name.com have been really helpful, but they can't really do much more now. The losing registrar is not being helpful, hence me turning here. What can I do to make sure the domain transfers? The domain transfer is set to expire on the 17th. Any help would be greatly appreciated.

    Read the article

  • Win7 Credential manager and accessing SQL Server from outside of the domain

    - by David Lively
    My SQL Server is set to use windows authentication. If I am connected to the domain directly from my Win7 Ultimate x64 machine, SQL Management Studio (SSMS) will let me authenticate with Windows authentication. However, if I am connected via the VPN (from a different machine that is not joined to the domain), it won't. If I start SSMS with the following command line: C:\Windows\system32>runas /netonly /user:domainname\username "C:\Program Files (x86)\Microsoft SQL...\ssms.exe" then connecting to the SQL Server (which is in the domain) with Windows Authentication works fine. I'd like to save these credentials so that I don't have to launch SSMS from the command line, or modify the shortcut. I know I can use the SysInternals ShellRunAs extension to do this, but I again have to enter my domain username and password each time, and shift+right-click to see that menu option. The Windows Credential Manager seems designed to solve this problem, and works for network shares. However, it doesn't seem to work for SSMS. Any suggestions? I've tried using the /savecred option with runas to create the necessary credentials, but that appears to be incompatible with the /netonly option. Running the above command line with the addition of /savecred just displays the runas help screen. Grrr. Argh.
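
    One thing that may be worth a try (a sketch only; the server name is a placeholder, and whether SSMS picks the stored credential up depends on the connection falling back to NTLM rather than Kerberos) is pre-seeding Credential Manager for the SQL host with cmdkey:

        cmdkey /add:sqlservername.domainname.com /user:domainname\username /pass:YourPassword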

    Read the article

  • Configured Samba to join our domain, but logon fails from Windows machine

    - by jasonh
    I've configured a Fedora 11 installation to join our domain. It seems to join successfully (though it reports a DNS update failure) but when I try to access \\fedoraserver.test.mycompany.com I'm prompted for a password. So I enter adminuser and the password and that fails, so I try test.mycompany.com\adminuser and that too fails. What am I missing? EDIT (Update 9/1/09): I can now connect to the machine and see the shares on it (see my response to djhowell's answer) but when I try to connect, I get an error saying The network path was not found. I checked the log entry on the Fedora computer for the computer I'm connecting from (/var/log/samba/log.ComputerX) and it reads:
        [2009/09/01 12:02:46, 1] libads/cldap.c:recv_cldap_netlogon(157)
          no reply received to cldap netlogon
        [2009/09/01 12:02:46, 1] libads/ldap.c:ads_find_dc(417)
          ads_find_dc: failed to find a valid DC on our site (Default-First-Site-Name), trying to find another DC
    Config files as of 9/1/09: smb.conf:
        [global]
            Workgroup = TEST
            realm = TEST.MYCOMPANY.COM
            password server = DC.TEST.MYCOMPANY.COM
            security = DOMAIN
            server string = Test Samba Server
            log file = /var/log/samba/log.%m
            max log size = 50
            idmap uid = 15000-20000
            idmap gid = 15000-20000
            windbind use default domain = yes
            cups options = raw
            client use spnego = no
            server signing = auto
            client signing = auto
        [share]
            comment = Test Share
            path = /mnt/storage1
            valid users = adminuser
            admin users = adminuser
            read list = adminuser
            write list = adminuser
            read only = No
    I also set the krb5.conf file to look like this:
        [logging]
            default = FILE:/var/log/krb5libs.log
            kdc = FILE:/var/log/krb5kdc.log
            admin_server = FILE:/var/log/kadmind.log
        [libdefaults]
            default_realm = test.mycompany.com
            dns_lookup_realm = false
            dns_lookup_kdc = false
            ticket_lifetime = 24h
            forwardable = yes
        [realms]
            TEST.MYCOMPANY.COM = {
                kdc = dc.test.mycompany.com
                admin_server = dc.test.mycompany.com
                default_domain = test.mycompany.com
            }
        [domain_realm]
            dc.test.mycompany.com = test.mycompany.com
            .dc.test.mycompany.com = test.mycompany.com
        [appdefaults]
            pam = {
                debug = false
                ticket_lifetime = 36000
                renew_lifetime = 36000
                forwardable = true
                krb4_convert = false
            }
    I realize that there might be an issue with EXAMPLE.COM in there, however if I change it to TEST.MYCOMPANY.COM then it fails to join the domain with a preauthentication failure. As of 9/1/09, this is no longer the case.
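
    A few read-only checks that usually help narrow this kind of failure down (a sketch only; run these on the Fedora box, and note the net/wbinfo tools come with the Samba winbind packages):

        net ads testjoin                      # confirms the machine account is still valid in AD
        net ads info                          # shows which DC and realm Samba actually resolved
        wbinfo -t                             # verifies the trust secret with the domain controller
        wbinfo -u | head                      # lists domain users if winbind is talking to AD
        kinit adminuser@TEST.MYCOMPANY.COM    # checks Kerberos against the realm configured in krb5.conf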

    Read the article
