Search Results

  • Two different domains as one user session in Google Analytics

    - by Mathew Foscarini
    I have two websites that are run as the same service. Each domain offers articles from a different market. At the top of each page the two domains are shown as menu options. If a user clicks one they can switch to the other domain. See here: http://www.cgtag.com Each domain has a different Google Analytics account, and when a user switches domains Google is counting this as a new session. It's listing the other domain as the "referral" for that new session. When the user switches back to the first domain, Google is counting this as a returning visitor. This is messing up my reports, showing returning-visitor values that are higher than reality. It's also increasing hits on landing pages when the user switches, and listing the other domain as a referral site. I've found tips on how to list two domains as one website, but that results in merging the data. I want to keep the two domains separate so that I can track each one's performance, but I don't want to count domain changes as new sessions. Maybe something like treating the two domains as subdomains.
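
    For what it's worth, a minimal sketch of the classic (ga.js) cross-domain linking setup, which is what stops the second domain from starting a new session; the account ID and second domain below are placeholders, and per-domain reporting can still be kept separate with per-hostname profiles/filters on one account:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-XXXXXX-1']);   // placeholder account ID
          _gaq.push(['_setDomainName', 'none']);       // required for cross-domain
          _gaq.push(['_setAllowLinker', true]);
          _gaq.push(['_trackPageview']);
        </script>

        <!-- menu link that carries the visitor's session cookies to the other domain -->
        <a href="http://www.other-domain.com/"
           onclick="_gaq.push(['_link', this.href]); return false;">Other market</a>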

    Read the article

  • I used a 301 Permanent Redirect to a 3rd party site by mistake! Can I stop the redirection?

    - by Dees
    Oh Noes! I've been parking a domain name for a friend/client of mine on my hosting provider (Dreamhost, FWIW) for a while, and they eventually asked me to redirect their domain to a 3rd-party website which is currently featuring some relevant promotional content. Once this period ends, we will probably go ahead and set up a proper website for the domain on my hosting account. I used Dreamhost's "redirect" hosting option in their domain configuration panel, not realizing that it would implement a 301 Permanent redirect, or what the implications were. Now it seems that for any client that has visited the site anytime recently, the 301 redirect is still cached/in effect, although I have changed the domain settings back to regular Dreamhost full site hosting. It seems that the only thing that can be done is to wait out the TTL/cache expiration for the redirect. I have no idea how long that might be, so I'm wondering if there is any good way to cache-bust the redirect or otherwise undo its long-term effects. I put a simple HTML meta refresh in the domain folder to replace the 301 to keep the intended functionality in place, but I'm still not able to access the domain's other content normally, even via FTP, etc. Isn't there anything I can do? Otherwise, how long does it take for a cached redirect to expire? It's gonna be a bummer if it's really permanent.
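
    One hedged sanity check: browsers cache 301s very aggressively, but you can confirm what the server itself is sending now with a client that has no cache (the domain below is a placeholder):

        $ curl -I http://the-domain.example/
        # a 200 response (or anything without a "Location:" header) confirms
        # the 301 is gone at the source, and only clients' own caches are
        # still replaying the old redirect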

    Read the article

  • Configuring SQL Server Management Studio to use Windows Integrated Authentication … from non-domain-joined machines

    - by Enrique Lima
    Did you know you can pass your Windows credentials to SQL Server even when working from a workstation that is not joined to a domain? Here is how…

    From Start, click All Programs and find Microsoft SQL Server (version 2005 or 2008). Once there, right-click SQL Server Management Studio, then click Properties. Now the real task (we will be using the runas command): modify the shortcut's Target as follows, and remember to replace <domain\user> with the values that correspond to your environment:

        x64, SQL Server 2008:
        C:\Windows\System32\runas.exe /user:<domain\user> /netonly "C:\Program Files (x86)\Microsoft SQL Server\100\Tools\binn\VSShell\Common7\IDE\Ssms.exe -nosplash"

        x64, SQL Server 2005:
        C:\Windows\System32\runas.exe /user:<domain\user> /netonly "C:\Program Files (x86)\Microsoft SQL Server\90\Tools\binn\VSShell\Common7\IDE\SqlWb.exe -nosplash"

        x86, SQL Server 2008:
        C:\Windows\System32\runas.exe /user:<domain\user> /netonly "C:\Program Files\Microsoft SQL Server\100\Tools\binn\VSShell\Common7\IDE\Ssms.exe -nosplash"

        x86, SQL Server 2005:
        C:\Windows\System32\runas.exe /user:<domain\user> /netonly "C:\Program Files\Microsoft SQL Server\90\Tools\binn\VSShell\Common7\IDE\SqlWb.exe -nosplash"

    Since we modified the shortcut, we will need to fix the icon for SSMS. Press the Change Icon… button and point to the original icon providers; it is the SSMS executables that hold the icon information, so point to:

        x64, SQL Server 2008: %ProgramFiles(x86)%\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe
        x64, SQL Server 2005: C:\Program Files (x86)\Microsoft SQL Server\90\Tools\binn\VSShell\Common7\IDE\SqlWb.exe
        x86, SQL Server 2008: %ProgramFiles%\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe
        x86, SQL Server 2005: C:\Program Files\Microsoft SQL Server\90\Tools\binn\VSShell\Common7\IDE\SqlWb.exe

    When you start SSMS from the modified shortcut, you'll be prompted for your domain password. SSMS will show a different account in the username box, but the credentials you configured above are passed on correctly.

    Read the article

  • How do you set up Postfix/Dovecot/MySQL to not look for local accounts?

    - by thiesdiggity
    I am having an issue with one of my Postfix/Dovecot mail servers and I'm unsure how to fix the problem. I will try to explain it in detail; here it goes: I have an Ubuntu server set up using virtual hosting with Postfix, Dovecot and MySQL. We have one domain set up as a virtual domain; for this example I am going to use mail.example.com. Under that domain we have one email address. I have another server (MS Exchange) set up using another one of my sub-domains, ex.example.com. The problem is that when I SMTP into the account on mail.example.com and try to send an email to an account on ex.example.com, I get the email returned back to us with an "unknown host" error. Now, I know that the mail.example.com server can resolve the ex.example.com domain because I can ping/dig it while SSH'd into the box. I can also log into Postfix via Telnet and send an email to an ex.example.com mailbox. I'm guessing that it has something to do with Postfix/Dovecot looking locally for the domain in the virtual domain list because of the TLD domain (example.com)? If that's the case, how do I get Postfix/Dovecot to only look locally for the entire hostname (mail.example.com) and, if it doesn't find it, send it to the correct server by looking up the MX/A records (which I know exist and are set up correctly)? I have been working on this all day and any guidance would be GREATLY appreciated! Thanks for your time!
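
    A hedged sketch of the usual fix (these are real Postfix parameters, but whether they match your current main.cf is an assumption): make sure neither ex.example.com nor the bare example.com is listed as a local or virtual domain, so Postfix falls back to an MX/A lookup for it:

        # /etc/postfix/main.cf (relevant lines only)
        mydestination = localhost
        virtual_mailbox_domains = mail.example.com
        # ex.example.com and example.com deliberately absent from both lists,
        # so mail addressed to them is routed by a normal MX/A lookup

        # inspect the live values with:
        #   postconf mydestination virtual_mailbox_domains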

    Read the article

  • Entity Framework: separating entities for product and customer-specific implementations

    - by Codecat
    I am designing an application with the intention of making it a product line. I would like to extend the functionality across all layers, and the first struggle is with domain models. For example, the core functionality would have an entity named Invoice with a few standard fields, and then customer requirements will add some new fields to it, but I don't want to add them to the core Invoice class. For every customer I could use a customer-specific DbContext and inject the correct context with dependency injection. Every customer also gets their own deployment.

        public class Product.Domain.Invoice
        {
            public int InvoiceId { get; set; }
            // Other fields
        }

    How to approach this problem?

    Solution 1 does not work, since Entity Framework does not allow two classes with the same simple name in one model:

        public class CustomerA.Domain.Invoice : Product.Domain.Invoice
        {
            public User ReviewedBy { get; set; }
            public DateTime? ReviewedOn { get; set; }
        }

    Solution 2: create a separate table and link it to the core domain table. Reusing services and controllers could be harder:

        public class CustomerA.Domain.CustomerAInvoice
        {
            public Product.Domain.Invoice Invoice { get; set; }
            public User ReviewedBy { get; set; }
            public DateTime? ReviewedOn { get; set; }
        }
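
    One hedged middle ground (class and table names here are illustrative): give the customer-specific subclass a distinct simple name, which sidesteps EF's same-simple-name restriction, and map it with table-per-type inheritance so the core table stays untouched while code written against the base Invoice keeps working:

        using System;
        using System.Data.Entity;  // EF 4.1+ Code First

        namespace CustomerA.Domain
        {
            // distinct simple name avoids the one-simple-name-per-model limit
            public class CustomerAInvoice : Product.Domain.Invoice
            {
                public User ReviewedBy { get; set; }      // User comes from the core domain model
                public DateTime? ReviewedOn { get; set; }
            }

            public class CustomerAContext : DbContext
            {
                public DbSet<CustomerAInvoice> Invoices { get; set; }

                protected override void OnModelCreating(DbModelBuilder modelBuilder)
                {
                    // table-per-type: the customer fields live in their own table,
                    // joined to the core Invoices table by primary key
                    modelBuilder.Entity<CustomerAInvoice>().ToTable("CustomerAInvoices");
                }
            }
        }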

    Read the article

  • How does cross domain authentication work in a firewalled environment?

    - by LVLAaron
    This is a simplification and the names have been changed to protect the innocent.

    The assets:

        Active Directory domains: corp.lan, saas.lan
        User accounts: user@corp.lan, user@saas.lan
        Servers: dc.corp.lan (domain controller), dc.saas.lan (domain controller), server.saas.lan

    A one-way trust exists between the domains, so user accounts in corp.lan can log into servers in saas.lan. There is no firewall between dc.corp.lan and dc.saas.lan. server.saas.lan is in a firewalled zone, and a set of rules exists so it can talk to dc.saas.lan.

    I can log into server.saas.lan with user@corp.lan - but I don't understand how it works. If I watch firewall logs, I see a bunch of login chatter between server.saas.lan and dc.saas.lan. I also see a bunch of DROPPED chatter between server.saas.lan and dc.corp.lan. Presumably, this is because server.saas.lan is trying to authenticate user@corp.lan, but no firewall rule exists that allows communication between these hosts. However, user@corp.lan can log in successfully to server.saas.lan. Once logged in, I can "echo %logonserver%" and get \\dc.corp.lan.

    So... I am a little confused how the account actually gets authenticated. Does dc.saas.lan eventually talk to dc.corp.lan after server.saas.lan can't talk to dc.corp.lan? Just trying to figure out what needs to be changed/fixed/altered.
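
    For what it's worth, this matches pass-through authentication: the member server hands the logon to its own DC (dc.saas.lan), which walks the trust to dc.corp.lan on the server's behalf, so only DC-to-DC traffic needs to cross the firewall. A hedged way to verify the secure channels from each box (nltest ships with Windows Server / the AD tools):

        C:\> nltest /sc_query:saas.lan    REM on server.saas.lan: channel to its own domain
        C:\> nltest /sc_query:corp.lan    REM on dc.saas.lan: channel across the trust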

    Read the article

  • What are the right reverse PTR, domain keys, and SPF settings for two domains running the same application?

    - by James A. Rosen
    I just read Jeff Atwood's recent post on DNS configuration for email and decided to give it a go on my application. I have a web app that runs on one server under two different IPs and domain names, on both HTTP and HTTPS for each:

        <VirtualHost *:80>
            ServerName foo.org
            ServerAlias www.foo.org
            ...
        </VirtualHost>
        <VirtualHost 1.2.3.4:443>
            ServerName foo.org
            ServerAlias www.foo.org
        </VirtualHost>
        <VirtualHost *:80>
            ServerName bar.org
            ServerAlias www.bar.org
            ...
        </VirtualHost>
        <VirtualHost 2.3.4.5:443>
            ServerName bar.org
            ServerAlias www.bar.org
        </VirtualHost>

    I'm using GMail as my SMTP server. Do I need the reverse PTR and SenderID records? If so, do I put the same ones on all of my records (foo.org, www.foo.org, bar.org, www.bar.org, ASPMX.L.GOOGLE.COM, ASPMX2.GOOGLEMAIL.COM, ...)? I'm pretty sure I want the domain-keys records, but I'm not sure which domains to attach them to. The Google mail servers? foo.org and bar.org? Everything?
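
    Since the mail actually leaves through Google's servers, the usual pattern (a hedged sketch; the record syntax is standard SPF) is one TXT record per sending domain that delegates to Google, rather than records on the www hosts or on Google's MX names:

        foo.org.  IN  TXT  "v=spf1 include:_spf.google.com ~all"
        bar.org.  IN  TXT  "v=spf1 include:_spf.google.com ~all"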

    Read the article

  • For Australian audiences, would an uncached .com.au domain resolve faster than an uncached .com?

    - by thomasrutter
    Is there any speed benefit to using a .com.au domain rather than a .com if your customers, hosting and DNS services are in Australia, specifically in the worst typical case (domain is not cached in any local DNS relay for the customer)? Assume that both domains point to the same nameservers in the end. I know this is mostly academic, because we are talking about a DNS lookup that would take at most a few hundred milliseconds and would only be relevant once at the beginning of a session; I was just curious. I know that an uncached .com lookup will involve consulting at least one ?.gtld-servers.net server, and an uncached .com.au will involve consulting at least one ?.au server. Now, what I guess I'd need to know is: are the various ?.gtld-servers.net servers using anycast technology that would have local, fully authoritative nodes in Australia, making them just as fast to Australians as ?.au and avoiding 200ms+ of overseas latency, or are some or all of them hosted only in the US or the northern hemisphere?
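
    One way to measure this empirically rather than guess (dig is part of the BIND utilities; the domain names are placeholders) is to trace each delegation from the root and compare the reported timings:

        $ dig +trace www.example.com.au
        $ dig +trace www.example.com
        # each delegation step reports which server answered and how long it
        # took, so you can compare the .au hop against the gtld-servers.net
        # hop directly from an Australian vantage point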

    Read the article

  • How can I recover from SharePoint configuration errors after promoting the server to a Domain Controller?

    - by jjr2527
    I have a SharePoint 2010 VM set up in VirtualBox, and I was using local machine accounts to handle security on the server. While preparing for a demo, it came time to have some meaningful users on my VM image. I followed some docs on promoting my server to a Domain Controller in a new forest. So now I have [MachineName].SPDEMO.CONTOSO.com and I can add users as needed. However, when I try to connect to my SharePoint sites I am getting a white screen with the error: "Cannot connect to the configuration database". I changed the pool identity account of each of my IIS app pools to the new Administrator account and started the services successfully, but I can't get the SQL services to start up. When I try to start them I get the following error: "Windows could not start the SQL Server (MSSQLSERVER) on Local Computer. For more information, review the System Event Log. If this is a non-Microsoft service, contact the service vendor, and refer to service-specific error code 17058." In the event log I see the following error: "The SQL Server (MSSQLSERVER) service terminated with service-specific error %%17058." Can I recover from this, or should I roll back or just uninstall the Domain Controller role? I'd like to keep the server as a standalone DC so I can do some user profile creation/management, but I need the SharePoint bits to work as well.
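
    Error 17058 generally means the SQL Server service account can no longer open its error log; after a dcpromo, local accounts and their ACLs change, which commonly breaks the startup account. A hedged first step (the service name is assumed to be the default MSSQLSERVER) is to point the service at a built-in account, confirm it starts, and then fix the permissions properly:

        C:\> sc config MSSQLSERVER obj= LocalSystem
        C:\> net start MSSQLSERVER
        REM if it still fails, check the SQL ERRORLOG under the instance's MSSQL\Log folder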

    Read the article

  • apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName

    - by user35402
    I keep getting this warning when I (re)start Apache:

        Restarting web server apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        ... waiting
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [ OK ]

    This is the content of my /etc/hosts file:

        #127.0.0.1  hpdtp-ubuntu910
        #testproject.localhost  localhost.localdomain  localhost
        #127.0.1.1  hpdtp-ubuntu910
        127.0.0.1   localhost
        127.0.0.1   testproject.localhost
        127.0.1.1   hpdtp-ubuntu910
        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        ff02::3 ip6-allhosts

    This is the content of my /etc/apache2/sites-enabled/000-default file:

        <VirtualHost *:80>
            ServerName testproject.localhost
            DocumentRoot "/home/morpheous/work/websites/testproject/web"
            DirectoryIndex index.php
            <Directory "/home/morpheous/work/websites/testproject/web">
                AllowOverride All
                Allow from All
            </Directory>
            Alias /sf /lib/vendor/symfony/symfony-1.3.2/data/web/sf
            <Directory "/lib/vendor/symfony/symfony-1.3.2/data/web/sf">
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

    When I go to http://testproject.localhost, I get a blank page. Can anyone spot what I am doing wrong?
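
    For the FQDN warning itself, the usual fix on Ubuntu's Apache 2.2 layout (a sketch; the conf.d file name is just a convention) is to give Apache an explicit global ServerName:

        $ echo "ServerName localhost" | sudo tee /etc/apache2/conf.d/fqdn
        $ sudo /etc/init.d/apache2 reload

    Note the warning is harmless and unrelated to the blank page, which is more likely the symfony app itself; dropping a static test file into the DocumentRoot is a quick way to tell the two apart.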

    Read the article


  • Sendmail: mail delivery of the same domain to internal or external mail servers

    - by Silkograph
    It's a bit difficult to explain, but the problem is very simple. We have an internal sendmail server and a hosted server, both set to the same domain name, and we have mixed mail accounts. For example, we have two users in one office: user1@example.com is local only; user2@example.com is internal plus external. Internal means we create the user on the local Linux box where sendmail is set up. External means we create the user on the local box and on the hosted server. user1@example.com can send mail to any internal user created on the Linux box where sendmail is installed, but he cannot send mail to outside domains, and no mail can be sent to him, as there is no account created on the external hosted server. user2@example.com can send mail to internal as well as all other domains through sendmail's smart_host feature, which uses the hosted server's SMTP, and he gets all external emails internally through Fetchmail on the Linux box. Now we have a third user, user3@example.com, who will always be outstation and can use the external server only. I cannot create an account on the local Linux box for user3@example.com, because his mail would then get delivered locally only. I don't want to create an alias and send his mails to a Gmail or Yahoo account. I want to send emails to his external account from any internal user. How can this be done? Thanks in advance.
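
    One hedged possibility (LUSER_RELAY is a real sendmail feature, though whether it suits this exact setup is an assumption, and the hostname is a placeholder): sendmail can hand mail for local-domain recipients that have no local account to another host, which is exactly the user3 case:

        dnl # in sendmail.mc - route unknown local users to the hosted server
        define(`LUSER_RELAY', `esmtp:[mail.hostedserver.example]')dnl
        dnl # then rebuild the config and restart, e.g.:
        dnl #   m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf && service sendmail restart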

    Read the article

  • For enabling SSL for a single domain on a server with multiple vhosts, will this configuration work?

    - by user1322092
    I just purchased an SSL certificate to secure/enable only ONE domain on a server with multiple vhosts. I plan on configuring as shown below (non-SNI). In addition, I still want to access phpMyAdmin, securely, via my server's IP address. Will the below configuration work? I have only one shot to get this working in production. Are there any redundant settings?

        # apache ssl.conf file
        Listen 443
        SSLCertificateFile /home/web/certs/domain1.public.crt
        SSLCertificateKeyFile /home/web/certs/domain1.private.key
        SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt

        # apache httpd.conf file
        ...
        DocumentRoot "/var/www/html"    # currently exists
        ...
        NameVirtualHost *:443           # new - is this really needed if "Listen 443" is in ssl.conf???
        ...
        # below vhost currently exists (the domain I wish to enable SSL on)
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName domain1.com
            ServerAlias 173.XXX.XXX.XXX
            DocumentRoot /home/web/public_html/domain1.com/public
        </VirtualHost>

        # below vhost currently exists
        <VirtualHost *:80>
            ServerName domain2.com
            ServerAlias www.domain2.com
            DocumentRoot /home/web/public_html/domain2.com/public
        </VirtualHost>

        # new - I plan on adding this vhost block to enable SSL for domain1.com!
        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.domain1.com
            ServerAlias 173.203.127.20
            SSLEngine on
            SSLProtocol all
            SSLCertificateFile /home/web/certs/domain1.public.crt
            SSLCertificateKeyFile /home/web/certs/domain1.private.key
            SSLCACertificateFile /home/web/certs/domain1.intermediate.crt
            DocumentRoot /home/web/public_html/domain1.com/public
        </VirtualHost>

    As previously mentioned, I want to be able to access phpMyAdmin via "https://173.XXX.XXX.XXX/hiddenfolder/phpmyadmin", which is stored under /var/www/html/hiddenfolder.

    Read the article

  • How to enable hotlink protection without hardcoding my domain in the Apache config file?

    - by Jeff
    Been surfing around for a solution for a couple days now. How do I enable Apache hotlink protection without hardcoding my domain in the config file, so I can port the code to my other domains without having to update the config file every time? This is what I have so far:

        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://www\.example\.com [NC]
        RewriteRule \.(gif|ico|jpe|jpeg|jpg|png)$ - [NC,F,L]

    And this is what Apache suggests:

        SetEnvIf Referer example\.com localreferer
        <FilesMatch \.(jpg|png|gif)$>
            Order deny,allow
            Deny from all
            Allow from env=localreferer
        </FilesMatch>

    ...both of which hardcode the domain in their rules. The closest I came to finding any info that covers this is right here on ServerFault, but the conclusion was that it cannot be done. Based on my research, that appears to be true, but I didn't find any questions or commentary dedicated solely to this question. If anyone's curious, here is the link to the Apache 2 docs that cover this topic. Note that Apache variables (e.g. %{HTTP_REFERER}) can only be used in the RewriteCond text-string and the RewriteRule substitution arguments.
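
    For what it's worth, there is a known mod_rewrite trick that avoids hardcoding: concatenate the host and referer into one test string and use an internal backreference to require that the referer's host equal the request's host. A hedged sketch (the @@ separator is arbitrary, and www/port variations may need extra handling):

        RewriteCond %{HTTP_REFERER} !^$
        # fail only when the referer's host differs from the requested host
        RewriteCond %{HTTP_HOST}@@%{HTTP_REFERER} !^([^@]*)@@https?://\1/ [NC]
        RewriteRule \.(gif|ico|jpe|jpeg|jpg|png)$ - [NC,F]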

    Read the article

  • Does migrating 2 domain controllers between 2 datacentres require both virtual machines to be shut down at the same time?

    - by Imagineer
    I was attempting to migrate 2 virtual machines that are domain controllers between 2 datacentres running ESX 3.5 and ESX 4.1. I was advised to shut down both domain controllers at the same time during the migration process, to avoid USN rollback and other replication issues. The following are the steps that I was planning to perform:

        1. Shut down both DCs.
        2. Copy both VMs' files across to the new datacentre using Veeam FastSCP (connecting to both vCenters through IP address instead of hostname).
        3. Power them up at the new datacentre.
        4. Configure the network interface/DNS/DHCP for both DCs in the new datacentre.

    I tried to use Veeam FastSCP rather than VMware Standalone Converter because it copies rather than converts. Someone also suggested that I use a backup-and-restore app like Veeam Backup & Replication. Sounds like a simple job, but after shutting down both DCs, the transfer rate using FastSCP was so slow it registered only 1KB/s, as opposed to the normal 1MB/s (or more). When that attempt to transfer failed, I tried to cold clone both DCs, which resulted in both ESX hosts getting disconnected. I tried troubleshooting by referring to this - VMware KB - Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter. It seems that DNS being down was the cause of all the unusual occurrences: the moment I powered up the DCs via the VMware console command, the ESX hosts were able to connect to vCenter again. How can I avoid such a pitfall again? Am I doing it correctly? Any help would be greatly appreciated! Thank you.
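
    Once both DCs are back up in the new datacentre, a hedged sanity check (standard AD tooling) is to confirm replication resumed cleanly before trusting the move:

        C:\> repadmin /showrepl
        C:\> dcdiag /test:replications
        REM both should report successful replication; an NTDS event 2095 in the
        REM Directory Service log would typically indicate a USN rollback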

    Read the article

  • How to safely send newsletters on VPS (SMTP) w/ non-hosted domain as "From" email?

    - by Andy M
    Greetings, I'm trying to understand the safest way to use SMTP. I'm considering purchasing a second virtual server mainly for email sending, on which I will set up PHPlist (a free open-source mailing program), so we have the freedom to send unlimited newsletters (...well, 10,000 per day at least, which requires a VPS rather than shared hosting). Here's my current setup with a paid mass-mailing software: I have a website - let's call it MyHostedDomain.org. I send newsletters with a From / Reply-To address at a different domain, which isn't hosted by me, but I have access to the email account. Can I more or less safely set this up with an SMTP server on a VPS? i.e. send messages using that outside domain as the visible address, but having it all go through my VPS SMTP? I cannot authenticate it, right? Is this too risky a practice? Is my only hope to use an address with a domain on the VPS, i.e. an @MyHostedDomain.org address? I already have a Reverse DNS record for the domain hosted on my current VPS. I also see other suggestions, like SenderID and DKIM. But with all these things combined, will this still work? I don't want to get blacklisted, but the good thing is this is a somewhat private list, and users opt in to subscribe. So it's a self-made audience. (If it makes you feel better, this is related to a non-profit activity, not some marketing scam...it's for a good cause, I assure you!)
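
    If you do control (or can request changes to) the DNS of the From domain, the standard way to "authenticate" a third-party sender is an SPF record that authorizes the VPS's IP. A sketch; the domain name is a placeholder and 203.0.113.10 stands in for your VPS address:

        ; TXT record on the domain used in the From: header
        thefromdomain.org.  IN  TXT  "v=spf1 ip4:203.0.113.10 ~all"

    If you can't touch that domain's DNS at all, receivers have no way to verify your server, which is exactly what makes the practice risky.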

    Read the article

  • Changing the current URL but serving content from another (same domain) - ProxyPass?

    - by zigojacko
    I've been banging my head against the wall with this for months now, so I hope someone on here will be able to finally advise what is needed for this. I have some URLs like this: domain.com/category/subcat/filter/brand. And I wish to rewrite the URLs to: domain.com/category/brand-subcat. Content loads fine at the first URL; I just want to show it at a different URL - is URL masking the correct term for this? I have a RewriteRule in .htaccess that should do this job, as far as I believe:

        RewriteRule ^([a-zA-Z]+)/([a-zA-Z]+)/filter/([a-zA-Z]+)$ $1/$3-$2

    This isn't actually modifying the URL at all, though, on a Magento website (mod_rewrite is enabled and plenty of other rewrites are working from the same .htaccess). So firstly, I want to know: is what I am trying to achieve definitely possible? If so, what is this process even called? Secondly, does this need to be handled using ProxyPass and then a [P] flag with the rewrite rule? I assume the Apache server doesn't have mod_proxy enabled currently, because when I add a [P] flag, the URL returns a 403 forbidden error with the full server path for the current URL. Please could anyone kindly advise what on earth I need to do to achieve this?
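
    One likely issue, offered as a hedged sketch: mod_rewrite can't change what the browser shows for a URL the user already requested; it can only redirect (new URL in the address bar) or internally rewrite (same address bar, different content). So the usual pattern is the reverse of the rule above: publish links in the domain.com/category/brand-subcat form, and internally map them back to the real path (this assumes brand names contain no hyphens; the patterns are illustrative):

        # visitor requests /category/brand-subcat;
        # Magento internally receives /category/subcat/filter/brand
        RewriteRule ^([a-zA-Z]+)/([a-zA-Z]+)-([a-zA-Z]+)$ /$1/$3/filter/$2 [L]
        # no [P] / ProxyPass is needed for a same-site internal rewrite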

    Read the article

  • Removed Old Domain Trust. Now Progress (9.1D) can't open DB File

    - by RLH
    My company has an old server, running Progress 9.1D on a Windows 2000 VM, which was used by our company ERP system (Vantage 6 by Epicor). Vantage was our primary system for a very long time. About 2 years ago, we migrated to a larger, corporate system and cancelled our service contract with Epicor. Yesterday, we removed an AD trust between the corporate domain and the old AD domain we used in the days of Vantage. After restarting the virtual server, I have been able to start the ProService for 9.1D Windows service; however, I cannot get Vantage to start back up. When I run the application, I get the error listed below.

        Transcript: ** Could not connect to server for database [progress db file], errno 0. (1432)

    How can I fix this? FYI, I haven't had to work with Progress in years, and even then I wouldn't have considered myself a "novice" -- I'm even less knowledgeable than that title would suggest. Vantage had a lot of internal tools, and I recall that Epicor support managed to prevent .pf scripts from being executed. If there was a Progress-specific patch that needed to be applied, you had to do it within the Vantage software OR they had to remote into the machine to fix it. I may not be able to run a .pf script, but I do know that I can log into the console-based server application. (Yes, I can't even recall what that utility was called. It is sad.) It's been a long time and I never had to dig into Progress that much. Please help and feel free to ask questions. If you need more info, I'll update this post.

    Read the article

  • Strategy to isolate multiple nginx SSL apps with a single domain via sub-URIs?

    - by icpu
    Warning: so far I have only learnt how to use nginx to serve apps with their own domain and server block, but I think it's time to dive a little deeper. To mitigate the need for multiple SSL certificates or expensive wildcard certificates, I would like to serve multiple apps (e.g. Rails apps, PHP apps, node.js apps) from one nginx server_name, e.g.:

        rooturl/railsapp
        rooturl/nodejsapp
        rooturl/phpshop
        rooturl/phpblog

    I am unsure of the ideal strategy. Some examples I have seen and/or thought about (a sketch of option 2/3 follows below):

        1. Multiple location rules. This seems to cause conflicts between the individual apps' config requirements, e.g. differing rewrite and access requirements.
        2. Isolating apps by backend internal port. Is this possible? Each port routing to its own config, so config is isolated and can be bespoke to each app's requirements?
        3. Reverse proxy. I am a little ignorant of how this works. Is this what I need to research? Is this actually option 2 above? Help online seems to always proxy to another server, e.g. Apache.

    What is an effective way to isolate config requirements for apps served from a single domain via sub-URIs?
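
    A hedged sketch of options 2 and 3, which are really the same idea (ports, hostnames and app names are placeholders): each app runs behind its own internal port with its own full configuration, and the single public SSL server block only routes by URI prefix:

        server {
            listen 443 ssl;
            server_name rooturl.example.com;
            # plus ssl_certificate / ssl_certificate_key for the one domain

            location /railsapp/ {
                # the trailing slash on proxy_pass strips the /railsapp/ prefix
                proxy_pass http://127.0.0.1:3000/;
            }
            location /nodejsapp/ {
                proxy_pass http://127.0.0.1:3001/;
            }
            location /phpblog/ {
                proxy_pass http://127.0.0.1:8081/;  # an internal nginx/Apache vhost serving the blog
            }
        }

    The usual catch with this design is that each app must generate links aware of its sub-URI prefix, so check each framework's relative-root/base-path setting.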

    Read the article

  • Setup Remote Access in Windows Home Server

    - by Mysticgeek
    One of the many awesome features of Windows Home Server is the ability to access your server and other computers on your network remotely. Today we show you the steps to enable Remote Access to your home server from anywhere you have an Internet connection.

    Remote Access in Windows Home Server has a lot of great features, like uploading and downloading files from shared folders, accessing files from machines on your network, and controlling machines remotely (on supported OS versions). Here we take a look at the basics of setting it up, choosing a domain name, and verifying you can connect remotely.

    Setup Remote Access in Windows Home Server

    Open the Windows Home Server Console and click on Settings. Next select Remote Access; it is off by default, so just click the button to turn it on. Wait while your router is configured for remote access, and when it's complete, click Next. Notice that it will enable UPnP; if you don't wish to have that enabled, you can manually forward the correct ports. If you have any problems with the router being automatically configured, we'll be taking a look at a more detailed troubleshooting guide in the future. The router is successfully configured, and we can continue to the next process of configuring our domain name.

    The Domain Name Setup Wizard will start. Notice you will need a Windows Live ID to set it up - which is typically your hotmail address. If you don't already have one, you can get one here. Type in your Live ID email address and password and click Next. Agree to the Home Server Privacy Statement and the Live Custom Domains Addendum. If you're concerned about privacy and want to learn more about the domain addendum, make sure to read about it before agreeing. There is nothing abnormal to point out about either statement, but if this is your first time setting it up, it's good to review the information.

    Now choose a name for the domain. You should select something that is easy to remember and identifies your home server. The name can contain up to 63 characters, numbers, letters, and hyphens, and must begin and end with a letter or number. When you have the name figured out, click the Confirm button. Note: you can only register one domain name per Live ID. If the name isn't already taken, you'll get a confirmation message indicating it's good to go.

    The wizard is complete, and you can now access the home server from the URL provided. A few other things to point out after you've set it up: under Domain Name, click on the Details button, which pulls up the domain detail information; you can refresh the data to verify everything is working correctly. Or you can click the Configure button and then change or release your current domain name. Under Web site settings, you can change your site page headline to whatever you want it to be.

    Accessing Home Server Remotely

    After you've gotten everything set up for your home server domain, you can begin to access it when you're away from home. Simply type in the domain address you created in the previous steps. The start page is rather boring, so to start accessing your data, click the Log On button in the upper right hand corner. Then enter your home server credentials to gain access to your files, folders, and network computers. You won't be able to log in with your administrator user account, however, to protect the security of your network. Once you're logged in, you'll be able to access different parts of your home server shares and network computers.
    Conclusion

    Now that you have Remote Access set up, you should be able to access and manage your files easily. Being able to access data from your home server remotely is great when you need to get certain files while on the road. The web UI is pretty self-explanatory, works best in IE as ActiveX is required, and is smooth and easy to work with. In future articles we'll be covering a lot more regarding remote access, including more of the available features, troubleshooting connection issues, and enabling access for other users.

    Read the article

  • What are the search engine effects of registering the same domain on multiple top-level domains (e.g. .com, .ie, .nl, etc.)?

    - by user1020317
    I'm looking to register a few more domains for my company. I have my-company.com at the moment, but now require my-company.com.au and my-company.nl and some others. I'm running through my options and wondering what is best:

        1. Duplicate all the content on the .com package and make a replica at the other domains.
        2. Buy the other domains but do a 301 redirect back to the .com domain.
        3. Create a full new website with different content for the new domains, thus having no text duplication.

    We currently sell all over the world, so we would like to raise our search rankings in various countries. Can this be done by buying the domain in each country, and if so, how will the above methods affect our search rankings? Any other suggestions are welcome!
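
    For what it's worth, if you go the duplicate-content route, search engines provide markup for exactly this situation: rel="alternate" hreflang annotations that mark each ccTLD as a country/language variant of the same site. A hedged sketch, using this question's domains, placed in each page's <head>:

        <link rel="alternate" hreflang="en-au" href="http://my-company.com.au/" />
        <link rel="alternate" hreflang="nl"    href="http://my-company.nl/" />
        <link rel="alternate" hreflang="en"    href="http://my-company.com/" />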

    Read the article

  • apache2 namevirtualhost resolving wrong site

    - by joe
    Running Apache 2.2.6. I'm setting up a development environment. Dev and production will be hosted on the same machine, same IP address. DNS entries like prod.domain.com and dev.domain.com point to the same IP.

    Important: it is required that dev and prod are otherwise completely separate. Each will run its own Apache instance. Each will use its own Apache configuration. Each, prod and dev, will host HTTP and HTTPS.

    I have this set up and working, but not as restrictive as I'd like. For instance, the production config:

        NameVirtualHost *:80
        NameVirtualHost *:443

        <VirtualHost *:80>
            ServerName prod.domain.com
            # ... etc
        </VirtualHost>

        <VirtualHost *:443>
            ServerName prod.domain.com
            # ... etc
        </VirtualHost>

    The dev site is set up similarly, using ports 8080 and 4443. Each site works fine. But assuming both Apaches are running, one can also hit "cross-site" by mistake. So, inadvertently hitting prod.domain.com:8080 successfully returns a page from the dev site. It would be much better if this failed completely. This is a bit more difficult to solve (for me) because of the need for two Apache configs. If it were all in one, the single process would have full knowledge of everything. So, I tried to solve this with brute force, including virtual hosts for the "other" site with something that would fail, like no access to the DocumentRoot. But Apache then inexplicably finds the "wrong" virtual host. Here's the full config for production, with the dummy dev vhosts:

        NameVirtualHost *:80
        NameVirtualHost *:443

        # ----------------------------------------------
        # DUMMY HOSTS
        <VirtualHost *:8080>
            ServerName dev.domain.com:8080
            DocumentRoot /tmp/
            <Directory /tmp/>
                Order deny,allow
                Deny from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:4443>
            ServerName dev.domain.com:4443
            DocumentRoot /tmp/
            <Directory /tmp/>
                Order deny,allow
                Deny from all
            </Directory>
        </VirtualHost>

        # ----------------------------------------------
        # REAL PRODUCTION HOSTS
        <VirtualHost *:80>
            ServerName prod.domain.com:80
            DocumentRoot /something/valid/
            <Directory /something/valid/>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:443>
            ServerName prod.domain.com:443
            DocumentRoot /something/valid/
            <Directory /something/valid/>
                Order allow,deny
                Allow from all
            </Directory>
            # .... other valid ssl setup
        </VirtualHost>

    Here's the strange thing. With this configuration, a prod.domain.com:80 hit succeeds. But a prod.domain.com:443 hit fails, because it finds dev.domain.com:4443 instead. I've also tried removing the port from the ServerName, but it still doesn't work. Sorry for the long question. Hopefully this is enough information. Thanks in advance for any help.
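
    One hedged way to make the cross-port hits fail hard, given Apache 2.2's rule that the first listed vhost for an address:port set is the default for unmatched Host headers: instead of dummy vhosts for the other instance's ports, put a deny-all catch-all vhost first in each instance, so any Host that doesn't match a real ServerName (e.g. prod.domain.com arriving on the dev ports) is refused. Sketch for the dev instance (the catch-all ServerName is a deliberate dummy):

        NameVirtualHost *:8080
        # first vhost = default for unmatched Host headers; deny everything
        <VirtualHost *:8080>
            ServerName catchall.invalid
            <Location />
                Order deny,allow
                Deny from all
            </Location>
        </VirtualHost>
        <VirtualHost *:8080>
            ServerName dev.domain.com
            # ... real dev config
        </VirtualHost>

    Also make sure each instance only has Listen directives for its own ports; an instance that never binds 8080/4443 cannot serve them at all, so a definition for those vhosts in the prod config does nothing but confuse matching.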

    Read the article

  • Troubleshooting unwanted NTP Traffic

    - by Jaxaeon
    A domain controller running Windows Server 2012 is sending NTP and NETBIOS traffic to an address that has never been configured as a time provider. The server logs give no indication that any NTP traffic is failing. The only place I see any evidence of this traffic is in pfSense system logs:

        (Blocked) Jun 9 08:48:50 DOMAIN 10.0.1.100:123 192.128.127.254:123 UDP
        (Blocked) Jun 9 08:48:53 DOMAIN 10.0.1.100:137 192.128.127.254:137 UDP

    As far as I can tell, the NTP service is working normally otherwise:

        DC2.domain.com[10.0.1.101:123]:
            ICMP: 0ms delay
            NTP: -0.0131705s offset from DC1.domain.com
            RefID: DC1.domain.com [10.0.1.100]
            Stratum: 3
        DC1.domain.com *** PDC ***[10.0.1.100:123]:
            ICMP: 0ms delay
            NTP: +0.0000000s offset from DC1.domain.com
            RefID: clock1.albyny.inoc.net [64.246.132.14]
            Stratum: 2

        The time provider NtpClient is currently receiving valid time data from 1.pool.ntp.org,0x1 (ntp.m|0x0|0.0.0.0:123->204.2.134.163:123).
        The time provider NtpClient is currently receiving valid time data from 0.pool.ntp.org,0x1 (ntp.m|0x0|0.0.0.0:123->64.246.132.14:123).
        The time service is now synchronizing the system time with the time source 0.pool.ntp.org,0x1 (ntp.m|0x0|0.0.0.0:123->64.246.132.14:123).

    I've been inside and out of the NTP configuration and cannot find any reason for this traffic. Reverse DNS points the destination address to nothing.attdns.com. Pinging nothing.attdns.com from the domain controller in question leads to a response from loopback (127.0.0.2), which makes my head hurt. Any ideas?

    EDIT 1: It should probably be noted that after a DNS flush, nslookup 192.128.127.254 returns nothing.attdns.com. 192.128.127.254 is not present in domain.com DNS records. The attdns.com domain is not present in cached lookups. 127.in-addr.arpa is clean of any funkiness.

    EDIT 2: The loopback ping response from nothing.attdns.com is possibly unrelated. Machines on other networks are also displaying this behavior.

    EDIT 3: As mentioned in the comments, I tracked the problem network adapter back to my pfSense VM hosted in ESXi 5.5 (I know, shame on me for virtualizing a firewall). pfSense was configured to use DC1.domain.com as its primary time provider, but upon changing it back to pool.ntp.org the problem persists. pfSense logs give no indication of NTP misconfiguration. Everywhere I can think to look, this VM is identified as 10.0.1.253, so I still have no idea why it's sending NTP requests as 192.128… Since this firewall was a temporary solution to a problem that no longer exists, I am going to decommission it.

    EDIT 4: The queries were coming from another machine sharing the same virtual adapter as the firewall. The machine has two local adapters: one for LAN, and the other for attached hardware that uses an Ethernet connection. That hardware sits in the mystery subnet, and the machine is broadcasting NTP requests over both adapters.
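
    For reference, the standard way to enumerate exactly which peers the Windows Time service is configured to talk to (w32tm switches available on Server 2008 and later):

        C:\> w32tm /query /peers
        C:\> w32tm /query /configuration
        C:\> w32tm /query /status
        REM an unexpected peer listed here would explain the outbound :123 traffic;
        REM if nothing shows, the traffic is coming from something other than w32time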

    Read the article

  • How can I make subversion reset the stored passwords/users and remember my authentication credentials?

    - by NicDumZ
    Hello folks!

    Background: I used to have everything working just fine on my fresh install:

        $ svn co https://domain:443/ test1
        Error validating server certificate for 'https://domain:443':
         - The certificate is not issued by a trusted authority. Use the
           fingerprint to validate the certificate manually!
        Certificate information:
         - Hostname: **REMOVED**
         - Valid: **REMOVED**
         - Issuer: **REMOVED**
         - Fingerprint: **checked with issuer and REMOVED**
        (R)eject, accept (t)emporarily or accept (p)ermanently? p
        Authentication realm: <https://domain:443> Subversion repository
        Password for 'nicdumz-machine-hostname':
        Authentication realm: <https://domain:443> Subversion repository
        Username: nicdumz
        Password for 'nicdumz':
        # proceeds to checkout correctly

        $ svn co https://domain:443/ test2
        # checks out nicely, without asking for my password

    At some point I needed to commit stuff using a different account. So I did that:

        $ svn ci --username other.user
        Authentication realm: <https://domain:443> Subversion repository
        Password for 'other.user':
        # works fine

    But since then, every time I want to commit as 'nicdumz' (the default user; all repos have been checked out with that user), it prompts me for my password:

        $ svn ci
        Authentication realm: <https://domain:443> Subversion repository
        Password for 'nicdumz':

    Hey, come on, why? :) The same happens if I want a fresh checkout, since read access is also protected. So I tried fixing the issue by myself. I read around that ~/.subversion/auth was storing credentials, so I moved it out of the way:

        $ cd ~/.subversion
        $ mv auth oldauth
        $ mkdir auth

    It seemed to work at first, because svn had forgotten about certificate validation:

        $ svn co https://domain:443/ test3
        Error validating server certificate for 'https://domain:443':
         - The certificate is not issued by a trusted authority. Use the
           fingerprint to validate the certificate manually!
        Certificate information:
         - Hostname: **REMOVED**
         - Valid: **REMOVED**
         - Issuer: **REMOVED**
         - Fingerprint: **checked with issuer and REMOVED**
        (R)eject, accept (t)emporarily or accept (p)ermanently? p
        Authentication realm: <https://domain:443> Subversion repository
        Password for 'nicdumz-machine-hostname':
        Authentication realm: <https://domain:443> Subversion repository
        Username: nicdumz
        Password for 'nicdumz':
        # proceeds to checkout correctly

        $ svn up
        Authentication realm: <https://domain:443> Subversion repository
        Password for 'nicdumz':

    What? How is this happening? If you have suggestions on how to investigate this behaviour further, I am very interested. If I'm correct, there is no way to do a verbose svn up or anything of the like, so I'm not sure I should go for investigation.

    Oh, and for what it's worth:

        $ svn --version
        svn, version 1.6.6 (r40053)
           compiled Oct 26 2009, 06:19:08
        Copyright (C) 2000-2009 CollabNet.
        Subversion is open source software, see http://subversion.tigris.org/
        This product includes software developed by CollabNet (http://www.Collab.Net/).

        The following repository access (RA) modules are available:
        * ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
          - handles 'http' scheme
          - handles 'https' scheme
        * ra_svn : Module for accessing a repository using the svn network protocol.
          - with Cyrus SASL authentication
          - handles 'svn' scheme
        * ra_local : Module for accessing a repository on local disk.
          - handles 'file' scheme
        * ra_serf : Module for accessing a repository via WebDAV protocol using serf.
          - handles 'http' scheme
          - handles 'https' scheme
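
    A hedged guess at the cause, since this is svn 1.6 on a machine that may lack a usable keyring: svn only re-saves a password if it is allowed to store it, and on plaintext-only systems it asks "Store password unencrypted?" once per realm; if that was declined, or plaintext storage is disabled in the config, every subsequent operation prompts again. The relevant knobs (real options in ~/.subversion/config and ~/.subversion/servers), plus a way to purge only the offending realm's cache rather than all of auth/:

        # ~/.subversion/config
        [auth]
        store-passwords = yes
        store-auth-creds = yes

        # ~/.subversion/servers
        [global]
        store-plaintext-passwords = yes

        # remove only the cached entries for this realm:
        $ grep -l 'domain:443' ~/.subversion/auth/svn.simple/* | xargs rm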

    Read the article
