Search Results

Search found 21310 results on 853 pages for 'multiple domains'.


  • trouble with AD and profile import

    - by GeorgeWNYC
    I am the involuntary admin for a MOSS 2007 site. We use profile import from AD, from two domains: Mycompany.com and AM.MyCompany.Com. I was looking at the log for the PEOPLE_DL_IMPORT Content source and it has many entries like: spsimport://?$$dl$$/MyCompany.com/MyCompany.com/MyCompany.com/am.MyCompany.com/MyCompany.com/am.MyCompany.com/MyCompany.com/am.MyCompany.com/am.MyCompany.com/MyCompany.com/am.MyCompany.com/am.MyCompany.com/am.MyCompany.com/MyCompany.com/MyCompany.com/am.MyCompany.com/am.MyCompany.com It certainly doesn't look right. Is this normal? What can I do to remedy it? Can I start over? There are users already in SP, and some of them are in SP groups for permission purposes.

    Read the article

  • nginx: override global ssl directives for specific servers

    - by alkar
    In my configuration I have placed the ssl_* directives inside the http block and have been using a wildcard certificate certified by a custom CA without any problems. However, I now want to use a new certificate for a new subdomain (a server) that has been certified by a recognized CA. Let's say the TLD is blah.org. I want my custom certificate with CN *.blah.org to be used on all domains except for new.blah.org, which will use its own certificate/key pair of files with CN new.blah.org. How would one do that? Adding new ssl_* directives inside the server block doesn't seem to override the global settings.
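    A minimal, hedged sketch of the layout being described, assuming an SNI-capable nginx build and placeholder certificate paths: keep the wildcard pair at the http level as the default, and declare a separate pair inside the server block for new.blah.org.

        # http-level default: wildcard *.blah.org certificate (paths are placeholders)
        http {
            ssl_certificate     /etc/nginx/ssl/wildcard.blah.org.crt;
            ssl_certificate_key /etc/nginx/ssl/wildcard.blah.org.key;

            server {
                listen 443 ssl;
                server_name new.blah.org;
                # per-server override for this one subdomain
                ssl_certificate     /etc/nginx/ssl/new.blah.org.crt;
                ssl_certificate_key /etc/nginx/ssl/new.blah.org.key;
            }
        }

    Since both certificates are served from the same IP and port, whether the override actually takes effect also depends on SNI support in the nginx/OpenSSL build and in the connecting clients, which is worth checking before blaming the configuration.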

    Read the article

  • WebDAV mapped drive asking for username and password

    - by confus3d
    Since we migrated domains we're having problems with mapping a drive using a WebDAV connection in our login script. It's a simple net use x: \\server.domain.com\folder, which used to authenticate automatically (all we needed to do to make this happen was to put the server in the intranet zone in the Internet Explorer settings). Since the domain migration, though, nearly everyone is being prompted for a username and password to connect. Does anyone have any idea how to fix this? Any help much appreciated. The WebDAV share is on a Windows 2003 server running IIS.

    Read the article

  • Extracting httpdocs from Plesk Panel 9.5.4 Webserver backup file

    - by Paddington
    Good day, I am having problems manually extracting domains from a Plesk 9.5 backup that was FTPed onto my backup server. I have followed the article http://kb.parallels.com/en/1757 using method 2. The problem is at this step: zcat DUMP_FILE.gz > DUMP_FILE. My backup file CP_1204131759.tar is a tar archive and zcat does not work with it. So I proceeded to run the command cat CP_1204131759.tar > CP_1204131759 instead. But when I try # cat CP_1204131759 | munpack I get an error that munpack did not find anything to read from standard input. I went on to extract the tar backup file using the xvf flags and got a lot of files (20) similar to these ones: CP_sapp-distrib.7686-0_1204131759.tgz CP_sapp-distrib.7686-35_1204131759.tgz CP_sapp-distrib.7686-6_1204131759.tgz How best can I extract the httpdocs of a domain from this server-wide Plesk 9.5.4 backup?
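    As a rough, hedged sketch (the file names are the ones listed above, but the internal layout of the pieces is an assumption, so list before extracting), each CP_sapp-distrib.*.tgz piece can be inspected to find which one carries the httpdocs tree for the domain in question:

        # list the contents of each piece without extracting anything
        for f in CP_sapp-distrib.*_1204131759.tgz; do
            echo "== $f =="
            tar tzf "$f" | grep -i httpdocs | head
        done

        # once the right piece is found, pull out just the httpdocs subtree
        mkdir -p /tmp/restore
        tar xzf CP_sapp-distrib.7686-0_1204131759.tgz -C /tmp/restore --wildcards '*httpdocs*'

    The --wildcards option assumes GNU tar; with another tar implementation the whole piece may need to be extracted and the httpdocs directory copied out afterwards.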

    Read the article

  • Virtualhosts - best way of dealing with it?

    - by axqe56
    I'm competent at the basics of Apache, PHP and virtual hosting, but have a question about virtual hosting. As far as I'm aware, the HOSTS file can only be in the following location: C:/Windows/system32/drivers/etc (it varies in older installs, I believe). I don't think a separate one can be put elsewhere just for Apache virtual hosts, alongside the main HOSTS file used for blocking sites etc. I heard about PAC files on Uniform Server's website (http://wiki.uniformserver.com/index.php/Virtual_Hosting:_PAC), but they're browser-specific, aren't they? What's the best way to deal with virtual hosts, other than the HOSTS file? My server isn't currently open to the internet, but what's the best way to resolve DNS for my virtualhost domains if it were to become forward-facing (i.e. open to the internet)?
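    For local testing, the usual pattern is one hosts entry per name plus a name-based VirtualHost; a minimal sketch, with myproject.local as a made-up name and the paths as placeholders:

        # C:\Windows\System32\drivers\etc\hosts  (one line per virtual host name)
        127.0.0.1    myproject.local

        # Apache httpd.conf (or the included vhosts file)
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName myproject.local
            DocumentRoot "C:/UniServer/www/myproject"
        </VirtualHost>

    Once the server is opened to the internet, the hosts file stops mattering for visitors; the same names would instead need public DNS records (A or CNAME) pointing at the server's public IP, with the VirtualHost blocks left unchanged.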

    Read the article

  • How to secure Apache for shared hosting environment? (chrooting, avoid symlinking...)

    - by Alessio Periloso
    I'm having problems dealing with my Apache configuration: I want to limit each user to his own docroot (so a chroot() is essentially what I'm looking for), but:

    1. mod_chroot works only globally and not per virtualhost: I have the users in paths like /home/vhosts/xxxxx/domains/domain.tld/public_html (xxxxx is the user), and chrooting /home/vhosts doesn't solve the problem because the users would still be able to see each other.
    2. Using apache-mod-itk would slow the websites down too much, and I'm not sure it would solve anything.
    3. Without using either of the previous two, I think the only thing left is to prevent symlinking, i.e. not allowing users to link to something that doesn't belong to them.

    So I think I'm going to follow the third point, but how do I efficiently prevent symlinking while still keeping mod_rewrite working? PHP has already been chrooted with php-fpm, so my only concern is Apache itself. A sketch of that third option follows below.
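    A hedged sketch of that option, reusing the per-user docroot layout from the question: swap FollowSymLinks for SymLinksIfOwnerMatch in each vhost, so Apache only follows symlinks whose target is owned by the same user as the link.

        <VirtualHost *:80>
            ServerName domain.tld
            DocumentRoot /home/vhosts/xxxxx/domains/domain.tld/public_html
            <Directory /home/vhosts/xxxxx/domains/domain.tld/public_html>
                # follow a symlink only when link and target have the same owner
                Options -FollowSymLinks +SymLinksIfOwnerMatch
                # let users keep per-directory rewrite rules in .htaccess
                AllowOverride FileInfo
            </Directory>
        </VirtualHost>

    Two caveats: the mod_rewrite documentation nominally asks for FollowSymLinks in per-directory context, so it is worth verifying that .htaccess rewrites still run with only SymLinksIfOwnerMatch set, and the ownership check costs an extra lstat() per path element on every request. It also only narrows the symlink escape route; it is not the isolation a real per-vhost chroot would give.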

    Read the article

  • How would I measure the amount of RAM needed per Glassfish domain? [closed]

    - by oligofren
    Possible Duplicate: Can you help me with my capacity planning? In our test environment we have a lot of apps spread out over a few servers and Glassfish domains. To make versioning easier I would have liked to have one Glassfish domain per customer per app (kind of like a heavyweight version of lots of jetty instances). But I have heard that Glassfish is kind of heavy on the resources, and so I would need to measure approximately how many instances would fit in the available RAM. These are low-traffic/low load testing servers, so CPU is not really an issue, though RAM might be. How would I get an approximate measure of how much RAM is needed? This is one Glassfish 3 instance with one heavy EAR application deployed. top? jvmstats? ??
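    A rough way to put a number on it, assuming one JVM process per Glassfish domain and the JDK tools on the path (a back-of-the-envelope sketch, not a capacity-planning method; domain1 is the default domain name):

        # find the pid of the domain's JVM
        jps -lv | grep domain1

        # resident set size (actual RAM used) for that pid, in kilobytes
        ps -o rss= -p <pid>

        # heap usage breakdown, sampled every 5 seconds
        jstat -gc <pid> 5000

    The RSS of the one heavy-EAR instance, measured after it has been warmed up with some test traffic, multiplied by the planned number of domains (plus OS overhead), gives a first approximation of how many instances will fit in the available RAM.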

    Read the article

  • Mod_rewrite issue with godaddy web hosting

    - by MrFoh
    I'm trying to use Laravel to build a site, but my routes all redirect to the homepage. The Apache error log shows this:

        AH00124: Request exceeded the limit of 10 internal redirects due to probable
        configuration error. Use 'LimitInternalRecursion' to increase the limit if
        necessary. Use 'LogLevel debug' to get a backtrace.

    And the .htaccess file is this:

        <IfModule mod_rewrite.c>
            Options -MultiViews
            Options +FollowSymLinks
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php/$1 [L]
        </IfModule>

    The webroot has multiple sub-folders which are document roots for different domains. I am working with one of these sub-folders. What is causing this error and how can it be fixed?
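    One hedged variation that is sometimes used when the document root is a sub-folder of the hosting account (the folder name below is a placeholder): anchor RewriteBase at the sub-folder and stop rewriting once the request already targets the front controller, which is a common cause of the 10-internal-redirects loop.

        <IfModule mod_rewrite.c>
            Options -MultiViews
            RewriteEngine On
            RewriteBase /mysite/
            # never rewrite a request that already points at index.php
            RewriteCond %{REQUEST_URI} !(^|/)index\.php
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php/$1 [L]
        </IfModule>

    The Options +FollowSymLinks line from the original is dropped here on purpose: some shared hosts forbid overriding it in .htaccess, which produces 500 errors rather than redirect loops, so it is worth testing with and without it.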

    Read the article

  • Spam mail through SMTP and user spoofing

    - by Josten Moore
    I have noticed that it's possible to telnet into a mail server that I own and send spoofed messages to other clients. This only works for the domain that the mail server handles; I cannot do it for other domains. For example, let's say that I own example.com. If I telnet example.com 25 I can successfully send a message to another user without authentication:

        HELO local
        MAIL FROM: [email protected]
        RCPT TO: [email protected]
        DATA
        SUBJECT: Whatever
        this is spam
        Spam spam spam
        .

    I consider this a big problem; how do I secure this?

    Read the article

  • Handling emails on a web server - Making sure the FQDN is set correctly based on the website sending the email

    - by webnoob
    I have a Windows 2008 Web Edition server hosting multiple websites using IIS 7.5. At the moment, all the emails are sent via the IIS6 SMTP service. The FQDN of the SMTP service is currently set to the computer name, which isn't correct: it doesn't resolve to a valid DNS entry and is not RFC compliant. Some questions:

    1. Is there any way I can change the FQDN of the SMTP service based on the site sending the email?
    2. Would it be OK to just set up mailserver.mydomain.com and use that as the FQDN for all the sites on multiple domains?
    3. Should I be using some other mail server software to handle this better?

    The reason I am asking is that lots of emails are hitting spam folders because the settings are incorrect. I have access to the code that is running the websites, so if something needs to be done there then that shouldn't be a problem. The sites are written using ASP.NET 2.0. EDIT: I have just found an option to create an SMTP virtual server. Would this be the way forward? Create a virtual server for each site? Thanks.
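    If the code-side angle is used, .NET 2.0's mailSettings can at least be set per website so every site relays through one properly named host; a hedged sketch with placeholder values (this does not by itself change the FQDN the IIS6 SMTP service announces in its HELO/EHLO, which is set in the virtual server's Delivery > Advanced properties):

        <!-- web.config of one of the sites -->
        <system.net>
          <mailSettings>
            <smtp from="noreply@site-one.example" deliveryMethod="Network">
              <network host="mailserver.mydomain.com" port="25" />
            </smtp>
          </mailSettings>
        </system.net>

    Spam scoring usually hinges on whether the announced FQDN, the forward DNS and the reverse DNS of the sending host all agree (plus SPF records for each sending domain), so a single well-named relay such as mailserver.mydomain.com used by every site is a common and workable setup.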

    Read the article

  • WAN Optimization for Small Office/Home Office

    - by TiernanO
    I have been reading up on WAN optimization for a while, mostly out of interest in speeding up my own internet connections, but also to speed up the office internet connection. At home, I have 2 cable modems plugged into a RouterBoard RB750, which load-balances the connections. In the office, we have a single connection into a NetGear router. Most of the WAN optimization products I have seen seem to be prohibitively expensive, and also seem to be built around the idea of having multiple branches around the world. What I am looking for, ideally, is as follows:

    - Software install: I am "guessing" I need to install it in 2 places: one in the office or house, and one in "the cloud".
    - Any connections going to, say, the US (we are in Europe, but our backups live in the US currently, which would be something important to speed up) would be "tunnelled" through the optimizer.
    - If downloading or uploading large files, open multiple connections between "the cloud" and the optimizer... this is where a lot of speed could be gained.
    - Finally, items not already compressed would be compressed on the cloud side of things, and items that are already on the optimizer would not be sent again. Kind of like rsync or proxy servers...

    So, is there something that can be done? Is it available using off-the-shelf components (some magic script with SSH, Squid, Linux and duct tape), or is it something that needs to be purchased? Or even an open source project that does 90% of what I am asking?
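    A duct-tape sketch of the "off the shelf components" version, with made-up hostnames: a compressed SSH tunnel to a small cloud instance near the US backup target covers the tunnelling and compression parts for that one path.

        # from the office box: compressed SOCKS tunnel to a VPS near the backups
        ssh -C -N -D 1080 user@vps-near-backups.example.com

        # tools that honour ALL_PROXY (curl and friends) can then be sent through it
        export ALL_PROXY=socks5://localhost:1080

    The "don't resend what the far side already has" behaviour described above is essentially what rsync over that same tunnel gives for file transfers; the multi-connection trick for single large files is the one piece that really calls for a dedicated WAN optimizer or a download-manager-style tool.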

    Read the article

  • Exchange 2010 UR3 - customizing OWA logon page

    - by STGdb
    I have an Exchange 2010 UR3 deployment that I need to customize the OWA logon page for. I've created a new LGNTOPL.GIF file to replace the existing one in the folder: “C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\14.3.158.1\themes\resources” When I bring up OWA, I still get the original “Outlook Web App” logo. I’ve searched and found a couple of other instances of LGNTOPL.GIF in the directories: “C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\14.3.123.3\themes\resources” “C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\14.3.146.0\themes\resources” “C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\Current\themes\resources” I’ve replaced the LGNTOPL.GIF file in each of the above directories but got the same results. I’ve tried clearing my browser cache and even using multiple browsers from multiple PC’s but the same results. I’ve even tried making my GIF file the same pixel size as the original LGNTOPL.GIF logo but still the same results. I’ve tried restarting IIS on the CAS server and restarting the server but same results. Has something changed with Exchange 2010 UR3 when trying to customize OWA? I don't see anything documented about any change to OWA customization. Thanks

    Read the article

  • Steps to debugging web site latency and timeout issues

    - by Paperjam
    I have a client who has multiple offices around the country, all of which share the same Internet connection via their WAN. One specific office for this client is experiencing severe latency and timeout issues with my web site. Most, but not all, of the latency occurs on a specific ASPX page where multiple postbacks are made while populating cascading dropdown lists (rapid form submits). The latency is sporadic and can be anywhere from a few seconds to a full timeout. There is no indication that the timeouts are occurring on the server's end. The IT guy for this client is having trouble narrowing down the problem. Since it is affecting only one location for one client, I am led to believe it is not something with my site but something specific to that location. He's measured ping times while using the site and has noticed no real variance in ping times even when the page has timed out. I believe this may be caused by some sort of Internet filter that doesn't like rapid form submits, but beyond a hunch I haven't a clue. My question is: what should I tell the IT guy to look for? While I'm not trying to provide active tech support for this issue, I would at least like to glean an understanding of what is going on and try to offer some sort of advice. Thank you.
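    One concrete check the IT guy could run from the affected office during a slow spell, since plain ping will not show TCP- or HTTP-level stalls (the URL is a placeholder for the problem ASPX page):

        curl -o /dev/null -s -w "dns:%{time_namelookup}  connect:%{time_connect}  ttfb:%{time_starttransfer}  total:%{time_total}\n" \
            "http://www.example.com/problem-page.aspx"

    Run in a loop, a jump in the connect time points at the network path or at a filtering/proxy device in that office; a jump only in time-to-first-byte while connect stays flat points back at the application, or at something inspecting the rapid POST bodies.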

    Read the article

  • I can't use a custom theme on a network account

    - by Rev
    I'm an administrator for the computer I use, but I'm using a network account. I can set custom themes (non-Microsoft, I mean) on my local account but not on the network account. It's the same machine, just different accounts/domains. I tried to repatch the files from the network account, but it says they're already patched. Any ideas why this won't work? The themes don't show up in the Personalize menu, and I can't just double click the .theme file from the Themes folder in Windows 7 Pro. This is the theme I'm trying to use, by the way: http://fediafedia.deviantart.com/art/Windows-8-VS-for-Win7-258514188?q=boost%3Apopular%20windows%208%20theme&qo=0 Tried repatching the files, still nothing.

    Read the article

  • Nagios DNX plugins

    - by danneh3826
    I'm toying with the idea of multiple Nagios instances set up to monitor our infrastructure. I've looked at all the various methods of distributed Nagios checks, and I think DNX comes out the closest. DNX handles failure of worker nodes, that's fine. What happens if the main DNX server fails though? Is there a way to replicate the server too? I'm using AWS EC2 primarily, so I can utilise Elastic Load Balancing for the web UI, but I need to be able to handle failover of the AZ where the monitoring server lives, and essentially have a second server pick up the checking load (active/passive or active/active, so long as it doesn't fail completely). The other thing I'm trying to solve is an issue with routing. What I'd like is to have multiple nodes report a fault before Nagios confirms it as critical. Not the NRPE checks, as they're pretty self-explanatory, but things more like check_ping. I often have routing issues out of AWS to certain datacenters, so Nagios can often report bad/no ping/timeout as a critical issue, even though the machine in question is working fine. Would it be possible to have a setup where a worker complains a service check is critical, and have a second worker node (positioned in another datacenter/AZ) also report the service as critical before the Nagios central server issues a critical alert? I realise I might be asking a bit much (how far down the line do you go setting up failover systems before it starts to get ridiculous), however surely someone must have thought of this scenario when developing DNX?

    Read the article

  • How can I migrate mails from Yahoo! Mail to Google Apps?

    - by alnorth29
    We'd like to move from using Yahoo! Mail to our Google Apps account. It'd be great to be able to migrate all the mails across from the old accounts to the new. If we were using Gmail accounts this would be easy as the migration is offered by Google, for some reason they don't offer this to those using custom domains. As far as I know Yahoo! don't offer POP or IMAP access to non-paying customers and I'd rather not pay $20 per account that I want to migrate. Anyone know of any easy and cheap solutions?

    Read the article

  • How do I securely buy a domain from a private party?

    - by Matt
    I have a domain I want. I found the owner's contact information and negotiated a price. Is there a domain brokerage service out there that would help me with the exchange? I don't want to send off money and never hear from him again, and I'm sure he feels likewise. I've found a lot of sites that carry collections of domains for sale, but I've already found mine and contacted the owner. I just want a service to facilitate the hand-off. Has anyone used a service like this that they could recommend?

    Read the article

  • Create a new domain with the same name of a trusted domain

    - by Russ
    I have a domain blah.com that was acquired a while back by my company foo.com. I set up a two-way trust between the two domains, but now I want to move their servers into our forest while keeping the domain name blah.com. Is this possible? What things do I need to consider when doing this? I know I can't move the domain from its forest into our forest. blah.com is a 2003 native domain/forest; foo.com is a 2008R2 domain/2003 forest.

    Read the article

  • Migrating Gmail to Office 365

    - by user218699
    Good Morning, I have been setting up Office 365 for my organization. We are currently using Gmail. I have synced our local Active Directory server w/ Office 365, as well as our domains. The problem I am having has to do with migrating mailboxes from Gmail to Office 365. I have been using this article to walk me through the process: http://technet.microsoft.com/en-us/library/dn568114.aspx The issue arises when I begin to sync the mailboxes. Currently I have been trying to sync my own mailbox as a test. The synchronization process has been going on for about 15 hours (for just one mailbox) with no errors or any information given by Office 365, other than the "Syncing" status on the migration page in the Exchange Admin Center. Is syncing a single mailbox supposed to take this long, or have I missed a step? Thanks!

    Read the article

  • Mail not going through : Sendmail Issue

    - by Zama Ques
    Some of the mails for a particular domain are not getting delivered from our mail server. We are using sendmail as the mail server. The following can be seen in the log:

        Oct 21 13:24:59 mailser sendmail[5407]: r9L7st1a005405: to=<[email protected]>, delay=00:00:03, xdelay=00:00:03, mailer=esmtp, pri=120539, relay=mailgw.test.in. [164.X.X.19], dsn=2.0.0, stat=Sent (ok: Message 289953693 accepted)

    For other domains like Yahoo, Gmail etc. it is working fine. But if I send the mail through the command line using the mailx command from the same server, the message goes through:

        Oct 21 13:30:37 ssdgweb sendmail[5443]: r9L80RFI005440: to=<[email protected]>, ctladdr=<[email protected]> (502/502), delay=00:00:10, xdelay=00:00:10, mailer=esmtp, pri=120329, relay=mailgw.test.in. [164.X.X.19], dsn=2.0.0, stat=Sent (ok: Message 289955601 accepted)

    Please let us know what the issue is and how it can be resolved.

    Read the article

  • Running two Magentos installations, one of which has 3 stores set up as multi-store. Which server?

    - by Pedro Peixoto
    I want to run 4 Magento stores in 2 different installations. One is a standalone installation with 3 languages. The other is a multi-store with 3 different online stores on different domains. At the moment we have a VPS with 1GB memory; would that be enough? I ask because I've finished the standalone store and already put it online, and the server is already running at 62% memory. Ideally this would be enough, as my company wouldn't like to move to a dedicated server (as it involves costs). I'm sure I can try to optimize Magento to run on less memory (I'm expecting visits averaging 2000/day across all sites); if I could have some tips on the best way to do that I'd appreciate it too.

    Read the article

  • What is the best method to determine an account through DNS A record configuration?

    - by Matt
    I apologize if my description of the problem is unclear. I am working for an online CMS that allows external domains to be used, similar to Tumblr or Flavors.me. I noticed both of these services simply require you to add an A record to your domain's DNS. When trying this, I added an A record for a blank name and for "www", both pointing at my webserver's IP. While this successfully routes to my server, it doesn't retain the domain that was used. This leaves me without any idea of what account they're attempting to reach at the application layer. I'm using nginx as my webserver. I have changed all the nameservers for a domain before, and that works properly, however that causes complications with other issues such as mail and isn't feasible at scale. What should I be doing here? Is the A record the correct method of accomplishing this? How are sites like Tumblr and Flavors.me determining which account is being referenced by the domain?
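    The domain is not actually lost when an A record is used: the browser still sends it in the HTTP Host header of every request, so the application layer can key the account off that. A minimal, hedged nginx sketch (the upstream address and port are placeholders):

        server {
            listen 80 default_server;
            # catch-all: any customer domain whose A record points at this IP lands here
            server_name _;

            location / {
                proxy_pass http://127.0.0.1:8000;
                # pass the original domain through to the application
                proxy_set_header Host $host;
            }
        }

    The application then looks the account up by the Host value on each request, which is presumably how the Tumblr/Flavors.me style of custom domain works; taking over the nameservers is only necessary when mail and other record types also have to be managed for the customer.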

    Read the article

  • Lighttpd domain redirection

    - by HTF
    I would like to redirect domains on HTTP/HTTPS:

        http://old.com  -> https://new.com
        https://old.com -> https://new.com

    I have to specify the SSL key/certificate for the old domain, but I'm not sure where I have to place these directives:

        $SERVER["socket"] == ":443" {
            ssl.engine = "enable"
            ssl.pemfile = "/etc/pki/tls/private/new.com.pem"
            ssl.ca-file = "/etc/pki/tls/certs/new.com.crt"
        }

        $SERVER["socket"] == ":80" {
            $HTTP["host"] =~ "old.com|new.com" {
                url.redirect = ( "^/(.*)" => "https://new.com:443/$1" )
            }
        }

    I was trying to add the code below, but lighttpd reports configuration errors:

        $SERVER["socket"] == ":443" {
            $HTTP["host"] =~ "old.com" {
                url.redirect = ( "^/(.*)" => "https://new.com:443/$1" )
            }
            ssl.engine = "enable"
            ssl.pemfile = "/etc/pki/tls/private/old.com.pem"
            ssl.ca-file = "/etc/pki/tls/certs/old.com.crt"
        }

    Read the article

  • How to make Thunderbird play nice with Google mail

    - by Christi
    Thunderbird and gmail aren't exactly the best of friends. Gmail's tags mean that Thunderbird often downloads multiple copies of a single mail. Anything tagged in gmail will appear in a folder related to that tag, the "all mail" folder, and possibly the "inbox" and "sent mail" folders too. Thus a mail with multiple tags could potentially be stored more than four times in a local Thunderbird cache. This can make searching difficult, and is obviously wasteful of disk space. The best solution I have come up with is as follows. First, operate a zero-inbox policy (i.e. use the inbox for processing live mail only and archive everything else), which eliminates the extra copy in the inbox. Secondly, configure Thunderbird not to sync the "Sent Mail" folder - this is a bit of a pain, since I actually find it quite useful to be able to look through just the mails I've sent, but a search can duplicate this functionality. In this way, most of the duplicates are removed, and only mail with tags is stored locally more than once. Ideally, however, I'd only like one copy of each mail to be stored locally. I am surprised Thunderbird doesn't store mail by some sort of hashing algorithm to prevent precisely this problem - but it wouldn't be compatible with the way the folders are mirrored in a local directory structure, I suppose. Can anyone think of a better way to get Thunderbird to cache a Google mail account locally and efficiently?

    Read the article

  • Relative path incorrect in the view layer when hosting a rails3 app in a subdirectory using passenger and apache

    - by Saifis
    I want to host multiple Rails apps on a single server using sub-directories, and have encountered some relative path problems. I have made a symbolic link to the app's public directory and placed it in the /var/www/html directory:

        var/www/html/
            /test_app (symbolic link to the public folder of test_app)

    and set up Apache like so:

        LoadModule passenger_module /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12/ext/apache2/mod_passenger.so
        PassengerRoot /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12
        PassengerRuby /usr/local/bin/ruby

        <VirtualHost *:80>
            ServerName test.com
            DocumentRoot /var/www/html
            Options Indexes FollowSymLinks -MultiViews
            RailsBaseURI /test_app
            </Location>
        </VirtualHost>

    The links in the app itself work just fine; all the links acknowledge the test_app/ directory. However, when it comes to showing images from the public directory in the view, the relative path goes wrong. Say I have /system/files/1/aaa.png: the browser goes looking for it in /var/www/html/system/files/1/aaa.png rather than /var/www/html/test_app/system/files/1/aaa.png. As far as I understand this is an Apache setting problem rather than something to be done in Rails; if it's possible I would prefer to have it contained in the Apache conf file rather than having to alter the code.
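    If the goal is to keep the workaround inside the Apache configuration, one hedged option is to alias the path the views actually generate (/system/... here) onto the app's real public directory, so the un-prefixed URLs still resolve; the paths below follow the layout in the question, and it is worth verifying that Passenger does not intercept the aliased path.

        # inside the same VirtualHost
        Alias /system /var/www/html/test_app/system
        <Directory /var/www/html/test_app/system>
            Options -MultiViews
            Order allow,deny
            Allow from all
        </Directory>

    The cleaner long-term fix is usually on the Rails side (generating attachment and asset URLs relative to the sub-URI), but the Alias keeps the change contained in the Apache conf as asked.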

    Read the article
