Search Results

Search found 2454 results on 99 pages for 'domains'.


  • Using naked domain in apache, no "www" on domain in httpd.conf

    - by chrsdgtl
    Incredibly, there is no good tutorial or easy reference guide that I could find online for using naked domains (no subdomain) as the primary URI. I'm trying to configure this in my httpd.conf in Apache. Since I'm still a relative newb at this server stuff, all I could do trying to figure it out myself was configure some nasty redirect loops and 400 errors. There are plenty of notes for the more common cases -- http:// -> https://, naked -> www -- and a ton of .htaccess stuff (not interested). What I want is: http://www.domain.com -> http://domain.com. The most helpful thing I found was this: Multiple domains (including www-"subdomain") on apache? I ended up using the solution mentioned by ceejayoz in that post, which some folks noted was messy and complicated. It got the desired result, but I'd like to know the best practice for this in the future. I'd appreciate a nudge in the right direction. Thanks in advance.
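    For reference, a minimal httpd.conf sketch of the usual non-.htaccess approach, assuming Apache name-based virtual hosts (domain.com and the paths are placeholders): one vhost matches the www name and issues a permanent redirect, a second serves the naked domain. Because only the www vhost redirects, it cannot loop.

        # sketch -- redirect http://www.domain.com -> http://domain.com
        # Apache 2.2 also needs: NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName www.domain.com
            # mod_alias permanent (301) redirect to the naked domain
            Redirect permanent / http://domain.com/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName domain.com
            DocumentRoot /var/www/domain.com
        </VirtualHost>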


  • How to automatically remove Flash history/privacy trail? Or stop Flash from storing it?

    - by Arjan van Bentem
    Many people have heard about third-party cookies, and some browsers even block those by default. Some people may even be using Private Browsing modes. However, only a few seem to realise that Adobe's Flash player also leaves a cross-browser trail on your local hard drive, and allows for sending cookie-like information back to the server, including third-party sites. And because it is a plugin, Flash does not take any of the browser's privacy settings into account. Sorry for the long post, but first some details about why using Flash raises a privacy concern, followed by the results of my tests:

    - The Flash player keeps a cross-browser history of the domain names of the Flash sites your computer has visited. Unlike your browser's history, this history is not limited to a certain number of days. History is also recorded while using so-called Private Browsing modes. It is stored on your hard drive (though, as described below, without going to Adobe's site you won't know what is stored). I am not sure if any date and time information is kept about each visit, but to see the domain names: right-click on some Flash content, open the settings dialog, and click the Help icon or click the Advanced button within the Privacy tab. This opens a browser to the help pages on Adobe.com, where one can click through to the Website Storage Settings panel. One can clear the existing list, but one cannot stop it from being recorded again.

    - Flash allows for storing data on your local hard drive, using so-called Local Shared Objects (aka "Flash Cookies"). Just like HTTP cookies, this data can be sent back to the server, for tracking purposes. They are cross-browser, have no expiration date, and no user-defined maximum lifetime can be set in the Flash preferences either. Not being HTTP cookies, they are (of course) not blocked by a browser's cookie preferences and are not removed when the normal HTTP cookies are deleted. Adobe has announced that version 10.1 will obey Private Browsing in most popular browsers, but unfortunately there is no word about also removing the data whenever normal cookies are deleted manually. And its implementation might be confusing:

        [..] if the browser is in normal browsing mode when the Flash Player instance is created, then that particular instance will forever be in normal browsing mode (private browsing is turned off). Accordingly, toggling private browsing on or off without refreshing the page or closing the private browsing window will not impact Flash Player.

    - Local Shared Objects are not limited to the site you visit, and third-party storage is enabled by default. At the Global Storage Settings panel one can deselect the default "Allow third-party Flash content to store data on your computer". Because of the cross-browser and expiration-less nature (and the fact that few people know about it), I feel that cross-browser third-party Flash Cookies are more dangerous for visitor tracking than third-party normal HTTP cookies. They are even used to restore plain HTTP cookies that the user tried to delete:

        "All advertisers, websites and networks use cookies for targeted advertising, but cookies are under attack. According to current research they are being erased by 40% of users creating serious problems," says Mookie Tenembaum, founder of United Virtualities. "From simple frequency capping to the more sophisticated behavioral targeting, cookies are an essential part of any online ad campaign. PIE ["Persistent Identification Element"] will give publishers and third-party providers a persistent backup to cookies effectively rendering them unassailable", adds Tenembaum. [..] To justify this tracking mechanism, UV's Tenembaum said, "The user is not proficient enough in technology to know if the cookie is good or bad, or how it works."

    - When selecting None (zero KB) for "Specify the amount of disk space that websites that you haven't yet visited can use to store information on your computer", and checking "Never ask again", some sites do not work. However, the same site might work when setting it to None but without selecting "Never ask again", and then choosing Deny whenever prompted. Both options would result in zero KB of data being allowed, but the behaviour differs.

    - The plugin also provides a Flash Player cache for Adobe-signed files. I guess these files are not an issue.

    So: how to automatically delete that information? On a Mac, one can find a settings.sol file and a folder for each visited Flash website in:

        $HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer/sys/

    Deleting the settings.sol file and all the folders in sys removes the trail from the settings panels. However, the actual Local Shared Objects are elsewhere (see Wikipedia for locations on other operating systems), in a randomly named subfolder of:

        $HOME/Library/Preferences/Macromedia/Flash Player/#SharedObjects

    But then: how to remove this automatically? Simply removing the folders and the settings.sol file every now and then (like by using launchd or Windows' Task Scheduler) may interfere with active browsers. Or is it safe to assume that, given the cross-browser nature, the plugin would not care if things are removed while it is active? Only clearing during log-off may not work for those who hibernate all the time. Firefox users can install BetterPrivacy or Objection to delete the Local Shared Objects (it works for all other browsers as well). I don't know if that also deletes the trail of website domain names.

    Or: how to stop Flash from storing a history trail? Change of plans: I'm currently testing prohibiting Flash from writing to its own sys and #SharedObjects folders. So far, Flash has not tried to restore permissions (though, when deleting the folders, Flash will of course recreate them). I've not encountered any problems, but this may take a while to validate, using multiple browsers and sites. I've not yet found a log that reports errors. On a Mac:

        cd "$HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer"
        rm -r sys/*
        chmod u-w sys

        cd "$HOME/Library/Preferences/Macromedia/Flash Player"
        # preserve the randomly named subfolders (only preserving the latest would suffice; see below)
        rm -r \#SharedObjects/*/*
        chmod -R u-w \#SharedObjects

    I guess the above chmods cannot be achieved on an old Windows system (I'm not sure about XP and Vista?). Though maybe on Windows one could replace the folders sys and #SharedObjects with dummy files with the same names? Anyone? Obviously, keeping Flash from storing those Local Shared Objects for all sites may cause problems. Some test results (Flash 10 on Mac OS X):

    - When blocking the sys folder (even when leaving the #SharedObjects folder writable), YouTube won't remember your volume settings while viewing multiple videos. Temporarily allowing write access to the blocked folders while visiting trusted sites (to only create folders for domains you like, maybe including references in settings.sol) solves that. This way, for YouTube, Flash could be allowed to write to sys/#s.ytimg.com and #SharedObjects/s.ytimg.com, while Flash could not create new folders for other domains. One may also need to make settings.sol read-only afterwards, or delete it again.

    - When blocking both the sys and #SharedObjects folders, YouTube and Vimeo work fine (though they might not remember any settings). However, Bits on the Run refuses to even show the video player. This is solved by temporarily unblocking the #SharedObjects folder, to allow Flash to create a subfolder with some random name. Within this folder, it would create yet another folder for the current Flash website (content.bitsontherun.com). Removing that website-specific folder, and blocking both #SharedObjects and the randomly named subfolder, still seems to allow Bits on the Run to operate, even though it still cannot write anything to disk. So: the existence of the randomly named subfolder (even when write-protected) is important for some sites.

    - When I first found the #SharedObjects folder, it held many subfolders with random names, some created on the very same day. I wonder when Flash decides it wants a new folder, and how it determines (and remembers) that random name.

    For a moment I considered not blocking write access for sys and #SharedObjects, but explicitly creating read-only folders for well-known third-party tracking domains (like based on a list from, for example, AdBlock Plus). That way, any other domain could still create Local Shared Objects. But the list would be long, and the domains from AdBlock Plus are probably all third-party domains anyway, so disabling "Allow third-party Flash content to store data on your computer" might have the very same result. Any experience, anyone?

    (Final notes: if the above links to the settings panels do not work in the future, then use the URL that is known to the Flash player as a starting point: www.adobe.com/go/settingsmanager. See also "You Deleted Your Cookies? Think Again" at Wired.com -- which uses Flash cookies itself as well... For the very suspicious using Time Machine: you may want to exclude both folders, for each user, and remove the trace that is already in your backup.)
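    As a sketch of the scheduled-cleanup idea discussed above (Mac OS X paths as listed; whether it is safe to run while a browser is active is exactly the open question):

        #!/bin/sh
        # sketch: wipe the Flash domain-name trail and all Local Shared Objects;
        # could be run periodically from launchd (Task Scheduler with the
        # corresponding paths on Windows)
        FP="$HOME/Library/Preferences/Macromedia/Flash Player"
        rm -rf "$FP/macromedia.com/support/flashplayer/sys/"* 2>/dev/null
        rm -rf "$FP/#SharedObjects/"*/* 2>/dev/null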


  • I can't send email from my server to gmail addresses

    - by brianegge
    I'm using Postfix, and have set up SPF, DKIM, and DomainKeys. I can get my email to go to Yahoo, but not Gmail. Here are the headers from an email sent to Yahoo; Yahoo reports the email as DomainKeys-verified:

        X-Apparently-To: brianegge at yahoo.com via 68.142.206.167; Sat, 20 Mar 2010 05:29:19 -0700
        Return-Path: <domains at theeggeadventure.com>
        X-YahooFilteredBulk: 67.207.137.114
        X-YMailISG: x7_Rl9EWLDuugoqPcORhih0FeQMOaIIpz4qfuu9ttx1xbo3uKI2kz.CLUy2cJ1BTtHAwuJtrsGRsveHIx.Dx95avNGlPPGWy_cSpnEwWLXGxBciO.YgtSQxdURQiWLCLvbHej0QPjQIHFjAFjdeGhJd2Y8NgTW1wcExq45Sb7LMlOGvtGMjSQuc8QazwXUxpZrQbIxgSQUTmzQO1x30xaZ2Us6TQTab7Wpya6OgAX.emKOM3phfS5kfhYj9FLQ.qi32sFNWnAoFdVK596OTP2F63PAJOVM5qPsM2jIAbJylIBmnj94LO7hOVr3KOS6XLtCPRn2Oe
        X-Originating-IP: [67.207.137.114]
        Authentication-Results: mta1055.mail.mud.yahoo.com from=theeggeadventure.com; domainkeys=pass (ok); from=theeggeadventure.com; dkim=pass (ok)
        Received: from 127.0.0.1 (EHLO mail.theeggeadventure.com) (67.207.137.114) by mta1055.mail.mud.yahoo.com with SMTP; Sat, 20 Mar 2010 05:29:19 -0700
        Received: by mail.theeggeadventure.com (Postfix, from userid 1003) id BB5B01C16A4; Sat, 20 Mar 2010 12:29:16 +0000 (UTC)
        DomainKey-Signature: a=rsa-sha1; s=2010; d=theeggeadventure.com; c=simple; q=dns; b=JHbK9VhqyQTfpQFqaXxJrKpEG9h9H0IZ0LdWoBooJEA7hv3SYWmFUtyE247EuwoaG gzApKJ1DuRhwESZ7PswrbzuaUL8poAUO8LmMvZ+OqnDolgNSJUYWu0FcO+fe3H4m9ZD grkj0xMpHw+uFjXV4plKO+sa8olJXJAmP+9cMEo=
        X-DKIM: Sendmail DKIM Filter v2.8.2 mail.theeggeadventure.com BB5B01C16A4
        DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=theeggeadventure.com; s=2010; t=1269088156; bh=bUlMldcnzFCmCmNT8qjpRl6fiY1YyjiZiC9jhCXASOw=; h=Subject:To:Message-Id:Date:From; b=EVNolTlh4Gch5/HIrrHaRQvcApl7wkB42gB44NsPcLZD2QrhuOvnhanhnEB4UbV0e A+3dAOjhX7LKzgGrn11jXNTiEjNX1vQDsX3HyG0fNra73aWiGTzr1nHJfnuEJ7Ph0j 5tp0HRL5jjikD1XJcvmsYzTpT22mxuz60HXYRB1s=
        Subject: cron
        To: <brianegge at yahoo.com>
        X-Mailer: mail (GNU Mailutils 1.2)
        Message-Id: <[email protected]>
        Date: Sat, 20 Mar 2010 12:29:16 +0000 (UTC)
        From: domains at theeggeadventure.com (domains)
        Content-Length: 818

    When I send to Gmail, I see the following in my server log, but the message doesn't even reach my spam folder:

        Mar 20 12:59:12 Everest postfix/pickup[27802]: C81C61C16A4: uid=1000 from=<egge>
        Mar 20 12:59:12 Everest postfix/cleanup[27847]: C81C61C16A4: message-id=<[email protected]>
        Mar 20 12:59:13 Everest postfix/qmgr[27801]: C81C61C16A4: from=<[email protected]>, size=2784, nrcpt=1 (queue active)
        Mar 20 12:59:14 Everest postfix/smtp[27849]: C81C61C16A4: to=<brianegge at gmail.com>, relay=gmail-smtp-in.l.google.com[209.85.223.24]:25, delay=2.1, delays=0.39/0.28/0.13/1.3, dsn=2.0.0, status=sent (250 2.0.0 OK 1269089954 32si4566750iwn.51)
        Mar 20 12:59:14 Everest postfix/qmgr[27801]: C81C61C16A4: removed

    Note that Gmail's server answers status=sent (250 2.0.0 OK), yet the message never shows up. I've sent email to test services, and they report that everything verifies OK. I've also checked all the RBL lists, and I'm not on any of them.
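    For reference, a few standard checks of the DNS records involved, using dig (the selector "2010" comes from the s= tag in the signatures above):

        dig +short TXT theeggeadventure.com                    # SPF policy
        dig +short TXT 2010._domainkey.theeggeadventure.com    # DKIM/DomainKeys public key
        dig +short -x 67.207.137.114                           # PTR should match the HELO name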


  • Excel hyperlink not redirecting properly (bug?)

    - by Andrej
    I don't know if this is the right place to post this question, but I have an Excel hyperlink problem. Here's the thing. I click on, let's say, "A1", copy the link in it (http://www.godaddy.com/domains/searchresults.aspx?ci=54814), right-click on the hyperlink and copy that SAME URL as the link (if it is not automatically detected and changed). When I go to click on it, I am redirected to http://www.godaddy.com/domains/search.aspx?ci=53972. If I copy and paste the link directly into the browser, it works fine. Does somebody know what's going on? Thank you for your time. Andrej


  • Indirect Postfix bounces create new user directories

    - by hheimbuerger
    I'm running Postfix on my personal server in a data centre. I am not a professional mail hoster and not a Postfix expert; it is just used for a few domains served from that server. IIRC, I mostly followed this howto when setting up Postfix. Mail addressed to one of the domains the server manages is delivered locally (/srv/mail) to be fetched with Dovecot. Mail to other domains requires SMTPS. The mailbox configuration is stored in MySQL.

    The problem I have is that I suddenly found new mailboxes being created on the disk. Let's say I have the domain 'example.com'. Then I would have lots of new directories, e.g.

        /srv/mail/example.com/abenaackart
        /srv/mail/example.com/abenaacton

    etc. There are no entries for these addresses in my database, neither as a mailbox nor as an alias. It's clearly spam with auto-generated names. Most of them start with 'a', a few with 'b', and a couple of random ones with other letters. At first I was afraid of an attack, but all security restrictions seem to work: if I try to send mail to these addresses, I get a "Recipient address rejected: User unknown in virtual mailbox table" during the RCPT TO stage.

    So I looked into the mails stored in these mailboxes. It turns out that all of them are bounces. It seems like all of them were sent from a randomly generated name to an alias that really exists on my system, but which pointed to an invalid destination address on another host. So Postfix accepted it, then tried to redirect it to another mail server, which rejected it. This bounced back to my Postfix server, which now took the bounce and stored it locally -- because it seemed to originate from one of the addresses it manages. Example:

        1. My Postfix server handles the example.com domain.
        2. bob@example.com is configured to redirect to bob@hotmail.com.
        3. bob@hotmail.com has since been deleted from the Hotmail servers.
        4. A spammer sends mail with FROM:bob@example.com and TO:bob@example.com.
        5. My Postfix server accepts the mail and tries to hand it off to hotmail.com.
        6. hotmail.com sends a bounce back.
        7. My Postfix server accepts the bounce and delivers it to /srv/mail/example.com/bob.

    The last step is what I don't want. I'm not quite sure what it should do instead, but creating hundreds of new mailboxes on my disk is not what I want... Any ideas how to get rid of this behaviour? I'll happily post parts of my configuration, but I'm not really sure where to start debugging the problem at this point.
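    One possible mitigation to experiment with (a sketch, not a verified fix for this setup): have Postfix probe the forwarding destination at RCPT time, so mail to an alias whose remote target is gone gets rejected up front instead of being accepted and bounced later.

        # /etc/postfix/main.cf -- sketch; verification probes add load and
        # are cached/rate-limited, see Postfix's ADDRESS_VERIFICATION_README
        smtpd_recipient_restrictions =
            permit_mynetworks,
            reject_unauth_destination,
            reject_unverified_recipient
        unverified_recipient_reject_code = 550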


  • Amavis-new whitelist

    - by pigeon
    Hi to the Linux community. I'm coming from a Windows Server background, so have mercy. I'm attempting to whitelist some domains, and although I know this isn't the best way of doing so, it's just a one-off for a couple of domains, so I thought it would be the quickest approach. Current setup: Amavis is used to pass email off to ClamAV and SpamAssassin. Currently I make changes in /etc/amavis/conf.d/50-user, as this will override other settings. I have created a whitelist file that looks like this:

        .domaintowhitelist.com
        .domain2towhitelist.com

    In the 50-user config file I have tried variants like this:

        read_hash(\%whitelist_sender, '/etc/amavis/whitelist');
        read_hash(\%virus_lovers, '/etc/amavis/whitelist');

    and restarted amavis after making those changes. Am I going about this the wrong way? Any help is appreciated.
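    For comparison, the pattern from the amavisd-new documentation: read_hash() only loads the file into a hash, which then still has to be registered as a lookup map. A sketch for 50-user (file path as above; leading-dot entries match the domain and its subdomains):

        # load the one-entry-per-line sender list into a hash
        read_hash(\%whitelist_sender, '/etc/amavis/whitelist');
        # hook the hash up as a sender-whitelist lookup map
        @whitelist_sender_maps = ( \%whitelist_sender );

        1;  # conf.d files must end with a true value

    After restarting amavis, the logs and X-Spam headers should show whether the whitelist is being honoured.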


  • Have to enter google sites through second-level domain

    - by Anton Geraschenko
    I'm having the same problem as this guy. I own two domains hosted on Google Sites, mydomain.com and mydomain.net. When I go to mydomain.com, it redirects me to the site located at www.mydomain.com (this is the desired behavior). This used to work on mydomain.net as well, but now when I go to mydomain.net, I get a Google 404. To see the content, I have to go to www.mydomain.net. As far as I can tell, the DNS settings and Google Apps settings for both domains are identical. Does anybody have any idea what could be happening?
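    One hedged way to double-check the "identical DNS" assumption is to compare the records side by side; if the outputs match, the difference is more likely the naked-domain redirect setting in the Google Apps dashboard than DNS:

        dig +noall +answer mydomain.com www.mydomain.com
        dig +noall +answer mydomain.net www.mydomain.net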


  • forward all mail on a specified domain to script

    - by David
    Hey all! I run a disposable e-mail service that accepts all incoming mail and forwards it to a PHP script that stores it in a database for people to view. Until now, I have been on shared hosting with cPanel, which makes it easy to pipe e-mails to a script. Now, however, I've got my own VPS, and it doesn't have cPanel. How do I pipe e-mails to a script? Further, how do I pipe e-mails sent to any address on certain specified domains to my script? You see, aside from the main domain, there are several alternate domains that people can use if the main domain is blocked, and on each domain I want any address to be usable (xyz@domain, abc@domain, anythingelse@domain). The VPS has Ubuntu 9.04 installed, and I have been experimenting with Postfix, though I can switch to Exim or Sendmail if that is easier.
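    A minimal Postfix sketch of the cPanel-style setup, with hypothetical names (mailhandler, /var/www/mail.php): a catch-all virtual alias per domain funnels every address into one local alias, and that alias pipes each message into the script.

        # /etc/postfix/main.cf
        virtual_alias_domains = maindomain.com, alt1.com, alt2.com
        virtual_alias_maps = hash:/etc/postfix/virtual

        # /etc/postfix/virtual -- catch-all: any local part at these domains
        @maindomain.com  mailhandler
        @alt1.com        mailhandler
        @alt2.com        mailhandler

        # /etc/aliases -- pipe the raw message into the PHP script on stdin
        mailhandler: "|/usr/bin/php /var/www/mail.php"

    Then run postmap /etc/postfix/virtual, newaliases, and postfix reload; the script reads each complete message from standard input.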


  • Need Some comparative data on Apache and IIS

    - by zod
    This is not a pure programming question, but it is very programming-related, and I am not sure where I should ask it. If you don't know the answer, please don't downvote; if you know the right place to ask, please suggest it or move this question. We have a web application running on PHP 5, Zend Framework, and Apache. I need some comparative data showing that these technologies and this server are among the best, and that thousands of domains use them. Do you know any website where I can get this type of data, e.g.:

        - major websites developed in PHP
        - major domains running on Apache
        - major websites running on Zend
        - a comparative case study of Apache vs. IIS, or PHP vs. .NET


  • One domain hiding two servers

    - by George DSeas
    For our SaaS web-app we have two identical servers in two geographically separated data centers. FOO_1 is the production server and does real-time (MySQL master-slave) replication to its backup, FOO_2. We want our users to always go to THEFOO.COM, which somehow points to the production server. So even if FOO_1 dies, we can just switch THEFOO.COM to redirect to FOO_2, so the failure is transparent. This switch can be manual or automatic, but without failback (if FOO_1 somehow becomes available again). Is there a way to do this with DNS? I am getting stuck with ANAME and CNAME configuration. We don't use subdomains, just straight domains. If not, what are the other options? Does it make sense to just have a web server at LOVELY_FOO.COM and redirect all traffic? I also looked at load balancers, but didn't see a solution that works across data centers/network providers.
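    A hedged sketch of the plain-DNS approach: a CNAME is not allowed at a zone apex, so the bare domain gets a short-TTL A record that is switched (by hand, or by a managed-DNS failover feature) when FOO_1 dies. The IPs below are documentation placeholders.

        ; zone file sketch -- 60-second TTL so a switch propagates quickly
        thefoo.com.       60  IN  A      203.0.113.10    ; FOO_1 (production)
        ; on failover, replace with:
        ; thefoo.com.     60  IN  A      198.51.100.20   ; FOO_2 (backup)
        www.thefoo.com.   60  IN  CNAME  thefoo.com.

    The caveat is that some resolvers ignore short TTLs, so a few clients may lag behind after a switch; health-checked failover at a managed DNS provider automates the change but has the same caching limits.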


  • Qmail does not forward mail to a specific domain

    - by jahufar
    Hi. I have a dedicated hosting account with GoDaddy.com. I've pointed my domain's email at Google Apps. The server has qmail running, and it forwards email to all domains just fine except for MY domain (mydomain.com) -- it says 550 User xxx not found in mydomain.com. I believe it thinks I've hosted email on the server itself (not Gmail), and it's trying to validate whether xxx@mydomain.com exists on my server (which is not the case, since it's all handled by Google Apps). How do I make it forward mail to all domains? Thank you :) EDIT: I would only need it to forward email if the connection originates from 127.0.0.1 -- which I believe is the default way it's configured. So to clarify: I just need a purely forwarding configuration, so my PHP scripts have the ability to send email.
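    A hedged place to look: qmail decides that a domain is "local" purely from its control files, so the classic cause of this symptom is the domain still being listed in locals or virtualdomains. A sketch of checking and fixing (standard qmail paths; daemontools supervision assumed):

        # find where the domain is still declared local
        grep -r mydomain.com /var/qmail/control/

        # if it appears in 'locals' or 'virtualdomains', delete that line
        # (back up first), then make qmail-send re-read its control files:
        sed -i.bak '/mydomain\.com/d' /var/qmail/control/locals
        svc -h /service/qmail-send   # HUP re-reads locals/virtualdomains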


  • Entering new user data into AD LDS

    - by Robert Koritnik
    I need some help configuring AD LDS (Active Directory Lightweight Directory Services). I'm not an administrator, I have never configured domains, and I don't have a clue how to add new users to existing domains. The thing is, I need to develop an app on top of SharePoint 2010 that must be connected to AD. I've chosen AD LDS because I can install it on Windows 7 and it acts as an Active Directory even though there's no domain controller present in the network. What I've done so far:

        - I've installed AD LDS
        - I've added a new instance with application directory partition name DN=Air,DC=Watanabe,DC=pri
        - I can connect to it using ADSI Edit and see all kinds of strange entries

    But now I don't know what to do. When it opens I can see the window below, but where to next? Can anybody give me some guidelines on how to add domain users, so I can use them in my app?
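    A hedged sketch of one way to add a user: an LDIF file imported with the ldifde tool that ships with AD LDS. The names, port, and container are placeholders (written here with CN=Air; match the partition DN exactly as created), and this assumes the optional user classes (MS-User.LDF) were imported when the instance was set up.

        # user.ldf -- one user under the application partition created above
        dn: CN=jdoe,CN=Air,DC=Watanabe,DC=pri
        changetype: add
        objectClass: user
        cn: jdoe
        displayName: John Doe

        # import against the instance's LDAP port
        ldifde -i -f user.ldf -s localhost:389

    The password is set separately (e.g. via ADSI Edit's Reset Password), since an LDIF import cannot carry a password over a non-SSL connection.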


  • Specifying an Internal (LAN) DNS Server in Netgear DGND3700 (N600) router

    - by Mus
    I have a DNS server running on a Linux machine on my LAN, which has domains for a few devices in my LAN. Its resolv.conf file has the Google and ISP nameservers in it, as well as itself. Dunno if that helps or hinders, but this setup has worked for years. I used to have a Thomson 585 ADSL router where I set the internal DNS server as the primary DNS and the ISP's DNS server as the secondary. True enough, all connected devices could access domains specified in the internal DNS. Recently I had to replace the Thomson router with a Netgear DGND3700 (N600) ADSL router. The problem is that if I specify the internal DNS server in this router, I lose the internet connection as well as the connection to the router itself. Does anyone know how I can use the internal DNS as the primary in the router?


  • Cross domain LDAP

    - by Adam
    For a system we are developing, we have 2 domains, an internal and an external domain, with bidirectional trust between them. However, the servers are only able to connect to their own DCs. We have an application server on the internal domain which needs to use an LDAP query to gather a list of users from a group on the external domain. How do I go about writing an LDAP query that asks one DC to go ask another DC for a list of users? I tried querying the internal DC with the same LDAP query I would use if it could hit the external DC directly, but this does not work. When I use Softerra LDAP Administrator, I can view the full hierarchy of the internal domain, but despite the trust relationship between the domains I am unable to see any of the external domain. Any suggestions or help would be greatly appreciated.
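    If (and only if) both domains are in the same forest, the usual trick is to query the global catalog port (3268) on the reachable internal DC, since the GC holds a partial, read-only replica of every domain in the forest. A hedged sketch with placeholder names, in OpenLDAP's ldapsearch syntax:

        # ask the internal DC's global catalog for a group in the external domain
        ldapsearch -H ldap://internal-dc.internal.local:3268 \
            -D "INTERNAL\svc-ldap" -W \
            -b "DC=external,DC=local" \
            "(&(objectClass=group)(cn=TargetGroup))" member

    With an external (cross-forest) trust, the GC will not contain the other domain, and the query has to reach a DC of the external domain directly.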


  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous, unrelated request has finished. The following code is only broken in this way on Nginx; it runs perfectly on Apache. This call starts a background process, and it waits for it to complete so it can display the final result:

        $.ajax({
            type: 'GET',
            async: true,
            url: $(this).data('route'),
            data: $('input[name=data]').val(),
            dataType: 'json',
            success: function (data) { /* do stuff */ },
            error: function (data) { /* handle errors */ }
        });

    The below is called after the above; on Apache it requires 100ms to execute and repeats itself, showing progress for the data being written in the background:

        checkStatusInterval = setInterval(function () {
            $.ajax({
                type: 'GET',
                async: false,
                cache: false,
                url: '/process-status?process=' + currentElement.attr('id'),
                dataType: 'json',
                success: function (data) { /* update progress bar and status message */ }
            });
        }, 1000);

    Unfortunately, when this script is run from nginx, the above progress request never even finishes a single request until the first AJAX request that sent the data is done. If I change async to true in the above, it executes one every interval, but none of them complete until that very first AJAX request finishes. Here is the main nginx conf file:

        #user nobody;
        worker_processes 1;

        #error_log logs/error.log;
        #error_log logs/error.log notice;
        #error_log logs/error.log info;

        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            server_names_hash_bucket_size 64;

            # configure temporary paths
            # nginx is started with param -p, setting nginx path to serverpack installdir
            fastcgi_temp_path temp/fastcgi;
            uwsgi_temp_path temp/uwsgi;
            scgi_temp_path temp/scgi;
            client_body_temp_path temp/client-body 1 2;
            proxy_temp_path temp/proxy;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            #access_log logs/access.log main;

            # Sendfile copies data between one FD and another from within the kernel.
            # More efficient than read() + write(), since those require transferring
            # data to and from user space.
            sendfile on;

            # Tcp_nopush causes nginx to attempt to send its HTTP response head in one
            # packet, instead of using partial frames. This is useful for prepending
            # headers before calling sendfile, or for throughput optimization.
            tcp_nopush on;

            # don't buffer data-sends (disable Nagle algorithm). Good for sending
            # frequent small bursts of data in real time.
            tcp_nodelay on;

            types_hash_max_size 2048;

            # Timeout for keep-alive connections. Server will close connections after this time.
            keepalive_timeout 90;

            # Number of requests a client can make over the keep-alive connection. This is set high for testing.
            keepalive_requests 100000;

            # allow the server to close the connection after a client stops responding.
            # Frees up socket-associated memory.
            reset_timedout_connection on;

            # send the client a "request timed out" if the body is not loaded by this time. Default 60.
            client_header_timeout 20;
            client_body_timeout 60;

            # If the client stops reading data, free up the stale client connection after this much time. Default 60.
            send_timeout 60;

            # Size Limits
            client_body_buffer_size 64k;
            client_header_buffer_size 4k;
            client_max_body_size 8M;

            # FastCGI
            fastcgi_connect_timeout 60;
            fastcgi_send_timeout 120;
            fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value
            fastcgi_buffer_size 64k;
            fastcgi_buffers 4 64k;
            fastcgi_busy_buffers_size 128k;
            fastcgi_temp_file_write_size 128k;

            # Caches information about open FDs, frequently accessed files.
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # Turn on gzip output compression to save bandwidth.
            # http://wiki.nginx.org/HttpGzipModule
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            gzip_http_version 1.1;
            gzip_vary on;
            gzip_proxied any;
            #gzip_proxied expired no-cache no-store private auth;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

            # show all files and folders
            autoindex on;

            server {
                # access from localhost only
                listen 127.0.0.1:80;
                server_name localhost;
                root www;

                # the following default "catch-all" configuration allows access to the server from outside.
                # please ensure your firewall allows access to tcp/port 80. check your "skype" config.
                # listen 80;
                # server_name _;

                log_not_found off;
                charset utf-8;
                access_log logs/access.log main;

                # handle files in the root path /www
                location / {
                    index index.php index.html index.htm;
                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html
                #
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root www;
                }

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
                #
                location ~ \.php$ {
                    try_files $uri =404;
                    fastcgi_pass 127.0.0.1:9100;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }

                # add expire headers
                location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ {
                    expires 30d;
                }

                # deny access to .htaccess files (if Apache's document root concurs with nginx's one)
                # deny access to git & svn repositories
                location ~ /(\.ht|\.git|\.svn) {
                    deny all;
                }
            }

            # include config files of "enabled" domains
            include domains-enabled/*.conf;
        }

    Here is the enabled domain conf file:

        access_log off;
        access_log C:/server/www/test.dev/logs/access.log;
        error_log C:/server/www/test.dev/logs/error.log;

        # HTTP Server
        server {
            listen 127.0.0.1:80;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
        }

        # HTTPS server
        server {
            listen 443 ssl;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
            include domains-common/ssl.conf;
        }

    Contents of ssl.conf:

        # OpenSSL for HTTPS connections.
        ssl on;
        ssl_certificate C:/server/bin/openssl/certs/cert.pem;
        ssl_certificate_key C:/server/bin/openssl/certs/cert.key;
        ssl_session_timeout 5m;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_param HTTPS on;
            fastcgi_pass 127.0.0.1:9100;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Contents of location.conf:

        # Remove trailing slash to please Laravel routing system.
        if (!-d $request_filename) {
            rewrite ^/(.+)/$ /$1 permanent;
        }

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # We don't need .ht files with nginx.
        location ~ /(\.ht|\.git|\.svn) {
            deny all;
        }

        # Added cache headers for images.
        location ~* \.(png|jpg|jpeg|gif)$ {
            expires 30d;
            log_not_found off;
        }

        # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks.
        location ~* \.(js|css)$ {
            expires 3h;
            log_not_found off;
        }

        # Add expire headers.
        location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ {
            expires 30d;
        }

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri /index.php =404;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9100;
        }

    Any ideas where this is going wrong?
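    One thing worth ruling out, offered as an assumption rather than a diagnosis: if both endpoints run PHP and share a session, PHP's default file-based session handler holds an exclusive lock for the duration of each request, which serializes the requests exactly like this, regardless of the client's async flag. A sketch of the usual workaround in the long-running handler:

        <?php
        // hypothetical long-running endpoint
        session_start();
        // ...read whatever session data is needed...
        session_write_close();  // release the session lock early
        // ...kick off the slow background work; the /process-status
        // requests can now be served concurrently...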


  • Permission settings for apache2 web content directories with several users?

    - by John
    Hi there. I've got a Debian VPS set up with a LAMP stack. My apache2 instance runs under the user account 'www-data'. In addition to the root account and the service accounts, I have several user accounts with FTP access belonging to friends, family and myself. This is to allow the users to drop files into the root of their domain, which is located in their home folder. I am having issues with setting the correct permissions so that Apache is able to serve the content ("403 Forbidden"). I could just do a 'chmod -R 755 *' on the entire www directory for each domain, but from what I gather that's not a good idea. Here's an example of the structure:

        apache2 is run by 'www-data'
        User 'john' has this home folder structure:
            /home/john/domains/somedomain.com/www
            /home/john/domains/sub.somedomain.com/www

    How can I keep things safe while still allowing users to upload content via FTP, and allow for file uploads in, let's say, WordPress?
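    A sketch of one common arrangement, using user 'john' from the example (group-based access instead of world-readable 755): Apache needs execute (search) permission on every directory along the path and read permission on the files, which group www-data can provide.

        # let www-data traverse the home directory without listing it
        chmod 711 /home/john

        # hand the web tree to the user, with group www-data
        chown -R john:www-data /home/john/domains
        find /home/john/domains -type d -exec chmod 2750 {} \;  # rwxr-s---, setgid
        find /home/john/domains -type f -exec chmod 640 {} \;   # rw-r-----

    The setgid bit makes FTP-uploaded files inherit the www-data group. For WordPress-style uploads, the specific upload directory additionally needs group write (e.g. 2770).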


  • Upgrading TFS 2005 to TFS 2010 fails at "Executing servicing step Upgrade Version Control Identities"

    - by nadeemmar
    Hi all, I have been trying to upgrade our TFS 2005 to TFS 2010, but with no luck so far. I went through the TFS installation guide and many upgrade guides, but with no luck in overcoming the issue I am facing, which seems to be unique and different from other described issues. In our company, we have a domain forest with several domains -- let's say domain A, B, and C. TFS is in domain A and has users from all three domains. All domains have trust relationships between them. However, domain C was deleted several months ago. In the upgrade process, whenever I reach the collection upgrade step, the following error is raised:

        [Info @09:57:50.997] [2010-12-29 09:55:47Z][Informational] Step Data: ExtensionType = Microsoft.TeamFoundation.VersionControl.Server.PlugIns.WorkspaceSecurityNamespaceExtension
        [Info @09:57:50.997] [2010-12-29 09:55:47Z] Servicing step Create VersionControl Security Namespaces passed. (ServicingOperation: UpgradePreTfs2010Databases; Step group: Upgrade.TfsVersionControl)
        [Info @09:57:50.997] [2010-12-29 09:55:47Z] Executing servicing step Upgrade Version Control Identities. (ServicingOperation: UpgradePreTfs2010Databases; Step group: Upgrade.TfsVersionControl)
        [Info @09:57:50.997] [2010-12-29 09:55:47Z][Informational] Step Performer: VersionControl
        [Info @09:57:50.997] [2010-12-29 09:55:47Z][Informational] Step Type: UpgradeIdentity
        [Info @09:57:50.997] [2010-12-29 09:55:47Z][Informational] Step Data Text:
        [Error @09:57:50.997] [2010-12-29 09:55:51Z][Error] Sync error for identity: System.Security.Principal.WindowsIdentity, S-1-5-21-1004336348-527237240-682003330-2818 - The trust relationship between the primary domain and the trusted domain failed

    I looked up the SID, and it seems to belong to a user in the deleted domain C. With a bit of googling, I figured out that the TFSConfig Identities command can be used to remap users from one domain to another. I went ahead, created local users matching the users we had in domain C, and ran TFSConfig Identities /change; it executed successfully. However, I still get the same error. I am stuck and can't figure out how to move forward :( I need your expertise: has anyone faced this issue before? Do I need to change these identities on TFS 2005 before I commence the upgrade? I forgot to mention that I am following the upgrade-with-a-move approach: I created a virtual machine for testing the upgrade, installed SQL Server 2008, restored the TFS databases, installed TFS 2010 and ran the upgrade wizard. Regards, Nadeem
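    For reference, a sketch of the remapping syntax from the TFS 2010 documentation (domain and account names are placeholders; whether it must be run before or after restoring the databases is exactly the open question above):

        REM remap every identity from the deleted domain to another domain
        TFSConfig Identities /change /fromdomain:DOMAINC /todomain:TESTVM

        REM or remap one account at a time
        TFSConfig Identities /change /fromdomain:DOMAINC /todomain:TESTVM /account:jsmith /toaccount:jsmith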


  • Installing and maintaining an email server

    - by Andrew
    I need to move hosting providers for four or five domains, and for several reasons I'm considering a Linux VPS rather than staying with my current shared, managed hosting provider. The only thing that's stopping me is email. I have lots of experience running and maintaining Apache, but none with email servers. Based on some research, if I want to keep what I'm using now, it looks like I'd be going with Postfix and Dovecot, and probably Exim and SpamAssassin. I have no problem performing regular maintenance and watching for security updates, but I don't want to bite off more than I can chew. For someone new to email services, how hard is it to set up an email server that is externally accessible (via SMTP and POP3, not IMAP), available over SSL/TLS, and reasonably reliable for multiple domains? How much of a time commitment is it to maintain one?


  • active directory servers synchronization

    - by Mit Naik
    I have 3 AD servers running Windows Server 2008 R2 in 3 different places; the main server is at a datacenter and the other 2 are in our local offices, which are in 2 different locations. I want to synchronize all 3 servers together, where the datacenter server is the central server and the other 2 servers sync with it. Please provide the steps or a tutorial for doing this. We also want that, once a change is made on 1 of the AD servers, the change is automatically made on all the servers. For example, if I change a user's password on our local server, it should be updated on our main AD server and the other branch server too. One more question: I have already created the main datacenter AD as domain.local and the other domains as xyz.local and abc.local. How can I replicate the additional AD domains with the main datacenter DC? Also, do we require a VPN connection, or is there another way to replicate the servers without using a VPN?


  • Spam issues while using Postfix as a two-way relay

    - by BenGC
    I want to use a Postfix box to do two things:

        1. Relay mail from any host on the internet addressed to one of my domains to my Zimbra server.
        2. Relay mail from my Zimbra server to any address on the internet.

    To try to accomplish this, I have configured Postfix thusly:

        mynetworks = 127.0.0.0/8, zimbra_ip/32
        myorigin = zimbra_server
        mydestination = localhost, zimbra_server
        relay_domains = example.com example.org
        transport_maps = hash:/etc/postfix/transport_map
        local_transport = error:no mailboxes on this host

    transport_map looks like this:

        example.com smtp:[zimbra_server]
        example.org smtp:[zimbra_server]

    Now, this works and passes the open relay tests. However, I am seeing in the maillog that the server is relaying spam that has a From: address of <> to domains that are not mine. How do I stop this behaviour?
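    The null sender <> is legitimate in itself (all bounces use it), so the thing to verify is that relaying is permitted only by destination (your domains) or by trusted source (the Zimbra IP). A sketch of the canonical pattern the config above should effectively reduce to:

        # /etc/postfix/main.cf -- check the live value with:
        #   postconf smtpd_recipient_restrictions
        smtpd_recipient_restrictions =
            permit_mynetworks,         # Zimbra and localhost may send anywhere
            reject_unauth_destination  # everyone else: relay_domains/mydestination only

    If spam to foreign domains still gets through with this in place, it is worth checking whether it actually enters from the Zimbra box itself (a compromised account or web app), since permit_mynetworks trusts that IP completely.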


  • How to add IP range to a server?

    - by Efe Cakinberk
    Hello. First of all, I must say that I'm a very inexperienced server and network user. But I rented an unmanaged dedicated server. Well, I didn't know what unmanaged really means; I learned it when I needed support. I must do everything by myself. I have a problem. I already had 4 IPs on my server when I rented it. But then I needed more IPs, and I was assigned 32 IPs, of which I can only use 27. 85.25.230.0 - 85.25.230.31 is my IP range, and they say the following IPs must not be configured on the server:

        85.25.230.0 - network address
        85.25.230.1 - gateway address
        85.25.230.2 - router redundancy
        85.25.230.3 - router redundancy
        85.25.230.31 - broadcast address

    But the problem is that the IPs are assigned to me, yet they are not set up on my server. How do I set up the IPs to work on my server? After my research on Google, I ran this at the command prompt:

        route add 85.25.230.0 mask 255.255.255.224 85.25.230.1 metric 1 if

    It said OK!, and I thought the IPs should be working. (By the way, the mask was given to me by my ISP; I don't know what 'metric 1' and 'if' mean, I just saw them on the net and wrote them here.) I set up my domains using the Plesk control panel. So I added one domain and assigned one of my new IPs, 85.25.230.5, to it. But no, it is not working. When I visit the domain in a browser, a Plesk page comes up saying this domain is not configured on the server. Then I changed the domain's IP to one of my old IPs, which were given to me with the server and which I have been using for my other domains for a long time. OK -- in a second, the domain started working. I set it back to my new IP, and the domain did not work. As I said, I'm not an expert and do not know the logic. But I'm eager to learn. Can you tell me what might cause the problem, and whether I did something wrong while setting up the IP range on my server? If so, how can I set it up? Thank you, Efe
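    For what it's worth, route add only touches the routing table; it does not assign any addresses to the network adapter, which is what Plesk needs. On Windows the extra IPs are added per adapter, e.g. (the adapter name is a placeholder; on Windows Server 2003 the command group is netsh interface ip instead of ipv4):

        REM bind one of the usable IPs to the adapter -- repeat for each IP,
        REM skipping .0, .1, .2, .3 and .31 as instructed by the ISP
        netsh interface ipv4 add address "Local Area Connection" 85.25.230.5 255.255.255.224

    Once the address shows up under ipconfig, Plesk can bind the domain to it.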


  • Exchange 2013 Internal Relay via Smart Host

    - by Matt Clements
    Thank you in advance for your help! I am currently setting up an Exchange 2013 server to replace our old POP3/SMTP system; however, we would like to roll this out gradually, when convenient for our staff. Our plan is therefore:

        1. Set up Exchange 2013 to retrieve email via the POP connector - Done
        2. Set up Exchange 2013 to send ALL mail via a smart host - Issues

    I have set the domains in Mail Flow > Accepted Domains to Internal Relay, enabled a smart host for * as the domain name, and disabled/deleted the accounts that are not set up yet; however, Exchange just bounces the emails with no errors.
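    For comparison, a sketch of how a catch-all smart-host send connector is typically created in the Exchange Management Shell (the names are placeholders); if the existing connector already matches this, the bounces are more likely coming from the accepted-domain/recipient side:

        New-SendConnector -Name "SmartHost-All" -Usage Custom `
            -AddressSpaces "SMTP:*;1" -SmartHosts "smarthost.example.net" `
            -DNSRoutingEnabled $false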


  • Entering data into AD LDS

    - by Robert Koritnik
    I need some help configuring AD LDS (Active Directory Lightweight Directory Services). I'm not an administrator, I have never configured domains, and I don't have a clue how to add new users to existing domains. The thing is, I need to develop an app that must be connected to AD. I've chosen AD LDS because I can install it on Windows 7 and it acts as an Active Directory even though there's no domain controller present in the network. What I've done so far:

        - I've installed AD LDS
        - I've added a new instance with application directory partition name DN=Air,DC=Watanabe,DC=pri
        - I can connect to it using ADSI Edit and see all kinds of strange entries

    But now I don't know what to do. When it opens I can see the window below, but where to next? Can anybody give me some guidelines on how to add domain users, so I can use them in my app?


  • Multiple URLs on one domain in IIS 6 [closed]

    - by TGnat
    Possible Duplicate: How to Host Multiple Domains / Web Sites on one IIS6 Server

    I'm a developer and only know the basics of IIS administration. I want to give each of my clients a personalized URL to access my web site. I have mySite.com, and I would like to allow client A to access the site via A.mySite.com and client B to use B.mySite.com. These would all point to the mySite.com domain with the same IP address. Can this be done in IIS? Do I have to register new domains for each client? Are there other ways to accomplish the same thing?
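    A sketch of the usual host-header approach, so no new domain registrations are needed -- one wildcard DNS record plus one binding per client on the existing site (the site ID 1 and paths are assumptions; check them in IIS Manager):

        REM DNS side: publish a wildcard record, e.g.   *.mySite.com.  IN  A  <server-ip>

        REM IIS 6 side: add host-header bindings to the existing web site
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set /w3svc/1/ServerBindings ":80:www.mySite.com" ":80:A.mySite.com" ":80:B.mySite.com"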


  • Configure Nginx On Separate Server For Zimbra Webmail

    - by alphadogg
    How do I properly configure a server with nginx to front a Zimbra server with multiple domains? I run a small SOHO network and use NAT/port forwarding on my Comcast router to get traffic to my handful of servers. I set up a server with Zimbra; call it host1.internal.local. The server currently has two domains, call them domain1.com and domain2.com. Both offer webmail access at webmail.domain1.com and webmail.domain2.com. I have a separate server with nginx. I want to configure nginx as a reverse proxy, such that I can direct all HTTP/HTTPS traffic, and send webmail traffic via matched host address/headers, to the Zimbra server. If possible, I'd like to know how to map IMAP, POP and SMTP traffic too. How would I do this?
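    A minimal sketch of the HTTP side, assuming the Zimbra web UI answers on host1.internal.local (adjust the scheme/port to the actual Zimbra config); nginx routes by Host header, so one server block can cover both webmail names:

        server {
            listen 80;
            server_name webmail.domain1.com webmail.domain2.com;

            location / {
                proxy_pass http://host1.internal.local;
                proxy_set_header Host $host;   # let Zimbra tell the domains apart
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    IMAP/POP/SMTP are not HTTP, so this block does not carry them; nginx has a separate mail proxy module for those protocols, or those ports can simply be forwarded straight to the Zimbra host at the router.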

