Search Results

Search found 13222 results on 529 pages for 'security gate'.

  • monitoring a /21 for potential bad guys with snort and port mirroring

    - by Adeodatus
    Hi all, I want to start monitoring our network more closely. It's an odd network in that it comprises two /22 public ranges and a slew of private admin IPs. I do have one point where it all comes together, and I can turn on port mirroring on the Catalyst there. Off that mirror port I'd like to stand up a box running various utilities. Snort is high on my list, but it would also be nice to gather traffic statistics with something like NetFlow. So, what are people's thoughts? We have the hardware available, so spinning up a box is easy. What should I run? I'd love to know what kind of nasty things are potentially going on, but I'd also like statistics on how people use the network so I can tune our systems and improve performance. I'm open, so please give me some ideas to go along with what I've got.
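
    A minimal sketch of the local SPAN setup on a Catalyst (IOS syntax); the interface numbers are assumptions, not from the question:

      ! Mirror all traffic through the aggregation port Gi0/1
      ! to the monitoring box on Gi0/24
      monitor session 1 source interface GigabitEthernet0/1 both
      monitor session 1 destination interface GigabitEthernet0/24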

    Read the article

  • Create restricted user on Debian server

    - by James Willson
    I want to create a user account for each of the key programs installed on my Debian server, for example: Tomcat, Nginx, Supervisor, PostgreSQL. This seems to be recommended based on my reading online. However, I want to restrict these accounts as much as possible, so that they don't have a shell login, don't have access to the other programs, and are as limited as possible while still functional. Would anyone mind telling me how this could be achieved? My reading so far suggests this:

      echo "/usr/sbin/nologin" >> /etc/shells
      useradd -s /usr/sbin/nologin tomcat

    But I think there may be a more complete way of doing it. EDIT: I'm using Debian Squeeze.
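
    A hedged sketch of a more locked-down service account on Debian: a system user with its own group, no home directory, and no shell ("tomcat" and the paths are illustrative):

      adduser --system --group --no-create-home --shell /usr/sbin/nologin tomcat
      # Give the service ownership of only its own tree, and shut out "other":
      chown -R tomcat:tomcat /var/lib/tomcat
      chmod -R o-rwx /var/lib/tomcat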

    Read the article

  • Custom fail2ban Filter for phpMyadmin bruteforce attempts

    - by Michael Robinson
    In my quest to block excessive failed phpMyAdmin login attempts with fail2ban, I've created a script that logs said failed attempts to a file: /var/log/phpmyadmin_auth.log

    Custom log: the format of the /var/log/phpmyadmin_auth.log file is:

      phpMyadmin login failed with username: root; ip: 192.168.1.50; url: http://somedomain.com/phpmyadmin/index.php
      phpMyadmin login failed with username: ; ip: 192.168.1.50; url: http://192.168.1.48/phpmyadmin/index.php

    Custom filter:

      [Definition]
      # Count all bans in the logfile
      failregex = phpMyadmin login failed with username: .*; ip: <HOST>;

    phpMyAdmin jail:

      [phpmyadmin]
      enabled  = true
      port     = http,https
      filter   = phpmyadmin
      action   = sendmail-whois[name=HTTP]
      logpath  = /var/log/phpmyadmin_auth.log
      maxretry = 6

    The fail2ban log contains:

      2012-10-04 10:52:22,756 fail2ban.server : INFO Stopping all jails
      2012-10-04 10:52:23,091 fail2ban.jail   : INFO Jail 'ssh-iptables' stopped
      2012-10-04 10:52:23,866 fail2ban.jail   : INFO Jail 'fail2ban' stopped
      2012-10-04 10:52:23,994 fail2ban.jail   : INFO Jail 'ssh' stopped
      2012-10-04 10:52:23,994 fail2ban.server : INFO Exiting Fail2ban
      2012-10-04 10:52:24,253 fail2ban.server : INFO Changed logging target to /var/log/fail2ban.log for Fail2ban v0.8.6
      2012-10-04 10:52:24,253 fail2ban.jail   : INFO Creating new jail 'ssh'
      2012-10-04 10:52:24,253 fail2ban.jail   : INFO Jail 'ssh' uses poller
      2012-10-04 10:52:24,260 fail2ban.filter : INFO Added logfile = /var/log/auth.log
      2012-10-04 10:52:24,260 fail2ban.filter : INFO Set maxRetry = 6
      2012-10-04 10:52:24,261 fail2ban.filter : INFO Set findtime = 600
      2012-10-04 10:52:24,261 fail2ban.actions: INFO Set banTime = 600
      2012-10-04 10:52:24,279 fail2ban.jail   : INFO Creating new jail 'ssh-iptables'
      2012-10-04 10:52:24,279 fail2ban.jail   : INFO Jail 'ssh-iptables' uses poller
      2012-10-04 10:52:24,279 fail2ban.filter : INFO Added logfile = /var/log/auth.log
      2012-10-04 10:52:24,280 fail2ban.filter : INFO Set maxRetry = 5
      2012-10-04 10:52:24,280 fail2ban.filter : INFO Set findtime = 600
      2012-10-04 10:52:24,280 fail2ban.actions: INFO Set banTime = 600
      2012-10-04 10:52:24,287 fail2ban.jail   : INFO Creating new jail 'fail2ban'
      2012-10-04 10:52:24,287 fail2ban.jail   : INFO Jail 'fail2ban' uses poller
      2012-10-04 10:52:24,287 fail2ban.filter : INFO Added logfile = /var/log/fail2ban.log
      2012-10-04 10:52:24,287 fail2ban.filter : INFO Set maxRetry = 3
      2012-10-04 10:52:24,288 fail2ban.filter : INFO Set findtime = 604800
      2012-10-04 10:52:24,288 fail2ban.actions: INFO Set banTime = 604800
      2012-10-04 10:52:24,292 fail2ban.jail   : INFO Jail 'ssh' started
      2012-10-04 10:52:24,293 fail2ban.jail   : INFO Jail 'ssh-iptables' started
      2012-10-04 10:52:24,297 fail2ban.jail   : INFO Jail 'fail2ban' started

    When I issue sudo service fail2ban restart, fail2ban emails me to say the ssh jail has restarted, but I receive no such email about my phpmyadmin jail, and repeated failed logins to phpMyAdmin do not cause an email to be sent. Have I missed some critical setup? Is my filter's regular expression wrong?
    Update: added changes from the default installation. Starting with a clean fail2ban installation:

      cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

    Change the email address to my own, and the action to:

      action = %(action_mwl)s

    Append the following to jail.local:

      [phpmyadmin]
      enabled  = true
      port     = http,https
      filter   = phpmyadmin
      action   = sendmail-whois[name=HTTP]
      logpath  = /var/log/phpmyadmin_auth.log
      maxretry = 4

    Add the following to /etc/fail2ban/filter.d/phpmyadmin.conf:

      # phpmyadmin configuration file
      #
      # Author: Michael Robinson
      #
      [Definition]
      # Option: failregex
      # Notes.: regex to match the password failures messages in the logfile. The
      #         host must be matched by a group named "host". The tag "<HOST>" can
      #         be used for standard IP/hostname matching and is only an alias for
      #         (?:::f{4,6}:)?(?P<host>\S+)
      # Values: TEXT
      #
      # Count all bans in the logfile
      failregex = phpMyadmin login failed with username: .*; ip: <HOST>;

      # Option: ignoreregex
      # Notes.: regex to ignore. If this regex matches, the line is ignored.
      # Values: TEXT
      #
      # Ignore our own bans, to keep our counts exact.
      # In your config, name your jail 'fail2ban', or change this line!
      ignoreregex =

    Restart fail2ban:

      sudo service fail2ban restart

    PS: I like eggs
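
    A sanity check worth running on a setup like this: fail2ban ships its own regex tester, which shows whether the filter's failregex actually matches lines in the custom log.

      fail2ban-regex /var/log/phpmyadmin_auth.log /etc/fail2ban/filter.d/phpmyadmin.conf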

    Read the article

  • How to restrict file system when logged into terminal services

    - by pghcpa
    What I need to accomplish: with one login, when a user is physically in the building they should see everything; when they are using Terminal Services with the same login, they should not be able to see the file system on the network. I can lock down the PC running Terminal Services, as that is its only use. Details: Windows Server 2003 with Terminal Services. One login per user (e.g., johndoe). When johndoe logs into the network at his desk in the office, he can see the network files according to group policy. When johndoe logs into Terminal Services from outside the building, we do not want to allow him to see the network. We use 2X to publish the app, but that app has a "feature" that allows the user to see the network. The published application on Terminal Services (only) is a document management system that is tied to the Windows login, so I can't give users two logins.

    Read the article

  • How to avoid apache2 revealing hidden directory and/or file structure

    - by matnagel
    When someone fetches a denied URL that exists, he gets:

      Forbidden
      You don't have permission to access /admin/admin.php on this server.
      Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.9 with Suhosin-Patch Server

    When someone goes to a URL that does not exist, he gets:

      Not Found
      The requested URL /notexisting/notthere.php was not found on this server.
      Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.9 with Suhosin-Patch Server

    This way someone can find out information about the directory structure of an area that is actually not open to the public. Is this true? If I were paranoid, what could I do? Just curious.
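
    A hedged sketch of one common hardening approach: answer denied-but-existing paths with the same 404 a missing path would get, so a probe cannot tell them apart (the /admin path is from the question; the rest is an assumption):

      # Serve the hidden area as if it did not exist at all (mod_alias)
      RedirectMatch 404 ^/admin/
      # And trim the version banner while at it (server-level config)
      ServerTokens Prod
      ServerSignature Off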

    Read the article

  • Auditing events 4656 and 4658 on Windows folder on Server 2008

    - by PCurd
    During an overnight system state backup we see thousands of success audit events (4656, 4658) on the folders c:\windows\servicing, system32, and others under the Windows folder. We use file success auditing on some files, so I can't disable auditing outright, but this deluge is filling up the logs and making reporting tricky. What is the harm of changing the auditing settings on the Windows folder? What are the recommended settings on the files for those people doing system state backups? Thanks,
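
    If the per-file SACLs must stay, one commonly suggested mitigation is trimming the noisiest subcategory globally: event 4658 is generated by the Handle Manipulation subcategory, so disabling its success auditing removes much of the close-handle noise without touching the file SACLs. A hedged sketch (check the current state first):

      auditpol /get /category:"Object Access"
      auditpol /set /subcategory:"Handle Manipulation" /success:disable /failure:disable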

    Read the article

  • Why do I have untrusted certificates for Google, Yahoo, Mozilla and others?

    - by jackweirdy
    In the HTTPS/SSL section of chrome://chrome/settings, I see certificates for Google, Yahoo, Mozilla and other sites listed as untrusted (screenshot of the certificate list not reproduced here). What does this mean, and is there something wrong? I have a basic understanding of SSL/TLS (I'm not claiming to be completely familiar, but I'm fairly confident I know my way around it), but I don't understand why I have certificates installed on my machine specifically for these sites. From my understanding, I should only have certificates for Certificate Authorities, and any site I visit over SSL/TLS should present a certificate signed by one of those trusted CAs for me to trust the site. My worry is that if someone has maliciously installed a certificate for these sites on my machine, they could perform a DNS spoofing attack (or a number of other attacks) to hijack my connection to my email account without me knowing, and, as they've got the private counterpart to the certificate on my machine, decrypt the communication. NB: I'm also aware that CA certificates aren't just within Chromium and are used system-wide as part of libssl; they're stored in /etc/ssl/certs. What I'd like to know is: Is this correct? (The big red boxes make me think no.) Is this malicious or benign? What can I do to resolve this problem, if indeed it is a problem? Thanks :)
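
    One way to inspect exactly what is installed, assuming this is Chrome/Chromium on Linux, where the per-user certificate store is the NSS database (certutil comes from the libnss3-tools package on Debian/Ubuntu):

      certutil -d sql:$HOME/.pki/nssdb -L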

    Read the article

  • How to set the default file permissions on ALL newly created files in linux

    - by eviljack
    My question is similar to this one: http://stackoverflow.com/questions/228534/linux-default-file-permission, but there is no scp/ftp client involved and that question looks abandoned. Simply put: I want to be able to decree, at some global level, that newly created files never have world-writable permissions (i.e. 0775 at most). I tried putting umask 02 in /etc/profile and then in my .bash_profile, but it only works for scripts or new files that I create in a shell. It doesn't work for files that another binary creates. Is there any way to apply this to all new files, however they are created?
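
    A hedged sketch of one mechanism that reaches further than shell profiles: pam_umask applies a umask to every PAM-managed session, not just interactive shells. On Debian-style systems the line would go in /etc/pam.d/common-session:

      session optional pam_umask.so umask=0002

    Daemons started outside PAM (e.g. from init scripts) still inherit whatever umask their init system sets, so even this is not fully global.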

    Read the article

  • Static NAT in AWS's Virtual Private Cloud (VPC)

    - by user1050797
    Currently, in a VPC with a public and a private subnet, all internet-bound traffic from the private subnet can be routed via a NAT instance. The NAT instance port-address-translates each packet's source IP to the NAT instance's elastic IP, so the public server can reply to that public address. This is a PAT mechanism. My question: is there a way to do static NAT on my NAT instance, using the same NAT instance to statically map an unassociated but reserved elastic IP to a private-subnet host? This NAT instance would behave like a physical firewall doing static NAT for a bunch of private IPs.
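
    A hedged sketch of what 1:1 NAT could look like on a Linux NAT instance, assuming the reserved EIP is associated with a secondary private address 10.0.0.20 on the NAT instance and the internal host is 10.0.1.50 (both addresses invented for illustration; the instance's source/destination check must also be disabled):

      iptables -t nat -A PREROUTING  -d 10.0.0.20 -j DNAT --to-destination 10.0.1.50
      iptables -t nat -A POSTROUTING -s 10.0.1.50 -j SNAT --to-source 10.0.0.20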

    Read the article

  • What consequences to take from what i read in logfiles?

    - by Helene Bilbo
    For some weeks now I have been managing my first web server, a Seaside application behind an Apache proxy on Linode, and I installed logwatch to send me daily logs. Where can I get information on when I have to act on what I read in these logwatch reports? For example, I read that all kinds of people try to log in to funny nonexistent accounts, all kinds of webcrawlers probe for nonexistent CMS login pages, and some IP addresses get banned and unbanned by fail2ban... I assume that's normal? Is it? But how do I know when I actually have to do something? What do I look for in the logs?

    Read the article

  • OpenVPN vs. IPSec - Pros and Cons, what to use?

    - by jens
    Interestingly, I have not found any good search results when searching for "OpenVPN vs IPSec". I need to set up a private LAN over an untrusted network, and as far as I know both approaches are valid, but I do not know which one is better. I would be very thankful if you could list the pros and cons of both approaches, and maybe your suggestions and experiences on what to use. Update (regarding the comment/question): In my concrete case the goal is to have any number of servers (with static IPs) connected transparently with each other. A small portion of "dynamic clients like road warriors" (with dynamic IPs) should also be able to connect. The main goal, however, is a "transparent secure network" running on top of an untrusted network. I am quite a newbie, so I do not know how to correctly interpret "1:1 point-to-point connections"; the solution should support broadcasts and all that stuff, so it is a fully functional network... Thank you very much!! Jens
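
    For what it's worth, a hedged sketch of the OpenVPN side of such a setup: a bridged (tap) server carries broadcasts, which routed (tun) mode does not; all addresses here are illustrative only.

      dev tap0
      server-bridge 192.168.10.1 255.255.255.0 192.168.10.200 192.168.10.220
      client-to-client
      keepalive 10 120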

    Read the article

  • How to Protect Sensitive (HIPAA) SQL Server Standard Data and Log Files

    - by Quesi
    I am dealing with electronic personal health information (ePHI or PHI), and HIPAA regulations require that only authorized users can access ePHI. Column-level encryption may be of value for some of the data, but I need the ability to do LIKE searches on some of the PHI fields, such as name. Transparent Data Encryption (TDE) is a feature of SQL Server 2008 for encrypting database and log files. As I understand it, this prevents someone who gains access to the MDF, LDF, or backup files from being able to do anything with them, because they are encrypted. TDE is only in the Enterprise and Developer editions of SQL Server, and Enterprise is cost-prohibitive for my particular scenario. How can I get similar protection on SQL Server Standard? Is there a way to encrypt the database and backup files (is there a third-party tool)? Or, just as good, is there a way to prevent the files from being used if the disk were attached to another machine (Linux or Windows)? Administrator access to the files from the same machine is fine; I just want to prevent any issues if the disk were removed and hooked up to another machine. What are some of the solutions for this that are out there?
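
    One commonly suggested direction on Standard edition is full-volume encryption, which addresses exactly the removed-disk scenario for the MDF/LDF/backup files. A hedged sketch with BitLocker, assuming the data files live on D: and a Windows version that ships manage-bde:

      manage-bde -on D: -RecoveryPassword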

    Read the article

  • Returning "200 OK" in Apache on HTTP OPTIONS requests

    - by i.
    I'm attempting to implement cross-domain HTTP access control without touching any code. I've got my Apache 2 server returning the correct Access-Control headers with this block:

      Header set Access-Control-Allow-Origin "*"
      Header set Access-Control-Allow-Methods "POST, GET, OPTIONS"

    I now need to prevent Apache from executing my code when the browser sends an HTTP OPTIONS request (the method is stored in the REQUEST_METHOD environment variable) and instead return 200 OK. How can I configure Apache to respond "200 OK" when the request method is OPTIONS? I've tried this mod_rewrite block, but the Access-Control headers are lost:

      RewriteEngine On
      RewriteCond %{REQUEST_METHOD} OPTIONS
      RewriteRule ^(.*)$ $1 [R=200,L]
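
    A hedged guess at a fix (untested sketch): the "always" condition on Header attaches the headers to internally generated responses as well, so they should survive the mod_rewrite short-circuit:

      Header always set Access-Control-Allow-Origin "*"
      Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS"
      RewriteEngine On
      RewriteCond %{REQUEST_METHOD} OPTIONS
      RewriteRule ^ - [R=200,L]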

    Read the article

  • mod_security2 and w00tw00t attacks

    - by Saif Bechan
    I have a server running Apache 2.2.3, and I recently installed mod_security2 because I get attacked a lot by requests like these:

      [Wed Mar 24 02:35:41 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
      [Wed Mar 24 02:47:31 2010] [error] [client 202.75.211.90] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
      [Wed Mar 24 02:47:49 2010] [error] [client 95.228.153.177] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
      [Wed Mar 24 02:48:03 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)

    I tried configuring mod_security2 like this:

      SecFilterSelective REQUEST_URI "w00tw00t.at.ISC.SANS.DFind"
      SecFilterSelective REQUEST_URI "\w00tw00t.at.ISC.SANS"
      SecFilterSelective REQUEST_URI "w00tw00t.at.ISC.SANS"
      SecFilterSelective REQUEST_URI "w00tw00t.at.ISC.SANS.DFind:"
      SecFilterSelective REQUEST_URI "w00tw00t.at.ISC.SANS.DFind:)"

    The thing with mod_security2 is that SecFilterSelective cannot be used; it gives me errors. Instead I use rules like this:

      SecRule REQUEST_URI "w00tw00t.at.ISC.SANS.DFind"
      SecRule REQUEST_URI "\w00tw00t.at.ISC.SANS"
      SecRule REQUEST_URI "w00tw00t.at.ISC.SANS"
      SecRule REQUEST_URI "w00tw00t.at.ISC.SANS.DFind:"
      SecRule REQUEST_URI "w00tw00t.at.ISC.SANS.DFind:)"

    Even this does not work. I don't know what to do anymore. Does anyone have any advice?
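
    A hedged observation plus sketch: SecFilterSelective is ModSecurity 1.x syntax, and a 2.x rule normally wants an operator and an action list. Note also that these particular requests are sent without a Host header, so Apache may reject them before ModSecurity's request phase runs, in which case no rule will catch them. An illustrative 2.x rule (the id number is arbitrary):

      SecRuleEngine On
      SecRule REQUEST_URI "@contains w00tw00t.at.ISC.SANS.DFind" "phase:1,t:none,log,deny,status:404,id:1000001"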

    Read the article

  • How to disable 3rd party cookies in Chrome?

    - by David Nordvall
    I have both the "stop websites from storing local data" and the "block all third party cookies without exception" settings enabled in Chrome 12 (I'm not sure what the exact names of these settings are in english as I run Chrome with swedish localization). I do however have two problems. My first problem is that when I'm visiting one of my local news paper's site (and surely other), cookies from www.facebook.com is allowed for some reason. I suspect that the reason is that I have added an exception to the www.facebook.com domain but as the setting "block all third party cookies without exception" implies, that shouldn't matter. My second problem is that if I check what cookies are stored on my computer after browsing for a while, I have tons of cookies that are not on my white list. Primarily from ad services. My expectations from enabling the above mentioned settings was that only cookies that fulfill the two folling requirements would be accepted: the cookies must be from the domain in my address bar the cookies must be from a domain on my whitelist Apparently this isn't the case. The question is, have I completely misunderstood the settings or is this a bug? And, either way, is there a way to accomplish my desired behavior?

    Read the article

  • VPN Error 619: Behind Cisco Router WRT310N

    - by ty91011
    I've researched a lot on all the forums, and this error is too generic for any of the proposed solutions to work, so I'll try to give as much detail as I can, including what I've already tried. I'm running a CentOS PPTP server behind a Linksys (Cisco) WRT310N router. Multiple clients from outside, on different OSes, have failed with the same error 619, even with Windows Firewall turned off and antivirus disabled. I believe this is a router and IP-routing issue, not a client issue. When I connect from a client on the same router as the VPN server, it works when I use the 192.x network address, but it doesn't work with the public IP address. I've tried telnet to port 1723 from an outside server and I get in. I've opened up the PPTP port (TCP 1723) and the VPN UDP port (500) on the router, routed to the VPN server's IP, and allowed GRE (IP protocol 47). Also, the server's router is behind a DSL modem. I had a glimmer of hope when this site: http://www.chicagotech.net/casestudy/vpnerror619.htm suggested that the PPPoE authentication should reside on the router and not the modem, but I still came up empty. So does anybody know what the problem is?
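
    Since GRE is an IP protocol rather than a TCP/UDP port, a rule forwarding "port 47" will not pass it. A quick check of whether GRE reaches the CentOS box at all during a connection attempt (the interface name is an assumption):

      tcpdump -n -i eth0 'ip proto 47'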

    Read the article

  • Bad to be logged in as admin all the time?

    - by poke
    At the office where I work, three of the other members of the IT staff are logged into their computers all the time with accounts that are members of the Domain Admins group. I have serious concerns about being logged in with admin rights (whether local or domain-wide). As such, for everyday computer use I use an account with only regular user privileges. I also have a different account that is part of the Domain Admins group, and I use it when I need to do something that requires elevated privileges on my computer, on one of the servers, or on another user's computer. What is the best practice here? Should network admins be logged in with rights to the entire network all the time (or even to their local computer, for that matter)?

    Read the article

  • Good infrastructure design questions for software developers?

    - by JakeRobinson
    Building on Jeff's blog post titled Vampires (Programmers) versus Werewolves (Sysadmins) From my perspective, the whole point of the company is to talk about what we're doing. Getting things done is important, of course, but we have to stop occasionally to write up what we're doing, how we're doing it, and why we're even doing it in the first place -- including all our doubts and misgivings and concerns. So, what are some questions you ask your software developers when they request a server?

    Read the article

  • win2008 r2 IIS7.5 - setting up a custom user for an application pool, and trust issues

    - by Ken Egozi
    Scenario: a blank Windows Server 2008 R2 install. The goal was to have a couple of sites running with isolated pools and dedicated users.

      - Created a new folder for a new website, c:\web\siteA\wwwroot, with the (ASP.NET) app deployed there in the /bin folder.
      - Created a user named "appuser" and added it to the IIS_IUSRS group.
      - Gave the website folder read and execute permissions for IIS_IUSRS and appuser.
      - Created the IIS site and set the app-pool identity to appuser.

    Now I'm getting a YSOD telling me that the trust level is too low: SecurityException: That assembly does not allow partially trusted callers. Adding <trust level="Full" /> to the web.config did not help. Changing the app-pool user to Administrator makes the site run. Setting the "anonymous user identity" to either IUSR or the app-pool identity makes no difference. Any idea? Is there a step-by-step howto guide for setting up users for isolated app pools on IIS 7.5?
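
    For reference, a hedged sketch of the same pool setup scripted with appcmd (the site/pool name comes from the question's scenario; the password is a placeholder):

      %windir%\system32\inetsrv\appcmd set apppool "siteA" /processModel.identityType:SpecificUser /processModel.userName:appuser /processModel.password:P@ssw0rd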

    Read the article

  • Exchange 2010 OWA with Client Certificates

    - by Christian
    I have enabled client certificate authentication for Exchange 2010 through IIS7, and users are prompted to choose their user certificate when they log in, but they are all then presented with the following error message:

      Request Url: https://<domain_name>:443/owa/
      User host address: <server_ip_address>
      OWA version: 14.1.355.2

      Exception
      Exception type: System.NullReferenceException
      Exception message: Object reference not set to an instance of an object.

      Call stack
      Microsoft.Exchange.Clients.Owa.Core.RequestDispatcher.GetUserIdentities(OwaContext owaContext, OwaIdentity& logonIdentity, OwaIdentity& mailboxIdentity, Boolean& isExplicitLogon, Boolean& isAlternateMailbox, ExchangePrincipal& logonExchangePrincipal)
      Microsoft.Exchange.Clients.Owa.Core.RequestDispatcher.InternalDispatchRequest(OwaContext owaContext)
      Microsoft.Exchange.Clients.Owa.Core.RequestDispatcher.DispatchRequest(OwaContext owaContext)
      Microsoft.Exchange.Clients.Owa.Core.OwaRequestEventInspector.OnPostAuthorizeRequest(Object sender, EventArgs e)
      System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
      System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    The method I followed to enable certificate authentication was from this post: http://www.miru.ch/2011/04/how-to-enable-certificate-based-authentication-on-exchange-2010/. Any ideas? Google isn't being very helpful.

    Read the article

  • Identifying mail account used in CRAM-MD5 transaction

    - by ManiacZX
    I suppose this is one of those cases where the tool for identifying the problem is also the tool for taking advantage of it. I have a mail server through which I am seeing spam being sent. It is not an open relay; the messages in question are being sent by someone authenticating to the SMTP server with CRAM-MD5. However, the logs only capture the actual data passed, which has been hashed, so I cannot see which user account is being used. My suspicion is a simple username/password combo, or a user account's password has otherwise been compromised, but I cannot do much about it without knowing which user it is. Of course I can block the IP that is doing it, but that doesn't fix the real problem. I have both the CRAM-MD5 Base64 challenge string and the hashed client auth string containing the username, password and challenge string. I am looking for a way to either reverse this (which I haven't been able to find any information on), or otherwise I suppose I need a dictionary-attack tool designed for CRAM-MD5 that runs through two lists, one of usernames and one of passwords, with the constant challenge string, until it finds a result matching the authentication string I have logged. Any information on reversing using the data I have logged, a tool to identify it, or any alternative methods you have used in this situation would be greatly appreciated.
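
    One detail worth noting (per RFC 2195): the client's CRAM-MD5 response is only Base64-encoded as a whole, not hashed; it decodes to "username <hex-digest>", so the account name should be recoverable directly from the logged string. Only the digest part is an HMAC-MD5 of the challenge keyed by the password. A hedged sketch, with the logged values as placeholders:

      # Decode the logged client response; the first field is the username
      printf '%s' '<logged-client-string>' | base64 -d

      # Dictionary check for the password once the username is known
      challenge=$(printf '%s' '<logged-challenge-string>' | base64 -d)
      digest='<hex-digest-from-decoded-response>'
      while IFS= read -r pw; do
        mac=$(printf '%s' "$challenge" | openssl dgst -md5 -hmac "$pw" | awk '{print $NF}')
        [ "$mac" = "$digest" ] && { echo "match: $pw"; break; }
      done < wordlist.txt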

    Read the article

  • IIS7 authentication

    - by Kev
    To give our users the ability to protect content on their IIS6 sites, we used a tool called IISPassword, which emulates .htaccess to provide Basic authentication. There isn't support for IISPassword on IIS7 at the moment. Is there an equivalent mechanism built into IIS7 that I can use instead? I'm well aware of ASP.NET Forms authentication, but I need a way for users deploying non-ASP.NET content (such as PHP, Perl, images, etc.) to be able to use Basic authentication without having to write code to achieve it. Is this possible?
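
    For comparison, a hedged sketch of what native IIS7 Basic authentication looks like when delegated to a per-directory web.config (this assumes the Basic authentication role service is installed and the authentication sections are unlocked for delegation at server level):

      <configuration>
        <system.webServer>
          <security>
            <authentication>
              <basicAuthentication enabled="true" />
              <anonymousAuthentication enabled="false" />
            </authentication>
          </security>
        </system.webServer>
      </configuration>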

    Read the article

  • .htaccess to deny access to most xml files

    - by CEich
    I recently had a Joomla site hacked, so I'm trying to harden the site a bit. There's a section in the recommended .htaccess that restricts outside access to the XML files that come with extensions. However, it also keeps my sitemap.xml file from being accessed. How do I allow a certain file while keeping the rest blocked? Here's the default code:

      <Files ~ "\.xml$">
      Order allow,deny
      Deny from all
      Satisfy all
      </Files>

    and my modification, which caused a 500 error:

      <Files ~ "(?!sitemap)\.xml$">
      Order allow,deny
      Deny from all
      Satisfy all
      </Files>
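
    A hedged alternative that avoids the lookahead entirely (as written, the unanchored lookahead would still match sitemap.xml anyway): keep the blanket deny, then re-allow the one file in a later section, since later <Files> sections override earlier ones:

      <Files ~ "\.xml$">
          Order allow,deny
          Deny from all
          Satisfy all
      </Files>
      <Files sitemap.xml>
          Order deny,allow
          Allow from all
      </Files>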

    Read the article

  • Nginx Password Protect Directory Downloads Source Code

    - by Pamela
    I'm trying to password protect a WordPress login page on my Nginx server. When I navigate to http://www.example.com/wp-login.php, this brings up the "Authentication Required" prompt (not the WordPress login page) for a username and password. However, when I input the correct credentials, it downloads the PHP source code (wp-login.php) instead of showing the WordPress login page. Permission for my htpasswd file is set to 644. Here are the directives in question within the server block of my website's configuration file:

      location ^~ /wp-login.php {
          auth_basic "Restricted Area";
          auth_basic_user_file htpasswd;
      }

    Alternately, here are the entire contents of my configuration file (including the above four lines):

      server {
          listen *:80;
          server_name domain.com www.domain.com;
          root /var/www/domain.com/web;
          index index.html index.htm index.php index.cgi index.pl index.xhtml;
          error_log /var/log/ispconfig/httpd/domain.com/error.log;
          access_log /var/log/ispconfig/httpd/domain.com/access.log combine$

          location ~ /\. {
              deny all;
              access_log off;
              log_not_found off;
          }

          location = /favicon.ico {
              log_not_found off;
              access_log off;
          }

          location = /robots.txt {
              allow all;
              log_not_found off;
              access_log off;
          }

          location /stats/ {
              index index.html index.php;
              auth_basic "Members Only";
              auth_basic_user_file /var/www/web/stats/.htp$
          }

          location ^~ /awstats-icon {
              alias /usr/share/awstats/icon;
          }

          location ~ \.php$ {
              try_files /b371b8bbf0b595046a2ef9ac5309a1c0.htm @php;
          }

          location @php {
              try_files $uri =404;
              include /etc/nginx/fastcgi_params;
              fastcgi_pass unix:/var/lib/php5-fpm/web11.sock;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_intercept_errors on;
          }

          location / {
              try_files $uri $uri/ /index.php?$args;
              client_max_body_size 64M;
          }

          location ^~ /wp-login.php {
              auth_basic "Restricted Area";
              auth_basic_user_file htpasswd;
          }
      }

    If it makes any difference, I'm using Ubuntu 14.04.1 LTS with Nginx 1.4.6 and ISPConfig 3.0.5.4p3.
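
    A hedged guess at the cause, with a sketch: "location ^~ /wp-login.php" takes precedence over the regex location "~ \.php$", so requests for wp-login.php never reach the FastCGI handler and Nginx serves the file raw. Putting the PHP hand-off inside the protected location (mirroring the @php block above; the htpasswd path is an assumption) would look something like:

      location = /wp-login.php {
          auth_basic "Restricted Area";
          auth_basic_user_file /etc/nginx/htpasswd;
          try_files $uri =404;
          include /etc/nginx/fastcgi_params;
          fastcgi_pass unix:/var/lib/php5-fpm/web11.sock;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      }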

    Read the article
