Search Results

Search found 1424 results on 57 pages for 'protect'.

Page 41 of 57

  • How can I solve the apache2 httpd error "mixing * ports and non-* ports with a NameVirtualHost address is not supported"?

    - by rrc7cz
    Here is the error I get when booting up Apache2:

        * Starting web server apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        (the [error] line above is logged four times)
        [Wed Oct 21 16:37:26 2009] [warn] NameVirtualHost *:80 has no VirtualHosts

    I first followed this guide on setting up Apache to host multiple sites: http://www.debian-administration.org/articles/412

    I then found a similar question on Server Fault and tried applying the solution, but it didn't help. Here is an example of my final VirtualHost config:

        <VirtualHost *:80>
          ServerAdmin [email protected]
          ServerName www.xxx.com
          ServerAlias xxx.com

          # Indexes + Directory Root.
          DirectoryIndex index.html
          DocumentRoot /var/www/www.xxx.com

          # Logfiles
          ErrorLog /var/www/www.xxx.com/logs/error.log
          CustomLog /var/www/www.xxx.com/logs/access.log combined
        </VirtualHost>

    with the domain X'd out to protect the innocent :-) Also, I have the conf.d/virtual.conf file mentioned in the guide looking like this:

        NameVirtualHost *

    The odd thing is that everything appears to work fine for two of the three sites.
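
    The usual cause of this particular error is that the NameVirtualHost directive and the <VirtualHost> containers disagree about the port: NameVirtualHost * declares a wildcard with no port, while the vhosts bind to *:80. A minimal sketch of the usual fix, assuming an Apache 2.2-style configuration, is to make the two match, e.g. in conf.d/virtual.conf:

        # Declare name-based hosting on the same address:port the vhosts use
        NameVirtualHost *:80

    and keep every site defined as <VirtualHost *:80>. (Alternatively, drop the port from both, but matching on *:80 is the common convention.)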

    Read the article

  • Can I subnet a subnet?

    - by Portman
    Apologies in advance for the botched terminology. I have read the Server Fault subnet wiki, but this is more of an ISP question. I currently have a /27 block of public IPs. I give my router the first address in this pool and then use 1-to-1 NAT for all the servers behind the firewall, so that they each get their own public IP. The router/firewall is currently using (actual addresses removed to protect the guilty):

        IP Address:  XXX.XXX.XXX.164
        Subnet mask: 255.255.255.224
        Gateway:     XXX.XXX.XXX.161

    What I would like to do is break my subnet out into two separate /28 subnets, and do this in a way that is transparent to the ISP (i.e., they see me as continuing to operate a single /27). Currently, my topology looks like:

        ISP
         |
        [Router/Firewall]
         |
        [Managed Ethernet Switch]
         /         |         \
        [Server1] [Server2] [Server3] (etc)

    Instead, I would like it to look like:

        ISP
         |
        [Switch]
         /       \
        [Router1]  [Router2]
         |    |     |    |
        [S1] [S2]  [S3] [S4] (etc)

    As you can see, this would partition me into two separate networks. I'm struggling with what the correct IP settings would be on Router1 and Router2. Here's what I have right now:

                     Router1          Router2
        IP Address:  XXX.XXX.XXX.164  XXX.XXX.XXX.180
        Subnet mask: 255.255.255.240  255.255.255.240
        Gateway:     XXX.XXX.XXX.161  XXX.XXX.XXX.161

    Note that normally you would expect Router2 to have a gateway of .177, but I'm trying to get them both to use the gateway originally given to me by the ISP. Is subnetting like this in fact possible, or am I completely botching the most basic concepts?
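
    For reference, a worked breakdown of the address math implied by the .161 gateway and .164 router address:

        Original /27 (mask 255.255.255.224): .160 - .191 (usable .161 - .190)
        Split into two /28s (mask 255.255.255.240):
          Subnet A: .160 - .175  (network .160, usable .161 - .174, broadcast .175)
          Subnet B: .176 - .191  (network .176, usable .177 - .190, broadcast .191)

    Note that the ISP gateway .161 falls inside subnet A, so a router configured with a /28 mask and an address in subnet B (such as .180) would no longer see .161 as an on-link gateway - which is exactly the conflict the proposed Router2 settings run into.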

    Read the article

  • SPF hardfail and DKIM failure when recipient has e-mail forwarding

    - by Beaming Mel-Bin
    I configured hardfail SPF for my domain and DKIM message signing on my SMTP server. Since this is the only SMTP server that should be used for outgoing mail from my domain, I didn't foresee any complications. However, consider the following situation: I sent an e-mail message via my SMTP server to my colleague's university e-mail. The problem is that my colleague forwards his university e-mail to his GMail account. These are the headers of the message after it reaches his GMail mailbox:

        Received-SPF: fail (google.com: domain of [email protected] does not designate 192.168.128.100 as permitted sender) client-ip=192.168.128.100;
        Authentication-Results: mx.google.com; spf=hardfail (google.com: domain of [email protected] does not designate 192.168.128.100 as permitted sender) [email protected]; dkim=hardfail (test mode) [email protected]

    (Headers have been sanitized to protect the domains and IP addresses of the non-Google parties.) GMail checks the last SMTP server in the delivery chain against my SPF and DKIM records (rightfully so). Since the last SMTP server in the delivery chain was the university's server and not my server, the check results in an SPF hardfail and DKIM failure. Fortunately, GMail did not mark the message as spam, but I'm concerned that this might cause a problem in the future. Is my implementation of SPF hardfail perhaps too strict? Any other recommendations or potential issues that I should be aware of? Or maybe there is a more ideal configuration for the university's e-mail forwarding procedure? I know that the forwarding server could possibly change the envelope sender, but I see that getting messy.
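
    For comparison, a common middle ground is to publish SPF with a softfail rather than a hardfail qualifier, so forwarded mail is flagged rather than rejected outright. A hypothetical TXT record sketch (the hostname and IP are placeholders, not values from the question):

        example.com.  IN  TXT  "v=spf1 mx ip4:203.0.113.10 ~all"

    The trailing ~all means "softfail anything else"; -all is the hardfail form. The forwarding-side fix, if the university were willing, is SRS (Sender Rewriting Scheme), which rewrites the envelope sender on forward - essentially the "change the envelope sender" option the question already mentions.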

    Read the article

  • URL Rewriting on GoDaddy Virtual Server

    - by Aristotle
    I migrated a Kohana 2 application from a shared-hosting environment over to a virtual dedicated server. After this migration, I can't seem to get my .htaccess file working again. I apologize up front, but over the years I have never experienced so much frustration with anything else as I do with the dreaded .htaccess file. Presently I have my project installed in a directory within my public folder:

        /var/html/www/info.php        (general information about server)
        /var/html/www/logo.jpg        (some flat file)
        /var/html/www/somesite.com/   [Kohana site exists here]

    So my .htaccess file is within that directory, and has the following contents:

        # Turn on URL rewriting
        RewriteEngine On

        # Installation directory
        RewriteBase /somesite.com/

        # Protect application and system files from being viewed
        # This is only necessary when these files are inside the webserver document root
        RewriteRule ^(application|modules|system) - [R=404,L]

        # Allow any files or directories that exist to be displayed directly
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d

        # Rewrite all other URLs to index.php/URL
        RewriteRule .* index.php?kohana_uri=$0 [PT,QSA,L]

        # Alternatively, if the rewrite rule above does not work, try this instead:
        #RewriteRule .* index.php?kohana_uri=$0 [PT,QSA,L]

    This doesn't work. The initial controller is loaded, since index.php is called up implicitly when nothing else is in the URL. But if I try to load some other non-default controller, the site fails. If I place index.php back in the URL, the call to other controllers works just fine. I'm really at my wits' end and would appreciate some direction here.
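
    When index.php-prefixed URLs work but clean URLs don't after a server move, the usual suspects are mod_rewrite not being enabled or Apache ignoring .htaccess because AllowOverride is off. A minimal sketch, assuming a Debian/Ubuntu-style Apache 2 layout (the paths are illustrative):

        # enable mod_rewrite and reload Apache
        a2enmod rewrite
        service apache2 reload

        # in the vhost, allow .htaccess to define rewrite rules
        <Directory /var/html/www>
            Options FollowSymLinks
            AllowOverride All
        </Directory>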

    Read the article

  • How to remove the text-protecting single quote from all selected cells in LibreOffice Calc?

    - by Ivan
    I've imported a CSV file whose first column holds date-time values in ISO 8601 format, like 2012-01-01T00:00:00.000Z for the first moment of the year 2012. Then, wanting LibreOffice to recognize the format (as I was looking to plot a diagram), I selected the column, chose Format Cells... and entered the custom time format YYYY-MM-DDTHH:MM:SS.000Z. This only seems to work if I edit a cell and remove the hidden single quote from its beginning (the quote protects a cell's content from being interpreted): all the newly formatted cells still store values like '2012-01-01T00:00:00.000Z (note the single quote - it is only visible when you edit a particular cell). I would have to do that for every cell in the column. How can I automate this? UPDATE: I've already found a solution for my particular case: it helps to set the column format to "time" in the CSV import dialogue. But I am still curious how this could be done if I no longer had the original .csv file to import, but only the .ods file with the data already imported without the format specified at import time.

    Read the article

  • Why are my Windows 7 updates continuously failing?

    - by Chris C.
    I'm an advanced-level user here with an odd issue. I have two Windows updates that are failing to install, every single time. I'm getting a mysterious "Code 1" error on both updates, an error for which I'm having difficulty finding a solution. The updates in question are:

        Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
        System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]

    Because these updates are failing, the Shut Down button in my Start menu always has the shield icon next to it, indicating that "new" updates will be installed on shut down. But, of course, they fail, and when the PC is restarted the shield icon is still there. When checking the update history and viewing the details of the failed updates, I get the following:

        Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
        Installation date: 6/29/2011 3:00 AM
        Installation status: Failed
        Error details: Code 1
        Update type: Important
        A security issue has been identified leading to MFC application vulnerability in DLL planting due to MFC not specifying the full path to system/localization DLLs. You can protect your computer by installing this update from Microsoft. After you install this item, you may have to restart your computer.
        More information: http://go.microsoft.com/fwlink/?LinkId=216803

        System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]
        Installation date: 6/28/2011 3:00 AM
        Installation status: Failed
        Error details: Code 1
        Update type: Important
        This tool is being offered because an inconsistency was found in the Windows servicing store which may prevent the successful installation of future updates, service packs, and software. This tool checks your computer for such inconsistencies and tries to resolve issues if found.
        More information: http://support.microsoft.com/kb/947821

    About my system: I'm running Windows 7 Home Premium x64 Edition. This is a custom PC build and the OS was installed fresh, not an upgrade from a previous version. I've been running this system for about 4 months. Windows Updates aside, the system is usually quite stable. Thanks in advance for your help!
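
    A sketch of the generic first-aid steps for repeated update failures, assuming an elevated command prompt (these are standard Windows commands, not a guaranteed fix for "Code 1"):

        net stop wuauserv
        net stop bits
        ren %windir%\SoftwareDistribution SoftwareDistribution.old
        net start bits
        net start wuauserv
        sfc /scannow

    If KB947821 (the System Update Readiness Tool) does manage to run, its findings land in %windir%\Logs\CBS\CheckSUR.log, which is usually more informative than the bare error code.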

    Read the article

  • How does cross domain authentication work in a firewalled environment?

    - by LVLAaron
    This is a simplification, and the names have been changed to protect the innocent. The assets:

        Active Directory domains: corp.lan, saas.lan
        User accounts: [email protected], [email protected]
        Servers: dc.corp.lan (domain controller), dc.saas.lan (domain controller), server.saas.lan

    A one-way trust exists between the domains, so user accounts in corp.lan can log into servers in saas.lan. There is no firewall between dc.corp.lan and dc.saas.lan. server.saas.lan is in a firewalled zone, and a set of rules exists so it can talk to dc.saas.lan. I can log into server.saas.lan with [email protected] - but I don't understand how it works. If I watch the firewall logs, I see a bunch of login chatter between server.saas.lan and dc.saas.lan. I also see a bunch of DROPPED chatter between server.saas.lan and dc.corp.lan. Presumably, this is because server.saas.lan is trying to authenticate [email protected], but no firewall rule exists that allows communication between these hosts. However, [email protected] can log in successfully to server.saas.lan - and once logged in, I can "echo %logonserver%" and get \\dc.corp.lan. So... I am a little confused about how the account actually gets authenticated. Does dc.saas.lan eventually talk to dc.corp.lan after server.saas.lan fails to reach dc.corp.lan? Just trying to figure out what needs to be changed/fixed/altered.
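
    For reference, if the member server did need to reach the trusted domain's DC directly, these are the ports Active Directory authentication traffic normally uses (a summary of well-known defaults, not rules pulled from the question's firewall):

        TCP/UDP 53    DNS
        TCP/UDP 88    Kerberos
        TCP     135   RPC endpoint mapper (plus the dynamic RPC port range)
        TCP/UDP 389   LDAP (TCP 3268 for Global Catalog)
        TCP     445   SMB / Netlogon
        TCP/UDP 464   Kerberos password change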

    Read the article

  • Mac Management and Security

    - by Bart Silverstrim
    I was going through some literature on managing OS X laptops and asked someone some questions about usage scenarios for the MacBooks. Specifically, I asked someone more knowledgeable than I whether my Mac could be taken over while visiting another site for a conference, or while on the wifi network of a local coffee house, by policies pushed from an OS X Server running Workgroup Manager (either one legitimately run by the site, or a copy of OS X Server someone has hidden on hardware somewhere on the network). Such a server could apparently be set up to do things like limit my access to Finder or impose other neat whiz-bang management features. He said that it is indeed possible: the management server would be assigned via DHCP, the OS X Server would treat my Mac as a guest and hand out restrictions, and my Mac would apparently accept them without notifying me or giving me an option - unlike Windows, which I believe would need to be joined to a domain before it becomes "managed" by Active Directory. So my question is, as network admins and sysadmins with users traveling with MacBooks: is there a way to reasonably protect your users from having their machines hijacked without resorting to just turning off networking all the time? Or isn't this much of a security hazard? What threat does this pose to the road warriors in your businesses?

    Read the article

  • What is the advantage of not running as root? [closed]

    - by Shmuel Brill
    Possible Duplicate: What's wrong with always being root?

    All modern Linux distributions highly discourage (or disable) running as root instead of as a normal user. I do not understand why. As a "normal" user, one could:

    1. Download a rogue program from the internet.
    2. Run it (after all, one isn't root - what can it do?).
    3. It installs itself in .bashrc or .xinitrc.
    4. It writes a rogue "sudo" and "su" and adds . to the path.
    5. Not noticing that . is in the path, one runs sudo.
    6. The rogue program now has the root password and can do anything it wants on the system.

    Even if 3-6 don't happen, the program could still:

    - Be part of a botnet.
    - Read all files in the home directory and send them back (mine them for SSNs, credit card numbers, bank account numbers, etc.).
    - Send spam.
    - Run a backdoor server to allow an attacker a chance to connect to the machine to determine vulnerabilities.

    It seems that the whole "permissions" thing (root/non-root) is just to prevent amateur crackers from getting into the system, so the question is: is there a point in avoiding running as root, and is there a way to protect oneself if one wants to run unsafe code?

    Read the article

  • JAWStats statspath error on windows

    - by crosenblum
    I have AWStats, which works fine, and JAWStats, which I am trying to get working. I have tried back and forward slashes to get the program to read the DirData files. I even moved the DirData folder outside of the Program Files folder, in case it had problems with folder names containing spaces. Here is my config file:

        // core config parameters
        $sConfigDefaultView = "thismonth.all";
        $bConfigChangeSites = true;
        $bConfigUpdateSites = true;
        $sUpdateSiteFilename = "xml_update.php";

        // individual site configuration
        // awstats092012.noname.jumpingcrab.com.txt
        $aConfig["site1"] = array(
          "statspath"  => "C:\\Program Files\\AWStats\\DirData\\",
          "statsname"  => "awstats[MM][YYYY].yourexample.com.txt",
          "updatepath" => "C:\\Program Files\\AWStats\\wwwroot\\cgi-bin\\awstats.pl\\",
          "siteurl"    => "http://yourexample.com",
          "theme"      => "default",
          "fadespeed"  => 250,
          "password"   => "",
          "includes"   => ""
        );

    Domain names changed to protect the innocent... :P Here is the error message:

        An error has occured: No AWStats Log Files Found
        JAWStats cannot find any AWStats log files in the specified directory: C:\Program Files\AWStats\DirData\
        Is this the correct folder? Is your config name, site1, correct?
        Please refer to the installation instructions for more information.
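
    One thing worth checking, offered as an assumption rather than a confirmed fix: the statsname pattern must match the actual file names sitting in DirData, and PHP on Windows accepts forward slashes, which avoids any double-escaping issues. A hypothetical revision of the site1 entry (domain taken from the comment line in the config above):

        $aConfig["site1"] = array(
          "statspath"  => "C:/Program Files/AWStats/DirData/",
          // files are named like awstats092012.noname.jumpingcrab.com.txt,
          // so the [MM][YYYY] template must be followed by that exact domain string
          "statsname"  => "awstats[MM][YYYY].noname.jumpingcrab.com.txt",
          "updatepath" => "C:/Program Files/AWStats/wwwroot/cgi-bin/awstats.pl",
          "siteurl"    => "http://noname.jumpingcrab.com",
          "theme"      => "default",
          "fadespeed"  => 250,
          "password"   => "",
          "includes"   => ""
        );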

    Read the article

  • ZFS: Mirror vs. RAID-Z

    - by John Clayton
    I'm planning on building a file server using OpenSolaris and ZFS that will provide two primary services - an iSCSI target for XenServer virtual machines and a general home file server. The hardware I'm looking at includes 2x 4-port SATA controllers, 2x small boot drives (one on each controller), and 4x big drives for storage. This leaves one free port per controller for upgrading the array down the road. Where I'm a little confused is how to set up the storage drives. For performance, mirroring appears to be king. I'm having a hard time seeing what the benefit of using RAID-Z over mirroring would be. With this setup I can see two options - two mirrored pairs striped together in one pool, or RAIDZ2. Both should protect against two drive failures and/or one controller failure... the only benefit of RAIDZ2 would be that any two drives could fail. Usable storage should be 50% of capacity in both cases, but the first should have much better performance, right? The other thing I'm trying to wrap my mind around is the benefit of mirrors with more than two devices. For data integrity what, if any, would be the benefit of RAID-Z over a three-way mirror? Since ZFS maintains file integrity, what does RAID-Z bring to the table... doesn't ZFS's integrity checking negate the value of RAID-Z's parity?
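
    For concreteness, the two layouts under discussion expressed as hypothetical zpool commands (device names are placeholders):

        # two 2-way mirrors striped together (a pool of two mirror vdevs)
        zpool create tank mirror c1t0d0 c1t1d0 mirror c2t0d0 c2t1d0

        # a single RAIDZ2 vdev across the same four disks
        zpool create tank raidz2 c1t0d0 c1t1d0 c2t0d0 c2t1d0

    Both give roughly 50% usable capacity; the striped mirrors trade RAIDZ2's "any two disks may fail" guarantee for better random I/O, since a pool of mirrors survives two failures only if they land in different vdevs.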

    Read the article

  • Could this server log mean my server is being used as a proxy?

    - by So Over It
    I came across the following entry in my access.log:

        58.218.199.147 - - [05/Jun/2012:12:56:04 +1000] "GET http://proxyproxys.com/ HTTP/1.1" 200 183 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"

    Normally when I see a full URL in my access.log, I assume it is log spam from people trying to get me to visit their site. Those entries are normally followed by a 404 response. The above entry is followed by a 200 'success' response! Doing some searching, it would seem that this can occur when someone is trying to use your server as a proxy. This disturbed me more - especially because the URL in question has the word proxy in it. Going to the site 'proxyproxys.com' (using hidemyass.com to protect my own identity), the site returns what appears to be some sort of 'proxy judge':

        ----------------------------------------
        HTTP_ACCEPT=text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        HTTP_ACCEPT_LANGUAGE=en-US,en;q=0.8
        HTTP_USER_AGENT=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.53 Safari/536.5
        HTTP_CONNECTION=close
        REMOTE_PORT=56355
        REMOTE_HOST=74.63.112.142
        REMOTE_ADDR=74.63.112.142
        ----------------------------------------
        CS_ProxyJudge Result=HIGH_ANONYMITY
        ----------------------------------------

    Questions:
    1) Does the 200 success mean that someone has been able to successfully use my server as a proxy?
    2) Are there other means of confirming whether my server is being used as a proxy?
    3) Can you refer me to documentation to help 'close up' my security gap if there is one?
    Thanks.
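
    A quick way to test from the outside whether the server really relays requests is to point an HTTP client at it as a proxy and ask for a third-party URL; in Apache, forward proxying is controlled by ProxyRequests. A sketch (hostnames are placeholders):

        # from another machine: if this returns example.com's content, the server is acting as an open proxy
        curl -I -x http://your.server.ip:80 http://example.com/

        # in the Apache config, make sure forward proxying is off (it is off by default)
        ProxyRequests Off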

    Read the article

  • INFORMIX - listener thread err 25582

    - by Samuel Lao
    I've been digging through different forums for the last 7 days looking for a possible solution... Our database is Informix running on a Linux server (SUSE Linux 11). Suddenly, last Saturday, Informix began to show an error message:

        listener-thread err=-25582 oserr=0, network connection is broken

    End users started to call, reporting slow network performance to this server and moments where the database application lost its connection to the server... so we pinged the db server, getting good responses (1 ms) without losing packets. I tried typing telnet (ipserver) 1526, which is Informix's port for the application, and it works. We had to disconnect the server and enable a backup db server located at another branch. It has been working only so-so because the backup server doesn't have good specs (it is an old Dell server model). So, I scanned the main server for viruses using Trend Micro ServerProtect; it didn't find anything (0 viruses and spyware). I checked the switches and routers, but I haven't found anything strange... What else could it be? Thanks in advance for your time and help with this issue... I would really appreciate any advice...

    Read the article

  • Mirror a RAID0 volume

    - by Ghostrider
    I have two SSDs running in RAID0. The capacity and speed are just great. I use Windows Home Server to do incremental daily backups. This is fine and well, and I've successfully restored from these backups. However, when one of the disks physically died, I was stuck without a working system until the replacement arrived so that I could restore the array from backup. WHS restoration takes about 5 hours, which basically means that I'm losing an entire day in the process. Is it possible to set up a kind of recovery volume for the RAID array? Use a single mechanical HDD that would be updated with an exact clone of the RAID array on a daily basis. This way, if the array goes offline for some reason, I can just boot from the mechanical HDD, lose some performance, but still be able to work. The machine in question runs Windows 7. Creating RAID 0+1 is not an option because of the high price of the SSDs and the fact that it still doesn't protect against failure of the RAID controller. Is there any way this can be set up?

    Read the article

  • Centralized Windows/Mac Patch Management that is easy to use

    - by BiggsTRC
    I'm looking for advice on which patch management solutions you would recommend based upon your experience, and which ones you would not recommend. We have a mixed network of Windows and Mac clients. Our central servers are all Windows servers, although I have considered putting in a Mac server to better handle our Mac clients. The issue we are facing currently is that we need to maintain the patches on all of our third-party applications. Right now we use WSUS, which handles patching of Windows and some Microsoft products, but that is about it. I need something to cover the other applications, specifically things like Adobe products (Reader, Flash, Dreamweaver, etc.). Our network isn't that big (maybe 200 clients) and I don't have a person to dedicate just to patching and maintaining a patch management solution. Thus, very large and complicated solutions like System Center are most likely out. I have recently been looking at Dell's KACE K1000 solution (http://www.kace.com/products/systems-management-appliance/). It seems simple and it provides a lot of tools in one package that I would like/need as well. I like the fact that it is self-contained in an appliance and that it is designed for situations like mine. However, I'm not sure if this is the best solution. I've also looked at Shavlik's NetChk solution (http://www.shavlik.com/netchk-protect.aspx), but I don't need an anti-virus product. However, it looks like they might have a very good patch database. My question is this: What are your thoughts on these two products? Are there better products out there? Are there issues that I'm not considering? I want something that is very good at patching a broad range of products, that is simple to use, that takes a minimal amount of management (like WSUS), and that (hopefully) works with Mac and Windows.

    Read the article

  • suggestions for firewall/router project using *BSD or Linux

    - by Adeodatus
    Hi all, I have a project in mind and I'd love to hear some ideas on open source solutions using COTS hardware. I have a few 24- and/or 48-port managed layer-2 switches with customers potentially on each port (though it's usually about 20-30). Right now each switch is a bridged network and we backhaul the traffic to a centralized DHCP server at our core. I need to move them to a NAT solution and, while doing this, I'd like to protect the customers on each port from the customer traffic on the other ports. I also need to be able to port forward from the public side of the firewall/NAT box to specific hardware on the inside of the NAT machine (easy enough, I know). My first thought is to build an appliance-like box (the fewer moving parts the better) that can do filtering and NAT, with an RFC 1918 address range handed out by a DHCP server on the appliance. A caching DNS server on the appliance would be a plus since we backhaul everything to the core. I'd like to run FreeBSD but I'm open. Now, to try to limit the broadcast traffic that's visible, I was thinking of making each port on the switch a different VLAN and having the switch trunk them to the private NIC on the FreeBSD appliance. I'd probably need to do some magic on the FreeBSD NIC to get this working, but it should work; a sketch follows below. We have the parts to build these systems. So, does this make sense? Are there any other solutions out there that we don't have to spend money on but can use our parts to create? Are there any good distros that could do this already (m0n0wall)? I may or may not admin this solution, so a secure web configuration and management tool would be a plus in the other admins' minds. Thoughts?
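
    A rough sketch of the "magic on the FreeBSD NIC" part, assuming FreeBSD with pf, a public interface em0, and em1 facing the switch trunk (interface names and addresses are placeholders):

        # create one tagged VLAN interface per customer port (classic ifconfig form)
        ifconfig vlan10 create vlan 10 vlandev em1
        ifconfig vlan10 inet 10.10.10.1 netmask 255.255.255.0
        ifconfig vlan11 create vlan 11 vlandev em1
        ifconfig vlan11 inet 10.10.11.1 netmask 255.255.255.0
        sysctl net.inet.ip.forwarding=1

        # /etc/pf.conf - NAT the RFC 1918 ranges out the public interface
        ext_if = "em0"
        nat on $ext_if inet from 10.10.0.0/16 to any -> ($ext_if)
        # keep customers on different VLANs from reaching each other
        block in quick on vlan10 from 10.10.10.0/24 to 10.10.11.0/24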

    Read the article

  • Windows Update stuck. Fix It stuck. So is KB947821. What should I do?

    - by Jim Thio
    I set up a new computer. After installing, I updated everything and let the computer run for days. Then - I don't know what my daughter did - the computer stopped responding and Windows Update no longer worked. People said to run Microsoft Fix It. Fix It ran, and after that the problem still persisted. The problem changed, though: before, there was an error code; now Windows Update simply shows "updating" and never ends. So I downloaded KB947821. It's been 3 hours and it's still installing. It looks like it's hitting the firewall or something; I don't see Windows Update on the firewall exception list. However, I've never heard that this is an issue. A firewall only protects against incoming connections, not outgoing, right? Or what am I missing? What should I do?

    Read the article

  • Apache2, Tomcat6, and proxy redirects

    - by Randal Hale
    So here is my question - go easy and slow. I'm a GIS consultant and general hack with Linux. I inherited this volunteer job essentially because I knew more than the rest of the team - or the rest of the team isn't as stubborn as I am... With that said, a number of people had been mucking around in the server before I got involved, so I've been cleaning up a lot of things. The domain names have been changed to protect the innocent. I have a server running Apache2 (port 80) and Tomcat6 (port 8080) on Ubuntu Server 10.04. There is a virtual host on Apache2 called "Runner" (the domain is runner.org). I have mod_proxy loaded. I am trying to redirect everyone that visits runner.org to http://some.ip.address:8080/openrunner-webapp/ So far I've gotten runner.org assigned to the Apache2 server. Someone set up a redirect in the httpd.conf file, but I believe it needs to go into the virtual host. I tried setting the redirect in the virtual host as:

        ProxyPass / http://localhost:8080/openrunner-webapp

    All that does is show me the root of the Apache webserver. Anyway, I'm stuck.
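
    A minimal sketch of the usual mod_proxy pattern for this, assuming the Tomcat webapp lives at /openrunner-webapp/ on the same machine (the trailing slashes matter, and ProxyPassReverse is needed so redirects issued by Tomcat get rewritten too):

        <VirtualHost *:80>
            ServerName runner.org

            ProxyPreserveHost On
            ProxyPass        /  http://localhost:8080/openrunner-webapp/
            ProxyPassReverse /  http://localhost:8080/openrunner-webapp/
        </VirtualHost>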

    Read the article

  • Nginx Server Block Port 8081 Path to Root Folder

    - by Pamela
    I'm trying to password protect all of port 8081 on my Nginx server. The only thing this port is used for is phpMyAdmin. When I navigate to https://www.example.com:8081, I successfully get the default Nginx welcome page. However, when I try navigating to the phpMyAdmin directory, https://www.example.com:8081/phpmyadmin, I get a "404 Not Found" page. Permission for my htpasswd file is set to 644. Here is the code for my server block:

        server {
            listen 8081;
            server_name example.com www.example.com;
            root /usr/share/phpmyadmin;
            auth_basic "Restricted Area";
            auth_basic_user_file htpasswd;
        }

    I have also tried entirely commenting out

        #root /usr/share/phpmyadmin;

    However, it doesn't make any difference. Is my problem confined to using the incorrect root path? If so, how can I find the root path for phpMyAdmin? If it makes any difference, I'm using Ubuntu 14.04.1 LTS with Nginx 1.4.6 and ISPConfig 3.0.5.4p3.
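
    Two things in that block are worth noting as likely causes, offered as assumptions rather than a confirmed fix: with root /usr/share/phpmyadmin, a request for /phpmyadmin maps to /usr/share/phpmyadmin/phpmyadmin (which doesn't exist), and there is no location that hands .php files to PHP-FPM. A hypothetical revision, keeping root pointed at phpMyAdmin and browsing to https://www.example.com:8081/ instead of /phpmyadmin (the socket and htpasswd paths are guesses based on a stock Ubuntu 14.04 setup):

        server {
            listen 8081;
            server_name example.com www.example.com;
            root /usr/share/phpmyadmin;
            index index.php;

            auth_basic "Restricted Area";
            auth_basic_user_file /etc/nginx/htpasswd;

            location ~ \.php$ {
                try_files $uri =404;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }
        }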

    Read the article

  • Nginx ignores HTTP Authentication for WordPress login directory

    - by MrNerdy
    I am running WordPress in a subfolder of my domain for testing and development purposes on a VPS LEMP stack. In order to protect wp-login.php with an extra layer, I used HTTP authentication for the wp-admin folder. The problem is that the HTTP authentication is ignored: when wp-login.php or the wp-admin folder is requested, it goes directly to the normal WordPress login. I installed everything from the command line in the following way:

        sudo apt-get install apache2-utils
        sudo htpasswd -c /var/www/bitmall/wp-admin/.htpasswd exampleuser
        New password:
        Re-type new password:
        Adding password for user exampleuser

    My Nginx configuration file looks like this:

        server {
            listen 80;
            root /var/www;
            index index.php index.html index.htm;
            server_name example.com;

            location / {
                try_files $uri $uri/ /index.html;
            }

            location /bitmall/wp-admin/ {
                auth_basic "Restricted Section";
                auth_basic_user_file /var/www/bitmall/wp-admin/.htpasswd;
            }

            location ~ /\.ht {
                deny all;
            }

            error_page 404 /404.html;
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /var/www;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    I would appreciate your advice on this.
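
    One plausible explanation, offered as an assumption: in nginx, a regex location such as location ~ \.php$ is selected ahead of the prefix location /bitmall/wp-admin/ for any .php request, so the wp-admin PHP files never pass through the auth_basic block; also, wp-login.php lives in the WordPress root, not inside wp-admin/, so protecting only that folder would not cover it anyway. A sketch of one common fix is a regex location that carries both the auth and the PHP handling for those entry points, placed before the generic \.php$ block (regex locations are matched in file order, first match wins):

        # protect wp-login.php and PHP files under wp-admin, and still run PHP there
        location ~* ^/bitmall/(wp-admin/.*\.php|wp-login\.php)$ {
            auth_basic "Restricted Section";
            auth_basic_user_file /var/www/bitmall/wp-admin/.htpasswd;

            try_files $uri =404;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }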

    Read the article

  • Security in shared hosting vs VPS 'virtual appliances'

    - by Pedro Loureiro
    I have to change my hosting provider. Right now I have a shared hosting account, but I'm considering trying the LAMP stack appliance from turnkeylinux.org. I'm very comfortable using Linux; I've been using it for a long time. I have no problem ssh'ing into remote machines and doing whatever I have to do (coding, reading logs, moving files, deploying, etc.). The problem is that none of those tasks have involved securing the server or the firewall. My experience has been as a desktop user or as a developer deploying apps and files on remote servers. Ignoring the security of the application logic (read: any scripts, frameworks, or websites I might have created or installed), I'm worried about things like the base configuration of daemons, the firewall, open ports, executable scripts being readable from the outside, and so on. My question is: how do you compare the (expected) out-of-the-box security of the LAMP stack from TurnKey with the (expected) security of a "regular" shared hosting provider? I was hoping to find some guides with a list of steps to protect my server, but the only documentation I found simply refers to Ubuntu's documentation.
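
    As a baseline, first-pass hardening on any Debian/Ubuntu-based appliance tends to look roughly like the sketch below (a generic checklist, not TurnKey-specific documentation; ufw may need to be installed first):

        # allow only the services you actually expose, then turn the firewall on
        ufw allow 22/tcp
        ufw allow 80/tcp
        ufw allow 443/tcp
        ufw enable

        # keep packages patched and add brute-force protection for SSH
        apt-get update && apt-get upgrade
        apt-get install fail2ban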

    Read the article

  • Rapidly changing public IP addresses on certain networks?

    - by zenblender
    I run/develop an online game where many of our users are in Southeast Asia. I recently went to Southeast Asia and made an alarming discovery. Anywhere I got internet access - whether via 3G, a LAN in a hotel, or wifi in a cafe, both in Singapore and the Philippines - I noticed that my IP address was changing CONSTANTLY. I mean the public IP address, not the private one. I could load a page like whatismyip.com and just hit reload and see a new IP address show up every 5-10 seconds! This has lots of consequences for my online game, as many things "break" if the IP address changes for a given user. Basically, I would like to know more about this. Is there a name for the kind of network or router or paradigm that causes this, so I can read up on it? I don't understand WHY a network would function this way. Does it do this on purpose? Is it for security reasons? Is it to anonymize and protect the identity of the users? Or is it just an "old" method that is mostly obsolete in the rest of the world? Thanks for any info that will help me to understand.

    Read the article

  • Poor Write Performance in VM inside Proxmox PVE 2.0

    - by sorsenne
    I am running PVE 2.0 on decent hardware (2 SATA HDDs as RAID1, 12 GB RAM, i7 CPU), but the I/O performance is very poor inside the VM (Ubuntu 11.10 Server). The very same VM was copied to another server running plain Ubuntu Server with KVM and had better I/O performance there. This is how the HDD appears in the guest:

        ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
        ata1.00: ATA-8: ST3000DM001-9YN166, CC49, max UDMA/133
        ata1.00: 5860533168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
        ata1.00: configured for UDMA/133
        scsi 0:0:0:0: Direct-Access ATA ST3000DM001-9YN1 CC49 PQ: 0 ANSI: 5
        sd 0:0:0:0: [sda] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB)
        sd 0:0:0:0: [sda] 4096-byte physical blocks
        sd 0:0:0:0: [sda] Write Protect is off
        sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
        sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA

    I tested with dd:

        $ dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync
        128+0 records in
        128+0 records out
        134217728 bytes (134 MB) copied, 19.2222 s, 7.0 MB/s

    On the host, the same test averages 156 MB/s. PS: I am using VirtIO and see no errors in dmesg.
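
    Two hedged suggestions rather than a diagnosis: run the identical fdatasync dd on the host against the storage that actually backs the VM images, to separate host RAID performance from virtualization overhead, and check which cache mode and bus the virtual disk is defined with, since cache=none versus writethrough/writeback changes sync-write numbers dramatically. Sketch (the VM ID 100 and test path are placeholders):

        # on the Proxmox host, same workload against the default VM image store
        dd bs=1M count=128 if=/dev/zero of=/var/lib/vz/ddtest conv=fdatasync

        # inspect the VM's disk definition (virtio vs ide, cache=...) in its config
        cat /etc/pve/qemu-server/100.conf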

    Read the article

  • Best practices for settings for Oracle database creation

    - by Gary
    When installing an Oracle database, what non-default settings would you normally apply (or consider applying)? I'm not after hardware-dependent settings (e.g. memory allocation) or file locations, but more general items. Similarly, anything that is a particular requirement for a specific application rather than generally applicable isn't really useful. Do you separate code/API schemas (PL/SQL owners) from data schemas (table owners)? Do you use default or non-default roles, and if the latter, do you password-protect the role? I'm also interested in whether there are any places where you REVOKE a GRANT that is installed by default. That may be version dependent, as 11g seems more locked down in its default install. These are the settings I used in a recent setup; I'd like to know whether I missed anything or where you disagree (and why).

    Database parameters:

        Auditing (AUDIT_TRAIL to DB and AUDIT_SYS_OPERATIONS to YES)
        DB_BLOCK_CHECKSUM and DB_BLOCK_CHECKING (both to FULL)
        GLOBAL_NAMES to true
        OPEN_LINKS to 0 (did not expect them to be used in this environment)
        Character set: AL32UTF8

    Profiles: I created an amended password verify function that used the APEX dictionary table (FLOWS_030000.wwv_flow_dictionary$) as an extra check to prevent simple passwords.

    Developer logins:

        CREATE PROFILE profile_dev LIMIT
          FAILED_LOGIN_ATTEMPTS 8
          PASSWORD_LIFE_TIME 32
          PASSWORD_REUSE_TIME 366
          PASSWORD_REUSE_MAX 12
          PASSWORD_LOCK_TIME 6
          PASSWORD_GRACE_TIME 8
          PASSWORD_VERIFY_FUNCTION verify_function_11g
          SESSIONS_PER_USER unlimited
          CPU_PER_SESSION unlimited
          CPU_PER_CALL unlimited
          PRIVATE_SGA unlimited
          CONNECT_TIME 1080
          IDLE_TIME 180
          LOGICAL_READS_PER_SESSION unlimited
          LOGICAL_READS_PER_CALL unlimited;

    Application login:

        CREATE PROFILE profile_app LIMIT
          FAILED_LOGIN_ATTEMPTS 3
          PASSWORD_LIFE_TIME 999
          PASSWORD_REUSE_TIME 999
          PASSWORD_REUSE_MAX 1
          PASSWORD_LOCK_TIME 999
          PASSWORD_GRACE_TIME 999
          PASSWORD_VERIFY_FUNCTION verify_function_11g
          SESSIONS_PER_USER unlimited
          CPU_PER_SESSION unlimited
          CPU_PER_CALL unlimited
          PRIVATE_SGA unlimited
          CONNECT_TIME unlimited
          IDLE_TIME unlimited
          LOGICAL_READS_PER_SESSION unlimited
          LOGICAL_READS_PER_CALL unlimited;

    Privileges for a standard schema owner account:

        CREATE CLUSTER, CREATE TYPE, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE,
        CREATE JOB, CREATE MATERIALIZED VIEW, CREATE SEQUENCE, CREATE SYNONYM, CREATE TRIGGER
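
    On the "REVOKE a default GRANT" point, the classic examples are the network and file packages that older releases grant to PUBLIC. A hedged sketch of what that typically looks like (check which of these your applications actually need before revoking):

        REVOKE EXECUTE ON UTL_TCP  FROM PUBLIC;
        REVOKE EXECUTE ON UTL_SMTP FROM PUBLIC;
        REVOKE EXECUTE ON UTL_HTTP FROM PUBLIC;
        REVOKE EXECUTE ON UTL_FILE FROM PUBLIC;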

    Read the article
