Search Results

Search found 14771 results on 591 pages for 'security policy'.


  • Is my dns server being attacked? And what should I do about it?

    - by Mnebuerquo
    I've been having some intermittent DNS problems with a web server: certain ISPs' DNS servers don't have my hostnames in cache and fail to look them up, while queries to OpenDNS for the same hostnames resolve correctly. It's intermittent, and it always works fine for me, so it's hard to pin the problem down when someone reports connectivity trouble with my site.
    In trying to figure this out, I've been looking at my logs to see if there are any errors I should know about. I found thousands of the following messages in my logs, from different IPs, but all requesting similar DNS records:
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#36141: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#29075: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#47924: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#4727: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:14 localhost named[26399]: client 94.76.107.2#16153: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:14 localhost named[26399]: client 94.76.107.2#40267: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63507: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63721: query (cache) 'burningpianos.org/MX/IN' denied
        May 12 11:43:36 localhost named[26399]: client 82.209.240.241#3537: query (cache) 'burningpianos.com/MX/IN' denied
    I've read about Dan Kaminsky's DNS cache poisoning vulnerability, and I'm wondering if these log records are an attempt by some evildoer to attack my DNS server. There are thousands of records in my logs, all requesting "burningpianos", some for .com and some for .org, most looking for an MX record. There are requests from multiple IPs, but each IP will request hundreds of times per day. So this smells to me like an attack. What is the defense against this?
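
    A minimal BIND 9 sketch of the usual mitigation, assuming this box only needs to answer recursive and cache queries for your own hosts (the "denied" entries above suggest named is already refusing these lookups, so this mostly formalises it). The ACL name and the 192.0.2.0/24 range are placeholders for your own networks:

        // fragment to merge into named.conf (merge into the existing options block,
        // don't add a second one)
        acl "trusted" {
            127.0.0.0/8;
            192.0.2.0/24;     // placeholder: your own networks
        };

        options {
            // recursion and cache lookups only for trusted clients;
            // authoritative answers for your own zones still go to everyone
            allow-recursion { trusted; };
            allow-query-cache { trusted; };
        };

    After merging, reload named (for example with rndc reload) and re-test the MX lookup from an outside host to confirm it is refused.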

  • mod_security2 and w00tw00t attacks

    - by Saif Bechan
    I have a server running Apache 2.2.3 with mod_security2 (mod_security2.c), which I installed recently because I get attacked a lot by this:
        [Wed Mar 24 02:35:41 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Wed Mar 24 02:47:31 2010] [error] [client 202.75.211.90] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Wed Mar 24 02:47:49 2010] [error] [client 95.228.153.177] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Wed Mar 24 02:48:03 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
    I tried configuring it like this:
        SecFilterSelective REQUEST_URI "w00tw00t.at.ISC.SANS.DFind"
        SecFilterSelective REQUEST_URI "\w00tw00t.at.ISC.SANS"
        SecFilterSelective REQUEST_URI "w00tw00t.at.ISC.SANS"
        SecFilterSelective REQUEST_URI "w00tw00t.at.ISC.SANS.DFind:"
        SecFilterSelective REQUEST_URI "w00tw00t.at.ISC.SANS.DFind:)"
    The problem is that SecFilterSelective cannot be used in mod_security2; it gives me errors. Instead I use rules like this:
        SecRule REQUEST_URI "w00tw00t.at.ISC.SANS.DFind"
        SecRule REQUEST_URI "\w00tw00t.at.ISC.SANS"
        SecRule REQUEST_URI "w00tw00t.at.ISC.SANS"
        SecRule REQUEST_URI "w00tw00t.at.ISC.SANS.DFind:"
        SecRule REQUEST_URI "w00tw00t.at.ISC.SANS.DFind:)"
    Even this does not work. I don't know what to do anymore. Does anyone have any advice?
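
    A hedged sketch of a ModSecurity 2.x rule for this pattern, assuming ModSecurity 2.5 or later (on 2.7 and newer an id: action must also be added, e.g. id:1000001):

        SecRule REQUEST_URI "@contains w00tw00t.at.ISC.SANS" "phase:1,t:none,log,deny,status:403"

    Note, though, that these scans send HTTP/1.1 requests with no Host header, so in many setups Apache itself rejects them with a 400 (and writes that error line) before ModSecurity ever evaluates the rule; if the goal is simply to stop the log noise or ban the scanners, a firewall-level block such as fail2ban watching the error log is a common alternative.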

  • Correctly setting up UFW on Ubuntu Server 10 LTS which has Nginx, FastCGI and MySQL?

    - by littlejim84
    I want the firewall on my new web server to be as secure as it needs to be. While researching iptables I came across UFW (Uncomplicated Firewall), which looks like a simpler way to set up a firewall on Ubuntu Server 10.04 LTS, and since it's part of the install it seems to make sense. My server will have Nginx, FastCGI and MySQL on it, and I also want to allow SSH access (obviously). So I'm curious to know exactly how I should set up UFW, and whether there is anything else I need to take into consideration. My research turned up an article that explains it this way:
        # turn on ufw
        ufw enable
        # log all activity (you'll be glad you have this later)
        ufw logging on
        # allow port 80 for tcp (web stuff)
        ufw allow 80/tcp
        # allow our ssh port
        ufw allow 5555
        # deny everything else
        ufw default deny
        # open the ssh config file and edit the port number from 22 to 5555, ctrl-x to exit
        nano /etc/ssh/sshd_config
        # restart ssh (don't forget to ssh with port 5555, not 22 from now on)
        /etc/init.d/ssh reload
    This all seems to make sense to me. But is it all correct? I want to back this up with other opinions or advice to ensure I do this right on my server. Many thanks!
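
    A slightly reordered sketch, assuming SSH really is moved to port 5555 and MySQL only listens on localhost (so it needs no rule); adding the default policy and allow rules before running ufw enable avoids cutting off your own SSH session, and ufw limit adds simple rate limiting if your ufw version supports it:

        ufw default deny          # drop unsolicited incoming traffic
        ufw allow 80/tcp          # nginx
        ufw limit 5555/tcp        # SSH on its new port, with rate limiting
        ufw logging on
        ufw enable

    FastCGI traffic between nginx and the PHP backend, and MySQL on 3306, need no rules as long as they bind only to 127.0.0.1 or a Unix socket.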

  • win2008 r2 IIS7.5 - setting up a custom user for an application pool, and trust issues

    - by Ken Egozi
    Scenario: a blank Windows Server 2008 R2 install; the goal was to have a couple of sites running with isolated application pools and dedicated users.
    I created a new folder for a new website, c:\web\siteA\wwwroot, with the ASP.NET app deployed there in the /bin folder. I created a user named "appuser" and added it to the IIS_USERS group, gave the website folder read and execute permissions for IIS_USERS and appuser, created the IIS site, and set the app-pool identity to appuser.
    Now I'm getting a YSOD telling me that the trust level is too low: "SecurityException: That assembly does not allow partially trusted callers." Adding <trust level="Full" /> to the web.config did not help. Changing the app-pool user to Administrator makes the site run. Setting the "anonymous user identity" to either IUSR or the app-pool identity makes no difference.
    Any ideas? Is there a step-by-step how-to guide for setting up users for isolated app pools on IIS 7.5?

  • Bad ways to secure a wireless network

    - by Moshe
    I was wondering if anybody had any thoughts on this, as I recently saw a Verizon DSL network set up where the WEP key was the last 8 characters of the router's MAC address. (It's bad enough that they were using WEP in the first place...)

  • Getting Server 2008 R2 to ignore all traffic from Internet-facing NIC, leaving it to a VM

    - by Wolvenmoon
    I got into Server 2008 R2 via DreamSpark and would like to start learning on it. I don't have much option but to put it on a system sitting between the Internet and my home LAN, due to electricity bills and the fact that three computers in an 11x11 space in 102-degree weather is pretty stygian.
    Currently I use a ClearOS gateway to manage everything. What I'd like to do is take my Server 2008 R2 box, which has two NICs, and drop it at the head of my network. I'd want Server 2008 R2 to ignore all traffic on the external-facing NIC and pass it to a virtual ClearOS gateway, and to put all its own Internet traffic through its other NIC, which will face the rest of my network and be the default gateway for it. The theory is to keep the potentially vulnerable Server 2008 R2 install as tucked behind a Linux box as possible, without sacrificing too much performance. This is a home network that occasionally hosts dedicated game servers and voice chat servers, so most malicious activity is in the form of drive-by, non-targeted attacks; however, I don't trust Windows Server because I don't know the OS well enough yet.
    So, three questions: How do I do this? Am I going to be reasonably more secure doing this than if I just let the Server 2008 R2 rig handle all the network traffic and DHCP (not an option)? And should I virtualize the Server 2008 R2 rig instead, and if so, in what? (Core 2 Duo E6600 with 5 GB of usable RAM)

  • Penetration testing - common examples?

    - by Mirek
    Hi, I was charged with doing some basic penetration testing on our system. I tried to find some recommended practices but wasn't successful. I guess the SYN attack is retired (no NT here). Could anyone advise some basic steps of what to test in order to carry out at least a very basic penetration test? Thanks
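
    A minimal external-assessment sketch, assuming you have written authorization and that nmap and nikto are installed; target.example.com is a placeholder for your own host:

        nmap -sS -sV -p- target.example.com           # full TCP port and service/version scan
        nmap -sU --top-ports 100 target.example.com   # most common UDP ports
        nmap --script vuln target.example.com         # NSE scripts from the "vuln" category
        nikto -h http://target.example.com/           # basic web server misconfiguration checks

    Beyond the scans, checking for default or weak credentials, missing patches, and world-readable backups or config files usually turns up more than exotic attacks.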

  • Why is 50.22.53.71 hitting my localhost node.js in an attempt to find a php setup

    - by laggingreflex
    I just created a new app using the angular-fullstack Yeoman generator, edited it a bit to my liking, and ran it with grunt on my localhost, and immediately upon starting up I get this flood of requests to paths that I haven't even defined. Is this a hacking attempt? And if so, how does the hacker (human or bot) immediately know where my server is and when it came online? Note that I haven't put anything online; it's just a localhost setup and I'm merely connected to the Internet. (Although my router does allow port 80 incoming.) Whois shows that the IP address belongs to SoftLayer Technologies, which I'd never heard of.
        Express server listening on 80, in development mode
        GET / [200] | 127.0.0.1 (Chrome 31.0.1650)
        GET /w00tw00t.at.blackhats.romanian.anti-sec:) [404] | 50.22.53.71 (Other)
        GET /scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /admin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /admin/pma/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /admin/phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /db/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /dbadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /myadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /mysql/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /mysqladmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /typo3/phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpmyadmin1/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpmyadmin2/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /pma/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /web/phpMyAdmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /xampp/phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /web/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /php-my-admin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /websql/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin-2/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /php-my-admin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin-2.5.5/index.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin-2.5.5-pl1/index.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin/ [404] | 50.22.53.71 (Other)
        GET /phpmyadmin/ [404] | 50.22.53.71 (Other)
        GET /mysqladmin/ [404] | 50.22.53.71 (Other)
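
    A hedged note and a sketch of how to shut this out, assuming the dev box is a Linux host with iptables (50.22.53.71 is taken straight from the log above): because the router forwards port 80, the machine is reachable from the whole IPv4 Internet, and bots continuously sweep address ranges looking for phpMyAdmin setup scripts, so no targeting of you personally is needed.

        sudo iptables -A INPUT -s 50.22.53.71 -j DROP    # drop further traffic from that scanner
        sudo iptables -L INPUT -n --line-numbers         # verify the rule is in place

    The more durable fix while developing is simply to remove the port-80 forwarding rule on the router; blocking one address only silences one of many scanners.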

  • Block Google requests to 16k using pf firewall

    - by atmosx
    I'd like to block access to Google search using PF after a threshold of 17500 requests (connections established) in 24 hours, from a host running FreeBSD 9. What I came up with, after reading pf-faq, is this rule:
        pass out on $net proto tcp from any to 'www.google.com' port www flags S/SA keep state (max-src-conn 200, max-src-conn-rate 17500/86400)
    (86400 is 24 hours in seconds.) The rule should work, but PF is smart enough to know that www.google.com resolves to 5 different IPs, so my pfctl -sr output gives me this:
        pass out on vte0 inet proto tcp from any to 173.194.44.81 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.82 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.83 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.80 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.84 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
    PF creates 5 different rules, one for each IP that Google resolves to. However, I have the sense (without being 100% sure; I haven't had the chance to test it) that the limit of 17500/86400 applies to each IP separately. If that's the case (please confirm), then it's not what I want. pf-faq describes another option, source-track global:
        source-track  This option enables the tracking of number of states created per source IP address. This option has two formats:
          source-track rule - The maximum number of states created by this rule is limited by the rule's max-src-nodes and max-src-states options. Only state entries created by this particular rule count toward the rule's limits.
          source-track global - The number of states created by all rules that use this option is limited. Each rule can specify different max-src-nodes and max-src-states options, however state entries created by any participating rule count towards each individual rule's limits. The total number of source IP addresses tracked globally can be controlled via the src-nodes runtime option.
    I tried to apply source-track global in the above rule without success. How can I use this option in order to achieve my goal? Any thoughts or comments are more than welcome, since I'm an amateur and don't fully understand PF yet. Thanks
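
    An untested sketch of one way to keep the limits on a single rule, assuming vte0 is the outbound interface: putting the hostname in a table means the ruleset contains only one pass rule (the table holds all the resolved addresses), and source-track global shares the tracking entries between any rules that carry the option. Note that the table is only resolved when the ruleset is loaded, so it can go stale as Google rotates addresses, and the max-src-* limits are still counted per source address.

        table <google> { www.google.com }

        pass out on vte0 proto tcp from any to <google> port www flags S/SA keep state (source-track global, max-src-conn 200, max-src-conn-rate 17500/86400)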

  • Why would sshd allow root logins by default?

    - by The Journeyman geek
    I'm currently working on hardening my servers against hacking; among other things, I'm getting a load of attempts to log on as root over SSH. While I've implemented fail2ban, I'm wondering why root logins are allowed by default in the first place. Even with non-sudo-based distros, I can always log on as a normal user and switch, so is there any clear advantage to allowing root logins over SSH, or is it just something nobody bothers to change?
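
    For reference, turning it off is a one-line sshd_config change (a sketch; PermitRootLogin also accepts without-password or forced-commands-only if you still want key-only root access for backups or automation):

        # /etc/ssh/sshd_config
        PermitRootLogin no

        # then reload the daemon from a session you keep open, e.g.
        /etc/init.d/ssh reload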

  • Audit success in event log from a non-administrator IP - is that an immediate indicator of a successful hack?

    - by Valentin Kuzub
    I checked the event log today and, among a mass of failed audit events, I found some successes that originated from outside my country. They look a little weird, though: no process is specified, whereas when I log on using RDP the event lists winlogon.exe. I am wondering whether this means my system was compromised, or whether there are benign explanations and it isn't all that bad. I am using a VPS solution, if that's useful.

  • Completely reset mysql server authentication

    - by p3dro-sola
    I was trying to change the password for a user on a MySQL server, and I appear to have locked myself out. I have access to the root user, but root doesn't have the privileges to access any databases, including the 'mysql' database where all the configuration is kept. Is there any way I can 'reset' the root user? (I have full file-system access.) Or do I just need to reinstall, and if so, can I salvage my data? Thanks. -Ped
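
    A sketch of the usual recovery path on a MySQL 5.x Linux install (the service name, init script and paths vary by distribution, and 'new-password' is a placeholder); the data under the datadir, typically /var/lib/mysql, is untouched by this, so a reinstall shouldn't be needed:

        # stop mysqld, then start it with the grant tables disabled
        /etc/init.d/mysql stop
        mysqld_safe --skip-grant-tables --skip-networking &

        # repair root's password and privileges directly
        mysql -u root <<'SQL'
        UPDATE mysql.user SET Password = PASSWORD('new-password') WHERE User = 'root';
        FLUSH PRIVILEGES;
        GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'new-password' WITH GRANT OPTION;
        SQL

        # shut the temporary instance down and start MySQL normally
        mysqladmin -u root -p'new-password' shutdown
        /etc/init.d/mysql start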

  • What steps should I take to remove an employee from a linux server?

    - by user146059
    I was recently hired as the main developer of a small web company, and it seems I will be taking the place of the departing developer. I don't have much system administration experience. My non-technical bosses have instructed me to ensure that he will not be able to cause any damage to our system/database/application once he is gone. I know the basics of what needs to be done, but I was hoping to have a definitive list before it happens.
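
    A hedged starting checklist, not exhaustive; "olddev" is a placeholder username and the nologin path differs between distributions:

        # disable the account itself
        usermod --lock --expiredate 1 olddev          # lock the password and expire the account
        usermod -s /usr/sbin/nologin olddev           # no interactive shell (/sbin/nologin on some distros)

        # look for other ways back in
        cat /home/olddev/.ssh/authorized_keys /root/.ssh/authorized_keys 2>/dev/null
        grep -r olddev /etc/sudoers /etc/sudoers.d/ 2>/dev/null
        crontab -u olddev -l
        last olddev | head

        # rotate every secret the person knew: root and shared account passwords,
        # database and application credentials, deploy/API keys, control-panel
        # and domain-registrar logins
        passwd root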

  • Securing data sent to an unencrypted WiFi AP

    - by David Parunakian
    The business plan of a project I'm involved in assumes selling certain WiFi-enabled devices to end users. All these devices originally have an unencrypted connection and a standard SSID. The problem is that although the user can connect to it and set both a new SSID and a WPA passphrase, these are sent to the AP in plain text and thus can be intercepted by anyone nearby with a sniffer. What's the best solution to this problem, and why?
    1. Initially set up an encrypted wireless network at the device and supply the user with a printed passphrase.
    2. Buy an SSL certificate for the AP's default IP address or local domain name (the APs aren't supposed to work as a router and have a captive portal and dnsmasq installed, so all of them can pretend to be myunit.example.com, as far as I understand).
    3. Something different.
    Thank you.

  • How do I access a shared folder using credentials other than the ones I logged in with?

    - by George Sealy
    I have a lab full of Windows 7 machines and a shared login (user360) that all my students use. I also have a shared folder that they all have read/write access to (for moving files around easily). My problem is that I also want to be able to create a shared folder for each student for submitting assignments. I can set up a shared folder with permissions for just that single student and not the 'user360' account. The problem is that when I'm logged in as user360 and I try to open the 'StudentA' share, Windows never asks me for alternate credentials; it just refuses access because the user360 account is not allowed access. Can anyone suggest a fix for this?

  • ESET Remote Administrator Console showing infected files on a client, but threat log is empty

    - by Aron Rotteveel
    We recently deployed ESET NOD32 Antivirus on our small domain network and use the Remote Administrator console to manage everything remotely. On a recent full system scan, one of the clients shows 10 infected files in the scan log, of which 4 have been cleaned. The strange thing, however, is that the threat log is empty. Is there any reason why the threat log is empty? What has happened to the 6 remaining uncleaned files? Where can I view information on which files are infected and what they have been infected with? I know this can be done through the scan log properties screen, but with 958,790 files scanned I obviously do not want to browse through that list. Any help is appreciated.

  • How to set up a linux user that can only access a repository via ssh?

    - by GJ
    I have a Mercurial repository on a secure server, to which I want to grant secure access to an external user. I added a user account for him and set up public-key SSH authentication, so he can now push/pull changesets via SSH. My question is: how can I make this new account completely unable to do anything or access any data on the server other than the repository? He shouldn't even have the possibility of entering an interactive shell session, for example. Thanks
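
    A sketch of one common approach, an OpenSSH forced command in the account's authorized_keys so the key can only drive Mercurial's wire protocol against that one repository (the paths and the key itself are placeholders; Mercurial's contrib hg-ssh script does much the same with support for several repositories):

        # ~/.ssh/authorized_keys of the repository user -- a single line,
        # wrapped here only for readability
        command="hg -R /srv/hg/myrepo serve --stdio",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA...his-public-key... externaluser@example

    Keep the account's login shell as a normal shell, since sshd runs the forced command through it; the key options above are what prevent interactive sessions, forwarding and pty allocation.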

  • What can I do to prevent my user folder from being tampered with by malicious software?

    - by Tom Wijsman
    Let's assume some things:
    1. Back-ups run every X minutes, yet the things I save should be permanent.
    2. There's a firewall and virus scanner in place, yet there happens to be a zero-day attack on me.
    3. I am using Windows. (Although feel free to append Linux / OS X parts to your answer.)
    Here is the problem: any software can change anything inside my user folder. Tampering with the files could cost me my life, whether it's accessing, modifying or wiping them. So, what I want to ask is:
    1. Is there a permission-based way to disallow programs from accessing my files in any way by default?
    2. Extending on the previous question, can I ensure certain programs can only access certain folders?
    3. Are there other less obtrusive ways than using Comodo? Or can I make Comodo less obtrusive?
    For example, the solution should be proof against (DO NOT RUN):
        del /F /S /Q %USERPROFILE%

  • A separate user for each task?

    - by Mark Tomlin
    I just got a VPS server the other day. I'm new to server administration, but not that new to Ubuntu (11.04): I use it in my living room as the HTPC, and I had a previous VPS that I used on and off for a TeamSpeak server. This one I'm setting up for long-term use, so I would like to know the best practice when it comes to the websites and tasks the server performs.
    I understand that it could be beneficial to separate each website into its own user group or under its own username. I would set up nginx so that it could read all of the users' directories (and thus each website) but could not touch anything else. The same with TeamSpeak: should I make a user for TeamSpeak so that it operates within its own confined area, or is this overkill?
    I do have root access on the server, and my current plan is to run about 4 websites and a TeamSpeak server. My stack is Linux (Ubuntu 11.04), nginx, and PHP 5.4.3 (using the built-in PDO SQLite 3 driver for the database). Should PHP have its own user group, or is it OK to place it in with nginx?
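
    A hedged sketch of the per-site-user pattern on roughly this stack (names and paths are placeholders; the php-fpm pool file location depends on how PHP 5.4 was installed, and the Ubuntu nginx package runs its workers as www-data):

        # one unprivileged user per site, whose home is the site's document root
        useradd --system --create-home --home-dir /srv/www/site1 --shell /usr/sbin/nologin site1
        chgrp www-data /srv/www/site1
        chmod 750 /srv/www/site1        # nginx (www-data) may read, nobody else may enter

        # php-fpm pool running as that user (e.g. a site1.conf pool file):
        #   [site1]
        #   user = site1
        #   group = site1
        #   listen = /var/run/php-fpm-site1.sock

        # TeamSpeak likewise gets its own dedicated account
        useradd --system --create-home --shell /usr/sbin/nologin teamspeak

    With this layout nginx stays under its packaged www-data user, each PHP pool runs as its site's user, and a compromise of one site can't write into another site's directory.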

  • NTFS: Deny all permissions for all files, except where explicitly added

    - by Simon
    I'm running a sandboxed application as a local user. I now want to deny almost all file-system permissions for this user to secure the system, except for a few working folders and some system DLLs (I'll call this set of files and directories X below).
    The sandbox user is not in any group, so it shouldn't have any permissions, right? Wrong, because all "Authenticated Users" are members of the local "Users" group, and that group has access to almost everything.
    I thought about recursively adding deny ACL entries to all files and directories and removing them manually from X, but this seems excessive. I also thought about removing "Authenticated Users" from the "Users" group, but I'm afraid of unintended side effects; it's likely that other things rely on this. Is that correct?
    Are there better ways to do this? How would you limit the file-system permissions of a (very) untrustworthy account?

  • Lock down SFTP access on OpenSolaris

    - by Simon
    Hi all, I have an OpenSolaris 2009.06 server and I'd like to enable a user to remotely change files in a specific directory, ideally via SFTP or FTP-over-SSH. This user does not yet have an account on the machine, and I'd like to create it so that it's as restricted as possible. Is there a canonical way of doing this?
    I know about OpenSolaris' role-based access control and authorizations model, but I figure it's a lot of work (i.e., a lot I can mess up) to really lock down a full-blown user account (prevent fork bombs, make sure there's really no other file in the file system that can be written to...). Any hint is greatly appreciated. Thanks, Simon
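
    A sketch of the chrooted-SFTP pattern, with the caveat that it needs an sshd supporting internal-sftp and ChrootDirectory (OpenSSH 4.9 or newer; the SunSSH bundled with OpenSolaris 2009.06 may not have these, in which case an OpenSSH package would be required). The user name and paths are placeholders, and the chroot directory must be owned by root and not writable by anyone else:

        # sshd_config fragment
        Subsystem sftp internal-sftp

        Match User sftpuser
            ChrootDirectory %h              # the user's home; root-owned, mode 755
            ForceCommand internal-sftp
            AllowTcpForwarding no
            X11Forwarding no

        # create the account plus a writable subdirectory inside the root-owned chroot;
        # with internal-sftp the account never needs a working login shell
        useradd -d /export/sftp/sftpuser -s /bin/false sftpuser
        mkdir -p /export/sftp/sftpuser/files
        chown root /export/sftp/sftpuser
        chown sftpuser /export/sftp/sftpuser/files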
