Search Results

Search found 33454 results on 1339 pages for 'access token'.

Page 521/1339

  • fail2ban log parsing too slow on Raspberry Pi - options? [migrated]

    - by Gordon Morehouse
    I'm running fail2ban on a Raspberry Pi at 950MHz which I cannot overclock further. The Pi is occasionally subject to SYN floods on particular ports. I've set up iptables to throttle the rate of SYNs on the port of interest; when the throttle limits are exceeded, hosts which send SYNs are dropped into the REJECT chain and the particular SYN packet which exceeded the limit is logged. fail2ban then watches for these logged SYNs and, after seeing a few, temporarily bans the host for a short time (this is a transient issue in the app I'm working with). The problem is that the SYN floods can occasionally reach rates which are too fast for fail2ban to keep up with; I'll see 20-40 log messages per second, and eventually fail2ban falls behind and becomes ineffective. To add insult to injury, it continues consuming a LOT of CPU as it tries to catch up. I have verified that DROP-chained packets from hosts already banned by fail2ban are not logged, and thus do not add to its load. What are my options here? I have a few ideas, but no clear path forward. Could I make the log-parse regex "easier" so it takes fewer cycles? Would using iptables --log-prefix to put a token near the start of the log message, and/or otherwise simplifying/altering the fail2ban regex, help? Here is the current fail2ban config line containing a regex:
      failregex = kernel:.*?SRC=(?:::f{4,6}:)?(?P<host>[\w\-.^_]+) DST.*?SYN
    Is there a faster way for fail2ban to watch for the packets exceeding the limits than parsing kern.log? Could fail2ban be run under PyPy instead of CPython with minimal nonstandard wizardry (the OS is Raspbian 7, so, mostly Debian 7)? Is there something better than fail2ban that I could use to watch for the packets which exceed the SYN limits, and after N exceeds in X seconds, temporarily put the offending IP into the iptables DROP bucket, and take it out when the ban timer expires? Again, I'd vastly prefer a solution that uses as much software available in Debian as possible, though I can build Debian packages in a pinch.
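    A minimal sketch of the --log-prefix idea the asker floats, assuming a placeholder port 1234 (the port, rate limits and prefix are illustrative, not taken from the question): tag the over-limit SYNs with a fixed prefix and anchor the failregex on it, so fail2ban spends far less time matching each kernel line.
      # iptables: throttle SYNs, log the excess with a short prefix, then reject
      iptables -A INPUT -p tcp --dport 1234 --syn -m limit --limit 5/s --limit-burst 10 -j ACCEPT
      iptables -A INPUT -p tcp --dport 1234 --syn -j LOG --log-prefix "SYNLIMIT: " --log-level 4
      iptables -A INPUT -p tcp --dport 1234 --syn -j REJECT
      # fail2ban filter: the literal prefix lets the pattern lock on early
      # failregex = SYNLIMIT: .*SRC=<HOST>
    Whether this saves enough CPU on a 950MHz Pi is an open question, but a shorter pattern anchored on a literal token generally cuts the per-line matching cost.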

    Read the article

  • No external src ip in log files (my router ip appears instead)

    - by bongo_fury
    I recently retired my workhorse WRT54G router/AP in favor of a Linksys EA2700. Since then, all inbound traffic (bound to an Ubuntu 10.02 box running LAMP) logged to syslog, Apache's error and access logs, etc. (all behind said router) is getting logged with a src IP of 192.168.1.1, that of the router's internal IP. For example, here is an old entry from Apache's access.log:
      74.82.68.20 - - [22/Feb/2011:10:14:34 -0600] "GET /assets/css/style.css HTTP/1.1" 304 154 "http://example.com/view.php?event_id=1" "BlackBerry8520/5.0.0.822 Profile/MIDP-2.1 Configuration/CLDC-1.1 VendorID/100"
    And here is one since switching the router:
      192.168.1.1 - - [05/Oct/2012:21:29:25 -0500] "GET /somedir/print.css HTTP/1.1" 200 650 "http://example.com/somedir/" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1"
    That first field is the problem. Each and every entry in every log shows an "external" IP of 192.168.1.1, which isn't very helpful. Any ideas? Much thanks from a n00b!
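    A small diagnostic sketch (an assumption about where to look, not a confirmed cause): capturing inbound port-80 traffic on the LAMP box shows whether the source address is already rewritten to 192.168.1.1 at the packet level, which would point at the EA2700's NAT/forwarding behaviour rather than at Apache's logging.
      sudo tcpdump -n -i eth0 'tcp dst port 80' -c 20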

    Read the article

  • btrfs: can I create a btrfs file system with data as JBOD and metadata mirrored?

    - by Yogi
    I am trying to build a home server that will be my NAS/media server as well as the XBMC front end. I am planning on using Ubuntu with btrfs for the NAS part of it. The current setup consists of a 1TB HDD for the OS etc. and two 2TB HDDs for data. I plan to have the 2TB HDDs used as a JBOD btrfs system to which I can add HDDs as needed later, basically growing the filesystem online. The way I had set up the file system for testing was to have just one of the HDDs connected while installing the OS, with btrfs on it mounted as /data, and later on add another HDD to this filesystem. When the second disk was added, btrfs made the data RAID 0, with metadata being RAID 1. However, this presents a problem: even if one of the disks fails I lose all my data (mostly media). Also, most of the time the server will be running without doing any disk access, i.e. the HDDs can be spun down; when an access request comes in, with the current RAID 0 setup both disks will spin up, whereas with a JBOD only the disk that has the file needs to spin up. This should hopefully reduce the wear on each disk. So, is there a way in which I can have btrfs set up such that metadata is mirrored but data stays in a JBOD formation? Another question: I understand that a full drive failure in JBOD will lose the data on that drive, but with metadata mirrored across all drives, will this help the filesystem correct errors that might creep in (e.g. bit rot), and is btrfs capable of doing this?
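    A minimal sketch of the layout being asked about, with placeholder device names: data in the "single" profile (JBOD-style, each file kept whole on one drive) and metadata mirrored across the drives.
      mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc
      # adding a third drive later and rebalancing to the same profiles:
      btrfs device add /dev/sdd /data
      btrfs balance start -dconvert=single -mconvert=raid1 /data
    The device names and the /data mount point follow the question's description; this is a sketch, not a tested recipe for this particular box.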

    Read the article

  • Paused BitLocker encryption because no longer want to encrypt the hard drive - how do I get out of encrypting?

    - by Matthew Nagear
    I'm hoping someone can help. I'm not a techie and can't seem to find the same question already answered. I went to encrypt (using BitLocker) a Buffalo hard drive, but after it took about 1 min to reach 0.2% 'encrypted' I decided to pause and eject, thinking this would have ended the encryption process. However, I had already assigned a password to it, as instructed, before the encryption began. I've since connected my hard drive again and I'm asked to input the password. If I don't, I cannot access the drive. However, if I do, I CAN access the files, but straight away the little BitLocker Drive Encryption dialogue box comes up saying "Encryption in progress...", which I have the option to pause. So I hit Pause straight away, as I do not want to go through with the encryption. Is there any way I can stop the process and decrypt before encryption is completed? Or am I forever going to need to pause quickly and eject? I fear that will affect my hard drive in the longer term. Thanks.
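    One way out, sketched under the assumption that this is BitLocker To Go on a removable drive and that X: is its drive letter: BitLocker can be turned off mid-encryption from an elevated Command Prompt, which decrypts whatever has been encrypted so far rather than leaving it permanently paused.
      manage-bde -status X:
      manage-bde -off X:
    The same switch-off should also be reachable from the BitLocker Drive Encryption control panel as "Turn off BitLocker".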

    Read the article

  • How do I get a subdomain on Xampp Apache @ localhost?

    - by jasondavis
    UPDATE: I got it working now, I just had to change to The port number is important here. I just modified my Windows hosts file @ C:\Windows\System32\drivers\etc and added this to the end of it:
      127.0.0.1 images.localhost
      127.0.0.1 www.friendproject.com
      127.0.0.1 friendproject.com
    Then I modified my httpd-vhosts.conf file on Apache under XAMPP @ C:\webserver\apache\conf\extra. Under the part where it shows examples for adding a virtual host I added the code below:
      NameVirtualHost *:80
      <VirtualHost *:80>
          DocumentRoot /htdocs/images/
          ServerName images.localhost
      </VirtualHost>
      <VirtualHost *:80>
          DocumentRoot /htdocs/
          ServerName friendproject.com/
      </VirtualHost>
      <VirtualHost *:80>
          DocumentRoot /htdocs/
          ServerName www.friendproject.com/
      </VirtualHost>
    Now the problem is when I go to any of the newly added domains in the browser I get the error below, and even worse, I now get this error even when going to http://localhost/, which worked fine before doing this. I realize I can change everything back, but I really need to at least get http://images.localhost to work. What do I do?
      Access forbidden! You don't have permission to access the requested directory. There is either no index document or the directory is read-protected. If you think this is a server error, please contact the webmaster.
      Error 403 localhost 07/25/09 21:20:14 Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i PHP/5.2.9
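    A hedged guess at the missing piece (the paths are assumptions based on a default XAMPP layout, not taken from the question): the 403 usually means the DocumentRoot is not an absolute path and/or Apache has no <Directory> block granting access to it. An Apache 2.2-style sketch for the images vhost:
      <VirtualHost *:80>
          DocumentRoot "C:/webserver/htdocs/images"
          ServerName images.localhost
          <Directory "C:/webserver/htdocs/images">
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>
    Restoring http://localhost/ would likewise need a vhost (listed first) whose DocumentRoot points at the original htdocs directory, since once NameVirtualHost is enabled the main server's DocumentRoot no longer answers by itself.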

    Read the article

  • SQL Server Management Studio not scripting all objects

    - by Ian Boyd
    I've been attempting to script a database using SQL Server 2005 Management Studio. I cannot get it to script some objects. It scripts others, but skips some. I can provide detailed screenshots of:
      - the options being selected, including all tables
      - the folder where the script files will go
      - the folder being empty before scripting
      - the scripting process saying Success when scripting a table
      - the destination folder no longer empty, with a hundred or so script files
      - the script of some tables not being in the folder
    And earlier SSMS would not script some views. Is this a known thing, that the Generate Scripts task does not generate scripts?
    Update: Known issue on Microsoft Connect, but Microsoft couldn't repro the steps, so they closed the ticket. Fails on SQL Server 2005, also fails on SQL Server 2008.
    Update Two: Some basic questions:
      1. What version of SQL Server?
         Microsoft SQL Server 2000 - 8.00.194 (Intel X86)
         Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86)
         Microsoft SQL Server 2008 - 10.0.2531.0 (Intel X86)
         Microsoft SQL Server 2005 Management Studio: 9.00.4035.00
         Microsoft SQL Server 2008 Management Studio: 10.0.1600.22
      2. What O/S are you running on?
         Windows Server 2000, Windows Server 2003, Windows Server 2008
      3. How are you logging in to SQL Server?
         sa/password and Trusted authentication
      4. Have you verified your account has full access to all objects?
         Yes, I have access to all objects.
      5. Can you use the objects that fail to script? (e.g. select top(10) * from nonScriptingTable)
         Yes, all objects work fine. SQL Server Enterprise Manager can script the objects fine.
    Update Three: They fail no matter what version of SQL Server you script against. It wasn't a problem in Enterprise Manager:
      Client Tools    SQL Server 2000    SQL Server 2005    SQL Server 2008
      ============    ===============    ===============    ===============
      2000            Yes                n/a                n/a
      2005            No                 No                 No
      2008            No                 No                 No
    Update Four: No errors found in the database using:
      DBCC CHECKDB
      go
      DBCC CHECKCONSTRAINTS
      go
      DBCC CHECKFILEGROUP
      go
      DBCC CHECKIDENT
      go
      DBCC CHECKCATALOG
      go
      EXECUTE sp_msforeachtable 'DBCC CHECKTABLE (''?'')'
    Honk if you hate SSMS.

    Read the article

  • ISA 2006 SP1 - SSL Client Certificate Authentication in Workgroup Environment

    - by JoshODBrown
    We have an IIS6 website that was previously published using an ISA 2006 SP1 standard server publishing rule. In IIS we had required a client certificate be provided before the website could be accessed... this all worked fine and dandy. Now we wish to use a web publishing rule on ISA 2006 SP1 for this same website. However, it seems the client certificate doesn't get processed now, so of course the user can't access the website. I've read a few articles stating the CA for the certificate needs to be installed in the trusted root certificate authorities store on the ISA Server (I have done this), as well as installing the client certificate on the ISA Server (done as well). I have also verified that the ISA Server is able to access the CRL for our CA, no problem... In the listener properties for the web publishing rule, under Authentication and Client Authentication Method, there is an option for SSL Client Certificate Authentication... I select this, but it appears the only Authentication Validation Method selectable is Windows (Active Directory)... there is no Active Directory in this environment. When I configure the rule with the defaults, I then try to hit my website and it prompts for my certificate; I choose it and hit OK... then I'm given the following error:
      Error Code: 500 Internal Server Error. The server denied the specified Uniform Resource Locator (URL). Contact the server administrator. (12202)
    I check the event logs on the ISA Server and in the Security log I see Event ID 536, Failure Audit. The reason: the NetLogon component is not active. I think this is pretty obvious since there is no Active Directory available. Is there a way to make this web publishing rule work using client certificates in this workgroup environment? Any suggestions or links to helpful documents would be greatly appreciated!

    Read the article

  • Windows 7 - Windows XP - sharing - why isn't it working?

    - by durumdara
    Hi! This seems to be a "hardware" rather than a "software"/"programming" question, but I need to use this share in my programs, so it is close to programming. We had an XP-based wireless network. The server is XP Professional, the clients are XP Home (notebooks). This was working well with folder sharing (with user rights, not simple share). Then we replaced one of the notebooks with a Win7/x64 notebook. At first it could reach the server, and the other client too. Later I went to other sites and connected to other servers, other networks. And then, when I returned to this network, I found that I could not connect to this server. I see none of its resources, and when I try to double-click this computer I get a login window where I can type anything but can never log in... The interesting part: another XP Home machine can see the server and can log in as guest or with another user. The server can see the XP Home notebook. The Win7 machine can see the notebook's shared folders, and XP Home can see the Win7 shared folders. The server can see the Win7 folders, BUT the Win7 machine cannot see the server's folders. It cannot see the resources at all... The Win7 machine is in a "work" network group, and the workgroup name is not MSHOME. I tried everything on the server: I tried to remove the MS client, restore it with simple sharing, set a guest password, etc., but I have lost the ability to access this server from Win7. Does anyone have any idea what I need to check, or what I need to set, to access these resources and use them in my programs? Thanks for every info or link: dd
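    One setting often checked in Win7-to-XP sharing failures (an assumption, not a confirmed diagnosis for this case): Windows 7 defaults to NTLMv2-only authentication, which some XP shares refuse. Lowering the LAN Manager authentication level on the Win7 machine, from an elevated Command Prompt, makes it send LM/NTLM responses again:
      reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f
    The same value can be set through Local Security Policy under "Network security: LAN Manager authentication level"; a log-off or reboot is usually needed before retrying the share.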

    Read the article

  • Fix Corrupted Ruby in Mac OS X Lion

    - by luckyb56
    I screwed up my Ruby by executing the command
      sudo easy_install pip > /usr/bin/ruby -e "$(/usr/bin/curl -fksSL https://raw.github.com/mxcl/homebrew/master/Library/Contributions/install_homebrew.rb)"
    It showed:
      error: Couldn't find index page for '-e' (maybe misspelled?)
      No local packages or download links found for -e
      error: Could not find suitable distribution for Requirement.parse('-e')
    After that, when I tried to install Homebrew with:
      /usr/bin/ruby -e "$(/usr/bin/curl -fksSL https://raw.github.com/mxcl/homebrew/master/Library/Contributions/install_homebrew.rb)"
    it showed errors which I don't understand:
      /usr/bin/ruby: line 1: Searching: command not found
      /usr/bin/ruby: line 2: Best: command not found
      /usr/bin/ruby: line 3: Processing: command not found
      Usage: pip COMMAND [OPTIONS]
      pip: error: No command by the name pip 1.1 (maybe you meant "pip install 1.1")
      /usr/bin/ruby: line 5: Installing: command not found
      /usr/bin/ruby: line 6: Installing: command not found
      /usr/bin/ruby: line 8: Using: command not found
      /usr/bin/ruby: line 9: Processing: command not found
      /usr/bin/ruby: line 10: Finished: command not found
      /usr/bin/ruby: line 11: Searching: command not found
      /usr/bin/ruby: line 12: Reading: command not found
      /usr/bin/ruby: line 13: syntax error near unexpected token `('
      /usr/bin/ruby: line 13: `Scanning index of all packages (this may take a while)'
    Can this be fixed?
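    A sketch of how to confirm what happened and recover, under the assumption that the "> /usr/bin/ruby" redirection overwrote the ruby binary with easy_install's text output (which would explain the "command not found" lines above):
      file /usr/bin/ruby      # a healthy ruby reports "Mach-O ... executable"; plain text confirms the overwrite
      head -n 3 /usr/bin/ruby
      # restore the real binary from a Time Machine backup or copy it from another OS X 10.7 machine, e.g.:
      sudo cp /Volumes/Backup/usr/bin/ruby /usr/bin/ruby   # placeholder path to a known-good copy
      sudo chmod 755 /usr/bin/ruby
      sudo chown root:wheel /usr/bin/ruby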

    Read the article

  • Help Email Account Management among multiple users

    - by CogitoErgoSum
    So I preface this by saying it may belong in IT Security; not too sure, feel free to move. Currently we have an email account [email protected] - hosted via Google Apps (as is all our email). We had an incident where we had to terminate an employee. This employee, however, had the password for this account, as we have 20-30 people using it at any given point to manage customer emails etc. Thinking on this, I feel there must be a better way to manage access. With Google you can associate up to 10 email accounts with another; the problem is we have more like 20-30 people. We were evaluating tools such as SalesForce and Assistly, where people have their own login credentials and the system contains the appropriate SMTP information for the [email protected] address, so emails are sent from it rather than from a user's personal account. Aside from those options, does anyone have any other thoughts? One suggestion floated was moving everyone to desktop clients and saving the password info there, so they could only log in from their physical workstation, but we may have situations where we'd like employees to work remotely. Does anyone have experience with this sort of system, where ~20-30 people are responding from one email box, and how to manage security and access?

    Read the article

  • iptables port forwarding works only for localhost

    - by Venki
    Below is my iptables config. I use it to access a Node.js website running on port 9000 through port 80. This works fine only if I access the website through localhost / loopback. When I try to use the IP of eth0, which is assigned by my router through DHCP, it does not work; using an IP like 192.168.0.103 to access the website fails. I am not able to figure out what is wrong here; I've already burnt a day on this and still can't figure it out :( Edit (more information): earlier, I was using this configuration to develop the website, and I had configured the domain name to point to 127.0.0.1 in the /etc/hosts file. It was working fine, but now I am trying to deploy the website on a VPS with a static IP, and this configuration does not work with the static IP either.
      # redirect port 80 to port 9000
      *nat
      :PREROUTING ACCEPT [57:3896]
      :INPUT ACCEPT [0:0]
      :OUTPUT ACCEPT [4229:289686]
      :POSTROUTING ACCEPT [4239:290286]
      -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9000
      -A OUTPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9000
      COMMIT
      # Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL).
      -A INPUT -p tcp --dport 80 -j ACCEPT
      -A INPUT -p tcp --dport 443 -j ACCEPT
      -A INPUT -p tcp --dport 9000 -j ACCEPT
      -A INPUT -j REJECT
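    A hedged diagnostic sketch (assuming the app is the only listener on 9000 and eth0 is the externally reachable interface): the PREROUTING REDIRECT above is what handles traffic arriving from outside, so a common culprit is the Node app listening only on 127.0.0.1:9000, in which case redirected external connections have nothing to reach. These two checks show where the app listens and whether the NAT rule's packet counters move on an external request:
      sudo netstat -tlnp | grep 9000
      sudo iptables -t nat -L PREROUTING -n -v
    If netstat shows 127.0.0.1:9000 rather than 0.0.0.0:9000, binding the app to 0.0.0.0 (or the eth0 address) is the usual fix.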

    Read the article

  • Connecting to server from remote machine

    - by Jannat Arora
    I wish to connect my machine to a server in some other city. To do so I am using the following command:
      mstsc -v ip_address_of_server
    It responds:
      Remote Desktop can't connect to the remote computer for one of these reasons:
      1) Remote access to the server is not enabled.
      2) The remote computer is turned off.
      3) The remote computer is not available on the network.
      Make sure the remote computer is turned on and connected to the network, and that remote access is enabled.
    As per previous posts I need to turn off my client computer's firewall, which I have... but it still gives me the same message. Can someone please help me out as to how I may resolve this? I am really new to networking, etc. Also, when I ping:
      ping ip_address_of_server
    I get the following response:
      Reply from ip_address_of_server: destination host unreachable
    Also, I did try on Ubuntu with rdesktop; it still has not been able to connect. Also, I know there are other people who are able to connect their machines to the server remotely, so I guess it's not working for me only. Also, when I accessed the same machine through the LAN I was able to do so.

    Read the article

  • Mac OS X: remove write-protect flag from file in Terminal

    - by Albert
    Hi, I have a file on a FAT32 volume which is shown as write-protected in Finder (so I cannot move it). Removing that write-protected flag in the information dialog works just fine. However, I have many more such files and I thus want to do it via Terminal. I already tried 'chmod +w', but that didn't work. 'ls -la' showed me that the permissions are already fine ("-rwxrwxrwx 1 az az" where az is my user account). Then I thought this might be stored in some xattr properties, but 'xattr -l' didn't give me any entry. Then I thought this might be some ACL setting (whereby I thought they would be stored as xattr, but let's try it anyway) - a Google search returned something about 'chmod -a' or 'chmod -i' or so. All these tries only give me "chmod: No ACL currently associated with file" or "chmod: Failed to set ACL on file...: Operation not permitted". But I definitely have no write access to the file, because I cannot move it or make any other change to it (in Terminal). Removing the write-access flag in Finder solves that.
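    The behaviour described (Finder shows the file as locked, POSIX permissions already rwx, no ACLs) usually points at the BSD "user immutable" flag rather than permissions; that flag is what Finder's lock checkbox toggles. A sketch, with placeholder paths:
      ls -lO /Volumes/FAT32VOL/somefile        # the flags column shows "uchg" when the lock is set
      chflags nouchg /Volumes/FAT32VOL/somefile
      chflags -R nouchg /Volumes/FAT32VOL/somedir   # clear it recursively for many files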

    Read the article

  • setting up a proxy to mirror an SSH SOCKS connection

    - by aresnick
    I have two remote machines, remote1 and remote2. remote2 is only running sshd, and I can't run anything else on it. remote1 is a full-fledged server to which I have complete access. I can run a SOCKS proxy through remote2 via
      ssh -f -N -D *:8080 me@remote2
    which lets me expose a SOCKS proxy on port 8080 on remote1. I'd like to authenticate this so that the proxy isn't sitting open. How can I do this? It seems like I should be able to use DeleGate, but I can't even get its HTTP proxy functionality working. When I run
      delegated -r -P8081 SERVER=http PERMIT="*:*:*" REMITTABLE="*"
    I can't even get it to work on port 8081. Anyway, I was hoping someone could point me in the right direction to authenticate access to the SOCKS proxy connection? That is, I want to be able to point my browser's proxy at remote1 and browse the internet through the SSH SOCKS proxy/tunnel to remote2. squid doesn't support a SOCKS parent =( Thanks!
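    One way to avoid leaving the proxy open, sketched as an alternative rather than a DeleGate fix: bind the dynamic forward to loopback on remote1, then have each user bring their own authenticated SSH tunnel to remote1 and ride through it.
      # on remote1: SOCKS proxy to remote2, listening on loopback only
      ssh -f -N -D 127.0.0.1:8080 me@remote2
      # on each client: forward a local port to remote1's loopback proxy
      ssh -f -N -L 8080:127.0.0.1:8080 me@remote1
    The browser then points its SOCKS proxy at localhost:8080, and "authentication" is simply the SSH login to remote1.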

    Read the article

  • Matlab computations done over Apple Filing Protocol (AFP) depend on POSIX permissions, ignores ACLs

    - by flumignan
    I'm a system administrator and have never used Matlab, so forgive my general ignorance of the program. My users have encountered problems when executing scripted Matlab actions over AFP to a Mac OS X Server 10.6.7, where the access control list (ACL) should allow the actions but the POSIX-style permissions disallow the activity. It seems as if Matlab, run locally on the Mac workstations on datasets on the remote server, ignores the ACLs entirely. This is the only application I've ever seen behave this way. The server's filesystem is HFS+J and all other activity is performing as expected. These users cannot use CIFS because of our integration with external directory systems. In this example, in the directory bxdata, the members of the group cibturner should be able to modify the files. Indeed, they can, using any method except Matlab scripts. When the Matlab script hits these files, the POSIX permissions of 644 disallow modification. It's as if the ACLs are irrelevant.
      [root@cib 16:00:24 /14181.2_5sM]# ls -leh@ bxdata/
      total 128
      -rw-r--r--+ 1 kel32 staff 18K Feb 15 09:31 TS-5sMath030708-21073-1.edat
       0: group:cibturner inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
       1: group:cibsrlocaladmins inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
       2: group:crcservergroup inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
      -rw-r--r--+ 1 kel32 staff 25K Feb 15 09:31 TS-5sMath030708-21073-1.txt
       0: group:cibturner inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
       1: group:cibsrlocaladmins inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
       2: group:crcservergroup inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
    Because this server has HIPAA data, security is critical. We are not using networked home directories or SAN technology. The Matlab program is run from the user's hard drive; access is granted via Kerberized AFP.

    Read the article

  • Hiding a Website from Search Engine Bots and Viewers by Disabling Default VirtualHost

    - by Basel Shishani
    When staging a website on a remote VPS, we would like it to be accessible to team members only, and we would also like to keep the search engine bots off until the site is finalized. Access control by host, whether in iptables or Apache, is not desirable, as accessing hosts can vary. After some reading in the Apache config and other SF postings, I settled on the following design, which relies on restricting access to only through specific domain names. The default virtual host would be disabled in the Apache config as follows - relying on Apache's behavior of using the first virtual host as the site default:
      <VirtualHost *:80>
          # Anything matching this should be silently ignored.
      </VirtualHost>
      <VirtualHost *:80>
          ServerName secretsiteone.com
          DocumentRoot /var/www/secretsiteone.com
      </VirtualHost>
      <VirtualHost *:80>
          ServerName secretsitetwo.com
          ...
      </VirtualHost>
    Then each team member can add the domain names in their local /etc/hosts:
      xx.xx.xx.xx secrethostone.com
    My question is: is the above technique good enough to achieve the above-said goals, especially restricting SE bots, or is it possible that bots would work around it? Note: I understand that mod_rewrite rules can be used to achieve a similar effect, as discussed here: How to disable default VirtualHost in apache2?, so the same question would apply to that technique too. Also please note: the content is not highly secretive - the idea is not to devise something that is hack-proof, so we are not concerned about traffic interception or the like. The idea is to keep competitors and casual surfers from viewing the content before it's released, and to prevent SE bots from indexing it.
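    A slightly more explicit variant of that catch-all, offered as a sketch rather than a confirmed requirement: give the first vhost a throwaway ServerName and return 404 for everything, so requests arriving by bare IP or an unknown Host header (which is how bots and casual visitors would arrive) get nothing rather than falling through to a real site.
      <VirtualHost *:80>
          ServerName default.invalid
          Redirect 404 /
      </VirtualHost>
    Bots that somehow learn the real hostnames would still get in, so this only holds as long as the staging names stay unpublished (no DNS records, no links anywhere public).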

    Read the article

  • SFTP only works occasionally

    - by 82din
    I suddenly get this error using SFTP:
      Status: Connecting to example.com...
      Response: fzSftp started
      Command: open "[email protected]" 22
      Command: Pass: *********
      Status: Connected to example.com
      Status: Retrieving directory listing...
      Command: pwd
      Response: Current directory is: "/root"
      Command: ls
      Status: Listing directory /root
      Error: Connection timed out
      Error: Failed to retrieve directory listing
    I tried using FileZilla, Cyberduck, and the shell (Terminal): same result. However, it worked fine today (just for a few seconds) in passive mode. I guess something changed in my network, so I have tried both active and passive mode:
      Connecting to probe.filezilla-project.org
      Response: 220 FZ router and firewall tester ready
      USER FileZilla
      Response: 331 Give any password.
      PASS 3.6.0.2
      Response: 230 logged on.
      Checking for correct external IP address
      Retrieving external IP address from http://checkip.dyndns.org:8245/
      Checking for correct external IP address
      IP <external IP> big-bf-ccc-f
      Response: 200 OK
      PREP 49565
      Response: 200 Using port 49565, data token 380352881
      PORT 186,15,222,5,193,157
      Response: 200 PORT command successful
      LIST
      Response: 150 opening data connection
      Response: 503 Failure of data connection. Server sent unexpected reply.
      Connection closed
    Because I'm working behind a router, I get my external IP from http://checkip.dyndns.org:8245/. I also tested different ranges of ports.

    Read the article

  • NetApp NDMP backup with BE 2010 R2 works, restore fails

    - by uuwe
    Hi, I'm having some issues with a new Backup Exec 2010 R2 installation. I configured a NetApp FAS2020 as an NDMP device and want to backup files from the NAS to a tape drive connected to my backup server. I set up ndmpd according to this document (http://www.symantec.com/business/support/index?page=content&id=TECH48957) and created a separate backup user (http://filers.blogspot.com/2006/09/setting-veritas-netbackup-with-non.html). Backup works perfectly, but restoring any file gives me an authentication failed error. The NDMP device has a "global" ndmp user configured in the device tab (tried this with the newly created ndmpd backup user and the netapp root) and I can also configure separate resource credentials in the BE restore job. I have tried setting the same accounts for the "global" ndmp device and the restore credentials and have also tried setting different accounts for them. NDMP debug level is at 5 and this is what shows up in /etc/messages. The session is closed immediately after it has been granted.
      16:12:07 PST [Java_Thread:info]: ndmpdserver: ndmpd.access allowed for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000
      16:12:07 PST [Java_Thread:info]: Ndmpd51: ndmpd session closed successfully for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000
    Running wireshark on the backup server doesn't produce much. It shows a SYN - SYN/ACK - NDMP CONNECT_CLOSE Request from the backup server. The Resource Credentials for the restore job behave very oddly. If I enter NDMP credentials and do "Test All" it fails. If I use my regular domain backup account, it is successful. There are no failed or succeeded logons in the NetApp ndmp log and tracing this check shows that it doesn't even connect to the NAS. This makes me think that this is more likely flaky BE behaviour rather than misconfiguration of the NAS. Here is the options ndmp output:
      FAS2020-1 options ndmp
      ndmpd.access                 all
      ndmpd.authtype               challenge
      ndmpd.connectlog.enabled     on
      ndmpd.enable                 on
      ndmpd.ignore_ctime.enabled   off
      ndmpd.offset_map.enable      on
      ndmpd.password_length        16
      ndmpd.preferred_interface    disable
      ndmpd.tcpnodelay.enable      off

    Read the article

  • Filtered Router Interface

    - by jviotti
    I'm having some problems with a Scientific-Atlanta DPR2320R2, specifically with the WiFi. A few months ago I changed its password and username and now I can't remember them. So I tried cracking it with Hydra, but that made things worse: the web admin content was rendered partially and threw lots of errors. I then reset the router. After that I found myself able to browse the web from an ethernet-connected PC. WiFi is configured by registering the device's MAC address, and since the router was reset the registered MAC addresses were lost. No device could connect to the WiFi; in fact, the device does not even recognize the network. I tried pointing to 192.168.0.1 to re-establish the MACs, but I couldn't connect to the router's access point. Tried listing hosts:
      $ nmap -sP 192.168.0.0/24
      Starting Nmap 5.00 ( http://nmap.org ) at 2012-12-11 01:18 ART
      Host 192.168.0.1 is up (0.0018s latency).
      Host 192.168.0.11 is up (0.00025s latency).
      Nmap done: 256 IP addresses (2 hosts up) scanned in 59.62 seconds
    Then I checked that 192.168.0.1 was really up by sending pings; it responded to all of them. I quick-scanned the access point:
      $ nmap 192.168.0.1
      Starting Nmap 5.00 ( http://nmap.org ) at 2012-12-11 01:08 ART
      Interesting ports on 192.168.0.1:
      Not shown: 999 closed ports
      PORT   STATE    SERVICE
      80/tcp filtered http
      Nmap done: 1 IP address (1 host up) scanned in 6.73 seconds
    Note the state of port 80: FILTERED. I'm pretty confused now. Any suggestion would be appreciated. Thanks in advance.

    Read the article

  • running automated fsck on remote server

    - by GriffinHeart
    I had another question about df, and now I've come to the conclusion that I need to run fsck on my partition. I've been reading about it and would like some advice, if possible. The situation is this: no physical access to the server, and I want to run fsck. From what I've read, I just need to touch /forcefsck and when I reboot it will run fsck. My question is, at its basis: with what arguments will fsck run? Will it need user input to correct errors, etc.? And after running, will it save a log of what happened? If it ran like this it would be perfect; is there any way of enforcing that on reboot?
      fsck -v -p /machine/disk/p1 2>&1 > fscklog.txt
    Also, here they describe this: "it's also a good idea on debian and debian-derivatives like ubuntu to edit /etc/default/rcS on remote servers and set FSCKFIX=yes; that adds -y to the boot time fsck, so it doesn't risk the remote server being stuck waiting for someone to login at the console and run fsck." But on CentOS that doesn't seem to exist. I only have SSH access at the moment, so that is why I'm being so picky with it. Here's some info about disks and mounted volumes on the server: http://pastebin.centos.org/33314 Thanks.
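    A hedged sketch of the CentOS-side equivalent (based on how /etc/rc.d/rc.sysinit behaves on CentOS 5/6; worth verifying against the installed script before relying on it): /forcefsck forces the check and a /fsckoptions file supplies extra arguments, so "-y" there should answer fsck's prompts automatically on the next boot.
      touch /forcefsck
      echo "-y" > /fsckoptions
      reboot
    The boot-time fsck output typically lands in the boot logs rather than a file of your choosing, so "grep -i fsck /var/log/boot.log /var/log/messages" afterwards is one place to look for what it did.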

    Read the article

  • What keeps you from changing your public IP address and wreaking havoc?

    - by Whitemage
    An interesting question was asked of me and I did not know what to answer, so I'll ask here. Let's say I subscribed to an ISP and I'm using cable internet access. The ISP gives me a public IP address of 60.61.62.63. What keeps me from changing this IP address to, let's say, 60.61.62.75 and messing with another consumer's internet access? For the sake of this argument, let's say that this other IP address is also owned by the same ISP. Also, let's assume that it's possible for me to go into the cable modem settings and manually change the IP address. Under a business contract where you are allocated static addresses, you are also assigned a default gateway, a network address and a broadcast address, so that's 3 addresses the ISP "loses" to you. That seems very wasteful for dynamically assigned IP addresses, where the majority of customers are. Could they simply be using static ARPs? ACLs? Other simple mechanisms? Anyone who has worked at an ISP willing to explain this a bit?

    Read the article

  • How do I set up DNS with nic.io to point to an AWS EC2 server?

    - by Chad Johnson
    I purchased a domain one week ago via nic.io. I have elected to provide my own DNS [because they provided no other option]. I'm trying to point my .io domain at my EC2 server instance. I've allocated an Elastic IP and associated it with the instance. I can SSH into the instance and access port 80 via the IP address just fine. The IP is 54.235.201.241. nic.io support said the following: "You have selected to provide your own DNS and therefore if there is an issue with the set-up of the name servers you will need to contact your DNS provider." So, I created a Hosted Zone via Route 53 in AWS. This created NS and SOA records. I then set the Primary and Secondary servers at nic.io's domain admin page to the SOA record domains. Additionally, I set the optional servers to the NS domains. I did this two days ago, and I still can't access the server via the domain. I ran a DNS check here and am still not sure what I need to do: http://mydnscheck.com/?domain=chadjohnson.io&ns1=&ns2=&ns3=&ns4=&ns5=&ns6=. I have no idea what I'm supposed to do. Does anyone have any ideas?
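    A sketch of the usual Route 53 arrangement, under the assumption that the delegation rather than EC2 is the problem: the four hosts listed in the hosted zone's NS record set (names of the form ns-123.awsdns-45.com; the exact ones come from the zone) go into nic.io's primary/secondary/optional server fields, and an A record in the hosted zone points the domain apex at the Elastic IP. The SOA value is not a delegation target. A hypothetical CLI version of the A record, with a placeholder zone ID:
      aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":{"Name":"chadjohnson.io","Type":"A","TTL":300,"ResourceRecords":[{"Value":"54.235.201.241"}]}}]}'
    The same record can be created in the Route 53 console; registry delegation changes can take a while to propagate, which "dig NS chadjohnson.io" against a public resolver would show.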

    Read the article

  • nginx rewrite or internal redirection cycle

    - by gyre
    I'm banging my head against a table trying to figure out what is causing a redirection cycle in my nginx configuration when trying to access a URL which does not exist. The configuration goes as follows:
      server {
          listen 127.0.0.1:8080;
          server_name .somedomain.com;
          root /var/www/somedomain.com;
          access_log /var/log/nginx/somedomain.com-access.nginx.log;
          error_log /var/log/nginx/somedomain.com-error.nginx.log debug;
          location ~* \.php.$ {
              # Proxy all requests with an URI ending with .php*
              # (includes PHP, PHP3, PHP4, PHP5...)
              include /etc/nginx/fastcgi.conf;
          }
          # all other files
          location / {
              root /var/www/somedomain.com;
              try_files $uri $uri/ ;
          }
          error_page 404 /errors/404.html;
          location /errors/ {
              alias /var/www/errors/;
          }
          # this loads custom logging configuration which disables favicon error logging
          include /etc/nginx/drop.conf;
      }
    This domain is a simple STATIC HTML site, just for some testing purposes. I'd expect the error_page directive to kick in in response to PHP-FPM not being able to find the given files, as I have fastcgi_intercept_errors on; in the http block and have error_page set up, but I'm guessing the request fails even before that, somewhere on internal redirects. Any help would be much appreciated.
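    A hedged guess at a fix rather than a confirmed diagnosis: when try_files ends in "$uri/", a complete miss triggers another internal redirect and another round of location matching instead of a plain 404, which can produce exactly this cycle. Giving try_files an explicit final action and marking the error-page location internal is the usual shape of the cure:
      location / {
          try_files $uri $uri/ =404;
      }
      location /errors/ {
          alias /var/www/errors/;
          internal;
      }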

    Read the article

  • MySQL cannot resolve hostnames when checking privileges

    - by Fabio
    I'm going crazy trying to solve this. I have a MySQL installation (on machine db.example.org) which doesn't resolve a given hostname. I granted privileges using hostnames, i.e.:
      GRANT USAGE ON *.* TO 'user'@'host1.example.org' IDENTIFIED BY PASSWORD 'secret'
      GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, INDEX ON `my_database`.* TO 'user'@'host1.example.org'
    However, when I try to connect using mysql -u user -p -h db.example.org I obtain:
      ERROR 1045 (28000): Access denied for user 'user'@'192.168.11.244' (using password: YES)
    I already checked for correct name resolution in the DNS system:
      $ dig -x 192.168.11.244
      ;; ANSWER SECTION:
      244.11.168.192.in-addr.arpa. 68900 IN PTR host1.example.org.
    I've also checked for the skip-name-resolve option in the MySQL variables; in fact, I can access from another machine on the same subnet using hostname privileges. The only difference is that host1.example.org and db.example.org point to the same IP on the same machine, i.e. both db.example.org and host1.example.org have IP 192.168.11.244. In this way all the applications using that database can use the name db.example.org, and we can move the data to other hosts (if needed) just by changing the DNS record, leaving the application code unchanged. What should I do to solve this, or at least to understand what's happening?
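    A hedged diagnostic sketch (an assumption about the mechanism, not a confirmed root cause): when checking privileges, MySQL does a reverse lookup of the client address and then re-resolves the resulting name forward, and the host-based grant only matches if the two agree. So the thing to verify, from the database server itself, is that host1.example.org resolves back to 192.168.11.244 and nothing else:
      dig -x 192.168.11.244 +short    # expect host1.example.org.
      dig host1.example.org +short    # expect exactly 192.168.11.244
    As a test (not a fix), a grant by address sidesteps name resolution entirely:
      GRANT SELECT ON `my_database`.* TO 'user'@'192.168.11.244' IDENTIFIED BY 'secret';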

    Read the article

  • What could cause a huge packet loss in Ubuntu 9.10, for both wired and wireless?

    - by xzenox
    I was previously using 9.04 fine (and in fact, I am posting this from my old 9.04 live CD). I tested the following install steps in a VirtualBox VM prior to following the same ones to upgrade my laptop:
      1. Download/burn the Ubuntu minimal CD (the 12 MB one)
      2. Install Ubuntu minimal
      3. sudo apt-get update
      4. sudo apt-get upgrade
      5. sudo apt-get ubuntu-desktop ubuntu-standard
    In the VM this worked fine and I found myself with a working 9.10 Ubuntu; the network worked fine and I was able to test my backups and DropBox without a hitch (the host was 9.04). When I followed the same steps on my laptop, everything worked up to the point where 9.10 was installed and working. As far as I can tell, everything besides eth0/wireless works. For some reason, I am unable to access the internet. Ping reports that over 99% of packets get lost (over an hour or so of pinging). This means, for example, that if I try hard enough I can load a webpage, but only at the cost of much patience... This happens both for a wired and a wireless connection to my WRT310N (updated with the latest firmware). At first I thought it could be related to the IPv6 issues people have been experiencing; however, even after disabling IPv6 at the kernel level (through GRUB), I still get the issue. I do not think this is related to DNS issues or the like, since even when I ping my ISP's gateway IP I have the same amount of packet loss, and no DNS resolving should be required there. Access to my router works peachy with no packet loss there. I've tried different MTU values but to no avail. Note that this issue affects every web-enabled application: Firefox, ping, Synaptic, etc. The same hardware/router combo works with 9.04 but not with 9.10. In fact, when I did sudo apt-get ubuntu-desktop ubuntu-standard after 9.10 minimal was installed, it downloaded over 400 MB of packages without a hitch, so my guess is that one of the packages in either ubuntu-desktop or ubuntu-standard is causing havoc. Thoughts?

    Read the article
