Search Results

Search found 5866 results on 235 pages for 'authentication'.

  • NetApp NDMP backup with BE 2010 R2 works, restore fails

    - by uuwe
    Hi, I'm having some issues with a new Backup Exec 2010 R2 installation. I configured a NetApp FAS2020 as an NDMP device and want to backup files from the NAS to a tape drive connected to my backup server. I set up ndmpd according to this document (http://www.symantec.com/business/support/index?page=content&id=TECH48957) and created a separate backup user (http://filers.blogspot.com/2006/09/setting-veritas-netbackup-with-non.html). Backup works perfectly, but restoring any file gives me an authentication failed error.

    The NDMP device has a "global" ndmp user configured in the device tab (tried this with the newly created ndmpd backup user and the netapp root) and I can also configure separate resource credentials in the BE restore job. I have tried setting the same accounts for the "global" ndmp device and the restore credentials and have also tried setting different accounts for them.

    NDMP debug level is at 5 and this is what shows up in /etc/messages. The session is closed immediately after it has been granted.

        16:12:07 PST [Java_Thread:info]: ndmpdserver: ndmpd.access allowed for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000
        16:12:07 PST [Java_Thread:info]: Ndmpd51: ndmpd session closed successfully for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000

    Running wireshark on the backup server doesn't produce much. It shows a SYN - SYN/ACK - NDMP CONNECT_CLOSE Request from the backup server. The Resource Credentials for the restore job behave very oddly. If I enter NDMP credentials and do "Test All" it fails. If I use my regular domain backup account, it is successful. There are no failed or succeeded logons in the NetApp ndmp log and tracing this check shows that it doesn't even connect to the NAS. This makes me think that this is more likely flaky BE behaviour rather than misconfiguration of the NAS.

    Here is the options ndmp output:

        FAS2020-1> options ndmp
        ndmpd.access                 all
        ndmpd.authtype               challenge
        ndmpd.connectlog.enabled     on
        ndmpd.enable                 on
        ndmpd.ignore_ctime.enabled   off
        ndmpd.offset_map.enable      on
        ndmpd.password_length        16
        ndmpd.preferred_interface    disable
        ndmpd.tcpnodelay.enable      off
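    One detail worth checking - an assumption, not something confirmed in the post: with ndmpd.authtype set to challenge, Data ONTAP expects the NDMP-specific password generated on the filer rather than the account's normal login password, and Backup Exec validates the restore job's resource credentials separately from the device's "global" NDMP account. A minimal sketch (the user name below is hypothetical):

        FAS2020-1> ndmpd password backupuser
        # prints a generated NDMP password for that account; enter this string,
        # not the account's regular password, as the NDMP credentials both on the
        # device tab and in the restore job's resource credentials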

  • Will Parallel-port dongle work on USB-to-Parallel Adapter?

    - by Gary M. Mugford
    We have a niche program running on a Win2K laptop that uses a security dongle connected to a parallel port for authentication. The laptop is getting creaky and I spent a frustrating night last night shopping various websites for a new laptop that had a parallel port. Seems I'm about three years late [G]. The question I have is: if I buy a new(ish) laptop and use a USB-to-Parallel Port adapter, will the security dongle work? I know I'm not being specific about the app, but it's one most people wouldn't have heard of anyway.

    I've been guessing the answer to my question is no, since the app won't know to send a request out to the non-existent port. But, if the process actually is that the dongle sends a message INTO the computer every now and then, then it might work. And, I'm not sure whether the dongle is only needed at program startup time or randomly. The dongle is a 'permanent' addition to the old laptop.

    This is all about the money. We can have a newly-updated version of the program (which won't add any features we need) for the princely sum of $2700. Or we can spend $500 on a refurbed laptop still running WinXP, add a 30 buck adapter and keep the same solid, stolid performance we've come to appreciate. But it all comes down to the dongle behaviour. Oh, and a dock won't work. The whole laptop issue is about moving about the various nooks and crannies of the building with laptop in hand. Thanks for any suggestions/guidance. GM

  • What breaks in a Windows domain if a member has a high time skew?

    - by Ryan Ries
    It's taken for granted by most IT people that in a Windows domain, if a member server's clock is off by more than 5 minutes (or however many minutes you've configured it for) from that of its domain controller - logons and authentications will fail. But that is not necessarily true. At least not for all authentication processes on all versions of Windows.

    For instance, I can set my time on my Windows 7 client to be skewed all to heck - logoff/logon still works fine. What happens is that my client sends an AS_REQ (with his time stamp) to the domain controller, and the DC responds with KRB_AP_ERR_SKEW. But the magic is that when the DC responds with the aforementioned Kerberos error, the DC also includes his time stamp, which the client in turn uses to adjust his own time and resubmits the AS_REQ, which is then approved. This behavior is not considered a security threat because encryption and secrets are still being used in the communication. This is also not just a Microsoft thing. RFC 4120 describes this behavior.

    So my question is: does anyone know when this changed? And why is it that other things fail? For instance, Office Communicator kicks me off if my clock starts drifting too far out. I really wish to have more detail on this.

    edit: Here's the bit from RFC 4120 that I'm talking about:

        If the server clock and the client clock are off by more than the policy-determined clock skew limit (usually 5 minutes), the server MUST return a KRB_AP_ERR_SKEW. The optional client's time in the KRB-ERROR SHOULD be filled out. If the server protects the error by adding the Cksum field and returning the correct client's time, the client SHOULD compute the difference (in seconds) between the two clocks based upon the client and server time contained in the KRB-ERROR message. The client SHOULD store this clock difference and use it to adjust its clock in subsequent messages. If the error is not protected, the client MUST NOT use the difference to adjust subsequent messages, because doing so would allow an attacker to construct authenticators that can be used to mount replay attacks.
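    One way to look at the skew itself, independent of whether Kerberos corrects for it, is to sample the offset between the member and its DC with w32tm; this helps separate the KRB_AP_ERR_SKEW round-trip described above from applications such as Communicator that do their own timestamp checks. A small sketch, with a hypothetical DC name:

        rem sample the clock offset against the DC a few times without changing anything
        w32tm /stripchart /computer:dc01.example.com /samples:5 /dataonly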

  • Mac Share Points automatically authenticate with matching Windows AD credentials from Windows

    - by Ron L
    I recently started administering an OS X server (10.8) that is on the same network as our AD domain. While setting up Mac Share Points, I encountered some odd behavior that I hope someone can explain. For the purposes of this example assume the following:

    1) Local User on OS X Server: frank, password: Help.2012
    2) AD Domain User: frank, password: Help.2012
    3) AD Domain: mycompany
    4) OS X Server hostname: macserver (not bound to AD, not running OD)

    When joined to the domain on a Win 7 computer and logged in as frank and accessing the shares at \\macserver, it automatically authenticates using frank's OS X credentials (because they are the same). However, if I change frank's OS X password, the standard Windows authentication dialog pops up preset to use frank's AD domain (mycompany\frank). However, after entering the new OS X password, it will not authenticate without changing the domain to local (.\frank).

    Basically, if a user in AD has the same user name and password in OS X, it will authenticate automatically regardless of the domain. If the passwords differ, authenticating to the OS X shares must be done from the local machine.

    (And slightly off topic - how come an OS X administrator can access the root drives on the Mac server from Windows when accessing the Mac shares even when they aren't shared? In other words, it will show all the shared folders from "File Sharing" plus whatever drives are mounted in OS X.)

  • Why can a local root turn into any LDAP user?

    - by Daniel Gollás
    I know this has been asked here before, but I am not satisfied with the answers and don't know if it's ok to revive and hijack an older question. We have workstations that authenticate users on an LDAP server. However, the local root user can su into any LDAP user without needing a password. From my perspective this sounds like a huge security problem that I would hope could be avoided at the server level. I can imagine the following scenario where a user can impersonate another, and I don't know how to prevent it:

    1. UserA has limited permissions, but can log into a company workstation using their LDAP password. They can cat /etc/ldap.conf and figure out the LDAP server's address and can ifconfig to check out their own IP address. (This is just an example of how to get the LDAP address; I don't think that is usually a secret, and obscurity is not hard to overcome.)
    2. UserA takes out their own personal laptop, configures authentication and network interfaces to match the company workstation and plugs in the network cable from the workstation to their laptop, boots and logs in as local root (it's his laptop, so he has local root).
    3. As root, they su into any other user on LDAP that may or may not have more permissions (without needing a password!), but at the very least, they can impersonate that user without any problem.

    The other answers on here say that this is normal UNIX behavior, but it sounds really insecure. Can the impersonated user act as that user on an NFS mount for example? (The laptop even has the same IP address.) I know they won't be able to act as root on a remote machine, but they can still be any other user they want! There must be a way to prevent this on the LDAP server level right? Or maybe at the NFS server level? Is there some part of the process that I'm missing that actually prevents this? Thanks!!
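    On the NFS side, the usual answer to exactly this UID-spoofing scenario is Kerberized NFS: with sec=krb5 (or krb5i/krb5p) every access needs a per-user Kerberos ticket, so a rogue local root who su's to a stolen UID gets nothing without that user's credentials. A minimal sketch of an export, with hypothetical paths and hostnames, offered as an illustration rather than a drop-in config:

        # /etc/exports on the NFS server
        # sec=krb5p = Kerberos authentication plus integrity and privacy;
        # plain AUTH_SYS UID spoofing no longer grants access
        /export/home  *.example.com(rw,sec=krb5p,root_squash)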

  • error in auth.log but can login; LDAP/PAM

    - by Peter
    I have a server running OpenLDAP. When I start a ssh-session I can log in without problems, but an error appears in the logs. This only happens when I log in with a LDAP account (so not with a system account such as root). Any help to eliminate these errors would be much appreciated.

    The relevant piece from /var/log/auth.log:

        sshd[6235]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=example.com user=peter
        sshd[6235]: Accepted password for peter from 192.168.1.2 port 2441 ssh2
        sshd[6235]: pam_unix(sshd:session): session opened for user peter by (uid=0)

    pam common-session:

        session [default=1]   pam_permit.so
        session required      pam_unix.so
        session optional      pam_ldap.so
        session required      pam_mkhomedir.so skel=/etc/skel umask=0022
        session required      pam_limits.so
        session required      pam_unix.so
        session optional      pam_ldap.so

    pam common-auth:

        auth [success=1 default=ignore]   pam_ldap.so
        auth required                     pam_unix.so nullok_secure use_first_pass
        auth required                     pam_permit.so
        session required                  pam_mkhomedir.so skel=/etc/skel umask=0022 silent
        auth sufficient                   pam_unix.so nullok_secure use_first_pass
        auth requisite                    pam_succeed_if.so uid >= 1000 quiet
        auth sufficient                   pam_ldap.so use_first_pass
        auth required                     pam_deny.so

    pam common-account:

        account [success=2 new_authtok_reqd=done default=ignore]   pam_ldap.so
        account [success=1 default=ignore]                         pam_unix.so
        account required                                           pam_unix.so
        account sufficient                                         pam_succeed_if.so uid < 1000 quiet
        account [default=bad success=ok user_unknown=ignore]       pam_ldap.so
        account required                                           pam_permit.so
        account sufficient                                         pam_ldap.so
        account sufficient                                         pam_unix.so
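    Reading the stack above, the log line looks cosmetic rather than a real failure: pam_unix runs before pam_ldap, does not find peter in /etc/shadow, logs "authentication failure", and then pam_ldap accepts the password, which is why the login still succeeds. A sketch of a common-auth that consults LDAP first, so LDAP users never trip the pam_unix message - an assumption about intent, not a tested replacement for the file above (keep a root shell open while editing PAM files):

        # /etc/pam.d/common-auth (sketch)
        auth    [success=2 default=ignore]   pam_ldap.so
        auth    [success=1 default=ignore]   pam_unix.so nullok_secure use_first_pass
        auth    requisite                    pam_deny.so
        auth    required                     pam_permit.so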

  • Cisco ASA5505 won't sync with NTP

    - by Martijn Heemels
    Today I noticed that the clock on my Cisco ASA 5505 firewall was running about 15 minutes late, which surprised me since I've set up the NTP client. My two NTP servers 10.10.0.1 and 10.10.0.2 are virtualized Windows Server 2008 R2 domain controllers, and both have the correct time. As shown below, the ASA knows about the two servers, can ping them and seems to poll them periodically, so I suppose it can reach them both. The ASA claims its time source is NTP, however the clock is unsynchronized. Neither host is marked as synced.

        Result of the command: "ping 10.10.0.1"
        Type escape sequence to abort.
        Sending 5, 100-byte ICMP Echos to 10.10.0.1, timeout is 2 seconds:
        !!!!!
        Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms

        Result of the command: "sh ntp ass"
        address      ref clock    st  when  poll  reach  delay  offset  disp
        ~10.10.0.1   .LOCL.       1   78    1024  377    0.5    643.69  17.0
        ~10.10.0.2   10.10.0.1    2   190   1024  377    0.9    655.91  58.4
        * master (synced), # master (unsynced), + selected, - candidate, ~ configured

        Result of the command: "sh ntp stat"
        Clock is unsynchronized, stratum 16, no reference clock
        nominal freq is 99.9984 Hz, actual freq is 99.9984 Hz, precision is 2**6
        reference time is 00000000.00000000 (07:28:16.000 CEST Thu Feb 7 2036)
        clock offset is 0.0000 msec, root delay is 0.00 msec
        root dispersion is 0.00 msec, peer dispersion is 0.00 msec

        Result of the command: "sh clock detail"
        10:33:23.769 CEDT Tue Jun 26 2012
        Time source is NTP
        UTC time is: 08:33:23 UTC Tue Jun 26 2012
        Summer time starts 02:00:00 CEST Sun Mar 25 2012
        Summer time ends 03:00:00 CEDT Sun Oct 28 2012

    I've tried the basic steps of manually setting the time and removing and adding the timeservers, to no avail. My ASA's ntp config is simply:

        ntp server 10.10.0.1
        ntp server 10.10.0.2

    Do I need to enable authentication to use a Windows NTP server? Any thoughts?
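    Two hedged observations rather than a confirmed answer: NTP authentication is not required just because the servers are Windows, and the "sh ntp ass" output shows the first DC serving from its own local clock (.LOCL.), which is typical of a Windows Time service that is free-running and advertising itself in a way many NTP clients refuse to select as a reference. A common remedy, assuming 10.10.0.1 holds the PDC emulator role, is to give it an external upstream source and mark it reliable:

        w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
        w32tm /resync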

  • samba sync password with unix password on debian wheezy

    - by Oz123
    I installed samba on my server and I am trying to write a script to spare me the two steps to add a user, e.g.:

        adduser username
        smbpasswd -a username

    My smb.conf states:

        # This boolean parameter controls whether Samba attempts to sync the Unix
        # password with the SMB password when the encrypted SMB password in the
        # passdb is changed.
        unix password sync = yes

    Further reading brought me to the pdbedit man page, which states:

        -a   This option is used to add a user into the database. This command
             needs a user name specified with the -u switch. When adding a new
             user, pdbedit will also ask for the password to be used.

             Example: pdbedit -a -u sorce
             new password: retype new password

             Note pdbedit does not call the unix password syncronisation script
             if unix password sync has been set. It only updates the data in the
             Samba user database. If you wish to add a user and synchronise the
             password that immediately, use smbpasswd's -a option.

    So... now I decided to try adding a user with smbpasswd.

    1st try, unix user still does not exist:

        root@raspberrypi:/home/pi# smbpasswd -a newuser
        New SMB password:
        Retype new SMB password:
        Failed to add entry for user newuser.

    2nd try, unix user exists:

        root@raspberrypi:/home/pi# useradd mag
        root@raspberrypi:/home/pi# smbpasswd -a mag
        New SMB password:
        Retype new SMB password:
        Added user mag.

        # switch to user pi, and try to switch to mag
        root@raspberrypi:/home/pi# su pi
        pi@raspberrypi ~ $ su mag
        Password:
        su: Authentication failure

    So, now I am asking myself: how do I make samba passwords sync with unix passwords? Where are samba passwords stored? Can someone help enlighten me?
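    Two hedged notes on the behaviour shown above: smbpasswd -a never creates the Unix account (hence the first failure), and "unix password sync" only pushes a password from Samba to Unix when Samba changes one, and only if "passwd program" and "passwd chat" in smb.conf point at something that works - which would explain why mag ends up with a working SMB password but no usable Unix password for su. A sketch of the two pieces, using Debian-style defaults as assumptions rather than verified settings:

        # smb.conf [global] - required for unix password sync to have any effect;
        # the chat string below is Debian's shipped default, adjust to the real
        # prompts that /usr/bin/passwd prints on this system
        unix password sync = yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .

        #!/bin/sh
        # add-smb-user.sh - hypothetical one-shot wrapper: create the Unix account
        # first, then the Samba entry (prompts twice for the SMB password)
        set -e
        useradd -m "$1"
        smbpasswd -a "$1"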

  • JBoss7 load balancing with mod_proxy_balancer - session not working

    - by Phil P.
    I am trying to set up mod_proxy_balancer for routing requests to 2 jboss7-servers. For the time being I am testing this setup on my local machine, using the following config in httpd.conf:

        ProxyRequests Off
        <Proxy *>
            Order deny,allow
            Deny from all
        </Proxy>
        ProxyPass / balancer://mycluster/ stickysession=JSESSIONID|jsessionid scolonpathdelim=On
        <Proxy balancer://mycluster>
            BalancerMember http://localhost:8080 route=node1
            BalancerMember http://localhost:8081 route=node2
            Order allow,deny
            Allow from all
        </Proxy>

    and in the standalone.xml file of each jboss I have defined the jvmRoute system property:

        <system-properties>
            <property name="jvmRoute" value="node1"/>
        </system-properties>

    At http://localhost/myapp the application is accessible, but the java-session is not built up correctly. Consequently the authentication is not working. The funny thing is that everything is working if I turn off one JBoss-instance. As I have tried a couple of settings already, I am thankful for any further suggestions.
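    One hedged suggestion, assuming the symptom is that JSESSIONID never gets a ".node1"/".node2" suffix so mod_proxy_balancer cannot keep requests sticky: in JBoss AS7 the old jvmRoute system property is generally replaced by the instance-id attribute on the web subsystem in standalone.xml, so the route would be configured there rather than under <system-properties>. A sketch (the namespace version may differ between 7.x releases):

        <subsystem xmlns="urn:jboss:domain:web:1.1"
                   default-virtual-server="default-host"
                   instance-id="node1"
                   native="false">
            <!-- existing connector / virtual-server configuration stays as it is -->
        </subsystem>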

  • How to re-join an AD2003 domain with Samba after deleting the machine account?

    - by Guss
    During some troubleshooting I deleted the machine account for a Linux server running samba from our AD 2003 domain. We are using Kerberos for authentication, and after I deleted the machine account I tried to join the domain again using

        net ads join -U Administrator

    But I keep getting Kerberos errors like these:

        [2009/08/18 16:14:36, 0] libads/kerberos.c:ads_kinit_password(228)
        kerberos_kinit_password [email protected] failed: Client not found in Kerberos database
        Failed to join domain: Improperly formed account name

    It appears as if samba remembers that it once had an account with the AD and keeps trying to reconnect to it, but I want to create a new account from scratch. I tried to delete all the .tdb files I could find as well as everything under /var/cache/samba but to no avail - it still behaves the same. I also tried to create the machine account on the AD side, but then I get a similar error when I try to join, about failure to authenticate with the machine account - it looks like samba tries the previous machine account password and I don't know how to reset it, or even if I could figure out what samba uses - how to set it in the AD.

    Any help would be greatly appreciated, as at this point the only thing I can think about is to reformat and reinstall the machine, and I would really REALLY love to not do that. Thanks in advance.
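    A hedged sequence that is often suggested for a clean re-join - not verified against this particular box, and paths vary by distribution: tear down whatever is left of the old membership, move the host keytab aside, and join again with a fresh admin ticket.

        kinit Administrator                        # fresh TGT for the domain admin
        net ads leave -U Administrator             # ignore errors if the account is already gone
        kdestroy
        mv /etc/krb5.keytab /etc/krb5.keytab.old   # stale host keys can trigger "Client not found in Kerberos database"
        kinit Administrator
        net ads join -U Administrator
        net ads testjoin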

  • Can't connect to public WiFi with MacBookPro at coffee shops and libraries

    - by Nathan Bowers
    The Problem: I can't connect to public, unencrypted WiFi at my local public library or Peets Coffee.

    My Setup: Late 2006 MacBookPro running 10.5.8. I have Parallels installed.

    It's supposed to work like this:
    1) Connect to their unencrypted WiFi network
    2) Open a browser which redirects you to their "enter password/agree to terms" page.
    3) Browse normally.

    I can connect to the WiFi network, but when I try to authenticate I always get stuck in a redirect loop. It's been like this for a while. Even before I upgraded to 10.5.8. I never have trouble with encrypted networks or regular open WiFi.

    What I've tried:
    - Disabling Parallels connections in Network Prefs. Superstition: somehow Parallels installed something in the network stack that's messing me up.
    - Pinging the IP address of the WiFi node I'm connected to. I can ping it, it's there, but I still get stuck in this authentication redirect loop.
    - Tried different browsers, tried different cookie and security settings. Even tried IE under Parallels. No dice.
    - Tried flushing DNS cache.
    - Asked library and coffee employees for help. It didn't go well.

    My Question: Anybody else have this problem? What should I be looking for?

  • What could cause these "failed to authenticate" logs other than failed login attempts (OSX)?

    - by Tom
    I've found this in the Console logs:

        10/03/10 3:53:58 PM  SecurityAgent[156]      User info context values set for tom
        10/03/10 3:53:58 PM  authorizationhost[154]  Failed to authenticate user (tDirStatus: -14090).
        10/03/10 3:54:00 PM  SecurityAgent[156]      User info context values set for tom
        10/03/10 3:54:00 PM  authorizationhost[154]  Failed to authenticate user (tDirStatus: -14090).
        10/03/10 3:54:03 PM  SecurityAgent[156]      User info context values set for tom
        10/03/10 3:54:03 PM  authorizationhost[154]  Failed to authenticate user (tDirStatus: -14090).

    There are about 11 of these "failed to authenticate" messages logged in quick succession. It looks to me like someone is sitting there trying to guess the password. However, when I tried to replicate this I get the same log messages except that this extra message appears after five attempts:

        13/03/10 1:18:48 PM  DirectoryService[11]    Failed Authentication return is being delayed due to over five recent auth failures for username: tom.

    I don't want to accuse someone of trying to break into an account without being sure that they were actually trying to break in. My question is this: is it almost definitely someone guessing a password, or could the 11 "failed to authenticate" messages be caused by something else?

  • Duplicity on a ReadyNAS

    - by Jason Swett
    Has anyone here run Duplicity on a ReadyNAS? I'm trying but here's what I get:

        duplicity full --encrypt-key="ABC123" /home/jason/ scp://[email protected]//gob
        Invalid SSH password
        Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' failed (attempt #1)

    I've also found this post that says the "Invalid SSH password" message doesn't actually mean invalid SSH password. This would make sense because I'm not using an SSH password; I'm using a public key. I can ssh, ftp, sftp and rsync into my ReadyNAS just fine. (Actually, to be more accurate, I can get past authentication with ssh, ftp and sftp but I can't actually do anything past that. Regardless, that's enough to tell me that "Invalid SSH password" is bogus. Rsync works with no problems.)

    The post I found says the command will work as soon as the directory at the end of your scp command exists, but I don't know how to check for that. I know the share gob exists on my ReadyNAS and I know it's writable because I'm writing to it with rsync.

    Also, here is the verbose output:

        Using archive dir: /home/jason/.cache/duplicity/3bdd353b29468311ffa8485160da6873
        Using backup name: 3bdd353b29468311ffa8485160da6873
        Import of duplicity.backends.rsyncbackend Succeeded
        Import of duplicity.backends.sshbackend Succeeded
        Import of duplicity.backends.localbackend Succeeded
        Import of duplicity.backends.botobackend Succeeded
        Import of duplicity.backends.cloudfilesbackend Succeeded
        Import of duplicity.backends.giobackend Succeeded
        Import of duplicity.backends.hsibackend Succeeded
        Import of duplicity.backends.imapbackend Succeeded
        Import of duplicity.backends.ftpbackend Succeeded
        Import of duplicity.backends.webdavbackend Succeeded
        Import of duplicity.backends.tahoebackend Succeeded
        Main action: full
        ================================================================================
        duplicity 0.6.10 (September 19, 2010)
        Args: /usr/bin/duplicity full --encrypt-key=ABC123 -v9 /home/jason/ scp://[email protected]//gob
        Linux gob 2.6.35-22-generic #33-Ubuntu SMP Sun Sep 19 20:34:50 UTC 2010 i686
        /usr/bin/python 2.6.6 (r266:84292, Sep 15 2010, 15:52:39) [GCC 4.4.5]
        ================================================================================
        Using temporary directory /tmp/duplicity-cridGi-tempdir
        Registering (mkstemp) temporary file /tmp/duplicity-cridGi-tempdir/mkstemp-ztuF5P-1
        Temp has 86334349312 available, backup will use approx 34078720.
        Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' (attempt #1)
        State = sftp, Before = '[email protected]'s'
        State = sftp, Before = ''
        Invalid SSH password
        Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' failed (attempt #1)

    Any ideas as to what's going wrong?
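    Since the post being quoted blames a missing target directory, one hedged way to check that without involving Duplicity at all is to run the exact sftp command shown in the verbose output and try to enter the path from the URL (the double slash in scp://...//gob means the absolute path /gob on the NAS):

        sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]
        # then, at the sftp> prompt:
        #   cd /gob
        #   pwd
        # if "cd /gob" fails, the URL's target directory is the problem, not the key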

  • Problem with network policy rule in Network Policy Server

    - by Robert Moir
    Trying to configure RADIUS for a college network, and have run into the following frustration: I can't set an "AND" condition for group membership of authenticated objects in the network policy rules, e.g. I'm trying to create a NPS rule that says, essentially "IF user is a member of [list of user groups] And is authenticating from a computer in [wireless computer group] then allow access. The screenshot above is the rule I am having trouble with. It does not work as written. The rule underneath it, which is identical in every aspect except the conditions rule, does work. I've tried changing the non-working rule to define each set of groups as "Windows group" rather than specifically as machine and user groups, with no change. With the "faulty" rule enabled and the working one disabled, any attempt to login with a valid account from a machine that is in the wireless computers group gives a 6273 audit event in the windows event log: Reason code 66 - "the user attempted to use an authentication method that is not enabled on the matching network policy". Disabling the "faulty" rule, enabling the other rule and logging in with the same account and computer works just fine.

  • Postfix Relay to Office365

    - by woodsbw
    I am trying to setup a Postfix server on a Linux box to relay all mail to our Office365 (Exchange, hosted by Microsoft) mail server, but I keep getting an error regarding the sending address:

        BB338140DC1: to= relay=pod51010.outlook.com[157.56.234.118]:587, delay=7.6, delays=0.01/0/2.5/5.1, dsn=5.7.1, status=bounced
        (host pod51010.outlook.com[157.56.234.118] said: 550 5.7.1 Client does not have permissions to send as this sender (in reply to end of DATA command))

    Office 365 requires that the sending address in the MAIL FROM and From: header be the same as the address used to authenticate. I have tried everything I can think of in the config to get this working. My postconf -n:

        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        debug_peer_list = 127.0.0.1
        inet_interfaces = loopback-only
        inet_protocols = all
        mailbox_size_limit = 0
        mydestination = xxxxx, localhost.localdomain, localhost
        myhostname = localhost
        mynetworks = 127.0.0.0/8
        recipient_delimiter = +
        relay_domains = our.doamin
        relayhost = [pod51010.outlook.com]:587
        sender_canonical_classes = envelope_sender
        sender_canonical_maps = hash:/etc/postfix/sender_canonical
        smtp_always_send_ehlo = yes
        smtp_sasl_auth_enable = yes
        smtp_sasl_mechanism_filter = login
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options =
        smtp_tls_CAfile = /etc/postfix/cacert.pem
        smtp_tls_loglevel = 1
        smtp_tls_security_level = may
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    sender_canonical:

        www-data [email protected]
        root [email protected]
        www-data@localhost [email protected]
        root@localhost [email protected]

    Also, sasl_passwd is set to the correct credentials (tested them using swaks multiple times). Authentication works, and sends the message when the from headers are correct (also tested using swaks... which works). The emails are coming from PHP, so I have also tried altering the sendmail path in php.ini to pass the correct from address via -f.

    So, for some reason, mail coming from www-data and root are not having the from fields rewritten to Office 365's satisfaction, and it won't send the message. Any postfix gurus out there that can help me setup this relay?
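    One observation that follows directly from the postconf output above, offered as a likely fix rather than a confirmed one: sender_canonical_classes has been narrowed to envelope_sender, so the map rewrites MAIL FROM but never touches the From: header - and the bounce text says Office 365 checks both. Letting the same map rewrite the header as well (which is Postfix's default behaviour for this parameter) looks like this:

        # main.cf
        sender_canonical_classes = envelope_sender, header_sender
        sender_canonical_maps = hash:/etc/postfix/sender_canonical

        # then rebuild the map and reload
        postmap /etc/postfix/sender_canonical
        postfix reload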

  • Accounting setup in freeradius with mikrotik and the "always" module

    - by Matt
    I have a freeradius setup that is being used to provide authentication for users on a wireless network. The access points are all Mikrotik hardware and the users are connected 24/7. We've been using Daloradius with mysql and freeradius 2. The boss wants to use the accounting information, and while this is all set up and appears to be working, I've found that not all the accounting information is present.

    So he started poking around at this link: http://wiki.mikrotik.com/wiki/RouterOs_MySql_Freeradius#Configuring_RouterOs_for_Radius_.26_PPP.2A_AAA and was looking specifically at the following section:

        # Since our users may be connected for more than 24 hours at a time we keep
        # this in here, it will reset some attributes daily so that the accounting
        # packets work correctly
        always fail {
                rcode = fail
        }
        always reject {
                rcode = reject
        }
        always ok {
                rcode = ok
                simulcount = 0
                mpp = no
        }

    However, that link references freeradius 1 and I can't find this in the radius.conf file for freeradius 2. What does it do, and could it be a reason I'm missing data?

    EDIT: I have found one issue. We have a backup freeradius server that is also receiving the accounting packets. Although they are replicating, it's only a master/slave configuration. If the slave receives accounting packets it won't replicate them back to the master. Although I suspect this might solve it, the boss is not convinced due to the always module. Is there anything special I need to configure in the mikrotik AP's or freeradius 2 for clients connected 24/7?
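    To the "what does it do" part, a hedged note: in FreeRADIUS 2.x those rlm_always instances were moved out of radiusd.conf into the per-module config directory, so they are already defined; they are simply named modules that unconditionally return a result code (ok, fail, reject, ...) for use inside policies, and by themselves they would not explain missing accounting rows. On a Debian-style install the file looks roughly like this:

        # /etc/freeradius/modules/always   (raddb/modules/always on other layouts)
        always fail {
                rcode = fail
        }
        always reject {
                rcode = reject
        }
        always ok {
                rcode = ok
                simulcount = 0
                mpp = no
        }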

  • Encryption setup for Linux NAS?

    - by Daniel
    There's a bazillion hard disk encryption HOWTOs, but somehow I can't find one that actually does what I want. Which is: I have a home NAS running Ubuntu, which is being accessed by a Linux and a Win XP client. (Hopefully MacOS X soon...) I want to setup encryption for home dirs on the NAS so that:

    - It does not interfere with the boot process (since the NAS is tucked away in a cupboard),
    - the home dirs should be accessible as a regular file system on the client(s) (e.g. via SMB),
    - it is easy to use by 'normal' people (so it does not require SSH-ing to the NAS, mounting the encrypted partition on the command line, then connecting via SMB, and finally umounting the partition after being done; I can't explain that to my mom, or in fact to anyone),
    - it does not store the encryption key on the NAS itself,
    - it encrypts file meta-data and content (i.e. safe against the 'RIAA' attack, where an intruder should not be able to identify which songs are in your MP3 collection).

    What I hoped to do was use Samba + PAM. The idea was that on connecting to the SMB server, I'd have to enter the password on the client, which sends it to the server for authentication, which would use the password to mount the encryption partition, and would unmount it again when the session was closed. Turns out that doesn't really work, because SMB does not transmit the password in the plain and hence I can't configure PAM to use the incoming password to mount the encrypted partition.

    So... anything I'm overlooking? Is there any way in which I can use the password entered on the client (e.g. on SMB connect) to initiate mounting the encrypted dir on the server?

  • [SOLVED] Single Sign On for intranet with Apache and Linux MIT Kerberos

    - by Beerdude26
    EDIT: SOLVED! See my answer below.

    Greetings, I am looking for a way to do a single sign on to an intranet in the following manner:

    1. A Linux user logs on via a graphical frontend (for example, GNOME).
    2. He automatically requests a TGT for his username from the MIT Kerberos KDC.
    3. Via some way or another, the Apache server (which we'll assume is on the same server as the KDC) is informed that this user has logged in.
    4. When the user accesses the intranet, he is automatically granted access to his web applications.

    I don't think I've seen this kind of functionality while searching the net. I know the following possibilities exist:

    - Using an authentication module such as mod_auth_kerb, a user is presented with a login prompt to enter his username and password, which are then authenticated against the MIT Kerberos server. (I would like this to be automatic.)
    - IIS supports integrated Windows logon via ASP.Net when the user is part of an Active Directory. (I'm looking for the Linux / Apache equivalent.)

    Any suggestions, criticism and ideas are highly appreciated. This is for a school project to show a proof-of-concept, so every handy piece of information is more than welcome. :)
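    For readers landing here: the prompt-free behaviour being asked for is what SPNEGO/Negotiate provides - mod_auth_kerb can accept the service ticket the browser derives from the TGT obtained at desktop login, so no password dialog appears. A minimal sketch (realm and keytab path are placeholders, and the browser must be told to trust the site for Negotiate, e.g. network.negotiate-auth.trusted-uris in Firefox):

        <Location /intranet>
            AuthType Kerberos
            AuthName "Intranet"
            KrbAuthRealms EXAMPLE.COM
            KrbServiceName HTTP
            Krb5KeyTab /etc/apache2/http.keytab
            KrbMethodNegotiate On
            KrbMethodK5Passwd Off
            Require valid-user
        </Location>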

  • AuthBasicProvider: failover not working when the first LDAP is down?

    - by quanta
    I've been trying to setup redundant LDAP servers with Apache 2.2.3.

    /etc/httpd/conf.d/authn_alias.conf

        <AuthnProviderAlias ldap master>
            AuthLDAPURL ldap://192.168.5.148:389/dc=domain,dc=vn?cn
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

        <AuthnProviderAlias ldap slave>
            AuthLDAPURL ldap://192.168.5.199:389/dc=domain,dc=vn?cn
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

    /etc/httpd/conf.d/authz_ldap.conf

        #
        # mod_authz_ldap can be used to implement access control and
        # authenticate users against an LDAP database.
        #
        LoadModule authz_ldap_module modules/mod_authz_ldap.so

        <IfModule mod_authz_ldap.c>
            <Location />
                AuthBasicProvider master slave
                AuthzLDAPAuthoritative Off
                AuthType Basic
                AuthName "Authorization required"
                AuthzLDAPMemberKey member
                AuthUserFile /home/setup/svn/auth-conf
                AuthzLDAPSetGroupAuth user
                require valid-user
                AuthzLDAPLogLevel error
            </Location>
        </IfModule>

    If I understand correctly, mod_authz_ldap will try to search users in the second LDAP if the first server is down or OpenLDAP on it is not running. But in practice, it does not happen. Tested by stopping LDAP on the master, I get the "500 Internal Server Error" when accessing the Subversion repository. The error_log shows:

        [11061] auth_ldap authenticate: user quanta authentication failed; URI / [LDAP: ldap_simple_bind_s() failed][Can't contact LDAP server]

    Did I misunderstand?
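    A hedged explanation of why the fallback never happens: AuthBasicProvider only moves on to the next provider when the previous one reports "user not found"; a failed connection to the LDAP server is a hard error, so the slave alias is never consulted. The usual workaround is to let mod_ldap itself do the failover by listing both hosts, space-separated, inside a single AuthLDAPURL - a sketch reusing the addresses from the question:

        <AuthnProviderAlias ldap masterslave>
            AuthLDAPURL "ldap://192.168.5.148 192.168.5.199/dc=domain,dc=vn?cn"
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

        # and inside the <Location>:
        AuthBasicProvider masterslave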

  • Apache httpd LDAP integration

    - by David W.
    I am configuring a CollabNet Subversion integration. I have the following collabnet_subversion.conf file:

        <Location /svn>
            DAV svn
            SVNParentPath /mnt/svn/new_repos
            SVNListParentPath on
            AuthName "VegiBanc Source Repository"
            AuthType basic
            AuthzLDAPAuthoritative off
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE
            AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=vegibanc,DC=vegibanc,DC=com"
            AuthLDAPBindPassword "swordfish"
        </Location>

    This works great. Any user in our Active Directory can access our Subversion repository. Now, I want to limit this to only people in the Active Directory group Development:

        <Location /svn>
            DAV svn
            SVNParentPath /mnt/svn/new_repos
            SVNListParentPath on
            AuthName "VegiBanc Source Repository"
            AuthType basic
            AuthzLDAPAuthoritative off
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE
            AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=VegiBanc,DC=vegibanc,DC=com"
            AuthLDAPBindPassword "swordfish"
            Require ldap-group CN=Development OU=Security Groups OU=VegiBanc, dc=vegibanc, dc=com
        </Location>

    I added Require ldap-group, but now no one can log in. I have LogLevel set to debug, but all I get is this in my error_log (single line broken up for easier reading):

        [Thu Oct 11 13:09:28 2012] [info] [client 10.55.9.45] [6752] vauth_ldap authenticate:
        user dweintraub authentication failed; URI /svn/
        [ldap_search_ext_s() for user failed][Bad search filter]

    And, I get this in my access_log:

        10.55.9.45 - - [11/Oct/2012:13:09:27 -0500] "GET /svn/ HTTP/1.1" 401 401
        10.55.9.45 - dweintraub [11/Oct/2012:13:09:28 -0500] "GET /svn/ HTTP/1.1" 500 535

    Yes, I am in that group. (Or, at least, how can I confirm that, just to make sure that's not the issue? I have the SysinternalsSuite ADExplorer. It's where I'm getting all of my info.)
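    Independent of what exactly triggers the "Bad search filter" message, the group DN handed to Require ldap-group has no commas between its RDNs, which mod_authnz_ldap will not accept as a distinguished name. A hedged correction, with the OU path taken from the question itself (adjust it to wherever the Development group really lives):

        Require ldap-group CN=Development,OU=Security Groups,OU=VegiBanc,DC=vegibanc,DC=com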

  • SSL setup: UCC or wildcard certificates?

    - by quanza
    I've scoured the web for a clear and concise answer to my SSL question, but to no avail. So here goes: I have a web-service requiring SSL support for authentication pages. The root-level domain does not have the "www" - i.e., secure://domain.com - but localized pages use "language-code.domain.com", i.e. secure://ja.domain.com So I need at least a wildcard SSL certificate that supports secure://*.domain.com

    However, we also have a public sandbox environment at sandbox.domain.com, which we also need to support under localized domains - so secure://ja.sandbox.domain.com needs to also work. The previous admin managed to purchase a wildcard SSL certificate for *.domain.com, but with a Subject Alternative Name for "domain.com". So, I'm thinking of trying to get a wildcard certificate with SANs defined as "domain.com" and "*.*.domain.com". But now I'm getting confused because there seem to be separate SAN certificates, also called UCC certificates.

    Can someone clarify whether it's possible to get a wildcard certificate with additional SAN fields, and ultimately what the best way is to support:

    - secure://domain.com
    - secure://*.domain.com
    - secure://*.*.domain.com

    with the fewest (and cheapest!) number of SSL certificates? Thanks!
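    Two facts worth keeping in mind when comparing quotes, plus a hedged check: a wildcard matches exactly one label, so *.domain.com covers ja.domain.com but not ja.sandbox.domain.com, and double wildcards like *.*.domain.com are not issued - the sandbox sub-subdomains therefore need their own name such as *.sandbox.domain.com. Many CAs do sell a single certificate that mixes a wildcard common name with additional SAN entries (often marketed as UCC or multi-domain), so one certificate listing domain.com, *.domain.com and *.sandbox.domain.com would cover everything above. To see which names the existing certificate actually contains:

        # list the names the current certificate covers (filename is a placeholder)
        openssl x509 -in domain.crt -noout -text | grep -A1 "Subject Alternative Name"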

  • what web based tool, to allow a non-technical user to manage authorized keys files on a Linux (fedora/centos/ubuntu/debian) server

    - by Tom H
    (Edit: clarification below) We have a number of groups of developers that change frequently, and a security policy to require individual logins to servers using rsa or dsa public keys, which is achieved via the standard method of adding id_dsa.pub to their authorized keys file. I am using chef to sync the user accounts across machines, however our previous method of using webmin to manage the user passwords is not designed for key based auth, and hence is not easy to use for non-technical users. The developers are logging in from the WAN using ssh, they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud and we have a single server available to host the master set of accounts. Obviously I could deploy ldap or other centralised authentication system, but that seems a bit over blown when webmin worked well for the simple case. It is easy to achieve synchronised users, groups and passwords across a bunch of low security development boxes using webmin clustered users and groups. However looking at the currently installed webmin it is not so easy to create the authorized keys as it is to create user accounts and passwords. (its possible, but its not easy - some functionality is in the usermin module, or would required some tedious steps) Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, and can generate key pairs on the fly, and can accepted pasted in public keys to add to the users authorized keys file. If the tool sync'ed the users and keys as well, that would be great, but I can use chef to do that part if the accounts are created correctly on the "master" server.

  • How to set up a serial connection to a Windows 7 computer

    - by oli_arborum
    I need to set up a "dial in" connection to a Windows 7 (Ultimate) computer via a serial null-modem cable, to be able to connect from a Windows XP client to that computer and exchange data over IP.

    Question 1: How do I do that? I did not find the information via Google or in the MSDN. Seems like no one ever tried this before... ;-)

    I already managed to install a legacy modem device called "Communications cable between two computers" and found the menu entry "New Incoming Connection..." in Network and Internet / Network Connections. When I finish this wizard I get the message that the "Routing and Remote Access service" cannot be started. In the event viewer I see the following error messages:

        "The currently configured authentication provider failed to load and initialize successfully. The requested name is valid, but no data of the requested type was found." (Source: RemoteAccess, EventID: 20152)

        "The Routing and Remote Access service terminated with service-specific error The requested name is valid, but no data of the requested type was found." (Source: Service Control Manager, EventID: 7024)

    The Windows 7 installation is "naked", i.e. no additional software or services are installed.

    Question 2: Am I on the right path to set up the connection?
    Question 3: How can I get the Routing and Remote Access service running?

  • Upgrading Redmine, activerecord-mysql2-adapter not recognized

    - by David Kaczynski
    For upgrading Redmine from 1.0.1 to 2.1.2, I need to execute the command:

        rake db:migrate RAILS_ENV=production

    However, doing so produces the following error:

        rake aborted!
        Please install the mysql2 adapter: gem install activerecord-mysql2-adapter (mysql2 is not part of the bundle. Add it to Gemfile.)

    I have run gem install activerecord-mysql2-adapter, but I still get the same error when I try to run the rake ... command. How do I get my RoR app to recognize that I have the mysql2 adapter installed already? Or is there something wrong with my activerecord-mysql2-adapter installation?

    Results of sudo bundle install:

        Using rake (10.0.0) Using i18n (0.6.1) Using multi_json (1.3.7) Using activesupport (3.2.8) Using builder (3.0.0) Using activemodel (3.2.8) Using erubis (2.7.0) Using journey (1.0.4) Using rack (1.4.1) Using rack-cache (1.2) Using rack-test (0.6.2) Using hike (1.2.1) Using tilt (1.3.3) Using sprockets (2.1.3) Using actionpack (3.2.8) Using mime-types (1.19) Using polyglot (0.3.3) Using treetop (1.4.12) Using mail (2.4.4) Using actionmailer (3.2.8) Using arel (3.0.2) Using tzinfo (0.3.35) Using activerecord (3.2.8) Using activeresource (3.2.8) Using coderay (1.0.8) Using fastercsv (1.5.5) Using rack-ssl (1.3.2) Using json (1.7.5) Using rdoc (3.12) Using thor (0.16.0) Using railties (3.2.8) Using jquery-rails (2.0.3) Using metaclass (0.0.1) Using mocha (0.12.3) Using mysql (2.8.1) Using net-ldap (0.3.1) Using pg (0.14.1) Using ruby-openid (2.1.8) Using rack-openid (1.3.1) Using bundler (1.2.1) Using rails (3.2.8) Using rmagick (2.13.1) Using shoulda (2.11.3) Using sqlite3 (1.3.6) Using yard (0.8.3)
        Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed.

    Results of sudo find / -name "*mysql2*":

        /var/lib/gems/1.8/doc/mysql2-0.3.11
        /var/lib/gems/1.8/doc/activerecord-3.2.9/ri/ActiveRecord/Base/mysql2_connection-c.ri
        /var/lib/gems/1.8/doc/activerecord-mysql2-adapter-0.0.3
        /var/lib/gems/1.8/doc/activerecord-mysql2-adapter-0.0.3/ri/ActiveRecord/Base/em_mysql2_connection-c.ri
        /var/lib/gems/1.8/doc/activerecord-mysql2-adapter-0.0.3/ri/ActiveRecord/Base/mysql2_connection-c.ri
        /var/lib/gems/1.8/gems/mysql2-0.3.11
        /var/lib/gems/1.8/gems/mysql2-0.3.11/spec/mysql2
        /var/lib/gems/1.8/gems/mysql2-0.3.11/mysql2.gemspec
        /var/lib/gems/1.8/gems/mysql2-0.3.11/lib/mysql2.rb
        /var/lib/gems/1.8/gems/mysql2-0.3.11/lib/mysql2
        /var/lib/gems/1.8/gems/mysql2-0.3.11/lib/mysql2/mysql2.so
        /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2
        /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2/mysql2.so
        /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2/mysql2_ext.c
        /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2/mysql2_ext.h
        /var/lib/gems/1.8/gems/mysql2-0.3.11/ext/mysql2/mysql2_ext.o
        /var/lib/gems/1.8/gems/activerecord-3.2.9/lib/active_record/connection_adapters/mysql2_adapter.rb
        /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3
        /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/activerecord-mysql2-adapter.gemspec
        /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/arel/engines/sql/compilers/mysql2_compiler.rb
        /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/activerecord-mysql2-adapter.rb
        /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/activerecord-mysql2-adapter
        /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/active_record/connection_adapters/em_mysql2_adapter.rb
        /var/lib/gems/1.8/gems/activerecord-mysql2-adapter-0.0.3/lib/active_record/connection_adapters/mysql2_adapter.rb
        /var/lib/gems/1.8/gems/activerecord-3.2.8/lib/active_record/connection_adapters/mysql2_adapter.rb
        /var/lib/gems/1.8/cache/mysql2-0.3.11.gem
        /var/lib/gems/1.8/cache/activerecord-mysql2-adapter-0.0.3.gem
        /var/lib/gems/1.8/specifications/activerecord-mysql2-adapter-0.0.3.gemspec
        /var/lib/gems/1.8/specifications/mysql2-0.3.11.gemspec

    Contents of /usr/share/redmine/Gemfile:

        source 'http://rubygems.org'

        gem 'rails', '3.2.8'
        gem "jquery-rails", "~> 2.0.2"
        gem "i18n", "~> 0.6.0"
        gem "coderay", "~> 1.0.6"
        gem "fastercsv", "~> 1.5.0", :platforms => [:mri_18, :mingw_18, :jruby]
        gem "builder", "3.0.0"

        # Optional gem for LDAP authentication
        group :ldap do
          gem "net-ldap", "~> 0.3.1"
        end

        # Optional gem for OpenID authentication
        group :openid do
          gem "ruby-openid", "~> 2.1.4", :require => "openid"
          gem "rack-openid"
        end

        # Optional gem for exporting the gantt to a PNG file, not supported with jruby
        platforms :mri, :mingw do
          group :rmagick do
            # RMagick 2 supports ruby 1.9
            # RMagick 1 would be fine for ruby 1.8 but Bundler does not support
            # different requirements for the same gem on different platforms
            gem "rmagick", ">= 2.0.0"
          end
        end

        # Database gems
        platforms :mri, :mingw do
          group :postgresql do
            gem "pg", ">= 0.11.0"
          end
          group :sqlite do
            gem "sqlite3"
          end
        end
        platforms :mri_18, :mingw_18 do
          group :mysql do
            gem "mysql"
          end
        end
        platforms :mri_19, :mingw_19 do
          group :mysql do
            gem "mysql2", "~> 0.3.11"
          end
        end
        platforms :jruby do
          gem "jruby-openssl"
          group :mysql do
            gem "activerecord-jdbcmysql-adapter"
          end
          group :postgresql do
            gem "activerecord-jdbcpostgresql-adapter"
          end
          group :sqlite do
            gem "activerecord-jdbcsqlite3-adapter"
          end
        end

        group :development do
          gem "rdoc", ">= 2.4.2"
          gem "yard"
        end

        group :test do
          gem "shoulda", "~> 2.11"
          # Shoulda does not work nice on Ruby 1.9.3 and seems to need test-unit explicitely.
          gem "test-unit", :platforms => [:mri_19]
          gem "mocha", "0.12.3"
        end

        local_gemfile = File.join(File.dirname(__FILE__), "Gemfile.local")
        if File.exists?(local_gemfile)
          puts "Loading Gemfile.local ..." if $DEBUG # `ruby -d` or `bundle -v`
          instance_eval File.read(local_gemfile)
        end

        # Load plugins' Gemfiles
        Dir.glob File.expand_path("../plugins/*/Gemfile", __FILE__) do |file|
          puts "Loading #{file} ..." if $DEBUG # `ruby -d` or `bundle -v`
          instance_eval File.read(file)
        end

    Contents of /usr/share/redmine/Gemfile.lock:

        GEM remote: http://rubygems.org/ specs: actionmailer (3.2.8) actionpack (= 3.2.8) mail (~> 2.4.4) actionpack (3.2.8) activemodel (= 3.2.8) activesupport (= 3.2.8) builder (~> 3.0.0) erubis (~> 2.7.0) journey (~> 1.0.4) rack (~> 1.4.0) rack-cache (~> 1.2) rack-test (~> 0.6.1) sprockets (~> 2.1.3) activemodel (3.2.8) activesupport (= 3.2.8) builder (~> 3.0.0) activerecord (3.2.8) activemodel (= 3.2.8) activesupport (= 3.2.8) arel (~> 3.0.2) tzinfo (~> 0.3.29) activeresource (3.2.8) activemodel (= 3.2.8) activesupport (= 3.2.8) activesupport (3.2.8) i18n (~> 0.6) multi_json (~> 1.0) arel (3.0.2) builder (3.0.0) coderay (1.0.8) erubis (2.7.0) fastercsv (1.5.5) hike (1.2.1) i18n (0.6.1) journey (1.0.4) jquery-rails (2.0.3) railties (>= 3.1.0, < 5.0) thor (~> 0.14) json (1.7.5) mail (2.4.4) i18n (>= 0.4.0) mime-types (~> 1.16) treetop (~> 1.4.8) metaclass (0.0.1) mime-types (1.19) mocha (0.12.3) metaclass (~> 0.0.1) multi_json (1.3.7) mysql (2.8.1) mysql2 (0.3.11) net-ldap (0.3.1) pg (0.14.1) polyglot (0.3.3) rack (1.4.1) rack-cache (1.2) rack (>= 0.4) rack-openid (1.3.1) rack (>= 1.1.0) ruby-openid (>= 2.1.8) rack-ssl (1.3.2) rack rack-test (0.6.2) rack (>= 1.0) rails (3.2.8) actionmailer (= 3.2.8) actionpack (= 3.2.8) activerecord (= 3.2.8) activeresource (= 3.2.8) activesupport (= 3.2.8) bundler (~> 1.0) railties (= 3.2.8) railties (3.2.8) actionpack (= 3.2.8) activesupport (= 3.2.8) rack-ssl (~> 1.3.2) rake (>= 0.8.7) rdoc (~> 3.4) thor (>= 0.14.6, < 2.0) rake (10.0.0) rdoc (3.12) json (~> 1.4) rmagick (2.13.1) ruby-openid (2.1.8) shoulda (2.11.3) sprockets (2.1.3) hike (~> 1.2) rack (~> 1.0) tilt (~> 1.1, != 1.3.0) sqlite3 (1.3.6) test-unit (2.5.2) thor (0.16.0) tilt (1.3.3) treetop (1.4.12) polyglot polyglot (>= 0.3.1) tzinfo (0.3.35) yard (0.8.3) PLATFORMS ruby DEPENDENCIES activerecord-jdbcmysql-adapter activerecord-jdbcpostgresql-adapter activerecord-jdbcsqlite3-adapter builder (= 3.0.0) coderay (~> 1.0.6) fastercsv (~> 1.5.0) i18n (~> 0.6.0) jquery-rails (~> 2.0.2) jruby-openssl mocha (= 0.12.3) mysql mysql2 (~> 0.3.11) net-ldap (~> 0.3.1) pg (>= 0.11.0) rack-openid rails (= 3.2.8) rdoc (>= 2.4.2) rmagick (>= 2.0.0) ruby-openid (~> 2.1.4) shoulda (~> 2.11) sqlite3 test-unit yard

    Results of gem list:

        actionmailer (3.2.9, 3.2.8) actionpack (3.2.9, 3.2.8) activemodel (3.2.9, 3.2.8) activerecord (3.2.9, 3.2.8) activerecord-mysql2-adapter (0.0.3) activeresource (3.2.9, 3.2.8) activesupport (3.2.9, 3.2.8) arel (3.0.2) builder (3.0.0) bundler (1.2.1) coderay (1.0.8) erubis (2.7.0) fastercsv (1.5.5) hike (1.2.1) i18n (0.6.1) journey (1.0.4) jquery-rails (2.0.3) json (1.7.5) mail (2.4.4) metaclass (0.0.1) mime-types (1.19) mocha (0.12.3) multi_json (1.3.7) mysql (2.8.1) mysql2 (0.3.11) net-ldap (0.3.1) pg (0.14.1) polyglot (0.3.3) rack (1.4.1) rack-cache (1.2) rack-openid (1.3.1) rack-ssl (1.3.2) rack-test (0.6.2) rails (3.2.9, 3.2.8) railties (3.2.9, 3.2.8) rake (10.0.0) rdoc (3.12) rmagick (2.13.1) ruby-openid (2.1.8) shoulda (2.11.3) sprockets (2.2.1, 2.1.3) sqlite3 (1.3.6) thor (0.16.0) tilt (1.3.3) treetop (1.4.12) tzinfo (0.3.35) yard (0.8.3)

    Results of `bundle show`:

        Gems included by the bundle: * actionmailer (3.2.8) * actionpack (3.2.8) * activemodel (3.2.8) * activerecord (3.2.8) * activeresource (3.2.8) * activesupport (3.2.8) * arel (3.0.2) * builder (3.0.0) * bundler (1.2.1) * coderay (1.0.8) * erubis (2.7.0) * fastercsv (1.5.5) * hike (1.2.1) * i18n (0.6.1) * journey (1.0.4) * jquery-rails (2.0.3) * json (1.7.5) * mail (2.4.4) * metaclass (0.0.1) * mime-types (1.19) * mocha (0.12.3) * multi_json (1.3.7) * mysql (2.8.1) * net-ldap (0.3.1) * pg (0.14.1) * polyglot (0.3.3) * rack (1.4.1) * rack-cache (1.2) * rack-openid (1.3.1) * rack-ssl (1.3.2) * rack-test (0.6.2) * rails (3.2.8) * railties (3.2.8) * rake (10.0.0) * rdoc (3.12) * rmagick (2.13.1) * ruby-openid (2.1.8) * shoulda (2.11.3) * sprockets (2.1.3) * sqlite3 (1.3.6) * thor (0.16.0) * tilt (1.3.3) * treetop (1.4.12) * tzinfo (0.3.35) * yard (0.8.3)
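    A hedged reading of the evidence above, not a verified fix: every gem path shown lives under /var/lib/gems/1.8, i.e. the site is running Ruby 1.8, and Redmine's Gemfile only bundles mysql2 for Ruby 1.9 (the platforms :mri_19 group) while Ruby 1.8 gets the plain mysql gem - so "mysql2 is not part of the bundle" is expected whenever config/database.yml asks for the mysql2 adapter. Either run the app under Ruby 1.9, or point database.yml at the adapter that actually is in the bundle, e.g.:

        # config/database.yml (values other than the adapter are placeholders)
        production:
          adapter: mysql
          database: redmine
          host: localhost
          username: redmine
          password: "secret"
          encoding: utf8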
