Search Results

Search found 12704 results on 509 pages for 'security'.


  • Is this Java 7 security threat an issue if you have Java 7 installed but not as the default?

    - by user1361315
    I have an MBP with OS X Mountain Lion installed, and I believe from what I read that Macs only ship with Java 6 by default. I'm not at my computer at the moment, but I am pretty sure I have installed Java 7, but it isn't my default Java version (I think I installed it and I have to explicitly reference it to use it). Does this mean I am safe from this particular threat? Reference: http://www.pcworld.com/businesscenter/article/261748/researchers_find_critical_vulnerability_in_java_7_patch_hours_after_release.html
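
    As a quick check (an illustrative sketch only, not from the article), you can confirm which runtime actually executes your code by printing the JVM's own properties; the reported vulnerability is in Java 7, so if whatever launches your code still resolves to Java 6, that is the runtime in play:

        // Prints the version and install location of the JVM this class runs on.
        public class WhichJava {
            public static void main(String[] args) {
                System.out.println("java.version = " + System.getProperty("java.version"));
                System.out.println("java.home    = " + System.getProperty("java.home"));
                System.out.println("java.vendor  = " + System.getProperty("java.vendor"));
            }
        }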

    Read the article

  • Problem with Remember Me Service in Spring Security

    - by Gearóid
    Hi, I'm trying to implement a "remember me" functionality in my website using Spring. The cookie and entry in the persistent_logins table are getting created correctly. Additionally, I can see that the correct user is being restored as the username is displayed at the top of the page. However, once I try to access any information for this user when they return after they were "remembered", I get a NullPointerException. It looks as though the user isn't being set in the session again. My applicationContext-security.xml contains the following: <remember-me data-source-ref="dataSource" user-service-ref="userService"/> ... <authentication-provider user-service-ref="userService" /> <jdbc-user-service id="userService" data-source-ref="dataSource" role-prefix="ROLE_" users-by-username-query="select email as username, password, 1 as ENABLED from user where email=?" authorities-by-username-query="select user.id as id, upper(role.name) as authority from user, role, users_roles where users_roles.user_fk=id and users_roles.role_fk=role.name and user.email=?"/> I thought it may have had something to do with users-by-username query but surely login wouldn't work correctly if this query was incorrect? Any help on this would be greatly appreciated. Thanks, gearoid.
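
    A frequent cause of this kind of NullPointerException is code that expects a session attribute or domain object that is only populated during the interactive form login; a user restored from the remember-me cookie never goes through that path. Below is a minimal sketch (assuming Spring Security 3 package names; the class and method names are mine) that derives the user from the security context instead, so the same lookup works for both login paths:

        import org.springframework.security.authentication.RememberMeAuthenticationToken;
        import org.springframework.security.core.Authentication;
        import org.springframework.security.core.context.SecurityContextHolder;

        public final class CurrentUser {

            /** Username of the current principal, whether it came from form login
             *  or from the remember-me cookie; null if nobody is authenticated. */
            public static String name() {
                Authentication auth = SecurityContextHolder.getContext().getAuthentication();
                return auth == null ? null : auth.getName();
            }

            /** True when the user was restored from the persistent-login cookie.
             *  In that case nothing placed in the HTTP session at form-login time
             *  exists yet, so any domain User object must be reloaded by name. */
            public static boolean isRemembered() {
                Authentication auth = SecurityContextHolder.getContext().getAuthentication();
                return auth instanceof RememberMeAuthenticationToken;
            }
        }

    Wherever a page needs the full user record, reloading it by CurrentUser.name() avoids depending on session state that only form login creates.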

    Read the article

  • Authlogic, logout, credential capture and security

    - by Paddy
    Ok this is something weird. I got authlogic-oid installed in my rails app today. Everything works perfectly fine but for one small nuisance. This is what i did: I first register with my google openid. Successful login, redirection and my email, along with my correct openid is stored in my database. I am happy that everything worked fine! Now when i logout, my rails app as usual destroys the session and redirects me back to my root url where i can login again. Now if i try to login it still remembers my last login id. Not a big issue as i can always "Sign in as a different user" but i am wondering if there is anyway to not only logout from my app but also logout from google. I noticed the same with stack overflow's openid authentication system. Why am i so bothered about this, you may ask. But is it not a bad idea if your web apps end user, who happens to be in a cyber cafe, thinks he has logged out from your app and hence from his google account only to realize later that his google account had got hacked by some unworthy loser who just happened to notice that the one before him had not logged out from google and say.. changed his password!! Should i be paranoid? Isn't this a major security lapse while implementing the openid spec? Probably today someone can give me a workaround for this issue and the question is solved for me. But what about the others who have implemented openid in their apps and not implemented a workaround?

    Read the article

  • WCF Double Hop questions about Security and Binding.

    - by Ken Maglio
    Background information: .Net Website which calls a service (aka external service) facade on an app server in the DMZ. This external service then calls the internal service which is on our internal app server. From there that internal service calls a stored procedure (Linq to SQL Classes), and passes the serialized data back though to the external service, and from there back to the website. We've done this so any communication goes through an external layer (our external app server) and allows interoperability; we access our data just like our clients consuming our services. We've gotten to the point in our development where we have completed the system and it all works, the double hop acts as it should. However now we are working on securing the entire process. We are looking at using TransportWithMessageCredentials. We want to have WS2007HttpBinding for the external for interoperability, but then netTCPBinding for the bridge through the firewall for security and speed. Questions: If we choose WS2007HttpBinding as the external services binding, and netTCPBinding for the internal service is this possible? I know WS-* supports this as does netTCP, however do they play nice when passing credential information like user/pass? If we go to Kerberos, will this impact anything? We may want to do impersonation in the future. If you can when you answer post any reference links about why you're answering the way you are, that would be very helpful to us. Thanks!

    Read the article

  • Cross-platform game development: ease of development vs security

    - by alcuadrado
    Hi, I'm a member and contributor of the Argentum Online (AO) community, the first MMORPG from Argentina, which is Free Software; although it's not 3D, it's really addictive and has some dozens of thousands of users. Really unluckily, AO was developed in Visual Basic (yes, you can laugh) by the former community, so, as you can imagine, the code not only sucks, it has zero portability. I'm planning, with some friends, to rewrite the client, and as a GNU/Linux fanatic, I want to do it cross-platform. Some other people are doing the same with the server in Java. So my biggest problem is that we would like to use a rapid development language (like Java, Ruby or Python) but the client would be pretty insecure. A Ruby/Python version would have all its code available, and the Java one would be easily decompilable (yes, we have some crackers in the community). We have considered the option of implementing the security module in C/C++ as a dynamic library, but it can be replaced with a custom one, so it's not really secure. We are also considering the option of doing the core application in C++ and the GUI in Ruby/Python, but we haven't analysed all its implications yet. But we really don't want to code the entire game in C/C++ as it doesn't need that much performance (the game is played at 18fps on average) and we want to develop it as fast as possible. So what would you choose in my case? Thank you!

    Read the article

  • Where are the real risks in network security?

    - by Barry Brown
    Anytime a username/password authentication is used, the common wisdom is to protect the transport of that data using encryption (SSL, HTTPS, etc). But that leaves the end points potentially vulnerable. Realistically, which is at greater risk of intrusion? Transport layer: Compromised via wireless packet sniffing, malicious wiretapping, etc. Transport devices: Risks include ISPs and Internet backbone operators sniffing data. End-user device: Vulnerable to spyware, key loggers, shoulder surfing, and so forth. Remote server: Many uncontrollable vulnerabilities including malicious operators, break-ins resulting in stolen data, physically heisting servers, backups kept in insecure places, and much more. My gut reaction is that although the transport layer is relatively easy to protect via SSL, the risks in the other areas are much, much greater, especially at the end points. For example, at home my computer connects directly to my router; from there it goes straight to my ISP's routers and onto the Internet. I would estimate the risks at the transport level (both software and hardware) at low to non-existent. But what security does the server I'm connected to have? Has it been hacked into? Is the operator collecting usernames and passwords, knowing that most people use the same information at other websites? Likewise, has my computer been compromised by malware? Those seem like much greater risks. What do you think?

    Read the article

  • How to retrieve the PK using Spring Security

    - by aditya
    i implement this method of the UserDetailService interface, public UserDetails loadUserByUsername(final String username) throws UsernameNotFoundException, DataAccessException { final EmailCredential userDetails = persistentEmailCredential .getUniqueEmailCredential(username); if (userDetails == null) { throw new UsernameNotFoundException(username + "is not registered"); } final HashSet<GrantedAuthority> authorities = new HashSet<GrantedAuthority>(); authorities.add(new GrantedAuthorityImpl("ROLE_USER")); for (UserRole role:userDetails.getAccount().getRoles()) { authorities.add(new GrantedAuthorityImpl(role.getRole())); } return new User(userDetails.getEmailAddress(), userDetails .getPassword(), true, true, true, true, authorities); } in the security context i do some thing like this <!-- Login Info --> <form-login default-target-url='/dashboard.htm' login-page="/login.htm" authentication-failure-url="/login.htm?authfailed=true" always-use-default-target='false' /> <logout logout-success-url="/login.htm" invalidate-session="true" /> <remember-me user-service-ref="emailAccountService" key="fuellingsport" /> <session-management> <concurrency-control max-sessions="1" /> </session-management> </http> now i want to pop out the Pk of the logged in user, how can i show it in my jsp pages, any idea thanks in advance
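
    One common approach (a sketch only, assuming Spring Security 3; UserWithId is a made-up name, and getId() on your Account entity is an assumption) is to return a User subclass that carries the primary key, and then read it in a JSP through the security taglib, e.g. <sec:authentication property="principal.id" />:

        import java.util.Collection;
        import org.springframework.security.core.GrantedAuthority;
        import org.springframework.security.core.userdetails.User;

        // Hypothetical subclass that carries the database primary key alongside
        // the standard Spring Security user details.
        public class UserWithId extends User {
            private final Long id;

            public UserWithId(Long id, String username, String password,
                              Collection<GrantedAuthority> authorities) {
                super(username, password, true, true, true, true, authorities);
                this.id = id;
            }

            public Long getId() {
                return id;
            }
        }

    loadUserByUsername would then return new UserWithId(userDetails.getAccount().getId(), userDetails.getEmailAddress(), userDetails.getPassword(), authorities) in place of the plain User, making the key available on the principal everywhere.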

    Read the article

  • Problem with Spring security's logout

    - by uther-lightbringer
    Hello, I've got a problem logging out in the Spring framework. First, when I want j_spring_security_logout to handle it for me, I get a 404 (j_spring_security_logout not found): sample-security.xml: <http> <intercept-url pattern="/messageList.htm*" access="ROLE_USER,ROLE_GUEST" /> <intercept-url pattern="/messagePost.htm*" access="ROLE_USER" /> <intercept-url pattern="/messageDelete.htm*" access="ROLE_ADMIN" /> <form-login login-page="/login.jsp" default-target-url="/messageList.htm" authentication-failure-url="/login.jsp?error=true" /> <logout/> </http> Sample URL link to logout in a JSP page: <a href="<c:url value="/j_spring_security_logout" />">Logout</a> When I try to use a custom JSP page instead (i.e. I use the login form for this purpose), I get a better result: at least it gets to the login page. But another problem is that you don't actually get logged off, as you can directly type a URL that should be guarded but you get past it anyway. Slightly modified from the previous listing: <http> <intercept-url pattern="/messageList.htm*" access="ROLE_USER,ROLE_GUEST" /> <intercept-url pattern="/messagePost.htm*" access="ROLE_USER" /> <intercept-url pattern="/messageDelete.htm*" access="ROLE_ADMIN" /> <form-login login-page="/login.jsp" default-target-url="/messageList.htm" authentication-failure-url="/login.jsp?error=true" /> <logout logout-success-url="/login.jsp" /> </http> <a href="<c:url value="/login.jsp" />">Logout</a> Thank you for help
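
    A 404 on /j_spring_security_logout usually means the request never reached the Spring Security filter chain, so it is worth checking the springSecurityFilterChain mapping in web.xml (the <c:url> tag already takes care of the context path). If you would rather keep a custom logout link, the sketch below (assuming Spring Security 3 package names) does programmatically what the logout filter would do, instead of merely forwarding to login.jsp:

        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.springframework.security.core.Authentication;
        import org.springframework.security.core.context.SecurityContextHolder;
        import org.springframework.security.web.authentication.logout.SecurityContextLogoutHandler;

        public class ManualLogout {
            /** Invalidates the session and clears the security context, mirroring
             *  what the j_spring_security_logout filter would otherwise do. */
            public static void logout(HttpServletRequest request, HttpServletResponse response) {
                Authentication auth = SecurityContextHolder.getContext().getAuthentication();
                if (auth != null) {
                    new SecurityContextLogoutHandler().logout(request, response, auth);
                }
            }
        }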

    Read the article

  • Security of deleting a MySQL row with jQuery $.post

    - by FFish
    I want to delete a row in my database and found an example on how to do this with jQuery's $.post() Now I am wondering about security though.. Can someone send a POST request to my delete-row.php script from another website? JS function deleterow(id) { // alert(typeof(id)); // number if (confirm('Are you sure want to delete?')) { $.post('delete-row.php', {album_id:+id, ajax:'true'}, function() { $("#row_"+id).fadeOut("slow"); }); } } PHP: delete-row.php <?php require_once("../db.php"); mysql_connect(DB_SERVER, DB_USER, DB_PASSWORD) or die("could not connect to database " . mysql_error()); mysql_select_db(DB_NAME) or die("could not select database " . mysql_error()); if (isset($_POST['album_id'])) { $query = "DELETE FROM albums WHERE album_id = " . $_POST['album_id']; $result = mysql_query($query); if (!$result) die('Invalid query: ' . mysql_error()); echo "album deleted!"; } ?>
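
    Yes: any site can make a visitor's browser POST to delete-row.php (CSRF), and the concatenated album_id also allows SQL injection. The sketch below shows the two server-side defenses in Java (a per-session token plus a parameterized delete); in PHP the same ideas map to a session-stored token and mysqli/PDO prepared statements. Names such as csrfToken are placeholders:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import javax.servlet.http.HttpServletRequest;

        public class DeleteAlbum {
            /**
             * Deletes an album only if the request carries the caller's session CSRF
             * token and the album id is bound as a parameter rather than concatenated
             * into the SQL string.
             */
            public static boolean deleteAlbum(HttpServletRequest request, Connection db) throws Exception {
                String sessionToken = (String) request.getSession().getAttribute("csrfToken");
                String sentToken = request.getParameter("token");
                if (sessionToken == null || !sessionToken.equals(sentToken)) {
                    return false; // request did not originate from a page we served
                }
                long albumId = Long.parseLong(request.getParameter("album_id")); // rejects non-numeric input
                // Also verify that the logged-in user actually owns this album before deleting.
                try (PreparedStatement ps = db.prepareStatement(
                        "DELETE FROM albums WHERE album_id = ?")) {
                    ps.setLong(1, albumId);
                    return ps.executeUpdate() > 0;
                }
            }
        }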

    Read the article

  • E-Commerce Security: Only Credit Card Fields Encrypted?!

    - by bizarreunprofessionalanddangerous
    I'd like your opinions on how a major bricks-and-mortar company is running the security for its shopping Web site. After a recent update, when you are logged into your shopping account, the session is now not secured. No 'https', no browser 'lock'. All the personal contact info, shopping history -- and if I'm not mistaken submit and change password -- are being sent unencrypted. There is a small frame around the credit card fields that is https. There's a little notice: "Our website is secure. Our website uses frames and because of this the secure icon will not appear in your browser" On top of this the most prominent login fields for the site are broken, and haven't gotten fixed for a week or longer (giving the distinct impression they have no clue what's going on and can't be trusted with anything). Now is it just me -- or is this simply incomprehensible for a billion dollar company, significant shopping site, in the year 2010. No lock. "We use frames" (maybe they forget "Best viewed in IE4"). Customers complaining, as you can see from their FAQ "explaining" why you aren't seeing https. I'm getting nowhere trying to convince customer service that they REALLY need to do something about this, and am about to head for the CEO. But I just want to make sure this is as BIZARRE and unprofessional and dangerous a situation as I think it is. (I'm trying to visualize what their Web technical team consists of. I'm getting A) some customer service reps who were given a 3 hour training course on Web site maintenance, B) a 14 year old boy in his bedroom masquerading as a major technical services company, C) a guy in a hut in a jungle with an e-commerce book from 1996.)

    Read the article

  • What is the best practice for using security in JAX-WS?

    - by kislo_metal
    Here is the scenario: I have some web services (JAX-WS) that need to be secured. Currently, for authentication needs, I provide an additional SecurityWService that gives an authorized user a userid & sessionid that then need to be included in requests to the other services. It would be better to use some standard Java security. There are many options, but we could not determine which one is best to use. Q1: I understand that I should use SSL at the transport layer, but what should I use for user authorization? Is there a better way of establishing a session, validating the user, etc.? Here are some key details: Most web service clients are PHP-based. I am using the JAX-WS implementation as a stateless session EJB. Deploying to GlassFish v3. Q2: What is the best framework / technology for user authorization / authentication in the case of using JSF 2.0 and EJB 3.1 technologies (Realms? WSIT?)? Thank You!
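
    For Q1, one low-friction option on GlassFish v3 (a sketch under that assumption; the realm and the "users" role name are placeholders) is container-managed HTTP BASIC authentication over SSL, so the container establishes the caller identity and the service only checks it through WebServiceContext, with no hand-rolled userid/sessionid tokens:

        import java.security.Principal;
        import javax.annotation.Resource;
        import javax.ejb.Stateless;
        import javax.jws.WebMethod;
        import javax.jws.WebService;
        import javax.xml.ws.WebServiceContext;

        @Stateless
        @WebService
        public class AccountService {

            @Resource
            private WebServiceContext wsContext;

            @WebMethod
            public String whoAmI() {
                // With HTTP BASIC auth over SSL configured against a GlassFish realm,
                // the container authenticates the caller before the method runs.
                Principal caller = wsContext.getUserPrincipal();
                if (caller == null || !wsContext.isUserInRole("users")) {
                    throw new SecurityException("caller is not authorized");
                }
                return caller.getName();
            }
        }

    PHP clients would then simply send the credentials with each HTTPS request, which keeps the services stateless.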

    Read the article

  • Lack of security in many PHP applications?

    - by John
    Over the past year of freelancing, I inherited two web projects, both of them built in PHP, both of them with sensitive information like credit card info, bank info, etc... In one application, when I typed http://thecompany.com/admin/, and without being asked for a username and password, I saw every user's sensitive information, including credit card numbers, bank account numbers etc... In another application, I was able to bypass the login screen by simply typing http://the2ndcompany.com/customer.php?user_id=777, and again, without any prompts for username and password, I was able to see user 777's credit card info. I cycled through a few more user_ids (any integer) and saw each person's credit card info. Is something wrong here? Or is this the quality of work that the "average" programmer produces? Because if this is what the average programmer produces, does that mean I'm an...gasp...elite programmer?? No..that can't be right....something doesn't make sense. So my question is, is it just coincidence that I inherited two applications both of which are dangerously lacking in security? Or are there a lot of bad PHP programmers out there?

    Read the article

  • How to manually set an authenticated user in Spring Security / SpringMVC

    - by David Parks
    After a new user submits a 'New account' form, I want to manually log that user in so they don't have to login on the subsequent page. The normal form login page going through the spring security interceptor works just fine. In the new-account-form controller I am creating a UsernamePasswordAuthenticationToken and setting it in the SecurityContext manually: SecurityContextHolder.getContext().setAuthentication(authentication); On that same page I later check that the user is logged in with: SecurityContextHolder.getContext().getAuthentication().getAuthorities(); This returns the authorities I set earlier in the authentication. All is well. But when this same code is called on the very next page I load, the authentication token is just UserAnonymous. I'm not clear why it did not keep the authentication I set on the previous request. Any thoughts? Could it have to do with session ID's not being set up correctly? Is there something that is possibly overwriting my authentication somehow? Perhaps I just need another step to save the authentication? Or is there something I need to do to declare the authentication across the whole session rather than a single request somehow? Just looking for some thoughts that might help me see what's happening here.
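
    One thing worth checking (a sketch, assuming Spring Security 3.x) is whether the context you set ever gets saved to the HTTP session, for example when the request that performs the manual login does not pass through the Spring Security filter chain, or when the response is committed before the filter can save it. Storing the context in the session explicitly, under the key the persistence filter reads, rules that out:

        import javax.servlet.http.HttpServletRequest;
        import org.springframework.security.core.Authentication;
        import org.springframework.security.core.context.SecurityContextHolder;
        import org.springframework.security.web.context.HttpSessionSecurityContextRepository;

        public class AutoLogin {
            /**
             * Places the freshly created authentication in both the thread-local
             * context and the HTTP session, so the next request's filter chain can
             * restore it instead of falling back to the anonymous token.
             */
            public static void login(HttpServletRequest request, Authentication authentication) {
                SecurityContextHolder.getContext().setAuthentication(authentication);
                request.getSession(true).setAttribute(
                        HttpSessionSecurityContextRepository.SPRING_SECURITY_CONTEXT_KEY,
                        SecurityContextHolder.getContext());
            }
        }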

    Read the article

  • Struts 2 security

    - by Dewfy
    Does Struts 2 have a complete solution for a simple login task? I have this simple declaration in struts.xml: <package namespace="/protected" name="manager" extends="struts-default" > <interceptors> <interceptor-stack name="secure"> <interceptor-ref name="roles"> <param name="allowedRoles">registered</param> </interceptor-ref> </interceptor-stack> </interceptors> <default-action-ref name="pindex"/> <action name="pindex" > <interceptor-ref name="completeStack"/> <interceptor-ref name="secure"/> <result>protected/index.html</result> </action> </package> Accessing this resource only shows 403 (Forbidden). So what should I do next to: Add a login page (the standard Tomcat declaration in web.xml with <login-config> does not work)? Provide the security round trip? Do I need to write my own servlet, or does a Struts 2 solution exist? Thanks in advance!
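
    Note that the built-in roles interceptor delegates to HttpServletRequest.isUserInRole(), so it only works with container-managed security (a Tomcat realm plus <login-config> in web.xml). If you want Struts 2 to own the login instead, a session-based interceptor is a common substitute; this is only a sketch, and the "role" session key is made up:

        import java.util.Map;
        import com.opensymphony.xwork2.Action;
        import com.opensymphony.xwork2.ActionInvocation;
        import com.opensymphony.xwork2.interceptor.AbstractInterceptor;

        /**
         * Hypothetical session-based replacement for the container-backed "roles"
         * interceptor: a login action puts a "role" entry into the session map after
         * checking credentials, and this interceptor gates the protected actions.
         */
        public class SessionRoleInterceptor extends AbstractInterceptor {
            @Override
            public String intercept(ActionInvocation invocation) throws Exception {
                Map<String, Object> session = invocation.getInvocationContext().getSession();
                Object role = session.get("role");
                if (role == null || !"registered".equals(role)) {
                    return Action.LOGIN; // forward to a login result instead of a 403
                }
                return invocation.invoke();
            }
        }

    A login action would validate the credentials and store "registered" under that session key, and this interceptor would replace the roles reference in the stack above.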

    Read the article

  • Protecting my app's security from disassembly

    - by sandis
    So I recently tried decompiling one of my Android apps, and to my horror I discovered that the code was quite readable. Even worse, all my variable names were intact! I thought those would be compressed to something unreadable at compile time. The app is triggered to expire after a certain time. However, now it was trivial for me to find my function named checkIfExpired() and find the variable "expired". Is there any good way of making it harder for a potential hacker to mess with my app? Before someone states the obvious: Yes, it is security through obscurity. But obviously this is my only option since the user always will have access to all my code. This is the same for all apps. The details of my deactivation-thingy are unimportant; the point is that I don't want the decompiled output to reveal some of the things I do. Side questions: Why are the variable names not compressed? Could it be the case that my program would run faster if I stopped using really long variable names, as is my habit?

    Read the article

  • DWR and Spring Security - User is deauthenticated in a few seconds

    - by Vojtech
    I am trying to implement user authentication via DWR as follows: public class PublicRemote { @Autowired @Qualifier("authenticationManager") private AuthenticationManager authenticationManager; public Map<String, Object> userLogin(String username, String password, boolean stay) { Map<String, Object> map = new HashMap<>(); UsernamePasswordAuthenticationToken authRequest = new UsernamePasswordAuthenticationToken(username, password); try { Authentication authentication = authenticationManager.authenticate(authRequest); SecurityContextHolder.getContext().setAuthentication(authentication); map.put("success", "true"); } catch (Exception e) { map.put("success", "false"); } return map; } public Map<String, Object> getUserState() { Map<String, Object> map = new HashMap<>(); Authentication authentication = SecurityContextHolder.getContext().getAuthentication(); boolean authenticated = authentication != null && authentication.isAuthenticated(); map.put("authenticated", authenticated); if (authenticated) { map.put("authorities", authentication.getAuthorities()); } return map; } } The authentication works correctly and by calling getUserState() I can see that the user is successfully logged in. The problem is that this state will stay only for few seconds. In probably 5 seconds, the getAuthentication() starts returning null. Is there some problem with session in DWR or is it some misconfiguration of Spring Security?
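
    One likely culprit (a sketch, assuming Spring Security 3.x and that the DWR servlet is not behind springSecurityFilterChain) is that the context set inside the DWR call is never persisted, so the next poll starts with an empty context. Copying it into the HTTP session under the key the persistence filter reads keeps the user logged in across calls:

        import javax.servlet.http.HttpSession;
        import org.directwebremoting.WebContextFactory;
        import org.springframework.security.core.Authentication;
        import org.springframework.security.core.context.SecurityContextHolder;
        import org.springframework.security.web.context.HttpSessionSecurityContextRepository;

        public class DwrLoginHelper {
            /**
             * After authenticating inside a DWR call, copy the security context into
             * the HTTP session so later requests (and their filter chain) see the
             * same user instead of an empty context.
             */
            public static void rememberInSession(Authentication authentication) {
                SecurityContextHolder.getContext().setAuthentication(authentication);
                HttpSession session = WebContextFactory.get()
                        .getHttpServletRequest().getSession(true);
                session.setAttribute(
                        HttpSessionSecurityContextRepository.SPRING_SECURITY_CONTEXT_KEY,
                        SecurityContextHolder.getContext());
            }
        }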

    Read the article

  • Security of Flex for payment website

    - by Mario
    So, it's been about 3 years since I wrote and went live with my company's main internet facing website. Originally written in PHP, I've since just been making minor changes here and there to progress the site as we've needed to. I've wanted to rewrite it from the ground up in the last year or so and now, we want to add some major features so this is a perfect time. The website in question is as close to a banking website as you'd get (without being a bank; sorry for the obscurity, but the less info I can give out, the better). For the rewrite, I want to separate the presentation layer from the processing layer as much as I can. I want the end user to be stuck in a box and not be able to get out so to speak (this is all because of PCI compliance, being pen tested every 3 months, etc...) So, being probed every 3 months has increasingly made me nervous. We haven't failed yet and there hasn't been a breach yet, but I want to make sure I continue to pass (as much as I can anyways). So, I'm considering rewriting the presentation layer in Adobe Flex and doing all the processing in PHP (effectively, IMO, separating presentation from processing) - I would do all my normal form validation in Flex (as opposed to JavaScript or PHP) and do my reads and writes to the db via PHP. My questions are: I know Flash has something like 99% market penetration - do people find this to be true? Has anyone with a Flash site seen users who couldn't access it? Flash in general has come under a lot of attacks about security and the like - I know this. I would use a SWF encryptor, disable debugging (which I got snagged on once on a different application), continue to use HTTPS and any other means I can think of. At the end of the day, everyone knows that if someone wants the data badly enough, they're going to find a way in; I just want to make it as difficult for them as I can. Any thoughts are appreciated. -Mario

    Read the article

  • UDP security and identifying incoming data.

    - by Charles
    I have been creating an application using UDP for transmitting and receiving information. The problem I am running into is security. Right now I am using the IP/socketid in determining what data belongs to whom. However, I have been reading about how people could simply spoof their IP, then just send data as a specific IP. So this seems to be the wrong way to do it (insecure). So how else am I suppose to identify what data belongs to what users? For instance you have 10 users connected, all have specific data. The server would need to match the user data to this data we received. The only way I can see to do this is to use some sort of client/server key system and encrypt the data. I am curious as to how other applications (or games, since that's what this application is) make sure their data is genuine. Also there is the fact that encryption takes much longer to process than unencrypted. Although I am not sure by how much it will affect performance. Any information would be appreciated. Thanks.
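
    A common pattern (sketched below; the framing and field sizes are my own choices) is to hand each client a random per-session key over a secure channel at login, then have the client tag every datagram with an HMAC, so identity comes from proof of the key rather than from the spoofable source IP. HMAC-SHA256 costs far less than encrypting the whole payload, and putting a sequence number under the MAC also blocks replays:

        import java.nio.ByteBuffer;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        public class PacketAuth {
            /**
             * Tags a datagram payload with an HMAC computed from a per-session key
             * that was exchanged over a secure channel (e.g. a TCP/TLS login). The
             * server looks up the key by session id and verifies the tag, so a
             * spoofed source IP alone is not enough to inject data.
             */
            public static byte[] seal(int sessionId, byte[] payload, byte[] sessionKey) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(sessionKey, "HmacSHA256"));
                mac.update(ByteBuffer.allocate(4).putInt(sessionId).array());
                byte[] tag = mac.doFinal(payload);                      // 32-byte tag
                return ByteBuffer.allocate(4 + tag.length + payload.length)
                        .putInt(sessionId).put(tag).put(payload).array();
            }
        }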

    Read the article

  • Security strategies for storing password on disk

    - by Mike
    I am building a suite of batch jobs that require regular access to a database, running on a Solaris 10 machine. Because of (unchangeable) design constraints, we are required to use a certain program to connect to it. Said interface requires us to pass a plain-text password over a command line to connect to the database. This is a terrible security practice, but we are stuck with it. I am trying to make sure things are properly secured on our end. Since the processing is automated (ie, we can't prompt for a password), and I can't store anything outside the disk, I need a strategy for storing our password securely. Here are some basic rules: The system has multiple users. We can assume that our permissions are properly enforced (ie, if a file is chmod'd to 600, it won't be publicly readable). I don't mind anyone with superuser access looking at our stored password. Here is what I've got so far: Store password in password.txt $chmod 600 password.txt Process reads from password.txt when it's needed Buffer overwritten with zeros when it's no longer needed Although I'm sure there is a better way.
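
    A small sketch of the read-and-wipe step (illustrative only; it assumes the jobs are JVM-based and uses Java 7 try-with-resources, but the same pattern applies in any language): keep the secret in a char array rather than an immutable String, and zero it as soon as the command line has been built:

        import java.io.FileInputStream;
        import java.io.InputStreamReader;
        import java.util.Arrays;

        public class PasswordFile {
            /**
             * Reads a password from a chmod-600 file into a char[] (not a String, so
             * it can be wiped) and zeroes the working buffer before returning.
             */
            public static char[] read(String path) throws Exception {
                char[] buf = new char[256];
                int len;
                try (InputStreamReader in = new InputStreamReader(
                        new FileInputStream(path), "US-ASCII")) {
                    len = in.read(buf);
                }
                while (len > 0 && (buf[len - 1] == '\n' || buf[len - 1] == '\r')) {
                    len--; // strip the trailing newline
                }
                char[] password = Arrays.copyOf(buf, Math.max(len, 0));
                Arrays.fill(buf, '\0'); // wipe the oversized read buffer
                return password;        // caller should Arrays.fill(password, '\0') when done
            }
        }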

    Read the article

  • How to strengthen MySQL database server security?

    - by i need help
    Suppose we were to use server1 for all files (file server) and server2 for the MySQL database (database server). In order for websites on server1 to access the database on server2, don't they need to connect to the IP address of the second (MySQL) server? In this case, it is a remote MySQL connection. However, I have seen some people comment on the security issue: remote access to MySQL is not very secure. When your remote computer first connects to your MySQL database, the password is encrypted before being transmitted over the Internet. But after that, all data is passed as unencrypted "plain text". If someone was able to view your connection data (such as a "hacker" capturing data from an unencrypted WiFi connection you're using), that person would be able to view part or all of your database. So I am just wondering about ways to secure it: allow remote MySQL access from server1 by allowing only its static IP address? Allow remote access from server1 by restricting the port allowed to connect to 3306? Change 3306 to another port? Any advice?
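
    Beyond restricting the firewall to server1's static IP and moving the port, the plain-text issue itself is usually addressed by requiring TLS on the MySQL connection. As an illustration (property names follow MySQL Connector/J 5.x; the host and credentials are placeholders), a client can refuse to fall back to an unencrypted link:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.util.Properties;

        public class SecureMysqlConnection {
            /** Opens a JDBC connection that insists on an encrypted session. */
            public static Connection open() throws Exception {
                Properties props = new Properties();
                props.setProperty("user", "app_user");
                props.setProperty("password", "app_password");
                props.setProperty("useSSL", "true");      // encrypt the session
                props.setProperty("requireSSL", "true");  // fail instead of silently downgrading
                return DriverManager.getConnection(
                        "jdbc:mysql://db.internal.example:3306/appdb", props);
            }
        }

    An SSH tunnel or VPN between the two servers achieves the same goal if the client library cannot be changed.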

    Read the article

  • [GEEK SCHOOL] Network Security 1: Securing User Accounts and Passwords in Windows

    - by Matt Klein
    This How-To Geek School class is intended for people who want to learn more about security when using Windows operating systems. You will learn many principles that will help you have a more secure computing experience and will get the chance to use all the important security tools and features that are bundled with Windows. Obviously, we will share everything you need to know about using them effectively. In this first lesson, we will talk about password security; the different ways of logging into Windows and how secure they are. In the following lesson, we will explain where Windows stores all the user names and passwords you enter while working in this operating system, how safe they are, and how to manage this data. Moving on in the series, we will talk about User Account Control, its role in improving the security of your system, and how to use Windows Defender in order to protect your system from malware. Then, we will talk about the Windows Firewall, how to use it in order to manage the apps that get access to the network and the Internet, and how to create your own filtering rules. After that, we will discuss the SmartScreen Filter – a security feature that gets more and more attention from Microsoft and is now widely used in its Windows 8.x operating systems. Moving on, we will discuss ways to keep your software and apps up-to-date, why this is important and which tools you can use to automate this process as much as possible. Last but not least, we will discuss the Action Center and its role in keeping you informed about what’s going on with your system and share several tips and tricks about how to stay safe when using your computer and the Internet. Let’s get started by discussing everyone’s favorite subject: passwords. The Types of Passwords Found in Windows In Windows 7, you have only local user accounts, which may or may not have a password. For example, you can easily set a blank password for any user account, even if that one is an administrator. The only exception to this rule is business networks, where domain policies force all user accounts to use a non-blank password. In Windows 8.x, you have both local accounts and Microsoft accounts. If you would like to learn more about them, don’t hesitate to read the lesson on User Accounts, Groups, Permissions & Their Role in Sharing, in our Windows Networking series. Microsoft accounts are obliged to use a non-blank password due to the fact that a Microsoft account gives you access to Microsoft services. Using a blank password would mean exposing yourself to lots of problems. Local accounts in Windows 8.1, however, can use a blank password. On top of traditional passwords, any user account can create and use a 4-digit PIN or a picture password. These concepts were introduced by Microsoft to speed up the sign-in process for the Windows 8.x operating system. However, they do not replace the use of a traditional password and can be used only in conjunction with a traditional user account password. Another type of password that you encounter in Windows operating systems is the Homegroup password. In a typical home network, users can use the Homegroup to easily share resources. A Homegroup can be joined by a Windows device only by using the Homegroup password. If you would like to learn more about the Homegroup and how to use it for network sharing, don’t hesitate to read our Windows Networking series. 
What to Keep in Mind When Creating Passwords, PINs and Picture Passwords When creating passwords, a PIN, or a picture password for your user account, we would like you keep in mind the following recommendations: Do not use blank passwords, even on the desktop computers in your home. You never know who may gain unwanted access to them. Also, malware can run more easily as administrator because you do not have a password. Trading your security for convenience when logging in is never a good idea. When creating a password, make it at least eight characters long. Make sure that it includes a random mix of upper and lowercase letters, numbers, and symbols. Ideally, it should not be related in any way to your name, username, or company name. Make sure that your passwords do not include complete words from any dictionary. Dictionaries are the first thing crackers use to hack passwords. Do not use the same password for more than one account. All of your passwords should be unique and you should use a system like LastPass, KeePass, Roboform or something similar to keep track of them. When creating a PIN use four different digits to make things slightly harder to crack. When creating a picture password, pick a photo that has at least 10 “points of interests”. Points of interests are areas that serve as a landmark for your gestures. Use a random mixture of gesture types and sequence and make sure that you do not repeat the same gesture twice. Be aware that smudges on the screen could potentially reveal your gestures to others. The Security of Your Password vs. the PIN and the Picture Password Any kind of password can be cracked with enough effort and the appropriate tools. There is no such thing as a completely secure password. However, passwords created using only a few security principles are much harder to crack than others. If you respect the recommendations shared in the previous section of this lesson, you will end up having reasonably secure passwords. Out of all the log in methods in Windows 8.x, the PIN is the easiest to brute force because PINs are restricted to four digits and there are only 10,000 possible unique combinations available. The picture password is more secure than the PIN because it provides many more opportunities for creating unique combinations of gestures. Microsoft have compared the two login options from a security perspective in this post: Signing in with a picture password. In order to discourage brute force attacks against picture passwords and PINs, Windows defaults to your traditional text password after five failed attempts. The PIN and the picture password function only as alternative login methods to Windows 8.x. Therefore, if someone cracks them, he or she doesn’t have access to your user account password. However, that person can use all the apps installed on your Windows 8.x device, access your files, data, and so on. How to Create a PIN in Windows 8.x If you log in to a Windows 8.x device with a user account that has a non-blank password, then you can create a 4-digit PIN for it, to use it as a complementary login method. In order to create one, you need to go to “PC Settings”. If you don’t know how, then press Windows + C on your keyboard or flick from the right edge of the screen, on a touch-enabled device, then press “Settings”. The Settings charm is now open. Click or tap the link that says “Change PC settings”, on the bottom of the charm. In PC settings, go to Accounts and then to “Sign-in options”. 
Here you will find all the necessary options for changing your existing password, creating a PIN, or a picture password. To create a PIN, press the “Add” button in the PIN section. The “Create a PIN” wizard is started and you are asked to enter the password of your user account. Type it and press “OK”. Now you are asked to enter a 4-digit pin in the “Enter PIN” and “Confirm PIN” fields. The PIN has been created and you can now use it to log in to Windows. How to Create a Picture Password in Windows 8.x If you log in to a Windows 8.x device with a user account that has a non-blank password, then you can also create a picture password and use it as a complementary login method. In order to create one, you need to go to “PC settings”. In PC Settings, go to Accounts and then to “Sign-in options”. Here you will find all the necessary options for changing your existing password, creating a PIN, or a picture password. To create a picture password, press the “Add” button in the “Picture password” section. The “Create a picture password” wizard is started and you are asked to enter the password of your user account. You are shown a guide on how the picture password works. Take a few seconds to watch it and learn the gestures that can be used for your picture password. You will learn that you can create a combination of circles, straight lines, and taps. When ready, press “Choose picture”. Browse your Windows 8.x device and select the picture you want to use for your password and press “Open”. Now you can drag the picture to position it the way you want. When you like how the picture is positioned, press “Use this picture” on the left. If you are not happy with the picture, press “Choose new picture” and select a new one, as shown during the previous step. After you have confirmed that you want to use this picture, you are asked to set up your gestures for the picture password. Draw three gestures on the picture, any combination you wish. Please remember that you can use only three gestures: circles, straight lines, and taps. Once you have drawn those three gestures, you are asked to confirm. Draw the same gestures one more time. If everything goes well, you are informed that you have created your picture password and that you can use it the next time you sign in to Windows. If you don’t confirm the gestures correctly, you will be asked to try again, until you draw the same gestures twice. To close the picture password wizard, press “Finish”. Where Does Windows Store Your Passwords? Are They Safe? All the passwords that you enter in Windows and save for future use are stored in the Credential Manager. This tool is a vault with the usernames and passwords that you use to log on to your computer, to other computers on the network, to apps from the Windows Store, or to websites using Internet Explorer. By storing these credentials, Windows can automatically log you the next time you access the same app, network share, or website. Everything that is stored in the Credential Manager is encrypted for your protection.

    Read the article

  • Spring security problem, Error creating bean with name 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping'

    - by benaissa
    Hello; I'm developping a web application with spring mvc, i started by developping the web application after i'm trying to add spring security; but i have this message, and i don't find a solution, thanks 16-04-2010 12:10:22:296 6062 ERROR org.springframework.web.servlet.DispatcherServlet - Context initialization failed org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping': Initialization of bean failed; nested exception is java.lang.NoClassDefFoundError: org/springframework/beans/factory/generic/GenericBeanFactoryAccessor at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:527) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:286) at org.springframework.web.servlet.DispatcherServlet.createDefaultStrategy(DispatcherServlet.java:770) at org.springframework.web.servlet.DispatcherServlet.getDefaultStrategies(DispatcherServlet.java:737) at org.springframework.web.servlet.DispatcherServlet.initHandlerMappings(DispatcherServlet.java:518) at org.springframework.web.servlet.DispatcherServlet.initStrategies(DispatcherServlet.java:410) at org.springframework.web.servlet.DispatcherServlet.onRefresh(DispatcherServlet.java:398) at org.springframework.web.servlet.FrameworkServlet.onApplicationEvent(FrameworkServlet.java:474) at org.springframework.context.event.GenericApplicationListenerAdapter.onApplicationEvent(GenericApplicationListenerAdapter.java:51) at org.springframework.context.event.SourceFilteringListener.onApplicationEventInternal(SourceFilteringListener.java:97) at org.springframework.context.event.SourceFilteringListener.onApplicationEvent(SourceFilteringListener.java:68) at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:97) at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:301) at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:888) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:426) at org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:402) at org.springframework.web.servlet.FrameworkServlet.initWebApplicationContext(FrameworkServlet.java:316) at org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:282) at org.springframework.web.servlet.HttpServletBean.init(HttpServletBean.java:126) at javax.servlet.GenericServlet.init(GenericServlet.java:212) at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1173) at org.apache.catalina.core.StandardWrapper.allocate(StandardWrapper.java:809) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:129) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:619) Caused by: java.lang.NoClassDefFoundError: org/springframework/beans/factory/generic/GenericBeanFactoryAccessor at org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping.determineUrlsForHandler(DefaultAnnotationHandlerMapping.java:113) at org.springframework.web.servlet.handler.AbstractDetectingUrlHandlerMapping.detectHandlers(AbstractDetectingUrlHandlerMapping.java:79) at org.springframework.web.servlet.handler.AbstractDetectingUrlHandlerMapping.initApplicationContext(AbstractDetectingUrlHandlerMapping.java:57) at org.springframework.context.support.ApplicationObjectSupport.initApplicationContext(ApplicationObjectSupport.java:119) at org.springframework.web.context.support.WebApplicationObjectSupport.initApplicationContext(WebApplicationObjectSupport.java:69) at org.springframework.context.support.ApplicationObjectSupport.setApplicationContext(ApplicationObjectSupport.java:73) at org.springframework.context.support.ApplicationContextAwareProcessor.invokeAwareInterfaces(ApplicationContextAwareProcessor.java:99) at org.springframework.context.support.ApplicationContextAwareProcessor.postProcessBeforeInitialization(ApplicationContextAwareProcessor.java:82) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:394) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1405) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) ... 32 more Caused by: java.lang.ClassNotFoundException: org.springframework.beans.factory.generic.GenericBeanFactoryAccessor at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1516) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1361) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320) ... 43 more

    Read the article

  • How to protect UI components using OPSS Resource Permissions

    - by frank.nimphius
    ADF security protects ADF bound pages, bounded task flows and ADF Business Components entities with framework specific JAAS permissions classes (RegionPermission, TaskFlowPermission and EntityPermission). If used in combination with the ADF security expression language and security checks performed in Java, this protection already provides you with fine grained access control that can also be used to secure UI components like buttons and input text fields. For example, the EL shown below disables the user profile panel tabs for unauthenticated users: <af:panelTabbed id="pt1" position="above"> ... <af:showDetailItem text="User Profile" id="sdi2" disabled="#{!securityContext.authenticated}"> </af:showDetailItem> ... </af:panelTabbed> The next example disables a panel tab item if the authenticated user is not granted access to the bounded task flow exposed in a region on this tab: <af:panelTabbed id="pt1" position="above"> ... <af:showDetailItem text="Employees Overview" id="sdi4" disabled="#{!securityContext.taskflowViewable ['/WEB-INF/EmployeeUpdateFlow.xml#EmployeeUpdateFlow']}"> </af:showDetailItem> ... </af:panelTabbed> Security expressions like those shown above allow developers to check the user permission, authentication and role membership status before showing UI components. Similarly, using Java, developers can use code like that shown below to verify the user authentication status: ADFContext adfContext = ADFContext.getCurrent(); SecurityContext securityCtx = adfContext.getSecurityContext(); boolean userAuthenticated = securityCtx.isAuthenticated(); Note that the Java code lines use the same security context reference that is used with expression language. But is this all that there is? No! The goal of ADF Security is to enable all ADF developers to build secure web applications with JAAS (Java Authentication and Authorization Service). For this, more fine grained protection can be defined using the ResourcePermission, a generic JAAS permission class owned by the Oracle Platform Security Services (OPSS).
    Using the ResourcePermission class, developers can grant permission to functional parts of an application that are not protected by page or task flow security. For example, an application menu allows creating and canceling product shipments to customers. However, only a specific user group - or application role, which is the better way to use ADF Security - is allowed to cancel a shipment. To enforce this rule, a permission is needed that can be used declaratively on the UI to hide a menu entry and programmatically in Java to check the user permission before the action is performed. Note that multiple lines of defense are what you should implement in your application development. Don't just rely on UI protection through hidden or disabled command options. To create a menu protection permission for an ADF Security enabled application, you choose Application | Secure | Resource Grants from the Oracle JDeveloper menu. The opened editor shows a visual representation of the jazn-data.xml file that is used at design time to define security policies and user identities for testing. An option in the Resource Grants section is to create a new Resource Type. A list of pre-defined types exists for you to create policy definitions for. Many of these pre-defined types use the ResourcePermission class. To create a custom Resource Type, for example to protect application menu functions, you click the green plus icon next to the Resource Type select list. The Create Resource Type editor that opens allows you to add a name for the resource type, a display name that is shown when granting resource permissions and a description. The ResourcePermission class name is already set. In the menu protection sample, you add the following information: Name: MenuProtection Display Name: Menu Protection Description: Permission to grant menu item permissions OK the dialog to close the resource permission creation. To create a resource policy that can be used to check user permissions at runtime, click the green plus icon in the Resources section of the Resource Grants section. In the Create Resource dialog, provide a name for the menu option you want to protect. To protect the cancel shipment menu option, create a resource with the following settings: Resource Type: Menu Protection Name: Cancel Shipment Display Name: Cancel Shipment Description: Grant allows user to cancel customer good shipment A new resource, Cancel Shipment, is added to the Resources panel. Initially the resource is not granted to any user, enterprise or application role. To grant the resource, click the green plus icon in the Granted To section, select the Add Application Role option and choose one or more application roles in the opened dialog. Finally, you click the process action to define the policy. Note that a permission can have multiple actions that you can grant individually to users and roles. The cancel shipment permission for example could have another action "view" defined to determine which users should see that this option exists and which users don't. To use the cancel shipment permission, select the disabled property on a command item, like af:commandMenuItem, and click the arrow icon on the right. From the context menu, choose the Expression Builder entry. Expand the ADF Bindings | securityContext node and click the userGrantedResource option. Hint: You can expand the Description panel below the EL selection panel to see an example of what the grant should look like. 
    The EL that is created needs to be manually edited to show as #{!securityContext.userGrantedResource['resourceName=Cancel Shipment;resourceType=MenuProtection;action=process']} OK the dialog so the permission checking EL is added as a value to the disabled property. Running the application and expanding the Shipment menu shows the Cancel Shipments menu item disabled for all users that don't have the custom menu protection resource permission granted. Note: Following the steps listed above, you create a JAAS permission and declaratively configure it for function security in an ADF application. Do you need to understand JAAS for this? No! This is one of the benefits that you gain from using the ADF development framework. To implement multiple lines of defense for your application, the action performed when clicking the enabled "Cancel Shipments" option should also check whether the authenticated user is allowed to process it. For this, code as shown below can be used in a managed bean: public void onCancelShipment(ActionEvent actionEvent) { SecurityContext securityCtx = ADFContext.getCurrent().getSecurityContext(); //create instance of ResourcePermission(String type, String name, //String action) ResourcePermission resourcePermission = new ResourcePermission("MenuProtection","Cancel Shipment", "process"); boolean userHasPermission = securityCtx.hasPermission(resourcePermission); if (userHasPermission){ //execute privileged logic here } } Note: To learn more about ADF Security, visit http://download.oracle.com/docs/cd/E17904_01/web.1111/b31974/adding_security.htm#BGBGJEAH Note: A monthly summary of OTN Harvest blog postings can be downloaded from ADF Code Corner. The monthly summary is a PDF document that contains supporting screen shots for some of the postings: http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html

    Read the article

  • WCF security via message headers

    - by exalted
    I'm trying to implement "some sort of" server-client & zero-config security for some WCF service. The best (as well as easiest to me) solution that I found on www is the one described at http://www.dotnetjack.com/post/Automate-passing-valuable-information-in-WCF-headers.aspx (client-side) and http://www.dotnetjack.com/post/Processing-custom-WCF-header-values-at-server-side.aspx (corrisponding server-side). Below is my implementation for RequestAuth (descibed in the first link above): using System; using System.Diagnostics; using System.ServiceModel; using System.ServiceModel.Configuration; using System.ServiceModel.Dispatcher; using System.ServiceModel.Description; using System.ServiceModel.Channels; namespace AuthLibrary { /// <summary> /// Ref: http://www.dotnetjack.com/post/Automate-passing-valuable-information-in-WCF-headers.aspx /// </summary> public class RequestAuth : BehaviorExtensionElement, IClientMessageInspector, IEndpointBehavior { [DebuggerBrowsable(DebuggerBrowsableState.Never)] private string headerName = "AuthKey"; [DebuggerBrowsable(DebuggerBrowsableState.Never)] private string headerNamespace = "http://some.url"; public override Type BehaviorType { get { return typeof(RequestAuth); } } protected override object CreateBehavior() { return new RequestAuth(); } #region IClientMessageInspector Members // Keeping in mind that I am SENDING something to the server, // I only need to implement the BeforeSendRequest method public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState) { throw new NotImplementedException(); } public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel) { MessageHeader<string> header = new MessageHeader<string>(); header.Actor = "Anyone"; header.Content = "TopSecretKey"; //Creating an untyped header to add to the WCF context MessageHeader unTypedHeader = header.GetUntypedHeader(headerName, headerNamespace); //Add the header to the current request request.Headers.Add(unTypedHeader); return null; } #endregion #region IEndpointBehavior Members public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters) { throw new NotImplementedException(); } public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime) { clientRuntime.MessageInspectors.Add(this); } public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { throw new NotImplementedException(); } public void Validate(ServiceEndpoint endpoint) { throw new NotImplementedException(); } #endregion } } So first I put this code in my client WinForms application, but then I had problems signing it, because I had to sign also all third-party references eventhough http://msdn.microsoft.com/en-us/library/h4fa028b(v=VS.80).aspx at section "What Should Not Be Strong-Named" states: In general, you should avoid strong-naming application EXE assemblies. A strongly named application or component cannot reference a weak-named component, so strong-naming an EXE prevents the EXE from referencing weak-named DLLs that are deployed with the application. For this reason, the Visual Studio project system does not strong-name application EXEs. Instead, it strong-names the Application manifest, which internally points to the weak-named application EXE. 
I expected VS to avoid this problem, but I had no luck there, it complained about all the unsigned references, so I created a separate "WCF Service Library" project inside my solution containing only code above and signed that one. At this point entire solution compiled just okay. And here's my problem: When I fired up "WCF Service Configuration Editor" I was able to add new behavior element extension (say "AuthExtension"), but then when I tried to add that extension to my end point behavior it gives me: Exception has been thrown by the target of an invocation. So I'm stuck here. Any ideas?

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo Introduction to Toorcon 15 (2013) A Tale of One Software Bypass of MS Windows 8 Secure Boot Breaching SSL, One Byte at a Time Running at 99%: Surviving an Application DoS Security Response in the Age of Mass Customized Attacks x86 Rewriting: Defeating RoP and other Shinanighans Clowntown Express: interesting bugs and running a bug bounty program Active Fingerprinting of Encrypted VPNs Making Attacks Go Backwards Mask Your Checksums—The Gorry Details Adventures with weird machines thirty years after "Reflections on Trusting Trust" Introduction to Toorcon 15 (2013) Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and interested me enough to write about them. Be aware that I may have misrepresented the speaker's remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information. A Tale of One Software Bypass of MS Windows 8 Secure Boot Andrew Furtak and Oleksandr Bazhaniuk Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak Yuri and Alex talked about UEFI and Bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference. MS Windows 8 Secure Boot Overview UEFI (Unified Extensible Firmware Interface) is interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace bootloader (bootx64.efi, bootmgfw.efi). Once replaced can modify kernel. Trivial to replace bootloader. Today many legacy bootkits—UEFI replaces them most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed update). You would think Secure Boot would rely on ROM (such as used for phones0, but you can't do that for PCs—PCs use writable memory with signatures DXE core verifies the UEFI boat loader(s) OS Loader (winload.efi, winresume.efi) verifies the OS kernel A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db), and "forbidden" database (dbx). X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (can't be accessed once OS starts—which uses Run-Time (RT)). Root cert uses RSA-2048 public keys and PKCS#7 format signatures. SecureBoot — enable disable image signature checks SetupMode — update keys, self-signed keys, and secure boot variables CustomMode — allows updating keys Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation Attacking MS Windows 8 Secure Boot Secure Boot does NOT protect from physical access. Can disable from console. Each BIOS vendor implements Secure Boot differently. There are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly. 
Attacking MS Windows 8 Secure Boot

Secure Boot does NOT protect from physical access: it can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors, so it becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly, which means they must:
- allow only signed UEFI firmware updates
- protect UEFI firmware from direct modification in flash memory
- protect firmware update components
- program the SPI controller securely
- protect Secure Boot policy settings in NVRAM
- protect the runtime API
- disable the Compatibility Support Module, which allows unsigned legacy boot

An attacker can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If the PK is not found, the firmware enters setup mode with Secure Boot turned off. The TPM can also be exploited in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, though. But they found a bug that they can exploit from user mode (undisclosed) and demoed the exploit: it loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable, and they will disclose this exploit to vendors in the future.

Recommendations: allow only signed updates, protect UEFI firmware in ROM, protect the EFI variable store in ROM.

Breaching SSL, One Byte at a Time

Angelo Prado and Yoel Gluck, Salesforce.com

CRIME
- CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME sends requests with every possible character and measures the ciphertext length, looking for the plaintext that compresses the most, and recovers the cookie one byte at a time.
- SSL compression uses LZ77 to reduce redundancy; Huffman coding replaces common byte sequences with shorter codes.
- US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed, and a CVE was issued.

BREACH (breachattack.com)
- BREACH exploits the SSL response body (Accept-Encoding, Content-Encoding headers). It takes advantage of the fact that, even with SSL-level compression off, the HTTP response body is still compressed. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form, or added to a URL parameter).
- BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret.
- Compression can be used to guess contents one byte at a time (a toy demonstration appears at the end of this section). For example, "Supersecret SupersecreX" (a wrong guess) compresses by 10 bytes, while "Supersecret Supersecret" (a correct guess) compresses by 11 bytes, so each character can be found by guessing every character. To start the guessing, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information.
- Some roadblocks are no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). Solutions include: lookahead (guess 2 or 3 characters at a time instead of 1—expensive); rollback to the last known conflict; checking the compression ratio; brute-forcing the first 3 "bootstrap" characters if needed (expensive). Block ciphers hide the exact plaintext length; the solution is to align the response in advance to the block size.
- Mitigations: for length, use variable padding; for secrets, use dynamic CSRF tokens per request, change secrets over time, and separate secrets to input-less servlets.
- Future work: either understand DEFLATE/GZIP better, or HTTPS extensions.
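The byte-at-a-time guessing is easy to reproduce with nothing more than zlib. The sketch below is a toy compression-length oracle in the spirit of CRIME/BREACH, not the speakers' code: the page layout, token name, and token value are invented, it guesses two characters at a time (the "lookahead" trick mentioned above), and it forces fixed Huffman coding so the toy stays deterministic—real attacks must cope with the "no winners / too many winners" roadblocks.

    # Toy compression-length oracle: the guess that matches the secret lets the
    # LZ77 back-reference extend further, so the page compresses smaller.
    import string
    import zlib

    SECRET = "q7Zt4K"             # hypothetical CSRF token value
    PREFIX = "csrftoken="         # attacker-known bootstrap characters

    def compressed_len(reflected: str) -> int:
        # Attacker-controlled text is reflected into a page that also carries
        # the secret, and the whole page is DEFLATE-compressed.
        body = ("<html><p>Hello, your profile follows.</p>"
                + reflected
                + "<p>filler text between reflection and form</p>"
                + PREFIX + SECRET + "</html>")
        z = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS, 9, zlib.Z_FIXED)
        return len(z.compress(body.encode()) + z.flush())

    recovered = ""
    alphabet = string.ascii_letters + string.digits
    while len(recovered) < len(SECRET):
        # Try all two-character extensions; the correct pair absorbs one more
        # literal into the back-reference and compresses strictly better.
        best = min((a + b for a in alphabet for b in alphabet),
                   key=lambda pair: compressed_len(PREFIX + recovered + pair))
        recovered += best
    print(recovered)              # prints q7Zt4K in this toy setup

Guessing in pairs and fixing the Huffman tables keeps each correct guess at least a full byte smaller than every wrong one in this toy; against a real server the attacker only sees ciphertext lengths and has to resolve ties with the tricks listed above.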
Running at 99%: Surviving an Application DoS

Ryan Huber, Risk I/O

Ryan first discussed various ways to mount a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or to download large files. Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use an Apache alternative such as nginx.

How to identify malicious hosts (a small sketch of a couple of these heuristics appears at the end of this section):
- short, sudden bursts of web requests
- an obvious user-agent (curl, python)
- the same URL requested repeatedly
- no web page referer (not normal)
- hidden links: hide a link and see if a bot follows it
- restricted access if not your geo IP (unless the website is global)
- missing common headers in the request
- regular timing
- IP first seen at the beginning of the attack
- count of requests per host (usually a very large number)

Use of a captcha can mitigate attacks, but you'll lose a lot of genuine users.

Bouncer (goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer)
- Bouncer is software written by Ryan that works from netflow data. It has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it's not nice about it—no proper connection close).
- An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers so Bouncer gets clean data.
- Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms.
- Future features: gzip collection data, documentation, consumer library, multitask, logging destroyed connections.

Takeaways: DoS mitigation is easier with a complete picture. Bouncer is designed to make it easier to detect and defend against DoS—it is not a complete cure.
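As a concrete illustration of the host-identification heuristics above, here is a minimal sketch that scores parsed access-log records by request rate and missing headers. It is not Bouncer (which works from netflow and drives the web proxies); the field names and thresholds are invented.

    # Minimal sketch of two heuristics: request rate per client in a sliding
    # window, and missing/obvious User-Agent and Referer headers.
    from collections import defaultdict, deque

    WINDOW_SECS = 10
    MAX_REQUESTS_IN_WINDOW = 100
    SUSPECT_AGENTS = ("curl", "python", "wget")

    recent = defaultdict(deque)   # client IP -> timestamps of recent requests

    def classify(ip: str, ts: float, user_agent: str, referer: str) -> str:
        q = recent[ip]
        q.append(ts)
        while q and ts - q[0] > WINDOW_SECS:     # keep only the sliding window
            q.popleft()
        score = 0
        if len(q) > MAX_REQUESTS_IN_WINDOW:      # short, sudden burst of requests
            score += 2
        if not user_agent or user_agent.lower().startswith(SUSPECT_AGENTS):
            score += 1                           # obvious tool user-agent
        if not referer:
            score += 1                           # no referer is unusual for browsers
        return "blacklist" if score >= 3 else ("watch" if score == 2 else "ok")

    # Hypothetical usage with one parsed log record:
    print(classify("198.51.100.7", 1700000000.0, "python-requests/2.31", ""))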
Security Response in the Age of Mass Customized Attacks

Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/

Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School: differentiating a product for an individual customer, but at a mass-production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world, or you can design your own PC from commodity parts.

Exploit kits are another example of mass customization. The kits support multiple browsers and plugins and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits.

Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now, so the drive with malware is towards mass-customized exploits, to avoid detection.

There's plenty of evidence that exploit development has Project Manager bureaucracy. From the malware they infer edicts to:
- support all versions of Reader
- support all versions of Windows
- support all versions of Flash
- support all browsers
- write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example)

Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software, OSes, browsers, and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs. However, these complete exploits are more likely to be buggy or fragile in themselves, and easier to defeat. Future research includes normalizing malware and JavaScript.

Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

x86 Rewriting: Defeating RoP and other Shinanighans

Richard Wartell

The attack vector being addressed here: first, some malware causes a buffer overflow. The malware has no program access, only input access, and it overflows code onto the stack. Later the stack became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways.

"RoP" is Return-oriented Programming. RoP attacks use your own code: they write return addresses on the stack that point to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR randomizes the base of the address space, but not the "gadgets" within it. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine.

Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code; the STIR disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory: STIR writes new code sections with copies of the basic blocks in randomized locations, the old code is rewritten with jumps to the new code, and the original code sections in the file are marked non-executable. (A toy illustration of the block-relocation idea follows below.)

STIR has better entropy than ASLR in the location of code, which makes brute-force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of "gadgets" (i.e., moved their addresses). Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies—for example, don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.
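The block-relocation step is easy to picture with a toy. The sketch below is only an illustration of the idea on a made-up mini-ISA—it is not STIR, does not touch real x86, and skips everything hard (conservative disassembly, pointer fix-ups, patching the old section): it just shuffles labeled basic blocks and turns every fall-through edge into an explicit jump so the shuffled "code" still executes in the same order.

    # Toy basic-block randomizer on an invented mini-ISA (illustration only).
    import random

    # Each block: label -> (instructions, fall-through successor or None).
    blocks = {
        "entry":     (["load r1, counter", "cmp r1, 0"], "loop"),
        "loop":      (["dec r1", "jnz loop_body"], "exit"),
        "loop_body": (["call work", "jmp loop"], None),   # already ends in a jump
        "exit":      (["ret"], None),
    }

    def randomize(blocks: dict) -> list:
        order = list(blocks)
        random.shuffle(order)                     # blocks land at random "addresses"
        layout = []
        for label in order:
            instrs, succ = blocks[label]
            instrs = list(instrs)
            if succ is not None:                  # fall-through no longer adjacent:
                instrs.append(f"jmp {succ}")      # make the edge explicit
            layout.append((label, instrs))
        return layout

    for label, instrs in randomize(blocks):
        print(f"{label}:")
        for i in instrs:
            print(f"    {i}")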
Clowntown Express: interesting bugs and running a bug bounty program

Collin Greene, Facebook

Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing security on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). FB uses security questions, as does everyone else, but they are basically insecure (often easily discoverable).

Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring in people with different perspectives, and are paid only for success. A bug bounty is a better use of a fixed amount of time and money than just code review or static code analysis. The bounty program started in July 2011 and has paid out $1.5 million to date. 14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else)—the top-paid submitters make six figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired.

Bug bounties also let you see bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

Active Fingerprinting of Encrypted VPNs

Anna Shubina, Dartmouth Institute for Security, Technology, and Society

(I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.)
- IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC).
- One can tell a lot about VPNs just from ping round trips (such as what router is used).
- Delayed packets are not informative about a network, especially if you are far away from it.
- More exploration is needed of how TCP behaves in real life with respect to timing.

Making Attacks Go Backwards

FuzzyNop, Mandiant

This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels, and there are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first:
- "hiding" malware in the temp, recycle, or root directories
- creation of unnamed scheduled tasks
- obvious names of files and syscalls ("ClearEventLog")
- uncleared event logs. Clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system.

Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly.

Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]", "[RightShift]"), as do timestamp format strings. Look for these in files (a small sketch follows at the end of this section). Presence is not proof they are used; absence is not proof they are not used.

Java exploits: you can parse a jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies.

Backdoors are the main persistence mechanism (provided externally) for malware. Malware also typically needs command and control.
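A tiny sketch of that "look for the tell-tales" tip follows. The marker strings are only examples of the kind of key-name and timestamp strings mentioned above; as the speaker noted, a hit is a clue, not proof.

    # Scan files given on the command line for key-logger tell-tale strings.
    import sys
    from pathlib import Path

    MARKERS = [b"[Ctrl]", b"[RightShift]", b"[Backspace]", b"%Y-%m-%d %H:%M"]

    def suspicious_markers(path: Path) -> list:
        data = path.read_bytes()
        return [m.decode() for m in MARKERS if m in data]

    # Example: python scan.py suspect.exe helper.dll   (hypothetical file names)
    for name in sys.argv[1:]:
        hits = suspicious_markers(Path(name))
        if hits:
            print(f"{name}: {', '.join(hits)}")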
Application of Artificial Intelligence in Ad-Hoc Static Code Analysis

John Ashaman, Security Innovation

Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but that fails to find anything even mildly complex. So John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, making a call graph is really hard—one problem, for example, is "evil" coding techniques such as passing function pointers.

First the tool generates an Abstract Syntax Tree (AST), with nodes created from method declarations and edges created from method use. Then the tool generates a control flow graph, with the goal of finding a path through the AST (a maze) from source to sink. The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts in the Security Innovation blog and NRefactory on GitHub.

Mask Your Checksums—The Gorry Details

Eric (XlogicX) Davisson

Sometimes when emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care.

If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data); a sketch of this appears at the end of this section. Other parts of the packet may leak more data about the IP. The TCP and IP checksums both cover the same data (including the IP addresses), so you can get more bits of information out of using both checksums than just one. Also, one can usually determine the OS from the TTL field and the ports in a packet header.

If you get hundreds of possible results (16x for each masked nibble that is unknown), you can do other things to narrow them down, such as looking at the packet contents for domain or geo information. With hundreds of results, you can import them as CSV into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached; he was able to find the exact IP address, given the geo data and university of the sender.

The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, so as not to create a false impression of security.
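To see how the leak works, here is a small sketch with an invented IPv4 header (not from the talk): it computes the RFC 1071 header checksum, masks the last octet of the source address while leaving the original checksum in place, and then shows that only one value of the masked octet is consistent with that checksum.

    # Masking an IP without fixing the checksum leaks the masked bits.
    import struct

    def ipv4_checksum(header: bytes) -> int:
        # Ones'-complement sum of 16-bit words, checksum field (bytes 10-11) zeroed.
        words = struct.unpack("!10H", header[:10] + b"\x00\x00" + header[12:20])
        total = sum(words)
        while total > 0xFFFF:
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    def make_header(src: str, dst: str) -> bytes:
        # Invented example header: IPv4, no options, TTL 64, protocol TCP.
        hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0x1234, 0, 64, 6, 0,
                          bytes(map(int, src.split("."))),
                          bytes(map(int, dst.split("."))))
        return hdr[:10] + struct.pack("!H", ipv4_checksum(hdr)) + hdr[12:]

    original = make_header("203.0.113.77", "192.0.2.10")
    orig_sum = struct.unpack("!H", original[10:12])[0]

    # Mask only the last octet of the source address, keep the old checksum:
    masked = original[:15] + b"\x00" + original[16:]
    candidates = [o for o in range(256)
                  if ipv4_checksum(masked[:15] + bytes([o]) + masked[16:]) == orig_sum]
    print(candidates)   # [77] -- the checksum pins down the "masked" octet

    # Doing it right: recompute (or also mask) the checksum for the sanitized address.
    sanitized = masked[:10] + struct.pack("!H", ipv4_checksum(masked)) + masked[12:]

The TCP checksum would constrain the masked bytes the same way, which is why the talk stresses fixing (or masking) both.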
Adventures with weird machines thirty years after "Reflections on Trusting Trust"

Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)

"Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs", bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author—can you trust the code? Hell no! There are too many factors; it's Babylonian in nature. Why not?
- Input is not well defined/recognized: the code's assumptions about "checked" input will be violated (bug/vulnerability). For example, HTML is recursive, but regex checking is not recursive.
- Input can be well formed but so complex there's no telling what it does. For example, ELF file parsing is complex and has multiple ways of parsing.
- Input is seen differently by different pieces of a program or toolchain.

Any input is a program: input executes on input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler is either a "recognizer" for the inputs as a well-defined language (see langsec.org) or a "virtual machine" for inputs to drive into pwn-age.

ELF ABI (UNIX/Linux executable file format) case study: problems can arise from these steps (without planting bugs): compiler, linker, loader, ld.so/rtld relocator, DWARF (debugger info), exceptions. The problem is that you can't really automatically analyze code (it's the "halting problem" and undecidable). The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—you must have tables and metadata.

Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file; for more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. DWARF exception handling data works too: .eh_frame + glibc == Turing machine. The x86 MMU (IDT, GDT, TSS) can also be abused: address translation was used to create a Turing machine—the page handler reads and writes memory (on page fault) and uses a page table, which can serve as Turing machine byte code. There's an example on GitHub using this TM that will fly a glider across the screen.

Next Sergey talked about "parser differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF—there are several parsers in the OS toolchain, which are all different. You can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs; the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.

Conclusions:
- Trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it.
- Complex data formats mean bugs.
- There is no "chain of trust" in Babylon! (that is, with parser differentials)
- We need to squeeze complexity out of data until data stops being "code equivalent".

Further information: see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.

