Search Results

Search found 15441 results on 618 pages for 'ssl security'.


  • Code Access Security - Basics and Example

    - by jobless-spt
    I was going through this link to understand Code Access Security: http://www.codeproject.com/KB/security/UB_CAS_NET.aspx It's a great article, but it left me with the following questions: If you can demand and get whatever permissions you want, then any executable can get Full_Trust on the machine. If the permissions are already there, then why do we need to demand them? The code is executing on the server, so aren't the permissions those of the server rather than of the client machine? The article uses the example of removing write permissions from an assembly to show a security exception, though in the real world System.IO (or the related classes) will take care of these permissions. So is there a real scenario where we will actually need CAS?
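
    A small illustration may help with the first question (this is my own sketch, not from the article): Demand does not grant a permission, it only checks that every caller up the stack has already been granted it, so partially trusted code cannot simply demand its way to Full_Trust. The file path below is a made-up placeholder.

        using System;
        using System.Security;
        using System.Security.Permissions;

        class CasDemandDemo
        {
            static void Main()
            {
                // Hypothetical path, used purely for illustration.
                var writePermission = new FileIOPermission(
                    FileIOPermissionAccess.Write, @"C:\temp\demo.txt");
                try
                {
                    // Demand walks the call stack: it succeeds only if every caller
                    // has already been granted this permission, otherwise it throws.
                    writePermission.Demand();
                    Console.WriteLine("Write permission is granted to all callers.");
                }
                catch (SecurityException)
                {
                    Console.WriteLine("A caller in the stack lacks write permission.");
                }
            }
        }

    Run with full trust this prints the first message; the same Demand throws when the code is loaded into a restricted AppDomain that was not granted FileIOPermission, which is the kind of real-world scenario (plugins, code loaded from a network share) where CAS still matters.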

    Read the article

  • WCF - Disabling security in nettcpbinding (c#)

    - by daniel-lacayo
    Hello everyone. I'm trying to make a self-hosted WCF app that uses netTcpBinding but works in an environment without a domain. It's just two regular Windows PCs: one is the server and the other will be the client. The problem is that when the client tries to connect, it is rejected because of the security settings. Can you please point me in the right direction as to how I can get this scenario to work? Should I (if possible) disable security? Is there another (hopefully simple) way to accomplish this? Regards, Daniel
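
    When the two machines are not in a domain, the default netTcpBinding security (transport security with Windows credentials) will indeed reject the client. A minimal sketch of the "disable security" route is below; ICalculator, CalculatorService and the address are made-up placeholders, and SecurityMode.None should only be used on a trusted network.

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface ICalculator
        {
            [OperationContract]
            int Add(int a, int b);
        }

        public class CalculatorService : ICalculator
        {
            public int Add(int a, int b) { return a + b; }
        }

        class Host
        {
            static void Main()
            {
                // No transport or message security, so no Windows/domain credentials are required.
                var binding = new NetTcpBinding(SecurityMode.None);

                using (var host = new ServiceHost(typeof(CalculatorService)))
                {
                    host.AddServiceEndpoint(typeof(ICalculator), binding,
                        "net.tcp://localhost:8000/calc");
                    host.Open();
                    Console.WriteLine("Service running. Press Enter to stop.");
                    Console.ReadLine();
                }
            }
        }

    The client has to use the same binding, e.g. new ChannelFactory<ICalculator>(new NetTcpBinding(SecurityMode.None), new EndpointAddress("net.tcp://server:8000/calc")). If you would rather keep encryption, another direction to explore is message security with certificate credentials instead of Windows credentials.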

    Read the article

  • Homepage 301 Redirect to SSL Homepage

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have one site where we implemented a 301 redirect on the homepage from http to https. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our webmaster account I notice that we are not being provided with any webmaster information (search queries, backlinks) related to our homepage under SSL. I performed a Fetch as Google on the homepage and this is the information it returned:

        HTTP/1.1 301 Moved Permanently
        Date: Fri, 08 Nov 2013 17:26:24 GMT
        Server: Apache/2.2.16 (Debian)
        Location: https://mysite.com/
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 242
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="https://mysite.com/">here</a>.</p>
        <hr>
        <address>Apache/2.2.16 (Debian) Server at mysite.com</address>
        </body></html>

    I am worried that Google Fetch is not getting the correct title tags and meta information from our homepage and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google correctly indexes all our pages and can move between the https and http pages without issues. Does anybody have any advice on how we can correctly set this up or be sure that Google is fetching the correct information?

    Read the article

  • Setting SQL Server security rights for multiple situations

    - by DanDan
    We have an application which uses a local instance of SQL Server for its backend storage. The administrator Windows login has had its sysadmin right revoked, and instead two SQL logins have been created: one for the application, with a secret password, and one read-only login we let users view the raw data with. This was working fine until we moved to FILESTREAM, which requires integrated Windows authentication, so now the SQL logins must be replaced. As a result, I am now reviewing all of our logins, but I am not sure how this is possible. It seems that the application needs full read/write access, yet I still need to lock down writing to the tables so that users cannot log in to the database and delete data at random. Does anyone have any tips for setting multiple levels of security using integrated Windows logins, or can you direct me to any further reading? Some answers can also be found on Server Fault: http://serverfault.com/questions/138763/setting-sql-server-security-rights-for-multiple-situations

    Read the article

  • Database security / scaling question

    - by orokusaki
    Typically I use a database such as MySQL or PostgreSQL on the same machine as the application using it, which makes access easy and secure. I'm just now building the first site that will have a separate physical database server (later this year it will). I'm wondering three things: (security) What should I look into, for starters, pertaining to the security of accessing a separate machine's database? (scalability) Are there scalability issues that I should think about pertaining to this (technology agnostic)? (more ServerFaultish, but related) If I start the DB out on the same physical server (using a separate VMware VM) and later move it to a different physical server, are there implicit problems that I'll have to deal with? Isn't another VM still accessed via localhost? If these questions are completely ludicrous, I apologize to you DB experts.

    Read the article

  • How can I use Spring Security without sessions?

    - by Jarrod
    I am building a web application with Spring Security that will live on Amazon EC2 and use Amazon's Elastic Load Balancers. Unfortunately, ELB does not support sticky sessions, so I need to ensure my application works properly without sessions. So far, I have set up RememberMeServices to assign a token via a cookie, and this works fine, but I want the cookie to expire with the browser session (i.e. when the browser closes). I have to imagine I'm not the first one to want to use Spring Security without sessions... any suggestions?

    Read the article

  • System.Security.Permissions.SecurityPermission and Reflection on Godaddy

    - by David Murdoch
    I have the following method:

        public static UserControl LoadControl(string UserControlPath, params object[] constructorParameters)
        {
            var p = new Page();
            var ctl = p.LoadControl(UserControlPath) as UserControl;

            // Find the relevant constructor
            if (ctl != null)
            {
                ConstructorInfo constructor = ctl.GetType().BaseType.GetConstructor(
                    constructorParameters.Select(constParam =>
                        constParam == null ? "".GetType() : constParam.GetType()).ToArray());

                // And then call the relevant constructor
                if (constructor == null)
                {
                    throw new MemberAccessException(
                        "The requested constructor was not found on : " + ctl.GetType().BaseType.ToString());
                }
                constructor.Invoke(ctl, constructorParameters);
            }

            // Finally return the fully initialized UC
            return ctl;
        }

    Which when executed on a Godaddy shared host gives me: System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.
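
    Shared hosts like this one typically run ASP.NET at a reduced (medium) trust level, where invoking a constructor through reflection is not permitted. One reflection-free alternative, sketched below under the assumption that you control the user controls, is to pass the parameters through an ordinary initialization interface instead of re-invoking the constructor; IInitializableControl and InitializeWith are made-up names.

        using System.Web.UI;

        // Hypothetical contract implemented by each user control that needs parameters.
        public interface IInitializableControl
        {
            void InitializeWith(params object[] parameters);
        }

        public static class ControlLoader
        {
            public static UserControl LoadControlSafe(string userControlPath,
                                                      params object[] parameters)
            {
                var page = new Page();
                var ctl = page.LoadControl(userControlPath) as UserControl;

                // Hand over the parameters with a normal virtual call; no ConstructorInfo,
                // and no reflection permissions required.
                var initializable = ctl as IInitializableControl;
                if (initializable != null)
                {
                    initializable.InitializeWith(parameters);
                }
                return ctl;
            }
        }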

    Read the article

  • User account design and security...

    - by espinet
    Before I begin, I am using Ruby on Rails and the Devise gem for user authentication. I was doing some research about account security and found a blog post about the topic a while ago, but I can no longer find it. It said something like this: when making a login system you should have one model for User, which contains the user's username, encrypted password, and email; you should also have a model for the user's Account, which contains everything else. A User has an Account. I don't know if I'm explaining this correctly, since I haven't seen the blog post for several months and I lost my bookmark. Could someone explain how and why I should or shouldn't do this? My application deals with money, so I need to cover my bases with security. Thanks.

    Read the article

  • Data-related security Implementation

    - by devdude
    Using Shiro, we have a great security framework embedded in our enterprise application running on GF. You define users, roles, and permissions, and we can control at any fine-grained level whether a user can access the application, a certain page, or even click a specific button. Is there a recipe or pattern that, on top of that, allows restricting a user from seeing certain data? Sample: you have a customer table for 3 factories (part of one company). An admin user can see all customer records, but a user at a local factory must not see any customer data of the other factories (for whatever reason). The security feature should be part of the role definition. Thanks for any input and ideas.

    Read the article

  • Session ID Rotation - does it enhance security?

    - by dound
    (I think) I understand why session IDs should be rotated when the user logs in - this is one important step to prevent session fixation. However, is there any advantage to randomly/periodically rotating session IDs? In my opinion this seems to provide only a false sense of security. Assuming session IDs are not vulnerable to brute-force guessing and you only transmit the session ID in a cookie (not as part of URLs), then an attacker will have to access your cookie (most likely by snooping on your traffic) to get your session ID. Thus if the attacker gets one session ID, they'll probably be able to sniff the rotated session ID too - and so random rotation has not enhanced security.

    Read the article

  • .NET Security Part 4

    - by Simon Cooper
    Finally, in this series, I am going to cover some of the security issues that can trip you up when using sandboxed appdomains. DISCLAIMER: I am not a security expert, and this is by no means an exhaustive list. If you actually are writing security-critical code, then get a proper security audit of your code by a professional. The examples below are just illustrations of the sort of things that can go wrong.

    1. AppDomainSetup.ApplicationBase

    The most obvious one is the issue covered in the MSDN documentation on creating a sandbox, in step 3: the sandboxed appdomain has the same ApplicationBase as the controlling appdomain. So let's explore what happens when they are the same and an exception is thrown.

    In the sandboxed assembly, Sandboxed.dll (IPlugin is an interface in a partially-trusted assembly, with a single MethodToDoThings on it):

        public class UntrustedPlugin : MarshalByRefObject, IPlugin
        {
            // implements IPlugin.MethodToDoThings()
            public void MethodToDoThings()
            {
                throw new EvilException();
            }
        }

        [Serializable]
        internal class EvilException : Exception
        {
            public override string ToString()
            {
                // show we have read access to C:\Windows
                // read the first 5 directories
                Console.WriteLine("Pwned! Mwuahahah!");
                foreach (var d in Directory.EnumerateDirectories(@"C:\Windows").Take(5))
                {
                    Console.WriteLine(d);
                }
                return base.ToString();
            }
        }

    And in the controlling assembly:

        // what can possibly go wrong?
        AppDomainSetup appDomainSetup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase
        };

        // only grant permissions to execute
        // and to read the application base, nothing else
        PermissionSet restrictedPerms = new PermissionSet(PermissionState.None);
        restrictedPerms.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));
        restrictedPerms.AddPermission(
            new FileIOPermission(FileIOPermissionAccess.Read, appDomainSetup.ApplicationBase));
        restrictedPerms.AddPermission(
            new FileIOPermission(FileIOPermissionAccess.PathDiscovery, appDomainSetup.ApplicationBase));

        // create the sandbox
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, appDomainSetup, restrictedPerms);

        // execute UntrustedPlugin in the sandbox
        // don't crash the application if the sandbox throws an exception
        IPlugin o = (IPlugin)sandbox.CreateInstanceFromAndUnwrap("Sandboxed.dll", "UntrustedPlugin");
        try
        {
            o.MethodToDoThings();
        }
        catch (Exception e)
        {
            Console.WriteLine(e.ToString());
        }

    And the result? Oops. We've allowed a class that should be sandboxed to execute code with fully-trusted permissions! How did this happen? Well, the key is the exact meaning of the ApplicationBase property: the application base directory is where the assembly manager begins probing for assemblies. When EvilException is thrown, it propagates from the sandboxed appdomain into the controlling assembly's appdomain (as it's marked as Serializable). To deserialize the exception, the CLR has to load Sandboxed.dll, and since the controlling appdomain's ApplicationBase directory contains the sandboxed assembly, the CLR finds and loads it into the full-trust appdomain, and the evil code is executed. So the problem isn't exactly that the sandboxed appdomain's ApplicationBase is the same as the controlling appdomain's; it's that the sandboxed dll was in a place where the controlling appdomain could find it as part of the standard assembly resolution mechanism. The sandbox then forced the assembly to load in the controlling appdomain by throwing a serializable exception that propagated outside the sandbox.

    The easiest fix for this is to keep the sandbox ApplicationBase well away from the ApplicationBase of the controlling appdomain, and not to allow the sandbox permissions to access the controlling appdomain's ApplicationBase directory. If you do this, then the sandboxed assembly can't be accidentally loaded into the fully-trusted appdomain, and the code can't be executed. If the plugin does try to induce the controlling appdomain to load an assembly it shouldn't, a SerializationException will be thrown when it tries to load the assembly to deserialize the exception, and no damage will be done.

    2. Loading the sandboxed dll into the application appdomain

    As an extension of the previous point, you shouldn't directly reference types or methods in the sandboxed dll from your application code. That loads the assembly into the fully-trusted appdomain, and from there code in the assembly could be executed. Instead, pull out the methods you want the sandboxed dll to have into an interface or class in a partially-trusted assembly you control, and execute methods via that instead (similar to the example above with the IPlugin interface). If you need to have a look at the assembly before executing it in the sandbox, either examine the assembly using reflection from within the sandbox, or load the assembly into the reflection-only context in the application's appdomain. Code in assemblies in the reflection-only context can't be executed, only reflected upon, thus protecting your appdomain from malicious code.

    3. Incorrectly asserting permissions

    You should only assert permissions when you are absolutely sure they're safe. For example, this method allows a caller read access to any file they call this method with, including your documents, any network shares, the C:\Windows directory, etc.:

        [SecuritySafeCritical]
        public static string GetFileText(string filePath)
        {
            new FileIOPermission(FileIOPermissionAccess.Read, filePath).Assert();
            return File.ReadAllText(filePath);
        }

    Be careful when asserting permissions, and ensure you're not providing a loophole sandboxed dlls can use to gain access to things they shouldn't be able to.

    Conclusion

    Hopefully, that's given you an idea of some of the ways it's possible to get past the .NET security system. As I said before, this post is not exhaustive, and you certainly shouldn't base any security-critical applications on the contents of this blog post. What this series should help with is understanding the possibilities of the security system, and what all the security attributes and classes mean and what they are used for, if you were to use the security system in the future.
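
    As a concrete illustration of the fix described in section 1 (my own sketch, with illustrative paths and file names): give the sandbox its own ApplicationBase directory containing only the sandboxed dll, and scope the restricted permission set to that directory alone.

        using System;
        using System.IO;
        using System.Security;
        using System.Security.Permissions;

        class SandboxFactory
        {
            static AppDomain CreateSandbox(string hostBase)
            {
                // Keep the sandbox base well away from the controlling appdomain's base.
                string sandboxBase = Path.Combine(Path.GetTempPath(), "PluginSandbox");
                Directory.CreateDirectory(sandboxBase);
                File.Copy(Path.Combine(hostBase, "Sandboxed.dll"),
                          Path.Combine(sandboxBase, "Sandboxed.dll"), true);

                var setup = new AppDomainSetup { ApplicationBase = sandboxBase };

                // Execution plus read/path-discovery on the sandbox directory only.
                var perms = new PermissionSet(PermissionState.None);
                perms.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
                perms.AddPermission(new FileIOPermission(FileIOPermissionAccess.Read, sandboxBase));
                perms.AddPermission(new FileIOPermission(FileIOPermissionAccess.PathDiscovery, sandboxBase));

                return AppDomain.CreateDomain("Sandbox", null, setup, perms);
            }
        }

    With this layout, a serializable exception thrown from the sandbox cannot cause Sandboxed.dll to be loaded into the full-trust appdomain, because the controlling appdomain's probing path no longer contains it.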

    Read the article

  • Security precautions and techniques for a User-submitted Code Demo Area

    - by Jack W-H
    Hey folks. Maybe this isn't really feasible, but basically I've been developing a snippet-sharing website and I would like it to have a 'live demo area'. For example, you're browsing some snippets and click the Demo button. A new window pops up which executes the web code. I understand there are a gazillion security risks involved in doing this - XSS, malicious tags, nasty malware/drive-by downloads, pr0n, etc. The community would be able to flag submissions that are blatantly naughty, but obviously some would go undetected (and, in many cases, someone would have to fall victim to discover whatever nasty thing was submitted). So I need to know: what should I do, security-wise, to make sure that users can submit code, but that nothing malicious can be run - or executed offsite, etc.? For your information, my site is powered by PHP using CodeIgniter. Jack

    Read the article

  • Spring 3 - Custom Security

    - by Eqbal
    I am in the process of converting a legacy application from proprietary technology to a Spring-based web app, leaving the backend system as is. The login service is provided by the backend system through a function call that takes in some parameters (username, password, plus some others) and provides an output that includes the authorizations for the user and other properties like first name, last name, etc. What do I need to do to weave this into the Spring 3.0 security module? It looks like I need to provide a custom AuthenticationProvider implementation (is this where I call the backend function?). Do I also need custom User and UserDetailsService implementations, the latter with loadUserByUsername(String username)? Any pointers to good documentation for this? The reference that came with the download is okay, but doesn't help much in terms of implementing custom security.

    Read the article

  • TFS Security and Documents Folder

    - by pm_2
    I'm getting an issue with TFS where the Documents folder is marked with a red cross. As far as I can tell, this seems to be a security issue; however, I am set up as a project admin on the relevant projects. I've come to the conclusion that it's a security issue from running the TFS Project Admin tool (available here). When I run this, it tells me that I don't have sufficient access rights to open the project. I've checked, and I'm not included in any groups that are denied access. Please can anyone shed any light on why I may not have sufficient access to these projects?

    Read the article

  • Enable/Disable SSL JSSE in Weblogic Server 11g/12c

    - by Vijaya Moderator -Oracle
    Here is the flag to enable or disable JSSE: -Dweblogic.ssl.JSSEEnabled=true|false. Oracle recommends that you keep this value set to true. Starting with WLS 12c, even if the above option is set to false, it is ignored: the change is not persisted and the MBean value is left unmodified. Please also be aware of the following changes to the SSL implementation in the 12c version: 1. Certicom has been removed from WebLogic Server 12.1.1 and is no longer supported. 2. JSSE is the only SSL implementation that is supported in WebLogic Server 12.1.1. The following configuration changes have been made to be consistent with this support: the default for JSSEEnabled has been changed to true, and Oracle recommends that you keep this value set to true. If JSSEEnabled is set to false, it will be ignored; that is, the MBean value will not be changed either in memory or in the persisted config.xml file. WebLogic Server will continue to use JSSE, but will issue a warning. Please refer to the doc on OTN for more info: Oracle Fusion Middleware What's New in Oracle WebLogic Server - 12c Release 1 (12.1.1)

    Read the article

  • Why does 301 redirect work for http but not for https?

    - by Tom G
    Through my domain registrar I have set up a domain, essayme.co.uk, to automatically forward to https://google.com. If I go to http://essayme.co.uk it works as expected and redirects me to https://google.com. $curl -i http://essayme.co.uk HTTP/1.1 301 Moved Permanently Cache-Control: max-age=900 Content-Type: text/html Location: https://google.com Server: Microsoft-IIS/7.5 X-AspNet-Version: 4.0.30319 X-Powered-By: ASP.NET Date: Sat, 07 Jun 2014 11:14:16 GMT Content-Length: 0 Age: 0 Connection: keep-alive However, if I go to https://essayme.co.uk it just freezes and times out. $curl -i https://essayme.co.uk curl: (7) Failed connect to essayme.co.uk:443; Operation timed out What is happening in the second case? (and, if possible, how can I get the redirect to work for https?) Problem background/clarification: I don't have an SSL certificate for the essayme.co.uk domain above, but I do for my live domain (let's call it mywebsite.com), and I was seeing the exact same problem on this domain (hence why I'm trying to debug the problem). Unfortunately I can't experiment with the live domain (as it's live) and I would like to avoid having to buy a second certificate for essayme.co.uk just for debugging (unless absolutely necessary). The problem I was seeing: my live domain, mywebsite.com (not its real name), has a valid SSL certificate. Visiting https://www.mywebsite.com displayed the webpage as expected. I had set up forwarding (like in the question above) from the naked domain (mywebsite.com) to https://www.mywebsite.com) Visiting http://mywebsite.com redirected to https://www.mywebsite.com as expected. However, visiting https://mywebsite.com would freeze and time out (as in the question above). I also tried forwarding it to http://www.otherwebsite.com as an experiment (i.e. forwarding to another site that does not use SSL), but the result was the same: Visiting http://mywebsite.com redirected to http://www.otherwebsite.com as expected. Visiting https://mywebsite.com would freeze and time out again. So I set up essayme.co.uk as an experiment to try and understand why it doesn't work.

    Read the article

  • Spring security oauth2 provider to secure non-spring api

    - by user1241320
    I'm trying to set up an OAuth 2.0 provider that should "secure" our RESTful API using spring-security-oauth. Being a 'Spring fan', I thought it could be the quicker solution. The main point is that this RESTful thingie is not a Spring-based webapp. The boss says the OAuth provider should be a separate application, but I'm starting to doubt that (I got this impression by reading spring-security-oauth). I'm also new here, so I haven't really got my hands into this other (Jersey-powered) RESTful API (the core of our business). Any help/hint will be much appreciated.

    Read the article

  • GWT HTML widget security risks

    - by h2g2java
    In the GWT javadoc, we are advised: "If you only need a simple label (text, but not HTML), then the Label widget is more appropriate, as it disallows the use of HTML, which can lead to potential security issues if not used properly." I would like to be educated/reminded about these security vulnerabilities. It would be nice to have a description of the mechanisms behind those risks. Are the vulnerabilities equally serious on GAE vs. Amazon vs. my home Linux server? Are they equally serious across browser brands? Thank you.

    Read the article

  • Control Menu Items based on Privileges of Logged In User with spring security

    - by Nirmal
    Hi all. Based on this link I have incorporated the Spring Security core module into my Grails project. I am using the Requestmap concept, storing each role, user, and requestmap in the database. Now my requirement is to build the menu items based on the user's assigned roles. For example, if my "User" main menu has the following items: Dashboard, Import User, Manage User, and I have assigned the Dashboard and Import User roles to the user with the username "auditor", then only the following menu items should be displayed on the screen: User (main menu) - Dashboard (sub menu) - Import User (sub menu). I have explored the Spring Security ACL plugin for this, but it uses the domain classes to get it working. So I wanted to know a convenient way to do this. Thanks in advance.

    Read the article

  • Using OAuth along with spring security, grails

    - by GroovyUser
    I have a Grails app which runs on the Spring Security plugin, and it works with no problem. I wish I could give users a way to connect with Facebook and other social networking sites, so I decided to use the Spring Security OAuth plugin, and I have configured it. Now I want users to be able to access the app both via a normal local account and via OAuth authentication. More precisely, I have a controller action like this: @Secured(['IS_AUTHENTICATED_FULLY']) def test() { render "Home page!!!" } Now I want this action to be accessible with OAuth authentication too. Is it possible to do so?

    Read the article

  • Understanding CGI and SQL security from the ground up

    - by Steve
    This question is for learning purposes. Suppose I am writing a simple SQL admin console using CGI and Python. At http://something.com/admin, this admin console should allow me to modify a SQL database (i.e., create and modify tables, and create and modify records) using an ordinary form. In the least secure case, anybody can access http://something.com/admin and modify the database. You can password protect http://something.com/admin. But once you start using the admin console, information is still transmitted in plain text. So then you use HTTPS to secure the transmitted data. Questions: To describe to a learner, how would you incrementally add security to the least secure environment in order to make it most secure? How would you modify/augment my three (possibly erroneous) steps above? What basic tools in Python make your steps possible? Optional: Now that I understand the process, how do sophisticated libraries and frameworks inherently achieve this level of security?

    Read the article
