Search Results

Search found 13454 results on 539 pages for 'ws security'.

Page 25/539 | < Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >

  • Can you customize the Adobe Acrobat Reader "Security Warning"?

    - by akaphenom
    We need to insert a web beacon (I know, taboo) into Adobe PDFs to know when they are opened, as one of our clients is moving to a model of "giving" their documents away and following up with repeat viewers for subscriptions. It's not enough to be able to provide a download; they want to attach the PDF to an email and "blast" it to directed recipients (double-opt-in etc.). Adding the JavaScript to the PDF is easy enough (iText and the "openAction" event). However, the security box pops up and displays: "Security Warning" "Document is trying to connect to 'xxxx.yyy.com'. If you trust the site choose Allow. If you do not trust the site choose Block" [help] [allow] [block]. I don't think we need to completely overhaul the dialogue box, I just think we need to change the middle text to be more descriptive of why we are doing it. Of course our client would love us to remove this completely... Thank you in advance for any feedback you can provide, Todd
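
    For what it's worth, a rough sketch of that openAction step using iTextSharp (the C# port of iText) is below - the beacon URL and document id are made up, and as far as I know the wording of the warning dialog itself comes from Reader's own trust settings rather than from anything embedded in the PDF, so it is not something the document can rewrite:

        using System.IO;
        using iTextSharp.text.pdf;

        class BeaconStamper
        {
            // Stamps an existing PDF with a document-level "open action" that calls
            // out to a (hypothetical) tracking URL whenever the file is opened.
            static void AddBeacon(string inputPdf, string outputPdf)
            {
                PdfReader reader = new PdfReader(inputPdf);
                using (FileStream fs = new FileStream(outputPdf, FileMode.Create))
                {
                    PdfStamper stamper = new PdfStamper(reader, fs);
                    string js = "this.getURL('https://xxxx.yyy.com/beacon?doc=123', false);";
                    stamper.Writer.SetOpenAction(PdfAction.JavaScript(js, stamper.Writer));
                    stamper.Close();
                }
                reader.Close();
            }
        }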

    Read the article

  • WCF - Windows authentication - Security settings require Anonymous...

    - by Rashack
    Hi, I am struggling to get a WCF service running on IIS on our server. After deployment I end up with an error message:

        Security settings for this service require 'Anonymous' Authentication but it is not enabled for the IIS application that hosts this service.

    I want to use Windows authentication and thus have Anonymous access disabled. Also note that aspNetCompatibilityEnabled is set (if that makes any difference). Here's my web.config:

        <system.serviceModel>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
          <bindings>
            <webHttpBinding>
              <binding name="default">
                <security mode="TransportCredentialOnly">
                  <transport clientCredentialType="Windows" proxyCredentialType="Windows"/>
                </security>
              </binding>
            </webHttpBinding>
          </bindings>
          <behaviors>
            <endpointBehaviors>
              <behavior name="AspNetAjaxBehavior">
                <enableWebScript />
                <webHttp />
              </behavior>
            </endpointBehaviors>
            <serviceBehaviors>
              <behavior name="defaultServiceBehavior">
                <serviceMetadata httpGetEnabled="true" httpsGetEnabled="false" />
                <serviceDebug includeExceptionDetailInFaults="true" />
                <serviceAuthorization principalPermissionMode="UseWindowsGroups" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <services>
            <service name="xxx.Web.Services.RequestService" behaviorConfiguration="defaultServiceBehavior">
              <endpoint behaviorConfiguration="AspNetAjaxBehavior" binding="webHttpBinding"
                        contract="xxx.Web.Services.IRequestService" bindingConfiguration="default">
              </endpoint>
              <endpoint address="mex" binding="mexHttpBinding" name="mex" contract="IMetadataExchange"></endpoint>
            </service>
          </services>
        </system.serviceModel>

    I have searched all over the internet with no luck. Any clues are greatly appreciated.
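
    For reference, a minimal self-hosted sketch of the same webHttpBinding security settings expressed in code - useful for checking which credential types are actually in play outside IIS. The service and contract here are throwaway stand-ins for the poster's xxx.Web.Services types:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Description;
        using System.ServiceModel.Web;

        [ServiceContract]
        public interface IRequestService
        {
            [OperationContract, WebGet]
            string Ping();
        }

        public class RequestService : IRequestService
        {
            public string Ping() { return "pong"; }
        }

        class Program
        {
            static void Main()
            {
                // Equivalent of <security mode="TransportCredentialOnly"> with Windows credentials
                var binding = new WebHttpBinding(WebHttpSecurityMode.TransportCredentialOnly);
                binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Windows;

                var host = new ServiceHost(typeof(RequestService), new Uri("http://localhost:8080/requests"));
                var endpoint = host.AddServiceEndpoint(typeof(IRequestService), binding, "");
                endpoint.Behaviors.Add(new WebHttpBehavior());

                host.Open();
                Console.WriteLine("Listening - press Enter to stop.");
                Console.ReadLine();
                host.Close();
            }
        }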

    Read the article

  • Infinite loop using Spring Security - Login page is protected even though it should allow anonymous

    - by Tai Squared
    I have a Spring application (Spring version 2.5.6.SEC01, Spring Security version 2.0.5) with the following setup.

    web.xml:

        <welcome-file-list>
          <welcome-file>
            index.jsp
          </welcome-file>
        </welcome-file-list>

    The index.jsp page is in the WebContent directory and simply contains a redirect:

        <c:redirect url="/login.htm"/>

    In appname-servlet.xml, there is a view resolver that points to the JSP pages in WEB-INF/jsp:

        <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
          <property name="viewClass" value="org.springframework.web.servlet.view.JstlView" />
          <property name="prefix" value="/WEB-INF/jsp/" />
          <property name="suffix" value=".jsp" />
        </bean>

    In the security-config.xml file, I have the following configuration:

        <http>
          <!-- Restrict URLs based on role -->
          <intercept-url pattern="/WEB-INF/jsp/login.jsp*" access="ROLE_ANONYMOUS" />
          <intercept-url pattern="/WEB-INF/jsp/header.jsp*" access="ROLE_ANONYMOUS" />
          <intercept-url pattern="/WEB-INF/jsp/footer.jsp*" access="ROLE_ANONYMOUS" />
          <intercept-url pattern="/login*" access="ROLE_ANONYMOUS" />
          <intercept-url pattern="/index.jsp" access="ROLE_ANONYMOUS" />
          <intercept-url pattern="/logoutSuccess*" access="ROLE_ANONYMOUS" />
          <intercept-url pattern="/css/**" filters="none" />
          <intercept-url pattern="/images/**" filters="none" />
          <intercept-url pattern="/**" access="ROLE_ANONYMOUS" />
          <form-login login-page="/login.jsp"/>
        </http>

        <authentication-provider>
          <jdbc-user-service data-source-ref="dataSource" />
        </authentication-provider>

    However, I can't even navigate to the login page, and I get the following error in the log:

        WARNING: The login page is being protected by the filter chain, but you don't appear to have anonymous authentication enabled. This is almost certainly an error.

    I've tried changing ROLE_ANONYMOUS to IS_AUTHENTICATED_ANONYMOUSLY, changing the login-page to index.jsp and login.htm, and adding different intercept-url values, but I can't get it so that the login page is accessible while security still applies to the other pages. What do I have to change to avoid this loop?

    Read the article

  • Silverlight WCF with two-way SSL security certificates

    - by dlang
    Dear All! I would like to implement a server-client software with the following security requirements:

    - WCF services need to be secured with SSL and certificates for both the server and the client
    - Client certificates need to be generated programmatically upon user registration
    - Client certificates are deployed via an automatically generated installer package
    - Although the client certificates are self-signed (no authorized CA for the generating server), the end user must not have to add the server certificate to the trusted certificates in the local Certificate Store

    My problem: I cannot find any information about establishing this kind of two-way SSL security for WCF when the server certificate is not signed by an authorized CA and is instead created programmatically with "makecert"...

    My question: Is it technically possible to implement these requirements? If yes - could you provide some hints on how to get started? Thank you!
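
    Not an answer to the self-signed CA part, but for the client half of the handshake, here is a minimal sketch of how a regular WCF client presents a certificate from the local store (binding, address and subject name are placeholder values, and this is the desktop WCF API - whether the in-browser Silverlight stack exposes the same hook is a separate question):

        using System;
        using System.Security.Cryptography.X509Certificates;
        using System.ServiceModel;

        [ServiceContract]
        public interface IEchoService
        {
            [OperationContract]
            string Echo(string text);
        }

        class ClientCertSketch
        {
            static void Main()
            {
                // Transport (SSL) security, with the client also presenting a certificate
                var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
                binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Certificate;

                var factory = new ChannelFactory<IEchoService>(
                    binding, new EndpointAddress("https://server.example.com/echo"));

                // Pick the generated client certificate out of the current user's store
                factory.Credentials.ClientCertificate.SetCertificate(
                    StoreLocation.CurrentUser, StoreName.My,
                    X509FindType.FindBySubjectName, "generated-client-cert");

                IEchoService proxy = factory.CreateChannel();
                Console.WriteLine(proxy.Echo("hello"));
            }
        }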

    Read the article

  • Code Access Security - Basics and Example

    - by jobless-spt
    I was going through this link to understand CodeAccessSecurity: http://www.codeproject.com/KB/security/UB_CAS_NET.aspx It's a great article, but it left me with the following questions:

    - If you can demand and get whatever permissions you want, then any executable can get Full_Trust on a machine.
    - If the permissions are already there, then why do we need to demand them?
    - Code is executing on the server, so the permissions apply on the server, not on the client machine?
    - The article takes an example of removing write permissions from an assembly to show a security exception. Though in the real world, the System.IO assembly (or related classes) will take care of these permissions. So is there a real scenario where we will need CAS?
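
    On the first point, it may help to see that Demand never grants anything - it only asks the CLR to walk the call stack and throw if any caller has not already been granted the permission. A small illustration (the path is arbitrary):

        using System;
        using System.IO;
        using System.Security;
        using System.Security.Permissions;

        class CasDemo
        {
            static void Main()
            {
                var perm = new FileIOPermission(FileIOPermissionAccess.Write, @"C:\temp\demo.txt");
                try
                {
                    // Succeeds only if every assembly on the call stack was already
                    // granted write access to this path; it does not add permissions.
                    perm.Demand();
                    File.WriteAllText(@"C:\temp\demo.txt", "allowed");
                }
                catch (SecurityException)
                {
                    // In a partial-trust sandbox the demand fails here,
                    // before any file I/O is attempted.
                    Console.WriteLine("A caller on the stack lacks write permission.");
                }
            }
        }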

    Read the article

  • WCF - Disabling security in netTcpBinding (C#)

    - by daniel-lacayo
    Hello everyone. I'm trying to make a self-hosted WCF app that uses netTcpBinding but works in an environment without a domain. It's just two regular Windows PCs; one is the server and the other one will be the client. The problem is that when I try to get the client to connect, it's rejected because of the security settings. Can you please point me in the right direction as to how I can get this scenario to work? Should I (if possible) disable security? Is there another (hopefully simple) way to accomplish this? Regards, Daniel
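
    A minimal sketch of the usual workaround - turning security off on the binding on both sides - is below; the address is a placeholder, and whether running with no transport or message security is acceptable depends entirely on the network the two machines sit on:

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IPing
        {
            [OperationContract]
            string Ping();
        }

        public class PingService : IPing
        {
            public string Ping() { return "pong"; }
        }

        class Program
        {
            static void Main()
            {
                // SecurityMode.None removes the Windows/domain authentication requirement
                var binding = new NetTcpBinding(SecurityMode.None);

                using (var host = new ServiceHost(typeof(PingService), new Uri("net.tcp://localhost:9000/ping")))
                {
                    host.AddServiceEndpoint(typeof(IPing), binding, "");
                    host.Open();
                    Console.WriteLine("Service running - press Enter to exit.");
                    Console.ReadLine();
                }

                // The client must use the same (unsecured) binding:
                // var factory = new ChannelFactory<IPing>(new NetTcpBinding(SecurityMode.None),
                //     new EndpointAddress("net.tcp://server:9000/ping"));
            }
        }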

    Read the article

  • Internet Explorer blocked this website from displaying content with security certificate errors

    - by Tabrez
    I have a security certificate linked to a CDN's server. The main website is https://www.connect4fitness.com. When I pull the site up in Firefox or Chrome, everything works fine. But in IE I get the following error: "Internet Explorer blocked this website from displaying content with security certificate errors." On IE 9 it shows the button "Display Content" and you can get past the error by clicking on the button. On older versions of IE the error message is much more cryptic and is confusing users. Please note that I don't have the option of asking end users to add the site to Trusted Sites, as some folks use the site from their work computers and do not have that access. Also, some people don't bother to call once they hit the error. I have looked at the content and all my links are "https" only. I had one namespace link and I got rid of it. Any idea how I can find what is triggering this message?
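
    One way to hunt for the trigger is to pull down the page and list every absolute http:// reference, since a single insecure script, frame, stylesheet or image is enough for IE to block the content. A quick sketch (it only inspects the top-level HTML, not resources pulled in later by CSS or script):

        using System;
        using System.Net;
        using System.Text.RegularExpressions;

        class MixedContentFinder
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    string html = client.DownloadString("https://www.connect4fitness.com");

                    // List src/href/url(...) references that still use plain http
                    foreach (Match m in Regex.Matches(html,
                        @"(src|href|url)\s*[=(]\s*[""']?(http://[^""')\s>]+)",
                        RegexOptions.IgnoreCase))
                    {
                        Console.WriteLine(m.Groups[2].Value);
                    }
                }
            }
        }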

    Read the article

  • Setting Sql server security rights for multiple situations

    - by DanDan
    We have an application which uses a local instance of SQL Server for its backend storage. The administrator Windows login has had its sysadmin right revoked, and instead two SQL logins have been created: one for the application with a secret password and one read-only login we let users view the raw data with. This was working fine until we moved to FILESTREAM, which requires integrated Windows authentication. So now the SQL Server logins must be replaced. As a result, I am now reviewing all of our logins, but I am not sure how it is possible. It seems that the application needs full read/write access, yet I still need to lock down writing to the tables so the user cannot log in to the database and delete data randomly. Does anyone have any tips for setting multiple levels of security using integrated Windows logins, or can you direct me to any further reading? Some answers can also be found on Server Fault: http://serverfault.com/questions/138763/setting-sql-server-security-rights-for-multiple-situations

    Read the article

  • Database security / scaling question

    - by orokusaki
    Typically I use a database such as MySQL or PostgreSQL on the same machine as the application using it, which makes access easy and secure. I'm just now building the first site that will have a separate physical database server (later this year it will). I'm wondering three things: (security) What things should I look into, for starters, pertaining to the security of accessing a separate machine's database? (scalability) Are there scalability issues that I should think about pertaining to this (technology agnostic)? (more ServerFaultish, but related) If starting the DB out on the same physical server (using a separate VMware VM) and later moving to a different physical server, are there implicit problems that I'll have to deal with? Isn't another VM still accessed via localhost? If these questions are completely ludicrous, I apologize to you DB experts.

    Read the article

  • How can I use Spring Security without sessions?

    - by Jarrod
    I am building a web application with Spring Security that will live on Amazon EC2 and use Amazon's Elastic Load Balancers. Unfortunately, ELB does not support sticky sessions, so I need to ensure my application works properly without sessions. So far, I have set up RememberMeServices to assign a token via a cookie, and this works fine, but I want the cookie to expire with the browser session (e.g. when the browser closes). I have to imagine I'm not the first one to want to use Spring Security without sessions... any suggestions?

    Read the article

  • System.Security.Permissions.SecurityPermission and Reflection on Godaddy

    - by David Murdoch
    I have the following method:

        public static UserControl LoadControl(string UserControlPath, params object[] constructorParameters)
        {
            var p = new Page();
            var ctl = p.LoadControl(UserControlPath) as UserControl;

            // Find the relevant constructor
            if (ctl != null)
            {
                ConstructorInfo constructor = ctl.GetType().BaseType.GetConstructor(
                    constructorParameters.Select(constParam =>
                        constParam == null ? "".GetType() : constParam.GetType()).ToArray());

                // And then call the relevant constructor
                if (constructor == null)
                {
                    throw new MemberAccessException("The requested constructor was not found on : " + ctl.GetType().BaseType.ToString());
                }
                constructor.Invoke(ctl, constructorParameters);
            }

            // Finally return the fully initialized UC
            return ctl;
        }

    When executed on a GoDaddy shared host, it gives me:

        System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.

    Read the article

  • User account design and security...

    - by espinet
    Before I begin: I am using Ruby on Rails and the Devise gem for user authentication. Hi, I was doing some research about account security and I found a blog post about the topic a while ago, but I can no longer find it. I read something about how, when making a login system, you should have one model for User, containing the user's username, encrypted password, and email, and a separate model for the user's Account, containing everything else. A User has an Account. I don't know if I'm explaining this correctly since I haven't seen the blog post for several months and I lost my bookmark. Could someone explain how and why I should or shouldn't do this? My application deals with money, so I need to cover my bases with security. Thanks.

    Read the article

  • Data-related security Implementation

    - by devdude
    Using Shiro, we have a great security framework embedded in our enterprise application running on GF. You define users, roles, and permissions, and we can control at any fine-grained level whether a user can access the application, a certain page, or even click a specific button. Is there a recipe or pattern that allows, on top of that, restricting a user from seeing certain data? Sample: You have a customer table for 3 factories (part of one company). An admin user can see all customer records, but the user at the local factory must not see any customer data of other factories (for whatever reason). The security feature should be part of the role definition. Thanks for any input and ideas.

    Read the article

  • Session ID Rotation - does it enhance security?

    - by dound
    (I think) I understand why session IDs should be rotated when the user logs in - this is one important step to prevent session fixation. However, is there any advantage to randomly/periodically rotating session IDs? In my opinion this seems to provide only a false sense of security. Assuming session IDs are not vulnerable to brute-force guessing and you only transmit the session ID in a cookie (not as part of URLs), then an attacker will have to access your cookie (most likely by snooping on your traffic) to get your session ID. Thus if the attacker gets one session ID, they'll probably be able to sniff the rotated session ID too - so random rotation hasn't enhanced security.
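
    For completeness, rotation at the moment of login (a privilege change) is the part that clearly matters against fixation. As a rough ASP.NET-flavoured sketch of what "issue a fresh ID on login" looks like - framework specifics vary, and note that state stored under the old ID is not automatically carried over:

        using System.Web;
        using System.Web.SessionState;

        public static class SessionRotation
        {
            // Call right after a successful login so that any session ID fixated
            // or sniffed before authentication no longer identifies this session.
            public static void RegenerateId(HttpContext context)
            {
                var manager = new SessionIDManager();
                string newId = manager.CreateSessionID(context);

                bool redirected, cookieAdded;
                manager.SaveSessionID(context, newId, out redirected, out cookieAdded);
            }
        }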

    Read the article

  • .NET Security Part 4

    - by Simon Cooper
    Finally, in this series, I am going to cover some of the security issues that can trip you up when using sandboxed appdomains. DISCLAIMER: I am not a security expert, and this is by no means an exhaustive list. If you actually are writing security-critical code, then get a proper security audit of your code by a professional. The examples below are just illustrations of the sort of things that can go wrong.

    1. AppDomainSetup.ApplicationBase

    The most obvious one is the issue covered in the MSDN documentation on creating a sandbox, in step 3 - the sandboxed appdomain has the same ApplicationBase as the controlling appdomain. So let's explore what happens when they are the same, and an exception is thrown.

    In the sandboxed assembly, Sandboxed.dll (IPlugin is an interface in a partially-trusted assembly, with a single MethodToDoThings on it):

        public class UntrustedPlugin : MarshalByRefObject, IPlugin
        {
            // implements IPlugin.MethodToDoThings()
            public void MethodToDoThings()
            {
                throw new EvilException();
            }
        }

        [Serializable]
        internal class EvilException : Exception
        {
            public override string ToString()
            {
                // show we have read access to C:\Windows
                // read the first 5 directories
                Console.WriteLine("Pwned! Mwuahahah!");
                foreach (var d in Directory.EnumerateDirectories(@"C:\Windows").Take(5))
                {
                    Console.WriteLine(d);
                }
                return base.ToString();
            }
        }

    And in the controlling assembly:

        // what can possibly go wrong?
        AppDomainSetup appDomainSetup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase
        };

        // only grant permissions to execute
        // and to read the application base, nothing else
        PermissionSet restrictedPerms = new PermissionSet(PermissionState.None);
        restrictedPerms.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));
        restrictedPerms.AddPermission(
            new FileIOPermission(FileIOPermissionAccess.Read, appDomainSetup.ApplicationBase));
        restrictedPerms.AddPermission(
            new FileIOPermission(FileIOPermissionAccess.PathDiscovery, appDomainSetup.ApplicationBase));

        // create the sandbox
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, appDomainSetup, restrictedPerms);

        // execute UntrustedPlugin in the sandbox
        // don't crash the application if the sandbox throws an exception
        IPlugin o = (IPlugin)sandbox.CreateInstanceFromAndUnwrap("Sandboxed.dll", "UntrustedPlugin");
        try
        {
            o.MethodToDoThings();
        }
        catch (Exception e)
        {
            Console.WriteLine(e.ToString());
        }

    And the result? Oops. We've allowed a class that should be sandboxed to execute code with fully-trusted permissions! How did this happen? Well, the key is the exact meaning of the ApplicationBase property: the application base directory is where the assembly manager begins probing for assemblies. When EvilException is thrown, it propagates from the sandboxed appdomain into the controlling assembly's appdomain (as it's marked as Serializable). To deserialize the exception, the CLR has to load the sandboxed dll; since the controlling appdomain's ApplicationBase directory contains the sandboxed assembly, the CLR finds and loads the assembly into the full-trust appdomain, and the evil code is executed.

    So the problem isn't exactly that the sandboxed appdomain's ApplicationBase is the same as the controlling appdomain's; it's that the sandboxed dll was in such a place that the controlling appdomain could find it as part of the standard assembly resolution mechanism. The sandbox then forced the assembly to load in the controlling appdomain by throwing a serializable exception that propagated outside the sandbox.

    The easiest fix for this is to keep the sandbox ApplicationBase well away from the ApplicationBase of the controlling appdomain, and don't allow the sandbox permissions to access the controlling appdomain's ApplicationBase directory. If you do this, then the sandboxed assembly can't be accidentally loaded into the fully-trusted appdomain, and the code can't be executed. If the plugin does try to induce the controlling appdomain to load an assembly it shouldn't, a SerializationException will be thrown when it tries to load the assembly to deserialize the exception, and no damage will be done.

    2. Loading the sandboxed dll into the application appdomain

    As an extension of the previous point, you shouldn't directly reference types or methods in the sandboxed dll from your application code. That loads the assembly into the fully-trusted appdomain, and from there code in the assembly could be executed. Instead, pull out the methods you want the sandboxed dll to have into an interface or class in a partially-trusted assembly you control, and execute methods via that instead (similar to the example above with the IPlugin interface). If you need to have a look at the assembly before executing it in the sandbox, either examine the assembly using reflection from within the sandbox, or load the assembly into the reflection-only context in the application's appdomain. The code in assemblies in the reflection-only context can't be executed, it can only be reflected upon, thus protecting your appdomain from malicious code.

    3. Incorrectly asserting permissions

    You should only assert permissions when you are absolutely sure they're safe. For example, this method allows a caller read access to any file they call this method with, including your documents, any network shares, the C:\Windows directory, etc:

        [SecuritySafeCritical]
        public static string GetFileText(string filePath)
        {
            new FileIOPermission(FileIOPermissionAccess.Read, filePath).Assert();
            return File.ReadAllText(filePath);
        }

    Be careful when asserting permissions, and ensure you're not providing a loophole sandboxed dlls can use to gain access to things they shouldn't be able to.

    Conclusion

    Hopefully, that's given you an idea of some of the ways it's possible to get past the .NET security system. As I said before, this post is not exhaustive, and you certainly shouldn't base any security-critical applications on the contents of this blog post. What this series should help with is understanding the possibilities of the security system, and what all the security attributes and classes mean and what they are used for, if you were to use the security system in the future.

    Read the article

  • Security precautions and techniques for a User-submitted Code Demo Area

    - by Jack W-H
    Hey folks. Maybe this isn't really feasible, but basically I've been developing a snippet-sharing website and I would like it to have a 'live demo area'. For example, you're browsing some snippets and click the Demo button. A new window pops up which executes the web code. I understand there are a gazillion security risks involved in doing this - XSS, tags, nasty malware/drive-by downloads, pr0n, etc. etc. etc. The community would be able to flag submissions that are blatantly naughty, but obviously some would go undetected (and, in many cases, someone would have to fall victim to discover whatever nasty thing was submitted). So I need to know: what should I do - security-wise - to make sure that users can submit code, but that nothing malicious can be run - or executed offsite, etc.? For your information my site is powered by PHP using CodeIgniter. Jack

    Read the article

  • Spring 3 - Custom Security

    - by Eqbal
    I am in the process of converting a legacy application from proprietary technology to a Spring-based web app, leaving the backend system as is. The login service is provided by the backend system through a function call that takes in some parameters (username, password, plus some others) and provides an output that includes the authorizations for the user and other properties like firstname, lastname, etc. What do I need to do to weave this into the Spring 3.0 security module? It looks like I need to provide a custom AuthenticationProvider implementation (is this where I call the backend function?). Do I also need a custom User and UserDetailsService implementation, which needs loadUserByUsername(String username)? Any pointers to good documentation for this? The reference that came with the download is okay, but doesn't help too much in terms of implementing custom security.

    Read the article

  • TFS Security and Documents Folder

    - by pm_2
    I'm getting an issue with TFS where the documents folder is marked with a red cross. As far as I can tell, this seems to be a security issue; however, I am set up as a project admin on the relevant projects. I've come to the conclusion that it's a security issue from running the TFS Project Admin tool (available here). When I run this, it tells me that I don't have sufficient access rights to open the project. I've checked, and I'm not included in any groups that are denied access. Please can anyone shed any light on why I may not have sufficient access to these projects?

    Read the article

  • Spring Security OAuth2 provider to secure a non-Spring API

    - by user1241320
    I'm trying to set up an OAuth 2.0 provider that should "secure" our RESTful API using spring-security-oauth. Being a "Spring fan", I thought it could be the quicker solution. The main point is that this RESTful thingie is not a Spring-based webapp. My boss says the OAuth provider should be a separate application, but I'm starting to doubt that (I got this impression by reading spring-security-oauth). I'm also new here, so I haven't really got my hands into this other (Jersey-powered) RESTful API (the core of our business). Any help/hint will be much appreciated.

    Read the article

  • GWT HTML widget security risks

    - by h2g2java
    In the GWT javadoc, we are advised: "If you only need a simple label (text, but not HTML), then the Label widget is more appropriate, as it disallows the use of HTML, which can lead to potential security issues if not used properly." I would like to be educated/reminded about these security susceptibilities. It would be nice to have a description of the mechanisms behind those risks. Are the susceptibilities equally potent on GAE vs Amazon vs my home Linux server? Are they equally potent across browser brands? Thank you.

    Read the article

  • Control Menu Items based on Privileges of Logged In User with spring security

    - by Nirmal
    Hi All... Based on this link I have incorporated the Spring Security core module with my Grails project... I am using the Requestmap concept, storing each role, user and requestmap inside the database only... Now my requirement is to display the menu items based on the user's assigned roles... For example, if my "User" main menu has the following items: Dashboard, Import User, Manage User - and if I have assigned the roles of Dashboard and Import User to the user with username "auditor", then only the following menu items should be displayed on the screen: User (main menu) - Dashboard (sub menu) - Import User (sub menu). I have explored the Spring Security ACL plugin for the same, but it's using the Domain classes to get it working... So, I wanted to know the convenient way to do this... Thanks in advance...

    Read the article

< Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >