Search Results

Search found 12720 results on 509 pages for 'moss2007 security'.


  • (Fluent) NHibernate Security Exception - ReflectionPermission

    - by PeterEysermans
    I've upgraded an ASP.Net Web application to the latest build of Fluent NHibernate (1.0.0.636) and the newest version of NHibernate (v2.1.2.4000). I've checked a couple of times that the application is running in Full trust. But I keep getting the following error: Security Exception Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file. Exception Details: System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.ReflectionPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [SecurityException: Request for the permission of type 'System.Security.Permissions.ReflectionPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.] System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) +0 System.Security.CodeAccessPermission.Demand() +54 System.Reflection.Emit.DynamicMethod.PerformSecurityCheck(Type owner, StackCrawlMark& stackMark, Boolean skipVisibility) +269 System.Reflection.Emit.DynamicMethod..ctor(String name, Type returnType, Type[] parameterTypes, Type owner, Boolean skipVisibility) +81 NHibernate.Bytecode.Lightweight.ReflectionOptimizer.CreateDynamicMethod(Type returnType, Type[] argumentTypes) +165 NHibernate.Bytecode.Lightweight.ReflectionOptimizer.GenerateGetPropertyValuesMethod(IGetter[] getters) +383 NHibernate.Bytecode.Lightweight.ReflectionOptimizer..ctor(Type mappedType, IGetter[] getters, ISetter[] setters) +108 NHibernate.Bytecode.Lightweight.BytecodeProviderImpl.GetReflectionOptimizer(Type mappedClass, IGetter[] getters, ISetter[] setters) +52 NHibernate.Tuple.Component.PocoComponentTuplizer..ctor(Component component) +231 NHibernate.Tuple.Component.ComponentEntityModeToTuplizerMapping..ctor(Component component) +420 NHibernate.Tuple.Component.ComponentMetamodel..ctor(Component component) +402 NHibernate.Mapping.Component.BuildType() +38 NHibernate.Mapping.Component.get_Type() +32 NHibernate.Mapping.SimpleValue.IsValid(IMapping mapping) +39 NHibernate.Mapping.RootClass.Validate(IMapping mapping) +61 NHibernate.Cfg.Configuration.ValidateEntities() +220 NHibernate.Cfg.Configuration.Validate() +16 NHibernate.Cfg.Configuration.BuildSessionFactory() +39 FluentNHibernate.Cfg.FluentConfiguration.BuildSessionFactory() in d:\Builds\FluentNH\src\FluentNHibernate\Cfg\FluentConfiguration.cs:93 Anyone had a similar error? I've searched the web / Stack Overflow / the NHibernate forums but only found people who had a problem when running in medium trust mode, not full trust. I've been developing for several months on this application on this machine with previous versions of Fluent NHibernate and NHibernate. The machine I'm running this on is 64-bit, in case that's relevant.
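    One way to narrow this down is to reproduce the failing demand outside of NHibernate (for example from Application_Start). A minimal diagnostic sketch follows; the class and method names are mine, and the demand only approximates the one DynamicMethod makes:

        // If this throws, the AppDomain is not really running with the permissions you
        // expect (check for a <trust> element or a machine-level policy overriding the
        // site config); if it succeeds, the problem is more likely specific to NHibernate.
        using System.Security.Permissions;

        public static class TrustCheck
        {
            public static void DemandReflectionPermission()
            {
                // Roughly the demand made by System.Reflection.Emit.DynamicMethod.
                new ReflectionPermission(ReflectionPermissionFlag.MemberAccess).Demand();
            }
        }

    If the demand fails here too, the trust configuration is the culprit. If it succeeds, a last-resort workaround some people report is setting NHibernate's hibernate.bytecode.provider option to null so the reflection optimizer (and DynamicMethod) is skipped entirely, at some performance cost - verify that setting against your NHibernate version before relying on it.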

    Read the article

  • Mirror using apt-mirror and exclude certain sections/categories

    - by Onitlikesonic
    I'm currently using apt-mirror to create a local mirror of the debian repositories. As the mirrored repositories will be used only by machines destined to be headless servers and as an effort to reduce the current mirroring size (around 75GB), categories like games and possibly others will never be needed. How can I go about specifying (on the mirror.list perhaps?) what sections/categories I want to be excluded from the mirroring? Maybe a bit subjective, but apart from games what other sections/categories could be "safely" ignored from the mirroring for my environment purposes? My mirror.list looks as below since all the machines are using precise. # MAIN deb-amd64 http://archive.ubuntu.com/ubuntu precise main restricted universe multiverse deb-i386 http://archive.ubuntu.com/ubuntu precise main restricted universe multiverse # SECURITY deb-amd64 http://archive.ubuntu.com/ubuntu precise-security main restricted universe multiverse deb-i386 http://archive.ubuntu.com/ubuntu precise-security main restricted universe multiverse Also, what others would you recommend adding to the list to be mirrored for a relatively stable environment? Again I understand this is subjective, just looking for some pointers. Much appreciated in advance
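    As far as I know, apt-mirror cannot filter on the per-package Section: field (which is where "games" lives), so the practical lever is the component list: mirror only the components your servers actually install from. A sketch of a reduced mirror.list under that assumption (headless servers needing only main and restricted, with the commonly recommended -updates pocket added); treat the pocket and component choices as suggestions, not a tested minimum:

        # Hypothetical reduced mirror.list - main and restricted only
        deb-amd64 http://archive.ubuntu.com/ubuntu precise main restricted
        deb-i386  http://archive.ubuntu.com/ubuntu precise main restricted
        deb-amd64 http://archive.ubuntu.com/ubuntu precise-updates main restricted
        deb-i386  http://archive.ubuntu.com/ubuntu precise-updates main restricted
        deb-amd64 http://archive.ubuntu.com/ubuntu precise-security main restricted
        deb-i386  http://archive.ubuntu.com/ubuntu precise-security main restricted
        clean http://archive.ubuntu.com/ubuntu

    Dropping universe and multiverse removes games along with a great deal else, so only do it if nothing on the servers depends on those components.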

    Read the article

  • IIS Not Accepting Active Directory Login Credentials

    - by Dale Jay
    I have an ASP.NET web form using Microsoft's boilerplate Active Directory login page, set up exactly as suggested. Windows Authentication is activated on the "Default Website" and "MyWebsite" levels, and Domain\This.User is given "Allow" access to the site. After entering the valid credentials for This.User on the web form, a popup window appears asking me to enter my credentials yet again. Despite entering valid credentials for This.User (after attempting both the Domain\This.User and This.User formats), it rejects the credentials and returns an unauthorized security headers page (error 401.2). Active Directory user This.User is valid, the IP address of the AD server has been verified and SPNs have been set up for the server. Error Code: 0x80070005
    Default Web Site security config:
    <system.web> <identity impersonate="true" /> <authentication mode="Windows" /> <customErrors mode="Off" /> <compilation debug="true" /> </system.web>
    Sub-site security config:
    <authentication mode="Windows"> <forms loginUrl="~/logon.aspx" timeout="2880" /> </authentication> <authorization> <deny users="?" /> <allow users="*" /> </authorization>
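    A 401.2 with Windows authentication is often decided by the IIS provider and kernel-mode settings rather than by the ASP.NET configuration shown above. A hedged applicationHost.config sketch (the site path is a placeholder, these sections may need to be unlocked first, and useAppPoolCredentials is only relevant when the SPNs are registered to a domain app pool account):

        <location path="Default Web Site/MyWebsite">
          <system.webServer>
            <security>
              <authentication>
                <anonymousAuthentication enabled="false" />
                <windowsAuthentication enabled="true" useKernelMode="true" useAppPoolCredentials="true">
                  <providers>
                    <clear />
                    <add value="Negotiate" />
                    <add value="NTLM" />
                  </providers>
                </windowsAuthentication>
              </authentication>
            </security>
          </system.webServer>
        </location>

    If Kerberos negotiation is the part that fails, temporarily moving NTLM above Negotiate (or disabling kernel mode) can help isolate whether the SPN registration is the real problem.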

    Read the article

  • Cost effective way to provide static media content

    - by james
    I'd like to be able to deliver around 50MB of static content, either in about 30 individual files up to 10MB or grouped into 3 compressed files, around 5k to 20k times a day. Ideally I'd like to put some sort of very basic security around providing the data to ensure that a request is from the expected source, but if tossing the security for a big reduction in price is possible then it's an option. Does anyone have any suggestions other than what I've found: Google AppEngine is $0.12/GB & I believe has a file size limit of 10MB so I'd have to break the data up a bit. So a rough calculation would seem to be that this would cost me about $30 to $120 a day. Or I've seen something like what seems to be just public static content delivery with no type of logic capabilities like Usenet.nl at what I think calculates to about $0.025/GB which would cost me about $6 to $25 a day. Any idea if I'm going about these calculations right & if there might be a better option for just static content on a decently high volume delivery? Again some basic security would be great but if cost is greatly reduced without it then I'm up for that.
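    If whatever host is chosen can run even a small piece of code in front of the files, the "very basic security" requirement is often met with expiring HMAC-signed URLs. A minimal C# sketch; the key handling, parameter names and URL layout are illustrative only:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        public static class SignedLink
        {
            static string Signature(string path, DateTime expiresUtc, byte[] key)
            {
                using (var hmac = new HMACSHA256(key))
                {
                    var payload = Encoding.UTF8.GetBytes(path + "|" + expiresUtc.ToString("o"));
                    return Convert.ToBase64String(hmac.ComputeHash(payload));
                }
            }

            // Issued by the party that knows the secret key.
            public static string Sign(string path, DateTime expiresUtc, byte[] key)
            {
                return path + "?expires=" + Uri.EscapeDataString(expiresUtc.ToString("o"))
                            + "&sig=" + Uri.EscapeDataString(Signature(path, expiresUtc, key));
            }

            // Checked by the download endpoint before serving the bytes.
            public static bool Verify(string path, DateTime expiresUtc, string sig, byte[] key)
            {
                return expiresUtc > DateTime.UtcNow && sig == Signature(path, expiresUtc, key);
            }
        }

    A constant-time comparison would be preferable in production, but the idea stands: the link only works for the expected source and only for a limited window, and dropping the check later to cut cost changes nothing else about the setup.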

    Read the article

  • IE8 HTTPs Download Issue

    - by Jon Egerton
    I have a problem with a system I develop, related to IE8 downloading over SSL (i.e. on sites using https://...), which is described in this MS KB article: http://support.microsoft.com/kb/323308 We use the HttpCacheability.NoCache option as the data being downloaded is sensitive and is downloaded from a secured site. I don't want that data to be cached on any of the proxies etc. that the response passes through back to the client. The article describing the issue details a client-side registry fix that changes a BypassSSLNoCacheCheck setting. I don't want to loosen the system security just for IE8, as the system works fine on anything more up to date. Getting all the clients to apply the hotfix is difficult at best, and impossible at worst. We need to support IE8 in the system, at least for now. So: 1: Does the detailed hotfix have any implications for the security at the browser end in IE8 - does it mean the file will be cached (in a place other than where the user saves the file)? 2: Is there some way I can get these files downloadable with a change at the server end that doesn't break the security side of things?
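    On question 2, the commonly reported server-side workaround is to stop sending the literal no-cache directives on the download response itself and use Cache-Control: private, which still tells shared proxies not to store the body but lets IE8 write the temporary file it needs to complete the save. Whether that is acceptable for the data in question is a compliance call. A sketch (method and parameter names are mine):

        using System;
        using System.Web;

        public static class SecureDownload
        {
            public static void Write(HttpResponse response, byte[] data, string fileName)
            {
                response.ClearHeaders();
                response.ContentType = "application/octet-stream";
                response.AddHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");
                // Private instead of NoCache: no-cache over SSL is the combination IE8 chokes on (KB 323308).
                response.Cache.SetCacheability(HttpCacheability.Private);
                response.Cache.SetMaxAge(TimeSpan.Zero);
                response.BinaryWrite(data);
                response.End();
            }
        }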

    Read the article

  • Choosing a CMS to use with backend modules involving Haskell and Python [on hold]

    - by Butterflycode
    Hi, I am trying to decide on a CMS to use for a new project. Security is the most important element of the CMS. I am looking to use a PHP-based CMS such as Joomla or Drupal; however, PHP has many security flaws, which worries me. The data which needs to be secure will be inside a database and relates to account information. I am wondering what is the best way to do this? What I want is a frontend made in PHP/JS (Joomla), and then a backend API written in Haskell to handle money transfers, ensuring nothing goes wrong. In between the two I want a controller written in perhaps Python or C. I never want the PHP to touch the database; I want it to relay messages to the controller written in Python or C, which then writes to the database, sanitising data etc. Am I perhaps thinking too deeply about this? Just wondering if anyone has any ideas on what I should do.... I can't quite explain what the project is as I don't want the idea to be stolen, but it involves a lot of money transactions, so security is essential.

    Read the article

  • Ghost Records, Backups, and Database Compression…With a Pinch of Security Considerations

    - by Argenis
      Today Jeffrey Langdon (@jlangdon) posed on #SQLHelp the following questions: So I set to answer his question, and I said to myself: “Hey, I haven’t blogged in a while, how about I blog about this particular topic?”. Thus, this post was born. (If you have never heard of Ghost Records and/or the Ghost Cleanup Task, go see this blog post by Paul Randal) 1) Do ghost records get copied over in a backup? If you guessed yes, you guessed right. The backup process in SQL Server takes all data as it is on disk – it doesn’t crack the pages open to selectively pick which slots have actual data and which ones do not. The whole page is backed up, regardless of its contents. Even if ghost cleanup has run and processed the ghost records, the slots are not overwritten immediately, but rather until another DML operation comes along and uses them. As a matter of fact, all of the allocated space for a database will be included in a full backup. So, this poses a bit of a security/compliance problem for some of you DBA folk: if you want to take a full backup of a database after you’ve purged sensitive data, you should rebuild all of your indexes (with FILLFACTOR set to 100%). But the empty space on your data file(s) might still contain sensitive data! A SHRINKFILE might help get rid of that (not so) empty space, but that might not be the end of your troubles. You might _STILL_ have (not so) empty space on your files! One approach that you can follow is to export all of the data on your database to another SQL Server instance that does NOT have Instant File Initialization enabled. This can be a tedious and time-consuming process, though. So you have to weigh in your options and see what makes sense for you. Snapshot Replication is another idea that comes to mind. 2) Does Compression get rid of ghost records (2008)? The answer to this is no. The Ghost Records/Ghost Cleanup Task mechanism is alive and well on compressed tables and indexes. You can prove this running a simple script: CREATE DATABASE GhostRecordsTest GO USE GhostRecordsTest GO CREATE TABLE myTable (myPrimaryKey int IDENTITY(1,1) PRIMARY KEY CLUSTERED,                       myWideColumn varchar(1000) NOT NULL DEFAULT 'Default string value')                         ALTER TABLE myTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE) GO INSERT INTO myTable DEFAULT VALUES GO 10 DELETE myTable WHERE myPrimaryKey % 2 = 0 DBCC TRACEON(2514) DBCC CHECKTABLE(myTable) TraceFlag 2514 will make DBCC CHECKTABLE give you an extra tidbit of information on its output. For the above script: “Ghost Record count = 5” Until next time,   -Argenis
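    If you would rather not enable a trace flag, the same information is exposed (on SQL Server 2005 and later) by sys.dm_db_index_physical_stats, which reports ghost_record_count and version_ghost_record_count when run in SAMPLED or DETAILED mode (LIMITED leaves them NULL, if memory serves). Run this in the context of the test database from the script above:

        SELECT index_id, alloc_unit_type_desc,
               ghost_record_count, version_ghost_record_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('myTable'), NULL, NULL, 'DETAILED');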

    Read the article

  • Need suggestions on what you regard as “security”

    - by John Breakwell
    I’m currently writing a large piece on MSMQ security and wanted to check I was covering the right areas. I have some doubts as I’ve seen the occasional MSMQ forum question where a poster has used the word “security” in different contexts to what I was expecting. So here are the areas I plan to cover:
    - Message security: encryption on the wire (SSL and IPSEC); encryption of the message (MSMQ encryption); encryption of the payload (data encryption); signing and authentication
    - Queue security: SIDs and ACLs; discoverability; cross-forest issues
    - Storage security: NTFS permissions; unencrypted data
    - Service security: ports and firewalls; DOS attacks; hardened mode (HTTP only); RPC secure channel requirement; authenticated RPC requirement
    - Active Directory object permissions
    - Setup: Administrator requirements
    What else would you want to see?

    Read the article

  • Framework 4 Features: Login Id Support

    - by Anthony Shorten
    Given that Oracle Utilities Application Framework 4 is available as part of Mobile Work Force Management and, progressively, other products, I am preparing a number of short but sweet blog entries highlighting some of the new functionality that has been implemented. This is the first entry and it is on a new security feature called Login Id. In past releases of the Oracle Utilities Application Framework, the userid used for authentication and authorization was limited to eight (8) characters in length. This mirrored what the market required in the past, with LAN userids and even legacy userids being that length. The technology market has since progressed to longer userid lengths. It is very common to hear that email addresses are being used as credentials for production systems. To achieve this in past versions of the Oracle Utilities Application Framework, sites had to introduce a short userid (8 characters in length) as an alias in their preferred security store and then configure the J2EE Web Application Server to use the alias as credentials. This was sometimes a standard feature of the security store and/or the J2EE Web Application Server, if you were lucky. If not, some Java code had to be written to implement the solution. In Oracle Utilities Application Framework 4 we introduced a new attribute on the user object called Login Id. The Login Id can be up to 256 characters in length and is an alternative to the existing userid stored on the user object. This means the Oracle Utilities Application Framework can support both long and short userids. For backward compatibility we use the Login Id for authentication but the short userid for authorization and auditing. The user object within the Oracle Utilities Application Framework holds the translation. Backward compatibility is always a consideration in any of our designs for future or changed functionality. You will see reference to this fact in the blog entries I will be composing over the next few months. We have also thought about the flexibility in implementing this feature. The Login Id can be the same value as the Userid (the default for backward compatibility) or can be different. Both the Login Id and Userid have to be unique. This avoids sharing of credentials and is also backward compatible. You can manually enter the Login Id or provision it from Oracle Identity Manager (or another tool). If you use the Login Id only, then we will not autogenerate a short userid, as the rules for this can vary from site to site. You have a number of options there. Most identity provisioning tools can generate a short userid at user creation time and this can be used. If you do not use provisioning tools, then you can write a class extension using the SDK to autogenerate the userid based upon your site's preference. When we designed the feature there were lots of styles of generating userids (random, initial and surname, numbers, etc.). We could not really see a clear winner in that respect, so we just allowed an extension to be inserted if necessary. Most customers indicated to us that identity provisioning was the preferred way. This is why we released an Oracle Identity Manager integration with the framework. The Login Id is now case-sensitive, which was not supported for the userid. The introduction of the Login Id allows the product to offer flexible options when configuring security whilst maintaining backward compatibility.

    Read the article

  • Spring Security and the Synchronizer Token J2EE pattern, problem when authentication fails.

    - by dfuse
    Hey, we are using Spring Security 2.0.4. We have a TransactionTokenBean which generates a unique token each POST, the bean is session scoped. The token is used for the duplicate form submission problem (and security). The TransactionTokenBean is called from a Servlet filter. Our problem is the following, after a session timeout occured, when you do a POST in the application Spring Security redirects to the logon page, saving the original request. After logging on again the TransactionTokenBean is created again, since it is session scoped, but then Spring forwards to the originally accessed url, also sending the token that was generated at that time. Since the TransactionTokenBean is created again, the tokens do not match and our filter throws an Exception. I don't quite know how to handle this elegantly, (or for that matter, I can't even fix it with a hack), any ideas? This is the code of the TransactionTokenBean: public class TransactionTokenBean implements Serializable { public static final int TOKEN_LENGTH = 8; private RandomizerBean randomizer; private transient Logger logger; private String expectedToken; public String getUniqueToken() { return expectedToken; } public void init() { resetUniqueToken(); } public final void verifyAndResetUniqueToken(String actualToken) { verifyUniqueToken(actualToken); resetUniqueToken(); } public void resetUniqueToken() { expectedToken = randomizer.getRandomString(TOKEN_LENGTH, RandomizerBean.ALPHANUMERICS); getLogger().debug("reset token to: " + expectedToken); } public void verifyUniqueToken(String actualToken) { if (getLogger().isDebugEnabled()) { getLogger().debug("verifying token. expected=" + expectedToken + ", actual=" + actualToken); } if (expectedToken == null || actualToken == null || !isValidToken(actualToken)) { throw new IllegalArgumentException("missing or invalid transaction token"); } if (!expectedToken.equals(actualToken)) { throw new InvalidTokenException(); } } private boolean isValidToken(String actualToken) { return StringUtils.isAlphanumeric(actualToken); } public void setRandomizer(RandomizerBean randomizer) { this.randomizer = randomizer; } private Logger getLogger() { if (logger == null) { logger = Logger.getLogger(TransactionTokenBean.class); } return logger; } } and this is the Servlet filter (ignore the Ajax stuff): public class SecurityFilter implements Filter { static final String AJAX_TOKEN_PARAM = "ATXTOKEN"; static final String TOKEN_PARAM = "TXTOKEN"; private WebApplicationContext webApplicationContext; private Logger logger = Logger.getLogger(SecurityFilter.class); public void init(FilterConfig config) { setWebApplicationContext(WebApplicationContextUtils.getWebApplicationContext(config.getServletContext())); } public void destroy() { } public void doFilter(ServletRequest req, ServletResponse response, FilterChain chain) throws IOException, ServletException { HttpServletRequest request = (HttpServletRequest) req; if (isPostRequest(request)) { if (isAjaxRequest(request)) { log("verifying token for AJAX request " + request.getRequestURI()); getTransactionTokenBean(true).verifyUniqueToken(request.getParameter(AJAX_TOKEN_PARAM)); } else { log("verifying and resetting token for non-AJAX request " + request.getRequestURI()); getTransactionTokenBean(false).verifyAndResetUniqueToken(request.getParameter(TOKEN_PARAM)); } } chain.doFilter(request, response); } private void log(String line) { if (logger.isDebugEnabled()) { logger.debug(line); } } private boolean isPostRequest(HttpServletRequest request) { return 
"POST".equals(request.getMethod().toUpperCase()); } private boolean isAjaxRequest(HttpServletRequest request) { return request.getParameter("AJAXREQUEST") != null; } private TransactionTokenBean getTransactionTokenBean(boolean ajax) { return (TransactionTokenBean) webApplicationContext.getBean(ajax ? "ajaxTransactionTokenBean" : "transactionTokenBean"); } void setWebApplicationContext(WebApplicationContext context) { this.webApplicationContext = context; } }

    Read the article

  • Getting Started with ASP.NET Membership, Profile and RoleManager

    - by Ben Griswold
    A new ASP.NET MVC project includes preconfigured Membership, Profile and RoleManager providers right out of the box.  Try it yourself – create a ASP.NET MVC application, crack open the web.config file and have a look.  First, you’ll find the ApplicationServices database connection: <connectionStrings>   <add name="ApplicationServices"        connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"        providerName="System.Data.SqlClient"/> </connectionStrings>   Notice the connection string is referencing the aspnetdb.mdf database hosted by SQL Express and it’s using integrated security so it’ll just work for you without having to call out a specific database login or anything. Scroll down the file a bit and you’ll find each of the three noted sections: <membership>   <providers>     <clear/>     <add name="AspNetSqlMembershipProvider"          type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"          connectionStringName="ApplicationServices"          enablePasswordRetrieval="false"          enablePasswordReset="true"          requiresQuestionAndAnswer="false"          requiresUniqueEmail="false"          passwordFormat="Hashed"          maxInvalidPasswordAttempts="5"          minRequiredPasswordLength="6"          minRequiredNonalphanumericCharacters="0"          passwordAttemptWindow="10"          passwordStrengthRegularExpression=""          applicationName="/"             />   </providers> </membership>   <profile>   <providers>     <clear/>     <add name="AspNetSqlProfileProvider"          type="System.Web.Profile.SqlProfileProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"          connectionStringName="ApplicationServices"          applicationName="/"             />   </providers> </profile>   <roleManager enabled="false">   <providers>     <clear />     <add connectionStringName="ApplicationServices" applicationName="/" name="AspNetSqlRoleProvider" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />     <add applicationName="/" name="AspNetWindowsTokenRoleProvider" type="System.Web.Security.WindowsTokenRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />   </providers> </roleManager> Really. It’s all there. Still don’t believe me.  Run the application, walk through the registration process and finally login and logout.  Completely functional – and you didn’t have to do a thing! What else?  Well, you can manage your users via the Configuration Manager which is hiding in Visual Studio behind Projects > ASP.NET Configuration. The ASP.NET Web Site Administration Tool isn’t MVC-specific (neither is the Membership, Profile or RoleManager stuff) but it’s neat and I hardly ever see anyone using it.  Here you can set up and edit users, roles, and set access permissions for your site. You can manage application settings, establish your SMTP settings, configure debugging and tracing, define default error page and even take your application offline.  The UI is rather plain-Jane but it works great. And here’s the best of all.  Let’s say you, like most of us, don’t want to run your application on top of the aspnetdb.mdf database.  Let’s suppose you want to use your own database and you’d like to add the membership stuff to it.  Well, that’s easy enough. 
Take a look inside your [drive:]\%windir%\Microsoft.Net\Framework\v2.0.50727\ folder.  Here you’ll find a bunch of files.  If you were to run the InstallCommon.sql, InstallMembership.sql, InstallRoles.sql and InstallProfile.sql files against the database of your choices, you’d be installing the same membership, profile and role artifacts which are found in the aspnet.db to your own database.  Too much trouble?  Okay. Run [drive:]\%windir%\Microsoft.Net\Framework\v2.0.50727\aspnet_regsql.exe from the command line instead.  This will launch the ASP.NET SQL Server Setup Wizard which walks you through the installation of those same database objects into the new or existing database of your choice. You may not always have the luxury of using this tool on your destination server, but you should use it whenever you can.  Last tip: don’t forget to update the ApplicationServices connectionstring to point to your custom database after the setup is complete. At the risk of sounding like a smarty, everything I’ve mentioned in this post has been around for quite a while. The thing is that not everyone has had the opportunity to use it.  And it makes sense. I know I’ve worked on projects which used custom membership services.  Why bother with the out-of-the-box stuff, right?   And the .NET framework is so massive, who can know it all. Well, eventually you might have a chance to architect your own solution using any implementation you’d like or you will have the time to play around with another aspect of the framework.  When you do, think back to this post.
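    If you prefer the command line over the wizard, the same setup can be scripted; something like the following adds the membership (m), role (r) and profile (p) objects to a database of your choosing (the server, database name and trusted-connection switch here are illustrative - run aspnet_regsql.exe -? for the full option list):

        %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regsql.exe -S .\SQLEXPRESS -E -d MyAppServices -A mrp

    Afterwards, point the ApplicationServices connection string at that database, for example:

        <add name="ApplicationServices"
             connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=MyAppServices;Integrated Security=SSPI"
             providerName="System.Data.SqlClient"/>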

    Read the article

  • OFM 11g: OAM SSO for Forms and ADF Faces

    - by olaf.heimburger
    In my blog entry OFM 11g: Implementing OAM SSO with Forms we set the foundation for providing a complete Single Sign-On solution based on Oracle Access Manager (OAM). This foundation should now be used to combine Forms 11g and ADF Faces 11g applications with a transparent login.
    The Beginning
    Before we start, let's reconsider the requirements to achieve the ultimate goal. These are:
    - Access to the Forms 11g application must be authenticated by OAM (protected).
    - Access to the ADF Faces 11g application must be authenticated by OAM (protected).
    - Switching from one application to the other should not result in a re-authentication (aka single sign-on).
    - User identity should be available to the application without any extra work in the application code.
    All these are the common requirements for a single sign-on solution. The challenge here is that Forms relies on Oracle AS SSO (OSSO or "the old SSO") while ADF Faces is quite open and can be protected by Oracle AS SSO and Oracle Access Manager SSO (OAM SSO or "the modern SSO"). Both application types can use their own login mechanism.
    The Forms 11g Application
    To demonstrate the SSO functionality, we use the standard Forms test (/forms/frmservlet?form=test.fmx). Although this shows nothing specific in the Forms application, it is good enough to demonstrate that it is protected.
    The ADF Faces 11g Application
    With ADF 11g you can develop quite a number of useful Faces-based applications. Among many features, it comes with the ADF Security feature that provides you with functionality to protect your pages, regions, and even TaskFlows from unauthenticated usage in a declarative way. To demonstrate that functionality, a sample application with different access levels plus a login dialog is used. This application comes with a public page that has protected content (a button). Once you are authenticated for the application, the protected content and some personalisation (the user's name) is shown.
    Protecting Forms 11g
    As already explained in OFM 11g: Implementing OAM SSO with Forms, the easiest way to protect a Forms application is to configure it as an OSSO partner application, set up mod_osso, test it, migrate OSSO to OAM SSO with the Upgrade Agent, reconfigure mod_osso, and you are done. Sort of. By default OAM is configured to run in co-exist mode. This means that a user has to re-authenticate to the Forms application when already logged into an OAM SSO application. To avoid this, you must disable the co-exist mode, for example by using WLST and issuing disableCoexistMode on the OAM server.
    Protecting ADF Faces 11g
    To protect an ADF Faces 11g application we have to consider two scenarios: use an HTTPD server in front of WLS, or use WLS without an HTTPD server. Both scenarios have their pros and cons; we won't get into details and will just describe how to configure both.
    Scenario 1: HTTPD Server with WLS
    In this scenario we have to set up the environment in a few steps:
    - Configure a WebGate at OAM. This configuration can be done through the OAM console or by a script. No matter which way you choose, the WebGate configuration files will be created for you.
    - Install the OAM WebGate into an HTTPD server. The type of WebGate you need to install depends on your HTTPD server. With Oracle HTTP Server 11g you can use the latest OAM 11g WebGate. With other HTTPD servers you must resort to OAM 10g WebGates. An OAM 11g WebGate can use the pre-created configuration files supplied during the WebGate configuration at OAM. An OAM 10g WebGate asks for the specific configuration and verifies it during installation.
    - Configure the WLS plugin to forward the requests to WLS. Again, depending on your HTTPD server you have different plugins to forward requests to WLS. With OHS 11g you can use the pre-installed mod_wl_ohs plugin. Its configuration is quite simple and straightforward.
    - Configure an OAM SSPI Provider as an IdentityAsserter in WLS to retrieve the user identifier. This configuration is quite important as it retrieves the user identifier for the next step. If you have a SOA Suite installation within your OFM_HOME, the necessary software is already installed and you only need to set up your Security Realm within WLS. You can do this by pointing your browser to the WLS Console, logging in as administrator, selecting the Security Realm (usually myrealm), and selecting Providers. We add the OAMIdentityAsserter as the first SSPI Provider. It is important that the Control Flag is set to SUFFICIENT. Every other configuration can be left as is; no changes are necessary here.
    - Configure an OAM Identity Provider to get the real user identity. In OFM 11g: Implementing OAM SSO with Forms we configured an OID as Identity Store. To get the user identity we need to configure the same OID as an SSPI Provider for WLS. This will retrieve the real user information from OID and create the JAAS Subject and Principals to be used by any application within WLS. Again, you can do this by pointing your browser to the WLS Console, logging in as administrator, selecting the Security Realm (usually myrealm), and selecting Providers. Now add the OIDAuthenticator as the second SSPI Provider. It is important that the Control Flag is set to OPTIONAL. After we have saved this setup, we need to configure this provider by setting the Provider Specific details to access OID.
    Scenario 2: WLS only
    This scenario is a bit easier but requires more work in the WLS setup:
    - Configure a WebGate at OAM. This configuration can be done through the OAM console or by a script. No matter which way you choose, the WebGate configuration files will be created for you.
    - Configure the OAM SSPI Provider as an IdentityAuthenticator to authenticate and set the user identifier. When using the OAM SSPI Provider as OAMAuthenticator, we create it with the Control Flag set to SUFFICIENT. After saving it, the Provider Specific settings must be configured to allow the OAM SSPI Provider to connect to the OAM server.
    - Configure an OAM Identity Provider to get the real user identity. Again, you can do this by pointing your browser to the WLS Console, logging in as administrator, selecting the Security Realm (usually myrealm), and selecting Providers. Now add the OIDAuthenticator as the second SSPI Provider. It is important that the Control Flag is set to OPTIONAL. After we have saved this setup, we need to configure this provider by setting the Provider Specific details to access OID.
    Configure ADF 11g Application for OAM
    Actually, there are no changes to be made within the ADF application. We only need to add the value CLIENT_CERT to the <auth-mode> tag in the <login-config> tag in the web.xml file.
    Testing
    To test the configuration, simply point your browser to either of the application URLs. OAM should kick in and redirect you to the OAM Login page. After you have entered the correct credentials, access to the URLs is granted and you will see the application. Enjoy!
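    For reference, the web.xml change mentioned under "Configure ADF 11g Application for OAM" normally looks like this in servlet terms (the servlet spec spells the element auth-method and the value CLIENT-CERT):

        <login-config>
          <auth-method>CLIENT-CERT</auth-method>
        </login-config>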

    Read the article

  • How to Recover From a Virus Infection: 3 Things You Need to Do

    - by Chris Hoffman
    If your computer becomes infected with a virus or another piece of malware, removing the malware from your computer is only the first step. There’s more you need to do to ensure you’re secure. Note that not every antivirus alert is an actual infection. If your antivirus program catches a virus before it ever gets a chance to run on your computer, you’re safe. If it catches the malware later, you have a bigger problem. Change Your Passwords You’ve probably used your computer to log into your email, online banking websites, and other important accounts. Assuming you had malware on your computer, the malware could have logged your passwords and uploaded them to a malicious third party. With just your email account, the third party could reset your passwords on other websites and gain access to almost any of your online accounts. To prevent this, you’ll want to change the passwords for your important accounts — email, online banking, and whatever other important accounts you’ve logged into from the infected computer. You should probably use another computer that you know is clean to change the passwords, just to be safe. When changing your passwords, consider using a password manager to keep track of strong, unique passwords and two-factor authentication to prevent people from logging into your important accounts even if they know your password. This will help protect you in the future. Ensure the Malware Is Actually Removed Once malware gets access to your computer and starts running, it has the ability to do many more nasty things to your computer. For example, some malware may install rootkit software and attempt to hide itself from the system. Many types of Trojans also “open the floodgates” after they’re running, downloading many different types of malware from malicious web servers to the local system. In other words, if your computer was infected, you’ll want to take extra precautions. You shouldn’t assume it’s clean just because your antivirus removed what it found. It’s probably a good idea to scan your computer with multiple antivirus products to ensure maximum detection. You may also want to run a bootable antivirus program, which runs outside of Windows. Such bootable antivirus programs will be able to detect rootkits that hide themselves from Windows and even the software running within Windows. avast! offers the ability to quickly create a bootable CD or USB drive for scanning, as do many other antivirus programs. You may also want to reinstall Windows (or use the Refresh feature on Windows 8) to get your computer back to a clean state. This is more time-consuming, especially if you don’t have good backups and can’t get back up and running quickly, but this is the only way you can have 100% confidence that your Windows system isn’t infected. It’s all a matter of how paranoid you want to be. Figure Out How the Malware Arrived If your computer became infected, the malware must have arrived somehow. You’ll want to examine your computer’s security and your habits to prevent more malware from slipping through in the same way. Windows is complex. For example, there are over 50 different types of potentially dangerous file extensions that can contain malware to keep track of. We’ve tried to cover many of the most important security practices you should be following, but here are some of the more important questions to ask: Are you using an antivirus? – If you don’t have an antivirus installed, you should. 
If you have Microsoft Security Essentials (known as Windows Defender on Windows 8), you may want to switch to a different antivirus like the free version of avast!. Microsoft’s antivirus product has been doing very poorly in tests. Do you have Java installed? – Java is a huge source of security problems. The majority of computers on the Internet have an out-of-date, vulnerable version of Java installed, which would allow malicious websites to install malware on your computer. If you have Java installed, uninstall it. If you actually need Java for something (like Minecraft), at least disable the Java browser plugin. If you’re not sure whether you need Java, you probably don’t. Are any browser plugins out-of-date? – Visit Mozilla’s Plugin Check website (yes, it also works in other browsers, not just Firefox) and see if you have any critically vulnerable plugins installed. If you do, ensure you update them — or uninstall them. You probably don’t need older plugins like QuickTime or RealPlayer installed on your computer, although Flash is still widely used. Are your web browser and operating system set to automatically update? – You should be installing updates for Windows via Windows Update when they appear. Modern web browsers are set to automatically update, so they should be fine — unless you went out of your way to disable automatic updates. Using out-of-date web browsers and Windows versions is dangerous. Are you being careful about what you run? – Watch out when downloading software to ensure you don’t accidentally click sketchy advertisements and download harmful software. Avoid pirated software that may be full of malware. Don’t run programs from email attachments. Be careful about what you run and where you get it from in general. If you can’t figure out how the malware arrived because everything looks okay, there’s not much more you can do. Just try to follow proper security practices. You may also want to keep an extra-close eye on your credit card statement for a while if you did any online-shopping recently. As so much malware is now related to organized crime, credit card numbers are a popular target.     

    Read the article

  • ADF page security - the untold password rule

    - by ankuchak
    I'm kinda new to Oracle ADF. So, in this blog post I'm going to share something with you that I faced (and recovered from) recently. Initially I wondered whether I should put up a blog post on this at all, because it's totally simple. Still, simplicity is a relative term. So without wasting further time, let's kick off. I was exploring the ADF security aspect to secure a page through HTML basic authentication. The idea is very simple and the credential store etc. come into the picture. But I was not able to run a successful test of this phenomenally simple thing even after trying for over 30 minutes. This is what I did. I created a simple JSF page and put a panel in it. And I put a simple EL expression to show the current user name. Next I created a user to test with. I set the password to myuser, just to keep it simple. Then I created an enterprise role and mapped the user that I had just created. Then I created an application role and mapped the enterprise role to it. Then I mapped the resource, the simple JSF page in this case, to this application role. This way, only users with the given application role can access this page (as if you didn't know this, duh!). Of course, I had to create the page definition for the page before I could map it to an application role. What else? Done! Then I hit the run menu item and it all went well... Until... I got this message. I entered the correct credentials repeatedly, 2-3 times. Still I got the same error. Why? I didn't get any error message during the deployment. Nope. Then, as I said before, I spent over 30 minutes trying different things out, things like mapping only the user (not the role) to the page, changing the context root, etc. Nothing worked! Then of course, I bothered to look at the logs and found this. See the first red line. That says it all. So the problem was with that password. The password must have at least one special character and one digit in it. I think I was misled by the missing password hint/rule and the fact that the deployment didn't fail even though the user was not created properly. Well, yes, I agree that I was foolish enough not to look at the logs. Later I changed the password to something like myuser123#. And it worked. I hope it helped.

    Read the article

  • Spring Security: session expiration without redirect to expired-url?

    - by Kdeveloper
    I'm using Spring Security 3.0.2 with form-based authentication, but I can't figure out how to configure it so that when a session expires the request is not redirected to another page (expired-url) and no 'session expired' message is displayed. I don't want any redirect or message; I want an anonymous session to be started, just like when a user without a session enters the website. My current configuration: <http> <intercept-url pattern="/login.action*" filters="none"/> <intercept-url pattern="/admin/**" access="ROLE_ADMIN" /> <intercept-url pattern="/**" access="IS_AUTHENTICATED_ANONYMOUSLY"/> <form-login login-page="/login.action" authentication-failure-url="/login.action?error=failed" login-processing-url="/login-handler.action"/> <logout logout-url="/logoff-execute.action" logout-success-url="/logoff.action?done=1"/> <remember-me key="remember-me-security" services-ref="rememberMeServices"/> <session-management > <concurrency-control max-sessions="1" error-if-maximum-exceeded="false" expired-url="/login.action?error=expired.url"/> </session-management> </http>

    Read the article

  • How do I use a custom authentication mechanism for a Java web application with Spring Security?

    - by Adam
    Hi, I'm working on a project to convert an existing Java web application to use Spring Web MVC. As a part of this I will migrate the existing log-on/log-off mechanism to use Spring Security. The idea at this stage is to replicate the existing functionality and replace only the web layer, leaving the service classes and objects in place. The required functionality is simple. Access is controlled to URLs and to access certain pages the user must log on. Authentication is performed with a simple username and password along with an extra static piece of information that comes from the login page. There is no notion of a role: once a user has logged on they have access to all of the pages. Behind the scenes, the service layer has a class with a simple authentication method: doAuthenticate(String username, String password, String info) throws ServiceException An exception is thrown if the login fails. I'd like to leave this existing service object that does the authentication intact but to "plug it into" the Spring Security mechanism. Can somebody suggest the best approach to take for this please? Naturally, I'd like to take the path of least resistance and leave the work where possible to Spring... Thanks in advance, Adam.

    Read the article

  • Are there any security issues to avoid when providing an email-or-username-can-act-as-username login

    - by Tchalvak
    I am in the process of moving from a "username/password" system to one that uses email for login. I don't think that there's any horrible problem with allowing either email or username for login, and I remember seeing sites that I consider somewhat respectable doing it as well, but I'd like to be aware of any major security flaws that I may be introducing. More specifically, here is the pertinent function (the query_row function parameterizes the sql). function authenticate($p_user, $p_pass) { $user = (string)$p_user; $pass = (string)$p_pass; $returnValue = false; if ($user != '' && $pass != '') { // Allow login via username or email. $sql = "SELECT account_id, account_identity, uname, player_id FROM accounts join account_players on account_id=_account_id join players on player_id = _player_id WHERE lower(account_identity) = lower(:login) OR lower(uname) = lower(:login) AND phash = crypt(:pass, phash)"; $returnValue = query_row($sql, array(':login'=>$user, ':pass'=>$pass)); } return $returnValue; } Notably, I have added the WHERE lower(account_identity) = lower(:login) OR lower(uname) = lower(:login) ...etc section to allow graceful backwards compatibility for users who won't be used to using their email for the login procedure. I'm not completely sure that that OR is safe, though. Are there some ways that I should tighten the security of the php code above?
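    Independent of the email-versus-username question, note that SQL gives AND higher precedence than OR, so as written the phash check only binds to the uname branch: a row matching account_identity would satisfy the WHERE clause with any password. Parenthesising the alternation (column and parameter names as in the post) closes that hole:

        WHERE (lower(account_identity) = lower(:login) OR lower(uname) = lower(:login))
          AND phash = crypt(:pass, phash)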

    Read the article

  • ASP.NET WebAPI Security 5: JavaScript Clients

    - by Your DisplayName here!
    All samples I showed in my last post were in C#. Christian contributed another client sample in some strange language that is supposed to work well in browsers ;) JavaScript client scenarios There are two fundamental scenarios when it comes to JavaScript clients. The most common is probably that the JS code is originating from the same web application that also contains the web APIs. Think a web page that does some AJAX style callbacks to an API that belongs to that web app – Validation, data access etc. come to mind. Single page apps often fall in that category. The good news here is that this scenario just works. The typical course of events is that the user first logs on to the web application – which will result in an authentication cookie of some sort. That cookie will get round-tripped with your AJAX calls and ASP.NET does its magic to establish a client identity context. Since WebAPI inherits the security context from its (web) host, the client identity is also available here. The other fundamental scenario is JavaScript code *not* running in the context of the WebAPI hosting application. This is more or less just like a normal desktop client – either running in the browser, or if you think of Windows 8 Metro style apps as “real” desktop apps. In that scenario we do exactly the same as the samples did in my last post – obtain a token, then use it to call the service. Obtaining a token from IdentityServer’s resource owner credential OAuth2 endpoint could look like this: thinktectureIdentityModel.BrokeredAuthentication = function (stsEndpointAddress, scope) {     this.stsEndpointAddress = stsEndpointAddress;     this.scope = scope; }; thinktectureIdentityModel.BrokeredAuthentication.prototype = function () {     getIdpToken = function (un, pw, callback) {         $.ajax({             type: 'POST',             cache: false,             url: this.stsEndpointAddress,             data: { grant_type: "password", username: un, password: pw, scope: this.scope },             success: function (result) {                 callback(result.access_token);             },             error: function (error) {                 if (error.status == 401) {                     alert('Unauthorized');                 }                 else {                     alert('Error calling STS: ' + error.responseText);                 }             }         });     };     createAuthenticationHeader = function (token) {         var tok = 'IdSrv ' + token;         return tok;     };     return {         getIdpToken: getIdpToken,         createAuthenticationHeader: createAuthenticationHeader     }; } (); Calling the service with the requested token could look like this: function getIdentityClaimsFromService() {     authHeader = authN.createAuthenticationHeader(token);     $.ajax({         type: 'GET',         cache: false,         url: serviceEndpoint,         beforeSend: function (req) {             req.setRequestHeader('Authorization', authHeader);         },         success: function (result) {              $.each(result.Claims, function (key, val) {                 $('#claims').append($('<li>' + val.Value + '</li>'))             });         },         error: function (error) {             alert('Error: ' + error.responseText);         }     }); I updated the github repository, you can can play around with the code yourself.
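    For completeness, the first (same-origin) scenario needs nothing special on the service side either; once the host has authenticated the request, a controller simply sees the ambient principal. A minimal sketch (the controller name and the shape of the return value are mine):

        using System.Web.Http;

        [Authorize]  // rejects calls that do not carry the host's authentication
        public class WhoAmIController : ApiController
        {
            public object Get()
            {
                // User is inherited from the hosting ASP.NET pipeline.
                return new
                {
                    Name = User.Identity.Name,
                    Authenticated = User.Identity.IsAuthenticated
                };
            }
        }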

    Read the article

  • Leaks on Wikis: "Corporations...You're Next!" Oracle Desktop Virtualization Can Help.

    - by adam.hawley
    Between all the press coverage on the unauthorized release of 251,287 diplomatic documents and on previous extensive releases of classified documents on the events in Iraq and Afghanistan, one could be forgiven for thinking massive leaks are really an issue for governments, but it is not: it is an issue for corporations as well. In fact, corporations are apparently set to be the next big target for things like Wikileaks. Just the threat of such a release against one corporation recently caused the price of their stock to drop 3% after the leak organization claimed to have 5GB of information from inside the company, with the implication that it might be damaging or embarrassing information. At the time of this blog post, anyway, we don't know yet if that is true or how they got the information, but how did the diplomatic cable leak happen? For the diplomatic cables, according to press reports, a private in the military, with some appropriate level of security clearance (that is, he apparently had the correct level of security clearance to be accessing the information...he reportedly didn't "hack" his way through anything to get to the documents, which might have raised some red flags...), is accused of accessing the material and copying it onto a writeable CD labeled "Lady Gaga" and walking out the door with it. Upload and... Done. In the same article, the accused is quoted as saying "Information should be free. It belongs in the public domain." Now think about all the confidential information in your company or non-profit... from credit card information, to phone records, to customer or donor lists, to corporate strategy documents, product cost information, etc, etc.... And then think about that last quote above from what was a very junior level person in the organization...still feeling comfortable with your ability to control all your information? So what can you do to guard against these types of breaches where there is no outsider (or even insider) intrusion to detect per se, but rather someone with malicious intent is physically walking out the door with data that they are otherwise allowed to access in their daily work? A major first step is to make it physically, logistically much harder to walk away with the information. If the user with malicious intent has no way to copy to removable or mobile media (USB sticks, thumb drives, CDs, DVDs, memory cards, or even laptop disk drives) then, as a practical matter, it is much more difficult to physically move the information outside the firewall. But how can you control access tightly and reliably and still keep your hundreds or even thousands of users productive in their daily job? Oracle Desktop Virtualization products can help. Oracle's comprehensive suite of desktop virtualization and access products allow your applications and, most importantly, the related data, to stay in the (highly secured) data center while still allowing secure access from just about anywhere your users need to be to be productive. Users can securely access all the data they need to do their job, whether from work, from home, or on the road and in the field, but fully configurable policies set up centrally by privileged administrators allow you to control whether, for instance, they are allowed to print documents or use USB devices or other removable media.
Centrally set policies can also control not only whether they can download to removable devices, but also whether they can upload information (see StuxNet for why that is important...)In fact, by using Sun Ray Client desktop hardware, which does not contain any disk drives, or removable media drives, even theft of the desktop device itself would not make you vulnerable to data loss, unlike a laptop that can be stolen with hundreds of gigabytes of information on its disk drive.  And for extreme security situations, Sun Ray Clients even come standard with the ability to use fibre optic ethernet networking to each client to prevent the possibility of unauthorized monitoring of network traffic.But even without Sun Ray Client hardware, users can leverage Oracle's Secure Global Desktop software or the Oracle Virtual Desktop Client to securely access server-resident applications, desktop sessions, or full desktop virtual machines without persisting any application data on the desktop or laptop being used to access the information.  And, again, even in this context, the Oracle products allow you to control what gets uploaded, downloaded, or printed for example.Another benefit of Oracle's Desktop Virtualization and access products is the ability to rapidly and easily shut off user access centrally through administrative polices if, for example, an employee changes roles or leaves the company and should no longer have access to the information.Oracle's Desktop Virtualization suite of products can help reduce operating expense and increase user productivity, and those are good reasons alone to consider their use.  But the dynamics of today's world dictate that security is one of the top reasons for implementing a virtual desktop architecture in enterprises.For more information on these products, view the webpages on www.oracle.com and the Oracle Technology Network website.

    Read the article

  • How to find domain registrar and DNS hosting with good DNSSEC support?

    - by rsp
    Simplified problem: I want to buy a domain and build a website that is fully secured with DNSSEC.
    Background: I've been hearing about the insecurity of DNS for years. I've watched all of the talks by Dan Kaminsky and others, from DNS exploits to The Future of DNS Security Panel. I knew that using DNS without security is a disaster waiting to happen. I followed the development of the DNSSEC standard. I celebrated the key signing ceremony. Everything was on the right track to finally have a secure DNS system in place. Now, more than two years later, I want to do what everyone said I should do: use DNSSEC for a new domain. So I need a domain registrar and a DNS hosting service that support DNSSEC. Surprisingly, it is not easy to even find out who supports DNSSEC. It was actually much easier to find information on DNSSEC two years ago, when everyone was going to support it Real Soon Now; years have passed and I see hardly any progress. I just hope that I was looking in the wrong places and that someone here will clear up my doubts. I hope that other people who want a secure website will also find this question useful.
    What is needed: a registrar and DNS servers with full DNSSEC support for .com domains.
    What is not needed: IPv6 support, web hosting, or anything more.
    What I found out so far:
    • Go Daddy offers a Premium DNS service for an additional $36 per year that lets you "Secure up to 5 domains with DNSSEC".
    • easyDNS has DNSSEC available in beta across all service levels (you need to enable the "beta" flag in the configuration), but it doesn't seem production ready, and judging from the lack of updates it isn't a high-priority feature (the last update on the easyDNS blog is from March 2011).
    • Name.com, according to The Register ("US domain registrar does IPv6, DNSSEC"), has had DNSSEC support since 2010, but right now (October 2012) I couldn't find anything related to DNSSEC on their website.
    • Dynadot, which is very often recommended, doesn't support DNSSEC.
    • Namecheap, also often recommended, doesn't support DNSSEC. A support answer from 2011 suggested it was being added, but in 2012 customers are still given no ETA.
    • DynDNS was supposed to support DNSSEC. I found a link explaining their DNSSEC support, but it gives a 404 Not Found page and a search box; searching for DNSSEC returns "No results were found for your query."
    • GKG was recommended online for DNSSEC support, but it's hard to find any information on the level of that support: their FAQ briefly explains what DNSSEC is and how to sign Delegation Signer records, but says nothing about the level of actual support.
    • Ask Slashdot: Which Registrars Support DNSSEC? (July 2011): answers list Go Daddy, DynDNS, GKG, and Name.com as registrars that support DNSSEC, but see above.
    Related questions:
    1. How to find web hosting that meets my requirements?
    2. What is needed to add DNSSEC to my site?
    3. DNS hosting better managed by Domain provider or Hosting provider?
    4. Registrar with good security, DNS hosting, and DNSSEC and IPv6 resolvers?
    In no. 1 no one mentions DNS at all. In no. 2 the answers only mention the .se TLD; there are very few answers and they seem outdated. In no. 3 one answer says "On projects that demand higher security, I might look for a web host that supports DNSSEC," but no more information is provided. The only relevant answers are in no. 4, where easyDNS is recommended by someone who has never used them personally; meanwhile, as of October 2012, DNSSEC support is described as "in beta" on the easyDNS feature list. Another answer recommends SiteGround, but searching their site for DNSSEC returns no results. Other answers recommend web hosting providers that don't meet the DNSSEC requirement. Also, the question mentioned above lists nine very specific requirements besides DNSSEC (e.g. HTTP-only login cookies, two-factor authentication, no DNS record limits, DNS statistics of queries/day, audit trails, etc.), which might have excluded many possible recommendations if one is only interested in DNSSEC support.
    Conclusions: I thought that by the end of 2012 DNSSEC support among domain registrars and DNS providers would be nearly universal. I am shocked that it seems virtually nonexistent. Is this the result of serious problems with DNSSEC adoption, or is it simply no longer a hot topic that anyone bothers with? According to the DNSSEC Scoreboard, roughly 0.1% of .com domains support DNSSEC. Could that be caused by the lack of DNSSEC support among registrars and DNS providers, is the information too hard to find, or does no one care? There isn't even a "dnssec" tag here.
    Questions: The information is surprisingly hard to find, which is why I am asking for first-hand experience and personal recommendations. Has anyone here actually set up a website with DNSSEC, from domain registration to the configuration of DNS servers? Can anyone recommend any of the registrars mentioned above? Can anyone recommend a registrar not mentioned above?
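    For anyone evaluating providers, it helps to be able to verify what is actually published for a domain. Below is a minimal sketch, assuming the third-party dnspython library (not mentioned in the question) and Python 3, that checks two things: whether a DS record exists in the parent zone, which indicates the registrar has published the DNSSEC delegation, and whether a validating resolver sets the AD (authenticated data) flag when answering for the zone. The domains and the 8.8.8.8 resolver are just examples.

        # Sketch only: quick DNSSEC checks with dnspython (pip install dnspython, 2.x API).
        import dns.flags
        import dns.message
        import dns.query
        import dns.rdatatype
        import dns.resolver

        def has_ds_record(domain: str) -> bool:
            """A DS record in the parent zone means the registrar has published
            the DNSSEC delegation for this domain."""
            try:
                return len(dns.resolver.resolve(domain, "DS")) > 0
            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                return False

        def ad_flag_set(domain: str, resolver_ip: str = "8.8.8.8") -> bool:
            """Query a validating resolver with DNSSEC OK set and report whether
            it returned the AD (authenticated data) flag for the A record."""
            query = dns.message.make_query(domain, dns.rdatatype.A, want_dnssec=True)
            response = dns.query.udp(query, resolver_ip, timeout=5)
            return bool(response.flags & dns.flags.AD)

        if __name__ == "__main__":
            for name in ("ietf.org", "example.com"):
                print(name, "DS:", has_ds_record(name), "AD:", ad_flag_set(name))

    Note that the second check is only meaningful if the resolver you query actually performs DNSSEC validation; any local validating resolver works just as well as the public one used in the example.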

    Read the article

  • Take Advantage of Oracle's Ongoing Assurance Effort!

    - by eric.maurice
    Hi, this is Eric Maurice again! A few years ago, I posted a blog entry that discussed the psychology of patching. The point of that entry was that systems and database administrators have a natural tendency to be reluctant to apply patches, even security patches, for fear of "breaking" the system. Unfortunately, this belief in the principle "if it ain't broke, don't fix it!" creates significant risks for organizations. Running systems without the proper security patches can greatly compromise the security posture of the organization, because the security controls available in the affected system may be undermined by the unfixed vulnerabilities. As a result, Oracle continues to strongly recommend that customers apply all security fixes as soon as possible.
    Most recently, I have had a number of conversations with customers who questioned the need to upgrade their highly stable but otherwise unsupported Oracle systems. These customers wanted to know more about the kind of security risks they were exposed to by running obsolete versions of Oracle software. As per Oracle Support Policies, Critical Patch Updates are produced for currently supported products. In other words, Critical Patch Updates are not created for product versions that are no longer covered under the Premier Support or Extended Support phases of the Lifetime Support Policy.
    One statement used in each Critical Patch Update Advisory is particularly important: "We recommend that customers upgrade to a supported version of Oracle products in order to obtain patches. Unsupported products, releases and versions are not tested for the presence of vulnerabilities addressed by this Critical Patch Update. However, it is likely that earlier versions of affected releases are also affected by these vulnerabilities." The purpose of this warning is to inform Oracle customers that a number of the vulnerabilities fixed in each Critical Patch Update may affect older versions of a specific product line. In other words, each Critical Patch Update provides fixes for currently supported versions of a given product line (this information is listed for each bug in the Risk Matrices of the Critical Patch Update Advisory), but unsupported versions in the same product line, while they may be affected by the vulnerabilities, will not receive the fixes and therefore remain vulnerable to attack. The risk assumed by organizations wishing to remain on unsupported versions is amplified by the behavior of malicious hackers, who typically attempt to, and sometimes succeed in, reverse-engineering the content of vendors' security fixes. As a result, it is not uncommon for exploits to be published soon after Oracle discloses vulnerabilities with the release of a Critical Patch Update or Security Alert.
    Let's now consider the nature of the vulnerabilities that may exist in obsolete versions of Oracle software. A number of severe vulnerabilities have been fixed by Oracle over the years. While Oracle does not test unsupported products, releases and versions for the presence of vulnerabilities addressed by each Critical Patch Update, it should be assumed that a number of the vulnerabilities fixed through the Critical Patch Update program do exist in unsupported versions, regardless of the product considered. The most severe vulnerabilities fixed in past Critical Patch Updates may result in full compromise of the targeted systems, down to the OS level, by remote and unauthenticated users (these vulnerabilities receive a CVSS Base Score of 10.0) or, almost as critically, may result in compromise of the affected systems without compromising the underlying OS, again by remote and unauthenticated users (these vulnerabilities receive a CVSS Base Score of 7.5). Such vulnerabilities may allow complete takeover of the targeted machine (for CVSS 10.0), or may allow the attacker to mount a denial of service against the affected system or even hijack or steal all the data hosted by the compromised system (for CVSS 7.5).
    The bottom line is that organizations should assume the worst case: that the most critical vulnerabilities are present in their unsupported versions. It is therefore Oracle's recommendation that all organizations move to supported systems and apply security patches in a timely fashion. Organizations that currently run supported versions but are late in their security patch release level can quickly catch up, because most Critical Patch Updates are cumulative. With a few exceptions noted in Oracle's Critical Patch Update Advisory, applying the most recent Critical Patch Update will bring these products to the current security patch level and provide the organization with the best possible security posture for their patch level. Furthermore, organizations are encouraged to upgrade to the most recent versions, as this will greatly improve their security posture. At Oracle, our security fixing policies state that security fixes are produced for the main code line first, and as a result our products benefit from the mistakes made in previous versions. Our ongoing assurance effort ensures that we work diligently to fix the vulnerabilities we find and aim at constantly improving the security posture our products provide by default. Patch sets include numerous in-depth fixes in addition to those delivered through the Critical Patch Update and, in certain instances, important security fixes require major architectural changes that can only be included in new product releases (and cannot be backported through the Critical Patch Update program).
    For More Information:
    • Mary Ann Davidson is giving a webcast interview on Oracle Software Security Assurance on February 24th. The registration link for attending this webcast is located at http://event.on24.com/r.htm?e=280304&s=1&k=6A7152F62313CA09F77EBCEEA9B6294F&partnerref=EricMblog
    • A blog entry discussing Oracle's practices for ensuring the quality of Critical Patch Updates can be found at http://blogs.oracle.com/security/2009/07/ensuring_critical_patch_update_quality.html
    • The blog entry "To patch or not to patch" is located at http://blogs.oracle.com/security/2008/01/to_patch_or_not_to_patch.html
    • Oracle's Support Policies are located at http://www.oracle.com/us/support/policies/index.html
    • The Critical Patch Update & Security Alert page is located at http://www.oracle.com/technetwork/topics/security/alerts-086861.html
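    As a small, practical illustration of the "catch up with the latest cumulative Critical Patch Update" point, here is a hypothetical Python sketch that flags systems whose last applied CPU is older than one quarterly release cycle. The inventory format, host names, and threshold are invented for illustration; this is not an Oracle-provided tool.

        # Hypothetical sketch (Python 3.9+): flag hosts whose last applied Critical
        # Patch Update is older than one quarterly cycle. The inventory is invented.
        from datetime import date

        QUARTER_DAYS = 92  # Critical Patch Updates are released on a quarterly schedule

        inventory = {
            "erp-db-01":  date(2010, 10, 12),
            "hr-db-02":   date(2009, 4, 14),   # several cycles behind
            "crm-app-01": date(2011, 1, 18),
        }

        def lagging_systems(inv: dict[str, date], today: date) -> list[str]:
            """Return hosts with no CPU applied within the last quarterly cycle.
            Because most CPUs are cumulative, applying the latest one brings a
            lagging host back to the current security patch level."""
            return [host for host, applied in inv.items()
                    if (today - applied).days > QUARTER_DAYS]

        if __name__ == "__main__":
            for host in lagging_systems(inventory, date(2011, 2, 1)):
                print(host, "is overdue for the most recent Critical Patch Update")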

    Read the article

  • How do you make Bastille work and secure Ubuntu 12.04? It doesn't work for me: `sudo bastille -x`

    - by BobMil
    I was able to install Bastille from the normal repositories and then run the GUI. After going through the options and clicking OK to apply, it showed the errors below. Do you know why Bastille won't work on Ubuntu 12.04?
    NOTE: Executing PSAD Specific Configuration
    NOTE: Executing File Permissions Specific Configuration
    NOTE: Executing Account Security Specific Configuration
    NOTE: Executing Boot Security Specific Configuration
    ERROR: Unable to open /etc/inittab as the swap file /etc/inittab.bastille already exists. Rename the swap file to allow Bastille to make desired file modifications.
    ERROR: open /etc/inittab.bastille failed...
    ERROR: open /etc/inittab failed.
    ERROR: Couldn't insert line to /etc/inittab, since open failed.
    NOTE: Executing Inetd Specific Configuration
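    The error message itself points at the workaround: Bastille refuses to touch /etc/inittab while a stale swap file (/etc/inittab.bastille) is left over from a previous, interrupted run. Below is a minimal sketch, to be run as root and reviewed before trusting, that renames any leftover .bastille swap files under /etc before re-running sudo bastille -x. The ".stale-<timestamp>" suffix is an arbitrary choice, not something Bastille expects.

        #!/usr/bin/env python3
        # Sketch only: rename leftover Bastille swap files (such as /etc/inittab.bastille)
        # so Bastille can re-open the real files on its next run. Run as root and
        # review the printed list; the ".stale-<timestamp>" suffix is arbitrary.
        import os
        import time

        def rename_stale_swap_files(root: str = "/etc") -> None:
            stamp = time.strftime("%Y%m%d-%H%M%S")
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    if name.endswith(".bastille"):
                        src = os.path.join(dirpath, name)
                        dst = f"{src}.stale-{stamp}"
                        print(f"renaming {src} -> {dst}")
                        os.rename(src, dst)

        if __name__ == "__main__":
            rename_stale_swap_files()

    After that, re-running sudo bastille -x should at least get past the "swap file already exists" errors; whether the remaining hardening steps apply cleanly on Ubuntu 12.04 is a separate question.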

    Read the article

  • What is the career path for a software developer/programmer? [closed]

    - by Lo Wai Lun
    I've been working as a programmer for a few months, and I often study CCNA and CISSP material for the future. Besides simple coding, I have been working on specs, designing applications, and all the things around them. My question is this: I want to become an information/system security specialist. What career path should I be aiming for? Is it working on code for the rest of my life? :) Restarting my career as a network engineer? Or do programmers make good candidates for manager positions? I know it's very subjective. The thing is, lately I find myself much more interested in the design and spec work of a development project than in the coding itself. How do you see it? Would you like to move from development to information security? Would you like to work on a project with a manager who used to be a coder?

    Read the article

< Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >