Search Results

Search found 13475 results on 539 pages for 'transport security'.


  • Configure Forms based authentication in SharePoint 2010

    - by sreejukg
      Configuring form authentication is a straightforward task in SharePoint. Most public-facing websites built on SharePoint require forms-based authentication. Recently, a WCM implementation where I was part of the project team required a registration system. Any internet user can register on the site, and the site offers them some membership-specific functionality once they are logged in. Since registration is open to all, I did not want to store all those users in Active Directory, so I decided to use forms-based authentication for them. This is a typical scenario of form authentication in a SharePoint implementation. To implement form authentication you require the following: a data store where you store the users – technically this can be Active Directory, a SQL Server database, LDAP etc. Form authentication will redirect the user to the login page if the request is not authenticated. In the login page, there will be controls that validate the user inputs against the configured data store. In this article, I am going to use a SQL Server database with the ASP.NET membership APIs to configure forms-based authentication in SharePoint 2010. This article assumes that you have a SQL membership database available. I already configured the membership and roles database using the aspnet_regsql command. If you want to know how to configure the membership database using aspnet_regsql, read the blog post below. http://weblogs.asp.net/sreejukg/archive/2011/06/16/usage-of-aspnet-regsql-exe-in-asp-net-4.aspx The snapshot of the database after implementing membership and role manager is as follows. I have used the database name “aspnetdb_claim”. Make sure you have created the database and that it contains the tables and stored procedures for membership. Create a web application with claims-based authentication. This article assumes you have already created a web application using claims-based authentication; if you want to enable forms-based authentication in SharePoint 2010, you must enable claims-based authentication. Read this post for creating a web application using claims-based authentication. http://weblogs.asp.net/sreejukg/archive/2011/06/15/create-a-web-application-in-sharepoint-2010-using-claims-based-authentication.aspx Make sure you have selected Enable Forms Based Authentication and then entered the membership provider and role manager names. To confirm the configuration, navigate to the Central Administration website, go to the Web Applications page, select the web application and click the Authentication Providers icon; you will see the authentication providers for the current web application. Go to the section Claims Authentication Types and make sure forms-based authentication is enabled. As shown in the snapshot, I have named the membership provider SPFormAuthMembership and the role manager SPFormAuthRoleManager. You can choose your own names as you need.

      Modify the configuration files (web.config) to enable form authentication. There are three applications that need to be configured to support form authentication: Central Administration – if you want to assign permissions to the web application using credentials from form authentication, you need to update the Central Administration configuration; if you do not need to use form authentication credentials from Central Administration, skip this step. STS service application – the Security Token Service is the service application that issues security tokens when users log in. You need to modify the configuration of the STS application to make sure users are able to log in. To find the STS application, follow these steps: go to IIS Manager, expand the Sites node and you will see SharePoint Web Services; expand SharePoint Web Services and you will see SecurityTokenServiceApplication; right-click SecurityTokenServiceApplication and click Explore, and it will open the corresponding file system location. By default, the path for STS is C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\WebServices\SecurityToken. You need to modify the configuration file available in that location. The web application that needs to be enabled with form authentication – you need to modify the configuration of your web application to make sure it identifies users from form authentication. Based on the above, I am going to modify the web configuration. At the end of each step, I have mentioned the expected output. I recommend you go step by step and, after each step, make sure the configuration changes are working as expected. If you do everything all together and only test your application at the end, you may have difficulty troubleshooting configuration errors.

      Modifications for Central Administration web.config: open the web.config for Central Administration in a text editor (I always prefer Visual Studio for editing web.config). In most cases, the path of the web.config for the Central Administration website is C:\inetpub\wwwroot\wss\VirtualDirectories\<port number>. Make sure you keep a backup copy of the web.config before editing it. Let me summarize what we are going to do with the Central Administration web.config. First I am going to add a connection string that points to the form authentication database I created as mentioned in the previous steps. Then I need to add a membership provider and a role manager with the corresponding connection string. Then I need to update the PeoplePickerWildcards section to make sure the users appear in search results. By default there is no connection string in the web.config of Central Administration. Add a connection string just after the configSections element. The below is the connection string I have used throughout the article. <add name="FormAuthConnString" connectionString="Initial Catalog=yourdatabasename;data source=databaseservername;Integrated Security=SSPI;" /> Once you have added the connection string, the web.config will look similar to the snapshot. Now add the membership provider. In the web.config for Central Administration there will be a <membership> tag; search for it. You will find membership and role manager under the <system.web> element. Under the membership providers section add the below code… <add name="SPFormAuthMembership" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> After adding the membership element, see the snapshot of the web.config. Now you need to add the role manager element to the web.config. Inside the providers element under roleManager, add the below code. <add name="SPFormAuthRoleManager" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> After adding it, your role manager will look similar to the following. As a last step, you need to update the people picker wildcard element in web.config, so that the users from your membership provider are available for browsing in Central Administration. Search for PeoplePickerWildcards in the web.config and add the following inside the <PeoplePickerWildcards> tag. <add key="SPFormAuthMembership" value="%" /> After adding this element, your web.config will look like the snapshot. After completing these steps, you can browse the users available in the SQL Server database from the Central Administration website. Go to the site collection administrators page from Central Administration. Select the site collection you have created for form authentication. Click on the people picker icon, choose Forms Auth and click on the search icon; you will see the users listed from the SQL Server database. Once you complete these steps, make sure the users are available for browsing from the Central Administration website. If you are unable to find the users, there must be some errors in the configuration; check the Windows event logs to find related errors and fix them.

      Change the web.config for the STS application: open the web.config for the STS application in a text editor. By default, the STS web.config does not have a system.web or connectionStrings section. Just after the system.webServer element, add the following code. <connectionStrings> <add name="FormAuthConnString" connectionString="Initial Catalog=aspnetdb_claim;data source=sp2010_db;Integrated Security=SSPI;" /> </connectionStrings> <system.web> <roleManager enabled="true" cacheRolesInCookie="false" cookieName=".ASPXROLES" cookieTimeout="30" cookiePath="/" cookieRequireSSL="false" cookieSlidingExpiration="true" cookieProtection="All" createPersistentCookie="false" maxCachedResults="25"> <providers> <add name="SPFormAuthRoleManager" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> </providers> </roleManager> <membership userIsOnlineTimeWindow="15" hashAlgorithmType=""> <providers> <add name="SPFormAuthMembership" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> </providers> </membership> </system.web> See the snapshot of the web.config after adding the required elements. After adding this, you should be able to log in using the credentials from SQL Server. Try assigning a user as primary/secondary administrator for your site collection from Central Administration and log in to your site using form authentication. If you did everything correctly, you should be able to log in, which means you have successfully completed the STS configuration.

      Configuration of the web application for form authentication: as a last step, you need to modify the web.config of the form authentication web application. Once you have done this, you should be able to grant permissions to users stored in the membership database. Open the web.config of the web application you created for form authentication. You can find the web.config for the application under the path C:\inetpub\wwwroot\wss\VirtualDirectories\<port number>. Basically you need to add the connection string, the membership provider and the role manager, and update the people picker wildcard configuration. Add the connection string (the same one you added to the web.config in Central Administration). See the screenshot after the connection string has been added. Search for <membership> in the web.config; you will find it inside the system.web element. There will be other providers already available there. Add your form authentication membership provider (similar to the one added to the Central Administration web.config) to the providers element under membership. Find the snapshot of the membership configuration as follows. Search for the <roleManager> element in web.config and add the new provider under the providers section of the roleManager element. See the snapshot of the web.config after the new provider has been added. Now you need to configure the PeoplePickerWildcards setting in web.config. As I specified earlier, this is to make sure you can locate users by entering part of their username. Add the same <add key="SPFormAuthMembership" value="%" /> entry under the <PeoplePickerWildcards> element in web.config. See the screenshot of the PeoplePickerWildcards element after the entry has been added. Now you have completed all the setup for form authentication. Navigate to the web application. From Site Actions -> Site Settings, go to People and Groups. Click New -> Add Users; it will pop up the people picker dialog. Click on the icon, select Forms Auth, enter a username in the search textbox and click the search icon. See the screenshot of the admin search when I tried searching for users. If it displays the user, you are done with the configuration. If you add users to the form authentication database, the users will be able to access the SharePoint portal as normal.
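    For reference, here is a minimal consolidated sketch of what these additions to the form-authentication web application's web.config end up looking like. It assumes the provider names (SPFormAuthMembership, SPFormAuthRoleManager), application name and connection string used earlier in this article; the database and server names are placeholders, and the existing SharePoint-generated entries in each section are omitted.

      <!-- Sketch only: merge these entries into the matching sections of the existing web.config -->
      <connectionStrings>
        <add name="FormAuthConnString" connectionString="Initial Catalog=aspnetdb_claim;data source=databaseservername;Integrated Security=SSPI;" />
      </connectionStrings>
      <system.web>
        <membership>
          <providers>
            <add name="SPFormAuthMembership" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" />
          </providers>
        </membership>
        <roleManager>
          <providers>
            <add name="SPFormAuthRoleManager" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" />
          </providers>
        </roleManager>
      </system.web>
      <!-- Inside the existing <PeoplePickerWildcards> element of the web.config -->
      <add key="SPFormAuthMembership" value="%" />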

    Read the article

  • Protecting a WebCenter app with OAM 11g - the Webcenter side

    - by Martin Deh
    Recently, there was a customer requirement to enable a WebCenter custom portal application to have multiple login-type pages and have the authentication handled through Oracle Access Manager (OAM). As my security colleagues would tell me, this is fully supported through OAM. Basically, all that has to be done is to define in OAM the individual resources (directories, URLs, etc.) that need to be secured. Once that is done, OAM handles the rest and the user is typically then prompted by a login page, which is provided by OAM. I am not going to discuss OAM security in this blog. In addition, my colleague Chris Johnson (ATEAM security) has already blogged his side of the story here: http://fusionsecurity.blogspot.com/2012/06/protecting-webcenter-app-with-oam-11g.html . What I am going to cover is what was done on the WebCenter/ADF side of things. In the test application, the structure of the pages defined in the pages.xml is as follows. In this screenshot, notice that "Delegated Security" has been selected, and note the absence of the anonymous-role for the "secured" page (A - B is the same). In the WebCenter world this essentially means that each of these pages is protected, and only accessible by those defined by the application's "role". For more information on how WebCenter handles security, which by the way extends from ADF security, please refer to the documentation. The (default) navigation model was configured. You can see that with this setup, a user will be able to view the "links", where the links define navigation to the respective page. Note from this dialog that you could also set some security on each link via the "visible" property. However, the recommended best practice is to set the permissions through the page hierarchy (pages.xml). Now based on this setup, the expected behavior is that I could see the link for the secured A page only if I was already authenticated (logged in). But this is not the use case of the requirement, since any user (anonymous) should be able to view (and click on) the link. So how is this accomplished? There is now a patch that enables this. In addition, the portal application's web.xml will need an additional context parameter: <context-param> <param-name>oracle.webcenter.navigationframework.SECURITY_LEVEL</param-name> <param-value>public</param-value> </context-param> As Chris mentions in his part of the blog, the code that is responsible for displaying the "links" is based upon the retrieval of the navigation model "node" prettyURL. The prettyURL is a generated URL that also includes the adf.ctrl-state token, which is very important to the ADF framework runtime. URLs that are void of this token get new tokens from the ADF runtime, which can lead to potential memory issues. <af:forEach var="node" varStatus="vs" items="#{navigationContext.defaultNavigationModel.listModel['startNode=/,includeStartNode=false']}"> <af:spacer width="10" height="10" id="s1"/> <af:panelGroupLayout id="pgl2" layout="vertical" inlineStyle="border:blue solid 1px"> <af:goLink id="pt_gl1" text="#{node.title}" destination="#{node.goLinkPrettyUrl}" targetFrame="#{node.attributes['Target']}" inlineStyle="font-size:large;#{node.selected ? 'font-weight:bold;' : ''}"/> <af:spacer width="10" height="10" id="s2"/> <af:outputText value="#{node.goLinkPrettyUrl}" id="ot2" inlineStyle="font-size:medium; font-weight:bold;"/> </af:panelGroupLayout> </af:forEach> So now that the links are visible to all, clicking on a secured link will be intercepted by OAM. Since the challenge URL configured in the OAM Authentication Scheme can point anywhere, the login page(s) can also come from anywhere. In this case each login page has been defined in the custom portal application. This was another requirement as well, since the login page also needed to have ADF-based content, which would not be possible if the login page came from OAM. The following is the example login page: <?xml version='1.0' encoding='UTF-8'?> <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:f="http://java.sun.com/jsf/core" xmlns:h="http://java.sun.com/jsf/html" xmlns:af="http://xmlns.oracle.com/adf/faces/rich"> <jsp:directive.page contentType="text/html;charset=UTF-8"/> <f:view> <af:document title="Settings" id="d1"> <af:panelGroupLayout id="pgl1" layout="vertical"/> <af:outputText value="LOGIN FORM FOR A" id="ot1"/> <form id="loginform" name="loginform" method="POST" action="XXXXXXXX:14100/oam/server/auth_cred_submit"> <table> <tr> <td align="right">username:</td> <td align="left"> <input name="username" type="text"/> </td> </tr> <tr> <td align="right">password:</td> <td align="left"> <input name="password" type="password"/> </td> </tr> <tr> <td colspan="2" align="center"> <input value=" login " type="submit"/> </td> </tr> </table> <input name="request_id" type="hidden" value="${param['request_id']}" id="itsss"/> </form> </af:document> </f:view> </jsp:root> As you can see, the code is pretty straightforward. The most important section is the form tag, where the submit is a POST to the OAM server. This example page is mostly HTML; however, it is valid to have ADF tags mixed in as well. As a side note, this solution is really tailored to a specific requirement. Normally, there would be only one login page (or dialog/popup), and the OAM challenge resource would be /adfAuthentication, which maps to the adfAuthentication servlet. Please see the documentation for more about ADF security here.

    Read the article

  • The Incremental Architect´s Napkin - #2 - Balancing the forces

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/02/the-incremental-architectacutes-napkin---2---balancing-the-forces.aspx Categorizing requirements is the prerequisite for economic architectural decisions. Not all requirements are created equal. However, to truly understand and describe the requirement forces pulling on software development, I think further examination of the requirements aspects is warranted.

    Aspects of Functionality There are two sides to Functionality requirements. It´s about what a software should do. I call that the Operations it implements. Operations are defined by expressions and control structures or calls to frameworks of some sort, i.e. (business) logic statements. Operations calculate, transform, aggregate, validate, send, receive, load, store etc. Operations are about behavior; they take input and produce output by considering state. I´m not using the term “function” here, because functions - or methods or sub-programs - are not necessary to implement Operations. Functions belong to a different sub-aspect of requirements (see below). Operations alone are not enough, though, to make a customer happy with regard to his/her Functionality requirements. Only correctly implemented Operations provide full value. This should make clear why testing is so important. And not just manual tests during development of some operational feature, but automated tests. Because only automated tests scale when over time the number of operations increases. Without automated tests there is no guarantee formerly correct operations are still correct after more got added. To retest all previous operations manually is infeasible. So whoever relies just on manual tests is not really balancing the two forces Operations and Correctness. With manual tests more weight is put on the side of the scale of Operations. That might be ok for a short period of time - but in the long run it will bite you. You need to plan for Correctness in the long run from the first day of your project on.

    Aspects of Quality As important as Functionality is, it´s not the driver for software development. No software has ever been written to just implement some operation in code. We don´t need computers just to do something. All computers can do with software we can do without them. Well, at least given enough time and resources. We could calculate the most complex formulas without computers. We could do auctions with millions of people without computers. The only reason we want computers to help us with this and a million other Operations is… We don´t want to wait for the results very long. Or we want fewer errors. Or we want easier accessibility to complicated solutions. So the main reason for customers to buy/order software is some Quality. They want some Functionality with a higher Quality (e.g. performance, scalability, usability, security…) than without the software. But Qualities come in at least two flavors: Most important are Primary Qualities. That´s the Qualities software truly is written for. Take an online auction website for example. Its Primary Qualities are performance, scalability, and usability, I´d say. Auctions should come within reach of millions of people; setting up an auction should be very easy; finding a suitable auction and bidding on it should be as fast as possible. Only if those Qualities have been implemented does security become relevant. A secure auction website is important - but not as important as a fast auction website. Nobody would want to use the most secure auction website if it was unbearably slow. But there would be people willing to use the fastest auction website even if it was lacking security. That´s why security - with regard to online auction software - is not a Primary Quality, but just a Secondary Quality. It´s a supporting quality, so to speak. It does not deliver value by itself. With a password manager software this might be different. There security might be a Primary Quality. Please get me right: I don´t want to denigrate any Quality. There´s a long list of non-functional requirements at Wikipedia. They are all created equal - but that does not mean they are equally important for all software projects. When confronted with Quality requirements, check with the customer which are primary and which are secondary. That will help to make good economic decisions when in a crunch. Resources are always limited - but requirements are a bottomless ocean.

    Aspects of Security of Investment Functionality and Quality are traditionally the requirement aspects cared for most - by customers and developers alike. Even today, when pressure rises in a project, tunnel vision will focus on them. Any measures to create and hold up Security of Investment (SoI) will be out of the window pretty quickly. Resistance to customers and/or management is futile. As long as SoI is not placed on equal footing with Functionality and Quality it´s bound to suffer under pressure. To look closer at what SoI means will help to become more conscious about it and make customers and management aware of the risks of neglecting it. SoI to me has two facets: Production Efficiency (PE) is about speed of delivering value. Customers like short response times. Short response times mean less money spent. So whatever makes software development faster supports this requirement. This must not lead to duct tape programming and banging out features by the dozen, though. Because customers don´t just want Operations and Quality, but also Correctness. So if Correctness gets compromised by focusing too much on Production Efficiency it will fire back. Customers want PE not just today, but over the whole course of a software´s lifecycle. That means it´s not just about coding speed, but equally about code quality. If code quality leads to rework, the PE is on an unsatisfactory level. Also if code production leads to waste it´s unsatisfactory. Because the effort which went into waste could have been used to produce value. Rework and waste cost money. Rework and waste abound, however, as long as PE is not addressed explicitly with management and customers. Thanks to the Agile and Lean movements that´s increasingly the case. Nevertheless more could and should be done in many teams. Each and every developer should keep in mind that Production Efficiency is as important to the customer as Functionality and Quality - whether he/she states it or not. Making software development more efficient is important - but still sooner or later even agile projects are going to hit a glass ceiling. At least as long as they neglect the second SoI facet: Evolvability. Delivering correct high quality functionality in short cycles today is good. But not just any software structure will allow this to happen for an indefinite amount of time.[1] The less explicitly software was designed the sooner it´s going to get stuck. Big ball of mud, monolith, brownfield, legacy code, technical debt… there are many names for software structures that have lost the ability to evolve, to be easily changed to accommodate new requirements. An evolvable code base is the opposite of a brownfield. It´s code which can be easily understood (by developers with sufficient domain expertise) and then easily changed to accommodate new requirements. Ideally the costs of adding feature X to an evolvable code base are independent of when it is requested - or at least the costs should only increase linearly, not exponentially.[2] Clean Code, Agile Architecture, and even traditional Software Engineering are concerned with Evolvability. However, it seems no systematic way of achieving it has been laid out yet. TDD + SOLID help - but still… When I look at the design ability reality in teams I see much room for improvement. As stated previously, SoI - or to be more precise: Evolvability - can hardly be measured. Plus the customer rarely states an explicit expectation with regard to it. That´s why I think special care must be taken to not neglect it. Postponing it to some large refactorings should not be an option. Rather Evolvability needs to be a core concern for every single developer day. This should not mean Evolvability is more important than any of the other requirement aspects. But neither is it less important. That´s why more effort needs to be invested into it, to bring it on par with the other aspects, which usually are much more in focus.

    In closing As you see, requirements are of quite different kinds. To not take that into account will make it harder to understand the customer, and to make economic decisions. Those sub-aspects of requirements are forces pulling in different directions. To improve performance might have an impact on Evolvability. To increase Production Efficiency might have an impact on security etc. No requirement aspect should go unchecked when deciding how to allocate resources. Balancing should be explicit. And it should be possible to trace back each decision to a requirement. Why is there a null-check on parameters at the start of the method? Why are there 5000 LOC in this method? Why are there interfaces on those classes? Why is this functionality running on the threadpool? Why is this function defined on that class? Why is this class depending on three other classes? These and a thousand more questions are not meant to imply anything should be different in a code base. But it´s important to know the reason behind all of these decisions. Because not knowing the reason possibly means waste and having decided suboptimally. And how do we ensure we balance all requirement aspects? That needs practices and transparency. Practices means doing things a certain way and not another, even though that might be possible. We´re dealing with dangerous tools here. Like a knife is a dangerous tool. Harm can be done if we use our tools in just any way at the whim of the moment. Over the centuries rules and practices have been established for how to use knives. You don´t put them in peoples´ legs just because you´re feeling like it. You hand over a knife with the handle towards the receiver. You might not even be allowed to cut round food like potatoes or eggs with it. The same should be the case for dangerous tools like object-orientation, remote communication, threads etc. We need practices to use them in a way so requirements are balanced almost automatically. In addition, to be able to work on software as a team we need transparency. We need means to share our thoughts, to work jointly on mental models. So far our tools are focused on working with code. Testing frameworks, build servers, DI containers, intellisense, refactoring support… That´s all nice and well. I don´t want to miss any of that. But I think it´s not enough. We´re missing mental tools, tools for making thinking and talking about software (independently of code) easier. You might think enough of such tools already exist, like all those UML diagram types or flow charts. But then, isn´t it strange that hardly any team is using them to design software? Or is that just due to a lack of education? I don´t think so. It´s a matter of value/weight ratio: the current mental tools are too heavyweight compared to the value they deliver. So my conclusion is, we need lightweight tools to really be able to balance requirements. Software development is complex. We need guidance not to forget important aspects. That´s like with flying an airplane. Pilots don´t just jump in and take off for their destination. Yes, there are times when they are "flying by the seats of their pants", when they are just experts doing things intuitively. But most of the time they are going through honed practices called checklists. See "The Checklist Manifesto" for very enlightening details on this. Maybe then I should say it like this: We need more checklists for the complex business of software development.[3]

    [1] But that´s what software development mostly is about: changing software over an unknown period of time. It needs to be corrected in order to finally provide promised operations. It needs to be enhanced to provide ever more operations and qualities. All this without knowing when it´s going to stop. Probably never - until "maintainability" hits a wall when the technical debt is too large, the brownfield too deep. Software development is not a sprint, is not a marathon, not even an ultra marathon. Because to all this there is a foreseeable end. Software development is like continuously and forever running… [2] And sometimes I dare to think that costs could even decrease over time. Think of it: With each feature a software becomes richer in functionality. So with each additional feature the chance of there being already functionality helping its implementation increases. That should lead to less costs of feature X if it´s requested later than sooner. X requested later could stand on the shoulders of previous features. Alas, reality seems to be far from this despite 20+ years of admonishing developers to think in terms of reusability. [3] Please don´t get me wrong: I don´t want to bog down the "art" of software development with heavyweight practices and heaps of rules to follow. The framework we need should be lightweight. It should not stand in the way of delivering value to the customer. Its purpose is even to make that easier by helping us to focus and decreasing waste and rework.

    Read the article

  • ssh + tinyproxy: poor performance

    - by Paul
    I am currently in China and I would still like to visit some blocked websites (Facebook, YouTube). I have a VPS in the USA and I have installed tinyproxy on it. I log in to my VPS with SSH port-forwarding and I have configured my browser appropriately. Everything works more or less: I can surf to those websites, but everything is unusually slow and sometimes data transfer stops abruptly. This probably has to do with the fact that I see some errors in my shell on the VPS like: channel 6: open failed: connect failed: Also in the log file of tinyproxy I see some bad things: ERROR Sep 06 14:52:14 [28150]: getpeer_information: getpeername() error: Transport endpoint is not connected ERROR Sep 06 14:52:15 [28153]: writebuff: write() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:15 [28168]: readbuff: recv() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:15 [28151]: readbuff: recv() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:15 [28143]: readbuff: recv() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:17 [28147]: writebuff: write() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:23 [28137]: writebuff: write() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:26 [28168]: getpeer_information: getpeername() error: Transport endpoint is not connected ERROR Sep 06 14:52:27 [28186]: read_request_line: Client (file descriptor: 7) closed socket before read. ERROR Sep 06 14:52:31 [28160]: getpeer_information: getpeername() error: Transport endpoint is not connected
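    For orientation, the tunnel-plus-proxy setup described above is typically built along these lines; the port 8888 (tinyproxy's default) and the host name are assumptions for illustration, not details taken from the question:

      # Forward a local port to tinyproxy on the VPS (8888 is tinyproxy's default Port;
      # adjust it to whatever tinyproxy.conf actually uses). user@vps-usa is a placeholder.
      ssh -N -L 8888:localhost:8888 user@vps-usa

      # Then point the browser's HTTP/HTTPS proxy at localhost:8888 so requests travel
      # through the SSH tunnel to tinyproxy, and from there out via the VPS.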

    Read the article

  • Exchange 2010 DAG Automatic Failover Testing/Issue. Not always automatically failing over to health

    - by Richard
    OK, I've got two Exchange 2010 servers that run the Client Access/Hub Transport/Mailbox roles and one Exchange 2010 server running just the Client Access/Hub Transport roles, which acts as my bridgehead. The two mailbox servers are running one database set up in a DAG. Server A shows the DB Mounted and Server B shows Healthy. If I reboot Server A via the Windows GUI, Server B switches from Healthy to Mounted and I see hardly any interruption in service using Outlook 2007. Server A shows "Service Down", then "Failed", then "Healthy", and leaves the DB mounted on Server B. This is how it should work; so far so good. Now if I test Server A being shut down cold, or unplug both NICs from the network to simulate failure, Server B switches from Healthy to Mounted and Server A switches to "Service Down", but my Outlook client never connects to the DB mounted on Server B! I can connect to Server C (Client Access/Hub Transport) and get to my email and even send new email out, but incoming email doesn't deliver until Server A is brought back online and its DB goes back to Healthy status. So I don't understand why it fails over automatically when I reboot the server with the mounted DB copy, causing very little Outlook 2007 hiccup if any, but when I shut down or disconnect the mounted DB server it DOES mount the healthy copy yet Outlook 2007 clients can't connect. I hope the picture I'm trying to paint makes some sense; it's driving me a little batty. Any help would be appreciated!
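    As a hedged aside, the copy states described above (Mounted/Healthy/Failed) can be watched during such failover tests from the Exchange Management Shell; the server names below are placeholders for Server A and Server B:

      # Show which database copies are Mounted, Healthy or Failed on each DAG member
      Get-MailboxDatabaseCopyStatus -Server SERVERA
      Get-MailboxDatabaseCopyStatus -Server SERVERB

      # Sanity-check replication and cluster health (run on a DAG member)
      Test-ReplicationHealth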

    Read the article

  • Issues with doing a P2V on Exchange 2007

    - by PokermUNKEE
    I shut down all Exchange services on the physical box and did a convert. Everything converted fine. The server booted up with no issues and I could receive email from the outside world without problems, but I am unable to send emails internally or externally; they just sit in the Outbox. I also disabled the DHCP Client service, as I didn't want the server to get an IP address from DHCP when it first came online. It came online with no IP, then I used the console to assign the correct static IP address (the same one as on the physical server) and rebooted. I ran the Exchange Troubleshooting Agent and it gave me the following errors: The value for the '\MSExchangeIS Mailbox\Messages Queued For Submission' counter on server exchange is greater than zero (average value is 8.8) and it appears that 'MSExchangeMailSubmission' is failing to submit messages to at least one computer with the Hub Transport server role installed over the last minute. Found 1 computers with the Hub Transport server role installed in the same Active Directory site as server exchange using local Active Directory site GUID 'ce8a4367-1bf6-4825-9cac-c4e2b115c450'. Check Local Active Directory Site Hub Transport Server Role Health (Mailflow_CheckLocalBhHealth1.1.1.1.1.1.1.1) I have one Exchange server with all the roles on it. Has anyone seen this before when doing a convert? Please help :(
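    As a hedged illustration of where messages "queued for submission" can be inspected after a conversion like this, the Exchange Management Shell exposes the transport queues and services; the server name below is a placeholder:

      # List the transport queues and how many messages are waiting in each
      Get-Queue -Server EXCHANGE01 | Format-Table Identity, Status, MessageCount -AutoSize

      # Confirm the submission and transport services came back up after the P2V
      Get-Service MSExchangeMailSubmission, MSExchangeTransport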

    Read the article

  • Cassandra Remote Connection

    - by Lyuben Todorov
    I'm not managing to connect to Cassandra from outside machines. The database is hosted on a Windows machine and I'm trying to connect from a Mac (but this shouldn't cause problems). A local connection works: C:\cassandra\bin>cassandra-cli Starting Cassandra Client Connected to: "Test Cluster" on 127.0.0.1/9160 Welcome to Cassandra CLI version 1.1.6 But it fails from other machines on the same network: bin/cassandra-cli --host 192.168.0.10 --port 9160 org.apache.thrift.transport.TTransportException: java.net.ConnectException: Operation timed out at org.apache.thrift.transport.TSocket.open(TSocket.java:183) at org.apache.thrift.transport.TFramedTransport.open(TFramedTransport.java:81) at org.apache.cassandra.cli.CliMain.connect(CliMain.java:70) at org.apache.cassandra.cli.CliMain.main(CliMain.java:246) Exception connecting to 192.168.0.10/9160. Reason: Operation timed out. Welcome to Cassandra CLI version 1.2.0-beta3 Type 'help;' or '?' for help. Type 'quit;' or 'exit;' to quit. There is a router on the network, but these ports have been triggered: 1024, 7000, 7001, 7199, 9160, and the same ports were forwarded to 192.168.0.10 (where Cassandra is hosted). The Cassandra version is 1.0.7. The settings I think I need to change in cassandra.yaml are: listen_address: 192.168.0.10 rpc_address: I'm not really sure if I've missed any steps. Any help would be appreciated.
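    For illustration, a minimal sketch of the two cassandra.yaml settings mentioned above as they are commonly set for remote Thrift clients on Cassandra 1.0.x; treat the exact values as assumptions to adapt, and restart the node after changing them:

      # cassandra.yaml (sketch)
      listen_address: 192.168.0.10   # address used for gossip / inter-node traffic
      rpc_address: 0.0.0.0           # or 192.168.0.10; the Thrift interface clients connect to
      rpc_port: 9160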

    Read the article

  • ASP.NET sending email through exchange problem

    - by Solmead
    I have an Exchange 2010 server running on Windows 2008 R2. I also have a remote web server running Windows 2003 with multiple sites on it (all ASP.NET MVC 2 sites). I set up a Transport in Exchange, and all the websites on my remote web server can send email with no problem to anyone on the Exchange server and to any external domain. Now for my problem: I am having issues with that web server, so I moved one of the websites to run on my Exchange server. It runs well (it is a low-traffic website), except that email doesn't work from that site. I tried changing the Transport in Exchange to add the IP address of the local machine and the 127.0.0.1 address, and it still isn't sending any email. Any ideas on how to get this working? The remote websites can still send email with no problem, and the version of the site that I had to move can still send email from the remote server, but on the Exchange server email does not send for that website. I would guess it is a Transport issue; since it is running on the same server, a firewall shouldn't be the issue. I changed the SMTP setting in web.config to localhost, and now I do receive email to my account on the Exchange server, but I do not receive any emails at outside addresses. To add more description, this is a custom-developed ASP.NET MVC 2 website, and no errors were being generated in the code when sending the email in either case.
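    For context, the "SMTP setting in web.config" mentioned above is normally the system.net mailSettings block that SmtpClient reads; a minimal sketch, with host="localhost" being the value that was switched (the from address is a placeholder):

      <system.net>
        <mailSettings>
          <!-- SmtpClient in the MVC site uses this host/port unless overridden in code -->
          <smtp from="[email protected]" deliveryMethod="Network">
            <network host="localhost" port="25" />
          </smtp>
        </mailSettings>
      </system.net>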

    Read the article

  • when should we choose simple PHP mail and when SMTP with login+password?

    - by user43353
    Hi, my case: a web application that needs to send 1,000 messages per day to a main Gmail account. (It only needs to send email, not receive it - no email client needed.) Option 1 - use the PHP mail() function + sendmail + php.ini config. PHP example: <?php $to = '[email protected]'; $subject = 'the subject'; $message = 'hello'; $headers = 'From: [email protected]' . "\r\n" . 'Reply-To: [email protected]' . "\r\n" . 'X-Mailer: PHP/' . phpversion(); mail($to, $subject, $message, $headers); ?> php.ini config (Ubuntu): sendmail_path = /usr/sbin/sendmail -t -i Pros: no email account needed, easy to set up. Cons: ? Option 2 - use Zend_Mail + an SMTP transport with login + password auth. PHP example (the Zend_Mail classes need to be included): $config = array('auth' => 'login', 'username' => 'myusername', 'password' => 'password'); $transport = new Zend_Mail_Transport_Smtp('mail.server.com', $config); $mail = new Zend_Mail(); $mail->setBodyText('This is the text of the mail.'); $mail->setFrom('[email protected]', 'Some Sender'); $mail->addTo('[email protected]', 'Some Recipient'); $mail->setSubject('TestSubject'); $mail->send($transport); Pros: ? Cons: ? Questions: Can option 1 be filtered as spam by the Gmail mail server? Please can you add pros + cons to the options above. Thanks

    Read the article

  • deploying war on tomcat fails to start

    - by Asghar
    i have a java application which uses JAX_WS when i deployed on my tomcat5 server . it is deployed successfully. but it fails to start SEVERE: WSSERVLET11: failed to parse runtime descriptor: java.lang.IllegalArgumentException: prefix cannot be "null" when creating a QName java.lang.IllegalArgumentException: prefix cannot be "null" when creating a QName at javax.xml.namespace.QName.<init>(xml-commons-apis-1.3.02.jar.so) at gnu.xml.stream.XMLParser.getAttributeName(libgcj.so.7rh) at com.sun.xml.ws.util.xml.XMLStreamReaderFilter.getAttributeName(XMLStreamReaderFilter.java:228) at com.sun.xml.ws.streaming.XMLStreamReaderUtil$AttributesImpl.<init>(XMLStreamReaderUtil.java:355) at com.sun.xml.ws.streaming.XMLStreamReaderUtil.getAttributes(XMLStreamReaderUtil.java:198) at com.sun.xml.ws.transport.http.DeploymentDescriptorParser.parseAdapters(DeploymentDescriptorParser.java:204) at com.sun.xml.ws.transport.http.DeploymentDescriptorParser.parse(DeploymentDescriptorParser.java:147) at com.sun.xml.ws.transport.http.servlet.WSServletContextListener.contextInitialized(WSServletContextListener.java:124) at org.apache.catalina.core.StandardContext.listenerStart(catalina-5.5.23.jar.so) at org.apache.catalina.core.StandardContext.start(catalina-5.5.23.jar.so) at org.apache.catalina.manager.ManagerServlet.start(catalina-manager-5.5.23.jar.so) at org.apache.catalina.manager.HTMLManagerServlet.start(catalina-manager-5.5.23.jar.so) at org.apache.catalina.manager.HTMLManagerServlet.doGet(catalina-manager-5.5.23.jar.so) at javax.servlet.http.HttpServlet.service(tomcat5-servlet-2.4-api-5.5.23.jar.so) at javax.servlet.http.HttpServlet.service(tomcat5-servlet-2.4-api-5.5.23.jar.so) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(catalina-5.5.23.jar.so) at org.apache.catalina.core.ApplicationFilterChain.doFilter(catalina-5.5.23.jar.so) at org.apache.catalina.core.StandardWrapperValve.invoke(catalina-5.5.23.jar.so) at org.apache.catalina.core.StandardContextValve.invoke(catalina-5.5.23.jar.so) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(catalina-5.5.23.jar.so) at org.apache.catalina.core.StandardHostValve.invoke(catalina-5.5.23.jar.so) at org.apache.catalina.valves.ErrorReportValve.invoke(catalina-5.5.23.jar.so) at org.apache.catalina.core.StandardEngineValve.invoke(catalina-5.5.23.jar.so) at org.apache.catalina.connector.CoyoteAdapter.service(catalina-5.5.23.jar.so) at org.apache.coyote.http11.Http11Processor.process(tomcat-http-5.5.23.jar.so) at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(tomcat-http-5.5.23.jar.so) at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(tomcat-util-5.5.23.jar.so) at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(tomcat-util-5.5.23.jar.so) at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(tomcat-util-5.5.23.jar.so) at java.lang.Thread.run(libgcj.so.7rh)
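    The descriptor that WSServletContextListener/DeploymentDescriptorParser fails to parse here is normally WEB-INF/sun-jaxws.xml, and the stack trace shows it being read by the GNU/libgcj XML parser. As a hedged reference point (the endpoint name, implementation class and URL pattern below are placeholders, not taken from the question), a minimal well-formed descriptor looks like this:

      <?xml version="1.0" encoding="UTF-8"?>
      <!-- WEB-INF/sun-jaxws.xml (sketch); names below are placeholders -->
      <endpoints xmlns="http://java.sun.com/xml/ns/jax-ws/ri/runtime" version="2.0">
        <endpoint name="ExampleService"
                  implementation="com.example.ExampleServiceImpl"
                  url-pattern="/example"/>
      </endpoints>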

    Read the article

  • Exim4 Smart Host Relay

    - by ColinM
    I am running Exim 4.71. I want to: Route all email from A.com through mail.A.com Route all email from [B-E].com through mail.B.com Send all other email directly. Here is the configuration I have that doesn't work like I hoped: domainlist a_domains = a.com domainlist b_domains = b.com : c.com : d.com : e.com begin routers smart_route_a: driver = manualroute domains = +a_domains transport = remote_smtp route_list = +a_domains mail.a.com no_more smart_route_b: driver = manualroute domains = +b_domains transport = remote_smtp route_list = +b_domains mail.mollenhour.com no_more dnslookup: driver = dnslookup domains = ! +local_domains transport = remote_smtp ignore_target_hosts = 0.0.0.0 : 127.0.0.0/8 no_more When I send an email e.g. with PHP's mail() or Zend_Mail_Transport_Smtp setting both From: and Return-Path: as [email protected], the smart_route_a router is not used, the dnslookup is used instead. Disabling dnslookup results in no mail being sent. From the logs it appears that email sent to [email protected] uses smart_route_a, but the same email sent from [email protected] to [email protected] is sent using dnslookup. How do I make email from [email protected] be relayed via mail.a.com?

    Read the article

  • Postfix "loops back to myself" error on relay to another IP address on same machine

    - by Nic Wolff
    I'm trying to relay all mail for one domain "ourdomain.tld" from Postfix running on port 2525 of one interface to another SMTP server running on port 25 of another interface on the same machine. However, when a message is received for that domain, we're getting a "mail for loops back to myself" error. Below are netstat and postconf, the contents of our /etc/postfix/transport file, and the error that Postfix is logging. (The high bytes of each IP address are XXXed out.) Am I missing something obvious? Thanks - # netstat -ln -A inet Proto Recv-Q Send-Q Local Address Foreign Address State ... tcp 0 0 XXX.XXX.138.209:25 0.0.0.0:* LISTEN tcp 0 0 XXX.XXX.138.210:2525 0.0.0.0:* LISTEN # postconf -d | grep mail_version mail_version = 2.8.4 # postconf -n alias_maps = hash:/etc/aliases allow_mail_to_commands = alias,forward bounce_queue_lifetime = 0 command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix debug_peer_level = 2 default_privs = nobody default_process_limit = 200 html_directory = no inet_interfaces = XXX.XXX.138.210 local_recipient_maps = local_transport = error:local mail delivery is disabled mail_owner = postfix mailbox_size_limit = 0 mailq_path = /usr/bin/mailq manpage_directory = /usr/local/man message_size_limit = 10240000 mydestination = mydomain = ourdomain.tld myhostname = ourdomain.tld mynetworks = XXX.XXX.119.0/24, XXX.XXX.138.0/24, XXX.XXX.136.128/25 myorigin = ourdomain.tld newaliases_path = /usr/bin/newaliases queue_directory = /var/spool/postfix readme_directory = /etc/postfix recipient_delimiter = + relay_domains = ourdomain.tld relay_recipient_maps = sample_directory = /etc/postfix sendmail_path = /usr/sbin/sendmail setgid_group = postdrop smtpd_authorized_verp_clients = $mynetworks smtpd_recipient_limit = 10000 transport_maps = hash:/etc/postfix/transport unknown_local_recipient_reject_code = 450 # cat /etc/postfix/transport ourdomain.tld relay:[XXX.XXX.138.209]:25 # tail -f /var/log/maillog ... Aug 2 23:58:36 va4 postfix/smtp[9846]: 9858A758404: to=<nicwolff@... >, relay=XXX.XXX.138.209[XXX.XXX.138.209]:25, delay=1.1, delays=0.08/0.01/1/0, dsn=5.4.6, status=bounced (mail for [XXX.XXX.138.209]:25 loops back to myself)

    Read the article

  • Mail not piping in postfix

    - by user220912
    I have setup a postfix server and wanted to test the piping of mail to my perl script where i can make use of it and filter the mails.I wrote a test script for that which just logs the information in txt file. but i don't see any changes on sending the mail. My postconf-n output: alias_database = hash:/etc/aliases append_dot_mydomain = no command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix debug_peer_level = 2 debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5 html_directory = no inet_interfaces = all inet_protocols = all mail_owner = postfix mailbox_size_limit = 0 mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = yantratech.co.in, localhost.localdomain, localhost myhostname = tcmailer8.in mynetworks = 103.8.128.62, 103.8.128.69/101, 168.100.189.0/28, 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 myorigin = $mydomain newaliases_path = /usr/bin/newaliases.postfix queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES recipient_delimiter = + relayhost = sample_directory = /usr/share/doc/postfix-2.6.6/samples sendmail_path = /usr/sbin/sendmail.postfix smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_tls_cert_file = /etc/pki/tls/certs/tcmailer8.in.cert smtpd_tls_key_file = /etc/pki/tls/private/localhost.key smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes transport_maps = hash:/etc/postfix/transport virtual_alias_maps = hash:/etc/postfix/virtual virtual_gid_maps = static:5000 virtual_mailbox_base = /home/vmail virtual_mailbox_domains = /etc/postfix/vhosts virtual_mailbox_maps = hash:/etc/postfix/vmaps virtual_minimum_uid = 1000 virtual_uid_maps = static:5000 here's my transport: [email protected] email_route my main.cf declaration: transport_maps = hash:/etc/postfix/transport my master.cf declaration: email_route unix - n n - - pipe flags=FR user=nobody argv=/etc/postfix/test.php -f $(sender) -- $(recipient) and my php script: #!/usr/bin/php <?php $fh = fopen('/etc/postfix/testmail.txt','a'); fwrite($fh, "Hello it works\n"); fclose($fh); ?> I am sending mails through telnet in localhost.
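    One step the question depends on but does not show: a hash: transport map only takes effect after it is compiled with postmap and Postfix re-reads its configuration, and a script run through the pipe transport's argv must be executable. A hedged sketch of those steps, using the paths from the configuration above:

      # Rebuild the lookup table after editing /etc/postfix/transport
      postmap /etc/postfix/transport

      # The pipe transport executes the script directly, so it needs the execute bit
      chmod 755 /etc/postfix/test.php

      # Pick up the main.cf / master.cf changes
      postfix reload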

    Read the article

  • IIS 7 Authentication: Certain users can't authenticate, while almost all others can.

    - by user35335
    I'm using IIS 7 Digest authentication to control access to a certain directory containing files. Users access the files through a department website from inside our network and outside. I've set NTFS permissions on the directory to allow a certain AD group to view the files. When I click a link to one of those files on the website I get prompted for a username and password. With most users everything works fine, but with a few of them it prompts for a password 3 times and then get: 401 - Unauthorized: Access is denied due to invalid credentials. But other users that are in the group can get in without a problem. If I switch it over to Windows Authentication, then the trouble users can log in fine. That directory is also shared, and users that can't log in through the website are able to browse to the share and view files in it, so I know that the permissions are ok. Here's the portion of the IIS log where I tried to download the file (/assets/files/secure/WWGNL.pdf): 2010-02-19 19:47:20 xxx.xxx.xxx.xxx GET /assets/images/bullet.gif - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 218 2010-02-19 19:47:20 xxx.xxx.xxx.xxx GET /assets/images/bgOFF.gif - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 218 2010-02-19 19:47:21 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 2 5 0 2010-02-19 19:47:36 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 1 2148074252 0 2010-02-19 19:47:43 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 1 2148074252 15 2010-02-19 19:47:46 xxx.xxx.xxx.xxx GET /manager/media/script/_session.gif 0.19665693119168282 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 203 2010-02-19 19:47:46 xxx.xxx.xxx.xxx POST /manager/index.php - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 296 2010-02-19 19:47:56 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 1 2148074252 15 2010-02-19 19:47:59 xxx.xxx.xxx.xxx GET /favicon.ico - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 404 0 2 0 Here's the Failed Logon attempt in the Security Log: Log Name: Security Source: Microsoft-Windows-Security-Auditing Date: 2/19/2010 11:47:43 AM Event ID: 4625 Task Category: Logon Level: Information Keywords: Audit Failure User: N/A Computer: WEB4.net.domain.org Description: An account failed to log on. Subject: Security ID: NULL SID Account Name: - Account Domain: - Logon ID: 0x0 Logon Type: 3 Account For Which Logon Failed: Security ID: NULL SID Account Name: jim.lastname Account Domain: net.domain.org Failure Information: Failure Reason: Unknown user name or bad password. 
Status: 0xc000006d Sub Status: 0xc000006a Process Information: Caller Process ID: 0x0 Caller Process Name: - Network Information: Workstation Name: - Source Network Address: 10.5.16.138 Source Port: 50065 Detailed Authentication Information: Logon Process: WDIGEST Authentication Package: WDigest Transited Services: - Package Name (NTLM only): - Key Length: 0 This event is generated when a logon request fails. It is generated on the computer where access was attempted. The Subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates the kind of logon that was requested. The most common types are 2 (interactive) and 3 (network). The Process Information fields indicate which account and process on the system requested the logon. The Network Information fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases. The authentication information fields provide detailed information about this specific logon request. - Transited services indicate which intermediate services have participated in this logon request. - Package name indicates which sub-protocol was used among the NTLM protocols. - Key length indicates the length of the generated session key. This will be 0 if no session key was requested. Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-a5ba-3e3b0328c30d}" /> <EventID>4625</EventID> <Version>0</Version> <Level>0</Level> <Task>12544</Task> <Opcode>0</Opcode> <Keywords>0x8010000000000000</Keywords> <TimeCreated SystemTime="2010-02-19T19:47:43.890Z" /> <EventRecordID>2276316</EventRecordID> <Correlation /> <Execution ProcessID="612" ThreadID="692" /> <Channel>Security</Channel> <Computer>WEB4.net.domain.org</Computer> <Security /> </System> <EventData> <Data Name="SubjectUserSid">S-1-0-0</Data> <Data Name="SubjectUserName">-</Data> <Data Name="SubjectDomainName">-</Data> <Data Name="SubjectLogonId">0x0</Data> <Data Name="TargetUserSid">S-1-0-0</Data> <Data Name="TargetUserName">jim.lastname</Data> <Data Name="TargetDomainName">net.domain.org</Data> <Data Name="Status">0xc000006d</Data> <Data Name="FailureReason">%%2313</Data> <Data Name="SubStatus">0xc000006a</Data> <Data Name="LogonType">3</Data> <Data Name="LogonProcessName">WDIGEST</Data> <Data Name="AuthenticationPackageName">WDigest</Data> <Data Name="WorkstationName">-</Data> <Data Name="TransmittedServices">-</Data> <Data Name="LmPackageName">-</Data> <Data Name="KeyLength">0</Data> <Data Name="ProcessId">0x0</Data> <Data Name="ProcessName">-</Data> <Data Name="IpAddress">10.5.16.138</Data> <Data Name="IpPort">50065</Data> </EventData> </Event>

    Read the article

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having an option to use a distributed environment can be much faster for the deployment and even the development process. Implementation: When an organization designs code, they are essentially becoming a Software-as-a-Service (SaaS) provider to their own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is as follows: Requirements Analysis Design and Development Implementation Testing Deployment to Production Maintenance In an on-premise environment, this often equates to the following process map: Requirements Business requirements formed by Business Analysts, Developers and Data Professionals. Analysis Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved. Design and Development Code written according to organization’s chosen methodology, either on-premise or to multiple development teams on and off premise. Implementation Code checked into main branch. Code forked as needed. Testing Code deployed to on-premise Testing servers. If no server capacity available, more resources procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. performed. Deployment to Production Server team involved to select platform and environments with available capacity. If no server capacity available, standard budgeting and procurement process followed. If no server capacity available, systems built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place. Maintenance Code changes evaluated and altered according to need. In a distributed computing environment like Windows Azure, the process maps a bit differently: Requirements Business requirements formed by Business Analysts, Developers and Data Professionals. Analysis Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved. Design and Development Code written according to organization’s chosen methodology, either on-premise or to multiple development teams on and off premise. Implementation Code checked into main branch. Code forked as needed. Testing Code deployed to Azure. Manual and automated functional, load, security, etc. performed. Deployment to Production Code deployed to Azure. Point in time backup and recovery plans configured and put into place.(HA/DR and automated backups already present in Azure fabric) Maintenance Code changes evaluated and altered according to need. This means that several steps can be removed or expedited. 
It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further, since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the “Azure Marketplace”. In effect, this becomes an app store for enterprises to select pre-defined code and data applications to mesh or bolt into their current code, possibly saving development time. Resources: Whitepaper download - What is ALM?  http://go.microsoft.com/?linkid=9743693  Whitepaper download - ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690  LiveMeeting Recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

    Read the article

  • Cisco ASA: Allowing and Denying VPN Access based on membership to an AD group

    - by milkandtang
    I have a Cisco ASA 5505 connecting to an Active Directory server for VPN authentication. Usually we'd restrict this to a particular OU, but in this case users which need access are spread across multiple OUs. So, I'd like to use a group to specify which users have remote access. I've created the group and added the users, but I'm having trouble figuring out how to deny users which aren't in that group. Right now, if someone connects they get assigned the correct group policy "companynamera" if they are in that group, so the LDAP mapping is working. However, users who are not in that group still authenticate fine, and their group policy becomes the LDAP path of their first group, i.e. CN=Domain Users,CN=Users,DC=example,DC=com, and then are still allowed access. How do I add a filter so that I can map everything that isn't "companynamera" to no access? Config I'm using (with some stuff such as ACLs and mappings removed, since they are just noise here): gateway# show run : Saved : ASA Version 8.2(1) ! hostname gateway domain-name corp.company-name.com enable password gDZcqZ.aUC9ML0jK encrypted passwd gDZcqZ.aUC9ML0jK encrypted names name 192.168.0.2 dc5 description FTP Server name 192.168.0.5 dc2 description Everything server name 192.168.0.6 dc4 description File Server name 192.168.0.7 ts1 description Light Use Terminal Server name 192.168.0.8 ts2 description Heavy Use Terminal Server name 4.4.4.82 primary-frontier name 5.5.5.26 primary-eschelon name 172.21.18.5 dmz1 description Kerio Mail Server and FTP Server name 4.4.4.84 ts-frontier name 4.4.4.85 vpn-frontier name 5.5.5.28 ts-eschelon name 5.5.5.29 vpn-eschelon name 5.5.5.27 email-eschelon name 4.4.4.83 guest-frontier name 4.4.4.86 email-frontier dns-guard ! interface Vlan1 nameif inside security-level 100 ip address 192.168.0.254 255.255.255.0 ! interface Vlan2 description Frontier FiOS nameif outside security-level 0 ip address primary-frontier 255.255.255.0 ! interface Vlan3 description Eschelon T1 nameif backup security-level 0 ip address primary-eschelon 255.255.255.248 ! interface Vlan4 nameif dmz security-level 50 ip address 172.21.18.254 255.255.255.0 ! interface Vlan5 nameif guest security-level 25 ip address 172.21.19.254 255.255.255.0 ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 switchport access vlan 3 ! interface Ethernet0/2 switchport access vlan 4 ! interface Ethernet0/3 switchport access vlan 5 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! 
ftp mode passive clock timezone PST -8 clock summer-time PDT recurring dns domain-lookup inside dns server-group DefaultDNS name-server dc2 domain-name corp.company-name.com same-security-traffic permit intra-interface access-list companyname_splitTunnelAcl standard permit 192.168.0.0 255.255.255.0 access-list companyname_splitTunnelAcl standard permit 172.21.18.0 255.255.255.0 access-list inside_nat0_outbound extended permit ip any 172.21.20.0 255.255.255.0 access-list inside_nat0_outbound extended permit ip any 172.21.18.0 255.255.255.0 access-list bypassingnat_dmz extended permit ip 172.21.18.0 255.255.255.0 192.168.0.0 255.255.255.0 pager lines 24 logging enable logging buffer-size 12288 logging buffered warnings logging asdm notifications mtu inside 1500 mtu outside 1500 mtu backup 1500 mtu dmz 1500 mtu guest 1500 ip local pool VPNpool 172.21.20.50-172.21.20.59 mask 255.255.255.0 no failover icmp unreachable rate-limit 1 burst-size 1 no asdm history enable arp timeout 14400 global (outside) 1 interface global (outside) 2 email-frontier global (outside) 3 guest-frontier global (backup) 1 interface global (dmz) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 2 dc5 255.255.255.255 nat (inside) 1 192.168.0.0 255.255.255.0 nat (dmz) 0 access-list bypassingnat_dmz nat (dmz) 2 dmz1 255.255.255.255 nat (dmz) 1 172.21.18.0 255.255.255.0 access-group outside_access_in in interface outside access-group dmz_access_in in interface dmz route outside 0.0.0.0 0.0.0.0 4.4.4.1 1 track 1 route backup 0.0.0.0 0.0.0.0 5.5.5.25 254 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 ldap attribute-map RemoteAccessMap map-name memberOf IETF-Radius-Class map-value memberOf CN=RemoteAccess,CN=Users,DC=corp,DC=company-name,DC=com companynamera dynamic-access-policy-record DfltAccessPolicy aaa-server ActiveDirectory protocol ldap aaa-server ActiveDirectory (inside) host dc2 ldap-base-dn dc=corp,dc=company-name,dc=com ldap-scope subtree ldap-login-password * ldap-login-dn cn=administrator,ou=Admins,dc=corp,dc=company-name,dc=com server-type microsoft aaa-server ADRemoteAccess protocol ldap aaa-server ADRemoteAccess (inside) host dc2 ldap-base-dn dc=corp,dc=company-name,dc=com ldap-scope subtree ldap-login-password * ldap-login-dn cn=administrator,ou=Admins,dc=corp,dc=company-name,dc=com server-type microsoft ldap-attribute-map RemoteAccessMap aaa authentication enable console LOCAL aaa authentication ssh console LOCAL http server enable http 192.168.0.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart sla monitor 123 type echo protocol ipIcmpEcho 4.4.4.1 interface outside num-packets 3 frequency 10 sla monitor schedule 123 life forever start-time now crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map outside_dyn_map 20 set pfs crypto dynamic-map outside_dyn_map 20 set transform-set ESP-3DES-SHA crypto map outside_map 65535 ipsec-isakmp dynamic outside_dyn_map crypto map outside_map interface outside crypto isakmp enable outside crypto isakmp policy 10 authentication 
pre-share encryption 3des hash sha group 2 lifetime 86400 ! track 1 rtr 123 reachability telnet timeout 5 ssh 192.168.0.0 255.255.255.0 inside ssh timeout 5 ssh version 2 console timeout 0 management-access inside dhcpd auto_config outside ! threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept webvpn group-policy companynamera internal group-policy companynamera attributes wins-server value 192.168.0.5 dns-server value 192.168.0.5 vpn-tunnel-protocol IPSec password-storage enable split-tunnel-policy tunnelspecified split-tunnel-network-list value companyname_splitTunnelAcl default-domain value corp.company-name.com split-dns value corp.company-name.com group-policy companyname internal group-policy companyname attributes wins-server value 192.168.0.5 dns-server value 192.168.0.5 vpn-tunnel-protocol IPSec password-storage enable split-tunnel-policy tunnelspecified split-tunnel-network-list value companyname_splitTunnelAcl default-domain value corp.company-name.com split-dns value corp.company-name.com username admin password IhpSqtN210ZsNaH. encrypted privilege 15 tunnel-group companyname type remote-access tunnel-group companyname general-attributes address-pool VPNpool authentication-server-group ActiveDirectory LOCAL default-group-policy companyname tunnel-group companyname ipsec-attributes pre-shared-key * tunnel-group companynamera type remote-access tunnel-group companynamera general-attributes address-pool VPNpool authentication-server-group ADRemoteAccess LOCAL default-group-policy companynamera tunnel-group companynamera ipsec-attributes pre-shared-key * ! class-map type inspect ftp match-all ftp-inspection-map class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect ftp ftp-inspection-map parameters class ftp-inspection-map policy-map type inspect dns migrated_dns_map_1 parameters message-length maximum 512 policy-map global_policy class inspection_default inspect dns migrated_dns_map_1 inspect ftp inspect h323 h225 inspect h323 ras inspect http inspect ils inspect netbios inspect rsh inspect rtsp inspect skinny inspect sqlnet inspect sunrpc inspect tftp inspect sip inspect xdmcp inspect icmp inspect icmp error inspect esmtp inspect pptp ! service-policy global_policy global prompt hostname context Cryptochecksum:487525494a81c8176046fec475d17efe : end gateway# Thanks so much!
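
    One common approach, offered here only as a sketch and not verified against this exact 8.2 configuration: leave the ldap attribute-map as it is, and make the tunnel-group's default group policy a lock-out policy that allows zero logins, so anyone who is not mapped into companynamera inherits a policy that refuses the connection:
      group-policy NOACCESS internal
      group-policy NOACCESS attributes
       vpn-simultaneous-logins 0
      tunnel-group companynamera general-attributes
       default-group-policy NOACCESS
    With that in place, members of CN=RemoteAccess still land in companynamera through the existing RemoteAccessMap, while everyone else falls through to NOACCESS and is rejected at login.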

    Read the article

  • Ubuntu 12.04 // Likewise Open // Unable to ever authenticate AD users

    - by Rob
    So Ubuntu 12.04, Likewise latest from the beyondtrust website. Joins domain fine. Gets proper information from lw-get-status. Can use lw-find-user-by-name to retrieve/locate users. Can use lw-enum-users to get all users. Attempting to login with an AD user via SSH generates the following errors in the auth.log file: Nov 28 19:15:45 hostname sshd[2745]: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory Nov 28 19:15:45 hostname sshd[2745]: PAM adding faulty module: pam_winbind.so Nov 28 19:15:51 hostname sshd[2745]: error: PAM: Authentication service cannot retrieve authentication info for DOMAIN\\user.name from remote.hostname Nov 28 19:16:06 hostname sshd[2745]: Connection closed by 10.1.1.84 [preauth] Attempting to login via the LightDM itself generates similar errors in the auth.log file. Nov 28 19:19:29 hostname lightdm: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory Nov 28 19:19:29 hostname lightdm: PAM adding faulty module: pam_winbind.so Nov 28 19:19:47 hostname lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "DOMAIN\user.name" Nov 28 19:19:52 hostname lightdm: [lsass-pam] [module:pam_lsass]pam_sm_authenticate error [login:DOMAIN\user.name][error code:40022] Nov 28 19:19:54 hostname lightdm: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory Nov 28 19:19:54 hostname lightdm: PAM adding faulty module: pam_winbind.so Attempting to login via a console on the system itself generates slightly different errors: Nov 28 19:31:09 hostname login[997]: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory Nov 28 19:31:09 hostname login[997]: PAM adding faulty module: pam_winbind.so Nov 28 19:31:11 hostname login[997]: [lsass-pam] [module:pam_lsass]pam_sm_authenticate error [login:DOMAIN\user.name][error code:40022] Nov 28 19:31:14 hostname login[997]: FAILED LOGIN (1) on '/dev/tty2' FOR 'DOMAIN\user.name', Authentication service cannot retrieve authentication info Nov 28 19:31:31 hostname login[997]: FAILED LOGIN (2) on '/dev/tty2' FOR 'DOMAIN\user.name', Authentication service cannot retrieve authentication info I am baffled. The errors obviously are correct, the file /lib/security/pam_winbind.so does not exist. If its a dependancy/required, surely it should be part of the package? I've installed/reinstalled, I've used the downloaded package from the beyondtrust website, i've used the repository, nothing seems to work, every method of installing this application generates the same errors for me. UPDATE : Hrmm, I thought likewise didn't use native winbind but its own modules. Installing winbind from apt-get uninstalls pbis-open (likewise) and generates failures when installing if pbis-open is installed first. Uninstalled winbind, reinstalled pbis-open, same issue as above. The file pam_winbind.so does not exist in that location. Setting up pbis-open-legacy (7.0.1.918) ... Installing Packages was successful This computer is joined to DOMAIN.LOCAL New libraries and configurations have been installed for PAM and NSS. Clearly it thinks it has installed it, but it hasn't. It may be a legacy issue with the previous attempt to configure domain integration manually with winbind. 
Does anyone have a working likewise-open installation, and does the /etc/nsswitch.conf include references to winbind? Or do the /etc/pam.d/common-account or /etc/pam.d/common-password files reference pam_winbind.so? I'm unsure whether those entries are just legacy or set up by likewise. UPDATE 2: A complete reinstall of the OS fixed it and it worked seamlessly, like it was meant to, and those two PAM files did NOT include entries for pam_winbind.so, so that was the underlying problem. Thanks for the assist.
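
    For anyone comparing against a healthy install: likewise/PBIS authenticates through its own pam_lsass.so module and an lsass entry in /etc/nsswitch.conf, not through pam_winbind.so. A rough sketch of what the relevant lines usually look like after a successful domain join follows; the exact control flags vary by release, so treat this as illustrative rather than authoritative:
      # /etc/nsswitch.conf
      passwd:         compat lsass
      group:          compat lsass
      # /etc/pam.d/common-auth (excerpt)
      auth    sufficient      pam_lsass.so
      auth    requisite       pam_unix.so nullok_secure try_first_pass
    Any stray pam_winbind.so references left over from an earlier manual winbind attempt would produce exactly the "unable to dlopen(pam_winbind.so)" noise seen in the logs above.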

    Read the article

  • Using Live Data in Database Development Work

    - by Phil Factor
    Guest Editorial for Simple-Talk Newsletter... in which Phil Factor reacts with some exasperation when coming across a report that a majority of companies were still using financial and personal data for both developing and testing database applications. If you routinely test your development work using real production data that contains personal or financial information, you are probably being irresponsible and, at worst, risking a heavy financial penalty for your company. Surprisingly, over 80% of financial companies still do this. Plenty of data breaches and fraud have happened from the use of real data for testing, and a data breach is a nightmare for any organisation that suffers one. The cost of each data breach averages out at around $7.2 million in the US in notification, escalation, credit monitoring, fines, litigation, legal costs, and lost business due to customer churn; in the UK the average is around £1.9 million. 70% of data breaches come from within the organisation. Real data can be exploited in a number of ways for malicious or criminal purposes. It isn't just the obvious use of items such as name and address, date of birth, social security number, and credit card and bank account numbers: data can be exploited in many subtle ways, so there are excellent reasons to ensure that a high priority is given to the detection and prevention of any data breaches. You'll never successfully guess all the ways that real data can be exploited maliciously, or the ease with which it can be accessed. It would be silly to argue that developers never need access to a copy of the database containing live data. Developers sometimes need to track a bug that can only be replicated on the data from the live database. However, it has to be done in a very restrictive harness. The law makes no distinction between development and production databases when a data breach occurs, so the data has to be held with all appropriate security measures in place. In Europe, the use of personal data for testing requires the explicit consent of the people whose data is being held. There are federal standards such as GLBA, PCI DSS and HIPAA, and most US states have privacy legislation. The task of ensuring compliance and tight security in such circumstances is an expensive and time-consuming overhead. The developer is likely to face investigation if a data breach occurs, even if the company manages to stay in business. Ironically, the use of copies of live data isn't usually the most effective way to develop or test your database. Data is usually time-specific and isn't current by the time it is used for testing. Existing data doesn't help much for new functionality, and every time the data is refreshed from production, any test data is likely to be overwritten. Also, it is not always going to test all the 'edge' conditions that are likely to flush out bugs. You still have the task of simulating the dynamics of actual usage of the database, and here you have no alternative to creating 'spoofed' data. Because of the complexities of relational data, it used to be that there was no realistic alternative to developing and testing with live data. However, this is no longer the case: real data can be obfuscated, or it can be created entirely from scratch. The latter used to be impractical, but there are now plenty of third-party tools to choose from. The process of obfuscation isn't risk-free: it must access the live data, and its success has to be carefully monitored. 
Database data security isn't an exciting topic to you or me, but to a hacker it can be an all-consuming obsession, especially if there is financial or political gain involved. This is not the sort of adversary one would wish for, and it is far better to accept, and work with, the security restrictions that exist for using live data in database development work, especially when tools now exist to create large, realistic database test data that can be better for several aspects of testing.

    Read the article

  • DISA Cross Domain Enterprise Solutions on the NetBeans Platform

    - by Geertjan
    Bray 2.0 is a tool based on the NetBeans Platform that assists in creating valid Data Flow Configuration (DFC) files. The DFC Specification was developed to provide a standardized way for defining, validating, and approving data flows for use on cross-domain guarding solutions. A DFC document specifies key entities such as security domains, guards that facilitate data between security domains, data flows that describe how data travels between security domains, filters that transform and validate the data, and more. Related info: http://www.disa.mil/Services/Information-Assurance/Cross-Domain-Solutions The Bray product is in development at Fulcrum IT (http://www.fulcrumco.com). The DFC Specification and Bray were developed in support of the US Department of Defense. Bray 2.0 marks the first release of Bray on the NetBeans Platform and utilizes a number of features that are core to the NetBeans Platform: Modular pluggability. Bray consumers can integrate their own tools, file types, and more into the product with relative ease. Robust UI. The NetBeans Platform's intuitive UI makes it easy to access and manipulate multiple aspects of a DFC. Explorer. The Explorer is a key component that makes the DFC XML easy to traverse and edit, and makes errors easy to find. Context-sensitive help. JavaHelp can be readily integrated for the product as well as all the UI within it. Editors. Any external file can be added to a DFC. Users can register their own editors or use the provided NetBeans editors to edit files. Printing. The NetBeans Platform Print API makes it easy to determine what should be printed and how. Bray 2.0 also provides a number of key features for developing valid, robust DFC files: XML validation. A DFC can be validated against the DFC schema specification. DFC Check List. An interactive, minimal guide for creating a complete DFC. Summary Window. The Summary Window functions like the Navigator in NetBeans IDE. The current "item of interest" is checked against various business rules, and the window provides the ability to quickly find and fix errors. Change Log. Bray audits every change to a DFC and places each one in a change log for users to peruse. Comments. Users can optionally add comments for other users to see. Digital signatures. DFC files can be digitally signed; a signature history and signature validation are provided in Bray. Pluggable security schemes. Bray ships with plain-text and IC-ISM security schemes. If needed, users can integrate additional ones. ...and more to come! New features for Bray are constantly in development, including use of the NetBeans Visual Library, language support, and more.

    Read the article

  • With a little effort you can “SEMI”-protect your C# assemblies with obfuscation.

    - by mbcrump
    This method will not protect your assemblies from a experienced hacker. Everyday we see new keygens, cracks, serials being released that contain ways around copy protection from small companies. This is a simple process that will make a lot of hackers quit because so many others use nothing. If you were a thief would you pick the house that has security signs and an alarm or one that has nothing? To so begin: Obfuscation is the concealment of meaning in communication, making it confusing and harder to interpret. Lets begin by looking at the cartoon below:     You are probably familiar with the term and probably ignored this like most programmers ignore user security. Today, I’m going to show you reflection and a way to obfuscate it. Please understand that I am aware of ways around this, but I believe some security is better than no security.  In this sample program below, the code appears exactly as it does in Visual Studio. When the program runs, you get either a true or false in a console window. Sample Program. using System; using System.Diagnostics; using System.Linq;   namespace ObfuscateMe {     class Program     {                static void Main(string[] args)         {               Console.WriteLine(IsProcessOpen("notepad")); //Returns a True or False depending if you have notepad running.             Console.ReadLine();         }             public static bool IsProcessOpen(string name)         {             return Process.GetProcesses().Any(clsProcess => clsProcess.ProcessName.Contains(name));         }     } }   Pretend, that this is a commercial application. The hacker will only have the executable and maybe a few config files, etc. After reviewing the executable, he can determine if it was produced in .NET by examing the file in ILDASM or Redgate’s Reflector. We are going to examine the file using RedGate’s Reflector. Upon launch, we simply drag/drop the exe over to the application. We have the following for the Main method:   and for the IsProcessOpen method:     Without any other knowledge as to how this works, the hacker could export the exe and get vs project build or copy this code in and our application would run. Using Reflector output. using System; using System.Diagnostics; using System.Linq;   namespace ObfuscateMe {     class Program     {                static void Main(string[] args)         {               Console.WriteLine(IsProcessOpen("notepad"));             Console.ReadLine();         }             public static bool IsProcessOpen(string name)         {             return Process.GetProcesses().Any<Process>(delegate(Process clsProcess)             {                 return clsProcess.ProcessName.Contains(name);             });         }       } } The code is not identical, but returns the same value. At this point, with a little bit of effort you could prevent the hacker from reverse engineering your code so quickly by using Eazfuscator.NET. Eazfuscator.NET is just one of many programs built for this. Visual Studio ships with a community version of Dotfoscutor. So download and load Eazfuscator.NET and drag/drop your exectuable/project into the window. It will work for a few minutes depending if you have a quad-core or not. After it finishes, open the executable in RedGate Reflector and you will get the following: Main After Obfuscation IsProcessOpen Method after obfuscation: As you can see with the jumbled characters, it is not as easy as the first example. 
I am aware of methods around this, but it takes more effort, and unless the hacker is up for the challenge, they will just pick another program. This is also helpful if you are a consultant and charge clients a yearly license fee, since it would prevent the average software developer from jumping into your security routine after you have left. I hope this article helped someone. If you have any feedback, please leave it in the comments below.
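
    As a small follow-up to the article above, a hedged example of the System.Reflection ObfuscationAttribute, which Eazfuscator.NET and most other obfuscators honor. The type and property names are invented for illustration; the point is that members located by name at runtime (serialized types, reflection targets, plugin entry points) can be excluded from renaming while the rest of the assembly stays obfuscated. How a given tool interprets the attribute is tool-specific, so check its documentation.
      using System.Reflection;

      // Hypothetical example: keep this contract's names intact so reflection
      // and serialization keep working; everything else in the assembly is
      // still renamed/obfuscated by the tool.
      [Obfuscation(Exclude = true, ApplyToMembers = true)]
      public class PluginContract
      {
          public string DisplayName { get; set; }
      }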

    Read the article

  • CI tests to enforce specific development rules - good practice?

    - by KeithS
    The following is all purely hypothetical and any particular portion of it may or may not accurately describe real persons or situations, whether living, dead or just pretending. Let's say I'm a senior dev or architect in charge of a dev team working on a project. This project includes a security library for user authentication/authorization of the application under development. The library must be available for developers to edit; however, I wish to "trust but verify" that coders are not doing things that could compromise the security of the finished system, and because this isn't my only responsibility I want it to be done in an automated way. As one example, let's say I have an interface that represents a user which has been authenticated by the system's security library. The interface exposes basic user info and a list of things the user is authorized to do (so that the client app doesn't have to keep asking the server "can I do this?"), all in an immutable fashion of course. There is only one implementation of this interface in production code, and for the purposes of this post we can say that all appropriate measures have been taken to ensure that this implementation can only be used by the one part of our code that needs to be able to create concretions of the interface. The coders have been instructed that this interface and its implementation are sacrosanct and any changes must go through me. However, those are just words; the security library's source is open for editing by necessity. Any of my devs could decide that this secured, private, hash-checked implementation needs to be public so that they could do X, or alternately they could create their own implementation of this public interface in a different library, exposing the hashing algorithm that provides the secure checksum, in order to do Y. I may not be made aware of these changes so that I can beat the developer over the head for it. An attacker could then find these little nuggets in an unobfuscated library of the compiled product, and exploit it to provide fake users and/or falsely-elevated administrative permissions, bypassing the entire security system. This possibility keeps me awake for a couple of nights, and then I create an automated test that reflectively checks the codebase for types deriving from the interface, and fails if it finds any that are not exactly what and where I expect them to be. I compile this test into a project under a separate folder of the VCS that only I have rights to commit to, have CI compile it as an external library of the main project, and set it up to run as part of the CI test suite for user commits. Now, I have an automated test under my complete control that will tell me (and everyone else) if the number of implementations increases without my involvement, or an implementation that I did know about has anything new added or has its modifiers or those of its members changed. I can then investigate further, and regain the opportunity to beat developers over the head as necessary. Is this considered "reasonable" to want to do in situations like this? Am I going to be seen in a negative light for going behind my devs' backs to ensure they aren't doing something they shouldn't?
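
    A minimal sketch of the kind of reflective check described above, written as an NUnit-style test. The names are hypothetical: ISecureUser stands in for the authenticated-user interface and "SecurityCore.AuthenticatedUser" for the one approved implementation; substitute the real types from the security library, and in a real build load the product assemblies explicitly rather than relying on what happens to be in the AppDomain.
      using System;
      using System.Linq;
      using NUnit.Framework;

      [TestFixture]
      public class SecurityContractTests
      {
          [Test]
          public void OnlyTheApprovedImplementationOfISecureUserExists()
          {
              // Find every concrete type implementing the sacrosanct interface.
              var implementations = AppDomain.CurrentDomain.GetAssemblies()
                  .SelectMany(a => a.GetTypes())
                  .Where(t => typeof(ISecureUser).IsAssignableFrom(t)
                              && t.IsClass && !t.IsAbstract)
                  .ToList();

              Assert.AreEqual(1, implementations.Count,
                  "Unexpected ISecureUser implementations: " +
                  string.Join(", ", implementations.Select(t => t.FullName)));

              var impl = implementations.Single();
              Assert.AreEqual("SecurityCore.AuthenticatedUser", impl.FullName);
              Assert.IsFalse(impl.IsPublic, "The implementation must stay internal.");
          }
      }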

    Read the article

  • How do I fix: The handshake failed due to an unexpected packet format?

    - by Greg Finzer
    I am connecting from Windows Server 2008 R2 to a Linux FTP Server running vsFTPd 2.0.7. I am connecting via SSL. Here is the line of code it is failing on: sslStream = new SslStream(stream, false, CertificateValidation); Here is the log: 220 (vsFTPd 2.0.7) AUTH SSL 234 Proceed with negotiation. I receive the following error: System.IO.IOException: The handshake failed due to an unexpected packet format. at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult) at KellermanSoftware.NetFtpLibrary.ProxySocket.InitSsl() at KellermanSoftware.NetFtpLibrary.FTP.Connect(Boolean implicitConnection)
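
    For context, a minimal sketch of the explicit-FTPS step that usually follows the line shown; the method and names are illustrative, not the Kellerman library's internals. "Unexpected packet format" at this point commonly means the TLS handshake started before the full "234 ..." reply to AUTH SSL was read off the control stream, or that the protocol list offered does not include one the server accepts (older vsftpd builds may only speak SSLv3):
      using System.Net.Security;
      using System.Net.Sockets;
      using System.Security.Authentication;

      // Sketch only: wrap the existing control-channel stream in TLS after the
      // 234 reply has been fully consumed. The trust-all callback is for the
      // sketch, not production use.
      static SslStream StartTls(NetworkStream control, string host)
      {
          var ssl = new SslStream(control, false,
              (sender, cert, chain, errors) => true);
          ssl.AuthenticateAsClient(host, null,
              SslProtocols.Ssl3 | SslProtocols.Tls, false);
          return ssl;
      }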

    Read the article

  • The trust relationship between the primary domain and the trusted domain failed. ASP.NET 2.0

    - by Dasupalouie
    Anyone run into this issue? Any help would be appretiated :) Server Error in '/CTCWeb' Application. The trust relationship between the primary domain and the trusted domain failed. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.SystemException: The trust relationship between the primary domain and the trusted domain failed. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [SystemException: The trust relationship between the primary domain and the trusted domain failed. ] System.Security.Principal.NTAccount.TranslateToSids(IdentityReferenceCollection sourceAccounts, Boolean& someFailed) +1185 System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean& someFailed) +44 System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess) +47 System.Security.Principal.WindowsPrincipal.IsInRole(String role) +101 System.Web.Configuration.AuthorizationRule.IsTheUserInAnyRole(StringCollection roles, IPrincipal principal) +123 System.Web.Configuration.AuthorizationRule.IsUserAllowed(IPrincipal user, String verb) +256 System.Web.Configuration.AuthorizationRuleCollection.IsUserAllowed(IPrincipal user, String verb) +199 System.Web.Security.UrlAuthorizationModule.OnEnter(Object source, EventArgs eventArgs) +8771980 System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +68 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +75 -------------------------------------------------------------------------------- Version Information: Microsoft .NET Framework Version:2.0.50727.3603; ASP.NET Version:2.0.50727.3053
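
    For context, the stack trace shows UrlAuthorizationModule calling WindowsPrincipal.IsInRole for a role named in web.config, which forces a name-to-SID translation against the domain on every request. A typical rule that exercises this path looks like the following; the group name is hypothetical:
      <system.web>
        <authorization>
          <allow roles="NET\CTCWebUsers" />
          <deny users="*" />
        </authorization>
      </system.web>
    That IsInRole/Translate call is typically where "the trust relationship between the primary domain and the trusted domain failed" surfaces when the web server cannot reach a domain controller for the domain (or trusted domain) named in the rule or in the user's group memberships.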

    Read the article

  • Error adding certificate to cacerts. Unknown key spec

    - by Alvaro Villanueva
    I am using JDK 1.6 on Windows. I have a .der file (a DER-encoded X.509 certificate) that I would like to add to my cacerts file... so I tried the following: keytool -import -keystore "C:\Program Files\Java\jdk1.6.0_27\jre\lib\security\cacerts" -trustcacerts -alias openldap -file "C:\cacert.der" I got the following error: java.security.cert.CertificateParsingException: java.io.IOException: subject key, java.security.spec.InvalidKeySpecException: Unknown key spec At first, I thought it was a problem with the DER certificate, but then doing the following I got exactly the same error: keytool -list -keystore "C:\Program Files\Java\jdk1.6.0_27\jre\lib\security\cacerts" Any ideas why this problem is appearing? I have not found anything on the Web. Thanks in advance.
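
    A hedged observation: since keytool -list against the untouched cacerts fails with the same exception, the keystore or JRE provider configuration is as suspect as the new certificate. A few quick checks, assuming a standard JDK layout and the default cacerts password of changeit; the paths are the ones from the question:
      keytool -printcert -file "C:\cacert.der"
      openssl x509 -inform der -in "C:\cacert.der" -out "C:\cacert.pem"
      keytool -import -keystore "C:\Program Files\Java\jdk1.6.0_27\jre\lib\security\cacerts" -storepass changeit -trustcacerts -alias openldap -file "C:\cacert.pem"
    If -printcert parses the file cleanly but -list on cacerts still throws "Unknown key spec", that may indicate an entry already in the keystore (or a provider in that JRE) that cannot be parsed; comparing against a pristine cacerts from another JDK install of the same version would help narrow it down.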

    Read the article

  • ASP.NET Web Site Administration Tool unknown Error ASP.NET 4 VS 2010

    - by Gabriel Guimarães
    I was following the MVCMusic tutorial on a machine with full SQL Server 2008 R2 and full Visual Studio Professional, and when I got to the page where it sets up membership (near page 66) the Web Site Administration Tool won't work; I got the following error: An error was encountered. Please return to the previous page and try again. My web.config is like this: <connectionStrings> <clear /> <add name="MvcMusicStoreCN" connectionString="Data Source=.;Initial Catalog=MvcMusicStore;Integrated Security=True" providerName="System.Data.SqlClient" /> <add name="MvcMusicStoreEntities" connectionString="metadata=res://*/Models.Store.csdl|res://*/Models.Store.ssdl|res://*/Models.Store.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.;Initial Catalog=MvcMusicStore;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" /> </connectionStrings> <system.web> <membership defaultProvider="AspNetSqlMembershipProvider"> <providers> <clear /> <add name="AspNetSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider" connectionStringName="MvcMusicStoreCN" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="true" requiresUniqueEmail="false" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" applicationName="/" passwordFormat="Hashed" /> </providers> </membership> <profile> <providers> <clear /> <add name="AspNetSqlProfileProvider" type="System.Web.Profile.SqlProfileProvider" connectionStringName="MvcMusicStoreCN" applicationName="/" /> </providers> </profile> <roleManager enabled="true" defaultProvider="MvcMusicStoreCN"> <providers> <clear /> <add connectionStringName="MvcMusicStoreCN" applicationName="/" name="AspNetSqlRoleProvider" type="System.Web.Security.SqlRoleProvider" /> <add applicationName="/" name="AspNetWindowsTokenRoleProvider" type="System.Web.Security.WindowsTokenRoleProvider" /> </providers> </roleManager> </system.web>

    Read the article

< Previous Page | 196 197 198 199 200 201 202 203 204 205 206 207  | Next Page >