Search Results

Search found 1214 results on 49 pages for 'protection'.


  • Database file is inexplicably locked during SQLite commit

    - by sweeney
    Hello, I'm performing a large number of INSERTs to a SQLite database. I'm using just one thread. I batch the writes to improve performance and have a bit of security in case of a crash. Basically I cache up a bunch of data in memory and then, when I deem appropriate, I loop over all of that data and perform the INSERTs. The code for this is shown below:

    public void Commit()
    {
        using (SQLiteConnection conn = new SQLiteConnection(this.connString))
        {
            conn.Open();
            using (SQLiteTransaction trans = conn.BeginTransaction())
            {
                using (SQLiteCommand command = conn.CreateCommand())
                {
                    command.CommandText = "INSERT OR IGNORE INTO [MY_TABLE] (col1, col2) VALUES (?,?)";
                    command.Parameters.Add(this.col1Param);
                    command.Parameters.Add(this.col2Param);

                    foreach (Data o in this.dataTemp)
                    {
                        this.col1Param.Value = o.Col1Prop;
                        this.col2Param.Value = o.Col2Prop;
                        command.ExecuteNonQuery();
                    }
                }
                this.TryHandleCommit(trans);
            }
            conn.Close();
        }
    }

    I now employ the following gimmick to get the thing to eventually work:

    private void TryHandleCommit(SQLiteTransaction trans)
    {
        try
        {
            trans.Commit();
        }
        catch (Exception e)
        {
            Console.WriteLine("Trying again...");
            this.TryHandleCommit(trans);
        }
    }

    I create my DB like so:

    public DataBase(String path)
    {
        // build connection string
        SQLiteConnectionStringBuilder connString = new SQLiteConnectionStringBuilder();
        connString.DataSource = path;
        connString.Version = 3;
        connString.DefaultTimeout = 5;
        connString.JournalMode = SQLiteJournalModeEnum.Persist;
        connString.UseUTF16Encoding = true;

        using (connection = new SQLiteConnection(connString.ToString()))
        {
            // check for existence of db
            FileInfo f = new FileInfo(path);

            if (!f.Exists) // build new blank db
            {
                SQLiteConnection.CreateFile(path);
                connection.Open();

                using (SQLiteTransaction trans = connection.BeginTransaction())
                {
                    using (SQLiteCommand command = connection.CreateCommand())
                    {
                        command.CommandText = DataBase.CREATE_MATCHES;
                        command.ExecuteNonQuery();
                        command.CommandText = DataBase.CREATE_STRING_DATA;
                        command.ExecuteNonQuery();
                        // TODO add logging
                    }
                    trans.Commit();
                }
                connection.Close();
            }
        }
    }

    I then export the connection string and use it to obtain new connections in different parts of the program. At seemingly random intervals, though at far too great a rate to ignore or otherwise work around this problem, I get an unhandled SQLiteException: Database file is locked. This occurs when I attempt to commit the transaction. No errors seem to occur prior to then. This does not always happen. Sometimes the whole thing runs without a hitch.

    - No reads are being performed on these files before the commits finish.
    - I have the very latest SQLite binary.
    - I'm compiling for .NET 2.0.
    - I'm using VS 2008.
    - The db is a local file.
    - All of this activity is encapsulated within one thread / process.
    - Virus protection is off (though I think that was only relevant if you were connecting over a network?).

    As per Scotsman's post I have implemented the following changes:

    - Journal Mode set to Persist
    - DB files stored in C:\Docs + Settings\ApplicationData via System.Windows.Forms.Application.AppData windows call
    - No inner exception
    - Witnessed on two distinct machines (albeit very similar hardware and software)
    - Have been running Process Monitor - no extraneous processes are attaching themselves to the DB files - the problem is definitely in my code...

    Does anyone have any idea what's going on here? I know I just dropped a whole mess of code, but I've been trying to figure this out for way too long. My thanks to anyone who makes it to the end of this question!
    brian

    UPDATES: Thanks for the suggestions so far! I've implemented many of the suggested changes. I feel that we are getting closer to the answer... however... The code above technically works, but it is non-deterministic! It is not guaranteed to do anything aside from spin in neutral forever. In practice it seems to work somewhere between the 1st and 10th iteration. If I batch my commits at a reasonable interval the damage will be mitigated, but I really do not want to leave things in this state... More suggestions welcome!
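    A minimal sketch of a more defensive version of the commit path described above (an illustration only, not the poster's code: it assumes the System.Data.SQLite provider, and the table, column and parameter names are placeholders). The command is disposed before the commit so no statement can still be holding the database, and the unbounded recursive retry is replaced with a bounded retry followed by a rollback:

    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Data.SQLite;
    using System.Threading;

    public class BatchWriter
    {
        private readonly string connString;

        public BatchWriter(string connString)
        {
            this.connString = connString;
        }

        public void Commit(IEnumerable<KeyValuePair<string, string>> rows)
        {
            using (SQLiteConnection conn = new SQLiteConnection(this.connString))
            {
                conn.Open();
                using (SQLiteTransaction trans = conn.BeginTransaction())
                {
                    // Build and dispose the command before committing, so the command
                    // cannot keep a statement (and therefore a lock) open while the
                    // commit is attempted.
                    using (SQLiteCommand command = conn.CreateCommand())
                    {
                        command.Transaction = trans;
                        command.CommandText =
                            "INSERT OR IGNORE INTO [MY_TABLE] (col1, col2) VALUES (@col1, @col2)";
                        SQLiteParameter p1 = command.Parameters.Add("@col1", DbType.String);
                        SQLiteParameter p2 = command.Parameters.Add("@col2", DbType.String);

                        foreach (KeyValuePair<string, string> row in rows)
                        {
                            p1.Value = row.Key;
                            p2.Value = row.Value;
                            command.ExecuteNonQuery();
                        }
                    }

                    // Retry the commit a fixed number of times instead of recursing forever.
                    const int maxAttempts = 5;
                    for (int attempt = 1; ; attempt++)
                    {
                        try
                        {
                            trans.Commit();
                            break;
                        }
                        catch (SQLiteException)
                        {
                            if (attempt == maxAttempts)
                            {
                                trans.Rollback(); // give up cleanly rather than spin
                                throw;
                            }
                            Thread.Sleep(200); // wait briefly for whatever holds the lock to release it
                        }
                    }
                }
            }
        }
    }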


  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
    Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams. I will not be going in depth into Scrum, but you can find out more about Scrum by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW's Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations.

    Acknowledgements: Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010. Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance, including the Branching Guidance for TFS 2010. Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance. Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team.

    Scenario: A product is developed. RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will be double the price in order to recover costs; the work is approved by the guys with budget, and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy. There are lots of clients on RTM 1.0 calling out for patches.

    As I keep getting Reverse Integration and Forward Integration mixed up and Bill keeps slapping my wrists, I thought I should have a reminder: You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch). - one of my many slaps on the wrist from Bill Heys.

    As I mentioned previously, we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the "Main" or "Trunk" line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds, don't you? Sadly, this is a very common scenario and I have had people argue that branching merely adds complexity.

    Then again I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc.
    - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    The second biggest mistake developers make is branching anything other than the WHOLE "Main" line. If you branch parts of your code and not others it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts, deployment scripts and dependencies inside the "Main" folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well; although I would not recommend that unless you have a massive SQL cluster to house your source code.

    We tried the "add environment" back in South Africa and while it was "phenomenal", especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    You can read about SSW's Rules to better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs. You know you are on the wrong track if you experience one or more of the following symptoms in your development environment:

    - Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences.
    - Merge Mania—spending too much time merging software assets instead of developing them.
    - Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously.
    - Never-Ending Merge—continuous merging activity because there is always more to merge.
    - Wrong-Way Merge—merging a software asset version with an earlier version.
    - Branch Mania—creating many branches for no apparent reason.
    - Cascading Branches—branching but never merging back to the main line.
    - Mysterious Branches—branching for no apparent reason.
    - Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace.
    - Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. Note: Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state.
    - Development Freeze—stopping all development activities while branching, merging, and building new base lines.
    - Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing.

    - Branching and Merging Primer by Chris Birmele - Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia

    In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia

    Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual.
    Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    Figure: Main line, this is where your stable code lives and where any build has known entities, always passes and has a happy test that passes as well.

    Many development projects consist of a single "Main" line of source and artifacts. This is good; at least there is source control. There are however a couple of issues that need to be considered. What happens if:

    - you and your team are working on a new set of features and the customer wants a change to his current version?
    - you are working on two features and the customer decides to abandon one of them?
    - you have two teams working on different feature sets and their changes start interfering with each other?
    - I just use labels instead of branches?

    That's a lot of "what if's", but there is a simple way of preventing this. Branching…

    In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don't see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    Figure: Creating a single feature branch means you can isolate the development work on that branch.

    It's standard practice for large projects with lots of developers to use Feature branching, and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers on other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The main branch has its permissions changed so that contributors to the project can only Branch and Merge with "Main". This will prevent accidental check-ins or checkouts of the "Main" line that would contaminate the code. The developers continue to develop on sprint one until the completion of the sprint. Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2, as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.

    Figure: Once the sprint is complete the Sprint 1 code can then be merged back into the Main line.

    There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developers' work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product. The process of completing your sprint development:

    - The Team completes their work according to their definition of done.
    - Merge from "Main" into "Sprint1" (Forward Integration).
    - Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches. (We will talk about release branches later.)
    - Merge from "Sprint1" into "Main" (Reverse Integration) to commit your changes.
    - Check-in.
    - Delete the Sprint1 branch. Note: The Sprint 1 branch is no longer required as its useful life has been concluded.
    - Check-in.
    - Done.

    But you are not yet done with the Sprint. The goal in Scrum is to have a "potentially shippable product" at the end of every Sprint, and we do not have that yet; we only have finished code.

    Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing.

    In 99% of all projects I have been involved in or watched, a "shippable product" only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80's brainwashing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable, unchanging code. This is where you do what at SSW we call a "Test Please". This is first an internal test of the product to make sure it meets the needs of the customer, and you generally use a resource external to your Team. Then a "Test Please" is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

    Figure: If you find a deviation from the expected result you fix it on the Release branch.

    If during your final testing or your "Test Please" you find there are issues or bugs then you should fix them on the release branch. If you can't fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product Owner. Make sure you leave plenty of time after your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches. Even once you have stabilized and released, you should not delete the release branch as you would with the Sprint branch. It remains useful for servicing, which may extend well beyond the limited life you expect of it. Note: Don't get forced by the business into adding features to a Release branch; that indicates an unspoken requirement: they are really asking for a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product, and you create a whole new backlog to work from.

    Figure: When the Team decides it is happy with the product you can create an RTM branch.

    Once you have fixed all the bugs you can, added any you can't to the Product Backlog, and your Team is happy with the result, you can create a Release. This would consist of doing the final Build and Packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you "shipped". This is really an Audit trail branch that is optional, but it is good practice. You could use a Label, but Labels are not auditable; with a branch, if a dispute is raised by the customer you can produce a verifiable version of the source code for an independent party to check.
    Rare I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch, the RTM branch should never be deleted, or only deleted according to your company's legal policy, which in the UK is usually 7 years.

    Figure: If you have made any changes in the Release you will need to merge back up to Main in order to finalise the changes.

    Nothing is really ever done until it is in Main. The same rules apply when merging any fixes in the Release branch back into Main: you should do a forward merge before a reverse merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer's investment. Note: In order to really achieve protection for both you and your client you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web tests and all the other good engineering practices that help produce reliable software.

    Figure: After the Sprint Planning meeting the process begins again.

    Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2, you can create your new Branch to develop in.

    How do we handle a bug (or bugs) in production that can't wait? Although in Scrum the only work done should be on the backlog, there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can't wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities: In reality Sprint 2 starts when Sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of Sprint 1? It would introduce FI's from Main to Sprint 2, I guess. Your "Figure: Merging Sprint 2 back into Main." covers what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, packaging and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box then resource needs to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost):

    - Backlog: Add the bug to the backlog and fix it in the next Sprint.
    - Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly.
    - Make time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal.
    - Cancel Sprint: Cancel the sprint and concentrate all resource on fixing the bug(s). Note: This can be very costly if the current sprint has already had a lot of work completed, as it will be lost.

    The choice will depend on the complexity and severity of the bug(s), and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3 as they are uncomplicated but severe bugs.

    Figure: Real world issue where a bug needs to be fixed in the current release.
    If the bug(s) is urgent enough then your only option is to fix it in place. You can edit the release branch to find and fix the bug, hopefully creating a test so it can't happen again. Follow the prior process and conduct an internal and customer "Test Please" before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

    Figure: After you have fixed the bug you need to ship again.

    You then need to create another RTM branch to hold, in escrow, the version of the code you released.

    Figure: Main is now out of sync with your Release.

    We now need to get these new changes back up into the Main branch. Do a forward and then a reverse merge again to get the new code into Main. But what about the branch; are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main, and does Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?

    Figure: Getting the changes in Main into Sprint 2 is very important.

    The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2; maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue to get further out of sync with Main. Note: Avoid the "Big bang merge" at all costs.

    Figure: Merging Sprint 2 back into Main, the Forward Integration, and R0 terminates.

    Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.

    Figure: The logical conclusion. This then allows the creation of the next release.

    By now you should be getting the big picture and hopefully you have learned something useful from this post. I know I have enjoyed writing it, as I find these exploratory posts coupled with real world experience really help harden my understanding. Branching is a tool; it is not a silver bullet. Don't overuse it, and avoid "Anti-Patterns" where possible. Although the diagram above looks complicated, I hope that showing you how it is formed simplifies it as much as possible.

    Technorati Tags: Branching, Scrum, VS ALM, TFS 2010, VS2010
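    The forward and reverse integration steps described in this strategy might look roughly like the following from the TFS 2010 command line. This is an illustrative sketch only; the server paths, branch names and check-in comments are hypothetical and are not taken from the article:

        rem Forward Integration: bring Main into the sprint branch and stabilize there
        tf merge $/MyProduct/Main $/MyProduct/Sprint1 /recursive
        tf checkin /comment:"FI: merge Main into Sprint1"

        rem Reverse Integration: merge the stabilized sprint branch back into Main
        tf merge $/MyProduct/Sprint1 $/MyProduct/Main /recursive
        tf checkin /comment:"RI: merge Sprint1 into Main"

        rem The sprint branch has served its purpose and can now be deleted
        tf delete $/MyProduct/Sprint1
        tf checkin /comment:"Delete the Sprint1 branch"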


  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    This is the final article in the quick guide to Oracle IRM. If you've followed everything prior you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide, as this next article is common to both.

    Contents:

    - Why this is the most important part...
    - Understanding the classification and standard rights model
    - Identifying business use cases
    - Creating an effective IRM classification model
    - One single classification across the entire business
    - A context for each and every possible granular use case
    - What makes a good context?
    - Deciding on the use of roles in the context
    - Reviewing the features and security for context roles
    - Summary

    Why this is the most important part...

    Now the real work begins. Installing and getting an IRM system running is as simple as following instructions. However, to actually have an IRM technology easily protecting your most sensitive information without interfering with your users' existing daily work flows, and to be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle have over 10 years of experience in helping customers, and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports, and no matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve:

    - Secure the content at the earliest point possible, and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results in a more secure deployment.
    - K.I.S.S. (Keep It Simple, Stupid). Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured, which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience.
    - Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them of the value of the technology to both your business and them.

    Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments.
    No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in solutions that are difficult to use and manage, and ultimately insecure. The advice and information that follows was born with Martin; he is still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is thanks to Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but that Oracle and their partners have the most experience in delivering successful document security solutions.

    Understanding the classification and standard rights model

    The goal of any successful IRM deployment is to balance the increase in security the technology brings without over complicating the way people use secured content, and to avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is, until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try and print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between:

    - A group of related documents
    - The people that use the documents
    - The roles that these people perform
    - The rights that these people need to perform their role

    The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts, but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification, and a user needs only one right assigned to be able to access all of those documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails.

    Identifying business use cases

    Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, or financial documents. For this and every subsequent use case you must understand how users create and work with documents, to whom they are distributed and how the recipients should interact with them.
    A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article). Then, after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and whether the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further.

    Creating an effective IRM classification model

    Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates.

    The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity. The question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum...

    One single classification across the entire business

    Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this:

    - Document security classification decisions are simple. You only have one context to choose from!
    - User provisioning is simple; just make sure everyone has a role in the only context in the business.
    - Administration is very low; if you assign rights to groups from the business user repository you probably never have to touch IRM administration again.

    There are however some obvious downsides to this model:

    - All users have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is.
    - You cannot delegate control of different documents to different parts of the business; this may not satisfy your regulatory requirements for the separation and delegation of duties.
    - Changing a user's role affects every single document ever secured.

    Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value.
    Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today; imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

    A context for each and every possible granular use case

    Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity... restricted, confidential and top secret. Then let's say that each group and department needs to define access to information for both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group:

    - Q1FY2010 Restricted Internal - Engineering Group 1 - Research
    - Q1FY2010 Restricted Internal - Engineering Group 1 - Design
    - Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Restricted External - Engineering Group 1 - Research
    - Q1FY2010 Restricted External - Engineering Group 1 - Design
    - Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Research
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Design
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Confidential External - Engineering Group 1 - Research
    - Q1FY2010 Confidential External - Engineering Group 1 - Design
    - Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Top Secret External - Engineering Group 1 - Research
    - Q1FY2010 Top Secret External - Engineering Group 1 - Design
    - Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing

    That is 18 contexts for a single engineering group; multiply by 6 to cover every group. You are then creating/reviewing another 18 every 3 months, so after a year you've got 72 contexts for one group alone. What would be the advantages of such a complex classification model?

    - You can satisfy very granular rights requirements; for example, only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis.
    - Your business may have very complex rights requirements, and mapping this directly to IRM may be an obvious exercise.

    The disadvantages of such a classification model are significant...

    - Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
    - From an end user's perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts.
    They may be able to print content from one but not another, or be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology.
    - Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights.
    - Hard to understand who can do what with what. Imagine being the VP of engineering and as part of an internal security audit you are asked the question, "What rights do researchers have to our top secret information?" In this complex model the answer is not simple; it would depend on many roles in many contexts.

    Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable and too few contexts do not give fine enough granularity.

    What makes a good context?

    Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which helps you define a context. First ask these questions about a set of documents:

    - What is the topic?
    - Who are the legitimate contributors on this topic?
    - Who is the authorized readership?

    If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors; therefore it is better to have them sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try and tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business.

    Deciding on the use of roles in the context

    Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon.
    For each context, identify:

    Administrative roles

    - Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are usually the person with the most at risk when sensitive information is lost.
    - Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
    - Context administrator: the person who will enact the decisions of the Business Owner. Sometimes the point of contact, sometimes a trusted IT person.

    Document related roles

    - Contributors: the people who create and edit documents in this context.
    - Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published-work and Work-in-Progress.)
    - Readers: the people who read documents from this context.

    Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles obviously map directly to roles available in Oracle IRM.

    Reviewing the features and security for context roles

    At this point we have decided on a classification of information, and understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First, think about why you are protecting the documents in the first place. It is to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent unauthorized people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions authorized users will often find ways to circumvent the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However, I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community.
    These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

    The big print issue...

    Printing is often an issue of contention; users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, and it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights:

    - Oracle IRM sealed documents can contain watermarks that expose information about the user, time and location of access and the classification of the document. This information would reside in the printed copy, making it easier to trace who printed it.
    - Printed documents are slower to distribute in comparison to their digital counterparts, so time sensitive information in printed format may present a lower risk.
    - Print activity is audited, therefore you can monitor and react to users abusing print rights.

    Summary

    In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group to be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is introducing the complexity to deliver them at the right level. In another article I'm working on I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.
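    To make the separation of rights from content described above concrete, here is a purely illustrative C# sketch of the kind of information a context gathers together. It is not Oracle IRM's actual API or storage format; the class and property names are hypothetical. The point it illustrates is that a sealed file carries only a pointer to the server and its context, while roles, users and rights live on the server and can be changed without touching the documents.

    using System;
    using System.Collections.Generic;

    // Hypothetical model, for illustration only (not the Oracle IRM API).

    // What sealing embeds in the file: where the IRM server is and which
    // context the document belongs to - no users, groups or rights.
    class SealedDocumentMetadata
    {
        public Uri IrmServer;          // location of the server that sealed it
        public Guid ContextId;         // the classification the document is sealed to
        public string DocumentTitle;   // metadata that pertains only to the document
    }

    // What the server knows about a context: the answers to "what is the topic,
    // who contributes, who reads", expressed as roles assigned to users or groups.
    class Context
    {
        public Guid Id;
        public string Topic;                                   // e.g. "Engineering research"
        public Dictionary<string, List<string>> RoleMembers =  // role name -> users/groups
            new Dictionary<string, List<string>>
            {
                { "Contributor", new List<string> { "research-engineers" } },
                { "Reviewer",    new List<string>() },         // optional role
                { "Reader",      new List<string> { "engineering-management" } },
            };
    }

    // Because rights are resolved on the server per context, granting one role
    // gives a user access to every document ever sealed to that context, and
    // revoking it removes access without re-sealing any files.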


  • Issue 15: Oracle Exadata Marketing Campaigns

    - by rituchhibber
     PARTNER FOCUS: Oracle Exadata Marketing Campaign

    Steve McNickle, VP Europe, cVidya

    Steve McNickle is VP Europe for cVidya, an innovative provider of revenue intelligence solutions for telecom, media and entertainment service providers including AT&T, BT, Deutsche Telekom and Vodafone. The company's product portfolio helps operators and service providers maximise margins, improve customer experience and optimise ecosystem relationships through revenue assurance, fraud and security management, sales performance management, pricing analytics, and inter-carrier services. cVidya has partnered with Oracle for more than a decade.

    RESOURCES: Oracle PartnerNetwork (OPN) | Oracle Exastack Program | Oracle Exastack Optimized | Oracle Exastack Labs and Enablement Resources | Oracle Engineered Systems | Oracle Communications | cVidya

    Please could you tell us a little about cVidya's partnering history with Oracle, and expand on your Oracle Exastack accreditations?

    "cVidya was established just over ten years ago and we've had a strong relationship with Oracle almost since the very beginning. Through our Revenue Intelligence work with some of the world's largest service providers we collect tremendous amounts of information, amounting to billions of records per day. We help our clients to collect, store and analyse that data to ensure that their end customers are getting the best levels of service, are billed correctly, and are happy that they are on the correct price plan. We have been an Oracle Gold level partner for seven years, and crucially just two months ago we were also accredited as Oracle Exastack Optimized for MoneyMap, our core Revenue Assurance solution. Very soon we also expect to be Oracle Exastack Optimized for DRMap, our Data Retention solution."

    What unique capabilities and customer benefits does Oracle Exastack add to your applications?

    "Oracle Exastack enables us to deliver radical benefits to our customers. A typical mobile operator in the UK might handle between 500 million and two billion call data record details daily. Each transaction needs to be validated, billed correctly and fraud checked. Because of the enormous volumes involved, our clients demand scalable infrastructure that allows them to efficiently acquire, store and process all that data within controlled cost, space and environmental constraints. We have proved that the Oracle Exadata system can process data up to seven times faster and load it as much as 20 times faster than other standard best-of-breed server approaches. With the Oracle Exadata Database Machine they can reduce their datacentre equipment from, say, the six or seven cabinets that they needed in the past, down to just one. This dramatic simplification delivers incredible value to the customer by cutting down enormously on all of their significant cost, space, energy, cooling and maintenance overheads."

    "The Oracle Exastack Program has given our clients the ability to switch their focus from reactive to proactive. Traditionally they may have spent 80 percent of their day processing, and just 20 percent enabling end customers to see advanced analytics, and avoiding issues before they occur. With our solutions and Oracle Exadata they can now switch that balance around entirely, resulting not only in reduced revenue leakage, but a far higher focus on proactive leakage prevention."

    How has the Oracle Exastack Program transformed your customer business?
"We can already see the impact. Oracle solutions allow our delivery teams to achieve successful deployments, happy customers and self-satisfaction, and the power of Oracle's Exa solutions is easy to measure in terms of their transformational ability. We gained our first sale into a major European telco by demonstrating the major performance gains that would transform their business. Clients can measure the ease of organisational change, the early prevention of business issues, the reduction in manpower required to provide protection and coverage across all their products and services, plus of course end customer satisfaction. If customers know that that service is provided accurately and that their bills are calculated correctly, then over time this satisfaction can be attributed to revenue intelligence and the underlying systems which provide it. Combine this with the further integration we have with the other layers of the Oracle stack, including the telecommunications offerings such as NCC, OCDM and BRM, and the result is even greater customer value—not to mention the increased speed to market and the reduced project risk." What does the Oracle Exastack community bring to cVidya, both in terms of general benefits, and also tangible new opportunities and partnerships? "A great deal. We have participated in the Oracle Exastack community heavily over the past year, and have had lots of meetings with Oracle and our peers around the globe. It brings us into contact with like-minded, innovative partners, who like us are not happy to just stand still and want to take fresh technology to their customer base in order to gain enhanced value. We identified three new partnerships in each of two recent meetings, and hope these will open up new opportunities, not only in areas that exactly match where we operate today, but also in some new associative areas that will expand our reach into new business sectors. Notably, thanks to the Exastack community we were invited on stage at last year's Oracle OpenWorld conference. Appearing so publically with Oracle senior VP Judson Althoff elevated awareness and visibility of cVidya and has enabled us to participate in a number of other events with Oracle over the past eight months. We've been involved in speaking opportunities, forums and exhibitions, providing us with invaluable opportunities that we wouldn't otherwise have got close to." How has Exastack differentiated cVidya as an ISV, and helped you to evolve your business to the next level? "When we are selling to our core customer base of Tier 1 telecommunications providers, we know that they want more than just software. They want an enduring partnership that will last many years, they want innovation, and a forward thinking partner who knows how to guide them on where they need to be to meet market demand three, five or seven years down the line. Membership of respected global bodies, such as the Telemanagement Forum enables us to lead standard adherence in our area of business, giving us a lot of credibility, but Oracle is also involved in this forum with its own telecommunications portfolio, strengthening our position still further. When we approach CEOs, CTOs and CIOs at the very largest Tier 1 operators, not only can we easily show them that our technology is fantastic, we can also talk about our strong partnership with Oracle, and our joint embracing of today's standards and tomorrow's innovation." Where would you like cVidya to be in one year's time? 
"We want to get all of our relevant products Oracle Exastack Optimized. Our MoneyMap Revenue Assurance solution is already Exastack Optimised, our DRMAP Data Retention Solution should be Exastack Optimised within the next month, and our FraudView Fraud Management solution within the next two to three months. We'd then like to extend our Oracle accreditation out to include other members of the Oracle Engineered Systems family. We are moving into the 'Big Data' space, and so we're obviously very keen to work closely with Oracle to conduct pilots, map new technologies onto Oracle Big Data platforms, and embrace and measure the benefits of other Oracle systems, namely Oracle Exalogic Elastic Cloud, the Oracle Exalytics In-Memory Machine and the Oracle SPARC SuperCluster. We would also like to examine how the Oracle Database Appliance might benefit our Tier 2 service provider customers. Finally, we'd also like to continue working with the Oracle Communications Global Business Unit (CGBU), furthering our integration with Oracle billing products so that we are able to quickly deploy fraud solutions into Oracle's Engineered System stack, give operational benefits to our clients that are pre-integrated, more cost-effective, and can be rapidly deployed rapidly and producing benefits in three months, not nine months." Chris Baker ,Senior Vice President, Oracle Worldwide ISV-OEM-Java Sales Chris Baker is the Global Head of ISV/OEM Sales responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills. Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA? "Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straight-forward. We want all of our ISV partners and OEMs to concentrate on the things that they do the best – building applications to meet the unique industry and functional requirements of their customer. We want to ensure that we deliver a best in class application platform so the ISV is free to concentrate their effort on their application functionality and user experience We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. 
It's really important that architecture is the same in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success." How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need? "We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems, including Exadata and Exalogic. So, for example, they can become 'Database Ready', which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready and Oracle Solaris Ready, which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic. Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized' for partners whose applications run best on the Oracle stack and have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use of our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme." How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly? "One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in the cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build into our products to help them to fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented in an on-premise environment. 
But it wasn't just the configuration of the application that took the time, it was actually the infrastructure - the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry." What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships? "My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. The only way that our partners will be able to provide a single-vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data – a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world to be based on Oracle." What opportunities are immediately opened to new ISV partners joining the OPN? "As you know, OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation." Finally, are there any other messages that you would like to share with the Oracle ISV community? "The crucial message that I always like to reinforce is architecture, architecture and architecture! 
The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: “I will be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud”. The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them." -- Gergely Strbik is Oracle Hardware and Software Product Manager for Avnet in Hungary. Avnet Technology Solutions is an Oracle Value Added Distributor focused on the development of the existing Oracle channel. This includes the recruitment and enablement of Oracle partners as well as driving deeper adoption of Oracle's technology and application products within the IT channel. "The main business benefits of ODA for our customers and partners are scalability, flexibility, a great price point for the high performance delivered, and the easily configurable embedded Linux operating system. People welcome a lower point of entry and the ability to grow capacity on demand as their business expands." "Marketing and selling the ODA requires another way of thinking because it is an appliance. We have to transform the ways in which our partners and customers think from buying hardware and software independently to buying complete solutions. Successful early adopters and satisfied customer reactions will certainly help us to sell the ODA. We will have more experience with the product after the first deliveries and installations—end users need to see the power and benefits for themselves." "Our typical ODA customers will be those looking for complete solutions from a single reseller partner who is also able to manage the appliance. They will have enjoyed using Oracle Database but now want a new product that is able to unlock new levels of performance. A higher proportion of potential customers will come from our existing Oracle base, with around 30% from new business, but we intend to evangelise the ODA on the market to see how we can change this balance as all our customers adjust to the concept of 'Hardware and Software, Engineered to Work Together'."

    Read the article

  • Server compromised. Bounce message contains many email addresses message was not sent to

    - by Tim Duncklee
    This is not a dupe. Please read and understand the issue before marking this as a duplicate question that has been answered already. Several customers are reporting bounce messages like the one below. At first I thought their computers had a virus but then I received one that was server generated so the problem is with the server. I've inspected the logs and these email addresses do not appear in the logs. The only thing I see that I do not remember seeing in the past are entries like this: Apr 30 13:34:49 psa86 qmail-queue-handlers[20994]: hook_dir = '/var/qmail//handlers/before-queue' Apr 30 13:34:49 psa86 qmail-queue-handlers[20994]: recipient[3] = '[email protected]' Apr 30 13:34:49 psa86 qmail-queue-handlers[20994]: handlers dir = '/var/qmail//handlers/before-queue/recipient/[email protected]' I've searched here and the web and maybe I'm just not entering the right search terms but I find nothing on this issue. Does anyone know how a hacker would attach additional email addresses to a message at the server and have them not appear in the logs? CentOS release 5.4, Plesk 8.6, QMail 1.03 Hi. This is the qmail-send program at psa.aaaaaa.com. I'm afraid I wasn't able to deliver your message to the following addresses. This is a permanent error; I've given up. Sorry it didn't work out. <[email protected]>: 82.201.133.227 does not like recipient. Remote host said: 550 #5.1.0 Address rejected. Giving up on 82.201.133.227. <[email protected]>: 64.18.7.10 does not like recipient. Remote host said: 550 No such user - psmtp Giving up on 64.18.7.10. <[email protected]>: 173.194.68.27 does not like recipient. Remote host said: 550-5.1.1 The email account that you tried to reach does not exist. Please try 550-5.1.1 double-checking the recipient's email address for typos or 550-5.1.1 unnecessary spaces. Learn more at 550 5.1.1 http://support.google.com/mail/bin/answer.py?answer=6596 w8si1903qag.18 - gsmtp Giving up on 173.194.68.27. <[email protected]>: 207.115.36.23 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.36.23. <[email protected]>: 207.115.37.22 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.22. <[email protected]>: 207.115.37.20 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.20. <[email protected]>: 207.115.37.23 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.23. <[email protected]>: 207.115.36.22 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.36.22. <[email protected]>: 74.205.16.140 does not like recipient. Remote host said: 553 sorry, that domain isn't in my list of allowed rcpthosts; no valid cert for gatewaying (#5.7.1) Giving up on 74.205.16.140. <[email protected]>: 207.115.36.20 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.36.20. <[email protected]>: 207.115.37.21 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.21. <[email protected]>: 192.169.41.23 failed after I sent the message. 
Remote host said: 554 qq Sorry, no valid recipients (#5.1.3) --- Below this line is a copy of the message. Return-Path: <[email protected]> Received: (qmail 15962 invoked from network); 1 May 2013 06:49:34 -0400 Received: from exprod6mo107.postini.com (64.18.1.18) by psa.aaaaaa.com with (DHE-RSA-AES256-SHA encrypted) SMTP; 1 May 2013 06:49:34 -0400 Received: from aaaaaa.com (exprod6lut001.postini.com [64.18.1.199]) by exprod6mo107.postini.com (Postfix) with SMTP id 47F80B8CA4 for <[email protected]>; Wed, 1 May 2013 03:49:33 -0700 (PDT) From: "Support" <[email protected]> To: [email protected] Subject: Detected Potential Junk Mail Date: Wed, 1 May 2013 03:49:33 -0700 Dear [email protected], junk mail protection service has detected suspicious email message(s) since your last visit and directed them to your Message Center. You can inspect your suspicious email at: ... UPDATE: After not seeing this problem for a while, I personally sent a message and immediately got a bounce with several bad addresses that I know I did not send to. These are addresses that are not on my system or on the server. This problem happens with both Mac and Windows clients and with messages generated from Postini and sent to users on my system. This is NOT backscatter. If it was backscatter it would not have the contents of my message in it. UPDATE #2 Here is another bounce. This one was sent by me and the bounce came back immediately. Hi. This is the qmail-send program at psa.aaaaaa.com. I'm afraid I wasn't able to deliver your message to the following addresses. This is a permanent error; I've given up. Sorry it didn't work out. <[email protected]>: 71.74.56.227 does not like recipient. Remote host said: 550 5.1.1 <[email protected]>... User unknown Giving up on 71.74.56.227. <[email protected]>: Connected to 208.34.236.3 but sender was rejected. Remote host said: 550 5.7.1 This system is configured to reject mail from 174.142.62.210 [174.142.62.210] (Host blacklisted - Found on Realtime Black List server 'bl.mailspike.net') <[email protected]>: 66.96.80.22 failed after I sent the message. Remote host said: 552 sorry, mailbox [email protected] is over quota temporarily (#5.1.1) <[email protected]>: 83.145.109.52 does not like recipient. Remote host said: 550 5.1.1 <[email protected]>: Recipient address rejected: User unknown in virtual mailbox table Giving up on 83.145.109.52. <[email protected]>: 69.49.101.234 does not like recipient. Remote host said: 550 5.7.1 <[email protected]>... H:M12 [174.142.62.210] Connection refused due to abuse. Please see http://mailspike.org/anubis/lookup.html or contact your E-mail provider. Giving up on 69.49.101.234. <[email protected]>: 212.55.154.36 does not like recipient. Remote host said: 550-The account has been suspended for inactivity 550 A conta do destinatario encontra-se suspensa por inactividade (#5.2.1) Giving up on 212.55.154.36. <[email protected]>: 199.168.90.102 failed after I sent the message. Remote host said: 552 Transaction [email protected] failed, remote said "550 No such user" (#5.1.1) <[email protected]>: 98.136.217.192 failed after I sent the message. Remote host said: 554 delivery error: dd Sorry your message to [email protected] cannot be delivered. This account has been disabled or discontinued [#102]. - mta1210.sbc.mail.gq1.yahoo.com --- Below this line is a copy of the message. 
Return-Path: <[email protected]> Received: (qmail 2618 invoked from network); 2 Jun 2013 22:32:51 -0400 Received: from 75-138-254-239.dhcp.jcsn.tn.charter.com (HELO ?192.168.0.66?) (75.138.254.239) by psa.aaaaaa.com with SMTP; 2 Jun 2013 22:32:48 -0400 User-Agent: Microsoft-Entourage/12.34.0.120813 Date: Sun, 02 Jun 2013 21:32:39 -0500 Subject: Refinance From: Tim Duncklee <[email protected]> To: Scott jones <[email protected]> Message-ID: <CDD16A79.67344%[email protected]> Thread-Topic: Reference Thread-Index: Ac5gAp2QmTs+LRv0SEOy7AJTX2DWzQ== Mime-version: 1.0 Content-type: multipart/mixed; boundary="B_3453053568_12034440" > This message is in MIME format. Since your mail reader does not understand this format, some or all of this message may not be legible. --B_3453053568_12034440 Content-type: multipart/related; boundary="B_3453053568_11982218" --B_3453053568_11982218 Content-type: multipart/alternative; boundary="B_3453053568_12000660" --B_3453053568_12000660 Content-type: text/plain; charset="ISO-8859-1" Content-transfer-encoding: quoted-printable Scott, ... email body here ... Here are the relevant log entries: Jun 2 22:32:50 psa qmail-queue[2616]: mail: all addreses are uncheckable - need to skip scanning (by deny mode) Jun 2 22:32:50 psa qmail-queue[2616]: scan: the message(drweb.tmp.i2SY0n) sent by [email protected] to [email protected] should be passed without checks, because contains uncheckable addresses Jun 2 22:32:50 psa qmail-queue-handlers[2617]: Handlers Filter before-queue for qmail started ... Jun 2 22:32:50 psa qmail-queue-handlers[2617]: [email protected] Jun 2 22:32:50 psa qmail-queue-handlers[2617]: [email protected] Jun 2 22:32:50 psa qmail-queue-handlers[2617]: hook_dir = '/var/qmail//handlers/before-queue' Jun 2 22:32:50 psa qmail-queue-handlers[2617]: recipient[3] = '[email protected]' Jun 2 22:32:50 psa qmail-queue-handlers[2617]: handlers dir = '/var/qmail//handlers/before-queue/recipient/[email protected]' Jun 2 22:32:51 psa qmail: 1370226771.060211 starting delivery 57: msg 1540285 to remote [email protected] Jun 2 22:32:51 psa qmail: 1370226771.060402 status: local 0/10 remote 1/20 Jun 2 22:32:51 psa qmail: 1370226771.060556 new msg 4915232 Jun 2 22:32:51 psa qmail: 1370226771.060671 info msg 4915232: bytes 687899 from <[email protected]> qp 2618 uid 2020 Jun 2 22:32:51 psa qmail-remote-handlers[2619]: Handlers Filter before-remote for qmail started ... Jun 2 22:32:51 psa qmail-queue-handlers[2617]: starter: submitter[2618] exited normally Jun 2 22:32:51 psa qmail-remote-handlers[2619]: from= Jun 2 22:32:51 psa qmail-remote-handlers[2619]: [email protected] Jun 2 22:32:51 psa qmail: 1370226771.078732 starting delivery 58: msg 4915232 to remote [email protected] Jun 2 22:32:51 psa qmail: 1370226771.078825 status: local 0/10 remote 2/20 Jun 2 22:32:51 psa qmail-remote-handlers[2621]: Handlers Filter before-remote for qmail started ... Jun 2 22:32:51 psa qmail-remote-handlers[2621]: [email protected] Jun 2 22:32:51 psa qmail-remote-handlers[2621]: [email protected]
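    A minimal diagnostic sketch (shell), assuming the standard Plesk qmail layout that appears in the log lines above; the PID and the maillog path are illustrative and may differ on your server (on many Plesk installs the log is /usr/local/psa/var/log/maillog rather than /var/log/maillog):
    ls -lR /var/qmail/handlers/before-queue/
    grep 'qmail-queue-handlers\[2617\]' /var/log/maillog | grep 'recipient\['
    The first command lists every before-queue handler hook registered on the system, globally and per recipient (the same directories the log lines refer to); the second pulls all recipient[] entries the handlers recorded for one delivery, using the qmail-queue-handlers PID from the bounce being investigated, so any recipients injected at queue time would appear next to the ones actually addressed.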

    Read the article

  • PC powers off at random times

    - by Timo Huovinen
    Short Version After experiencing some problems with Mobo batteries, my PC started to power off at random times; the power-off is instant and sudden, and the PC does not restart afterwards. I need help figuring out the cause. Facts: Powers off when PC is playing games Powers off when PC is idle Powers off when PC is in safe mode Powers off when PC is in BIOS Powers off when PC is booted through a Windows installation USB Replaced the motherboard battery several times Replaced the 650W PSU with a 750W PSU Replaced the RAM Swapped the RAM between slots Re-applied thermal paste to the CPU Checked if the motherboard touches the case Nothing is overclocked PC Specs: OS: Windows 7 Ultimate SP1 RAM: Kingston 1333MHz 4GB stick CPU: AMD Phenom II x4 955 Mobo: Gigabyte 88GMA-UD2H rev 2.2 Motherboard battery: CR2032 3v HDD: 500GB Seagate ST3500418AS ATA Device Graphics: ATI/AMD Radeon HD 6870 Very Long version Around 10 months ago I built a brand new gaming PC. Around 6 months ago its time setting in Windows started resetting to the year 2010. I swapped the motherboard battery for a new one of the exact same size and shape and voltage, and the problems disappeared...for around 2 weeks. Then the same problem happened again, time gets reset, I swapped the battery again, and the problem was gone for good and everything was great for about 3 months.. then another problem started happening, the PC started to power off suddenly and without warning at completely random times, sometimes the PC works for an hour, sometimes 5 minutes. So I read on the forums that it might be either the PSU or the motherboard battery or RAM or HDD or the graphics card or the CPU or the motherboard or the drivers or a virus or grounding issues, or something short circuiting, basically it can be anything... I spent some days researching, and decided to remove the possibility of a virus. I reset the CMOS, cleared all BIOS settings and reinstalled Windows 7 after a full format of the HDD, but the random power off kept happening. I then disabled the restart-on-error option in Windows and looked at the event log for error events, but they did not help me figure out the problem. Network list service depends on network location awareness the dependency group failed to start Source Kernel Power Event 41 Task Category 63 Source Disk Event ID 11 Task Category None The driver detected a controller error on device disk I took apart the PC, every little piece, re-applied some expensive thermal paste to the CPU, and double checked that none of the pieces are touching the PC case. The problem was gone, the PC no longer powered off randomly. I re-attached the graphics card and all was good for 4 months... then the power off problem appeared again, but was happening at high intervals, the PC would shut down once in 2 days on average, at random points in time, sometimes when it's idle all day long, sometimes when it's running CRYSIS 2. I checked the CPU temperature, because I know that AMD CPUs have a built-in protection mechanism that switches off the PC if the CPU gets too hot, and the temp was 50C system temp, and 45C CPU after running the PC all day long (I did not do tests to see if there are any temperature spikes, don't know how to do them). Originally the PSU that powered the PC was 650 watts and had one 4-pin cable to power the CPU; I replaced it with a new 750-watt PSU which has two 4-pin cables for the CPU, but the problem remained. 
I removed the graphics card and let the motherboard use the built in one, but the PC kept suddenly powering off at random times. I took apart the PC completely again, and re-applied thermal paste to the CPU, added lots of insulation, and checked for any type of short-circuit possibility again and again, but the problem remained. The problem was like that for some months. I replaced the Battery a couple of times over the time, changed lots of options in windows, and tried everything I could, but it kept powering off, so I stopped using the PC as much as I used to, just living with the random power offs from time to time, until a couple of days ago, when the power off happens almost immediately after powering on the PC. I replaced the RAM with a brand new one, but that did not help. Took apart the PC again, checked for anything anywhere that might cause it, found some small scratches on the very edge of the motherboard to the left of the PCI express x16 slot. This might cause the problem, I thought, but the scratch looks very superficial, not deep at all, and if the scratch did harm the motherboard, wouldn't it cause it to not start at all? And why did it start to power off a while ago, and then suddenly stop powering off? The scratches could not have vanished??? did chkdsk \d but it powered off when it was at 75% I removed the hard disks, the graphics card, while I fiddled with the BIOS settings, and suddenly the PC shut down while I was looking at the BIOS version. This makes me realize, it is not caused by: HDD, Windows, Drivers or the Graphics card I cleared the CMOS again, updated the BIOS from F5 to F6f beta, but that did not help, it might even seem that the PC powers off even sooner. The shutdown even happened to me while I booted through a windows 7 installation USB and was in the repair console. I removed one of the cables powering the CPU, now only one 4pin cable powers it, and it worked for 30mins after doing that, which makes me think that it's the CPU overheating, and because it gets less power, it overheats slower? The things that I am still considering: CPU overheating (does not seem to overheat, maybe false readings?) Motherboard short circuiting (faulty motherboard?) I desperately need some advice in what is faulty, is it a faulty Motherboard or an overheating CPU? or maybe something else? I have been breaking my head over this problem over a span of 6 months. I'm not sure if this is a good place to ask this question, if it is not, then tell me where I can get some experienced help. More info I have also discovered a mysterious piece that seems to have fallen out of the motherboard i119.photobucket.com/albums/o126/yurikolovsky/strangepiece.jpg What is it? Looks like each time that it powers off the datetime gets reset I also found another forum post tomshardware.co.uk/forum/… except I don't have Integrated PeripheralsUSB Keyboard Function option in BIOS :S Comments summary (asked by Random moderator) Q. tell me, if the computer restarts, is it immediately? Does it take a second and then restarts? Do you see (BSOD) or hear (PSU, short circuit) any suspicious when it happens? After reading trough it, it remains the mainboard that is faulty. – JohannesM A. Immediate power off, all the fans stop instantly, all the light turn off instantly, no sound or anything, and it remains off until I turn it back on. Thanks for the feedback, faulty motherboard is what I fear. Q. Try stress-testing the system with Prime95 and see if errors or shutdowns occur when the CPU is under full load. 
– speakr A. Prime95 heat stress test peaked CPU heat at 60C after 5mins, it powered off after 30mins of testing in the middle of the test with no errors, Prime95 Heat test or the stress-testing with low RAM usage (small or in-place FFTs) do not report errors while testing for 10-60 mins. The power off does not seem like it is affected by Prime95 at all Makes me wonder if it's a CPU or Motherboard issue at all. Q. I had similar random/intermittent problems with my old board. It gave one of a few different symptoms: keyboard and/or mouse would die and/or the RAM wouldn't work and/or it would shut down. It was in bad shape. One problems was that my old PSU had literally burned the connector on it (browned around the pins), another was that a broken lead inside the layers of the PCB would work sometimes if it happened to be hot or if I bent the board—by jamming a hunk of wood behind it. I managed to keep the board alive for several years, but eventually nothing I did would make it work correctly anymore. – Synetech A. I will try that as the last resort, ok? ;) Q. Have you tried a different power cord, surge protector, outlet (on a different circuit). It's worth a shot just to ensure it's not subpar wiring or a week circuit (dips in power may cause shutdown if the PSU can't pull enough juice from the wall). – Kyle A. yes, I attached the PC to an entirely different outlet on a different circuit and the problem persists. After connecting it to a different outlet after starting the PC it gave me 3 long beeps and 1 short one, then the PC immediately proceeded to boot up normally. Q. Re-check your mainboard manual and all PSU connections to your mainboard to be sure that nothing is missing (e.g. 12V ATX 4-pin/6-pin connector). If you can provoke shutdowns with Prime95, then consider buying new hardware -- a stable system should run Prime95 for 24h without any errors. Prime95 mentions errors in the log when they occur and gives a summary after the stress test was stopped manually (e.g. "0 errors, 0 warnings", if all is fine) – speakr A. Re-checked, there are no more PSU connectors that I can physically connect, except the one ATX 4-pin (there are 2 that power the CPU) that I disconnected on purpose, I have reconnected it but the problem persists. Q. With one PC I had a short curcuit. The power button on the front plate had its cables soldered, but not isolated, and the contacts were very close to the metal case. A heavier touch was enough to cause a shutdown. The PC's vibration could be enough – ott-- A. yes, it seems to switch off with even the lightest touch, I switched on the PC, then pulled out the front panel power cable that connects to the motherboard so the power button does not work anymore, after 5 mins of working like that, with the power button completely disconnected, just sitting idle, the PC powered off again, I don't think it's the power button. Q. I wonder if you dare to operate components without the case, that is remove motherboard, power, disk ( just put the motherboard on a wooden desk). Don't bend the adapters when running like that. – ott-- A. yes, I do dare to do that, but only tomorrow, too tired/late right now.

    Read the article

  • Cannot open root device xvda1 or unknown-block(0,0)

    - by svoop
    I'm putting together a Dom0 and three DomU (all Gentoo) with kernel 3.5.7 and Xen 4.1.1. Each Dom has it's own md (md0 for Dom0, md1 for Dom1 etc). Dom0 works fine so far, however, I'm stuck trying to create DomUs. It appears the xvda1 device on DomU is not created or accessible: Parsing config file dom1 domainbuilder: detail: xc_dom_allocate: cmdline="root=/dev/xvda1 console=hvc0 root=/dev/xvda1 ro 3", features="(null)" domainbuilder: detail: xc_dom_kernel_mem: called domainbuilder: detail: xc_dom_boot_xen_init: ver 4.1, caps xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 domainbuilder: detail: xc_dom_parse_image: called domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ... domainbuilder: detail: loader probe failed domainbuilder: detail: xc_dom_find_loader: trying Linux bzImage loader ... domainbuilder: detail: xc_dom_malloc : 10530 kB domainbuilder: detail: xc_dom_do_gunzip: unzip ok, 0x2f7a4f -> 0xa48888 domainbuilder: detail: loader probe OK xc: detail: elf_parse_binary: phdr: paddr=0x1000000 memsz=0x558000 xc: detail: elf_parse_binary: phdr: paddr=0x1558000 memsz=0x690e8 xc: detail: elf_parse_binary: phdr: paddr=0x15c2000 memsz=0x127c0 xc: detail: elf_parse_binary: phdr: paddr=0x15d5000 memsz=0x533000 xc: detail: elf_parse_binary: memory: 0x1000000 -> 0x1b08000 xc: detail: elf_xen_parse_note: GUEST_OS = "linux" xc: detail: elf_xen_parse_note: GUEST_VERSION = "2.6" xc: detail: elf_xen_parse_note: XEN_VERSION = "xen-3.0" xc: detail: elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000 xc: detail: elf_xen_parse_note: ENTRY = 0xffffffff815d5210 xc: detail: elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000 xc: detail: elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb" xc: detail: elf_xen_parse_note: PAE_MODE = "yes" xc: detail: elf_xen_parse_note: LOADER = "generic" xc: detail: elf_xen_parse_note: unknown xen elf note (0xd) xc: detail: elf_xen_parse_note: SUSPEND_CANCEL = 0x1 xc: detail: elf_xen_parse_note: HV_START_LOW = 0xffff800000000000 xc: detail: elf_xen_parse_note: PADDR_OFFSET = 0x0 xc: detail: elf_xen_addr_calc_check: addresses: xc: detail: virt_base = 0xffffffff80000000 xc: detail: elf_paddr_offset = 0x0 xc: detail: virt_offset = 0xffffffff80000000 xc: detail: virt_kstart = 0xffffffff81000000 xc: detail: virt_kend = 0xffffffff81b08000 xc: detail: virt_entry = 0xffffffff815d5210 xc: detail: p2m_base = 0xffffffffffffffff domainbuilder: detail: xc_dom_parse_elf_kernel: xen-3.0-x86_64: 0xffffffff81000000 -> 0xffffffff81b08000 domainbuilder: detail: xc_dom_mem_init: mem 5000 MB, pages 0x138800 pages, 4k each domainbuilder: detail: xc_dom_mem_init: 0x138800 pages domainbuilder: detail: xc_dom_boot_mem_init: called domainbuilder: detail: x86_compat: guest xen-3.0-x86_64, address size 64 domainbuilder: detail: xc_dom_malloc : 10000 kB domainbuilder: detail: xc_dom_build_image: called domainbuilder: detail: xc_dom_alloc_segment: kernel : 0xffffffff81000000 -> 0xffffffff81b08000 (pfn 0x1000 + 0xb08 pages) domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x1000+0xb08 at 0x7fdec9b85000 xc: detail: elf_load_binary: phdr 0 at 0x0x7fdec9b85000 -> 0x0x7fdeca0dd000 xc: detail: elf_load_binary: phdr 1 at 0x0x7fdeca0dd000 -> 0x0x7fdeca1460e8 xc: detail: elf_load_binary: phdr 2 at 0x0x7fdeca147000 -> 0x0x7fdeca1597c0 xc: detail: elf_load_binary: phdr 3 at 0x0x7fdeca15a000 -> 0x0x7fdeca1cd000 domainbuilder: detail: xc_dom_alloc_segment: phys2mach : 0xffffffff81b08000 -> 0xffffffff824cc000 (pfn 0x1b08 + 
0x9c4 pages) domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x1b08+0x9c4 at 0x7fdec91c1000 domainbuilder: detail: xc_dom_alloc_page : start info : 0xffffffff824cc000 (pfn 0x24cc) domainbuilder: detail: xc_dom_alloc_page : xenstore : 0xffffffff824cd000 (pfn 0x24cd) domainbuilder: detail: xc_dom_alloc_page : console : 0xffffffff824ce000 (pfn 0x24ce) domainbuilder: detail: nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 0xffffffffffffffff, 1 table(s) domainbuilder: detail: nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 0xffffffffffffffff, 1 table(s) domainbuilder: detail: nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 0xffffffffbfffffff, 1 table(s) domainbuilder: detail: nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 0xffffffff827fffff, 20 table(s) domainbuilder: detail: xc_dom_alloc_segment: page tables : 0xffffffff824cf000 -> 0xffffffff824e6000 (pfn 0x24cf + 0x17 pages) domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x24cf+0x17 at 0x7fdece676000 domainbuilder: detail: xc_dom_alloc_page : boot stack : 0xffffffff824e6000 (pfn 0x24e6) domainbuilder: detail: xc_dom_build_image : virt_alloc_end : 0xffffffff824e7000 domainbuilder: detail: xc_dom_build_image : virt_pgtab_end : 0xffffffff82800000 domainbuilder: detail: xc_dom_boot_image: called domainbuilder: detail: arch_setup_bootearly: doing nothing domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64 <= matches domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32 domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64 domainbuilder: detail: xc_dom_update_guest_p2m: dst 64bit, pages 0x138800 domainbuilder: detail: clear_page: pfn 0x24ce, mfn 0x37ddee domainbuilder: detail: clear_page: pfn 0x24cd, mfn 0x37ddef domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x24cc+0x1 at 0x7fdece675000 domainbuilder: detail: start_info_x86_64: called domainbuilder: detail: setup_hypercall_page: vaddr=0xffffffff81001000 pfn=0x1001 domainbuilder: detail: domain builder memory footprint domainbuilder: detail: allocated domainbuilder: detail: malloc : 20658 kB domainbuilder: detail: anon mmap : 0 bytes domainbuilder: detail: mapped domainbuilder: detail: file mmap : 0 bytes domainbuilder: detail: domU mmap : 21392 kB domainbuilder: detail: arch_setup_bootlate: shared_info: pfn 0x0, mfn 0xbaa6f domainbuilder: detail: shared_info_x86_64: called domainbuilder: detail: vcpu_x86_64: called domainbuilder: detail: vcpu_x86_64: cr3: pfn 0x24cf mfn 0x37dded domainbuilder: detail: launch_vm: called, ctxt=0x7fff224e4ea0 domainbuilder: detail: xc_dom_release: called Daemon running with PID 4639 [ 0.000000] Initializing cgroup subsys cpuset [ 0.000000] Initializing cgroup subsys cpu [ 0.000000] Linux version 3.5.7-gentoo (root@majordomo) (gcc version 4.5.4 (Gentoo 4.5.4 p1.0, pie-0.4.7) ) #1 SMP Tue Nov 20 10:49:51 CET 2012 [ 0.000000] Command line: root=/dev/xvda1 console=hvc0 root=/dev/xvda1 ro 3 [ 0.000000] ACPI in unprivileged domain disabled [ 0.000000] e820: BIOS-provided physical RAM map: [ 0.000000] Xen: [mem 0x0000000000000000-0x000000000009ffff] usable [ 0.000000] Xen: [mem 0x00000000000a0000-0x00000000000fffff] reserved [ 0.000000] Xen: [mem 0x0000000000100000-0x0000000138ffffff] usable [ 0.000000] NX (Execute Disable) protection: active 
[ 0.000000] MPS support code is not built-in. [ 0.000000] Using acpi=off or acpi=noirq or pci=noacpi may have problem [ 0.000000] DMI not present or invalid. [ 0.000000] No AGP bridge found [ 0.000000] e820: last_pfn = 0x139000 max_arch_pfn = 0x400000000 [ 0.000000] e820: last_pfn = 0x100000 max_arch_pfn = 0x400000000 [ 0.000000] init_memory_mapping: [mem 0x00000000-0xffffffff] [ 0.000000] init_memory_mapping: [mem 0x100000000-0x138ffffff] [ 0.000000] NUMA turned off [ 0.000000] Faking a node at [mem 0x0000000000000000-0x0000000138ffffff] [ 0.000000] Initmem setup node 0 [mem 0x00000000-0x138ffffff] [ 0.000000] NODE_DATA [mem 0x1387fc000-0x1387fffff] [ 0.000000] Zone ranges: [ 0.000000] DMA [mem 0x00010000-0x00ffffff] [ 0.000000] DMA32 [mem 0x01000000-0xffffffff] [ 0.000000] Normal [mem 0x100000000-0x138ffffff] [ 0.000000] Movable zone start for each node [ 0.000000] Early memory node ranges [ 0.000000] node 0: [mem 0x00010000-0x0009ffff] [ 0.000000] node 0: [mem 0x00100000-0x138ffffff] [ 0.000000] SMP: Allowing 1 CPUs, 0 hotplug CPUs [ 0.000000] No local APIC present [ 0.000000] APIC: disable apic facility [ 0.000000] APIC: switched to apic NOOP [ 0.000000] e820: cannot find a gap in the 32bit address range [ 0.000000] e820: PCI devices with unassigned 32bit BARs may break! [ 0.000000] e820: [mem 0x139100000-0x1394fffff] available for PCI devices [ 0.000000] Booting paravirtualized kernel on Xen [ 0.000000] Xen version: 4.1.1 (preserve-AD) [ 0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:1 nr_node_ids:1 [ 0.000000] PERCPU: Embedded 26 pages/cpu @ffff880138400000 s75712 r8192 d22592 u2097152 [ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 1259871 [ 0.000000] Policy zone: Normal [ 0.000000] Kernel command line: root=/dev/xvda1 console=hvc0 root=/dev/xvda1 ro 3 [ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes) [ 0.000000] __ex_table already sorted, skipping sort [ 0.000000] Checking aperture... [ 0.000000] No AGP bridge found [ 0.000000] Memory: 4943980k/5128192k available (3937k kernel code, 448k absent, 183764k reserved, 1951k data, 524k init) [ 0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1 [ 0.000000] Hierarchical RCU implementation. [ 0.000000] NR_IRQS:4352 nr_irqs:256 16 [ 0.000000] Console: colour dummy device 80x25 [ 0.000000] console [tty0] enabled [ 0.000000] console [hvc0] enabled [ 0.000000] installing Xen timer for CPU 0 [ 0.000000] Detected 3411.602 MHz processor. [ 0.000999] Calibrating delay loop (skipped), value calculated using timer frequency.. 6823.20 BogoMIPS (lpj=3411602) [ 0.000999] pid_max: default: 32768 minimum: 301 [ 0.000999] Security Framework initialized [ 0.001355] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes) [ 0.002974] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes) [ 0.003441] Mount-cache hash table entries: 256 [ 0.003595] Initializing cgroup subsys cpuacct [ 0.003599] Initializing cgroup subsys freezer [ 0.003637] ENERGY_PERF_BIAS: Set to 'normal', was 'performance' [ 0.003637] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8) [ 0.003643] CPU: Physical Processor ID: 0 [ 0.003645] CPU: Processor Core ID: 0 [ 0.003702] SMP alternatives: switching to UP code [ 0.011791] Freeing SMP alternatives: 12k freed [ 0.011835] Performance Events: unsupported p6 CPU model 42 no PMU driver, software events only. [ 0.011886] Brought up 1 CPUs [ 0.011998] Grant tables using version 2 layout. 
[ 0.012009] Grant table initialized [ 0.012034] NET: Registered protocol family 16 [ 0.012328] PCI: setting up Xen PCI frontend stub [ 0.015089] bio: create slab <bio-0> at 0 [ 0.015158] ACPI: Interpreter disabled. [ 0.015180] xen/balloon: Initialising balloon driver. [ 0.015180] xen-balloon: Initialising balloon driver. [ 0.015180] vgaarb: loaded [ 0.016126] SCSI subsystem initialized [ 0.016314] PCI: System does not support PCI [ 0.016320] PCI: System does not support PCI [ 0.016435] NetLabel: Initializing [ 0.016438] NetLabel: domain hash size = 128 [ 0.016440] NetLabel: protocols = UNLABELED CIPSOv4 [ 0.016447] NetLabel: unlabeled traffic allowed by default [ 0.016475] Switching to clocksource xen [ 0.017434] pnp: PnP ACPI: disabled [ 0.017501] NET: Registered protocol family 2 [ 0.017864] IP route cache hash table entries: 262144 (order: 9, 2097152 bytes) [ 0.019322] TCP established hash table entries: 524288 (order: 11, 8388608 bytes) [ 0.020376] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes) [ 0.020497] TCP: Hash tables configured (established 524288 bind 65536) [ 0.020500] TCP: reno registered [ 0.020525] UDP hash table entries: 4096 (order: 5, 131072 bytes) [ 0.020564] UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes) [ 0.020624] NET: Registered protocol family 1 [ 0.020658] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) [ 0.020662] software IO TLB [mem 0xfb632000-0xff631fff] (64MB) mapped at [ffff8800fb632000-ffff8800ff631fff] [ 0.020750] platform rtc_cmos: registered platform RTC device (no PNP device found) [ 0.021378] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 0.023378] msgmni has been set to 9656 [ 0.023544] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 0.023549] io scheduler noop registered [ 0.023551] io scheduler deadline registered [ 0.023580] io scheduler cfq registered (default) [ 0.023650] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 0.023845] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [ 0.024082] Non-volatile memory driver v1.3 [ 0.024085] Linux agpgart interface v0.103 [ 0.024207] Event-channel device installed. [ 0.024265] [drm] Initialized drm 1.1.0 20060810 [ 0.024268] [drm:i915_init] *ERROR* drm/i915 can't work without intel_agp module! [ 0.025145] brd: module loaded [ 0.025565] loop: module loaded [ 0.045646] Initialising Xen virtual ethernet driver. [ 0.198264] i8042: PNP: No PS/2 controller found. Probing ports directly. 
[ 0.199096] i8042: No controller found [ 0.199139] mousedev: PS/2 mouse device common for all mice [ 0.259303] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0 [ 0.259353] rtc_cmos: probe of rtc_cmos failed with error -38 [ 0.259440] md: raid1 personality registered for level 1 [ 0.259542] nf_conntrack version 0.5.0 (16384 buckets, 65536 max) [ 0.259732] ip_tables: (C) 2000-2006 Netfilter Core Team [ 0.259747] TCP: cubic registered [ 0.259886] NET: Registered protocol family 10 [ 0.260031] ip6_tables: (C) 2000-2006 Netfilter Core Team [ 0.260070] sit: IPv6 over IPv4 tunneling driver [ 0.260194] NET: Registered protocol family 17 [ 0.260213] Bridge firewalling registered [ 5.360075] XENBUS: Waiting for devices to initialise: 25s...20s...15s...10s...5s...0s...235s...230s...225s...220s...215s...210s...205s...200s...195s...190s...185s...180s...175s...170s...165s...160s...155s...150s...145s...140s...135s...130s...125s...120s...115s...110s...105s...100s...95s...90s...85s...80s...75s...70s...65s...60s...55s...50s...45s...40s...35s...30s...25s...20s...15s...10s...5s...0s... [ 270.360180] XENBUS: Timeout connecting to device: device/vbd/51713 (local state 3, remote state 1) [ 270.360273] md: Waiting for all devices to be available before autodetect [ 270.360277] md: If you don't use raid, use raid=noautodetect [ 270.360388] md: Autodetecting RAID arrays. [ 270.360392] md: Scanned 0 and added 0 devices. [ 270.360394] md: autorun ... [ 270.360395] md: ... autorun DONE. [ 270.360431] VFS: Cannot open root device "xvda1" or unknown-block(0,0): error -6 [ 270.360435] Please append a correct "root=" boot option; here are the available partitions: [ 270.360440] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) [ 270.360444] Pid: 1, comm: swapper/0 Not tainted 3.5.7-gentoo #1 [ 270.360446] Call Trace: [ 270.360454] [<ffffffff813d2205>] ? panic+0xbe/0x1c5 [ 270.360459] [<ffffffff813d2358>] ? printk+0x4c/0x51 [ 270.360464] [<ffffffff815d5fb7>] ? mount_block_root+0x24f/0x26d [ 270.360469] [<ffffffff815d62b6>] ? prepare_namespace+0x168/0x192 [ 270.360474] [<ffffffff815d5ca7>] ? kernel_init+0x1b0/0x1c2 [ 270.360477] [<ffffffff815d5500>] ? loglevel+0x34/0x34 [ 270.360482] [<ffffffff813d5a64>] ? kernel_thread_helper+0x4/0x10 [ 270.360486] [<ffffffff813d4038>] ? retint_restore_args+0x5/0x6 [ 270.360490] [<ffffffff813d5a60>] ? 
gs_change+0x13/0x13 The config: name = "dom1" bootloader = "/usr/bin/pygrub" root = "/dev/xvda1 ro" extra = "3" # runlevel memory = 5000 disk = [ 'phy:/dev/md1,xvda1,w' ] # vif = [ 'ip=..., vifname=veth1' ] # none for now Here are some details on the Dom0 kernel (grepping for "xen"): CONFIG_XEN=y CONFIG_XEN_DOM0=y CONFIG_XEN_PRIVILEGED_GUEST=y CONFIG_XEN_PVHVM=y CONFIG_XEN_MAX_DOMAIN_MEMORY=500 CONFIG_XEN_SAVE_RESTORE=y CONFIG_PCI_XEN=y CONFIG_XEN_PCIDEV_FRONTEND=y # CONFIG_XEN_BLKDEV_FRONTEND is not set CONFIG_XEN_BLKDEV_BACKEND=y # CONFIG_XEN_NETDEV_FRONTEND is not set CONFIG_XEN_NETDEV_BACKEND=y CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y CONFIG_HVC_XEN=y CONFIG_HVC_XEN_FRONTEND=y # CONFIG_XEN_WDT is not set # CONFIG_XEN_FBDEV_FRONTEND is not set # Xen driver support CONFIG_XEN_BALLOON=y # CONFIG_XEN_SELFBALLOONING is not set CONFIG_XEN_SCRUB_PAGES=y CONFIG_XEN_DEV_EVTCHN=y CONFIG_XEN_BACKEND=y CONFIG_XENFS=y CONFIG_XEN_COMPAT_XENFS=y CONFIG_XEN_SYS_HYPERVISOR=y CONFIG_XEN_XENBUS_FRONTEND=y CONFIG_XEN_GNTDEV=m CONFIG_XEN_GRANT_DEV_ALLOC=m CONFIG_SWIOTLB_XEN=y CONFIG_XEN_TMEM=y CONFIG_XEN_PCIDEV_BACKEND=m CONFIG_XEN_PRIVCMD=y CONFIG_XEN_ACPI_PROCESSOR=m And the DomU kernel (grepping for "xen"): CONFIG_XEN=y CONFIG_XEN_DOM0=y CONFIG_XEN_PRIVILEGED_GUEST=y CONFIG_XEN_PVHVM=y CONFIG_XEN_MAX_DOMAIN_MEMORY=500 CONFIG_XEN_SAVE_RESTORE=y CONFIG_PCI_XEN=y CONFIG_XEN_PCIDEV_FRONTEND=y CONFIG_XEN_BLKDEV_FRONTEND=y CONFIG_XEN_NETDEV_FRONTEND=y CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y CONFIG_HVC_XEN=y CONFIG_HVC_XEN_FRONTEND=y # CONFIG_XEN_WDT is not set # CONFIG_XEN_FBDEV_FRONTEND is not set # Xen driver support CONFIG_XEN_BALLOON=y # CONFIG_XEN_SELFBALLOONING is not set CONFIG_XEN_SCRUB_PAGES=y CONFIG_XEN_DEV_EVTCHN=y # CONFIG_XEN_BACKEND is not set CONFIG_XENFS=y CONFIG_XEN_COMPAT_XENFS=y CONFIG_XEN_SYS_HYPERVISOR=y CONFIG_XEN_XENBUS_FRONTEND=y CONFIG_XEN_GNTDEV=m CONFIG_XEN_GRANT_DEV_ALLOC=m CONFIG_SWIOTLB_XEN=y CONFIG_XEN_TMEM=y CONFIG_XEN_PRIVCMD=y CONFIG_XEN_ACPI_PROCESSOR=m Any ideas what I'm doing wrong here? Thanks a lot!
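    The boot log above shows blkfront giving up while the backend for device/vbd/51713 never leaves state 1 (local state 3, remote state 1), so one thing worth checking from Dom0 is whether the backend for /dev/md1 was actually created and connected. A hedged diagnostic sketch, assuming the xl/xm toolstack and xenstore utilities that ship with Xen 4.1 (log paths vary by toolstack); the domain name matches the config above:
    xl block-list dom1
    xenstore-ls -f /local/domain/0/backend/vbd | grep -B1 -A6 51713
    tail -n 50 /var/log/xen/xen-hotplug.log /var/log/xen/xend.log
    block-list shows whether the toolstack attached a vbd to dom1 at all (use xm block-list with xend); on a healthy attach the xenstore backend entry reaches state 4 (connected), whereas state 1 matches the timeout seen in the guest; and the hotplug/xend logs usually record why a phy: backend such as /dev/md1 failed to attach (device missing, busy, or mis-specified).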

    Read the article

  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
    Introduction This document shows how to use the Oracle Solaris 11 Automated Install server to automate the Solaris 11 Zones installation. I will demonstrate how to set up the Automated Install server to provide a hands-off installation process for the Global Zone and two Non-Global Zones located on the same system. Architecture layout: Figure 1. Architecture layout Prerequisite Set up the Automated Install (AI) server using the following instructions: “How to Set Up Automated Installation Services for Oracle Solaris 11” The first step in this setup will be creating two Solaris 11 Zones configuration files. Step 1: Create the Solaris 11 Zones configuration files  The Solaris Zones configuration files should be in the format of the zonecfg export command. # zonecfg -z zone1 export > /var/tmp/zone1 # cat /var/tmp/zone1 create -b set brand=solaris set zonepath=/rpool/zones/zone1 set autoboot=true set ip-type=exclusive add anet set linkname=net0 set lower-link=auto set configure-allowed-address=true set link-protection=mac-nospoof set mac-address=random end  Create a backup copy of this file under a different name, for example, zone2. # cp /var/tmp/zone1 /var/tmp/zone2 Modify the second configuration file with the zone2 configuration information. You should change the zonepath, for example: set zonepath=/rpool/zones/zone2 Step 2: Copy and share the Zones configuration files  Create the NFS directory for the Zones configuration files # mkdir /export/zone_config Share the directory for the Zones configuration files # share -o ro /export/zone_config Copy the Zones configuration files into the NFS shared directory # cp /var/tmp/zone1 /var/tmp/zone2  /export/zone_config Verify that the NFS share has been created using the following command # share export_zone_config      /export/zone_config     nfs     sec=sys,ro Step 3: Add the Global Zone as a client to the Install Service Use the installadm create-client command to associate the client (Global Zone) with the install service. To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service. # installadm create-client -e "0:14:4f:2:a:19" -n s11x86service You can verify the client creation using the following command # installadm list -c Service Name  Client Address     Arch   Image Path ------------  --------------     ----   ---------- s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service We can see the client install service name (s11x86service), MAC address (00:14:4F:02:0A:19) and architecture (i386). Step 4: Global Zone manifest setup  First, get a list of the installation services and the manifests associated with them: # installadm list -m Service Name   Manifest        Status ------------   --------        ------ default-i386   orig_default   Default s11x86service  orig_default   Default Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command as follows: # installadm export -n s11x86service -m orig_default >  /var/tmp/orig_default.xml Create a backup copy of this file under a different name, for example, orig_default2.xml, and edit the copy. 
# cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location of the config file for the zone.The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones: zone1 and zone2 You should replace the server_ip with the ip address of the NFS server. <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>   <ai_instance>     <target>       <logical>         <zpool name="rpool" is_root="true">           <filesystem name="export" mountpoint="/export"/>           <filesystem name="export/home"/>           <be name="solaris"/>         </zpool>       </logical>     </target>     <software type="IPS">       <source>         <publisher name="solaris">           <origin name="http://pkg.oracle.com/solaris/release"/>         </publisher>       </source>       <software_data action="install">         <name>pkg:/entire@latest</name>         <name>pkg:/group/system/solaris-large-server</name>       </software_data>     </software>     <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>     <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>   </ai_instance> </auto_install> The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service # installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest You can verify the manifest creation using the following command # installadm list -n s11x86service  -m Service/Manifest Name  Status   Criteria ---------------------  ------   -------- s11x86service    orig_default        Default  None    gzmanifest          Inactive None We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service. Step 5: Non Global Zone manifest setup The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used The default AI manifest for Zones is available at /usr/share/auto_install/manifest/zone_default.xml. In this example we should use the default AI manifest for zones The following sample default AI manifest for zones # cat /usr/share/auto_install/manifest/zone_default.xml <?xml version="1.0" encoding="UTF-8"?> <!--  Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved. --> <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>     <ai_instance name="zone_default">         <target>             <logical>                 <zpool name="rpool">                     <!--                       Subsequent <filesystem> entries instruct an installer                       to create following ZFS datasets:                           <root_pool>/export         (mounted on /export)                           <root_pool>/export/home    (mounted on /export/home)                       Those datasets are part of standard environment                       and should be always created.                       
In rare cases, if there is a need to deploy a zone                       without these datasets, either comment out or remove                       <filesystem> entries. In such scenario, it has to be also                       assured that in case of non-interactive post-install                       configuration, creation of initial user account is                       disabled in related system configuration profile.                       Otherwise the installed zone would fail to boot.                     -->                     <filesystem name="export" mountpoint="/export"/>                     <filesystem name="export/home"/>                     <be name="solaris">                         <options>                             <option name="compression" value="on"/>                         </options>                     </be>                 </zpool>             </logical>         </target>         <software type="IPS">             <destination>                 <image>                     <!-- Specify locales to install -->                     <facet set="false">facet.locale.*</facet>                     <facet set="true">facet.locale.de</facet>                     <facet set="true">facet.locale.de_DE</facet>                     <facet set="true">facet.locale.en</facet>                     <facet set="true">facet.locale.en_US</facet>                     <facet set="true">facet.locale.es</facet>                     <facet set="true">facet.locale.es_ES</facet>                     <facet set="true">facet.locale.fr</facet>                     <facet set="true">facet.locale.fr_FR</facet>                     <facet set="true">facet.locale.it</facet>                     <facet set="true">facet.locale.it_IT</facet>                     <facet set="true">facet.locale.ja</facet>                     <facet set="true">facet.locale.ja_*</facet>                     <facet set="true">facet.locale.ko</facet>                     <facet set="true">facet.locale.ko_*</facet>                     <facet set="true">facet.locale.pt</facet>                     <facet set="true">facet.locale.pt_BR</facet>                     <facet set="true">facet.locale.zh</facet>                     <facet set="true">facet.locale.zh_CN</facet>                     <facet set="true">facet.locale.zh_TW</facet>                 </image>             </destination>             <software_data action="install">                 <name>pkg:/group/system/solaris-small-server</name>             </software_data>         </software>     </ai_instance> </auto_install> (optional) We can customize the default AI manifest for Zones Create a backup copy of this file under a different name, for example, zone_default2.xml and edit the copy # cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml Edit the copy (/var/tmp/zone_default2.xml) The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest. 
# installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2"

Note: Do not use the following elements or attributes in a non-global zone AI manifest:
    The auto_reboot attribute of the ai_instance element
    The http_proxy attribute of the ai_instance element
    The disk child element of the target element
    The noswap attribute of the logical element
    The nodump attribute of the logical element
    The configuration element

Step 6: Global Zone profile setup

We are going to create a global zone configuration profile that includes the host information, for example: host name, IP address, name services, and so on.

# sysconfig create-profile -o /var/tmp/gz_profile.xml

You need to provide the host information, for example:
    Default router
    Root password
    DNS information

The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration.

Figure 2. Profile creation menu

You can validate the profile using the following command:

# installadm validate -n s11x86service -P /var/tmp/gz_profile.xml
Validating static profile gz_profile.xml...  Passed

Next, associate the profile with the install service. In our case, use the following syntax:

# installadm create-profile -n s11x86service -f /var/tmp/gz_profile.xml -p gz_profile

You can verify the profile creation using the following command:

# installadm list -n s11x86service -p
Service/Profile Name  Criteria
--------------------  --------
s11x86service
   gz_profile         None

We can see that the gz_profile profile has been created and associated with the s11x86service install service.

Step 7: Set up the Solaris Zones configuration profiles

This step is similar to the global zone profile creation in Step 6.

# sysconfig create-profile -o /var/tmp/zone1_profile.xml
# sysconfig create-profile -o /var/tmp/zone2_profile.xml

You can validate the profiles using the following commands:

# installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml
Validating static profile zone1_profile.xml...  Passed
# installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml
Validating static profile zone2_profile.xml...  Passed

Next, associate the profiles with the install service.

The following example adds the zone1_profile.xml configuration profile to the s11x86service install service and specifies that zone1 should use this profile.

# installadm create-profile -n s11x86service -f /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1

The following example adds the zone2_profile.xml configuration profile to the s11x86service install service and specifies that zone2 should use this profile.

# installadm create-profile -n s11x86service -f /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2

You can verify the profile creation using the following command:

# installadm list -n s11x86service -p
Service/Profile Name  Criteria
--------------------  --------
s11x86service
   zone1_profile      zonename = zone1
   zone2_profile      zonename = zone2
   gz_profile         None

We can see that we have three profiles in the s11x86service install service:
    Global Zone: gz_profile
    zone1: zone1_profile
    zone2: zone2_profile
Step 8: Global Zone setup

Associate the global zone client with the manifest and the profile that we created in the previous steps.

The following example adds the manifest and profile to the client (global zone), where:
    gzmanifest is the name of the manifest.
    gz_profile is the name of the configuration profile.
    mac="0:14:4f:2:a:19" is the client (global zone) MAC address.
    s11x86service is the install service name.

# installadm set-criteria -m gzmanifest -p gz_profile -c mac="0:14:4f:2:a:19" -n s11x86service

You can verify the manifest and profile association using the following command:

# installadm list -n s11x86service -p -m
Service/Manifest Name  Status   Criteria
---------------------  ------   --------
s11x86service
   gzmanifest                   mac  = 00:14:4F:02:0A:19
   orig_default        Default  None

Service/Profile Name  Criteria
--------------------  --------
s11x86service
   gz_profile         mac      = 00:14:4F:02:0A:19
   zone2_profile      zonename = zone2
   zone1_profile      zonename = zone1

Step 9: Provision the host with the Non-Global Zones

The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up.

First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system).

Figure 3. Network Boot

Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted. Press the down arrow to highlight the second option, labeled Automated Install, and then press Enter. We need to do this because we want to prevent a system from being automatically re-installed if it were to be booted from the network accidentally.

Figure 4. GRUB Menu

What follows is the continuation of a networked boot from the Automated Install server. The client downloads a mini-root (a small set of files needed to run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload.

Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed.

You can list the status of all the Solaris Zones using the following command:

# zoneadm list -civ

Once the zones are in the running state, you can log in to a zone using the following command:

# zlogin zone1

Troubleshooting Automated Installations

If an installation to a client system fails, you can find the client log at /system/volatile/install_log.

NOTE: Zones are not installed if any of the following errors occurs:
    A zone config file is not syntactically correct.
    A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed.
    Required datasets are not configured in the global zone.

For more troubleshooting information see "Installing Oracle Solaris 11 Systems".

Conclusion

This paper demonstrated the benefits of using the Automated Install server to simplify Non-Global Zones setup, including the creation and configuration of the global zone manifest and the Solaris Zones profiles.

    Read the article

  • Localhost not working after installing PHP on Mountain Lion

    - by zen
    I've installed php using brew install php54 --with-mysql, I've set up all the path correctly. which php will give me /usr/local/bin/php php -v will give me PHP 5.4.8 (cli) (built: Nov 20 2012 09:29:31) php --ini will give me: Configuration File (php.ini) Path: /usr/local/etc/php/5.4 Loaded Configuration File: /usr/local/etc/php/5.4/php.ini Scan for additional .ini files in: /usr/local/etc/php/5.4/conf.d Additional .ini files parsed: (none) apachectl -V | grep httpd.conf will give me -D SERVER_CONFIG_FILE="/private/etc/apache2/httpd.conf" I believe everything is correct, but after I restarted my apache I keep getting error Service Temporarily Unavailable The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later. This is my httpd.conf file: # # This is the main Apache HTTP server configuration file. It contains the # configuration directives that give the server its instructions. # See <URL:http://httpd.apache.org/docs/2.2> for detailed information. # In particular, see # <URL:http://httpd.apache.org/docs/2.2/mod/directives.html> # for a discussion of each configuration directive. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "log/foo_log" # with ServerRoot set to "/usr" will be interpreted by the # server as "/usr/log/foo_log". # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # Do not add a slash at the end of the directory path. If you point # ServerRoot at a non-local disk, be sure to point the LockFile directive # at a local disk. If you wish to share the same ServerRoot for multiple # httpd daemons, you will need to change at least LockFile and PidFile. # ServerRoot "/usr" # # Listen: Allows you to bind Apache to specific IP addresses and/or # ports, instead of the default. See also the <VirtualHost> # directive. # # Change this to Listen on specific IP addresses as shown below to # prevent Apache from glomming onto all bound IP addresses. # #Listen 12.34.56.78:80 Listen 127.0.0.1:80 # # Dynamic Shared Object (DSO) Support # # To be able to use the functionality of a module which was built as a DSO you # have to place corresponding `LoadModule' lines at this location so the # directives contained in it are actually available _before_ they are used. # Statically compiled modules (those listed by `httpd -l') do not need # to be loaded here. 
# # Example: # LoadModule foo_module modules/mod_foo.so # LoadModule authn_file_module libexec/apache2/mod_authn_file.so LoadModule authn_dbm_module libexec/apache2/mod_authn_dbm.so LoadModule authn_anon_module libexec/apache2/mod_authn_anon.so LoadModule authn_dbd_module libexec/apache2/mod_authn_dbd.so LoadModule authn_default_module libexec/apache2/mod_authn_default.so LoadModule authz_host_module libexec/apache2/mod_authz_host.so LoadModule authz_groupfile_module libexec/apache2/mod_authz_groupfile.so LoadModule authz_user_module libexec/apache2/mod_authz_user.so LoadModule authz_dbm_module libexec/apache2/mod_authz_dbm.so LoadModule authz_owner_module libexec/apache2/mod_authz_owner.so LoadModule authz_default_module libexec/apache2/mod_authz_default.so LoadModule auth_basic_module libexec/apache2/mod_auth_basic.so LoadModule auth_digest_module libexec/apache2/mod_auth_digest.so LoadModule cache_module libexec/apache2/mod_cache.so LoadModule disk_cache_module libexec/apache2/mod_disk_cache.so LoadModule mem_cache_module libexec/apache2/mod_mem_cache.so LoadModule dbd_module libexec/apache2/mod_dbd.so LoadModule dumpio_module libexec/apache2/mod_dumpio.so LoadModule reqtimeout_module libexec/apache2/mod_reqtimeout.so LoadModule ext_filter_module libexec/apache2/mod_ext_filter.so LoadModule include_module libexec/apache2/mod_include.so LoadModule filter_module libexec/apache2/mod_filter.so LoadModule substitute_module libexec/apache2/mod_substitute.so LoadModule deflate_module libexec/apache2/mod_deflate.so LoadModule log_config_module libexec/apache2/mod_log_config.so LoadModule log_forensic_module libexec/apache2/mod_log_forensic.so LoadModule logio_module libexec/apache2/mod_logio.so LoadModule env_module libexec/apache2/mod_env.so LoadModule mime_magic_module libexec/apache2/mod_mime_magic.so LoadModule cern_meta_module libexec/apache2/mod_cern_meta.so LoadModule expires_module libexec/apache2/mod_expires.so LoadModule headers_module libexec/apache2/mod_headers.so LoadModule ident_module libexec/apache2/mod_ident.so LoadModule usertrack_module libexec/apache2/mod_usertrack.so #LoadModule unique_id_module libexec/apache2/mod_unique_id.so LoadModule setenvif_module libexec/apache2/mod_setenvif.so LoadModule version_module libexec/apache2/mod_version.so LoadModule proxy_module libexec/apache2/mod_proxy.so LoadModule proxy_connect_module libexec/apache2/mod_proxy_connect.so LoadModule proxy_ftp_module libexec/apache2/mod_proxy_ftp.so LoadModule proxy_http_module libexec/apache2/mod_proxy_http.so LoadModule proxy_scgi_module libexec/apache2/mod_proxy_scgi.so LoadModule proxy_ajp_module libexec/apache2/mod_proxy_ajp.so LoadModule proxy_balancer_module libexec/apache2/mod_proxy_balancer.so LoadModule ssl_module libexec/apache2/mod_ssl.so LoadModule mime_module libexec/apache2/mod_mime.so LoadModule dav_module libexec/apache2/mod_dav.so LoadModule status_module libexec/apache2/mod_status.so LoadModule autoindex_module libexec/apache2/mod_autoindex.so LoadModule asis_module libexec/apache2/mod_asis.so LoadModule info_module libexec/apache2/mod_info.so LoadModule cgi_module libexec/apache2/mod_cgi.so LoadModule dav_fs_module libexec/apache2/mod_dav_fs.so LoadModule vhost_alias_module libexec/apache2/mod_vhost_alias.so LoadModule negotiation_module libexec/apache2/mod_negotiation.so LoadModule dir_module libexec/apache2/mod_dir.so LoadModule imagemap_module libexec/apache2/mod_imagemap.so LoadModule actions_module libexec/apache2/mod_actions.so LoadModule speling_module 
libexec/apache2/mod_speling.so LoadModule userdir_module libexec/apache2/mod_userdir.so LoadModule alias_module libexec/apache2/mod_alias.so LoadModule rewrite_module libexec/apache2/mod_rewrite.so #LoadModule perl_module libexec/apache2/mod_perl.so LoadModule php5_module local/Cellar/php54/5.4.8/libexec/apache2/libphp5.so #LoadModule hfs_apple_module libexec/apache2/mod_hfs_apple.so <IfModule !mpm_netware_module> <IfModule !mpm_winnt_module> # # If you wish httpd to run as a different user or group, you must run # httpd as root initially and it will switch. # # User/Group: The name (or #number) of the user/group to run httpd as. # It is usually good practice to create a dedicated user and group for # running httpd, as with most system services. # User _www Group _www </IfModule> </IfModule> # 'Main' server configuration # # The directives in this section set up the values used by the 'main' # server, which responds to any requests that aren't handled by a # <VirtualHost> definition. These values also provide defaults for # any <VirtualHost> containers you may define later in the file. # # All of these directives may appear inside <VirtualHost> containers, # in which case these default settings will be overridden for the # virtual host being defined. # # # ServerAdmin: Your address, where problems with the server should be # e-mailed. This address appears on some server-generated pages, such # as error documents. e.g. [email protected] # ServerAdmin [email protected] # # ServerName gives the name and port that the server uses to identify itself. # This can often be determined automatically, but we recommend you specify # it explicitly to prevent problems during startup. # # If your host doesn't have a registered DNS name, enter its IP address here. # #ServerName www.example.com:80 # # DocumentRoot: The directory out of which you will serve your # documents. By default, all requests are taken from this directory, but # symbolic links and aliases may be used to point to other locations. # DocumentRoot "/Library/WebServer/Documents" # # Each directory to which Apache has access can be configured with respect # to which services and features are allowed and/or disabled in that # directory (and its subdirectories). # # First, we configure the "default" to be a very restrictive set of # features. # <Directory /> Options FollowSymLinks AllowOverride None Order deny,allow Deny from all </Directory> # # Note that from this point forward you must specifically allow # particular features to be enabled - so if something's not working as # you might expect, make sure that you have specifically enabled it # below. # # # This should be changed to whatever you set DocumentRoot to. # <Directory "/Library/WebServer/Documents"> # # Possible values for the Options directive are "None", "All", # or any combination of: # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews # # Note that "MultiViews" must be named *explicitly* --- "Options All" # doesn't give it to you. # # The Options directive is both complicated and important. Please see # http://httpd.apache.org/docs/2.2/mod/core.html#options # for more information. # Options Indexes FollowSymLinks MultiViews # # AllowOverride controls what directives may be placed in .htaccess files. # It can be "All", "None", or any combination of the keywords: # Options FileInfo AuthConfig Limit # AllowOverride None # # Controls who can get stuff from this server. 
# Order allow,deny Allow from all </Directory> # # DirectoryIndex: sets the file that Apache will serve if a directory # is requested. # <IfModule dir_module> DirectoryIndex index.html </IfModule> # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <FilesMatch "^\.([Hh][Tt]|[Dd][Ss]_[Ss])"> Order allow,deny Deny from all Satisfy All </FilesMatch> # # Apple specific filesystem protection. # <Files "rsrc"> Order allow,deny Deny from all Satisfy All </Files> <DirectoryMatch ".*\.\.namedfork"> Order allow,deny Deny from all Satisfy All </DirectoryMatch> # # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog "/private/var/log/apache2/error_log" # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn <IfModule log_config_module> # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> # You need to enable mod_logio.c to use %I and %O LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> # # The location and format of the access logfile (Common Logfile Format). # If you do not define any access logfiles within a <VirtualHost> # container, they will be logged here. Contrariwise, if you *do* # define per-<VirtualHost> access logfiles, transactions will be # logged therein and *not* in this file. # CustomLog "/private/var/log/apache2/access_log" common # # If you prefer a logfile with access, agent, and referer information # (Combined Logfile Format) you can use the following directive. # #CustomLog "/private/var/log/apache2/access_log" combined </IfModule> <IfModule alias_module> # # Redirect: Allows you to tell clients about documents that used to # exist in your server's namespace, but do not anymore. The client # will make a new request for the document at its new location. # Example: # Redirect permanent /foo http://www.example.com/bar # # Alias: Maps web paths into filesystem paths and is used to # access content that does not live under the DocumentRoot. # Example: # Alias /webpath /full/filesystem/path # # If you include a trailing / on /webpath then the server will # require it to be present in the URL. You will also likely # need to provide a <Directory> section to allow access to # the filesystem path. # # ScriptAlias: This controls which directories contain server scripts. # ScriptAliases are essentially the same as Aliases, except that # documents in the target directory are treated as applications and # run by the server when requested rather than as documents sent to the # client. The same rules about trailing "/" apply to ScriptAlias # directives as to Alias. # ScriptAliasMatch ^/cgi-bin/((?!(?i:webobjects)).*$) "/Library/WebServer/CGI-Executables/$1" </IfModule> <IfModule cgid_module> # # ScriptSock: On threaded servers, designate the path to the UNIX # socket used to communicate with the CGI daemon of mod_cgid. 
# #Scriptsock /private/var/run/cgisock </IfModule> # # "/Library/WebServer/CGI-Executables" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # <Directory "/Library/WebServer/CGI-Executables"> AllowOverride None Options None Order allow,deny Allow from all </Directory> # # DefaultType: the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain <IfModule mime_module> # # TypesConfig points to the file containing the list of mappings from # filename extension to MIME-type. # TypesConfig /private/etc/apache2/mime.types # # AddType allows you to add to or override the MIME configuration # file specified in TypesConfig for specific file types. # #AddType application/x-gzip .tgz # # AddEncoding allows you to have certain browsers uncompress # information on the fly. Note: Not all browsers support this. # #AddEncoding x-compress .Z #AddEncoding x-gzip .gz .tgz # # If the AddEncoding directives above are commented-out, then you # probably should define those extensions to indicate media types: # AddType application/x-compress .Z AddType application/x-gzip .gz .tgz # # AddHandler allows you to map certain file extensions to "handlers": # actions unrelated to filetype. These can be either built into the server # or added with the Action directive (see below) # # To use CGI scripts outside of ScriptAliased directories: # (You will also need to add "ExecCGI" to the "Options" directive.) # #AddHandler cgi-script .cgi # For type maps (negotiated resources): #AddHandler type-map var # # Filters allow you to process content before it is sent to the client. # # To parse .shtml files for server-side includes (SSI): # (You will also need to add "Includes" to the "Options" directive.) # #AddType text/html .shtml #AddOutputFilter INCLUDES .shtml </IfModule> # # The mod_mime_magic module allows the server to use various hints from the # contents of the file itself to determine its type. The MIMEMagicFile # directive tells the module where the hint definitions are located. # #MIMEMagicFile /private/etc/apache2/magic # # Customizable error responses come in three flavors: # 1) plain text 2) local redirects 3) external redirects # # Some examples: #ErrorDocument 500 "The server made a boo boo." #ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://www.example.com/subscription_info.html # # # MaxRanges: Maximum number of Ranges in a request before # returning the entire resource, or one of the special # values 'default', 'none' or 'unlimited'. # Default setting is to accept 200 Ranges. #MaxRanges unlimited # # EnableMMAP and EnableSendfile: On systems that support it, # memory-mapping or the sendfile syscall is used to deliver # files. This usually improves server performance, but must # be turned off when serving from networked-mounted # filesystems or if support for these functions is otherwise # broken on your system. 
# #EnableMMAP off #EnableSendfile off # 6894961 TraceEnable off # Supplemental configuration # # The configuration files in the /private/etc/apache2/extra/ directory can be # included to add extra features or to modify the default configuration of # the server, or you may simply copy their contents here and change as # necessary. # Server-pool management (MPM specific) Include /private/etc/apache2/extra/httpd-mpm.conf # Multi-language error messages #Include /private/etc/apache2/extra/httpd-multilang-errordoc.conf # Fancy directory listings Include /private/etc/apache2/extra/httpd-autoindex.conf # Language settings Include /private/etc/apache2/extra/httpd-languages.conf # User home directories Include /private/etc/apache2/extra/httpd-userdir.conf # Real-time info on requests and configuration #Include /private/etc/apache2/extra/httpd-info.conf # Virtual hosts #Include /private/etc/apache2/extra/httpd-vhosts.conf # Local access to the Apache HTTP Server Manual Include /private/etc/apache2/extra/httpd-manual.conf # Distributed authoring and versioning (WebDAV) #Include /private/etc/apache2/extra/httpd-dav.conf # Various default settings #Include /private/etc/apache2/extra/httpd-default.conf # Secure (SSL/TLS) connections #Include /private/etc/apache2/extra/httpd-ssl.conf # # Note: The following must must be present to support # starting without SSL on platforms with no /dev/random equivalent # but a statically compiled-in mod_ssl. # <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> Include /private/etc/apache2/other/*.conf Please help me, I've spent 2 days trying to make it work. Btw error log keep saying [Tue Nov 20 10:47:40 2012] [error] proxy: HTTP: disabled connection for (localhost) and [Tue Nov 20 11:59:32 2012] [error] (61)Connection refused: proxy: HTTP: attempt to connect to [fe80::1]:20559 (localhost) failed

    Read the article

  • Squid + Dans Guardian (simple configuration)

    - by The Digital Ninja
    I just built a new proxy server and compiled the latest versions of squid and dansguardian. We use basic authentication to select what users are allowed outside of our network. It seems squid is working just fine and accepts my username and password and lets me out. But if i connect to dans guardian, it prompts for username and password and then displays a message saying my username is not allowed to access the internet. Its pulling my username for the error message so i know it knows who i am. The part i get confused on is i thought that part was handled all by squid, and squid is working flawlessly. Can someone please double check my config files and tell me if i'm missing something or there is some new option i must set to get this to work. dansguardian.conf # Web Access Denied Reporting (does not affect logging) # # -1 = log, but do not block - Stealth mode # 0 = just say 'Access Denied' # 1 = report why but not what denied phrase # 2 = report fully # 3 = use HTML template file (accessdeniedaddress ignored) - recommended # reportinglevel = 3 # Language dir where languages are stored for internationalisation. # The HTML template within this dir is only used when reportinglevel # is set to 3. When used, DansGuardian will display the HTML file instead of # using the perl cgi script. This option is faster, cleaner # and easier to customise the access denied page. # The language file is used no matter what setting however. # languagedir = '/etc/dansguardian/languages' # language to use from languagedir. language = 'ukenglish' # Logging Settings # # 0 = none 1 = just denied 2 = all text based 3 = all requests loglevel = 3 # Log Exception Hits # Log if an exception (user, ip, URL, phrase) is matched and so # the page gets let through. Can be useful for diagnosing # why a site gets through the filter. on | off logexceptionhits = on # Log File Format # 1 = DansGuardian format 2 = CSV-style format # 3 = Squid Log File Format 4 = Tab delimited logfileformat = 1 # Log file location # # Defines the log directory and filename. #loglocation = '/var/log/dansguardian/access.log' # Network Settings # # the IP that DansGuardian listens on. If left blank DansGuardian will # listen on all IPs. That would include all NICs, loopback, modem, etc. # Normally you would have your firewall protecting this, but if you want # you can limit it to only 1 IP. Yes only one. filterip = # the port that DansGuardian listens to. filterport = 8080 # the ip of the proxy (default is the loopback - i.e. this server) proxyip = 127.0.0.1 # the port DansGuardian connects to proxy on proxyport = 3128 # accessdeniedaddress is the address of your web server to which the cgi # dansguardian reporting script was copied # Do NOT change from the default if you are not using the cgi. # accessdeniedaddress = 'http://YOURSERVER.YOURDOMAIN/cgi-bin/dansguardian.pl' # Non standard delimiter (only used with accessdeniedaddress) # Default is enabled but to go back to the original standard mode dissable it. nonstandarddelimiter = on # Banned image replacement # Images that are banned due to domain/url/etc reasons including those # in the adverts blacklists can be replaced by an image. This will, # for example, hide images from advert sites and remove broken image # icons from banned domains. # 0 = off # 1 = on (default) usecustombannedimage = 1 custombannedimagefile = '/etc/dansguardian/transparent1x1.gif' # Filter groups options # filtergroups sets the number of filter groups. 
A filter group is a set of content # filtering options you can apply to a group of users. The value must be 1 or more. # DansGuardian will automatically look for dansguardianfN.conf where N is the filter # group. To assign users to groups use the filtergroupslist option. All users default # to filter group 1. You must have some sort of authentication to be able to map users # to a group. The more filter groups the more copies of the lists will be in RAM so # use as few as possible. filtergroups = 1 filtergroupslist = '/etc/dansguardian/filtergroupslist' # Authentication files location bannediplist = '/etc/dansguardian/bannediplist' exceptioniplist = '/etc/dansguardian/exceptioniplist' banneduserlist = '/etc/dansguardian/banneduserlist' exceptionuserlist = '/etc/dansguardian/exceptionuserlist' # Show weighted phrases found # If enabled then the phrases found that made up the total which excedes # the naughtyness limit will be logged and, if the reporting level is # high enough, reported. on | off showweightedfound = on # Weighted phrase mode # There are 3 possible modes of operation: # 0 = off = do not use the weighted phrase feature. # 1 = on, normal = normal weighted phrase operation. # 2 = on, singular = each weighted phrase found only counts once on a page. # weightedphrasemode = 2 # Positive result caching for text URLs # Caches good pages so they don't need to be scanned again # 0 = off (recommended for ISPs with users with disimilar browsing) # 1000 = recommended for most users # 5000 = suggested max upper limit urlcachenumber = # # Age before they are stale and should be ignored in seconds # 0 = never # 900 = recommended = 15 mins urlcacheage = # Smart and Raw phrase content filtering options # Smart is where the multiple spaces and HTML are removed before phrase filtering # Raw is where the raw HTML including meta tags are phrase filtered # CPU usage can be effectively halved by using setting 0 or 1 # 0 = raw only # 1 = smart only # 2 = both (default) phrasefiltermode = 2 # Lower casing options # When a document is scanned the uppercase letters are converted to lower case # in order to compare them with the phrases. However this can break Big5 and # other 16-bit texts. If needed preserve the case. As of version 2.7.0 accented # characters are supported. # 0 = force lower case (default) # 1 = do not change case preservecase = 0 # Hex decoding options # When a document is scanned it can optionally convert %XX to chars. # If you find documents are getting past the phrase filtering due to encoding # then enable. However this can break Big5 and other 16-bit texts. # 0 = disabled (default) # 1 = enabled hexdecodecontent = 0 # Force Quick Search rather than DFA search algorithm # The current DFA implementation is not totally 16-bit character compatible # but is used by default as it handles large phrase lists much faster. # If you wish to use a large number of 16-bit character phrases then # enable this option. # 0 = off (default) # 1 = on (Big5 compatible) forcequicksearch = 0 # Reverse lookups for banned site and URLs. # If set to on, DansGuardian will look up the forward DNS for an IP URL # address and search for both in the banned site and URL lists. This would # prevent a user from simply entering the IP for a banned address. # It will reduce searching speed somewhat so unless you have a local caching # DNS server, leave it off and use the Blanket IP Block option in the # bannedsitelist file instead. reverseaddresslookups = off # Reverse lookups for banned and exception IP lists. 
# If set to on, DansGuardian will look up the forward DNS for the IP # of the connecting computer. This means you can put in hostnames in # the exceptioniplist and bannediplist. # It will reduce searching speed somewhat so unless you have a local DNS server, # leave it off. reverseclientiplookups = off # Build bannedsitelist and bannedurllist cache files. # This will compare the date stamp of the list file with the date stamp of # the cache file and will recreate as needed. # If a bsl or bul .processed file exists, then that will be used instead. # It will increase process start speed by 300%. On slow computers this will # be significant. Fast computers do not need this option. on | off createlistcachefiles = on # POST protection (web upload and forms) # does not block forms without any file upload, i.e. this is just for # blocking or limiting uploads # measured in kibibytes after MIME encoding and header bumph # use 0 for a complete block # use higher (e.g. 512 = 512Kbytes) for limiting # use -1 for no blocking #maxuploadsize = 512 #maxuploadsize = 0 maxuploadsize = -1 # Max content filter page size # Sometimes web servers label binary files as text which can be very # large which causes a huge drain on memory and cpu resources. # To counter this, you can limit the size of the document to be # filtered and get it to just pass it straight through. # This setting also applies to content regular expression modification. # The size is in Kibibytes - eg 2048 = 2Mb # use 0 for no limit maxcontentfiltersize = # Username identification methods (used in logging) # You can have as many methods as you want and not just one. The first one # will be used then if no username is found, the next will be used. # * proxyauth is for when basic proxy authentication is used (no good for # transparent proxying). # * ntlm is for when the proxy supports the MS NTLM authentication # protocol. (Only works with IE5.5 sp1 and later). **NOT IMPLEMENTED** # * ident is for when the others don't work. It will contact the computer # that the connection came from and try to connect to an identd server # and query it for the user owner of the connection. usernameidmethodproxyauth = on usernameidmethodntlm = off # **NOT IMPLEMENTED** usernameidmethodident = off # Preemptive banning - this means that if you have proxy auth enabled and a user accesses # a site banned by URL for example they will be denied straight away without a request # for their user and pass. This has the effect of requiring the user to visit a clean # site first before it knows who they are and thus maybe an admin user. # This is how DansGuardian has always worked but in some situations it is less than # ideal. So you can optionally disable it. Default is on. # As a side effect disabling this makes AD image replacement work better as the mime # type is know. preemptivebanning = on # Misc settings # if on it adds an X-Forwarded-For: <clientip> to the HTTP request # header. This may help solve some problem sites that need to know the # source ip. on | off forwardedfor = on # if on it uses the X-Forwarded-For: <clientip> to determine the client # IP. This is for when you have squid between the clients and DansGuardian. # Warning - headers are easily spoofed. on | off usexforwardedfor = off # if on it logs some debug info regarding fork()ing and accept()ing which # can usually be ignored. These are logged by syslog. 
It is safe to leave # it on or off logconnectionhandlingerrors = on # Fork pool options # sets the maximum number of processes to sporn to handle the incomming # connections. Max value usually 250 depending on OS. # On large sites you might want to try 180. maxchildren = 180 # sets the minimum number of processes to sporn to handle the incomming connections. # On large sites you might want to try 32. minchildren = 32 # sets the minimum number of processes to be kept ready to handle connections. # On large sites you might want to try 8. minsparechildren = 8 # sets the minimum number of processes to sporn when it runs out # On large sites you might want to try 10. preforkchildren = 10 # sets the maximum number of processes to have doing nothing. # When this many are spare it will cull some of them. # On large sites you might want to try 64. maxsparechildren = 64 # sets the maximum age of a child process before it croaks it. # This is the number of connections they handle before exiting. # On large sites you might want to try 10000. maxagechildren = 5000 # Process options # (Change these only if you really know what you are doing). # These options allow you to run multiple instances of DansGuardian on a single machine. # Remember to edit the log file path above also if that is your intention. # IPC filename # # Defines IPC server directory and filename used to communicate with the log process. ipcfilename = '/tmp/.dguardianipc' # URL list IPC filename # # Defines URL list IPC server directory and filename used to communicate with the URL # cache process. urlipcfilename = '/tmp/.dguardianurlipc' # PID filename # # Defines process id directory and filename. #pidfilename = '/var/run/dansguardian.pid' # Disable daemoning # If enabled the process will not fork into the background. # It is not usually advantageous to do this. # on|off ( defaults to off ) nodaemon = off # Disable logging process # on|off ( defaults to off ) nologger = off # Daemon runas user and group # This is the user that DansGuardian runs as. Normally the user/group nobody. # Uncomment to use. Defaults to the user set at compile time. # daemonuser = 'nobody' # daemongroup = 'nobody' # Soft restart # When on this disables the forced killing off all processes in the process group. # This is not to be confused with the -g run time option - they are not related. # on|off ( defaults to off ) softrestart = off maxcontentramcachescansize = 2000 maxcontentfilecachescansize = 20000 downloadmanager = '/etc/dansguardian/downloadmanagers/default.conf' authplugin = '/etc/dansguardian/authplugins/proxy-basic.conf' Squid.conf http_port 3128 hierarchy_stoplist cgi-bin ? acl QUERY urlpath_regex cgi-bin \? cache deny QUERY acl apache rep_header Server ^Apache #broken_vary_encoding allow apache access_log /squid/var/logs/access.log squid hosts_file /etc/hosts auth_param basic program /squid/libexec/ncsa_auth /squid/etc/userbasic.auth auth_param basic children 5 auth_param basic realm proxy auth_param basic credentialsttl 2 hours auth_param basic casesensitive off refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern . 
0 20% 4320 acl NoAuthNec src <HIDDEN FOR SECURITY> acl BrkRm src <HIDDEN FOR SECURITY> acl Dials src <HIDDEN FOR SECURITY> acl Comps src <HIDDEN FOR SECURITY> acl whsws dstdom_regex -i .opensuse.org .novell.com .suse.com mirror.mcs.an1.gov mirrors.kernerl.org www.suse.de suse.mirrors.tds.net mirrros.usc.edu ftp.ale.org suse.cs.utah.edu mirrors.usc.edu mirror.usc.an1.gov linux.nssl.noaa.gov noaa.gov .kernel.org ftp.ale.org ftp.gwdg.de .medibuntu.org mirrors.xmission.com .canonical.com .ubuntu. acl opensites dstdom_regex -i .mbsbooks.com .bowker.com .usps.com .usps.gov .ups.com .fedex.com go.microsoft.com .microsoft.com .apple.com toolbar.msn.com .contacts.msn.com update.services.openoffice.org fms2.pointroll.speedera.net services.wmdrm.windowsmedia.com windowsupdate.com .adobe.com .symantec.com .vitalbook.com vxn1.datawire.net vxn.datawire.net download.lavasoft.de .download.lavasoft.com .lavasoft.com updates.ls-servers.com .canadapost. .myyellow.com minirick symantecliveupdate.com wm.overdrive.com www.overdrive.com productactivation.one.microsoft.com www.update.microsoft.com testdrive.whoson.com www.columbia.k12.mo.us banners.wunderground.com .kofax.com .gotomeeting.com tools.google.com .dl.google.com .cache.googlevideo.com .gpdl.google.com .clients.google.com cache.pack.google.com kh.google.com maps.google.com auth.keyhole.com .contacts.msn.com .hrblock.com .taxcut.com .merchantadvantage.com .jtv.com .malwarebytes.org www.google-analytics.com dcs.support.xerox.com .dhl.com .webtrendslive.com javadl-esd.sun.com javadl-alt.sun.com .excelsior.edu .dhlglobalmail.com .nessus.org .foxitsoftware.com foxit.vo.llnwd.net installshield.com .mindjet.com .mediascouter.com media.us.elsevierhealth.com .xplana.com .govtrack.us sa.tulsacc.edu .omniture.com fpdownload.macromedia.com webservices.amazon.com acl password proxy_auth REQUIRED acl all src all acl manager proto cache_object acl localhost src 127.0.0.1/255.255.255.255 acl to_localhost dst 127.0.0.0/8 acl SSL_ports port 443 563 631 2001 2005 8731 9001 9080 10000 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port # https, snews 443 563 acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port # unregistered ports 1936-65535 acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 10000 acl Safe_ports port 631 acl Safe_ports port 901 # SWAT acl purge method PURGE acl CONNECT method CONNECT acl UTubeUsers proxy_auth "/squid/etc/utubeusers.list" acl RestrictUTube dstdom_regex -i youtube.com acl RestrictFacebook dstdom_regex -i facebook.com acl FacebookUsers proxy_auth "/squid/etc/facebookusers.list" acl BuemerKEC src 10.10.128.0/24 acl MBSsortnet src 10.10.128.0/26 acl MSNExplorer browser -i MSN acl Printers src <HIDDEN FOR SECURITY> acl SpecialFolks src <HIDDEN FOR SECURITY> # streaming download acl fails rep_mime_type ^.*mms.* acl fails rep_mime_type ^.*ms-hdr.* acl fails rep_mime_type ^.*x-fcs.* acl fails rep_mime_type ^.*x-ms-asf.* acl fails2 urlpath_regex dvrplayer mediastream mms:// acl fails2 urlpath_regex \.asf$ \.afx$ \.flv$ \.swf$ acl deny_rep_mime_flashvideo rep_mime_type -i video/flv acl deny_rep_mime_shockwave rep_mime_type -i ^application/x-shockwave-flash$ acl x-type req_mime_type -i ^application/octet-stream$ acl x-type req_mime_type -i application/octet-stream acl x-type req_mime_type -i ^application/x-mplayer2$ acl x-type req_mime_type -i application/x-mplayer2 acl x-type req_mime_type -i ^application/x-oleobject$ acl x-type req_mime_type -i 
application/x-oleobject acl x-type req_mime_type -i application/x-pncmd acl x-type req_mime_type -i ^video/x-ms-asf$ acl x-type2 rep_mime_type -i ^application/octet-stream$ acl x-type2 rep_mime_type -i application/octet-stream acl x-type2 rep_mime_type -i ^application/x-mplayer2$ acl x-type2 rep_mime_type -i application/x-mplayer2 acl x-type2 rep_mime_type -i ^application/x-oleobject$ acl x-type2 rep_mime_type -i application/x-oleobject acl x-type2 rep_mime_type -i application/x-pncmd acl x-type2 rep_mime_type -i ^video/x-ms-asf$ acl RestrictHulu dstdom_regex -i hulu.com acl broken dstdomain cms.montgomerycollege.edu events.columbiamochamber.com members.columbiamochamber.com public.genexusserver.com acl RestrictVimeo dstdom_regex -i vimeo.com acl http_port port 80 #http_reply_access deny deny_rep_mime_flashvideo #http_reply_access deny deny_rep_mime_shockwave #streaming files #http_access deny fails #http_reply_access deny fails #http_access deny fails2 #http_reply_access deny fails2 #http_access deny x-type #http_reply_access deny x-type #http_access deny x-type2 #http_reply_access deny x-type2 follow_x_forwarded_for allow localhost acl_uses_indirect_client on log_uses_indirect_client on http_access allow manager localhost http_access deny manager http_access allow purge localhost http_access deny purge http_access allow SpecialFolks http_access deny CONNECT !SSL_ports http_access allow whsws http_access allow opensites http_access deny BuemerKEC !MBSsortnet http_access deny BrkRm RestrictUTube RestrictFacebook RestrictVimeo http_access allow RestrictUTube UTubeUsers http_access deny RestrictUTube http_access allow RestrictFacebook FacebookUsers http_access deny RestrictFacebook http_access deny RestrictHulu http_access allow NoAuthNec http_access allow BrkRm http_access allow FacebookUsers RestrictVimeo http_access deny RestrictVimeo http_access allow Comps http_access allow Dials http_access allow Printers http_access allow password http_access deny !Safe_ports http_access deny SSL_ports !CONNECT http_access allow http_port http_access deny all http_reply_access allow all icp_access allow all access_log /squid/var/logs/access.log squid visible_hostname proxy.site.com forwarded_for off coredump_dir /squid/cache/ #header_access Accept-Encoding deny broken #acl snmppublic snmp_community mysecretcommunity #snmp_port 3401 #snmp_access allow snmppublic all cache_mem 3 GB #acl snmppublic snmp_community mbssquid #snmp_port 3401 #snmp_access allow snmppublic all

    Read the article

  • Configuration Error ASP.NET password format specified is invalid

    - by salvationishere
    I am getting the above error in IIS 6.0 now when I browse my C# / SQL web application. This was built in VS 2008 and SS 2008 on a 32-bit XP OS. The application was working before I added Login controls to it. However, this is my first time configuring Login/password controls so I am probably missing something really basic. This error doesn't happen until I try to login. Here are the details of my error from IIS; I get the same error in VS: Parser Error Message: Password format specified is invalid. Source Error: Line 31: <add Line 32: name="SqlProvider" Line 33: type="System.Web.Security.SqlMembershipProvider" Line 34: connectionStringName="AdventureWorksConnectionString2" Line 35: applicationName="AddFileToSQL2" Source File: C:\Inetpub\AddFileToSQL2\web.config Line: 33 And the relevant contents of my web.config are: <connectionStrings> <add name="Master" connectionString="server=MSSQLSERVER;database=Master; Integrated Security=SSPI" providerName="System.Data.SqlClient" /> <add name="AdventureWorksConnectionString" connectionString="Data Source=SIDEKICK;Initial Catalog=AdventureWorks;Integrated Security=True" providerName="System.Data.SqlClient" /> <add name="AdventureWorksConnectionString2" connectionString="Data Source=SIDEKICK;Initial Catalog=AdventureWorks;Integrated Security=True; " providerName="System.Data.SqlClient" /> </connectionStrings> <system.web> <membership defaultProvider="SqlProvider" userIsOnlineTimeWindow="15"> <providers> <clear /> <add name="SqlProvider" type="System.Web.Security.SqlMembershipProvider" connectionStringName="AdventureWorksConnectionString2" applicationName="AddFileToSQL2" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="true" requiresUniqueEmail="false" passwordFormat="encrypted" /> </providers> </membership> <!-- Set compilation debug="true" to insert debugging symbols into the compiled page. Because this affects performance, set this value to true only during development. --> <roleManager enabled="true" /> <compilation debug="true"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> </assemblies> </compilation> <!-- The <authentication> section enables configuration of the security authentication mode used by ASP.NET to identify an incoming user. --> <authentication mode="Forms"> <forms loginUrl="Password.aspx" protection="All" timeout="30" name="SqlAuthCookie" path="/FormsAuth" requireSSL="false" slidingExpiration="true" defaultUrl="default.aspx" cookieless="UseCookies" enableCrossAppRedirects="false" /> </authentication> <!--Authorization permits only authenticated users to access the application --> <authorization> <deny users="?" /> <allow users="*" /> </authorization> <!-- The <customErrors> section enables configuration of what to do if/when an unhandled error occurs during the execution of a request. Specifically, it enables developers to configure html error pages to be displayed in place of a error stack trace. 
<customErrors mode="RemoteOnly" defaultRedirect="GenericErrorPage.htm"> <error statusCode="403" redirect="NoAccess.htm" /> <error statusCode="404" redirect="FileNotFound.htm" /> </customErrors> --> <pages> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </controls> </pages> <httpHandlers> <remove verb="*" path="*.asmx"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false"/> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </httpModules> </system.web> <system.codedom> <compilers> <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <providerOption name="CompilerVersion" value="v3.5"/> <providerOption name="WarnAsError" value="false"/> </compiler> </compilers> </system.codedom> <!-- The system.webServer section is required for running ASP.NET AJAX under Internet Information Services 7.0. It is not necessary for previous version of IIS. 
--> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules> <remove name="ScriptModule"/> <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </modules> <handlers> <remove name="WebServiceHandlerFactory-Integrated"/> <remove name="ScriptHandlerFactory"/> <remove name="ScriptHandlerFactoryAppServices"/> <remove name="ScriptResource"/> <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </handlers> </system.webServer> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/> </dependentAssembly> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/> </dependentAssembly> </assemblyBinding> </runtime> <system.net> <mailSettings> <smtp from="[email protected]"> <network host="SIDEKICK" password="" userName="" /> </smtp> </mailSettings> </system.net> </configuration> I checked and I do have an aspnetdb database in my SSMS. The Network Service account has SELECT, EXECUTE, INSERT, UPDATE access to this database. But one problem I see is that all of the tables in this database are empty except for aspnet_SchemaVersions, which just has 2 records (common and membership). Is this right? I added users and roles via ASP.NET Configuration wizard, and I believe I set this up correctly since I followed the Microsoft tutorial at http://msdn.microsoft.com/en-us/library/ms998347.aspx. One other problem I see from VS is after adding content to my Page_Load on my initial login Password.aspx.cs file, I'm getting an invalid cast problem below. I googled this problem also but the solutions I saw confused me even more. 
The Page_Load section I added is: protected void Page_Load(object sender, EventArgs e) { Response.Write("Hello, " + Server.HtmlEncode(User.Identity.Name)); FormsIdentity id = (FormsIdentity)User.Identity; FormsAuthenticationTicket ticket = id.Ticket; Response.Write("<p/>TicketName: " + ticket.Name); Response.Write("<br/>Cookie Path: " + ticket.CookiePath); Response.Write("<br/>Ticket Expiration: " + ticket.Expiration.ToString()); Response.Write("<br/>Expired: " + ticket.Expired.ToString()); Response.Write("<br/>Persistent: " + ticket.IsPersistent.ToString()); Response.Write("<br/>IssueDate: " + ticket.IssueDate.ToString()); Response.Write("<br/>UserData: " + ticket.UserData); Response.Write("<br/>Version: " + ticket.Version.ToString()); } And the VS exception I'm getting: System.InvalidCastException was unhandled by user code Message="Unable to cast object of type 'System.Security.Principal.GenericIdentity' to type 'System.Web.Security.FormsIdentity'." Source="AddFileToSQL" StackTrace: at AddFileToSQL.Password.Page_Load(Object sender, EventArgs e) in C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL2\AddFileToSQL\Password.aspx.cs:line 22 at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) at System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) at System.Web.UI.Control.OnLoad(EventArgs e) at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) InnerException:
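As a point of reference only (not the poster's code), a minimal sketch of how that cast is usually guarded: on a login page the request is typically not yet forms-authenticated, so User.Identity is a GenericIdentity and a hard cast to FormsIdentity throws. Checking the identity type first avoids the exception. The sketch assumes the same Password.aspx.cs code-behind and a using System.Web.Security directive; the displayed properties mirror the ones in the question.

protected void Page_Load(object sender, EventArgs e)
{
    // On the login page the user is usually not authenticated yet, so
    // User.Identity is a GenericIdentity; cast with "as" instead of directly.
    FormsIdentity id = User.Identity as FormsIdentity;
    if (User.Identity.IsAuthenticated && id != null)
    {
        FormsAuthenticationTicket ticket = id.Ticket;
        Response.Write("Hello, " + Server.HtmlEncode(ticket.Name));
        Response.Write("<br/>Ticket Expiration: " + ticket.Expiration.ToString());
        Response.Write("<br/>Persistent: " + ticket.IsPersistent.ToString());
    }
    else
    {
        Response.Write("No forms authentication ticket yet.");
    }
}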

    Read the article

  • ASP.NET SQLMembership Provider not logging in

    - by cfdev9
    My web app uses the sql memebership provider. Running it locally all is well, deploying to a dev server it works fine too in firefox, but in IE8 something unexpected is happening. Once a user logs in they're supposed to be redirected to home.aspx. What's happening when I attempt to login is it appears to accept the login credentials but then doesn't redirect to home.aspx. Instead it just redirects me to the login page as though I had attempted to access home.aspx directly without being logged in. The url parameter ReturnUrl is appended, Login.aspx?ReturnUrl=%2fhome.aspx Why is this only happening with IE8? My local PC is IIS7 but the server is IIS6. Using the same web.config Full code behind public partial class Login : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { Session.Abandon(); FormsAuthentication.SignOut(); } } protected void btnSubmit_Click(object sender, EventArgs e) { if (Membership.ValidateUser(tbUsername.Text, tbPassword.Text)) { if (Request.QueryString["ReturnUrl"] != null) { FormsAuthentication.RedirectFromLoginPage(tbUsername.Text, false); } else { FormsAuthentication.SetAuthCookie(tbUsername.Text, false); Response.Redirect("~/Home.aspx"); } } } } Full web.config <?xml version="1.0"?> <configuration> <configSections> <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere"/> <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> </sectionGroup> </sectionGroup> </sectionGroup> </configSections> <appSettings/> <connectionStrings> <add name="ASPNET_DB" connectionString="..."/> </connectionStrings> <system.web> <membership defaultProvider="SqlMembershipProvider"> <providers> <add name="SqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" 
connectionStringName="ASPNET_DB" enablePasswordRetrieval="true" enablePasswordReset="true" requiresQuestionAndAnswer="false" applicationName="/" requiresUniqueEmail="false" passwordFormat="Clear" maxInvalidPasswordAttempts="5" passwordAttemptWindow="10" passwordStrengthRegularExpression="" minRequiredPasswordLength="1" minRequiredNonalphanumericCharacters="0"/> </providers> </membership> <roleManager enabled="true" defaultProvider="SqlRoleManager"> <providers> <add name="SqlRoleManager" type="System.Web.Security.SqlRoleProvider" connectionStringName="ASPNET_DB" applicationName="/"/> </providers> </roleManager> <authentication mode="Forms"> <forms name="CHOUSE.ASPXAUTH" loginUrl="login.aspx" protection="All" path="/"/> </authentication> <authorization> <allow roles="AccountManager"/> <allow roles="Client"/> <deny users="*"/> </authorization> <compilation debug="true"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> </assemblies> </compilation> <pages> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </controls> </pages> <httpHandlers> <remove verb="*" path="*.asmx"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false"/> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </httpModules> </system.web> <location path="Admin"> <system.web> <authorization> <allow roles="AccountManager"/> <deny users="*"/> </authorization> </system.web> </location> <system.codedom> <compilers> <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <providerOption name="CompilerVersion" value="v3.5"/> <providerOption name="WarnAsError" value="false"/> </compiler> </compilers> </system.codedom> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules> <remove name="ScriptModule"/> <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </modules> <handlers> <remove name="WebServiceHandlerFactory-Integrated"/> <remove name="ScriptHandlerFactory"/> <remove name="ScriptHandlerFactoryAppServices"/> <remove 
name="ScriptResource"/> <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </handlers> </system.webServer> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/> </dependentAssembly> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/> </dependentAssembly> </assemblyBinding> </runtime>

    Read the article

  • These are a few objective-type questions for which I was not able to find the solution [closed]

    - by Tarun
    1. Which of the following advantages does System.Collections.IDictionaryEnumerator provide over System.Collections.IEnumerator?
       a. It adds properties for direct access to both the Key and the Value
       b. It is optimized to handle the structure of a Dictionary.
       c. It provides properties to determine if the Dictionary is enumerated in Key or Value order
       d. It provides reverse lookup methods to distinguish a Key from a specific Value

    2. When implementing System.EnterpriseServices.ServicedComponent derived classes, which of the following statements are true?
       a. Enabling object pooling requires an attribute on the class and the enabling of pooling in the COM+ catalog.
       b. Methods can be configured to automatically mark a transaction as complete by the use of attributes.
       c. You can configure authentication using the AuthenticationOption when the ActivationMode is set to Library.
       d. You can control the lifecycle policy of an individual instance using the SetLifetimeService method.

    3. Which of the following are true regarding event declaration in the code below? class Sample { event MyEventHandlerType MyEvent; }
       a. MyEventHandlerType must be derived from System.EventHandler or System.EventHandler<TEventArgs>
       b. MyEventHandlerType must take two parameters, the first of the type Object, and the second of a class derived from System.EventArgs
       c. MyEventHandlerType may have a non-void return type
       d. If MyEventHandlerType is a generic type, event declaration must use a specialization of that type.
       e. MyEventHandlerType cannot be declared static

    4. Which of the following statements apply to developing .NET code, using .NET utilities that are available with the SDK or Visual Studio?
       a. Developers can create assemblies directly from the MSIL source code.
       b. Developers can examine PE header information in an assembly.
       c. Developers can generate XML schemas from class definitions contained within an assembly.
       d. Developers can strip all meta-data from managed assemblies.
       e. Developers can split an assembly into multiple assemblies.

    5. Which of the following characteristics do classes in the System.Drawing namespace such as Brush, Font, Pen, and Icon share?
       a. They encapsulate native resources and must be properly Disposed to prevent potential exhaustion of resources.
       b. They are all MarshalByRef derived classes, but functionality across AppDomains has specific limitations.
       c. You can inherit from these classes to provide enhanced or customized functionality

    6. Which of the following are required to be true of objects which are going to be used as keys in a System.Collections.HashTable?
       a. They must handle case-sensitivity identically in both the GetHashCode() and Equals() methods.
       b. Key objects must be immutable for the duration they are used within a HashTable.
       c. GetHashCode() must be overridden to provide the same result, given the same parameters, regardless of reference equality, unless the HashTable constructor is provided with an IEqualityComparer parameter.
       d. Each element in a HashTable is stored as a Key/Value pair of the type System.Collections.DictionaryElement
       e. All of the above

    7. Which of the following are true about Nullable types?
       a. A Nullable type is a reference type.
       b. A Nullable type is a structure.
       c. An implicit conversion exists from any non-nullable value type to a nullable form of that type.
       d. An implicit conversion exists from any nullable value type to a non-nullable form of that type.
       e. A predefined conversion from the nullable type S? to the nullable type T? exists if there is a predefined conversion from the non-nullable type S to the non-nullable type T

    8. When using an automatic property, which of the following statements is true?
       a. The compiler generates a backing field that is completely inaccessible from the application code.
       b. The compiler generates a backing field that is a private instance member with a leading underscore that can be programmatically referenced.
       c. The compiler generates a backing field that is accessible via reflection
       d. The compiler generates code that will store the information separately from the instance to ensure its security.

    9. Which of the following does using Initializer Syntax with a collection as shown below require? CollectionClass numbers = new CollectionClass { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
       a. The Collection Class must implement System.Collections.Generic.ICollection<T>
       b. The Collection Class must implement System.Collections.Generic.IList<T>
       c. Each of the items in the initializer list will be passed to the Add<T>(T item) method
       d. The items in the initializer will be treated as an IEnumerable<T> and passed to the collection constructor

    10. What impact will using implicitly typed local variables as in the following example have? var sample = "Hello World";
        a. The actual type is determined at compilation time, and has no impact on the runtime
        b. The actual type is determined at runtime, and late binding takes effect
        c. The actual type is based on the native VARIANT concept, and no binding to a specific type takes place.
        d. "var" itself is a specific type defined by the framework, and no special binding takes place

    11. Which of the following is not supported by remoting object types?
        a. well-known singleton
        b. well-known single call
        c. client activated
        d. context-agile

    12. In which of the following ways do structs differ from classes?
        a. Structs cannot implement interfaces
        b. Structs cannot inherit from a base struct
        c. Structs cannot have events interfaces
        d. Structs cannot have virtual methods

    13. Which of the following is not an unboxing conversion?
        a. void Sample1(object o) { int i = (int)o; }
        b. void Sample1(ValueType vt) { int i = (int)vt; }
        c. enum E { Hello, World } void Sample1(System.Enum et) { E e = (E)et; }
        d. interface I { int Value { get; set; } } void Sample1(I vt) { int i = vt.Value; }
        e. class C { public int Value { get; set; } } void Sample1(C vt) { int i = vt.Value; }

    14. Which of the following are characteristics of the System.Threading.Timer class?
        a. The method provided by the TimerCallback delegate will always be invoked on the thread which created the timer.
        b. The thread which creates the timer must have a message processing loop (i.e. be considered a UI thread)
        c. The class contains protection to prevent reentrancy to the method provided by the TimerCallback delegate
        d. You can receive notification of an instance being Disposed by calling an overload of the Dispose method.

    15. What is the proper declaration of a method which will handle the following event? class MyClass { public event EventHandler MyEvent; }
        a. public void A_MyEvent(object sender, MyArgs e) { }
        b. public void A_MyEvent(object sender, EventArgs e) { }
        c. public void A_MyEvent(MyArgs e) { }
        d. public void A_MyEvent(MyClass sender, EventArgs e) { }

    16. Which of the following scenarios are applicable to Windows Workflow Foundation?
        a. Document-centric workflows
        b. Human workflows
        c. User-interface page flows
        d. Built-in support for communications across multiple applications and/or platforms
        e. All of the above

    17. When using an automatic property, which of the following statements is true?
        a. The compiler generates a backing field that is completely inaccessible from the application code.
        b. The compiler generates a backing field that is a private instance member with a leading underscore that can be programmatically referenced.
        c. The compiler generates a backing field that is accessible via reflection
        d. The compiler generates code that will store the information separately from the instance to ensure its security.

    18. While using the capabilities supplied by the System.Messaging classes, which of the following are true?
        a. Information must be explicitly converted to/from a byte stream before it uses the MessageQueue class
        b. Invoking the MessageQueue.Send member defaults to using the System.Messaging.XmlMessageFormatter to serialize the object.
        c. Objects must be XMLSerializable in order to be transferred over a MessageQueue instance.
        d. The first entry in a MessageQueue must be removed from the queue before the next entry can be accessed
        e. Entries removed from a MessageQueue within the scope of a transaction will be pushed back into the front of the queue if the transaction fails.

    19. Which of the following are true about declarative attributes?
        a. They must be inherited from System.Attribute.
        b. Attributes are instantiated at the same time as instances of the class to which they are applied.
        c. Attribute classes may be restricted to be applied only to application element types.
        d. By default, a given attribute may be applied multiple times to the same application element.

    20. When using version 3.5 of the framework in applications which emit dynamic code, which of the following are true?
        a. Partial trust code cannot emit and execute code
        b. A partial trust application must have the SecurityCriticalAttribute attribute and have called Assert for ReflectionEmit permission
        c. The generated code has no more permissions than the assembly which emitted it.
        d. It can be executed by calling System.Reflection.Emit.DynamicMethod(string name, Type returnType, Type[] parameterTypes) without any special permissions

    Within Windows Workflow Foundation, Compensating Actions are used to:
        a. provide a means to roll back a failed transaction
        b. provide a means to undo a successfully committed transaction later
        c. provide a means to terminate an in-process transaction
        d. achieve load balancing by adapting to the current activity

    21. What is the proper declaration of a method which will handle the following event? class MyClass { public event EventHandler MyEvent; }
        a. public void A_MyEvent(object sender, MyArgs e) { }
        b. public void A_MyEvent(object sender, EventArgs e) { }
        c. public void A_MyEvent(MyArgs e) { }
        d. public void A_MyEvent(MyClass sender, EventArgs e) { }

    22. Which of the following controls allows the use of XSL to transform XML content into formatted content?
        a. System.Web.UI.WebControls.Xml
        b. System.Web.UI.WebControls.Xslt
        c. System.Web.UI.WebControls.Substitution
        d. System.Web.UI.WebControls.Transform

    23. To which of the following do automatic properties refer?
        a. You declare (explicitly or implicitly) the accessibility of the property and get and set accessors, but do not provide any implementation or backing field
        b. You attribute a member field so that the compiler will generate get and set accessors
        c. The compiler creates properties for your class based on class-level attributes
        d. They are properties which are automatically invoked as part of the object construction process

    24. Which of the following are true about Nullable types?
        a. A Nullable type is a reference type.
        b. An implicit conversion exists from any non-nullable value type to a nullable form of that type.
        c. A predefined conversion from the nullable type S? to the nullable type T? exists if there is a predefined conversion from the non-nullable type S to the non-nullable type T

    25. When using an automatic property, which of the following statements is true?
        a. The compiler generates a backing field that is completely inaccessible from the application code.
        b. The compiler generates a backing field that is accessible via reflection.
        c. The compiler generates code that will store the information separately from the instance to ensure its security.

    26. When using an implicitly typed array, which of the following is most appropriate?
        a. All elements in the initializer list must be of the same type.
        b. All elements in the initializer list must be implicitly convertible to a known type which is the actual type of at least one member in the initializer list
        c. All elements in the initializer list must be implicitly convertible to a common type which is a base type of the items actually in the list

    27. Which of the following is false about anonymous types?
        a. They can be derived from any reference type.
        b. Two anonymous types with the same named parameters in the same order declared in different classes have the same type.
        c. All properties of an anonymous type are read/write.

    28. Which of the following are true about Extension methods?
        a. They can be declared either static or instance members
        b. They must be declared in the same assembly (but may be in different source files)
        c. Extension methods can be used to override existing instance methods
        d. Extension methods with the same signature for the same class may be declared in multiple namespaces without causing compilation errors
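
    Several of the items above can be checked directly in a small console project rather than argued about. For example, questions 15 and 21 come down to whether a handler matches the System.EventHandler delegate signature, and questions 7 and 24 come down to which nullable conversions are implicit. The sketch below illustrates both; the class and handler names are taken from the question text, not from any real code base:

    using System;

    class MyClass
    {
        public event EventHandler MyEvent;

        public void Raise()
        {
            // Raise the event if anyone has subscribed.
            if (MyEvent != null)
                MyEvent(this, EventArgs.Empty);
        }
    }

    class Program
    {
        // Compiles as a handler because it matches System.EventHandler:
        // void (object sender, EventArgs e).
        static void A_MyEvent(object sender, EventArgs e)
        {
            Console.WriteLine("MyEvent raised by " + sender.GetType().Name);
        }

        static void Main()
        {
            MyClass obj = new MyClass();
            obj.MyEvent += A_MyEvent;   // only the (object, EventArgs) signature wires up
            obj.Raise();

            int? maybe = 5;             // implicit: non-nullable -> nullable
            int definite = (int)maybe;  // explicit cast required: nullable -> non-nullable
            Console.WriteLine(definite);
        }
    }

    The nullable-to-non-nullable assignment needs the explicit cast (and throws if the value is null), which is why C# offers no implicit conversion in that direction.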

    Read the article

  • .NET Winform AJAX Login Services

    - by AdamSane
    I am working on a Windows Forms application that connects to an ASP.NET membership database, and I am trying to use the AJAX login service. No matter what I do I keep getting 404 errors on the Authentication_JSON_AppService.axd call. Web.config below:

    <?xml version="1.0"?> <!-- Note: As an alternative to hand editing this file you can use the web admin tool to configure settings for your application. Use the Website->Asp.Net Configuration option in Visual Studio. A full list of settings and comments can be found in machine.config.comments usually located in \Windows\Microsoft.Net\Framework\v2.x\Config --> <configuration> <configSections> <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere"/> <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> </sectionGroup> </sectionGroup> </sectionGroup> </configSections> <connectionStrings <!-- Removed --> /> <appSettings/> <system.web> <!-- Set compilation debug="true" to insert debugging symbols into the compiled page. Because this affects performance, set this value to true only during development.
--> <compilation debug="true"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Data.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> </assemblies> </compilation> <membership defaultProvider="dbSqlMembershipProvider"> <providers> <add name="dbSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="Fire.Common.Properties.Settings.dbFireConnectionString" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="true" applicationName="/" requiresUniqueEmail="false" passwordFormat="Hashed" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="7" minRequiredNonalphanumericCharacters="1" passwordAttemptWindow="10" passwordStrengthRegularExpression=""/> </providers> </membership> <roleManager enabled="true" defaultProvider="dbSqlRoleProvider"> <providers> <add connectionStringName="Fire.Common.Properties.Settings.dbFireConnectionString" applicationName="/" name="dbSqlRoleProvider" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/> </providers> </roleManager> <authentication mode="Forms"> <forms loginUrl="Login.aspx" cookieless="UseCookies" protection="All" timeout="30" requireSSL="false" slidingExpiration="true" defaultUrl="default.aspx" enableCrossAppRedirects="false"/> </authentication> <authorization> <allow users="*"/> <allow users="?"/> </authorization> <customErrors mode="Off"> </customErrors> <pages> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </controls> </pages> <httpHandlers> <remove verb="*" path="*.asmx"/> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="GET,HEAD" path="ScriptResource.axd" validate="false" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </httpModules> </system.web> <location path="~/Admin"> <system.web> <authorization> <allow roles="Admin"/> <allow roles="System"/> <deny users="*"/> </authorization> </system.web> </location> <location path="~/Admin/System"> <system.web> <authorization> <allow roles="System"/> <deny users="*"/> </authorization> </system.web> </location> <location path="~/Export"> <system.web> <authorization> <allow 
roles="Export"/> <deny users="*"/> </authorization> </system.web> </location> <location path="~/Field"> <system.web> <authorization> <allow roles="Field"/> <deny users="*"/> </authorization> </system.web> </location> <location path="~/Default.aspx"> <system.web> <authorization> <allow roles="Admin"/> <allow roles="System"/> <allow roles="Export"/> <allow roles="Field"/> <deny users="?"/> </authorization> </system.web> </location> <location path="~/Login.aspx"> <system.web> <authorization> <allow users="*"/> <allow users="?"/> </authorization> </system.web> </location> <location path="~/App_Themes"> <system.web> <authorization> <allow users="*"/> <allow users="?"/> </authorization> </system.web> </location> <location path="~/Includes"> <system.web> <authorization> <allow users="*"/> <allow users="?"/> </authorization> </system.web> </location> <location path="~/WebServices"> <system.web> <authorization> <allow users="*"/> <allow users="?"/> </authorization> </system.web> </location> <location path="~/Authentication_JSON_AppService.axd"> <system.web> <authorization> <allow users="*"/> <allow users="?"/> </authorization> </system.web> </location> <system.codedom> <compilers> <compiler language="c#;cs;csharp" extension=".cs" type="Microsoft.CSharp.CSharpCodeProvider,System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" warningLevel="4"> <providerOption name="CompilerVersion" value="v3.5"/> <providerOption name="WarnAsError" value="false"/> </compiler> <compiler language="vb;vbs;visualbasic;vbscript" extension=".vb" type="Microsoft.VisualBasic.VBCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" warningLevel="4"> <providerOption name="CompilerVersion" value="v3.5"/> <providerOption name="OptionInfer" value="true"/> <providerOption name="WarnAsError" value="false"/> </compiler> </compilers> </system.codedom> <!-- The system.webServer section is required for running ASP.NET AJAX under Internet Information Services 7.0. It is not necessary for previous version of IIS. 
--> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules> <remove name="ScriptModule"/> <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </modules> <handlers> <remove name="WebServiceHandlerFactory-Integrated"/> <remove name="ScriptHandlerFactory"/> <remove name="ScriptHandlerFactoryAppServices"/> <remove name="ScriptResource"/> <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptResource" verb="GET,HEAD" path="ScriptResource.axd" preCondition="integratedMode" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </handlers> </system.webServer> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/> </dependentAssembly> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/> </dependentAssembly> </assemblyBinding> </runtime> </configuration>
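
    One thing worth checking, given that 404: the configSections above declare the authenticationService section and the *_AppService.axd handler is registered, but nowhere in the posted config is the authentication service itself actually switched on. Assuming that is the cause (rather than, say, the client requesting the wrong path), the missing piece would look roughly like the sketch below; it goes inside <configuration>, alongside <system.web>, and the requireSSL value simply mirrors the forms element already shown in the post:

    <system.web.extensions>
      <scripting>
        <webServices>
          <!-- Hypothetical addition: enables the JSON authentication service that
               Authentication_JSON_AppService.axd exposes. -->
          <authenticationService enabled="true" requireSSL="false"/>
          <!-- Only needed if the client also calls the role service. -->
          <roleService enabled="true"/>
        </webServices>
      </scripting>
    </system.web.extensions>

    If the service is in fact enabled somewhere not shown here, the next thing to verify from the WinForms side is the exact URL being requested: the .axd endpoint is virtual, so the request must target the application root (for example http://server/AppName/Authentication_JSON_AppService.axd), not a physical file or sub-folder. It is also worth double-checking the <location> elements above, since location paths are normally written relative to the web.config (path="Admin") rather than with the ~/ prefix used in this config.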

    Read the article
