Search Results

Search found 11822 results on 473 pages for 'external assembly'.


  • Visual Studio 2010 WinForms Application – Unable to resolve custom assemblies?

    - by Harish Ranganathan
    Recently I surfaced a problem where one of my friends had a tough time getting rid of an assembly reference error. Despite adding a reference to the assembly, referencing it in code kept producing the “The type or namespace name ‘ASSEMBLYNAME’ could not be found” error. This was a migration project, and owing to the above error it was throwing another 100 errors. We tried adding a reference to the assembly in other projects, and the namespace would not even resolve while typing it out in the using section. Digging further into the error warnings indicated something to do with the targeted .NET Framework, i.e. 4.0. That deepened my suspicion, since the target framework was 4.0 and the assembly should have been recognized. Then, when we checked “Project – <APPNAME> Properties…”, the issue turned out to be the default target framework, which is “.NET Framework 4 Client Profile”. By default, Visual Studio 2010 creates Windows Forms/WPF apps with the Target Framework set to .NET Framework 4 Client Profile. This is to minimize the framework size required to be bundled along with the app. The Client Profile is a feature introduced with .NET 3.5 SP1 that allows users to package a minified version of the .NET Framework that doesn’t include pieces such as ASP.NET, server programming assemblies and a few other assemblies which are typically never used in desktop applications. Since the Client Profile is a minified version, it doesn’t contain all the assemblies related to web services and other deprecated assemblies. However, this application was a migration app that needed some of the service-related references, and hence it couldn’t run. Once we changed the Target Framework to .NET Framework 4 instead of the default Client Profile, the application compiled. Here is a link to a very nice article that explains the features of the .NET Framework 4 Client Profile, the assemblies supported by default, etc.: http://blogs.msdn.com/b/jgoldb/archive/2010/04/12/what-s-new-in-net-framework-4-client-profile-rtm.aspx Cheers!!
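    For reference, the setting lives in the project file itself. A minimal sketch of the relevant MSBuild properties (the element names are real MSBuild; the rest of the project file is omitted):

        <PropertyGroup>
          <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
          <!-- "Client" selects the .NET Framework 4 Client Profile;
               removing this element targets the full .NET Framework 4 -->
          <TargetFrameworkProfile>Client</TargetFrameworkProfile>
        </PropertyGroup>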


  • Couldn't make Angry Birds work on Wine

    - by Ashfame
    I can run Notepad++ easily, but I fail to run the Angry Birds exe. Whenever I open the exe, one of my screens flickers a bit (as lines, not the whole screen) and nothing happens. Any ideas? Edit: Output of wine angrybirds.exe: fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.VC80.CRT" (8.0.50727.4053) fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.VC90.CRT" (9.0.21022.8) err:module:import_dll Library MSVCP90.dll (which is needed by L"C:\\windows\\system32\\AppUpWrapper.dll") not found err:module:import_dll Library AppUpWrapper.dll (which is needed by L"C:\\windows\\system32\\angrybirds.exe") not found err:module:LdrInitializeThunk Main exe initialization for L"C:\\windows\\system32\\angrybirds.exe" failed, status c0000135 I think it didn't even install. I manually dropped those files into the folder, but still no gain. Edit: Progress. I dropped the file MSVCP90.dll in manually, and now this is what I get in the output: fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.VC80.CRT" (8.0.50727.4053) fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.VC90.CRT" (9.0.21022.8) fixme:heap:HeapSetInformation 0x541000 0 0x32fd48 4 fixme:heap:HeapSetInformation (nil) 1 (nil) 0 EXCEPTION: Failed to open data/scripts/starLimits.lua wine: Unhandled exception 0x40000015 at address 0x7b880023:0x78b271d0 (thread 0009), starting debugger... fixme:msvcr90:__clean_type_info_names_internal (0x10267694) stub fixme:msvcr90:__clean_type_info_names_internal (0x78506644) stub ashfame@ashfame-desktop:~$ Process of pid=0008 has terminated No process loaded, cannot execute 'echo Modules:' Cannot get info on module while no process is loaded No process loaded, cannot execute 'echo Threads:' process tid prio (all id:s are in hex) 0000000e services.exe 00000014 0 00000010 0 0000000f 0 00000011 winedevice.exe 00000018 0 00000016 0 00000013 0 00000012 0 00000019 explorer.exe 0000001a 0 You must be attached to a process to run this command. No process loaded, cannot execute 'detach' and there the terminal hangs (I have to Ctrl + C to get out). It shows the famous message that it needs to close down. Edit: Just to let you know that I am still stuck on this. I don't use Wine for anything else, so I am ready to do a clean install of Wine and everything if anyone is willing to provide instructions.
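    For what it's worth, the two fixme lines in that output say the Visual C++ 2005 and 2008 runtimes (Microsoft.VC80.CRT and Microsoft.VC90.CRT) are missing, and copying the DLLs in by hand bypasses the side-by-side manifest registration those messages refer to, which may be why it still fails. A hedged sketch of the usual way to install those runtimes into a Wine prefix, assuming winetricks is available:

        winetricks vcrun2005 vcrun2008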


  • Low-level GPU code and Shader Compilation

    - by ktodisco
    Bear with me, because I will raise several questions at once. I still feel, though, that overall this can be treated as one question that may be answered succinctly. I recently dove into solidifying my understanding of the assembly language, low-level memory operations, CPU structure, and program optimizations. This also sparked my interest in how higher-level shading languages, GLSL and HLSL in particular, are compiled and optimized, as well as what formats they are reduced to before machine code is generated (assuming they are not converted directly into machine code). After a bit of research into this, the best resource I've found is this presentation from ATI about the compilation of and optimizations for HLSL. I also found sample ARB assembly code. This sort of addressed my original curiosity, but it raised several other questions. The assembler code in the ATI presentation seems like it contains instructions specifically targeted for the GPU, but is this merely a hypothetical example created for the purpose of conceptual understanding, or is this code really generated during shader compilation? If so, is it possible to inspect it, or even write it in place of the higher-level syntax? My initial searches for an answer to the last question tell me that this may be disallowed, but I have not dug too deep yet. Also, along the same lines, are GLSL shader programs compiled into ARB assembly code before machine code is generated, and is it possible to write direct ARB assembly? Lastly, and perhaps what I am most interested in finding out: are there comprehensive resources on shader compilation and low-level GPU code? I have been unable to find any thus far. I ask simply because I am curious :)
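    For what it's worth, low-level GPU code of the kind asked about does exist as a real, writable format: the old ARB_vertex_program/ARB_fragment_program OpenGL extensions accept textual assembly via glProgramStringARB, though that is an extension path rather than what GLSL compilers emit on every platform. A minimal fragment program in real ARB syntax, trivial by design:

        !!ARBfp1.0
        # Modulate the interpolated vertex color by a texture sample
        TEMP texel;
        TEX texel, fragment.texcoord[0], texture[0], 2D;
        MUL result.color, fragment.color, texel;
        END

    Note that ARB assembly is itself still an intermediate form; the driver compiles it further into the GPU's proprietary machine instructions.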


  • Configuration Manager Setting Causing Error PRJ0019

    - by Jeff Paterno
    Recently I ran into an issue with a project failing to build on an automated build server using CruiseControl. When I looked into the build log I saw that the Post-Build project was failing with the error message: "error PRJ0019: A tool returned an error code from "Performing Post-Build Event..."" This was most frustrating, especially since the solution was building without issue in my local development environment. The Post-Build project was a C++ project that basically called several batch files to unregister/register assemblies, copy resources and supporting files, and place other dependencies in the GAC. I decided to run each of the batch files manually to see if that would provide more information as to why this project was failing. This led me to determine that the batch file that was placing assemblies in the GAC was the culprit, and that it was failing to find a particular assembly. The missing assembly was the output of another project. The project that was not producing the expected output was another C++ project that called a batch file. This batch process was actually embedding resource files into an assembly and then copying the assembly to the expected location. The real confusion started when I looked back into my Subversion log and noted that nothing had changed in this project in more than 2 months! It was almost as if the project had stopped building altogether. But what would cause that?! The Configuration Manager, obviously! Checking the solution's Configuration Manager settings, I found that the project that was not producing any output was in fact not selected to be part of the build process when the "Any CPU" platform was selected. This was the problem! I had recently updated the CruiseControl configurations to force the solution to be built targeting the platform "Any CPU". As a result, the project that was at the root of the problem was not configured to be built, and the post-build process was failing when it couldn't find what it needed.
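    As a hedged illustration of why a missing input surfaces as PRJ0019 rather than a clearer message: a post-build event is just a command script, and VC++ reports PRJ0019 whenever that script exits with a non-zero code. A hypothetical sketch of a GAC-install step like the one described (paths and names are illustrative):

        REM Put the freshly built dependency in the GAC
        "%ProgramFiles%\Microsoft SDKs\Windows\v7.0A\bin\gacutil.exe" /i "%1\MyDependency.dll"
        REM gacutil exits non-zero when the file is missing; the post-build
        REM event then fails, which VC++ surfaces as error PRJ0019
        if errorlevel 1 exit /b 1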


  • Understanding Application binary interface (ABI)

    - by Tim
    I am trying to understand the concept of an Application Binary Interface (ABI). From The Linux Kernel Primer: An ABI is a set of conventions that allows a linker to combine separately compiled modules into one unit without recompilation, such as calling conventions, machine interface, and operating-system interface. Among other things, an ABI defines the binary interface between these units. ... The benefits of conforming to an ABI are that it allows linking object files compiled by different compilers. From Wikipedia: an application binary interface (ABI) describes the low-level interface between an application (or any type of) program and the operating system or another application. ABIs cover details such as data type, size, and alignment; the calling convention, which controls how functions' arguments are passed and return values retrieved; the system call numbers and how an application should make system calls to the operating system; and in the case of a complete operating system ABI, the binary format of object files, program libraries and so on. I was wondering whether an ABI depends on both the instruction set and the OS. Are those two all that an ABI depends on? What kind of role does the ABI play in the different stages of compilation: preprocessing, conversion of code from C to assembly, conversion of code from assembly to machine code, and linking? From the first quote above, it seems to me that an ABI is needed only for the linking stage, not the other stages. Is that correct? When does an ABI need to be considered? Does an ABI need to be considered during programming in C, assembly or other languages? If yes, how are ABI and API different? Or is it a concern only for the linker or compiler? Is an ABI specified for/in machine code, assembly language, and/or C?
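    To make the linking-stage role concrete, here is a minimal sketch in C (the x86-64 System V register details are real; the file names are illustrative). Two separately compiled modules can be linked only because both compilers emit code following the same ABI:

        /* lib.c -- possibly built with a different compiler than main.c */
        long add(long a, long b) { return a + b; }

        /* main.c -- links against lib.o. This works only because both
           compilers follow the same calling convention; under the x86-64
           System V ABI, 'a' arrives in %rdi, 'b' in %rsi, and the result
           is returned in %rax. */
        long add(long a, long b);
        int main(void) { return (int)add(2, 3); }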


  • BizTalk Schema Validation

    - by Christopher House
    Perhaps this one should be filed under: Obvious. Yesterday I created a new schema that is going to be used for a WCF receive. The schema has a bunch of restrictions in it, with the intention that we'd validate incoming messages against the schema. I'd never done message validation with BizTalk, but I knew the XmlDisassembler component had an option for validating, so I figured it would be a piece of cake. Sadly, that was not to be the case. I deployed my artifacts and configured my receive location's XmlDisassembler with what I thought to be the correct document spec name. I entered My.Project.Name.SchemaTypeName for the document spec and started running unit tests. All of them failed with the following error logged in the event log: "WcfReceivePort_BizTalkWcfService/PurchaseOrderService" URI: "/BizTalkWcfService/PurchaseOrderService.svc" Reason: No Disassemble stage components can recognize the data. I went to the receive port and turned on tracking, submitted another message, then went to the admin console and saved the message. It looked correct, but just to be sure, I manually validated it against the schema in my project. As expected, it validated correctly. After a bit of thinking on this, I realized that I probably needed to fully qualify my document spec name, meaning include the assembly name as well as the type name. So, I went back to the receive location and changed the document spec to: My.Project.Name.SchemaTypeName, My.Project.Name, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxx I re-ran my unit tests and everything was working as expected. So, note to self: remember to include the assembly name when setting the document spec. If you need an easy way to determine your schema name and assembly name, find your schema in the admin console and go to its properties. On the property screen, look at the Name and Assembly properties. Your document spec will be "SchemaName, AssemblyName".


  • How do OSes work on multiple CPUs? [on hold]

    - by user3691093
    Assumption: "OS es (atleast in some part) should be written in assembly.Assembly programs are CPU specefic." If so how can one os run on different CPUs ? For example: how is that I can load Ubuntu on different systems having different CPUs (like intel i3,i5,i7, amd a8,a6,etc) from the same bootable disk? Does the disk contain seporate assembly programs for each CPU? Are these CPUs 'similar' enough to run the same assembly program? Is my assumption wrong? Something else.... Thanks for responding. I tried to find out in what way are the CPUs that I mentioned 'similar'. I came across the concepts of Instruction Set Architecture and Microarchitecture of CPUs.A CPU will understand a program if it is combatible with its ISA. Even if CPUs are 'wired up' differently (different microarchitecture) , as long as the ISA implemented on top is same ,the program will work. ARM and x86 have different ISA ( that why there are 2 windows 8 versions, right?). And if an app program is written in an HLL with compilers for both platforms we will saved from wasting time writing 2 programs. Did I understand anything wrong? Are there programs that can take a compiled program as input and produce a program executable on another CPU as output? Is it possible? (Virtualisation?) 32 bit windows programs do install on 64 bit windows ,dont they? Arent 64 bit CPUs 'differerent' from 32 bit CPUs? They do get seporate OS versions, right? Is this backward combatibility achieved using programes mentioned in (3) ?


  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    This is the final article in the quick guide to Oracle IRM. If you've followed everything prior, you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide, as this next article is common to both.

    Contents
    - Why this is the most important part...
    - Understanding the classification and standard rights model
    - Identifying business use cases
    - Creating an effective IRM classification model
    - One single classification across the entire business
    - A context for each and every possible granular use case
    - What makes a good context?
    - Deciding on the use of roles in the context
    - Reviewing the features and security for context roles
    - Summary

    Why this is the most important part...

    Now the real work begins; installing and getting an IRM system running is as simple as following instructions. However, to actually have an IRM technology easily protecting your most sensitive information without interfering with your users' existing daily workflows, and to be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle has over 10 years of experience in helping customers, and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports, and no matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve:
    - Secure the content at the earliest point possible, and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results in a more secure deployment.
    - K.I.S.S. (Keep It Simple, Stupid). Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured, which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience.
    - Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them of the value of the technology to both your business and them.

    Before getting into the detail, I must pay homage to Martin White, Vice President of client services at SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments.
    No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in solutions that are difficult to use and manage, and ultimately insecure. The advice and information that follows was born with Martin, and he's still delivering IRM consulting with customers; he can be found at www.thinkers.co.uk. It is from Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions.

    Understanding the classification and standard rights model

    The goal of any successful IRM deployment is to balance the increase in security the technology brings without overcomplicating the way people use secured content, while avoiding a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is, until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try to print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between:
    - A group of related documents
    - The people that use the documents
    - The roles that these people perform
    - The rights that these people need to perform their role

    The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts, but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification, and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts against which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails.

    Identifying business use cases

    Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, or financial documents. For this and every subsequent use case you must understand how users create and work with documents, to whom they are distributed and how the recipients should interact with them.
    A successful IRM deployment should start with one well-identified use case (we go through some examples towards the end of this article); then, after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and whether the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further.

    Creating an effective IRM classification model

    Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built-in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates. The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity. The question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum...

    One single classification across the entire business

    Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this:
    - Document security classification decisions are simple. You only have one context to choose from!
    - User provisioning is simple; just make sure everyone has a role in the only context in the business.
    - Administration is very low; if you assign rights to groups from the business user repository you probably never have to touch IRM administration again.

    There are, however, some obvious downsides to this model:
    - All users have access to all IRM-secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is.
    - You cannot delegate control of different documents to different parts of the business; this may not satisfy your regulatory requirements for the separation and delegation of duties.
    - Changing a user's role affects every single document ever secured.

    Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value.
    Once secured, IRM-protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today; imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

    A context for each and every possible granular use case

    Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need, and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity: restricted, confidential and top secret. Then let's say that each group and department needs to define access to information for both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group:
    - Q1FY2010 Restricted Internal - Engineering Group 1 - Research
    - Q1FY2010 Restricted Internal - Engineering Group 1 - Design
    - Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Restricted External - Engineering Group 1 - Research
    - Q1FY2010 Restricted External - Engineering Group 1 - Design
    - Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Research
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Design
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Confidential External - Engineering Group 1 - Research
    - Q1FY2010 Confidential External - Engineering Group 1 - Design
    - Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Top Secret External - Engineering Group 1 - Research
    - Q1FY2010 Top Secret External - Engineering Group 1 - Design
    - Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing

    Now multiply the above by 6 for each engineering group: 18 contexts, and you are then creating/reviewing another 18 every 3 months. After a year you've got 72 contexts. What would be the advantages of such a complex classification model?
    - You can satisfy very granular rights requirements; for example, only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis.
    - Your business may have very complex rights requirements, and mapping this directly to IRM may be an obvious exercise.

    The disadvantages of such a classification model are significant:
    - Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
    - From an end user's perspective, life will be very confusing. Imagine if a user has rights in just 6 of these contexts.
      They may be able to print content from one but not another, and be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology.
    - Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights.
    - Hard to understand who can do what with what. Imagine being the VP of engineering and, as part of an internal security audit, you are asked the question, "What rights do researchers have to our top secret information?". In this complex model the answer is not simple; it would depend on many roles in many contexts.

    Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable, and too few contexts do not give fine enough granularity.

    What makes a good context?

    Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However, there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created, or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which helps you define a context. First ask these questions about a set of documents:
    - What is the topic?
    - Who are legitimate contributors on this topic?
    - Who are the authorized readership?

    If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure, and as such they cannot leak to your competitors; therefore it is better sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try and tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business.

    Deciding on the use of roles in the context

    Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon.
    For each context, identify:

    Administrative roles
    - Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are usually the person with the most at risk when sensitive information is lost.
    - Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
    - Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.

    Document-related roles
    - Contributors: the people who create and edit documents in this context.
    - Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published-work and Work-in-Progress.)
    - Readers: the people who read documents from this context.

    Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles obviously map directly to roles available in Oracle IRM.

    Reviewing the features and security for context roles

    At this point we have decided on a classification of information, and understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First, think about why you are protecting the documents in the first place: to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent authorized people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions, authorized users will often find ways to circumvent the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However, I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information, and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community.
    These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

    The big print issue...

    Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, and it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights:
    - Oracle IRM sealed documents can contain watermarks that expose information about the user, time and location of access and the classification of the document. This information would reside in the printed copy, making it easier to trace who printed it.
    - Printed documents are slower to distribute in comparison to their digital counterparts, so time-sensitive information in printed format may present a lower risk.
    - Print activity is audited, therefore you can monitor and react to users abusing print rights.

    Summary

    In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group to be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is introducing the complexity to deliver them at the right level. In another article I'm working on, I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.


  • Accessing resources on localhost using domain credentials

    - by jas
    I'm trying to set up Team Foundation Server 2010, SharePoint Server 2010 and Report Server 2008 R2. I apologize for how long my question/problem is, but I'm really lost on where to even look, so I am being as descriptive as possible in hopes that I'm making sense. The goal: since developers can be inside or outside the firewall, there needs to be a single HTTP point of entry to TFS that works regardless of which side of the firewall you are on, and it needs to work with external access to SharePoint and Report Server. Meaning we have it set up in DNS so that buildserver.mydomain.com points to the build service box, which contains all of the services listed at the top of this post, and specific services are defined/located by the port number. This is working great on every machine inside and out, except from the build server itself. All services must be able to work using external URLs. If I use http://buildserver.mydomain.com:4800/tfs (the external URL) from my notebook, which is behind the firewall, I'm able to log in with my domain credentials as expected. If the other developer points to the same URL from their home, which isn't on the domain, they are also able to log in using their domain credentials. However, if I am directly on buildserver and call SharePoint, TFS or Reporting Server from itself using the external URL (i.e. http://buildserver.mydomain.com:4800), I am prompted for a username and password. Entering my domain credentials results in another prompt to enter my credentials again. It will prompt three times regardless of which credentials are used (I have rights as a domain admin) and then, after the third prompt, directs me to a blank white page as though access was denied. There are no errors displayed on the page and nothing ends up in the event viewer. From buildserver, if I use just the host name (the internal URL), then I'm prompted a single time for credentials and it works, i.e. http://buildserver:4800/tfs works from the server itself. The behavior is identical for any service requiring authentication. Meaning, from the box itself, SharePoint Central Admin, SharePoint WebApp, TFS, TFS Web Access, Report Server and Report Manager all fail using the external URL but succeed if called using the internal URL. So the problem comes into play when configuring all of the services to work together. The only way to configure TFS is locally from the server, which means I must point to the internal Report Server URLs (http://buildserver:4800/reports and reportServer respectively, instead of http://buildserver.mydomain.com:4800 like they need to be), since external URLs aren't working from the server itself. If I configure TFS to use the internal URL for Report Server, then creating team projects or working in the SharePoint site for the team project fails for anyone not inside the domain, since their machines have no idea who http://buildserver/reports even is or how to resolve it. I have configured SharePoint with Alternate Access Mappings as well as set up Report Server to listen for external URLs. The external URLs simply aren't working when called from the server itself. I hope this makes sense. Thanks for taking the time to read this rather verbose plea for help.
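    Editorial note, offered as a hedged pointer rather than a confirmed diagnosis: repeated credential prompts followed by a blank page, occurring only when the server calls itself by a name other than its own host name, is the classic signature of the Windows loopback check. The documented workaround (Microsoft KB896861) is to list the external FQDN in the BackConnectionHostNames registry value and restart IIS, roughly:

        REM Sketch of the KB896861 workaround (host name taken from the question)
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" ^
            /v BackConnectionHostNames /t REG_MULTI_SZ /d buildserver.mydomain.com
        iisreset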


  • Applying ServiceKnownTypeAttribute to WCF service with Spring

    - by avidgoffer
    I am trying to apply the ServiceKnownTypeAttribute to my WCF Service but keep getting the error below my config. Does anyone have any ideas? <object id="HHGEstimating" type="Spring.ServiceModel.ServiceExporter, Spring.Services"> <property name="TargetName" value="HHGEstimatingHelper"/> <property name="Name" value="HHGEstimating"/> <property name="Namespace" value="http://www.igcsoftware.com/HHGEstimating"/> <property name="TypeAttributes"> <list> <ref local="wcfErrorBehavior"/> <ref local="wcfSilverlightFaultBehavior"/> <object type="System.ServiceModel.ServiceKnownTypeAttribute, System.ServiceModel"> <constructor-arg name="type" value="IGCSoftware.HHG.Business.UserControl.AtlasUser, IGCSoftware.HHG.Business"/> </object> </list> </property> Error thrown by a dependency of object 'HHGEstimating' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46' : '1' constructor arguments specified but no matching constructor found in object 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' (hint: specify argument indexes, names, or types to avoid ambiguities). while resolving 'TypeAttributes[2]' to 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46' Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: Spring.Objects.Factory.ObjectCreationException: Error thrown by a dependency of object 'HHGEstimating' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46' : '1' constructor arguments specified but no matching constructor found in object 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' (hint: specify argument indexes, names, or types to avoid ambiguities). while resolving 'TypeAttributes[2]' to 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46' Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [ObjectCreationException: Error thrown by a dependency of object 'HHGEstimating' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46' : '1' constructor arguments specified but no matching constructor found in object 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' (hint: specify argument indexes, names, or types to avoid ambiguities). 
while resolving 'TypeAttributes[2]' to 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46'] Spring.Objects.Factory.Support.ObjectDefinitionValueResolver.ResolveInnerObjectDefinition(String name, String innerObjectName, String argumentName, IObjectDefinition definition, Boolean singletonOwner) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\ObjectDefinitionValueResolver.cs:300 Spring.Objects.Factory.Support.ObjectDefinitionValueResolver.ResolvePropertyValue(String name, IObjectDefinition definition, String argumentName, Object argumentValue) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\ObjectDefinitionValueResolver.cs:150 Spring.Objects.Factory.Support.ObjectDefinitionValueResolver.ResolveValueIfNecessary(String name, IObjectDefinition definition, String argumentName, Object argumentValue) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\ObjectDefinitionValueResolver.cs:112 Spring.Objects.Factory.Config.ManagedList.Resolve(String objectName, IObjectDefinition definition, String propertyName, ManagedCollectionElementResolver resolver) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Config\ManagedList.cs:126 Spring.Objects.Factory.Support.ObjectDefinitionValueResolver.ResolvePropertyValue(String name, IObjectDefinition definition, String argumentName, Object argumentValue) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\ObjectDefinitionValueResolver.cs:201 Spring.Objects.Factory.Support.ObjectDefinitionValueResolver.ResolveValueIfNecessary(String name, IObjectDefinition definition, String argumentName, Object argumentValue) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\ObjectDefinitionValueResolver.cs:112 Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.ApplyPropertyValues(String name, RootObjectDefinition definition, IObjectWrapper wrapper, IPropertyValues properties) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\AbstractAutowireCapableObjectFactory.cs:373 Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.PopulateObject(String name, RootObjectDefinition definition, IObjectWrapper wrapper) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\AbstractAutowireCapableObjectFactory.cs:563 Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.ConfigureObject(String name, RootObjectDefinition definition, IObjectWrapper wrapper) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\AbstractAutowireCapableObjectFactory.cs:1844 Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.InstantiateObject(String name, RootObjectDefinition definition, Object[] arguments, Boolean allowEagerCaching, Boolean suppressConfigure) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\AbstractAutowireCapableObjectFactory.cs:918 Spring.Objects.Factory.Support.AbstractObjectFactory.CreateAndCacheSingletonInstance(String objectName, RootObjectDefinition objectDefinition, Object[] arguments) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\AbstractObjectFactory.cs:2120 Spring.Objects.Factory.Support.AbstractObjectFactory.GetObjectInternal(String name, Type 
requiredType, Object[] arguments, Boolean suppressConfigure) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\AbstractObjectFactory.cs:2046 Spring.Objects.Factory.Support.DefaultListableObjectFactory.PreInstantiateSingletons() in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Objects\Factory\Support\DefaultListableObjectFactory.cs:505 Spring.Context.Support.AbstractApplicationContext.Refresh() in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Context\Support\AbstractApplicationContext.cs:911 _dynamic_Spring.Context.Support.XmlApplicationContext..ctor(Object[] ) +197 Spring.Reflection.Dynamic.SafeConstructor.Invoke(Object[] arguments) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Reflection\Dynamic\DynamicConstructor.cs:116 Spring.Context.Support.RootContextInstantiator.InvokeContextConstructor(ConstructorInfo ctor) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Context\Support\ContextHandler.cs:550 Spring.Context.Support.ContextInstantiator.InstantiateContext() in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Context\Support\ContextHandler.cs:494 Spring.Context.Support.ContextHandler.InstantiateContext(IApplicationContext parentContext, Object configContext, String contextName, Type contextType, Boolean caseSensitive, String[] resources) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Context\Support\ContextHandler.cs:330 Spring.Context.Support.ContextHandler.Create(Object parent, Object configContext, XmlNode section) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Context\Support\ContextHandler.cs:280 [ConfigurationErrorsException: Error creating context 'spring.root': Error thrown by a dependency of object 'HHGEstimating' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46' : '1' constructor arguments specified but no matching constructor found in object 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' (hint: specify argument indexes, names, or types to avoid ambiguities). 
while resolving 'TypeAttributes[2]' to 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46'] System.Configuration.BaseConfigurationRecord.EvaluateOne(String[] keys, SectionInput input, Boolean isTrusted, FactoryRecord factoryRecord, SectionRecord sectionRecord, Object parentResult) +202 System.Configuration.BaseConfigurationRecord.Evaluate(FactoryRecord factoryRecord, SectionRecord sectionRecord, Object parentResult, Boolean getLkg, Boolean getRuntimeObject, Object& result, Object& resultRuntimeObject) +1061 System.Configuration.BaseConfigurationRecord.GetSectionRecursive(String configKey, Boolean getLkg, Boolean checkPermission, Boolean getRuntimeObject, Boolean requestIsHere, Object& result, Object& resultRuntimeObject) +1431 System.Configuration.BaseConfigurationRecord.GetSection(String configKey, Boolean getLkg, Boolean checkPermission) +56 System.Configuration.BaseConfigurationRecord.GetSection(String configKey) +8 System.Web.Configuration.HttpConfigurationSystem.GetApplicationSection(String sectionName) +45 System.Web.Configuration.HttpConfigurationSystem.GetSection(String sectionName) +49 System.Web.Configuration.HttpConfigurationSystem.System.Configuration.Internal.IInternalConfigSystem.GetSection(String configKey) +6 System.Configuration.ConfigurationManager.GetSection(String sectionName) +78 Spring.Util.ConfigurationUtils.GetSection(String sectionName) in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Util\ConfigurationUtils.cs:69 Spring.Context.Support.ContextRegistry.InitializeContextIfNeeded() in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Context\Support\ContextRegistry.cs:340 Spring.Context.Support.ContextRegistry.GetContext() in l:\projects\spring-net\trunk\src\Spring\Spring.Core\Context\Support\ContextRegistry.cs:206 Spring.ServiceModel.Activation.ServiceHostFactory.CreateServiceHost(String reference, Uri[] baseAddresses) in l:\projects\spring-net\trunk\src\Spring\Spring.Services\ServiceModel\Activation\ServiceHostFactory.cs:66 System.ServiceModel.HostingManager.CreateService(String normalizedVirtualPath) +11687036 System.ServiceModel.HostingManager.ActivateService(String normalizedVirtualPath) +42 System.ServiceModel.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath) +479 [ServiceActivationException: The service '/HHGEstimating.svc' cannot be activated due to an exception during compilation. The exception message is: Error creating context 'spring.root': Error thrown by a dependency of object 'HHGEstimating' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46' : '1' constructor arguments specified but no matching constructor found in object 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' (hint: specify argument indexes, names, or types to avoid ambiguities). while resolving 'TypeAttributes[2]' to 'System.ServiceModel.ServiceKnownTypeAttribute#25A5628' defined in 'assembly [IGCSoftware.HHG.WebService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [IGCSoftware.HHG.WebService.Resources.Spring.objects.xml] line 46'.] 
System.ServiceModel.AsyncResult.End(IAsyncResult result) +11592858 System.ServiceModel.Activation.HostedHttpRequestAsyncResult.End(IAsyncResult result) +194 System.ServiceModel.Activation.HostedHttpRequestAsyncResult.ExecuteSynchronous(HttpApplication context, Boolean flowContext) +176 System.ServiceModel.Activation.HttpModule.ProcessRequest(Object sender, EventArgs e) +275 System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +68 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +75


  • Could not load file or assembly ... or one of its dependencies. An attempt was made to load a program with an incorrect format

    - by Dan
    I am getting the following error message when compiling or attempting to run my application on Windows 7 64-bit. I've scoured the internet and many people have the same error message; however, none of the solutions address my problem or situation. Using VS 2010. Error 38 Could not load file or assembly 'file:///D:/Projects/Windows Projects/Weld/Components/FileAttachments/FileAttachments/FileAttachments/bin/x86/Debug/FileAttaching.dll' or one of its dependencies. An attempt was made to load a program with an incorrect format. Line 1212, position 5. D:\Projects\Windows Projects\Weld\Weld\Weld.UI\frmMain.resx 1212 5 Weld.UI OK, so I have two projects: a UI project and a FileAttachment project. The UI project has a reference to the FileAttachment project. When I compile the UI project in "Any CPU" mode everything works fine and it runs. I assume 'Any CPU' will run in 64-bit mode when I compile, as that is the platform I am using. I want to run/compile as x86, so I change the configuration for all projects to x86 and verify that these configurations are compiling to x86. I compile and get the error as stated above. I find it odd that it compiles and works fine in 64-bit but not 32-bit. However, when compiled and deployed to users as 'Any CPU', if these users have x86 it still works for them, no problem. I just can't compile or run as x86 on my PC. Again, I can compile as Any CPU and deploy to a 32-bit PC no problem. Neither project references any 64-bit-only DLLs. Both projects are verified to be targeting 32-bit DLLs and .NET Framework assemblies. I need to compile and run this locally in 32-bit mode; I need JIT edit/continue among other things. Here is the line of code in the resx file that is causing the problem: </data> <data name="Appearance17.Image" type="System.Drawing.Bitmap, System.Drawing" mimetype="application/x-microsoft.net.object.bytearray.base64"> <value> The resx file is verified to be generated for .NET 2.0 and is only referencing .NET 2.0 assemblies and not .NET 4.0 versions. Any ideas here? I've searched the net and have found hundreds of people with the same error message but a different problem.
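    A hedged diagnostic aside (not part of the original post): "An attempt was made to load a program with an incorrect format" is the classic bitness-mismatch message, so it can help to check what every assembly in the chain actually targets. The CorFlags tool from the Windows SDK reports this; the path below is illustrative:

        REM Run from a Visual Studio command prompt
        corflags "bin\x86\Debug\FileAttaching.dll"
        REM In the output: PE32 with 32BITREQ=1 means x86-only,
        REM PE32 with 32BITREQ=0 means AnyCPU, and PE32+ means x64-only.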


  • Strategy for using snapshots to back up Ubuntu Linux server?

    - by MountainX
    I need some backup advice for my home file server. Here are the mount points, volume groups, logical volumes and used/total space of all the volumes on my Ubuntu 8.10 home file server:
    - / vgA/lvRoot [7.5G/50G]
    - /tmp vgB/lvTmp [195M/30G]
    - /var vgB/lvVar [780M/30G]
    - swap vgB/lvSwap [16.00 GB]
    - /media1 vgC/lvMedia1 [400G/975G]
    - /media2 vgC/lvMedia2 [75G/295G]
    - /boot partition (no volume group) [95M/200M]
    - /video partition (no volume group) [450G/950G]
    - /backups vgD/lvBackupTarget [800G/925G]
    - /home vgE/lvHome [85G/200G]

    I have just added a 2.0 TB external USB drive that I would like to use to back up everything. (It will be a close fit to get it all on one 2.0 TB drive. I actually have a second external USB drive if needed.) I'd like to back up /, /var, /media1, /media2 and /home. I'll deal with /boot and /video separately since they are not logical volumes. For all the logical volumes I'm anticipating taking snapshots and then copying those snapshots to the 2.0 TB external USB drive. I have never done a task like that before. If I do that, I could use the tutorial I found here: http://www.howtoforge.com/linux_lvm_snapshots My questions are:
    1. What is the best overall strategy? Is it LVM snapshots, as I'm assuming?
    2. How should I prepare, subdivide and mount the 2.0 TB external USB drive?
    2.a. Should I create one or more regular partitions, or should I create a physical volume with one or more logical volumes?
    2.b. Would it be advisable to exactly mirror the source pv/lv layout on the external drive, and if so, is this a good strategy?
    3. What's the best way to get the snapshots onto the external drive? dd?
    Even though this is a strategy question, feedback with actual commands is appreciated. I need step-by-step cookbook-style help because I don't do much server admin work. (Background: This is a home file server that I have rarely had to touch in about 2 years. It has done its job without much intervention. The really old PC that I used to back everything up recently failed, so I'm replacing that with the external USB drive(s) and I'd like to upgrade my backup strategy at the same time. Previously, I just copied stuff from /backups over to the other computer, and that would not have made things very easy in a real restore situation. The /backups mount point contains backup copies of "most" of the important data on a file-by-file basis, but it does not contain copies of /boot, etc. BTW, the actual internal HDD that holds /backups is separate from the other storage devices.)
    EDIT: I'll propose a strategy... The idea came from a comment here: LVM mirroring VS RAID1. "LVM mirrors are for replication of a logical volume to a different physical volume. It's essentially meant to 'move the data to a different disk'. The mirror is then broken..." That would fit my requirements well. Here is an ideal situation:
    1. establish the LV mirror on the external drive
    2. break the link with the mirror
    3. create a (persistent) snapshot on the mirror
    4. after a week, resync the mirror with the original source and update the mirror
    5. break the link and create another snapshot on the mirror
    Obviously, the mirror will be like a weekly full backup, and the snapshots on the mirror will represent earlier points in time. If this would work and if it would be time efficient, it would give a nice full & differential type backup on the external drive based on LVM. I have not heard of a strategy like this before. Will it work? Could it be scripted? Thoughts?
EDIT 2: Creating Portable DiskSafes With LoopbackFS And LVM Snapshots This article seems intriguing: http://www.howtoforge.com/creating-portable-disksafes-with-loopbackfs-and-lvm-snapshots Unfortunately, I don't understand exactly how to map those ideas to the strategy I'm proposing above. I'm going to ask this last bit as a separate question. I will leave my original question in place because I still desire feedback on the overall best strategy. At this moment I'm assuming it is LVM mirroring in the style of "Creating Portable DiskSafes with LVM Snapshots" but that might be wrong.
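
For question 3, here is a rough sketch of the snapshot-and-copy step I have in mind (untested; the snapshot size, mount point and USB path are examples based on my layout):

```bash
# Create a copy-on-write snapshot of /home (vgE/lvHome)
lvcreate --size 5G --snapshot --name lvHomeSnap /dev/vgE/lvHome

# Mount the frozen state read-only and copy it to the USB drive
mkdir -p /mnt/snap
mount -o ro /dev/vgE/lvHomeSnap /mnt/snap
rsync -aHAX /mnt/snap/ /media/usb-backup/home/

# Tear the snapshot down once the copy is done
umount /mnt/snap
lvremove -f /dev/vgE/lvHomeSnap
```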

    Read the article

  • Does a receiving mail server (the ultimate destination) see a difference between emails delivered directly to it and emails sent via an external relay that then forwards them to it?

    - by Matt
    Let's say my users have accounts on some mail server mail.example.com. I currently have my MX record set to mail.example.com and all is good. Now let's say I want to have mails initially delivered to an external service (e.g. Postini; note that this is not a Postini-specific question, though). In the normal situation where my MX is set directly to my mail server mail.example.com, sending MTAs will of course look up my MX and send to mail.example.com. In my new situation I'd have my MX set to mx.othermailservice.com and emails would be received there. othermailservice.com will then relay the emails (while keeping the Return-Path header the same) to mail.example.com. Do the emails that are received at mail.example.com after being relayed from the other service "look" any different than emails that go directly to it, as would be the case where the MX was set to mail.example.com?
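
For illustration, the relayed copy would typically carry one extra Received hop at the top, something like this (hostnames, addresses and timestamps invented):

```
Received: from mx.othermailservice.com (mx.othermailservice.com [203.0.113.7])
        by mail.example.com with ESMTP; Tue, 01 May 2012 10:00:02 +0000
Received: from sender-mta.example.net (sender-mta.example.net [198.51.100.9])
        by mx.othermailservice.com with ESMTP; Tue, 01 May 2012 10:00:00 +0000
```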

    Read the article

  • Organizations &amp; Architecture UNISA Studies &ndash; Chap 7

    - by MarkPearl
    Learning Outcomes
- Name different device categories
- Discuss the functions and structure of I/O modules
- Describe the principles of Programmed I/O
- Describe the principles of Interrupt-driven I/O
- Describe the principles of DMA
- Discuss the evolution characteristics of I/O channels
- Describe different types of I/O interface
- Explain the principles of point-to-point and multipoint configurations
- Discuss the way in which a FireWire serial bus functions
- Discuss the principles of InfiniBand architecture

External Devices
An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into 3 categories:
- Human readable – e.g. video display
- Machine readable – e.g. magnetic disk
- Communications – e.g. wifi card

I/O Modules
An I/O module has two major functions:
- Interface to the processor and memory via the system bus or central switch
- Interface to one or more peripheral devices by tailored data links

Module Functions
The major functions or requirements for an I/O module fall into the following categories:
- Control and timing
- Processor communication
- Device communication
- Data buffering
- Error detection

The I/O function includes a control and timing requirement, to coordinate the flow of traffic between internal resources and external devices. Processor communication involves the following:
- Command decoding
- Data
- Status reporting
- Address recognition

The I/O module must also be able to perform device communication. This communication involves commands, status information, and data. An essential task of an I/O module is data buffering, due to the relatively slow speeds of most external devices. An I/O module is often responsible for error detection and for subsequently reporting errors to the processor.

I/O Module Structure
An I/O module functions to allow the processor to view a wide range of devices in a simple-minded way. The I/O module may hide the details of timing, formats, and the electromechanics of an external device so that the processor can function in terms of simple read and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor. Three techniques are possible for I/O operations:
- Programmed I/O
- Interrupt-driven I/O
- Direct Memory Access (DMA)

Programmed I/O
When a processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module will perform the requested action and then set the appropriate bits in the I/O status register. The I/O module takes no further action to alert the processor.

I/O Commands
To execute an I/O-related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command.
There are four types of I/O commands that an I/O module may receive when it is addressed by a processor:
- Control – used to activate a peripheral and tell it what to do
- Test – used to test various status conditions associated with an I/O module and its peripherals
- Read – causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer
- Write – causes the I/O module to take an item of data from the data bus and subsequently transmit that data item to the peripheral

The main disadvantage of this technique is that it is a time-consuming process that keeps the processor busy needlessly.

I/O Instructions
With programmed I/O there is a close correspondence between the I/O-related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute the instructions. Typically there will be many I/O devices connected through I/O modules to the system – each device is given a unique identifier or address – and when the processor issues an I/O command, the command contains the address of the desired device; thus each I/O module must interpret the address lines to determine if the command is for itself. When the processor, main memory and I/O share a common bus, two modes of addressing are possible:
- Memory-mapped I/O
- Isolated I/O

(for a detailed explanation read page 245 of the book) The advantage of memory-mapped I/O over isolated I/O is that it has a large repertoire of instructions that can be used, allowing more efficient programming. The disadvantage of memory-mapped I/O over isolated I/O is that valuable memory address space is used up.

Interrupt-driven I/O
Interrupt-driven I/O works as follows:
- The processor issues an I/O command to a module and then goes on to do some other useful work
- The I/O module then interrupts the processor to request service when it is ready to exchange data with the processor
- The processor then executes the data transfer and resumes its former processing

Interrupt Processing
The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operation, the following sequence of hardware events occurs:
- The device issues an interrupt signal to the processor
- The processor finishes execution of the current instruction before responding to the interrupt
- The processor tests for an interrupt – determines that there is one – and sends an acknowledgement signal to the device that issued the interrupt. The acknowledgement allows the device to remove its interrupt signal
- The processor now needs to prepare to transfer control to the interrupt routine. To begin, it needs to save the information needed to resume the current program at the point of interrupt. The minimum information required is the status of the processor and the location of the next instruction to be executed
- The processor now loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt. It also saves the values of the processor registers, because the interrupt operation may modify these
- The interrupt handler processes the interrupt – this includes examination of status information relating to the I/O operation or other event that caused the interrupt
- When interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers
- Finally, the PSW and program counter values from the stack are restored.
Design Issues
Two design issues arise in implementing interrupt I/O:
- Because there will be multiple I/O modules, how does the processor determine which device issued the interrupt?
- If multiple interrupts have occurred, how does the processor decide which one to process?

Addressing device recognition, 4 general categories of techniques are in common use:
- Multiple interrupt lines
- Software poll
- Daisy chain
- Bus arbitration

For a detailed explanation of these approaches read page 250 of the textbook. Interrupt-driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks:
- The I/O transfer rate is limited by the speed with which the processor can test and service a device
- The processor is tied up in managing an I/O transfer; a number of instructions must be executed for each I/O transfer

Direct Memory Access
When large volumes of data are to be moved, a more efficient technique is direct memory access (DMA).

DMA Function
DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor. It needs to do this to transfer data to and from memory over the system bus. DMA must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (the most common approach – referred to as cycle stealing). When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending to the DMA module the following information:
- Whether a read or write is requested, using the read or write control line between the processor and the DMA module
- The address of the I/O device involved, communicated on the data lines
- The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register
- The number of words to be read or written, communicated via the data lines and stored in the data count register

The processor then continues with other work; it delegates the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor; thus the processor is involved only at the beginning and end of the transfer.

I/O Channels and Processors
Characteristics of I/O Channels
As one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions – such instructions are stored in main memory to be executed by a special-purpose processor in the I/O channel itself. Two types of I/O channels are common:
- A selector channel controls multiple high-speed devices
- A multiplexor channel can handle I/O with multiple devices at the same time, accepting or transmitting characters as fast as possible to multiple devices
The external interface: FireWire and InfiniBand
Types of Interfaces
One major characteristic of the interface is whether it is serial or parallel:
- Parallel interface – there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously
- Serial interface – there is only one line used to transmit data, and bits must be transmitted one at a time

With new-generation serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms the dialogue may look as follows:
- The I/O module sends a control signal requesting permission to send data
- The peripheral acknowledges the request
- The I/O module transfers data
- The peripheral acknowledges receipt of data

For a detailed explanation of FireWire and InfiniBand technology read pages 264–270 of the textbook
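
As a concrete illustration of the memory-mapped I/O and programmed I/O ideas above (not from the textbook – the device, addresses and status bit are hypothetical), a device register mapped into the address space is accessed like ordinary memory:

```c
#include <stdint.h>

/* Hypothetical UART registers mapped at fixed addresses. */
#define UART_STATUS ((volatile uint8_t *)0xF000A000)
#define UART_DATA   ((volatile uint8_t *)0xF000A001)
#define TX_READY    0x01

void uart_putc(char c)
{
    /* Programmed I/O: the processor busy-waits, polling the status
       register until the device reports it can accept a byte. */
    while ((*UART_STATUS & TX_READY) == 0)
        ;
    *UART_DATA = (uint8_t)c;  /* an ordinary store acts as an I/O write */
}
```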

    Read the article

  • System.Web.Services.Protocols.SoapException - Security permission issue

    - by Hiscal
    Can anyone help me to resolve this error? My website is hosted in a shared environment.

Server Error in '/' Application.

System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.
at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet)
at System.Security.CodeAccessPermission.Demand()
at System.Net.ServicePointManager.set_CertificatePolicy(ICertificatePolicy value)
at BirdieThis.WebService.golfService.BookGolfCourse(CourseBooking oCourseInfo, CoursePlayer oCoursePlayer, CoursePayment oCoursePayment)
The action that failed was: Demand
The type of the first permission that failed was: System.Security.Permissions.SecurityPermission
The first permission that failed was:
<IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="UnmanagedCode"/>
The demand was for:
<IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="UnmanagedCode"/>
The granted set of the failing assembly was:
<PermissionSet class="System.Security.PermissionSet" version="1">
<IPermission class="System.Security.Permissions.EnvironmentPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Read="TEMP;TMP;USERNAME;OS;COMPUTERNAME"/>
<IPermission class="System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Read="D:\Hosting\5457055\html" Write="d:\content\;d:\hosting\" Append="D:\Hosting\5457055\html" PathDiscovery="d:\hosting\"/>
<IPermission class="System.Security.Permissions.IsolatedStorageFilePermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Allowed="AssemblyIsolationByUser" UserQuota="9223372036854775807"/>
<IPermission class="System.Security.Permissions.ReflectionPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="RestrictedMemberAccess"/>
<IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="Assertion, Execution, ControlThread, ControlPrincipal, RemotingConfiguration"/>
<IPermission class="System.Security.Permissions.UrlIdentityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Url="file:///D:/Hosting/5457055/html/bin/App_Code.DLL"/>
<IPermission class="System.Security.Permissions.ZoneIdentityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Zone="MyComputer"/>
<IPermission class="System.Security.Permissions.KeyContainerPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/>
<IPermission class="System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Level="Medium"/>
<IPermission class="System.Configuration.ConfigurationPermission, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" version="1" Unrestricted="true"/>
<IPermission class="System.Net.DnsPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/>
<IPermission class="System.Drawing.Printing.PrintingPermission, System.Drawing, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" version="1" Level="DefaultPrinting"/>
<IPermission class="System.Net.Mail.SmtpPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Access="Connect"/>
<IPermission class="System.Data.SqlClient.SqlClientPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/>
<IPermission class="System.Data.OleDb.OleDbPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/>
<IPermission class="System.Data.Odbc.OdbcPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/>
<IPermission class="System.Net.WebPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1">
<ConnectAccess>
<URI uri="http://.*"/>
<URI uri="https://.*"/>
</ConnectAccess>
</IPermission>
<IPermission class="System.Net.SocketPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1">
<ConnectAccess>
<ENDPOINT host="*.*.*.*" transport="Tcp" port="3306"/>
</ConnectAccess>
</IPermission>
</PermissionSet>
The assembly or AppDomain that failed was: App_Code, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null
The method that caused the failure was: golfswitchs.BookGolfResult BookGolfCourse(mygolf.CourseBooking, mygolf.CoursePlayer, mygolf.CoursePayment)
The Zone of the assembly that failed was: MyComputer
The Url of the assembly that failed was: file:///D:/Hosting/5457055/html/bin/App_Code.DLL
--- End of inner exception stack trace ---

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. [the same inner exception, failed permission and granted permission set as above are repeated here verbatim]

Source Error:
Line 446:
Line 447: oPayment.PayCurrency = "USD";
Line 448: oResult = oService.BookGolfCourse(oGolfItem, oGolfplayer, oPayment);
Line 449: Response.Write(oResult.RetMsg);
Line 450:
Source File: c:\inetpub\vhosts\cfmdeveloper.com\subdomains\ind103\httpdocs\test.aspx.cs Line: 448

Stack Trace:
[SoapException: System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. [the same inner exception and permission set are repeated here a third time] --- End of inner exception stack trace ---]
System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) +431766
System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) +204
mygolf.golfService.BookGolfCourse(CourseBooking oCourseInfo, CoursePlayer oCoursePlayer, CoursePayment oCoursePayment) +80
birdiethis.web.test.BookClub() in c:\inetpub\vhosts\cfmdeveloper.com\subdomains\ind103\httpdocs\test.aspx.cs:448
birdiethis.web.test.Page_Load(Object sender, EventArgs e) in c:\inetpub\vhosts\cfmdeveloper.com\subdomains\ind103\httpdocs\test.aspx.cs:28
System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +14
System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +35
System.Web.UI.Control.OnLoad(EventArgs e) +99
System.Web.UI.Control.LoadRecursive() +50
System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627

Version Information: Microsoft .NET Framework Version:2.0.50727.3603; ASP.NET Version:2.0.50727.3082
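
The stack trace pins the demand on System.Net.ServicePointManager.set_CertificatePolicy, and the granted set above (note Level="Medium" and the SecurityPermission flags without UnmanagedCode) shows why it fails on this shared host. Below is a hedged reconstruction of the kind of code that triggers it – the class names are invented; the usual fixes are to drop the custom certificate policy or ask the host to raise the trust level:

```csharp
using System.Net;
using System.Security.Cryptography.X509Certificates;

// Hypothetical stand-in for whatever policy BookGolfCourse installs.
public class TrustAllCertificatePolicy : ICertificatePolicy
{
    public bool CheckValidationResult(ServicePoint srvPoint,
        X509Certificate certificate, WebRequest request, int certificateProblem)
    {
        return true; // accept any server certificate
    }
}

public static class CertificatePolicySetup
{
    public static void Apply()
    {
        // This assignment performs the Demand for
        // SecurityPermission(UnmanagedCode); under medium trust it throws.
        ServicePointManager.CertificatePolicy = new TrustAllCertificatePolicy();
    }
}
```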

    Read the article

  • fftw in Visual Studio?

    - by drhorrible
    I'm trying to link my project with fftw and so far, I've gotten it to compile, but not link. As the site said, I generated all the .lib files (even though I'm only using double precision), and copied them to C:\Program Files\Microsoft Visual Studio 9.0\VC\lib, the .h file to C:\Program Files\Microsoft Visual Studio 9.0\VC\include and the .dll to C:\windows\system32. I've copied the tutorial program, and the exact error I am getting is:

1>hw10.obj : error LNK2019: unresolved external symbol __imp__fftw_free referenced in function "bool __cdecl test(void)" (?test@@YA_NXZ)
1>hw10.obj : error LNK2019: unresolved external symbol __imp__fftw_destroy_plan referenced in function "bool __cdecl test(void)" (?test@@YA_NXZ)
1>hw10.obj : error LNK2019: unresolved external symbol __imp__fftw_execute referenced in function "bool __cdecl test(void)" (?test@@YA_NXZ)
1>hw10.obj : error LNK2019: unresolved external symbol __imp__fftw_plan_dft_1d referenced in function "bool __cdecl test(void)" (?test@@YA_NXZ)
1>hw10.obj : error LNK2019: unresolved external symbol __imp__fftw_malloc referenced in function "bool __cdecl test(void)" (?test@@YA_NXZ)

So, what could be wrong with my project setup? Thanks!
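
Worth checking (a guess based on the __imp__ prefixes, which usually mean the import library isn't being linked at all): the fftw import library has to be listed under Linker -> Input -> Additional Dependencies, or pulled in from source, e.g.:

```cpp
// Sketch only: assumes the import library generated from libfftw3-3.def
// is named libfftw3-3.lib; adjust to whatever name lib.exe produced.
#pragma comment(lib, "libfftw3-3.lib")

#include <fftw3.h>
```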

    Read the article

  • Using XML to generate SAP ABAP and/or SAPScript?

    - by Rob
    Has anyone got examples and/or experience of generating SAP ABAP or SAPScript form code from XML that came from an external application? This would help:
- creation of SAP-based applications in a data-driven way, by automating the knowledge to do so from the export of XML from an external application
- automated inputting of knowledge from an external application into SAP applications, rather than manually copying between systems
- enabling 3rd-party external tools to be used to create data, perhaps in an easier-to-use way than could be done in SAP; or if there was already heavy investment in training with these third-party tools rather than SAP; or if the employment market favoured staff with knowledge of these tools
- enabling creation of data for multiple purposes and views: those in SAP and outside SAP
- enabling inter-operability of SAP with 3rd-party external tools

I'm looking for:
- experiences as to the feasibility
- tools, e.g. parsers, XSLT etc. (see the sketch below)
- examples
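
For example – purely a sketch of the kind of pipeline I mean, with invented element names – an XSLT that emits one ABAP WRITE statement per record exported by the external application:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <!-- Turn <export><record name="..."/>...</export> into an ABAP report -->
  <xsl:template match="/export">
    <xsl:text>REPORT zgenerated.&#10;</xsl:text>
    <xsl:for-each select="record">
      <xsl:text>WRITE: / '</xsl:text>
      <xsl:value-of select="@name"/>
      <xsl:text>'.&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
```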

    Read the article

  • Some notes on Reflector 7

    - by CliveT
    Both Bart and I have blogged about some of the changes that we (and other members of the team) have made to .NET Reflector for version 7, including the new tabbed browsing model, the inclusion of Jason Haley's PowerCommands add-in and some improvements to decompilation such as handling iterator blocks. The intention of this blog post is to cover all of the main new features in one place, and to describe the three new editions of .NET Reflector 7. If you'd simply like to try out the latest version of the beta for yourself you can do so here.

Three new editions
.NET Reflector 7 will come in three new editions: .NET Reflector, .NET Reflector VS and .NET Reflector VSPro. The first edition is just the standalone Windows application. The latter two editions include the Windows application, but also add the power of Reflector into Visual Studio so that you can save time switching tools and quickly get to the bottom of a debugging issue that involves third-party code. Let's take a look at some of the new features in each edition.

Tabbed browsing
.NET Reflector now has a tabbed browsing model, in which the individual tabs have independent histories. You can open a new tab to view the selected object by using CTRL+CLICK. I've found this really useful when I'm investigating a particular piece of code but then want to focus on some other methods that I find along the way. For version 7, we wanted to implement the basic idea of tabs to see whether it is something that users will find helpful. If it is something that enhances productivity, we will add more tab-based features in a future version.

PowerCommands add-in
We have also included Jason Haley's PowerCommands add-in as part of version 7. This add-in provides a number of useful commands, including support for opening .xap files and extracting the constituent assemblies, and a query editor that allows C# queries to be written and executed against the Reflector object model. All of the PowerCommands features can be turned on from the options menu. We will be really interested to see what people are finding useful for further integration into the main tool in the future. My personal favourite part of the PowerCommands add-in is the query editor. You can set up as many of your own queries as you like, but we provide 25 to get you started. These do useful things like listing all extension methods in a given assembly, and displaying other lower-level information, such as the number of times that a given method uses the box IL instruction. These queries can be extracted and then executed from the 'Run Query' context menu within the assembly explorer. Moreover, the queries can be loaded, modified, and saved using the built-in editor, allowing very specific user customization and sharing of queries. The PowerCommands add-in contains many other useful utilities. For example, you can open an item using an external application, work with enumeration bit flags, or generate assembly binding redirect files. You can see Bart's earlier post for a more complete list.

.NET Reflector VS
.NET Reflector VS adds a brand new Reflector object browser into Visual Studio to save you time opening .NET Reflector separately and browsing for an object. A 'Decompile and Explore' option is also added to the context menu of references in the Solution Explorer, so you don't need to leave Visual Studio to look through decompiled code. We've also added some simple navigation features to allow you to move through the decompiled code as quickly and easily as you can in .NET Reflector.
When this is selected, the add-in decompiles the given assembly. Once the decompilation has finished, a clone of the Reflector assembly explorer can be used inside Visual Studio. When Reflector generates the source code, it records the location information. You can therefore navigate from the source file to other decompiled source using the 'Go To Definition' context menu item. This then takes you to the definition in another decompiled assembly.

.NET Reflector VSPro
.NET Reflector VSPro builds on the features in .NET Reflector VS to add the ability to debug any source code you decompile. When you decompile with .NET Reflector VSPro, a matching .pdb is generated, so you can use Visual Studio to debug the source code as if it were part of the project. You can now use all the standard debugging techniques that you are used to in the Visual Studio debugger, and step through decompiled code as if it were your own. Again, you can select assemblies for decompilation. They are then decompiled, and you can debug them as if they were your own source code files.

The future of .NET Reflector
As I have mentioned throughout this post, most of the new features in version 7 are exploratory steps and we will be watching feedback closely. Although we don't want to speculate now about any other new features or bugs that will or won't be fixed in the next few versions of .NET Reflector, Bart has mentioned in a previous post that there are lots of improvements we intend to make. We plan to do this with great care and without taking anything away from the simplicity of the core product. User experience is something that we pride ourselves on at Red Gate, and it is clear that Reflector is still a long way off our usual standards. We plan for the next few versions of Reflector to be worked on by some of our top usability specialists who have been involved with our other market-leading products such as the ANTS Profilers and SQL Compare. I reiterate the need for the really great simple mode in .NET Reflector to remain intact regardless of any other improvements we are planning to make. I really hope that you enjoy using some of the new features in version 7 and that Reflector continues to be your favourite .NET development tool for a long time to come.

    Read the article

  • How to perform ant path mapping in war task?

    - by eljenso
    I have several JAR file pattern sets, like

<patternset id="common.jars">
  <include name="external/castor-1.1.jar" />
  <include name="external/commons-logging-1.2.6.jar" />
  <include name="external/itext-2.0.4.jar" />
  ...
</patternset>

I also have a war task containing a lib element: ... Like this, however, I end up with a WEB-INF/lib containing the subdirectories from my patterns: WEB-INF/lib/external/castor-1.1.jar WEB-INF/lib/external/... Is there any way to flatten this, so the JAR files appear top-level under WEB-INF/lib, regardless of the directories specified in the patterns? I looked at mapper, but it seems you cannot use one inside lib.
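
One workaround I'm considering (untested; the property names are mine) is to flatten the jars into a staging directory first and point lib at that:

```xml
<!-- Copy the jars out of their subdirectories into a flat staging dir -->
<copy todir="${build.dir}/warlib" flatten="true">
  <fileset dir="${jars.dir}">
    <patternset refid="common.jars"/>
  </fileset>
</copy>

<!-- The war task then picks the jars up from the flat directory -->
<war destfile="${dist.dir}/app.war" webxml="web.xml">
  <lib dir="${build.dir}/warlib"/>
</war>
```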

    Read the article

  • Tool Review: Telerik JustDecompile

    - by Sam Abraham
    In the next few lines, I will be providing a brief review of Telerik's JustDecompile, a free .NET decompiler and assembly browser. Using Telerik's 2012 Q3 JustDecompile release, one can see many great features. First off, I loved the built-in options for loading .NET assemblies automatically using the Open->Load Framework menu option. Other options enable loading assemblies from the GAC, a XAP URL or locally from disk. The ability to create an "Assembly List" is quite handy for grouping and saving a "list" of DLLs to load. All loaded assemblies are shown in the left panel of a split-panel screen. Clicking an assembly expands all namespaces within. Drilling further to class level displays the actual source code in the right panel in either IL, C# or Visual Basic. In conclusion, JustDecompile has grown and quickly matured into an indispensable, handy tool for us developers. Telerik's effort in maintaining and updating JustDecompile, as well as the company's commitment to keeping it free, is much appreciated and valued.

    Read the article

  • Developing a computer system based on Nand2Tetris [on hold]

    - by Ryan
    I recently finished a book called Nand2Tetris (nand2tetris.org) where I built my own computer system from scratch, with its own machine language, assembly code, and a high-level language called Jack that's translated to Hack binary. However, I feel like the "computer" I built throughout the course of this book (called the Hack computer) is a bit too simple, for various reasons:
1) There are only two registers (D and A), whereas most computers have many more
2) Peripheral devices like mouse and keyboard have to be directly implemented
3) Peripheral devices use a pre-planned shared memory map to communicate with the CPU instead of using interrupts (which aren't covered at all)
4) Jack (the high-level language) code doesn't compile to assembly code directly; instead it compiles to an intermediate language, which in turn gets translated to assembly
5) There is no ROM or permanent storage device; everything is stored in RAM
6) No support for colored monitor, networking or sound

I would like to build a more complicated computer system now, based on what I've learned from Nand2Tetris. Does anyone know of any good resources or books to get started on this? (BTW, by computer system I mean software that can emulate the hardware of a virtual computer with its own unique instruction set.)

    Read the article

  • What is the best way to deal with 404s that are all trying to point to the same page that are from an external site?

    - by Lee
    I started getting 404s showing up in my Google Webmaster Tools from a site linking to a specific category, but with odd characters at the end of the URL. So something like this: http://example.com/category/puppies%EF%BC%9A.textwidget%E8%A6%81%E7%B4%A0%E7%B7%A8%E9%9B%86 Google Webmaster says that there are about 120 of these links, and I can imagine there will be more to come. What is the best way to handle these links from an SEO point of view? I have heard that 301-redirecting too many links at one time can cause Google to ding the site, but I don't want this site to continue posting broken links. Any help on this would be appreciated.
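
For what it's worth, the kind of rule I'm considering (assuming Apache with mod_rewrite; the pattern is a rough guess at matching the junk suffixes):

```apache
# 301 any /category/puppies URL with trailing junk back to the clean page
RewriteEngine On
RewriteRule ^category/puppies.+$ /category/puppies/ [R=301,L]
```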

    Read the article

  • .NET app - Should we use SQL Server and duplicate some reference data from an external Oracle DB? Or use Oracle and have a DB link?

    - by Daventry
    We're looking to migrate some existing Excel/Access processes into a new system which will provide the users with a Silverlight frontend to run and view the reports instead of using MS Access. The initial idea was to have SQL Server 2008 as the RDBMS. The problem is that we've got some static data such as country codes, counterparties, etc. which live in an existing Oracle DB. Since we do not want to duplicate that data (if possible), we were thinking of having a DB link between SQL Server and Oracle, but our firm does not allow that. So the options are either duplicate the data or use Oracle as the RDBMS – surprise, the firm does allow DB links between Oracle databases. The initial idea was also to use WCF RIA Services, Entity Framework, etc., which we're not sure play well with Oracle; that's why it was decided to go with SQL Server in the first place. Would you advise going for Oracle so that we can just link the static data? Or using SQL Server 2008 and replicating it because it's "safer" to stay within the Microsoft land? To use or not to use Entity Framework and WCF RIA Services at all? Regards. UPDATE: Thanks everyone for your answers. Nothing is set in stone yet. We'll try to import the data instead of linking, so that if the other DB goes down, our system can still carry on. We're likely to use SQL Server just because most developers are more experienced with it. Even if we used RIA Services, we can swap out the Data Access Layer and use other frameworks such as those mentioned below.
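
For the import route, a rough sketch of what I have in mind (table and schema names invented; the staging table would be loaded from the Oracle source by a scheduled ETL job):

```sql
-- Upsert the copied reference data and prune rows deleted at the source.
MERGE dbo.CountryCodes AS target
USING staging.CountryCodes AS source
    ON target.CountryCode = source.CountryCode
WHEN MATCHED THEN
    UPDATE SET target.CountryName = source.CountryName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CountryCode, CountryName)
    VALUES (source.CountryCode, source.CountryName)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
```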

    Read the article

  • Linking errors when building against Boost Unit Test Framework

    - by Rafid
    I am trying to use the Boost Unit Test Framework by building a stand-alone library, as detailed here: http://www.boost.org/doc/libs/1_35_0/libs/test/doc/components/utf/compilation.html So I created a VC library project containing the mentioned files and built it, and the build was successful. Then I created a test project and referenced the library project I just created, but when I tried to build it, I got the following linking errors:

1>Type.obj : error LNK2019: unresolved external symbol "bool __cdecl boost::test_tools::tt_detail::check_impl(class boost::test_tools::predicate_result const &,class boost::unit_test::lazy_ostream const &,class boost::unit_test::basic_cstring<char const >,unsigned __int64,enum boost::test_tools::tt_detail::tool_level,enum boost::test_tools::tt_detail::check_type,unsigned __int64,...)" (?check_impl@tt_detail@test_tools@boost@@YA_NAEBVpredicate_result@23@AEBVlazy_ostream@unit_test@3@V?$basic_cstring@$$CBD@63@_KW4tool_level@123@W4check_type@123@3ZZ) referenced in function "public: void __cdecl test1::test_method(void)" (?test_method@test1@@QEAAXXZ)
1>BoostUnitTestFramework.lib(framework.obj) : error LNK2019: unresolved external symbol "void __cdecl boost::debug::break_memory_alloc(long)" (?break_memory_alloc@debug@boost@@YAXJ@Z) referenced in function "void __cdecl boost::unit_test::framework::init(class boost::unit_test::test_suite * (__cdecl*)(int,char * * const),int,char * * const)" (?init@framework@unit_test@boost@@YAXP6APEAVtest_suite@23@HQEAPEAD@ZH0@Z)
1>BoostUnitTestFramework.lib(framework.obj) : error LNK2019: unresolved external symbol "void __cdecl boost::debug::detect_memory_leaks(bool)" (?detect_memory_leaks@debug@boost@@YAX_N@Z) referenced in function "void __cdecl boost::unit_test::framework::init(class boost::unit_test::test_suite * (__cdecl*)(int,char * * const),int,char * * const)" (?init@framework@unit_test@boost@@YAXP6APEAVtest_suite@23@HQEAPEAD@ZH0@Z)
1>BoostUnitTestFramework.lib(execution_monitor.obj) : error LNK2019: unresolved external symbol "bool __cdecl boost::debug::attach_debugger(bool)" (?attach_debugger@debug@boost@@YA_N_N@Z) referenced in function "public: int __cdecl boost::detail::system_signal_exception::operator()(unsigned int,struct _EXCEPTION_POINTERS *)" (??Rsystem_signal_exception@detail@boost@@QEAAHIPEAU_EXCEPTION_POINTERS@@@Z)
1>BoostUnitTestFramework.lib(execution_monitor.obj) : error LNK2019: unresolved external symbol "bool __cdecl boost::debug::under_debugger(void)" (?under_debugger@debug@boost@@YA_NXZ) referenced in function "public: int __cdecl boost::execution_monitor::execute(class boost::unit_test::callback0<int> const &)" (?execute@execution_monitor@boost@@QEAAHAEBV?$callback0@H@unit_test@2@@Z)
1>BoostUnitTestFramework.lib(unit_test_main.obj) : error LNK2019: unresolved external symbol "class boost::unit_test::test_suite * __cdecl init_unit_test_suite(int,char * * const)" (?init_unit_test_suite@@YAPEAVtest_suite@unit_test@boost@@HQEAPEAD@Z) referenced in function main
1>C:\Users\Rafid\Workspace\MyPhysics\Builds\VC10\Tests\Debug\Tests.exe : fatal error LNK1120: 6 unresolved externals

They seem to be mainly caused by the Boost debug library, but I can't see a reason why I should get linking errors, bearing in mind that the Boost debug library only needs to be included as header files rather than linked against as a library! Any ideas?!
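
If it helps narrow things down: the unresolved boost::debug::* symbols suggest the stand-alone library is missing debug.cpp (and possibly other sources) from its file list. As a sanity check, the header-only variant needs no separately built library at all – a minimal sketch (the module name is arbitrary):

```cpp
// Header-only Boost.Test usage: the implementation is compiled into this
// translation unit, so no stand-alone UTF library is linked.
#define BOOST_TEST_MODULE MyPhysicsTests
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_CASE(test1)
{
    BOOST_CHECK_EQUAL(2 + 2, 4);
}
```

If that builds and runs, the problem is almost certainly the source file list of the library project rather than the test code.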

    Read the article

  • Statically Compiling QWebKit 4.6.2

    - by geeko
    I tried to compile Qt+WebKit statically with MS VS 2008 and this worked:

C:\Qt\4.6.2> configure -release -static -opensource -no-fast -no-exceptions -no-accessibility -no-rtti -no-stl -no-opengl -no-openvg -no-incredibuild-xge -no-style-plastique -no-style-cleanlooks -no-style-motif -no-style-cde -no-style-windowsce -no-style-windowsmobile -no-style-s60 -no-gif -no-libpng -no-libtiff -no-libjpeg -no-libmng -no-qt3support -no-mmx -no-3dnow -no-sse -no-sse2 -no-iwmmxt -no-openssl -no-dbus -platform win32-msvc2008 -arch windows -no-phonon -no-phonon-backend -no-multimedia -no-audio-backend -no-script -no-scripttools -webkit -no-declarative

However, I get these errors whenever building a project that links statically to QtWebKit:

1>Creating library C:\Users\Geeko\Desktop\Qt\TestQ\Release\TestQ.lib and object C:\Users\Geeko\Desktop\Qt\TestQ\Release\TestQ.exp
1>QtWebKit.lib(PluginPackageWin.obj) : error LNK2019: unresolved external symbol _VerQueryValueW@16 referenced in function "class WebCore::String __cdecl WebCore::getVersionInfo(void * const,class WebCore::String const &)" (?getVersionInfo@WebCore@@YA?AVString@1@QAXABV21@@Z)
1>QtWebKit.lib(PluginPackageWin.obj) : error LNK2019: unresolved external symbol _GetFileVersionInfoW@16 referenced in function "private: bool __thiscall WebCore::PluginPackage::fetchInfo(void)" (?fetchInfo@PluginPackage@WebCore@@AAE_NXZ)
1>QtWebKit.lib(PluginPackageWin.obj) : error LNK2019: unresolved external symbol _GetFileVersionInfoSizeW@8 referenced in function "private: bool __thiscall WebCore::PluginPackage::fetchInfo(void)" (?fetchInfo@PluginPackage@WebCore@@AAE_NXZ)
1>QtWebKit.lib(PluginDatabaseWin.obj) : error LNK2019: unresolved external symbol __imp__PathRemoveFileSpecW@4 referenced in function "class WebCore::String __cdecl WebCore::safariPluginsDirectory(void)" (?safariPluginsDirectory@WebCore@@YA?AVString@1@XZ)
1>QtWebKit.lib(PluginDatabaseWin.obj) : error LNK2019: unresolved external symbol __imp__SHGetValueW@24 referenced in function "void __cdecl WebCore::addWindowsMediaPlayerPluginDirectory(class WTF::Vector &)" (?addWindowsMediaPlayerPluginDirectory@WebCore@@YAXAAV?$Vector@VString@WebCore@@$0A@@WTF@@@Z)
1>QtWebKit.lib(PluginDatabaseWin.obj) : error LNK2019: unresolved external symbol __imp__PathCombineW@12 referenced in function "void __cdecl WebCore::addMacromediaPluginDirectories(class WTF::Vector &)" (?addMacromediaPluginDirectories@WebCore@@YAXAAV?$Vector@VString@WebCore@@$0A@@WTF@@@Z)
1>C:\Users\Geeko\Desktop\Qt\TestQ\Release\TestQ.exe : fatal error LNK1120: 6 unresolved externals

Do I need to check something in the Qt project options? I have QtCore, QtGui, Network and WebKit checked.
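
The unresolved symbols are the Win32 version-info APIs (VerQueryValueW, GetFileVersionInfoW) and the shell path helpers (PathRemoveFileSpecW, SHGetValueW, PathCombineW), which live in version.lib and shlwapi.lib. A hedged guess at the missing piece – adding the two libraries in the consuming project's .pro file:

```
# Sketch only: static QtWebKit on Windows pulls in Win32 APIs that the
# default project template does not link against.
win32 {
    LIBS += -lversion -lshlwapi
}
```

(In a plain Visual Studio project, the equivalent is adding version.lib and shlwapi.lib under Linker -> Input -> Additional Dependencies.)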

    Read the article

< Previous Page | 127 128 129 130 131 132 133 134 135 136 137 138  | Next Page >