Search Results

Search found 9109 results on 365 pages for 'external authorization'.


  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g indexThis is the final article in the quick guide to Oracle IRM. If you've followed everything prior you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide as this next article is common to both. ContentsWhy this is the most important part... Understanding the classification and standard rights model Identifying business use cases Creating an effective IRM classification modelOne single classification across the entire businessA context for each and every possible granular use caseWhat makes a good context? Deciding on the use of roles in the context Reviewing the features and security for context roles Summary Why this is the most important part...Now the real work begins, installing and getting an IRM system running is as simple as following instructions. However to actually have an IRM technology easily protecting your most sensitive information without interfering with your users existing daily work flows and be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle have over 10 years of experience in helping customers and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports. No matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve.Securing the content at the earliest point possible and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results a more secure deployment. K.I.S.S. (Keep It Simple Stupid) Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience. Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them the value of the technology to both your business and them. Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments. 
No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in difficult to use and manage, and ultimately insecure solutions. The advice and information that follows was born with Martin and he's still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is from Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions. Understanding the classification and standard rights model The goal of any successful IRM deployment is to balance the increase in security the technology brings without over complicating the way people use secured content and avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try and print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between.A group of related documents The people that use the documents The roles that these people perform The rights that these people need to perform their role The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails. Identifying business use cases Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, financial documents. For this and every subsequent use case you must understand how users create and work with documents, to who they are distributed and how the recipients should interact with them. 
A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article) and then after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and if the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further. Creating an effective IRM classification model Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates. First task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity, the question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum... One single classification across the entire business Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in thinking about considering this. Document security classification decisions are simple. You only have one context to chose from! User provisioning is simple, just make sure everyone has a role in the only context in the business. Administration is very low, if you assign rights to groups from the business user repository you probably never have to touch IRM administration again. There are however some obvious downsides to this model.All users in have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is. You cannot delegate control of different documents to different parts of the business, this may not satisfy your regulatory requirements for the separation and delegation of duties. Changing a users role affects every single document ever secured. Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value. 
    Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today and imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.
    A context for each and every possible granular use case
    Now let's think about the total opposite of a single context design. What if you created a context for each and every defined business need and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity: restricted, confidential and top secret. Then let's say that each group and department needs to define access to information for both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group.
    Q1FY2010 Restricted Internal - Engineering Group 1 - Research
    Q1FY2010 Restricted Internal - Engineering Group 1 - Design
    Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
    Q1FY2010 Restricted External - Engineering Group 1 - Research
    Q1FY2010 Restricted External - Engineering Group 1 - Design
    Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
    Q1FY2010 Confidential Internal - Engineering Group 1 - Research
    Q1FY2010 Confidential Internal - Engineering Group 1 - Design
    Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
    Q1FY2010 Confidential External - Engineering Group 1 - Research
    Q1FY2010 Confidential External - Engineering Group 1 - Design
    Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
    Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
    Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
    Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
    Q1FY2010 Top Secret External - Engineering Group 1 - Research
    Q1FY2010 Top Secret External - Engineering Group 1 - Design
    Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing
    Now multiply the above 18 contexts by the 6 engineering groups, and remember you are creating/reviewing another 18 per group every 3 months, so after a year a single group alone has accumulated 72 contexts. What would be the advantages of such a complex classification model? You can satisfy very granular rights requirements; for example, only an authorized engineering group 1 researcher can create a top secret report for internal access, and his role will be reviewed on a very frequent basis. Your business may have very complex rights requirements, and mapping them directly to IRM may seem the obvious exercise. The disadvantages of such a classification model are significant. Huge administrative overhead: someone in the business must manage, review and administer each of these contexts, and if the engineering group had a single administrator, they would have 72 classifications to preside over each year. From an end user's perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts.
They may be able to print content from one but not another, be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology. Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights. Hard to understand who can do what with what. Imagine being the VP of engineering and as part of an internal security audit you are asked the question, "What rights to researchers have to our top secret information?". In this complex model the answer is not simple, it would depend on many roles in many contexts. Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable and too few contexts does not give fine enough granularity. What makes a good context? Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However there are some customers who know only of the government regulation that requires them to control access to certain types of information, they don't actually know where the documents are, how they are created or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which help you define a context. First ask these questions about a set of documentsWhat is the topic? Who are legitimate contributors on this topic? Who are the authorized readership? If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors, therefore it is better sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one, it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try and tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business. Deciding on the use of roles in the context Once you have decided on that first initial use case and a context to create let's look at the details you need to decide upon. 
For each context, identify; Administrative rolesBusiness owner, the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are the usually the person with the most at risk when sensitive information is lost. Point of contact, the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator. Context administrator, the person who will enact the decisions of the Business Owner. Sometimes the point of contact, sometimes a trusted IT person. Document related rolesContributors, the people who create and edit documents in this context. Reviewers, the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See later discussion on Published-work and Work-in-Progress) Readers, the people who read documents from this context. Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles obviously map directly to roles available in Oracle IRM. Reviewing the features and security for context roles At this point we have decided on a classification of information, understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First think why are you protecting the documents in the first place? It is to prevent the loss of leaking of information to the wrong people. To control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent unauthorized people from doing legitimate work. This is an important point, with IRM you can erect many barriers to prevent access to content yet too many restrictions and authorized users will often find ways to circumvent using the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content, remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information, and sales can only access sales data. Act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community. 
These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology which simplifies the process of assigning users the correct business roles. The big print issue... Printing is often an issue of contention, users love to print but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding about whether to give print rights. Oracle IRM sealed documents can contain watermarks that expose information about the user, time and location of access and the classification of the document. This information would reside in the printed copy making it easier to trace who printed it. Printed documents are slower to distribute in comparison to their digital counterparts, so time sensitive information in printed format may present a lower risk. Print activity is audited, therefore you can monitor and react to users abusing print rights. Summary In summary it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group is secured and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is to introducing the complexity to deliver them at the right level. In another article i'm working on I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.
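    To make the context model described above concrete, here is a minimal sketch in Python. It is purely illustrative and is not Oracle IRM's actual data model or API; it simply shows the separation the article describes: rights, roles and users live on the server against a context, while a sealed document carries only the server location, the context it is sealed to and a little document-only metadata.

        from dataclasses import dataclass, field

        @dataclass
        class Context:
            name: str                                     # e.g. "Engineering Top Secret"
            roles: dict = field(default_factory=dict)     # role name -> set of rights
            members: dict = field(default_factory=dict)   # user -> role name

        @dataclass
        class SealedDocument:
            server_url: str   # where rights are evaluated
            context: str      # classification the document is sealed to
            metadata: dict    # document-only metadata; no users or rights stored here

        def has_right(user, right, doc, contexts):
            ctx = contexts[doc.context]
            role = ctx.members.get(user)
            return role is not None and right in ctx.roles.get(role, set())

        # One right assignment covers every document sealed to the context.
        engineering = Context("Engineering Top Secret",
                              roles={"contributor": {"open", "edit", "print"},
                                     "reader": {"open"}},
                              members={"alice": "contributor", "bob": "reader"})
        contexts = {engineering.name: engineering}
        doc = SealedDocument("https://irm.example.com", engineering.name, {"title": "Design spec"})
        print(has_right("bob", "open", doc, contexts))   # True
        print(has_right("bob", "print", doc, contexts))  # False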

    Read the article

  • Accessing resources on localhost using domain credentials

    - by jas
    I'm trying to set up Team Foundation Server 2010, Sharepoint Server 2010 and Report Server 2008R2. I apologize for how long my question/problem is but I'm really lost on where to even look so am being as descriptive as possible in hopes that I'm making sense. The goal: Since developers can be inside or outside the firewall there needs to be a single http point of entry to TFS that works regardless of which side of the firewall you are and needs to work with external access to SharePoint and Report Server. Meaning we have it set up in DNS so buildserver.mydomain.com: points to the build service box which contains all of the services listed at the top of this post and specific services are defined/located by the port number. This is working great on every machine inside and out except for from the build server itself. All services must be able to work using external URLs. If I use http:// buildserver.mydomain.com:4800/tfs (the external URL) from my notebook which is behind the firewall I'm able to login with my domain credentials as expected. If the other developer points to the same URL from their home which isn't on the domain they are also able to login using their domain credentials. However if I am directly on buildserver and call SharePoint, TFS or Reporting Server from (i.e. http:// buildserver.mydomain.com:4800) itself using the external URL, I am prompted for a username and password. Entering my domain credentials results in another prompt to enter my credentials again. It will prompt three times regardless of which credentials are used (I have rights as a domain admin) and then after the third prompt directs me to a blank white page as though access was denied. There are no errors displayed on the page and nothing ends up in the event viewer. From buildserver if i use just the host name (the internal URL), then I'm prompted a single time for credentials and it works. i.e. http:// buildserver:4800/tfs works from the server itself. The behavior is identical for any service requiring authentication. Meaning from the box itself Sharepoint Central Admin, SharePoint WebApp, TFS, TFS Web Access, Report Server and Report Manager all fail using the external URL but will succeed if called using the interal URL. So the problem comes into play when configuring all of the services to work together. The only way to configure TFS is locally from the server which means I must point to the internal reporting server url (http:// buildserver:4800/reports and reportServer respectively instead of http:// buildserver.domainname.com:4800 like they need to be) since external URLs aren't working from itself. If I configure TFS to use the internal URL for Report Server then creating team projects or working in the SharePoint site for the team project fails for anyone not inside the domain since their machines have no idea who http:// buildserver:/reports even is or how to resolve them. I have configured Sharepoint with Alternate Access Mappings as well as set up Report Server to listen for external URLs. The external URLs simply aren't working when called from the server itself. I hope this makes sense. Thanks for taking the time to read this rather verbose plea for help.

    Read the article

  • Strategy for using snapshots to back up Ubuntu Linux server?

    - by MountainX
    I need some backup advice for my home file server. Here are the mount points, volume groups, logical volumes and used/total space of all the volumes on my Ubuntu 8.10 home file server:
    / vgA/lvRoot [7.5G/50G]
    /tmp vgB/lvTmp [195M/30G]
    /var vgB/lvVar [780M/30G]
    swap vgB/lvSwap [16.00 GB]
    /media1 vgC/lvMedia1 [400G/975G]
    /media2 vgC/lvMedia2 [75G/295G]
    /boot partition (no volume group) [95M/200M]
    /video partition (no volume group) [450G/950G]
    /backups vgD/lvBackupTarget [800G/925G]
    /home vgE/lvHome [85G/200G]
    I have just added a 2.0 TB external USB drive that I would like to use to backup everything. (It will be a close fit to get it all on one 2.0 TB drive. I actually have a 2nd external USB drive if needed.) I'd like to backup "/", /var, /media1, /media2 and /home. I'll deal with /boot and /video separately since they are not logical volumes. For all the logical volumes I'm anticipating taking snapshots and then copying those snapshots to the 2.0 TB external USB drive. I have never done a task like that before. If I do that, I could use the tutorial I found here: http://www.howtoforge.com/linux_lvm_snapshots
    My questions are:
    1. What is the best overall strategy? Is it LVM snapshots, as I'm assuming?
    2. How should I prepare, subdivide and mount the 2.0 TB external USB drive?
    2.a. Should I create one or more regular partitions, or should I create a physical volume with one or more logical volumes?
    2.b. Would it be advisable to exactly mirror the source pv/lv layout on the external drive, and if so, is this a good strategy?
    3. What's the best way to get the snapshots onto the external drive? dd?
    Even though this is a strategy question, feedback with actual commands is appreciated. I need step-by-step cookbook-style help because I don't do much server admin work. (Background: This is a home file server that I have rarely had to touch in about 2 years. It has done its job without much intervention. The really old PC that I used to back everything up recently failed, so I'm replacing that with the external USB drive(s) and I'd like to upgrade my backup strategy at the same time. Previously, I just copied stuff from /backups over to the other computer and that would not have made things very easy in a real restore situation. The /backups mount point contains backup copies of "most" of the important data on a file by file basis, but it does not contain copies of /boot, etc. BTW, the actual internal HDD that holds /backups is separate from the other storage devices.)
    EDIT: I'll propose a strategy... The idea came from a comment here: LVM mirroring VS RAID1 "LVM mirrors are for replication of a logical volume to a different physical volume. It's essentially meant to "move the data to a different disk". The mirror is then broken..." That would fit my requirements well. Here is an ideal situation:
    1. establish the LV mirror on the external drive
    2. break the link with the mirror
    3. create a (persistent) snapshot on the mirror
    4. after a week, resync the mirror with the original source and update the mirror
    5. break the link and create another snapshot on the mirror
    Obviously, the mirror will be like a weekly full backup. And the snapshots on the mirror will represent earlier points in time. If this would work and if it would be time efficient, it would give a nice full & differential type backup on the external drive based on LVM. I have not heard of a strategy like this before. Will it work? Could it be scripted? Thoughts? 
    EDIT 2: This article on Creating Portable DiskSafes With LoopbackFS And LVM Snapshots seems intriguing: http://www.howtoforge.com/creating-portable-disksafes-with-loopbackfs-and-lvm-snapshots Unfortunately, I don't understand exactly how to map those ideas to the strategy I'm proposing above. I'm going to ask this last bit as a separate question. I will leave my original question in place because I still desire feedback on the overall best strategy. At this moment I'm assuming it is LVM mirroring in the style of "Creating Portable DiskSafes with LVM Snapshots" but that might be wrong.
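    For question 3 above, here is one possible shape of the snapshot-then-copy step, sketched in Python around the standard LVM and rsync command-line tools. The volume name is taken from the /home volume listed above, but the snapshot size, mount points and external-drive path are assumptions; this illustrates the sequence, it is not a tested backup script.

        import subprocess

        def run(cmd):
            print("+", " ".join(cmd))
            subprocess.check_call(cmd)

        # Assumptions: vgE/lvHome is the source LV, /mnt/snap is an empty mount
        # point, and the external USB drive is already mounted at /mnt/usb-backup.
        # The snapshot size must be big enough to absorb writes to /home while
        # the copy runs.
        run(["lvcreate", "--snapshot", "--size", "5G",
             "--name", "lvHome_snap", "/dev/vgE/lvHome"])
        run(["mount", "-o", "ro", "/dev/vgE/lvHome_snap", "/mnt/snap"])
        run(["rsync", "-aAX", "--delete", "/mnt/snap/", "/mnt/usb-backup/home/"])
        run(["umount", "/mnt/snap"])
        run(["lvremove", "-f", "/dev/vgE/lvHome_snap"])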

    Read the article

  • Does a receiving mail server (the ultimate destination) see emails delivered directly to it vs. to an external relay which then forwards them to it?

    - by Matt
    Let's say my users have accounts on some mail server mail.example.com. I currently have my MX record set to mail.example.com and all is good. Now let's say I want to have mails initially delivered to an external service (e.g. Postini; note that this is not a Postini-specific question though). In the normal situation where my MX is set directly to my mail server mail.example.com, sending MTAs will of course look up my MX and send to mail.example.com. In my new situation I'd have my MX set to mx.othermailservice.com and emails would be received there. OtherEmailService.com will then relay the emails (while keeping the return-path header the same) to mail.example.com. Do the emails that arrive at mail.example.com after being relayed through the other service "look" any different from emails that are delivered to it directly, as would be the case when the MX was set to mail.example.com?
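    In general, yes, there is a visible difference: every MTA that handles a message is expected to add its own Received: trace header, so mail that has passed through the external relay will carry at least one extra hop in its headers even when the Return-Path is preserved. A quick, illustrative way to compare the two delivery paths (assuming Python is available on the receiving side):

        from email import message_from_string

        def show_hops(raw_message: str) -> None:
            """Print the Received: chain of a raw message, newest hop first."""
            msg = message_from_string(raw_message)
            for i, hop in enumerate(msg.get_all("Received", []), start=1):
                # keep just the "from ... by ..." part, drop the timestamp
                print(i, " ".join(hop.split(";")[0].split()))

        # Compare show_hops() output for a message delivered straight to
        # mail.example.com and one relayed via mx.othermailservice.com.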

    Read the article

  • AuthSub token from Google/YouTube API is always returned as invalid

    - by Miriam Raphael Roberts
    Anyone out there have experience with the YouTube/Google API? I am trying to login to Google/Youtube using clientLogin, retrieve an AuthSub token, exchange it for a multi-session token and then use it in our upload form. Just a note that we are not going to have other users logging into our (secure) website, this is for our use only (no multi-users). We just want a way to upload videos to our YT account via our own website without having to login/upload to YouTube. Ultimately, everything is dependent on the first step. My AuthSub token is always being returned as invalid (Error '403'). All the steps I used are below with username/password changed. Anyone have an insight on why my AuthSub is always invalid? I am spending an enormous amount of time trying to get this to work. STEP 1: Getting the authsub token from Youtube/Google POST /youtube/accounts/ClientLogin HTTP/1.1 User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4 Host: www.google.com Pragma: no-cache Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */* Content-Type:application/x-www-form-urlencoded Content-Length: 86 Email=MyGoogleUsername&Passwd=MyGooglePasswd&accountType=GOOGLE&service=youtube&source=Test RESPONSE RECEIVED: Auth=AIwbFAR99f3iACfkT-5PXCB-1tN4vlyP_1CiNZ8JOn6P-......yv4d4zeGRemNm4il1e-M6czgfDXAR0w9fQ YouTubeUser=MyYouTubeUsername CURL COMMAND USED: /usr/bin/curl -S -v --location https://www.google.com/youtube/accounts/ClientLogin --data Email=MyGoogleUsername&Passwd=MyGooglePasswd&accountType=GOOGLE&service=youtube&source=Test --header Content-Type:application/x-www-form-urlencoded STEP 2: Exchanging the AuthSub token for a multi-use token GET /accounts/AuthSubSessionToken HTTP/1.1 User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4 Host: www.google.com Pragma: no-cache Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */* Content-Type:application/x-www-form-urlencoded Authorization: AuthSub token="AIwbFASiRR3XDKs......p5Oy_VA_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ" Response received: 403 Invalid AuthSub token. curl command used: /usr/bin/curl -S -v --location https://www.google.com/accounts/AuthSubSessionToken --header Content-Type:application/x-www-form-urlencoded -H Authorization: AuthSub token="AIwbFAQR_4xG2g.....vp3BQZW5XEMyIj_wFozHSTEQ-BQRfYuIY-1CyqLeQ" STEP 3: Checking to see if the token is good/valid GET /accounts/AuthSubTokenInfo HTTP/1.1 User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4 Host: www.google.com Pragma: no-cache Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */* Content-Type:application/x-www-form-urlencoded Authorization: AuthSub token="AIwbFASiRR3XDKsNkaIoPaujN5RQhKs3u.....A_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ" Received response: 403 Invalid AuthSub token. 
curl command used: /usr/bin/curl -S -v --location https://www.google.com/accounts/AuthSubTokenInfo --header Content-Type:application/x-www-form-urlencoded -H Authorization: AuthSub token="AIwbFAQR_4xG2gHoAKDsNdFqdZdwWjGeNquOLpvp3BQZW5XEMyIj_wFozHSTEQ-BQRfYuIY-1CyqLeQ" STEP 4: Trying to get the upload token using the authsub token POST /action/GetUploadToken HTTP/1.1 User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4 Host: gdata.youtube.com Pragma: no-cache Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */* Content-Type:application/atom+xml Authorization: AuthSub token="AIwbFASiRR3XDKsNkaIoPaujN5RQhp5Oy_VA_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ" X-Gdata-Key:key="AI39si5EQyo-TZPFAnmGjxJGFKpxd_7a6hEERh_3......R82AShoQ" Content-Length:0 GData-Version:2 Recevied Response: 401 Token invalid - Invalid AuthSub token. Curl command used: /usr/bin/curl -S -v --location http://gdata.youtube.com/action/GetUploadToken -H Content-Type:application/atom+xml -H Authorization: AuthSub token="AIwbFASiRR3XDKs....sYDp5Oy_VA_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ" -H X-Gdata-Key:key="AI39si5EQyo-TZPFAnmGjxJGF......Kpxd6dN2J1oHFQYTj_7a6hEERh_3E48R82AShoQ" -H Content-Length:0 -H GData-Version:2
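    For reference, here is the same first step rewritten with Python's requests library (an assumption; any HTTP client would do). One observation, offered as a possibility rather than a confirmed fix: the token returned by ClientLogin in step 1 is a ClientLogin token, and ClientLogin tokens were normally presented to GData services with an "Authorization: GoogleLogin auth=..." header, whereas "AuthSub token=..." headers expect a token obtained from the browser-based AuthSubRequest flow. Mixing the two schemes could explain why every call above reports the token as invalid.

        import requests

        LOGIN_URL = "https://www.google.com/youtube/accounts/ClientLogin"

        resp = requests.post(LOGIN_URL, data={
            "Email": "MyGoogleUsername",
            "Passwd": "MyGooglePasswd",
            "accountType": "GOOGLE",
            "service": "youtube",
            "source": "Test",
        })
        tokens = dict(line.split("=", 1) for line in resp.text.splitlines() if "=" in line)
        auth_token = tokens["Auth"]  # a ClientLogin token, not an AuthSub token

        headers = {
            # GoogleLogin, not AuthSub, is the scheme ClientLogin tokens were used with
            "Authorization": "GoogleLogin auth=" + auth_token,
            "X-GData-Key": "key=YOUR_DEVELOPER_KEY",   # placeholder developer key
            "GData-Version": "2",
        }
        # GetUploadToken would then be called with these headers; it also expects
        # the <entry> XML with the video metadata as the request body (omitted here).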

    Read the article

  • How to share a folder using the Ubuntu One Web API

    - by Mario César
    I have successfully implement OAuth Authorization with Ubuntu One in Django, Here are my views and models: https://gist.github.com/mariocesar/7102729 Right now, I can use the file_storage ubuntu api, for example the following, will ask if Path exists, then create the directory, and then get the information on the created path to probe is created. >>> user.oauth_access_token.get_file_storage(volume='/~/Ubuntu One', path='/Websites/') <Response [404]> >>> user.oauth_access_token.put_file_storage(volume='/~/Ubuntu One', path='/Websites/', data={"kind": "directory"}) <Response [200]> >>> user.oauth_access_token.get_file_storage(volume='/~/Ubuntu One', path='/Websites/').json() {u'content_path': u'/content/~/Ubuntu One/Websites', u'generation': 10784, u'generation_created': 10784, u'has_children': False, u'is_live': True, u'key': u'MOQgjSieTb2Wrr5ziRbNtA', u'kind': u'directory', u'parent_path': u'/~/Ubuntu One', u'path': u'/Websites', u'resource_path': u'/~/Ubuntu One/Websites', u'volume_path': u'/volumes/~/Ubuntu One', u'when_changed': u'2013-10-22T15:34:04Z', u'when_created': u'2013-10-22T15:34:04Z'} So it works, it's great I'm happy about that. But I can't share a folder. My question is? How can I share a folder using the api? I found no web api to do this, the Ubuntu One SyncDaemon tool is the only mention on solving this https://one.ubuntu.com/developer/files/store_files/syncdaemontool#ubuntuone.platform.tools.SyncDaemonTool.offer_share But I'm reluctant to maintain a DBUS and a daemon in my server for every Ubuntu One connection I have authorization for. Any one have an idea how can I using a web API to programmatically share a folder? even better using the OAuth authorization tokens that I already have.

    Read the article

  • Organizations & Architecture UNISA Studies – Chap 7

    - by MarkPearl
    Learning Outcomes Name different device categories Discuss the functions and structure of I/.O modules Describe the principles of Programmed I/O Describe the principles of Interrupt-driven I/O Describe the principles of DMA Discuss the evolution characteristic of I/O channels Describe different types of I/O interface Explain the principles of point-to-point and multipoint configurations Discuss the way in which a FireWire serial bus functions Discuss the principles of InfiniBand architecture External Devices An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into 3 categories… Human readable – e.g. video display Machine readable – e.g. magnetic disk Communications – e.g. wifi card I/O Modules An I/O module has two major functions… Interface to the processor and memory via the system bus or central switch Interface to one or more peripheral devices by tailored data links Module Functions The major functions or requirements for an I/O module fall into the following categories… Control and timing Processor communication Device communication Data buffering Error detection I/O function includes a control and timing requirement, to coordinate the flow of traffic between internal resources and external devices. Processor communication involves the following… Command decoding Data Status reporting Address recognition The I/O device must be able to perform device communication. This communication involves commands, status information, and data. An essential task of an I/O module is data buffering due to the relative slow speeds of most external devices. An I/O module is often responsible for error detection and for subsequently reporting errors to the processor. I/O Module Structure An I/O module functions to allow the processor to view a wide range of devices in a simple minded way. The I/O module may hide the details of timing, formats, and the electro mechanics of an external device so that the processor can function in terms of simple reads and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor. There are 3 techniques are possible for I/O operations Programmed I/O Interrupt[t I/O DMA Access Programmed I/O When a processor is executing a program and encounters an instruction relating to I/O it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module will perform the requested action and then set the appropriate bits in the I/O status register. The I/O module takes no further actions to alert the processor. I/O Commands To execute an I/O related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command. 
There are four types of I/O commands that an I/O module may receive when it is addressed by a processor… Control – used to activate a peripheral and tell it what to do Test – Used to test various status conditions associated with an I/O module and its peripherals Read – Causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer Write – Causes the I/O module to take an item of data form the data bus and subsequently transmit that data item to the peripheral The main disadvantage of this technique is it is a time consuming process that keeps the processor busy needlessly I/O Instructions With programmed I/O there is a close correspondence between the I/O related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute the instructions. Typically there will be many I/O devices connected through I/O modules to the system – each device is given a unique identifier or address – when the processor issues an I/O command, the command contains the address of the address of the desired device, thus each I/O module must interpret the address lines to determine if the command is for itself. When the processor, main memory and I/O share a common bus, two modes of addressing are possible… Memory mapped I/O Isolated I/O (for a detailed explanation read page 245 of book) The advantage of memory mapped I/O over isolated I/O is that it has a large repertoire of instructions that can be used, allowing more efficient programming. The disadvantage of memory mapped I/O over isolated I/O is that valuable memory address space is sued up. Interrupts driven I/O Interrupt driven I/O works as follows… The processor issues an I/O command to a module and then goes on to do some other useful work The I/O module will then interrupts the processor to request service when is is ready to exchange data with the processor The processor then executes the data transfer and then resumes its former processing Interrupt Processing The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operations the following sequence of hardware events occurs… The device issues an interrupt signal to the processor The processor finishes execution of the current instruction before responding to the interrupt The processor tests for an interrupt – determines that there is one – and sends an acknowledgement signal to the device that issues the interrupt. The acknowledgement allows the device to remove its interrupt signal The processor now needs to prepare to transfer control to the interrupt routine. To begin, it needs to save information needed to resume the current program at the point of interrupt. The minimum information required is the status of the processor and the location of the next instruction to be executed. The processor now loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt. It also saves the values of the process registers because the Interrupt operation may modify these The interrupt handler processes the interrupt – this includes examination of status information relating to the I/O operation or other event that caused an interrupt When interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers Finally, the PSW and program counter values from the stack are restored. 
    Design Issues
    Two design issues arise in implementing interrupt I/O:
    1. Because there will be multiple I/O modules, how does the processor determine which device issued the interrupt?
    2. If multiple interrupts have occurred, how does the processor decide which one to process?
    Addressing device recognition, 4 general categories of techniques are in common use:
    Multiple interrupt lines
    Software poll
    Daisy chain
    Bus arbitration
    For a detailed explanation of these approaches read page 250 of the textbook. Interrupt driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks:
    The I/O transfer rate is limited by the speed with which the processor can test and service a device
    The processor is tied up in managing an I/O transfer; a number of instructions must be executed for each I/O transfer
    Direct Memory Access
    When large volumes of data are to be moved, an efficient technique is direct memory access (DMA).
    DMA Function
    DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor. It needs to do this to transfer data to and from memory over the system bus. DMA must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (most common – referred to as cycle stealing). When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending the DMA module the following information:
    Whether a read or write is requested, using the read or write control line between the processor and the DMA module
    The address of the I/O device involved, communicated on the data lines
    The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register
    The number of words to be read or written, communicated via the data lines and stored in the data count register
    The processor then continues with other work; it delegates the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor, thus the processor is involved only at the beginning and end of the transfer.
    I/O Channels and Processors
    Characteristics of I/O Channels
    As one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions – such instructions are stored in main memory to be executed by a special purpose processor in the I/O channel itself. Two types of I/O channels are common:
    A selector channel controls multiple high-speed devices.
    A multiplexor channel can handle I/O with multiple devices at the same time, exchanging characters with each as fast as possible. 
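    The following is a toy model, in plain Python and purely illustrative (no real hardware or driver code), of the four pieces of information the processor hands to the DMA module and of the module interrupting the processor only once the whole block has been transferred:

        class DMAModule:
            def __init__(self, memory):
                self.memory = memory          # system memory, modelled as a list

            def request(self, write, device, start_address, word_count):
                # write         -> state of the read/write control line
                # device        -> I/O device address sent on the data lines
                # start_address -> kept in the DMA module's address register
                # word_count    -> kept in the DMA module's data count register
                for offset in range(word_count):
                    if write:
                        device.put(self.memory[start_address + offset])
                    else:
                        self.memory[start_address + offset] = device.get()
                print("DMA transfer complete, interrupt raised")  # processor involved only at start and end

        class FakeDevice:
            def __init__(self, data=None):
                self.data = list(data or [])
            def get(self):
                return self.data.pop(0)
            def put(self, word):
                self.data.append(word)

        memory = [0] * 16
        dma = DMAModule(memory)
        dma.request(write=False, device=FakeDevice([7, 8, 9]), start_address=4, word_count=3)
        print(memory[4:7])   # [7, 8, 9]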
    The external interface: FireWire and InfiniBand
    Types of Interfaces
    One major characteristic of the interface is whether it is serial or parallel:
    Parallel interface – there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously
    Serial interface – there is only one line used to transmit data, and bits must be transmitted one at a time
    With new generation serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms the dialogue may look as follows:
    1. The I/O module sends a control signal requesting permission to send data
    2. The peripheral acknowledges the request
    3. The I/O module transfers data
    4. The peripheral acknowledges receipt of data
    For a detailed explanation of FireWire and InfiniBand technology read pages 264–270 of the textbook

    Read the article

  • Is there any reason not to go directly from client-side Javascript to a database?

    - by Chris Smith
    So, let's say I'm going to build a Stack Exchange clone and I decide to use something like CouchDB as my backend store. If I use their built-in authentication and database-level authorization, is there any reason not to allow the client-side Javascript to write directly to the publicly available CouchDB server? Since this is basically a CRUD application and the business logic consists of "Only the author can edit their post" I don't see much of a need to have a layer between the client-side stuff and the database. I would simply use validation on the CouchDB side to make sure someone isn't putting in garbage data and make sure that permissions are set properly so that users can only read their own _user data. The rendering would be done client-side by something like AngularJS. In essence you could just have a CouchDB server and a bunch of "static" pages and you're good to go. You wouldn't need any kind of server-side processing, just something that could serve up the HTML pages. Opening my database up to the world seems wrong, but in this scenario I can't think of why as long as permissions are set properly. It goes against my instinct as a web developer, but I can't think of a good reason. So, why is this a bad idea? EDIT: Looks like there is a similar discussion here: Writing Web "server less" applications EDIT: Awesome discussion so far, and I appreciate everyone's feedback! I feel like I should add a few generic assumptions instead of calling out CouchDB and AngularJS specifically. So let's assume that: The database can authenticate users directly from its hidden store All database communication would happen over SSL Data validation can (but maybe shouldn't?) be handled by the database The only authorization we care about other than admin functions is someone only being allowed to edit their own post We're perfectly fine with everyone being able to read all data (EXCEPT user records which may contain password hashes) Administrative functions would be restricted by database authorization No one can add themselves to an administrator role The database is relatively easy to scale There is little to no true business logic; this is a basic CRUD app

    Read the article

  • Internal Data Masking

    - by ACShorten
    By default, the data in the product is unmasked for authorized users. If particular data within the object is considered a candidate for data masking, then the masking capabilities within the product can be used to mask the data in an appropriate fashion. The inbuilt Data Masking capabilities of the Oracle Utilities Application Framework use a number of configuration elements:
    An algorithm, of type F1-MASK, is specified to configure the elements of the data masking, including the masking character, the number of suffix characters left unmasked, characters to ignore in the string, the application service, the security type and the authorization levels applicable to the mask.
    A Data Masking Feature Configuration is created to define where the algorithm applies. The specification of the feature allows you to define the fields to mask using the configured algorithm. The algorithm can be attached to a schema field, table field, characteristic, search field and even a child record (such as an identifier).
    The appropriate user groups are then connected to the application services with the appropriate service types and level to indicate whether the masking applies to the user group or not.
    For example, say there is a field called CCNBR in the product which holds the credit card details. I would create an algorithm, say CMformatCC, to mask the credit card number with the last few digits left unmasked (as the standard in most systems dictates). I would specify on the Field Mask the following: field="CCNBR", alg="CMformatCC". On the algorithm CMformatCC, I would specify the mask, application service, security type and the authorization level at which users would see the credit card unmasked. To finish the configuration off and implement it, I would connect the appropriate user groups to the application service I specified, with the security type and appropriate authorization level for that group. Whenever a user accesses the CCNBR field on any of the maintenance screens, searches and other screens that use the CCNBR metadata definition, the value would then be masked according to the user group that the user is a member of. Refer to the documentation supplied with the F1-MASK algorithm type entry for more examples of what is possible.
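    As an illustration only (this is not the F1-MASK algorithm itself, just a small Python sketch of the behaviour described above), a mask built from a masking character, a count of trailing characters left unmasked and a set of characters to ignore would behave like this:

        def mask_value(value, mask_char="*", unmasked_suffix=4, ignore_chars="-"):
            """Mask a string, keeping ignored characters and the last few characters visible."""
            # positions of the characters that are eligible for masking
            maskable = [i for i, c in enumerate(value) if c not in ignore_chars]
            keep = set(maskable[-unmasked_suffix:]) if unmasked_suffix else set()
            out = []
            for i, c in enumerate(value):
                if c in ignore_chars or i in keep:
                    out.append(c)          # ignored characters and the suffix stay visible
                else:
                    out.append(mask_char)  # everything else is replaced by the mask character
            return "".join(out)

        print(mask_value("4111-1111-1111-1111"))  # ****-****-****-1111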

    Read the article

  • fftw in Visual Studio?

    - by drhorrible
    I'm trying to link my project with fftw and so far, I've gotten it to compile, but not link. As the site said, I generated all the .lib files (even though I'm only using double precision), and copied them to C:\Program Files\Microsoft Visual Studio 9.0\VC\lib, the .h file to C:\Program Files\Microsoft Visual Studio 9.0\VC\include and the .dll to C:\windows\system32. I've copied the tutorial program, and the exact error I am getting is: 1>hw10.obj : error LNK2019: unresolved external symbol __imp__fftw_free referenced in function "bool __cdecl test(void)" (?test@@YA_NXZ) 1>hw10.obj : error LNK2019: unresolved external symbol __imp__fftw_destroy_plan referenced in function "bool __cdecl test(void)" (?test@@YA_NXZ) 1>hw10.obj : error LNK2019: unresolved external symbol __imp__fftw_execute referenced in function "bool __cdecl test(void)" (?test@@YA_NXZ) 1>hw10.obj : error LNK2019: unresolved external symbol __imp__fftw_plan_dft_1d referenced in function "bool __cdecl test(void)" (?test@@YA_NXZ) 1>hw10.obj : error LNK2019: unresolved external symbol __imp__fftw_malloc referenced in function "bool __cdecl test(void)" (?test@@YA_NXZ) So, what could be wrong with my project setup? Thanks!

    Read the article

  • Using XML to generate SAP ABAP and/or SAPScript?

    - by Rob
    Has anyone got examples and/or experience of generating SAP ABAP or SAPScript form code from XML that came from an external application? This would help:
    - creation of SAP-based applications in a data-driven way, by automating the knowledge to do so from the XML exported by an external application
    - automated input of knowledge from an external application into SAP applications, rather than manually copying between systems
    - enabling 3rd-party external tools to be used to create data, perhaps in an easier-to-use way than could be done in SAP, or where there is already heavy investment in training with these third-party tools rather than SAP, or where the employment market favours staff with knowledge of these tools
    - enabling creation of data for multiple purposes and views: those in SAP and outside SAP
    - inter-operability of SAP with 3rd-party external tools
    I'm looking for:
    - experiences as to the feasibility
    - tools, e.g. parsers, XSLT etc.
    - examples
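    As a sketch of the general idea only (the XML layout and the generated ABAP below are invented for illustration; they are not an SAP-provided schema or a complete program), a small Python script can walk an exported XML definition and emit ABAP source from templates:

        import xml.etree.ElementTree as ET

        xml_spec = """
        <report name="ZSALES_SUMMARY">
          <field table="VBAK" column="VBELN" label="Sales Document"/>
          <field table="VBAK" column="NETWR" label="Net Value"/>
        </report>
        """

        def generate_abap(xml_text):
            """Turn the hypothetical <report> definition into ABAP-style source lines."""
            root = ET.fromstring(xml_text)
            lines = ["REPORT {}.".format(root.get("name"))]
            for f in root.findall("field"):
                lines.append("* {}".format(f.get("label")))
                lines.append("WRITE: / {}-{}.".format(f.get("table"), f.get("column")))
            return "\n".join(lines)

        print(generate_abap(xml_spec))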

    Read the article

  • How to perform ant path mapping in war task?

    - by eljenso
    I have several JAR file pattern sets, like:
    <patternset id="common.jars">
        <include name="external/castor-1.1.jar" />
        <include name="external/commons-logging-1.2.6.jar" />
        <include name="external/itext-2.0.4.jar" />
        ...
    </patternset>
    I also have a war task containing a lib element that pulls in these pattern sets. Like this, however, I end up with a WEB-INF/lib containing the subdirectories from my patterns:
    WEB-INF/lib/external/castor-1.1.jar
    WEB-INF/lib/external/...
    Is there any way to flatten this, so the JAR files appear top-level under WEB-INF/lib, regardless of the directories specified in the patterns? I looked at mapper but it seems you cannot use them inside lib.

    Read the article

  • Turning off ASP.Net WebForms authentication for one sub-directory

    - by Keith
    I have a large enterprise application containing both WebForms and MVC pages. It has existing authentication and authorisation settings that I don't want to change. The WebForms authentication is configured in the web.config: <authentication mode="Forms"> <forms blah... blah... blah /> </authentication> <authorization> <deny users="?" /> </authorization> Fairly standard so far. I have a REST service that is part of this big application and I want to use HTTP authentication instead for this one service. So, when a user attempts to get JSON data from the REST service it returns an HTTP 401 status and a WWW-Authenticate header. If they respond with a correctly formed HTTP Authorization response it lets them in. The problem is that WebForms overrides this at a low level - if you return 401 (Unauthorised) it overrides that with a 302 (redirection to login page). That's fine in the browser but useless for a REST service. I want to turn off the authentication setting in the web.config: <location path="rest"> <system.web> <authentication mode="None" /> <authorization><allow users="?" /></authorization> </system.web> </location> The authorisation bit works fine, but when I try to change the authentication I get an exception: It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. I'm configuring this at application level though - it's in the root web.config How do I override the authentication so that all of the rest of the site uses WebForms authentication and this one directory uses none? This is similar to another question: 401 response code for json requests with ASP.NET MVC, but I'm not looking for the same solution - I don't want to just remove the WebForms authentication and add new custom code globally, there's far to much risk and work involved. I want to change just the one directory in configuration.

    Read the article

  • What is the best way to deal with 404s from an external site that all try to point to the same page?

    - by Lee
    I started getting 404s showing up in my Google Webmaster Tools from a site linking to a specific category but with odd characters at the end of the URL. Something like this: http://example.com/category/puppies%EF%BC%9A.textwidget%E8%A6%81%E7%B4%A0%E7%B7%A8%E9%9B%86 Google Webmaster Tools says that there are about 120 of these links, and I can imagine there will be more to come. What is the best way to handle these links from an SEO point of view? I have heard that 301 redirecting too many links at one time can cause Google to ding the site, but I don't want this site to continue posting broken links. Any help on this would be appreciated.

    Read the article

  • .NET app - Should we use SQL Server and duplicate some reference data from an external Oracle DB? Or use Oracle and have a DB link?

    - by Daventry
    We're looking to migrate some existing Excel/Access processes into a new system which will provide the users with a Silverlight frontend to run and view the reports instead of using MS Access. The initial idea was to have SQL Server 2008 as the RDBMS. The problem is that we've got some static data such as country codes, counterparties, etc., which live in an existing Oracle DB. Since we do not want to duplicate that data (if possible), we were thinking of having a DB link between SQL Server and Oracle, but our firm does not allow that. So the options are either to duplicate the data or to use Oracle as the RDBMS - surprise, the firm does allow DB links between Oracle databases. The initial idea was also to use WCF RIA Services, Entity Framework, etc., which we're not sure play well with Oracle; that's why it was decided to go with SQL Server in the first place. Would you advise going with Oracle so that we can just link the static data? Or using SQL Server 2008 and replicating it because it's "safer" to stay within Microsoft land? To use or not to use Entity Framework and WCF RIA Services at all? Regards. UPDATE: Thanks everyone for your answers. Nothing is set in stone yet. We'll try to import the data instead of linking, so that if the other DB goes down, our system can still carry on. We're likely to use SQL Server just because most developers are more experienced with it. Even if we used RIA Services, we can swap out the Data Access Layer and use other frameworks such as those mentioned below.

    Read the article

  • Linking errors when building against Boost Unit Test Framework

    - by Rafid
    I am trying to use Boost Unit Test Framework by building a stand alone library as detailed here: http://www.boost.org/doc/libs/1_35_0/libs/test/doc/components/utf/compilation.html So I created a VC library project containing the mentioned files and build it and it was successful. Then I created a test project and referenced the library project I just created, but when I tried to build it, I got the following linking errors: 1>Type.obj : error LNK2019: unresolved external symbol "bool __cdecl boost::test_tools::tt_detail::check_impl(class boost::test_tools::predicate_result const &,class boost::unit_test::lazy_ostream const &,class boost::unit_test::basic_cstring<char const >,unsigned __int64,enum boost::test_tools::tt_detail::tool_level,enum boost::test_tools::tt_detail::check_type,unsigned __int64,...)" (?check_impl@tt_detail@test_tools@boost@@YA_NAEBVpredicate_result@23@AEBVlazy_ostream@unit_test@3@V?$basic_cstring@$$CBD@63@_KW4tool_level@123@W4check_type@123@3ZZ) referenced in function "public: void __cdecl test1::test_method(void)" (?test_method@test1@@QEAAXXZ) 1>BoostUnitTestFramework.lib(framework.obj) : error LNK2019: unresolved external symbol "void __cdecl boost::debug::break_memory_alloc(long)" (?break_memory_alloc@debug@boost@@YAXJ@Z) referenced in function "void __cdecl boost::unit_test::framework::init(class boost::unit_test::test_suite * (__cdecl*)(int,char * * const),int,char * * const)" (?init@framework@unit_test@boost@@YAXP6APEAVtest_suite@23@HQEAPEAD@ZH0@Z) 1>BoostUnitTestFramework.lib(framework.obj) : error LNK2019: unresolved external symbol "void __cdecl boost::debug::detect_memory_leaks(bool)" (?detect_memory_leaks@debug@boost@@YAX_N@Z) referenced in function "void __cdecl boost::unit_test::framework::init(class boost::unit_test::test_suite * (__cdecl*)(int,char * * const),int,char * * const)" (?init@framework@unit_test@boost@@YAXP6APEAVtest_suite@23@HQEAPEAD@ZH0@Z) 1>BoostUnitTestFramework.lib(execution_monitor.obj) : error LNK2019: unresolved external symbol "bool __cdecl boost::debug::attach_debugger(bool)" (?attach_debugger@debug@boost@@YA_N_N@Z) referenced in function "public: int __cdecl boost::detail::system_signal_exception::operator()(unsigned int,struct _EXCEPTION_POINTERS *)" (??Rsystem_signal_exception@detail@boost@@QEAAHIPEAU_EXCEPTION_POINTERS@@@Z) 1>BoostUnitTestFramework.lib(execution_monitor.obj) : error LNK2019: unresolved external symbol "bool __cdecl boost::debug::under_debugger(void)" (?under_debugger@debug@boost@@YA_NXZ) referenced in function "public: int __cdecl boost::execution_monitor::execute(class boost::unit_test::callback0<int> const &)" (?execute@execution_monitor@boost@@QEAAHAEBV?$callback0@H@unit_test@2@@Z) 1>BoostUnitTestFramework.lib(unit_test_main.obj) : error LNK2019: unresolved external symbol "class boost::unit_test::test_suite * __cdecl init_unit_test_suite(int,char * * const)" (?init_unit_test_suite@@YAPEAVtest_suite@unit_test@boost@@HQEAPEAD@Z) referenced in function main 1>C:\Users\Rafid\Workspace\MyPhysics\Builds\VC10\Tests\Debug\Tests.exe : fatal error LNK1120: 6 unresolved externals They seem to be mainly caused by Boost debug library, but I can't see a reason why I should get linking errors putting in mind that Boost debug library only need to be included as header files, rather than linking against as a library! Any ideas?!
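    The unresolved boost::debug symbols referenced from the library's own framework.obj and execution_monitor.obj suggest debug.cpp was not among the files compiled into the stand-alone library, and the missing init_unit_test_suite means the test project never declares a test module. Below is a minimal sketch of a test translation unit for the static-library variant, assuming the library project really does contain every .cpp file listed on that compilation page (including debug.cpp and unit_test_main.cpp); the module name and test are made up.

        // Exactly one translation unit in the test project defines the module name.
        // For the static-library build this generates the init_unit_test_suite()
        // that the library's main() in unit_test_main.cpp expects to find.
        #define BOOST_TEST_MODULE MyPhysicsTests
        #include <boost/test/unit_test.hpp>

        BOOST_AUTO_TEST_CASE(test1)
        {
            BOOST_CHECK_EQUAL(2 + 2, 4);
        }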

    Read the article

  • Statically Compiling QWebKit 4.6.2

    - by geeko
    I tried to compile Qt+Webkit statically with MS VS 2008 and this worked. C:\Qt\4.6.2configure -release -static -opensource -no-fast -no-exceptions -no-accessibility -no-rtti -no-stl -no-opengl -no-openvg -no-incredibuild-xge -no-style-plastique -no-style-cleanlooks -no-style-motif -no-style-cde -no-style-windowsce -no-style-windowsmobile -no-style-s60 -no-gif -no-libpng -no-libtiff -no-libjpeg -no-libmng -no-qt3support -no-mmx -no-3dnow -no-sse -no-sse2 -no-iwmmxt -no-openssl -no-dbus -platform win32-msvc2008 -arch windows -no-phonon -no-phonon-backend -no-multimedia -no-audio-backend -no-script -no-scripttools -webkit -no-declarative However, I get these errors whenever building a project that links statically to QWebKit: 1 Creating library C:\Users\Geeko\Desktop\Qt\TestQ\Release\TestQ.lib and object C:\Users\Geeko\Desktop\Qt\TestQ\Release\TestQ.exp 1QtWebKit.lib(PluginPackageWin.obj) : error LNK2019: unresolved external symbol _VerQueryValueW@16 referenced in function "class WebCore::String __cdecl WebCore::getVersionInfo(void * const,class WebCore::String const &)" (?getVersionInfo@WebCore@@YA?AVString@1@QAXABV21@@Z) 1QtWebKit.lib(PluginPackageWin.obj) : error LNK2019: unresolved external symbol _GetFileVersionInfoW@16 referenced in function "private: bool __thiscall WebCore::PluginPackage::fetchInfo(void)" (?fetchInfo@PluginPackage@WebCore@@AAE_NXZ) 1QtWebKit.lib(PluginPackageWin.obj) : error LNK2019: unresolved external symbol _GetFileVersionInfoSizeW@8 referenced in function "private: bool __thiscall WebCore::PluginPackage::fetchInfo(void)" (?fetchInfo@PluginPackage@WebCore@@AAE_NXZ) 1QtWebKit.lib(PluginDatabaseWin.obj) : error LNK2019: unresolved external symbol _imp_PathRemoveFileSpecW@4 referenced in function "class WebCore::String __cdecl WebCore::safariPluginsDirectory(void)" (?safariPluginsDirectory@WebCore@@YA?AVString@1@XZ) 1QtWebKit.lib(PluginDatabaseWin.obj) : error LNK2019: unresolved external symbol _imp_SHGetValueW@24 referenced in function "void __cdecl WebCore::addWindowsMediaPlayerPluginDirectory(class WTF::Vector &)" (?addWindowsMediaPlayerPluginDirectory@WebCore@@YAXAAV?$Vector@VString@WebCore@@$0A@@WTF@@@Z) 1QtWebKit.lib(PluginDatabaseWin.obj) : error LNK2019: unresolved external symbol _imp_PathCombineW@12 referenced in function "void __cdecl WebCore::addMacromediaPluginDirectories(class WTF::Vector &)" (?addMacromediaPluginDirectories@WebCore@@YAXAAV?$Vector@VString@WebCore@@$0A@@WTF@@@Z) 1C:\Users\Geeko\Desktop\Qt\TestQ\Release\TestQ.exe : fatal error LNK1120: 6 unresolved externals Do I need to check something in the Qt project options ? I have QtCore, QtGui, Network and WebKit checked.
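    All six unresolved symbols belong to the Win32 version-info and shell-path APIs rather than to Qt itself, so the missing pieces are probably the version.lib and shlwapi.lib import libraries that the statically linked QtWebKit now expects the application to supply. A hedged sketch for the application's .pro file (the win32 scope is only a precaution for cross-platform projects):

        # VerQueryValue / GetFileVersionInfo[Size] live in version.lib;
        # PathRemoveFileSpec / PathCombine / SHGetValue live in shlwapi.lib.
        win32:LIBS += -lversion -lshlwapi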

    Read the article

  • ASPX FormsAuthentication.RedirectFromLoginPage function is not working anymore

    - by Mike Webb
    Here is my issue. I have an ASPX web site with code that redirects from the login page via the call "FormsAuthentication.RedirectFromLoginPage(username, false);". This sends the user from the root website folder to 'website/Admin/'. I have a 'default.aspx' page in 'website/Admin/', and the redirect works on the previous version of the website that we currently have running, but the version I am updating on a separate test server is not working. It gives me the error "Directory Listing Denied. This Virtual Directory does not allow contents to be listed." I have this in the config file: <authorization> <allow users="*" /> </authorization> under the "authentication" option and... <location path="Admin"> <system.web> <authorization> <deny users="?" /> </authorization> </system.web> </location> for the location of Admin. Also, there is no difference between the web.config, Login.aspx, or default.aspx files on the current server and those on the test server, so I am confused as to why the redirect will not work on both. It even works in the Visual Studio server environment, for which the code is also identical. Any suggestions and help are appreciated.
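    "Directory Listing Denied" points at the web server rather than at Forms authentication: the redirect ends at the directory URL 'website/Admin/', and the test server does not appear to be configured to serve default.aspx as the default document, so it tries to list the directory instead. If the test server runs IIS 7 or later, a hedged sketch of the fix is to declare the default document in web.config; on IIS 6 the equivalent setting is on the Documents tab of the site in IIS Manager.

        <system.webServer>
            <defaultDocument enabled="true">
                <files>
                    <!-- serve default.aspx when only the directory is requested -->
                    <clear />
                    <add value="default.aspx" />
                </files>
            </defaultDocument>
        </system.webServer>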

    Read the article

  • Form Based Authentication problem?

    - by programmerist
    I have 2 pages: Login.aspx and Satis.aspx. I am redirected from Login.aspx to Satis.aspx if authentication succeeds. If I sign out from Satis I am redirected to Login.aspx. But if I type the Satis.aspx URL directly into the browser, I get into Satis.aspx even though I am not signed in. I shouldn't be able to enter Satis.aspx directly. My web.config: <authentication mode="Forms"> <forms loginUrl="Login.aspx" name=".ASPXFORMSAUTH" path="/" protection="All"> <credentials> <user name="a" password="a"></user> </credentials> </forms> </authentication> <authorization> <allow users="*"/> </authorization> </system.web> <location path="~/ContentPages/Satis/Satis.aspx"> <system.web> <authorization> <deny users="?"/> </authorization> </system.web> </location> Login.aspx.cs: protected void lnkSubmit_Click(object sender, EventArgs e) { if(FormsAuthentication.Authenticate(UserEmail.Value,UserPass.Value)) { FormsAuthentication.RedirectFromLoginPage (UserEmail.Value, PersistForms.Checked); } else Msg.Text = "Invalid Credentials: Please try again"; } Satis.aspx: protected void LogoutSystem_Click(object sender, EventArgs e) { FormsAuthentication.SignOut(); Response.Redirect("~/Login/Login.aspx"); }
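    One thing that stands out is the <location> path: it is expected to be a virtual path relative to the directory containing this web.config, written without the ~/ application-root prefix, so the <deny> rule above may simply never be matched against Satis.aspx. A hedged rewrite of that section, assuming the web.config sits at the application root:

        <location path="ContentPages/Satis/Satis.aspx">
          <system.web>
            <authorization>
              <!-- anonymous users are denied; authenticated users fall through to the allow rule -->
              <deny users="?" />
            </authorization>
          </system.web>
        </location>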

    Read the article

  • VS2010 and CSS: What is the best strategy to individually position form controls

    - by George
    OK, I have a ton of controls on my page that I need to individually place. I need to set a margin here, a padding there, etc. None of these particular styles that I want to apply will be applied to more than one control. What is the best practice for determining at which level the style is placed? My choices are:
    1) External CSS file
    1A) Using ClientIdMode = Auto (the default), I could assign a unique CssClass value to the ASP.NET control and, in the external CSS file, create a class selector that would only be applied to that one control.
    1B) Using ClientIdMode = Predictable, I could determine in the external CSS file what the ID will be for the controls of interest and create an ID selector (#ControlID{Style}). However, I fear maintenance issues: including or removing parent containers would cause the ID to change.
    1C) Using ClientIdMode = Static, I could choose static IDs for the controls such that I minimize the likelihood of a clash with auto-generated IDs (perhaps by prefixing the ID with "StaticID_") and use an external stylesheet with ID selectors.
    2) I could place the style right on the control. The only disadvantage here, as I see it, is that the style info is brought down each time instead of being cached, which is what I'd get using an external CSS file. If a style isn't reused, I personally don't see much benefit to placing it in an external file, though please explain why if you disagree. Is there more of a reason than "it's nice to have all the CSS in one place"?

    Read the article

  • Google Data Api returning an invalid access token

    - by kingdavies
    I'm trying to pull a list of contacts from a Google account, but Google returns a 401. The URL used for requesting an authorization code: String codeUrl = 'https://accounts.google.com/o/oauth2/auth' + '?' + 'client_id=' + EncodingUtil.urlEncode(CLIENT_ID, 'UTF-8') + '&redirect_uri=' + EncodingUtil.urlEncode(MY_URL, 'UTF-8') + '&scope=' + EncodingUtil.urlEncode('https://www.google.com/m8/feeds/', 'UTF-8') + '&access_type=' + 'offline' + '&response_type=' + EncodingUtil.urlEncode('code', 'UTF-8') + '&approval_prompt=' + EncodingUtil.urlEncode('force', 'UTF-8'); Exchanging the returned authorization code for an access token (and refresh token): String params = 'code=' + EncodingUtil.urlEncode(authCode, 'UTF-8') + '&client_id=' + EncodingUtil.urlEncode(CLIENT_ID, 'UTF-8') + '&client_secret=' + EncodingUtil.urlEncode(CLIENT_SECRET, 'UTF-8') + '&redirect_uri=' + EncodingUtil.urlEncode(MY_URL, 'UTF-8') + '&grant_type=' + EncodingUtil.urlEncode('authorization_code', 'UTF-8'); Http con = new Http(); Httprequest req = new Httprequest(); req.setEndpoint('https://accounts.google.com/o/oauth2/token'); req.setHeader('Content-Type', 'application/x-www-form-urlencoded'); req.setBody(params); req.setMethod('POST'); Httpresponse reply = con.send(req); This returns a JSON response with what looks like a valid access token: { "access_token" : "{access_token}", "token_type" : "Bearer", "expires_in" : 3600, "refresh_token" : "{refresh_token}" } However, when I try to use the access token (either in code or curl) Google returns a 401: curl -H "Authorization: Bearer {access_token}" https://www.google.com/m8/feeds/contacts/default/full/ Incidentally, the same curl command with an access token acquired via https://code.google.com/oauthplayground/ works. This leads me to believe there is something wrong with the exchange of the authorization code for an access token, as the returned access token does not work. I should add that this is all within the expires_in time frame, so it's not that the access_token has expired.

    Read the article

  • VS2010 and CSS: What is the best way to position a single form control

    - by George
    OK, I have a ton of controls on my page that I need to individually place. I need to set a margin here, a padding there, etc. None of these particular styles that I want to apply will be applied to more than one control. What is the best practice for determining at which level the style is placed? My choices are:
    1) External CSS file
    1A) Using ClientIdMode = Auto (the default), I could assign a unique CssClass value to the ASP.NET control and, in the external CSS file, create a class selector that would only be applied to that one control.
    1B) Using ClientIdMode = Predictable, I could determine in the external CSS file what the ID will be for the controls of interest and create an ID selector (#ControlID{Style}). However, I fear maintenance issues: including or removing parent containers would cause the ID to change.
    1C) Using ClientIdMode = Static, I could choose static IDs for the controls such that I minimize the likelihood of a clash with auto-generated IDs (perhaps by prefixing the ID with "StaticID_") and use an external stylesheet with ID selectors.
    2) I could place the style right on the control. The only disadvantage here, as I see it, is that the style info is brought down each time instead of being cached, which is what I'd get using an external CSS file. If a style isn't reused, I personally don't see much benefit to placing it in an external file, though please explain why if you disagree. Is there more of a reason than "it's nice to have all the CSS in one place"?

    Read the article

  • What is the linkage of the following functions?

    - by Derui Si
    When I was reading the C++03 standard (7.1.1 Storage class specifiers [dcl.stc]), I came across the examples below, and I'm not able to tell how the linkage of each successive declaration is determined. Could anyone help here? Thanks in advance!
        static char* f();              // f() has internal linkage
        char* f() { /* ... */ }        // f() still has internal linkage
        char* g();                     // g() has external linkage
        static char* g() { /* ... */ } // error: inconsistent linkage
        void h(); inline void h();        // external linkage
        inline void l(); void l();        // external linkage
        inline void m(); extern void m(); // external linkage
        static void n(); inline void n(); // internal linkage
        static int a;  // a has internal linkage
        int a;         // error: two definitions
        static int b;  // b has internal linkage
        extern int b;  // b still has internal linkage
        int c;         // c has external linkage
        static int c;  // error: inconsistent linkage
        extern int d;  // d has external linkage
        static int d;  // error: inconsistent linkage
    UPD: Additionally, how should I understand this statement in the standard: "The linkages implied by successive declarations for a given entity shall agree. That is, within a given scope, each declaration declaring the same object name or the same overloading of a function name shall imply the same linkage. Each function in a given set of overloaded functions can have a different linkage, however."

    Read the article

  • A brief introduction to BRM and its architecture

    - by Yani Miguel
    Oracle Communications Billing and Revenue Management (Oracle BRM) is the telecom industry's leading solution for communications service providers. This post encourages you to get to know BRM, starting with the basics.
    History
    Portal was a billing and revenue management solution for the communications industry created by Portal Software. In 2006 Oracle acquired Portal Software and the solution was renamed BRM. Today Oracle BRM is the first end-to-end packaged enterprise software suite for the communications industry; however, BRM is just one more product in the catalog of OSS solutions that Oracle offers. BRM can bill and manage all communications services including wireline, wireless, broadband, cable, voice over IP, IPTV, music, and video.
    BRM Architecture
    BRM's architecture consists of 4 layers, or tiers. Across these layers sit the data, the business logic, and the interfaces that connect the graphical client tools.
    Application tier
    This layer provides GUI client tools, enabling communication with the other layers through open APIs. Some BRM client applications are: Customer Center, Pricing Center, Universal Event Loader, Web Server, BRM Billing Application, Collections Center, Permissioning Center. Furthermore, this layer is where real-time external events are delivered.
    Business Process Tier
    Although all layers are equally important, I think this one deserves the most attention, because BRM's functionality is implemented in this tier. All the functions that give life to BRM live in this layer, coded in C and called opcodes (System Processes in the image). Any changes or additional functionality should be made here, so when we customize the product we will, most of the time, be programming in this layer (Business Policies in the image). Business Process Tier features: implements Portal system functionality; validates data from the application tier; modifies Portal behavior through business policies, which can be customized; triggers external systems using event notification.
    Object Tier
    This layer is responsible for translating BRM requests into database language and into external system requests. Without it, the business logic (data from the Business Process Tier) could not be understood by the relational database.
    Data tier
    The data tier is responsible for the storage of the BRM database and the databases of other external systems. External systems include credit card, tax, and directory servers.
    Finally, it's important to note that BRM is designed to integrate easily with the following solutions: AIA 2.4, Siebel CRM, E-Business Suite (G/L only), Communications Services Gatekeeper, and Oracle BI Publisher. Personally, I think that BRM could improve by migrating its client-server architecture to a fully web-based platform that works with Oracle Middleware, like any product of the Fusion Middleware family. Hopefully there are already initiatives in this area.

    Read the article

< Previous Page | 90 91 92 93 94 95 96 97 98 99 100 101  | Next Page >