Search Results

Search found 1052 results on 43 pages for 'victorias secret'.


  • How to define AUTHPARAMS in Amazon EC2 API call

    - by The Joker
    I am trying to make an API call to EC2. I want to add my IP address to a security group:

        https://ec2.amazonaws.com/?Action=AuthorizeSecurityGroupIngress
            &GroupName=grppp20
            &GroupId=sg-b2z982mq
            &IpPermissions.1.IpProtocol=tcp
            &IpPermissions.1.FromPort=3389
            &IpPermissions.1.ToPort=3389
            &IpPermissions.1.IpRanges.1.CidrIp=22.951.17.728/32
            &AWSAccessKeyId=AOPLDRACULALK6U7A

    I get the following error: "AWS was not able to validate the provided access credentials". I have a secret access key and a username. I searched the internet and found that you have to send a signature derived from the secret key instead of adding the key directly to the request. Can anyone tell me how to create a signature from my AWS secret key and how to use it with my API call? Let my secret key be 2WwRiQzBs7RTFG4545PIOJ7812CXZ and my username thejoker.
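
    No answer is included in this snippet, so here is a minimal sketch of how an EC2 Query API request could be signed with Signature Version 2 (HMAC-SHA256 over a canonicalized query string, Base64-encoded into a Signature parameter), which is what EC2 expected at the time. The function name, the Version value and the credentials are placeholders taken from the question, not a tested implementation:

        # Hedged sketch of AWS Query API Signature Version 2 signing.
        import base64
        import hashlib
        import hmac
        import time
        import urllib.parse

        def sign_ec2_request(params, access_key, secret_key,
                             host="ec2.amazonaws.com", path="/"):
            params = dict(params)
            params.update({
                "AWSAccessKeyId": access_key,
                "SignatureVersion": "2",
                "SignatureMethod": "HmacSHA256",
                "Timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                "Version": "2010-06-15",   # illustrative EC2 API version
            })
            # Canonical query string: parameters sorted by name, RFC 3986 encoded.
            canonical = "&".join(
                "%s=%s" % (urllib.parse.quote(k, safe="-_.~"),
                           urllib.parse.quote(str(v), safe="-_.~"))
                for k, v in sorted(params.items()))
            string_to_sign = "\n".join(["GET", host, path, canonical])
            digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                              hashlib.sha256).digest()
            params["Signature"] = base64.b64encode(digest).decode()
            return "https://%s%s?%s" % (host, path, urllib.parse.urlencode(params))

        # Example usage with placeholder credentials:
        url = sign_ec2_request({"Action": "DescribeInstances"}, "AKID...", "SECRET...")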

    Read the article

  • Apache RewriteRule ignoring RewriteCond?

    - by winsmith
    So I have an Apache running on OSX Server 10.4 (don't ask) with multiple sites. In 0002_[example.com].conf, I have this bit of code:

        <Directory "/Library/WebServer/Documents/secret/">
            RewriteEngine On
            RewriteCond %{REMOTE_ADDR} !^137\.250\.
            RewriteRule .* /messages/secret.html
        </Directory>

    However, in this configuration the RewriteCond always seems to evaluate to false, since the secret directory gets shown even if the client's address does not begin with 137.250. If I change the config to this:

        <Directory "/Library/WebServer/Documents/secret/">
            RewriteEngine On
            RewriteRule .* /messages/secret.html
            RewriteCond %{REMOTE_ADDR} !^137\.250\.
        </Directory>

    the condition either does not get evaluated at all or always evaluates to true. Either way, all clients get blocked. What am I doing wrong?
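
    For context, a RewriteCond guards only the RewriteRule that immediately follows it, which is why the second variant behaves as if the condition were ignored. Below is a hedged sketch of the conventional arrangement; note that mod_rewrite in per-directory context also needs FollowSymLinks (or SymLinksIfOwnerMatch) enabled for the directory, which is one possible reason the first variant misbehaves:

        <Directory "/Library/WebServer/Documents/secret/">
            Options +FollowSymLinks
            RewriteEngine On
            # The condition applies only to the rule directly below it.
            RewriteCond %{REMOTE_ADDR} !^137\.250\.
            RewriteRule .* /messages/secret.html [L]
        </Directory>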

    Read the article

  • Fabric "TypeError: not all arguments converted during string formatting"

    - by Brian Carpio
    I have the following Fabric task:

        @task
        def deploy_west_ec2_ami(name, puppetClass, size='m1.small', region='us-west-1',
                                basedn='joe', ldap='arch-ldap-01', secret='secret',
                                subnet='subnet-d43b8abd', sgroup='sg-926578fe'):
            execute(deploy_ec2_ami, name='%s', puppetClass='%s', size='%s', region='%s',
                    basedn='%s', ldap='%s', secret='%s', subnet='%s',
                    sgroup='%s' % (name, puppetClass, size, region, basedn, ldap,
                                   secret, subnet, sgroup))

    However, when I run the command

        fab deploy_west_ec2_ami:test,java

    I get the following traceback:

        Traceback (most recent call last):
          File "/usr/local/lib/python2.6/dist-packages/fabric/main.py", line 710, in main
            *args, **kwargs
          File "/usr/local/lib/python2.6/dist-packages/fabric/tasks.py", line 321, in execute
            results['<local-only>'] = task.run(*args, **new_kwargs)
          File "/usr/local/lib/python2.6/dist-packages/fabric/tasks.py", line 113, in run
            return self.wrapped(*args, **kwargs)
          File "/home/bcarpio/Projects/githubenterprise/awsdeploy/fabfile.py", line 35, in deploy_west_ec2_ami
            execute(deploy_ec2_ami, name='%s', puppetClass='%s', size='%s', region='%s', basedn='%s', ldap='%s', secret='%s', subnet='%s', sgroup='%s' % (name, puppetClass, size, region, basedn, ldap, secret, subnet, sgroup))
        TypeError: not all arguments converted during string formatting

    I am not sure I understand why; I am pretty sure I have all the values defined here just fine. Also, when I run the deploy_ec2_ami task directly, like so:

        deploy_ec2_ami:test,java,m1.small,us-west-1,'dc\=test\,dc\=net',ldap-01,secret,subnet-d43b8abd,sg-926578fe

    it works just fine.
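
    The error is most likely ordinary Python string formatting rather than Fabric itself: the % operator binds only to the last '%s' literal, so that single placeholder is being formatted with a nine-element tuple, which raises exactly this TypeError. A minimal sketch of the likely fix is to pass the arguments straight through (names as in the question, Fabric 1.x imports):

        from fabric.api import task, execute

        @task
        def deploy_west_ec2_ami(name, puppetClass, size='m1.small', region='us-west-1',
                                basedn='joe', ldap='arch-ldap-01', secret='secret',
                                subnet='subnet-d43b8abd', sgroup='sg-926578fe'):
            # The values are already strings, so no '%s' wrappers are needed.
            execute(deploy_ec2_ami, name=name, puppetClass=puppetClass, size=size,
                    region=region, basedn=basedn, ldap=ldap, secret=secret,
                    subnet=subnet, sgroup=sgroup)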

    Read the article

  • How do I make private class constants in Ruby

    - by DMisener
    In Ruby, how does one create a private class constant? (i.e. one that is visible inside the class but not outside it.)

        class Person
          SECRET = 'xxx' # How do I make this private to the class?
          def show_secret
            puts "Secret: #{SECRET}"
          end
        end

        Person.new.show_secret
        puts Person::SECRET # I'd like this to fail
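
    For reference, Ruby 1.9.3 and later provide Module#private_constant, which gives roughly the behaviour being asked for. A small sketch (on older Rubies the usual workarounds are a class-level instance variable or a private class method instead):

        class Person
          SECRET = 'xxx'
          private_constant :SECRET    # Person::SECRET now raises NameError

          def show_secret
            puts "Secret: #{SECRET}"  # still visible from inside the class
          end
        end

        Person.new.show_secret        # prints "Secret: xxx"
        Person::SECRET                # NameError: private constant SECRET referenced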

    Read the article

  • Tweepy + App Engine Example OAuth Help

    - by Wasauce
    Hi, I am trying to follow the Tweepy App Engine OAuth example app in my own app but am running into trouble. Here is a link to the Tweepy example code: http://github.com/joshthecoder/tweepy-examples. Specifically, look at http://github.com/joshthecoder/tweepy-examples/blob/master/appengine/oauth_example/handlers.py

    Here is the relevant snippet of my code:

        try:
            authurl = auth.get_authorization_url()
            request_token = auth.request_token
            db_user.token_key = request_token.key
            db_user.token_secret = request_token.secret
            db_user.put()
        except tweepy.TweepError, e:
            # Failed to get a request token
            self.generate('error.html', {
                'error': e,
            })
            return

        self.generate('signup.html', {
            'authurl': authurl,
            'request_token': request_token,
            'request_token.key': request_token.key,
            'request_token.secret': request_token.secret,
        })

    As you can see, my code is very similar to the example. However, when I compare the request_token.key and request_token.secret that are rendered on my signup page against the request_token.key and request_token.secret found in the datastore, they do not match. Any guidance on what I am doing wrong here? Thanks!

    Reference Links:
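
    Whatever is causing the mismatch, the callback side of that example ultimately has to restore the saved request token rather than create a fresh one, or the keys and secrets will never line up. A rough sketch of the callback using the old Tweepy 1.x API that example is built on (OAuthHandler.set_request_token existed there; the datastore model and the uppercase names are placeholders, not working code):

        import tweepy

        # Inside the callback request handler (placeholder names throughout).
        auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET, CALLBACK_URL)
        saved = OAuthToken.get_by_key_name(self.request.get('oauth_token'))
        auth.set_request_token(saved.token_key, saved.token_secret)
        access_token = auth.get_access_token(self.request.get('oauth_verifier'))
        # Persist access_token.key and access_token.secret for later API calls.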

    Read the article

  • Java equivalent to PHP's HMAC-SHA1

    - by Bee
    I'm looking for a Java equivalent to this PHP call:

        hash_hmac('sha1', "test", "secret")

    I tried this, using javax.crypto.Mac, but the two do not agree:

        String mykey = "secret";
        String test = "test";
        try {
            Mac mac = Mac.getInstance("HmacSHA1");
            SecretKeySpec secret = new SecretKeySpec(mykey.getBytes(), "HmacSHA1");
            mac.init(secret);
            byte[] digest = mac.doFinal(test.getBytes());
            String enc = new String(digest);
            System.out.println(enc);
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }

    The outputs with key = "secret" and test = "test" do not seem to match.
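
    The mismatch is most likely just the output encoding: PHP's hash_hmac returns a lowercase hex string by default, while new String(digest) interprets the raw HMAC bytes as character data. A sketch that hex-encodes the digest instead (same key and message as above):

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        public class HmacSha1Hex {
            public static void main(String[] args) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA1");
                mac.init(new SecretKeySpec("secret".getBytes("UTF-8"), "HmacSHA1"));
                byte[] digest = mac.doFinal("test".getBytes("UTF-8"));

                // Hex-encode the raw bytes, mirroring PHP's default output format.
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) {
                    hex.append(String.format("%02x", b));
                }
                System.out.println(hex); // should match hash_hmac('sha1', "test", "secret")
            }
        }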

    Read the article

  • Twitter oauth php problems

    - by Patrick Gates
    I'm writing some backend script for a Twitter app, and here's how it goes. On the app you click a button that sends you to login.php on my server, which logs into my database and connects to Twitter with my consumer key and secret:

        $to = new TwitterOAuth($consumer_key, $consumer_secret);
        $tok = $to->getRequestToken();
        $request_link = $to->getAuthorizeURL($tok);

    It then writes the token and secret to the database, sets a session variable equal to the database id of the token and secret, and redirects to $request_link. You then go through the process of logging in on Twitter, and it redirects you to callback.php on my server. callback.php logs into the database again, gets the new token and secret, writes them to the database, and then prompts you to go back to the app. Back in the app, all I'm trying to do is access the basic credentials with $to->get('account/verify_credentials'), and it keeps coming back with "Could not authenticate you". What am I doing wrong? Thank you for all the help :)
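
    One step the description never mentions is exchanging the request token for an access token; verify_credentials only works with an access token. A hedged sketch of what callback.php usually looks like with the Abraham Williams TwitterOAuth library of that era (method names as in that library; the $saved_* variables stand for whatever was stored before the redirect):

        <?php
        // callback.php sketch: turn the stored request token into an access token.
        $to = new TwitterOAuth($consumer_key, $consumer_secret,
                               $saved_request_token, $saved_request_token_secret);
        $access_token = $to->getAccessToken($_REQUEST['oauth_verifier']);

        // Persist $access_token['oauth_token'] and $access_token['oauth_token_secret'],
        // then authenticate later API calls with them.
        $to = new TwitterOAuth($consumer_key, $consumer_secret,
                               $access_token['oauth_token'],
                               $access_token['oauth_token_secret']);
        $account = $to->get('account/verify_credentials');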

    Read the article

  • Google's Oauth for Installed apps vs. Oauth for Web Apps

    - by burgerguy
    So I'm having trouble understanding something... If you do OAuth for Web Apps, you register your site with a callback URL and get a unique consumer secret key. But once you've obtained an OAuth for Web Apps token, you don't have to generate OAuth calls to the Google server from your registered domain. I regularly use my key and token from scripts running via an Apache server at localhost on my laptop, and Google never says "you're not sending this request from the registered domain." It just sends me the data. Now, as I understand it, if you do OAuth for Installed Apps, you use "anonymous" instead of a secret key you got from Google. I've been thinking of just using the OAuth for Web Apps auth method, then passing that token to an installed app that has my secret code embedded in its innards. The worry is that the code could be discovered by bad people. But what's more secure... making them work for the secret code or letting them default to anonymous? What really goes bad if the "secret" is discovered, when the alternative is using "anonymous" as the secret?

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    This is the final article in the quick guide to Oracle IRM. If you've followed everything prior you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide, as this next article is common to both.

    Contents:
    - Why this is the most important part...
    - Understanding the classification and standard rights model
    - Identifying business use cases
    - Creating an effective IRM classification model
    - One single classification across the entire business
    - A context for each and every possible granular use case
    - What makes a good context?
    - Deciding on the use of roles in the context
    - Reviewing the features and security for context roles
    - Summary

    Why this is the most important part...

    Now the real work begins. Installing and getting an IRM system running is as simple as following instructions. However, to have an IRM technology easily protecting your most sensitive information without interfering with your users' existing daily work flows, and to be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle has over 10 years of experience in helping customers, and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports, and no matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve:

    - Secure the content at the earliest point possible, and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results in a more secure deployment.
    - K.I.S.S. (Keep It Simple Stupid). Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured, which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience.
    - Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them of the value of the technology to both your business and them.

    Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments.
    No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in solutions that are difficult to use and manage, and ultimately insecure. The advice and information that follows was born with Martin; he's still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is thanks to Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but that Oracle and their partners have the most experience in delivering successful document security solutions.

    Understanding the classification and standard rights model

    The goal of any successful IRM deployment is to balance the increase in security the technology brings without over-complicating the way people use secured content, and to avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is, of course, until they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try to print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between:

    - A group of related documents
    - The people that use the documents
    - The roles that these people perform
    - The rights that these people need to perform their role

    The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts, but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification and a user needs only one right assigned to be able to access all of those documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails.

    Identifying business use cases

    Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property or financial documents. For this and every subsequent use case you must understand how users create and work with documents, to whom they are distributed and how the recipients should interact with them.
    A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article). Then, after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and whether the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further.

    Creating an effective IRM classification model

    Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built-in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates.

    The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity. The question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum...

    One single classification across the entire business

    Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this:

    - Document security classification decisions are simple. You only have one context to choose from!
    - User provisioning is simple: just make sure everyone has a role in the only context in the business.
    - Administration is very low: if you assign rights to groups from the business user repository you probably never have to touch IRM administration again.

    There are, however, some obvious downsides to this model:

    - All users have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is.
    - You cannot delegate control of different documents to different parts of the business; this may not satisfy your regulatory requirements for the separation and delegation of duties.
    - Changing a user's role affects every single document ever secured.

    Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value.
    Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, and imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

    A context for each and every possible granular use case

    Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity... restricted, confidential and top secret. Then let's say that each group and department needs to define access to information from both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group:

    - Q1FY2010 Restricted Internal - Engineering Group 1 - Research
    - Q1FY2010 Restricted Internal - Engineering Group 1 - Design
    - Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Restricted External - Engineering Group 1 - Research
    - Q1FY2010 Restricted External - Engineering Group 1 - Design
    - Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Research
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Design
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Confidential External - Engineering Group 1 - Research
    - Q1FY2010 Confidential External - Engineering Group 1 - Design
    - Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Top Secret External - Engineering Group 1 - Research
    - Q1FY2010 Top Secret External - Engineering Group 1 - Design
    - Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing

    Now multiply the above by 6 for each engineering group, 18 contexts each. You are then creating/reviewing another 18 every 3 months. After a year you've got 72 contexts. What would be the advantages of such a complex classification model?

    - You can satisfy very granular rights requirements; for example, only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis.
    - Your business may have very complex rights requirements and mapping this directly to IRM may be an obvious exercise.

    The disadvantages of such a classification model are significant...

    - Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
    - From an end user's perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts. They may be able to print content from one but not another, or be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology.
    - Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights.
    - Hard to understand who can do what with what. Imagine being the VP of engineering and, as part of an internal security audit, being asked the question, "What rights do researchers have to our top secret information?". In this complex model the answer is not simple; it would depend on many roles in many contexts.

    Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable and too few contexts do not give fine enough granularity.

    What makes a good context?

    Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However, there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created, or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which helps you define a context. First ask these questions about a set of documents:

    - What is the topic?
    - Who are the legitimate contributors on this topic?
    - Who are the authorized readership?

    If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors; therefore it is better sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try to tackle many different problems from the outset. Do one, learn from the process, refine it, and then take what you have learned into the next use case; refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business.

    Deciding on the use of roles in the context

    Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon.
    For each context, identify:

    Administrative roles
    - Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are usually the person with the most at risk when sensitive information is lost.
    - Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
    - Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.

    Document related roles
    - Contributors: the people who create and edit documents in this context.
    - Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published-work and Work-in-Progress.)
    - Readers: the people who read documents from this context.

    Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles obviously map directly to roles available in Oracle IRM.

    Reviewing the features and security for context roles

    At this point we have decided on a classification of information, and understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First think about why you are protecting the documents in the first place. It is to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent unauthorized people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions authorized users will often find ways to circumvent the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However, I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community.
    These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

    The big print issue...

    Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, and it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights:

    - Oracle IRM sealed documents can contain watermarks that expose information about the user, the time and location of access, and the classification of the document. This information would reside in the printed copy, making it easier to trace who printed it.
    - Printed documents are slower to distribute in comparison to their digital counterparts, so time sensitive information in printed format may present a lower risk.
    - Print activity is audited, therefore you can monitor and react to users abusing print rights.

    Summary

    In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group to be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is introducing the complexity to deliver them at the right level. In another article I'm working on I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.

    Read the article

  • Using a random string to authenticate HMAC?

    - by mrwooster
    I am designing a simple web service and want to use HMAC for authentication to the service. For the purpose of this question we have:

    - a web service at example.com
    - a secret key shared between a user and the server [K]
    - a consumer ID which is known to the user and the server (but is not necessarily secret) [D]
    - a message which we wish to send to the server [M]

    The standard HMAC implementation would involve using the secret key [K] and the message [M] to create the hash [H], but I am running into issues with this. The message [M] can be quite long and tends to be read from a file. I have found it very difficult to produce a correct hash consistently across multiple operating systems and programming languages because of hidden characters which make it into various file formats. This is of course a bad implementation on the client side (100%), but I would like this web service to be easily accessible and not have trouble with different file formats.

    I was thinking of an alternative, which would allow the use of a short (5-10 character) random string [R] rather than the message for authentication, e.g. H = HMAC(K, R). The user then passes the random string to the server and the server checks the HMAC server side (using the random string plus the shared secret). As far as I can see, this produces the following issues:

    - There is no message integrity - this is OK, message integrity is not important for this service.
    - A user could re-use the hash with a different message - I can see 2 ways around this:
      - Combine the random string with a timestamp so the hash is only valid for a set period of time.
      - Only allow each random string to be used once.
    - Since the client is in control of the random string, it is easier to look for collisions.

    I should point out that the principal reason for authentication is to implement rate limiting on the API service. There is zero need for message integrity, and it's not a big deal if someone can forge a single request (but it is if they can forge a very large number very quickly). I know that the correct answer is to make sure the message [M] is the same on all platforms/languages before hashing it. But, taking that out of the equation, is the above proposal an acceptable second best?
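
    As a concrete sketch of the proposed alternative (HMAC over a nonce and a timestamp, verified server side with a constant-time comparison), with illustrative names and a made-up freshness window:

        import hashlib
        import hmac
        import time

        def sign(secret, consumer_id, nonce, timestamp):
            # HMAC over the nonce and timestamp instead of the full message body.
            msg = "%s|%s|%d" % (consumer_id, nonce, timestamp)
            return hmac.new(secret.encode(), msg.encode(), hashlib.sha1).hexdigest()

        def verify(secret, consumer_id, nonce, timestamp, signature, max_age=300):
            if abs(time.time() - timestamp) > max_age:
                return False  # stale; the server should also reject nonces it has already seen
            expected = sign(secret, consumer_id, nonce, timestamp)
            return hmac.compare_digest(expected, signature)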

    Read the article

  • GnuPG + Webservice + ASP.NET

    - by Karol Bladek
    Hi! I'm exhausted. I have installed GnuPG and exported a secret key and two public keys (my own and one of my client's) from another instance of GnuPG. I am trying to get my encrypting/decrypting method working on the local machine. When I run the encrypting method from a little console application, it works fine. When I run the same method (same body!) from my web service on my local machine, I get ExitCode = 2. At least I catch the error message, but its body does not make me happy:

        gpg: no default secret key: secret key not available
        gpg: XXXXXXXXXXXXXXXX.xml: sign+encrypt failed: secret key not available

    What should I do? What's wrong? Best regards, Karol Bladek
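
    A frequent cause when the same gpg invocation works from a console but not from a web service is that the service runs under a different Windows account (the IIS application pool identity), so gpg looks for its keyrings in that account's home directory and finds no secret key. A hedged example of pointing gpg at an explicit keyring directory and signing key (the path, key ID and recipient are placeholders, and the service account needs read access to that directory):

        gpg --homedir "C:\gnupg-keys" --local-user 0xA1B2C3D4 --batch --yes --recipient client@example.com --sign --encrypt XXXXXXXXXXXXXXXX.xml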

    Read the article

  • OAuth secrets in mobile apps

    - by Felixyz
    When using the OAuth protocol, you need a secret string obtained from the service you want to delegate to. If you are doing this in a web app, you can simply store the secret in your database or on the file system, but what is the best way to handle it in a mobile app (or a desktop app for that matter)? Storing the string in the app is obviously not good, as someone could easily find it and abuse it. Another approach would be to store it on your server and have the app fetch it on every run, never storing it on the phone. This is almost as bad, because you have to include the URL in the app. I don't believe using https is any help. The only workable solution I can come up with is to first obtain the Access Token as normal (preferably using a web view inside the app), and then route all further communication through our server, where a script would append the secret to the request data and communicate with the provider. Then again, I'm a security noob, so I'd really like to hear some knowledgeable people's opinions on this. It doesn't seem to me that most apps are going to these lengths to guarantee security (for example, Facebook Connect seems to assume that you put the secret into a string right in your app). Another thing: I don't believe the secret is involved in initially requesting the Access Token, so that could be done without involving our own server. Am I correct?

    Read the article

  • Problem logging in and changing permissions in Facebook

    - by kujawk
    Hi everybody, I've got a piece of code that logs into Facebook, gets a session, sets the status_update and offline_access permissions if they are not set, and gets a new session with the newly set permissions. This code used to work fine, but now I'm getting error 100, "One of the parameters specified was missing or invalid", as a response to the second call to get a session, and I can't figure out why. Here's the sequence in detail:

    1. CREATE TOKEN
       restserver.php?method=auth.createToken&api_key=[our key]&v=1.0&format=JSON&sig=[sig created with our secret]
       Response: new token.
    2. LOGIN
       m.facebook.com/login.php?api_key=[our key]&v=1.0&auth_token=[token created above]
       Login screen loads and the user successfully logs in with their username/password.
    3. GET SESSION
       restserver.php?method=auth.getSession&api_key=[our key]&v=1.0&format=JSON&auth_token=[token created above]&sig=[sig created with our secret]
       Response: session key with expiration date and a secret.
    4. CHECK/AUTHORIZE PERMISSIONS
       restserver.php?method=users.hasAppPermission&api_key=[our key]&v=1.0&format=JSON&ext_perm=status_update&call_id=[proper id]&session_key=[key returned above]&sig=[sig created with secret returned for get session]
       Response: 0.
       m.facebook.com/authorize.php?api_key=[our key]&v=1.0&ext_perm=status_update
       Authorization screen loads and the user authorizes. Same steps for status_update.
    5. CREATE NEW TOKEN
       Same steps as done to create the first token.
    6. LOGIN
       m.facebook.com/login.php?api_key=[our key]&v=1.0&auth_token=[new token]
       User is already logged in, redirected to their homepage.
    7. GET NEW SESSION
       restserver.php?method=auth.getSession&api_key=[our key]&format=JSON&auth_token=[new token]&sig=[sig created with our secret]
       Response: error 100 - missing or invalid parameter. Of course it doesn't tell me which one.

    Anybody have any ideas what I'm doing wrong here? I tried skipping the second login and going right to creating the new session, and that didn't work. The only thing that seems to work is logging the user out after they've authorized the permissions and having them log back in again, which I'd like to avoid if possible. Can you have two outstanding sessions at one time? This code used to work, but I'm thinking maybe something changed on Facebook's end that I'm not aware of. Thanks, kris

    Read the article

  • Storing an encrypted cookie with Rails

    - by J. Pablo Fernández
    I need to store a small piece of data (less than 10 characters) in a cookie in Rails and I need it to be secure. I don't want anybody being able to read that piece of data or inject their own piece of data (as that would open up the app to many kinds of attacks). I think encrypting the contents of the cookie is the way to go (should I also sign it?). What is the best way to do it? Right now I'm doing this, which looks secure, but many things looked secure to people that knew much more than I do about security, and then it was discovered they weren't really secure. I'm saving the secret in this way:

        encryptor = ActiveSupport::MessageEncryptor.new(Example::Application.config.secret_token)
        cookies[:secret] = {
          :value => encryptor.encrypt(secret),
          :domain => "example.com",
          :secure => !(Rails.env.test? || Rails.env.development?)
        }

    and then I'm reading it like this:

        encryptor = ActiveSupport::MessageEncryptor.new(Example::Application.config.secret_token)
        secret = encryptor.decrypt(cookies[:secret])

    Is that secure? Are there any better ways of doing it?

    Update: I know about Rails' session and how it is secured, both by signing the cookie and by optionally storing the contents of the session server side, and I do use the session for what it is for. But my question here is about storing a cookie - a piece of information I do not want in the session but still need to be secure.
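
    On the "should I also sign it?" question: ActiveSupport::MessageEncryptor can encrypt and sign in one step via encrypt_and_sign / decrypt_and_verify (available in the Rails 3.x versions of that era), which protects against tampering as well as reading. A sketch along the lines of the code above, with the same cookie options:

        encryptor = ActiveSupport::MessageEncryptor.new(Example::Application.config.secret_token)

        # Writing: encrypt and sign, so the value can be neither read nor forged.
        cookies[:secret] = {
          :value  => encryptor.encrypt_and_sign(secret),
          :domain => "example.com",
          :secure => !(Rails.env.test? || Rails.env.development?)
        }

        # Reading: raises if the cookie has been tampered with.
        secret = encryptor.decrypt_and_verify(cookies[:secret])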

    Read the article

  • Help me with query string parameters (Rails)

    - by Martin Petrov
    Hi, I'm creating a newsletter. Each email contains a link for editing your subscription:

        <%= edit_user_url(@user, :secret => @user.created_at.to_i) %>

    The :secret => @user.created_at.to_i part is meant to prevent users from editing each other's profiles.

        def edit
          @user = user.find(params[:id])
          if params[:secret] == @user.created_at.to_i
            render 'edit'
          else
            redirect_to root_path
          end
        end

    It doesn't work - you're always redirected to root_path. It works if I modify it like this:

        def edit
          @user = user.find(params[:id])
          if params[:secret] == "1293894219"
          ...

    1293894219 is created_at.to_i for a particular user. Do you have any ideas why?
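
    The likely culprit is the types: params[:secret] arrives as a String, while created_at.to_i is an Integer, so == is always false. Comparing like types fixes it; a small sketch (a randomly generated, stored token would also be harder to guess than a timestamp):

        def edit
          @user = User.find(params[:id])
          if params[:secret] == @user.created_at.to_i.to_s
            render 'edit'
          else
            redirect_to root_path
          end
        end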

    Read the article

  • generate 10 UUID records and save it it database in rails

    - by user662503
    I need to create a certain number of UUID records (based on the selection of a drop down) and save them in the database. At the moment I am generating only one unique id. Can this be done in the model in this way, or do I need to write a helper file for that?

        def generate_unique_token=(value)
          self.secret = Base64.encode64(UUIDTools::UUID.random_create)[0..8]
        end

    My controller:

        def create
          @secretcode = Secretcode.new(params[:secretcode])
          @user = User.new(params[:user])
          @secretcode.user_id = @user
          @secretcode.generate_unique_token = params[:secretcode][:secret]
          if @secretcode.valid?
            @secretcode.save
            redirect_to secretcodes_path
          else
            render 'new'
          end
        end

    My view page:

        <%= form_for(@secretcode) do |f| %>
          <%= f.select(:secret, options_for_select([['1',1], ['10',10], ['20',20], ['50',50]['100',100]])) %>
          <%= render 'layouts/error' %>
          <%= f.label :secret %>
          <%= f.hidden_field :user %>
          <%= f.submit :generate %>
        <% end %>
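
    One way to honour the drop-down as a count is to loop in the create action and build one record per iteration, reusing the model's own generate_unique_token= writer. A sketch, assuming Secretcode belongs_to :user and that the select value is the number of codes wanted:

        def create
          @user = User.new(params[:user])
          count = params[:secretcode][:secret].to_i   # e.g. 1, 10, 20, 50 or 100

          count.times do
            secretcode = Secretcode.new(params[:secretcode])
            secretcode.user = @user                    # assumes a belongs_to :user association
            secretcode.generate_unique_token = nil     # the model assigns a fresh UUID-based secret
            secretcode.save!
          end

          redirect_to secretcodes_path
        end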

    Read the article

  • Evolution crashes

    - by allenskd
    Well, somehow it started to crash for no reason. This is what I'm getting in the terminal, not sure yet:

        ** Message: secret service operation failed: The name org.freedesktop.secrets was not provided by any .service files
        ** Message: secret service operation failed: The name org.freedesktop.secrets was not provided by any .service files

        (evolution:8246): gtkhtml-editor-WARNING **: lc: No such language

        ** Gtk:ERROR:/build/buildd/gtk+2.0-2.22.0/gtk/gtkrecentmanager.c:1942:get_icon_fallback: assertion failed: (retval != NULL)
        Aborted

    It has happened to a few GTK apps I've installed in my Kubuntu. Any ideas on how to fix this?

    Read the article

  • Creating WPF Prototypes with SketchFlow

    Prototyping with SketchFlow transforms what was once a frustrating and time-consuming chore. With SketchFlow, WPF prototypes can be created and changed with amazing ease. SketchFlow is WPF's secret weapon. Well, it was secret until Michael Sorens produced this article.

    Read the article

  • TransportWithMessageCredential & Service Bus – Introduction

    - by Michael Stephenson
    Recently we have been working on a project using the Windows Azure Service Bus to expose line-of-business applications. One of the topics we discussed a lot was the security aspects of the solution. Most of the samples you see for the Windows Azure Service Bus use a shared secret with the Access Control Service to protect the service bus endpoint, but one of the problems we found is that, in this scenario, any claims resulting from credentials supplied by the client are not passed through to the service listening on the service bus endpoint. As an example of this, we originally were hoping that we could give two different clients their own shared secret key, and the issuer for each would indicate which client it was. If the claims had flowed to the listening service, then we could check that a message sent by client one was of a type they are allowed to send. Unfortunately this claim isn't flowed to the listening service, so we were unable to implement this scenario.

    We had also seen samples suggesting that changing the relayClientAuthenticationType attribute would allow you to authenticate the client within the service itself rather than with ACS. While this was interesting, it wasn't exactly what we wanted. Removing the step where access to the relay endpoint is protected by authentication against ACS means that anyone could send messages via the service bus to the on-premise listening service, which would then authenticate clients. In our scenario we certainly didn't want to allow clients to skip the ACS authentication step, because this could open up two attack opportunities for an attacker. The first would allow an attacker to send messages through to our on-premise servers and potentially cause a denial of service situation. The second is the same kind of attack: by running lots of messages through the service bus which were then rejected, the attacker would cause us to incur charges per message on our Windows Azure account.

    The correct way to implement our desired scenario is to combine one of the common options for authenticating against ACS, so the service bus endpoint cannot be accessed by an unauthenticated caller, with the normal WCF security features using the TransportWithMessageCredential security option. Looking around I could not find any guidance on how to implement this correctly, so on the back of setting this up I decided to write a couple of articles to walk through a couple of the common scenarios you may be interested in. These are available at the following links:

    - Walkthrough - Combining shared secret and username token
    - Walkthrough - Combining shared secret and certificates

    Read the article

  • SEO Secrets an SEO Specialist Won't Want You to Know About

    The marketplace for self-proclaimed 'SEO secrets' is littered with shoddy SEO specialists and cowboys trying to push the common information available to anyone with a computer, an internet connection, and a search engine. Here's the breaking news, people - there is no secret email that Google sends out to these SEO specialists, and they don't keep a stack of top secret documents back just for the lucky few. It just doesn't work like that.

    Read the article

  • Apache cannot access remotely

    - by MMRUSer
    I have set up and configured Apache 2.2 on Red Hat EL, but I cannot access it remotely (through a web browser). Here's my Apache log:

        [Sun Apr 11 05:58:12 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/sbi$
        [Sun Apr 11 05:58:12 2010] [notice] Digest: generating secret for digest authen$
        [Sun Apr 11 05:58:12 2010] [notice] Digest: done
        [Sun Apr 11 05:58:13 2010] [notice] Apache/2.2.3 (Red Hat) configured -- resumi$
        [Sun Apr 11 05:59:32 2010] [notice] caught SIGTERM, shutting down
        [Sun Apr 11 06:06:38 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/sbi$
        [Sun Apr 11 06:06:38 2010] [notice] Digest: generating secret for digest authen$
        [Sun Apr 11 06:06:38 2010] [notice] Digest: done
        [Sun Apr 11 06:06:39 2010] [notice] Apache/2.2.3 (Red Hat) configured -- resumi$
        [Sun Apr 11 06:10:13 2010] [notice] caught SIGTERM, shutting down
        [Sun Apr 11 06:14:29 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/sbi$
        [Sun Apr 11 06:14:29 2010] [notice] Digest: generating secret for digest authen$
        [Sun Apr 11 06:14:29 2010] [notice] Digest: done
        [Sun Apr 11 06:14:29 2010] [notice] Apache/2.2.3 (Red Hat) configured -- resumi$
        [Sun Apr 11 06:37:05 2010] [notice] caught SIGTERM, shutting down
        [Sun Apr 11 06:37:05 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/sbi$
        [Sun Apr 11 06:37:05 2010] [notice] Digest: generating secret for digest authen$
        [Sun Apr 11 06:37:05 2010] [notice] Digest: done
        [Sun Apr 11 06:37:05 2010] [notice] Apache/2.2.3 (Red Hat) configured -- resumi$

    http://x.x.x.x.x./ does not work.
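
    Nothing in that log looks like an error, which usually points at the network path rather than Apache; on a default Red Hat install the bundled firewall blocks port 80. A hedged check and fix, run as root (the save step assumes the stock iptables service):

        # Is Apache actually listening?
        netstat -tlnp | grep :80

        # Open HTTP in the firewall and persist the rule.
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        service iptables save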

    Read the article

  • xampp apache on windows 7 returns http header only

    - by bumperbox
    I am having issues with XAMPP running on Windows 7 RC32. I type in localhost and get a header back only, no page content. Some days it works fine; other days I can't get it to work after multiple attempts, reboot or otherwise. The request doesn't even get put into the access log, which seems unusual. Here is the log file at startup, in case that helps. Any ideas?

        [Wed Sep 09 12:27:08 2009] [notice] Digest: generating secret for digest authentication ...
        [Wed Sep 09 12:27:08 2009] [notice] Digest: done
        [Wed Sep 09 12:27:09 2009] [notice] Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i PHP/5.2.9 configured -- resuming normal operations
        [Wed Sep 09 12:27:09 2009] [notice] Server built: Dec 10 2008 00:10:06
        [Wed Sep 09 12:27:09 2009] [notice] Parent: Created child process 2500
        [Wed Sep 09 12:27:10 2009] [notice] Digest: generating secret for digest authentication ...
        [Wed Sep 09 12:27:10 2009] [notice] Digest: done
        [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Child process is running
        [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Acquired the start mutex.
        [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting 250 worker threads.
        [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting thread to listen on port 443.
        [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting thread to listen on port 80.
        [Wed Sep 09 12:27:15 2009] [notice] Parent: child process exited with status 255 -- Restarting.
        [Wed Sep 09 12:27:15 2009] [notice] Digest: generating secret for digest authentication ...
        [Wed Sep 09 12:27:15 2009] [notice] Digest: done
        [Wed Sep 09 12:27:16 2009] [notice] Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i PHP/5.2.9 configured -- resuming normal operations
        [Wed Sep 09 12:27:16 2009] [notice] Server built: Dec 10 2008 00:10:06
        [Wed Sep 09 12:27:16 2009] [notice] Parent: Created child process 3252
        [Wed Sep 09 12:27:17 2009] [notice] Digest: generating secret for digest authentication ...
        [Wed Sep 09 12:27:17 2009] [notice] Digest: done
        [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Child process is running
        [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Acquired the start mutex.
        [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting 250 worker threads.
        [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting thread to listen on port 443.
        [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting thread to listen on port 80.

    Read the article

  • Snort not detecting outgoing traffic

    - by Reacen
    I'm using Snort 2.9 on Windows Server 2008 R2 x64, with a very simple configuration that goes like this:

        # Entire content of Snort.conf:
        alert tcp any any -> any any (sid:5000000; content:"_secret_"; msg:"TRIGGERED";)

        # command line:
        snort.exe -c etc/Snort.conf -l etc/log -A console

    Using my browser, I send the string "_secret_" in the URL to my server (where Snort is located), for example http://myserver.com/index.php?_secret_. Snort receives it and throws an alert; it works, no problem! But then I try something like this:

        <?php // (index.php)
        header('XTest: _secret_'); // header
        echo '_secret_';           // data
        ?>

    If I just request http://myserver.com/index.php, Snort does not detect anything in the outgoing traffic, even though the PHP file is sending the same string both in the headers and in the body, with no compression or encoding whatsoever (I checked using Wireshark). This looks to me like a Snort problem; no matter what I do, it only detects incoming packets. Has anyone ever faced this sort of problem with Snort? Any idea how to fix it?
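
    One frequent explanation for "inbound is detected, outbound never is" on Windows is TCP checksum offloading: locally generated packets are captured before the NIC fills in the checksum, so Snort discards them as invalid. A hedged thing to try is disabling checksum verification with the -k option (offloading can also be switched off on the network adapter itself):

        snort.exe -c etc/Snort.conf -l etc/log -A console -k none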

    Read the article

  • Updating Cisco VPN config to add vpnc support

    - by Igor Kuzmitshov
    I have a Cisco 1841 configured for VPN connections of two types:

    - Peer-to-peer for partners' routers (IPsec): using a different crypto isakmp key and a crypto map with set peer, set transform-set and match address for every peer (same map name, different priorities). That crypto map name is applied to the WAN interface.
    - Client access (PPTP): using a vpdn-group with accept-dialin protocol pptp.

    Now a new partner wants to connect using the vpnc client. The latter needs an IPSec ID (group name) and an IPSec secret in addition to a username and password. I guess the IPSec secret is a pre-shared key that can be specified with crypto isakmp key on the Cisco, but I could not find any VPN tutorials involving groups. Hence, my questions:

    - How do I add an IPSec ID (group name) and IPSec secret on a Cisco router for vpnc connections?
    - Should I add a new crypto map matching all addresses as well?
    - Is it possible to add this configuration without breaking the existing setup?

    Thank you.
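
    For what it's worth, the vpnc "IPSec ID / IPSec secret" pair normally maps to an IOS Easy VPN style client configuration group and its pre-shared key, which lives alongside (rather than inside) the existing static crypto map entries. A very rough, incomplete fragment of the group definition only (the group name, key and pool are placeholders; a full Easy VPN server setup also needs XAUTH, client authentication/authorization lists and a dynamic crypto map):

        crypto isakmp client configuration group PARTNER-GROUP
         key MySharedSecret
         pool PARTNER-POOL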

    Read the article
