Search Results

Search found 2011 results on 81 pages for 'token bucket'.


  • Replacing DropBox with: Amazon S3 + SSL + GPG/TrueCrypt + Mounting on OSX ??

    - by Matt Rogish
    So, right now we're using DropBox to share various data files around between approximately 10 Mac OS X systems. However, we already have an S3 account, and putting everyone on the lowest DropBox plan at $10/mo seems too expensive. We'd like to avoid any kind of local storage (sharing a disk on a desktop or something) since we're a geographically distributed team. So, I am contemplating something that would allow us to replace DropBox with our own home-grown solution. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as DropBox we're all comfortable with that. There are plenty of docs out there that have bits and pieces of what I want, but some of the tools don't seem to fit the requirements: (1) transport security via SSL to the bucket, (2) encryption of bucket contents, (3) bi-directional syncing. Most of the scripts I can find on the internet use "duplicity", which appears to fail #1 (it doesn't look like duplicity supports SSL to S3 - the docs don't say, but the protocol looks like plain old HTTP: http://www.nongnu.org/duplicity/duplicity.1.html#sect6 ). Many scripts use gpg to encrypt files. This seems like it could work; however, I have to make sure that each OSX client is able to use the same key to encrypt and decrypt files (key management is left to me). FTP and other client-based apps don't seem to support this at all. Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store. As we'd be using Amazon S3 as the "repository", they fail this one. Whew. So, I'd love a single tool that does this, but after an exhaustive search I don't think one exists. In my mind, the magical tool would be some combination of TrueCrypt and rsync. I'd be happy just knowing which tools out there can fulfill my 3 requirements; after that I can stitch together the rest. Any thoughts? THANKS!
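    For what it's worth, the encrypt-and-upload leg of this (requirements 1 and 2; the bi-directional sync is the hard part) can be scripted in a few lines. A rough Python sketch, assuming boto3 and the gpg command line are available; the bucket name and passphrase file are placeholders, and exact gpg passphrase options vary by GnuPG version:

        import subprocess
        import boto3

        def encrypt_and_upload(path, bucket, passphrase_file):
            # Symmetric GPG encryption so every client only needs the shared passphrase
            encrypted = path + ".gpg"
            subprocess.run(
                ["gpg", "--batch", "--yes", "--passphrase-file", passphrase_file,
                 "--symmetric", "--cipher-algo", "AES256", "--output", encrypted, path],
                check=True)
            # boto3 talks to S3 over HTTPS by default, which covers the SSL requirement
            boto3.client("s3").upload_file(encrypted, bucket, encrypted)

        encrypt_and_upload("notes.txt", "my-team-bucket", "/Users/me/.team-passphrase")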

    Read the article

  • Updating a SharePoint master page via a solution (WSP)

    - by Kelly Jones
    In my last blog post, I wrote how to deploy a SharePoint theme using Features and a solution package.  As promised in that post, here is how to update an already deployed master page. There are several ways to update a master page in SharePoint.  You could upload a new version to the master page gallery, or you could upload a new master page to the gallery, and then set the site to use this new page.  Manually uploading your master page to the master page gallery might be the best option, depending on your environment.  For my client, I did these steps in code, which is what they preferred. (Image courtesy of: http://www.joiningdots.net/blog/2007/08/sharepoint-and-quick-launch.html ) Before you decide which method you need to use, take a look at your existing pages.  Are they using the SharePoint dynamic token or the static token for the master page reference?  The wha, huh? SO, there are four ways to tell an .aspx page hosted in SharePoint which master page it should use: “~masterurl/default.master” – tells the page to use the default.master property of the site “~masterurl/custom.master” – tells the page to use the custom.master property of the site “~site/default.master” – tells the page to use the file named “default.master” in the site’s master page gallery “~sitecollection/default.master” – tells the page to use the file named “default.master” in the site collection’s master page gallery For more information about these tokens, take a look at this article on MSDN. Once you determine which token your existing pages are pointed to, then you know which file you need to update.  So, if the ~masterurl tokens are used, then you upload a new master page, either replacing the existing one or adding another one to the gallery.  If you’ve uploaded a new file with a new name, you’ll just need to set it as the master page either through the UI (MOSS only) or through code (MOSS or WSS Feature receiver code – or using SharePoint Designer). If the ~site or ~sitecollection tokens were used, then you’re limited to either replacing the existing master page, or editing all of your existing pages to point to another master page.  In most cases, it probably makes sense to just replace the master page. For my project, I’m working with WSS and the existing pages are set to the ~sitecollection token.  Based on this, I decided to just upload a new version of the existing master page (and not modify the dozens of existing pages). Also, since my client prefers Features and solutions, I created a master page Feature and a corresponding Feature Receiver.  For information on creating the elements and feature files, check out this post: http://sharepointmagazine.net/technical/development/deploying-the-master-page . This works fine, unless you are overwriting an existing master page, which was my case.  You’ll run into errors because the master page file needs to be checked out, replaced, and then checked in.  I wrote code in my Feature Activated event handler to accomplish these steps. 
Here are the steps necessary in code:

    1. Get the file name from the elements file of the Feature
    2. Check out the file from the master page gallery
    3. Upload the file to the master page gallery
    4. Check in the file to the master page gallery

Here’s the code in my Feature Receiver:

    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        try
        {
            SPElementDefinitionCollection col = properties.Definition.GetElementDefinitions(System.Globalization.CultureInfo.CurrentCulture);

            using (SPWeb curweb = GetCurWeb(properties))
            {
                foreach (SPElementDefinition ele in col)
                {
                    if (string.Compare(ele.ElementType, "Module", true) == 0)
                    {
                        // <Module Name="DefaultMasterPage" List="116" Url="_catalogs/masterpage" RootWebOnly="FALSE">
                        //   <File Url="myMaster.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE"
                        //         Path="MasterPages/myMaster.master" />
                        // </Module>
                        string Url = ele.XmlDefinition.Attributes["Url"].Value;
                        foreach (System.Xml.XmlNode file in ele.XmlDefinition.ChildNodes)
                        {
                            string Url2 = file.Attributes["Url"].Value;
                            string Path = file.Attributes["Path"].Value;
                            string fileType = file.Attributes["Type"].Value;

                            if (string.Compare(fileType, "GhostableInLibrary", true) == 0)
                            {
                                //Check out file in document library
                                SPFile existingFile = curweb.GetFile(Url + "/" + Url2);

                                if (existingFile != null)
                                {
                                    if (existingFile.CheckOutStatus != SPFile.SPCheckOutStatus.None)
                                    {
                                        throw new Exception("The master page file is already checked out. Please make sure the master page file is checked in, before activating this feature.");
                                    }
                                    else
                                    {
                                        existingFile.CheckOut();
                                        existingFile.Update();
                                    }
                                }

                                //Upload file to document library
                                string filePath = System.IO.Path.Combine(properties.Definition.RootDirectory, Path);
                                string fileName = System.IO.Path.GetFileName(filePath);
                                char slash = Convert.ToChar("/");
                                string[] folders = existingFile.ParentFolder.Url.Split(slash);

                                if (folders.Length > 2)
                                {
                                    Logger.logMessage("More than two folders were detected in the library path for the master page. Only two are supported.",
                                        Logger.LogEntryType.Information); //custom logging component
                                }

                                SPFolder myLibrary = curweb.Folders[folders[0]].SubFolders[folders[1]];

                                FileStream fs = File.OpenRead(filePath);

                                SPFile newFile = myLibrary.Files.Add(fileName, fs, true);

                                myLibrary.Update();
                                newFile.CheckIn("Updated by Feature", SPCheckinType.MajorCheckIn);
                                newFile.Update();
                            }
                        }
                    }
                }
            }
        }
        catch (Exception ex)
        {
            string msg = "Error occurred during feature activation";
            Logger.logException(ex, msg, "");
        }
    }

    /// <summary>
    /// Using a Feature's properties, get a reference to the Current Web
    /// </summary>
    /// <param name="properties"></param>
    public SPWeb GetCurWeb(SPFeatureReceiverProperties properties)
    {
        SPWeb curweb;

        //Check if the parent of the web is a site or a web
        if (properties != null && properties.Feature.Parent.GetType().ToString() == "Microsoft.SharePoint.SPWeb")
        {
            //Get web from parent
            curweb = (SPWeb)properties.Feature.Parent;
        }
        else
        {
            //Get web from Site
            using (SPSite cursite = (SPSite)properties.Feature.Parent)
            {
                curweb = (SPWeb)cursite.OpenWeb();
            }
        }

        return curweb;
    }

This did the trick.
It allowed me to update my existing master page through an easily repeatable process (which is great when you are working with more than one environment and want to do things like TEST it!).  I did run into what I would classify as a strange issue with one of my subsites, but that’s the topic for another blog post.

    Read the article

  • How should secret files be pushed to an EC2 (on AWS) Ruby on Rails application?

    - by nikc
    How should secret files be pushed to an EC2 Ruby on Rails application using amazon web services with their elastic beanstalk? I add the files to a git repository, and I push to github, but I want to keep my secret files out of the git repository. I'm deploying to aws using: git aws.push The following files are in the .gitignore: /config/database.yml /config/initializers/omniauth.rb /config/initializers/secret_token.rb Following this link I attempted to add an S3 file to my deployment: http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/customize-containers.html Quoting from that link: Example Snippet The following example downloads a zip file from an Amazon S3 bucket and unpacks it into /etc/myapp: sources: /etc/myapp: http://s3.amazonaws.com/mybucket/myobject Following those directions I uploaded a file to an S3 bucket and added the following to a private.config file in the .elasticbeanstalk .ebextensions directory: sources: /var/app/current/: https://s3.amazonaws.com/mybucket/config.tar.gz That config.tar.gz file will extract to: /config/database.yml /config/initializers/omniauth.rb /config/initializers/secret_token.rb However, when the application is deployed the config.tar.gz file on the S3 host is never copied or extracted. I still receive errors that the database.yml couldn't be located and the EC2 log has no record of the config file, here is the error message: Error message: No such file or directory - /var/app/current/config/database.yml Exception class: Errno::ENOENT Application root: /var/app/current
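    For reference, what the sources key is meant to do at deploy time is roughly the following. This is a hedged Python sketch (the bucket, key and target path are placeholders, and it assumes boto3 and valid AWS credentials on the instance), not Elastic Beanstalk's actual implementation:

        import tarfile
        import boto3

        def fetch_config(bucket, key, dest="/var/app/current"):
            # Download the archive using real credentials; a private object fetched
            # anonymously over a plain URL would simply be denied.
            boto3.client("s3").download_file(bucket, key, "/tmp/config.tar.gz")
            # Unpack config/database.yml etc. under the application root
            with tarfile.open("/tmp/config.tar.gz") as tar:
                tar.extractall(dest)

        fetch_config("mybucket", "config.tar.gz")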

    Read the article

  • Why is uploading to S3 so slow?

    - by Tom Marthenal
    I am using s3cmd to upload to S3: # s3cmd put 1gb.bin s3://my-bucket/1gb.bin 1gb.bin -> s3://my-bucket/1gb.bin [1 of 1] 366706688 of 1073741824 34% in 371s 963.22 kB/s I am uploading from Linode, which has an outgoing bandwidth cap of 50 Mb/s according to support (roughly 6 MB/s). Why am I getting such slow upload speeds to S3, and how can I improve them? Update: Uploading the same file via SCP to an m1.medium EC2 instance (SCP from my Linode to the instance's EBS drive) gives about 44 Mb/s according to iftop (any compression done by the cipher is not a factor). Traceroute: Here's a traceroute to the server it's uploading to (according to tcpdump). # traceroute s3-1-w.amazonaws.com. traceroute to s3-1-w.amazonaws.com. (72.21.194.32), 30 hops max, 60 byte packets 1 207.99.1.13 (207.99.1.13) 0.635 ms 0.743 ms 0.723 ms 2 207.99.53.41 (207.99.53.41) 0.683 ms 0.865 ms 0.915 ms 3 vlan801.tbr1.mmu.nac.net (209.123.10.9) 0.397 ms 0.541 ms 0.527 ms 4 0.e1-1.tbr1.tl9.nac.net (209.123.10.102) 1.400 ms 1.481 ms 1.508 ms 5 0.gi-0-0-0.pr1.tl9.nac.net (209.123.11.62) 1.602 ms 1.677 ms 1.699 ms 6 equinix02-iad2.amazon.com (206.223.115.35) 9.393 ms 8.925 ms 8.900 ms 7 72.21.220.41 (72.21.220.41) 32.610 ms 9.812 ms 9.789 ms 8 72.21.222.141 (72.21.222.141) 9.519 ms 9.439 ms 9.443 ms 9 72.21.218.3 (72.21.218.3) 10.245 ms 10.202 ms 10.154 ms 10 * * * 11 * * * 12 * * * 13 * * * 14 * * * 15 * * * 16 * * * 17 * * * 18 * * * 19 * * * 20 * * * 21 * * * 22 * * * 23 * * * 24 * * * 25 * * * 26 * * * 27 * * * 28 * * * 29 * * * 30 * * * The latency looks reasonable, at least until the server stopped responding to ping requests.

    Read the article

  • Cloudfront - How to invalidate objects in a distribution that was transformed from secured to public?

    - by Gil
    The setting I have an Amazon Cloudfront distribution that was originally set as secured. Objects in this distribution required a URL signing. For example, a valid URL used to be of the following format: https://d1stsppuecoabc.cloudfront.net/images/TheImage.jpg?Expires=1413119282&Signature=NLLRTVVmzyTEzhm-ugpRymi~nM2v97vxoZV5K9sCd4d7~PhgWINoTUVBElkWehIWqLMIAq0S2HWU9ak5XIwNN9B57mwWlsuOleB~XBN1A-5kzwLr7pSM5UzGn4zn6GRiH-qb2zEoE2Fz9MnD9Zc5nMoh2XXwawMvWG7EYInK1m~X9LXfDvNaOO5iY7xY4HyIS-Q~xYHWUnt0TgcHJ8cE9xrSiwP1qX3B8lEUtMkvVbyLw__&Key-Pair-Id=APKAI7F5R77FFNFWGABC The distribution points to an S3 bucket that also used to be secured (it only allowed access through the cloudfront). What happened At some point, the URL singing expired and would return a 403. Since we no longer need to keep the same security level, I recently changed the setting of the cloudfront distribution and of the S3 bucket it is pointing to, both to be public. I then tried to invalidate objects in this distribution. Invalidation did not throw any errors, however the invalidation did not seem to succeed. Requests to the same cloudfront URL (with or without the query string) still return 403. The response header looks like: HTTP/1.1 403 Forbidden Server: CloudFront Date: Mon, 18 Aug 2014 15:16:08 GMT Content-Type: text/xml Content-Length: 110 Connection: keep-alive X-Cache: Error from cloudfront Via: 1.1 3abf650c7bf73e47515000bddf3f04a0.cloudfront.net (CloudFront) X-Amz-Cf-Id: j1CszSXz0DO-IxFvHWyqkDSdO462LwkfLY0muRDrULU7zT_W4HuZ2B== Things I tried I tried to set another cloudfront distribution that points to the same S3 as origin server. Requests to the same object in the new distribution were successful. The question Did anyone encounter the same situation where a cloudfront URL that returns 403 cannot be invalidated? Is there any reason why wouldn't the object get invalidated? Thanks for your help!
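    For comparison, the invalidation request itself is a single call against the CloudFront API. A minimal Python sketch using boto3 (the distribution ID is a placeholder; this only mirrors what the console invalidation does and does not address the 403 itself):

        import time
        import boto3

        cloudfront = boto3.client("cloudfront")
        cloudfront.create_invalidation(
            DistributionId="E1ABCDEFGHIJKL",          # placeholder distribution ID
            InvalidationBatch={
                "Paths": {"Quantity": 1, "Items": ["/images/TheImage.jpg"]},
                "CallerReference": str(time.time()),  # any unique string
            },
        )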

    Read the article

  • Windows Azure Use Case: New Development

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx Description: Computing platforms evolve over time. Originally computers were directed by hardware wiring - that is, the “code” was the path of the wiring that directed an electrical signal from one component to another, or in some cases a physical switch controlled the path. From there software was developed, first in a very low machine language, then when compilers were created, computer languages could more closely mimic written statements. These language statements can be compiled into the lower-level machine language still used by computers today. Microprocessors replaced logic circuits, sometimes with fewer instructions (Reduced Instruction Set Computing, RISC) and sometimes with more instructions (Complex Instruction Set Computing, CISC). The reason this history is important is that along each technology advancement, computer code has adapted. Writing software for a RISC architecture is significantly different than developing for a CISC architecture. And moving to a Distributed Architecture like Windows Azure also has specific implementation details that our code must follow. But why make a change? As I’ve described, we need to make the change to our code to follow advances in technology. There’s no point in change for its own sake, but as a new paradigm offers benefits to our users, it’s important for us to leverage those benefits where it makes sense. That’s most often done in new development projects. It’s a far simpler task to take a new project and adapt it to Windows Azure than to try and retrofit older code designed in a previous computing environment. We can still use the same coding languages (.NET, Java, C++) to write code for Windows Azure, but we need to think about the architecture of that code on a new project so that it runs in the most efficient, cost-effective way in a Distributed Architecture. As we receive new requests from the organization for new projects, a distributed architecture paradigm belongs in the decision matrix for the platform target. Implementation: When you are designing new applications for Windows Azure (or any distributed architecture) there are many important details to consider. But at the risk of over-simplification, there are three main concepts to learn and architect within the new code: Stateless Programming - Stateless programming is a prime concept within distributed architectures. Rather than each server owning the complete processing cycle, the information from an operation that needs to be retained (the “state”) should be persisted to another location (like storage) common to all machines involved in the process.  An interesting learning process for Stateless Programming (although not unique to this language type) is to learn Functional Programming. Server-Side Processing - Along with developing using a Stateless Design, the closer you can locate the code processing to the data, the less expensive and faster the code will run. When you control the network layer, this is less important, since you can send vast amounts of data between the server and client, allowing the client to perform processing. In a distributed architecture, you don’t always own the network, so its performance is unpredictable. 
Also, you may not be able to control the platform the user is on (such as a smartphone, PC or tablet), so it’s imperative to deliver only results and graphical elements where possible.  Token-Based Authentication - Also called “Claims-Based Authorization”, this coding practice means that instead of allowing a user to log on once and then running all code in that context, a more granular level of security is used. A “token” or “claim”, often represented as a Certificate, is sent along for a series of requests or even a single request. In other words, every call to the code is authenticated against the token, rather than allowing a user free rein within the code call. While this is more work initially, it can bring a greater level of security, and it is far more resilient to disconnections. Resources: See the references of “Nondistributed Deployment” and “Distributed Deployment” at the top of this article for more information with graphics:  http://msdn.microsoft.com/en-us/library/ee658120.aspx  Stack Overflow has a good thread on functional programming: http://stackoverflow.com/questions/844536/advantages-of-stateless-programming  Another good discussion on Stack Overflow on server-side processing is here: http://stackoverflow.com/questions/3064018/client-side-or-server-side-processing Claims Based Authorization is described here: http://msdn.microsoft.com/en-us/magazine/ee335707.aspx
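    To make the token-based authentication idea above concrete, here is a deliberately tiny Python sketch; the names and the in-memory token store are invented purely for illustration (in a real distributed app the tokens would live in shared storage, not process memory). Every call re-validates the caller's token instead of trusting an earlier log-on:

        # Hypothetical token store; stands in for a shared database or cache.
        VALID_TOKENS = {"3f9a...": "alice"}

        def authenticate(token):
            # Every request presents its token; no server-side login session is assumed.
            user = VALID_TOKENS.get(token)
            if user is None:
                raise PermissionError("invalid or expired token")
            return user

        def get_report(token, report_id):
            user = authenticate(token)          # re-checked on every call
            return "report %s for %s" % (report_id, user)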

    Read the article

  • A more elegant way of embedding a SOAP security header in Silverlight 4

    - by Your DisplayName here!
    The current situation with Silverlight is, that there is no support for the WCF federation binding. This means that all security token related interactions have to be done manually. Requesting the token from an STS is not really the bad part, sending it along with outgoing SOAP messages is what’s a little annoying. So far you had to wrap all calls on the channel in an OperationContextScope wrapping an IContextChannel. This “programming model” was a little disruptive (in addition to all the async stuff that you are forced to do). It seems that starting with SL4 there is more support for traditional WCF extensibility points – especially IEndpointBehavior, IClientMessageInspector. I never read somewhere that these are new features in SL4 – but I am pretty sure they did not exist in SL3. With the above mentioned interfaces at my disposal, I thought I have another go at embedding a security header – and yeah – I managed to make the code much prettier (and much less bizarre). Here’s the code for the behavior/inspector: public class IssuedTokenHeaderInspector : IClientMessageInspector {     RequestSecurityTokenResponse _rstr;       public IssuedTokenHeaderInspector(RequestSecurityTokenResponse rstr)     {         _rstr = rstr;     }       public void AfterReceiveReply(ref Message reply, object correlationState)     { }       public object BeforeSendRequest(ref Message request, IClientChannel channel)     {         request.Headers.Add(new IssuedTokenHeader(_rstr));                  return null;     } }   public class IssuedTokenHeaderBehavior : IEndpointBehavior {     RequestSecurityTokenResponse _rstr;       public IssuedTokenHeaderBehavior(RequestSecurityTokenResponse rstr)     {         if (rstr == null)         {             throw new ArgumentNullException();         }           _rstr = rstr;     }       public void ApplyClientBehavior(       ServiceEndpoint endpoint, ClientRuntime clientRuntime)     {         clientRuntime.MessageInspectors.Add(new IssuedTokenHeaderInspector(_rstr));     }       // rest omitted } This allows to set up a proxy with an issued token header and you don’t have to worry anymore with embedding the header manually with every call: var client = GetWSTrustClient();   var rst = new RequestSecurityToken(WSTrust13Constants.KeyTypes.Symmetric) {     AppliesTo = new EndpointAddress("https://rp/") };   client.IssueCompleted += (s, args) => {     _proxy = new StarterServiceContractClient();     _proxy.Endpoint.Behaviors.Add(new IssuedTokenHeaderBehavior(args.Result));   };   client.IssueAsync(rst); Since SL4 also support the IExtension<T> interface, you can also combine this with Nicholas Allen’s AutoHeaderExtension.

    Read the article

  • Need help regarding one LALR(1) parsing.

    - by AppleGrew
    I am trying to parse a context-free language, called Context Free Art. I have created its parser in Javascript using a YACC-like JS LALR(1) parser generator JSCC. Take the example of following CFA (Context Free Art) code. This code is a valid CFA. startshape A rule A { CIRCLE { s 1} } Notice the A and s in above. s is a command to scale the CIRCLE, but A is just a name of this rule. In the language's grammar I have set s as token SCALE and A comes under token STRING (I have a regular expression to match string and it is at the bottom of of all tokens). This works fine, but in the below case it breaks. startshape s rule s { CIRCLE { s 1} } This too is a perfectly valid code, but since my parser marks s after rule as SCALE token so it errors out saying that it was expecting STRING. Now my question is, if there is any way to re-write the production rules of the parser to account for this? The related production rule is:- rule: RULE STRING '{' buncha_replacements '}' [* rule(%2, 1) *] | RULE STRING RATIONAL '{' buncha_replacements '}' [* rule(%2, 1*%3) *] ; One simple solution I can think of is create a copy of above rule with STRING replaced by SCALE, but this is just one of the many similar rules which would need such fixing. Furthermore there are many other terminals which can get matched to STRING. So that means way too many rules!

    Read the article

  • Gmail IMAP OAuth for desktop clients

    - by Sabya
    Recently Google announced that they are supporting OAuth for Gmail IMAP/SMTP. I browsed through several of their documents, but I am still confused about whether they support OAuth for installed applications. 1. In this documentation they say: Note: Though the OAuth protocol supports the desktop/installed application use case, Google only supports OAuth for web applications. But they also have a document for OAuth for installed applications. 2. When I read the OAuth specification they point to, it says (in section 11.7): In many applications, the Consumer application will be under the control of potentially untrusted parties. For example, if the Consumer is a freely available desktop application, an attacker may be able to download a copy for analysis. In such cases, attackers will be able to recover the Consumer Secret used to authenticate the Consumer to the Service Provider. Also I think the disclaimer in point 1 above is about the Google Data APIs, and surely IMAP/SMTP is not a part of them. I understand that for installed applications I can have a setup like this: Have a small web-app at, say, example.com for my application. This web-app talks to Google and gets the access token. The installed application talks to example.com only to get the access token. The installed application then talks to Google with the access token. I am now confused. Is this the only way?

    Read the article

  • How to encrypt an NSString in Objective C with DES in ECB-Mode?

    - by blauesocke
    Hi, I am trying to encrypt an NSString in Objective C on the iPhone. At least I wan't to get a string like "TmsbDaNG64lI8wC6NLhXOGvfu2IjLGuEwc0CzoSHnrs=" when I encode "us=foo;pw=bar;pwAlg=false;" by using this key: "testtest". My problem for now is, that CCCrypt always returns "4300 - Parameter error" and I have no more idea why. This is my code (the result of 5 hours google and try'n'error): NSString *token = @"us=foo;pw=bar;pwAlg=false;"; NSString *key = @"testtest"; const void *vplainText; size_t plainTextBufferSize; plainTextBufferSize = [token length]; vplainText = (const void *) [token UTF8String]; CCCryptorStatus ccStatus; uint8_t *bufferPtr = NULL; size_t bufferPtrSize = 0; size_t *movedBytes; bufferPtrSize = (plainTextBufferSize + kCCBlockSize3DES) & ~(kCCBlockSize3DES - 1); bufferPtr = malloc( bufferPtrSize * sizeof(uint8_t)); memset((void *)bufferPtr, 0x0, bufferPtrSize); // memset((void *) iv, 0x0, (size_t) sizeof(iv)); NSString *initVec = @"init Vec"; const void *vkey = (const void *) [key UTF8String]; const void *vinitVec = (const void *) [initVec UTF8String]; ccStatus = CCCrypt(kCCEncrypt, kCCAlgorithmDES, kCCOptionECBMode, vkey, //"123456789012345678901234", //key kCCKeySizeDES, NULL,// vinitVec, //"init Vec", //iv, vplainText, //"Your Name", //plainText, plainTextBufferSize, (void *)bufferPtr, bufferPtrSize, movedBytes); NSString *result; NSData *myData = [NSData dataWithBytes:(const void *)bufferPtr length:(NSUInteger)movedBytes]; result = [myData base64Encoding];

    Read the article

  • WCF MessageHeaders in OperationContext.Current

    - by Nate Bross
    If I use code like this [just below] to add Message Headers to my OperationContext, will all future out-going messages contain that data on any new ClientProxy defined from the same "run" of my application? The objective, is to pass a parameter or two to each OpeartionContract w/out messing with the signature of the OperationContract, since the parameters being passed will be consistant for all requests for a given run of my client application. public void DoSomeStuff() { var proxy = new MyServiceClient(); Guid myToken = Guid.NewGuid(); MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken); MessageHeader untyped = mhg.GetUntypedHeader("token", "ns"); OperationContext.Current.OutgoingMessageHeaders.Add(untyped); proxy.DoOperation(...); } public void DoSomeOTHERStuff() { var proxy = new MyServiceClient(); Guid myToken = Guid.NewGuid(); MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken); MessageHeader untyped = mhg.GetUntypedHeader("token", "ns"); OperationContext.Current.OutgoingMessageHeaders.Add(untyped); proxy.DoOtherOperation(...); } In other words, is it safe to refactor the above code like this? bool isSetup = false; public void SetupMessageHeader() { if(isSetup) { return; } Guid myToken = Guid.NewGuid(); MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken); MessageHeader untyped = mhg.GetUntypedHeader("token", "ns"); OperationContext.Current.OutgoingMessageHeaders.Add(untyped); isSetup = true; } public void DoSomeStuff() { var proxy = new MyServiceClient(); SetupMessageHeader(); proxy.DoOperation(...); } public void DoSomeOTHERStuff() { var proxy = new MyServiceClient(); SetupMessageHeader(); proxy.DoOtherOperation(...); } Since I don't really understand what's happening there, I don't want to cargo cult it and just change it and let it fly if it works, I'd like to hear your thoughts on if it is OK or not.

    Read the article

  • need some help in Abraham twitteroauth class

    - by diEcho
    Hello All, i m learning how to use twitter from twitter developer link on Authenticating Requests with OAuth page i was debugging my code with given procedure on Sending the user to authorization section there is written that if you are using the callback flow, your oauth_callback should have received back your oauth_token (the same that you sent, your "request token") and a field called the oauth_verifier. You'll need that for the next step. Here's the response I received: oauth_token=8ldIZyxQeVrFZXFOZH5tAwj6vzJYuLQpl0WUEYtWc&oauth_verifier=pDNg57prOHapMbhv25RNf75lVRd6JDsni1AJJIDYoTY my original code is require_once('twitteroauth/twitteroauth.php'); require_once('config.php'); /* Build TwitterOAuth object with client credentials. */ $connection = new TwitterOAuth(CONSUMER_KEY, CONSUMER_SECRET); /* Get temporary credentials. */ $request_token = $connection->getRequestToken(OAUTH_CALLBACK); /* Save temporary credentials to session. */ $_SESSION['oauth_token'] = $token = $request_token['oauth_token']; $_SESSION['oauth_token_secret'] = $request_token['oauth_token_secret']; /* If last connection failed don't display authorization link. */ switch ($connection->http_code) { case 200: /* Build authorize URL and redirect user to Twitter. */ echo "<br/>Authorize URL:".$url = $connection->getAuthorizeURL($token); //header('Location: ' . $url); break; default: /* Show notification if something went wrong. */ echo 'Could not connect to Twitter. Refresh the page or try again later.'; } and i m getting Authorize URL: https://twitter.com/oauth/authenticate?oauth_token=BHqbrTjsPcyvaAsfDwfU149aAcZjtw45nhLBeG1c i m not getting above URL having oauth_verifier. please tell me from where do i see/debug that url??

    Read the article

  • Uploading to TwitVid using x_verify_credentials_authorization

    - by deepa
    Hi, I am developing an iPhone application that uploads videos to TwitVid using the library TwitVid-iPhone-OAuth3. I have selected this library since Twitter is shifting to OAuth mechanism of authentication. 1) Does this new library allows to upload without user-name and password parameters? 2) I am using the following steps for uploading videos (assumed that new library allows to upload without user-name and password parameters). But, I am getting the error 'Incorrect Signature'. I am not able to identify the mistake that I am doing here. Could you please help me out to solve this problem. Authenticate the app using OAuth (MGTwitterEngine etc) which redirects the user to the web-page asking for log-in and app authentication. If the user authenticates the app, access token will obtained as final step. Upload the video to TwitVid using the following code snippet: OAuth *authorizer = [[OAuth alloc] initWithConsumerKey: @"my_consumer_key" andConsumerSecret: @"my_consumer_secret"]; authorizer.oauth_token = //The key-part of the access token obtained in step 2 authorizer.oauth_token_secret = //The secret-part of the access token obtained in step 2 authorizer.oauth_token_authorized = YES; //Since authenticated in steps 1 and 2 NSString *authHeader = [authorizer oAuthHeaderForMethod: @"POST" andUrl: @"https://api.twitter.com/1/account/verify_credentials.xml" andParams: nil]; twitVid = [[TwitVid alloc] init]; [mTwitVid setDelegate: self]; [mTwitVid authenticateWithXVerifyCredentialsAuthorization: authHeader xAuthServiceProvider: nil]; Thanks and Regards, Deepa

    Read the article

  • SmtpClient.SendAsync - How to ensure my application doesn't finish before callback?

    - by James
    Hi, I need to send emails asychronously through a console application. I need to do some DB updates on the callback but my application is exiting before the callback code gets run! How can I stop this from happening in a nice manner rather than simply guessing how long to wait before exiting. I would imagine the Async calls get placed in some form of thread? Is it possible to check if any are waiting to be called? Sample Code private static void SendCompletedCallback(object sender, AsyncCompletedEventArgs e) { // Get the unique identifier for this asynchronous operation. String token = (string) e.UserState; if (e.Cancelled) { Console.WriteLine("[{0}] Send canceled.", token); } if (e.Error != null) { Console.WriteLine("[{0}] {1}", token, e.Error.ToString()); } else { // update DB Console.WriteLine("Message sent."); } } public static void Main(string[] args) { var users = Repository.GetUsers(); SmtpClient client = new SmtpClient("Host"); client.SendCompleted += new SendCompletedEventHandler(SendCompletedCallback); MailAddress from = new MailAddress("[email protected]", "System", Encoding.UTF8); foreach (var user in users) { MailAddress to = new MailAddress(user.Email); MailMessage message = new MailMessage(from, to); message.Body = "This is a test"; message.BodyEncoding = System.Text.Encoding.UTF8; message.Subject = "test message 1" + someArrows; message.SubjectEncoding = System.Text.Encoding.UTF8; string userState = String.Format("Message for user id {0}", user.ID); client.SendAsync(message, userState); message.Dispose(); } // need to wait here until I have received a callback for each message // otherwise the application will exit }

    Read the article

  • Django - Override admin site's login form

    - by TrojanCentaur
    I'm currently trying to override the default form used in Django 1.4 when logging in to the admin site (my site uses an additional 'token' field required for users who opt in to Two Factor Authentication, and is mandatory for site staff). Django's default form does not support what I need. Currently, I've got a file in my templates/ directory called templates/admin/login.html, which seems to be correctly overriding the template used with the one I use throughout the rest of my site. The contents of the file are simply as below: # admin/login.html: {% extends "login.html" %} The actual login form is as below: # login.html: {% load url from future %}<!DOCTYPE html> <html> <head> <title>Please log in</title> </head> <body> <div id="loginform"> <form method="post" action="{% url 'id.views.auth' %}"> {% csrf_token %} <input type="hidden" name="next" value="{{ next }}" /> {{ form.username.label_tag }}<br/> {{ form.username }}<br/> {{ form.password.label_tag }}<br/> {{ form.password }}<br/> {{ form.token.label_tag }}<br/> {{ form.token }}<br/> <input type="submit" value="Log In" /> </form> </div> </body> </html> My issue is that the form provided works perfectly fine when accessed using my normal login URLs because I supply my own AuthenticationForm as the form to display, but through the Django Admin login route, Django likes to supply it's own form to this template and thus only the username and password fields render. Is there any way I can make this work, or is this something I am just better off 'hard coding' the HTML fields into the form for?
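    One avenue worth noting, as an assumption on my part rather than something stated in the question: newer Django releases expose a login_form attribute on AdminSite, so a custom AuthenticationForm subclass with the extra token field could be handed to the admin login view directly. A rough Python sketch, with the two-factor check left as a placeholder:

        # forms.py -- hypothetical two-factor login form
        from django import forms
        from django.contrib.auth.forms import AuthenticationForm

        class TokenAuthenticationForm(AuthenticationForm):
            token = forms.CharField(label="Token", required=False)

            def clean(self):
                cleaned = super(TokenAuthenticationForm, self).clean()
                # verify_token() is a placeholder for whatever 2FA check the site uses:
                # if not verify_token(self.get_user(), cleaned.get("token")):
                #     raise forms.ValidationError("Invalid two-factor token.")
                return cleaned

        # admin setup (e.g. in urls.py)
        from django.contrib import admin
        admin.site.login_form = TokenAuthenticationForm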

    Read the article

  • I am Unable to Post Xml to Linkedin Share API

    - by Vijesh V.Nair
    I am using Delphi 2010, with Indy 10.5.8(svn version) and oAuth.pas from chuckbeasley. I am able to collect token with app key and App secret, authorize token with a web page and Access the final token. Now I have to post a status with Linkedin’s Share API. I am getting a unauthorized response. My request and responses are giving bellow. Request, POST /v1/people/~/shares HTTP/1.0 Content-Encoding: utf-8 Content-Type: text/xml; charset=us-ascii Content-Length: 999 Authorization: OAuth oauth_consumer_key="xxx",oauth_signature_method="HMAC-SHA1",oauth_timestamp="1340438599",oauth_nonce="BB4C78E0A6EB452BEE0FAA2C3F921FC4",oauth_version="1.0",oauth_token="xxx",oauth_signature="Pz8%2FPz8%2FPz9ePzkxPyc%2FDD82Pz8%3D" Host: api.linkedin.com Accept: text/html, */* Accept-Encoding: identity User-Agent: Mozilla/3.0 (compatible; Indy Library) %3C%3Fxml+version=%25221.0%2522%2520encoding%253D%2522UTF-8%2522%253F%253E%253Cshare%253E%253Ccomment%253E83%2525%2520of%2520employers%2520will%2520use%2520social%2520media%2520to%2520hire%253A%252078%2525%2520LinkedIn%252C%252055%2525%2520Facebook%252C%252045%2525%2520Twitter%2520%255BSF%2520Biz%2520Times%255D%2520http%253A%252F%252Fbit.ly%252FcCpeOD%253C%252Fcomment%253E%253Ccontent%253E%253Ctitle%253ESurvey%253A%2520Social%2520networks%2520top%2520hiring%2520tool%2520-%2520San%2520Francisco%2520Business%2520Times%253C%252Ftitle%253E%253Csubmitted-url%253Ehttp%253A%252F%252Fsanfrancisco.bizjournals.com%252Fsanfrancisco%252Fstories%252F2010%252F06%252F28%252Fdaily34.html%253C%252Fsubmitted-url%253E%253Csubmitted-image-url%253Ehttp%253A%252F%252Fimages.bizjournals.com%252Ftravel%252Fcityscapes%252Fthumbs%252Fsm_sanfrancisco.jpg%253C%252Fsubmitted-image-url%253E%253C%252Fcontent%253E%253Cvisibility%253E%253Ccode%253Eanyone%253C%252Fcode%253E%253C%252Fvisibility%253E%253C%252Fshare%253E Response, HTTP/1.1 401 Unauthorized Server: Apache-Coyote/1.1 x-li-request-id: K14SWRPEPL Date: Sat, 23 Jun 2012 08:07:17 GMT Vary: * x-li-format: xml Content-Type: text/xml;charset=UTF-8 Content-Length: 341 Connection: keep-alive <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <error> <status>401</status> <timestamp>1340438838344</timestamp> <request-id>K14SWRPEPL</request-id> <error-code>0</error-code> <message>[unauthorized]. OAU:xxx|nnnnn|*01|*01:1340438599:Pz8/Pz8/Pz9ePzkxPyc/DD82Pz8=</message> </error> Please help. Regards, Vijesh Nair

    Read the article

  • Google's Oauth for Installed apps vs. Oauth for Web Apps

    - by burgerguy
    So I'm having trouble understanding something... If you do Oauth for Web Apps, you register your site with a callback URL and get a unique consumer secret key. But once you've obtained an Oauth for Web Apps token, you don't have to generate Oauth calls to the google server from your registered domain. I regularly use my key and token from scripts running via an apache server at localhost on my laptop and Google never says "you're not sending this request from the registered domain." It just sends me the data. Now, as I understand it, if you do Oauth for Installed Apps, you use "anonymous" instead of a secret key you got from Google. I've been thinking of just using the OAuth for Web Apps auth method, then passing that token to an installed app that has my secret code embedded in its innards. The worry is that the code could be discovered by bad people. But what's more secure... making them work for the secret code or letting them default to anonymous? What really goes bad if the "secret" is discovered when the alternative is using "anonymous" as the secret?

    Read the article

  • How to get list of declared functions with their data from php file?

    - by ermac2014
    I need to get list of functions with their contents (not only the function name) from php file. I tried to use regex but it has lots of limitations. it doesn't parse all types of functions. for example it fails if the function has if and for loop statements. in details: I have around 100 include files. each file has number of declared functions. some files has functions duplicated in other files. so what I want is to get list of all functions from specific file then put this list inside an array then I will be using array unique in order to remove the duplicates. I read about tokenizer but I really have no idea how to make it grab the declared function with its data. all I have is this: function get_defined_functions_in_file($file) { $source = file_get_contents($file); $tokens = token_get_all($source); $functions = array(); $nextStringIsFunc = false; $inClass = false; $bracesCount = 0; foreach($tokens as $token) { switch($token[0]) { case T_CLASS: $inClass = true; break; case T_FUNCTION: if(!$inClass) $nextStringIsFunc = true; break; case T_STRING: if($nextStringIsFunc) { $nextStringIsFunc = false; $functions[] = $token[1]; } break; // Anonymous functions case '(': case ';': $nextStringIsFunc = false; break; // Exclude Classes case '{': if($inClass) $bracesCount++; break; case '}': if($inClass) { $bracesCount--; if($bracesCount === 0) $inClass = false; } break; } } return $functions; } unfortunately, this function lists only the function names.. what I need is to list the whole declared function with its structure.. so any ideas? thanks in advance..

    Read the article

  • HSQLDB connection error

    - by user1478527
    I created a java program for connect the HSQLDB , the first one works well, public final static String DRIVER = "org.hsqldb.jdbcDriver"; public final static String URL = "jdbc:hsqldb:file:F:/hsqlTest/data/db"; public final static String DBNAME = "SA"; but these are not public final static String DRIVER = "org.hsqldb.jdbcDriver"; public final static String URL = "jdbc:hsqldb:file:C:/Program Files/tich Tools/mos tech/app/data/db/t1/t2/01/db"; public final static String DBNAME = "SA"; the error shows like this: java.sql.SQLException: error in script file line: 1 unexpected token: ? at org.hsqldb.jdbc.Util.sqlException(Unknown Source) at org.hsqldb.jdbc.Util.sqlException(Unknown Source) at org.hsqldb.jdbc.JDBCConnection.<init>(Unknown Source) at org.hsqldb.jdbc.JDBCDriver.getConnection(Unknown Source) at org.hsqldb.jdbc.JDBCDriver.connect(Unknown Source) at java.sql.DriverManager.getConnection(Unknown Source) at java.sql.DriverManager.getConnection(Unknown Source) at HSQLDBManagerImp.getconn(HSQLDBManagerImp.java:48) at TESTHSQLDB.main(TESTHSQLDB.java:15) Caused by: org.hsqldb.HsqlException: error in script file line: 1 unexpected token: ? at org.hsqldb.error.Error.error(Unknown Source) at org.hsqldb.scriptio.ScriptReaderText.readDDL(Unknown Source) at org.hsqldb.scriptio.ScriptReaderBase.readAll(Unknown Source) at org.hsqldb.persist.Log.processScript(Unknown Source) at org.hsqldb.persist.Log.open(Unknown Source) at org.hsqldb.persist.Logger.openPersistence(Unknown Source) at org.hsqldb.Database.reopen(Unknown Source) at org.hsqldb.Database.open(Unknown Source) at org.hsqldb.DatabaseManager.getDatabase(Unknown Source) at org.hsqldb.DatabaseManager.newSession(Unknown Source) ... 7 more Caused by: org.hsqldb.HsqlException: unexpected token: ? at org.hsqldb.error.Error.parseError(Unknown Source) at org.hsqldb.ParserBase.unexpectedToken(Unknown Source) at org.hsqldb.ParserCommand.compilePart(Unknown Source) at org.hsqldb.ParserCommand.compileStatement(Unknown Source) at org.hsqldb.Session.compileStatement(Unknown Source) ... 16 more I googled this connection question , but most of them not help much, someone says that may be the HSQLDB version problem. The dbase shut down in "SHUT COMPRESS" model. Anyone give some advice?

    Read the article

  • Facebook SDK Comment Deleting

    - by mwoodworth
    Working with the Facebook php SDK's, I am having a lot of trouble figuring out how to delete comments, given its id and xid. At first I was using the REST API, where you can call 'comments_remove($xid, $id);' to delete a comment. The problem with this method came when the xid parameter only accepts alphanumeric characters and underscores. Based on the documentation (http://developers.facebook.com/docs/reference/fbml/comments ) a valid XID can be the result of any url_encode. Now I am testing my luck with the new GRAPH api. Looking at http://developers.facebook.com/docs/api under 'Deleting Objects', It seems that comment deleting is definitely supported. However, I have tried sending a DELETE request, and I have also tried sending POST and GET to the object url with the argument 'method=delete'. No matter how I try it, I always get the same error: {"error":{"type":"GraphMethodException","message":"Unsupported delete request."}} I am sending the access token as a parameter as well. The access token that I am sending is the access token saved in the facebook cookie from the single sign on javascript cookie. These are all comments made on my application. Does this happen to anyone else, or am I simply not doing this right? Any help or guidance is GREATLY appreciated.
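    For reference, the delete call being described boils down to one HTTP DELETE against the comment's Graph URL. A minimal Python sketch with the requests library (the comment id and token are placeholders; this only restates the request, it does not explain the error):

        import requests

        COMMENT_ID = "12345_67890"      # placeholder comment id
        ACCESS_TOKEN = "..."            # user access token from the SSO cookie

        resp = requests.delete(
            "https://graph.facebook.com/" + COMMENT_ID,
            params={"access_token": ACCESS_TOKEN},
        )
        print(resp.status_code, resp.text)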

    Read the article

  • Why are illegal cookies sent by browsers and received by web servers (RFC 2109, 2965)?

    - by Artyom
    Hello, According to RFC 2109 and RFC 2965, a cookie's value can be either an HTTP token or a quoted string, and a token can't include non-ASCII characters. Cookie's RFC 2109: http://tools.ietf.org/html/rfc2109#page-3 HTTP's RFC 2068 token definition: http://tools.ietf.org/html/rfc2068#page-16 However, I have found that the Firefox browser (3.0.6) sends cookies with a UTF-8 string as-is, and the three web servers I tested (apache2, lighttpd, nginx) pass this string as-is to the application. For example, a raw request from the browser: $ nc -l -p 8080 GET /hello HTTP/1.1 Host: localhost:8080 User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.9) Gecko/2009050519 Firefox/2.0.0.13 (Debian-3.0.6-1) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: windows-1255,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive Cookie: wikipp=1234; wikipp_username=?????? Cache-Control: max-age=0 And the raw HTTP_COOKIE CGI variable passed on by apache, nginx and lighttpd: wikipp=1234; wikipp_username=?????? What am I missing? Can somebody explain this to me?
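    One common way applications stay within the token restriction the RFCs describe, shown here only as an illustration and not something from the original post, is to percent-encode the value so the cookie contains only ASCII token characters. A small Python sketch:

        from urllib.parse import quote, unquote

        username = "Артём"                       # arbitrary non-ASCII value
        cookie_value = quote(username, safe="")  # percent-encoded UTF-8, plain ASCII
        header = "Set-Cookie: wikipp_username=" + cookie_value

        # ...and decode it again when the cookie comes back in a request
        assert unquote(cookie_value) == username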

    Read the article

  • Impersonate SYSTEM (or equivalent) from Administrator Account

    - by KevenK
    This question is a follow up and continuation of this question about a Privilege problem I'm dealing with currently. Problem Summary: I'm running a program under a Domain Administrator account that does not have Debug programs (SeDebugPrivilege) privilege, but I need it on the local machine. Klugey Solution: The program can install itself as a service on the local machine, and start the service. Said service now runs under the SYSTEM account, which enables us to use our SeTCBPrivilege privilege to create a new access token which does have SeDebugPrivilege. We can then use the newly created token to re-launch the initial program with the elevated rights. I personally do not like this solution. I feel it should be possible to acquire the necessary privileges as an Administrator without having to make system modifications such as installing a service (even if it is only temporary). I am hoping that there is a solution that minimizes system modifications and can preferably be done on the fly (ie: Not require restarting itself). I have unsuccessfully tried to LogonUser as SYSTEM and tried to OpenProcessToken on a known SYSTEM process (such as csrss.exe) (which fails, because you cannot OpenProcess with PROCESS_TOKEN_QUERY to get a handle to the process without the privileges I'm trying to acquire). I'm just at my wit's end trying to come up with an alternative solution to this problem. I was hoping there was an easy way to grab a privileged token on the host machine and impersonate it for this program, but I haven't found a way. If anyone knows of a way around this, or even has suggestions on things that might work, please let me know. I really appreciate the help, thanks!

    Read the article
