Search Results

Search found 29574 results on 1183 pages for 'directory services'.


  • Use a Fake Http Channel to Unit Test with HttpClient

    - by Steve Michelotti
    Applications get data from lots of different sources. The most common is to get data from a database or a web service. Typically, we encapsulate calls to a database in a Repository object, and we create some sort of IRepository interface as an abstraction to decouple between layers and enable easier unit testing by leveraging faking and mocking. This works great for database interaction. However, when consuming a RESTful web service, this is not always the best approach. The WCF Web APIs that are available on CodePlex (current drop is Preview 3) provide a variety of features to make building HTTP REST services more robust. When you download the latest bits, you’ll also find a new HttpClient which has been updated for .NET 4.0, as compared to the one that shipped for 3.5 in the original REST Starter Kit. The HttpClient currently provides the best API for consuming REST services on the .NET platform, and the WCF Web APIs provide a number of extension methods which extend HttpClient and make it even easier to use. Let’s say you have a client application that is consuming an HTTP service – this could be Silverlight, WPF, or any UI technology, but for my example I’ll use an MVC application:

        1: using System;
        2: using System.Net.Http;
        3: using System.Web.Mvc;
        4: using FakeChannelExample.Models;
        5: using Microsoft.Runtime.Serialization;
        6:
        7: namespace FakeChannelExample.Controllers
        8: {
        9:     public class HomeController : Controller
       10:     {
       11:         private readonly HttpClient httpClient;
       12:
       13:         public HomeController(HttpClient httpClient)
       14:         {
       15:             this.httpClient = httpClient;
       16:         }
       17:
       18:         public ActionResult Index()
       19:         {
       20:             var response = httpClient.Get("Person(1)");
       21:             var person = response.Content.ReadAsDataContract<Person>();
       22:
       23:             this.ViewBag.Message = person.FirstName + " " + person.LastName;
       24:
       25:             return View();
       26:         }
       27:     }
       28: }

    On line #20 of the code above you can see I’m performing an HTTP GET request to a Person resource exposed by an HTTP service. On line #21, I use the ReadAsDataContract() extension method provided by the WCF Web APIs to deserialize the response content to a Person object. In this example, the HttpClient is being passed into the constructor by MVC’s dependency resolver – in this case, I’m using StructureMap as an IoC container, and my StructureMap initialization code looks like this:

        using StructureMap;
        using System.Net.Http;

        namespace FakeChannelExample
        {
            public static class IoC
            {
                public static IContainer Initialize()
                {
                    ObjectFactory.Initialize(x =>
                    {
                        x.For<HttpClient>().Use(() => new HttpClient("http://localhost:31614/"));
                    });
                    return ObjectFactory.Container;
                }
            }
        }

    My controller code currently depends on a concrete instance of the HttpClient. Now I *could* create some sort of interface, wrap the HttpClient in this interface, and use that object inside my controller instead – however, there are a few reasons why that is not desirable. For one thing, the API provided by the HttpClient provides nice features for dealing with HTTP services. I don’t really *want* these to look like C# RPC method calls – when HTTP services have REST features, I may want to inspect HTTP response headers and hypermedia contained within the message so that I can make intelligent decisions as to what to do next in my workflow (although I don’t happen to be doing these things in my example above) – this type of workflow is common in hypermedia REST scenarios.
    If I just encapsulate HttpClient behind some IRepository interface and make it look like a C# RPC method call, it will become difficult to take advantage of these types of things. Second, it could get pretty mind-numbing to have to create interfaces all over the place just to wrap the HttpClient. Then you’re probably going to have to hard-code HTTP knowledge into your code to formulate requests, rather than just “following the links” that the hypermedia in a message might provide. Third, at first glance it might appear that we need to create an interface to facilitate unit testing, but actually it’s unnecessary. Even though the code above is dependent on a concrete type, it’s actually very easy to fake the data in a unit test. The HttpClient provides a Channel property (of type HttpMessageChannel) which allows you to create a fake message channel which can be leveraged in unit testing. In this case, what I want is to be able to write a unit test that just returns fake data. I also want this to be as re-usable as possible for my unit testing. I want to be able to write a unit test that looks like this:

        1: [TestClass]
        2: public class HomeControllerTest
        3: {
        4:     [TestMethod]
        5:     public void Index()
        6:     {
        7:         // Arrange
        8:         var httpClient = new HttpClient("http://foo.com");
        9:         httpClient.Channel = new FakeHttpChannel<Person>(new Person { FirstName = "Joe", LastName = "Blow" });
       10:
       11:         HomeController controller = new HomeController(httpClient);
       12:
       13:         // Act
       14:         ViewResult result = controller.Index() as ViewResult;
       15:
       16:         // Assert
       17:         Assert.AreEqual("Joe Blow", result.ViewBag.Message);
       18:     }
       19: }

    Notice on line #9, I’m setting the Channel property of the HttpClient to be a fake channel. I’m also specifying the fake object that I want to be in the response of my “fake” HTTP request. I don’t need to rely on any mocking frameworks to do this. All I need is my FakeHttpChannel. The code to do this is not complex:

        using System;
        using System.IO;
        using System.Net.Http;
        using System.Runtime.Serialization;
        using System.Threading;
        using FakeChannelExample.Models;

        namespace FakeChannelExample.Tests
        {
            public class FakeHttpChannel<T> : HttpClientChannel
            {
                private T responseObject;

                public FakeHttpChannel(T responseObject)
                {
                    this.responseObject = responseObject;
                }

                protected override HttpResponseMessage Send(HttpRequestMessage request, CancellationToken cancellationToken)
                {
                    return new HttpResponseMessage()
                    {
                        RequestMessage = request,
                        Content = new StreamContent(this.GetContentStream())
                    };
                }

                private Stream GetContentStream()
                {
                    var serializer = new DataContractSerializer(typeof(T));
                    Stream stream = new MemoryStream();
                    serializer.WriteObject(stream, this.responseObject);
                    stream.Position = 0;
                    return stream;
                }
            }
        }

    The HttpClientChannel provides a Send() method which you can override to return any HttpResponseMessage that you want. You can see I’m using the DataContractSerializer to serialize the object and write it to a stream. That’s all you need to do. In the example above, the only thing I’ve chosen to do is to provide a way to return different response objects. But there are many more features you could add to your own re-usable FakeHttpChannel. For example, you might want to provide the ability to add HTTP headers to the message. You might want to use a different serializer than the DataContractSerializer.
    You might want to provide custom hypermedia in the response as well as just an object, or set HTTP response codes. The list goes on. This is just one example of the really cool features being added to the next version of WCF to enable various HTTP scenarios. The code sample for this post can be downloaded here.

    Read the article

  • Why does git remember changes, but not let me stage them?

    - by Andres Jaan Tack
    I have a list of modifications when I run git status, but I cannot stage them or commit them. How can I fix this? This occurred after pulling the kernelmode directory from a bare repository somewhere in one huge commit.

        % git status
        # On branch master
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #   modified:   kernelmode/linux-2.6.33/Documentation/IO-mapping.txt
        #   ...

        $ git add .
        $ git status
        # On branch master
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #   modified:   kernelmode/linux-2.6.33/Documentation/IO-mapping.txt
        #   ...
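    Not from the thread, but a few common things to check when git reports modifications that git add will not stage (a diagnostic sketch; the file path is taken from the output above):

        # What does git actually think changed? Mode-bit-only diffs show up here.
        git diff kernelmode/linux-2.6.33/Documentation/IO-mapping.txt

        # On filesystems that mangle permissions, mode changes can be ignored:
        git config core.fileMode

        # Files marked assume-unchanged are listed with a lowercase status letter:
        git ls-files -v | grep '^[a-z]'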

    Read the article

  • Getting 'error while loading shared libraries' when using -L to specifically find the library.

    - by e5
    I've been trying to solve this for a few hours now. I am compiling some C files using gcc. The files require libpbc, so I am using the -L flag to point gcc at the directory which contains libpbc.so.1. The code compiles without error, yet when I attempt to run it I get the following error message:

        ./example.out: error while loading shared libraries: libpbc.so.1: cannot open shared object file: No such file or directory

    Looking at similar questions, this error message seems to indicate that libpbc.so.1 can't be found. I know gcc sees libpbc.so.1 because when I rename libpbc.so.1 to something else it fails to compile. I am using -L to point to the directory which contains libpbc.so.1. Not sure what next steps I can take to figure this out. Would appreciate any ideas. What does this error message mean exactly?
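    Not from the question, but the distinction that usually explains this error: -L only tells the link-time linker where to find libpbc, while this message comes from the run-time loader, which searches its own paths. A sketch of the two common fixes, assuming the library lives in /opt/pbc/lib:

        # 1. Bake the search path into the binary at link time with an rpath:
        gcc example.c -o example.out -L/opt/pbc/lib -Wl,-rpath,/opt/pbc/lib -lpbc

        # 2. Or extend the loader's search path just for this run:
        LD_LIBRARY_PATH=/opt/pbc/lib ./example.out

    (Adding the directory to /etc/ld.so.conf and running ldconfig is the system-wide variant of the second option.)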

    Read the article

  • Writing temporary data from R

    - by Shane
    I want to write some temporary data to disk in an R package, and I want to be sure that it can run on every OS without assuming the user has admin rights. Is there an existing R function that can provide a path to a temporary directory on all major OSes? Or a way to reference a user's home directory? Otherwise, I was thinking of trying this:

        Sys.getenv("temp")

    I presume that I can't expect people to have write access to their R locations, otherwise I could reference a path within the package directory:

        .find.package("package.name")

    Read the article

  • allowing index access only with .htaccess

    - by YsoL8
    Hello. I have this in my .htaccess file, in the site root:

        Options -Indexes

        <directory ../.*>
            Deny from all
        </directory>

        <Files .htaccess>
            order allow,deny
            deny from all
        </Files>

        <Files index.php>
            Order allow,deny
            allow from all
        </Files>

    What I'm trying to achieve is to block folder and file access to anything that isn't called index.php, regardless of which directory is accessed. I have the folder part working perfectly and the deny from all rule is working as well - but my attempt to allow access to index.php is failing. Basically, could someone tell me how to get it working?

    Read the article

  • Why does running rake gems:unpack result in a Gem::FilePermissionError

    - by Globalkeith
    I'm attempting to upgrade the friendly_id gem in a Rails project. I have removed the old gem from the vendor directory and installed the new gem from rubygems.org. When I type:

        rake gems:unpack

    I get the following response:

        ERROR: While executing gem ... (Gem::FilePermissionError)
            You don't have write permissions into the /usr/lib/ruby/gems/1.8 directory.

    Sure, I realise I can sudo it, but what I don't understand is: if I would like to unpack the gem into my project vendor directory, why does it need access to /usr/lib/ruby/gems...?

    Read the article

  • Accessing resources in the package of an application created with xcode/cocoa

    - by Michael Minerva
    I am trying to make a simple application that needs to create an NSImage, and I want to put the .png file in the resources of my package contents. I added the .png file to my resources directory in Xcode, and when I create the application the .png file shows up in my resources inside the package contents, but I am having trouble figuring out how to reference the file. Here is what I tried:

        [image initWithContentsOfFile:@"resources/draw-button.png"];

    I figured that my package contents would be my current directory, so I thought this would work, but it doesn't. How do I reference my resources directory?

    Read the article

  • Combining Shared Secret and Username Token – Azure Service Bus

    - by Michael Stephenson
    As discussed in the introduction article, this walkthrough will explain how you can implement WCF security with the Windows Azure Service Bus to ensure that you can protect your endpoint in the cloud with a shared secret, but also flow through a username token so that in your listening WCF service you will be able to identify who sent the message. This could either be in the form of an application or a user, depending on how you want to use your token.

    Prerequisites

    Before going into the walkthrough I want to explain a few assumptions about the scenario we are implementing, but to keep the article shorter I am not going to walk through all of the steps to set some of this up. In the solution we have a simple console application which will represent the client application. There is also the services WCF application which contains the WCF service we will expose via the Windows Azure Service Bus. The WCF service application in this example was hosted in IIS 7 on Windows 2008 R2 with AppFabric Server installed and configured to auto-start the WCF listening services. I am not going to go through significant detail around the IIS setup because it should not matter in relation to this article; however, if you want to understand more about how to configure WCF and IIS for such a scenario, please refer to the following paper which goes into a lot of detail about how to configure this. The link is: http://tinyurl.com/8s5nwrz

    The Service Component

    To begin with, let's look at the service component and how it can be configured to listen to the service bus using a shared secret but also to accept a username token from the client. In the sample the service component is called Acme.Azure.ServiceBus.Poc.UN.Services. It has a single service, which is the Visual Studio template for a WCF service when you add a new WCF Service Application, so we have a service called Service1 with its Echo method. Nothing special so far!

    The next step is to look at the web.config file to see how we have configured the WCF service. In the services section of the WCF configuration you can see I have created my service, and I have created a local endpoint which I simply used to do a little bit of diagnostics and to check it was working; but more importantly there is the Windows Azure endpoint which is using the ws2007HttpRelayBinding (note that this should also work just the same if you're using netTcpRelayBinding). The key points to note in the above picture are the service behaviour called MyServiceBehaviour and the service bus endpoint behaviour called MyEndpointBehaviour. We will go into these in more detail later.

    The Relay Binding

    The relay binding for the service has been configured to use the TransportWithMessageCredential security mode. This is the important bit: the transport security relates to the interaction between the service and the Azure Service Bus, and the message credential is where we will use our username token, as specified in the message/clientCredentialType attribute. Note also that we have left the relayClientAuthenticationType set to RelayAccessToken. This means that authentication will be made against ACS for accessing the service bus, and messages will not be accepted from any sender who has not been authenticated by ACS.
    The Endpoint Behaviour

    In the below picture you can see the endpoint behaviour, which is configured to use the shared secret client credential for accessing the service bus; for diagnostic purposes I have also included the service registry element. Hopefully, if you are familiar with using the Windows Azure Service Bus relay feature, the above is very familiar to you and this is a very common setup for this section. There is nothing specific to the username token implementation here.

    The Service Behaviour

    Now we come to the bit with most of the username token work in it. When configuring the service behaviour, I have included the serviceCredentials element, set it up to use userNameAuthentication, and you can see that I have created my own custom username token validator. This setup means that WCF will hand off to my class for validating the username token details. I have also added the serviceSecurityAudit element to give me a simple auditing-of-access capability.

    My UsernamePassword Validator

    The below picture shows the details of the username password validator class I have implemented. WCF will hand off to this class when validating the token, giving me a nice way to check the token credentials against an on-premise store. You have all of the validation features of a non-service-bus WCF implementation available, such as validating the username and password against Active Directory or ASP.NET membership features, or, as in my case above, something much simpler.

    The Client

    Now let's take a look at the client side of this solution and how we can configure the client to authenticate against ACS but also send a username token over to the service component so it can implement additional security checks on-premise. I have a console application, and in the program class I want to use the proxy generated with Add Service Reference to send a message via the Azure Service Bus. You can see in my WCF client configuration below that I have set up my details for the Azure Service Bus URL and am using the ws2007HttpRelayBinding.

    Next is my configuration for the relay binding. You can see below that I have configured security to use TransportWithMessageCredential, so we will flow the username token with the message, and also the RelayAccessToken relayClientAuthenticationType, which means the component will validate against ACS before being allowed to access the relay endpoint to send a message.

    After the binding we need to configure the endpoint behaviour, as in the below picture. This is the normal configuration to use a shared secret for accessing a Service Bus endpoint.

    Finally, below we have the code of the client in the console application which will call the service bus. You can see that we have created our proxy and then made a normal call to a WCF service, but this time we have also set the ClientCredentials to use the appropriate username and password, which will be flowed through the service bus to our service, which will validate them.

    Conclusion

    As you can see from the above walkthrough, it is not too difficult to configure a service to use both a shared secret and a username token at the same time. This gives you the power and protection offered by the Access Control Service in the cloud, but also the ability to flow additional tokens to the on-premise component for additional security features to be implemented.

    Sample

    The sample used in this post is available at the following location: https://s3.amazonaws.com/CSCBlogSamples/Acme.Azure.ServiceBus.Poc.UN.zip

    Read the article

  • Mixing static and dynamic endpoints in app.yaml file

    - by Greg
    I'm trying to describe endpoints in my App Engine app and am having difficulty with directory structures that mix static and dynamic content, because my yaml rules conflict with one another. Before I change my directory structure, does anyone have a recommendation? The goal is to create a directory that contains both documentation (static html files) and implementations.

        /api
            /v1
                getitdone.py
                doc.html
            index.html

    What I think I should be doing with my application yaml...

        - url: /api/v1/getitdone
          script: api/v1/getitdone.py

        - url: /api/
          static_files: api/index.html
          upload: api/index.html

        - url: /api
          static_dir: api

    But this causes the dynamic endpoints to fail. I'm assuming the static_dir reference is breaking it. How can I do this without describing every script and static file reference (I have many more than are listed here)?

    Read the article

  • Changing resource file in new version of an app

    - by Michael Frost
    Hi, I'm working on an update for an already existing iPhone app. The existing version contains a .sql database file which is used in the app, and I would like to use a new version of this file in the update of the app. On the first startup of the existing app, the .sql file is placed in the caches directory of the user's iPhone. From what I can understand from Apple's documentation, the files in the caches directory might get copied from the old app to the new version's caches directory when the user updates the app. Does this mean that, to be sure my new file is used in the updated version, I should use a different name for the file? And what happens with the old file? Do I have to manually delete it from inside the app? Which means I have to check if it's there at every startup of the app? Thanks, Michael

    Read the article

  • User Mode Linux - Installing a module error

    - by Zach
    I am trying to run 'make' on a simple makefile to build a module in User Mode Linux. Here is my makefile:

        obj-m := hello.o
        KDIR := /lib/modules/$(shell uname -r)/build
        PWD := $(shell pwd)

        default:
            $(MAKE) -C $(KDIR) SUBDIRS=$(PWD) modules

    When I run this in User Mode Linux I get the following error:

        make[1]: Entering directory `/lib/modules/2.6.28/build'
        make[1]: *** No rule to make target `modules'.  Stop.
        make[1]: Leaving directory `/lib/modules/2.6.28/build'
        make: *** [default] Error 2

    The problem is that no files are present under /lib/modules/. There's no directory for 2.6.28 or build. From what I've read, these should be symlinks to /usr/src, but under /usr/src I don't see any files either.
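    Not from the thread, but a sketch of the usual workaround: external modules for a User Mode Linux kernel are built against the kernel source tree the UML binary was compiled from, since /lib/modules/<version>/build only exists for kernels installed on the host. The source path here is an assumption about the setup:

        # Point make at the UML kernel's source tree instead of
        # /lib/modules/$(uname -r)/build, and keep the UML architecture.
        make -C ~/src/linux-2.6.28 ARCH=um M=$(pwd) modules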

    Read the article

  • How to apply Ubuntu patch for rpcbind?

    - by Linda
    I am currently running Ubuntu 12.04.1 Desktop and would like to apply this patch: https://launchpad.net/ubuntu/+source/rpcbind/0.2.0-7ubuntu1.2

    My current rpcbind version is here:

        # aptitude show rpcbind
        Package: rpcbind
        State: installed
        Automatically installed: yes
        Version: 0.2.0-7ubuntu1.1

    As you can see on the patch page, I'd like to patch to this version:

        Version: 0.2.0-7ubuntu1.2

    However, based on the downloadable files on the patch page, I'm not sure where to start.

        (directory structure of the original rpcbind source)
        # find rpcbind-0.2.0 -type d
        rpcbind-0.2.0
        rpcbind-0.2.0/src
        rpcbind-0.2.0/man

        (directory structure of the patch download)
        # find debian -type d
        debian
        debian/patches
        debian/source

    [EDIT] I've figured out how to apply the individual patches in the patches directory:

        # patch -p1 < ../debian/patches/01-usage-fix.patch
        patching file src/rpcbind.c

    (and so on for each patch file) ... but I'm not sure what to do with the patch-related files in the root debian folder. Any help here? Thanks in advance, Linda
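    Not from the thread, but a minimal sketch of applying the whole patch set in order, assuming the debian download uses the usual quilt layout with a debian/patches/series file listing the patches:

        # From inside the unpacked rpcbind-0.2.0 source tree, apply every
        # patch named in the series file, in the order listed.
        cd rpcbind-0.2.0
        while read -r p; do
            patch -p1 < ../debian/patches/"$p"
        done < ../debian/patches/series

    The remaining files in the root debian folder (rules, control, changelog and so on) drive the package build itself; they only matter if you rebuild the .deb rather than patching the source by hand.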

    Read the article

  • Bootstrapper for VSTO 3.0 SP1

    - by Amjid Qureshi
    Hi, I've got a .NET 3.5 VS2008 Excel add-in. I've created an installer for it and have it working, apart from the fact that I can't get an option in the prerequisites for VSTO 3.0 SP1. I have one for VSTO 3.0, and when I check the C:\Program Files (x86)\Microsoft SDKs\Windows\v6.0A\Bootstrapper\Packages\VSTOR30 directory it has:

        vstor30.exe
        vstor30sp1-KB949258-x86.exe
        product.xml
        "en" directory

    However, when I build the installer only the vstor30.exe file gets copied over to the bin directory. I need VSTO 3.0 SP1 for the add-in to work. If I manually install the SP1 runtime then everything works fine.

    Read the article

  • Building path independent mod_rewrite statements for generic .htaccess file

    - by Pekka
    Say I have several small web applications stored under a shared web root:

        www.example.com/app1/
        www.example.com/app2/
        www.example.com/app3/
        www.example.com/app4/

    Each application has a .htaccess file containing some run-of-the-mill mod_rewrite statements to rewrite URLs, like:

        RewriteCond %{REQUEST_URI} ^/app1/([^/]+)/([^/]+)\.html$
        RewriteRule .* /app1/index.php?selectedProfile=%1&match=%2&%{QUERY_STRING}

    Now, I would like to have a generic .htaccess file in each /app{n} directory. So, no RewriteBase and no /app{n} prefix in the RewriteConds. One idea I had was making the first level a wildcard directory as well:

        RewriteCond %{REQUEST_URI} ^/([^/]+)/([^/]+)/([^/]+)\.html$

    Seeing as the .htaccess file gets triggered only when the /app{n} directory is entered, this should work. Is this an acceptable solution? Are there other, better ones?

    Read the article

  • How do I execute a shell-command in background?

    - by Adobe
    Here's a simple defun to run a shell script:

        (defun bk-konsoles ()
          "Calls: bk-konsoles.bash"
          (interactive)
          (shell-command (concat (expand-file-name "~/its/plts/goodies/bk-konsoles.bash ")
                                 (if (buffer-file-name)
                                     (file-name-directory (buffer-file-name)))
                                 " &") nil nil))

    If I start the program with no ampersand, it starts the script but blocks Emacs until I close the program; if I do put an ampersand, it gives this error:

        /home/boris/its/plts/goodies/bk-konsoles.bash /home/boris/scl/geekgeek/: exited abnormally with code 1.

    Edit: So now I'm using:

        (defun bk-konsoles ()
          "Calls: bk-konsoles.bash"
          (interactive)
          (shell-command (concat (expand-file-name "~/its/plts/goodies/bk-konsoles.bash ")
                                 (if (buffer-file-name)
                                     (file-name-directory (buffer-file-name)))
                                 " & disown") nil nil)
          (kill-buffer "*Shell Command Output*"))

    Edit 2: Nope - doesn't work:

        (defun bk-konsoles ()
          "Calls: bk-konsoles.bash"
          (interactive)
          (let ((curDir default-directory))
            ;; (shell-command (concat "nohup " (expand-file-name "~/its/plts/goodies/bk-konsoles.bash ") curDir) nil nil)
            (shell-command (concat (expand-file-name "~/its/plts/goodies/bk-konsoles.bash ") curDir "& disown") nil nil)
            (kill-buffer "*Shell Command Output*")))

    It keeps Emacs busy - either with disown or with nohup. Here's the script I'm running, if it might be of help: bk-konsoles.bash

    Read the article

  • .htaccess mod_rewrite is preventing certain files from being served

    - by Lucas
    I have a successful use of mod_rewrite to make a site display as I wish... however, I have migrated the 'mock-up' folder to the root directory, and in implementing these rules for the site, some files in the ^pdfs folder are not being served:

        RewriteCond %{REQUEST_FILENAME} -f
        RewriteRule ^ - [L]

        (old directory)
        RewriteRule ^redesign_03012010/mock-up/([^/]+)/([^/]+)$ /redesign_03012010/mock-up/index.php?page=$1&section=$2 [PT]
        RewriteRule ^redesign_03012010/mock-up/([^/]+)$ /redesign_03012010/mock-up/index.php?page=$1 [PT,L]

        (new directory)
        RewriteRule ^([^/]+)/([^/]+)$ /index.php?test=1&page=$1&section=$2 [PT]
        RewriteRule ^([^/]+)$ /index.php?test=1&page=$1 [PT,L]

    ... ^pdfs (aka /pdfs/) is not serving the files... any suggestions?

    Read the article

  • Problem with CruiseControl.net configuration

    - by Pawel
    Hi. I started using ccnet to build my project. This is quite a new area for me, so I have some problems.

    First thing: why does ccnet copy the directory with my project to another directory? (ccnet creates a new folder named the same as the project name included in the ccnet.config file and copies the directory with my project into it.)

    Second thing: the dashboard page cannot show reports for recent builds. When I click on any item in recent builds I get the page "The page cannot be found". I suppose the page cannot link to the log files, but I don't know how to link them. I created one publisher:

        <publishers>
          <xmllogger logDir="c:\Branches" />

    Can anyone help me?

    Read the article

  • WIF, ADFS 2 and WCF – Part 2: The Service

    - by Your DisplayName here!
    OK – so let’s first start with a simple WCF service and connect that to ADFS 2 for authentication. The service itself simply echoes back the user’s claims – just so we can make sure it actually works and to see how the ADFS 2 issuance rules emit claims for the service:

        [ServiceContract(Namespace = "urn:leastprivilege:samples")]
        public interface IService
        {
            [OperationContract]
            List<ViewClaim> GetClaims();
        }

        public class Service : IService
        {
            public List<ViewClaim> GetClaims()
            {
                var id = Thread.CurrentPrincipal.Identity as IClaimsIdentity;

                return (from c in id.Claims
                        select new ViewClaim
                        {
                            ClaimType = c.ClaimType,
                            Value = c.Value,
                            Issuer = c.Issuer,
                            OriginalIssuer = c.OriginalIssuer
                        }).ToList();
            }
        }

    The ViewClaim data contract is simply a DTO that holds the claim information. Next is the WCF configuration – let’s have a look step by step. First I mapped all my http based services to the federation binding. This is achieved by using .NET 4.0’s protocol mapping feature (this can also be done the 3.x way – but in that scenario all services will be federated):

        <protocolMapping>
          <add scheme="http" binding="ws2007FederationHttpBinding" />
        </protocolMapping>

    Next, I provide a standard configuration for the federation binding:

        <bindings>
          <ws2007FederationHttpBinding>
            <binding>
              <security mode="TransportWithMessageCredential">
                <message establishSecurityContext="false">
                  <issuerMetadata address="https://server/adfs/services/trust/mex" />
                </message>
              </security>
            </binding>
          </ws2007FederationHttpBinding>
        </bindings>

    This binding points to our ADFS 2 installation's metadata endpoint. This is all that is needed for svcutil (aka “Add Service Reference”) to generate the required client configuration. I also chose mixed mode security (SSL + basic message credential) for best performance. This binding also disables sessions – you can control that via the establishSecurityContext setting on the binding. This has its pros and cons. Something for a separate blog post, I guess.

    Next, the behavior section adds support for metadata and WIF:

        <behaviors>
          <serviceBehaviors>
            <behavior>
              <serviceMetadata httpsGetEnabled="true" />
              <federatedServiceHostConfiguration />
            </behavior>
          </serviceBehaviors>
        </behaviors>

    The next step is to add the WIF specific configuration (in <microsoft.identityModel />). First we need to specify the key material that we will use to decrypt the incoming tokens. This is optional for web applications, but for web services you need to protect the proof key – so this is mandatory (at least for symmetric proof keys, which is the default):

        <serviceCertificate>
          <certificateReference storeLocation="LocalMachine"
                                storeName="My"
                                x509FindType="FindBySubjectDistinguishedName"
                                findValue="CN=Service" />
        </serviceCertificate>

    You also have to specify which incoming tokens you trust. This is accomplished by registering the thumbprint of the signing keys you want to accept.
    You get this information from the signing certificate configured in ADFS 2:

        <issuerNameRegistry type="...ConfigurationBasedIssuerNameRegistry">
          <trustedIssuers>
            <add thumbprint="d1 … db"
                 name="ADFS" />
          </trustedIssuers>
        </issuerNameRegistry>

    The last step (promised) is to add the allowed audience URIs to the configuration – WCF clients use (by default – and we’ll come back to this) the endpoint address of the service:

        <audienceUris>
          <add value="https://machine/soapadfs/service.svc" />
        </audienceUris>

    OK – that’s it – now we have a basic WCF service that uses ADFS 2 for authentication. The next step will be to set up ADFS to issue tokens for this service. Afterwards we can explore various options on how to use this service from a client. Stay tuned… (if you want to have a look at the full source code or peek at the upcoming parts – you can download the complete solution here)

    Read the article

  • How to override environment variables when running configure?

    - by Sam
    In any major package for Linux, running ./configure --help will output at the end:

        Some influential environment variables:
          CC          C compiler command
          CFLAGS      C compiler flags
          LDFLAGS     linker flags, e.g. -L<lib dir> if you have libraries in a
                      nonstandard directory <lib dir>
          CPPFLAGS    C/C++ preprocessor flags, e.g. -I<include dir> if you have
                      headers in a nonstandard directory <include dir>
          CPP         C preprocessor

        Use these variables to override the choices made by `configure' or to help
        it to find libraries and programs with nonstandard names/locations.

    How do I use these variables to include a directory? I tried running:

        ./configure --CFLAGS="-I/home/package/custom/"

    and

        ./configure CFLAGS="-I/home/package/custom/"

    however these do not work. Any suggestions?
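    Not from the question, but a sketch of the documented form: the variables are passed as VAR=value arguments (no leading --), and per the help text above, -I flags for headers belong in CPPFLAGS while -L flags for libraries belong in LDFLAGS. The include/lib subdirectory split here is an assumption about the package layout:

        # Header search paths go in CPPFLAGS, library search paths in LDFLAGS;
        # configure records these so later rebuilds reuse them.
        ./configure CPPFLAGS="-I/home/package/custom/include" \
                    LDFLAGS="-L/home/package/custom/lib"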

    Read the article

  • How can my CGI program access non-browseable files?

    - by Zerobu
    I was wondering if it was possible to read a text file that is located in a directory called "/home/user/files". I wanted to read it from my cgi-bin, which is located in /home/user/cgi-bin/. Below is my code:

        #!/usr/bin/perl
        use strict;
        use CGI;

        # Virtual Directory
        # Steffan Harris

        eval {

        use constant PASSWORD   => 'perl';
        use constant UPLOAD_DIR => '/home/sharris2/files';

        sub mapToFile {
            print chdir UPLOAD_DIR;
        }

        # This function will list all files in a directory.
        sub listDirectoryFiles {
            chdir UPLOAD_DIR;
            my @files = <*>;
            mapToFile;
            print<<LIST;
        <h2>Current Files</h2>
        <ul>
        LIST
            if (!$files[0]) {
                print " </ul>\n<em>No files in directory</em>";
            }
            foreach (@files) {
                print " <li>$_</li>";
            }
            print " </ul>\n";
        }

        # This function generates a 404 Not Found error
        sub generate404 {
            print<<RESPONSE;
        Status: 404 Not Found
        Content-Type: text/html

        <html>
        <head><title>404 Not Found</title></head>
        <body>
        <p><h1>404 - Not Found</h1></p>
        The requested URL <b>$ENV{"HTTP_HOST"}$ENV{"REQUEST_URI"}</b> was not found on the server.
        </body>
        </html>
        RESPONSE
            exit;
        }

        # This function checks the path info to see if it matches a file in the
        # UPLOAD_DIR directory. If it does not, then it returns a 404 error.
        sub checkExsistence {
            if ($ENV{"PATH_INFO"}) {
                chdir UPLOAD_DIR;
                my @files = <*>;
                if (!$files[0] and $ENV{"PATH_INFO"} eq "/") {
                    return;
                }
                foreach (@files) {
                    if ($ENV{"PATH_INFO"} eq "/".$_ || $ENV{"PATH_INFO"} eq "/") {
                        print "yes";
                        return;
                    }
                }
                generate404;
            }
        }

        sub checkPassword {
            my ($password, $cgi);
            $cgi = new CGI;
            $password = $cgi->param('passwd');
            unless ($password eq PASSWORD) {
                print<<RESPONSE;
        Status: 200 OK
        Content-Type: text/html

        <html>
        <head>
        <title>Incorrect Password</title>
        </head>
        <body>
        <h1>Invalid password entered.</h1>
        <h3><a href="/~sharris2/cgi-bin/files/">Go Back</a></h3>
        </body>
        RESPONSE
                exit;
            }
        }

        sub upLoadFile {
            checkPassword;
            my ($uploadfile, $cgi);
            $cgi = new CGI;
            $uploadfile = $cgi->upload('uploadfile');
            chdir UPLOAD_DIR;
            $uploadfile or die "Did not receive a file to upload";
            open my $FILE, '>', UPLOAD_DIR."/$uploadfile" or die "$!";
            while (<$uploadfile>) {
                print $FILE $_;
            }
        }

        # Start of main part of program
        my $cgi = new CGI;
        if (!$ENV{"PATH_INFO"}) {
            print $cgi->redirect('/~sharris2/cgi-bin/files/');
        }
        checkExsistence;
        if ($ENV{"REQUEST_METHOD"} eq "POST") {
            upLoadFile;
        }

        print <<"HEADERS";
        Status: 200 OK
        Content-Type: text/html

        HEADERS

        print <<"HTML";
        <html>
        <head>
        <title>Virtual Directory</title>
        </head>
        <body>
        HTML

        listDirectoryFiles;

        print<<HTML;
        <h2>Upload a new file</h2>
        <form method="POST" enctype="multipart/form-data" action="/~sharris2/cgi-bin/files/" />
        File:<input type="file" name="uploadfile"/>
        <p>Password: <input type="password" name="passwd"/></p>
        <p><input type="submit" value="Submit File" /></p>
        </form>
        </body>
        </html>
        HTML
        };
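    Not from the question, but worth noting: reading a file outside the web root from a CGI script only needs an absolute path plus read permission for the web-server user. A rough sketch, using the paths from the code above (the www-data account name is an assumption about the server):

        # The account the web server runs as must be able to traverse each
        # parent directory and read the file itself.
        chmod o+x /home/sharris2
        chmod o+rx /home/sharris2/files

        # Quick check: try reading the file as the web-server user.
        sudo -u www-data head /home/sharris2/files/somefile.txt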

    Read the article

  • netbeans ftp configuration

    - by sunwukung
    Hi, I've set up my FTP connection for my project, but when it uploads a file, it adds a directory named after the project to the uploads (which means files aren't going to the right folder). That is, with the initial directory set to '/httpdocs' and no upload directory specified, I upload a file from my local folder:

        PROJECTNAME/library/script.php

    I want it to go here:

        FTP/httpdocs/library/script.php

    but it's going here:

        FTP/httpdocs/PROJECTNAME/library/script.php

    Can anyone help me get this configured correctly?

    Read the article

  • How to import *.pyc file from different version of python?

    - by Almog
    Hello. I used Python 2.5 and imported a file named "irit.py" from the C:\util\Python25\Lib\site-packages directory. This file imports the file "_irit.pyc", which is in the same directory. It worked well and did what I wanted. Then I tried the same thing with Python version 2.6.4: "irit.py", which is in C:\util\Python26\Lib\site-packages, was imported, but "_irit.pyc" (which is in the same directory of the 2.6 installation, like before) hasn't been found. I got the error message:

        File "C:\util\Python26\lib\site-packages\irit.py", line 5, in <module>
            import _irit
        ImportError: DLL load failed: The specified module could not be found.

    Can someone help me understand the problem and how to fix it? Thanks, Almog.

    Read the article

  • a program similar to ls with some modifications

    - by Bond
    Hi, here is a simple puzzle I wanted to discuss: a C program that takes a directory name as a command line argument and prints the last 3 directories and 3 files in all subdirectories, without using the 'system' API inside it.

    Suppose directory bond0 contains bond1, di2, bond3, bond4, bond5 and my_file1, my_file2, my_file3, my_file4, my_file5, my_file6, and bond1 contains bond6, my_file7, my_file8, my_file9, my_file10. The program should output:

        bond3, bond4, bond5, my_file4, my_file5, my_file6, bond6, my_file8, my_file9, my_file10

    My code for the above problem is here:

        #include <dirent.h>
        #include <unistd.h>
        #include <string.h>
        #include <sys/stat.h>
        #include <stdlib.h>
        #include <stdio.h>

        char *directs[20], *files[20];
        int i = 0;
        int j = 0;
        int count = 0;

        void printdir(char *);
        int count_dirs(char *);
        int count_files(char *);

        int main()
        {
            char startdir[20];
            printf("Scanning user directories\n");
            scanf("%s", startdir);
            printdir(startdir);
        }

        void printdir(char *dir)
        {
            printf("printdir called %d directory is %s\n", ++count, dir);
            DIR *dp = opendir(dir);
            int nDirs, nFiles, nD, nF;
            nDirs = 0;
            nFiles = 0;
            nD = 0;
            nF = 0;
            if (dp) {
                struct dirent *entry = 0;
                struct stat statBuf;
                nDirs = count_dirs(dir);
                nFiles = count_files(dir);
                printf("The no of subdirectories in %s is %d \n", dir, nDirs);
                printf("The no of files in %s is %d \n", dir, nFiles);
                while ((entry = readdir(dp)) != 0) {
                    if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0) {
                        continue;
                    }
                    char *filepath = malloc(strlen(dir) + strlen(entry->d_name) + 2);
                    if (filepath) {
                        sprintf(filepath, "%s/%s", dir, entry->d_name);
                        if (lstat(filepath, &statBuf) != 0) {
                        }
                        if (S_ISDIR(statBuf.st_mode)) {
                            nD++;
                            if ((nDirs - nD) < 3) {
                                printf("The directory is %s\n", entry->d_name);
                            }
                        } else {
                            nF++;
                            if ((nFiles - nF) < 3) {
                                printf("The files are %s\n", entry->d_name);
                            }
                        }
                        free(filepath);
                    }
                }
                while ((entry = readdir(dp)) != 0) {
                    if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0) {
                        continue;
                    }
                    printf("In second while loop *entry=%s\n", entry->d_name);
                    char *filepath = malloc(strlen(dir) + strlen(entry->d_name) + 2);
                    if (filepath) {
                        sprintf(filepath, "%s/%s", dir, entry->d_name);
                        if (lstat(filepath, &statBuf) != 0) {
                        }
                        if (S_ISDIR(statBuf.st_mode)) {
                            printdir(entry->d_name);
                        }
                    }
                    free(filepath);
                }
                closedir(dp);
            } else {
                fprintf(stderr, "Error, cannot open directory %s\n", dir);
            }
        }

        int count_dirs(char *dir)
        {
            DIR *dp = opendir(dir);
            int nD;
            nD = 0;
            if (dp) {
                struct dirent *entry = 0;
                struct stat statBuf;
                while ((entry = readdir(dp)) != 0) {
                    if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0) {
                        continue;
                    }
                    char *filepath = malloc(strlen(dir) + strlen(entry->d_name) + 2);
                    if (filepath) {
                        sprintf(filepath, "%s/%s", dir, entry->d_name);
                        if (lstat(filepath, &statBuf) != 0) {
                            fprintf(stderr, "File Not found? %s\n", filepath);
                        }
                        if (S_ISDIR(statBuf.st_mode)) {
                            nD++;
                        } else {
                            continue;
                        }
                        free(filepath);
                    }
                }
                closedir(dp);
            } else {
                fprintf(stderr, "Error, cannot open directory %s\n", dir);
            }
            return nD;
        }

        int count_files(char *dir)
        {
            DIR *dp = opendir(dir);
            int nF;
            nF = 0;
            if (dp) {
                struct dirent *entry = 0;
                struct stat statBuf;
                while ((entry = readdir(dp)) != 0) {
                    if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0) {
                        continue;
                    }
                    char *filepath = malloc(strlen(dir) + strlen(entry->d_name) + 2);
                    if (filepath) {
                        sprintf(filepath, "%s/%s", dir, entry->d_name);
                        if (lstat(filepath, &statBuf) != 0) {
                            fprintf(stderr, "File Not found? %s\n", filepath);
                        }
                        if (S_ISDIR(statBuf.st_mode)) {
                            continue;
                        } else {
                            nF++;
                        }
                        free(filepath);
                    }
                }
                closedir(dp);
            } else {
                fprintf(stderr, "Error, cannot open file %s\n", dir);
            }
            return nF;
        }

    The above code is not functioning quite correctly. Can someone help me understand the error, so that I can improve it further? There seems to be some small glitch which is not clear to me right now.

    Read the article

  • Using 'git pull' vs 'git checkout -f' for website deployment

    - by Michelle
    I've found two common approaches to automatically deploying website updates using a bare remote repo.

    The first requires that the repo is cloned into the document root of the webserver, and in the post-update hook a git pull is used:

        cd /srv/www/siteA/ || exit
        unset GIT_DIR
        git pull hub master

    The second approach adds a 'detached work tree' to the bare repository. The post-receive hook uses git checkout -f to replicate the repository's HEAD into the work directory, which is the webserver's document root:

        GIT_WORK_TREE=/srv/www/siteA/ git checkout -f

    The first approach has the advantage that changes made in the website's working directory can be committed and pushed back to the bare repo (however, files should not be updated on the live server). The second approach has the advantage that the git directory is not within the document root, but this is easily solved using htaccess. Is one method objectively better than the other in terms of best practice? What other advantages and disadvantages am I missing?
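    For concreteness, here is a minimal sketch of the second approach end to end (paths from the question; the hook location and branch name are assumptions):

        #!/bin/sh
        # hooks/post-receive in the bare repository on the server:
        # check out the just-pushed branch into the document root.
        GIT_WORK_TREE=/srv/www/siteA/ git checkout -f master

    Make the hook executable with chmod +x hooks/post-receive. Note that with a detached work tree the document root never contains a .git directory at all, so nothing git-related needs protecting via htaccess in this variant.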

    Read the article

  • Location of my own directories

    - by Nicsoft
    Hello, I am trying to understand how to create my own directory tree and access it from my application. I am writing some kind of learning program and want to create a directory structure like this:

        -MyLessonsDirectory
        --LessonDirectory_1
        ---Step_1
        ---Step_2
        ---Step_3
        --LessonDirectory_2
        ---Step_1
        ---Step_2
        ---Step_3
        ...and so on

    In each "Step" directory I will have specific content for that step of the lesson. The content is added by me while developing. Hence, I don't want to create the structure and content from within the application, just access it and read the contents of the directories. If I can create this kind of structure and use it, it will be very easy to create a nice design that makes it easy to create new lessons in my app. What I cannot figure out is where I should create those directories in the file structure in Xcode, and what the path to those directories is from my application. Thanks in advance!

    Read the article
