Search Results

Search found 9696 results on 388 pages for 'proxy authentication'.


  • xsd.exe - schema to class - for use with WCF

    - by NealWalters
    I have created a schema as an agreed-upon interface between our company and an external company, and I am now creating a WCF C# web service to handle that interface. I ran the XSD utility and it created C# classes. The schema was built in BizTalk and references other schemas, so all in all there are over 15 classes being generated. I put the [DataContract] attribute in front of each of the classes. Do I have to put the [DataMember] attribute on every single property? When I generate a test client program, the proxy does not have any code for any of these 15 classes. We used to use this technique with .asmx services, but I am not sure it works the same way with WCF. If we change the schema, we would want to regenerate the WCF classes, and then we would have to redecorate them with all the [DataMember] attributes each time. Is there a newer tool, similar to XSD.exe, that works better with WCF? Thanks, Neal Walters

    SOLUTION (buried in one of Saunders' answers/comments): Add XmlSerializerFormat to the interface definition:

      [OperationContract]
      [XmlSerializerFormat] // ADD THIS LINE
      Transaction SubmitTransaction(Transaction transactionIn);

    Two notes: 1) After I did this, I saw a lot more .xsds in my proxy (Service Reference) test client program, but I didn't see the new classes in my IntelliSense. 2) For some reason, I didn't get all the classes in IntelliSense until I did a build on the project (not sure why).
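
    For reference, a minimal sketch of the pattern the solution describes (IService1 and Transaction are illustrative names, not the poster's actual types). With [XmlSerializerFormat] on the contract, WCF serializes the xsd.exe-generated types with the XmlSerializer, so no [DataContract]/[DataMember] decoration is needed; the WCF-era alternative to xsd.exe is svcutil.exe with the /dataContractOnly (/dconly) switch, which emits data-contract classes directly from the XSD.

      using System.ServiceModel;

      [ServiceContract]
      public interface IService1
      {
          [OperationContract]
          [XmlSerializerFormat]   // use the XmlSerializer instead of the DataContractSerializer
          Transaction SubmitTransaction(Transaction transactionIn);
      }

      // roughly what an xsd.exe-generated type looks like: plain public members,
      // no [DataMember] attributes required once XmlSerializerFormat is in effect
      public class Transaction
      {
          public string Id { get; set; }
          public decimal Amount { get; set; }
      }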

    Read the article

  • IncludeExceptionDetailInFaults not behaving as thought

    - by pdiddy
    I have this simple test project just to test the IncludeExceptionDetailInFaults behavior.

      public class Service1 : IService1
      {
          public string GetData(int value)
          {
              throw new InvalidCastException("test");
              return string.Format("You entered: {0}", value);
          }
      }

      [ServiceContract]
      public interface IService1
      {
          [OperationContract]
          string GetData(int value);
      }

    In the app.config of the service I have it set to true:

      <serviceDebug includeExceptionDetailInFaults="True" />

    On the client side:

      try
      {
          using (var proxy = new ServiceReference1.Service1Client())
              Console.WriteLine(proxy.GetData(5));
      }
      catch (Exception ex)
      {
          Console.WriteLine(ex.Message);
      }

    This is what I thought the behavior was: setting includeExceptionDetailInFaults="true" would propagate the exception detail to the client. But I'm always getting a CommunicationObjectFaultedException. I did try putting FaultContract(typeof(InvalidCastException)) on the contract, but same behavior - only the CommunicationObjectFaultedException. The only way to make it work was to throw new FaultException(new InvalidCastException("test")); but I thought that with IncludeExceptionDetailInFaults=true the above was done automatically. Am I missing something?
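
    One thing the generic catch block can hide (a sketch, not the poster's code): with includeExceptionDetailInFaults on, the fault arrives at the client as a FaultException<ExceptionDetail>, and the original exception text lives in its Detail property. Disposing a faulted proxy at the end of the using block then throws CommunicationObjectFaultedException, which masks the real fault, so catching the typed fault before the channel is disposed makes the detail visible.

      // using System.ServiceModel;
      var proxy = new ServiceReference1.Service1Client();
      try
      {
          Console.WriteLine(proxy.GetData(5));
          proxy.Close();
      }
      catch (FaultException<ExceptionDetail> fault)
      {
          // includeExceptionDetailInFaults="true" puts the server-side exception
          // type, message and stack trace into fault.Detail
          Console.WriteLine(fault.Detail.Type + ": " + fault.Detail.Message);
          proxy.Abort();
      }
      catch (CommunicationException ex)
      {
          Console.WriteLine(ex.Message);
          proxy.Abort();
      }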

    Read the article

  • SQL Server 2008 Express Edition Connection Problem

    - by waqasahmed
    I have VS 2008 Professional Edition. After the installation (which included SQL Server 2008), I decided to install SQL Server 2008 Express Edition with Advanced Tools (so I could get SQL Server Management Studio with it). So I uninstalled the SQL Express that came with VS 2008 and installed the standalone SQL Server Express 2008 version with advanced tools. However, when I try to log on in SQL Server Management Studio using .\SQLEXPRESS as the server name and Windows Authentication as the authentication, I get the following message:

      TITLE: Connect to Server
      ------------------------------
      Cannot connect to .\SQLEXPRESS.
      ------------------------------
      ADDITIONAL INFORMATION:
      A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)
      For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=-1&LinkId=20476
      ------------------------------
      BUTTONS: OK
      ------------------------------

    Any suggestions on how to get it to work? I have tried disabling Windows Firewall as well and still no luck. I am using Windows Vista, and the SQL Server 2008 Express SP1 patch has also been applied recently.
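
    A couple of quick checks that usually narrow error 26 down (a sketch of the usual diagnosis, assuming the instance really is named SQLEXPRESS): confirm the service for the new instance exists and is running, and confirm which instance names the standalone install actually registered, before touching firewall or remote-connection settings.

      REM is the Express instance installed and running?
      sc query MSSQL$SQLEXPRESS

      REM start it if it is stopped
      net start MSSQL$SQLEXPRESS

      REM list the instance names registered on this machine
      reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL"

    If the standalone setup created an instance under a different name, connecting to .\SQLEXPRESS will fail with exactly this error, and the server name in Management Studio has to match the name shown by the registry query.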

    Read the article

  • Implementing Struts 2 Interceptors using Struts 1

    - by Andriy Zakharchuk
    Hello all, I have a legacy application written with Struts 1. The only feature I was asked to add is to protect some actions. Currently any user can do whatever he/she wants. The idea is to allow all users to see the data but block modification operations, i.e. to modify data a user should log in. I know Struts 2 has interceptors, so I could attach them to the required actions and forward users to the login page when needed. But how can I do a similar thing in a Struts 1 application? My first idea was to create my own abstract Action class:

      public abstract class AuthenticatedAction extends Action {
          public ActionForward execute(
                  ActionMapping mapping, ActionForm form,
                  HttpServletRequest theRequest, HttpServletResponse theResponse) {
              if (!logged) {
                  // forward to the login form
              }
              return doExecute(mapping, form, theRequest, theResponse);
          }

          public abstract ActionForward doExecute(
                  ActionMapping mapping, ActionForm form,
                  HttpServletRequest theRequest, HttpServletResponse theResponse);
      }

    Then change all actions that require authentication from "extends Action" to "extends AuthenticatedAction", add a login form and a login action (which performs the authentication and puts the status into the session), and change the JSP header tile to display an authentication block, e.g. "You are (not) logged in", Login/Logout. As I see it, this should solve the problem. If it doesn't, please explain why. Is there any better (more elegant, like interceptors) way to do this? Thank you in advance.
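
    The closest Struts 1-era equivalent to a Struts 2 interceptor is a servlet filter, which keeps the check out of the Action hierarchy entirely. A rough sketch (the /edit/* URL pattern and the "user" session attribute are assumptions, not from the question):

      import java.io.IOException;
      import javax.servlet.*;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;

      // redirects anonymous users to the login page for any guarded URL
      public class AuthenticationFilter implements Filter {
          public void init(FilterConfig config) {}
          public void destroy() {}

          public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                  throws IOException, ServletException {
              HttpServletRequest request = (HttpServletRequest) req;
              HttpServletResponse response = (HttpServletResponse) res;
              if (request.getSession().getAttribute("user") == null) {
                  response.sendRedirect(request.getContextPath() + "/login.do");
              } else {
                  chain.doFilter(req, res);
              }
          }
      }

      <!-- web.xml: apply the filter only to the actions that modify data -->
      <filter>
          <filter-name>authFilter</filter-name>
          <filter-class>com.example.AuthenticationFilter</filter-class>
      </filter>
      <filter-mapping>
          <filter-name>authFilter</filter-name>
          <url-pattern>/edit/*</url-pattern>
      </filter-mapping>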

    Read the article

  • How to make NAnt send an email using a real account

    - by Turro
    First of all, I have already seen this post: "nant mail issues", but the only answer is not satisfactory (i.e. it doesn't work for me). I am using NAnt to get the latest version of the source, bump the version of the libraries and application, build the application, build the setups... all the usual things, I bet. I would like NAnt to send an email to some people confirming the conclusion of the build process. I've already checked the official (pretty ugly, IMHO) documentation for the mail task, but the example, once copied and customized, doesn't work. This is the NAnt target and task I'm using:

      <target name="sendMail">
        <mail
          from="[email protected]"
          tolist="[email protected];[email protected]"
          subject="Subject of email"
          mailhost="smtp.gmail.com"
          message="Your new release is ready!">
        </mail>
      </target>

    The error message I get is: 530 5.7.0 Must issue a STARTTLS command first. It looks like the mail task was designed for an SMTP server that doesn't require authentication; but what can I do if I must use an external SMTP server which requires authentication (telling my boss I need an in-house SMTP server is not an option)? Can anybody help/teach me? Thanks in advance...
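
    One workaround, since the stock mail task of that era exposes no SSL or credential settings: drop into NAnt's script task and send the message with System.Net.Mail directly. A rough sketch under the assumption that Gmail's authenticated submission port 587 with STARTTLS is acceptable; the addresses and credentials are placeholders:

      <target name="sendMail">
        <script language="C#">
          <references>
            <include name="System.dll" />
          </references>
          <imports>
            <import namespace="System.Net" />
            <import namespace="System.Net.Mail" />
          </imports>
          <code>
            <![CDATA[
            public static void ScriptMain(Project project) {
                // assumption: authenticated SMTP over STARTTLS on port 587
                SmtpClient client = new SmtpClient("smtp.gmail.com", 587);
                client.EnableSsl = true;
                client.Credentials = new NetworkCredential("build-account", "build-password");
                client.Send("from-address", "to-address",
                            "Subject of email", "Your new release is ready!");
            }
            ]]>
          </code>
        </script>
      </target>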

    Read the article

  • Devise not allowing active resource to access the services

    - by Saurabh Pandit
    In my application there are two folders, one for a Rails application and another for a plain Ruby application. In the Ruby folder I have created a Ruby file in which I have written code to access a model that lives in the Rails application, using Active Resource. Sample code is shown below:

      # active_resource_example.rb
      require 'rubygems'
      require 'active_resource'

      class Website < ActiveResource::Base
        self.site = "http://localhost:3000/admin/"
        self.user = "user"
        self.password = "password"
      end

      websites = Website.find(:all)
      puts websites.inspect

    In my Rails application I have used the ActiveAdmin gem, which uses Devise for authentication. On the Rails server I get the following output:

      Started GET "/admin/websites.json" for 192.168.1.37 at 2011-11-12 14:41:06 +0530
      Processing by Admin::WebsitesController#index as JSON
      Completed in 43ms

    And on the terminal where I executed active_resource_example.rb, I got the following error:

      user@user:~/Desktop$ ruby active_resource_example.rb
      /home/user/.rvm/gems/ruby-1.9.2-p180/gems/activeresource-3.1.1/lib/active_resource/connection.rb:132:in `handle_response': Failed. Response code = 401. Response message = Unauthorized. (ActiveResource::UnauthorizedAccess)
        from /home/user/.rvm/gems/ruby-1.9.2-p180/gems/activeresource-3.1.1/lib/active_resource/connection.rb:115:in `request'
        from /home/user/.rvm/gems/ruby-1.9.2-p180/gems/activeresource-3.1.1/lib/active_resource/connection.rb:80:in `block in get'
        from /home/user/.rvm/gems/ruby-1.9.2-p180/gems/activeresource-3.1.1/lib/active_resource/connection.rb:218:in `with_auth'
        from /home/user/.rvm/gems/ruby-1.9.2-p180/gems/activeresource-3.1.1/lib/active_resource/connection.rb:80:in `get'
        from /home/user/.rvm/gems/ruby-1.9.2-p180/gems/activeresource-3.1.1/lib/active_resource/base.rb:894:in `find_every'
        from /home/user/.rvm/gems/ruby-1.9.2-p180/gems/activeresource-3.1.1/lib/active_resource/base.rb:806:in `find'
        from active_resource_example.rb:12:in `<main>'
      user@user:~/Desktop$

    I tried this against another application that does not use Devise authentication, using the same configuration as in active_resource_example.rb, and there it worked. I desperately need some solution to this issue.
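
    One thing worth checking (a hedged suggestion, not a verified fix for this exact setup): ActiveResource sends self.user/self.password as HTTP Basic credentials, but Devise does not accept Basic auth for database_authenticatable models unless that is switched on, so the request reaches ActiveAdmin without a signed-in admin and gets a 401. Enabling HTTP Basic in the Devise initializer of the Rails application is the usual first step:

      # config/initializers/devise.rb (in the Rails/ActiveAdmin application)
      Devise.setup do |config|
        # let the ActiveAdmin Devise model authenticate from an
        # HTTP Basic Authorization header
        config.http_authenticatable = true
      end

    The credentials in active_resource_example.rb would then need to be the Devise login (e.g. the admin user's email) and password rather than an arbitrary user/password pair.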

    Read the article

  • Uploading to TwitVid using x_verify_credentials_authorization

    - by deepa
    Hi, I am developing an iPhone application that uploads videos to TwitVid using the library TwitVid-iPhone-OAuth3. I selected this library since Twitter is shifting to the OAuth mechanism of authentication. 1) Does this new library allow uploading without user-name and password parameters? 2) I am using the following steps for uploading videos (assuming the new library allows uploading without user-name and password parameters), but I am getting the error 'Incorrect signature'. I am not able to identify the mistake I am making here. Could you please help me solve this problem? Authenticate the app using OAuth (MGTwitterEngine etc.), which redirects the user to the web page asking for login and app authorization. If the user authorizes the app, an access token is obtained as the final step. Upload the video to TwitVid using the following code snippet:

      OAuth *authorizer = [[OAuth alloc] initWithConsumerKey: @"my_consumer_key"
                                           andConsumerSecret: @"my_consumer_secret"];
      authorizer.oauth_token = ...;        // the key part of the access token obtained in step 2
      authorizer.oauth_token_secret = ...; // the secret part of the access token obtained in step 2
      authorizer.oauth_token_authorized = YES; // since authenticated in steps 1 and 2

      NSString *authHeader = [authorizer oAuthHeaderForMethod: @"POST"
                                                       andUrl: @"https://api.twitter.com/1/account/verify_credentials.xml"
                                                    andParams: nil];

      mTwitVid = [[TwitVid alloc] init];
      [mTwitVid setDelegate: self];
      [mTwitVid authenticateWithXVerifyCredentialsAuthorization: authHeader
                                           xAuthServiceProvider: nil];

    Thanks and Regards, Deepa

    Read the article

  • Zend: How to authenticate using OpenId on local server.

    - by NAVEED
    I am using Zend Framework. Now I want to authenticate users with their existing accounts from other providers using RPX. I am following the three steps described on the RPX site: 1) get the widget, 2) receive tokens, 3) choose providers. I created a controller (person) and action (signin) to show the widget and my own sign-in form. When the following action (http://test.dev/#person/personsignin) is called, my own login form and the widget are shown successfully. The # in the above URL is used for AJAX indication.

      public function personsigninAction()
      {
          $this->view->jsonEncoded = true;

          // Person sign-in form
          $PersonSigninForm = new Form_PersonSignin();
          $this->view->PersonSigninForm = $PersonSigninForm;
          $this->view->PersonSigninForm->setAction( $this->view->url() );

          $request = $this->getRequest();
          if ( $request->isPost() ) {
          }
      }

    There are two problems when logging in using the OpenID widget: When I am authenticated externally (for example, by Yahoo), I am redirected to http://test.dev, therefore indexAction is called in IndexController and the home page is shown. I want to be redirected to http://test.dev/#person/personsignin after authentication and want to set the session in the isPost() branch of personsigninAction() (shown above). For now I accept that indexAction is called when external authentication is done. I pasted the code from http://gist.github.com/291396 into indexAction to follow step 3 mentioned above, but it gives me the following error: "An error occured: Invalid parameter: apiKey". Am I going about this the right way? This is my very first attempt at this stuff. Can someone tell me the exact steps using my above actions? Thanks.
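
    For what it's worth, a rough sketch of the token-handling step (assumptions: the widget's token_url is pointed at the personsignin action, and YOUR_RPX_API_KEY is a placeholder for the API key from the RPX dashboard - the "Invalid parameter: apiKey" error usually means that key is missing or wrong in the auth_info call). Pointing token_url at personsignin also avoids relying on indexAction for the post-authentication redirect.

      // inside the action the widget posts the token to, e.g. personsigninAction()
      $token = $this->getRequest()->getPost('token');

      $client = new Zend_Http_Client('https://rpxnow.com/api/v2/auth_info');
      $client->setParameterPost(array(
          'apiKey' => 'YOUR_RPX_API_KEY',
          'token'  => $token,
          'format' => 'json',
      ));
      $response = $client->request(Zend_Http_Client::POST);
      $profile  = json_decode($response->getBody(), true);

      if (isset($profile['stat']) && $profile['stat'] == 'ok') {
          // store the provider identifier in the session and mark the user as signed in
          $identifier = $profile['profile']['identifier'];
      }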

    Read the article

  • LinkedIn API returning extra/incorrect login prompt

    - by Paul Osetinsky
    I have a Rails application running the omniauth-linkedin gem and the linkedin gem (essentially an API wrapper). When a user logs in, they receive a primary login prompt that displays the correct scopes (FULL PROFILE and EMAIL ADDRESS). However, after they log in, they get another login prompt that should not come up, and that ignores the initial scope request: it tells them that LinkedIn is only requesting their PROFILE OVERVIEW, which is incorrect. The problem must lie in my auth_controller, and I think it has to do with the URL that is created in one of the authentication stages (definitely right after the user enters their LinkedIn credentials). Here is my auth_controller:

      require 'linkedin'

      class AuthController < ApplicationController
        def auth
          client = LinkedIn::Client.new(ENV['LINKEDIN_KEY'], ENV['LINKEDIN_SECRET'])
          request_token = client.request_token(:oauth_callback => "http://#{request.host_with_port}/callback")
          session[:rtoken] = request_token.token
          session[:rsecret] = request_token.secret
          redirect_to client.request_token.authorize_url
        end

        def callback
          client = LinkedIn::Client.new(ENV['LINKEDIN_KEY'], ENV['LINKEDIN_SECRET'])
          if session[:atoken].nil?
            pin = params[:oauth_verifier]
            atoken, asecret = client.authorize_from_request(session[:rtoken], session[:rsecret], pin)
            session[:atoken] = atoken
            session[:asecret] = asecret
            @user = current_user
            @user.uid = client.profile(:fields => ["id"]).id
            flash.now[:success] = 'Signed in with LinkedIn.'
          else
            client.authorize_from_access(session[:atoken], session[:asecret])
            @user.uid = client.profile(:fields => ["id"]).id
            flash.now[:success] = 'Signed in with LinkedIn.'
          end
          @user = current_user
          @user.save
          redirect_to current_user
        end
      end

    Just in case, here is my omniauth.rb file stating the scopes I am requesting for my application:

      Rails.application.config.middleware.use OmniAuth::Builder do
        provider :linkedin, ENV['LINKEDIN_KEY'], ENV['LINKEDIN_SECRET'],
          :scope => 'r_fullprofile r_emailaddress',
          :fields => ['id', 'email-address', 'first-name', 'last-name', 'headline',
                      'industry', 'picture-url', 'public-profile-url', 'location',
                      'positions', 'educations']
      end

    I can't figure out how to get rid of that second unnecessary and misleading prompt from LinkedIn and would appreciate any guidance! Thank you.

    Read the article

  • Issue intercepting property in Silverlight application

    - by joblot
    I am using Ninject as a DI container in a Silverlight application. Now I am extending the application to support interception and have started integrating the DynamicProxy2 extension for Ninject. I am trying to intercept calls to properties on a ViewModel and end up getting the following exception:

      "Attempt to access the method failed: System.Reflection.Emit.DynamicMethod..ctor(System.String, System.Type, System.Type[], System.Reflection.Module, Boolean)"

    This exception is thrown when invocation.Proceed() is called. I tried two implementations of the interceptor and they both fail:

      public class NotifyPropertyChangedInterceptor : SimpleInterceptor
      {
          protected override void AfterInvoke(IInvocation invocation)
          {
              var model = (IAutoNotifyPropertyChanged)invocation.Request.Proxy;
              model.OnPropertyChanged(invocation.Request.Method.Name.Substring("set_".Length));
          }
      }

      public class NotifyPropertyChangedInterceptor : IInterceptor
      {
          public void Intercept(IInvocation invocation)
          {
              invocation.Proceed();
              var model = (IAutoNotifyPropertyChanged)invocation.Request.Proxy;
              model.OnPropertyChanged(invocation.Request.Method.Name.Substring("set_".Length));
          }
      }

    I want to call the OnPropertyChanged method on the ViewModel when a property value is set. I am using attribute-based interception:

      [AttributeUsage(AttributeTargets.Property, AllowMultiple = false, Inherited = true)]
      public class NotifyPropertyChangedAttribute : InterceptAttribute
      {
          public override IInterceptor CreateInterceptor(IProxyRequest request)
          {
              if (request.Method.Name.StartsWith("set_"))
                  return request.Context.Kernel.Get<NotifyPropertyChangedInterceptor>();
              return null;
          }
      }

    I tested the implementation with a console application and it works all right. I also noted that in the console application, as long as Ninject.Extensions.Interception.DynamicProxy2.dll was in the same folder as Ninject.dll, I did not have to explicitly load DynamicProxy2Module into the kernel, whereas I had to load it explicitly for the Silverlight application, as follows:

      IKernel kernel = new StandardKernel(new DIModules(), new DynamicProxy2Module());

    Could someone please help? Thanks

    Read the article

  • Creating cookieless application on development machine with asp.net

    - by zaladane
    I tried posting this on Server Fault with no luck, so I am trying here. I am thinking about setting up a new domain to host static content for my website and have it cookieless, just like Stack Overflow does with its static domain. Before going ahead and buying the domain and setting it up, I wanted to test it on my development machine first under localhost (I should mention that I am planning on having IIS serve the static files on the new domain). I therefore created a new application under IIS and disabled session state and forms authentication. When my main application needs resources like CSS, images and JS, I use the path to the "static" application where they are hosted. The problem is that when I look at the request and the response for the requested files, they still have the session_id cookie defined as well as the asp.net authentication cookie. Is it at all possible to accomplish what I am trying to do on a development machine, or do I have to just go ahead and purchase the new domain, which hopefully will make things right? I tried to read about cookieless domains but can't figure out what I might be missing.
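
    One detail that often explains this (a hedged suggestion rather than a confirmed diagnosis): cookies are scoped to the host name, not to the IIS application, so two applications under localhost share the same cookies regardless of what the static one has disabled. Giving the static application its own host name locally reproduces what a separate domain would do; the host name below is an assumption.

      # hosts file (e.g. C:\Windows\System32\drivers\etc\hosts) - fake a separate domain locally
      127.0.0.1   static.localtest

      <!-- web.config of the static application: make sure it never issues cookies itself -->
      <system.web>
        <sessionState mode="Off" />
        <authentication mode="None" />
        <roleManager enabled="false" />
      </system.web>

    Then bind the static site/application to the static.localtest host header in IIS and reference resources as http://static.localtest/... from the main application; the browser will not send the localhost cookies to that host.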

    Read the article

  • Connecting to a network drive programmatically and caching credentials

    - by Chris Doggett
    I'm finally set up to be able to work from home via VPN (using Shrew as a client), and I only have one annoyance. We use some batch files to upload config files to a network drive. This works fine from work, and from my team lead's laptop, but both of those machines are on the domain. My home system is not, and won't be, so when I run the batch file I get a ton of "invalid drive" errors because I'm not a domain user. The solution I've found so far is to make a batch file with the following:

      explorer \\MACHINE1
      explorer \\MACHINE2
      explorer \\MACHINE3

    Then I manually log in to each machine using my domain credentials as the prompts pop up. Unfortunately, there are around 10 machines I may need to use, and it's a pain to keep entering the password if I missed one that a batch file requires. I'm looking into using the answer to this question to make a little C# app that'll take the login info once and log in programmatically. Will the authentication be shared automatically with Explorer, or is there anything special I need to do? If it does work, how long are the credentials cached? Is there an app that does something like this automatically? Unfortunately, domain authentication via the VPN isn't an option, according to our admin. EDIT: If there's a way to pass login info to Explorer via the command line, that would be even easier using Ruby and highline.
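
    Regarding the EDIT: the standard command-line way to hand credentials to Windows for a remote machine is net use; Explorer and mapped drives then reuse that session. A minimal sketch (DOMAIN, username and the password are placeholders); the connection normally stays cached until it is deleted or you log off:

      REM authenticate against each machine once; subsequent UNC access reuses the session
      net use \\MACHINE1\IPC$ yourpassword /user:DOMAIN\username
      net use \\MACHINE2\IPC$ yourpassword /user:DOMAIN\username

      REM or map a drive letter outright
      net use Z: \\MACHINE1\share yourpassword /user:DOMAIN\username

      REM list and remove the connections when done
      net use
      net use \\MACHINE1\IPC$ /delete

    A batch file of net use lines (or the same commands shelled out from Ruby) avoids the need to click through the Explorer prompts at all.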

    Read the article

  • Internet Explorer Warning when embedding Youtube on HTTPS site?

    - by pellepim
    Our application is run over HTTPS, which rarely presents any problems for us. When it comes to YouTube, however, the fact that they do not serve any content over SSL connections gives us some headaches when trying to embed clips, mostly because of Internet Explorer's famous little warning message: "Do you want to view only the webpage content that was delivered securely? This page contains content that will not be delivered using a secure HTTPS ... etc." I've tried to solve this in several ways. The most promising one was to use the ProxyPass functionality in Apache to map to YouTube, like this:

      ProxyPass /youtube/ http://www.youtube.com
      ProxyPassReverse /youtube/ http://www.youtube.com

    This gets rid of the annoying warning; however, the YouTube SWF fails to start streaming. The SWF I manage to load into the browser simply states: "An error occurred, please try again later." Potential solutions are perhaps: download YouTube FLVs and serve them out of our own domain (gah), or use a custom FLV player and stream only the FLVs from YouTube over an HTTPS proxy?

    Update 10 March: I've tried to use Google's YouTube API for ActionScript to load a player. It looked promising at first and I was able to load a player through my https:// proxy. However, the SWF that is loaded contains loads of explicit calls to different non-SSL URLs to create authentication links for the FLV stream and to load different cross-domain policies. It really seems like we're not supposed to access FLV streams directly. This makes it very hard to bypass the Internet Explorer warning, short of ripping the FLVs out of YouTube and serving them from your own domain. There are solutions out there for downloading YouTube's FLVs, but that is not compliant with the YouTube terms of use and is really not an option for us.

    Read the article

  • Hooking the http/https protocol in IE causes GET requests to be sequential

    - by watsonmw
    I'm using the PassthruAPP method to hook into HTTP/HTTPS requests made by IE. It's working well for the most part; however, I noticed a problem: only one download thread is active at a time. I can see two IInternetProtocol objects getting created, but IE uses only one at a time. This is happening with IE7. The odd thing is that the problem occurs when overriding the existing default HTTP/HTTPS handler, even if that handler is not the one being used to make the request. E.g. registering a handler for the HTTPS protocol will cause HTTP requests to be made sequentially, even though HTTP requests are not hooked. I installed Google Gears and it has the same problem. This always happens for the first few items on the page, but it seems that after document complete is issued, concurrent downloads can occur again; for example, JavaScript code that executes after the page has finished loading can load images concurrently just fine. One option is to try to IAT-patch the IInternetProtocol registered for HTTP requests, but Google Gears does this already and it has the same problem. I know installing an HTTP proxy is another option, but I don't want to monkey with the users' HTTP proxy settings if there is another option.

    Read the article

  • How do I create Twitter-style URLs for my app - using the existing application or an app redesign - Ruby on Rails

    - by bgadoci
    I have developed a blog application of sorts that I am trying to let other users take advantage of (for free, and mostly for family). I am wondering if the authentication I have set up will allow for such a thing. Here is the scenario. Currently the application allows users to sign up for an account, and when they do so they can create blog posts and organize those posts via tags. The application displays no data publicly (in other words, you have to log in to see anything). To gain access you have to create an account, and even after you do, you cannot see anyone else's information, because the application filters everything through the current_user method and displays it on the /posts/index.html.erb page. This would be great if a user only wanted to blog for themselves, but it is not really what I am looking for. My question has two parts (hopefully I won't make anyone mad by not splitting these into two questions): Is it possible for a particular user's data to live at www.myapplication.com/user without moving everything to the /user/show.html.erb file? And is it possible to make some of that information (living at that URL) public but still require login for create and destroy actions - essentially, exactly like Twitter? I am just curious whether I can get from where I am (using the current_user method across controllers to display in /posts/index.html.erb) to where I want to be. My fear is that I have to redesign the app so that the user data lives in the /user/show.html.erb page. Thoughts? UPDATE: I am using Clearance for authentication, by thoughtbot. I wonder if there is something I can set in the vendored gem path to present the /posts/index.html.erb code as the /user/id code, with id replaced by the user name.
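
    As a sketch of the routing half of this (assumptions on my part: Rails 2.x routes, a User model with a username column, and posts staying where they are - this is not the poster's code):

      # config/routes.rb - keep this near the bottom so it doesn't shadow real routes
      map.user_profile ':username', :controller => 'users', :action => 'show'

      # app/controllers/users_controller.rb
      class UsersController < ApplicationController
        # anonymous visitors may view a profile; Clearance's authenticate
        # filter can still guard create/update/destroy actions elsewhere
        def show
          @user  = User.find_by_username!(params[:username])
          @posts = @user.posts
        end
      end

    The existing /posts/index.html.erb view (or a trimmed copy of it) can then render @posts for the public page, while the logged-in dashboard keeps filtering by current_user, so nothing has to move into /user/show.html.erb wholesale.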

    Read the article

  • SecurityNegotiationException in WCF Service Hosted on IIS

    - by Ram
    Hi, I have hosted a WCF service on IIS. The configuration file is as follows <?xml version="1.0"?> <!-- Note: As an alternative to hand editing this file you can use the web admin tool to configure settings for your application. Use the Website->Asp.Net Configuration option in Visual Studio. A full list of settings and comments can be found in machine.config.comments usually located in \Windows\Microsoft.Net\Framework\v2.x\Config --> <configuration> <configSections> <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere" /> <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> </sectionGroup> </sectionGroup> </sectionGroup> </configSections> <appSettings/> <connectionStrings/> <system.web> <!-- Set compilation debug="true" to insert debugging symbols into the compiled page. Because this affects performance, set this value to true only during development. --> <compilation debug="false"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </assemblies> </compilation> <!-- The <authentication> section enables configuration of the security authentication mode used by ASP.NET to identify an incoming user. --> <authentication mode="Windows" /> <!-- The <customErrors> section enables configuration of what to do if/when an unhandled error occurs during the execution of a request. Specifically, it enables developers to configure html error pages to be displayed in place of a error stack trace. 
<customErrors mode="RemoteOnly" defaultRedirect="GenericErrorPage.htm"> <error statusCode="403" redirect="NoAccess.htm" /> <error statusCode="404" redirect="FileNotFound.htm" /> </customErrors> --> <pages> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </controls> </pages> <httpHandlers> <remove verb="*" path="*.asmx"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false"/> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </httpModules> </system.web> <system.codedom> <compilers> <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <providerOption name="CompilerVersion" value="v3.5"/> <providerOption name="WarnAsError" value="false"/> </compiler> </compilers> </system.codedom> <system.web.extensions> <scripting> <webServices> <!-- Uncomment this section to enable the authentication service. Include requireSSL="true" if appropriate. <authenticationService enabled="true" requireSSL = "true|false"/> --> <!-- Uncomment these lines to enable the profile service, and to choose the profile properties that can be retrieved and modified in ASP.NET AJAX applications. <profileService enabled="true" readAccessProperties="propertyname1,propertyname2" writeAccessProperties="propertyname1,propertyname2" /> --> <!-- Uncomment this section to enable the role service. <roleService enabled="true"/> --> </webServices> <!-- <scriptResourceHandler enableCompression="true" enableCaching="true" /> --> </scripting> </system.web.extensions> <!-- The system.webServer section is required for running ASP.NET AJAX under Internet Information Services 7.0. It is not necessary for previous version of IIS. 
--> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules> <add name="ScriptModule" preCondition="integratedMode" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </modules> <handlers> <remove name="WebServiceHandlerFactory-Integrated"/> <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </handlers> </system.webServer> <system.serviceModel> <services> <service name="IISTest2.Service1" behaviorConfiguration="IISTest2.Service1Behavior"> <!-- Service Endpoints --> <endpoint address="" binding="wsHttpBinding" contract="IISTest2.IService1"> <!-- Upon deployment, the following identity element should be removed or replaced to reflect the identity under which the deployed service runs. If removed, WCF will infer an appropriate identity automatically. --> <identity> <dns value="localhost"/> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior name="IISTest2.Service1Behavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. 
Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false"/> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration> The client configuration file is as follows <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_IService1" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Message"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" establishSecurityContext="true" /> </security> </binding> </wsHttpBinding> </bindings> <client> <endpoint address="http://yyy.zzz.xxx.net/IISTest2/Service1.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IService1" contract="ServTest.IService1" name="WSHttpBinding_IService1"> <identity> <dns value="localhost" /> </identity> </endpoint> </client> </system.serviceModel> </configuration> When I tried to access the service from client application, I got SecurityNegotiationException and details are Secure channel cannot be opened because security negotiation with the remote endpoint has failed. This may be due to absent or incorrectly specified EndpointIdentity in the EndpointAddress used to create the channel. Please verify the EndpointIdentity specified or implied by the EndpointAddress correctly identifies the remote endpoint. If I host the service on ASP .NET Dev server, it work well but if I host on IIS above mentioned error occurs. Thanks, Ram
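
    A common way past this when hosting on IIS (a hedged suggestion, since the root cause can vary): wsHttpBinding defaults to message security with Windows credentials, which requires a successful SSPI/Kerberos negotiation against the identity the IIS application pool runs under; under the Visual Studio development server the service runs as the logged-in user, so the same configuration happens to work there. For a quick test, security can be switched off on both sides with a named binding configuration; the "NoSecurity" name below is made up for the example.

      <!-- service web.config -->
      <bindings>
        <wsHttpBinding>
          <binding name="NoSecurity">
            <security mode="None" />
          </binding>
        </wsHttpBinding>
      </bindings>
      <services>
        <service name="IISTest2.Service1" behaviorConfiguration="IISTest2.Service1Behavior">
          <endpoint address="" binding="wsHttpBinding" bindingConfiguration="NoSecurity"
                    contract="IISTest2.IService1" />
          <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
        </service>
      </services>

      <!-- client config: the security element of WSHttpBinding_IService1 must match -->
      <security mode="None">
        <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
        <message clientCredentialType="Windows" negotiateServiceCredential="true"
                 algorithmSuite="Default" establishSecurityContext="true" />
      </security>

    If Windows message security is actually required, the usual alternative is to keep security mode="Message" and give the client endpoint an identity that matches the application pool account (a userPrincipalName or servicePrincipalName element) instead of the dns value="localhost" identity.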

    Read the article

  • Erlang, SSH and authorized_keys

    - by Roberto Aloi
    Playing with the ssh and public_key applications in Erlang, I've run into some interesting behaviour. I was trying to connect to my running Erlang SSH daemon using an RSA key, but the authentication was failing and I was prompted for a password. After some debugging and tracing (and a couple of coffees), I realized that, for some weird reason, an invalid key for my user was in place. The authorized_keys file contained two keys: the wrong one appeared earlier in the file, while the correct one was appended at the end. Now, when the Erlang SSH application diffs the provided key against the ones contained in authorized_keys, it finds the first entry and completely ignores the second one - the correct one. It then switches to a different authentication mechanism (at first it tries dsa instead of rsa, and then it prompts for a password). The question is: is this behaviour intended, or should the SSH server check all entries for the same user in the authorized_keys file? Is this generic SSH behaviour, or is it specific to the Erlang implementation?

    Read the article

  • Having trouble coming up with a good architecture for a client/server application

    - by rmw1985
    I am writing a remote backup service meant to support 1000+ users. It is going to use librsync to store reverse diffs (like rdiff-backup) and to make data transfer efficient. My trouble is that I do not know the "best" way to implement the client/server model. I have thought of doing it the way rsync/rdiff-backup do it: having the client open an SSH connection, run a server executable, and communicate across pipes. Another alternative would be to write a server that handles authentication and communicates with the client via SSL. The reason I have thought about this is that there is "state" information, like how many backup jobs are set up, that must be maintained. Another alternative I have thought about is running a "web service" using Pylons or Django to handle the authentication, but I do not know how to bridge that to the "storage" side. Since I am using librsync, I cannot use "dumb" storage. Is there a way to pipe data through Pylons or Django to a server-side handler that would do the rsync calculation? This seems to me like maybe a dumb question, but I am sort of lost. Any tips or suggestions from more experienced developers would be extremely helpful.

    Read the article

  • Can I load data (Google App Engine) from http://localhost:8100/remote_api?

    - by zjm1126
    I can download data from GAE (http://zjm1126.appspot.com/remote_api). This is the command:

      appcfg.py download_data --application=zjm1126 --url=http://zjm1126.appspot.com/remote_api --filename=a.csv

    and it is successful:

      D:\zjm_demo\app>appcfg.py download_data --application=zjm1126 --url=http://zjm1126.appspot.com/remote_api --filename=a.csv
      Downloading data records.
      [INFO ] Logging to bulkloader-log-20100618.162421
      [INFO ] Throttling transfers:
      [INFO ] Bandwidth: 250000 bytes/second
      [INFO ] HTTP connections: 8/second
      [INFO ] Entities inserted/fetched/modified: 20/second
      [INFO ] Batch Size: 10
      [INFO ] Opening database: bulkloader-progress-20100618.162421.sql3
      [INFO ] Opening database: bulkloader-results-20100618.162421.sql3
      [INFO ] Connecting to zjm1126.appspot.com/remote_api
      Please enter login credentials for zjm1126.appspot.com
      Email: [email protected]
      Password for [email protected]:
      [INFO ] Downloading kinds: [u'LogText', u'Greeting', u'Forum', u'Thread']
      ....
      [INFO ] Have 0 entities, 0 previously transferred
      [INFO ] 0 entities (8804 bytes) transferred in 11.3 seconds

    So I want to know whether I can load data from 127.0.0.1. This is my command:

      appcfg.py download_data --application=zjm1126 --url=http://localhost:8100/remote_api --filename=a.csv

    and the error is:

      D:\zjm_demo\app>appcfg.py download_data --application=zjm1126 --url=http://localhost:8100/remote_api --filename=a.csv
      Downloading data records.
      [INFO ] Logging to bulkloader-log-20100618.162325
      [INFO ] Throttling transfers:
      [INFO ] Bandwidth: 250000 bytes/second
      [INFO ] HTTP connections: 8/second
      [INFO ] Entities inserted/fetched/modified: 20/second
      [INFO ] Batch Size: 10
      [INFO ] Opening database: bulkloader-progress-20100618.162325.sql3
      [INFO ] Opening database: bulkloader-results-20100618.162325.sql3
      Please enter login credentials for localhost
      Email: [email protected]
      Password for [email protected]:
      [INFO ] Connecting to localhost:8100/remote_api
      [ERROR ] Exception during authentication
      Traceback (most recent call last):
        File "d:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 3169, in Run
          self.request_manager.Authenticate()
        File "d:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 1178, in Authenticate
          remote_api_stub.MaybeInvokeAuthentication()
        File "d:\Program Files\Google\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 542, in MaybeInvokeAuthentication
          datastore_stub._server.Send(datastore_stub._path, payload=None)
        File "d:\Program Files\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 346, in Send
          f = self.opener.open(req)
        File "D:\Python25\lib\urllib2.py", line 387, in open
          response = meth(req, response)
        File "D:\Python25\lib\urllib2.py", line 498, in http_response
          'http', request, response, code, msg, hdrs)
        File "D:\Python25\lib\urllib2.py", line 425, in error
          return self._call_chain(*args)
        File "D:\Python25\lib\urllib2.py", line 360, in _call_chain
          result = func(*args)
        File "D:\Python25\lib\urllib2.py", line 506, in http_error_default
          raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
      HTTPError: HTTP Error 404: Not Found
      [INFO ] Authentication Failed

    So what should I do? Thanks.
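
    The 404 suggests the development server simply has no handler mounted at /remote_api (on appspot.com that handler is provided for you). With the Python SDK of that era it can be switched on in app.yaml; a sketch, to be merged into the existing app.yaml, and on the dev server the login prompt then typically accepts any email with an empty password:

      # app.yaml (SDK 1.3+ builtins form)
      builtins:
      - remote_api: on

      # or, the explicit handler form used by older SDKs:
      handlers:
      - url: /remote_api
        script: $PYTHON_LIB/google/appengine/ext/remote_api/handler.py
        login: admin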

    Read the article

  • Squid handling of concurrent cache misses

    - by Oliver H-H
    We're using a Squid cache to off-load traffic from our web servers, i.e. it's set up as a reverse proxy responding to inbound requests before they hit our web servers. When we get blitzed with concurrent requests for the same resource that's not in the cache, Squid proxies all the requests through to our web ("origin") servers. For us, this behavior isn't ideal: our origin servers get bogged down trying to fulfill N identical requests concurrently. Instead, we'd like the first request to proxy through to the origin server, the rest of the requests to queue at the Squid layer, and then all of them to be fulfilled by Squid when the origin server has responded to that first request. Does anyone know how to configure Squid to do this? We've read through the documentation multiple times and thoroughly web-searched the topic, but can't figure out how to do it. We use Akamai too and, interestingly, this is its default behavior. (However, Akamai has so many nodes that we still see lots of concurrent requests in certain traffic-spike scenarios, even with Akamai's super-node feature enabled.) This behavior is clearly configurable for some other caches; e.g. the Ehcache documentation offers the option "Concurrent Cache Misses: A cache miss will cause the filter chain, upstream of the caching filter to be processed. To avoid threads requesting the same key to do useless duplicate work, these threads block behind the first thread." Some folks call this behavior a "blocking cache," since the subsequent concurrent requests block behind the first request until it's fulfilled or timed out. Thx for looking over my noob question! Oliver
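
    If it helps, the Squid option usually pointed to for this is collapsed_forwarding (present in Squid 2.6/2.7; it was dropped from the early 3.x releases and reintroduced later), which makes concurrent misses for the same URL wait on a single forwarded request. A minimal squid.conf sketch, not a full reverse-proxy configuration:

      # squid.conf (accelerator / reverse-proxy setup assumed)
      collapsed_forwarding on

    Whether the response can actually be shared with the waiting requests still depends on it being cacheable, so the origin's Cache-Control headers matter as much as the directive itself.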

    Read the article

  • Saving a record in Authlogic table

    - by denniss
    I am using Authlogic to do my authentication. The current model that serves as the authentication model is the User model. I want to add a "belongs to" relationship to User, which means I need a foreign key in the users table. Say the foreign key is called car_id in the User model. However, for some reason, when I do

      u = User.find(1)
      u.car_id = 1
      u.save!

    I get

      ActiveRecord::RecordInvalid: Validation failed: Password can't be blank

    My guess is that this has something to do with Authlogic. I do not have a password validation on the User model. This is the migration for the users table:

      def self.up
        create_table :users do |t|
          t.string   :email
          t.string   :first_name
          t.string   :last_name
          t.string   :crypted_password
          t.string   :password_salt
          t.string   :persistence_token
          t.string   :single_access_token
          t.string   :perishable_token
          t.integer  :login_count, :null => false, :default => 0        # optional, see Authlogic::Session::MagicColumns
          t.integer  :failed_login_count, :null => false, :default => 0 # optional, see Authlogic::Session::MagicColumns
          t.datetime :last_request_at    # optional, see Authlogic::Session::MagicColumns
          t.datetime :current_login_at   # optional, see Authlogic::Session::MagicColumns
          t.datetime :last_login_at      # optional, see Authlogic::Session::MagicColumns
          t.string   :current_login_ip   # optional, see Authlogic::Session::MagicColumns
          t.string   :last_login_ip      # optional, see Authlogic::Session::MagicColumns
          t.timestamps
        end
      end

    And later I added the car_id column to it:

      def self.up
        add_column :users, :user_id, :integer
      end

    Is there any way for me to turn off this validation?
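
    A couple of hedged workarounds (assumptions, not verified against this exact schema): if that particular row was created without going through Authlogic, its crypted_password is blank, so Authlogic's built-in password validations fire on any later save. You can either skip validations for this one update or tell acts_as_authentic not to validate the password field.

      # one-off update that bypasses validations (Rails 2.x)
      u = User.find(1)
      u.car_id = 1
      u.save(false)

      # or, in the model - validate_password_field is an Authlogic configuration
      # option (check that the version in use supports it)
      class User < ActiveRecord::Base
        acts_as_authentic do |c|
          c.validate_password_field = false
        end
      end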

    Read the article

  • jquery .submit live click runs more than once

    - by fxuser
    I use the following code to run my form AJAX requests, but when I use the live selector on a button I can see the AJAX response fire 1 time, then if I retry it fires 2 times, 3 times, 4 times and so on... I use .live because I also have a feature to add a post that appears instantly, so the user can remove it without refreshing the page... which leads to the above problem. Using .click could solve this, but it's not the ideal solution I'm looking for.

      jQuery.fn.postAjax = function (success_callback, show_confirm) {
          this.submit(function (e) {
              e.preventDefault();
              if (show_confirm == true) {
                  if (confirm('Are you sure you want to delete this item? You can\'t undo this.')) {
                      $.post(this.action, $(this).serialize(), $.proxy(success_callback, this));
                  }
              } else {
                  $.post(this.action, $(this).serialize(), $.proxy(success_callback, this));
              }
              return false;
          });
          return this;
      };

      $(document).ready(function () {
          $(".delete_button").live('click', function () {
              $(this).parent().postAjax(function (data) {
                  if (data.error == true) {
                  } else {
                  }
              }, true);
          });
      });

    EDIT: a temporary solution is to change

      this.submit(function(e) {

    to

      this.unbind('submit').bind('submit', function(e) {

    but the problem is how can I protect it for real, because people who know how to use Firebug (or the equivalent tool in other browsers) can easily alter my JavaScript code and re-create the problem.
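
    A different way to structure this (a sketch under the assumption that each deletable post is wrapped in a form with a "delete_form" class inside a ".post" container - not the poster's actual markup): bind one delegated submit handler at load time instead of attaching a new submit handler on every button click, so the handler count can never grow no matter how many times the button is pressed.

      $(document).ready(function () {
          // one delegated handler for all current and future delete forms
          $("form.delete_form").live("submit", function (e) {
              e.preventDefault();
              if (!confirm("Are you sure you want to delete this item? You can't undo this.")) {
                  return false;
              }
              var form = this;
              $.post(form.action, $(form).serialize(), function (data) {
                  if (data.error == true) {
                      // show an error message
                  } else {
                      // e.g. remove the deleted post from the page
                      $(form).closest(".post").remove();
                  }
              });
              return false;
          });
      });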

    Read the article

  • Can't start the portable version of NetBeans 6.9.1 IDE

    - by Coder
    I downloaded the portable version of NetBeans, netbeans-6.9.1-201007282301-ml.zip, from the NetBeans site and changed the config file in etc/netbeans.conf as indicated on the NetBeans site. The file contents are below:

      # ${HOME} will be replaced by JVM user.home system property
      #netbeans_default_userdir="${HOME}/.netbeans/6.9"
      netbeans_default_userdir=".netbeans/6.9"

      # Options used by NetBeans launcher by default, can be overridden by explicit
      # command line switches:
      netbeans_default_options="-J-client -J-Xss2m -J-Xms32m -J-XX:PermSize=32m -J-XX:MaxPermSize=200m -J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true -J-Dsun.java2d.noddraw=true"
      # Note that a default -Xmx is selected for you automatically.
      # You can find this value in var/log/messages.log file in your userdir.
      # The automatically selected value can be overridden by specifying -J-Xmx here
      # or on the command line.

      # If you specify the heap size (-Xmx) explicitely, you may also want to enable
      # Concurrent Mark & Sweep garbage collector. In such case add the following
      # options to the netbeans_default_options:
      # -J-XX:+UseConcMarkSweepGC -J-XX:+CMSClassUnloadingEnabled -J-XX:+CMSPermGenSweepingEnabled
      # (see http://wiki.netbeans.org/wiki/view/FaqGCPauses)

      # Default location of JDK, can be overridden by using --jdkhome <dir>:
      #netbeans_jdkhome="/path/to/jdk"
      netbeans_jdkhome="C:\Program Files\Java\jdk1.6.0_24\"

      # Additional module clusters, using ${path.separator} (';' on Windows or ':' on Unix):
      #netbeans_extraclusters="/absolute/path/to/cluster1:/absolute/path/to/cluster2"

      # If you have some problems with detect of proxy settings, you may want to enable
      # detect the proxy settings provided by JDK5 or higher.
      # In such case add -J-Djava.net.useSystemProxies=true to the netbeans_default_options.

    But it refuses to start when I try to run it. If I change the JDK path to something incorrect, it complains that it can't find the JDK, so I think the JDK path is correct. It also creates a .netbeans directory when I try to start it. I don't see any errors; it just doesn't do anything else observable. Does anybody know how to set up this version of NetBeans? Thanks.
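
    Two details in that file worth ruling out (assumptions on my part, not a confirmed diagnosis): the trailing backslash in the jdkhome value sits right before the closing quote and may be treated as escaping it, and the relative userdir ".netbeans/6.9" depends on whatever working directory the launcher happens to start in. A variant to try:

      # use a userdir the launcher can always resolve
      netbeans_default_userdir="${HOME}/.netbeans/6.9"

      # drop the trailing backslash from the JDK path
      netbeans_jdkhome="C:\Program Files\Java\jdk1.6.0_24"

    If it still exits silently, running bin\netbeans.exe from a command prompt and checking var\log\messages.log under the userdir (the file the config comments mention) usually surfaces the actual startup error.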

    Read the article

  • WIKI replacement solution for SharePoint?

    - by Jakub
    I'm trying to research a replacement for the pathetic wiki that comes with WSS (the only wiki markup it supports is creating URL links). I have looked at a few, but most "replacements" I see are MOSS only (or at least they state MOSS in the requirements). Has anyone faced this situation? What did you end up using? I would like something that I can have all in one location (not different apps, hence WSS), with LDAP/AD integration like WSS. Thanks, I appreciate any input. I would like to see solutions around $3k or less (nothing super expensive, which is why we don't run MOSS). EDIT: Anyone else have any suggestions? EDIT 2: Since I haven't had much feedback (thanks to those that have responded), I installed MediaWiki under IIS with PHP enabled, and enabled the IIS AD hack for authentication. IIS prompts for authentication (user/pass) if you use a non-IE browser, then it sets the $_SERVER["REMOTE_USER"] variable and grabs some AD info (groups etc.). It works rather well; the only issue so far is the ugly URLs. But it's fully working and seems like a good setup, other than having to rely on MySQL (my company strives to be mainly SQL Server).
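
    On the ugly-URLs point, a small hedged aside (standard MediaWiki settings rather than anything specific to this IIS setup): without any URL-rewriting rules you can at least get /index.php/Page_Title style links by setting the article path in LocalSettings.php.

      # LocalSettings.php
      $wgScriptPath  = "/wiki";                        # assumed install path
      $wgArticlePath = "$wgScriptPath/index.php/$1";   # prettier URLs without rewrite rules
      $wgUsePathInfo = true;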

    Read the article

  • web.config + asp.net MVC + location > system.web > authorization + Integrated Security

    - by vdh_ant
    Hi guys, I have an ASP.NET MVC app using Integrated Security, and I need to be able to grant open access to a specific route. The route in question is '~/Agreements/Upload', and the config I have set up looks like this:

      <configuration>
        ...
        <location path="~/Agreements/Upload">
          <system.web>
            <authorization>
              <allow users="*"/>
            </authorization>
          </system.web>
        </location>
        ...
      </configuration>

    I have tried a few things and nothing has worked thus far. In IIS, under Directory Security > Authentication Methods, I only have "Integrated Windows Authentication" selected. Now this could be part of my problem (even if the config above allows everyone, IIS itself still demands Windows authentication). But if that's the case, how do I configure it so that Integrated Security works but people who aren't authenticated can still access the given route? Cheers, Anthony
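
    Two things a sketch of the usual setup would change (hedged; the route name is taken from the question): the path attribute of the location element is relative to the application and does not understand the "~/" prefix, and anonymous requests can only reach the route if IIS lets them through in the first place (i.e. Anonymous access enabled alongside Integrated Windows Authentication, at least for that path), otherwise IIS rejects the request before ASP.NET authorization ever runs.

      <location path="Agreements/Upload">
        <system.web>
          <authorization>
            <!-- "*" = everyone, including anonymous; "?" would mean anonymous only -->
            <allow users="*" />
          </authorization>
        </system.web>
      </location>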

    Read the article
