Search Results

Search found 4748 results on 190 pages for 'oauth provider'.


  • Integrating with a payment provider; Proper and robust OOP approach

    - by ExternalUse
    History: We are currently using a so-called redirect model for our online payments (we send the payer to a payment gateway, where he inputs his payment details; the gateway then returns him to a success/failure callback page). That's easy and straightforward, but unfortunately quite inconvenient and at times confusing for our customers (leaving the site, changing their credit card details with an additional login on another site, etc.).

    Intention & problem description: We now intend to switch to an integrated approach using an exchange of XML requests and responses. My problem is how to cater for all (or rather most) of the things that may happen during processing, bearing in mind that normally simplicity is robust whereas complexity is fragile. Examples:

    User abort: The user inputs credit card details and hits submit. An XML message is sent to the provider's gateway and we are waiting for the response. The user hits "stop" in his browser or closes the window. ignore_user_abort() in PHP may be an option - but is that reliable? Might it be better to redirect the user to a "please wait" page, which in turn opens an AJAX or other request to the actual processor so that processing does not depend on the user's connection?

    Database goes away: This sounds over-complicated, but with e.g. a web server in the States and a DB in the UK, it has happened and will happen again: the user puts together his order, the payment request has been sent to the provider, but the response cannot be stored in the database. What approach could I use in PHP to start something like an SQL transaction that only gets committed or rolled back at the very end, depending on the individual steps? Should neither a commit nor a rollback happen, I could somehow "lock" the user to prevent him from paying again or from payments being improperly accounted for - but how?

    And what else do I need to consider technically? None of the integration examples of e.g. Worldpay, Realex or SagePay offer any insight, and neither Google nor my search terms were good enough to find somebody else's thoughts on this. Thank you very much for any insight on how you would approach this!
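    A minimal sketch of one way to handle the "database goes away" case, assuming a PHP/PDO stack - the payment_attempts table, the helper functions and the $gateway client below are hypothetical placeholders, not any particular provider's API. The idea is to record the attempt as pending before contacting the gateway, store the gateway's verdict afterwards, and reconcile anything left pending out of band rather than holding an SQL transaction open across the gateway round-trip:

```php
<?php
// Sketch only: table name, helper functions and the $gateway client are hypothetical.
// 1. Record the attempt as 'pending' BEFORE talking to the gateway.
// 2. Only mark it 'paid' or 'failed' once the gateway response is safely stored.
// 3. Anything still 'pending' is reconciled later and blocks a second payment attempt.

function create_pending_payment(PDO $db, $orderId, $reference)
{
    $stmt = $db->prepare(
        "INSERT INTO payment_attempts (order_id, reference, status) VALUES (?, ?, 'pending')");
    $stmt->execute(array($orderId, $reference));
}

function record_gateway_result(PDO $db, $reference, $status, $rawXml)
{
    $stmt = $db->prepare(
        "UPDATE payment_attempts SET status = ?, gateway_response = ? WHERE reference = ?");
    $stmt->execute(array($status, $rawXml, $reference));
}

ignore_user_abort(true);                  // keep processing even if the browser disconnects
$reference = uniqid('pay_', true);        // also sent to the gateway, so both sides can reconcile
create_pending_payment($db, $orderId, $reference);

$responseXml = $gateway->send($requestXml);        // hypothetical XML gateway client
$status = parse_gateway_status($responseXml);      // hypothetical parser: 'paid' or 'failed'

try {
    record_gateway_result($db, $reference, $status, $responseXml);
} catch (PDOException $e) {
    // The DB went away after the money moved: persist the response locally and let a
    // cron job reconcile it later; the order stays 'pending' and cannot be paid twice.
    file_put_contents('/var/spool/payments/' . $reference . '.xml', $responseXml);
}
```

    In this sketch the unique reference doubles as an idempotency key: an order with a pending or paid attempt is never offered the payment form again, which covers both the user-abort and the database-outage cases without a distributed transaction.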

    Read the article

  • A ToDynamic() Extension Method For Fluent Reflection

    - by Dixin
    Recently I needed to demonstrate some code with reflection, but I felt it inconvenient and tedious. To simplify the reflection coding, I created a ToDynamic() extension method. The source code can be downloaded from here. Problem One example for complex reflection is in LINQ to SQL. The DataContext class has a property Privider, and this Provider has an Execute() method, which executes the query expression and returns the result. Assume this Execute() needs to be invoked to query SQL Server database, then the following code will be expected: using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // Executes the query. Here reflection is required, // because Provider, Execute(), and ReturnValue are not public members. IEnumerable<Product> results = database.Provider.Execute(query.Expression).ReturnValue; // Processes the results. foreach (Product product in results) { Console.WriteLine("{0}, {1}", product.ProductID, product.ProductName); } } Of course, this code cannot compile. And, no one wants to write code like this. Again, this is just an example of complex reflection. using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // database.Provider PropertyInfo providerProperty = database.GetType().GetProperty( "Provider", BindingFlags.NonPublic | BindingFlags.GetProperty | BindingFlags.Instance); object provider = providerProperty.GetValue(database, null); // database.Provider.Execute(query.Expression) // Here GetMethod() cannot be directly used, // because Execute() is a explicitly implemented interface method. Assembly assembly = Assembly.Load("System.Data.Linq"); Type providerType = assembly.GetTypes().SingleOrDefault( type => type.FullName == "System.Data.Linq.Provider.IProvider"); InterfaceMapping mapping = provider.GetType().GetInterfaceMap(providerType); MethodInfo executeMethod = mapping.InterfaceMethods.Single(method => method.Name == "Execute"); IExecuteResult executeResult = executeMethod.Invoke(provider, new object[] { query.Expression }) as IExecuteResult; // database.Provider.Execute(query.Expression).ReturnValue IEnumerable<Product> results = executeResult.ReturnValue as IEnumerable<Product>; // Processes the results. foreach (Product product in results) { Console.WriteLine("{0}, {1}", product.ProductID, product.ProductName); } } This may be not straight forward enough. So here a solution will implement fluent reflection with a ToDynamic() extension method: IEnumerable<Product> results = database.ToDynamic() // Starts fluent reflection. .Provider.Execute(query.Expression).ReturnValue; C# 4.0 dynamic In this kind of scenarios, it is easy to have dynamic in mind, which enables developer to write whatever code after a dot: using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // database.Provider dynamic dynamicDatabase = database; dynamic results = dynamicDatabase.Provider.Execute(query).ReturnValue; } This throws a RuntimeBinderException at runtime: 'System.Data.Linq.DataContext.Provider' is inaccessible due to its protection level. 
Here dynamic is able find the specified member. So the next thing is just writing some custom code to access the found member. .NET 4.0 DynamicObject, and DynamicWrapper<T> Where to put the custom code for dynamic? The answer is DynamicObject’s derived class. I first heard of DynamicObject from Anders Hejlsberg's video in PDC2008. It is very powerful, providing useful virtual methods to be overridden, like: TryGetMember() TrySetMember() TryInvokeMember() etc.  (In 2008 they are called GetMember, SetMember, etc., with different signature.) For example, if dynamicDatabase is a DynamicObject, then the following code: dynamicDatabase.Provider will invoke dynamicDatabase.TryGetMember() to do the actual work, where custom code can be put into. Now create a type to inherit DynamicObject: public class DynamicWrapper<T> : DynamicObject { private readonly bool _isValueType; private readonly Type _type; private T _value; // Not readonly, for value type scenarios. public DynamicWrapper(ref T value) // Uses ref in case of value type. { if (value == null) { throw new ArgumentNullException("value"); } this._value = value; this._type = value.GetType(); this._isValueType = this._type.IsValueType; } public override bool TryGetMember(GetMemberBinder binder, out object result) { // Searches in current type's public and non-public properties. PropertyInfo property = this._type.GetTypeProperty(binder.Name); if (property != null) { result = property.GetValue(this._value, null).ToDynamic(); return true; } // Searches in explicitly implemented properties for interface. MethodInfo method = this._type.GetInterfaceMethod(string.Concat("get_", binder.Name), null); if (method != null) { result = method.Invoke(this._value, null).ToDynamic(); return true; } // Searches in current type's public and non-public fields. FieldInfo field = this._type.GetTypeField(binder.Name); if (field != null) { result = field.GetValue(this._value).ToDynamic(); return true; } // Searches in base type's public and non-public properties. property = this._type.GetBaseProperty(binder.Name); if (property != null) { result = property.GetValue(this._value, null).ToDynamic(); return true; } // Searches in base type's public and non-public fields. field = this._type.GetBaseField(binder.Name); if (field != null) { result = field.GetValue(this._value).ToDynamic(); return true; } // The specified member is not found. result = null; return false; } // Other overridden methods are not listed. } In the above code, GetTypeProperty(), GetInterfaceMethod(), GetTypeField(), GetBaseProperty(), and GetBaseField() are extension methods for Type class. For example: internal static class TypeExtensions { internal static FieldInfo GetBaseField(this Type type, string name) { Type @base = type.BaseType; if (@base == null) { return null; } return @base.GetTypeField(name) ?? @base.GetBaseField(name); } internal static PropertyInfo GetBaseProperty(this Type type, string name) { Type @base = type.BaseType; if (@base == null) { return null; } return @base.GetTypeProperty(name) ?? 
@base.GetBaseProperty(name); } internal static MethodInfo GetInterfaceMethod(this Type type, string name, params object[] args) { return type.GetInterfaces().Select(type.GetInterfaceMap).SelectMany(mapping => mapping.TargetMethods) .FirstOrDefault( method => method.Name.Split('.').Last().Equals(name, StringComparison.Ordinal) && method.GetParameters().Count() == args.Length && method.GetParameters().Select( (parameter, index) => parameter.ParameterType.IsAssignableFrom(args[index].GetType())).Aggregate( true, (a, b) => a && b)); } internal static FieldInfo GetTypeField(this Type type, string name) { return type.GetFields( BindingFlags.GetField | BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic).FirstOrDefault( field => field.Name.Equals(name, StringComparison.Ordinal)); } internal static PropertyInfo GetTypeProperty(this Type type, string name) { return type.GetProperties( BindingFlags.GetProperty | BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic).FirstOrDefault( property => property.Name.Equals(name, StringComparison.Ordinal)); } // Other extension methods are not listed. } So now, when invoked, TryGetMember() searches the specified member and invoke it. The code can be written like this: dynamic dynamicDatabase = new DynamicWrapper<NorthwindDataContext>(ref database); dynamic dynamicReturnValue = dynamicDatabase.Provider.Execute(query.Expression).ReturnValue; This greatly simplified reflection. ToDynamic() and fluent reflection To make it even more straight forward, A ToDynamic() method is provided: public static class DynamicWrapperExtensions { public static dynamic ToDynamic<T>(this T value) { return new DynamicWrapper<T>(ref value); } } and a ToStatic() method is provided to unwrap the value: public class DynamicWrapper<T> : DynamicObject { public T ToStatic() { return this._value; } } In the above TryGetMember() method, please notice it does not output the member’s value, but output a wrapped member value (that is, memberValue.ToDynamic()). This is very important to make the reflection fluent. Now the code becomes: IEnumerable<Product> results = database.ToDynamic() // Here starts fluent reflection. .Provider.Execute(query.Expression).ReturnValue .ToStatic(); // Unwraps to get the static value. With the help of TryConvert(): public class DynamicWrapper<T> : DynamicObject { public override bool TryConvert(ConvertBinder binder, out object result) { result = this._value; return true; } } ToStatic() can be omitted: IEnumerable<Product> results = database.ToDynamic() .Provider.Execute(query.Expression).ReturnValue; // Automatically converts to expected static value. Take a look at the reflection code at the beginning of this post again. Now it is much much simplified! Special scenarios In 90% of the scenarios ToDynamic() is enough. But there are some special scenarios. Access static members Using extension method ToDynamic() for accessing static members does not make sense. Instead, DynamicWrapper<T> has a parameterless constructor to handle these scenarios: public class DynamicWrapper<T> : DynamicObject { public DynamicWrapper() // For static. { this._type = typeof(T); this._isValueType = this._type.IsValueType; } } The reflection code should be like this: dynamic wrapper = new DynamicWrapper<StaticClass>(); int value = wrapper._value; int result = wrapper.PrivateMethod(); So accessing static member is also simple, and fluent of course. Change instances of value types Value type is much more complex. 
The main problem is, value type is copied when passing to a method as a parameter. This is why ref keyword is used for the constructor. That is, if a value type instance is passed to DynamicWrapper<T>, the instance itself will be stored in this._value of DynamicWrapper<T>. Without the ref keyword, when this._value is changed, the value type instance itself does not change. Consider FieldInfo.SetValue(). In the value type scenarios, invoking FieldInfo.SetValue(this._value, value) does not change this._value, because it changes the copy of this._value. I searched the Web and found a solution for setting the value of field: internal static class FieldInfoExtensions { internal static void SetValue<T>(this FieldInfo field, ref T obj, object value) { if (typeof(T).IsValueType) { field.SetValueDirect(__makeref(obj), value); // For value type. } else { field.SetValue(obj, value); // For reference type. } } } Here __makeref is a undocumented keyword of C#. But method invocation has problem. This is the source code of TryInvokeMember(): public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result) { if (binder == null) { throw new ArgumentNullException("binder"); } MethodInfo method = this._type.GetTypeMethod(binder.Name, args) ?? this._type.GetInterfaceMethod(binder.Name, args) ?? this._type.GetBaseMethod(binder.Name, args); if (method != null) { // Oops! // If the returnValue is a struct, it is copied to heap. object resultValue = method.Invoke(this._value, args); // And result is a wrapper of that copied struct. result = new DynamicWrapper<object>(ref resultValue); return true; } result = null; return false; } If the returned value is of value type, it will definitely copied, because MethodInfo.Invoke() does return object. If changing the value of the result, the copied struct is changed instead of the original struct. And so is the property and index accessing. They are both actually method invocation. For less confusion, setting property and index are not allowed on struct. Conclusions The DynamicWrapper<T> provides a simplified solution for reflection programming. It works for normal classes (reference types), accessing both instance and static members. In most of the scenarios, just remember to invoke ToDynamic() method, and access whatever you want: StaticType result = someValue.ToDynamic()._field.Method().Property[index]; In some special scenarios which requires changing the value of a struct (value type), this DynamicWrapper<T> does not work perfectly. Only changing struct’s field value is supported. The source code can be downloaded from here, including a few unit test code.

    Read the article

  • SQL 2000 Not Supported by .NET Framework Data Provider for SQL Server in VS2010's Server Explorer D

    - by Canoehead
    Just tried creating a data connection to a SQL 2000 database in VS2010's Server Explorer using a .NET Framework Data Provider for SQL Server (versus OLE) and found that it didn't work. VS2010 complained that I had to use SQL Server 2005 and up. This used to work in VS2008 (using .NET Framework Data Provider for SQL Server instead of the .NET Framework Data Provider for OLE DB). Is this just a VS2010 restriction or has the ability to connect to SQL 2000 with .NET Framework Data Provider for SQL Server been obsoleted in a post-2.0 version of .NET being used by VS2010? Anyone know why this was done by MS (please don't speculate - I can do that myself ;)?

    Read the article

  • Is there a more easy way to create a WCF/OData Data Service Query Provider?

    - by routeNpingme
    I have a simple little data model resembling the following:

```csharp
InventoryContext
{
    IEnumerable<Computer> GetComputers()
    IEnumerable<Printer> GetPrinters()
}

Computer
{
    public string ComputerName { get; set; }
    public string Location { get; set; }
}

Printer
{
    public string PrinterName { get; set; }
    public string Location { get; set; }
}
```

    The results come from a non-SQL source, so this data does not come from Entity Framework connected up to a database. Now I want to expose the data through a WCF OData service. The only way I've found to do that thus far is creating my own Data Service Query Provider, per this blog tutorial: http://blogs.msdn.com/alexj/archive/2010/01/04/creating-a-data-service-provider-part-1-intro.aspx ... which is great, but seems like a pretty involved undertaking. The code for the provider would be 4 times longer than my whole data model to generate all of the resource sets and property definitions. Is there something like a generic provider in between Entity Framework and writing your own data source from zero? Maybe some way to build an object data source or something, so that the magical WCF unicorns can pick up my data and ride off into the sunset without having to explicitly code the provider?
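    For what it's worth, WCF Data Services can infer the metadata from plain CLR classes via its reflection provider, so a hand-written query provider may not be needed at all. A rough sketch under that assumption - the [DataServiceKey] attributes, the IQueryable properties and the sample data are the additions to the model above, and the stub methods stand in for the real non-SQL source:

```csharp
using System.Collections.Generic;
using System.Data.Services;
using System.Data.Services.Common;
using System.Linq;

[DataServiceKey("ComputerName")]
public class Computer
{
    public string ComputerName { get; set; }
    public string Location { get; set; }
}

[DataServiceKey("PrinterName")]
public class Printer
{
    public string PrinterName { get; set; }
    public string Location { get; set; }
}

public class InventoryContext
{
    // The IQueryable properties become the entity sets ("Computers", "Printers");
    // the reflection provider builds the resource sets and properties from them.
    public IQueryable<Computer> Computers
    {
        get { return GetComputers().AsQueryable(); }
    }

    public IQueryable<Printer> Printers
    {
        get { return GetPrinters().AsQueryable(); }
    }

    // Stand-ins for the real non-SQL source.
    private static IEnumerable<Computer> GetComputers()
    {
        yield return new Computer { ComputerName = "PC-01", Location = "Front desk" };
    }

    private static IEnumerable<Printer> GetPrinters()
    {
        yield return new Printer { PrinterName = "HP-01", Location = "Front desk" };
    }
}

// Hosted as a .svc endpoint; read-only access to every entity set.
public class InventoryDataService : DataService<InventoryContext>
{
    public static void InitializeService(IDataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
    }
}
```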

    Read the article

  • Should we develop a custom membership provider in this case?

    - by Allen
    I'll be adding a bounty to this, probably 200, more if you think it's appropriate. I won't accept an answer until I can add a bounty, so feel free to go ahead and answer now.

    Summary: Long story short, we've been tasked with gutting the authentication and authorization parts of a fairly old and bloated ASP.NET application that previously had all of these components written from scratch. Since our application isn't a typical one, and none of us have experience with ASP.NET's built-in membership provider, we're not sure if we should roll our own authentication and authorization again or if we should work within the ASP.NET membership provider mindset and develop our own membership provider.

    Our application: We have a fairly old ASP.NET application that gets installed at customer locations to service clients on a LAN. Admins create users (users do not sign up) and, depending on the install, we may have the software integrated with LDAP. Currently, the LDAP integration bulk-imports the users into our database and, when they log in, it authenticates against LDAP so we don't have to manage their passwords. Nothing amazing there. Admins can assign users to one group, and they can change the authorization of that group to manage access to various parts of the software. Groups are maintained by admins (web-based UI) and, as said earlier, granted or denied permissions to certain functionality within the application. All this was written from the ground up without using any of the built-in .NET authorization or authentication. We literally have IsLoggedIn() methods that check for login and redirect to our login page if the user isn't.

    Our rewrite: We've been tasked to integrate more tightly with LDAP. They want us to tie groups in our application to groups (or whatever type of container LDAP uses) in LDAP, so that when a customer opts to use our LDAP integration, they don't have to manage their users in LDAP and in our application. The new way, they will simply create users in LDAP and add them to groups in LDAP, and our application will see that they belong to the appropriate LDAP group and authenticate and authorize them. In addition, we've been given the go-ahead to completely rip out the user authentication and authorization code and re-do it.

    Our problem: The problem is that none of us have any experience with the ASP.NET membership provider functionality. The little bit of exposure I have to it makes me worry that it was not intended to be used for an application such as ours; though developing our own ASP.NET membership provider and role manager sounds like it would be a great experience and most likely the appropriate thing to do. Basically, I'm looking for advice: should we be using the ASP.NET membership provider & role management API, or should we continue to roll our own? I know this decision will be influenced by our requirements, so I'm going over them below.

    Our requirements: Just a quick and dirty list:

    - Maintain the ability to have a DB of users and authenticate them, and give admins (only, not users) the ability to CRUD users.
    - Allow the site to integrate with LDAP; when this is chosen, they don't want any users stored in the DB, only the relationship between groups as they exist in our app/DB and the groups/containers as they exist in LDAP.
    - .NET 3.5 is being used (mix of ASP.NET WebForms and ASP.NET MVC).
    - Has to work in ASP.NET and ASP.NET MVC (shouldn't be a problem, I'm guessing).
    - This can't be user-centric; administrators need to be the only ones that CRUD (or import via LDAP) users and groups.
    - We have to be able to authenticate via LDAP when it's configured to do so.

    I always try to monitor my questions closely, so feel free to ask for more info. Also, a general summary of what I'm looking for in an answer is just: "You should/shouldn't use xyz, here's why." Links regarding ASP.NET membership provider and role management are very welcome; most of the stuff I'm finding is 5+ years old.

    Edit: Added some stuff to "Our rewrite".
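    For a sense of what "developing our own provider" actually entails on the role side, here is a bare-bones, read-only RoleProvider sketch. The IGroupStore abstraction and its LDAP/DB implementations are hypothetical stand-ins, not part of the framework; the MembershipProvider half is analogous but has considerably more abstract members, most of which an admin-managed system would leave throwing NotSupportedException:

```csharp
using System;
using System.Linq;
using System.Web.Security;

// Hypothetical abstraction over "where groups live" (local DB tables or the LDAP mapping).
public interface IGroupStore
{
    string[] GroupsFor(string username);
    string[] AllGroups();
    string[] UsersIn(string groupName);
}

// Read-only RoleProvider: admins (or LDAP) manage groups, so mutating members are unsupported.
public class AdminManagedRoleProvider : RoleProvider
{
    // In a real provider this would be chosen from web.config (LDAP mode vs. local-DB mode).
    public static IGroupStore Store { get; set; }

    private string applicationName = "MyApp";
    public override string ApplicationName
    {
        get { return applicationName; }
        set { applicationName = value; }
    }

    public override string[] GetRolesForUser(string username)
    {
        return Store.GroupsFor(username);
    }

    public override bool IsUserInRole(string username, string roleName)
    {
        return Store.GroupsFor(username).Contains(roleName, StringComparer.OrdinalIgnoreCase);
    }

    public override string[] GetAllRoles()
    {
        return Store.AllGroups();
    }

    public override bool RoleExists(string roleName)
    {
        return Store.AllGroups().Contains(roleName, StringComparer.OrdinalIgnoreCase);
    }

    public override string[] GetUsersInRole(string roleName)
    {
        return Store.UsersIn(roleName);
    }

    public override string[] FindUsersInRole(string roleName, string usernameToMatch)
    {
        return Store.UsersIn(roleName)
                    .Where(u => u.IndexOf(usernameToMatch, StringComparison.OrdinalIgnoreCase) >= 0)
                    .ToArray();
    }

    // Group maintenance happens in the existing admin UI or in LDAP, never through this API.
    public override void CreateRole(string roleName) { throw new NotSupportedException(); }
    public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new NotSupportedException(); }
    public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
    public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
}
```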

    Read the article

  • How do I write a Guice Provider that doesn't explicitly create objects?

    - by ripper234
    Say I have a ClassWithManyDependencies. I want to write a Guice Provider for this class, in order to create a fresh instance of the class several times in my program (another class will depend on this Provider and use it at several points to create new instances). One way to achieve this is by having the Provider depend on all the dependencies of ClassWithManyDependencies. This is quite ugly. Is there a better way to achieve this? Note - I certainly don't want the Provider to depend on the injector. Another option I considered is having ClassWithManyDependencies and ClassWithManyDependenciesProvider extend the same base class, but it's butt ugly.
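    For what it's worth, Guice can already inject a com.google.inject.Provider<T> for any bound type, so a hand-written provider (and any dependency on the injector) may be unnecessary. A rough sketch with placeholder class names, assuming ClassWithManyDependencies is itself injectable:

```java
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;
import com.google.inject.Provider;

// Placeholder dependencies standing in for the "many dependencies".
class DependencyA {}
class DependencyB {}

class ClassWithManyDependencies {
    @Inject
    ClassWithManyDependencies(DependencyA a, DependencyB b /* , ... */) {}
}

// The consumer asks Guice for a Provider instead of a single instance;
// Guice synthesizes the Provider and wires all of the dependencies itself.
class NeedsFreshInstances {
    private final Provider<ClassWithManyDependencies> freshInstances;

    @Inject
    NeedsFreshInstances(Provider<ClassWithManyDependencies> freshInstances) {
        this.freshInstances = freshInstances;
    }

    void run() {
        ClassWithManyDependencies first = freshInstances.get();   // new instance
        ClassWithManyDependencies second = freshInstances.get();  // another new instance
    }
}

public class ProviderDemo {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector();
        injector.getInstance(NeedsFreshInstances.class).run();
    }
}
```

    Because ClassWithManyDependencies is unscoped, each get() call builds a fresh, fully wired instance without the consumer ever seeing its dependencies.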

    Read the article

  • ASP.NET/IIS Fix: The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine.

    - by Ken Cox [MVP]
    In my latest ASP.NET project, I refresh the sample data using an Excel spreadsheet from the client. After upgrading to Windows Server 2008 R2, I suddenly discovered this error: The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine. The error message is totally bogus! The problem is that I’m running IIS on a 64-bit machine and the ol’ OLEDB thingy just isn’t up with the times. To fix it, go into the IIS Manager and find out which Application Pool the site is using. In my case...(read more)
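    On 64-bit IIS the usual resolution - there being no 64-bit Jet 4.0 provider - is to let that site's application pool run in 32-bit mode; roughly like this, with the pool name assumed:

```
%windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /enable32BitAppOnWin64:true
```

    The same switch is exposed in IIS Manager as the application pool's "Enable 32-Bit Applications" advanced setting.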

    Read the article

  • Who Provides Internet Service for My Internet Service Provider?

    - by Jason Fitzpatrick
    You pay your Internet Service Provider (ISP) for internet access, and they turn on the sweet, sweet fire hose of data for you. But who provides the flow for your ISP? Read on to learn the ins and outs of global data delivery. Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    Read the article

  • Rails Plugins Load Path - I have ActiveRecord Models in a Plugin, How do I load them without Namespa

    - by viatropos
    I have a bunch of models for OAuth services, things like TwitterToken and GoogleToken. There are OAuth versions and OpenID versions for some, so I decided to logically organize my gem like so:

```
lib
lib/my-auth-gem
lib/my-auth-gem/oauth
lib/my-auth-gem/oauth/tokens/google_token
...
lib/my-auth-gem/openid/tokens/google_token
...
```

    I would like to be able to name my models GoogleToken, rather than MyAuthGem::Oauth::Tokens::GoogleToken. How do I do that? This will be for Rails 2.3+ and Rails 3.

    Read the article

  • OFM 11g: OAM SSO for Forms and ADF Faces

    - by olaf.heimburger
    In my blog entry OFM 11g: Implementing OAM SSO with Forms we set the foundation for providing a complete Single Sign-On solution based on Oracle Access Manager (OAM). This foundation should now be used to combine Forms 11g and ADF Faces 11g applications with a transparent login. The Beginning Before we start, lets re-consider the requirements to achieve the ultimate goal. These are:- Access to the Forms 11g Application must be authenticated by OAM (protected). Access to the ADF Faces 11g Application must be authenticated by OAM (protected). Switching from one application to the other should not result in a re-authentication (aka single sign-on). User identity should be availble to the application without any extra work in the application code. All these are the common requirements for a single sign-on solution. The challenge here is that Forms relies on Oracle AS SSO (OSSO or "the old SSO") while ADF Faces is quite open and can be protected by Oracle AS SSO and Oracle Access Manager SSO (OAM SSO or "the modern SSO"). Both application types can use their own login mechanism. The Forms 11g Application To demonstrate the SSO functionality, we use the standard Forms test (/forms/frmservlet?form=test.fmx). Although this shows nothing specific in the Forms application, it is good enough to demonstrate that it is protected. The ADF Faces 11g Application With ADF 11g you can develop quite a number of useful Faces based applications. Among many features, it comes with the ADF Security feature that provides you with functionality to protect your pages, regions, and even TaskFlows from un-authenticated usage in a declarative way.To demonstrate that functionality a sample application with different access levels plus a login dialog is used. This application comes with a publc page that has protected content (a button). Once you are authenticated for the application, the protected content and some personalisation (the users name) is shown. Protecting Forms 11g As already explained in the OFM 11g: Implementing OAM SSO with Forms, the easiest way to protect a Forms application is to configure it as a OSSO partner application, setup mod_osso, test it, migrate OSSO to OAM SSO with the Upgrade Agent, reconfigure mod_osso, and you are done.Sort of. By default the OAM is configured to run in co-exist mode. This means that a user has to re-authenticate to the Forms application when logged into an OAM SSO application before. To avoid this, you must disable the co-exist mode, for example by using WLST and issue the disableCoexistMode on the OAM server. Protecting ADF Faces 11g To protect an ADF Faces 11g application we have to consider two scenarios: Use a HTTPD server in front of WLS Use WLS without a HTTPD server Both scenarios have their pro's and cons' and we won't get into details and just describe how to configure both. Scenario 1: HTTPD Server with WLS In this scenario we have to setup the environment in some steps:- Configure a WebGate at OAMThis configuration can be done through the OAM console or by a script. No matter which way you choose, the WebGate configuration files will be created for you. Install the OAM WebGate into an HTTPD serverThe type of webgate you need to install depends on you HTTPD server. With Oracle HTTP Server 11g you can use the latest OAM 11g WebGate. With other HTTPD servers you must resort to OAM 10g WebGates. A OAM 11g WebGate can use the pre-created configuration files supplied during the WebGate configuration at OAM. 
An OAM 10g WebGate asks for the specific configuration and verifies it during installation. Configure the WLS plugin to forward the requests to WLSAgain, depending on your HTTPD Server you have different plugins to forward requests to WLS. With OHS 11g you can use the pre-installed mod_wl_ohs plugin. Its configuration is quite simple and straightforward. Configure an OAM SSPI Provider as a IdentityAsserter in WLS to retrieve the user identifierThis configuration is quite important as it retrieves the user identifier for the next step. If you have a SOA Suite installation within your OFM_HOME, the necessary software is already installed and you only need to setup your Security Realm within WLS.You can do this by pointing your browser to the WLS Console, log in as administrator, select the Security Realm (usually myrealm), and select Providers. We add the OAMIdentityAsserter as the first SSPI Provider. It is important that the Control Flag is set to SUFFICIENT. Every other configuration can be left as is, no changes are necessary here. Configure an OAM Identity Provider to get the real user identityIn OFM 11g: Implementing OAM SSO with Forms we have configured an OID as Identity Store. To get the user identity we need to configure the same OID as an SSPI Provider for WLS. This will retrieve the real user information from OID and creates the JAAS Subject and Principals to be used by any application within WLS.Again, you can do this by pointing your browser to the WLS Console, log in as administrator, select the Security Realm (usually myrealm), and select Providers. Now add the OIDAuthenticator as the second SSPI Provider. It is important that the Control Flag is set to OPTIONAL. After we saved this setup, we need to configure this provider by setting the Provider Specific details to access OID. Scenario 2: WLS only This scenario is a bit easier but requires more work in the WLS setup:- Configure a WebGate at OAMThis configuration can be done through the OAM console or by a script. No matter which way you choose, the WebGate configuration files will be created for you. Configure the OAM SSPI Provider as IdentityAuthenticator to authenticate and set the user identifierWhen using the OAM SSPI Provider as OAMAuthenticator we create it with the Control Flag as SUFFICIENT. Afte saving it, the Provider Specific settings must be configured to allow the OAM SSPI Provider to connect to the OAM Server. Configure an OAM Identity Provider to get the real user identity providerAgain, you can do this by pointing your browser to the WLS Console, log in as administrator, select the Security Realm (usually myrealm), and select Providers. Now add the OIDAuthenticator as the second SSPI Provider. It is important that the Control Flag is set to OPTIONAL. After we saved this setup, we need to configure this provider by setting the Provider Specific details to access OID. Configure ADF 11g Application for OAM Actually, there are no changes to be made within the ADF application. We only need to add the value CLIENT_CERT to the <auth-mode> tag in the <login-config> tag in the web.xml file. Testing To test the configuration, simply point your browser to one of both appliction URLs. OAM should kick in and redirect you to the OAM Login page. After you have entered the correct credentials, access to the URLs is granted and you will see the application. Enjoy!

    Read the article

  • strange xcode error Could not compile reconstructed dtrace script:

    - by Rahul Vyas
    Hello all, I am getting this error when I try to compile an iPhone app:

```
error: Could not compile reconstructed dtrace script:

provider CorePlot {
    probe layer_position_change(char *,int,int,int,int);
};

#pragma D attributes PRIVATE/PRIVATE/UNKNOWN provider CorePlot provider
#pragma D attributes PRIVATE/PRIVATE/UNKNOWN provider CorePlot module
#pragma D attributes PRIVATE/PRIVATE/UNKNOWN provider CorePlot function
#pragma D attributes PRIVATE/PRIVATE/UNKNOWN provider CorePlot name
#pragma D attributes PRIVATE/PRIVATE/UNKNOWN provider CorePlot args

ld: error creating dtrace DOF section
collect2: ld returned 1 exit status
```

    Read the article

  • What exactly do I have to pay attention for when choosing Windows Hosting Provider?

    - by user850010
    This is my first time choosing a hosting company. It is for a web site made in asp.net mvc3. So I was thinking choosing a provider would be easy since I found this page http://www.microsoft.com/web/Hosting/Home which contains hosting offers. Now hours later, I am still searching. The reason is that as soon as I start investigating about particular company, something stands out that I do not like. Here are some examples what I noticed when checking various companies in more detail: Company "about us" page is lacking in information about their company. Few of them had just general description what they do and nothing else, while some others had information like company name but had no address. Checking company name in Business Registry Searches gave no results. Two of the companies I checked had both company name and address but I was unable to find them in the registry. Putting company domain into Google gave mostly results from that domain or web hosting review sites but not much else. I am assuming that good companies should have search results from other sites too. Low Alexa Traffic Rank. There was one company which had a site that looked very professional but their alexa traffic ranking was like 2 million. Are there any other factors I should pay attention to when choosing a hosting company? Do I have legitimate concerns or am I just too paranoid?

    Read the article

  • SQL 2008/2005 Hosting :: Error - “Named Pipes Provider, error: 40 – Could not open a connection to SQL Server”

    - by mbridge
    When setting up a Microsoft Windows Server 2008 system, I went through the motions to set up IIS, MS SQL Server 2008, and Visual Studio 2010 to use as a test bed. One of the immediate benefits of setting up such a system is that most development can be done remotely: MS SQL Server Management Studio, Visual Studio's web development suite, as well as file shares, remote desktop, etc., make for a great way to remotely develop in 'pristine' conditions. But there are drawbacks, too, such as needing to deal with firewall issues, not being able to penetrate past a router, or the requirement of setting up a VPN. One of the problems I encountered when trying to remote into the MS SQL Server 2008 that I'd set up was the following error:

    Named Pipes Provider, error: 40 - Could not open a connection to SQL Server

    I followed the steps below, and was able to connect to the server after just a few moments of tinkering:

    1. From the server in question, surf to this Microsoft article, and download and install the firewall rules modification program. Never drop your firewall, even on a development machine, unless you have a really good reason to.
    2. Launch SQL Server Configuration Manager. Navigate to SQL Server Network Configuration, then Protocols for your server name. Enable TCP/IP and Named Pipes by right-clicking and choosing Enable for each given protocol name.
    3. Restart the SQL Server service from Services (or from the command line, subsequently run "net stop mssqlserver" then "net start mssqlserver").
    4. Try your remote connection once more, and you should be able to connect.

    It's not a terribly difficult concept, but one of the more challenging tasks developers face is dealing with environment setup. And while there is a certain blurred-line overlap between software development and server administration, sometimes the latter is daunting, especially given that you might set up only a handful of servers during your career.

    Read the article

  • How do I create a Linked Server in SQL Server 2005 to a password protected Access 95 database?

    - by Brad Knowles
    I need to create a linked server with SQL Server Management Studio 2005 to an Access 95 database, which happens to be password protected at the database level. User-level security has not been implemented. I cannot convert the Access database to a newer version. It is being used by a 3rd party application, so modifying it in any way is not allowed. I've tried using the Jet 4.0 OLE DB Provider and the ODBC OLE DB Provider. The 3rd party application creates a System DSN (with the proper database password), but I've not had any luck with either method. If I were using a standard connection string, I think it would look something like this:

```
Provider=Microsoft.Jet.OLEDB.4.0;Data Source='C:\Test.mdb';Jet OLEDB:Database Password=####;
```

    I'm fairly certain I need to somehow incorporate Jet OLEDB:Database Password into the linked server setup, but haven't figured out how. I've posted the scripts I'm using along with the associated error messages below. Any help is greatly appreciated. I'll provide more details if needed, just ask. Thanks!

    Method #1 - Using the Jet 4.0 provider. When I try to run these statements to create the linked server:

```sql
sp_dropserver 'Test', 'droplogins';
EXEC sp_addlinkedserver @server = N'Test', @provider = N'Microsoft.Jet.OLEDB.4.0',
    @srvproduct = N'Access DB', @datasrc = N'C:\Test.mdb'
GO
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'Test', @useself = N'False', @locallogin = NULL,
    @rmtuser = N'Admin', @rmtpassword = '####'
GO
```

    I get this error when testing the connection:

```
TITLE: Microsoft SQL Server Management Studio
"The test connection to the linked server failed."
ADDITIONAL INFORMATION:
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
The OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test" reported an error. Authentication failed.
Cannot initialize the data source object of OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test".
OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test" returned message "Cannot start your application. The workgroup information file is missing or opened exclusively by another user.". (Microsoft SQL Server, Error: 7399)
```

    Method #2 - Using the ODBC provider:

```sql
sp_dropserver 'Test', 'droplogins';
EXEC sp_addlinkedserver @server = N'Test', @provider = N'MSDASQL',
    @srvproduct = N'ODBC', @datasrc = N'Test:DSN'
GO
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'Test', @useself = N'False', @locallogin = NULL,
    @rmtuser = N'Admin', @rmtpassword = '####'
GO
```

    I get this error:

```
TITLE: Microsoft SQL Server Management Studio
"The test connection to the linked server failed."
ADDITIONAL INFORMATION:
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
Cannot initialize the data source object of OLE DB provider "MSDASQL" for linked server "Test".
OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed".
OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed".
OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Microsoft Access Driver] Cannot open database '(unknown)'. It may not be a database that your application recognizes, or the file may be corrupt.". (Microsoft SQL Server, Error: 7303)
```
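    For what it's worth, sp_addlinkedserver also accepts a provider string via @provstr, which is where a Jet database password would normally be supplied. A hedged variation on Method #1 along those lines (not verified against an Access 95 file specifically):

```sql
EXEC sp_dropserver 'Test', 'droplogins';
EXEC sp_addlinkedserver
    @server      = N'Test',
    @provider    = N'Microsoft.Jet.OLEDB.4.0',
    @srvproduct  = N'Access DB',
    @datasrc     = N'C:\Test.mdb',
    @provstr     = N'Jet OLEDB:Database Password=####;';
GO
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'Test',
    @useself     = N'False',
    @locallogin  = NULL,
    @rmtuser     = N'Admin',
    @rmtpassword = '####';
GO
```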

    Read the article

  • Need help making an ODBC MySQL Connection

    - by Andy Moore
    Short version: How do I connect from PowerShell to the MySQL ODBC 5.1 driver? I can't seem to find any connection strings that have an accurate "Provider" field for this particular case. (See the bottom of this question for examples/errors.)

    Long version: I'm not a server guy, and I've been handed the task of setting up PowerGadgets on our network. I have a MySQL server running on a Linux box that is configured for remote access and has a user defined for remote access as well. On my Windows desktop PC, I have PowerGadgets installed. I installed the MySQL ODBC 5.1 connector and went to Control Panel > Data Sources and set up a User DSN connection to the database. The connection, user, and password seem to be correct because the tables of the database are listed in my Windows control panel. Where I'm running into trouble is in three places in PowerGadgets:

    1. When selecting a data source, I can select "SQL Server". Inputting the server's IP address does not work, and I can't get this option to work at all.
    2. When selecting a data source, I can select "OleDB". This screen has a wizard on it that appears to populate all the correct information (including database table names!) for me. "Test Connection" runs great. But if I try to complete the wizard, I get the error "The .NET Framework data provider for OLEDB does not support the MS Ole DB provider for ODBC Drivers."
    3. When selecting a data source, I can select "ODBC". This screen does not have a wizard, and I cannot figure out a "connection string" that works. Typically it will respond with the error "The field 'Provider' is missing". Googling ODBC connection strings doesn't reveal any examples with a "Provider" field, and I have no idea what to put in here. The connection string for #2 above contains "SQLOLEDB" as a provider, and upon inputting that value into this connection string I get the same connection error that #2 gets.

    I believe I can solve my problem by figuring out a connection string for #3, but I don't know where to get started. (PowerGadgets also allows for PowerShell support, but I believe I will run into the same problem there.)

    Here's my current PowerShell connection that doesn't work:

```powershell
invoke-sql -connection "Driver={MySQL ODBC 5.1 Driver};Initial Catalog=hq_live;Data Source=HQDB" -sql "Select * FROM accounts"
```

    It spits back the error: "Invoke-Sql : An OLE DB Provider was not specified in the ConnectionString. An example would be, 'Provider=SQLOLEDB;'."

    Another string that doesn't work:

```powershell
invoke-sql -connection "Provider=MSDASQL.1;Persist Security Info=False;Data Source=HQDB;Initial Catalog=hq_live" -sql "select * from accounts"
```

    And the error: "The .Net Framework Data Provider for OLEDB (System.Data.OleDb) does not support the Microsoft OLE DB Provider for ODBC Drivers (MSDASQL). Use the .Net Framework Data Provider for ODBC (System.Data.Odbc)."
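    One way to take the OLE DB provider question out of the equation and confirm that the DSN itself works from PowerShell is to use the .NET ODBC classes directly. A sketch using the same DSN name, with placeholder credentials:

```powershell
# Talks to the DSN through System.Data.Odbc, so no OLE DB 'Provider=' clause is involved.
$conn = New-Object System.Data.Odbc.OdbcConnection("DSN=HQDB;Uid=remoteuser;Pwd=secret;")
$conn.Open()

$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT * FROM accounts"

$adapter = New-Object System.Data.Odbc.OdbcDataAdapter($cmd)
$table = New-Object System.Data.DataTable
[void]$adapter.Fill($table)
$conn.Close()

$table | Format-Table -AutoSize
```

    If this returns rows, the DSN and MySQL permissions are fine and the remaining problem is purely how PowerGadgets builds its connection string.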

    Read the article

  • Two Tomcat SSL Providers & One FreeBSD

    - by mosg
    Hello everyone. Question: on FreeBSD 8 I need to have HTTPS open on two different ports (443 and 444, for example). In other words, I need two providers working simultaneously:

    1. An ordinary SSL-signed certificate (Thawte) on port 443
    2. A special Russian security provider (DIGTProvider, based on CryptoPro CSP software) on port 444

    I should also mention that the main provider is the second one. Here are some of the DIGTProvider setup steps:

```
# add to ${JRE_HOME}/lib/security/java.security these lines:
security.provider.N=com.digt.trusted.jce.provider.DIGTProvider
ssl.SocketFactory.provider=com.digt.trusted.jsse.provider.DigtSocketFactory

# uncomment and edit the HTTPS section in conf/server.xml:
sslProtocol="GostTLS" (added)

# edit bin/catalina.sh and add:
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/opt/cprocsp/lib/ia32"
export JAVA_OPTS="${JAVA_OPTS} -Dcom.digt.trusted.jsse.server.certFile=/home//server-gost.cer -Dcom.digt.trusted.jsse.server.keyPasswd=11111111"
```

    As far as I know, if I just define two SSL connectors in Tomcat's server.xml configuration file, Tomcat will not start, because in the JRE you can use only one JSSE provider. Thanks for your help.

    Read the article

  • Issue with translating a delegate function from c# to vb.net for use with Google OAuth 2

    - by Jeremy
    I've been trying to translate a Google OAuth 2 example from C# to Vb.net for a co-worker's project. I'm having on end of issues translating the following methods: private OAuth2Authenticator<WebServerClient> CreateAuthenticator() { // Register the authenticator. var provider = new WebServerClient(GoogleAuthenticationServer.Description); provider.ClientIdentifier = ClientCredentials.ClientID; provider.ClientSecret = ClientCredentials.ClientSecret; var authenticator = new OAuth2Authenticator<WebServerClient>(provider, GetAuthorization) { NoCaching = true }; return authenticator; } private IAuthorizationState GetAuthorization(WebServerClient client) { // If this user is already authenticated, then just return the auth state. IAuthorizationState state = AuthState; if (state != null) { return state; } // Check if an authorization request already is in progress. state = client.ProcessUserAuthorization(new HttpRequestInfo(HttpContext.Current.Request)); if (state != null && (!string.IsNullOrEmpty(state.AccessToken) || !string.IsNullOrEmpty(state.RefreshToken))) { // Store and return the credentials. HttpContext.Current.Session["AUTH_STATE"] = _state = state; return state; } // Otherwise do a new authorization request. string scope = TasksService.Scopes.TasksReadonly.GetStringValue(); OutgoingWebResponse response = client.PrepareRequestUserAuthorization(new[] { scope }); response.Send(); // Will throw a ThreadAbortException to prevent sending another response. return null; } The main issue being this line: var authenticator = new OAuth2Authenticator<WebServerClient>(provider, GetAuthorization) { NoCaching = true }; The Method signature reads as for this particular line reads as follows: Public Sub New(tokenProvider As TClient, authProvider As System.Func(Of TClient, DotNetOpenAuth.OAuth2.IAuthorizationState)) My understanding of Delegate functions in VB.net isn't the greatest. However I have read over all of the MSDN documentation and other relevant resources on the web, but I'm still stuck as to how to translate this particular line. So far all of my attempts have resulted in either the a cast error (see below) or no call to GetAuthorization. The Code (vb.net on .net 3.5) Private Function CreateAuthenticator() As OAuth2Authenticator(Of WebServerClient) ' Register the authenticator. Dim client As New WebServerClient(GoogleAuthenticationServer.Description, oauth.ClientID, oauth.ClientSecret) Dim authDelegate As Func(Of WebServerClient, IAuthorizationState) = AddressOf GetAuthorization Dim authenticator = New OAuth2Authenticator(Of WebServerClient)(client, authDelegate) With {.NoCaching = True} 'Dim authenticator = New OAuth2Authenticator(Of WebServerClient)(client, GetAuthorization(client)) With {.NoCaching = True} 'Dim authenticator = New OAuth2Authenticator(Of WebServerClient)(client, New Func(Of WebServerClient, IAuthorizationState)(Function(c) GetAuthorization(c))) With {.NoCaching = True} 'Dim authenticator = New OAuth2Authenticator(Of WebServerClient)(client, New Func(Of WebServerClient, IAuthorizationState)(AddressOf GetAuthorization)) With {.NoCaching = True} Return authenticator End Function Private Function GetAuthorization(arg As WebServerClient) As IAuthorizationState ' If this user is already authenticated, then just return the auth state. Dim state As IAuthorizationState = AuthState If (Not state Is Nothing) Then Return state End If ' Check if an authorization request already is in progress. 
state = arg.ProcessUserAuthorization(New HttpRequestInfo(HttpContext.Current.Request)) If (state IsNot Nothing) Then If ((String.IsNullOrEmpty(state.AccessToken) = False Or String.IsNullOrEmpty(state.RefreshToken) = False)) Then ' Store Credentials HttpContext.Current.Session("AUTH_STATE") = state _state = state Return state End If End If ' Otherwise do a new authorization request. Dim scope As String = AnalyticsService.Scopes.AnalyticsReadonly.GetStringValue() Dim _response As OutgoingWebResponse = arg.PrepareRequestUserAuthorization(New String() {scope}) ' Add Offline Access and forced Approval _response.Headers("location") += "&access_type=offline&approval_prompt=force" _response.Send() ' Will throw a ThreadAbortException to prevent sending another response. Return Nothing End Function The Cast Error Server Error in '/' Application. Unable to cast object of type 'DotNetOpenAuth.OAuth2.AuthorizationState' to type 'System.Func`2[DotNetOpenAuth.OAuth2.WebServerClient,DotNetOpenAuth.OAuth2.IAuthorizationState]'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidCastException: Unable to cast object of type 'DotNetOpenAuth.OAuth2.AuthorizationState' to type 'System.Func`2[DotNetOpenAuth.OAuth2.WebServerClient,DotNetOpenAuth.OAuth2.IAuthorizationState]'. I've spent the better part of a day on this, and it's starting to drive me nuts. Help is much appreciated.

    Read the article

  • What's the easiest way to use OAuth with ActiveResource?

    - by Barry Hess
    I'm working with some old code and using ActiveResource for a very basic Twitter integration. I'd like to touch the app code as little as possible and just bring OAuth in while still using ActiveResource. Unfortunately I'm finding no easy way to do this. I did run into the oauth-active-resource gem, but it's not exactly documented and it appears to be designed for creating full-on API wrapper libraries. As you can imagine, I'd like to avoid creating a whole Twitter ActiveResource API wrapper for this one legacy change. Any success stories out there? In my instance, it might be quicker to just leave ActiveResource rather than get this working. I'm happy to be proven wrong!

    Read the article

  • "System.Data.OracleClient requires Oracle client software version 8.1.7 or greater." Error Message

    - by Jandost Khoso
    Quick resolution: give full permission to AUTHENTICATED USERS on the following folders:

    a) ORACLE_HOME
    b) Program Files\ORACLE

    Also check your PATH. You might have several clients installed on your system, and your .NET application could be pointing to a home with an inappropriate client. What your .NET application should load is OCI.DLL with a file version of 8.1.7 or higher.

    According to the MSDN document Oracle and ADO.NET: "The .NET Framework Data Provider for Oracle provides access to an Oracle database using the Oracle Call Interface (OCI) as provided by Oracle Client software. The functionality of the data provider is designed to be similar to that of the .NET Framework data providers for SQL Server, OLE DB, and ODBC."

    The MSDN document System Requirements (Oracle) says: "The .NET Framework Data Provider for Oracle requires Microsoft Data Access Components (MDAC) version 2.6 or later. MDAC 2.8 SP1 is recommended. You must also have Oracle 8i Release 3 (8.1.7) Client or later installed."

    Both the .NET Framework Data Provider for Oracle and the Oracle Data Provider for .NET are data providers for accessing an Oracle database. The former ships with the .NET Framework and requires Oracle client version 8.1.7 or above. The latter is provided by Oracle and requires Oracle client version 9.2 or later.

    The Oracle Data Provider for .NET (ODP.NET) features optimized ADO.NET data access to the Oracle database. ODP.NET allows developers to take advantage of advanced Oracle database functionality, including Real Application Clusters, XML DB, and advanced security.

    See the document Comparing the Microsoft .NET Framework 1.1 Data Provider for Oracle and the Oracle Data Provider for .NET for more information about the differences.

    Read the article

  • Web 2.0, AJAX, HTML5, Facebook, social web, OpenID, OAuth, web browsers... where is all this going?

    - by jokoon
    We have seen many new things appear on the web in the last five to seven years: Facebook, HTML5, new browsers growing strongly, Google failing with Wave... Since Facebook and other services like Gtalk and Gmail, I thought and hoped that forums, chat, mail, Usenet, conversation rooms and P2P protocols could interoperate to let the user use all those services transparently. Of course, I realized that things are far more complicated, for several reasons: the IETF cannot invent new things, it can only propose standards; and Microsoft, like other big players, is often an obstacle to relevant innovation regarding open formats, the biggest stories being document formats and Internet Explorer with its slow move to support web standards. Smartphones, thanks to the appearance of OSes such as iOS and Android, are finally able to navigate the internet: earlier devices were deaf, not directly connected to the internet. The mail protocols were left unchanged even with the growth of spam and malware. I don't know what to think, because I believe there is still a lot to do, but I feel like it will never happen, or that nobody seems interested in those basic text-transmission features... So what do you think the next big steps in the evolution of the web will be? Do you think it will still walk hand in hand with open source?

    Read the article

  • does entity framework or the mysql provider swallow timeout exceptions on enumeration of results?!

    - by Freddy Rios
    I'm trying to make sense of a situation I have using Entity Framework on .NET 3.5 SP1 + MySQL 6.1.2.0 as the provider. It involves the following code:

```csharp
Response.Write("Products: " + plist.Count() + "<br />");
var total = 0;
foreach (var p in plist)
{
    //... some actions
    total++;
    //... other actions
}
Response.Write("Total Products Checked: " + total + "<br />");
```

    Basically, the total number of products varies on each run, and it doesn't match the full total in plist. It varies widely, from roughly a fifth to half. There isn't any control-flow code inside the foreach, i.e. no break, continue, try/catch, or conditions around total++, anything that could affect the count. As confirmation, there are other totals captured inside the loop related to the actions, and those match both the lower and higher total runs. I can't find any reason for the above, other than something in Entity Framework or the MySQL provider that causes it to end the foreach early when retrieving an item. The body of the foreach can vary quite a bit in duration, as the actions involve file and network access. My best guess at this point is that when it takes beyond a certain threshold, there is some type of timeout in the underlying framework/provider, and instead of raising an exception it silently reports that there are no more items to enumerate. Can anyone shed some light on the above scenario and/or confirm whether the Entity Framework/MySQL provider has this behavior?
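    One cheap way to narrow this down (a diagnostic sketch rather than a fix, with plist standing for the existing query variable): materialize the results before the slow per-item work, so the data reader and connection are finished before any file or network access happens. If the counts then agree, the enumeration is being cut short by the provider/connection, not by anything in the loop body.

```csharp
// Pull everything out of MySQL first; the reader/connection is done before the slow work starts.
var products = plist.ToList();
Response.Write("Products: " + products.Count + "<br />");

var total = 0;
foreach (var p in products)
{
    //... some actions (file & network access)
    total++;
    //... other actions
}
Response.Write("Total Products Checked: " + total + "<br />");
```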

    Read the article

  • Can one have multiple name servers that don't all belong to the same TLD/provider?

    - by Simon
    In light of the GoDaddy outage we updated our name server list for our domain to include an additional name server provider. The list looks something like this: ns61.domaincontrol.com ns54.domaincontrol.com ns1.dreamhost.com ns2.dreamhost.com Both Godaddy and Dreamhost have zone entries to handle the A and MX records. The idea is that if one provider goes out the other will be a fall-back. However, when I tested my config with http://www.intodns.com/ I am getting a warning about SOA serials not being agreed. Have I misunderstood some fundamentals in name-server config? What can I do to prevent future problems?
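    To see what intodns.com is reacting to, the SOA serial each provider hands out can be compared directly; something along these lines, with example.com standing in for the real zone:

```
dig +short SOA example.com @ns61.domaincontrol.com
dig +short SOA example.com @ns1.dreamhost.com
```

    If GoDaddy and DreamHost each host an independently edited copy of the zone, the serials (and any records changed on only one side) will drift apart over time, which is exactly what the warning is flagging.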

    Read the article

  • What is the Best Internet Provider in the Salt Lake Valley for hosting your future online business f

    - by Justin
    This is for people familiar with the ISP scene in Salt Lake. Also, UTOPIA is not available in my neighborhood yet. I'm looking for comparisons between Comcast, Qwest, and especially other providers I'm not aware of. While I will have online backup (of course!), I want to host some things from my own home at the start of my business. Once money starts flowing in, I will move to a hosted provider, but in the meantime I would like a provider which provides fast (1+ mb/s at least) upload speeds (fast download a given), a static IP, and especially a reasonable price.

    Read the article

  • Cloud storage provider lost my data. How to back up next time?

    - by tomcam
    What do you do when cloud storage fails you? First, some background. A popular cloud storage provider (rhymes with Booger Link) damaged a bunch of my data. Getting it back was an uphill battle with all the usual accusations that it was my fault, etc. Finally I got the data back. Yes, I can back this up with evidence. Idiotically, I stayed with them, so I totally get that the rest of this is on me. The problem had been with a shared folder that works with all 12 computers my business and family use with the service. We'll call that folder the Tragic Briefcase. It is a sort of global folder that's publicly visible to all computers on the service. It's our main repository. Today I decided to deal with some residual effects of the Crash of '11. Part of the damage they did was that in just one of my computers (my primary, of course) all the documents in the Tragic Briefcase were duplicated in the Windows My Documents folder. I finally started deleting them. But guess what. Though they appeared to be duplicated in the file system, removing them from My Documents on the primary PC caused them to disappear from the Tragic Briefcase too. They efficiently disappeared from all the other computers' Tragic Briefcases as well. So now, 21 gigs of files are gone, and of course I don't know which ones. I want to avoid this in the future. Apart from using a different storage provider, the bigger picture is this: how do I back up my cloud data? A complete backup every week or so from web to local storage would cause me to exceed my ISP's bandwidth. Do I need to back up each of my 12 PCs locally? I do use Backupify for my primary Google Docs, but I have been storing taxes, confidential documents, Photoshop source, video source files, and so on using the web service. So it's a lot of data, but I need to keep it safe. Backup locally would also mean 2 backup drives or some kind of RAID per PC, right, because you can't trust a single point of failure? Assuming I move to DropBox or something of its ilk, what is the best way to make sure that if the next cloud storage provider messes up I can restore?

    Read the article
