Search Results

Search found 5864 results on 235 pages for 'transparent proxy'.

Page 87/235 | < Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >

  • OpenGL: Keeping alpha in a render buffer

    - by Cyan
    In my current task, I need to render a texture into a render buffer in order to work on it (apply special filters) there. The result is then treated as a new texture, which is displayed later. This works fine, except when the texture contains transparent or semi-transparent parts. My current guess is that, within the render buffer, the texture is merged with a kind of grey background, which obviously taints the R,G,B color components of transparent pixels. I've yet to find a way around this. Even manually assigning alpha after the rendering process doesn't save the day for semi-transparent pixels, whose RGB values are still tainted by the grey background.

    Read the article

  • How can I get Nexus to proxy the SpringSource Maven repository on S3?

    - by Peter Kahn
    I have Nexus 1.5.0 set up to proxy the SpringSource repositories, but it's not working. The repositories are hosted on S3, and Nexus doesn't seem to understand how to deal with that. What's the right pattern? Here are the repositories I'm told I need, but I cannot access the Maven paths within them:

    http://repository.springsource.com/maven/bundles/release
    http://repository.springsource.com/maven/bundles/external

    Do I need to mirror these locally?

    Read the article

  • How to use a proxy with the YouTube API? (Python)

    - by Kate
    I'm working on a script that will upload videos to YouTube with different accounts. Is there a way to use HTTPS or SOCKS proxies to filter all the requests? My client doesn't want to leave any footprints for Google. The only way I've found is to set the proxy environment variables beforehand, but this seems cumbersome. Is there some way I'm missing? Thanks :)

    Read the article

  • Data Source Security Part 5

    - by Steve Felts
    If you read through the first four parts of this series on data source security, you should be an expert on this focus area. There is one more small topic to cover related to WebLogic resource permissions. After that comes the test, I mean example, to see with a real set of configuration parameters what the results are with some concrete values.

    WebLogic Resource Permissions

    All of the discussion so far has been about database credentials that are (eventually) used on the database side. WLS has resource permissions to control which WLS users are allowed to access JDBC resources. These can be defined on the Policies tab under the Security tab associated with the data source. There are four permissions: "reserve" (get a new connection), "admin", "shrink", and "reset" (plus the all-inclusive "ALL"); we will focus on "reserve" here because we are talking about getting connections. By default, JDBC resource permissions are completely open - anyone can do anything. As soon as you add one policy for a permission, all other users are restricted. For example, if I add a policy so that "weblogic" can reserve a connection, then all other users will fail to reserve connections unless they are also explicitly added. The validation is done for WLS user credentials only, not database user credentials. Configuration of resources in general is described at "Create policies for resource instances" http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/security/CreatePoliciesForResourceInstances.html. This feature can be very useful to restrict what code and which users can get to your database.

    There are three use cases:

    API                          | Use database credentials | User for permission checking
    getConnection()              | True or false            | Current WLS user
    getConnection(user,password) | False                    | User/password from API
    getConnection(user,password) | True                     | Current WLS user

    If a simple getConnection() is used, or database credentials are enabled, the current user authenticated to the WLS system is checked. If database credentials are not enabled, the user and password on the API are used.

    Example

    The following is an actual example of the interactions between identity-based-connection-pooling-enabled, oracle-proxy-session, and use-database-credentials.

    On the database side, the following objects are configured:
    - Database users scott, jdbcqa, jdbcqa3
    - Permission for proxy: alter user jdbcqa3 grant connect through jdbcqa;
    - Permission for proxy: alter user jdbcqa grant connect through jdbcqa;

    The following WebLogic data source objects are configured:
    - Users weblogic, wluser
    - Credential mapping "weblogic" to "scott"
    - Credential mapping "wluser" to "jdbcqa3"
    - Data source descriptor configured with user "jdbcqa"
    - All tests are run with Set Client ID set to true (more about that below)
    - All tests are run with oracle-proxy-session set to false (more about that below)

    The test program:
    - Runs in a servlet
    - Authenticates to WLS as user "weblogic"

    The results for each combination of use-database-credentials and identity-based-connection-pooling-enabled are:

    use-database-credentials = true, identity based = true
      getConnection(scott,***):    Identity scott, Client weblogic, Proxy null
      getConnection(weblogic,***): weblogic fails - not a db user
      getConnection(jdbcqa3,***):  User jdbcqa3, Client weblogic, Proxy null
      getConnection():             Default user jdbcqa, Client weblogic, Proxy null

    use-database-credentials = false, identity based = true
      getConnection(scott,***):    scott fails - not a WLS user
      getConnection(weblogic,***): User scott, Client scott, Proxy null
      getConnection(jdbcqa3,***):  jdbcqa3 fails - not a WLS user
      getConnection():             User scott, Client scott, Proxy null

    use-database-credentials = true, identity based = false
      getConnection(scott,***):    Proxy for scott fails
      getConnection(weblogic,***): weblogic fails - not a db user
      getConnection(jdbcqa3,***):  User jdbcqa3, Client weblogic, Proxy jdbcqa
      getConnection():             Default user jdbcqa, Client weblogic, Proxy null

    use-database-credentials = false, identity based = false
      getConnection(scott,***):    scott fails - not a WLS user
      getConnection(weblogic,***): Default user jdbcqa, Client scott, Proxy null
      getConnection(jdbcqa3,***):  jdbcqa3 fails - not a WLS user
      getConnection():             Default user jdbcqa, Client scott, Proxy null

    If Set Client ID is set to false, all cases would have Client set to null. If this were not an Oracle thin driver, the one case with a non-null Proxy in the table above would throw an exception, because proxy session is only supported, implicitly or explicitly, with the Oracle thin driver.

    When oracle-proxy-session is set to true, the only cases that will pass (with a proxy of "jdbcqa") are the following:
    1. Setting use-database-credentials to true and doing getConnection(jdbcqa3,…) or getConnection().
    2. Setting use-database-credentials to false and doing getConnection(wluser, …) or getConnection().

    Summary

    There are many options to choose from for data source security. Considerations include the number and volatility of WLS and database users, the granularity of data access, the depth of the security identity (a property on the connection or a real user), performance, coordination of the various components in the software stack, and driver capabilities. Now that you have the big picture (remember that table in part 1), you can make a more informed choice.

    Read the article

  • Inheritance of templates in WPF

    - by Alxandr
    I'm trying to make sure that every child of a given element (MPF.MWindow) gets custom templates. For instance, the button should get the template defined in resMButton.xaml. As of now I'm using the following code (resMWindow.xaml):

      <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                          xmlns:local="clr-namespace:MPF">
        <Style x:Key="SystemKeyAnimations" TargetType="{x:Type Button}">
          <Setter Property="Opacity" Value="0.5" />
          <Setter Property="Background" Value="Transparent" />
          <Style.Triggers>
            <EventTrigger RoutedEvent="Mouse.MouseEnter">
              <BeginStoryboard>
                <Storyboard>
                  <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetProperty="Opacity">
                    <SplineDoubleKeyFrame KeyTime="00:00:00.2" Value="1.0" />
                  </DoubleAnimationUsingKeyFrames>
                </Storyboard>
              </BeginStoryboard>
            </EventTrigger>
            <EventTrigger RoutedEvent="Mouse.MouseLeave">
              <BeginStoryboard>
                <Storyboard>
                  <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetProperty="Opacity">
                    <SplineDoubleKeyFrame KeyTime="00:00:00.2" Value="0.5" />
                  </DoubleAnimationUsingKeyFrames>
                </Storyboard>
              </BeginStoryboard>
            </EventTrigger>
          </Style.Triggers>
        </Style>
        <Style TargetType="{x:Type local:MWindow}">
          <!-- Remove default frame appearance -->
          <Setter Property="WindowStyle" Value="None" />
          <Setter Property="AllowsTransparency" Value="True" />
          <Setter Property="Background" Value="Transparent" />
          <Setter Property="Template">
            <Setter.Value>
              <ControlTemplate TargetType="{x:Type local:MWindow}">
                <Border Background="{TemplateBinding Background}" BorderBrush="{TemplateBinding BorderBrush}"
                        BorderThickness="{TemplateBinding BorderThickness}" x:Name="ChromeBorder">
                  <Grid>
                    <Grid.ColumnDefinitions>
                      <ColumnDefinition Width="4" />
                      <ColumnDefinition />
                      <ColumnDefinition Width="4" />
                    </Grid.ColumnDefinitions>
                    <Grid.RowDefinitions>
                      <RowDefinition Height="4" />
                      <RowDefinition />
                      <RowDefinition Height="4" />
                    </Grid.RowDefinitions>
                    <Thumb Grid.Row="0" Grid.Column="1" x:Name="TopThumb" Cursor="SizeNS" BorderThickness="4" BorderBrush="Transparent" />
                    <Thumb Grid.Row="2" Grid.Column="1" x:Name="BottomThumb" Cursor="SizeNS" BorderThickness="4" BorderBrush="Transparent" />
                    <Thumb Grid.Row="1" Grid.Column="0" x:Name="LeftThumb" Cursor="SizeWE" BorderThickness="4" BorderBrush="Transparent" />
                    <Thumb Grid.Row="1" Grid.Column="2" x:Name="RightThumb" Cursor="SizeWE" BorderThickness="4" BorderBrush="Transparent" />
                    <Thumb Grid.Row="0" Grid.Column="0" x:Name="TopLeftThumb" Cursor="SizeNWSE" BorderThickness="5" BorderBrush="Transparent" />
                    <Thumb Grid.Row="0" Grid.Column="2" x:Name="TopRightThumb" Cursor="SizeNESW" BorderThickness="5" BorderBrush="Transparent" />
                    <Thumb Grid.Row="2" Grid.Column="0" x:Name="BottomLeftThumb" Cursor="SizeNESW" BorderThickness="5" BorderBrush="Transparent" />
                    <Thumb Grid.Row="2" Grid.Column="2" x:Name="BottomRightThumb" Cursor="SizeNWSE" BorderThickness="5" BorderBrush="Transparent" />
                    <Grid Grid.Row="1" Grid.Column="1">
                      <Grid.RowDefinitions>
                        <RowDefinition Height="20" />
                        <RowDefinition />
                      </Grid.RowDefinitions>
                      <Grid>
                        <Grid.ColumnDefinitions>
                          <ColumnDefinition />
                          <ColumnDefinition Width="Auto" />
                        </Grid.ColumnDefinitions>
                        <StackPanel Orientation="Horizontal" Grid.Column="1">
                          <Button Command="local:WindowCommands.Minimize" Style="{StaticResource ResourceKey=SystemKeyAnimations}">
                            <Button.Template>
                              <ControlTemplate>
                                <Canvas Width="10" Height="10" Margin="5" Background="Transparent">
                                  <Line X1="0" X2="10" Y1="5" Y2="5" Stroke="White" StrokeThickness="2" />
                                </Canvas>
                              </ControlTemplate>
                            </Button.Template>
                          </Button>
                          <Button Command="local:WindowCommands.Maximize" x:Name="MaximizeButton" Style="{StaticResource ResourceKey=SystemKeyAnimations}">
                            <Button.Template>
                              <ControlTemplate>
                                <Canvas Width="10" Height="10" Margin="5" Background="Transparent">
                                  <Rectangle Width="10" Height="10" Stroke="White" StrokeThickness="2" />
                                </Canvas>
                              </ControlTemplate>
                            </Button.Template>
                          </Button>
                          <Button Command="ApplicationCommands.Close" Style="{StaticResource ResourceKey=SystemKeyAnimations}">
                            <Button.Template>
                              <ControlTemplate>
                                <Canvas Width="10" Height="10" Margin="5" Background="Transparent">
                                  <Line X1="0" X2="10" Y1="0" Y2="10" Stroke="White" StrokeThickness="2" />
                                  <Line X1="10" X2="0" Y1="0" Y2="10" Stroke="White" StrokeThickness="2" />
                                </Canvas>
                              </ControlTemplate>
                            </Button.Template>
                          </Button>
                        </StackPanel>
                        <ContentControl x:Name="TitleContentControl">
                          <TextBlock Text="{TemplateBinding Title}" Foreground="DarkGray" Margin="5,0" />
                        </ContentControl>
                      </Grid>
                      <ContentPresenter Content="{TemplateBinding Content}" Grid.Row="1">
                        <ContentPresenter.Resources>
                          <ResourceDictionary>
                            <ResourceDictionary.MergedDictionaries>
                              <ResourceDictionary Source="/MPF;component/Themes/resMWindowContent.xaml" />
                            </ResourceDictionary.MergedDictionaries>
                          </ResourceDictionary>
                        </ContentPresenter.Resources>
                      </ContentPresenter>
                    </Grid>
                  </Grid>
                </Border>
              </ControlTemplate>
            </Setter.Value>
          </Setter>
        </Style>
      </ResourceDictionary>

    As you can see, in the ContentPresenter which gets the content of the window I merge in a dictionary called resMWindowContent.xaml, which looks as follows:

      <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                          xmlns:local="clr-namespace:MPF">
        <ResourceDictionary.MergedDictionaries>
          <ResourceDictionary Source="/MPF;component/Themes/resMButton.xaml" />
        </ResourceDictionary.MergedDictionaries>
      </ResourceDictionary>

    It simply merges in the resMButton.xaml dictionary (this is done because in the future I will have MTextBox, MList... and I want to keep them separate). The resMButton.xaml looks as follows:

      <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                          xmlns:local="clr-namespace:MPF">
        <Style TargetType="{x:Type Button}">
          <Setter Property="Template">
            <Setter.Value>
              <ControlTemplate TargetType="{x:Type Button}">
                <Grid Background="Transparent">
                  <Rectangle Stroke="{TemplateBinding BorderBrush}" StrokeThickness="{TemplateBinding BorderThickness}" Fill="{TemplateBinding Background}" />
                  <ContentPresenter Content="{TemplateBinding Content}" Margin="3" />
                </Grid>
              </ControlTemplate>
            </Setter.Value>
          </Setter>
        </Style>
      </ResourceDictionary>

    A simple template drawing a square button. However, it isn't applied at all: my buttons remain normal, and I don't understand what I'm doing wrong. I just want every button inside the MWindow to get a special style (and, in time, every textbox and so forth). How do I achieve this? One note though: it's important that the styles don't apply to elements outside an MWindow.
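    One direction worth sketching (my own suggestion, not part of the original question): implicit-style lookup walks the element's logical ancestors, and content supplied to the window belongs logically to the MWindow itself rather than to the ContentPresenter inside its template, so a dictionary merged in ContentPresenter.Resources may never be consulted. Merging the dictionary into the window's own Resources from code keeps the styles scoped to MWindow instances while making them visible to everything hosted in one. A minimal sketch, assuming MWindow derives from Window:

      using System;
      using System.Windows;

      namespace MPF
      {
          public partial class MWindow : Window
          {
              public MWindow()
              {
                  // Merge the child-control styles (buttons today, textboxes later)
                  // into this window's resources. Elements outside an MWindow never
                  // see these styles, so the scoping requirement still holds.
                  var themes = new ResourceDictionary
                  {
                      Source = new Uri("/MPF;component/Themes/resMWindowContent.xaml",
                                       UriKind.Relative)
                  };
                  Resources.MergedDictionaries.Add(themes);
              }
          }
      }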

    Read the article

  • WCF MessageHeaders in OperationContext.Current

    - by Nate Bross
    If I use code like this [just below] to add message headers to my OperationContext, will all future outgoing messages contain that data on any new client proxy created during the same run of my application? The objective is to pass a parameter or two to each OperationContract without messing with the signature of the OperationContract, since the parameters being passed will be consistent for all requests for a given run of my client application.

      public void DoSomeStuff()
      {
          var proxy = new MyServiceClient();
          Guid myToken = Guid.NewGuid();
          MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken);
          MessageHeader untyped = mhg.GetUntypedHeader("token", "ns");
          OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
          proxy.DoOperation(...);
      }

      public void DoSomeOTHERStuff()
      {
          var proxy = new MyServiceClient();
          Guid myToken = Guid.NewGuid();
          MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken);
          MessageHeader untyped = mhg.GetUntypedHeader("token", "ns");
          OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
          proxy.DoOtherOperation(...);
      }

    In other words, is it safe to refactor the above code like this?

      bool isSetup = false;

      public void SetupMessageHeader()
      {
          if (isSetup) { return; }
          Guid myToken = Guid.NewGuid();
          MessageHeader<Guid> mhg = new MessageHeader<Guid>(myToken);
          MessageHeader untyped = mhg.GetUntypedHeader("token", "ns");
          OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
          isSetup = true;
      }

      public void DoSomeStuff()
      {
          var proxy = new MyServiceClient();
          SetupMessageHeader();
          proxy.DoOperation(...);
      }

      public void DoSomeOTHERStuff()
      {
          var proxy = new MyServiceClient();
          SetupMessageHeader();
          proxy.DoOtherOperation(...);
      }

    Since I don't really understand what's happening there, I don't want to cargo-cult it and just change it and let it fly if it works; I'd like to hear your thoughts on whether it is OK or not.
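    For comparison, a common WCF pattern (my sketch, not from the question) is to scope headers explicitly to one proxy with an OperationContextScope, which sidesteps the lifetime question entirely because the headers die with the scope:

      using System;
      using System.ServiceModel;
      using System.ServiceModel.Channels;

      public void DoSomeStuffScoped()
      {
          var proxy = new MyServiceClient();      // the generated proxy from the question
          using (new OperationContextScope(proxy.InnerChannel))
          {
              MessageHeader<Guid> mhg = new MessageHeader<Guid>(Guid.NewGuid());
              MessageHeader untyped = mhg.GetUntypedHeader("token", "ns");
              OperationContext.Current.OutgoingMessageHeaders.Add(untyped);
              proxy.DoOperation();                // hypothetical operation; header applies
          }                                       // only to calls made inside this scope
      }

    When the scope is disposed, OperationContext.Current reverts to whatever it was before, so nothing leaks to other proxies or threads.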

    Read the article

  • Configuring nginx to check for hard files in only a few directories

    - by Evan Carroll
    For a node.js project I'm doing, I have a tree like this:

      +-- public
      |   +-- components
      |   +-- css
      |   +-- img
      +-- routes
      +-- views

    Essentially, I have the root set to public. I want all requests destined for /components/, /css/ and /img/ to check whether their destinations exist on disk. However, I don't want requests to other directories to even run an IO operation:

      /foo/asdf
      /bar
      /baz/index.html

    None of those should result in the disk being touched. I have a stanza that does the proxy to node.js:

      location @proxy {
          internal;
          proxy_set_header Host $http_host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-NginX-Proxy true;
          proxy_pass http://localhost:3030;
          proxy_redirect off;
      }

    I just would like to know how to arrange this. My problem would be easily solved if try_files took a single argument, but it always wants a file first:

      location /components/ { try_files $uri @proxy; }
      location /css/        { try_files $uri @proxy; }
      location /img/        { try_files $uri @proxy; }

    However, there is nothing I can find that will give me:

      location / { try_files @proxy }

    How do I get the effect I want?

    Read the article

  • HttpContext.Items and Server.Transfer/Execute

    - by Rick Strahl
    A few days ago my buddy Ben Jones pointed out that he ran into a bug in the ScriptContainer control in the West Wind Web and Ajax Toolkit. The problem was basically that when a Server.Transfer call was applied, the script container (and also various ClientScriptProxy script embedding routines) would potentially fail to load the specified scripts. It turns out the problem is due to the fact that the various components in the toolkit use request specific singletons via a Current property. I use a static Current property tied to a Context.Items[] entry to handle this type of operation, which looks something like this:

      /// <summary>
      /// Current instance of this class which should always be used to
      /// access this object. There are no public constructors to
      /// ensure the reference is used as a Singleton to further
      /// ensure that all scripts are written to the same clientscript
      /// manager.
      /// </summary>
      public static ClientScriptProxy Current
      {
          get
          {
              if (HttpContext.Current == null)
                  return new ClientScriptProxy();

              ClientScriptProxy proxy = null;
              if (HttpContext.Current.Items.Contains(STR_CONTEXTID))
                  proxy = HttpContext.Current.Items[STR_CONTEXTID] as ClientScriptProxy;
              else
              {
                  proxy = new ClientScriptProxy();
                  HttpContext.Current.Items[STR_CONTEXTID] = proxy;
              }
              return proxy;
          }
      }

    The proxy is attached to a Context.Items[] item, which makes the instance request specific. This works perfectly fine in most situations EXCEPT when you're dealing with Server.Transfer/Execute requests. Server.Transfer doesn't cause Context.Items to be cleared, so both the transferred request and the original request share the same Context.Items collection. For the ClientScriptProxy this causes a problem, because script references are tracked on a per request basis in Context.Items to check for script duplication. Once a script is rendered, an ID is written into the Context collection and the script is considered "rendered":

      // No dupes - ref script include only once
      if (HttpContext.Current.Items.Contains(STR_SCRIPTITEM_IDENTITIFIER + fileId))
          return;

      HttpContext.Current.Items.Add(STR_SCRIPTITEM_IDENTITIFIER + fileId, string.Empty);

    where fileId is the script name or unique identifier. The problem is that on the transferred page the item will already exist in Context, and so the script fails to render because the code thinks it has already been rendered based on the Context item. Bummer.

    The workaround for this is simple once you know what's going on, but in this case it was a bitch to track down because the context items are used in many places throughout this class. The trick is to determine when a request is transferred and then remove the specific keys. The first issue is to determine if a script is in a Transfer or Execute call:

      if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler)

    Context.Handler is the original handler, and CurrentHandler is the handler actually executing when a Transfer/Execute is active. You can also use Context.PreviousHandler to get the last handler and chain through the whole list of handlers applied if Transfer calls are nested (dog help us all for the person debugging that). For the ClientScriptProxy, the full logic to check for a transfer and remove the keys looks like this:

      /// <summary>
      /// Clears all the request specific context items which are script references
      /// and the script placement index.
      /// </summary>
      public void ClearContextItemsOnTransfer()
      {
          if (HttpContext.Current != null)
          {
              // Check for Server.Transfer/Execute calls - we need to clear out Context.Items
              if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler)
              {
                  List<string> Keys = HttpContext.Current.Items.Keys
                      .Cast<string>()
                      .Where(s => s.StartsWith(STR_SCRIPTITEM_IDENTITIFIER) ||
                                  s == STR_ScriptResourceIndex)
                      .ToList();

                  foreach (string key in Keys)
                  {
                      HttpContext.Current.Items.Remove(key);
                  }
              }
          }
      }

    along with a small update to the Current property getter that sets a flag to indicate whether the request was transferred:

      if (!proxy.IsTransferred &&
          HttpContext.Current.Handler != HttpContext.Current.CurrentHandler)
      {
          proxy.ClearContextItemsOnTransfer();
          proxy.IsTransferred = true;
      }
      return proxy;

    I know this is pretty ugly, but it works, and it's actually minimal fuss without affecting the behavior of the rest of the class. Ben had a different solution that involved explicitly clearing out the Context items and replacing the collection with a manually maintained list of items, which also works but required changes throughout the code. In hindsight, it would have been better to use a single object that encapsulates all the "persisted" values and store that object in Context instead of all these individual small morsels. Hindsight is always 20/20 though :-}.

    If possible use Page.Items

    ClientScriptProxy is a generic component that can be used from anywhere in ASP.NET, so there are various methods on it that are not Page specific, which is why I used Context.Items rather than the Page.Items collection. Page.Items would be a better choice, since it sidesteps the above Server.Transfer nightmares: the Page is reloaded completely, so any new Page gets a new Items collection. No fuss there. So for the ScriptContainer control, which has to live on the page, the behavior is a little different. It is attached to Page.Items (since it's a control):

      /// <summary>
      /// Returns a current instance of this control if an instance
      /// is already loaded on the page. Otherwise a new instance is
      /// created, added to the Form and returned.
      ///
      /// It's important this function is not called too early in the
      /// page cycle - it should not be called before Page.OnInit().
      ///
      /// This property is the preferred way to get a reference to a
      /// ScriptContainer control that is either already on a page
      /// or needs to be created. Controls in particular should always
      /// use this property.
      /// </summary>
      public static ScriptContainer Current
      {
          get
          {
              // We need a context for this to work!
              if (HttpContext.Current == null)
                  return null;

              Page page = HttpContext.Current.CurrentHandler as Page;
              if (page == null)
                  throw new InvalidOperationException(
                      Resources.ERROR_ScriptContainer_OnlyWorks_With_PageBasedHandlers);

              ScriptContainer ctl = null;

              // Retrieve the current instance
              ctl = page.Items[STR_CONTEXTID] as ScriptContainer;
              if (ctl != null)
                  return ctl;

              ctl = new ScriptContainer();
              page.Form.Controls.Add(ctl);
              return ctl;
          }
      }

    The biggest issue with this approach is that you have to explicitly retrieve the page in the static Current property. Notice again the use of CurrentHandler (rather than Handler, which was my original implementation) to ensure you get the latest page, including the one that Server.Transfer fired.

    Server.Transfer and Server.Execute are Evil

    All that said - this fix is probably for the 2 people who are crazy enough to rely on Server.Transfer/Execute. :-} There are so many weird behavior problems with these commands that I avoid them at all costs. I don't think I have a single application that uses either of these commands...

    Related Resources
    - Full source of ClientScriptProxy.cs (repository)
    - Part of the West Wind Web Toolkit
    - Static Singletons for ASP.NET Controls

    Post © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in ASP.NET
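    To see the underlying behavior in isolation, here is a minimal repro of my own (hypothetical page names, not part of the toolkit) showing that the same Context.Items collection survives a Server.Transfer, while a Page.Items collection does not because a brand new Page is created:

      // In Page1.aspx.cs
      protected void Page_Load(object sender, EventArgs e)
      {
          HttpContext.Current.Items["marker"] = "set-on-page1";
          Server.Transfer("Page2.aspx");
      }

      // In Page2.aspx.cs - still the same request, same Items collection
      protected void Page_Load(object sender, EventArgs e)
      {
          var marker = HttpContext.Current.Items["marker"]; // still "set-on-page1"
      }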

    Read the article

  • How can I click a button behind a transparent UIView?

    - by Sean Clark Hess
    Let's say we have a view controller with one subview. The subview takes up the center of the screen, with 100 px margins on all sides. We then add a bunch of little stuff to click on inside that subview. We are only using the subview to take advantage of the new frame (x=0, y=0 inside the subview is actually 100,100 in the parent view). Then imagine that we have something behind the subview, like a menu. I want the user to be able to select any of the "little stuff" in the subview, but if there is nothing there, I want touches to pass through it (since the background is clear anyway) to the buttons behind it. How can I do this? It looks like touchesBegan goes through, but buttons don't work.

    Read the article

  • My framework will utilise other frameworks, but I'd like this to be transparent to the end-user

    - by d11wtq
    I'm building a framework which aims to provide a new development environment for the user, but I need to use things like RegexKit, and almost certainly some other established frameworks, in order to do this. Any functionality exposed from such frameworks would be abstracted through classes and methods in my own framework for maintenance reasons (allowing me to change my mind about which dependencies I want). In an ideal world I would just ship a single .framework. However, I'm aware that, unlike with standard bundles and applications, it is not possible to embed a framework inside the framework bundle. Do I have any option other than to tell the end user that they must also install RegexKit and any other dependencies? I have a feeling this lessens the appeal of the easy-to-use embedded framework I'd envisaged building. Right now I'm feeling like I have some limited options:

    1. Force the user to install the dependencies.
    2. Write my own classes that provide the same functionality -- ugh!
    3. If at all possible, statically link the third party frameworks (is this possible??)

    My end product is ideally a single .framework bundle that uses @rpath and so can be installed in the system or simply bundled with the app that uses it.

    Read the article

  • Timeout Considerations for Solicit Response

    - by Michael Stephenson
    Background

    One of the clients I work with had been experiencing issues for a while surrounding web service timeouts. It's been a little challenging to work through the problems, due to limitations in the diagnostic information available from one of the applications, but I learned some interesting things while troubleshooting that don't seem to have been discussed much in the community, so I thought I'd share my findings.

    In the scenario, we have BizTalk making calls to a .net web service exposed as a WSE 2 endpoint. BizTalk will try to make a large number of concurrent web service calls to the application, and the backend application has more than enough infrastructure and capability to handle the load. We have configured the <ConnectionManagement> section of the BizTalk configuration file to support up to 100 concurrent connections from each of our 2 BizTalk send servers to the web servers of the application.

    The problem we were facing was that the BizTalk side was reporting a significant number of timeouts when calling the web service. One of the biggest issues was the challenge of correlating a message from BizTalk to the IIS log and the custom logs in the .net application, especially with a fairly large number of servers hosting the web services. However, the key moment came when we were able to identify a specific call which had taken 40 seconds to execute on the server (yes, a long time I know, but that's a different story!). We were able to identify that this call had timed out on the BizTalk side. Based on the normal 2 minute timeout, we knew something unexpected was going on. From here I decided to do some experimentation, and I wanted to start outside of BizTalk, because my hunch was this was not a BizTalk behaviour but something which was being highlighted by BizTalk because of our large load.

    Server-side - Sample Web Service

    To begin with I created a sample web service. Nothing special, just a vanilla asmx web service hosted in IIS6 on Windows 2003 Standard Edition. The web service is a hello world style web service; the only key feature is that the server side web method has a 30 second sleep in it and will trace out some information before and after the thread is set to sleep. In the configuration for this web service there is again nothing special; it's pretty much the plainest web service you could build.

    Client-Side

    To begin looking at what was happening with our example, I created a number of different ways to consume the web service.

    SoapHttpClientProtocol Example

    I created a small application which would use a normal generated proxy to call the web service. It would iterate around a loop and make calls using the begin/end methods so I can do this asynchronously. I would do a loop of 20 calls with the ConnectionManagement configuration section supporting only 5 concurrent connections to the server:

      <system.net>
        <connectionManagement>
          <remove address="*"/>
          <add address="*" maxconnection="12" />
          <add address="http://<ServerName>" maxconnection="5" />
        </connectionManagement>
      </system.net>

    The key points of the service calling code are:
    - I have configured a timeout of 40 seconds on the proxy
    - I am using the asynchronous methods on the proxy to call the web service

    The Test

    I would run the client and execute 21 calls to the web service.

    The Results

    Some observations on the client side and web service side traces are:
    - All of the calls were successful from the client's perspective
    - You could see the next call starting on the server as soon as the previous one had completed
    - Calls took significantly longer than 40 seconds from the start of our call to the return. In fact, call 20 took 2 minutes and 30 seconds from the perspective of my code to execute, even though I had set the timeout to 40 seconds

    WSE 2 Sample

    In the second example I used the exact same code to call the web service, with a single exception: I modified the web service proxy to derive from WebServicesClientProtocol, which is part of WSE 2 (using SP3). The key points are the same:
    - I have configured a timeout of 40 seconds on the proxy
    - I am using the asynchronous methods on the proxy to call the web service

    The Test

    This test would execute 21 calls from the client to the web service.

    The Results

    Some observations on the trace results for this scenario are:
    - With call 4, if you look at the server side trace, it did not start executing on the server for a number of seconds after the other 4 initial calls which were accepted by the server. I re-ran the test and this happened a couple of times and not on most others, so at this point I'm just putting this down to something unexpected happening on the development machine, and we will leave this observation out of scope of this article.
    - The client side trace statement executed almost immediately in all cases
    - All calls after the initial few would time out
    - On the client side, the calls that did time out took longer than the 40 seconds we set as the timeout
    - As calls completed on the server, the next calls were starting to come through
    - The calls that timed out on the client did actually connect to the server, and their server side execution completed successfully

    Elaboration on the findings

    Based on the above observations, conceptually what is happening is this: everything except the final web service object lives on the client side of the call, and the two proxy base classes start their timeout counters in different places. From the earlier samples we can work out that the timeout counter for the WSE web service proxy starts before the one for the SoapHttpClientProtocol proxy; the WSE timeout includes the time to get a connection from the pool, whereas the Soap proxy timeout covers just the method execution.

    One interesting observation is that if we rerun the above sample and increase the number of calls from 21 to 100,000, then for the WSE sample we see a similar pattern where everything after the first few calls times out on the client as soon as it makes a connection to the server, whereas the soap proxy will happily plug away and process all of the calls without a single timeout. I actually set the sample running overnight, and this did happen.

    At this point you are probably thinking the same thoughts I was at the time about the differences in behaviour, which one is right, and why they are different. I'm not sure there is a definitive answer to this in the documentation, or at least not one that I could find! I think you just have to consider that they are different and that they could have different effects depending on your messaging solution. In lots of situations this is just not an issue, as your concurrent requests never reach the point where you end up throttling the web service calls on the client side; however, this is definitely more common with an integration broker such as BizTalk, where you often have high throughput requirements.

    Some of the considerations you should make

    Based on this behaviour, you should be aware of the following:
    - In a .net application, if you are making lots of concurrent web service calls in an asynchronous manner, your users may think they are experiencing poor performance while you think your web service is working well. The problem could be that the client has a default of 2 connections to remote servers, so you should bear this in mind.
    - When you are developing a BizTalk solution, or a .net solution with the WSE 2 stack, you may experience timeouts under load, and throttling the number of connections using the max connections element in the configuration file will not help you.
    - For an application using WSE 2 or SoapHttpClientProtocol, an expired timeout will not throw an error until after a connection to the server has been made, so you should consider this in your transaction and durability patterns.

    Our Work Around

    In the short term, for our specific scenario, we know we can handle this by just increasing our timeout value. There is only a specific small window when we get lots of concurrent traffic that causes this scenario, so we should be able to increase the timeout to take into consideration the additional client side wait, and on the odd occasion where we do get a timeout, the BizTalk send port retry will handle it. What was causing our original problem was that, for that short window, we were getting a lot of retries, which significantly increased the load on our send servers and highlighted the issue.

    Longer Term Solution

    As a longer term solution, this really gives us more ammunition to argue for a migration to WCF. The application we are calling has some factors which limit the protocols we can use, but with WCF we would have more control over the various timeout options, because in WCF you can configure specific parts of the timeout.

    Summary

    I've had this blog post on my to-do list for ages, but hopefully it will be useful to some people, just to understand this behaviour and possibly to help with performance issues you may have. I do not believe there is much in the way of documentation in this area, particularly around WSE 2 and ASMX, so again, another bit of ammunition for migrating to WCF. I'll try to do a follow-up post with the sample for WCF to show how this changes things.
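    For reference, a minimal sketch of the kind of client loop described above. HelloWorldService, BeginHelloWorld and EndHelloWorld are hypothetical stand-ins for a generated asmx proxy; the shared-proxy, Begin/End pattern and the 40 second timeout mirror the samples in the article:

      using System;
      using System.Net;

      class TimeoutTestClient
      {
          static void Main()
          {
              // Same effect as the maxconnection=5 config element above
              ServicePointManager.DefaultConnectionLimit = 5;

              var proxy = new HelloWorldService();   // hypothetical generated proxy
              proxy.Timeout = 40000;                 // the 40 second timeout from the samples

              for (int i = 1; i <= 21; i++)
              {
                  int call = i;
                  proxy.BeginHelloWorld(ar =>
                  {
                      try
                      {
                          string result = proxy.EndHelloWorld(ar);
                          Console.WriteLine("Call {0} completed: {1}", call, result);
                      }
                      catch (WebException ex)
                      {
                          // With a WSE-derived proxy, most calls after the first
                          // few fail here once they finally get a connection
                          Console.WriteLine("Call {0} failed: {1}", call, ex.Status);
                      }
                  }, null);
              }
              Console.ReadLine(); // keep the process alive while the async calls drain
          }
      }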

    Read the article

  • Quick guide to Oracle IRM 11g: Configuring SSL

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    So far in this guide we have an IRM server up and running; however, I skipped over SSL configuration in the previous article because I wanted to cover it in more detail now. You can, if you wish, not bother with setting up SSL, but considering this is a security technology it is worthwhile doing.

    Contents
    - Setting up a one way, self signed SSL certificate in WebLogic
    - Setting up an official SSL certificate in Apache 2.x
    - Configuring Apache to proxy traffic to the IRM server

    There are two common scenarios in which an Oracle IRM server is configured. For a development or evaluation system, people usually communicate directly with the WebLogic Server running the IRM service. However, in a production environment, and for some proof of concept evaluations that require a setup reflecting a production system, the traffic to the IRM server travels via a web server proxy, commonly Apache. In this guide we are building an Oracle Enterprise Linux based IRM service, and this article will go over the configuration of SSL in WebLogic and also in Apache. As in the past articles, we are going to use two host names in the configuration below:
    - irm.company.com will refer to the public Apache server
    - irm.company.internal will refer to the internal WebLogic IRM server

    Setting up a one way, self signed SSL certificate in WebLogic

    First let's look at creating a simple self signed SSL certificate to be used in WebLogic. This is a quick and easy way to get SSL working in your environment; however, the downside is that no browsers are going to trust this certificate, and you'll need to manually install it onto any machine communicating with the server. This is fine for development or when you have only a few users evaluating the system, but for any significant use it's usually better to have a fully trusted certificate, and I explain that in the next section. For now let's go through creating, installing and testing a self signed certificate. We use a Java utility to create the certificates; open a console and run the following commands. Note you should choose your own secure passwords wherever you see "password" below.

      [oracle@irm /] source /oracle/middleware/wlserver_10.3/server/bin/setWLSEnv.sh
      [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/
      [oracle@irm /] java utils.CertGen -selfsigned -certfile MyOwnSelfCA.cer -keyfile MyOwnSelfKey.key -keyfilepass password -cn "irm.oracle.demo"
      [oracle@irm /] java utils.ImportPrivateKey -keystore MyOwnIdentityStore.jks -storepass password -keypass password -alias trustself -certfile MyOwnSelfCA.cer.pem -keyfile MyOwnSelfKey.key.pem -keyfilepass password
      [oracle@irm /] keytool -import -trustcacerts -alias trustself -keystore TrustMyOwnSelf.jks -file MyOwnSelfCA.cer.der -keyalg RSA

    We now have two Java key stores, MyOwnIdentityStore.jks and TrustMyOwnSelf.jks. These contain the keys and certificates we will use in WebLogic Server. Now we need to tell the IRM server to use these stores when setting up SSL connections for incoming requests. Make sure the Admin server is running, log in to the WebLogic Console at http://irm.company.internal:7001/console and do the following:

    1. In the menu on the left, select the + next to Environment to expose the submenu, then click on Servers. You will see two servers in the list, AdminServer(admin) and IRM_server1.
    2. If the IRM server is running, shut it down, either by hitting CONTROL + C in the console window it was started from, or by switching to the Control tab, selecting IRM_server1 and then selecting the Shutdown menu, then Force Shutdown Now.
    3. In the Configuration tab select IRM_server1 and switch to the Keystores tab. By default WebLogic Server uses its own demo identity and trust. We are now going to switch to the self signed ones we've just created, so select the Change button, switch to Custom Identity and Custom Trust and hit Save.
    4. Now we have to complete the resulting fields; the settings I've used in my evaluation server are below.

       Identity
       - Custom Identity Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/MyOwnIdentityStore.jks
       - Custom Identity Keystore Type: JKS
       - Custom Identity Keystore Passphrase: password
       - Confirm Custom Identity Keystore Passphrase: password

       Trust
       - Custom Trust Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/TrustMyOwnSelf.jks
       - Custom Trust Keystore Type: JKS
       - Custom Trust Keystore Passphrase: password
       - Confirm Custom Trust Keystore Passphrase: password

    5. Now click on the SSL tab for IRM_server1 and enter the alias and passphrase; in my demo the details are:

       Identity
       - Private Key Alias: trustself
       - Private Key Passphrase: password
       - Confirm Private Key Passphrase: password

       And hit Save.

    Now let's test a connection to the IRM server over HTTPS using SSL. Go back to a console window and start the IRM server; a quick reminder on how to do this:

      [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/bin
      [oracle@irm /] ./startManagedWeblogic IRM_server1

    Once it is running, open a browser and head to the SSL port of the server. By default the IRM server will be listening on the URL https://irm.company.internal:16101/irm_rights. (On a system that has the IRM services installed on the Admin server the port would be 7002, but that isn't typical, or advisable; your system is going to have a separate managed server listening on port 16101.) Once you open this address you will notice that your browser complains that the server certificate is untrusted. You are going to be prompted every time you create a new SSL session with the server, both from the browser and, more annoyingly, from the IRM Desktop. If you plan on always using a self signed certificate, it is worth adding it to the Windows certificate store so that when you are accessing sealed content you do not keep being told the certificate is not trusted. Follow these instructions (which are for Internet Explorer 8; they may vary for your version of IE):

    1. Start Internet Explorer and open the URL to your IRM server over SSL, e.g. https://irm.company.internal:16101/irm_rights.
    2. IE will complain about the certificate; click on Continue to this website (not recommended).
    3. From the IE Tools menu select Internet Options, and from the resulting dialog select Security, then click on Trusted Sites and then the Sites button.
    4. Add to the list of trusted sites a URL which matches the server you are accessing, e.g. https://irm.company.internal/, and select OK.
    5. Now refresh the page you were accessing; next to the URL you should see a red cross and the words Certificate Error. Click on this button and select View Certificates. You will now see a dialog with the details of the self signed certificate, and the Install Certificate... button should be enabled. Click on this to start the wizard.
    6. Click Next and you'll be asked where the certificate should be installed. Change the option to Place all certificates in the following store, select Browse, choose the Trusted Root Certification Authorities location and hit OK. You'll then be prompted to install the certificate; answer Yes.
    7. You also need to import the root signing certificate into the same location, so once again select the red Certificate Error option, and this time, when viewing the certificate, switch to the Certification Path tab; you should see a CertGenCAB certificate. Select this, click on View Certificate and go through the same process as above to import it into the store.
    8. Finally, close all instances of the IE browser and re-access the IRM server URL; this time you should not receive any errors.

    Setting up an official SSL certificate in Apache 2.x

    At this point we have an IRM server that you can communicate with over SSL. However, this certificate isn't trusted by any browser, because its path of trust doesn't end in a recognized certificate authority (CA). Also, you are communicating directly with the WebLogic Server over a non standard SSL port, 16101. In a production environment it is common to have another device handle the initial public internet traffic and then proxy it to the WebLogic server. What I'm going to walk through next is configuring Apache to proxy traffic to a WebLogic server and also to use a real SSL certificate from an official CA.

    The first step is to configure Apache to handle incoming requests over SSL. In this guide I am configuring the IRM service on Oracle Enterprise Linux 5 update 3 with Apache 2.2.3, which came with the OpenSSL and mod_ssl components. Before I purchase an SSL certificate, I need to generate a certificate request from the server. Oracle.com uses Verisign, and for my own personal needs I use cheaper certificates from GoDaddy. The following instructions are specific to Apache, but there are many references out there for other web servers. For Apache I have OpenSSL, and the commands are:

      [oracle@irm /] cd /usr/bin
      [oracle@irm bin] openssl genrsa -des3 -out irm-apache-server.key 2048
      Generating RSA private key, 2048 bit long modulus
      ............................+++
      .........+++
      e is 65537 (0x10001)
      Enter pass phrase for irm-apache-server.key:
      Verifying - Enter pass phrase for irm-apache-server.key:

      [oracle@irm bin] openssl req -new -key irm-apache-server.key -out irm-apache-server.csr
      Enter pass phrase for irm-apache-server.key:
      You are about to be asked to enter information that will be incorporated
      into your certificate request.
      What you are about to enter is what is called a Distinguished Name or a DN.
      There are quite a few fields but you can leave some blank
      For some fields there will be a default value,
      If you enter '.', the field will be left blank.
      -----
      Country Name (2 letter code) [GB]:US
      State or Province Name (full name) [Berkshire]:CA
      Locality Name (eg, city) [Newbury]:San Francisco
      Organization Name (eg, company) [My Company Ltd]:Oracle
      Organizational Unit Name (eg, section) []:Security
      Common Name (eg, your name or your server's hostname) []:irm.company.com
      Email Address []:[email protected]

      Please enter the following 'extra' attributes
      to be sent with your certificate request
      A challenge password []:testing
      An optional company name []:

    You must make sure to remember the pass phrase you used in the initial key generation; you will need it when later configuring Apache. In the /usr/bin directory there are now two new files. irm-apache-server.csr contains our certificate request and is what you cut and paste, or upload, to your certificate authority when you purchase and validate your SSL certificate. In response you will typically get two files: your server certificate, and another certificate file that will likely contain a set of certificates from your CA which validate your certificate's trust.

    Next we need to configure Apache to use these files. Typically there is an ssl.conf file, which is where all the SSL configuration is done. On my Oracle Enterprise Linux server this file is located at /etc/httpd/conf.d/ssl.conf, and I've added the following lines:

      <VirtualHost irm.company.com>
        # Setup SSL for irm.company.com
        ServerName irm.company.com
        SSLEngine On
        SSLCertificateFile /oracle/secure/irm.company.com.crt
        SSLCertificateKeyFile /oracle/secure/irm.company.com.key
        SSLCertificateChainFile /oracle/secure/gd_bundle.crt
      </VirtualHost>

    After restarting Apache (apachectl restart) I can now attempt to connect to the Apache server in a web browser at https://irm.company.com/. If all is configured correctly, I should see an Apache test page delivered over HTTPS.

    Configuring Apache to proxy traffic to the IRM server

    The final piece in setting up SSL is to have Apache proxy requests to the IRM server, but do so securely. The requests to Apache will be over HTTPS using a legitimate certificate, but we can also configure Apache to proxy these requests internally across to the IRM server using SSL with the self signed certificate we generated at the start of this article. For this proxying we use the WebLogic Web Server plugin for Apache, which you can download from Oracle. Download the zip file and extract it onto the server. The extraction reveals a set of zip files, each one specific to a supported web server. In my instance I am using Apache 2.2 32 bit on an Oracle Enterprise Linux, 64 bit server. If you are not sure which version your Apache server is, run the command /usr/sbin/httpd -V and you'll see the version and whether it is 32 or 64 bit. Mine is a 32 bit server, so I need to extract the file WLSPlugin1.1-Apache2.2-linux32-x86.zip. Then, from the resulting lib folder, copy the file mod_wl.so into /usr/lib/httpd/modules/.

    First we want to test that the plugin will work for regular HTTP traffic. Edit the httpd.conf for Apache and add the following section at the bottom:

      LoadModule weblogic_module modules/mod_wl.so

      <IfModule mod_weblogic.c>
         WebLogicHost irm.company.internal
         WebLogicPort 16100
         WLLogFile /tmp/wl-proxy.log
      </IfModule>

      <Location /irm_rights>
         SetHandler weblogic-handler
      </Location>

      <Location /irm_desktop>
         SetHandler weblogic-handler
      </Location>

      <Location /irm_sealing>
         SetHandler weblogic-handler
      </Location>

      <Location /irm_services>
         SetHandler weblogic-handler
      </Location>

    Now restart Apache again (apachectl restart) and open a browser to http://irm.company.com/irm_rights. Apache will proxy the HTTP traffic from port 80 of your Apache server to the IRM service listening on port 16100 of the WebLogic managed server. Note above I have included all four of the locations you might wish to proxy: /irm_rights is the URL to the management website, /irm_desktop is the URL the IRM Desktop uses to communicate, /irm_sealing is for web services based document sealing, and /irm_services is for IRM server web services. The last two are typically only used when you have the IRM server integrated with another application, and it is unlikely you'd be accessing these resources from the public facing Apache server; however, just in case, I've mentioned them above.

    Now let's enable SSL communication from Apache to WebLogic. In the zip file we extracted there are some more modules we need to copy into the Apache folder. Looking back in the lib folder that we extracted, copy the following into /usr/lib/httpd/modules/:

      libwlssl.so
      libnnz11.so
      libclntsh.so.11.1

    The documentation states that this should be all you need to do, but I found that I also needed to create an environment variable called LD_LIBRARY_PATH pointing to the folder /usr/lib/httpd/modules/. If I didn't do this, starting Apache with the WebLogic module configured for SSL would throw the error:

      [crit] (20014)Internal error: WL SSL Init failed for server: (null) on 0

    So I had to edit the file /etc/profile and add the following lines at the bottom. You may already have the LD_LIBRARY_PATH variable defined; if so, simply add this path to it.

      LD_LIBRARY_PATH=/usr/lib/httpd/modules/
      export LD_LIBRARY_PATH

    The WebLogic plugin uses an Oracle wallet to store the required certificates, so you'll need to copy the self signed certificate from the IRM server over to the Apache server. Copy MyOwnSelfCA.cer.der into the same folder where you are storing your public certificates, in my example /oracle/secure. It's worth mentioning that these files should only be readable by root (the user Apache runs as). Now let's create an Oracle wallet and import the self signed certificate from the IRM server. The orapki tool was included in the bin folder of the Apache 1.1 plugin zip you extracted.

      orapki wallet create -wallet /oracle/secure/my-wallet -auto_login_only
      orapki wallet add -wallet /oracle/secure/my-wallet -trusted_cert -cert MyOwnSelfCA.cer.der -auto_login_only

    Finally, change the httpd.conf to reflect that we want the WebLogic Apache plugin to use HTTPS/SSL and not just plain HTTP:

      <IfModule mod_weblogic.c>
         WebLogicHost irm.company.internal
         WebLogicPort 16101
         SecureProxy ON
         WLSSLWallet /oracle/secure/my-wallet
         WLLogFile /tmp/wl-proxy.log
      </IfModule>

    Then restart Apache once more and go back to the browser to test the communication. Opening the URL https://irm.company.com/irm_rights will proxy your request to the WebLogic server at https://irm.company.internal:16101/irm_rights.
    At this point you have a fully functional Oracle IRM service; the next step is to create a sealed document and test the entire system.

    Read the article

  • .NET HttpListener - no traffic when listening to "https://*:8080" when browser proxy is set?

    - by Greg
    Background: I can get HttpListener working fine for HTTP traffic, but I'm having trouble with HTTPS traffic.

    Question: How can I change the code below so that a browser request to an "https" URL will actually be picked up by my HttpListener?

    Notes: At the moment, with Firefox's proxy settings set to "localhost:8080", when I listen to traffic on port 8080 ("https://*:8080/") and enter an HTTPS URL in Firefox, no traffic gets picked up. (When I listen to just HTTP and enter normal HTTP URLs, it works fine.)

      _httpListener = new HttpListener();
      _httpListener.Prefixes.Add("https://*:8080/");
      _httpListener.Start();

    Thanks
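    One relevant detail: a browser configured to use a proxy does not speak TLS to the proxy port for an https URL; it first sends a plain-text CONNECT request to establish a tunnel, and a CONNECT request never matches an HttpListener prefix (an https prefix also requires a server certificate bound to the port). A quick diagnostic sketch of my own, assuming Firefox is still pointed at localhost:8080, that shows what actually arrives:

      using System;
      using System.IO;
      using System.Net;
      using System.Net.Sockets;

      class ProxyPeek
      {
          static void Main()
          {
              // Accept one connection on the "proxy" port and print the request line.
              var listener = new TcpListener(IPAddress.Loopback, 8080);
              listener.Start();
              using (TcpClient client = listener.AcceptTcpClient())
              using (var reader = new StreamReader(client.GetStream()))
              {
                  // For an https URL the browser sends something like:
                  //   CONNECT www.example.com:443 HTTP/1.1
                  Console.WriteLine(reader.ReadLine());
              }
              listener.Stop();
          }
      }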

    Read the article

  • When should I open and close a website's cached WCF proxy?

    - by Brandon Linton
    I've browsed around the other articles on StackOverflow that relate to caching WCF proxies for reuse, and I've read this article explaining why I should explicitly open the proxy before calling anything on it. I'm still a little hazy on the best implementation details. My question is: when should I open and close proxies for service calls on a website, and what should their lifetime be (per call, per request, or per web app)? We aren't planning on leveraging cached security contexts at the moment (but it's not unforeseeable). Thanks!
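    For reference, the per-call discipline usually recommended looks like this sketch; OrderServiceClient and SubmitOrder are hypothetical stand-ins for a generated proxy, not an authoritative answer:

      using System;
      using System.ServiceModel;

      public void PlaceOrder()
      {
          var proxy = new OrderServiceClient(); // hypothetical generated proxy
          try
          {
              proxy.Open();        // open explicitly, per the article referenced above
              proxy.SubmitOrder(); // hypothetical service operation
              proxy.Close();       // graceful close on the happy path
          }
          catch (CommunicationException)
          {
              proxy.Abort();       // channel is faulted; Close() would throw
          }
          catch (TimeoutException)
          {
              proxy.Abort();
          }
      }

    With that pattern the lifetime question largely answers itself: the channel lives per call, while anything expensive to build (such as a ChannelFactory) can be cached per web app.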

    Read the article
