Search Results

Search found 4890 results on 196 pages for '2003'.


  • How to improve Varnish performance?

    - by Darkseal
    We're experiencing a strange problem with our current Varnish configuration.

    • 4x web servers (IIS 6.5 on Windows 2003 Server, each installed on an Intel(R) Xeon(R) CPU E5450 @ 3.00GHz quad core, 4GB RAM)
    • 3x Varnish servers (varnish-3.0.3 revision 9e6a70f on Ubuntu 12.04.2 LTS 64-bit/precise, kernel Linux 3.2.0-29-generic, each installed on an Intel(R) Xeon(R) CPU E5450 @ 3.00GHz quad core, 4GB RAM)

    The Varnish servers' performance is awful in general, to the point that if we shut down one of them, the other two are unable to fulfill all the requests and start to skip beats, resulting in pending requests, timeouts, 404s, etc. What can we do to improve our Varnish performance? Considering that we're getting fewer than 5k requests per second during our peak, we should be able to serve our pages with even a single one of them without any problem.

    We use a standard, vanilla config, as shown by this varnishadm param.show output:

        acceptor_sleep_decay 0.900000 []
        acceptor_sleep_incr 0.001000 [s]
        acceptor_sleep_max 0.050000 [s]
        auto_restart on [bool]
        ban_dups on [bool]
        ban_lurker_sleep 0.010000 [s]
        between_bytes_timeout 60.000000 [s]
        cc_command "exec gcc -std=gnu99 -g -O2 -pthread -fpic -shared -Wl,-x -o %o %s"
        cli_buffer 8192 [bytes]
        cli_timeout 20 [seconds]
        clock_skew 10 [s]
        connect_timeout 0.700000 [s]
        critbit_cooloff 180.000000 [s]
        default_grace 10.000000 [seconds]
        default_keep 0.000000 [seconds]
        default_ttl 120.000000 [seconds]
        diag_bitmap 0x0 [bitmap]
        esi_syntax 0 [bitmap]
        expiry_sleep 1.000000 [seconds]
        fetch_chunksize 128 [kilobytes]
        fetch_maxchunksize 262144 [kilobytes]
        first_byte_timeout 60.000000 [s]
        group varnish (113)
        gzip_level 6 []
        gzip_memlevel 8 []
        gzip_stack_buffer 32768 [Bytes]
        gzip_tmp_space 0 []
        gzip_window 15 []
        http_gzip_support off [bool]
        http_max_hdr 64 [header lines]
        http_range_support on [bool]
        http_req_hdr_len 8192 [bytes]
        http_req_size 32768 [bytes]
        http_resp_hdr_len 8192 [bytes]
        http_resp_size 32768 [bytes]
        idle_send_timeout 60 [seconds]
        listen_address :80
        listen_depth 1024 [connections]
        log_hashstring on [bool]
        log_local_address off [bool]
        lru_interval 2 [seconds]
        max_esi_depth 5 [levels]
        max_restarts 4 [restarts]
        nuke_limit 50 [allocations]
        pcre_match_limit 10000 []
        pcre_match_limit_recursion 10000 []
        ping_interval 3 [seconds]
        pipe_timeout 60 [seconds]
        prefer_ipv6 off [bool]
        queue_max 100 [%]
        rush_exponent 3 [requests per request]
        saintmode_threshold 10 [objects]
        send_timeout 600 [seconds]
        sess_timeout 5 [seconds]
        sess_workspace 16384 [bytes]
        session_linger 50 [ms]
        session_max 100000 [sessions]
        shm_reclen 255 [bytes]
        shm_workspace 8192 [bytes]
        shortlived 10.000000 [s]
        syslog_cli_traffic on [bool]
        thread_pool_add_delay 2 [milliseconds]
        thread_pool_add_threshold 2 [requests]
        thread_pool_fail_delay 200 [milliseconds]
        thread_pool_max 2000 [threads]
        thread_pool_min 5 [threads]
        thread_pool_purge_delay 1000 [milliseconds]
        thread_pool_stack unlimited [bytes]
        thread_pool_timeout 300 [seconds]
        thread_pool_workspace 65536 [bytes]
        thread_pools 2 [pools]
        thread_stats_rate 10 [requests]
        user varnish (106)
        vcc_err_unref on [bool]
        vcl_dir /etc/varnish
        vcl_trace off [bool]
        vmod_dir /usr/lib/varnish/vmods
        waiter default (epoll, poll)

    This is our default.vcl file (LINK):

        sub vcl_recv {
            # BASIC recv COMMANDS:
            #
            #   lookup -> search the item in the cache
            #   pass   -> always serve a fresh item (no caching)
            #   pipe   -> like pass, but ensures a direct connection with the backend (no-cache AND no-proxy)

            # Allow the backend to serve up stale content if it is responding slowly.
            # This defines when Varnish should use a stale object if it has one in the cache.
            set req.grace = 30s;

            if (client.ip == "127.0.0.1") {
                # request from NGINX - do not alter X-Forwarded-For
                set req.http.HTTPS = "on";
            } else {
                # Add an X-Forwarded-For to keep track of the original request
                unset req.http.HTTPS;
                unset req.http.X-Forwarded-For;
                set req.http.X-Forwarded-For = client.ip;
            }

            set req.backend = www_director;

            # Strip all cookies to force an anonymous request when the back-end servers are down.
            if (!req.backend.healthy) {
                unset req.http.Cookie;
            }

            ## HTTP Accept-Encoding
            if (req.http.Accept-Encoding) {
                if (req.http.Accept-Encoding ~ "gzip") {
                    set req.http.Accept-Encoding = "gzip";
                } else if (req.http.Accept-Encoding ~ "deflate") {
                    set req.http.Accept-Encoding = "deflate";
                } else {
                    unset req.http.Accept-Encoding;
                }
            }

            if (req.request != "GET" && req.request != "HEAD" &&
                req.request != "PUT" && req.request != "POST" &&
                req.request != "TRACE" && req.request != "OPTIONS" &&
                req.request != "DELETE") {
                /* non-RFC2616 or CONNECT */
                return (pipe);
            }
            if (req.request != "GET" && req.request != "HEAD") {
                /* only deal with GET and HEAD by default */
                return (pass);
            }
            if (req.http.Authorization) {
                return (pass);
            }
            if (req.http.HTTPS ~ "on") {
                return (pass);
            }

            ######################################################
            # COOKIE HANDLING
            ######################################################
            # METHOD 1: do not remove cookies, but pass the page if they contain TB_NC
            if (!(req.url ~ "(?i)\.(png|gif|jpeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$")) {
                if (req.http.Cookie && req.http.Cookie ~ "TB_NC") {
                    return (pass);
                }
            }
            return (lookup);
        }

        # Code determining what to do when serving items from the IIS servers
        sub vcl_fetch {
            unset beresp.http.Server;
            set beresp.http.Server = "Server-1";

            # Allow items to be stale if needed. This is the maximum time Varnish should keep an object.
            set beresp.grace = 1h;

            if (req.url ~ "(?i)\.(png|gif|jpeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$") {
                unset beresp.http.set-cookie;
            }

            # Default Varnish VCL logic
            if (!beresp.cacheable || beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") {
                set beresp.ttl = 120 s;
                return (hit_for_pass);
            }

            # Not cacheable if it has the specific TB_NC no-caching cookie
            if (req.http.Cookie && req.http.Cookie ~ "TB_NC") {
                set beresp.http.X-Cacheable = "NO:Got Cookie";
                set beresp.ttl = 120 s;
                return (hit_for_pass);
            }
            # Not cacheable if it has Cache-Control: private
            else if (beresp.http.Cache-Control ~ "private") {
                set beresp.http.X-Cacheable = "NO:Cache-Control=private";
                set beresp.ttl = 120 s;
                return (hit_for_pass);
            }
            # Not cacheable if it has Cache-Control: no-cache or Pragma: no-cache
            else if (beresp.http.Cache-Control ~ "no-cache" || beresp.http.Pragma ~ "no-cache") {
                set beresp.http.X-Cacheable = "NO:Cache-Control=no-cache (or pragma no-cache)";
                set beresp.ttl = 120 s;
                return (hit_for_pass);
            }
            # If we reach this point, the object is cacheable.
            # Cacheable but without enough TTL: we need to extend the lifetime of the object artificially.
            # NOTE: the Varnish default TTL is set in /etc/sysconfig/varnish
            # and can be checked using the following command:
            #   varnishadm param.show default_ttl
            else if (beresp.ttl < 1s) {
                set beresp.ttl = 5s;
                set beresp.grace = 5s;
                set beresp.http.X-Cacheable = "YES:FORCED";
            }
            # Cacheable and with a valid TTL.
            else {
                set beresp.http.X-Cacheable = "YES";
            }

            # DEBUG INFO (Cookies)
            # set beresp.http.X-Cookie-Debug = "Request cookie: " + req.http.Cookie;

            return (deliver);
        }

        sub vcl_error {
            set obj.http.Content-Type = "text/html; charset=utf-8";
            if (obj.status == 404) {
                synthetic {" <!-- Markup for the 404 page goes here --> "};
            } else if (obj.status == 500) {
                synthetic {" <!-- Markup for the 500 page goes here --> "};
            } else if (obj.status == 503) {
                if (req.restarts < 4) {
                    return (restart);
                } else {
                    synthetic {" <!-- Markup for the 503 page goes here --> "};
                }
            } else {
                synthetic {" <!-- Markup for a generic error page goes here --> "};
            }
        }

        sub vcl_deliver {
            if (obj.hits > 0) {
                set resp.http.X-Cache = "HIT";
            } else {
                set resp.http.X-Cache = "MISS";
            }
        }

    Thanks in advance,

    Read the article

  • Creating STA COM compatible ASP.NET Applications

    - by Rick Strahl
    When building ASP.NET applications that interface with old school COM objects like those created with VB6 or Visual FoxPro (MTDLL), it's extremely important that the threads serving requests use Single Threaded Apartment (STA) threading. STA is a built-in COM technology that allows essentially single threaded components to operate reliably in a multi-threaded environment. STAs guarantee that COM objects instantiated on a specific thread stay on that specific thread, and any access to a COM object from another thread is automatically marshaled to the STA thread. The end effect is that you can have multiple threads, but a COM object instance lives on a fixed, never changing thread.

    ASP.NET by default uses MTA (multi-threaded apartment) threads, which are truly free spinning threads that pay no heed to COM object marshaling. This is vastly more efficient than STA threading, which has a bit of overhead in determining whether it's OK to run code on a given thread or whether some sort of thread/COM marshaling needs to occur. MTA COM components can be very efficient, but STA COM components in a multi-threaded environment always tend to have a fair amount of overhead.

    It's amazing how much COM Interop I still see today, so while it seems really old school to be talking about this topic, it's actually quite apropos for me, as I have many customers using legacy COM systems that need to interface with other .NET applications. In this post I'm consolidating some of the hacks I've used to integrate with various ASP.NET technologies when using STA COM components.

    STA in ASP.NET

    Support for STA threading in the ASP.NET framework is fairly limited. Specifically, only the original ASP.NET WebForms technology supports STA threading directly, via its STA page handler implementation, or what you might know as ASPCOMPAT mode. For WebForms, running STA components is as easy as specifying the AspCompat attribute in the @Page tag:

        <%@ Page Language="C#" AspCompat="true" %>

    which runs the page in STA mode. Removing it runs in MTA mode. Simple.

    Unfortunately all other ASP.NET technologies built on top of the core ASP.NET engine do not support STA natively. So if you want to use STA COM components in MVC or with classic ASMX Web Services, there's no automatic way like the ASPCOMPAT keyword available.

    So what happens when you run an STA COM component in an MTA application? In low volume environments, nothing much will happen. The COM objects will appear to work just fine, as there are no simultaneous thread interactions, and the COM component will happily run on a single thread or multiple single threads one at a time. So in testing, running components in MTA environments may appear to work just fine. However, as load increases and threads get re-used by ASP.NET, COM objects will end up getting created on multiple different threads. This can result in crashes or hangs, or data corruption in the STA components, which store their state in thread local storage on the STA thread. If threads overlap, this global store can easily get corrupted, which in turn causes problems. STA ensures that any COM object instance loaded always stays on the same thread it was instantiated on.
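    To see the two apartment models in isolation, outside ASP.NET entirely, here is a minimal console sketch (my own illustration, not code from this post). The COM instantiation is left as a comment, since any VB6/FoxPro component would do:

        using System;
        using System.Threading;

        class ApartmentDemo
        {
            static void Main()
            {
                // Without [STAThread], a console app's main thread is MTA by default.
                Console.WriteLine("Main: " + Thread.CurrentThread.GetApartmentState());

                var staThread = new Thread(() =>
                {
                    Console.WriteLine("Worker: " + Thread.CurrentThread.GetApartmentState());
                    // Instantiate the legacy COM object here; it is now pinned to
                    // this thread, and calls from other threads get marshaled to it.
                });
                // The apartment state must be set before the thread is started.
                staThread.SetApartmentState(ApartmentState.STA);
                staThread.Start();
                staThread.Join();
            }
        }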
    What about COM+?

    COM+ is supposed to address the problem of STA in MTA applications by providing an abstraction with its own thread pool manager for COM objects. It steps into the COM instantiation pipeline and hands out COM instances from its own internally maintained STA thread pool. This guarantees that the COM instantiation threads are STA threads, if you're using STA components.

    COM+ works, but in my experience the technology is very, very slow for STA components. It adds a ton of overhead and reduces COM performance noticeably in load tests in IIS. COM+ can make sense in some situations, but for Web apps with STA components it falls short. In addition, there's the need to ensure that COM+ is set up and configured on the target machine, and the fact that components have to be registered in COM+. COM+ also keeps components up at all times, so if a component needs to be replaced, the COM+ package needs to be unloaded (the same is true for IIS hosted components, but it's more common to manage that). COM+ is an option for well established components, but native STA support tends to provide better performance and more consistent usability, IMHO.

    STA for non supporting ASP.NET Technologies

    As mentioned above, only WebForms supports STA natively. However, by utilizing the WebForms ASP.NET page handler internally, it's actually possible to trick various other ASP.NET technologies and let them work with STA components. This is ugly, but I've used each of these in various applications, and I've had minimal problems making them work with FoxPro STA COM components, which is about as difficult as it gets for COM Interop in .NET. In this post I summarize several STA workarounds that enable you to use STA threading with these ASP.NET technologies:

    • ASMX Web Services
    • ASP.NET MVC
    • WCF Web Services
    • ASP.NET Web API

    ASMX Web Services

    I start with classic ASP.NET ASMX Web Services because it's the easiest mechanism that allows for STA modification. It also clearly demonstrates how the WebForms STA page handler is the key technology enabling the various other solutions to create STA components. Essentially the way this works is to override the WebForms Page class and hijack its init functionality for processing requests. Here's what this looks like for Web Services:

        namespace FoxProAspNet
        {
            public class WebServiceStaHandler : System.Web.UI.Page, IHttpAsyncHandler
            {
                protected override void OnInit(EventArgs e)
                {
                    IHttpHandler handler = new WebServiceHandlerFactory().GetHandler(
                        this.Context,
                        this.Context.Request.HttpMethod,
                        this.Context.Request.FilePath,
                        this.Context.Request.PhysicalPath);
                    handler.ProcessRequest(this.Context);
                    this.Context.ApplicationInstance.CompleteRequest();
                }

                public IAsyncResult BeginProcessRequest(
                    HttpContext context, AsyncCallback cb, object extraData)
                {
                    return this.AspCompatBeginProcessRequest(context, cb, extraData);
                }

                public void EndProcessRequest(IAsyncResult result)
                {
                    this.AspCompatEndProcessRequest(result);
                }
            }

            public class AspCompatWebServiceStaHandlerWithSessionState :
                WebServiceStaHandler, IRequiresSessionState
            {
            }
        }

    This class overrides the ASP.NET WebForms Page class, which has little known AspCompatBeginProcessRequest() and AspCompatEndProcessRequest() methods that are responsible for providing the WebForms ASPCOMPAT functionality. These methods handle routing requests to STA threads. Note there are two classes: one that includes session state and one that does not. If you plan on using ASP.NET session state, use the latter class; otherwise stick to the former. This maps to the EnableSessionState page setting in WebForms. The class simply hooks into this functionality by overriding the BeginProcessRequest and EndProcessRequest methods and always forcing them into the AspCompat methods.
    The way this works is that BeginProcessRequest() fires first to set up the threads and start initializing the handler. As part of that process the OnInit() method is fired, which is by then already running on an STA thread. The code then creates an instance of the actual Web Service handler factory and calls its ProcessRequest method to start executing, which generates the Web Service result. Immediately after ProcessRequest, the request is stopped with Application.CompleteRequest(), which ensures that the rest of the Page handler logic doesn't fire. This means that even though the fairly heavy Page class is overridden here, it doesn't end up executing any of its internal processing, which makes this code fairly efficient. In a nutshell, we're hijacking the Page HttpHandler and forcing it to process the Web Service handler in the context of the AspCompat handler behavior.

    Hooking up the Handler

    Because the above is an HttpHandler implementation, you need to hook up the custom handler and replace the standard ASMX handler. To do this you need to modify the web.config file (here for IIS 7 and IIS Express):

        <configuration>
          <system.webServer>
            <handlers>
              <remove name="WebServiceHandlerFactory-Integrated-4.0" />
              <add name="Asmx STA Web Service Handler" path="*.asmx" verb="*"
                   type="FoxProAspNet.WebServiceStaHandler" precondition="integrated" />
            </handlers>
          </system.webServer>
        </configuration>

    (Note: the name for WebServiceHandlerFactory-Integrated-4.0 might be slightly different depending on your server version. Check the IIS handler configuration in the IIS Management Console for the exact name, or simply remove the handler from the list there, which will propagate to your web.config.)

    For IIS 5 & 6 (Windows XP/2003) or the Visual Studio Web Server, use:

        <configuration>
          <system.web>
            <httpHandlers>
              <remove path="*.asmx" verb="*" />
              <add path="*.asmx" verb="*" type="FoxProAspNet.WebServiceStaHandler" />
            </httpHandlers>
          </system.web>
        </configuration>

    To test, create a new ASMX Web Service with a method like this:

        [WebService(Namespace = "http://foxaspnet.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        public class FoxWebService : System.Web.Services.WebService
        {
            [WebMethod]
            public string HelloWorld()
            {
                return "Hello World. Threading mode is: " +
                       System.Threading.Thread.CurrentThread.GetApartmentState();
            }
        }

    Run this before you put in the web.config configuration changes and you should get: "Hello World. Threading mode is: MTA". Then put the handler mapping into web.config and you should see: "Hello World. Threading mode is: STA". And you're on your way to using STA COM components. It's a hack, but it works well! I've used this with several high volume Web Service installations with various customers and it's been fast and reliable.

    ASP.NET MVC

    ASP.NET MVC has quickly become the most popular ASP.NET technology, replacing WebForms for creating HTML output. MVC is more complex to get started with, but once you understand the basic structure of how requests flow through the MVC pipeline, it's easy to use and amazingly flexible in manipulating HTML requests. In addition, MVC has great support for non-HTML output sources like JSON and XML, making it an excellent choice for AJAX requests without any additional tools. Unlike WebForms, ASP.NET MVC doesn't support STA threads natively, so some trickery is needed to make it work with STA threads as well. MVC gets its handler implementation through custom route handlers, using ASP.NET's built-in routing semantics.
    Running in an STA handler requires hooking the WebForms page handler into the route handler implementation. As with the Web Service handler, the first step is to create a custom HttpHandler that can instantiate an MVC request pipeline properly:

        public class MvcStaThreadHttpAsyncHandler : Page, IHttpAsyncHandler, IRequiresSessionState
        {
            private RequestContext _requestContext;

            public MvcStaThreadHttpAsyncHandler(RequestContext requestContext)
            {
                if (requestContext == null)
                    throw new ArgumentNullException("requestContext");
                _requestContext = requestContext;
            }

            public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
            {
                return this.AspCompatBeginProcessRequest(context, cb, extraData);
            }

            protected override void OnInit(EventArgs e)
            {
                var controllerName = _requestContext.RouteData.GetRequiredString("controller");
                var controllerFactory = ControllerBuilder.Current.GetControllerFactory();
                var controller = controllerFactory.CreateController(_requestContext, controllerName);
                if (controller == null)
                    throw new InvalidOperationException("Could not find controller: " + controllerName);
                try
                {
                    controller.Execute(_requestContext);
                }
                finally
                {
                    controllerFactory.ReleaseController(controller);
                }
                this.Context.ApplicationInstance.CompleteRequest();
            }

            public void EndProcessRequest(IAsyncResult result)
            {
                this.AspCompatEndProcessRequest(result);
            }

            public override void ProcessRequest(HttpContext httpContext)
            {
                throw new NotSupportedException(
                    "STAThreadRouteHandler does not support ProcessRequest (only BeginProcessRequest)");
            }
        }

    This handler code figures out which controller to load and then executes the controller. MVC internally provides the information needed to route to the appropriate method and pass the right parameters. Like the Web Service handler, the logic occurs in OnInit() and performs all the processing in that part of the request.

    Next, we need a RouteHandler that can actually pick up this handler. Unlike the Web Service handler, where we simply registered the handler, MVC requires a RouteHandler to pick up the handler. RouteHandlers look at the URL's path and, based on that, decide what handler to invoke. The route handler is pretty simple: all it does is load our custom handler:

        public class MvcStaThreadRouteHandler : IRouteHandler
        {
            public IHttpHandler GetHttpHandler(RequestContext requestContext)
            {
                if (requestContext == null)
                    throw new ArgumentNullException("requestContext");
                return new MvcStaThreadHttpAsyncHandler(requestContext);
            }
        }

    At this point you can instantiate this route handler and force STA requests to MVC by specifying a route.
    The following sets up the ASP.NET default route:

        Route mvcRoute = new Route("{controller}/{action}/{id}",
            new RouteValueDictionary(
                new { controller = "Home", action = "Index", id = UrlParameter.Optional }),
            new MvcStaThreadRouteHandler());
        RouteTable.Routes.Add(mvcRoute);

    To make this code a little easier to work with and mimic the behavior of the routes.MapRoute() extension method that MVC provides, here is an extension method for MapMvcStaRoute():

        public static class RouteCollectionExtensions
        {
            public static void MapMvcStaRoute(this RouteCollection routeTable,
                string name, string url, object defaults = null)
            {
                Route mvcRoute = new Route(url,
                    new RouteValueDictionary(defaults),
                    new MvcStaThreadRouteHandler());
                RouteTable.Routes.Add(mvcRoute);
            }
        }

    With this, the syntax to add a route becomes a little easier and matches the MapRoute() method:

        RouteTable.Routes.MapMvcStaRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );

    The nice thing about this route handler, STA handler, and extension method is that the combination is fully self contained. You can put all three into a single class file, stick it into your Web app, and then simply call MapMvcStaRoute() and it just works. Easy!

    To see whether this works, create an MVC controller like this:

        public class ThreadTestController : Controller
        {
            public string ThreadingMode()
            {
                return Thread.CurrentThread.GetApartmentState().ToString();
            }
        }

    Try this test first with only the MapRoute() hookup in the route configuration, in which case you should get MTA as the value. Then change the MapRoute() call to MapMvcStaRoute(), leaving all the parameters the same, and re-run the request. You now should see STA as the result. You're on your way to using STA COM components reliably in ASP.NET MVC.

    WCF Web Services running through IIS

    WCF Web Services provide a more robust and wider range of services for Web Services. You can use WCF over HTTP, TCP, and pipes, and WCF services support WS* secure services. There are many features in WCF that go way beyond what ASMX can do. But it's also a bit more complex than ASMX. As a basic rule, if you need to serve straight SOAP services over HTTP, I'd recommend sticking with the simpler ASMX services, especially if COM is involved. If you need WS* support or want to serve data over non-HTTP protocols, then WCF makes more sense. WCF is not my forte, but I found a solution from Scott Seely on his blog that describes the process, and it seems to work well. I'm copying his code below so this STA information is all in one place, and I'll quickly explain it. Scott's code basically works by creating a custom OperationBehavior that can be specified via an [STAOperationBehavior] attribute on every method. Using his attribute you end up with a class (or interface, if you separate the contract and class) that looks like this:

        [ServiceContract]
        public class WcfService
        {
            [OperationContract]
            public string HelloWorldMta()
            {
                return Thread.CurrentThread.GetApartmentState().ToString();
            }

            // Make sure you use this custom STAOperationBehavior
            // attribute to force STA operation of service methods
            [STAOperationBehavior]
            [OperationContract]
            public string HelloWorldSta()
            {
                return Thread.CurrentThread.GetApartmentState().ToString();
            }
        }

    Pretty straightforward. The latter method returns STA while the former returns MTA. To make STA work, every method needs to be marked up. The implementation consists of the attribute and an OperationInvoker implementation.
    Here are the two classes required to make this work, from Scott's post:

        public class STAOperationBehaviorAttribute : Attribute, IOperationBehavior
        {
            public void AddBindingParameters(OperationDescription operationDescription,
                System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
            {
            }

            public void ApplyClientBehavior(OperationDescription operationDescription,
                System.ServiceModel.Dispatcher.ClientOperation clientOperation)
            {
                // If this is applied on the client, well, it just doesn't make sense.
                // Don't throw in case this attribute was applied on the contract
                // instead of the implementation.
            }

            public void ApplyDispatchBehavior(OperationDescription operationDescription,
                System.ServiceModel.Dispatcher.DispatchOperation dispatchOperation)
            {
                // Change the IOperationInvoker for this operation.
                dispatchOperation.Invoker = new STAOperationInvoker(dispatchOperation.Invoker);
            }

            public void Validate(OperationDescription operationDescription)
            {
                if (operationDescription.SyncMethod == null)
                {
                    throw new InvalidOperationException("The STAOperationBehaviorAttribute " +
                        "only works for synchronous method invocations.");
                }
            }
        }

        public class STAOperationInvoker : IOperationInvoker
        {
            IOperationInvoker _innerInvoker;

            public STAOperationInvoker(IOperationInvoker invoker)
            {
                _innerInvoker = invoker;
            }

            public object[] AllocateInputs()
            {
                return _innerInvoker.AllocateInputs();
            }

            public object Invoke(object instance, object[] inputs, out object[] outputs)
            {
                // Create a new, STA thread
                object[] staOutputs = null;
                object retval = null;
                Thread thread = new Thread(
                    delegate()
                    {
                        retval = _innerInvoker.Invoke(instance, inputs, out staOutputs);
                    });
                thread.SetApartmentState(ApartmentState.STA);
                thread.Start();
                thread.Join();
                outputs = staOutputs;
                return retval;
            }

            public IAsyncResult InvokeBegin(object instance, object[] inputs,
                AsyncCallback callback, object state)
            {
                // We don't handle async...
                throw new NotImplementedException();
            }

            public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
            {
                // We don't handle async...
                throw new NotImplementedException();
            }

            public bool IsSynchronous
            {
                get { return true; }
            }
        }

    The key in this setup is the invoker and its Invoke method, which creates a new thread and then fires the request on this new thread. Because this approach creates a new thread for every request, it's not super efficient. There's a bunch of overhead involved in creating the thread and throwing it away after each request, but it'll work for low volume requests and ensures each request runs in STA mode. If better performance is required, it would be useful to create a custom thread manager that can pool a number of STA threads and hand them out as needed, rather than creating new threads on every request (a rough sketch of that idea follows below).

    If your Web Service needs are simple and you only need to serve standard SOAP 1.x requests, I would recommend sticking with ASMX services. It's easier to set up and work with, and for STA component use it'll perform significantly better, since ASP.NET manages the STA thread pool for you rather than firing new threads for each request. One nice thing about Scott's code, though, is that it works in any WCF environment, including self hosting. It has no dependency on ASP.NET, or WebForms for that matter.
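    To make the pooling suggestion above concrete, here is a minimal sketch of what such a thread manager could look like. This is my own illustration, not code from Scott's post; the StaThreadPool name and the pool size are invented:

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        // Illustrative only: a tiny pool of long-lived STA threads that run
        // queued work items, so each call reuses a thread instead of spawning one.
        public class StaThreadPool : IDisposable
        {
            private readonly BlockingCollection<Action> _work = new BlockingCollection<Action>();
            private readonly Thread[] _threads;

            public StaThreadPool(int size)
            {
                _threads = new Thread[size];
                for (int i = 0; i < size; i++)
                {
                    _threads[i] = new Thread(() =>
                    {
                        // Each worker drains the shared queue on a long-lived STA thread.
                        foreach (var action in _work.GetConsumingEnumerable())
                            action();
                    });
                    _threads[i].SetApartmentState(ApartmentState.STA); // before Start()
                    _threads[i].IsBackground = true;
                    _threads[i].Start();
                }
            }

            // Queue a delegate and block until one of the STA threads has run it.
            public T Run<T>(Func<T> func)
            {
                T result = default(T);
                Exception error = null;
                using (var done = new ManualResetEventSlim())
                {
                    _work.Add(() =>
                    {
                        try { result = func(); }
                        catch (Exception ex) { error = ex; }
                        finally { done.Set(); }
                    });
                    done.Wait();
                }
                if (error != null) throw error;
                return result;
            }

            public void Dispose() { _work.CompleteAdding(); }
        }

    An invoker could then call pool.Run(() => _innerInvoker.Invoke(instance, inputs, out staOutputs)) instead of spinning up a thread per request. The trade-off is that every COM object now lives on one of a few long-lived STA threads, which is exactly the behavior the ASPCOMPAT page handler gives you for free in ASP.NET.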
    Each of these solutions is a bit hacky, but they work. I've used all of them in production with good results with FoxPro components. I hope compiling all of these in one place makes STA consumption a little bit easier. I feel your pain :-)

    Resources

    • Download STA Handler Code Examples
    • Scott Seely's original STA WCF OperationBehavior article

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in FoxPro, ASP.NET, .NET, COM

    Read the article

  • Toorcon 14

    - by danx
    Toorcon 2012 Information Security Conference
    San Diego, CA, http://www.toorcon.org/
    Dan Anderson, October 2012

    It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California.

    The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed the talks below, which I attended and which interested me:

    • MalAndroid—the Crux of Android Infections, Aditya K. Sood
    • Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro
    • Privacy at the Handset: New FCC Rules?, Valkyrie
    • Hacking Measured Boot and UEFI, Dan Griffin
    • You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik
    • What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold
    • Accessibility and Security, Anna Shubina
    • Stop Patching, for Stronger PCI Compliance, Adam Brand
    • McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall

    MalAndroid—the Crux of Android Infections
    Aditya K. Sood, IOActive, Michigan State PhD candidate

    Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most of it has unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android static analysis also includes a static analysis app checker ("bouncer") and a vulnerability checker.

    What security problems does Android have?
    • User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid). All they want is functionality, extensibility, mobility.
    • Android had no "proper" encryption before Android 3.0.
    • No built-in protection against social engineering and web tricks.
    • Alternative Android app markets are unsafe. Simply visiting some markets can infect Android.

    Aditya classified Android malware types as:
    • Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app. Or Android Gold Dream (a game), which uploads user files in a stealthy manner to a remote location.
    • Type K—Kernel. Exploits underlying Linux libraries or the kernel.
    • Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors.

    What are the threats from Android malware? These include leaking info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes—for example, promoting specific leaders of the Tunisian or Iranian revolutions). Android malware is frequently "masqueraded", that is, repackaged inside a legit app along with the malware. To avoid detection, the hidden malware is not unwrapped until runtime.
    The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there are not many around. What they do is hijack the Android init framework—altering system programs and daemons—and then delete themselves. For example, the DKF Bootkit (China).

    Android app problems:
    • no code signing! (all apps are self-signed)
    • native code execution
    • permission sandbox—all or none
    • alternate market places
    • no robust Android malware detection at the network level
    • delayed patch process

    Programming Weird Machines with ELF Metadata
    Rebecca "bx" Shapiro, Dartmouth College, NH
    https://github.com/bx/elf-bf-tools
    @bxsays on twitter

    Definitions: "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap.

    Bx then talked about using ELF metadata as an (unintended) "weird machine". Some ELF background: a compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one used at link time and another at load time: .rela.dyn (link time) and .dynsym (dynamic table). The GOT (Global Offset Table) holds addresses for dynamically-linked functions; the PLT (Procedure Linkage Table) works with the GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory).

    For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted to execute viruses by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements:
    • instructions: add, move, jump if not 0 (jnz)
    • symbol table entries act as "registers"
    • a symbol table value is "contents"
    • immediate values are constants
    • direct values are addresses (e.g., 0xdeadbeef)
    • move instruction: a relocation table entry
    • add instruction: a relocation table "addend" entry
    • jnz instruction: takes multiple relocation table entries

    The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at "end" and uses IFUNC table entries (containing function pointer addresses). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions (pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print) and 3 registers.

    Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call that drops privilege, so ping remained root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example:

        $ whoami
        bx
        $ ping localhost -t backdoor.sh   # executes backdoor
        $ whoami
        root
        $

    The modified code increased from 285948 bytes to 290209 bytes.
    A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data.

    Privacy at the Handset: New FCC Rules?
    "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate)

    Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries!

    Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is aggregated and not individually identifiable, but as an aggregation workaround Verizon recommends "recollating" the data against other databases to identify customers indirectly.

    The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. The carriers say mobile phone privacy is an FTC responsibility (not FCC). The FTC is trying to improve mobile app privacy, but the FTC has no authority over carrier/customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result, iPhones may be more regulated.

    Who are the consumer advocates? Everyone knows the EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant here.

    What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need an incentive to grant users control for those who want it, by holding them liable and responsible for breaches on their clock. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (which would still be a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon; enforcement will be a problem, but consumers will still see some benefit.

    Hacking Measured Boot and UEFI
    Dan Griffin, JWSecure, Inc., Seattle, @JWSdan

    Dan talked about hacking measured UEFI boot. First some terms:
    • UEFI: a boot technology that is replacing BIOS (has whitelisting and blacklisting). UEFI protects devices against rootkits.
    • TPM: a hardware security device that stores hashes and hardware-protected keys.
    • "Secure boot": controls at the firmware level which boot images can boot.
    • "Measured boot": an OS feature that tracks hashes (from BIOS, boot loader, kernel, early drivers).
    • "Remote attestation": allows remote validation and control, based on policy, via a remote attestation server.

    Microsoft is pushing TPM (required for Windows 8), but Google is not. Intel TianoCore is the only open source UEFI implementation. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines.

    UEFI Weaknesses
    UEFI toolkits are evolving rapidly, but UEFI has weaknesses:
    • it assumes the user is an ally
    • it trusts the TPM implicitly, and the TPM is attached to the computer
    • the hibernate file is unprotected (disk encryption protects against this)
    • protection is migrating from hardware to firmware
    • delays in patching and whitelist updates
    • will UEFI really be adopted by the mainstream? (smartphone hardware support, bank support, apathetic consumer support)

    You Can't Buy Security: Building the Open Source InfoSec Program
    Boris Sverdlik, ISDPodcast.com co-host

    Boris talked about problems typical of current security audits. "IT Security" is an oxymoron—IT exists to enable business, uptime, utilization, reporting, but doesn't care about security—IT has a conflict of interest. There's no magic bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience and adaptability—soft skills.

    Step 1: First, Google and learn the company end-to-end before you start. Get to know the management team (not the IT team); meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g., SLE = AV × EF). Learn the different business units and the legal/regulatory obligations; learn the business and where the money is made; verify the company is protected from script kiddies (easy); learn what information is sensitive (IP, internal use only); and start with the low-hanging fruit (customer service reps and social engineering).

    Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, and physical security.

    Step 3: Implementation. Keep it simple, stupid. Open source, although useful, is not free (implementation cost).
    • Access controls, with authentication & authorization for local and remote access. MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc.
    • Application security. Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run apps of different risk levels on the same system. Assume the host/client is compromised and use app-level security controls.
    • Network security. A VLAN != segregation, because there are too many workarounds. Use explicit firewall rules, active and passive network monitoring (snort is free), disallow end user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth, and SSL does not mean "safe."
    • Operational controls. Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the logs (they hold everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free.
    • Security awareness. The reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture.
    Pen test by crowdsourcing—test with logging. See COSSP, http://www.cossp.org/ — the Comprehensive Open Source Security Project.

    What Journalists Want: The Investigative Reporters' Perspective on Hacking
    Dave Maas, San Diego CityBeat
    Jason Leopold, Truthout.org

    The difference between hackers and investigative journalists: for hackers, the motivation varies but the method is the same—technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills.

    J-School in 60 seconds. Generic formula: a person or issue of public interest, new info, or an angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence.

    Media awareness of hackers and trends: journalists are becoming extremely aware of hackers through congressional debates (privacy, data breaches), the demand for data mining, journalists' use of coding and web development, and journalists busted for hacking (Murdoch).

    Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act (CPRA) is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. You often need to sue (especially the FBI). CPRA is faster, and requests can be vague. There are also dumps and leaks (a la Wikileaks).

    Journalists want leads, protection for themselves and their sources, and tools adapted for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web logs). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.

    Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem
    Anna Shubina, Dartmouth College

    Anna talked about how accessibility and security are related. Accessibility of digital content (not real-world accessibility) mostly refers, for our purposes, to blind users and screen readers. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format—it's not a document exchange format—and it's more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check with parsing. Accessibility tools have no choice but to parse, because someone may want to read what you write.

    Google, for example, is very particular about which web browser you use and is bad at supporting other browsers. It uses JavaScript instead of links, often requiring mouseover to display content.

    PDF is a security nightmare: an executable format, with embedded Flash, JavaScript, etc.—15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all.

    The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic security says complicated data formats are hard to parse, and the problem cannot be solved in general, due to the Halting Problem.
    The W3C Web Accessibility Guidelines say: "Perceivable, Operable, Understandable, Robust". Not much help, except for "Robust", but here are some gems (paraphrasing):
    • all information should be parsable
    • if it's not parsable, it cannot be converted to alternate formats
    • maximize compatibility in new document formats

    Executable webpages are bad for security and accessibility. They say it's for a better web experience. But is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input.

    Solutions:
    • accessibility and security problems come from the same source
    • expose the "better user experience" myth
    • keep your corner of the Internet parsable
    • remember the Halting Problem—recognize false solutions (checking and verifying tools)

    Stop Patching, for Stronger PCI Compliance
    Adam Brand, Protiviti, @adamrbrand, http://www.picfun.com/

    Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's (an IT guy's) time—960 hours/year—was spent patching POSs in 850 restaurants. Often applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is the overuse of vulnerability scanners—it gives a warm fuzzy feeling and it's simple (red or green results—fix the reds). But scanners give a false sense of security. In reality, breaches from missing patches are uncommon—the more common problems are default passwords, cleartext authentication, and misconfiguration (open firewall ports).

    Patching myths:
    • Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead).
    • Myth 2: the vendor decides what's critical (also PCI §6.1; but §6.2 requires user ranking of vulnerabilities instead).
    • Myth 3: scan and rescan until it passes (but PCI §11.2.1b says this applies only to high-risk vulnerabilities).

    Adam says good recommendations come from NIST 800-40. Use sane patching instead, and focus on what's really important. From NIST 800-40:
    • Proactive: use a proactive vulnerability management process—change control, configuration management, file integrity monitoring.
    • Monitor: start with the NVD and other vulnerability alerts, not scanner results.
    • Evaluate: public-facing system? workstation? internal server? (risk rank)
    • Decide: on action and timeline.
    • Test: pre-test patches (stability, functionality, rollback) for change control.
    • Install: notify, change control, tickets.

    McAfee Secure & Trustmarks — a Hacker's Best Friend
    Jay James & Shane MacDougall, Tactical Intelligence Inc., Canada

    "McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes McAfee's remote scanning. The problem is that removal of a trustmark acts as a flag that you're vulnerable. It's easy to see the status change by viewing the McAfee list on their website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1 pixel image, the site is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in trustmark status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag announcing when you're vulnerable.
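    Their scripts were Perl; purely to illustrate the signal they watched for, here is a small C# sketch of the same check. This is my own illustration, not their code, and the seal URL is made up (real badge URLs differ per vendor and per site):

        using System;
        using System.Drawing;   // reference System.Drawing.dll
        using System.IO;
        using System.Net;

        class SealWatcher
        {
            static void Main()
            {
                // Hypothetical seal URL for some monitored storefront.
                const string sealUrl = "https://seals.example.com/siteseal/somestore.gif";

                using (var web = new WebClient())
                {
                    byte[] bytes = web.DownloadData(sealUrl);
                    using (var img = Image.FromStream(new MemoryStream(bytes)))
                    {
                        // A 1x1 image in place of the badge suggests certification
                        // lapsed -- exactly the delta an attacker would diff for daily.
                        bool certified = img.Width > 1 && img.Height > 1;
                        Console.WriteLine(certified ? "badge present" : "badge pulled: site flagged");
                    }
                }
            }
        }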

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo

    Covered below:
    • Introduction to Toorcon 15 (2013)
    • A Tale of One Software Bypass of MS Windows 8 Secure Boot
    • Breaching SSL, One Byte at a Time
    • Running at 99%: Surviving an Application DoS
    • Security Response in the Age of Mass Customized Attacks
    • x86 Rewriting: Defeating RoP and other Shinanighans
    • Clowntown Express: interesting bugs and running a bug bounty program
    • Active Fingerprinting of Encrypted VPNs
    • Making Attacks Go Backwards
    • Mask Your Checksums—The Gorry Details
    • Adventures with weird machines thirty years after "Reflections on Trusting Trust"

    Introduction to Toorcon 15 (2013)

    Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here, starting in 2003. As always, I've only summarized the talks that I attended and that interested me enough to write about. Be aware that I may have misrepresented the speakers' remarks, and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.

    A Tale of One Software Bypass of MS Windows 8 Secure Boot
    Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak

    Yuri and Alex talked about UEFI, bootkits, and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference.

    MS Windows 8 Secure Boot overview: UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS, and is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi); once replaced, it can modify the kernel. Replacing the bootloader is trivial, and today there are many legacy bootkits—UEFI replaces most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as is used for phones), but you can't do that for PCs—PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s); the OS loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (the Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx)—X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys; they can't be accessed once the OS starts, which uses Run-Time (RT) services instead. The root cert uses RSA-2048 public keys and PKCS#7 format signatures. The relevant variables are: SecureBoot (enable/disable image signature checks), SetupMode (update keys, self-signed keys, and secure boot variables), and CustomMode (allows updating keys). Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation.
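    As a purely illustrative model of the db/dbx decision described above (my own sketch, not vendor firmware code, and ignoring the certificate-based half of the verification by checking hashes only), the allow/deny logic amounts to something like this:

        using System;
        using System.Linq;
        using System.Security.Cryptography;

        // Toy model of the Secure Boot db/dbx check: hash the image,
        // deny if the hash is in dbx, allow only if it is in db.
        class SecureBootCheckDemo
        {
            static readonly string[] db  = { /* hashes of authorized images */ };
            static readonly string[] dbx = { /* hashes of forbidden images */ };

            static bool MayExecute(byte[] image)
            {
                string hash = BitConverter.ToString(SHA256.Create().ComputeHash(image));
                if (dbx.Contains(hash)) return false;  // the forbidden database wins
                return db.Contains(hash);              // must be explicitly authorized
            }

            static void Main()
            {
                Console.WriteLine(MayExecute(new byte[] { 0xDE, 0xAD })); // False: not in db
            }
        }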
    Attacking MS Windows 8 Secure Boot: Secure Boot does NOT protect from physical access; it can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly. To get it right, vendors must: allow only signed UEFI firmware updates, protect UEFI firmware from direct modification in flash memory, protect the firmware update components, program the SPI controller securely, protect secure boot policy settings in NVRAM, protect the runtime API, and disable the compatibility support module (which allows unsigned legacy boot).

    One attack is to corrupt the Platform Key (PK), the EFI root certificate variable, in SPI flash. If the PK is not found, the firmware enters setup mode with secure boot turned off. A TPM can be exploited in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, but they found a bug (undisclosed) that they can exploit from user mode, and they demoed the exploit: it loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable; they will disclose this exploit to vendors in the future. Recommendations: allow only signed updates, protect UEFI firmware in ROM, and protect the EFI variable store in ROM.

    Breaching SSL, One Byte at a Time
    Angelo Prado and Yoel Gluck, Salesforce.com

    CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME sends requests with every possible character and measures the ciphertext length, looking for the plaintext that compresses the most, and recovers the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy, with Huffman coding replacing common byte sequences with shorter codes. US-CERT thought the SSL compression problem was fixed, but it wasn't; the speakers convinced CERT that it wasn't fixed, and a CVE was issued.

    BREACH (breachattack.com) exploits the HTTP response body instead (the Accept-Encoding request header and the Content-Encoding response header): it takes advantage of the fact that the response body is still compressed even where TLS-level compression has been disabled. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say, from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret. Compression can be used to guess the contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses by 10 bytes, while "Supersecret Supersecret" (a correct guess) compresses by 11 bytes, so each character can be found by trying every character and watching the length (a small sketch of this signal follows at the end of this summary). To start the guessing, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information.

    Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess 2 or 3 characters at a time instead of 1); expensive rollback to the last known conflict; checking the compression ratio; and brute-forcing the first 3 "bootstrap" characters if needed (expensive). Block ciphers hide the exact plaintext length; the workaround is to align the response in advance to the block size.

    Mitigations:
    • length: use variable padding
    • secrets: dynamic CSRF tokens per request
    • secrets: change them over time
    • separate secrets into input-less servlets

    Future work: either a better understanding of DEFLATE/GZIP, or HTTPS extensions.
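    The compression-length signal BREACH relies on is easy to reproduce with any gzip implementation. Here is a toy C# sketch (my own, not the speakers' tooling; the page content and token are invented):

        using System;
        using System.IO;
        using System.IO.Compression;
        using System.Text;

        class CompressionOracleDemo
        {
            // gzip-compressed size of a hypothetical response body
            static int CompressedLength(string body)
            {
                var buffer = new MemoryStream();
                using (var gz = new GZipStream(buffer, CompressionMode.Compress))
                {
                    byte[] data = Encoding.ASCII.GetBytes(body);
                    gz.Write(data, 0, data.Length);
                }
                return buffer.ToArray().Length;
            }

            static void Main()
            {
                // A page that reflects attacker-controlled input alongside a secret.
                const string secretPart = "token=Supersecret;";

                foreach (char guess in "rstu")
                {
                    // The attacker reflects a guess sharing a prefix with the secret.
                    // The correct guess ('t') extends the LZ77 match by one character,
                    // so the compressed body comes out slightly shorter.
                    string body = secretPart + " echo=token=Supersecre" + guess;
                    Console.WriteLine("guess '{0}': {1} bytes", guess, CompressedLength(body));
                }
                // In practice the deltas can tie or invert; the talk's "lookahead"
                // and padding roadblocks exist precisely because of that noise.
            }
        }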
    Running at 99%: Surviving an Application DoS
    Ryan Huber, Risk I/O

    Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or to download large files. Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use an Apache alternative such as nginx.

    How to identify malicious hosts:
    • short, sudden web requests
    • an obvious user-agent (curl, python)
    • the same URL requested repeatedly
    • no referer header (not normal)
    • hidden links: hide a link and see if a bot follows it
    • restricted access if the geo IP is not yours (unless the website is global)
    • missing common headers in the request
    • regular timing
    • IPs first seen at the beginning of the attack
    • request counts per host (usually a very large number)

    (A toy sketch of that last heuristic follows at the end of this summary.) Use of a captcha can mitigate attacks, but you'll lose a lot of genuine users.

    Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer

    Bouncer is software written by Ryan, built around netflow data. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it's not nice about it—no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers to get clean data for use by Bouncer. Bouncer is also useful for popularity storms ("Slashdotting") and scraper storms. Future features: gzipped collection data, documentation, a consumer library, multitasking, and logging of destroyed connections.

    Takeaways: DoS mitigation is easier with a complete picture. Bouncer is designed to make it easier to detect and defend against DoS—not a complete cure.
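    As a toy illustration of the per-host request-count heuristic listed above, here is a minimal sliding-window counter in C#. This is my own sketch, not Bouncer code, and the window and threshold values are invented:

        using System;
        using System.Collections.Generic;

        // Count requests per source IP over a sliding window and flag
        // hosts that exceed a threshold.
        class RequestRateFlagger
        {
            private readonly Dictionary<string, Queue<DateTime>> _hits =
                new Dictionary<string, Queue<DateTime>>();
            private readonly TimeSpan _window = TimeSpan.FromSeconds(10);
            private const int Threshold = 100;

            public bool IsSuspicious(string ip, DateTime now)
            {
                Queue<DateTime> q;
                if (!_hits.TryGetValue(ip, out q))
                    _hits[ip] = q = new Queue<DateTime>();
                q.Enqueue(now);
                while (q.Count > 0 && now - q.Peek() > _window)
                    q.Dequeue();                // drop hits older than the window
                return q.Count > Threshold;     // e.g., >100 requests in 10 s
            }
        }

    In practice you would combine this signal with the others in the list (user-agent, missing referer, timing regularity) before blacklisting a host, which is the "complete picture" point of the talk.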
Security Response in the Age of Mass Customized Attacks
Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/

Peleus and Karthik talked about responding to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School: differentiating a product for an individual customer, but at a mass-production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world; or think of designing your own PC from commodity parts.

Exploit kits are another example of mass customization: the kits support multiple browsers and plugins and allow new modules, they are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012) and found 10,000 distinct binaries among them, derived from only a dozen or so exploit kits.

Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now, so the drive with malware is towards mass-customized exploits, to avoid detection.

There's plenty of evidence that exploit development has project-manager bureaucracy. From the malware, they infer edicts to:
- support all versions of Reader
- support all versions of Windows
- support all versions of Flash
- support all browsers
- write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example)

Exploits have "loose coupling" across multiple versions of software (Adobe's), the OS, and the browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browser combinations and obscure versions. They gave examples of exploits that chained 2, 3, 6, or 14 separate bugs. However, these complete exploits are more likely to be buggy or fragile in themselves, and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

x86 Rewriting: Defeating RoP and other Shinanighans
Richard Wartell

The attack vector addressed here: first, some malware causes a buffer overflow. The attacker has no access to the program itself, only to its input, and the overflow writes code onto the stack. Later the stack became non-executable, so the workaround malware used was to write a bogus return address to the stack, jumping into the malware. Later came ASLR (Address Space Layout Randomization) to randomize the memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways.

"RoP" is Return-oriented Programming: RoP attacks use the program's own code, writing return addresses on the stack that point at (existing) exploitable code fragments found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is anti-RoP compilers that compile source code with no return instructions. ASLR only shifts where code loads, not the code's internal layout, so the "gadgets" survive at predictable offsets. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine.

Richard's goal was to randomize a binary with no source code access. He created STIR (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code; the disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory: STIR writes new code sections with copies of the basic blocks in randomized locations, the old code is rewritten with jumps to the new code, and the original code sections in the file are marked non-executable.

STIR has better entropy than ASLR in the location of code, which makes brute-force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of the gadgets (i.e., moved their addresses). Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works. Current work is rewriting code to enforce security policies, for example: don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.

Clowntown Express: interesting bugs and running a bug bounty program
Collin Greene, Facebook

Collin talked about Facebook's bug bounty program. Background: FB has good security frameworks, such as security teams, external audits, and cc'ing security on diffs, but there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). FB uses security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring in people with different perspectives, and are paid only for success; a bug bounty is a better use of a fixed amount of time and money than code review or static code analysis alone. The bounty program started in July 2011 and has paid out $1.5 million to date.
14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else); the top-paid submitters earn six figures a year. Spammers like to backstab competitors. The youngest submitter was 13, and some submitters have been hired. Bug bounties also surface bugs that were missed by tools or reviews, allowing the process to improve. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

Active Fingerprinting of Encrypted VPNs
Anna Shubina, Dartmouth Institute for Security, Technology, and Society

(I missed the start of her talk because another track went overtime, but I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints: using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as which router is used). Delayed packets are not informative about a network, especially if far away from the network. More exploration is needed of how TCP behaves in real life with respect to timing.

Making Attacks Go Backwards
FuzzyNop, Mandiant

This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. So who is making these malware threats? It's not a single person or group; the authors have diverse skill levels, and there are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first:
- malware "hiding" in the temp, recycle, or root directories
- creation of unnamed scheduled tasks
- obvious names of files and syscalls ("ClearEventLog")
- uncleared event logs; clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system

Reverse engineering is hard: disassembler use takes practice and skill, and even a popular tool like IDA Pro takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks; they are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" or "[RightShift]"), as do timestamp printf strings; look for these in files (a toy string-scanning sketch follows). Presence is not proof they are used, and absence is not proof they are not. Java exploits: one can parse a jar file with idxparser.py and decompile the Java; Java is typically used to target tech companies. Backdoors are the main (externally provided) persistence mechanism for malware, and malware typically also needs command and control.
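In the spirit of the key-logger tip above, here is a toy Java triage helper that scans a suspect file (passed as the first argument) for printable key-name markers. The class name and marker list are invented for the example, and, as the talk stressed, presence is not proof:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class KeyLoggerStrings {
    /* Tell-tale literals a custom key logger often embeds. */
    private static final String[] MARKERS = { "[Ctrl]", "[RightShift]", "%02d:%02d:%02d" };

    public static void main(String[] args) throws IOException {
        byte[] raw = Files.readAllBytes(Path.of(args[0]));
        /* ISO-8859-1 maps every byte to a char, so binary data is safe to scan. */
        String blob = new String(raw, StandardCharsets.ISO_8859_1);
        for (String marker : MARKERS) {
            if (blob.contains(marker)) {
                System.out.println("found marker: " + marker);
            }
        }
    }
}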
Application of Artificial Intelligence in Ad-Hoc Static Code Analysis
John Ashaman, Security Innovation

Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but that fails to find anything even mildly complex. So John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, making a call graph is really hard; for example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST), with nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph, with the goal of finding a path through the AST (a maze) from source to sink. The algorithm looks at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for the search order. The tool, called Scat (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP; later, he plans to add more PHP, then JSP and Java. For more information, see his posts on the Security Innovation blog and NRefactory on GitHub.

Mask Your Checksums—The Gorry Details
Eric (XlogicX) Davisson

Sometimes when emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data), and other parts of the packet may leak more data about the IP. The TCP and IP checksums both cover the same data, so one can get more bits of information out of using both checksums than just one. Also, one can usually determine the OS from the TTL field and the ports in a packet header. If there are hundreds of possible results (16 for each unknown masked nibble), one can do other things to narrow them down, such as looking at the packet contents for domain or geo information. With hundreds of results, one can import them as CSV into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached: he was able to find the exact IP address, given the geo and university of the sender. The point is that if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, so as not to create a false impression of security. (A small sketch of the checksum involved follows.)
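Since the trick rests on recomputing the checksum for each candidate, here is a small Java sketch of the standard one's-complement sum (RFC 1071) that the IPv4 header checksum uses. The header words below are a made-up example, with word 6 (the checksum field) zeroed for computation:

public class Ipv4Checksum {
    /* One's-complement sum of 16-bit words, folded and complemented. */
    static int checksum(int[] words) {
        long sum = 0;
        for (int w : words) {
            sum += w & 0xFFFF;
            sum = (sum & 0xFFFF) + (sum >>> 16);  /* end-around carry */
        }
        return (int) (~sum & 0xFFFF);
    }

    public static void main(String[] args) {
        /* A 20-byte header as ten 16-bit words; words 7-8 hold the source
           IP 10.0.0.1 and words 9-10 the destination IP 192.168.0.1. */
        int[] header = { 0x4500, 0x003C, 0x1C46, 0x4000, 0x4006,
                         0x0000, 0x0A00, 0x0001, 0xC0A8, 0x0001 };
        System.out.printf("checksum = 0x%04X%n", checksum(header));
        /* Masking the source IP but publishing this value leaves a
           constraint that dramatically narrows the candidate IPs. */
    }
}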
Adventures with weird machines thirty years after "Reflections on Trusting Trust"
Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)

"Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs", bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author: can you trust the code? Hell no! There are too many factors; it's Babylonian in nature. Why not?
- Input is not well defined/recognized: the code's assumptions about "checked" input will be violated (a bug/vulnerability). For example, HTML is recursive, but regex checking is not.
- Input can be well formed but so complex there's no telling what it does. For example, ELF file parsing is complex and has multiple ways of parsing.
- Input is seen differently by different pieces of the program or toolchain.

Any input is a program: input "executes" on the input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwn-age.

Case study: the ELF ABI (the UNIX/Linux executable file format). Problems can arise from these steps (without planting bugs): compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), exceptions. The problem is that you can't really automatically analyze code (it's the "halting problem", undecidable). The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading; there must be tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries plus dynamic symbols form a Turing-complete machine: @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Likewise, DWARF exception-handling data (.eh_frame) plus glibc make a Turing machine, and the x86 MMU (IDT, GDT, TSS) can be one too: address translation was used to create a Turing machine in which the page handler reads and writes memory on page faults and the page table serves as the machine's byte code. There is an example on GitHub using this Turing machine to fly a glider across the screen.

Next Sergey talked about "parser differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF: there are several parsers in the OS toolchain, all different. A file can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs, and the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC. (A toy parser-differential sketch follows these notes.)

Conclusions:
- trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it
- complex data formats mean bugs
- there is no "chain of trust" in Babylon (that is, with parser differentials)
- we need to squeeze complexity out of data until data stops being "code equivalent"

Further information: see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.
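To show what a parser differential looks like in miniature, here is a toy Java example invented for these notes (it is not the ELF PHDR case from the talk). Two parsers accept the same bytes but can be split by a crafted input:

public class ParserDifferential {
    /* Parser A: trusts the length prefix in byte 0. */
    static String parseA(byte[] b) {
        return new String(b, 1, b[0]);
    }

    /* Parser B: ignores the prefix and reads up to the first zero byte. */
    static String parseB(byte[] b) {
        int end = 1;
        while (end < b.length && b[end] != 0) {
            end++;
        }
        return new String(b, 1, end - 1);
    }

    public static void main(String[] args) {
        byte[] benign = { 5, 'a', 'd', 'm', 'i', 'n', 0 };
        System.out.println(parseA(benign) + " / " + parseB(benign));   // admin / admin

        /* A lying length prefix makes the parsers disagree: whatever
           A validates is not what B acts on. */
        byte[] crafted = { 3, 'a', 'd', 'm', 'i', 'n', 0 };
        System.out.println(parseA(crafted) + " / " + parseB(crafted)); // adm / admin
    }
}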

    Read the article

  • Errors when installing Open Office

    - by user109036
    I followed the first set of instructions on this page to install Open Office: How to install Open Office? However, at the last step, which says to change the permissions (chmod) of a folder, I got an error saying that the directory does not exist. Open Office now appears in my Ubuntu start menu, but clicking on it does nothing. I tried a reboot. Below is what I could copy from my terminal. I am running the latest Ubuntu. I have not uninstalled LibreOffice as suggested somewhere; the reason is that in the Ubuntu Software Centre, LibreOffice appears to be made up of several components (Draw, Math, Writer, Calc) and I don't know which ones to remove (or all, maybe?).
    After this operation, 480 MB of additional disk space will be used. Do you want to continue [Y/n]? y Get:1 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre-lib all 6b24-1.11.5-0ubuntu1~12.10.1 [6,135 kB] Get:2 http://ppa.launchpad.net/upubuntu-com/office/ubuntu/ quantal/main openoffice amd64 3.4~oneiric [321 MB] Get:3 http://gb.archive.ubuntu.com/ubuntu/ quantal/main ca-certificates-java all 20120721 [13.2 kB] Get:4 http://gb.archive.ubuntu.com/ubuntu/ quantal/main tzdata-java all 2012e-0ubuntu2 [140 kB] Get:5 http://gb.archive.ubuntu.com/ubuntu/ quantal/main java-common all 0.43ubuntu3 [61.7 kB] Get:6 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre-headless amd64 6b24-1.11.5-0ubuntu1~12.10.1 [25.4 MB] Get:7 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libgif4 amd64 4.1.6-9.1ubuntu1 [31.3 kB] Get:8 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre amd64 6b24-1.11.5-0ubuntu1~12.10.1 [234 kB] Get:9 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libatk-wrapper-java all 0.30.4-0ubuntu4 [29.8 kB] Get:10 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libatk-wrapper-java-jni amd64 0.30.4-0ubuntu4 [31.1 kB] Get:11 http://gb.archive.ubuntu.com/ubuntu/ quantal/main xorg-sgml-doctools all 1:1.10-1 [12.0 kB] Get:12 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-core-dev all 7.0.23-1 [744 kB] Get:13 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libice-dev amd64 2:1.0.8-2 [57.6 kB] Get:14 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libpthread-stubs0 amd64 0.3-3 [3,258 B] Get:15 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libpthread-stubs0-dev amd64 0.3-3 [2,866 B] Get:16 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libsm-dev amd64 2:1.2.1-2 [19.9 kB] Get:17 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxau-dev amd64 1:1.0.7-1 [10.2 kB] Get:18 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxdmcp-dev amd64 1:1.1.1-1 [26.9 kB] Get:19 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-input-dev all 2.2-1 [133 kB] Get:20 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-kb-dev all 1.0.6-2 [269 kB] Get:21 http://gb.archive.ubuntu.com/ubuntu/ quantal/main xtrans-dev all 1.2.7-1 [84.3 kB] Get:22 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxcb1-dev amd64 1.8.1-1ubuntu1 [82.6 kB] Get:23 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-dev amd64 2:1.5.0-1 [912 kB] Get:24 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-doc all 2:1.5.0-1 [2,460 kB] Get:25 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxt-dev amd64 1:1.1.3-1 [492 kB] Get:26 http://gb.archive.ubuntu.com/ubuntu/ quantal/main ttf-dejavu-extra all 2.33-2ubuntu1 [3,420 kB] Get:27 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe icedtea-6-jre-cacao amd64 6b24-1.11.5-0ubuntu1~12.10.1 [417 kB] 
Get:28 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe icedtea-6-jre-jamvm amd64 6b24-1.11.5-0ubuntu1~12.10.1 [581 kB] Get:29 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/main icedtea-netx-common all 1.3-1ubuntu1.1 [617 kB] Get:30 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/main icedtea-netx amd64 1.3-1ubuntu1.1 [16.2 kB] Get:31 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jdk amd64 6b24-1.11.5-0ubuntu1~12.10.1 [11.1 MB] Fetched 374 MB in 9min 18s (671 kB/s) Extract templates from packages: 100% Selecting previously unselected package openjdk-6-jre-lib. (Reading database ... 143191 files and directories currently installed.) Unpacking openjdk-6-jre-lib (from .../openjdk-6-jre-lib_6b24-1.11.5-0ubuntu1~12.10.1_all.deb) ... Selecting previously unselected package ca-certificates-java. Unpacking ca-certificates-java (from .../ca-certificates-java_20120721_all.deb) ... Selecting previously unselected package tzdata-java. Unpacking tzdata-java (from .../tzdata-java_2012e-0ubuntu2_all.deb) ... Selecting previously unselected package java-common. Unpacking java-common (from .../java-common_0.43ubuntu3_all.deb) ... Selecting previously unselected package openjdk-6-jre-headless:amd64. Unpacking openjdk-6-jre-headless:amd64 (from .../openjdk-6-jre-headless_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package libgif4:amd64. Unpacking libgif4:amd64 (from .../libgif4_4.1.6-9.1ubuntu1_amd64.deb) ... Selecting previously unselected package openjdk-6-jre:amd64. Unpacking openjdk-6-jre:amd64 (from .../openjdk-6-jre_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package libatk-wrapper-java. Unpacking libatk-wrapper-java (from .../libatk-wrapper-java_0.30.4-0ubuntu4_all.deb) ... Selecting previously unselected package libatk-wrapper-java-jni:amd64. Unpacking libatk-wrapper-java-jni:amd64 (from .../libatk-wrapper-java-jni_0.30.4-0ubuntu4_amd64.deb) ... Selecting previously unselected package xorg-sgml-doctools. Unpacking xorg-sgml-doctools (from .../xorg-sgml-doctools_1%3a1.10-1_all.deb) ... Selecting previously unselected package x11proto-core-dev. Unpacking x11proto-core-dev (from .../x11proto-core-dev_7.0.23-1_all.deb) ... Selecting previously unselected package libice-dev:amd64. Unpacking libice-dev:amd64 (from .../libice-dev_2%3a1.0.8-2_amd64.deb) ... Selecting previously unselected package libpthread-stubs0:amd64. Unpacking libpthread-stubs0:amd64 (from .../libpthread-stubs0_0.3-3_amd64.deb) ... Selecting previously unselected package libpthread-stubs0-dev:amd64. Unpacking libpthread-stubs0-dev:amd64 (from .../libpthread-stubs0-dev_0.3-3_amd64.deb) ... Selecting previously unselected package libsm-dev:amd64. Unpacking libsm-dev:amd64 (from .../libsm-dev_2%3a1.2.1-2_amd64.deb) ... Selecting previously unselected package libxau-dev:amd64. Unpacking libxau-dev:amd64 (from .../libxau-dev_1%3a1.0.7-1_amd64.deb) ... Selecting previously unselected package libxdmcp-dev:amd64. Unpacking libxdmcp-dev:amd64 (from .../libxdmcp-dev_1%3a1.1.1-1_amd64.deb) ... Selecting previously unselected package x11proto-input-dev. Unpacking x11proto-input-dev (from .../x11proto-input-dev_2.2-1_all.deb) ... Selecting previously unselected package x11proto-kb-dev. Unpacking x11proto-kb-dev (from .../x11proto-kb-dev_1.0.6-2_all.deb) ... Selecting previously unselected package xtrans-dev. Unpacking xtrans-dev (from .../xtrans-dev_1.2.7-1_all.deb) ... Selecting previously unselected package libxcb1-dev:amd64. 
Unpacking libxcb1-dev:amd64 (from .../libxcb1-dev_1.8.1-1ubuntu1_amd64.deb) ... Selecting previously unselected package libx11-dev:amd64. Unpacking libx11-dev:amd64 (from .../libx11-dev_2%3a1.5.0-1_amd64.deb) ... Selecting previously unselected package libx11-doc. Unpacking libx11-doc (from .../libx11-doc_2%3a1.5.0-1_all.deb) ... Selecting previously unselected package libxt-dev:amd64. Unpacking libxt-dev:amd64 (from .../libxt-dev_1%3a1.1.3-1_amd64.deb) ... Selecting previously unselected package ttf-dejavu-extra. Unpacking ttf-dejavu-extra (from .../ttf-dejavu-extra_2.33-2ubuntu1_all.deb) ... Selecting previously unselected package icedtea-6-jre-cacao:amd64. Unpacking icedtea-6-jre-cacao:amd64 (from .../icedtea-6-jre-cacao_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package icedtea-6-jre-jamvm:amd64. Unpacking icedtea-6-jre-jamvm:amd64 (from .../icedtea-6-jre-jamvm_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package icedtea-netx-common. Unpacking icedtea-netx-common (from .../icedtea-netx-common_1.3-1ubuntu1.1_all.deb) ... Selecting previously unselected package icedtea-netx:amd64. Unpacking icedtea-netx:amd64 (from .../icedtea-netx_1.3-1ubuntu1.1_amd64.deb) ... Selecting previously unselected package openjdk-6-jdk:amd64. Unpacking openjdk-6-jdk:amd64 (from .../openjdk-6-jdk_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package openoffice. Unpacking openoffice (from .../openoffice_3.4~oneiric_amd64.deb) ... Processing triggers for doc-base ... Processing 2 added doc-base files... Processing triggers for man-db ... Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... Processing triggers for hicolor-icon-theme ... Processing triggers for fontconfig ... Processing triggers for gnome-icon-theme ... Processing triggers for shared-mime-info ... Setting up tzdata-java (2012e-0ubuntu2) ... Setting up java-common (0.43ubuntu3) ... Setting up libgif4:amd64 (4.1.6-9.1ubuntu1) ... Setting up xorg-sgml-doctools (1:1.10-1) ... Setting up x11proto-core-dev (7.0.23-1) ... Setting up libice-dev:amd64 (2:1.0.8-2) ... Setting up libpthread-stubs0:amd64 (0.3-3) ... Setting up libpthread-stubs0-dev:amd64 (0.3-3) ... Setting up libsm-dev:amd64 (2:1.2.1-2) ... Setting up libxau-dev:amd64 (1:1.0.7-1) ... Setting up libxdmcp-dev:amd64 (1:1.1.1-1) ... Setting up x11proto-input-dev (2.2-1) ... Setting up x11proto-kb-dev (1.0.6-2) ... Setting up xtrans-dev (1.2.7-1) ... Setting up libxcb1-dev:amd64 (1.8.1-1ubuntu1) ... Setting up libx11-dev:amd64 (2:1.5.0-1) ... Setting up libx11-doc (2:1.5.0-1) ... Setting up libxt-dev:amd64 (1:1.1.3-1) ... Setting up ttf-dejavu-extra (2.33-2ubuntu1) ... Setting up icedtea-netx-common (1.3-1ubuntu1.1) ... Setting up openjdk-6-jre-lib (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up openjdk-6-jre-headless:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... 
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java to provide /usr/bin/java (java) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/keytool to provide /usr/bin/keytool (keytool) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/pack200 to provide /usr/bin/pack200 (pack200) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/rmid to provide /usr/bin/rmid (rmid) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/rmiregistry to provide /usr/bin/rmiregistry (rmiregistry) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/unpack200 to provide /usr/bin/unpack200 (unpack200) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/orbd to provide /usr/bin/orbd (orbd) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/servertool to provide /usr/bin/servertool (servertool) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/tnameserv to provide /usr/bin/tnameserv (tnameserv) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/lib/jexec to provide /usr/bin/jexec (jexec) in auto mode Setting up ca-certificates-java (20120721) ... Adding debian:Deutsche_Telekom_Root_CA_2.pem Adding debian:Comodo_Trusted_Services_root.pem Adding debian:Certum_Trusted_Network_CA.pem Adding debian:thawte_Primary_Root_CA_-_G2.pem Adding debian:UTN_USERFirst_Hardware_Root_CA.pem Adding debian:AddTrust_Low-Value_Services_Root.pem Adding debian:Microsec_e-Szigno_Root_CA.pem Adding debian:SwissSign_Silver_CA_-_G2.pem Adding debian:ComSign_Secured_CA.pem Adding debian:Buypass_Class_2_CA_1.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Certum_Root_CA.pem Adding debian:AddTrust_External_Root.pem Adding debian:Chambers_of_Commerce_Root_-_2008.pem Adding debian:Starfield_Root_Certificate_Authority_-_G2.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority_-_G2.pem Adding debian:Visa_eCommerce_Root.pem Adding debian:Digital_Signature_Trust_Co._Global_CA_3.pem Adding debian:AC_Raíz_Certicámara_S.A..pem Adding debian:NetLock_Arany_=Class_Gold=_Fotanúsítvány.pem Adding debian:Taiwan_GRCA.pem Adding debian:Camerfirma_Chambers_of_Commerce_Root.pem Adding debian:Juur-SK.pem Adding debian:Entrust.net_Premium_2048_Secure_Server_CA.pem Adding debian:XRamp_Global_CA_Root.pem Adding debian:Security_Communication_RootCA2.pem Adding debian:AddTrust_Qualified_Certificates_Root.pem Adding debian:NetLock_Qualified_=Class_QA=_Root.pem Adding debian:TC_TrustCenter_Class_2_CA_II.pem Adding debian:DST_ACES_CA_X6.pem Adding debian:thawte_Primary_Root_CA.pem Adding debian:thawte_Primary_Root_CA_-_G3.pem Adding debian:GeoTrust_Universal_CA_2.pem Adding debian:ACEDICOM_Root.pem Adding debian:Security_Communication_EV_RootCA1.pem Adding debian:America_Online_Root_Certification_Authority_2.pem Adding debian:TC_TrustCenter_Universal_CA_I.pem Adding debian:SwissSign_Platinum_CA_-_G2.pem Adding debian:Global_Chambersign_Root_-_2008.pem Adding debian:SecureSign_RootCA11.pem Adding debian:GeoTrust_Global_CA_2.pem Adding debian:Buypass_Class_3_CA_1.pem Adding debian:Baltimore_CyberTrust_Root.pem Adding debian:UbuntuOne-Go_Daddy_Class_2_CA.pem Adding debian:Equifax_Secure_eBusiness_CA_1.pem Adding debian:SwissSign_Gold_CA_-_G2.pem Adding debian:AffirmTrust_Premium_ECC.pem Adding 
debian:TC_TrustCenter_Universal_CA_III.pem Adding debian:ca.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority_-_G2.pem Adding debian:NetLock_Express_=Class_C=_Root.pem Adding debian:VeriSign_Class_3_Public_Primary_Certification_Authority_-_G5.pem Adding debian:Firmaprofesional_Root_CA.pem Adding debian:Comodo_Secure_Services_root.pem Adding debian:cacert.org.pem Adding debian:GeoTrust_Primary_Certification_Authority.pem Adding debian:RSA_Security_2048_v3.pem Adding debian:Staat_der_Nederlanden_Root_CA.pem Adding debian:Cybertrust_Global_Root.pem Adding debian:DigiCert_High_Assurance_EV_Root_CA.pem Adding debian:TDC_OCES_Root_CA.pem Adding debian:A-Trust-nQual-03.pem Adding debian:Equifax_Secure_CA.pem Adding debian:Digital_Signature_Trust_Co._Global_CA_1.pem Adding debian:GeoTrust_Global_CA.pem Adding debian:Starfield_Class_2_CA.pem Adding debian:ApplicationCA_-_Japanese_Government.pem Adding debian:Swisscom_Root_CA_1.pem Adding debian:Verisign_Class_2_Public_Primary_Certification_Authority_-_G2.pem Adding debian:Camerfirma_Global_Chambersign_Root.pem Adding debian:QuoVadis_Root_CA_3.pem Adding debian:QuoVadis_Root_CA.pem Adding debian:Comodo_AAA_Services_root.pem Adding debian:ComSign_CA.pem Adding debian:AddTrust_Public_Services_Root.pem Adding debian:DigiCert_Assured_ID_Root_CA.pem Adding debian:UTN_DATACorp_SGC_Root_CA.pem Adding debian:CA_Disig.pem Adding debian:E-Guven_Kok_Elektronik_Sertifika_Hizmet_Saglayicisi.pem Adding debian:GlobalSign_Root_CA_-_R3.pem Adding debian:QuoVadis_Root_CA_2.pem Adding debian:Entrust_Root_Certification_Authority.pem Adding debian:GTE_CyberTrust_Global_Root.pem Adding debian:ValiCert_Class_1_VA.pem Adding debian:Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem Adding debian:GeoTrust_Primary_Certification_Authority_-_G2.pem Adding debian:spi-ca-2003.pem Adding debian:America_Online_Root_Certification_Authority_1.pem Adding debian:AffirmTrust_Premium.pem Adding debian:Sonera_Class_1_Root_CA.pem Adding debian:Verisign_Class_2_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Certplus_Class_2_Primary_CA.pem Adding debian:TURKTRUST_Certificate_Services_Provider_Root_2.pem Adding debian:Network_Solutions_Certificate_Authority.pem Adding debian:Go_Daddy_Class_2_CA.pem Adding debian:StartCom_Certification_Authority.pem Adding debian:Hongkong_Post_Root_CA_1.pem Adding debian:Hellenic_Academic_and_Research_Institutions_RootCA_2011.pem Adding debian:Thawte_Premium_Server_CA.pem Adding debian:EBG_Elektronik_Sertifika_Hizmet_Saglayicisi.pem Adding debian:TURKTRUST_Certificate_Services_Provider_Root_1.pem Adding debian:NetLock_Business_=Class_B=_Root.pem Adding debian:Microsec_e-Szigno_Root_CA_2009.pem Adding debian:DigiCert_Global_Root_CA.pem Adding debian:VeriSign_Class_3_Public_Primary_Certification_Authority_-_G4.pem Adding debian:IGC_A.pem Adding debian:TWCA_Root_Certification_Authority.pem Adding debian:S-TRUST_Authentication_and_Encryption_Root_CA_2005_PN.pem Adding debian:VeriSign_Universal_Root_Certification_Authority.pem Adding debian:DST_Root_CA_X3.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority.pem Adding debian:Root_CA_Generalitat_Valenciana.pem Adding debian:UTN_USERFirst_Email_Root_CA.pem Adding debian:ssl-cert-snakeoil.pem Adding debian:Starfield_Services_Root_Certificate_Authority_-_G2.pem Adding debian:GeoTrust_Primary_Certification_Authority_-_G3.pem Adding debian:Certinomis_-_Autorité_Racine.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority.pem 
Adding debian:TDC_Internet_Root_CA.pem Adding debian:UbuntuOne-ValiCert_Class_2_VA.pem Adding debian:AffirmTrust_Commercial.pem Adding debian:spi-cacert-2008.pem Adding debian:Izenpe.com.pem Adding debian:EC-ACC.pem Adding debian:Go_Daddy_Root_Certificate_Authority_-_G2.pem Adding debian:COMODO_ECC_Certification_Authority.pem Adding debian:CNNIC_ROOT.pem Adding debian:NetLock_Notary_=Class_A=_Root.pem Adding debian:Equifax_Secure_eBusiness_CA_2.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Secure_Global_CA.pem Adding debian:UbuntuOne-Go_Daddy_CA.pem Adding debian:GeoTrust_Universal_CA.pem Adding debian:Wells_Fargo_Root_CA.pem Adding debian:Thawte_Server_CA.pem Adding debian:WellsSecure_Public_Root_Certificate_Authority.pem Adding debian:TC_TrustCenter_Class_3_CA_II.pem Adding debian:COMODO_Certification_Authority.pem Adding debian:Equifax_Secure_Global_eBusiness_CA.pem Adding debian:Security_Communication_Root_CA.pem Adding debian:GlobalSign_Root_CA_-_R2.pem Adding debian:TÜBITAK_UEKAE_Kök_Sertifika_Hizmet_Saglayicisi_-_Sürüm_3.pem Adding debian:Verisign_Class_4_Public_Primary_Certification_Authority_-_G3.pem Adding debian:certSIGN_ROOT_CA.pem Adding debian:RSA_Root_Certificate_1.pem Adding debian:ePKI_Root_Certification_Authority.pem Adding debian:Entrust.net_Secure_Server_CA.pem Adding debian:OISTE_WISeKey_Global_Root_GA_CA.pem Adding debian:Sonera_Class_2_Root_CA.pem Adding debian:Certigna.pem Adding debian:AffirmTrust_Networking.pem Adding debian:ValiCert_Class_2_VA.pem Adding debian:GlobalSign_Root_CA.pem Adding debian:Staat_der_Nederlanden_Root_CA_-_G2.pem Adding debian:SecureTrust_CA.pem done. Setting up openjdk-6-jre:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/policytool to provide /usr/bin/policytool (policytool) in auto mode Setting up libatk-wrapper-java (0.30.4-0ubuntu4) ... Setting up icedtea-6-jre-cacao:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up icedtea-6-jre-jamvm:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up icedtea-netx:amd64 (1.3-1ubuntu1.1) ... update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/javaws to provide /usr/bin/javaws (javaws) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/itweb-settings to provide /usr/bin/itweb-settings (itweb-settings) in auto mode update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/javaws to provide /usr/bin/javaws (javaws) in auto mode update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/itweb-settings to provide /usr/bin/itweb-settings (itweb-settings) in auto mode Setting up openjdk-6-jdk:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... 
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/appletviewer to provide /usr/bin/appletviewer (appletviewer) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/extcheck to provide /usr/bin/extcheck (extcheck) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/idlj to provide /usr/bin/idlj (idlj) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jar to provide /usr/bin/jar (jar) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jarsigner to provide /usr/bin/jarsigner (jarsigner) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javadoc to provide /usr/bin/javadoc (javadoc) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javah to provide /usr/bin/javah (javah) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javap to provide /usr/bin/javap (javap) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jconsole to provide /usr/bin/jconsole (jconsole) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jdb to provide /usr/bin/jdb (jdb) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jhat to provide /usr/bin/jhat (jhat) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jinfo to provide /usr/bin/jinfo (jinfo) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jmap to provide /usr/bin/jmap (jmap) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jps to provide /usr/bin/jps (jps) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jrunscript to provide /usr/bin/jrunscript (jrunscript) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jsadebugd to provide /usr/bin/jsadebugd (jsadebugd) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstack to provide /usr/bin/jstack (jstack) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstat to provide /usr/bin/jstat (jstat) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstatd to provide /usr/bin/jstatd (jstatd) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/native2ascii to provide /usr/bin/native2ascii (native2ascii) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/rmic to provide /usr/bin/rmic (rmic) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/schemagen to provide /usr/bin/schemagen (schemagen) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/serialver to provide /usr/bin/serialver (serialver) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/wsgen to provide /usr/bin/wsgen (wsgen) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/wsimport to provide /usr/bin/wsimport (wsimport) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/xjc to provide /usr/bin/xjc (xjc) in auto mode Setting up openoffice (3.4~oneiric) ... Setting up libatk-wrapper-java-jni:amd64 (0.30.4-0ubuntu4) ... Processing triggers for libc-bin ... 
ldconfig deferred processing now taking place philip@X301-2:~$ sudo apt-get install libxrandr2:i386 libxinerama1:i386 Reading package lists... Done Building dependency tree Reading state information... Done The following package was automatically installed and is no longer required: linux-headers-3.5.0-17 Use 'apt-get autoremove' to remove it. The following extra packages will be installed: gcc-4.7-base:i386 libc6:i386 libgcc1:i386 libx11-6:i386 libxau6:i386 libxcb1:i386 libxdmcp6:i386 libxext6:i386 libxrender1:i386 Suggested packages: glibc-doc:i386 locales:i386 The following NEW packages will be installed gcc-4.7-base:i386 libc6:i386 libgcc1:i386 libx11-6:i386 libxau6:i386 libxcb1:i386 libxdmcp6:i386 libxext6:i386 libxinerama1:i386 libxrandr2:i386 libxrender1:i386 0 upgraded, 11 newly installed, 0 to remove and 93 not upgraded. Need to get 4,936 kB of archives. After this operation, 11.9 MB of additional disk space will be used. Do you want to continue [Y/n]? y Get:1 http://gb.archive.ubuntu.com/ubuntu/ quantal/main gcc-4.7-base i386 4.7.2-2ubuntu1 [15.5 kB] Get:2 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libc6 i386 2.15-0ubuntu20 [3,940 kB] Get:3 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libgcc1 i386 1:4.7.2-2ubuntu1 [53.5 kB] Get:4 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxau6 i386 1:1.0.7-1 [8,582 B] Get:5 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxdmcp6 i386 1:1.1.1-1 [13.1 kB] Get:6 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxcb1 i386 1.8.1-1ubuntu1 [48.7 kB] Get:7 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-6 i386 2:1.5.0-1 [776 kB] Get:8 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxext6 i386 2:1.3.1-2 [33.9 kB] Get:9 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxinerama1 i386 2:1.1.2-1 [8,118 B] Get:10 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxrender1 i386 1:0.9.7-1 [20.1 kB] Get:11 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxrandr2 i386 2:1.4.0-1 [18.8 kB] Fetched 4,936 kB in 30s (161 kB/s) Preconfiguring packages ... Selecting previously unselected package gcc-4.7-base:i386. (Reading database ... 146005 files and directories currently installed.) Unpacking gcc-4.7-base:i386 (from .../gcc-4.7-base_4.7.2-2ubuntu1_i386.deb) ... Selecting previously unselected package libc6:i386. Unpacking libc6:i386 (from .../libc6_2.15-0ubuntu20_i386.deb) ... Selecting previously unselected package libgcc1:i386. Unpacking libgcc1:i386 (from .../libgcc1_1%3a4.7.2-2ubuntu1_i386.deb) ... Selecting previously unselected package libxau6:i386. Unpacking libxau6:i386 (from .../libxau6_1%3a1.0.7-1_i386.deb) ... Selecting previously unselected package libxdmcp6:i386. Unpacking libxdmcp6:i386 (from .../libxdmcp6_1%3a1.1.1-1_i386.deb) ... Selecting previously unselected package libxcb1:i386. Unpacking libxcb1:i386 (from .../libxcb1_1.8.1-1ubuntu1_i386.deb) ... Selecting previously unselected package libx11-6:i386. Unpacking libx11-6:i386 (from .../libx11-6_2%3a1.5.0-1_i386.deb) ... Selecting previously unselected package libxext6:i386. Unpacking libxext6:i386 (from .../libxext6_2%3a1.3.1-2_i386.deb) ... Selecting previously unselected package libxinerama1:i386. Unpacking libxinerama1:i386 (from .../libxinerama1_2%3a1.1.2-1_i386.deb) ... Selecting previously unselected package libxrender1:i386. Unpacking libxrender1:i386 (from .../libxrender1_1%3a0.9.7-1_i386.deb) ... Selecting previously unselected package libxrandr2:i386. Unpacking libxrandr2:i386 (from .../libxrandr2_2%3a1.4.0-1_i386.deb) ... 
Setting up gcc-4.7-base:i386 (4.7.2-2ubuntu1) ... Setting up libc6:i386 (2.15-0ubuntu20) ... Setting up libgcc1:i386 (1:4.7.2-2ubuntu1) ... Setting up libxau6:i386 (1:1.0.7-1) ... Setting up libxdmcp6:i386 (1:1.1.1-1) ... Setting up libxcb1:i386 (1.8.1-1ubuntu1) ... Setting up libx11-6:i386 (2:1.5.0-1) ... Setting up libxext6:i386 (2:1.3.1-2) ... Setting up libxinerama1:i386 (2:1.1.2-1) ... Setting up libxrender1:i386 (1:0.9.7-1) ... Setting up libxrandr2:i386 (2:1.4.0-1) ... Processing triggers for libc-bin ... ldconfig deferred processing now taking place $ sudo chmod a+rx /opt/openoffice.org3/share/uno_packages/cache/uno_packages chmod: cannot access `/opt/openoffice.org3/share/uno_packages/cache/uno_packages': No such file or directory

    Read the article

  • BizTalk Send Ports, Delivery Notification and ACK / NACK messages

    - by Robert Kokuti
    Recently I worked on an orchestration which sent messages out to a Send Port on a 'fire and forget' basis. The idea was that once the orchestration passed the message to the Messagebox, it was left to BizTalk to manage the sending process. Should the send operation fail, the Send Port would be suspended, and the orchestration would complete asynchronously, regardless of the Send Port's success or failure. However, we still wanted to log the sending success, using the ACK / NACK messages. On normal ports, BizTalk generates ACK / NACK messages back to the Messagebox if the logical port's Delivery Notification property is set to 'Transmitted'. Unfortunately, this setting also causes the orchestration to wait for the send port's result, and should the Send Port fail, the orchestration will also receive a 'DeliveryFailureException' exception. So we may end up with a suspended port and a suspended orchestration; not the outcome wanted here, as there was no value in suspending the orchestration in our case. There are a couple of ways to fix this:
    1. Catch the DeliveryFailureException (full type name Microsoft.XLANGs.BaseTypes.DeliveryFailureException) and do nothing in the orchestration's exception block. Although this works, it still slows down the orchestration, as the orchestration still has to wait for the outcome of the send port operation.
    2. Use a Direct Port instead, and set the ACK request on the message context prior to passing it to the port: msgToSend(BTS.AckRequired) = true; This has to be done in an expression shape, as a Direct logical port does not have a Delivery Notification property; make sure to add a reference to Microsoft.BizTalk.GlobalPropertySchemas. Setting this context value in the message will cause the messaging agent to create an appropriate ACK or NACK message after the port execution.
    The ACK / NACK messages can be caught and logged by dedicated Send Ports, filtering on the BTS.AckType value (which is either ACK or NACK). ACK/NACK messages are treated in a special way by BizTalk, and a useful feature is that the original message's context values are copied to the ACK/NACK message context; these can be used for logging the right information. Other useful context properties of the ACK/NACK messages:
    - BTS.AckSendPortName can be used to identify the original send port.
    - BTS.AckOwnerID, aka http://schemas.microsoft.com/BizTalk/2003/system-properties.AckOwnerID, holds the instance ID of the failed Send Port; it can be used to resubmit / terminate the instance.
    Someone may ask whether we can just turn off Delivery Notification on a 'normal' port and set the AckRequired property on the message, as for a Direct port. Unfortunately, this does not work; BizTalk seems to remove this property automatically if the message goes through a port where Delivery Notification is set to None.
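    As a sketch of the logging side (the property names are the ones given above, though the exact filter syntax depends on how it is entered in the BizTalk Administration Console): the dedicated NACK-logging send port would carry a subscription filter along the lines of BTS.AckType == NACK, optionally combined with BTS.AckSendPortName == <original send port name> to log failures per port, while the success-logging port would filter on BTS.AckType == ACK instead.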

    Read the article

  • Set up linux box for secure local hosting a-z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via vpn only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers, and document my progress/pitfalls. Hopefully someday this will help someone down the line. The details: CentOS 5.5 x86_64 httpd: Apache/2.2.3 mysql: 5.0.77 (to be upgraded) php: 5.1 (to be upgraded) The requirements: SECURITY!! Secure file transfer Secure client access (SSL Certs and CA) Secure data storage Virtualhosts/multiple subdomains Local email would be nice, but not critical The Steps: Download latest CentOS DVD-iso (torrent worked great for me). Install CentOS: While going through the install, I checked the Server Components option thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea. Basic config: Setup users, networking/ip address etc. Yum update/upgrade. Upgrade PHP/MySQL: To upgrade PHP and MySQL to the latest versions, I had to look to another repo outside CentOS. IUS looks great and I'm happy I found it! Add IUS repository to our package manager cd /tmp wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm yum list | grep -w \.ius\. # list all the packages in the IUS repository; use this to find PHP/MySQL version and libraries you want to install Remove old version of PHP and install newer version from IUS rpm -qa | grep php # to list all of the installed php packages we want to remove yum shell # open an interactive yum shell remove php-common php-mysql php-cli #remove installed PHP components install php53 php53-mysql php53-cli php53-common #add packages you want transaction solve #important!! checks for dependencies transaction run #important!! does the actual installation of packages. [control+d] #exit yum shell php -v PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45) Upgrade MySQL from IUS repository /etc/init.d/mysqld stop rpm -qa | grep mysql # to see installed mysql packages yum shell remove mysql mysql-server #remove installed MySQL components install mysql51 mysql51-server mysql51-devel transaction solve #important!! checks for dependencies transaction run #important!! does the actual installation of packages. 
    [control+d] #exit yum shell service mysqld start mysql -v Server version: 5.1.42-ius Distributed by The IUS Community Project Upgrade instructions courtesy of IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide Install rssh (restricted shell) to provide scp and sftp access, without allowing ssh login cd /tmp wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm useradd -m -d /home/dev -s /usr/bin/rssh dev passwd dev Edit /etc/rssh.conf to grant access to SFTP to rssh users. vi /etc/rssh.conf Uncomment or add: allowscp allowsftp This allows me to connect to the machine via SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). rssh instructions appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html Set up virtual interfaces ifconfig eth1:1 192.168.1.3 up #start up the virtual interface cd /etc/sysconfig/network-scripts/ cp ifcfg-eth1 ifcfg-eth1:1 #copy default script and match name to our virtual interface vi ifcfg-eth1:1 #modify eth1:1 script #ifcfg-eth1:1 | modify so it looks like this: DEVICE=eth1:1 IPADDR=192.168.1.3 NETMASK=255.255.255.0 NETWORK=192.168.1.0 ONBOOT=yes NAME=eth1:1 Add more virtual interfaces as needed by repeating. Because of the ONBOOT=yes line in the ifcfg-eth1:1 file, this interface will be brought up when the system boots, or the network starts/restarts. service network restart Shutting down interface eth0: [ OK ] Shutting down interface eth1: [ OK ] Shutting down loopback interface: [ OK ] Bringing up loopback interface: [ OK ] Bringing up interface eth0: [ OK ] Bringing up interface eth1: [ OK ] ping 192.168.1.3 64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.105 ms Virtualhosts In the rssh section above I added a user to use for SFTP. In this user's home directory, I created a folder called 'https'. This is where the documents for this site will live, so I need to add a virtualhost that will point to it. I will use the above virtual interface for this site (herein called dev.site.local). vi /etc/http/conf/httpd.conf Add the following to the end of httpd.conf: <VirtualHost 192.168.1.3:80> ServerAdmin [email protected] DocumentRoot /home/dev/https ServerName dev.site.local ErrorLog /home/dev/logs/error_log TransferLog /home/dev/logs/access_log </VirtualHost> I put a dummy index.html file in the https directory just to check everything out. I tried browsing to it, and was met with permission denied errors. The logs only gave an obscure reference to what was going on: [Mon May 17 14:57:11 2010] [error] [client 192.168.1.100] (13)Permission denied: access to /index.html denied I tried chmod 777 et al., but to no avail. Turns out, I needed to chmod +x the https directory and its parent directories. chmod +x /home chmod +x /home/dev chmod +x /home/dev/https This solved that problem. DNS I'm handling DNS via our local Windows Server 2003 box. 
    However, the CentOS documentation for BIND can be found here: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-bind.html SSL To get SSL working, I changed the following in httpd.conf: NameVirtualHost 192.168.1.3:443 #make sure this line is in httpd.conf <VirtualHost 192.168.1.3:443> #change port to 443 ServerAdmin [email protected] DocumentRoot /home/dev/https ServerName dev.site.local ErrorLog /home/dev/logs/error_log TransferLog /home/dev/logs/access_log </VirtualHost> Unfortunately, I kept getting (Error code: ssl_error_rx_record_too_long) errors when trying to access a page with SSL. As JamesHannah gracefully pointed out below, I had not set up the locations of the certs in httpd.conf, and so the page itself was being served to the browser in place of a certificate, making the browser balk. So first, I needed to set up a CA and make certificate files. I found a great (if old) walkthrough on the process here: http://www.debian-administration.org/articles/284. Here are the relevant steps I took from that article: mkdir /home/CA cd /home/CA/ mkdir newcerts private echo '01' > serial touch index.txt #this and the above command are for the database that will keep track of certs Create an openssl.cnf file in the /home/CA/ dir and edit it per the walkthrough linked above. (For reference, my finished openssl.cnf file looked like this: http://pastebin.com/raw.php?i=hnZDij4T) openssl req -new -x509 -extensions v3_ca -keyout private/cakey.pem -out cacert.pem -days 3650 -config ./openssl.cnf #this creates the cacert.pem which gets distributed and imported to the browser(s) Modified openssl.cnf again per walkthrough instructions. openssl req -new -nodes -out dev.req.pem -config ./openssl.cnf #generates certificate request, and key.pem which I renamed dev.key.pem. Modified openssl.cnf again per walkthrough instructions. openssl ca -out dev.cert.pem -config ./openssl.cnf -infiles dev.req.pem #create and sign certificate. cp dev.cert.pem /home/dev/certs/cert.pem cp dev.key.pem /home/dev/certs/key.pem I updated httpd.conf to reflect the certs and turn SSLEngine on: NameVirtualHost 192.168.1.3:443 <VirtualHost 192.168.1.3:443> ServerAdmin [email protected] DocumentRoot /home/dev/https SSLEngine on SSLCertificateFile /home/dev/certs/cert.pem SSLCertificateKeyFile /home/dev/certs/key.pem ServerName dev.site.local ErrorLog /home/dev/logs/error_log TransferLog /home/dev/logs/access_log </VirtualHost> I put the CA's cacert.pem in a web-accessible place, and downloaded/imported it into my browser. Now I can visit https://dev.site.local with no errors or warnings. And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure SSL email would be appreciated.

    Read the article

  • How do you get linux to honor setuid directories?

    - by Takigama
    Some time ago, while in a conversation in IRC, one user in a channel I was in suggested someone setuid a directory in order for it to inherit the userid on files, to solve a problem someone else was having. At the time I spoke up and said "linux doesn't support setuid directories". After that, the person giving the advice showed me a pastebin (http://codepad.org/4In62f13) of his system honouring the setuid permission set on a directory. Just to explain, when I say "linux doesn't support setuid directories", what I mean is that you can go "chmod u+s directory" and it will set the bit on the directory; however, linux (as I understood it) ignores this bit on directories. Try as I might, I just can't quite replicate that pastebin. Someone suggested to me once that it might be possible to emulate the behaviour with selinux, and playing around with rules, it's possible to force a uid on a file, but not from a setuid directory permission (that I can see). Reading around on the internet has been fairly uninformative; most places claim "no, setuid on directories does not work with linux", with the occasional "it can be done under specific circumstances" (such as this: http://arstechnica.com/etc/linux/2003/linux.ars-12032003.html). I don't remember who the original person was, but the original system was a debian 6 system, and the filesystem it was running was xfs mounted with "default,acl". I've tried replicating that, but no luck so far (I've tried various versions of debian, ubuntu, fedora and centos). Can anyone clue me in on what or how you get a system to honor setuid on a directory?

    Read the article

  • Confusing Java syntax...

    - by posfan12
    I'm trying to convert the following code (from Wikipedia) from Java to JavaScript: /* * 3 June 2003, [[:en:User:Cyp]]: * Maze, generated by my algorithm * 24 October 2006, [[:en:User:quin]]: * Source edited for clarity * 25 January 2009, [[:en:User:DebateG]]: * Source edited again for clarity and reusability * 1 June 2009, [[:en:User:Nandhp]]: * Source edited to produce SVG file when run from the command-line * * This program was originally written by [[:en:User:Cyp]], who * attached it to the image description page for an image generated by * it on en.wikipedia. The image was licensed under CC-BY-SA-3.0/GFDL. */ import java.awt.*; import java.applet.*; import java.util.Random; /* Define the bit masks */ class Constants { public static final int WALL_ABOVE = 1; public static final int WALL_BELOW = 2; public static final int WALL_LEFT = 4; public static final int WALL_RIGHT = 8; public static final int QUEUED = 16; public static final int IN_MAZE = 32; } public class Maze extends java.applet.Applet { /* The width and height (in cells) of the maze */ private int width; private int height; private int maze[][]; private static final Random rnd = new Random(); /* The width in pixels of each cell */ private int cell_width; /* Construct a Maze with the default width, height, and cell_width */ public Maze() { this(20,20,10); } /* Construct a Maze with specified width, height, and cell_width */ public Maze(int width, int height, int cell_width) { this.width = width; this.height = height; this.cell_width = cell_width; } /* Initialization method that will be called when the program is * run from the command-line. Maze will be written as SVG file. */ public static void main(String[] args) { Maze m = new Maze(); m.createMaze(); m.printSVG(); } /* Initialization method that will be called when the program is * run as an applet. Maze will be displayed on-screen. */ public void init() { createMaze(); } /* The maze generation algorithm. */ private void createMaze(){ int x, y, n, d; int dx[] = { 0, 0, -1, 1 }; int dy[] = { -1, 1, 0, 0 }; int todo[] = new int[height * width], todonum = 0; /* We want to create a maze on a grid. */ maze = new int[width][height]; /* We start with a grid full of walls. */ for (x = 0; x < width; ++x) { for (y = 0; y < height; ++y) { if (x == 0 || x == width - 1 || y == 0 || y == height - 1) { maze[x][y] = Constants.IN_MAZE; } else { maze[x][y] = 63; } } } /* Select any square of the grid, to start with. */ x = 1 + rnd.nextInt (width - 2); y = 1 + rnd.nextInt (height - 2); /* Mark this square as connected to the maze. */ maze[x][y] &= ~48; /* Remember the surrounding squares, as we will */ for (d = 0; d < 4; ++d) { if ((maze[][d][][d] & Constants.QUEUED) != 0) { /* want to connect them to the maze. */ todo[todonum++] = ((x + dx[d]) << Constants.QUEUED) | (y + dy[d]); maze[][d][][d] &= ~Constants.QUEUED; } } /* We won't be finished until all is connected. */ while (todonum > 0) { /* We select one of the squares next to the maze. */ n = rnd.nextInt (todonum); x = todo[n] >> 16; /* the top 2 bytes of the data */ y = todo[n] & 65535; /* the bottom 2 bytes of the data */ /* We will connect it, so remove it from the queue. */ todo[n] = todo[--todonum]; /* Select a direction, which leads to the maze. */ do { d = rnd.nextInt (4); } while ((maze[][d][][d] & Constants.IN_MAZE) != 0); /* Connect this square to the maze. 
*/ maze[x][y] &= ~((1 << d) | Constants.IN_MAZE); maze[][d][][d] &= ~(1 << (d ^ 1)); /* Remember the surrounding squares, which aren't */ for (d = 0; d < 4; ++d) { if ((maze[][d][][d] & Constants.QUEUED) != 0) { /* connected to the maze, and aren't yet queued to be. */ todo[todonum++] = ((x + dx[d]) << Constants.QUEUED) | (y + dy[d]); maze[][d][][d] &= ~Constants.QUEUED; } } /* Repeat until finished. */ } /* Add an entrance and exit. */ maze[1][1] &= ~Constants.WALL_ABOVE; maze[width - 2][height - 2] &= ~Constants.WALL_BELOW; } /* Called by the applet infrastructure to display the maze on-screen. */ public void paint(Graphics g) { drawMaze(g); } /* Called to write the maze to an SVG file. */ public void printSVG() { System.out.format("<svg width=\"%d\" height=\"%d\" version=\"1.1\"" + " xmlns=\"http://www.w3.org/2000/svg\">\n", width*cell_width, height*cell_width); System.out.println(" <g stroke=\"black\" stroke-width=\"1\"" + " stroke-linecap=\"round\">"); drawMaze(null); System.out.println(" </g>\n</svg>"); } /* Main maze-drawing loop. */ public void drawMaze(Graphics g) { int x, y; for (x = 1; x < width - 1; ++x) { for (y = 1; y < height - 1; ++y) { if ((maze[x][y] & Constants.WALL_ABOVE) != 0) drawLine( x * cell_width, y * cell_width, (x + 1) * cell_width, y * cell_width, g); if ((maze[x][y] & Constants.WALL_BELOW) != 0) drawLine( x * cell_width, (y + 1) * cell_width, (x + 1) * cell_width, (y + 1) * cell_width, g); if ((maze[x][y] & Constants.WALL_LEFT) != 0) drawLine( x * cell_width, y * cell_width, x * cell_width, (y + 1) * cell_width, g); if ((maze[x][y] & Constants.WALL_RIGHT) != 0) drawLine((x + 1) * cell_width, y * cell_width, (x + 1) * cell_width, (y + 1) * cell_width, g); } } } /* Draw a line, either in the SVG file or on the screen. */ public void drawLine(int x1, int y1, int x2, int y2, Graphics g) { if ( g != null ) g.drawLine(x1, y1, x2, y2); else System.out.format(" <line x1=\"%d\" y1=\"%d\"" + " x2=\"%d\" y2=\"%d\" />\n", x1, y1, x2, y2); } } Anyway, I was chugging along fairly quickly when I came to a bit that I just don't understand: /* Remember the surrounding squares, as we will */ for (var d = 0; d < 4; ++d) { if ((maze[][d][][d] & Constants.QUEUED) != 0) { /* want to connect them to the maze. */ todo[todonum++] = ((x + dx[d]) << Constants.QUEUED) | (y + dy[d]); maze[][d][][d] &= ~Constants.QUEUED; } } What I don't get is why there are four sets of brackets following the "maze" parameter instead of just two, since "maze" is a two dimensional array, not a four dimensional array. I'm sure there's a good reason for this. Problem is, I just don't get it. Thanks!
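    For what it's worth, that four-bracket form is not valid Java at all, so it is almost certainly a copy artifact: the expressions inside the brackets appear to have been lost somewhere between Wikipedia and the paste. Assuming the original indexes the neighbouring cell as maze[x + dx[d]][y + dy[d]], a minimal sketch of the intended queueing step looks like this (the names nx, ny and enqueueNeighbours are illustrative, not from the original source):

    class MazeSketch {
        static final int QUEUED = 16;
        static final int[] dx = { 0, 0, -1, 1 };
        static final int[] dy = { -1, 1, 0, 0 };

        /* Queue every neighbour of (x, y) that is still marked QUEUED. */
        static int enqueueNeighbours(int[][] maze, int x, int y, int[] todo, int todonum) {
            for (int d = 0; d < 4; ++d) {
                int nx = x + dx[d]; // neighbour column: what the first bracket pair lost
                int ny = y + dy[d]; // neighbour row: what the second bracket pair lost
                if ((maze[nx][ny] & QUEUED) != 0) {
                    // pack (nx, ny) into one int: x in the top 16 bits, y in the bottom 16;
                    // QUEUED happens to equal 16, which is why the pasted shift by
                    // Constants.QUEUED still matches the later "todo[n] >> 16" decode
                    todo[todonum++] = (nx << 16) | ny;
                    maze[nx][ny] &= ~QUEUED;
                }
            }
            return todonum;
        }
    }

    The same reconstruction presumably applies to every other maze[][d][][d] occurrence in the paste, and in the JavaScript port the expression maze[x + dx[d]][y + dy[d]] works unchanged.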

    Read the article

  • Casting interfaces in java

    - by owca
    I had two separate interfaces, one 'MultiLingual' for choosing language of text to return, and second 'Justification' to justify returned text. Now I need to join them, but I'm stuck at 'java.lang.ClassCastException' error. Class Book is not important here. Data.length and FormattedInt.width correspond to the width of the line. The code : public interface MultiLingual { static int ENG = 0; static int PL = 1; String get(int lang); int setLength(int length); } class Book implements MultiLingual { private String title; private String publisher; private String author; private int jezyk; public Book(String t, String a, String p, int y){ this(t, a, p, y, 1); } public Book(String t, String a, String p, int y, int lang){ title = t; author = a; publisher = p; jezyk = lang; } public String get(int lang){ jezyk = lang; return this.toString(); } public int setLength(int i){ return 0; } @Override public String toString(){ String dane; if (jezyk == ENG){ dane = "Author: "+this.author+"\n"+ "Title: "+this.title+"\n"+ "Publisher: "+this.publisher+"\n"; } else { dane = "Autor: "+this.author+"\n"+ "Tytul: "+this.title+"\n"+ "Wydawca: "+this.publisher+"\n"; } return dane; } } class Data implements MultiLingual { private int day; private int month; private int year; private int jezyk; private int length; public Data(int d, int m, int y){ this(d, m, y, 1); } public Data(int d, int m, int y, int lang){ day = d; month = m; year = y; jezyk = lang; } public String get(int lang){ jezyk = lang; return this.toString(); } public int setLength(int i){ length = i; return length; } @Override public String toString(){ String dane=""; String miesiac=""; String dzien=""; switch(month){ case 1: miesiac="January"; case 2: miesiac="February"; case 3: miesiac="March"; case 4: miesiac="April"; case 5: miesiac="May"; case 6: miesiac="June"; case 7: miesiac="July"; case 8: miesiac="August"; case 9: miesiac="September"; case 10: miesiac="October"; case 11: miesiac="November"; case 12: miesiac="December"; } if(day==1){ dzien="st"; } if(day==2 || day==22){ dzien="nd"; } if(day==3 || day==23){ dzien="rd"; } else{ dzien="th"; } if(jezyk==ENG){ dane =this.day+dzien+" of "+miesiac+" "+this.year; } else{ dane = this.day+"."+this.month+"."+this.year; } return dane; } } interface Justification { static int RIGHT=1; static int LEFT=2; static int CENTER=3; String justify(int just); } class FormattedInt implements Justification { private int liczba; private int width; private int wyrownanie; public FormattedInt(int s, int i){ liczba = s; width = i; wyrownanie = 2; } public String justify(int just){ wyrownanie = just; String wynik=""; String tekst = Integer.toString(liczba); int len = tekst.length(); int space_left=width - len; int space_right = space_left; int space_center_left = (width - len)/2; int space_center_right = width - len - space_center_left -1; String puste=""; if(wyrownanie == LEFT){ for(int i=0; i<space_right; i++){ puste = puste + " "; } wynik = tekst+puste; } else if(wyrownanie == RIGHT){ for(int i=0; i<space_left; i++){ puste = puste + " "; } wynik = puste+tekst; } else if(wyrownanie == CENTER){ for(int i=0; i<space_center_left; i++){ puste = puste + " "; } wynik = puste + tekst; puste = " "; for(int i=0; i< space_center_right; i++){ puste = puste + " "; } wynik = wynik + puste; } return wynik; } } And the test code that shows this casting "(Justification)gatecrasher[1]" which gives me errors : MultiLingual gatecrasher[]={ new Data(3,12,1998), new Data(10,6,1924,MultiLingual.ENG), new Book("Sekret","Rhonda Byrne", "Nowa 
proza",2007), new Book("Tuesdays with Morrie", "Mitch Albom", "Time Warner Books",2003, MultiLingual.ENG), }; gatecrasher[0].setLength(25); gatecrasher[1].setLength(25); Justification[] t={ new FormattedInt(345,25), (Justification)gatecrasher[1], (Justification)gatecrasher[0], new FormattedInt(-7,25) }; System.out.println(" 10 20 30"); System.out.println("123456789 123456789 123456789"); for(int i=0;i < t.length;i++) System.out.println(t[i].justify(Justification.RIGHT)+"<----\n");

    Read the article

  • WCF timeout exception detailed investigation

    - by Jason Kealey
    We have an application that has a WCF service (*.svc) running on IIS7 and various clients querying the service. The server is running Win 2008 Server. The clients are running either Windows 2008 Server or Windows 2003 server. I am getting the following exception, which I have seen can in fact be related to a large number of potential WCF issues. System.TimeoutException: The request channel timed out while waiting for a reply after 00:00:59.9320000. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout. ---> System.TimeoutException: The HTTP request to 'http://www.domain.com/WebServices/myservice.svc/gzip' has exceeded the allotted timeout of 00:01:00. The time allotted to this operation may have been a portion of a longer timeout. I have increased the timeout to 30min and the error still occurred. This tells me that something else is at play, because the quantity of data could never take 30min to upload or download. The error comes and goes. At the moment, it is more frequent. It does not seem to matter if I have 3 clients running simultaneously or 100, it still occurs once in a while. Most of the time, there are no timeouts but I still get a few per hour. The error comes from any of the methods that are invoked. One of these methods does not have parameters and returns a bit of data. Another takes in lots of data as a parameter but executes asynchronously. The errors always originate from the client and never reference any code on the server in the stack trace. It always ends with: at System.Net.HttpWebRequest.GetResponse() at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout) On the server: I've tried (and currently have) the following binding settings: maxBufferSize="2147483647" maxReceivedMessageSize="2147483647" maxBufferPoolSize="2147483647" It does not seem to have an impact. I've tried (and currently have) the following throttling settings: <serviceThrottling maxConcurrentCalls="1500" maxConcurrentInstances="1500" maxConcurrentSessions="1500"/> It does not seem to have an impact. I currently have the following settings for the WCF service. [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Single)] I ran with ConcurrencyMode.Multiple for a while, and the error still occurred. I've tried restarting IIS, restarting my underlying SQL Server, restarting the machine. All of these don't seem to have an impact. I've tried disabling the Windows firewall. It does not seem to have an impact. On the client, I have these settings: maxReceivedMessageSize="2147483647" <system.net> <connectionManagement> <add address="*" maxconnection="16"/> </connectionManagement> </system.net> My client closes its connections: var client = new MyClient(); try { return client.GetConfigurationOptions(); } finally { client.Close(); } I have changed the registry settings to allow more outgoing connections: MaxConnectionsPerServer=24, MaxConnectionsPer1_0Server=32. I have now just recently tried SvcTraceViewer.exe. I managed to catch one exception on the client end. I see that its duration is 1 minute. Looking at the server side trace, I can see that the server is not aware of this exception. The maximum duration I can see is 10 seconds. I have looked at active database connections using exec sp_who on the server. I only have a few (2-3). 
I have looked at TCP connections from one client using TCPView. It is usually around 2-3 and I have seen up to 5 or 6. Simply put, I am stumped. I have tried everything I could find, and must be missing something very simple that a WCF expert would be able to see. It is my gut feeling that something is blocking my clients at a low level (TCP), before the server actually receives the message, and/or that something is queuing the messages at the server level and never letting them process. If you have any performance counters I should look at, please let me know (and please indicate what values are bad, as some of these counters are hard to decipher). Also, how could I log the WCF message size? Finally, are there any tools out there that would allow me to test how many connections I can establish between my client and server (independently of my application)? Thanks for your time! Extra information added June 20th: My WCF application does something similar to the following. while (true) { Step1GetConfigurationSettingsFromServerViaWCF(); // can change between calls Step2GetWorkUnitFromServerViaWCF(); DoWorkLocally(); // takes 5-15 minutes. Step3SendBackResultsToServerViaWCF(); } Using Wireshark, I did see that when the error occurs, there are five TCP retransmissions followed by a TCP reset later on. My guess is the RST is coming from WCF killing the connection. The exception report I get is from Step3 timing out. I discovered this by looking at the TCP stream "tcp.stream eq 192". I then expanded my filter to "tcp.stream eq 192 and http and http.request.method eq POST" and saw 6 POSTs during this stream. This seemed odd, so I checked with another stream such as tcp.stream eq 100. I had three POSTs, which seems a bit more normal because I am doing three calls. However, I do close my connection after every WCF call, so I would have expected one call per stream (but I don't know much about TCP). Investigating a bit more, I dumped the HTTP packet load to disk to look at what these six calls were. 1) Step3 2) Step1 3) Step2 4) Step3 - corrupted 5) Step1 6) Step2 My guess is two concurrent clients are using the same connection, which is why I saw duplicates. However, I still have a few more issues that I can't comprehend: a) Why is the packet corrupted? Random network fluke - maybe? The load is gzipped using this sample code: http://msdn.microsoft.com/en-us/library/ms751458.aspx - could the code be buggy once in a while when used concurrently? I should test without the gzip library. b) Why would I see step 1 & step 2 running AFTER the corrupted operation timed out? It seems to me as if these operations should not have occurred. Maybe I am not looking at the right stream because my understanding of TCP is flawed. I have other streams that occur at the same time. I should investigate other streams - a quick glance at streams 190-194 shows that the Step3 POSTs have proper payload data (not corrupted), pushing me to look at the gzip library again.

    Read the article

  • Using tinybutstrong templating system, how do I have a (secondary) mysql query (sub-block), inside a

    - by desbest
    This is the code that works well. $TBS->MergeBlock("items",$conn,"SELECT * FROM items WHERE categoryid = $getcategory"); and it does a good job of looping over the items with no problem. <tr id="itemid_[items.id;block=tr]"> <td height="30"> <!-- <img src="move.png" align="left" style="margin-right: 8px;"> --> <div class="lowlight">[items.title] </div> </td> <td height="30"> <div class="editable" itemid="$item[id]">[items.quantity]</div> </td> <td height="30"> <!-- <img src="pencil.png" id="edititem"width="24" height="24"> --> &nbsp; &nbsp; &nbsp; &nbsp;<img src="icons/folder.png" id="category_[items.categoryid]";" class="changecategory" categoryid="[items.categoryid]" itemid="[items.id]" itemtitle="[items.title]" title="Change category" width="24" height="24"> &nbsp; &nbsp; &nbsp; &nbsp; <a href="index.php?category=[var.getcategory]&deleteitem=[items.id]" title="Delete item" onclick="return confirm('Are you sure you want to delete [items.title]?')"><img src="icons/trash.png" id="deleteitem" width="24" height="24"></a> </td> </tr> This is where the problem lies. In the table row I would like a secondary MySQL merge block based on the id column that the primary (items) MySQL table has. This means that ideally I would love to do this. $TBS->MergeBlock("fields",$conn,"SELECT * FROM items WHERE categoryid = [items.id]"); So then when I used [fields.value;block=div] inside the table row, it would pick up a row from the fields table based on the primary MySQL query. It would recognise what the id is for the merged block and perform a MySQL query based on a variable that that block has. So that's what I would like to do, and that's what I call a secondary MySQL merge block. This is what I attempted by using the sub-blocks example and modifying it. index.html <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> <title>TinyButStrong - Examples - subblocks</title> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"> <link href="tbs_us_examples_0styles.css" rel="stylesheet" type="text/css"> <style type="text/css"> <!-- .border02 { border: 2px solid #39C; } .border03 { border: 2px solid #396; } --> </style> </head> <body> <table width="400" align="center" cellpadding="5" cellspacing="0" style="border-collapse:collapse;"> <tr> <td class="border02"><strong>Item name:</strong> [item.title;block=tr] , <strong>Item quantity:</strong> [item.quantity]<br> <table border="1" align="center" cellpadding="2" cellspacing="0"> <tr bgcolor="#CACACA"> <td width="30"><u>Position</u></td> <td width="150"><u>Attribute</u></td> <td width="50"><u>Value</u></td> <td width="100"><div align="center"><u>Date Number</u></div></td> </tr> <tr bgcolor="#F0F0F0"> <td>[field.#]</td> <td>[field.attribute;block=tr;p1=[item.id]] <br><b>[item.id]</b> </td> <td><div align="right">[field.value]</div></td> <td><div align="center">[field.datenumber;frm='mm-dd-yyyy']</div></td> </tr> </table> </td> </tr> </table> </body> </html> index.php <?php require_once('../stockman-v3/tbs_class.php'); require_once('../stockman-v3/config.php'); // Create data $TeamList[0] = array('team'=>'Eagle' ,'total'=>'458'); //{ $TeamList[0]['matches'][] = array('town'=>'London','score'=>'253','date'=>'1999-11-30'); $TeamList[0]['matches'][] = array('town'=>'Paris' ,'score'=>'145','date'=>'2002-07-24'); $TeamList[1] = array('team'=>'Goonies','total'=>'281'); $TeamList[1]['matches'][] = array('town'=>'New-York','score'=>'365','date'=>'2001-12-25'); $TeamList[1]['matches'][] = array('town'=>'Madrid'
,'score'=>'521','date'=>'2004-01-14'); // } $TeamList[2] = array('team'=>'MIB' ,'total'=>'615'); // { $TeamList[2]['matches'][] = array('town'=>'Dallas' ,'score'=>'362','date'=>'2001-01-02'); $TeamList[2]['matches'][] = array('town'=>'Lyon' ,'score'=>'321','date'=>'2002-11-17'); $TeamList[2]['matches'][] = array('town'=>'Washington','score'=>'245','date'=>'2003-08-24'); // } $TBS = new clsTinyButStrong; $TBS->LoadTemplate('index3.html'); // Automatic subblock $TBS->MergeBlock("item",$conn,"SELECT * FROM items"); $TBS->MergeBlock("field",$conn,"SELECT * FROM fields WHERE itemid='[%p1%]' "); $TBS->MergeBlock('teamKEY',$TeamList); // Subblock with a dynamic query //$TBS->MergeBlock('match','array','TeamList[%p1%][matches]'); $TBS->MergeBlock("match",$conn,"SELECT * FROM fields WHERE id='[%p1%]' "); $TBS->Show(); ?> Please note what I am trying to do: if I were to change $TBS->MergeBlock("field",$conn,"SELECT * FROM fields WHERE itemid='[%p1%]' "); to... $TBS->MergeBlock("field",$conn,"SELECT * FROM fields"); it would show all the items.

    Read the article

  • Executing legacy MSBuild scripts in TFS 2010 Build

    - by Jakob Ehn
    When upgrading from TFS 2008 to TFS 2010, all builds are “upgraded” in the sense that a build definition with the same name is created, and it uses the UpgradeTemplate build process template to execute the build. This template basically just runs MSBuild on the existing TFSBuild.proj file. The build definition contains a property called ConfigurationFolderPath that points to the TFSBuild.proj file. So, existing builds will run just fine after upgrade. But what if you want to use the new workflow functionality in TFS 2010 Build, but still have a lot of MSBuild scripts that maybe call custom MSBuild tasks that you don’t have the time to rewrite? Then one option is to keep these MSBuild scripts and call them from a TFS 2010 Build workflow. This can be done using the MSBuild workflow activity that is available in the toolbox in the Team Foundation Build Activities section. This activity wraps the call to MSBuild.exe. Most of its properties are only relevant when actually compiling projects, for example C# project files. When calling custom MSBuild project files, you should focus on these properties (Property / Meaning / Example):
    CommandLineArguments / Use this to send in or override MSBuild properties in your project / “/p:MyProperty=SomeValue”, or MSBuildArguments (this will let you define the arguments in the build definition or when queuing the build)
    LogFile / Name of the log file where MSBuild will log the output / “MyBuild.log”
    LogFileDropLocation / Location of the log file / BuildDetail.DropLocation + “\log”
    Project / The project to execute / SourcesDirectory + “\BuildExtensions.targets”
    ResponseFile / The name of the MSBuild response file / SourcesDirectory + “\BuildExtensions.rsp”
    Targets / The target(s) to execute / New String() {“Target1”, “Target2”}
    Verbosity / Logging verbosity / Microsoft.TeamFoundation.Build.Workflow.BuildVerbosity.Normal
    Integrating with Team Build: If your MSBuild scripts try to use Team Build tasks, they will most likely fail with the above approach. For example, the following MSBuild project file tries to add a build step using the BuildStep task: <?xml version="1.0" encoding="utf-8"?> <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets" /> <Target Name="MyTarget"> <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Name="MyBuildStep" Message="My build step executed" Status="Succeeded"></BuildStep> </Target> </Project> When executing this file using the MSBuild activity, calling the MyTarget target, it will fail with the following message: The "Microsoft.TeamFoundation.Build.Tasks.BuildStep" task could not be loaded from the assembly \PrivateAssemblies\Microsoft.TeamFoundation.Build.ProcessComponents.dll. Could not load file or assembly 'file:///D:\PrivateAssemblies\Microsoft.TeamFoundation.Build.ProcessComponents.dll' or one of its dependencies. The system cannot find the file specified. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask. You can see that the path to ProcessComponents.dll is incomplete. This is because in the Microsoft.TeamFoundation.Build.targets file the task is referenced using the $(TeamBuildRegPath) property. Also note that the task needs the TeamFoundationServerUrl and BuildUri properties.
One solution here is to pass these properties in using the CommandLineArguments parameter, with the corresponding values taken from the current build. The build log then shows that the build step has in fact been inserted. The problem, as you probably spotted, is that the build step is inserted at the top of the build log, instead of next to the MSBuild activity call. This is because we are using a legacy team build task (BuildStep), and that is how these are handled in TFS 2010. You can see the same behaviour when running builds that use the UpgradeTemplate: custom build steps show up at the top of the build log.

    Read the article

  • Converting linear colors to SRGB shows banding in FFmpeg

    - by user1863947
    When I convert an EXR file sequence with x264 using FFmpeg and convert the colorspace from linear to SRGB (with gamma 0.45454545) I get some heavy banding issues (most visible on a dark gradient). Here is the ffmpeg command I use: C:/ffmpeg.exe -y -i C:/seq_v001.%04d.exr -vf lutrgb=r=gammaval(0.45454545):g=gammaval(0.45454545):b=gammaval(0.45454545) -vcodec libx264 -pix_fmt yuv420p -preset slow -crf 18 -r 25 C:/out.mov Here is the output: ffmpeg version N-47062-g26c531c Copyright (c) 2000-2012 the FFmpeg developers built on Nov 25 2012 12:25:21 with gcc 4.7.2 (GCC) configuration: --enable-gpl --enable-version3 --disable-pthreads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib libavutil 52. 9.100 / 52. 9.100 libavcodec 54. 77.100 / 54. 77.100 libavformat 54. 37.100 / 54. 37.100 libavdevice 54. 3.100 / 54. 3.100 libavfilter 3. 23.102 / 3. 23.102 libswscale 2. 1.102 / 2. 1.102 libswresample 0. 17.101 / 0. 17.101 libpostproc 52. 2.100 / 52. 2.100 Input #0, image2, from 'C:/seq_v001.%04d.exr': Duration: 00:00:09.60, start: 0.000000, bitrate: N/A Stream #0:0: Video: exr, rgb48le, 960x540 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 25 tbn, 25 tbc [libx264 @ 0000000004d11540] using SAR=1/1 [libx264 @ 0000000004d11540] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 [libx264 @ 0000000004d11540] profile High, level 3.1 [libx264 @ 0000000004d11540] 264 - core 128 r2216 198a7ea - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=5 deblock=1:0:0 analyse=0x3:0x113 me=umh subme=8 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=18 lookahead_threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=2 b_bias=0 direct=3 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=50 rc=crf mbtree=1 crf=18.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mov, to 'C:/out.mov': Metadata: encoder : Lavf54.37.100 Stream #0:0: Video: h264 (avc1 / 0x31637661), yuv420p, 960x540 [SAR 1:1 DAR 16:9], q=-1--1, 12800 tbn, 25 tbc Stream mapping: Stream #0:0 -> #0:0 (exr -> libx264) Press [q] to stop, [?] 
for help [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute frame= 16 fps=0.0 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute frame= 34 fps= 33 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute frame= 52 fps= 34 q=0.0 size= 0kB 
time=00:00:00.00 bitrate= 0.0kbits/s Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute frame= 68 fps= 34 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute frame= 85 fps= 33 q=23.0 size= 47kB time=00:00:00.44 bitrate= 867.5kbits/s Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] 
Found more than one compression attribute frame= 104 fps= 34 q=23.0 size= 94kB time=00:00:01.20 bitrate= 640.3kbits/s Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute frame= 121 fps= 34 q=23.0 size= 133kB time=00:00:01.88 bitrate= 577.8kbits/s Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute frame= 139 fps= 34 q=23.0 size= 172kB time=00:00:02.60 bitrate= 543.4kbits/s Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 
000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute frame= 157 fps= 34 q=23.0 size= 213kB time=00:00:03.32 bitrate= 525.6kbits/s Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute frame= 175 fps= 34 q=23.0 size= 254kB time=00:00:04.04 bitrate= 516.0kbits/s Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute frame= 193 fps= 35 q=23.0 size= 287kB time=00:00:04.76 bitrate= 494.6kbits/s Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one 
compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute frame= 211 fps= 35 q=23.0 size= 332kB time=00:00:05.48 bitrate= 496.4kbits/s Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute frame= 228 fps= 34 q=23.0 size= 421kB time=00:00:06.16 bitrate= 559.8kbits/s frame= 240 fps= 32 q=-1.0 Lsize= 708kB time=00:00:09.52 bitrate= 609.3kbits/s video:705kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.505636% [libx264 @ 0000000004d11540] frame I:2 Avg QP:15.07 size: 18186 [libx264 @ 0000000004d11540] frame P:73 Avg QP:16.51 size: 3719 [libx264 @ 0000000004d11540] frame B:165 Avg QP:18.38 size: 2502 [libx264 @ 0000000004d11540] consecutive B-frames: 2.5% 3.3% 42.5% 51.7% [libx264 @ 0000000004d11540] mb I I16..4: 46.2% 33.3% 20.4% [libx264 @ 0000000004d11540] mb P I16..4: 6.8% 2.0% 0.6% P16..4: 29.4% 10.5% 4.6% 0.0% 0.0% skip:46.1% [libx264 @ 0000000004d11540] mb B I16..4: 1.8% 0.7% 0.2% B16..8: 40.9% 6.5% 0.3% direct: 1.2% skip:48.5% L0:52.0% L1:47.5% BI: 0.5% [libx264 @ 0000000004d11540] 8x8 transform intra:24.7% inter:81.3% [libx264 @ 0000000004d11540] direct mvs spatial:93.3% temporal:6.7% [libx264 @ 0000000004d11540] coded y,uvDC,uvAC intra: 10.7% 31.4% 24.9% inter: 2.3% 9.0% 2.9% [libx264 @ 0000000004d11540] i16 v,h,dc,p: 83% 11% 6% 1% [libx264 @ 0000000004d11540] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 9% 9% 52% 6% 4% 4% 5% 5% 5% [libx264 @ 0000000004d11540] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 11% 44% 5% 4% 3% 3% 4% 3% [libx264 @ 0000000004d11540] i8c dc,h,v,p: 69% 15% 15% 2% [libx264 @ 0000000004d11540] Weighted P-Frames: Y:0.0% UV:0.0% [libx264 @ 0000000004d11540] ref P L0: 48.9% 0.1% 16.8% 17.0% 11.3% 5.8% [libx264 @ 0000000004d11540] ref B L0: 57.7% 21.9% 13.9% 6.4% [libx264 @ 0000000004d11540] ref B L1: 82.4% 17.6% [libx264 @ 0000000004d11540] kb/s:600.61 For me it looks like it converts the video first and afterwards applies the gamma correction on 8-bit clipped video. Does someone have an idea?

    Read the article

  • The endpoint reference (EPR) for the Operation not found is

    - by Denise Wu
    I have been struggling with the following error for the last couple of days - can you please help? I generated my server and client code using the wsdl2java tool from a WSDL 2.0 file. When invoking the web service I am getting the following error: org.apache.axis2.AxisFault: The endpoint reference (EPR) for the Operation not found is /axis2/services/MyService/authentication/?username=Denise345&password=xxxxx and the WSA Action = null My service is displayed on the Axis2 web page with all available methods. Here is the output from TcpMon: ============== Listen Port: 8090 Target Host: 127.0.0.1 Target Port: 8080 ==== Request ==== GET /axis2/services/MyService/authentication/?username=Denise345&password=xxxxx HTTP/1.1 Content-Type: application/x-www-form-urlencoded; charset=UTF-8 SOAPAction: "" User-Agent: Axis2 Host: 127.0.0.1:8090 ==== Response ==== HTTP/1.1 500 Internal Server Error Server: Apache-Coyote/1.1 Content-Type: application/xml;charset=UTF-8 Transfer-Encoding: chunked Date: Thu, 12 May 2011 15:53:20 GMT Connection: close 12b <soapenv:Reason xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"> <soapenv:Text xml:lang="en-US">The endpoint reference (EPR) for the Operation not found is /axis2/services/MyService/authentication/?username=Denise345&password=xxxxx and the WSA Action = null</soapenv:Text></soapenv:Reason> 0 ============== I am using: * axis2-1.5.4 * Tomcat 7.0.8 * a WSDL 2.0 file Please help!
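    The TcpMon trace gives a hint: the client is sending a plain HTTP GET with query parameters and an empty SOAPAction, which the Axis2 engine cannot map to an operation - hence the "WSA Action = null" in the fault. As a sanity check, one way to invoke the operation with an explicit action and a proper SOAP body is the plain ServiceClient API instead of the generated stub. This is only a hedged sketch: the payload namespace, the endpoint URL and the "urn:authentication" action string are assumptions, not values taken from the actual WSDL.

    import org.apache.axiom.om.OMAbstractFactory;
    import org.apache.axiom.om.OMElement;
    import org.apache.axiom.om.OMFactory;
    import org.apache.axiom.om.OMNamespace;
    import org.apache.axis2.addressing.EndpointReference;
    import org.apache.axis2.client.Options;
    import org.apache.axis2.client.ServiceClient;

    public class AuthenticationClient {
        public static void main(String[] args) throws Exception {
            // Build the <authentication> payload by hand (namespace is assumed)
            OMFactory factory = OMAbstractFactory.getOMFactory();
            OMNamespace ns = factory.createOMNamespace("http://myservice.example/xsd", "ns");
            OMElement auth = factory.createOMElement("authentication", ns);
            OMElement user = factory.createOMElement("username", ns);
            user.setText("Denise345");
            OMElement pass = factory.createOMElement("password", ns);
            pass.setText("xxxxx");
            auth.addChild(user);
            auth.addChild(pass);

            // Address the service itself (not the .../authentication/?... URL) and
            // set an explicit action so the dispatchers have something to match
            Options options = new Options();
            options.setTo(new EndpointReference("http://127.0.0.1:8080/axis2/services/MyService"));
            options.setAction("urn:authentication");

            ServiceClient client = new ServiceClient();
            client.setOptions(options);
            OMElement response = client.sendReceive(auth); // sends a SOAP POST, not a GET
            System.out.println(response);
        }
    }

    If that call succeeds, the problem is in how the generated stub (or its WSDL) addresses the operation rather than in the service deployment itself.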

    Read the article

  • Calling Msbuild from Php - Wrong Codepage and Culture

    - by miasbeck
    I have a Php script that calls Msbuild via System: <?php system( "msbuild umlaut.proj" ); ?> This is the project file: <?xml version="1.0" encoding="UTF-8"?> <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="EchoUmlaut" ToolsVersion="3.5"> <Target Name="EchoUmlaut"> <Message Text="Umlaute: Ä Ö Ü ä ö ü ß" /> </Target> </Project> When I call Msbuild directly from the command line the output of msbuild is in German (as excpected) and the umlauts come out OK (I chcp to 1252). But when I use php to call msbuild the umlauts are wrong, and the output of msbuild is changed to English. I wonder what I can do to prevent this. C:\>chcp Aktive Codepage: 1252. C:\>msbuild umlaut.proj Microsoft (R)-Buildmodul, Version 3.5.30729.1 [Microsoft .NET Framework, Version 2.0.50727.3607] Copyright (C) Microsoft Corporation 2007. Alle Rechte vorbehalten. Das Erstellen wurde am 13.04.2010 08:57:04 gestartet. Projekt "D:\Cvsroot\projekte\e4elaui\v1.0\umlaut.proj" auf Knoten 0 (Standardziele). Umlaute: Ä Ö Ü ä ö ü ß Die Erstellung von Projekt "D:\Cvsroot\projekte\e4elaui\v1.0\umlaut.proj" ist abgeschlossen (Standardziele). Das Erstellen war erfolgreich. 0 Warnung(en) 0 Fehler Vergangene Zeit 00:00:00 C:\>php call_from_php.php Microsoft (R) Build Engine Version 3.5.30729.1 [Microsoft .NET Framework, Version 2.0.50727.3607] Copyright (C) Microsoft Corporation 2007. All rights reserved. Build started 13.04.2010 08:57:11. Project "D:\Cvsroot\projekte\e4elaui\v1.0\umlaut.proj" on node 0 (default targets). Umlaute: Ž ™ š „ ” á Done Building Project "D:\Cvsroot\projekte\e4elaui\v1.0\umlaut.proj" (default targets). Build succeeded. 0 Warning(s) 0 Error(s) Time Elapsed 00:00:00

    Read the article

  • Android, sending XML via HTTP POST (SOAP)

    - by Intosia
    Hi, I would like to invoke a web service via Android. I need to POST some XML to a URL via HTTP. I found this snippet for sending a POST, but I don't know how to include/add the XML data itself. public void postData() { // Create a new HttpClient and Post Header HttpClient httpclient = new DefaultHttpClient(); HttpPost httppost = new HttpPost("http://10.10.4.35:53011/"); try { // Add your data List<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(2); nameValuePairs.add(new BasicNameValuePair("Content-Type", "application/soap+xml")); httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs)); // Where/how to add the XML data? // Execute HTTP Post Request HttpResponse response = httpclient.execute(httppost); } catch (ClientProtocolException e) { // TODO Auto-generated catch block } catch (IOException e) { // TODO Auto-generated catch block } } This is the complete POST message that I need to imitate: POST /a8103e90-f1e3-11dd-bfdb-8b1fcff1a110 HTTP/1.1 Host: 10.10.4.35:53011 Content-Type: application/soap+xml Content-Length: 602 <?xml version='1.0' encoding='UTF-8' ?> <s12:Envelope xmlns:s12="http://www.w3.org/2003/05/soap-envelope" xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"> <s12:Header> <wsa:MessageID>urn:uuid:fc061d40-3d63-11df-bfba-62764ccc0e48</wsa:MessageID> <wsa:Action>http://schemas.xmlsoap.org/ws/2004/09/transfer/Get</wsa:Action> <wsa:To>urn:uuid:a8103e90-f1e3-11dd-bfdb-8b1fcff1a110</wsa:To> <wsa:ReplyTo> <wsa:Address>http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</wsa:Address> </wsa:ReplyTo> </s12:Header> <s12:Body /> </s12:Envelope>
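    The UrlEncodedFormEntity in that snippet is meant for HTML-form fields, not for a raw XML body. A sketch of one way to attach the XML itself with a StringEntity, assuming the envelope shown above is already in a String named xml (the class and method names here are illustrative):

    import java.io.IOException;

    import org.apache.http.HttpResponse;
    import org.apache.http.client.HttpClient;
    import org.apache.http.client.methods.HttpPost;
    import org.apache.http.entity.StringEntity;
    import org.apache.http.impl.client.DefaultHttpClient;

    public class SoapPoster {
        public HttpResponse postXml(String xml) throws IOException {
            HttpClient httpclient = new DefaultHttpClient();
            HttpPost httppost = new HttpPost("http://10.10.4.35:53011/");

            // The XML goes into the request body; the Content-Type header is
            // taken from the entity, so no name/value pair is needed for it
            StringEntity entity = new StringEntity(xml, "UTF-8");
            entity.setContentType("application/soap+xml");
            httppost.setEntity(entity);

            // Content-Length is derived from the entity automatically
            return httpclient.execute(httppost);
        }
    }

    Note that the Content-Length: 602 header from the captured request does not need to be set by hand.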

    Read the article

  • In WCF How Can I add SAML 2.0 assertion to SOAP Header?

    - by Tone
    I'm trying to add the saml 2.0 assertion node from the soap header example below - I came across the samlassertion type in the .net framework but that looks like it is only for saml 1.1. <S:Header> <To xmlns="http://www.w3.org/2005/08/addressing">https://rs1.greenwaymedical.com:8181/CONNECTGateway/EntityService/NhincProxyXDRRequestSecured</To> <Action xmlns="http://www.w3.org/2005/08/addressing">tns:ProvideAndRegisterDocumentSet-bRequest_Request</Action> <ReplyTo xmlns="http://www.w3.org/2005/08/addressing"> <Address>http://www.w3.org/2005/08/addressing/anonymous</Address> </ReplyTo> <MessageID xmlns="http://www.w3.org/2005/08/addressing">uuid:662ee047-3437-4781-a8d2-ee91bc940ef0</MessageID> <wsse:Security S:mustUnderstand="1"> <wsu:Timestamp xmlns:ns17="http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512" xmlns:ns16="http://www.w3.org/2003/05/soap-envelope" wsu:Id="_1"> <wsu:Created>2010-05-26T03:51:57Z</wsu:Created> <wsu:Expires>2010-05-26T03:56:57Z</wsu:Expires> </wsu:Timestamp> <saml2:Assertion xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:exc14n="http://www.w3.org/2001/10/xml-exc-c14n#" xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xenc="http://www.w3.org/2001/04/xmlenc#" xmlns:xs="http://www.w3.org/2001/XMLSchema" ID="bd1ecf8d-a6d8-488d-9183-a11227c6a219" IssueInstant="2010-05-26T03:51:57.959Z" Version="2.0"> <saml2:Issuer Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">CN=SAML User,OU=SU,O=SAML User,L=Los Angeles,ST=CA,C=US</saml2:Issuer> <saml2:Subject> <saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">UID=kskagerb</saml2:NameID> <saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:holder-of-key"> <saml2:SubjectConfirmationData> <ds:KeyInfo> <ds:KeyValue> <ds:RSAKeyValue> <ds:Modulus>p4jUkEUg..gwO7U=</ds:Modulus> <ds:Exponent>AQAB</ds:Exponent> </ds:RSAKeyValue> </ds:KeyValue> </ds:KeyInfo> </saml2:SubjectConfirmationData> </saml2:SubjectConfirmation> </saml2:Subject> <saml2:AuthnStatement AuthnInstant="2009-04-16T13:15:39.000Z" SessionIndex="987"> <saml2:SubjectLocality Address="158.147.185.168" DNSName="cs.myharris.net"/> <saml2:AuthnContext> <saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:X509</saml2:AuthnContextClassRef> </saml2:AuthnContext> </saml2:AuthnStatement> <saml2:AttributeStatement> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:subject-id"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">Karl S Skagerberg</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:organization"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">InternalTest2</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:organization-id"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">2.2</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:nhin:names:saml:homeCommunityId"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">2.16.840.1.113883.3.441</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xacml:2.0:subject:role"> <saml2:AttributeValue> <hl7:Role 
xmlns:hl7="urn:hl7-org:v3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" code="307969004" codeSystem="2.16.840.1.113883.6.96" codeSystemName="SNOMED_CT" displayName="Public Health" xsi:type="hl7:CE"/> </saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:purposeofuse"> <saml2:AttributeValue> <hl7:PurposeForUse xmlns:hl7="urn:hl7-org:v3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" code="PUBLICHEALTH" codeSystem="2.16.840.1.113883.3.18.7.1" codeSystemName="nhin-purpose" displayName="Use or disclosure of Psychotherapy Notes" xsi:type="hl7:CE"/> </saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xacml:2.0:resource:resource-id"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">500000000^^^&amp;1.1&amp;ISO</saml2:AttributeValue> </saml2:Attribute> </saml2:AttributeStatement> <saml2:AuthzDecisionStatement Decision="Permit" Resource="https://158.147.185.168:8181/SamlReceiveService/SamlProcessWS"> <saml2:Action Namespace="urn:oasis:names:tc:SAML:1.0:action:rwedc">Execute</saml2:Action> <saml2:Evidence> <saml2:Assertion ID="40df7c0a-ff3e-4b26-baeb-f2910f6d05a9" IssueInstant="2009-04-16T13:10:39.093Z" Version="2.0"> <saml2:Issuer Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">CN=SAML User,OU=Harris,O=HITS,L=Melbourne,ST=FL,C=US</saml2:Issuer> <saml2:Conditions NotBefore="2009-04-16T13:10:39.093Z" NotOnOrAfter="2009-12-31T12:00:00.000Z"/> <saml2:AttributeStatement> <saml2:Attribute Name="AccessConsentPolicy" NameFormat="http://www.hhs.gov/healthit/nhin"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">Claim-Ref-1234</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="InstanceAccessConsentPolicy" NameFormat="http://www.hhs.gov/healthit/nhin"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">Claim-Instance-1</saml2:AttributeValue> </saml2:Attribute> </saml2:AttributeStatement> </saml2:Assertion> </saml2:Evidence> </saml2:AuthzDecisionStatement> <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/> <ds:Reference URI="#bd1ecf8d-a6d8-488d-9183-a11227c6a219"> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> </ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/> <ds:DigestValue>ONbZqPUyFVPMx4v9vvpJGNB4cao=</ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue>Dm/aW5bB..pF93s=</ds:SignatureValue> <ds:KeyInfo> <ds:KeyValue> <ds:RSAKeyValue> <ds:Modulus>p4jUkEU..bzqgwO7U=</ds:Modulus> <ds:Exponent>AQAB</ds:Exponent> </ds:RSAKeyValue> </ds:KeyValue> </ds:KeyInfo> </ds:Signature> </saml2:Assertion> <ds:Signature xmlns:ns17="http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512" xmlns:ns16="http://www.w3.org/2003/05/soap-envelope" Id="_2"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <exc14n:InclusiveNamespaces PrefixList="wsse S"/> </ds:CanonicalizationMethod> <ds:SignatureMethod 
Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/> <ds:Reference URI="#_1"> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <exc14n:InclusiveNamespaces PrefixList="wsu wsse S"/> </ds:Transform> </ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/> <ds:DigestValue> <Include xmlns="http://www.w3.org/2004/08/xop/include" href="cid:[email protected]"/> </ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue> <Include xmlns="http://www.w3.org/2004/08/xop/include" href="cid:[email protected]"/> </ds:SignatureValue> <ds:KeyInfo> <wsse:SecurityTokenReference wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0"> <wsse:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">bd1ecf8d-a6d8-488d-9183-a11227c6a219</wsse:KeyIdentifier> </wsse:SecurityTokenReference> </ds:KeyInfo> </ds:Signature> </wsse:Security> </S:Header> I've been researching for days and cannot seem to come up with a straightforward way of doing this in WCF. The web service is running on Glassfish and is soap 1.1, I've tried using all the packaged wcf bindings but have not been able to get them to work. I started down the path of using a MessageInspector, and wrote one but then realized there must be a better way, surely WCF provides some way to insert saml 2.0 assertions. I've made the most progress writing a custom binding - i've been able to get the timestamp and signature nodes in the soap header, but cannot for the life of me figure out the saml assertion. Any ideas? public static System.ServiceModel.Channels.Binding BuildCONNECTCustomBinding() { TransportSecurityBindingElement transportSecurityBindingElement = SecurityBindingElement.CreateCertificateOverTransportBindingElement(MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10); TextMessageEncodingBindingElement textMessageEncodingBindingElement = new TextMessageEncodingBindingElement(MessageVersion.Soap11WSAddressing10, System.Text.Encoding.UTF8); HttpsTransportBindingElement httpsTransportBindingElement = new HttpsTransportBindingElement(); SecurityTokenReferenceType securityTokenReference = new SecurityTokenReferenceType(); BindingElementCollection bindingElementCollection = new BindingElementCollection(); bindingElementCollection.Add(transportSecurityBindingElement); bindingElementCollection.Add(textMessageEncodingBindingElement); bindingElementCollection.Add(httpsTransportBindingElement); CustomBinding cb = new CustomBinding(bindingElementCollection); cb.CreateBindingElements(); return cb; }

    Read the article

  • problem with addressfilter in WCF

    - by Zé Carlos
    I've built my own WCF channel with all the necessary pieces (encoders, bindings, etc.) to use with ServiceHost. I just want to build the "channel stack", making no customizations at the Service Model layer. To accomplish this, my encoder returns well-formed ServiceModel.Message instances with an XML infoset just like any other channel produces. Let's assume the following service implementation: [ServiceContract(Namespace = "http://MyNS")] public interface IService1 { [OperationContract(IsOneWay = true)] void dummy(); } public class Service1 : IService1 { public void dummy() { Console.WriteLine("In Service1:dummy()"); } } I used this service through other bindings and traced the following ServiceModel.Message contents (SOAP format): <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:a="http://www.w3.org/2005/08/addressing"> <s:Header> <a:Action s:mustUnderstand="1">http://MyNS/IService1/dummy</a:Action> <a:To s:mustUnderstand="1">amqp://localhost</a:To> </s:Header> <s:Body> <dummy xmlns="http://MyNS"></dummy> </s:Body> </s:Envelope> Then, just to debug, I changed my encoder to always return this message. When I use my custom channel, the WCF runtime replies with a fault message: "The message with To '' cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and receiver's EndpointAddresses agree." I read that the default EndpointDispatcher.AddressFilter simply looks at the "To" header and delivers the message to the corresponding service. This works through other bindings, so why doesn't it work with my custom channel too? Is there any way to check what the default AddressFilter is doing? Thanks
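    Two hedged checks that may help here, given the fault text. First, the dispatcher can be told to stop matching on the To header entirely; AddressFilterMode.Any is a standard System.ServiceModel setting, and Service1/IService1 are the types from the question. Second, the default filter can be inspected directly once the host is open (AddressFilterProbe is a hypothetical helper name):

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Dispatcher;

        // Accept any To header instead of requiring an exact endpoint-address match.
        [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
        public class Service1 : IService1
        {
            public void dummy() { Console.WriteLine("In Service1:dummy()"); }
        }

        public static class AddressFilterProbe // hypothetical helper name
        {
            // Prints the filter each endpoint dispatcher is using; by default this
            // is an EndpointAddressMessageFilter built from the endpoint's address.
            public static void Dump(ServiceHost host)
            {
                foreach (ChannelDispatcher cd in host.ChannelDispatchers)
                    foreach (EndpointDispatcher ed in cd.Endpoints)
                        Console.WriteLine("{0} -> {1}",
                            ed.EndpointAddress, ed.AddressFilter.GetType().Name);
            }
        }

    Calling AddressFilterProbe.Dump(host) after host.Open() shows whether the endpoint expects an address other than amqp://localhost, which is usually the quickest way to see why the default EndpointAddressMessageFilter rejects the message.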

    Read the article

  • In a WCF Client How Can I add SAML 2.0 assertion to SOAP Header?

    - by Tone
    I'm trying to add the SAML 2.0 assertion node from the SOAP header example below. I came across the SamlAssertion type in the .NET Framework, but that looks like it is only for SAML 1.1. <S:Header> <To xmlns="http://www.w3.org/2005/08/addressing">https://rs1.greenwaymedical.com:8181/CONNECTGateway/EntityService/NhincProxyXDRRequestSecured</To> <Action xmlns="http://www.w3.org/2005/08/addressing">tns:ProvideAndRegisterDocumentSet-bRequest_Request</Action> <ReplyTo xmlns="http://www.w3.org/2005/08/addressing"> <Address>http://www.w3.org/2005/08/addressing/anonymous</Address> </ReplyTo> <MessageID xmlns="http://www.w3.org/2005/08/addressing">uuid:662ee047-3437-4781-a8d2-ee91bc940ef0</MessageID> <wsse:Security S:mustUnderstand="1"> <wsu:Timestamp xmlns:ns17="http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512" xmlns:ns16="http://www.w3.org/2003/05/soap-envelope" wsu:Id="_1"> <wsu:Created>2010-05-26T03:51:57Z</wsu:Created> <wsu:Expires>2010-05-26T03:56:57Z</wsu:Expires> </wsu:Timestamp> <saml2:Assertion xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:exc14n="http://www.w3.org/2001/10/xml-exc-c14n#" xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xenc="http://www.w3.org/2001/04/xmlenc#" xmlns:xs="http://www.w3.org/2001/XMLSchema" ID="bd1ecf8d-a6d8-488d-9183-a11227c6a219" IssueInstant="2010-05-26T03:51:57.959Z" Version="2.0"> <saml2:Issuer Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">CN=SAML User,OU=SU,O=SAML User,L=Los Angeles,ST=CA,C=US</saml2:Issuer> <saml2:Subject> <saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">UID=kskagerb</saml2:NameID> <saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:holder-of-key"> <saml2:SubjectConfirmationData> <ds:KeyInfo> <ds:KeyValue> <ds:RSAKeyValue> <ds:Modulus>p4jUkEUg..gwO7U=</ds:Modulus> <ds:Exponent>AQAB</ds:Exponent> </ds:RSAKeyValue> </ds:KeyValue> </ds:KeyInfo> </saml2:SubjectConfirmationData> </saml2:SubjectConfirmation> </saml2:Subject> <saml2:AuthnStatement AuthnInstant="2009-04-16T13:15:39.000Z" SessionIndex="987"> <saml2:SubjectLocality Address="158.147.185.168" DNSName="cs.myharris.net"/> <saml2:AuthnContext> <saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:X509</saml2:AuthnContextClassRef> </saml2:AuthnContext> </saml2:AuthnStatement> <saml2:AttributeStatement> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:subject-id"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">Karl S Skagerberg</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:organization"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">InternalTest2</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:organization-id"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">2.2</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:nhin:names:saml:homeCommunityId"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">2.16.840.1.113883.3.441</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xacml:2.0:subject:role"> <saml2:AttributeValue> <hl7:Role 
xmlns:hl7="urn:hl7-org:v3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" code="307969004" codeSystem="2.16.840.1.113883.6.96" codeSystemName="SNOMED_CT" displayName="Public Health" xsi:type="hl7:CE"/> </saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:purposeofuse"> <saml2:AttributeValue> <hl7:PurposeForUse xmlns:hl7="urn:hl7-org:v3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" code="PUBLICHEALTH" codeSystem="2.16.840.1.113883.3.18.7.1" codeSystemName="nhin-purpose" displayName="Use or disclosure of Psychotherapy Notes" xsi:type="hl7:CE"/> </saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="urn:oasis:names:tc:xacml:2.0:resource:resource-id"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">500000000^^^&amp;1.1&amp;ISO</saml2:AttributeValue> </saml2:Attribute> </saml2:AttributeStatement> <saml2:AuthzDecisionStatement Decision="Permit" Resource="https://158.147.185.168:8181/SamlReceiveService/SamlProcessWS"> <saml2:Action Namespace="urn:oasis:names:tc:SAML:1.0:action:rwedc">Execute</saml2:Action> <saml2:Evidence> <saml2:Assertion ID="40df7c0a-ff3e-4b26-baeb-f2910f6d05a9" IssueInstant="2009-04-16T13:10:39.093Z" Version="2.0"> <saml2:Issuer Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">CN=SAML User,OU=Harris,O=HITS,L=Melbourne,ST=FL,C=US</saml2:Issuer> <saml2:Conditions NotBefore="2009-04-16T13:10:39.093Z" NotOnOrAfter="2009-12-31T12:00:00.000Z"/> <saml2:AttributeStatement> <saml2:Attribute Name="AccessConsentPolicy" NameFormat="http://www.hhs.gov/healthit/nhin"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">Claim-Ref-1234</saml2:AttributeValue> </saml2:Attribute> <saml2:Attribute Name="InstanceAccessConsentPolicy" NameFormat="http://www.hhs.gov/healthit/nhin"> <saml2:AttributeValue xmlns:ns6="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns7="http://www.w3.org/2001/XMLSchema" ns6:type="ns7:string">Claim-Instance-1</saml2:AttributeValue> </saml2:Attribute> </saml2:AttributeStatement> </saml2:Assertion> </saml2:Evidence> </saml2:AuthzDecisionStatement> <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/> <ds:Reference URI="#bd1ecf8d-a6d8-488d-9183-a11227c6a219"> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> </ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/> <ds:DigestValue>ONbZqPUyFVPMx4v9vvpJGNB4cao=</ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue>Dm/aW5bB..pF93s=</ds:SignatureValue> <ds:KeyInfo> <ds:KeyValue> <ds:RSAKeyValue> <ds:Modulus>p4jUkEU..bzqgwO7U=</ds:Modulus> <ds:Exponent>AQAB</ds:Exponent> </ds:RSAKeyValue> </ds:KeyValue> </ds:KeyInfo> </ds:Signature> </saml2:Assertion> <ds:Signature xmlns:ns17="http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512" xmlns:ns16="http://www.w3.org/2003/05/soap-envelope" Id="_2"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <exc14n:InclusiveNamespaces PrefixList="wsse S"/> </ds:CanonicalizationMethod> <ds:SignatureMethod 
Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/> <ds:Reference URI="#_1"> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <exc14n:InclusiveNamespaces PrefixList="wsu wsse S"/> </ds:Transform> </ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/> <ds:DigestValue> <Include xmlns="http://www.w3.org/2004/08/xop/include" href="cid:[email protected]"/> </ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue> <Include xmlns="http://www.w3.org/2004/08/xop/include" href="cid:[email protected]"/> </ds:SignatureValue> <ds:KeyInfo> <wsse:SecurityTokenReference wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0"> <wsse:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">bd1ecf8d-a6d8-488d-9183-a11227c6a219</wsse:KeyIdentifier> </wsse:SecurityTokenReference> </ds:KeyInfo> </ds:Signature> </wsse:Security> </S:Header> I've been researching for days and cannot seem to come up with a straightforward way of doing this in WCF. The web service is running on Glassfish and is soap 1.1, I've tried using all the packaged wcf bindings but have not been able to get them to work. I started down the path of using a MessageInspector, and wrote one but then realized there must be a better way, surely WCF provides some way to insert saml 2.0 assertions. I've made the most progress writing a custom binding - i've been able to get the timestamp and signature nodes in the soap header, but cannot for the life of me figure out the saml assertion. Any ideas? public static System.ServiceModel.Channels.Binding BuildCONNECTCustomBinding() { TransportSecurityBindingElement transportSecurityBindingElement = SecurityBindingElement.CreateCertificateOverTransportBindingElement(MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10); TextMessageEncodingBindingElement textMessageEncodingBindingElement = new TextMessageEncodingBindingElement(MessageVersion.Soap11WSAddressing10, System.Text.Encoding.UTF8); HttpsTransportBindingElement httpsTransportBindingElement = new HttpsTransportBindingElement(); SecurityTokenReferenceType securityTokenReference = new SecurityTokenReferenceType(); BindingElementCollection bindingElementCollection = new BindingElementCollection(); bindingElementCollection.Add(transportSecurityBindingElement); bindingElementCollection.Add(textMessageEncodingBindingElement); bindingElementCollection.Add(httpsTransportBindingElement); CustomBinding cb = new CustomBinding(bindingElementCollection); cb.CreateBindingElements(); return cb; }

    Read the article

  • Gallio MbUnit and Team City problem

    - by Bernard Larouche
    I asked a question this morning about an integration problem between Gallio and TeamCity. I changed the MSBuild file to use the proper syntax with the latest Gallio build script API. Thanks for that, Jeff Brown, but now when I try to build the application on TeamCity I get the following error: An unexpected error occurred during execution of the Gallio task.[16:19:49]: [Project "CoderForTraders.msbuild.teamcity.patch.tcprojx" (RebuildSolution;RunTests target(s)):] C:\TeamCity\buildAgent\work\fa1d38b0af329d65\CoderForTraders.msbuild(9, 9): FilterParseException: Colon expected Here's line 9: <Gallio IgnoreFailures="true" Filter="Type=SomeFixture" Files="@(TestFile)"> and here is the whole file: <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <!-- This is needed by MSBuild to locate the Gallio task --> <UsingTask AssemblyFile="C:\Gallio\bin\Gallio.MSBuildTasks.dll" TaskName="Gallio" /> <!-- Specify the test files and assemblies --> <ItemGroup> <TestFile Include="C:\_CBL\CBL\CoderForTraders\Source\trunk\UnitTest\DomainModel.Tests\bin\Debug\CBL.CoderForTraders.DomainModel.Tests.dll" /> </ItemGroup> <Target Name="RunTests"> <Gallio IgnoreFailures="true" Filter="Type=SomeFixture" Files="@(TestFile)"> <!-- This tells MSBuild to store the output value of the task's ExitCode property into the project's ExitCode property --> <Output TaskParameter="ExitCode" PropertyName="ExitCode"/> </Gallio> <Error Text="Tests execution failed" Condition="'$(ExitCode)' != 0" /> </Target> <Target Name="RebuildSolution"> <Message Text="Starting to Build"/> <MSBuild Projects="CoderForTraders.sln" Properties="Configuration=Debug" Targets="Rebuild" /> </Target> </Project> Any idea what the problem might be?
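    A hedged reading of that error: Gallio's test filter grammar separates the filter key from its values with a colon rather than an equals sign, so the parser stops exactly where "Colon expected" says it does. Assuming the rest of the task is wired correctly, line 9 of the MSBuild file would become:

        <Gallio IgnoreFailures="true" Filter="Type: SomeFixture" Files="@(TestFile)">

    (SomeFixture is the placeholder fixture name from the question; substitute the real fixture type.)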

    Read the article

  • AjaxControlToolkit Resource Files Not Copied To Output in MSBuild Script

    - by Dario Solera
    I'm new to MSBuild, but I managed to set up the following simple script: <Project ToolsVersion="3.5" DefaultTargets="Compile" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <PropertyGroup> <Configuration Condition="'$(Configuration)' == ''">Debug</Configuration> </PropertyGroup> <ItemGroup> <SolutionRoot Include=".." /> <BuildArtifacts Include=".\Artifacts\" /> <SolutionFile Include="..\SolutionName.sln" /> </ItemGroup> <Target Name="Clean"> <RemoveDir Directories="@(BuildArtifacts)" /> </Target> <Target Name="Init" DependsOnTargets="Clean"> <MakeDir Directories="@(BuildArtifacts)" /> </Target> <Target Name="Compile" DependsOnTargets="Init"> <MSBuild Projects="@(SolutionFile)" Properties="OutDir=%(BuildArtifacts.FullPath);Configuration=$(Configuration)" /> <MakeDir Directories="%(BuildArtifacts.FullPath)\_PublishedWebsites\RDE.XAP.UnifiedGui.Web\Temp" /> </Target> </Project> The solution has 23 projects, 4 of which are WebApps. Now, the script works fine and the output is generated correctly. The only problem I encounter is with two WebApp projects in the solution that use the AJAX Control Toolkit. The toolkit has a set of folders (e.g. ar, it, es, fr) that contain localized resources. These folders are not copied into the bin directory of the WebApps when the solution is built in MSBuild, but they are copied when it is built in Visual Studio. How can I solve this in a clean manner? I know I could write a (quite convoluted) task that copies the directories after the compile, but it does not seem the right solution to me. Also, neither Google, SO, nor MSDN could provide more details on this kind of issue.
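    For what it's worth, a hedged sketch of that copy step, small enough to live in the same script. The source path for the toolkit's culture folders is an assumption (they hold satellite assemblies such as ar\AjaxControlToolkit.resources.dll next to the referenced AjaxControlToolkit.dll); the destination reuses the web app name already present in the script above:

        <!-- Assumed location of the toolkit's localized satellite assemblies. -->
        <ItemGroup>
          <ToolkitResources Include="..\Libraries\AjaxControlToolkit\**\*.resources.dll" />
        </ItemGroup>
        <!-- Copies each culture folder into the published web app's bin directory. -->
        <Target Name="CopyToolkitResources" DependsOnTargets="Compile">
          <Copy SourceFiles="@(ToolkitResources)"
                DestinationFolder=".\Artifacts\_PublishedWebsites\RDE.XAP.UnifiedGui.Web\bin\%(RecursiveDir)" />
        </Target>

    Invoking the script with /t:CopyToolkitResources (or making it the default target) leaves the existing targets untouched.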

    Read the article

  • Not all TFS Build type files are getting copied

    - by k4k4sh1
    Because I have several builds sharing some assemblies containing common build tasks, I have one TFSBuild.proj for all builds and import different targets depending on the build, like the following: <Project DefaultTargets="DesktopBuild" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5"> <Import Project="Build_1.targets" Condition="'$(BuildDefinition)'=='Build_1'" /> <Import Project="Build_2.targets" Condition="'$(BuildDefinition)'=='Build_2'" /> <Import Project="Build_3.targets" Condition="'$(BuildDefinition)'=='Build_3'" /> </Project> Each target for a particular build has your usual content for a build type file, but in my case, I also reference some tasks inside assemblies checked into the same folder as TFSBuild.proj in source control. I wanted to add folders to contain some test build targets, since my folder was getting a bit full and cluttered. The following illustrates what I mean:

        $(TFS project)\build\
            TFSBuild.proj
            Build_1.targets
            ...
            Assembly1.dll
            Assembly2.dll
            ...
            Folder\
                Test_target_1.targets
                ...

    When I started my build, however, I found that Test_target_1.targets and the other files in Folder were not being copied to the build directory, while TFSBuild.proj and the other files at the root level of the build type folder were being copied. This meant my test build could not reference the files inside Folder, so the build failed immediately. I realize the simplest workaround would be to get rid of Folder and move all of its contents up to the build folder, but I would really like to keep Folder if at all possible. Thanks for your help in advance.

    Read the article

  • What is usefulness of W3C's "Semantic Data Extractor" in semantically correct XHTML CSS Development?

    - by metal-gear-solid
    What is the usefulness of W3C's Semantic Data Extractor? http://www.w3.org/2003/12/semantic-extractor.html This tool, geared by an XSLT stylesheet, tries to extract some information from a HTML semantic rich document. It only uses information available through a good usage of the semantics defined in HTML. The aim is to show that providing a semantically rich HTML gives much more value to your code: using a semantically rich HTML code allows a better use of CSS, makes your HTML intelligible to a wider range of user agents (especially search engines bots). As an aside, it can give clues to user agents developers on some hooks that could be interesting to add in their product. After checking validation for CSS and HTML, should I go for the Semantic Data Extractor tool? What does it do, and how can it improve our coding? Is anyone using it? I checked some sites at random with it, but with most sites it gives an error: Using org.apache.xerces.parsers.SAXParser Exception net.sf.saxon.trans.XPathException: org.xml.sax.SAXParseException: The element type "input" must be terminated by the matching end-tag "`</input>`". org.xml.sax.SAXParseException: The element type "input" must be terminated by the matching end-tag "`</input>`". Is it possible for every site to pass this tool? On one site I got this error: No top-level heading (h1) found, no outline extracted. Is it necessary to have at least an H1 on every webpage?
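    A hedged note on those two failures: the extractor runs the page through an XSLT pipeline, so the document must first parse as well-formed XML before any semantics are read. In XHTML that means void elements have to be self-closed, e.g. <input type="text" name="q" /> (attribute values here are illustrative), which is what the "</input>" complaint is about. And since the tool builds its outline from the heading hierarchy, a page with no top-level heading gives it nothing to extract, hence the second message; that is a requirement of this particular tool rather than a rule of HTML itself.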

    Read the article

< Previous Page | 190 191 192 193 194 195 196  | Next Page >