Search Results

Search found 684 results on 28 pages for 'pipeline'.


  • Any new Sony RAID laptops coming after March 2010?

    - by Simon
    My current Sony laptop has RAID 0 (don't worry, I back it up 100% every day). I have 2 × 320 GB drives, giving me 640 GB. I want my next laptop to be RAID too, but with an SSD + 500 GB drive. Unfortunately, Sony's current line seems to have abandoned two-disk configurations, and I really don't want to go to a slower system than I have now. Does anybody know if Sony has any RAID laptops (16" or larger) in the pipeline - or do I have to switch to Dell or something else?

    Read the article

  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by Richard Beier
    We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed. These sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool. After that, the sites start working again. This only ever affects this one app pool - never any others on the same server.

    A couple of times, before recycling the pool, I've looked at the currently executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule. Which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth... but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this?

    There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code - multiple sites point at the same physical directory. The only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels - as the root of the site and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem.

    If we can't find the cause, we're thinking of a few workarounds we could try:

      - Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then in the web.config shared by all the other sites, remove the WindowsAuthenticationModule (sketched below). That way there should be no possibility of a hang within the WindowsAuthenticationModule.
      - Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server...
      - (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck.

    What do you think? Thanks for your help, Richard
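    For the first workaround, the module removal in the shared web.config would look something like this (a sketch, assuming IIS7 integrated mode; "WindowsAuthentication" is the stock IIS module name):

        <configuration>
          <system.webServer>
            <modules>
              <!-- Drop Windows auth from the shared sites so this module can never stall a request -->
              <remove name="WindowsAuthentication" />
            </modules>
          </system.webServer>
        </configuration>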

    Read the article

  • Apache mod_proxy vs mod_rewrite

    - by Scott
    What is the difference between using mod_proxy and mod_rewrite? I have a requirement to send certain URL patterns through Tomcat, which runs on the same host but under port 8080. I know this is something for mod_proxy, but I'm wondering why I can't just use mod_rewrite, or what the difference is? It probably has to do with reverse proxying, and also with where in the pipeline each one is handled? Thanks.
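    For comparison, both of these would hand matching requests to Tomcat on port 8080 (a sketch; the /app/ path is made up). Note that mod_rewrite's [P] flag just delegates to mod_proxy internally, so mod_proxy must be loaded either way:

        # mod_proxy: explicit reverse proxy
        ProxyPass        /app/ http://localhost:8080/app/
        ProxyPassReverse /app/ http://localhost:8080/app/

        # mod_rewrite: the [P] flag proxies the rewritten URL via mod_proxy
        RewriteEngine On
        RewriteRule ^/app/(.*)$ http://localhost:8080/app/$1 [P]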

    Read the article

  • Is it important to reboot Linux after a kernel update?

    - by lfaraone
    I have a few production Fedora and Debian web servers that host our sites as well as user shell accounts (used for git vcs work, some screen+irssi sessions, etc). Occasionally a new kernel update will come down the pipeline in yum/apt-get, and I was wondering if most of the fixes are severe enough to warrant a reboot, or if I can apply the fixes sans reboot. Our main development server currently has 213 days of uptime, and I wasn't sure if it was insecure to run such an old kernel.
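    One quick way to see whether a box is running a stale kernel is to compare the running kernel against the newest installed one (a sketch; the package queries differ between the two distros) - a kernel package installed by yum/apt-get only takes effect after a reboot:

        # Running kernel
        uname -r

        # Newest installed kernel on Debian
        dpkg --list 'linux-image-*' | grep '^ii' | sort | tail -n 1

        # Newest installed kernel on Fedora
        rpm -q kernel --last | head -n 1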

    Read the article

  • What is the best IIS tracing tool you have used?

    - by Vivek
    I have spent the majority of my career using and troubleshooting the IIS web server. In my opinion, the best thing that has happened to a web admin is FRT (Failed Request Tracing) in IIS 7.0. I have used Event Tracing for Windows as well, and FRT is just as helpful. Is there any other tracing tool which can give such good, in-depth understanding of request flow through the pipeline?
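    For reference, FRT can be switched on per site with a small web.config addition (a sketch - the trace areas and failure definitions here are just illustrative):

        <system.webServer>
          <tracing>
            <traceFailedRequests>
              <add path="*">
                <traceAreas>
                  <!-- Verbose trace of the whole request pipeline -->
                  <add provider="WWW Server"
                       areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module"
                       verbosity="Verbose" />
                </traceAreas>
                <!-- Log requests that fail with a 5xx or run longer than 10 seconds -->
                <failureDefinitions statusCodes="500-599" timeTaken="00:00:10" />
              </add>
            </traceFailedRequests>
          </tracing>
        </system.webServer>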

    Read the article

  • Why does use of H264 in sender/receiver pipelines introduce such a HUGE delay?

    - by Serguey Zefirov
    When I try to create a pipeline that uses H264 to transmit video, I get some enormous delay - up to 10 seconds to transmit video from my machine to... my machine! This is unacceptable for my goals, and I'd like to consult StackOverflow about what I (or someone else) am doing wrong. I took the pipelines from the gstrtpbin documentation page and slightly modified them to use Speex.

    This is the sender pipeline:

        #!/bin/sh
        gst-launch -v gstrtpbin name=rtpbin \
            v4l2src ! ffmpegcolorspace ! ffenc_h263 ! rtph263ppay ! rtpbin.send_rtp_sink_0 \
            rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 \
            rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5001 sync=false async=false \
            udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0 \
            pulsesrc ! audioconvert ! audioresample ! audio/x-raw-int,rate=16000 ! \
            speexenc bitrate=16000 ! rtpspeexpay ! rtpbin.send_rtp_sink_1 \
            rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5003 sync=false async=false \
            udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1

    The receiver pipeline:

        #!/bin/sh
        gst-launch -v \
            gstrtpbin name=rtpbin \
            udpsrc caps="application/x-rtp,media=(string)video, clock-rate=(int)90000, encoding-name=(string)H263-1998" \
                port=5000 ! rtpbin.recv_rtp_sink_0 \
            rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink \
            udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
            rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false \
            udpsrc caps="application/x-rtp,media=(string)audio, clock-rate=(int)16000, encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" \
                port=5002 ! rtpbin.recv_rtp_sink_1 \
            rtpbin. ! rtpspeexdepay ! speexdec ! audioresample ! audioconvert ! alsasink \
            udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5007 sync=false async=false

    Those pipelines, a combination of H263 and Speex, work well enough. I snap my fingers near the camera and microphone, and I see the movement and hear the sound at the same time.

    Then I changed the pipelines to use H264 along the video path. The sender becomes:

        #!/bin/sh
        gst-launch -v gstrtpbin name=rtpbin \
            v4l2src ! ffmpegcolorspace ! x264enc bitrate=300 ! rtph264pay ! rtpbin.send_rtp_sink_0 \
            rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 \
            rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5001 sync=false async=false \
            udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0 \
            pulsesrc ! audioconvert ! audioresample ! audio/x-raw-int,rate=16000 ! \
            speexenc bitrate=16000 ! rtpspeexpay ! rtpbin.send_rtp_sink_1 \
            rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5003 sync=false async=false \
            udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1

    And the receiver becomes:

        #!/bin/sh
        gst-launch -v \
            gstrtpbin name=rtpbin \
            udpsrc caps="application/x-rtp,media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" \
                port=5000 ! rtpbin.recv_rtp_sink_0 \
            rtpbin. ! rtph264depay ! ffdec_h264 ! xvimagesink \
            udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
            rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false \
            udpsrc caps="application/x-rtp,media=(string)audio, clock-rate=(int)16000, encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" \
                port=5002 ! rtpbin.recv_rtp_sink_1 \
            rtpbin. ! rtpspeexdepay ! speexdec ! audioresample ! audioconvert ! alsasink \
            udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5007 sync=false async=false

    This is what happens under Ubuntu 10.04. I didn't notice such huge delays on Ubuntu 9.04 - the delays there were in the 2-3 second range, AFAIR.
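    For what it's worth, the usual low-latency knobs on the encoder look like this (a sketch - assuming the installed x264enc exposes these properties). By default, x264 buffers multiple frames for lookahead and B-frames, which alone can add seconds of latency; tune=zerolatency disables that buffering:

        v4l2src ! ffmpegcolorspace ! \
            x264enc bitrate=300 tune=zerolatency speed-preset=superfast ! \
            rtph264pay ! rtpbin.send_rtp_sink_0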

    Read the article

  • Creating STA COM compatible ASP.NET Applications

    - by Rick Strahl
    When building ASP.NET applications that interface with old school COM objects like those created with VB6 or Visual FoxPro (MTDLL), it's extremely important that the threads that are serving requests use Single Threaded Apartment threading. STA is a COM built-in technology that allows essentially single threaded components to operate reliably in a multi-threaded environment. STAs guarantee that COM objects instantiated on a specific thread stay on that specific thread, and any access to a COM object from another thread automatically marshals that call to the STA thread. The end effect is that you can have multiple threads, but a COM object instance lives on a fixed, never changing thread.

    ASP.NET by default uses MTA (multi-threaded apartment) threads, which are truly free spinning threads that pay no heed to COM object marshaling. This is vastly more efficient than STA threading, which has a bit of overhead in determining whether it's OK to run code on a given thread or whether some sort of thread/COM marshaling needs to occur. MTA COM components can be very efficient, but STA COM components in a multi-threaded environment always tend to have a fair amount of overhead.

    It's amazing how much COM Interop I still see today, so while it seems really old school to be talking about this topic, it's actually quite apropos for me, as I have many customers using legacy COM systems that need to interface with other .NET applications. In this post I'm consolidating some of the hacks I've used to integrate with various ASP.NET technologies when using STA COM components.

    STA in ASP.NET

    Support for STA threading in the ASP.NET framework is fairly limited. Specifically, only the original ASP.NET WebForms technology supports STA threading directly via its STA Page Handler implementation, or what you might know as ASPCOMPAT mode. For WebForms, running STA components is as easy as specifying the AspCompat attribute in the @Page tag:

        <%@ Page Language="C#" AspCompat="true" %>

    which runs the page in STA mode. Removing it runs in MTA mode. Simple.

    Unfortunately, all other ASP.NET technologies built on top of the core ASP.NET engine do not support STA natively. So if you want to use STA COM components in MVC or with classic ASMX Web Services, there's no automatic way like the ASPCOMPAT keyword available.

    So what happens when you run an STA COM component in an MTA application? In low volume environments - nothing much will happen. The COM objects will appear to work just fine, as there are no simultaneous thread interactions, and the COM component will happily run on a single thread or multiple single threads one at a time. So for testing, running components in MTA environments may appear to work just fine. However, as load increases and threads get re-used by ASP.NET, COM objects will end up getting created on multiple different threads. This can result in crashes or hangs, or data corruption in the STA components, which store their state in thread local storage on the STA thread. If threads overlap, this global store can easily get corrupted, which in turn causes problems. STA ensures that any COM object instance loaded always stays on the same thread it was instantiated on.

    What about COM+?

    COM+ is supposed to address the problem of STA in MTA applications by providing an abstraction with its own thread pool manager for COM objects. It steps into the COM instantiation pipeline and hands out COM instances from its own internally maintained STA thread pool.
    This guarantees that the COM instantiation threads are STA threads if you're using STA components. COM+ works, but in my experience the technology is very, very slow for STA components. It adds a ton of overhead and reduces COM performance noticeably in load tests in IIS. COM+ can make sense in some situations, but for Web apps with STA components it falls short. In addition, there's also the need to ensure that COM+ is set up and configured on the target machine, and the fact that components have to be registered in COM+. COM+ also keeps components up at all times, so if a component needs to be replaced, the COM+ package needs to be unloaded (the same is true for IIS hosted components, but it's more common to manage that). COM+ is an option for well established components, but native STA support tends to provide better performance and more consistent usability, IMHO.

    STA for non-supporting ASP.NET Technologies

    As mentioned above, only WebForms supports STA natively. However, by utilizing the WebForms ASP.NET Page handler internally, it's actually possible to trick various other ASP.NET technologies and let them work with STA components. This is ugly, but I've used each of these in various applications and I've had minimal problems making them work with FoxPro STA COM components, which is about as difficult as it gets for COM Interop in .NET. In this post I summarize several STA workarounds that enable you to use STA threading with these ASP.NET technologies:

      - ASMX Web Services
      - ASP.NET MVC
      - WCF Web Services
      - ASP.NET Web API

    ASMX Web Services

    I start with classic ASP.NET ASMX Web Services because it's the easiest mechanism that allows for STA modification. It also clearly demonstrates how the WebForms STA Page Handler is the key technology that enables the various other solutions to create STA components. Essentially, the way this works is to override the WebForms Page class and hijack its init functionality for processing requests. Here's what this looks like for Web Services:

        namespace FoxProAspNet
        {
            public class WebServiceStaHandler : System.Web.UI.Page, IHttpAsyncHandler
            {
                protected override void OnInit(EventArgs e)
                {
                    IHttpHandler handler = new WebServiceHandlerFactory().GetHandler(
                        this.Context,
                        this.Context.Request.HttpMethod,
                        this.Context.Request.FilePath,
                        this.Context.Request.PhysicalPath);

                    handler.ProcessRequest(this.Context);
                    this.Context.ApplicationInstance.CompleteRequest();
                }

                public IAsyncResult BeginProcessRequest(
                    HttpContext context, AsyncCallback cb, object extraData)
                {
                    return this.AspCompatBeginProcessRequest(context, cb, extraData);
                }

                public void EndProcessRequest(IAsyncResult result)
                {
                    this.AspCompatEndProcessRequest(result);
                }
            }

            public class AspCompatWebServiceStaHandlerWithSessionState :
                WebServiceStaHandler, IRequiresSessionState
            {
            }
        }

    This class overrides the ASP.NET WebForms Page class, which has little known AspCompatBeginProcessRequest() and AspCompatEndProcessRequest() methods that are responsible for providing the WebForms ASPCOMPAT functionality. These methods handle routing requests to STA threads. Note there are two classes - one that includes session state and one that does not. If you plan on using ASP.NET Session state, use the latter class; otherwise stick to the former. This maps to the EnableSessionState page setting in WebForms. This class simply hooks into this functionality by overriding the BeginProcessRequest and EndProcessRequest methods and always forcing it into the AspCompat methods.
    The way this works is that BeginProcessRequest() fires first to set up the threads and start initializing the handler. As part of that process, the OnInit() method is fired, which is now already running on an STA thread. The code then creates an instance of the actual WebService handler factory and calls its ProcessRequest method to start executing, which generates the Web Service result. Immediately after ProcessRequest, the request is stopped with Application.CompleteRequest(), which ensures that the rest of the Page handler logic doesn't fire. This means that even though the fairly heavy Page class is overridden here, it doesn't end up executing any of its internal processing, which makes this code fairly efficient. In a nutshell, we're hijacking the Page HttpHandler and forcing it to process the WebService process handler in the context of the AspCompat handler behavior.

    Hooking up the Handler

    Because the above is an HttpHandler implementation, you need to hook up the custom handler and replace the standard ASMX handler. To do this you need to modify the web.config file (here for IIS 7 and IIS Express):

        <configuration>
          <system.webServer>
            <handlers>
              <remove name="WebServiceHandlerFactory-Integrated-4.0" />
              <add name="Asmx STA Web Service Handler"
                   path="*.asmx"
                   verb="*"
                   type="FoxProAspNet.WebServiceStaHandler"
                   precondition="integrated" />
            </handlers>
          </system.webServer>
        </configuration>

    (Note: The name for WebServiceHandlerFactory-Integrated-4.0 might be slightly different depending on your server version. Check the IIS Handler configuration in the IIS Management Console for the exact name, or simply remove the handler from the list there, which will propagate to your web.config.)

    For IIS 5 & 6 (Windows XP/2003) or the Visual Studio Web Server, use:

        <configuration>
          <system.web>
            <httpHandlers>
              <remove path="*.asmx" verb="*" />
              <add path="*.asmx" verb="*" type="FoxProAspNet.WebServiceStaHandler" />
            </httpHandlers>
          </system.web>
        </configuration>

    To test, create a new ASMX Web Service and create a method like this:

        [WebService(Namespace = "http://foxaspnet.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        public class FoxWebService : System.Web.Services.WebService
        {
            [WebMethod]
            public string HelloWorld()
            {
                return "Hello World. Threading mode is: " +
                       System.Threading.Thread.CurrentThread.GetApartmentState();
            }
        }

    Run this before you put in the web.config configuration changes and you should get:

        Hello World. Threading mode is: MTA

    Then put the handler mapping into web.config and you should see:

        Hello World. Threading mode is: STA

    And you're on your way to using STA COM components. It's a hack, but it works well! I've used this with several high volume Web Service installations with various customers, and it's been fast and reliable.

    ASP.NET MVC

    ASP.NET MVC has quickly become the most popular ASP.NET technology, replacing WebForms for creating HTML output. MVC is more complex to get started with, but once you understand the basic structure of how requests flow through the MVC pipeline, it's easy to use and amazingly flexible in manipulating HTML requests. In addition, MVC has great support for non-HTML output sources like JSON and XML, making it an excellent choice for AJAX requests without any additional tools. Unlike WebForms, ASP.NET MVC doesn't support STA threads natively, and so some trickery is needed to make it work with STA threads as well. MVC gets its handler implementation through custom route handlers using ASP.NET's built-in routing semantics.
    To work in an STA handler requires working in the Page Handler as part of the Route Handler implementation. As with the Web Service handler, the first step is to create a custom HttpHandler that can instantiate an MVC request pipeline properly:

        public class MvcStaThreadHttpAsyncHandler : Page, IHttpAsyncHandler, IRequiresSessionState
        {
            private RequestContext _requestContext;

            public MvcStaThreadHttpAsyncHandler(RequestContext requestContext)
            {
                if (requestContext == null)
                    throw new ArgumentNullException("requestContext");
                _requestContext = requestContext;
            }

            public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
            {
                return this.AspCompatBeginProcessRequest(context, cb, extraData);
            }

            protected override void OnInit(EventArgs e)
            {
                var controllerName = _requestContext.RouteData.GetRequiredString("controller");
                var controllerFactory = ControllerBuilder.Current.GetControllerFactory();
                var controller = controllerFactory.CreateController(_requestContext, controllerName);
                if (controller == null)
                    throw new InvalidOperationException("Could not find controller: " + controllerName);

                try
                {
                    controller.Execute(_requestContext);
                }
                finally
                {
                    controllerFactory.ReleaseController(controller);
                }

                this.Context.ApplicationInstance.CompleteRequest();
            }

            public void EndProcessRequest(IAsyncResult result)
            {
                this.AspCompatEndProcessRequest(result);
            }

            public override void ProcessRequest(HttpContext httpContext)
            {
                throw new NotSupportedException(
                    "STAThreadRouteHandler does not support ProcessRequest called (only BeginProcessRequest)");
            }
        }

    This handler code figures out which controller to load and then executes the controller. MVC internally provides the information needed to route to the appropriate method and pass the right parameters. Like the Web Service handler, the logic occurs in OnInit() and performs all the processing in that part of the request.

    Next, we need a RouteHandler that can actually pick up this handler. Unlike the Web Service handler, where we simply registered the handler, MVC requires a RouteHandler to pick up the handler. RouteHandlers look at the URL's path and based on that decide what handler to invoke. The route handler is pretty simple - all it does is load our custom handler:

        public class MvcStaThreadRouteHandler : IRouteHandler
        {
            public IHttpHandler GetHttpHandler(RequestContext requestContext)
            {
                if (requestContext == null)
                    throw new ArgumentNullException("requestContext");

                return new MvcStaThreadHttpAsyncHandler(requestContext);
            }
        }

    At this point you can instantiate this route handler and force STA requests to MVC by specifying a route.
    The following sets up the ASP.NET default route:

        Route mvcRoute = new Route("{controller}/{action}/{id}",
            new RouteValueDictionary(
                new { controller = "Home", action = "Index", id = UrlParameter.Optional }),
            new MvcStaThreadRouteHandler());
        RouteTable.Routes.Add(mvcRoute);

    To make this code a little easier to work with and mimic the behavior of the routes.MapRoute() extension method that MVC provides, here is an extension method for MapMvcStaRoute():

        public static class RouteCollectionExtensions
        {
            public static void MapMvcStaRoute(this RouteCollection routeTable,
                                              string name, string url, object defaults = null)
            {
                Route mvcRoute = new Route(url,
                    new RouteValueDictionary(defaults),
                    new MvcStaThreadRouteHandler());
                RouteTable.Routes.Add(mvcRoute);
            }
        }

    With this, the syntax to add a route becomes a little easier and matches the MapRoute() method:

        RouteTable.Routes.MapMvcStaRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );

    The nice thing about this route handler, STA handler and extension method is that it's fully self contained. You can put all three into a single class file and stick it into your Web app, and then simply call MapMvcStaRoute() and it just works. Easy!

    To see whether this works, create an MVC controller like this:

        public class ThreadTestController : Controller
        {
            public string ThreadingMode()
            {
                return Thread.CurrentThread.GetApartmentState().ToString();
            }
        }

    Try this test first with only the MapRoute() hookup in the route configuration, in which case you should get MTA as the value. Then change the MapRoute() call to MapMvcStaRoute(), leaving all the parameters the same, and re-run the request. You now should see STA as the result. You're on your way to using STA COM components reliably in ASP.NET MVC.

    WCF Web Services running through IIS

    WCF Web Services provide a more robust and wider range of services for Web Services. You can use WCF over HTTP, TCP, and Pipes, and WCF services support WS* secure services. There are many features in WCF that go way beyond what ASMX can do. But it's also a bit more complex than ASMX. As a basic rule, if you need to serve straight SOAP services over HTTP, I'd recommend sticking with the simpler ASMX services, especially if COM is involved. If you need WS* support or want to serve data over non-HTTP protocols, then WCF makes more sense.

    WCF is not my forte, but I found a solution from Scott Seely on his blog that describes the process and that seems to work well. I'm copying his code below so this STA information is all in one place, and I'll quickly explain it. Scott's code basically works by creating a custom OperationBehavior which can be specified via an [STAOperation] attribute on every method. Using his attribute, you end up with a class (or interface, if you separate the contract and class) that looks like this:

        [ServiceContract]
        public class WcfService
        {
            [OperationContract]
            public string HelloWorldMta()
            {
                return Thread.CurrentThread.GetApartmentState().ToString();
            }

            // Make sure you use this custom STAOperationBehavior
            // attribute to force STA operation of service methods
            [STAOperationBehavior]
            [OperationContract]
            public string HelloWorldSta()
            {
                return Thread.CurrentThread.GetApartmentState().ToString();
            }
        }

    Pretty straightforward. The latter method returns STA while the former returns MTA. To make STA work, every method needs to be marked up. The implementation consists of the attribute and an OperationInvoker implementation.
    Here are the two classes required to make this work, from Scott's post:

        public class STAOperationBehaviorAttribute : Attribute, IOperationBehavior
        {
            public void AddBindingParameters(OperationDescription operationDescription,
                System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
            {
            }

            public void ApplyClientBehavior(OperationDescription operationDescription,
                System.ServiceModel.Dispatcher.ClientOperation clientOperation)
            {
                // If this is applied on the client, well, it just doesn't make sense.
                // Don't throw in case this attribute was applied on the contract
                // instead of the implementation.
            }

            public void ApplyDispatchBehavior(OperationDescription operationDescription,
                System.ServiceModel.Dispatcher.DispatchOperation dispatchOperation)
            {
                // Change the IOperationInvoker for this operation.
                dispatchOperation.Invoker = new STAOperationInvoker(dispatchOperation.Invoker);
            }

            public void Validate(OperationDescription operationDescription)
            {
                if (operationDescription.SyncMethod == null)
                {
                    throw new InvalidOperationException("The STAOperationBehaviorAttribute " +
                        "only works for synchronous method invocations.");
                }
            }
        }

        public class STAOperationInvoker : IOperationInvoker
        {
            IOperationInvoker _innerInvoker;

            public STAOperationInvoker(IOperationInvoker invoker)
            {
                _innerInvoker = invoker;
            }

            public object[] AllocateInputs()
            {
                return _innerInvoker.AllocateInputs();
            }

            public object Invoke(object instance, object[] inputs, out object[] outputs)
            {
                // Create a new, STA thread
                object[] staOutputs = null;
                object retval = null;
                Thread thread = new Thread(
                    delegate()
                    {
                        retval = _innerInvoker.Invoke(instance, inputs, out staOutputs);
                    });
                thread.SetApartmentState(ApartmentState.STA);
                thread.Start();
                thread.Join();
                outputs = staOutputs;
                return retval;
            }

            public IAsyncResult InvokeBegin(object instance, object[] inputs,
                AsyncCallback callback, object state)
            {
                // We don't handle async...
                throw new NotImplementedException();
            }

            public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
            {
                // We don't handle async...
                throw new NotImplementedException();
            }

            public bool IsSynchronous { get { return true; } }
        }

    The key in this setup is the invoker and the Invoke method, which creates a new thread and then fires the request on this new thread. Because this approach creates a new thread for every request, it's not super efficient. There's a bunch of overhead involved in creating the thread and throwing it away after each request, but it'll work for low volume requests and ensure each thread runs in STA mode. If better performance is required, it would be useful to create a custom thread manager that can pool a number of STA threads and hand off threads as needed, rather than creating new threads on every request.

    If your Web Service needs are simple and you need only to serve standard SOAP 1.x requests, I would recommend sticking with ASMX services. It's easier to set up and work with, and for STA component use it'll be significantly better performing, since ASP.NET manages the STA thread pool for you rather than firing new threads for each request. One nice thing about Scott's code, though, is that it works in any WCF environment, including self hosting. It has no dependency on ASP.NET or WebForms for that matter.

    STA - If you must

    STA components are a pain in the ass, and thankfully there isn't too much stuff out there anymore that requires them. But when you need it, and you need to access STA functionality from .NET, at least there are a few options available to make it happen.
    Each of these solutions is a bit hacky, but they work - I've used all of them in production with good results with FoxPro components. I hope compiling all of these in one place makes STA consumption a little bit easier. I feel your pain :-)

    Resources

      - Download STA Handler Code Examples
      - Scott Seely's original STA WCF OperationBehavior article

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in FoxPro  ASP.NET  .NET  COM

    Read the article

  • Seriously, It’s Time to Get Your Content Act Together

    - by Mike Stiles
    Branded content, content marketing, social content, brand journalism - we're seeing those terms more and more. Why? The technology tools are coming together. We should know. We can gather big data, crunch it, listen to the public, moderate, respond, get to know the customer intimately, know what they like, know what they want. We can target, distribute, amplify, measure engagement and reaction, modify strategy and even automate a great deal of all that.

    An amazing machine, a sleek, smooth-running engine has been built such that all the parts can interact and work together to deliver peak performance and maximum output. But that engine isn't going anywhere without any gas. Content is the gas.

    Yes, we curate other people's content. We can siphon their gas. There's tech to help with that too. But as for the creation of original, worthwhile content made for a specific audience, our audience, machines can't do that…at least not yet. Curated content is great. But somebody has to originate the content for it to be curated and shared. And since the need for good, curated content is obviously large and the desire to share is there, it's a winning proposition for a brand to be a consistent producer of original content.

    And yet, it feels like content is an issue we're avoiding. There's a reluctance to build a massive pipeline if you have no idea what you're going to run through it. The C-suite often doesn't know what content is, that it's different from ads, where to get it, who makes it, how long it should be, what the point of it is if there's no hard sell of the product, what it costs, how to use it, how to measure it, how to make sure it's good, or how to make sure it will keep flowing. It could be the reason many brands aren't pulling the trigger on socially enabling the enterprise. And that's a shame, because there are a lot of creative, daring, experimental, uniquely talented entertainers and journalists chomping at the bit to execute content for brands. But for many corporate executives, content is "weird," and the people who make it are even weirder.

    The content side of the equation is human. It's art, but art that can be informed by data. The natural inclination is for brands to turn to their agencies for such creative endeavors. But agencies are falling into one of two categories. They're failing to transition from ads to content - in "Content Era, What's the Role of Agencies?" Alexander Jutkowitz says agencies were made for one-hit campaigns, not ongoing content. Or, they're ready and capable but can't get clients to do the right things. Agencies have to make money, even if it means continuing to do the wrong things because that's all the client will agree to.

    So what we wind up with in the pipeline is advertising, marketing-heavy content, content that was obviously created or spearheaded by non-creative executives, random & inconsistent content, copy written for SEO bots, and other completely uninteresting nightmares. Frank Rose, author of "The Art of Immersion," writes, "Content without story and excitement is noise pollution."

    In the old days, you made an ad and inserted it into shows made by people who knew what they were doing. You could bask in that show's success and leverage their audience. Now, you are tasked with attracting, amassing and holding your own audience. You may just want to make, advertise and sell your widgets. But now there's a war on for a precious commodity: attention. People are busy. They have filters to keep uninteresting and irrelevant things out.
    They value their time and expect value back when they give it up. Joe Pulizzi, founder of the Content Marketing Institute, says, "Your customers don't care about you, your products, your services…they care about themselves, their wants and their needs."

    Is it worth getting serious about content and doing it right?

      - 61% of consumers feel better about a company that delivers custom content (Custom Content Council).
      - Interesting content is one of the top 3 reasons people follow brands on social (Content+).
      - 78% of consumers think organizations that provide custom content want to build good relationships with them (TMG Custom Media).
      - On the B2B side, 80% of business decision makers prefer to get company info in a series of articles vs. an ad.

    So what's the hang-up? Cited barriers to content marketing are lack of human resources (42%) and lack of budget (35%). 54% of brands don't have a single on-site, dedicated content creator. And only 38% of brands have a content marketing strategy.

    Tech has built the biggest, most incredible stage for brands that's ever been built. Putting something on that stage is your responsibility. Do a bad show, or no show at all, and you'll be the beautiful, talented actress that never got discovered.

    @mikestiles
    Photo: Gabriella Fabbri, stock.xchng

    Read the article

  • ASP.NET WebAPI Security 2: Identity Architecture

    - by Your DisplayName here!
    Pedro has beaten me to the punch with a detailed post (and diagram) about the WebAPI hosting architecture. So go read his post first, then come back so we can have a closer look at what that means for security.

    The first important takeaway is that WebAPI is hosting independent - currently it ships with two host integration implementations: one for ASP.NET (aka web host) and one for WCF (aka self host). Pedro nicely shows the integration into the web host. Self hosting is not done yet, so we will mainly focus on the web hosting case, and I will point out security related differences when they exist.

    The interesting part for security (amongst other things, of course) is the HttpControllerHandler (see Pedro's diagram) - this is where the host specific representation of an HTTP request gets converted to the WebAPI abstraction (called HttpRequestMessage). The ConvertRequest method does the following:

      - Creates a new HttpRequestMessage.
      - Copies URI, method and headers from the HttpContext.
      - Copies HttpContext.User to the Properties<string, object> dictionary on the HttpRequestMessage. The key used for that can be found on HttpPropertyKeys.UserPrincipalKey (which resolves to "MS_UserPrincipal").

    So the consequence is that WebAPI receives whatever IPrincipal has been set by the ASP.NET pipeline (in the web hosting case). Common questions are:

    Are there situations where this property does not get set? Not in ASP.NET - the DefaultAuthenticationModule in the HTTP pipeline makes sure HttpContext.User (and Thread.CurrentPrincipal - more on that later) are always set, either to some authenticated user or to an anonymous principal. This may be different in other hosting environments (again, more on that later).

    Why so generic? Keep in mind that WebAPI is hosting independent and may run on a host that materializes identity completely differently compared to ASP.NET (or .NET in general). This gives them a way to evolve the system in the future.

    How does WebAPI code retrieve the current client identity? HttpRequestMessage has an extension method called GetUserPrincipal() which returns the property as an IPrincipal.

    A quick look at self hosting shows that the moral equivalent of HttpControllerHandler.ConvertRequest() is HttpSelfHostServer.ProcessRequestContext(). Here the principal property gets set only when the host is configured for Windows authentication (an inconsistency).

    Do I like that? Well - yes and no. Here are my thoughts:

      - I like that it is very straightforward to let WebAPI inherit the client identity context of the host. This might not always be what you want - think of an ASP.NET app that consists of UI and APIs; the UI might use Forms authentication, the APIs token based authentication. So it would be good if the two parts could live in separate security worlds.
      - It makes total sense to have this generic hand-off point for identity between the host and WebAPI. It also makes total sense for WebAPI plumbing code (especially handlers) to use the WebAPI specific identity abstraction.
      - But - c'mon, we are running on .NET. And the way .NET represents identity is via IPrincipal/IIdentity. That's what every .NET developer on this planet is used to. So I would like to see a User property of type IPrincipal on ApiController.
      - I don't like the fact that Thread.CurrentPrincipal is not populated. T.CP is a well established pattern as a one stop shop to retrieve client identity on .NET. That makes a lot of sense - even if the name is misleading at best. There might be existing library code you want to call from WebAPI that makes use of T.CP (e.g. PrincipalPermission, or a simple .Name or .IsInRole()). Having the client identity as an ambient property is useful for code that does not have access to the current HTTP request (for calling GetUserPrincipal()).
      - I don't like the fact that the client identity conversion from host to WebAPI is inconsistent. This makes writing security plumbing code harder. I think the logic should always be: if the host has a client identity representation, copy it; if not, set an anonymous principal on the request message.

    Btw - please don't annoy me with the "but T.CP is static, and static is bad for testing" chant. T.CP is a getter/setter and, in fact, I find it beneficial to be able to set different security contexts in unit tests before calling into some logic. And, in case you have wondered - T.CP is indeed thread static (and the name comes from a time when a logical operation was bound to a thread - which is not true anymore). But all thread creation APIs in .NET actually copy T.CP to the new thread they create. This has been the case since .NET 2.0 and is certainly an improvement compared to how Win32 does things.

    So to sum it up:

      - The host plumbing copies the host client identity to WebAPI (this is not perfect yet, but will surely be improved). In other words: the current WebAPI bits don't ship with any authentication plumbing, but solely use whatever authentication (and thus client identity) is set up by the host.
      - WebAPI developers can retrieve the client identity from the HttpRequestMessage.
      - Hopefully my proposed changes around T.CP and the User property on ApiController will be added.

    In the next post, I will detail how to add WebAPI specific authentication support, e.g. for Basic Authentication and tokens. This includes integrating the notion of claims based identity. After that we will look at the built-in authorization bits and how to improve them as well. Stay tuned.
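    To make that concrete, here is a minimal sketch of reading the client identity inside a controller action, using the GetUserPrincipal() extension method the post describes (assuming the WebAPI beta-era bits discussed here):

        public class StatusController : ApiController
        {
            public string Get()
            {
                // The web host copied HttpContext.User into the request's
                // Properties dictionary; GetUserPrincipal() reads it back out.
                IPrincipal principal = Request.GetUserPrincipal();

                return principal != null && principal.Identity.IsAuthenticated
                    ? "Authenticated as " + principal.Identity.Name
                    : "Anonymous";
            }
        }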

    Read the article

  • Allowing Access to HttpContext in WCF REST Services

    - by Rick Strahl
    If you're building WCF REST Services you may find that WCF's OperationContext, which provides some amount of access to HTTP headers on inbound and outbound messages, is pretty limited in that it doesn't provide access to everything, and sometimes not in a convenient manner. For example, accessing query string parameters explicitly is pretty painful:

        [OperationContract]
        [WebGet]
        public string HelloWorld()
        {
            var properties = OperationContext.Current.IncomingMessageProperties;
            var property = properties[HttpRequestMessageProperty.Name] as HttpRequestMessageProperty;
            string queryString = property.QueryString;
            var name = StringUtils.GetUrlEncodedKey(queryString, "Name");
            return "Hello World " + name;
        }

    And that doesn't account for the logic in GetUrlEncodedKey to retrieve the querystring value. It's a heck of a lot easier to just do this:

        [OperationContract]
        [WebGet]
        public string HelloWorld()
        {
            var name = HttpContext.Current.Request.QueryString["Name"] ?? string.Empty;
            return "Hello World " + name;
        }

    Ok, so if you follow the REST guidelines for WCF REST you shouldn't have to rely on reading query string parameters manually but instead rely on routing logic, but you know what: WCF REST is a PITA anyway, and anything to make things a little easier is welcome. To enable the second scenario there are a couple of steps that you have to take on your service implementation and the configuration file.

    Add aspNetCompatibilityEnabled in web.config

    First you need to configure the hosting environment to support ASP.NET when running WCF Service requests. This ensures that the ASP.NET pipeline is fired up and configured for every incoming request.

        <system.serviceModel>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
        </system.serviceModel>

    Mark up your Service Implementation with the AspNetCompatibilityRequirements Attribute

    Next you have to mark up the service implementation - not the contract, if you're using a separate interface!!! - with the AspNetCompatibilityRequirements attribute:

        [ServiceContract(Namespace = "RateTestService")]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class RestRateTestProxyService

    Typically you'll want to use Allowed as the preferred option. The other options are NotAllowed and Required. Allowed will let the service run if the web.config attribute is not set. Required has to have it set. All these settings determine whether an ASP.NET host AppDomain is used for requests.

    Once Allowed or Required has been set on the implementing class, you can make use of the ASP.NET HttpContext object. When I allow for ASP.NET compatibility in my WCF services, I typically add a property that exposes the Context and Request objects a little more conveniently:

        public HttpContext Context
        {
            get { return HttpContext.Current; }
        }

        public HttpRequest Request
        {
            get { return HttpContext.Current.Request; }
        }

    While you can also access the Response object and write raw data to it and manipulate headers, THAT is probably not such a good idea, as both your code and WCF will end up writing into the output stream. However, it might be useful in some situations where you need to take over output generation completely and return something completely custom. Remember though that WCF REST DOES actually support that as well, with Stream responses that essentially allow you to return any kind of data to the client, so using Response should really never be necessary.

    Should you or shouldn't you?

    WCF purists will tell you never to muck with the platform specific features or the underlying protocol, and if you can avoid it you definitely should. Querystring management in particular can be handled largely with URL routing, but there are exceptions of course. Try to use what WCF natively provides - if possible - as it makes the code more portable. For example, if you do enable ASP.NET compatibility, you won't be able to self host a WCF REST service.

    At the same time, realize that especially in WCF REST there are a number of big holes, and access to some features is a royal pain, so it's not unreasonable to access the HttpContext directly, especially if it's only for read-only access. Since everything in REST works off URLs and the HTTP protocol, more control and easier access to HTTP features is a key requirement for building flexible services.

    It looks like vNext of the WCF REST stuff will feature many improvements along these lines, with much deeper native HTTP support that is often so useful in REST applications, along with much more extensibility that allows for customization of the inputs and outputs as data goes through the request pipeline. I'm looking forward to this stuff, as WCF REST as it exists today still is a royal pain (in fact I'm struggling with a mysterious version conflict/crashing error on my machine that I have not been able to resolve - grrrr…).

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in ASP.NET  AJAX  WCF

    Read the article

  • DirectCompute Lectures

    - by Daniel Moth
    Previously I shared resources to get you started with DirectCompute, for taking advantage of GPGPUs in a way that doesn't tie you to a hardware vendor (e.g. NVIDIA, AMD). I just stumbled upon, and had to share, a lecture series on Channel 9 on DirectCompute! Here are direct links to the episodes that are up there now:

      - DirectCompute Expert Roundtable Discussion
      - DirectCompute Lecture Series 101 - Introduction to DirectCompute
      - DirectCompute Lecture Series 110 - Memory Patterns
      - DirectCompute Lecture Series 120 - Basics of DirectCompute Application Development
      - DirectCompute Lecture Series 210 - GPU Optimizations and Performance
      - DirectCompute Lecture Series 230 - GPU Accelerated Physics
      - DirectCompute Lecture Series 250 - Integration with the Graphics Pipeline

    Having watched these, I recommend them all, but if you only want to watch a few, I suggest #2, #3, #4 and #5. Also, you should download the "WMV (High)" version so you can see the code clearly and be able to use Ctrl+Shift+G for fast playback…

    TIP: To subscribe to Channel 9 GPU content, use this RSS feed.

    Comments about this post welcome at the original blog.

    Read the article

  • Is sticking to one language on a particular project a good practice?

    - by Ans
    I'm developing a pipeline for processing text that will go into production. The question I keep asking myself is: should I stick to one language for the project when I'm looking for a tool to do a particular task (e.g. NLTK, PDFMiner, CLD, CRFsuite, etc.)? Or is it OK to mix and match languages on the project, so I pick the best tool regardless of what language it's written in (e.g. OpenNLP, ParsCit, poppler, CRF++, etc.) and wrap my code around it (see the sketch below)? Note: I am not asking whether a developer should stick to just one language for their career.
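    To make the "wrap my code around it" option concrete, here is a minimal Python sketch of driving an external tool from the pipeline's main language (the tool name and flags are hypothetical):

        import subprocess

        def run_external_tagger(text: str) -> str:
            """Wrap a hypothetical external CLI tool: feed it text on stdin, return its stdout."""
            result = subprocess.run(
                ["external-tagger", "--format", "plain"],  # hypothetical tool and flags
                input=text,
                capture_output=True,
                text=True,
                check=True,  # raise if the tool exits non-zero
            )
            return result.stdout

    The pipeline code stays in one language, and the foreign-language tool is isolated behind a process boundary, which also makes it easy to swap out later.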

    Read the article

  • Forward Mysql logs to Syslog-ng

    - by Mbeale
    Ubuntu 10.04. I have set MySQL to log slow queries and a general MySQL log. How can I get syslog-ng to pick up those files and forward them to our centralized logging service (which is working)? I tried:

        source s_mysql_instance_1 {
            pipe("/var/log/mysql/mysql.log");
        };

        log {
            source(s_mysql_instance_1);
            destination(d_loggly);
        };

    and get:

        Error opening file for reading; filename='/var/log/mysql/mysql.log', error='Permission denied (13)'
        Error initializing source driver; source='s_mysql_instance_1', id='s_mysql_instance_1#0'
        Error initializing message pipeline;

    I have also disabled AppArmor and still get the same results.
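    As an aside, mysql.log is a regular file rather than a FIFO, so the file() source driver is the usual way to follow it; the pipe() driver expects a named pipe. A sketch, assuming a stock syslog-ng 3.x config:

        source s_mysql_instance_1 {
            # Follow a plain log file, re-checking it every second;
            # no-parse keeps MySQL's free-form lines intact
            file("/var/log/mysql/mysql.log" follow-freq(1) flags(no-parse));
        };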

    Read the article

  • Direct3D9 application won't write to depth buffer

    - by DeadMG
    I've got an application written in D3D9 which will not write any values to the depth buffer, resulting in incorrect values for the depth test. Things I've checked so far:

      - D3DRS_ZENABLE, set to TRUE
      - D3DRS_ZWRITEENABLE, set to TRUE
      - D3DRS_ZFUNC, set to D3DCMP_LESSEQUAL
      - The depth buffer is definitely bound to the pipeline at the relevant time
      - The depth buffer was correctly cleared before use

    I've used PIX to confirm that all of these things occurred as expected. For example, if I clear the depth buffer to 0 instead of 1, then correctly nothing is drawn, and PIX confirms that all the pixels failed the depth test. But I've also used PIX to confirm that my submitted geometry does not write to the depth buffer and so is not correctly rendered. Any other suggestions?
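    For reference, the state the poster describes would be set up roughly like this (a C++ sketch; `device` is assumed to be a valid IDirect3DDevice9*):

        // Enable depth testing and depth writes with the comparison the poster uses
        device->SetRenderState(D3DRS_ZENABLE, TRUE);
        device->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
        device->SetRenderState(D3DRS_ZFUNC, D3DCMP_LESSEQUAL);

        // Clear color and depth; depth cleared to 1.0f so nearer fragments pass D3DCMP_LESSEQUAL
        device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
                      D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);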

    Read the article

  • Delving into design patterns, and what that means for the Oracle user experience

    - by Kathy.Miedema
    By Kathy Miedema, Oracle Applications User Experience

    George Hackman, Senior Director, Applications User Experiences

    The Oracle Applications User Experience team has some exciting things happening around Fusion Applications design patterns. Because we're hoping to have some new offerings soon (stay tuned with VoX to see what's in the pipeline around Fusion Applications design patterns), now is a good time to talk more about what design patterns can do for the individual user as well as the entire company. George Hackman, Senior Director of Operations User Experience, says the first thing to note is that user experience is not just about the user interface. It's about understanding how people do things, observing them, and then finding the patterns that emerge. The Applications UX team develops those patterns and then builds them into Oracle applications. What emerges, Hackman says, is a consistent, efficient user experience that promotes a productive workplace.

    Creating design patterns

    What is a design pattern in the context of enterprise software? "Every day, people use technology to get things done," Hackman says. "They navigate a virtual world that reaches from enterprise to consumer apps, and from desktop to mobile. This virtual world is constantly under construction. New areas are being developed and old areas are being redone. As this world is being built and remodeled, efficient pathways and practices emerge.

    "Oracle's user experience team watches users navigate this world. We measure their productivity and ask them about their satisfaction. We take the most efficient, most productive pathways from the enterprise and consumer world and turn them into Oracle's user experience patterns."

    Hackman describes the process as combining all of the best practices from every part of a user's world. Members of the user experience team observe, analyze, design, prototype, and measure each work task to find the best possible pattern for a particular work flow. As the team builds the patterns, "we make sure they are fully buildable using Oracle technology," Hackman said. "So customers know they can use these patterns. There's no need to make something up from scratch, not knowing whether you can even build it."

    Hackman says that creating something on a computer is a good example of a user experience pattern. "People are creating things all the time," he says. "On the consumer side, they are creating documents. On the enterprise side, they are creating expense reports. On a mobile phone, they are creating contacts. They are using different apps like iPhone or Facebook or Gmail or Oracle software, all doing this creation process."

    The Applications UX team starts their process by observing how people might create something. "We observe people creating things. We see the patterns, we analyze and document, then we apply them to our products. It might be different from phone to web browser, but we have these design patterns that create a consistent experience across platforms, and across products, too."

    The result for customers

    Oracle constantly improves its part of the virtual world, Hackman said. New products are created and existing products are upgraded. Because Oracle builds user experience design patterns, Oracle's virtual world becomes both more powerful and more familiar at the same time. Because of design patterns, users can navigate with ease as they embrace the latest technology - because it behaves the way they expect it to.
    This means less training and faster adoption for individual users, and more productivity for the business as a whole. Hackman said Oracle gives customers and partners access to design patterns so that they can build in the virtual world using the same best practices. Customers and partners can extend applications with a user experience that is comfortable and familiar to their users.

    For businesses that are integrating different Oracle applications, design patterns are key. The user experience created in E-Business Suite should be similar to the user experience in Fusion Applications, Hackman said. If a user is transitioning from one application to the other, it shouldn't be difficult for them to do their work. With design patterns, it isn't.

    "Oracle user experience patterns are the building blocks for the virtual world that ensure productivity, consistency and user satisfaction," Hackman said. "They are built for the enterprise, but incorporate the best practices from across the virtual world. They empower productivity and facilitate social interaction. When you build with patterns, you get all the end-user benefits of less training / retraining from the finished product. You also get faster / cheaper development."

    What's coming?

    You can already access design patterns to help you build Dashboards with OBIEE here. And we promised you at the beginning that we had something in the pipeline on Fusion Applications design patterns. Look for the announcement about when they are available here on VoX.

    Read the article

  • problem with loading in .FBX meshes in DirectX 10

    - by N0xus
    I'm trying to load meshes into DirectX 10. I've created a bunch of classes that handle it and allow me to call in a mesh with only a single line of code in my main game class. However, when I run the program the mesh does not render correctly, and the following errors keep appearing in the debug output window:

        D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that Semantic 'TEXCOORD' is defined for mismatched hardware registers between the output stage and input stage. [ EXECUTION ERROR #343: DEVICE_SHADER_LINKAGE_REGISTERINDEX ]

        D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that the input stage requires Semantic/Index (POSITION,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]

    The thing is, I've no idea how to fix this. The code I'm using worked before; I've simply brought all of it into a new project of mine. There are no build errors, and the errors above only appear while the game is running. The .fx file is as follows:

        float4x4 matWorld;
        float4x4 matView;
        float4x4 matProjection;

        struct VS_INPUT
        {
            float4 Pos:POSITION;
            float2 TexCoord:TEXCOORD;
        };

        struct PS_INPUT
        {
            float4 Pos:SV_POSITION;
            float2 TexCoord:TEXCOORD;
        };

        Texture2D diffuseTexture;
        SamplerState diffuseSampler
        {
            Filter = MIN_MAG_MIP_POINT;
            AddressU = WRAP;
            AddressV = WRAP;
        };

        //
        // Vertex Shader
        //
        PS_INPUT VS( VS_INPUT input )
        {
            PS_INPUT output=(PS_INPUT)0;
            float4x4 viewProjection=mul(matView,matProjection);
            float4x4 worldViewProjection=mul(matWorld,viewProjection);
            output.Pos=mul(input.Pos,worldViewProjection);
            output.TexCoord=input.TexCoord;
            return output;
        }

        //
        // Pixel Shader
        //
        float4 PS(PS_INPUT input ) : SV_Target
        {
            return diffuseTexture.Sample(diffuseSampler,input.TexCoord);
            //return float4(1.0f,1.0f,1.0f,1.0f);
        }

        RasterizerState NoCulling
        {
            FILLMODE=SOLID;
            CULLMODE=NONE;
        };

        technique10 Render
        {
            pass P0
            {
                SetVertexShader( CompileShader( vs_4_0, VS() ) );
                SetGeometryShader( NULL );
                SetPixelShader( CompileShader( ps_4_0, PS() ) );
                SetRasterizerState(NoCulling);
            }
        }

    In my game, the .fx file and model are loaded and set as follows. Loading in the shader file:

        //Set the shader flags - BMD
        DWORD dwShaderFlags = D3D10_SHADER_ENABLE_STRICTNESS;
        #if defined( DEBUG ) || defined( _DEBUG )
            dwShaderFlags |= D3D10_SHADER_DEBUG;
        #endif

        ID3D10Blob * pErrorBuffer=NULL;
        if( FAILED( D3DX10CreateEffectFromFile( TEXT("TransformedTexture.fx" ), NULL, NULL, "fx_4_0", dwShaderFlags, 0, md3dDevice, NULL, NULL, &m_pEffect, &pErrorBuffer, NULL ) ) )
        {
            char * pErrorStr = ( char* )pErrorBuffer->GetBufferPointer();
            //If the creation of the Effect fails then a message box will be shown
            MessageBoxA( NULL, pErrorStr, "Error", MB_OK );
            return false;
        }

        //Get the technique called Render from the effect, we need this for rendering later on
        m_pTechnique=m_pEffect->GetTechniqueByName("Render");

        //Number of elements in the layout
        UINT numElements = TexturedLitVertex::layoutSize;

        //Get the Pass description, we need this to bind the vertex to the pipeline
        D3D10_PASS_DESC PassDesc;
        m_pTechnique->GetPassByIndex( 0 )->GetDesc( &PassDesc );

        //Create Input layout to describe the incoming buffer to the input assembler
        if (FAILED(md3dDevice->CreateInputLayout( TexturedLitVertex::layout, numElements, PassDesc.pIAInputSignature, PassDesc.IAInputSignatureSize, &m_pVertexLayout ) ) )
        {
            return false;
        }

    Model loading:

        m_pTestRenderable=new CRenderable();
        //m_pTestRenderable->create<TexturedVertex>(md3dDevice,8,6,vertices,indices);
        m_pModelLoader = new CModelLoader();
        m_pTestRenderable = m_pModelLoader->loadModelFromFile( md3dDevice,"armoredrecon.fbx" );
        m_pGameObjectTest = new CGameObject();
        m_pGameObjectTest->setRenderable( m_pTestRenderable );

        // Set primitive topology, how are we going to interpret the vertices in the vertex buffer
        md3dDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST );

        if ( FAILED( D3DX10CreateShaderResourceViewFromFile( md3dDevice, TEXT( "armoredrecon_diff.png" ), NULL, NULL, &m_pTextureShaderResource, NULL ) ) )
        {
            MessageBox( NULL, TEXT( "Can't load Texture" ), TEXT( "Error" ), MB_OK );
            return false;
        }

        m_pDiffuseTextureVariable = m_pEffect->GetVariableByName( "diffuseTexture" )->AsShaderResource();
        m_pDiffuseTextureVariable->SetResource( m_pTextureShaderResource );

    Finally, the draw function code:

        //All drawing will occur between the clear and present
        m_pViewMatrixVariable->SetMatrix( ( float* )m_matView );
        m_pWorldMatrixVariable->SetMatrix( ( float* )m_pGameObjectTest->getWorld() );

        //Get the stride (size) of a vertex, we need this to tell the pipeline the size of one vertex
        UINT stride = m_pTestRenderable->getStride();

        //The offset from the start of the buffer to where our vertices are located
        UINT offset = m_pTestRenderable->getOffset();

        ID3D10Buffer * pVB=m_pTestRenderable->getVB();

        //Bind the vertex buffer to the input assembler stage
        md3dDevice->IASetVertexBuffers( 0, 1, &pVB, &stride, &offset );
        md3dDevice->IASetIndexBuffer( m_pTestRenderable->getIB(), DXGI_FORMAT_R32_UINT, 0 );

        //Get the Description of the technique, we need this in order to loop through each pass in the technique
        D3D10_TECHNIQUE_DESC techDesc;
        m_pTechnique->GetDesc( &techDesc );

        //Loop through the passes in the technique
        for( UINT p = 0; p < techDesc.Passes; ++p )
        {
            //Get a pass at current index and apply it
            m_pTechnique->GetPassByIndex( p )->Apply( 0 );

            //Draw call
            md3dDevice->DrawIndexed(m_pTestRenderable->getNumOfIndices(),0,0);
            //m_pD3D10Device->Draw(m_pTestRenderable->getNumOfVerts(),0);
        }

    Is there anything I've clearly done wrong or am missing? I've spent two weeks trying to work out what on earth I've done wrong, to no avail. Any insight a fresh pair of eyes could give on this would be great.
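    Both errors say that the input layout bound to the input assembler at the time of the DrawIndexed call was not created against this vertex shader's signature: it supplies TEXCOORD in the wrong register and does not supply POSITION at all. TexturedLitVertex::layout is not shown in the question, but for the VS_INPUT above it would need to declare POSITION and TEXCOORD along these lines (a sketch only; the formats and offsets assume a float3 position followed by a float2 UV):

        #include <d3d10.h>

        // Hypothetical element descriptions matching VS_INPUT. The semantic
        // names must match the shader exactly; extra elements (e.g. a NORMAL
        // the shader ignores) are allowed, but missing ones are not.
        const D3D10_INPUT_ELEMENT_DESC layout[] =
        {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
            { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 },
        };
        const UINT layoutSize = sizeof(layout) / sizeof(layout[0]);

    It is also worth checking which layout is actually bound when the draw runs: the code above creates m_pVertexLayout but never calls md3dDevice->IASetInputLayout(m_pVertexLayout), so if CModelLoader binds its own layout created against a different shader, exactly this kind of mismatch would result.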

    Read the article

  • Oracle ACEs / ACE Directors in the OTN Lounge - JavaOne Latin America 2012

    - by Bob Rhubart
    What's an Oracle ACE? Oracle ACEs and Oracle ACE Directors are community members who have demonstrated both community leadership and expertise with Oracle technologies. You'll get a chance to interact with several Oracle ACEs and Oracle ACE Directors in the mini theater in the OTN Lounge this week during JavaOne Latin America 2012 in São Paulo, Brazil.

    Tuesday, 4 December 2012
    - 4:30 – 4:50: Co-existence between Applications' Unlimited and Fusion Applications. Gustavo Gonzales, Oracle ACE Director; CTO, IT Convergence
    - 4:50 – 5:10: Pipeline Table Functions. Marcelo Ochoa, Oracle ACE; CTO, Scotas.com
    - 5:10 – 5:30: Automatic Diagnostic Repository (ADR) Day-to-Day. Rodrigo Almeida, Oracle ACE; CDS - Condomínio de Soluções Corporativas

    Wednesday, 5 December 2012
    - 4:30 – 4:50: TBA
    - 4:50 – 5:10: Oracle VM Template - Facilitating the Construction Environment. David Siqueira, Oracle ACE; CDS Condominio de Soluções
    - 5:10 – 5:30: Database Migration with Minimal Downtime. Marcus Vinicius Miguel Pedro, Oracle ACE

    Read the article

  • Three Hidden Extensibility Gems in ASP.NET 4

    ASP.NET 4 introduces a few new extensibility APIs that live the hermit lifestyle away from the public eye. They're not exactly hidden - they are well documented on MSDN - but they aren't well publicized. It's about time we shine a spotlight on them. PreApplicationStartMethodAttribute: this new attribute allows you to have code run way early in the ASP.NET pipeline as an application starts up. I mean way early, even before Application_Start. This happens to also be before code in your App_Code folder...

    Read the article

  • Top Innovations for Sales Managers

    - by divya.malik
    Sales managers are always looking for ways to motivate their troops as well as make themselves more effective and productive. Here is a small X'mas present for those folks that are looking for some effective tips. Our friends at Selling Power magazine recently wrote an interesting blog post with the top 10 best practices for sales managers. Here we go:
    1. Harness social media
    2. Strategically align marketing campaigns with sales efforts
    3. Establish a customer-centric sales process
    4. Realize ROI with CRM
    5. Embrace online collaboration
    6. Improve accuracy in sales forecasting and pipeline metrics
    7. Coach for sales success
    8. Leverage mobile technology
    9. Focus on sales enablement
    10. Improve sales performance and compensation management
    We have a complete suite of sales applications to help increase sales revenues and sales productivity, as well as to improve your sales execution. You can find more details here. For more details on the Selling Power blog post, click here. Happy Holidays to you and your family.

    Read the article

  • How can you print a text file via gedit from the command line?

    - by dan
    I'd like to use gedit or some similar program just as a page formatter and pipe some text through it and onto the printer. A plain | lpr just doesn't cut it in the presentation department: the printed output is subpar, even if I tinker with the margin and font size options. But I like the way text looks when printed from gedit. Is there a way to have the best of both worlds and use a command-line pipeline to print a text file with gedit-quality formatting?

    Read the article

  • Extending SSIS with custom Data Flow components (Presentation)

    Download the slides and sample code from my Extending SSIS with custom Data Flow components presentation, first presented at the SQLBits II (The SQL) Community Conference.

    Abstract: Get some real-world insights into developing data flow components for SSIS. This starts with an introduction to the data flow pipeline engine, and explains the real differences between adapters and the three sub-types of transformation. Understanding how the different types of component behave and manage data is key to writing components of your own, and probably should be required knowledge for anyone building packages at all. Using sample code throughout, I will show you how to write components, as well as highlighting best practice and lessons learned. The sample code includes fully working example projects for source, destination and transformation components.

    Presentation & Samples (358KB): Extending SSIS with custom Data Flow components.zip

    Read the article

  • Correct Rotation and Translation with a 4x4 matrix

    - by sFuller
    I am using a 4x4 matrix to transform verts in a shader. I multiply an identity matrix by a rotation matrix by a translation matrix. I am trying to first rotate the verts and then translate them; however, in my result it appears that the verts are being translated and then rotated. My matrix looks something like this:

    m00 m01 m02 tx
    m10 m11 m12 ty
    m20 m21 m22 tz
    --- --- --- 1

    I am not using OpenGL's fixed-function pipeline; I am multiplying matrices on the client side and uploading the matrix to a GLSL shader. If it helps, I am using my own matrix multiplication code, but I have recreated this problem using matrices on my graphing calculator, so I don't believe my matrix code has errors. I'll include a visual aid if needed.
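    With the translation stored in the fourth column, as in the layout above, vertices are column vectors and transforms compose right to left: in v' = M * v the right-most factor is applied first. Multiplying identity * rotation * translation therefore gives M = R * T, which translates the vert first and then rotates it, matching the symptom described; to rotate first and then translate, the product needs to be M = T * R. A self-contained sketch (using a hypothetical Mat4 type, not the poster's own code) demonstrating both orders:

        #include <cstdio>

        // Minimal 4x4 matrix with translation in the fourth column, so
        // points are column vectors transformed as v' = M * v.
        struct Mat4 { float m[4][4]; };
        struct Vec4 { float x, y, z, w; };

        Mat4 identity() {
            Mat4 r = {};
            for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0f;
            return r;
        }

        Mat4 mul(const Mat4& a, const Mat4& b) {   // r = a * b
            Mat4 r = {};
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j)
                    for (int k = 0; k < 4; ++k)
                        r.m[i][j] += a.m[i][k] * b.m[k][j];
            return r;
        }

        Vec4 mul(const Mat4& a, const Vec4& v) {   // column vector on the right
            float in[4] = { v.x, v.y, v.z, v.w }, out[4] = {};
            for (int i = 0; i < 4; ++i)
                for (int k = 0; k < 4; ++k)
                    out[i] += a.m[i][k] * in[k];
            return Vec4{ out[0], out[1], out[2], out[3] };
        }

        int main() {
            Mat4 T = identity();          // translate by (5, 0, 0)
            T.m[0][3] = 5.0f;

            Mat4 R = identity();          // rotate 90 degrees about Z:
            float c = 0.0f, s = 1.0f;     // cos(90) = 0, sin(90) = 1
            R.m[0][0] = c; R.m[0][1] = -s;
            R.m[1][0] = s; R.m[1][1] =  c;

            Vec4 v = { 1.0f, 0.0f, 0.0f, 1.0f };

            Vec4 a = mul(mul(T, R), v);   // rotate, then translate: (5, 1, 0)
            Vec4 b = mul(mul(R, T), v);   // translate, then rotate: (0, 6, 0)
            printf("T*R: (%.1f, %.1f, %.1f)\n", a.x, a.y, a.z);
            printf("R*T: (%.1f, %.1f, %.1f)\n", b.x, b.y, b.z);
        }

    If the shader instead treats vertices as row vectors (v' = v * M, translation in the fourth row), the order flips, so the convention in use is worth confirming before swapping the multiplication.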

    Read the article

  • Checksum Transformation

    The Checksum Transformation computes a hash value, the checksum, across one or more columns, returning the result in the Checksum output column. The transformation provides functionality similar to the T-SQL CHECKSUM function, but is encapsulated within SQL Server Integration Services, for use within the pipeline without code or a SQL Server connection. As featured in The Microsoft Data Warehouse Toolkit by Joy Mundy and Warren Thornthwaite from the Kimball Group; have a look at the book samples, especially the sample package for custom SCD handling.

    All input columns are passed through the transformation unaltered; those selected are used to generate the checksum, which is passed out through a single output column, Checksum. This does not restrict the number of columns available downstream from the transformation, as columns always flow through a transformation. The Checksum output column is in addition to all existing columns within the pipeline buffer. The Checksum Transformation uses an algorithm based on the .NET Framework GetHashCode method; it is not consistent with the T-SQL CHECKSUM() or BINARY_CHECKSUM() functions. The transformation does not support the following Integration Services data types: DT_NTEXT, DT_IMAGE and DT_BYTES.

    ChecksumAlgorithm Property
    The ChecksumAlgorithm property is defined with an enumeration. It was first added in v1.3.0, when the FrameworkChecksum was added. All previous algorithms are still supported for backward compatibility as ChecksumAlgorithm.Original (0).
    - Original - The original checksum function, with known issues around column separators and null columns. This was deprecated in the first SQL Server 2005 RTM release.
    - FrameworkChecksum - The hash function is based on the .NET Object.GetHashCode() method for object types, which unfortunately differs between x86 and x64 systems. For that reason we now default to the CRC32 option.
    - CRC32 - Using a standard 32-bit cyclic redundancy check (CRC), this provides a more open implementation.

    The component is provided as an MSI file; however, to complete the installation you will have to add the transformation to the Visual Studio toolbox by hand. This process has been described in detail in the related FAQ entry How do I install a task or transform component? - just select Checksum from the SSIS Data Flow Items list in the Choose Toolbox Items window.

    Downloads
    The Checksum Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or install multiple versions and use them side by side if you have more than one version of SQL Server installed.
    - Checksum Transformation for SQL Server 2005
    - Checksum Transformation for SQL Server 2008
    - Checksum Transformation for SQL Server 2012

    Version History
    SQL Server 2012
    - Version 3.0.0.27 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2010)
    SQL Server 2008
    - Version 2.0.0.27 - Fix for CRC-32 algorithm that inadvertently made it sort dependent. Fix for race condition which sometimes led to the error "Item has already been added. Key in dictionary: '79764919'". Fix for upgrade mappings between 2005 and 2008. (19 Oct 2010)
    - Version 2.0.0.24 - SQL Server 2008 release. Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (29 Oct 2008)
    - Version 2.0.0.6 - SQL Server 2008 pre-release. This version was released by mistake as part of the site migration, and had known issues. (20 Oct 2008)
    SQL Server 2005
    - Version 1.5.0.43 - Fix for CRC-32 algorithm that inadvertently made it sort dependent. Fix for race condition which sometimes led to the error "Item has already been added. Key in dictionary: '79764919'". (19 Oct 2010)
    - Version 1.5.0.16 - Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (20 Oct 2008)
    - Version 1.4.0.0 - Installer refresh only. (22 Dec 2007)
    - Version 1.4.0.0 - Refresh for minor UI enhancements. (5 Mar 2006)
    - Version 1.3.0.0 - SQL Server 2005 RTM. The checksum algorithm has changed to improve cardinality when calculating multiple column checksums. The original algorithm is still available for backward compatibility. Fixed a custom UI bug with the Output column name not persisting. (10 Nov 2005)
    - Version 1.2.0.1 - SQL Server 2005 IDW 15 June CTP. A user interface is provided, as well as the ability to change the checksum output column name. (29 Aug 2005)
    - Version 1.0.0 - Public Release (Beta). (30 Oct 2004)
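    Since the notes above stress that CRC-32, unlike Object.GetHashCode(), is consistent across x86 and x64, a short illustration of the standard table-driven CRC-32 (the zip/zlib reflected polynomial 0xEDB88320) may be useful. This is a generic sketch of the algorithm family, not the transformation's actual source:

        #include <cstdint>
        #include <cstddef>
        #include <cstdio>

        static uint32_t crc_table[256];

        // Build the 256-entry lookup table for the reflected polynomial.
        static void init_crc_table() {
            for (uint32_t i = 0; i < 256; ++i) {
                uint32_t c = i;
                for (int k = 0; k < 8; ++k)
                    c = (c & 1) ? (0xEDB88320u ^ (c >> 1)) : (c >> 1);
                crc_table[i] = c;
            }
        }

        // Standard CRC-32: initial value 0xFFFFFFFF, final XOR with 0xFFFFFFFF.
        static uint32_t crc32(const unsigned char* data, size_t len) {
            uint32_t c = 0xFFFFFFFFu;
            for (size_t i = 0; i < len; ++i)
                c = crc_table[(c ^ data[i]) & 0xFFu] ^ (c >> 8);
            return c ^ 0xFFFFFFFFu;
        }

        int main() {
            init_crc_table();
            const unsigned char msg[] = "123456789";
            // The well-known check value for "123456789" is CBF43926.
            printf("%08X\n", crc32(msg, 9));
        }

    Because the result depends only on the byte stream fed in, the same column values hash identically on 32-bit and 64-bit hosts, which is exactly the property the FrameworkChecksum option lacked.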

    Read the article

  • How to design a character animation system?

    - by Andrea Benedetti
    I'm searching for suggestions and resources on the possible ways to design a character animation system. I mean a system built on top of the graphics engine (as the graphics engine I use Ogre3D, which provides an animation layer) and in strict contact with the logic of the game. It's for a sports title, so the question is not easy. Edit: What I'm searching for are suggestions and resources about action state machines (or animation state machines), built on top of the animation pipeline already provided by the graphics engine - that is, a state-driven animation interface for use by virtually all higher-level game code.
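    As an illustration of the kind of layer being asked about, here is a minimal sketch in which game logic only requests named states while the machine owns cross-fade timing; the names and types are invented for the example, and the engine-specific weight updates (Ogre::AnimationState calls, for instance) are only indicated in comments:

        #include <cstdio>
        #include <string>
        #include <unordered_map>

        // One logical action; 'clip' names the engine-side animation this
        // state drives (e.g. an Ogre::AnimationState).
        struct ActionState {
            std::string clip;
            bool looped;
        };

        class ActionStateMachine {
        public:
            void addState(const std::string& name, const std::string& clip, bool looped) {
                states_[name] = ActionState{ clip, looped };
            }

            // Game logic asks for a state; the machine starts a cross-fade.
            void request(const std::string& name, float blendTime) {
                if (name == current_ || states_.count(name) == 0) return;
                next_ = name;
                blendTime_ = blendTime;
                blendElapsed_ = 0.0f;
            }

            void update(float dt) {
                if (next_.empty()) return;   // nothing blending; just advance the current clip
                blendElapsed_ += dt;
                float w = (blendTime_ > 0.0f) ? blendElapsed_ / blendTime_ : 1.0f;
                if (w > 1.0f) w = 1.0f;
                // An engine integration would set weights here: the outgoing
                // clip gets (1 - w) and the incoming clip gets w.
                printf("blend %s -> %s, weight %.2f\n",
                       current_.c_str(), next_.c_str(), w);
                if (w >= 1.0f) { current_ = next_; next_.clear(); }
            }

        private:
            std::unordered_map<std::string, ActionState> states_;
            std::string current_ = "idle";
            std::string next_;
            float blendTime_ = 0.0f;
            float blendElapsed_ = 0.0f;
        };

        int main() {
            ActionStateMachine fsm;
            fsm.addState("idle", "Idle", true);
            fsm.addState("run",  "Run",  true);
            fsm.addState("kick", "Kick", false);

            fsm.request("run", 0.25f);       // game logic: the player started moving
            for (int i = 0; i < 5; ++i) fsm.update(0.1f);
        }

    The useful property for a sports title is that gameplay code only ever names actions and blend times; clip selection, looping, and transition rules stay inside this layer, so it can grow (per-state transition tables, layered upper/lower-body machines) without touching game logic.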

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >