Search Results



  • Virtual functions - base class pointer

    - by user980411
    I understand why a base class pointer is made to point to a derived class object, but I fail to understand why we need to assign a base class object to it when the pointer is of the base class type itself. Can anyone please explain that?

        #include <iostream>
        using namespace std;

        class base {
        public:
            virtual void vfunc() { cout << "This is base's vfunc().\n"; }
        };

        class derived1 : public base {
        public:
            void vfunc() { cout << "This is derived1's vfunc().\n"; }
        };

        int main() {
            base *p, b;
            derived1 d1;

            // point to base
            p = &b;
            p->vfunc(); // access base's vfunc()

            // point to derived1
            p = &d1;
            p->vfunc(); // access derived1's vfunc()

            return 0;
        }


  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
    Over the last few weeks I’ve been working on my Web load testing utility West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is its built-in capture tool, which acts as an HTTP proxy to capture HTTP and HTTPS traffic from most Windows HTTP clients, including Web browsers as well as standalone Windows applications and services. To make this happen, I used Eric Lawrence’s awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application rolled into an easy-to-use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content! For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request – URL, headers and any content posted by the client. The result is a semi-generic capture form. In this post I’m going to demonstrate how easy it is to use FiddlerCore to build this HTTP capture form. If you want to jump right in, here are the links to get Telerik’s FiddlerCore and the code for the demo provided here:

    FiddlerCore Download
    FiddlerCore on NuGet
    Show me the Code (WebSurge integration code from GitHub)
    Download the WinForms Sample Form
    West Wind WebSurge (example implementation in live app)

Note that FiddlerCore is bound by a license for commercial usage – see license.txt in the FiddlerCore distribution for details.

Integrating FiddlerCore

FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:

    PM> Install-Package FiddlerCore

The library consists of FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I’ll have more on SSL captures and certificate installation later in this post. But first let’s see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the above capture form.

Capturing HTTP Content

Once the library is installed it’s super easy to hook up Fiddler functionality. Fiddler includes a number of static methods on the FiddlerApplication object that can be called to hook up callback events as well as actually start monitoring HTTP URLs. In the following code, directly lifted from WebSurge, I configure a few filter options on a form-level object from the user inputs shown on the form by assigning them to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one-time values initialized and set on the form.
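As a rough sketch, the capture options object that the following code populates might look something like this. The class and property names below are hypothetical and simply mirror the filters applied in Start() and AfterSessionComplete(); the actual WebSurge configuration class may differ:

    using System.Collections.Generic;

    // Hypothetical shape of the capture options object referenced below;
    // property names mirror the filters applied in the capture code.
    public class UrlCaptureConfiguration
    {
        // Skip static resources (images, css, scripts etc.)
        public bool IgnoreResources { get; set; }

        // Capture only traffic from this process id (0 = all processes)
        public int ProcessId { get; set; }

        // Capture only traffic for this domain (empty = all domains)
        public string CaptureDomain { get; set; }

        // Extensions and URL fragments to exclude when IgnoreResources is set
        public List<string> ExtensionFilterExclusions { get; set; } = new List<string>();
        public List<string> UrlFilterExclusions { get; set; } = new List<string>();
    }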
Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed, and start up the proxy service:

    void Start()
    {
        if (tbIgnoreResources.Checked)
            CaptureConfiguration.IgnoreResources = true;
        else
            CaptureConfiguration.IgnoreResources = false;

        string strProcId = txtProcessId.Text;
        if (strProcId.Contains('-'))
            strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim();
        strProcId = strProcId.Trim();

        int procId = 0;
        if (!string.IsNullOrEmpty(strProcId))
        {
            if (!int.TryParse(strProcId, out procId))
                procId = 0;
        }
        CaptureConfiguration.ProcessId = procId;
        CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text;

        FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;
        FiddlerApplication.Startup(8888, true, true, true);
    }

The key lines for FiddlerCore are just the last two lines of code: the event hookup and the Startup() method call. Here I only hook up the AfterSessionComplete event, but there are a number of other events that hook various stages of the HTTP request cycle you can also hook into. Other events include BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on.

In my case I want to capture the request data, and I actually have several options for capturing it. AfterSessionComplete is the last event that fires in the request sequence and it’s the most common choice for capturing all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look at both the request and response data, so this will be the most common place to hook into if you’re capturing content.

The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers and it looks something like this:

    private void FiddlerApplication_AfterSessionComplete(Session sess)
    {
        // Ignore HTTPS connect requests
        if (sess.RequestMethod == "CONNECT")
            return;

        if (CaptureConfiguration.ProcessId > 0)
        {
            if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId)
                return;
        }

        if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain))
        {
            if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower())
                return;
        }

        if (CaptureConfiguration.IgnoreResources)
        {
            string url = sess.fullUrl.ToLower();

            var extensions = CaptureConfiguration.ExtensionFilterExclusions;
            foreach (var ext in extensions)
            {
                if (url.Contains(ext))
                    return;
            }

            var filters = CaptureConfiguration.UrlFilterExclusions;
            foreach (var urlFilter in filters)
            {
                if (url.Contains(urlFilter))
                    return;
            }
        }

        if (sess == null || sess.oRequest == null || sess.oRequest.headers == null)
            return;

        string headers = sess.oRequest.headers.ToString();
        var reqBody = sess.GetRequestBodyAsString();

        // if you wanted to capture the response
        //string respHeaders = sess.oResponse.headers.ToString();
        //var respBody = sess.GetResponseBodyAsString();

        // replace the HTTP line to inject full URL
        string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion;
        int at = headers.IndexOf("\r\n");
        if (at < 0)
            return;
        headers = firstLine + "\r\n" + headers.Substring(at + 1);

        string output = headers + "\r\n" +
                        (!string.IsNullOrEmpty(reqBody) ? reqBody + "\r\n" : string.Empty) +
                        Separator + "\r\n\r\n";

        BeginInvoke(new Action<string>((text) =>
        {
            txtCapture.AppendText(text);
            UpdateButtonStatus();
        }), output);
    }

The code starts by filtering out some requests based on the CaptureOptions I set before the capture is started. These options/filters are applied when requests actually come in. This is very useful to help narrow down the requests that are captured for playback based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as filtering out some request types like static resources – images, css, scripts etc. This is of course optional, but I think it’s a common scenario and WebSurge makes good use of this feature.

AfterSessionComplete, like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I’m interested in the raw request headers and body only; as you can see in the commented code you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI it has to be marshaled back to the UI thread with BeginInvoke, which here simply takes the generated headers and appends them to the existing textbox text on the form.

As each request is processed, the headers are captured and appended to the bottom of the textbox, resulting in a session HTTP capture in the format that WebSurge internally supports, which is basically raw request headers with a customized first HTTP header line that includes the full URL rather than a server-relative URL. When the capture is done the user can either copy the raw HTTP session to the clipboard, or directly save it to file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data.

While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasons why FiddlerCore is so powerful. You get to choose what content you want to look at as part of your own application logic, and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or, in the case of WebSurge, save it to disk and automatically open the captured session as a new load test.

Stopping the FiddlerCore Proxy

Finally, to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.Shutdown() method:

    void Stop()
    {
        FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete;

        if (FiddlerApplication.IsStarted())
            FiddlerApplication.Shutdown();
    }

As you can see, adding HTTP capture functionality to an application is very straightforward. FiddlerCore offers tons of features I’m not even touching on here – I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore’s simple API interface. Sky’s the limit! The source code for this sample capture form (WinForms) is provided as part of this article.
Adding Fiddler Certificates with FiddlerCore

One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler with HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring Fiddler to be installed and then using a separate application to configure the SSL functionality isn’t ideal. Fortunately FiddlerCore actually includes the tools to register the Fiddler certificate directly using FiddlerCore.

Why does Fiddler need a Certificate in the first Place?

Fiddler and FiddlerCore are essentially HTTP proxies, which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forwarding the HTTP data to the original client. Fiddler injects itself as the system proxy using the WinInet Windows settings, which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system-level proxy settings before establishing new HTTP connections, and that’s why most clients automatically work once Fiddler – or FiddlerCore/WebSurge – are running.

For plain HTTP requests this just works – Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically, but it could be any port). For SSL however, this is not quite as simple – Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it has to also act as an SSL server and provide a certificate that the client trusts. This won’t be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler, the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence’s blog post on certificates.

If you’re using the desktop version of Fiddler you can install a local certificate into the Windows certificate store. Fiddler proper does this from the Options menu. This operation does several things:

    It installs the Fiddler Root Certificate
    It sets trust to this Root Certificate
    A new client certificate is generated for each HTTPS site monitored

Certificate Installation with FiddlerCore

You can also provide this same functionality using FiddlerCore, which includes a CertMaker class.
Using CertMaker is straightforward, and it provides an easy way to create some simple helpers that can install and uninstall a Fiddler root certificate:

    public static bool InstallCertificate()
    {
        if (!CertMaker.rootCertExists())
        {
            if (!CertMaker.createRootCert())
                return false;

            if (!CertMaker.trustRootCert())
                return false;
        }

        return true;
    }

    public static bool UninstallCertificate()
    {
        if (CertMaker.rootCertExists())
        {
            if (!CertMaker.removeFiddlerGeneratedCerts(true))
                return false;
        }

        return true;
    }

InstallCertificate() works by first checking whether the root certificate is already installed and, if it isn’t, goes ahead and creates a new one. Creating the certificate is a two-step process – first the actual certificate is created and then it’s moved into the certificate store to become trusted. I’m not sure why you’d ever split these operations up, since a cert created without trust isn’t going to be of much value, but there are two distinct steps.

When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know that you’re about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root, since you are essentially installing a man-in-the-middle certificate. It’s quite safe to use this generated root certificate, because it’s been specifically generated for your machine and thus is not usable from external sources; the only way to use this certificate in a trusted way is from the local machine. IOW, unless somebody has physical access to your machine, there’s no useful way to hijack this certificate and use it for nefarious purposes (see Eric’s post for more details).

Once the root certificate has been installed, FiddlerCore/Fiddler create new certificates for each site that is connected to with HTTPS. You can end up with quite a few temporary certificates in your certificate store. To uninstall, you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the remove Fiddler certificates button, or you can use FiddlerCore’s CertMaker.removeFiddlerGeneratedCerts(), which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall, you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application’s operation by doing so as well.

When to check for an installed Certificate

Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that even on a fast machine takes a few seconds. Further, the trust operation pops up a message box, so you probably don’t want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code – in which case the certificate installation only triggers when a certificate is in fact not installed. Personally I like to make certificate installation explicit – just like Fiddler does – so in WebSurge I use a small drop-down option on the menu to install or uninstall the SSL certificate. This calls the InstallCertificate and UninstallCertificate functions respectively – the experience is similar to what you get in Fiddler, with the extra dialog box popping up to prompt confirmation for installation of the root certificate.
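Putting the pieces together, an SSL-capable capture startup might look roughly like this (a sketch only – CertificateHelper is a hypothetical class wrapping the InstallCertificate()/UninstallCertificate() helpers shown above):

    // Ensure the Fiddler root certificate is trusted before decrypting SSL.
    // CertificateHelper is a hypothetical wrapper around the helpers above.
    if (!CertificateHelper.InstallCertificate())
        throw new InvalidOperationException("Could not install the Fiddler root certificate.");

    // HTTPS sessions will now be decrypted and fire AfterSessionComplete
    FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;
    FiddlerApplication.Startup(8888, true, true, true);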
Once the cert is installed you can then capture SSL requests. There’s a gotcha, however…

Gotcha: FiddlerCore Certificates don’t stick by Default

When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate, and immediately after installation I was able to capture HTTPS requests. Then I would exit the application, come back in, try the same HTTPS capture again, and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate a new certificate would get added to the certificate store, resulting in a bunch of duplicated root certificates with different keys. What the heck?

CertMaker and BCMakeCert create non-sticky Certificates

It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler root certificate. FiddlerCore however installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe, which uses the Windows Crypto API. The assemblies provide support for non-Windows operation for Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption.

The bottom line is that the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default, as the certificates created with them are not cached as they are in Fiddler proper. To get certificates to ‘stick’ you have to explicitly cache the certificates in Fiddler’s internal preferences. A cache-aware version of InstallCertificate looks something like this:

    public static bool InstallCertificate()
    {
        if (!CertMaker.rootCertExists())
        {
            if (!CertMaker.createRootCert())
                return false;

            if (!CertMaker.trustRootCert())
                return false;

            // persist the generated cert and key so they survive restarts
            App.Configuration.UrlCapture.Cert =
                FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null);
            App.Configuration.UrlCapture.Key =
                FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null);
        }

        return true;
    }

    public static bool UninstallCertificate()
    {
        if (CertMaker.rootCertExists())
        {
            if (!CertMaker.removeFiddlerGeneratedCerts(true))
                return false;
        }

        App.Configuration.UrlCapture.Cert = null;
        App.Configuration.UrlCapture.Key = null;

        return true;
    }

In this code I store the Fiddler cert and private key in application configuration settings that are stored with the application settings (the App.Configuration.UrlCapture object). These settings automatically persist when WebSurge is shut down. The values are read out of Fiddler’s internal preferences store, which is set after a new certificate has been created. Likewise I clear out the configuration settings when the certificate is uninstalled. In order for these settings to be used, you have to also load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made.
I do this in the capture form’s constructor:

    public FiddlerCapture(StressTestForm form)
    {
        InitializeComponent();

        CaptureConfiguration = App.Configuration.UrlCapture;
        MainForm = form;

        // restore the persisted cert and key into Fiddler's preferences
        // before any call to CertMaker.rootCertExists() is made
        if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert))
        {
            FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key", App.Configuration.UrlCapture.Key);
            FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert", App.Configuration.UrlCapture.Cert);
        }
    }

This is kind of a drag to do and not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore.

MakeCert provides sticky Certificates and the same functionality as Fiddler

There’s actually an easier way. If you want to skip the above Fiddler preference configuration code in your application, you can choose to distribute MakeCert.exe instead of CertMaker.dll and BCMakeCert.dll. When you use MakeCert.exe, the certificate settings are stored in Windows, so they are available without any custom configuration inside of your application. It’s easier to integrate and, as long as you run on Windows and don’t need to support iOS or Android devices, it is simply easier to deal with.

To integrate it into your project, remove the reference to CertMaker.dll (and the BCMakeCert.dll assembly) from your project. Instead, copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None and Copy to Output Directory to Copy if newer. At that point the CertMaker.dll reference is gone from the project, and the CertMaker.dll and BCMakeCert.dll files are gone from disk. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed, FiddlerCore checks for it first before using the assemblies, so as long as MakeCert.exe exists it’ll be used for certificate creation (at least on Windows).

Summary

FiddlerCore is a pretty sweet tool, and it’s absolutely awesome that we get to plug most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up because it’s a big job to get HTTP right – especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily.

The only downside is FiddlerCore’s documentation for more advanced features like certificate installation, which is pretty sketchy. While for the most part FiddlerCore’s feature set is easy to work with without any documentation, advanced features are often not intuitive to glean by just using IntelliSense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn’t much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble, the forum is probably the first place to look, and then ask a question if you can’t find the answer. The best documentation you can find is Eric’s Fiddler Book, which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler’s feature set as well as providing great insights into the HTTP protocol.
The second half of the book, which gets into the innards of HTTP, is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP – it’s well worth the read. While the book has tons of information in a very readable format, it’s unfortunately not a great reference, as it’s hard to find things in the book, and because it’s not available online you can’t electronically search for the great content in it. But it’s hard to complain about any of this given the obvious effort and love that’s gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time while keeping it free for all. Kudos!

Resources

    FiddlerCore Download
    FiddlerCore NuGet
    Fiddler Capture Sample Form
    Fiddler Capture Form in West Wind WebSurge (GitHub)
    Eric Lawrence’s Fiddler Book

© Rick Strahl, West Wind Technologies, 2005-2014. Posted in .NET, HTTP.


  • Rendering ASP.NET Script References into the Html Header

    - by Rick Strahl
    One thing that I’ve come to appreciate in ASP.NET control development that uses JavaScript is the ability to have more control over script and script-include placement than ASP.NET provides natively. Specifically, in ASP.NET you can use either the ClientScriptManager or ScriptManager to embed scripts and script references into pages via code. This works reasonably well, but the script references that get generated are rendered into the HTML body and there’s very little operational control over the placement of scripts. If you have multiple controls, or several of the same control, that need to place the same scripts onto the page, it’s not difficult to end up with scripts that render in the wrong order and stop working correctly. This is especially critical if you load script libraries with dependencies, either via resources or even if you are rendering references to CDN resources.

Natively, ASP.NET provides a host of methods that help with embedding scripts into the page via either Page.ClientScript or the ASP.NET ScriptManager control (both with slightly different syntax):

    RegisterClientScriptBlock – Renders a script block at the top of the HTML body and should be used for embedding callable functions/classes.
    RegisterStartupScript – Renders a script block just prior to the </form> tag and should be used for embedding code that should execute when the page is first loaded. Not recommended – use jQuery.ready() or equivalent load-time routines.
    RegisterClientScriptInclude – Embeds a reference to a script from a url into the page.
    RegisterClientScriptResource – Embeds a reference to a script from a resource file, generating a long resource file string.

All four of these methods render their <script> tags into the HTML body (see the sketch below). The script blocks give you a little bit of control by having a ‘top’ and ‘bottom’ of the document location, which gives you some flexibility over script placement and precedence. Script includes and resource urls unfortunately do not even get that much control – references are simply rendered into the page in the order of declaration. The ASP.NET ScriptManager control facilitates this task a little bit with the ability to specify scripts in code and the ability to programmatically check which scripts have already been registered, but it doesn’t provide any more control over the script rendering process itself. Further, the ScriptManager is a bear to deal with generically, because generic code always has to check and see if it is actually present.

Some time ago I posted a ClientScriptProxy class that helps with managing the latter process of sending script references either to ClientScript or to ScriptManager if it’s available. Since I last posted about this there have been a number of improvements in this API, one of which is the ability to control placement of scripts and script includes in the page, which I think is rather important and a missing feature in the ASP.NET native functionality.
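For reference, here’s a minimal sketch of what the stock Page.ClientScript calls described in the list above look like (illustrative code; the keys and script bodies are made up):

    using System;
    using System.Web.UI;

    public partial class DemoPage : Page
    {
        protected override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);

            // Script block at the top of the HTML body - callable functions
            Page.ClientScript.RegisterClientScriptBlock(typeof(DemoPage), "hello",
                "function hello() { alert('hello'); }", true);

            // Script block just before </form> - page-load code
            Page.ClientScript.RegisterStartupScript(typeof(DemoPage), "callHello",
                "hello();", true);

            // External script reference - rendered in declaration order
            Page.ClientScript.RegisterClientScriptInclude(typeof(DemoPage), "jquery",
                ResolveUrl("~/scripts/jquery.min.js"));
        }
    }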
Handling ScriptRenderModes

One of the big enhancements that I’ve come to rely on is the ability of the various script rendering functions described above to render in multiple locations:

    /// <summary>
    /// Determines how scripts are included into the page
    /// </summary>
    public enum ScriptRenderModes
    {
        /// <summary>
        /// Inherits the setting from the control or from the ClientScript.DefaultScriptRenderMode
        /// </summary>
        Inherit,
        /// <summary>
        /// Renders the script include at the location of the control
        /// </summary>
        Inline,
        /// <summary>
        /// Renders the script include into the bottom of the header of the page
        /// </summary>
        Header,
        /// <summary>
        /// Renders the script include into the top of the header of the page
        /// </summary>
        HeaderTop,
        /// <summary>
        /// Uses ClientScript or ScriptManager to embed the script include to
        /// provide standard ASP.NET style rendering in the HTML body.
        /// </summary>
        Script,
        /// <summary>
        /// Renders script at the bottom of the page before the last Page.Controls
        /// literal control. Note this may result in unexpected behavior
        /// if /body and /html are not the last thing in the markup page.
        /// </summary>
        BottomOfPage
    }

This enum is then applied to the various Register functions to allow more control over where scripts actually show up. Why is this useful? I often render scripts out of control resources, and these scripts often include things like a JavaScript library (jQuery) and a few plug-ins. The order in which these load is critical, so that jQuery.js always loads before any plug-in, for example. Typically I end up with a general script layout like this:

    Core libraries: HeaderTop
    Plug-ins: Header
    Script blocks: Header or Script, depending on other dependencies

There’s also an option to render scripts and CSS at the very bottom of the page before the last Page control on the page, which can be useful for speeding up page load when lots of scripts are loaded. The API syntax of the ClientScriptProxy methods is closely compatible with ScriptManager’s, using static methods and control references to gain access to the page and embed scripts.
For example, to render some script into the current page header:

    // Create script block in header
    ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
        "hello_function",
        "function helloWorld() { alert('hello'); }",
        true, ScriptRenderModes.Header);

    // Same again - shouldn't be rendered because it's the same id
    ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
        "hello_function",
        "function helloWorld() { alert('hello'); }",
        true, ScriptRenderModes.Header);

    // Create a second script block in header
    ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
        "hello_function2",
        "function helloWorld2() { alert('hello2'); }",
        true, ScriptRenderModes.Header);

    // This just calls ClientScript and renders into bottom of document
    ClientScriptProxy.Current.RegisterStartupScript(this, typeof(ControlResources),
        "call_hello",
        "helloWorld();helloWorld2();",
        true);

which generates:

    <html xmlns="http://www.w3.org/1999/xhtml" >
    <head><title></title>
        <script type="text/javascript">
            function helloWorld() { alert('hello'); }
        </script>
        <script type="text/javascript">
            function helloWorld2() { alert('hello2'); }
        </script>
    </head>
    <body>
    …
        <script type="text/javascript">
        //<![CDATA[
        helloWorld();helloWorld2();//]]>
        </script>
    </form>
    </body>
    </html>

Note that the scripts are generated into the header rather than the body, except for the last script block, which is the call to RegisterStartupScript. In general I wouldn’t recommend using RegisterStartupScript – ever. It’s a much better practice to use a script-based load event to handle ‘startup’ code that should fire when the page first loads. So instead of the code above I’d actually recommend doing:

    ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
        "call_hello",
        "$().ready( function() { alert('hello2'); });",
        true, ScriptRenderModes.Header);

assuming you’re using jQuery on the page. For script includes from a url, the following demonstrates how to embed scripts into the header.
This example injects jQuery and jQuery UI script references from the Google CDN, then checks each with a script block to ensure that it has loaded and, if not, loads it from a local server location:

    // load jquery from CDN
    ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources),
        "http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js",
        ScriptRenderModes.HeaderTop);

    // check if jquery loaded - if it didn't we're not online
    string scriptCheck =
        @"if (typeof jQuery != 'object')
              document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));";

    string jQueryUrl = ClientScriptProxy.Current.GetWebResourceUrl(this, typeof(ControlResources),
        ControlResources.JQUERY_SCRIPT_RESOURCE);

    ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
        "jquery_register", string.Format(scriptCheck, jQueryUrl),
        true, ScriptRenderModes.HeaderTop);

    // Load jquery-ui from cdn
    ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources),
        "http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js",
        ScriptRenderModes.Header);

    // check if we need to load from local
    string jQueryUiUrl = ResolveUrl("~/scripts/jquery-ui-custom.min.js");

    ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
        "jqueryui_register", string.Format(scriptCheck, jQueryUiUrl),
        true, ScriptRenderModes.Header);

    // Create script block in header
    ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
        "hello_function", "$().ready( function() { alert('hello'); });",
        true, ScriptRenderModes.Header);

which in turn generates this HTML:

    <html xmlns="http://www.w3.org/1999/xhtml" >
    <head>
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript"></script>
        <script type="text/javascript">
            if (typeof jQuery != 'object')
                document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=DIykvYhJ_oXCr-TA_dr35i4AayJoV1mgnQAQGPaZsoPM2LCdvoD3cIsRRitHKlKJfV5K_jQvylK7tsqO3lQIFw2&t=633979863959332352' type='text/javascript'%3E%3C/script%3E"));
        </script>
        <title></title>
        <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js" type="text/javascript"></script>
        <script type="text/javascript">
            if (typeof jQuery != 'object')
                document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/scripts/jquery-ui-custom.min.js' type='text/javascript'%3E%3C/script%3E"));
        </script>
        <script type="text/javascript">
            $().ready(function() { alert('hello'); });
        </script>
    </head>
    <body>
    …
    </body>
    </html>

As you can see, there’s a bit more control in this process, as you can inject both script includes and script blocks into the document at the top or bottom of the header, plus, if necessary, at the usual body locations. This is quite useful, especially if you create custom server controls that interoperate with script and have certain dependencies. The above is also a good example of a switchable routine: by default it pulls from the Google CDN, but a configuration switch could automatically pull from the local development copies when you’re doing development, for example.

How does it work?

As mentioned, the ClientScriptProxy object mimics many of the ScriptManager script-related methods and so provides close API compatibility with it, although it contains many additional overloads that enhance functionality.
It works against ScriptManager if it’s available on the page, or Page.ClientScript if it’s not, so it provides a single unified frontend for script access. There are, however, many overloads of the original ScriptManager methods, like the one below, that provide additional functionality.

The implementation of script header rendering is pretty straightforward – as long as a server header is available (i.e. the header has runat="server" set). Otherwise these routines fall back to the default document-level insertions of ScriptManager/ClientScript. Given that there is a server header, it’s relatively easy to generate the script tags and code and append them to the header, either at the top or the bottom. I suspect Microsoft didn’t provide header rendering functionality precisely because a runat="server" header is not required by ASP.NET, so behavior would be slightly unpredictable. That’s not really a problem for a custom implementation, however.

Here’s the RegisterClientScriptBlock implementation that takes a ScriptRenderModes parameter to allow header rendering:

    /// <summary>
    /// Renders client script block with the option of rendering the script block in
    /// the Html header
    ///
    /// For this to work Header must be defined as runat="server"
    /// </summary>
    /// <param name="control">any control instance, typically the page</param>
    /// <param name="type">Type that identifies this rendering</param>
    /// <param name="key">unique script block id</param>
    /// <param name="script">The script code to render</param>
    /// <param name="addScriptTags">Ignored for header rendering, used for all other insertions</param>
    /// <param name="renderMode">Where the block is rendered</param>
    public void RegisterClientScriptBlock(Control control, Type type,
                                          string key, string script,
                                          bool addScriptTags,
                                          ScriptRenderModes renderMode)
    {
        if (renderMode == ScriptRenderModes.Inherit)
            renderMode = DefaultScriptRenderMode;

        if (control.Page.Header == null ||
            renderMode != ScriptRenderModes.HeaderTop &&
            renderMode != ScriptRenderModes.Header &&
            renderMode != ScriptRenderModes.BottomOfPage)
        {
            RegisterClientScriptBlock(control, type, key, script, addScriptTags);
            return;
        }

        // No dupes - ref script include only once
        const string identifier = "scriptblock_";
        if (HttpContext.Current.Items.Contains(identifier + key))
            return;

        HttpContext.Current.Items.Add(identifier + key, string.Empty);

        StringBuilder sb = new StringBuilder();

        // Embed in header
        sb.AppendLine("\r\n<script type=\"text/javascript\">");
        sb.AppendLine(script);
        sb.AppendLine("</script>");

        int? index = HttpContext.Current.Items["__ScriptResourceIndex"] as int?;
        if (index == null)
            index = 0;

        if (renderMode == ScriptRenderModes.HeaderTop)
        {
            control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString()));
            index++;
        }
        else if (renderMode == ScriptRenderModes.Header)
            control.Page.Header.Controls.Add(new LiteralControl(sb.ToString()));
        else if (renderMode == ScriptRenderModes.BottomOfPage)
            control.Page.Controls.AddAt(control.Page.Controls.Count - 1, new LiteralControl(sb.ToString()));

        HttpContext.Current.Items["__ScriptResourceIndex"] = index;
    }

Note that the routine has to keep track of inserted items by id, so that if the same item is added again with the same key it won’t generate two script entries. Additionally, the code has to keep track of how many insertions have been made at the top of the document, so that entries are added in the proper order.
The RegisterScriptInclude method is similar, but there’s some additional logic here to deal with script file references and ClientScriptProxy’s (optional) custom resource handler that provides script compression:

    /// <summary>
    /// Registers a client script reference into the page with the option to specify
    /// the script location in the page
    /// </summary>
    /// <param name="control">Any control instance - typically page</param>
    /// <param name="type">Type that acts as qualifier (uniqueness)</param>
    /// <param name="url">the Url to the script resource</param>
    /// <param name="renderMode">Determines where the script is rendered</param>
    public void RegisterClientScriptInclude(Control control, Type type,
                                            string url, ScriptRenderModes renderMode)
    {
        const string STR_ScriptResourceIndex = "__ScriptResourceIndex";

        if (string.IsNullOrEmpty(url))
            return;

        if (renderMode == ScriptRenderModes.Inherit)
            renderMode = DefaultScriptRenderMode;

        // Extract just the script filename
        string fileId = null;

        // Check resource IDs and try to match to mapped file resources
        // Used to allow scripts not to be loaded more than once whether
        // embedded manually (script tag) or via resources with ClientScriptProxy
        if (url.Contains(".axd?r="))
        {
            string res = HttpUtility.UrlDecode(StringUtils.ExtractString(url, "?r=", "&", false, true));
            foreach (ScriptResourceAlias item in ScriptResourceAliases)
            {
                if (item.Resource == res)
                {
                    fileId = item.Alias + ".js";
                    break;
                }
            }
            if (fileId == null)
                fileId = url.ToLower();
        }
        else
            fileId = Path.GetFileName(url).ToLower();

        // No dupes - ref script include only once
        const string identifier = "script_";
        if (HttpContext.Current.Items.Contains(identifier + fileId))
            return;

        HttpContext.Current.Items.Add(identifier + fileId, string.Empty);

        // just use script manager or ClientScriptManager
        if (control.Page.Header == null ||
            renderMode == ScriptRenderModes.Script ||
            renderMode == ScriptRenderModes.Inline)
        {
            RegisterClientScriptInclude(control, type, url, url);
            return;
        }

        // Retrieve script index in header
        int? index = HttpContext.Current.Items[STR_ScriptResourceIndex] as int?;
        if (index == null)
            index = 0;

        StringBuilder sb = new StringBuilder(256);

        url = WebUtils.ResolveUrl(url);

        // Embed in header
        sb.AppendLine("\r\n<script src=\"" + url + "\" type=\"text/javascript\"></script>");

        if (renderMode == ScriptRenderModes.HeaderTop)
        {
            control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString()));
            index++;
        }
        else if (renderMode == ScriptRenderModes.Header)
            control.Page.Header.Controls.Add(new LiteralControl(sb.ToString()));
        else if (renderMode == ScriptRenderModes.BottomOfPage)
            control.Page.Controls.AddAt(control.Page.Controls.Count - 1, new LiteralControl(sb.ToString()));

        HttpContext.Current.Items[STR_ScriptResourceIndex] = index;
    }

There’s a little more code here that deals with cleaning up the passed-in url, and also some custom handling of script resources that run through the ScriptCompressionModule – any script resources loaded in this fashion are automatically cached based on the resource id. Raw urls extract just the filename from the url and cache based on that. All of this is to avoid doubling up of scripts when called multiple times, for example by multiple instances of the same control, or by several controls that all load the same resources/includes.
Finally, RegisterClientScriptResource utilizes the previous method to wrap the WebResourceUrl, as well as some custom functionality for the resource compression module:

    /// <summary>
    /// Returns a WebResource or ScriptResource URL for script resources that are to be
    /// embedded as script includes.
    /// </summary>
    /// <param name="control">Any control</param>
    /// <param name="type">A type in the assembly where resources are located</param>
    /// <param name="resourceName">Name of the resource to load</param>
    /// <param name="renderMode">Determines where in the document the link is rendered</param>
    public void RegisterClientScriptResource(Control control, Type type,
                                             string resourceName,
                                             ScriptRenderModes renderMode)
    {
        string resourceUrl = GetClientScriptResourceUrl(control, type, resourceName);
        RegisterClientScriptInclude(control, type, resourceUrl, renderMode);
    }

    /// <summary>
    /// Works like GetWebResourceUrl but can be used with javascript resources
    /// to allow use of resource compression (if the module is loaded).
    /// </summary>
    /// <param name="control"></param>
    /// <param name="type"></param>
    /// <param name="resourceName"></param>
    /// <returns></returns>
    public string GetClientScriptResourceUrl(Control control, Type type, string resourceName)
    {
    #if IncludeScriptCompressionModuleSupport
        // If wwScriptCompression Module through Web.config is loaded use it to compress
        // script resources by using wcSC.axd Url the module intercepts
        if (ScriptCompressionModule.ScriptCompressionModuleActive)
        {
            string url = "~/wwSC.axd?r=" + HttpUtility.UrlEncode(resourceName);
            if (type.Assembly != GetType().Assembly)
                url += "&t=" + HttpUtility.UrlEncode(type.FullName);

            return WebUtils.ResolveUrl(url);
        }
    #endif

        return control.Page.ClientScript.GetWebResourceUrl(type, resourceName);
    }

This code merely retrieves the resource URL and then calls back to RegisterClientScriptInclude with the URL to be embedded, which means there’s nothing specific to deal with other than the custom compression module logic, which is nice and easy.

What else is there in ClientScriptProxy?

ClientScriptProxy also provides a few other useful services beyond what I’ve already covered here:

    Transparent ScriptManager and ClientScript calls – ClientScriptProxy includes a host of routines that help figure out whether a ScriptManager is available or not, and all functions in this class call the appropriate object – ScriptManager or ClientScript – that is available in the current page, to ensure that scripts get embedded into pages properly. This is especially useful for control development, where controls have no control over the scripting environment in place on the page.
    RegisterCssLink and RegisterCssResource – Much like the script embedding functions, these two methods allow embedding of CSS links. CSS links are appended to the header or to a form declared with runat="server".
    LoadControlScript – A high-level resource loading routine that can be used to easily switch between different script linking modes. It supports loading from a WebResource, from a url, or not loading anything at all. This is very useful if you build controls that deal with specification of resource urls/ids in a standard way.

Check out the full Code

You can check out the full code to the ClientScriptProxy class here:

    ClientScriptProxy.cs
    ClientScriptProxy Documentation (class reference)

Note that ClientScriptProxy has a few dependencies in the West Wind Web Toolkit, of which it is a part.
ControlResources holds a few standard constants and script resource links, and the ScriptCompressionModule is referenced in a few of the script inclusion methods. There’s also a useful ScriptContainer companion control to the ClientScriptProxy that allows scripts to be placed in the page’s markup, including the ability to specify the script location and script minification options. You can find all the dependencies in the West Wind Web Toolkit repository:

    West Wind Web Toolkit Repository
    West Wind Web Toolkit Home Page

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, JavaScript.


  • Creating a dynamic proxy generator with c# – Part 3 – Creating the constructors

    - by SeanMcAlinden
    Creating a dynamic proxy generator with c# – Part 1 – Creating the Assembly builder, Module builder and caching mechanism Creating a dynamic proxy generator with c# – Part 2 – Interceptor Design For the latest code go to http://rapidioc.codeplex.com/ When building our proxy type, the first thing we need to do is build the constructors. There needs to be a corresponding constructor for each constructor on the passed in base type. We also want to create a field to store the interceptors and construct this list within each constructor. So assuming the passed in base type is a User<int, IRepository> class, were looking to generate constructor code like the following:   Default Constructor public User`2_RapidDynamicBaseProxy() {     this.interceptors = new List<IInterceptor<User<int, IRepository>>>();     DefaultInterceptor<User<int, IRepository>> item = new DefaultInterceptor<User<int, IRepository>>();     this.interceptors.Add(item); }     Parameterised Constructor public User`2_RapidDynamicBaseProxy(IRepository repository1) : base(repository1) {     this.interceptors = new List<IInterceptor<User<int, IRepository>>>();     DefaultInterceptor<User<int, IRepository>> item = new DefaultInterceptor<User<int, IRepository>>();     this.interceptors.Add(item); }   As you can see, we first populate a field on the class with a new list of the passed in base type. Construct our DefaultInterceptor class. Add the DefaultInterceptor instance to our interceptor collection. Although this seems like a relatively small task, there is a fair amount of work require to get this going. Instead of going through every line of code – please download the latest from http://rapidioc.codeplex.com/ and debug through. In this post I’m going to concentrate on explaining how it works. TypeBuilder The TypeBuilder class is the main class used to create the type. You instantiate a new TypeBuilder using the assembly module we created in part 1. /// <summary> /// Creates a type builder. /// </summary> /// <typeparam name="TBase">The type of the base class to be proxied.</typeparam> public static TypeBuilder CreateTypeBuilder<TBase>() where TBase : class {     TypeBuilder typeBuilder = DynamicModuleCache.Get.DefineType         (             CreateTypeName<TBase>(),             TypeAttributes.Class | TypeAttributes.Public,             typeof(TBase),             new Type[] { typeof(IProxy) }         );       if (typeof(TBase).IsGenericType)     {         GenericsHelper.MakeGenericType(typeof(TBase), typeBuilder);     }       return typeBuilder; }   private static string CreateTypeName<TBase>() where TBase : class {     return string.Format("{0}_RapidDynamicBaseProxy", typeof(TBase).Name); } As you can see, I’ve create a new public class derived from TBase which also implements my IProxy interface, this is used later for adding interceptors. If the base type is generic, the following GenericsHelper.MakeGenericType method is called. GenericsHelper using System; using System.Reflection.Emit; namespace Rapid.DynamicProxy.Types.Helpers {     /// <summary>     /// Helper class for generic types and methods.     /// </summary>     internal static class GenericsHelper     {         /// <summary>         /// Makes the typeBuilder a generic.         
/// </summary>         /// <param name="concrete">The concrete.</param>         /// <param name="typeBuilder">The type builder.</param>         public static void MakeGenericType(Type baseType, TypeBuilder typeBuilder)         {             Type[] genericArguments = baseType.GetGenericArguments();               string[] genericArgumentNames = GetArgumentNames(genericArguments);               GenericTypeParameterBuilder[] genericTypeParameterBuilder                 = typeBuilder.DefineGenericParameters(genericArgumentNames);               typeBuilder.MakeGenericType(genericTypeParameterBuilder);         }           /// <summary>         /// Gets the argument names from an array of generic argument types.         /// </summary>         /// <param name="genericArguments">The generic arguments.</param>         public static string[] GetArgumentNames(Type[] genericArguments)         {             string[] genericArgumentNames = new string[genericArguments.Length];               for (int i = 0; i < genericArguments.Length; i++)             {                 genericArgumentNames[i] = genericArguments[i].Name;             }               return genericArgumentNames;         }     } }       As you can see, I’m getting all of the generic argument types and names, creating a GenericTypeParameterBuilder and then using the typeBuilder to make the new type generic. InterceptorsField The interceptors field will store a List<IInterceptor<TBase>>. Fields are simple made using the FieldBuilder class. The following code demonstrates how to create the interceptor field. FieldBuilder interceptorsField = typeBuilder.DefineField(     "interceptors",     typeof(System.Collections.Generic.List<>).MakeGenericType(typeof(IInterceptor<TBase>)),       FieldAttributes.Private     ); The field will now exist with the new Type although it currently has no data – we’ll deal with this in the constructor. Add method for interceptorsField To enable us to add to the interceptorsField list, we are going to utilise the Add method that already exists within the System.Collections.Generic.List class. We still however have to create the methodInfo necessary to call the add method. This can be done similar to the following: Add Interceptor Field MethodInfo addInterceptor = typeof(List<>)     .MakeGenericType(new Type[] { typeof(IInterceptor<>).MakeGenericType(typeof(TBase)) })     .GetMethod     (        "Add",        BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,        null,        new Type[] { typeof(IInterceptor<>).MakeGenericType(typeof(TBase)) },        null     ); So we’ve create a List<IInterceptor<TBase>> type, then using the type created a method info called Add which accepts an IInterceptor<TBase>. Now in our constructor we can use this to call this.interceptors.Add(// interceptor); Building the Constructors This will be the first hard-core part of the proxy building process so I’m going to show the class and then try to explain what everything is doing. For a clear view, download the source from http://rapidioc.codeplex.com/, go to the test project and debug through the constructor building section. Anyway, here it is: DynamicConstructorBuilder using System; using System.Collections.Generic; using System.Reflection; using System.Reflection.Emit; using Rapid.DynamicProxy.Interception; using Rapid.DynamicProxy.Types.Helpers; namespace Rapid.DynamicProxy.Types.Constructors {     /// <summary>     /// Class for creating the proxy constructors.     
/// </summary>
internal static class DynamicConstructorBuilder
{
    /// <summary>
    /// Builds a proxy constructor for each constructor on the base type.
    /// </summary>
    /// <typeparam name="TBase">The base type.</typeparam>
    /// <param name="typeBuilder">The type builder.</param>
    /// <param name="interceptorsField">The interceptors field.</param>
    /// <param name="addInterceptor">The MethodInfo used to add the default interceptor to the interceptors list.</param>
    public static void BuildConstructors<TBase>
        (
            TypeBuilder typeBuilder,
            FieldBuilder interceptorsField,
            MethodInfo addInterceptor
        )
        where TBase : class
    {
        ConstructorInfo interceptorsFieldConstructor = CreateInterceptorsFieldConstructor<TBase>();

        ConstructorInfo defaultInterceptorConstructor = CreateDefaultInterceptorConstructor<TBase>();

        ConstructorInfo[] constructors = typeof(TBase).GetConstructors();

        foreach (ConstructorInfo constructorInfo in constructors)
        {
            CreateConstructor<TBase>
                (
                    typeBuilder,
                    interceptorsField,
                    interceptorsFieldConstructor,
                    defaultInterceptorConstructor,
                    addInterceptor,
                    constructorInfo
                );
        }
    }

    #region Private Methods

    private static void CreateConstructor<TBase>
        (
            TypeBuilder typeBuilder,
            FieldBuilder interceptorsField,
            ConstructorInfo interceptorsFieldConstructor,
            ConstructorInfo defaultInterceptorConstructor,
            MethodInfo addDefaultInterceptor,
            ConstructorInfo constructorInfo
        ) where TBase : class
    {
        Type[] parameterTypes = GetParameterTypes(constructorInfo);

        ConstructorBuilder constructorBuilder = CreateConstructorBuilder(typeBuilder, parameterTypes);

        ILGenerator cIL = constructorBuilder.GetILGenerator();

        // Local variable to hold the constructed DefaultInterceptor<TBase>.
        LocalBuilder defaultInterceptorMethodVariable =
            cIL.DeclareLocal(typeof(DefaultInterceptor<>).MakeGenericType(typeof(TBase)));

        ConstructInterceptorsField(interceptorsField, interceptorsFieldConstructor, cIL);

        ConstructDefaultInterceptor(defaultInterceptorConstructor, cIL, defaultInterceptorMethodVariable);

        AddDefaultInterceptorToInterceptorsList
            (
                interceptorsField,
                addDefaultInterceptor,
                cIL,
                defaultInterceptorMethodVariable
            );

        CreateConstructor(constructorInfo, parameterTypes, cIL);
    }

    private static void CreateConstructor(ConstructorInfo constructorInfo, Type[] parameterTypes, ILGenerator cIL)
    {
        cIL.Emit(OpCodes.Ldarg_0);

        if (parameterTypes.Length > 0)
        {
            LoadParameterTypes(parameterTypes, cIL);
        }

        cIL.Emit(OpCodes.Call, constructorInfo);
        cIL.Emit(OpCodes.Ret);
    }

    private static void LoadParameterTypes(Type[] parameterTypes, ILGenerator cIL)
    {
        // Argument 0 is the hidden 'this' reference, so constructor arguments start at index 1.
        for (int i = 1; i <= parameterTypes.Length; i++)
        {
            cIL.Emit(OpCodes.Ldarg_S, i);
        }
    }

    private static void AddDefaultInterceptorToInterceptorsList
        (
            FieldBuilder interceptorsField,
            MethodInfo addDefaultInterceptor,
            ILGenerator cIL,
            LocalBuilder defaultInterceptorMethodVariable
        )
    {
        cIL.Emit(OpCodes.Ldarg_0);
        cIL.Emit(OpCodes.Ldfld, interceptorsField);
        cIL.Emit(OpCodes.Ldloc, defaultInterceptorMethodVariable);
        cIL.Emit(OpCodes.Callvirt, addDefaultInterceptor);
    }

    private static void ConstructDefaultInterceptor
        (
            ConstructorInfo defaultInterceptorConstructor,
            ILGenerator cIL,
            LocalBuilder defaultInterceptorMethodVariable
        )
    {
        cIL.Emit(OpCodes.Newobj, defaultInterceptorConstructor);
        cIL.Emit(OpCodes.Stloc, defaultInterceptorMethodVariable);
    }

    private static void ConstructInterceptorsField
        (
            FieldBuilder interceptorsField,
            ConstructorInfo interceptorsFieldConstructor,
            ILGenerator cIL
        )
    {
        cIL.Emit(OpCodes.Ldarg_0);
        cIL.Emit(OpCodes.Newobj, interceptorsFieldConstructor);
        cIL.Emit(OpCodes.Stfld, interceptorsField);
    }

    private static ConstructorBuilder CreateConstructorBuilder(TypeBuilder typeBuilder, Type[] parameterTypes)
    {
        return typeBuilder.DefineConstructor
            (
                MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.RTSpecialName
                | MethodAttributes.HideBySig, CallingConventions.Standard, parameterTypes
            );
    }

    private static Type[] GetParameterTypes(ConstructorInfo constructorInfo)
    {
        ParameterInfo[] parameterInfoArray = constructorInfo.GetParameters();

        Type[] parameterTypes = new Type[parameterInfoArray.Length];

        for (int p = 0; p < parameterInfoArray.Length; p++)
        {
            parameterTypes[p] = parameterInfoArray[p].ParameterType;
        }

        return parameterTypes;
    }

    private static ConstructorInfo CreateInterceptorsFieldConstructor<TBase>() where TBase : class
    {
        return ConstructorHelper.CreateGenericConstructorInfo
            (
                typeof(List<>),
                new Type[] { typeof(IInterceptor<TBase>) },
                BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic
            );
    }

    private static ConstructorInfo CreateDefaultInterceptorConstructor<TBase>() where TBase : class
    {
        return ConstructorHelper.CreateGenericConstructorInfo
            (
                typeof(DefaultInterceptor<>),
                new Type[] { typeof(TBase) },
                BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic
            );
    }

    #endregion
}
}

So, the first two tasks within the class should be fairly clear: we create a ConstructorInfo for the interceptors field list and a ConstructorInfo for the DefaultInterceptor, so that each generated constructor can instantiate them. Using reflection, we then get an array of all of the constructors on the base class, loop through the array, and create a corresponding proxy constructor for each one. Hopefully the code is fairly easy to follow, apart from a few new types and the dreaded opcodes. ConstructorBuilder This class defines a new constructor on the type.
ILGenerator The ILGenerator allows the use of Reflection.Emit to create the method body. LocalBuilder The local builder allows the storage of data in local variables within a method; in this case it's the constructed DefaultInterceptor. Constructing the interceptors field The first bit of IL you'll come across as you follow through the code is the following private method, used for constructing the field list of interceptors.

private static void ConstructInterceptorsField
    (
        FieldBuilder interceptorsField,
        ConstructorInfo interceptorsFieldConstructor,
        ILGenerator cIL
    )
{
    cIL.Emit(OpCodes.Ldarg_0);
    cIL.Emit(OpCodes.Newobj, interceptorsFieldConstructor);
    cIL.Emit(OpCodes.Stfld, interceptorsField);
}

The first thing to know about generating code using IL is that you are working with a stack: if you want to use something, you need to push it onto the stack first. OpCodes.Ldarg_0 This opcode is a really interesting one. Basically, each method has a hidden first argument of the containing class instance (apart from static methods); constructors are no different. This is the reason you can use syntax like this.myField. So back to the method: as we want to instantiate the List in the interceptorsField, first we need to load the class instance onto the stack, then we load the new object (the new interceptors list), and finally we store it in the interceptorsField. Hopefully, that should follow easily enough in the method. In each constructor you would now have this.interceptors = new List<IInterceptor<User<int, IRepository>>>(); Constructing and storing the DefaultInterceptor The next bit of code we need to create is the constructed DefaultInterceptor. Firstly, we create a local builder to store the constructed type. Create a local builder

LocalBuilder defaultInterceptorMethodVariable =
    cIL.DeclareLocal(typeof(DefaultInterceptor<>).MakeGenericType(typeof(TBase)));

Once our local builder is ready, we then need to construct the DefaultInterceptor<TBase> and store it in the variable. Construct DefaultInterceptor

private static void ConstructDefaultInterceptor
    (
        ConstructorInfo defaultInterceptorConstructor,
        ILGenerator cIL,
        LocalBuilder defaultInterceptorMethodVariable
    )
{
    cIL.Emit(OpCodes.Newobj, defaultInterceptorConstructor);
    cIL.Emit(OpCodes.Stloc, defaultInterceptorMethodVariable);
}

As you can see, using the ConstructorInfo named defaultInterceptorConstructor, we load the new object onto the stack. Then, using the store local opcode (OpCodes.Stloc), we store the new object in the local builder named defaultInterceptorMethodVariable. Add the constructed DefaultInterceptor to the interceptors field collection Using the add method created earlier in this post, we are going to add the new DefaultInterceptor object to the interceptors field collection. Add Default Interceptor

private static void AddDefaultInterceptorToInterceptorsList
    (
        FieldBuilder interceptorsField,
        MethodInfo addDefaultInterceptor,
        ILGenerator cIL,
        LocalBuilder defaultInterceptorMethodVariable
    )
{
    cIL.Emit(OpCodes.Ldarg_0);
    cIL.Emit(OpCodes.Ldfld, interceptorsField);
    cIL.Emit(OpCodes.Ldloc, defaultInterceptorMethodVariable);
    cIL.Emit(OpCodes.Callvirt, addDefaultInterceptor);
}

So, here's what's going on.
The class instance is first loaded onto the stack using the load argument at index 0 opcode (OpCodes.Ldarg_0) (remember, the first argument is the hidden class instance). The interceptorsField is then loaded onto the stack using the load field opcode (OpCodes.Ldfld). We then load the DefaultInterceptor object we stored locally using the load local opcode (OpCodes.Ldloc). Then finally we call the addDefaultInterceptor method using the call virtual opcode (OpCodes.Callvirt). Completing the constructor The last thing we need to do is complete the constructor. Complete the constructor

private static void CreateConstructor(ConstructorInfo constructorInfo, Type[] parameterTypes, ILGenerator cIL)
{
    cIL.Emit(OpCodes.Ldarg_0);

    if (parameterTypes.Length > 0)
    {
        LoadParameterTypes(parameterTypes, cIL);
    }

    cIL.Emit(OpCodes.Call, constructorInfo);
    cIL.Emit(OpCodes.Ret);
}

private static void LoadParameterTypes(Type[] parameterTypes, ILGenerator cIL)
{
    for (int i = 1; i <= parameterTypes.Length; i++)
    {
        cIL.Emit(OpCodes.Ldarg_S, i);
    }
}

So, the first thing we do again is load the class instance using the load argument at index 0 opcode (OpCodes.Ldarg_0). We then load each parameter using OpCodes.Ldarg_S; this opcode allows us to specify an index position for each argument (index 0 is the hidden class instance, so the constructor's own arguments start at index 1). We then set up the call to the base constructor using OpCodes.Call and the base constructor's ConstructorInfo. Finally, all methods are required to return, even when they have a void return type. As there are no values on the stack after the OpCodes.Call line, we can safely emit OpCodes.Ret to give the constructor a void return. If there were a value left on the stack, we would have to pop it off before returning; otherwise, the method would try to return a value. Conclusion This was a slightly hardcore post, but hopefully it hasn't been too hard to follow. The main thing is that a number of the really useful opcodes have been used, and the dynamic proxy is now capable of being constructed. If you download the code and debug through the tests at http://rapidioc.codeplex.com/, you'll be able to create proxies at this point. They cannot do anything in terms of interception yet, but you can happily run the tests, call base methods and properties, and also take a look at the created assembly in Reflector. Hope this is useful. The next post should be up soon; it will cover creating the private methods for calling the base class methods and properties. Kind Regards, Sean.
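To make the emitted IL more concrete, here is a rough sketch of what each generated proxy constructor is equivalent to in plain C#. This is an illustration under assumptions: the base class Person and its constructor are hypothetical, and the field and type names simply follow the conventions used in the series.

// Hypothetical base class, for illustration only.
public class Person
{
    public Person(string name) { }
}

// Roughly what the generated proxy type looks like once the emitted
// constructor has run. The real type is built at runtime by TypeBuilder.
public class PersonProxy : Person
{
    // interceptorsField: built by ConstructInterceptorsField
    private List<IInterceptor<Person>> interceptors;

    public PersonProxy(string name)
        : base(name) // OpCodes.Call on the base ConstructorInfo
    {
        // ConstructInterceptorsField
        this.interceptors = new List<IInterceptor<Person>>();

        // ConstructDefaultInterceptor + AddDefaultInterceptorToInterceptorsList
        DefaultInterceptor<Person> defaultInterceptor = new DefaultInterceptor<Person>();
        this.interceptors.Add(defaultInterceptor);
    }
}

One subtlety: C# syntax forces the base constructor call before the constructor body, whereas the emitted IL above initializes the fields first and calls the base constructor last. That ordering is legal in raw IL (C# field initializers compile the same way), which is one of the small freedoms you get when emitting constructors by hand.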

    Read the article

  • Suggestions on switching from lamp based web design-development to game design-development

    - by Sandeepan Nath
I have around 2.5 years of experience as a web developer cum designer, working mainly on the LAMP platform. Now I want to try out game development (of the likes of first-person shooter games such as Call of Duty (COD)). It is one of my dreams to some day succeed in making a profitable, popular, commercial game of this type. However, I have never done any kind of business or even freelancing yet, even in the web domain. Okay, first things first: I am just starting, and I don't yet have any idea about the technologies, languages, engines (game engines) etc. involved. I would like this question to be a complete guide for people with similar interests. Best resources for getting a hold really fast What would be the best approach to get a basic hold of the domain really fast? Any resource(s) aimed at programmers coming from, or experienced in, other domains would be the ideal ones for me. E.g., if anybody asked me for a good resource for quickly learning PHP/MySQL, I would suggest books like "How to Do Everything with PHP & MySQL" - because: it introduces all the basics of the domain (not the advanced things, which can be learnt later by practice and also a lot by searching in Stack Overflow questions); it contains some very nice working projects at the end, which help in applying the skills learnt in the chapters of the book. This is the best way for self-learners, I feel. I would appreciate some similar resource which connects all the concepts together to give the bigger picture. I have read about C, C++, C#, and Java being used in game programming, but I am not sure which language to go for (I have previously learnt a little of C and Java). I have also read about game engines, but there would be various other concepts. Commonly accepted ways of learning Should 3D games like these be tried after 2D games? Are there some commonly accepted ways of learning to build this kind of game? Like in web development, where we go for frameworks after practising well with the basic language, and for AJAX after getting properly done with simple page-reload processing, etc. Apart from these, any useful tips (like language choices etc.) would be much appreciated. And just as it is highly recommended to contribute to open source web projects for getting recognition, are there similar open source game projects? Thanks, Sandeepan

    Read the article

  • How to Quickly Cut a Clip From a Video File with Avidemux

    - by Trevor Bekolay
Whether you’re cutting out the boring parts of your vacation video or getting a hilarious scene for an animated GIF, Avidemux provides a quick and easy way to cut clips from any video file. It’s overkill to use a full-featured video editing program if you just want to cut a few clips from a video file. Even programs that are designed to be small can have confusing interfaces when dealing with video. We’ve found that a great free program, Avidemux, makes the job of cutting clips extremely simple. Note: While the screenshots in this guide are taken from the Windows version, Avidemux runs on all of the major platforms – Windows, Mac OS X and Linux (GTK). Image by Keith Williamson. Cutting Clips from a Video File Open up Avidemux, and load the video file that you want to work with. If you get a prompt like this one: we recommend clicking Yes to use the safer mode. Find the portion of the video that you’d like to isolate. Get as close as you can to the start of the clip you want to cut. Once you find the start of your clip, look at the “Frame Type” of the current frame. You want it to read I; if it isn’t frame type I, then use the single left and right arrow buttons to go forward or backward one frame until you find an appropriate I frame. Once you’ve found the right starting frame, click the button with the A over a red bar. This will set the start of the clip. Advance to where you want your clip to end. Click on the button with a B when you’ve found the appropriate frame. This frame can be of any type. You can now save the clip, either by going to File –> Save –> Save Video… or by pressing Ctrl+S. Give the file a name, and Avidemux will prepare your clip. And that’s it! You should now have a movie file that contains only the portion of the original file that you want. Download Avidemux free for all platforms

    Read the article

  • An Honest look at SharePoint Web Services

    - by juanlarios
INTRODUCTION If you are a SharePoint developer you know that there are two basic ways to develop against SharePoint: 1) the object model, and 2) web services. The SharePoint object model has the advantage of being quite rich. Anything you can do through the SharePoint UI as an administrator or end user, you can do through the object model. In fact, everything that is done through the UI is done through the object model behind the scenes. The major disadvantage to getting at SharePoint this way is that the code needs to run on the server. This means that web parts, event receivers, features, etc… all of this is code that is deployed to the server. The second way to get to SharePoint is through the built-in web services. There are many articles on how to manipulate these web services, how to authenticate to them and how to interact with them. The basic idea is that a remote application or process can contact SharePoint through a web service. Lots has been written about how great these web services are. This article is written to document the limitations, and some of the issues and frustrations, of working with SharePoint's built-in web services. Ultimately, for the tasks I was given, the built-in web services did not suffice. My evaluation of them was made by comparing them against my own WCF services, written to do what I needed. The current project I'm working on involves several "integration points": a remote application, installed on a separate server, was to contact SharePoint and perform a task or operation. So I decided to start up Visual Studio and build a DLL with basically two layers of logic: an integration layer and a data layer. A good friend of mine pointed me to the SOLID principles and referred me to some videos and tutorials about them. I decided to implement the methodology (although a lot of the principles are common sense and I had already incorporated them in my coding practices). I was to deliver this DLL to the application team; they would simply call the methods exposed by it and voila! it would perform some task or operation in SharePoint. SOLUTION My integration layer implemented an interface that defined some of the basic integration tasks I was to put together. My data layer was about the same: it implemented an interface with some of the tasks that I was going to develop. This gave me the opportunity to develop different data layers, and ultimately different ways to get at SharePoint if I needed to. This is a classic SOLID principle. In this case it proved to be quite helpful, because I wrote one data layer completely implementing SharePoint's built-in web services and another implementing my own WCF service. I should mention there is another layer underneath the data layer. In referencing SharePoint or WCF services in my Visual Studio project, I created a class for every web service call. So, for example, for Lists.asmx I created a class called "DocumentRetrieval". This class would do the grunt work to connect to the correct URL, perform the basic operation of contacting the service, and so on. For Views.asmx, I implemented a class called "ViewRetrieval" with the same idea as the last class, but it would now interact with all the operations in Views.asmx. This gave my data layer the ability to perform multiple calls without really worrying about the grunt work each class performs. This again is a classic SOLID principle. So, in order to compare them side by side, we can look at both data layers and what is involved in each.
Lets take a look at the "Create Project" task or operation. The integration point is described as , "dll is to provide a way to create a project in SharePoint". Projects , in this case are basically document libraries. I am to implement a way in which a remote application can create a document library in SharePoint. Easy enough right? Use the list.asmx Web service in SharePoint. So here we go! Lets take a look at the code. I added the List.asmx web service reference to my project and this is the class that contacts it:  class DocumentRetrieval     {         private ListsSoapClient _service;      d   private bool _impersonation;         public DocumentRetrieval(bool impersonation, string endpt)         {             _service = new ListsSoapClient();             this.SetEndPoint(string.Format("{0}/{1}", endpt, ConfigurationManager.AppSettings["List"]));             _impersonation = impersonation;             if (_impersonation)             {                 _service.ClientCredentials.Windows.ClientCredential.Password = ConfigurationManager.AppSettings["password"];                 _service.ClientCredentials.Windows.ClientCredential.UserName = ConfigurationManager.AppSettings["username"];                 _service.ClientCredentials.Windows.AllowedImpersonationLevel =                     System.Security.Principal.TokenImpersonationLevel.Impersonation;             }     private void SetEndPoint(string p)          {             _service.Endpoint.Address = new EndpointAddress(p);          }          /// <summary>         /// Creates a document library with specific name and templateID         /// </summary>         /// <param name="listName">New list name</param>         /// <param name="templateID">Template ID</param>         /// <returns></returns>         public XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract)         {             XmlDocument sample = new XmlDocument();             XmlElement viewCol = sample.CreateElement("Empty");             try             {                 _service.Open();                 viewCol = _service.AddList(listName, "", templateID);             }             catch (Exception ex)             {                 exContract = new ExceptionContract("DocumentRetrieval/CreateLibrary", ex.GetType(), "Connection Error", ex.StackTrace, ExceptionContract.ExceptionCode.error);                             }finally             {                 _service.Close();             }                                      return viewCol;         } } There was a lot more in this class (that I am not including) because i was reusing the grunt work and making other operations with LIst.asmx, For example, updating content types, changing or configuring lists or document libraries. One of the first things I noticed about working with the built in services is that you are really at the mercy of what is available to you. Before creating a document library (Project) I wanted to expose a IsProjectExisting method. This way the integration or data layer could recognize if a library already exists. Well there is no service call or method available to do that check. 
So this is what I wrote:

public bool DocLibExists(string listName, ref ExceptionContract exContract)
{
    try
    {
        var allLists = _service.GetListCollection();
        return allLists.ChildNodes.OfType<XmlElement>().ToList().Exists(x => x.Attributes["Title"].Value == listName);
    }
    catch (Exception ex)
    {
        exContract = new ExceptionContract("DocumentRetrieval/GetList/GetListWSCall", ex.GetType(), "Unable to Retrieve List Collection", ex.StackTrace, ExceptionContract.ExceptionCode.error);
    }
    return false;
}

This really just gets an XmlElement with all the lists. It was then up to me to sift through the clutter and noise and see if the document library already existed. This took a little bit of getting used to: instead of working with code, you are working with the XmlElement response format from the web service. I wrote a LINQ query to go through it and find whether the "Title" attribute existed with a value equal to the list name; if so, it returns true, and if not, false. I didn't particularly like working this way, dealing with XmlElement responses and then having to manipulate them to get at the exact data I was looking for. Once the DocLibExists check was done, I would either create the document library or send back an error indicating that the document library already existed. Now let's examine the code that actually creates the document library. It does what you are really after: it creates a document library. Notice how the template ID is really an integer. Every document library template in SharePoint has an ID associated with it. Document Library, Image Library, Custom List, Project Tasks, etc… they all have a unique integer associated with them. Well, that's great, but the client came back to me and gave me some specifics that each "project", or document library, should have. They specified they had three types of projects. Each project would have unique views, about 10 views per project. Each project also specified unique configurations (auditing, versioning, content types, etc…). So what started as a simple implementation of creating a document library as a repository for a project turned out to be quite involved. The first thing I thought of was to create a template for the document library. There are other ways you can do this too. Using the web service call, you could configure views, versioning, even content types, etc… the only catch is, you have to work quite extensively with CAML. I am not fond of CAML. I can do it and work with it, I just don't like doing it. It is quite touchy, and at times it is quite tough to understand where errors were made in CAML statements. Working with web services and CAML proved to be quite annoying. The service call would return a generic error message that did not particularly point me to a CAML syntax error, or even a CAML error at all; I was not sure if it was a security, performance or code-based issue. It was quite tough to work with, at times because of the way SharePoint handles metadata. There are "Name", "Display Name", and "StaticName" fields, and it was quite tough to understand at times which one to use. So it took a lot of trial and error. There are tools that can help with CAML generation. There is also now IntelliSense for CAML statements in Visual Studio that might help, but ultimately I'm not fond of CAML with web services. So I decided on the template.
So my plan was to create a document library, configure it accordingly, and then use the Template Builder that comes with the SharePoint SDK. This tool allows you to create site templates, list templates, etc… It is quite interesting because it does not generate an STP file; it actually generates an XML definition and a feature you can activate to make that template available on a site or site collection. The first issue I experienced with this is that one of the specifications for this template was that the "All Documents" view was to have two web parts on it. Well, it turns out that the Template Builder did not include the web parts as part of the list template definition it generated. It backed up the settings, the views, and the content types, but not the custom web parts. I still decided to try this even without the web parts on the page. This new template defined a new document library definition with a unique ID. The problem was that the service call accepts an int but only has access to the built-in library int definitions. Any new ones added or created will not be available to create. So this made it impossible for me to approach the problem this way. I should also mention that one of the nice features of SharePoint is the ability to create list templates, back them up, and then create lists based on that template. It can all be done by end-user administrators. These templates are quite unique because they are saved as an STP file and not an XML definition. I also went this route and tried to see if there was another service call where I could create a document library based on a given template name. Nope! None. After some thinking, I decided to implement a WCF service to do this creation for me. I was quite certain that the object model would allow me to create document libraries based on a template for which an ID was required, and also on templates saved as STP files. Now, I don't want to bother with posting the code to contact the WCF service because it's self-explanatory, but I will post the code that I used to create a list with a custom template.

public ServiceResult CreateProject(string name, string templateName, string projectId)
{
    string siteurl = SPContext.Current.Site.Url;
    Guid webguid = SPContext.Current.Web.ID;

    using (SPSite site = new SPSite(siteurl))
    {
        using (SPWeb rootweb = site.RootWeb)
        {
            SPListTemplateCollection temps = site.GetCustomListTemplates(rootweb);
            ProcessWeb(siteurl, webguid, web => Act_CreateProject(web, name, templateName, projectId, temps));
        }//SPWeb
    }//SPSite

    return _globalResult;
}

private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
{
    var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
    if (temp != null)
    {
        try
        {
            Guid listGuid = targetsite.Lists.Add(name, "", temp);
            SPList newList = targetsite.Lists[listGuid];
            _globalResult = new ServiceResult(true, "Success", "Success");
        }
        catch (Exception ex)
        {
            _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
        }
    }
}

private void ProcessWeb(string siteurl, Guid webguid, Action<SPWeb> action)
{
    using (SPSite sitecollection = new SPSite(siteurl))
    {
        using (SPWeb web = sitecollection.AllWebs[webguid])
        {
            action(web);
        }
    }
}

This is actually some of the code I implemented for the service; there was a lot more I did on project creation, which I will cover in my next blog post. I implemented an Action delegate to process the web. This allowed me to properly dispose of the SPWeb and SPSite objects and not rewrite this code over and over again. So I implemented a WCF service to create projects for me. This allowed me to do a lot more than just create a document library with a template; it now gave me the flexibility to do just about anything the client wanted at project creation. Once this was implemented, the client came back to me and said, "we reference all our projects with IDs in our application. We want SharePoint to do the same". This has been something I have been doing for a little while now, but I do hope that SharePoint 2010 has more of an answer to this and addresses it properly. I have been adding metadata to SPWebs through the property bag (I believe I have blogged about it before). This time it required metadata added to a document library. No problem!!! I also mentioned those web parts that were to go on the "All Documents" view. I took the opportunity to configure them to the appropriate settings. There were two settings that needed to be set on these web parts. One of them was a Project ID configured in the web part properties.
The following code enhances and replaces the "Act_CreateProject" method above:

private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
{
    var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
    if (temp != null)
    {
        SPLimitedWebPartManager wpmgr = null;

        try
        {
            Guid listGuid = targetsite.Lists.Add(name, "", temp);
            SPList newList = targetsite.Lists[listGuid];
            SPFolder rootFolder = newList.RootFolder;
            rootFolder.Properties.Add(KEY, projectId);
            rootFolder.Update();
            if (rootFolder.ParentWeb != targetsite)
                rootFolder.ParentWeb.Dispose();
            if (!templateName.Contains("Natural"))
            {
                SPView alldocumentsview = newList.Views.Cast<SPView>().FirstOrDefault(x => x.Title.Equals(ALLDOCUMENTS));
                SPFile alldocfile = targetsite.GetFile(alldocumentsview.ServerRelativeUrl);
                wpmgr = alldocfile.GetLimitedWebPartManager(PersonalizationScope.Shared);
                ConfigureWebPart(wpmgr, projectId, CUSTOMWPNAME);
                alldocfile.Update();
            }
            if (newList.ParentWeb != targetsite)
                newList.ParentWeb.Dispose();
            _globalResult = new ServiceResult(true, "Success", "Success");
        }
        catch (Exception ex)
        {
            _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
        }
        finally
        {
            if (wpmgr != null)
            {
                wpmgr.Web.Dispose();
                wpmgr.Dispose();
            }
        }
    }
}

private void ConfigureWebPart(SPLimitedWebPartManager mgr, string prjId, string webpartname)
{
    var wp = mgr.WebParts.Cast<System.Web.UI.WebControls.WebParts.WebPart>().FirstOrDefault(x => x.DisplayTitle.Equals(webpartname));
    if (wp != null)
    {
        (wp as ListRelationshipWebPart.ListRelationshipWebPart).ProjectID = prjId;
        mgr.SaveChanges(wp);
    }
}

This shows you how I was able to set metadata on the document library. It has to be added to the RootFolder of the document library; unfortunately, the SPList does not have a property bag that I can add a key/value pair to, so it has to be done on the root folder. Now everything in the integration will reference projects by ID and will not care about names. My "DocLibExists" method will now need to be changed, because a web service is not set up to look at property bags. I had to write another method on the service to do the equivalent, but with IDs instead of names (a sketch follows below). The second thing you will notice about the code is the use of the web part manager. I have seen several examples online, and have also read a lot about memory leaks. The above code does not produce memory leaks: the web part manager creates an SPWeb, so just dispose of it like I did.
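Since each project's ID now lives in the root folder's property bag, the existence check belongs on the WCF service too. The following is a minimal sketch of what that lookup could look like; the method name ProjectExistsById is hypothetical, and KEY is assumed to be the same property-bag key used in Act_CreateProject above.

// Hedged sketch: check whether a project (document library) exists by the
// ID stored in its root folder's property bag. Not the author's actual code.
public bool ProjectExistsById(string projectId)
{
    string siteurl = SPContext.Current.Site.Url;

    using (SPSite site = new SPSite(siteurl))
    {
        using (SPWeb web = site.RootWeb)
        {
            foreach (SPList list in web.Lists)
            {
                SPFolder rootFolder = list.RootFolder;

                // Property bag values come back as objects, so compare as strings.
                if (rootFolder.Properties.ContainsKey(KEY)
                    && rootFolder.Properties[KEY].ToString() == projectId)
                {
                    return true;
                }
            }
        }
    }

    return false;
}

The same ExceptionContract pattern used in DocLibExists could wrap this call; it is omitted here to keep the sketch short.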
CONCLUSION This is a long, long post, so I will stop here for now; I will continue with more comparisons and limitations in my next post. My conclusion for this example is that web services will do the trick if you can suffer through CAML and if you are doing some simple operations. For everything else, there's WCF! I apologize for the disorganization of this post; I was on a bus on a 12-hour trip to Iowa while I wrote it, half asleep and half awake. Hopefully it makes enough sense to someone.

    Read the article

  • UPK Customer Success Story: The City and County of San Francisco

    - by karen.rihs(at)oracle.com
The value of UPK during an upgrade is a hot topic and was a primary focus during our latest customer roundtable featuring The City and County of San Francisco: Leveraging UPK to Accelerate Your PeopleSoft Upgrade. As the Change Management Analyst for their PeopleSoft 9.0 HCM project (Project eMerge), Jan Crosbie-Taylor provided a unique perspective on how they're utilizing UPK and UPK pre-built content early on to successfully manage change for thousands of city and county employees and retirees as they move to this new release. With the first phase of the project going live next September, it's important to the City and County of San Francisco to 1) ensure that the various constituents are brought along with the project team, and 2) focus on the end user aspects of the implementation, including training. Here are some highlights on how UPK and UPK pre-built content are helping them accomplish this: As a former documentation manager, Jan really appreciates the power of UPK as a single source content creation tool. It saves them time by streamlining the documentation creation process, enabling them to record content once, then repurpose it multiple times. With regard to change management, UPK has enabled them to educate the project team and gain critical buy-in and support by familiarizing users with the application early on through User Experience Workshops and by promoting UPK at meetings whenever possible. UPK has helped create awareness for the project, making the project real to users. They are taking advantage of UPK pre-built content to: Educate the project team and subject matter experts on how PeopleSoft 9.0 works as delivered Create a guide/storyboard for their own recording Save time/effort and create consistency by enhancing their recorded content with text and conceptual information from the pre-built content Create PeopleSoft Help for their development databases by publishing and integrating the UPK pre-built content into the application help menu Look ahead to the next release of PeopleTools, comparing the differences to help the team evaluate which version to use with their implementation When it comes time for training, they will be utilizing UPK in the classroom, eliminating the time and cost of maintaining training databases. Instructors will be able to carry all training content on a thumb drive, allowing them to easily provide consistent training at their many locations, regardless of the environment. Post go-live, they will deploy the same UPK content to provide just-in-time, in-application support for the entire system via the PeopleSoft Help menu and their PeopleSoft Enterprise Portal. Users will already be comfortable with UPK as a source of help, having been exposed to it during classroom training. They are also using UPK for a non-Oracle application called JobAps, an online job application solution used by many government organizations. Jan found UPK's object recognition to be excellent, yet it's been incredibly easy for her to change text or a field name if needed. Please take time to listen to this recording. The City and County of San Francisco's UPK story is very exciting, and Jan shared so many great examples of how they're taking advantage of UPK and UPK pre-built content early on in their project. We hope others will be able to incorporate these into their projects. Many thanks to Jan for taking the time to share her experiences and creative uses of UPK with us! - Karen Rihs, Oracle UPK Outbound Product Management

    Read the article

  • Three Principles to Fix Your Broken Organization

    - by Michael Snow
Everyone's organization is broken in some capacity. For some this is painfully visible both inside and outside their organization. For others, there are cracks noticed by only the keenest trained eyes used to looking for problems in the midst of perfection. We all know that there is often incredible hope in the despair of chaos and recognition of your problems is the first step on the road to recovery. Let us help you in your path to recovery. Join our very own, Christian Finn,  this Thursday (11/15), as he guides you through three important principles you can take back to the office to start the mending process. (Above Image Credits: the BEST site on the web to make fun of our organizations and ourselves: http://www.despair.com/ ) His three principles are NOT "TeamWork", "Ignorance" and "Tradition", but - before jumping lower on this blog post to click and register for the upcoming webcast - I thought it would be a good opportunity to give you a little taste of what we have to offer beyond the array of our fabulous On-Demand webcasts from our Social Business Thought Leader Webcast Series featuring Christian as the host. Instead, here's a snippet from our marketing team friends across the pond in Europe, where they hosted a Social Business Forum recently and featured Christian in a segment.  Simple. Powerful. Proven. Face it, your organization is broken. Customers are not the focus they should be. Processes are running amok. Your intranet is a ghost town. And colleagues wonder why it’s easier to get things done on the Web than at work. What’s the solution?Join us for this Webcast. Christian Finn will talk about three simple, powerful, and proven principles for improving your organization through collaboration. Each principle will be illustrated by real-world examples. Discover: How to dramatically improve workplace collaboration Why improved employee engagement creates better business results What’s the value of a fully engaged customer Time to Fix What’s Broken Register now for this Webcast—the tenth in the Oracle Social Business Thought Leaders Series. Register Now Thurs., Nov. 15, 2012 10 a.m. PT / 1 p.m. ET Presented by: Christian Finn Senior Director, Product Management, Oracle

    Read the article

  • Exceptional DBA Awards 2011

    - by Rebecca Amos
    From today, we’re accepting nominations for the 2011 Exceptional DBA Awards. DBAs make a vital contribution to the running of the companies they work for, and the Exceptional DBA Awards aim to acknowledge this and make this contribution more widely known. Check out our new website for all the info: www.exceptionaldba.com  Being an exceptional DBA doesn’t mean you have to sleep at the office, or know everything there is to know about SQL Server; who ever could? It means that you make an effort to make your servers secure and reliable, and to make your users’ lives easier. Maybe you’ve helped a junior colleague learn something new about server backups? Or cancelled your coffee break to get a database back online? Or contributed to a forum post on performance monitoring? All of these actions show that you might be an exceptional DBA. So have a think about the tasks you do every day that already make you exceptional – and then get started on your entry! You just need to answer a few questions on our website about your experience as a DBA, some of your biggest achievements, and any other activities you participate in within the SQL Server community. Anyone who is currently working as a SQL Server database administrator can enter, or be nominated by someone else. We’ve got four fantastic judges for the Awards, who you’ll be familiar with already: Brent Ozar, Brad McGehee, Rodney Landrum and Steve Jones. They’ll pick five finalists, and then we’ll ask the SQL Server community to vote for their winner. Not only could you win the respect and recognition of peers and colleagues, but the prizes also include full conference registration for the 2011 PASS Summit in Seattle (where the awards ceremony will take place), four nights' hotel accommodation, and $300 towards travel expenses. The winner will get a copy of Red Gate’s SQL DBA Bundle – and they’ll also be featured here, on Simple-Talk. So what are you waiting for? Chances are you’ve already made a small effort for someone today that means you might be an exceptional DBA. Visit the website now, and start writing your entry – or nominate your favourite DBA to enter: www.exceptionaldba.com

    Read the article

  • Oracle Customer Hub - Directions, Roadmap and Customer Success

    - by Mala Narasimharajan
By Gurinder Bahl With less than a week to go until OOW 2012, I would like to introduce you all to the core Oracle Customer MDM strategy sessions. Fragmentation of customer data across disparate systems prohibits companies from achieving a complete and accurate view of their customers. Oracle Customer Hub provides a comprehensive set of services, utilities and applications to create and maintain a trusted master customer system of record across the enterprise. Customer Hub centralizes customer data from disparate systems across your enterprise into a master repository. Existing systems are integrated in real time or via batch with the Hub, allowing you to leverage legacy platform investments while capitalizing on the benefits of a single customer identity. Don’t miss out on two sessions geared towards Oracle Customer Hub: 1) Attend session CON9747 - Turn Customer Data into an Enterprise Asset with Oracle Fusion Customer Hub Applications at Oracle Open World 2012 on Monday, Oct 1st, 10:45 AM - 11:45 AM @ Moscone West – 2008. Manouj Tahiliani, Sr. Director MDM Product Management, will provide insight into the vision of Oracle Fusion Customer Hub solutions and review the roadmap. You will discover how Fusion Customer MDM can help your enterprise improve data quality, create accurate and complete customer information, manage governance and help create great customer experiences. You will also understand how to leverage data quality capabilities and create a sophisticated customer foundation within Oracle Fusion Applications. You will also hear Danette Patterson, Group Lead, Church Pension Group, talk about how Oracle Fusion Customer Hub applications provide a modern, next-generation, multi-domain foundation for managing customer information in a private cloud. 2) Don't miss session CON9692 - Customer MDM is Key to Strategic Business Success and Customer Experience Management at Oracle Open World 2012 on Wednesday, October 3rd 2012 from 3:30-4:30pm @ Westin San Francisco Metropolitan 1. JP Hurtado, Director, Customer Systems, will provide insight on how RCCL overcame challenges of data quality, guest recognition and centralized customer view to provide a consolidated customer view to multiple reservation, CRM, marketing, service, sales, data warehouse and loyalty systems. You will learn how Royal Caribbean Cruise Lines (RCCL), which has over 30 million customers and maintains multiple brands, has leveraged Oracle Customer Hub (Siebel UCM) as the backbone of its customer data management strategy for the past 5 years. Gurinder Bahl from MDM Product Management will provide an update on Oracle Customer Hub strategy, what we have achieved since last Open World and our future plans for the Oracle Customer Hub. You will learn about Customer Hub data quality capabilities around data analysis, cleansing, matching, and address validation, as well as reporting and monitoring capabilities. The MDM track at Oracle Open World covers a variety of topics related to MDM. In addition to the product management team presenting product updates and roadmap, we have several customer panels and conference sessions. You can see an overview of MDM sessions here. Looking forward to seeing you at Open World, the perfect opportunity to learn about cutting edge Oracle technologies.

    Read the article

  • Unable to update/ install any files [closed]

    - by Surya
Possible Duplicate: “Problem with MergeList” error when trying to do an update Just now I installed Ubuntu 12.04 on my Lenovo G570 laptop. First I got an error at the time of installation (I don't know what it was); I restarted the system and the next time it went well. But after installing, the problems started. There was an error with "Language recognition" and I tried to fix it, but it didn't work. I tried to install powertop to check the status of power management. At the terminal: sudo apt-get install powertop This is the error I got: surya@surya-Lenovo-G570:~$ sudo apt-get powertop install [sudo] password for surya: E: Invalid operation powertop surya@surya-Lenovo-G570:~$ sudo apt-get install powertop Reading package lists... Error! E: Encountered a section with no Package: header E: Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages E: The package lists or status file could not be parsed or opened. surya@surya-Lenovo-G570:~$ ^C surya@surya-Lenovo-G570:~$ ^C surya@surya-Lenovo-G570:~$ ^C surya@surya-Lenovo-G570:~$ I downloaded the Google Chrome .deb and tried to install it, but it's not working. The Software Center opens but doesn't load. There was a notification on the status bar which says: An error occurred please run the package manager from the right-click menu ... .... ... E: Encountered a section with no Package: header E: Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages "Copy & Paste" from the terminal is not really working: when I press Ctrl + C, it shows ^C on the terminal, but nothing is copied. The most important error: I am unable to see a "chip" icon on the status bar so as to install the proprietary drivers for my ATI card. The interesting part is, powertop worked well on the live CD and it even detected my ATI card. Update When I opened "Software Up to Date", it showed an error: Could not initialize the package information An unresolvable problem occurred while initializing the package information. Please report this bug against the 'update-manager' package and include the following error message: 'E:Encountered a section with no Package: header, E:Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages, E:The package lists or status file could not be parsed or opened.' My laptop details: Lenovo G570; Intel 2nd Gen i5 processor; 4GB DDR3 RAM; Intel integrated graphics + AMD Radeon HD 6370M 1GB graphics. I need help ASAP.
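As the duplicate link above suggests, the usual fix for the "Problem with MergeList" error is to clear the corrupted package list cache and let apt rebuild it. A commonly suggested sequence (run at your own risk; /var/lib/apt/lists is apt's standard list cache location):

sudo rm /var/lib/apt/lists/* -vf
sudo apt-get update

After the update completes, apt-get install and the Software Center should work again.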

    Read the article

  • Customer Support Spotlight: Clemson University

    - by cwarticki
I've begun a Customer Support Spotlight series that highlights our wonderful customers and Oracle loyalists.  A week ago I visited Clemson University.  As I travel to visit and educate our customers, I provide many useful tips/tricks and support best practices (as found on my blog and twitter). Most of all, I always discover an Oracle gem who deserves recognition for their hard work and advocacy. Meet George Manley.  George is a Storage Engineer who has worked in Clemson's Data Center all through college, partially in the Hardware Architecture group and partially in the Storage group. George and the rest of the Storage Team work with most all of the storage technologies that they have here at Clemson. This includes a wide array of different vendors' disk arrays, with most of them being Oracle/Sun 2540's.  He also works with SAM/QFS, ACSLS, and our SL8500 Tape Libraries (all three Oracle/Sun products). (pictured L to R, Matt Schoger (Oracle), Mark Flores (Oracle) and George Manley) George was kind enough to take us for a data center tour.  It was amazing.  I rarely get to see the inside of data centers, and this one was massive. Clemson Computing and Information Technology’s physical resources include the main data center located in the Information Technology Center at the Innovation Campus and Technology Park. The core of Clemson’s computing infrastructure, the data center has 21,000 sq ft of raised floor and is powered by a 14MW substation. The ITC power capacity is 4.5MW.  The data center is the home of both enterprise and HPC systems, and is staffed by CCIT staff on a 24-hour basis from a state of the art network operations center within the ITC. A smaller business continuance data center is located on the main campus.  The data center serves a wide variety of purposes including HPC (supercomputing) resources which are shared with other universities throughout the state, the state's Medicaid processing system, and nearly all other needs for Clemson University. Yes, that's no typo (14,256 cores and 37TB of memory!!!). Thanks for the tour, George, and thank you very much for your time.  The tour was fantastic. I enjoyed getting to know your team and I look forward to many successes from Clemson using Oracle products. -Chris Warticki, Global Customer Management

    Read the article

  • Silverlight Cream for December 31, 2010 -- #1019

    - by Dave Campbell
In this Issue: Michael Washington, Thomas Martinsen, Mike Ormond, William E. Burrows(-2-), Vangos Pterneas, Jesse Liberty, Diptimaya Patra, and Jeff Blankenburg(-2-). Above the Fold: Silverlight: "Drag from Multiple Source In Silverlight 4" Diptimaya Patra WP7: "What I Learned In WP7 – Issue 12" Jeff Blankenburg Shoutouts: Paul Thurrott posted a great phone comparison chart: Great Windows Phone comparison chart Kunal Chowdhury announced his new Silverlight site: Welcome to Silverlight-Zone - Site is Live Now ... Good Luck, Kunal! From SilverlightCream.com: MyStudioServer goes Open Source Michael Washington decided to put his "MyStudioServer" on CodePlex... I saw this last spring and it's pretty darn cool... check out the post and examples. UriMapping for WP7 Thomas Martinsen discusses UriMapping in WP7, details the steps you need to follow and has sample code to demonstrate. More Monitoring Web Requests on Windows Phone Mike Ormond revisits a post about monitoring WP7 web requests, and shows how to get the data via Fiddler. New Tutorial – Windows Phone 7 (Getting Started) William E. Burrows has 2 parts of a video tutorial series on WP7 development up. This first gets things rolling, explains what is going on, and gets far enough to display golf courses stored in the database. WP7 Tutorial – Part 2: Managing Courses William E. Burrows's 2nd video tutorial is on building out the app to provide features to manage the golf courses for this golf handicap application. Face detection in Windows Phone 7 Vangos Pterneas has a post up about a WP7 app he did using René Schulte's Facelight to do facial recognition. Source available and also on CodePlex. Windows Phone From Scratch – Navigation II Jesse Liberty has up his latest WP7 from Scratch and is the 2nd post in the Navigation series, combining the previous navigation with the animation from the post before to produce a better navigation experience. Drag from Multiple Source In Silverlight 4 Diptimaya Patra has a post up at dotnetslackers on dragging into a drop area from multiple sources of different data templates and contexts. What I Learned In WP7 – Issue 12 Jeff Blankenburg's number 12 is up and he's got all the RGB colors on WP7 charted out, name, HEX, RGB, and visual... looks like a good one to bookmark. What I Learned In WP7 – Issue 13 Jeff Blankenburg's number 13 is the chart I have listed in the Shoutout above... a complete phone comparison chart. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Recording Topics manually and automatically

    - by maria.cozzolino(at)oracle.com
When you are recording UPK topics, the default mode for recording is manual recording, where you tell the system when to record each screen shot. This mode allows you to take the exact screen shot you need. However, it does get a bit tedious when you are recording long topics, especially if you forget to take a few screen shots. In UPK 3.5, a new version of recording was introduced - Automatic Recording. It was designed to simplify the recording process by automatically capturing screen shots as you perform your transaction. If you haven't experimented with Automatic Recording, I'd recommend you give it a try - it might make your recording life easier. If you are recording with sound, you can also narrate your topic while recording it. To turn on Automatic Recording: 1. In Tools/Options, there are two recorder tabs. The first tab, under content defaults, includes settings that you may want to share between developers, like whether keyboard shortcuts are automatically captured. 2. The second tab is the one that contains the personal preferences, like screen shot capture key and whether to record automatically or manually. On this tab, choose the option for Automatic Recording. 3. Save the settings. Note that this setting will NOT impact content defaults; this is for your user only. When you launch the recorder, you will notice a slightly different message with guidance on how to start and stop automatic recording. Once you start recording, the recorder window is hidden until the end of the recording session to allow you to capture your transaction. In the task tray, there is a series of icons that let you know that you are capturing content. You can pause the recording, as well as set and view your sound levels if you are using sound. A camera appears during each screen capture to help you know when the system is capturing a screen shot, and a context indicator appears to show the recognition. With automatic recording, you can let the system capture the necessary screen shots. It may provide a more natural recording experience, and is probably easier for the untrained developer. On the other hand, you have a bit more control over which screen shot appears with manual recording, but it also means you have to remember to capture the screen shot. :) We'd be interested in hearing which type of recording you do, and any rationale on why you made that choice. Please comment and let us know. --Maria Cozzolino, Manager of UPK Software Requirements and UI Design

    Read the article

  • How do you play or record audio (to .WAV) on Linux in C++? [closed]

    - by Jacky Alcine
Hello, I've been looking for a way to play and record audio on a Linux (preferably Ubuntu) system. I'm currently working on a front-end to a voice recognition toolkit that'll automate a few steps required to adapt a voice model for PocketSphinx and Julius. Suggestions of alternative means of audio input/output are welcome, as well as a fix to the bug shown below. Here is the current code I've used so far to play a .WAV file (the finish: label targeted by the goto statements is presumably defined later in the full source, linked at the end):

void Engine::sayText ( const string OutputText )
{
    string audioUri = "temp.wav";
    string requestUri = this->getRequestUri( OPENMARY_PROCESS, OutputText.c_str( ) );
    int error, audioStream;
    pa_simple *pulseConnection;
    pa_sample_spec simpleSpecs;
    simpleSpecs.format = PA_SAMPLE_S16LE;
    simpleSpecs.rate = 44100;
    simpleSpecs.channels = 2;

    eprintf( E_MESSAGE, "Generating audio for '%s' from '%s'...", OutputText.c_str( ), requestUri.c_str( ) );
    FILE* audio = this->getHttpFile( requestUri, audioUri );
    fclose( audio );
    eprintf( E_MESSAGE, "Generated audio." );

    if ( ( audioStream = open( audioUri.c_str( ), O_RDONLY ) ) < 0 )
    {
        fprintf( stderr, __FILE__": open() failed: %s\n", strerror( errno ) );
        goto finish;
    }

    if ( dup2( audioStream, STDIN_FILENO ) < 0 )
    {
        fprintf( stderr, __FILE__": dup2() failed: %s\n", strerror( errno ) );
        goto finish;
    }

    close( audioStream );

    pulseConnection = pa_simple_new( NULL, "AudioPush", PA_STREAM_PLAYBACK, NULL, "openMary C++", &simpleSpecs, NULL, NULL, &error );

    /* NOTE: this reads the raw file from the first byte, so the WAV header
       itself is also played as audio data. */
    for ( int i = 0;; i++ )
    {
        const int bufferSize = 1024;
        uint8_t audioBuffer[bufferSize];
        ssize_t r;

        eprintf( E_MESSAGE, "Buffering %d..", i );

        /* Read some data ... */
        if ( ( r = read( STDIN_FILENO, audioBuffer, sizeof ( audioBuffer ) ) ) <= 0 )
        {
            if ( r == 0 ) /* EOF */
                break;

            eprintf( E_ERROR, __FILE__": read() failed: %s\n", strerror( errno ) );
            if ( pulseConnection )
                pa_simple_free( pulseConnection );
            return; /* the connection was freed above; don't keep using it */
        }

        /* ... and play it */
        if ( pa_simple_write( pulseConnection, audioBuffer, ( size_t ) r, &error ) < 0 )
        {
            fprintf( stderr, __FILE__": pa_simple_write() failed: %s\n", pa_strerror( error ) );
            if ( pulseConnection )
                pa_simple_free( pulseConnection );
            return; /* likewise, stop instead of writing to a freed connection */
        }

        usleep( 2 );
    }

    /* Make sure that every single sample was played */
    if ( pa_simple_drain( pulseConnection, &error ) < 0 )
    {
        fprintf( stderr, __FILE__": pa_simple_drain() failed: %s\n", pa_strerror( error ) );
        if ( pulseConnection )
            pa_simple_free( pulseConnection );
    }
}

NOTE: If you want the rest of the code to this file, you can download it here directly from Launchpad.

    Read the article

  • Transparent Technology from Amazon

    - by David Dorf
    Amazon has been making some interesting moves again, this time in the augmented humanity area. Augmented humanity is about helping humans overcome their shortcomings using technology. Putting a powerful smartphone in your pocket helps you in many ways, like navigating streets, communicating with far-off friends, and accessing information. But the interface for smartphones is somewhat limiting and unnatural, so companies have been looking for ways to make the technology more transparent and therefore easier to use. When Apple helped us drop the stylus, we took a giant leap forward in simplicity. Using touchscreens with intuitive gestures was part of the iPhone's original appeal. People don't want to know that technology is there -- they just want the benefits. So what's the next leap beyond the touchscreen to make smartphones even easier to use? Two natural ways we interact with the world around us are sight and voice. Google and Apple have been using both in their mobile platforms for limited use cases. Nobody actually wants to type a text message, so why not just speak it? And if you want more information about a book, why not just snap a picture of the cover? That's much more accurate than trying to key in the title and/or author. So what's Amazon been doing? First, Amazon released a new iPhone app called Flow that allows iPhone users to see information about products in context. Yes, it's an augmented reality app that uses the phone's camera to view products and overlays data about the products on the screen. For the most part it requires the barcode to be visible to correctly identify the product, but I believe it can recognize certain logos as well. Download the app and try it out, but don't expect perfection. It's good enough to demonstrate the concept, but far from accurate enough. (MobileBeat did a pretty good review.) Extrapolate to the future and we might just have a heads-up display in our eyeglasses. The second interesting area is voice response, for which Siri is getting lots of attention. Amazon may have purchased a voice recognition company called Yap, although the deal is not confirmed. But it would make perfect sense, especially with the Kindle Fire in Amazon's lineup. I believe over the next 3-5 years the way in which we interact with smartphones will mature, and they will become more transparent yet more important to our daily lives. This will, of course, impact the way we shop, making information more readily accessible than it already is. Amazon seems to be positioning itself to be at the forefront of this trend, so we should be watching them carefully.

    Read the article

  • Bad Data is Really the Monster

    - by Dain C. Hansen
    "Bad Data is Really the Monster" is the title of an article written by Bikram Sinha, from whom I borrowed the title and the inspiration for this blog. Sinha writes: "Bad or missing data makes application systems fail when they process order-level data. One of the key items in the supply-chain industry is the product (aka SKU). Therefore, it becomes the most important data element to tie up multiple merchandising processes including purchase order allocation, stock movement, shipping notifications, and inventory details… Bad data can cause huge operational failures and cost millions of dollars in terms of time, resources, and money to clean up and validate data across multiple participating systems." So yes, bad data really is the monster. What do we do about it? Close our eyes and hope it stays in the closet? We've tackled this problem for some years now at Oracle, and our latest introduction of Oracle Enterprise Data Quality, together with our integrated Oracle Master Data Management products, provides a complete, best-in-class answer to the bad data monster. What's unique about it? Oracle Enterprise Data Quality combines powerful data profiling, cleansing, matching, and monitoring capabilities while offering unparalleled ease of use. What makes it unique is that it has dedicated capabilities to address the distinct challenges of both customer and product data quality [different monsters have different needs, of course!]. The ability to profile data is just as important, to identify and measure poor quality data and to identify new rules and requirements. Included are semantic and pattern-based recognition to accurately parse and standardize data that is poorly structured. Finally, all of the data quality components are integrated with Oracle Master Data Management, including Oracle Customer Hub and Oracle Product Hub, as well as Oracle Data Integrator Enterprise Edition and Oracle CRM. Want to learn more? On Tuesday, Nov 15th, I invite you to listen to our webcast, Reduce ERP Consolidation Risks with Oracle Master Data Management. I'll be joined by our partner iGate Patni, and we'll be talking about one specific way to deal with the bad data monster, specifically around ERP consolidation. Look forward to seeing you there!
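
    To make "pattern-based recognition to parse and standardize data" concrete, here is a toy illustration of the kind of rule such tools apply. This is not the Oracle EDQ API; the field, pattern, and sample values are invented for the example:

        #include <iostream>
        #include <regex>
        #include <string>

        /* Normalize free-form US phone numbers to NNN-NNN-NNNN;
           an empty result means the record fails the quality rule. */
        std::string standardizePhone( const std::string& raw )
        {
            static const std::regex pattern(
                R"(\(?(\d{3})\)?[\s.-]*(\d{3})[\s.-]*(\d{4}))" );
            std::smatch m;
            if ( !std::regex_search( raw, m, pattern ) ) return "";
            return m[1].str( ) + "-" + m[2].str( ) + "-" + m[3].str( );
        }

        int main( )
        {
            const std::string samples[] = { "(415) 555-0100", "415.555.0100", "bad data" };
            int conforming = 0;  /* a tiny "profile" of the batch */
            for ( const auto& s : samples ) {
                std::string clean = standardizePhone( s );
                if ( !clean.empty( ) ) ++conforming;
                std::cout << s << " -> " << ( clean.empty( ) ? "<reject>" : clean ) << "\n";
            }
            std::cout << conforming << "/3 records conform\n";
            return 0;
        }

    A real data quality engine runs thousands of such parse/standardize/match rules against customer and product records, and the profiling step is what tells you which rules you need in the first place.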

    Read the article

  • Java Spotlight Episode 111: Bruno Souza @brjavaman and Fabiane Nardon @fabianenardon on StoryTroop @storytroop

    - by Roger Brinkley
    Interview with Bruno Souza and Fabiane Nardon on StoryTroop. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News
    - End of Public Updates for JDK 6
    - Bean Validation 1.1 public review approved
    - Two key JSRs accepted in time for Java EE 7
    - Public JCP EC meeting audio and materials posted
    - Devoxx UK and Devoxx France CFP open
    - JPA 2.1 Schema Generation
    - WebSocket, Java EE 7, and GlassFish

    Events
    - Dec 3-5, jDays, Göteborg, Sweden
    - Dec 4-6, JavaOne Latin America, São Paulo, Brazil
    - Dec 14-15, IndicThreads, Pune, India
    - JCP Spec Lead Call December on Developing a TCK
    - JCP EC Face to Face Meeting, January 15-16, West Coast USA

    Feature Interview
    Bruno Souza is a Java Developer and Open Source Evangelist at Summa Technologies, and a Cloud Expert at ToolsCloud. Nurturing developer communities is a personal passion, and Bruno has worked actively with the Java, NetBeans, OpenSolaris, OFBiz, and many other open source communities. As founder and coordinator of SouJava (The Java Users Society), one of the world's largest Java User Groups, Bruno led the expansion of the Java movement in Brazil. As founder of the Worldwide Java User Groups Community, Bruno helped the creation and organization of hundreds of JUGs worldwide. A Java developer since the early days, Bruno participated in some of the largest Java projects in Brazil.
    Fabiane Nardon is a computer scientist who is passionate about creating software that will positively change the world we live in. She was the architect of the Brazilian Healthcare Information System, considered the largest Java EE application in the world and winner of the 2005 Duke's Choice Award. She has led several communities, including the JavaTools Community at java.net, where 800+ open source projects were born. She is a frequent speaker at conferences in Brazil and abroad, including JavaOne, OSCON, Jfokus, JustJava and more. She is also the author of several technical articles and a member of the program committee of several conferences such as JavaOne, OSCON, and TDC. She was chosen a Java Champion by Sun Microsystems in recognition of her contribution to the Java ecosystem. Currently, she works as a tools expert at ToolsCloud and in companies she co-founded, where she is helping to shape new disruptive Internet-based services.
    StoryTroop is a space where we combine multiple perspectives about a story. This creates an understanding of that story like never seen before. Pieces of a story are organized in time and space, and anyone can add a different perspective.

    What's Cool
    - Geek Bike Ride at JavaOne LAD
    - Devoxx UK (Mar 26, 27) and FR (Mar 27 - 29) CFP
    - jFokus schedule is firming up
    - Nashorn Blog
    - 1,500 @JavaSpotlight Twitter followers

    Read the article

  • A Community Cure for a String Splitting Headache

    - by Tony Davis
    A heartwarming tale of dogged perseverance and Community collaboration to solve some SQL Server string-related headaches. Michael J Swart posted a blog this week that had me smiling in recognition and agreement, describing how an inquisitive Developer or DBA deals with a problem. It's a three-step process, starting with discomfort and anxiety; a feeling that one doesn't know as much about one's chosen specialized subject as previously thought. It progresses through a phase of intense research and learning until finally one achieves breakthrough, blessed relief and renewed optimism. In this case, the discomfort was provoked by the mystery of massively high CPU when searching Unicode strings in SQL Server. Michael explored the problem via Stack Overflow, Google and Twitter #sqlhelp, finally leading to resolution and a blog post that shared what he learned. Perfect; except that sometimes you have to be prepared to share what you've learned so far, while still mired in the phase of nagging discomfort. A good recent example can be found on our own blogs. Despite being a loud advocate of the lightning-fast T-SQL-based string splitting techniques, honed to near perfection over many years by Jeff Moden and others, Phil Factor retained a dogged conviction that, in theory, shredding element-based XML using XQuery ought to be even more efficient for splitting a string to create a table. After some careful testing, he found instead that the XML way performed and scaled miserably by comparison. Somewhat subdued, and with a nagging feeling that perhaps he was still missing "something", he posted his findings. What happened next was a joy to behold; the community jumped in to suggest subtle changes in approach, using an attribute-based rather than element-based XML list, and tweaking the XQuery shredding. The result was performance and scalability that surpassed all other techniques. I asked Phil how quickly he would have arrived at the real breakthrough on his own. His candid answer was "never". Both are great examples of the power of Community learning, and the latter in particular shows the importance of being brave enough to parade one's ignorance. Perhaps Jeff Moden will accept the string-splitting gauntlet one more time. To quote the great man: you've just got to love this community! If you've an interesting tale to tell about being helped to a significant breakthrough for a problem by the community, I'd love to hear about it. Cheers, Tony.
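
    For readers outside the database world, the performance drama here is specific to T-SQL, which at the time had no built-in split function, hence the ingenuity of tally-table and XML-shredding workarounds. Purely for contrast (this is not the T-SQL technique being discussed), the same "string to table" task in a general-purpose language is a few lines; a minimal C++ sketch:

        #include <iostream>
        #include <sstream>
        #include <string>
        #include <vector>

        /* Split a delimited list into individual rows. */
        std::vector<std::string> split( const std::string& list, char delimiter )
        {
            std::vector<std::string> rows;
            std::istringstream stream( list );
            std::string item;
            while ( std::getline( stream, item, delimiter ) )
                rows.push_back( item );
            return rows;
        }

        int main( )
        {
            for ( const auto& row : split( "alpha,bravo,charlie", ',' ) )
                std::cout << row << "\n";
            return 0;
        }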

    Read the article

  • Congratulations to the 2012 Oracle Spatial Award Winners!

    - by Mandy Ho
    I just returned from the 2012 Location Intelligence and Oracle Spatial User Conference in Washington, DC, held by Directions Magazine. It was a great conference, with presentations from across the country and globe, networking with Oracle Spatial users, and meetings with new customers and partners. As part of the yearly event, Oracle recognizes special customers and partners for their contributions to advancing mainstream solutions using geospatial technology. This was the 8th year that Oracle has recognized innovative industry leaders. The awards were given in three categories: Education/Research, Innovator, and Partnership. Here's a little on each of the award winners.

    Education and Research Award Winner: Technical University of Berlin
    The Institute for Geodesy and Geoinformation Science of the Technical University of Berlin (TU Berlin) was selected for its leading research work in mapping urban and regional space onto virtual 3D city and landscape models, and its use of Oracle Spatial, including 3D vector and GeoRaster type support, as the data management platform.

    Innovator Award Winner: Istanbul Metropolitan Municipality
    Istanbul is the 3rd largest metropolitan area in Europe. One of its greatest challenges is organizing efficient public transportation for citizens and visitors: there are 15 types of transportation organized by 8 different agencies. To solve this problem, the Directorate of GIS of Istanbul Metropolitan Municipality created a multi-modal itinerary system to help citizens decide between public transport and their private cars. They chose the Oracle Spatial Network Model as the solution, together with Java and SOAP web services.

    Partnership Award Winners: CSoft Group and OSCARS
    The Partnership award is given to the ISV or integrator that has demonstrated outstanding achievements in partnering with Oracle on the development side, in taking solutions to market.
    CSoft Group - the largest Russian integrator and consultancy provider in CAD and GIS. CSoft was selected by the Oracle Spatial product development organization for its key role in delivering geospatial solutions based on Oracle Database and Fusion Middleware to the Russian market.
    OSCARS - provides consulting and training in France, Belgium, and Luxembourg. With only 3 full-time staff, they have achieved significant success with leading-edge customer implementations leveraging the latest Oracle Spatial/MapViewer technologies, and deliver training throughout Europe.

    Finally, we also gave two Special Recognition awards to partners that helped contribute to the Oracle Partner Network Spatial Specialization. These two partners provided insight and technical expertise from a partner perspective to help launch the new certification program for Oracle Spatial technologies.
    Award Winners: ThinkHuddle and OSCARS

    For more pictures of the conference and the awards, visit our Facebook page: http://www.facebook.com/OracleDatabase
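
    As an aside for developers: the computational heart of an itinerary system like Istanbul's is a shortest-path search over a weighted network whose edges carry a transport mode, which is the kind of analysis the Oracle Spatial Network Model provides out of the box. Purely as an illustration of the underlying idea (the stops, times, and modes below are invented), here is a minimal C++ sketch using Dijkstra's algorithm:

        #include <iostream>
        #include <limits>
        #include <queue>
        #include <string>
        #include <vector>

        struct Edge { int to; double minutes; std::string mode; };

        /* Dijkstra: fastest travel time from `start` to every node. */
        std::vector<double> shortestTimes( const std::vector<std::vector<Edge>>& graph, int start )
        {
            const double inf = std::numeric_limits<double>::infinity();
            std::vector<double> dist( graph.size(), inf );
            using Item = std::pair<double, int>;  /* (time so far, node) */
            std::priority_queue<Item, std::vector<Item>, std::greater<Item>> frontier;
            dist[start] = 0.0;
            frontier.push( { 0.0, start } );
            while ( !frontier.empty() ) {
                auto [d, u] = frontier.top();
                frontier.pop();
                if ( d > dist[u] ) continue;  /* stale queue entry */
                for ( const Edge& e : graph[u] )
                    if ( d + e.minutes < dist[e.to] ) {
                        dist[e.to] = d + e.minutes;
                        frontier.push( { dist[e.to], e.to } );
                    }
            }
            return dist;
        }

        int main( )
        {
            /* Hypothetical stops: 0 = home, 1 = metro station, 2 = ferry pier, 3 = office. */
            std::vector<std::vector<Edge>> graph( 4 );
            graph[0] = { { 1, 10, "walk" }, { 2, 25, "bus" } };
            graph[1] = { { 3, 15, "metro" } };
            graph[2] = { { 3, 20, "ferry" } };
            std::cout << "Fastest trip: " << shortestTimes( graph, 0 )[3] << " minutes\n";
            return 0;
        }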

    Read the article

  • ArchBeat Link-o-Rama for November 16, 2012

    - by Bob Rhubart
    X.509 Certificate Revocation Checking Using OCSP protocol with Oracle WebLogic Server 12c | Abhijit Patil
    Abhijit Patil's article focuses on how to use X.509 certificate revocation checking functionality with the OCSP protocol to validate in-bound certificates. Although the article focuses on inbound OCSP validation, Oracle WebLogic Server 12c also supports outbound OCSP validation.

    Leveraging Oracle Scorecard and Strategy Management for Everyday BI Needs
    "Oracle Scorecard and Strategy Management (OSSM) is built upon the premise that a scorecard system should not be separate from the BI system, like many comparable tools are today," says author Kevin McGinely. "Instead of a separate application with its own data, its own data definitions, and its own front-end, Oracle made the choice to integrate OSSM directly into OBIEE."

    Applying BI for personal productivity recognition and gamification | Capgemini Oracle Blog
    "It is quite obvious that if you want people to participate you need an appealing and intuitive user interface," says Capgemini's Henk Vermeulen in this interesting exploration of gamification in the enterprise.

    Build and release OSB projects with Maven | Edwin Biemond
    "With Maven we are able to build and deploy OSB projects," says Oracle ACE Edwin Biemond. "The artifacts generated by Maven, called snapshots and releases, can be automatically uploaded to a software repository. These versioned OSB jars can then be downloaded by the OSB servers and deployed." Biemond shows you how in this detailed technical post.

    ADF Generator for Dynamic ADF BC and ADF UI | Andrejus Baranovskis
    Oracle ACE Director Andrejus Baranovskis' post is an extension of his OOW12 presentation, "Oracle ADF Implementations Around the Globe: Best Practices," and includes the sample application he promised to share.

    Service-oriented organizations have a head start in the cloud race | ZDNet
    ZDNet SOA blogger Joe McKendrick offers a snapshot of a recent report by Forrester analyst James Staten.

    Oracle Fusion Middleware Security: X509 Fallback to Form | Debasish Bhattacharya
    Oracle Fusion Middleware A-Team architect Debasish Bhattacharya shares a solution that resulted from brainstorming with colleagues Chris Johnson and Brian Eidelman. "The solution is not very difficult," says Bhattacharya, "though it needs some additional configurations and coding." It's all presented in this detailed post.

    Agile Architecture | David Sprott
    "There is ample evidence that Agile Architecture is a primary contributor to business agility, yet we do not have a well understood architecture management system that integrates with Agile methods," observes David Sprott in this extensive post.

    Thought for the Day
    "Operating systems are like underwear — nobody really wants to look at them." — Bill Joy
    Source: SoftwareQuotes.com

    Read the article

  • Planning Bulletin for JRE 7: What EBS Customers Can Do Today

    - by Steven Chan (Oracle Development)
    An initiative to certify Oracle E-Business Suite with JRE 7 desktop clients is underway. We have tested EBS 11.5.10.2, 12.0, and 12.1 with JRE 7. We have fixes for nearly all of the compatibility issues now, and are working hard to produce the remaining fixes quickly.

    A. When will JRE 7 be certified with Oracle E-Business Suite?
    We will announce the certification of the E-Business Suite with JRE 7 once all fixes are available, verified, and documented. Certification announcements will be distributed through My Oracle Support, My Oracle Support Community Forums, Oracle Technology Network Forums, newsletters, and user group news outlets. Oracle's Revenue Recognition rules prohibit us from discussing certification and release dates. In addition to the standard communication channels listed above, customers are encouraged to monitor or subscribe to this blog.

    B. What can customers do to prepare for the JRE 7 certification?
    Customers should ensure that they are on the latest available JRE 6 update. Of the compatibility issues identified so far with JRE 7, the most critical is one that prevents E-Business Suite Forms-based products from launching on Windows desktops running JRE 7. Customers can prevent this issue today by ensuring that they have applied the latest certified patches documented for JRE 6 configurations to their EBS application tier servers:
    - Apply Forms patch 14615390 to EBS 11i environments (Note 125767.1)
    - Apply Forms patch 14614795 to EBS 12.0 and 12.1 environments (Note 437878.1)
    These patches are compatible with JRE 6 and 7, production ready, and fully tested with the E-Business Suite. They may be applied immediately to all E-Business Suite environments. All other Forms prerequisites documented in the Notes above should also be applied now.

    C. What else will be required by the final certified configuration?
    Oracle expects that all other compatibility issues will be resolved by installing a specific JRE 7 release, at minimum. That specific release has not been finalized yet, and this is subject to change.

    D. Where will the official patch requirements be documented?
    All patches required for ensuring full compatibility of the E-Business Suite with JRE 7 will be documented in updates to these published Notes:
    For EBS 11i:
    - Deploying Sun JRE (Native Plug-in) for Windows Clients in Oracle E-Business Suite Release 11i (Note 290807.1)
    - Upgrading Developer 6i with Oracle E-Business Suite 11i (Note 125767.1)
    For EBS 12:
    - Deploying Sun JRE (Native Plug-in) for Windows Clients in Oracle E-Business Suite Release 12 (Note 393931.1)
    - Upgrading OracleAS 10g Forms and Reports in Oracle E-Business Suite Release 12 (Note 437878.1)

    Disclaimer: The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and it should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

    Read the article

  • What Precalculus knowledge is required before learning Discrete Math Computer Science topics?

    - by Ein Doofus
    Below I've listed the chapters from a Precalculus book as well as the author-recommended Computer Science chapters from a Discrete Mathematics book. Although these chapters are from two specific books on these subjects, I believe the topics are generally the same between any Precalc or Discrete Math book. What Precalculus topics should one know before starting these Discrete Math Computer Science topics?

    Discrete Mathematics CS Chapters
    1.1 Propositional Logic
    1.2 Propositional Equivalences
    1.3 Predicates and Quantifiers
    1.4 Nested Quantifiers
    1.5 Rules of Inference
    1.6 Introduction to Proofs
    1.7 Proof Methods and Strategy
    2.1 Sets
    2.2 Set Operations
    2.3 Functions
    2.4 Sequences and Summations
    3.1 Algorithms
    3.2 The Growth of Functions
    3.3 Complexity of Algorithms
    3.4 The Integers and Division
    3.5 Primes and Greatest Common Divisors
    3.6 Integers and Algorithms
    3.8 Matrices
    4.1 Mathematical Induction
    4.2 Strong Induction and Well-Ordering
    4.3 Recursive Definitions and Structural Induction
    4.4 Recursive Algorithms
    4.5 Program Correctness
    5.1 The Basics of Counting
    5.2 The Pigeonhole Principle
    5.3 Permutations and Combinations
    5.6 Generating Permutations and Combinations
    6.1 An Introduction to Discrete Probability
    6.4 Expected Value and Variance
    7.1 Recurrence Relations
    7.3 Divide-and-Conquer Algorithms and Recurrence Relations
    7.5 Inclusion-Exclusion
    8.1 Relations and Their Properties
    8.2 n-ary Relations and Their Applications
    8.3 Representing Relations
    8.5 Equivalence Relations
    9.1 Graphs and Graph Models
    9.2 Graph Terminology and Special Types of Graphs
    9.3 Representing Graphs and Graph Isomorphism
    9.4 Connectivity
    9.5 Euler and Hamilton Paths
    10.1 Introduction to Trees
    10.2 Applications of Trees
    10.3 Tree Traversal
    11.1 Boolean Functions
    11.2 Representing Boolean Functions
    11.3 Logic Gates
    11.4 Minimization of Circuits
    12.1 Languages and Grammars
    12.2 Finite-State Machines with Output
    12.3 Finite-State Machines with No Output
    12.4 Language Recognition
    12.5 Turing Machines

    Precalculus Chapters
    R.1 The Real-Number System
    R.2 Integer Exponents, Scientific Notation, and Order of Operations
    R.3 Addition, Subtraction, and Multiplication of Polynomials
    R.4 Factoring
    R.5 Rational Expressions
    R.6 Radical Notation and Rational Exponents
    R.7 The Basics of Equation Solving
    1.1 Functions, Graphs, Graphers
    1.2 Linear Functions, Slope, and Applications
    1.3 Modeling: Data Analysis, Curve Fitting, and Linear Regression
    1.4 More on Functions
    1.5 Symmetry and Transformations
    1.6 Variation and Applications
    1.7 Distance, Midpoints, and Circles
    2.1 Zeros of Linear Functions and Models
    2.2 The Complex Numbers
    2.3 Zeros of Quadratic Functions and Models
    2.4 Analyzing Graphs of Quadratic Functions
    2.5 Modeling: Data Analysis, Curve Fitting, and Quadratic Regression
    2.6 Zeros and More Equation Solving
    2.7 Solving Inequalities
    3.1 Polynomial Functions and Modeling
    3.2 Polynomial Division; The Remainder and Factor Theorems
    3.3 Theorems about Zeros of Polynomial Functions
    3.4 Rational Functions
    3.5 Polynomial and Rational Inequalities
    4.1 Composite and Inverse Functions
    4.2 Exponential Functions and Graphs
    4.3 Logarithmic Functions and Graphs
    4.4 Properties of Logarithmic Functions
    4.5 Solving Exponential and Logarithmic Equations
    4.6 Applications and Models: Growth and Decay
    5.1 Systems of Equations in Two Variables
    5.2 Systems of Equations in Three Variables
    5.3 Matrices and Systems of Equations
    5.4 Matrix Operations
    5.5 Inverses of Matrices
    5.6 Systems of Inequalities and Linear Programming
    5.7 Partial Fractions
    6.1 The Parabola
    6.2 The Circle and Ellipse
    6.3 The Hyperbola
    6.4 Nonlinear Systems of Equations

    Read the article

  • Alfa AWUS036H USB wireless adapter not recognized

    - by GFiasco
    The Alfa AWUS036H USB wireless adapter will not be recognized by my netbook (Ubuntu 14.04, Asus X201E). As I understand it, the drivers should already be built into this version of Linux, but I tried a make/make install of the latest Realtek drivers (as mentioned on How do I install drivers for the Alfa AWUS036H USB wireless adapter?) and it didn't work. I then followed the advice of this thread (ALFA AWUS036NH driver) and did a make/make install of the most up-to-date backport of the drivers, but that didn't work either. At this point I tried a series of commands from this thread (http://ubuntuforums.org/showthread.php?t=2187780) in an attempt to identify the problem, but at no point could I get the laptop to recognize the USB adapter. I have also tested the USB cable itself, tried both the USB 2.0 and 3.0 ports on the laptop, have never received an error message about needing to update the firmware, and have seemingly installed all manner of variations of the Realtek drivers which were supposed to make the adapter work. (I also tried to delete/clean up after each install, in the hope I wasn't making things worse.) Not sure what I should do next. Please let me know if I need to post any more information. Thanks very much for your help.

    EDIT: Before inserting the Alfa USB adapter:

    :~$ lsusb
    Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 004: ID 0bda:570c Realtek Semiconductor Corp.
    Bus 001 Device 026: ID 13d3:3393 IMC Networks
    Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 003 Device 002: ID 03f0:3112 Hewlett-Packard
    Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    After inserting the Alfa USB adapter (USB 3.0 port, no change):

    :~$ lsusb
    Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 004: ID 0bda:570c Realtek Semiconductor Corp.
    Bus 001 Device 026: ID 13d3:3393 IMC Networks
    Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 003 Device 002: ID 03f0:3112 Hewlett-Packard
    Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Ran tail -f /var/log/syslog, inserted the device, no recognition (last entry is dated 16:17:01, so an hour ago). Going to check on an Ubuntu 14.04 laptop and a Windows XP desktop; I'll update afterwards. Thanks for your help to this point.

    Read the article

< Previous Page | 434 435 436 437 438 439 440 441 442 443 444 445  | Next Page >