Search Results

Search found 2347 results on 94 pages for 'slightly frustrated'.


  • Rendering ASP.NET Script References into the Html Header

    - by Rick Strahl
    One thing that I’ve come to appreciate in developing ASP.NET controls that use JavaScript is the ability to have more control over script and script include placement than ASP.NET provides natively. Specifically, in ASP.NET you can use either the ClientScriptManager or ScriptManager to embed scripts and script references into pages via code. This works reasonably well, but the script references that get generated are rendered into the HTML body and there’s very little operational control over the placement of scripts. If you have multiple controls or several of the same control that need to place the same scripts onto the page, it’s not difficult to end up with scripts that render in the wrong order and stop working correctly. This is especially critical if you load script libraries with dependencies, either via resources or even if you are rendering references to CDN resources. Natively ASP.NET provides a host of methods that help embed scripts into the page via either Page.ClientScript or the ASP.NET ScriptManager control (both with slightly different syntax): RegisterClientScriptBlock renders a script block at the top of the HTML body and should be used for embedding callable functions/classes. RegisterStartupScript renders a script block just prior to the </form> tag and should be used for embedding code that should execute when the page is first loaded. Not recommended – use jQuery.ready() or equivalent load time routines. RegisterClientScriptInclude embeds a reference to a script from a url into the page. RegisterClientScriptResource embeds a reference to a script from a resource file, generating a long resource file string. All 4 of these methods render their <script> tags into the HTML body. The script blocks give you a little bit of control by having a ‘top’ and ‘bottom’ of the document location, which gives you some flexibility over script placement and precedence. Script includes and resource urls unfortunately do not even get that much control – references are simply rendered into the page in the order of declaration. The ASP.NET ScriptManager control facilitates this task a little bit with the ability to specify scripts in code and the ability to programmatically check what scripts have already been registered, but it doesn’t provide any more control over the script rendering process itself. Further, the ScriptManager is a bear to deal with generically because generic code always has to check and see if it is actually present. Some time ago I posted a ClientScriptProxy class that helps with managing the latter process of sending script references either to ClientScript or ScriptManager if it’s available. Since I last posted about this there have been a number of improvements in this API, one of which is the ability to control placement of scripts and script includes in the page, which I think is rather important and a missing feature in the ASP.NET native functionality. 
Handling ScriptRenderModes One of the big enhancements that I’ve come to rely on is the ability of the various script rendering functions described above to support rendering in multiple locations: /// <summary> /// Determines how scripts are included into the page /// </summary> public enum ScriptRenderModes { /// <summary> /// Inherits the setting from the control or from the ClientScript.DefaultScriptRenderMode /// </summary> Inherit, /// Renders the script include at the location of the control /// </summary> Inline, /// <summary> /// Renders the script include into the bottom of the header of the page /// </summary> Header, /// <summary> /// Renders the script include into the top of the header of the page /// </summary> HeaderTop, /// <summary> /// Uses ClientScript or ScriptManager to embed the script include to /// provide standard ASP.NET style rendering in the HTML body. /// </summary> Script, /// <summary> /// Renders script at the bottom of the page before the last Page.Controls /// literal control. Note this may result in unexpected behavior /// if /body and /html are not the last thing in the markup page. /// </summary> BottomOfPage } This enum is then applied to the various Register functions to allow more control over where scripts actually show up. Why is this useful? For me I often render scripts out of control resources and these scripts often include things like a JavaScript Library (jquery) and a few plug-ins. The order in which these can be loaded is critical so that jQuery.js always loads before any plug-in for example. Typically I end up with a general script layout like this: Core Libraries- HeaderTop Plug-ins: Header ScriptBlocks: Header or Script depending on other dependencies There’s also an option to render scripts and CSS at the very bottom of the page before the last Page control on the page which can be useful for speeding up page load when lots of scripts are loaded. The API syntax of the ClientScriptProxy methods is closely compatible with ScriptManager’s using static methods and control references to gain access to the page and embedding scripts. 
For example, to render some script into the current page in the header: // Create script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "function helloWorld() { alert('hello'); }", true, ScriptRenderModes.Header); // Same again - shouldn't be rendered because it's the same id ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "function helloWorld() { alert('hello'); }", true, ScriptRenderModes.Header); // Create a second script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function2", "function helloWorld2() { alert('hello2'); }", true, ScriptRenderModes.Header); // This just calls ClientScript and renders into bottom of document ClientScriptProxy.Current.RegisterStartupScript(this,typeof(ControlResources), "call_hello", "helloWorld();helloWorld2();", true); which generates: <html xmlns="http://www.w3.org/1999/xhtml" > <head><title> </title> <script type="text/javascript"> function helloWorld() { alert('hello'); } </script> <script type="text/javascript"> function helloWorld2() { alert('hello2'); } </script> </head> <body> … <script type="text/javascript"> //<![CDATA[ helloWorld();helloWorld2();//]]> </script> </form> </body> </html> Note that the scripts are generated into the header rather than the body except for the last script block which is the call to RegisterStartupScript. In general I wouldn’t recommend using RegisterStartupScript – ever. It’s a much better practice to use a script base load event to handle ‘startup’ code that should fire when the page first loads. So instead of the code above I’d actually recommend doing: ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "call_hello", "$().ready( function() { alert('hello2'); });", true, ScriptRenderModes.Header); assuming you’re using jQuery on the page. For script includes from a Url the following demonstrates how to embed scripts into the header. 
This example injects a jQuery and jQuery.UI script reference from the Google CDN then checks each with a script block to ensure that it has loaded and if not loads it from a server local location: // load jquery from CDN ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources), "http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js", ScriptRenderModes.HeaderTop); // check if jquery loaded - if it didn't we're not online string scriptCheck = @"if (typeof jQuery != 'object') document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));"; string jQueryUrl = ClientScriptProxy.Current.GetWebResourceUrl(this, typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE); ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "jquery_register", string.Format(scriptCheck,jQueryUrl),true, ScriptRenderModes.HeaderTop); // Load jquery-ui from cdn ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources), "http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js", ScriptRenderModes.Header); // check if we need to load from local string jQueryUiUrl = ResolveUrl("~/scripts/jquery-ui-custom.min.js"); ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "jqueryui_register", string.Format(scriptCheck, jQueryUiUrl), true, ScriptRenderModes.Header); // Create script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "$().ready( function() { alert('hello'); });", true, ScriptRenderModes.Header); which in turn generates this HTML: <html xmlns="http://www.w3.org/1999/xhtml" > <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof jQuery != 'object') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=DIykvYhJ_oXCr-TA_dr35i4AayJoV1mgnQAQGPaZsoPM2LCdvoD3cIsRRitHKlKJfV5K_jQvylK7tsqO3lQIFw2&t=633979863959332352' type='text/javascript'%3E%3C/script%3E")); </script> <title> </title> <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof jQuery != 'object') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/scripts/jquery-ui-custom.min.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> $().ready(function() { alert('hello'); }); </script> </head> <body> …</body> </html> As you can see there’s a bit more control in this process as you can inject both script includes and script blocks into the document at the top or bottom of the header, plus if necessary at the usual body locations. This is quite useful especially if you create custom server controls that interoperate with script and have certain dependencies. The above is a good example of a useful switchable routine where you can switch where scripts load from by default – the above pulls from Google CDN but a configuration switch may automatically switch to pull from the local development copies if your doing development for example. How does it work? As mentioned the ClientScriptProxy object mimicks many of the ScriptManager script related methods and so provides close API compatibility with it although it contains many additional overloads that enhance functionality. 
It does however work against ScriptManager if it’s available on the page, or Page.ClientScript if it’s not so it provides a single unified frontend to script access. There are however many overloads of the original SM methods like the above to provide additional functionality. The implementation of script header rendering is pretty straight forward – as long as a server header (ie. it has to have runat=”server” set) is available. Otherwise these routines fall back to using the default document level insertions of ScriptManager/ClientScript. Given that there is a server header it’s relatively easy to generate the script tags and code and append them to the header either at the top or bottom. I suspect Microsoft didn’t provide header rendering functionality precisely because a runat=”server” header is not required by ASP.NET so behavior would be slightly unpredictable. That’s not really a problem for a custom implementation however. Here’s the RegisterClientScriptBlock implementation that takes a ScriptRenderModes parameter to allow header rendering: /// <summary> /// Renders client script block with the option of rendering the script block in /// the Html header /// /// For this to work Header must be defined as runat="server" /// </summary> /// <param name="control">any control that instance typically page</param> /// <param name="type">Type that identifies this rendering</param> /// <param name="key">unique script block id</param> /// <param name="script">The script code to render</param> /// <param name="addScriptTags">Ignored for header rendering used for all other insertions</param> /// <param name="renderMode">Where the block is rendered</param> public void RegisterClientScriptBlock(Control control, Type type, string key, string script, bool addScriptTags, ScriptRenderModes renderMode) { if (renderMode == ScriptRenderModes.Inherit) renderMode = DefaultScriptRenderMode; if (control.Page.Header == null || renderMode != ScriptRenderModes.HeaderTop && renderMode != ScriptRenderModes.Header && renderMode != ScriptRenderModes.BottomOfPage) { RegisterClientScriptBlock(control, type, key, script, addScriptTags); return; } // No dupes - ref script include only once const string identifier = "scriptblock_"; if (HttpContext.Current.Items.Contains(identifier + key)) return; HttpContext.Current.Items.Add(identifier + key, string.Empty); StringBuilder sb = new StringBuilder(); // Embed in header sb.AppendLine("\r\n<script type=\"text/javascript\">"); sb.AppendLine(script); sb.AppendLine("</script>"); int? index = HttpContext.Current.Items["__ScriptResourceIndex"] as int?; if (index == null) index = 0; if (renderMode == ScriptRenderModes.HeaderTop) { control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString())); index++; } else if(renderMode == ScriptRenderModes.Header) control.Page.Header.Controls.Add(new LiteralControl(sb.ToString())); else if (renderMode == ScriptRenderModes.BottomOfPage) control.Page.Controls.AddAt(control.Page.Controls.Count-1,new LiteralControl(sb.ToString())); HttpContext.Current.Items["__ScriptResourceIndex"] = index; } Note that the routine has to keep track of items inserted by id so that if the same item is added again with the same key it won’t generate two script entries. Additionally the code has to keep track of how many insertions have been made at the top of the document so that entries are added in the proper order. 
The RegisterScriptInclude method is similar but there’s some additional logic in here to deal with script file references and ClientScriptProxy’s (optional) custom resource handler that provides script compression /// <summary> /// Registers a client script reference into the page with the option to specify /// the script location in the page /// </summary> /// <param name="control">Any control instance - typically page</param> /// <param name="type">Type that acts as qualifier (uniqueness)</param> /// <param name="url">the Url to the script resource</param> /// <param name="ScriptRenderModes">Determines where the script is rendered</param> public void RegisterClientScriptInclude(Control control, Type type, string url, ScriptRenderModes renderMode) { const string STR_ScriptResourceIndex = "__ScriptResourceIndex"; if (string.IsNullOrEmpty(url)) return; if (renderMode == ScriptRenderModes.Inherit) renderMode = DefaultScriptRenderMode; // Extract just the script filename string fileId = null; // Check resource IDs and try to match to mapped file resources // Used to allow scripts not to be loaded more than once whether // embedded manually (script tag) or via resources with ClientScriptProxy if (url.Contains(".axd?r=")) { string res = HttpUtility.UrlDecode( StringUtils.ExtractString(url, "?r=", "&", false, true) ); foreach (ScriptResourceAlias item in ScriptResourceAliases) { if (item.Resource == res) { fileId = item.Alias + ".js"; break; } } if (fileId == null) fileId = url.ToLower(); } else fileId = Path.GetFileName(url).ToLower(); // No dupes - ref script include only once const string identifier = "script_"; if (HttpContext.Current.Items.Contains( identifier + fileId ) ) return; HttpContext.Current.Items.Add(identifier + fileId, string.Empty); // just use script manager or ClientScriptManager if (control.Page.Header == null || renderMode == ScriptRenderModes.Script || renderMode == ScriptRenderModes.Inline) { RegisterClientScriptInclude(control, type,url, url); return; } // Retrieve script index in header int? index = HttpContext.Current.Items[STR_ScriptResourceIndex] as int?; if (index == null) index = 0; StringBuilder sb = new StringBuilder(256); url = WebUtils.ResolveUrl(url); // Embed in header sb.AppendLine("\r\n<script src=\"" + url + "\" type=\"text/javascript\"></script>"); if (renderMode == ScriptRenderModes.HeaderTop) { control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString())); index++; } else if (renderMode == ScriptRenderModes.Header) control.Page.Header.Controls.Add(new LiteralControl(sb.ToString())); else if (renderMode == ScriptRenderModes.BottomOfPage) control.Page.Controls.AddAt(control.Page.Controls.Count-1, new LiteralControl(sb.ToString())); HttpContext.Current.Items[STR_ScriptResourceIndex] = index; } There’s a little more code here that deals with cleaning up the passed in Url and also some custom handling of script resources that run through the ScriptCompressionModule – any script resources loaded in this fashion are automatically cached based on the resource id. Raw urls extract just the filename from the URL and cache based on that. All of this to avoid doubling up of scripts if called multiple times by multiple instances of the same control for example or several controls that all load the same resources/includes. 
Finally RegisterClientScriptResource utilizes the previous method to wrap the WebResourceUrl as well as some custom functionality for the resource compression module: /// <summary> /// Returns a WebResource or ScriptResource URL for script resources that are to be /// embedded as script includes. /// </summary> /// <param name="control">Any control</param> /// <param name="type">A type in assembly where resources are located</param> /// <param name="resourceName">Name of the resource to load</param> /// <param name="renderMode">Determines where in the document the link is rendered</param> public void RegisterClientScriptResource(Control control, Type type, string resourceName, ScriptRenderModes renderMode) { string resourceUrl = GetClientScriptResourceUrl(control, type, resourceName); RegisterClientScriptInclude(control, type, resourceUrl, renderMode); } /// <summary> /// Works like GetWebResourceUrl but can be used with javascript resources /// to allow using of resource compression (if the module is loaded). /// </summary> /// <param name="control"></param> /// <param name="type"></param> /// <param name="resourceName"></param> /// <returns></returns> public string GetClientScriptResourceUrl(Control control, Type type, string resourceName) { #if IncludeScriptCompressionModuleSupport // If wwScriptCompression Module through Web.config is loaded use it to compress // script resources by using wcSC.axd Url the module intercepts if (ScriptCompressionModule.ScriptCompressionModuleActive) { string url = "~/wwSC.axd?r=" + HttpUtility.UrlEncode(resourceName); if (type.Assembly != GetType().Assembly) url += "&t=" + HttpUtility.UrlEncode(type.FullName); return WebUtils.ResolveUrl(url); } #endif return control.Page.ClientScript.GetWebResourceUrl(type, resourceName); } This code merely retrieves the resource URL and then simply calls back to RegisterClientScriptInclude with the URL to be embedded which means there’s nothing specific to deal with other than the custom compression module logic which is nice and easy. What else is there in ClientScriptProxy? ClientscriptProxy also provides a few other useful services beyond what I’ve already covered here: Transparent ScriptManager and ClientScript calls ClientScriptProxy includes a host of routines that help figure out whether a script manager is available or not and all functions in this class call the appropriate object – ScriptManager or ClientScript – that is available in the current page to ensure that scripts get embedded into pages properly. This is especially useful for control development where controls have no control over the scripting environment in place on the page. RegisterCssLink and RegisterCssResource Much like the script embedding functions these two methods allow embedding of CSS links. CSS links are appended to the header or to a form declared with runat=”server”. LoadControlScript Is a high level resource loading routine that can be used to easily switch between different script linking modes. It supports loading from a WebResource, a url or not loading anything at all. This is very useful if you build controls that deal with specification of resource urls/ids in a standard way. Check out the full Code You can check out the full code to the ClientScriptProxyClass here: ClientScriptProxy.cs ClientScriptProxy Documentation (class reference) Note that the ClientScriptProxy has a few dependencies in the West Wind Web Toolkit of which it is part of. 
ControlResources holds a few standard constants and script resource links and the ScriptCompressionModule which is referenced in a few of the script inclusion methods. There’s also another useful ScriptContainer companion control  to the ClientScriptProxy that allows scripts to be placed onto the page’s markup including the ability to specify the script location and script minification options. You can find all the dependencies in the West Wind Web Toolkit repository: West Wind Web Toolkit Repository West Wind Web Toolkit Home Page© Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  JavaScript  
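    As a quick illustration of how a control might consume these methods, here is a minimal sketch of a custom server control registering its scripts through ClientScriptProxy. The control class, the plug-in resource id, the jQuery plug-in call and the using namespace are hypothetical placeholders; only the method signatures and the ControlResources constants follow the examples above.

    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;
    using Westwind.Web.Controls;   // assumed namespace for ClientScriptProxy / ControlResources

    public class FancyListControl : WebControl   // hypothetical control
    {
        protected override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);

            ClientScriptProxy proxy = ClientScriptProxy.Current;

            // Core library goes to the very top of the header so plug-ins can rely on it
            proxy.RegisterClientScriptResource(this, typeof(ControlResources),
                ControlResources.JQUERY_SCRIPT_RESOURCE, ScriptRenderModes.HeaderTop);

            // Plug-in script (hypothetical resource id) renders below the core library
            proxy.RegisterClientScriptResource(this, typeof(FancyListControl),
                "MyControls.Resources.fancylist.js", ScriptRenderModes.Header);

            // Startup code hooked to jQuery's load event instead of RegisterStartupScript
            proxy.RegisterClientScriptBlock(this, typeof(FancyListControl),
                "fancylist_init",
                "$().ready(function() { $('#" + this.ClientID + "').fancyList(); });",
                true, ScriptRenderModes.Header);
        }
    }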

    Read the article

  • Using JCA Adapter with OSB 11.1.1.3

    - by James Taylor
    In OSB 10g to use the JCA adapters you were required to use JDeveloper to create the necessary WSDLs and XSDs etc using the associated adapter wizard. These files were imported into Oracle Workshop (Eclipse) and used to create the business service as you would any other web service. In 11g unfortunately JDeveloper is still required. The process has changed slightly as described below. As an example I have used the JCA DB adapter as an example. Start JDeveloper 11.1.1.3 Create a new SOA Application Create a new SOA Project and call it DBAdapters. Choose the Empty Composite Template Drag a Database Adapter Component to the External References panel on the composite. Provide a service name. Create a new database connection, or use an existing one Take note of the JNDI Name, e.g. eis/DB/MyConnection This will be used to configure the DB connection in the WebLogic Console. In my example I use a stored procedure, but you can use what ever operation you require. Please refer to the following link for other options: User's Guide for Technology Adapters Select a schema and stored procedure Once the procedure has been selected, accept the defaults and finish. Startup your OEPE version of Eclipse. Create a new Oracle Service Bus Configuration Project (you can use an existing project if you have one) Create a new Oracle Service Bus Project in the configuration project created above. Instead of importing the WSDL and XSD files you import the jca file created in JDeveloper. In Eclipse right click the Oracle Service Bus Project and select Import –> Import    Choose File System Browse to the directory where JDeveloper stores its project Select the jca, wsdl, and xsd files based on the service you created in step 5. Also check the ‘Create selected folders only’ radio button. When you import you may have a little red x indicating the files are invalid. This is due to the location of the files. Open the invalid files and fix the path in relation to where you store your files in the OSB project.   Once you have the files all valid, Right-Click the jca file and select Oracle Service Bus –> Generate Service. This will create a new Business Service. In the WebLogic Console configure the JNDI name defined in step 7. You can now deploy your project and test

    Read the article

  • Game Design Dilemma

    - by Chris Williams
    I'm working on a 2d tilemapped RPG. I've actually made quite a fair amount of progress, but I'm at a point where I need to make a UI decision. I have the overland world completely mapped out, and I have several towns and special areas. I'm on the fence about how to integrate the two. Scenario 1: I have one ginormous map, where everything is the same scale. This means you can walk in and out of towns without having to load or wait and transition in any way. With everything the same scale, movement costs the same no matter where you are (in terms of time/turns/energy/hunger/whatever/etc...)  The potential downside to this is that it could take quite a long time to get anywhere on foot. Scenario 2: I have an overland map, a set of town maps, overland tactical maps, dungeon maps & special area maps. The overland map is at a different scale than the other maps. This means that time/turns/energy/hunger/whatever/etc is calculated at a different rate than on the other maps, which have a 1:1 scale. When entering a town, dungeon, special area or having a random encounter, you would effectively zoom in from the overland scale to the tactical scale. When you are done with combat, or exit a dungeon or town, it would zoom back out to the overland map. The downside to this is that at the zoomed out scale, the overland map isn't all that big (comparitively) and you can traverse it fairly quickly (in real time, not game world time.) Options: 1) Go with scenario 1, as is. 2) Go with scenario 1 and introduce a slightly speedier version of overland travel, such as a horse. 3) Go with scenario 1 and introduce "instant" travel, via portals or some kind of "click the big map" mechanism. This would only work with places you've already been, or somehow unlocked (perhaps via a quest.) 4) Go with Scenario 2, as is.   Thoughts, opinions, suggestions?  Feedback appreciated.

    Read the article

  • SQL SERVER – Introduction to Adaptive ETL Tool – How adaptive is your ETL?

    - by pinaldave
    I am often reminded of the fact that BI/data warehousing infrastructure is very brittle and not very adaptive to change. There are lots of basic use cases where data needs to be frequently loaded into SQL Server or another database. What I have found is that as long as the sources and targets stay the same, SSIS or any other ETL tool for that matter does a pretty good job handling these types of scenarios. But what happens when you are faced with more challenging scenarios, where the data formats and possibly the data types of the source data are changing from customer to customer?  Let’s examine a real life situation where a health management company receives claims data from their customers in various source formats. Even though this company supplied all their customers with the same claims forms, they ended up building one-off ETL applications to process the claims for each customer. Why, you ask? Well, it turned out that the claims data from various regional hospitals they needed to process had slightly different data formats, e.g. “integer” versus “string” data field definitions.  Moreover the data itself was represented with slight nuances, e.g. “0001124” or “1124” or “0000001124” to represent a particular account number, which forced them, as I alluded to above, to build new ETL processes for each customer in order to overcome the inconsistencies in the various claims forms.  As a result, they experienced a lot of redundancy in these ETL processes and recognized quickly that their system would become more difficult to maintain over time. So imagine for a moment that you could use an ETL tool that helps you abstract the data formats so that your ETL transformation process becomes more reusable. Imagine that one claims form represents a data item as a string – acc_no(varchar) – while a second claims form represents the same data item as an integer – account_no(integer). This would break your traditional ETL process as the data mappings are hard-wired.  But in a world of abstracted definitions, all you need to do is create parallel data mappings to a common data representation used within your ETL application; that is, map both external data fields to a common attribute whose name and type remain unchanged within the application. acc_no(varchar) is mapped to account_number(integer) (expressor Studio, first claim form schema mapping); account_no(integer) is also mapped to account_number(integer) (expressor Studio, second claim form schema mapping). All the data processing logic that follows manipulates the data as an integer value named account_number. Well, these are the kind of problems that the expressor data integration solution automates for you.  I’ve been following them since last year and encourage you to check them out by downloading their free expressor Studio ETL software. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ETL, SSIS
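    expressor Studio does this mapping declaratively in its designer, but the underlying idea is easy to show in plain code. The sketch below is not expressor's API (the class and method names are made up); it just illustrates mapping both source representations onto one common, strongly typed attribute so the downstream logic never sees the per-customer differences.

    using System;

    public sealed class NormalizedClaim
    {
        // Common representation used by all downstream transformations: account_number(integer)
        public int AccountNumber { get; set; }
    }

    public static class ClaimMappers
    {
        // First claim form: acc_no arrives as a varchar such as "0001124"
        public static NormalizedClaim FromFirstForm(string accNo) =>
            new NormalizedClaim { AccountNumber = int.Parse(accNo) };

        // Second claim form: account_no already arrives as an integer such as 1124
        public static NormalizedClaim FromSecondForm(int accountNo) =>
            new NormalizedClaim { AccountNumber = accountNo };
    }

    Both "0001124" and 1124 end up as the same AccountNumber value, so the transformation logic after the mapping step only has to be written once.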

    Read the article

  • Blender mesh mirroring screws up normals when importing in Unity

    - by Shivan Dragon
    My issue is as follows: I've modeled a robot in Blender 2.6. It's a mech-like biped or if you prefer, it kindda looks like a chicken. Since it's symmetrical on the XZ plane, I've decided to mirror some of its parts instead of re-modeling them. Problem is, those mirrored meshes look fine in Blender (faces all show up properly and light falls on them as it should) but in Unity faces and lighting on those very same mirrored meshes is wrong. What also stumps me is the fact that even if I flip normals in Blender, I still get bad results in Unity for those meshes (though now I get different bad results than before). Here's the details: Here's a Blender screen shot of the robot. I've took 2 pictures and slightly rotated the camera around so the geometry in question can be clearly seen: Now, the selected cog-wheel-like piece is the mirrored mesh obtained from mirroring the other cog-wheel on the other (far) side of the robot torso. The back-face culling is turned of here, so it's actually showing the faces as dictated by their normals. As you can see it looks ok, faces are orientated correctly and light falls on it ok (as it does on the original cog-wheel from which it was mirrored). Now if I export this as fbx using the following settings: and then import it into Unity, it looks all screwy: It looks like the normals are in the wrong direction. This is already very strange, because, while in Blender, the original cog-wheel and its mirrored counter part both had normals facing one way, when importing this in Unity, the original cog-wheel still looks ok (like in Blender) but the mirrored one now has normals inverted. First thing I've tried is to go "ok, so I'll flip normals in Blender for the mirrored cog-wheel and then it'll display ok in Unity and that's that". So I went back to Blender, flipped the normals on that mesh, so now it looks bad in Blender: and then re-exported as fbx with the same settings as before, and re-imported into Unity. Sure enough the cog-wheel now looks ok in Unity, in the sense where the faces show up properly, but if you look closely you'll notice that light and shadows are now wrong: Now in Unity, even though the light comes from the back of the robot, the cog-wheel in question acts as if light was coming from some-where else, its faces which should be in shadow are lit up, and those that should be lit up are dark. Here's some things I've tried and which didn't do anything: in Blender I tried mirroring the mesh in 2 ways: first by using the scale to -1 trick, then by using the mirroring tool (select mesh, hit crtl-m, select mirror axis), both ways yield the exact same result in Unity I've tried playing around with the prefab import settings like "normals: import/calculate", "tangents: import/calculate" I've also tired not exporting as fbx manually from Blender, but just dropping the .blend file in the assets folder inside the Unity project So, my question is: is there a way to actually mirror a mesh in Blender and then have it imported in Unity so that it displays properly (as it does in Blender)? If yes, how? Thank you, and please excuse the TL;DR style.
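    One common explanation for this exact combination of symptoms is that mirroring (a negative scale) reverses the triangle winding order, so after the FBX round-trip the winding and the normals no longer agree. If fixing it in Blender (apply the mirror, then recalculate normals before export) is not an option, a Unity-side workaround worth trying is to flip the winding of the affected meshes at import time and rebuild the normals from it. The sketch below is only an assumption of what might help here: the AssetPostprocessor hook is real Unity API, but the name filter and the choice of which meshes to touch are placeholders you would adapt.

    using UnityEditor;
    using UnityEngine;

    public class MirroredMeshPostprocessor : AssetPostprocessor
    {
        // Runs after a model is imported; flips the winding of meshes we know are
        // mirrored and rebuilds their normals so culling and lighting agree again.
        void OnPostprocessModel(GameObject root)
        {
            // Assumed convention: only touch assets whose path marks them as mirrored.
            if (!assetPath.ToLower().Contains("mirrored"))
                return;

            foreach (MeshFilter filter in root.GetComponentsInChildren<MeshFilter>())
            {
                Mesh mesh = filter.sharedMesh;
                if (mesh == null)
                    continue;

                int[] tris = mesh.triangles;
                for (int i = 0; i < tris.Length; i += 3)
                {
                    int tmp = tris[i + 1];       // swap two indices of each triangle
                    tris[i + 1] = tris[i + 2];   // to reverse the winding order
                    tris[i + 2] = tmp;
                }
                mesh.triangles = tris;

                mesh.RecalculateNormals();       // derive normals from the new winding
                mesh.RecalculateBounds();
            }
        }
    }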

    Read the article

  • Demonstration VM BIC2g 2013-10 Partner Edition for Download

    - by Mike.Hallett(at)Oracle-BI&EPM
    UPDATED ! The “BIC2g” demo VM (now version 2013-10) is downloadable from our BIC2g Beehive Online Workspace portal for OPN member partners. Compared to the prior version, Bic2g 2013-04, the new Bic2g 2013-10 has: OBIEE was upgraded from 11.1.1.7.0 to 11.1.1.7.1. with BI Mobile Application Designer (BIMAD) added. TimesTen was upgraded from 11.2.2.3.0 to 11.2.2.5.0 ODI Client 11.1.1.7.0 was installed, including a Standalone agent and empty repositories, and the BI Applications 11g ODI Repositories were included (BIAPPS_11g) Informatica and DAC were removed from image There are additional demos for BI-Apps and for Endeca. The compact deployment of EPM is installed and configured, including: Hyperion Foundation, Essbase, Essbase Studio, Essbase Administration Services, Provider Services, Calculation Manager, Planning, Reporting and Analysis, Financial Reporting, Web Analysis, and Workspace. The access details, for OPN member partners only, to get added to the BIC2G Beehive Online Workspace portal are shown from this page @ BI Solutions Engineering Partner Portal. This Oracle Business Intelligence Linux VM virtual appliance (“BIC2g”) was developed to support Oracle OBI, BI-Apps and EPM Hyperion sales and Oracle partners in product demonstrations, training activities and POC activities.  If you do not need BI-Apps, then there is a slightly smaller VM OBI Sample-App you can get from OTN: see @ Oracle BI and EPM Demonstration SampleApp V309 on OTN.

    Read the article

  • SQL SERVER – Various Leap Year Logics

    - by pinaldave
    Earlier I wrote one article on Leap Year and created one video about Leap Year. My point of view was to demonstrate how we can use SQL Server 2012 features to identify a leap year. However, the post sparked some really good discussion. Here are updates for those who have missed reading the excellent comments on the blog. Incorrect Logic Many people still think a leap year is an event that happens consistently every four years, and that the way to find it is to divide the year by 4: if the remainder is 0, that year is a leap year. Well, that is not correct. Comment by David Bridge Check out this excerpt from the wikipedia page http://en.wikipedia.org/wiki/Leap_year “most years that are evenly divisible by 4 are leap years…” “…Some exceptions to this rule are required since the duration of a solar year is slightly less than 365.25 days. Years that are evenly divisible by 100 are not leap years, unless they are also evenly divisible by 400, in which case they are leap years. For example, 1600 and 2000 were leap years, but 1700, 1800 and 1900 were not. Similarly, 2100, 2200, 2300, 2500, 2600, 2700, 2900 and 3000 will not be leap years, but 2400 and 2800 will be.” If you use the divide-by-4-and-remainder-is-0 logic to find a leap year, you may end up with inaccurate results. The correct way to identify a leap year is to figure out the number of days in February: if the count is 29, the year is for sure a leap year. Valid Alternate Solutions Comment by sainswor99insworth IIF((@Year%4=0 AND @Year%100 != 0) OR @Year%400=0, 1,0) Comment by Madhivanan Madhivanan has written a blog post about a year ago where he listed multiple ways to find a leap year. Comment by Jayan DECLARE @year INT SET @year = 2012 IF (((@year % 4 = 0) AND (@year % 100 != 0)) OR (@year % 400 = 0)) PRINT '1' ELSE PRINT '0' Comment by David DECLARE @Year INT = 2012 SELECT ISDATE('2/29/' + CAST(@Year AS CHAR(4))) Comment by David Bridge Incidentally – Another approach would be to take one day off March 1st and see if it is 29. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
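    For comparison outside of T-SQL, here is the same divisible-by-400 rule in C#, cross-checked against the .NET framework's built-in DateTime.IsLeapYear and against the days-in-February approach described above. This is just a sanity-check sketch, not part of the original post.

    using System;

    static class LeapYearCheck
    {
        // The full rule: divisible by 4, except centuries, except every 400 years.
        static bool IsLeapYear(int year) =>
            (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;

        static void Main()
        {
            foreach (int year in new[] { 1600, 1700, 1900, 2000, 2012, 2100 })
            {
                bool byRule      = IsLeapYear(year);
                bool byFramework = DateTime.IsLeapYear(year);
                bool byFebruary  = DateTime.DaysInMonth(year, 2) == 29;
                Console.WriteLine("{0}: {1} {2} {3}", year, byRule, byFramework, byFebruary);
            }
        }
    }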

    Read the article

  • Make the Firefox Awesome Bar Semi-Transparent Like Google Chrome

    - by Matthew Guay
    Would you like to make the Firefox Awesome Bar drop-down menu semi-transparent like in Google Chrome?  Here’s a quick trick that can make your Firefox Awesome Bar a bit more awesome. When you type an address or search query into the address bar in Google Chrome, the drop-down list of history and search suggestions that appears is slightly transparent.  Nothing extreme, but it adds a nice touch. Firefox’s Awesome bar, on the other hand, is fully opaque by default. We can change that with a simple change.  Exit Firefox, then open your Firefox profile folder by entering the following in the address bar in Explorer or in the Run command: %appdata%\Mozilla\Firefox\Profiles\ Open the default folder, and then open the Chrome folder in it. Now, open the userChrome.css file in an editor such as Notepad.  If you do not have a userChrome.css file, open the userChrome-example.css file instead. Now, add the following to the end of the file: #PopupAutoCompleteRichResult[type="autocomplete-richlistbox"]{    opacity: 0.9 !important;} You can change the opacity value, but 0.9 seemed the closest to Chrome’s transparency while keeping the text readable. Save the file as userChrome.css in that same folder.  If you’re editing with Notepad, make sure to select to save as All Files so the file won’t be saved with a .txt extension. Open Firefox, and now your Awesome Bar’s drop-down list will be transparent.  Actually, it may look even more awesome than Google Chrome’s address bar! Conclusion With this simple trick, you can make your Firefox Awesome bar a bit more awesome.  With tweaks like this, it’s no wonder Firefox is still so popular. Special thanks to Daniel Spiewak for the tip!

    Read the article

  • Move Window Buttons Back to the Right in Ubuntu 10.04

    - by Trevor Bekolay
    One of the more controversial changes in the Ubuntu 10.04 beta is the Mac OS-inspired change to have window buttons on the left side. We’ll show you how to move the buttons back to the right. Before While the change may or may not persist through to the April 29 release of Ubuntu 10.04, in the beta version the maximize, minimize, and close buttons appear in the top left of a window. How to move the window buttons The window button locations are dictated by a configuration file. We’ll use the graphical program gconf-editor to change this configuration file. Press Alt+F2 to bring up the Run Application dialog box, enter “gconf-editor” in the text field, and click on Run. The Configuration Editor should pop up. The key that we want to edit is in apps/metacity/general. Click on the + button next to the “apps” folder, then beside “metacity” in the list of folders expanded for apps, and then click on the “general” folder. The button layout can be changed by changing the “button_layout” key. Double-click button_layout to edit it. Change the text in the Value text field to: menu:maximize,minimize,close Click OK and the change will occur immediately, changing the location of the window buttons in the Configuration Editor. Note that this ordering of the window buttons is slightly different than the typical order; in previous versions of Ubuntu and in Windows, the minimize button is to the left of the maximize button. You can change the button_layout string to reflect that ordering, but using the default Ubuntu 10.04 theme, it looks a bit strange. If you plan to change the theme, or even just the graphics used for the window buttons, then this ordering may be more natural to you. After After this change, all of your windows will have the maximize, minimize, and close buttons on the right. What do you think of Ubuntu 10.04’s visual change? Let us know in the comments!

    Read the article

  • Should I choose Doctrine 2 or Propel 1.5/1.6, and why?

    - by Billy ONeal
    I'd like to hear from those who have used Doctrine 2 (or later) and Propel 1.5 (or later). Most comparisons between these two object relational mappers are based on old versions -- Doctrine 1 versus Propel 1.3/1.4, and both ORMs went through significant redesigns in their recent revisions. For example, most of the criticism of Propel seems to center around the "ModelName Peer" classes, which are deprecated in 1.5 in any case. Here's what I've accumulated so far (And I've tried to make this list as balanced as possible...): Propel Pros Extremely IDE friendly, because actual code is generated, instead of relying on PHP magic methods. This means IDE features like code completion are actually helpful. Fast (In terms of database usage -- no runtime introspection is done on the database) Clean migration between schema versions (at least in the 1.6 beta) Can generate PHP 5.3 models (i.e. namespaces) Easy to chain a lot of things into a single database query with things like useXxx methods. (See the "code completion" video above) Cons Requires an extra build step, namely building the model classes. Generated code needs rebuilt whenever Propel version is changed, a setting is changed, or the schema changes. This might be unintuitive to some and custom methods applied to the model are lost. (I think?) Some useful features (i.e. version behavior, schema migrations) are in beta status. Doctrine Pros More popular Doctrine Query Language can express potentially more complicated relationships between data than easily possible with Propel's ActiveRecord strategy. Easier to add reusable behaviors when compared with Propel. DocBlock based commenting for building the schema is embedded in the actual PHP instead of a separate XML file. Uses PHP 5.3 Namespaces everywhere Cons Requires learning an entirely new programming language (Doctrine Query Language) Implemented in terms of "magic methods" in several places, making IDE autocomplete worthless. Requires database introspection and thus is slightly slower than Propel by default; caching can remove this but the caching adds considerable complexity. Fewer behaviors are included in the core codebase. Several features Propel provides out of the box (such as Nested Set) are available only through extensions. Freakin' HUGE :) This I have gleaned though only through reading the documentation available for both tools -- I've not actually built anything yet. I'd like to hear from those who have used both tools though, to share their experience on pros/cons of each library, and what their recommendation is at this point :)

    Read the article

  • Indie Software Developers - How do I handle taxes?

    - by Connor
    I apologize if this is the wrong site to post on, perhaps someone could point me to the proper place if it is not. Hello, I am 17 years old and currently develop applications/games for Android and iPhone as well as develop internet websites and code a variety of my own projects. I have been very fortunate and have made a large amount of money and continue to make money online to the point where I do not need a stable job, though I'd like to get one after college. I've never held a job anywhere, and have never had to pay taxes. I'm coming into a lot of issues and I am quite confused. I get money from MANY sources- 15 different advertisement networks(!), 4 different payment processors, 5 different affiliate networks and a variety of other sources. All of them pay to different places and at different times (checking account, PayPal, reloadable debit card, ect.) I essentially have a list in a Notepad with names and login information for each source. I have also created a PHP script that uses cURL to grab all the revenue from each service, add it all up, then text me every few hours so I can keep track. It's a mess, but it's working OK, and I can create custom reports (for IRS?). But enough of that, my questions are about taxes in the US, and how indie developers handle it all. I'm at slightly over $250k so far this year, with negligible earnings last year. I have it all stockpiled in a bank account and haven't touched it, I'm a bit scared to. What do I file as? A sole proprietor, a business, just a regular person? How can I handle all of the different revenue sources? (AdSense, CJ, LinkShare) So far none of them have sent me any paperwork on taxes and I've read that I'm supposed to pay taxes quarterly? Do I need paperwork from EACH source to file? Or can I just say I got $x total and that'd be it? What percentage do you pay of total earnings? Average? Should I create an LLC? A corporation? Or stay as a developer? What would be the cheapest options? Could I go to jail? I haven't touched the money except a few dollars to help my parents pay the mortgage once. Any insight would be great. My parents have no idea what I should do, both have no forms of higher education and both have no high school diploma's. They just live day by day with simple jobs. I appreciate any help or experience with this.

    Read the article

  • Recording Topics manually and automatically

    - by maria.cozzolino(at)oracle.com
    When you are recording UPK topics, the default mode for recording is manual recording, where you tell the system when to record each screen shot. This mode allows you to take the exact screen shot you need. However, it does get a bit tedious when you are recording long topics, especially if you forget to take a few screen shots. In UPK 3.5, a new version of recording was introduced - Automatic Recording. It was designed to simplify the recording process by automatically capturing screen shots as you perform your transaction. If you haven't experimented with Automatic Recording, I'd recommend you give it a try - it might make your recording life easier. If you are recording with sound, you can also narrate your topic while recording it. To turn on Automatic Recording: 1. In Tools/Options, there are two recorder tabs. The first tab, under content defaults, includes settings that you may want to share between developers, like whether keyboard shortcuts are automatically captured. 2. The second tab is the one that contains the personal preferences, like screen shot capture key and whether to record automatically or manually. On this tab, choose the option for Automatic Recording. 3. Save the settings. Note that this setting will NOT impact content defaults; this is for your user only. When you launch the recorder, you will notice a slightly different message with guidance on how to start and stop automatic recording. Once you start recording, the recorder window is hidden until the end of the recording session to allow you to capture your transaction. In the task tray, there is a series of icons that let you know that you are capturing content. You can pause the recording, as well as set and view your sound levels if you are using sound. A camera appears during each screen capture to help you know when the system is capturing a screen shot, and a context indicator appears to show the recognition. With automatic recording, you can let the system capture the necessary screen shots. It may provide a more natural recording experience, and is probably easier for the untrained developer. On the other hand, you have a bit more control with manual recording on which screen shot appears, but it also means you have to remember to capture the screen shot. :) We'd be interested in hearing which type of recording you do, and any rationale on why you made that choice. Please comment and let us know. --Maria Cozzolino, Manager of UPK Software Requirements and UI Design

    Read the article

  • Space partitioning when everything is moving

    - by Roy T.
    Background Together with a friend I'm working on a 2D game that is set in space. To make it as immersive and interactive as possible we want there to be thousands of objects freely floating around, some clustered together, others adrift in empty space. Challenge To unburden the rendering and physics engine we need to implement some sort of spatial partitioning. There are two challenges we have to overcome. The first challenge is that everything is moving so reconstructing/updating the data structure has to be extremely cheap since it will have to be done every frame. The second challenge is the distribution of objects, as said before there might be clusters of objects together and vast bits of empty space and to make it even worse there is no boundary to space. Existing technologies I've looked at existing techniques like BSP-Trees, QuadTrees, kd-Trees and even R-Trees but as far as I can tell these data structures aren't a perfect fit since updating a lot of objects that have moved to other cells is relatively expensive. What I've tried I made the decision that I need a data structure that is more geared toward rapid insertion/update than on giving back the least amount of possible hits given a query. For that purpose I made the cells implicit so each object, given it's position, can calculate in which cell(s) it should be. Then I use a HashMap that maps cell-coordinates to an ArrayList (the contents of the cell). This works fairly well since there is no memory lost on 'empty' cells and its easy to calculate which cells to inspect. However creating all those ArrayLists (worst case N) is expensive and so is growing the HashMap a lot of times (although that is slightly mitigated by giving it a large initial capacity). Problem OK so this works but still isn't very fast. Now I can try to micro-optimize the JAVA code. However I'm not expecting too much of that since the profiler tells me that most time is spent in creating all those objects that I use to store the cells. I'm hoping that there are some other tricks/algorithms out there that make this a lot faster so here is what my ideal data structure looks like: The number one priority is fast updating/reconstructing of the entire data structure Its less important to finely divide the objects into equally sized bins, we can draw a few extra objects and do a few extra collision checks if that means that updating is a little bit faster Memory is not really important (PC game)
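    For reference, here is a sketch of that implicit-cell spatial hash in C# (the poster's code is Java; C# is used here for consistency with the other snippets on this page). The class name, cell size handling and pooling scheme are illustrative assumptions. The point is that cells are never stored explicitly and the per-cell lists are recycled between frames instead of being reallocated, which is where the profiler said the time was going.

    using System.Collections.Generic;

    public class SpatialHash<T>
    {
        private readonly float cellSize;
        private readonly Dictionary<(int, int), List<T>> cells = new Dictionary<(int, int), List<T>>();
        private readonly Stack<List<T>> pool = new Stack<List<T>>();   // recycled cell buffers

        public SpatialHash(float cellSize) { this.cellSize = cellSize; }

        private (int, int) CellOf(float x, float y) =>
            ((int)System.Math.Floor(x / cellSize), (int)System.Math.Floor(y / cellSize));

        // Called once per frame before re-inserting everything: recycle last frame's lists.
        public void Clear()
        {
            foreach (var list in cells.Values) { list.Clear(); pool.Push(list); }
            cells.Clear();
        }

        public void Insert(T item, float x, float y)
        {
            var key = CellOf(x, y);
            if (!cells.TryGetValue(key, out var list))
            {
                list = pool.Count > 0 ? pool.Pop() : new List<T>();
                cells[key] = list;
            }
            list.Add(item);
        }

        // Yields everything in the cell containing (x, y) and its eight neighbours.
        public IEnumerable<T> Query(float x, float y)
        {
            var (cx, cy) = CellOf(x, y);
            for (int dx = -1; dx <= 1; dx++)
                for (int dy = -1; dy <= 1; dy++)
                    if (cells.TryGetValue((cx + dx, cy + dy), out var list))
                        foreach (var item in list) yield return item;
        }
    }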

    Read the article

  • XNA 2D line-of-sight check

    - by bionicOnion
    I'm working on a top-down shooter in XNA, and I need to implement line-of-sight checking. I've come up with a solution that seems to work, but I get the nagging feeling that it won't be efficient enough to do every frame for multiple calls (the game already hiccups slightly at about 10 calls per frame). The code is below, but my general plan was to create a series of rectangles with a width and height of zero to act as points along the sight line, and then check to see if any of these rectangles intersects a ClutterObject (an interface I defined for things like walls or other obstacles) after first screening for any that can't possibly be in the line of sight (i.e. behind the viewer) or are too far away (a concession I made for efficiency). public static bool LOSCheck(Vector2 pos1, Vector2 pos2) { Vector2 currentPos = pos1; Vector2 perMove = (pos2 - pos1); perMove.Normalize(); HashSet<ClutterObject> clutter = new HashSet<ClutterObject>(); foreach (Room r in map.GetRooms()) { if (r != null) { foreach (ClutterObject c in r.GetClutter()) { if (c != null &&!(c.GetRectangle().X * perMove.X < 0) && !(c.GetRectangle().Y * perMove.Y < 0)) { Vector2 cVector = new Vector2(c.GetRectangle().X, c.GetRectangle().Y); if ((cVector - pos1).Length() < 1500) clutter.Add(c); } } } } while (currentPos != pos2 && ((currentPos - pos1).Length() < 1500)) { Rectangle position = new Rectangle((int)currentPos.X, (int)currentPos.Y, 0, 0); foreach (ClutterObject c in clutter) { if (position.Intersects(c.GetRectangle())) return false; } currentPos += perMove; } return true; } I'm sure that there's a better way to do this (or at least a way to make this method more efficient), but I'm not too used to XNA yet, so I figured it couldn't hurt to bring it here. At the very least, is there an efficient to determine which objects may be in front of the viewer with greater precision than the rather broad 90 degree window I've given myself?
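    One common alternative to stepping a zero-sized rectangle along the line is to clip the segment against each obstacle rectangle with a slab test, which makes the cost per obstacle constant instead of one rectangle test per step. The sketch below uses the XNA Vector2 and Rectangle types the post already uses; the class and method names are made up. In LOSCheck, the loop over clutter would simply call SegmentMissesRect(pos1, pos2, c.GetRectangle()) and return false on a hit.

    using Microsoft.Xna.Framework;

    public static class LineOfSight
    {
        // Returns true if the segment p0->p1 passes the rectangle without touching it.
        public static bool SegmentMissesRect(Vector2 p0, Vector2 p1, Rectangle r)
        {
            Vector2 d = p1 - p0;
            float tMin = 0f, tMax = 1f;

            // Clip the segment's parameter range against the X slab, then the Y slab.
            if (!ClipSlab(p0.X, d.X, r.Left, r.Right, ref tMin, ref tMax)) return true;
            if (!ClipSlab(p0.Y, d.Y, r.Top, r.Bottom, ref tMin, ref tMax)) return true;

            return false; // some t in [0,1] lies inside both slabs, so the segment hits the rect
        }

        private static bool ClipSlab(float start, float delta, float min, float max,
                                     ref float tMin, ref float tMax)
        {
            if (System.Math.Abs(delta) < 1e-6f)
                return start >= min && start <= max;  // parallel: either inside the slab or never

            float t0 = (min - start) / delta;
            float t1 = (max - start) / delta;
            if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; }

            tMin = System.Math.Max(tMin, t0);
            tMax = System.Math.Min(tMax, t1);
            return tMin <= tMax;                      // false means the overlap is empty
        }
    }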

    Read the article

  • Beware of const members

    - by nmarun
    I happened to learn a new thing about const today and how one needs to be careful with its usage. Let’s say I have a third-party assembly ‘ConstVsReadonlyLib’ with a class named ConstSideEffect.cs: 1: public class ConstSideEffect 2: { 3: public static readonly int StartValue = 10; 4: public const int EndValue = 20; 5: } In my project, I reference the above assembly as follows: 1: static void Main(string[] args) 2: { 3: for (int i = ConstSideEffect.StartValue; i < ConstSideEffect.EndValue; i++) 4: { 5: Console.WriteLine(i); 6: } 7: Console.ReadLine(); 8: } You’ll see values 10 through 19 as expected. Now, let’s say I receive a new version of the ConstVsReadonlyLib. 1: public class ConstSideEffect 2: { 3: public static readonly int StartValue = 5; 4: public const int EndValue = 30; 5: } If I just drop this new assembly in the bin folder and run the application, without rebuilding my console application, my thinking was that the output would be from 5 to 29. Of course I was wrong… if not you’d not be reading this blog. The actual output is from 5 through 19. The reason is due to the behavior of const and readonly members. To begin with, const is the compile-time constant and readonly is a runtime constant. Next, when you compile the code, a compile-time constant member is replaced with the value of the constant in the code. But, the IL generated when you reference a read-only constant, references the readonly variable, not its value. So, the IL version of the Main method, after compilation actually looks something like: 1: static void Main(string[] args) 2: { 3: for (int i = ConstSideEffect.StartValue; i < 20; i++) 4: { 5: Console.WriteLine(i); 6: } 7: Console.ReadLine(); 8: } I’m no expert with this IL thingi, but when I look at the disassembled code of the exe file (using IL Disassembler), I see the following: I see our readonly member still being referenced by the variable name (ConstVsReadonlyLib.ConstSideEffect::StartValue) in line 0001. Then there’s the Console.WriteLine in line 000b and finally, see the value of 20 in line 0017. This, I’m pretty sure is our const member being replaced by its value which marks the upper bound of the ‘for’ loop. Now you know why the output was from 5 through 19. This definitely is a side-effect of having const members and one needs to be aware of it. While we’re here, I’d like to add a few other points about const and readonly members: const is slightly faster, but is less flexible readonly cannot be declared within a method scope const can be used only on primitive types (numbers and strings) Just wanted to share this before going to bed!

    Read the article

  • Taking fixed direction on hemisphere and project to normal (openGL)

    - by Maik Xhani
    I am trying to perform sampling using a hemisphere around a surface normal. I want to experiment with fixed directions (and maybe jitter slightly between frames). So I have those directions: vec3 sampleDirections[6] = {vec3(0.0f, 1.0f, 0.0f), vec3(0.0f, 0.5f, 0.866025f), vec3(0.823639f, 0.5f, 0.267617f), vec3(0.509037f, 0.5f, -0.700629f), vec3(-0.509037f, 0.5f, -0.700629f), vec3(-0.823639f, 0.5f, 0.267617f)}; now I want the first direction to be projected on the normal and the others accordingly. I tried these 2 codes, both failing. This is what I used for random sampling (it doesn't seem to work well, the samples seem to be biased towards a certain direction) and I just used one of the fixed directions instead of s (here is the code of the random sample; when I used it with the fixed direction I didn't use theta and phi). vec3 CosWeightedRandomHemisphereDirection( vec3 n, float rand1, float rand2 ) { float theta = acos(sqrt(1.0f-rand1)); float phi = 6.283185f * rand2; vec3 s = vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta)); vec3 v = normalize(cross(n,vec3(0.0072, 1.0, 0.0034))); vec3 u = cross(v, n); u = s.x*u; v = s.y*v; vec3 w = s.z*n; vec3 direction = u+v+w; return normalize(direction); } ** EDIT ** This is the new code vec3 FixedHemisphereDirection( vec3 n, vec3 sampleDir) { vec3 x; vec3 z; if(abs(n.x) < abs(n.y)){ if(abs(n.x) < abs(n.z)){ x = vec3(1.0f,0.0f,0.0f); }else{ x = vec3(0.0f,0.0f,1.0f); } }else{ if(abs(n.y) < abs(n.z)){ x = vec3(0.0f,1.0f,0.0f); }else{ x = vec3(0.0f,0.0f,1.0f); } } z = normalize(cross(x,n)); x = cross(n,z); mat3 M = mat3( x.x, n.x, z.x, x.y, n.y, z.y, x.z, n.z, z.z); return M*sampleDir; } So if my n = (0,0,1); and my sampleDir = (0,1,0); shouldn't the M*sampleDir be (0,0,1)? Cause that is what I was expecting.
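    One detail worth double-checking in the posted FixedHemisphereDirection (an observation on the code as shown, not a confirmed fix for the whole problem): GLSL's mat3 constructor consumes scalar arguments in column-major order, so mat3(x.x, n.x, z.x, x.y, n.y, z.y, x.z, n.z, z.z) makes (x.x, n.x, z.x) the first column and therefore builds the transpose of the basis whose columns are x, n and z. The matrix that maps the sample's "up" onto the normal is

        M = \begin{pmatrix} x_x & n_x & z_x \\ x_y & n_y & z_y \\ x_z & n_z & z_z \end{pmatrix},
        \qquad
        M \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} n_x \\ n_y \\ n_z \end{pmatrix} = n,

    which in GLSL is simply mat3(x, n, z). With n = (0, 0, 1) and sampleDir = (0, 1, 0) that version does return (0, 0, 1), while the constructor in the posted code returns (x.y, n.y, z.y) instead.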

    Read the article

  • The Dispose Pattern (and FxCop warnings)

    - by Scott Dorman
    [This is actually a response to Bill’s blog post, but since it isn’t possible to leave this as a comment on his blog it’s a post here.] There are many different ways to implement the Dispose pattern correctly. Some are (in my opinion) better than others. In Bill’s blog post he presents a particular pattern, which is an excerpt from his book (Effective C#). The issue centers around the fact that a reader took the code sample presented in the book and ran FxCop (Code Analysis) on it, which generated a warning: “Ensure that base.Dispose() is always called.” The “lesson learned” that Bill presents is that “tools are there to help us, not control us.” While I completely agree with the belief that tools are there to help us, I think it’s important to understand why FxCop is raising this particular warning. The code presented in Bill’s book looks like: // Have its own disposed flag.private bool disposed = false;protected override void Dispose(bool isDisposing){ // Don't dispose more than once. if (disposed) return; if (isDisposing) { // TODO: free managed resources here. } // TODO: free unmanaged resources here. // Let the base class free its resources. // Base class is responsible for calling // GC.SuppressFinalize( ) base.Dispose(isDisposing); // Set derived class disposed flag: disposed = true;} This code does follow all of the guidelines for implementing the Dispose pattern. In this case, it’s presumably part of a larger example showing how to implement the pattern as part of a base class. The reason FxCop is warning you about this code is the first if statement in the Dispose method, which will cause the method to exit if disposed is true. The problem here is that there is the possibility that if the disposed flag is true, the call to base.Dispose() will never be executed. As Bill points out, it is possible for some other code elsewhere in the class to set this flag. He states that this is an “unlikely occurrence.” While that is probably true, it can be a potentially dangerous assumption to make and is one that can be easily corrected. By changing the code slightly you can remove this assumption and correct the FxCop violation. private bool disposed = false;protected override void Dispose(bool disposing){ if (!disposed) { if (disposing) { // Dispose managed resources. } // Dispose unmanaged resources. disposed = true; } base.Dispose(disposing);} Using this implementation allows the call to base.Dispose() to always occur, which ensures that the disposal chain is always properly followed. Technorati Tags: .NET,C#,Dispose Pattern
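    For completeness, here is a minimal sketch of the base-class half of the pattern that the override above chains into (standard IDisposable guidance, not code from either post; the class name is illustrative):

        // The base class owns the public Dispose() entry point and suppresses finalization
        // once on behalf of the whole inheritance chain.
        public class ResourceHolder : IDisposable
        {
            private bool disposed = false;

            public void Dispose()
            {
                Dispose(true);
                GC.SuppressFinalize(this);
            }

            protected virtual void Dispose(bool disposing)
            {
                if (!disposed)
                {
                    if (disposing)
                    {
                        // Dispose managed resources owned by the base class.
                    }
                    // Free unmanaged resources owned by the base class.
                    disposed = true;
                }
            }

            ~ResourceHolder()
            {
                Dispose(false);
            }
        }

    Derived classes then only override Dispose(bool) — the method shown in the post — and always finish by calling base.Dispose(disposing), which is exactly what the FxCop rule is checking for.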

    Read the article

  • How do you exclude yourself from Google Analytics on your website using cookies?

    - by Keoki Zee
    I'm trying to set up an exclusion filter with a browser cookie, so that my own visits to my site don't show up in my Google Analytics. I tried 3 different methods and none of them have worked so far. I would like help understanding what I am doing wrong and how I can fix this. Method 1 First, I tried following Google's instructions, http://www.google.com/support/analytics/bin/answer.py?hl=en&answer=55481, for excluding traffic by Cookie Content: Create a new page on your domain, containing the following code: <body onLoad="javascript:pageTracker._setVar('test_value');"> Method 2 Next, when that didn't work, I googled around and found this Google thread, http://www.google.com/support/forum/p/Google%20Analytics/thread?tid=4741f1499823fcd5&hl=en, where the most popular answer says to use a slightly different code: SHS Analytics wrote: <body onLoad="javascript:_gaq.push(['_setVar','test_value']);"> Thank you! This has now set a __utmv cookie containing "test_value", whereas the original: pageTracker._setVar('test_value') (which Google is still recommending) did not manage to do that for me (in Mac Safari 5 and Firefox 3.6.8). So I tried this code, but it didn't work for me. Method 3 Finally, I searched StackOverflow and came across this thread, http://stackoverflow.com/questions/3495270/exclude-my-traffic-from-google-analytics-using-cookie-with-subdomain, which suggests that the following code might work: <script type="text/javascript"> var _gaq = _gaq || []; _gaq.push(['_setVar', 'exclude_me']); _gaq.push(['_setAccount', 'UA-xxxxxxxx-x']); _gaq.push(['_trackPageview']); // etc... </script> This script appeared in the head element in the example, instead of in the onload event of the body like in the previous 2 examples. So I tried this too, but still had no luck with trying to exclude myself from Google Analytics. Re-iterate question So, I tried all 3 methods above with no success. Am I doing something wrong? How can I exclude myself from my Google Analytics using an exclusion cookie for my browser?

    Read the article

  • Camera not staying behind model while moving in circle

    - by ChocoMan
    I have a camera behind a model (3rd Person) and I'm having problems KEEPING it behind the model. When I first start my game, you see the back of the model. If the model moves forward, backward or strafes left or right, the camera moves along accordingly. When the model rotates (stationary), the camera rotates accordingly with the model still pointing at the model's back. So far, so good. The problem comes when the player is BOTH moving and rotating at the same time. Take for example a model moving in a circular pattern like running around a track. As the model moves in this motion, the model rotates slightly more with each complete rotation. Eventually, instead of looking at the model's back, you will see the model in a profile view and before you know it, the model's front is facing the camera. And when you stop moving the model, the model stays in that position. So, as long as my model is stationary and rotating in one place, the camera rotates correctly. But as soon as there is any sort of movement while rotating, the model is offset by a mysterious increasing amount. How can I keep the camera maintaining the same view no matter how I move AND rotate at the same time? // Rotates model and pitches camera on its own axis public void modelRotMovement(GamePadState pController) { /* For rotating the model left or right. * Camera maintains distance from model * throughout rotation and if model moves * to a new position. */ Yaw = pController.ThumbSticks.Right.X * MathHelper.ToRadians(speedAngleMAX); AddRotation = Quaternion.CreateFromAxisAngle(Vector3.Up, yaw); //AddRotation = Quaternion.CreateFromYawPitchRoll(Yaw, 0, 0); ModelLoad.MRotation *= AddRotation; MOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation); Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(speedAngleMAX); AddPitch = Quaternion.CreateFromAxisAngle(Vector3.Up, pitch); ModelLoad.CRotation *= AddPitch; COrientation = Matrix.CreateFromQuaternion(ModelLoad.CRotation); } // Orbit (yaw) Camera around model public void cameraYaw(float yaw) { Vector3 yawAngle = ModelLoad.CameraPos - ModelLoad.camTarget; Vector3 axisYaw = Vector3.Up; ModelLoad.CameraPos = Vector3.Transform(yawAngle, Matrix.CreateFromAxisAngle(axisYaw, yaw)) + ModelLoad.camTarget; }
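    One approach that avoids the drift entirely (a sketch, not the poster's code: ModelLoad.MRotation, CameraPos and camTarget come from the question, while ModelPosition, viewMatrix and the offset values are assumptions): recompute the camera position from the model's current orientation every frame, instead of accumulating a separate camera rotation that can fall out of sync with the model.

        // Rebuild the chase camera from the model's orientation each frame so the camera
        // can never drift relative to the model, no matter how movement and rotation mix.
        public void UpdateChaseCamera()
        {
            // Where the camera sits in the model's local space: behind and slightly above.
            Vector3 localOffset = new Vector3(0f, 200f, 600f);

            Matrix modelOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation);

            // Rotate the offset into world space and hang it off the model's position.
            Vector3 worldOffset = Vector3.Transform(localOffset, modelOrientation);
            ModelLoad.CameraPos = ModelLoad.ModelPosition + worldOffset;

            // Always aim at the model.
            ModelLoad.camTarget = ModelLoad.ModelPosition;
            viewMatrix = Matrix.CreateLookAt(ModelLoad.CameraPos, ModelLoad.camTarget, Vector3.Up);
        }

    Yaw input then only ever modifies ModelLoad.MRotation (as modelRotMovement already does), and the camera follows automatically.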

    Read the article

  • Split Internet Explorer into Dual-Panes

    - by Asian Angel
    If you have a wide screen monitor then you may want to make better use of Internet Explorer’s browser window area. Now you can split the browser window into dual-panes as needed with the IE Split browser plugin. Note: Requires .NET Framework 2.0 or higher (link provided below). IE Split in Action If you are using an older version of this software, here is something to keep in mind before upgrading to the 2.0 release. Once you have installed IE Split you will notice a new toolbar added to your browser. As seen here, you can condense it down tightly and access it using the drop-down bar. A closer look at the drop-down bar. Notice the address bar…this will be for the left pane when you split the browser window. Here is our browser split into dual-panes. There are two address bars and two tab/title bars each corresponding to their appropriate pane. It may look slightly backwards at first but is not hard to get used to. A better view of the left pane with the IE Split navigation & title bars showing. Note: The title bar can be hidden if desired. And the right pane. You can also have multiple “split” tabs open if needed. There is nothing quite like getting double the value for the same amount of space. When you no longer need dual-panes open just click on the “x” to close IE Split down. All back to normal again. Conclusion While it might not be for everyone, this can still be useful for those who need side-by-side access to websites without using multiple separate windows. Links Download IE-Split Download the Microsoft .NET Framework 4 (Standalone Installer)

    Read the article

  • StreamInsight 1.0 Released

    - by Roman Schindlauer
    One piece in the set of products offered in SQL Server 2008 R2 that has generated a lot of buzz and interest during its CTP phase is StreamInsight, Microsoft’s platform for Complex Event Processing. Microsoft’s information platform vision provides enterprises with a “complete approach” to managing information assets, enabling all businesses to gain strategic value from information from the desktop to the datacenter to the cloud. And StreamInsight V1 is one essential piece in this spectrum. After more than a year of blood, sweat, tears, and insane amounts of coffee we are proud to release the first version of our Complex Event Processing Framework. Those of you who have been following our Community Technology Previews (CTPs) throughout last year have already had the opportunity to familiarize themselves with the product. Early feedback was not only incredibly positive, but also very constructive and strongly influenced the final feature set. Four notable increments over our last public CTP are: Count windows Non-occurrence detection (Anti-Join) Dynamic query composition at runtime Synchronize time across input streams Additionally, many smaller issues and bugs were addressed. A few APIs slightly changed with respect to the November CTP, but porting your application to RTM should not require a lot of effort. Here are the (English) bits - choosing the evaluation license during setup lets you already play with this version. Before you install, make sure to uninstall any previous CTP version: StreamInsight X86 | StreamInsight X64 Within a few days, we will update our product page and add download links and instructions there as well. The StreamInsight documentation is provided through a help file as part of the installation as well as through Books Online on MSDN. We also invite you to visit the StreamInsight Blog and the StreamInsight Forum, which is a great place to discuss questions and issues with the community and the development team. Regards, Roman

    Read the article

  • Why wearing Jeans is considered unprofessional?

    - by Gopinath
    When I started my career 9 years ago I used to wear casual wear to the office – Jeans & T-Shirts all 5 days. The environment at the workplace during those days encouraged me to be casual and many of my colleagues used to come in Jeans. We had just started our careers and in those days it was perfectly fine to be in casual. As I grew up the ladder, I started feeling the discomfort of wearing Jeans at work. During client visits, senior managers' meetings and consultations I was the odd man in the crowd as the rest of them were in formals. In order to be one among the professionals I was forced to change my dressing style and start wearing formals. But the question of “Why is wearing jeans to the workplace considered unprofessional?” used to linger in my mind till today. I got the answer to my question from a discussion thread on Quora: When they were invented, jeans were associated with blue-collar work. They were meant to get muddy and gross and take lots of abuse without falling apart, even if you wore the same pair every day. The people who bought them were the ones whose lives required durable clothing. And another commenter says… A professional image is critical to cementing business relationships, and part of that is, for right or wrong, how you dress. Jeans are typically associated with "kicking back", relaxation, leisure, informality, and even a slightly rebellious flavor. The style and condition of the jeans are a consideration, as we often wear jeans into advanced states of being worn down, with tearing, etc., that we generally do not do with other clothing items. I agree with this theory even though it may be centuries old. If you want to look like a professional and be treated like a professional it’s better to dress up in formals. These days I make a point to be in formals at the workplace. Not everyone is Steve Jobs to wear a Jean & Turtle Neck T-shirt, right? CC Image credit flickr/exey

    Read the article

  • Problems Rendering Text in OpenGL Using FreeType

    - by Sean M.
    I've been following both the FreeType2 tutorial and the WikiBooks tutorial, trying to combine things from them both in order to load and render fonts using the FreeType library. I used the font loading code from the FreeType2 tutorial and tried to implement the rendering code from the wikibooks tutorial (tried being the keyword as I'm still trying to learn modern OpenGL, I'm using 3.2). Everything loads correctly and I have the shader program to render the text with working, but I can't get the text to render. I'm 99% sure that it has something to do with how I am passing data to the shader, or how I set up the screen. These are the code segments that handle OpenGL initialization, as well as Font initialization and rendering: //Init glfw if (!glfwInit()) { fprintf(stderr, "GLFW Initialization has failed!\n"); exit(EXIT_FAILURE); } printf("GLFW Initialized.\n"); //Process the command line arguments processCmdArgs(argc, argv); //Create the window glfwWindowHint(GLFW_SAMPLES, g_aaSamples); glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2); g_mainWindow = glfwCreateWindow(g_screenWidth, g_screenHeight, "Voxel Shipyard", g_fullScreen ? glfwGetPrimaryMonitor() : nullptr, nullptr); if (!g_mainWindow) { fprintf(stderr, "Could not create GLFW window!\n"); closeOGL(); exit(EXIT_FAILURE); } glfwMakeContextCurrent(g_mainWindow); printf("Window and OpenGL rendering context created.\n"); glClearColor(0.2f, 0.2f, 0.2f, 1.0f); //Are these necessary for Modern OpenGL (3.0+)? glViewport(0, 0, g_screenWidth, g_screenHeight); glOrtho(0, g_screenWidth, g_screenHeight, 0, -1, 1); //Init glew int err = glewInit(); if (err != GLEW_OK) { fprintf(stderr, "GLEW initialization failed!\n"); fprintf(stderr, "%s\n", glewGetErrorString(err)); closeOGL(); exit(EXIT_FAILURE); } printf("GLEW initialized.\n"); Here is the font file (it's slightly too big to post): CFont.h/CFont.cpp Here is the solution zipped up: [solution] (https://dl.dropboxusercontent.com/u/36062916/VoxelShipyard.zip), if anyone feels they need the entire solution. If anyone could take a look at the code, it would be greatly appreciated. Also if someone has a tutorial that is a little more user friendly, that would also be appreciated. Thanks.
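    To answer the question in the snippet's comment (hedged, since the font rendering code itself isn't shown here): glViewport is still required in a 3.2 core context, but glOrtho belongs to the removed fixed-function matrix stack, so it has no effect on a custom text shader in a core profile. The usual replacement is to build the orthographic projection on the CPU and pass it to the text shader as a uniform; for the equivalent of glOrtho(0, w, h, 0, -1, 1) that matrix is

        P = \begin{pmatrix} 2/w & 0 & 0 & -1 \\ 0 & -2/h & 0 & 1 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

    Multiplying each glyph-quad vertex by this matrix in the vertex shader maps pixel coordinates (origin at the top-left) into clip space; if the quads are drawn with an identity projection instead, they typically land off-screen, which looks exactly like "nothing renders".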

    Read the article

  • Advantages and disadvantages of building a single page web application

    - by ryanzec
    I'm nearing the end of a prototyping/proof of concept phase for a side project I'm working on, and trying to decide on some larger scale application design decisions. The app is a project management system tailored more towards the agile development process. One of the decisions I need to make is whether or not to go with a traditional multi-page application or a single page application. Currently my prototype is a traditional multi-page setup, however I have been looking at backbone.js to clean up and apply some structure to my Javascript (jQuery) code. It seems like while backbone.js can be used in multi-page applications, it shines more with single page applications. I am trying to come up with a list of advantages and disadvantages of using a single page application design approach. So far I have: Advantages All data has to be available via some sort of API - this is a big advantage for my use case as I want to have an API to my application anyway. Right now about 60-70% of my calls to get/update data are done through a REST API. Doing a single page application will allow me to better test my REST API since the application itself will use it. It also means that as the application grows, the API itself will grow since that is what the application uses; no need to maintain the API as an add-on to the application. More responsive application - since all data loaded after the initial page is kept to a minimum and transmitted in a compact format (like JSON), data requests should generally be faster, and the server will do slightly less processing. Disadvantages Duplication of code - for example, model code. I am going to have to create models both on the server side (PHP in this case) and the client side in Javascript. Business logic in Javascript - I can't give any concrete examples on why this would be bad but it just doesn't feel right to me having business logic in Javascript that anyone can read. Javascript memory leaks - since the page never reloads, Javascript memory leaks can happen, and I would not even know where to begin to debug them. There are also other things that are kind of double edged swords. For example, with single page applications, the data processed for each request can be a lot less since the application will be asking for the minimum data it needs for the particular request, however it also means that there could be a lot more small requests to the server. I'm not sure if that is a good or bad thing. What are some of the advantages and disadvantages of single page web applications that I should keep in mind when deciding which way I should go for my project?

    Read the article

  • How do I set up nvidia graphics adapter to put out 1080p, it seems to be using interlace mode?

    - by keepitsimpleengineer
    After upgrading to 12.04, my mythbuntu client/server seems to be running in 1080i, the clue comes from: [ 1176.117] (II) NVIDIA(0): Setting mode "1920x1080_60i" [ 1231.340] (II) NVIDIA(0): Setting mode "DFP-1:1920x1080_60@1920x1080+0+0" This is from Xorg.0.log. This whole thing started from video tearing when watching Mythtv recordings. It didn't happen in 10.10. Should I use "TVStandard" "HD1080p" in the screen section since this is a dedicated HTPC? It only connects to an HDTV (1080p) via hdmi. Here is the current xorg.conf file: # nvidia-settings: X configuration file generated by nvidia-settings # nvidia-settings: version 270.29 (buildd@allspice) Fri Feb 25 14:42:07 UTC 2011 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 # commented out by update-manager, HAL is now used and auto-detects devices # Keyboard settings are now read from /etc/default/console-setup # InputDevice "Keyboard0" "CoreKeyboard" # commented out by update-manager, HAL is now used and auto-detects devices # Keyboard settings are now read from /etc/default/console-setup # InputDevice "Mouse0" "CorePointer" Option "Xinerama" "0" EndSection Section "Files" FontPath "unix/:7100" EndSection # commented out by update-manager, HAL is now used and auto-detects devices # Keyboard settings are now read from /etc/default/console-setup #Section "InputDevice" # # generated from default # Identifier "Mouse0" # Driver "mouse" # Option "Protocol" "auto" # Option "Device" "/dev/psaux" # Option "Emulate3Buttons" "no" # Option "ZAxisMapping" "4 5" #EndSection # commented out by update-manager, HAL is now used and auto-detects devices # Keyboard settings are now read from /etc/default/console-setup #Section "InputDevice" # # generated from default # Identifier "Keyboard0" # Driver "kbd" #EndSection Section "Monitor" # HorizSync source: edid, VertRefresh source: edid Identifier "Monitor0" VendorName "Unknown" ModelName "SAMSUNG" HorizSync 26.0 - 81.0 VertRefresh 24.0 - 75.0 Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GT 240" Option "TripleBuffer" "1" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "TwinView" "0" Option "metamodes" "DFP: nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection After a little digging, the question changes slightly, to wit... Per Chapter 19 of nvidia README... "If the EDID for the display device reported a preferred mode timing, and that mode timing is considered a valid mode, then that mode is used as the "nvidia-auto-select" mode." The EDID for my HDMI connected LCD monitor says use first device as preferred. 
Prefer first detailed timing : Yes Also: (--) NVIDIA(0): EDID maximum pixel clock : 230.0 MHz The list: (from startx -- -verbose 6 ) (--) NVIDIA(0): Detailed Timings: (--) NVIDIA(0): 1920 x 1080 @ 60 Hz (--) NVIDIA(0): Pixel Clock : 148.50 MHz (--) NVIDIA(0): HRes, HSyncStart : 1920, 2008 (--) NVIDIA(0): HSyncEnd, HTotal : 2052, 2200 (--) NVIDIA(0): VRes, VSyncStart : 1080, 1084 (--) NVIDIA(0): VSyncEnd, VTotal : 1089, 1125 (--) NVIDIA(0): H/V Polarity : +/+ This is the actual mode selected: (from xorg.0.log) (--) NVIDIA(0): 1920 x 1080 @ 60 Hz (--) NVIDIA(0): Pixel Clock : 74.18 MHz (--) NVIDIA(0): HRes, HSyncStart : 1920, 2008 (--) NVIDIA(0): HSyncEnd, HTotal : 2052, 2200 (--) NVIDIA(0): VRes, VSyncStart : 1080, 1084 (--) NVIDIA(0): VSyncEnd, VTotal : 1094, 1124 (--) NVIDIA(0): H/V Polarity : +/+ (--) NVIDIA(0): Extra : Interlaced (--) NVIDIA(0): CEA Format : 5 So my HTPC is down-converting to 1080i and then the monitor is up-converting to 1080p. How can I fix this, please?
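    One approach that is often suggested for this driver (a sketch to adapt, not a verified fix for this particular setup): name the progressive timing explicitly in the metamodes option instead of letting nvidia-auto-select pick the CEA 1080i mode. Using the mode names the driver itself logs ("1920x1080_60" versus "1920x1080_60i"), the Screen section would look something like:

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "TwinView" "0"
            # Request the 60 Hz progressive timing by name rather than nvidia-auto-select.
            Option         "metamodes" "DFP-1: 1920x1080_60 +0+0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

    After restarting X, Xorg.0.log should then show Setting mode "1920x1080_60" (no trailing "i") if the driver accepted the requested mode.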

    Read the article
