Search Results

Search found 22803 results on 913 pages for 'customer support sr'.


  • The (Totally) Phantom Menace Takes a Hard Look at Star Wars Saber Duels [Video]

    - by Jason Fitzpatrick
    If you feel like the light saber duels in Star Wars are long on choreography and short on actual brutal combat, you’re sure to get a chuckle out of this “training” guide highlighting how not to kill your opponent. [via Neatorama]

    Read the article

  • Oracle Magazine - Deriving and Sharing Business Intelligence Metadata

    - by David Allan
    There is a new Oracle Magazine article titled 'Deriving and Sharing Business Intelligence Metadata' from Oracle ACE director Mark Rittman in the July/August 2010 issue that illustrates the business definitions derived and shared across OWB 11gR2 and OBIEE: http://www.oracle.com/technology/oramag/oracle/10-jul/o40bi.html Thanks to Mark for the time spent producing this. As for OWB, it would have been useful to have reverse engineering capabilities from OBIEE, interesting to have code-template-based support for deploying such business definitions, and powerful to use these objects (logical folders etc.) in the mapping itself.

    Read the article

  • Dart and NetBeans IDE 7.4

    - by Geertjan
    Here's the start of Dart in NetBeans IDE. Basic Dart editing support is done and on saving a Dart file the related JavaScript files are automatically generated. In the context of an HTML5 application in NetBeans IDE, that gives you deep integration with the embedded browser and, even better, Chrome, as well as Chrome Developer Tools. Below, notice that the "Sunflower Spectacular" H1 element is selected (click the image to enlarge it to get a better view), which is therefore highlighted in the live DOM view in the bottom left, as well as in the CSS Styles window in the top right, from where the CSS styles can be edited and from where the related files can be opened in the IDE. Identical features are available for Chrome, as well as on Android and iOS. And if you like that, watch this YouTube movie showing how Chrome Developer Tools integration can fit directly into the workflow below. Anyone want to help get this plugin further? What's needed:

    - Much deeper Dart editing support. Right now only very basic syntax coloring is provided, i.e., an ANTLR lexer is integrated into the NetBeans syntax coloring infrastructure. Parsing, error checking, code completion, and some small code templates are needed.
    - A new panel in the Project Properties dialog on NetBeans HTML5 projects for enabling Dart (i.e., similar to enabling Cordova), at which point the "dart.js" file and other Dart artifacts should be added to the project, so that a Dart project is immediately generated and the application is immediately deployable.
    - Whenever changes are made to a Dart file, Dart should run in the background to create the Dart artifacts in some hidden way, so that the user doesn't see all the Dart artifacts as is currently the case.
    - Some way of recognizing Dart projects (there's a YAML file as an identifier) and creating NetBeans HTML5 projects from that, i.e., from Dart projects outside the IDE.

    I think that's all... The official Dart Editor is based on Eclipse and requires a massive download of heaps of Eclipse bundles. Compare that to the NetBeans equivalent, which is a very small "HTML5 and PHP" bundle (60 MB), available here, together with the above small Dart plugin. Plus, when you look at how NetBeans IDE integrates with a bunch of Google-oriented projects, i.e., Chrome, Chrome Developer Tools, and Android (via Cordova), that's a pretty interesting toolbox for anyone using Dart. And bear in mind that ANTLRWorks, Microchip, and heaps of other organizations have built and are building their tools on top of NetBeans!

    Read the article

  • CLR Version issues with CorBindRuntimeEx

    - by Rick Strahl
    I’m working on an older FoxPro application that’s using .NET Interop and this app loads its own copy of the .NET runtime through some of our own tools (wwDotNetBridge). This all works fine and it’s fairly straightforward to load and host the runtime and then make calls against it. I’m writing this up for myself mostly because I’ve been bitten by these issues repeatedly and spend 15 minutes each time trying to remember the details.

    However, things get tricky when calling specific versions of the .NET runtime since .NET 4.0 has shipped. Basically we need to be able to support both .NET 2.0 and 4.0 and we’re currently doing it with the same assembly – a .NET 2.0 assembly that is the AppDomain entry point. This works as .NET 4.0 can easily host .NET 2.0 assemblies and the functionality in the 2.0 assembly provides all the features we need to call .NET 4.0 assemblies via Reflection.

    In wwDotnetBridge we provide a load flag that allows specification of the runtime version to use. Something like this:

    do wwDotNetBridge
    LOCAL loBridge as wwDotNetBridge
    loBridge = CreateObject("wwDotNetBridge","v4.0.30319")

    and this works just fine in most cases. If I specify V4 internally that gets fixed up to a whole version number like “v4.0.30319” which is then actually used to host the .NET runtime. Specifically the ClrVersion setting is handled in this Win32 DLL code that handles loading the runtime for me:

    /// Starts up the CLR and creates a Default AppDomain
    DWORD WINAPI ClrLoad(char *ErrorMessage, DWORD *dwErrorSize)
    {
        if (spDefAppDomain)
            return 1;

        // Retrieve a pointer to the ICorRuntimeHost interface
        HRESULT hr = CorBindToRuntimeEx(
            ClrVersion,  // Retrieve latest version by default
            L"wks",      // Request a WorkStation build of the CLR
            STARTUP_LOADER_OPTIMIZATION_MULTI_DOMAIN | STARTUP_CONCURRENT_GC,
            CLSID_CorRuntimeHost,
            IID_ICorRuntimeHost,
            (void**)&spRuntimeHost);

        if (FAILED(hr))
        {
            *dwErrorSize = SetError(hr, ErrorMessage);
            return hr;
        }

        // Start the CLR
        hr = spRuntimeHost->Start();
        if (FAILED(hr))
            return hr;

        CComPtr<IUnknown> pUnk;
        WCHAR domainId[50];
        swprintf(domainId, L"%s_%i", L"wwDotNetBridge", GetTickCount());
        hr = spRuntimeHost->CreateDomain(domainId, NULL, &pUnk);
        hr = pUnk->QueryInterface(&spDefAppDomain.p);
        if (FAILED(hr))
            return hr;

        return 1;
    }

    CorBindToRuntimeEx allows for a specific .NET version string to be supplied, which is what I’m doing via an API call from the FoxPro code. The behavior of CorBindToRuntimeEx is a bit finicky however. The documentation states that NULL should load the latest version of the .NET runtime available on the machine – but it actually doesn’t. As far as I can see – regardless of runtime overrides even in the .config file – NULL will always load .NET 2.0 even if 4.0 is installed.

    <supportedRuntime> .config File Settings

    Things get even more unpredictable once you start adding runtime overrides into the application’s .config file. In my scenario working inside of Visual FoxPro this would be VFP9.exe.config in the FoxPro installation folder (not the current folder). If I have a specific runtime override in the .config file like this:

    <?xml version="1.0"?>
    <configuration>
      <startup>
        <supportedRuntime version="v2.0.50727" />
      </startup>
    </configuration>

    Not surprisingly with this I can load a .NET 2.0 runtime, but I will not be able to load Version 4.0 of the .NET runtime even if I explicitly specify it in my call to ClrLoad. Worse, I don’t get an error – it will just go ahead and hand me a V2 version of the runtime and assume that’s what I wanted. Yuck!
    However, if I set the supported runtime to V4 in the .config file:

    <?xml version="1.0"?>
    <configuration>
      <startup>
        <supportedRuntime version="v4.0.30319" />
      </startup>
    </configuration>

    Then I can load both V4 and V2 of the runtime. Specifying NULL however will STILL only give me V2 of the runtime. Again this seems pretty inconsistent.

    If you’re hosting runtimes, make sure you check which version of the runtime is actually loading first to ensure you get the one you’re looking for. If the wrong version loads – say 2.0 when you want 4.0 – and you then proceed to load 4.0 assemblies, they will all fail to load due to version mismatches. This is how all of this started – I had a bunch of assemblies that weren’t loading and it took a while to figure out that the host was running the wrong version of the CLR, which caused the assembly loads to fail. Arrggh!

    <supportedRuntime> and Debugger Version

    <supportedRuntime> also affects the use of the .NET debugger when attached to the target application. Whichever runtime is specified in the key is the version of the debugger that fires up. This can have some interesting side effects. If you load a .NET 2.0 assembly but <supportedRuntime> points at V4.0 (or vice versa) the debugger will never fire, because it can only debug the appropriate runtime version. This has bitten me on several occasions where code runs just fine but the debugger will just breeze by breakpoints without notice. The default version for the debugger is the latest version installed on the system if <supportedRuntime> is not set.

    Summary

    Besides all the hassles, I’m thankful I can build a .NET 2.0 assembly and have it host .NET 4.0 and call .NET 4.0 code. This way we’re able to ship a single assembly that provides functionality supporting both .NET 2 and 4, without having to have separate DLLs for both, which would be a deployment and update nightmare. The MSDN documentation does point at newer hosting APIs specifically for .NET 4.0, which are way more complicated and even less documented, but that doesn’t help here because the runtime needs to be able to host both .NET 4.0 and 2.0. Not pleased about that – the new APIs look way more complex and of course they’re not available with older versions of the runtime installed, which in our case makes them useless in this scenario where I have to support .NET 2.0 hosting (to provide greater ‘built-in’ platform support). Once you know the behavior above, it’s manageable. However, it’s quite easy to get tripped up here because there are multiple combinations that can really screw up behaviors.
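    As a quick aid for the "check which version actually loaded" advice above, here's a minimal sketch (not from the original post – the class and method names are hypothetical) of a helper you could put in the hosted 2.0 entry assembly and call right after ClrLoad() to confirm which CLR you actually got:

    // Hypothetical helper in the .NET 2.0 bridge assembly - not part of wwDotNetBridge.
    // Environment.Version reports the CLR that loaded this assembly (e.g. 2.0.50727.x
    // or 4.0.30319.x), so calling this immediately after hosting the runtime tells you
    // whether you got the version you asked for, before any V4 assemblies are loaded.
    public class RuntimeInfo
    {
        public static string GetLoadedClrVersion()
        {
            return System.Environment.Version.ToString();
        }
    }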

    Read the article

  • Emacs column editing CUA mode - is it possible to select rectangular region with mouse?

    - by MountainX
    Rectangular or column editing is possible in Emacs, and it is very easy with cua-mode enabled. Here are my references: a video that shows how to do it: http://vimeo.com/1168225 and the section "CUA rectangle support" here: http://www.cua.dk/cua.html But I also wonder if I can do it with the mouse. I want to select the rectangular region entirely with the mouse (like SciTE or Geany can do). Is that possible in Emacs?

    Read the article

  • Visual Studio 2010 and Target Framework Version

    - by Scott Dorman
    Almost two years ago, I wrote about a Visual Studio macro that allows you to change the Target Framework version of all projects in a solution. If you don’t know, the Target Framework version is what tells the compiler which version of the .NET Framework to compile against (more information is available here) and can be set to one of the following values:

    .NET Framework 2.0
    .NET Framework 3.0
    .NET Framework 3.5
    .NET Framework 3.5 Client Profile
    .NET Framework 4.0
    .NET Framework 4.0 Client Profile

    This can be easily accomplished by editing the project properties. The problem with this approach is that if you need to change a lot of projects at one time it becomes rather unwieldy. One possible solution is to edit the project files by hand in a text editor and change the <TargetFrameworkVersion /> and <TargetFrameworkProfile /> properties to the correct values. For example, for the .NET Framework 4.0 Client Profile, these values would be:

    <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
    <TargetFrameworkProfile>Client</TargetFrameworkProfile>

    Again, this is not only time consuming but can also be error-prone. The better solution is to automate this through the use of a Visual Studio macro. Since I had already created a macro to do this for Visual Studio 2008, I updated that macro to work with Visual Studio 2010 and .NET 4.0. It prompts you for the target framework version you want to set for all of the projects and then loops through each project in the solution and makes the change. If you select one of the Framework versions that support a Client Profile, it will ask if you want to use the Client Profile or the Full Profile. It is smart enough to skip project types that don’t support this property and projects that are already at the correct version. This version also incorporates the changes suggested by George (in the comments). The macro is available on my SkyDrive account. Download it to your <UserProfile>\Documents\Visual Studio 2010\Projects\VSMacros80\MyMacros folder, open the Visual Studio Macro IDE (Alt-F11) and add it as an existing item to the “MyMacros” project. I make no guarantees or warranties on this macro. I have tested it on several solutions and projects and everything seems to work and not cause any problems, but, as always, use with caution. Since it is a macro, you have the full source code available to investigate and see what it’s actually doing. If you find any bugs or make any useful changes, please let me know and I’ll update the macro.
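    The macro itself lives on SkyDrive rather than inline, but to give a feel for the core idea, here's a hypothetical fragment (C# against the Visual Studio 2010 automation model, not Scott's actual VB macro) showing the loop-and-set approach, assuming a DTE reference named dte:

    // Hypothetical sketch - the real macro adds prompting, Client/Full Profile
    // selection, and smarter detection of unsupported project types.
    foreach (EnvDTE.Project project in dte.Solution.Projects)
    {
        try
        {
            // VS 2010 exposes the target framework as a moniker string,
            // e.g. ".NETFramework,Version=v4.0,Profile=Client"
            project.Properties.Item("TargetFrameworkMoniker").Value =
                ".NETFramework,Version=v4.0,Profile=Client";
        }
        catch (System.ArgumentException)
        {
            // Project type doesn't expose the property - skip it.
        }
    }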

    Read the article

  • SOA+OSB in same JVM

    - by Manoj Neelapu
    Oracle Service Bus 11gR1 (11.1.1.3) supports running in the same JVM as SOA. This tutorial covers how to create a domain with SOA and OSB combined to run in a single JVM. For this tutorial we will use a flavor of the WebLogic installer bundled with both OEPE and coherence components (e.g. oepe111150_wls1033_win32.exe). The WebLogic installer bundled with coherence and OEPE components can be seen in the screen shot. Oracle Service Bus 11gR1 (11.1.1.3) has built-in caching support for Business Services using coherence. Because of this we will have to install coherence before installing OSB. To get SOA and OSB running in the same domain, we have to install SOA and OSB on the above ORACLE_HOME. After installation we should see both the SOA and OSB homes, as highlighted in red. We can also see the coherence components, which are mandatory for OSB, and the optional OEPE also installed.

    Now we will execute RCU (ofm_rcu_win_11.1.1.3.0_disk1_1of1) to install the schemas for SOA and OSB. The new RCU contains OSB tables (WLI_QS_REPORT_DATA, WLI_QS_REPORT_ATTRIBUTE) that get loaded as part of the SOAINFRA schema. After this step we will have to create the SOA+OSB domain using the config wizard. It is located under $WEBLOGIC_HOME\common\bin\config.* (.cmd or .sh as per your platform). While creating the domain we will select the options for SOA Suite and Oracle Service Bus Extension - All Domain Topologies. There is another option for OSB, Oracle Service Bus Extension - Single Server Domain Topology. That topology is for users who want to use OSB in a single server configuration. Currently SOA doesn't support the single server topology, so it cannot be used with a SOA domain and can only be used for standalone OSB installations.

    We can continue with the domain configuration until we reach the screen below. The following steps are mandatory if we want to have SOA and OSB run in the same JVM: we should select Managed Servers, Clusters and Machines as shown below. After this selection you should see a screen with two servers, one managed server for OSB and one for SOA. Since we would like to have both in one managed server (one JVM), we have to do one important step here: delete either of the servers and rename the other server with the deleted server's name, e.g. delete osb_server1 and rename soa_server1 to osb_server1, or delete soa_server1 and rename osb_server1 to soa_server1. After this step, proceed as usual. If we inspect the created domain, we see only one managed server, which contains components for both SOA and OSB ($DOMAIN_HOME/startManagedWebLogic_readme.txt).

    Read the article

  • 10.10 irritating updater message

    - by Ashu
    As you all know, 10.10 support has ended. Thus, whenever I start the update manager I get a dialog box saying: "Your Ubuntu release is not supported anymore. You will not get any further security fixes or critical updates. Please upgrade to a later version of Ubuntu Linux." Any way to remove this? It is really bugging me. P.S. Updating to a newer version is not an option right now. [screenshot]

    Read the article

  • K2 4.5 Quick Thoughts

    I just finished attending a webcast on K2 4.5 and I thought I'd share a few quick thoughts.

    Power User Story Improved
    Given it is just a presentation and I haven't actually played with it, the story seemed improved and more believable that real-world power users would be able to define workflows in SharePoint. Power users who are comfortable with Excel functions may be able to do some more worthwhile workflows, since there is new support for inline functions and conditions. The new Silverlight K2 designer seems pretty user friendly, though the dialog windows can really stack up, which may get confusing. I thought the neatest part was that the workflow can be defined just by starting with a SharePoint List's settings, which may be okay for some organizations and simpler workflows that don't need to define the workflow and push it through lots of testing in different environments. The standalone K2 Studio is back. In K2 2003 it was required because Visual Studio integration didn't exist. It's back now for use by power users who need functionality up to the point of code. Not sure if this…

    Administration/Installation
    Installation is supposed to be simplified, with unattended install and other details I didn't catch. Install and configuration has always seemed daunting to me, so anything to improve that is good. Related to that, there is a new tool that is meant to help diagnose issues in your installation. That may include figuring out missing permissions or services that aren't running. Also, all K2 SharePoint features are now deployed as solutions.

    Dynamic SQL Service Broker
    Create a smart object to go against a table that you created, NOT the SmartBox. This seems promising and something that maybe should have been there all along.

    Reference Event
    Allows you to call functionality that you've referenced; in the sample shown it was calling a web service that was referenced. It seemed odd because it was really like writing code using dialogs (call constructor, set timeout, call web service method). Seemed a little odd to me.

    Help
    We were reminded that help.k2.com is a newish site that is supposed to be the MSDN of K2 for partners and customers.

    VS 2010 Support
    Still no hard date on this, but what we were told is approximately 90 days after VS 2010 is officially released.

    Read the article

  • Excel Solver vs Solver Foundation

    - by JoshReuben
    I recently read a book – http://www.amazon.com/Scientific-Engineering-Cookbook-Cookbooks-OReilly/dp/0596008791/ref=sr_1_1?ie=UTF8&s=books&qid=1296593374&sr=8-1 – the Excel Scientific and Engineering Cookbook. The 2 main tools that this book leveraged were the Data Analysis Pack and Excel Solver. I had previously been acquainted with Microsoft Solver Foundation – this is a full-fledged API for solving optimization problems that goes beyond being a mere Excel plugin – it exposes a C# programmatic interface for in-process integration and a web service interface for out-of-process integration. Were they the same? Apparently not!

    There are 2 different solver frameworks for Excel:
    http://www.solver.com/index.html
    http://www.solverfoundation.com/

    I contacted both vendors to get their perspectives. Here's what the Excel Solver guys had to say:

    "The Solver Foundation requires you to learn and use a very specific modeling language (OML). The Excel Solver allows you to formulate your optimization problems without learning any new language, simply by entering the formulas into cells on the Excel spreadsheet, something that nearly everyone is already familiar with doing. The Excel Solver also allows you to seamlessly upgrade to products that combine Monte Carlo Simulation capabilities (our Risk Solver Premium and Risk Solver Platform products) which allow you to include uncertainty in your models when appropriate. Our advanced Excel Solver products also have a number of built-in reporting tools for advanced analysis of your model and its results."

    And here's what the Microsoft Solver Foundation guys had to say:

    "With the release of Solver Foundation 3.0, Solver Foundation has the same kinds of solvers (plus a few more) than what is found in Excel Solver. I think there are two main differences:
    1. Problems are described differently. In Excel Solver the goals and constraints are specified inside the spreadsheet, in formulas. In Solver Foundation they are described either in .NET code that uses the Solver Foundation Services API, or using the OML modeling language in Excel.
    2. Solver Foundation's primary strength is on solving large linear, mixed integer, and constraint models. That is, models that contain arbitrary nonlinear functions (such as trig functions, IF(), powers, etc.) are handled a bit better by the Excel Solver at this point."
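    To make the ".NET code that uses the Solver Foundation Services API" point concrete, here's a small illustrative sketch (my own toy two-variable linear program, not from either vendor) of what a Solver Foundation model looks like in C#:

    using System;
    using Microsoft.SolverFoundation.Services;

    class SolverFoundationDemo
    {
        static void Main()
        {
            // Get the solver context and build a model
            SolverContext context = SolverContext.GetContext();
            Model model = context.CreateModel();

            // Two non-negative decision variables
            Decision x = new Decision(Domain.RealNonnegative, "x");
            Decision y = new Decision(Domain.RealNonnegative, "y");
            model.AddDecisions(x, y);

            // Constraints and a goal, expressed directly in code
            model.AddConstraint("materials", 2 * x + y <= 100);
            model.AddConstraint("labor", x + 3 * y <= 90);
            model.AddGoal("profit", GoalKind.Maximize, 5 * x + 4 * y);

            // Solve and report the decision values
            Solution solution = context.Solve();
            Console.WriteLine("x = {0}, y = {1}", x.ToDouble(), y.ToDouble());
        }
    }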

    Read the article

  • Exposing Nested C++ Models to QML, an article by Christophe Dumez translated by Thibaut Cuvelier

    Although this nesting case may seem rare in practice, the fact that QML has no direct support for tree models makes nested C++ models very useful for obtaining a tree structure. A practical example where nested models are useful is the storage of Facebook conversations: a Facebook wall is made up of social notifications (the root model), each of which can have comments (the inner models). Exposing nested C++ models to QML...

    Read the article

  • Java Cloud and Developer Service

    - by JuergenKress
    At our WebLogic Community Workspace (WebLogic Community membership required) you can find the latest Java and Developer Cloud presentations and demos:

    General+Session-+Building+and+Managing+a+Private+Oracle+Java
    Experiences-building-JavaEE-based-PaaS-Platform_Compressed.ppt
    Oracle Enterprise Manager 12c Cloud Control Demo.zip

    WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community. Please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Adding Liebert UPS (Uninterrupted power supply) psi Range

    - by Theuns
    I have a Liebert PowerSure PSI Line-Interactive UPS, 1000VA (discontinued), and am using Ubuntu 12.04 LTS Precise. I've used it on Windows, where it came up as a notebook battery. Is there support for these types of devices? Any help please. I've tried the MultiLink software on the Emerson site and found 'ML_36_004_Linux_x86.bin', which would be the software when using the 2.4.18+ kernel. Is it possible to use this file? Thank you

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Depencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.

    About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library

    Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.

    Enter Dynamic Loading

    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access the various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:

    No assembly reference means only dynamic access - no compiler type checking or Intellisense
    Requirement for the host application to have a reference to JSON.NET or else get runtime errors

    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.

    So there are a few things that are needed to make this work:

    Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)
    Load types into dynamic variables
    Use Reflection for a few tasks like statics/enums

    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.
    Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non-instance members, so there's still a little bit of work left to do with Reflection.

    Dynamic Object Instantiation

    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed. Instantiating a type is a two step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

    /// <summary>
    /// Creates an instance of a type based on a string. Assumes that the type's
    /// </summary>
    /// <param name="typeName">Common name of the type</param>
    /// <param name="args">Any constructor parameters</param>
    /// <returns></returns>
    public static object CreateInstanceFromString(string typeName, params object[] args)
    {
        object instance = null;
        Type type = null;
        try
        {
            type = GetTypeFromName(typeName);
            if (type == null)
                return null;
            instance = Activator.CreateInstance(type, args);
        }
        catch
        {
            return null;
        }
        return instance;
    }

    /// <summary>
    /// Helper routine that looks up a type name and tries to retrieve the
    /// full type reference in the actively executing assemblies.
    /// </summary>
    /// <param name="typeName"></param>
    /// <returns></returns>
    public static Type GetTypeFromName(string typeName)
    {
        Type type = null;

        // Let default name binding find it
        type = Type.GetType(typeName, false);
        if (type != null)
            return type;

        // look through assembly list
        var assemblies = AppDomain.CurrentDomain.GetAssemblies();

        // try to find manually
        foreach (Assembly asm in assemblies)
        {
            type = asm.GetType(typeName, false);
            if (type != null)
                break;
        }
        return type;
    }

    To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached).
    Here's what the factory function looks like in JsonSerializationUtils:

    /// <summary>
    /// Dynamically creates an instance of JSON.NET
    /// </summary>
    /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
    /// <returns>Dynamic JsonSerializer instance</returns>
    public static dynamic CreateJsonNet(bool throwExceptions = true)
    {
        if (JsonNet != null)
            return JsonNet;

        lock (SyncLock)
        {
            if (JsonNet != null)
                return JsonNet;

            // Try to create instance
            dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");

            if (json == null)
            {
                try
                {
                    var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                    json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
                }
                catch (Exception ex)
                {
                    if (throwExceptions)
                        throw;
                    return null;
                }
            }

            if (json == null)
                return null;

            json.ReferenceLoopHandling = (dynamic)ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

            // Enums as strings in JSON
            dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter");
            json.Converters.Add(enumConverter);

            JsonNet = json;
        }

        return JsonNet;
    }

    This code's purpose is to return a fully configured JsonSerializer instance. As you can see, the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading. Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling, which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property. Another feature I need is to have Enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection. As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.

    Doing the actual JSON Conversion

    Finally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks: two each for serialization and deserialization, for string and file. Here's what the File Serialization looks like:

    /// <summary>
    /// Serializes an object instance to a JSON file.
    /// </summary>
    /// <param name="value">the value to serialize</param>
    /// <param name="fileName">Full path to the file to write out with JSON.</param>
    /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param>
    /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param>
    /// <returns>true or false</returns>
    public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false)
    {
        dynamic writer = null;
        FileStream fs = null;
        try
        {
            Type type = value.GetType();
            var json = CreateJsonNet(throwExceptions);
            if (json == null)
                return false;

            fs = new FileStream(fileName, FileMode.Create);
            var sw = new StreamWriter(fs, Encoding.UTF8);

            writer = Activator.CreateInstance(JsonTextWriterType, sw);

            if (formatJsonOutput)
                writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented");

            writer.QuoteChar = '"';
            json.Serialize(writer, value);
        }
        catch (Exception ex)
        {
            Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
            if (throwExceptions)
                throw;
            return false;
        }
        finally
        {
            if (writer != null)
                writer.Close();
            if (fs != null)
                fs.Close();
        }
        return true;
    }

    You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.

    For full circle operation here's the DeserializeFromFile() version:

    /// <summary>
    /// Deserializes an object from file and returns a reference.
    /// </summary>
    /// <param name="fileName">name of the file to serialize to</param>
    /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param>
    /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param>
    /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param>
    /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns>
    public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false)
    {
        dynamic json = CreateJsonNet(throwExceptions);
        if (json == null)
            return null;

        object result = null;
        dynamic reader = null;
        FileStream fs = null;
        try
        {
            fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
            var sr = new StreamReader(fs, Encoding.UTF8);
            reader = Activator.CreateInstance(JsonTextReaderType, sr);
            result = json.Deserialize(reader, objectType);
            reader.Close();
        }
        catch (Exception ex)
        {
            Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message);
            if (throwExceptions)
                throw;
            return null;
        }
        finally
        {
            if (reader != null)
                reader.Close();
            if (fs != null)
                fs.Close();
        }
        return result;
    }

    This code is a little more compact since there are no prettifying options to set. Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer.
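    For illustration, here's what calling these two wrappers might look like from consuming code (my own usage sketch built on the method signatures shown above, with a hypothetical Settings class and file path):

    // Hypothetical consumer of the JsonSerializationUtils wrappers shown above
    public class Settings
    {
        public string AppName { get; set; }
        public int MaxItems { get; set; }
    }

    public static class JsonConfigDemo
    {
        public static void Run()
        {
            var settings = new Settings { AppName = "MyApp", MaxItems = 10 };

            // Write pretty-printed JSON; returns false instead of throwing on failure
            bool ok = JsonSerializationUtils.SerializeToFile(settings, @"c:\temp\settings.json", false, true);

            // Read it back - the result must be cast to the target type
            var loaded = JsonSerializationUtils.DeserializeFromFile(@"c:\temp\settings.json", typeof(Settings)) as Settings;
        }
    }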
    You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive. These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

    Using the JsonSerializationUtils Wrapper

    The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for handling reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component and there's very little code to make this work now. The whole provider looks like this:

    /// <summary>
    /// Reads and Writes configuration settings in .NET config files and
    /// sections. Allows reading and writing to default or external files
    /// and specification of the configuration section that settings are
    /// applied to.
    /// </summary>
    public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
        where TAppConfiguration : AppConfiguration, new()
    {
        /// <summary>
        /// Optional - the Configuration file where configuration settings are
        /// stored in. If not specified uses the default Configuration Manager
        /// and its default store.
        /// </summary>
        public string JsonConfigurationFile
        {
            get { return _JsonConfigurationFile; }
            set { _JsonConfigurationFile = value; }
        }
        private string _JsonConfigurationFile = string.Empty;

        public override bool Read(AppConfiguration config)
        {
            var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
            if (newConfig == null)
            {
                if (Write(config))
                    return true;
                return false;
            }

            DecryptFields(newConfig);
            DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
            return true;
        }

        /// <summary>
        /// Return
        /// </summary>
        /// <typeparam name="TAppConfig"></typeparam>
        /// <returns></returns>
        public override TAppConfig Read<TAppConfig>()
        {
            var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
            if (result != null)
                DecryptFields(result);
            return result;
        }

        /// <summary>
        /// Write configuration to XmlConfigurationFile location
        /// </summary>
        /// <param name="config"></param>
        /// <returns></returns>
        public override bool Write(AppConfiguration config)
        {
            EncryptFields(config);
            bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true);

            // Have to decrypt again to make sure the properties are readable afterwards
            DecryptFields(config);
            return result;
        }
    }

    This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases. Note this code doesn't have any dynamic dependencies - all that's abstracted away in JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class. Already, there are several other places in some other tools where I use JSON serialization and this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use.
    And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy.

    Performance?

    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency.

    Dynamic Loading - Worth it?

    On occasion dynamic loading makes sense. But there's a price to be paid in added code complexity and a performance hit. For some operations that are not pivotal to a component or application and only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files and loading down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool. Hopefully some of you find this information useful…

    Read the article

  • The Flash 10.1 Release Candidate should herald the imminent arrival of the final version

    Update from 04/08/2010: The Flash 10.1 RC heralds the imminent arrival of the final version. Adobe has just released the RC (Release Candidate) of Flash Player 10.1. As a reminder, the upcoming version of Flash introduces hardware acceleration (in other words, the use of the graphics card rather than the CPU) into the technology. Another new feature is support for the H.264 video format, the only format that will benefit from hardware acceleration. The list of supported cards is available here (pdf). Another...

    Read the article

  • What package is meant to replace thinkfinger in 11.10?

    - by misterhaan
    My laptop has a fingerprint reader that until 11.10 I used via thinkfinger. That package is no longer in the repositories, but I assume there’s a different package meant to support fingerprint readers in thinkfinger’s place. What is the recommended fingerprint reader package? Should I just find thinkfinger on my own and use that instead? Here’s my reader’s lsusb output: Bus 003 Device 002: ID 0483:2016 SGS Thomson Microelectronics Fingerprint Reader

    Read the article

  • CVE-2010-1634 Integer Overflow vulnerability in Python

    - by chandan
    CVE Description: CVE-2010-1634 Integer Overflow vulnerability
    CVSSv2 Base Score: 5.0
    Component: Python
    Product and Resolution:
    Solaris 10 - SPARC: 143506-03, X86: 143507-03
    Solaris 11 - Contact Support

    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Oracle's Sun ZFS Storage Appliances and Oracle VM

    - by uwes
    Oracle's Sun ZFS Storage Appliance is the optimal platform for deploying consolidated applications in an Oracle Virtual Machine (OVM) environment. Unsurpassed integration - the Oracle VM and Storage engineering teams provide seamless integration points and an Oracle VM Connect plug-in for the Sun ZFS Storage Appliance in FC, NFS, and iSCSI environments. And Sun ZFS Storage is engineered and tested to work with Oracle VM agility features including Live (VM) Migration and Oracle RAC Live Migration. More information can be found under the following links:

    ZFS Storage Appliance Server Virtualization Oracle.com page
    ZFS Storage Appliance Oracle.com page
    ZFS Storage Appliance Oracle Technology Network page
    Software download support.oracle.com page

    Read the article

  • Finland Specialization Campaign - achieve your assessments - get a cinema ticket

    - by ann-kristin.hahne(at)oracle.com
    GET SPECIALIZED! Take the ONLINE test and get yourself a movie ticket! In addition to rewarding the partner company, we also want to reward the individuals who complete the test. Everyone who passes a test in one of the three areas (Pre-Sales, Sales, Support) before 31 January 2011 will be rewarded with a movie ticket. Here's how: once you have completed the test, send the OPN certificate you received together with your full contact details (name, e-mail, company, phone number) to: [email protected]. Remember to update your company's OPN company ID in your oracle.com profile!

    Read the article

  • New Wine in New Bottles

    - by Tony Davis
    How many people, when their car shows signs of wear and tear, would consider upgrading the engine and keeping the shell? Even if you're cash-strapped, you'll soon work out the subtlety of the economics, the cost of sudden breakdowns, the precious time lost coping with the hassle, and the low 'book value'. You'll generally buy a new car. The same philosophy should apply to database systems. Mainstream support for SQL Server 2005 ends on April 12; many DBAs, if they haven't done so already, will be considering the migration to SQL Server 2008 R2. Hopefully, that upgrade plan will include a fresh install of the operating system on brand new hardware. SQL Server 2008 R2 and Windows Server 2008 R2 are designed to work together. The improved architecture, processing power, and hyper-threading capabilities of modern processors will dramatically improve the performance of many SQL Server workloads, and allow consolidation opportunities. Of course, there will be many DBAs smiling ruefully at the suggestion of such indulgence. This is nothing like the real world, this halcyon place where hardware and software budgets are limitless, development and testing resources are plentiful, and third party vendors immediately certify their applications for the latest-and-greatest platform! As with cars, or any other technology, the justification for a complete upgrade is complex. With servers, the extra cost at time of upgrade will generally pay you back in terms of the increased performance of your business applications, reduced maintenance costs, training costs and downtime. Also, if you plan and design carefully, it's possible to offset hardware costs with reduced SQL Server licence costs. In his forthcoming SQL Server Hardware book, Glenn Berry describes a recent case where he was able to replace 4 single-socket database servers with one two-socket server, saving about $90K in hardware costs and $350K in SQL Server license costs. Of course, there are exceptions. If you do have a stable, reliable, secure SQL Server 6.5 system that still admirably meets the needs of a specific business requirement, and has no security vulnerabilities, then by all means leave it alone. Why upgrade just for the sake of it? However, as soon as a system shows signs of being unfit for purpose, or is moving out of mainstream support, the ruthless DBA will make the strongest possible case for a belts-and-braces upgrade. We'd love to hear what you think. What does your typical upgrade path look like? What are the major obstacles? Cheers, Tony.

    Read the article

  • Microsoft <3 jQuery

    - by Latest Microsoft Blogs
    Today at Mix10 we announced our increased support and involvement in the jQuery Library and how we are working closely with the community and the jQuery Team to accelerate the development of this already powerful front-end library. In recent weeks... (read more)

    Read the article

  • What is a better abstraction layer for D3D9 and OpenGL vertex data management?

    - by Sam Hocevar
    My rendering code has always been OpenGL. I now need to support a platform that does not have OpenGL, so I have to add an abstraction layer that wraps OpenGL and Direct3D 9. I will support Direct3D 11 later. TL;DR: the differences between OpenGL and Direct3D cause redundancy for the programmer, and the data layout feels flaky. For now, my API works a bit like this. This is how a shader is created:

    Shader *shader = Shader::Create(
        " ... GLSL vertex shader ... ",
        " ... GLSL pixel shader ... ",
        " ... HLSL vertex shader ... ",
        " ... HLSL pixel shader ... ");
    ShaderAttrib a1 = shader->GetAttribLocation("Point", VertexUsage::Position, 0);
    ShaderAttrib a2 = shader->GetAttribLocation("TexCoord", VertexUsage::TexCoord, 0);
    ShaderAttrib a3 = shader->GetAttribLocation("Data", VertexUsage::TexCoord, 1);
    ShaderUniform u1 = shader->GetUniformLocation("WorldMatrix");
    ShaderUniform u2 = shader->GetUniformLocation("Zoom");

    There is already a problem here: once a Direct3D shader is compiled, there is no way to query an input attribute by its name; apparently only the semantics stay meaningful. This is why GetAttribLocation has these extra arguments, which get hidden in ShaderAttrib. Now this is how I create a vertex declaration and two vertex buffers:

    VertexDeclaration *decl = VertexDeclaration::Create(
        VertexStream<vec3,vec2>(VertexUsage::Position, 0, VertexUsage::TexCoord, 0),
        VertexStream<vec4>(VertexUsage::TexCoord, 1));
    VertexBuffer *vb1 = new VertexBuffer(NUM * (sizeof(vec3) + sizeof(vec2)));
    VertexBuffer *vb2 = new VertexBuffer(NUM * sizeof(vec4));

    Another problem: the information VertexUsage::Position, 0 is totally useless to the OpenGL/GLSL backend because it does not care about semantics. Once the vertex buffers have been filled with or pointed at data, this is the rendering code:

    shader->Bind();
    shader->SetUniform(u1, GetWorldMatrix());
    shader->SetUniform(u2, blah);
    decl->Bind();
    decl->SetStream(vb1, a1, a2);
    decl->SetStream(vb2, a3);
    decl->DrawPrimitives(VertexPrimitive::Triangle, NUM / 3);
    decl->Unbind();
    shader->Unbind();

    You see that decl is a bit more than just a D3D-like vertex declaration, it kinda takes care of rendering as well. Does this make sense at all? What would be a cleaner design? Or a good source of inspiration?

    Read the article

  • Juju MySQL adding units vs adding new service with relation

    - by user2291975
    What's the point of adding units to MySQL? Why not just create a new service with a relation to the master node? MySQL doesn't support multi-master, so adding units to one MySQL service doesn't make any sense. If I create a second service as a slave and add units to that to act as multiple slaves, it still doesn't make sense, because if the primary slave server dies all the units attached to it become useless as well. Can anyone explain why I should add units to MySQL?

    Read the article

  • Google Python Class Day 2 Part 2

    Google Python Class Day 2 Part 2: Utilities: OS and Commands. By Nick Parlante. Support materials and exercises: code.google.com. From: GoogleDevelopers. Time: 20:20

    Read the article
