Search Results

Search found 22951 results on 919 pages for 'debug build'.


  • How should I implement a command processing application?

    - by Nini Michaels
    I want to make a simple, proof-of-concept application (a REPL) that takes a number and then processes commands on that number. Example: I start with 1. Then I write "add 2", and it gives me 3. Then I write "multiply 7", and it gives me 21. Then I want to know if it is prime, so I write "is prime" (on the current number - 21), and it gives me false. "is odd" would give me true. And so on. Now, for a simple application with a few commands, even a plain switch would do for processing the commands. But if I want extensibility, how would I need to implement the functionality? Do I use the command pattern? Do I build a simple parser/interpreter for the language? What if I want more complex commands, like "multiply 5 until >200"? What would be an easy way to extend it (add new commands) without recompiling? Edit: to clarify a few things, my end goal is not to make something similar to WolframAlpha, but rather a list (of numbers) processor. But I want to start slowly at first (on single numbers). I have in mind something similar to the way one would use Haskell to process lists, but a very simple version. I'm wondering whether something like the command pattern (or an equivalent) would suffice, or whether I have to make a new mini-language and a parser for it to achieve my goals.
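    One possible starting point for the extensible version is the command pattern with a name-keyed registry: a switch grows with every command, while a registry just gains entries. A minimal Java sketch (all names hypothetical, and deliberately no grammar yet):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Scanner;

        public class NumberRepl {
            // A command transforms the current value; new commands are added by
            // registering another entry, not by editing the dispatch loop.
            interface Command { double apply(double current, String arg); }

            public static void main(String[] args) {
                Map<String, Command> commands = new HashMap<>();
                commands.put("add", (cur, arg) -> cur + Double.parseDouble(arg));
                commands.put("multiply", (cur, arg) -> cur * Double.parseDouble(arg));
                commands.put("is", (cur, arg) -> {
                    if (arg.equals("odd"))
                        System.out.println(((long) cur) % 2 != 0);
                    return cur;   // predicates print a result, the value is unchanged
                });

                double current = 1;
                Scanner in = new Scanner(System.in);
                while (in.hasNextLine()) {
                    String[] parts = in.nextLine().trim().split("\\s+", 2);
                    Command cmd = commands.get(parts[0]);
                    if (cmd == null) { System.out.println("unknown command"); continue; }
                    current = cmd.apply(current, parts.length > 1 ? parts[1] : "");
                    System.out.println(current);
                }
            }
        }

    Loading new Command implementations reflectively (or via a script engine) would cover "extend without recompiling"; compound commands like "multiply 5 until >200" are the point where a small grammar and parser start to pay for themselves over a registry lookup.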

    Read the article

  • CodePlex Daily Summary for Wednesday, August 29, 2012

    Popular Releases:
    - DiscuzViet: DzX2.5TV Stable Version: Discuz-X2.5-TV-Stable
    - Math.NET Numerics: Math.NET Numerics v2.2.1: Major linear algebra rework since v2.1, now available on Codeplex as well (previous versions were only available via NuGet). Since v2.2.0: Student-T density more robust for very large degrees of freedom; sparse Kronecker product much more efficient (now leverages sparsity); direct access to raw matrix storage implementations for advanced extensibility; now also a separate package for a signed core library with a strong name (we dropped strong names in v2.2.0); also available as NuGet packages...
    - Microsoft SQL Server Product Samples: Database: AdventureWorks Databases – 2012, 2008R2 and 2008: This release consolidates AdventureWorks databases for SQL Server 2012, 2008R2 and 2008 versions on one page. Each zip file contains an mdf database file and ldf log file. This should make it easier to find and download AdventureWorks databases since all OLTP versions are on one page. There are no database schema changes. For each release of the product, there is a light-weight and full version of the AdventureWorks sample database. The light-weight version is denoted by ...
    - Smart Thread Pool: SmartThreadPool 2.2.3: Release changes: added MaxStackSize option to threads.
    - ImageServer: v1.1: This is the first version released.
    - Christoc's DotNetNuke Module Development Template: DotNetNuke Project Templates V1.1 for VS2012: This release is specifically for Visual Studio 2012 support, distributed through the Visual Studio Extensions gallery at http://visualstudiogallery.msdn.microsoft.com/ After you build in Release mode the installable packages (source/install) can now be found in the INSTALL folder within your module's folder, not the packages folder anymore. Check out the blog post for all of the details about this release: http://www.dotnetnuke.com/Resources/Blogs/EntryId/3471/New-Visual-Studio-2012-Projec...
    - Home Access Plus+: v8.0: v8.0828.1800 RELEASE CHANGED TO BETA. Any issues, please log them on http://www.edugeek.net/forums/home-access-plus/ This is a full release; NO upgrade ZIP will be provided as most files require replacing. To upgrade from a previous version, delete everything but your AppData folder, extract all but the AppData folder and run your HAP+ install. Documentation is supplied in the Web Zip. The Quota Services require executing a script to register the service, which can be found in their install di...
    - Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3391 (September 2012): New features: extended ReflectionClass; libxml error handling, constants; TreatWarningsAsErrors MSBuild option; OnlyPrecompiledCode configuration option (allows use of only compiled code). Fixes: ArgsAware exception fix; accessing .NET properties bug fix; ASP.NET session handler fix for OutOfProc mode. Phalanger Tools for Visual Studio: Visual Studio 2010 & 2012; new debugger engine, PHP-like debugging; lots of fixes to project files, formatting, smart indent, colorization etc. Improved ...
    - MabiCommerce: MabiCommerce 1.0.1: What's new: setup now creates shortcuts; fixed spelling errors; minor enhancement to the Map window.
    - ScintillaNET: ScintillaNET 2.5.2: This release has been built from the 2.5 branch. Version 2.5.2 is functionally identical to the 2.5.1 release but also includes the XML documentation comments file generated by Visual Studio. It is not 100% comprehensive but it will give you Visual Studio IntelliSense for a large part of the API. Just make sure the ScintillaNET.xml file is in the same folder as the ScintillaNET.dll reference you're using in your projects. (The XML file does not need to be distributed with your application)...
    - WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.0: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For a compiled version use NuGet: you can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit. Features: AsyncUI extensions; controls and control extensions; converters; debugging helpers; imaging; IO helpers; VisualTree helpers; samples. Recent changes: NOTE: namespace changes DebugConsol...
    - BlackJumboDog: Ver5.7.1: 2012.08.25 Ver5.7.1 (1)?????·?????LING?????????????? (2)SMTP???(????)????、?????\?????????????????????
    - Visual Studio Team Foundation Server Branching and Merging Guide: v2 - Visual Studio 2012: Welcome to the Branching and Merging Guide. Quality-bar details: documentation has been reviewed by Visual Studio ALM Rangers; documentation has been through an independent technical review; documentation has been reviewed by the quality and recording team; all critical bugs have been resolved. Known issues/bugs: spelling, grammar and content revisions are in progress; a hotfix will be published.
    - MakersEngine: MakersEngine BETA 0.1: First BETA build of MakersEngine. Adds TrinityCore compiling support (this should run well on any system). SqlMgr will try to connect to your database but it won't import anything.
    - Win8GameKit: Windows 8 RTM Release: Updated for Windows 8 RTM. Fixed accelerometer device check: Gravity Pulse will not be available if no accelerometer device is present; it will instead ignore any power-ups.
    - SQL Server Keep Alive Service: SQL Server Keep Alive Service 1.0: This is the first official release. Please see the documentation for help: https://sqlserverkeepalive.codeplex.com/documentation
    - ARSoft.Tools.Net - C#/.Net DNS client/server, SPF and SenderID Library: 1.7.0: New features: strong name for binary release; LLMNR client; one-shot Multicast DNS client; some new IPAddress extensions; response validation as described in draft-vixie-dnsext-dns0x20-00; added support for the Owner EDNS option (draft-cheshire-edns0-owner-option); added support for the LLQ EDNS option (draft-sekar-dns-llq); added support for the Update Lease EDNS option (draft-sekar-dns-ul). Changes: updated to latest IANA parameters; adapted RFC6563 - moving A6 to Historic status; use IPv6 addre...
    - 7zbackup - PowerShell Script to Backup Files with 7zip: 7zBackup v. 1.8.1 Stable: Do you like this piece of software? It took some time and effort to develop; please consider helping me with a donation, or visit my blog. Code: new work switch maxrecursionlevel to limit recursion depth while searching files to backup; the rotate argument switch can now be set in the selection file too; the prefix argument switch can now be set in the selection file too; the prefix argument switch is checked against invalid file name chars; the vars script file has now...
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.62: Fix for issue #18525 - escaped characters in CSS identifiers get double-escaped if the character immediately after the backslash is not normally allowed in an identifier. Fixed a symbol problem with the NuGet package; 4.62 should have NuGet symbols available again. Also want to highlight again the breaking change introduced in 4.61 regarding the renaming of the DLL from AjaxMin.dll to AjaxMinLibrary.dll to fix strong-name collisions between the DLL and the EXE. Please be aware of this change and...
    - nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.65: As some of you may know we were planning to release version 2.70 much later (the end of September). But today we have to release this intermediate version (2.65). It fixes a critical issue caused by a third-party assembly when running nopCommerce on a server with .NET 4.5 installed. No major features have been introduced with this release as our development efforts were focused on further enhancements and fixing bugs. To see the full list of fixes and changes please visit the release notes p...

    New Projects:
    - Associativy: Associativy aims to give a platform for building knowledge bases organized through associative connections. See: http://associativy.com
    - Associativy Administration: Administration module for the Associativy (http://associativy.com) Orchard graph platform.
    - Associativy Core: Core module for the Associativy (http://associativy.com) Orchard graph platform.
    - Associativy Frontend Engines: Frontend Engines module for the Associativy (http://associativy.com) Orchard graph platform.
    - Associativy Notions Demo Instance: Notions Demo Instance module for the Associativy (http://associativy.com) Orchard graph platform.
    - Associativy Tags Adapter: Tags Adapter module for the Associativy (http://associativy.com) Orchard graph platform.
    - Associativy Tests: Tests module for the Associativy (http://associativy.com) Orchard graph platform.
    - Associativy Web Services: Web Services module for the Associativy (http://associativy.com) Orchard graph platform.
    - DotNMap: A type library that can be used to work with NMap scan results in .NET.
    - EFCompoundkeyWhere: ??????????, ??????????? ????????? ??????? Where ?? ?????????? ????? ? Entity Framework
    - Entertainment Tools: These are tools for entertainment professionals.
    - EntLib.com????????: ◆ 100% ??(Open Source - ?????); ◆ ??.Net Framework 4.0; ◆ ?? ASP.NET、C# ?? ? SQL Server ???; ◆ ????????????????????; ◆ ?????????,?????????????;
    - ExpressProfiler: ExpressProfiler is a simple but good enough replacement for SQL Server Profiler.
    - Fancy Thumb: Fancy Thumb helps you decorate your thumb drives, giving them fancy icons and names longer than what FAT(32) allows.
    - GEBestBetAdder: GEBestBetAdder is a SharePoint utility which helps with the exporting and importing of keywords and best bets.
    - Ginger Graphics Library: 3D graphics.
    - ImageServer: This is a high performance, highly extensible web server written in ASP.NET C# for automated image processing.
    - jPaint: This is a copy of mspaint, written in pure js + html + css.
    - ListOfTales: This is a simple application for storing information about books ))
    - Metro air hockey: metro air hockey
    - Metro graphcalc: metro graphcalc
    - Metro speedtest: metro speedtest
    - Metro ToDos: metro todos
    - MVC4 Starter Kit: The MVC4 Starter Kit is designed as a bare-bones application that covers many of the concerns to be addressed during the first stage of development.
    - MyProject1: MyProject1
    - npantarhei contribute: use the npantarhei flowruntime configured by MEF and design your flows within Visual Studio 2012 using an integrated designer...
    - PBRX: PowerBuilder to Ruby toolkit.
    - PE file reader: readpe, a tool which parses a PE formatted file and displays the requested information.
    - People: People
    - Pocket Calculator: This is a POC project that implements a simple WinForms pocket calculator that uses a Microsoft .NET 4.0.1 State Machine Workflow in the back-end.
    - ProjectZ: Project Z handles the difficulties with Mologs.
    - PSMNTVIS: this is a blabla test for some blabla code practice.
    - Santry DotNetNuke Lightbox Ad Module: This module allows you to place a modal ad lightbox module on your DotNetNuke page. It integrates with the HTML provider and cookie management for display.
    - Sevens' Stories: An album for Class Seven of Pingtan No.1 High School.
    - SIS-TRANSPORTES: Project under development...
    - SmartSpace: SmartSpace
    - SO Chat Star Leaderboard: Scrapes stars from chat.stackoverflow.com and calculates statistics and a leaderboard.
    - testdd08282012git01: d
    - testdd08282012hg01: uio
    - testddtfs08282012: kl
    - testtom08282012git01: fdsfds
    - testtom08282012hg01: f
    - testtom08282012tfs01: bvcv
    - TransportadoraToledo: Integrated Systems Project
    - Visual Studio Watchers: Custom Visual Studio visualizers.
    - XNC: XNC is an (in-progress) XNA Framework-like library for the C programming language.
    - ???: ?

    Read the article

  • Backup to disk, encrypted, without any installed local software

    - by user30064
    Hi. OK, this is a tough one, and it might not even be possible, but no harm in asking I guess. I have a Buffalo Terastation file server that I use for network attached storage. After a couple of phone calls to customer services I realised that there is no way to back up to disk encrypted. In effect, I would be carrying unencrypted company data off-site daily, which is obviously unacceptable. I had a go at TrueCrypt, EncFS, and a few others, and as far as I could see all of them require that you install some software on the machine that is to use the file system, which makes sense. Unfortunately the firmware on the Terastation is closed and I cannot install any software (and I can't build from source either, since Buffalo didn't include a compiler). Are there any ways to copy files to disk such that, as soon as they are written to the disk, they are transparently encrypted, without having to install additional software? I'm not sure it matters too much, but the Terastation firmware is Linux based, although as I mentioned, closed. Many thanks, Andreas

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: you create a class that holds the configuration data, that inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.

    About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library

    Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.

    Enter Dynamic Loading

    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:

    - No assembly reference means only dynamic access - no compiler type checking or Intellisense
    - Requirement for the host application to have a reference to JSON.NET or else get runtime errors

    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already. So there are a few things that are needed to make this work:

    - Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics/enums

    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection. Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non-instance members, so there's still a little bit of work left to do with Reflection.

    Dynamic Object Instantiation

    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might not have been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed. Instantiating a type is a two step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

        /// <summary>
        /// Creates an instance of a type based on a string. Assumes that the type's
        /// </summary>
        /// <param name="typeName">Common name of the type</param>
        /// <param name="args">Any constructor parameters</param>
        /// <returns></returns>
        public static object CreateInstanceFromString(string typeName, params object[] args)
        {
            object instance = null;
            Type type = null;
            try
            {
                type = GetTypeFromName(typeName);
                if (type == null)
                    return null;
                instance = Activator.CreateInstance(type, args);
            }
            catch
            {
                return null;
            }
            return instance;
        }

        /// <summary>
        /// Helper routine that looks up a type name and tries to retrieve the
        /// full type reference in the actively executing assemblies.
        /// </summary>
        /// <param name="typeName"></param>
        /// <returns></returns>
        public static Type GetTypeFromName(string typeName)
        {
            Type type = null;

            // Let default name binding find it
            type = Type.GetType(typeName, false);
            if (type != null)
                return type;

            // look through assembly list
            var assemblies = AppDomain.CurrentDomain.GetAssemblies();

            // try to find manually
            foreach (Assembly asm in assemblies)
            {
                type = asm.GetType(typeName, false);
                if (type != null)
                    break;
            }
            return type;
        }

    To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). Here's what the factory function looks like in JsonSerializationUtils:

        /// <summary>
        /// Dynamically creates an instance of JSON.NET
        /// </summary>
        /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
        /// <returns>Dynamic JsonSerializer instance</returns>
        public static dynamic CreateJsonNet(bool throwExceptions = true)
        {
            if (JsonNet != null)
                return JsonNet;

            lock (SyncLock)
            {
                if (JsonNet != null)
                    return JsonNet;

                // Try to create instance
                dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");

                if (json == null)
                {
                    try
                    {
                        var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                        json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
                    }
                    catch (Exception ex)
                    {
                        if (throwExceptions)
                            throw;
                        return null;
                    }
                }

                if (json == null)
                    return null;

                json.ReferenceLoopHandling = (dynamic)ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

                // Enums as strings in JSON
                dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter");
                json.Converters.Add(enumConverter);

                JsonNet = json;
            }

            return JsonNet;
        }

    This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading. Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property. Another feature I need is to have Enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection. As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.

    Doing the actual JSON Conversion

    Finally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks: two each for serialization and deserialization, for string and file. Here's what the File Serialization looks like:

        /// <summary>
        /// Serializes an object instance to a JSON file.
        /// </summary>
        /// <param name="value">the value to serialize</param>
        /// <param name="fileName">Full path to the file to write out with JSON.</param>
        /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param>
        /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param>
        /// <returns>true or false</returns>
        public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false)
        {
            dynamic writer = null;
            FileStream fs = null;
            try
            {
                Type type = value.GetType();

                var json = CreateJsonNet(throwExceptions);
                if (json == null)
                    return false;

                fs = new FileStream(fileName, FileMode.Create);
                var sw = new StreamWriter(fs, Encoding.UTF8);

                writer = Activator.CreateInstance(JsonTextWriterType, sw);

                if (formatJsonOutput)
                    writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented");

                writer.QuoteChar = '"';
                json.Serialize(writer, value);
            }
            catch (Exception ex)
            {
                Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
                if (throwExceptions)
                    throw;
                return false;
            }
            finally
            {
                if (writer != null)
                    writer.Close();
                if (fs != null)
                    fs.Close();
            }
            return true;
        }

    You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable. For full circle operation here's the DeserializeFromFile() version:

        /// <summary>
        /// Deserializes an object from file and returns a reference.
        /// </summary>
        /// <param name="fileName">name of the file to serialize to</param>
        /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param>
        /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param>
        /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param>
        /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns>
        public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false)
        {
            dynamic json = CreateJsonNet(throwExceptions);
            if (json == null)
                return null;

            object result = null;
            dynamic reader = null;
            FileStream fs = null;
            try
            {
                fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
                var sr = new StreamReader(fs, Encoding.UTF8);
                reader = Activator.CreateInstance(JsonTextReaderType, sr);
                result = json.Deserialize(reader, objectType);
                reader.Close();
            }
            catch (Exception ex)
            {
                Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message);
                if (throwExceptions)
                    throw;
                return null;
            }
            finally
            {
                if (reader != null)
                    reader.Close();
                if (fs != null)
                    fs.Close();
            }
            return result;
        }

    This code is a little more compact since there are no prettifying options to set. Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive. These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

    Using the JsonSerializationUtils Wrapper

    The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for handling reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component and there's very little code to make this work now. The whole provider looks like this:

        /// <summary>
        /// Reads and Writes configuration settings in .NET config files and
        /// sections. Allows reading and writing to default or external files
        /// and specification of the configuration section that settings are
        /// applied to.
        /// </summary>
        public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
            where TAppConfiguration : AppConfiguration, new()
        {
            /// <summary>
            /// Optional - the Configuration file where configuration settings are
            /// stored in. If not specified uses the default Configuration Manager
            /// and its default store.
            /// </summary>
            public string JsonConfigurationFile
            {
                get { return _JsonConfigurationFile; }
                set { _JsonConfigurationFile = value; }
            }
            private string _JsonConfigurationFile = string.Empty;

            public override bool Read(AppConfiguration config)
            {
                var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
                if (newConfig == null)
                {
                    if (Write(config))
                        return true;
                    return false;
                }

                DecryptFields(newConfig);
                DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
                return true;
            }

            /// <summary>
            /// Return
            /// </summary>
            /// <typeparam name="TAppConfig"></typeparam>
            /// <returns></returns>
            public override TAppConfig Read<TAppConfig>()
            {
                var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
                if (result != null)
                    DecryptFields(result);
                return result;
            }

            /// <summary>
            /// Write configuration to XmlConfigurationFile location
            /// </summary>
            /// <param name="config"></param>
            /// <returns></returns>
            public override bool Write(AppConfiguration config)
            {
                EncryptFields(config);
                bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true);

                // Have to decrypt again to make sure the properties are readable afterwards
                DecryptFields(config);
                return result;
            }
        }

    This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases. Note this code doesn't have any dynamic dependencies - all that's abstracted away in JsonSerializationUtils. From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class. Already there are several other places in some other tools where I use JSON serialization where this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use. And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob' like document storage) this is also going to be handy.

    Performance?

    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. This will change though depending on the size of objects serialized - the larger the object, the more processing time is spent inside the actual dynamically activated components and the less difference there will be. Dynamic code is always slower, but how much it really affects your application primarily depends on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling' as I do here, the overhead is small enough to not matter. All that being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy performance. For the configuration component speed is not that important, because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency.

    Dynamic Loading - Worth it?

    Dynamic loading is not something you need very often, but on occasion it makes sense. There's a price to be paid in added code and a performance hit which depends on how frequently the dynamic code is accessed. But for some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files, adding dependencies and loading down distributions. These days when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset…

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in .NET, C#

    Read the article

  • Which algorithms/data structures should I "recognize" and know by name?

    - by Earlz
    I'd like to consider myself a fairly experienced programmer. I've been programming for over 5 years now. My weak point though is terminology. I'm self-taught, so while I know how to program, I don't know some of the more formal aspects of computer science. So, what are practical algorithms/data structures that I could recognize and know by name? Note, I'm not asking for a book recommendation about implementing algorithms. I don't care about implementing them, I just want to be able to recognize when an algorithm/data structure would be a good solution to a problem. I'm asking more for a list of algorithms/data structures that I should "recognize". For instance, I know the solution to a problem like this: You manage a set of lockers labeled 0-999. People come to you to rent a locker and then come back to return the locker key. How would you build a piece of software to manage knowing which lockers are free and which are in use? The solution would be a queue or stack. What I'm looking for are things like "in what situation should a B-Tree be used", "what search algorithm should be used here", etc. And maybe a quick introduction to how the more complex (but commonly used) data structures/algorithms work. I tried looking at Wikipedia's list of data structures and algorithms but I think that's a bit overkill. So I'm looking more for the essential things I should recognize.
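    For the locker example above, a free-list kept in a queue really is the whole design; a minimal Java sketch (names illustrative):

        import java.util.ArrayDeque;
        import java.util.Queue;

        public class LockerDesk {
            // Free lockers wait in a queue: renting hands out the next free
            // number, returning a key puts the number back into the pool.
            private final Queue<Integer> free = new ArrayDeque<>();

            public LockerDesk() {
                for (int i = 0; i <= 999; i++) free.add(i);
            }

            public Integer rent()            { return free.poll(); }  // null = all taken
            public void giveBack(int locker) { free.add(locker); }
        }

    Recognizing that "hand out any free item, take items back in any order" maps to a queue (or a stack, when reuse order doesn't matter) is exactly the kind of pattern-matching the question is after.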

    Read the article

  • Executing Secondary Applications

    - by JooBlow
    I have an application I am attempting to make "portable". The application contains a lot of secondary utility functions that I would like to execute on external files (from the app). I tried adding them to the build process but I didn't get any "Executables" for them (just the main one and a few others). Is there a way to get these to execute? They are basically command-line utility functions that process some text files, but they use large files in the distribution and are also used by the main application. Thanks

    Read the article

  • Super-silent (mid tower) case and fan combo

    - by Dennis G.
    I want to build an HTPC for music/video/Blu-ray playback (no gaming). I don't need an expensive HTPC case but just want to go with a standard medium tower case. However, I want it to be super silent so it doesn't make any annoying fan/disk noises when I watch movies. Ideally, it shouldn't make any noticeable noise at all. I understand that choosing a board, CPU and graphics card that run cool and don't consume a lot of power is important for designing a quiet machine, and I think I have that covered. However, there are so many choices in regards to cases, fans and power supplies that it's hard to get started. What are your recommendations for a case/fan (CPU + case)/power supply combination that runs absolutely silent and can cool a standard Intel system with a low-power (possibly passively cooled) graphics card? I'm usually a fan of Antec cases; would an Antec Mini P180 be a good starting point? If so, which case fans, CPU fan and power supply would you recommend?

    Read the article

  • What amount of physical RAM would a typical "commodity class" server have, as of late 2013?

    - by marathon
    I'm trying to spec out servers for my company's infrastructure group to build. They tell me anything more than 2GB is too much, which I find ridiculous considering cheap DRAM is about 15 bucks a DIMM in bulk and our particular software runs better with more memory. I tried to find out how much RAM Google's servers use, and pinning down a number is hard. The best I could find, in a Google research paper, was that in 2008 their commodity servers were using 2GB and 4GB DIMMs, but the paper never said how many. I realize "commodity server" is a vague term, but I'm just looking for a rough range of RAM in use. I suspect at least 16GB is going to be the norm.

    Read the article

  • DVD-ROM disappeared under Windows-7

    - by hometoast
    Windows 7 (build 7100) has forgotten about, and refuses to recognize, the optical drive attached to my machine. A Linux LiveCD sees it (and even boots from it), so the problem is isolated to the Windows installation. I do have VirtualCloneDrive installed, but I know I had access to the optical drive after installing it. Only recently did I try to rip an ISO image for mounting in a VM. I do not see the device under Device Manager (Start, Run, "devmgmt.msc"). I really do not want to do a fresh install. What are my next steps?
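    One common culprit worth ruling out before a reinstall (a suggestion, not a guaranteed cure): filter drivers such as the ones VirtualCloneDrive installs register themselves in the UpperFilters/LowerFilters values of the CD/DVD device class, and stale entries there are a frequent cause of vanished optical drives. From an elevated command prompt, then reboot:

        reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" /v UpperFilters /f
        reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" /v LowerFilters /f

    Deleting these values removes VirtualCloneDrive's hook as well, so expect to reinstall it afterwards.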

    Read the article

  • 2D non-tile based map editor

    - by Jonesy
    I am currently developing a relatively simple 2D, top-down oriented adventure game for the iPhone and was wondering what would be the easiest way to create the maps for my game. I figured I would need some kind of visual editor that would give me immediate feedback and would allow me to place all objects in the world exactly where I want them. I could then load the saved representation of the world I create in the editor into my game. So, I am looking for a simple map editor that allows me to do this. All the objects in my game are simply textured rectangles built up from two triangles. All I need to be able to do is position different rectangles/objects in the map and give them a texture. I am using texture atlases, so it would be useful to be able to assign portions of textures to the objects. I then need to be able to extract all the objects from the saved representation of my maps, together with the name/identifier of the texture (atlas) they use and the area of the texture atlas. I have looked at some tile-based map editors like Tiled and Ogmo, but they don't seem to be able to do what I want. Any suggestions? EDIT: a more concrete example: something like the GameMaker level editor, but with added export functionality in a handy format; a sketch of such a format follows below.
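    As a sketch of what the saved representation could look like (format and field names entirely hypothetical), a flat object list is usually enough for a non-tiled map - each entry is one textured rectangle plus the atlas region it draws from:

        {
          "objects": [
            { "x": 120.0, "y": 64.0,  "width": 32.0, "height": 48.0,
              "atlas": "world.atlas", "region": "mushroom" },
            { "x": 300.5, "y": 210.0, "width": 64.0, "height": 64.0,
              "atlas": "world.atlas", "region": "hedgehog" }
          ]
        }

    Loading then reduces to iterating the array and building the two-triangle quads, which keeps the choice of editor independent of the game code.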

    Read the article

  • How to disable Yoxos popups in Eclipse on Linux

    - by MikeN
    I used the http://eclipsesource.com/en/yoxos/yoxos-ondemand/? tool to build an Eclipse distribution, but it keeps giving these annoying Yoxos popups. I don't even get whether Yoxos is free or paid or just annoying as heck. How do you get rid of these Yoxos popups? This is the text of the popup: POPUP: Thank you for evaluating Pydev Extensions. If you wish to license Pydev Extensions, please visit the link below for instructions on how to buy your license: Buy When you license it, not only will you get rid of this dialog, but you will also help fostering the Pydev Open Source version. If you wish more information on the available features, you can check it on the link below: Features Matrix Or you may browse the homepage starting at its initial page: Pydev Extensions Main Page The links don't allow you to register or buy the PyDev extensions. How can I switch to the free PyDev? In my other Eclipse installs I just used the free PyDev and it is exactly the same.

    Read the article

  • Android-Libgdx-ProGuard: Usefulness without DexGuard? [on hold]

    - by Rico Pablo Mince
    So I'm developing a game for Android - using LibGDX - and noticed that the Android SDK (HDK, MDK, WhatTheHellEvarDK) has ProGuard built-in. Browsing the ProGuard page is like searching Google: you get that the idea is to sell some product (in this case, it's DexGuard). That leaves me wondering what features are left out of ProGuard that a game developer targeting Android should worry about. For instance, the ProGuard FAQs answer the question: "Does ProGuard encrypt string constants?" by saying: "No. String encryption in program code has to be perfectly reversible by definition, so it only improves the obfuscation level. It increases the footprint of the code. However, by popular demand, ProGuard's closed-source sibling for Android, DexGuard, does provide string encryption, along with more protection techniques against static and dynamic analysis." Alright. OK. But isn't "...improves the obfuscation level" EXACTLY what ProGuard is supposed to do? Are there better options that can be implemented at build-time in Eclipse using the Gradle options and Libgdx? In particular, the assets folder and res-specific folders will need some protection. The code itself doesn't cure cancer, but I'd prefer if nobody could copy/paste it with different game art and call it "IhAxEdUrGamE"....
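    For what it's worth, ProGuard's shrinking and obfuscation work fine without DexGuard; the libgdx-specific concern is keeping the classes the framework reaches via reflection. A minimal proguard.cfg sketch (a common community starting point under a stock libgdx setup, not an official list):

        -verbose
        # libgdx instantiates backend/native-bridge classes reflectively,
        # so they must survive shrinking and renaming (assumption: stock libgdx)
        -keep class com.badlogic.** { *; }
        -dontwarn com.badlogic.gdx.**
        # keep generic signatures for your own reflection/serialization needs
        -keepattributes Signature

    Note that ProGuard (and DexGuard, for that matter) only transforms bytecode: the assets and res folders ship as-is inside the APK, so no obfuscation setting stops someone unzipping the APK and copying the art; that needs its own packing or encryption scheme.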

    Read the article

  • VMWare-Tools Installation fails on Ubuntu 11.04

    - by Ajay
    I am trying to install VMwareTools-8.4.6-385536.tar.gz (VMware Tools) on the following operating system: Ubuntu 11.04 - Linux ubuntu 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 03:31:50 UTC 2011 i686 i686 i386 GNU/Linux. I am using VMware Player version 3.1.4 - build 385536. After starting the installation I am getting the following errors:
    What is the directory that contains the init scripts? [/etc/init.d] Error opening No such file or directory
    Distribution provided drivers for Xorg X server are used. Skipping X configuration because X drivers are not included. Creating a new initrd boot image for the kernel. update-initramfs: Generating /boot/initrd.img-2.6.38-8-generic Starting VMware Tools services in the virtual machine: Switching to guest configuration: done Blocking file system: done Guest operating system daemon: failed Virtual Printing daemon: done Unable to start services for VMware Tools
    Can somebody help with this?

    Read the article

  • USB drives not recognized all of a sudden (usb_storage not loaded lsmod does not report usb_storage)

    - by Siddharth
    I have tried most of the advice on askubuntu and other sites, from enabling usb_storage to fdisk -l, but I am unable to find steps to get it working again. sudo lsusb reports: Bus.... skipped 4 lines Bus 004 Device 002: ID 413c:3012 Dell Computer Corp. Optical Wheel Mouse Bus 005 Device 002: ID 413c:2105 Dell Computer Corp. Model L100 Keyboard Bus 001 Device 005: ID 8564:1000 sudo dmesg | tail reports: [ 69.567948] usb 1-4: USB disconnect, device number 4 [ 74.084041] usb 1-6: new high-speed USB device number 5 using ehci_hcd [ 74.240484] Initializing USB Mass Storage driver... [ 74.256033] scsi5 : usb-storage 1-6:1.0 [ 74.256145] usbcore: registered new interface driver usb-storage [ 74.256147] USB Mass Storage support registered. [ 74.257290] usbcore: deregistering interface driver usb-storage fdisk -l reports: Device Boot Start End Blocks Id System /dev/sda1 * 2048 972656639 486327296 83 Linux /dev/sda2 972658686 976771071 2056193 5 Extended /dev/sda5 972658688 976771071 2056192 82 Linux swap / Solaris I think I need steps to install and get the usb_storage module working. Edit: I tried sudo modprobe -v usb-storage, which reports: insmod /lib/modules/3.2.0-48-generic-pae/kernel/drivers/usb/storage/usb-storage.ko Edit: jsiddharth@siddharth-desktop:~$ sudo udevadm monitor --udev monitor will print the received events for: UDEV - the event which udev sends out after rule processing UDEV [4757.144372] add /module/usb_storage (module) UDEV [4757.146558] remove /module/usb_storage (module) UDEV [4757.148707] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6 (usb) UDEV [4757.149699] add /bus/usb/drivers/usb-storage (drivers) UDEV [4757.151214] remove /bus/usb/drivers/usb-storage (drivers) UDEV [4757.156873] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0 (usb) UDEV [4757.160903] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9 (scsi) UDEV [4757.164672] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9/scsi_host/host9 (scsi_host) UDEV [4757.165163] remove /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9/scsi_host/host9 (scsi_host) UDEV [4757.165440] remove /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9 (scsi) Narrowing down more: it seems like I need usb_storage to load and stay loaded as a module. jsiddharth@siddharth-desktop:~$ lsmod | grep usb usbserial 37201 0 usbhid 41937 0 hid 77428 1 usbhid Still no usb driver mounted, nor does a device show up in /dev. Any step-by-step process to debug and fix this would be really helpful.
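    The dmesg output above shows usb-storage registering and then immediately deregistering, which is the pattern a blacklist entry or conflicting rule usually produces. A few things worth checking (standard Ubuntu paths; offered as debugging suggestions, not a guaranteed fix):

        # is the module blacklisted or aliased away anywhere?
        grep -rn "usb.storage" /etc/modprobe.d/

        # load it verbosely and watch what the kernel says
        sudo modprobe -v usb-storage
        dmesg | tail -n 20

        # if it stays resident, make it load at every boot
        echo usb-storage | sudo tee -a /etc/modules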

    Read the article

  • ADF & ADF Mobile Bootcamps in UK, Austria, Spain, Switzerland

    - by JuergenKress
    You want to learn more about the innovative features of ADF and ADF Mobile? You want answers on how to build modern & mobile applications? Join our 2-day ADF & ADF Mobile hands-on Bootcamp. UK Spain Switzerland Austria (one day class) Details Training Audience: Developers, Project Managers, Architects & ADF beginners Trainer: Mireille Duroussaud Language: English Cost: Free of charge, cancellation fee 50€ or no-show fee 2.000€ Note: Bootcamp is limited to 20 persons on a first come, first served basis. Preparation: Attend the free online training ADF 11g Presales Specialist. Follow-up: Pass the free online assessment ADF 11g Presales Specialist. Highlights of the Workshop: Oracle Fusion Middleware Development Platform; Developing Reusable Business Services; Developing Rich Web User Interfaces and Portals; Developing Mobile Apps for iOS and Android with Oracle ADF Mobile. For details and registration please visit our registration page. WebLogic Partner Community: For regular information become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • What is an effective git process for managing our central code library?

    - by Mathew Byrne
    Quick background: we're a small web agency (3-6 developers at any one time) developing small to medium sized Symfony 1.4 sites. We've used git for a year now, but most of our developers have preferred Subversion and aren't used to a distributed model. For the past 6 months we've put a lot of development time into a central Symfony plugin that powers our custom CMS. This plugin includes a number of features, helpers, base classes etc. that we use to build custom functionality. This plugin is stored in git, but branches wildly as the plugin is used in various products and is pulled from/pushed to constantly. The repository is usually used as a submodule within a major project. The problems we're starting to see now are a large number of merge conflicts and backwards-incompatible changes brought into the repository by developers adding custom functionality in the context of their own project. I've read Vincent Driessen's excellent git branching model and successfully used it for projects in the past, but it doesn't seem to quite apply well to our particular situation; we have a number of projects concurrently using the same core plugin while developing new features for it. What we need is a strategy that provides the following:
    - A methodology for developing major features within the code repository.
    - A way of migrating those features into other projects.
    - A way of versioning the core repository, and of tracking which version each major project uses.
    - A plan for migrating bug fixes back to older versions.
    - A cleaner history that's easier to see where changes have come from.
    Any suggestions or discussion would be greatly appreciated.
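    One hedged sketch of the versioning and migration pieces (branch and tag names are illustrative; this is essentially git-flow adapted to a shared plugin): cut tagged releases on the plugin repository and pin each project's submodule to a tag instead of a moving branch.

        # in the plugin repository: stabilize and tag a release
        git checkout -b release/1.4 develop
        git tag -a v1.4.0 -m "core plugin 1.4.0"
        git push origin release/1.4 --tags

        # in a consuming project: pin the submodule to that exact version
        cd plugins/core-cms-plugin          # hypothetical submodule path
        git fetch --tags
        git checkout v1.4.0
        cd ../..
        git add plugins/core-cms-plugin
        git commit -m "Pin core plugin at v1.4.0"

        # carry a bug fix back to an older release line
        git checkout release/1.3
        git cherry-pick <fix-sha>           # sha of the fix commit

    Projects then upgrade deliberately by moving the submodule pointer, project-specific functionality stays in the project (or reaches the plugin only through a reviewed feature branch), and the tags give each project a traceable answer to "which core version are we on".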

    Read the article

  • NFS or GFS for LVS 10 Server Setup

    - by Michael Robinson
    Currently we have a 10-server LVS hosting setup. The people we hired to set it up did not know anything about GFS, which was our preferred central storage file system solution. As we had a tight time constraint, we just told them to use whatever they were familiar with, which is NFS. I have since done some research and it seems that NFS is not ideal for the type of high-traffic site we are hoping to build, but I couldn't find much online about the significant differences between the two. As we are setting up all the servers again right now, should we stick with NFS or find someone who knows how to set up GFS and go with that? We need a setup that is highly reliable and scalable, since after the initial setup is done we expect large increases in traffic and load.

    Read the article

  • links for 2011-01-07

    - by Bob Rhubart
    - Enterprise Software Development with Java: GlassFish 3 vs. JBoss 6 - Is the Web Profile ready for the Enterprise? (tags: ping.fm)
    - Bay Area Coherence Special Interest Group (BACSIG) Jan 20: The Jan 20 meeting of the Bay Area Coherence Special Interest Group (BACSIG) features presentations by Rob Lee (Coherence 3.6 Clustering Features), Rao Bhethanabotla (Efficient Management and Update of Coherence Clusters to Reduce Down Time), and Christer Fahlgren (How To Build a Coherence Practice). (tags: oracle otn coherence sig)
    - Michael T. Dinh: VirtualBox Command Line: "I have manually configured VirtualBox Host-Only Ethernet Adapter for static IP. However, the IP can change after reboot which affects connectivity with the Guest with static IP." - Michael T. Dinh (tags: oracle virtualization virtualbox)
    - Michel Schildmeijer: Oracle WebLogic - Configuring DyeInjection Monitor: "A fairly unknown tool within WLDF (WebLogic Diagnostic Framework) is the DyeInjection Monitor. With this monitor configured one can track a user or client address within a WebLogic system." - Michel Schildmeijer (tags: oracle weblogic)
    - David Butler: Master Data Management Implementation Styles: "Oracle MDM Solutions provide strong data federation and integration capabilities which are key to enabling the use of the Confederated Hub as a possible architectural style approach." - David Butler (tags: oracle otn softwarearchitecture)
    - Kenneth Downs: Can You Really Create A Business Logic Layer?: "Don't be afraid to use the database for what it is good for, and leave the arguments about "where everything belongs" to those with too much time on their hands." - Kenneth Downs (tags: businesslogic database softwarearchitecture)
    - IASA Perspectives Magazine - Fall 2010: Fall 2010 edition of the International Association of Software Architects (IASA) Perspectives magazine. (tags: softwarearchitecture iasi entarch)
    - Using the DB Adapter in Oracle SOA Suite: returning status information: "In this tutorial I will show you an example of how you can implement this within the Oracle SOA Suite (and because the DB Adapter can also be used within the Oracle Service Bus, the principles also apply to implementing it within the OSB)." - Henk Jan van Wijk (tags: oracle otn soa soasuite database)
    - 4th International SOA Symposium + 3rd International Cloud Symposium by Thomas Erl - call for presentations (SOA Partner Community Blog): The International SOA and Cloud Symposium brings together lessons learned and emerging topics from SOA and Cloud projects, practitioners and experts. The two-day conference agenda will be organized into several tracks. (tags: oracle otn soa cloud)

    Read the article

  • Spring Roo Database Reverse Engineer with Oracle

    - by kerry
    So you are trying to reverse engineer an Oracle database with roo? Unfortunately, due to licensing restrictions on the Oracle JDBC drivers, this is a little difficult. There are a few blog posts and forum threads that address the problem, but I figured I would post what worked for me here. First, you need to download the appropriate Oracle drivers from Oracle. The required login, stringent password requirements, nosy registration form, and general system instability made this a pretty painful step for me. I'd also like to say that companies that have password requirements that don't allow symbols (or any other non-standard requirement) have a special place in my heart. Having to recover my password every time I go to your site virtually guarantees I will only go there when I absolutely have to (not often). Anyways, once you have it downloaded you need to install it with Maven:

        mvn install:install-file -Dfile=~/Downloads/ojdbc6.jar -DgroupId=com.oracle -DartifactId=ojdbc6 -Dversion=11.2.0.3 -Dpackaging=jar -DgeneratePom=true

    Here comes the fun part. You need to create an OSGi wrapper for the driver to install it in roo; otherwise, roo cannot see the driver. Create a new folder and put in it the contents of the oracle roo addon pom gist I created. Now build it with Maven. You may want to change some of the artifact ids and dependencies for your particular situation.

        mvn package

    Now open a roo shell and execute the following command:

        osgi install --url file:///Users/me/my-osgi-project/target/the-jar-it-built.jar

    Now run (in roo):

        jpa setup --provider HIBERNATE --database ORACLE
        dependency remove --groupId com.oracle --artifactId ojdbc14 --version 10.2.0.2
        dependency add --groupId com.oracle --artifactId ojdbc6 --version 11.2.0.3
        database properties set --key database.driverClassName --value oracle.jdbc.OracleDriver
        database properties set --key database.url --value jdbc:oracle:thin:@%YOUR_CONNECTION_INFO%
        database properties set --key database.username --value %YOUR_USERNAME%
        database properties set --key database.password --value %YOUR_PASSWORD%
        database reverse engineer --schema %YOUR_SCHEMA% --package ~.domain

    If you get package loading exceptions when running the reverse engineer command, you can uninstall the OSGi bundle, set the package to optional in the OSGi pom in the IncludedPackages tag (javax.some.package.*;resolution:=optional), rebuild, then reinstall in roo.

    Read the article

  • White Paper on Analysis Services Tabular Large-scale Solution #ssas #tabular

    - by Marco Russo (SQLBI)
    Since the first beta of Analysis Services 2012, I have worked with many companies designing and implementing solutions based on Analysis Services Tabular. I am glad that Microsoft published a white paper about a case study covering one of these scenarios: An Analysis Services Case Study: Using Tabular Models in a Large-scale Commercial Solution. Alberto Ferrari is the author of the white paper and many people contributed to it. The final result is a very technical document based on a case study, which provides a level of detail that I don't often see in other case studies (which are usually more marketing-oriented). The white paper has the following structure:
    - Requirements (data model, capacity planning, client tool)
    - Options considered (SQL Server Columnstore Indexes, SSAS Multidimensional, SSAS Tabular)
    - Data Model optimizations (memory compression, query performance, scalability)
    - Partitioning and Processing strategy for near real-time latency
    - Hardware selection (NUMA analysis, Azure VM tests)
    - Scalability tests (estimation of maximum users per node)
    If you are in charge of evaluating Tabular as an analytical engine, or if you have to design your solution based on Tabular, this white paper is a must-read. But if you just want to increase your knowledge of Analysis Services, you will find a lot of useful technical information. That said, my favorite quote of the document is the following one, funny but true: […] After several trials, the clear winner was a video gaming machine that one guy on the team used at home. That computer outperformed any available server, running twice as fast as the server-class machines we had in house. At that point, it was clear that the criteria for choosing the server would have to be expanded a bit, simply because it would have been impossible to convince the boss to build a cluster of gaming machines and trust it to serve our customers. But, honestly, if a business has the flexibility to buy gaming machines (assuming the machines can handle capacity) - do this. Owen Graupman, inContact. I want to write a longer discussion about how companies are adopting Tabular in scenarios where it is the hidden engine of a more complex solution (and not the classical "BI system"), because it is more frequent than you might expect (and has several advantages over many alternative approaches).

    Read the article

  • Textures quality issues with Libgdx

    - by user1876708
    I have drawn several vector objects and characters (in Adobe Illustrator) for my game on Android. They are all scalable to any size without any quality loss (of course, it's vector ^^). I tried to simulate my gameboard directly in Illustrator just before setting up my assets in libgdx to implement them in my game. I set all the objects at the right size so that they fit perfectly on the XHDPI device I am running my tests on. As you can see, it works great (for me at least ^^); the PNG quality is as good as expected! So I edited all my PNGs at this size, set up my assets in libgdx and built my game apk. And here is a screenshot of my gameboard (don't pay attention to the differences in placement, but compare the objects present in both screenshots). As you can see, I have a loss of PNG quality in the game. It can be seen clearly on the hedgehog PNG, and also (though less obviously) on the mushroom (look at the outline) and the hole PNG. If you really pay attention, you can see pixels on every object that are not visible in my first screenshot. And I just can't figure out why this is happening Oo If you have any ideas, you are very welcome! Thanks.

    PS: You can compare the two gameboards more clearly at these two links (view them at 100%, high resolution):
    Good quality link, from Illustrator
    Poor quality link, from the game

    Second phase of tests: we display an object (the hedgehog) on our main menu screen to see how it looks. The thing is, it looks exactly as it is supposed to: high quality with no pixels. The hedgehog PNG comes from an atlas:

        layer.addActor(hedgehog);

    No loss of quality with this method. So we think the problem comes from the method we are using to display it on our gameboard:

        blocks[9][3] = new Block(TextureUtils.hedgehog, new Vector2(9, 3));

    The block gets its size from the vector we associate with it, but we have a loss of quality with this method.
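    One thing we still need to rule out is the texture filter: libgdx defaults to nearest-neighbour filtering, which produces exactly this kind of pixelation when a texture is drawn at a non-native scale. Below is a minimal sketch of enabling linear filtering, assuming the texture is loaded directly (the file name and class are illustrative, not our actual code):

        import com.badlogic.gdx.ApplicationAdapter;
        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.graphics.Texture;

        public class FilterCheck extends ApplicationAdapter {
            private Texture hedgehog;

            @Override
            public void create() {
                hedgehog = new Texture(Gdx.files.internal("hedgehog.png"));
                // Linear min/mag filtering interpolates between texels instead of
                // snapping to the nearest one, which smooths scaled rendering.
                hedgehog.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);
            }

            @Override
            public void dispose() {
                hedgehog.dispose();
            }
        }

    If the image comes from a TexturePacker atlas instead, the equivalent settings are filterMin and filterMag at pack time.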

    Read the article

  • New York Coherence Special Interest Group, Jan 13 2011

    - by ruma.sanyal
    Please join us for our next exciting event. We are pleased to announce that Aleksandar Seovic, Craig Blitz and Madhav Sathe will be presenting to our group. Presentation details are provided below.

    Time: 3:00pm - 6:00pm ET
    Where: Oracle Office, Room 30076, 520 Madison Avenue, 30th Floor, NY

    We will be providing snacks and beverages. Register! Registration is required for building security.

    Presentations:

    Getting the Most out of your Coherence Cluster with Oracle Enterprise Manager - Madhav Sathe, Principal Product Manager (Oracle)

    How To Build a Coherence Practice - Craig Blitz, Senior Principal Product Manager (Oracle)
    Congratulations on your decision to buy Oracle Coherence. We believe you have chosen an excellent product. Now the hard work begins. To help you get the most out of Coherence from both a project and an enterprise perspective, this talk will introduce you to resources available from Oracle and through the Coherence ecosystem. The talk will also discuss organizational best practices you can implement to ensure success with Coherence. The speaker will draw on his extensive experience with customers' Coherence deployments to show what works and what doesn't in practice.

    Coherence in the Cloud - Aleksandar Seovic, Founder and Managing Director (S4HC)
    The Amazon Web Services cloud provides a great and affordable foundation for the next generation of scalable web applications. Application servers, load balancers, and scalable storage can be provisioned in a matter of minutes and used for pennies an hour. However, the AWS cloud also brings a set of new architectural challenges, such as transient file systems and dynamically assigned IP addresses. In this session we will look at a real-world example of how Coherence can be used to address some of these challenges, and show why the combination of the AWS cloud and Coherence has great synergy.

    Read the article

  • Maintenance Plan Reporting - Append To File - Clean Up?

    - by Adam J.R. Erickson
    Background: (SQL Server 2005, Standard Ed.) I have a maintenance plan running backups: a full backup once a day and a t-log backup every 15 minutes. I have it set to create a text-file report of each run, but that creates a lot of files on the file server. These are hard to sort through, which makes them less useful.

    Question: There is an option in the "Reporting and Logging" settings for appending all logs together, but how do you clean this out? If you're appending to the same log file every time, how should you make sure this file doesn't grow indefinitely? Is there a built-in function to clean out portions of appended logs, like there is for cleaning out individual old log files?

    Read the article

  • Oracle Java Cloud Service - Platform as a Service for Your Java Applications

    - by GeneEun
    Oracle Java Cloud Service is an enterprise-grade Platform as a Service for developing, testing, and deploying business applications. For Java developers, Java Cloud Service provides the power, flexibility, and performance of a true Java EE container in the cloud. Java Cloud Service delivers one of the key advantages of the Java platform: the ability to “write once, run anywhere”. Because of the standards-based approach, there's no need to worry that applications you build and deploy are forever locked into the Oracle Cloud. In fact, you can use Java Cloud Service just as you would an on-premise Java EE environment and deploy your Java applications on a Java Cloud Service instance as-is. Provisioning of Java Cloud Service instances is self-service and takes only minutes, making access to Java environments both quick and easy. Java Cloud Service instances are also automatically associated with Oracle Database Cloud Service instances, so there's no complex setup involved in getting a complete application environment up and running.

    If you're attending Oracle OpenWorld in San Francisco this week, I'm sure you've seen that there are many sessions covering Oracle Cloud services, including Java Cloud Service. Each session will provide a wealth of information, so I highly recommend you consult your conference schedule and try to check them out. In the meantime, here's a short video about Java Cloud Service. Enjoy!

    Read the article

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction to the new features we added to the StreamInsight LinqPad driver. We’ll show you how to connect to a remote server, how to inspect the entities present on that server, how to compose on top of them, and how to manage their lifetime.

    Installing the driver

    Info on how to install the driver can be found in an earlier blog post here.

    Establishing connections

    As you click on the “Add Connection” link in the left pane, you will notice that it is now possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it’s possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable, so you can create a new application if you don’t want to make changes to an existing application. If you choose a new application name you will be prompted for confirmation before it gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there’s any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (e.g., the remote server could not be reached).

    The context for remote servers

    Let’s take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context – think of it as a class that wraps all the code that you’re writing. If you’re connecting to a live server, the context will contain the following:

    - The application object itself.
    - All entities present in this application (sources, sinks, subjects and processes).

    The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon, which allows users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as follows: the first name is the name of the property on the context class, and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names, so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types – more on that later.

    Remoting is not supported

    As you play with the entities exposed by the context, you will notice that you can’t read from or write to them directly. If, for instance, you try to dump the content of an entity, you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server, and dumping its content means reading the events produced by this entity into the local process.

        ObservableSource.Dump();

    will yield the following error:

        Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer.
    This basically tells you that you can call the Bind() method to direct the output of this source to a sink, which has to be defined on the remote machine as well. You can’t bring the results to the LinqPad window unless you write code specifically for that.

    Compose queries

    You may ask – what’s the purpose of all that? After all, the same information is present in the EventFlowDebugger, so why bother showing it in LinqPad? First of all, what gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense, we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let’s say that this code creates an entity:

        using (var server = Server.Connect(...))
        {
            var a = server.CreateApplication("WhiteFish");
            var src = a
                .DefineObservable<int>(() => Observable.Range(0, 3))
                .Deploy("ObservableSource");

    If later we want to compose with the source, we have to fetch it first, and only then can we bind something to it:

        a.GetObservable<int>("ObservableSource").Bind(...

    This means that we had to know a bunch of things about this entity: that it’s a source, that it’s an observable, that it produces a result with payload Int32, and that it’s named “ObservableSource”. Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them, and it’s much easier to make sense of what’s available.

    Let’s look at a scenario where composition is plausible. With the new programming model it’s possible to create “cold” sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it’s much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source – we will see that its type is Func<int, int, IQbservable<int>>. This in effect means that we need to pass two int parameters before we can get a source that produces events, and the type of those events is int. In the particular case of my example, I had the source produce a range of integers and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet; then someone else (e.g. an administrator) can pass whatever parameters are appropriate and run the process.

    Proxy Types

    Here’s an interesting scenario – what if someone created a source on a server but forgot to tell you what type they used? Worse yet, they might have used an anonymous type, and even though they can refer to it by name, you can’t figure out how to use that type. Let’s walk through an example that shows how you can compose against types you don’t have the definition of. This is how we can create a source that returns an anonymous type:

        Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1");

    Now if we refresh the connection we can see the new source named O1 appear in the list. But what’s more important is that we now have a type to work with, so we can compose a query that refers to the anonymous type:
        var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5);
        var filter = from i in O1
                     where i > threshold
                     select i;
        filter.Deploy("O2");

    You will notice that the anonymous type defined with the statement new { I = i } can now be manipulated by a client that does not have access to it, because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous – the test is whether the LinqPad driver can resolve the type or not. If it’s not possible, then a new type will be generated that approximates the type that exists on the server.

    Control metadata

    In addition to composing processes on top of the existing entities, we can do other useful things. We can delete them – nothing new here, as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes in handy. There’s another way to find out what’s behind a property – dump its expression. The first line in the output tells us the name of the entity used to build this property in the context.

    Runtime information

    So let’s create a process to see what happens. We can bind a source to a sink and run the resulting process, as in the sketch at the end of this post. If you right-click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you have access to the process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view, etc.).

    Regards,
    The StreamInsight Team
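    For the bind-and-run step referenced above, here is a minimal sketch, assuming the StreamInsight 2.1 reactive API; the endpoint address and the entity names are illustrative assumptions:

        using System;
        using System.Reactive;
        using System.ServiceModel;
        using Microsoft.ComplexEventProcessing;
        using Microsoft.ComplexEventProcessing.Linq;

        class BindAndRun
        {
            static void Main()
            {
                using (var server = Server.Connect(
                    new EndpointAddress("http://localhost/StreamInsight/Default")))
                {
                    var app = server.Applications["WhiteFish"];

                    // Fetch the source that was deployed earlier.
                    var source = app.GetObservable<int>("ObservableSource");

                    // Define and deploy a sink on the server; it must live remotely
                    // because remoting results back to LinqPad is not supported.
                    // Note the observer body runs in the server process, so that is
                    // where the console output appears.
                    var sink = app.DefineObserver(() =>
                        Observer.Create<int>(i => Console.WriteLine(i)));
                    var deployedSink = sink.Deploy("ConsoleSink");

                    // Bind the source to the sink and run it as a named process.
                    using (source.Bind(deployedSink).Run("DemoProcess"))
                    {
                        Console.WriteLine("Process running; press Enter to stop it.");
                        Console.ReadLine();
                    }
                }
            }
        }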

    Read the article
