Search Results

Search found 9129 results on 366 pages for 'beta versions'.

Page 144/366 | < Previous Page | 140 141 142 143 144 145 146 147 148 149 150 151  | Next Page >

  • No contact list in MSN

    - by David
    Since today I can't see my contact list in Empathy IM, using the MSN protocol. I've tried uninstalling, reinstalling, and erasing all config files from my computer (using Ubuntu Tweak and erasing the config files from my /home folder), but nothing solves the problem. Some time ago people had the same problem; they solved it by changing a line in a script, but that bug was fixed in the latest versions of Empathy. I've tried to change that script, using other lines. /usr/lib/pymodules/python2.6/papyon/service/description/SingleSignOn/RequestMultipleSecurityTokens.py I've changed the line CONTACTS = ("contacts.msn.com", "MBI") to the older one: CONTACTS = ("contacts.msn.com","?fs=1&id=24000&kv=7&rn=93S9SWWw&tw=0&ver=2.1.6000.1") But this did not fix the bug. In the advanced options I have this (in the Empathy account options): Server: messenger.hotmail.com Port: 1863 How can I solve this? Please help

    Read the article

  • Oracle's Thirteen Engineered Systems

    - by Luis Moreno Campos
    You already need a catalogue to keep up with all the new systems coming out of the Oracle factory, engineered and ready to go. In the Exadata portfolio you have 4 systems: the Quarter-Rack X2-2 Database Machine, the Half-Rack X2-2 Database Machine, the Full-Rack X2-2 Database Machine, and the X2-8 Database Machine. But if Exadata presents a stunning portfolio, Exalogic doesn't fall behind, putting out 6 versions: 3 sizes (Quarter, Half and Full) with x86 processors and the same 3 sizes with SPARC-based processors. Finally we have 3 new systems called SPARC Superclusters, where Solaris 11 was re-engineered to get more out of the power of InfiniBand: "Available in the next calendar year, the Oracle SPARC Supercluster will be available in T3-2, T3-4 and M5000-based configurations". I see Oracle delivering on its promise to tightly integrate hardware and software to work closer together.

    Read the article

  • SQL SERVER – What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1

    - by Pinal Dave
    This is the first part of the series on Incremental Statistics. Here is the index of the complete series: What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1; Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2; DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3. Statistics are considered one of the most important aspects of SQL Server performance tuning. You might have often heard the phrase related to performance tuning: “Update Statistics before you take any other steps to tune performance”. Honestly, I have said the above statement many times, and many times I have personally updated statistics before starting any performance tuning exercise. You may agree or disagree with the point, but there is no denying that statistics play an extremely vital role in performance tuning. SQL Server 2014 has a new feature called Incremental Statistics. I have been playing with this feature for quite a while and I find it very interesting. After spending some time with this feature, I decided to write about this subject over here. New in SQL Server 2014 – Incremental Statistics Well, it seems like lots of people want to start using SQL Server 2014's new feature of Incremental Statistics. However, let us understand what this feature actually does and how it can help. I will try to simplify this feature first before I start working on the demo code. Code for all versions of SQL Server Here is the code which you can execute on all versions of SQL Server, and it will update the statistics of your table. The keyword which you should pay attention to is WITH FULLSCAN. It will scan the entire table and build brand new statistics for you, which your SQL Server performance tuning engine can use for better estimation of your execution plan. UPDATE STATISTICS TableName(StatisticsName) WITH FULLSCAN Who should learn about this? Why? If you are using partitions in your database, you should consider implementing this feature. Otherwise, this feature is pretty much not applicable to you. Well, if you are using a single partition and your table data is in a single place, you still have to update your statistics the same way you have been doing. If you are using multiple partitions, this may be a very useful feature for you. In most cases, users have multiple partitions because they have lots of data in their table. Each partition will have data which belongs to itself. Now it is very common that each partition is populated separately in SQL Server. Real World Example For example, if your table contains data which is related to sales, you will have plenty of entries in your table. It will be a good idea to divide the table into multiple partitions and filegroups; for example, you can divide this table into 3 semesters, 4 quarters or even 12 months. Let us assume that we have divided our table into 12 different partitions. Now for the month of January our first partition will be populated, and for the month of February our second partition will be populated. Now assume that you have plenty of data in your first and second partitions. The month of March has just started and your third partition has started to populate. Due to some reason, if you want to update your statistics, what will you do? In SQL Server 2012 and earlier versions You will just use the code with WITH FULLSCAN and update the entire table.
That means even though you have new data only in the third partition, you will still update the entire table. This will be a VERY resource-intensive process, as you will be updating the statistics of partitions 1 and 2 where the data has not changed at all. In SQL Server 2014 You will just update the statistics of Partition 3. There is a special syntax where you can now specify which partition you want to update. The impact of this is that it smartly merges the new data with the old statistics and updates the entire statistics object without doing a FULLSCAN of your entire table. This has a huge impact on performance. Remember that the new feature in SQL Server 2014 does not change anything besides the capability to update a single partition. However, there is one feature which is indeed attractive. Previously, when table data changed by 20%, a statistics update was triggered. Now the same threshold is applicable to a single partition. That means if your partition faces a 20% data change, it will also trigger a partition-level statistics update which, when merged into your final statistics, will give you better performance. In summary If you are not using partitions, this feature is not applicable to you. If you are using partitions, this feature can be very helpful to you. Tomorrow: We will see working code of SQL Server 2014 Incremental Statistics. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Statistics, Statistics
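    To make the 2014 behaviour concrete, here is a hedged T-SQL sketch (table, column, statistics and partition names are placeholders, not taken from the article; the statistics object must be created with INCREMENTAL = ON on a partitioned table before partition-level updates are allowed):

        -- Create the statistics as incremental (SQL Server 2014 and later).
        CREATE STATISTICS StatisticsName
        ON dbo.TableName (ColumnName)
        WITH FULLSCAN, INCREMENTAL = ON;

        -- Later, refresh only the partition that actually changed (partition 3 here)
        -- instead of rescanning the whole table.
        UPDATE STATISTICS dbo.TableName (StatisticsName)
        WITH RESAMPLE ON PARTITIONS (3);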

    Read the article

  • Black screen in installation when Nvidia graphic card plugged

    - by jopasso1
    When I try to install any recent version of Ubuntu, the screen shows some green and purple mess (like analog TVs when there's no signal). Then, a black screen. I guess it keeps booting in live/install mode, but I can't see it. I tried installing from CD and USB, and I tried changing some BIOS settings... I installed older versions, like 8.04, and it worked, but after updating the system, it crashed again. That's how I discovered that upgrading the Nvidia drivers made the system show a black screen again. After that, I unplugged the Nvidia card and installed 12.04 with the onboard card. It worked perfectly. Then, I plugged the Nvidia card back in and the system booted, but only showing that black screen again. I keep working with the onboard card, so far... The Nvidia card is a GeForce 8500GT.

    Read the article

  • Some notes on Reflector 7

    - by CliveT
    Both Bart and I have blogged about some of the changes that we (and other members of the team) have made to .NET Reflector for version 7, including the new tabbed browsing model, the inclusion of Jason Haley's PowerCommands add-in and some improvements to decompilation such as handling iterator blocks. The intention of this blog post is to cover all of the main new features in one place, and to describe the three new editions of .NET Reflector 7. If you'd simply like to try out the latest version of the beta for yourself you can do so here. Three new editions .NET Reflector 7 will come in three new editions: .NET Reflector .NET Reflector VS .NET Reflector VSPro The first edition is just the standalone Windows application. The latter two editions include the Windows application, but also add the power of Reflector into Visual Studio so that you can save time switching tools and quickly get to the bottom of a debugging issue that involves third-party code. Let's take a look at some of the new features in each edition. Tabbed browsing .NET Reflector now has a tabbed browsing model, in which the individual tabs have independent histories. You can open a new tab to view the selected object by using CTRL+CLICK. I've found this really useful when I'm investigating a particular piece of code but then want to focus on some other methods that I find along the way. For version 7, we wanted to implement the basic idea of tabs to see whether it is something that users will find helpful. If it is something that enhances productivity, we will add more tab-based features in a future version. PowerCommands add-in We have also included Jason Haley's PowerCommands add-in as part of version 7. This add-in provides a number of useful commands, including support for opening .xap files and extracting the constituent assemblies, and a query editor that allows C# queries to be written and executed against the Reflector object model . All of the PowerCommands features can be turned on from the options menu. We will be really interested to see what people are finding useful for further integration into the main tool in the future. My personal favourite part of the PowerCommands add-in is the query editor. You can set up as many of your own queries as you like, but we provide 25 to get you started. These do useful things like listing all extension methods in a given assembly, and displaying other lower-level information, such as the number of times that a given method uses the box IL instruction. These queries can be extracted and then executed from the 'Run Query' context menu within the assembly explorer. Moreover, the queries can be loaded, modified, and saved using the built-in editor, allowing very specific user customization and sharing of queries. The PowerCommands add-in contains many other useful utilities. For example, you can open an item using an external application, work with enumeration bit flags, or generate assembly binding redirect files. You can see Bart's earlier post for a more complete list. .NET Reflector VS .NET Reflector VS adds a brand new Reflector object browser into Visual Studio to save you time opening .NET Reflector separately and browsing for an object. A 'Decompile and Explore' option is also added to the context menu of references in the Solution Explorer, so you don't need to leave Visual Studio to look through decompiled code. We've also added some simple navigation features to allow you to move through the decompiled code as quickly and easily as you can in .NET Reflector. 
When this is selected, the add-in decompiles the given assembly. Once the decompilation has finished, a clone of the Reflector assembly explorer can be used inside Visual Studio. When Reflector generates the source code, it records the location information. You can therefore navigate from the source file to other decompiled source using the 'Go To Definition' context menu item. This then takes you to the definition in another decompiled assembly. .NET Reflector VSPro .NET Reflector VSPro builds on the features in .NET Reflector VS to add the ability to debug any source code you decompile. When you decompile with .NET Reflector VSPro, a matching .pdb is generated, so you can use Visual Studio to debug the source code as if it were part of the project. You can now use all the standard debugging techniques that you are used to in the Visual Studio debugger, and step through decompiled code as if it were your own. Again, you can select assemblies for decompilation; they are then decompiled, and you can debug them as if they were your own source code files. The future of .NET Reflector As I have mentioned throughout this post, most of the new features in version 7 are exploratory steps and we will be watching feedback closely. Although we don't want to speculate now about any other new features or bugs that will or won't be fixed in the next few versions of .NET Reflector, Bart has mentioned in a previous post that there are lots of improvements we intend to make. We plan to do this with great care and without taking anything away from the simplicity of the core product. User experience is something that we pride ourselves on at Red Gate, and it is clear that Reflector is still a long way off our usual standards. We plan for the next few versions of Reflector to be worked on by some of our top usability specialists who have been involved with our other market-leading products such as the ANTS Profilers and SQL Compare. I reiterate the need for the really great simple mode in .NET Reflector to remain intact regardless of any other improvements we are planning to make. I really hope that you enjoy using some of the new features in version 7 and that Reflector continues to be your favourite .NET development tool for a long time to come.

    Read the article

  • SQL Server Data Tools–BI for Visual Studio 2013 Re-released

    - by Greg Low
    Customers used to complain that the tooling for creating BI projects (Analysis Services MD and Tabular, Reporting Services, and Integration Services) was based on earlier versions of Visual Studio than the ones they were using for their other work in Visual Studio (such as C#, VB, and ASP.NET projects). To alleviate that problem, the shipment of those tools has been decoupled from the shipment of the SQL Server product. In SQL Server 2014, the BI tooling isn't even included in the released version of SQL Server. This allows the team to keep up to date with the releases of Visual Studio. A little while back, I was really pleased to see that the Visual Studio 2013 update for SSDT-BI (SQL Server Data Tools for Business Intelligence) had been released. Unfortunately, it then had to be withdrawn. The good news is that it's back, and you can get the latest version from here: http://www.microsoft.com/en-us/download/details.aspx?id=42313

    Read the article

  • Which iPhone/iPod Hardware to Support for iPhone Game

    - by ashes999
    I'm coming from a background of Android. I would like to figure out what kind of iBlah device hardware I need to support for iPhone game development. From my research, it seems that I need to support both iPod and iPhone. But which versions of the hardware? Is 3G still used? What about the 3GS? For example, the -S phones seem to have dual-core instead of single-core CPUs. I also worry about performance. On Android, I had a severely low-end phone, which meant pretty much anything that ran okay there ran okay on other phones. For iPhone, how do I figure out what to support in terms of performance?

    Read the article

  • Embedding the Silverlight version of the Open Media Player

    I'm working on a Video Portal Application and have selected the Open Video Player for embedded viewing of videos. There are many video players out there, but I selected this one because there are Silverlight and Flash versions in the project. Embedding is EASY! Code Snippet <%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="OpenPlayerSample._Default"...

    Read the article

  • How to run Ubuntu fully in initramfs?

    - by miernik
    I have a machine with 10 GB of RAM, and I would like to run Ubuntu on it (Debian is also OK if it's easier), fully in RAM, in such a way: I boot from a compressed image on a USB flash disk, that is uncompressed into RAM, and then I can remove the disk from the USB slot and use the system only with RAM, without any permanent disk. Whenever I make any changes that I want to be permanent, I would put the flash disk back into the USB slot (possibly not the same one as I used initially to boot, as I would like to keep many versions of the boot flash disk), and run some command that would save the current state into a compressed image on the disk. How can I set this up?

    Read the article

  • Ops Center zip documentation

    - by Owen Allen
    If you're operating in a dark site, or are otherwise without easy access to the internet, it can be tricky to get access to the docs. The readme comes along with the product, but that's not exactly the same as the whole doc library. Well, we've put a zip file with the whole doc library contents up on the main doc page. So, if you are in a site without internet access, you can get the zip, extract it, and have a portable version of the site, including the pdf and html versions of all of the docs.

    Read the article

  • .NET Reflector 7 Released

    - by Paulo Morgado
    This new version fixes a number of bugs and adds support for more high-level C# features such as iterator blocks. A new tabbed browsing model was added, and Jason Haley's PowerCommands add-in was included as an exploratory step for future versions. To find out more about version 7, just visit http://www.reflector.net/. The release of version 7 also means that the free version of .NET Reflector is no longer available for download. Maybe you can still get one of the giveaway licenses that Red Gate provided to communities and individuals.

    Read the article

  • Kernel Panic: line 61: can't open /scripts/functions

    - by Pavlos G.
    I'm facing a problem with all the kernels installed on my system (Ubuntu 10.10 64-bit). Installed kernel versions: 2.6.32-21 up to 2.6.35.23. Booting halted with the following error: init: .: line 61: can't open '/scripts/functions' Kernel panic - not syncing: Attempted to kill init! Pid: 1, comm: init not tainted Only the first one (2.6.32-21) was working up until now. I asked for help at ubuntuforums.org and I was told to check if there was any problem regarding my graphics card (ATI Radeon). I uninstalled all the ATI-related packages as well as all the unnecessary xserver-xorg-video-* drivers that were installed. I then rebooted, and from then on ALL of the kernels halt with the same error (i.e. it didn't fix the problematic kernels, it just broke the only one that was working...) Any ideas on what I should try next? Thanks in advance. Pavlos.

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Depencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data that inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage. About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.
Hard Link Issues in a Component Library
Another reason I've been hesitant is that I really didn't want to pull in a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to require a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.
Enter Dynamic Loading
So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:
- No assembly reference means only dynamic access - no compiler type checking or Intellisense
- Requirement for the host application to have a reference to JSON.NET or else get runtime errors
The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already. So there are a few things that are needed to make this work:
- Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)
- Load types into dynamic variables
- Use Reflection for a few tasks like statics/enums
The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.
Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non instance members, so there's still a little bit of work left to do with Reflection.Dynamic Object InstantiationThe first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might have not been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable project, assemblies are just in time loaded only when they are accessed.Instantiating a type is a two step process: Finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:/// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). 
Here's what the factory function looks like in JsonSerializationUtils:/// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; }This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config setting that might be useful to set, but the default seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.Another feature I need is have Enum values serialized as strings rather than numeric values which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.Doing the actual JSON ConversionFinally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files so I created four methods that handle these tasks two each for serialization and deserialization for string and file.Here's what the File Serialization looks like:/// <summary> /// Serializes an object instance to a JSON file. 
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; }You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.For full circle operation here's the DeserializeFromFile() version:/// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to serialize to</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; }This code is a little more compact since there are no prettifying options to set. 
Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer.You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.Using the JsonSerializationUtils WrapperThe final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider, that is responsible for handling reading and writing JSON values to and from files. The provider is simple a small wrapper around the SerializationUtils component and there's very little code to make this work now:The whole provider looks like this:/// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if(Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to XmlConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile,false,true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } }This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases.Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.Already, there are several other places in some other tools where I use JSON serialization this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use replacing quite a bit of code that was previously in use. 
And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy.
Performance?
Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly sluggish. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency.
Dynamic Loading - Worth it?
On occasion dynamic loading makes sense. But there's a price to be paid in added code complexity and a performance hit. But for some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files and loading down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool. Hopefully some of you find this information useful…
© Rick Strahl, West Wind Technologies, 2005-2013. Posted in .NET, C#

    Read the article

  • Exposing business logic as WCF service

    - by Oren Schwartz
    I'm working on a middle-tier project which encapsulates the business logic (uses a DAL layer, and serves a web application server [ASP.NET]) of a product deployed in a LAN. The BL serves as a bunch of services and data objects that are invoked upon user action. At present, the DAL acts as a separate application whereas the BL uses it, but the BL is consumed by the web application as a DLL. Both the DAL and the web application are deployed on different servers inside the organization, and since the BL DLL is consumed by the web application, it resides on the same server. The worst thing about exposing the BL as a DLL is that we lose track of what we expose. Deployment is not such a big issue since, mostly, product versions are deployed together. Would you recommend migrating from a DLL to a WCF service? If so, why? Do you know anyone who had a similar experience?
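    As a minimal sketch of what such a migration could look like (the interface, class and member names below are invented for illustration, not taken from the question), the existing BL code would sit behind an explicit WCF contract, which also restores visibility into exactly what is exposed:

        using System.Runtime.Serialization;
        using System.ServiceModel;

        [ServiceContract]
        public interface IOrderService
        {
            [OperationContract]
            OrderSummary GetOrderSummary(int orderId);
        }

        [DataContract]
        public class OrderSummary
        {
            [DataMember] public int OrderId { get; set; }
            [DataMember] public decimal Total { get; set; }
        }

        // The web application then calls a generated proxy for IOrderService
        // instead of referencing the BL DLL directly.
        public class OrderService : IOrderService
        {
            public OrderSummary GetOrderSummary(int orderId)
            {
                // Delegate to the existing business-logic / DAL code here.
                return new OrderSummary { OrderId = orderId, Total = 0m };
            }
        }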

    Read the article

  • Unity Dash/Lens - Auto-completion based on recent search strings

    - by Anant
    Sometimes, I find myself typing the same (or quite similar) search strings inside a Unity Lens. So, I wondered whether it's possible for the Lens to remember previous searches, and provide a drop-down menu of possible suggestions (based on the past) when I start typing my new query. With Lenses for sites like Wikipedia and DuckDuckGo, the search strings are getting longer, and this feature would lend a helping hand in filling out queries faster. This could be something applicable to all Lenses, with later versions allowing individual Lenses to run their own auto-completion algorithm.

    Read the article

  • Why Move My Oracle Database to New SPARC Hardware?

    - by rickramsey
    If you didn't manage to catch all the news during the proverbial Firehose Down the Throat that is Oracle OpenWorld, you'll enjoy these short recaps from Brad Carlile. He makes things clear in just a couple of minutes. Photograph copyright by Edge of Day Photography, with permission. Video: Latest Improvements to Oracle SPARC Processors with Brad Carlile T5, M5, and M6. Three wicked fast processors that Oracle announced over the last year. Brad Carlile explains how much faster they are, and why they are better than previous versions. Video: Why Move Your Oracle Database to SPARC Servers with Brad Carlile If I'm happy with how my Oracle Database 11g is performing, why should I deploy it on the new Oracle SPARC hardware? For the same reasons that you would want to buy a sports car that goes twice as fast AND gets better gas mileage, Brad Carlile explains. Well, if there are such dramatic performance improvements and cost savings, then why should I move up to Oracle Database 12c? -Rick Follow me on: Blog | Facebook | Twitter | Personal Twitter | YouTube | The Great Peruvian Novel

    Read the article

  • Compatibility of Enum Vs. string constants

    - by Yosi
    I was recently told that using an Enum: public enum TaskEndState { Error, Completed, Running } may have compatibility/serialization issues, and thus sometimes it's better to use const strings: public const string TASK_END_STATE = "END_STATE"; public const string TASK_END_STATE_ERROR = "TASK_END_STATE_ERROR"; public const string TASK_END_STATE_COMPLETE = "TASK_END_STATE_COMPLETE"; public const string TASK_END_STATE_RUNNING = "TASK_END_STATE_RUNNING"; Can you find a practical use case where this may happen? Are there any guidelines on when Enums should be avoided? Edit: My production environment has multiple WCF services (different versions of the same product). A later version may or may not include some new properties, such as a new task end state (this is just an example). If we try to deserialize a new Enum value in an older version of a specific service, it may not work.
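    One concrete way the incompatibility can show up is sketched below (this is not from the question; the wrapper class, XML namespace and the "Cancelled" value are invented): deserializing a payload from a newer service version with DataContractSerializer fails on the unknown enum member, whereas a plain string property would simply carry the new value through.

        using System;
        using System.IO;
        using System.Runtime.Serialization;
        using System.Xml;

        // v1 client-side contract: only three end states are known.
        [DataContract(Namespace = "http://example.org/tasks")]
        public class TaskStatus
        {
            [DataMember] public TaskEndState EndState { get; set; }
        }

        [DataContract(Namespace = "http://example.org/tasks")]
        public enum TaskEndState
        {
            [EnumMember] Error,
            [EnumMember] Completed,
            [EnumMember] Running
        }

        class Program
        {
            static void Main()
            {
                // XML a hypothetical v2 service might send: "Cancelled" does not
                // exist in the v1 enum above.
                const string v2Payload =
                    "<TaskStatus xmlns=\"http://example.org/tasks\">" +
                    "<EndState>Cancelled</EndState></TaskStatus>";

                var serializer = new DataContractSerializer(typeof(TaskStatus));
                using (var reader = XmlReader.Create(new StringReader(v2Payload)))
                {
                    try
                    {
                        var status = (TaskStatus)serializer.ReadObject(reader);
                        Console.WriteLine(status.EndState);
                    }
                    catch (SerializationException ex)
                    {
                        // The unknown enum value cannot be mapped, so the call fails;
                        // a string constant would have round-tripped unchanged.
                        Console.WriteLine("Deserialization failed: " + ex.Message);
                    }
                }
            }
        }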

    Read the article

  • How to zoom in&zoom out

    - by stanimir
    While using 10.04 and the previous versions, I used to press Ctrl+F6 to zoom in and subsequently Ctrl+F7 to zoom out. Now I can't even find the options to zoom in and zoom out in the "keyboard shortcuts". I tried "the Magnifier" in Compiz but really can't understand what is going on right there. There is a simple question I would like to ask: what do I do to be able to zoom in with Ctrl+F6 and zoom out with Ctrl+F7? Thanks a lot.

    Read the article

  • Conditional AddHandler Directive

    - by Itai
    Is it possible to conditionally call AddHandler in the .htaccess under Apache (2.x)? My present situation requires a certain AddHandler on one production server, but that same AddHandler breaks the development server. This requires having 2 versions of .htaccess, which is a pain. So, instead I would like to wrap one AddHandler within a conditional. Something of this sort: IF IP=='1.2.3.4' THEN AddHandler type/foo .ext ENDIF The problem is new but out of my control for now. I know this is far from ideal, and the servers used to match 100% as they should, but temporarily they cannot.
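    For what it's worth, Apache 2.4 added an <If> directive that can express roughly this; the snippet below is an untested sketch that assumes the condition is meant to match the server's own address (the IP and handler are the placeholders from the question), and it will not work on Apache 2.2, where separate per-server configuration remains the usual workaround:

        # Requires Apache 2.4 or later; evaluated per request.
        <If "%{SERVER_ADDR} == '1.2.3.4'">
            AddHandler type/foo .ext
        </If>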

    Read the article

  • Project migration tips [closed]

    - by Shirish11
    I would like to know what things need to be considered when your client asks for project migration to a different language / version. Migrating an existing project to a higher version is not going to be much trouble. You have to take care of some system files and some settings changes, and if version-specific components are included, then search for them to comply with the newer version. I understand why you need to migrate projects to newer versions, mostly to overcome the drawbacks of the current version. But I have absolutely no idea how to go about language migration. Any help on this is appreciated. Moreover, is language migration a healthy idea?

    Read the article

  • A* Jump Point Search - how does pruning really work?

    - by DeadMG
    I've come across Jump Point Search, and it seems pretty sweet to me. However, I'm unsure as to how their pruning rules actually work. More specifically, in Figure 1, it states that we can immediately prune all grey neighbours as these can be reached optimally from the parent of x without ever going through node x. However, this seems somewhat at odds. In the second image, node 5 could be reached by first going through node 7 and skipping x entirely through a symmetrical path - that is, 6 -> x -> 5 seems to be symmetrical to 6 -> 7 -> 5. This would be the same as how node 3 can be reached without going through x in the first image. As such, I don't understand how these two images are not entirely equivalent, and not just rotated versions of each other. Secondly, I'd like to understand how this algorithm could be generalized to a three-dimensional search volume.

    Read the article

  • Automated deployment/installation of development tools

    - by thegreendroid
    My team is looking to automate installation/deployment of all of our development tools. The main driver for this is to ensure that everyone in the team has a consistent development environment setup and to also allow a new recruit to get up and running easily. By development environment I mean tools like SCM, toolchains, IDEs etc., and by consistent I mean everyone using the same version of compiler to build code (this is very important!). Here are a few of our requirements: allow unattended (silent) install of our entire dev setup by running a single script; ability to deploy selective updates (new versions) for specific tools; ability to report which tools are installed and their specific version numbers; must work on Windows (Linux would be a bonus); must be easy to maintain. What are some of the tools that you've used to automate such a task?

    Read the article

  • SQL SERVER – Beginning New Weekly Series – Memory Lane – #002

    - by pinaldave
    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane. 2006 Query to Find ByteSize of All the Tables in Database This was my second blog post, and today I do not remember what the business need was that made me build this query. It was built for SQL Server 2000 and it will not directly run on SQL Server 2005 or later versions now. It measured the byte size of the tables in the database. This can be done in many different ways as well, for example with SP_HELPDB as well as SP_HELP. I wish to build a similar script for 2005 and later versions. 2007 This week I completed my first year (365 blogs) and my very first 1 million views. I was pretty excited at that time with this new achievement. SQL SERVER Versions, CodeNames, Year of Release When I started with SQL Server I did not know all the names correctly for each version, and I often used to get confused with this. However, as time passed by, I started to remember all the codenames as well. In this blog post I have not included SQL Server 2012's code name as it was not released at the time. SQL Server 2012's code name is Denali. Here is the question for you – does anyone know the internal name of SQL Server's next version? Searching String in Stored Procedure I had already started to work with 2005 by this time, and I was personally converting each of my stored procedures to be SQL Server 2005 compatible. As we were upgrading from SQL Server 2000 to SQL Server 2005, we had to search each of the stored procedures and make sure that we removed incompatible code from it. For example, syscolumns of SQL Server 2000 was now being replaced by sys.columns of SQL Server 2005. This stored procedure was pretty helpful at that time. Later on I built a few additional versions of the same stored procedure. Version 1: This version finds the Stored Procedures related to a Table. Version 2: This is a specific version which works with SQL Server 2005 and later versions. 2008 Clear Drop Down List of Recent Connection From SQL Server Management Studio It happens to all of us: we connect to some remote client server and never have to connect to it again. However, it keeps on bothering us that the name shows up in the list all the time. In this blog post I covered a quick tip about how we can remove the same. I also wrote a small article about How to Check Database Integrity for all Databases, and there was a funny question from a reader requesting T-SQL code to refresh databases. 2009 Stored Procedure are Compiled on First Run – SP is taking Longer to Run First Time A myth quite prevalent in the industry is that Stored Procedures are pre-compiled and should always run faster. It is not true. Stored procedures are compiled on their very first execution, and that is the reason why they take longer when executed the first time. In this blog post I had a great time discussing the same concept. If you do not agree with it, you are welcome to read this blog post. Removing Key Lookup – Seek Predicate – Predicate – An Interesting Observation Related to Datatypes Performance tuning is an interesting concept and my personal favorite one. In many blog posts I have described how to do performance tuning and how to improve the performance of queries.
In this quick tip I have explained how one can remove the Key Lookup and improve performance. Here are some very relevant articles on this subject: Article 1 | Article 2 | Article 3 2010 Recycle Error Log – Create New Log file without a Server Restart During one of my consulting assignments I noticed a DBA restarting the server to create a new log file. This is absolutely not necessary, and restarting the server might have many other negative impacts. There is a common sp_cycle_errorlog which can do the same task efficiently and properly. Have you ever used this SP or feature? Additionally, I had a great time presenting on SQL Server Best Practices at the SharePoint Conference. 2011 SSMS 2012 Reset Keyboard Shortcuts to Default It is very much possible that we mix up various SQL Server shortcuts, and at times we feel like resetting them to the defaults. In SQL Server 2012 it is not easy to do; there is a process to follow, and I enjoyed blogging about it. Fundamentals of Columnstore Index The columnstore index was introduced in SQL Server 2012 and has been a very popular subject. It increases the speed of the server dramatically and can be an extremely useful feature for data warehousing. However, updating a columnstore index is not as simple as a simple UPDATE statement. Read a detailed blog post about how Update works with a Columnstore Index. Additionally, you can watch a Quick Video on this subject. SQL Server 2012 New Features I had decided to explore SQL Server 2012 features last year and went through pretty much every single concept introduced in separate blog posts. Here are a few blog posts where I describe how SQL Server 2012 functions work: Introduction to CUME_DIST – Analytic Functions; Introduction to FIRST_VALUE and LAST_VALUE – Analytic Functions; OVER clause with FIRST_VALUE and LAST_VALUE – Analytic Functions. I indeed enjoyed writing about SQL Server 2012 functions last year. Have you gone through all the new features introduced in SQL Server 2012? If not, it is still not too late to go through them. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
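    For readers looking for a 2005-and-later equivalent of the "Searching String in Stored Procedure" script mentioned above, here is a minimal sketch (not the original script from the blog) that queries sys.sql_modules for a search string such as the deprecated syscolumns reference:

        -- Find stored procedures whose definition contains a given string.
        SELECT OBJECT_SCHEMA_NAME(p.object_id) AS SchemaName,
               p.name                          AS ProcedureName
        FROM   sys.procedures AS p
               JOIN sys.sql_modules AS m ON m.object_id = p.object_id
        WHERE  m.definition LIKE '%syscolumns%'
        ORDER  BY SchemaName, ProcedureName;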

    Read the article

  • Firefox: towards an increasingly "Chrome-like" interface? Mozilla posts screenshots of a new UI project

    Firefox: towards an increasingly "Chrome-like" interface? Mozilla posts the first screenshots of a UI project for upcoming versions of the browser. Mozilla is considering another facelift of its browser's interface, and the least one can say is that this work is strangely reminiscent of Chrome. On the Foundation's wiki, the first screenshots of the project (note that it is indeed a project and not yet a decision) show that the Firefox button (the general menu) has disappeared in favour of a new menu that centralizes all the main features. This change is reminiscent of Chrome's "wrench" menu.

    Read the article

  • The Great Ball Contraption: A Massive Automated LEGO Construction

    - by Jason Fitzpatrick
    This massive LEGO construction combines 17 distinct modules into a lengthy factory-like conveyance system for five hundred LEGO balls. The variety and creativity of the methods employed is, dare we say, dazzling. Slotted robotic arms? Screw lifts? Handshake object transfers? Catapults that shoot baskets? The sheer number of creative and novel solutions LEGO builder Akiyuky employs to move the balls through his machine left us mesmerized for the whole seven minute video. Akiyuky’s LEGO Blog (Google Translate Interpreted)[via Make]

    Read the article

< Previous Page | 140 141 142 143 144 145 146 147 148 149 150 151  | Next Page >