Search Results

Search found 15087 results on 604 pages for 'native mode'.

  • Automated build platform for .NET portfolio - best choice?

    - by jkohlhepp
    I am involved with maintaining a fairly large portfolio of .NET applications. The portfolio also includes legacy applications built on top of other platforms - native C++, ECLIPS Forms, etc. I have a complex build framework on top of NAnt right now that manages the builds for all of these applications. The build framework uses NAnt to do a number of different things:

    - Pull code out of Subversion, and create tags in Subversion
    - Build the code, using MSBuild for .NET or other compilers for other platforms
    - Peek inside AssemblyInfo files to increment version numbers
    - Delete certain files that shouldn't be included in builds / releases
    - Release code to deployment folders
    - Zip code up for backup purposes
    - Deploy Windows services; start and stop them
    - Etc.

    Most of those things can be done with NAnt by itself, but we did build a couple of extension tasks for NAnt to do some things that were specific to our environment. Also, most of the processes above are genericized and reused across a lot of our different application build scripts, so that we don't repeat logic. So it is not simple NAnt code, and these are not simple build scripts - there are dozens of NAnt files that come together to execute a build.

    Lately I've been dissatisfied with NAnt for a couple of reasons: (1) its syntax is just awful - programming languages on top of XML are really horrific to maintain; (2) the project seems to have died on the vine - there haven't been a ton of updates lately and it seems like no one is really at the helm. Trying to get it working with .NET 4 has caused some pain points due to this lack of activity.

    So, with all of that background out of the way, here's my question. Given some of the things that I want to accomplish based on the list above, and given that I am primarily in a .NET shop but also need to build non-.NET projects, is there an alternative to NAnt that I should consider switching to? Things on my radar include PowerShell (with or without psake), MSBuild by itself, and rake. These all have pros and cons. For example, is MSBuild powerful enough? I remember using it years ago and it didn't seem to have as much power as NAnt. Do I really want my team to learn Ruby just to do builds using rake? Is psake a mature enough project to pin my portfolio to? Is PowerShell "too close to the metal", so that I'd end up having to write my own build library akin to psake to use it on its own? Are there other tools that I should consider? If you were involved with maintaining a .NET portfolio of significant complexity, what build tool would you be looking at? What does your team currently use?

    Read the article

  • MySQL 5.5 server not working

    - by rajesh
    I have Ubuntu 14.04 installed on my system. I recently updated Ubuntu and now MySQL does not start; Workbench says that the MySQL server has been stopped. When I try to start it, I get the following error:

    2014-08-12 23:02:04 - Checking server status...
    2014-08-12 23:02:04 - Trying to connect to MySQL...
    2014-08-12 23:02:04 - Can't connect to MySQL server on '127.0.0.1' (111) (2003)
    2014-08-12 23:02:04 - Assuming server is not running
    2014-08-12 23:02:04 - Server start done.
    2014-08-12 23:02:04 - Checking server status...
    2014-08-12 23:02:04 - Trying to connect to MySQL...
    2014-08-12 23:02:04 - Can't connect to MySQL server on '127.0.0.1' (111) (2003)
    2014-08-12 23:02:04 - Assuming server is not running

    And when I try to log in using the terminal (mysql -u root -p <password>) I get the following error:

    ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    I have also tried to reinstall mysql-server-5.5, but apt says it is already installed:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    mysql-server-5.5 is already the newest version.
    0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.

    I have data which I have not backed up, as I am unable to log into the server. I am a newbie; please help me resolve this issue without losing my data.

    Below is the error message from cat /var/log/mysql/error.log:

    140813 21:22:50 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
    140813 21:22:50 [Note] Plugin 'FEDERATED' is disabled.
    140813 21:22:50 InnoDB: The InnoDB memory heap is disabled
    140813 21:22:50 InnoDB: Mutexes and rw_locks use GCC atomic builtins
    140813 21:22:50 InnoDB: Compressed tables use zlib 1.2.8
    140813 21:22:50 InnoDB: Using Linux native AIO
    140813 21:22:50 InnoDB: Initializing buffer pool, size = 128.0M
    140813 21:22:50 InnoDB: Completed initialization of buffer pool
    140813 21:22:50 InnoDB: highest supported file format is Barracuda.
    140813 21:22:50 InnoDB: Waiting for the background threads to start
    140813 21:22:51 InnoDB: 5.5.38 started; log sequence number 80726593570
    140813 21:22:51 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
    140813 21:22:51 [Note] - '127.0.0.1' resolves to '127.0.0.1';
    140813 21:22:51 [Note] Server socket created on IP: '127.0.0.1'.
    140813 21:22:51 [ERROR] Fatal error: Can't open and lock privilege tables: Incorrect file format 'user'
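    The last line of the log is the key: "Can't open and lock privilege tables: Incorrect file format 'user'" usually means the MyISAM privilege tables in the mysql system database were left in an old or inconsistent format by the interrupted upgrade. A minimal recovery sketch, assuming stock Ubuntu paths - these commands are a general suggestion, not from the original post, so back up the data directory before trying them:

    sudo service mysql stop
    sudo cp -a /var/lib/mysql /var/lib/mysql.bak    # back up the data directory first
    sudo myisamchk -r /var/lib/mysql/mysql/user     # repair the 'user' privilege table
    sudo service mysql start
    sudo mysql_upgrade -u root -p                   # rebuild the system tables for 5.5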

    Read the article

  • SQL SERVER – Iridium I/O – SQL Server Deduplication that Shrinks Databases and Improves Performance

    - by Pinal Dave
    Database performance is a common problem for SQL Server DBAs. It seems like we spend more time on performance than just about anything else. In many cases, we use scripts or tools that point out performance bottlenecks, but we don't have any way to fix them. For example, what do you do when you need to speed up a query that is already tuned as well as possible? Or what do you do when you aren't allowed to make changes to a database supporting a purchased application?

    Iridium I/O for SQL Server was originally built at Confio Software (makers of Ignite) because DBAs kept asking for a way to actually fix performance instead of just pointing out performance problems. The technology is certified by Microsoft and was so promising that it was spun out into a separate company that is now run by the Confio founder/CEO and technology management team.

    Iridium uses deduplication technology to both shrink databases and boost I/O performance. It is intriguing to see it work. It will deduplicate a live database as it is running transactions. You can watch the database get smaller while user queries are running.

    Iridium is a simple tool to use. After installing the software, you click an “Analyze” button, which spends a minute or two on each database and estimates both your storage and performance savings. Next, you click an “Activate” button to turn on Iridium I/O for your selected databases. You don't need to reboot the operating system or restart the database during any part of the process.

    As part of my test, I also wanted to see if there would be an impact on my databases when Iridium was removed. The ‘revert’ process (bringing the files back to their SQL Server native format) was executed by a simple click of a button, and completed while the databases were available for normal processing.

    I was impressed and enjoyed playing with the software and encourage all of you to try it out. Here is the link to the website to download Iridium for free.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Github Organization Repositories, Issues, Multiple Developers, and Forking - Best Workflow Practices

    - by Jim Rubenstein
    A weird title, yes, but I've got a bit of ground to cover, I think. We have an organization account on GitHub with private repositories. We want to use GitHub's native issues/pull-requests features (pull requests are basically exactly what we want as far as code reviews and feature discussions go). We found the tool hub by defunkt, which has a cool little feature of being able to convert an existing issue into a pull request and automatically associate your current branch with it.

    I'm wondering if it is best practice to have each developer in the organization fork the organization's repository to do their feature work/bug fixes/etc. This seems like a pretty solid workflow (it's basically what every open source project on GitHub does), but we want to be sure that we can track issues and pull requests from ONE source: the organization's repository. So I have a few questions:

    Is a fork-per-developer approach appropriate in this case? It seems like it could be a little overkill. I'm not sure that we need a fork for every developer, unless we introduce developers who don't have direct push access and need all their code reviewed. In that case, we would want to institute such a policy for those developers only. So, which is better: all developers in a single repository, or a fork for everyone?

    Does anyone have experience with the hub tool, specifically the pull-request feature? If we do a fork-per-developer (or even a fork only for less-privileged devs), will the pull-request feature of hub operate on pull requests against the upstream master repository (the organization's repository), or does it have different behavior?

    EDIT: I did some testing with issues, forks, and pull requests and found that if you create an issue on your organization's repository, then fork the repository from your organization to your own GitHub account, make some changes, and merge them to your fork's master branch, then when you run hub -i <issue #> you get an error: "User is not authorized to modify the issue." So, apparently that workflow won't work.
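    For reference, the hub syntax for attaching the current branch to an existing issue looked like the sketch below (the org, user and branch names are made-up placeholders; this feature relied on a GitHub API that may no longer be available, so treat it as illustrative):

    # open a pull request that attaches the current branch to existing issue #4
    hub pull-request -i 4 -b myorg:master -h myuser:feature-branch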

    Read the article

  • Why does Chrome video performance substantially degrade after waking from suspend in 10.10?

    - by Grant Heaslip
    Note: for some more details, some of which may not be true given what I've figured out, see this post.

    When I first boot my computer, video performance in Chrome (both native H.264 HTML5 video on YouTube and Vimeo, and Flash) is perfectly reasonable. CPU usage stays low, everything works correctly, and the video is silky-smooth. But for whatever reason, if I suspend my computer and then wake it up, video performance plummets. Full-screen HTML5 video is choppy at best, and full-screen Flash video basically brings my computer to its knees (I'm talking less than a frame a second, and a 5-second lead time to leave full-screen after hitting Esc). Restarting Chrome doesn't fix this - I need to completely restart my machine before performance goes back to normal. Video performance in other applications, such as Movie Player, doesn't seem to be affected at all by the suspend cycle - it's only Chrome.

    I'm using a Lenovo X201 with an Intel GMA HD graphics chipset and Intel components all around (I don't need any proprietary drivers). This didn't happen in 10.04, and I haven't changed anything that I think would have caused this. It's possible that a Chrome release could have caused this, but it seems less likely than a regression between 10.04 and 10.10. Any ideas?

    EDIT: In response to Georg's comment: logging in and out doesn't fix it. Restarting Compiz or switching to Metacity (at least by using "compiz/metacity --replace & disown" - am I doing it right?) doesn't help (actually, it seemed to help somewhat with Flash once, but I haven't been able to reproduce this). I'm not sure about GDM - when I use "sudo restart gdm" I get kicked back to the Linux shell (?), which I have no idea how to get out of.

    Also, I want to make very clear that this isn't just a case of Flash sucking (it does, but that's beside the point). I'm seeing the same general problem with HTML5 videos, and Flash is performing better on my Nexus One than it does on my Core i5 laptop. There's something screwy going on with Chrome and/or 10.10.

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use, code-first approach to configuration: you create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET configuration stores (web.config/app.config), XML files, SQL records and string storage.

    About once a week somebody asks me about JSON support, and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure, JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library

    Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere, I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5, you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.

    Enter Dynamic Loading

    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access the various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:

    - No assembly reference means only dynamic access - no compiler type checking or Intellisense
    - The host application must have a reference to JSON.NET or else you get runtime errors

    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with it. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.

    So there are a few things that are needed to make this work:

    - Dynamically create an instance and optionally attempt to load the assembly (if not loaded)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics/enums

    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection. Specifically, it doesn't deal with object activation, truly dynamic (string-based) member activation, or access to non-instance members, so there's still a little bit of work left to do with Reflection.

    Dynamic Object Instantiation

    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two-step process: loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet since it hasn't been accessed. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed.

    Instantiating a type is a two-step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

    /// <summary>
    /// Creates an instance of a type based on a string.
    /// </summary>
    /// <param name="typeName">Common name of the type</param>
    /// <param name="args">Any constructor parameters</param>
    /// <returns></returns>
    public static object CreateInstanceFromString(string typeName, params object[] args)
    {
        object instance = null;
        Type type = null;
        try
        {
            type = GetTypeFromName(typeName);
            if (type == null)
                return null;
            instance = Activator.CreateInstance(type, args);
        }
        catch
        {
            return null;
        }
        return instance;
    }

    /// <summary>
    /// Helper routine that looks up a type name and tries to retrieve the
    /// full type reference in the actively executing assemblies.
    /// </summary>
    /// <param name="typeName"></param>
    /// <returns></returns>
    public static Type GetTypeFromName(string typeName)
    {
        // Let default name binding find it
        Type type = Type.GetType(typeName, false);
        if (type != null)
            return type;

        // otherwise look through the loaded assembly list and try to find it manually
        var assemblies = AppDomain.CurrentDomain.GetAssemblies();
        foreach (Assembly asm in assemblies)
        {
            type = asm.GetType(typeName, false);
            if (type != null)
                break;
        }
        return type;
    }
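    As a quick usage sketch (my example, not from the library's docs - the assembly-qualified fallback simply lets Type.GetType() locate and load the assembly if it isn't in the AppDomain yet):

    // activate the serializer purely from its type name
    object json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");

    // fallback: an assembly-qualified name lets Type.GetType() load the assembly
    if (json == null)
        json = ReflectionUtils.CreateInstanceFromString(
            "Newtonsoft.Json.JsonSerializer, Newtonsoft.Json");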
    To use this for loading JSON.NET, I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James, the JSON.NET serializer instance is thread safe and quite a bit faster when cached).

    Here's what the factory function looks like in JsonSerializationUtils:

    /// <summary>
    /// Dynamically creates an instance of JSON.NET
    /// </summary>
    /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
    /// <returns>Dynamic JsonSerializer instance</returns>
    public static dynamic CreateJsonNet(bool throwExceptions = true)
    {
        if (JsonNet != null)
            return JsonNet;

        lock (SyncLock)
        {
            if (JsonNet != null)
                return JsonNet;

            // Try to create instance
            dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");

            if (json == null)
            {
                try
                {
                    var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                    json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
                }
                catch (Exception ex)
                {
                    if (throwExceptions)
                        throw;
                    return null;
                }
            }

            if (json == null)
                return null;

            json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

            // Enums as strings in JSON
            dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter");
            json.Converters.Add(enumConverter);

            JsonNet = json;
        }

        return JsonNet;
    }

    This code's purpose is to return a fully configured JsonSerializer instance. As you can see, the code tries to create an instance and, when that fails, tries to load the assembly and then re-tries the load.

    Once the instance is loaded, some configuration occurs on it. Specifically, I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions.

    Note that I'm setting ReferenceLoopHandling, which requires an enum value. There's no real easy way (short of using the underlying numeric value) to set a property or pass parameters from static values or enums, which means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.

    Another feature I need is to have enum values serialized as strings rather than as the numeric values that are the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.

    As you can see, there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.

    Doing the actual JSON Conversion

    Finally I need to actually do my JSON conversions. For the utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks: two each for serialization and deserialization, for string and file.

    Here's what the file serialization looks like:

    /// <summary>
    /// Serializes an object instance to a JSON file.
    /// </summary>
    /// <param name="value">the value to serialize</param>
    /// <param name="fileName">Full path to the file to write out with JSON.</param>
    /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param>
    /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param>
    /// <returns>true or false</returns>
    public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false)
    {
        dynamic writer = null;
        FileStream fs = null;
        try
        {
            Type type = value.GetType();

            var json = CreateJsonNet(throwExceptions);
            if (json == null)
                return false;

            fs = new FileStream(fileName, FileMode.Create);
            var sw = new StreamWriter(fs, Encoding.UTF8);

            writer = Activator.CreateInstance(JsonTextWriterType, sw);

            if (formatJsonOutput)
                writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented");

            writer.QuoteChar = '"';
            json.Serialize(writer, value);
        }
        catch (Exception ex)
        {
            Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
            if (throwExceptions)
                throw;
            return false;
        }
        finally
        {
            if (writer != null)
                writer.Close();
            if (fs != null)
                fs.Close();
        }
        return true;
    }

    You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter, configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic, it's still fairly short and readable.

    For full-circle operation, here's the DeserializeFromFile() version:

    /// <summary>
    /// Deserializes an object from file and returns a reference.
    /// </summary>
    /// <param name="fileName">name of the file to deserialize from</param>
    /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param>
    /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param>
    /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns>
    public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false)
    {
        dynamic json = CreateJsonNet(throwExceptions);
        if (json == null)
            return null;

        object result = null;
        dynamic reader = null;
        FileStream fs = null;
        try
        {
            fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
            var sr = new StreamReader(fs, Encoding.UTF8);

            reader = Activator.CreateInstance(JsonTextReaderType, sr);
            result = json.Deserialize(reader, objectType);
            reader.Close();
        }
        catch (Exception ex)
        {
            Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message);
            if (throwExceptions)
                throw;
            return null;
        }
        finally
        {
            if (reader != null)
                reader.Close();
            if (fs != null)
                fs.Close();
        }
        return result;
    }

    This code is a little more compact since there are no prettifying options to set. Here the JsonTextReader is created dynamically, and it feeds the Deserialize() operation on the serializer.
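    To put the two file methods side by side, here's a minimal usage sketch - MyAppConfig is a hypothetical settings class, not a type from the library:

    var config = new MyAppConfig { MaxRetries = 3 };   // hypothetical POCO

    // write pretty-printed JSON; returns false instead of throwing on failure
    bool ok = JsonSerializationUtils.SerializeToFile(config, "app.json", false, true);

    // read it back; the result comes back as object and must be cast
    var loaded = JsonSerializationUtils.DeserializeFromFile("app.json",
                     typeof(MyAppConfig)) as MyAppConfig;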
    You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations; the string operations are very similar - the code is fairly repetitive. These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

    Using the JsonSerializationUtils Wrapper

    The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component, and there's very little code needed to make this work now.

    The whole provider looks like this:

    /// <summary>
    /// Reads and Writes configuration settings in .NET config files and
    /// sections. Allows reading and writing to default or external files
    /// and specification of the configuration section that settings are
    /// applied to.
    /// </summary>
    public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
        where TAppConfiguration : AppConfiguration, new()
    {
        /// <summary>
        /// Optional - the Configuration file where configuration settings are
        /// stored in. If not specified uses the default Configuration Manager
        /// and its default store.
        /// </summary>
        public string JsonConfigurationFile
        {
            get { return _JsonConfigurationFile; }
            set { _JsonConfigurationFile = value; }
        }
        private string _JsonConfigurationFile = string.Empty;

        public override bool Read(AppConfiguration config)
        {
            var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
            if (newConfig == null)
            {
                if (Write(config))
                    return true;
                return false;
            }

            DecryptFields(newConfig);
            DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
            return true;
        }

        /// <summary>
        /// Return
        /// </summary>
        /// <typeparam name="TAppConfig"></typeparam>
        /// <returns></returns>
        public override TAppConfig Read<TAppConfig>()
        {
            var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
            if (result != null)
                DecryptFields(result);
            return result;
        }

        /// <summary>
        /// Write configuration to XmlConfigurationFile location
        /// </summary>
        /// <param name="config"></param>
        /// <returns></returns>
        public override bool Write(AppConfiguration config)
        {
            EncryptFields(config);

            bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true);

            // Have to decrypt again to make sure the properties are readable afterwards
            DecryptFields(config);

            return result;
        }
    }

    This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component - simply implementing three methods will do in most cases. Note that this code doesn't have any dynamic dependencies; all of that is abstracted away in JsonSerializationUtils. From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.

    Already, there are several other places in some other tools where I use JSON serialization where this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use. And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy.

    Performance?

    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing, it looks like the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. This will change depending on the size of the objects serialized - the larger the object, the more processing time is spent inside the actual dynamically activated components and the smaller the difference will be. Dynamic code is always slower, but how much it really affects your application depends primarily on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling', as I do here, the overhead is small enough not to matter.

    All that being said, I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy performance. For the configuration component, speed is not that important because both read and write operations typically happen once on first access and then only every once in a while. But for other operations - say a serializer handling AJAX requests on a Web server - one would be well served to create a hard dependency.

    Dynamic Loading - Worth it?

    Dynamic loading is not something you need to worry about often, but on occasion it makes sense. There's a price to be paid in added code complexity and a performance hit that depends on how frequently the dynamic code is accessed. But for some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial, avoiding the need to ship extra files, add dependencies, and load down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset…

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in .NET, C#

    Read the article

  • Stagnating in programming

    - by Coder
    Time after time this question has come up in my mind, but up until today I wasn't thinking about it much. I have been programming for around 8 years now, and for the last two years it seems I'm not as keen to pick up new technologies anymore. Maybe that's burnout or something, but I'd say it's experience, and knowing what I like, that's stopping me from running after the latest and greatest.

    I'm a C++ developer; by this I mean I love close-to-the-metal programming. I have no problem tracing problems through assembly, using tools like WinDbg or HexView. When I use constructs, I think about how they are realized underneath, how the bits are set and unset under the hood. I love battling with complex threading problems and doing everything the hardcore way, even by hand if the regular solutions seem half-baked. But I also love the C++0x stuff, and use it a lot - and all C++ code, as long as it's not cumbersome compared to the C counterparts; sometimes I also fall back to a sort of "Super C" if the C++ way is ugly.

    And then there are all the other developers who seem to be way more forward-looking: .NET 4.0, MVC, WPF, all those Microsoft X#s, LINQ languages, XML and XSLT, mobile devices and so on. I have done a considerable amount of .NET, SQL, and ASPX programming, but the further I go, the less I want to try those technologies. Is that bad?

    Almost every day I hear people saying that managed code is the only way forward and WPF is the way to go. I hear that C++ is godawful and that you can't code anything in it that's somewhat stable. But I don't buy it. With the experience I have, and the knowledge of how native code is compiled and executes, I can say I find it extremely rare that C++ code is unstable, or leaks, or causes crashes that take more than 30 seconds to identify and fix. And to tell the truth, I've seen enough problems with other "cool" languages that I'd say C++ is even more stable and production-proof than the safe languages, at least for me.

    The only thing that scares me in C++ is new frameworks; I don't trust them, and I use them extra sparingly. STL - yes; ATL - very sparingly; everything else... well, I'm not very keen on it. Most huge problems I've run into were all related to frameworks, not the language itself: an overridden operator here, a bad hierarchy there, poor class design here, mystical casts there. Other than that, C/C++ (yes, I use them together) still seems a very controlled and stable way to develop applications.

    Am I stagnating? Should I switch professions, or force myself into all that marketing hype? Are there more developers who feel the same way?

    Read the article

  • Closing the Gap: 2012 IOUG Enterprise Data Security Survey

    - by Troy Kitch
    The new survey from the Independent Oracle Users Group (IOUG), titled "Closing the Security Gap: 2012 IOUG Enterprise Data Security Survey," uncovers some interesting trends in IT security among IOUG members and offers recommendations for securing data stored in enterprise databases.

    "Despite growing threats and enterprise data security risks, organizations that implement appropriate detective, preventive, and administrative safeguards are seeing significant results," finds the report's author, Joseph McKendrick, analyst, Unisphere Research. Produced by Unisphere Research and underwritten by Oracle, the report is based on responses from 350 IOUG members representing a variety of job roles, organization sizes, and industry verticals.

    Key findings include:

    - Corporate budgets increase, but trail. Though corporate data security budgets are increasing this year, they still have room to grow to reach the previous year’s spending. Additionally, more than half of respondents say their organizations still do not have, or are unaware of, data security plans to help address contingencies as they arise.
    - Danger of unauthorized access. Less than a third of respondents encrypt data that is either stored or in motion, and at the same time, more than three-fifths say they send actual copies of enterprise production data to other sites inside and outside the enterprise.
    - Privileged user misuse. Only about a third of respondents say they are able to prevent privileged users from abusing data, and most do not have, or are not aware of, ways to prevent access to sensitive data using spreadsheets or other ad hoc tools.
    - Lack of consistent auditing. A majority of respondents actively collect native database audits, but there has not been an appreciable increase in the implementation of automated tools for comprehensive auditing and reporting across databases in the enterprise.

    IOUG Recommendations

    The report's author finds that securing data requires not just the ability to monitor and detect suspicious activity, but also to prevent the activity in the first place. To achieve this comprehensive approach, the report recommends the following:

    - Apply an enterprise-wide security strategy. Database security requires multiple layers of defense that include a combination of preventive, detective, and administrative data security controls.
    - Get business buy-in and support. Data security only works if it is backed by executive support. The business needs to help determine what protection levels should be attached to data stored in enterprise databases.
    - Provide training and education. Often, business users are not familiar with the risks associated with data security. Beyond IT solutions, what is needed is a well-engaged and knowledgeable organization to help make security a reality.

    Read the IOUG Data Security Survey Now.

    Read the article

  • Programming and Ubiquitous Language (DDD) in a non-English domain

    - by Sandor Drieënhuizen
    I know there are some questions already here that are closely related to this subject, but none of them take Ubiquitous Language as the starting point, so I think that justifies this question.

    For those who don't know: Ubiquitous Language is the concept of defining a (both spoken and written) language that is used equally by developers and domain experts, to avoid inconsistencies and miscommunication due to translation problems and misunderstanding. You will see the same terminology show up in code, conversations between any team members, functional specs and whatnot.

    So, what I was wondering about is how to deal with Ubiquitous Language in non-English domains. Personally, I strongly favor writing programming code entirely in English, including comments, but of course excluding constants and resources. However, in a non-English domain, I'm forced to make a decision to either:

    1. Write code reflecting the Ubiquitous Language in the natural language of the domain.
    2. Translate the Ubiquitous Language to English and stop communicating in the natural language of the domain.
    3. Define a table that specifies how the Ubiquitous Language translates to English.

    Here are some of my thoughts based on these options:

    1) I have a strong aversion against mixed-language code, that is, coding using type/member/variable names etc. that are non-English. Most programming languages 'breathe' English to a large extent, and most of the technical literature, design pattern names etc. are in English as well. Therefore, in most cases there's just no way of writing code entirely in a non-English language, so you end up with mixed languages anyway.

    2) This will force the domain experts to start thinking and talking in the English equivalent of the UL, something that will probably not come naturally to them and will therefore hinder communication significantly.

    3) In this case, the developers communicate with the domain experts in their native language, while the developers communicate with each other in English and, most importantly, write code using the English translation of the UL.

    I'm sure I don't want to go for the first option, and I think option 3 is much better than option 2. What do you think? Am I missing other options?

    UPDATE: Today, about a year later, having dealt with this issue on a daily basis, I have to say that option 3 has worked out pretty well for me. It wasn't as tedious as I initially feared, and translating in real time while talking to the client wasn't a problem either. I also found the following advantages to be true, based on my experience:

    - Translating the UL makes you pay more attention to defining the UL, and even the domain itself, especially when you don't know how to translate a term and you have to start looking through dictionaries etc. This has even caused me to reconsider domain modeling decisions a few times.
    - It helps you make your knowledge of the English language more profound.
    - Obviously, your code is much more pleasant to look at instead of being a mind-boggling obscenity.
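    As a small illustration of option 3, here's a hypothetical sketch for a Dutch insurance domain - the glossary lives next to the code, the code itself stays in English, and all the terms below are made up for the example:

    // Glossary (UL -> code): polis -> Policy, premie -> Premium,
    // verzekeringnemer -> PolicyHolder
    public class PolicyHolder
    {
        public string Name { get; set; }
    }

    public class Policy
    {
        public PolicyHolder Holder { get; set; }   // verzekeringnemer
        public decimal Premium { get; set; }       // premie
    }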

    Read the article

  • Career Advice: finding challenging work in software and web development

    - by dianovich
    Having left my physics degree early, I started out in the realm of web design / front-end web development and was able to get work quite quickly. I moved on to spend a chunk of my time on servers and gained experience with frameworks like WordPress and Drupal, then the likes of CodeIgniter and CakePHP, and became comfortable in Debian-based and RHEL/CentOS environments. I ventured into iOS development and published a couple of native apps to the App Store too! I have started to spend a good deal of my time writing Python and have invested a little time in Django.

    The problem is, I still spend a fair chunk of my time doing more front-end web development (writing markup and CSS for site themes, design-led JavaScript, small applications for which application architecture and software engineering are relatively unimportant or too time-consuming to invest in) in my job. What I want to do is really exercise the systematic/logical portion of my brain and tackle challenging problems on a daily basis. I want to have to care about big-O running times, modularity in software, DRY, performance tuning, and development methodologies. I want to work for a firm whose clients say: "Yes, these things are important to us and we'll pay you to get them right."

    But it is difficult: I have no formal training and am potentially becoming a jack of all trades. Not that being a jack of many trades is necessarily a bad thing, but the scope of work I find myself involved in is far too broad. And there are only so many hours in a day outside of work!

    My question is: where do I go from here? I am starting to work on a few open source projects and have started to publish content to my blog. But this isn't likely to make it past the recruitment consultants and HR departments of many a firm. And I do not, for example, work in a team that practices agile methodologies, so how do I get work in such a team to gain experience? While I have been responsible for introducing version control and some solid working practices into our current environment, there is only so far I can go in this context. What would convince you that I'm worth taking a risk on? What would convince you that I'd catch up with the other guys in your employ in next to no time?

    Read the article

  • Error Copying Source File in Audio Spectrum Visualizer [closed]

    - by David Dimalanta
    I'm testing this code using LibGDX, Java, and Eclipse to try a music player that detects the frequency. I saw it on this website, plus the link on GitHub: http://gtomee.com/2012/07/28/audio-spectrum-visualizer-with-libgdx/

    It works when running from the desktop project folder but not from the Android project folder, and the result is this:

    10-10 13:57:45.320: E/AndroidRuntime(9421): FATAL EXCEPTION: GLThread 16845
    10-10 13:57:45.320: E/AndroidRuntime(9421): com.badlogic.gdx.utils.GdxRuntimeException: Error copying source file: soundtrack 1 bioman.mp3 (Internal)
    10-10 13:57:45.320: E/AndroidRuntime(9421): To destination: tmp/audio-spectrum.mp3 (External)
    10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.copyFile(FileHandle.java:625)
    10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.copyTo(FileHandle.java:534)
    10-10 13:57:45.320: E/AndroidRuntime(9421): at com.bodapps.rhythm.Drop.create(Drop.java:393)
    10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.backends.android.AndroidGraphics.onSurfaceChanged(AndroidGraphics.java:292)
    10-10 13:57:45.320: E/AndroidRuntime(9421): at android.opengl.GLSurfaceView$GLThread.guardedRun(GLSurfaceView.java:1505)
    10-10 13:57:45.320: E/AndroidRuntime(9421): at android.opengl.GLSurfaceView$GLThread.run(GLSurfaceView.java:1240)
    10-10 13:57:45.320: E/AndroidRuntime(9421): Caused by: com.badlogic.gdx.utils.GdxRuntimeException: Error stream writing to file: tmp/audio-spectrum.mp3 (External)
    10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.write(FileHandle.java:313)
    10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.copyFile(FileHandle.java:623)
    10-10 13:57:45.320: E/AndroidRuntime(9421): ... 5 more
    10-10 13:57:45.320: E/AndroidRuntime(9421): Caused by: com.badlogic.gdx.utils.GdxRuntimeException: Error writing file: tmp/audio-spectrum.mp3 (External)
    10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.write(FileHandle.java:293)
    10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.write(FileHandle.java:305)
    10-10 13:57:45.320: E/AndroidRuntime(9421): ... 6 more
    10-10 13:57:45.320: E/AndroidRuntime(9421): Caused by: java.io.FileNotFoundException: /storage/sdcard0/tmp/audio-spectrum.mp3: open failed: EACCES (Permission denied)
    10-10 13:57:45.320: E/AndroidRuntime(9421): at libcore.io.IoBridge.open(IoBridge.java:416)
    10-10 13:57:45.320: E/AndroidRuntime(9421): at java.io.FileOutputStream.<init>(FileOutputStream.java:88)
    10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.write(FileHandle.java:289)
    10-10 13:57:45.320: E/AndroidRuntime(9421): ... 7 more
    10-10 13:57:45.320: E/AndroidRuntime(9421): Caused by: libcore.io.ErrnoException: open failed: EACCES (Permission denied)
    10-10 13:57:45.320: E/AndroidRuntime(9421): at libcore.io.Posix.open(Native Method)
    10-10 13:57:45.320: E/AndroidRuntime(9421): at libcore.io.BlockGuardOs.open(BlockGuardOs.java:110)
    10-10 13:57:45.320: E/AndroidRuntime(9421): at libcore.io.IoBridge.open(IoBridge.java:400)
    10-10 13:57:45.320: E/AndroidRuntime(9421): ... 9 more

    I'm not sure if I have come to the right place for help and suggestions.
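    The root cause is visible in the last few frames of the trace: open failed: EACCES (Permission denied) while writing audio-spectrum.mp3 to external storage. A likely fix, assuming a standard LibGDX Android project (hedged, since this particular setup can't be verified here), is that the Android project's AndroidManifest.xml is missing the external-storage write permission:

    <!-- Add inside the <manifest> element of the Android project's AndroidManifest.xml -->
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

    After adding it, uninstall and reinstall the app so the new permission is picked up.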

    Read the article

  • Happy New Year! Upcoming Events in January 2011

    - by mandy.ho
    Oracle Database kicks off the New Year at the following events during the month of January. Hope to see you there, and please send in your pictures and feedback!

    Jan 20, 2011 - San Francisco, CA
    LinkShare Symposium West 2011
    Oracle is a proud Gold Sponsor at the LinkShare Symposium West 2011, January 20 in San Francisco, California. Year after year LinkShare has been bringing their network the opportunity to come to life. At the LinkShare Symposium, online performance marketing leaders meet to optimize face-to-face during a full day of networking. Learn more by attending the Oracle breakout session, "Omni-Channel Retailing: What is possible now?" on Thursday, January 20, 11:15 a.m. - 12:00 noon, Grand Ballroom. http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=128306&src=6954634&src=6954634&Act=397

    Jan 24, 2011 - Cincinnati, OH
    Greater Cincinnati Oracle User Group Meeting
    "Tom Kyte Day" - featuring a day of sessions presented by Senior Technical Architect Tom Kyte. Sessions include "Top 10, no 11, new features of Oracle Database 11g Release 2" and "What do I really need to know when upgrading," plus more. http://www.gcoug.org/

    Jan 25, 2011 - Vancouver, British Columbia
    Oracle Security Solutions Forum
    Featuring a special keynote presentation from Tom Kyte - Complete Database Security. Join us at this half-day event, Oracle Database Security Solutions: Complete Information Security. Learn how Oracle Database Security solutions help you:
    • Prevent external threats like SQL injection attacks from reaching your databases
    • Transparently encrypt application data without application changes
    • Prevent privileged database users and administrators from accessing data
    • Use native database auditing to monitor and report on database activity
    • Mask production data for safe use in nonproduction environments
    http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=126974&src=6958351&src=6958351&Act=97

    Jan 26, 2011 - Halifax, Nova Scotia
    Oracle Database Security Technology Day
    An exclusive seminar on complete information security with Oracle Database 11g. The amount of digital data within organizations is growing at unprecedented rates, as is the value of that data and the challenge of safeguarding it. Yet most IT security programs fail to address database security--specifically, insecure applications and privileged users. So how can you protect your mission-critical information? Avoid risky third-party solutions? Defend against security breaches and compliance violations? And resist costly new infrastructure investments? Join us at this half-day seminar, Oracle Database Security Solutions: Complete Information Security, to find out. http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=126269&src=6958351&src=6958351&Act=93

    Read the article

  • Spotlight on RIVA: CRM integration for Oracle CRM on Demand and Microsoft Exchange

    - by Richard Lefebvre
    Introducing Riva from Omni, an Oracle ISV partner specializing in enterprise management and integration solutions. Riva delivers advanced, server-side integration for Oracle CRM On Demand and Microsoft Exchange, or even Novell GroupWise. Riva allows Oracle customers to go beyond the standard Outlook plug-in to deliver additional value for the end user as they move between Outlook and CRM On Demand.

    Riva syncs CRM On Demand to ALL Exchange mail apps, not just Windows Outlook. So, whether customers are using Outlook 2010, Outlook Web Access (web client), Outlook 2011 for Mac, Apple Mail, Outlook on Citrix, or a mobile device, Riva's got them covered. There are no plug-ins to be installed, configured, managed, and maintained on users' desktops or laptops, as Riva delivers server-side synchronisation for CRM On Demand and Exchange. The automation of CRM and Outlook integration removes the reliance upon users to synchronise between the two, with Riva handling the process.

    Riva allows administrators to define sync policies and apply them to individuals or groups of users depending on their sync requirements. Administrators can determine and manage the exposure of the most pertinent detail to be synchronised between Outlook and CRM On Demand, with custom and organic contact filtering for large deployments; i.e., based on ownership, groupings, and contact frequency, filters can be applied to control which contact records are shared with users.

    Riva provides the capability to synchronise CRM and Outlook beyond contacts, calendar entries, and email: the synchronisation can be extended to cater for opportunities, quotes, and custom objects, for example, within the Outlook interface. Riva SmartConvert Folders can automate the creation of opportunities and associated contacts if they don't already exist. This facilitates a reduction in manual data entry through quick association, while also benefiting user adoption.

    From a mobile perspective, Riva allows users to view and manage their CRM On Demand contacts, calendar, tasks, opportunities, and cases from iPad, iPhone, Android, and BlackBerry devices. Again, there are no mobile apps or additional plug-ins to install, configure, or manage. We sync CRM On Demand to Exchange. Because the mobile device is connected to an Exchange mailbox, the information automatically syncs down to the native address book, calendar, and mail apps on the smartphone or tablet.

    Riva Datasheet for CRM On Demand
    Riva Brochure – Oracle CRM On Demand
    Technical Knowledgebase & Riva Trial: http://kb.omni-ts.com/47/
    Comparison to Outlook Plug-ins: Riva Diagram – Riva Comparison with Outlook Plug-ins
    Contact: Wolfgang Berger - [email protected]

    Read the article

  • Time passage arithmetic explanation

    - by Cyber Axe
    I ported this from http://www.effectgames.com/effect/article.psp.html/joe/Old_School_Color_Cycling_with_HTML5 some time ago. However, I now want to modify it, changing it from floating-point to fixed-point maths for efficiency (for those who are going to talk about premature optimization and whatnot: I want my entire engine in fixed point, both as a learning process for me and so I can more easily port code to systems that don't have native floating point, such as ARM CPUs).

    My initial conversion to fixed point just resulted in the cycling getting stuck on either the first or last frame of the cycle. Plus, it would be nice to understand better how it works so I can add more options and so forth in the future; my maths, however, is weak, and the comments are limited, so I don't really know how the maths determines which frame of the cycle to use (cycleAmount). I was also a beginner when I ported it, as I had no idea of the difference between floating-point numbers and integers.

    So in summary, my question is: can anyone give an explanation of the arithmetic used for determining cycleAmount (which determines the "frame" of the cycle)?

    This is the working floating-point version of the code:

    public final void cycle(Colour[] sourceColours, double timeNow, double speedAdjust) {
        // Cycle all animated colour ranges in palette based on timestamp.
        sourceColours = sourceColours.clone();
        int cycleSize;
        double cycleRate;
        double cycleAmount;
        Cycle cycle;
        for (int i = 0, len = cycles.length; i < len; ++i) {
            cycle = cycles[i];
            cycleSize = (cycle.HIGH - cycle.LOW) + 1;
            cycleRate = cycle.RATE / (int) (CYCLE_SPEED / speedAdjust);
            cycleAmount = 0;
            if (cycle.REVERSE < 3) {
                // Standard Cycle
                cycleAmount = DFLOAT_MOD((timeNow / (1000 / cycleRate)), cycleSize);
                if (cycle.REVERSE < 1) {
                    cycleAmount = cycleSize - cycleAmount; // If below 1 make sure its not reversed.
                }
            } else if (cycle.REVERSE == 3) {
                // Ping-Pong
                cycleAmount = DFLOAT_MOD((timeNow / (1000 / cycleRate)), cycleSize << 1);
                if (cycleAmount >= cycleSize) {
                    cycleAmount = (cycleSize * 2) - cycleAmount;
                }
            } else if (cycle.REVERSE < 6) {
                // Sine Wave
                cycleAmount = DFLOAT_MOD((timeNow / (1000 / cycleRate)), cycleSize);
                cycleAmount = Math.sin((cycleAmount * 3.1415926 * 2) / cycleSize) + 1;
                if (cycle.REVERSE == 4) {
                    cycleAmount *= (cycleSize / 4);
                } else if (cycle.REVERSE == 5) {
                    cycleAmount *= (cycleSize >> 1);
                }
            }
            if (cycle.REVERSE == 2) {
                reverseColours(sourceColours, cycle);
            }
            if (USE_BLEND_SHIFT) {
                blendShiftColours(sourceColours, cycle, cycleAmount);
            } else {
                shiftColours(sourceColours, cycle, cycleAmount);
            }
            if (cycle.REVERSE == 2) {
                reverseColours(sourceColours, cycle);
            }
        }
        colours = sourceColours;
    }

    // This utility function allows for variable precision floating point modulus.
    private double DFLOAT_MOD(final double d, final double b) {
        return (Math.floor(d * PRECISION) % Math.floor(b * PRECISION)) / PRECISION;
    }
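    A short reading of the arithmetic (my interpretation of the ported code, not the original author's notes): cycleRate is palette steps per second after the speed adjustment, so 1000 / cycleRate is the duration of one step in milliseconds, and timeNow / (1000 / cycleRate) is how many steps have elapsed. Taking that modulo cycleSize wraps the elapsed steps into a position inside the colour range, and that position is cycleAmount. Ping-pong runs the same counter over twice the range and mirrors the second half; the sine modes reshape the linear position into a sinusoidal one. The classic fixed-point pitfall, and a plausible cause of the "stuck on first or last frame" symptom, is porting timeNow / (1000 / cycleRate) literally: in integer maths the inner division truncates, collapsing the whole expression. Rearranging to multiply before dividing avoids that. A hedged 16.16 fixed-point sketch of the standard-cycle case (names mirror the code above; timeNowMs is assumed to be milliseconds since the app started, to keep the products within a long):

    // Illustrative 16.16 fixed-point version of the "standard cycle" position,
    // untested against the original engine. Key point: multiply before dividing;
    // the naive port of timeNow / (1000 / rate) truncates 1000/rate to a constant
    // (often 0) and pins the cycle to a single frame.
    static final int FP_SHIFT = 16;

    static long cycleAmountFixed(long timeNowMs, long rate, long cycleSize) {
        // elapsed steps in 16.16: timeNowMs * rate / 1000, all in integer maths
        long steps = (timeNowMs * rate << FP_SHIFT) / 1000;
        // wrap into the colour range, with cycleSize promoted to 16.16 first
        long amount = steps % (cycleSize << FP_SHIFT);
        return amount; // the integer part (amount >> FP_SHIFT) is the frame index
    }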

    Read the article

  • No sound after clean install 11.10

    - by Jorge
    First of all, sorry to ask this; I'm sure it has been asked many times before. Second, sorry for the English, it's not my native language. And third, thank you in advance.

    I hope the following info will help. Here's a log: http://www.alsa-project.org/db/?f=07089caf530494bc4bc23e1d1cd56b3a5fae03c6

    I already checked 'System - Preferences - Sound'. Here's a screenshot: http://i.imgur.com/Ghwnj.png

    > jorge@jorge-desktop:~$ sudo lshw -class multimedia
    > *-multimedia
    >   description: Multimedia audio controller
    >   product: VT8233/A/8235/8237 AC97 Audio Controller
    >   vendor: VIA Technologies, Inc.
    >   physical id: 11.5
    >   bus info: pci@0000:00:11.5
    >   version: 60
    >   width: 32 bits
    >   clock: 33MHz
    >   capabilities: pm cap_list
    >   configuration: driver=VIA 82xx Audio latency=0
    >   resources: irq:22 ioport:e400(size=256)

    Tried with no results:

    > sudo apt-get remove --purge alsa-base
    > sudo apt-get remove --purge pulseaudio
    > sudo apt-get clean && sudo apt-get autoremove
    > sudo apt-get install alsa-base
    > sudo apt-get install pulseaudio
    > sudo apt-get install ubuntu-desktop

    Also:

    > sudo gedit /etc/default/grub
    >
    > from:
    > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
    >
    > to:
    > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash radeon.audio=1"
    >
    > sudo update-grub

    And rebooted... without any result.

    EDIT: I made sure everything is fine with aplay -l, lspci -v, and lsmod, and checked alsamixer; nothing is muted. Well, I'm running out of ideas. Thanks.

    Read the article

  • Tackling Big Data Analytics with Oracle Data Integrator

    - by Irem Radzik
    By Mike Eisterer

    The term big data draws a lot of attention, but behind the hype there's a simple story. For decades, companies have been making business decisions based on transactional data stored in relational databases. Beyond that critical data, however, is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and documents that can be mined for useful information.

    Companies are facing emerging technologies, increasing data volumes, numerous data varieties, and the processing power needed to efficiently analyze data which changes with high velocity. Oracle offers the broadest and most integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships.

    Oracle Data Integrator Enterprise Edition (ODI) is critical to any enterprise big data strategy. ODI and the Oracle Data Connectors provide native access to Hadoop, leveraging such technologies as MapReduce, HDFS, and Hive. Together with ODI's metadata-driven approach to extracting, loading, and transforming data, companies may now integrate their existing data with big data technologies and deliver timely and trusted data to their analytic and decision support platforms. In this session, you'll learn about ODI and Oracle Big Data Connectors and how, coupled together, they provide the critical integration with multiple big data platforms.

    Tackling Big Data Analytics with Oracle Data Integrator
    October 1, 2012 12:15 PM at MOSCONE WEST – 3005

    For other data integration sessions at OpenWorld, please check our Focus-On document. If you are not able to attend OpenWorld, please check out our latest resources for Data Integration.

    Read the article

  • Token based Authentication and Claims for Restful Services

    - by Your DisplayName here!
    WIF as it exists today is optimized for web applications (passive/WS-Federation) and SOAP-based services (active/WS-Trust). While there is limited support for WCF WebServiceHost-based services (for standard credential types like Windows and Basic), there is no ready-to-use plumbing for RESTful services that do authentication based on tokens. This is not an oversight by the WIF team; the REST services security world is currently changing rapidly, and that's by design. There are a number of intermediate solutions, emerging protocols and token types, as well as some already deprecated ones, so it didn't make sense to bake that into the core feature set of WIF. But after all, the F in WIF stands for Foundation. So just like the WIF APIs integrate tokens and claims into other hosts, this is also (easily) possible with RESTful services. Here's how.

    HTTP Services and Authentication
    Unlike SOAP services, in the REST world there is no (over)specified security framework like WS-Security. Instead, standard HTTP means are used to transmit credentials, and SSL is used to secure the transport and data in transit. In most cases the HTTP Authorization header is used to transmit the security token (this can be as simple as a username/password, up to issued tokens of some sort). The Authorization header consists of the actual credential (consider this opaque from a transport perspective) as well as a scheme. The scheme is a string that gives the service a hint as to what type of credential was used (e.g. Basic for basic authentication credentials). HTTP also includes a way to advertise the right credential type back to the client; for this, the WWW-Authenticate response header is used.

    So for token-based authentication, the service simply needs to read the incoming Authorization header, extract the token, then parse and validate it. After the token has been validated, you also typically want some sort of client identity representation based on the incoming token. This holds regardless of the technology the actual service was built with: in ASP.NET (MVC) you could use an HttpModule or an ActionFilter; in (today's) WCF, you would use the ServiceAuthorizationManager infrastructure. The nice thing about using WCF's native extensibility points is that you get self-hosting for free.

    This is where WIF comes into play. WIF has ready-to-use infrastructure built in that just needs to be plugged into the corresponding hosting environment:

    - Representation of identity based on claims. This is a very natural way of translating a security token (and again, I mean this in the widest sense; it could also be a username/password) into something our applications can work with.
    - Infrastructure to convert tokens into claims (called a security token handler)
    - Claims transformation
    - Claims-based authorization

    So much for the theory. In the next post I will show you how to implement that for WCF, including full source code and samples.

    (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)
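    The WCF plumbing follows in the author's next post. Purely as a framework-neutral illustration of the header mechanics described above (this is not the author's WIF code, and the "Token" scheme name is made up), a minimal sketch of the read/extract/validate flow in Java servlet terms could look like this:

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.*;

    // Hypothetical filter: reads the Authorization header, checks the scheme,
    // extracts the token, and rejects the request with 401 + WWW-Authenticate
    // when anything is missing or invalid.
    public class TokenAuthFilter implements Filter {

        private static final String SCHEME = "Token"; // illustrative scheme name

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            String header = request.getHeader("Authorization");
            if (header == null
                    || !header.startsWith(SCHEME + " ")
                    || !isValid(header.substring(SCHEME.length() + 1))) {
                // advertise the expected credential type back to the client
                response.setHeader("WWW-Authenticate", SCHEME);
                response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
                return;
            }
            // a real implementation would now map the token to a claims-based principal
            chain.doFilter(req, res);
        }

        // stand-in for real validation (signature, expiry, issuer checks, ...)
        private boolean isValid(String token) {
            return !token.isEmpty();
        }

        @Override public void init(FilterConfig config) {}
        @Override public void destroy() {}
    }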

    Read the article

  • Developing for Windows CE platform?

    - by grmbl
    I'm looking into creating some applications for workers to use on the shop floor. They'll be using Psion NEO devices running Windows CE 5.0. My skill set covers C#, PHP, and ASP.NET (plus web services).

    Application requirements:
    - should connect to our ERP system running on IBM iSeries (AS400).
    - should run fullscreen (effectively hiding the OS).
    - usability: touch functionality.

    I have tried the following:

    Full WinForms application run through an RDP session:
    [+] easy deployment using a .rdp file.
    [+] the application can be run in a desktop environment too.
    [+] the RDP host can easily access DB2 using IBM drivers.
    [+] the GUI works OK on a small screen.
    [-] environment = terminal server (which is already under heavy use).

    Full WinForms application running on the device OS:
    [+] environment = local.
    [+] responsive.
    [-] must use a web service to access DB2.
    [-] deployment...
    [-] fixed platform (no desktop).

    Console application running on the device OS:
    [+] environment = local.
    [+] very responsive.
    [-] must use a web service to access DB2.
    [-] no fullscreen or other window options?
    [-] deployment...
    [-] fixed platform (no desktop).

    I'm considering creating a web application, but it seems the OS comes with IE 5? I don't want to alter the OS in any way! (install other browsers, etc.)

    I would like to have an application that's responsive, easy to deploy, fullscreen, and optionally multiplatform. I have seen handheld devices using a terminal (emulation?) with a console-like interface. This seems to be native to the device, but I'm afraid it requires modest knowledge of C++?

    It seems that using RDP is the way to go, but I came here for advice and am looking for people who have been in the same situation and are willing to share their experience. There do not seem to be many "best practices" on the web that could help me decide the best way of working.

    Read the article

  • Software Architecture and MEF composition location

    - by Leonardo
    Introduction
    My software (a bunch of web APIs) consists of 4 projects: Core, FrontWebApi, Library, and Administration.

    - Library is a code library project that consists of only interfaces and enumerators. All my classes in other projects inherit from at least one interface, and that interface is in the Library. Generally speaking, my interfaces define either Entities, Repositories, or Controllers. This project references no other project or any special DLLs... just the regular .NET stuff.
    - Core is a class library project with the concrete implementations of Entities and Repositories. In some cases I have more than one implementation of a Repository (e.g. one for Azure Table Storage and one for regular SQL). This project handles the intelligence (business rules, mostly) and persistence, and it references only the Library.
    - FrontWebApi is an ASP.NET MVC 4 WebApi project that implements the controller interfaces to handle web requests (from a native mobile app). It references the Core and the Library.
    - Administration is a code library project that represents an "optional module": if it is present, it provides extra features (such as access control lists) to the application, but if it's not, no problem. Administration also references only the Library, implementing concrete classes for a few interfaces such as "IAccessControlEntry". I intend to make this available with a "setup" that will create any required database tables or anything like that. But it is important to notice that the Core has no reference to this project.

    Development
    Now, in order to have decoupled code, I decided to use IoC, and because this is a small project, I decided to do it using MEF, especially because of its advertised "composition" capabilities. I arranged all the imports/exports and constructors and everything, but something is not quite perfect in my mental visualisation.

    Main Question
    Where should I "compose" the objects? I mean: technically, the only place where access to real implementations is required is in the Repositories, because in order to retrieve data from wherever, entity instances will be necessary; everywhere else the interfaces suffice. The repositories could also provide a public "GetCleanInstanceOf()", right? Then all other places would be just fine working with the interfaces instead of concrete classes.

    Secondary Question
    Should "Administration" implement the concrete object for "IAccessControlGeneralRepository", or should the Core?
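    Whatever the container, the usual answer is a single composition root at the host's entry point (FrontWebApi here), with everything below it depending only on the Library interfaces. Setting MEF specifics aside, a minimal hand-wired sketch of that shape (in Java purely for illustration; every name is hypothetical) looks like this:

    // Hypothetical composition root: only this class names concrete implementations.
    interface PersonRepository {            // would live in "Library"
        String find(int id);
    }

    class SqlPersonRepository implements PersonRepository { // would live in "Core"
        public String find(int id) {
            return "person-" + id;          // stand-in for real persistence
        }
    }

    class PersonController {                // would live in "FrontWebApi"
        private final PersonRepository repository;
        PersonController(PersonRepository repository) {
            this.repository = repository;   // depends only on the interface
        }
        String get(int id) {
            return repository.find(id);
        }
    }

    public class CompositionRoot {
        public static void main(String[] args) {
            // the one place where interface meets implementation
            PersonRepository repository = new SqlPersonRepository();
            PersonController controller = new PersonController(repository);
            System.out.println(controller.get(42));
        }
    }

    With MEF the container plays the role of CompositionRoot.main, but the design question stays the same: composition happens once, at the host, not inside the Repositories.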

    Read the article

  • Visual Studio 2010 Launch Events

    - by Jim Duffy
    Don't miss out on the opportunity to learn about the new features in Visual Studio 2010. Check out the MSDN Events page and find out when the talented folks of the Developer & Evangelism group will be visiting your city to prove to you that /*Life Runs On Code*/. I'll be attending the Raleigh event June 2, 2010 from 1:00 - 5:00 PM.

    North Carolina State University, Jane S. McKimmon Conference Center
    1101 Gorman St
    Raleigh, North Carolina 27606
    United States

    From the Raleigh Event page:

    Event Overview
    Learn about the rich application platforms that Microsoft® Visual Studio® 2010 supports, including Windows® 7, the Web, SharePoint®, Windows Azure™, SQL®, and Windows® Phone 7 Series. From tighter tester and dev collaboration to new ALM tools, there's a lot that's new. Here's what you can expect:

    Windows Development with Visual Studio 2010
    Visual Studio has always been the best way to build compelling visual solutions for Windows. Visual Studio 2010 continues this trend with great new tooling support for Silverlight 4, WPF, and native development. In this demo-heavy session, you'll see how you can build rich Windows applications with Silverlight 4 using new trusted application features including out-of-browser execution, saving to the file system, and even COM Automation. You'll also see how you can use the new Task Parallel Library from within a WPF application to take advantage of all those cores in today's modern computers.

    Web and Cloud Development with Visual Studio 2010
    If you build solutions for the web, then this session is for you. Come see how your existing skills move forward with Visual Studio 2010, both for in-house ASP.NET development and the new frontier of the cloud. In this session, you'll see improved designers, new HTML and JavaScript snippets, Web Forms enhancements, and how you can quickly build great web sites using Dynamic Data. You'll see the changes made to testable web sites with MVC 2.0 and how we've integrated jQuery support into the platform. You'll then see how easy it is to leverage your existing code and move to the cloud with Windows Azure.

    Windows Phone 7 Developer Tools and Platform Overview
    This session provides an overview of Visual Studio® 2010 for Windows Phone. Learn about the powerful capabilities of this new application platform and the developer tools experience, including basic IDE usage, debugging, packaging, and deployment. This session also shows how you can use Microsoft Expression® Blend™ for Windows Phone to build great Silverlight applications.

    Have a day. :-|

    Read the article

  • JavaScript Class Patterns &ndash; In CoffeeScript

    - by Liam McLennan
    Recently I wrote about JavaScript class patterns, and in particular, my favourite class pattern that uses closure to provide encapsulation. A class to represent a person, with a name and an age, looks like this:

    var Person = (function() {
      // private variables go here
      var name, age;

      function constructor(n, a) {
        name = n;
        age = a;
      }

      constructor.prototype = {
        toString: function() {
          return name + " is " + age + " years old.";
        }
      };

      return constructor;
    })();

    var john = new Person("John Galt", 50);
    console.log(john.toString());

    Today I have been experimenting with coding for node.js in CoffeeScript. One of the first things I wanted to do was to try and implement my class pattern in CoffeeScript and then see how it compared to CoffeeScript's built-in class keyword. The above Person class, implemented in CoffeeScript, looks like this:

    # JavaScript style class using closure to provide private methods
    Person = (() ->
      [name, age] = [{}, {}]
      constructor = (n, a) ->
        [name, age] = [n, a]
        null
      constructor.prototype =
        toString: () -> "name is #{name} age is #{age} years old"
      constructor
    )()

    I am satisfied with how this came out, but there are a few nasty bits. To declare the two private variables in JavaScript is as simple as var name, age; but in CoffeeScript I have to assign a value, hence [name, age] = [{}, {}]. The other major issue occurred because of CoffeeScript's implicit function returns: the last statement in any function is returned, so I had to add null to the end of the constructor to get it to work.

    The great thing about the technique just presented is that it provides encapsulation, i.e. the name and age variables are not visible outside of the Person class. CoffeeScript classes do not provide encapsulation, but they do provide nicer syntax. The Person class using native CoffeeScript classes is:

    # CoffeeScript style class using the class keyword
    class CoffeePerson
      constructor: (@name, @age) ->
      toString: () -> "name is #{@name} age is #{@age} years old"

    felix = new CoffeePerson "Felix Hoenikker", 63
    console.log felix.toString()

    So now I have a trade-off: nice syntax against encapsulation. I think I will experiment with both strategies in my project and see which works out better.

    Read the article

  • Configuring keyboard input to eliminate unused diacritics

    - by David Cesarino
    I'd like to change the way diacritics work under Xubuntu.

    My problem
    My native language is pt-BR and my notebook has an American keyboard. Thus, I use ' and " followed by keys like u and c to produce things like ú, ç, and ü. It all works well. However, in the case of apostrophes and quotes, that creates a problem when I use ' followed by:

    1. letters that won't accept the acute accent (´, ACUTE ACCENT -- 0x00B4) at all, like t; and
    2. letters that won't accept the acute in pt-BR, like r.

    In case 1, ' with t does absolutely nothing. In case 2, ' with r creates r (LATIN SMALL LETTER R WITH ACUTE -- 0x0155), which is used, as far as I know, in some Eastern European languages like Slovak. It isn't used in Portuguese, just like ?, s, z, ?, ?, ?, n and all consonants (Portuguese does not take diacritics on consonants).

    Question
    That said, is there a way to better support the Brazilian Portuguese language using an American keyboard? It is very common here; I actually prefer the American keyboard to our own, known as "ABNT".

    Desired solution
    I'd like to deactivate unused diacritics, so case 2 would behave just like case 1. Additionally, if possible, I'd like case 1 to behave like it does under Windows. As an example, typing ' followed by t should write 't (apostrophe followed by t) instead of doing nothing. About 2, in my humble opinion, doing nothing is counterproductive. I realize the behavior is reasonable according to logic ("there isn't a t-acute, so please tell the computer to typeset an apostrophe --- ', SPACE --- instead of the acute"). But from a human, practical point of view, I think it makes more sense (to me, at least).

    Additional comments
    - I believe this also applies to Spanish, French, Italian, and other Western European Latin languages.
    - On the console (Ctrl+Alt+F?), case 1 is not a problem: I don't need to press space, as apostrophes are automatically added. However, there I'm unable to access the cedilla (ç). Two completely different behaviors.
    - If it's just a matter of customizing text config files (possibly creating a custom layout or whatever), I'd be glad to share my efforts. I just need guidance on the "how-to" part. Somehow Google only points me to the "enough" people (those who cope with the situation and think that it works "well enough"). And since I have definitely migrated to Linux/Xubuntu after years, I'd like to have it just as I like it (and I'm sure others would as well).
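    As a starting point for the "how-to" part, one plausible route (a sketch, untested on Xubuntu specifically) is to override the dead-acute sequences in a per-user ~/.XCompose file, keeping the system table for everything else. Applications using the X input method pick it up on next login; some toolkits may additionally need GTK_IM_MODULE=xim / QT_IM_MODULE=xim set in the environment.

    # ~/.XCompose -- keep the locale's default compose table, then override
    include "%L"

    # case 1: dead acute + t types apostrophe-then-t instead of doing nothing
    <dead_acute> <t> : "'t"

    # case 2: dead acute + r does the same instead of producing r-acute
    <dead_acute> <r> : "'r"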

    Read the article

  • Windows8, JavaScript and HTML5 - A good thing?

    - by Albers
    Most of us have seen the Windows 8 news regarding support for native HTML5/JavaScript applications. The press has pushed this as a potential threat to the .NET developer community, because JavaScript and HTML5 were called "our new developer platform". The press release refers to "Web-connected and Web-powered apps built using HTML5 and JavaScript that have access to the full power of the PC." Microsoft has also been quiet on details related to these comments.

    Before we buy the hype and start worrying about a world where we drop our Visual Studio licenses and buy Dreamweaver, let's think about how Windows 8 HTML/JavaScript applications would be implemented. The HTML5 spec offers support for offline applications, but this won't offer the OS-integrated experience the press release refers to. Microsoft has to be planning a way to extend access beyond the traditional JavaScript feature set.

    Microsoft has a similar option today: HTML Applications, or HTAs. They come close to the required features, but HTAs need ActiveX or Java integration to provide the promised OS-level access. I'm guessing that Microsoft's future OS strategy isn't built on developers cranking out ActiveX controls or Java applets. So where is Microsoft headed?

    One possibility is that Microsoft builds a new JavaScript framework from the ground up, outside their current APIs. Another idea would be for Microsoft to add support for JavaScript as a first-class .NET language using the Dynamic Language Runtime. A solution based on the DLR could be integrated into an HTA-like model to provide the promised access, along with the full range of features in the .NET Framework. Security comes included in the Framework, and the work necessary to support this integration would tie in nicely with the effort Microsoft has recently made providing better JavaScript and HTML5 support in Visual Studio 2010. As a bonus, a full-fledged JavaScript DLR implementation would allow single-language web solutions across client and server (think node.js) and would appeal to developers who are familiar with JavaScript but have less experience with the Microsoft tech stack.

    We will all get a better picture after the Build conference in September. But in the meantime, we know that Microsoft has a reputation for providing strong developer support. We might want to reserve our harshest judgement and consider that the press release could hint at new opportunities for .NET development.

    Read the article

  • Installed LibreOffice 4 with ppa, how do I remove it and go back to LibreOffice 3?

    - by MMA
    EDIT: This question is not a duplicate of "How to downgrade from LibreOffice 4.0 to 3.6?" That question talks about downgrading from a specific version of LibreOffice, namely from 4.0 to 3.6, and the solutions mentioned are not the ones I am looking for. They would work, but I wanted a general solution for moving from a higher version to a lower version without adding PPAs or downloading .deb files; the solutions there suggest either downloading .deb files for LibreOffice 3.6 or adding a repository for it. Furthermore, some of the answers put disproportionate (though applicable) stress on the use of Synaptic rather than a general command-line solution. That made me wonder: if I took a fresh computer today and installed Ubuntu 12.04, the LibreOffice installation would work without a hitch, so why can't I install LibreOffice on my 12.04 machine today from the command line? This answer to my question clarified everything: I need to use ppa-purge, which resets all packages from a PPA to the standard versions released for my distribution. Basically, it is a way to restore my system to the way it was before I installed packages from that PPA. This article further elaborates on the idea. The answer worked perfectly for me and was actually an education, since it taught me how to downgrade a package that was added via a PPA.

    I had upgraded from LibreOffice 3 to LibreOffice 4 using the PPA. Now, since I found that LibreOffice 4 has some issues, including handling my native language, I want to move back to LibreOffice 3. In order to accomplish this, I removed the LibreOffice config directory from my home directory and then purged LibreOffice from my machine:

    sudo apt-get purge libreoffice-*

    Then I removed the relevant PPAs using the sudo apt-add-repository --remove command, and then ran sudo apt-get update. Now, when I try to install LibreOffice using the command

    sudo apt-get install libreoffice

    I get an avalanche of output about unmet dependencies, something like:

    The following packages have unmet dependencies:
     libreoffice : Depends: libreoffice-core (= 1:3.5.7-0ubuntu4) but it is not going to be installed
    (snipped)

    If I dig into the issue further, by using the command

    sudo apt-get install libreoffice-core

    I get:

    The following packages have unmet dependencies:
     libreoffice-core : Depends: libreoffice-common (> 1:3.5.7) but it is not going to be installed
                        Depends: libexttextcat0 (>= 2.2-8) but it is not going to be installed
                        Depends: ure (>= 3.5.7~) but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.

    Could you please tell me how to install LibreOffice 3 on my machine? I am using Ubuntu 12.04 LTS.
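    For reference, the ppa-purge route described above boils down to the following commands (assuming the PPA in question was ppa:libreoffice/ppa; substitute whichever PPA was actually added):

    sudo apt-get install ppa-purge
    sudo ppa-purge ppa:libreoffice/ppa
    sudo apt-get update
    sudo apt-get install libreoffice

    ppa-purge downgrades every package that came from the PPA back to the version in the standard Ubuntu repositories, which clears the held/broken packages behind the unmet-dependencies avalanche shown above.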

    Read the article
