Search Results

Search found 17976 results on 720 pages for 'old versions'.


  • help with synclient configuration on an ASUS touchpad

    - by yohbs
    I have recently installed Ubuntu 12.04 on my brand new ASUS K55V. The touchpad behaves weirdly - a two-finger tap is interpreted as right-click, click and drag is not working (a double click is needed), and so on. Two-finger scrolling (horizontal & vertical) works great. I want the touchpad to behave the "normal" way (that is, like on my old laptop...). I read the synclient documentation and many of the questions posted here, and I can even make some stuff work. Unfortunately, I couldn't figure out how to make these work: (1) click and drag (that is, physically clicking the button and dragging a finger); (2) clicking on the right side of the button interpreted as right-click; (3) clicking the button with two fingers interpreted as middle-click. Specs: the touchpad is equipped with a physical button that clicks. Here's the output of xinput list-props "ETPS/2 Elantech Touchpad" | grep Capabilities: Synaptics Capabilities (294): 1, 0, 1, 1, 1, 1, 1. Any help will be much appreciated.

    Read the article

  • Unable to complete ubuntu installation

    - by Hugh Levinson
    I am not a computer expert and have no programming experience. I have downloaded Ubuntu 12.04 from the website using the Windows installer onto an admittedly old Toshiba Satellite Pro laptop. The download seemed to be fine. When I try to start up the laptop and select Ubuntu I get a long series of messages starting: "error: couldn't read file [0.7392640 Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block (0,0)" I have seen other entries on this message board with similar questions but, to be honest, I can't follow the answers, which presume more knowledge than I have. I would be extremely grateful if anyone could suggest a solution.

    Read the article

  • Salaries in reverse engineering fields [closed]

    - by John
    I bumped into an old friend at a conference and he told me he was now a consultant doing reverse engineering. I don't have much knowledge of this particular area, but this person (that I can't manage to get in touch with now) just casually mentioned that he was earning big bucks. I was hoping someone at SO may know of the salary range that a skilled and experienced employee/freelancer may earn in this area? I can't find much information on the web - small area maybe? I dunno. Any help would be appreciated.

    Read the article

  • Are there any non-self-taught famous programmers? [closed]

    - by Jon Purdy
    It seems to me that being a self-taught programmer has significant advantages over picking it up only in higher education. Not only does a self-taught developer have a headstart on their 10 000-odd hours of mastery, but their hobby demonstrates genuine interest. This will likely lead to a process of continuous self-improvement over their career, not to mention increased likelihood of producing personal projects that are worthy of fame. A programmer who spends four years in study (not nearly all of which is going to be directly concerned with programming) has far less leisure to explore and learn independently than does a developer who starts college with even a few years of dedicated hobbyist study. I wonder whether there are any famed developers who had no exposure to programming before deciding to study it in university. I simply doubt that an 18-year-old has the capacity to become a brilliant programmer with no prior experience, but that seems like an awfully elitist and unpleasant view, so I'd like to be proven wrong.

    Read the article

  • We're Back: I'm Here

    - by [email protected]
    After a busy Fall and Winter post-Oracle OpenWorld 2009 Oracle's Application Strategy Blog is back. More on what we've been up to shortly. Me, I'm blogging here for the first time. After nearly 6 years at Oracle working on the Oracle Fusion Middleware business I've recently joined the Oracle Applications team. For me, what's old is new again. Prior to working on applications infrastructure at Oracle...and at BEA Systems before that...I worked at PeopleSoft in a number of roles spanning Enterprise Performance Management, Supply Chain, Public Sector and Financial Services and more. Some of the acronyms are the same, there are (of course) some new ones too. But what I'm really excited about is the intersection of Enterprise Applications and Applications Infrastructure that's happening right now. "Aligning IT with Business Strategy" has been the buzzphrase for longer than we can all remember---but what I've seen over the past 5 months makes me start to believe that it's finally starting to happen.

    Read the article

  • Automatic Storage Management (ASM)

    - by jean-marc.gaudron(at)oracle.com
    Master Note for Automatic Storage Management (ASM) (Doc ID 1187723.1). This Master Note is intended to provide an index and references to the most frequently used My Oracle Support Notes with respect to Oracle Automatic Storage Management (ASM) environments. This Master Note is subdivided into categories to allow for easy access and reference to notes that are applicable to your area of interest. This includes the following categories: Automatic Storage Management (ASM) Concepts and Overview; Automatic Storage Management (ASM) Installation; Automatic Storage Management (ASM) Configuration; Automatic Storage Management (ASM) Administration; Automatic Storage Management (ASM) Migration and Upgrade; Automatic Storage Management (ASM) Monitoring; Automatic Storage Management (ASM) Troubleshooting and Debugging; Automatic Storage Management (ASM) Best Practices; Automatic Storage Management (ASM) Versions and Patches; ASMLIB; Database Machine, Exadata Storage Server and RAC; Documentation; Using My Oracle Support Effectively.

    Read the article

  • How to get the Ubuntu look back after installing lubuntu-desktop

    - by Wauzl
    I have a fresh install of Ubuntu 13.10. I wanted to try out Lubuntu, so I installed the package lubuntu-desktop. Everything worked fine; I can do Lubuntu sessions now, as well as normal Ubuntu sessions with Unity. I realized that I liked Unity better. Unfortunately, since I installed lubuntu-desktop my login screen and my notifications look different. How can I revert this and get my old Ubuntu look back? I already removed the package lubuntu-desktop, but it didn't help. Also, when I installed it, it came with a lot of packages that weren't removed when I removed lubuntu-desktop.

    Read the article

  • Good book/resource recommendation for HTML5 mobile game development?

    - by Greg Bala
    The problem: I am taking an existing, 5-year-old, HTML-based MMORTS game and "HTML5-ing" it, "AJAX-ing" it and, most importantly, optimizing it for mobile devices like iPhone, Android, etc. For these devices, the application will be packaged as a downloadable app that is a wrapper for a browser which actually shows the game. The question: I am looking for a good book, or books, or in-depth articles that would help me learn: (1) what tools I have in iOS and Android applications for optimizing an HTML-based game - things like caching of images, etc.; (2) what kind of connectivity or interactivity I can expect between the HTML/JavaScript pages and the wrapper - can I play sounds in the wrapper by triggering them from JavaScript?; (3) tips and tricks to optimize an HTML/HTML5 & JavaScript application to run well on mobile devices; etc. :) Any recommendations would be greatly appreciated!

    Read the article

  • Provisioning Oracle Solaris 11

    - by Owen Allen
    OS Provisioning is one of the major features of Ops Center. You can set up an OS provisioning plan and profile, which specify how an OS is deployed, and then use them to create new operating systems on any number of systems. Oracle Solaris 11 works a bit differently than older versions of Oracle Solaris, though, and even if you've done OS provisioning before you might have some questions about how to provision it. The Provisioning Oracle Solaris 11 OS how-to walks you through discovering the target hardware, creating a simple OS provisioning plan and profile, and launching a job to provision an OS. There's further information in the Provisioning Operating Systems section of the Feature Guide.

    Read the article

  • Is WCF suitable for writing an application which is shared among applications?

    - by RPK
    I have developed and deployed a few ASP.NET applications. Sometimes I want to stop users from either inserting or updating a record when: (1) maintenance is going on, or (2) operations are suspended due to a payment delay. In one of my recent applications I have implemented this feature by first checking the database operations for locked status. If either of the above conditions is fulfilled, database operations like insert and update are not carried out. I now need this feature in all the old applications and in the future applications I build. I want to know whether WCF is suitable in this scenario, as I want to share methods, or an independent locking application, among various other applications. Is WCF appropriate for this type of scenario?
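
    As a rough illustration of the kind of shared service being described - not code from the question - a centralized "operation lock" exposed over WCF could look something like the sketch below. The interface, class, and member names are hypothetical, and in a real deployment the lock state would live in a database or cache rather than in memory.

        // Minimal sketch of a shared lock-status service that several
        // ASP.NET applications could query before allowing inserts/updates.
        // All names here (IOperationLockService, IsLocked, SetLock) are
        // illustrative assumptions, not an existing API.
        using System.Collections.Concurrent;
        using System.ServiceModel;

        [ServiceContract]
        public interface IOperationLockService
        {
            // True when the named application is currently locked,
            // e.g. during maintenance or a payment-related suspension.
            [OperationContract]
            bool IsLocked(string applicationName);

            [OperationContract]
            void SetLock(string applicationName, bool locked, string reason);
        }

        public class OperationLockService : IOperationLockService
        {
            // In-memory state for the sketch only; real code would persist this.
            private static readonly ConcurrentDictionary<string, string> Locks =
                new ConcurrentDictionary<string, string>();

            public bool IsLocked(string applicationName)
            {
                return Locks.ContainsKey(applicationName);
            }

            public void SetLock(string applicationName, bool locked, string reason)
            {
                if (locked)
                {
                    Locks[applicationName] = reason ?? "locked";
                }
                else
                {
                    string removed;
                    Locks.TryRemove(applicationName, out removed);
                }
            }
        }

    Each application would then call IsLocked("MyApp") through a generated proxy or a ChannelFactory<IOperationLockService> before executing inserts or updates; whether WCF or something lighter-weight is the right transport is exactly the trade-off the question raises.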

    Read the article

  • farseer physics xbox samples not working

    - by Hugh
    I have downloaded a few of the sample projects from the official Farseer Physics website and I just can't get them to run on my Xbox. My connection to the Xbox is fine; other Xbox projects debug fine on it. I have tried running both the Xbox versions of the samples (for example the Farseer hello world sample project) and the Windows version by right-clicking the project and making a copy for Xbox. I get a bunch of errors, but what I always get is "unreachable code detected" referring to code in the Farseer library; it seems to be a problem to do with referencing/linking the Farseer library to the main game project. Help please!

    Read the article

  • WebCenter Content shared folders for clustering

    - by Kyle Hatlestad
    When configuring a WebCenter Content (WCC) cluster, one of the things that makes it unique among WebLogic Server applications is its requirement for a shared file system. This is actually no different than in 10g and previous versions of UCM, when it ran directly on a JVM. And while it is simple enough to say it needs a shared file system, there are some crucial details in how those directories are configured; if they aren't followed, you may end up with some unwanted behavior. This blog post will go into the details on how exactly the file systems should be split and what options are required. [Read More]

    Read the article

  • Multi database link and mix and match email alert

    - by menardmam
    I have a site which is a large database of people who have knowledge in different domains, such as teaching (maths, French, science, etc.). On the site there is a page where you can search for people based on different criteria, such as distance from home, grade, and sex. Now, I would like to add a page where people who are looking for a mentor can fill in a request, and when a tutor matching that request appears in the search area, an email will be sent to the requester. I know for sure that when, in January, you look for a math teacher for your 10-year-old son and find none, you won't come back in February, March... and on and on just to check. Maybe there is one now; you want to be informed automatically when such a tutor gets into the database (more or less like www.jobboom.com). So the question is, what CMS do I need to be able to do that? WordPress, Drupal, or something custom-made?

    Read the article

  • Wireless device not working...supposed problem with b43 driver

    - by Francesco
    I just installed Xubuntu 11.10 onto an old Acer Aspire 3003wlmi. The wireless card is not working. I followed all the steps illustrated in the troubleshooting guide but came up with nothing. It says that the driver is b43, but when I booted from the installation CD I noted down some messages saying "Error - b43/ucode " and "Error b43/phy_...something". I had Ubuntu 10.10 installed before and I remember I had similar problems, but I don't remember how I solved them. I would appreciate any help. Thanks a lot. Francesco

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy-to-use, code-first approach to configuration: you create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage. About once a week somebody asks me about JSON support, and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure, JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in. Hard Link Issues in a Component Library: Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere, I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5, you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library. Enter Dynamic Loading: So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides: no assembly reference means only dynamic access - no compiler type checking or Intellisense; and the host application must have a reference to JSON.NET or else get runtime errors. The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already. So there are a few things that are needed to make this work: dynamically create an instance and optionally attempt to load an Assembly (if not loaded); load types into dynamic variables; and use Reflection for a few tasks like statics/enums. The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.
Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non instance members, so there's still a little bit of work left to do with Reflection.Dynamic Object InstantiationThe first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might have not been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable project, assemblies are just in time loaded only when they are accessed.Instantiating a type is a two step process: Finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:/// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). 
Here's what the factory function looks like in JsonSerializationUtils:/// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; }This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config setting that might be useful to set, but the default seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.Another feature I need is have Enum values serialized as strings rather than numeric values which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.Doing the actual JSON ConversionFinally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files so I created four methods that handle these tasks two each for serialization and deserialization for string and file.Here's what the File Serialization looks like:/// <summary> /// Serializes an object instance to a JSON file. 
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; }You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.For full circle operation here's the DeserializeFromFile() version:/// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to serialize to</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; }This code is a little more compact since there are no prettifying options to set. 
Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer.You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.Using the JsonSerializationUtils WrapperThe final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider, that is responsible for handling reading and writing JSON values to and from files. The provider is simple a small wrapper around the SerializationUtils component and there's very little code to make this work now:The whole provider looks like this:/// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if(Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to XmlConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile,false,true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } }This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases.Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.Already, there are several other places in some other tools where I use JSON serialization this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use replacing quite a bit of code that was previously in use. 
And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy. Performance? Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. This will change though depending on the size of objects serialized - the larger the object, the more processing time is spent inside the actual dynamically activated components and the less difference there will be. Dynamic code is always slower, but how much it really affects your application primarily depends on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling' as I do here, the overhead is small enough to not matter. All that being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy performance. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency. Dynamic Loading - Worth it? Dynamic loading is not something you need to worry about, but on occasion dynamic loading makes sense. But there's a price to be paid in added code and a performance hit which depends on how frequently the dynamic code is accessed. But for some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files, adding dependencies and loading down distributions. These days when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset… © Rick Strahl, West Wind Technologies, 2005-2013. Posted in .NET, C#.
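
For context, here is a short usage sketch of the serialization helpers described above, based on the SerializeToFile() and DeserializeFromFile() signatures shown in the post. The ServerConfig class, its properties, and the file name are made up for illustration, and JSON.NET still has to be available to the host project at runtime for the dynamic load to succeed.

        // Hypothetical consumer of JsonSerializationUtils; ServerConfig is an
        // invented example type, not part of the library described above.
        using System;

        public class ServerConfig
        {
            public string SiteName { get; set; }
            public int MaxConnections { get; set; }
        }

        public static class JsonConfigDemo
        {
            public static void Run()
            {
                var config = new ServerConfig { SiteName = "Demo", MaxConnections = 20 };

                // Write the object out as JSON; JSON.NET is located and loaded
                // dynamically inside the utility, so this project needs no
                // compile-time reference to Newtonsoft.Json.
                bool ok = JsonSerializationUtils.SerializeToFile(
                    config, "serverconfig.json",
                    throwExceptions: false, formatJsonOutput: true);

                // Read it back; the result must be cast to the target type.
                var loaded = JsonSerializationUtils.DeserializeFromFile(
                    "serverconfig.json", typeof(ServerConfig)) as ServerConfig;

                if (ok && loaded != null)
                    Console.WriteLine(loaded.SiteName);
            }
        }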

    Read the article

  • Authentication issue with CUPS 1.5.3 on SMB Printer

    - by Julius
    I am trying to print to a Samba printer via CUPS. I have configured the printer all right, but there seems to be a problem with authentication. The error message I get is "Session setup failed: NT_STATUS_LOGON_FAILURE". The GUI also tells me: "Idle - Tree connect failed (NT_STATUS_ACCESS_DENIED)". It used to work with previous versions of CUPS (1.4.3 and 1.4.6) under Ubuntu 11.04. I am doing this on a clean install of Ubuntu 12.04, CUPS version 1.5.3. I have tried changing some rights relating to AppArmor, with no success, as described here: http://www.compdigitec.com/labs/2010/01/16/fixing-usrlibcupsbackendsmb-failed-error-in-ubuntu/ I have been working with Ubuntu for years - but this is the kind of problem I need help with.

    Read the article

  • .htaccess RedirectMatch 301 issue

    - by Steve
    Hi. I've moved my WordPress installation from one domain to another, and I want to use an .htaccess file on the original to redirect visitors to the new page on the new website. The old site is http://www.steve.doig.com.au/wordpress/. The new site is http://www.superlogical.net. I tried using the following .htaccess file in the /wordpress directory: RedirectMatch 301 http://www.steve.doig.com.au/wordpress(.*) http://www.superlogical.net/$1 However, all this does is redirect visitors to the URL http://www.superlogical.net/wordpress/. I guess this is working properly, but I don't have WordPress installed in a /wordpress folder on the new domain. How do I remove this from the URL redirected to? Thanks.

    Read the article

  • Imaging: Paper Paper Everywhere, but None Should be in Sight

    - by Kellsey Ruppel
    Author: Vikrant Korde, Technical Architect, Aurionpro's Oracle Implementation Services team My wedding photos are stored in several empty shoeboxes. Yes...I got married before digital photography was mainstream...which means I'm old. But my parents are really old. They have shoeboxes filled with vacation photos on slides (I doubt many of you have even seen a home slide projector...and I hope you never do!). Neither me nor my parents should have shoeboxes filled with any form of photographs whatsoever. They should obviously live in the digital world...with no physical versions in sight (other than a few framed on our walls). Businesses grapple with similar challenges. But instead of shoeboxes, they have file cabinets and warehouses jam packed with paper invoices, legal documents, human resource files, material safety data sheets, incident reports, and the list goes on and on. In fact, regulatory and compliance rules govern many industries, requiring that this paperwork is available for any number of years. It's a real challenge...especially trying to find archived documents quickly and many times with no backup. Which brings us to a set of technologies called Image Process Management (or simply Imaging or Image Processing) that are transforming these antiquated, paper-based processes. Oracle's WebCenter Content Imaging solution is a combination of their WebCenter suite, which offers a robust set of content and document management features, and their Business Process Management (BPM) suite, which helps to automate business processes through the definition of workflows and business rules. Overall, the solution provides an enterprise-class platform for end-to-end management of document images within transactional business processes. It's a solution that provides all of the capabilities needed - from document capture and recognition, to imaging and workflow - to effectively transform your ‘shoeboxes’ of files into digitally managed assets that comply with strict industry regulations. The terminology can be quite overwhelming if you're new to the space, so we've provided a summary of the primary components of the solution below, along with a short description of the two paths that can be executed to load images of scanned documents into Oracle's WebCenter suite. WebCenter Imaging (WCI): the electronic document repository that provides security, annotations, and search capabilities, and is the primary user interface for managing work items in the imaging solution SOA & BPM Suites (workflow): provide business process management capabilities, including human tasks, workflow management, service integration, and all other standard SOA features. 
It's interesting to note that there are a number of 'jumpstart' processes available to help accelerate the integration of business applications, such as the accounts payable invoice processing solution for E-Business Suite that facilitates the processing of large volumes of invoices. WebCenter Enterprise Capture (WEC): expedites the capture process of paper documents to digital images, offering high volume scanning and importing from email, and allows for flexible indexing options. WebCenter Forms Recognition (WFR): automatically recognizes, categorizes, and extracts information from paper documents with greatly reduced human intervention. WebCenter Content: the backend content server that provides versioning, security, and content storage. There are two paths that can be executed to send data from WebCenter Capture to WebCenter Imaging, both of which are described below: 1. Direct Flow - This is the simplest and quickest way to push an image scanned from WebCenter Enterprise Capture (WEC) to WebCenter Imaging (WCI), using the bare minimum metadata. The WEC activities are defined below: the paper document is scanned (or imported from email); the scanned image is indexed using a predefined indexing profile; and the image is committed directly into the process flow. 2. WFR (WebCenter Forms Recognition) Flow - This is the more complex process, during which data is extracted from the image using a series of operations including Optical Character Recognition (OCR), Classification, Extraction, and Export. This process creates three files (TIFF, XML, and TXT), which are fed to the WCI Input Agent (the high-speed import/filing module). The WCI Input Agent directory is a standard ingestion method for adding content to WebCenter Imaging; the process for doing so is described below: WEC commits the batch using the respective commit profile. A TIFF file is created, passing data through the file name by including values separated by "_" (underscores). WFR completes OCR, classification, extraction, and export, and pulls the data from the image. In addition to the TIFF file, which contains the document image, an XML file containing the extracted data and a TXT file containing the metadata that will be filled in WCI are also created. All three files are exported to WCI's Input Agent directory. Based on previously defined "input masks", the WCI Input Agent will pick up the seeding file (often the TXT file). Finally, the TIFF file is pushed into UCM and a unique web-viewable URL is created. Based on the mapping data read from the TXT file, a new record is created in the WCI application. Although these processes may seem complex, each Oracle component works seamlessly with the others to achieve a high-performing and scalable platform. The solution has been field-tested at some of the largest enterprises in the world and has transformed millions and millions of paper-based documents into more easily manageable digital assets. For more information on how an Imaging solution can help your business, please contact [email protected] (for U.S. West inquiries) or [email protected] (for U.S. East inquiries). About the Author: Vikrant is a Technical Architect in Aurionpro's Oracle Implementation Services team, where he delivers WebCenter-based Content and Imaging solutions to Fortune 1000 clients. With more than twelve years of experience designing, developing, and implementing Java-based software solutions, Vikrant was one of the founding members of Aurionpro's WebCenter-based offshore delivery team. He can be reached at [email protected].

    Read the article

  • Why is concept art not signed by the author?

    - by Gerald
    I am a starting concept artist who would like to enter the gaming industry. I noticed that some AAA titles show their concept art with no artist's signature (only a reference to the game, such as for Star Wars: The Old Republic: 2013 ALL RIGHTS RESERVED BioWare, LucasArts). I asked myself a question: what possible harm could my autograph cause on public concept art if I am not a well-known concept artist such as Adam Adamowicz (who did concepts for Skyrim)? Why would a prospective boss tell me not to leave my "fingerprint" on the picture, despite the fact that I am a very talented artist?

    Read the article

  • Change the icon color in LibreOffice Calc

    - by user242234
    There appears to be no way of changing icon color in a Calc chart. After creating an XY scatter plot, I can select "format data series" and can change the icon symbol and size, but not the color. There is a drop-down for line color, but when I do not have lines displayed this drop-down menu is grayed out. I have found that I can add lines to my chart, change the color of the lines, then remove the lines, and I will have changed the color of the icons, but there should be a direct way to change icon color (and there used to be in previous versions). I am using LibreOffice Calc version 4.2.3.3, Build ID: 420m0(Build:3), running on Ubuntu 14.04.

    Read the article

  • How to do reflective collisions with particles hitting background tiles?

    - by Shawn LeBlanc
    In my 2d pixel old-school platformer, I'm looking for methods for bouncing particles off of background tiles. Particles aren't affected by gravity and collisions are "reflective". By that I mean a particle hitting the side of a square tile at 45 degrees should bounce off at 45 degrees as well. We can assume that tiles will always be perfectly square. No slopes or anything. What are efficient methods and algorithms to do this? I'd be implementing this on a Sega Genesis.
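
    For reference, the usual approach to this kind of reflective bounce is to resolve movement one axis at a time and negate the velocity component that points into the face that was hit; a 45-degree hit on a flat wall then leaves at 45 degrees, as described above. The sketch below is a generic illustration in C# rather than Genesis code (on real hardware this would be fixed-point integer math), and all the names in it are hypothetical.

        // Generic sketch of reflective particle collisions against axis-aligned
        // square tiles: flip VX when a vertical face is hit, VY for a horizontal
        // face. Particles are gravity-free, matching the question.
        public struct Particle
        {
            public float X, Y;    // position in pixels
            public float VX, VY;  // velocity per frame
        }

        public static class TileBounce
        {
            public static void Step(ref Particle p, bool[,] solid, int tileSize)
            {
                // Move one axis at a time so we know which face was crossed.
                float nx = p.X + p.VX;
                if (IsSolid(solid, nx, p.Y, tileSize))
                    p.VX = -p.VX;            // hit a vertical face: mirror horizontally
                else
                    p.X = nx;

                float ny = p.Y + p.VY;
                if (IsSolid(solid, p.X, ny, tileSize))
                    p.VY = -p.VY;            // hit a horizontal face: mirror vertically
                else
                    p.Y = ny;
            }

            private static bool IsSolid(bool[,] solid, float x, float y, int tileSize)
            {
                int tx = (int)(x / tileSize);
                int ty = (int)(y / tileSize);
                if (tx < 0 || ty < 0 || tx >= solid.GetLength(0) || ty >= solid.GetLength(1))
                    return false;            // outside the map counts as empty
                return solid[tx, ty];
            }
        }

    Because only one component is flipped per face, a particle arriving at 45 degrees leaves at the mirrored 45 degrees; hitting an exact corner flips both components, which is usually acceptable for particle effects.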

    Read the article

  • Windows XP with Ubuntu 14.04 on 2 separate hard drives

    - by maplenet2
    I am new to Ubuntu. I have Windows XP Professional 32-bit on one 300GB IDE hard drive and Ubuntu 14.04 running on another 61GB IDE hard drive, and I cannot get my Windows XP to boot with Grub! When I select Windows XP from the boot menu, Grub just restarts my computer. The computer I have with those two hard drives is a Dell Optiplex GX240, so the hardware is old, and its BIOS won't let me change the boot priority on the two IDE hard drives. What can I do now? Is there a step I missed when installing Ubuntu? Can I edit Grub to boot Windows XP without messing with the BIOS? Do I have to downgrade to an older release of Ubuntu to make it work? I am willing to reinstall Ubuntu, if that's what it takes.

    Read the article

  • First RC of PostgreSQL 9.2 released, announced by the PostgreSQL Global Development Group

    The PostgreSQL Global Development Group has announced the first Release Candidate of PostgreSQL 9.2. This major version includes considerable advances in performance and in horizontal and vertical scalability. Users who want to help track down any remaining bugs are invited to download and test this RC1 of PostgreSQL 9.2 as soon as possible. This RC1 contains many fixes over the previous Beta versions, including: numerous updates to the documentation and translations; a fix to cascading REVOKE of privileges; the removal of loop problems in pg_dump's export of security-level views; fixes to ...

    Read the article

  • Setting up UPS monitoring

    - by Andrew Heath
    I have acquired a second-hand Uninterruptible Power Supply (UPS) that I have refurbished (new battery) and hope to use with my Ubuntu 12.10 system. It's a SOLA 330 with serial out. I have installed the NUT metapackage and NUT Monitor from the Software Centre, but am not sure how to go about setting it all up. A Google search brings up several ways of configuring Network UPS Tools (NUT) or HAL-Drivers; however, HAL-Drivers appears to be obsolete, and many of the commands and config files mentioned do not exist in 12.10 or the current version of NUT (most articles are a few years old). One tutorial seemed to work except for the error "no UPS definitions found in ups.conf", even though ups.conf has values in it as laid out in the tutorial. How do I go about setting my system to monitor the UPS for a shutdown signal? Also, is there a command to determine whether the UPS is communicating through the serial connection and on what port (to help with setup and configuring; e.g. /dev/ttyS0 is mentioned in one of the tutorials I read)?

    Read the article

  • My Thoughts On Twitter

    - by andyleonard
    This is a repost from my old blog. It kept showing up in search results when I looked for articles about Twitter and social networking, so I thought I'd share it here. :{> Introduction There's been lots of speculation about Twitter and what it means to the modern technologist. I've found some of it pretty insightful and some of it misinformed. I use Twitter . A bunch. Not as much as some , but more than average . I like it. The Best Defense... I don't intend to defend Twitter because I do not...(read more)

    Read the article
