Search Results

Search found 5714 results on 229 pages for 'power mosfet'.

Page 80/229

  • Turn off keyboard back-light Sony (VAIO SVF1521DCXW)

    - by KasiyA
    I have a Sony laptop and I want to turn the keyboard back-light off. There is no shortcut function key on the keyboard for this. I can turn it off with VAIO Control Center in Windows, but I don't know how to turn it off in Ubuntu 14.04.

    The usual path isn't available to me: /sys/devices/platform/sony-laptop/kbd_backlight doesn't exist on my machine. I do have the folder /sys/devices/platform/sony-laptop/, and it contains three folders (one power folder and two symlinked folders, driver and subsystem) and five files: battery_care_health, battery_care_limiter, modalias, touchpad and event.

    This is the output of running sudo modinfo sony-laptop:

        filename:       /lib/modules/3.13.0-34-generic/kernel/drivers/platform/x86/sony-laptop.ko
        version:        0.6
        license:        GPL
        description:    Sony laptop extras driver (SPIC and SNC ACPI device)
        author:         Stelian Pop, Mattia Dongili
        srcversion:     5C6E050349475558A231C59
        alias:          acpi*:SNY6001:*
        alias:          acpi*:SNY5001:*
        depends:
        intree:         Y
        vermagic:       3.13.0-34-generic SMP mod_unload modversions
        signer:         Magrathea: Glacier signing key
        sig_key:        50:0B:C5:C8:7D:4B:11:5C:F3:C1:50:4F:7A:92:E2:33:C6:14:3D:58
        sig_hashalgo:   sha512
        parm:           debug:set this to 1 (and RTFM) if you want to help the development of this driver (int)
        parm:           no_spic:set this if you don't want to enable the SPIC device (int)
        parm:           compat:set this if you want to enable backward compatibility mode (int)
        parm:           mask:set this to the mask of event you want to enable (see doc) (ulong)
        parm:           camera:set this to 1 to enable Motion Eye camera controls (only use it if you have a C1VE or C1VN model) (int)
        parm:           minor:minor number of the misc device for the SPIC compatibility code, default is -1 (automatic) (int)
        parm:           kbd_backlight:set this to 0 to disable keyboard backlight, 1 to enable it (default: no change from current value) (int)
        parm:           kbd_backlight_timeout:meaningful values vary from 0 to 3 and their meaning depends on the model (default: no change from current value) (int)

    With the suggested commands:

        sudo modprobe -r sony_laptop
        sudo modprobe -v sony_laptop kbd_backlight=0

    the output was:

        insmod /lib/modules/3.13.0-34-generic/kernel/drivers/platform/x86/sony-laptop.ko kbd_backlight=0

    but it doesn't seem to affect the keyboard backlight. I also tried this command:

        sudo modprobe -v sony_laptop kbd_backlight_timeout=3 kbd_backlight=0

    which doesn't seem to affect the keyboard backlight either. I also tested after restarting the laptop and saw no effect.

    Important: by default, the keyboard backlight is off; when I press a key it turns on, and after 15 seconds it turns off again. The result is the same on battery and on AC power.

    I also followed http://ubuntuforums.org/showthread.php?t=2139597 and "Keyboard backlighting not working on a Vaio VPCSB11FX", but those didn't work either.

    Read the article

  • How to prevent a hacked-server from spoofing a master server?

    - by Cody Smith
    I wish to set up a room-based multiplayer game model where players may host matches and serve as the host (i.e. the server with authoritative power). I wish to host a master server which tracks players' items, rank, cash, exp, etc. In such a model, how can I prevent someone who is hosting a game (with a modified server) from spoofing the master server with invalid match results, thus gaining exp, money, or rankings? Thanks. -Cody
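    One common mitigation, shown here as a minimal sketch of my own (not from the question; the class and key scheme are illustrative assumptions): give each player's client a per-session key the match host never sees, have every client report the outcome authenticated with that key, and have the master accept a result only when the independently authenticated reports agree.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        // Illustrative only: match hosts cannot forge reports because they
        // never hold the players' session keys; the master cross-checks the
        // authenticated reports from each participant before awarding exp/cash.
        static class MatchResultVerifier
        {
            // client side: MAC over a canonical result string with the player's session key
            public static byte[] Sign(byte[] sessionKey, string canonicalResult)
            {
                using (var hmac = new HMACSHA256(sessionKey))
                    return hmac.ComputeHash(Encoding.UTF8.GetBytes(canonicalResult));
            }

            // master side: does this report carry a valid MAC for this player?
            public static bool Verify(byte[] sessionKey, string canonicalResult, byte[] mac)
            {
                var expected = Sign(sessionKey, canonicalResult);
                if (expected.Length != mac.Length) return false;

                // compare without early exit to avoid leaking timing information
                int diff = 0;
                for (int i = 0; i < expected.Length; i++) diff |= expected[i] ^ mac[i];
                return diff == 0;
            }
        }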

    Read the article

  • The Database as Intellectual Property

    - by Jonathan Kehayias
    Every so often, a question shows up on the forums in the form of, "How do I prevent anyone from accessing my database schema, including local administrators and sysadmins, in SQL Server?" I usually laugh a little and shake my head when I read a question like this, because it demonstrates a complete lack of understanding of the power an administrator has over SQL Server. The simple answer is this: if you don't want your database schema to ever be accessed or known, don't distribute your database....(read more)

    Read the article

  • Lubuntu: neither shut-down nor restart works

    - by Rantanplan
    I have a freshly installed Lubuntu 14.04.1 (installed with the forcepae option on a laptop with a Pentium M processor). The only problem that I have found so far is that I cannot shut down or restart the laptop. It always keeps showing "Lubuntu" and some dots. Pressing Esc, it says:

        wait-for-state stop/waiting
        * Stopping rsync daemon rsync                  [OK]
        * Asking all remaining processes to terminate… [OK]
        * Killing all remaining processes…             [fail]
        ModemManager[597]: <info> Caught signal, shutting down…
        ModemManager[597]: <info> ModemManager is shut down
        nm-dispatcher.action: Could not get the system bus. Make sure the message bus daemon is running! Message: Did not receive a reply. Possible causes: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
        * Deactivating swap…                           [OK]
        * Will now halt

    The cursor remains blinking, but the only way to switch the laptop off is to hold the power button down for some seconds. I tried sudo shutdown -h now, sudo halt and sudo poweroff, all resulting in the same problem.

    I also tried adding acpi=force to GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" in /etc/default/grub and running sudo update-grub; after that, using the taskbar's shut-down button led to a direct stop of the laptop, equivalent to holding the power button down for some seconds.

    Next I followed the answer at http://askubuntu.com/a/202481/288322. Now I directly get some messages during shut-down, starting with:

        wait-for-state stop/waiting
        * Stopping rsync daemon rsync                  [OK]
        * Asking all remaining processes to terminate… [OK]
        [  240.944277] INFO: task kworker/0:2:24: blocked for more than 120 seconds.
        [  240.944461] Tainted: G S 3.13.0-34-generic #60-Ubuntu
        [  240.944623] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    followed by some more similar lines and then:

        * Killing all remaining processes…             [fail]
        ModemManager[576]: <info> Caught signal, shutting down…
        nm-dispatcher.action: Caught signal 15, shutting down...
        ModemManager[576]: <info> ModemManager is shut down
        * Deactivating swap…                           [OK]
        * Will now halt
        [  600.944276] INFO: task kworker/0:2:24: blocked for more than 120 seconds.
        [  600.944458] Tainted: G S 3.13.0-34-generic #60-Ubuntu
        [  600.944619] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    Then nothing more came during the next 5 minutes. If you know where I can find relevant error information, I will be happy to search for it.

    Read the article

  • Make Serious Money Online With Google's Keyword Tool

    Before you can make serious money online, you need to understand keywords. You might know that you need to do keyword research before you start an online business, but to use the full power of Google's Keyword Tool, you need to understand how to use its powerful features.

    Read the article

  • C#/.NET Little Wonders: The Generic Func Delegates

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    Back in one of my three original "Little Wonders" Trilogy of posts, I had listed generic delegates as one of the Little Wonders of .NET. Later, someone posted a comment saying that they would love more detail on the generic delegates and their uses, since my original entry just scratched the surface of them. Last week, I began our look at some of the handy generic delegates built into .NET with a description of delegates in general, and the Action family of delegates. For this week, I'll launch into a look at the Func family of generic delegates and how they can be used to support generic, reusable algorithms and classes.

    Quick Delegate Recap

    Delegates are similar to function pointers in C++ in that they allow you to store a reference to a method. They can store references to either static or instance methods, and can actually be used to chain several methods together in one delegate.

    Delegates are very type-safe and can be satisfied with any standard method, anonymous method, or lambda expression. They can also be null (referring to no method), so care should be taken to make sure that the delegate is not null before you invoke it.

    Delegates are defined using the keyword delegate, where the delegate's type name is placed where you would typically place the method name:

        // This delegate matches any method that takes string, returns nothing
        public delegate void Log(string message);

    This defines a delegate type named Log that can be used to store references to any method(s) that satisfy its signature (whether instance, static, lambda expression, etc.).

    Delegate instances can then be assigned zero (null) or more methods using the = operator, which replaces the existing delegate chain, or the += operator, which adds a method to the end of a delegate chain:

        // creates a delegate instance named currentLogger defaulted to Console.Out.WriteLine (static method)
        Log currentLogger = Console.Out.WriteLine;

        // invokes the delegate, which writes to the console out
        currentLogger("Hi Standard Out!");

        // append a delegate to Console.Error.WriteLine to go to std error
        currentLogger += Console.Error.WriteLine;

        // invokes the delegate chain and writes message to std out and std err
        currentLogger("Hi Standard Out and Error!");

    While delegates give us a lot of power, it can be cumbersome to re-create fairly standard delegate definitions repeatedly, so for this purpose the generic delegates were introduced in various stages in .NET. These support various method types with particular signatures.

    Note: a caveat with generic delegates is that while they can support multiple parameters, they do not match methods that contain ref or out parameters. If you want a delegate to represent methods that take ref or out parameters, you will need to create a custom delegate, as the quick sketch below illustrates.
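    For instance (an illustration of mine, not from the original post), a TryParse-style method has an out parameter, so no Func<...> overload can match it, and a custom delegate type is needed:

        // no Func<...> matches a signature with an out parameter,
        // so we declare a custom delegate type for TryParse-style methods
        public delegate bool TryParser<T>(string input, out T result);

        // int.TryParse(string, out int) satisfies the custom delegate's signature
        TryParser<int> parser = int.TryParse;

        int value;
        bool success = parser("42", out value);  // success == true, value == 42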
    We've got the Func… delegates

    Just like its cousin, the Action delegate family, the Func delegate family gives us a lot of power to use generic delegates to make classes and algorithms more generic. Using them keeps us from having to define a new delegate type when we need to make a class or algorithm generic.

    Remember that the point of the Action delegate family was to be able to perform an "action" on an item, with no return results. Thus Action delegates can be used to represent most methods that take 0 to 16 arguments but return void.

    The Func delegate family was introduced in .NET 3.5 with the advent of LINQ, and gives us the power to define a function that can be called on 0 to 16 arguments and returns a result. Thus, the main difference between Action and Func, from a delegate perspective, is that Actions return nothing, but Funcs return a result.

    The Func family of delegates has signatures as follows:

        Func<TResult> – matches a method that takes no arguments and returns a value of type TResult.
        Func<T, TResult> – matches a method that takes an argument of type T and returns a value of type TResult.
        Func<T1, T2, TResult> – matches a method that takes arguments of type T1 and T2 and returns a value of type TResult.
        Func<T1, T2, …, TResult> – and so on, up to 16 arguments, returning a value of type TResult.

    These are handy because they quickly allow you to specify that a method or class you design will perform a function to produce a result, as long as the method you specify meets the signature.

    For example, let's say you were designing a generic aggregator, and you wanted to allow the user to define how the values will be aggregated into the result (i.e. Sum, Min, Max, etc…). To do this, we would ask the user of our class to pass in a method that would take the current total and the next value, and produce a new total. A class like this could look like:

        public sealed class Aggregator<TValue, TResult>
        {
            // holds method that takes previous result, combines with next value, creates new result
            private Func<TResult, TValue, TResult> _aggregationMethod;

            // gets the current result of aggregation
            public TResult Result { get; private set; }

            // construct the aggregator given the method to use to aggregate values
            public Aggregator(Func<TResult, TValue, TResult> aggregationMethod)
            {
                if (aggregationMethod == null) throw new ArgumentNullException("aggregationMethod");

                _aggregationMethod = aggregationMethod;
            }

            // method to add next value
            public void Aggregate(TValue nextValue)
            {
                // performs the aggregation method on the current result and next value
                Result = _aggregationMethod(Result, nextValue);
            }
        }

    Of course, LINQ already has an Aggregate extension method, but that works on a sequence of IEnumerable<T>, whereas this is designed to work more with aggregating single results over time (such as keeping track of a max response time for a service).
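    For contrast, here is a quick sketch of my own (not from the original post) of LINQ's built-in Aggregate performing the same kind of fold over a whole sequence at once:

        using System.Linq;

        var responseTimes = new[] { 13, 7, 20 };

        // seed of 0, same (total, next) => total + next function as the sum aggregator
        int sum = responseTimes.Aggregate(0, (total, next) => total + next);  // 40

        // Math.Max works here too, giving the max of the sequence
        int max = responseTimes.Aggregate(0, Math.Max);  // 20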
    We could then use this generic aggregator to find the sum of a series of values over time, or the max of a series of values over time (among other things):

        // creates an aggregator that adds the next to the total to sum the values
        var sumAggregator = new Aggregator<int, int>((total, next) => total + next);

        // creates an aggregator (using a static method) that returns the max of previous result and next
        var maxAggregator = new Aggregator<int, int>(Math.Max);

    So, if we were timing the response time of a web method every time it was called, we could pass that response time to both of these aggregators to get an idea of the total time spent in that web method, and the max time spent in any one call to the web method:

        // total will be 13 and max 13
        int responseTime = 13;
        sumAggregator.Aggregate(responseTime);
        maxAggregator.Aggregate(responseTime);

        // total will be 20 and max still 13
        responseTime = 7;
        sumAggregator.Aggregate(responseTime);
        maxAggregator.Aggregate(responseTime);

        // total will be 40 and max now 20
        responseTime = 20;
        sumAggregator.Aggregate(responseTime);
        maxAggregator.Aggregate(responseTime);

    The Func delegate family is useful for making generic algorithms and classes, and in particular allows the caller of the method or user of the class to specify a function to be performed in order to generate a result.

    What is the result of a Func delegate chain?

    If you remember, we said earlier that you can assign multiple methods to a delegate by using the += operator to chain them. So how does this affect delegates such as Func that return a value, when applied to something like the code below?

        Func<int, int, int> combo = null;

        // What if we wanted to aggregate the sum and max together?
        combo += (total, next) => total + next;
        combo += Math.Max;

        // what is the result?
        var comboAggregator = new Aggregator<int, int>(combo);

    Well, in .NET if you chain multiple methods in a delegate, they will all get invoked, but the result of the delegate is the result of the last method invoked in the chain. Thus, this aggregator would always yield the Math.Max() result. The other chained method (the sum) gets executed first, but its result is thrown away:

        // result is 13
        int responseTime = 13;
        comboAggregator.Aggregate(responseTime);

        // result is still 13
        responseTime = 7;
        comboAggregator.Aggregate(responseTime);

        // result is now 20
        responseTime = 20;
        comboAggregator.Aggregate(responseTime);

    So remember, you can chain multiple Func (or other delegates that return values) together, but if you do so you will only get the last executed result.
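    As a side note of mine (not from the original post): if you ever do need every result from such a chain, you can walk the chain yourself with Delegate.GetInvocationList() and invoke each method individually, as in this minimal sketch:

        // invoke each delegate in the chain separately and keep all the results
        foreach (Func<int, int, int> f in combo.GetInvocationList())
        {
            int result = f(13, 7);  // the sum yields 20, Math.Max yields 13
            Console.WriteLine(result);
        }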
    Func delegates and co-variance/contra-variance in .NET 4.0

    Just like the Action delegate, as of .NET 4.0, the Func delegate family is contra-variant on its arguments. In addition, it is co-variant on its return type. To support this, in .NET 4.0 the signatures of the Func delegates changed to:

        Func<out TResult> – matches a method that takes no arguments and returns a value of type TResult (or a more derived type).
        Func<in T, out TResult> – matches a method that takes an argument of type T (or a less derived type) and returns a value of type TResult (or a more derived type).
        Func<in T1, in T2, out TResult> – matches a method that takes arguments of type T1 and T2 (or less derived types) and returns a value of type TResult (or a more derived type).
        Func<in T1, in T2, …, out TResult> – and so on, up to 16 arguments, returning a value of type TResult (or a more derived type).

    Notice the addition of the in and out keywords before each of the generic type placeholders. As we saw last week, the in keyword is used to specify that a generic type can be contra-variant -- it can match the given type or a type that is less derived. The out keyword is used to specify that a generic type can be co-variant -- it can match the given type or a type that is more derived.

    On contra-variance, if you say you need a function that will accept a string, you can just as easily give it a function that accepts an object. In other words, if you say "give me a function that will process dogs", I could pass you a method that will process any animal, because all dogs are animals. On the co-variance side, if you say you need a function that returns an object, you can just as easily pass it a function that returns a string, because any string returned from the given method can be accepted by a delegate expecting an object result, since string is more derived. Once again, in other words, if you say "give me a method that creates an animal", I can pass you a method that will create a dog, because all dogs are animals.

    It really all makes sense: you can pass a more specific thing to a less specific parameter, and you can return a more specific thing as a less specific result. In other words, pay attention to the direction the item travels (parameters go in, results come out). Keeping that in mind, you can always pass more specific things in and return more specific things out.

    For example, in the code below, we have a method that takes a Func<object> to generate an object, but we can pass it a Func<string> because the return type of object can obviously accept a return value of string as well:

        // since Func<object> is co-variant, this will accept Func<string>, etc...
        public static string Sequence(int count, Func<object> generator)
        {
            var builder = new StringBuilder();

            for (int i = 0; i < count; i++)
            {
                object value = generator();
                builder.Append(value);
            }

            return builder.ToString();
        }

    Even though the method above takes a Func<object>, we can pass a Func<string> because the TResult type placeholder is co-variant and accepts types that are more derived as well:

        // delegate that's typed to return string.
        Func<string> stringGenerator = () => DateTime.Now.ToString();

        // This will work in .NET 4.0, but not in previous versions
        Sequence(100, stringGenerator);

    Previous versions of .NET implemented some forms of co-variance and contra-variance before, but .NET 4.0 goes one step further and allows you to pass or assign a Func<A, BResult> to a Func<Y, ZResult> as long as A is less derived than (or the same as) Y, and BResult is more derived than (or the same as) ZResult.
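    The code above only shows the co-variant (return type) side, so here is a minimal sketch of my own for the contra-variant (argument) side:

        // a predicate typed to take any object
        Func<object, bool> isNull = o => o == null;

        // contra-variance lets a Func<object, bool> stand in where a
        // Func<string, bool> is expected (string is more derived than object)
        Func<string, bool> stringIsNull = isNull;

        bool b = stringIsNull("hello");  // false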
    Sidebar: The Func and the Predicate

    A method that takes one argument and returns a bool is generally thought of as a predicate. Predicates are used to examine an item and determine whether that item satisfies a particular condition. Predicates are typically unary, but you may also have binary and other predicates as well.

    Predicates are often used to filter results, such as in the LINQ Where() extension method:

        var numbers = new[] { 1, 2, 4, 13, 8, 10, 27 };

        // call Where() using a predicate which determines if the number is even
        var evens = numbers.Where(num => num % 2 == 0);

    As of .NET 3.5, predicates are typically represented as Func<T, bool>, where T is the type of the item to examine. Prior to .NET 3.5, there was a Predicate<T> type that tended to be used (which we'll discuss next week) and is still supported, but most developers recommend using Func<T, bool> now, as it prevents confusion with overloads that accept unary predicates and binary predicates, etc.:

        // this seems more confusing as an overload set, because of Predicate vs Func
        public static void SomeMethod(Predicate<int> unaryPredicate) { }
        public static void SomeMethod(Func<int, int, bool> binaryPredicate) { }

        // this seems more consistent as an overload set, since it just uses Func
        public static void SomeMethod(Func<int, bool> unaryPredicate) { }
        public static void SomeMethod(Func<int, int, bool> binaryPredicate) { }

    Also, even though Predicate<T> and Func<T, bool> match the same signatures, they are separate types! Thus you cannot assign a Predicate<T> instance to a Func<T, bool> instance and vice versa:

        // the same method, lambda expression, etc. can be assigned to both
        Predicate<int> isEven = i => (i % 2) == 0;
        Func<int, bool> alsoIsEven = i => (i % 2) == 0;

        // but the delegate instances cannot be directly assigned, strongly typed!
        // ERROR: cannot convert type...
        isEven = alsoIsEven;

        // however, you can assign by wrapping in a new instance:
        isEven = new Predicate<int>(alsoIsEven);
        alsoIsEven = new Func<int, bool>(isEven);

    So, the general advice that seems to come from most developers is that Predicate<T> is still supported, but we should use Func<T, bool> for consistency in .NET 3.5 and above.

    Sidebar: Func as a Generator for Unit Testing

    One area of difficulty in unit testing can be code that depends on the time of day. We'd still want to unit test our code to make sure the logic is accurate, but we don't want the results of our unit tests to depend on the time they are run.

    One way (of many) around this is to create an internal generator that will produce the "current" time of day. This would default to returning the result of DateTime.Now (or some other method), but we could inject specific times for our unit testing. Generators are typically methods that return (generate) a value for use in a class/method.

    For example, say we are creating a CacheItem<T> class that represents an item in the cache, and we want to make sure the item shows as expired if the age is more than 30 seconds.
    Such a class could look like:

        // responsible for maintaining an item of type T in the cache
        public sealed class CacheItem<T>
        {
            // helper method that returns the current time
            private static Func<DateTime> _timeGenerator = () => DateTime.Now;

            // allows internal access to the time generator
            internal static Func<DateTime> TimeGenerator
            {
                get { return _timeGenerator; }
                set { _timeGenerator = value; }
            }

            // time the item was cached
            public DateTime CachedTime { get; private set; }

            // the item cached
            public T Value { get; private set; }

            // item is expired if older than 30 seconds
            public bool IsExpired
            {
                get { return _timeGenerator() - CachedTime > TimeSpan.FromSeconds(30.0); }
            }

            // creates the new cached item, setting cached time to "current" time
            public CacheItem(T value)
            {
                Value = value;
                CachedTime = _timeGenerator();
            }
        }

    Then, we can use this construct to unit test our CacheItem<T> without any time dependencies:

        var baseTime = DateTime.Now;

        // start with current time stored above (so it doesn't drift)
        CacheItem<int>.TimeGenerator = () => baseTime;

        var target = new CacheItem<int>(13);

        // now add 15 seconds, should still be non-expired
        CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(15);

        Assert.IsFalse(target.IsExpired);

        // now add 31 seconds, should now be expired
        CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(31);

        Assert.IsTrue(target.IsExpired);

    Now we can unit test for 1 second before, 1 second after, 1 millisecond before, 1 day after, etc. Func delegates can be a handy tool for this type of value generation to support more testable code.

    Summary

    Generic delegates give us a lot of power to make truly generic algorithms and classes. The Func family of delegates is a great way to specify functions that calculate a result based on 0 to 16 arguments. Stay tuned in the weeks that follow for other generic delegates in the .NET Framework!

    Technorati Tags: .NET, C#, CSharp, Little Wonders, Generics, Func, Delegates

    Read the article

  • Different Style Technique

    - by Muhammad Iqbal Dwi Cahyo
    I'm a newbie here. Does anyone know how to create a character whose special technique has its own kind of movement? I want to give my 2D character a power technique like a Rasengan: first the ball just spins around, then it grows bigger and bigger, and it blows up if it touches the opponent. How should the code work, and what must I do? Please guide me, thanks a lot... ^_^
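    A minimal sketch of one way such an effect is often structured (entirely illustrative -- the class, fields, and growth numbers below are my assumptions, not from the question): spin the ball each frame, scale it up over time, and trigger the explosion on contact.

        // illustrative sketch of a spin-then-grow projectile effect
        class EnergyBall
        {
            public float Radius { get; private set; } = 0.2f;  // starting size (assumed units)
            public float Angle  { get; private set; }          // current spin angle in degrees

            const float SpinSpeed  = 720f;  // degrees per second (assumed)
            const float GrowthRate = 0.6f;  // proportional growth per second (assumed)

            // called once per frame with the elapsed time in seconds
            public void Update(float deltaTime)
            {
                Angle = (Angle + SpinSpeed * deltaTime) % 360f;
                Radius += Radius * GrowthRate * deltaTime;     // grows faster as it gets bigger
            }

            // explode when the ball overlaps the opponent's hitbox
            public bool Hits(float distanceToOpponent, float opponentRadius)
            {
                return distanceToOpponent <= Radius + opponentRadius;
            }
        }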

    Read the article

  • The Great Divorce

    - by BlackRabbitCoder
    I have a confession to make: I've been in an abusive relationship for more than 17 years now. Yes, I am not ashamed to admit it, but I'm finally doing something about it.

    I met her in college. She was new and sexy and amazingly fast -- and I'd never met anything like her before. Her style and her power captivated me, and I couldn't wait to learn more about her. I took a chance on her, and though I learned a lot from her -- and will always be grateful for my time with her -- I think it's time to move on.

    Her name was C++, and she so outshone my previous love, C, that any thoughts of going back evaporated in the heat of this new romance. She promised me she'd be gentle and not hurt me the way C did. She promised me she'd clean up after herself better than C did. She promised me she'd be less enigmatic and easier to keep happy than C was. But I was deceived. Oh sure, as far as truth goes, it wasn't a complete lie. To some extent she was more fun, more powerful, safer, and easier to maintain. But it just wasn't good enough -- or at least it's not good enough now.

    I loved C++; some part of me still does. It's my first love among programming languages, and I recognize its raw power, its blazing speed, and its improvements over its predecessor. But with today's hardware, at speeds we could only dream of twenty years ago, that need for speed -- at the cost of all else -- has died, and that has left my feelings for C++ moribund.

    If I ever need to write an operating system or a device driver, then I might need that speed. But 99% of the time I don't. I'm a business-type programmer, and chances are 90% of you are too, and even the ones who need speed at all costs may be surprised by how much you sacrifice for it. That's not to say that I don't want my software to perform, and it's not to say that in the business world we don't care about speed or that our job is somehow less difficult or technical. There are many times we write programs to handle millions of real-time updates, process thousands of financial transactions, or track trading algorithms where every second counts.

    But if I choose to write my code in C++ purely for speed, chances are I'll never notice the speed increase -- and equally true, chances are it will be far more prone to crash and far less easy to maintain. Nearly without fail, it's the macro-optimizations you need, not the micro-optimizations. If I choose to write an O(n²) algorithm when I could have used an O(n) algorithm -- that can kill me. If I choose to go to the database to load a piece of unchanging data every time instead of caching it on first load -- that too can kill me. And if I cross the network multiple times for pieces of data instead of getting it all at once -- yes, that can also kill me. But choosing an overly powerful and dangerous mid-level language to squeeze out every last drop of performance will realistically not make stock orders process any faster, and will more likely than not open up the system to more risk of crashes and resource leaks.

    And that's when my love for C++ began to die. When I noticed that I didn't need that speed anymore. That that speed was really kind of a lie. Sure, I can be super efficient and pack bits into a byte instead of using separate boolean values. Sure, I can use an unsigned char instead of an int. But in the grand scheme of things it doesn't matter as much as you think it does. The key is maintainability, and that's where C++ failed me.
    I like to tell the other developers I work with that there are two levels of correctness in coding:

        Is it immediately correct?
        Will it stay correct?

    That is, you can hack together any piece of code and make it correct to satisfy the task at hand, but if a new developer can't come in tomorrow and make a fairly significant change to it without jeopardizing that correctness, it won't stay correct.

    Some people laugh at me when I say I now prefer maintainability over speed. But that is exactly the point. If you focus solely on speed you tend to produce code that is much harder to maintain over the long haul, and that's a load of technical debt most shops can't afford to carry; they end up completely scrapping code before its time. When good code is written well for maintainability, though, it can be correct both now and in the future.

    And you know the best part? My new love is nearly as fast as C++, and in some cases even faster -- and better than that, I know C# will treat me right. Her creators have poured hundreds of thousands of hours into making her the sexy beast she is today. They made her easy to understand and not an enigmatic mess. They made her consistent and not moody and amorphous. And they made her perform as fast as I care to go by optimizing her both at compile time and at run time.

    Her code is so elegant and easy on the eyes that I'm not worried where she will run to or what she'll pull behind my back. She is powerful enough to handle all my tasks, fast enough to execute them with blazing speed, maintainable enough that I can rely on even fairly new peers to modify my work, and rich enough to allow me to satisfy any need. C# doesn't ask me to clean up her messes! She cleans up after herself, and she tries to make my life easier by taking on most of those optimization tasks C++ asked me to take upon myself.

    Now, there are many of you who would say that I am the cause of my own grief, that it was my fault C++ didn't behave because I didn't pay enough attention to her. That I alone caused the pain she inflicted on me. And to some extent, you have a point. But she was so high-maintenance, requiring me to know every twist and turn of her vast and unrestrained power, that any wrong turn or bout of forgetfulness was met with painful reminders that she wasn't going to watch my back when I made a mistake. But C#, she loves me when I'm good, and she loves me when I'm bad, and together we make beautiful code that is both fast and safe.

    So that's why I'm leaving C++ behind. She says she's changing for me, but I have no interest in what C++0x may bring. Oh, I'll still keep in touch, and maybe I'll see her now and again when she brings her problems to my door and asks for some attention -- for I always have a soft spot for her, you see. But she's out of my house now. I have three kids and a dog and a cat, and all require me to clean up after them; why should I have to clean up after my programming language as well?

    Read the article

  • HTG Explains: Should You Build Your Own PC?

    - by Chris Hoffman
    There was a time when every geek seemed to build their own PC. While the masses bought eMachines and Compaqs, geeks built their own more powerful and reliable desktop machines for cheaper. But does this still make sense? Building your own PC still offers as much flexibility in component choice as it ever did, but prebuilt computers are available at extremely competitive prices. Building your own PC will no longer save you money in most cases.

    The Rise of Laptops

    It's impossible to look at the decline of geeks building their own PCs without considering the rise of laptops. There was a time when everyone seemed to use desktops -- laptops were more expensive and significantly slower in day-to-day tasks. With the diminishing importance of computing power -- nearly every modern computer has more than enough power to surf the web and use typical programs like Microsoft Office without any trouble -- and the rise of laptop availability at nearly every price point, most people are buying laptops instead of desktops.

    And, if you're buying a laptop, you can't really build your own. You can't just buy a laptop case and start plugging components into it -- even if you could, you would end up with an extremely bulky device. Ultimately, to consider building your own desktop PC, you have to actually want a desktop PC. Most people are better served by laptops.

    Benefits to PC Building

    The two main reasons to build your own PC have been component choice and saving money. Building your own PC allows you to choose all the specific components you want rather than have them chosen for you. You get to choose everything, including the PC's case and cooling system. Want a huge case with room for a fancy water-cooling system? You probably want to build your own PC.

    In the past, this often allowed you to save money -- you could get better deals by buying the components yourself and combining them, avoiding the PC manufacturer markup. You'd often even end up with better components -- you could pick up a more powerful CPU that was easier to overclock and choose more reliable components so you wouldn't have to put up with an unstable eMachine that crashed every day.

    PCs you build yourself are also likely more upgradable -- a prebuilt PC may have a sealed case and be constructed in such a way as to discourage you from tampering with the insides, while swapping components in and out is generally easier with a computer you've built on your own. If you want to upgrade your CPU or replace your graphics card, it's a definite benefit.

    Downsides to Building Your Own PC

    It's important to remember there are downsides to building your own PC, too. For one thing, it's just more work -- sure, if you know what you're doing, building your own PC isn't that hard. Even for a geek, researching the best components, price-matching, waiting for them all to arrive, and building the PC just takes longer.

    Warranty is a more pernicious problem. If you buy a prebuilt PC and it starts malfunctioning, you can contact the computer's manufacturer and have them deal with it. You don't need to worry about what's wrong. If you build your own PC and it starts malfunctioning, you have to diagnose the problem yourself. What's malfunctioning -- the motherboard, CPU, RAM, graphics card, or power supply? Each component has a separate warranty through its manufacturer, so you'll have to determine which component is malfunctioning before you can send it off for replacement.

    Should You Still Build Your Own PC?
    Let's say you do want a desktop and are willing to consider building your own PC. First, bear in mind that PC manufacturers are buying in bulk and getting a better deal on each component. They also have to pay much less for a Windows license than the $120 or so it would cost you to buy your own Windows license. This is all going to wipe out the cost savings you'll see -- with everything all told, you'll probably spend more money building your own average desktop PC than you would picking one up from Amazon or the local electronics store. If you're an average PC user that uses your desktop for the typical things, there's no money to be saved by building your own PC.

    But maybe you're looking for something higher-end. Perhaps you want a high-end gaming PC with the fastest graphics card and CPU available. Perhaps you want to pick out each individual component and choose the exact components for your gaming rig. In this case, building your own PC may be a good option.

    As you start to look at more expensive, high-end PCs, you may start to see a price gap -- but you may not. Let's say you wanted to blow thousands of dollars on a gaming PC. If you're looking at spending this kind of money, it would be worth comparing the cost of individual components versus a prebuilt gaming system. Still, the actual prices may surprise you. For example, if you wanted to upgrade Dell's $2293 Alienware Aurora to include a second NVIDIA GeForce GTX 780 graphics card, you'd pay an additional $600 on Alienware's website. The same graphics card costs $650 on Amazon or Newegg, so you'd be spending more money building the system yourself. Why? Dell's Alienware gets bulk discounts you can't get -- and this is Alienware, which was once regarded as selling ridiculously overpriced gaming PCs to people who wouldn't build their own.

    Building your own PC still allows you the most freedom when choosing and combining components, but this is only valuable to a small niche of gamers and professional users -- most people, even average gamers, would be fine going with a prebuilt system. If you're an average person or even an average gamer, you'll likely find that it's cheaper to purchase a prebuilt PC rather than assemble your own. Even at the very high end, components may be more expensive separately than they are in a prebuilt PC. Enthusiasts who want to choose all the individual components for their dream gaming PC and want maximum flexibility may want to build their own PCs. Even then, building your own PC these days is more about flexibility and component choice than it is about saving money.

    In summary, you probably shouldn't build your own PC. If you're an enthusiast, you may want to -- but only a small minority of people would actually benefit from building their own systems. Feel free to compare prices, but you may be surprised which is cheaper.

    Image Credit: Richard Jones on Flickr, elPadawan on Flickr, Richard Jones on Flickr

    Read the article

  • Microsoft Blacklists Google, Windows 8 Integrated Security

    According to researcher Brian Krebs, millions of surfers were affected by the error which was caused by two of Microsoft's antivirus solutions in the form of Microsoft Security Essentials and the business-related Microsoft Forefront. Both received updates as part of Microsoft's traditional Patch Tuesday on February 14, and those patches are believed to be the cause behind Google's incorrect blacklisting. The false positive alert specifically tagged the search site as being infected with the infamous Blackhole Exploit Kit, which reportedly gives cybercriminals the power to create their own bo...

    Read the article

  • The Diabolical Developer: What You Need to Do to Become Awesome

    - by Tori Wieldt
    Wearing sunglasses and quite possibly hungover, Martijn Verburg's evil persona provided key tips on how to be a Diabolical Developer. His presentation at TheServerSide Java Symposium was heavy on the sarcasm and provided lots of laughter. Martijn insisted that developers take their power back and get rid of all the "modern fluff" that distracts developers. He provided several key tips to become a Diabolical Developer:

        * Learn only from yourself. Don't read blogs or books, and don't attend conferences. If you must go on forums, only do it to display your superiority; answer as obscurely as possible.
        * Work alone. The best coding happens when you're alone in your room; lock yourself in for days. Make sure you have a gaming machine in with you.
        * Keep information to yourself. Knowledge is power. Think job security. Never provide documentation.
        * Make sure only you can read your code. Don't put comments in your code. Name your variables A, B, C... A1, B1, etc. If someone insists you format your code in a standard way, change a small section and revert it back as soon as they walk away from your screen.
        * Stick to what you know. Stay on Java 1.3. Don't bother learning abstractions. Write your application in a single file. Stuff as much code into one class as possible; a 30,000-line class is fine. It makes it easier for you to read and maintain.
        * Use Real Tools. No "fancy-pancy" IDEs. Real developers only use vi.
        * Ignore Fads. The cloud is massively overhyped. Mobile is a big fad for young kids. The big, clunky desktop computer (with a real keyboard) will return. Learn new stuff only to pad your resume. Ajax is great for that.
        * Skip Testing. Test-driven development is a complete waste of time. They sent men to the moon without unit tests. Just write your code properly in the first place and you don't need tests.
        * Compiled = Ship It. User acceptance testing is an absolute waste of time.
        * Use a Single Thread. Don't use multithreading. All you need to do is throw more hardware at the problem.
        * Don't waste time on SEO. If you've written the contract correctly, you are paid for writing code, not attracting users. You don't want a lot of users; they only report problems.
        * Avoid meetings. Fake being sick to avoid meetings. If you are forced into a meeting, play corporate bingo. Once you stand up and shout "bingo" you will be kicked out of the meeting. Job done.

    Follow these tips and you'll be well on your way to being a Diabolical Developer!

    Read the article

  • Fujitsu LifeBook T4010D Laptop CPU Fan from Pcpartsltd.com

    - by pcpartsltd
    Tested and 100% working: Fujitsu LifeBook T4010D Laptop CPU Cooling Fan MCF-S4512AM05.

    Features:

        * Model: MCF-S4512AM05
        * Package content: 1x CPU cooling fan
        * Condition: New
        * Warranty: 3 months

    Compatible model: Fujitsu LifeBook T4010D laptop.

    Pcpartsltd.com limited is a direct exporter of high-quality PC parts: notebooks, laptop power adapters, laptop batteries, laptop keyboards, laptop inverters, laptop hinges, laptop CPU fans, laptop drives, laptop motherboards, Samsung wall mounts, laptop LCD bezels/LCD lids, laptop LCD/LED panels and laptop LCD video cables. We are laptop parts experts.

    Read the article

  • RealTek RTL8188CE WiFi adapter doesn't connect reliably

    - by ken.ganong
    I recently bought a new System76 laptop which came pre-installed with Ubuntu 11.10. I've been having trouble with my wireless connectivity: my connection to the wireless network keeps going in and out. It is not my network -- I have seen the same problem on multiple WiFi networks, at different distances and reported link qualities.

    OS version: Ubuntu 11.10 (Oneiric)
    Kernel version: 3.0.0-14-generic

    lspci -nnk | grep -iA2 net:

        04:00.0 Network controller [0280]: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter [10ec:8176] (rev 01)
            Subsystem: Realtek Semiconductor Co., Ltd. Device [10ec:9196]
            Kernel driver in use: rtl8192ce
        --
        05:00.0 Ethernet controller [0200]: JMicron Technology Corp. JMC250 PCI Express Gigabit Ethernet Controller [197b:0250] (rev 05)
            Subsystem: CLEVO/KAPOK Computer Device [1558:2500]
            Kernel driver in use: jme

    iwconfig:

        lo        no wireless extensions.
        eth0      no wireless extensions.
        wlan0     IEEE 802.11bgn  ESSID:"peppermintpatty"
                  Mode:Managed  Frequency:2.462 GHz  Access Point: 98:FC:11:6C:E0:22
                  Bit Rate=72.2 Mb/s  Tx-Power=20 dBm
                  Retry long limit:7  RTS thr=2347 B  Fragment thr:off
                  Power Management:off
                  Link Quality=49/70  Signal level=-61 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:1103  Missed beacon:0

    sudo lshw -class network:

        *-network
            description: Wireless interface
            product: RTL8188CE 802.11b/g/n WiFi Adapter
            vendor: Realtek Semiconductor Co., Ltd.
            physical id: 0
            bus info: pci@0000:04:00.0
            logical name: wlan0
            version: 01
            serial: 00:1c:7b:a1:95:04
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
            configuration: broadcast=yes driver=rtl8192ce driverversion=3.0.0-14-generic firmware=N/A ip=192.168.1.106 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
            resources: irq:18 ioport:e000(size=256) memory:f7d00000-f7d03fff
        *-network
            description: Ethernet interface
            product: JMC250 PCI Express Gigabit Ethernet Controller
            vendor: JMicron Technology Corp.
            physical id: 0
            bus info: pci@0000:05:00.0
            logical name: eth0
            version: 05
            serial: 00:90:f5:c0:42:b3
            size: 10Mbit/s
            capacity: 1Gbit/s
            width: 32 bits
            clock: 33MHz
            capabilities: pm pciexpress msix msi bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
            configuration: autonegotiation=on broadcast=yes driver=jme driverversion=1.0.8 duplex=half latency=0 link=no multicast=yes port=MII speed=10Mbit/s
            resources: irq:56 memory:f7c20000-f7c23fff ioport:d100(size=128) ioport:d000(size=256) memory:f7c10000-f7c1ffff memory:f7c00000-f7c0ffff

    Any help would be appreciated. The last time I dealt with wireless issues, the most commonly given solution was ndiswrapper, and I seem sorely out-of-date.

    Read the article

  • How to get my IR remote to work? Lirc can't see it

    - by user1234567
    I'm using Ubuntu 11.10 (amd64) and I'm trying to get my infrared remote control working. The IR device is part of a DVB-T USB stick (based on an RTL2832u chip). I'm using these drivers -- it's the only way of getting this device to work under 11.10 that I found. It's a big improvement over previous Ubuntu versions, where I had to edit the driver's code. The device works quite well -- and the IR part of it works, too. The driver's page says that the code is in alpha stage, but I'm pretty sure that my issue has nothing to do with that.

    If, and only if, the driver's module is loaded with the parameter rtl2832u_rc_mode=2 (which means "use NEC protocol for IR"), the remote kind of works. I can see this by running cat /dev/.. ../input6 -- when I press a button, random letters appear. The remote works just like a keyboard, but the keys are totally messed up -- when I press '5' the volume goes down, etc.

    I would like to use Lirc to fix that, but Lirc can't detect my device (i.e. irw shows nothing). I suspect it's because something takes control of the device and sets it up as a keyboard. Lirc seems to be working, and its KDE settings module works too, but it just doesn't detect the device. The Lirc page describes this issue, but only as of 2009 -- the last year that page was updated; since then, Ubuntu moved from HAL (described there) to DeviceKit, rendering the provided instructions useless.

    I had a similar issue with my previous remote, but the keys were not messed up so much -- the remote was usable, so I gave up trying to get Lirc working. I tried the answer provided here, but it changed nothing. I also tried forcing lircd to use my device, but this didn't work either:

        for i in /sys/class/input/input* ; do echo -n "$(basename "$i"): "; cat "$i/name"; done

    shows:

        input0: Power Button
        input1: Power Button
        input2: Logitech Logitech USB Keyboard
        input3: A4Tech PS/2+USB Mouse
        input6: IR-receiver inside an USB DVB receiver

    But when I run the following as root (also tried with the full name):

        lircd -n --device=name='IR*'

    I always see:

        lircd-0.9.0[3983]: lircd(default) ready, using /var/run/lirc/lircd
        lircd-0.9.0[3983]: accepted new client on /var/run/lirc/lircd
        lircd-0.9.0[3983]: could not get file information for name=IR*
        lircd-0.9.0[3983]: default_init(): No such file or directory
        lircd-0.9.0[3983]: Failed to initialize hardware

    So, how do I set up Lirc with the devinput driver in such a case?

    Read the article

  • how to start LXDE session automatically after tightvncserver starts to make me able see desktop when connecting to the host via vncclient?

    - by Oleksandr Dudchenko
    I have a system equipped with an Intel Celeron 1.1 GHz (Socket 370) processor and 384 MB of RAM on an Intel D815EGEW motherboard, which supports the wake-on-LAN function. I want to use this PC for Internet sharing on the local network. The PC is also a DHCP+DNS server as well as a router/gateway.

    Based on the above, I decided to install Lubuntu, as it is a lightweight system. I installed Lubuntu 10.04.4 LTS from the alternate ISO. The system has no auto-login. It boots and has acceptable performance. The host PC has 4 onboard network adapters:

        eth0 – ethernet controller used for local network connections; has static address 10.0.0.1
        eth1 – ethernet controller, not used and not configured so far; I plan to connect a printer here later on
        eth2 – ethernet controller used to connect to the Internet, which we plan to share with the local network
        wlan0 – wireless controller, used in the role of access point for the local network; has address 10.0.0.2

    We want to control our gateway remotely, so we need to be able to power it on remotely. To allow this I've done the following:

        $ cd /etc/init.d/

    Made a new file with the command:

        $ sudo vim wakeonlanconfig

    Wrote the following lines to the newly created file, then saved and closed it:

        #!/bin/bash
        ethtool -s eth0 wol g
        ethtool -s eth2 wol g
        exit

    Made the above-mentioned file executable:

        $ sudo chmod a+x wakeonlanconfig

    Then included it in the autostart sequence during boot:

        $ sudo update-rc.d -f wakeonlanconfig defaults

    After a system reboot, we are able to power the system on remotely. Next, we need to be able to connect remotely to the host via SSH and VNC, so I installed the following packages:

        $ sudo apt-get update
        $ sudo apt-get install openssh-server tightvncserver

    Added the ssh daemon to the autostart sequence during boot:

        $ sudo update-rc.d -f ssh defaults

    Powered off the host PC:

        $ sudo halt

    Then I went to a remote place, sent the magic packet, and powered the host up. The system started... and I connected to the host via Putty from a remote system under Windows, logged in, and ran the command to start the VNC server:

        $ tightvncserver -geometry 800x600 -depth 16 :2

    The VNC server started successfully, and I got a message like the following:

        New 'X' desktop is gateway:2

        Starting applications specified in /home/dolv/.vnc/xstartup
        Log file is /home/dolv/.vnc/gateway:2.log

    Using the UltraVNC Viewer program under Windows, I connected to the host's VNC server and entered the password, and.... saw only a mouse cursor in the form of a cross on a grey 800x600 background -- no desktop. Here is my .vnc/xstartup file:

        #!/bin/sh

        xrdb $HOME/.Xresources
        xsetroot -solid grey
        #x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
        #x-window-manager &
        # Fix to make GNOME work
        export XKL_XMODMAP_DISABLE=1
        /etc/X11/Xsession

    The question: what do I have to change, and where, to make an LXDE session start automatically after tightvncserver starts?

    Read the article

  • What's new in Solaris 11.1?

    - by Karoly Vegh
    Solaris 11.1 is released. This is the first release update since Solaris 11 11/11; the versioning has been changed from the MM/YY style to 11.1, highlighting that this is Solaris 11 Update 1. Solaris 11 itself has been great. What's new in Solaris 11.1? Allow me to pick some new features from the What's New PDF that can be found in the official Oracle Solaris 11.1 Documentation. The updates are very numerous; I really can't include them all.

    I. New AI (Automated Installer) RBAC profiles have been introduced to enable delegation of installation tasks.

    II. The interactive installer now supports installing the OS to iSCSI targets.

    III. ASR (Auto Service Request) and OCM (Oracle Configuration Manager) have been enabled by default to proactively provide support information and create service requests to speed up support processes. This is optional and can be disabled, but it helps a lot in support cases. For further information, see: http://oracle.com/goto/solarisautoreg

    IV. The new command svcbundle helps you create SMF manifests without having to struggle with XML editing. (By the way, do you know the interactive editprop subcommand in svccfg? The listprop/setprop subcommands are great for scripting and automating, but for an interactive property-editing session try, for example, this: svccfg -s svc:/application/pkg/system-repository:default editprop)

    V. pfedit: Ever wondered how to delegate editing permissions for certain files? It is well known that "sudo /usr/bin/vi /etc/hosts" is not the right way, for sudo elevates the complete vi process to admin levels, and the user can "break" out of the session as root by simply starting a shell from that vi. Now, the new pfedit command provides a solution to exactly this challenge -- an auditable, secure, per-user configurable editing capability. See the pfedit man page for examples.

    VI. rsyslog, the popular logging daemon (filters, SSL, formattable output, SQL collection...), has been included in Solaris 11.1 as an alternative to syslog.

    VII. Zones: Solaris Zones -- as a major Solaris differentiator -- got lots of love in terms of new features:

        ZOSS - Zones on Shared Storage: placing your zones on shared storage (FC, iSCSI) has never been this easy -- via zonecfg.
        Parallel updates - with S11's boot environments, updating zones was no problem and meant no downtime anyway, but now you can update them in parallel, a far faster update action if you are running a large number of zones. This is like parallel patching in Solaris 10, but with all the IPS/ZFS/S11 goodness.
        Per-zone fstype statistics: running zones on a shared filesystem complicates I/O debugging, since ZFS collects all the random writes and delivers them sequentially to boost performance. Now, via kstat, you can find out which zone's I/O has an impact on the other ones; see the examples in the documentation: http://docs.oracle.com/cd/E26502_01/html/E29024/gmheh.html#scrolltoc
        Zones got RDSv3 protocol support for InfiniBand, and IPoIB support with Crossbow's anet (automatic vnic creation) feature.
        NUMA I/O support for Zones: customers can now determine the NUMA I/O topology of the system from within zones.

    VIII. Security got a lot of attention too:

        Automated security/audit reporting, with built-in reporting templates, e.g. for PCI (payment card industry) audits.
        PAM is now configurable on a per-user basis instead of system-wide, allowing different authentication requirements for different users.
        SSH in Solaris 11.1 now supports running in FIPS 140-2 mode, that is, in a U.S. government security accredited fashion.
        SHA512/224 and SHA512/256 cryptographic hash functions are implemented in a FIPS-compliant way -- and on a T4, implemented in silicon! That is, government-approved cryptography at HW speed.
        Generally, Solaris is currently under evaluation to be both FIPS and Common Criteria certified.

    IX. Networking, one of the core strengths of Solaris 11, has been extended with:

        Data Center Bridging (DCB) - not only setups where network and storage share the same fabric (FCoE, anyone?) can have Quality-of-Service requirements. DCB enables peers to distinguish traffic based on priorities. Your NICs have to support DCB; see the documentation, and additional information on Wikipedia.
        DataLink MultiPathing (DLMP) enables link aggregation to span multiple switches, even between those of different vendors. But there are essential differences from the good old bandwidth-aggregating LACP; see the documentation: http://docs.oracle.com/cd/E26502_01/html/E28993/gmdlu.html#scrolltoc
        VNIC live migration is now supported from one physical NIC to another on-the-fly.

    X. Data management:

        FedFS (Federated FileSystem) is new; it relies on Solaris 11's NFS referring mechanism to join separate shares of different NFS servers into a single filesystem namespace. The referring system has been there since S11 11/11; in Solaris 11.1, FedFS uses LDAP as the one global nameservice to bind them all.
        The iSCSI initiator now uses the T4 CPU's HW-implemented CRC32 algorithm, thus improving iSCSI throughput while reducing CPU utilization on a T4.
        Storage locking improvements are now RAC aware, speeding up throughput with better locking communication between nodes by up to 20%!

    XI. Kernel performance optimizations:

        The new Virtual Memory subsystem ("VM2") now scales to 100+ TB memory ranges.
        The memory predictor monitors large memory page usage and adjusts memory page sizes to applications' needs.
        OSM, the Optimized Shared Memory, allows Oracle DBs' SGA to be resized online.

    XII. The Power Aware Dispatcher is now enabled by default, reducing the power consumption of idle CPUs. Also, the LDoms' Power Management policies and the poweradm settings in the Solaris 11 OS will cooperate.

    XIII. x86 boot: upgrade to GRUB2 (the GRand Unified Bootloader). Because GRUB2 differs syntactically in its configuration from GRUB1, one shall not edit the new grub configuration (grub.cfg) directly but use the new bootadm features to update it. GRUB2 adds UEFI support and also support for disks over 2TB.

    XIV. Improved viewing of per-CPU statistics in mpstat. This one might seem of lesser importance at first, but nowadays having better sorting/filtering possibilities on a periodically updated mpstat output of 256+ vCPUs can be a blessing.

    XV. Support for Solaris Cluster 4.1: the What's New document doesn't actually mention this one, since OSC 4.1 had not been released at the time 11.1 was. But since then it has become available, and it requires Solaris 11.1. And it's only a "pkg update" away.

    ...aand I seriously need to stop here. There's a lot I missed -- Edge Virtual Bridging, lofi tuning, ZFS sharing and crypto enhancements, USB 3.0, pulseaudio, trusted extensions updates, etc. -- but if I mentioned all those, then I'd effectively copy the What's New document. Which I recommend reading now anyway; it is a great extract of the 300+ new projects and RFE follow-ups in S11.1. And this blog post is a summary of that extract.

    For closing words, allow me to come back to Requests For Enhancement, RFEs. Any customer can request features.
    VIII. Security got a lot of attention too:

    Automated security/audit reporting, with built-in reporting templates, e.g. for PCI (Payment Card Industry) audits.

    PAM is now configurable on a per-user basis instead of system-wide, allowing different authentication requirements for different users.

    SSH in Solaris 11.1 now supports running in FIPS 140-2 mode, that is, in a U.S. government security accredited fashion.

    The SHA512/224 and SHA512/256 cryptographic hash functions are implemented in a FIPS-compliant way - and on a T4, in silicon. That is, government-approved cryptography at hardware speed.

    Generally, Solaris is currently under evaluation to be both FIPS and Common Criteria certified.

    IX. Networking, as one of the core strengths of Solaris 11, has been extended with:

    Data Center Bridging (DCB): setups where network and storage share the same fabric (FCoE, anyone?) are not the only ones with quality-of-service requirements. DCB enables peers to distinguish traffic based on priorities. Your NICs have to support DCB; see the documentation, and additional information on Wikipedia.

    DataLink MultiPathing (DLMP) enables link aggregation to span multiple switches, even between those of different vendors. There are essential differences to the good old bandwidth-aggregating LACP, though; see the documentation: http://docs.oracle.com/cd/E26502_01/html/E28993/gmdlu.html#scrolltoc

    VNIC live migration is now supported from one physical NIC to another on the fly.

    X. Data management:

    FedFS (Federated FileSystem) is new; it relies on Solaris 11's NFS referral mechanism to join separate shares of different NFS servers into a single filesystem namespace. The referral mechanism has been there since S11 11/11; in Solaris 11.1, FedFS uses LDAP as the one global nameservice to bind them all.

    The iSCSI initiator now uses the T4 CPU's hardware-implemented CRC32 algorithm, improving iSCSI throughput while reducing CPU utilization.

    Storage locking improvements are now RAC-aware, speeding up throughput with better lock communication between nodes by up to 20%!

    XI. Kernel performance optimizations:

    The new virtual memory subsystem ("VM2") now scales to memory ranges of 100+ TB.

    The memory predictor monitors large memory page usage and adjusts memory page sizes to applications' needs.

    OSM, the Optimized Shared Memory, allows Oracle databases' SGA to be resized online.

    XII. The Power Aware Dispatcher is now enabled by default, reducing the power consumption of idle CPUs. Also, the LDoms' power management policies and the poweradm settings in the Solaris 11 OS now cooperate.

    XIII. x86 boot: an upgrade to GRUB2 (the GRand Unified Bootloader). Because GRUB2 differs syntactically in its configuration from GRUB Legacy, you should not edit the new grub configuration (grub.cfg) by hand but use the new bootadm features to update it. GRUB2 adds UEFI support and also support for disks over 2 TB.

    XIV. Improved viewing of per-CPU statistics in mpstat. This one might seem of lesser importance at first, but nowadays having better sorting/filtering possibilities on a periodically updated mpstat output of 256+ vCPUs can be a blessing.

    XV. Support for Solaris Cluster 4.1: the What's New document doesn't actually mention this one, since OSC 4.1 had not been released at the time 11.1 was. But it is available now, it requires Solaris 11.1, and it's only a "pkg update" away.

    ...and I seriously need to stop here. There's a lot I missed - Edge Virtual Bridging, lofi tuning, ZFS sharing and crypto enhancements, USB 3.0, PulseAudio, Trusted Extensions updates, etc. - but if I mentioned all of those I would effectively copy the What's New document. Which I recommend reading now anyway; it is a great extract of the 300+ new projects and RFE follow-ups in S11.1. And this blog post is a summary of that extract.

    For closing words, allow me to come back to Requests For Enhancement, RFEs. Any customer can request features. Open up a Support Request, explain that it is an RFE, and describe the feature you or your company would like to see implemented in S11. The more SRs are collected for an RFE, the more chance it has of getting implemented. Feel free to provide feedback about the product, as well as about the Solaris 11.1 Documentation, using the "Feedback" button there. Both the Solaris engineers and the documentation writers are eager to hear your input. Feel free to comment about this post too. Except that it's too long ;)

    wbr, charlie

    Read the article

  • After changing Compiz configuration, I can't use my account

    - by Dumitru Cristian
    I think I made some changes in Compiz, and now I can't see anything on the screen when I log in with one particular user. For the other users everything is OK. Is there any way to reset Compiz to its defaults? All I can do when logged in as the user with problems is press Ctrl-Alt-F2 and then log in at a text terminal. I'm not at all a power user, so I apologize if my question is not very clear. Thanks in advance.
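    One common recovery sketch, assuming a Unity/Compiz desktop that keeps its settings in dconf (the key path can vary between releases, so treat this as a starting point rather than the definitive fix):

        # From the Ctrl-Alt-F2 console, logged in as the affected user.
        # dconf needs a session bus, hence the dbus-launch wrapper.
        dbus-launch dconf reset -f /org/compiz/

        # Then log out of the console and retry the graphical login.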

    Read the article

  • What's Old is New Again

    - by David Dorf
    Last night I told my son he could stream music to his tablet "from the cloud" (in this case, the Amazon Cloud). He paused, then said, "What is the cloud?" I replied, "a bunch of servers connected to the internet." Apparently he had visions of something much more magnificent. Another similar term is "big data." These marketing terms help to quickly convey topics but are oversimplifications that are open to many interpretations. At their core, those terms are shiny packages holding recycled ideas. I see many headlines declaring big data changes everything, but it doesn't. Savvy retailers have been dealing with large volumes of data since the electronic cash register was invented. But there have been a few changes to the landscape that make big data a topic of conversation:

    1. Computing power has caught up to storage volumes. It's now possible to more thoroughly analyze the copious volumes of data retailers have been squirreling away. CPUs are faster, solid-state drives are more plentiful, and new ways to store and search data are available. My iPhone has more power than the computer used in the Apollo missions to the moon.

    2. Unstructured data is everywhere. The Web used to be where retailers published product information, but now users generate the bulk of the content in the form of comments, videos, and "likes." The variety of information available to retailers is huge, and its meaning is difficult to discern.

    3. Everything is connected. Looking at a report from my router, there are no fewer than 20 active devices on my home network. We can track the location of mobile phones, tag products with RFID, and set our thermostats (I love my Nest) from a thousand miles away. Not only is there more data, but it's arriving at higher velocity.

    Careful readers will note the three Vs that help define so-called big data: volume, variety, and velocity. We now have more volume, more variety, and more velocity, and different technologies to deal with them. But at heart, the objectives are still the same:

    Informed decisions
    Accurate forecasts
    Improved optimizations

    So don't let the term "big data" throw you off the scent. Retailers still need to execute on the basics. But do take a fresh look at the data that's available and the new technologies to process it. The landscape will continue to change, and agile organizations will always be reevaluating their approaches. You can just add some more weapons to the arsenal.

    Read the article

  • Lightning Wallpaper Collection for Your Nexus 7

    - by Akemi Iwaya
    Lightning can be frightfully powerful and eerily beautiful at the same time, a force of nature that is not to be taken lightly. Harness the ‘power of nature’ by electrifying your Nexus 7’s screen with the first in our series of Lightning Wallpaper collections. Lightning Series 1 Note: Click on the pictures to view and download the full-size versions at their individual homepages. The images shown here are in thumbnail format.

    Read the article

  • Is there a better term than "smoothness" or "granularity" to describe this language feature?

    - by Chris
    One of the best things about programming is the abundance of different languages. There are general-purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider to be important that, so far, I haven't heard [or been able to come up with] a good term for: how well a language scales from writing tiny programs to writing huge programs.

    Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems. This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either.

    Here are some of the ways this characteristic presents itself:

    Can be used interactively - there is some environment where programmers can enter commands one by one.
    Requires no more than one file - neither project files nor makefiles are required for running in batch mode.
    Can easily split code across multiple files - files can reference each other, or there is some support for modules.
    Has good support for data structures - supports structures like arrays, lists, and especially classes.
    Supports a wide variety of features - features like networking, serialization, XML, and database connectivity are supported by standard libraries.

    Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.

    Feature          C#        Python     shell scripting
    ---------------  --------- ---------  ---------------
    Interactive      poor      strong     strong
    One file         poor      strong     strong
    Multiple files   strong    strong     moderate
    Data structures  strong    strong     poor
    Features         strong    strong     strong

    Is there a term that captures this idea? If not, what term should I use? Here are some candidates:

    Scalability - already used to describe language performance, so it's not a good idea to overload it in the context of language syntax.
    Granularity - expresses the idea of being good just for big tasks versus being good for both big and small tasks, but doesn't express anything about data structures.
    Smoothness - expresses the idea of low friction, but doesn't express anything about strength of data structures or features.

    Note: Some of these properties are more correctly described as belonging to a compiler or IDE than to the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language.

    Read the article
