Search Results

Search found 6602 results on 265 pages for 'fedora 14'.


  • Unable to load nvidia (bumblebee) in Ubuntu 14.04 (only nouveau loads)

    - by Ubuntuser
    Bumblebee stopped working on my system after upgrading to the stable version of Ubuntu 14.04. During installation I get this error: rmmod: ERROR: Module nouveau is in use Setting up bumblebee (3.2.1-90~trustyppa1) ... Selecting 01:00:0 as discrete nvidia card. If this is incorrect, edit the BusID line in /etc/bumblebee/xorg.conf.nouveau . bumblebeed start/running, process 11133 Processing triggers for initramfs-tools (0.103ubuntu4.1) ... update-initramfs: Generating /boot/initrd.img-3.14.1-031401-generic Setting up bumblebee-nvidia (3.2.1-90~trustyppa1) ... Selecting 01:00:0 as discrete nvidia card. If this is incorrect, edit the BusID line in /etc/bumblebee/xorg.conf.nvidia rmmod: ERROR: Module nouveau is in use bumblebeed start/running, process 18284 It says nouveau is in use. I checked the loaded modules: lsmod | grep nouveau nouveau 1097199 1 mxm_wmi 13021 1 nouveau ttm 85115 1 nouveau i2c_algo_bit 13413 2 i915,nouveau drm_kms_helper 52758 2 i915,nouveau drm 302817 7 ttm,i915,drm_kms_helper,nouveau wmi 19177 3 dell_wmi,mxm_wmi,nouveau video 19476 2 i915,nouveau However, I have nouveau in my blacklist: cat /etc/modprobe.d/blacklist.conf | grep nouveau blacklist nouveau blacklist lbm-nouveau alias nouveau off alias lbm-nouveau off My grub is also set to nomodeset: cat /etc/default/grub | grep nomodeset GRUB_CMDLINE_LINUX_DEFAULT="nomodeset quiet splash" My graphics card is an NVIDIA Optimus one: lspci | grep -i vga 00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 18) 01:00.0 VGA compatible controller: NVIDIA Corporation GT218M [GeForce 310M] (rev ff) I've raised a bug on Launchpad: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1327598 Note: nvidia-prime is working for me (partially), with frequent mouse locks. Interestingly, bumblebee works perfectly fine on my Fedora 20 partition on this same laptop.
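    For reference, a modprobe blacklist only takes effect at the next boot, and only if the initramfs was regenerated after the blacklist was edited. A minimal sanity check, assuming stock Ubuntu paths (the modprobe.blacklist= kernel parameter is a generic kmod feature, mentioned here as a belt-and-braces option, not a confirmed fix for this bug):

    # rebuild the initramfs so /etc/modprobe.d/blacklist.conf is honored at early boot
    sudo update-initramfs -u
    # optionally also blacklist from the kernel command line:
    # add modprobe.blacklist=nouveau to GRUB_CMDLINE_LINUX_DEFAULT, then
    sudo update-grub
    sudo reboot
    # after the reboot, confirm nouveau is gone before bumblebeed tries to rmmod it
    lsmod | grep nouveau || echo "nouveau not loaded"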

    Read the article

  • NHibernate Conventions

    - by Ricardo Peres
    Introduction It seems that nowadays everyone loves conventions! Not the ones that you go to, but the ones that you use, that is! It just happens that NHibernate also supports conventions, and we’ll see exactly how. Conventions in NHibernate are supported in two ways: Naming of tables and columns when not explicitly indicated in the mappings; Full domain mapping. Naming of Tables and Columns NHibernate has always supported the concept of a naming strategy. A naming strategy in NHibernate converts class and property names to table and column names and vice-versa, when a name is not explicitly supplied. Concretely, it must be an implementation of the NHibernate.Cfg.INamingStrategy interface, of which NHibernate includes two implementations: DefaultNamingStrategy: the default implementation, where each column and table are mapped to identically named properties and classes, for example, “MyEntity” will translate to “MyEntity”; ImprovedNamingStrategy: underscores (_) are used to separate Pascal-cased fragments, for example, entity “MyEntity” will be mapped to a “my_entity” table. The naming strategy can be defined at configuration level (the Configuration instance) by calling the SetNamingStrategy method: 1: cfg.SetNamingStrategy(ImprovedNamingStrategy.Instance); Both the DefaultNamingStrategy and the ImprovedNamingStrategy classes offer singleton instances in the form of Instance static fields. DefaultNamingStrategy is the one NHibernate uses if you don’t specify one. Domain Mapping In mapping by code, we have the choice of relying on conventions to do the mapping automatically. This means a mapper class will inspect our classes and decide how they will relate to the database objects. The class that handles conventions is NHibernate.Mapping.ByCode.ConventionModelMapper, a specialization of the base by-code mapper, NHibernate.Mapping.ByCode.ModelMapper. The ModelMapper relies on an internal SimpleModelInspector to help it decide what and how to map, but the mapper lets you override its decisions.  You apply code conventions like this: 1: //pick the types that you want to map 2: IEnumerable<Type> types = Assembly.GetExecutingAssembly().GetExportedTypes(); 3:  4: //conventions based mapper 5: ConventionModelMapper mapper = new ConventionModelMapper(); 6:  7: HbmMapping mapping = mapper.CompileMappingFor(types); 8:  9: //the one and only configuration instance 10: Configuration cfg = ...; 11: cfg.AddMapping(mapping); This is a very simple example; it lacks, at the very least, an id generation strategy, which you can add through an event handler like this: 1: mapper.BeforeMapClass += (IModelInspector modelInspector, Type type, IClassAttributesMapper classCustomizer) => 2: { 3: classCustomizer.Id(x => 4: { 5: //set the hilo generator 6: x.Generator(Generators.HighLow); 7: }); 8: }; The mapper will fire events like this whenever it needs to get information about what to do. And basically this is all it takes to automatically map your domain! It will correctly configure many-to-one and one-to-many relations, choosing bags or sets depending on your collections, will get the table and column names from the naming strategy we saw earlier, and will apply the usual defaults to all properties, such as laziness and fetch mode. However, there is at least one thing missing: many-to-many relations. The conventional mapper doesn’t know how to find and configure them, which is a pity, but, alas, not difficult to overcome. 
To start, for my projects, I have this rule: each entity exposes a public property of type ISet<T> where T is, of course, the type of the other endpoint entity. Extensible as it is, NHibernate lets me implement this very easily: 1: mapper.IsOneToMany((MemberInfo member, Boolean isLikely) => 2: { 3: Type sourceType = member.DeclaringType; 4: Type destinationType = member.GetMemberFromDeclaringType().GetPropertyOrFieldType(); 5:  6: //check if the property is of a generic collection type 7: if ((destinationType.IsGenericCollection() == true) && (destinationType.GetGenericArguments().Length == 1)) 8: { 9: Type destinationEntityType = destinationType.GetGenericArguments().Single(); 10:  11: //check if the type of the generic collection property is an entity 12: if (mapper.ModelInspector.IsEntity(destinationEntityType) == true) 13: { 14: //check if there is an equivalent property on the target type that is also a generic collection and points to this entity 15: PropertyInfo collectionInDestinationType = destinationEntityType.GetProperties().Where(x => (x.PropertyType.IsGenericCollection() == true) && (x.PropertyType.GetGenericArguments().Length == 1) && (x.PropertyType.GetGenericArguments().Single() == sourceType)).SingleOrDefault(); 16:  17: if (collectionInDestinationType != null) 18: { 19: return (false); 20: } 21: } 22: } 23:  24: return (true); 25: }); 26:  27: mapper.IsManyToMany((MemberInfo member, Boolean isLikely) => 28: { 29: //a relation is many to many if it isn't one to many 30: Boolean isOneToMany = mapper.ModelInspector.IsOneToMany(member); 31: return (!isOneToMany); 32: }); 33:  34: mapper.BeforeMapManyToMany += (IModelInspector modelInspector, PropertyPath member, IManyToManyMapper collectionRelationManyToManyCustomizer) => 35: { 36: Type destinationEntityType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First(); 37: //set the mapping table column names from each source entity name plus the _Id suffix 38: collectionRelationManyToManyCustomizer.Column(destinationEntityType.Name + "_Id"); 39: }; 40:  41: mapper.BeforeMapSet += (IModelInspector modelInspector, PropertyPath member, ISetPropertiesMapper propertyCustomizer) => 42: { 43: if (modelInspector.IsManyToMany(member.LocalMember) == true) 44: { 45: propertyCustomizer.Key(x => x.Column(member.LocalMember.DeclaringType.Name + "_Id")); 46:  47: Type sourceType = member.LocalMember.DeclaringType; 48: Type destinationType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First(); 49: IEnumerable<String> names = new Type[] { sourceType, destinationType }.Select(x => x.Name).OrderBy(x => x); 50:  51: //set inverse on the relation of the alphabetically first entity name 52: propertyCustomizer.Inverse(sourceType.Name == names.First()); 53: //set mapping table name from the entity names in alphabetical order 54: propertyCustomizer.Table(String.Join("_", names)); 55: } 56: }; We have to understand how the conventions mapper thinks: For each collection of entities found, it will ask the mapper if it is a one-to-many; in our case, if the collection is a generic one that has an entity as its generic parameter, and the generic parameter type has a similar collection, then it is not a one-to-many; Next, the mapper will ask if the collection that it now knows is not a one-to-many is a many-to-many; Before a set is mapped, if it corresponds to a many-to-many, we set its mapping table. 
Now, this is tricky: because we have no way to maintain state, we sort the names of the two endpoint entities and we combine them with a “_”; for the first alphabetical entity, we set its relation to inverse – remember, on a many-to-many relation, only one endpoint must be marked as inverse; finally, we set the column name as the name of the entity with an “_Id” suffix; Before the many-to-many relation is processed, we set the column name as the name of the other endpoint entity with the “_Id” suffix, as we did for the set. And that’s it. With these rules, NHibernate will now happily find and configure many-to-many relations, as well as all the others. You can wrap this in a new conventions mapper class, so that it is more easily reusable: 1: public class ManyToManyConventionModelMapper : ConventionModelMapper 2: { 3: public ManyToManyConventionModelMapper() 4: { 5: base.IsOneToMany((MemberInfo member, Boolean isLikely) => 6: { 7: return (this.IsOneToMany(member, isLikely)); 8: }); 9:  10: base.IsManyToMany((MemberInfo member, Boolean isLikely) => 11: { 12: return (this.IsManyToMany(member, isLikely)); 13: }); 14:  15: base.BeforeMapManyToMany += this.BeforeMapManyToMany; 16: base.BeforeMapSet += this.BeforeMapSet; 17: } 18:  19: protected virtual Boolean IsManyToMany(MemberInfo member, Boolean isLikely) 20: { 21: //a relation is many to many if it isn't one to many 22: Boolean isOneToMany = this.ModelInspector.IsOneToMany(member); 23: return (!isOneToMany); 24: } 25:  26: protected virtual Boolean IsOneToMany(MemberInfo member, Boolean isLikely) 27: { 28: Type sourceType = member.DeclaringType; 29: Type destinationType = member.GetMemberFromDeclaringType().GetPropertyOrFieldType(); 30:  31: //check if the property is of a generic collection type 32: if ((destinationType.IsGenericCollection() == true) && (destinationType.GetGenericArguments().Length == 1)) 33: { 34: Type destinationEntityType = destinationType.GetGenericArguments().Single(); 35:  36: //check if the type of the generic collection property is an entity 37: if (this.ModelInspector.IsEntity(destinationEntityType) == true) 38: { 39: //check if there is an equivalent property on the target type that is also a generic collection and points to this entity 40: PropertyInfo collectionInDestinationType = destinationEntityType.GetProperties().Where(x => (x.PropertyType.IsGenericCollection() == true) && (x.PropertyType.GetGenericArguments().Length == 1) && (x.PropertyType.GetGenericArguments().Single() == sourceType)).SingleOrDefault(); 41:  42: if (collectionInDestinationType != null) 43: { 44: return (false); 45: } 46: } 47: } 48:  49: return (true); 50: } 51:  52: protected virtual new void BeforeMapManyToMany(IModelInspector modelInspector, PropertyPath member, IManyToManyMapper collectionRelationManyToManyCustomizer) 53: { 54: Type destinationEntityType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First(); 55: //set the mapping table column names from each source entity name plus the _Id suffix 56: collectionRelationManyToManyCustomizer.Column(destinationEntityType.Name + "_Id"); 57: } 58:  59: protected virtual new void BeforeMapSet(IModelInspector modelInspector, PropertyPath member, ISetPropertiesMapper propertyCustomizer) 60: { 61: if (modelInspector.IsManyToMany(member.LocalMember) == true) 62: { 63: propertyCustomizer.Key(x => x.Column(member.LocalMember.DeclaringType.Name + "_Id")); 64:  65: Type sourceType = member.LocalMember.DeclaringType; 66: Type destinationType = 
member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First(); 67: IEnumerable<String> names = new Type[] { sourceType, destinationType }.Select(x => x.Name).OrderBy(x => x); 68:  69: //set inverse on the relation of the alphabetically first entity name 70: propertyCustomizer.Inverse(sourceType.Name == names.First()); 71: //set mapping table name from the entity names in alphabetical order 72: propertyCustomizer.Table(String.Join("_", names)); 73: } 74: } 75: } Conclusion Of course, there is much more to mapping than this; I suggest you look at all the events and functions offered by the ModelMapper to see where you can hook in to make it behave the way you want. If you need any help, just let me know!
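    Going back to the naming strategy section for a moment, a minimal sketch of a custom INamingStrategy is below; the member list follows the NHibernate.Cfg.INamingStrategy contract described above, but treat the exact signatures as an assumption to verify against your NHibernate version:

    using System;
    using NHibernate.Cfg;

    //a naming strategy that prefixes every table with "tbl_" and leaves columns alone
    public class PrefixNamingStrategy : INamingStrategy
    {
        public string ClassToTableName(string className)
        {
            //className may be namespace-qualified; keep only the last fragment
            return "tbl_" + className.Substring(className.LastIndexOf('.') + 1);
        }
        public string PropertyToColumnName(string propertyName) { return propertyName; }
        public string TableName(string tableName) { return tableName; }
        public string ColumnName(string columnName) { return columnName; }
        public string PropertyToTableName(string className, string propertyName)
        {
            return "tbl_" + propertyName;
        }
        public string LogicalColumnName(string columnName, string propertyName)
        {
            return String.IsNullOrEmpty(columnName) ? propertyName : columnName;
        }
    }

    It is wired up exactly like the built-in ones: cfg.SetNamingStrategy(new PrefixNamingStrategy());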

    Read the article

  • C#/.NET Little Wonders: Fun With Enum Methods

    - by James Michael Hare
    Once again let's dive into the Little Wonders of .NET, those small things in the .NET languages and BCL classes that make development easier by increasing readability, maintainability, and/or performance. So probably every one of us has used an enumerated type at one time or another in a C# program.  The enumerated types we create are a great way to represent that a value can be one of a set of discrete values (or a combination of those values in the case of bit flags). But the power of enum types goes far beyond simple assignment and comparison; there are many methods in the Enum class (that all enum types “inherit” from) that can give you even more power when dealing with them. IsDefined() – check if a given value exists in the enum Are you reading a value for an enum from a data source, but are unsure if it is actually a valid value or not?  Casting won’t tell you this, and Parse() isn’t guaranteed to balk either if you give it an int or a combination of flags.  So what can we do? Let’s assume we have a small enum like this for result codes we want to return back from our business logic layer: 1: public enum ResultCode 2: { 3: Success, 4: Warning, 5: Error 6: } In this enum, Success will be zero (unless given another value explicitly), Warning will be one, and Error will be two. So what happens if we have code like this where perhaps we’re getting the result code from another data source (could be database, could be web service, etc.)? 1: public ResultCode PerformAction() 2: { 3: // set up and call some method that returns an int. 4: int result = ResultCodeFromDataSource(); 5:  6: // this will succeed even if result is < 0 or > 2. 7: return (ResultCode) result; 8: } So what happens if result is –1 or 4?  Well, the cast does not fail, so what we end up with would be an instance of a ResultCode that would have a value that’s outside of the bounds of the enum constants we defined. This means if you had a block of code like: 1: switch (result) 2: { 3: case ResultCode.Success: 4: // do success stuff 5: break; 6:  7: case ResultCode.Warning: 8: // do warning stuff 9: break; 10:  11: case ResultCode.Error: 12: // do error stuff 13: break; 14: } you would hit none of these blocks (which is a good argument for always having a default in a switch, by the way). So what can you do?  Well, there is a handy static method called IsDefined() on the Enum class which will tell you if an enum value is defined.  1: public ResultCode PerformAction() 2: { 3: int result = ResultCodeFromDataSource(); 4:  5: if (!Enum.IsDefined(typeof(ResultCode), result)) 6: { 7: throw new InvalidOperationException("Enum out of range."); 8: } 9:  10: return (ResultCode) result; 11: } In fact, this is often recommended after you Parse() or cast a value to an enum, as there are ways for values to get past these methods that may not be defined. If you don’t like the syntax of passing in the type of the enum, you could clean it up a bit by creating an extension method instead that would allow you to call IsDefined() off any instance of the enum: 1: public static class EnumExtensions 2: { 3: // helper method that tells you if an enum value is defined for its enumeration 4: public static bool IsDefined(this Enum value) 5: { 6: return Enum.IsDefined(value.GetType(), value); 7: } 8: }   HasFlag() – an easier way to see if a bit (or bits) are set Most of us who came from the land of C programming have had to deal extensively with bit flags many times in our lives.  
As such, using bit flags may be almost second nature (for a quick refresher on bit flags in enum types see one of my old posts here). However, in higher-level languages like C#, the need to manipulate individual bit flags is somewhat diminished, and the code to check for bit flag enum values may be obvious to an advanced developer but cryptic to a novice developer. For example, let’s say you have an enum for a messaging platform that contains bit flags: 1: // usually, we pluralize flags enum type names 2: [Flags] 3: public enum MessagingOptions 4: { 5: None = 0, 6: Buffered = 0x01, 7: Persistent = 0x02, 8: Durable = 0x04, 9: Broadcast = 0x08 10: } We can combine these bit flags using the bitwise OR operator (the ‘|’ pipe character): 1: // combine bit flags using 2: var myMessenger = new Messenger(MessagingOptions.Buffered | MessagingOptions.Broadcast); Now, if we wanted to check the flags, we’d have to test them using the bitwise AND operator (the ‘&’ character): 1: if ((options & MessagingOptions.Buffered) == MessagingOptions.Buffered) 2: { 3: // do code to set up buffering... 4: // ... 5: } While the ‘|’ for combining flags is easy enough to read for advanced developers, the ‘&’ test tends to be easy for novice developers to get wrong.  First of all, you have to AND the flag combination with the value, and then typically you should test against the flag combination itself (and not just for a non-zero)!  This is because the flag combination you are testing with may combine multiple bits, in which case if only one bit is set, the result will be non-zero but not necessarily all desired bits! Thank goodness, in .NET 4.0 they gave us the HasFlag() method.  This method can be called from an enum instance to test to see if a flag is set, and best of all you can avoid writing the bitwise logic yourself.  Not to mention it will be more readable to a novice developer as well: 1: if (options.HasFlag(MessagingOptions.Buffered)) 2: { 3: // do code to set up buffering... 4: // ... 5: } It is much more concise and unambiguous, thus increasing your maintainability and readability. It would be nice to have a corresponding SetFlag() method, but unfortunately generic types don’t allow you to specialize on Enum, which makes it a bit more difficult.  It can be done, but you have to do some conversions to numeric and then back to the enum, which makes it less of a payoff than having the HasFlag() method.  But if you want to create it for symmetry, it would look something like this: 1: public static T SetFlag<T>(this Enum value, T flags) 2: { 3: if (!value.GetType().IsEquivalentTo(typeof(T))) 4: { 5: throw new ArgumentException("Enum value and flags types don't match."); 6: } 7:  8: // yes this is ugly, but unfortunately we need to use an intermediate boxing cast 9: return (T)Enum.ToObject(typeof (T), Convert.ToUInt64(value) | Convert.ToUInt64(flags)); 10: } Note that since the enum types are value types, we need to assign the result to something (much like string.Trim()).  Also, you could chain several SetFlag() operations together or create one that takes a variable arg list if desired. Parse() and ToString() – transitioning from string to enum and back Sometimes, you may want to be able to parse an enum from a string or convert it to a string - Enum has methods built in to let you do this.  Now, many may already know this, but may not appreciate how much power is in these two methods. 
For example, if you want to parse a string as an enum, it’s easy and works just like you’d expect from the numeric types: 1: string optionsString = "Persistent"; 2:  3: // can use Enum.Parse, which throws if finds something it doesn't like... 4: var result = (MessagingOptions)Enum.Parse(typeof (MessagingOptions), optionsString); 5:  6: if (result == MessagingOptions.Persistent) 7: { 8: Console.WriteLine("It worked!"); 9: } Note that Enum.Parse() will throw if it finds a value it doesn’t like.  But the values it likes are fairly flexible!  You can pass in a single value, or a comma-separated list of values for flags, and it will parse them all and set all bits: 1: // for string values, can have one, or comma separated. 2: string optionsString = "Persistent, Buffered"; 3:  4: var result = (MessagingOptions)Enum.Parse(typeof (MessagingOptions), optionsString); 5:  6: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 7: { 8: Console.WriteLine("It worked!"); 9: } Or you can pass in a string containing a number that represents a single value or combination of values to set: 1: // 3 is the combination of Buffered (0x01) and Persistent (0x02) 2: var optionsString = "3"; 3:  4: var result = (MessagingOptions) Enum.Parse(typeof (MessagingOptions), optionsString); 5:  6: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 7: { 8: Console.WriteLine("It worked again!"); 9: } And, if you really aren’t sure if the parse will work, and don’t want to handle an exception, you can use TryParse() instead: 1: string optionsString = "Persistent, Buffered"; 2: MessagingOptions result; 3:  4: // try parse returns true if successful, and takes an out parm for the result 5: if (Enum.TryParse(optionsString, out result)) 6: { 7: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 8: { 9: Console.WriteLine("It worked!"); 10: } 11: } So we covered parsing a string to an enum, what about reversing that and converting an enum to a string?  The ToString() method is the obvious and most basic choice for most of us, but did you know you can pass a format string for enum types that dictates how they are written as a string?: 1: MessagingOptions value = MessagingOptions.Buffered | MessagingOptions.Persistent; 2:  3: // general format, which is the default, 4: Console.WriteLine("Default : " + value); 5: Console.WriteLine("G (default): " + value.ToString("G")); 6:  7: // Flags format, even if type does not have Flags attribute. 8: Console.WriteLine("F (flags) : " + value.ToString("F")); 9:  10: // integer format, value as number. 11: Console.WriteLine("D (num) : " + value.ToString("D")); 12:  13: // hex format, value as hex 14: Console.WriteLine("X (hex) : " + value.ToString("X")); Which displays: 1: Default : Buffered, Persistent 2: G (default): Buffered, Persistent 3: F (flags) : Buffered, Persistent 4: D (num) : 3 5: X (hex) : 00000003 Now, you may not really see a difference here between G and F because I used a [Flags] enum, the difference is that the “F” option treats the enum as if it were flags even if the [Flags] attribute is not present.  Let’s take a non-flags enum like the ResultCode used earlier: 1: // yes, we can do this even if it is not [Flags] enum. 
2: ResultCode value = ResultCode.Warning | ResultCode.Error; And if we run that through the same formats again we get: 1: Default : 3 2: G (default): 3 3: F (flags) : Warning, Error 4: D (num) : 3 5: X (hex) : 00000003 Notice that since we had multiple values combined, but it was not a [Flags] marked enum, the G and default format gave us a number instead of a value name.  This is because the value was not a valid single-value constant of the enum.  However, using the F flags format string, it broke out the value into its component flags even though it wasn’t marked [Flags]. So, if you want to get an enum to display appropriately for whether or not it has the [Flags] attribute, use G, which is the default.  If you always want it to attempt to break down the flags, use F.  For numeric output, obviously D or X are the best choices depending on whether you want decimal or hex. Summary Hopefully, you learned a couple of new tricks with using the Enum class today!  I’ll add more little wonders as I think of them, and thanks for all the invaluable input!   Technorati Tags: C#,.NET,Little Wonders,Enum,BlackRabbitCoder
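    The SetFlag() sketch earlier invites a companion. A hypothetical ClearFlag(), mirroring the same pattern (same caveats: an illustration, not a BCL method, and it needs the same boxing round-trip):

    public static class EnumFlagExtensions
    {
        public static T ClearFlag<T>(this Enum value, T flags)
        {
            if (!value.GetType().IsEquivalentTo(typeof(T)))
            {
                throw new ArgumentException("Enum value and flags types don't match.");
            }
            // AND with the complement of the flags to switch those bits off
            return (T)Enum.ToObject(typeof(T), Convert.ToUInt64(value) & ~Convert.ToUInt64(flags));
        }
    }

    // usage: as with SetFlag(), remember to assign the result
    // var opts = (MessagingOptions.Buffered | MessagingOptions.Broadcast).ClearFlag(MessagingOptions.Broadcast);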

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-19

    - by Bob Rhubart
    Discussion: Public, Private, and Hybrid Clouds A conversation about the similarities and differences between public, private, and hybrid clouds; the connection between cows, condos, and cloud computing; and what architects need to know in order to take advantage of cloud computing. (OTN ArchBeat Podcast transcript) InfoQ: Current Trends in Enterprise Mobility Interesting infographics that show current developments and major trends in enterprise mobility. Recap: EMEA User Group Leaders Meeting Latvia May 2012 Tom Scheirsen recaps the recent IOUC event in Riga. Oracle Fusion Middleware Summer Camps in Lisbon: Includes Advanced ADF Training by Oracle Product Management This is how IT people deal with the Summertime Blues. Enterprise 2.0 Conference: Building Social Business | Oracle WebCenter Blog Kellsey Ruppel shares a list of E2.0 conference sessions being presented by members of the Oracle community. Linux 6 Transparent Huge Pages and Hadoop Workloads | Structured Data Greg Rahn documents a problem. BPM Standard Edition to start your BPM project "BPM Standard Edition is an entry level BPM offering designed to help organisations implement their first few processes in order to prove the value of BPM within their own organisation." Troubleshooting ADF Security 11g Login Page Failure | Andrejus Baranovskis Oracle ACE Director Andrejus Baranovskis takes a deep dive into one of the most common ADF 11g Security issues. It's Alive! - The Oracle OpenWorld Content Catalog It's what you’ve been waiting for—the central repository for information on sessions, demos, labs, user groups, exhibitors, and more. 5 minutes or less: Indexing Attributes in OID | Andre Correa Fusion Middleware A-Team blogger Andre Correa offers help for those who encounter issues when running searches with LDAP filters against OID (Oracle Internet Directory). Condos and Clouds: Thinking about Cloud Computing by Looking at Condominiums | Pat Helland In part two of the OTN ArchBeat Podcast Public, Private, and Hybrid Clouds, Oracle Cloud chief architect Mark Nelson mentions an analogy by Pat Helland that compares condos to cloud computing. After some digging I found the October 2011 presentation in which Helland explains that analogy. Thought for the Day "I have always found that plans are useless, but planning is indispensable." — Dwight Eisenhower (October 14, 1890 – March 28, 1969) Source: Quotes for Software Engineers

    Read the article

  • Silverlight Cream for November 24, 2011 -- #1173

    - by Dave Campbell
    In this Thanksgiving Day Issue: Andrea Boschin, Samidip Basu, Ollie Riches, WindowsPhoneGeek, Sumit Dutta, Dhananjay Kumar, Daniel Egan, Doug Mair, Chris Woodruff, and Debal Saha. Happy Thanksgiving Everybody!
    Above the Fold: Silverlight: "Silverlight CommandBinding with Simple MVVM Toolkit" Debal Saha WP7: "How many pins can Bing Maps handle in a WP7 app - part 3" Ollie Riches
    Shoutouts: Michael Palermo's latest Desert Mountain Developers is up Michael Washington's latest Visual Studio #LightSwitch Daily is up
    From SilverlightCream.com:
    Windows Phone 7.5 - Play with music: Andrea Boschin's latest WP7 post is up on SilverlightShow... he's talking about the improvements in the music hub and also the programmability of music
    OData caching in Windows Phone: Samidip Basu has an OData post up on SilverlightShow also, and he's talking about data caching strategies on WP7
    How many pins can Bing Maps handle in a WP7 app - part 3: Ollie Riches has part 3 of his series on Bing Maps and pins... specifically how to deal with a large number of them... after going through discussing pins, he suggests using a heat map, which looks pretty darn good and renders fast... except when on a device :(
    Improvements in the LongListSelector Selection with Nov `11 release of WP Toolkit: WindowsPhoneGeek's latest is this tutorial on the LongListSelector in the WP Toolkit... check out the previous info in his free eBook to get ready, then dig into this tutorial for improvements in the control.
    Part 25 - Windows Phone 7 - Device Status: Sumit Dutta's latest post is number 25 in his WP7 series, and this time out he's digging into device status in the Microsoft.Phone.Info namespace
    Video on How to work with Picture in Windows Phone 7: Dhananjay Kumar's latest video tutorial on WP7 is up, and he's talking about working with Photos.
    Live Tiles–Windows Phone Workshop: Daniel Egan has the video up of a Windows Phone Workshop done earlier this week on Live Tiles
    31 Days of Mango | Day #15: The Progress Bar: Doug Mair shares the show with Jeff Blankenburg in Jeff's Day 15 of his 31 Day quest of Mango, talking about the progress bar: Indeterminate and Determinate Modes abound
    31 Days of Mango | Day #14: Using OData: Chris Woodruff has a guest spot on Jeff Blankenburg's 31 Days series with this post on OData... long detailed tutorial with all the code
    Silverlight CommandBinding with Simple MVVM Toolkit: Debal Saha has a nice detailed tutorial up on CommandBinding.. he's using the SimpleMVVM Toolkit and shows downloading and installing it
    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight Silverlight 3 Silverlight 4 Windows Phone MIX10

    Read the article

  • Call for Papers SOA & Cloud Symposium by Thomas Erl

    - by Jürgen Kress
    3rd International SOA Symposium + 2nd International Cloud Symposium • Call for Presentations Berliner Congress Center, Alexanderstraße 11, 10178 Berlin, Germany (October 5-6, 2010) The International SOA and Cloud Symposium brings together lessons learned and emerging topics from SOA and Cloud projects, practitioners and experts. Please visit the Berlin & The Venue page for a map and more information. The two-day conference agenda will be organized into the following primary tracks: •  Track 1 SOA Architecture & Design •  Track 2 SOA Governance •  Track 3 Business of SOA •  Track 4 BPM, BPMN and Service-Orientation •  Track 5 Modeling from Services to the Enterprise •  Track 6 Real World SOA Case Studies •  Track 7 Real World Cloud Computing Case Studies •  Track 8 Cloud Computing Architecture, Standards & Technologies •  Track 9 REST and Service-Orientation in Practice •  Track 10 SOA Patterns & Practices •  Track 11 Modern ESB and Middleware •  Track 12 Semantic Web •  Track 13 SOA & BPM •  Track 14 Business of Cloud Computing •  Track 15 Cloud Computing Governance, Policies & Security   Presentation Submissions All submissions must be received no later than June 30, 2010. An overview of the tracks can be found here. Wiki with Additional Call for Papers: http://wiki.oracle.com/page/SOA+Call+for+Papers   Technorati Tags: soa,cloud,thomas erl,soasymposium,call for papers

    Read the article

  • Convert your Hash keys to object properties in Ruby

    - by kerry
    Being a Ruby noob (and having a background in Groovy), I was a little surprised that you cannot access hash objects using the dot notation.  I am writing an application that relies heavily on XML and JSON data.  This data will need to be displayed, and I would rather use book.author.first_name over book['author']['first_name'].  A quick search on Google yielded this post on the subject. So, taking the DRYOO (Don’t Repeat Yourself Or Others) concept, I came up with this: 1: class ::Hash 2:  3: # expose each key as a property 4: def to_obj 5: self.each do |k,v| 6: if v.kind_of? Hash 7: v.to_obj 8: end 9: k=k.gsub(/\.|\s|-|\/|\'/, '_').downcase.to_sym 10: self.instance_variable_set("@#{k}", v) ## create and initialize an instance variable for this key/value pair 11: self.class.send(:define_method, k, proc{self.instance_variable_get("@#{k}")}) ## create the getter that returns the instance variable 12: self.class.send(:define_method, "#{k}=", proc{|v| self.instance_variable_set("@#{k}", v)}) ## create the setter that sets the instance variable 13: end 14: return self 15: end 16: end This works pretty well.  It converts each of your keys to properties of the Hash.  However, it doesn’t sit very well with me because I probably will not use 90% of the properties most of the time.  Why should I go through the performance overhead of creating instance variables for all of the unused ones? Enter the ‘magic method’ #method_missing: 1: class ::Hash 2: def method_missing(name) 3: return self[name] if key? name 4: self.each { |k,v| return v if k.to_s.to_sym == name } 5: super 6: end 7: end This is a much cleaner method for my purposes.  Quite simply, it checks to see if there is a key with the given symbol, and if not, loops through the keys and attempts to find one. I am a Ruby noob, so if there is something I am overlooking, please let me know.
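    One caveat worth adding to the method_missing version: respond_to? will still return false for these dynamic readers unless respond_to_missing? is overridden as well (the Ruby 1.9+ convention). A minimal sketch, reusing the same key-matching logic as above:

    class ::Hash
      # keep respond_to? in sync with method_missing above
      def respond_to_missing?(name, include_private = false)
        key?(name) || keys.any? { |k| k.to_s.to_sym == name } || super
      end
    end

    # book.respond_to?(:author) # => true once the key exists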

    Read the article

  • Why am I getting this "Connection to PulseAudio failed" error?

    - by Dave M G
    I have a computer that runs Mythbuntu 11.10. It has an external USB Kenwood Digital Audio device. When I open up pavucontrol, I get this message: [screenshot of a "Connection to PulseAudio failed" dialog] If I do as the message suggests and run start-pulseaudio-x11, I get this output: $ start-pulseaudio-x11 Connection failure: Connection refused pa_context_connect() failed: Connection refused How do I correct this error? Update: Somewhere during the course of doing the suggested tests in the comments, a new audio device has become visible in my sound settings. I have not attached or made any new device, so this must be the result of some setting change. The device I use and know about is the Kenwood Audio device. The "GF108" device will play sound through the Kenwood anyway, but not reliably: [screenshot of the sound settings showing the extra "GF108" device] Command line output as requested in the comments: $ ls -l ~/.pulse* -rw------- 1 mythbuntu mythbuntu 256 Feb 28 2011 /home/mythbuntu/.pulse-cookie /home/mythbuntu/.pulse: total 200 -rw-r--r-- 1 mythbuntu mythbuntu 8192 Oct 23 01:38 2b98330d36bf53bb85c97fc300000008-card-database.tdb -rw-r--r-- 1 mythbuntu mythbuntu 69 Nov 16 22:51 2b98330d36bf53bb85c97fc300000008-default-sink -rw-r--r-- 1 mythbuntu mythbuntu 68 Nov 16 22:51 2b98330d36bf53bb85c97fc300000008-default-source -rw-r--r-- 1 mythbuntu mythbuntu 49152 Oct 14 12:30 2b98330d36bf53bb85c97fc300000008-device-manager.tdb -rw-r--r-- 1 mythbuntu mythbuntu 61440 Oct 23 01:40 2b98330d36bf53bb85c97fc300000008-device-volumes.tdb lrwxrwxrwx 1 mythbuntu mythbuntu 23 Nov 16 22:50 2b98330d36bf53bb85c97fc300000008-runtime -> /tmp/pulse-EAwvLIQZn7e8 -rw-r--r-- 1 mythbuntu mythbuntu 77824 Nov 1 12:54 2b98330d36bf53bb85c97fc300000008-stream-volumes.tdb And yet more requested command line output: $ ps auxw|grep pulse 1000 2266 0.5 0.2 294184 9152 ? S<l Nov16 4:26 pulseaudio -D 1000 2413 0.0 0.0 94816 3040 ? S Nov16 0:00 /usr/lib/pulseaudio/pulse/gconf-helper 1000 4875 0.0 0.0 8108 908 pts/0 S+ 12:15 0:00 grep --color=auto pulse
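    A common first step for this class of failure, offered here as a hedged suggestion rather than a confirmed fix, is to kill any stale per-user daemon and move the per-user state aside so PulseAudio regenerates it (paths per the ~/.pulse listing above):

    pulseaudio -k                          # ask any running (possibly stale) daemon to exit
    mv ~/.pulse ~/.pulse.bak               # park the per-user state so it gets rebuilt
    mv ~/.pulse-cookie ~/.pulse-cookie.bak
    pulseaudio --start                     # spawn a fresh daemon; then retry start-pulseaudio-x11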

    Read the article

  • Microsoft SQL Server 2012 Analysis Services – The BISM Tabular Model #ssas #tabular #bism

    - by Marco Russo (SQLBI)
    Alberto, Chris, and I spent many months (many nights, holidays, and also working days of the last few months) writing the book we would have liked to read when we started working with Analysis Services Tabular. A book that explains how to use Tabular, how to model data with Tabular, how Tabular works internally, and how to optimize a Tabular model. All those things you need to start on a real project in order to make a happy customer. You know, we're all consultants after all, so customer satisfaction is really important if we want to be paid for our job! Now the book writing is finished; we're in the final stage of editing and reviews, and we look forward to getting our print copies. Its title is very long: Microsoft SQL Server 2012 Analysis Services – The BISM Tabular Model. But the important thing is that you can already (pre)order it. This is the list of chapters: 01. BISM Architecture 02. Guided Tour on Tabular 03. Loading Data Inside Tabular 04. DAX Basics 05. Understanding Evaluation Contexts 06. Querying Tabular 07. DAX Advanced 08. Understanding Time Intelligence in DAX 09. Vertipaq Engine 10. Using Tabular Hierarchies 11. Data modeling in Tabular 12. Using Advanced Tabular Relationships 13. Tabular Presentation Layer 14. Tabular and PowerPivot for Excel 15. Tabular Security 16. Interfacing with Tabular 17. Tabular Deployment 18. Optimization and Monitoring And this is the book cover – have a good read!

    Read the article

  • CUDA install on Ubuntu 13.10?

    - by hexiangpeng
    The cuda_install_.log shows: ERROR: Unable to build the NVIDIA kernel module. ERROR: Installation has failed. Please see the file '/var/log/nvidia-installer.log' for details. You may find suggestions on fixing installation problems in the README available on the Linux driver download page at www.nvidia.com. The driver installation is unable to locate the kernel source. Please make sure that the kernel source packages are installed and set up correctly. And the other .log shows: ^ /tmp/selfgz3964/NVIDIA-Linux-x86-319.37/kernel/nv-i2c.c: In function ‘nv_i2c_del_adapter’: /tmp/selfgz3964/NVIDIA-Linux-x86-319.37/kernel/nv-i2c.c:327:14: error: void value not ignored as it ought to be osstatus = i2c_del_adapter(pI2cAdapter); ^ make[3]: *** [/tmp/selfgz3964/NVIDIA-Linux-x86-319.37/kernel/nv-i2c.o] ?? 1 make[2]: *** [module/tmp/selfgz3964/NVIDIA-Linux-x86-319.37/kernel] ?? 2 NVIDIA: left KBUILD. nvidia.ko failed to build! make[1]: *** [module] ?? 1 make: *** [module] ?? 2 - Error. ERROR: Unable to build the NVIDIA kernel module. ERROR: Installation has failed. Please see the file '/var/log/nvidia-installer.log' for details. You may find suggestions on fixing installation problems in the README available on the Linux driver download page at www.nvidia.com. I don't understand.
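    The nv-i2c.c failure in that log is a known kind of breakage: newer kernels made i2c_del_adapter() return void, so the driver's assignment no longer compiles. A sketch of the usual local workaround, with the caveat that upgrading to a newer NVIDIA driver release that already contains the fix is generally the safer route:

    /* kernel/nv-i2c.c, in nv_i2c_del_adapter(), at the line flagged above */

    /* before (fails on recent kernels, where i2c_del_adapter() returns void): */
    osstatus = i2c_del_adapter(pI2cAdapter);

    /* after: call it for its side effect only and assume success */
    i2c_del_adapter(pI2cAdapter);
    osstatus = 0;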

    Read the article

  • Last Night's Phoenix Silverlight UserGroup Meeting -- thanks!

    - by Dave Campbell
    14 of us gathered last night for a great presentation. As advertised, Les Brown of Sogeti came out to talk to us about the 4.0 enhancements, and brought along a new graduate and fellow worker, Chris Ross (congratulations on your degree, again). Good discussion about MEF and Les' approach to using it, all of which is available on CodePlex along with other fun things Les has done, for example: FileUpload Control, FlipPanel, Animation Extensions, etc., and also his CodeCamp material. As it turned out I only had one give-away with me, but that was worth probably close to everything I've given away so far: a Telerik Ultimate License graciously provided by Telerik. I also have a Sitefinity license to use on our site from Telerik, but I've been jammed up and haven't had the time to devote to getting it cooking. I included Les and Chris in my spreadsheet for randomly selecting swag awardees, and Chris ended up the winner... being a presenter and a new graduate with a new job, he seemed an appropriate pick. Let's not forget our host, Interface Technical Training, for taking the burden of providing a facility for us off my agenda. I've been to User Group meetings in many places, but the ITT facilities are the best, so thanks! Also thanks to everyone that came out... we had some new people and some regulars. I have a speaker for August but not July, so if you have something to present, send me an email. Thanks!

    Read the article

  • Java Road Trip: Code to Coast

    - by Tori Wieldt
    The Java Road Trip: Code to Coast Java developers, architects, programmers, and enthusiasts: get ready for a real adrenaline rush! Follow the Java Road Trip: Code to Coast as this high-tech block party on wheels travels to 20 cities across the United States showcasing Oracle's commitment to everything Java. It's a chance to talk to Java leaders and engineers and get your hands on the latest Java technology. The Java Road Trip kicks off June 14 in New York City with Octavian Tanase, Vice President, Java Platform Group at Oracle, headlining the event. Don't miss EJBs in Boston! Governance in Washington, DC! Swing(ing) in Memphis! Mile-high UIs in Denver! Java in Seattle! (too easy) and more! Join or follow the tour here: http://java.com/roadtrip/ Read the Oracle Magazine article. Use or follow the hash tag #javaroadtrip

    Read the article

  • JavaOne Latin America Underway

    - by Tori Wieldt
    JavaOne Latin America started officially today, but lots of networking has already happened. Last night some JUG leaders, Java Champions, and members of the Oracle Java development and marketing teams had dinner together. The conversation ranged from the new direction of JavaFX to how to improve JUG attendance. Maricio Leal shared the idea some Brazilian JUGs have of putting Java Evangelists and experts on a boat and having them visit JUGs in cities along the Amazon River. We discussed ideas, and shared dessert pizza. It was the perfect community get-together! If you see Brazilian Java Man Bruno Souza, ask him what he is bringing to the party. Today, at JavaOne Latin America, all the sessions were full, and developers were spilling into the hallways. Session content was selected with the help of 14 Java thought leaders from Latin America. JavaOne Program Committee Chair Sharat Chander said, "I'm thrilled that at this JavaOne over half of the content is coming from the community." Between sessions, developers take advantage of the Oracle Technology Network lounge to grab a snack and use their laptops. [photo: OTN Lounge] It promises to be a great JavaOne.

    Read the article

  • New Oracle Tutor Class: Create Procedures and Support Documents

    - by Emily Chorba
    Offered by Oracle University Course Code D66797GC10 July 14-16, 2010 in Chicago, IL This three-day instructor-led class is only US$ 2,250 Oracle® Tutor provides organizations with a powerful pair of applications to develop, deploy, and maintain employee business process documentation. Tutor includes a repository of prewritten process, procedure, and support documents that can be readily modified to reflect your company's unique business processes. The result is a set of job-role-specific desk manuals that are easy to update and deploy online. Use Tutor to create content to: Implement new business applications Document for any regulatory compliance initiative Turn every desk into a self-service reference center Increase employee productivity The primary challenge for companies faced with documenting policies, processes, and procedures is to realize that they can do this documentation in-house, with existing resources, using Oracle Tutor. Process documentation is a critical success component when implementing or upgrading to a new business application and for supporting corporate governance or other regulatory compliance initiatives. There are over 1000 Oracle Tutor customers worldwide who have used Tutor to create, distribute, and maintain their business procedures. This is easily accomplished because of Tutor's: Ease of use by those who have to write procedures (Microsoft Word based authoring) Ease of company-wide implementation (complex document management activities are centralized) Ease of use by workers who have to follow the procedures (play script format) Ease of access by remote workers (web-enabled) This course is an introduction to the Oracle Tutor suite of products. It focuses on the process documentation feature set of the Tutor applications. Participants will learn about writing procedures and maintaining these particular process document types, all using the Tutor method. Audience Business Analysts End Users Functional Implementer Project Manager Sales Consultants Security Compliance Auditors User Adoption Consultants Prerequisites No Prerequisite Courses strong working knowledge of MS Windows strong working knowledge of MS Word (2007) Objectives • Provide your organization with the next steps to implement the Tutor procedure writing method and system in your organization • Use the Tutor Author application to write employee focused process documents (procedures, instructions, references, process maps) • Use the Tutor Publisher application to create impact analysis reports, Employee Desk Manuals, and Owner Manuals Web site on OU Link to a PDF of the class summary Oracle University Training Centre - Chicago Emily Chorba Product Manager for Oracle Tutor

    Read the article

  • Java Spotlight Episode 107: Adam Bien on JavaEE Patterns and Futures @AdamBien

    - by Roger Brinkley
    Interview with Adam Bien, Java Champion and ACE Director, on his book Real World Java EE Patterns-Rethinking Best Practices and Java EE futures. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link:  Java Spotlight Podcast in iTunes. Show Notes News NightHacking Tour Continues - Don't Miss It! JavaFX Ensemble in the Mac App Store Announcing the JavaFX UI controls sandbox Java EE 7 Status Update - November 2012 2012 Executive Committee (EC) Elections Events Nov 5-9, Øredev Developer Conference, Malmö, Sweden Nov 13-17, Devoxx, Antwerp, Belgium Nov 20-22, DOAG 2012, Nuremberg, Germany Dec 3-5, jDays, Göteborg, Sweden Dec 4-6, JavaOne Latin America, Sao Paolo, Brazil Dec 14-15, IndicThreads, Pune, India Feature Interview Adam Bien is a Java Champion, NetBeans Dream Team Founding Member, Oracle ACE Director, and Java Developer of the Year 2010. He has worked with Java since JDK 1.0, and with Servlets/EJB since 1.0. He participates in the JCP as an Expert Group member for the Java EE 6 and 7, EJB 3.X, JAX-RS, CDI, and JPA 2.X JSRs. He is the author of several books about JavaFX, J2EE, and Java EE, including Real World Java EE Patterns—Rethinking Best Practices and Real World Java EE Night Hacks—Dissecting the Business Tier. The Kindle version of Real World Java EE Patterns-Rethinking Best Practices was released October 31. It’s only $9.99, but if you are an Amazon Prime member you can “borrow” the book for free. What’s Cool Building OpenJFX 2.2 Again

    Read the article

  • SQLAuthority News – Download SQL Server 2008 R2 Upgrade Technical Reference Guide

    - by pinaldave
    I recently came across a very interesting white paper written for Microsoft by Solid Quality Mentors. A successful upgrade to SQL Server 2008 R2 should be smooth and trouble-free, and for that smooth transition you must plan sufficiently for the upgrade and match the plan to the complexity of your database application. Otherwise, you risk costly and stressful errors and upgrade problems. The SQL Server 2008 R2 Upgrade Technical Reference Guide is one of the best and most comprehensive reference guides I have seen on the subject of SQL Server 2008 R2 upgrades. It discusses the many aspects of an upgrade that one would want to see covered. You can find the link about why one should upgrade to SQL Server 2008 R2 over here: Why upgrade to SQL Server 2008 R2. The white paper itself is here: SQL Server 2008 R2 Upgrade Guide. Here is a quick list of the contents of the white paper. 1. Upgrade Planning and Deployment 2. Management and Development Tools 3. Relational Databases 4. High Availability 5. Database Security 6. Full-Text Search 7. Service Broker 8. Transact-SQL Queries 9. Notification Services 10. SQL Server Express 11. Analysis Services 12. Data Mining 13. Integration Services 14. Reporting Services 15. Other Microsoft Applications and Platforms Appendix 1: Version and Edition Upgrade Paths Appendix 2: Upgrade Planning Deployment and Tasks Checklist This white paper is indeed huge, with 490 pages and 151,956 words. As I said, this is one of the most comprehensive white papers ever published on the subject. Just by reading this white paper one can learn a lot about SQL Server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site. Hive Actions: Prepping for Pig In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code. CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2) PARTITIONED BY (yr string) STORED AS ... LOCATION '/user/oracle/weather/historic'; As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access. ALTER TABLE historic_weather ADD IF NOT EXISTS PARTITION (yr='2011') LOCATION '/user/oracle/weather/historic/yr=2011'; INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history' SELECT w.stn, w.wban, w.weather_year, w.weather_month, w.weather_day, w.temp, w.dewp, w.weather FROM ( FROM historic_weather SELECT TRANSFORM(...) USING '/path/to/hive/filters/ncdc_parser.py' as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather ) w; Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called ncdc_parse.hql. Starting Our Workflow Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. 
Coordinator jobs can take all the same actions as workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point: <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1"> <start to="ParseNCDCData"/> <end name="end"/> </workflow-app> To this we need to add an action, and within that we'll specify the hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure. <action name="ParseNCDCData"> <hive xmlns="uri:oozie:hive-action:0.2"> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <configuration> <property> <name>oozie.hive.defaults</name> <value>/user/oracle/weather_ooze/hive-default.xml</value> </property> </configuration> <script>ncdc_parse.hql</script> </hive> <ok to="WeatherMan"/> <error to="end"/> </action> There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode. I have to include a hive-default.xml file. I have to include a script file. The hive-default.xml and script file must be stored in HDFS That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-default.xml files on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml hive-default.xml (make sure this file contains your metastore connection data) ncdc_parse.hql Adding Pig to the Ooze Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows: <action name="WeatherMan"> <pig> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <script>weather_train.pig</script> </pig> <ok to="end"/> <error to="end"/> </action> Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My pig script registers the Weka Jar and a chunk of jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the pig script, because pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding Jars to the distributed cache from Oozie's Pig Cookbook. Making the Workflow Work We've got a workflow defined and have collected all the components we'll need to run. 
But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows: nameNode=hdfs://localhost:8020 jobTracker=localhost:8021 queueName=default weatherRoot=weather_ooze mapreduce.jobtracker.kerberos.principal=foo dfs.namenode.kerberos.principal=foo oozie.libpath=${nameNode}/user/oozie/share/lib oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot} outputDir=weather-ooze While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of Jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory. We're finally ready to submit our job! After all that work we only need to do a few more things: Validate our workflow.xml Copy our working directory to HDFS Submit our job to the Oozie server Run our workflow Let's do them in order. First validate the workflow: oozie validate workflow.xml Next, copy the working directory up to HDFS: hadoop fs -put working_dir /user/oracle/working_dir Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument. oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output: 14-20120525161321-oozie-oracle This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job. oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately. Takeaway So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
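    As noted above, promoting this workflow to a coordinator job only takes one more XML file. A minimal sketch follows, with element names per the uri:oozie:coordinator:0.1 schema; treat the details (frequency syntax, schema version) as assumptions to verify against your Oozie release:

    <coordinator-app name="WeatherManDaily" frequency="${coord:days(1)}"
                     start="2012-06-01T00:00Z" end="2013-06-01T00:00Z" timezone="UTC"
                     xmlns="uri:oozie:coordinator:0.1">
      <action>
        <workflow>
          <!-- the same workflow directory we staged in HDFS above -->
          <app-path>${nameNode}/user/${user.name}/${weatherRoot}</app-path>
        </workflow>
      </action>
    </coordinator-app>

    The coordinator is submitted with its own properties file (pointing oozie.coord.application.path at this XML) and then materializes a run of the workflow once per period.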

    Read the article

  • Does not recognize usb sticks and drives

    - by Peter
    When connecting any USB stick to my ThinkPad, Ubuntu 10.10 does not recognize it. I don't see anything on the desktop. The output of "dmesg | tail -n10" gives me:

        [ 1965.696388] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 1965.884537] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 1966.072503] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 1966.260349] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 1966.506227] usb 1-1: new high speed USB device using ehci_hcd and address 9
        [ 1966.572375] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 1966.760379] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 1966.948358] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 1967.136335] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 1967.325423] hub 1-0:1.0: unable to enumerate USB device on port 1

    When connecting my USB scanner to the same port:

        [ 2008.480135] usb 1-1: new high speed USB device using ehci_hcd and address 65
        [ 2008.548389] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 2008.736786] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 2008.924379] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 2009.112348] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 2009.300443] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 2009.488536] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 2009.732180] usb 1-1: new high speed USB device using ehci_hcd and address 71
        [ 2014.796299] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 2018.000128] usb 2-1: new full speed USB device using uhci_hcd and address 3

    And Ubuntu 10.10 recognizes that scanner. So: what can I do to see my USB stick? BTW: on my other ThinkPad, running Fedora 14, it works perfectly... Cheers -Peter
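    One hedged diagnostic, not from the original thread: the log shows the scanner only enumerated once it fell back to the full-speed controller (uhci_hcd), so it may be worth testing whether the stick does the same when the high-speed controller is taken out of the way. The PCI address below is a placeholder; list the driver's directory first and use an address that actually appears on your machine:

        # see which PCI devices the EHCI (high-speed) driver has claimed
        ls /sys/bus/pci/drivers/ehci_hcd/
        # unbind the controller (hypothetical address -- substitute one from the listing)
        echo -n "0000:00:1d.7" | sudo tee /sys/bus/pci/drivers/ehci_hcd/unbind

    If the stick then shows up via uhci_hcd, the failure is in the high-speed handshake rather than in the stick itself; echo the same address to the driver's bind file to restore USB 2.0 operation.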

    Read the article

  • Help find correct alsa model for onboard sound (alc887?) that will work with jack and have correct mixer setup

    - by Jazz
    I have a Gigabyte GA-MA74GMT-S2 motherboard. I am using JACK for sound, connected to ALSA, and I am running Ubuntu 12.04. aplay -l reports:

        card 0: SB [HDA ATI SB], device 0: ALC887 Analog [ALC887 Analog]

    The problem is that the default setup that ALSA decides to use causes stuttering and xruns no matter how generously I set the frames/period or periods/buffer etc. Also, JACK works fine if I plug in an external USB sound system and use that. My processor is an AMD Phenom X4 945, I have 8GB RAM, and my video card is a GeForce GTX 550 Ti, all of which should be quite capable enough. I also tried PulseAudio and that works fine, but I need to use JACK. At first I thought it might be an interrupt conflict, but I have found that adding "options snd-hda-intel model=generic" to /etc/modprobe.d/alsa-base.conf causes it to play correctly; however, the limited mixer setup lacks controls I need, so this setup isn't good enough. Still, it seems to prove it isn't a hardware conflict. I have tried many other models, such as 3stack, 6stack, auto and even basic, and they all suffer from the stuttering. I eventually found that "options snd-hda-intel model=3stack-6ch-intel" works without stuttering, and the mixer is much closer to what it needs to be. Can anyone help with how to get a correct and accurate model for ALSA to use? More info on the hardware that might help...

        *-multimedia
            description: Audio device
            product: SBx00 Azalia (Intel HDA)
            vendor: Hynix Semiconductor (Hyundai Electronics)
            physical id: 14.2
            bus info: pci@0000:00:14.2
            version: 00
            width: 64 bits
            clock: 33MHz
            capabilities: pm bus_master cap_list
            configuration: driver=snd_hda_intel latency=32
            resources: irq:16 memory:fe024000-fe027fff
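    A hedged pointer, not from the original post: the model strings the driver accepts for each codec are enumerated in the kernel documentation, and the standard alsa-info.sh diagnostic script collects the codec and subsystem details that upstream uses to choose quirks, which makes picking a model string much less of a guessing game:

        # fetch and run the standard ALSA diagnostic script
        wget http://www.alsa-project.org/alsa-info.sh
        bash alsa-info.sh
        # the valid model= values per codec are listed in the kernel source, e.g.
        # Documentation/sound/alsa/HD-Audio-Models.txt (ALC882/883/887/888 section)

    Comparing the alsa-info.sh output against the ALC88x entries in that file is usually the quickest way to find a model that matches both the codec and the board wiring.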

    Read the article

  • How to open a VirtualBox (.VDI) Virtual Machine

    - by [email protected]
    How to open a .VDI virtual machine: sometimes someone shares a virtual machine with the .VDI extension with us, and we may wonder what to do with it. Well, the answer is... it is a VirtualBox virtual machine. If you have not downloaded VirtualBox, you can do so easily; just follow this post:
    http://listeningoracle.blogspot.com/2010/04/que-es-virtualbox.html
    or
    http://oracleoforacle.wordpress.com/2010/04/14/ques-es-virtualbox/
    OK, now with VirtualBox installed, open it and proceed with the following:
    1. Open the Virtual Media Manager.
    2. Click on Actions -> Add and select the .VDI file. Click "Ok".
    3. A new virtual machine will be displayed (in this case, an OEL5 32GB virtual machine is available).
    4. This step is important. Once you have opened the settings, under the General option click the advanced settings. Here you must change the default directory for saving your snapshots; my recommendation is to set it to the same directory where the .VDI file is. Otherwise you can end up with the same virtual machine and its snapshots in different paths.
    5. Now click on System, and proceed to assign the correct memory and define the processors for the virtual machine. Note: enable "Enable IO APIC" if you are planning to assign more than one CPU to the virtual machine.
    6. Associate the storage disk with the virtual machine. The disk must be selected as IDE Primary Master.
    7. You can verify the other options, but with these changes you will be able to start the VM. Note: sometimes the VM owner may share some instructions; if so, follow them.
    8. Click Ok and push the Start button, and enjoy your virtual machine.
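    For readers who prefer the command line, the same flow can be sketched with VBoxManage. The VM name, memory size, CPU count, and .VDI path below are placeholders, not values from the original post:

        # create and register an empty VM (the name is arbitrary)
        VBoxManage createvm --name "OEL5" --register
        # memory, CPUs, and IO APIC (required for more than one CPU) -- steps 4-5
        VBoxManage modifyvm "OEL5" --memory 2048 --cpus 2 --ioapic on
        # add an IDE controller and attach the shared .VDI as primary master -- step 6
        VBoxManage storagectl "OEL5" --name "IDE" --add ide
        VBoxManage storageattach "OEL5" --storagectl "IDE" --port 0 --device 0 --type hdd --medium /path/to/shared.vdi
        # boot it -- step 8
        VBoxManage startvm "OEL5"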

    Read the article

  • DENY select on sys.dm_db_index_physical_stats

    - by steveh99999
    Technorati Tags: security, DMV, permission, sys.dm_db_index_physical_stats
    I recently saw an interesting blog article by Paul Randal about the performance overhead of querying sys.dm_db_index_physical_stats. So I was thinking: would it be possible to let non-sysadmin users query DMVs on a SQL Server, but stop them from querying this I/O-intensive DMV? Yes it is. Here's how...
    1. Create a new login for test purposes, with permissions to access the AdventureWorks database only:

        CREATE LOGIN [test] WITH PASSWORD='xxxx', DEFAULT_DATABASE=[AdventureWorks]
        GO
        USE [AdventureWorks]
        GO
        CREATE USER [test] FOR LOGIN [test] WITH DEFAULT_SCHEMA=[dbo]
        GO

    2. Log in as user test and issue the command:

        SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'), NULL, NULL, NULL, 'DETAILED')

    This gets the error:

        Msg 297, Level 16, State 12, Line 1
        The user does not have permission to perform this action.

    3. As a sysadmin, issue the command:

        USE AdventureWorks
        GRANT VIEW DATABASE STATE TO [test]

    or, if all databases may be queried via DMVs:

        GRANT VIEW SERVER STATE TO [test]

    4. Try again as user test:

        SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'), NULL, NULL, NULL, 'DETAILED')

    This now produces valid results from the DMV.
    5. Now create the test user in the master database, with the public role only:

        USE master
        CREATE USER [test] FOR LOGIN [test]

    6. Issue the command:

        USE master
        DENY SELECT ON sys.dm_db_index_physical_stats TO [test]

    7. Now go back to AdventureWorks using the test login and try:

        SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'), NULL, NULL, NULL, 'DETAILED')

    This now gets the error:

        Msg 229, Level 14, State 5, Line 1
        The SELECT permission was denied on the object 'dm_db_index_physical_stats', database 'mssqlsystemresource', schema 'sys'.

    But the user is still able to query all other, non-I/O-intensive DMVs. If the user attempts to view the index physical stats via a built-in Management Studio report (see a recent blog post by Pinal Dave) they get an error as well.
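    As a hedged aside, not part of the original steps: you can confirm what the test login actually ends up with by querying effective permissions from that login's session. fn_my_permissions is a standard SQL Server function; the exact output depends on your grants:

        -- run while connected as [test]: effective permissions on the DMV itself
        USE master
        SELECT * FROM fn_my_permissions('sys.dm_db_index_physical_stats', 'OBJECT')
        -- and at server scope, VIEW SERVER STATE should be listed once granted
        SELECT * FROM fn_my_permissions(NULL, 'SERVER')

    After step 6, the OBJECT query should no longer list SELECT, which is exactly what the Msg 229 error reflects.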

    Read the article

  • How to switch sound-drivers, and to which? [AMD] Hudson Azalia Controller

    - by Anders Martini
    System Settings / Sound does not open; it freezes and I have to force-close it. The speaker symbol with the volume control does not open its drop-down menu, and there is no sound. Many people have problems with Hudson Azalia in Ubuntu, but I found no working solution. I don't really understand much of this, but here are some more details.
    aplay -l:

        **** List of PLAYBACK Hardware Devices ****

    (after running this one, it starts some kind of process that doesn't produce any results and doesn't stop; the terminal has to be shut down)
    lspci -vnn | grep -iA5 audio:

        00:01.1 Audio device [0403]: Advanced Micro Devices [AMD] nee ATI Device [1002:9902]
            Subsystem: Hewlett-Packard Company Device [103c:184c]
            Flags: bus master, fast devsel, latency 0, IRQ 53
            Memory at f0444000 (32-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: snd_hda_intel
        --
        00:14.2 Audio device [0403]: Advanced Micro Devices [AMD] Hudson Azalia Controller [1022:780d] (rev 01)
            Subsystem: Hewlett-Packard Company Device [103c:184c]
            Flags: bus master, slow devsel, latency 32, IRQ 54
            Memory at f0440000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: snd_hda_intel

    It seems to me that I'm currently running HDA Intel drivers on my AMD Hudson Azalia sound card. I can't see what drivers this sound card uses. Do I need any additional drivers for my sound card, and where would I find them?
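    A hedged note, not from the original thread: snd_hda_intel is the generic Intel HD Audio (Azalia) driver, so seeing it bound to an AMD controller is expected rather than a sign of a missing driver. With two controllers listed, a common culprit is card ordering, since the first device (00:01.1) is typically the GPU/HDMI audio path. One sketch, assuming the Hudson controller enumerates as card 1 (check /proc/asound/cards for the real index):

        # /etc/asound.conf -- pin the ALSA default to the analog (Hudson) card
        defaults.pcm.card 1
        defaults.ctl.card 1

    If the wrong card was the default, pinning the analog card this way sometimes restores desktop sound without any additional drivers.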

    Read the article

  • Fix corrupt NTFS partition without Windows

    - by Capt.Nemo
    My NTFS partition has gotten corrupted somehow (it's a relic from the days when I had Windows installed). I'm putting the debug output of fdisk and blkid here. At the same time, any OS is unable to mount my root partition, which is located next to my NTFS partition. I'm not sure if this has anything to do with it, though. I get the following error while trying to mount my root partition (sda5):

        mount: wrong fs type, bad option, bad superblock on /dev/sda5,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

        ubuntu@ubuntu:~$ dmesg | tail
        [ 1019.726530] Descriptor sense data with sense descriptors (in hex):
        [ 1019.726533] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
        [ 1019.726551] 1a 3e ed 92
        [ 1019.726558] sd 0:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed
        [ 1019.726568] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 1a 3e ed 40 00 01 00 00
        [ 1019.726584] end_request: I/O error, dev sda, sector 440331666
        [ 1019.726602] JBD: Failed to read block at offset 462
        [ 1019.726609] ata1: EH complete
        [ 1019.726612] JBD: recovery failed
        [ 1019.726617] EXT4-fs (sda5): error loading journal

    When I open GParted (using the live CD), I get an exclamation mark next to my NTFS drive (the warning it shows was attached as a screenshot, omitted here). Is there a way to run chkdsk without using Windows? My attempt to run fsck results in the following:

        ubuntu@ubuntu:~$ sudo fsck /dev/sda
        fsck from util-linux-ng 2.17.2
        e2fsck 1.41.14 (22-Dec-2010)
        fsck.ext2: Superblock invalid, trying backup blocks...
        fsck.ext2: Bad magic number in super-block while trying to open /dev/sda
        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>

    Update: I was able to fix the NTFS partition by running chkdsk off HBCD, but it seems that the superblock problem still remains.
    Update 2: Fixed the superblock issue using e2fsck -c /dev/sda5.
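    A hedged footnote for anyone hitting the same pair of problems. Note first that the fsck above was run against the whole disk (/dev/sda) rather than the partition (/dev/sda5), which by itself produces the "Bad magic number" complaint. Beyond that, the ntfs-3g tools include ntfsfix, which repairs some common NTFS inconsistencies and schedules a chkdsk from Linux, and mke2fs can list backup superblock locations without writing anything. The NTFS device name below is an assumption; substitute your actual partition:

        # NTFS: fix common errors and clear the dirty flag (hypothetical partition)
        sudo ntfsfix /dev/sda2
        # ext4: print backup superblock locations without touching the disk
        sudo mke2fs -n /dev/sda5
        # then point e2fsck at one of the reported backups, e.g.:
        sudo e2fsck -b 32768 /dev/sda5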

    Read the article

  • Problem with SNMP and MIBs

    - by jap1968
    I am installing Zabbix to monitor some devices via SNMP from a machine running Ubuntu 12.04 Server. There is a problem with the MIB definitions, since the snmp commands do not properly translate some of the MIBs. I have already installed the "snmp-mibs-downloader" package, so the files containing the MIB descriptions are properly installed. The MIBs are only translated in one direction, to obtain the numeric OID (so the MIB files are accessible to the snmp commands), but the results returned by the snmpget command are not translated back to the symbolic name. The Zabbix templates that I am using expect the translated key (SNMPv2-MIB::sysUpTime.0), so the current results are not recognised and are ignored. Test case:

        $ snmptranslate -On SNMPv2-MIB::sysUpTime.0
        .1.3.6.1.2.1.1.3.0
        $ snmpget -v 2c -c public 192.168.1.1 1.3.6.1.2.1.1.3.0
        iso.3.6.1.2.1.1.3.0 = Timeticks: (2911822510) 337 days, 0:23:45.10

    On another machine (running a very old Red Hat based distribution), the snmp commands perform both the direct and the reverse translation, as expected:

        # snmptranslate -On SNMPv2-MIB::sysUpTime.0
        .1.3.6.1.2.1.1.3.0
        # snmpget -v 2c -c public 192.168.1.1 1.3.6.1.2.1.1.3.0
        SNMPv2-MIB::sysUpTime.0 = Timeticks: (2911819485) 337 days, 0:23:14.85

    What is the problem on my Ubuntu box? Is there something I am missing?
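    A hedged pointer, offered as a guess rather than a confirmed answer: on Debian and Ubuntu, /etc/snmp/snmp.conf ships with a line reading "mibs :" that tells the net-snmp tools to load no MIBs even after snmp-mibs-downloader has fetched them, and "iso.3.6.1..." output is the classic symptom. Two quick tests:

        # per command: force loading of all installed MIBs
        snmpget -v 2c -c public -m +ALL 192.168.1.1 1.3.6.1.2.1.1.3.0
        # or per machine: comment out the restrictive line
        sudo sed -i 's/^mibs :/#mibs :/' /etc/snmp/snmp.conf

    If the -m +ALL form prints SNMPv2-MIB::sysUpTime.0, the MIB files themselves are fine and only the default loading policy was in the way.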

    Read the article
