Search Results

Search found 5472 results on 219 pages for 'faceless1 14'.


  • Silverlight Cream for November 24, 2011 -- #1173

    - by Dave Campbell
    In this Thanksgiving Day issue: Andrea Boschin, Samidip Basu, Ollie Riches, WindowsPhoneGeek, Sumit Dutta, Dhananjay Kumar, Daniel Egan, Doug Mair, Chris Woodruff, and Debal Saha. Happy Thanksgiving, everybody!
    Above the Fold:
    Silverlight: "Silverlight CommandBinding with Simple MVVM Toolkit" - Debal Saha
    WP7: "How many pins can Bing Maps handle in a WP7 app - part 3" - Ollie Riches
    Shoutouts: Michael Palermo's latest Desert Mountain Developers is up. Michael Washington's latest Visual Studio #LightSwitch Daily is up.
    From SilverlightCream.com:
    Windows Phone 7.5 - Play with music: Andrea Boschin's latest WP7 post is up on SilverlightShow... he's talking about the improvements in the music hub and also the programmability of music.
    OData caching in Windows Phone: Samidip Basu has an OData post up on SilverlightShow also, and he's talking about data caching strategies on WP7.
    How many pins can Bing Maps handle in a WP7 app - part 3: Ollie Riches has part 3 of his series on Bing Maps and pins... specifically how to deal with a large number of them... after going through discussing pins, he suggests using a heat map, which looks pretty darn good and renders fast... except when on a device :(
    Improvements in the LongListSelector Selection with Nov '11 release of WP Toolkit: WindowsPhoneGeek's latest is this tutorial on the LongListSelector in the WP Toolkit... check out the previous info in his free eBook to get ready, then dig into this tutorial for improvements in the control.
    Part 25 - Windows Phone 7 - Device Status: Sumit Dutta's latest post is number 25 in his WP7 series, and this time out he's digging into device status in the Microsoft.Phone.Info namespace.
    Video on How to work with Picture in Windows Phone 7: Dhananjay Kumar's latest video tutorial on WP7 is up, and he's talking about working with photos.
    Live Tiles - Windows Phone Workshop: Daniel Egan has the video up of a Windows Phone Workshop done earlier this week on Live Tiles.
    31 Days of Mango | Day #15: The Progress Bar: Doug Mair shares the show with Jeff Blankenburg in Jeff's Day 15 of his 31-day quest of Mango, talking about the ProgressBar: Indeterminate and Determinate modes abound.
    31 Days of Mango | Day #14: Using OData: Chris Woodruff has a guest spot on Jeff Blankenburg's 31 Days series with this post on OData... a long, detailed tutorial with all the code.
    Silverlight CommandBinding with Simple MVVM Toolkit: Debal Saha has a nice detailed tutorial up on CommandBinding... he's using the Simple MVVM Toolkit and shows downloading and installing it.
    Stay in the 'Light!

    Read the article

  • Convert your Hash keys to object properties in Ruby

    - by kerry
    Being a Ruby noob (and having a background in Groovy), I was a little surprised that you cannot access hash objects using dot notation. I am writing an application that relies heavily on XML and JSON data. This data will need to be displayed, and I would rather use book.author.first_name over book['author']['first_name']. A quick Google search yielded this post on the subject. So, taking the DRYOO (Don't Repeat Yourself Or Others) concept, I came up with this:

        class ::Hash
          # expose keys as object properties
          def to_obj
            self.each do |k, v|
              v.to_obj if v.kind_of? Hash
              k = k.gsub(/\.|\s|-|\/|\'/, '_').downcase.to_sym
              self.instance_variable_set("@#{k}", v)  # create and initialize an instance variable for this key/value pair
              self.class.send(:define_method, k, proc { self.instance_variable_get("@#{k}") })  # getter that returns the instance variable
              self.class.send(:define_method, "#{k}=", proc { |v| self.instance_variable_set("@#{k}", v) })  # setter that sets the instance variable
            end
            return self
          end
        end

    This works pretty well. It converts each of your keys to properties of the Hash. However, it doesn't sit very well with me, because I probably will not use 90% of the properties most of the time. Why should I go through the performance overhead of creating instance variables for all of the unused ones?
    Enter the 'magic method' #method_missing:

        class ::Hash
          def method_missing(name)
            return self[name] if key? name
            self.each { |k, v| return v if k.to_s.to_sym == name }
            super  # fall back to the normal NoMethodError if no key matches
          end
        end

    This is a much cleaner method for my purposes. Quite simply, it checks to see if there is a key with the given symbol and, if not, loops through the keys and attempts to find one. I am a Ruby noob, so if there is something I am overlooking, please let me know.

    Read the article

  • cuda install in ubuntu13.10?

    - by hexiangpeng
    The cuda_install_*.log shows:
        ERROR: Unable to build the NVIDIA kernel module.
        ERROR: Installation has failed. Please see the file '/var/log/nvidia-installer.log' for details. You may find suggestions on fixing installation problems in the README available on the Linux driver download page at www.nvidia.com.
        The driver installation is unable to locate the kernel source. Please make sure that the kernel source packages are installed and set up correctly.
    and the other .log shows:
        /tmp/selfgz3964/NVIDIA-Linux-x86-319.37/kernel/nv-i2c.c: In function 'nv_i2c_del_adapter':
        /tmp/selfgz3964/NVIDIA-Linux-x86-319.37/kernel/nv-i2c.c:327:14: error: void value not ignored as it ought to be
            osstatus = i2c_del_adapter(pI2cAdapter);
        make[3]: *** [/tmp/selfgz3964/NVIDIA-Linux-x86-319.37/kernel/nv-i2c.o] Error 1
        make[2]: *** [module/tmp/selfgz3964/NVIDIA-Linux-x86-319.37/kernel] Error 2
        NVIDIA: left KBUILD.
        nvidia.ko failed to build!
        make[1]: *** [module] Error 1
        make: *** [module] Error 2
        - Error.
        ERROR: Unable to build the NVIDIA kernel module.
        ERROR: Installation has failed. Please see the file '/var/log/nvidia-installer.log' for details. You may find suggestions on fixing installation problems in the README available on the Linux driver download page at www.nvidia.com.
    I don't understand what is going wrong.
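
    Before re-running the installer it is worth confirming that the kernel headers and a build toolchain are actually present, since the first error is about missing kernel source. A minimal sketch, assuming stock Ubuntu 13.10 package names:
        $ sudo apt-get update
        $ sudo apt-get install build-essential linux-headers-$(uname -r)
        $ sudo sh <installer>.run    # re-run whichever CUDA/NVIDIA .run file was used; <installer> is a placeholder
    The i2c_del_adapter build error suggests the bundled 319.37 driver may simply be too old for the 13.10 kernel, so a newer driver release may be needed even once the headers are installed.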

    Read the article

  • Why am I getting this "Connection to PulseAudio failed" error?

    - by Dave M G
    I have a computer that runs Mythbuntu 11.10. It has an external USB Kenwood digital audio device. When I open pavucontrol, I get a "Connection to PulseAudio failed" message. If I do as the message suggests and run start-pulseaudio-x11, I get this output:
        $ start-pulseaudio-x11
        Connection failure: Connection refused
        pa_context_connect() failed: Connection refused
    How do I correct this error?
    Update: Somewhere during the course of doing the suggested tests in the comments, a new audio device has become visible in my sound settings. I have not attached or made any new device, so this must be the result of some setting change. The device I use and know about is the Kenwood audio device. The "GF108" device will play sound through the Kenwood anyway, but not reliably.
    Command line output as requested in the comments:
        $ ls -l ~/.pulse*
        -rw------- 1 mythbuntu mythbuntu 256 Feb 28 2011 /home/mythbuntu/.pulse-cookie
        /home/mythbuntu/.pulse:
        total 200
        -rw-r--r-- 1 mythbuntu mythbuntu 8192 Oct 23 01:38 2b98330d36bf53bb85c97fc300000008-card-database.tdb
        -rw-r--r-- 1 mythbuntu mythbuntu 69 Nov 16 22:51 2b98330d36bf53bb85c97fc300000008-default-sink
        -rw-r--r-- 1 mythbuntu mythbuntu 68 Nov 16 22:51 2b98330d36bf53bb85c97fc300000008-default-source
        -rw-r--r-- 1 mythbuntu mythbuntu 49152 Oct 14 12:30 2b98330d36bf53bb85c97fc300000008-device-manager.tdb
        -rw-r--r-- 1 mythbuntu mythbuntu 61440 Oct 23 01:40 2b98330d36bf53bb85c97fc300000008-device-volumes.tdb
        lrwxrwxrwx 1 mythbuntu mythbuntu 23 Nov 16 22:50 2b98330d36bf53bb85c97fc300000008-runtime -> /tmp/pulse-EAwvLIQZn7e8
        -rw-r--r-- 1 mythbuntu mythbuntu 77824 Nov 1 12:54 2b98330d36bf53bb85c97fc300000008-stream-volumes.tdb
    And yet more requested command line output:
        $ ps auxw | grep pulse
        1000 2266 0.5 0.2 294184 9152 ? S<l Nov16 4:26 pulseaudio -D
        1000 2413 0.0 0.0 94816 3040 ? S Nov16 0:00 /usr/lib/pulseaudio/pulse/gconf-helper
        1000 4875 0.0 0.0 8108 908 pts/0 S+ 12:15 0:00 grep --color=auto pulse
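
    A low-risk first step when pa_context_connect() is refused is to restart the per-user PulseAudio daemon (run as the logged-in desktop user, not root); a minimal sketch:
        $ pulseaudio -k          # kill the running per-user daemon, if any
        $ pulseaudio --start     # start it again in daemon mode
        $ pactl info             # should now print server details instead of "Connection refused"
    If the daemon keeps dying, running "pulseaudio -vv" in the foreground usually shows why.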

    Read the article

  • Microsoft SQL Server 2012 Analysis Services – The BISM Tabular Model #ssas #tabular #bism

    - by Marco Russo (SQLBI)
    Alberto, Chris, and I spent many months (many nights, holidays, and also working days) writing the book we would have liked to read when we started working with Analysis Services Tabular: a book that explains how to use Tabular, how to model data with Tabular, how Tabular works internally, and how to optimize a Tabular model - all the things you need to start on a real project in order to make a happy customer. You know, we're all consultants after all, so customer satisfaction is really important to being paid for our job! Now the writing is finished, we're in the final stage of editing and reviews, and we look forward to getting our print copies. Its title is very long: Microsoft SQL Server 2012 Analysis Services - The BISM Tabular Model. But the important thing is that you can already (pre)order it. This is the list of chapters:
    01. BISM Architecture
    02. Guided Tour on Tabular
    03. Loading Data Inside Tabular
    04. DAX Basics
    05. Understanding Evaluation Contexts
    06. Querying Tabular
    07. DAX Advanced
    08. Understanding Time Intelligence in DAX
    09. Vertipaq Engine
    10. Using Tabular Hierarchies
    11. Data Modeling in Tabular
    12. Using Advanced Tabular Relationships
    13. Tabular Presentation Layer
    14. Tabular and PowerPivot for Excel
    15. Tabular Security
    16. Interfacing with Tabular
    17. Tabular Deployment
    18. Optimization and Monitoring
    And this is the book cover - have a good read!

    Read the article

  • Last Night's Phoenix Silverlight UserGroup Meeting -- thanks!

    - by Dave Campbell
    14 of us gathered last night for a great presentation. As advertised, Les Brown of Sogeti came out to talk to us about the 4.0 enhancements, and brought along a new graduate and co-worker, Chris Ross (congratulations on your degree, again). There was good discussion about MEF and Les' approach to using it, all of which is available on CodePlex along with other fun things Les has done, for example: FileUpload control, FlipPanel, animation extensions, etc., and also his Code Camp material. As it turned out I only had one give-away with me, but that was worth probably close to everything I've given away so far: a Telerik Ultimate license graciously provided by Telerik. I also have a Sitefinity license from Telerik to use on our site, but I've been jammed up and haven't had the time to devote to getting it cooking. I included Les and Chris in my spreadsheet for randomly selecting swag awardees, and Chris ended up the winner... as a presenter, a new graduate, and someone starting a new job, I thought it was appropriate. Let's not forget our host, Interface Technical Training, for taking the burden of providing a facility for us off my agenda. I've been to user group meetings in many places, but the ITT facilities are the best, so thanks! Also thanks to everyone that came out... we had some new people and some regulars. I have a speaker for August but not July, so if you have something to present, send me an email. Thanks!

    Read the article

  • Java Road Trip: Code to Coast

    - by Tori Wieldt
    The Java Road Trip: Code to Coast
    Java developers, architects, programmers, and enthusiasts: get ready for a real adrenaline rush! Follow the Java Road Trip: Code to Coast as this high-tech block party on wheels travels to 20 cities across the United States showcasing Oracle's commitment to everything Java. It's a chance to talk to Java leaders and engineers and get your hands on the latest Java technology. The Java Road Trip kicks off June 14 in New York City with Octavian Tanase, Vice President, Java Platform Group at Oracle, headlining the event. Don't miss:
    EJBs in Boston!
    Governance in Washington, DC!
    Swing(ing) in Memphis!
    Mile-high UIs in Denver!
    Java in Seattle! (too easy)
    and more!
    Join or follow the tour here: http://java.com/roadtrip/
    Read the Oracle Magazine article
    Use or follow the hash tag #javaroadtrip

    Read the article

  • JavaOne Latin America Underway

    - by Tori Wieldt
    JavaOne Latin America started officially today, but lots of networking has already happened. Last night some JUG leaders, Java Champions, and members of the Oracle Java development and marketing teams had dinner together. The conversation ranged from the new direction of JavaFX to how to improve JUG attendance. Maricio Leal shared the idea some Brazilian JUGs have of putting Java evangelists and experts on a boat and having them visit JUGs in cities along the Amazon river. We discussed ideas and shared dessert pizza. It was the perfect community get-together! If you see Brazilian Java Man Bruno Souza, ask him what he is bringing to the party. Today, at JavaOne Latin America, all the sessions were full, and developers were spilling into the hallways. Session content was selected with the help of 14 Java thought leaders from Latin America. JavaOne Program Committee Chair Sharat Chander said, "I'm thrilled that at this JavaOne over half of the content is coming from the community." Between sessions, developers take advantage of the Oracle Technology Network (OTN) lounge to grab a snack and use their laptops. It promises to be a great JavaOne.

    Read the article

  • Java Spotlight Episode 107: Adam Bien on JavaEE Patterns and Futures @AdamBien

    - by Roger Brinkley
    Interview with Adam Bien, Java Champion and Oracle ACE Director, on his book Real World Java EE Patterns - Rethinking Best Practices and on Java EE futures. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.
    Show Notes
    News:
    NightHacking Tour Continues - Don't Miss It!
    JavaFX Ensemble in the Mac App Store
    Announcing the JavaFX UI controls sandbox
    Java EE 7 Status Update - November 2012
    2012 Executive Committee (EC) Elections
    Events:
    Nov 5-9, Øredev Developer Conference, Malmö, Sweden
    Nov 13-17, Devoxx, Antwerp, Belgium
    Nov 20-22, DOAG 2012, Nuremberg, Germany
    Dec 3-5, jDays, Göteborg, Sweden
    Dec 4-6, JavaOne Latin America, Sao Paolo, Brazil
    Dec 14-15, IndicThreads, Pune, India
    Feature Interview:
    Adam Bien is a Java Champion, NetBeans Dream Team founding member, Oracle ACE Director, and Java Developer of the Year 2010. He has worked with Java since JDK 1.0 and with Servlets/EJB since 1.0. He participates in the JCP as an Expert Group member for the Java EE 6 and 7, EJB 3.X, JAX-RS, CDI, and JPA 2.X JSRs. He is the author of several books about JavaFX, J2EE, and Java EE, including Real World Java EE Patterns - Rethinking Best Practices and Real World Java EE Night Hacks - Dissecting the Business Tier. The Kindle version of Real World Java EE Patterns - Rethinking Best Practices was released October 31. It's only $9.99, but if you are an Amazon Prime member you can "borrow" the book for free.
    What's Cool:
    Building OpenJFX 2.2

    Read the article

  • New Oracle Tutor Class: Create Procedures and Support Documents

    - by [email protected]
    Offered by Oracle University, course code D66797GC10, July 14-16, 2010 in Chicago, IL. This three-day instructor-led class is only US$2,250.
    Oracle Tutor provides organizations with a powerful pair of applications to develop, deploy, and maintain employee business process documentation. Tutor includes a repository of prewritten process, procedure, and support documents that can be readily modified to reflect your company's unique business processes. The result is a set of job-role-specific desk manuals that are easy to update and deploy online. Use Tutor to create content to:
    Implement new business applications
    Document for any regulatory compliance initiative
    Turn every desk into a self-service reference center
    Increase employee productivity
    The primary challenge for companies faced with documenting policies, processes, and procedures is to realize that they can do this documentation in-house, with existing resources, using Oracle Tutor. Process documentation is a critical success component when implementing or upgrading to a new business application and for supporting corporate governance or other regulatory compliance initiatives. There are over 1000 Oracle Tutor customers worldwide that have used Tutor to create, distribute, and maintain their business procedures. This is easily accomplished because of Tutor's:
    Ease of use by those who have to write procedures (Microsoft Word based authoring)
    Ease of company-wide implementation (complex document management activities are centralized)
    Ease of use by workers who have to follow the procedures (play script format)
    Ease of access by remote workers (web-enabled)
    This course is an introduction to the Oracle Tutor suite of products. It focuses on the process documentation feature set of the Tutor applications. Participants will learn about writing procedures and maintaining these particular process document types, all using the Tutor method.
    Audience: Business Analysts, End Users, Functional Implementers, Project Managers, Sales Consultants, Security Compliance Auditors, User Adoption Consultants
    Prerequisites: no prerequisite courses; a strong working knowledge of MS Windows and of MS Word (2007)
    Objectives:
    Provide your organization with the next steps to implement the Tutor procedure writing method and system in your organization
    Use the Tutor Author application to write employee-focused process documents (procedures, instructions, references, process maps)
    Use the Tutor Publisher application to create impact analysis reports, Employee Desk Manuals, and Owner Manuals
    Web site on OU | Link to a PDF of the class summary | Oracle University Training Centre - Chicago
    Emily Chorba, Product Manager for Oracle Tutor

    Read the article

  • SQLAuthority News – Download SQL Server 2008 R2 Upgrade Technical Reference Guide

    - by pinaldave
    I recently came across a very interesting white paper written for Microsoft by Solid Quality Mentors. A successful upgrade to SQL Server 2008 R2 should be smooth and trouble-free. To make that smooth transition, you must plan sufficiently for the upgrade and match the plan to the complexity of your database application. Otherwise, you risk costly and stressful errors and upgrade problems. The SQL Server 2008 R2 Upgrade Technical Reference Guide is one of the best and most comprehensive reference guides I have seen on the subject of upgrading to SQL Server 2008 R2 - it discusses the many upgrade subjects one would want to see covered. You can find the reasons to upgrade here: Why upgrade to SQL Server 2008 R2. The white paper itself is here: SQL Server 2008 R2 Upgrade Guide. Here is the quick list of contents of the white paper:
    1. Upgrade Planning and Deployment
    2. Management and Development Tools
    3. Relational Databases
    4. High Availability
    5. Database Security
    6. Full-Text Search
    7. Service Broker
    8. Transact-SQL Queries
    9. Notification Services
    10. SQL Server Express
    11. Analysis Services
    12. Data Mining
    13. Integration Services
    14. Reporting Services
    15. Other Microsoft Applications and Platforms
    Appendix 1: Version and Edition Upgrade Paths
    Appendix 2: Upgrade Planning Deployment and Tasks Checklist
    This white paper is indeed huge, with 490 pages and 151,956 words. As I said, it is one of the most comprehensive white papers ever published on the subject. Just by reading it one can learn a lot about SQL Server.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Does not recognize usb sticks and drives

    - by Peter
    When I connect any USB stick to my ThinkPad, Ubuntu 10.10 does not recognize it - I don't see anything on the desktop. The output of "dmesg | tail -n10" gives me:
        [ 1965.696388] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 1965.884537] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 1966.072503] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 1966.260349] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 1966.506227] usb 1-1: new high speed USB device using ehci_hcd and address 9
        [ 1966.572375] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 1966.760379] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 1966.948358] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 1967.136335] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 1967.325423] hub 1-0:1.0: unable to enumerate USB device on port 1
    When connecting my USB scanner to the same port:
        [ 2008.480135] usb 1-1: new high speed USB device using ehci_hcd and address 65
        [ 2008.548389] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 2008.736786] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 2008.924379] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 2009.112348] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 2009.300443] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 2009.488536] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 2009.732180] usb 1-1: new high speed USB device using ehci_hcd and address 71
        [ 2014.796299] hub 1-0:1.0: unable to enumerate USB device on port 1
        [ 2018.000128] usb 2-1: new full speed USB device using uhci_hcd and address 3
    And Ubuntu 10.10 recognizes that scanner. So: what can I do to see my USB stick? BTW: on my other ThinkPad, running Fedora 14, it works perfectly... Cheers -Peter

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction
    More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.
    Hive Actions: Prepping for Pig
    In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie.
    I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.
        CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column1, column2)
        PARTITIONED BY (yr string)
        STORED AS ...
        LOCATION '/user/oracle/weather/historic';
    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.
        ALTER TABLE historic_weather ADD IF NOT EXISTS
        PARTITION (yr='2010') LOCATION '/user/oracle/weather/historic/yr=2011';

        INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
        SELECT w.stn, w.wban, w.weather_year, w.weather_month,
               w.weather_day, w.temp, w.dewp, w.weather
        FROM (
          FROM historic_weather
          SELECT TRANSFORM(...)
          USING '/path/to/hive/filters/ncdc_parser.py'
          AS stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
        ) w;
    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql.
    Starting Our Workflow
    Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph.
    Coordinator jobs can take all the same actions of workflow jobs, but they can be started automatically, either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point:
        <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
          <start to="ParseNCDCData"/>
          <end name="end"/>
        </workflow-app>
    To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.
        <action name="ParseNCDCData">
          <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <configuration>
              <property>
                <name>oozie.hive.defaults</name>
                <value>/user/oracle/weather_ooze/hive-default.xml</value>
              </property>
            </configuration>
            <script>ncdc_parse.hql</script>
          </hive>
          <ok to="WeatherMan"/>
          <error to="end"/>
        </action>
    There are a couple of things to note here:
    I have to give the FQDN (or IP) and port of my JobTracker and NameNode.
    I have to include a hive-default.xml file.
    I have to include a script file.
    The hive-default.xml and script file must be stored in HDFS.
    That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-defaults.xml files on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain:
    workflow.xml
    hive-defaults.xml (make sure this file contains your metastore connection data)
    ncdc_parse.hql
    Adding Pig to the Ooze
    Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:
        <action name="WeatherMan">
          <pig>
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <script>weather_train.pig</script>
          </pig>
          <ok to="end"/>
          <error to="end"/>
        </action>
    Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset - but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding jars to the distributed cache from Oozie's Pig Cookbook.
    Making the Workflow Work
    We've got a workflow defined and have collected all the components we'll need to run.
    But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:
        nameNode=hdfs://localhost:8020
        jobTracker=localhost:8021
        queueName=default
        weatherRoot=weather_ooze
        mapreduce.jobtracker.kerberos.principal=foo
        dfs.namenode.kerberos.principal=foo
        oozie.libpath=${nameNode}/user/oozie/share/lib
        oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
        outputDir=weather-ooze
    While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory.
    We're finally ready to submit our job! After all that work we only need to do a few more things:
    Validate our workflow.xml
    Copy our working directory to HDFS
    Submit our job to the Oozie server
    Run our workflow
    Let's do them in order. First validate the workflow:
        oozie validate workflow.xml
    Next, copy the working directory up to HDFS:
        hadoop fs -put working_dir /user/oracle/working_dir
    Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument.
        oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit
    We've submitted the job, but we don't see any activity on the JobTracker. All we got was this funny bit of output:
        14-20120525161321-oozie-oracle
    This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job.
        oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle
    Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run". This will prep and run the workflow immediately.
    Takeaway
    So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
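
    Once the job has been started, the same CLI can be used to watch it move through its actions; a minimal sketch (the job ID is the one returned above, and the server URL is the same placeholder used throughout):
        oozie job -oozie http://url.to.oozie.server:port_number/ -info 14-20120525161321-oozie-oracle   # status of the workflow and of each action
        oozie job -oozie http://url.to.oozie.server:port_number/ -log  14-20120525161321-oozie-oracle   # aggregated Oozie log for the job
    Both -info and -log are standard oozie CLI options, so this is just a convenient way to confirm the workflow left PREP and its Hive and Pig actions succeeded.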

    Read the article

  • Help find correct alsa model for onboard sound (alc887?) that will work with jack and have correct mixer setup

    - by Jazz
    I have a Gigabyte GA-MA74GMT-S2 motherboard. I am using JACK for sound, connected to ALSA, and I am running Ubuntu 12.04. aplay -l reports:
        card 0: SB [HDA ATI SB], device 0: ALC887 Analog [ALC887 Analog]
    The problem is that the default setup that ALSA decides to use causes stuttering and xruns no matter how generous I set the frames/period or periods/buffer etc. Also, JACK works fine if I plug in an external USB sound system and use that. My processor is an AMD Phenom X4 945, I have 8 GB of RAM, and my video card is a GeForce GTX 550 Ti, all of which should be quite capable enough. I also tried PulseAudio and that works fine, but I need to use JACK. At first I thought it might be an interrupt conflict, but I have found that adding "options snd-hda-intel model=generic" to /etc/modprobe.d/alsa-base.conf causes it to play correctly; however, the limited mixer setup lacks controls I need, so this setup isn't good enough. Still, it seems to prove it isn't a hardware conflict. I have tried many other models, such as 3stack, 6stack, auto and even basic, and they all suffer from the stuttering. I eventually found that "options snd-hda-intel model=3stack-6ch-intel" works without stuttering, and the mixer is much closer to what it needs to be. Can anyone help me get a correct and accurate model for ALSA to use? More info on the hardware that might help:
        *-multimedia
          description: Audio device
          product: SBx00 Azalia (Intel HDA)
          vendor: Hynix Semiconductor (Hyundai Electronics)
          physical id: 14.2
          bus info: pci@0000:00:14.2
          version: 00
          width: 64 bits
          clock: 33MHz
          capabilities: pm bus_master cap_list
          configuration: driver=snd_hda_intel latency=32
          resources: irq:16 memory:fe024000-fe027fff
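
    For reference, the working configuration described above amounts to one line in the module options file plus a driver reload; a minimal sketch, assuming the stock Ubuntu 12.04 paths:
        $ echo 'options snd-hda-intel model=3stack-6ch-intel' | sudo tee -a /etc/modprobe.d/alsa-base.conf
        $ sudo alsa force-reload                              # or reboot, so snd-hda-intel is reloaded with the new model
        $ cat /proc/asound/card0/codec#0 | grep -i codec      # confirm the ALC887 codec is the one being driven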

    Read the article

  • How to open a VirtualBox (.VDI) Virtual Machine

    - by [email protected]
    How to open a .VDI Virtual Machine
    Sometimes someone shares a virtual machine with us with the extension .VDI, and we wonder how to open it and what to open it with. Well, the answer is: it is a VirtualBox virtual machine. If you have not downloaded VirtualBox yet, you can do this easily; just follow this post: http://listeningoracle.blogspot.com/2010/04/que-es-virtualbox.html or http://oracleoforacle.wordpress.com/2010/04/14/ques-es-virtualbox/
    OK, now with VirtualBox installed, open it and proceed with the following:
    1. Open the Virtual Media Manager.
    2. Click on Actions -> Add, select the .VDI file, and click "Ok".
    3. A new virtual machine will be displayed (in this case, an OEL5 32GB virtual machine is available).
    4. This step is important. Once you have opened the settings, under the General option click the advanced settings. Here you must change the default directory for saving your snapshots; my recommendation is to set it to the same directory where the .VDI file is. Otherwise you can end up with the virtual machine and its snapshots in different paths.
    5. Now click on System, and proceed to assign the correct memory and define the processors for the virtual machine. Note: enable "Enable IO APIC" if you are planning to assign more than one CPU to the virtual machine.
    6. Associate the storage disk with the virtual machine. The disk must be selected as IDE Primary Master.
    7. You can verify the other options, but with these changes you will be able to start the VM. Note: sometimes the VM owner may share some instructions; if so, follow them.
    8. Click Ok, push the Start button, and enjoy your virtual machine.
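
    If you prefer the command line, the same attach-and-configure steps can be scripted with VBoxManage; a minimal sketch (the VM name, .VDI path, memory size, and CPU count are placeholders, not values from the original post):
        $ VBoxManage createvm --name "ImportedVM" --register
        $ VBoxManage modifyvm "ImportedVM" --memory 2048 --cpus 2 --ioapic on   # --ioapic on is needed when assigning more than one CPU
        $ VBoxManage storagectl "ImportedVM" --name "IDE" --add ide
        $ VBoxManage storageattach "ImportedVM" --storagectl "IDE" --port 0 --device 0 --type hdd --medium /path/to/machine.vdi
        $ VBoxManage startvm "ImportedVM"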

    Read the article

  • DENY select on sys.dm_db_index_physical_stats

    - by steveh99999
    Technorati Tags: security, DMV, permission, sys.dm_db_index_physical_stats
    I recently saw an interesting blog article by Paul Randal about the performance overhead of querying sys.dm_db_index_physical_stats. So I was thinking: would it be possible to let non-sysadmin users query DMVs on a SQL Server but stop them querying this I/O-intensive DMV? Yes it is; here's how...
    1. Create a new login for test purposes, with permissions to access the AdventureWorks database only:
        CREATE LOGIN [test] WITH PASSWORD='xxxx', DEFAULT_DATABASE=[AdventureWorks]
        GO
        USE [AdventureWorks]
        GO
        CREATE USER [test] FOR LOGIN [test] WITH DEFAULT_SCHEMA=[dbo]
        GO
    2. Log in as user test and issue the command:
        SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),NULL,NULL,NULL,'DETAILED')
    This gets the error:
        Msg 297, Level 16, State 12, Line 1
        The user does not have permission to perform this action.
    3. As a sysadmin, issue the command:
        USE AdventureWorks
        GRANT VIEW DATABASE STATE TO [test]
    or, if all databases can be queried via DMVs:
        GRANT VIEW SERVER STATE TO [test]
    4. Try again as user test to issue the command:
        SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),NULL,NULL,NULL,'DETAILED')
    This now produces valid results from the DMV.
    5. Now create the test user in the master database, public role only:
        USE master
        CREATE USER [test] FOR LOGIN [test]
    6. Issue the command:
        USE master
        DENY SELECT ON sys.dm_db_index_physical_stats TO [test]
    7. Now go back to AdventureWorks using the test login and try:
        SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),NULL,NULL,NULL,'DETAILED')
    This now gets the error:
        Msg 229, Level 14, State 5, Line 1
        The SELECT permission was denied on the object 'dm_db_index_physical_stats', database 'mssqlsystemresource', schema 'sys'.
    but the user is still able to query all other, non-I/O-intensive DMVs. If the user attempts to view the index physical stats via a built-in Management Studio report - see the recent blog post by Pinal Dave - they get an error also.

    Read the article

  • How to switch sound-drivers, and to which? [AMD] Hudson Azalia Controller

    - by Anders Martini
    System Settings / Sound does not open; it freezes and I have to force close it. The speaker symbol with the volume control does not open its drop-down menu, and there is no sound. Many people have problems with Hudson Azalia in Ubuntu, but I found no working solution. I don't really understand much of this, but here are some more details:
        aplay -l :
        **** List of PLAYBACK Hardware Devices ****
    (after running this one, it starts some kind of process that doesn't produce any results and doesn't stop; the terminal has to be shut down)
        lspci -vnn | grep -iA5 audio:
        00:01.1 Audio device [0403]: Advanced Micro Devices [AMD] nee ATI Device [1002:9902]
          Subsystem: Hewlett-Packard Company Device [103c:184c]
          Flags: bus master, fast devsel, latency 0, IRQ 53
          Memory at f0444000 (32-bit, non-prefetchable) [size=16K]
          Capabilities: <access denied>
          Kernel driver in use: snd_hda_intel
        --
        00:14.2 Audio device [0403]: Advanced Micro Devices [AMD] Hudson Azalia Controller [1022:780d] (rev 01)
          Subsystem: Hewlett-Packard Company Device [103c:184c]
          Flags: bus master, slow devsel, latency 32, IRQ 54
          Memory at f0440000 (64-bit, non-prefetchable) [size=16K]
          Capabilities: <access denied>
          Kernel driver in use: snd_hda_intel
    It seems to me that I'm currently running the Intel HDA driver on my AMD Hudson Azalia sound card. I can't see what drivers this sound card uses. Do I need any additional drivers for my sound card, and where would I find them?
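
    Note that snd_hda_intel is the normal kernel driver for any Intel HDA-compatible controller, including AMD's Hudson Azalia, so a separate AMD driver is not required. One way to see which codec chips sit behind the two controllers, and therefore which ALSA model= hints might be worth trying, is to read them straight from /proc; a minimal sketch:
        $ cat /proc/asound/card*/codec* | grep -i codec   # codec name behind each HDA controller (00:01.1 is typically the APU's HDMI audio)
        $ aplay -l                                        # map ALSA card/device numbers to those codecs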

    Read the article

  • Fix corrupt NTFS partition without Windows

    - by Capt.Nemo
    My NTFS partition has gotten corrupt somehow (it's a relic from the days when I had Windows installed). I'm putting the debug output of fdisk and blkid here. At the same time, any OS is unable to mount my root partition, which is located next to my NTFS partition. I'm not sure if this has anything to do with it, though. I get the following error while trying to mount my root partition (sda5):
        mount: wrong fs type, bad option, bad superblock on /dev/sda5,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so
        ubuntu@ubuntu:~$ dmesg | tail
        [ 1019.726530] Descriptor sense data with sense descriptors (in hex):
        [ 1019.726533] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
        [ 1019.726551] 1a 3e ed 92
        [ 1019.726558] sd 0:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed
        [ 1019.726568] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 1a 3e ed 40 00 01 00 00
        [ 1019.726584] end_request: I/O error, dev sda, sector 440331666
        [ 1019.726602] JBD: Failed to read block at offset 462
        [ 1019.726609] ata1: EH complete
        [ 1019.726612] JBD: recovery failed
        [ 1019.726617] EXT4-fs (sda5): error loading journal
    When I open GParted (using a live CD), I get an exclamation mark next to my NTFS drive. Is there a way to run chkdsk without using Windows? My attempt to run fsck results in the following:
        ubuntu@ubuntu:~$ sudo fsck /dev/sda
        fsck from util-linux-ng 2.17.2
        e2fsck 1.41.14 (22-Dec-2010)
        fsck.ext2: Superblock invalid, trying backup blocks...
        fsck.ext2: Bad magic number in super-block while trying to open /dev/sda
        The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>
    Update: I was able to fix the NTFS partition by running chkdsk off HBCD, but it seems that the superblock problem still remains.
    Update 2: Fixed the superblock issue using e2fsck -c /dev/sda5
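
    For anyone hitting the same pair of problems, both sides can also be handled entirely from a Linux live session; a minimal sketch using placeholder partition names (adjust to your own layout, and note that the "Unrecovered read error" lines above point to a failing disk, so back up first):
        $ sudo ntfsfix /dev/sda2                          # /dev/sda2 is a placeholder for the NTFS partition; clears the dirty flag and schedules a chkdsk
        $ sudo dumpe2fs /dev/sda5 | grep -i superblock    # list primary and backup superblock locations on the ext4 partition
        $ sudo e2fsck -b 32768 /dev/sda5                  # retry the check from a backup superblock (mke2fs -n /dev/sda5 also prints their locations without writing)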

    Read the article

  • MSDN Subscriber Benefits

    - by kaleidoscope
    Benefits by offer (Introductory Windows Azure Platform MSDN Premium offer / Ongoing MSDN benefit for Visual Studio Ultimate with MSDN / Visual Studio Premium with MSDN / Visual Studio Professional with MSDN):
    Windows Azure compute hours per month: 750 / 250 / 100 / 50
    Storage: 10 GB / 7.5 GB / 5 GB / 3 GB
    Transactions per month: 1,000,000 / 750,000 / 500,000 / 300,000
    AppFabric Service Bus messages per month: 1,000,000 / 1,000,000 / 500,000 / 300,000
    SQL Azure Web Edition (1 GB databases): 3 / 3 / 2 / 1
    Data transfers per month, Europe and North America: 7 GB in / 14 GB out, 5 GB in / 10 GB out, 3 GB in / 6 GB out, 2 GB in / 4 GB out
    Data transfers per month, Asia Pacific: 2.5 GB in / 5 GB out, 2 GB in / 4 GB out, 1 GB in / 2 GB out, 0.5 GB in / 1 GB out
    Available for sign-up: January 4, 2010* for the introductory offer; after completion of your 8-month introductory Windows Azure benefit for the ongoing offers
    Duration of benefit: 8 months for the introductory offer; while the MSDN subscription remains active for the ongoing offers
    Subscription levels receiving the benefit**: MSDN Premium & BizSpark (introductory); Visual Studio Ultimate with MSDN & BizSpark, Visual Studio Premium with MSDN, Visual Studio Professional with MSDN (ongoing)
    Estimated retail value: $1038 (8 months) / $812 per year / $436 per year / $223 per year
    This introductory offer will last for 8 months from the time you sign up. After that, you'll cancel your introductory account and sign up for the ongoing MSDN benefit based on your subscription level. The easiest way to cancel your introductory account is to set it to not "auto-renew". Think of "compute" as an instance of your application running in the cloud. So with 750 hours per month, you can keep a single instance running non-stop all month long. Or run 2 compute instances for two weeks a month. Or 4 for a week apiece.
    Lokesh, M

    Read the article

  • Problem with SNMP and MIBs

    - by jap1968
    I am installing Zabbix to monitor some devices via SNMP from a machine running Ubuntu 12.04 server. There is a problem with the MIB definitions, since the snmp commands do not properly translate some of the MIBs. I have already installed the "snmp-mibs-downloader" package, so the files containing the MIB descriptions are properly installed. The MIBs are only translated to obtain the numeric OID (so the MIB files are accessible to the snmp commands), but the results returned by the snmpget command are not translated back into the symbolic name. The Zabbix templates that I am using expect the translated key (SNMPv2-MIB::sysUpTime.0), so the current results are not recognised and are ignored. Test case:
        $ snmptranslate -On SNMPv2-MIB::sysUpTime.0
        .1.3.6.1.2.1.1.3.0
        $ snmpget -v 2c -c public 192.168.1.1 1.3.6.1.2.1.1.3.0
        iso.3.6.1.2.1.1.3.0 = Timeticks: (2911822510) 337 days, 0:23:45.10
    On another machine (running a very old Red Hat based distribution), the snmp commands perform both the direct and the reverse translation, as expected:
        # snmptranslate -On SNMPv2-MIB::sysUpTime.0
        .1.3.6.1.2.1.1.3.0
        # snmpget -v 2c -c public 192.168.1.1 1.3.6.1.2.1.1.3.0
        SNMPv2-MIB::sysUpTime.0 = Timeticks: (2911819485) 337 days, 0:23:14.85
    What is the problem on my Ubuntu box? Is there something I am missing?
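
    On Ubuntu the usual culprit is that /etc/snmp/snmp.conf ships with a directive that deliberately disables MIB loading, and it stays in place even after snmp-mibs-downloader has put the MIB files on disk; a minimal sketch of the commonly suggested fix (stock Ubuntu 12.04 paths assumed):
        $ grep mibs /etc/snmp/snmp.conf                              # the default file contains the single directive "mibs :"
        $ sudo sed -i 's/^mibs :/# mibs :/' /etc/snmp/snmp.conf      # comment it out so all installed MIBs are loaded
        $ snmpget -v 2c -c public 192.168.1.1 1.3.6.1.2.1.1.3.0      # should now come back as SNMPv2-MIB::sysUpTime.0 = ...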

    Read the article

  • One eye on my dinner and one eye on SQL server

    - by fatherjack
    LiveJournal Tags: RedGate, Work Life Balance, Tips and Tricks, SQL Server
    This is somewhere between a Tweet and a proper blog article - would that be a Bleet? Anyway, I was at a local restaurant yesterday, and after placing my order I was thinking about having to get home and log in to check some SQL Servers. Then the thought came to me that, as we were near civilisation, there was likely to be a 3G signal that might actually make using the web browser on my phone bearable. It was surprisingly fast on my HTC Desire, almost as good as Wi-Fi. RedGate SQL Monitor works fine in the default HTC browser, and here is the proof: me checking the servers while I am waiting for the meal to arrive. Everything checked out OK, so I had the evening free from SQL Server. You can get a free 14-day full trial of SQL Monitor from RedGate here or find out more about it at The Future of Monitoring. Disclosure: I am a friend of RedGate and as such regularly make positive comments about their products. I don't get paid for it, but I do get free licenses for testing and reviewing purposes.

    Read the article

  • How to improve battery life on Samsung 13.3” Series 7 Ultra (NP730U3E-S01AU)?

    - by beam022
    I recently bought a Samsung Series 7 Ultra ultrabook and decided to change the OS from the originally installed Windows 8 to Ubuntu 14.04 LTS. However, it's hard not to notice a great decrease in battery life: on the pre-installed Windows 8 the battery would last for about 6 hours, while on Ubuntu it's almost empty after 2 hours of the same kind of work (wi-fi, web, VLC, Spotify, IntelliJ IDEA). I'm not here to say that Ubuntu's battery performance is worse than Windows, but to ask for suggestions on how to improve the situation (2 hours of work is pretty poor battery life). Can you recommend some sources, applications or tips/tricks that would improve battery life on my ultrabook? I really like the Ubuntu experience, but this makes my machine much less reliable. I suspect that the graphics card might be one of the issues here. The tech specs of the ultrabook:
    Processor: Intel Core i5 Processor 3337U (1.80GHz, 3MB L3 Cache)
    Chipset: Intel HM76
    Graphics: AMD Radeon HD 8570M with 1GB gDDR3 graphics memory (PowerExpress) and Intel HD Graphics 4000
    Display: 13.3" SuperBright+ 350nit FHD LED Display (1920 x 1080), Anti-Reflective
    Memory: 10GB DDR3 system memory at 1,600MHz
    Hard drive: 128GB Solid-state Drive
    More information is on the official page. If it's helpful to provide additional info, I'm happy to do it; just let me know what you need. Thank you.
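
    The usual starting points on 14.04 are a laptop power-management daemon plus powertop's tunables, and on this particular machine the hybrid PowerExpress graphics (the Radeon staying powered up alongside the Intel HD 4000) is a likely contributor. A minimal sketch of the common advice - the PPA and package names are the widely used ones, verify them before relying on this:
        $ sudo add-apt-repository ppa:linrunner/tlp
        $ sudo apt-get update
        $ sudo apt-get install tlp tlp-rdw
        $ sudo tlp start                      # defaults are sensible; configuration lives in /etc/default/tlp
        $ sudo apt-get install powertop
        $ sudo powertop                       # the Tunables tab shows devices still stuck in a "Bad" power state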

    Read the article

  • AMR's 2010 Supply Chain Top 25 Report: Early Predictions

    - by [email protected]
    On April 6th, AMR's Debra Hoffman and Kevin O'Marah presented their annual 'Top 25 Supply Chain' predictions. For supply chain professionals it was a 'must-hear' event, especially with the new focus on both operational excellence and innovation excellence. Most people think of R&D as the primary driver for innovation, but in today's 'new normal' firms need to constantly review, evaluate and update their workflow procedures and business processes to maintain a sharp blade on the leading edge. Having the right tools in place to monitor supply chain effectiveness becomes paramount as firms compete in the global marketplace. Organizations need user-friendly, role-based dashboards with early alerts to contextualize activities and surface the best options for managers to make better and more informed decisions. The 2009 winners were: 1. Apple 2. Dell 3. P&G 4. IBM 5. Cisco 6. Nokia 7. Walmart 8. Samsung 9. PepsiCo 10. Toyota 11. Schlumberger 12. J&J 13. Coke 14. Nike 15. Tesco 16. Disney 17. HP 18. TI 19. Lockheed Martin 20. Colgate 21. Best Buy 22. Unilever 23. Publix 24. Sony Ericsson 25. Intel

    Read the article

  • configure: error: Could not find libavformat - part of ffmpeg after installing libav from source

    - by Patryk
    I tried to install minidlna on my machine, but it appears it's no longer in the repositories, so I decided to compile it myself. After downloading version 1.1.3 I tried to compile, but I needed the libav headers, which I couldn't install via apt - no idea why, there have been a lot of broken packages:
        $ sudo apt-get install libavcodec-dev
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         libavcodec-dev : Depends: libavutil-dev (= 6:9.13-0ubuntu0.14.04.1) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.
    Anyway, I installed libav 9.13 from source and now I come to this point:
        $ ./configure
        ...
        checking for av_open_input_file in -lavformat... no
        checking for avformat_open_input in -lavformat... no
        checking for av_open_input_file in -lavformat... no
        checking for avformat_open_input in -lavformat... no
        configure: error: Could not find libavformat - part of ffmpeg
    but I have installed that! Even in the install log I can see:
        ...
        INSTALL libavdevice/libavdevice.a
        INSTALL libavfilter/libavfilter.a
        INSTALL libavformat/libavformat.a
        INSTALL libavresample/libavresample.a
        INSTALL libavcodec/libavcodec.a
        INSTALL libswscale/libswscale.a
        INSTALL libavutil/libavutil.a
        INSTALL libavdevice/avdevice.h
        INSTALL libavdevice/version.h
        INSTALL libavdevice/libavdevice.pc
        INSTALL libavfilter/avfilter.h
        INSTALL libavfilter/avfiltergraph.h
        INSTALL libavfilter/buffersink.h
        INSTALL libavfilter/buffersrc.h
        INSTALL libavfilter/version.h
        INSTALL libavfilter/libavfilter.pc
        INSTALL libavformat/avformat.h
        ....
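
    Since libav was built from source it most likely landed under /usr/local, where minidlna's configure isn't looking by default; a minimal sketch of pointing the build at it (the /usr/local prefix is an assumption - use whatever --prefix was passed to libav's configure):
        $ sudo ldconfig                                                      # refresh the linker cache after libav's "make install"
        $ export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH   # in case configure probes via pkg-config
        $ ./configure CPPFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib"
        $ make && sudo make install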

    Read the article
