Search Results

Search found 15879 results on 636 pages for 'team building'.


  • New Windows Phone 7 Developer Guidance released for building line of business applications

    - by Eric Nelson
    Several partners have been asking about guidance on combining Windows Phone 7 applications with Windows Azure. The patterns and practices team recently released new guidance on Windows Phone 7. This is a continuation of the Windows Azure Guidance. It takes the survey application and makes a version for Windows Phone 7. The guide includes the following topics: Prism for Windows Phone 7, Reactive Extensions, WCF Services on top of Windows Azure, Push Notifications, Camera & Voice, Panorama, and much more. Well worth a read if you are an ISV looking at taking Line of Business applications to Windows Phone 7. Related Links: We have created Microsoft Platform Ready to help software houses develop applications for Windows Azure and On-Premise. Check it out and the goodies it can deliver for little effort.

    Read the article

  • Building lirc package from source with patches

    - by joystick
    I'd like to build the latest lirc package for 12.04 with two patches from http://bit.ly/17779VW to make the USB Infrared Toy v2 work. Running sudo apt-build source lirc gave me the following in /var/cache/apt-build/build: drwxr-xr-x 10 root root 4096 Nov 5 07:07 lirc-0.9.0 -rw-r--r-- 1 root root 113909 May 5 2011 lirc_0.9.0-0ubuntu1.debian.tar.gz -rw-r--r-- 1 root root 1553 May 5 2011 lirc_0.9.0-0ubuntu1.dsc -rw-r--r-- 1 root root 857286 May 5 2011 lirc_0.9.0.orig.tar.bz2 Running sudo apt-build build-source lirc then gave me "Some error occured building package", which is not really informative. I have successfully built a patched lirc from source, but now I would like to get a deb package. Where can I find the details behind this "some error"? Thank you, Alexei

    Read the article

  • C++ Building Static Library Project with a Folder Structure

    - by Jake
    I'm working on some static libraries using Visual Studio 2012, and after building I copy the .lib and .h files to their respective directories to match a desired hierarchy such as: drive:/libraries/libname/includes/libname/framework, drive:/libraries/libname/includes/libname/utilities, drive:/libraries/libname/lib/... etc. I'm thinking of something similar to the Boost folder layout. I have been doing this manually so far. My library solution contains several projects, and when I update and recompile I simply recopy the files to where they need to be. Is there a simpler way to do this? Perhaps a way to compile with rules, per project, for where that project's .h and .lib files should go?

    Read the article

  • General Session: Building and Managing a Private Oracle Java and Middleware Cloud

    - by Ruma Sanyal
    If you are developing, managing, or planning enterprise Java and business application deployments on Oracle WebLogic Server with Oracle Coherence or Oracle GlassFish Server, or if you continue to have deployments of Oracle Application Server, this session will give you the roadmap of how Oracle is evolving this infrastructure to be the next-generation application foundation for its customers to build on in a private cloud setting. In the session, Ajay Patel, VP of Product Management, and the product management team share Oracle's vision, product plans, and roadmap for this server infrastructure and how it will be used in the rapidly maturing cloud infrastructure space. The presentation will help you make key decisions about running your enterprise applications on Oracle's enterprise Java server foundation. For more information about this and other Cloud Application Foundation sessions, review the Cloud Application Foundation Focus On document. Details: Monday, 10/1; 4:45-5:45 pm; Moscone West, Room 3014

    Read the article

  • Free Training - Building Silverlight Business Applications

    We recently released a new free Silverlight 4 training kit that walks you through building business applications with Silverlight 4. You can also download the entire offline version of the kit here. You can use the 8 modules, 25 videos, and several hands-on labs online or offline from links on the Channel 9 site. I've included a breakdown and links to all of the content here in this post. The key to this training material is not the features it covers (though it covers a variety of topics including...

    Read the article

  • Building a Roadmap for an IAM Platform

    - by B Shashikumar
    Identity Management is no longer a departmental solution; it has become a strategic part of every organization's security posture. Enterprises require a forward-thinking Identity Management strategy. In our previous blog post on "The Oracle Platform Approach", we discussed a recent study by Aberdeen which showed that organizations taking a platform approach can reduce cost by as much as 48% and have 35% fewer audit deficiencies. So how does an organization get started with an Identity Management (IAM) platform? What are the components of such a platform, and how can an organization continuously evolve it for better ROI and IT agility? What are some of the best practices to begin an IAM deployment? To find out the answers and to learn how to build a comprehensive IAM roadmap, check out the presentation "Platform Approach Series: Building a Roadmap" from OracleIDM, which discusses how Oracle can provide a quick start to your IAM program.

    Read the article

  • Simplified Tips on Building a Website Online

    It is not easy to create a wonderful website. There are a lot of things to take into account in order to put up a good site. The first thing you need to do is assess your target market. This is important because it will be the basis for the content that you put on your site. You must come up with great content that satisfies the interests, wants, and desires of your audience. With that in mind, this article discusses how building a website online can give you greater comfort and convenience.

    Read the article

  • What is a proper way of building Winform apps with multiple "screens"

    - by CurtisHx
    What's a proper way of building a Winform app that has multiple 'screens'? For example, I'm trying to write a small backup program (mainly for giggles), and I've been dumping controls and containers onto the form. I'm using panels and group boxes to separate out the different screens (e.g., I'm using a panel to hold all of the controls for the "Settings" window, and another panel to show all the current backups that have been set up). Well, my form.cs file ballooned into a massive amount of code, and I feel like I'm doing something wrong. I can hardly find anything in the file, and I'm ready to start over. This project was just for me to expand my knowledge of C# and .NET, so starting a new project is not a huge deal.
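
    (Not the thread's answer, just a hedged sketch of one common way to keep form.cs small: give each "screen" its own UserControl in its own file and let the main form only swap them in and out of a host panel. The names SettingsScreen, BackupListScreen, and ShowScreen below are hypothetical.)

      using System;
      using System.Windows.Forms;

      // Each "screen" lives in its own UserControl (and its own source file),
      // so the main form stays small and only handles navigation.
      public class SettingsScreen : UserControl { /* settings controls go here */ }
      public class BackupListScreen : UserControl { /* backup list controls go here */ }

      public class MainForm : Form
      {
          private readonly Panel host = new Panel { Dock = DockStyle.Fill };

          public MainForm()
          {
              Controls.Add(host);
              ShowScreen(new BackupListScreen());   // default screen
          }

          // Swap whichever screen is currently shown inside the host panel.
          private void ShowScreen(UserControl screen)
          {
              host.Controls.Clear();
              screen.Dock = DockStyle.Fill;
              host.Controls.Add(screen);
          }
      }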

    Read the article

  • What is the proper SEO way to add and remove links to our site and sitemap?

    - by ElHaix
    As content fluctuates on our site, we will obviously add the titles of the new pages to our sitemap and link list for search engine indexing. Over time, certain links will become less relevant, and we would like to know how to avoid them being crawled. The links themselves may not be removed - but we don't want to dilute the link list with less relevant links as time passes. I'm guessing the status code of the page would change - to what? Should the links also be removed from the sitemap?

    Read the article

  • The Best Ways For Link Building For New Websites

    Creating a website can be a very interesting and fun activity, but you also need to build links to make your website more successful. Building links helps to make the website more visible and to increase traffic flow. One of the best ways to do this is to choose simple, relevant keywords that are easily remembered, so that you gain new readers frequently and can earn money from affiliate companies. You should also submit your articles to web directories, as this can help you get hundreds of backlinks, which can be very good for your website.

    Read the article

  • GDD-BR 2010 [2C] Building for a Faster Web

    GDD-BR 2010 [2C] Building for a Faster Web. Speaker: Eric Bidelman. Track: Chrome and HTML5. Time: C [12:05 - 12:50]. Room: 2. Level: 151. Why should a web app be less performant than a native app? This session will focus on creating the next generation of web applications. We'll look at HTML5 features that increase app performance, Chrome Developer Tools to speed development, Native Client for running native C++ in the browser, and Google Chrome Frame to bring these awesome features to users with older browsers. (Video from GoogleDevelopers, 37:32.)

    Read the article

  • We're Building Both Our Client Base & Our Partners!

    - by TheSilverlightGroup
    Within a week or so, all our previous work has begun to pay off! We are getting client offers that are interesting, are for huge companies, and utilize Silverlight 4! In addition, we are building our database of partners. The Silverlight Group also now has a toll-free phone number, 1-888-863-6989, so please call us for your most challenging Silverlight project as well as training, Silverlight tools, etc. The Silverlight Group is carving its mark in the Universe!

    Read the article

  • "Google files": Building a web interface to find/ack/grep

    - by user27915816
    I am working on a project where we would like to have build a web interface that gives the user the ability to "Google" files in a directory in a remote machine. For example, the user would type a string in a box, and then the system would find all files that contain that string and present them in the browser. The system would then give the user the ability click on any of the files to open them/display them in the browser. We want to avoid reinventing the wheel if possible, but don't really know where to start (none of us in the team have much experience building websites). What software packages, libraries or tools exist that can help us get this done?
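
    (Not an answer from the thread, just a rough C# sketch of the core "find files containing a string" step, assuming the remote directory is reachable as a mounted path; at any real scale this would usually be handed off to an indexer such as Lucene/Solr. FileSearcher and FindFiles are hypothetical names.)

      using System;
      using System.Collections.Generic;
      using System.IO;
      using System.Linq;

      static class FileSearcher
      {
          // Return the paths of files under root whose text contains the search term.
          public static IEnumerable<string> FindFiles(string root, string term)
          {
              return Directory
                  .EnumerateFiles(root, "*", SearchOption.AllDirectories)
                  .Where(path =>
                  {
                      try
                      {
                          return File.ReadAllText(path)
                                     .IndexOf(term, StringComparison.OrdinalIgnoreCase) >= 0;
                      }
                      catch (Exception) { return false; }   // skip unreadable or locked files
                  });
          }

          // A web page would render each hit as a link the user can click to view the file.
          static void Main()
          {
              foreach (var hit in FindFiles(@"\\remote-machine\share", "search term"))
                  Console.WriteLine(hit);
          }
      }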

    Read the article

  • Bad search result due to strange linked domains

    - by VDesign
    I have a website which is not scoring well in Google's search results. I use Majestic SEO and Open Site Explorer to get a view of my link profile. I now see various backlink domains, some of them already removed, that contain sexual content or other unrelated content linking to my domain. How much influence do these strange linked domains have on my search results, even if some of them were removed a couple of months ago? I have already disavowed the explicit domains using the tool that Google provides.

    Read the article

  • Building Website with JAX-RS (Jersey)

    - by 0xMG
    Is it discouraged or uncommon to build websites (not web services!) using Jersey or any other JAX-RS implementation? I didn't find any guide/tutorial/article regarding that. At first impression, it seems to me that building a website using Jersey (with JSPs as Viewables) is easier and more efficient than using Servlets & JSPs. If anyone has done it before, I would be pleased to get tips, do's & don'ts, best practices, etc., and maybe a good tutorial.

    Read the article

  • Building KPIs to monitor your business: It's not really about the technology

    When I have discussions with people about Business Intelligence, one of the questions that inevitably comes up is about building KPIs and how to accomplish that. At a technical level the concept of a KPI is very simple, almost too simple, in that it is like the tip of an iceberg floating above the water. The key to that iceberg is not really the tip, but the mass of the iceberg that is hidden beneath the surface, upon which the tip sits. The analogy of the iceberg is not meant to indicate that the foundation of the KPI is overly difficult or complex. The disparity in size is meant to indicate that the larger thing that needs to be defined is not the technical tip, but the underlying business definition of what the KPI means. From a technical perspective the KPI consists primarily of the following items: Actual Value: the actual data point that is being measured. An example would be something like the amount of sales. Target Value: the target goal for the KPI. This is a number that can be measured against the Actual Value. An example would be $10,000 in monthly sales. Target Indicator Range: the definition of ranges that determine what type of indicator the user will see when comparing the Actual Value to the Target Value. Most often this is defined by a stoplight, but it can be any indicator that shows a status to the user in a quick fashion. Typically this would be something like: Red Light = Actual Value more than 5% below target; Yellow Light = within 5% of target in either direction; Green Light = more than 5% higher than the Target Value. Status\Trend Indicator: an optional attribute of a KPI that is typically used to show some kind of trend. The vast majority of these indicators are used to show some type of progress against a previous period. As an example, the status indicator might be used to show how the monthly sales compare to last month. With this type of indicator there needs to be not only a definition of what the ranges are for your status indicator, but also what value the number needs to be compared against. So now that we have an idea of what data points a KPI consists of from a technical perspective, let's talk a bit about tools. As you can see, technically there is not a whole lot to them, and the choice of technology is not as important as the definition of the KPIs, which we will get to in a minute. There are many different types of tools in the Microsoft BI stack that you can use to expose your KPI to the business. These include PerformancePoint, SharePoint, Excel, and SQL Reporting Services. There are pluses and minuses to each technology, and the right technology is based a lot on your goals and how you want to deliver the information to the users. Additionally, there are other non-Microsoft tools that can be used to expose KPI indicators to your business users. Regardless of the technology used as your front end, the heavy lifting of a KPI is in the business definition of the values and benchmarks for that KPI. The discussion about KPIs is very dependent on the history of an organization and how much it has been exposed to the attributes of a KPI. Oftentimes, when discussing KPIs with a business contact who has not been exposed to KPIs, the discussion tends to also be a session educating the business user about what a KPI is and what goes into its definition. The majority of the time the business user has an idea of what their actual values are and has been tracking those numbers for some time, generally in Excel and all manually.
So they will know the amount of sales last month along with sales two years ago in the same month. Where the conversation tends to get stuck is when you start discussing what the target value should be. The actual value answers the "What" and "How much" questions. When you are talking about the Target Value you are asking the question "Is this number good or bad?" Typically, the user will know whether or not the value is good or bad, but most of the time they are not able to quantify what is good or bad. Their response is usually something like "I just know." Because they have been watching the sales quantity for years now, they can tell you that a 5% decrease in sales this month might actually be a good thing, maybe because the salespeople are all waiting until next month when the new versions come out. It can sometimes be very hard to break business people of this habit. One of the fears, generally, is that the status indicator is not subjective. Thus, in the scenario above, the business user is going to be fearful that their boss, just looking at a negative red indicator, is going to haul them out to the woodshed for a bad month. But, on the flip side, if all you are displaying is the amount of sales, only a person with knowledge of last month's sales and the target amount for this month would have any idea whether $10,000 in sales is good or not. Here is where a key point about KPIs needs to be communicated to both the business user and any user who might be viewing the results of that KPI. The KPI is just one tool that is used to report on business performance. The KPI is meant as a quick indicator of one business statistic. It is not meant to tell the entire story. It does not answer the question "Why." Its primary purpose is to objectively and quickly expose an area of the business that might warrant more review. There is always going to be the need to do further analysis on any potentially negative or neutral KPI. So, hopefully, once you have convinced your business user to come up with some target numbers and ranges for status indicators, you then need to take the next step and help them answer the "Why" question. The main question to ask here is, "Okay, you see the indicator and you need to discover why the number is what it is; where do you go?" The answer is usually a combination of sources. A sales manager might have some of the following items at their disposal: a marketing report showing a decrease in the promotional discounts for the month, a pricing report showing the reduction of prices of older models, an inventory report showing the discontinuation of a particular product line, or a memo announcing the end of a large affiliate partnership. The answers to the question "Why" are never as simple as a single indicator value. Being able to quickly get to this information is all about designing how a user accesses the KPIs and also how easily they can get to the additional information they need. This is where a dashboard mentality can come in handy. For example, the business user can have a dashboard that shows their KPIs but also has links to some of the common reports that they run regarding sales data. The user's boss may have the same KPIs on their dashboard, but instead of links to individual reports they are going to have a link to a status report, created by the user, that pulls together all the data about the KPI in a summary format the user's boss can review.
So here are some of the key things to think about when building or evaluating KPIs for your organization: Technology should not be the driving factor. KPIs are of little value without some indicator of whether a value is good, bad, or neutral. KPIs only answer the "Is this number good or bad?" question. Make sure the ability to drill into the "Why" of a KPI is close at hand and relevant to the user who is viewing the KPI. The KPI, when defined properly, is a key business tool for monitoring business performance across the enterprise in an objective and consistent manner. While the process of defining the business aspects of a KPI can sometimes feel arduous, the payoff in the end can far outweigh the costs. Some of the benefits of going through this process are a better understanding of an organization's key metrics and how they are measured, and a consistent snapshot of business performance that can be utilized across the organization. And I think that these are benefits to any organization regardless of the technology or the implementation.
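
(Illustrative only, not from the article: once the business has agreed on a target and the stoplight ranges above, the technical "tip of the iceberg" really is a few lines of code. A minimal C# sketch using the +/-5% ranges described above; the names KpiStatus and Stoplight are hypothetical.)

      using System;

      enum KpiStatus { Red, Yellow, Green }

      static class Kpi
      {
          // Compare the Actual Value to the Target Value using the +/-5%
          // stoplight ranges described above. The hard part is agreeing on
          // the target and the ranges, not computing the light.
          public static KpiStatus Stoplight(decimal actual, decimal target)
          {
              decimal variance = (actual - target) / target;    // e.g. -0.08 means 8% below target
              if (variance < -0.05m) return KpiStatus.Red;      // more than 5% below target
              if (variance > 0.05m) return KpiStatus.Green;     // more than 5% above target
              return KpiStatus.Yellow;                          // within 5% in either direction
          }

          static void Main()
          {
              // $9,200 in sales against a $10,000 monthly target is 8% below target -> Red.
              Console.WriteLine(Stoplight(9200m, 10000m));
          }
      }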

    Read the article

  • Building an OpenStack Cloud for Solaris Engineering, Part 1

    - by Dave Miner
    One of the signature features of the recently-released Solaris 11.2 is the OpenStack cloud computing platform. Over on the Solaris OpenStack blog the development team is publishing lots of details about our version of OpenStack Havana as well as some tips on specific features, and I highly recommend reading those to get a feel for how we've leveraged Solaris's features to build a top-notch cloud platform. In this and some subsequent posts I'm going to look at it from a different perspective, which is that of the enterprise administrator deploying an OpenStack cloud. But this won't be just a theoretical perspective: I've spent the past several months putting together a deployment of OpenStack for use by the Solaris engineering organization, and now that it's in production we'll share how we built it and what we've learned so far.
In the Solaris engineering organization we've long had dedicated lab systems dispersed among our various sites and a home-grown reservation tool for developers to reserve those systems; various teams also have private systems for specific testing purposes. But as a developer, it can still be difficult to find the systems you need, especially since most Solaris changes require testing on both SPARC and x86 systems before they can be integrated. We've added virtual resources over the years as well in the form of LDOMs and zones (both traditional non-global zones and the new kernel zones). Fundamentally, though, these were all still deployed in the same model: our overworked lab administrators set up pre-configured resources and we then reserve them. Sounds like pretty much every traditional IT shop, right? Which means that there's a lot of opportunity for efficiencies from greater use of virtualization and the self-service style of cloud computing. As we were well into development of OpenStack on Solaris, I was recruited to figure out how we could deploy it to provide both more (and more efficient) development and test resources for the organization as well as a test environment for Solaris OpenStack.
At this point, let's acknowledge one fact: deploying OpenStack is hard. It's a very complex piece of software that makes use of sophisticated networking features and runs as a ton of service daemons with myriad configuration files. The web UI, Horizon, doesn't often do a good job of providing detailed errors. Even the command-line clients are not as transparent as you'd like, though at least you can turn on verbose and debug messaging and often get some clues as to what to look for, though it helps if you're good at reading JSON structure dumps. I'd already learned all of this in doing a single-system Grizzly-on-Linux deployment for the development team to reference when they were getting started, so I at least came to this job with some appreciation for what I was taking on. The good news is that both we and the community have done a lot to make deployment much easier in the last year; probably the easiest approach is to download the OpenStack Unified Archive from OTN to get your hands on a single-system demonstration environment. I highly recommend getting started with something like it to get some understanding of OpenStack before you embark on a more complex deployment. For some situations, it may in fact be all you ever need. If so, you don't need to read the rest of this series of posts!
In the Solaris engineering case, we need a lot more horsepower than a single-system cloud can provide.
We need to support both SPARC and x86 VMs, and we have hundreds of developers, so we want to be able to scale to support thousands of VMs, though we're going to build to that scale over time, not immediately. We also want to be able to test both Solaris 11 updates and a release such as Solaris 12 that's under development, so that we can work out any upgrade issues before release. One thing we don't have is a requirement for extremely high availability, at least at this point. We surely don't want a lot of down time, but we can tolerate scheduled outages and brief (as in an hour or so) unscheduled ones. Thus I didn't need to spend effort on trying to get high availability everywhere.
The diagram below shows our initial deployment design. We're using six systems, most of which are x86 because we had more of those immediately available. All of those systems reside on a management VLAN and are connected with a two-way link aggregation of 1 Gb links (we don't yet have 10 Gb switching infrastructure in place, but we'll get there). A separate VLAN provides "public" (as in connected to the rest of Oracle's internal network) addresses, while we use VxLANs for the tenant networks. One system is more or less the control node, providing the MySQL database, RabbitMQ, Keystone, and the Nova API and scheduler as well as the Horizon console. We're curious how this will perform, and I anticipate eventually splitting at least the database off to another node to help simplify upgrades, but at our present scale this works.
I had a couple of systems with lots of disk space, one of which was already configured as the Automated Installation server for the lab, so it's just providing the Glance image repository for OpenStack. The other node with lots of disks provides the Cinder block storage service; we also have a ZFS Storage Appliance that will help back-end Cinder in the near future, I just haven't had time to get it configured in yet.
There's a separate system for Neutron, which is our Elastic Virtual Switch controller and handles the routing and NAT for the guests. We don't have any need for firewalling in this deployment, so we're not doing any. We presently have only two tenants defined: one for the Solaris organization that's funding this cloud, and a separate tenant for other Oracle organizations that would like to try out OpenStack on Solaris. Each tenant has one VxLAN defined initially, but we can of course add more. Right now we have just a single /24 network for the floating IPs; once we get demand up to where we need more, we'll add them.
Finally, we have started with just two compute nodes; one is an x86 system, the other is an LDOM on a SPARC T5-2. We'll be adding more when demand reaches the level where we need them, but as we're still ramping up the user base it's less work to manage fewer nodes until then.
My next post will delve into the details of building this OpenStack cloud's infrastructure, including how we're using various Solaris features such as Automated Installation, IPS packaging, SMF, and Puppet to deploy and manage the nodes. After that we'll get into the specifics of configuring and running OpenStack itself.

    Read the article

  • RAID 0 performance gains?

    - by NickAldwin
    I'm building a new computer over the summer. I'm fairly competent in computer hardware, and am thus building the computer from scratch. I have everything planned out, but I was wondering about RAID. I asked which RAID I should use earlier, but now that it's pretty clear that RAID 1 isn't really that great, I think I'll go with cloud-backup instead of disk-redundancy. However, I still face a choice: use two 1TB drives as two 1TB drives, or combine them into a RAID 0 striped array. Is there any performance gain at all? I know that if one drive dies, everything is gone, so is the performance gain worth it? I'm building a pretty advanced computer, with SLI video cards and a fast CPU, so I'm thinking RAID 0 would give me some good hard drive performance. From your experience, is RAID 0 viable?

    Read the article

  • Great Free Courses on Building HTML5 apps using ASP.NET Web API, Knockout.js and jQuery

    - by ScottGu
    Pluralsight has developed some great training courses on the new .NET 4.5 and VS 2012 release, including two fantastic courses from John Papa that cover how to build HTML5 web apps using ASP.NET Web API, Knockout and jQuery: Single Page Apps with HTML5, Web API, Knockout and jQuery Building HTML5 and JavaScript Apps with MVVM and Knockout Free 1-Month Subscription to the Courses Pluralsight is offering a special promotion that allows you to get a free 1-month subscription to watch the above courses at no cost.  There is no obligation to buy anything at the end of the offer and you don’t need to supply a credit card in order to take part in it. To get access to the course you simply follow @pluralsight and @john_papa on Twitter and then visit this page and enter your Twitter name using the form on it.  Pluralsight will then send you a private twitter message containing the access code that you can use to subscribe to the courses (and download the course exercise files).  Once you are subscribed to the course you have one month to watch the course (and you can watch it as many times as you want during the month). Pluralsight is running the promotion through Sept 18th – so sign-up now to get access.  Once you are signed up you then have a month to watch the course. Hope this helps, Scott P.S. And if you are new to Twitter you can also optionally follow me: @scottgu

    Read the article

  • Business guy building a software company [closed]

    - by Dreamer
    I am a business guy who is about to embark on a very risky journey to start my own software company. I have done sales for several software companies, and in the last 8 years I have managed to generate over $15 million in pure SaaS revenues for my employers. I think now it's time to do it for myself and see where I can take the business. I have an idea in mind which I would like to develop, and I have been speaking with several companies whom I may hire to convert that idea into a SaaS-based offering. I am scared of the following: being ripped off, as I have no technical knowledge; being overcharged; and building something and realizing the foundation was weak, not scalable, etc. Can anyone help me identify what I need to do before I sign with a software development company to start my project? What do I need to know? What is the typical cost? What is a realistic time frame? Which coding language is better? What steps can I take to prevent myself from being ripped off?

    Read the article

  • Building Web Applications with ACT and jQuery

    - by dwahlin
    My second talk at TechEd is focused on integrating ASP.NET AJAX and jQuery features into websites (if you’re interested in Silverlight you can download code/slides for that talk here). The content starts out by discussing ScriptManager features available in ASP.NET 3.5 and ASP.NET 4 and provides details on why you should consider using a Content Delivery Network (CDN).  If you’re running an external facing site then checking out the CDN features offered by Microsoft or Google is definitely recommended. The talk also goes into the process of contributing to the Ajax Control Toolkit as well as the new Ajax Minifier tool that’s available to crunch JavaScript and CSS files. The extra fun starts in the next part of the talk which details some of the work Microsoft is doing with the jQuery team to donate template, globalization and data linking code to the project. I go into jQuery templates, data linking and a new globalization option that are all being worked on. I want to thank Stephen Walther, Dave Reed and James Senior for their thoughts and contributions since some of the topics covered are pretty bleeding edge right now.The slides and sample code for the talk can be downloaded below.     Download Slides and Samples

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction
More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.
Hive Actions: Prepping for Pig
In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.
      CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2) PARTITIONED BY (yr string) STORED AS ... LOCATION '/user/oracle/weather/historic';
As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for the transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.
      ALTER TABLE historic_weather ADD IF NOT EXISTS PARTITION (yr='2010') LOCATION '/user/oracle/weather/historic/yr=2011';
      INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history' SELECT w.stn, w.wban, w.weather_year, w.weather_month, w.weather_day, w.temp, w.dewp, w.weather FROM ( FROM historic_weather SELECT TRANSFORM(...) USING '/path/to/hive/filters/ncdc_parser.py' as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather ) w;
Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql.
Starting Our Workflow
Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph.
Coordinator jobs can take all the same actions as Workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point:
      <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1"> <start to="ParseNCDCData"/> <end name="end"/> </workflow-app>
To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.
      <action name="ParseNCDCData"> <hive xmlns="uri:oozie:hive-action:0.2"> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <configuration> <property> <name>oozie.hive.defaults</name> <value>/user/oracle/weather_ooze/hive-default.xml</value> </property> </configuration> <script>ncdc_parse.hql</script> </hive> <ok to="WeatherMan"/> <error to="end"/> </action>
There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode; I have to include a hive-default.xml file; I have to include a script file; and the hive-default.xml and script file must be stored in HDFS. That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-defaults.xml files on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml, hive-defaults.xml (make sure this file contains your metastore connection data), and ncdc_parse.hql.
Adding Pig to the Ooze
Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:
      <action name="WeatherMan"> <pig> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <script>weather_train.pig</script> </pig> <ok to="end"/> <error to="end"/> </action>
Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My pig script registers the Weka Jar and a chunk of jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the pig script, because pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding Jars to the distributed cache from Oozie's Pig Cookbook.
Making the Workflow Work
We've got a workflow defined and have collected all the components we'll need to run.
But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:
      nameNode=hdfs://localhost:8020 jobTracker=localhost:8021 queueName=default weatherRoot=weather_ooze mapreduce.jobtracker.kerberos.principal=foo dfs.namenode.kerberos.principal=foo oozie.libpath=${nameNode}/user/oozie/share/lib oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot} outputDir=weather-ooze
While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of Jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory. We're finally ready to submit our job! After all that work we only need to do a few more things: validate our workflow.xml, copy our working directory to HDFS, submit our job to the Oozie server, and run our workflow. Let's do them in order. First, validate the workflow:
      oozie validate workflow.xml
Next, copy the working directory up to HDFS:
      hadoop fs -put working_dir /user/oracle/working_dir
Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument.
      oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit
We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output:
      14-20120525161321-oozie-oracle
This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job:
      oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle
Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately.
Takeaway
So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.

    Read the article

  • Building in Change: Project Construction in Asset Intensive Industries

    - by Sylvie MacKenzie, PMP
    According to a recent survey by the Economist Intelligence Unit, sponsored by Oracle, only 51% of project owners rated themselves as effective at delivering their projects to scope, budget, and schedule when confronted with change. In addition, only 43% rated themselves as effective at anticipating potential change. Even with the best processes and technology in place, change is often an unavoidable part of the construction process. How organizations respond to change can mean the difference between delays and cost overruns, and projects being completed on schedule and on budget. Implementing Enterprise Project Portfolio Management, and using a solution to help manage and automate those processes, can help asset-intensive organizations: govern project and program compliance and regulatory requirements for project success; unite project teams and stakeholders through collaboration and strong feedback methods to speed project completion; reduce the risk of cost and schedule overruns and any resulting penalties to deliver on time and on budget; effectively manage change throughout the project life cycle; and ensure sufficient capacity, utilization, and availability of people, skills, and other resources to meet commitments. The results of the recent EIU survey, sponsored by Oracle, "Building in Change: Project Construction in Asset-Intensive Industries", will be revealed in an upcoming webinar with Hart Energy / Oil & Gas Investor, featuring the Economist Intelligence Unit and Oracle, on April 11th at 1pm CST. Click here for further information or visit http://www.oilandgasinvestor.com/

    Read the article

  • Can I use Access with Visual Basic for building a database [on hold]

    - by user3413537
    I am the only programmer where I work (summer job) and I am a student with only a few years of programming experience. So I was asked to build a database, and I am very excited about this project because hopefully I can learn a lot from it. Using this database, my manager is supposed to be able to assign work (dealing with businesses) to different people within the company using an interface (all workers have a shared drive). When workers are done with the paperwork related to a business, they can check off that it's done, add comments at the bottom of the interface, and then move on to the next business. The only experience I've had with databases is some querying with SQL, and I've built GUI interfaces with Java. The information on the interface will be populated from Excel so workers know what businesses they are dealing with. I've done some research, and I believe the best way to build this would be to build a GUI using Microsoft Visual Studio (Visual Basic) first, then figure out a way to populate the interface from Excel. Also, because the data is pretty straightforward and not complicated, I will be using MS Access to store and track the data. I know this won't be easy, but for all you geniuses out there, is this on the right path? Thanks.

    Read the article

  • Building a Store Locator ASP.NET Application Using Google Maps API (Part 2)

    Last week's article, Building a Store Locator ASP.NET Application Using Google Maps API (Part 1), was the first in a multi-part article series exploring how to add store locator-type functionality to your ASP.NET website using the free Google Maps API. Part 1 started with an examination of the database used to power the store locator, which contains a single table named Stores with columns capturing the store number, its address, and its latitude and longitude coordinates. Next, we looked at using Google Maps API's geocoding service to translate a user-entered address, such as San Diego, CA or 92101, into its latitude and longitude coordinates. Knowing the coordinates of the address entered by the user, we then looked at writing a SQL query to return those stores within (roughly) 15 miles of the user-entered address. These nearby stores were then displayed in a grid, listing the store number, the distance from the address entered to each store, and the store's address. While a list of nearby stores and their distances certainly qualifies as a store locator, most store locators also include a map showing the area searched, with markers denoting the store locations. This article looks at how to use the Google Maps API, a sprinkle of JavaScript, and a pinch of server-side code to add such functionality to our store locator. Read on to learn more!
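
    (The article performs the "within roughly 15 miles" filter in SQL against the stored latitude/longitude columns; as an illustration only, not the article's actual query, here is the same great-circle (haversine) arithmetic in C#. GeoDistance and Miles are hypothetical names, and the coordinates in the example are approximate.)

      using System;

      static class GeoDistance
      {
          // Haversine formula: great-circle distance in miles between two
          // latitude/longitude points, which is what "within roughly 15 miles" measures.
          public static double Miles(double lat1, double lon1, double lat2, double lon2)
          {
              const double earthRadiusMiles = 3959.0;
              double dLat = ToRadians(lat2 - lat1);
              double dLon = ToRadians(lon2 - lon1);
              double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                         Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2)) *
                         Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
              return earthRadiusMiles * 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
          }

          private static double ToRadians(double degrees) { return degrees * Math.PI / 180.0; }

          static void Main()
          {
              // Approximate downtown San Diego coordinates vs. a hypothetical store's
              // geocoded coordinates; stores where the result is under 15 count as "nearby".
              double d = Miles(32.7157, -117.1611, 32.8328, -117.2713);
              Console.WriteLine(d);   // well under 15 miles, so this store would be listed
          }
      }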

    Read the article
