Search Results

Search found 3134 results on 126 pages for 'bill jobs'.

Page 36/126 | < Previous Page | 32 33 34 35 36 37 38 39 40 41 42 43  | Next Page >

  • Vista ICS issue

    - by Bill Grey
    I have a strange problem with Internet Connection Sharing on a laptop running Vista Business. This laptop is connected to the internet via the ethernet port, which goes to an ADSL modem. It is automatically assigned the IP address 192.168.1.50, and the modem/gateway is 192.168.1.1. My friend's laptop is running Vista Home. Previously, I would create an ad hoc wireless network, enable ICS, and everything would be perfect. My friend would have internet access via this. However, something has now mysteriously broken. If I enable ICS on the wireless connection, it resets my Local Area Connection, assigning it the manual IP address of 192.168.0.1, which means my connection to the internet is destroyed. Both wireless adapters are assigned auto-configuration addresses in the 168. range. They can see each other fine, but my friend's laptop cannot access the internet via mine, even after I have restored the Local Area Connection settings. I understand the computer with ICS enabled must have the IP of 192.168.0.1, but previously, before whatever went wrong, my wireless adapter would be 192.168.0.1 and my friend's computer would get an IP via DHCP. I have also tried setting static IP addresses and making a bridge, none of which works. How can I fix this problem, and prevent enabling ICS from touching my Local Area Connection? Both machines have no firewall and have appropriate settings, etc.
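
    A rough sketch of putting the wired adapter back on DHCP after ICS rewrites it, assuming the connection is still named "Local Area Connection" (run from an elevated command prompt):

        netsh interface ip set address name="Local Area Connection" source=dhcp
        netsh interface ip set dns name="Local Area Connection" source=dhcp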

    Read the article

  • Variable parsing with Bash and wget

    - by Bill Westrup
    I'm attempting to use wget in a simple bash script to grab a jpeg image from an Axis camera. This script outputs a file named JPEGOUT, instead of the desired output, which should be a timestamped jpeg (ex: 201209292040.jpg). Changing the variable in the wget statement from JPEGOUT to $JPEGOUT makes wget fail with a "wget: missing URL" error. The weird thing is wget parses the $IP variable correctly. No luck on the output file name. I've tried single quotes, double quotes, parentheses: all with no luck. Here's the script:

        #!/bin/bash
        IP=$1
        JPEGOUT= date +%Y%m%d%H%M.jpg
        wget -O JPEGOUT http://$IP/axis-cgi/jpg/image.cgi?resolution=640x480&compression=25

    Any ideas on how to get the output file name to parse correctly?
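
    For reference, a sketch of how the script might look with the usual fixes applied: command substitution so JPEGOUT actually holds the timestamp, a $ when the variable is referenced, and the URL quoted so the shell does not treat the & as a background operator (the camera URL itself is assumed correct):

        #!/bin/bash
        IP=$1
        # capture the date output; no spaces around =
        JPEGOUT=$(date +%Y%m%d%H%M).jpg
        # quote both the variable and the URL
        wget -O "$JPEGOUT" "http://$IP/axis-cgi/jpg/image.cgi?resolution=640x480&compression=25"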

    Read the article

  • New Demos SOA Suite (11.1.1.6) & SOA Suite Foundation Pack (11.1.1.6)

    - by JuergenKress
    For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. GSE is pleased to announce the availability of SOA and Foundation Pack 11g (11.1.1.6) Platform Portable images. Portable images now come as a VBox appliance. SOA 11.1.1.6 Platform Portable Version: this portable image comes with the latest SOA Suite products installed and configured, and the VBox appliance facilitates easy maintenance of the image. Click here to download the portable image. FP 11.1.1.6 Platform Portable Version: Foundation Pack is installed and configured on the SOA image and stands as a base for building cross-application integrations. Click here to download the portable image. In addition to the Portable images, Global Sales Engineering would like to announce the availability of a Hosted version of the SOA & BPM 11g (11.1.1.6) Solutions. Click here for more information. The Foundation Pack (FP) demo showcases various tools and utilities of Foundation Pack, such as the Project Lifecycle Workbench (PLW), the JDeveloper Service Constructor, harvesting services to PLW/Oracle Enterprise Repository, generation of a Bill of Materials (BOM), creation of Deployment Plans and Harvester settings, and tracking the Foundation Pack Fusion Order demo flow in the Enterprise Manager Console. For more information on the demo click here. For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • robocopy transfer file and not folder

    - by Bill McKay
    I'm trying to use robocopy to transfer a single file from one location to another, but robocopy seems to think I'm always specifying a folder. Here is an example: robocopy "c:\transfer_this.txt" "z:\transferred.txt" But I get this error instead: 2009/08/11 15:21:57 ERROR 123 (0x0000007B) Accessing Source Directory c:\transfer_this.txt\ (note the '\' at the end of transfer_this.txt) But if I treat it like an entire folder: robocopy "c:\folder" "z:\folder" it works, but then I have to transfer everything in the folder. How can I only transfer a single file with robocopy?
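
    For reference, robocopy takes a source directory, a destination directory, and then an optional file filter, so a single-file copy is usually written along these lines (a sketch, assuming the file really sits in the root of C:; robocopy keeps the original file name, as it cannot rename during the copy):

        robocopy c:\ z:\ transfer_this.txt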

    Read the article

  • How to fix network connection dead on startup, but okay after disable/enable?

    - by bill weaver
    When my system starts up, the internet connection is dead. This causes various problems with startup items, such as updates and auto-start programs failing. However, the connection is fine after going into Network and Sharing, Change adapter settings, then disabling and re-enabling the adapter. Any suggestions on why this is happening and how to fix it? System summary: Windows 7, 64 bit, Realtek PCIe GBE Family Controller, Linksys WRT610N router, Sci Atlanta cable modem.
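
    A blunt workaround sketch while the root cause is unknown: script the same disable/re-enable cycle and run it as an elevated scheduled task at logon (this assumes the adapter really is named "Local Area Connection" in Network Connections):

        netsh interface set interface "Local Area Connection" admin=disable
        netsh interface set interface "Local Area Connection" admin=enable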

    Read the article

  • EM12c Release 4: Cloud Control to Major Tom...

    - by abulloch
    With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4), the EM development team has added new functionality to help the EM Administrator monitor the health of the EM infrastructure. Taking feedback delivered by customers directly and through customer advisory boards, some nice enhancements have been made to the "Manage Cloud Control" sections of the UI, commonly known in the EM community as "the MTM pages" (MTM stands for Monitor the Monitor). This part of the EM Cloud Control UI is viewed by many as the mission control for EM Administrators. In this post we'll highlight some of the new information that's on display in these redesigned pages and explain how the information they present can help EM Administrators identify potential bottlenecks or issues with the EM infrastructure. The first page we'll take a look at is the newly designed Repository information page. You can get to this from the main Setup menu, through Manage Cloud Control, then Repository. Once this page loads you'll see the new layout, which includes three tabs containing more drill-down information.

    The Repository Tab

    The first tab, Repository, gives you a series of six panels or regions on screen that display key information the EM Administrator needs to review from time to time to ensure that the infrastructure is in good health. Rather than go through every panel, let's call out a few and let you explore the others later on your own EM site. Firstly, we have the Repository Details panel. At a glance the EM Administrator can see the current version of the EM repository database and, more critically, three important pieces of information relating to availability and reliability: Is the database in Archive Log mode? Is the database using Flashback? When was the last database backup taken? In this test environment the answers are not too worrying; however, Production environments should have at least Archivelog mode enabled, Flashback is a nice feature to enable prior to upgrades (for fast rollback), and all Production sites should have a backup. In this case the backup information in the control file indicates there have been no recorded backups taken. The next region of interest on this page shows key information about the Repository configuration, specifically the initialisation parameters (from the spfile). If you're storing your EM Repository in a Cluster Database you can view the parameters on each individual instance using the Instance Name drop-down selector in the top right of the region. Additionally, you'll note there is now a check performed on the active configuration to ensure that you're using, at the very least, Oracle's minimum recommended values. Should the values in your EM Repository not meet these requirements, they will be flagged in this table with a red X for non-compliance. You can of course change these values within EM by selecting the Database target and modifying the parameters in the spfile (and optionally the run-time values, if the parameter allows dynamic changes). The last region to call out on this page before moving on is the new-look Repository Scheduler Job Status region. This region is an update of a similar region seen on previous releases of the MTM pages in Cloud Control, but some important new functionality that customers have requested has been added. First up: restarting Repository jobs.
    As you can see from the graphic, you can now optionally select a job (by selecting the row in the UI table element) and click on the Restart Job button to take care of any jobs which have stopped or stalled for any reason. Previously this needed to be done at the command line using EMDIAG or through a PL/SQL package invocation; you can now take care of it directly from within the UI. Next, you'll see that a feature has been added to allow the EM Administrator to customise the run time for some of the background jobs that run in the Repository. We heard from some customers that ensuring these jobs don't clash with Production backups, etc. is a key requirement. This new functionality allows you to select the pencil icon to edit the schedule time for these more resource-intensive background jobs and modify the schedule to avoid clashes like this. Moving on to the next tab, let's select the Metrics tab.

    The Metrics Tab

    There are some big changes here: this page contains new information regions that help the Administrator understand the direct impact the inbound metric flows are having on the EM Repository. Many customers have provided feedback that they are in the dark about the impact of adding new targets, or large numbers of new hosts or new target types, into EM and the effect this has on the Repository. This page helps the EM Administrator get to grips with this. Let's take a quick look at two regions on this page. First up there's a bubble chart showing a comprehensive view of the top resource consumers of metric data over the last 30 days, charted as the number of rows loaded against the number of collections for the metric. The size of the bubble indicates a relative volume. You can see from this example that a quick glance shows Host metrics are the largest inbound flow into the repository when measured by number of rows. Closely following behind are a large number of collections for Oracle WebLogic Server and Application Deployment. Taken together, the Host collections amount to around 0.7 MB of data, while the total information collected for WebLogic Server and Application Deployments is 0.38 MB and 0.37 MB respectively. If you want this breakdown of the volume of data collected, simply hover over a bubble in the chart and you'll get a floating tooltip showing the information. Clicking on any bubble in the chart takes you one level deeper into a drill-down of the metric collection. Doing this reveals the individual metric elements for these target types and again shows a representation of the relative cost, in terms of number of rows, number of collections and storage cost of data, for each metric type. Looking at another panel on this page we can see a different view of this data. This region shows the Top N metrics (the drop-down allows you to select 10, 15 or 20) sorted by volume of data. In the case above, the largest metric collection by volume over the last 30 days is the information about OS Registered Software on a Host target. Taken together, these two regions provide a powerful tool for the EM Administrator to understand the potential impact of any new targets that have been discovered and promoted into management by EM12c. It's a great tool for identifying the cause of a sudden increase in Repository storage consumption or redo log and archive log generation.
    Using the information on this page, EM Administrators can take action to mitigate any load impact by deploying monitoring templates to the targets causing the most load, if appropriate. The last tab we'll look at on this page is the Schema tab.

    The Schema Tab

    Selecting this tab brings up a window onto the SYSMAN schema with a focus on space usage in the EM Repository. Understanding which tablespaces are growing, and at what rate, is essential information for the EM Administrator to stay on top of managing space allocations for the EM Repository so that it works as efficiently as possible and performs well for the users, not least because ensuring storage is managed well ensures continued availability of EM for monitoring purposes. The first region to highlight here shows the trend of space usage for the tablespaces in the EM Repository over time. You can see the upward trend showing that storage in the EM Repository has been consumed at an increasing rate over the last few days. This is normal, as the EM installation used here is brand new, with Agents being added daily to bring targets into monitoring. If your Enterprise Manager configuration has reached a steady state over a period of time, where the number of new inbound targets is relatively small and the metric collection settings are fairly uniform and standardised (using Templates and Template Collections), you're likely to see a trend of space allocation that plateaus. The table below the trend chart shows the Top 20 Tables/Indexes sorted in descending order of space consumed. You can switch the trend chart and corresponding detail table by choosing a different tablespace in the EM Repository using the drop-down picker on the top right of this region. The last region to highlight on this page shows information about the purge policies in effect in the EM Repository. This information is useful to illustrate to EM Administrators the default purge policies for the different categories of information available in the EM Repository. Of course, it's also been a long-requested feature to have the ability to modify these default retention periods, and you can also do this using this screen. As there are interdependencies between some data elements, you can't modify retention policies on a feature-by-feature basis. Instead, retention policies take categories of information and bundle them together in Groups, and retention policies are modified at the Group level. Understanding the impact of this really deserves a blog post all of its own, as modifying these can have a significant impact on both the EM Repository's storage footprint and its performance. For now, we're just highlighting the feature's visibility on these new pages. If you're a user of EM12c, we hope the new features you see here address some of the feedback that's been given on these pages over the past few releases. We'll look out for any comments or feedback you have on these pages!
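
    For the three availability checks called out on the Repository tab above (Archive Log mode, Flashback, and the last backup), a quick command-line sketch against the repository database shows the same information; this assumes SQL*Plus access as SYSDBA on the repository database host and RMAN-recorded backups:

        echo 'SELECT log_mode, flashback_on FROM v$database;' | sqlplus -s / as sysdba
        echo 'SELECT MAX(completion_time) AS last_backup FROM v$backup_set;' | sqlplus -s / as sysdba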

    Read the article

  • How to fix: faulting application w3wp.exe?

    - by Bill Paetzke
    In the Event Viewer: Source: Application Error Category: 100 Event ID: 1000 Faulting application w3wp.exe, version 6.0.3790.3959, faulting module kernel32.dll, version 5.2.3790.4480, fault address 0x000000000000dd50. This error is happening on my background-app server (not web server). It's happening every 3-5 minutes. What's the problem in layman's terms? And then the solution? :) Or if this isn't enough info, how can I troubleshoot?
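
    One common next step, as a sketch: capture a crash dump of w3wp.exe when the fault happens and open it in WinDbg to see the faulting stack. This assumes the Debugging Tools for Windows (which include adplus) are installed on the server, and c:\dumps is just an example output folder:

        adplus -crash -pn w3wp.exe -o c:\dumps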

    Read the article

  • Iterative Conversion

    - by stuart ramage
    Question Received: I am toying with the idea of migrating the current information first and the remainder of the history at a later date. I have heard that the conversion tool copes with this, but haven't found any information on how it does. Answer: The Toolkit will support iterative conversions as long as the original master data key tables (the CK_* tables) are not cleared down from Staging (the already converted Transactional Data would need to be cleared down) and the Production instance being migrated into is actually Production (we have migrated into a pre-prod instance in the past and then unloaded this and loaded it into the real PROD instance, but this will not work for your situation; you need to be migrating directly into your intended environment). In this case the migration tool will still know all about the original keys and the generated keys for the primary objects (Account, SA, etc.) and as such it will be able to link the data converted as part of a second pass onto these entities. It should be noted that this may result in the original opening balances potentially being displayed with an incorrect value (if we are talking about Financial Transactions) and also that care will have to be taken to ensure that all related objects are aligned (e.g. a Bill must have a set of bill segments, meter reads and financial transactions, and these entities cannot exist independently). It should also be noted that subsequent runs of the conversion tool would need to be 'trimmed' to ensure that they are only doing work on the objects affected. You would not want to revalidate and migrate all Person, Account, SA, SA/SP, SP and Premise details, since this information has already been processed, but you would definitely want to run the affected transactional record validation and keygen processes. There is no real "hard-and-fast" rule around this processing since it is specific to each implementation's needs, but the majority of the effort required should be detailed in the Conversion Tool section of the online help (under Administration / The Conversion Tool). The major rule is to ensure that you only run the steps and validation/keygen steps that you need and do not do a complete rerun for your subsequent conversion.

    Read the article

  • Oracle Releases New Mainframe Re-Hosting in Oracle Tuxedo 11g

    - by Jason Williamson
    I'm excited to say that we've released our next generation of re-hosting in 11g. In fact, I'm doing some hands-on labs for our Systems Integrators in Italy in a couple of weeks and targeting Latin America next month. If you are an SI or re-hosting firm and are looking to become an Oracle Partner or to get a better understanding of Tuxedo and how to use the workbench for re-hosting, drop me a line. Oracle Tuxedo Application Runtime for CICS and Batch 11g provides a CICS API emulation and Batch environment that exploits the full range of Oracle Tuxedo's capabilities. Re-hosted applications run in a multi-node, grid environment with centralized production control. Also, enterprise integration of CICS application services benefits from an open and SOA-enabled framework. Key features include: CICS Application Runtime: Can run IBM CICS applications unchanged in an application grid, which enables the distribution of large workloads across multiple processors and nodes. This simplifies CICS administration and can scale to over 100,000 users and over 50,000 transactions per second. 3270 Terminal Server: Protects business users from change through support for tn3270 terminal emulation. Distributed CICS Resource Management: Simplifies deployment and administration by allowing customers to run CICS regions in a distributed configuration. Batch Application Runtime: Provides robust IBM JES-like job management that enables local or remote job submissions. In addition, distributed batch initiators can enable parallelization of jobs and support fail-over, shortening the batch window and helping to meet stringent SLAs. Batch Execution Environment: Helps to run IBM batch unchanged and also supports JCL functionality and all common batch utilities. Oracle Tuxedo Application Rehosting Workbench 11g provides a set of automated migration tools integrated around a central repository. The tools provide high precision, which results in very low error rates and the ability to handle large applications. This enables less expensive, low-risk migration projects. Key capabilities include: Workbench Repository and Cataloguer: Ensures integrity of the migrated application assets through full dependency checking. The Cataloguer generates and maintains all relevant meta-data on source and target components. File Migrator: Supports reliable migration of datasets and flat files to ISAM or Oracle Database 11g. This is done through automated migration utilities for data unloading, reloading and validation. It also generates logical access functions to shield developers from data repository changes. DB2 Migrator: Similarly, this tool automates the migration of DB2 schema and data to Oracle Database 11g. COBOL Migrator: Supports migration of IBM mainframe COBOL assets (OLTP and Batch) to open systems. Adapts programs for compiler dialects and data access variations. JCL Migrator: Supports migration of IBM JCL jobs to a Tuxedo ART environment, maintaining the flow and characteristics of batch jobs.

    Read the article

  • Silverlight Cream for April 23, 2010 -- #845

    - by Dave Campbell
    In this Issue: Jason Allor, Bill Reiss, Mike Snow, Tim Heuer, John Papa, Jeremy Likness, and Dave Campbell.

    Shoutouts: You saw it at MIX10 and DevConnections... now you can give it a dance: John Papa announced eBay Simple Lister Beta Now Available. Mike Snow posted some info about, and a link to, his new Flickr/Bing/Google High End Image Viewer, and he's looking for feedback.

    From SilverlightCream.com:

    Hierarchical Data Trees With A Custom DataSource: Jason Allor is rounding out a series here in his new blog (bookmark it), and he's created his own custom HierarchicalDataSource class for use with the TreeView.

    Space Rocks game step 11: Start level logic: Bill Reiss has Episode 11 up in his Space Rocks game ... working on NewGame and start level logic.

    Silverlight Tip of the Day #3 – Mouse Right Clicks: Mike Snow has Tip 3 up ... about handling right-mouse clicks in Silverlight 4 -- oh yeah, we got right mouse now ... grab Mike's project to check it out.

    Silverlight 4 enables Authorization header modification: Tim Heuer talks about the ability to modify the Authorization header in network calls with Silverlight 4. He gives not only the quick-and-dirty of how to use it, but has some good examples, code, and code results for show and tell.

    WCF RIA Services - Hands On Lab: John Papa built a bookstore app in roughly 10 minutes in the keynote at DevConnections. He now has a tutorial on doing just that plus all the code up.

    Transactions with MVVM: Not strictly Silverlight (or WPF), but Jeremy Likness has an interesting article up on MVVM and transaction processing. Read the post then grab his helper class.

    Your First Windows Phone 7 Application: As with the First Silverlight App a couple weeks ago, if you've got any WP7 experience at all, just keep going... this is for folks that have not looked at it yet, have not downloaded anything... oh, and it's by Dave Campbell.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group

    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • wrt54gl reboots; troubleshooting steps?

    - by Bill
    I am using about 10 wrt54gl's in a small school. I am using a combination of stock firmware and Tomato 1.25, slowly moving towards all Tomato. We have had these devices installed for several years without problems. Recently, more and more of the units have started to spontaneously reboot, usually during high-traffic times (but not always). For the most part, the rebooting is not critical for us, but the wrt54gl's temporarily revert to 192.168.1.1 on the LAN ethernet ports and conflict with a critical server that's already installed with that IP. (Yes -- we plan to move the server off that address, but it is an involved process.) Both Tomato and the stock firmware (several versions from recent to several years old) exhibit the same problem: random reboots and reverting to 192.168.1.1 and conflicting temporarily with our server until the firmware boot process finishes. Here are my questions: Any way to prevent the wrt54gl's from reverting to 192.168.1.1 during the boot process? I was thinking of doing a custom firmware mod, although I hate to go that direction. Any steps to take in troubleshooting the reboots? Only some of the wrt54gl's reboot, which is odd. Others stay online for weeks and months without issues. Thanks.

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction

    More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.

    Hive Actions: Prepping for Pig

    In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.

        CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column1, column2)
        PARTITIONED BY (yr string)
        STORED AS ...
        LOCATION '/user/oracle/weather/historic';

    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.

        ALTER TABLE historic_weather ADD IF NOT EXISTS
        PARTITION (yr='2011') LOCATION '/user/oracle/weather/historic/yr=2011';

        INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
        SELECT w.stn, w.wban, w.weather_year, w.weather_month,
               w.weather_day, w.temp, w.dewp, w.weather
        FROM (
          FROM historic_weather
          SELECT TRANSFORM(...)
          USING '/path/to/hive/filters/ncdc_parser.py'
          as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
        ) w;

    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called ncdc_parse.hql.

    Starting Our Workflow

    Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph.
    Coordinator jobs can take all the same actions of Workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point:

        <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
          <start to="ParseNCDCData"/>
          <end name="end"/>
        </workflow-app>

    To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.

        <action name="ParseNCDCData">
          <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <configuration>
              <property>
                <name>oozie.hive.defaults</name>
                <value>/user/oracle/weather_ooze/hive-default.xml</value>
              </property>
            </configuration>
            <script>ncdc_parse.hql</script>
          </hive>
          <ok to="WeatherMan"/>
          <error to="end"/>
        </action>

    There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode, I have to include a hive-default.xml file, and I have to include a script file. The hive-default.xml and script file must be stored in HDFS. That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-default.xml files on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml, hive-default.xml (make sure this file contains your metastore connection data), and ncdc_parse.hql.

    Adding Pig to the Ooze

    Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:

        <action name="WeatherMan">
          <pig>
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <script>weather_train.pig</script>
          </pig>
          <ok to="end"/>
          <error to="end"/>
        </action>

    Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding Jars to the distributed cache from Oozie's Pig Cookbook.

    Making the Workflow Work

    We've got a workflow defined and have collected all the components we'll need to run.
    But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:

        nameNode=hdfs://localhost:8020
        jobTracker=localhost:8021
        queueName=default
        weatherRoot=weather_ooze
        mapreduce.jobtracker.kerberos.principal=foo
        dfs.namenode.kerberos.principal=foo
        oozie.libpath=${nameNode}/user/oozie/share/lib
        oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
        outputDir=weather-ooze

    While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of Jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory.

    We're finally ready to submit our job! After all that work we only need to do a few more things: validate our workflow.xml, copy our working directory to HDFS, submit our job to the Oozie server, and run our workflow. Let's do them in order. First validate the workflow:

        oozie validate workflow.xml

    Next, copy the working directory up to HDFS:

        hadoop fs -put working_dir /user/oracle/working_dir

    Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument.

        oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit

    We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output:

        14-20120525161321-oozie-oracle

    This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job.

        oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle

    Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately.

    Takeaway

    So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
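
    As mentioned earlier, turning this workflow into a coordinator job only needs one more XML file plus a pointer to it in job.properties (oozie.coord.application.path instead of oozie.wf.application.path). A minimal sketch of what that scheduling file might look like for a daily run is below; the name, dates and frequency are illustrative, and the app-path reuses the workflow directory from above:

        <coordinator-app name="WeatherManDaily" frequency="${coord:days(1)}"
                         start="2012-06-01T00:00Z" end="2013-06-01T00:00Z" timezone="UTC"
                         xmlns="uri:oozie:coordinator:0.2">
          <action>
            <workflow>
              <app-path>${nameNode}/user/${user.name}/${weatherRoot}</app-path>
            </workflow>
          </action>
        </coordinator-app>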

    Read the article

  • How can I automate FTP downloads based on date without bi-directional syncing?

    - by Bill
    I have a particular FTP-related situation that I'm having trouble finding a solution for. I need an FTP download/syncing application that can operate within the following parameters: It must run under Windows (installing Python to be able to run a script or some such thing is an acceptable solution). It must be able to ignore files before a certain date (I want to start downloading new files, not all the files that exist in this very large FTP directory). I don't want bi-directional syncing (e.g. I don't want changes I make to the local files and directory structure to change the remote FTP server, the FTP server needs to be left completely alone). Automating it in some fashion would be ideal. What would you guys suggest? The solutions I'm turning up are all missing the mark in some fashion (e.g. they have bi-directional syncing or they have no way of starting the syncing today instead of trying to pull down the entire directory).
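
    Since installing extra tooling is acceptable, one possible sketch uses lftp's mirror command, which pulls in one direction only and can skip anything older than a given cutoff (lftp is a Unix tool, so on Windows it would need to run under Cygwin; the host, credentials, paths and cutoff below are placeholders, and the line can be wrapped in a scheduled task for automation):

        lftp -u user,password ftp.example.com -e "mirror --only-newer --newer-than=now-7days /remote/dir ./incoming; quit"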

    Read the article

  • Steve Ballmer disputes the disappearance of the PC in favor of tablets; what will be the future of devices

    Update of 04.06.2010 by Katleen: Steve Ballmer disputes the disappearance of PCs in favor of tablets, in response to Steve Jobs; what, then, will be the future of digital devices? Following Steve Jobs's grilling by journalists at the D8 conference, Steve Ballmer (CEO of Microsoft) also sat down in the same chair yesterday, where he answered questions from the Wall Street Journal team. "When we were an agrarian nation, all cars were trucks. But from the moment the population started migrating to the cities, people started using cars. I think PCs are going to meet the same fate as trucks. Fewer and fewer ...

    Read the article

  • Make 'volume up' from 'mute' go to volume step one not previous volume on Macbook Pro

    - by Bill Cheatham
    Currently, on my Macbook Pro (OSX 10.6.5), I will often use the 'mute' button to turn the volume off, for example if I'm working at night. I may then watch a video or something, and decide to have the volume on very quietly. So I press the 'volume up' key—but instead of going to volume level one (the logical 'volume step up' from mute), the volume jumps to whatever mega-decibels I was blasting out before I muted in the first place. Is there a way to change the behavior of this? Ideally the 'mute' button would be changed to actually turn the volume to zero, rather than just muting. (And re-pressing mute would turn the volume back up to the original). Thanks!
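
    A possible workaround sketch: instead of relying on the mute key, set the output volume explicitly with AppleScript and bind the commands to hotkeys with a launcher of your choice (the level 6 is arbitrary; 0 is silent and 100 is full):

        osascript -e 'set volume output volume 6'
        osascript -e 'set volume output volume 0'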

    Read the article

  • HaProxy - Http and SSL pass through config

    - by Bill
    I've currently got an HAProxy LB solution in place and everything is working fine; however, we are having an issue with a very few clients who cannot get to our site via HTTPS (SSL). They can browse our site over HTTP, but as soon as they click on an absolute HTTPS link they are taken to our home page instead. Wondering if anyone can look at our config below and see if there's something awry. I believe we are on HAProxy 1.2.17.

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 6144
            #debug
            #quiet
            user haproxy
            group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000
            stats auth # admin password
            stats uri /monitor

        listen webfarm
            # bind :80,:443
            bind :443
            mode tcp
            balance source
            #cookie SERVERID insert indirect
            #option httpclose
            #option forwardfor
            #option httpchk HEAD /check.cfm HTTP/1.0
            server webA 111.10.10.1
            #server webB 111.10.10.2
            server webB 111.10.10.3
            server webC 111.10.10.4

        listen webfarmhttp :80
            mode http
            balance source
            # option httpclose
            option forwardfor
            # option httpchk HEAD /check.cfm HTTP/1.0
            option httpchk /check.cfm
            server webA 111.10.10.1
            #server webB 111.10.10.2
            server webB 111.10.10.3
            server webC 111.10.10.4

        listen monitor :8443
            mode http
            balance roundrobin
            #cookie SERVERID insert indirect
            option httpclose
            option forwardfor
            #option httpchk HEAD /check.txt HTTP/1.0
            #option httpchk HEAD /check.cfm HTTP/1.0
            server webA 111.10.10.1
            server webB 111.10.10.2

    Read the article

  • Does professionalism in job postings matter?

    - by Wings87
    I came across this job posting: http://www.justin.tv/jobs/jobs. The page contains some bad language ("No bulls--t"). I personally find this vaguely offensive, and it would certainly put me off applying to work there: It made me wonder what kind/personality of people work at this company. At least where I'm from (EU), the language would be considered bad form, and it would be seen as badly representing the company. Some job postings seem to go to the other extreme, filled with vacuous people skill descriptions and irrelevant details. It's tempting to dream up pictures of each company based on their job posting. So, does professionalism in job postings matter? Are you inclined to see through bad language, irrelevant corporate speak and so on, or do these affect interest in a job?

    Read the article

  • Any tips for designing the invoicing/payment system of a SaaS?

    - by Alexandru Trandafir Catalin
    The SaaS is for real estate companies, and they can pay a monthly fee that gives them 1000 publications, but they can also consume additional publications or other services that will appear on their bill as extras. On registration the user can choose one of the 5 available plans; the only difference between them is the quantity of publications the plan allows them to make. But they can pass that limit if they wish, and additional payment will be required on the next bill. A publication means publishing one property for one day, so publishing a property for a whole month would be 30 publications, and publishing 5 properties for one day would be 5 publications. So basically the user can: Make publications (already paid for in the monthly fee; extra payment only if they pass the limit). Highlight a publication (extra payment). Publish on other websites or in printed catalogues (extra payment). Doubts: How to handle modifications in pricing plans? Let's say quantities change, or you want to offer some free stuff. How to handle unpaid invoices? I mean, freeze the service until the payment has been made and then resume it. When to make the invoices? The idea is to make one invoice for the monthly fee and a second invoice for the extra services that were consumed. What payment methods to use? The one chosen now is by bank account, with mobile phone validation via SMS. If the user doesn't pay, we call that phone and ask for payment. Any examples of billing for online services will be welcome! Thanks!

    Read the article

  • Why can't Logman start?

    - by Bill Paetzke
    I'm setting up my first logman counter. But it's not working! There is some file or folder permissions problem, or maybe I wrote the create-counter statement wrong. Here are my counter commands:

        logman create counter BillTest -si 30 -v nnnnnn -max 200 -o "C:\Temp" -c "\Processor(*)\*" "\Memory(*)\*" "\LogicalDisk(*)\*"
        logman start BillTest

    The first command works. It says counter creation was successful. The second command fails: Collection "BillTest" did not start, check the application event log for any errors. Here's the error in the Event Viewer: The service was unable to open the log file C:\Temp_000001.blg for log BillTest and will be stopped. Check the log folder for existence, spelling, permissions, and ensure that no other logs or applications are writing to this log file. You can reenter the log file name using the configuration program. This log will not be started. The error returned is: Access is denied. I verified that C:\Temp exists. I'm not a permissions guru, but I did set all the accounts in the security tab of that folder to "full control." Still, the logman start command failed with the same error. I noticed that it was trying to write to C:\Temp_000001.blg instead of C:\Temp\000001.blg. That might be part of the problem. So, I tried to update my counter to "C:\Temp\" instead of "C:\Temp", but that failed with a path-invalid error. Also, all the examples I saw online did not put a trailing slash. So, no dice there. I tried this on my machine (Windows XP) and my dev server (Windows Server 2003). Both failed with the same error. How can I fix this?
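
    A sketch of the same counter with a file base name included after the folder, which is what the C:\Temp_000001.blg path in the error suggests logman is expecting (the BillTest base name is just an example):

        logman create counter BillTest -si 30 -v nnnnnn -max 200 -o "C:\Temp\BillTest" -c "\Processor(*)\*" "\Memory(*)\*" "\LogicalDisk(*)\*"
        logman start BillTest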

    Read the article

  • How could this diagram of the current most lucrative technologies be improved?

    - by Edward Tanguay
    I'm giving a talk in April 2011 on "Developer English" and showing my non-developer audience, mostly English teachers, various diagrams to explain how developers see their industry, etc. One of these diagrams is "Hot Technologies": basically, if you want to become a developer, what technologies should you learn to have the highest chance of (1) getting a job, (2) making a good salary, and (3) working with the most exciting technology. This is a draft I made just to get some ideas out: basically, C#, PHP, and Java are where the bulk of the jobs are. Mobile development has a big future. JavaScript is becoming more and more important, and I want to list "minor technologies" such as Python and Ruby on Rails to the side; I assume, e.g., that in general there is a much smaller percentage of jobs in these technologies than in C#, PHP, and Java. How could this diagram be improved?

    Read the article

  • How to create full-text catalog as default catalog?

    - by Bill Paetzke
    This would save me the redundant ON MyCatalog phrase when I create full-text indices. For example:

        CREATE FULLTEXT INDEX ON MyTable
        (
          MyField1,
          MyField2,
          MyField3
        )
        KEY INDEX PK_MyKey
        ON MyCatalog
        WITH CHANGE_TRACKING AUTO

    With MyCatalog set as the default catalog, I wouldn't have to specify ON MyCatalog every time I want to create a full-text index. So how can I make MyCatalog the default on this database?
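
    A sketch of making a catalog the database default, either at creation time or by altering an existing one, after which the ON MyCatalog clause can be dropped from CREATE FULLTEXT INDEX (the database name is a placeholder; sqlcmd is used here only as a convenient way to send the statement):

        rem create a new catalog as the default
        sqlcmd -d MyDatabase -Q "CREATE FULLTEXT CATALOG MyCatalog AS DEFAULT"
        rem or flip an existing catalog to be the default
        sqlcmd -d MyDatabase -Q "ALTER FULLTEXT CATALOG MyCatalog AS DEFAULT"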

    Read the article

  • Server room kit?

    - by Bill Weiss
    I feel like this is a question I've seen on here before, but some searching didn't do me any good. This looks similar, but I'm looking for stuff I leave there, not what's in my go-bag. What would you say is indispensable equipment in your server room? I've inherited one that's a bit light on stuff (except for servers, those are in there). We're in the single digits of racks, if that matters. I'm thinking of things like: Cable labeler Ethernet tester (copper at least, fibre if you need) ... ? Community wiki, because, really. [Edit] I suppose it's important to say that it's a colo facility, kind of far from the office. No food, water, etc. :(

    Read the article

  • How to get permission to create full-text index?

    - by Bill Paetzke
    I tried to create a full-text index and got this error: Msg 9967, Level 16, State 1, Line 1 A default full-text catalog does not exist in database 'foo' or user does not have permission to perform this action. FYI--I connected to the target sql server with Windows Authentication. What do I need to do in Sql Server 2005 and/or in Windows Server 2003 to get permissions? Please be thorough (assume I am a n00b). Thank you.
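
    The error can come from either a missing default catalog or missing rights, so a sketch covering both, run by someone who is db_owner or sysadmin (the catalog name is an example, 'foo' is the database from the error, and the Windows login is a placeholder that must already map to a user in the database):

        sqlcmd -d foo -Q "CREATE FULLTEXT CATALOG DefaultFtCatalog AS DEFAULT"
        sqlcmd -d foo -Q "EXEC sp_addrolemember 'db_owner', 'DOMAIN\YourLogin'"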

    Read the article

  • WPA2 and the linux wireless tools

    - by Bill Grey
    I would like to know a distribution-independent way to connect to WPA2 wireless networks. Do the wireless tools support WPA2? iwconfig and such? Or is it necessary to use wpa_supplicant? Having to edit a config file every time I change between networks quickly becomes frustrating. I am aware of tools like wicd, but I would like to know if there is a standard way to do this on all distributions without requiring third-party software.
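
    iwconfig and the classic wireless tools only handle WEP, so the usual distribution-neutral route is wpa_supplicant itself, driven either from a config file or interactively with wpa_cli so networks can be switched without re-editing files. A minimal sketch (interface name, SSID and passphrase are examples):

        wpa_passphrase "MySSID" "my passphrase" >> /etc/wpa_supplicant.conf
        wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
        dhclient wlan0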

    Read the article
