Search Results

Search found 3070 results on 123 pages for 'cron jobs'.

Page 78 of 123

  • Measuring custom statistics with sar

    - by Will Glass
    I have a server application which I think is leaking file handles. I want to track the usage of file descriptors over time on my Linux (Ubuntu) server. I've figured out that I can count the file descriptors in use by a process with lsof -p `pgrep the-process-name` | wc -l. Since I'm already using sysstat and sar to track various metrics, I thought it'd be nice to display this with sar. I want to measure it every 10 minutes. Is it possible to add a custom metric to sar? Then I could easily report it out. If not, I'll write a simple cron job to collect this data and store it separately in a log file.
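
    A minimal sketch of the cron-job fallback, assuming a single matching process (the process name and log path are placeholders):

        #!/bin/bash
        # fd-count.sh - append a timestamped file-descriptor count to a log.
        # "the-process-name" is a placeholder; pgrep -o picks the oldest match.
        PID=$(pgrep -o the-process-name) || exit 1
        COUNT=$(lsof -p "$PID" 2>/dev/null | wc -l)
        echo "$(date '+%F %T') $COUNT" >> /var/log/fd-count.log

        # crontab entry to sample every 10 minutes:
        # */10 * * * * /usr/local/bin/fd-count.sh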

    Read the article

  • Does it make a difference to read from a file instead of from MySQL?

    - by Joe Huang
    My web server is currently under heavy load, and I have a PHP file that is accessed very often remotely. The PHP file basically makes a MySQL query and returns a JSON-formatted string. I am thinking of using a cron job to write the necessary data into a file every 15 minutes, so the PHP file doesn't make a MySQL query and instead reads from the file. Does it make a difference? I mean, would it alleviate the server load (CPU/MySQL) a bit?
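
    A minimal sketch of the cache-refresh side, assuming the JSON is produced by a helper script (all paths are placeholders). Writing to a temp file and renaming keeps readers from ever seeing a half-written cache, since a rename on the same filesystem is atomic:

        #!/bin/bash
        # refresh-cache.sh - regenerate the JSON cache atomically.
        TMP=$(mktemp /var/www/cache/data.json.XXXXXX)
        /usr/bin/php /var/www/scripts/build-json.php > "$TMP" && mv "$TMP" /var/www/cache/data.json

        # crontab entry:
        # */15 * * * * /usr/local/bin/refresh-cache.sh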

    Read the article

  • Setting thresholds via smartd.conf

    - by JPerkSter
    Hello! We currently use smartd to monitor SMART health on our disks. I would like to set the 'thresholds' that smartd uses to report. For example:

        197 Current_Pending_Sector  0x0012 100 100 000 Old_age Always  - 9
        198 Offline_Uncorrectable   0x0010 100 100 000 Old_age Offline - 9

    I would like to be able to set a threshold for these two attributes so they only report if they are above 10. I haven't been able to find a way to set this via smartd.conf, and I'm also not looking to actually shut down the daemon. Has anyone ever tried this, or know how I might be able to accomplish this better than throwing a script into cron.hourly?
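
    If smartd.conf turns out not to support per-attribute raw-value thresholds, a hedged cron.hourly sketch (the device and mail address are placeholders; column 10 of smartctl -A output is the raw value, column 1 the attribute ID):

        #!/bin/bash
        # Alert only when attributes 197/198 have a raw value above 10.
        smartctl -A /dev/sda | awk '$1 == 197 || $1 == 198 { print $2, $10 }' |
        while read -r attr raw; do
            if [ "$raw" -gt 10 ]; then
                echo "/dev/sda: $attr raw value $raw exceeds 10" \
                    | mail -s "SMART warning on $(hostname)" admin@example.com
            fi
        done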

    Read the article

  • Best server for mailing application [closed]

    - by Cyber Junkie
    My application is similar to a reminder service that reminds users of events they scheduled. I'm sending emails to users through a PHP script. I'm not sending one email to multiple recipients: each recipient receives a different message. I plan to use cron jobs every minute and expect the application to send roughly 200 individual emails per hour (for a small user base that may grow). I don't have hosting experience with this type of application. I plan to start on a shared host and move up in the future to a VPS or dedicated server. Most shared hosts that I looked into allow 50-100 emails per hour, with delays between mailings. Please kindly inform me what I should look for in a web host for this kind of application.

    Read the article

  • Automatically kill processes that use 95%+ of resources over time? Ubuntu

    - by data_jepp
    I don't know about your computer, but when mine is working properly no process sucks up 95%+ CPU over a long stretch. I would like to have some failsafe that kills any process behaving like that. This comes to mind because when I woke up this morning, my laptop had been crunching all night long on a stray Chromium child process. This can probably be done as a cron job, but before I make a full-time job of creating something like this, I thought I should check here. :) I hate reinventing the wheel.
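
    A hedged cron-job sketch: ps reports %CPU averaged over the process lifetime, which roughly matches "over time". The threshold and the choice to send SIGTERM are assumptions; review what it matches before letting it send signals:

        #!/bin/bash
        # kill-hogs.sh - terminate processes averaging more than 95% CPU.
        ps -eo pid=,pcpu=,comm= | while read -r pid cpu comm; do
            if [ "${cpu%.*}" -ge 95 ]; then
                echo "$(date '+%F %T') killing $comm ($pid) at ${cpu}% CPU" >> /var/log/kill-hogs.log
                kill "$pid"
            fi
        done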

    Read the article

  • So I ran sudo apt-get install kubuntu-full on my Ubuntu... and saw all the apps... now I want it off. Help?

    - by Alex Poulos
    I'm running 12.04. I installed Kubuntu to try it out and realized, with all the bloatware applications, that I didn't want it anymore. I was able to uninstall kubuntu-desktop, but there are still packages left over. How can I make sure I get rid of EVERYTHING Kubuntu installed, even the KDE leftovers? Here's some of what's left: when I typed sudo apt-get autoremove kde and then pressed Tab, it displayed this:

        kdeaccessibility             kdepim-runtime
        kdeadmin                     kde-runtime
        kde-baseapps                 kde-runtime-data
        kde-baseapps-bin             kdesdk-dolphin-plugins
        kde-baseapps-data            kde-style-oxygen
        kde-config-cron              kdesudo
        kde-config-gtk               kdeutils
        kde-config-touchpad          kde-wallpapers
        kdegames-card-data           kde-wallpapers-default
        kdegames-card-data-extra     kde-window-manager
        kde-icons-mono               kde-window-manager-common
        kdelibs5-data                kde-workspace
        kdelibs5-plugins             kde-workspace-bin
        kdelibs-bin                  kde-workspace-data
        kdemultimedia-kio-plugins    kde-workspace-data-extras
        kdenetwork                   kde-workspace-kgreet-plugins
        kdenetwork-filesharing       kde-zeroconf
        kdepasswd                    kdf
        kdepim-kresources            kdm
        kdepimlibs-kio-plugins       kdoctools

    Those are all installed by Kubuntu... correct? I just want to go back to my Ubuntu 12.04 LTS with Gnome2-classic and without all the Kubuntu extras. I started off by just removing unnecessary apps that came with kubuntu-full, then realized I didn't want the whole thing at all and uninstalled kubuntu-full, but it still says I have these as well:

        alex@griever:~$ sudo apt-get --purge remove kubuntu-
        kubuntu-debug-installer      kubuntu-netbook-default-settings
        kubuntu-default-settings     kubuntu-notification-helper
        kubuntu-firefox-installer    kubuntu-web-shortcuts
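
    A hedged cleanup sketch: apt-get treats arguments containing regex characters as package-name patterns, so something like the following could purge the leftovers in one pass. The patterns match broadly (anything with "kde" or "kubuntu" in the name), so review apt's proposed removal list carefully before confirming:

        sudo apt-get purge 'kde.*' 'kubuntu.*' kdm
        sudo apt-get autoremove --purge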

    Read the article

  • EM12c Release 4: New EMCLI Verbs

    - by SubinDaniVarughese
    Here are the new EM CLI verbs in Enterprise Manager 12c Release 4 (12.1.0.4). These help you in writing new scripts or enhancing your existing scripts for further automation.

    Basic Administration Verbs
      invoke_ws - Invoke EM web service.

    ADM Verbs
      associate_target_to_adm - Associate a target to an application data model.
      export_adm - Export Application Data Model to a specified .xml file.
      import_adm - Import Application Data Model from a specified .xml file.
      list_adms - List the names, target names and application suites of existing Application Data Models.
      verify_adm - Submit an application data model verify job for the target specified.

    Agent Update Verbs
      get_agent_update_status - Show Agent Update results.
      get_not_updatable_agents - Show not updatable agents.
      get_updatable_agents - Show updatable agents.
      update_agents - Perform Agent Update prereqs and submit the Agent Update job.

    BI Publisher Reports Verbs
      grant_bipublisher_roles - Grant access to the BI Publisher catalog and features.
      revoke_bipublisher_roles - Revoke access to the BI Publisher catalog and features.

    Blackout Verbs
      create_rbk - Create a retroactive blackout.

    CFW Verbs
      cancel_cloud_service_requests - Cancel cloud service requests.
      delete_cloud_service_instances - Delete cloud service instances.
      delete_cloud_user_objects - Delete cloud user objects.
      get_cloud_service_instances - Get information about cloud service instances.
      get_cloud_service_requests - Get information about cloud requests.
      get_cloud_user_objects - Get information about cloud user objects.

    Chargeback Verbs
      add_chargeback_entity - Add the given entity to Chargeback.
      assign_charge_plan - Assign a plan to a chargeback entity.
      assign_cost_center - Assign a cost center to a chargeback entity.
      create_charge_entity_type - Create a charge entity type.
      export_charge_plans - Export charge plans metadata to a file.
      export_custom_charge_items - Export user-defined charge items to a file.
      import_charge_plans - Import charge plans metadata from a given file.
      import_custom_charge_items - Import user-defined charge items metadata from a given file.
      list_charge_plans - Give a list of charge plans in Chargeback.
      list_chargeback_entities - Give a list of all the entities in Chargeback.
      list_chargeback_entity_types - Give a list of all the entity types that are supported in Chargeback.
      list_cost_centers - List the cost centers in Chargeback.
      remove_chargeback_entity - Remove the given entity from Chargeback.
      unassign_charge_plan - Unassign the plan associated with a chargeback entity.
      unassign_cost_center - Unassign the cost center associated with a chargeback entity.

    Configuration/Association History Verbs
      disable_config_history - Disable configuration history computation for a target type.
      enable_config_history - Enable configuration history computation for a target type.
      set_config_history_retention_period - Set the amount of time for which configuration history is retained.

    Configuration Compare Verbs
      config_compare - Submit the configuration comparison job.
      get_config_templates - Get all the comparison templates from the repository.

    Compliance Verbs
      fix_compliance_state - Fix compliance state by removing references to deleted targets.

    Credential Verbs
      update_credential_set

    Data Subset Verbs
      export_subset_definition - Export a specified subset definition as an XML file at a specified directory path.
      generate_subset - Generate a subset using a specified subset definition and target database.
      import_subset_definition - Import a subset definition from a specified XML file.
      import_subset_dump - Import a dump file into a specified target database.
      list_subset_definitions - Get the list of subset definitions, ADMs and target names.

    Delete Pluggable Database Job Verbs
      delete_pluggable_database - Delete a pluggable database.

    Deployment Procedure Verbs
      get_runtime_data - Get the runtime data of an execution.

    Discover and Push to Agents Verbs
      generate_discovery_input - Generate a discovery input file for discovering auto-discovered domains.
      refresh_fa - Refresh a Fusion instance.
      run_fa_diagnostics - Run Fusion Applications diagnostics.

    Fusion Middleware Provisioning Verbs
      create_fmw_domain_profile - Create a Fusion Middleware Provisioning Profile from a WebLogic Domain.
      create_fmw_home_profile - Create a Fusion Middleware Provisioning Profile from an Oracle Home.
      create_inst_media_profile - Create a Fusion Middleware Provisioning Profile from installation media.

    Gold Agent Image Verbs
      create_gold_agent_image - Create a gold agent image.
      decouple_gold_agent_image - Decouple the agent from a gold agent image.
      delete_gold_agent_image - Delete a gold agent image.
      get_gold_agent_image_activity_status - Get gold agent image activity status.
      get_gold_agent_image_details - Get the gold agent image details.
      list_agents_on_gold_image - List agents on a gold agent image.
      list_gold_agent_image_activities - List gold agent image activities.
      list_gold_agent_image_series - List gold agent image series.
      list_gold_agent_images - List the available gold agent images.
      promote_gold_agent_image - Promote a gold agent image.
      stage_gold_agent_image - Stage a gold agent image.

    Incident Rules Verbs
      add_target_to_rule_set - Add a target to an enterprise rule set.
      delete_incident_record - Delete one or more open incidents.
      remove_target_from_rule_set - Remove a target from an enterprise rule set.

    Job Verbs
      export_jobs - Export job details into an XML file.
      import_jobs - Import job definitions from an XML file.
      job_input_file - Supply details for a job verb in a property file.
      resume_job - Resume a job or set of jobs.
      suspend_job - Suspend a job or set of jobs.

    Oracle Database as a Service Verbs
      config_db_service_target - Configure a DB Service target for OPC.

    Privilege Delegation Settings Verbs
      clear_default_privilege_delegation_setting - Clear the default privilege delegation setting for a given list of platforms.
      set_default_privilege_delegation_setting - Set the default privilege delegation setting for a given list of platforms.
      test_privilege_delegation_setting - Test a privilege delegation setting on a host.

    SSA Verbs
      cleanup_dbaas_requests - Submit a cleanup request for a failed request.
      create_dbaas_quota - Create a database quota for an SSA user role.
      create_service_template - Create a service template.
      delete_dbaas_quota - Delete the database quota setup for an SSA user role.
      delete_service_template - Delete a given service template.
      get_dbaas_quota - List the database quota setup for all SSA user roles.
      get_dbaas_request_settings - List the database request settings.
      get_service_template_detail - Get details of a given service template.
      get_service_templates - Get the list of available service templates.
      rename_service_template - Rename a given service template.
      update_dbaas_quota - Update the database quota for an SSA user role.
      update_dbaas_request_settings - Update the database request settings.
      update_service_template - Update a given service template.

    Saved Configurations Verbs
      get_saved_configs - Get the saved configurations from the repository.

    Server Generated Alert Metric Verbs
      validate_server_generated_alerts - Server generated alert metric verb.

    Services Verbs
      edit_sl_rule - Edit the service level rule for the specified service.

    Siebel Verbs
      list_siebel_enterprises - List Siebel enterprises currently monitored in EM.
      list_siebel_servers - List Siebel servers under a specified Siebel enterprise.
      update_siebel - Update a Siebel enterprise or its underlying servers.

    SiteGuard Verbs
      add_siteguard_aux_hosts - Associate new auxiliary hosts to the system.
      configure_siteguard_lag - Configure apply lag and transport lag limits for databases.
      delete_siteguard_aux_host - Delete an auxiliary host associated with a site.
      delete_siteguard_lag - Erase apply lag or transport lag limits for databases.
      get_siteguard_aux_hosts - Get all auxiliary hosts associated with a site.
      get_siteguard_health_checks - Show the schedule of health checks.
      get_siteguard_lag - Show apply lag or transport lag limits for databases.
      schedule_siteguard_health_checks - Schedule health checks for an operation plan.
      stop_siteguard_health_checks - Stop all future health check executions of an operation plan.
      update_siteguard_lag - Update apply lag and transport lag limits for databases.

    Software Library Verbs
      stage_swlib_entity_files - Stage files of an entity from the Software Library to a host target.

    Target Data Verbs
      create_assoc - Create target associations.
      delete_assoc - Delete target associations.
      list_allowed_pairs - List allowed association types for a specified source and destination.
      list_assoc - List associations between source and destination targets.
      manage_agent_partnership - Manage partnership between agents; used for explicitly assigning agent partnerships.

    Trace Reports Verbs
      generate_ui_trace_report - Generate and download a UI page performance report (to identify slow-rendering pages).

    VI EMCLI Verbs
      add_virtual_platform - Add Oracle Virtual Platform(s).
      modify_virtual_platform - Modify an Oracle Virtual Platform.

    To get more details about each verb, execute:

      $ emcli help <verb_name>

    Example:

      $ emcli help list_assoc

    New resources in the list verb

    These are the new resources in the EM CLI list verb:

    Certificates
      WLSCertificateDetails

    Credential Resource Group
      PreferredCredentialsDefaultSystemScope - Preferred credentials (system scope).
      PreferredCredentialsSystemScope - Target preferred credentials.

    Privilege Delegation Settings
      TargetPrivilegeDelegationSettingDetails - List privilege delegation setting details on a host.
      TargetPrivilegeDelegationSetting - List privilege delegation settings on a host.
      PrivilegeDelegationSettings - List all privilege delegation settings.
      PrivilegeDelegationSettingDetails - List details of privilege delegation settings.

    To get more details about each resource, execute:

      $ emcli list -resource="<resource_name>" -help

    Example:

      $ emcli list -resource="PrivilegeDelegationSettings" -help

    Deprecated Verbs

    Agent Administration Verbs
      resecure_agent - Resecure an agent.

    To get the complete list of verbs, execute:

      $ emcli help

    Read the article

  • Help with a simple incremental backup script

    - by Evan
    I'd like to run the following incomplete script weekly as a cron job to back up my home directory to an external drive mounted as /mnt/backups:

        #!/bin/bash
        TIMEDATE=$(date +%b-%d-%Y-%k:%M)
        LASTBACKUP=pathToDirWithLastBackup
        rsync -avr --numeric-ids --link-dest=$LASTBACKUP /home/myfiles /mnt/backups/myfiles$TIMEDATE

    My first question is: how do I correctly set LASTBACKUP to the directory in /mnt/backups most recently created? Secondly, I'm under the impression that using --link-dest means files in previous backups will not be copied in later backups if they still exist, but will instead link back to the originally copied files. However, I don't want to retain old files forever. What would be the best way to remove all the backups before a certain date, without losing files in current backups that may still be linked into those older backups? Basically, I'm looking to merge all the files before a certain date into a single backup, if that makes more sense than the way I initially framed the question :). Can --link-dest create hard links, and if so, would just deleting previous directories avoid actually removing linked files? Finally, I'd like to add a line to my script that compresses each newly created backup folder (/mnt/backups/myfiles$TIMEDATE). Based on reading this question, I was wondering if I could just run gzip --rsyncable /backups/myfiles$TIMEDATE after rsync, so that sequential rsync --link-dest executions would find already-copied and compressed files? I know that's a lot, so many thanks in advance for your help!!
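
    On the first question, a hedged sketch: sort the existing backup directories by modification time and take the newest. This assumes the backups follow the myfiles* naming pattern and nothing else matching it lives in /mnt/backups:

        # Most recently created backup directory (newest first by mtime).
        LASTBACKUP=$(ls -1td /mnt/backups/myfiles*/ 2>/dev/null | head -n 1)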

    Read the article

  • Dependent on CVS tagging for automated builds

    - by OMG Ponies
    My current work relies on using tags in CVS for an automated build process (Ant, currently) to build for the respective environments (development, QA, production). From our research, neither Git nor Subversion supports tagging in the same manner (please correct me?). So how would Ant or Maven know what to pick up for the respective build? Example: for a webapp, when viewing our repository, say for the web.xml file, the history would look like:

        web.xml v1
        ...
        web.xml v1.2.3   Tag: Prod
        web.xml v1.2.4
        web.xml v1.2.5   Tag: QA
        web.xml v1.2.6
        web.xml v1.2.7   Head

    The Ant build scripts are run as cron jobs, at different times and intervals for different environments. The environment build is based on the repository checkout, based on the tag. Development continues, and eventually the respective tags are moved:

        web.xml v1
        ...
        web.xml v1.2.3
        web.xml v1.2.4
        web.xml v1.2.5
        web.xml v1.2.6   Tag: Prod
        web.xml v1.2.7   Tag: QA
        web.xml v1.2.8   Head
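
    For what it's worth, a hedged Git sketch of the same workflow: Git tags can be force-moved, so a floating Prod/QA tag pattern is possible, though branches are the more idiomatic Git equivalent. The commit reference and remote name are placeholders:

        # Move the QA tag to the commit that should build, and publish it.
        git tag -f QA <commit-sha>
        git push --force origin QA

        # The cron-driven build then force-fetches the moved tag and checks it out.
        git fetch origin "+refs/tags/QA:refs/tags/QA"
        git checkout --detach QA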

    Read the article

  • Remote reboot over ssh does not restart

    - by Finn Årup Nielsen
    I would like to remotely reboot my Ubuntu 12.04 LTS server via ssh. I run sudo reboot, I lose the connection, and the server never comes back; it does not ping. When I go to the physical computer with a screen attached, I see a black screen and hear that the server is still on. I do a hard power-off (press the power button for a few seconds) and the server halts. After I press power on, the server boots with no problem. As far as I remember, remote reboot has previously worked on this server. I wonder if sudo reboot & would help? I suppose I could also try sudo shutdown -r and see if that makes any difference. I have listed an excerpt of /var/log/syslog below. The last thing it records is the stopping of the logging.

        Oct 24 10:14:49 servername kernel: [1354427.594709] init: cron main process (1060) killed by TERM signal
        Oct 24 10:14:49 servername kernel: [1354427.594908] init: irqbalance main process (1080) killed by TERM signal
        Oct 24 10:14:49 servername kernel: [1354427.595299] init: tty1 main process (1424) killed by TERM signal
        Oct 24 10:14:49 servername kernel: [1354427.637747] init: plymouth-upstart-bridge main process (20873) terminated with status 1
        Oct 24 10:14:49 servername kernel: Kernel logging (proc) stopped.
        Oct 24 10:14:49 servername rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="876" x-info="http://www.rsyslog.com"] exiting on signal 15.
        Oct 24 10:25:34 servername kernel: imklog 5.8.6, log source = /proc/kmsg started.
        Oct 24 10:25:34 servername rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="862" x-info="http://www.rsyslog.com"] start
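
    On the sudo reboot & idea, a hedged variant that detaches the reboot from the ssh session entirely, in case the dropped connection is somehow interfering with the shutdown sequence (a guess, not a diagnosis):

        # Delay the reboot slightly and detach it from the terminal.
        sudo nohup sh -c 'sleep 2; reboot' >/dev/null 2>&1 &
        exit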

    Read the article

  • Google bots are severely affecting site performance

    - by Lynn
    I have an aggregate site on a Linux server that pulls in feeds from a universe of about 2,000 blogs. It's on WordPress 3.4.2, and I have a cron job, staggered to run five times an hour on another server, that pulls in the stories and then publishes them to the front page of this site. This is so I don't put too much pressure on one server. However, the Google bots, which visit a few times every hour, bring the server to its knees in the mornings and evenings when there is an increase in traffic on the site. The bots have something like 30,000 links to follow at this point. How do I throttle the bots to simply grab the new stories off the front page and stop there? EDIT: Details of my server configuration: The way we have this set up is that the server that handles all the publishing is an unmanaged instance on AWS. It mounts the NFS server and connects to the RDS to update content, etc. You get to this publishing instance via a plugin that detects the wp-admin link and then redirects you there. The front-end app server also mounts the NFS and requests data from the RDS. It is the only one that has WP Super Cache on it. The OS is Ubuntu on the app server and the NFS runs CentOS. The front end is Nginx and the publishing server is Apache.
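
    One hedged starting point is robots.txt. Note that Googlebot ignores the Crawl-delay directive (Google's crawl rate is set in Webmaster Tools instead), so the sketch below mainly helps with other bots and with keeping crawlers out of deep archives; the paths are placeholders:

        User-agent: *
        Crawl-delay: 10
        Disallow: /archives/
        Disallow: /?s=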

    Read the article

  • basic beginning emacs questions - install latest version and pick appropriate UI

    - by MountainX
    I'm running the latest Kubuntu (12.04 beta 2) and I would like to run the latest Emacs (currently v24). The repos are one version behind. What's the best way to install v24 or later (and avoid future version conflicts)? Also, is there any reason not to always use the GUI version of Emacs if X is running? For example, could I set the GUI Emacs as the default text editor and use it to edit cron jobs (crontab -e)? I'm assuming the answer is yes, but since I haven't done that yet (my default editor is nano), I want to check if there are reasons I should leave nano as the default editor. Usually when I'm working on the command line I end up using nano. Now that I think about it, I have no idea why I keep doing that. Is there any downside to calling a GUI editor when working in an X terminal? EDIT: I briefly tested these two versions:

        GNU Emacs 24.0.94.1 (x86_64-pc-linux-gnu, GTK+ Version 3.3.20)
        GNU Emacs 23.3.1 (x86_64-pc-linux-gnu), installed by default in Kubuntu

    This post explains some of the differences between versions. Unfortunately (for me) the default installed version (23.3.1, 23.3+1-1ubuntu9) is the nox version:

        Package: emacs23-nox
        Status: install ok installed
        Version: 23.3+1-1ubuntu9
        Replaces: emacs23, emacs23-gtk, emacs23-lucid

    The package with version 24 opens in GUI mode by default. That's what I prefer. Some of the version 24 changes that interest me are listed in the references below. But there appear to be a multitude of different packages and versions I could install.

    References:
    What's New In Emacs 24 (part 1) | Mastering Emacs
    http://www.masteringemacs.org/articles/2011/12/06/what-is-new-in-emacs-24-part-1/
    "shell-mode uses pcomplete rules, with the standard completion UI. Yowzah! There's a lot of cool, new functionality hidden away in this gem of a change."
    EmacsWiki: Recent Changes
    http://www.emacswiki.org/emacs/?action=rc;showedit=0
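
    On the crontab question, a hedged sketch: crontab -e honors the VISUAL/EDITOR environment variables, and the main catch with a GUI editor is that the calling process must block until the edit finishes, so Emacs should run in the foreground (not backgrounded). This assumes a GUI emacs binary is on PATH:

        # In ~/.bashrc: use GUI Emacs for crontab -e and friends.
        export VISUAL=emacs
        export EDITOR="$VISUAL"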

    Read the article

  • Entry / Jr PHP Programmer - What do I learn next?

    - by dtj
    I got very interested in programming toward the end of college. I took a few classes, but learned most everything on my own via books and such. It's mostly been PHP and MySQL. Right out of school, I got a job at a company for 2 years (web media) and ended up learning a lot of stuff and programming some things for them. I am no longer at that company, but I am looking for my next steps as a programmer. I really enjoy web development, and PHP and MySQL seem to be my thing. Basically, I know how to do CRUD operations; I am mediocre at OOP and still have more to learn; I know HTML and CSS quite well; I know my way around a Unix terminal and can access MySQL through it and set up cron jobs and such; and I know some basic JavaScript. What's a good next step? I don't know anything about third-party services, PDO, APIs (Twitter, Facebook, etc.), Drupal/Joomla, unit testing, e-commerce, PECL, PEAR... in other words, A LOT. I get easily overwhelmed by the amount of stuff there is to learn, so I'm trying to find a path. Right now, I'm digging into OOP more, as that seems like a good conceptual first step. Any suggestions?

    Read the article

  • Why We Should Learn to Stop Worrying and Love Millennials

    - by HCM-Oracle
    By Christine Mellon

    Much is said and written about the new generations of employees entering our workforce, as though they are a strange specimen, a mysterious life form to be "figured out," accommodated and engaged – at a safe distance, of course. At its worst, this talk takes a critical and disapproving tone, with baby boomer employees adamantly refusing to validate this new breed of worker, let alone determine how to help them succeed and achieve their potential. The irony of our baby-boomer resentments and suspicions is that they belie the fact that we created the very vision that younger employees are striving to achieve. From our frustrations with empty careers that did not fulfill us, from our opposition to "the man," from our sharp memories of our parents' toiling for 30 years just for the right to retire, from the simple desire not to live our lives in a state of invisibility, came the seeds of hope for something better.

    One characteristic of Millennial workers that grew from these seeds is the desire to experience as much as possible. They are the "Experiential Employee," with a passion for growing in diverse ways and expanding personal and professional horizons. Rather than rooting themselves in a single company for a career, or even in a single career path, these employees are committed to building a broad portfolio of experiences and capabilities that will enable them to make a difference and to leave a mark of significance in the world. How much richer is the organization that nurtures and leverages this inclination? Our curmudgeonly ways must be surrendered and our focus redirected toward building the next generation of talent ecosystems, if we are to optimize what future generations have to offer.

    Accelerating Professional Development

    In spite of our Boomer grumblings about Millennials' "unrealistic" expectations, the truth is that we have a well-matched set of circumstances. We have executives-in-waiting who want to learn quickly and a concurrent, urgent need to ramp up their development time, based on anticipated high levels of retirement in the next 10+ years. Since we need to rapidly skill up these heirs to the corporate kingdom, isn't it a fortunate coincidence that they are hungry to learn, develop and move fluidly throughout our organizations? So our challenge now is to efficiently operationalize the wisdom we have acquired about effective learning and development. We have already evolved from classroom-based models to diverse instructional methods. The next step is to find the best approaches to help younger employees learn quickly and apply new learnings in an impactful way.

    Creating temporary or even permanent functional partnerships among Millennial employees is one way to maximize outcomes. This might take the form of 2 or more employees owning aspects of what once fell under a single role. While one might argue this would mean duplication of resources, it could be a short-term cost while employees come up to speed. And the potential benefits would be numerous: leveraging and validating the inherent sense of community of new generations, creating cross-functional skills with broad applicability, yielding additional perspectives and approaches to traditional work outcomes, and accelerating the performance curve for incumbents through Cooperative Learning (Johnson, D. and Johnson R., 1989, 1999). This well-researched teaching strategy, where students support each other in the absorption and application of new information, has been shown to deliver faster, more efficient learning, and greater retention.

    Alternately, perhaps short-term contracts with exiting retirees, or former retirees, to help facilitate the development of following generations may have merit. Again, a short-term cost, certainly. However, the gains realized in shortening the learning curve and strengthening engagement are substantial and lasting. Ultimately, creative thinking must be applied in each organization to accelerate the capabilities of our future leaders in unique ways that mesh with the current culture.

    The manner in which performance is evaluated must finally shift as well. Employees will need to be assessed on how well they have developed key skills and capabilities vs. end-to-end mastery of functional positions they have no interest in keeping for an entire career. As we become more comfortable placing greater and greater weight on competencies vs. tasks, we will realize increased organizational agility via this new generation of workers, which will be further enhanced by their natural flexibility and appetite for change.

    Revisiting Succession

    For many years, organizations have failed to deliver desired succession planning outcomes. According to CEB's 2013 research, only 28% of current leaders were pre-identified in a succession plan. These disappointing results, along with the entrance of the experiential Millennial employee into the workforce, may just provide the needed impetus for HR to reinvent succession processes.

    We have recognized that the best professional development efforts are not always linear, and the time has come to fully adopt this philosophy in regard to succession as well. Paths to specific organizational roles will not look the same for newer generations who seek out unique learning opportunities, without consideration of a singular career destination. Rather than charting particular jobs as precursors for key positions, the experiences and skills behind what makes an incumbent successful must become essential in succession mapping. And the multitude of ways in which those experiences and skills may be acquired must be factored into the process, along with the individual employee's level of learning agility.

    While this may seem daunting, it is necessary and long overdue. We have talked about the criticality of competency-based succession; however, we have not lived up to our own rhetoric. Many Boomers have experienced the same frustration in our careers: knowing we are capable of shining in a particular role, but being denied the opportunity due to how our career history lined up, on paper, with documented job requirements. These requirements usually emphasized past jobs/titles and specific tasks, versus capabilities, drive and willingness (let alone determination) to learn new things. How satisfying would it be for us to leave a legacy where such narrow thinking no longer applies and potential is amplified?

    Realizing Diversity

    Another bloom from the seeds we Boomers have tried to plant over the past decades is a completely evolved view of diversity. Millennial employees assume a diverse workforce, and are startled by anything less. Their social tolerance, nurtured by wide and diverse networks, is unprecedented. College graduates expect a similar landscape in the "real world" to what they experienced throughout their lives. They appreciate and seek out divergent points of view and experiences without needing any persuasion. The face of our U.S. workforce will likely see dramatic change as Millennials apply their fresh take on hiring and building strong teams, with an inherent sense of inclusion. This wonderful aspect of the Millennial wave should be celebrated and strongly encouraged, as it is the fulfillment of our own aspirations.

    Future Perfect

    The Experiential Employee is operating more as a free agent than a long-term player, and their commitment will essentially last as long as meaningful organizational culture and personal/professional opportunities keep their interest. As Boomers, we have laid the foundation for this new, spirited employment attitude, and we should take pride in knowing that. Generations to come will challenge organizations to excel in how they identify, manage and nurture talent. Let's support and revel in the future that we've helped invent, rather than lament what we think has been lost. After all, the future is always connected to the past. And as so eloquently phrased by Antoine Lavoisier, French nobleman, chemist and politico: "Nothing is Lost, Nothing is Created, and Everything is Transformed."

    Christine has over 25 years of diverse HR experience. She has held HR consulting and corporate roles, including CHRO positions for EchoStar in Denver, a 6,000+ employee global engineering firm, and Aepona, a startup software firm successfully acquired by Intel. Christine is a resource to Oracle clients, assisting with Human Capital Management strategy development and implementation, compensation practices, talent development initiatives, employee engagement, global HR management, and integrated HR systems and processes that support the full employee lifecycle.

    Read the article

  • Building an Infrastructure Cloud with Oracle VM for x86 + Enterprise Manager 12c

    - by Richard Rotter
    Cloud computing? Everyone is talking about the cloud these days. Everyone is explaining how the cloud will help you bring your service up and running very fast, securely and with little effort. You can find these kinds of presentations at almost every event around the globe. But what is really behind all this stuff? Is it really so simple? And the answer is: yes, it is! With the Oracle SW stack, it is! In this post, I will try to bring this down to earth, demonstrating how easy it could be to build a cloud infrastructure with Oracle's solution for cloud computing.

    But let me cover some basics first:

    How fast can you build a cloud?
    How elastic is your cloud, so you can provide new services on demand?
    How much effort does it take to monitor and operate your cloud infrastructure in order to meet your SLAs?
    How easy is it to charge back for the services provided?

    These are the critical success factors of cloud computing. And Oracle has an answer to all those questions. By using Oracle VM for x86 in combination with Enterprise Manager 12c, you can build and control your cloud environment very fast and easily. What are the fundamental building blocks for your cloud?

    Oracle Cloud Building Blocks

    #1 Hardware

    Surprise, surprise: even the cloud needs to run somewhere, hence you will need hardware. This HW normally consists of servers, storage and networking. But Oracle goes beyond that. There are Optimized Solutions available for your cloud infrastructure: a cookbook to build your HW cloud platform. For example, building your cloud infrastructure with blades and our network infrastructure will reduce complexity in your datacenter (blades with switch network modules, splitter cables to reduce the amount of cabling, and TOR (Top Of the Rack) switches which form the interface to your infrastructure environment). Reducing complexity, even in the cabling, will help you manage your environment more efficiently and with less risk. Of course, our engineered systems fit into the cloud perfectly too. Although they are considered a PaaS themselves, having the database SW (for Exadata) and the application development environment (for Exalogic) already deployed on them, in general they are ideal systems to enable you to build your own cloud and PaaS infrastructure.

    #2 Virtualization

    The next missing link in the cloud setup is virtualization. For me personally, it's one of the most hidden "secrets" that Oracle can provide you with a complete virtualization stack in terms of a hypervisor on both architectures: x86 and SPARC CPUs. There are Oracle VM for x86 and Oracle VM for SPARC available at no additional license cost if you are running this virtualization stack on top of Oracle HW (and with Oracle Premier Support for HW). This completes the virtualization portfolio together with Solaris Zones, introduced with Solaris 10 a few years ago. Let me explain how Oracle VM for x86 works. It consists of two main parts:

    - The Oracle VM Server: Oracle VM Server is installed on bare metal and is the hypervisor which is able to run virtual machines. It has a very small footprint: the ISO image of Oracle VM Server is only 200MB. It is very small but efficient. You can install an OVM Server in less than 5 minutes by booting the server with the ISO image assigned and providing the necessary configuration parameters (like installing a Linux distribution). After the installation, the OVM Server is ready to use. That's all.

    - The Oracle VM Manager: OVM Manager is the central management tool where you control your OVM Servers. OVM Manager provides the graphical user interface, which is an Application Development Framework (ADF) application with a familiar web-browser based interface, to manage Oracle VM Servers, virtual machines, and resources. The Oracle VM Manager has the following capabilities:

    Create virtual machines
    Create server pools
    Power on and off virtual machines
    Manage networks and storage
    Import virtual machines, ISO files, and templates
    Manage high availability of Oracle VM Servers, server pools, and virtual machines
    Perform live migration of virtual machines

    I want to highlight one of the goodies you can use if you are running Oracle VM for x86: preconfigured, downloadable virtual machine templates from edelivery. With these templates, you can download completely preconfigured virtual machines into your environment, boot them up, configure them at first boot and use them. There are templates for almost all Oracle SW and applications (like Fusion Middleware, Database, Siebel, etc.) available.

    #3 Cloud Management

    The management of your cloud infrastructure is key. This is a day-to-day job. Acquiring HW and installing a virtualization layer on top of it is done just at the beginning, and whenever you want to expand your infrastructure. But managing your cloud, keeping it up and running, deploying new services, changing your chargeback model, and so on: these are the daily jobs. These jobs must be simple, secure and easy to manage. Enterprise Manager 12c Cloud provides this functionality from one management cockpit. Enterprise Manager 12c uses Oracle VM Manager to control OVM server pools. Once you have registered your OVM Managers in Enterprise Manager, you are able to set up your cloud infrastructure and manage everything from Enterprise Manager. What you need to do in EM12c is:

    Register your OVM Manager in Enterprise Manager. After registering your OVM Manager, all the functionality of Oracle VM for x86 is also available in Enterprise Manager. Enterprise Manager works as a "manager" of the managers. You can register as many OVM Managers as you want and control your complete virtualization environment.

    Create roles and users for your Self Service Portal in Enterprise Manager. With this step you allow users to log on to the Enterprise Manager Self Service Portal, where they can request virtual machines.

    Set up the cloud infrastructure. Set up the quotas for your self-service users. How many VMs can they request? How much of your resources (CPU, memory, storage, network, etc.)? Which SW components (templates, assemblies) can your self-service users request? In this step, you basically set up the complete cloud infrastructure.

    Set up chargeback. Once your cloud is set up, you need to configure your chargeback mechanism. Enterprise Manager collects the resource metrics at a very deep level, and almost all collected metrics can be used in the chargeback module. You can define chargeback plans based on configuration (charge for the amount of CPU, memory or storage assigned to a machine, or for a specific OS being installed) or on resource consumption (% of CPU used, storage used, etc.). Or you can define a combination of configuration and consumption chargeback plans. The chargeback module is very flexible.

    An overview of the workflow for handling an infrastructure cloud in EM appears in the original post.

    Summary

    As you can see, setting up an Infrastructure Cloud Service with Oracle VM for x86 and Enterprise Manager 12c is really simple. I personally configured a complete cloud environment with three x86 servers and a small JBOD SAN box in less than 3 hours. There is no magic in it; it is all straightforward. Of course, you have to have some experience with Oracle VM and Enterprise Manager, and experience in setting up Linux environments helps as well. I plan to publish a technical cookbook in the next few weeks. I hope you found this post useful and will see you again here on our blog. Any hints or comments are welcome!

    Read the article

  • How can I check Internet connectivity in a console?

    - by Ashfame
    Is there an easy way to check Internet connectivity from the console? I am trying to play around in a shell script. One idea is to wget --spider http://www.google.co.in/ and check the HTTP response code to interpret whether the Internet connection is working fine. But I think there must be an easier way that doesn't involve checking a site that never crashes ;) Edit: It seems there can be a lot of factors which can be individually examined, which is good. My intention at the moment is to check if my blog is down. I have set up cron to check it every minute. For this, I am checking the HTTP response code of wget --spider against my blog. If it's not 200, it notifies me (I believe this is better than just pinging it, as the site may be under heavy load and may be timing out or responding very late). Now, yesterday there was some problem with my Internet. The LAN was connected fine, but I just couldn't access any site, so I kept getting notifications because the script couldn't find 200 in the wget response. I want to make sure the script only notifies me when I do have Internet connectivity. Checking DNS and LAN connectivity separately is a bit overkill for me, as I don't have a specific need to figure out which kind of problem it is. So what do you suggest? How should I do it?
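
    A hedged sketch of that logic: bail out silently when the machine itself has no connectivity, and only alert when the wider Internet is reachable but the blog is not. The URLs and mail address are placeholders:

        #!/bin/bash
        # check-blog.sh - run from cron every minute.
        # 1. No general connectivity? Then say nothing.
        wget -q --spider http://www.google.com/ || exit 0
        # 2. The Internet is up, so an unreachable blog is worth an alert.
        if ! wget -q --spider http://myblog.example.com/; then
            echo "Blog unreachable at $(date)" | mail -s "Blog down" me@example.com
        fi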

    Read the article

  • Why is sudo bash different from regular bash

    - by cyberjar09
    Problem description: I am using something called the Play framework in my development, which requires me to make the Python script play available in the PATH. Hence I created a symbolic link in /usr/local/bin. Now I have written a shell script (call it status.sh) which calls this Python script as follows:

        play status <some values here related to my app> &> /tmp/xyz.txt

    and this shell script then sends me the file via email. This works perfectly when I execute the script as ./status.sh. However, when the script is executed as a cron job every day, I get output on stderr saying 'play: command not found'. Hence I did some digging on my own, and here are my findings: echo $PATH when I am on the shell shows that I have /usr/local/bin available to me, hence I can successfully execute the command play status. However, when I type sudo bash and then echo $PATH, I no longer have /usr/local/bin: it is a limited set of folders (one of them being /usr/bin). Q: Why this behavior?! I fail to understand why the path is different. Also, as a workaround, would you suggest I: create a new symbolic link from /usr/bin to /usr/local/bin (what are the side effects of this?); remove the /usr/local/bin symlink altogether and only use /usr/bin; or is there a convention that I am not following here for linking new programs and executing them from $PATH? Thanks.
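
    For context, both sudo and cron reset PATH: sudo via the secure_path setting in /etc/sudoers, and cron via its own minimal default. That is a common reason /usr/local/bin disappears. A hedged fix sketch that avoids re-linking binaries (the schedule and paths are placeholders):

        # Option 1: give the crontab an explicit PATH (cron supports
        # variable assignments at the top of the crontab).
        PATH=/usr/local/bin:/usr/bin:/bin
        0 6 * * * /home/me/status.sh

        # Option 2: set PATH inside status.sh itself.
        export PATH="/usr/local/bin:$PATH"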

    Read the article

  • Login takes a long time

    - by Arkaprovo Bhattacharjee
    I have been using Ubuntu 12.04 for the past 12 days. In the beginning, login was fast enough: after I entered the password, it hardly took 3 to 4 seconds to reach the desktop. But now it's taking more than 40 seconds to show the desktop after entering the password. What's the problem, and is there any solution? P.S. There are only two programs (psensor and Jupiter) that start automatically after login.

    boot.log:

        fsck from util-linux 2.20.1
        /dev/sda6: clean, 254544/3325952 files, 2133831/13285632 blocks
         * Stopping Userspace bootsplash                              [ OK ]
         * Stopping Flush boot log to disk                            [ OK ]
         * Starting mDNS/DNS-SD daemon                                [ OK ]
        Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
        Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox
         * Starting bluetooth daemon                                  [ OK ]
         * Starting network connection manager                        [ OK ]
         * Starting AppArmor profiles                                 [ OK ]
         * Stopping System V initialisation compatibility             [ OK ]
         * Starting CUPS printing spooler/server                      [ OK ]
         * Starting System V runlevel compatibility                   [ OK ]
         * Starting Bumblebee supporting nVidia Optimus cards         [ OK ]
         * Starting LightDM Display Manager                           [ OK ]
         * Starting save kernel messages                              [ OK ]
         * Starting anac(h)ronistic cron                              [ OK ]
         * Starting ACPI daemon                                       [ OK ]
         * Starting regular background program processing daemon      [ OK ]
         * Starting deferred execution scheduler                      [ OK ]
        speech-dispatcher disabled; edit /etc/default/speech-dispatcher
         * Starting CPU interrupts balancing daemon                   [ OK ]

    Read the article

  • 5 Ways Android Still Disappoints (Me)

    - by TStewartDev
    Let me make this clear: I'm annoyed with Apple. I don't like their current policies and I don't like where Steve Jobs is taking the company. In general, I don't like it when any one company gets too much control in a market. When that happens, the leading company dictates the game and, as consumers, our options all but disappear. That said, I'm still going to buy a new iPhone next week. My Apple-hating friends seem to desperately want me to go Android instead, but frankly, it's not good enough for me, and here are the reasons why.

    The Modern WinMo

    One of the reasons that Microsoft has identified for Windows Mobile's rapid decline is the breadth of hardware. They exercised little control over manufacturers' implementations. In theory, that sounds great: we as consumers have lots of choice. In practice, though, it meant among other things that updates to the devices were left up to the manufacturers. As a result, that rarely happened. (I'm still bitter at Toshiba for leaving me hanging back in 2002.) And now, Google is doing the same thing with Android. Case in point: my wife has a Motorola Backflip that we bought in April. It was released in March. Motorola says it will get Android 2.1 "sometime in Q3". Great. Meanwhile, I pull down the latest version of iPhone OS (now iOS) and install it the same day it's released. You may say that I can't judge Android by one lazy manufacturer. Yup, I sure can. With Apple, my original iPhone has been supported perfectly for 3 years. With Android, I will have to wait for upgrades after Google releases them, possibly indefinitely. Not cool.

    AT&T

    We signed a new contract with AT&T in April to get my wife's phone. I've had a reasonable experience with them. I don't imagine my experience with Verizon would be any better, and I'm relatively confident that Sprint doesn't have the coverage it takes to work well for us. The fact is, AT&T, for whatever reason, doesn't have jack for Android phones. That may not be Android's fault, but it's still a shortcoming that prevents me from having it, just like the iPhone's exclusivity keeps some folks on other networks from having it.

    Innovation? What Innovation?

    Android has a nice dashboard and a great notification system and... nothing else original. I keep reading about how disappointing the iPhone is nowadays. "It has no innovation," people say. Who does? Android has modeled its behavior after the iPhone. That's fine, but if all you've got is a similar product and I'm invested both skill-wise and app-wise in my current platform, why should I change? Microsoft's new Windows Phone 7 looks somewhat innovative, and I'm pretty excited to see what they'll bring to the table, but that's another six months away, at least. I've got a 3-year-old phone that has some annoying issues now (thanks to recent encounters with water). I need a new phone now.

    Is This Going to Work?

    There's no shortage of criticism of Apple over its App Store policies, and I've vented my own anger about it. However, I will give them credit: their screening of apps has done a great job of weeding out the crap and gives an excellent indication that an app will work on my device. How about Android? Nope. It might work on your phone. Maybe. You'll have to try it to see. Get burned by it? Well, write a nasty review to try to keep others from making the mistake you did. If you don't mind doing that stuff, then Android is the platform for you. Personally, I'd rather have a receptionist screening out the telemarketing and survey calls than hang up on them myself, but that's your call.

    Slow, Slowing, Slower

    All this yapping about multitasking. This is an area where I've been on Apple's side from the beginning. Sorry folks, but this is the number one reason I hated Windows Mobile: the longer you use it, the slower it gets, because it doesn't kill apps. I'm with Steve Jobs on this one: if you see a task manager, we're doing it wrong. I don't want to have to manually kill apps. I hate doing that on Windows, let alone on a mobile device. To me, priority one should be keeping the device speedy. Waiting for your device to respond is unacceptable.

    Bonus!

    Taken from "iPhone Letdown? 8 Things Apple Didn't Announce," here are my responses:

    4G: Yeah, let me know if your area actually has it. I live in Lincoln, Nebraska. No carrier is going to have 4G here for at least 3 years. Meanwhile, you still get to pay for it. Yay!

    Cloud iTunes/OTA Sync: You got me here. Of course, whether or not your Android device will be able to do it is always a good question.

    3G Video Chat: You got me here, too. I'm sure you spent countless hours in front of your phone with video chat. Also, I can't wait for the "No Video Chat While Driving" laws.

    Mobile Hotspot: This is a neat feature, but as the author points out, it's left up to the carrier whether to implement it or not. I'm pretty sure any Android phones that come to AT&T won't have this enabled in the foreseeable future. Is Verizon even allowing this? I just figured Sprint was because they're failing so hard at keeping customers.

    Free MobileMe: I use Google's services with my iPhone. The only people I know who use MobileMe are Apple fanboys and fangirls. If you choose to pay for a service that you can get for free, that's your decision, not Apple's.

    Voice Input: Voice input has been available on phones (even "dumb" phones) for years now. The iPhone does have the ability, though limited. Why don't I hear people telling their phones what to do? Maybe because it's still easier to use your fingers than talk to it. Get back to me when this becomes an important feature.

    Free Navigation: Maybe this will be a bigger deal to me now that I'm getting a phone with GPS, but when using my buddy's 3GS, Google Maps has worked just fine. Maybe I just don't trust turn-by-turn navigation enough to want it.

    Dashboard: The only legitimate complaint on this list, to me. The iPhone's home screen is pathetic, doubly so for the iPad. What a waste of perfectly usable space.

    I also want to add notifications to this list. Android's notification panel is far superior to the iPhone's. I don't want to hunt all over my screen to find little red dots. Put 'em in one place, Apple.

    Read the article

  • Check what process was causing high CPU load

    - by linuxk
    I'm running an nginx WordPress server in KVM using 12.04 server x86. It ran very well for about 4 months, until 2 hours ago. I found that my website was down with no ping response. Virt-manager logged a high CPU load (please see the picture below) before the unexpected shutdown. I want to know what process caused the unexpected shutdown. The following log files make me think my server was attacked. Any suggestions and help would be appreciated.

    kern.log and syslog showed me the same output:

        Nov 11 03:54:11 www kernel: [1344541.156239] [UFW BLOCK] IN=eth0 OUT= MAC= SRC=0.0.0.0 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2
        Nov 11 03:54:11 www kernel: [1344541.156315] [UFW BLOCK] IN=eth0 OUT= MAC= SRC=0101:080a:2334:c900:0100:0000:0000:0000 DST=ff02:0000:0000:0000:0000:0000:0000:0001 LEN=72 TC=0 HOPLIMIT=1 FLOWLBL=0 PROTO=ICMPv6 TYPE=130 CODE=0

    /nginx/access.log showed me:

        119.235.237.17 - - [11/Nov/2012:03:45:29 +0900] "GET /blog HTTP/1.1" 200 30493 "-" "Yeti/1.0 (NHN Corp.; http://help.naver.com/robots/)"
        my-server-ip - - [11/Nov/2012:11:05:30 +0900] "POST /wp-cron.php?doing_wp_cron=13 HTTP/1.0" 499 0 "-" "WordPress/3.4.2; http://mywebsite.com"

    (The server was turned back on here.)

        119.235.237.16 - - [11/Nov/2012:11:05:30 +0900] "GET /blog HTTP/1.1" 200 32935 "-" "Yeti/1.0 (NHN Corp.; http://help.naver.com/robots/)"

    Read the article

  • How to spread XML Sitemaps over several webservers behind AWS loadbalancer?

    - by Jurik
    We have a web portal with almost a million products and many more other URLs. I wrote a script that checks the database: if a new URL is needed or an old one has been updated, the script updates/creates the XML sitemaps. But we have several servers behind the load balancer at our rented AWS space. Further, this script checks the database for each URL to see if there was an update, so that it updates the appropriate XML file too. My question is: how do I spread those XML sitemaps over all the webservers behind this AWS load balancer? Our approaches/ideas: We could just generate them on one server with a cron job and copy them to the other servers, but this could be difficult because the number of servers is raised automatically, and so on. We could put them on our S3, but S3 is not available through our domain, so I guess Google will have a problem with it. Or I could let my script run on every webserver, but change it so that it generates all the XML files each time if they do not exist; but then I would have conflicts with updated URLs in my database, where I save the timestamp of the last change for every URL. Is there another, better solution that I do not know of? Are there any special services by Amazon for such cases?
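
    On the S3 idea, a hedged sketch: a reverse-proxy location on each webserver can make the bucket's sitemaps appear under your own domain, so only one node (or a single cron job) has to generate and upload them. The bucket name and paths are placeholders:

        # nginx: serve /sitemaps/* from S3 under your own hostname.
        location /sitemaps/ {
            proxy_pass https://my-sitemap-bucket.s3.amazonaws.com/sitemaps/;
            proxy_set_header Host my-sitemap-bucket.s3.amazonaws.com;
        }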

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data Have you been hearing about “big data”, “map reduce” and other large scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to setup a “test” hadoop environment on your machine? More recently, I have read about some of Microsoft’s work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclinded to be successful with experimentation when on the Windows platform. Of course, it is not that I am "religious" about one set of technologies other another, but rather more experienced. Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach and it occured to me that PowerShell's support for Windows Remote Management (WinRM), and some other inherent features of PowerShell might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment. Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine. C:> winrm quickconfig WinRM is not set up to receive requests on this machine. The following changes must be made: Set the WinRM service type to auto start. Start the WinRM service. Make these changes [y/n]? y Alternatively, you could take the approach described in the Remotely enable PSRemoting post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node. Invoke-MapRedux Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet: $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell Module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined. Map Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is as follows. 
    Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.

        C:\> winrm quickconfig
        WinRM is not set up to receive requests on this machine.
        The following changes must be made:

        Set the WinRM service type to auto start.
        Start the WinRM service.

        Make these changes [y/n]? y

    Alternatively, you could take the approach described in the "Remotely enable PSRemoting" post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node.

    Invoke-MapRedux

    Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:

        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

    Invoke-MapRedux takes an instance of a MapReduceItem, which references the Map and Reduce scriptblocks; an array of computer names, which are the remote nodes; and the initial data set to be processed. As simple as that, you can start working with the concepts of big data and the MapReduce paradigm.

    Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined.

    Map

    Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is a simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property, which offers an array of data to be processed by the Map function.

        $aMap =
        {
            Param
            (
                [PsObject] $dataset
            )

            # Indicate the job is running on the remote node.
            Write-Host ($env:computername + "::Map");

            # The hashtable to return.
            $list = @{};

            # Perform the mapping work here and fill $list with your custom PSObjects.
            # The $dataset has a single 'Data' property which contains an array of data
            # rows, a subset of the originally submitted data set.

            # Return the hashtable (Key, PSObject).
            Write-Output $list;
        }

    Reduce

    Likewise, the Reduce function must follow a simple prototype: it takes a $key and a result $dataset produced by the MapRedux partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section.

        $aReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )

            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)

            # The hashtable to return.
            $redux = @{};

            # Return.
            Write-Output $redux;
        }

    All Together Now

    When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem, and call Invoke-MapRedux to get the process started:

        # Import the MapRedux and SQL Server providers.
        Import-Module "MapRedux"
        Import-Module "sqlps" -DisableNameChecking

        # Query the database for a dataset.
        Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb
        $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey";
        Write-Host "Query: $query"
        $dataset = Invoke-SqlCmd -query $query

        # Build the Map function.
        $MyMap =
        {
            Param
            (
                [PsObject] $dataset
            )

            Write-Host ($env:computername + "::Map");

            $list = @{};
            foreach($row in $dataset.Data)
            {
                # Write-Host ("Key: " + $row.MyKey.ToString());
                if($list.ContainsKey($row.MyKey) -eq $true)
                {
                    # Accumulate into the existing entry for this key.
                    $s = $list.Item($row.MyKey);
                    $s.Sum += $row.Value1;
                    $s.Count++;
                }
                else
                {
                    # First row for this key; create the accumulator object.
                    $s = New-Object PSObject;
                    $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey;
                    $s | Add-Member -Type NoteProperty -Name Sum -Value $row.Value1;
                    $s | Add-Member -Type NoteProperty -Name Count -Value 1;
                    $list.Add($row.MyKey, $s);
                }
            }

            Write-Output $list;
        }

        # Build the Reduce function.
        $MyReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )

            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)

            $redux = @{};
            $sum = 0;
            $count = 0;
            foreach($s in $dataset.Data)
            {
                $sum += $s.Sum;
                $count += 1;
            }

            # Reduce to the average value for this key.
            $redux.Add($key, $sum / $count);

            # Return.
            Write-Output $redux;
        }

        # Create the item data.
        $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce

        # Array of processing nodes...
        $MyNodes = ("node1", "node2", "node3", "node4", "localhost")

        # Run the MapReduce routine...
        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

        # Show the results.
        Set-Location C:\
        $MyMrResults | Out-GridView

    Conclusion

    I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done, and PowerShell could prove to be the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic exercise at home or school.
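    One closing aside: the partitioning step mentioned in the Reduce section, which joins the Map results by key, happens inside the module and is not shown above. Purely as a hypothetical illustration of the idea (not the module's actual internals, which I plan to cover in that later post), grouping the hashtables emitted by the Map jobs might look like this:

        # Sample stand-ins for the hashtables two Map jobs might emit.
        $allMapResults = @(
            @{ "key1" = 10; "key2" = 20 },   # from node1 (hypothetical)
            @{ "key1" = 30 }                 # from node2 (hypothetical)
        );

        # Merge them into one "key -> array of values" table for Reduce.
        $partitioned = @{};
        foreach($mapResult in $allMapResults)
        {
            foreach($key in $mapResult.Keys)
            {
                if($partitioned.ContainsKey($key) -eq $false)
                {
                    $partitioned[$key] = @();
                }
                # Append this node's value for the key.
                $partitioned[$key] += $mapResult[$key];
            }
        }

        # Each $partitioned[$key] would then be wrapped in a dataset object
        # and handed to the Reduce scriptblock for that key.
        $partitioned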
    Follow me on Twitter to stay up to date on the continuing progress of my PowerShell MapRedux module, and thanks for reading!

    Daniel

    Read the article

  • UPDATE MANAGER UNABLE TO UPDATE

    - by muguro
    "Requires installation of untrusted packages: the action would require the installation of packages from not authenticated sources." I get this error every time I try updating. The system shows that it has 466 updates but fails after clicking Update. More details lists these packages:

    accountsservice apparmor apport apport-gtk apt apt-transport-https apt-utils aptdaemon aptdaemon-data at-spi2-core bamfdaemon base-files bcmwl-kernel-source bind9-host compiz compiz-core compiz-gnome compiz-plugins-default cron cups cups-bsd cups-client cups-common cups-filters cups-ppdc dbus dbus-x11 dconf-gsettings-backend dconf-service desktop-file-utils dmsetup dnsutils empathy empathy-common eog evince evince-common evolution-data-server evolution-data-server-common firefox firefox-globalmenu firefox-gnome-support firefox-locale-en fontconfig fontconfig-config fonts-liberation fonts-opensymbol foomatic-filters gcalctool gdb ghostscript ghostscript-cups ghostscript-x ginn gir1.2-atspi-2.0 gir1.2-dbusmenu-glib-0.4 gir1.2-dbusmenu-gtk-0.4 gir1.2-gst-plugins-base-0.10 gir1.2-gtk-3.0 gir1.2-gtksource-3.0 gir1.2-gudev-1.0 gir1.2-javascriptcoregtk-3.0 gir1.2-launchpad-integration-3.0 gir1.2-pango-1.0 gir1.2-rb-3.0 gir1.2-totem-1.0 gir1.2-ubuntuoneui-3.0 gir1.2-unity-5.0 gir1.2-webkit-3.0 glib-networking glib-networking-common glib-networking-services gnome-accessibility-themes gnome-control-center gnome-control-center-data gnome-desktop3-data gnome-games-data gnome-icon-theme gnome-media gnome-orca gnome-settings-daemon gnome-sudoku gnomine gnupg google-talkplugin gpgv grub-common grub-pc grub-pc-bin grub2-common gstreamer0.10-alsa gstreamer0.10-plugins-base gstreamer0.10-plugins-base-apps gstreamer0.10-x gvfs gvfs-backends gvfs-bin gvfs-common gvfs-daemons gvfs-fuse gvfs-libs gwibber gwibber-service gwibber-service-facebook gwibber-service-identica gwibber-service-twitter hdparm hplip hplip-data indicator-sound initscripts isc-dhcp-client isc-dhcp-common jockey-common jockey-gtk krb5-locales landscape-client-ui-install language-pack-en language-pack-en-base language-pack-gnome-en language-pack-gnome-en-base launchpad-integration libaccountsservice0 libapt-inst1.4 libapt-pkg4.12 libart-2.0-2 libasound2 libatspi2.0-0 libbamf0 libbamf3-0 libbind9-80 libc-bin libc-dev-bin libc6 libc6-dev libcairo-gobject2 libcairo2

    Read the article

  • Is there any way to kill a zombie process without reboot?

    - by Pedram
    Is there any way to kill a zombie process without rebooting? Here is how it happens: I wanted to download a 12 GB torrent. After adding the .torrent file, Transmission turned into a zombie process. I tried KTorrent too; same behavior. Finally I could download the file using µTorrent, but after closing the program it turns into a zombie as well. I tried using kill, skill, and pkill with different options and the -9 signal, but no success. In some answers on the web I found that killing the parent can kill the zombie, but killing wine didn't help either. Is there another way?

    Edit: ps -o pid,ppid,stat,comm output:

      PID  PPID STAT COMMAND
     7121  2692 Ss   bash
     7317  7121 R+   ps

    pstree output:

    init---GoogleTalkPlugi---4*[{GoogleTalkPlug}]
     +-NetworkManager---dhclient
     | +-{NetworkManager}
     +-acpid
     +-apache2---5*[apache2]
     +-atd
     +-avahi-daemon---avahi-daemon
     +-bonobo-activati---{bonobo-activat}
     +-clock-applet
     +-console-kit-dae---63*[{console-kit-da}]
     +-cron
     +-cupsd
     +-2*[dbus-daemon]
     +-2*[dbus-launch]
     +-desktopcouch-se---desktopcouch-se
     +-explorer.exe
     +-firefox---run-mozilla.sh---firefox-bin---plugin-containe---8*[{plugin-contain}]
     | +-14*[{firefox-bin}]
     +-gconfd-2
     +-gdm-binary---gdm-simple-slav---Xorg
     | | +-gdm-session-wor---gnome-session---bluetooth-apple
     | | | | +-fusion-icon---compiz---sh---gtk-window-deco
     | | | | +-gdu-notificatio
     | | | | +-gnome-panel
     | | | | +-gnome-power-man
     | | | | +-gpg-agent
     | | | | +-nautilus---bash
     | | | | | +-{nautilus}
     | | | | +-nm-applet
     | | | | +-polkit-gnome-au
     | | | | +-2*[python]
     | | | | +-qstardict---{qstardict}
     | | | | +-ssh-agent
     | | | | +-tracker-applet
     | | | | +-trackerd
     | | | | +-wakoopa---wakoopa
     | | | | | +-3*[{wakoopa}]
     | | | | +-{gnome-session}
     | | | +-{gdm-session-wo}
     | | +-{gdm-simple-sla}
     | +-{gdm-binary}
     +-6*[getty]
     +-gnome-keyring-d---2*[{gnome-keyring-}]
     +-gnome-screensav
     +-gnome-settings-
     +-gnome-system-mo---{gnome-system-m}
     +-gnome-terminal---bash---ssh
     | +-bash---pstree
     | +-gnome-pty-helpe
     | +-{gnome-terminal}
     +-gvfs-afc-volume---{gvfs-afc-volum}
     +-gvfs-fuse-daemo---3*[{gvfs-fuse-daem}]
     +-gvfs-gdu-volume
     +-gvfsd
     +-gvfsd-burn
     +-gvfsd-http
     +-gvfsd-metadata
     +-gvfsd-trash
     +-hald---hald-runner---hald-addon-acpi
     | | +-hald-addon-cpuf
     | | +-hald-addon-inpu
     | | +-hald-addon-stor
     | +-{hald}
     +-hotot---xdg-open
     | +-3*[{hotot}]
     +-indicator-apple
     +-indicator-me-se
     +-indicator-sessi
     +-irqbalance
     +-kded4
     +-kdeinit4---kio_http_cache_
     | +-klauncher
     +-kglobalaccel
     +-knotify4
     +-modem-manager
     +-multiload-apple
     +-mysqld---10*[{mysqld}]
     +-named---10*[{named}]
     +-nmbd
     +-notification-ar
     +-notify-osd
     +-pidgin---{pidgin}
     +-polkitd
     +-pulseaudio---gconf-helper
     | +-2*[{pulseaudio}]
     +-rsyslogd---2*[{rsyslogd}]
     +-rtkit-daemon---2*[{rtkit-daemon}]
     +-services.exe---plugplay.exe---2*[{plugplay.exe}]
     | +-winedevice.exe---3*[{winedevice.exe}]
     | +-3*[{services.exe}]
     +-smbd---smbd
     +-snmpd
     +-sshd
     +-timidity
     +-trashapplet
     +-udevd---2*[udevd]
     +-udisks-daemon---udisks-daemon
     | +-{udisks-daemon}
     +-upowerd
     +-upstart-udev-br
     +-utorrent.exe---8*[winemenubuilder]
     | +-{utorrent.exe}
     +-vnstatd
     +-winbindd---2*[winbindd]
     +-2*[winemenubuilder]
     +-wineserver
     +-wnck-applet
     +-wpa_supplicant
     +-xinetd

    System monitor and top screenshots show the zombie process is using resources.

    Read the article

  • django & postgres linux hosting (with SSH access) recommendations

    - by Justin Grant
    We're looking for a good place to host our custom Django app (a fork of OSQA) and its PostgreSQL backend. Requirements include:

    - Linux
    - Python 2.6 or (ideally) Python 2.7
    - Django 1.2
    - Postgres 8.4 or later
    - DB backup/restore handled by the host, not us
    - OS and dev-platform-stack patching/maintenance handled by the host, not us
    - SSH access (so we can pull source code from GitHub, install Python eggs, etc.)
    - ability to set up cron jobs (e.g. to send out daily email updates)
    - ability to send up to 10K emails/day
    - good performance (not ganged up with a zillion other sites on one CPU, not starved for RAM)
    - FTP or SCP access to web logs
    - dedicated public IP
    - SSL support
    - costs under $1000/month for a relatively small site (<5M pageviews/month)
    - good customer service

    We already have a prototype site running on EC2 on top of a Bitnami DjangoStack. The problem is that we have to patch the OS, patch Postgres, etc. We'd really prefer a platform-as-a-service (PaaS) offering, like Heroku offers for Rails apps, where all we need to worry about is deploying our code instead of worrying about system software patching and maintenance. Google App Engine is closest to what we're looking for, but they don't offer relational DB access (not yet, at least). Anyone have a recommendation?

    Read the article

< Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >