Search Results

Search found 5679 results on 228 pages for 'kill processes'.


  • Issues with Cinnamon?

    - by Corrodie
    I just recently switched my system over to Ubuntu 12.10 and decided on Cinnamon as my environment -- it all worked fine, at first. But I was poorly educated and started using Compiz and Emerald along with it, setting both as replacements in the startup processes. I now know that's a big, big mistake. Now when loading Cinnamon, I am greeted by my background image, and only that. My only option seems to be to open a terminal. I was advised to attempt muffin --replace and mutter --replace, neither to any avail; the terminal closes, and I cannot load another one unless I completely reload. I went back to Unity, purged and autoremoved Cinnamon, Emerald and compizconfig, and attempted to reinstall Cinnamon, thinking that would solve the problem -- no, it came back just as broken as before. So I reinstalled Ubuntu, then Cinnamon -- still broken. I'm assuming I must find a way to remove the replace commands, but as I have no menu, I'm not sure I can do that. Is there any way I can access the startup processes via the terminal? I'd think, though, that if I completely removed Cinnamon, all configurations would be gone too, so it's just not making much sense. Is there some kind of reset I could possibly do? I've been browsing forums and questions here, all leading to things I'd already done, so it can't hurt to ask for myself. I apologize if you would rather I had posted this over at Mint. Next time, I will definitely check compatibility instead of assuming something just has to work. Any help is greatly appreciated, thanks!

    EDIT: It seems that although it didn't allow me to do it before, I was now able to access the settings and startup processes for Cinnamon via Unity, and after quickly removing the aforementioned processes I'm up and running again.
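
    For the terminal-only situation described above, one way to reach the startup entries without a menu is to edit the XDG autostart folders directly. The sketch below is only illustrative: the paths follow the usual XDG autostart convention, and whether the Compiz/Emerald "--replace" entries actually live there on a given Cinnamon setup is an assumption.

        #!/usr/bin/env python3
        # Sketch: list XDG autostart entries and disable any that launch
        # compiz or emerald by renaming them. Whether the "--replace"
        # entries were stored here is an assumption about this setup.
        import glob
        import os
        import shutil

        AUTOSTART_DIRS = [os.path.expanduser("~/.config/autostart"),
                          "/etc/xdg/autostart"]          # system dir needs sudo to change
        SUSPECTS = ("compiz", "emerald")

        for d in AUTOSTART_DIRS:
            for desktop in glob.glob(os.path.join(d, "*.desktop")):
                with open(desktop) as f:
                    content = f.read().lower()
                if any(s in content for s in SUSPECTS):
                    print("disabling " + desktop)
                    shutil.move(desktop, desktop + ".disabled")   # keeps a backup copy
                else:
                    print("keeping   " + desktop)

    Per-user entries in ~/.config/autostart are usually the ones added by hand, so checking that folder first is the least invasive option.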

    Read the article

  • X-notifier doesn't work in Chromium Browser

    - by cipricus
    It just keeps checking in vain. I also cannot import or export data, but get this error. I use the latest versions of both in Lubuntu 12.04. In Google Chrome it works. What could the problem be?

    Edit (following vasa1's comment): running sudo aa-status I get:

        apparmor module is loaded.
        16 profiles are loaded.
        16 profiles are in enforce mode.
           /sbin/dhclient
           /usr/bin/evince
           /usr/bin/evince-previewer
           /usr/bin/evince-previewer//launchpad_integration
           /usr/bin/evince-previewer//sanitized_helper
           /usr/bin/evince-thumbnailer
           /usr/bin/evince-thumbnailer//sanitized_helper
           /usr/bin/evince//launchpad_integration
           /usr/bin/evince//sanitized_helper
           /usr/lib/NetworkManager/nm-dhcp-client.action
           /usr/lib/connman/scripts/dhclient-script
           /usr/lib/cups/backend/cups-pdf
           /usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper
           /usr/sbin/cupsd
           /usr/sbin/ntpd
           /usr/sbin/tcpdump
        0 profiles are in complain mode.
        3 processes have profiles defined.
        3 processes are in enforce mode.
           /sbin/dhclient (1562)
           /usr/sbin/cupsd (916)
           /usr/sbin/ntpd (1695)
        0 processes are in complain mode.
        0 processes are unconfined but have a profile defined.
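
    Since the aa-status output above was collected to rule AppArmor in or out, a small script can also report whether the browser process itself is confined. The sketch below reads the standard /proc/<pid>/attr/current interface on AppArmor-enabled kernels; the process names it matches ("chromium", "chrome") are assumptions about how the browser shows up in the process list.

        #!/usr/bin/env python3
        # Sketch: report the AppArmor confinement label of browser processes.
        # "unconfined" means AppArmor is not restricting that process.
        import os

        def confinement(pid):
            try:
                with open("/proc/%s/attr/current" % pid) as f:
                    return f.read().strip() or "unconfined"
            except IOError:
                return "unknown"

        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open("/proc/%s/comm" % pid) as f:
                    name = f.read().strip()
            except IOError:
                continue
            if "chromium" in name or "chrome" in name:     # assumed process names
                print("%s %s %s" % (pid, name, confinement(pid)))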

    Read the article

  • Linux C: "Interactive session" with separate read and write named pipes?

    - by ~sd-imi
    Hi all, I am trying to work with "Introduction to Interprocess Communication Using Named Pipes - Full-Duplex Communication Using Named Pipes", http://developers.sun.com/solaris/articles/named_pipes.html#5 ; in particular fd_server.c (included below for reference).

    Here is my info and compile line:

        :~$ cat /etc/issue
        Ubuntu 10.04 LTS \n \l
        :~$ gcc --version
        gcc (Ubuntu 4.4.3-4ubuntu5) 4.4.3
        :~$ gcc fd_server.c -o fd_server

    fd_server.c creates two named pipes, one for reading and one for writing. What one can do is: in one terminal, run the server and read (through cat) its write pipe:

        :~$ ./fd_server & 2>/dev/null
        [1] 11354
        :~$ cat /tmp/np2

    and in another, write (using echo) to the server's read pipe:

        :~$ echo "heeellloooo" > /tmp/np1

    Going back to the first terminal, one can see:

        :~$ cat /tmp/np2
        HEEELLLOOOO
        0
        [1]+  Exit 13    ./fd_server 2>/dev/null

    What I would like to do is make a sort of "interactive" (or "shell"-like) session; that is, the server is run as usual, but instead of running "cat" and "echo", I'd like to use something akin to screen. What I mean by that is that screen can be called like screen /dev/ttyS0 38400, and it then makes a sort of interactive session, where what is typed in the terminal is passed to /dev/ttyS0, and its response is written to the terminal. Now, of course, I cannot use screen, because in my case the program has two separate nodes, and as far as I can tell, screen can refer to only one. How would one go about achieving this sort of "interactive" session in this context (with two separate read/write pipes)? Thanks, Cheers! Code below:

        #include <stdio.h>
        #include <errno.h>
        #include <ctype.h>
        #include <sys/types.h>
        #include <sys/stat.h>
        #include <fcntl.h>
        //#include <fullduplex.h> /* For name of the named-pipe */
        #define NP1 "/tmp/np1"
        #define NP2 "/tmp/np2"
        #define MAX_BUF_SIZE 255
        #include <stdlib.h> //exit
        #include <string.h> //strlen

        int main(int argc, char *argv[])
        {
            int rdfd, wrfd, ret_val, count, numread;
            char buf[MAX_BUF_SIZE];

            /* Create the first named pipe */
            ret_val = mkfifo(NP1, 0666);
            if ((ret_val == -1) && (errno != EEXIST)) {
                perror("Error creating the named pipe");
                exit (1);
            }
            ret_val = mkfifo(NP2, 0666);
            if ((ret_val == -1) && (errno != EEXIST)) {
                perror("Error creating the named pipe");
                exit (1);
            }

            /* Open the first named pipe for reading */
            rdfd = open(NP1, O_RDONLY);
            /* Open the second named pipe for writing */
            wrfd = open(NP2, O_WRONLY);

            /* Read from the first pipe */
            numread = read(rdfd, buf, MAX_BUF_SIZE);
            buf[numread] = '\0';
            fprintf(stderr, "Full Duplex Server : Read From the pipe : %s\n", buf);

            /* Convert the string to upper case */
            count = 0;
            while (count < numread) {
                buf[count] = toupper(buf[count]);
                count++;
            }

            /*
             * Write the converted string back to the second
             * pipe
             */
            write(wrfd, buf, strlen(buf));
        }

    Edit: Right, just to clarify - it seems I found a document discussing something very similar: http://en.wikibooks.org/wiki/Serial_Programming/Serial_Linux#Configuration_with_stty - a modification of the script there ("For example, the following script configures the device and starts a background process for copying all received data from the serial device to standard output...") for the above program is below:

        # stty raw
        ( ./fd_server 2>/dev/null; )&
        bgPidS=$!
        ( cat < /tmp/np2 ; )&
        bgPid=$!

        # Read commands from user, send them to device
        echo $(kill -0 $bgPidS 2>/dev/null ; echo $?)
        while [ "$(kill -0 $bgPidS 2>/dev/null ; echo $?)" -eq "0" ] && read cmd; do
            # redirect debug msgs to stderr, as here we're redirected to /tmp/np1
            echo "$? - $bgPidS - $bgPid" >&2
            echo "$cmd"
            echo -e "\nproc: $(kill -0 $bgPidS 2>/dev/null ; echo $?)" >&2
        done >/tmp/np1
        echo OUT

        # Terminate background read process - if they still exist
        if [ "$(kill -0 $bgPid 2>/dev/null ; echo $?)" -eq "0" ] ; then
            kill $bgPid
        fi
        if [ "$(kill -0 $bgPidS 2>/dev/null ; echo $?)" -eq "0" ] ; then
            kill $bgPidS
        fi
        # stty cooked

    So, saving the script as, say, starter.sh and calling it results in the following session:

        $ ./starter.sh
        0
        i'm typing here and pressing [enter] at end
        0 - 13496 - 13497
        I'M TYPING HERE AND PRESSING [ENTER] AT END
        0~?.N=?(?~? ?????}????@??????~? [garble]
        proc: 0
        OUT

    which is what I'd call an "interactive session" (ignoring the debug statements) - the server waits for me to enter a command; it gives its output after it receives a command (and since in this case it exits after the first command, so does the starter script). Except that I'd like not to have buffered input, but to have it sent character by character (meaning the above session should exit after the first key press and print out a single letter only - which is what I expected stty raw would help with, but it doesn't: it just kills the reaction to both Enter and Ctrl-C :) ). I was just wondering if there already is an existing command (akin to screen in respect to serial devices, I guess) that would accept two such named pipes as arguments and establish a "terminal" or "shell"-like session through them; or would I have to use scripts as above and/or program my own 'client' that will behave as a terminal...
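
    As for the "interactive session" being asked about: one way to get screen-like behaviour over the two FIFOs, without writing a C client, is a small select() loop that shuttles bytes between the terminal and the pipes. The sketch below assumes the server above is already running and that it reads /tmp/np1 and writes /tmp/np2; putting the terminal in raw mode gives the unbuffered, character-at-a-time input mentioned at the end of the question (and since fd_server exits after one request, the loop ends when the write side of np2 closes).

        #!/usr/bin/env python3
        # Sketch: an interactive "terminal" over two named pipes. Start
        # fd_server first so the FIFOs exist; keystrokes go to np1 one
        # character at a time, anything arriving on np2 is echoed back.
        import os
        import select
        import sys
        import termios
        import tty

        NP1, NP2 = "/tmp/np1", "/tmp/np2"

        old = termios.tcgetattr(sys.stdin)
        tty.setraw(sys.stdin.fileno())            # character-at-a-time input, no echo
        try:
            wr = os.open(NP1, os.O_WRONLY)        # pairs with the server's open of np1
            rd = os.open(NP2, os.O_RDONLY)        # blocks until the server opens np2
            while True:
                ready, _, _ = select.select([sys.stdin, rd], [], [])
                if sys.stdin in ready:
                    ch = os.read(sys.stdin.fileno(), 1)
                    if ch in (b"\x03", b""):      # Ctrl-C or EOF ends the session
                        break
                    os.write(sys.stdout.fileno(), ch)   # local echo
                    os.write(wr, ch)
                if rd in ready:
                    data = os.read(rd, 4096)
                    if not data:                  # server closed its end
                        break
                    os.write(sys.stdout.fileno(), data)
        finally:
            termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old)

    Because the terminal is raw, each keystroke reaches the server immediately, so with the one-shot fd_server above the session ends after a single character, which matches the behaviour the question says it wanted from stty raw.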

    Read the article

  • Why Fusion Middleware matters to Oracle Applications and Fusion Applications customers?

    - by Harish Gaur
    Did you miss this general session on Monday morning presented by Amit Zavery, VP of Oracle Fusion Middleware Product Management? A recording will be made available shortly; in the meantime, here is a recap.

    Amit presented 5 strategies customers can leverage today to extend their applications (Figure 1: 5 Oracle Fusion Middleware strategies to extend Oracle Applications & Oracle Fusion Apps):

    1. Engage Everyone – Provide an intuitive and social experience for application users using Oracle WebCenter
    2. Extend Enterprise – Extend Oracle Applications to mobile devices using Oracle ADF Mobile
    3. Orchestrate Processes – Automate key organization processes across on-premise & cloud applications using Oracle BPM Suite & Oracle SOA Suite
    4. Secure the Core – Provide single sign-on and self-service provisioning across multiple apps using Oracle Identity Management
    5. Optimize Performance – Leverage the Exalogic stack to consolidate multiple instances and improve performance of Oracle Applications

    The session included 3 demonstrations to illustrate these strategies:

    1. The first demo highlighted the significance of mobile applications for unlocking existing investment in applications such as EBS. Using a native iPhone application interacting with e-Business Suite, the demo showed how expense approval can be mobile enabled with enhanced visibility using BI dashboards.
    2. The second demo showed how you can extend a banking process in Siebel and Oracle Policy Automation with Oracle BPM Suite. The process starts in Siebel with a customer requesting a loan, then jumps to OPA for loan recommendations and decision making, with loan processing and approvals handled in BPM Suite. Once approvals are completed, Siebel is updated to complete the process.
    3. The final demo showcased FMW components inside Fusion Applications, specifically WebCenter.

    Boeing, Underwriter Laboratories and Electronic Arts joined this quest and discussed 3 different approaches to leveraging the Fusion Middleware stack to maximize their investment in Oracle Applications and/or Fusion Applications technology. Let's briefly review what these customers shared during the session:

    1. Extend Fusion Applications

    We know that Oracle Fusion Middleware is the underlying technology infrastructure for Oracle Fusion Applications. Architecturally, Oracle Fusion Apps leverages several components of Oracle Fusion Middleware, from Oracle WebCenter for a rich collaborative interface and Oracle SOA Suite & Oracle BPM Suite for orchestrating key underlying processes to Oracle BIEE for dashboarding and analytics.

    Boeing talked about how they are using Oracle BPM Suite 11g, a key component of Oracle Fusion Middleware, with Oracle Fusion Apps to transform their supply chain. Tim Murnin, Director of Supply Chain, talked about Boeing's 5-year supply chain transformation journey. Boeing's Integrated and Information Management division began with automation of the critical RFQ process using Oracle BPM Suite. This first phase resulted in a 38% reduction in labor costs for RFPs. As a next step in this effort, Boeing is now creating a platform to enable electronic Order Management. Fusion Apps are playing a significant role in this phase. Boeing has gone live with Oracle Fusion Product Hub, and efforts are underway with Oracle Fusion Distributed Order Orchestration (DOO).

    So, where does Oracle BPM Suite 11g fit in this equation? Let me explain. Business processes within Fusion Apps are designed using 2 standards: Business Process Execution Language (BPEL) and Business Process Modeling Notation (BPMN). These processes can be easily configured using a declarative set of tools. Boeing leverages Oracle BPM Suite 11g (which supports BPMN 2.0) and Oracle SOA Suite (which supports BPEL) to "extend" these applications. Traditionally, customizations are done within an app using native technologies. Instead of making process changes within Fusion Apps, Boeing has taken the approach of building an "extensions" layer on top of the application. (Fig 2: Boeing's use of Oracle BPM Suite to orchestrate key supply chain processes across Fusion Apps)

    2. Maximize Oracle Applications investment

    Fusion Middleware appeals not only to Fusion Apps customers, but is also leveraged significantly by Oracle E-Business Suite, PeopleSoft, Siebel and JD Edwards customers. Using Oracle BPM Suite and Oracle SOA Suite is the recommended extension strategy for Oracle Fusion Apps and Oracle Applications Unlimited customers. Electronic Arts, an E-Business Suite customer, spoke about their strategy to transform their order-to-cash process using Oracle SOA Suite, Oracle Foundation Packs and Oracle BAM.

    Udesh Naicker, Sr Director of IT at Electronic Arts (EA), discussed how the growth of social and digital gaming had started to put tremendous pressure on EA's existing IT infrastructure. He discussed the challenge of millions of micro-transactions coming from several sources – Microsoft Xbox, PayPal, several service providers. EA found its order-to-cash processes stretched to their limits. They lacked visibility into these transactions across the entire value chain. EA began by consolidating their E-Business Suite R11 instances into a single E-Business Suite R12. EA needed to cater to a variety of service requirements, connectivity methods, file formats, and information latency. Their integration strategy was tactical, i.e., using file uploads, TIBCO, SQL scripts. After consolidating E-Business Suite, EA standardized their integration approach with Oracle SOA Suite and Oracle AIA Foundation Pack. Oracle SOA Suite is the platform used to extend E-Business Suite R12 and standardize 60+ interfaces across several heterogeneous systems including PeopleSoft, Demantra, SF.com, Workday, and managed EDI services spanning on-premise, hosted and cloud applications.

    EA believes that the Oracle SOA Suite 11g based extension strategy has helped significantly in the following ways:
    - It helped them keep customizations out of E-Business Suite, thereby keeping EBS R12 vanilla and upgrade safe
    - Developers are now proficient in technology which is also leveraged by Fusion Apps; this has helped them prepare for adoption of Fusion Apps in the future
    (Fig 3: Using Oracle SOA Suite & Oracle e-Business Suite, Electronic Arts built a new platform for order processing)

    3. Consolidate apps and improve scalability

    Exalogic is an optimal platform for customers to consolidate their application deployments and enhance performance. Underwriter Laboratories talked about their strategy to run their mission-critical applications, including e-Business Suite, on Exalogic. Christian Anschuetz, CIO of Underwriter Laboratories (UL), shared how UL is on a growth path – $1B to $2.5B in 5 years – and planning a significant business transformation from a not-for-profit to a for-profit business. To support this growth, UL is planning to simplify its IT environment and the deployment complexity associated with ERP applications and the technology they run on. Their current applications were deployed on a variety of hardware platforms and lacked a comprehensive disaster recovery architecture.

    UL embarked on a mission to deploy E-Business Suite on Exalogic. UL's solution is unique because it is one of the first to deploy a large number of Oracle applications and related Fusion Middleware technologies (SOA, BI, Analytical Applications, AIA Foundation Pack and the AIA EBS-to-Siebel UCM prebuilt integration) on the combined Exalogic and Exadata environment. UL is planning to move to a virtualized architecture toward the end of 2012 to securely host external-facing applications like iStore. (Fig 4: Underwriter Labs deployed e-Business Suite on Exalogic to achieve performance gains)

    Key takeaways are:
    - The Fusion Middleware platform is certified with major Oracle Applications Unlimited offerings. Fusion Middleware is the underlying technological infrastructure for Fusion Apps
    - Customers choose Oracle Fusion Middleware to extend their applications (Apps Unlimited or Fusion Apps), to keep applications upgrade safe and to prepare for Fusion Apps
    - Exalogic is an optimum platform to consolidate application deployments and enhance performance

    Read the article

  • Taking the training wheels off: Accelerating the Business with Oracle IAM by Brian Mozinski (Accenture)

    - by Greg Jensen
    Today, technical requirements for IAM are evolving rapidly, and the bar is continuously raised for high performance IAM solutions as organizations look to roll out high volume use cases on the back of legacy systems.  Existing solutions were often designed and architected to support offline transactions and manual processes, and the business owners today demand globally scalable infrastructure to support the growth their business cases are expected to deliver. To help IAM practitioners address these challenges and make their organizations and themselves more successful, this series we will outline the: • Taking the training wheels off: Accelerating the Business with Oracle IAM The explosive growth in expectations for IAM infrastructure, and the business cases they support to gain investment in new security programs. • "Necessity is the mother of invention": Technical solutions developed in the field Well proven tricks of the trade, used by IAM guru’s to maximize your solution while addressing the requirements of global organizations. • The Art & Science of Performance Tuning of Oracle IAM 11gR2 Real world examples of performance tuning with Oracle IAM • No Where to go but up: Extending the benefits of accelerated IAM Anything is possible, compelling new solutions organizations are unlocking with accelerated Oracle IAM Let’s get started … by talking about the changing dynamics driving these discussions. Big Companies are getting bigger everyday, and increasingly organizations operate across state lines, multiple times zones, and in many countries or continents at the same time.  No longer is midnight to 6am a safe time to take down the system for upgrades, to run recon’s and import or update user accounts and attributes.  Further IT organizations are operating as shared services with SLA’s similar to telephone carrier levels expected by their “clients”.  Workers are moved in and out of roles on a weekly, daily, or even hourly rate and IAM is expected to support those rapid changes.  End users registering for services during business hours in Singapore are expected their access to be green-lighted in custom apps hosted in Portugal within the hour.  Many of the expectations of asynchronous systems and batched updates are not adequate and the number and types of users is growing. When organizations acted more like independent teams at functional or geographic levels it was manageable to have processes that relied on a handful of people who knew how to make things work …. Knew how to get you access to the key systems to get your job done.  Today everyone is expected to do more with less, the finance administrator previously supporting their local Atlanta sales office might now be asked to help close the books for the Johannesburg team, and access certification process once completed monthly by Joan on the 3rd floor is now done by a shared pool of resources in Sao Paulo.   Fragmented processes that rely on institutional knowledge to get access to systems and get work done quickly break down in these scenarios.  Highly robust processes that have automated workflows for connected or disconnected systems give organizations the dynamic flexibility to share work across these lines and cut costs or increase productivity. As the IT industry computing paradigms continue to change with the passing of time, and as mature or proven approaches become clear, it is normal for organizations to adjust accordingly. 
Businesses must manage identity in an increasingly hybrid world in which legacy on-premises IAM infrastructures are extended or replaced to support more and more interconnected and interdependent services to a wider range of users. The old legacy IAM implementation models we had relied on to manage identities no longer apply. End users expect to self-request access to services from their tablet, get supervisor approval over mobile devices and email, and launch the application even if is hosted on the cloud, or run by a partner, vendor, or service provider. While user expectations are higher, they are also simpler … logging into custom desktop apps to request approvals, or going through email or paper based processes for certification is unacceptable.  Users expect security to operate within the paradigm of the application … i.e. feel like the application they are using. Citizen and customer facing applications have evolved from every where, with custom applications, 3rd party tools, and merging in from acquired entities or 3rd party OEM’s resold to expand your portfolio of services.  These all have their own user stores, authentication models, user lifecycles, session management, etc.  Often the designers/developers are no longer accessible and the documentation is limited.  Bringing together underlying directories to scale for growth, and improve user experience is critical for revenue … but also for operations. Job functions are more dynamic.... take the Olympics for example.  Endless organizations from corporations broadcasting, endorsing, or marketing through the event … to non-profit athletic foundations and public/government entities for athletes and public safety, all operate simultaneously on the world stage.  Each organization needs to spin up short-term teams, often dealing with proprietary information from hot ads to racing strategies or security plans.  IAM is expected to enable team’s to spin up, enable new applications, protect privacy, and secure critical infrastructure.  Then it needs to be disabled just as quickly as users go back to their previous responsibilities. On a more technical level … Optimized system directory; tuning guidelines and parameters are needed by businesses today. Business’s need to be making the right choices (virtual directories) and considerations via choosing the correct architectural patterns (virtual, direct, replicated, and tuning), challenge is that business need to assess and chose the correct architectural patters (centralized, virtualized, and distributed) Today's Business organizations have very complex heterogeneous enterprises that contain diverse and multifaceted information. With today's ever changing global landscape, the strategic end goal in challenging times for business is business agility. The business of identity management requires enterprise's to be more agile and more responsive than ever before. The continued proliferation of networking devices (PC, tablet, PDA's, notebooks, etc.) has caused the number of devices and users to be granted access to these devices to grow exponentially. Business needs to deploy an IAM system that can account for the demands for authentication and authorizations to these devices. Increased innovation is forcing business and organizations to centralize their identity management services. 
Access management needs to handle traditional web based access as well as handle new innovations around mobile, as well as address insufficient governance processes which can lead to rouge identity accounts, which can then become a source of vulnerabilities within a business’s identity platform. Risk based decisions are providing challenges to business, for an adaptive risk model to make proper access decisions via standard Web single sign on for internal and external customers,. Organizations have to move beyond simple login and passwords to address trusted relationship questions such as: Is this a trusted customer, client, or citizen? Is this a trusted employee, vendor, or partner? Is this a trusted device? Without a solid technological foundation, organizational performance, collaboration, constituent services, or any other organizational processes will languish. A Single server location presents not only network concerns for distributed user base, but identity challenges. The network risks are centered on latency of the long trip that the traffic has to take. Other risks are a performance around availability and if the single identity server is lost, all access is lost. As you can see, there are many reasons why performance tuning IAM will have a substantial impact on the success of your organization.  In our next installment in the series we roll up our sleeves and get into detailed tuning techniques used everyday by thought leaders in the field implementing Oracle Identity & Access Management Solutions.

    Read the article

  • Solving embarassingly parallel problems using Python multiprocessing

    - by gotgenes
    How does one use multiprocessing to tackle embarrassingly parallel problems? Embarrassingly parallel problems typically consist of three basic parts:

    1. Read input data (from a file, database, tcp connection, etc.).
    2. Run calculations on the input data, where each calculation is independent of any other calculation.
    3. Write results of calculations (to a file, database, tcp connection, etc.).

    We can parallelize the program in two dimensions:

    - Part 2 can run on multiple cores, since each calculation is independent; order of processing doesn't matter.
    - Each part can run independently. Part 1 can place data on an input queue, part 2 can pull data off the input queue and put results onto an output queue, and part 3 can pull results off the output queue and write them out.

    This seems a most basic pattern in concurrent programming, but I am still lost in trying to solve it, so let's write a canonical example to illustrate how this is done using multiprocessing. Here is the example problem: given a CSV file with rows of integers as input, compute their sums. Separate the problem into three parts, which can all run in parallel:

    1. Process the input file into raw data (lists/iterables of integers)
    2. Calculate the sums of the data, in parallel
    3. Output the sums

    Below is a traditional, single-process bound Python program which solves these three tasks:

        #!/usr/bin/env python
        # -*- coding: UTF-8 -*-
        # basicsums.py
        """A program that reads integer values from a CSV file and writes out their
        sums to another CSV file.
        """

        import csv
        import optparse
        import sys

        def make_cli_parser():
            """Make the command line interface parser."""
            usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV",
                    __doc__,
                    """
        ARGUMENTS:
            INPUT_CSV: an input CSV file with rows of numbers
            OUTPUT_CSV: an output file that will contain the sums\
        """])
            cli_parser = optparse.OptionParser(usage)
            return cli_parser

        def parse_input_csv(csvfile):
            """Parses the input CSV and yields tuples with the index of the row
            as the first element, and the integers of the row as the second
            element. The index is zero-index based.

            :Parameters:
            - `csvfile`: a `csv.reader` instance

            """
            for i, row in enumerate(csvfile):
                row = [int(entry) for entry in row]
                yield i, row

        def sum_rows(rows):
            """Yields a tuple with the index of each input list of integers
            as the first element, and the sum of the list of integers as the
            second element. The index is zero-index based.

            :Parameters:
            - `rows`: an iterable of tuples, with the index of the original row
              as the first element, and a list of integers as the second element

            """
            for i, row in rows:
                yield i, sum(row)

        def write_results(csvfile, results):
            """Writes a series of results to an outfile, where the first column
            is the index of the original row of data, and the second column is
            the result of the calculation. The index is zero-index based.

            :Parameters:
            - `csvfile`: a `csv.writer` instance to which to write results
            - `results`: an iterable of tuples, with the index (zero-based) of
              the original row as the first element, and the calculated result
              from that row as the second element

            """
            for result_row in results:
                csvfile.writerow(result_row)

        def main(argv):
            cli_parser = make_cli_parser()
            opts, args = cli_parser.parse_args(argv)
            if len(args) != 2:
                cli_parser.error("Please provide an input file and output file.")
            infile = open(args[0])
            in_csvfile = csv.reader(infile)
            outfile = open(args[1], 'w')
            out_csvfile = csv.writer(outfile)
            # gets an iterable of rows that's not yet evaluated
            input_rows = parse_input_csv(in_csvfile)
            # sends the rows iterable to sum_rows() for results iterable, but
            # still not evaluated
            result_rows = sum_rows(input_rows)
            # finally evaluation takes place as a chain in write_results()
            write_results(out_csvfile, result_rows)
            infile.close()
            outfile.close()

        if __name__ == '__main__':
            main(sys.argv[1:])

    Let's take this program and rewrite it to use multiprocessing to parallelize the three parts outlined above. Below is a skeleton of this new, parallelized program, that needs to be fleshed out to address the parts in the comments:

        #!/usr/bin/env python
        # -*- coding: UTF-8 -*-
        # multiproc_sums.py
        """A program that reads integer values from a CSV file and writes out their
        sums to another CSV file, using multiple processes if desired.
        """

        import csv
        import multiprocessing
        import optparse
        import sys

        NUM_PROCS = multiprocessing.cpu_count()

        def make_cli_parser():
            """Make the command line interface parser."""
            usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV",
                    __doc__,
                    """
        ARGUMENTS:
            INPUT_CSV: an input CSV file with rows of numbers
            OUTPUT_CSV: an output file that will contain the sums\
        """])
            cli_parser = optparse.OptionParser(usage)
            cli_parser.add_option('-n', '--numprocs', type='int',
                    default=NUM_PROCS,
                    help="Number of processes to launch [DEFAULT: %default]")
            return cli_parser

        def main(argv):
            cli_parser = make_cli_parser()
            opts, args = cli_parser.parse_args(argv)
            if len(args) != 2:
                cli_parser.error("Please provide an input file and output file.")
            infile = open(args[0])
            in_csvfile = csv.reader(infile)
            outfile = open(args[1], 'w')
            out_csvfile = csv.writer(outfile)

            # Parse the input file and add the parsed data to a queue for
            # processing, possibly chunking to decrease communication between
            # processes.

            # Process the parsed data as soon as any (chunks) appear on the
            # queue, using as many processes as allotted by the user
            # (opts.numprocs); place results on a queue for output.
            #
            # Terminate processes when the parser stops putting data in the
            # input queue.

            # Write the results to disk as soon as they appear on the output
            # queue.

            # Ensure all child processes have terminated.

            # Clean up files.
            infile.close()
            outfile.close()

        if __name__ == '__main__':
            main(sys.argv[1:])

    These pieces of code, as well as another piece of code that can generate example CSV files for testing purposes, can be found on github. I would appreciate any insight here as to how you concurrency gurus would approach this problem. Here are some questions I had when thinking about this problem. Bonus points for addressing any/all:

    - Should I have child processes for reading in the data and placing it into the queue, or can the main process do this without blocking until all input is read?
    - Likewise, should I have a child process for writing the results out from the processed queue, or can the main process do this without having to wait for all the results?
    - Should I use a process pool for the sum operations? If yes, what method do I call on the pool to get it to start processing the results coming into the input queue, without blocking the input and output processes too? apply_async()? map_async()? imap()? imap_unordered()?
    - Suppose we didn't need to siphon off the input and output queues as data entered them, but could wait until all input was parsed and all results were calculated (e.g., because we know all the input and output will fit in system memory). Should we change the algorithm in any way (e.g., not run any processes concurrently with I/O)?
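
    One common way the commented skeleton above gets fleshed out is with two multiprocessing.Queue objects and sentinel values: the main process parses and feeds the input queue, a set of worker processes computes the sums, and a single writer process drains the output queue. The sketch below is just one possible wiring, not the canonical answer the question is asking for; note that results arrive unordered, which is why the row index is carried along.

        #!/usr/bin/env python3
        # Sketch: possible wiring for the multiproc_sums.py skeleton, using two
        # queues and sentinel values to shut the workers and the writer down.
        import csv
        import multiprocessing

        SENTINEL = None

        def worker(in_q, out_q):
            # Sum rows until the sentinel arrives, then pass a sentinel on to the writer.
            for i, row in iter(in_q.get, SENTINEL):
                out_q.put((i, sum(row)))
            out_q.put(SENTINEL)

        def writer(out_q, path, num_workers):
            finished = 0
            with open(path, 'w') as outfile:
                out_csv = csv.writer(outfile)
                while finished < num_workers:
                    item = out_q.get()
                    if item is SENTINEL:
                        finished += 1
                    else:
                        out_csv.writerow(item)       # rows arrive in arbitrary order

        def run(input_path, output_path, numprocs):
            in_q = multiprocessing.Queue()
            out_q = multiprocessing.Queue()
            workers = [multiprocessing.Process(target=worker, args=(in_q, out_q))
                       for _ in range(numprocs)]
            sink = multiprocessing.Process(target=writer,
                                           args=(out_q, output_path, numprocs))
            for p in workers:
                p.start()
            sink.start()
            with open(input_path) as infile:
                for i, row in enumerate(csv.reader(infile)):   # main process feeds the queue
                    in_q.put((i, [int(x) for x in row]))
            for _ in workers:
                in_q.put(SENTINEL)                             # one sentinel per worker
            for p in workers:
                p.join()
            sink.join()

        if __name__ == '__main__':
            run('input.csv', 'sums.csv', multiprocessing.cpu_count())

    If ordered output matters, the index column lets the writer (or a post-processing step) re-sort the results afterwards.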

    Read the article

  • Why a SIGHUP signal to httpd kills the tomcat process?

    - by Geo
    I have a server with a tomcat process binding to port 80 and an httpd process binding to port 5000. For some reason, every time any process sends a SIGHUP signal to the httpd process, my tomcat process disappears without an error or anything. I fixed the issue on the server the following way: I added an explicit ServerName directive in httpd.conf, and that fixed it. I still don't understand why the SIGHUP to httpd killed the tomcat process.

    NOTE 1: I replicated the kill signal with the following commands. Find out what the httpd pid is:

        cat /etc/httpd/run/httpd.pid
        4056

    then kill with a SIGHUP signal:

        kill -s SIGHUP 4056

    NOTE 2: We troubleshot the issue and found that logrotate, running every morning at 4am, was sending a SIGHUP signal to release the logs so it could rotate them, thus killing tomcat as well.
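
    A quick way to investigate this kind of collateral kill is to compare the process group and session IDs of httpd and the Tomcat JVM (a HUP delivered to a whole process group or an inherited controlling terminal is a common culprit) and, if needed, replay the signal that logrotate sends. The sketch below is generic Linux /proc diagnostics, not an explanation of what happened on this particular server; it assumes Tomcat shows up as a "java" process.

        #!/usr/bin/env python3
        # Sketch: compare process group / session IDs of httpd and the Tomcat JVM,
        # and optionally replay the SIGHUP that logrotate sends. Diagnostic only.
        import os
        import signal

        def stat_fields(pid):
            with open("/proc/%d/stat" % pid) as f:
                parts = f.read().rsplit(")", 1)[1].split()
            # after the comm field: state, ppid, pgrp, session, tty_nr, ...
            return {"ppid": int(parts[1]), "pgrp": int(parts[2]), "session": int(parts[3])}

        def pids_of(name):
            pids = []
            for pid in filter(str.isdigit, os.listdir("/proc")):
                try:
                    with open("/proc/%s/cmdline" % pid) as f:
                        if name in f.read():
                            pids.append(int(pid))
                except IOError:
                    pass
            return pids

        httpd_pids = pids_of("httpd")
        tomcat_pids = pids_of("java")          # assumes Tomcat runs as a "java" process
        for pid in httpd_pids + tomcat_pids:
            print(pid, stat_fields(pid))

        # Replaying what logrotate does (left commented out on purpose):
        # os.kill(httpd_pids[0], signal.SIGHUP)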

    Read the article

  • Detecting walls or floors in pygame

    - by Serial
    I am trying to make bullets bounce off walls, but I can't figure out how to correctly do the collision detection. What I am currently doing is iterating through all the solid blocks, and if the bullet hits the bottom, top or sides, its vector is adjusted accordingly. However, sometimes when I shoot, the bullet doesn't bounce; I think it's when I shoot at a border between two blocks. Here is the update method for my Bullet class:

        def update(self, dt):
            if self.can_bounce:
                # if the bullet hasn't bounced, find its vector using the mouse click pos and player pos
                speed = -10.
                range = 200
                distance = [self.mouse_x - self.player[0], self.mouse_y - self.player[1]]
                norm = math.sqrt(distance[0] ** 2 + distance[1] ** 2)
                direction = [distance[0] / norm, distance[1] / norm]
                bullet_vector = [direction[0] * speed, direction[1] * speed]
                self.dx = bullet_vector[0]
                self.dy = bullet_vector[1]

            # check each block for collision
            for block in self.game.solid_blocks:
                last = self.rect.copy()
                if self.rect.colliderect(block):
                    # each test checks if it hit the top, bottom, left or right side of the block it's colliding with
                    topcheck = self.rect.top < block.rect.bottom and self.rect.top > block.rect.top
                    bottomcheck = self.rect.bottom > block.rect.top and self.rect.bottom < block.rect.bottom
                    rightcheck = self.rect.right > block.rect.left and self.rect.right < block.rect.right
                    leftcheck = self.rect.left < block.rect.right and self.rect.left > block.rect.left
                    if self.can_bounce:
                        if topcheck:
                            self.rect = last
                            self.dy *= -1
                            self.can_bounce = False
                            print "top"
                        if bottomcheck:
                            self.rect = last
                            self.dy *= -1  # Bottom check
                            self.can_bounce = False
                            print "bottom"
                        if rightcheck:
                            self.rect = last
                            self.dx *= -1  # right check
                            self.can_bounce = False
                            print "right"
                        if leftcheck:
                            self.rect = last
                            self.dx *= -1  # left check
                            self.can_bounce = False
                            print "left"
                    else:
                        # if it has already bounced and is colliding again, kill it
                        self.kill()

            for enemy in self.game.enemies_list:
                if self.rect.colliderect(enemy):
                    self.kill()

            # update position
            self.rect.x -= self.dx
            self.rect.y -= self.dy

    This definitely isn't the best way to do it, but I can't think of another way. If anyone has done this or can help, that would be awesome!
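
    Part of the problem described above is that several of the four side tests can be true at the same time near a corner or a block boundary, so the bullet can be reflected twice or not at all. A common alternative is to move one axis at a time and only flip the axis that actually caused the hit. The sketch below is a drop-in variant of the collision part of the update method that keeps the same attributes used in the question (self.rect, self.dx, self.dy, self.can_bounce, self.game.solid_blocks); the initial vector setup from the question is unchanged and omitted here.

        # Sketch: axis-separated collision response for the Bullet sprite.
        def update(self, dt):
            blocks = [b.rect for b in self.game.solid_blocks]

            # Move horizontally first and resolve X collisions only.
            self.rect.x -= self.dx
            hit = self.rect.collidelist(blocks)
            if hit != -1:
                if not self.can_bounce:
                    self.kill()
                    return
                if self.dx > 0:                      # dx is subtracted, so dx > 0 moves left
                    self.rect.left = blocks[hit].right
                elif self.dx < 0:
                    self.rect.right = blocks[hit].left
                self.dx *= -1
                self.can_bounce = False

            # Then move vertically and resolve Y collisions only.
            self.rect.y -= self.dy
            hit = self.rect.collidelist(blocks)
            if hit != -1:
                if not self.can_bounce:
                    self.kill()
                    return
                if self.dy > 0:
                    self.rect.top = blocks[hit].bottom
                elif self.dy < 0:
                    self.rect.bottom = blocks[hit].top
                self.dy *= -1
                self.can_bounce = False

    Because each axis is resolved separately, a shot that lands exactly on the seam between two adjacent blocks still flips only one component of the velocity.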

    Read the article

  • World Record Batch Rate on Oracle JD Edwards Consolidated Workload with SPARC T4-2

    - by Brian
    Oracle produced a world record batch throughput for single-system results on Oracle's JD Edwards EnterpriseOne Day-in-the-Life benchmark, using Oracle's SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and Oracle Database 11g Release 2. The workload includes both online and batch work.

    The SPARC T4-2 server delivered a result of 8,000 online users while concurrently executing a mix of JD Edwards EnterpriseOne long and short batch processes at 95.5 UBEs/min (Universal Batch Engines per minute). In order to obtain this record benchmark result, the JD Edwards EnterpriseOne, Oracle WebLogic and Oracle Database 11g Release 2 servers were each executed in separate Oracle Solaris Containers, which enabled optimal system resource distribution and performance together with scalable and manageable virtualization. One SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and Oracle Database 11g Release 2 utilized only 55% of the available CPU power. The Oracle DB server in a Shared Server configuration allows for optimized CPU resource utilization and significant memory savings on the SPARC T4-2 server without sacrificing performance. This configuration with the SPARC T4-2 server achieved 33% more Users/core, 47% more UBEs/min and 78% more Users/rack unit than the IBM Power 770 server. The SPARC T4-2 server with 2 processors ran the JD Edwards "Day-in-the-Life" benchmark and supported 8,000 concurrent online users while concurrently executing mixed batch workloads at 95.5 UBEs per minute. The IBM Power 770 server with twice as many processors supported 12,000 concurrent online users while concurrently executing mixed batch workloads at only 65 UBEs per minute. This benchmark demonstrates more than 2x cost savings by consolidating the complete solution in a single SPARC T4-2 server, compared to earlier published results of 10,000 users and 67 UBEs per minute on two servers (SPARC T4-2 and SPARC T4-1). The Oracle DB server used mirrored (RAID 1) volumes for the database, providing high availability for the data without impacting performance.

    Performance Landscape

    JD Edwards EnterpriseOne Day in the Life (DIL) Benchmark, Consolidated Online with Batch Workload

    System                                        | Rack Units | Batch Rate (UBEs/m) | Online Users | Users/Unit | Users/Core | Version
    SPARC T4-2 (2 x SPARC T4, 2.85 GHz)           | 3          | 95.5                | 8,000        | 2,667      | 500        | 9.0.2
    IBM Power 770 (4 x POWER7, 3.3 GHz, 32 cores) | 8          | 65                  | 12,000       | 1,500      | 375        | 9.0.2

    Batch Rate (UBEs/m): batch transaction rate in UBEs per minute

    Configuration Summary

    Hardware Configuration:
    - 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz
    - 256 GB memory
    - 4 x 300 GB 10K RPM SAS internal disks
    - 2 x 300 GB internal SSDs
    - 2 x Sun Storage F5100 Flash Arrays

    Software Configuration:
    - Oracle Solaris 10
    - Oracle Solaris Containers
    - JD Edwards EnterpriseOne 9.0.2
    - JD Edwards EnterpriseOne Tools (8.98.4.2)
    - Oracle WebLogic Server 11g (10.3.4)
    - Oracle HTTP Server 11g
    - Oracle Database 11g Release 2 (11.2.0.1)

    Benchmark Description

    JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations.
Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and manufacturing processes, such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and the UBE – Universal Business Engine workload of 61 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the user’s transactions response times and reports the key metric of Combined Weighted Average Transaction Response time. The UBE processes workload runs from the JD Enterprise Application server. Oracle's UBE processes come as three flavors: Short UBEs < 1 minute engage in Business Report and Summary Analysis, Mid UBEs > 1 minute create a large report of Account, Balance, and Full Address, Long UBEs > 2 minutes simulate Payroll, Sales Order, night only jobs. The UBE workload generates large numbers of PDF files reports and log files. The UBE Queues are categorized as the QBATCHD, a single threaded queue for large and medium UBEs, and the QPROCESS queue for short UBEs run concurrently. Oracle's UBE process performance metric is Number of Maximum Concurrent UBE processes at transaction rate, UBEs/minute. Key Points and Best Practices Two JD Edwards EnterpriseOne Application Servers, two Oracle WebLogic Servers 11g Release 1 coupled with two Oracle Web Tier HTTP server instances and one Oracle Database 11g Release 2 database on a single SPARC T4-2 server were hosted in separate Oracle Solaris Containers bound to four processor sets to demonstrate consolidation of multiple applications, web servers and the database with best resource utilizations. Interrupt fencing was configured on all Oracle Solaris Containers to channel the interrupts to processors other than the processor sets used for the JD Edwards Application server, Oracle WebLogic servers and the database server. A Oracle WebLogic vertical cluster was configured on each WebServer Container with twelve managed instances each to load balance users' requests and to provide the infrastructure that enables scaling to high number of users with ease of deployment and high availability. The database log writer was run in the real time RT class and bound to a processor set. The database redo logs were configured on the raw disk partitions. The Oracle Solaris Container running the Enterprise Application server completed 61 Short UBEs, 4 Long UBEs concurrently as the mixed size batch workload. The mixed size UBEs ran concurrently from the Enterprise Application server with the 8,000 online users driven by the LoadRunner. See Also SPARC T4-2 Server oracle.com OTN JD Edwards EnterpriseOne oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Oracle Fusion Middleware oracle.com OTN Disclosure Statement Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 09/30/2012.
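
    The percentage claims in the summary follow directly from the Performance Landscape table; a quick arithmetic check (core counts taken from the configuration details: 2 x 8-core SPARC T4 versus the 32-core POWER7 system):

        # Sketch: verify the comparative claims from the Performance Landscape numbers.
        sparc = {"users": 8000, "ube_per_min": 95.5, "rack_units": 3.0, "cores": 16.0}
        ibm   = {"users": 12000, "ube_per_min": 65.0, "rack_units": 8.0, "cores": 32.0}

        users_per_core_gain = (sparc["users"] / sparc["cores"]) / (ibm["users"] / ibm["cores"]) - 1
        ube_gain            = sparc["ube_per_min"] / ibm["ube_per_min"] - 1
        users_per_ru_gain   = (sparc["users"] / sparc["rack_units"]) / (ibm["users"] / ibm["rack_units"]) - 1

        print("Users/core:      +%.0f%%" % (users_per_core_gain * 100))   # ~33%
        print("UBEs/min:        +%.0f%%" % (ube_gain * 100))              # ~47%
        print("Users/rack unit: +%.0f%%" % (users_per_ru_gain * 100))     # ~78%

    The intermediate values (500 vs 375 users per core, 2,667 vs 1,500 users per rack unit) match the table's Users/Core and Users/Unit columns.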

    Read the article

  • Operating systems theory -- using minimum number of semaphores

    - by stackuser
    This situation is prone to deadlock of processes in an operating system, and I'd like to solve it with the minimum number of semaphores. Basically there are three cooperating processes that all read data from the same input device. Each process, when it gets the input device, must read two consecutive data items. I want to use mutual exclusion to do this. Semaphores should be used to synchronize:

        P1:             P2:             P3:
        input(a1, a2)   input(b1, b2)   input(c1, c2)
        Y = a1 + c1     W = b2 + c2     Z = a2 + b1
        Print(X)        X = Z - Y + W

    The declaration and initialization that I think would work here are:

        semaphore s = 1
        sa1 = 0, sa2 = 0, sb1 = 0, sb2 = 0, sc1 = 0, sc2 = 0

    I'm sure that any kernel programmers that happen on this can knock this out in a minute or 2.

    Diagram of cooperating processes and one input device: (image not shown)

    It seems like P1 and P2 would start something like:

        wait(s)
        input(a1/b1, a2/b2)
        signal(s)
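
    For readers who want to see the synchronization running, below is one workable assignment of semaphores expressed with Python threads standing in for the three processes. It mirrors the data dependencies (P1 needs c1 and X, P2 needs c2, Y and Z, P3 needs a2 and b1) plus the mutex s on the input device; it is meant to illustrate the reasoning, not to prove that this is the minimum number of semaphores.

        #!/usr/bin/env python3
        # Sketch: one deadlock-free synchronization of P1/P2/P3 using semaphores.
        import threading

        s = threading.Semaphore(1)                 # mutual exclusion on the input device
        c1_ready = threading.Semaphore(0); c2_ready = threading.Semaphore(0)
        a2_ready = threading.Semaphore(0); b1_ready = threading.Semaphore(0)
        y_ready = threading.Semaphore(0); z_ready = threading.Semaphore(0)
        x_ready = threading.Semaphore(0)

        data = {}                                  # stands in for shared memory
        device = iter([1, 2, 3, 4, 5, 6])          # fake input device, two values per read

        def read_pair():
            return next(device), next(device)

        def p1():
            with s:
                data['a1'], data['a2'] = read_pair()
            a2_ready.release()
            c1_ready.acquire()
            data['Y'] = data['a1'] + data['c1']; y_ready.release()
            x_ready.acquire()
            print("X =", data['X'])

        def p2():
            with s:
                data['b1'], data['b2'] = read_pair()
            b1_ready.release()
            c2_ready.acquire()
            data['W'] = data['b2'] + data['c2']
            z_ready.acquire(); y_ready.acquire()
            data['X'] = data['Z'] - data['Y'] + data['W']; x_ready.release()

        def p3():
            with s:
                data['c1'], data['c2'] = read_pair()
            c1_ready.release(); c2_ready.release()
            a2_ready.acquire(); b1_ready.acquire()
            data['Z'] = data['a2'] + data['b1']; z_ready.release()

        threads = [threading.Thread(target=f) for f in (p1, p2, p3)]
        for t in threads: t.start()
        for t in threads: t.join()

    Each value is signalled exactly once by its producer and waited on exactly once by its consumer, which is the standard way to argue that no circular wait (and hence no deadlock) can arise.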

    Read the article

  • Selling Visual Studio ALM

    - by Tarun Arora
    Introduction As a consultant I have been selling Application Lifecycle Management services using Visual Studio and Team Foundation Server. I’ve been contacted various times by friends working in organization telling me that ALM processes in their company were benchmarked when dinosaurs walked the earth. Most of these individuals already know the great features Microsoft ALM tools offer and are keen to start a conversation with the CIO but don’t exactly know where to start. It is very important how you engage in your first conversation, if you start the conversation with ‘There is this great tooling from Microsoft which offers amazing features to boost developer productivity, … ‘ from experience I can tell you the reply from your CIO would be ‘I already know! Our existing landscape has a combination of bleeding edge open source and cutting edge licensed tools which already cover these features quite well, more over Microsoft products have a high licensing cost associated to them.’ You will always find it harder to sell by feature, the trick is to highlight the gap in the existing processes & tools and then highlight the impact of these gaps to the overall development processes, by now you would have captured enough attention to show off how the ALM tooling offered by Microsoft not only fills those gaps but offers great value adds to take their development practices to the next level. Rangers ALM Assessment Guide Image 1 – Welcome! First look at the Rangers ALM assessment guide Most organization already have some processes in place to cover aspects of ALM. How do you go about proving that there isn’t enough cover in place? This is where Visual Studio ALM Rangers ALM Assessment guide can help. The ALM assessment guide is really a tool that helps you gather information about Development practices and processes within a customer's environment. Several questionnaires are used to identify the current state of individual development lifecycle areas and decide on a desired state for those processes. It also presents guidance and roll-up summaries to help with recommendations moving forward. The ALM Rangers assessment guide can be downloaded from here. Image 2 – ALM Assessment guide divided into different functions of SDLC The assessment guide is divided into different functions of Software Development Lifecycle (listed below), this gives you the ability to access how mature the company is in different areas of SDLC. Architecture & Design Requirement Engineering & UX Development Software Configuration Management Governance Deployment & Operations Testing & Quality Assurance Project Planning & Management Each section has a set of questions, fill in the assessment by selecting “Never/Sometimes/Always” from the Answer column in the question sheets.  Each answer has weightage to the overall score. Each question has a link next to it, clicking the link takes you to the Reference sheet which gives you more details about the question along with a reason for “why you need to ask this question?”, “other ways to phrase the question” and “what to expect as an answer from the customer”. The trick is to engage the customer in a discussion. You need to probe a lot, listen to the customer and have a discussion with several team members, preferably without management to ensure that you receive candid feedback. 
This reminds me of a funny incident when during an ALM review a customer told me that they have a sophisticated semi-automated application deployment process, further discussions revealed that deployment actually involved 72 manual configuration steps per production node. Such observations can be recorded in the Issue Brainstorming worksheet for further consideration later. It is also worth mentioning the different levels of ALM maturity to the customer. By default the desired state of ALM maturity is set to Standard, it is possible to set a desired state by area, you should strive for Advanced or Dynamic, it always helps by explaining the classification and advantages. Image 3 – ALM levels by description The ALM assessment guide helps you arrive at a quantitative measure of the company’s ALM maturity. The resultant graph plotted on a spider’s web shows you the company’s current state of ALM maturity and the desired state of ALM maturity. Further since the results are classified by area you can immediately spot the areas where the customer needs immediate help. Image 4 – The spiders web! The red cross icons are areas shouting out for immediate attention, the yellow exclamation icons are areas that need improvement. These icons are calculated on the difference between the Current State of ALM maturity VS the Desired state of ALM maturity. Image 5 – Results by area Conclusion To conclude the Rangers ALM assessment guide gives you the ability to, Measure the customer’s current ALM maturity level Understand the ALM maturity level the customer desires to achieve Capture a healthy list of issues the customer wants to brainstorm further Now What’s next…? Download and get started with the Rangers ALM Assessment Guide. If you have successfully captured the above listed three pieces of information you are in a great state to make recommendations on the identified areas highlighting the benefits that Visual Studio ALM tools would offer. In the next post I will be covering how to take the ALM assessment results as the base to actually convert your recommendation into a sell.  Remember to subscribe to http://feeds.feedburner.com/TarunArora. I would love to hear your feedback! If you have any recommendations on things that I should consider or any questions or feedback, feel free to leave a comment. *** A special thanks goes out to fellow ranges Willy, Ethem and Philip for reviewing the blog post and providing valuable feedback. ***
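
    To make the scoring mechanics concrete: the workbook rolls Never/Sometimes/Always answers up into a per-area score and compares it with the desired state. The sketch below uses an assumed 0/1/2 weighting and made-up thresholds purely to illustrate that roll-up; the actual weights and icon rules live in the Rangers workbook itself.

        #!/usr/bin/env python3
        # Sketch: illustrative roll-up of assessment answers into per-area scores
        # and a current-vs-desired gap, in the spirit of the Rangers workbook.
        WEIGHT = {"Never": 0, "Sometimes": 1, "Always": 2}

        answers = {   # area -> questionnaire answers (made-up sample data)
            "Software Configuration Management": ["Always", "Sometimes", "Always"],
            "Testing & Quality Assurance":       ["Never", "Sometimes", "Never"],
        }
        desired = {"Software Configuration Management": 0.8,
                   "Testing & Quality Assurance":       0.8}

        for area, answer_list in answers.items():
            score = sum(WEIGHT[a] for a in answer_list) / (2.0 * len(answer_list))  # normalise to 0..1
            gap = desired[area] - score
            if gap > 0.4:
                flag = "needs immediate attention"   # red cross in the spider web
            elif gap > 0.1:
                flag = "needs improvement"           # yellow exclamation
            else:
                flag = "ok"
            print("%-40s current %.2f desired %.2f -> %s" % (area, score, desired[area], flag))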

    Read the article

  • OS Analytics with Oracle Enterprise Manager (by Eran Steiner)

    - by Zeynep Koch
    Oracle Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes. The recording of our call to discuss this blog is available here: https://oracleconferencing.webex.com/oracleconferencing/ldr.php?AT=pb&SP=MC&rID=71517797&rKey=4ec9d4a3508564b3 Download the presentation here. See also: the blog about Alert Monitoring and Problem Notification, and the blog about Using Operational Profiles to Install Packages and other content. Here is a quick summary of what you can do with OS Analytics in Ops Center: view historical charts and real time values of CPU, memory, network and disk utilization; find the top CPU and memory processes in real time or at a certain historical day; determine proper monitoring thresholds based on historical data; and drill down into a process's details. Where to start: To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, Memory utilization and Network utilization, along with the current real time top 5 processes in each category (click the image to see a larger version).  In the above screen, you can click each of the top 5 processes to see a more detailed view of that process. Here is an example of one of the processes: one of the cool things is that you can see the process tree for this process along with some port bindings and open file descriptors. Next, click the "Processes" tab to see real time information about all the processes on the machine. An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center, this column will remain empty. The "Threshold" tab is particularly helpful - you can view historical trends of different monitored values and, based on the graph, determine what the monitoring values should be. You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the currently set levels: Red for critical, Yellow for warning and Blue for Information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU Usage, try shorter time frames which are more detailed, such as one hour or one day. Applying new monitoring values: When first applying new values to monitored attributes, a popup will come up asking if it's OK to take you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use this current machine as a "Gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done with applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies". 
Visiting the past: Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Here's a view into yesterday's data on one of the machines: you can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and Memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time.

Under the hood: The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/. An "os.zip" file exists for the main OS; inside you will find many small text files, named after the Epoch time stamp at which they were taken. If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of the zone along with "os.zip" in it. If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller. The actual script collecting the data can be viewed for debugging purposes as well: on Linux, the location is /opt/sun/xvmoc/private/os_analytics/collect. If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it:

# touch /tmp/.collect.stderr

The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per agent in /opt/sun/n1gc/lib/XVM.properties. Find the section "Analytics configurable properties for OS and VSC" to view the Analytics specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner

    Read the article

  • Operating systems -- using minimum number of semaphores

    - by stackuser
    The three cooperating processes all read data from the same input device. Each process, when it gets the input device, must read two consecutive data. I want to use mutual exclusion to do this. The declaration and initialization that I think would work here are: semaphore s=1 sa1 = 0, sa2 = 0, sb1 = 0, sb2 = 0, sc1 = 0, sc2 = 0 I'd like to use semaphores to synchronize the following processes: P1: P2: P3: input(a1,a2) input (b1,b2) input(c1,c2) Y=a1+c1 W=b2+c2 Z=a2+b1 Print (X) X=Z-Y+W I'm wondering how to use the minimum number of semaphores to solve this. Diagram of cooperating Processes and one input device: It seems like P1 and P2 would start something like: wait(s) input (a1/b1, a2/b2) signal(s)
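    A minimal sketch of the building blocks involved — not part of the original question, and assuming POSIX semaphores shared across fork() via an anonymous mmap (compile with -pthread on Linux). The flattened process listing above makes the exact statement-to-process assignment ambiguous, so this only demonstrates the two patterns the exercise combines: one mutex semaphore s (initialised to 1) protecting the two consecutive reads from the shared input device, plus one counting semaphore per data dependency (initialised to 0) so a consumer blocks until its producer has signalled. Counting those dependency semaphores plus the single device mutex is one way to reason about the minimum number needed.

    #include <stdio.h>
    #include <semaphore.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    struct shared {
        sem_t s;        /* mutex around the input device, initialised to 1 */
        sem_t a_ready;  /* "a1/a2 have been produced", initialised to 0    */
        int a1, a2;
    };

    static struct shared *shm;

    static void p1(void)
    {
        sem_wait(&shm->s);          /* wait(s)                                 */
        shm->a1 = 1;                /* input(a1, a2): stand-in for reading two */
        shm->a2 = 2;                /* consecutive data from the input device  */
        sem_post(&shm->s);          /* signal(s)                               */
        sem_post(&shm->a_ready);    /* tell the consumer that a1/a2 now exist  */
    }

    static void p2(void)
    {
        sem_wait(&shm->a_ready);    /* block until P1 has produced a1/a2 */
        printf("P2 computed with a1 = %d\n", shm->a1);
    }

    int main(void)
    {
        shm = mmap(NULL, sizeof *shm, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        sem_init(&shm->s, 1, 1);        /* pshared = 1: usable across fork() */
        sem_init(&shm->a_ready, 1, 0);
        if (fork() == 0) { p1(); _exit(0); }
        if (fork() == 0) { p2(); _exit(0); }
        wait(NULL);
        wait(NULL);
        return 0;
    }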

    Read the article

  • AWR Performance Report and Read by Other Session Waits

    - by user702295
    For the questions regarding "read by other session" and its relation to "db file sequential/scattered read", the logic is like this: when a "db file sequential/scattered read" is done, the blocks are either already in the cache or on the disk.  Since any operation on blocks is done in the cache, and since the issue is "read by other session", I will address the case where the blocks are on disk. Process A is reading the needed block from the disk into the cache.  During that time, if process B (and C and others) needs the same block, it will wait on "read by other session".  A and B can be threads of the same process running in parallel or unrelated processes.  For example, two processes doing a full table scan on mdp_matrix. Solutions for that can be lowering the number of processes competing for the same blocks or increasing PCTFREE.  If it is a full table scan, maybe an index is missing that could result in fewer blocks being read, and so on.

    Read the article

  • Why don't mails show up in the recipient's mailspool?

    - by Jason
    I have postfix and dovecot running as a local email system, with Thunderbird as the client. I have two users on my Ubuntu machine, mailuser1 and mailuser2, whom I added to Thunderbird. Everything went fine, except the users don't have anything in their inbox in Thunderbird and sent mails don't get through. I'm using maildir as well. Checking /var/log/mail.log reveals the log below. This is what is happening: restarting postfix and dovecot and then sending mail from one user to another user... I believe this line is the problem:

May 30 18:31:55 postfix/smtpd[12804]: disconnect from localhost[127.0.0.1]

Why is it not connecting? What could be wrong?

/var/log/mail.log:

May 30 18:30:21 dovecot: imap: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
May 30 18:30:21 dovecot: master: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
May 30 18:30:21 dovecot: imap: Server shutting down. in=467 out=475
May 30 18:30:21 dovecot: config: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
May 30 18:30:21 dovecot: log: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
May 30 18:30:21 dovecot: anvil: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
May 30 18:30:21 dovecot: master: Dovecot v2.2.9 starting up (core dumps disabled)
May 30 18:30:54 dovecot: imap-login: Login: user=<mailuser2>, method=PLAIN, rip=::1, lip=::1, mpid=12638, TLS, session=<xUfQkaD66gAAAAAAAAAAAAAAAAAAAAAB>
May 30 18:31:04 postfix/master[12245]: terminating on signal 15
May 30 18:31:04 postfix/master[12795]: daemon started -- version 2.11.0, configuration /etc/postfix
May 30 18:31:55 postfix/postscreen[12803]: CONNECT from [127.0.0.1]:33668 to [127.0.0.1]:25
May 30 18:31:55 postfix/postscreen[12803]: WHITELISTED [127.0.0.1]:33668
May 30 18:31:55 postfix/smtpd[12804]: connect from localhost[127.0.0.1]
May 30 18:31:55 postfix/smtpd[12804]: 1ED7120EB9: client=localhost[127.0.0.1]
May 30 18:31:55 postfix/cleanup[12809]: 1ED7120EB9: message-id=<[email protected]>
May 30 18:31:55 postfix/qmgr[12799]: 1ED7120EB9: from=<[email protected]>, size=546, nrcpt=1 (queue active)
May 30 18:31:55 postfix/local[12810]: 1ED7120EB9: to=<mailuser2@mysitecom>, relay=local, delay=0.03, delays=0.02/0.01/0/0, dsn=2.0.0, status=sent (delivered to maildir)
May 30 18:31:55 postfix/qmgr[12799]: 1ED7120EB9: removed
May 30 18:31:55 postfix/smtpd[12804]: disconnect from localhost[127.0.0.1]
May 30 18:31:55 dovecot: imap-login: Login: user=<mailuser1>, method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, mpid=12814, TLS, session=<sD9plaD6PgB/AAAB>

This is my postfix main.cf:

# See /usr/share/postfix/main.cf.dist for a commented, more complete version
# Debian specific: Specifying a file name will cause the first
# line of that file to be used as the name. The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no
# appending .domain is the MUA's job.
append_dot_mydomain = no
# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h
readme_directory = no
# TLS parameters
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
# See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
# information on enabling SSL in the smtp client.
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = server
mydomain = mysite.com
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = $mydomain
mydestination = mysite.com
#relayhost = smtp.192.168.10.1.com
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.10.0/24
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
home_mailbox = Maildir/
mailbox_command =

All ports are listening:

tcp 0 0 *:imaps *:* LISTEN
tcp 0 0 *:submission *:* LISTEN
tcp 0 0 *:imap2 *:* LISTEN
tcp 0 0 s148134.s148134.:domain *:* LISTEN
tcp 0 0 192.168.56.101:domain *:* LISTEN
tcp 0 0 10.0.2.15:domain *:* LISTEN
tcp 0 0 localhost:domain *:* LISTEN
tcp 0 0 *:ssh *:* LISTEN
tcp 0 0 *:smtp *:* LISTEN
tcp 0 0 localhost:953 *:* LISTEN
tcp6 0 0 [::]:imaps [::]:* LISTEN
tcp6 0 0 [::]:submission [::]:* LISTEN
tcp6 0 0 [::]:imap2 [::]:* LISTEN
tcp6 0 0 [::]:domain [::]:* LISTEN
tcp6 0 0 [::]:ssh [::]:* LISTEN
tcp6 0 0 [::]:smtp [::]:* LISTEN
tcp6 0 0 localhost:953 [::]:* LISTEN

    Read the article

  • Hadoop streaming with Python and python subprocess

    - by Ganesh
    I have established a basic Hadoop master/slave cluster setup and am able to run MapReduce programs (including Python) on the cluster. Now I am trying to run Python code which accesses a C binary, so I am using the subprocess module. I am able to use Hadoop streaming for normal Python code, but when I include the subprocess module to access a binary, the job fails. As you can see in the logs below, the hello executable is recognised and used for the packaging, but the code still fails to run.

.
.
packageJobJar: [/tmp/hello/hello, /app/hadoop/tmp/hadoop-unjar5030080067721998885/] [] /tmp/streamjob7446402517274720868.jar tmpDir=null
JarBuilder.addNamedStream hello
.
.
12/03/07 22:31:32 INFO mapred.FileInputFormat: Total input paths to process : 1
12/03/07 22:31:32 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local]
12/03/07 22:31:32 INFO streaming.StreamJob: Running job: job_201203062329_0057
12/03/07 22:31:32 INFO streaming.StreamJob: To kill this job, run:
12/03/07 22:31:32 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
12/03/07 22:31:32 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
12/03/07 22:31:33 INFO streaming.StreamJob: map 0% reduce 0%
12/03/07 22:32:05 INFO streaming.StreamJob: map 100% reduce 100%
12/03/07 22:32:05 INFO streaming.StreamJob: To kill this job, run:
12/03/07 22:32:05 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
12/03/07 22:32:05 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
12/03/07 22:32:05 ERROR streaming.StreamJob: Job not Successful!
12/03/07 22:32:05 INFO streaming.StreamJob: killJob...
Streaming Job Failed!

The command I am trying is:

hadoop jar contrib/streaming/hadoop-*streaming*.jar -mapper /home/hduser/MARS.py -reducer /home/hduser/MARS_red.py -input /user/hduser/mars_inputt -output /user/hduser/mars-output -file /tmp/hello/hello -verbose

where hello is the C executable. It is a simple hello-world program which I am using to check the basic functioning. My Python code is:

#!/usr/bin/env python
import subprocess
subprocess.call(["./hello"])

Any help with how to get the executable to run with Python in Hadoop streaming, or help with debugging this, will move me forward. Thanks, Ganesh

    Read the article

  • Eclipse CDT: cannot debug or terminate application

    - by Paul Lammertsma
    I have Eclipse set up fairly nicely to run the G++ compiler through Cygwin. Even the character encoding is set up correctly! There still seems to be something wrong with my configuration: I can't debug. The pause button in the debug view is simply disabled, and no threads appear in my application tree. It seems that gdb is simply not communicating with Eclipse. Presently, I have the debug settings as follows:

Debugger: "Cygwin gdb Debugger"
GDB debugger: gdb
GDB command file: .gdbinit
Protocol: Default

I should mention here that I have no idea what .gdbinit does; in my project it is merely an empty file. What is wrong with my configuration?

Debugging: When attempting to terminate the application in debug mode, Eclipse displays the following error: Target request failed: failed to interrupt. I can't kill the process, either; I have to kill its parent gdb.exe, which in turn kills my application.

Running: When running it normally, a bunch of kill.exes are called, doing nothing, while Eclipse displays the following error: Terminate failed. I can kill FaceDetector.exe from the task manager.

Process Explorer: This is what it looks like in Process Explorer (debugging left, running right):

    Read the article

  • write to fifo/pipe from shell, with timeout

    - by Tim
    I have a pair of shell programs that talk over a named pipe. The reader creates the pipe when it starts, and removes it when it exits. Sometimes, the writer will attempt to write to the pipe between the time that the reader stops reading and the time that it removes the pipe.

reader:  while condition; do read data <$PIPE; do_stuff; done
writer:  echo $data >>$PIPE
reader:  rm $PIPE

When this happens, the writer will hang forever trying to open the pipe for writing. Is there a clean way to give it a timeout, so that it won't stay hung until killed manually? I know I can do

#!/bin/sh
# timed_write <timeout> <file> <args>
# like "echo <args> >> <file>" with a timeout
TIMEOUT=$1
shift; FILENAME=$1
shift; PID=$$
(X=0;
 # don't do "sleep $TIMEOUT", the "kill %1" doesn't kill the sleep
 while [ "$X" -lt "$TIMEOUT" ]; do sleep 1; X=$(expr $X + 1); done;
 kill $PID) &
echo "$@" >>$FILENAME
kill %1

but this is kind of icky. Is there a shell builtin or command to do this more cleanly (without breaking out the C compiler)?
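    For what it's worth, the hang lives inside open(2): opening a FIFO write-only blocks until some reader has it open. If your system has GNU coreutils, something like timeout 5 sh -c 'echo "$data" >> "$PIPE"' keeps it entirely in the shell. The C sketch below is illustration only (the question explicitly wants a shell-level answer) and simply shows the mechanism such wrappers lean on: open the FIFO with O_NONBLOCK, which fails immediately with ENXIO while no reader is present, and retry until a deadline expires.

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Try to write msg to the FIFO at path, giving up after timeout_s seconds.
     * Returns 0 on success, -1 on timeout or error. */
    static int timed_fifo_write(const char *path, const char *msg, int timeout_s)
    {
        int fd = -1;

        for (int waited = 0; waited < timeout_s; waited++) {
            /* O_NONBLOCK: opening for write fails with ENXIO while no reader
             * has the FIFO open, instead of blocking forever. */
            fd = open(path, O_WRONLY | O_NONBLOCK);
            if (fd >= 0)
                break;
            if (errno != ENXIO)
                return -1;      /* FIFO removed, permissions, ... */
            sleep(1);
        }
        if (fd < 0)
            return -1;          /* no reader showed up in time */

        ssize_t n = write(fd, msg, strlen(msg));
        close(fd);
        return n < 0 ? -1 : 0;
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <fifo> <message>\n", argv[0]);
            return 2;
        }
        return timed_fifo_write(argv[1], argv[2], 5) == 0 ? 0 : 1;
    }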

    Read the article

  • SQL SERVER – Shard No More – An Innovative Look at Distributed Peer-to-peer SQL Database

    - by pinaldave
    There is no doubt that SQL databases play an important role in modern applications. In an ideal world, a single database can handle hundreds of incoming connections from multiple clients and scale to accommodate the related transactions. However the world is not ideal and databases are often a cause of major headaches when applications need to scale to accommodate more connections, transactions, or both. In order to overcome scaling issues, application developers often resort to administrative acrobatics, also known as database sharding. Sharding helps to improve application performance and throughput by splitting the database into two or more shards. Unfortunately, this practice also requires application developers to code transactional consistency into their applications. Getting transactional consistency across multiple SQL database shards can prove to be very difficult. Sharding requires developers to think about things like rollbacks, constraints, and referential integrity across tables within their applications when these types of concerns are best handled by the database. It also makes other common operations such as joins, searches, and memory management very difficult. In short, the very solution implemented to overcome throughput issues becomes a bottleneck in and of itself. What if database sharding was no longer required to scale your application? Let me explain. For the past several months I have been following and writing about NuoDB, a hot new SQL database technology out of Cambridge, MA. NuoDB is officially out of beta and they have recently released their first release candidate so I decided to dig into the database in a little more detail. Their architecture is very interesting and exciting because it completely eliminates the need to shard a database to achieve higher throughput. Each NuoDB database consists of at least three or more processes that enable a single database to run across multiple hosts. These processes include a Broker, a Transaction Engine and a Storage Manager.  Brokers are responsible for connecting client applications to Transaction Engines and maintain a global view of the network to keep track of the multiple Transaction Engines available at any time. Transaction Engines are in-memory processes that client applications connect to for processing SQL transactions. Storage Managers are responsible for persisting data to disk and serving up records to the Transaction Managers if they don’t exist in memory. The secret to NuoDB’s approach to solving the sharding problem is that it is a truly distributed, peer-to-peer, SQL database. Each of its processes can be deployed across multiple hosts. When client applications need to connect to a Transaction Engine, the Broker will automatically route the request to the most available process. Since multiple Transaction Engines and Storage Managers running across multiple host machines represent a single logical database, you never have to resort to sharding to get the throughput your application requires. NuoDB is a new pioneer in the SQL database world. They are making database scalability simple by eliminating the need for acrobatics such as sharding, and they are also making general administration of the database simpler as well.  Their distributed database appears to you as a user like a single SQL Server database.  With their RC1 release they have also provided a web based administrative console that they call NuoConsole. 
This tool makes it extremely easy to deploy and manage NuoDB processes across one or multiple hosts with the click of a mouse button. See for yourself by downloading NuoDB here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: NuoDB

    Read the article

  • Creating a Synchronous BPEL composite using File Adapter

    - by [email protected]
    By default, the JDeveloper wizard generates asynchronous WSDLs when you use technology adapters. Typically, a user follows these steps when creating an adapter scenario in 11g: 1) Creates a SOA Application with either "Composite with BPEL" or an "Empty Composite". Furthermore, if the user chooses "Empty Composite", then he or she is required to drop the "BPEL Process" from the "Service Components" pane onto the SOA Composite Editor. Either way, the user comes to the screen below where he/she fills in the process details. Please note that the user is required to choose "Define Service Later" as the template. 2) Creates the inbound service and outbound references and wires them with the BPEL component:     3) And, finally, creates the BPEL process with the initiating <receive> activity to retrieve the payload and an <invoke> activity to write the payload.     This is how most BPEL processes that use Adapters are modeled. And, if we scrutinize the generated WSDL, we can clearly see that the generated WSDL is one-way, and that makes the BPEL process asynchronous (see below).   In other words, the inbound FileAdapter would poll for files in the directory and, for every file that it finds there, it would translate the content into XML and publish to BPEL. But, since the BPEL process is asynchronous, the adapter would return immediately after the publish and perform the required post-processing, e.g. deletion/archival and so on.  The disadvantage with such asynchronous BPEL processes is that it becomes difficult to throttle the inbound adapter. In other words, the inbound adapter would keep sending messages to BPEL without waiting for the downstream business processes to complete. This might lead to several issues including higher memory usage, CPU usage and so on. In order to alleviate these problems, we will manually tweak the WSDL and BPEL artifacts into synchronous processes. Once we have synchronous BPEL processes, the inbound adapter would automatically throttle itself since the adapter would be forced to wait for the downstream process to complete with a <reply> before processing the next file or message and so on. Please see the tweaked WSDL below and please note that we have converted the one-way WSDL to a two-way WSDL, thereby making it synchronous: Add a <reply> activity to the inbound adapter partnerlink at the end of your BPEL process, e.g.   Finally, your process will look like this:   You are done.   Please remember that such an exercise is NOT required for Mediator since the Mediator routing rules are sequential by default. In other words, the Mediator uses the caller thread (inbound file adapter thread) for processing the routing rules. This is the case even if the WSDL for Mediator is one-way.

    Read the article

  • ARTS Reference Model for Retail

    - by Sanjeev Sharma
    Consider a hypothetical scenario where you have been tasked to set up retail operations for a electronic goods or daily consumables or a luxury brand etc. It is very likely you will be faced with the following questions: What are the essential business capabilities that you must have in place?  What are the essential business activities under-pinning each of the business capabilities, identified in Step 1? What are the set of steps that you need to perform to execute each of the business activities, identified in Step 2? Answers to the above will drive your investments in software and hardware to enable the core retail operations. More importantly, the choices you make in responding to the above questions will several implications in the short-run and in the long-run. In the short-term, you will incur the time and cost of defining your technology requirements, procuring the software/hardware components and getting them up and running. In the long-term, as you grow in operations organically or through M&A, partnerships and franchiser business models  you will invariably need to make more technology investments to manage the greater complexity (scale and scope) of business operations.  "As new software applications, such as time & attendance, labor scheduling, and POS transactions, just to mention a few, are introduced into the store environment, it takes a disproportionate amount of time and effort to integrate them with existing store applications. These integration projects can add up to 50 percent to the time needed to implement a new software application and contribute significantly to the cost of the overall project, particularly if a systems integrator is called in. This has been the reality that all retailers have had to live with over the last two decades. The effect of the environment has not only been to increase costs, but also to limit retailers' ability to implement change and the speed with which they can do so." (excerpt taken from here) Now, one would think a lot of retailers would have already gone through the pain of finding answers to these questions, so why re-invent the wheel? Precisely so, a major effort began almost 17 years ago in the retail industry to make it less expensive and less difficult to deploy new technology in stores and at the retail enterprise level. This effort is called the Association for Retail Technology Standards (ARTS). Without standards such as those defined by ARTS, you would very likely end up experiencing the following: Increased Time and Cost due to resource wastage arising from re-inventing the wheel i.e. re-creating vanilla processes from scratch, and incurring, otherwise avoidable, mistakes and errors by ignoring experience of others Sub-optimal Process Efficiency due to narrow, isolated view of processes thereby ignoring process inter-dependencies i.e. optimizing parts but not the whole, and resulting in lack of transparency and inter-departmental finger-pointing Embracing ARTS standards as a blue-print for establishing or managing or streamlining your retail operations can benefit you in the following ways: Improved Time-to-Market from parity with industry best-practice processes e.g. ARTS, thus avoiding “reinventing the wheel” for common retail processes and focusing more on customizing processes for differentiations, and lowering integration complexity and risk with a standardized vocabulary for exchange between internal and external i.e. 
partner systems; Lower Operating Costs by embracing the ARTS enterprise-wide process reference model for developing and streamlining retail operations holistically instead of a narrow, siloed view, and procuring IT systems in compliance with ARTS, thus avoiding IT budget marginalization. While parity with an industry standard such as the ARTS business process model does not by itself create a differentiation, it does provide a higher starting point for bridging the strategy-execution gap in setting up and improving retail operations.

    Read the article

  • fork() within a fork()

    - by codingfreak
    Hi, is there any way to differentiate the child processes created by different fork() calls within a program?

global variable i;

SIGCHLD handler function() { i--; }

handle() {
    fork()                          --> FORK2
}

main() {
    while(1) {
        if (i < 5) {
            i++;
            if ((fpid = fork()) == 0)   --> FORK1
                handle()
            else (fpid > 0)
                .....
        }
    }
}

Is there any way I can differentiate between the child processes created by FORK1 and FORK2? I am trying to decrement the value of the global variable 'i' in the SIGCHLD handler function, and it should be decremented only for the processes created by FORK1. I tried to use an array and save the process ids of the child processes created by FORK1 [this code is placed in the fpid>0 part] and then decrement the value of 'i' only if the process id of the dead child is within the array. But this didn't work out, as sometimes the child processes die so fast that the array is not updated in time and everything gets messed up. So is there a better solution for this problem?
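    One pattern that usually closes this race — a sketch under POSIX assumptions, not the poster's actual code — is to block SIGCHLD around the FORK1 fork() and the array update, so the handler cannot run before the new pid has been recorded, and to reap children inside the handler with waitpid(..., WNOHANG), decrementing i only for pids found in the FORK1 array. Note also that if handle() (and therefore FORK2) only ever runs inside a FORK1 child, the FORK2 grandchildren deliver their SIGCHLD to that child, not to the original parent, so they never reach this handler in the first place.

    #include <signal.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define MAX_KIDS 64

    static volatile sig_atomic_t i = 0;          /* live FORK1 children   */
    static volatile pid_t fork1_kids[MAX_KIDS];  /* pids created by FORK1 */

    static int is_fork1_child(pid_t pid)
    {
        for (int k = 0; k < MAX_KIDS; k++)
            if (fork1_kids[k] == pid) { fork1_kids[k] = 0; return 1; }
        return 0;
    }

    static void on_sigchld(int sig)
    {
        (void)sig;
        pid_t pid;
        /* reap everything that exited; decrement i only for FORK1 children */
        while ((pid = waitpid(-1, NULL, WNOHANG)) > 0)
            if (is_fork1_child(pid))
                i--;
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigchld;
        sa.sa_flags = SA_RESTART;
        sigaction(SIGCHLD, &sa, NULL);

        sigset_t chld, old;
        sigemptyset(&chld);
        sigaddset(&chld, SIGCHLD);

        for (int n = 0; n < 10; n++) {
            if (i < 5) {
                /* block SIGCHLD so the handler cannot fire between fork()
                 * and the moment the child's pid is stored in the array   */
                sigprocmask(SIG_BLOCK, &chld, &old);
                pid_t fpid = fork();                       /* FORK1 */
                if (fpid == 0) {
                    sigprocmask(SIG_SETMASK, &old, NULL);
                    /* handle() would go here and may fork() again (FORK2) */
                    _exit(0);
                }
                i++;
                for (int k = 0; k < MAX_KIDS; k++)
                    if (fork1_kids[k] == 0) { fork1_kids[k] = fpid; break; }
                sigprocmask(SIG_SETMASK, &old, NULL);
            }
            sleep(1);
        }
        return 0;
    }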

    Read the article

  • WCF over named pipes

    - by Christoph
    Hi, I have a problem with the following scenario: there is a Windows service running which spawns several processes. These processes open a WCF service host over a named pipe binding. Now the parent Windows service tries to ping (connect to) the child processes using the WCF proxy over the well-known named pipe. This, however, fails with "Endpoint not found". If I run the parent process as a console application, it works fine. Any ideas? I was thinking about permissions, but the child processes should inherit the permissions of the service; besides, they are in the same session as well. Thanks, Christoph

    Read the article

  • Can someone explain the "use-cases" for the default munin graphs?

    - by exhuma
    When installing munin, it activates a default set of plugins (at least on Ubuntu). Alternatively, you can simply run munin-node-configure to figure out which plugins are supported on your system. Most of these plugins plot straightforward data. My question is not to explain the nature of the data (well... maybe for some) but what is it that you look for in these graphs? It is easy to install munin and see fancy graphs. But having the graphs and not being able to "read" them renders them totally useless. I am going to list the standard plugins which are enabled by default on my system, so it's going to be a long list. For completeness, I am also going to list plugins which I think I understand and give a short explanation as to what I think each is used for. Please correct me if I am wrong on any of them. So let me split this question into three parts:

Plugins where I don't even understand the data
Plugins where I understand the data but don't know what I should look out for
Plugins which I think I understand

Plugins where I don't even understand the data

These may contain questions that are not necessarily aimed at munin alone. Not understanding the data usually means a gap in fundamental knowledge of operating systems/hardware.... ;) Feel free to respond with a "giyf" answer. These are plugins where I can only guess what's going on... I hardly want to look at these while only "guessing"...

Disk IOs per device (IOs/second): What's an IO? I know it stands for input/output. But that's as far as it goes.
Disk latency per device (Average IO wait): Not a clue what an "IO wait" is...
IO Service Time: This one is a huge mess, and it's near impossible to see anything in the graph at all.

Plugins where I understand the data but don't know what I should look out for

IOStat (blocks/second read/written): I assume the things to look out for here are spikes? Which would mean that the device is in heavy use?
Available entropy (bytes): I assume that this is important for random number generation? Why would I graph this? So far the value has always been near constant.
VMStat (running/I/O sleep processes): What's the difference between this one and the "processes" graph? Both show running/sleeping processes, whereas the "Processes" graph seems to have more details.
Disk throughput per device (bytes/second read/written): What's the difference between this one and the "IOStat" graph?
inode table usage: What should I look for in this graph?

Plugins which I think I understand

I'll be guessing some things here... correct me if I am wrong.

Disk usage in percent (percent): How much disk space is used/remaining. As this approaches 100%, you should consider cleaning up or extending the partition. This is extremely important for the root partition.
Firewall Throughput (packets/second): The number of packets passing through the firewall. If this is spiking for a longer period of time, it could be a sign of a DOS attack (or we are simply receiving a large file). It can also give you an idea about your firewall performance. If it's levelling out and you need more "power", you should consider load balancing. If it's levelling out and you see a correlation with your CPU load, it could also mean that your hardware is not fast enough. Correlations with disk usage could point to excessive LOG targets in your FW config.
eth0 errors (packets in/out): Network errors. If this value is increasing, it could be a sign of faulty hardware.
eth0 traffic (bits/second in/out): Raw network traffic. This should correlate with Firewall throughput.
number of threads: An ever-increasing value might point to a process not properly closing threads. Investigate!
processes: Breakdown of active processes (including sleeping). A quick spike in here might point to a fork-bomb. A slow but ever-increasing value might point to an application spawning sub-processes but not properly closing them. Investigate using ps faux.
process priority: This shows the distribution of process priorities. Having only high-priority processes is not of much use. Consider de-prioritizing some.
cpu usage: Fairly straightforward. If this is spiking, you may have an attack going on, or a process is hogging the CPU. If it's slowly increasing and approaching max in normal operations, you should consider upgrading your hardware (or load-balancing).
file table usage: Number of actively open files. If this is reaching max, you may have a process opening, but not properly releasing, files.
load average: Shows a summarized value for the system load. Should correlate with CPU usage. Increasing values can come from a number of sources. Look for correlations with other graphs.
memory usage: A graphical representation of your memory. As long as you have a lot of unused+cache+buffers you are fine.
swap in/out: Shows the activity on your swap partition. This should always be 0. If you see activity on this, you should add more memory to your machine!
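    Many of these default graphs are just plots of plain kernel counters, so peeking at the underlying /proc files takes some of the mystery out of them. As a small illustration (Linux-only, and not part of munin itself), the C program below prints two of the raw sources behind graphs discussed above: the 1/5/15-minute load averages from /proc/loadavg and the kernel's available-entropy estimate from /proc/sys/kernel/random/entropy_avail, which is the kind of value the entropy graph is drawn from.

    #include <stdio.h>

    /* Dump a one-line /proc file with a label in front of it. */
    static void dump_file(const char *label, const char *path)
    {
        char buf[256];
        FILE *f = fopen(path, "r");
        if (!f) { perror(path); return; }
        if (fgets(buf, sizeof buf, f))
            printf("%-18s %s", label, buf);
        fclose(f);
    }

    int main(void)
    {
        dump_file("load average:", "/proc/loadavg");
        dump_file("entropy (bits):", "/proc/sys/kernel/random/entropy_avail");
        return 0;
    }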

    Read the article

< Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >