Search Results

Search found 657 results on 27 pages for 'metrics'.

Page 16/27 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • Which JMX statistics to watch out for in Catalina/Tomcat?

    - by geoaxis
    I have configured OpenNMS to collect all kinds of numeric data coming out of Tomcat 7's JMX interface, and there is a lot of it. I want to monitor this Tomcat instance so that I can avoid downtime and lockups. What metrics should I be watching? I am already monitoring things like CPU, memory, and network via SNMP. Over the JMX connection, the attributes I have found interesting so far are RequestsCount on Catalina:type=GlobalRequestProcessor,name="ajp-bio-/a.b.c.d-XXXX", and the active session count and its maximum on Catalina:type=Manager,context=/myApp,host=localhost.
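
    (For reference, a minimal standalone JMX client can read any of these attributes outside OpenNMS. This is only a sketch: it assumes Tomcat was started with JMX remote enabled on port 9010, and the ThreadPool MBean name and attribute names below are placeholders that depend on the connector configuration.)

        import javax.management.MBeanServerConnection;
        import javax.management.ObjectName;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXConnectorFactory;
        import javax.management.remote.JMXServiceURL;

        public class TomcatJmxProbe {
            public static void main(String[] args) throws Exception {
                // assumes -Dcom.sun.management.jmxremote.port=9010 on the Tomcat JVM
                JMXServiceURL url = new JMXServiceURL(
                        "service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
                try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
                    MBeanServerConnection conn = jmxc.getMBeanServerConnection();
                    // hypothetical MBean name; browse Catalina:* to find yours
                    ObjectName pool = new ObjectName(
                            "Catalina:type=ThreadPool,name=\"http-bio-8080\"");
                    Number busy = (Number) conn.getAttribute(pool, "currentThreadsBusy");
                    Number max = (Number) conn.getAttribute(pool, "maxThreads");
                    System.out.println("busy=" + busy + " max=" + max);
                }
            }
        }

    Watching a thread pool's busy count against its maximum, alongside the request and session counters above, is one common way to spot an instance approaching a lockup.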

    Read the article

  • SUM of metric for normalized logical hierarchy

    - by Alex254
    Suppose there's the following table Table1, describing a parent-child relationship and a metric:

        Parent | Child | Metric (of a child)
        ------------------------------------
        name0  | name1 | a
        name0  | name2 | b
        name1  | name3 | c
        name2  | name4 | d
        name2  | name5 | e
        name3  | name6 | f

    Characteristics:
    1) A child always has one and only one parent;
    2) A parent can have multiple children (name2 has name4 and name5 as children);
    3) The number of levels in this "hierarchy" and the number of children for any given parent are arbitrary and do not depend on each other.

    I need a SQL query that will return a result set with each name and the sum of the metrics of all its descendants down to the bottom level, plus its own, so for this example table the result would be (look carefully at name1):

        Name  | Metric
        ------------------
        name1 | a + c + f
        name2 | b + d + e
        name3 | c + f
        name4 | d
        name5 | e
        name6 | f

    (name0 is irrelevant and can be excluded.) It should be ANSI or Teradata SQL. I got as far as a recursive query that can return the SUM(metric) of all descendants of a given name:

        WITH RECURSIVE temp_table (Child, metric) AS (
            SELECT root.Child, root.metric
            FROM table1 root
            WHERE root.Child = 'name1'
            UNION ALL
            SELECT indirect.Child, indirect.metric
            FROM temp_table direct, table1 indirect
            WHERE direct.Child = indirect.Parent)
        SELECT SUM(metric) FROM temp_table;

    Is there a way to turn this query into a function that takes a name as an argument and returns this sum, so it can be called like this?

        SELECT Sum_Of_Descendants (Child) FROM Table1;

    Any suggestions about how to approach this from a different angle would be appreciated as well, because even if the above is implementable, it will perform poorly - there would be many repeated reads of the metrics (the value f would be read three times in this example). Ideally, the query should read the metric of each name only once.
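
    (The question calls for SQL, but the "read each metric only once" goal is just bottom-up aggregation with memoization. As a language-neutral sketch of that idea - the sample rows are from the question above, everything else is illustrative - a memoized post-order walk in Java touches each metric exactly once:)

        import java.util.*;

        public class DescendantSums {
            public static void main(String[] args) {
                // (parent, child, metric-of-child) rows, as in the question
                String[][] rows = {
                    {"name0","name1","a"}, {"name0","name2","b"},
                    {"name1","name3","c"}, {"name2","name4","d"},
                    {"name2","name5","e"}, {"name3","name6","f"},
                };
                Map<String,List<String>> children = new HashMap<>();
                Map<String,String> metric = new HashMap<>();
                for (String[] r : rows) {
                    children.computeIfAbsent(r[0], k -> new ArrayList<>()).add(r[1]);
                    metric.put(r[1], r[2]);
                }
                Map<String,String> memo = new HashMap<>();
                for (String name : metric.keySet()) sum(name, children, metric, memo);
                memo.forEach((n, s) -> System.out.println(n + " | " + s)); // order arbitrary
            }

            // post-order aggregation: each metric is read once, subtotals are memoized
            static String sum(String name, Map<String,List<String>> children,
                              Map<String,String> metric, Map<String,String> memo) {
                String cached = memo.get(name);
                if (cached != null) return cached;
                StringBuilder s = new StringBuilder(metric.get(name));
                for (String c : children.getOrDefault(name, List.of())) {
                    s.append(" + ").append(sum(c, children, metric, memo));
                }
                memo.put(name, s.toString());
                return s.toString();
            }
        }

    Printing name1 yields "a + c + f", and the subtotal for name6 (f) is computed once and reused, which is exactly the property the question asks for from the SQL version.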

    Read the article

  • Justifying a memory upgrade, take 2

    - by AngryHacker
    Previously I asked a question about which metrics I should measure (e.g., before and after) to justify a memory upgrade. Perfmon was suggested. I'd like to know which specific perfmon counters I should be measuring. So far I have:

    PhysicalDisk/Avg. Disk Queue Length (for each drive)
    PhysicalDisk/Avg. Disk Write Queue Length (for each drive)
    PhysicalDisk/Avg. Disk Read Queue Length (for each drive)
    Processor/% Processor Time
    SQLServer:BufferManager/Buffer cache hit ratio

    What other ones should I use?

    Read the article

  • What are performance limits of a database?

    - by Tommy
    What are some rough performance limits (reads/s, writes/s) for a single database server (no master-slave architecture), assuming storage on disk? How many reads/s and writes/s, depending on the kind of disk (SSD vs. non-SSD), assuming simple operations (select one row by primary key; update one row; correctly indexed)? I assume this limit is mostly dependent on disk seek/write speed. EDIT: My question is more about getting rough metrics for the number of operations a database supports: to be able to know, for example, whether a new feature triggering 300 inserts/s can be supported without scaling out with additional servers.
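
    (Generic numbers vary wildly with hardware, so one pragmatic option is to measure the actual stack. A rough JDBC sketch along these lines - the URL, credentials, driver, table, and id range are all placeholders - times indexed primary-key reads on your own machine:)

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class PkReadBench {
            public static void main(String[] args) throws Exception {
                try (Connection c = DriverManager.getConnection(
                            "jdbc:mysql://localhost/test", "user", "pass");
                     PreparedStatement ps = c.prepareStatement(
                            "SELECT payload FROM sample_table WHERE id = ?")) {
                    int n = 10_000;
                    long t0 = System.nanoTime();
                    for (int i = 0; i < n; i++) {
                        ps.setInt(1, 1 + (i * 7919) % 100_000); // scatter ids to defeat locality
                        try (ResultSet rs = ps.executeQuery()) {
                            rs.next();
                        }
                    }
                    double secs = (System.nanoTime() - t0) / 1e9;
                    System.out.printf("%.0f reads/s%n", n / secs);
                }
            }
        }

    Running it once cold and once warm gives a feel for how much of the limit is disk seeks versus cache hits, which is the distinction the SSD/non-SSD question turns on.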

    Read the article

  • Oracle OpenWorld Session: “Business Driven Development with BPM: Lessons from the Real World”

    - by Ajay Khanna
    One of the key values that BPM promises is "Business Empowerment". The people closest to the processes, who participate in them every day, are the ones who know the most about them. These are the people who run day-to-day operations, triage customer issues, and envision improvements and innovations. It is, therefore, imperative that when a company decides to use BPM technology to automate its business processes, business people take the driver's seat. BPM is not an IT-only project.

    Oracle BPM Suite has been designed with this core tenet of BPM, Business Empowerment, in mind. The result is the business-user-centered design of Process Composer. Process Composer is designed to let business users document their processes, analyze them using simulation, create web forms, specify business rules, and even run them in testing mode using the process player, to see if the designed process meets their needs. This does not mean that IT has no role in the process. In fact, Oracle BPM Suite has made it very easy for business and IT to collaborate. The same process can be shared among business and IT stakeholders, and each can collaborate to create model-driven, process-based executable applications. A process may need to integrate with multiple systems via various mechanisms, and IT leads the system and data integration effort. IT helps fine-tune the performance of process applications and ensures that the deployment of a process application meets scalability and failover standards.

    In this session, we saw Harish Gaur and Satya Narayanan from Oracle demonstrate the roles business and IT play in BPM projects and how Oracle BPM Suite enables business-IT collaboration to design and automate process-based applications. They also discussed real-life customer stories. Some key takeaways from this session:

    There are no IT projects, only business initiatives requiring IT support
    Identify high-impact processes - critical for better BPM ROI
    Identify key metrics to measure process performance
    Align the business layer with the IT layer

    Read the article

  • Spotlight on GlassFish 4.1: #7 WebSocket Session Throttling and JMX Monitoring

    - by delabassee
    'Spotlight on GlassFish 4.1' is a series of posts that highlights specific enhancements of the upcoming GlassFish 4.1 release. It could be a new feature, a fix, a behavior change, a tip, etc.

    #7 WebSocket Session Throttling and JMX Monitoring

    GlassFish 4.1 embeds Tyrus 1.8.1, which is compliant with the Maintenance Release of JSR 356 ("WebSocket API 1.1"). This release also brings additional features to the WebSocket support in GlassFish.

    JMX Monitoring: Tyrus now exposes WebSocket metrics through JMX. In GF 4.1, the following message statistics are monitored for both sent and received messages:

    messages count
    messages count per second
    average message size
    smallest message size
    largest message size

    These statistics are collected independently of the message type (global count) and per specific message type (text, binary, and control messages). In GF 4.1, Tyrus also monitors, and exposes through JMX, errors at the application and endpoint levels. For more information, please check Tyrus JMX Monitoring.

    Session Throttling: To preserve resources on the server hosting WebSocket endpoints, Tyrus now offers ways to limit the number of open sessions. These limits can be configured at different levels:

    per whole application
    per endpoint
    per remote endpoint address (client IP address)

    For more details, check Tyrus Session Throttling. The next entry will focus on Tyrus's new client-side features.
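
    (The post doesn't spell out the MBean names, so here is a hedged way to discover them: a plain JMX query against a running GlassFish. Port 8686 is GlassFish's usual JMX connector, and the "org.glassfish.tyrus" domain pattern is an assumption - widen it to "*:*" if it matches nothing.)

        import java.util.Set;
        import javax.management.MBeanServerConnection;
        import javax.management.ObjectName;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXConnectorFactory;
        import javax.management.remote.JMXServiceURL;

        public class ListTyrusMBeans {
            public static void main(String[] args) throws Exception {
                JMXServiceURL url = new JMXServiceURL(
                        "service:jmx:rmi:///jndi/rmi://localhost:8686/jmxrmi");
                try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
                    MBeanServerConnection conn = jmxc.getMBeanServerConnection();
                    // assumed domain pattern; prints every matching MBean name
                    Set<ObjectName> names =
                            conn.queryNames(new ObjectName("org.glassfish.tyrus:*"), null);
                    for (ObjectName name : names) {
                        System.out.println(name);
                    }
                }
            }
        }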

    Read the article

  • Oracle SOA Suite for healthcare integration Dashboard

    - by Nitesh Jain
    Oracle SOA Suite for healthcare integration came up with a new way of monitoring, where the user can configure a dashboard and follow dynamic runtime changes. Oracle SOA Suite for healthcare integration dashboards display information about the current health of the endpoints in a healthcare integration application. You can create and configure multiple dashboards as needed to monitor the status and volume metrics for the endpoints you have defined. The dashboards reflect changes that occur in the runtime repository, such as purging runtime instance data, new messages processed, and new error messages. You can display data for various time periods, and you can manually refresh the data in real time or set the dashboard to refresh automatically at set intervals.

    A dashboard shows the following information:

    Status: The current status of the endpoint, such as Running, Idle, Disabled, or Errors.
    Messages Sent: The number of messages sent by the endpoint in the specified time period.
    Messages Received: The number of messages received by the endpoint in the specified time period.
    Errors: The number of messages with errors for the endpoint in the given time period.
    Last Sent: The date and time the last message was sent from the endpoint.
    Last Received: The date and time the last message was received by the endpoint.
    Last Error: The date and time of the last error for the endpoint.

    It also shows a detailed view of a specific endpoint:

    The document type.
    The number of messages received per second.
    The total number of messages processed in the specified time period.
    The average size of each message.

    Read the article

  • Feature Updates to the Windows Azure Portal

    - by Clint Edmonson
    Lots of activity over at the Windows Azure portal this weekend, including some exciting new features and major improvements to existing features. Here are the highlights:

    Support for Managing Co-administrators: Set up account co-administrators to allow others to share service management duties for each Azure subscription.

    Import/Export support for SQL Databases: Export existing SQL Azure databases to blob storage using SQL Server 2012's BACPAC format. Create a new SQL Azure database from an existing BACPAC stored in blob storage.

    Storage Container Management and Access Control: Create blob storage containers directly within the portal, edit their public/private access settings, and drill into storage containers to see the blobs contained within them.

    Improved Cloud Service Status Notifications: Detailed health status information about cloud services and roles as they transition between states.

    Virtual Machine Experience Enhancements: Option to automatically delete corresponding VHD files from blob storage when deleting VM disks.

    Service Bus Management and Monitoring: Ability to create and manage Service Bus Namespaces, Queues, Topics, Relays and Subscriptions. Rich monitoring of Topics, Queues, and Subscriptions with detailed and customizable dashboard metrics. Entity status (Topic, Queue, or Subscription) can be changed interactively via the dashboard. Direct links to the Access Control Services (ACS) namespaces when working with Service Bus access keys.

    Media Services Monitoring Support: Monitor encoding jobs that are queued for processing, as well as active, failed and queued tasks for encoding jobs.

    The above features are all now live in production and available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Stay tuned to my twitter feed for Windows Azure announcements, updates, and links: @clinted

    Reference ID: P7VVJCM38V8R

    Read the article

  • No Internet access while being connected to VPN using Cisco VPN Client 5.

    - by szeldon
    Hi, I have access to a corporate VPN using Cisco VPN Client 5.0.00:0340, but when I'm connected to it, I don't have Internet access. I'm using Windows XP SP3. As suggested here http://forums.speedguide.net/showthread.php?t=209167 , I tried to enable "Allow Local LAN Access", but it doesn't work. I also tried a second solution - deleting an entry using the "route" command - but it didn't help. I used "route delete 192.168.100.222". It's the third day of my attempts to solve this issue and I have no idea what else to do. I'm not very experienced in VPN matters, but I know something about networking. Based on my knowledge, I think it's theoretically possible to reach the Internet through my local network and route only corporate traffic through the VPN connection. Theoretically it should look like this:

    every IP inside the corporation - VPN interface
    every other IP - my Ethernet interface

    I've tried many ways of changing those routes, but none of them worked. I'd really appreciate any help.

    My route configuration before connecting to the VPN:

        ===========================================================================
        Interface List
        0x1 ........................... MS TCP Loopback interface
        0x2 ...00 c0 a8 de 79 01 ...... Atheros AR5006EG Wireless Network Adapter - Teefer2 Miniport
        0x10005 ...02 00 4c 4f 4f 50 ...... Microsoft Loopback Card
        0x160003 ...00 17 42 31 0e 16 ...... Marvell Yukon 88E8055 PCI-E Gigabit Ethernet Controller - Teefer2 Miniport
        ===========================================================================
        Active Routes:
        Network Destination    Netmask          Gateway          Interface        Metric
        0.0.0.0                0.0.0.0          192.168.101.254  192.168.100.222  10
        10.0.0.0               255.255.255.0    10.0.0.10        10.0.0.10        30
        10.0.0.10              255.255.255.255  127.0.0.1        127.0.0.1        30
        10.255.255.255         255.255.255.255  10.0.0.10        10.0.0.10        30
        127.0.0.0              255.0.0.0        127.0.0.1        127.0.0.1        1
        192.168.100.0          255.255.254.0    192.168.100.222  192.168.100.222  1
        192.168.100.222        255.255.255.255  127.0.0.1        127.0.0.1        1
        192.168.100.255        255.255.255.255  192.168.100.222  192.168.100.222  1
        224.0.0.0              240.0.0.0        10.0.0.10        10.0.0.10        3
        224.0.0.0              240.0.0.0        192.168.100.222  192.168.100.222  1
        255.255.255.255        255.255.255.255  10.0.0.10        10.0.0.10        1
        255.255.255.255        255.255.255.255  192.168.100.222  192.168.100.222  1
        255.255.255.255        255.255.255.255  192.168.100.222  2                1
        Default Gateway:       192.168.101.254
        ===========================================================================

    My route configuration after connecting to the VPN:

        ===========================================================================
        Interface List
        0x1 ........................... MS TCP Loopback interface
        0x2 ...00 c0 a8 de 79 01 ...... Atheros AR5006EG Wireless Network Adapter - Teefer2 Miniport
        0x10005 ...02 00 4c 4f 4f 50 ...... Microsoft Loopback Card
        0x160003 ...00 17 42 31 0e 16 ...... Marvell Yukon 88E8055 PCI-E Gigabit Ethernet Controller - Teefer2 Miniport
        0x170006 ...00 05 9a 3c 78 00 ...... Cisco Systems VPN Adapter - Teefer2 Miniport
        ===========================================================================
        Active Routes:
        Network Destination    Netmask          Gateway          Interface        Metric
        0.0.0.0                0.0.0.0          10.251.6.1       10.251.6.51      1
        10.0.0.0               255.255.255.0    10.0.0.10        10.0.0.10        30
        10.0.0.0               255.255.255.0    10.251.6.1       10.251.6.51      10
        10.0.0.10              255.255.255.255  127.0.0.1        127.0.0.1        30
        10.1.150.10            255.255.255.255  192.168.101.254  192.168.100.222  1
        10.251.6.0             255.255.255.0    10.251.6.51      10.251.6.51      20
        10.251.6.51            255.255.255.255  127.0.0.1        127.0.0.1        20
        10.255.255.255         255.255.255.255  10.0.0.10        10.0.0.10        30
        10.255.255.255         255.255.255.255  10.251.6.51      10.251.6.51      20
        127.0.0.0              255.0.0.0        127.0.0.1        127.0.0.1        1
        192.168.100.0          255.255.254.0    192.168.100.222  192.168.100.222  10
        192.168.100.0          255.255.254.0    10.251.6.1       10.251.6.51      10
        192.168.100.222        255.255.255.255  127.0.0.1        127.0.0.1        10
        192.168.100.255        255.255.255.255  192.168.100.222  192.168.100.222  10
        213.158.197.124        255.255.255.255  192.168.101.254  192.168.100.222  1
        224.0.0.0              240.0.0.0        10.0.0.10        10.0.0.10        30
        224.0.0.0              240.0.0.0        10.251.6.51      10.251.6.51      20
        224.0.0.0              240.0.0.0        192.168.100.222  192.168.100.222  10
        255.255.255.255        255.255.255.255  10.0.0.10        10.0.0.10        1
        255.255.255.255        255.255.255.255  10.251.6.51      10.251.6.51      1
        255.255.255.255        255.255.255.255  192.168.100.222  192.168.100.222  1
        255.255.255.255        255.255.255.255  192.168.100.222  2                1
        Default Gateway:       10.251.6.1
        ===========================================================================

    Read the article

  • Solaris TCP/IP performance tuning

    - by Andy Faibishenko
    I am trying to tune a high-message-traffic system running on Solaris. The architecture is a large number (600) of clients which connect via TCP to a big Solaris server and then send/receive relatively small messages (0.5 to 1 KB payload) at high rates. The goal is to minimize the latency of each message processed. I suspect that the TCP stack of the server is getting overwhelmed by all the traffic. What are some commands/metrics that I can use to confirm this, and if it is true, what is the best way to alleviate this bottleneck? P.S. I posted this on StackOverflow originally. One person suggested snoop and dtrace. dtrace seems pretty general - are there any additional pointers on how to use it to diagnose TCP issues?

    Read the article

  • SharePoint Server 2007 and HTML Forms - How to control access rights

    - by Anarkie
    I'm working with hosted SharePoint 2007 with Forms Server. I need to allow clients to submit HTML forms designed in InfoPath. The problem is, I need to make sure the clients don't see the library, as there is sensitive data on these forms. I also need a repeated library based on the internal admin records and requirements. Outside of making a separate library per customer, does anyone have any suggestions?

    My goal:
    1: Customers enter their requests through a link or provided page
    2: Internally address the requests and perform the required arrangements; add billing and payment fields
    3: Have SharePoint metrics, reports, etc. based on the provided information and status

    Thanks in advance!!

    Read the article

  • How important is Domain knowledge vs. Technical knowledge?

    - by Mayank
    I am working on a Trading and Risk Management application and, although I come from a C# background, I have been asked to work on SSIS packages. Now, I can live with that. The pain point is that there is too much emphasis on business understanding. Trading (Energy Trading, to be exact) is a HUGE area and understanding every little bit of it is overwhelming. For the past two months I have been working on understanding the business terms - Mark to Market, Risk Metrics, Positions, PnL, Greeks, Instruments, Book Structure... every little detail (you get the point). Now, IMHO, this is the job of a BA. Sure, it is very important for developers to understand the business, but where do you draw the line? When I talked to my manager about this, he almost mocked me by saying that anybody can learn a technology in a week; it's the business that's harder. My long-term aspiration is to remain on the technical side, probably become an architect (if possible). If I wanted to focus so much on business I would have pursued an MBA! I want to know whether I am wrong or too naive in underestimating the importance of business knowledge, or whether my frustration is justified.

    Read the article

  • Excel - add target line to stacked bar chart

    - by Chris W
    I've got a stacked bar chart. I'm displaying a set of floating bars to represent high/low ranges for some metrics; by using a transparent fill on the bottom section of each bar I achieve the desired look. What I now need to do is add a horizontal line across the chart to indicate how a particular user's score relates to all of these high/low ranges, so the placement of this line needs to be dynamic, based on a value in a cell. Is there any way to do this? I can't find an easy option. If this were a simple bar chart I could add the target scores as a new series and use the line chart type, but I don't seem able to overlay a second series on the stacked bar chart. I'm using 2003 at the moment, but I can run this in 2007 if that helps.

    Read the article

  • Monitoring host and app parameters in real-time

    - by devopsdude42
    I have a bunch of VMs that I need to monitor in real time. For all nodes I need to watch host parameters like load, network usage, and free memory; for some I need app-specific metrics too, like redis (some variables from the output of the INFO command) and nginx (requests/sec, average request time). Ideally I'd also like to track some parameters from the custom apps that run on these nodes. These parameters should be tracked as a bunch of line charts on a dashboard. I checked out graphite and it looks suitable (although the UX and aesthetics look like they need some love). But setting up and maintaining graphite looks to be a pain, especially since we don't have a full-time person just for this. Are there any alternatives? Or at least something that is simpler to set up and will scale? Reasonable paid services are also OK.

    Read the article

  • Open Source Visualization and Dashboard Software

    - by helios
    I am working on an open source Application Performance Monitoring (APM) tool and am looking for a visualization component with dashboard capabilities. I came across Graphite, which looks pretty good, but I'm wondering if there is anything better out there before I settle on that tool. Here's the list of features I am interested in:

    Must-Have:
    Open source license
    API to submit real-time data
    Web-based visualization interface
    Persistence - file or database

    Nice-To-Have:
    Dashboard capabilities: allow users to select a few metrics (CPU, heap usage, # of active users, etc.) and place them on a single page for easier monitoring.

    Any suggestions?
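
    (On the "API to submit real-time data" point: Graphite accepts metrics over a very simple plaintext protocol - one "<metric.path> <value> <unix-timestamp>" line per datapoint, TCP port 2003 by default. A minimal sketch, with the host and metric name as placeholders:)

        import java.io.OutputStreamWriter;
        import java.io.Writer;
        import java.net.Socket;

        public class GraphitePush {
            public static void main(String[] args) throws Exception {
                long now = System.currentTimeMillis() / 1000L;
                try (Socket s = new Socket("graphite.example.com", 2003);
                     Writer w = new OutputStreamWriter(s.getOutputStream(), "UTF-8")) {
                    // plaintext protocol line: "<metric.path> <value> <unix-ts>\n"
                    w.write("apm.app1.heap.used 123456789 " + now + "\n");
                    w.flush();
                }
            }
        }

    Because the ingestion API is this thin, almost any app can feed it, which is worth weighing against the setup/maintenance cost when comparing alternatives.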

    Read the article

  • txt file descriptor in lsof

    - by wfaulk
    In my experience, files that have the file descriptor of txt in lsof output are the executable file itself and shared objects. The lsof man page says that it means "program text (code and data)". While debugging a problem, I found a large number of data files (specifically, ElasticSearch database index files) that lsof reported as txt. These are definitely not executable files. The process was ElasticSearch itself, which is a Java process, if that helps point someone in the right direction. I want to understand how this process is opening and using these files in a way that causes them to be reported like this. I'm trying to understand some memory utilization, and I suspect that these open files are related in some way to metrics I'm seeing. The system is Solaris 10 x86.
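
    (One plausible explanation - an assumption worth verifying, not a confirmed diagnosis - is memory mapping: ElasticSearch's Lucene indexes are commonly opened via mmap, and lsof attributes mapped files to the process address space rather than to an ordinary numbered descriptor, which would also tie them to the memory figures being investigated. The Java mechanism looks like this:)

        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.file.Path;
        import java.nio.file.StandardOpenOption;

        public class MmapDemo {
            public static void main(String[] args) throws Exception {
                // maps a file into the process address space, as Lucene's
                // MMapDirectory does for index files; the path is a placeholder
                try (FileChannel ch = FileChannel.open(Path.of("/tmp/index.bin"),
                        StandardOpenOption.READ)) {
                    MappedByteBuffer buf =
                            ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
                    System.out.println("first byte: " + buf.get(0));
                }
            }
        }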

    Read the article

  • Picking only the value field out of Cloudwatch Dimensions, Java

    - by GroovyUser
    I have some data retrieved from the CloudWatch APIs; specifically, I have used listMetrics. The data I got from this call is:

        {Metrics: [
          {Namespace: Metric from grails,
           MetricName: hello123,
           Dimensions: [{Name: name, Value: 1425}]},
          {Namespace: Metric from grails,
           MetricName: hello123,
           Dimensions: [{Name: name, Value: 1068}]},

    That is the data I would expect. I need a way to return only the Value fields, not the other things. Is there any way to do this in Java? Thanks in advance.
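
    (A sketch using the AWS SDK for Java - assuming the v1 SDK and default credentials; the namespace string is taken from the output above. Each Metric exposes its dimensions as a list, so pulling out just the values is a nested loop:)

        import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
        import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
        import com.amazonaws.services.cloudwatch.model.Dimension;
        import com.amazonaws.services.cloudwatch.model.ListMetricsRequest;
        import com.amazonaws.services.cloudwatch.model.ListMetricsResult;
        import com.amazonaws.services.cloudwatch.model.Metric;

        public class DimensionValues {
            public static void main(String[] args) {
                AmazonCloudWatch cw = AmazonCloudWatchClientBuilder.defaultClient();
                ListMetricsRequest req = new ListMetricsRequest()
                        .withNamespace("Metric from grails");
                // for large result sets, loop while res.getNextToken() != null
                ListMetricsResult res = cw.listMetrics(req);
                for (Metric m : res.getMetrics()) {
                    for (Dimension d : m.getDimensions()) {
                        System.out.println(d.getValue()); // e.g. 1425, 1068
                    }
                }
            }
        }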

    Read the article

  • EDQ Technical Enablement for OPN (Prague - June 17-19)

    - by milomir.vojvodic
    Oracle Enterprise Data Quality (EDQ) Technical Enablement and Partner Training
    Trusted Data for Your Enterprise Applications

    Oracle Enterprise Data Quality helps organizations achieve maximum value from their business-critical applications by delivering fit-for-purpose data. These products also enable individuals and collaborative teams to quickly and easily identify and resolve any problems in underlying data. With Oracle Enterprise Data Quality, customers can identify new opportunities, improve operational efficiency, and more efficiently comply with industry or governmental regulation. Oracle Enterprise Data Quality is designed to serve as a very channel-friendly platform to OPN. This means that pre-built extensions, components and even complete business solutions can readily be built and shared. This allows our customers/partners to be highly efficient in how they deploy custom business solutions, but also allows our partners to develop specialized components, domain knowledge and even complete business solutions.

    Training is suitable for:
    · Database administrators
    · Architects
    · Technical staff

    Objectives of the training - after completing this course, participants should:
    · Have an understanding of the core functionality of EDQ across profiling, auditing, transforming, parsing and matching data
    · Be able to describe some of the key capabilities and benefits delivered by EDQ
    · Be able to create and run standalone EDQ processes and jobs
    · Be ready to start working with data from customers and (with practice) be able to demonstrate EDQ to customers

    Agenda

    17th June - Fundamentals For Demoing (Profile, Audit, Transform and More):
    Profiling; Auditing; Transforming; Writing and exporting data; Jobs and scheduling; Publishing, packaging and copying EDQ processes; Introduction to the Customer Data Extension Pack; Realtime Processing via Web Services; The Server Console; Run Profiles; Data Interfaces; Sampling; Publishing metrics to the Dashboard; Users and security

    18th June - Matching:
    Matching overview; Basic matching configuration; Matching rule hierarchies; Clustering; Merging; Reviewing possible matches; Outputting Match Data; Case study

    19th June - Address Verification:
    Address Verification Overview; Configuration; Accuracy Flags

    19th June - Parsing:
    Parsing Overview; Phrase profiling; Tailoring a CDEP Parser; Base Tokenization; Classification; Reclassification; Selection; Resolution

    Register Here. Don't miss this FREE event. Space is limited.

    Oracle University, V Parku 2294/4, 148 00 Praha 4
    17.6. - 19.6. 2014, 09:00 a.m. - 17:30 p.m.

    Read the article

  • Quantifying the Value Derived from Your PeopleSoft Implementation

    - by Mark Rosenberg
    As product strategists, we often receive the question, "What's the value of implementing your PeopleSoft software?" Prospective customers and existing customers alike are compelled to justify the cost of new tools, business process changes, and the business impact associated with adopting the new tools. In response to this question, we have been working with many of our customers and implementation partners during the past year to obtain metrics that demonstrate the value obtained from an investment in PeopleSoft applications. The great news is that as a result of our quest to identify value achieved, many of our customers began to monitor their businesses differently and more aggressively than in the past, and a number of them informed us that they have some great achievements to share. For this month, I'll start by pointing out that we have collaborated with one of our implementation partners, Huron Consulting Group, Inc., to articulate the levers for extracting value from implementing the PeopleSoft Grants solution. Typically, education and research institutions, healthcare organizations, and non-profit organizations are the types of enterprises that seek to facilitate and automate research administration business processes with the PeopleSoft Grants solution. If you are interested in understanding the ways in which you can look for value from an implementation, please consider registering for the webcast scheduled for Friday, December 14th at 1pm Central Time in which you'll get to see and hear from our team, Huron Consulting, and one of our leading customers. In the months ahead, we'll plan to post more information about the value customers have measured and reported to us from their implementations and upgrades. If you have a great story about return on investment and want to share it, please contact either [email protected]  or [email protected]. We'd love to hear from you.

    Read the article

  • Conventions for search result scoring

    - by DeaconDesperado
    I assume this type of question is more on-topic here than on regular SO. I have been working on a search feature for my team's web application and have had a lot of success building a multithreaded, "divide and conquer" processing system to work through a large amount of fulltext. Our problem domain is pretty specific. Users of the app generate posts, and as a general rule, posts that are more recent are considered to be of greater relevance. Some of the data we are trying to extract from search is very specific (users' feelings about specific items or things) and we are using Python nltk to do named-entity extraction to find likely interesting query terms. Essentially we look for descriptive adjective-noun pairs and generate a general picture of a user's expressed sentiment as a list of tokens. This search is intended as an internal tool for our team to draw out a local picture of sentiments like "soggy pizza". There's some machine learning in there too, to do entity resolution on terms like "soggy" to all manner of adjectives expressing nastiness. My problem is that I am at a loss for how to go about scoring these results. The text being searched is split up into a list of tokens, so my initial approach would be to normalize a float score between 0.0 and 1.0 based on how far into the list the terms appear and how often they are repeated (a later mention of the term being worth less, an earlier one more, and greater frequency meaning a greater score). A certain amount of weight could be given to the timestamp as well, though I am not certain how to calculate this. I am curious whether anyone has had to solve a similar problem of grading search relevance across measurable metrics (frequency, term location/collocation, recency), and whether there are any guidelines for how to weight each. I should mention as well that the final fallback procedure in the search is to pipe the query to Sphinx, which has its own scoring practices. Sphinx operates as the last resort in case our application-specific processing can't find any eligible candidates.
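
    (As a starting point - the blend weights, the five-hit saturation, and the 30-day recency decay below are pure assumptions chosen to illustrate the shape of such a scorer, not recommendations - one common pattern is to compute each component on a 0.0-1.0 scale and combine them linearly:)

        import java.util.List;

        public class SentimentScorer {
            // illustrative weights; tune against judged examples
            static final double W_POSITION = 0.5, W_FREQUENCY = 0.2, W_RECENCY = 0.3;

            static double score(List<String> tokens, String term, double postAgeDays) {
                double positional = 0.0;
                int hits = 0;
                for (int i = 0; i < tokens.size(); i++) {
                    if (tokens.get(i).equalsIgnoreCase(term)) {
                        positional += 1.0 / (i + 1);  // earlier mentions count for more
                        hits++;
                    }
                }
                double position = Math.min(1.0, positional);     // clamp to 0..1
                double frequency = Math.min(1.0, hits / 5.0);    // saturate at 5 hits
                double recency = Math.exp(-postAgeDays / 30.0);  // newer posts score higher
                return W_POSITION * position + W_FREQUENCY * frequency
                        + W_RECENCY * recency;
            }

            public static void main(String[] args) {
                List<String> post = List.of("the", "pizza", "was", "soggy", "really", "soggy");
                System.out.printf("%.3f%n", score(post, "soggy", 2.0));
            }
        }

    Keeping each component normalized makes the weights directly comparable, so they can later be fit from click or judgment data instead of being hand-picked.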

    Read the article

  • SQLIO help decipher output

    - by SQL Learner
    When load testing on a SQL Server box, I used the following (the test file is 25 GB):

        sqlio -kW -t8 -s360 -o8 -frandom -b8 -BH -LS g:\testfile.dat > result.txt
        sqlio -kW -t8 -s360 -o8 -frandom -b64 -BH -LS g:\testfile.dat >> result.txt
        sqlio -kW -t8 -s360 -o8 -frandom -b128 -BH -LS g:\testfile.dat >> result.txt
        sqlio -kW -t8 -s360 -o8 -frandom -b256 -BH -LS g:\testfile.dat >> result.txt

    Can anyone help me decipher the output? I do not understand latency min and average. What do these numbers mean?

        IOs/sec: 10968.80
        MBs/sec: 685.55
        latency metrics:
        Min_Latency(ms): 1
        Avg_Latency(ms): 5
        Max_Latency(ms): 21

    Read the article

  • Root cause for high CPU usage; which measurement to trust more: Windows Task Manager or Process Explorer?

    - by p.campbell
    Consider this Windows 8.1 machine (an in-place upgrade from Windows 8) with differing reports on its CPU usage. The machine is idle, and has been for 3 days. There were no CPU-intensive tasks running at the time, nor over the 3-day idle period. Windows Task Manager reports CPU usage constantly at an incredibly high value (and increasing over time!) of around 75%. Process Explorer from Sysinternals reports CPU usage that is much different, at around 42%.

    How does Process Explorer report 42.14% usage while its columns report Idle at 57%, with the sum of the other processes not even approaching 10%? Which of these two values should I trust more, and why should it be trusted over the other measurement? How can I actually determine which process is causing Task Manager to report its values?

    These Process Explorer metrics were taken with Administrator privileges and with the option 'Show Details for All Processes'.

    Read the article

  • Tracking form abandonment

    - by Alec Sanger
    I'm looking for a decent way to track form abandonment. Ideally, I would like to see how many people start filling out a form but do not complete it, as well as the last field that was filled out. The website is a fairly large WordPress site with quite a few forms. Some of these forms are to register for events, some are for donations, and some are for information requests. My first attempt at this was adding a generic jQuery handler that bound functions to all forms on the site. When a form element was blurred, I would trigger a Google Analytics event with the name of the form, the name of the field, and whether or not it was filled. I expected to be able to go to the Event Flow section in Google Analytics and see the flow of these form events; however, since there are so many forms and other events occurring on the website, Google wouldn't let me break them out very well. The other issue is that Quform doesn't name its fields anything relevant, and it doesn't look like we can name them ourselves. This results in a lot of ugly form names that don't mean anything without cross-referencing the actual form. Does anybody have any suggestions on how I can get more usable form abandonment metrics in a scenario like this?

    Read the article

  • 724% Return on an SFA project with Oracle Sales Cloud and Marketing Cloud combined!

    - by Richard Lefebvre
    Oracle Sales Cloud and Marketing Cloud customer Apex IT gained just that: a 724% return on investment (ROI) when it implemented these Oracle Cloud solutions in its fast-moving, rapidly-growing business. Apex IT was just announced as a winner of the Nucleus Research 11th annual Technology ROI Awards. The award, given by the analyst firm, highlights organizations that have successfully leveraged IT deployments to maximize value per dollar spent.

    Fast Facts:
    Return on Investment – 724%
    Payback – 2 months
    Average annual benefit – $91,534
    Cost : Benefit Ratio – 1:48

    Business Benefits

    In addition to the ROI and cost metrics, the award calls out improvements in Apex IT's business operations, across both Sales and Marketing teams:

    Improved ability to identify new opportunities and focus sales resources on higher-probability deals
    Reduced administration and manual lead tracking, resulting in more time selling and a net new client increase of 46%
    Increased campaign productivity for both Marketing and Sales, including Oracle Marketing Cloud's automation of campaign tracking and nurture programs
    Improved margins with more structured and disciplined sales processes, resulting in more effective deal negotiations

    Read the full Apex IT ROI Case Study. You also can learn more about Apex IT's business, including the company's work with Oracle Sales and Marketing Cloud on behalf of its clients. You can point your prospects and customers to the CX blog for a similar recap of the Apex IT award and a link to the Case Study.

    Read the article
