Search Results

Search found 1172 results on 47 pages for 'measure'.

Page 22/47 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • Performance analysis for java application

    - by user1827614
    I want to do performance measurement of my application and would like to be able to configure the stats per module (enable them for specific modules and disable them for others). I want to measure things like memory usage, threads, average bandwidth, etc. Can anyone suggest something, please? I am new to this. I think VisualVM is good, but it does not support configuring for different modules. Would Perf4j or Admin4j work here? Has anyone used these before?

    Read the article

  • A Good Developer is So Hard to Find

    - by James Michael Hare
    Let me start out by saying I want to damn the writers of the Toughest Developer Puzzle Ever – 2. It is eating every last shred of my free time! But as I've been churning through each puzzle and marvelling at the brain teasers and trivia within, I began to think about interviewing developers and why it seems to be so hard to find good ones.  The problem is, it seems like no matter how hard we try to find the perfect way to separate the chaff from the wheat, inevitably someone will get hired who falls far short of expectations, or someone who could have been an excellent team member will get passed over for missing a piece of trivia or a tricky brain teaser.   In shops that are primarily software-producing businesses or other heavily IT-oriented businesses (Microsoft, Amazon, etc.) there often exists a much tighter bond between HR and the hiring development staff because development is their life-blood. Unfortunately, many of us work in places where IT is viewed as a cost or just a means to an end. In these shops, too often, HR and development staff may work against each other due to differences in opinion as to what a good developer is or what one is worth.  It seems that if you ask two different people what makes a good developer, often you will get three different opinions.   With the exception of those shops that are purely development-centric (you guys have it much easier!), most other shops have management who have very little knowledge about the development process.  Their view can often be that development is simply a skill that one learns, and then once acquired, that developer can produce widgets as good as the next, like workers on an assembly-line floor.  On the other side, you have many developers who feel that software development is an art unto itself and that the ability to create the most pure design or know the most obscure of keywords or write the shortest-possible obfuscated piece of code is what makes a good coder.  So is it a skill?  An Art?  Or something entirely in between?   Saying that software is merely a skill and one just needs to learn the syntax and tools would be akin to saying anyone who knows English and can use Word can write a 300-page book that is accurate, meaningful, and stays true to the point.  This just isn't so.  It takes more than mere skill to take words and form a sentence, join those sentences into paragraphs, and those paragraphs into a document.  I've interviewed candidates who could answer obscure syntax and keyword questions and once they were hired could not code effectively at all.  So development must be more than a skill.   But on the other end, we have art.  Is development an art?  Is our end result to produce art?  I can marvel at a piece of code -- see it as concise and beautiful -- and yet that code must perform some stated function with accuracy and efficiency and maintainability.  None of these three things have anything to do with art, per se.  Art is beauty for its own sake and is a wonderful thing.  But if you apply that same thought to development it just doesn't hold.  I've had developers tell me that all that matters is the end result and how you code it is entirely part of the art, and I couldn't disagree more.  Yes, the end result, the accuracy, is the prime criterion to be met.  But if code is not maintainable and efficient, it would be just as useless as a beautiful car that breaks down once a week or that gets 2 miles to the gallon.  
Yes, it may work in that it moves you from point A to point B and is pretty as hell, but if it can't be maintained or is not efficient, it's not a good solution.  So development must be something less than art.   In the end, I think I feel like development is a matter of craftsmanship.  We use our tools and we use our skills and set about to construct something that satisfies a purpose and yet is also elegant and efficient.  There is skill involved, and there is an art, but really it boils down to being able to craft code.  Crafting code is far more than writing code.  Anyone can write code if they know the syntax, but so few people can actually craft code that solves a purpose and craft it well.  So this is what I want to find, I want to find code craftsmen!  But how?   I used to ask coding-trivia questions a long time ago and many people still fall back on this.  The thought is that if you ask the candidate some piece of coding trivia and they know the answer it must follow that they can craft good code.  For example:   What C++ keyword can be applied to a class/struct field to allow it to be changed even from a const-instance of that class/struct?  (answer: mutable)   So what do we prove if a candidate can answer this?  Only that they know what mutable means.  One would hope that this would imply that they'd know how to use it, and more importantly when and if it should ever be used!  But it rarely does!  The problem with trivia questions is that you will either: Approve a really good developer who knows what some obscure keyword is (good) Reject a really good developer who never needed to use that keyword or is too inexperienced to know how to use it (bad) Approve a really bad developer who googled "C++ Interview Questions" and studied like hell but can't craft (very bad) Many HR departments love these kinds of tests because they are short and easy to defend if a legal issue arises on hiring decisions.  After all it's easy to say a person wasn't hired because they scored 30 out of 100 on some trivia test.  But unfortunately, you've eliminated a large part of your potential developer pool and possibly hired a few duds.  There are times I've hired candidates who knew every trivia question I could throw at them and couldn't craft.  And then there are times I've interviewed candidates who failed all my trivia, but I took a chance on them and they were my best finds ever.    So if not trivia, then what?  Brain teasers?  The thought is, these types of questions measure the thinking power of a candidate.  The problem is, once again, you will either: Approve a good candidate who has never heard the problem and can solve it (good) Reject a good candidate who just happens not to see the "catch" because they're nervous or it may be really obscure (bad) Approve a candidate who has studied enough interview brain teasers (once again, you can google em) to recognize the "catch" or knows the answer already (bad). Once again, you're eliminating good candidates and possibly accepting bad candidates.  In these cases, I think testing someone with brain teasers only tests their ability to answer brain teasers, not the ability to craft code. So how do we measure someone's ability to craft code?  Here's a novel idea: have them code!  Give them a computer and a compiler, or a whiteboard and a pen, or paper and pencil, and have them construct a piece of code.  It just makes sense that if we're going to hire someone to code we should actually watch them code.  
When they're done, we can judge them on several criteria: Correctness - does the candidate's solution accurately solve the problem proposed? Accuracy - is the candidate's solution reasonably syntactically correct? Efficiency - did the candidate write or use the more efficient data structures or algorithms for the job? Maintainability - was the candidate's code free of obfuscation and clever tricks that diminish readability? Persona - are they eager and willing or aloof and egotistical?  Will they work well within your team? It may sound simple, or it may sound crazy, but when I'm looking to hire a developer, I want to see them actually develop well-crafted code.

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/  When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) versus cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: “Pay-as-you-go” or consumption, and “Subscription” or commitment. Conceptually, you can consider the former like a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate), and the latter like a standard cell phone plan where you commit to a contract and thus pay lower rates. In this post I’ll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost if you use the other mechanism. In any case, the model you create should hold. Developing a good cost model is essential. As a developer or architect, you’ll most certainly be asked how much something will cost, and you need to have a reliable way to estimate that. Businesses and organizations are used to paying for servers, software licenses, and other infrastructure as an up-front cost, and power, the people who run the systems, and so on as an ongoing (and sometimes not factored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that’s where the architect and developer can guide the conversation to make a choice based on features of the application versus the true costs. The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you’ve written an application that provides the spot-price of foo, and your customer pays for the use of that application. In that case, once you’ve estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It’s a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use-type, the application will be used by a more-or-less constant number of processes or users and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created “in reverse” - meaning that you pilot the application, monitor the use (and costs) and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly how much the air conditioning costs because you have a team of system administrators? 
This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system is in fact a cost to the business. There are three primary methods that I’ve been successful with in estimating the cost. None are perfect, all are demand-driven. The general process is to lay out a matrix of components, units, and cost per unit, and then multiply that by the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth, and, in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy. Simple Estimation: The simplest way to calculate costs is to architect the application (even UML or on-paper, no coding involved) and then estimate which of the components you’ll use, and how much of each will be used. Microsoft provides two tools to do this - one is a simple slider-application located here: http://www.microsoft.com/windowsazure/pricing-calculator/  The other is a tool you download to create a “Return on Investment” (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115  You can also just create a spreadsheet yourself with a structure like this: Program Element | Azure Component | Unit of Measure | Cost Per Unit | Estimated Use of Component | Total Cost Per Component | Cumulative Cost. Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn’t even been developed. Which brings us to the next model type. Measure and Project: A more accurate model is to actually write the code for the application, using the Software Development Kit (SDK) which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of APIs greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into any of the tools above, which should give a representative cost or in some cases a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually small, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model. 
Sample and Estimate: This method uses standard statistical and other predictive math. Once the application is deployed, you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it to do analysis, and, using methods like regression and so on, project out into the future what the costs will be. I normally advise that the architect also extrapolate a unit cost from those metrics. This is the information that should be reported back to the executives who pay the bills: the past cost, future projected costs, and unit cost “per click” or “per transaction”, as your case warrants. The challenge here is in the model itself - statistical methods are not foolproof, and the size of the sample is key (in this case I recommend using the entire population, not a smaller sample). References and Tools Articles: http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx http://technet.microsoft.com/en-us/magazine/gg213848.aspx http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/ http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx   Other Tools: http://cloud-assessment.com/ http://communities.quest.com/community/cloud_tools
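    As a small illustration of the "components x units x cost per unit" matrix described under Simple Estimation, here is a minimal sketch. The component names, unit prices, and usage figures are placeholders only - the definitive, current prices live on the Microsoft pricing pages linked above.

      #!/usr/bin/env python
      # Minimal sketch of the "components x units x cost per unit" matrix (Simple Estimation).
      # Component names, unit prices, and usage figures are placeholders, not real Azure prices.
      components = [
          # (program element, azure component, unit of measure, cost per unit, estimated use)
          ("Web front end", "Compute instance", "instance-hours / month", 0.12, 2 * 730),
          ("Order storage", "Storage",          "GB / month",             0.15, 50),
          ("Order storage", "Transactions",     "per 10,000",             0.01, 200),
          ("All traffic",   "Bandwidth out",    "GB / month",             0.15, 120),
          ("Reporting DB",  "SQL Azure",        "DB / month",             9.99, 1),
      ]

      cumulative = 0.0
      print("%-15s %-18s %-24s %10s %10s %12s" %
            ("Element", "Component", "Unit", "Cost/Unit", "Est. Use", "Total"))
      for element, component, unit, cost_per_unit, estimated_use in components:
          total = cost_per_unit * estimated_use   # cost per unit x estimated usage
          cumulative += total
          print("%-15s %-18s %-24s %10.2f %10.1f %12.2f" %
                (element, component, unit, cost_per_unit, estimated_use, total))
      print("Estimated monthly cost: $%.2f" % cumulative)

    The same per-component structure works for the Measure and Project method: swap the estimated-use column for figures taken from your instrumentation logs.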

    Read the article

  • When is a Seek not a Seek?

    - by Paul White
    The following script creates a single-column clustered table containing the integers from 1 to 1,000 inclusive. IF OBJECT_ID(N'tempdb..#Test', N'U') IS NOT NULL DROP TABLE #Test ; GO CREATE TABLE #Test ( id INTEGER PRIMARY KEY CLUSTERED ); ; INSERT #Test (id) SELECT V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 1000 ; Let’s say we need to find the rows with values from 100 to 170, excluding any values that divide exactly by 10.  One way to write that query would be: SELECT T.id FROM #Test AS T WHERE T.id IN ( 101,102,103,104,105,106,107,108,109, 111,112,113,114,115,116,117,118,119, 121,122,123,124,125,126,127,128,129, 131,132,133,134,135,136,137,138,139, 141,142,143,144,145,146,147,148,149, 151,152,153,154,155,156,157,158,159, 161,162,163,164,165,166,167,168,169 ) ; That query produces a pretty efficient-looking query plan: Knowing that the source column is defined as an INTEGER, we could also express the query this way: SELECT T.id FROM #Test AS T WHERE T.id >= 101 AND T.id <= 169 AND T.id % 10 > 0 ; We get a similar-looking plan: If you look closely, you might notice that the line connecting the two icons is a little thinner than before.  The first query is estimated to produce 61.9167 rows – very close to the 63 rows we know the query will return.  The second query presents a tougher challenge for SQL Server because it doesn’t know how to predict the selectivity of the modulo expression (T.id % 10 > 0).  Without that last line, the second query is estimated to produce 68.1667 rows – a slight overestimate.  Adding the opaque modulo expression results in SQL Server guessing at the selectivity.  As you may know, the selectivity guess for a greater-than operation is 30%, so the final estimate is 30% of 68.1667, which comes to 20.45 rows. The second difference is that the Clustered Index Seek is costed at 99% of the estimated total for the statement.  For some reason, the final SELECT operator is assigned a small cost of 0.0000484 units; I have absolutely no idea why this is so, or what it models.  Nevertheless, we can compare the total cost for both queries: the first one comes in at 0.0033501 units, and the second at 0.0034054.  The important point is that the second query is costed very slightly higher than the first, even though it is expected to produce many fewer rows (20.45 versus 61.9167). If you run the two queries, they produce exactly the same results, and both complete so quickly that it is impossible to measure CPU usage for a single execution.  We can, however, compare the I/O statistics for a single run by running the queries with STATISTICS IO ON: Table '#Test'. Scan count 63, logical reads 126, physical reads 0. Table '#Test'. Scan count 01, logical reads 002, physical reads 0. The query with the IN list uses 126 logical reads (and has a ‘scan count’ of 63), while the second query form completes with just 2 logical reads (and a ‘scan count’ of 1).  It is no coincidence that 126 = 63 * 2, by the way.  It is almost as if the first query is doing 63 seeks, compared to one for the second query. In fact, that is exactly what it is doing.  There is no indication of this in the graphical plan, or the tool-tip that appears when you hover your mouse over the Clustered Index Seek icon.  
To see the 63 seek operations, you have to click on the Seek icon and look in the Properties window (press F4, or right-click and choose from the menu): The Seek Predicates list shows a total of 63 seek operations – one for each of the values from the IN list contained in the first query.  I have expanded the first seek node to show the details; it is seeking down the clustered index to find the entry with the value 101.  Each of the other 62 nodes expands similarly, and the same information is contained (even more verbosely) in the XML form of the plan. Each of the 63 seek operations starts at the root of the clustered index B-tree and navigates down to the leaf page that contains the sought key value.  Our table is just large enough to need a separate root page, so each seek incurs 2 logical reads (one for the root, and one for the leaf).  We can see the index depth using the INDEXPROPERTY function, or by using a DMV: SELECT S.index_type_desc, S.index_depth FROM sys.dm_db_index_physical_stats ( DB_ID(N'tempdb'), OBJECT_ID(N'tempdb..#Test', N'U'), 1, 1, DEFAULT ) AS S ; Let’s look now at the Properties window when the Clustered Index Seek from the second query is selected: There is just one seek operation, which starts at the root of the index and navigates the B-tree looking for the first key that matches the Start range condition (id >= 101).  It then continues to read records at the leaf level of the index (following links between leaf-level pages if necessary) until it finds a row that does not meet the End range condition (id <= 169).  Every row that meets the seek range condition is also tested against the Residual Predicate highlighted above (id % 10 > 0), and is only returned if it matches that as well. You will not be surprised that the single seek (with a range scan and residual predicate) is much more efficient than 63 singleton seeks.  It is not 63 times more efficient (as the logical reads comparison would suggest), but it is around three times faster.  Let’s run both query forms 10,000 times and measure the elapsed time: DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE() ; SET NOCOUNT ON; SET STATISTICS XML OFF; ; WHILE @n > 0 BEGIN SELECT @i = T.id FROM #Test AS T WHERE T.id IN ( 101,102,103,104,105,106,107,108,109, 111,112,113,114,115,116,117,118,119, 121,122,123,124,125,126,127,128,129, 131,132,133,134,135,136,137,138,139, 141,142,143,144,145,146,147,148,149, 151,152,153,154,155,156,157,158,159, 161,162,163,164,165,166,167,168,169 ) ; SET @n -= 1; END ; PRINT DATEDIFF(MILLISECOND, @s, GETDATE()) ; GO DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE() ; SET NOCOUNT ON ; WHILE @n > 0 BEGIN SELECT @i = T.id FROM #Test AS T WHERE T.id >= 101 AND T.id <= 169 AND T.id % 10 > 0 ; SET @n -= 1; END ; PRINT DATEDIFF(MILLISECOND, @s, GETDATE()) ; On my laptop, running SQL Server 2008 build 4272 (SP2 CU2), the IN form of the query takes around 830ms and the range query about 300ms.  The main point of this post is not performance, however – it is meant as an introduction to the next few parts in this mini-series that will continue to explore scans and seeks in detail. When is a seek not a seek?  When it is 63 seeks © Paul White 2011 email: [email protected] twitter: @SQL_kiwi

    Read the article

  • The Sensemaking Spectrum for Business Analytics: Translating from Data to Business Through Analysis

    - by Joe Lamantia
    One of the most compelling outcomes of our strategic research efforts over the past several years is a growing vocabulary that articulates our cumulative understanding of the deep structure of the domains of discovery and business analytics. Modes are one example of the deep structure we’ve found.  After looking at discovery activities across a very wide range of industries, question types, business needs, and problem solving approaches, we've identified distinct and recurring kinds of sensemaking activity, independent of context.  We label these activities Modes: Explore, compare, and comprehend are three of the nine recognizable modes.  Modes describe *how* people go about realizing insights.  (Read more about the programmatic research and formal academic grounding and discussion of the modes here: https://www.researchgate.net/publication/235971352_A_Taxonomy_of_Enterprise_Search_and_Discovery) By analogy to languages, modes are the 'verbs' of discovery activity.  When applied to the practical questions of product strategy and development, the modes of discovery allow one to identify what kinds of analytical activity a product, platform, or solution needs to support across a spread of usage scenarios, and then make concrete and well-informed decisions about every aspect of the solution, from high-level capabilities, to which specific types of information visualizations better enable these scenarios for the types of data users will analyze. The modes are a powerful generative tool for product making, but if you've spent time with young children, or had a really bad hangover (or both at the same time...), you understand the difficulty of communicating using only verbs.  So I'm happy to share that we've found traction on another facet of the deep structure of discovery and business analytics.  Continuing the language analogy, we've identified some of the ‘nouns’ in the language of discovery: specifically, the consistently recurring aspects of a business that people are looking for insight into.  We call these discovery Subjects, since they identify *what* people focus on during discovery efforts, rather than *how* they go about discovery as with the Modes. Defining the collection of Subjects people repeatedly focus on allows us to understand and articulate sense making needs and activity in more specific, consistent, and complete fashion.  In combination with the Modes, we can use Subjects to concretely identify and define scenarios that describe people’s analytical needs and goals.  For example, a scenario such as ‘Explore [a Mode] the attrition rates [a Measure, one type of Subject] of our largest customers [Entities, another type of Subject]’ clearly captures the nature of the activity — exploration of trends vs. deep analysis of underlying factors — and the central focus — attrition rates for customers above a certain set of size criteria — from which follow many of the specifics needed to address this scenario in terms of data, analytical tools, and methods. We can also use Subjects to translate effectively between the different perspectives that shape discovery efforts, reducing ambiguity and increasing impact on both sides of the perspective divide.  For example, from the language of business, which often motivates analytical work by asking questions in business terms, to the perspective of analysis.  
The question posed to a Data Scientist or analyst may be something like “Why are sales of our new kinds of potato chips to our largest customers fluctuating unexpectedly this year?” or “Where can we innovate, by expanding our product portfolio to meet unmet needs?”.  Analysts translate questions and beliefs like these into one or more empirical discovery efforts that more formally and granularly indicate the plan, methods, tools, and desired outcomes of analysis.  From the perspective of analysis this second question might become, “Which customer needs of type ‘A', identified and measured in terms of ‘B’, that are not directly or indirectly addressed by any of our current products, offer 'X' potential for ‘Y' positive return on the investment ‘Z' required to launch a new offering, in time frame ‘W’?  And how do these compare to each other?”.  Translation also happens from the perspective of analysis to the perspective of data, in terms of availability, quality, completeness, format, volume, etc. By implication, we are proposing that most working organizations — small and large, for profit and non-profit, domestic and international, and in the majority of industries — can be described for analytical purposes using this collection of Subjects.  This is a bold claim, but simplified articulation of complexity is one of the primary goals of sensemaking frameworks such as this one.  (And, yes, this is in fact a framework for making sense of sensemaking as a category of activity - but we’re not considering the recursive aspects of this exercise at the moment.) Compellingly, we can place the collection of subjects on a single continuum — we call it the Sensemaking Spectrum — that simply and coherently illustrates some of the most important relationships between the different types of Subjects, and also illuminates several of the fundamental dynamics shaping business analytics as a domain.  As a corollary, the Sensemaking Spectrum also suggests innovation opportunities for products and services related to business analytics. The first illustration below shows Subjects arrayed along the Sensemaking Spectrum; the second illustration presents examples of each kind of Subject.  Subjects appear in colors ranging from blue to reddish-orange, reflecting their place along the Spectrum, which indicates whether a Subject addresses more the viewpoint of systems and data (Data centric and blue), or people (User centric and orange).  This axis is shown explicitly above the Spectrum.  Annotations suggest how Subjects align with the three significant perspectives of Data, Analysis, and Business that shape business analytics activity.  This rendering makes explicit the translation and bridging function of Analysts as a role, and analysis as an activity. Subjects are best understood as fuzzy categories [http://georgelakoff.files.wordpress.com/2011/01/hedges-a-study-in-meaning-criteria-and-the-logic-of-fuzzy-concepts-journal-of-philosophical-logic-2-lakoff-19731.pdf], rather than tightly defined buckets.  For each Subject, we suggest some of the most common examples: Entities may be physical things such as named products, or locations (a building, or a city); they could be Concepts, such as satisfaction; or they could be Relationships between entities, such as the variety of possible connections that define linkage in social networks.  
Likewise, Events may indicate a time and place in the dictionary sense; or they may be Transactions involving named entities; or take the form of Signals, such as ‘some Measure had some value at some time’ - what many enterprises understand as alerts.   The central story of the Spectrum is that though consumers of analytical insights (represented here by the Business perspective) need to work in terms of Subjects that are directly meaningful to their perspective — such as Themes, Plans, and Goals — the working realities of data (condition, structure, availability, completeness, cost) and the changing nature of most discovery efforts make direct engagement with source data in this fashion impossible.  Accordingly, business analytics as a domain is structured around the fundamental assumption that sense making depends on analytical transformation of data.  Analytical activity incrementally synthesizes more complex and larger scope Subjects from data in its starting condition, accumulating insight (and value) by moving through a progression of stages in which increasingly meaningful Subjects are iteratively synthesized from the data, and recombined with other Subjects.  The end goal of ‘laddering’ successive transformations is to enable sense making from the business perspective, rather than the analytical perspective. Synthesis through laddering is typically accomplished by specialized Analysts using dedicated tools and methods. Beginning with some motivating question such as seeking opportunities to increase the efficiency (a Theme) of fulfillment processes to reach some level of profitability by the end of the year (Plan), Analysts will iteratively wrangle and transform source data Records, Values and Attributes into recognizable Entities, such as Products, that can be combined with Measures or other data into the Events (shipment of orders) that indicate the workings of the business.  More complex Subjects (to the right of the Spectrum) are composed of or make reference to less complex Subjects: a business Process such as Fulfillment will include Activities such as confirming, packing, and then shipping orders.  These Activities occur within or are conducted by organizational units such as teams of staff or partner firms (Networks), composed of Entities which are structured via Relationships, such as supplier and buyer.  The fulfillment process will involve other types of Entities, such as the products or services the business provides.  The success of the fulfillment process overall may be judged according to a sophisticated operating efficiency Model, which includes tiered Measures of business activity and health for the transactions and activities included.  All of this may be interpreted through an understanding of the operational domain of the business's supply chain (a Domain).   We'll discuss the Spectrum in more depth in succeeding posts.

    Read the article

  • TGIF: Engagement Wrap-up

    - by Michael Snow
    We've had a very busy week here at Oracle and as we build up to Oracle OpenWorld starting in less than 10 days - it doesn't look like things will be slowing down. Engagement is definitely in the air this week. Our friend, John Mancini published a great article entitled: "The World of Engagement" on his Digital Landfill blog yesterday and we hosted a great webcast with R "Ray" Wang from Constellation Research yesterday on the "9 C's of Engagement". I wanted to wrap up the week with some key takeaways from our webcast yesterday with Ray Wang. If you missed the webcast yesterday, fear not - it is now available On-Demand. We'll leave you this week with lots of questions about how to navigate these churning waters of engagement. Stay tuned to the Oracle WebCenter Social Business Thought Leaders Webcast Series as we fuel this dialogue. Company Culture: Does company support a culture of putting customer satisfaction ahead of profits? Does culture promote creativity and cross-functional employee collaboration? Does culture accept different views of a multi-generational workforce? Does culture promote employee training and skills development? Does culture support upward mobility and long-term retention? Does culture support work-life balance? Does the culture provide rewards for employees for outstanding customer support? Channels: What are the current primary channels for customer communications? What do you think will be the primary channels in two years? Is company developing a support model for emerging channels? Do all channels consistently deliver the same level of customer support? Do you know the cost per transaction across all channels? Do you engage customers proactively across multiple channels? Do all channels have access to the same customer information? Community: Does company extend customer support into virtual communities of interest? Does company facilitate educating users through its virtual communities? Does company mine its customers’ experience into useful data? Does company increase the value for customers through using data to deliver new products and services? Does company support two-way interactions with its customers through communities of interest? Does company actively support social CRM, online communities and social media markets? Credibility: Does company market its trustworthiness through external certificates such as business licenses, BBB certificates or other validations? Does company promote trust through customer testimonials and case studies on ethical business practices? 
Does company promote truthful marketing campaigns? Does company make it easy for customers to complain? Does company build its reputation for standing behind its products with guarantees for satisfaction? Does company protect its customer data with high security measures? Content: What sources do you use to create customer content? Does company mine social media and blogs for customer content? How does your company sort, store and retain its customer content? How frequently does content get updated? What external sources do you use for customer content? How many responses are typically received from a knowledge management system inquiry? Does your company use customer content to design and develop new products and services? Context: Does your company market to customers in clusters or individually? Does your company customize its messages and personalize them to the specific needs of each individual customer? Does your company store customer data based on their past behaviors, purchases, sentiment analysis and current activities? Does your company manage customer context according to the channels used? For example, does it identify personal-use channels versus business channels? What is your frequency of collecting customer activities across various touch points? How is your customer data stored and analyzed? Is contextual data used for future customer outreach? Cadence: Which channels does your company measure - web site visits, phone calls, IVR, store visits, face to face, social media? Does company make effective use of cross-channel marketing to promote more frequent customer engagement? Does your company rate the patterns relevant for your product or service and monitor usage against this pattern? Does your company measure the frequency of both online and offline channels? Does your company apply metrics to the frequency of customer engagements with product or services revenues? Does your company consolidate data for customer engagement across various channels for a complete view of its customer? Catalyst: Does company offer coupon discounts? Does company have a customer loyalty program or a VIP membership program? Does company mine customer data to target specific groups of buyers? Do internal employees serve as ambassadors for customer programs? Does company drive loyalty through social media loyalty programs? Does company build rewards based on using loyalty data? Does company offer an employee incentive program to drive customer loyalty?

    Read the article

  • Google Analytics on Android

    - by pjv
    There is a specific and official analytics SDK for native Android apps (note that I'm not talking about webpages in apps on a phone). This library basically sends pages and events to Google Analytics and you can view your analytics in exactly the same dashboard as for websites. Since my background is apps rather than websites, and since a lot of the Google Analytics terminology seems particularly inapplicable to a native app, I need some pointers. Please discuss my remarks, provide some clarification where you think I'm off-track, and above all share good experiences! 1. Page Views: Pages mostly can match the different Activities (and Dialogs) being displayed. Activities can be visible behind non-full-screen Activities, however, though only the top-level Activity can be interacted with. This sort of clashes with a "(page) view". You'd also want at least one page view for each visit and therefore put one page view tracker in the Application class. However this does not constitute a window of sorts. Usually an Activity will open at the same time, so the time spent on that page will have been 0. This will influence your "time spent" statistics. How are these counted anyway? Moreover, there is a loose coupling between the Activities, by means of Intents. A user can, much like on any website, step in at any Activity, although usually this then concerns resuming the application where he left off. This means that the hierarchy of Activities is usually very flat. And since there are no URLs involved, what meaning would using slashes in page titles have, such as "/Home"? All pages would appear on an equal level in the reports, so no content drilldown. Non-unique page views seem to be counted as some kind of indicator of successfulness: how often the visitor revisits the page. When the user rotates the screen, however, the Activity usually resumes again, thus making it a new page view. This happens a lot. Maybe a well-thought-through placement of the call might solve this, or placing several, I'm not sure. How to deal with Page Views? 2. Events: I'd say there are two sorts: a user event, and something that happened, usually as an indirect consequence of the above. The latter particularly is giving me headaches. First of all, many events aren't written in code any more, but pieced logically together by means of Intents. This means that there is no place to put the analytics call. You'd either have to give up this advantage and start doing it the old-fashioned way in favor of good analytics, or just be missing some events. Secondly, as a developer you're not so much interested in when a user clicks a button, but whether the action that should have been performed really was performed and what the result was. There seems to be no clear way to get resulting data into Google Analytics (what's up with the integers? I want to put in Strings!). The same thing that applies to the flat page hierarchy also goes for the event categories. You could do "vertical" categories (topically, that is), but some code is shared "horizontally" and the tracking will be equally shared. Just as with the Intents mechanism, inheritance makes it hard for you to put the tracking in the right places at all times. And I can't really imagine "horizontal" categories. Unless you start making really small categories, such as all the items from the same menu in one category, I have a hard time grasping the concept. Finally, how do you deal with cancelling? 
Usually you have both an explicit cancel mechanism by way of a button, as well as the implicit cancel when the "back" button is pressed to leave the activity and there were no changes. The latter also applies to "saves", when the back button is pressed and there ARE changes. How are you consequently going to catch all these if not by doing all the "back"-button work yourself? How to deal with events? 3. Goals: For goal types I have a choice of: URL Destination, Time on Site, and Pages/Visit. Most apps don't have a funnel that leads the user to some "registration done" or "order placed" page. Apps have either already been bought (in which case you want to stimulate the user to love your app, so that he might bring on new buyers) or are paid for by in-app ads. So URL Destination is not a very important goal. Time on Site also seems troublesome. First, I have some doubts about how this would be measured. Second, I don't necessarily want my user to spend a lot of time in my already-paid-for app, just to be active and content. Equivalently, why not measure how frequently a user uses your app? Regarding Pages/Visit, I already mentioned how screen orientation changes blow up the page view numbers. In an app I'd be most interested in events/visit to measure the user's involvement/activity. If he's intensively using the app then he must be loving it, right? Furthermore, I also have some small funnels (that do not lead to conversion though) that I want to see streamlined. In my mind those funnels would end in events rather than page views, but that seems not to be possible. I could also measure clickthroughs on in-app ads, but then I'd need to track those as Page Views rather than Events, in view of "URL Destination". What are smart goals for apps and how can you fit them on top of Analytics? 4. Optimisation: Is there a smart way to manually do what "Website Optimiser" does for websites? Most importantly, how would I track different landing page designs? 5. Traffic Sources: Referrals deal with installation-time referrals, if you're smart enough to get them included. But perhaps I'd also want to get some data on which third-party app sends users to my app to perform some actions (this app interoperability is possible via Intents). Many of the terminologies related to "Traffic Sources" seem totally meaningless and there is no possibility of connecting in AdSense. What are smart uses of this data? 6. Visitors: Of the "Browser capabilities", "Network Properties" and "Mobile" tabs, many things are pointless as they have no influence on / relation with my mostly offline app that won't use Flash anyway. Only if you drill down far enough can you get to OS versions, which do matter a lot. I even forgot where you could check what exact Android devices visited. What are smart uses of this data? How can you make the relevant info more prominent? 7. Other: No in-page analytics. I have to register my app as a web URL (what!?)?

    Read the article

  • Windows Server 2003 Terminal Server with installed licenses will not go beyond 2 connections

    - by Erwin Blonk
    I installed the Terminal Server role in Windows Server 2003 Standard 64-bit. Still, only 2 connections are allowed. The License Manager says that there are 10 Device CALs available, which is correct, and that none are given out. For good measure I let the server reboot, to no effect. Before this, there was another server (same Windows, except that it is 32-bit) active as a licensing server. I removed the role first and then added it to the new server. I then removed the Terminal Server Licensing Server component from the old one and added it to the new one. After that, I added the licenses. When that didn't give the required result, I rebooted the new server. Still, the new server, with licenses and all, acts as if it has only the 2-connection RDP limit. The servers are all stand-alone; no Active Directory has been set up. Both servers are in different workgroups.

    Read the article

  • ESXi and Windows Server CPU parking

    - by Chris J
    For those that don't know, CPU parking is a feature in recent Windows Server releases that allows Windows to pretty much drop a CPU core to zero use and have nothing use it. It's been introduced as a power-saving measure. There's more detail about it here, amongst other places. However, what I'm curious about is whether this matters on a virtualised guest - or is CPU parking more of a hindrance than a help, given that the physical CPUs are managed by ESXi, not Windows, and that a parked CPU is less likely to deal with traffic unless the scheduler deems there's enough work to unpark the CPU? I've not found anything about this - I do suspect it will be very much based on a given workload, but I've not seen any discussion (unlike, say, whether hyper-threading has any effect, which seems to be discussed regularly). Whilst I do understand the "test with your workload" advice, I was wondering if there were any advice/guidelines out there that I've missed.

    Read the article

  • core temperature vs CPU temperature

    - by Karl Nicoll
    I have recently installed a new heat sink & fan combination on my Core 2 Quad since my CPU was hitting about 70C under load. This has managed to reduce temperatures while running Prime95 to about 54C, which I'm taking as a win (this is ~30 minutes after fitting). I'm a little confused though. The temperature readings given above are for CORE temperatures, but HWMonitor is showing a 5th "CPU" temperature (the other 4 temperatures being the individual core temps) which is showing 21C idle, when idle temperatures for the cores vary between 37C and 42C. I guess there are two questions here: Are my CPU/Core temperatures decent, and is it safe to overclock when these are stock clock temperatures? I gather that the maximum safe operating temperature for a C2Q is ~70C, so which temperature should I measure against, the core temperatures (which are higher), or the CPU temperature reading?

    Read the article

  • Measuring accesses to files - apache

    - by George
    So, I run a website that, among other things, serves some files (usually PDFs). All of these are stored under a specific directory on the server: /var/www/vhosts/mysite.com/httpdocs/site/pdf_files Due to storage issues on my VPS I am thinking of getting some S3 or other cloud storage, and mounting it as a drive using S3QL/S3FS. Then I will be able to have the pdf_files folder symlinked to the cloud folder and serve those files using that, without any changes to the web app (is that a good plan?) Now, before doing that, to estimate costs, I need to measure how many file accesses people do - how many times those PDF files are downloaded each month, for example. Basically, how many times those PDF files are accessed through the webserver. I'd like to do it at the Apache level. What's the best way that this can be done? e.g.: measuring the bandwidth used by files in that specific folder would also be nice, but estimating the GET requests I'll be doing to Amazon is more important.
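    One way to get both numbers at the Apache level is to scan the access log. A minimal sketch follows, assuming the default combined/common log format; the log path and the URL prefix are assumptions to adjust for your vhost.

      #!/usr/bin/env python
      # Minimal sketch: count monthly GET requests (and approximate bytes served) for PDFs
      # under a given URL prefix by scanning an Apache access log in common/combined format.
      import re
      from collections import Counter

      LOG_FILE = "/var/log/apache2/access.log"   # assumption: your vhost's access log
      PDF_PREFIX = "/site/pdf_files/"            # assumption: URL path serving the PDFs

      # e.g. ... [10/Oct/2012:13:55:36 +0000] "GET /site/pdf_files/a.pdf HTTP/1.1" 200 2326 ...
      line_re = re.compile(r'\[(\d{2})/(\w{3})/(\d{4}):.*?\] "GET ([^ ]+) HTTP')

      hits = Counter()           # requests per month
      bytes_served = Counter()   # approximate bandwidth per month (response-size field)

      with open(LOG_FILE) as f:
          for line in f:
              m = line_re.search(line)
              if not m or not m.group(4).startswith(PDF_PREFIX):
                  continue
              month_key = "%s-%s" % (m.group(3), m.group(2))   # e.g. "2012-Oct"
              hits[month_key] += 1
              tail = line.split('"')[2].split()                # status code and size follow the request
              if len(tail) > 1 and tail[1].isdigit():
                  bytes_served[month_key] += int(tail[1])

      for month in sorted(hits):
          print("%s: %d PDF GETs, ~%.1f MB" % (month, hits[month], bytes_served[month] / 1048576.0))

    If the live log rotates, run this over the rotated files for a full month before projecting S3 request and transfer costs.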

    Read the article

  • Measuring 'total bytes written' under Linux

    - by badnews
    We're quite interested in exploring the possibility of using SSD drives in a server environment. However, one thing that we need to establish is expected drive longevity. According to this article, manufacturers are reporting drive endurance in terms of 'total bytes written' (TBW). E.g., from that article, a Crucial C400 SSD is rated at 72TB TBW. Do any scripts/tools exist under the Linux ecosystem to help us measure TBW? (and then make a more educated decision on the feasibility of using SSD drives)
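    One low-effort approach is to read the kernel's per-device write counters from /proc/diskstats and accumulate them over time. A minimal sketch follows; it reports writes since boot (not lifetime TBW), so it would need to be sampled periodically (e.g. from cron) and the deltas summed. The device name is an assumption, and some SSDs also expose vendor-specific lifetime write counters via SMART, which smartctl can show.

      #!/usr/bin/env python
      # Minimal sketch: report bytes written to a block device since boot, from /proc/diskstats.
      # The 10th field of each /proc/diskstats line is "sectors written"; this counter uses
      # 512-byte sectors. Run periodically and accumulate deltas to estimate write volume.
      DEVICE = "sda"   # assumption: the SSD you want to watch

      def bytes_written(device):
          with open("/proc/diskstats") as f:
              for line in f:
                  fields = line.split()
                  if len(fields) > 9 and fields[2] == device:
                      sectors_written = int(fields[9])   # 10th field: sectors written
                      return sectors_written * 512
          raise ValueError("device %s not found in /proc/diskstats" % device)

      if __name__ == "__main__":
          total = bytes_written(DEVICE)
          print("%s: %.2f GB written since boot" % (DEVICE, total / 1e9))

    Comparing the accumulated figure for a typical month against the drive's rated TBW gives a rough expected lifetime.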

    Read the article

  • Throughput tool with decent graphing.

    - by Cory J
    I've been looking through some of the tools available for measuring network throughput, namely iperf, bwping, ttcp, etc. I am planning on doing throughput tests over a long period of time, so what I really need is good graphing output, preferably RRD graphs. The Jperf frontend for iperf will generate a graph, and bmon has a nice command-line graph, but these simply count seconds since the test was started. I am trying to measure trends in throughput at different times of the day, so a graph with times and days is necessary. So a way to get iperf to log to RRDs would be best; if this isn't possible, could someone point me toward another solution?
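    One hedged sketch of a workaround: run a short iperf test on a schedule and append a timestamped sample to a CSV, which can then be fed to rrdtool or any plotting tool. This assumes iperf 2 with CSV reporting (-y C), where the last field of the report line is throughput in bits/sec; the server address and interval are placeholders.

      #!/usr/bin/env python
      # Minimal sketch: sample throughput every few minutes and log it with a timestamp,
      # so trends by time of day and day of week can be graphed later.
      import csv, subprocess, time
      from datetime import datetime

      SERVER = "192.168.1.10"     # assumption: host running "iperf -s"
      INTERVAL = 300              # seconds between samples
      OUTFILE = "throughput_log.csv"

      while True:
          out = subprocess.check_output(
              ["iperf", "-c", SERVER, "-t", "10", "-y", "C"]).decode()
          # with -y C the report is one CSV line whose last field is bits per second
          bits_per_sec = float(out.strip().splitlines()[-1].split(",")[-1])
          with open(OUTFILE, "a") as f:
              csv.writer(f).writerow(
                  [datetime.now().isoformat(), bits_per_sec / 1e6])   # Mbit/s
          time.sleep(INTERVAL)

    The same loop could call rrdtool update instead of writing CSV if RRD graphs are the end goal.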

    Read the article

  • Where should I go to learn about networking? [closed]

    - by Ollie Saunders
    I wonder if anyone could recommend a resource or resources, such as a good book, that: explains how all the important protocols work and interact (I’m interested in those that are relevant in a typical home network and used over the Internet); explains in detail how ADSL Internet connections work, to the level of depth necessary so that I’m able to tweak and measure performance settings; and starts from the beginning but attempts to provide proper understanding rather than idiot-oriented steps to follow. Basically, I’m interested in how these technologies work and tend to be implemented in hardware and software rather than “here’s what to do if…” I’m interested in Computer Networking by Andrew S. Tanenbaum and I wonder if anyone else has any experience with that title. It’s expensive, but I could probably borrow a copy from the library for £3 or so.

    Read the article

  • OpenVZ vs Xen, how much difference in performance?

    - by Aleksandr Levchuk
    There is a Xen vs. KVM performance question on ServerFault. What will be the performance difference if the choice is between Xen and OpenVZ? How is it best to measure? Some may say "you're comparing apples and oranges", but I have to choose one of the two and it needs to be a wise choice. Performance is most important to us. We may switch to Xen from OpenVZ because Xen is more ubiquitous, but only if the performance difference is not significant. In January 2011 I'm thinking of doing a head-to-head performance comparison - here is my project proposal to our Bioinformatics facility director.

    Read the article

  • AWS Autoscaling issue with existing nodes in ELB

    - by Ram Prasad
    I already have an ELB set up called MyLoadBalancer. I already have 2 nodes running on it with health checks (that check a URL on each node to see if they are up). Created an autoscaling group (min 2, max 10). Associated a launch config, mylaunchconfig, that provisions a node using an AMI. Created a trigger that checks for avg connections with a min of 100 and a max of 500 (it checks the load balancer and is supposed to increase the node count by 1 if avg connections reach 500, and decrease it by one if they are less than 100): as-create-or-update-trigger MyTrigger --auto-scaling-group MyAutoScalingGroup --namespace "AWS/ELB" --measure RequestCount --statistic Average --dimensions "LoadBalancerName=MyLoadBalancer" --period 60 --lower-threshold 500 --upper-threshold 800 --lower-breach-increment=-1 --upper-breach-increment=1 --breach-duration 600 Now the issue is, as soon as I put in the trigger, it starts 2 nodes .... but there are already two nodes in the ELB. So why is it provisioning 2 more nodes when the nodes are already there? Is it because it is not recognizing the existing 2 nodes? If so, how do I add the existing nodes to the AutoScaling group?

    Read the article

  • What metric captures why my OSX machine is so slow during XCode indexing

    - by Ben Flynn
    My entire OSX Lion machine slows down while XCode 4.4 is indexing. The CPU is less than 10% busy, I've got over 500 MB of free memory, plenty of disk space, the disk IO rate is not high, and network activity is not high. Indexing just a few files can take minutes and builds are extremely slow. While this is going on, even loading a new web page in Chrome can be slow. Knowing how to fix it would be great, but more fundamentally, how can I measure what is actually going slowly? What metrics should I be looking at? Nothing in Activity Monitor, iostat, top, or sar betrays anything to me about what's going on. Even getting a man page is interminable.

    Read the article

  • Go frame-by-frame through a movie with a precise timer

    - by Matchu
    Hello, world: I have a video I made for physics class, which I intend to use to measure just how long an event took to take place. I can find the start frame and end frame easily using VLC's frame-by-frame feature. However, VLC's timer seems only to be precise to a single second, giving me no more precise an answer than "5 seconds." Is there a way in VLC, or any other program, to identify precisely at what time a particular frame in a video takes place? I have easy access to Ubuntu and Windows, and can acquire a Mac if need be. If a precise timer is not available, knowing what frame number I am on will also work, since I know the framerate.
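    If the frame numbers are available, the timing itself is just arithmetic: timestamp = frame number / frame rate. A minimal sketch, assuming a constant frame rate and frames numbered from 0 (the frame rate and frame numbers below are placeholders):

      #!/usr/bin/env python
      # Minimal sketch: convert frame numbers to timestamps and measure an event's duration,
      # assuming a constant frame rate.
      FPS = 29.97          # assumption: your video's frame rate

      def frame_to_seconds(frame_number, fps=FPS):
          return frame_number / fps

      start_frame = 412    # placeholder: frame where the event starts
      end_frame = 563      # placeholder: frame where the event ends

      print("event starts at %.3f s" % frame_to_seconds(start_frame))
      print("event ends at   %.3f s" % frame_to_seconds(end_frame))
      print("duration: %.3f s" % ((end_frame - start_frame) / FPS))

    The precision of the result is then limited only by the frame rate (about 1/30 s for typical camera footage).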

    Read the article

  • What is the best free software to learn touch-typing?

    - by gojira
    What is the best free software to learn touch-typing? Features it needs to have: should NOT display the keyboard layout on the screen! should give detailed statistics which actually measure progress (for which key do I have the highest error rate, graphs showing how typing speed improved over time, etc.) should enable me to actually learn touch-typing in about one long weekend where I don't do much else than learn to touch-type. it would be very good if it were possible to load a text file and have the program use words from the text file for the exercises as well My goals are: at least the same typing speed as I have now, but with touch-typing, and being able to look only at the screen when typing P.S.: EDIT: I forgot to mention, I'm using Win 7. And I know what the Dvorak and Colemak keyboard layouts are, but I'm not interested in them. My question was with respect to the standard US keyboard layout.

    Read the article

  • Anti-spam measures for websites

    - by acidzombie24
    What are the anti-spam measures I should consider before launching my user-content website? Some things I have considered: Silent JavaScript-based CAPTCHA on the register page (I do not have an implementation) Validate emails by forcing a confirmation link/number Allow X amount of comments per 10 minutes and Y per 2 hours (I am considering excited first-time users who want to experience the site) Disallow links until the user is trusted (I am not sure how a user will become trusted) Run all comments, messages, etc. through a spam filter. Check to see if messages are duplicate or similar (I may not bother with this. I'd like the system to be strong without this) I also timestamp everything, which I can then retrieve as a long on my administrator page. What other measures can I take or consider?
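    The "X comments per 10 minutes and Y per 2 hours" rule is essentially a sliding-window rate limiter. A minimal in-memory sketch is below; the limits are placeholders, and a real site would keep these timestamps in its database or cache keyed by user id rather than in process memory.

      #!/usr/bin/env python
      # Minimal sketch of a sliding-window rate limiter with two windows.
      import time
      from collections import defaultdict, deque

      LIMITS = [                    # (window in seconds, max comments in that window)
          (10 * 60, 5),             # placeholder: 5 comments per 10 minutes
          (2 * 60 * 60, 30),        # placeholder: 30 comments per 2 hours
      ]

      history = defaultdict(deque)  # user_id -> timestamps of recent comments

      def allow_comment(user_id, now=None):
          now = now or time.time()
          stamps = history[user_id]
          # drop timestamps older than the largest window
          largest_window = max(w for w, _ in LIMITS)
          while stamps and now - stamps[0] > largest_window:
              stamps.popleft()
          for window, limit in LIMITS:
              recent = sum(1 for t in stamps if now - t <= window)
              if recent >= limit:
                  return False      # over the limit for this window
          stamps.append(now)
          return True

      # usage, called from the comment-posting handler:
      # if allow_comment(user_id): save the comment, else reject or queue for moderation

    Trusted users could simply be given larger limits, which ties this measure to the trust mechanism mentioned above.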

    Read the article

  • Web Application Vulnerability Scanner suggestions?

    - by Chris_K
    I'm looking for a new tool for the ol' admin toolkit and would value some suggestions. I would like to do some "automated" testing of handful of websites for XSS (cross site scripting) vulns, along with checking for SQL injection opportunities. I realize that an automated tool approach isn't necessarily the only or best solution, but I'm hoping it would give me a nice start. The sites I need to scan cover the range in stacks from PHP / MySQL to Coldfusion, with some classic ASP and ASP.NET mixed in for good measure. What tools would you use to scan for Web application vulns? (Please note I'm focusing on the web apps directly, not the servers themselves).

    Read the article

  • Very low throughput on 10GbE network

    - by aix
    I have two Linux machines, each equipped with a Solarflare SFN5122F 10GbE NIC. The two NICs are connected together with an SFP+ Direct Attach cable. I am using netperf to measure TCP throughput between the two machines. On one box, I run: netserver and on the other: netperf -t TCP_STREAM -H 192.168.x.x -- -m 32768 I get:

      MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.x.x (192.168.x.x) port 0 AF_INET
      Recv   Send    Send
      Socket Socket  Message  Elapsed
      Size   Size    Size     Time     Throughput
      bytes  bytes   bytes    secs.    10^6bits/sec

       87380  16384  32768    10.02    1321.34

    The measured throughput is 1.3Gb/s. This is 7.5x below the theoretical maximum, and only 30% faster than 1GbE. What steps can I take to troubleshoot this?

    Read the article

  • How can I convert audio files to this format?

    - by jeffamaphone
    I have a bunch of audio files that are named .wav but it seems not all .wavs are created equal. For example: $ file * file1.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 44100 Hz file2.wav: Audio file with ID3 version 2.2.0, contains: MPEG ADTS, layer III, v1, 160 kbps, 44.1 kHz, JntStereo file3.wav: Claris clip art? file4.wav: Audio file with ID3 version 2.2.0, contains: MPEG ADTS, layer III, v1, 160 kbps, 44.1 kHz, JntStereo And for good measure, a non-wav: file5.m4a: ISO Media, MPEG v4 system, iTunes AAC-LC I would like to convert all of these files to the format that file1.wav is: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 44100 Hz What is the proper set of arguments to pass to afconvert to make that happen?

    Read the article

  • need for tcp fine-tuning on heavily used proxy server

    - by Vijay Gharge
    Hi all, I am using Squid as an Internet proxy server on RHEL 4 update 6 & 8 with quite a heavy load, i.e. 8k established connections during peak hour. Without depending much on the application provider's expertise, I want to achieve the maximum output from Linux. With respect to that, I have the following questions: How do I find out if there is scope for further TCP fine-tuning (without exhausting available resources), as the benchmark values given by the vendor look poor? Is there any parameter value available from the OS / network stack that will show me the results? If there is scope, how shall I identify and configure the OS TCP stack parameters, i.e. using sysctl or any specific parameter? Post-tuning, how shall I clearly measure performance enhancement / degradation?
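    One way to see whether there is headroom before touching anything is to snapshot the current kernel TCP settings next to the live socket counts during the peak hour. A minimal sketch follows; the parameter list is just a common starting set (not a prescription), and any actual changes would still be made with sysctl.

      #!/usr/bin/env python
      # Minimal sketch: print current TCP socket counts and a few commonly tuned settings
      # from /proc, so configured limits can be compared against actual peak load.
      PARAMS = [
          "net/ipv4/tcp_fin_timeout",
          "net/ipv4/tcp_max_syn_backlog",
          "net/ipv4/ip_local_port_range",
          "net/core/somaxconn",
      ]

      def read_proc(path):
          with open("/proc/sys/" + path) as f:
              return f.read().strip()

      def tcp_socket_summary():
          # /proc/net/sockstat has a line like: "TCP: inuse 8123 orphan 12 tw 431 alloc 8200 mem 512"
          with open("/proc/net/sockstat") as f:
              for line in f:
                  if line.startswith("TCP:"):
                      return line.strip()
          return "TCP: (not found)"

      if __name__ == "__main__":
          print(tcp_socket_summary())
          for p in PARAMS:
              print("%-32s = %s" % (p, read_proc(p)))

    Capturing this output before and after a tuning change, along with Squid's own service-time statistics, gives a concrete way to measure enhancement or degradation.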

    Read the article

  • Where can I get a metal mug to fit my USB mug warmer?

    - by tjrobinson
    I got one of these for Christmas (this exact model): But it doesn't work very well with normal ceramic mugs as they're too thick and insulated. What I really want is a mug like this that fits well and transmits the heat from the warmer to my drink (so no air gaps or thick bases): Where can I get a mug like the one pictured above? I live in the UK but don't mind ordering from overseas. Update: I've tried camping mugs but they all seem to be too wide. I will measure the base later and post exact dimensions. I am really after the exact mug shown in the picture - it must exist somewhere!

    Read the article
