Search Results

Search found 2562 results on 103 pages for 'morphological analysis'.

Page 28/103 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • Visual Studio hangs when deploying a cube

    - by Richie
    Hello All, I'm having an issue with an Analysis Services project in Visual Studio 2005. My project always builds but only occasionally deploys. No errors are reported and VS just hangs. This is my first Analysis Services project, so I am hoping that there is something obvious that I am just missing. Here is the situation: I have a cube that I have successfully deployed. I then make some change, e.g., adding a hierarchy to a dimension. When I try to deploy again, VS hangs. I have to restart Analysis Services to regain control of VS so I can shut it down. I restart everything, sometimes once, sometimes twice or more, before the project will eventually deploy. This happens with any change I make; there seems to be no pattern to this behaviour. Sometimes I have to delete the cube from Analysis Services before restarting everything to get a successful deploy. Also, I have successfully deployed the cube and then subsequently reprocessed a dimension, but when I open a query window in SQL Server Management Studio it says that it can't find any cubes. As a test I have deployed a cube successfully, deleted it in Analysis Services, and attempted to redeploy it without making any changes to the cube, only to hit the same behaviour mentioned above. VS just hangs with no error, so I have no idea where to start hunting down the problem. It is taking 15-20 minutes to make a change as simple as setting the NameColumn of a dimension attribute. As you can imagine this is taking hours of my time, so I would greatly appreciate any assistance anyone can give me.

    Read the article

  • How does the Amazon Recommendation feature work?

    - by Rachel
    What technology goes on behind the scenes of Amazon's recommendation feature? I believe that Amazon's recommendations are currently the best in the market, but how do they provide us with such relevant recommendations? Recently, we have been involved in a similar recommendation project, and we would like to know about the ins and outs of the Amazon recommendation technology from a technical standpoint. Any inputs would be highly appreciated. Update: This patent explains how personalized recommendations are done, but it is not very technical, so it would be really nice if some insights could be provided. From Dave's comments, affinity analysis forms the basis for this kind of recommendation engine. Also, here are some good reads on the topic: Demystifying Market Basket Analysis, Market Basket Analysis, Affinity Analysis. Suggested reading: Data Mining: Concepts and Techniques.
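
    As a rough illustration of the affinity analysis mentioned above, here is a minimal Python sketch of item-to-item co-occurrence counting. The order data, item names, and scoring are invented for illustration; Amazon's actual system is far more sophisticated and is not described by this code.

        from collections import defaultdict
        from itertools import combinations

        # Toy purchase history: one set of items per order (hypothetical data).
        orders = [
            {"book_a", "book_b", "lamp"},
            {"book_a", "book_b"},
            {"book_b", "lamp", "mug"},
            {"book_a", "mug"},
        ]

        # Count how often each item, and each pair of items, appears in an order.
        pair_counts = defaultdict(int)
        item_counts = defaultdict(int)
        for order in orders:
            for item in order:
                item_counts[item] += 1
            for a, b in combinations(sorted(order), 2):
                pair_counts[(a, b)] += 1

        def recommend(item, top_n=3):
            """Rank other items by a simple confidence score: P(other | item)."""
            scores = {}
            for (a, b), count in pair_counts.items():
                if item == a:
                    scores[b] = count / item_counts[item]
                elif item == b:
                    scores[a] = count / item_counts[item]
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

        print(recommend("book_a"))  # book_b ranks highest for this toy data

    Real engines add normalization for item popularity, user history, and many other signals, but co-occurrence counting of this shape is the classic starting point for "customers who bought X also bought Y".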

    Read the article

  • SQL Server 2008 and SP1

    - by andrew007
    Hi, I have a server where I installed SQL Server 2008 and then applied SP1. Now I also want to add Analysis Services to this instance by using "Add or remove features...". My questions are: Is it possible to add Analysis Services on a server with SP1 already installed? How can I also apply SP1 to the new Analysis Services feature? THANKS!

    Read the article

  • An XEvent a Day (8 of 31) – Targets Week – synchronous_event_counter

    - by Jonathan Kehayias
    Yesterday’s post, Targets Week - Bucketizers , looked at the bucketizer Targets in Extended Events and how they can be used to simplify analysis and perform more targeted analysis based on their output.  Today’s post will be fairly short, by comparison to the previous posts, while we look at the synchronous_event_counter target, which can be used to test the impact of an Event Session without actually incurring the cost of Event collection. What is the synchronous_event_counter? The synchronous_event_count...(read more)

    Read the article

  • SQL SERVER – Maximize Database Performance with DB Optimizer – SQL in Sixty Seconds #054

    - by Pinal Dave
    Performance tuning is an interesting concept and everybody evaluates it differently. Every developer and DBA has a different opinion about how one can do performance tuning. I personally believe performance tuning is a three-step process: understanding the query, identifying the bottleneck, and implementing the fix. When we are working with a large database application and it suddenly starts to slow down, we are all under stress about how we can get the database back to normal speed. Most of the time we do not have enough time to do a deep analysis of what is going wrong, as well as what will fix the problem. Our primary goal at that time is simply to fix the database problem as fast as we can. However, one very important thing to keep in mind is that a quick fix should not create any further issues with other parts of the system. When time is of the essence and we want to do a deep analysis of our system to get the best solution, we often tend to make mistakes. Sometimes we make mistakes because we do not have proper time to analyze the entire system. Here is what I do when I face such a situation – I take the help of DB Optimizer. It is a fantastic tool and does superlative performance tuning of the system. Every time I talk about a performance tuning tool, the initial reaction is that people do not want to try it because they believe it requires a lot of learning before they can use it. That is absolutely not true in the case of DB Optimizer. It is a very easy-to-use and intuitive tool; one can get going with the product in no time. Here is a quick video I have built where I demonstrate how we can identify which index is missing for a query and how we can quickly create that index. All three steps of query tuning are completed in less than 60 seconds. If you are into performance tuning and query optimization you should download DB Optimizer and give it a go. Let us see the same concept in the following SQL in Sixty Seconds video: You can download DB Optimizer and reproduce the same Sixty Seconds experience. Related Tips in SQL in Sixty Seconds: Performance Tuning – Part 1 of 2 – Getting Started and Configuration Performance Tuning – Part 2 of 2 – Analysis, Detection, Tuning and Optimizing What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Interview Questions and Answers, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Identity

    Read the article

  • BigData and Customer Experience: Happy Together

    - by Isabel F. Peñuelas
    The two big buzzes of the year may lie closer together than it appears. Both concepts intersect at various points. BigData and Return on Investment of Marketing Campaigns: In a recent post, Big Data Is The Future Of Marketing, Jeff Dachis explains very clearly how "Big data analytics finally allows marketers to identify, measure, and manage what is positively impacting their Brand". Regression analysis applied to big data volumes coming from social media will replace the failed attempts to justify marketing investments in social media in terms of followers and likes, he continues; "the measurement models applied by marketers on TV Campaigns don't work on social", so we need to study the data with fresh eyes, and maybe then we will start understanding and measuring brand engagement. Social CRM and BigData: The real value of Social CRM starts with analyzing masses of big data from social media in order to apply social intelligence techniques that allow us to classify new customer niches and communities and define appropriate strategies to contact potential customers. Gartner says that the market for Social CRM is on pace to surpass $1 billion in revenue by year-end 2012, but in the words of Zach Hofer-Shall, analyst at Forrester Research, "Social customer relationship management is hard" (The Social CRM Arms Race Heats Up). To succeed, brands need three things: investing in new social tools, investing in consultancy, and investing in infrastructure for massive data storage and analysis. Neither CeX nor BigData is an easy and cheap win. But what are the customer benefits of such investments? Big Data and Brand Engagement: Time is the most valuable asset of today's consumers: tired of information overload, exhausted by the terabytes of offerings, anxious because they do not get the same fast multichannel experience from their service or preferred goods providers that they find on social media. Yes, I know you have read this before, me too. But it is real. The motto of the Customer Experience philosophy, providing a consistent experience through multiple touchpoints that makes the customer/brand relationship easier and more valuable, finds its basis in understanding customers' preferences and context, for which BigData analysis is another imperative. In summary, I believe that using BigData analysis in combination with appropriate CeX strategies and technologies is a promising direction for achieving: efficiency and marketing cost savings; growing the customer base; and increasing customer conversion and retention. In a word: the direction of future marketing.
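
    Purely as an illustration of the regression analysis Dachis refers to, the following Python sketch (using numpy, with entirely invented engagement numbers) fits a simple linear model relating social metrics to a brand-lift measure; it is a toy example, not anyone's actual methodology.

        import numpy as np

        # Hypothetical weekly observations: social mentions, shares, and a measured brand-lift score.
        mentions = np.array([120, 340, 210, 400, 150, 500], dtype=float)
        shares = np.array([30, 90, 60, 110, 40, 150], dtype=float)
        lift = np.array([1.1, 2.9, 1.8, 3.4, 1.3, 4.2])

        # Design matrix with an intercept column; fit by ordinary least squares.
        X = np.column_stack([np.ones_like(mentions), mentions, shares])
        coef, _, _, _ = np.linalg.lstsq(X, lift, rcond=None)
        print("intercept, per-mention, per-share effects:", coef)

        # Predict the lift for a planned week with, say, 300 mentions and 80 shares.
        print("predicted lift:", np.array([1.0, 300.0, 80.0]) @ coef)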

    Read the article

  • MySQL and Hadoop Integration - Unlocking New Insight

    - by Mat Keep
    “Big Data” offers the potential for organizations to revolutionize their operations. With the volume of business data doubling every 1.2 years, analysts and business users are discovering very real benefits when integrating and analyzing data from multiple sources, enabling deeper insight into their customers, partners, and business processes. As the world’s most popular open source database, and the most deployed database in the web and cloud, MySQL is a key component of many big data platforms, with Hadoop vendors estimating 80% of deployments are integrated with MySQL. The new Guide to MySQL and Hadoop presents the tools enabling integration between the two data platforms, supporting the data lifecycle from acquisition and organisation to analysis and visualisation / decision, as shown in the figure below. The Guide details each of these stages and the technologies supporting them: Acquire: Through new NoSQL APIs, MySQL is able to ingest high volume, high velocity data, without sacrificing ACID guarantees, thereby ensuring data quality. Real-time analytics can also be run against newly acquired data, enabling immediate business insight, before data is loaded into Hadoop. In addition, sensitive data can be pre-processed, for example healthcare or financial services records can be anonymized, before transfer to Hadoop. Organize: Data is transferred from MySQL tables to Hadoop using Apache Sqoop. With the MySQL Binlog (Binary Log) API, users can also invoke real-time change data capture processes to stream updates to HDFS. Analyze: Multi-structured data ingested from multiple sources is consolidated and processed within the Hadoop platform. Decide: The results of the analysis are loaded back to MySQL via Apache Sqoop where they inform real-time operational processes or provide source data for BI analytics tools. So how are companies taking advantage of this today? As an example, on-line retailers can use big data from their web properties to better understand site visitors’ activities, such as paths through the site, pages viewed, and comments posted. This knowledge can be combined with user profiles and purchasing history to gain a better understanding of customers and deliver highly targeted offers. Of course, it is not just in the web that big data can make a difference. Every business activity can benefit, with other common use cases including: - Sentiment analysis; - Marketing campaign analysis; - Customer churn modeling; - Fraud detection; - Research and Development; - Risk Modeling; - And more. As the guide discusses, Big Data is promising a significant transformation of the way organizations leverage data to run their businesses. MySQL can be seamlessly integrated within a Big Data lifecycle, enabling the unification of multi-structured data into common data platforms, taking advantage of all new data sources and yielding more insight than was ever previously imaginable. Download the guide to MySQL and Hadoop integration to learn more. I'd also be interested in hearing about how you are integrating MySQL with Hadoop today, and your requirements for the future, so please use the comments on this blog to share your insights.
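
    As a hedged illustration of the "Organize" step described above, the sketch below streams rows from a MySQL table into a tab-delimited file that could then be pushed into HDFS (for example with hdfs dfs -put). It assumes the mysql-connector-python package and uses invented connection, table, and column names; in practice Apache Sqoop or the Binlog API handles this far more efficiently at scale.

        import csv
        import mysql.connector  # assumes the mysql-connector-python package is installed

        # Hypothetical connection and table details, for illustration only.
        conn = mysql.connector.connect(host="db.example.com", user="etl",
                                       password="secret", database="shop")
        cursor = conn.cursor()
        cursor.execute("SELECT order_id, customer_id, total, created_at FROM orders")

        # Write tab-delimited rows; a file like this can then be copied into HDFS
        # for processing in Hadoop.
        with open("orders.tsv", "w", newline="") as out:
            writer = csv.writer(out, delimiter="\t")
            while True:
                rows = cursor.fetchmany(1000)  # stream in batches to limit memory use
                if not rows:
                    break
                writer.writerows(rows)

        cursor.close()
        conn.close()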

    Read the article

  • #MDX in London and speculation about future books

    - by Marco Russo (SQLBI)
    Chris Webb, who wrote the Expert Cube Development with Microsoft SQL Server 2008 Analysis Services book with me and Alberto, is preparing another Introduction to MDX course in London, this time from October 26th to 28th. It is now a three-day course (previously it was a two-day course) and you can find every other detail here. You might be wondering whether we are writing something else... well, we don't have plans to release a new edition of the Analysis Services book - after all, all the content of the...(read more)

    Read the article

  • Understanding Data Science: Recent Studies

    - by Joe Lamantia
    If you need a deeper understanding of data science than Drew Conway's popular Venn diagram model or Josh Wills' tongue-in-cheek characterization, "Data Scientist (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician," two relatively recent studies are worth reading.   'Analyzing the Analyzers,' an O'Reilly e-book by Harlan Harris, Sean Patrick Murphy, and Marck Vaisman, suggests four distinct types of data scientists -- effectively personas, in a design sense -- based on analysis of self-identified skills among practitioners.  The scenario format dramatizes the different personas, making what could be a dry statistical readout of survey data more engaging.  The survey-only nature of the data, the restriction of scope to just skills, and the suggested models of skill-profiles make this feel like the sort of exercise that data scientists undertake as an everyday task: collecting data, analyzing it using a mix of statistical techniques, and sharing the model that emerges from the data mining exercise.  That's not an indictment, simply an observation about the consistent feel of the effort as a product of data scientists, about data science.  And the paper 'Enterprise Data Analysis and Visualization: An Interview Study' by researchers Sean Kandel, Andreas Paepcke, Joseph Hellerstein, and Jeffery Heer considers data science within the larger context of industrial data analysis, examining analytical workflows, skills, and the challenges common to enterprise analysis efforts, and identifying three archetypes of data scientist.  As an interview-based study, the data the researchers collected is richer, and there's correspondingly greater depth in the synthesis.  The scope of the study included a broader set of roles than data scientist (enterprise analysts) and involved questions of workflow and organizational context for analytical efforts in general.  I'd suggest this is useful as a primer on analytical work and workers in enterprise settings for those who need a baseline understanding; it also offers some genuinely interesting nuggets for those already familiar with discovery work. We've undertaken a considerable amount of research into discovery, analytical work/ers, and data science over the past three years -- part of our programmatic approach to laying a foundation for product strategy and highlighting innovation opportunities -- and both studies complement and confirm much of the direct research into data science that we conducted. There were a few important differences in our findings, which I'll share and discuss in upcoming posts.

    Read the article

  • Using The Data Mining Query Task in SSIS

    SQL Server Integration Services (SSIS) is a Business Intelligence tool which can be used by database developers or administrators to perform Extract, Transform & Load (ETL) operations. In my previous article Using Analysis Services Processing Task & Analysis Services ... [Read Full Article]

    Read the article

  • WildPackets Monitors Diverse Networks

    WildPackets offers portable network analysis products which are designed for use on enterprise networks and in test and measurement labs, plus distributed network analysis solutions for enterprise-wide applications.

    Read the article

  • Parent-child hierarchies and unary operators in PowerPivot

    - by Marco Russo (SQLBI)
    Alberto wrote an excellent post describing how to implement the Unary Operator feature (which is present in Analysis Services) in PowerPivot (there was a previous post about parent-child hierarchies, too). I have to say that the solution is not as easy to implement as in Analysis Services, but it just works and, from a practical point of view, it is not so difficult to implement if you understand how it works and accept its limitations (only addition and subtraction are supported). I think that many...(read more)

    Read the article

  • Search Engine Optimizing

    Search Engine Optimization is a process by which a web site is improved so that it can be more easily found by search engines, rank higher, and reach its target audience. The main components of SEO are: keyword analysis, content analysis, title and meta tags, relevant link building, search engine submission, and maintenance. Below are steps in the process.
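
    As a small, illustrative example of the keyword analysis step, the following Python sketch (standard library only, with made-up page copy and keywords) counts how often candidate keywords appear on a page; real SEO tools weigh many more signals than raw keyword frequency.

        import re
        from collections import Counter

        # Sample page copy and candidate keywords, invented for illustration.
        page_text = """Morphological analysis tools help linguists break words into
        morphemes. Our analysis software supports stemming and tagging."""

        words = re.findall(r"[a-z']+", page_text.lower())
        counts = Counter(words)
        total = len(words)

        for keyword in ["analysis", "morphological", "software"]:
            density = counts[keyword] / total * 100
            print(f"{keyword}: {counts[keyword]} occurrences, {density:.1f}% of words")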

    Read the article

  • Powerful Lessons in Data from the Presidential Election

    - by Christina McKeon
    Now that we’ve had a few days to recover from the U.S. presidential election, it’s a good time to take a step back from politics and look for the customer experience lessons that we can take away. The most powerful lesson is that when you know more about your base, you will have an advantage over your competition. That advantage will translate into you winning and your competition losing. Michael Scherer of TIME was given access to Obama’s data analysts two days before the election. His account is documented in Inside the Secret World of the Data Crunchers Who Helped Obama Win. What we learned from Scherer’s inside view is how well Obama’s team did in getting the right data, analyzing it, and acting on it. This data team recognized how critical it was to break down data silos within the campaign. As Scherer noted, they created “a single system that merged information from pollsters, fundraisers, field workers, consumer databases, and social-media and mobile contacts with the main Democratic voter files in the swing states.” The Obama analysis was so meticulous that they knew which celebrity and which type of celebrity event would help them maximize campaign contributions. With a single system, their data models became more precise. They determined which messages were more successful with specific demographic groups and that who made the calls mattered. Data analysis also led to many other changes in Obama’s campaign including a new ad buying strategy, using social media and applications to tap into supporters’ friends, and using new social news sites. While we did not have that same inside view into Romney’s campaign, much of the post-mortem coverage indicates that Romney’s team did not have the right analysis. As Peter Hamby of CNN wrote in Analysis: Why Romney Lost, “Romney officials had modeled an electorate that looked something like a mix of 2004 and 2008….” That historical data did not account for the changing demographics in the U.S. Does your organization approach data like the Obama or Romney team? Do you really know your base? How well can you predict what is going to happen in your business? If you haven’t already put together a strategy and plan to know more, this week’s civics lesson is a powerful reason to do it sooner rather than later. Your competitors are probably thinking the same thing that you are!
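
    Purely as an illustration of the "break down data silos and merge on a common key" idea described above, a few lines of Python show the shape of that work (all records, ids, and field names are invented):

        # Toy records from separate "silos", keyed by a shared voter id (invented data).
        voter_file = {
            "v001": {"name": "A. Jones", "state": "OH"},
            "v002": {"name": "B. Smith", "state": "FL"},
        }
        donations = {"v001": 250, "v003": 75}
        volunteer_contacts = {"v002": 3}

        # Merge everything into one profile per voter id so models see a single view.
        profiles = {}
        for vid, base in voter_file.items():
            profiles[vid] = dict(base)
            profiles[vid]["donated"] = donations.get(vid, 0)
            profiles[vid]["contacts"] = volunteer_contacts.get(vid, 0)

        for vid, profile in sorted(profiles.items()):
            print(vid, profile)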

    Read the article

  • Botnet Malware Sleeps Eight Months Activation, Child Concerns

    Daily Safety Check experts used a computer forensic analysis of a significant botnet that consisted of Carberp and SpyEye malware to come up with the details for their report. The analysis found that the botnet profiled the behavior of the slave computers it infected, similar to surveillance techniques used by law enforcement agencies, for an average of eight months. During the eight months, the botnet analyzed each computer's users and assigned ratings to certain activities to form a complete profile for each. Doing so allowed those behind the scheme to determine which were the most favora...

    Read the article

  • Tissue Specific Electrochemical Fingerprinting on the NetBeans Platform

    - by Geertjan
    Proteomics and metalloproteomics are rapidly developing interdisciplinary fields providing enormous amounts of data to be classified, evaluated, and interpreted. Approaches offered by bioinformatics and also by biostatistical data analysis and treatment are therefore becoming increasingly relevant. A bioinformatics tool has been developed at universities in Prague and Brno, in the Czech Republic, for analysis and visualization in this domain, on the NetBeans Platform: More info:  http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0049654

    Read the article

  • Feature: Lead with Intelligence

    Business efficiency depends on business decisions, and business decisions depend on current, accurate information and powerful analysis. See how Oracle data warehousing, business intelligence, and enterprise performance management solutions deliver the information, analysis, and efficiencies to propel your business ahead of the competition.

    Read the article

  • mod perl in apache 2.2 not parsing perl scripts

    - by futureelite7
    Hi, I've set up a fresh Apache 2.2.15 server on Windows Server 2008 R2 with mod_perl (mod_perl v2.0.4 / perl v5.10.1). mod_perl and Perl 5.10 have been installed and loaded without problems. However, despite my configuration, the mod_perl module is failing to recognize and execute my .pl file, opting to print out the Perl source instead. What did I do wrong, and how do I make Perl process my .pl script instead of sending it to the client? My configuration:
    <VirtualHost *:80>
        ServerAdmin [email protected]
        DocumentRoot "C:\Program Files (x86)\AWStats\wwwroot"
        ServerName analysis.example.com
        ServerAlias analysis.example.com
        ErrorLog "logs/analysis.example.com-error.log"
        CustomLog "logs/analysis.example.com-access.log" common
        DirectoryIndex index.php index.htm index.html
        PerlSwitches -T
        <Directory "C:\Program Files (x86)\AWStats\wwwroot">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>
        <Directory "C:\Program Files (x86)\AWStats\wwwroot\cgi-bin">
            AllowOverride None
            Options None
            Order allow,deny
            Allow from all
            <FilesMatch "\.pl$">
                SetHandler perl-script
                # #PerlResponseHandler ModPerl::Registry
                PerlOptions +ParseHeaders
                Options +ExecCGI
            </FilesMatch>
        </directory>
    </VirtualHost>
    Many many thanks for the help!

    Read the article

  • The way I think about Diagnostic tools

    - by Daniel Moth
    Every piece of software has issues, or as we like to call them, "bugs". That is not a discussion point, just a mere fact. It follows that an important skill for developers is to be able to diagnose issues in their code. Of course we need to advance our tools and techniques so we can prevent bugs from getting into the code (e.g. unit testing), but beyond designing great software, diagnosing bugs is an equally important skill. To diagnose issues, the most important assets are good techniques, skill, experience, and maybe talent. What also helps is having good diagnostic tools, and what helps further is knowing all the features that they offer and how to use them. The following classification is how I like to think of diagnostics. Note that, as with any attempt to bucketize anything, you run into overlapping areas and blurry lines. Nevertheless, I will continue sharing my generalizations ;-) It is important to identify at the outset if you are dealing with a performance or a correctness issue. If you have a performance issue, use a profiler. I hear people saying "I am using the debugger to debug a performance issue", and that is fine, but do know that a dedicated profiler is the tool for that job. Just because you don't need one all the time, profilers typically cost more, and you are not as familiar with them as you are with the debugger doesn't mean you shouldn't invest in one instead of trying to exclusively use the wrong tool for the job. Visual Studio has a profiler and a concurrency visualizer (for profiling multi-threaded apps). If you have a correctness issue, then you have several options - that's next :-) This is how I think of identifying a correctness issue. Do you want a tool to find the issue for you at design time? The compiler is such a tool - it gives you an exact list of errors. Compilers now also offer warnings, which are their way of saying "this may be an error, but I am not smart enough to know for sure". There are also static analysis tools, which go a step further than the compiler in identifying issues in your code, sometimes with the aid of code annotations and other times just by pointing them at your raw source. An example is FxCop, and there is much more in Visual Studio 11 Code Analysis. Do you want a tool to find the issue for you with code execution? Just like static tools, there are also dynamic analysis tools; instead of statically analyzing your code, they analyze what your code does dynamically at runtime. Whether you have to set up some unit tests to invoke your code at runtime, or have to manually run your app (and interact with it) under the tool, or have to use a script to execute your binary under the tool… that varies. The result is still a list of issues for you to address after the analysis is complete, or a pause of the execution when the first issue is encountered. If a code path was not taken, no analysis for it will exist, obviously. An example is the GPU Race detection tool that I'll be talking about on the C++ AMP team blog. Another example is the MSR concurrency CHESS tool. Do you want to find the issue yourself at design time? Perform a code walkthrough on your own or with colleagues. There are code review tools that go beyond just diffing sources, and they help you with that aspect too. For example, there is a new one in Visual Studio 11, and searching with my favorite search engine yielded this article based on the Developer Preview. Do you want to find the issue yourself with code execution? Use a debugger - let's break this down further next.
    This is how I think of debugging: There is post-mortem debugging. That means your code has executed and you did something in order to examine what happened during its execution. This can vary from manual printf and other tracing statements to trace events (e.g. ETW) to taking dumps. In all cases, you are left with some artifact that you examine after the fact (after code execution) to discern what took place, hoping it will help you find the bug. Learn how to debug dump files in Visual Studio. There is live debugging. I will elaborate on this in a separate post, but this is where you inspect the state of your program during its execution, and try to find what the problem is. More from me in a separate post on live debugging. There is a hybrid of live plus post-mortem debugging. This is, for example, what tools like IntelliTrace offer. If you are a tools vendor interested in the diagnostics space, it helps to understand where in the above classification your tool excels, where its primary strength is, so you can market it as such. Then it helps to see which of the other areas above your tool touches on, and how you can make it even better there. Finally, see what areas your tool doesn't help with at all, and evaluate whether it should, or whether it should continue to stay clear. Even though the classification helps us think about this space, the reality is that the best tools are either extremely excellent in only one of these areas, or more often very good across a number of them. Another approach is to offer a toolset covering all areas, with appropriate integration and hand-off points from one to the other. Anyway, with that brain dump out of the way, in follow-up posts I will dive into live debugging, and specifically live debugging in Visual Studio - stay tuned if that interests you. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Merck Serono Gains Deep Understanding of Product Portfolio Value-Drivers, Risks, and Sales Expectations Through Forecasting Solution

    - by Melissa Centurio Lopes
    Merck Serono S.A. is the biopharmaceutical division of Merck KGaA. It offers leading brands in 150 countries to help patients with cancer, multiple sclerosis, infertility, endocrine and metabolic disorders, as well as cardiovascular diseases. Challenges: Establish a better decision-making framework for its complex development portfolio of pharmaceutical products, where single-point estimates or expected averages of portfolio values, portfolio risks, and sales forecasts are insufficient and can be misleading. Enable the company to be aware at all times of the range of possible outcomes of technical and market risks and uncertainties, such as the technical uncertainty of whether a product will produce the desired clinical outcomes, or the market-related uncertainty of whether a product will be outperformed by its competitors. Solutions to Overcome the Challenges: Used Oracle Crystal Ball to devise a Monte-Carlo-based approach to better analyze and define the values and risks of the company’s development portfolio, laying the groundwork for optimized decision-making. Enabled a better understanding of the range of potential values and risks to improve portfolio planning. Enabled detailed analysis of the likelihood of favorable or unfavorable outcomes, such as the likelihood of whether Merck Serono can meet its sales targets planned for the next ten years with its existing product portfolio. Gained the ability to take into account correlative risks, synergies and project interactions, enabling Merck Serono to better forecast what the company may achieve—for example, that there is a 70% probability of a particular sales target being met. Established Monte-Carlo-based analysis using Oracle Crystal Ball as a useful element in decision-making at the board level, as the approach provides a better analysis of values and risks associated with the company’s product portfolio. “Oracle Crystal Ball enables us to make Monte Carlo simulations of the potential value and sales of our development portfolio. It is a very powerful tool for gaining a thorough understanding and improved awareness of value drivers, uncertainties, and risks, along with associated probabilities.” – Riccardo Lampariello, Associate Director, Merck Serono S.A. Why Oracle: “We chose Oracle Crystal Ball to enable us to perform Monte Carlo analysis, which gives us a deeper understanding and improved awareness of the value drivers, uncertainties and risks of our portfolio of development projects,” said Kimber Hardy, head of valuation and analysis, Merck Serono S.A.
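
    To give a flavour of the Monte-Carlo-based analysis described above, here is a small illustrative Python sketch (all product figures, volatilities, success probabilities, and the sales target are invented) that simulates uncertain per-product outcomes and estimates the probability of a portfolio meeting a sales target, the kind of "70% probability" statement mentioned in the story. It is not a description of Merck Serono's actual models or of Oracle Crystal Ball.

        import random

        # Hypothetical portfolio: (expected peak sales in $M, std. dev., probability of technical success)
        portfolio = [
            (400, 120, 0.70),
            (250, 80, 0.55),
            (600, 200, 0.40),
        ]
        SALES_TARGET = 700  # $M, invented target
        TRIALS = 100_000

        hits = 0
        for _ in range(TRIALS):
            total = 0.0
            for mean, sd, p_success in portfolio:
                if random.random() < p_success:                # does the product reach market?
                    total += max(0.0, random.gauss(mean, sd))  # uncertain sales if it does
            if total >= SALES_TARGET:
                hits += 1

        print(f"Estimated probability of meeting the target: {hits / TRIALS:.0%}")
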
    Click here to read the full version of the customer success story

    Read the article

  • XBRL - Moving from Production to Consumption

    - by jmorourke
    Here's an update on what’s new with XBRL and how it can actually benefit your organization versus adding extra time and costs to financial reporting.  On February 29th (leap day) of 2012 I attended the XBRL and Financial Analysis Technology Conference at Baruch College in NYC.  The event, which attracted over 300 XBRL gurus and fans was presented by XBRL US, The New York Society of Security Analysts’ Improved Corporate Reporting Committee, and Baruch College’s Robert Zicklin Center for Corporate Integrity.  The event featured keynotes from the U.S. Securities and Exchange Commission (SEC), and the CFA Institute as well as panels covering alternative research tools and data, corporate reporting to stakeholders and a demonstration of XBRL analysis tools.  The program culminated in a presentation of the finalists and the winner of the $20,000 XBRL Challenge.    Some of the key points made in the sessions included: The focus of XBRL tools is moving from production to consumption. As of February 2012, over 9000 companies are reporting in XBRL, with over 10 million facts filed to date XBRL taxonomy extensions have dropped from 27% to 11% making comparisons easier The SEC reports that XBRL makes it easier to analyze disclosures, focus on accounting issues XBRL is helping standards-setters like the FASB speed their analysis of impacts of proposed accounting rule changes Companies like Thomson Reuters report that XBRL is helping speed the delivery of data to clients The most interesting part of the program though, was the session highlighting the 5 finalists in the XBRL Challenge competition and the winning solution.  The XBRL Challenge was launched in 2011 as a means of spurring the development of more end-user tools to help with the consumption of XBRL-based financial information.       Over an 8-month process handled by 5 judges, there were 84 registrants, 15 completed submissions, 5 finalists and one winner of the challenge.  All of the solutions are open-sourced tools and most of them focus on consuming XBRL-based data.  The 5 finalists included: Advanced XBRL Processing from Oxide solutions – XBRL viewer for taxonomies, filings and company data with peer comparison capabilities. Arrelle – API for XBRL processes, supports SEC Validations, RSS Feeds to access filings etc. Calcbench – XBRL data analysis tool that can be embedded in other web applications.  This tool can combine XBRL filings with real-time market data. XBRL to XL – allows the importing of XBRL data into Microsoft Excel for analysis, comparisons.  Users start on the web and populate Excel with XBRL data. XBurble – allows users to search and view XBRL filings, export to Excel, merge for comparison, and includes a workflow interface. The winner of the $20,000 XBRL Challenge prize was CalcBench.  More information about the XBRL Challenge and the finalists can be found at www.XBRLUS.org/challenge XBRL for Sustainability Reporting – other recent news on the XBRL front was the announcement by the Global Reporting Initiative (GRI) of an XBRL taxonomy for Sustainability Reporting.  This taxonomy was co-developed by the GRI and Deloitte and is designed to make the consumption of data found in Sustainability Reports much easier.  
    Although there is no government mandate to file Sustainability Reports in XBRL format, organizations that do use the GRI guidelines for Sustainability Reporting are encouraged to tag and submit their data voluntarily to the GRI – which will populate a database with Sustainability Reporting data and make this available to the public.  For more information about this initiative, you can go to the GRI web site:  www.globalreporting.org. So how does all of this benefit corporate filers and investors?  Since its introduction, the consensus in the market is that XBRL has mainly benefited the regulators and investment analysts who need to consume and analyze large volumes of financial data.  But the emergence of more end-user tools for consuming and analyzing XBRL-based data, and the ability to perform quick comparisons of one company versus its peers and competitors in an industry group, will soon accelerate the benefits to corporate finance staff, as well as individual investors.  This could apply to financial results tagged in XBRL, as well as non-financial information such as Sustainability Reporting – which over the long term will likely be integrated with financial reporting.   And as multiple regulators and agencies in a country adopt the XBRL standard for corporate filings, more benefits will accrue as companies will be able to leverage one set of XBRL-based financial data for multiple regulatory filings.     For more information about the latest developments in XBRL, check out the XBRL US or XBRL International web sites:  www.xbrl.org, www.xbrlus.org. For more information about what Oracle is doing to support XBRL, here are some links: http://www.oracle.com/us/solutions/ent-performance-bi/disclosure-management-065892.html http://www.oracle.com/technetwork/database/features/xmldb/index-087631.html Feel free to contact me if you have any questions or need more information:  [email protected]
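
    For readers who want a feel for what "consuming" XBRL involves, here is a minimal Python sketch that pulls tagged facts out of an XBRL instance document using only the standard library; the file name and element filter are hypothetical, and real consumption tools such as the Challenge finalists also resolve taxonomies, contexts, and units properly.

        import xml.etree.ElementTree as ET

        # Hypothetical SEC-style XBRL instance file.
        tree = ET.parse("example-10k.xml")
        root = tree.getroot()

        US_GAAP = "http://fasb.org/us-gaap/2012-01-31"  # namespace varies by taxonomy year

        # Collect every us-gaap fact: element name, context reference, and reported value.
        facts = []
        for elem in root.iter():
            if elem.tag.startswith("{" + US_GAAP + "}"):
                name = elem.tag.split("}", 1)[1]
                facts.append((name, elem.get("contextRef"), elem.text))

        # Print a few revenue-related facts as a quick sanity check.
        for name, context, value in facts:
            if "Revenues" in name:
                print(name, context, value)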

    Read the article

  • Application Event Log keeps getting corrupted

    - by yakatz
    I recently asked about repairing a corrupt event log, because it seemed to be a one-off event. The event log has since exhibited the same behavior 3 times. We have been trying to find patterns, but so far we have found nothing. The server runs several ASP.NET applications and three scheduled tasks written in .NET. The last modified date of the event log once happened to be the same time as one of the scheduled tasks, but the others have not been. Any suggestions of where to look next or a way we can get any information out of a corrupt evtx file? The server is running critical e-commerce applications, so we want to keep the number of restarts required to a minimum. Edit: I ran DUMPEL and got very strange results. 1/9/2012 4:14:05 PM 1 100 1000 Application Error N/A SERVERNAME Faulting application name: w3wp.exe, version: 7.5.7601.17514, time stamp: 0x4ce7a5f8 Faulting module name: ntdll.dll, version: 6.1.7601.17514, time stamp: 0x4ce7ba58 Exception code: 0xc0000374 Fault offset: 0x000ce653 Faulting process id: 0x1070 Faulting application start time: 0x01cccf1386d30991 Faulting application path: C:\Windows\SysWOW64\inetsrv\w3wp.exe Faulting module path: C:\Windows\SysWOW64\ntdll.dll Report Id: dbf4f691-3b06-11e1-9025-005056a602e6 1/9/2012 4:14:07 PM 4 0 1001 Windows Error Reporting N/A SERVERNAME Fault bucket , type 0 Event Name: APPCRASH Response: Not available Cab Id: 0 Problem signature: P1: w3wp.exe P2: 7.5.7601.17514 P3: 4ce7a5f8 P4: StackHash_79d9 P5: 6.1.7601.17514 P6: 4ce7ba58 P7: c0000374 P8: 000ce653 P9: P10: Attached files: C:\Windows\Temp\WER975.tmp.appcompat.txt C:\Windows\Temp\WERA03.tmp.WERInternalMetadata.xml C:\Windows\Temp\WERA13.tmp.hdmp C:\Windows\Temp\WERD21.tmp.mdmp These files may be available here: C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_cd7d09dfc84119d82a2ac6a789038bd5661acfb_cab_128f0e67 Analysis symbol: Rechecking for solution: 0 Report Id: dbf4f691-3b06-11e1-9025-005056a602e6 Report Status: 4 1/9/2012 4:14:07 PM 4 0 1001 Windows Error Reporting N/A SERVERNAME Fault bucket , type 0 Event Name: APPCRASH Response: Not available Cab Id: 0 Problem signature: P1: w3wp.exe P2: 7.5.7601.17514 P3: 4ce7a5f8 P4: StackHash_79d9 P5: 6.1.7601.17514 P6: 4ce7ba58 P7: c0000374 P8: 000ce653 P9: P10: Attached files: C:\Windows\Temp\WER975.tmp.appcompat.txt C:\Windows\Temp\WERA03.tmp.WERInternalMetadata.xml C:\Windows\Temp\WERA13.tmp.hdmp C:\Windows\Temp\WERD21.tmp.mdmp These files may be available here: C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_cd7d09dfc84119d82a2ac6a789038bd5661acfb_cab_128f0e67 Analysis symbol: Rechecking for solution: 0 Report Id: dbf4f691-3b06-11e1-9025-005056a602e6 Report Status: 0 1/9/2012 4:14:12 PM 1 100 1000 Application Error N/A SERVERNAME Faulting application name: w3wp.exe, version: 7.5.7601.17514, time stamp: 0x4ce7a5f8 Faulting module name: ntdll.dll, version: 6.1.7601.17514, time stamp: 0x4ce7ba58 Exception code: 0xc0000374 Fault offset: 0x000ce653 Faulting process id: 0x16ac Faulting application start time: 0x01cccf139f475c0c Faulting application path: C:\Windows\SysWOW64\inetsrv\w3wp.exe Faulting module path: C:\Windows\SysWOW64\ntdll.dll Report Id: e03bae70-3b06-11e1-9025-005056a602e6 1/9/2012 4:14:16 PM 4 0 1001 Windows Error Reporting N/A SERVERNAME Fault bucket , type 0 Event Name: APPCRASH Response: Not available Cab Id: 0 Problem signature: P1: w3wp.exe P2: 7.5.7601.17514 P3: 4ce7a5f8 P4: StackHash_9c6c P5: 6.1.7601.17514 P6: 4ce7ba58 P7: c0000374 P8: 000ce653 P9: P10: Attached files: 
C:\Windows\Temp\WER2579.tmp.appcompat.txt C:\Windows\Temp\WER25F7.tmp.WERInternalMetadata.xml C:\Windows\Temp\WER25F8.tmp.hdmp C:\Windows\Temp\WER28F6.tmp.mdmp These files may be available here: C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_c49a67649524ad11b64bbf809211bc5ba742a3d6_cab_0b63321b Analysis symbol: Rechecking for solution: 0 Report Id: e03bae70-3b06-11e1-9025-005056a602e6 Report Status: 4 1/9/2012 4:14:16 PM 4 0 1001 Windows Error Reporting N/A SERVERNAME Fault bucket , type 0 Event Name: APPCRASH Response: Not available Cab Id: 0 Problem signature: P1: w3wp.exe P2: 7.5.7601.17514 P3: 4ce7a5f8 P4: StackHash_9c6c P5: 6.1.7601.17514 P6: 4ce7ba58 P7: c0000374 P8: 000ce653 P9: P10: Attached files: C:\Windows\Temp\WER2579.tmp.appcompat.txt C:\Windows\Temp\WER25F7.tmp.WERInternalMetadata.xml C:\Windows\Temp\WER25F8.tmp.hdmp C:\Windows\Temp\WER28F6.tmp.mdmp These files may be available here: C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_c49a67649524ad11b64bbf809211bc5ba742a3d6_cab_0b63321b Analysis symbol: Rechecking for solution: 0 Report Id: e03bae70-3b06-11e1-9025-005056a602e6 Report Status: 0 1/9/2012 4:14:21 PM 1 100 1000 Application Error N/A SERVERNAME Faulting application name: w3wp.exe, version: 7.5.7601.17514, time stamp: 0x4ce7a5f8 Faulting module name: ntdll.dll, version: 6.1.7601.17514, time stamp: 0x4ce7ba58 Exception code: 0xc0000374 Fault offset: 0x000ce653 Faulting process id: 0x17f8 Faulting application start time: 0x01cccf13a4ba5126 Faulting application path: C:\Windows\SysWOW64\inetsrv\w3wp.exe Faulting module path: C:\Windows\SysWOW64\ntdll.dll Report Id: e57a0a85-3b06-11e1-9025-005056a602e6 1/9/2012 4:14:21 PM 4 0 1001 Windows Error Reporting N/A SERVERNAME Fault bucket , type 0 Event Name: APPCRASH Response: Not available Cab Id: 0 Problem signature: P1: w3wp.exe P2: 7.5.7601.17514 P3: 4ce7a5f8 P4: StackHash_9c6c P5: 6.1.7601.17514 P6: 4ce7ba58 P7: c0000374 P8: 000ce653 P9: P10: Attached files: These files may be available here: C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_c49a67649524ad11b64bbf809211bc5ba742a3d6_1cfb4872 Analysis symbol: Rechecking for solution: 0 Report Id: e57a0a85-3b06-11e1-9025-005056a602e6 Report Status: 4 1/9/2012 4:14:21 PM 4 0 1001 Windows Error Reporting N/A SERVERNAME Fault bucket , type 0 Event Name: APPCRASH Response: Not available Cab Id: 0 Problem signature: P1: w3wp.exe P2: 7.5.7601.17514 P3: 4ce7a5f8 P4: StackHash_9c6c P5: 6.1.7601.17514 P6: 4ce7ba58 P7: c0000374 P8: 000ce653 P9: P10: Attached files: These files may be available here: C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_c49a67649524ad11b64bbf809211bc5ba742a3d6_1cfb4872 Analysis symbol: Rechecking for solution: 0 Report Id: e57a0a85-3b06-11e1-9025-005056a602e6 Report Status: 0 None of the files referenced actually exist (not even in WER ReportArchive). These should not be the only events mentioned. The log file has been cleared twice since January 9, so those events should not even be listed at all.

    Read the article

  • MS project publishing to TFS web portal display

    - by denis bastarache
    So, when we initially created our MPP schedule, I made use of indents / subordinates to break down the project by the various stages of the lifecycle, which is fine... no issues there... But now that I'm trying to publish this over to the TFS display, it'll only pick up the actual "action items / sub-tasks", seeing as I have resource allocation specified. So for example I have an "Analysis" phase with a few items underneath and a "System Requirements" phase with the same items, so when I publish these to TFS, it won't display the "Parent" distinction between items, and both "Task" instances are published in TFS under the exact same name... So, if I can't do this automatically, I'll likely have to edit each task as "Analysis - Item 1", "Analysis - Item 2", "SRD - Item 1", "SRD - Item 2"... is there a way to do this automatically, or will I have to go the manual route?

    Read the article

< Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >