Search Results

Search found 5751 results on 231 pages for 'analysis patterns'.


  • Using The Data Mining Query Task in SSIS

    SQL Server Integration Services (SSIS) is a Business Intelligence tool which can be used by database developers or administrators to perform Extract, Transform & Load (ETL) operations. In my previous article Using Analysis Services Processing Task & Analysis Services ...


  • WildPackets Monitors Diverse Networks

    WildPackets offers portable network analysis products which are designed for use on enterprise networks and in test and measurement labs, plus distributed network analysis solutions for enterprise-wide applications.


  • How to get the user's set of date format pattern strings? (3 replies)

    I would like to get the current user's set of date format pattern strings as listed in the Control Panel regional settings applet. For my UK English system I see the following patterns listed: Short Date: dd/MM/yyyy, dd/MM/yy, d/M/yy, d.M.yy, yyyy MM dd; Long Date: dd MMMM yyyy, d MMMM yyyy. If I use GetDateTimeFormats (d and D) the results match the expected patterns above, but of course they're the for...



  • Parent-child hierarchies and unary operators in PowerPivot

    - by Marco Russo (SQLBI)
    Alberto wrote an excellent post describing how to implement the Unary Operator feature (which is present in Analysis Services) in PowerPivot (there was a previous post about parent-child hierarchies, too). I have to say that the solution is not as easy to implement as in Analysis Services, but it just works and, from a practical point of view, it is not so difficult to implement if you understand how it works and accept its limitations (only sums and subtractions are supported). I think that many...


  • Powerful Lessons in Data from the Presidential Election

    - by Christina McKeon
    Now that we’ve had a few days to recover from the U.S. presidential election, it’s a good time to take a step back from politics and look for the customer experience lessons that we can take away. The most powerful lesson is that when you know more about your base, you will have an advantage over your competition. That advantage will translate into you winning and your competition losing.

    Michael Scherer of TIME was given access to Obama’s data analysts two days before the election. His account is documented in Inside the Secret World of the Data Crunchers Who Helped Obama Win. What we learned from Scherer’s inside view is how well Obama’s team did in getting the right data, analyzing it, and acting on it. This data team recognized how critical it was to break down data silos within the campaign. As Scherer noted, they created “a single system that merged information from pollsters, fundraisers, field workers, consumer databases, and social-media and mobile contacts with the main Democratic voter files in the swing states.” The Obama analysis was so meticulous that they knew which celebrity and which type of celebrity event would help them maximize campaign contributions. With a single system, their data models became more precise. They determined which messages were more successful with specific demographic groups and that who made the calls mattered. Data analysis also led to many other changes in Obama’s campaign including a new ad buying strategy, using social media and applications to tap into supporters’ friends, and using new social news sites.

    While we did not have that same inside view into Romney’s campaign, much of the post-mortem coverage indicates that Romney’s team did not have the right analysis. As Peter Hamby of CNN wrote in Analysis: Why Romney Lost, “Romney officials had modeled an electorate that looked something like a mix of 2004 and 2008….” That historical data did not account for the changing demographics in the U.S.

    Does your organization approach data like the Obama or Romney team? Do you really know your base? How well can you predict what is going to happen in your business? If you haven’t already put together a strategy and plan to know more, this week’s civics lesson is a powerful reason to do it sooner rather than later. Your competitors are probably thinking the same thing that you are!


  • Search Engine Optimizing

    Search Engine Optimization is a process by which a web site is improved so that it can be more easily found by search engines, rank higher and be found by its target audience. The main components to SEO are: keyword analysis, content analysis, title and meta tags, relevant link building, search engine submission, and maintenance. Below are steps in the process.


  • Botnet Malware Sleeps Eight Months Before Activation, Child Concerns

    Daily Safety Check experts used a computer forensic analysis of a significant botnet that consisted of Carberp and SpyEye malware to come up with the details for their report. The analysis found that the botnet profiled the behavior of the slave computers it infected, similar to surveillance techniques used by law enforcement agencies, for an average of eight months. During the eight months, the botnet analyzed each computer's users and assigned ratings to certain activities to form a complete profile for each. Doing so allowed those behind the scheme to determine which were the most favora...


  • Tissue Specific Electrochemical Fingerprinting on the NetBeans Platform

    - by Geertjan
    Proteomics and metalloproteomics are rapidly developing interdisciplinary fields providing enormous amounts of data to be classified, evaluated, and interpreted. Approaches offered by bioinformatics and also by biostatistical data analysis and treatment are therefore becoming increasingly relevant. A bioinformatics tool has been developed at universities in Prague and Brno, in the Czech Republic, for analysis and visualization in this domain, on the NetBeans Platform: More info:  http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0049654


  • Feature: Lead with Intelligence

    Business efficiency depends on business decisions, and business decisions depend on current, accurate information and powerful analysis. See how Oracle data warehousing, business intelligence, and enterprise performance management solutions deliver the information, analysis, and efficiencies to propel your business ahead of the competition.


  • mod perl in apache 2.2 not parsing perl scripts

    - by futureelite7
    Hi, I've set up a fresh Apache 2.2.15 server on Windows Server 2008 R2 with mod_perl (mod_perl v2.0.4 / Perl v5.10.1). Mod_perl and Perl 5.10 have been installed and loaded without problems. However, despite my configuration, the mod_perl module is failing to recognize and execute my .pl file, opting to print out the Perl source instead. What did I do wrong, and how do I make Perl process my .pl script instead of sending it to the client? My configuration:

      <VirtualHost *:80>
          ServerAdmin [email protected]
          DocumentRoot "C:\Program Files (x86)\AWStats\wwwroot"
          ServerName analysis.example.com
          ServerAlias analysis.example.com
          ErrorLog "logs/analysis.example.com-error.log"
          CustomLog "logs/analysis.example.com-access.log" common
          DirectoryIndex index.php index.htm index.html
          PerlSwitches -T
          <Directory "C:\Program Files (x86)\AWStats\wwwroot">
              Options Indexes FollowSymLinks
              AllowOverride None
              Order allow,deny
              Allow from all
          </Directory>
          <Directory "C:\Program Files (x86)\AWStats\wwwroot\cgi-bin">
              AllowOverride None
              Options None
              Order allow,deny
              Allow from all
              <FilesMatch "\.pl$">
                  SetHandler perl-script
                  #
                  #PerlResponseHandler ModPerl::Registry
                  PerlOptions +ParseHeaders
                  Options +ExecCGI
              </FilesMatch>
          </directory>
      </VirtualHost>

    Many many thanks for the help!
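
    For readers hitting the same symptom, one likely culprit stands out in the config above: the PerlResponseHandler directive is commented out, so even though SetHandler hands matching requests to mod_perl, no response handler (such as ModPerl::Registry, which runs .pl files CGI-style) is configured to actually execute the script. A hedged, untested sketch of just the FilesMatch block with the handler enabled:

      <FilesMatch "\.pl$">
          SetHandler perl-script
          # ModPerl::Registry executes the script instead of serving its source
          PerlResponseHandler ModPerl::Registry
          PerlOptions +ParseHeaders
          Options +ExecCGI
      </FilesMatch>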


  • What's the difference between Host and HostName in SSH Config?

    - by Bill Jobs
    The man page says this:

        Host
            Restricts the following declarations (up to the next Host keyword) to be only for those hosts that match one of the patterns given after the keyword. If more than one pattern is provided, they should be separated by whitespace. A single `*' as a pattern can be used to provide global defaults for all hosts. The host is the hostname argument given on the command line (i.e. the name is not converted to a canonicalized host name before matching). A pattern entry may be negated by prefixing it with an exclamation mark (`!'). If a negated entry is matched, then the Host entry is ignored, regardless of whether any other patterns on the line match. Negated matches are therefore useful to provide exceptions for wildcard matches. See PATTERNS for more information on patterns.

        HostName
            Specifies the real host name to log into. This can be used to specify nicknames or abbreviations for hosts. If the hostname contains the character sequence `%h', then this will be replaced with the host name specified on the command line (this is useful for manipulating unqualified names). The default is the name given on the command line. Numeric IP addresses are also permitted (both on the command line and in HostName specifications).

    For example, when I want to create an SSH config for GitHub, what should Host and HostName be, respectively?
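
    In short: Host is the nickname you type on the command line; HostName is the real host it resolves to. A minimal ~/.ssh/config sketch for the GitHub case (the alias name gh and the key path are illustrative choices, not required values):

      Host gh
          HostName github.com
          User git
          IdentityFile ~/.ssh/id_rsa

    With this entry, `ssh -T gh` or `git clone gh:user/repo.git` connects to github.com using the given user and key.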


  • The way I think about Diagnostic tools

    - by Daniel Moth
    Every software has issues, or as we like to call them "bugs". That is not a discussion point, just a mere fact. It follows that an important skill for developers is to be able to diagnose issues in their code. Of course we need to advance our tools and techniques so we can prevent bugs getting into the code (e.g. unit testing), but beyond designing great software, diagnosing bugs is an equally important skill. To diagnose issues, the most important assets are good techniques, skill, experience, and maybe talent. What also helps is having good diagnostic tools, and what helps further is knowing all the features that they offer and how to use them. The following classification is how I like to think of diagnostics. Note that like with any attempt to bucketize anything, you run into overlapping areas and blurry lines. Nevertheless, I will continue sharing my generalizations ;-)

    It is important to identify at the outset if you are dealing with a performance or a correctness issue. If you have a performance issue, use a profiler. I hear people saying "I am using the debugger to debug a performance issue", and that is fine, but do know that a dedicated profiler is the tool for that job. Just because you don't need them all the time, and typically they cost more, and you are not as familiar with them as you are with the debugger, doesn't mean you shouldn't invest in one and instead try to exclusively use the wrong tool for the job. Visual Studio has a profiler and a concurrency visualizer (for profiling multi-threaded apps). If you have a correctness issue, then you have several options - that's next :-)

    This is how I think of identifying a correctness issue:

    - Do you want a tool to find the issue for you at design time? The compiler is such a tool - it gives you an exact list of errors. Compilers now also offer warnings, which is their way of saying "this may be an error, but I am not smart enough to know for sure". There are also static analysis tools, which go a step further than the compiler in identifying issues in your code, sometimes with the aid of code annotations and other times just by pointing them at your raw source. An example is FxCop, and much more in Visual Studio 11 Code Analysis.

    - Do you want a tool to find the issue for you with code execution? Just like static tools, there are also dynamic analysis tools that, instead of statically analyzing your code, analyze what your code does dynamically at runtime. Whether you have to set up some unit tests to invoke your code at runtime, or have to manually run your app (and interact with it) under the tool, or have to use a script to execute your binary under the tool… that varies. The result is still a list of issues for you to address after the analysis is complete, or a pause of the execution when the first issue is encountered. If a code path was not taken, no analysis for it will exist, obviously. An example is the GPU Race detection tool that I'll be talking about on the C++ AMP team blog. Another example is the MSR concurrency CHESS tool.

    - Do you want you to find the issue at design time using a tool? Perform a code walkthrough on your own or with colleagues. There are code review tools that go beyond just diffing sources, and they help you with that aspect too. For example, there is a new one in Visual Studio 11, and searching with my favorite search engine yielded this article based on the Developer Preview.

    - Do you want you to find the issue with code execution? Use a debugger - let's break this down further next.

    This is how I think of debugging:

    - There is post-mortem debugging. That means your code has executed and you did something in order to examine what happened during its execution. This can vary from manual printf and other tracing statements, to trace events (e.g. ETW), to taking dumps. In all cases, you are left with some artifact that you examine after the fact (after code execution) to discern what took place, hoping it will help you find the bug. Learn how to debug dump files in Visual Studio.

    - There is live debugging. This is where you inspect the state of your program during its execution, and try to find what the problem is. More from me in a separate post on live debugging.

    - There is a hybrid of live plus post-mortem debugging. This is, for example, what tools like IntelliTrace offer.

    If you are a tools vendor interested in the diagnostics space, it helps to understand where in the above classification your tool excels, where its primary strength is, so you can market it as such. Then it helps to see which of the other areas above your tool touches on, and how you can make it even better there. Finally, see what areas your tool doesn't help at all with, and evaluate whether it should, or should continue to stay clear. Even though the classification helps us think about this space, the reality is that the best tools are either extremely excellent in only one of these areas, or more often very good across a number of them. Another approach is to offer a toolset covering all areas, with appropriate integration and hand-off points from one to the other.

    Anyway, with that brain dump out of the way, in follow-up posts I will dive into live debugging, and specifically live debugging in Visual Studio - stay tuned if that interests you. Comments about this post by Daniel Moth welcome at the original blog.


  • Merck Serono Gains Deep Understanding of Product Portfolio Value-Drivers, Risks, and Sales Expectations Through Forecasting Solution

    - by Melissa Centurio Lopes
    Merck Serono S.A. is the biopharmaceutical division of Merck KGaA. It offers leading brands in 150 countries to help patients with cancer, multiple sclerosis, infertility, endocrine and metabolic disorders, as well as cardiovascular diseases.

    Challenges:
    - Establish a better decision-making framework for its complex development portfolio of pharmaceutical products, where single-point estimates or expected averages of portfolio values, portfolio risks, and sales forecasts are insufficient and can be misleading
    - Enable the company to be aware at all times of the range of possible outcomes of technical and market risks and uncertainties, such as the technical uncertainty of whether a product will produce the desired clinical outcomes, or the market-related uncertainty of whether a product will be outperformed by its competitors

    Solutions to Overcome the Challenges:
    - Used Oracle Crystal Ball to devise a Monte-Carlo-based approach to better analyze and define the values and risks of the company’s development portfolio, laying the groundwork for optimized decision-making
    - Enabled a better understanding of the range of potential values and risks to improve portfolio planning
    - Enabled detailed analysis of the likelihood of favorable or unfavorable outcomes, such as the likelihood of whether Merck Serono can meet its sales targets planned for the next ten years with its existing product portfolio
    - Gained the ability to take into account correlative risks, synergies and project interactions, enabling Merck Serono to better forecast what the company may achieve, for example, that there is a 70% probability of a particular sales target being met
    - Established Monte-Carlo-based analysis using Oracle Crystal Ball as a useful element in decision-making at the board level, as the approach provides a better analysis of values and risks associated with the company’s product portfolio

    “Oracle Crystal Ball enables us to make Monte Carlo simulations of the potential value and sales of our development portfolio. It is a very powerful tool for gaining a thorough understanding and improved awareness of value drivers, uncertainties, and risks, along with associated probabilities.” – Riccardo Lampariello, Associate Director, Merck Serono S.A

    Why Oracle: “We chose Oracle Crystal Ball to enable us to perform Monte Carlo analysis, which gives us a deeper understanding and improved awareness of the value drivers, uncertainties and risks of our portfolio of development projects,” said Kimber Hardy, head of valuation and analysis, Merck Serono S.A.
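
    The article does not publish Merck Serono's actual models, but a minimal sketch of the kind of Monte Carlo portfolio simulation described above might look like this in Python (the product names, launch probabilities, sales distributions, and target are all made up for illustration):

      import random

      # hypothetical portfolio: (launch probability, mean peak sales, std dev) per product
      portfolio = {
          "Product A": (0.7, 500.0, 120.0),
          "Product B": (0.5, 300.0, 90.0),
          "Product C": (0.9, 150.0, 30.0),
      }
      TARGET = 600.0    # illustrative sales target
      TRIALS = 100_000

      hits = 0
      for _ in range(TRIALS):
          total = 0.0
          for p_launch, mean, sd in portfolio.values():
              if random.random() < p_launch:                 # technical risk: does the product launch?
                  total += max(0.0, random.gauss(mean, sd))  # market risk: how much does it sell?
          if total >= TARGET:
              hits += 1

      print(f"P(sales >= {TARGET}) ~ {hits / TRIALS:.1%}")

    Running the simulation many times yields the whole distribution of outcomes, which is exactly what distinguishes this approach from a single-point estimate.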


  • XBRL - Moving from Production to Consumption

    - by jmorourke
    Here's an update on what’s new with XBRL and how it can actually benefit your organization versus adding extra time and costs to financial reporting. On February 29th (leap day) of 2012 I attended the XBRL and Financial Analysis Technology Conference at Baruch College in NYC. The event, which attracted over 300 XBRL gurus and fans, was presented by XBRL US, The New York Society of Security Analysts’ Improved Corporate Reporting Committee, and Baruch College’s Robert Zicklin Center for Corporate Integrity. The event featured keynotes from the U.S. Securities and Exchange Commission (SEC) and the CFA Institute, as well as panels covering alternative research tools and data, corporate reporting to stakeholders, and a demonstration of XBRL analysis tools. The program culminated in a presentation of the finalists and the winner of the $20,000 XBRL Challenge.

    Some of the key points made in the sessions included:
    - The focus of XBRL tools is moving from production to consumption.
    - As of February 2012, over 9,000 companies are reporting in XBRL, with over 10 million facts filed to date.
    - XBRL taxonomy extensions have dropped from 27% to 11%, making comparisons easier.
    - The SEC reports that XBRL makes it easier to analyze disclosures and focus on accounting issues.
    - XBRL is helping standards-setters like the FASB speed their analysis of the impacts of proposed accounting rule changes.
    - Companies like Thomson Reuters report that XBRL is helping speed the delivery of data to clients.

    The most interesting part of the program, though, was the session highlighting the 5 finalists in the XBRL Challenge competition and the winning solution. The XBRL Challenge was launched in 2011 as a means of spurring the development of more end-user tools to help with the consumption of XBRL-based financial information. Over an 8-month process handled by 5 judges, there were 84 registrants, 15 completed submissions, 5 finalists and one winner of the challenge. All of the solutions are open-source tools and most of them focus on consuming XBRL-based data. The 5 finalists included:
    - Advanced XBRL Processing from Oxide Solutions – XBRL viewer for taxonomies, filings and company data with peer comparison capabilities.
    - Arelle – API for XBRL processes; supports SEC validations, RSS feeds to access filings, etc.
    - Calcbench – XBRL data analysis tool that can be embedded in other web applications. This tool can combine XBRL filings with real-time market data.
    - XBRL to XL – allows the importing of XBRL data into Microsoft Excel for analysis and comparisons. Users start on the web and populate Excel with XBRL data.
    - XBurble – allows users to search and view XBRL filings, export to Excel, merge for comparison, and includes a workflow interface.

    The winner of the $20,000 XBRL Challenge prize was Calcbench. More information about the XBRL Challenge and the finalists can be found at www.XBRLUS.org/challenge.

    XBRL for Sustainability Reporting – other recent news on the XBRL front was the announcement by the Global Reporting Initiative (GRI) of an XBRL taxonomy for Sustainability Reporting. This taxonomy was co-developed by the GRI and Deloitte and is designed to make the consumption of data found in Sustainability Reports much easier.
    Although there is no government mandate to file Sustainability Reports in XBRL format, organizations that do use the GRI guidelines for Sustainability Reporting are encouraged to tag and submit their data voluntarily to the GRI, which will populate a database with Sustainability Reporting data and make it available to the public. For more information about this initiative, you can go to the GRI web site: www.globalreporting.org.

    So how does all of this benefit corporate filers and investors? Since its introduction, the consensus in the market is that XBRL has mainly benefited the regulators and investment analysts who need to consume and analyze large volumes of financial data. But the emergence of more end-user tools for consuming and analyzing XBRL-based data, and the ability to perform quick comparisons of one company versus its peers and competitors in an industry group, will soon accelerate the benefits to corporate finance staff as well as individual investors. This could apply to financial results tagged in XBRL, as well as non-financial information such as Sustainability Reporting, which over the long term will likely be integrated with financial reporting. And as multiple regulators and agencies in a country adopt the XBRL standard for corporate filings, more benefits will accrue as companies will be able to leverage one set of XBRL-based financial data for multiple regulatory filings.

    For more information about the latest developments in XBRL, check out the XBRL US or XBRL International web sites: www.xbrl.org, www.xbrlus.org. For more information about what Oracle is doing to support XBRL, here are some links: http://www.oracle.com/us/solutions/ent-performance-bi/disclosure-management-065892.html and http://www.oracle.com/technetwork/database/features/xmldb/index-087631.html. Feel free to contact me if you have any questions or need more information: [email protected]


  • Application Event Log keeps getting corrupted

    - by yakatz
    I recently asked about repairing a corrupt event log, because it seemed to be a one-off event. The event log has since exhibited the same behavior 3 times. We have been trying to find patterns, but so far we have found nothing. The server runs several ASP.NET applications and three scheduled tasks written in .NET. The last modified date of the event log once happened to be the same time as one of the scheduled tasks, but the others have not been. Any suggestions of where to look next, or a way we can get any information out of a corrupt evtx file? The server is running critical e-commerce applications, so we want to keep the number of restarts required to a minimum.

    Edit: I ran DUMPEL and got very strange results.

    1/9/2012 4:14:05 PM  1 100 1000  Application Error  N/A  SERVERNAME
        Faulting application name: w3wp.exe, version: 7.5.7601.17514, time stamp: 0x4ce7a5f8
        Faulting module name: ntdll.dll, version: 6.1.7601.17514, time stamp: 0x4ce7ba58
        Exception code: 0xc0000374
        Fault offset: 0x000ce653
        Faulting process id: 0x1070
        Faulting application start time: 0x01cccf1386d30991
        Faulting application path: C:\Windows\SysWOW64\inetsrv\w3wp.exe
        Faulting module path: C:\Windows\SysWOW64\ntdll.dll
        Report Id: dbf4f691-3b06-11e1-9025-005056a602e6

    1/9/2012 4:14:07 PM  4 0 1001  Windows Error Reporting  N/A  SERVERNAME
        Fault bucket , type 0
        Event Name: APPCRASH
        Response: Not available
        Cab Id: 0
        Problem signature: P1: w3wp.exe  P2: 7.5.7601.17514  P3: 4ce7a5f8  P4: StackHash_79d9  P5: 6.1.7601.17514  P6: 4ce7ba58  P7: c0000374  P8: 000ce653  P9:  P10:
        Attached files: C:\Windows\Temp\WER975.tmp.appcompat.txt, C:\Windows\Temp\WERA03.tmp.WERInternalMetadata.xml, C:\Windows\Temp\WERA13.tmp.hdmp, C:\Windows\Temp\WERD21.tmp.mdmp
        These files may be available here: C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_cd7d09dfc84119d82a2ac6a789038bd5661acfb_cab_128f0e67
        Analysis symbol:
        Rechecking for solution: 0
        Report Id: dbf4f691-3b06-11e1-9025-005056a602e6
        Report Status: 4
    (The same Windows Error Reporting event is logged again at 4:14:07 PM with Report Status: 0.)

    1/9/2012 4:14:12 PM  1 100 1000  Application Error  N/A  SERVERNAME
        Same w3wp.exe / ntdll.dll crash as above (exception code 0xc0000374, fault offset 0x000ce653)
        Faulting process id: 0x16ac
        Faulting application start time: 0x01cccf139f475c0c
        Report Id: e03bae70-3b06-11e1-9025-005056a602e6

    1/9/2012 4:14:16 PM  4 0 1001  Windows Error Reporting  N/A  SERVERNAME
        Same APPCRASH problem signature as above, except P4: StackHash_9c6c
        Attached files: C:\Windows\Temp\WER2579.tmp.appcompat.txt, C:\Windows\Temp\WER25F7.tmp.WERInternalMetadata.xml, C:\Windows\Temp\WER25F8.tmp.hdmp, C:\Windows\Temp\WER28F6.tmp.mdmp
        These files may be available here: C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_c49a67649524ad11b64bbf809211bc5ba742a3d6_cab_0b63321b
        Report Id: e03bae70-3b06-11e1-9025-005056a602e6
        Report Status: 4
    (Logged again at 4:14:16 PM with Report Status: 0.)

    1/9/2012 4:14:21 PM  1 100 1000  Application Error  N/A  SERVERNAME
        Same w3wp.exe / ntdll.dll crash
        Faulting process id: 0x17f8
        Faulting application start time: 0x01cccf13a4ba5126
        Report Id: e57a0a85-3b06-11e1-9025-005056a602e6

    1/9/2012 4:14:21 PM  4 0 1001  Windows Error Reporting  N/A  SERVERNAME
        Same APPCRASH problem signature (StackHash_9c6c); no attached files
        These files may be available here: C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_c49a67649524ad11b64bbf809211bc5ba742a3d6_1cfb4872
        Report Id: e57a0a85-3b06-11e1-9025-005056a602e6
        Report Status: 4
    (Logged again at 4:14:21 PM with Report Status: 0.)

    None of the files referenced actually exist (not even in WER ReportArchive). These should not be the only events mentioned. The log file has been cleared twice since January 9, so those events should not even be listed at all.
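
    While the root cause is unknown, a small script can at least surface patterns across a DUMPEL-style text dump like the one above; a Python sketch (the input file name and the two regex fields are assumptions about the dump layout):

      import re
      from collections import Counter

      # tally crash signatures (faulting module, exception code) from a text dump
      text = open("dumpel_application.txt", encoding="utf-8", errors="replace").read()

      pattern = re.compile(
          r"Faulting module name: (?P<module>\S+),.*?"
          r"Exception code: (?P<code>0x[0-9a-fA-F]+)",
          re.DOTALL,
      )

      signatures = Counter(
          (m.group("module"), m.group("code")) for m in pattern.finditer(text)
      )
      for (module, code), count in signatures.most_common():
          print(f"{count:4d}  {module}  {code}")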


  • What tools can be used to monitor a web application? Beyond "doesn't 404"

    - by Freiheit
    I have an internal web application that has recently gone through a major version upgrade. I would like to monitor this application over the weekend and look for 'soft' errors. I will still need to spot check things by hand, but there are some common failure patterns that I think I can automate. Examples include data with bad formatting, blank rows in tables (indicates missing non-critical data), patterns for identifiers ("TEST" means one of my devs left a testing feed on), etc. I think there are applications out there that can be scripted to do things like:

    1. Log in
    2. Go to $URL
    3. Select 3rd link in $LIST or $PATTERN
    4. Check HTML from that link for $PATTERNS
    5. Email report

    Are these goals sane? What applications/tools can help with this?
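
    For what it's worth, the five steps above are easy to script directly; a minimal Python sketch (every URL, form field, pattern, and address below is a placeholder for the actual app):

      import re
      import smtplib
      from email.message import EmailMessage

      import requests  # third-party; pip install requests

      BASE = "https://intranet.example.com"           # placeholder URL
      BAD_PATTERNS = [r"\bTEST\b", r"<td>\s*</td>"]   # placeholder checks: test feeds, blank cells

      session = requests.Session()
      # 1. log in (form field names are assumptions about the app)
      session.post(f"{BASE}/login", data={"user": "monitor", "password": "secret"})

      # 2-3. fetch a page and follow the 3rd link matching a pattern
      page = session.get(f"{BASE}/reports").text
      links = re.findall(r'href="([^"]*report[^"]*)"', page)  # assumes at least three matches
      target = session.get(f"{BASE}{links[2]}").text

      # 4. check the HTML for failure patterns
      problems = [p for p in BAD_PATTERNS if re.search(p, target)]

      # 5. email a report if anything matched
      if problems:
          msg = EmailMessage()
          msg["Subject"] = "Soft-error check failed"
          msg["From"], msg["To"] = "[email protected]", "[email protected]"
          msg.set_content("Patterns matched: " + ", ".join(problems))
          smtplib.SMTP("localhost").send_message(msg)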


  • MS Project publishing to TFS web portal display

    - by denis bastarache
    So, when we initially created our MPP schedule, I made use of indents / subordinates to break down the project by the various stages of the lifecycle, which is fine... no issues there... But now that I'm trying to publish this over to the TFS display, it'll only pick up the actual "action items / sub-tasks", seeing as I have resource allocation specified. So for example I have an "Analysis" phase with a few items underneath, and a "System Requirements" phase with the same items; when I publish these to TFS, it won't display the "Parent" distinction between items, so both "Tasks" instances are published in TFS under the exact same name. If I can't do this automatically, I'll likely have to edit each task to "Analysis - Item 1", "Analysis - Item 2", "SRD - Item 1", "SRD - Item 2"... Is there a way to do this automatically, or will I have to go the manual route?
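
    If editing the schedule inside Microsoft Project is acceptable, a small VBA macro along these lines (an untested sketch against the standard Task object model) could do the renaming automatically before publishing:

      Sub PrefixSubtasksWithParentName()
          ' Sketch: prefix every non-summary subtask with its summary task's name
          Dim t As Task
          For Each t In ActiveProject.Tasks
              If Not t Is Nothing Then            ' blank rows enumerate as Nothing
                  If Not t.Summary And t.OutlineLevel > 1 Then
                      t.Name = t.OutlineParent.Name & " - " & t.Name
                  End If
              End If
          Next t
      End Sub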


  • Common filesystem for servers behind a rackspace load balancer

    - by thanos panousis
    Our PHP application consists of a single web server that will receive files from clients and perform a CPU-intensive analysis on them. Right now, analysis of a single user upload can take 3 seconds to conclude and takes 100% CPU, which caps our system capacity at about one-third of a request per second. My team's requirement is to increase capacity without a lot of code reengineering. A possible solution would be to set up a load balancer in front of multiple servers running the same app, connecting to a common DB. The problem is that the analysis outputs files on disk. A load balancer would increase capacity, but then files won't be available between servers, so subsequent client requests may fail. We are hosted on Rackspace; is there a way to configure some sort of "common" storage for all servers, without having to rewrite our file persistence code? Current code relies on simple fopens etc. What are our options?
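
    One conventional approach, assuming Rackspace offers no managed shared filesystem for the setup, is to export the output directory from one node over NFS and mount it at the same path on every web server, so the existing fopen() calls keep working unchanged. A sketch (host name, network, and paths are placeholders):

      # on the node that holds the files, /etc/exports
      /var/app/output  10.0.0.0/24(rw,sync,no_subtree_check)

      # on every web node, /etc/fstab, mounted at the same path the PHP code already uses
      fileserver:/var/app/output  /var/app/output  nfs  defaults,_netdev  0  0

    The trade-off is that the NFS server becomes a single point of failure and a potential bottleneck, so it suits a quick capacity fix better than a long-term architecture.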


  • SQL SERVER – Microsoft SQL Server Migration Assistant V6.0 Released

    - by Pinal Dave
    Every company makes a different decision about its database when it starts, but as it moves forward it matures and makes decisions based on its experience and the best interests of the organization. Similarly, quite a few organizations make different decisions on databases, like Sybase, MySQL, Oracle or Access, and as time passes they learn that they now want to move to a different platform. Microsoft makes it easy for SQL Server professionals by releasing various Migration Assistant tools. Last week, Microsoft released Microsoft SQL Server Migration Assistant v6.0. Here are the different tools released last week to migrate various products to SQL Server.

    Microsoft SQL Server Migration Assistant v6.0 for Sybase: SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Sybase Adaptive Server Enterprise (ASE) to SQL Server and Azure SQL DB. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, and data migration, as well as migration testing.

    Microsoft SQL Server Migration Assistant v6.0 for MySQL: SSMA simplifies the database migration process from MySQL to SQL Server and Azure SQL DB, automating all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, and data migration, as well as migration testing.

    Microsoft SQL Server Migration Assistant v6.0 for Oracle: SSMA simplifies the database migration process from Oracle to SQL Server and Azure SQL DB, automating all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, and data migration, as well as migration testing.

    Microsoft SQL Server Migration Assistant v6.0 for Access: SSMA for Access automates conversion of Microsoft Access database objects to SQL Server database objects, loads the objects into SQL Server and Azure SQL DB, and then migrates data from Microsoft Access to SQL Server and Azure SQL DB.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Migration


  • Take Control of Workflow with Workflow Analyzer!

    - by user793553
    Immediate Analysis and Output of your EBS Workflow Environment: the EBS Workflow Analyzer is a script that reviews the current Workflow footprint, analyzes the configurations and environment, and provides feedback and recommendations on best practices and areas of concern. Go to Doc ID 1369938.1 for more details and the script download, with a short overview video.

    Proactive benefits:
    - Immediate analysis and output of the Workflow environment
    - Identifies aged records
    - Identifies Workflow errors & volumes
    - Identifies looping Workflow items and stuck activities
    - Identifies Workflow system setup and configurations
    - Identifies and recommends Workflow best practices
    - Easy-to-add tool for regular Workflow maintenance
    - Execute the analysis anytime to compare trending from past outputs

    The Workflow Analyzer presents key details in an easy-to-review graphical manner. See the examples below.

    Workflow Runtime Data Table Gauge – shows critical (red), bad (yellow) and good (green) status depending on the number of workflow items (WF_ITEMS).

    Workflow Error Notifications Pie Chart – shows the workflow error notification types.

    Workflow Runtime Table Footprint Bar Chart – shows the workflow runtime table footprint.

    The analyzer also gives detailed listings of setups and configurations. As an example, the workflow services are listed along with their status for review. The analyzer draws attention to key details with yellow and red boxes highlighting areas of review. You can extend any query by reviewing the SQL script and then running it on your own, or making modifications for your own needs (a sketch follows at the end of this item).

    Find more details in these notes:
    - Doc ID 1369938.1 Workflow Analyzer script for E-Business Suite Workflow Monitoring and Maintenance
    - Doc ID 1425053.1 How to run EBS Workflow Analyzer Tool as a Concurrent Request

    Or visit the My Oracle Support EBS - Core Workflow Community.
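
    As a flavor of the kind of query you might extend, a starting point against the standard Workflow runtime schema could be (illustrative only; verify table and column names against your EBS release):

      -- runtime items still open, per workflow type, oldest first
      SELECT item_type,
             COUNT(*)        AS open_items,
             MIN(begin_date) AS oldest_begin_date
      FROM   wf_items
      WHERE  end_date IS NULL
      GROUP  BY item_type
      ORDER  BY oldest_begin_date;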


  • Linux to Solaris @ Morgan Stanley

    - by mgerdts
    I came across this blog entry and the accompanying presentation by Robert Milkoski about his experience switching from Linux to Oracle Solaris 11 for a distributed OpenAFS file serving environment at Morgan Stanley.

    If you are an IT manager, the presentation will show you:
    - Running Solaris with a support contract can cost less than running Linux (even without a support contract) because of technical advantages of Solaris.
    - IT departments can benefit from hiring computer scientists into Systems Programmer or similar roles. Their computer science background should be nurtured so that they can continue to deliver value (savings and opportunity) to the business as technology advances.

    If you are a sysadmin, developer, or somewhere in between, the presentation will show you:
    - A presentation that explains your technical analysis can be very influential.
    - Learning and using the non-default options of an OS can make all the difference as to whether one OS is better suited than another. For example, see the graphs on slides 3 - 5. The ZFS default is to not use compression (see the sketch after this list).
    - When trying to convince those that hold the purse strings that your technical direction should be taken, the financial impact can be the part that closes the deal. See slides 6, 9, and 10. Sometimes reducing rack space requirements can be the biggest impact, because it may stave off or completely eliminate the need for facilities growth.
    - DTrace can be used to shine light on performance problems that may be suspected but not diagnosed. It is quite likely that these problems have existed in OpenAFS for a decade or more. DTrace made diagnosis possible.
    - DTrace can be used to create performance analysis tools without modifying the source of the software that is under analysis. See slides 29 - 32.
    - Microstate accounting, visible in the prstat output on slide 37, can be used to quickly draw focus to problem areas that affect CPU saturation. Note that prstat without -m gives a time-decayed moving average that is not nearly as useful.
    - Instruction-level probes (slides 33 - 34) are a super-easy way to identify which part of a function is hot.
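
    Two of the specifics above are easy to try on any Solaris 11 box; an illustrative sketch (the pool/dataset name is a placeholder):

      # ZFS compression is off by default; the gains shown on slides 3 - 5 come from enabling it
      zfs set compression=on tank/afs

      # per-thread microstate accounting, refreshed every second (the prstat -m view from slide 37)
      prstat -mL 1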


  • OLL Live webcast - Using SQL for Pattern Matching in Oracle Database

    - by KLaker
    If you are interested in learning about our exciting new 12c SQL pattern matching feature, then mark your diaries. On Wednesday, October 30th at 8:00 am (US/Pacific time zone), Supriya Ananth, one of our top curriculum developers at Oracle, will be hosting an OLL webcast on our new SQL pattern matching feature. The ability to recognize patterns in a sequence of rows is a capability that was widely desired, but not possible with SQL until now. Row pattern matching in native SQL improves application and development productivity and query efficiency for row-sequence analysis. With Oracle Database 12c you can use the new MATCH_RECOGNIZE clause to perform pattern matching in SQL to do the following (a syntax sketch follows at the end of this item):

    - Logically partition and order the data using the PARTITION BY and ORDER BY clauses.
    - Use regular-expression syntax to define patterns of rows to seek using the PATTERN clause. These patterns are a powerful and expressive feature, applied to the pattern variables you define.
    - Specify the logical conditions required to map a row to a row pattern variable in the DEFINE clause.
    - Define measures, which are expressions usable in the MEASURES clause of the SQL query.

    For more information and to register for this exciting webcast please visit the OLL Live website, see here: https://apex.oracle.com/pls/apex/f?p=44785:145:116820049307135::::P145_EVENT_ID,P145_PREV_PAGE:461,143. Please note - if the above link does not work then go to OLL (https://apex.oracle.com/pls/apex/f?p=44785:1:) and click the OLL Live icon (upper right, beneath the Login link, or the Logout link if you are already logged in). The pattern matching webcast is listed on the calendar of events on 30 October.
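
    For a flavor of the syntax, here is a sketch modeled on Oracle's canonical stock-ticker example (the ticker table and its columns are illustrative): it finds V-shaped patterns where a price falls and then rises within each symbol.

      SELECT *
      FROM ticker MATCH_RECOGNIZE (
          PARTITION BY symbol
          ORDER BY tstamp
          MEASURES STRT.tstamp       AS start_tstamp,
                   LAST(DOWN.tstamp) AS bottom_tstamp,
                   LAST(UP.tstamp)   AS end_tstamp
          ONE ROW PER MATCH
          AFTER MATCH SKIP TO LAST UP
          PATTERN (STRT DOWN+ UP+)
          DEFINE
              DOWN AS DOWN.price < PREV(DOWN.price),
              UP   AS UP.price   > PREV(UP.price)
      ) MR
      ORDER BY MR.symbol, MR.start_tstamp;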


  • Understanding Service Compensation (part of the Industrial SOA series)

    - by JuergenKress
    Some of the most important SOA design patterns that we have successfully applied in projects will be described in this article. These include the Compensation pattern and the UI Mediator pattern, the Common Data Format pattern and the Data Access pattern. All of these patterns are included in Thomas Erl's book, "SOA Design Patterns", and are presented here in detail, together with our practical experiences.

    We begin our "best of" SOA pattern collection with the Compensation pattern. Compensation is required in error situations in an SOA, as multiple atomic service operations cannot generally be linked with classic transactions; this would violate the principle of loose coupling. An error situation of this sort will occur particularly if service operations are combined into processes or new services during orchestration, or by applying the Composite pattern, and the transaction bracket has to be expanded as a result. We need mechanisms to undo the effects of individual services (the status changes in the overall system) and to ensure that a consistent system state is maintained at all times, so as to preserve system integrity (a minimal sketch of this idea follows below).

    For the Compensation pattern, we would like to address the following questions:
    - Why is compensation important in relation to SOA?
    - How is the topic of compensation linked with the topic of transactions?
    - What are the challenges with regard to compensation...

    Read the full article in the Service Technology Magazine or at OTN. Share your comments and feedback on the Industrial SOA series by using the hashtag #industrialsoa. Missed an article of the Industrial SOA series? Visit the overview at OTN.

    SOA & BPM Partner Community: for regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: Industrial SOA, SOA, SOA Service Compensation, Community, Oracle SOA, Oracle BPM, OPN, Jürgen Kress
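
    As promised above, a minimal sketch of the compensation idea in Python (the service calls are stand-ins): execute operations in order, record a compensating action for each success, and on failure run the recorded compensations in reverse to restore a consistent system state.

      class CompensationError(Exception):
          pass

      def run_with_compensation(steps):
          """steps: list of (action, compensation) callables."""
          done = []
          try:
              for action, compensation in steps:
                  action()                   # e.g. invoke a service operation
                  done.append(compensation)  # remember how to undo it
          except Exception as exc:
              # undo completed steps in reverse order to restore consistency
              for compensation in reversed(done):
                  compensation()
              raise CompensationError("workflow rolled back") from exc

      # illustrative usage with stand-in services
      if __name__ == "__main__":
          run_with_compensation([
              (lambda: print("reserve stock"), lambda: print("release stock")),
              (lambda: print("charge card"),   lambda: print("refund card")),
          ])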

