Search Results

Search found 2559 results on 103 pages for 'analysis'.


  • How do you combine "Revision Control" with "WorkFlow" for R?

    - by Tal Galili
    Hello all, I remember coming across R users writing that they use "revision control" (e.g., "source control"), and I am curious to know: how do you combine revision control with your statistical analysis workflow?

    Two (very) interesting discussions talk about how to deal with the workflow, but neither of them refers to the revision control element:
    http://stackoverflow.com/questions/1266279/how-to-organize-large-r-programs
    http://stackoverflow.com/questions/1429907/workflow-for-statistical-analysis-and-report-writing

    A long update to the question: following some of the people's answers, and Dirk's question in the comments, I would like to direct my question a bit more. After reading the Wikipedia article about "revision control" (which I was previously not familiar with), it became clear to me that when using revision control, what one does is build a development structure for one's code. This structure either leads to a "final product" or to several branches.

    When building something like, let's say, a website, there is usually one end product you work towards (the website), with some prototypes along the way. But when doing a statistical analysis, the work (in my view) is different. Sometimes you know where you want to get to, but more often you explore: explore cleaning the dataset, explore different methods for statistical analysis, and ask various questions of your data (and I am writing this knowing how Frank Harrell and other experienced statisticians feel about data dredging).

    That is why the workflow question in statistical programming is (in my view) a serious and deep question, raising many issues. The simpler ones are technical: Which revision control software do you use (and why)? Which IDE do you use (and why)? The more interesting questions are about the work process: How do you structure your files? What do you keep as a separate file and what as a revision? Or, to ask it a different way: what should be a "branch" and what should be a "sub-project" in your code? For example: when starting to explore your data, should a plot be created and then erased because it didn't lead anywhere (but kept as a revision), or should there be a backup file of that path? How you resolve this tension was my initial curiosity.

    The second question is "what might I be missing?" What rules of thumb should one follow to avoid common pitfalls when doing statistical programming with version control? My intuition is that statistical programming is inherently different from software development (I am writing this without being a real expert in statistical programming, and even less so in software development). That's why I am unsure which of the lessons I have read here about version control would be applicable. Thanks a lot, Tal
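
    For readers who want something concrete to react to, here is a minimal, purely hypothetical sketch (in R, with invented file names, not taken from the question) of one common way to lay out an analysis so that commits and branches map onto individual steps rather than onto one ever-growing script:

        ## main.R -- hypothetical layout, only a sketch of the idea
        ## Each step lives in its own small script, so the version control history
        ## (and any branches) lines up with concrete analysis steps.
        source("01-import.R")    # read the raw data; the raw files themselves are never edited
        source("02-clean.R")     # every cleaning decision in one reviewable place
        source("03-explore.R")   # exploratory plots; abandoned ideas stay in the history
        source("04-model.R")     # the analyses that end up in the report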

    Read the article

  • Dynamically loaded jQuery with GreaseMonkey inconsistent on pages (refreshing seems to fix it)... do

    - by uprightnetizen
    Hi, I want a custom page analysis footer on every site I visit... so I've used a method to attach jQuery to unsafeWindow. I then create a floating footer on the page. I want to be able to call commands in a menu, do some processing, then put the results in the footer. Unfortunately it sometimes works, sometimes it doesn't. At least two alerts should happen in the printOutput function. Sometimes it only fires one, then it (crashes?) without error? On other pages, both alerts fire and it finds the element, but it doesn't add the extra text (e.g. www.linode.com). Refreshing the page, then running the printOutput command again seems to always work. Does anyone know what's going on??? The userscript can be installed at: http://www.captionwizard.com/test/page_analysis.user.js

        // ==UserScript==
        // @name          page_analysis
        // @namespace     markspace
        // @description   Page Analysis
        // @include       http://*/*
        // ==/UserScript==

        (function() {
            // Add jQuery
            var GM_JQ = document.createElement('script');
            GM_JQ.src = 'http://code.jquery.com/jquery-1.4.2.min.js';
            GM_JQ.type = 'text/javascript';
            document.getElementsByTagName('head')[0].appendChild(GM_JQ);

            var jqueryActive = false;

            // Check if jQuery's loaded
            function GM_wait() {
                if (typeof unsafeWindow.jQuery == 'undefined') {
                    window.setTimeout(GM_wait, 100);
                } else {
                    $ = unsafeWindow.jQuery;
                    letsJQuery();
                }
            }
            GM_wait();

            function letsJQuery() {
                jqueryActive = true;
                setupOutputFooter();
            }

            /******************** Analysis FOOTER Functions ********************/
            function printOutput(someText) {
                alert('printing output');
                if ($('div.analysis_footer').length) {
                    alert('is here - appending');
                    $('div.analysis_footer').append('<br>' + someText);
                } else {
                    alert('not here - trying again');
                    setupOutputFooter();
                    $('div.analysis_footer').append('<br>' + someText);
                }
            }

            GM_registerMenuCommand("Test Output", testOutput, "k", "control", "k");
            function testOutput() {
                printOutput('testing this');
            }

            function setupOutputFooter() {
                $('<div class="analysis_footer">Page Analysis Footer:</div>').appendTo('body');
                $('div.analysis_footer').css('position','fixed').css('bottom', '0px').css('background-color','#F8F8F8');
                $('div.analysis_footer').css('width','100%').css('color','#3B3B3B').css('font-size', '0.8em');
                $('div.analysis_footer').css('font-family', '"Myriad",Verdana,Arial,Helvetica,sans-serif').css('padding', '5px');
                $('div.analysis_footer').css('border-top', '1px solid black').css('text-align', 'left');
            }
        }());

    Read the article

  • What does the symbol ::: mean in R

    - by Milktrader
    I came across this in the following context from B. Pfaff's "Analysis of Integrated and Cointegrated Time Series in R":

        ## Impulse response analysis of SVAR A-type model
        args(vars:::irf.svarest)
        irf.svara <- irf(svar.A, impulse = "y1",
                         response = "y2", boot = FALSE)
        args(vars:::plot.varirf)
        plot(irf.svara)
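
    For context (a general note, not part of the quoted book): the operator '::' accesses objects a package exports, while ':::' also reaches internal, unexported objects inside the package's namespace, which is why the book uses it to inspect the S3 method irf.svarest. A minimal sketch, assuming the vars package is installed:

        ## Minimal sketch (assumes the 'vars' package is installed)
        library(vars)
        args(vars::irf)            # '::'  -- exported, user-facing generic
        args(vars:::irf.svarest)   # ':::' -- internal, unexported S3 method for 'svarest' objects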

    Read the article

  • SQL Server (2012 Enterprise) Browser service failing

    - by Watki02
    I have an instance of SQL Server 2012 Enterprise (thanks to MSDN) for local development on my PC. I try to start the SQL Server Browser service from SQL Server Configuration Manager and it takes a long time to fail, then fails with:

        The request failed or the service did not respond in a timely fashion. Consult the event log or other applicable error logs for details.

    I checked the event logs and found these errors in this order (all within the same 1-second time frame):

    - The SQL Server Browser service port is unavailable for listening, or invalid.
    - The SQL Server Browser service was unable to establish SQL instance and connectivity discovery.
    - The SQL Server Browser is enabling SQL instance and connectivity discovery support.
    - The SQL Server Browser service was unable to establish Analysis Services discovery.
    - The SQL Server Browser service has started.
    - The SQL Server Browser service has shutdown.

    I checked firewall rules and both port 1433 (TCP) and 1434 (UDP) are wide open; just as well, the programs and service binary had been "allowed through Windows Firewall". I started the "Analysis Services" service by hand and it works fine. Browser still won't start.

    Some history:

    - Installed SQL 2008 R2 Express Advanced
    - Installed SQL 2012 Express Advanced
    - Uninstalled SQL 2008 R2 Express Advanced
    - Installed 2012 SSDT and lots of features with the Express install
    - Installed a unique instance of SQL 2012 Enterprise with all features
    - Uninstalled SSDT and reinstalled SSDT with Enterprise (solved a different problem)
    - Uninstalled SQL 2012 Express
    - Uninstalled SQL 2012 Enterprise
    - Removed anything with "SQL" in the name from Control Panel "Programs and Features"
    - Installed SQL 2012 Enterprise without Analysis Services (this is where I noticed the SQL Browser service was failing to start even on the install)
    - Added the feature of Analysis Services (and everything else) via the installer (Browser continued to fail to start on the install)

    Other interesting facts: opening a command window as administrator and trying to run sqlbrowser.exe manually yielded:

        Microsoft Windows [Version 6.1.7601]
        Copyright (c) 2009 Microsoft Corporation. All rights reserved.

        C:\Windows\system32>cd C:\Program Files (x86)\Microsoft SQL Server\90\Shared

        C:\Program Files (x86)\Microsoft SQL Server\90\Shared>sqlbrowser.exe -c
        SQLBrowser: starting up in console mode
        SQLBrowser: starting up SSRP redirection service
        SQLBrowser: failed starting SSRP redirection services -- shutting down.
        SQLBrowser: starting up OLAP redirection service
        SQLBrowser: Stopping the OLAP redirector

        C:\Program Files (x86)\Microsoft SQL Server\90\Shared>

    As I try to repair the install it errors out saying:

        The following error has occurred: Service 'SQLBrowser' start request failed. Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup. For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401

    Clicking retry fails every time. When clicking cancel I get:

        The following error has occurred: SQL Server Browser configuration for feature 'SQL_Browser_Redist_SqlBrowser_Cpu32' was cancelled by user after a previous installation failure. The last attempted step: Starting the SQL Server Browser service 'SQLBrowser', and waiting for up to '900' seconds for the process to complete. For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401

    When I go to uninstall the SQL Browser from "Programs and Features", it complains:

        Error opening installation log file. Verify that the specified log file location exists and is writable.

    Is there any way I can fix this short of re-imaging my computer and reinstalling from scratch? A possible approach would be to somehow really uninstall everything and delete all files related to SQL... is that a good idea, and how do I do that?

    Read the article

  • Windows 7 inbuilt and 3rd party (de)fragmentation related queries

    - by Karan
    I have a pretty good idea of how files end up getting fragmented. That said, I just copied ~3,200 files of varying sizes (from a few KB to ~20GB) from an external USB HDD to an internal, freshly formatted (under Windows 7 x64), NTFS, 2TB, 5400RPM, WD, SATA, non-system (i.e. secondary) drive, filling it up 57%. Since it should have been very much possible for each file to have been stored in one contiguous block, I expected the drive to be fragmented not more than 1-2% at most after this rather lengthy exercise (unfortunately this older machine doesn't support USB 3.0). Windows 7's inbuilt defrag utility told me after a quick analysis that the drive was fragmented only 1% or so, which dovetailed neatly with my expectations. However, just out of curiosity I downloaded and ran the latest portable x64 version of Piriform's Defraggler, and was shocked to see the drive being reported as being ~85% fragmented! The portable version of Auslogics Disk Defrag also agreed with Defraggler, and both clearly expected to grind away for ~10 hours to completely defragment the drive. 1) How in blazes could the inbuilt and 3rd party defrag utils disagree so badly? I mean, 10-20% variance is probably understandable, but 1% and 85% are miles apart! This Engineering Windows 7 blog post states: In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough – the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume. ... [Please read the entire post so the quote is not taken out of context.] Could it simply be that the 3rd party defrag utils ignore this post-XP change and continue to use analysis algos similar to those XP used? 2) Assuming that the 3rd party utils aren't lying about the real extent of fragmentation (which Windows is downplaying post-XP), how could the files have even got fragmented so badly given they were just copied over afresh to an empty drive? 3) If vastly differing analysis algos explain the yawning gap, which do I believe? I'm no defrag fanatic for sure, but 85% is enough to make me seriously consider spending 10 hours defragging this drive. On the other hand, 1% reported by Windows' own defragger clearly implies that there is no cause for concern and defragging would actually have negative consequences (as per the post). Is Windows' assumption valid and should I just let it be, or will there be any noticeable performance gains after running one of the 3rd party utils for 10 hours straight? 4) I see that out of the box Windows 7 defrag is scheduled to run weekly. Does anyone know whether it defrags every single time, or only if its analysis reveals a fragmentation percentage over a set threshold? If the latter, what is this threshold and can it be changed, maybe via a Registry edit? Thanks for reading through (my first query on this wonderful site!) and for any helpful replies. Also, if you're answering question #3, please keep in mind that any speed increases post defragging with 3rd party utils vis-à-vis Windows' inbuilt program should not include pre-Vista (preferably pre-Win7) examples. Further, examples of programs that made your system boot faster won't help in this case, since this is a non-system drive (although one that'll still be used daily).

    Read the article

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
      Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments.  In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE.  In this segment, I will lay the groundwork for the modeling concepts.  First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing. Oracle BI EE Query Cycle The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources.  There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source, or, most of the time, are spread across various types of sources. Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves.  Most of them create queries without knowing it, by picking a dashboard page and some filters.  Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations. The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business focused measures and dimensional attributes that business people can use in their analyses and dashboards.   After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts. The CEIM is a model that controls the processing of the BI Server.  It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis.  It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules.  The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.     Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL.  The API simply uses ODBC or JDBC to pass the query to the BI Server.  Presentation services writes the LSQL query in terms of the simplified objects presented to the users.  The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources.  For example, the LSQL on the left below was rewritten into the physical SQL for an Oracle 11g database on the right. 
    Logical SQL:

        SELECT "D0 Time"."T02 Per Name Month" saw_0,
               "D4 Product"."P01  Product" saw_1,
               "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2
        FROM "Sample Sales"
        ORDER BY saw_0, saw_1

    Physical SQL:

        WITH SAWITH0 AS (
            select T986.Per_Name_Month as c1, T879.Prod_Dsc as c2,
                   sum(T835.Units) as c3, T879.Prod_Key as c4
            from Product T879 /* A05 Product */ ,
                 Time_Mth T986 /* A08 Time Mth */ ,
                 FactsRev T835 /* A11 Revenue (Billed Time Join) */
            where ( T835.Prod_Key = T879.Prod_Key
                and T835.Bill_Mth = T986.Row_Wid)
            group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
        )
        select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
        from SAWITH0
        order by c1, c2

    Probably everybody reading this blog can write SQL or MDX.  However, the trick in designing the CEIM is that you are modeling a query-generation factory.  Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests.  This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

    The Structure of the Common Enterprise Information Model (CEIM)

    The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result.  The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below.  Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request.  When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries.  That's the left to right flow in the diagram below.  When the results come back from the data source queries, the right to left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

    Business Model

    Think of the business model as the heart of the CEIM you are designing.  This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole.  It also provides the baseline business-friendly names and user-readable dictionary.  For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database.  The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models.  Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well.  This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.
It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next.  The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model. Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time.  This is important for federation, where a given logical object can map to several different physical objects in different databases.  It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products. Your requirements process with your user community will mostly affect the business model.  This is where you will define most of the things they specifically ask for, such as metric definitions.  For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer. Physical Model The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries.  Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time. Each physical data source will have its own physical model, or "database" object in the CEIM.  The shape of each physical model matches the shape of its physical source.  In other words, if the source is normalized relational, the physical model will mimic that normalized shape.  If it is a hypercube, the physical model will have a hypercube shape.  If it is a flat file, it will have a denormalized tabular shape. To aid in query optimization, the physical layer also tracks the specifics of the database brand and release.  This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax, and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer.  For most data sources, native APIs are used to further optimize performance and functionality. The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics.  This encapsulation is another enabler of packaged BI applications and federation.  It is also key to hiding the complex shapes and relationships in the physical sources from the end users.  Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database.  The only difference from the user's point of view is that the 2nd query added a more detailed dimension level column - everything else was the same. Mappings Within the Business Model and Mapping Layer, the mappings provide the binding from each logical column and join in the dimensional business model, to each of the objects that can provide its data in the physical layer.  
When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used.  These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources.  These mappings are usually the most sophisticated part of the CEIM. Presentation You might think of the presentation layer as a set of very simple relational-like views into the business model.  Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns.  For business users, presentation services interprets these as subject areas, folders and columns, respectively.  (Note that in 10g, subject areas were called presentation catalogs in the CEIM.  In this blog, I will stick to 11g terminology.)  Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio). The purpose of the presentation layer is to specialize the business model for different categories of users.  Based on a user's role, they will be restricted to specific subject areas, tables and columns for security.  The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users.  Customized names and descriptions can be used to override the business model names for a specific audience.  Variables in the object names can be used for localization. For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables.  The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer.  In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model. Other Model Objects There are some objects that apply to multiple layers.  These include security-related objects, such as application roles, users, data filters, and query limits (governors).  There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user session basis.  Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.   The Query Factory At this point, you should have a grasp on the query factory concept.  When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand. The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.  
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

    Read the article

  • Server 2003 SP2 BSOD caused by fltmgr.sys

    - by MasterMax1313
    I'm running into a problem where a Server 2003 SP2 box has started crashing roughly once an hour, BSODing out with the message that fltmgr.sys is probably the cause. I ran dumpchk.exe on the memory.dmp file, indicating the same thing. Any thoughts on typical root causes? The following is the error code I'm seeing: Error code 0000007e, parameter1 c0000005, parameter2 f723e087, parameter3 f78cea8c, parameter4 f78ce788. After running dumpchk on the memory.dmp file, I get the following note: Probably caused by : fltmgr.sys ( fltmgr!FltGetIrpName+63f ) The full log is here: Microsoft (R) Windows Debugger Version 6.12.0002.633 X86 Copyright (c) Microsoft Corporation. All rights reserved. Loading Dump File [c:\windows\memory.dmp] Kernel Complete Dump File: Full address space is available Symbol search path is: *** Invalid *** **************************************************************************** * Symbol loading may be unreliable without a symbol search path. * * Use .symfix to have the debugger choose a symbol path. * * After setting your symbol path, use .reload to refresh symbol locations. * **************************************************************************** Executable search path is: ********************************************************************* * Symbols can not be loaded because symbol path is not initialized. * * * * The Symbol Path can be set by: * * using the _NT_SYMBOL_PATH environment variable. * * using the -y <symbol_path> argument when starting the debugger. * * using .sympath and .sympath+ * ********************************************************************* *** ERROR: Symbol file could not be found. Defaulted to export symbols for ntkrnlpa.exe - Windows Server 2003 Kernel Version 3790 (Service Pack 2) UP Free x86 compatible Product: Server, suite: TerminalServer SingleUserTS Built by: 3790.srv03_sp2_gdr.101019-0340 Machine Name: Kernel base = 0x80800000 PsLoadedModuleList = 0x8089ffa8 Debug session time: Wed Oct 5 08:48:04.803 2011 (UTC - 4:00) System Uptime: 0 days 14:25:12.085 ********************************************************************* * Symbols can not be loaded because symbol path is not initialized. * * * * The Symbol Path can be set by: * * using the _NT_SYMBOL_PATH environment variable. * * using the -y <symbol_path> argument when starting the debugger. * * using .sympath and .sympath+ * ********************************************************************* *** ERROR: Symbol file could not be found. Defaulted to export symbols for ntkrnlpa.exe - Loading Kernel Symbols ............................................................... ................................................. Loading User Symbols Loading unloaded module list ... ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* Use !analyze -v to get detailed debugging information. BugCheck 7E, {c0000005, f723e087, f78dea8c, f78de788} ***** Kernel symbols are WRONG. Please fix symbols to do analysis. *** ERROR: Symbol file could not be found. 
Defaulted to export symbols for fltmgr.sys - --omitted-- Probably caused by : fltmgr.sys ( fltmgr!FltGetIrpName+63f ) Followup: MachineOwner --------- ----- 32 bit Kernel Full Dump Analysis DUMP_HEADER32: MajorVersion 0000000f MinorVersion 00000ece KdSecondaryVersion 00000000 DirectoryTableBase 004e7000 PfnDataBase 81600000 PsLoadedModuleList 8089ffa8 PsActiveProcessHead 808a61c8 MachineImageType 0000014c NumberProcessors 00000001 BugCheckCode 0000007e BugCheckParameter1 c0000005 BugCheckParameter2 f723e087 BugCheckParameter3 f78dea8c BugCheckParameter4 f78de788 PaeEnabled 00000001 KdDebuggerDataBlock 8088e3e0 SecondaryDataState 00000000 ProductType 00000003 SuiteMask 00000110 Physical Memory Description: Number of runs: 3 (limited to 3) FileOffset Start Address Length 00001000 0000000000001000 0009e000 0009f000 0000000000100000 bfdf0000 bfe8f000 00000000bff00000 00100000 Last Page: 00000000bff8e000 00000000bffff000 KiProcessorBlock at 8089f300 1 KiProcessorBlock entries: ffdff120 Windows Server 2003 Kernel Version 3790 (Service Pack 2) UP Free x86 compatible Product: Server, suite: TerminalServer SingleUserTS Built by: 3790.srv03_sp2_gdr.101019-0340 Machine Name:*** ERROR: Module load completed but symbols could not be loaded for srv.sys Kernel base = 0x80800000 PsLoadedModuleList = 0x8089ffa8 Debug session time: Wed Oct 5 08:48:04.803 2011 (UTC - 4:00) System Uptime: 0 days 14:25:12.085 start end module name 80800000 80a50000 nt Tue Oct 19 10:00:49 2010 (4CBDA491) 80a50000 80a6f000 hal Sat Feb 17 00:48:25 2007 (45D69729) b83d4000 b83fe000 Fastfat Sat Feb 17 01:27:55 2007 (45D6A06B) b8476000 b84a1000 RDPWD Sat Feb 17 00:44:38 2007 (45D69646) b8549000 b8554000 TDTCP Sat Feb 17 00:44:32 2007 (45D69640) b8fe1000 b9045000 srv Thu Feb 17 11:58:17 2011 (4D5D53A9) b956d000 b95be000 HTTP Fri Nov 06 07:51:22 2009 (4AF41BCA) b9816000 b982d780 hgfs Tue Aug 12 20:36:54 2008 (48A22CA6) b9b16000 b9b20000 ndisuio Sat Feb 17 00:58:25 2007 (45D69981) b9cf6000 b9d1ac60 iwfsd Wed Sep 29 01:43:59 2004 (415A4B9F) b9e5b000 b9e62000 parvdm Tue Mar 25 03:03:49 2003 (3E7FFF55) b9e63000 b9e67860 lgtosync Fri Sep 12 04:38:13 2003 (3F6185F5) b9ed3000 b9ee8000 Cdfs Sat Feb 17 01:27:08 2007 (45D6A03C) b9f10000 b9f2e000 EraserUtilRebootDrv Thu Jul 07 21:45:11 2011 (4E166127) b9f2e000 b9f8c000 eeCtrl Thu Jul 07 21:45:11 2011 (4E166127) b9f8c000 b9f9d000 Fips Sat Feb 17 01:26:33 2007 (45D6A019) b9f9d000 ba013000 mrxsmb Fri Feb 18 10:22:23 2011 (4D5E8EAF) ba013000 ba043000 rdbss Wed Feb 24 10:54:03 2010 (4B854B9B) ba043000 ba0ad000 SPBBCDrv Mon Dec 14 23:39:00 2009 (4B2712E4) ba0ad000 ba0d7000 afd Thu Feb 10 08:42:18 2011 (4D53EB3A) ba0d7000 ba108000 netbt Sat Feb 17 01:28:57 2007 (45D6A0A9) ba108000 ba19c000 tcpip Sat Aug 15 05:53:38 2009 (4A8685A2) ba19c000 ba1b5000 ipsec Sat Feb 17 01:29:28 2007 (45D6A0C8) ba275000 ba288600 NAVENG Fri Jul 29 08:10:02 2011 (4E32A31A) ba289000 ba2ae000 SYMEVENT Thu Apr 15 21:31:23 2010 (4BC7BDEB) ba2ae000 ba42d300 NAVEX15 Fri Jul 29 08:07:28 2011 (4E32A280) ba42e000 ba479000 SRTSP Fri Mar 04 15:31:08 2011 (4D714C0C) ba485000 ba487b00 dump_vmscsi Wed Apr 11 13:55:32 2007 (461D2114) ba4e1000 ba540000 update Mon May 28 08:15:16 2007 (465AC7D4) ba568000 ba59f000 rdpdr Sat Feb 17 00:51:00 2007 (45D697C4) ba59f000 ba5b1000 raspptp Sat Feb 17 01:29:20 2007 (45D6A0C0) ba5b1000 ba5ca000 ndiswan Sat Feb 17 01:29:22 2007 (45D6A0C2) ba5da000 ba5e4000 dump_diskdump Sat Feb 17 01:07:44 2007 (45D69BB0) ba66a000 ba67e000 rasl2tp Sat Feb 17 01:29:02 2007 (45D6A0AE) ba67e000 ba69a000 VIDEOPRT Sat Feb 17 
01:10:30 2007 (45D69C56) ba69a000 ba6c1000 ks Sat Feb 17 01:30:40 2007 (45D6A110) ba6c1000 ba6d5000 redbook Sat Feb 17 01:07:26 2007 (45D69B9E) ba6d5000 ba6ea000 cdrom Sat Feb 17 01:07:48 2007 (45D69BB4) ba6ea000 ba6ff000 serial Sat Feb 17 01:06:46 2007 (45D69B76) ba6ff000 ba717000 parport Sat Feb 17 01:06:42 2007 (45D69B72) ba717000 ba72a000 i8042prt Sat Feb 17 01:30:40 2007 (45D6A110) baff0000 baff3700 CmBatt Sat Feb 17 00:58:51 2007 (45D6999B) bf800000 bf9d3000 win32k Thu Mar 03 08:55:02 2011 (4D6F9DB6) bf9d3000 bf9ea000 dxg Sat Feb 17 01:14:39 2007 (45D69D4F) bf9ea000 bf9fec80 vmx_fb Sat Aug 16 07:23:10 2008 (48A6B89E) bf9ff000 bfa4a000 ATMFD Tue Feb 15 08:19:22 2011 (4D5A7D5A) bff60000 bff7e000 RDPDD Sat Feb 17 09:01:19 2007 (45D70AAF) f7214000 f723a000 KSecDD Mon Jun 15 13:45:11 2009 (4A3688A7) f723a000 f725f000 fltmgr Sat Feb 17 00:51:08 2007 (45D697CC) f725f000 f7272000 CLASSPNP Sat Feb 17 01:28:16 2007 (45D6A080) f7272000 f7283000 symmpi Mon Dec 13 16:03:14 2004 (41BE0392) f7283000 f72a2000 SCSIPORT Sat Feb 17 01:28:41 2007 (45D6A099) f72a2000 f72bf000 atapi Sat Feb 17 01:07:34 2007 (45D69BA6) f72bf000 f72e9000 volsnap Sat Feb 17 01:08:23 2007 (45D69BD7) f72e9000 f7315000 dmio Sat Feb 17 01:10:44 2007 (45D69C64) f7315000 f733c000 ftdisk Sat Feb 17 01:08:05 2007 (45D69BC5) f733c000 f7352000 pci Sat Feb 17 00:59:03 2007 (45D699A7) f7352000 f7386000 ACPI Sat Feb 17 00:58:47 2007 (45D69997) f7487000 f7490000 WMILIB Tue Mar 25 03:13:00 2003 (3E80017C) f7497000 f74a6000 isapnp Sat Feb 17 00:58:57 2007 (45D699A1) f74a7000 f74b4000 PCIIDEX Sat Feb 17 01:07:32 2007 (45D69BA4) f74b7000 f74c7000 MountMgr Sat Feb 17 01:05:35 2007 (45D69B2F) f74c7000 f74d2000 PartMgr Sat Feb 17 01:29:25 2007 (45D6A0C5) f74d7000 f74e7000 disk Sat Feb 17 01:07:51 2007 (45D69BB7) f74e7000 f74f3000 Dfs Sat Feb 17 00:51:17 2007 (45D697D5) f74f7000 f7501000 crcdisk Sat Feb 17 01:09:50 2007 (45D69C2E) f7507000 f7517000 agp440 Sat Feb 17 00:58:53 2007 (45D6999D) f7517000 f7522000 TDI Sat Feb 17 01:01:19 2007 (45D69A2F) f7527000 f7532000 ptilink Sat Feb 17 01:06:38 2007 (45D69B6E) f7537000 f7540000 raspti Sat Feb 17 00:59:23 2007 (45D699BB) f7547000 f7556000 termdd Sat Feb 17 00:44:32 2007 (45D69640) f7557000 f7561000 Dxapi Tue Mar 25 03:06:01 2003 (3E7FFFD9) f7577000 f7580000 mssmbios Sat Feb 17 00:59:12 2007 (45D699B0) f7587000 f7595000 NDProxy Wed Nov 03 09:25:59 2010 (4CD162E7) f75a7000 f75b1000 flpydisk Tue Mar 25 03:04:32 2003 (3E7FFF80) f75b7000 f75c0080 SRTSPX Fri Mar 04 15:31:24 2011 (4D714C1C) f75d7000 f75e3000 vga Sat Feb 17 01:10:30 2007 (45D69C56) f75e7000 f75f2000 Msfs Sat Feb 17 00:50:33 2007 (45D697A9) f75f7000 f7604000 Npfs Sat Feb 17 00:50:36 2007 (45D697AC) f7607000 f7615000 msgpc Sat Feb 17 00:58:37 2007 (45D6998D) f7617000 f7624000 netbios Sat Feb 17 00:58:29 2007 (45D69985) f7627000 f7634000 wanarp Sat Feb 17 00:59:17 2007 (45D699B5) f7637000 f7646000 intelppm Sat Feb 17 00:48:30 2007 (45D6972E) f7647000 f7652000 kbdclass Sat Feb 17 01:05:39 2007 (45D69B33) f7657000 f7661000 mouclass Tue Mar 25 03:03:09 2003 (3E7FFF2D) f7667000 f7671000 serenum Sat Feb 17 01:06:44 2007 (45D69B74) f7677000 f7682000 fdc Sat Feb 17 01:07:16 2007 (45D69B94) f7687000 f7694b00 vmx_svga Sat Aug 16 07:22:07 2008 (48A6B85F) f7697000 f76a0000 watchdog Sat Feb 17 01:11:45 2007 (45D69CA1) f76a7000 f76b0000 ndistapi Sat Feb 17 00:59:19 2007 (45D699B7) f76b7000 f76c6000 raspppoe Sat Feb 17 00:59:23 2007 (45D699BB) f76c8000 f7707000 NDIS Sat Feb 17 01:28:49 2007 (45D6A0A1) f7707000 f770f000 kdcom Tue Mar 25 03:08:00 2003 
(3E800050) f770f000 f7717000 BOOTVID Tue Mar 25 03:07:58 2003 (3E80004E) f7717000 f771e000 intelide Sat Feb 17 01:07:32 2007 (45D69BA4) f771f000 f7726000 dmload Tue Mar 25 03:08:08 2003 (3E800058) f777f000 f7786000 dxgthk Tue Mar 25 03:05:52 2003 (3E7FFFD0) f7787000 f778e000 vmmemctl Tue Aug 12 20:37:25 2008 (48A22CC5) f77cf000 f77d6280 vmxnet Mon Sep 08 21:17:10 2008 (48C5CE96) f77d7000 f77df000 audstub Tue Mar 25 03:09:12 2003 (3E800098) f77ef000 f77f7000 Fs_Rec Tue Mar 25 03:08:36 2003 (3E800074) f77f7000 f77fe000 Null Tue Mar 25 03:03:05 2003 (3E7FFF29) f77ff000 f7806000 Beep Tue Mar 25 03:03:04 2003 (3E7FFF28) f7807000 f780f000 mnmdd Tue Mar 25 03:07:53 2003 (3E800049) f780f000 f7817000 RDPCDD Tue Mar 25 03:03:05 2003 (3E7FFF29) f7817000 f781f000 rasacd Tue Mar 25 03:11:50 2003 (3E800136) f7878000 f7897000 Mup Tue Apr 12 15:05:46 2011 (4DA4A28A) f7897000 f7899980 compbatt Sat Feb 17 00:58:51 2007 (45D6999B) f789b000 f789e900 BATTC Sat Feb 17 00:58:46 2007 (45D69996) f789f000 f78a1b00 vmscsi Wed Apr 11 13:55:32 2007 (461D2114) f79af000 f79b0280 vmmouse Mon Aug 11 07:16:51 2008 (48A01FA3) f79b1000 f79b2280 swenum Sat Feb 17 01:05:56 2007 (45D69B44) f7b4a000 f7bdf000 Ntfs Sat Feb 17 01:27:23 2007 (45D6A04B) Unloaded modules: ba65a000 ba668000 imapi.sys Timestamp: unavailable (00000000) Checksum: 00000000 ImageSize: 0000E000 ba1c4000 ba1d5000 vpc-8042.sys Timestamp: unavailable (00000000) Checksum: 00000000 ImageSize: 00011000 f77df000 f77e7000 Sfloppy.SYS Timestamp: unavailable (00000000) Checksum: 00000000 ImageSize: 00008000 ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* Use !analyze -v to get detailed debugging information. BugCheck 7E, {c0000005, f723e087, f78dea8c, f78de788} ***** Kernel symbols are WRONG. Please fix symbols to do analysis. --omitted-- Probably caused by : fltmgr.sys ( fltmgr!FltGetIrpName+63f ) Followup: MachineOwner --------- Finished dump check

    Read the article

  • Integrating Oracle Hyperion Smart View Data Queries with MS Word and Power Point

    - by Andreea Vaduva
    Untitled Document table { border: thin solid; } Most Smart View users probably appreciate that they can use just one add-in to access data from the different sources they might work with, like Oracle Essbase, Oracle Hyperion Planning, Oracle Hyperion Financial Management and others. But not all of them are aware of the options to integrate data analyses not only in Excel, but also in MS Word or Power Point. While in the past, copying and pasting single numbers or tables from a recent analysis in Excel made the pasted content a static snapshot, copying so called Data Points now creates dynamic, updateable references to the data source. It also provides additional nice features, which can make life easier and less stressful for Smart View users. So, how does this option work: after building an ad-hoc analysis with Smart View as usual in an Excel worksheet, any area including data cells/numbers from the database can be highlighted in order to copy data points - even single data cells only.   TIP It is not necessary to highlight and copy the row or column descriptions   Next from the Smart View ribbon select Copy Data Point. Then transfer to the Word or Power Point document into which the selected content should be copied. Note that in these Office programs you will find a menu item Smart View;from it select the Paste Data Point icon. The copied details from the Excel report will be pasted, but showing #NEED_REFRESH in the data cells instead of the original numbers. =After clicking the Refresh icon on the Smart View menu the data will be retrieved and displayed. (Maybe at that moment a login window pops up and you need to provide your credentials.) It works in the same way if you just copy one single number without any row or column descriptions, for example in order to incorporate it into a continuous text: Before refresh: After refresh: From now on for any subsequent updates of the data shown in your documents you only need to refresh data by clicking the Refresh button on the Smart View menu, without copying and pasting the context or content again. As you might realize, trying out this feature on your own, there won’t be any Point of View shown in the Office document. Also you have seen in the example, where only a single data cell was copied, that there aren’t any member names or row/column descriptions copied, which are usually required in an ad-hoc report in order to exactly define where data comes from or how data is queried from the source. Well, these definitions are not visible, but they are transferred to the Word or Power Point document as well. They are stored in the background for each individual data cell copied and can be made visible by double-clicking the data cell as shown in the following screen shot (but which is taken from another context).   So for each cell/number the complete connection information is stored along with the exact member/cell intersection from the database. And that’s not all: you have the chance now to exchange the members originally selected in the Point of View (POV) in the Excel report. Remember, at that time we had the following selection:   By selecting the Manage POV option from the Smart View meny in Word or Power Point…   … the following POV Manager – Queries window opens:   You can now change your selection for each dimension from the original POV by either double-clicking the dimension member in the lower right box under POV: or by selecting the Member Selector icon on the top right hand side of the window. 
After confirming your changes you need to refresh your document again. Be aware, that this will update all (!) numbers taken from one and the same original Excel sheet, even if they appear in different locations in your Office document, reflecting your recent changes in the POV. TIP Build your original report already in a way that dimensions you might want to change from within Word or Power Point are placed in the POV. And there is another really nice feature I wouldn’t like to miss mentioning: Using Dynamic Data Points in the way described above, you will never miss or need to search again for your original Excel sheet from which values were taken and copied as data points into an Office document. Because from even only one single data cell Smart View is able to recreate the entire original report content with just a few clicks: Select one of the numbers from within your Word or Power Point document by double-clicking.   Then select the Visualize in Excel option from the Smart View menu. Excel will open and Smart View will rebuild the entire original report, including POV settings, and retrieve all data from the most recent actual state of the database. (It might be necessary to provide your credentials before data is displayed.) However, in order to make this work, an active online connection to your databases on the server is necessary and at least read access to the retrieved data. But apart from this, your newly built Excel report is fully functional for ad-hoc analysis and can be used in the common way for drilling, pivoting and all the other known functions and features. So far about embedding Dynamic Data Points into Office documents and linking them back into Excel worksheets. You can apply this in the described way with ad-hoc analyses directly on Essbase databases or using Hyperion Planning and Hyperion Financial Management ad-hoc web forms. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning paths section , where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected] . About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.  

    Read the article

  • Oracle OpenWorld 2013 – Wrap up by Sven Bernhardt

    - by JuergenKress
    OOW 2013 is over and we’re heading home, so it is time to lean back and reflecting about the impressions we have from the conference. First of all: OOW was great! It was a pleasure to be a part of it. As already mentioned in our last blog article: It was the biggest OOW ever. Parallel to the conference the America’s Cup took place in San Francisco and the Oracle Team America won. Amazing job by the team and again congratulations from our side Back to the conference. The main topics for us are: Oracle SOA / BPM Suite 12c Adaptive Case management (ACM) Big Data Fast Data Cloud Mobile Below we will go a little more into detail, what are the key takeaways regarding the mentioned points: Oracle SOA / BPM Suite 12c During the five days at OOW, first details of the upcoming major release of Oracle SOA Suite 12c and Oracle BPM Suite 12c have been introduced. Some new key features are: Managed File Transfer (MFT) for transferring big files from a source to a target location Enhanced REST support by introducing a new REST binding Introduction of a generic cloud adapter, which can be used to connect to different cloud providers, like Salesforce Enhanced analytics with BAM, which has been totally reengineered (BAM Console now also runs in Firefox!) Introduction of templates (OSB pipelines, component templates, BPEL activities templates) EM as a single monitoring console OSB design-time integration into JDeveloper (Really great!) Enterprise modeling capabilities in BPM Composer These are only a few points from what is coming with 12c. We are really looking forward for the new realese to come out, because this seems to be really great stuff. The suite becomes more and more integrated. From 10g to 11g it was an evolution in terms of developing SOA-based applications. With 12c, Oracle continues it’s way – very impressive. Adaptive Case Management Another fantastic topic was Adaptive Case Management (ACM). The Oracle PMs did a great job especially at the demo grounds in showing the upcoming Case Management UI (will be available in 11g with the next BPM Suite MLR Patch), the roadmap and the differences between traditional business process modeling. They have been very busy during the conference because a lot of partners and customers have been interested Big Data Big Data is one of the current hype themes. Because of huge data amounts from different internal or external sources, the handling of these data becomes more and more challenging. Companies have a need for analyzing the data to optimize their business. The challenge is here: the amount of data is growing daily! To store and analyze the data efficiently, it is necessary to have a scalable and flexible infrastructure. Here it is important that hardware and software are engineered to work together. Therefore several new features of the Oracle Database 12c, like the new in-memory option, have been presented by Larry Ellison himself. From a hardware side new server machines like Fujitsu M10 or new processors, such as Oracle’s new M6-32 have been announced. The performance improvements, when using one of these hardware components in connection with the improved software solutions were really impressive. For more details about this, please take look at our previous blog post. 
Regarding Big Data, Oracle also introduced their Big Data architecture, which consists of: Oracle Big Data Appliance that is preconfigured with Hadoop Oracle Exdata which stores a huge amount of data efficently, to achieve optimal query performance Oracle Exalytics as a fast and scalable Business analytics system Analysis of the stored data can be performed using SQL, by streaming the data directly from Hadoop to an Oracle Database 12c. Alternatively the analysis can be directly implemented in Hadoop using “R”. In addition Oracle BI Tools can be used to analyze the data. Fast Data Fast Data is a complementary approach to Big Data. A huge amount of mostly unstructured data comes in via different channels with a high frequency. The analysis of these data streams is also important for companies, because the incoming data has to be analyzed regarding business-relevant patterns in real-time. Therefore these patterns must be identified efficiently and performant. To do so, in-memory grid solutions in combination with Oracle Coherence and Oracle Event Processing demonstrated very impressive how efficient real-time data processing can be. One example for Fast Data solutions that was shown during the OOW was the analysis of twitter streams regarding customer satisfaction. The feeds with negative words like “bad” or “worse” have been filtered and after a defined treshold has been reached in a certain timeframe, a business event was triggered. Cloud Another key trend in the IT market is of course Cloud Computing and what it means for companies and their businesses. Oracle announced their Cloud strategy and vision – companies can focus on their real business while all of the applications are available via Cloud. This also includes Oracle Database or Oracle Weblogic, so that companies can also build, deploy and run their own applications within the cloud. Three different approaches have been introduced: Infrastructure as a Service (IaaS) Platform as a Service (PaaS) Software as a Service (SaaS) Using the IaaS approach only the infrastructure components will be managed in the Cloud. Customers will be very flexible regarding memory, storage or number of CPUs because those parameters can be adjusted elastically. The PaaS approach means that besides the infrastructure also the platforms (such as databases or application servers) necessary for running applications will be provided within the Cloud. Here customers can also decide, if installation and management of these infrastructure components should be done by Oracle. The SaaS approach describes the most complete one, hence all applications a company uses are managed in the Cloud. Oracle is planning to provide all of their applications, like ERP systems or HR applications, as Cloud services. In conclusion this seems to be a very forward-thinking strategy, which opens up new possibilities for customers to manage their infrastructure and applications in a flexible, scalable and future-oriented manner. As you can see, our OOW days have been very very interresting. We collected many helpful informations for our projects. The new innovations presented at the confernce are great and being part of this was even greater! We are looking forward to next years’ conference! 
Links: http://www.oracle.com/openworld/index.html http://thecattlecrew.wordpress.com/2013/09/23/first-impressions-from-oracle-open-world-2013 SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: cattleCrew,Sven Bernhard,OOW2013,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Design advice for avoiding change in several classes

    - by Anders Svensson
    Hi, I'm trying to figure out how to design a small application more elegantly and make it more resistant to change. Basically it is a sort of project price calculator, and the problem is that there are many parameters that can affect the pricing. I'm trying to avoid cluttering the code with a lot of if-clauses for each parameter, but I still have, e.g., if-clauses in two places checking the value of the size parameter.

    I have the Head First Design Patterns book and have tried to find ideas there, but the closest I got was the Decorator pattern, which has an example where Starbuzz Coffee sets prices depending first on condiments added, and then later (in an exercise) by adding a size parameter (Tall, Grande, Venti). That didn't seem to help, because adding that parameter still seemed to add if-clause complexity in a lot of places (and, this being an exercise, they didn't explain it further).

    What I am trying to avoid is having to change several classes if a parameter were to change or a new parameter were added, or at least to change as few places as possible (there's some fancy design-principle name for this that I don't remember :-)).

    Below is the code. Basically it calculates the price for a project that has the tasks "Writing" and "Analysis" with a size parameter and different pricing models. There will be other parameters coming in later too, like "How new is the product?" (New, 1-5 years old, 6-10 years old), etc. Any advice on the best design would be greatly appreciated, whether a "design pattern" or just good object-oriented principles that would make it resistant to change (e.g. adding another size, or changing one of the size values, and only having to change it in one place rather than in several if-clauses):

    public class Project
    {
        private readonly int _numberOfProducts;
        protected Size _size;

        public Task Analysis { get; set; }
        public Task Writing { get; set; }

        public Project(int numberOfProducts)
        {
            _numberOfProducts = numberOfProducts;
            _size = GetSize();
            Analysis = new AnalysisTask(numberOfProducts, _size);
            Writing = new WritingTask(numberOfProducts, _size);
        }

        private Size GetSize()
        {
            if (_numberOfProducts <= 2) return Size.small;
            if (_numberOfProducts <= 8) return Size.medium;
            return Size.large;
        }

        public double GetPrice()
        {
            return Analysis.GetPrice() + Writing.GetPrice();
        }
    }

    public abstract class Task
    {
        protected readonly int _numberOfProducts;
        protected Size _size;
        protected double _pricePerHour;
        protected Dictionary<Size, int> _hours;

        public abstract int TotalHours { get; }
        public double Price { get; set; }

        protected Task(int numberOfProducts, Size size)
        {
            _numberOfProducts = numberOfProducts;
            _size = size;
        }

        public double GetPrice()
        {
            return _pricePerHour * TotalHours;
        }
    }

    public class AnalysisTask : Task
    {
        public AnalysisTask(int numberOfProducts, Size size)
            : base(numberOfProducts, size)
        {
            _pricePerHour = 850;
            _hours = new Dictionary<Size, int>()
            {
                { Size.small, 56 }, { Size.medium, 104 }, { Size.large, 200 }
            };
        }

        public override int TotalHours
        {
            get { return _hours[_size]; }
        }
    }

    public class WritingTask : Task
    {
        public WritingTask(int numberOfProducts, Size size)
            : base(numberOfProducts, size)
        {
            _pricePerHour = 650;
            _hours = new Dictionary<Size, int>()
            {
                { Size.small, 125 }, { Size.medium, 100 }, { Size.large, 60 }
            };
        }

        public override int TotalHours
        {
            get
            {
                if (_size == Size.small)
                    return _hours[_size] * _numberOfProducts;
                if (_size == Size.medium)
                    return (_hours[Size.small] * 2) + (_hours[Size.medium] * (_numberOfProducts - 2));
                return (_hours[Size.small] * 2) + (_hours[Size.medium] * (8 - 2)) + (_hours[Size.large] * (_numberOfProducts - 8));
            }
        }
    }

    public enum Size
    {
        small, medium, large
    }

    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
            List<int> quantities = new List<int>();
            for (int i = 0; i < 100; i++)
            {
                quantities.Add(i);
            }
            comboBoxNumberOfProducts.DataSource = quantities;
        }

        private void comboBoxNumberOfProducts_SelectedIndexChanged(object sender, EventArgs e)
        {
            Project project = new Project((int)comboBoxNumberOfProducts.SelectedItem);
            labelPrice.Text = project.GetPrice().ToString();
            labelWriterHours.Text = project.Writing.TotalHours.ToString();
            labelAnalysisHours.Text = project.Analysis.TotalHours.ToString();
        }
    }

    At the end is the simple calling code that currently lives in the SelectedIndexChanged event for the combobox that sets the size. (BTW, I don't like the fact that I have to use several dots to get to TotalHours at the end here either; as far as I can recall that violates the "principle of least knowledge" or the "Law of Demeter", so input on that would be appreciated too, but it's not the main point of the question.) Regards, Anders
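    One direction worth sketching, purely as a hypothetical illustration and not as the poster's code (the Band and PricedTask names, and the loop itself, are assumptions), is to express each size band as data (an upper bound plus hours per product) and compute the tiered hours in a single loop, so that adding or changing a band means editing one table instead of several if-clauses. The sketch mirrors the Writing task's tiered scheme; a flat scheme like Analysis could be modelled as a single band or a separate strategy.

    // Hypothetical sketch (not the original code): size bands expressed as data.
    using System;
    using System.Collections.Generic;

    public class Band
    {
        public int MaxProducts;   // upper bound of the band (use int.MaxValue for the last band)
        public int HoursPerUnit;  // hours charged per product falling inside this band
    }

    public class PricedTask
    {
        private readonly double _pricePerHour;
        private readonly List<Band> _bands;

        public PricedTask(double pricePerHour, List<Band> bands)
        {
            _pricePerHour = pricePerHour;
            _bands = bands;
        }

        public int TotalHours(int numberOfProducts)
        {
            int hours = 0;
            int previousBound = 0;
            foreach (Band band in _bands)
            {
                // How many of the products fall inside this band?
                int inBand = Math.Min(numberOfProducts, band.MaxProducts) - previousBound;
                if (inBand <= 0) break;
                hours += inBand * band.HoursPerUnit;
                previousBound = band.MaxProducts;
            }
            return hours;
        }

        public double GetPrice(int numberOfProducts)
        {
            return _pricePerHour * TotalHours(numberOfProducts);
        }
    }

    // Example wiring, using the hour figures from the WritingTask above:
    // var writing = new PricedTask(650, new List<Band>
    // {
    //     new Band { MaxProducts = 2, HoursPerUnit = 125 },
    //     new Band { MaxProducts = 8, HoursPerUnit = 100 },
    //     new Band { MaxProducts = int.MaxValue, HoursPerUnit = 60 }
    // });
    // double price = writing.GetPrice(5);  // 650 * (2*125 + 3*100)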

    Read the article

  • New Communications Industry Data Model with "Factory Installed" Predictive Analytics using Oracle Da

    - by charlie.berger
    Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications Service Providers

    We've integrated pre-installed analytical methodologies with the new Oracle Communications Data Model to deliver automated, simple, yet powerful predictive analytics solutions for customers. Churn, sentiment analysis, identifying customer segments: all things that can be anticipated and hence preconceived and implemented inside an application. Read on for more information!

    TM Forum Management World, Nice, France - 18 May 2010

    News Facts
    To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle today introduced the Oracle Communications Data Model. With the Oracle Communications Data Model, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the Oracle Communications Data Model, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Also announced today, Hong Kong Broadband Network enhanced their data warehouse system, going live on Oracle Communications Data Model in three months. The leading provider increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent.

    Product Details
    Oracle Communications Data Model provides industry-specific schema and embedded analytics that address key areas such as customer management, marketing segmentation, product development and network health. CSPs can efficiently capture and monitor critical data and transform it into actionable information to support development and delivery of next-generation services using:
    - More than 1,300 industry-specific measurements and key performance indicators (KPIs) such as network reliability statistics, provisioning metrics and customer churn propensity.
    - Embedded OLAP cubes for extremely fast dimensional analysis of business information.
    - Embedded data mining models for sophisticated trending and predictive analysis.
    - Support for multiple lines of business, such as cable, mobile, wireline and Internet, which can be easily extended to support future requirements.
    With Oracle Communications Data Model, CSPs can jump start the implementation of a communications data warehouse in line with communications-industry standards including the TM Forum Information Framework (SID), formerly known as the Shared Information Model. Oracle Communications Data Model is optimized for any Oracle Database 11g platform, including Oracle Exadata, which can improve call data record query performance by 10x or more.

    Supporting Quotes
    "Oracle Communications Data Model covers a wide range of business areas that are relevant to modern communications service providers and is a comprehensive solution - with its data model and pre-packaged templates including BI dashboards, KPIs, OLAP cubes and mining models. It helps us save a great deal of time in building and implementing a customized data warehouse and enables us to leverage the advanced analytics quickly and more effectively," said Yasuki Hayashi, executive manager, NTT Comware Corporation.
    "Data volumes will only continue to grow as communications service providers expand next-generation networks, deploy new services and adopt new business models. They will increasingly need efficient, reliable data warehouses to capture key insights on data such as customer value, network value and churn probability. With the Oracle Communications Data Model, Oracle has demonstrated its commitment to meeting these needs by delivering data warehouse tools designed to fill communications industry-specific needs," said Elisabeth Rainge, program director, Network Software, IDC.
    "The TM Forum Conformance Mark provides reassurance to customers seeking standards-based, and therefore, cost-effective and flexible solutions. TM Forum is extremely pleased to work with Oracle to certify its Oracle Communications Data Model solution. Upon successful completion, this certification will represent the broadest and most complete implementation of the TM Forum Information Framework to date, with more than 130 aggregate business entities," said Keith Willetts, chairman and chief executive officer, TM Forum.

    Supporting Resources
    - Oracle Communications
    - Oracle Communications Data Model Data Sheet
    - Oracle Communications Data Model Podcast
    - Oracle Data Warehousing
    - Oracle Communications on YouTube
    - Oracle Communications on Delicious
    - Oracle Communications on Facebook
    - Oracle Communications on Twitter
    - Oracle Communications on LinkedIn
    - Oracle Database on Twitter
    - The Data Warehouse Insider Blog

    Read the article

  • SEO: Nested List vs List, Split Over Divs vs Definition List

    - by Jon P
    From an SEO perspective which, if any, is better?

    Option 1: Nested lists with h2 tags

    <ul id="mainpoints">
      <li><h2>Powerful Analysis</h2>
        <ul>
          <li>Charting and indicators</li>
          <li>Daily trading signals</li>
          <li>Company health checks</li>
        </ul>
      </li>
      <li><h2>World Market Data</h2>
        <ul>
          [List Items removed for brevity]
        </ul>
      </li>
      <li><h2>Daily Market Data</h2>
        <ul>
          [List Items removed for brevity]
        </ul>
      </li>
    </ul>

    Option 2: Divs with h2 and lists

    <div id="mainpoints">
      <div>
        <h2>Powerful Analysis</h2>
        <ul>
          <li>Charting and indicators</li>
          <li>Daily trading signals</li>
          <li>Company health checks</li>
        </ul>
      </div>
      <div>
        <h2>World Market Data</h2>
        <ul>
          [List Items removed for brevity]
        </ul>
      </div>
      <div>
        <h2>Daily Market Data</h2>
        <ul>
          [List Items removed for brevity]
        </ul>
      </div>
    </div>

    Option 3: Definition list

    <dl id="mainpoints">
      <dt>Powerful Analysis</dt>
      <dd>- Charting and indicators</dd>
      <dd>- Daily trading signals</dd>
      <dd>- Company health checks</dd>
      <dt>World Market Data</dt>
      [List Items removed for brevity]
      <dt>Daily Market Data</dt>
      [List Items removed for brevity]
    </dl>

    My instincts tell me that semantically the pure list options (1 & 3) are the best, and that h2 may be more SEO-friendly (1 & 2), which would point to option 1 as the best option. I do love the lean markup of the definition list, but will I take an SEO hit by losing the h2 tags? Before anyone asks, h2 is not valid markup inside a dt tag. Are my instincts right that a nested list is the way to go?

    Read the article

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description:
    Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having an option to use a distributed environment can be much faster for the deployment and even the development process.

    Implementation:
    When an organization designs code, they are essentially becoming a Software-as-a-Service (SaaS) provider to their own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) provider to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is as follows:
    - Requirements
    - Analysis
    - Design and Development
    - Implementation
    - Testing
    - Deployment to Production
    - Maintenance

    In an on-premise environment, this often equates to the following process map:
    - Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    - Analysis: Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved.
    - Design and Development: Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
    - Implementation: Code checked into the main branch. Code forked as needed.
    - Testing: Code deployed to on-premise testing servers. If no server capacity is available, more resources are procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. testing performed.
    - Deployment to Production: Server team involved to select platform and environments with available capacity. If no server capacity is available, the standard budgeting and procurement process is followed, and systems are built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place.
    - Maintenance: Code changes evaluated and altered according to need.

    In a distributed computing environment like Windows Azure, the process maps a bit differently:
    - Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    - Analysis: Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved.
    - Design and Development: Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
    - Implementation: Code checked into the main branch. Code forked as needed.
    - Testing: Code deployed to Azure. Manual and automated functional, load, security, etc. testing performed.
    - Deployment to Production: Code deployed to Azure. Point-in-time backup and recovery plans configured and put into place. (HA/DR and automated backups are already present in the Azure fabric.)
    - Maintenance: Code changes evaluated and altered according to need.

    This means that several steps can be removed or expedited. It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the "Azure Marketplace": in effect this becomes an app store for enterprises to select pre-defined code and data applications to mesh or bolt in to their current code, possibly saving development time.

    Resources:
    - Whitepaper download - What is ALM? http://go.microsoft.com/?linkid=9743693
    - Whitepaper download - ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690
    - LiveMeeting recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

    Read the article

  • BI&EPM in Focus June 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    General News
    - Thomas Kurian Discusses Oracle Exalytics, SAP HANA (replay | preso | press)
    - Accenture & Oracle Study: The Challenges of Corporate Financial Reporting (link)
    - Flash Demo: Oracle Hyperion Planning on Exalytics in the Public Sector (link)
    - Flash Demo: OBIEE & Exalytics in Retail (link)

    Customers
    - Italian partner Alfa Sistemi implemented at Autovie Venete S.p.A.: Integrates Business Intelligence and Performance Management to Improve Efficiency and Speed for Managing Public Works Projects (English version) / Autovie Venete implementa un sistema integrato di Business Intelligence e Performance Management per migliorare l'efficienza e la tempestività dell'attività di Controlling di Commessa (Italian version)
    - FANCL Gains 360-Degree View of Customers across Multiple Sales Channels, Reduces Reports by 75%
    - Korea Yakult Improves Profit & Loss Analysis with Oracle Hyperion Planning and OBIEE
    - Hill International Streamlines Forecasting, Improves Visibility into Project Productivity and Profitability
    - Children's Rights in Society Better Supports Organizational Mission with Advanced, Integrated, and Streamlined Business Intelligence Tools
    - Profit: International utility Enel monitors the performance of global subsidiaries with Oracle Hyperion Applications (link)
    - Profit: Charting a New Course: Korean Air gains altitude by leveraging its greatest asset: information (link)

    Events
    - June 12: Breaking Away from the Excel Add-In: Welcome to Hyperion Smart View 11.1.2.2 (link)
    - June 13: Upgrading OBIEE 10g to 11g: Best Practices and Lessons Learned (performance architects) (link)
    - June 14, The Netherlands: Strategies for Business Excellence, New Release of Oracle Hyperion EPM Suite (link)
    - June 21: Comprehensive and Accurate Forecasting for Healthcare (link)
    - June 26: What Exactly is Exalytics? (KPI Partners) (link)
    - Webcast Replay: Is Your Company Able to Navigate Through Market Volatility? (link)
    - Webcast Replay: Is Hope and Email The Core of Your Reconciliation Process? (link)
    - Webcast Replay: Troubleshooting EPM Reporting & Analysis 11.1.2.x (link)
    - Webcast Replay: Is your Organization Flying Blind when it comes to Understanding Profitability? (link)

    Enterprise Performance Management
    - Final Oracle EPM Information Panel (CIP) survey on cost, profitability and performance reporting/scorecards is now OPEN (link)
    - New on EPM Blog: What's Going on With IFRS? (link)
    - How does Crystal Ball integrate with EPM Solutions? New collateral and demos on Crystal Ball Solution Factory! (link)
    - New Youtube Video: Business Case Analysis with Oracle Crystal Ball (link)
    - Crystal Ball 11.1.2.2 is released! Grouped Assumptions in Sensitivity Charts, Data Filtering When Fitting Distributions and Parameter Edits When Fitting Distributions, to name a few. Get full details from the online New Features Guide (link)
    - New DRM Oracle-by-Examples now available (link)
    - Support Blog: Hyperion Ledgerlink Sample Record and Windows 7: Now you see it, now you don't (link)
    - Use Enterprise Manager FMW Control to Troubleshoot Oracle EPM 11.1.2 Family of Products (link)

    Business Intelligence
    - Whitepaper: Real-Time Operational Reporting for E-Business Suite via GoldenGate Replication to an Operational Data Store. How Oracle enabled real-time operational reporting for its $20B services contract business with Golden Gate & OBIEE (link)
    - KPI Partners ebook: Understanding Oracle BI Components and Repository Modeling Basics (link)
    - "Getting Started with Oracle Endeca Information Discovery" video tutorials now available (link)
    - Oracle BI Publisher Conversion Center: Convert from Crystal, Actuate, or Oracle Reports to Oracle BI Publisher (link)
    - Oracle Fusion Applications: Monthly Partner Updates webcast replays to help BI partners understand how OBI, Essbase, BI-Apps and Fusion work together:
      - More on Fusion CRM: Fusion Marketing
      - More on Fusion CRM: Fusion CRM Sales Start-Up Packs and Expert Services for Implementation Partners
      - Introducing the Oracle Fusion Accounting Hub
      - Implementing Fusion Applications using Oracle's Composers
      - Oracle Fusion Applications Co-Existence

    Read the article

  • How to Build Services from Legacy Applications

    - by Chris Falter
    The SOA consultants invaded the executive suite at your company or agency, preached the true religion, and converted the unbelievers. Now by divine imperative you must convert your legacy applications into a suite of reusable services. But as usual, you lack the time and resources that you need in order to develop the services properly. So you googled or bing'ed, found this blog post, and began crying in gratitude. Yes, as the title implies, I am going to reveal my easy, 3-step, works-every-time process for converting silos of legacy applications into the inventory of services your CIO has been dreaming about. So just close your eyes and count to 3 … now open them … and here it is….

    Not.

    While wishful thinking is too often the coin of the IT realm, even the most naive practitioner knows that converting legacy applications into reusable services requires more than a magic wand. The reason is simple: if your starting point is your legacy applications, then you will simply be bolting a web service technology layer on top of your legacy API. And that legacy API is built in the image of the silo applications. Enter the wide gate of the legacy API, follow the broad path of generating service interfaces from existing code, and you will arrive at the siloed enterprise destruction that you thought you were escaping.

    The Straight and Narrow Path
    This past week I had the opportunity to learn how the FBI Criminal Justice Information Systems department has been transitioning from silo applications to a service inventory. Lafe Hutcheson, IT Specialist in the architecture group and fellow attendee at an SOA Architect Certification Workshop, was my guide. Lafe has survived the chaos of an SOA initiative, so it is not surprising that he was able to return from a US Army deployment to Kabul, Afghanistan with nary a scratch. According to Lafe, building their service inventory is a three-phase process:
    1. Model a business process. This requires intense collaboration between the IT and business wings of the organization, of course. The FBI uses IBM Websphere tools to model the process with BPMN.
    2. Identify candidate services to facilitate the business process.
    3. Convert the BPMN to an executable BPEL orchestration, model and develop the services, and use a BPEL engine to run the process. The FBI uses ActiveVOS for orchestration services.

    The 12 Step Program to End Your Legacy API Addiction
    Thomas Erl has documented a process for building a web service inventory that is quite similar to the FBI process. Erl's process adds a technology architecture definition phase, which allows for the technology environment to influence the inventory blueprint. For example, if you are using an enterprise service bus, you will probably not need to build your own utility services for logging or intermediate routing. Erl also lists a service-oriented analysis phase that highlights the 12-step process of applying the principles of service orientation to modeling your services. Erl depicts the modeling of a service inventory as an iterative process: model a business process, define the relevant technology architecture, define the service inventory blueprint, analyze the services, then model another business process, rinse and repeat. (Astute readers will note that Erl's diagram, restricted to the analysis and modeling process, does not include the implementation phase that concludes the FBI service development methodology.)

    The service-oriented analysis phase is where you find the 12 steps that will free you from your legacy API addiction. In a nutshell, you identify the steps in the process that need services; identify the different types of services (agnostic entity services, service compositions, and utility services) that are required; apply service-orientation principles; and normalize the inventory into cohesive service models. Rather than discuss each of the 12 steps individually, I will close by simply referring my readers to Erl's explanation.

    Read the article

  • Delivering SOA Governance with EAMS and Oracle Enterprise Repository by Link Consulting Team

    - by JuergenKress
    In the last 12 years Link Consulting has been building its presence in specific areas such as Governance and Architecture, both in terms of practices and methodologies, products, know-how and technological expertise. The Enterprise Architecture Management System - Oracle Enterprise Edition (EAMS - OER Edition) is the result of this experience: it combines the architecture management solution with OER in order to deliver a product specialized for SOA Governance that gathers the best of both worlds in a solution that enables SOA Governance projects, initiatives and programs.

    Enterprise Architecture Management System
    Enterprise Architecture Management System (EAMS) is an automation-based solution that enables the efficient management of Enterprise Architectures. The solution uses configured enterprise repositories and takes advantage of their features to provide automation capabilities to the users. EAMS provides capabilities to create, customize and analyze repository data, architectural blueprints, reports and analytic charts.

    Oracle Enterprise Repository
    Oracle Enterprise Repository (OER) is one of the major and central elements of the Oracle SOA Governance solution. Oracle Enterprise Repository provides the tools to manage and govern the metadata for any type of software asset, from business processes and services to patterns, frameworks, applications, components, and models. OER maps the relationships and inter-dependencies that connect those assets to improve impact analysis, promote and optimize their reuse, and measure their impact on the bottom line. It provides the visibility, feedback, controls, and analytics to keep your SOA on track to deliver business value. The intense focus on automation helps to overcome barriers to SOA adoption and streamline governance throughout the lifecycle. Core capabilities of OER include:
    - Asset Management
    - Asset Lifecycle Management
    - Usage Tracking
    - Service Discovery
    - Version Management
    - Dependency Analysis
    - Portfolio Management

    EAMS - OER Edition
    The solution takes the advantages and features of both products and combines them in a symbiotic tool that enhances the quality of SOA Governance initiatives and programs. EAMS is able to produce a vast number of outputs by combining its analytical engine, SOA-specific configurations and the assets in OER and other related tools, catalogs and repositories. The configurations encompass not only the extendable parametrization of the metadata but also fully configurable blueprints, PowerPoint reports, charts and queries.

    The SOA blueprints
    The solution comes with a set of predefined architectural representations that help the organization better perceive their SOA landscape. More blueprints can easily be created in order to accommodate the organization's needs in terms of detail, audience and metadata.

    Charts & Dashboards
    The solution encompasses a set of predefined charts and dashboards that promote a more agile way to control and explore the assets.

    Time Based Visualization
    All representations are time bound, and with EAMS - OER you can truly govern SOA with a complete view of the Past, Present and Future. The solution delivers Gap Analysis, a project-oriented approach that takes into consideration the As-Was, As-Is and To-Be. Time-based visualization differentiating factors:
    - Extensive automation and maintenance of architectural representations
    - Organization-wide solution
    - Easy access and navigation to and between all architectural artifacts and representations
    - Flexible meta-model, customization and extensibility capabilities
    - Lifecycle management and enforcement of the time dimension over all the repository content
    - Profile-based customization
    - Comprehensive visibility
    - Architectural alignment
    - Friendly and striking user interfaces

    For more information on EAMS visit us here. For more information on SOA visit us here.

    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: Link Consulting,OER,OSR,SOA Governance,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • TechEd North America 2012 – Day 1 #msTechEd

    - by Marco Russo (SQLBI)
    Yesterday Alberto and I delivered the PreCon day about BISM Tabular in Analysis Services 2012. We received very good feedback, and now I am looking forward to meeting people who read our blogs and our books! Ping me on Twitter at @marcorus if you want to contact me during the conference. This is my schedule for the next few days:

    · Monday, June 11, 2012
      o 10:30am-12:30pm: I will be in the Technical Learning Center area, at Breakthrough Insights (station #8) in the Database & Business Intelligence area (dedicated to SQL Server 2012)
      o I will try to watch some sessions in the afternoon
      o 6:30pm-7:00pm: I will be at the O'Reilly booth meeting book readers and doing some book signing

    · Tuesday, June 12, 2012
      o 12:30pm-3:30pm: I will be in the Technical Learning Center area, at Breakthrough Insights (station #8) in the Database & Business Intelligence area (dedicated to SQL Server 2012)
      o 5:00pm-6:15pm: I will attend Alberto's session DBI413 Many-to-Many Relationships in BISM Tabular (room S330E)
      o 6:15pm-9:00pm: Community Night & Ask the Experts, where we'll discuss Analysis Services, Tabular and Multidimensional!

    · Wednesday, June 13, 2012
      o 11:15am-11:30am: Don't miss this special demo session at the Private Cloud, Public Cloud and Data Platform Theater in the Technical Learning Center area (next to the SQL Server 2012 zone). Alberto and I will present Querying multi-billion rows with many-to-many relationships in SSAS Tabular (xVelocity), and you're invited to guess the response time of DAX queries on a 4-billion-row table with many-to-many relationships before we run them! We'll give away some 8GB USB keys if you guess the right answer!
      o 12:30pm-1:00pm: Alberto and I will have a book-signing session at the TechEd Bookstore
      o 3:00pm-5:00pm: I will be in the Technical Learning Center area, at Breakthrough Insights (station #8) in the Database & Business Intelligence area (dedicated to SQL Server 2012)

    · Thursday, June 14, 2012
      o 2:45pm-4:00pm: I will deliver my DBI319 BISM: Multidimensional vs. Tabular breakthrough session in room S320A. I expect many questions here!

    And if you want to learn more about Analysis Services Tabular, we announced two more online sessions of our SSAS Tabular Workshop:
    · July 2-3, 2012 - SSAS Workshop Online - America's time zone
    · September 3-4, 2012 - SSAS Workshop Online - America's time zone
    Register now if you are interested; the early bird for the July session expires on June 19, 2012! I will also deliver an SSAS Workshop in Oslo (Norway) on August 27-28, 2012.

    Read the article

  • Personal Financial Management – The need for resuscitation

    - by Salil Ravindran
    Until a year or so ago, PFM (Personal Financial Management) was the blue-eyed boy of every channel banking head. In an age when bank account portability is still fiction, PFM was expected to incentivise customers to switch banks. It still is, in some emerging economies, but if the state of PFM in mature markets is anything to go by, it is in a state of coma and badly requires resuscitation.

    Studies conducted over the past year show an alarming decline and stagnation in PFM usage in mature markets. A Sept 2012 report by Aite Group – Strategies for PFM Success – shows that 72% of users hadn't used PFM and, worse, 58% of them were not excited about using it. Of the rest who had used it, only half did so on a bank site. While there are multiple reasons for this lack of adoption, some are glaringly obvious. While pretty graphs and pie charts are important to provide a visual representation of my income and expenses, they are simply not enough to encourage me to return. Static representation of data without any insightful analysis does not help me. Budgeting and cash flow are important, but when I have an operative account, a couple of savings accounts, a mortgage loan and a couple of credit cards, help me understand what my affordability is in specific contexts rather than telling me I just busted my budget. Help me with the relative importance of each budget category so that I know it is fine to go over budget on books for my daughter as against going over budget on eating out. Budget overruns and spend analysis are post facto, and I am informed of my sins only when I return to online banking, and then only if I decide to come to the PFM area. Fundamentally, PFM should be a part of my banking engagement rather than an analysis tool. It should be contextual so that I can make insight-based decisions.

    So what can be done to resuscitate PFM?

    Amalgamation with banking activities - In most cases, PFM tools are integrated into online banking pages and they are like chapter 37 of a long story. PFM needs to be a way of banking rather than a tool. Available balances should shift to spendable balances. Budget- and goal-related insights should be integrated with transaction sessions to drive pre-event financial decisions.

    Personal Financial Guidance - Banks need to think at ground level and see if their PFM offering is really helping customers achieve self-actualisation. Banks need to recognise that most customers out there are not proficient at making the best use of their money. Customers return when they know that they are being guided rather than just informed about their finances. Integrating contextual financial offers and financial planning into PFM is one way ahead. Yet another way is to help customers tag unwanted spending, thereby encouraging sound savings habits.

    Mobile PFM - Most banks have left all those numbers in online banking. With access mostly having moved to devices and the success of apps, moving PFM onto devices will give it a much-needed shot in the arm. This is not only about presenting the same wine in a new bottle but also about leveraging the power of the device to push real-time notifications that shape pre-purchase decisions. The pursuit should be to analyse spend, budgets and financial goals in real time and push them pre-event onto the device. So next time, I should know that I have overrun my eating-out budget before walking into that burger joint, not after.

    Increase participation and collaboration - Peer group experiences and comments are valued above those offered by the bank. Integrating social media into PFM engagement will let customers share and solicit financial management experiences with their peer group. Peer comparisons help benchmark one's savings and spending habits against those of the peer group and increase stickiness.

    While mature markets have gone through this learning in some way over the last year, banks in maturing digital banking economies increasingly seem to be falling into the same trap. Best practice lies in profiling and segmenting customers, being where they are, and contextually guiding them to identify and achieve their financial goals. Banks could look at the likes of Simple and Movenbank to draw inspiration from.

    Read the article

  • Unable to Sign in to the Microsoft Online Services Signin application from Windows 7 client located behind ISA firewall

    - by Ravindra Pamidi
    A while ago I helped a customer troubleshoot an authentication problem with the Microsoft Online Services Sign In application. This customer was evaluating Microsoft BPOS (Business Productivity Online Services) and was having trouble using the single sign-on application behind an ISA 2004 firewall. The network structure is fairly simple, with a single Windows 2003 Active Directory domain and Windows 7 clients. On a successful logon to the Microsoft Online Services Sign In application, this application provides single sign-on functionality to all of the Microsoft online services in the BPOS package.

    Symptoms:
    When trying to sign in it fails with the error "The service is currently unavailable. Please try again later. If problems continue, contact your service administrator". If the ISA 2004 firewall is removed from the picture, the authentication succeeds.

    Troubleshooting:
    Enabled ISA Server firewall logging along with the Microsoft Network Monitor tool on the Windows 7 client while reproducing the issue. Analysis of the ISA Server firewall logs and the Network Monitor capture revealed that the Microsoft Online Services Sign In application, when sending its request to the ISA Server, does not send the domain credentials, and as a result ISA Server responds with an HTTP 407 Proxy Authentication Required error listing the supported authentication mechanisms. The application in question is expected to send the credentials of the domain user in response to this request. However, in this case it fails to send the logged-on user's domain credentials.

    A bit of research on the Internet revealed that the "Microsoft Online Services Sign In" application by default does not support outbound Internet proxy authentication. In order for it to send the logged-on user's domain credentials, we had to make changes to its configuration file "SignIn.exe.config", located under the "Program Files\Microsoft Online Services\Sign In" folder. Step-by-step details for configuring the file are documented on the Microsoft TechNet website:
    Configure your outbound authenticating proxy server - http://www.microsoft.com/online/help/en-us/helphowto/cc54100d-d149-45a9-8e96-f248ecb1b596.htm

    After the above problem was addressed we were still not able to use the "Microsoft Online Services Sign In" application, and it failed with the same error. Analysis of another network capture revealed that the application was now sending the required credentials but the connection seemed to terminate at a later stage. Enabled verbose logging for the "Microsoft Online Services Sign In" application and then reproduced the problem. Analysis of the logs revealed a time difference between the local client and the Microsoft Online Services server of around seven minutes, which is above the acceptable time skew of five minutes.

    Excerpt from the Microsoft Online Services Sign In application verbose log:
    1/26/2012 1:57:51 PM Verbose SingleSignOn.GetSSOGenericInterface SSO Interface URL: https://signinservice.apac.microsoftonline.com/ssoservice/UID
    1/26/2012 1:57:52 PM Exception SSOSignIn.SignIn The security timestamp is invalid because its creation time ('2012-01-26T08:34:52.767Z') is in the future. Current time is '2012-01-26T08:27:52.987Z' and allowed clock skew is '00:05:00'.
    1/26/2012 1:57:52 PM Exception SSOSignIn.SignIn

    Although the Windows 7 clients successfully synchronized time to the domain controller for the domain, the domain controller was not configured to synchronize time with external NTP servers. This caused a gradual drift in time on the network, resulting in the above issue. Reconfigured the domain controller holding the PDC FSMO role to synchronize time with an external time source (time.nist.gov) and edited the system policy on the ISA Server firewall to allow NTP traffic to time.nist.gov.
    Configure the time source for the forest: Windows Time Service - http://technet.microsoft.com/en-us/library/cc794937(WS.10).aspx
    Forced synchronization of Windows time using the command w32tm /resync on the domain controller, and later on the clients, each of which then corrected the seven-minute difference. This resolved the problem with logon to Microsoft Online Services Sign In.
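    For reference, the reconfiguration described above is usually done on the domain controller holding the PDC emulator role with the built-in w32tm utility. A typical command sequence, shown here only as an illustrative sketch using the time.nist.gov peer from this case, would be:

    rem Point the PDC emulator at the external time source and mark it reliable
    w32tm /config /manualpeerlist:"time.nist.gov" /syncfromflags:manual /reliable:yes /update

    rem Restart the Windows Time service so the new configuration takes effect
    net stop w32time && net start w32time

    rem Force an immediate synchronization (repeat w32tm /resync on the clients afterwards)
    w32tm /resync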

    Read the article

  • Retrofit Certification

    - by Bill Evjen
    Impact of Regulations on Cabin Systems Installation
    John Courtright, Structural Integrity Engineering
    - There is "heightened" FAA attention to technical issues related to IFE and Wi-Fi systems installations
    - The Aging Aircraft Safety Rule – EWIS & Damage Tolerance Analysis
    - The challenge: maximize flight safety while minimizing costs
    - Issue Papers & Testing, Testing, Testing
    - The role of Airworthiness Directives (ADs) in the design of many IFE systems and all antenna systems. The goal is safety AND cost-effective maintenance intervals and inspection techniques

    The STC Process Briefly Stated
    - Type Certifications (TC)
    - Supplemental Type Certifications (STC)

    The STC Process
    - Project Specific Certification Plan (PSCP), managed by the FAA Aircraft Certification Office (ACO)
    - Type of project (electrical/mechanical systems or structural)
    - Specific type of aircraft being modified
    - Schedule
    - Design & installation location

    What does the STC Plan (PSCP) Cover?
    - System description – What does the system do?
    - System qualification – Are the components qualified?
    - Certification requirements – What FARs are applicable?
    - Installation detail – What is being modified?
    - Prototype installation – What is new?
    - Functional Hazard Assessment (FHA) – Is it safe?
    - EZAP-EWIS requirements – Any aging aircraft issues?
    - Certification data – How is compliance achieved?
    - Delegation and FAA involvement – Who is doing the work?
    - Proposed certification schedule – When is the installation?
    - Certification documentation – What the FAA expects to see

    Cabin Systems Certification Concerns
    In addition to meeting the requirements of DO-160, cabin system certification needs to address issues related to:
    - Power management: generally, IFE and Wi-Fi systems are classified as "Non-Essential Equipment" from a certification viewpoint. They are connected to "non-essential" power buses and must be able to shed IFE & Wi-Fi systems in a smoke/fire event or other electrical emergency (FAA Policy 00-111-160).
    - The FAA is more relaxed with testing Wi-Fi. It used to be that you had to have 150 seats with laptops running Wi-Fi, but now it is down to around 50.
    - Aging aircraft concerns – electrical and structural
    - Issue papers addressing technical concerns involving "Structural Certification Criteria for Large Antenna Installations" and antenna "Vibration/Buffeting Compliance Criteria"

    DO-160: Environmental Test Procedures
    - DO-160 – "Environmental Conditions and Test Procedures for Airborne Equipment", issued by RTCA
    - Provides guidance to equipment manufacturers as to testing requirements: temperature (–40C to +55C), vibration and shock, contaminant susceptibility (fluids and dust), electro-magnetic interference
    - Cabin systems are generally classified as "non-essential"
    - Swissair 111 crashed (in part) due to non-standard wiring practices

    EWIS Design Implications
    Installation design must take EWIS requirements into account. This generally means:
    - Aircraft surveys are needed to identify proper wire routing
    - Ensure existing wiring diagrams are correct
    - Identify primary/secondary/tertiary bus locations
    - Verify proper separation of wire bundles exists
    - Required separation from the fuel quantity indicator system (FQIS) to prevent fuel tank ignition

    Enhanced Zonal Analysis Procedure (EZAP) Performed
    - EZAP was developed by the Aging Transport Systems Rulemaking Advisory Committee (ATSRAC)
    - EZAP is the method for analyzing airplane zones with an emphasis on evaluating wiring systems and the existence of combustibles in the cabin

    Certification Considerations for Wi-Fi Systems
    - Electrical – All existing DO-160 testing required; issue papers required; onboard EMI testing – any interference with aircraft systems when multiple Wi-Fi users are logged on?
    - Vibration/buffeting compliance criteria – What is the effect of the antenna on aircraft flight characteristics?
    - Structural certification criteria – What are the stress loads on the aircraft at the antenna location, and what is the impact on maintenance inspection criteria for the airline? Damage tolerance analysis required.
    - Goal – minimize maintenance inspection intervals

    Read the article

  • New Version 3.1 Endeca Information Discovery Now Available

    - by Mike.Hallett(at)Oracle-BI&EPM
    Business User Self-Service Data Mash-up Analysis and Discovery integrated with OBIEE 11g and Hadoop

    Oracle Endeca Information Discovery 3.1 (OEID) is a major release that incorporates significant new self-service discovery capabilities for business users, including agile data mashup, extended support for unstructured analytics, and an even tighter integration with Oracle BI.

    · Self-Service Data Mashup and Discovery Dashboards: business users can combine information from multiple sources, including their own uploaded spreadsheets, to conduct analysis on the complete set. Creating discovery dashboards has been made even easier by intuitive drag-and-drop layouts and wizard-based configuration. Business users can now build new discovery applications in minutes, without depending on IT.

    · Enhanced Integration with Oracle BI: OEID 3.1 enhances its native integration with Oracle Business Intelligence Foundation. Business users can now incorporate information from trusted BI warehouses, leveraging dimensions and attributes defined in Oracle's Common Enterprise Information Model, but evolve them based on the varying day-to-day demands and requirements that they personally manage.

    · Deep Unstructured Analysis: business users can gain new insights from a wide variety of enterprise and public sources, helping companies to build an actionable Big Data strategy. With OEID's long-standing differentiation in correlating unstructured information with structured data, business users can now perform their own text mining to identify hidden concepts, without having to request support from IT. They can augment these insights with best-in-class keyword search and pattern matching, all in the context of rich, interactive visualizations and analytic summaries.

    · Enterprise-Class Self-Service Discovery: OEID 3.1 enables IT to provide a powerful self-service platform to the business as part of a broader Business Analytics strategy, preserving the value of existing investments in data quality, governance, and security. Business users can take advantage of IT-curated information to drive discovery across high volumes and varieties of data, and share insights with colleagues at a moment's notice.

    · Harvest Content from the Web with the Endeca Web Acquisition Toolkit: Oracle now provides best-of-breed data access to website content through the Oracle Endeca Web Acquisition Toolkit. This provides an agile, graphical interface for developers to rapidly access and integrate any information exposed through a web front-end. Organizations can now cost-effectively include content from consumer sites, industry forums, government or supplier portals, cloud applications, and myriad other web sources as part of their overall strategy for data discovery and unstructured analytics.
    For more information:
    - OEID 3.1 OTN Software and Documentation Download (Endeca also available for download on the Software Delivery Cloud (eDelivery))
    - New OEID 3.1 Videos on YouTube
    - Oracle.com Endeca Site

    Read the article

  • Drill through table does not show correct count when used with a dimension having parent child hiera

    - by Arun Singhal
    Hi All, I have a dimension with a parent-child hierarchy, as shown in the code block below. The issue I am facing is that if I have a filter on the parent-child dimension, the drill-through table does not show the filtered data; instead it shows all the data for that dimension. Here is an example.

    <Dimension type="StandardDimension" name="page_type_d" caption="Page Type">
      <Hierarchy name="page_type_h" hasAll="true" allMemberName="all_page_types" allMemberCaption="All Page Types" primaryKey="id">
        <Table name="npg_page_type_view" alias="pt">
        </Table>
        <Level name="Page Type" column="id" nameColumn="display_name" parentColumn="parent_id" nullParentValue="0" type="Integer" uniqueMembers="true" levelType="Regular" hideMemberIf="Never" caption="Page Type">
          <Closure parentColumn="parent_id" childColumn="page_type_id">
            <Table name="dim_page_types_closure">
            </Table>
          </Closure>
        </Level>
      </Hierarchy>
    </Dimension>

    Now suppose I have 4 rows in the npg_page_type_view table:

    id     display_name    parent_id
    19     HTML            100
    20     PDF             100
    21     XML             0
    100    Total           0

    Now suppose in my fact table I have the following records:

    id     count
    19     2
    20     3
    21     1

    Following is my analysis view:

    Total (HTML and PDF) - 5
    HTML - 2
    PDF - 3
    XML - 1

    Now suppose I add a filter (say Total) on this analysis view using the OLAP cube. Then my analysis view shows the following:

    Total (HTML and PDF) - 5

    Up to this point everything works fine. Now if I click on 5 (to view the drill-through table), it shows me data for all page types, i.e. HTML, PDF and XML, but as per the filter it should show only HTML and PDF. Is it an existing issue, or am I doing something wrong here? Please help me.
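    For readers unfamiliar with Mondrian closure tables, the rows below are an illustrative guess at what dim_page_types_closure would contain for the hierarchy above (one ancestor/descendant pair per row, with each member also paired with itself); the poster's actual table may differ:

    parent_id    page_type_id
    100          100    (Total - itself)
    100          19     (Total - HTML)
    100          20     (Total - PDF)
    19           19     (HTML - itself)
    20           20     (PDF - itself)
    21           21     (XML - itself)

    With rows like these, an aggregate filtered on Total picks up HTML and PDF (hence the 5), so a drill-through that ignores the closure join and returns every fact row would be consistent with the behaviour described in the question.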

    Read the article

  • Catching specific vs. generic exceptions in c#

    - by Scott Vercuski
    This question comes from a code analysis run against an object I've created. The analysis says that I should catch a more specific exception type than just the basic Exception. Do you find yourself just catching the generic Exception, or attempting to catch specific exceptions and defaulting to the generic Exception using multiple catch blocks? One of the code chunks in question is below:

    internal static bool ClearFlags(string connectionString, Guid ID)
    {
        bool returnValue = false;
        SqlConnection dbEngine = new SqlConnection(connectionString);
        SqlCommand dbCmd = new SqlCommand("ClearFlags", dbEngine);
        SqlDataAdapter dataAdapter = new SqlDataAdapter(dbCmd);
        dbCmd.CommandType = CommandType.StoredProcedure;

        try
        {
            dbCmd.Parameters.AddWithValue("@ID", ID.ToString());
            dbEngine.Open();
            dbCmd.ExecuteNonQuery();
            dbEngine.Close();
            returnValue = true;
        }
        catch (Exception ex)
        {
            ErrorHandler(ex);
        }

        return returnValue;
    }

    Thank you for your advice.

    EDIT: Here is the warning from the code analysis:
    Warning 351 CA1031 : Microsoft.Design : Modify 'ClearFlags(string, Guid)' to catch a more specific exception than 'Exception' or rethrow the exception
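    As a hedged illustration rather than the original poster's code (the exception choices are assumptions about what these ADO.NET calls can throw, and ErrorHandler is the poster's existing method), one way to satisfy CA1031 here is to catch the specific types, SqlException for database errors and InvalidOperationException for connection-state problems, let anything else bubble up, and rely on using blocks for cleanup:

    // Hedged sketch only - assumes: using System; using System.Data; using System.Data.SqlClient;
    internal static bool ClearFlags(string connectionString, Guid id)
    {
        try
        {
            using (SqlConnection dbEngine = new SqlConnection(connectionString))
            using (SqlCommand dbCmd = new SqlCommand("ClearFlags", dbEngine))
            {
                dbCmd.CommandType = CommandType.StoredProcedure;
                dbCmd.Parameters.AddWithValue("@ID", id.ToString());
                dbEngine.Open();           // can throw SqlException or InvalidOperationException
                dbCmd.ExecuteNonQuery();   // can throw SqlException
                return true;
            }
        }
        catch (SqlException ex)                // database-side failures (login, timeout, procedure error)
        {
            ErrorHandler(ex);
        }
        catch (InvalidOperationException ex)   // bad connection state, e.g. missing connection string
        {
            ErrorHandler(ex);
        }
        return false;
    }

    The unused SqlDataAdapter from the original snippet is dropped here; whether to catch InvalidOperationException at all, or let it surface as a genuine bug, is a judgment call.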

    Read the article

  • Rugged Netbook and software options for Running R statistical software?

    - by Thomas
    I am interested in purchasing a netbook to do field research in another country.

    My hardware specifications for the netbook:
    - Be rugged enough to survive a bit of wear and tear
    - Fairly fast processing (the ability to upgrade from 1GB of RAM to 2GB)
    - A battery life of longer than 6 hours
    - At least a 10 inch screen
    - A decent camera for Skyping

    My software needs:
    - Be able to run a spreadsheet program to do basic data input (like Excel or Open Office)
    - Use the open-source statistical program R to do basic data analysis (regression, data analysis, etc.)
    - Word processing (Word or Open Office)

    Do you have any suggestions on which models or brands might fit my needs? Some of the models I was considering:
    - Samsung NB-30
    - Toshiba NB 305
    - Asus Eee PC 1005HA
    - Lenovo S10-2

    Also, any recommendations on the best software to load on the aforementioned netbooks? I saw this resource from livehacker. Any comments? Thanks for your help! -Thomas

    Read the article

< Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >