Search Results

Search found 333 results on 14 pages for 'historical'.

Page 9/14

  • custom DB logging using enterprise library 4.1

    - by Rohit
    We have to create a historical log of all changed entities, and we have defined our own custom tables for this purpose. I have to incorporate these tables into the Enterprise Library Logging block and do the logging into them, and I need to write a stored procedure to insert values into these tables. So far, what I have found from searching is that I have to create a listener inheriting from CustomTraceListener and provide my own implementation of WriteMessage. What I need to know is: how do I plug my tables and stored procedure into the Enterprise Library Logging block?

    Read the article

  • Where can I get free real-time stock data?

    - by Jared
    Does anyone know of a way to obtain free real-time stock data, or near real-time stock data? I'd like to do this because I'm interested in the financial market, not for use in investment applications, which is why I'm looking for something free. I've tried the Perl module Finance::YahooQuote, but some of the fields, such as last trade time, appear to be broken. I've looked at the historical data, but it doesn't fit my needs, since I'd like to monitor the movements of the markets and stocks during the trading day, not just the open and close. I've also looked at http://www.opentick.com, but they aren’t accepting new accounts and I can't find a timeline for when their network upgrade will be complete.
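
    One way to sanity-check the data yourself is sketched below in Python: it hits the Yahoo Finance CSV quote endpoint that Finance::YahooQuote wraps. Treat the URL and the f= field codes purely as an illustration of the approach; the feed was free but delayed, and it has since been retired, so the exact endpoint and field list are assumptions here.

        # Rough Python 2 sketch (urllib2): fetch delayed quotes from the old Yahoo
        # Finance CSV endpoint, the same service Finance::YahooQuote wraps.
        # Field codes (s=symbol, l1=last trade price, d1=last trade date,
        # t1=last trade time) are era-specific and may no longer be available.
        import csv
        import urllib2

        def quotes(symbols):
            url = ('http://download.finance.yahoo.com/d/quotes.csv?s=%s&f=sl1d1t1'
                   % '+'.join(symbols))
            for row in csv.reader(urllib2.urlopen(url)):
                symbol, last, date, time = row
                yield symbol, last, date, time

        for q in quotes(['MSFT', 'GOOG']):
            print q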

    Read the article

  • Import module stored in a cStringIO data structure vs. physical disk file

    - by Malcolm
    Is there a way to import a Python module stored in a cStringIO data structure, as opposed to a physical disk file? It looks like "imp.load_compiled(name, pathname[, file])" is what I need, but the description of this method (and similar methods) has the following disclaimer: "The file argument is the byte-compiled code file, open for reading in binary mode, from the beginning. It must currently be a real file object, not a user-defined class emulating a file." [1] I tried using a cStringIO object instead of a real file object, but the help documentation is correct - only a real file object can be used. Any ideas on why these modules would impose such a restriction, or is this just a historical artifact? Are there any techniques I can use to avoid this physical file requirement? Thanks, Malcolm [1] http://docs.python.org/library/imp.html#imp.load_module
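
    One workaround, shown below as a rough sketch (not the documented imp API for byte-compiled files): skip the file object entirely by creating the module yourself with imp.new_module and exec'ing the in-memory source into its namespace. The function name and the sample source are invented for illustration; for byte-compiled data you would first recover the code object with marshal instead of compile.

        # Python 2 sketch: build a module from source held in memory, no disk file.
        import imp
        import sys

        def load_module_from_string(name, source):
            module = imp.new_module(name)            # empty module object, no file behind it
            code = compile(source, '<in-memory>', 'exec')
            exec code in module.__dict__             # populate the module's namespace
            sys.modules[name] = module               # so later imports find it
            return module

        mod = load_module_from_string('dynamic_mod', 'def greet():\n    return "hello"\n')
        print mod.greet()                            # -> hello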

    Read the article

  • nicely display file rename history in git log

    - by Jian
    The git command git log --format='%H' --follow -- foo.txt will give you the series of commits that touch foo.txt, following it across renames. I'm wondering if there's a git log command that will also print the corresponding historical file name beside each commit. It would be something like this, where we can interpret '%F' to be the (actually non-existent) placeholder for filename. git log --format='%H %F' --follow -- foo.txt I know this could be accomplished with git log --format='%H' --follow --numstat -- foo.txt but the output is not ideal since it requires some non-trivial parsing; each commit is strewn across multiple lines, and you'll still need to parse the file rename syntax ("bar.txt => foo.txt") to find what you're looking for.
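
    For the parsing, one workable approach (a rough Python sketch, not a built-in git placeholder) is to lean on --name-status instead of --numstat and pair each commit hash with the last path field printed for it, which is the name the file had as of that commit:

        # Rough sketch: pair each commit with the file's historical name by parsing
        # `git log --follow --name-status`. Assumes the usual name-status layout:
        # a status letter (or R<score>/C<score>), a tab, then one or two paths.
        import re
        import subprocess

        def history_with_names(path):
            out = subprocess.check_output(
                ['git', 'log', '--format=%H', '--follow', '--name-status', '--', path])
            commit = None
            for line in out.decode().splitlines():
                if re.match(r'^[0-9a-f]{40}$', line):
                    commit = line                    # a commit hash line
                elif line and commit:
                    fields = line.split('\t')        # e.g. ['M', 'foo.txt'] or ['R100', 'bar.txt', 'foo.txt']
                    yield commit, fields[-1]         # last field = path as of this commit

        for sha, name in history_with_names('foo.txt'):
            print(sha, name)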

    Read the article

  • Should old/legacy/unused code be deleted from source control repository?

    - by Checkers
    I've encountered this in multiple projects. As the code base evolves, some libraries, applications, and components get abandoned and/or deprecated. Most people prefer to keep them in. The usual argument is that the code does not really take any space, and it can be left alone until needed again. So a repository slowly turns into a cesspool of legacy code where it's hard to find anything. Some people delete old code, since it creates clutter and raises more questions for new people, and you can restore any old snapshot of the code base anyway. However, you can't always find the old code if you don't know where to look, as none of the (common) VCSs I know offers search over the entire repository including all historical revisions, and the only way to search the old files is to check out the revision where the deleted file exists. What would be a good approach to repository management?

    Read the article

  • Bug Tracker - Feature set - Comparison

    - by Blankman
    Hi, I have installed and played around with a few bug tracking apps, and just wanted to get some feedback on the features that a typical (or not so typical) bug tracking package has. I want to make sure I am not overlooking a feature that might come in handy. So the biggest idea is around a bug, which is associated with:

    Bugs
        - product
        - product version
        - component (specific component of the product)
        - status
        - priority
        - assigned to
        - expected time to finish
        - time spent on it so far
        - created by
        - historical log of the bug
    Forums
    Wiki

    I guess the above functionality is common to all, so what exactly differentiates them? Is it simply the UI and filtering/searching that you base a decision on? (I am comparing something like Jira with FogBugz.)

    Read the article

  • What's the proper way of importing option lists into an Android app?

    - by Scott
    I have been storing option lists for my Android app in a cloud table. For example, categories like "historical fiction", "biography", "science fiction", etc. I see the following pros and cons:

    Pros:
        - I can make changes to the list without sending an app update to Google Play
        - Not normalized - I can use the text in my other data tables instead of a reference ID
    Cons:
        - The app needs to take time to download the list from the web each time (or at least check for changes)
        - English only

    I believe the "proper" way to do this is to use the XML resource files. But I need to make sure the selection references correctly with my data. That is, my app needs to understand that "Poetry" and "Poesía" are the same thing. Is the correct thing to do:

    1. Forget about it, since I'll never get to the point where I'm translating my app anyway
    2. Use a string-array and use the index (0...x) to know what the selection is
    3. Use a 2-dimensional string-array with a reference ID in the first column and the text in the second?

    Read the article

  • Getting list of all existing vtables.

    - by Patrick
    In my application I have quite a few void pointers (for historical reasons; the application was originally written in pure C). In one of my modules I know that the void pointers point to instances of classes that could inherit from a known base class, but I cannot be 100% sure of it. Therefore, doing a dynamic_cast on the void pointer might give problems. Possibly, the void pointer even points to a plain struct (so no vptr in the struct). I would like to investigate the first 4 bytes of the memory the void pointer is pointing to, to see if this is the address of a valid vtable. I know this is platform-specific, maybe even compiler-version-specific, but it could help me move the application forward and get rid of all the void pointers over a limited time period (let's say 3 years). Is there a way to get a list of all vtables in the application, or a way to check whether a pointer points to a valid vtable, and whether the instance pointing to that vtable inherits from a known base class?

    Read the article

  • .NET Neural Network or AI for Future Predictions

    - by Ian
    Hi all. I am looking for some kind of intelligent (I was thinking AI or neural network) library that I can feed a list of historical data, and which will predict the next sequence of outputs. As an example, I would like to feed the library the figures 1, 2, 3, 4, 5, and based on this it should predict that the next values are 6, 7, 8, 9, 10, etc. The inputs will be a lot more complex and contain much more information. This will be used in a C# application. If you have any recommendations or warnings, that would be great. Thanks
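
    For intuition about what such a library does with an input like 1, 2, 3, 4, 5, here is a toy sketch (in Python only for brevity; the same few lines port directly to C#): fit a straight line to the history by least squares and extrapolate. Neural network and time-series libraries generalize this idea to non-linear patterns and richer inputs; this is not a recommendation of any specific .NET package.

        # Toy illustration of "predict the next values from history": least-squares
        # line fit plus extrapolation.
        def fit_line(history):
            n = len(history)
            xs = range(n)
            mean_x = sum(xs) / float(n)
            mean_y = sum(history) / float(n)
            slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
                     / sum((x - mean_x) ** 2 for x in xs))
            return slope, mean_y - slope * mean_x    # slope, intercept

        def predict(history, steps):
            slope, intercept = fit_line(history)
            n = len(history)
            return [slope * (n + i) + intercept for i in range(steps)]

        print(predict([1, 2, 3, 4, 5], 5))           # -> [6.0, 7.0, 8.0, 9.0, 10.0]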

    Read the article

  • What are the repercussions of not checking existing data when adding a foreign key?

    - by scottm
    I've inherited a database that doesn't exactly strive for data integrity. I am trying to add some foreign keys to change that, but there is data in some tables that doesn't fit the constraints. Most likely the data won't be used again, so I want to know what problems I might face by leaving it there. The other option I see is to move it into some kind of table without referential constraints, just for historical purposes. So, what are the repercussions of not checking existing data? If I create a foreign key constraint on a table and don't check existing data, will the constraint still be enforced for all new data inserted into the table?

    Read the article

  • Are there any well-known algorithms or computer models that computer scientists use to predict FIFA

    - by Khnle
    Occasionally I read news articles that mention computer models that computer scientists use to predict the winners of sporting events, or the odds for betting, and I think there must be a mathematical model behind them. I never bothered to think twice about it, even though I am a "pseudo computer scientist" myself. With the 2010 FIFA World Cup just underway, and since I am also a "pseudo football/soccer player" myself, I just started to wonder about these algorithms. For example, I know one factor is the strength of the opponents, so that a win against a strong opponent counts for more than a win against a weak one. But this quickly becomes circular: how does one determine the strength of a team in the first place, before that team can be considered strong or weak? If it's based on historical data, then there's no way that could be accurate, because the players of the past are no longer on the field, so their impact is nil (except maybe if they become coaches, like Maradona). Anyway, long question short: if you happen to be working in this field or have some knowledge of it, please shed some light.
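
    One family of models that sidesteps the circular-strength problem is a rating system in the Elo style (originally for chess, later adapted for football, e.g. the World Football Elo Ratings): every team starts from the same baseline, so strength needs no prior definition; it emerges match by match as ratings are updated from the gap between expected and actual results, and the influence of old squads fades as new results accumulate. A minimal sketch of the idea, not a specific published World Cup model:

        # Minimal Elo-style rating sketch. Ratings start equal and are nudged
        # after each match by how surprising the result was.
        def expected_score(rating_a, rating_b):
            # probability-like expectation that A beats B
            return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

        def update(rating_a, rating_b, score_a, k=30):
            # score_a: 1 = A wins, 0.5 = draw, 0 = A loses
            ea = expected_score(rating_a, rating_b)
            return (rating_a + k * (score_a - ea),
                    rating_b + k * ((1 - score_a) - (1 - ea)))

        ratings = {'Brazil': 1500.0, 'Spain': 1500.0}          # common starting point
        ratings['Brazil'], ratings['Spain'] = update(ratings['Brazil'], ratings['Spain'], 1)
        print(ratings)   # the winner gains more when the result was less expected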

    Read the article

  • Why did the C# designers attach three different meanings to the 'using' keyword?

    - by gWiz
    The using keyword has three disparate meanings: type/namespace aliasing, namespace import, and syntactic sugar for ensuring Dispose is called. The documentation calls the first two uses directives (which I'm guessing means they are preprocessing in nature), while the last is a statement. Regardless of the fact that they are distinguished by their syntaxes, why would the language developers complicate the semantics of the keyword by attaching three different meanings to it? For example (disclaimer: off the top of my head, there may certainly be better examples), why not add keywords like alias and import? Technical, theoretical, or historical reasons? Keyword quota? ;-) Contrived sample:

        import System.Timers;
        alias LiteTimer = System.Threading.Timer;
        alias WinForms = System.Windows.Forms;

        public class Sample
        {
            public void Action()
            {
                var elapsed = false;
                using (var t = new LiteTimer(_ => elapsed = true))
                {
                    while (!elapsed) CallSomeFinickyApi();
                }
            }
        }

    "Using" is such a vague word.

    Read the article

  • What are the advantages of squashing assignment and error checking in one line?

    - by avakar
    This question is inspired by this question, which features the following code snippet:

        int s;
        if((s = foo()) == ERROR)
            print_error();

    I find this style hard to read and prone to error (as the original question demonstrates -- it was prompted by missing parentheses around the assignment). I would instead write the following, which is actually shorter in terms of characters:

        int s = foo();
        if(s == ERROR)
            print_error();

    This is not the first time I've seen this idiom though, and I'm guessing there are reasons (perhaps historical) for it being so often used. What are those reasons?

    Read the article

  • How does linq decide between inner & outer joins

    - by user287795
    Hi. Usually LINQ uses a left outer join for its queries, but in some cases it decides to use an inner join instead. I have a situation where that decision produces wrong results, since the second table doesn't always have suitable records, and that removes the records from the first table. I'm using a LinqDataSource over a DBML model where the relevant tables are identical, but one holds historical records removed from the first; both have the same primary key. I'm also using a DataLoadOptions to load both tables at once without round trips. Would you explain why LINQ decided to use an inner join here? Thanks

    Read the article

  • When is it good to use FTP?

    - by Tom Duckering
    In my experience I see a lot of architecture diagrams which make extensive use of FTP as a medium for linking architectural components. As someone who doesn't make architectural decisions but tends to look at architecture diagrams, could anyone explain what the value is of using FTP, where it's appropriate, and when transferring data as files is a good idea? I get that there are often legacy systems that just need to work that way, although any historical insight would be interesting too. I can see the attraction of transferring files (especially if that's what needs to be transferred) because of the simplicity and familiarity, and I wonder if the reasoning goes beyond this.

    Read the article

  • why number 9 in kill -9 command in unix?

    - by Alby
    I understand this is off topic; I couldn't find the answer anywhere online, and I was thinking maybe the programming gurus in the community might know. I usually use kill -9 pid to kill a job, and I have always wondered about the origin of the 9. I looked it up online, and it says "9 Means KILL signal that is not catchable or ignorable. In other words it would signal process (some running application) to quit immediately" (source: http://wiki.answers.com/Q/What_does_kill_-9_do_in_unix_in_its_entirety). But why 9? And what about the other numbers? Is there any historical significance, or is it because of the architecture of Unix? Thanks!
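
    For the "what about the other numbers" part: the 9 is simply the number the kernel's signal table assigns to SIGKILL, alongside entries like SIGHUP=1, SIGINT=2 and SIGTERM=15 (the default signal a plain kill sends). A quick way to see the mapping on your own machine is the small Python sketch below; the exact numbers are platform-defined, and the values shown are typical for Linux.

        # Print a few signal name -> number mappings from the local platform.
        import signal
        for name in ('SIGHUP', 'SIGINT', 'SIGKILL', 'SIGTERM'):
            print(name, int(getattr(signal, name)))
        # typical output: SIGHUP 1 / SIGINT 2 / SIGKILL 9 / SIGTERM 15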

    Read the article

  • ruby on rails w/ SQLServer

    - by jaydel
    I've heard from some people that RoR doesn't marry cleanly with SQL Server. For a series of historical reasons we have standardized on SQL Server, but if we can push back with valid reasons we can move to another DB. One person on the team wants MySQL and another wants Postgres, etc. I'm trying to stay out of the religious wars and really understand what the pain point is with SQL Server. We're running the app server on a Linux box, the database will be on a Windows box, and the SQL Server we're supposed to standardize on is 2008, if those details help any... thanks in advance!

    Read the article

  • Guidance and Pricing for MSDN 2010

    - by John Alexander
    Sorry for the rather lengthy post here. I get asked this all the time, so I decided to post it… Visual Studio 2010 editions will be available on April 12, 2010.

    The editions compared (per-edition values below are listed in this column order): Professional with MSDN Essentials, Professional with MSDN, Premium with MSDN, Ultimate with MSDN, Test Professional with MSDN.

    Product Features
    - Debugging and Diagnostics: IntelliTrace (Historical Debugger), Static Code Analysis, Code Metrics, Profiling, Debugger
    - Testing Tools: Unit Testing, Code Coverage, Test Impact Analysis, Coded UI Test, Web Performance Testing, Load Testing (1), Microsoft Test Manager 2010, Test Case Management (2), Manual Test Execution, Fast-Forward for Manual Testing, Lab Management Configuration (3)
    - Integrated Development Environment: Multiple Monitor Support, Multi-Targeting, One Click Web Deployment, JavaScript and jQuery Support, Extensible WPF-Based Environment
    - Database Development: Database Deployment, Database Change Management (2), Database Unit Testing, Database Test Data Generation, Data Access
    - Development Platform Support: Windows Development, Web Development, Office and SharePoint Development, Cloud Development, Customizable Development Experience
    - Architecture and Modeling: Architecture Explorer, UML® 2.0 Compliant Diagrams (Activity, Use Case, Sequence, Class, Component), Layer Diagram and Dependency Validation, Read-only diagrams (UML, Layer, DGML Graphs)
    - Lab Management: Virtual environment setup & tear down (3), Provision environment from template (3), Checkpoint environment (3)
    - Team Foundation Server: Version Control (2), Work Item Tracking (2), Build Automation (2), Team Portal (2), Reporting & Business Intelligence (2), Agile Planning Workbook (2), Microsoft Visual Studio Team Explorer 2010, Test Case Management (2)

    MSDN Subscription – Software and Services for Production Use
    - Windows Azure Platform: 20 hrs/mo †, 50 hrs/mo †, 100 hrs/mo †, 250 hrs/mo †, n/a
    - Microsoft Visual Studio Team Foundation Server 2010
    - Microsoft Visual Studio Team Foundation Server 2010 CAL: 1, 1, 1, 1
    - Microsoft Expression Studio 3
    - Microsoft Office Professional Plus 2010, Project Professional 2010, Visio Premium 2010 (following Office 2010 launch)

    MSDN Subscription – Software for Development and Testing (4)
    - Windows 7, Windows Server 2008 R2 and SQL Server 2008
    - Toolkits, Software Development Kits, Driver Development Kits
    - Previous versions of Windows (client and server operating systems)
    - Previous versions of Microsoft SQL Server
    - Microsoft Office
    - Microsoft Dynamics
    - All other Servers
    - Windows Embedded operating systems
    - Teamprise

    MSDN Subscription – Other Benefits
    - Technical support incidents: 0, 2, 4, 4, 2
    - Priority support in MSDN Forums
    - Microsoft e-learning collections (typically 10 courses or 20 hours): 0, 1, 2, 2, 1
    - MSDN Flash newsletter
    - MSDN Online Concierge
    - MSDN Magazine

    Buy from (MSRP): $799, $1,199, $5,469, $11,899, $2,169
    Renew from (MSRP): $549 (upgrade), $799, $2,299, $3,799, $899

    † Availability varies by country and subscription level. Details available on the MSDN site.
    1. May require one or more Microsoft Visual Studio Load Test Virtual User Pack 2010
    2. Requires Team Foundation Server and a Team Foundation Server CAL
    3. Requires Microsoft Visual Studio Lab Management 2010
    4. Per-user license allows unlimited installations and use for designing, developing, testing, and demonstrating applications.
UML is a registered trademark of Object Management Group, Inc. Windows is either a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries.

    Read the article

  • Web application / Domain model integration using JSON capable DTOs [on hold]

    - by g-makulik
    I'm a bit confused about the architectural choices in the web-application/Java/Python world. In the C/C++ world the available (open source) choices for implementing web applications are pretty much zero; with Java or Python the choices explode into a hard-to-sort-out mess of available 'frameworks' and application approaches. I want to sort out a clean MVC model, where the M stands for a fully blown (POCO/POJO-driven) domain model (in the sense of M. Fowler's EAA patterns), implemented in a mature OO language (Java, C++).

    The background: I have a system with certain hardware components (which introduce system-immanent active behavior) and a configuration database for system metadata and HW-component configuration data (these are usually even self-contained, since the HW components are capable of persisting their configuration data anyway). For the configuration/status data exchange protocol with the HW components we have chosen the Google Protobuf format, which works well for the directly wired communication with these components. This protocol is already used successfully by a Java-based GUI application connected via TCP/IP to the main system-controlling HW component. That application has some drawbacks and design flaws for historical reasons.

    Now we want to develop an abstract model (domain model) for configuring and monitoring those HW components that represents a more use-case-oriented view of the overall system behavior. I have the feeling that a plain Java class model would fit best for this (a C++ implementation seems to carry too much implementation/integration overhead for viable language-bridge interfaces). Google Protobuf message definitions could still serve well to describe the DTO objects used to interact with a domain model API, but integrating Google Protobuf messages client side, e.g. for data binding in the current view, doesn't seem to be a good choice. I'm thinking about some extra serialization features, e.g. JSON-based data exchange with the views/controllers. Most lightweight solutions seem to involve a Python-based presentation layer using JSON-based data transfer (though I'm not sure I'm fully informed about this). Is there a lightweight framework available (applicable to a limited ARM Linux platform) that supports such an architecture for a web application?

    UPDATE: Based on my recent research and comments from colleagues, I've noticed that using Java (and some JVM) might not be the preferable choice for integration with Python on a limited Linux system like ours (running on ARM9 with hard-to-discuss memory and MCU costs), but C/C++ modules would do well for this (since that is the native interface for Python extensions, isn't it?). I can imagine providing the domain model from an appropriate C/C++ API (though I still think doing this in those languages means more effort and higher skill requirements for the developers involved). I'm still searching for a good approach that supports such an architecture. I'll appreciate any pointers!
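
    As one data point for the "lightweight Python presentation layer speaking JSON" option: on a small ARM Linux box you can get surprisingly far with just the standard library before committing to any framework. The sketch below is only an illustration under that assumption; the endpoint name and the DTO content are invented, and in a real system the DTO would be built from the domain model / Protobuf messages.

        # Minimal WSGI app, standard library only: one JSON endpoint that would
        # serve DTOs produced by the domain model (hard-coded here for illustration).
        import json
        from wsgiref.simple_server import make_server

        def app(environ, start_response):
            if environ['PATH_INFO'] == '/components':
                body = json.dumps(
                    [{'id': 1, 'name': 'controller', 'status': 'ok'}]).encode('utf-8')
                start_response('200 OK', [('Content-Type', 'application/json')])
                return [body]
            start_response('404 Not Found', [('Content-Type', 'text/plain')])
            return [b'not found']

        if __name__ == '__main__':
            make_server('', 8080, app).serve_forever()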

    Read the article

  • What's new in VS.10 & TFS.10?

    - by johndoucette
    Getting my geek on… I have decided to call the products VS.10 (Visual Studio 2010), TP.10 (Test Professional 2010), and TFS.10 (Team Foundation Server 2010). Thanks, Neno Loje. What's new in Visual Studio & Team Foundation Server 2010? Focusing on the Visual Studio Team System (VSTS) ALM-related parts:

    Visual Studio Ultimate 2010
    - NEW: IntelliTrace® (aka the historical debugger)
    - NEW: Architecture Tools
        - New project type: Modeling Project
        - UML diagrams: Use Case, Class, Sequence (supports reverse engineering), Activity, Component
        - Layer Diagram (with Team Build integration for layer validation)
        - Architecture Explorer
        - Dependency visualization
        - DGML
    - Web & Load Tests

    Visual Studio Premium 2010
    - NEW: Architecture Tools: read-only model viewer
    - Development Tools
        - Code Analysis: new rules like SQL Injection detection, rule sets
        - Code Profiler: multi-tier profiling, JScript profiling, profiling applications on virtual machines in sampling mode
        - Code Metrics
    - Test Tools
        - Code Coverage
        - NEW: Test Impact Analysis
        - NEW: Coded UI Test
    - Database Tools (DB schema versioning & deployment)

    Visual Studio Professional 2010
    - Debugger
        - Mixed-mode debugging for 64-bit applications
        - Export/import of breakpoints and data tips

    Visual Studio Test Professional 2010
    - Microsoft Test Manager (MTM, formerly known as "Camano")
    - Fast Forward Testing

    Visual Studio Team Foundation Server 2010
    - Work Item Tracking and Project Management
        - New MSF templates for Agile and CMMI (v5.0)
        - Hierarchical work items
        - Custom work item link types
        - Ready-to-use Excel agile project management workbooks for managing your backlogs (including capacity planning)
        - Convert a work item query to an Excel report
        - MS Excel integration: support for work item hierarchies; formatting is preserved after doing a 'Refresh'
        - MS Project integration: hierarchy and successor/predecessor info is now synchronized
        - NEW: Test Case Management
    - Version Control
        - Public workspaces
        - Branch & merge visualization
        - Tracking of changesets & work items
        - Gated check-in
    - Team Build
        - Build controllers and agents
        - Workflow 4-based build process
    - NEW: Lab Management (only a pre-release is available at the moment!)
    - Project Portal & Reporting
        - Dashboards (on SharePoint Portal)
        - Burndown chart
        - TFS Web Parts (to show data from TFS)
    - Administration & Operations
        - Topology enhancements: application tier network load balancing (NLB), SQL Server scale-out, improved SharePoint flexibility, Report Server flexibility, zone support, Kerberos support, separation of TFS and SQL administration
        - Setup: separate install from configure, improved installation wizards, optional components, simplified account requirements, improved Reporting Services configuration, setup consolidation, upgrading from previous TFS versions, improved IIS flexibility
        - Administration: consolidation of command-line tools, user rename support
        - Project Collections: archive/restore individual project collections, move Team Project Collections, server consolidation, Team Project Collection split, Team Project Collection isolation, server request cancellation
    - Licensing: TFS server license included in MSDN subscriptions

    Removed features (former features not part of Visual Studio 2010):
    - Debug » Start With Application Verifier
    - Object Test Bench
    - IntelliSense for C++/CLI
    - Debugging support for SQL 2000

    Read the article

  • SQL Server 2008 Compression

    - by Peter Larsson
    Hi! Today I am going to talk about compression in SQL Server 2008. The data warehouse I currently design and develop holds historical data back to 1973. The data warehouse will get another blog post later due to its complexity. However, the server has 60 GB of memory (of which 48 is dedicated to the SQL Server service), so all the data did not fit in memory, and the SAN is not the fastest one around. So I decided to give compression a go, since we use Enterprise Edition anyway. This is the code I use to compress all tables with PAGE compression:

        DECLARE @SQL VARCHAR(MAX)

        DECLARE curTables CURSOR FOR
            SELECT 'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                   + '.' + QUOTENAME(OBJECT_NAME(object_id))
                   + ' REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)'
            FROM   sys.tables

        OPEN curTables

        FETCH NEXT
        FROM  curTables
        INTO  @SQL

        WHILE @@FETCH_STATUS = 0
            BEGIN
                IF @SQL IS NOT NULL
                    RAISERROR(@SQL, 10, 1) WITH NOWAIT

                FETCH NEXT
                FROM  curTables
                INTO  @SQL
            END

        CLOSE      curTables
        DEALLOCATE curTables

    Copy and paste the result to a new code window and execute the statements. One thing I noticed when doing this is that the database grows by the same size as the table. If the database cannot grow by this size, the operation fails. For me, I first ended up with an orphaned connection. Not good. And this is the code I use to create the index compression statements:

        DECLARE @SQL VARCHAR(MAX)

        DECLARE curIndexes CURSOR FOR
            SELECT 'ALTER INDEX ' + QUOTENAME(name)
                   + ' ON '
                   + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                   + '.'
                   + QUOTENAME(OBJECT_NAME(object_id))
                   + ' REBUILD PARTITION = ALL WITH (FILLFACTOR = 100, DATA_COMPRESSION = PAGE)'
            FROM   sys.indexes
            WHERE  OBJECTPROPERTY(object_id, 'IsMSShipped') = 0
                   AND OBJECTPROPERTY(object_id, 'IsTable') = 1
            ORDER BY CASE type_desc
                         WHEN 'CLUSTERED' THEN 1
                         ELSE 2
                     END

        OPEN curIndexes

        FETCH NEXT
        FROM  curIndexes
        INTO  @SQL

        WHILE @@FETCH_STATUS = 0
            BEGIN
                IF @SQL IS NOT NULL
                    RAISERROR(@SQL, 10, 1) WITH NOWAIT

                FETCH NEXT
                FROM  curIndexes
                INTO  @SQL
            END

        CLOSE      curIndexes
        DEALLOCATE curIndexes

    When this was done, I noticed that the 90 GB database was now only 17 GB. And most importantly, the complete database could now reside in memory! After this I took care of the administrative tasks, backups. Here I copied the code from Management Studio because I didn't want to spend too much time on this. The code looks like this (notice the compression option):

        BACKUP DATABASE [Yoda]
        TO   DISK = N'D:\Fileshare\Backup\Yoda.bak'
        WITH NOFORMAT,
             INIT,
             NAME = N'Yoda - Full Database Backup',
             SKIP,
             NOREWIND,
             NOUNLOAD,
             COMPRESSION,
             STATS = 10,
             CHECKSUM
        GO

        DECLARE @BackupSetID INT

        SELECT @BackupSetID = Position
        FROM   msdb..backupset
        WHERE  database_name = N'Yoda'
               AND backup_set_id = (SELECT MAX(backup_set_id) FROM msdb..backupset WHERE database_name = N'Yoda')

        IF @BackupSetID IS NULL
            RAISERROR(N'Verify failed. Backup information for database ''Yoda'' not found.', 16, 1)

        RESTORE VERIFYONLY
        FROM DISK = N'D:\Fileshare\Backup\Yoda.bak'
        WITH FILE = @BackupSetID,
             NOUNLOAD,
             NOREWIND
        GO

    After running the backup, the file size was even further reduced due to the zip-like compression algorithm used in SQL Server 2008. The file size? Only 9 GB. //Peso

    Read the article

  • Enterprise MDM: Rationalizing Reference Data in a Fast Changing Environment

    - by Mala Narasimharajan
    By Rahul Kamath

    Enterprises must move at a rapid pace to establish and retain global market leadership by continuously focusing on operational efficiency, customer intimacy and relentless execution.

    Reference Data Management

    As multi-national companies with a presence in multiple industry categories, market segments, and geographies, their ability to proactively manage changes and harness them to align their front office with back-office operations and performance management initiatives is critical to make the proverbial elephant dance. Managing reference data including types and codes, business taxonomies, complex relationships as well as mappings represents a key component of the broader agenda for enabling flexibility and agility, without sacrificing enterprise-level consistency, regulatory compliance and control.

    Financial Transformation

    Periodically, companies find that processes implemented a decade or more ago no longer mirror the way of doing business and seek to proactively transform how they operate their business and underlying processes. Financial transformation often begins with the redesign of one’s chart of accounts. The ability to model and redesign one’s chart of accounts collaboratively, quickly validate against historical transaction bases and secure business buy-in across multiple line-of-business stakeholders, while continuing to manage changes within the legacy general ledger systems and downstream analytical applications while piloting the in-flight transformation, can mean the difference between controlled success and project failure.

    Attend the session titled CON8275 - Oracle Hyperion Data Relationship Management: Enabling Enterprise Transformation at Oracle OpenWorld on Monday, October 1, 2012 at 4:45pm in Ballroom A of the InterContinental Hotel to learn how Oracle’s Data Relationship Management solution can help you stay ahead of the competition and proactively harness master (and reference) data changes to transform your enterprise. Hear in-depth customer testimonials from GE Healthcare and Old Mutual South Africa to learn how others have harnessed this technology effectively to build enduring competitive advantage through business process innovation and investments in master data governance. Hear GE Healthcare discuss how DRM has enabled financial transformation, ERP consolidation, mergers and acquisitions, and the alignment of reference data across financial and management reporting applications. Also, learn how Old Mutual SA has upgraded to EBS R12 Financials and is transforming the management of chart of accounts for corporate reporting.

    Separately, an esteemed panel of DRM customers including Cisco Systems, Nationwide Insurance, Ralcorp Holdings and Mentor Graphics will discuss their perspectives on how DRM has helped them address business challenges associated with enterprise MDM, including major change management initiatives such as financial transformations, corporate restructuring, mergers & acquisitions, and the rationalization of financial and analytical master reference data to support alternate business perspectives for the alignment of EPM/BI initiatives. Attend the session titled CON9377 - Customer Showcase: Success with Oracle Hyperion Data Relationship Management at OpenWorld on Thursday, October 4, 2012 at 12:45pm in Ballroom of the InterContinental Hotel to interact with our esteemed speakers first hand.

    Read the article

  • SCHA API for resource group failover / switchover history

    - by krishna.k.murthy
    The Oracle Solaris Cluster framework keeps an internal log of cluster events, including switchover and failover of resource groups. These logs can be useful to Oracle support engineers for diagnosing cluster behavior. However, until now, there was no external interface to access the event history. Oracle Solaris Cluster 4.2 provides a new way to view the recent history of resource group switchovers in a program-parsable format: a new option tag argument, RG_FAILOVER_LOG, for the existing API command scha_cluster_get, which can be used to list recent failover/switchover events for resource groups. The command usage is shown below:

        # scha_cluster_get -O RG_FAILOVER_LOG number_of_days

    number_of_days: the number of days to be considered when scanning the historical logs.

    The command returns a list of events in the following format, with each field separated by a semi-colon [;]:

        resource_group_name;source_nodes;target_nodes;time_stamp

    source_nodes: node names from which the resource group failed over or was switched manually.
    target_nodes: node names to which the resource group failed over or was switched manually.

    There is a corresponding enhancement in the C API function scha_cluster_get(), which uses the SCHA_RG_FAILOVER_LOG query tag.

    In the example below, geo-infrastructure (failover resource group), geo-clusterstate (scalable resource group), oracle-rg (failover resource group), asm-dg-rg (scalable resource group) and asm-inst-rg (scalable resource group) are part of a Geographic Edition setup.

        # /usr/cluster/bin/scha_cluster_get -O RG_FAILOVER_LOG 3
        geo-infrastructure;schost1c;;Mon Jul 21 15:51:51 2014
        geo-clusterstate;schost2c,schost1c;schost2c;Mon Jul 21 15:52:26 2014
        oracle-rg;schost1c;;Mon Jul 21 15:54:31 2014
        asm-dg-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:54:58 2014
        asm-inst-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:56:11 2014
        oracle-rg;;schost2c;Mon Jul 21 15:58:51 2014
        geo-infrastructure;;schost2c;Mon Jul 21 15:59:19 2014
        geo-clusterstate;schost2c;schost2c,schost1c;Mon Jul 21 16:01:51 2014
        asm-inst-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:01:10 2014
        asm-dg-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:02:10 2014
        oracle-rg;schost2c;;Tue Jul 22 16:58:02 2014
        oracle-rg;;schost1c;Tue Jul 22 16:59:05 2014
        oracle-rg;schost1c;schost1c;Tue Jul 22 17:05:33 2014

    Note that in the output some of the entries may have an empty string in the source_nodes field. Such entries correspond to events in which the resource group was switched online manually or during a cluster boot-up. Similarly, an empty destination_nodes list indicates an event in which the resource group went offline.

    - Arpit Gupta, Harish Mallya

    Read the article

  • Where Are You on the Visualization Maturity Curve?

    - by Celine Beck
    The old phrase “A picture is worth a thousand words” is as true now as ever. Providing the right users with access to the right product data, at the right time, can provide significant benefits to a business. This is especially evident with increasing technical and product complexities, elongated supply chains, and growing pressure to bring innovative products to market faster. With this in mind, it is easy to understand why visualization is an integral part of any successful product lifecycle management (PLM) strategy. At a bare minimum, knowledge workers use multiple individual documents of different formats and structure, and leverage visualization solutions to access information; but the real value of visualization can be fully reaped when it is connected to enterprise applications like PLM and tied to the appropriate business context.

    The picture below illustrates this visualization maturity curve, as we presented during the last Oracle Open World, and the transformational effect that visualization can have on PLM processes and performance (check out the post about AutoVue Key Highlights from Oracle Open World 2012 for more information). Organizations are likely to see greater positive impact on business performance when visualization is connected to enterprise systems, allowing access to information coming from multiple sources, such as PLM, supply chain management (SCM) and enterprise resource planning (ERP). This allows organizations to reach higher levels of collaboration and optimize decision-making capacity, as users can benefit from in-context access to visual information. For instance, within a PLM system, a design engineer can access a product assembly and review digital annotations added by other users specific to the engineering change request he is reviewing, rather than all historical annotations.

    The last stage on the curve is what we call augmented business visualization (ABV). ABV is an innovative framework which lets structured data (from Oracle’s Agile PLM, for instance) interact with unstructured data (documents, design, 3D models, etc.). With this new level of integration, information coming from multiple sources can be presented in a highly visual fashion; color displays can be used in order to identify parts with specific characteristics (for example, pending quality issues) and you can take actions directly from within the context of documents and designs, maximizing user productivity.

    Those who had the chance to attend our PLM session during Oracle Open World already got a sneak peek of our latest augmented business visualization for Oracle’s Agile PLM. The solution generated a lot of wows. Stephen Porter, CEO at Zero Wait State, indicated in a post entitled “The PLM State: the Manhattan Project - Oracle’s Next Big Secret Weapon” that “this kind of synergy between visualization and PLM could qualify as a powerful weapon differentiating Agile PLM from other solutions.” If you are interested in learning more about ABV for Oracle’s Agile PLM and hearing about real examples of the use of visualization at all stages of the visualization maturity curve, don’t miss our Visual Decision Making to Optimize New Product Development and Introduction session during the Oracle Value Chain Summit (Feb. 4-6, 2013, San Francisco). We look forward to seeing you there!

    Read the article

  • Harnessing Business Events for Predictive Decision Making - part 1 / 3

    - by Sanjeev Sharma
    Businesses have long relied on data mining to elicit patterns and forecast future demand and supply trends. Improvements in computing hardware, specifically storage and compute capacity, have significantly enhanced the ability to store and analyze mountains of data in ever shrinking time-frames. Nevertheless, the reality is that data growth is outpacing storage capacity by a factor of two, and computing power is still very much bounded by Moore's Law, doubling only every 18 months.

    Faced with this data explosion, businesses are exploring means to develop human brain-like capabilities in their decision systems (including BI and Analytics) to make sense of the data storm, in other words business events, in real-time and respond pro-actively rather than re-actively. It is more like having a little bit of the right information just a little bit beforehand than having all of the right information after the fact. To appreciate this thought better, let's first understand the workings of the human brain.

    Neuroscience research has revealed that the human brain is predictive in nature and that talent is nothing more than exceptional predictive ability. The cerebral cortex, the part of the human brain responsible for cognition, thought, language etc., comprises five layers. The lowest layer in the hierarchy is responsible for sensory perception, i.e. discrete, detail-oriented tasks, whereas each of the layers above is increasingly focused on assembling higher-order conceptual models. Information flows both up and down the layered memory hierarchy. This allows the conceptual mental models to be refined over time through experience and repetition. Secondly, and more importantly, the top layers are able to prime the lower layers to anticipate certain events based on the existing mental models, thereby giving the brain a predictive ability. In a way the human brain develops a "memory of the future", some sort of anticipatory thinking which lets it predict based on the occurrence of events in real-time. A higher order of predictive ability stems from being able to recognize the lack of certain events. For instance, it is one thing to recognize the beats in a music track and another to detect beats that were missed, which involves a higher-order predictive ability.

    Existing decision systems analyze historical data to identify patterns and use statistical forecasting techniques to drive planning. They are similar to the human brain in that they employ business rules very much like mental models to chunk and classify information. However, unlike the human brain, existing decision systems are unable to evolve these rules automatically (AI is still best suited for highly specific tasks) and predict the future based on real-time business events. Mistake me not, existing decision systems remain vital to driving long-term and broader business planning. For instance, a telco will still rely on BI and Analytics software to plan promotions and optimize inventory, but tap into business-event-enabled predictive insight to identify specifically which customers are likely to churn and engage with them pro-actively. In the next post, I will depict the technology components that enable businesses to harness real-time events and drive predictive decision making.

    Read the article
