Search Results

Search found 23103 results on 925 pages for 'performance issues and ha'.


  • TestDriven.Net 3.0 – All Systems Go

    - by Jamie Cansdale
    I'm pleased to announce that TestDriven.Net 3.0 is now available. Finally! I know many of you will already be using the Beta and RC versions, but if you look at the release notes you'll see there have been many refinements since then, so I highly recommend you install the RTM version. Here is a quick summary of a few new features:
    - Visual Studio 2010 supports targeting multiple versions of the .NET framework (multi-targeting). This means you can easily upgrade your Visual Studio 2005/2008 solutions without necessarily converting them to use .NET 4.0. TestDriven.Net will execute your tests using the .NET version your test project is targeting (see 'Properties > Application > Target framework').
    - There is now first-class support for MSTest when using Visual Studio 2008 & 2010. Previous versions of TestDriven.Net supported a limited number of MSTest attributes. This version supports virtually all MSTest unit testing related attributes, including support for deployment item and data-driven test attributes. You should also find this test runner is quick. ;)
    - There is a new 'Go To Test/Code' command on the code context menu. You can think of this as Ctrl-Tab for test driven developers; it will quickly flip back and forth between your tests and the code under test. I recommend assigning a keyboard shortcut to the 'TestDriven.NET.GoToTestOrCode' command.
    - NCover can now be used for code coverage on .NET 4.0. This is only officially supported since NCover 3.2 (your mileage may vary if you're using the 1.5.8 version).
    - Rather than clutter the 'Output' window, ignored or skipped tests will be placed on the 'Task List'. You can double-click on these items to navigate to the offending test (or assign a keyboard shortcut to 'View.NextTask').
    - If you're using a Team, Premium or Ultimate edition of Visual Studio 2005-2010, a new 'Test With > Performance' command will be available. This command will perform instrumented performance profiling on your target code.
    A particular focus of this version has been to make it more keyboard friendly. Here's a list of commands you will probably want to assign keyboard shortcuts to (name, default shortcut, what I use):
    - TestDriven.NET.RunTests (run tests in context) - no default - Alt + T
    - TestDriven.NET.RerunTests (repeat test run) - no default - Alt + R
    - TestDriven.NET.GoToTestOrCode (flip between tests and code) - no default - Alt + G
    - TestDriven.NET.Debugger (run tests with debugger) - no default - Alt + D
    - View.Output (show the 'Output' window) - Ctrl + Alt + O
    - Edit.BreakLine (edit code in stack trace) - Enter
    - View.NextError (jump to next failed test) - Ctrl + Shift + F12
    - View.NextTask (jump to next skipped test) - no default - Alt + S
    By default the 'Output' window will automatically activate when there is test output or a failed test (this is an option). The cursor will be positioned on the stack trace of the last failed test, ready for you to hit 'Enter' to jump to the fail point or 'Esc' to return to your source (assuming your 'Output' window is set to auto-hide). If your 'Output' window isn't set to auto-hide, you'll need to hit 'Ctrl + Alt + O' then 'Enter'. Alternatively you can use 'Ctrl + Shift + F12' (View.NextError) to navigate between all failed tests.
    For more frequent updates or to give feedback, you can find me on Twitter here. I hope you enjoy this version. Let me know how you get on. :)

    Read the article

  • Programação paralela no .NET Framework 4 – Parte I

    - by anobre
    Introduction
    Advances in technology over the last few years have provided low-cost access to workstations with numerous CPUs. Today we easily find client machines with 2, 4 and even 8 cores, not counting the "super servers" with up to 36 processors :) From Wikipedia: the central processing unit (CPU) or processor is the part of a computer system that executes the instructions of a computer program, and is the primary element in carrying out a computer's functions. The term has been used in the computer industry at least since the early 1960s[1]. The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation remains the same. By analogy, it would be very useful to delegate real-world tasks that can be executed independently to different people, thereby achieving greater performance / productivity. Parallel computing is based on the idea that a larger problem can be divided into smaller problems that are solved in parallel. This line of thinking has been used for some time in HPC (high-performance computing), and the advances of recent years, along with concerns about power consumption, have made the idea more attractive and easily accessible in any environment.
    In the .NET Framework
    The .NET platform provides a runtime, libraries and tools that offer quick and easy access to parallel programming without working directly with threads and the thread pool. This series of posts will present all the available resources, starting with the TPL, or Task Parallel Library.
    Task Parallel Library
    The TPL is a set of types located in the System.Threading and System.Threading.Tasks namespaces, starting with version 4 of the framework. As of version 4, the TPL is the recommended way to write parallel and multithreaded code. http://msdn.microsoft.com/en-us/library/dd460717(v=VS.100).aspx
    Task Parallelism
    The term "task parallelism" refers to one or more tasks being executed simultaneously. Think of a task as a method. The easiest way to run tasks in parallel is the code below: Parallel.Invoke(() => TrabalhoInicial(), () => TrabalhoSeguinte()); What actually happens? Behind the scenes, this statement implicitly creates objects of type Task, which represents an asynchronous (not necessarily parallel) operation: public class Task : IAsyncResult, IDisposable. Tasks can also be instantiated explicitly, a more verbose alternative to Parallel.Invoke: var task = new Task(() => TrabalhoInicial()); task.Start(); Another option is to create a Task and start its work immediately: var t = Task<int>.Factory.StartNew(() => TrabalhoInicialComValor()); var t2 = Task<int>.Factory.StartNew(() => TrabalhoSeguinteComValor()); The basic difference between the two approaches is that the first has a known starting point; it is preferred when we do not want instantiation and scheduling of the execution to happen in a single operation, as they do in the second approach.
    Data Parallelism
    Also part of the TPL, data parallelism refers to scenarios where the same operation must be executed in parallel over the elements of a collection or array, through the parallel For and ForEach constructs. The basic idea is to take each element of the collection (or array) and work on them with several threads concurrently.
    The key class for this scenario is System.Threading.Tasks.Parallel: // Sequential version foreach (var item in sourceCollection) { Process(item); } // Parallel equivalent Parallel.ForEach(sourceCollection, item => Process(item)); Complicated, right? :) Demonstration: a video with examples (screencast) is available here. Careful! Despite the overwhelming urge to start coding right away, watch out for some basic parallelism pitfalls. This link describes some of those situations. Cheers.

    Read the article

  • Oracle Virtualization at Oracle OpenWorld 2012

    - by Chris Kawalek
    Mini-Series Entry 1 of 3: Hands-On Virtualization This is the first entry of a 3 part mini-series aimed at highlighting server and desktop virtualization at this year’s Oracle OpenWorld.  Oracle OpenWorld 2012 is fast approaching! If you are as excited as we are about the fascinating new Oracle virtualization content featured at Oracle OpenWorld 2012, you won’t want to miss this blog mini-series. We will be highlighting sessions that cover advances and innovations in our products, our product strategy and roadmap, and hands on labs for step-by-step instructions from our field and product experts. In the blog mini-series you will learn about: The Oracle Virtualization general keynote session Hands-on labs  Key Oracle server and desktop virtualization sessions In this entry, we will cover the Oracle Virtualization keynote session and the hands-on labs you won't want to miss. General Session: Oracle Virtualization Strategy and Roadmap Session ID: GEN8725 Oracle offers the industry’s most complete and integrated virtualization portfolio enabling organizations to realize benefits beyond simple consolidation as they transform their data centers into flexible cloud-based infrastructures. Join Oracle executives and experts to learn about Oracle’s desktop-to-data-center virtualization solutions, such as the OS, with built-in management integration at all layers that can help you virtualize and manage the complete computing environment, from physical servers to virtual servers and applications. This “don’t-miss” session offers details of the latest product updates and strategy; product roadmaps; integration with enterprise applications; and real-world examples of how Oracle server, desktop, and storage virtualization is benefiting customers. Here are our top picks for Hands-On Labs for Oracle OpenWorld 2012: Oracle Virtual Desktop Infrastructure Performance and Tablet Mobility Session ID: HOL9907 This hands-on lab demonstrates the performance (using an industry-standard load tester) and roaming capabilities of Oracle Virtual Desktop Infrastructure with Oracle’s Sun Ray Clients, Apple iPad and other clients. Deploying an IaaS Environment with Oracle VM: Hands-On Lab  Session ID: HOL9558 This hands-on lab takes you through the planning and deployment of an infrastructure as a service (IaaS) environment with Oracle VM as the foundation. It covers a range of topics, from planning storage capacity, LUN creation, network bandwidth planning, and best practices to designing and streamlining the environment for ease of management. Learn from deeply experienced field engineers and product experts. Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-On Lab Session ID: HOL9559 This hands-on lab is for application architects or system administrators who will need to deploy and manage Oracle Applications. You’ll learn how Oracle VM Templates can turn you into a power user who can virtualize and deploy complex Oracle Applications in minutes. Longtime field-experienced engineers and product experts will show you, step by step, how to download and import templates and deploy the applications. x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance Session ID: HOL9870 The purpose of this hands-on lab is to demonstrate the functionality and usage of Oracle’s enterprise cloud infrastructure for x86 with Oracle VM 3.x. 
    It covers:
    - Creation of VMs
    - Migration of VMs
    - Quick and easy deployment of Oracle applications with Oracle VM Templates
    - Usage of the Storage Connect plug-in for the Sun ZFS Storage Appliance
    You can find these and other great sessions in the Oracle OpenWorld 2012 Content Catalogue. Start checking now to better plan and organize your week at the conference. Then you'll be ready to sign up for all of your sessions in mid-July when the scheduling tool goes live. While the hands-on labs allow you to directly interact with Oracle virtualization products, the conference sessions allow you to hear from a wide variety of industry experts on how they're using the technology in real-world deployments, solving specific challenges, and more. In tomorrow's entry, we'll start talking about the many conference sessions related to Oracle server and desktop virtualization you can attend during the show. See you then! - The Oracle Virtualization marketing team

    Read the article

  • BizTalk Cross Reference Data Management Strategy

    - by charlie.mott
    Article Source: http://geekswithblogs.net/charliemott
    This article describes an approach to the management of cross reference data for BizTalk. Some articles about the BizTalk Cross Referencing features can be found here: http://home.comcast.net/~sdwoodgate/xrefseed.zip http://geekswithblogs.net/michaelstephenson/archive/2006/12/24/101995.aspx http://geekswithblogs.net/charliemott/archive/2009/04/20/value-vs.id-cross-referencing-in-biztalk.aspx
    Options
    Current options for managing this data include:
    - Maintaining xml files in the format that can be used by the out-of-the-box BTSXRefImport.exe utility.
    - Use of user interfaces that have been developed to manage this data: the BizTalk Cross Referencing Tool and the XRef XML Creation Tool.
    However, there are the following issues with the above options:
    - The 'BizTalk Cross Referencing Tool' requires a separate database to manage.
    - The 'XRef XML Creation' tool has no means of persisting the data settings.
    - The 'BizTalk Cross Referencing Tool' generates integers in the common id field. I prefer to use a string (e.g. acme.country.uk). This is more readable (see naming conventions below).
    - Both UI tools continue to use BTSXRefImport.exe. This utility replaces all xref data. This can be a problem in continuous integration environments that support multiple clients or BizTalk target instances. If you upload the data for one client it would destroy the data for another client. Yet in TFS, where builds run concurrently, this would break unit tests.
    Alternative Approach
    In response to these issues, I instead use simple SQL scripts to directly populate the BizTalkMgmtDb xref tables, combined with a data namespacing strategy to isolate client data.
    Naming Conventions
    All data keys use namespace prefixing. The pattern will be <companyName>.<dataType>. The naming convention is to use lower casing for all items. The data must follow this pattern to isolate it from other company cross-reference data. The list below shows some sample data. (Note: this data uses the 'ID' cross-reference tables; the same principles apply for the 'value' cross-referencing tables.)
    - xref_AppType.appType - Application types - e.g. acme.erp, acme.portal, acme.assetmanagement
    - xref_AppInstance.appInstance - Application instances (each will have a corresponding application type) - e.g. acme.dynamics.ax, acme.dynamics.crm, acme.sharepoint, acme.maximo
    - xref_IDXRef.idXRef - Holds the cross reference data types - e.g. acme.taxcode, acme.country
    - xref_IDXRefData.CommonID - Holds each cross reference type value used by the canonical schemas - e.g. acme.vatcode.exmpt, acme.vatcode.std, acme.country.usa, acme.country.uk
    - xref_IDXRefData.AppID - Holds the value for each application instance and each xref type - e.g. GBP, USD
    SQL Scripts
    The data to be stored in the BizTalkMgmtDb xref tables will be managed by SQL scripts stored in a database project in the Visual Studio solution. A sketch of what such scripts might look like is shown after this list.
    - Build.cmd - A sqlcmd script to deploy data by running the SQL scripts below. (This can be run as part of the MSBuild process.)
    - acme.purgexref.sql - SQL script to clear acme.* data from the xref tables. As such, this will not impact data for any other company.
    - acme.applicationInstances.sql - SQL script to insert application type and application instance data.
    - acme.vatcode.sql, acme.country.sql, etc. - There will be a separate SQL script to insert each cross-reference data type and the application-specific values for these types.
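    The article describes these scripts but does not show them, so here is a rough, hypothetical sketch of what acme.purgexref.sql and one insert script might look like. The table names come from the article; the exact column layout of the xref tables (and any foreign keys between them) is an assumption and should be verified against your own BizTalkMgmtDb before use:

        -- acme.purgexref.sql (sketch): remove only acme.* data so other companies' data is untouched
        DELETE FROM xref_IDXRefData  WHERE commonID    LIKE 'acme.%';
        DELETE FROM xref_IDXRef      WHERE idXRef      LIKE 'acme.%';
        DELETE FROM xref_AppInstance WHERE appInstance LIKE 'acme.%';
        DELETE FROM xref_AppType     WHERE appType     LIKE 'acme.%';

        -- acme.country.sql (sketch): one cross-reference type and a couple of values
        -- (column names and the GB/US codes are illustrative assumptions, not from the article)
        INSERT INTO xref_IDXRef (idXRef) VALUES ('acme.country');
        INSERT INTO xref_IDXRefData (idXRef, appInstance, commonID, appID)
        VALUES ('acme.country', 'acme.dynamics.ax', 'acme.country.uk',  'GB'),
               ('acme.country', 'acme.dynamics.ax', 'acme.country.usa', 'US');

    Run from Build.cmd via sqlcmd, a purge followed by the inserts keeps the deployment repeatable for the acme.* namespace without touching any other client's data.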

    Read the article

  • Don’t miss this very popular presentation on Punchout in iProcurement on June 26th 2012

    - by user793553
    Don't miss this very popular presentation on Punchout in iProcurement on June 26th. See Doc ID 1448447.1 for the Webcast details.
    ADVISOR WEBCAST: Punchout in iProcurement
    PRODUCT FAMILY: EBZs - Procurement
    June 26, 2012 at 14:00 UK / 15:00 Cairo / 6:00 am Pacific / 7:00 am Mountain / 9:00 am Eastern
    This one-hour session is recommended for technical and functional users who are maintaining and/or implementing the Punchout from iProcurement. The session will provide an overview of the different Punchout models, setup, and the Punchout to PO xml/cxml cycle. It will also provide tips on troubleshooting the common issues when a new supplier is added to Punchout or an existing one stops working.
    TOPICS WILL INCLUDE:
    - Overview of the Punchout models.
    - Knowledge of the Punchout to PO process cycle.
    - Demo - Punchout.
    - Certificates and setup.
    - Learn the common issues and how to address them in an efficient way. (Documentation and Notes)
    A short, live demonstration (only if applicable) and a question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. The current schedule can be found in Note 740966.1. Post-presentation recordings can be found in Note 740964.1.
    WebEx Conference Details
    Topic: Advisor Webcast - Punchout in iProcurement
    Date and Time:
    - Tuesday, June 26, 2012 3:00 pm, Egypt Time (Cairo, GMT+02:00)
    - Tuesday, June 26, 2012 2:00 pm, GMT Summer Time (London, GMT+01:00)
    - Tuesday, June 26, 2012 9:00 am, Eastern Daylight Time (New York, GMT-04:00)
    - Tuesday, June 26, 2012 7:00 am, Mountain Daylight Time (Denver, GMT-06:00)
    Event number: 597 373 155
    To register for this meeting:
    1. Event address for attendees: https://oracleaw.webex.com/oracleaw/onstage/g.php?d=597373155&t=a
    2. Register for the meeting. Once the host approves your request, you will receive a confirmation email with instructions for joining the meeting.
    InterCall Audio Instructions
    A list of toll-free numbers can be found below.
    VOICESTREAMING IS AVAILABLE
    Teleconference ID: 70528713
    UK Standard International: +44 1452 562 665
    US Free Call: 1866 230 1938
    US Local Call: 1845 608 8023
    Global Toll-Free Numbers - MOS doc#: https://metalink3.oracle.com/od/faces/secure/km/DocumentDisplay.jspx?id=1148600.1
    Designation and Number:
    - Argentina Free Call: 0800 444 1009
    - Australia Free Call: 1800 763 650
    - Austria Free Call: 0800 111 956
    - Austria Local Call: 0192 865 72
    - Belgium Free Call: 0800 724 46
    - Belgium Local Call: 0817 000 60
    - Brazil Free Call: 0800 761 0835
    - Bulgaria Free Call: 0080 011 511 76
    - Canada Free Call: 1866 984 6577
    - Colombia Free Call: 0180 091 562 17
    - Croatia Free Call: 0800 222 305
    - Cyprus Free Call: 8009 6341
    - Czech Republic Free Call: 8007 007 95
    - Denmark Free Call: 8088 8467
    - Denmark Local Call: 3272 7506
    - Finland Free Call: 0800 112 398
    - Finland Local Call: 0923 114 014
    - France Free Call: 0805 110 463
    - France Local Call: 0359 580 290
    - Germany Free Call: 0800 101 4918
    - Germany Local Call: 0692 222 161 19
    - Greece Free Call: 0080 012 8135
    - Hong Kong Free Call: 8009 661 55
    - Hungary Free Call: 0680 018 839
    - Hungary Local Call: 0180 889 97
    - India Free Call: 0008 001 006 600
    - Ireland Free Call: 1800 300 170
    - Ireland Local Call: 0143 198 35
    - Israel Free Call: 1809 431 440
    - Italy Free Call: 8007 840 87
    - Italy Local Call: 0236 009 700
    - Japan Free Call: 0066 338 124 31
    - Latvia Free Call: 8000 3680
    - Luxembourg Free Call: 8002 7941
    - Malaysia Free Call: 1800 814 528
    - Mexico Free Call: 0018 666 864 905
    - Monaco Free Call: 8009 3655
    - Netherlands Free Call: 0800 949 4596
    - Netherlands Local Call: 0207 168 000
    - New Zealand Free Call: 0800 451 190
    - North China Free Call: 1080 074 413 29
    - Norway Free Call: 8001 8057
    - Norway Local Call: 2151 0847
    - Poland Free Call: 0080 012 135 73
    - Portugal Free Call: 8007 894 20
    - Romania Free Call: 0800 895 558
    - Russia Free Call: 8108 002 385 2044
    - Slovenia Free Call: 0800 804 55
    - South Africa Free Call: 0800 982 794
    - South China Free Call: 1080 044 111 82
    - South Korea Free Call: 0079 814 800 7887
    - Spain Free Call: 9009 389 85
    - Spain Local Call: 9111 421 10
    - Sweden Free Call: 0200 214 344
    - Sweden Local Call: 0850 596 375
    - Switzerland Free Call: 0800 835 040
    - Switzerland Local Call: 0445 804 280
    - Thailand Free Call: 0018 004 421 98
    - UK Free Call: 0800 073 1830
    - UK Local Call: 0844 871 9364
    - UK National Call: 0871 700 0309
    - UK Standard International: +44 (0) 1452 562 665
    - USA Free Call: 1866 230 1938

    Read the article

  • Stack Exchange Notifier Chrome Extension [v1.2.9.3 released]

    - by Vladislav Tserman
    About Stack Exchange Notifier is a handy extension for Google Chrome browser that displays your current reputation, badges on Stack Exchange sites and notifies you on reputation's changes. You will now get notified of comments on your own posts (questions and answers) and of any comments that refer to you by @username in a comment, even if you do not own the post (aka mentions). All StackExchange sites are supported. Screenshots Access Install extensions from Google Chrome Extension Gallery Platform Google Chrome browser extension Contact Created by me (Vladislav Tserman). I'm available at: vladjan (at) gmail.com Follow Stack Exchange Notifier on twitter to get notified about news and updates: http://twitter.com/se_notifier Code Written in Java, Google Web Toolkit under Eclipse Helios. Stack Exchange Notifier uses the Stack Exchange API and is powered by Google App Engine for Java. Changelog I will be porting extension to not use app engine back-end due to some limitations. New versions of the extension will be making direct calls to Stack Exchange API right from your browser. Please do not expect new versions of the extension any time soon. Sorry. Read more about limitations here http://stackapps.com/questions/1713 and here http://stackoverflow.com/questions/3949815 Currently, you may sometimes experience some issues using extension, but most users will have no problems. You may notice too many errors in the logs, but there is nothing I can do with this now. Thanks for using my little app, thanks to all of you it still works in spite of many issues with API Version 1.2.9.3 - Thursday, October 14, 2010 - Bug fix release (back-end improvements) Version 1.2.9.2 - Thursday, October 07, 2010 - Bug fix release (high rate of occasional API errors were noticed so some fixes added to handle them were possible) Version 1.2.9.1 - Tuesday, October 05, 2010 - Mostly bug fix release, back-end performance improvements - You will now get notified of comments on your own posts (questions and answers) that are not older than 1 year and of any comments that refer to you by @username in a comment, even if you do not own the post (aka mentions). This is experimental feature, let me know if you like/need it. - New 'All sites' view displays all websites from Stack Exchange network (part of new feature that is not finished yet) Version 1.2.9 - Saturday, September 25, 2010 - Fixes an issue when some users got empty Account view. - When hovering on @Username on account view the title now displays '@Username on @SiteName' to easily understand the site name Version 1.2.7 - Wednesday, September 22, 2010 - Fixed an issue with notifications. - Minor improvements Version 1.2.5 - Tuesday, September 21, 2010 - Fixed an issue where some characters in response payload raised an exception when parsing to JSON. v1.2.3 (Sunday, September 19, 2010) - Support for new OpenID providers was added (Yahoo, MyOpenID, AOL) - UI improvements - Several minor defects were fixed v1.2.2 (Thursday, September 16, 2010) - New types of notifications added. Now extension notifies you on comments that are directed to you. Comments are expandable, so clicking on comment title will expand height to accommodate all available text. - UI and error handling improvements Future Application still in beta stage. I hope you're not having any problems, but if you are, please let me know. Leave your feedback and bug reports in comments. I'm available at: vladjan (at) gmail.com. I'm working on adding new features. 
I want to hear from the users and incorporate as much feedback as possible into the extension. Any suggestions for improvements/features to add?

    Read the article

  • How To Clear An Alert - Part 2

    - by werner.de.gruyter
    There were some interesting comments and remarks on the original posting, so I decided to do a follow-up and address some of the issues that got raised...
    Handling Metric Errors
    First of all, there is a significant difference between an 'error' and an 'alert'. An 'alert' is the violation of a condition (a threshold) specified for a given metric. That means that the Agent is collecting and gathering the data for the metric, but there is a situation that requires the attention of an administrator. An 'error', on the other hand, is a failure to collect metric data: the Agent is throwing the error because it cannot determine the value for the metric. Whereas the 'alert' guarantees continuity of the metric data, an 'error' signals a big unknown. And the unknown aspect of all this is what makes an error a lot more serious than a regular alert: if you don't know what the current state of affairs is, there could be some serious issues brewing that nobody is aware of...
    The life-cycle of a Metric Error
    Clearing a metric error is pretty much the same workflow as a metric 'alert':
    - The Agent signals the error after it failed to execute the metric.
    - The error is uploaded to the OMS/repository, where it becomes visible in the Console.
    - The error will remain active until the Agent is able to execute the metric successfully. Even though the metric is still getting scheduled and executed on a regular basis, the error will remain outstanding as long as the Agent is not capable of executing the metric correctly.
    Knowing this, the way to fix the metric error should be obvious: take the 'problem' away, and as soon as the metric is executed again (based on the frequency of the metric), the error will go away. The same tricks used to clear alerts can be used here too:
    - Wait for the next scheduled execution. For those metrics that are executed regularly (like every 15 minutes or so), it's just a matter of waiting those minutes to see the updates.
    - The 'Reevaluate Alert' button can be used to force a re-execution of the metric. In case a metric is executed once a day, this will be a better way to make sure that the underlying problem has been solved. And if it has been, the metric error will be removed, and the regular data points will be uploaded to the repository.
    - And just in case you have to 'force' the issue a little: if you disable and re-enable a metric, it will get re-scheduled. And that means a new metric execution, and an update of the (hopefully) fixed problem.
    Database server-generated alerts and problem checkers
    There are various ways the Agent can collect metric data: via a script or a SQL statement, reading a log file, getting a value from an SNMP OID, listening for SNMP traps, or via the DBMS_SERVER_ALERTS mechanism of an Oracle database. For those alerts which are generated by the database (like tablespace metrics for 10g and above databases), the Agent just 'waits' for the database to report any new findings. If the Agent has lost the current state of the server-side metrics (due to an incomplete recovery after a disaster, or after an improper use of the 'emctl clearstate' command), the Agent might still be aware of an alert that the database no longer has (or vice versa). The same goes for 'problem checker' alerts: those metrics that only report data if there is a problem (like the 'invalid objects' metric) will also have a problem if the Agent state has been tampered with (again, incomplete recovery and improper use of 'emctl clearstate' are the two main causes for this).
    The best way to deal with these kinds of mismatches is to simply disable and re-enable the metric again: the disabling will clear the state of the metric, and the re-enabling will force a re-execution of the metric, so the new and updated results can get uploaded to the repository. Starting with 10gR5, the Agent performs additional checks and verifications after each restart of the Agent and/or each state change of the database (shutdown/startup or failover in case of Data Guard) to catch these kinds of mismatches.

    Read the article

  • IntelliTrace As a Learning Tool for MVC2 in a VS2010 Project

    - by Sam Abraham
    IntelliTrace is a new feature in Visual Studio 2010 Ultimate Edition. I see this valuable tool as a “Program Execution Recorder” that captures information about events and calls taking place as soon as we hit the VS2010 play (Start Debugging) button or the F5 key. Many online resources already discuss IntelliTrace and the benefit it brings to both developers and testers alike so I see no value of just repeating this information.  In this brief blog entry, I would like to share with you how I will be using IntelliTrace in my upcoming talk at the Ft Lauderdale ArcSig .Net User Group Meeting on April 20th 2010 (check http://www.fladotnet.com for more information), as a learning tool to demonstrate the internals of the lifecycle of an MVC2 application.  I will also be providing some helpful links that cover IntelliTrace in more detail at the end of my article for reference. IntelliTrace is setup by default to only capture execution events. Microsoft did such a great job on optimizing its recording process that I haven’t even felt the slightest performance hit with IntelliTrace running as I was debugging my solutions and projects.  For my purposes here however, I needed to capture more information beyond execution events, so I turned on the option for capturing calls in addition to events as shown in Figures 1 and 2. Changing capture options will require us to stop our debugging session and start over for the new settings to take place. Figure 1 – Access IntelliTrace options via the Tools->Options menu items Figure 2 – Change IntelliTrace Options to capture call information as well as events Notice the warning with regards to potentially degrading performance when selecting to capture call information in addition to the default events-only setting. I have found this warning to be sure true. My subsequent tests showed slowness in page load times compared to rendering those same exact pages with the “event-only” option selected. Execution recording is auto-started along with the new debugging session of our project. At this point, we can simply interact with the application and continue executing normally until we decide to “playback” the code we have executed so far.  For code replay, first step is to “break” the current execution as show in Figure 3.   Figure 3 – Break to replay recording A few tries later, I found a good process to quickly find and demonstrate the MVC2 page lifecycle. First-off, we start with the event view as shown in Figure 4 until we find an interesting event that needs further studying.  Figure 4 – Going through IntelliTrace’s events and picking as specific entry of interest We now can, for instance, study how the highlighted HTTP GET request is being handled, by clicking on the “Calls View” for that particular event. Notice that IntelliTrace shows us all calls that took place in servicing that GET request. Double clicking on any call takes us to a more granular view of the call stack within that clicked call, up until getting to a specific line of code where we can do a line-by-line replay of the execution from that point onwards using F10 or F11 just like our typical good old VS2008 debugging helped us accomplish. Figure 5 – switching to call view on an event of interest Figure 6 – Double clicking on call shows a more granular view of the call stack. In conclusion, the introduction of IntelliTrace as a new addition to the VS developers’ tool arsenal enhances development and debugging experience and effectively tackles the “no-repro” problem. 
It will also hopefully enhance my audience’s experience listening to me speaking about  an MVC2 page lifecycle which I can now easily visually demonstrate, thereby improving the probability of keeping everybody awake a little longer. IntelliTrace References: http://msdn.microsoft.com/en-us/magazine/ee336126.aspx http://msdn.microsoft.com/en-us/library/dd264944(VS.100).aspx

    Read the article

  • Uncovering Compiler Errors in ASP.NET MVC Views

    - by Ben Griswold
    ASPX and ASCX files are compiled on the fly when they are requested on the web server. This means it's possible that you aren't catching compile errors associated with your views when you build your ASP.NET MVC project in Visual Studio. Unless you're willing to click through your entire application, rendering each view looking for errors, your application is left a little vulnerable to user issues. Fortunately, there's a workaround. Open up your MVC project file in Notepad or within the Visual Studio IDE by unloading the project and then editing the .csproj file (both actions are available by right-clicking on the Project Node in Solution Explorer). Notice the MvcBuildViews option. It's probably set to false. Flip the value to true and you'll magically start compiling your views when you build your application.
    <MvcBuildViews>false</MvcBuildViews>
    Taking this action will slow down your builds a bit, but if you're a hack like me, it'll probably save your day in the long run. Now you're probably thinking, "Neat trick - how's it work?" Scroll down toward the bottom of your csproj file and you will notice the AfterBuild target triggers the AspNetCompiler action if the MvcBuildViews option is set to true.
    <Target Name="AfterBuild" Condition="'$(MvcBuildViews)'=='true'">
      <AspNetCompiler VirtualPath="temp"
                      PhysicalPath="$(ProjectDir)\..\$(ProjectName)" />
    </Target>
    Great. One more thing. Let's say you don't want to slow down all of your builds, but you absolutely want to know if there are any compiler issues with your views before you commit your code to version control or deploy or whatever. Here's what you can do - change the AfterBuild condition to run if your configuration is set to Release mode.
    <Target Name="AfterBuild" Condition="'$(Configuration)'=='Release'">
      <!-- Always pre-compile ASPX and ASCX in release mode -->
      <AspNetCompiler VirtualPath="temp"
                      PhysicalPath="$(ProjectDir)\..\$(ProjectName)" />
    </Target>
    Now your debug mode builds will continue to be as fast as ever and you can quickly validate your views by building in release mode when you so choose. There's one little catch - this setup won't consider the MvcBuildViews option whatsoever! So if you decide to go with this configuration, you might want to add a comment near the MvcBuildViews option letting other developers know they can change the MvcBuildViews option as much as they'd like but it's not going to affect the AfterBuild action. Or don't include the comment and let your team members figure it out for themselves…

    Read the article

  • New Communications Industry Data Model with "Factory Installed" Predictive Analytics using Oracle Da

    - by charlie.berger
    Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications Service Providers
    We've integrated pre-installed analytical methodologies with the new Oracle Communications Data Model to deliver automated, simple, yet powerful predictive analytics solutions for customers. Churn, sentiment analysis, identifying customer segments - all things that can be anticipated and hence preconceived and implemented inside an application. Read on for more information!
    TM Forum Management World, Nice, France - 18 May 2010
    News Facts
    To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle today introduced the Oracle Communications Data Model. With the Oracle Communications Data Model, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the Oracle Communications Data Model, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Also announced today, Hong Kong Broadband Network enhanced their data warehouse system, going live on Oracle Communications Data Model in three months. The leading provider increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent.
    Product Details
    Oracle Communications Data Model provides industry-specific schema and embedded analytics that address key areas such as customer management, marketing segmentation, product development and network health. CSPs can efficiently capture and monitor critical data and transform it into actionable information to support development and delivery of next-generation services using:
    - More than 1,300 industry-specific measurements and key performance indicators (KPIs) such as network reliability statistics, provisioning metrics and customer churn propensity.
    - Embedded OLAP cubes for extremely fast dimensional analysis of business information.
    - Embedded data mining models for sophisticated trending and predictive analysis.
    - Support for multiple lines of business, such as cable, mobile, wireline and Internet, which can be easily extended to support future requirements.
    With Oracle Communications Data Model, CSPs can jump-start the implementation of a communications data warehouse in line with communications-industry standards including the TM Forum Information Framework (SID), formerly known as the Shared Information Model. Oracle Communications Data Model is optimized for any Oracle Database 11g platform, including Oracle Exadata, which can improve call data record query performance by 10x or more.
    Supporting Quotes
    "Oracle Communications Data Model covers a wide range of business areas that are relevant to modern communications service providers and is a comprehensive solution - with its data model and pre-packaged templates including BI dashboards, KPIs, OLAP cubes and mining models. It helps us save a great deal of time in building and implementing a customized data warehouse and enables us to leverage the advanced analytics quickly and more effectively," said Yasuki Hayashi, executive manager, NTT Comware Corporation.
    "Data volumes will only continue to grow as communications service providers expand next-generation networks, deploy new services and adopt new business models. They will increasingly need efficient, reliable data warehouses to capture key insights on data such as customer value, network value and churn probability. With the Oracle Communications Data Model, Oracle has demonstrated its commitment to meeting these needs by delivering data warehouse tools designed to fill communications industry-specific needs," said Elisabeth Rainge, program director, Network Software, IDC.
    "The TM Forum Conformance Mark provides reassurance to customers seeking standards-based, and therefore, cost-effective and flexible solutions. TM Forum is extremely pleased to work with Oracle to certify its Oracle Communications Data Model solution. Upon successful completion, this certification will represent the broadest and most complete implementation of the TM Forum Information Framework to date, with more than 130 aggregate business entities," said Keith Willetts, chairman and chief executive officer, TM Forum.
    Supporting Resources
    Oracle Communications | Oracle Communications Data Model Data Sheet | Oracle Communications Data Model Podcast | Oracle Data Warehousing | Oracle Communications on YouTube | Oracle Communications on Delicious | Oracle Communications on Facebook | Oracle Communications on Twitter | Oracle Communications on LinkedIn | Oracle Database on Twitter | The Data Warehouse Insider Blog

    Read the article

  • So No TECH job so far.

    - by Ratman21
    O I found some temp work for the US Census and I have managed to keep the house (so far) but it looks like I/we are going to have to do a short sale, and the temp job will be ending soon. On top of that it looks like the unemployment fund for me is drying up. I will have about one month left after the Census job is done. I am now down to applying for work at the KFC. This is the type of work I started with, before I was a tech geek, and really I didn't think I would be doing this kind of work in my later years but, I have a wife and kid. So I got to suck it up and do it. Oh and here is my new resume…go ahead, I know you want to tear it up. I really don't care any more.
    Scott L. Newman
    45219 Dutton Way, Callahan, FL 32011
    H: (904)879-4880  C: (352)356-0945  E: [email protected]  Web: http://beingscottnewman.webs.com/
    OBJECTIVE
    To obtain a Network or Technical Support position.
    KEYWORD SUMMARY
    CompTIA A+, Network+, and Security+ Certified; Network Operation, Technical Support, Client/Vendor Relations, Networking/Administration, Cisco Routers/Switches, Helpdesk, Microsoft Office Suite, Website Design/Dev./Management, Frame Relay, ISDN, Windows NT/98/XP, Visio, Inventory Management, CICS, Programming, COBOL IV, Assembler, RPG
    QUALIFICATIONS SUMMARY
    Twenty years' experience in computer operations, technical support, and technical writing. Also have two and a half years' experience in internet / intranet operations.
    PROFESSIONAL EXPERIENCE
    - October 2009 – Present*: Volunteer web site and PC technician (part time), True Faith Christian Fellowship Church – Callahan, FL. Project: create and maintain the church web site to give it worldwide exposure.
    - Aug 2008 – September 2009*: Volunteer church sound and video technician (part time), Thomas Creek Baptist Church – Callahan, FL.
    *Note: these jobs were for learning and/or keeping skills updated while looking for a tech job and training for new skills.
    - February 2005 to October 2008: Client Server Dev/Analyst I, Fidelity National Information Services, Jacksonville, FL. (FNIS acquired Certegy in 2005 and, out of 20 personnel, I was one of three kept on.)
    - August 2003 to February 2005: Senior NetOps Operator, Certegy, St. Pete, FL. (In August 2003, Certegy terminated its contract with EDS and, out of 40 personnel, I was one of six kept on.) Projects: creation and update of the listing and placement for all raised-floor equipment at the St. Pete site. The listing was made up of a floor plan of the raised floor and equipment rack diagrams showing the placement of all devices using Visio, cross-referenced with an inventory Excel document showing which dept was responsible for each device. Sole creator of the Network Operation and Server Operation procedures guide (NetOps Guide). Expertise: resolving circuit and/or router issues or assisting the circuit carrier in resolving issues from the company Network Operation Center (NOC), as well as resolving application problems or assisting application support in their resolution.
    - July 1999 to August 2003: Senior NetOps Operator, EDS (Certegy Account), St. Pete, FL. Same expertise and ongoing projects as listed above for FNIS/Certegy. (Equifax outsourced the NetOps dept. to EDS in 1999.)
    - January 1991 to July 1999: NetOps/Tandem Operator, Equifax, St. Pete & Tampa, FL. Same as all of the above for FNIS/Certegy/EDS except for circuit and router issues.
    EDUCATION
    - New Horizons Computer Learning Center, Jacksonville, Florida – CompTIA A+, Security+, and Network+ Certified. Currently working on CCNA Certification (07/30/10).
    - Mott Community College, Flint, Michigan – Associates Degree, Data Processing and General Education.
    - Currently studying Japanese.

    Read the article

  • SQL SERVER – List of All the Samples Database Available to Download for FREE

    - by Pinal Dave
    It is pretty much very common to have a sample database for any database product. Different companies keep on improving their product and keep on coming up with innovation in their product. To demonstrate the capability of their new enhancements they need the sample database. Microsoft have various sample database available for free download for their SQL Server Product. I have collected them here in a single blog post. Download an AdventureWorks Database The AdventureWorks OLTP database supports standard online transaction processing scenarios for a fictitious bicycle manufacturer (Adventure Works Cycles). Scenarios include Manufacturing, Sales, Purchasing, Product Management, Contact Management, and Human Resources. Coconut Dal Coconut Dal is a lightweight data access layer, for use in projects where the Entity Framework cannot be used or Microsoft’s Enterprise Library Data Block is unsuitable. Anyone who is handwriting ADO.NET should use a library instead and Coconut Dal might be the answer.  DataBooster – Extension to ADO.NET Data Provider The dbParallel DataBooster library is a high-performance extension to ADO.NET Data Provider, includes two aspects: 1) A slimmed down API encapsulation which simplified the most common data access operations (DbConnection -> DbCommand -> DbParameter -> DbDataReader) into a single class DbAccess, to help application with a clean DAL, avoid over-packing and redundant-copy of data transfer. 2) A booster for writing mass data onto database. Base on a rational utilization of database concurrency and a effective utilization of network bandwidth. Tabular AMO 2012 The sample is made of two project parts. The first part is a library of functions to manage tabular models -AMO2Tabular V2-. The second part is a sample to build a tabular model -AdventureWorks Tabular AMO 2012- using the AMO2Tabular library; the created model is similar to the ‘AdventureWorks Tabular Model 2012. SQL Server Analysis Services Product Samples SQL Server Analysis Services provides, a unified and integrated view of all your business data as the foundation for all of your traditional reporting, online analytical processing (OLAP) analysis, Key Performance Indicator (KPI) scorecards, and data mining. Analysis Services Samples for SQL Server 2008 R2 This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples you will also need to download the AdventureWorks family of databases. SQL Server Reporting Services Product Samples This project contains Reporting Services samples released with Microsoft SQL Server product. These samples are in the following five categories: Application Samples, Extension Samples, Model Samples, Report Samples, and Script Samples. If you are interested in contributing Reporting Services samples, please let us know by posting in the developers’ forum. Reporting Services Samples for SQL Server 2008 R2 This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2 PCU1. For many of these samples you will also need to download the AdventureWorks family of databases. SQL Server Integration Services Product Samples This project contains Integration Services samples released with Microsoft SQL Server product. These samples are in the following two categories: Package Samples and Programming Samples. If you are interested in contributing Integration Services samples, please let us know by posting in the developers’ forum. 
Integration Services Samples for SQL Server 2008 R2 This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples you will also need to download the AdventureWorks family of databases. Windows Azure SQL Reporting Admin Sample The SQLReportingAdmin sample for Windows Azure SQL Reporting demonstrates the usage of SQL Reporting APIs, and manages (add/update/delete) permissions of SQL Reporting users. Windows Azure SQL Reporting ReportViewer-SOAP API usage sample These sample projects demonstrate how to embed a Microsoft ReportViewer control that points to reports hosted on SQL Reporting report servers and how to use SQL Reporting SOAP APIs in your Windows Azure Web application. Enterprise Library 5.0 – Integration Pack for Windows Azure This NuGet package contains a zip file with the source code for the Enterprise Library Integration Pack for Windows Azure.  Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Sample Database

    Read the article

  • Systems Solutions at COLLABORATE12

    - by ferhat
    Want to connect with fellow Oracle users and learn more about how to maximize your Oracle software environments with Oracle Systems? Pack your bags for Las Vegas! COLLABORATE 12 is right around the corner! The COLLABORATE 12 Conference will be held at the Mandalay Bay in Las Vegas, NV, 22-26 April, 2012. This is an event designed and delivered by users just like you, with sessions, interactive panel discussions and hands-on learning opportunities packed with first-hand experiences, case studies and practical "how-to" content. This year's event includes a number of educational sessions and demos for users interested in learning from the experts how to use Oracle Optimized Solutions to get the most out of their Oracle technology and application software. Oracle Optimized Solutions are proven blueprints that eliminate integration guesswork by combining best-in-class hardware and software components to deliver complete system architectures that are fully tested, and include documented best practices that reduce integration risks and deliver better application performance. And because they are highly flexible by design, Oracle Optimized Solutions can be implemented as an end-to-end solution or easily adapted into existing environments. Follow Oracle Infrared at Twitter, Facebook, Google+, and LinkedIn to catch the latest news, developments, announcements, and inside views from Oracle Optimized Solutions. Please come by our Exhibition Booth #1273 to see the demos and meet 1-1 with the experts behind a number of Oracle Optimized Solutions, including those for JD Edwards EnterpriseOne, E-Business Suite, PeopleSoft HCM, Oracle WebCenter, and Oracle Database.
    Exhibitor Showcase Booth #1273 (Day / Time / Title):
    - Monday, April 23: 6:00 pm - 8:00 pm - Welcome Reception in the Exhibitor Showcase
    - Tuesday, April 24: 10:15 am - 4:00 pm - Exhibitor Showcase Open
    - Tuesday, April 24: 1:00 pm - 2:00 pm - Dedicated Exhibitor Showcase Time
    - Tuesday, April 24: 5:30 pm - 7:00 pm - Exhibitor Showcase Happy Hour
    - Wednesday, April 25: 10:30 am - 3:00 pm - Exhibitor Showcase Open
    - Wednesday, April 25: 2:15 pm - 3:00 pm - Afternoon Break in Exhibitor Showcase
    There are also a number of deep-dive, educational sessions covering deployment best practices using Oracle's engineered systems and best-in-class hardware, operating system and virtualization technologies.
    Education Sessions (Day / Time / Title / Location):
    - Monday, April 23: 9:45 am - 10:45 am - Architecting and Implementing Backup and Recovery Solutions - Surf E
    - Tuesday, April 24: 2:00 pm - 3:00 pm - Oracle's High Performance Systems for JD Edwards EnterpriseOne - Mandalay Bay GH
    - Tuesday, April 24: 4:30 pm - 5:30 pm - Virtualization Boot Camp: What's New with Oracle VM Server for x86 - Mandalay Bay C
    - Tuesday, April 24: 9:30 am - 10:30 am - Oracle on Oracle VM - Expert Panel - Mandalay Bay L
    - Wednesday, April 25: 9:30 am - 10:30 am - Cloud Computing Directions: Part II Understanding Oracle's Cloud Directions - South Seas E
    And don't forget the keynotes and software roadmap sessions!
    Keynotes and Roadmap Sessions (Day / Time / Title / Location):
    - Sunday, April 22: 3:20 pm - 4:20 pm - Oracle's Cloud Computing Strategy - Breakers B
    - Monday, April 23: 11:00 am - 12:00 pm - JD Edwards - Vision, Promises and Execution: IT'S THE WAY WE ROLL and Why it Matters! - Mandalay Bay A
    - Monday, April 23: 11:00 am - 12:00 pm - PeopleSoft Executive Update and Roadmap - Mandalay Bay J
    - Monday, April 23: 1:15 pm - 2:15 pm - Oracle Database - Engineered for Innovation - Mandalay Bay L
    - Monday, April 23: 2:30 pm - 3:30 pm - Oracle E-Business Suite Applications Strategy and General Manager Update - Mandalay Bay D
    - Tuesday, April 24: 9:15 am - 10:15 am - IT at Oracle: The Art of IT Transformation to Enable Business Growth - Mandalay Bay Ballroom H

    Read the article

  • SQL SERVER – 3 Challenges for DBA and Smart Solutions

    - by Pinal Dave
    A developer's life is never easy. A DBA's life is even crazier.
    DBA's Life
    When a developer wakes up in the morning, most of the time they have no idea what different challenges they are going to face that day. Of course, most developers know the project and roadmap they are working on. However, developers have no clue what coding challenges they are going to face that day. A DBA's life is even crazier. When a DBA wakes up in the morning, they are often thankful that they were not disturbed during the night due to server issues. The very next thing they wish is that they do not face a challenge they can't solve that day. The problems DBAs face every single day are mostly unpredictable and they just have to solve them as they come during the day. Still, the life of a DBA is not always bad. There are always ways and methods by which one can overcome various challenges. Let us see three of the challenges and how a DBA can use various tools to overcome them.
    Challenge #1: Synchronize Data Across Servers
    A very common challenge DBAs receive is that they have to synchronize data across servers. If you try to write that up manually, it may take forever to accomplish the task. It is nearly impossible to do the same with the help of T-SQL alone (a rough sketch of the manual route is shown after Challenge #3). However, thankfully there are tools like dbForge Studio which can save the day and synchronize data across servers. Read my detailed blog post about the same over here: SQL SERVER – Synchronize Data Exclusively with T-SQL.
    Challenge #2: SQL Report Builder
    DBAs are often asked to build reports on the go. It really annoys DBAs, but hardly anyone cares about it. No matter how busy a DBA is, they are just called upon to build reports on things on very short notice. I personally like to avoid any task which is given to me accidentally, and personally I find building reports boring. I would rather spend time on high availability, disaster recovery, and performance tuning than on building reports. I use a third-party SQL tool when I have to work with SQL reports. Others have extended reporting capabilities. The latter group of products includes the SQL report builder built into dbForge Studio for SQL Server. I have blogged about this earlier over here: SQL SERVER – SQL Report Builder in dbForge Studio for SQL Server.
    Challenge #3: Work with the OTHER Database
    The manager does not understand that MySQL is different from SQL Server and SQL Server is different from Oracle. For them everything is the same. In my career, hundreds of times I have faced a situation where I am given a database to manage or some task to do while the regular DBA is on vacation or leave. When I try to explain that I do not understand the underlying technology, I am usually told that my manager trusts me and I can do anything. Honestly, I can't, but I hardly dare to argue. I fall back on a third-party tool to manage a database when it is not in my comfort zone. For example, I was once given a MySQL performance tuning task (at that time I did not know MySQL so well). To simplify the search for a problem query, we can use MySQL Profiler in dbForge Studio for MySQL. It provides such commands as a Query Profiling Mode and Generate Execution Plan. Here is the blog post discussing the same: MySQL – Profiler: A Simple and Convenient Tool for Profiling SQL Queries. Well, that's it! There were many different occasions when I have been saved by the tool. Maybe some other day I will write part 2 of this blog post.
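    To illustrate why the purely manual T-SQL route from Challenge #1 quickly becomes painful, here is a minimal, hypothetical sketch of synchronizing a single table from a remote server via a linked server and MERGE. The linked server name (RemoteSrv), database, table and column names are all assumptions for illustration, not from the original post; a real synchronization would have to repeat and adapt this for every table and handle identity columns, constraints and conflicts, which is exactly the work the tooling automates:

        -- Hypothetical one-table sync sketch: pull rows from a linked server named [RemoteSrv]
        -- and merge them into the local copy. All names below are placeholders.
        MERGE INTO dbo.Customers AS target
        USING (SELECT CustomerID, Name, Email, ModifiedDate
               FROM [RemoteSrv].[SalesDb].[dbo].[Customers]) AS source
              ON target.CustomerID = source.CustomerID
        WHEN MATCHED AND source.ModifiedDate > target.ModifiedDate THEN
            UPDATE SET Name         = source.Name,
                       Email        = source.Email,
                       ModifiedDate = source.ModifiedDate
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (CustomerID, Name, Email, ModifiedDate)
            VALUES (source.CustomerID, source.Name, source.Email, source.ModifiedDate)
        WHEN NOT MATCHED BY SOURCE THEN
            DELETE;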
    Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL. Tagged: Devart, SQL Tool

    Read the article

  • Multiplayer / Networking options for a 2D game with physics

    - by lahmas
    Summary: My 50% finished 2D sidescroller with Box2D as physics engine should have multiplayer support in the final version. However, the current code is just a singleplayer game. What should I do now? And more important, how should I implement multiplayer and combine it with singleplayer? Is it a bad idea to code the singleplayer mode separately from multiplayer mode (like Notch did with Minecraft)? The performance in singleplayer should be as good as possible (simulating physics with a loopback server to implement singleplayer mode would be a problem there).
    Full background / questions: I'm working on a relatively large 2D game project in C++, with physics as a core element of it. (I use Box2D for that.) The finished game should have full multiplayer support, but I made the mistake of not planning the networking part properly and have basically worked on a singleplayer game until now. I thought that multiplayer support could be added to the almost finished singleplayer game in a relatively easy and clear way, but from what I have read this is wrong. I even read that a multiplayer game should be programmed as one from the beginning, with the singleplayer mode actually just consisting of hosting an invisible local server and connecting to it via loopback. (I found out that most FPS game engines do it that way; an example would be Source.) So here I am, with my half-finished 2D sidescroller game, and I don't really know how to go on. Simply continuing to work on the singleplayer / client seems useless to me now, as I'd have to recode and refactor even more later. First, a general question to anybody who has possibly found himself in a situation like this: How should I proceed? Then, the more specific one - I have been trying to find out how I can approach the networking part for my game. Possible solutions:
    - Invisible / loopback server for singleplayer: This would have the advantage that there basically is no difference between singleplayer and multiplayer mode. Not much additional code would be needed. A big disadvantage: performance and other limitations in singleplayer. There would be two physics simulations running, one for the client and one for the loopback server. Even if you work around this by providing a direct path for the data from the loopback server, through direct communication between the threads for example, the singleplayer would be limited. This is a problem because people should be allowed to play around with masses of objects at once.
    - Separated singleplayer / multiplayer mode: There would be no server involved in singleplayer mode. I'm not really sure how this would work. But at least I think that there would be a lot of additional work, because all of the singleplayer features would have to be re-implemented or glued to multiplayer mode.
    - Multiplayer mode as a module for singleplayer: This is merely a quick thought I had. Multiplayer could consist of a singleplayer game with an additional networking module loaded and connected to a server, which sends and receives data and updates the singleplayer world.
    In retrospect, I regret not having planned the multiplayer mode earlier. I'm really stuck at this point and I hope that somebody here is able to help me!

    Read the article

  • Oracle Social Network Developer Challenge: TEAM Informatics

    - by Kellsey Ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. Here comes another Oracle Social Network Developer Challenge entry, this one courtesy of TEAM Informatics (@teaminformatics). As their name suggests, their entry was a true team effort, featuring the work of Jon Chartrand, Deepthi Sanikommu, Dmitry Shtulman, Raghavendra Joshi, and Daniel Stitely with Wayne Boerger doing the presentation honors. Speaking of the presentation, Wayne’s laptop wouldn’t project onto the plasma we had in the OTN Lounge, but luckily, Noel (@noelportugal) had his iPad and VGA dongle in his backpack of goodies, so they were able to improvise by using the iPad camera to capture Wayne’s demo and project the video to the plasma. Code will find a way. Anyway, TEAM built Do Over, an integration with Atlassian’s JIRA, coincidentally something I’ve chatted with Rich (@rmanalan) about in the past. The basic idea is simple; integrate JIRA issues with Oracle Social Network to expand and centralize the conversation around issue resolution. In Dmitry’s words: We were able to put together a team on fairly short notice and, after batting a few ideas around, decided to pursue an integration with JIRA, an issue and project tracking tool used in-house at TEAM. After getting to know WebCenter Social, we saw immediate benefits that a JIRA integration could bring, primarily due to the fact that JIRA only allows assignment of an issue to one person at a time. Integrating Social would allow collaboration and issue resolution to happen right from the JIRA Issue interface. TEAM tackled a very common pain point among developers, i.e. including everyone who needs to be involved in issue resolution in a single thread. If you’ve ever fixed bugs or participated in that process, you’ll know that not everyone has access to the issue resolution system, which makes consolidating discussion time-consuming and fragmented. Why? Because we typically use email as the tool for collaboration. Oracle Social Network allows all parties involved to work in a single, private and secure conversation, and through its RESTful Public API, information from external systems like JIRA can be brought in for context. TEAM only had time to address half the solution, but given more time, I’m sure they would have made the integration bidirectional, allowing for relevant commentary to be pushed back to JIRA, closing the loop. Here are some screenshots of their integration. When Oracle Social Network is released, TEAM will have something they use internally to work on issues, and maybe they’ll even productize their work and add it to the Atlassian Marketplace so that other JIRA users can benefit from the combination of Oracle Social Network and JIRA. Thanks to everyone at TEAM for participating in our challenge. We hope they had a good experience. Look for the details of the other entries this week. Be sure to check out a full recap from Dmitry over on the TEAM blog.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 23 (sys.dm_db_index_usage_stats)

    - by Tamarick Hill
    The sys.dm_db_index_usage_stats Dynamic Management View is used to return usage information about the various indexes on your SQL Server instance. Let’s have a look at this DMV against our AdventureWorks2012 database so we can examine the information returned. SELECT * FROM sys.dm_db_index_usage_stats WHERE database_id = db_id('AdventureWorks2012') The first three columns in the result set represent the database_id, object_id, and index_id of a given row. You can join these columns back to other system tables to extract the actual database, object, and index names. The next four columns are probably the most beneficial columns within this DMV. First, the user_seeks column represents the number of times that a user query caused a seek operation against a particular index. The user_scans column represents how many times a user query caused a scan operation on a particular index. The user_lookups column represents how many times an index was used to perform a lookup operation. The user_updates column refers to how many times an index had to be updated due to a write operation that affected a particular index. The last_user_seek, last_user_scan, last_user_lookup, and last_user_update columns provide you with DATETIME information about when the last user seek, scan, lookup, or update operation was performed. The remaining columns in the result set are the same as the ones we previously discussed, except that instead of the various operations being generated from user requests, they are generated from system background requests. This is an extremely useful DMV and one of my favorites when it comes to index maintenance. As we all know, indexes are extremely beneficial for improving the performance of your read operations. But indexes do have a downside as well: they slow down the performance of your write operations, and they also require additional resources for storage. For this reason, in my opinion, it is important to regularly analyze the indexes on your system to make sure the indexes you have are being used efficiently. My AdventureWorks2012 database is only used for demonstrating or testing things, so I don’t have a lot of meaningful information here, but on a production system, if you see an index that is never getting any seeks, scans, or lookups, but is constantly getting a ton of updates, it more than likely would be a good candidate for you to consider removing. You would not be getting much benefit from the index, yet it is incurring a cost on your system due to it constantly having to be updated for your write operations, not to mention the additional storage it is consuming. You should regularly analyze your indexes to ensure you keep your database systems as efficient and lean as possible. One thing to note is that these DMV statistics are reset every time SQL Server is restarted. Therefore it would not be a wise idea to make decisions about removing indexes right after a server reboot or a cluster roll. If you restart your SQL Server instances frequently, for example if you schedule weekly/monthly cluster rolls, then you may not capture indexes that are only used by weekly/monthly reports that run for business users. And if you remove them, you may have some upset people at your desk on Monday morning.
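For example, a minimal sketch of such a query, joining the usage stats back to sys.indexes to surface nonclustered indexes with many writes but no reads since the counters were last reset, might look something like this:

-- Candidate unused indexes: plenty of updates, no reads since the last reset.
SELECT OBJECT_NAME(s.object_id, s.database_id) AS table_name,
       i.name                                  AS index_name,
       s.user_updates,
       s.user_seeks + s.user_scans + s.user_lookups AS user_reads
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
    ON i.object_id = s.object_id
   AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID('AdventureWorks2012')
  AND i.type_desc = 'NONCLUSTERED'
  AND s.user_seeks + s.user_scans + s.user_lookups = 0
ORDER BY s.user_updates DESC;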
If you would like to begin analyzing your indexes to possibly remove the ones that your system is not using, I would recommend building a process to load this DMV information into a table on a scheduled basis, depending on how frequently you perform an operation that would reset these statistics. You can then analyze the data over a period of time to get a more accurate view of which indexes are really being used and which ones are not. For more information about this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms188755.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • links for 2011-01-13

    - by Bob Rhubart
    Webcast: Oracle WebCenter Suite: Giving Users a Modern Experience Speakers: Vince Casarez (VP Enterprise 2.0 Product Management, Oracle),  Erin Smith (Consulting Practice Manager – Portals, Oracle), Robert Wessa (Consulting Technical Director – Enterprise 2.0 Infrastructure, Oracle)  (tags: oracle otn webcenter webcast enterprise2.0) Oracle & StickyMinds.com Webcast: Load Testing Techniques for Enterprise Applications Mughees Minhas, Senior Director of Product Management, Oracle Server Technologies, answers your questions about the latest techniques for effectively and efficiently testing enterprise application performance. Thursday, January 20, 2011. 10am PT / 1pm ET. (tags: oracle otn stickymings webcast) Bay Area Coherence Special Interest Group (BACSIG) Jan 20, 5:30pm - 8:00pm PT. Presentations: Coherence 3.6 Clustering Features (Rob Lee), Efficient Management and Update of Coherence Clusters to Reduce Down Time ( Rao Bhethanabotla), How To Build a Coherence Practice (Christer Fahlgren). (tags: oracle, otn coherence bacsig) Podcast Show Notes: William Ulrich and Neal McWhorter on Business Architecture (ArchBeat) A four-part interview with the authors of  "Business Architecture: The Art and Practice of Business Transformation"  (tags: oracle otn podcast businessarchitecture) John Brunswick: Overlapping Social Networks in your Enterprise? Strategies to Understand and Govern "Overall it is important to consider if tacit knowledge being captured by the social systems is able to be retained and somehow summarized into an overall organizational directory." - John Brunswick (tags: oracle otn enterprise2.0 socialnetworking) Coherence - How to develop a custom push replication publisher (Middlewarepedia) Cosmin Todur describes "a way of developing a custom push replication publisher that publishes data to a database via JDBC."  (tags: oracle coherence grid) Aino Andriessen: Oracle Diagnostics Logging (ODL) for application development "Logging is a very important aspect of application development as it offers run-time access to the behaviour and data of the application. It’s important for debugging purposes but also to investigate exception situations on production." -- Aino Andriessen (tags: oracle odl java jdeveloper weblogic) Security issues when upgrading a Web Catalog from 10g to 11g Oracle BI By Bakboord "I blogged about upgrading from Oracle BI EE 10g to Oracle BI EE 11g R1 earlier. Although this is a very straight forward process, you could end up with some security issues." -- Daan Bakboord (tags: oracle businessintelligence obiee) Angelo Santagata: SOA Composite Sensors : Good Practice "A good best practice is that for any composites you create, consider publishing a composite sensor value using a primary key of some sort , e.g. orderId, that way if you need to manipulate/query composites you can easily look up the instanceId using the sensorid." - Angelo Santagata (tags: oracle soa sca) Javier Ductor: WebCenter Spaces 11g PS2 Task Flow Customization "Previously, I wrote about Spaces Template Customization. In order to adapt Spaces to customers prototype, it was necessary to change template and skin, as well as the members task flow. In this entry, I describe how to customize this task flow." - Javier Ductor (tags: oracle otn enterprise2.0 webcenter) RonBatra's blog: Cloud Computing Series: VI: Industry Directions "When someone says their 'Product/Solution is in the Cloud,' ask them basic questions to seperate the spin from the reality. 
I would start with 'tell me what that means' and see which way the conversation goes." - Oracle ACE Director Ron Batra (tags: oracle otn oracleace cloud) First JSRs Proposed for Java EE 7 (The Java Source) With the approval of Java SE 7 and Java SE 8 JSRs last month, attention is now shifting towards the Java EE platform. (tags: oracle java jsr javaee)

    Read the article

  • Travelling MVP #4: DevReach 2012

    - by DigiMortal
    Our next stop after Varna was Sofia, where DevReach happens. DevReach is one of my favorite conferences in Europe because of its sensible prices and strong speaker line-up. They also have a VIP party after the conference, and this is a good event to meet people you don’t see every day, have some discussions with speakers and find new friends. Our trip from Varna to Sofia took about 6.5 hours on the bus. As I was tired from the last evening it wasn’t a problem for me, as I slept half the trip. After a smoking break in Veliko Tarnovo I watched movies on the bus TV. We had supper later in the city center at Happy’s – a place with good meat dishes and nice service. And the next day it began…. :) DevReach 2012 DevReach is usually held in Arena Mladost. It’s near the airport and the Telerik office. The event is organized by local MVP Martin Kulov together with Telerik. Two days of sessions with strong speakers is reason enough for me to visit an event. Some topics covered by the sessions: Windows 8 development, web development, SharePoint, Windows Azure, Windows Phone, architecture and Visual Studio. Practically everybody can find an interesting session in every time slot. As the Arena is not huge, it is very easy to go from one session to another if the selected session for a time slot is not what you expected. On the second floor of the Arena there are many places where you can eat. There are simple junk-food places like Burger King and also some restaurants. If you are hungry you will find something to your taste for sure. Also you can buy beer if it is too hot outside :) The weather was very good for October – practically an Estonian summer – 25C and over. Sessions I visited Here is the list of sessions I visited at DevReach 2012:
DevReach 2012 Opening & Welcome Message with Martin Kulov and Stephen Forte
Principled N-Tier Solution Design with Steve Smith
Data Patterns for the Cloud with Brian Randell
.NET Garbage Collection Performance Tips with Sasha Goldshtein
Building Secured, Scalable, Low-latency Web Applications with the Windows Azure Platform with Ido Flatow
It’s a Knockout! MVVM Style Web Applications with Charles Nurse
Web Application Architecture – Lessons Learned from Adobe Brackets with Brian Rinaldi
Demystifying Visual Studio 2012 Performance Tools with Martin Kulov
SPvNext – A Look At All the Exciting And New Features In SharePoint with Sahil Malik
Portable Libraries – Why You Should Care with Lino Tadros
I missed some sessions because of some death march projects that are going on and that I have to coordinate, but it was not a big loss, as I had time to walk around the session venue neighborhood and see Sofia Business Park. Next year again! I will be there again next year and hopefully more guys from Estonia will join me. I think it’s a good idea to take a short vacation for DevReach and do things like we did this time – Bucharest, Varna, Sofia. It’s also a good idea to plan some more free time so we are not in too much of a hurry and have no work stuff to do on the trip. So far this trip has been one of the best trips I have organized, and I will go and meet all those guys in this region again! :)

    Read the article

  • Few events I&rsquo;m speaking at in early 2013

    - by Mladen Prajdic
    2013 has started great and the SQL community is already brimming with events. At some of these events you can come say hi. I’ll be glad if you do! These are the events with dates and locations that I know I’ll be speaking at so far. February 16th: SQL Saturday #198 - Vancouver, Canada. The session I’ll present in Vancouver is SQL Impossible: Restoring/Undeleting a table. Yes, you read the title right. No, it’s not about the usual "one table per partition" and "restore full backup then copy the data over" methods. No, there are no 3rd party tools involved. Just you and your SQL Server. Yes, it’s crazy. No, it’s not for production purposes. And yes, that’s why it’s so much fun. Prepare to dive into the world of data pages, log records, deletes, truncates and backups and how it all works together to get your table back from the endless void. Want to know more? Come and see! This is an advanced level session where we’ll dive into the internals of data pages, transaction log records and page restores. March 8th-9th: SQL Saturday #194 - Exeter, UK. In Exeter I’ll be presenting twice. On the first day I’ll have a full day precon titled: From SQL Traces to Extended Events - The next big switch. This pre-con will give you insight into both of the current tracing technologies in SQL Server. The old SQL Trace, which has served us well over the past 10 or so years, is on its way out because the overhead and the detail it produces are no longer enough to deal with today’s loads. The new Extended Events are a lightweight tracing mechanism built directly into the SQLOS, thus giving us information SQL Trace just couldn’t. They were designed and built with performance in mind and it shows. Mastering Extended Events requires learning at least one new skill: XML querying. The second session, which I’ll give on Saturday, is titled: SQL Injection from website to SQL Server. SQL Injection is still one of the biggest reasons various websites and applications get hacked. The solution, as everyone tells us, is simple: use SQL parameters. But is that enough? In this session we’ll look at how an attacker would go about using SQL Injection to gain access to your database, see its schema and data, take over the server, upload files and do various other mischief on your domain. This is a fun session that always brings out a few laughs in the audience, because they didn’t realize what can be done. April 23rd-25th: NTK conference - Bled, Slovenia (Slovenian website only). This is a conference with history; this year marks its 18th year running. It’s a relatively large IT conference that focuses on various Microsoft technologies like .Net, Azure, SQL Server, Exchange, Security, etc… The main sessions’ language is Slovenian, but this is slowly changing, so it’s becoming more interesting for foreign attendees. This year it’s happening in the beautiful town of Bled in the Alps. The scenery alone is worth the visit, wouldn’t you agree? And this year there are quite a few well known speakers present! The session title isn’t known yet. May 2nd-4th: SQL Bits XI – Nottingham, UK. SQL Bits is the largest SQL Server conference in Europe. It’s a 3-day conference with top speakers and content all dedicated to SQL Server. The session I’ll present here is an hour-long version of the precon I’ll give in Exeter:
From SQL Traces to Extended Events - The next big switch. The session description is the same as for the Exeter precon, but we’ll focus more on how Extended Events work, with only a brief overview of the old SQL Trace architecture.
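For readers who have not yet played with Extended Events, a minimal session might look something like the sketch below (an illustrative example only, with a made-up session name); reading the target is where the XML querying skill mentioned above comes in.

-- Sketch: capture completed statements into a ring_buffer target.
CREATE EVENT SESSION [QuickTrace] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION [QuickTrace] ON SERVER STATE = START;

-- The target data comes back as XML, hence the new skill to learn.
SELECT CAST(t.target_data AS XML) AS captured_events
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t
    ON t.event_session_address = s.address
WHERE s.name = N'QuickTrace';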

    Read the article

  • ATG Live Webcast: Advanced E-Business Suite Architectures

    - by BillSawyer
    I am pleased to announce the ATG Live Webcast event for Dec. 8th, 2011: Advanced E-Business Suite Architectures. Join Elke Phelps, Senior Principal Product Manager, and Sriram Veeraraghavan, Senior Principal Software Engineer, as they discuss advanced E-Business Suite architectures that can help you improve performance, scalability, business continuity, utilization, provisioning, and security. This one-hour webcast provides an overview of advanced architectures with Q&A. This session will cover the latest advanced architectural options, including the use of Oracle database high-availability features and functions such as Real Application Clusters, ASM, Active Data Guard, clouds, virtualization, Oracle VM, high-availability and load-balancing architectures, WebLogic Server, and more. This session will also cover the latest updates to systems management tools like AutoConfig, and may also include sneak previews of upcoming functionality. This event is targeted to architects, system administrators, DBAs, developers, and implementers. The agenda for the Advanced E-Business Suite Architectures webcast includes the following topics: Advanced Oracle E-Business Suite Architectures, Optional External Integrations, Oracle E-Business Suite 12.2, Improving Performance and Scalability, Providing Business Continuity, Improving Utilization and Provisioning, Improving Security.
Date: Thursday, December 8, 2011
Time: 8:00 AM - 9:00 AM Pacific Standard Time
Presenters: Elke Phelps, Senior Principal Product Manager; Sriram Veeraraghavan, Senior Principal Software Engineer
Webcast Registration Link (Preregistration is optional but encouraged)
To hear the audio feed: Domestic Participant Dial-In Number: 877-697-8128; International Participant Dial-In Number: 706-634-9568; Additional International Dial-In Numbers Link; Dial-In Passcode: 98514
To see the presentation: The Direct Access Web Conference details are: Website URL: https://ouweb.webex.com; Meeting Number: 273291684
If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay. And you can find our archive of our past webcasts and training at http://blogs.oracle.com/stevenChan/entry/e_business_suite_technology_learning
If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • SQL SERVER – Guest Post – Glenn Berry – Wait Type – Day 26 of 28

    - by pinaldave
    Glenn Berry works as a Database Architect at NewsGator Technologies in Denver, CO. He is a SQL Server MVP, and has a whole collection of Microsoft certifications, including MCITP, MCDBA, MCSE, MCSD, MCAD, and MCTS. He is also an Adjunct Faculty member at University College – University of Denver, where he has been teaching since 2000. He is a wonderful blogger and blogs often here. I am a big fan of Glenn’s Dynamic Management View (DMV) scripts. His scripts are extremely popular, and the reality is that he inspired me to start this series with his famous DMV query, which I mentioned in the very first wait stats blog post (I had forgotten to request his permission to re-use the script, but when asked later he wholeheartedly approved it). Here is his excellent blog post on the subject of wait stats: Analyzing cumulative wait stats in SQL Server 2005 and above has become a popular and effective technique for diagnosing performance issues and further focusing your troubleshooting and diagnostic efforts. Rather than just guessing about what resource(s) SQL Server is waiting on, you can actually find out by running a relatively simple DMV query. Once you know what resources SQL Server is spending the most time waiting on, you can run more specific queries that focus on that resource to get a better idea of what is causing the problem. I do want to throw out a few caveats about using wait stats as a diagnostic tool. First, they are most useful when your SQL Server instance is experiencing performance problems. If your instance is running well, with no indication of any resource pressure from other sources, then you should not worry that much about what the top wait types are. SQL Server will always be waiting on some resource, but many wait types are quite benign, and can be safely ignored. In spite of this, I quite often see experienced DBAs obsessing over the top wait type, even when their SQL Server instance is running extremely well. Second, I often see DBAs jump to the wrong conclusion based on seeing a particular well-known wait type. A good example is CXPACKET waits. People typically jump to the conclusion that high CXPACKET waits mean that they should immediately change their instance-level MAXDOP setting to 1. This is not always the best solution. You need to consider your workload type, and look carefully for any important “missing” indexes that might be causing the query optimizer to use a parallel plan to compensate for the missing index. In this case, correcting the index problem is usually a better solution than changing MAXDOP, since you are curing the disease rather than just treating the symptom. Finally, you should get in the habit of clearing out your cumulative wait stats with the DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR); command. This is especially important if you have made any configuration or index changes, or if your workload has changed recently. Otherwise, your cumulative wait stats will be polluted with the old stats from weeks or months ago (since the last time SQL Server was started or the stats were cleared). If you make a change to your SQL Server instance, or add an index, you should clear out your wait stats, and then wait a while to see what your new top wait stats are. At any rate, enjoy Pinal Dave’s series on Wait Stats. This blog post has been written by Glenn Berry (Twitter | Blog) Read all the posts in the Wait Types and Queue series.
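For anyone who has not seen the kind of "relatively simple DMV query" Glenn refers to, a stripped-down sketch of the idea looks something like the following; this is a simplified illustration, not Glenn’s published script, which filters a far longer list of benign wait types.

-- Cumulative waits by total wait time, excluding a few benign wait types.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms / 1000.0        AS wait_time_sec,
       signal_wait_time_ms / 1000.0 AS signal_wait_sec
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TASK_STOP',
                        N'REQUEST_FOR_DEADLOCK_SEARCH', N'XE_TIMER_EVENT',
                        N'CHECKPOINT_QUEUE', N'WAITFOR')
ORDER BY wait_time_ms DESC;

-- Reset the counters after a configuration or index change:
DBCC SQLPERF(N'sys.dm_os_wait_stats', CLEAR);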
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • A temporary disagreement

    - by Tony Davis
    Last month, Phil Factor caused a furore amongst some MVPs with an article that attempted to offer simple advice to developers regarding the use of table variables, versus local and global temporary tables, in their code. Phil makes clear that table variables do come with some fairly major limitations (no distribution statistics, no parallel query plans for queries that modify table variables), but goes on to suggest that for reasonably small-scale strategic uses, and with a bit of due care and testing, table variables are a "good thing". Not everyone shares his opinion; in fact, I imagine he was rather aghast to learn that there were those who felt his article was akin to pulling the pin out of a grenade and tossing it into the database; table variables should be avoided in almost all cases, according to their advice, in favour of temp tables. In other words, a fairly major feature of SQL Server should be more-or-less 'off limits' to developers. The problem with temp tables is that, because they are scoped either to the procedure or the connection, it is easy to allow them to hang around for too long, eating up precious memory and bulking up the shared tempdb database. Unless they are explicitly dropped, global temporary tables, and local temporary tables created within a connection rather than within a stored procedure, will persist until the connection is closed or, with connection pooling, until the connection is reused. It's also quite common with ASP.NET applications to have connection leaks, as Bill Vaughn explains in his chapter in the "SQL Server Deep Dives" book, meaning that the web page exits without closing the connection object, maybe due to an error condition. This will then hang around in the heap for what might be hours before being picked up by the garbage collector. Table variables are much safer in this regard, since they are batch-scoped and so are cleaned up automatically once the batch is complete, which also means that they are intuitive to use for the developer because they conform to scoping rules that are closer to those in procedural code. On the surface, then, they seem an ideal way to deal with issues related to tempdb memory hogging. So why did Phil qualify his recommendation to use table variables? This is another of those cases where, like scalar UDFs and table-valued multi-statement UDFs, developers can sometimes get into trouble with a relatively benign-looking feature, due to the way it's been implemented in SQL Server. Once again the biggest problem is how they are handled internally by the SQL Server query optimizer, which can make very poor choices for JOIN orders and so on, in the absence of statistics, especially when joining to tables with highly-skewed data. The resulting execution plans can be horrible, as will be the resulting performance. If the JOIN is to a large table, that will hurt. Ideally, Microsoft would simply fix this issue so that developers can't get burned in this way; table variables have been around since SQL Server 2000, so Microsoft has had a bit of time to get it right. As I commented in regard to UDFs, when developers discover issues like this with such standard features, the database becomes an alien planet to them, where death lurks around each corner, and they continue to avoid these "killer" features years after the problems have eventually been resolved. In the meantime, what is the right approach?
Is it to say "hammers can kill, don't ever use hammers", or is it to try to explain, as Phil's article and follow-up blog post have tried to do, what the feature was intended for, why care must be applied in its use, and so enable developers to make properly-informed decisions, without requiring them to delve deep into the inner workings of SQL Server? Cheers, Tony.
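PS: for anyone who wants to see the optimizer behaviour being argued about, a quick experiment along the following lines makes it visible (a rough sketch assuming the AdventureWorks sample database, not something taken from Phil's article): compare the estimated row counts in the two actual execution plans, and note that the table variable is typically estimated at a single row.

-- Sketch: table variable vs. temporary table, same data, different estimates.
DECLARE @ids TABLE (ProductID INT PRIMARY KEY);
INSERT INTO @ids
SELECT ProductID FROM Production.Product;       -- a few hundred rows

CREATE TABLE #ids (ProductID INT PRIMARY KEY);
INSERT INTO #ids
SELECT ProductID FROM Production.Product;

-- No statistics on the table variable: often estimated as 1 row.
SELECT d.*
FROM Sales.SalesOrderDetail AS d
JOIN @ids AS i ON i.ProductID = d.ProductID;

-- Statistics on the temp table: a realistic estimate, and often a better plan.
SELECT d.*
FROM Sales.SalesOrderDetail AS d
JOIN #ids AS i ON i.ProductID = d.ProductID;

DROP TABLE #ids;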

    Read the article

  • Oracle Big Data Software Downloads

    - by Mike.Hallett(at)Oracle-BI&EPM
    Companies have been making business decisions for decades based on transactional data stored in relational databases. Beyond that critical data, is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. Oracle offers a broad integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships. Oracle Big Data Connectors Downloads here, includes: Oracle SQL Connector for Hadoop Distributed File System Release 2.1.0 Oracle Loader for Hadoop Release 2.1.0 Oracle Data Integrator Companion 11g Oracle R Connector for Hadoop v 2.1 Oracle Big Data Documentation The Oracle Big Data solution offers an integrated portfolio of products to help you organize and analyze your diverse data sources alongside your existing data to find new insights and capitalize on hidden relationships. Oracle Big Data, Release 2.2.0 - E41604_01 zip (27.4 MB) Integrated Software and Big Data Connectors User's Guide HTML PDF Oracle Data Integrator (ODI) Application Adapter for Hadoop Apache Hadoop is designed to handle and process data that is typically from data sources that are non-relational and data volumes that are beyond what is handled by relational databases. Typical processing in Hadoop includes data validation and transformations that are programmed as MapReduce jobs. Designing and implementing a MapReduce job usually requires expert programming knowledge. However, when you use Oracle Data Integrator with the Application Adapter for Hadoop, you do not need to write MapReduce jobs. Oracle Data Integrator uses Hive and the Hive Query Language (HiveQL), a SQL-like language for implementing MapReduce jobs. Employing familiar and easy-to-use tools and pre-configured knowledge modules (KMs), the application adapter provides the following capabilities: Loading data into Hadoop from the local file system and HDFS Performing validation and transformation of data within Hadoop Loading processed data from Hadoop to an Oracle database for further processing and generating reports Oracle Database Loader for Hadoop Oracle Loader for Hadoop is an efficient and high-performance loader for fast movement of data from a Hadoop cluster into a table in an Oracle database. It pre-partitions the data if necessary and transforms it into a database-ready format. Oracle Loader for Hadoop is a Java MapReduce application that balances the data across reducers to help maximize performance. Oracle R Connector for Hadoop Oracle R Connector for Hadoop is a collection of R packages that provide: Interfaces to work with Hive tables, the Apache Hadoop compute infrastructure, the local R environment, and Oracle database tables Predictive analytic techniques, written in R or Java as Hadoop MapReduce jobs, that can be applied to data in HDFS files You install and load this package as you would any other R package. 
Using simple R functions, you can perform tasks such as: Access and transform HDFS data using a Hive-enabled transparency layer Use the R language for writing mappers and reducers Copy data between R memory, the local file system, HDFS, Hive, and Oracle databases Schedule R programs to execute as Hadoop MapReduce jobs and return the results to any of those locations Oracle SQL Connector for Hadoop Distributed File System Using Oracle SQL Connector for HDFS, you can use an Oracle Database to access and analyze data residing in Hadoop in these formats: Data Pump files in HDFS Delimited text files in HDFS Hive tables For other file formats, such as JSON files, you can stage the input in Hive tables before using Oracle SQL Connector for HDFS. Oracle SQL Connector for HDFS uses external tables to provide Oracle Database with read access to Hive tables, and to delimited text files and Data Pump files in HDFS. Related Documentation Cloudera's Distribution Including Apache Hadoop Library HTML Oracle R Enterprise HTML Oracle NoSQL Database HTML Recent Blog Posts Big Data Appliance vs. DIY Price Comparison Big Data: Architecture Overview Big Data: Achieve the Impossible in Real-Time Big Data: Vertical Behavioral Analytics Big Data: In-Memory MapReduce Flume and Hive for Log Analytics Building Workflows in Oozie

    Read the article

  • Fast Data: Go Big. Go Fast.

    - by Dain C. Hansen
    For those of you who may have missed it, today’s second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci from EMC outlined the human face of big data with real examples of how big data is transforming our world. And no, not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes as well as folks just trying to get a better hair cut. Next we heard from Thomas Kurian, who talked at length about the important platform characteristics of Oracle’s Cloud and, more specifically, Oracle’s expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle’s Cloud applications. What this means is that Oracle’s Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs using Oracle SOA Suite. It’s an important element of our strategy at Oracle that supports the idea that whether your requirements are private or public, Oracle has a solution in the Cloud for all of your applications, and we give you more deployment choice than any vendor. If this wasn’t enough to get the juices flowing, later that morning we heard from Hasan Rizvi, who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one: Fast Data. Today, Rizvi made an important step in the definition of this term by explaining that he believes it’s a convergence of four essential technology elements: event processing for event filtering and business rules – with Oracle Event Processing; data transformation and loading – with Oracle Data Integrator; real-time replication and integration – with Oracle GoldenGate; and analytics and data discovery – with Oracle Business Intelligence. Each of these four elements can be considered (and architected) together on a single integrated platform that can help customers integrate any type of data (structured, semi-structured), leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process more volume and variety of data at a faster velocity with greater results. Fast data processing (and especially real-time) has always been our credo at Oracle with each one of these products in Fusion Middleware. For example, Oracle GoldenGate continues to be made even faster with the recent 11g R2 release of Oracle GoldenGate, which gives us even greater optimization to Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we’re seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we’re seeing some remarkable performance with customers like NTT Docomo. Check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using event processing and Big Data together. If you missed any of these sessions and keynotes, not to worry. There are on-demand versions available on the Oracle OpenWorld website. You can also check out our upcoming webcast where we will outline some of these new breakthroughs in Data Integration technologies for Big Data, Cloud, and Real-time in more detail.

    Read the article

< Previous Page | 370 371 372 373 374 375 376 377 378 379 380 381  | Next Page >