Search Results

Search found 16971 results on 679 pages for 'blogs'.


  • Frank Ludolph's Last Day at Work

    - by mprove
    Hi Frank, today is your last day at Oracle. I cannot believe that retirement is an alternative to designing software and improving products for decades. I might figure it out myself in a couple of years. Our paths have crossed several times, and I am extremely thankful for that. I still remember my first session on an Apple Lisa. It must have been around 1985. I was still in school, and we were visiting the University of Hamburg to get some orientation on the departments. When I started I chose Informatics, and I suppose the Apple Lisa played a significant role in my decision. Is it fate that I later wrote about the Apple Lisa? I attended your presentation and public demo of the Lisa System at CHI ’98 in Los Angeles. Maybe a video still exists; I should look it up and publish it somewhere. You also had booth duty for Sun Microsystems – presenting HotJava Views, a user interface for a network computer. And you were handing out VHS tapes (!) of Starfire. I still have mine – but no player anymore. Then I joined Sun in 2002, and I guess I popped up in your office each time I came to Santa Clara. The SEED mentoring program finally made it possible for us to exchange and discuss many ideas on the past and future of HCI. Dueling Interaction Models of Personal-Computing and Web-Computing at MEDICI 2007 is one of the results. But do you remember, for instance, our jam session with Phil Clevenger on Hello World? Marvelous! I will miss you at Oracle. Enjoy your life and let’s stay in touch. Matthias

    Read the article

  • ODI 11g – Oracle Multi Table Insert

    - by David Allan
    With the IKM Oracle Multi Table Insert you can generate Oracle-specific DML for inserting into multiple target tables from a single query result – without reprocessing the query or staging its result. When designing this to exploit the IKM, you must split the problem into its reusable parts – the select part goes in one interface (I named it SELECT_PART), then each target goes in a separate interface (INSERT_SPECIAL and INSERT_REGULAR). So for my statement below… /* INSERT_SPECIAL interface */ insert all when 1=1 and (INCOME_LEVEL > 250000) then into SCOTT.CUSTOMERS_NEW (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED) values (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED) /* INSERT_REGULAR interface */ when 1=1 then into SCOTT.CUSTOMERS_SPECIAL (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED) values (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED) /* SELECT_PART interface */ select CUSTOMERS.EMAIL EMAIL, CUSTOMERS.CREDIT_LIMIT CREDIT_LIMIT, UPPER(CUSTOMERS.NAME) NAME, CUSTOMERS.USER_MODIFIED USER_MODIFIED, CUSTOMERS.DATE_MODIFIED DATE_MODIFIED, CUSTOMERS.BIRTH_DATE BIRTH_DATE, CUSTOMERS.MARITAL_STATUS MARITAL_STATUS, CUSTOMERS.ID ID, CUSTOMERS.USER_CREATED USER_CREATED, CUSTOMERS.GENDER GENDER, CUSTOMERS.DATE_CREATED DATE_CREATED, CUSTOMERS.INCOME_LEVEL INCOME_LEVEL from SCOTT.CUSTOMERS CUSTOMERS where (1=1) First I create a SELECT_PART temporary interface for the query to be reused, and in the IKM assignment I state that it defines the query, is not a target, and should not be executed. Then in my INSERT_SPECIAL interface, which loads a target with a filter, I set define query to false, target table to true, and execute to false. This interface uses the SELECT_PART query-definition interface as a source. Finally, in my last interface, which loads another target, I set define query to false again, target table to true, and execute to true – this is the 'go run it' indicator! To coordinate the statement construction you will need to create a package with the select and insert interfaces. With 11g you can now execute the package in simulation mode and preview the generated code, including the SQL statements. Hopefully this helps shed some light on how you can leverage the Oracle MTI statement. A similar IKM exists for Teradata. The ODI IKM Teradata Multi Statement supports multi-statement requests in 11g; here is an extract from the paper at www.teradata.com/white-papers/born-to-be-parallel-eb3053/: Teradata Database offers an SQL extension called a Multi-Statement Request that allows several distinct SQL statements to be bundled together and sent to the optimizer as if they were one. Teradata Database will attempt to execute these SQL statements in parallel. When this feature is used, any sub-expressions that the different SQL statements have in common will be executed once, and the results shared among them. It works in the same way as the ODI MTI IKM: multiple interfaces are orchestrated in a package, each interface contributes some SQL, and the last interface in the chain executes the multi-statement request.
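
    For readability, here is a minimal, hand-formatted sketch of the kind of conditional multi-table insert the IKM produces; the table and column names below are illustrative only, not taken from the interfaces above.

      -- Illustrative Oracle conditional multi-table insert: one SELECT feeds two targets,
      -- the first only when a row passes the income filter, the second unconditionally.
      INSERT ALL
        WHEN income_level > 250000 THEN
          INTO customers_high_income (id, name, income_level)
          VALUES (id, name, income_level)
        WHEN 1 = 1 THEN
          INTO customers_all (id, name, income_level)
          VALUES (id, name, income_level)
      SELECT c.id            AS id,
             UPPER(c.name)   AS name,
             c.income_level  AS income_level
      FROM   customers c;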

    Read the article

  • Oracle ZFSSA Hybrid Storage Pool Demo

    - by Darius Zanganeh
    The ZFS Hybrid Storage Pool (HSP) has been around since the ZFSSA first launched.  It is one of the main contributors to the high performance we see on the Oracle ZFSSA, both in benchmarks and in many production environments.  Below is a short video I made to show at a high level just how impactful the HSP is on storage performance.  We squeeze a ton of performance out of our drives with our unique use of cache, write-optimized SSD and read-optimized SSD.  Many have written and blogged about this technology; here it is in action. Demo of the Oracle ZFSSA Hybrid Storage Pool and how it speeds up workloads.

    Read the article

  • Middleware Day at UK Oracle User Group Conference 2012

    - by JuergenKress
    Registration has opened for UK Oracle User Group Conference 2012, the UK’s largest independent Oracle Technology & E-Business Suite conference, running from 3rd to 5th December 2012. The conference will attract over 1,700 attendees from the UK and internationally. It spans three days and features over 250 presentations, ranging from end users telling their war stories to Oracle unveiling the latest product roadmaps. We have always been trusted to provide exceptional events with innovative content and renowned speakers, and our 2012 event is no exception. It is not just our words: 95% of attendees from last year's conference highly recommend the experience to other Oracle users. You can get an overview of the conference, listen to what last year's delegates thought and explore the full agenda on the conference website: www.ukoug.org/ukoug2012. Join the UK Oracle User Group for ‘Middleware Sunday’ – an event packed with technical content for WebLogic administrators taking place on 2nd December, the day before the start of UKOUG Conference 2012. The day has been organised by middleware enthusiasts Simon Haslam and Jacco Landlust and is free to UKOUG 2012 delegates. The content level will be pitched intermediate to advanced, so delegates are expected to be comfortable with WebLogic and its configuration terms, such as domains and managed servers. We will also have a fun, hands-on session for which you’ll need a quick laptop to join our mega-cluster! For more information visit the UKOUG 2012 website: www.ukoug.org/2012. WebLogic Partner Community For regular information, become a member of the WebLogic Partner Community: please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: Simon Haslam,UK user group,middleware sunday,conference,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Adam Bien Testimonial at GlassFish Community Event, JavaOne 2012

    - by arungupta
    Adam Bien, a self-employed enterprise Java consultant, an author of five-star-rated books, a presenter, a Java Champion, a NetBeans Dream Team member, a JCP member, a JCP Expert Group member of several Java EE groups, and the holder of several other titles, is one of the most vocal advocates of the Java EE platform. His code-driven workshops using Java EE 6, NetBeans, and GlassFish have won accolades at several developers' conferences all around the world. Adam has been using GlassFish for all his projects for many years. One of the reasons he uses GlassFish is his high confidence that Java EE compliance bugs will be fixed faster. He finds GlassFish a very capable application server for faster development and continuous deployment. His own media properties are running on GlassFish with an Apache front-end. Good documentation, accessible source code, and REST/Web/CLI administration and monitoring facilities are some other reasons to pick GlassFish. He presented at the recently concluded GlassFish community event at JavaOne 2012. You can watch the video (with transcript) below showing him in full action:

    Read the article

  • Enterprise Architecture IS (should not be) Arbitrary

    - by pat.shepherd
    I took a look at a blog entry today by Jordan Braunstein where he comments on another blog entry titled “Yes, Enterprise Architecture is Relative BUT it is not Arbitrary.”  The blog makes some good points, such as the following: Lock 10 architects in 10 separate rooms; provide them all an identical copy of the same business, technical, process, and system requirements; have them design an architecture under the same rules and perspectives; and I guarantee your result will be 10 different architectures of varying degrees. SOA Today: Enterprise Architecture IS Arbitrary Agreed, …to a degree… but less so if all 10 truly followed one of the widely accepted EA frameworks. My thinking is that EA frameworks all focus on getting the business goals/vision locked down first as the primary drivers for decisions made lower down the architecture stack.  Many people I talk to know about frameworks such as TOGAF, FEA, etc. but seldom apply the tenets to the architecture at hand.  We all seem to want to get right into the Visio diagrams and boxes and arrows and connecting protocols and implementation details and lions and tigers and bears (Oh, my!) too early. If done properly, the Business, Application and Information architectures are nailed down BEFORE any technological direction (SOA or otherwise) is set.  Those 3 layers and Governance (people and processes), IMHO, are layers that should not vary much, as they have everything to do with understanding the business -- from which technological conclusions can later be drawn. I really like what he went on to say later in the post about the fact that architecture attempts to remove the amount of variance between the 10 different architects' work.  That is the real heart of what EA is about: REMOVING THE ARBITRARINESS.

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 3.5: Node.js relay

    - by Elton Stoneman
    This is an extension to Part 3 in the IPASBR series, see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer In Part 3 I said “there isn't actually a .NET requirement here”, and this post just follows up on that statement. In Part 3 we had an ASP.NET MVC Website making a REST call to an Azure Service Bus service; to show that the REST stuff is really interoperable, in this version we use Node.js to make the secure service call. The code is on GitHub here: IPASBR Part 3.5. The sample code is simpler than Part 3 - rather than code up a UI in Node.js, the sample just relays the REST service call out to Azure. The steps are the same as Part 3: REST call to ACS with the service identity credentials, which returns an SWT; REST call to Azure Service Bus Relay, presenting the SWT; request gets relayed to the on-premise service. In Node.js the authentication step looks like this: var options = { host: acs.namespace() + '-sb.accesscontrol.windows.net', path: '/WRAPv0.9/', method: 'POST' }; var values = { wrap_name: acs.issuerName(), wrap_password: acs.issuerSecret(), wrap_scope: 'http://' + acs.namespace() + '.servicebus.windows.net/' }; var req = https.request(options, function (res) { console.log("statusCode: ", res.statusCode); console.log("headers: ", res.headers); res.on('data', function (d) { var token = qs.parse(d.toString('utf8')); callback(token.wrap_access_token); }); }); req.write(qs.stringify(values)); req.end(); Once we have the token, we can wrap it up into an Authorization header and pass it to the Service Bus call: token = 'WRAP access_token=\"' + swt + '\"'; //... var reqHeaders = { Authorization: token }; var options = { host: acs.namespace() + '.servicebus.windows.net', path: '/rest/reverse?string=' + requestUrl.query.string, headers: reqHeaders }; var req = https.request(options, function (res) { console.log("statusCode: ", res.statusCode); console.log("headers: ", res.headers); response.writeHead(res.statusCode, res.headers); res.on('data', function (d) { var reversed = d.toString('utf8') console.log('svc returned: ' + d.toString('utf8')); response.end(reversed); }); }); req.end(); Running the sample Usual routine to add your own Azure details into Solution Items\AzureConnectionDetails.xml and “Run Custom Tool” on the .tt files. Build and you should be able to navigate to the on-premise service at http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 and get a string response, going to the service direct. Install Node.js (v0.8.14 at time of writing), run FormatServiceRelay.cmd, navigate to http://localhost:8013/reverse?string=abc123, and you should get exactly the same response but through Node.js, via Azure Service Bus Relay to your on-premise service. The console logs the WRAP token returned from ACS and the response from Azure Service Bus Relay which it forwards:

    Read the article

  • BIEE Drilling Down and then Across

    - by Tim Dexter
    Slightly off topic today, but if you are working with OBIEE in conjunction with BIP it's not that far off. As some of you may know, I now get to play with the whole BI suite, and have been for nearly 2 years. Today I was working with BIEE and wanted to share what I thought was a neat trick. I have to thank Rob Lindsley on our team for the pointers to get it working. The problem I had was that I had set up a drill-down hierarchy that took the user down a couple of levels to the bottom project number level. I needed the user to then be able to click the project number to navigate across to another, more detailed report on that project. By default there is no link; you are at the bottom of a hierarchical drill! There is nothing you can do in the data model (that I'm aware of), but you can use a neat trick to get BIEE to allow you to navigate from the bottom rung of the hierarchy. Add the bottom-level column to an Answers report. Go into the column properties and set the navigation target. The trick is to then set the current column properties as the system-wide default for that column. You can then actually delete the column from your report. Now as you drill down the hierarchy and reach what was the bottom you will still have a link for the user to punch over to the detail report, sweeeet! The other benefit is that whenever you add the column to a report the link will be available to the detail report, unless you want to override it of course.

    Read the article

  • Book “Team Foundation Server 2012 Starter” published

    - by terje
    During the summer and fall this year, my colleague Jakob Ehn and I worked together on a book project that has now finally hit the stores! The title of the book is Team Foundation Server 2012 Starter and it is published by Packt Publishing. Get it from http://www.packtpub.com/team-foundation-server-2012-starter/book or from Amazon http://www.amazon.com/dp/1849688389 The book is part of a concept that Packt has with starter books, intended for people new to Team Foundation Server 2012 who want a quick guide to get it up and working.  It covers the fundamentals, from installing and configuring it, to how to use it with source control, work items and builds. It is done as a step-by-step guide, but also includes best-practice advice in the different areas. It covers the use of both the on-premises and the TFS Services version. It also has a list of links and references at the end to the most relevant Visual Studio 2012 ALM sites. Our good friend and fellow ALM MVP Mathias Olausson has done the review of the book, thanks again Mathias! We hope the book fills the gap between the different online guide sites and the more advanced books that are out. Book Description Your quick start guide to TFS 2012, top features, and best practices with hands-on examples Overview Install TFS 2012 from scratch Get up and running with your first project Streamline release cycles for maximum productivity In Detail Team Foundation Server 2012 is Microsoft's leading ALM tool, integrating source control, work item and process handling, build automation, and testing. This practical "Team Foundation Server 2012 Starter Guide" will provide you with clear step-by-step exercises covering all major aspects of the product. This is essential reading for anyone wishing to set up, organize, and use TFS server. This hands-on guide looks at the top features in Team Foundation Server 2012, starting with a quick installation guide and then moving into using it for your software development projects. Manage your team projects with Team Explorer, one of the many new features for 2012. Covering all the main features in source control to help you work more efficiently, including tools for branching and merging, we will delve into the Agile Planning Tools for planning your product and sprint backlogs. Learn to set up build automation, allowing your team to become faster, more streamlined, and ultimately more productive with this "Team Foundation Server 2012 Starter Guide". What you will learn from this book Install TFS 2012 on premises Access TFS Services in the cloud Quickly get started with a new project with product backlogs, source control, and build automation Work efficiently with source control using the top features Understand how the tools for branching and merging in TFS 2012 help you isolate work and teams Learn about the existing process templates, such as Visual Studio Scrum 2.0 Manage your product and sprint backlogs using the Agile planning tools Approach This Starter guide is a short, sharp introduction to Team Foundation Server 2012, covering everything you need to get up and running. Who this book is written for If you are a developer, project lead, tester, or IT administrator working with Team Foundation Server 2012 this guide will get you up to speed quickly and with minimal effort.

    Read the article

  • New in MySQL Enterprise Edition: Policy-based Auditing!

    - by Rob Young
    For those with an interest in MySQL, this weekend's MySQL Connect conference in San Francisco has gotten off to a great start. On Saturday Tomas announced the feature-complete MySQL 5.6 Release Candidate that is now available for Community adoption and testing. This announcement marks the sprint to GA, which should be ready for release within the next 90 days. You can get a quick summary of the key 5.6 features here or, better yet, download the 5.6 RC (under “Development Releases”), review what's new and try it out for yourself! There were also product-related announcements around MySQL Cluster 7.3 and MySQL Enterprise Edition. This latter announcement is of particular interest if you are faced with internal and regulatory compliance requirements, as it addresses and solves a pain point that is shared by most developers and DBAs: new, out-of-the-box compliance for MySQL applications via policy-based audit logging of user and query level activity. One of the most common requests we get for the MySQL roadmap is for quick and easy logging of audit events.
    This is mainly due to how web-based applications have evolved from nice-to-have enablers to mission-critical revenue generators, and the important role MySQL plays in the new dynamic. In today’s virtual marketplace, PCI compliance guidelines ensure credit card data is secure within e-commerce apps; from a corporate standpoint, Sarbanes-Oxley, HIPAA and other regulations guard the medical, financial, public sector and other personal-data-centric industries. For the supporting applications, audit policies and controls that monitor the eyes and hands that have viewed and acted upon the most sensitive of data are most commonly implemented on the back-end database. With this in mind, MySQL 5.5 introduced an open audit plugin API that enables all MySQL users to write their own auditing plugins based on application-specific requirements. While the supporting docs are very complete and provide working code samples, writing an audit plugin requires time and low-level expertise to develop, test, implement and maintain. To help those who don't have the time and/or expertise to develop such a plugin, Oracle now ships MySQL 5.5.28 and higher with an easy-to-use, out-of-the-box auditing solution: MySQL Enterprise Audit.
    MySQL Enterprise Audit The premise behind MySQL Enterprise Audit is simple; we wanted to provide an easy-to-use, policy-based auditing solution that enables you to quickly and seamlessly add compliance to your MySQL applications. MySQL Enterprise Audit meets this requirement by enabling you to:
    1. Easily install the needed components. Installation requires an upgrade to MySQL 5.5.28 (Enterprise edition), which can be downloaded from the My Oracle Support portal or the Oracle Software Delivery Cloud. After installation, you simply add the following to your my.cnf file to register and enable the audit plugin: [mysqld] plugin-load=audit_log.so (keep in mind the audit_log suffix is platform dependent, so .dll on Windows, etc.), or alternatively you can load the plugin at runtime: mysql> INSTALL PLUGIN audit_log SONAME 'audit_log.so';
    2. Dynamically enable and disable the audit stream for a specific MySQL server. A new global variable called audit_log_policy allows you to dynamically enable and disable audit stream logging for a specific MySQL server. The variable parameters are described below.
    3. Define audit policy based on what needs to be logged (everything, logins, queries, or nothing), by server. The new audit_log_policy variable uses the following valid, descriptively named values to enable or disable audit stream logging and to filter the audit events that are logged to the audit stream: "ALL" - enable audit stream and log all events; "LOGINS" - enable audit stream and log only login events; "QUERIES" - enable audit stream and log only query events; "NONE" - disable audit stream.
    4. Manage audit log files using basic MySQL log rotation features. A new global variable, audit_log_rotate_on_size, allows you to automate the rotation and archival of audit stream log files based on size, with archived log files renamed and appended with a datetime stamp when a new file is opened for logging.
    5. Integrate the MySQL audit stream with MySQL, Oracle tools and other third-party solutions. The MySQL audit stream is written as XML, using UTF-8, and can be easily formatted for viewing using a standard XML parser. This enables you to leverage tools from MySQL and others to view the contents. The audit stream was also developed to meet the Oracle database audit stream specification, so combined Oracle/MySQL shops can import and manage MySQL audit images using the same Oracle tools they use for their Oracle databases.
    So, assuming a successful MySQL 5.5.28 upgrade or installation, a common set-up and use case scenario might look something like the sketch at the end of this excerpt. It should be noted that MySQL Enterprise Audit was designed to be transparent at the application layer by allowing you to control the mix of log output buffering and asynchronous or synchronous disk writes to minimize the associated overhead that comes when the audit stream is enabled. The net result is that, depending on the chosen audit stream log options, most application users will see little to no difference in response times when the audit stream is enabled.
    So what are your next steps? Get all of the grainy details on MySQL Enterprise Audit, including all of the additional configuration options, from the MySQL documentation. MySQL Enterprise Edition customers can download MySQL 5.5.28 with the Audit extension for production use from the My Oracle Support portal. Everyone can download MySQL 5.5.28 with the Audit extension for evaluation from the Oracle Software Delivery Cloud. Learn more about MySQL Enterprise Edition. As always, thanks for your continued support of MySQL!
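
    Putting those pieces together, a minimal sketch of the common set-up and use case mentioned above might look like the following; it assumes a 5.5.28+ Enterprise server with the audit plugin available, and the policy and rotation values are purely illustrative.

      -- Register the plugin at runtime (or add plugin-load=audit_log.so under [mysqld] in my.cnf)
      INSTALL PLUGIN audit_log SONAME 'audit_log.so';

      -- Log only login events on this server (other documented values: ALL, QUERIES, NONE)
      SET GLOBAL audit_log_policy = 'LOGINS';

      -- Rotate and archive the XML audit log once it reaches roughly 16MB (value in bytes, illustrative)
      SET GLOBAL audit_log_rotate_on_size = 16777216;

      -- Confirm the audit settings currently in effect
      SHOW GLOBAL VARIABLES LIKE 'audit_log%';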

    Read the article

  • Oracle Enterprise Manager 11g has been released

    - by Lajos Sárecz
    Oracle Enterprise Manager 11gR1 has been released and is available for download. I hope many of our readers have already seen the announcement video; anyone who missed it can catch up here. The new version contains numerous new features, briefly summarized as follows: First, the goal of the new version is to manage applications from a business perspective (Business Driven Application Management). The traditional component-based approach to systems management simply no longer holds up in today's IT world. Business Driven Application Management, by contrast, helps operations teams serve the business as effectively as possible, so that business results can develop in the most optimal way. To achieve Business Driven Application Management, 11gR1 builds on the end-user experience monitoring already available in 10g. With this capability, IT understands much better how users use the applications and what they experience while doing so. Complementing this, the new version has more advanced business transaction management capabilities, making it easier and faster to fix the transactions that cause problems for users. Second, the new version supports operations from the application down to the disk layer (Integrated Application-to-Disk Management). Since every single layer can influence the user experience, as soon as a problem affecting users is detected, detailed diagnostics and analysis of the affected layer are needed. Oracle Enterprise Manager 11gR1 provides native support for Oracle Database 11gR2, Exadata V2 and Fusion Middleware 11g. JVM Diagnostics and the Composite Application Monitoring and Modeler, which support the management of complex applications, are now an integral part of the new version. Moreover, the first stage of integration between Enterprise Manager Grid Control and Enterprise Manager Ops Center is also complete, so hardware-level events can now be monitored centrally from the Grid Control console. Finally, the new version has Integrated Systems Management and Support capabilities. This means that Oracle Enterprise Manager 11g integrates the problem diagnosis and resolution workflow by making the services of My Oracle Support available directly from Grid Control, for example downloading the required patches, opening service requests, and so on. Oracle support staff, in turn, can immediately gather information about the environment through Enterprise Manager's configuration management capabilities, so that problem resolution times are reduced. This tight integration between Oracle Enterprise Manager and My Oracle Support can help our customers achieve the most efficient Oracle operations. In the coming weeks I will try to provide further details as I get to know the capabilities of the new version myself. The new version is currently available for 32- and 64-bit Linux; further ports are expected in the coming weeks.

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site. Hive Actions: Prepping for Pig In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code. CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column1, column2) PARTITIONED BY (yr string) STORED AS ... LOCATION '/user/oracle/weather/historic'; As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access. ALTER TABLE historic_weather ADD IF NOT EXISTS PARTITION (yr='2011') LOCATION '/user/oracle/weather/historic/yr=2011'; INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history' SELECT w.stn, w.wban, w.weather_year, w.weather_month, w.weather_day, w.temp, w.dewp, w.weather FROM ( FROM historic_weather SELECT TRANSFORM(...) USING '/path/to/hive/filters/ncdc_parser.py' as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather ) w; Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called ncdc_parse.hql. Starting Our Workflow Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph.
Coordinator jobs can take all the same actions of Workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point: <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1"> <start to="ParseNCDCData"/> <end name="end"/> </workflow-app> To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure. <action name="ParseNCDCData"> <hive xmlns="uri:oozie:hive-action:0.2"> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <configuration> <property> <name>oozie.hive.defaults</name> <value>/user/oracle/weather_ooze/hive-default.xml</value> </property> </configuration> <script>ncdc_parse.hql</script> </hive> <ok to="WeatherMan"/> <error to="end"/> </action> There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode. I have to include a hive-default.xml file. I have to include a script file. The hive-default.xml and script file must be stored in HDFS. That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-default.xml files on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml hive-default.xml (make sure this file contains your metastore connection data) ncdc_parse.hql Adding Pig to the Ooze Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows: <action name="WeatherMan"> <pig> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <script>weather_train.pig</script> </pig> <ok to="end"/> <error to="end"/> </action> Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding jars to the distributed cache from Oozie's Pig Cookbook. Making the Workflow Work We've got a workflow defined and have collected all the components we'll need to run.
But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows: nameNode=hdfs://localhost:8020 jobTracker=localhost:8021 queueName=default weatherRoot=weather_ooze mapreduce.jobtracker.kerberos.principal=foo dfs.namenode.kerberos.principal=foo oozie.libpath=${nameNode}/user/oozie/share/lib oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot} outputDir=weather-ooze While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory. We're finally ready to submit our job! After all that work we only need to do a few more things: Validate our workflow.xml Copy our working directory to HDFS Submit our job to the Oozie server Run our workflow Let's do them in order. First validate the workflow: oozie validate workflow.xml Next, copy the working directory up to HDFS: hadoop fs -put working_dir /user/oracle/working_dir Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument. oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit We've submitted the job, but why don't we see any activity on the JobTracker? All I got was this funny bit of output: 14-20120525161321-oozie-oracle This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job. oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately. Takeaway So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
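
    The walkthrough sticks to a plain workflow job, but since a coordinator job simply requires another XML file for scheduling, here is a rough sketch of what that wrapper might look like; the frequency, start/end dates and property names are placeholders rather than values from the article.

      <!-- Hypothetical coordinator.xml that runs the weather workflow once a day -->
      <coordinator-app name="WeatherManDaily" frequency="${coord:days(1)}"
                       start="2012-06-01T00:00Z" end="2013-06-01T00:00Z" timezone="UTC"
                       xmlns="uri:oozie:coordinator:0.2">
        <action>
          <workflow>
            <!-- Points at the same HDFS directory that holds workflow.xml -->
            <app-path>${nameNode}/user/${user.name}/${weatherRoot}</app-path>
          </workflow>
        </action>
      </coordinator-app>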

    Read the article

  • SOA & BPM Partner Community Forum XI – thanks for the great event!

    - by Jürgen Kress
    Thanks to our team in Portugal we are running a great SOA & BPM Partner Community Forum in Lisbon this week. Yes, we made our way to Lisbon – thanks to Lufthansa!
    Program Wednesday April 21st 2010 (plenary agenda):
    10:00 – 10:15  Welcome & Introduction – Paulo Folgado, Oracle
    10:15 – 11:15  SOA & Cloud Computing – Alexandre Vieira, Oracle
    11:15 – 12:30  SOA Reference Case – Filipe Carvalho, Wide Scope
    12:30 – 13:15  Lunch Break
    13:30 – 14:15  BPMN 2.0 – Torsten Winterberg, Opitz Consulting
    14:15 – 15:00  SOA Partner Sales Campaign – Jürgen Kress, Oracle
    15:00 – 15:15  Closing notes – Jürgen Kress, Oracle
    15:15 – 16:00  Cocktail reception
    Do you want to attend a SOA Partner Community event in the future? Make sure that you register for the SOA Partner Community: www.oracle.com/goto/emea/soa
    Program Thursday and Friday April 22nd & 23rd 2010:
    9:00  BPM hands-on workshop by Clemens Utschig-Utschig
    18:30  End of part 1
    8:30  BPM hands-on workshop part II
    15:30  End of BPM 11g workshop
    Dear Lufthansa Team, special thanks for making the magic happen! We all arrived just in time in Lisbon. Here is the picture from Munich airport Wednesday morning: cancelled, cancelled, cancelled – Lisbon is boarding!

    Read the article

  • Silverlight Cream for January 13, 2011 -- #1026

    - by Dave Campbell
    In this Issue: András Velvárt, Tony Champion, Joost van Schaik, Jesse Liberty, Shawn Wildermuth, John Papa, Michael Crump, Sacha Barber, Alex Knight, Peter Kuhn, Senthil Kumar, Mike Hole, and WindowsPhoneGeek. Above the Fold: Silverlight: "Create Custom Speech Bubbles in Silverlight." Michael Crump WP7: "Architecting WP7 - Part 9 of 10: Threading" Shawn Wildermuth Expression Blend: "PathListBox: Text on the path" Alex Knight From SilverlightCream.com: Behaviors for accessing the Windows Phone 7 MarketPlace and getting feedback András Velvárt shares almost insider information about how to get some user interaction with your WP7 app in the form of feedback ... he has 4 behaviors taken straight from his very cool SufCube app that he's sharing. Reloading a Collection in the PivotViewer Tony Champion keeps working with the PivotViewer ... this time discussing the fact that you can't Reload or Refresh the current collection from the server ... at least not initially, but he did find one :) Tombstoning MVVMLight ViewModels with SilverlightSerializer on Windows Phone 7 Joost van Schaik takes a shot at helping us all with Tombstoning a WP7 app... he's using Mike Talbot's SilverlightSerializer and created extension methods for it for tombstoning that he's willing to share with us. Windows Phone From Scratch #17: MVVM Light Toolkit Soup To Nuts Part 2 Jesse Liberty is up to Part 17 in his WP7 series, and this is the 2nd post on MVVMLight and WP7, and is digging into behaviors. Architecting WP7 - Part 9 of 10: Threading Shawn Wildermuth is up to part 9 of 10 in his series on Architecting WP7 apps. This episode finds Shawn discussing Threading ... know how to use and choose between BackgroundWorker and ThreadPool? ... Shawn will explain. Silverlight TV 57: Performance Tuning Your Apps In the latest Silverlight TV, John Papa chats with Mike Cook about tuning your Silverlight app to get the performance up there where your users will be happy. Create Custom Speech Bubbles in Silverlight. Michael Crump's already gotten a lot of airplay out of this, but it's so cool.. comic-style callout shapes without using the dlls that you normally would... in other words, paths, and very cool hand-drawn looks on some too... very cool, Michael! Showcasing Cinch MVVM framework / PRISM 4 interoperability Sacha Barber has a post up on CodeProject that demonstratest using Cinch and Prism4 together... handily using MEF since Cinch relies on MEFedMVVM... this is a heck of a post... lots of code, lots of explanations. PathListBox: Text on the path Alex Knight keeps making this PathListBox series better ... this time he is putting text on the path... moving text... too cool, Alex! Windows Phone 7: Pinch Gesture Sample Peter Kuhn digs into the WP7 toolkit and examines GestureListener, pinch events, and clipping... examples and code supplied. How to change the StartPage of the Windows Phone 7 Application in Visual Studio 2010 ? Senthil Kumar discusses how to change the StartPage of your WP7 app, or get the program running if you happen to move or rename MainPage.xaml WP7 Text Boxes – OnEnter (my 1st Behaviour) Mike Hole has a post up about the issue with the keyboard appearing in front of the textbox, and maybe using the enter key to drop it... and he's developed a behavior for that process. WP7 ContextMenu in depth | Part1: key concepts and API WindowsPhoneGeek has some good articles that I haven't posted, but I'll catch up. This one is a nice tutorial on the WP7 Context menu... good explanation, diagrams, and code. 
Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Tour Oracle's Usability Labs During Oracle OpenWorld

    - by Oracle OpenWorld Blog Team
    By the Oracle Applications User Experience team While you're attending Oracle OpenWorld 2012, Oracle invites you to tour our state-of-the-art usability labs on October 4 or 5 at Oracle headquarters in Redwood Shores, just south of San Francisco. One of our chartered buses will take registered tour participants from Moscone Center to Oracle HQ and back. Advance sign-up is recommended; space is available on a first-come, first-served basis. On usability labs tours, Oracle customers see firsthand how we test future product designs using the latest technologies - including eye-tracking equipment and facial recognition software - that help track emotional responses to enterprise application screens. Participants will also get an early look at the direction our enterprise software is heading, including demos of designs for platforms such as tablet and mobile phone. Tours leave at 10:00 a.m. and 1:45 p.m. If you or your colleagues are interested in joining a lab tour, sign up now to reserve your spot.

    Read the article

  • Mirroring git and mercurial repos the lazy way

    - by Greg Malcolm
    I maintain Python Koans mirrored on both GitHub (using git) and Bitbucket (using mercurial). I get pull requests from both repos, but it turns out keeping the two repos in sync is pretty easy. Here is how it's done... Assuming I’m starting again on a clean laptop, first I clone both repos ~/git $ hg clone https://bitbucket.org/gregmalcolm/python_koans ~/git $ git clone git@github.com:gregmalcolm/python_koans.git python_koans2 The only thing that makes a folder a git or mercurial repository is the .hg folder in the root of python_koans and the .git folder in the root of python_koans2. So I just need to move the .git folder over into the python_koans folder I'm using for mercurial: ~/git $ rm -rf python_koans/.git ~/git $ mv python_koans2/.git python_koans ~/git $ ls -la python_koans total 48 drwxr-xr-x 11 greg staff 374 Mar 17 15:10 . drwxr-xr-x 62 greg staff 2108 Mar 17 14:58 .. drwxr-xr-x 12 greg staff 408 Mar 17 14:58 .git -rw-r--r-- 1 greg staff 34 Mar 17 14:54 .gitignore drwxr-xr-x 13 greg staff 442 Mar 17 14:54 .hg -rw-r--r-- 1 greg staff 48 Mar 17 14:54 .hgignore -rw-r--r-- 1 greg staff 365 Mar 17 14:54 Contributor Notes.txt -rw-r--r-- 1 greg staff 1082 Mar 17 14:54 MIT-LICENSE -rw-r--r-- 1 greg staff 5765 Mar 17 14:54 README.txt drwxr-xr-x 10 greg staff 340 Mar 17 14:54 python 2 drwxr-xr-x 10 greg staff 340 Mar 17 14:54 python 3 That’s about it! Now git and mercurial are tracking files in the same folder. Of course you will still need to set up your .gitignore to ignore mercurial’s dotfiles and .hgignore to ignore git’s dotfiles or there will be squabbling in the backseat. ~/git $ cd python_koans/ ~/git/python_koans $ cat .gitignore *.pyc *.swp .DS_Store answers .hg <-- Ignore mercurial ~/git/python_koans $ cat .hgignore syntax: glob *.pyc *.swp .DS_Store answers .git <-- Ignore git Because both my mirrors are identical as far as tracked files are concerned, I won’t yet see anything if I check statuses at this point: ~/git/python_koans $ git status # On branch master nothing to commit (working directory clean) ~/git/python_koans $ hg status ~/git/python_koans But how about if I accept a pull request from the Bitbucket (mercurial) site? ~/git/python_koans $ hg status ~/git/python_koans $ git status # On branch master # Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded. # # Changed but not updated: # (use "git add <file>..." to update what will be committed) # (use "git checkout -- <file>..." to discard changes in working directory) # # modified: python 2/koans/about_decorating_with_classes.py # modified: python 2/koans/about_iteration.py # modified: python 2/koans/about_with_statements.py # modified: python 3/koans/about_decorating_with_classes.py # modified: python 3/koans/about_iteration.py # modified: python 3/koans/about_with_statements.py Mercurial doesn’t have any changes to track right now, but git has changes. Commit and push them up to GitHub and balance is restored to the force: ~/git/python_koans $ git commit -am "Merge from bitbucket mirror: 'gpiancastelli - Fix for issue #21 and some other tweaks'" [master 79ca184] Merge from bitbucket mirror: 'gpiancastelli - Fix for issue #21 and some other tweaks' 6 files changed, 78 insertions(+), 63 deletions(-) ~/git/python_koans $ git push origin master Or just use hg-git?
The GitHub developers have actually published a plugin for automatic mirroring: http://hg-git.github.com. I haven’t used it because when I tried it a couple of years ago I was having problems getting all the parts to play nicely with each other. It probably works fine now though.
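
    As a follow-up, this is the sort of day-to-day routine I would expect for keeping the two mirrors level after accepting pull requests on either side; it is only a sketch, assuming the combined folder set up above and the default remote names.

      # After accepting a pull request on Bitbucket (mercurial side):
      hg pull -u                                    # update the working folder from Bitbucket
      git commit -am "Merge from bitbucket mirror"  # git sees the changed files; record them
      git push origin master                        # push the same change up to GitHub

      # After accepting a pull request on GitHub (git side):
      git pull origin master                        # update the working folder from GitHub
      hg commit -m "Merge from github mirror"       # mercurial sees the changed files
      hg push                                       # push the same change up to Bitbucket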

    Read the article

  • BizTalk - Removing BAM Activities and Views using bm.exe

    - by Stuart Brierley
    Originally posted on: http://geekswithblogs.net/StuartBrierley/archive/2013/10/16/biztalk---removing-bam-activities-and-views-using-bm.exe.aspx
    On the project I am currently working on, we are making quite extensive use of BAM within our growing number of BizTalk applications, all of which are being deployed and undeployed using the excellent Deployment Framework for BizTalk 5.0. Recently I had an issue where problems on the build server had left the target development servers in a state where the BAM activities and views for a particular application were not being removed by the undeploy process, and unfortunately the definition in the solution had changed, meaning that I could not easily recreate the file from source control.  To get around this I used the bm.exe application from the command line to manually remove the problem BAM artifacts. bm.exe can be found at the following path: C:\Program Files (x86)\Microsoft BizTalk Server 2010\Tracking
    Step 1: Get the BAM definition file. Run the following command to get the BAM definition file, containing the details of all the activities, views and alerts: bm.exe get-defxml -FileName:{Path and File Name Here}.xml
    Step 2: Remove the BAM artifacts. At this stage I chose to manually remove each of my problem BAM activities and views using separate command line calls.  By looking in the definition file I could see the names of the activities and views that I wanted to remove, and then used the following commands to remove first the views and then the activities: bm.exe remove-view -name:{viewname} and bm.exe remove-activity -name:{activityname}
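
    If it helps, the whole cleanup can be scripted as a small batch file along these lines; the file, view and activity names are placeholders, and the final deploy-all step (to push a corrected definition back out) is my assumption rather than something covered in the post.

      REM Hypothetical cleanup script, run from the BizTalk Tracking folder
      cd /d "C:\Program Files (x86)\Microsoft BizTalk Server 2010\Tracking"

      REM Step 1: export the current BAM definition to see the activity and view names
      bm.exe get-defxml -FileName:CurrentBamDefinition.xml

      REM Step 2: remove the problem views first, then their activities
      bm.exe remove-view -name:MyProblemView
      bm.exe remove-activity -name:MyProblemActivity

      REM (Optional) redeploy a corrected definition once it is back in source control
      bm.exe deploy-all -DefinitionFile:CorrectedBamDefinition.xml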

    Read the article

  • obiee memory usage

    - by user554629
    Heap memory is a frequent customer topic. Here's the quick refresher, oriented towards AIX, but the principles apply to other unix implementations.

1. 32-bit processes have a maximum addressability of 4GB; usable application heap size of 2-3 GB. On AIX it is controlled by an environment variable:
export LDR_CNTRL=....=MAXDATA=0x080000000   # 2GB ( The leading zero is deliberate, not required )
1a. It is possible to get a 3.25GB heap size for a 32-bit process using @DSA (Discontiguous Segment Allocation):
export LDR_CNTRL=MAXDATA=0xd0000000@DSA  # 3.25 GB 32-bit only
One side-effect of using AIX segments "c" and "d" is that shared libraries will be loaded privately, and not shared. If you need the additional heap space, this is worth the trade-off. This option is frequently used for 32-bit java.
1b. 64-bit processes have no need for the @DSA option.

2. 64-bit processes can double the 32-bit heap size to 4GB using:
export LDR_CNTRL=....=MAXDATA=0x100000000  # 1 with 8-zeros
2a. But this setting would place the same memory limitations on obiee as a 32-bit process.
2b. The major benefit of 64-bit is to break the binds of 32-bit addressing. At a minimum, use 8GB:
export LDR_CNTRL=....=MAXDATA=0x200000000  # 2 with 8-zeros
2c. Many large customers are providing extra safety to their servers by using 16GB:
export LDR_CNTRL=....=MAXDATA=0x400000000  # 4 with 8-zeros
There is no performance penalty for providing virtual memory allocations larger than required by the application. If the server only uses 2GB of space in 64-bit, specifying 16GB just provides an upper-bound cushion. When an unexpected user query causes a sudden memory surge, the extra memory keeps the server running.

3. The next benefit of 64-bit is that you can provide huge thread stack sizes for strange queries that might otherwise crash the server. nqsserver uses fast recursive algorithms to traverse complicated control structures, which means lots of thread space to hold the stack frames.
3a. Stack frames mostly contain register values; 64-bit registers are twice as large as 32-bit. At a minimum you should quadruple the size of the server stack threads in NQSConfig.INI when migrating from 32- to 64-bit, to prevent a rogue query from crashing the server. Allocate more than is normally necessary for safety.
3b. There is no penalty for allocating more stack size than you need; it is just virtual memory, and no real resources are consumed until the extra space is needed.
3c. Increasing thread stack sizes may require the process heap size (MAXDATA) to be increased. Heap space is used for dynamic memory requests and for thread stacks. There is no performance penalty to run with large heap and thread stack sizes. In a 32-bit world, this safety would require careful planning to avoid exceeding 2GB of usable storage.
3d. Increasing the number of threads also may require additional heap storage. Most thread stack frames on obiee are allocated when the server is started, and the real memory usage increases as threads run work.

Does 2.8GB sound like a lot of memory for an AIX application server? I guess it is, compared to what you are accustomed to seeing from "grandpa's applications".
- One of the primary design goals of obiee is to trade memory for services (db, query caches, etc).
- 2.8GB is still well under the 4GB heap size allocated with MAXDATA=0x100000000.
- A 2.8GB process size is also possible even on 32-bit Windows applications.
- It is not unusual to receive a sudden request for 30MB of contiguous storage on obiee. This is not a memory leak; eventually the nqsserver storage will stabilize, but it may take days to do so.

vmstat is the tool of choice to observe memory usage. On AIX vmstat will show something that may be startling to some people: that available free memory (the 2nd column) is always trending toward zero ... no available free memory. Some customers have concluded that "nearly zero memory free" means it is time to upgrade the server with more real memory. After the upgrade, the server again shows very little free memory available. Should you be concerned about this? Many customers are!! Here is what is happening:
- AIX filesystems are built on a paging model. If you read/write a filesystem block it is paged into memory (no read/write system calls).
- This filesystem "page" has its own "backing store" on disk, the original filesystem block. When the system needs the real memory page holding the file block, there is no need to "page out". The page can be stolen immediately, because the original is still on disk in the filesystem.
- The filesystem pages tend to collect: every filesystem block that was ever seen since system boot is available in memory. If another application needs the file block, it is retrieved with no physical I/O.

What happens if the system does need the memory, to satisfy a 30MB heap request by nqsserver, for example? Since the filesystem blocks have their own backing store (not on a paging device), the kernel can just steal any filesystem block, on a least-recently-used basis, to satisfy a new real memory request for "computation pages". No cause for alarm. vmstat is accurately displaying whether all filesystem blocks have been touched and now reside in memory.

Back to nqsserver: when should you be worried about its memory footprint? Answer: almost never. Stop monitoring it ... stop fussing over it ... stop trying to optimize it. This is a production application, and nqsserver uses the memory it requires to accomplish the job, based on demand. C'mon ... never worry? I'm from New York ... worry is what we do best. Ok, here is the metric you should be watching, using vmstat: are you paging? There are several columns of vmstat output:

bash-2.04$ vmstat 3 3
System configuration: lcpu=4 mem=4096MB
kthr    memory              page              faults        cpu
----- ------------ ------------------------ ------------ -----------
 r  b    avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa
 0  0 208492  2600   0   0   0   0    0   0  13   45  73  0  0 99  0
 0  0 208492  2600   0   0   0   0    0   0   9   12  77  0  0 99  0
 0  0 208492  2600   0   0   0   0    0   0   9   40  86  0  0 99  0

avm is the "available free memory" indicator that trends toward zero
re is "re-page". The kernel steals a real memory page for one process; immediately repages back to the original process
pi is "page in". A process memory page previously paged out, now paged back in because the process needs it
po is "page out". A process memory block was paged out, because it was needed by some other process

Light paging activity (re, pi, po) is not a concern for worry. Processes get started, need some memory, go away. Sustained paging activity is cause for concern. obiee users are having a terrible day if these counters are always changing.

Hang on ... if nqsserver needs that memory and I reduce MAXDATA to keep the process under control, won't the nqsserver process crash when the memory is needed? Yes it will. It means that nqsserver is configured to require too much memory, and there are lots of options to reduce the real memory requirement:
- number of threads
- size of query cache
- size of sort

But I need nqsserver to keep running. Real memory is over-committed. Many things can cause this:
- Running all application processes on a single server ... DB server, web servers, WebLogic/WebSphere, sawserver, nqsserver, etc. You could move some of those to another host machine and communicate over the network. The need for real memory doesn't go away, it's just distributed to other host machines.
- AIX LPAR is configured with too little memory. The AIX admin needs to provide more real memory to the LPAR running obiee.
- More memory to this LPAR affects other partitions. Then it's time to visit your friendly IBM rep and buy more memory.

    Read the article

  • 2 Birds, 1 Stone: Enabling M2M and Mobility in Healthcare

    - by Eric Jensen
    Jim Connors has created a video showcase of a comprehensive healthcare solution, connecting a mobile application directly to an embedded patient monitoring system. In the demo, Jim illustrates how you can easily build solutions on top of the Java embedded platform, using Oracle products like Berkeley DB and Database Mobile Server. Jim is running Apache Tomcat on an embedded device, using Berkeley DB as the data store. BDB is transparently linked to an Oracle Database backend using Database Mobile Server. Information protection is important in healthcare, so it is worth pointing out that these products offer strong data encryption, for storage as well as transit.
In his video, Jim does a great job of demystifying M2M. What's compelling about this demo is that it uses a solution architecture that enterprise developers are already comfortable and familiar with: a Java apps server with a database backend. The additional pieces used to embed this solution are Oracle Berkeley DB and Database Mobile Server. It functions transparently, from the perspective of Java apps developers. This means that organizations who understand Java apps (basically everyone) can use this technology to develop embedded M2M products. The potential uses for this technology in healthcare alone are immense; any device that measures and records some aspect of the patient could be linked, securely and directly, to the medical records database. Breathing, circulation, other vitals, sensory perception, blood tests, x-rays or CAT scans. The list goes on and on. In this demo case, it's a testament to the power of the Java embedded platform that they are able to easily interface the device, called a Pulse Oximeter, with the web application.
If Jim had stopped there, it would've been a cool demo. But he didn't; he actually saved the most awesome part for the end! At 9:52 Jim drops a bombshell: he's also created an Android app, something a doctor would use to view patient health data from his mobile device. The mobile app is seamlessly integrated into the rest of the system, using the device agent from Oracle's Database Mobile Server. In doing so, Jim has really showcased the full power of this solution: the ability to build M2M solutions that integrate seamlessly with mobile applications.
In closing, I want to point out that this is not a hypothetical demo using beta or even v1.0 products. Everything in Jim's demo is available today. What's more, every product shown is mature, and already in production at many customer sites, albeit not in the innovative combination Jim has come up with. If your customers are in the market for these types of solutions (and they almost certainly are) I encourage you to download the components and try it out yourself! All the Oracle products showcased in this video are available for evaluation download via Oracle Technology Network.

    Read the article

  • SyncToBlog #11 Stuff and more stuff

    - by Eric Nelson
    Just getting more stuff “down on paper” which grabbed my attention over the last couple of weeks.
- http://www.koodibook.com/ is live. This is a rich desktop application built in WPF by some ex-colleagues and current friends :-) Check it out if “photo books” is your thing or you like sweet WPF UX.
- A study rates the Microsoft .NET Framework top and Ruby on Rails 2nd bottom. I know a bit about both of these frameworks. Both are sweet for different reasons. .NET top. Ok – I liked that. But Ruby on Rails 2nd bottom just blows away the credibility of the survey results for me.
- StyleCop is going Open Source. Sweet. ”…will be taking code submissions from the open source community”
- VMforce for running Java in the cloud. Hmmmmm…
- Windows Azure Guidance Code and Docs available on patterns and practices. Download both zip files – one is just the code and the other is 7 chapters of the guide to migration.
- UK Architect Insight Conference post event presentations are here, including a full day track of cloud stuff.
- http://uxkit.cloudapp.net/ This appears to be a well-kept secret but the Silverlight Demo Kit is on-line in Windows Azure. You already knew! Ok – just me then :-)
- 3 day Silverlight Masterclass training in the UK from people I trust and like :-) http://silverlightmasterclass.net/ (£995)
- SQL Server Driver for PHP 2.0 CTP adds PHP's PDO style data access for SQL Server/SQL Azure
- A Domain Oriented N-Layered .NET 4.0 App Sample from Microsoft Spain. Not looked at it yet – but had it recommended to me (tx Torkil Pedersen)
You might also want to check out my delicious stream – a blur of azure, ruby and gaming right now http://delicious.com/ericnel :-)

    Read the article

  • C#: Does an IDisposable in a Halted Iterator Dispose?

    - by James Michael Hare
    If that sounds confusing, let me give you an example. Let's say you expose a method to read a database of products, and instead of returning a List<Product> you return an IEnumerable<Product> in iterator form (yield return). This accomplishes several good things: The IDataReader is not passed out of the Data Access Layer which prevents abstraction leak and resource leak potentials. You don't need to construct a full List<Product> in memory (which could be very big) if you just want to forward iterate once. If you only want to consume up to a certain point in the list, you won't incur the database cost of looking up the other items. This could give us an example like: 1: // a sample data access object class to do standard CRUD operations. 2: public class ProductDao 3: { 4: private DbProviderFactory _factory = SqlClientFactory.Instance 5:  6: // a method that would retrieve all available products 7: public IEnumerable<Product> GetAvailableProducts() 8: { 9: // must create the connection 10: using (var con = _factory.CreateConnection()) 11: { 12: con.ConnectionString = _productsConnectionString; 13: con.Open(); 14:  15: // create the command 16: using (var cmd = _factory.CreateCommand()) 17: { 18: cmd.Connection = con; 19: cmd.CommandText = _getAllProductsStoredProc; 20: cmd.CommandType = CommandType.StoredProcedure; 21:  22: // get a reader and pass back all results 23: using (var reader = cmd.ExecuteReader()) 24: { 25: while(reader.Read()) 26: { 27: yield return new Product 28: { 29: Name = reader["product_name"].ToString(), 30: ... 31: }; 32: } 33: } 34: } 35: } 36: } 37: } The database details themselves are irrelevant. I will say, though, that I'm a big fan of using the System.Data.Common classes instead of your provider specific counterparts directly (SqlCommand, OracleCommand, etc). This lets you mock your data sources easily in unit testing and also allows you to swap out your provider in one line of code. In fact, one of the shared components I'm most proud of implementing was our group's DatabaseUtility library that simplifies all the database access above into one line of code in a thread-safe and provider-neutral way. I went with my own flavor instead of the EL due to the fact I didn't want to force internal company consumers to use the EL if they didn't want to, and it made it easy to allow them to mock their database for unit testing by providing a MockCommand, MockConnection, etc that followed the System.Data.Common model. One of these days I'll blog on that if anyone's interested. Regardless, you often have situations like the above where you are consuming and iterating through a resource that must be closed once you are finished iterating. For the reasons stated above, I didn't want to return IDataReader (that would force them to remember to Dispose it), and I didn't want to return List<Product> (that would force them to hold all products in memory) -- but the first time I wrote this, I was worried. What if you never consume the last item and exit the loop? Are the reader, command, and connection all disposed correctly? Of course, I was 99.999999% sure the creators of C# had already thought of this and taken care of it, but inspection in Reflector was difficult due to the nature of the state machines yield return generates, so I decided to try a quick example program to verify whether or not Dispose() will be called when an iterator is broken from outside the iterator itself -- i.e. before the iterator reports there are no more items. 
So I wrote a quick Sequencer class with a Dispose() method and an iterator for it. Yes, it is COMPLETELY contrived: 1: // A disposable sequence of int -- yes this is completely contrived... 2: internal class Sequencer : IDisposable 3: { 4: private int _i = 0; 5: private readonly object _mutex = new object(); 6:  7: // Constructs an int sequence. 8: public Sequencer(int start) 9: { 10: _i = start; 11: } 12:  13: // Gets the next integer 14: public int GetNext() 15: { 16: lock (_mutex) 17: { 18: return _i++; 19: } 20: } 21:  22: // Dispose the sequence of integers. 23: public void Dispose() 24: { 25: // force output immediately (flush the buffer) 26: Console.WriteLine("Disposed with last sequence number of {0}!", _i); 27: Console.Out.Flush(); 28: } 29: } And then I created a generator (infinite-loop iterator) that did the using block for auto-Disposal: 1: // simply defines an extension method off of an int to start a sequence 2: public static class SequencerExtensions 3: { 4: // generates an infinite sequence starting at the specified number 5: public static IEnumerable<int> GetSequence(this int starter) 6: { 7: // note the using here, will call Dispose() when block terminated. 8: using (var seq = new Sequencer(starter)) 9: { 10: // infinite loop on this generator, means must be bounded by caller! 11: while(true) 12: { 13: yield return seq.GetNext(); 14: } 15: } 16: } 17: } This is really the same conundrum as the database problem originally posed. Here we are using iteration (yield return) over a large collection (infinite sequence of integers). If we cut the sequence short by breaking iteration, will that using block exit and hence, Dispose be called? Well, let's see: 1: // The test program class 2: public class IteratorTest 3: { 4: // The main test method. 5: public static void Main() 6: { 7: Console.WriteLine("Going to consume 10 of infinite items"); 8: Console.Out.Flush(); 9:  10: foreach(var i in 0.GetSequence()) 11: { 12: // could use TakeWhile, but wanted to output right at break... 13: if(i >= 10) 14: { 15: Console.WriteLine("Breaking now!"); 16: Console.Out.Flush(); 17: break; 18: } 19:  20: Console.WriteLine(i); 21: Console.Out.Flush(); 22: } 23:  24: Console.WriteLine("Done with loop."); 25: Console.Out.Flush(); 26: } 27: } So, what do we see? Do we see the "Disposed" message from our dispose, or did the Dispose get skipped because from an "eyeball" perspective we should be locked in that infinite generator loop? Here's the results: 1: Going to consume 10 of infinite items 2: 0 3: 1 4: 2 5: 3 6: 4 7: 5 8: 6 9: 7 10: 8 11: 9 12: Breaking now! 13: Disposed with last sequence number of 11! 14: Done with loop. Yes indeed, when we break the loop, the state machine that C# generates for yield iterate exits the iteration through the using blocks and auto-disposes the IDisposable correctly. I must admit, though, the first time I wrote one, I began to wonder and that led to this test. If you've never seen iterators before (I wrote a previous entry here) the infinite loop may throw you, but you have to keep in mind it is not a linear piece of code, that every time you hit a "yield return" it cedes control back to the state machine generated for the iterator. And this state machine, I'm happy to say, is smart enough to clean up the using blocks correctly. I suspected those wily guys and gals at Microsoft engineered it well, and I wasn't disappointed. But, I've been bitten by assumptions before, so it's good to test and see. 
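As a side note, the same disposal behavior holds if you bound the sequence with LINQ instead of a manual break: foreach disposes the enumerator it gets from Take(), which in turn disposes the underlying iterator and unwinds its using block, so the "Disposed..." message from Sequencer still appears. A minimal variation, assuming the Sequencer and SequencerExtensions classes above are in scope:

using System;
using System.Linq;

public class IteratorTest2
{
    public static void Main()
    {
        // Take(10) bounds the infinite sequence; when the loop completes,
        // foreach disposes the Take enumerator, which disposes the source
        // iterator and runs the using block's Dispose on the Sequencer.
        foreach (var i in 0.GetSequence().Take(10))
        {
            Console.WriteLine(i);
        }
        Console.WriteLine("Done with loop.");
    }
}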
Yes, maybe you knew it would or figured it would, but isn't it nice to know? And as those campy 80s G.I. Joe cartoon public service reminders always taught us, "Knowing is half the battle...". Technorati Tags: C#,.NET

    Read the article

  • Getting to grips with the stack in nasm

    - by MarkPearl
    Today I spent a good part of my day getting to grips with the stack and nasm. After looking at my notes on nasm I think this is one area of the course I am doing that they could focus more on… So here are some snippets I have put together that have helped me understand a little bit about the stack…

Simplest example of the stack

You will probably see examples like the following in circulation… these demonstrate the simplest use of the stack…

org 0x100
bits 16
jmp main

main:
push 42h
push 43h
push 44h
mov ah,2h ;set to display characters
pop dx    ;get the first value
int 21h   ;and display it
pop dx    ;get 2nd value
int 21h   ;and display it
pop dx    ;get 3rd value
int 21h   ;and display it
int 20h

The output from the above code would be…

DCB

Decoupling code using “call” and “ret”

This is great, but it oversimplifies what I want to use the stack for… I do not know if this goes against the grain of assembly programmers or not, but I want to write loosely coupled assembly code – and I want to use the stack as a mechanism for passing values into my decoupled code. In nasm we have the call and ret instructions, which provide a mechanism for decoupling code; for example the following could be done…

org 0x100
bits 16
jmp main

;----------------------------------------
displayChar:
mov ah,2h
mov dx,41h
int 21h
ret
;----------------------------------------

main:
call displayChar
int 20h

This would output the following to the console…

A

So, it would seem that call and ret allow us to jump to segments of our code and then return back to the calling position – a form of segmenting the code into what we would call in higher order languages “functions” or “methods”. The only issue is, in higher order languages there is a way to pass parameters into the functions and return results. Because of the primitive nature of the call and ret instructions, this does not seem to be obvious. We could of course use the registers to pass values into the subroutine and set values coming out, but the problem with this is we…
- Have a limited number of registers
- Are threading our code with tight coupling (it would be hard to migrate methods outside of their intended use in a particular program to another one)
With that in mind, I turn to the stack to provide a loosely coupled way of calling subroutines…

First attempt with the Stack

Initially I thought this would be simple… we could use code that looks as follows to achieve what I want…

org 0x100
bits 16
jmp main

;----------------------------------------
displayChar:
mov ah,2h
pop dx
int 21h
ret
;----------------------------------------

main:
push 41h
call displayChar
int 20h

However, running this application does not give the desired result. I want an ‘A’ to be returned, and I am getting something totally different (you will too). Reading up on the call and ret instructions a discovery is made… they are pushing and popping things onto and off the stack as well… When the call instruction is executed, the current value of IP (the address of the instruction to follow) is pushed onto the stack; when ret is called, the last value on the stack is popped off into the IP register. In effect, what the above code is doing is as follows with the stack…
push 41h
push current value of ip
pop current value of ip to dx
pop 41h to ip
This is not what I want. I need to access the 41h that I pushed onto the stack, but the call value (which is necessary) is putting something in my way. So, what to do?
Remember we have other registers we can use, as well as a thing called indirect addressing… So, after some reading around, I came up with the following approach using indirect addressing…

org 0x100
bits 16
jmp main

;----------------------------------------
displayChar:
mov bp,sp
mov ah,2h
mov dx,[bp+2]
int 21h
ret
;----------------------------------------

main:
push 41h
call displayChar
int 20h

In essence, what I have done here is used a trick with the stack pointer… it goes as follows…
- Push 41h onto the stack
- Make the call to the function, which will push the IP register onto the stack and then jump to the displayChar label
- Move the value of the stack pointer to the bp register (sp currently points at the saved IP)
- Move the value at the location of bp plus 2 bytes to dx (this is now the value 41h)
- Display it, then execute the ret instruction, which pops the ip value off the stack and goes back to the calling point
This approach is still very raw; some further reading around shows that I should be pushing the value of bp onto the stack before replacing it with sp, but it is the starting thread to getting loosely coupled subroutines. Let’s see if you get what the following output would be…

org 0x100
bits 16
jmp main

;----------------------------------------
displayChar:
mov bp,sp
mov ah,2h
mov dx,[bp+4]
int 21h
mov dx,[bp+2]
int 21h
ret
;----------------------------------------

main:
push 41h
push 42h
call displayChar
int 20h

The output is…

AB

Where to from here? If by any luck some assembly programmer comes along, sees this code and notices that I have made some fundamental flaw in my logic… I would like to know, so please leave a comment… I appreciate any feedback!

    Read the article

  • Documentation in RETL, RIB, and RSL Release 13.2.4

    - by Oracle Retail Documentation Team
    The Patch Release 13.2.4 of the integration-related products, Oracle Retail Extract, Transform and Load (RETL), Oracle Retail Integration Bus (RIB), and Oracle Retail Service Layer (RSL), is now available from My Oracle Support.

End User Documentation Enhancements

The following enhancements have been made to the documentation:

New RETL Installation Guide
New in Release 13.2.4, the RETL Installation Guide includes complete instructions to install and configure RETL 13.2.4. Installation instructions were previously in the Programmer’s Guide. As part of this enhancement, content was added to and tested in the RETL Installation Guide to ensure that it contains chapters and sections similar to those included in other Oracle Retail Installation Guides.

Template Creator documentation, under the RIB product umbrella
The Oracle Retail Functional Artifact Guide and the Oracle Retail Functional Artifact Generator Guide contain new information about a new tool called the Template Creator. The Functional Artifact Generator tool has been enhanced to generate custom and localized payload business objects on demand, based on Oracle Retail Functional Artifact rules. A new tool called the Template Creator has been provided to create the placeholder XSDs and the import hooks in the base objects on an as-needed basis. In other words, this tool constructs the appropriate placeholders in the packaging structure in the correct locations. The Artifact Generator tools, including the Template Creator, can be used either as a command line or GUI tool set.

List of Documents in RETL, RIB, and the Oracle Retail Service Layer (RSL) 13.2.4

The following documents are included in release 13.2.4 of the applications noted above:

RIB
- Oracle Retail Integration Bus Release Notes
- Oracle Retail Integration Bus Implementation Guide
- Oracle Retail Integration Bus Installation Guide
- Oracle Retail Integration Bus Operations Guide
- Oracle Retail Functional Artifact Generator Guide
- Oracle Retail Functional Artifacts Guide
- Oracle Retail Service Layer Installation Guide
- Oracle Retail SOA Enabler Tool Guide
- RIB Integration Guide (ID 1277421.1)

RETL
- Oracle Retail Extract, Transform, and Load Release Notes
- Oracle Retail Extract, Transform, and Load Installation Guide
- Oracle Retail Extract, Transform, and Load Programmer’s Guide

RSL
- Oracle Retail Service Layer Release Notes
- Oracle Retail Service Layer Installation Guide
- Oracle Retail Service Layer Programmer’s Guide

    Read the article

  • Screwed Up Again, Did Ya?

    - by rickramsey
    Your turn to wear the Cantaloupe Cap of Shame? Here's how to keep it from happening again:
- Figure out what data you need to archive
- Create a solid archive someplace safer than your iPhone
- Get wicked fast at recovering your system
Jesse Butler explains how to do all three for a system running Oracle Solaris 11: How to Recover an Oracle Solaris 11 System

    Read the article

  • Extending QuickBooks Reporting with the QuickBooks ADO.NET Data Provider

    - by dataintegration
    The ADO.NET Provider for QuickBooks comes with several reports you may request from QuickBooks by default. However, there are many more that are not readily available. The ADO.NET Provider for QuickBooks makes it easy for you to create new reports and customize existing ones. In this article, we will illustrate how to create your own report and retrieve it from the Server Explorer in Visual Studio. For this example we will show how to create an Item Profitability Report. Creating the report script file Step 1: Download the sample reports available here. Extract them to a folder of your choice. Step 2: Make a copy of the ReportGeneralSummary.rsd file and rename it to ItemProfitability.rsd. Then open the file in any text editor. Step 3: Open the installation directory of the ADO.NET Provider for QuickBooks. Under the \db\ folder, locate the ReportJob.rsb file. Open this file in another text editor. Note: Although we are using ReportJob.rsb for this example, other reports may be contained in other Report*.rsb files. We recommend consulting the included help file and first locating the Report stored procedure and ReportType you are looking for. Otherwise, you may open each Report*.rsb file and look under the "reporttype" input for the report you are attempting to create. Step 4: First, let's rename the title of ItemProfitability.rsd. Near the top of the file you will see a title and description. Change the title to match the name of the file. Change the description to anything you like. For example: <rsb:info title="ItemProfitability" description="Executes my custom report."> Just below the Title, there are a number of columns. The Id represents the row number. The RowType represents the type of data returned by QuickBooks. The ColumnValue* columns represent all of the column data returned by QuickBooks. In some instances, we may need to add additional ColumnValue columns. Step 5: To add additional ColumnValue columns, simply copy the last column, paste it directly below, and continue increasing the numerical value at end of the attribute name. For example: <attr name="ColumnValue9" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue10" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue11" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue12" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> ... Caution: Do not rename the ColumnValue* definitions themselves. They are generalized so that we can understand each type of report returned by QuickBooks. Renaming them to something other than ColumnValue* will cause your columns to return with null values. Step 6: Now let's update the available inputs for the table. From the ReportJob.rsb file, copy all of the input elements into ItemProfitability under the "Psuedo-Column definitions" comment. You will be replacing the existing input elements in ItemProfitability with inputs from ReportJob. When you are done, it should look like this: <!-- Psuedo-Column definitions --> <input name="reporttype" description="The type of the report." 
value="ITEMESTIMATESVSACTUALS,ITEMPROFITABILITY,JOBESTIMATESVSACTUALSDETAIL,JOBESTIMATESVSACTUALSSUMMARY,JOBPROFITABILITYDETAIL,JOBPROFITABILITYSUMMARY," default="ITEMESTIMATESVSACTUALS" /> <input name="reportperiod" description="Report date range in the format (fromdate:todate), and either value may be omitted for an open ended range (e.g. 2009-12-25:). Supported date format: yyyy-MM-dd." /> <input name="reportdaterangemacro" description="Use a predefined date range." value="ALL,TODAY,THISWEEK,THISWEEKTODATE,THISMONTH,THISMONTHTODATE,THISQUARTER,THISQUARTERTODATE,THISYEAR,THISYEARTODATE,YESTERDAY,LASTWEEK,LASTWEEKTODATE,LASTMONTH,LASTMONTHTODATE,LASTQUARTER,LASTQUARTERTODATE,LASTYEAR,LASTYEARTODATE,NEXTWEEK,NEXTFOURWEEKS,NEXTMONTH,NEXTQUARTER,NEXTYEAR," default="ALL" /> ... Step 7: Now let's update the operationname attribute. This needs to match the same operationname used by ReportJob. After you have copied the correct value from ReportJob.rsb, the operationname in ItemProfitability should look like so: <rsb:set attr="operationname" value="qbReportJob"/> Step 8: There is one more thing we can do to make this a true Item Profitability report. We can remove the reporttype input and hardcode the value. To do this, copy and paste the rsb:set used for operationname. Then rename the attr and value to match the name and value you want to use. For example: <rsb:set attr="operationname" value="qbReportJob"/> <rsb:set attr="reporttype" value="ITEMPROFITABILITY"/> After this you can remove the input for reporttype. Now that you have your own report file, we can move on to displaying the report in the Visual Studio server explorer. Accessing the report through the Data Provider Step 1: Open Visual Studio. In the Server Explorer, configure a new connection with the QuickBooks Data Provider. Step 2: For the Location connection string property, enter the directory where the new report has been saved to. Step 3: The new report should appear as a new view in the Server Explorer. Let's retrieve data from it. Step 4: You can specify any inputs in the WHERE clause. New Report Example Script To help you get started using this new QuickBooks Data Provider report, you will need to download the QuickBooks ADO.NET Data Provider and the fully functional sample script.
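For completeness, here is a rough sketch of how the new report might be queried from code instead of the Server Explorer. The provider-specific class names (QuickBooksConnection, QuickBooksCommand), the namespace, and any connection string keys other than Location are assumptions on my part; check the provider's help file for the exact names it ships with. The RowType/ColumnValue* columns and the reportdaterangemacro input come from the report definition built above.

using System;
// Assumed namespace; the actual provider assembly may use a different one.
using System.Data.QuickBooks;

class ItemProfitabilityDemo
{
    static void Main()
    {
        // Location must point at the folder containing ItemProfitability.rsd.
        const string connectionString = @"Location=C:\MyReports;";

        using (var connection = new QuickBooksConnection(connectionString))
        {
            connection.Open();
            using (var command = new QuickBooksCommand(
                "SELECT RowType, ColumnValue1, ColumnValue2 FROM ItemProfitability " +
                "WHERE reportdaterangemacro = 'THISMONTH'", connection))
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1} {2}",
                        reader["RowType"], reader["ColumnValue1"], reader["ColumnValue2"]);
                }
            }
        }
    }
}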

    Read the article

< Previous Page | 163 164 165 166 167 168 169 170 171 172 173 174  | Next Page >