Search Results

Search found 9610 results on 385 pages for 'common mistakes'.


  • UPK and the Oracle Unified Method can be used to deploy Oracle-Based Business Solutions

    - by Emily Chorba
    Originally developed to support Oracle's acquisition strategy, the Oracle Unified Method (OUM) defines a common implementation language across all of Oracle's products and technologies. OUM is a flexible, scalable, and evolving body of knowledge that combines existing best practices and field experience with an industry-standard framework that includes the latest thinking around agile implementation and cloud computing.

    Strong, proven methods are essential to ensuring successful enterprise IT projects, both within Oracle and for our customers and partners. OUM provides a collection of repeatable processes that are the basis for agile implementations of Oracle enterprise business solutions, as well as a structure for tracking progress and managing cost and risks. OUM is applicable to any size or type of IT project. While OUM is a plan-based method (including overview material, task and artifact descriptions, and templates), it is intended to be tailored to support the appropriate level of ceremony (or agility) required for each project. Guidance is provided for identifying the minimum subset of tasks, tailoring the approach, executing iterative and incremental planning, and applying agile techniques, including support for managing projects using Scrum. Supplemental guidance provides specific support for Oracle products, such as UPK.

    OUM is available to Oracle employees, partners, and customers:

    • Internal use at Oracle: Employees can download OUM from MyDesktop.
    • OUM Partner Program: OUM is available free of charge to Oracle PartnerNetwork (OPN) Diamond, Platinum, and Gold partners as a benefit of membership. These partners may download OUM from the Oracle Unified Method Knowledge Zone on OPN.
    • OUM Customer Program: The OUM Customer Program allows customers to obtain copies of the method for their internal use by contracting with Oracle for a services engagement of two weeks or longer. Customers who have a signed contract with Oracle and meet the engagement qualification criteria, as published on the Customer tab of the OUM website, are permitted to download the current release of OUM for their perpetual use. They may obtain subsequent releases published during a renewable, three-year access period.

    To learn more about OUM, visit the OUM Blog, OUM on LinkedIn, or OUM on Twitter.

    Emily Chorba, Principal Product Manager, Oracle User Productivity Kit

    Read the article

  • Save Upgrade downtime: Upgrade APEX upfront

    - by Mike Dietrich
    With almost every patch or release upgrade of the Oracle Database, a new version of Oracle Application Express (APEX) is installed. And as APEX is part of the database installation, it is upgraded as part of the component upgrades after the ORACLE SERVER component has been successfully upgraded to the new release. But the APEX upgrade can take a while (several minutes, or even more in some cases). It is therefore common advice to upgrade APEX upfront, before upgrading the database, as this can be done online while the database is in production (unless your database serves just as an APEX application backend - in that case upgrading APEX upfront won't save you anything).

    To upgrade Oracle APEX upfront you'll have to follow MOS Note 1088970.1. It explains that you'll have to:

    1. Determine the installation type by running this query:

        select count(*) from <SCHEMA>.WWV_FLOWS where id = 4000;

       where <SCHEMA> can be one of the following:

        FLOWS_010500   1.5.X
        FLOWS_010600   1.6.X
        FLOWS_020000   2.0.X
        FLOWS_020100   2.1.X
        FLOWS_020200   2.2.X
        FLOWS_030000   3.0.X
        FLOWS_030100   3.1.X
        APEX_030200    3.2.X
        APEX_040000    4.0.X
        APEX_040100    4.1.X
        APEX_040200    4.2.X

       If the query returns 0 then you'll need to run apxrtins.sql. If the query returns 1 then you'll need to execute apexins.sql.

    2. Download the newest APEX package and install it.

    -Mike
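    As a rough sketch of what the check and install might look like in practice (this is an illustration, not taken from the note - the schema name assumes an existing APEX 4.2 installation, and the tablespace/image-path parameters are the commonly documented defaults, so verify them against the note for your environment):

        -- Determine the installation type (here: an APEX 4.2 schema)
        select count(*) from APEX_040200.WWV_FLOWS where id = 4000;

        -- Query returned 1 (full development environment): from the directory
        -- of the newly downloaded APEX distribution, connected as SYS:
        --   @apexins.sql SYSAUX SYSAUX TEMP /i/

        -- Query returned 0 (runtime-only installation):
        --   @apxrtins.sql SYSAUX SYSAUX TEMP /i/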

    Read the article

  • Top 5 Developer Enabling Nuggets in MySQL 5.6

    - by Rob Young
    MySQL 5.6 is truly a better MySQL and reflects Oracle's commitment to the evolution of the most popular and widely used open source database on the planet. The feature-complete 5.6 release candidate was announced at MySQL Connect in late September, and the production-ready, generally available ("GA") product should be available in early 2013. While the message around 5.6 has focused mainly on mass-appeal, advanced topics like performance/scale, high availability, and self-healing replication clusters, MySQL 5.6 also provides many developer-friendly nuggets designed to enable those who are building the next generation of web-based and embedded applications and services. Boiling the 5.6 feature set down to a smaller set of simple, easy-to-use goodies designed with developer agility in mind, these things deserve a quick look:

    Subquery Optimizations

    Using semi-joins and late materialization, the MySQL 5.6 optimizer delivers greatly improved subquery performance. Specifically, the optimizer is now more efficient in handling subqueries in the FROM clause; materialization of subqueries in the FROM clause is now postponed until their contents are needed during execution. Additionally, the optimizer may add an index to derived tables during execution to speed up row retrieval. Internal tests run using the DBT-3 benchmark Query #13, shown below, demonstrate an order-of-magnitude improvement in execution times (from days to seconds) over previous versions.

        select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
        from customer, orders, lineitem
        where o_orderkey in (
                select l_orderkey
                from lineitem
                group by l_orderkey
                having sum(l_quantity) > 313
          )
          and c_custkey = o_custkey
          and o_orderkey = l_orderkey
        group by c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice
        order by o_totalprice desc, o_orderdate
        LIMIT 100;

    What does this mean for developers? For starters, simplified subqueries can now be coded instead of complex joins for cross-table lookups:

        SELECT title FROM film
        WHERE film_id IN (SELECT film_id FROM film_actor
                          GROUP BY film_id HAVING count(*) > 12);

    And even more importantly, subqueries embedded in packaged applications no longer need to be rewritten into joins. This is good news both for ISVs and for their customers who have access to the underlying queries and who have spent development cycles writing, testing and maintaining their own versions of rewritten queries across updated versions of a packaged app. The details are in the MySQL 5.6 docs.

    Online DDL Operations

    Today's web-based applications are designed to rapidly evolve and adapt to meet business and revenue-generation requirements. As a result, development SLAs are now most often measured in minutes rather than days or weeks. For example, when an application must quickly support new product lines or new products within existing product lines, the backend database schema must adapt in kind, most commonly while the application remains available for normal business operations. MySQL 5.6 supports this level of online schema flexibility and agility by providing the following new ALTER TABLE online DDL syntax additions:

    • CREATE INDEX
    • DROP INDEX
    • Change AUTO_INCREMENT value for a column
    • ADD/DROP FOREIGN KEY
    • Rename COLUMN
    • Change ROW_FORMAT, KEY_BLOCK_SIZE for a table
    • Change COLUMN NULL, NOT_NULL
    • Add, drop, reorder COLUMN

    Again, the details are in the MySQL 5.6 docs.
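    To make the online DDL point concrete, here is a minimal sketch (the table and column names are hypothetical, not from the article): in 5.6 you can ask for an in-place, non-blocking ALTER explicitly, so it fails with an error rather than silently falling back to a blocking table copy:

        -- Add a column while the table stays available for reads and writes
        ALTER TABLE products
            ADD COLUMN product_line VARCHAR(40) NULL,
            ALGORITHM=INPLACE, LOCK=NONE;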
    Key-value access to InnoDB via Memcached API

    Many of the next generation of web, cloud, social and mobile applications require fast operations against simple key/value pairs. At the same time, they must retain the ability to run complex queries against the same data, as well as ensure the data is protected with ACID guarantees. With the new NoSQL API for InnoDB, developers have all the benefits of a transactional RDBMS, coupled with the performance capabilities of a key/value store.

    MySQL 5.6 provides simple key-value interaction with InnoDB data via the familiar Memcached API. Implemented via a new Memcached daemon plug-in to mysqld, the new Memcached protocol is mapped directly to the native InnoDB API and enables developers to use existing Memcached clients to bypass the expense of query parsing and go directly to InnoDB data for lookups and transactionally compliant updates. The API makes it possible to re-use standard Memcached libraries and clients, while extending Memcached functionality by integrating a persistent, crash-safe, transactional database back-end.

    So does this option provide a performance benefit over SQL? Internal performance benchmarks using a customized Java application and test harness show some very promising results, with a 9X improvement in overall throughput for SET/INSERT operations. You can follow the InnoDB team blog for the methodology, implementation and internal test cases that generated these results. How to get started with the Memcached API to InnoDB is covered in the MySQL docs.

    New Instrumentation in Performance Schema

    The MySQL Performance Schema was introduced in MySQL 5.5 and is designed to provide point-in-time metrics for key performance indicators. MySQL 5.6 improves the Performance Schema in answer to the most common DBA and developer problems. New instrumentation includes:

    • Statements/Stages: What are my most resource-intensive queries? Where do they spend time?
    • Table/Index I/O, Table Locks: Which application tables/indexes cause the most load or contention?
    • Users/Hosts/Accounts: Which application users, hosts and accounts are consuming the most resources?
    • Network I/O: What is the network load like? How long do sessions idle?
    • Summaries: Aggregated statistics grouped by statement, thread, user, host, account or object.

    The MySQL 5.6 Performance Schema is now enabled by default in the my.cnf file, with optimized and auto-tune settings that minimize overhead (< 5%, but mileage will vary), so using the Performance Schema on a production server to monitor the most common application use cases is less of an issue. In addition, new atomic levels of instrumentation enable the capture of granular levels of resource consumption by users, hosts, accounts, applications, etc. for billing and chargeback purposes in cloud computing environments. The MySQL docs are an excellent resource for all that is available and all that can be done with the 5.6 Performance Schema.
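    As a small illustration of the kind of question this answers in 5.6 (the output will of course vary by server), the statement-digest summary table surfaces the most expensive normalized statements:

        -- Top five statement digests by accumulated execution time
        SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT
        FROM performance_schema.events_statements_summary_by_digest
        ORDER BY SUM_TIMER_WAIT DESC
        LIMIT 5;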
    Better Condition Handling - GET DIAGNOSTICS

    MySQL 5.6 enables developers to easily check for error conditions and code for exceptions by introducing the new MySQL Diagnostics Area and the corresponding GET DIAGNOSTICS interface command. The Diagnostics Area can be populated via multiple options and provides two kinds of information:

    • Statement - which provides the affected row count and the number of conditions that occurred
    • Condition - which provides error codes and messages for all conditions that were returned by a previous operation

    The new GET DIAGNOSTICS command provides a standard interface into the Diagnostics Area and can be used via the CLI or from within application code to easily retrieve and handle the results of the most recent statement execution. An example of how it is used might be:

        mysql> DROP TABLE test.no_such_table;
        ERROR 1051 (42S02): Unknown table 'test.no_such_table'
        mysql> GET DIAGNOSTICS CONDITION 1
            ->   @p1 = RETURNED_SQLSTATE, @p2 = MESSAGE_TEXT;
        mysql> SELECT @p1, @p2;
        +-------+------------------------------------+
        | @p1   | @p2                                |
        +-------+------------------------------------+
        | 42S02 | Unknown table 'test.no_such_table' |
        +-------+------------------------------------+

    Options for leveraging the MySQL Diagnostics Area and GET DIAGNOSTICS are detailed in the MySQL docs.
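    The same mechanism also works inside stored programs. A minimal sketch, reusing the missing table from the example above (the procedure and variable names are illustrative assumptions, not from the article):

        -- A CONTINUE HANDLER that captures the failing statement's SQLSTATE
        -- and message via GET DIAGNOSTICS instead of aborting the procedure.
        DELIMITER //
        CREATE PROCEDURE try_drop_table()
        BEGIN
          DECLARE sqlstate_code CHAR(5) DEFAULT '00000';
          DECLARE error_msg TEXT DEFAULT '';
          DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
            GET DIAGNOSTICS CONDITION 1
              sqlstate_code = RETURNED_SQLSTATE, error_msg = MESSAGE_TEXT;
          DROP TABLE test.no_such_table;
          SELECT sqlstate_code, error_msg;
        END//
        DELIMITER ;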
    While the above is a summary of some of the key developer-enabling 5.6 features, it is by no means exhaustive. You can dig deeper into what MySQL 5.6 has to offer by reading this developer zone article or checking out "What's New in MySQL 5.6" in the MySQL docs.

    BONUS ALERT! If you are developing on Windows or are considering MySQL as an alternative to SQL Server for your next project, application or shipping product, you should check out the MySQL Installer for Windows. The installer includes the MySQL 5.6 RC database, all drivers, Visual Studio and Excel plugins, tray monitor and development tools, all in a single download and GUI installer.

    So what are your next steps?

    • Register for the Dec. 13 "MySQL 5.6: Building the Next Generation of Web-Based Applications and Services" live web event. Hurry! Seats are limited.
    • Download the MySQL 5.6 Release Candidate (look under the Development Releases tab)
    • Provide feedback at http://bugs.mysql.com/
    • Join the developer discussion on the MySQL Forums
    • Explore all MySQL Products and Developer Tools

    As always, thanks for your continued support of MySQL!

    Read the article

  • Connecting the Dots (.NET Business Connector)

    - by ssmantha
    Recently, one of my colleagues was experimenting with Reporting Server on DAX 2009. Whenever he tried to view a report in SQL Server Reporting Manager he was welcomed with an error: “Error during processing Ax_CompanyName report parameter. (rsReportParameterProcessingError)” The Event Log had the following entry:

        Dynamics Adapter LogonAs failed.
        Microsoft.Dynamics.Framework.BusinessConnector.Session.Exceptions.FatalSessionException
        at Microsoft.Dynamics.Framework.BusinessConnector.Session.DynamicsSession.HandleException(String message, Exception exception, HandleExceptionCallback callback)

    We later found out that this was due to an incorrect Business Connector account; from past experience I have noticed this is a very common mistake people make during EP and Reporting installations. Remember that the reports need to connect to the Dynamics AX server to run the AX queries, which need to pass through the .NET Business Connector. To ensure everything works fine, please note the following settings:

    1) Your Report Server service account should be the same as the .NET Business Connector proxy account.
    2) Ensure that on the server which has Reporting Services installed, the client configuration utility for the Business Connector points to the correct proxy account.
    3) And finally, the AX instance you are connecting to has a service account specified for the .NET Business Connector (Administration –> Service accounts –> .NET Business Connector).

    These simple checkpoints can help with most Business Connector related errors, which I believe are mostly due to incorrect configuration settings. Happy DAXing!!

    Read the article

  • How to Get Spelling Autocorrect Across All Applications on Your System

    - by Erez Zukerman
    Spelling auto-correction can be a very handy feature, whether it is for tricky words (“emmitted” vs. “emitted”), typical typos (“desing” vs. “design”) or other common errors. Microsoft Word has it, but why not implement it across your system using a free, customizable and easy-to-use AutoHotkey script? Read on to see how. The script we’re going to be using goes by the shockingly original name AutoCorrect. For starters, simply click the link and save it somewhere handy. It’s a vintage script, last updated in 2007, but it still works very well – we’ve been using it daily for months.

    Read the article

  • Webinar: NoSQL - Data Center Centric Application Enablement

    - by Charles Lamb
    NoSQL - Data Center Centric Application Enablement

    AUGUST 6 WEBINAR

    About the Webinar: The growth of data center infrastructure is trending out of bounds, along with the pace of user activity and data generation in this digital era. However, the nature of the typical application deployment within the data center is changing to accommodate new business needs. Those changes introduce complexities in application deployment architecture and design, which cascade into requirements for a new generation of database technology (NoSQL) destined to ease that complexity. This webcast will discuss the modern data center's data-centric applications, the complexities that must be dealt with, and common architectures found to describe and prescribe new data-center-aware services. We'll look at the practical issues in implementation and give an overview of the current state of the art in NoSQL database technology for solving the problems of data center awareness in application development.

    REGISTER NOW >> MORE INFORMATION >>

    NOTE! All attendees will be entered to win a guest pass to the NoSQL Now! 2013 Conference & Expo.

    About the Speaker: Robert Greene, Oracle NoSQL Product Management. Robert Greene is a principal product manager / strategist for Oracle’s NoSQL Database technology. Prior to Oracle he was the V.P. Technology for a NoSQL database company, Versant Corporation, where he set the strategy for alignment with Big Data technology trends, resulting in the acquisition of the company by Actian Corp in 2012. Robert has been an active member of both commercial and open source initiatives in the NoSQL and Object Relational Mapping spaces for the past 18 years, developing software, leading project teams, authoring articles and presenting at major conferences on these topics. In a previous life, Robert was an electronics engineer developing first-generation wireless, spread-spectrum based security systems.

    Read the article

  • Partner Webcast - Oracle WebCenter: Portal Highlights - 31 Oct 2013

    - by Thanos Terentes Printzios
    Oracle WebCenter is the center of engagement for business. In order to succeed in today’s economy, organizations need to engage with information across all channels to ensure customers, partners and employees have access to the right information in the context of the business process in which they are engaged. The latest release of Oracle WebCenter addresses this challenge with updates across its complete portfolio.

    Nowadays, portals are multi-channel applications that enable the creation, sharing and distribution of personalized content, as well as access to social networking and self-service capabilities. Web 2.0 and social technologies have already transformed the ways customers, employees, partners, and suppliers communicate and stay informed. The new release of Oracle WebCenter Portal makes it easier and faster for business users to create intuitive portals with integrated application content by:

    • Streamlining development with an integrated set of tools for web and mobile.
    • Providing out-of-the-box templates for common use cases.
    • Expediting the portal creation experience with new development tools that empower business users to build and deploy mobile portals and websites with unprecedented speed, without having to wait for IT, which leads to a shorter time to market and reduced costs.

    Join us to discover a web platform that allows organizations to quickly and easily create intranets, extranets, composite applications, and self-service portals, providing users a more secure and efficient way of consuming information and interacting with applications, processes, and other users – the latest Oracle WebCenter Portal release 11gR1 PS7.

    Agenda:

    • Oracle WebCenter Overview
    • Oracle WebCenter Portal: new and enhanced features to improve the user experience:
        - For knowledge workers: Simplified Portal Creation, Search Enhancements
        - For application specialists: New Portal Builder, Simplify Mobile Development
        - For developers: Enhanced APIs and ADF Support
        - For administrators: Lifecycle Enhancements, Search Administration, Impersonation
    • Summary - Q&A

    This is our first webcast of an Oracle WebCenter series for partners, with the support of the Oracle EMEA WebCenter Partner Community.

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. New invitations will be shared for additional webcasts planned for Oracle WebCenter.

    Thursday, October 31st, 2013, 10am CET (8am UTC / 11am EEST). Register Now

    For any questions please contact us at [email protected]

    Stay Connected

    Read the article

  • How should I start with Lisp?

    - by Gary Rowe
    I've been programming for years now, working my way through various iterations of Blub (BASIC, Assembler, C, C++, Visual Basic, Java, Ruby, in no particular order of "Blub-ness") and I'd like to learn Lisp. However, I have a lot of inertia, what with limited time (family, full time job etc.) and a comfortable happiness with my current Blub (Java). So my question is this: given that I'm someone who would really like to learn Lisp, what would be the initial steps to get a good result that demonstrates the superiority of Lisp in web development? Maybe I'm missing the point, but that's how I would initially see the application of my Lisp knowledge. I'm thinking "use dialect A, use IDE B, follow instructions on page C, question your sanity after monads using counsellor D". I'd just like to know what people here consider to be an optimal set of values for A, B, C and perhaps D. Also, some discussion on the relative merit of learning such a powerful language, as opposed to, say, becoming a Rails expert, would be welcome. Just to add some more detail: I'll be developing on MacOS (or a Linux VM) - no Windows-based approaches will be necessary, thanks.

    Notes for those just browsing by: I'm going to keep this question open for a while so that I can offer feedback on the suggestions after I've been able to explore them. If you happen to be browsing by and feel you have something to add, please do. I would really welcome your feedback.

    Interesting links: Assuming you're coming at Lisp from a Java background, this set of links will get you started quickly.

    • Using IntelliJ's La Clojure plugin to integrate Lisp (videocast)
    • Lisp for the Web
    • Online version of Practical Common Lisp (c/o Frank Shearar)
    • Land of Lisp - a (+ (+ very quirky) game based) way in, but makes it all so straightforward

    Read the article

  • Boot-repair commands not found in PATH or not executable

    - by Bram Meerten
    I recently had problems with my Ubuntu partition (after the battery died). I managed to fix them by running Ubuntu from USB and running GParted. It worked: I can access my files on the partition when running Ubuntu from USB. But when I restart the computer, after selecting Ubuntu in GRUB, I get a black screen with a white underscore. I googled the problem and tried to solve it by setting nomodeset, but it didn't work. Next I wanted to try to fix GRUB using Boot-Repair. I clicked on 'Recommended repair', and it tells me to type the following commands in the terminal:

        sudo chroot "/mnt/boot-sav/sda5" apt-get install -fy
        sudo chroot "/mnt/boot-sav/sda5" dpkg --configure -a
        sudo chroot "/mnt/boot-sav/sda5" apt-get purge -y --force-yes grub-common

    But when running the second command, I get this error:

        dpkg: warning: 'sh' not found in PATH or not executable.
        dpkg: warning: 'rm' not found in PATH or not executable.
        dpkg: warning: 'tar' not found in PATH or not executable.
        dpkg: error: 3 expected programs not found in PATH or not executable.
        Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.

    I didn't edit /etc/environment (or any other files); this is what it looks like:

        PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
        RUNNING_UNDER_GDM="yes"

    I have no idea how to fix this. I'm dual-booting Ubuntu 12.04 and Windows 7, and Windows boots fine.

    Read the article

  • Laptop shuts down upon waking from suspend

    - by Bryan Head
    The computer enters suspend either by closing the lid, choosing suspend from the top-right drop down, or hitting the power button and pressing suspend. It doesn't matter which. I then attempt to wake the computer either by opening the lid (if it was closed) or hitting the power button. Again, it doesn't matter which. The computer will then immediately shut down about 50% of the time. It seems to be more likely to shut down the longer it has been in suspend. I took a snapshot of /var/log/pm-suspend.log after a successful resume and after a shutdown. The only difference (outside of timestamps, of course) was that on a successful resume, after reporting the success of various suspend hooks, it writes:

        Thu Jul 5 21:36:45 PDT 2012: performing suspend
        Thu Jul 5 21:37:10 PDT 2012: Awake.
        Thu Jul 5 21:37:10 PDT 2012: Running hooks for resume

    and then reports successful resume hooks. When it shuts down, the log ends at "performing suspend". I diffed the two files, so I know this is the only difference. Thus, it looks like it's not even trying to wake up. Would love some ideas on this one. I've scoured the web but can't seem to find anyone else running into the same issue (it seems more common that the computer shuts down upon entering suspend, or only when hitting the power button to wake; I haven't seen any reports that are random like mine). I'll update with any requested information.

    Read the article

  • Indian school boy never misses a class for 14 years. Applies for Guinness Records

    - by Gopinath
    If you ask the question “What is the most fun activity?” to school or college kids, most of them would say “bunking classes”. Many of us grew up bunking classes in the name of a stomachache, a relative's marriage, a high fever, rain or some other reason. Here is a wonder kid who is an exception among regular school kids. Mohammed Omar, a 17-year-old Indian school boy, has never skipped his classes for the past 14 years. His attendance record shows 100% for all the 14 years of school he has attended so far, an unbelievable track record. Omar lives in Kanpur, a city in Uttar Pradesh, with his parents and a younger brother. He attended school even when the area where he lives was flooded, and even when he had a high temperature. When the streets were flooded and motor vehicles were not able to run, he borrowed a bicycle from neighbors. When he had a high temperature he just popped a tablet and headed to school. Whatever the adverse situation, he found a way to attend school instead of bunking. He recently applied to the Guinness Book of World Records. The determination of the boy is incredible and an inspiration to many young people. I wish to see this guy soon flashing on TV channels with a Guinness World Records certificate in his hands. Source: NDTV, Creative Commons image: flickr/seeveeaar

    Read the article

  • Collision detection on a 2D hexagonal grid

    - by SundayMonday
    I'm making a casual grid-based 2D iPhone game using Cocos2D. The grid is a "staggered" hex-like grid consisting of uniformly sized and spaced discs. It looks something like this. I've stored the grid in a 2D array. I also have a concept of "surrounding" grid cells: namely, the six grid cells surrounding a particular cell (except on the boundaries, where there can be fewer than six). Anyway, I'm testing some collision detection and it's not working out as well as I had planned. Here's how I currently do collision detection for a moving disc that's approaching the stationary group of discs:

    1. Calculate the ij-coordinates of the grid cell closest to the moving cell, using the moving cell's xy-position
    2. Get the list of surrounding grid cells using the ij-coordinates
    3. Examine the surrounding cells. If they're all empty, then there is no collision
    4. If we have some non-empty surrounding cells, then compare the distance between the disc centers to some minimum distance required for a collision (for uniform discs, that's twice the disc radius)
    5. If there's a collision, place the moving disc in grid cell ij

    So this works, but not too well. I've considered a potentially simpler brute-force approach where I just compare the moving disc to all stationary discs at each step of the game loop. This is probably feasible in terms of performance, since the stationary disc count is 300 max. If not, then some space-partitioning data structure could be used, however that feels too complex. What are some common approaches and best practices for collision detection in a game like this?

    Read the article

  • What Can Super Mario Teach Us About Graphics Technology?

    - by Eric Z Goodnight
    If you ever played Super Mario Brothers or Mario Galaxy, you probably thought it was only a fun videogame, but fun can be serious. Super Mario has lessons to teach that you might not expect about graphics and the concepts behind them. The basics of image technology (and then some) can all be explained with a little help from everybody’s favorite little plumber. So read on to see what we can learn from Mario about pixels, polygons, computers and math, as well as dispelling a common misconception about those blocky old graphics we remember from when we first met Mario.

    Read the article

  • How to improve relationships between consultants and staff programmers

    - by Catchops
    I have been a consultant for a small software consulting firm for quite some time now. Our normal business model is not staff augmentation; rather, we find clients who need assistance in building a solution of some kind, then send in a team who can build that solution, work with the existing IT staff, train all involved in supporting that solution, and then move on to the next job. We are, of course, still around for any needed ongoing support. We have a great reputation in our area and have been very successful in implementing the solutions that we provide. However, I have noticed a common theme on most of our projects. When we get on-site, there is generally a "stressed" relationship between our team and many of the client's existing IT staff. I understand completely that there may be some anxiety about our arrival and that defenses can come up when we are around. Many of the folks are understanding and easy to work with, but there are usually some who will not work well with us at all, and who can quickly become a project risk in many ways. We try to go in with open minds and good attitudes, and try NOT to be arrogant or condescending. We generally get deployed when there is a mess to clean up - but we understand that there were reasons behind the decisions that got them into the bind they are in... so we just try to determine the next step forward and move on. My question is this: I'd like to hear from the IT staff and programmers out there who have had consultants in - what are the things that consultants do that fire up negative feelings and attitudes? What can we do better to make the relationship better, not only in the beginning, but as the project moves forward?

    Read the article

  • Partner Webcast – Implementing Web Services & SOA Security with Oracle Fusion Middleware - 20 September 2012

    - by Thanos
    Security has always been one of the main pain points for the IT industry, and new security challenges have been introduced with the proliferation of the service-oriented approach to building modern software. Oracle Fusion Middleware provides a wide variety of features that ease the building of service-oriented solutions, but how can these services be secured? Should we implement the security features in each and every service, or is there a better way? During the webinar we are going to show how to implement non-intrusive declarative security for your SOA components by introducing the Oracle product portfolio in this area, such as Oracle Web Services Manager and Oracle IDM.

    Agenda:

    • SOA & Web Services basics: quick refresher
    • Building your SOA with Oracle Fusion Middleware: product review
    • Common security risks in the Web Services world
    • SOA & Web Services security standards
    • Implementing Web Services Security with the Oracle products:
        - Web Services Security with Oracle – the big picture
        - Declarative end point security with Oracle Web Services Manager
        - Perimeter Security with Oracle Enterprise Gateway
        - Utilizing the other Oracle IDM products for the advanced scenarios
    • Q&A session

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend.

    Thursday, September 20, 2012 - 10:00 AM to 11:00 AM CET (GMT/UTC+1). Duration: 1 hour. Register Now

    Send your questions and migration/upgrade requests to [email protected]

    Visit regularly our ISV Migration Center blog or follow us @oracleimc to learn more on Oracle Technologies, upcoming partner webcasts and events. All content is made available through our YouTube - SlideShare - Oracle Mix.

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows.

    What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics used in determining the best way of executing a SQL query might be - and they most certainly are complex - the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles.

    A simplified example

    We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same join with two table variables, with two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. The timings are all a bit jerky because of the granularity of the timing, which isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable outperforms the local temporary table up to 10,000 rows. Actually, even without the OPTION (RECOMPILE) hint, it is doing well.

    What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It performs better than a temporary table, and in a good linear fashion.

    What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect the query analyzer to throw up its hands and produce a bad execution plan, as if it were a table variable join. After all, it knows nothing about the statistics in one of the tables, so how could it do any better?
    Well, it behaves as if it were doing a recompile, and an explicit recompile adds no value at all. (We just go up to 45,000 rows, since we know the bigger picture now.)

    Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly, to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the table variable and the temporary table. The table variable is faster up to 8,000 rows, and then there is not much in it up to 100,000 rows. Past the 8,000-row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that OPTION (RECOMPILE) hint.

    Count Dracula and the Horror Join

    These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’, i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey:

    • If both tables contain a primary key on the columns we join on, and both are Table Variables, it took 33 ms.
    • If one table contains a primary key, and the other is a heap, and both are Table Variables, it took 46 ms.
    • If both Table Variables use a unique constraint, the query takes 36 ms.
    • If neither table contains a primary key and both are Table Variables, it took 116,383 ms. Yes, nearly two minutes!!
    • If both tables contain a primary key, one is a Table Variable and the other is a temporary table, it took 113 ms.
    • If one table contains a primary key, and both are temporary tables, it took 56 ms.
    • If both tables are temporary tables and both have primary keys, it took 46 ms.

    Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use OPTION (RECOMPILE)? If you take the rogue query and add the hint, then suddenly the query drops its time down to 76 ms. If you add unique indexes, then you've done even better, down to half that time. So where have we got to? Without drilling down into the minutiae of the execution plans, we can begin to create a hypothesis.
    If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble (well, slow queries) unless you give the OPTION (RECOMPILE) hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context).

    Kevin’s Skew

    In describing the table size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100,000 rows in which the key column was one number (1). To this were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2, it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows in it, it adopted exactly the same execution plan as for the temporary table, and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables, due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, the plans were identical across a huge range.

    Conclusions

    Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘heaps’, unindexed. Beware, however, since, as the number of rows increases, joins on Table Variable heaps can easily become saddled with very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if that is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer of the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables.
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
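    To make the two mitigations concrete, here is a minimal T-SQL sketch (the table and column names are hypothetical, not taken from the tests above): the PRIMARY KEY constraints give the optimizer implicit unique indexes to join on, and OPTION (RECOMPILE) gives it an accurate row count at execution time:

        -- Two table variables joined on their PRIMARY KEY columns
        DECLARE @orders TABLE (order_id INT PRIMARY KEY, total MONEY);
        DECLARE @lines  TABLE (order_id INT PRIMARY KEY, qty INT);

        -- ... populate @orders and @lines ...

        SELECT o.order_id, o.total, l.qty
        FROM   @orders AS o
        JOIN   @lines  AS l ON l.order_id = o.order_id
        OPTION (RECOMPILE); -- recompiles the plan with the actual row counts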

    Read the article

  • Identity in .NET 4.5 – Part 1: Status Quo (Beta 1)

    - by Your DisplayName here!
    .NET 4.5 is a big release for claims-based identity. WIF becomes part of the base class library, and structural classes like Claim, ClaimsPrincipal and ClaimsIdentity even go straight into mscorlib. You will now be able to access all WIF functionality from prominent namespaces like ‘System.Security.Claims’ and ‘System.IdentityModel’ (yay!). But it is more than simply merging assemblies; in fact, claims are now a first-class citizen in the whole .NET Framework. All built-in identity classes, like FormsIdentity for ASP.NET and WindowsIdentity, now derive from ClaimsIdentity. Likewise, all built-in principal classes like GenericPrincipal and WindowsPrincipal derive from ClaimsPrincipal. In other words, the moment you compile your .NET application against 4.5, you are claims-based. That’s a big (and excellent) change. While the classes are designed in a way that you won’t “feel” a difference by default, having the power of claims under the hood (and by default) will change the way we design security features with the new .NET Framework. I am currently doing a number of proofs of concept and will write about them in the future. There are a number of nice “little” features, like the FindAll(), FindFirst() and HasClaim() methods on both ClaimsIdentity and ClaimsPrincipal. This makes querying claims much more streamlined. I also had to smile when I saw ClaimsPrincipal.Current (have a look at the code yourself) ;) With all the goodness also come a number of breaking changes. I will write about those, too. In addition, Vittorio announced just today the beta availability of a new wizard/configuration tool that makes it easier to do common things like federating with an IdP or creating a test STS. Go get the Beta and the tools and start writing claims-enabled applications! Interesting times ahead!

    Read the article

  • JRockit R28 "Ropsten" released

    - by tomas.nilsson
    R28 is a major release (as indicated by the careless omission of "minor" and "revision" numbers; the formal name would be R28.0.0). Our customers expect grand new features and innovation from major releases, and "Ropsten" will not disappoint. One of the biggest challenges for IT systems is after-the-fact diagnostics - that is, once something has gone wrong, the act of trying to figure out why it went wrong. Monitoring a system and keeping track of system health while it is running is considered a hard problem (one that we to some extent help our customers solve already with JRockit Mission Control), but doing it after something has occurred is close to impossible. The most common solution is to set up heavy logging (sacrificing system performance to do the logging) and hope that the problem occurs again. No one really thinks that this is a good solution, but it's the best there is. Until now. Inspired by the "black box" in airplanes, JRockit R28 introduces the Flight Recorder. Flight Recorder can be seen as an extremely detailed log, but one that is always on and that comes without a cost to system performance. With JRockit Flight Recorder the customer will be able to get diagnostics information about what happened _before_ a problem occurred, instead of trying to guess by looking at the fallout. Keywords that are important to the customer are:

    • Extremely detailed, always-on diagnostics information
    • No performance overhead
    • Powerful tooling to visualize the recorded data
    • Enables diagnostics of bugs and SLA breaches after the fact

    For followers of JRockit, other additions are:

    • New JMX agent that allows JRMC to be used through firewalls more easily
    • Option to generate HPROF dumps, compatible with tools like Eclipse MAT
    • Up to 64 GB compressed references (previously 4)
    • View memory allocation on a thread level (as an MBean and in Mission Control)
    • Native memory tracking (command line and MBean)
    • More robust optimizer
    • Dropping support for Java 1.4.2 and Itanium

    If you have any further questions, please email [email protected]. The release can be downloaded from http://www.oracle.com/technology/software/products/jrockit/index.html

    Read the article

  • Why is binding not a native feature in most languages?

    - by Gulshan
    IMHO, binding a variable to another variable or an expression is a very common scenario in mathematics. In fact, in the beginning, many students think the assignment operator (=) is some kind of binding. But in most languages, binding is not supported as a native feature. In some languages like C#, binding is supported in some cases, with some conditions fulfilled. But IMHO implementing this as a native feature would be as simple as changing the following code -

        int a, b, sum;
        sum := a + b;
        a = 10;
        b = 20;
        a++;

    to this -

        int a, b, sum;
        a = 10;
        sum = a + b;
        b = 20;
        sum = a + b;
        a++;
        sum = a + b;

    Meaning: placing the binding instruction as an assignment after every instruction that changes the value of any of the variables contained in the right-hand-side expression. After this, trimming redundant instructions (or optimization in assembly after compilation) will do. So, why is it not supported natively in most languages, especially in the C family of languages?

    Update: From the different opinions, I think I should define this proposed "binding" more precisely:

    • This is one-way binding. Only sum is bound to a+b, not the vice versa.
    • The scope of the binding is local.
    • Once the binding is established, it cannot be changed. Meaning, once sum is bound to a+b, sum will always be a+b.

    Hope the idea is clearer now.

    Update 2: I just wanted this P# feature. Hope it will be there in future.

    Read the article

  • The UIManager Pattern

    - by Duncan Mills
    One of the most common mistakes that I see when reviewing ADF application code is the sin of storing UI component references, most commonly things like table or tree components, in session or pageFlow scope. The reasons why this is bad are simple: firstly, these UI object references are not serializable, so they would not survive a session migration between servers; and secondly, there is no guarantee that the framework will re-use the same component tree from request to request, although in practice it generally does do so. So the danger here is that, at best, you end up with an NPE after your session has migrated, and at worst, you end up pinning old generations of the component tree, happily eating up your precious memory. So that's clear: we should never, ever, be storing references to components anywhere other than request scope (or maybe backing bean scope). So double-check the scope of those binding attributes that map component references into a managed bean in your applications.

    Why is it Such a Common Mistake?

    At this point I want to examine why there is this urge to hold onto these references anyway. After all, JSF will obligingly populate your backing beans with the fresh and correct reference when needed. In most cases, it seems that the rationale comes down to a lack of distinction within the application between what is data and what is presentation. I think perhaps a cause of this is the logical separation between business data behind the ADF data binding (#{bindings}) façade and the UI components themselves. Developers tend to think: OK, this is my data layer behind the bindings object, and everything else is just UI. Of course that's not the case. The UI layer itself will have state which is intrinsically linked to the UI presentation rather than the business model, but which at the same time should not be tightly bound to a specific instance of any single UI component. So here's the problem. I think developers try to use the UI components as state-holders for this kind of data, rather than using them to represent that state. An example of this might be something like the selection state of a tabset (panelTabbed): you might be interested in knowing what the currently disclosed tab is. The temptation that leads to the component reference sin is to go and ask the tabset what the selection is. That of course is fine in context - e.g. a handler within the same request-scoped bean that's got the binding to the tabset. However, it leads to problems when you subsequently want the same information outside of the immediate scope. The simple solution seems to be to chuck that component reference into session scope, and then you can simply re-check it in the same way, leading of course to this mistake.

    Turn it on its Head

    So the correct solution is to turn the problem on its head. If you are going to be interested in the value or state of some component outside of the immediate request context, then it becomes persistent state (persistent in the sense that it extends beyond the lifespan of a single request). So you need to externalize that state outside of the component and have the component reference and manipulate that state as needed, rather than owning it. This is what I call the UIManager pattern.

    Defining the Pattern

    The UIManager pattern really is very simple. The premise is that every application should define a session-scoped managed bean, appropriately named UIManager, which is specifically responsible for holding this persistent UI-component-related state.
    The actual makeup of the UIManager class varies depending on the needs of the application and the amount of state that needs to be stored. Generally I'll start off with a Map in which individual flags can be created as required, although you could opt for a more formal set of typed member variables with getters and setters, or indeed a mix. This UIManager class is defined as a session-scoped managed bean (#{uiManager}) in faces-config.xml. The pattern is to then inject this instance of the class into any other managed bean (usually request scope) that needs it, using a managed property. So typically you'll have something like this:

        <managed-bean>
          <managed-bean-name>uiManager</managed-bean-name>
          <managed-bean-class>oracle.demo.view.state.UIManager</managed-bean-class>
          <managed-bean-scope>session</managed-bean-scope>
        </managed-bean>

    This is then injected into any backing bean that needs it:

        <managed-bean>
          <managed-bean-name>mainPageBB</managed-bean-name>
          <managed-bean-class>oracle.demo.view.MainBacking</managed-bean-class>
          <managed-bean-scope>request</managed-bean-scope>
          <managed-property>
            <property-name>uiManager</property-name>
            <property-class>oracle.demo.view.state.UIManager</property-class>
            <value>#{uiManager}</value>
          </managed-property>
        </managed-bean>

    In this case the backing bean in question needs a member variable to hold and reference the UIManager:

        private UIManager _uiManager;

    This should be exposed via a getter and setter pair with names that match the managed property name (e.g. setUiManager(UIManager _uiManager), getUiManager()). This will then give your code within the backing bean full access to the UI state. UI components in the page can, of course, directly reference the uiManager bean in their properties; for example, going back to the tabset example, you might have something like this:

        <af:panelTabbed>
          <af:showDetailItem text="First"
               disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['FIRST']}">
            ...
          </af:showDetailItem>
          <af:showDetailItem text="Second"
               disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['SECOND']}">
            ...
          </af:showDetailItem>
          ...
        </af:panelTabbed>

    Here, the settings member within the UIManager is a Map which contains a Map of Booleans for each tab under the MAIN_TABSET_STATE key. (Just an example; you could choose to store just an identifier for the selected tab, or whatever - how you choose to store the state within the UIManager is up to you.)

    Get into the Habit

    So we can see that the UIManager pattern is no great strain to implement for an application and can even be retrofitted to an existing application with ease. The point is, however, that you should always take this approach rather than committing the sin of persistent component references, which will bite you in the future, or shotgun-scattered UI flags on the session, which are hard to maintain. If you take the approach of always accessing all UI state via the uiManager, or perhaps a pageScope-focused variant of it, you'll find your applications much easier to understand and maintain. Do it today!

    Read the article

  • As a solo programmer, of what use can Gerrit be?

    - by s.d
    Disclaimer: I'm aware of the questions How do I review my own code? and What advantages do continuous integration tools offer on a solo project?. I do feel that this question aims at a different set of answers, though, as it is about a specific piece of software and not about general musings on the use of this or that practice or tech stack. I'm a solo programmer on an Eclipse RCP-based project. To keep myself and the quality of my software in check, I'm in the process of setting up a CI stack. In this, I follow the Eclipse Common Build Infrastructure (CBI) guidelines, with the exception that I will use Jenkins rather than Hudson. Sticking to these guidelines will hopefully help me get used to the Eclipse way of doing things, and I hope to be able to contribute some code to Eclipse in the future. However, CBI includes Gerrit, a piece of code review software. While I think it's really helpful for teams, and I will employ it as soon as the team grows to at least two developers, I'm wondering how I could use it while I'm still solo. Question: Is there a use case for Gerrit that takes solo developers into account? Notes: I can imagine reviewing code myself after a certain amount of time to gain a little distance from it. This does complicate the workflow, though, as code will only be built once it has passed through code review. This might prove to be a "trap", as I might be tempted to quickly push bad code through Gerrit just to get it built. I'm very interested in hearing how other solo devs have made use of Gerrit.

    Read the article

  • SQL Bits X – Temporal Snapshot Fact Table Session Slide & Demos

    - by Davide Mauri
    Already 10 days have passed since SQL Bits X in London. I really enjoyed it! These kinds of events are great not only for the content but also as a chance to meet friends whom – due to distance – it is not possible to meet every day. Friends from PASS, SQL CAT, Microsoft, the MVP program and so on, all in one place, drinking beers and whisky and having fun. A perfect mixture for a great learning and sharing experience! I also really enjoyed delivering my session on Temporal Snapshot Fact Tables. Given that the subject is very specific, I was not expecting a lot of attendees… but I was totally wrong! It seems that the problem of handling daily snapshots of data is more common than I expected. I've also already had feedback from several attendees who applied the explained technique to their existing solutions with success. This is just what a speaker at such a conference wishes to hear! :) If you want to take a look at the slides and the demos, you can find them on SkyDrive: https://skydrive.live.com/redir.aspx?cid=377ea1391487af21&resid=377EA1391487AF21!1151&parid=root The demo is available both for SQL Server 2008 and for SQL Server 2012. With the latter version, you can also simplify the ETL process using the new LEAD analytic function. (This is not done in the demo; I've left it as a little exercise for you. :) )

    Read the article

  • Implementing SOA & Security with Oracle Fusion Middleware in your solution – partner webcast September 20th 2012

    - by JuergenKress
    Security has always been one of the main pain points for the IT industry, and new security challenges have been introduced with the proliferation of the service-oriented approach to building modern software. Oracle Fusion Middleware provides a wide variety of features that ease the building of service-oriented solutions, but how can these services be secured? Should we implement the security features in each and every service, or is there a better way? During the webinar we are going to show how to implement non-intrusive declarative security for your SOA components, introducing the Oracle product portfolio in this area, such as Oracle Web Services Manager and Oracle Enterprise Gateway. Agenda: SOA & Web Services basics – quick refresher; Building your SOA with Oracle Fusion Middleware – product review; Common security risks in the Web Services world; SOA & Web Services security standards; Implementing Web Services Security with the Oracle products; Web Services Security with Oracle – the big picture; Declarative end-point security with Oracle Web Services Manager; Perimeter security with Oracle Enterprise Gateway; Utilizing the other Oracle IDM products for advanced scenarios; Q&A session. Delivery format: this FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now. Send your questions and migration/upgrade requests to [email protected]. Visit our ISV Migration Center blog regularly, or follow us @oracleimc to learn more about Oracle Technologies, upcoming partner webcasts and events. All content is made available through our YouTube - SlideShare - Oracle Mix channels. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: ISV migration center,SOA,IDM,SOA Community,Oracle SOA,Oracle BPM,BPM,Community,OPN,Jürgen Kress

    Read the article

  • SQLBeat Podcast – Episode 4 – Mark Rasmussen on Machine Guns,Jelly Fish and SQL Storage Engine

    - by SQLBeat
    In this 4th SQLBeat Podcast I talk with fellow Dane Mark Rasmussen about SQL, machine guns and jellyfish fights; apparently they are common in our homeland. Who am I kidding, I am not Danish, but I try to be in this podcast. Also, we exchange knowledge on SQL Server storage engine particulars, as well as some other "internals" like password hashes and contained databases. And then it just gets weird and awesome. There is lots of background noise from people who did not realize we were recording, and I call them out and make fun of them as they deserve; well, just one person, who is well known in these parts. I also learn the correct (almost) pronunciation of "fjord". Seriously, a word with an "F" followed by a "J". And there are always the hippies and hipsters to discuss. Should be fun.

    Read the article

  • High-Level Application Architecture Question

    - by Jesse Bunch
    So I'm really wanting to improve how I architect the software I code. I want to focus on maintainability and clean code. As you might guess, I've been reading a lot of resources on this topic, and all that's doing is making it harder for me to settle on an architecture, because I can never tell whether my design is the one a more experienced programmer would have chosen. So I have these requirements: I should connect to one vendor and download form submissions from their API; we'll call them CompanyA. I should then map those submissions to a schema fit for submission to another vendor, for integration with the email service provider; we'll call them CompanyB. I should then submit those responses to the ESP (CompanyB) and instruct the ESP to send that submitter an email. So basically, I'm copying data from one web service to another and then performing an action at the latter web service. I've identified a couple of high-level services: the service that downloads data from CompanyA, which I called the CompanyAIntegrator, and the service that submits the data to CompanyB, which I called the CompanyBIntegrator. So my questions are these: Is this a good design? I've tried to separate the concerns and am planning to use the facade pattern to make the integrators interchangeable if the vendors change in the future. Are my naming conventions accurate and meaningful to you (who know nothing specific of the project)? Now that I have these services, where should I do the work of taking output from the CompanyAIntegrator and getting it into the format for input to the CompanyBIntegrator? Is this OK to be done in main()? Do you have any general pointers on how you'd code something like this? I imagine this scenario is common to us engineers, especially those working in agencies. Thanks for any help you can give. Learning how to architect well is really mind-cluttering.
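    As a hedged sketch of one way the pieces described in this question could fit together – all type and method names here are hypothetical, invented to follow the question's own naming – the schema mapping can live behind its own small seam, and an orchestration service rather than main() itself can drive the workflow:

        import java.util.List;

        // Data carriers for the two vendors' schemas (fields omitted).
        class CompanyASubmission { /* fields as returned by CompanyA's API */ }
        class CompanyBSubmission { /* fields as expected by CompanyB's API */ }

        // Facades over the two vendors, so either can be swapped out later.
        interface CompanyAIntegrator {
            List<CompanyASubmission> downloadSubmissions();
        }

        interface CompanyBIntegrator {
            void submit(CompanyBSubmission submission);
            void sendEmail(CompanyBSubmission submission);
        }

        // The schema mapping gets its own seam rather than living in main().
        interface SubmissionMapper {
            CompanyBSubmission map(CompanyASubmission submission);
        }

        // Orchestrates the copy-then-act workflow across the two services.
        class SubmissionTransferService {
            private final CompanyAIntegrator source;
            private final SubmissionMapper mapper;
            private final CompanyBIntegrator target;

            SubmissionTransferService(CompanyAIntegrator source,
                                      SubmissionMapper mapper,
                                      CompanyBIntegrator target) {
                this.source = source;
                this.mapper = mapper;
                this.target = target;
            }

            void run() {
                for (CompanyASubmission a : source.downloadSubmissions()) {
                    CompanyBSubmission b = mapper.map(a); // translate schemas
                    target.submit(b);                     // copy the data across
                    target.sendEmail(b);                  // then trigger the email
                }
            }
        }

    With this shape, main() reduces to constructing the three collaborators and calling run(), and a vendor change only touches one integrator implementation plus the mapper.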

    Read the article

< Previous Page | 190 191 192 193 194 195 196 197 198 199 200 201  | Next Page >