Search Results

Search found 25579 results on 1024 pages for 'complex event processing'.


  • Nerdstock 2012: A photo review of Microsoft TechEd North America 2012

    - by The Un-T Guy
    Not only could I not fathom that I would ever be attending a tech event of the magnitude of TechEd, neither could any of my co-workers. As the least technical person in the history of Information Technology ever, I felt as though I were walking into the belly of the beast, fearing I’d not be allowed out until I could write SSIS packages, program in Visual Basic, or at least arm wrestle a DBA. Most of my fears were unrealized. But I made it. I was here. I even got to wear the Mark of the Geek neck package with schedule, eyeglass cleaners, name badge (company name obfuscated so they don’t fire me), and a pen. The name badge was seemingly the key element, as every vendor in the place wanted to scan it to capture name, email address, and numbers to show their bosses back home. It also let me eat the food and drink the coffee, so that’s a fair trade.
    A recurring theme throughout the presentations and vendor demos was “the Cloud” and BYOD (bring your own device). The below was a common sight throughout the week, as attendees from all over the world brought their own devices and were able to (seemingly) seamlessly connect to the Worldwide Innerwebs. Apparently proof that Microsoft and the event organizers were practicing what they were preaching.
    “Cavernous” is one way to describe the downstairs facility itself. “Freaking cavernous” might be more accurate. Work sessions were held in classrooms on the second and third floors, but the real action was happening downstairs. Microsoft bookstore, blogger hub (shoutout to Geekswithblogs.net), The Wall (sans Pink Floyd, sadly), couches, recharging stations…
    …a game zone with pool and air hockey tables, pinball machines, foosball…
    …vintage video games…
    …and even a giant chess board. Looked like this guy was opening with the Kaspersky parry.
    The blend of technology and fantasy even went so far as to bring childhood favorites to life. Assuming, of course, your childhood was pre-video games (like mine) and you were stuck with electric football and Rock ‘em Sock ‘em robots.
    And, lest the “combatants” become unruly or – God forbid – afternoon snacks were late, Orange County’s finest was on the scene to keep the peace. On a high-tech mode of transport, of course.
    She wasn’t the only one to think this was a swell way to transition from one concourse to the next. Given the level of support provided by the entire Orange County Convention Center staff, I knew they had to have some secret.
    Here’s one entrance to the vendor zone/”Technical Learning Center.” Couldn’t help but think of them as the remora attached to the Whale Shark that is Microsoft…
    …or perhaps planets orbiting the sun. Microsoft is just that huge, and it seemed like every vendor in the industry looks forward to partnering with the tech behemoth.
    Aside from the free stuff from the vendors, probably the most popular place in the house was the dining area. Amazing spreads every day, multiple times a day. While no attendance numbers were available at press time, literally thousands of attendees were fed, and fed well, every day. And lest you think my post from earlier in the week exaggerated about the backpacks…
    …or that I’m exaggerating about the lunch crowds. This represents only about 25-30% of the lunch crowd – it was all my camera could capture at once. No one went away hungry.
    The only thing missing was a vat of Red Bull, but apparently organizers went old school, with probably 100 urns of the original energy drink – coffee – all around the venue. Of course, following lunch and afternoon sessions, some preferred the even older school method of re-energizing. There were rumors that Microsoft was serving graham crackers and milk in this area. But they were only rumors.
    Cannot overstate the wonderful service provided by the Orange County Convention Center staff. Coffee, soft drinks, juice, and water were always available. Buffet meals were delicious with a wide range of healthy options, in addition to hundreds (at least) of special meal requests supported every day. Ever tried to keep up with an estimated 9,000 hungry and thirsty IT-ers? These folks did. Kudos to all of the staff and many thanks!
    And while I occasionally poke fun at the Whale Shark, if nothing else this experience convinced me of one thing: Microsoft knows how to put on a professional event. Hundreds of informative, professionally delivered sessions, covering a wide range of topics set at varying levels of expertise (some that even I was able to follow), social activities, vendor partnerships…they brought everything you could ask for to inform, educate, and inspire an entire IT industry.
    So as I depart the belly of the beast, I can both take pride in the fact that I survived the week and marvel at the brilliance surrounding me. The IT industry – or at least the segment associated with Microsoft – is in good, professional hands. And what won’t fit in their hands can be toted in the Microsoft-provided backpacks. Win-win.
    Until New Orleans…

    Read the article

  • Information Indepth Newsletter - Linux Edition

    - by Paulo Folgado
    INFORMATION INDEPTH NEWSLETTER – Linux Edition, February 2011

    NEWS
    Now Available: Oracle Linux 6. Get the latest release of Oracle Linux 6, which includes the Unbreakable Enterprise Kernel. Download Oracle Linux 6. Read More
    Customers Succeed by Using Oracle Exadata with Oracle Linux. Watch IT executives from Bank of America, Linkshare, and Johns Hopkins as they talk about the business challenges they faced and why they chose to use Oracle Linux along with Oracle Exadata as the solution. Watch Now
    Video Interview: Oracle Senior Vice President Wim Coekaerts. Watch Wim Coekaerts, senior vice president, Linux and Virtualization Engineering, as he talks about use cases for Oracle VM Templates as well as the Unbreakable Enterprise Kernel for Linux. Watch Now
    Hot Off the Press: Migrate Your IBM AIX Environment to Oracle Linux. This new white paper provides recommendations for planning and implementing the migration of applications from an IBM Power System running AIX to Oracle's Sun Fire X4800 Server with Intel Xeon 7560 Processor running Oracle Linux 5.5. Read More

    BLOGOSPHERE
    Just Launched: The Oracle Linux Blog. Follow our new Oracle Linux blog to hear the latest updates, product news, upcoming events, and all the latest happenings, directly from the Linux team at Oracle.

    TECH DIVE
    NEW: Linux/Oracle Solaris CommandComparo Site from Oracle Technology Network. This site gives equivalent command syntax in Oracle Solaris 10 and Oracle Enterprise Linux 5 for common administrative tasks, focusing particularly on tasks that have tricky syntax or that you frequently need to double check. It acts as a quick reference for administrators who operate in these two OS environments.
    Free Download: Oracle Linux Release 5.6. Did you know that by using Oracle Linux 5.5 or 5.6 along with the Unbreakable Enterprise Kernel, you can get all the benefits of Linux mainline kernel 2.6.32 and more, right now, without the need to reinstall or migrate to a new operating system such as RHEL6? Read Release Notes. Download Oracle Linux 5.6
    LSB 4.0 Certification Completed for Oracle Linux 5.5. Oracle Linux 5.5 with Unbreakable Enterprise Kernel successfully completed the LSB 4.0 certification.

    WEBCASTS
    Boost Your Linux Performance with Oracle's Enhancements in InfiniBand and RDS. Register to hear Director of Kernel Engineering Chris Mason cover scalability and performance improvements in the Linux environment.
    Get the Facts: Oracle's Unbreakable Enterprise Kernel. SVP Wim Coekaerts and Senior Director Monica Kumar cover the facts about and benefits of using the Unbreakable Enterprise Kernel.
    View Other Webcasts on Demand

    EVENTS
    Collaborate 2011, April 10-14, Orlando, Florida
    Cloud Summit Events, Worldwide. Various dates (check the city for date/time of event)
    Datacenter Efficiency Events, Worldwide. These events include Linux and Oracle VM sessions. Various dates (check the city for date/time of event)
    Virtualization Events in North America
    Find an Oracle Event

    EDUCATION
    Get Oracle Linux Certified from Oracle University. Oracle University offers courses in both Oracle Linux and the administration of Oracle Database on Linux.

    CUSTOMER SPOTLIGHT
    Pella Corporation Improves IT Performance and Efficiency with Oracle Linux and Oracle VM. To improve IT performance and efficiency and lower operational costs, Pella Corporation has standardized on Oracle VM and Oracle Linux.
    Read More
    Disney Store Deploys POS in 330 Stores and 7 Countries on Oracle Linux. Disney Store is running 1,500 registers worldwide on a broad Oracle technology software stack including Oracle Database 11g, Oracle Fusion Middleware, and Oracle Linux. Read More

    PARTNER SPOTLIGHT
    Emulex and Oracle Announce Data Integrity Features. The Unbreakable Enterprise Kernel provides data integrity checking between Oracle Database applications and Emulex 8Gb/s LightPulse Fibre Channel Host Bus Adapters. Read More
    Dell Inc. Dell Inc. tested and validated configurations support Oracle Linux.

    STAY IN TOUCH
    Follow @ORCL_Linux on Twitter for the latest penguin tweets
    Bookmark Oracle.com/Linux
    Read the Oracle Linux blog

    Oracle Information InDepth newsletters bring targeted news, articles, customer stories, and special offers to business people who want to find out how to streamline enterprise information management, measure results, improve business processes, and communicate a single truth to their constituents. Please send questions or comments to [email protected]. For answers to questions about subscribing, unsubscribing, and managing your Oracle e-mail communications preferences, please see the Oracle E-Mail Communications page.
    Copyright © 2011, Oracle Corporation and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor is it subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

    Read the article

  • Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products

    - by raul.goycoolea
    The Telecom industry continues to evolve through disruptive products, uncertain markets, shorter product lifecycles and convergence of technologies. Today’s market has moved from network-centric to consumer-centric and focuses primarily on the customer experience. This has created several product management challenges for service providers: increased complexity and volume of offerings, creation of product variants, pressure to accelerate time-to-market, the need to provide multiple product views for varied stakeholders, leveraging OSS intelligence at the BSS layer, product co-creation, and increasing audit and security concerns. This document discusses how enterprise product management, enabled by PLM-based product catalogue solutions, helps to launch next generation products rapidly in the context of the Telecommunication industry.

    1.0 Introduction

    Figure 1: Business Scenario

    Modern business demands the launch of complex products in a very short timeframe, and the ability to change price plans quickly without IT intervention. One of the key transformation initiatives companies are focusing on is product management transformation and operational efficiency improvement. As part of these initiatives, companies are investing in best-in-class COTS-based product management solutions developed on industry-wide standards.

    The new COTS packages are planned to integrate with existing or new B/OSS systems to provide a strategic end-to-end agile solution for reduced time-to-market and order journey time. In addition, system rationalization is being undertaken to phase out legacy systems and migrate to strategic systems.

    2.0 An Overview of Product Management in Telecom

    Product data in telecom is multi-dimensional and difficult to manage. It has increased significantly due to the complexity of the product, product offerings on the converged network, increased volume of offerings, bundled offering structures and ever increasing regulatory requirements.

    In addition, the shrinking product lifecycle in telecom makes it difficult to manage the dynamic product data. Mergers and acquisitions coupled with organic growth pose major challenges in product portfolio management, and are a roadblock in the journey towards becoming an agile organization.
    Figure 2: Complexity in Product Management

    ‘Network technology’ is the new dimension in telecom product management: the same products are realized through different networks, i.e., a siloed network versus a converged network, and consequently the product solution is different.

    Figure 3: Current Scenario - Pain Points in Product Management

    The major business implications arising out of the current scenario are slow time-to-market and an inefficient process that affects innovation.

    3.0 Transformation of Next Generation Product Management

    Companies must focus their product management transformation journey on:

    ·       Management of a single truth of product information across the organization/geographies, which is currently managed in heterogeneous systems
    ·       Management of the Intellectual Property (IP) on the product concept, and partnership in the design of discrete components to integrate into the system
    ·       Leveraging structured and unstructured product data within the extended enterprise to extract consumer insights and drive innovation
    ·       Management of effective operational separation to comply with regulatory bodies
    ·       Reuse of existing designs plus the addition of relevant features, such as value-added services, to enable effective product bundling

    Figure 4: Next generation needs

    PLM-based Enterprise Product Catalogue solutions efficiently address the above requirements and act as an enabler of product management transformation and rapid product launch.

    4.0 PLM-based Enterprise Product Management

    Figure 5: PLM-based Enterprise Product Mastering

    Enterprise Product Management (EPM) enables the business to manage complex product data in complex environments. Product mastering helps create a ‘single view’ of the product by creating a business-driven, IT-supported environment where a global ‘single truth record’ is created, managed and reused.

    4.1 The Business Case for Telco PLM-based Solutions for Enterprise Product Management

    ·       Telco PLM-based product mastering solutions provide a centralized authoring environment for product definition and control of all product data and rules
    ·       PLM packages are designed to support multiple perspectives of product data (ordering perspective, billing perspective, provisioning perspective)
    ·       They maintain relationships/links between different elements of the entire product definition
    ·       Telco PLM packages are specialized in next generation lifecycle management requirements of products, such as revision and state management, test and release management, role management and impact analysis
    ·       They take into consideration all aspects of OSS product requirements, compared to CRM product catalogue solutions where the product data managed is mostly order oriented and transactional
    ·       The new breed of Telco PLM packages is designed with ‘open’ standards such as SID and eTOM. They are interoperable and support integration frameworks such as subscription and notification.
    ·       Telco PLM packages have developed good collaboration frameworks to integrate suppliers and partners into the product development value chain

    4.2 Various Architectures/Approaches for Product Mastering Using Telco PLM Systems

    4.2.a Single Central Product Management (Mastering) Approach

    Figure 6: Single Central Product Management (Master) Approach

    This approach is implemented across verticals such as aerospace and automotive.
    It focuses on a physically centralized product master on which other sources depend. The product definition data (product bundles, service bundles, price plans, offers and discounts, product configuration rules and market campaigns) is created and maintained physically in a centralized environment, and the product definition/authoring environment is centralized as well. The existing legacy product definition data available in the CRM product catalogue, billing catalogue and the legacy product catalogue is migrated to the centralized PLM-based Enterprise Product Management solution.

    Architectural changes must be made in the existing landscape of applications that create and revise data, because the applications have to refer to the central repository for approvals and validation of product configurations. This is achieved by modifying how the applications write data, or by adapting the applications to use the rules managed and published centrally.

    Complete product configuration validation is done in the enterprise/central product catalogue, and the final configuration is sent to the B/OSS systems through the SOA-compliant product distribution architecture. This approach/architecture enables greater control in terms of product data management and product data governance.

    4.2.b Federated Product Management (Mastering) Architecture

    Figure 7: Federated Product Management (Mastering) Architecture

    In the federated product mastering approach, the basic unique product definition data (product ID, description, product hierarchy, basic price plans and simple product design rules) is created and maintained centrally, while the advanced product definition (product bundling, promotions, offers and discount plans) is created in the respective downstream OSS systems.

    For example, basic product definitions such as attributes, product hierarchy and basic price plans are created and maintained in the enterprise/central product reference catalogue and distributed to downstream OSS systems. The respective downstream OSS systems build product bundles, promotions and advanced price plans over the basic product definition and master the advanced product definition. The central reference database accesses the other source product master data and assembles a point-in-time consolidated view of the product. This approach is typically adopted in some merger and acquisition scenarios where there is a low probability of a central physical authority managing the data. In addition, the migration effort in this case is minimal and there are no big architectural changes to the organization’s application landscape. However, this approach will not result in better product data management and data governance.

    5.0 Customer Scenario – Before EPC Deployment

    A leading global telecommunications service provider wanted to launch quad play and triple play service offerings in the shortest possible lead time. The service provider was offering broadband and VoIP services to customers. The company wanted to reuse a majority of the broadband services and price plans and bundle them with new wireless and IPTV services for quad play and triple play.
    The challenges in launching the new service offerings were:

    Figure 8: Triple Play Plan

    ·       Broadband product data was stored in multiple product catalogues (CRM catalogue, billing catalogue, spreadsheets)
    ·       Product managers spent a lot of time on tasks involving duplication or re-keying of data; the manual effort caused errors, cost and time overruns
    ·       There was no effective product and price data governance mechanism; price change issues arising from the lack of data consistency across systems resulted in leakage of customer value and revenue
    ·       Product data had reusability issues and was not in a structured format, which resulted in uncontrolled product portfolio creation and product management issues
    ·       The lack of an enterprise product model led to product distribution challenges and thus delays in product launch
    ·       Designers were constrained by existing legacy product management solutions when modeling product/service requirements and product configuration rules such as upgrading, downgrading and cross-selling

    5.1 Customer Scenario – After EPC Deployment

    Figure 9: SOA-based end-to-end EPC Solution

    After evaluating various product catalogues, the company deployed a PLM-based Enterprise Product Catalogue solution to launch the quad play service. The broadband product offering, service and price data were migrated to the new system, and the product and price plan hierarchy for new offerings was created using the entities defined in the Enterprise Product Model. Supplier product catalogue data, such as routers and set-top boxes, was loaded onto the new solution through a SOA-based web service. Price plans and configuration rules were built in the new system. The validated final product configurations were extracted from the product catalogue in a SID format and distributed to the downstream B/OSS systems through exposed SOA-based web services. The transformations required for the B/OSS systems were handled by the transformation layer that is part of the solution.

    6.0 How PLM Enabled Product Management Transformation

    Figure 10: Product Management Transformation

    The PLM-based Product Catalogue solution helped the customer reduce the product launch cycle time by 30% and enabled the transformation of product management for next generation services.

    7.0 Conclusion

    On the one hand, the telecom industry is undergoing changes due to disruptions, uncertain product markets and the increased complexity of products. On the other hand, ARPU is decreasing year-on-year. Communications Service Providers are embarking on convergence and bundled service offerings, and need the flexibility to cross-sell and up-sell, introduce new value-added services, and leverage Web 2.0 concepts and network capabilities. Consequently, large-scale IT transformation initiatives that support network and business transformation and improve ARPU are a business imperative. Product management has become a focus area, and companies are investing in best-in-class COTS solutions to reduce time-to-market, ensure rapid service delivery and improve operational efficiency. An efficient PLM-based enterprise product mastering solution plays a key role in achieving zero touch automation and rapid product launch.

    References:

    1.     Preston G. Smith and Donald G. Reinertsen, "Developing Products in Half the Time", Van Nostrand Reinhold.

    2.     John G. Innes, "Achieving Successful Product Change", Pitman Publishing.
    3.     D. T. Pham and R. M. Setchi, "Authoring environment for documentation development", University of Wales Cardiff, U.K., Proceedings of the Institution of Mechanical Engineers, Vol. 215, Part B, 2001.

    4.     Oracle Product Hub for Communications: http://www.oracle.com/us/products/applications/master-data-management/product-hub-082059.html
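    To make the distribution flow described in section 5.1 concrete, the following minimal sketch pushes a validated product definition to a downstream B/OSS endpoint. The endpoint URL, the ProductOffering XML shape and all field names are assumptions for illustration only; a real deployment would use the catalogue's full SID-compliant schema and its published web service contract.

        // Minimal sketch: distributing a validated product definition to a
        // downstream B/OSS system. Endpoint and payload shape are hypothetical.
        using System;
        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        class ProductDistributionSketch
        {
            static async Task Main()
            {
                // A simplified, SID-inspired product offering; a real payload
                // would follow the catalogue's full SID schema.
                string productXml =
                    "<ProductOffering id='TRIPLE-PLAY-01'>" +
                    "  <Name>Broadband + VoIP + IPTV</Name>" +
                    "  <PricePlan currency='USD' monthly='79.99'/>" +
                    "</ProductOffering>";

                using (var client = new HttpClient())
                {
                    var content = new StringContent(productXml, Encoding.UTF8, "application/xml");
                    // Hypothetical distribution endpoint exposed by the B/OSS layer.
                    var response = await client.PostAsync("http://boss.example.com/products", content);
                    Console.WriteLine("Distribution status: " + response.StatusCode);
                }
            }
        }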

    Read the article

  • Oracle BI Server Modeling, Part 1 - Designing a Query Factory

    - by bob.ertl(at)oracle.com
    Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments. In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE. In this segment, I will lay the groundwork for the modeling concepts. First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing.

    Oracle BI EE Query Cycle

    The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources. There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source or, most of the time, are spread across various types of sources.

    Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves. Most of them create queries without knowing it, by picking a dashboard page and some filters. Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations.

    The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business-focused measures and dimensional attributes that business people can use in their analyses and dashboards. After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts.

    The CEIM is a model that controls the processing of the BI Server. It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis. It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules. The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.

    Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL. The API simply uses ODBC or JDBC to pass the query to the BI Server. Presentation services writes the LSQL query in terms of the simplified objects presented to the users. The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources. For example, the Logical SQL below was rewritten into the physical SQL that follows it, for an Oracle 11g database.
    Logical SQL:

        SELECT "D0 Time"."T02 Per Name Month" saw_0,
               "D4 Product"."P01  Product" saw_1,
               "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2
        FROM "Sample Sales"
        ORDER BY saw_0, saw_1

    Physical SQL:

        WITH SAWITH0 AS (
            select T986.Per_Name_Month as c1, T879.Prod_Dsc as c2,
                   sum(T835.Units) as c3, T879.Prod_Key as c4
            from
                 Product T879 /* A05 Product */ ,
                 Time_Mth T986 /* A08 Time Mth */ ,
                 FactsRev T835 /* A11 Revenue (Billed Time Join) */
            where ( T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid)
            group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
        )
        select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
        from SAWITH0
        order by c1, c2

    Probably everybody reading this blog can write SQL or MDX. However, the trick in designing the CEIM is that you are modeling a query-generation factory. Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests. This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

    The Structure of the Common Enterprise Information Model (CEIM)

    The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result. The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below.

    Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request. When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries. That's the left-to-right flow in the diagram below. When the results come back from the data source queries, the right-to-left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

    Business Model

    Think of the business model as the heart of the CEIM you are designing. This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole. It also provides the baseline business-friendly names and user-readable dictionary. For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models.

    Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well. This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.
    It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next. The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model.

    Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time. This is important for federation, where a given logical object can map to several different physical objects in different databases. It is also important for portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products.

    Your requirements process with your user community will mostly affect the business model. This is where you will define most of the things they specifically ask for, such as metric definitions. For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer.

    Physical Model

    The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries. Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time.

    Each physical data source will have its own physical model, or "database" object in the CEIM. The shape of each physical model matches the shape of its physical source. In other words, if the source is normalized relational, the physical model will mimic that normalized shape. If it is a hypercube, the physical model will have a hypercube shape. If it is a flat file, it will have a denormalized tabular shape.

    To aid in query optimization, the physical layer also tracks the specifics of the database brand and release. This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax and using its specific functions. It also allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer. For most data sources, native APIs are used to further optimize performance and functionality.

    The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics. This encapsulation is another enabler of packaged BI applications and federation. It is also key to hiding the complex shapes and relationships in the physical sources from the end users. Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database. The only difference from the user's point of view is that the second query added a more detailed dimension-level column - everything else was the same.

    Mappings

    Within the Business Model and Mapping layer, the mappings provide the binding from each logical column and join in the dimensional business model to each of the objects that can provide its data in the physical layer.
    When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used. These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources. These mappings are usually the most sophisticated part of the CEIM.

    Presentation

    You might think of the presentation layer as a set of very simple relational-like views into the business model. Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns. For business users, presentation services interprets these as subject areas, folders and columns, respectively. (Note that in 10g, subject areas were called presentation catalogs in the CEIM. In this blog, I will stick to 11g terminology.) Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio).

    The purpose of the presentation layer is to specialize the business model for different categories of users. Based on their roles, users will be restricted to specific subject areas, tables and columns for security. The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users. Customized names and descriptions can be used to override the business model names for a specific audience. Variables in the object names can be used for localization.

    For these reasons, you are better off thinking of the tables in the presentation layer as folders rather than as strict relational tables. The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer. In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model.

    Other Model Objects

    There are some objects that apply to multiple layers. These include security-related objects, such as application roles, users, data filters, and query limits (governors). There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user-session basis. Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.

    The Query Factory

    At this point, you should have a grasp on the query factory concept. When developing the CEIM, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand. The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.
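    Since the BI Server accepts Logical SQL over plain ODBC or JDBC, as described above, you can experiment with LSQL from any ODBC client. Below is a minimal sketch in C#; the DSN name and credentials are assumptions, and the query simply reuses the sample LSQL shown earlier.

        // Minimal sketch: issuing Logical SQL to the Oracle BI Server over ODBC.
        // The DSN name and credentials are hypothetical.
        using System;
        using System.Data.Odbc;

        class LogicalSqlClientSketch
        {
            static void Main()
            {
                // A system DSN pointing at the BI Server ODBC driver (assumed name).
                var connStr = "DSN=OracleBIServer;UID=weblogic;PWD=secret";
                var lsql = "SELECT \"D0 Time\".\"T02 Per Name Month\" saw_0, " +
                           "\"D4 Product\".\"P01  Product\" saw_1 " +
                           "FROM \"Sample Sales\" ORDER BY saw_0, saw_1";

                using (var conn = new OdbcConnection(connStr))
                using (var cmd = new OdbcCommand(lsql, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0} | {1}", reader[0], reader[1]);
                    }
                }
            }
        }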

    Read the article

  • Posting from ASP.NET WebForms page to another URL

    - by hajan
    A few days ago I had a case where I needed to make a FORM POST from my ASP.NET WebForms page to an external site URL. More specifically, I was working on implementing a Simple Payment System (like Amazon, PayPal, MoneyBookers). The operator requires a FORM POST request to a given URL on their website, sending parameters computed at my application level (access keys, secret keys, signature, return URL, etc.) together with the post. Since we are not allowed to nest another form inside the <form runat=”server”> … </form>, which is required because other controls in my ASPX code work server-side, I decided to inject the HTML and create a FORM with method=”POST”. After making a proof of concept and testing some scenarios, I concluded that I can do this very quickly in two ways:

    1. Using jQuery to create a form on the fly with the needed parameters and call submit()
    2. Using HttpContext.Current.Response.Write to write the form on the server side (code-behind) and embed JavaScript code that will do the post

    Both ways seemed fine.

    1. Using jQuery to create the FORM HTML and submit it

    Let’s say we have a ‘PAY NOW’ button in our ASPX code:

        <asp:Button ID="btnPayNow" runat="server" Text="Pay Now" />

    Now, if we want this button to submit a FORM using the POST method to another website, the jQuery way would be as follows (note the name attribute on the hidden input - only fields with a name are included in the post):

        <script src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-1.5.1.js" type="text/javascript"></script>
        <script type="text/javascript">
            $(function () {
                $("#btnPayNow").click(function (event) {
                    event.preventDefault();
                    // construct the htmlForm string
                    var htmlForm = "<form id='myform' method='POST' action='http://www.microsoft.com'>" +
                        "<input type='hidden' id='name' name='name' value='hajan' />" +
                        "</form>";
                    // append the form to the body and submit it
                    $(htmlForm).appendTo("body").submit();
                });
            });
        </script>

    Yes, as you see, the code fires on the btnPayNow click. It suppresses the default button behavior, then creates the htmlForm string. After that, using jQuery, we append the form to the body and submit it. Inside the form you can see I have set the http://www.microsoft.com URL, so after clicking the button you should be automatically redirected to the Microsoft website (just for testing, of course; for the payment I use the operator’s URL).

    2. Using HttpContext.Current.Response.Write to write the form on the server side (code-behind) and embed JavaScript code that will do the post

    The C# code-behind should be something like this:

        public void btnPayNow_Click(object sender, EventArgs e)
        {
            string Url = "http://www.microsoft.com";
            string formId = "myForm1";
            StringBuilder htmlForm = new StringBuilder();
            htmlForm.AppendLine("<html>");
            htmlForm.AppendLine(String.Format("<body onload='document.forms[\"{0}\"].submit()'>", formId));
            htmlForm.AppendLine(String.Format("<form id='{0}' method='POST' action='{1}'>", formId, Url));
            htmlForm.AppendLine("<input type='hidden' id='name' name='name' value='hajan' />");
            htmlForm.AppendLine("</form>");
            htmlForm.AppendLine("</body>");
            htmlForm.AppendLine("</html>");
            HttpContext.Current.Response.Clear();
            HttpContext.Current.Response.Write(htmlForm.ToString());
            HttpContext.Current.Response.End();
        }

    So, with this code we create the htmlForm string using the StringBuilder class and then just write the HTML to the page using HttpContext.Current.Response.Write.
    The interesting part here is that we submit the form using JavaScript code:

        document.forms["myForm1"].submit()

    This code runs on the body load event, which means once the body is loaded the form is automatically submitted.

    Note: In order to test both solutions, create two applications on your web server and post the form from the first to the second website, then read the values in the second website using Request.Form["field-name"] (the key is the name attribute of the posted input).

    I hope this was a useful post for you. Regards, Hajan
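    Following up on the note above, a minimal sketch of what the receiving page’s code-behind could look like in the second test website. The page class name is hypothetical; the field name 'name' matches the hidden input in the examples above.

        // Hypothetical code-behind of the receiving page in the second test site.
        using System;

        public partial class ReceivePost : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Request.Form is keyed by the input's name attribute (not its id),
                // which is why the posted hidden fields above carry a name.
                string postedName = Request.Form["name"];
                if (!String.IsNullOrEmpty(postedName))
                    Response.Write("Received: " + Server.HtmlEncode(postedName));
            }
        }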

    Read the article

  • AngularJs ng-cloak Problems on large Pages

    - by Rick Strahl
    I’ve been working on a rather complex and large Angular page. Unlike a typical AngularJs SPA-style ‘application’, this particular page is just that: a single page with a large amount of data on it that has to be visible all at once. The problem is that when this large page loads it flickers and displays template markup briefly before kicking into its actual content rendering. This is what the Angular ng-cloak directive is supposed to address, but in this case I had no luck getting it to work properly.

    This application is a shop floor app where workers need to see all related information in one big screen view, so some of the benefits of Angular’s routing and view swapping features couldn’t be applied. Instead, we decided to have one very big view but lots of ng-controllers and directives to break out the logic for code separation. For code separation this works great – there are a number of small controllers that deal with their own individual and isolated application concerns. For HTML separation we used partial ASP.NET MVC Razor Views, which made breaking out the HTML into manageable pieces super easy and made migration of this page from a previous server side Razor page much easier. We were also able to leverage most of our server side localization without a lot of changes as a bonus. But as a result of this choice the initial HTML document that loads is rather large – even without any data loaded into it – resulting in a fairly large DOM tree that Angular must manage.

    Large Page and Angular Startup

    The problem on this particular page is that there’s quite a bit of markup – 35k’s worth of markup without any data loaded, in fact. It’s a large HTML page with a complex DOM tree, and there are quite a lot of Angular {{ }} markup expressions in the document. Angular provides the ng-cloak directive to hide the element it cloaks, so that you don’t see the flash of these markup expressions when the page initially loads, before Angular has a chance to render the data into them.

        <div id="mainContainer" class="mainContainer boxshadow" ng-app="app" ng-cloak>

    Note the ng-cloak attribute on this element, which here is an outer wrapper element of most of this large page’s content. ng-cloak is supposed to prevent displaying the content below it until Angular has taken control and is ready to render the data into the templates. Alas, with this large page the end result unfortunately is a brief flicker of un-rendered markup which looks like this:

    It’s brief, but plenty ugly – right? And depending on the speed of the machine, this flash gets more noticeable on slow machines that take longer to process the initial HTML DOM.

    ng-cloak Styles

    ng-cloak works by temporarily hiding the marked-up element, essentially by applying a style that does this:

        [ng\:cloak], [ng-cloak], [data-ng-cloak], [x-ng-cloak],
        .ng-cloak, .x-ng-cloak {
            display: none !important;
        }

    This style is inlined as part of AngularJs itself.
    If you look at the angular.js source file you’ll find this at the very end of the file:

        !angular.$$csp() && angular.element(document)
            .find('head')
            .prepend('<style type="text/css">@charset "UTF-8";[ng\\:cloak],[ng-cloak],' +
                '[data-ng-cloak],[x-ng-cloak],.ng-cloak,.x-ng-cloak,' +
                '.ng-hide{display:none !important;}ng\\:form{display:block;}' +
                '.ng-animate-block-transitions{transition:0s all!important;-webkit-transition:0s all!important;}' +
                '</style>');

    This is meant to initially hide any elements that carry the ng-cloak attribute or one of its other directive permutations. Unfortunately, on this particular web page ng-cloak had no effect – I still see the flicker.

    Why doesn’t ng-cloak work?

    The problem is of course – timing. Angular actually needs to get control of the page before it starts doing anything, even processing the ng-cloak attribute (or style etc.). Because this page is rather large (about 35k of non-data HTML) it takes the browser a while to plow through the DOM. With the Angular <script> tag defined at the bottom of the page, after the HTML DOM content, there’s a slight delay which causes the flicker. For smaller pages the initial DOM load/parse cycle is so fast that the markup never shows, but with larger content pages it may show and become an annoying problem.

    Workarounds

    There are a number of simple ways around this issue, and some of them are hinted at in the Angular documentation.

    Load Angular Sooner

    One obvious thing that would help with this is to load Angular at the top of the page BEFORE the DOM loads, which would give it much earlier control. The old ng-cloak documentation actually recommended putting the Angular.js script into the header of the page (apparently this was recently removed), but generally it’s not a good practice to load scripts in the header for page load performance. This is especially true if you load other libraries like jQuery, which should be loaded prior to loading Angular so it can use jQuery rather than its own jqLite subset. This is not something I normally would like to do and also something that I’d likely forget in the future and end up right back here :-).

    Use ng-include for Child Content

    Angular supports nesting of child templates via the ng-include directive, which essentially delay-loads HTML content. This helps by removing a lot of the template content out of the main page, getting control to Angular a lot sooner so it can hide the template content.

    In the application in question, I realize in hindsight that it might have been smarter to break this page out with client-side ng-include directives instead of the MVC Razor partial views we used to break up the page sections. Razor partial views give that nice separation as well, but in the end Razor puts humpty dumpty (i.e. the HTML) back together into a whole single, and rather large, HTML document. Razor provides the logical separation, but still results in a large physical result document.

    But Razor also ended up being helpful: a few security related blocks are handled via server side template logic that simply excludes certain parts of the UI the user is not allowed to see – something that you can’t really do with client side exclusion like ng-hide/ng-show. Client side content is always there, whereas on the server side you can simply not send it to the client.

    Another reason I’m not a huge fan of ng-include is that it adds another HTTP hit to a request, as templates are loaded from the server dynamically as needed.
    Given that this page was already heavy with resources, adding another 10 separate ng-include directives wouldn’t be beneficial :-)

    ng-include is a valid option if you start from scratch and partition your logic. Of course, if you don’t have complex pages, having completely separate views that are swapped in as they are accessed is even better, but we didn’t have this option due to the information having to be on screen all at once.

    Avoid using {{ }} Expressions

    The biggest issue that ng-cloak attempts to address isn’t so much displaying the original content – it’s displaying empty {{ }} markup expression tags that get embedded into content. It gives you the dreaded “now you see it, now you don’t” effect where you sometimes see three separate rendering states: markup junk, empty views, then views filled with data. If you can remove {{ }} expressions from the page, you remove most of the perceived double-draw effect, as you would effectively start with a blank form and go straight to a filled form.

    To do this you can forgo {{ }} expressions and replace them with ng-bind directives on DOM elements. For example you can turn:

        <div class="list-item-name listViewOrderNo">
            <a href='#'>{{lineItem.MpsOrderNo}}</a>
        </div>

    into:

        <div class="list-item-name listViewOrderNo">
            <a href="#" ng-bind="lineItem.MpsOrderNo"></a>
        </div>

    to get identical results, but because the {{ }} expression has been removed there’s no double-draw effect for this element. Again, not a great solution. The {{ }} syntax sure reads cleaner and is more fluent to type, IMHO. In some cases you may also not have an outer element to attach ng-bind to, which then requires you to artificially inject DOM elements into the page. This is especially painful if you have several consecutive values like {{Firstname}} {{Lastname}}, for example. It’s an option, though, especially if you think of this issue up front and you don’t have a ton of expressions to deal with.

    Add the ng-cloak Styles manually

    You can also explicitly define the CSS styles that Angular injects, manually, in your application’s style sheet. By doing so the styles become immediately available and are applied right when the page loads – no flicker. I use the minimal:

        [ng-cloak] { display: none !important; }

    which works for:

        <div id="mainContainer" class="mainContainer dialog boxshadow" ng-app="app" ng-cloak>

    If you use one of the other combinations, add the other CSS selectors as well, or use the full style shown earlier. Angular will still inject its version of the ng-cloak styling and override those settings later, but your own copy does the trick of hiding the content before that CSS is injected into the page. Adding the CSS in your own style sheet works well, and is IMHO by far the best option.

    The nuclear option: Hiding the Content manually

    Using the explicit CSS is the best choice, so the following shouldn’t ever be necessary. But I’ll mention it here as it gives some insight into how you can hide/show content manually on load, for other frameworks or in your own markup-based templates. Before I figured out that I could explicitly embed the CSS style into the page, I had tried to figure out why ng-cloak wasn’t doing its job. After wasting an hour getting nowhere I finally decided to just manually hide and show the container. The idea is simple – initially hide the container, then show it once Angular has done its initial processing and removed the template markup from the page. You can manually hide the content and make it visible after Angular has gotten control.
    To do this I used:

        <div id="mainContainer" class="mainContainer boxshadow" ng-app="app" style="display:none">

    Notice the display:none style that explicitly hides the element initially on the page. Then, once Angular has run its initialization and effectively processed the template markup on the page, you can show the content. For Angular this ‘ready’ event is the app.run() function:

        app.run(function ($rootScope, $location, cellService) {
            $("#mainContainer").show();
            …
        });

    This effectively removes the display:none style and the content displays. By the time app.run() fires, the DOM is ready to be displayed with filled data, or at least empty data – Angular has gotten control.

    Edge Case

    Clearly this is an edge case. In general the initial HTML pages tend to be reasonably sized, and the load times for the HTML and Angular are fast enough that there’s no flicker between the rendering times. This only becomes an issue as the initial pages get rather large. Regardless – if you have an Angular application it’s probably a good idea to add the CSS style into your application’s CSS (or a common shared one) just to make sure that content is always hidden. You never know how slow a browser somebody might be running, and while your super fast dev machine might not show any flicker, grandma’s old XP box very well might…

    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in Angular, JavaScript, CSS, HTML

    Read the article

  • It’s A Team Sport: PASS Board Year 2, Q3

    - by Denise McInerney
    As I type this I’m on an airplane en route to my 12th PASS Summit. It’s been a very busy 3.5 months since my last post on my work as a Board member. Nearing the end of my 2-year term I am struck by how much has happened, and yet how fast the time has gone. But I’ll save the retrospective post for next time and today focus on what happened in Q3. In the last three months we made progress on several fronts, thanks to the contributions of many volunteers and HQ staff members. They deserve our appreciation for their dedication to delivering for the membership week after week.

    Virtual Chapters

    The Virtual Chapters continue to provide many PASS members with valuable free training. Between July and September of 2013 VCs hosted over 50 webinars with a total of 4300 attendees. This quarter also saw the launch of the Security & Global Russian VCs. Both are off to a strong start and I welcome these additions to the Virtual Chapter portfolio.

    At the beginning of 2012 we had 14 Virtual Chapters. Today we have 22. This growth has been exciting to see. It has also created a need for more volunteers to help manage the work of the VCs year-round. We have renewed focus on having Virtual Chapter Mentors work with the VC Leaders and other volunteers. I am grateful to volunteers Julie Koesmarno, Thomas LeBlanc and Marcus Bittencourt, who join original VC Mentor Steve Simon on this team. Thank you for stepping up to help.

    Many improvements to the VC web sites have been rolling out over the past few weeks. Our marketing and IT teams have been busy working on a new look-and-feel, features and a logo for each VC. They have given the VCs a fresh, professional look consistent with the rest of the PASS branding, and all VCs now have a logo that connects to PASS and the particular focus of the chapter.

    24 Hours of PASS

    The Summit Preview edition of 24HOP was held on July 31 and by all accounts was a success. Our first use of the GoToWebinar platform for this event went extremely well. Thanks to our speakers, moderators and sponsors for making this event possible. Special thanks to HQ staffers Vicki Van Damme and Jane Duffy for a smoothly run event. Coming up: the 24HOP Portuguese Edition will be held November 13-14, followed December 12-13 by the Spanish Edition. Thanks to the Portuguese- and Spanish-speaking community volunteers who are organizing these events.

    July Board Meeting

    The Board met July 18-19 in Kansas City. The first order of business was the election of the Executive Committee, who will take office January 1. I was elected Vice President of Marketing and will join incoming President Thomas LaRock, incoming Executive Vice President of Finance Adam Jorgensen and Immediate Past President Bill Graziano on the Exec Co. I am honored that my fellow Board members elected me to this position and look forward to serving the organization in this role.

    Visit to PASS HQ

    In late September I traveled to Vancouver for my first visit to PASS HQ, where I joined Tom LaRock and Adam Jorgensen to make plans for 2014. Our visit was just a few weeks before PASS Summit and coincided with the Board election, and the office was humming with activity. I saw first-hand the enthusiasm and dedication of everyone there. In each interaction I observed a focus on what is best for PASS and our members. Our partners at HQ are key to the organization’s success. This week at PASS Summit is a great opportunity for all of us to remember that, and say “thanks.”

    Next Up

    PASS Summit—of course!
I’ll be around all week and look forward to connecting with many of our member over meals, at the Community Zone and between sessions. In the evenings you can find me at the Welcome Reception, Exhibitor’s Reception and Community Appreciation Party. And I will be at the Board Q&A session  Friday at 12:45 p.m. Transitions The newly elected Exec Co and Board members take office January 1, and the Virtual Chapter portfolio is transitioning to a new director. I’m thrilled that Jen Stirrup will be taking over. Jen has experience as a volunteer and co-leader of the Business Intelligence Virtual Chapter and was a key contributor to the BI VCs expansion to serving our members in the EMEA region. I’ll be working closely with Jen over the next couple of months to ensure a smooth transition.

    Read the article

  • Top Tweets SOA Partner Community – May 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

    - SOA Community: BPMN2.0 Oracle notations poster from eaiesb http://wp.me/p10C8u-pu
    - Torsten Winterberg: Look out for new Oracle #BPM edition coming up soon: The Oracle BPM Standard edition! Great news for easy entry, small licence fees. Yes!
    - Danilo Schmiedel: Had a great chat with customer yesterday about #OracleBPM. Next step will be a 5day event combining modeling and implementation @soacommunity
    - Frank Nimphius: Still reading "Oracle Business Process Management Suite 11g Handbook". Excellent resource for a non-SOA but ADF guy like me ;-)
    - Oracle: New webcast: Maximize #Oracle #WebLogic Server ROI with Oracle #Enterprise #Manager 12c on May 2 at 10 am PT. Register http://bit.ly/JFUrR9
    - OTNArchBeat: BPM in Financial Services Industry | Sanjeev Sharma http://bit.ly/HCCxui
    - JDeveloper & ADF: BPEL 11.1.1.6 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations http://dlvr.it/1V9SxR
    - Oracle UPK & Tutor: Collaborate Attendees: Visit the UPK demo pod, SIGS, and sessions: If you are attending Collaborate 2012 - Sun. http://bit.ly/J39z65
    - Heidi Buelow: see #fmw track RT @demed: Are you going to #KSCOPE12 in San Antonio, June 24-28? http://kscope12.com/component/seminar/seminarslist?topicsid=6 Use promo code Fusion for discount!
    - Sabine Leitner: #SIG #Middleware 15.05. Frankfurt #Oracle #DOAG Planung & Aufbau WebLogic Server #WLS http://bit.ly/HKsCWV @OracleWebLogic @soacommunity
    - SOA Community: MDS explorer by Red Samurai http://wp.me/p10C8u-pp
    - Biemond®: Retrieve or set a HTTP header from Oracle BPEL: With Oracle SOA Suite 11g patch 12928372 you can finally retrie http://bit.ly/JejTHC
    - Lucas Jellema: Call for papers for UKOUG 2012 has opened: http://techandebs.ukoug.org/default.asp?p=9306 (deadline 1st of June)
    - OTNArchBeat: BPM API usage: List all BPM Processes for a user | Kavitha Srinivasan http://bit.ly/IJKVfj
    - demed: SOA, Cloud + Service Tech symposium (London, Sep 24-25) call for paper is open http://www.servicetechsymposium.com/call2012.php @techsymp #oraclesoa
    - OracleBlogs: Lessons learned configuring OER 11g Workflows http://ow.ly/1iMsKh
    - OTNArchBeat: Scripting WebLogic Admin Server Startup | Antony Reynolds http://bit.ly/IH5ciU
    - orclateamsoa: A-Team Blog #ateam: BPM API usage: List all BPM Processes for a user http://ow.ly/1iJADp
    - Lucas Jellema: Just blogged about our Live FMW Application Development show during OBUG 2012, next Tuesday 24th April in Maastricht:
    - OracleBlogs: OEG integration with OSB/OWSM - 11g http://ow.ly/1iKx7G
    - SOA Community: SOA Community Newsletter April 2012 http://wp.me/p10C8u-pl
    - Frank Dorst: RT @whitehorsesnl: Whiteblog: BPM Process Spaces in Oracle Webcenter (Patch Set 5) (http://bit.ly/Hxzh29) #soacommunity #bpm #oracle
    - David Shaffer: The Advanced SOA suite training class next week in Redwood City is full! Learned a lot about accepting credit card payments.
    - OTNArchBeat: Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5 | Shub Lahiri http://bit.ly/IgI8GN
    - SOA Community: Oracle Fusion Middleware Innovation Awards 2012, Call for Nominations #ofmaward #soa #bpm #soacommunity
    - OTNArchBeat: Updated SOA Documents now available in ITSO Reference Library http://bit.ly/I3Y6Sg
    - Oracle Middleware: Data Integrator & SOA - why 2 products better than one for integration? Webcast: Apr 24 10 AM PT http://bit.ly/IzmtKR
    - Andrejus Baranovskis: Red Samurai MDS Cleaner V2.0 http://fb.me/FxLVz82w
    - SOA Community: “@rluttikhuizen: Chapter 4 of SOA Made Simple book "Classification of Services" ready for collegial review” can #soacommunity get a preview?
    - Xavier Verhaeghe: #Gartner figures are out: #Oracle top in App Server market share (43.1%) and Relational #Database, too (48.8%) in 2011
    - Sabine Leitner: WLS12c, Exa*, IDM, EM12c, DB @ Private, Public, Hybrid #Cloud Event 26.04. FFM #Oracle http://bit.ly/zcRuxi @OracleCloudZone @soacommunity
    - Michel Schildmeijer: @wlscommunity @MiddlewareMagic @OTNArchBeat @Oracle_Fusion Oracle WebLogic / SOA Suite 11g HACMP Cluster take-over http://lnkd.in/G78qMd
    - Oracle Middleware: Hear how ODI and SOA's unified approach are key to untangling your business. April 24 10AM PT http://bit.ly/IdcsUz #Oracle
    - OTNArchBeat: Using SAP Adapter with OSB 11g (PS3) | Shub Lahiri http://bit.ly/IswR9K
    - SOA Community: Integrating with Oracle Fusion Applications: Discovering Integration Artifacts https://blogs.oracle.com/governance/entry/integrating_with_oracle_fusion_applications #soacommunity #oer #governance
    - OracleBlogs: Tuning B2B Server Engine Threads in SOA Suite 11g http://ow.ly/1iH5bx
    - OracleBlogs: Top Tweets SOA Partner Community April 2012 http://ow.ly/1iVHfA
    - SOA Community: Oracle SOA Suite 11g Database Growth Management http://wp.me/p10C8u-pi
    - Sabine Leitner: WLS12c, Exa*, IDM, EM12c, DB @ Private, Public, Hybrid #Cloud Event 24.04. München #Oracle http://bit.ly/zcRuxi @OracleCloudZone @soacommunity
    - SOA Community: Testing Business Rules by Mark Nelson http://redstack.wordpress.com/2012/04/18/testing-business-rules/ #soacommunity #soa #rules #oracle
    - SOA Community: Top Tweets SOA Partner Community - April 2012 http://wp.me/p10C8u-pn
    - OTNArchBeat: Webcast: Untangle Your Business with Oracle Unified SOA and Data Integration - April 24 http://bit.ly/IQexqT
    - OTNArchBeat: "Do more with SOA Integration: Best of Packt" contributors include @gschmutz, @llaszews, many others http://amzn.to/HVWwYt
    - ServiceTechSymposium: Symposium agenda page coming together - page launched today with keynotes, sessions to be added shortly. http://www.servicetechsymposium.com/agenda2012.php
    - SOA Community: Shipping Specialization plaques - congratulations #Fujitsu - request yours https://soacommunity.wordpress.com/2011/02/23/who-are-the-soa-experts-specialization-recognized-by-customers/ #soacommunity #OPN http://pic.twitter.com/YMRm2ion
    - ServiceTechSymposium: Call for Presentations Submission Deadline Moved Up to May 21, 2012. Send your presentation submissions ASAP!
    - ServiceTechSymposium: Symposium Keynote by Vicente Navarro, European Space Agency, added to agenda: "SOA & Service-Orientation at the European Space Agency"
    - SOA Community: Running a large #soa project? Make sure you read - Oracle SOA Suite 11g Database Growth Management #soacommunity #opn
    - SOA Community: List all BPM Processes for a user by Yogesh l #bpm #oracle #soacommunity

    For regular information on Oracle SOA Suite become a member of the SOA Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). Blog Twitter LinkedIn Mix Forum Technorati Tags: soacommunity, twitter, Oracle, SOA Community, Jürgen Kress, OPN, SOA, BPM

    Read the article

  • CodePlex Daily Summary for Wednesday, November 21, 2012

    CodePlex Daily Summary for Wednesday, November 21, 2012

    Popular Releases:
    - ImapX 2 (ImapX 2.0.0.6): An updated release of the ImapX 2 library, containing many bugfixes for both the library and the sample application.
    - Metodología General Ajustada - MGA (03.05.05): [Translated from Spanish] Changes (John): Modified the stored procedure PROF03ObjetivoProductoConsultarIdF03, which did not include the IdUnidadMedida and UnidadMedida fields; this caused an error in the data layer when reading these fields (PasarDataSetAPROF03ObjetivoProductoInfo) and ended up returning NULL in the records, so the information was missing from the Export and consequently the Products were not loaded on Import. Installer generation. Technical support by email, telephone and on site.
    - WiX Toolset (WiX v3.7 RC): WiX v3.7 RC (3.7.1119.0) provides feature-complete Bundle update and reference tracking plus several bug fixes. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/11/20/WiX-v3.7-Release-Candidate-available
    - Picturethrill (Version 2.11.20.0): Fixed up Bing image provider on Windows 8.
    - Excel AddIn to reset the last worksheet cell (XSFormatCleaner.xla): Modified the commandbar code to use CommandBar IDs instead of English names.
    - Json.NET (Json.NET 4.5 Release 11): New features: added ITraceWriter, MemoryTraceWriter and DiagnosticsTraceWriter; added StringEscapeHandling with options to escape HTML and non-ASCII characters; added non-generic JToken.ToObject methods; deserialize ISet<T> properties as HashSet<T>; added implicit conversions for Uri, TimeSpan and Guid; missing byte, char, Guid, TimeSpan and Uri explicit conversion operators added to JToken; special case...
    - EntitiesToDTOs - Entity Framework DTO Generator (EntitiesToDTOs.v3.0): DTOs and Assemblers can be generated inside project folders! Choose the types you want to generate! Support for Visual Studio 2012! Support for the new Entity Framework EDMX (format used by VS2012)! Support for Enum Types! Optional automatic check for updates! Added the following methods to Assemblers: IEnumerable<DTO>.ToEntities() : ICollection<Entity>, IEnumerable<Entity>.ToDTOs() : ICollection<DTO>. Indicate class identifier for DTOs and Assemblers! Cleaner Assemblers code...
    - mojoPortal (2.3.9.4): See release notes on mojoportal.com: http://www.mojoportal.com/mojoportal-2394-released Note that we have separate deployment packages for .NET 3.5 and .NET 4.0, but we recommend you use .NET 4; we will probably drop support for .NET 3.5 once .NET 4.5 is available. The deployment package downloads on this page are pre-compiled and ready for production deployment; they contain no C# source code and are not intended for use in Visual Studio. To download the source code see getting the lates...
    - VidCoder (1.4.6 Beta): Brought back the x264 advanced options panel due to popular demand. Thank you for all the feedback. x264 Preset/Profile/Tune/Level has been moved back to the Video tab, along with a copy of the "extra options" string. Added Fast Decode and Zero Latency checkboxes to support multiple Tunes. Added cropping option "None". Audio bitrates that are incompatible with the encoder (such as MP3 > 320 kbps) are no longer preset on the list. Fixed crash on opening VidCoder after de-selecting "re...
    - DotNetNuke® Store (03.01.07): What's new in this release? IMPORTANT: this version requires DotNetNuke 04.06.02 or higher! DO NOT REPORT BUGS HERE IN THE ISSUE TRACKER; INSTEAD USE THE DotNetNuke Store Forum! Bugs corrected: replaced some hard-coded references to the default address provider classes by the corresponding interfaces to allow the creation of another address provider with a different name. New features: added the 'pickup' delivery option at checkout; added the 'no delivery' option in the Store Admin ...
    - Bundle Transformer - a modular extension for ASP.NET Web Optimization Framework (Bundle Transformer 1.6.10): Version 1.6.10, published 11/18/2012. Now almost all of the Bundle Transformer's assemblies are signed (except BundleTransformer.Yui.dll). In BundleTransformer.SassAndScss the SassAndCoffee.Ruby library was replaced by my own implementation of the Sass and SCSS compiler (based on code of the SassAndCoffee.Ruby library version 2.0.2.0). In BundleTransformer.CoffeeScript added support of CoffeeScript version 1.4.0-3. In BundleTransformer.TypeScript added support of TypeScript version 0...
    - ExtJS based ASP.NET 2.0 Controls (FineUI v3.2.0): The 2012-11-18 release notes are in Chinese and arrived garbled in this summary; the recoverable fragments mention SelectedValueArray, RecoverPropertiesFromJObject, Alert.Show, Icon/IconUrl, a new TimePicker, /res.axd?css=blue.css&v=1, MenuCheckBox, RadioButton AutoPostBack and FCKEditor.
    - BugNET Issue Tracker (BugNET 1.2): Please read our release notes for BugNET 1.2: http://blog.bugnetproject.com/bugnet-1-2-has-been-released Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. If you post a question as a review, you will pollute the rating, and you won't get an answer.
    - Paint.NET PSD Plugin (2.2.0): Changes: Layer group visibility is now applied to all layers within the group. This greatly improves the visual fidelity of complex PSD files that have hidden layer groups. Layer group names are prefixed so that users can get an indication of the layer group hierarchy. (Paint.NET has a flat list of layers, so the hierarchy is flattened out on load.) The progress bar now reports status when saving PSD files, instead of showing an indeterminate rolling bar. Performance improvement of 1...
    - CRM 2011 Visual Ribbon Editor (1.3.1116.7): [IMPROVED] Detailed error message descriptions for FaultException. [FIX] Fixed bug in rule CrmOfflineAccessStateRule which had an incorrect State attribute name. [FIX] Fixed bug in rule EntityPropertyRule which was missing the PropertyValue attribute. [FIX] Current connection information was not displayed in the status bar while refreshing the list of entities.
    - Super Metroid Randomizer (v5): v5: Added command line functionality for automation purposes; implemented Krankdud's change to randomize the Etecoon's item. NOTE: this version will not accept seeds from a previous version; the seed format has changed by necessity. v4: Started putting version numbers at the top of the form; added a warning when suitless Maridia is required in a parsed seed. v3: Changed seed to only generate filename-legal characters (using old seeds will still work exactly the same); files can now be saved...
    - Caliburn Micro: WPF, Silverlight, WP7 and WinRT/Metro made easy (Caliburn.Micro v1.4): This version includes many bug fixes across all platforms, improvements to NuGet support and... the biggest news of all... full support for both WinRT and WP8. Download contents: Debug and Release assemblies, samples, Readme.txt, License.txt. Packages available on NuGet: Caliburn.Micro – the full framework compiled into an assembly; Caliburn.Micro.Start – includes Caliburn.Micro plus a starting bootstrapper, view model and view; Caliburn.Micro.Container – the Caliburn.Micro invers...
    - DirectX Tool Kit (November 15, 2012): Added support for WIC2 when available on Windows 8 and on Windows 7 with KB 2670838. Cleaned up warning level 4 warnings.
    - DotNetNuke® Community Edition CMS (06.02.05): Major highlights: updated the system so that it supports nested folders in the App_Code folder; updated the Global Error Handling so that when errors within the global.asax handler happen, they are caught and shown in a page displaying the original HTTP error code; fixed issue that stopped users from specifying Link URLs that open in a new window. Security fixes: fixed issue in the Member Directory module that could show members to non-authenticated users; fixed issue in the Lists modul...
    - fastJSON (v2.0.10): Added MonoDroid project.

    New Projects:
    - 1121codeplex01: Today's task is to test portal on Japanese.
    - AgileToDo: A to-do list using WPF, EF and SQL CE!
    - Applay: A library that allows you to wrap authorization and validation around the services of your application layer by using a dynamic proxy.
    - ArunimaErp: Enterprise Resource Planning software for Arunima Group.
    - BootCMS: BootCMS makes web development easy.
    - codeplex01: I need go out for a while.
    - Coding4Fun's Maelstrom: Introduced at //build/ 2012, Maelstrom is Coding4Fun's latest creation. Step up to the podium and battle against your opponent in full-on stereoscopic 3D!
    - CTCS Project 2012: 11/21/2012 @ svn repository
    - ctodo: TODO list management library.
    - deploy-with-ease: One-click deployment tool based on Dropbox file hosting.
    - Design Resources .NET: D-R.NET is a set of pre-built implementations of oft-recurring application designs. D-R.NET saves considerable time and money in building user-focused applications: from basic to complex.
    - DnfWeb: dnf?? (description garbled)
    - Dr.Peng: (Chinese description, garbled)
    - DriveKeepAlive: Managed .NET service intended to keep external hard drives "awake" for immediate access. Developed in C# with Visual Studio 2008.
    - Ecommerce Platform: Ecommerce Platform.
    - EventManagerReset: Project created for Reset meetings.
    - FaceComparerDistributed: Distributed version of a face comparison project.
    - FileSystemExplorerExample: WPF MVVM sample application.
    - FinlogiK ReSharper Contrib: A plugin for ReSharper 5.1 which adds code cleanup and inspection options for static qualifiers.
    - Gestione Lampade Votive: [Translated from Italian] Management of the annual fees for cemetery niches, with printing of notices to the payers and of postal payment slips (with two or three coupons).
    - GI_PII: HABA BABA?
    - GIV_P2: Second project.
    - Hex o'clock: [Translated from Polish] A colorful clock project.
    - IISProcessScheduler: Schedule processes from within IIS.
    - Image Tagger & Resizer: Resizes pictures and adds text (e.g. copyright information) in the lower right corner.
    - IT Kohvik: ITK cafe school project.
    - jean1121codeplex01: good
    - Kooboo3 Helper: Developer tutorial code for Kooboo CMS v3. http://kooboo.codeplex.com
    - MAVI: mobile application for the visually impaired: bill recognition & tagging and recognizing objects based on a specific sticker.
    - Mecanismos de Segurança Interoperáveis para Serviços Web: [Translated from Portuguese] This project intends to develop a framework that provides security requirements in an interoperable way through Web Services.
    - Metro UI For Windows Forms: Provides a set of controls and form templates for designing user interfaces based on a similar minimalist Metro style. For those who love Windows Forms.
    - NHSmartBootstrapper: In a "fast-changing" world, your LoB application needs to be ready to change as well. The usage of NHibernate Listeners together with smart application bootstrapping, even in a complex scenario, can lead to extensible and new-feature-ready applications.
    - Office Add-In Monitor: Office Add-in Monitor protects add-ins from being disabled.
    - Orchard Responsive Theme Machine: A responsive version of the Orchard CMS "Theme Machine" which is commonly used as a starting point for building custom themes. Supports many resolutions.
    - Orchard Simple Contact Form: An Orchard CMS module that provides a simple contact form that sends an email. It can be used as either a content part or widget.
    - Peon War: Peon War is a game where peons are fighting.
    - Project Files Linker (VS Add-In): PFL is used to generate multiple projects with links to the same files, to achieve projects for different .NET Framework versions.
    - Quibbler - Universal News Reader: Quibbler is a product designed and developed by Indigo Architects. Quibbler is a desktop application which runs on the user's machine and provides an intuitive user interface for reading news in offline mode. Quibbler is developed in WPF (.NET 3.5).
    - Samcrypt: .
    - SenchaTest: SenchaTest.
    - Shared Genomics Project - Workbench Codebase: The Shared Genomics workbench enables a diverse user group of researchers to explore the associations between genetic and other factors in their datasets. It provides a graphical user interface to the analysis functions published in a sister CodePlex project, i.e. the MPI Codebase.
    - SharePoint 2013 FBA Pack: This is the home of the SharePoint 2013 FBA Pack. The FBA Pack for SharePoint 2013 is currently in development and is coming soon.
    - SharePoint Term Store PowerShell Backup & Restore Scripts: This project is focused on development of PowerShell script tools for backup and restore of the SharePoint Managed Metadata service application Term Store taxonomy.
    - SharpPlanets: A simple game completely designed and written in C#, inspired by JPlanets.
    - SpaceShooter: A small hobbyist game, similar to 2D arcade shooter games.
    - Stretched Background Image jQuery plugin: jQuery plugin for adding a stretched background image to any element in a web page. Uses an absolutely positioned image at z-index -1.
    - Stsadm Templates for Visual Studio: The Stsadm Templates for Visual Studio 2005 and 2008 support you in making command extensions for SharePoint's command line tool stsadm.exe.
    - SwissPost EasyTrack API: The SwissPost EasyTrack API allows you to track your parcels or letters at any time and in every application.
    - System.Threading.Joins: The Joins project provides asynchronous concurrency semantics based on join calculus and modeled after the Microsoft Research Cω (C Omega) project.
    - T nagu Tetris: [Translated from Estonian] Our version of the well-known game Tetris.
    - testdd11202012tfs01: juk
    - testddhg11202012hg01: s
    - testtom11202012git02: fdsfds
    - testtom11202012hg01: gfd
    - Tetrissimus: Tetrissimus is an open source "Tetris"-like game totally written in DHTML (JavaScript, CSS and HTML) that uses the keyboard. This cross-platform and cross-browser game was tested under BeOS, Linux, NetBSD, OpenBSD, FreeBSD, Windows and others.
    - Thrift Client .NET for WinRT (Windows Store Apps): Thrift .NET client for WinRT applications.
    - Twitter Bootstrap for SharePoint: A master page for SharePoint 2010 including the Twitter Bootstrap front-end framework.
    - TX Spell .NET ActiveX Package: Enables you to add high-performance spell checking capabilities to your VB6 applications.
    - USB ACCELEROMETER: A test demo for a USB accelerometer. The application plays music (an MP3 file) while the accelerometer reports high values from its coordinates within an interval.
    - VfaAccoutApps: Cash payment application of Vf Asia.
    - VisualPoint Use PowerPoint inside Visual Studio: VisualPoint lets you show PowerPoint presentations from inside Visual Studio. A future release will automate walkthroughs and presentations.
    - VS2010 Rc1 Fix: Illustrates a fix for working with the ASP.NET Wizard control with VS2010 RC1.
    - WebSite.Request: WebSite.Request launches a web request (via XMLHTTP) on a website. Use it, for example, to make an initial request to a SharePoint URL and escape the "slow first request" problem.
    - WPF Checked ListBox: A simple implementation of a WPF checked ListBox.
    - WPortal: doing nothing. that's it. i just want to use the subversion management.
    - XNA Capture the Flag for the Microsoft Zune: Capture the Flag is a 2D capture-the-flag game made for the Zune platform using XNA 3.0 CTP. Players choose to join or start a network session in the main menu. When in game, the player uses left or right on the DPad to choose the team on which to play. Once sides have been chosen, the party leader presses the center button on the DPad to start the game. Teams switch between offense and defense for a total of 4 rounds in each game. When the game is over the party leader simply presses th...
    - XPS Indexer: XPS file indexing for Google Desktop.
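    The Json.NET 4.5 Release 11 entry above lists several new serializer features. As a rough, hedged illustration of two of them (the MemoryTraceWriter and StringEscapeHandling options), here is a minimal C# sketch; the Person/Demo types and property names are invented for the example and are not part of the release notes:

        using System;
        using Newtonsoft.Json;
        using Newtonsoft.Json.Serialization;

        class Person
        {
            public string Name { get; set; }
            public Uri Homepage { get; set; }
        }

        class Demo
        {
            static void Main()
            {
                var settings = new JsonSerializerSettings
                {
                    // Escape HTML-sensitive characters in string values.
                    StringEscapeHandling = StringEscapeHandling.EscapeHtml,
                    // Collect serializer trace messages in memory.
                    TraceWriter = new MemoryTraceWriter()
                };

                string json = JsonConvert.SerializeObject(
                    new Person { Name = "<b>Jo</b>", Homepage = new Uri("http://example.com/") },
                    settings);

                Console.WriteLine(json);                  // "<" and ">" are emitted as \u003c and \u003e
                Console.WriteLine(settings.TraceWriter);  // MemoryTraceWriter.ToString() dumps the collected trace
            }
        }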

    Read the article

  • CLR via C# 3rd Edition is out

    - by Abhijeet Patel
    Time for a book news update. CLR via C#, 3rd Edition seems to have been out for a little while now. The book was released in early Feb this year, and needless to say my copy is on its way. I can barely wait to dig in and chew on the goodies that one of the best technical authors and software professionals I respect has in store. The 2nd edition of the book was an absolute treat and this edition promises to be no less. Here is a brief description of what’s new and updated from the 2nd edition.

    Part I – CLR Basics
    Chapter 1 - The CLR’s Execution Model: Added discussion about C#’s /optimize and /debug switches and how they relate to each other.
    Chapter 2 - Building, Packaging, Deploying, and Administering Applications and Types: Improved discussion about Win32 manifest information and version resource information.
    Chapter 3 - Shared Assemblies and Strongly Named Assemblies: Added discussion of TypeForwardedToAttribute and TypeForwardedFromAttribute.

    Part II – Designing Types
    Chapter 4 - Type Fundamentals: No new topics.
    Chapter 5 - Primitive, Reference, and Value Types: Enhanced discussion of checked and unchecked code and added discussion of the new BigInteger type. Also added discussion of C# 4.0’s dynamic primitive type.
    Chapter 6 - Type and Member Basics: No new topics.
    Chapter 7 - Constants and Fields: No new topics.
    Chapter 8 - Methods: Added discussion of extension methods and partial methods.
    Chapter 9 - Parameters: Added discussion of optional/named parameters and implicitly-typed local variables.
    Chapter 10 - Properties: Added discussion of automatically-implemented properties, properties and the Visual Studio debugger, object and collection initializers, anonymous types, the System.Tuple type and the ExpandoObject type.
    Chapter 11 - Events: Added discussion of events and thread-safety as well as showing a cool extension method to simplify the raising of an event.
    Chapter 12 - Generics: Added discussion of delegate and interface generic type argument variance.
    Chapter 13 - Interfaces: No new topics.

    Part III – Essential Types
    Chapter 14 - Chars, Strings, and Working with Text: No new topics.
    Chapter 15 - Enums: Added coverage of new Enum and Type methods to access enumerated type instances.
    Chapter 16 - Arrays: Added new section on initializing array elements.
    Chapter 17 - Delegates: Added discussion of using generic delegates to avoid defining new delegate types. Also added discussion of lambda expressions.
    Chapter 18 - Attributes: No new topics.
    Chapter 19 - Nullable Value Types: Added discussion on performance.

    Part IV – CLR Facilities
    Chapter 20 - Exception Handling and State Management: This chapter has been completely rewritten. It is now about exception handling and state management. It includes discussions of code contracts and constrained execution regions (CERs). It also includes a new section on trade-offs between writing productive code and reliable code.
    Chapter 21 - Automatic Memory Management: Added discussion of C#’s fixed statement and how it works to pin objects in the heap. Rewrote the code for weak delegates so you can use them with any class that exposes an event (the class doesn’t have to support weak delegates itself). Added discussion of the new ConditionalWeakTable class, GC collection modes, full GC notifications, garbage collection modes and latency modes. I also include a new sample showing how your application can receive notifications whenever Generation 0 or 2 collections occur.
    Chapter 22 - CLR Hosting and AppDomains: Added discussion of side-by-side support allowing multiple CLRs to be loaded in a single process. Added section on the performance of using MarshalByRefObject-derived types. Substantially rewrote the section on cross-AppDomain communication. Added section on AppDomain monitoring and first-chance exception notifications. Updated the section on the AppDomainManager class.
    Chapter 23 - Assembly Loading and Reflection: Added section on how to deploy a single file with dependent assemblies embedded inside it. Added section comparing reflection invoke vs bind/invoke vs bind/create delegate/invoke vs C#’s dynamic type.
    Chapter 24 - Runtime Serialization: This is a whole new chapter that was not in the 2nd Edition.

    Part V – Threading
    Chapter 25 - Threading Basics: Whole new chapter motivating why Windows supports threads, thread overhead, CPU trends, NUMA architectures, the relationship between CLR threads and Windows threads, the Thread class, reasons to use threads, thread scheduling and priorities, and foreground vs background threads.
    Chapter 26 - Performing Compute-Bound Asynchronous Operations: Whole new chapter explaining the CLR’s thread pool. This chapter covers all the new .NET 4.0 constructs including cooperative cancellation, Tasks, the Parallel class, Parallel Language Integrated Query (PLINQ), timers, how the thread pool manages its threads, cache lines and false sharing.
    Chapter 27 - Performing I/O-Bound Asynchronous Operations: Whole new chapter explaining how Windows performs synchronous and asynchronous I/O operations. Then I go into the CLR’s Asynchronous Programming Model (APM), my AsyncEnumerator class, the APM and exceptions, applications and their threading models, implementing a service asynchronously, the APM and compute-bound operations, APM considerations, I/O request priorities, converting the APM to a Task, the event-based Asynchronous Pattern, and programming model soup.
    Chapter 28 - Primitive Thread Synchronization Constructs: Whole new chapter discussing class libraries and thread safety, primitive user-mode and kernel-mode constructs, and data alignment.
    Chapter 29 - Hybrid Thread Synchronization Constructs: Whole new chapter discussing various hybrid constructs such as ManualResetEventSlim, SemaphoreSlim, CountdownEvent, Barrier, ReaderWriterLock(Slim), OneManyResourceLock, Monitor, 3 ways to solve the double-check locking technique, .NET 4.0’s Lazy and LazyInitializer classes, the condition variable pattern, .NET 4.0’s concurrent collection classes, and the ReaderWriterGate and SyncGate classes.
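    The Chapter 11 item above mentions an extension method that simplifies raising an event. The well-known pattern behind it looks roughly like the following C# sketch (my reconstruction of the idea, not the book’s exact listing): the delegate reference is read once, atomically, so the null check and the invocation cannot race with a handler being removed on another thread.

        using System;
        using System.Threading;

        public static class EventArgExtensions
        {
            public static void Raise<TEventArgs>(this TEventArgs e,
                object sender, ref EventHandler<TEventArgs> eventDelegate)
                where TEventArgs : EventArgs
            {
                // Atomically read the current delegate reference into a local.
                EventHandler<TEventArgs> temp =
                    Interlocked.CompareExchange(ref eventDelegate, null, null);
                if (temp != null)
                    temp(sender, e);
            }
        }

        public class NewMailEventArgs : EventArgs { /* event payload would go here */ }

        public class MailManager
        {
            public event EventHandler<NewMailEventArgs> NewMail;

            protected virtual void OnNewMail(NewMailEventArgs e)
            {
                // One line replaces the usual copy/null-check/invoke boilerplate.
                e.Raise(this, ref NewMail);
            }
        }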

    Read the article

  • What's going on with INETA and the Regional Speakers Bureau?

    - by Chris Williams
    For those of you that have been waiting patiently (and not so patiently) I'm happy to say that we're very near completion on some changes/enhancements/improvements that will allow us to finally go live with the INETA Regional Speakers Bureau. I know quite a few of you have already registered, which is great (though some of you may need to come back and update your info) and we've had a few folks submit requests, mostly in a test capacity, but soon we'll be up and live. Here's how it breaks down. Be sure to read this, because things have changed a bit from when we initially announced it.

    1. The majority of our speaker/event funding is going into the Regional Speakers Bureau. The National Bureau still exists, but it's a good bit smaller than it was before, and it's not an "every group" benefit anymore. We'll be using the National Bureau as more of a strategic task force, targeting high impact events and areas that need some community building love from INETA. These will be identified and handled on a case by case basis, and may include more than just user group events.

    2. You're going to get more events per group, per year than you did before. Not only are we focusing more resources on this program, but we're also making a lot of efforts to use it more effectively. With the INETA Regional Speakers Bureau, you should be able to get 2-3 INETA speakers per year, on average. Not every geographical area will have exactly the same experience, but we're doing the best we can.

    3. It's not a farm team program for the National Bureau. Unsurprisingly, I managed to offend a number of people when I previously made the comment that the Regional Speakers Bureau program was a farm team or stepping stone to the National Bureau. It was a poor choice of words. Anyone can participate in the Regional Speakers Bureau, and I look forward to working with all of you.

    4. There is assistance for your efforts. The exact final details are still being hammered out, but expect it to look something like this (all distances listed are based on a round trip):
    Distances < 120 miles = $0
    121 miles - 240 miles = $50 (effectively 1 to 2 hours, each way)
    241 miles - 360 miles = $100 (effectively 2 to 3 hours, each way)
    361 miles - 480 miles = $200 (effectively 3 to 4 hours, each way)
    For those of you who travel a lot, we're working on a solution to handle group visits when you're away from home. These will (for now) be handled on a case by case basis.

    5. We're going to make it as easy as possible to work with the program. In order to do this, we need a few things from you. For speakers, that means your home address. It also means (maybe) filling out a simple 1 line expense report via the INETA website. For user groups, it means making sure your meeting address is up to date as well.

    6. Distances will be automatically calculated from your home of record to the user group event and back. We realize that this is not a perfect solution to every instance, but we're not paying you to speak at an event, and you won't be taxed on this money. It's simply some assistance to make your community efforts easier. Our way of saying thanks for everything you do.

    7. Sounds good so far, what's the catch? There's always a catch, right? In this case there are two of them:
    1) At this time, Microsoft employees are welcome to use the website to line up speaking engagements with user groups, but are not eligible for financial assistance.
    2) Anyone can register and use the website to line up speaking engagements with user groups, however you must receive and maintain a net score of 3+ positive ratings (we're implementing a thumbs up / thumbs down system) in order to receive financial assistance. These ratings are provided by the User Group leaders after the meeting has taken place.

    8. Involvement by the User Group leaders is a key factor in the success of this program. Your job isn't done once you request a speaker. After you've had your meeting, it's critical that you go back to the website and take a very small survey. Doing this ensures that the speaker gets rated (and compensated if eligible) and also ensures that you can make another request, since you won't be able to make a new request if you have an old one outstanding.

    9. What about Canada? We're definitely working on that. Unfortunately nothing new to report on that front, other than to say that we're trying.

    So... this is where things stand currently. We're working very quickly to get this in place and get speakers and groups together. If you have any questions, please leave a comment below and I'll answer them as quickly as possible. If I've forgotten anything, or if things change, I'll update it here. Thanks, Chris G. Williams INETA Board of Directors
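    To make the tiering in point 4 concrete, here is a small hedged C# sketch of the round-trip mileage logic (the tiers and dollar amounts come from the list above; the class and method names, and the null return for case-by-case distances over 480 miles, are my own assumptions):

        static class SpeakerAssistance
        {
            // Returns the assistance amount in USD for a round-trip distance,
            // or null where the program handles the trip case by case.
            public static int? AssistanceUsd(int roundTripMiles)
            {
                if (roundTripMiles <= 120) return 0;    // local trip: no assistance
                if (roundTripMiles <= 240) return 50;   // ~1-2 hours each way
                if (roundTripMiles <= 360) return 100;  // ~2-3 hours each way
                if (roundTripMiles <= 480) return 200;  // ~3-4 hours each way
                return null;                            // handled case by case
            }
        }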

    Read the article

  • The Incremental Architect's Napkin - #2 - Balancing the forces

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/02/the-incremental-architectacutes-napkin---2---balancing-the-forces.aspx

    Categorizing requirements is the prerequisite for economic architectural decisions. Not all requirements are created equal. However, to truly understand and describe the requirement forces pulling on software development, I think further examination of the requirements aspects is warranted.

    Aspects of Functionality
    There are two sides to Functionality requirements. It's about what a piece of software should do. I call that the Operations it implements. Operations are defined by expressions and control structures or calls to frameworks of some sort, i.e. (business) logic statements. Operations calculate, transform, aggregate, validate, send, receive, load, store etc. Operations are about behavior; they take input and produce output by considering state. I'm not using the term "function" here, because functions - or methods or sub-programs - are not necessary to implement Operations. Functions belong to a different sub-aspect of requirements (see below). Operations alone are not enough, though, to make a customer happy with regard to his/her Functionality requirements. Only correctly implemented Operations provide full value. This should make clear why testing is so important. And not just manual tests during development of some operational feature, but automated tests. Because only automated tests scale when over time the number of operations increases. Without automated tests there is no guarantee formerly correct operations are still correct after more got added. To retest all previous operations manually is infeasible. So whoever relies just on manual tests is not really balancing the two forces Operations and Correctness. With manual tests more weight is put on the side of the scale of Operations. That might be ok for a short period of time - but in the long run it will bite you. You need to plan for Correctness in the long run from the first day of your project on.

    Aspects of Quality
    As important as Functionality is, it's not the driver for software development. No software has ever been written to just implement some operation in code. We don't need computers just to do something. All computers can do with software we can do without them. Well, at least given enough time and resources. We could calculate the most complex formulas without computers. We could do auctions with millions of people without computers. The only reason we want computers to help us with this and a million other Operations is… We don't want to wait for the results very long. Or we want fewer errors. Or we want easier accessibility to complicated solutions. So the main reason for customers to buy/order software is some Quality. They want some Functionality with a higher Quality (e.g. performance, scalability, usability, security…) than without the software. But Qualities come in at least two flavors: Most important are Primary Qualities. That's the Qualities software truly is written for. Take an online auction website for example. Its Primary Qualities are performance, scalability, and usability, I'd say. Auctions should come within reach of millions of people; setting up an auction should be very easy; finding a suitable auction and bidding on it should be as fast as possible. Only if those Qualities have been implemented does security become relevant. A secure auction website is important - but not as important as a fast auction website. Nobody would want to use the most secure auction website if it was unbearably slow. But there would be people willing to use the fastest auction website even if it was lacking security. That's why security - with regard to online auction software - is not a Primary Quality, but just a Secondary Quality. It's a supporting quality, so to speak. It does not deliver value by itself. With password manager software this might be different. There, security might be a Primary Quality. Don't get me wrong: I don't want to denigrate any Quality. There's a long list of non-functional requirements at Wikipedia. They are all created equal - but that does not mean they are equally important for all software projects. When confronted with Quality requirements, check with the customer which are primary and which are secondary. That will help to make good economic decisions when in a crunch. Resources are always limited - but requirements are a bottomless ocean.

    Aspects of Security of Investment
    Functionality and Quality are traditionally the requirement aspects cared for most - by customers and developers alike. Even today, when pressure rises in a project, tunnel vision will focus on them. Any measures to create and hold up Security of Investment (SoI) will be out of the window pretty quickly. Resistance to customers and/or management is futile. As long as SoI is not placed on equal footing with Functionality and Quality it's bound to suffer under pressure. Looking closer at what SoI means will help us become more conscious about it and make customers and management aware of the risks of neglecting it. SoI to me has two facets: Production Efficiency (PE) is about speed of delivering value. Customers like short response times. Short response times mean less money spent. So whatever makes software development faster supports this requirement. This must not lead to duct-tape programming and banging out features by the dozen, though. Because customers don't just want Operations and Quality, but also Correctness. So if Correctness gets compromised by focusing too much on Production Efficiency it will fire back. Customers want PE not just today, but over the whole course of a software's lifecycle. That means it's not just about coding speed, but equally about code quality. If poor code quality leads to rework, PE is at an unsatisfactory level. Also if code production leads to waste it's unsatisfactory. Because the effort which went into waste could have been used to produce value. Rework and waste cost money. Rework and waste abound, however, as long as PE is not addressed explicitly with management and customers. Thanks to the Agile and Lean movements that's increasingly the case. Nevertheless more could and should be done in many teams. Each and every developer should keep in mind that Production Efficiency is as important to the customer as Functionality and Quality - whether he/she states it or not. Making software development more efficient is important - but still sooner or later even agile projects are going to hit a glass ceiling. At least as long as they neglect the second SoI facet: Evolvability. Delivering correct high-quality functionality in short cycles today is good. But not just any software structure will allow this to happen for an indefinite amount of time.[1] The less explicitly software was designed, the sooner it's going to get stuck. Big ball of mud, monolith, brownfield, legacy code, technical debt… there are many names for software structures that have lost the ability to evolve, to be easily changed to accommodate new requirements. An evolvable code base is the opposite of a brownfield. It's code which can be easily understood (by developers with sufficient domain expertise) and then easily changed to accommodate new requirements. Ideally the cost of adding feature X to an evolvable code base is independent of when it is requested - or at least the costs should only increase linearly, not exponentially.[2] Clean Code, Agile Architecture, and even traditional Software Engineering are concerned with Evolvability. However, it seems no systematic way of achieving it has been laid out yet. TDD + SOLID help - but still… When I look at the design-ability reality in teams I see much room for improvement. As stated previously, SoI - or to be more precise: Evolvability - can hardly be measured. Plus the customer rarely states an explicit expectation with regard to it. That's why I think special care must be taken not to neglect it. Postponing it to some large refactorings should not be an option. Rather, Evolvability needs to be a core concern for every single developer day. This should not mean Evolvability is more important than any of the other requirement aspects. But neither is it less important. That's why more effort needs to be invested into it, to bring it on par with the other aspects, which usually are much more in focus.

    In closing
    As you see, requirements are of quite different kinds. Not taking that into account will make it harder to understand the customer, and to make economic decisions. Those sub-aspects of requirements are forces pulling in different directions. To improve performance might have an impact on Evolvability. To increase Production Efficiency might have an impact on security etc. No requirement aspect should go unchecked when deciding how to allocate resources. Balancing should be explicit. And it should be possible to trace back each decision to a requirement. Why is there a null-check on parameters at the start of the method? Why are there 5000 LOC in this method? Why are there interfaces on those classes? Why is this functionality running on the threadpool? Why is this function defined on that class? Why is this class depending on three other classes? These and a thousand more questions are not to mean anything should be different in a code base. But it's important to know the reason behind all of these decisions. Because not knowing the reason possibly means waste and having decided suboptimally. And how do we ensure we balance all requirement aspects? That needs practices and transparency. Practices means doing things a certain way and not another, even though that might be possible. We're dealing with dangerous tools here. Like a knife is a dangerous tool. Harm can be done if we use our tools in just any way at the whim of the moment. Over the centuries rules and practices have been established for how to use knives. You don't put them in people's legs just because you're feeling like it. You hand over a knife with the handle towards the receiver. You might not even be allowed to cut round food like potatoes or eggs with it. The same should be the case for dangerous tools like object-orientation, remote communication, threads etc. We need practices to use them in a way so requirements are balanced almost automatically. In addition, to be able to work on software as a team we need transparency. We need means to share our thoughts, to work jointly on mental models. So far our tools are focused on working with code. Testing frameworks, build servers, DI containers, IntelliSense, refactoring support… That's all nice and well. I don't want to miss any of that. But I think it's not enough. We're missing mental tools, tools for making thinking and talking about software (independently of code) easier. You might think enough of such tools already exist, like all those UML diagram types or flow charts. But then, isn't it strange that hardly any team is using them to design software? Or is that just due to a lack of education? I don't think so. It's a matter of value/weight ratio: the current mental tools are too heavyweight compared to the value they deliver. So my conclusion is, we need lightweight tools to really be able to balance requirements. Software development is complex. We need guidance not to forget important aspects. That's like with flying an airplane. Pilots don't just jump in and take off for their destination. Yes, there are times when they are "flying by the seats of their pants", when they are just experts doing things intuitively. But most of the time they are going through honed practices called checklists. See "The Checklist Manifesto" for very enlightening details on this. Maybe then I should say it like this: We need more checklists for the complex business of software development.[3]

    [1] But that's what software development mostly is about: changing software over an unknown period of time. It needs to be corrected in order to finally provide promised operations. It needs to be enhanced to provide ever more operations and qualities. All this without knowing when it's going to stop. Probably never - until "maintainability" hits a wall when the technical debt is too large, the brownfield too deep. Software development is not a sprint, is not a marathon, not even an ultra marathon. Because to all this there is a foreseeable end. Software development is like continuously and forever running…

    [2] And sometimes I dare to think that costs could even decrease over time. Think of it: With each feature a software becomes richer in functionality. So with each additional feature the chance of there already being functionality helping its implementation increases. That should lead to lower costs for feature X if it's requested later rather than sooner. X requested later could stand on the shoulders of previous features. Alas, reality seems to be far from this despite 20+ years of admonishing developers to think in terms of reusability.[1]

    [3] Please don't get me wrong: I don't want to bog down the "art" of software development with heavyweight practices and heaps of rules to follow. The framework we need should be lightweight. It should not stand in the way of delivering value to the customer. Its purpose is even to make that easier by helping us to focus and decreasing waste and rework.

    Read the article

  • Five Reasons to Attend PLM Summit 2013: The Conference Formerly Known as AGILITY

    - by Terri Hiskey
    As we approach the end of 2012, we are also closing in on the last couple of weeks that Agile customers and prospects can register for the upcoming PLM Summit 2013 at the bargain early bird rate of $195. Register now to secure your spot!

    The Conference Formerly Known as AGILITY... Long-time Agile customers may remember AGILITY, which was Agile's PLM customer conference that was held on an annual basis prior to Oracle's acquisition of Agile in 2007. In February 2012, due to feedback we received from our Agile PLM community, we successfully resurrected the AGILITY conference and renamed it the PLM Summit. The PLM Summit was so well received and well-attended that we are doing it again in 2013. This upcoming PLM Summit is being co-located in San Francisco under the overarching banner of the Oracle Value Chain Summit, and will be held alongside several other Oracle customer conferences that cover a range of value chain solutions, including Value Chain Planning, Value Chain Execution, Procurement, Maintenance and Manufacturing. This setup offers PLM attendees the best of all worlds--the opportunity to participate and learn about PLM in smaller, focused sessions by product and by industry, while also giving attendees the chance to see how PLM works together with other critical enterprise applications that address other important aspects of the value chain.

    Top Five Reasons to Attend the PLM Summit 2013: In the spirit of all of the end-of-the-year lists that are currently popping up, here is a list of the top five reasons to attend the PLM Summit, for anyone out there who needs a little extra encouragement to register:

    1. The Best Opportunities for Customer Networking. The PLM Summit offers attendees numerous opportunities to learn and network with fellow Agile users. Customer stories are featured in keynote and breakout presentations and the schedule allows for plenty of networking time during breakfasts, lunches, breaks and dinners. Customer networking is the number one reason that Agile users attend the PLM Summit. Read what attendees thought of the most recent PLM Summit: "Hearing about the implementation of Agile products from a customers’ perspective is invaluable." - Director of Quality Assurance & Regulatory Affairs, leading medical device manufacturer. "Understanding the scope of other companies’ projects and the lessons learned made attending this event well worth my time." - Director of Test Engineering, global industrial manufacturer. "The most beneficial thing about attending this event is the opportunity to network with other customers with similar experiences." - Director of Business Process Improvement, leading high technology company. Come to the PLM Summit and play an active role within the PLM community: swap war stories and business cards, connect on LinkedIn and Facebook, share your stories and discuss the sessions from each day. Register now!

    2. It's Educational! The PLM Summit is the premier educational event for anyone in the Agile PLM community. There are nearly 40 PLM-focused in-depth educational sessions led by Agile PLM experts, customers and partners that will cover a range of specific product and industry-focused topics. Keynotes will give attendees a broad overview of the entire Agile PLM footprint, while sessions will delve deeply into specific product functionality and customer case studies. There is truly something for everyone. Check out the latest agenda for a view of all the sessions.

    3. Visit with the PLM Partner Community. Our partners play a significant and important role within the Agile PLM community. At the PLM Summit, attendees will be able to meet and mingle with several of the top Oracle Agile PLM partners, including: Deloitte, Domain, GoEngineer, Hitachi Consulting, IBM, Kalypso, KPIT Cummins (CPG Solutions), Perception Software, Verdant, Xavor and ZeroWaitState. Go here for a complete list of all the Value Chain Summit sponsors.

    4. See Agile PLM in Action at our Dedicated PLM Demo Pods. At the PLM Summit, attendees will have the chance to see Agile PLM in action at dedicated PLM demo pods, manned by expert members of our Agile PLM team. If you would like to see specific Agile PLM functionality up close, if you have a question on how to extend the scope of your current implementation, or if you want a better understanding of how to leverage Agile PLM to address specific use-cases, stop by one of the Agile PLM demo pods and engage the Agile PLM experts on hand at the PLM Summit.

    5. Spend Some Time in Lovely San Francisco. Still on the fence about the upcoming PLM Summit? Remember that it is being held in San Francisco, which is a fantastic city for a getaway. After spending time learning and networking about PLM, take an extra day or two to escape the dreary winter and enjoy the beautiful scenery and the unique activities offered only by the City by the Bay. You will walk away from the conference not only with renewed excitement about Agile PLM, but feeling rejuvenated in general.

    Read the article

  • Oracle Social Network Developer Challenge Winners

    - by kellsey.ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. Now that OpenWorld 2012 has wrapped, I have time to tell you all about what happened. Maybe you recall that Noel (@noelportugal) and I were running a modified hackathon during the show, the Oracle Social Network Developer Challenge. Without further ado, congratulations to Dimitri Gielis (@dgielis) and Martin Giffy D’Souza (@martindsouza) on their winning entry, an integration between Oracle APEX and Oracle Social Network that integrates feedback and bug submission with Oracle Social Network Conversations, allowing developers, end-users and project leaders to view and discuss the feedback on their APEX applications from within Oracle Social Network. Update: Bob Rhubart of OTN (@brhubart) interviewed Dimitri and Martin right after their big win. Money quote from Dimitri when asked what he’d buy with the $500 in Amazon gift cards: “Oracle Social Network.” Nice one.

    In their own words: In the developers perspective it’s important to get feedback soon, so after a first iteration and end-users start to test, they can give feedback of the application. Previously it stopped there, and it was up to the developer to communicate further with email, phone etc. With OSN every feedback and communication gets logged and other people can see the discussion immediately as well. For the end users perspective he can now communicate in a more efficient way to not only the developers, but also between themselves. Maybe many end-users (in different locations) would like to change some behaviour, by using OSN they can see the entry somebody put in with a screenshot and they can just start to chat about it. Some key technical end users can have lighten the tasks of the development team by looking at the feedback first and start to communicate with their peers. For the project manager he has now the ability to really see what communication has taken place in certain areas and can make decisions on that. Later, if things come up again, he can always go back in OSN and see what was said at that moment in time. Integrating OSN in the APEX applications enhances the user experience, makes the lives of the developers easier and gives a better overview to project managers.

    Incidentally, you may already know Dimitri and Martin, since both are Oracle Ace Directors. I ran into Martin at the Ace Director briefings Friday before the conference started, and at that point, he wasn’t sure he’d have time to enter the Challenge. After some coaxing, he and Dimitri agreed to give it a go and banged out their entry on Tuesday night, or more accurately, very early Wednesday morning, the day of the Challenge judging. I think they said it took them about four hours of hardcore coding to get it done, very much like a traditional hackathon, which is essentially a code sprint from idea to finished product. Here are some screenshots of the workflow they built. (The screenshot gallery appears in the original post.)

    I love this idea, i.e. closing the loop between web developers and users, a very common pain point, and so did our judges. Speaking of, special thanks to our panel of three judges:
    - Reggie Bradford (@reggiebradford), serial entrepreneur, founder of Vitrue and SVP of Cloud Product Development at Oracle
    - Robert Hipps (@roberthipps), VP of Development for Oracle Social Network and my former boss
    - Roland Smart (@rsmartx), VP of Social Marketing and the brains behind the Oracle Social Developer Community

    Finally, thanks to everyone who made this possible, including:
    - The three other teams, from HarQen (@harqen), TEAM Informatics (@teaminformatics) and Fishbowl Solutions (@fishbowle20) featuring Friend of the ’Lab John Sim (@jrsim_uix), who finished and presented entries. I’ll be posting the details of their work this week.
    - The one guy who finished an entry but couldn’t make the judging, Bex Huff (@bex). Bex rallied from a hospitalization due to an allergic reaction during the show; he’s fine, don’t worry. I’ll post details of his work next week, too.
    - The 40-plus people who registered to compete in the Challenge.
    - Noel, for all his hard work, sample code, and flying monkey target, more on that to come.
    - The Oracle Social Network development team for supporting this event.
    - Everyone in legal and the beta program office for their help.
    - And finally, the Oracle Technology Network (@oracletechnet) for hosting the event and providing countless hours of operational and moral support.

    Sorry if I’ve missed some people, since this was a huge team effort. This event was a big success, and we plan to do similar events in the future. Stay tuned to this channel for more.

    Read the article

  • Registration is open - JD Edwards Summit in Dubai

    - by Hartmut Wiese
    Dear all, registration is now open for the 2nd ECEMEA JD Edwards Summit in Dubai. The event is taking place NOV 17-21. Please see agenda details and registration links on these two pages:
    Partner and Employee Registration Page: http://eventreg.oracle.com/profile/web/index.cfm?PKWebId=0x285012625
    Customer Registration Page: http://eventreg.oracle.com/profile/web/index.cfm?PKWebId=0x285012625
    Partners have to pay a fee, and with the registration each partner confirms his/her payment. The only accepted method of payment is PayPal. You will receive a separate email after registration with additional details. Prices are the following:
    - NOV 17-18: USD 100 per Partner registration
    - NOV 20-21: USD 100 per Partner and Customer registration
    (NOV 19 is free of charge; Partner Sponsors can register up to 4 people free of charge for the whole event.)
    After the registration you will receive an automatic workflow message, which is not the registration confirmation. We first have to check the capacity, and once you are approved you will receive a separate email with your registration confirmation. A speciality this year: an invitation letter can be created for Employees only. The fastest way for Customers/Partners to get a visa is to talk to your hotel or airline; this is an established process within this region. One workshop is particularly interesting for “JDE in a box” Partners. One standard training (Introduction to Oracle Solaris V.11) was used, and we have added some specific content about how to create a “JDE in a box” solution for the X3-2 / T4-1 combination. A “JDE in a box” solution is a Partner go-to-market solution where Oracle helps each partner identify the components to use, and where we also want to leverage our experience and help our Partners successfully combine this into a Partner offering. The target audience is Partners with no or limited Solaris/SPARC knowledge. This is the first version of this training and we will all learn from the experience. 
I hope to see a lot of people interested in JD Edwards in Dubai during the five days of the event.

    Read the article

  • Wednesday at OpenWorld: Identity Management

    - by Tanu Sood
    Divide and conquer! Yes, divide and conquer today at Oracle OpenWorld with your colleagues to make the most of all things Identity Management, since there’s a lot going on. Here’s the line-up for today: Wednesday, October 3, 2012 CON9458: End End-User-Managed Passwords and Increase Security with Oracle Enterprise Single Sign-On Plus 10:15 a.m. – 11:15 a.m., Moscone West 3008 Most customers have a broad variety of applications (internal, external, web, client-server, host, etc.) and single sign-on systems that extend to some, but not all, systems. This session will focus on how enterprise single sign-on can help customers extend single sign-on to virtually any application, without costly application modification, while laying a foundation that will enable integration with a broader identity management platform. CON9494: Sun2Oracle: Identity Management Platform Transformation 11:45 a.m. – 12:45 p.m., Moscone West 3008 Sun customers are actively defining strategies for how they will modernize their identity deployments. Learn how customers like Avea and SuperValu are leveraging their Sun investment, evaluating areas of expansion/improvement and building momentum. CON9631: Entitlement-centric Access to SOA and Cloud Services 11:45 a.m. – 12:45 p.m., Marriott Marquis, Salon 7 How do you enforce that a junior trader can submit 10 trades/day, with a total value of $5M, if market volatility is low? How can you hide sensitive patient information from clerical workers but make it visible to specialists, as long as consent has been given or there is an emergency? In this session, UberEther and Herbalife take the stage with Oracle to demonstrate how you can enforce such entitlements on a service, not just within your intranet but also right at the perimeter. CON3957: Delivering Secure Wi-Fi on the Tube as an Olympics Legacy from London 2012 11:45 a.m. – 12:45 p.m., Moscone West 3003 In this session, Virgin Media, the U.K.’s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server, leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand. CON9493: Identity Management and the Cloud 1:15 p.m. – 2:15 p.m., Moscone West 3008 Security is the number one barrier to cloud service adoption. Not so for industry-leading companies like SaskTel, ConAgra Foods and UPMC. This session will explore how these organizations are using Oracle Identity Management with cloud services and how some are offering identity management as a cloud service. CON9624: Real-Time External Authorization for Middleware, Applications, and Databases 3:30 p.m. – 4:30 p.m., Moscone West 3008 As organizations seek to grant access to broader and more diverse user populations, centrally defined and applied authorization policies become critical, both to identify who has access to what and to improve the end-user experience. This session will explore how customers are using attribute- and role-based access to achieve these goals. CON9625: Taking Control of WebCenter Security 5:00 p.m. – 6:00 p.m., Moscone West 3008 Many organizations are extending WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users. 
Leveraging LADWP’s use case, this session will focus on how customers are leveraging, securing and providing access control to Oracle WebCenter portal and mobile solutions. EVENTS: Identity Management Customer Advisory Board 2:30 p.m. – 3:30 p.m., Four Seasons – Yerba Buena Room This invitation-only event is designed exclusively for Customer Advisory Board (CAB) members to provide product strategy and roadmap updates. Identity Management Meet & Greet Networking Event 3:30 p.m. – 4:30 p.m., Meeting Session 4:30 p.m. – 5:30 p.m., Cocktail Reception Yerba Buena Room, Four Seasons Hotel, 757 Market Street, San Francisco The CAB meeting will be immediately followed by an open Meet & Greet event hosted by Oracle Identity Management executives and the product management team. Do take this opportunity to network with your peers and connect with the Identity Management customers. For a complete listing, refer to the Focus on Identity Management document. And as always, you can find us at @oracleidm on Twitter and Facebook. Use #oow and #idm to join in the conversation.

    Read the article

  • Monitoring your WCF Web Apis with AppFabric

    - by cibrax
    The other day, Ron Jacobs made public a template in the Visual Studio Gallery for adding monitoring capabilities to any existing WCF HTTP service hosted in Windows AppFabric. I thought it would be a cool idea to reuse some of that for doing the same thing on the new WCF Web HTTP stack. Windows AppFabric provides a dashboard that you can use to dig into some metrics about service usage, such as the number of calls, errors, or information about different events during a service call. Those events not only include information about the WCF pipeline, but also custom events that any developer can inject and that make sense for troubleshooting issues. These monitoring capabilities can be enabled on any specific IIS virtual directory by using the AppFabric configuration tool or by adding the following configuration sections to your existing web app:

    <system.serviceModel>
      <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
      <diagnostics etwProviderId="3e99c707-3503-4f33-a62d-2289dfa40d41">
        <endToEndTracing propagateActivity="true" messageFlowTracing="true" />
      </diagnostics>
      <behaviors>
        <serviceBehaviors>
          <behavior name="">
            <etwTracking profileName="EndToEndMonitoring Tracking Profile" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>

    <microsoft.applicationServer>
      <monitoring>
        <default enabled="true" connectionStringName="ApplicationServerMonitoringConnectionString" monitoringLevel="EndToEndMonitoring" />
      </monitoring>
    </microsoft.applicationServer>

    The bad news is that none of the configuration above can easily be set in code using the new configuration model for the WCF Web stack. The good news is that you can easily disable it in configuration when you no longer need it, and it also uses ETW, a general-purpose, high-speed tracing facility provided by the operating system (it’s part of the Windows kernel). By adding that configuration section, AppFabric will start monitoring your service automatically and providing some basic event information about the service calls. You need some custom code for injecting custom events into the monitoring data. What I did here is to copy and refactor the “WCFUserEventProvider” class provided as a sample in Ron’s template to make it more TDD-friendly when using IoC. I created a simple interface “ILogger” that any service (or resource) can use to inject custom events or monitoring information into the AppFabric database.

    public interface ILogger
    {
        bool WriteError(string name, string format, params object[] args);
        bool WriteWarning(string name, string format, params object[] args);
        bool WriteInformation(string name, string format, params object[] args);
    }

    The “WCFUserEventProvider” class implements this interface, making it possible to send the events to the AppFabric monitoring database. The service or resource implementation can receive an “ILogger” as part of the constructor:

    [ServiceContract]
    [Export]
    public class OrderResource
    {
        IOrderRepository repository;
        ILogger logger;

        [ImportingConstructor]
        public OrderResource(IOrderRepository repository, ILogger logger)
        {
            this.repository = repository;
            this.logger = logger;
        }

        [WebGet(UriTemplate = "{id}")]
        public Order Get(string id, HttpResponseMessage response)
        {
            var order = this.repository.All.FirstOrDefault(o => o.OrderId == int.Parse(id, CultureInfo.InvariantCulture));
            if (order == null)
            {
                response.StatusCode = HttpStatusCode.NotFound;
                response.Content = new StringContent("Order not found");
            }

            this.logger.WriteInformation("Order Requested", "Order Id {0}", id);

            return order;
        }
    }

    The example above uses MEF as the IoC container for injecting a repository and the logger implementation into the service. You can also see how the logger is used to write an information event to the monitoring database. The following image illustrates how the custom event is injected and how the information becomes available to any user in the dashboard. An issue that you might run into, and that I hope the WCF and AppFabric teams fix soon, is that any WCF service that uses friendly URLs with ASP.NET routing does not get listed as an available service in the WCF services tab of the AppFabric console. The complete example is available to download from here.
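    For reference, here is a rough sketch of what an ETW-backed ILogger implementation could look like. This is only an illustration, not the actual WCFUserEventProvider class from Ron’s template (which is more involved); the GUID simply reuses the etwProviderId value from the configuration above, and the level constants follow the standard ETW convention. In a unit test you would swap in a fake ILogger instead, which is exactly the point of extracting the interface.

    using System;
    using System.Diagnostics.Eventing;

    public class EtwLogger : ILogger
    {
        // Assumption: reuses the etwProviderId from the <diagnostics> section above.
        private static readonly EventProvider Provider =
            new EventProvider(new Guid("3e99c707-3503-4f33-a62d-2289dfa40d41"));

        public bool WriteError(string name, string format, params object[] args)
        {
            return Write(2, name, format, args); // ETW level 2 = Error
        }

        public bool WriteWarning(string name, string format, params object[] args)
        {
            return Write(3, name, format, args); // ETW level 3 = Warning
        }

        public bool WriteInformation(string name, string format, params object[] args)
        {
            return Write(4, name, format, args); // ETW level 4 = Informational
        }

        private static bool Write(byte level, string name, string format, object[] args)
        {
            // Formats "EventName: message" and writes it as an ETW message event.
            string message = string.Concat(name, ": ", string.Format(format, args));
            return Provider.WriteMessageEvent(message, level, 0);
        }
    }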

    Read the article

  • Testing Entity Framework applications, pt. 3: NDbUnit

    - by Thomas Weller
    This is the third of a three-part series that deals with the issue of faking test data in the context of a legacy app that was built with Microsoft's Entity Framework (EF) on top of an MS SQL Server database – a scenario that can be found very often. Please read the first part for a description of the sample application, a discussion of some general aspects of unit testing in a database context, and of some more specific aspects of the EF/MSSQL combination discussed here. Lately, I wondered how you would ‘mock’ the data layer of a legacy application, when this data layer is made up of an MS Entity Framework (EF) model in combination with a MS SQL Server database. Originally, this question came up in the context of how you could enable higher-level integration tests (automated UI tests, to be exact) for a legacy application that uses this EF/MSSQL combo as its data store mechanism – a not so uncommon scenario. The question sparked my interest, and I decided to dive into it somewhat deeper. What I've found out is, in short, that it's not very easy and straightforward to do it – but it can be done. The two strategies that are best suited to fit the bill involve using either the (commercial) Typemock Isolator tool or the (free) NDbUnit framework. The use of Typemock was discussed in the previous post; this post now will present the NDbUnit approach... NDbUnit is an Apache 2.0-licensed open-source project, and like so many other Nxxx tools and frameworks, it is basically a C#/.NET port of the corresponding Java version (namely DbUnit). In short, it helps you flexibly manage the state of a database in that it lets you easily perform basic operations (like e.g. Insert, Delete, Refresh, DeleteAll) against your database and, most notably, lets you feed it with data from external xml files. Let's have a look at how things can be done with the help of this framework. Preparing the test data Compared to Typemock, using NDbUnit implies a totally different approach to meet our testing needs. The testing scenario described here requires an instance of an SQL Server database in operation, and it also means that the Entity Framework model that sits on top of this database is completely unaffected. First things first: For its interactions with the database, NDbUnit relies on a .NET Dataset xsd file. See Step 1 of their Quick Start Guide for a description of how to create one. 
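    Before looking at the fixture plumbing, it may help to see the core NDbUnit workflow in isolation. A minimal sketch, using the same types as the fixture below; the connection string and file paths are placeholders:

    // Core NDbUnit workflow (types from the NDbUnit.Core / NDbUnit.Core.SqlClient namespaces):
    INDbUnitTest database = new SqlDbUnitTest(connectionString);
    database.ReadXmlSchema(@"..\..\TestData\School.xsd");     // dataset schema describing the tables
    database.ReadXml(@"..\..\TestData\School.People.xml");    // xml file containing the test data
    database.PerformDbOperation(DbOperationFlag.CleanInsert); // delete existing rows, then insert

    CleanInsert combines DeleteAll and Insert in one step; the fixture below uses DeleteAll plus InsertIdentity instead, to keep the identity values from the xml files.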
    With this prerequisite in place, the test fixture's setup code could look something like this:

    [TestFixture, TestsOn(typeof(PersonRepository))]
    [Metadata("NDbUnit Quickstart URL",
              "http://code.google.com/p/ndbunit/wiki/QuickStartGuide")]
    [Description("Uses the NDbUnit library to provide test data to a local database.")]
    public class PersonRepositoryFixture
    {
        #region Constants

        private const string XmlSchema = @"..\..\TestData\School.xsd";

        #endregion // Constants

        #region Fields

        private SchoolEntities _schoolContext;
        private PersonRepository _personRepository;
        private INDbUnitTest _database;

        #endregion // Fields

        #region Setup/TearDown

        [FixtureSetUp]
        public void FixtureSetUp()
        {
            var connectionString = ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;
            _database = new SqlDbUnitTest(connectionString);
            _database.ReadXmlSchema(XmlSchema);

            var entityConnectionStringBuilder = new EntityConnectionStringBuilder
            {
                Metadata = "res://*/School.csdl|res://*/School.ssdl|res://*/School.msl",
                Provider = "System.Data.SqlClient",
                ProviderConnectionString = connectionString
            };

            _schoolContext = new SchoolEntities(entityConnectionStringBuilder.ConnectionString);
            _personRepository = new PersonRepository(this._schoolContext);
        }

        [FixtureTearDown]
        public void FixtureTearDown()
        {
            _database.PerformDbOperation(DbOperationFlag.DeleteAll);
            _schoolContext.Dispose();
        }

        ...

    As you can see, there is slightly more fixture setup code involved if your tests are using NDbUnit to provide the test data: Because we're dealing with a physical database instance here, we first need to pick up the test-specific connection string from the test assembly's App.config, then initialize an NDbUnit helper object with this connection along with the provided xsd file, and also set up the SchoolEntities and the PersonRepository instances accordingly. The _database field (an instance of the INDbUnitTest interface) will be our single access point to the underlying database: we use it to perform all the required operations against the data store. To have a flexible mechanism to easily insert data into the database, we can write a helper method like this:

    private void InsertTestData(params string[] dataFileNames)
    {
        _database.PerformDbOperation(DbOperationFlag.DeleteAll);

        if (dataFileNames == null)
        {
            return;
        }

        try
        {
            foreach (string fileName in dataFileNames)
            {
                if (!File.Exists(fileName))
                {
                    throw new FileNotFoundException(Path.GetFullPath(fileName));
                }

                _database.ReadXml(fileName);
                _database.PerformDbOperation(DbOperationFlag.InsertIdentity);
            }
        }
        catch
        {
            _database.PerformDbOperation(DbOperationFlag.DeleteAll);
            throw;
        }
    }

    This lets us easily insert test data from xml files, in any number and in a controlled order (which is important because we eventually must fulfill referential constraints, or we must account for some other stuff that imposes a specific ordering on data insertion). Again, as with Typemock, I won't go into API details here. Unfortunately, there isn't much documentation for NDbUnit anyway, other than the already mentioned Quick Start Guide (and the source code itself, of course) - a not so uncommon problem with smaller open source projects.

    Last but not least, we need to provide the required test data in xml form. A snippet for data from the People table might look like this, for example:

    <?xml version="1.0" encoding="utf-8" ?>
    <School xmlns="http://tempuri.org/School.xsd">
      <Person>
        <PersonID>1</PersonID>
        <LastName>Abercrombie</LastName>
        <FirstName>Kim</FirstName>
        <HireDate>1995-03-11T00:00:00</HireDate>
      </Person>
      <Person>
        <PersonID>2</PersonID>
        <LastName>Barzdukas</LastName>
        <FirstName>Gytis</FirstName>
        <EnrollmentDate>2005-09-01T00:00:00</EnrollmentDate>
      </Person>
      <Person>
        ...

    You can also have data from various tables in one single xml file, if that's appropriate for you (but beware of the already mentioned ordering issues). It's true that your test assembly may end up with dozens of such xml files, each containing quite a big amount of text data. But because the files are of very low complexity, and with the help of a little bit of Copy/Paste and Excel magic, this appears to be well manageable.

    Executing some basic tests

    Here are some of the possible tests that can be written with the above preparations in place:

    private const string People = @"..\..\TestData\School.People.xml";
    ...

    [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
    public void GetNameList_ListOrdering_ReturnsTheExpectedFullNames()
    {
        InsertTestData(People);
        List<string> names = _personRepository.GetNameList(NameOrdering.List);
        Assert.Count(34, names);
        Assert.AreEqual("Abercrombie, Kim", names.First());
        Assert.AreEqual("Zheng, Roger", names.Last());
    }

    [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
    [DependsOn("RemovePerson_CalledOnce_DecreasesCountByOne")]
    public void GetNameList_NormalOrdering_ReturnsTheExpectedFullNames()
    {
        InsertTestData(People);
        List<string> names = _personRepository.GetNameList(NameOrdering.Normal);
        Assert.Count(34, names);
        Assert.AreEqual("Alexandra Walker", names.First());
        Assert.AreEqual("Yan Li", names.Last());
    }

    [Test, TestsOn("PersonRepository.AddPerson")]
    public void AddPerson_CalledOnce_IncreasesCountByOne()
    {
        InsertTestData(People);
        int count = _personRepository.Count;
        _personRepository.AddPerson(new Person { FirstName = "Thomas", LastName = "Weller" });
        Assert.AreEqual(count + 1, _personRepository.Count);
    }

    [Test, TestsOn("PersonRepository.RemovePerson")]
    public void RemovePerson_CalledOnce_DecreasesCountByOne()
    {
        InsertTestData(People);
        int count = _personRepository.Count;
        _personRepository.RemovePerson(new Person { PersonID = 33 });
        Assert.AreEqual(count - 1, _personRepository.Count);
    }

    Not much difference here compared to the corresponding Typemock versions, except that we had to do a bit more preparatory work (and also it was harder to get the required knowledge). But this picture changes quite dramatically if we look at some more demanding test cases:

    Ok, and what if things are becoming somewhat more complex?

    Tests like the above ones represent the 'easy' scenarios. They may account for the biggest portion of real-world use cases of the application, and they are important to make sure that it is generally sound. But usually, all these nasty little bugs originate from the more complex parts of our code, or they occur when something goes wrong. So, for a testing strategy to be of real practical use, it is especially important to see how easy or difficult it is to mimic a scenario which represents a more complex or exceptional case. The following test, for example, deals with the case that there is some sort of invalid input from the caller:

    [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
    [Row(null, typeof(ArgumentNullException))]
    [Row("", typeof(ArgumentException))]
    [Row("NotExistingCourse", typeof(ArgumentException))]
    public void GetCourseMembers_WithGivenVariousInvalidValues_Throws(string courseTitle, Type expectedInnerExceptionType)
    {
        var exception = Assert.Throws<RepositoryException>(() =>
            _personRepository.GetCourseMembers(courseTitle));
        Assert.IsInstanceOfType(expectedInnerExceptionType, exception.InnerException);
    }

    Apparently, this test doesn't need an 'Arrange' part at all (see here for the same test with the Typemock tool). It acts just like any other client code, and all the required business logic comes from the database itself. This doesn't always necessarily mean that there is less complexity, but only that the complexity happens in a different part of your test resources (namely in the xml files, where you sometimes have to spend a lot of effort carefully preparing the required test data). Another example, which relies on an underlying 1-n relationship, might be this:

    [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
    public void GetCourseMembers_WhenGivenAnExistingCourse_ReturnsListOfStudents()
    {
        InsertTestData(People, Course, Department, StudentGrade);
        List<Person> persons = _personRepository.GetCourseMembers("Macroeconomics");
        Assert.Count(4, persons);
        Assert.ForAll(
            persons,
            @p => new[] { 10, 11, 12, 14 }.Contains(@p.PersonID),
            "Person has none of the expected IDs.");
    }

    If you compare this test to its corresponding Typemock version, you immediately see that the test itself is much simpler, easier to read, and thus much more intention-revealing. The complexity here lies hidden behind the call to the InsertTestData() helper method and the content of the used xml files with the test data. And also note that you might have to provide additional data which is not even directly relevant to your test, but is required only to fulfill some integrity needs of the underlying database.

    Conclusion

    The first thing to notice when comparing the NDbUnit approach to its Typemock counterpart obviously deals with performance: of course, NDbUnit is much slower than Typemock. Technically, it doesn't even make sense to compare the two tools. But practically, it may well play a role and could or could not be an issue, depending on how many tests you have of this kind, how often you run them, and what role they play in your development cycle. Also, because the dataset from the required xsd file must fully match the database schema (even in parts that otherwise wouldn't be relevant to you), it can be quite cumbersome to be in a team where different people are working with the database in parallel. 
    My personal experience is – as already said in the first part – that Typemock gives you a better development experience in a 'dynamic' scenario (when you're working in some kind of TDD style, you're oftentimes executing the tests from your dev box, and your database schema changes frequently), whereas the NDbUnit approach is a good and solid solution in more 'static' development scenarios (when you need to execute the tests less frequently or only on a separate build server, and/or the underlying database schema can be kept relatively stable), for example some variations of higher-level integration or user-acceptance tests. But in any case, opening Entity Framework based applications for testing requires a fair amount of resources, planning, and preparatory work – it's definitely not the kind of stuff that you would call 'easy to test'. Hopefully, future versions of EF will take testing concerns into account. Otherwise, I don't see too much of a future for the framework in the long run, even though it's quite popular at the moment...

    The sample solution

    A sample solution (VS 2010) with the code from this article series is available via my Bitbucket account from here (Bitbucket is a hosting site for Mercurial repositories. The repositories may also be accessed with the Git and Subversion SCMs - consult the documentation for details. In addition, it is possible to download the solution simply as a zipped archive – via the 'get source' button on the very right.). The solution contains some more tests against the PersonRepository class, which are not shown here. Also, it contains database scripts to create and fill the School sample database. To compile and run, the solution expects the Gallio/MbUnit framework to be installed (which is free and can be downloaded from here), the NDbUnit framework (which is also free and can be downloaded from here), and the Typemock Isolator tool (a fully functional 30-day trial is available here). Moreover, you will need an instance of the Microsoft SQL Server DBMS, and you will have to adapt the connection strings in the test projects' App.config files accordingly.

    Read the article

  • In the Groove: PASS Board Year 1, Q3

    - by Denise McInerney
    It's nine months into my first year on the PASS Board and I feel like I've found my rhythm. I've accomplished one of the goals I set out for the year and have made progress on others. Here's a recap of the last few months.

    Anti-Harassment Policy & Process Completed

    In April I began work on a Code of Conduct for the PASS Summit. The Board had several good discussions and various PASS members provided feedback. You can read more about that in this blog post. Since the document was focused on issues of harassment, we renamed it the "Anti-Harassment Policy" and it was approved by the Board in August. The next step was to refine the guidelines and process for enforcement of the AHP. A subcommittee worked on this and presented an update to the Board at the September meeting. You can read more about that in this post, and you can find the process document here.

    Global Growth

    Expanding PASS' reach and making the organization relevant to SQL Server communities around the world has been a focus of the Board's work in 2012. We took the Global Growth initiative out to the community for feedback, and everyone on the Board participated, via Twitter chats, Town Hall meetings, feedback forums and in-person discussions. This community participation helped shape and refine our plans. Implementing the vision for Global Growth goes across all portfolios. The Virtual Chapters are well-positioned to help the organization move forward in this area. One outcome of the Global Growth discussions with the community is the expansion of two of the VCs from country-specific to language-specific. Thanks to the leadership in Brazil & Mexico for taking the lead here. I look forward to continued success for the Portuguese- and Spanish-language Virtual Chapters. Together with the Global Chinese VC, PASS is off to a good start in making the VCs truly global.

    Virtual Chapters

    The VCs continue to grow and expand. Volunteers recently rebooted the Azure and Virtualization VCs, and a new Education VC will be launching soon. Every week VCs offer excellent free training on a variety of topics. It's the dedication of the VC leaders and volunteers that makes all this possible, and I thank them for it.

    Board meeting

    The Board had an in-person meeting in September in San Diego, CA. As usual we covered a number of topics, including governance changes to support Global Growth, the upcoming Summit, 2013 events and the (then) upcoming PASS election.

    Next Up

    Much of the last couple of months has been focused on preparing for the PASS Summit in Seattle Nov. 6-9. I'll be there all week; feel free to stop me if you have a question or concern, or just to introduce yourself. Here are some of the places you can find me:

    VC Leaders Meeting: Tuesday 8:00 a.m. the VC leaders will have a meeting. We'll review some of the year's highlights and talk about plans for the next year.

    Welcome Reception: The VCs will be at the Welcome Reception in the new VC Lounge. Come by, learn more about what the VCs have to offer and meet others who share your interests.

    Exceptional DBA Awards Party: I'm looking forward to seeing PASS Women in Tech VC leader Meredith Ryan receive her award at this event sponsored by Red Gate.

    Session Presentation: I will be presenting a spotlight session entitled "Stop Bad Data in Its OLTP Tracks" on Wednesday at 3:00 p.m.

    Exhibitor Reception: This reception Wednesday evening in the Expo Hall is a great opportunity to learn more about tools and solutions that can help you in your job.
    Women in Tech Luncheon: This year marks the 10th WIT Luncheon at PASS. I'm honored to be on the panel with Stefanie Higgins, Kevin Kline, Kendra Little and Jen Stirrup. This event is on Thursday at 11:30.

    Community Appreciation Party: Thursday evening, don't miss this event thanking all of you for everything you do for PASS and the community. This year we will be at the Experience Music Project and it promises to be a fun party.

    Board Q&A: Friday 9:45-11:15 a.m. the members of the Board will be available to answer your questions. If you have a question for us, or want to hear what other members are thinking about, come by room 401 Friday morning.

    Read the article

  • CodePlex Daily Summary for Thursday, November 10, 2011

    CodePlex Daily Summary for Thursday, November 10, 2011

    Popular Releases:
    - CODE Framework: 4.0.11110.0: Various minor fixes and tweaks.
    - Extensions for Reactive Extensions (Rxx): Rxx 1.2: What's New: related work items; please read the latest release notes for details about what's new. Content summary: Rxx provides the following features (see the Documentation for details): many IObservable<T> and IEnumerable<T> extension methods; many useful types such as ViewModel, CommandSubject, ListSubject, DictionarySubject, ObservableDynamicObject, Either<TLeft, TRight>, Maybe<T> and others; various interactive labs that illustrate the runtime behavior of the extensio...
    - Composite C1 CMS: Composite C1 3.0 RC2 (3.0.4331.234): This is currently a Release Candidate. Upgrade guidelines and "what's new" are pending.
    - Player Framework by Microsoft: HTML5 Player Framework 1.0: Additional downloads: HTML5 Player Framework Examples - a set of examples showing how to set up and initialize the HTML5 Player Framework, including examples of how to use the Player Framework with both the HTML5 video tag and the Silverlight player. Note: be sure to unblock the zip file before using. Note: in order to test Silverlight fallback in the included sample app, you need to run the html and xap files over http (e.g. over localhost). Silverlight Players - visit the Silverlig...
    - MapWindow 4: MapWindow GIS v4.8.6 - Final release - 64Bit: What's new in 4.8.6 (final release): a few minor issues have been fixed. What's new in 4.8.5 (beta release): assign projection tool (Sergei Leschinsky); projection dialects (Sergei Leschinsky); projections database converted to SQLite format (Sergei Leschinsky); basic code for database support - will be developed further (ShapefileDataClient class, IDataProvider interface) (Sergei Leschinsky); 'Export shapefile to database' tool (Sergei Leschinsky); made the GEOS library static. geos.dl...
    - NewLife XCode: XCode v8.2.2011.1107, XCoder v4.5.2011.1108.
    - Facebook C# SDK: v5.3.2: This is an RTW release which adds new features and bug fixes to v5.2.1. Query/QueryAsync methods use the Graph API instead of the legacy REST API; removed dependency on Code Contracts; enabled Task Parallel support in .NET 4.0+ (experimental); added support for an early preview of .NET 4.5 (binaries not distributed on CodePlex or nuget.org; you will need to manually build from Facebook-Net45.sln); added additional method overloads for .NET 4.5 to support IProgress<T> for upload progress; added ne...
    - Delete Inactive TS Ports: List and delete the inactive TS ports: UPDATE: added support for Windows 2003 servers and removed some null reference errors when the registry key was not present. The InactiveTSPortList.EXE accepts command line arguments; the InactiveTSPortList.Standalone.WithoutPrompt.exe runs as a standalone exe without the need for any command line arguments.
    - Ribbon Editor for Microsoft Dynamics CRM 2011: Ribbon Editor (0.1.2207.267): Bug fixes: cannot add multiple JavaScript and Url under Actions; cannot add <Or> node under <OrGroup>; adding a rule under an <Or> node put the new rule node in the wrong place.
    - DNN Quick Form: DNN Quick Form 1.0.0: Initial release for DNN Quick Form; requires DotNetNuke 6.1.
    - ClosedXML - The easy way to OpenXML: ClosedXML 0.60.0: Added almost full support for auto filters (missing custom date filters). See examples Filter Values, Custom Filters. Fixed issues 7016, 7391, 7388, 7389, 7198, 7196, 7194, 7186, 7067, 7115, 7144.
    - Microsoft Research Boogie: Nightly builds: This download category contains automatically released nightly builds, reflecting the current state of Boogie's development. We try to make sure each nightly build passes the test suite. If you suspect that was not the case, please try the previous nightly build to see if that really is the problem. Also, please see the installation instructions.
    - GoogleMap Control: GoogleMap Control 6.0: Major design changes to the control in order to achieve better scalability and extensibility for the new features coming with the GoogleMaps API. GoogleMap control switched to GoogleMaps API v3 and .NET 4.0. GoogleMap control is 100% ScriptControl now; it requires ScriptManager to be registered on the pages where, and before, it is used. Markers, polylines, polygons and directions were implemented as ExtenderControl instead of being inner properties of the GoogleMap control. Better performance. Better...
    - SubExtractor: Release 1020: Features: added "baseline double quotes" character to selector box; added option to save SRT files as ANSI (instead of previous UTF-8 only); made "Save Sup files to Source directory" apply to both Sup and Idx source files. Fixes: removed SDH text (...) or [...] that is split over 2 lines; better decision-making on when to prefix a line with a '-' because SDH was removed.
    - AcDown - Anime&Comic Downloader: AcDown v3.6.1.
    - Track Folder Changes: Track Folder Changes 1.1: Fixed exception when right-clicking the root node.
    - Kinect Toolbox: Kinect Toolbox v1.1.0.2: This version adds support for the Kinect for Windows SDK beta 2.
    - Kinect Mouse Cursor: Kinect Mouse Cursor 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!
    - Coding4Fun Kinect Toolkit: Coding4Fun Kinect Toolkit 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!
    - Media Companion: MC 3.421b Weekly: Ensure .NET 4.0 Full Framework is installed (available from http://www.microsoft.com/download/en/details.aspx?id=17718). Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b (details here). TV show resolutions: fix to show the season-specials.tbn when selecting an episode from season 00 (before, MC would try & load season00.tbn); fix for issue #197 - new show added by 'Manually Add Path' not being picked up; also made non-visible the same thing in Root Folders...

    New Projects:
    - AlgoritmoGeneticoVB: Project for the Artificial Intelligence course at UCSE.
    - Audio Pitch & Shift: Audio Pitch & Shift is a simple audio tool intended to be useful for musicians who want to slow down or change the pitch of the music. This software takes advantage of the Bass audio library technology, which is multi-platform and x64 compatible.
    - Betz: Start with a financial binary bet system first... but hope to have a gaming platform in the end. Cheers!
    - Composite Data Service Framework: The Composite Data Service Framework is a toolkit that extends the functionality of the WCF Data Services APIs by allowing a set of OData Services from distinct data sources to be aggregated into a single Data Service, with client-side APIs to help with common tasks.
    - Crayons Static Version: GIS vector-based spatial data overlay processing is much more complex than raster data processing. The GIS data files can be huge and their overlay processing is computationally intensive.
    - CrmXpress SmartSoapLogger for Microsoft Dynamics CRM 2011: SmartSoapLogger automates the process of generating SOAP messages as well as JavaScript functions to use them. All you do is write C# code and click on a button [you have a few options as well] to get the SOAP messages as well as the script.
    - Custom File Generators: This project includes Visual Studio Custom Tools that aid in the development of Silverlight, Windows Phone, and WPF applications, or any MVVM project for that matter. The custom tool creates properties from backing fields that have a specific attribute. The properties are then used in binding and raise their changed event.
    - eDay - Verbräuche: This program enables recording the monthly consumption of electricity, gas and water. It presents the values in tabular form and additionally shows them in a statistics view.
    - Godfather: An application for the administration of sponsorship agencies, developed by the .NET Open Space User Group of Vienna, Austria in the course of a charity coding event on Nov. 26, 2011.
    - Greek News: Instantly get the latest headlines from multiple Greek news sources in news, sports, technology and opinions with one click on your Windows Phone. Uses RSS feeds. Sources are customizable from the settings page. Based on news.codeplex.com.
    - HMAC MD5: This is a simple implementation of the MD5 cryptographic hashing algorithm and HMAC-MD5. This class consists of fully transparent C# code, suitable for use in .NET, Silverlight and WP7 applications.
    - KonMvc: An MVC framework for .NET 3.5, written in C#.
    - Mini Proxy - a light-weight local proxy: A light-weight proxy written in C# (around 200 lines in total) that allows intercepting HTTP traffic and hooking in custom code. Its initial scenario is to capture HTTP headers sent from an HttpWebRequest object. Limitations: HTTP 1.0 proxy; Range header not forwarded.
    - Multimodal User Interface Builder: This project integrates several tools, frameworks and components in order to provide a graphical environment for the development of multimodal user interfaces. The Multimodal User Interface Builder provides development guidance based on multimodal design patterns research.
    - Om MVC: This is a simple hello world MVC project.
    - Phone Net Tools: A collection of tools to overcome certain limitations of networking on the Windows Phone platform, in particular regarding DNS.
    - Ps3RemoteSleep: A workaround to place the Sony PlayStation 3 (PS3) Blu-ray Remote Control in sleep/sniff mode using the integrated Microsoft Bluetooth stack. For use with EventGhost/XBMC. Windows 7 32/64 compatible.
    - Simple CQRS Sample: A simple sample for a CQRS-designed application. Includes RavenDB for the event and read-model source, a message bus based on the Reactive Framework (Rx), a Razor MVC3 web UI, and some more stuff.
    - Software Lab SDK: A collection of code helping in development.
    - Solution Import for Microsoft Dynamics CRM 2011: Solution Import for Microsoft Dynamics CRM 2011 makes it easier for developers and customizers of Microsoft Dynamics CRM 2011 to import a solution Zip file or an extracted solution folder to the server in one single operation.
    - Team Badass: Class project.
    - WCF step by step guide: This is a WCF step-by-step guide that explains how to do web hosting and self-hosting of services, and how to consume the services. It is intended for beginners. It is developed in C#.
    - When Pigs Fly: Flying Pigs.
    - Windows Phone Wi-Fi Sensor JangKengPong: Windows Phone 7.5 Wi-Fi socket multicast group communication and sensor sample application. This project includes group communication on a LAN and a simple gesture recognition library.
    - WPF Turn-based Strategy Wargame: This is a project to create a turn-based strategy game using the Windows Presentation Foundation. The technological foundations of the software allow new game maps to be easily specified in XAML, theoretically enabling support for multiple games.

    Read the article

  • WebCenter Content (WCC) Trace Sections

    - by Kevin Smith
    Kyle has a good post on how to modify the size and number of WebCenter Content (WCC) trace files. His post reminded me I have been meaning to write a post on WCC trace sections for a while. searchcache - Tells you if your query was found in the WCC search cache. searchquery - Shows the processing of the query as it is converted from what the user submitted to the end query that will be sent to the database. Shows the conversion from the universal query syntax to the syntax specific to the search solution WCC is configured to use. services (verbose) - Lists the filters that are called for each service. This will let you know what filters are available for each service and will also tell you what filters are used by WCC add-on components and any custom components you have installed. The How To Component Sample has a list of filters, but it has not been updated since 7.5, so it is a little outdated now. With each new release WCC adds more filters. If you have a filter that has no code attached to it you will see output like this:

    services/6    09.25 06:40:26.270    IdcServer-423    Called filter event computeDocName with no filter plugins registered

    When a WCC add-on or custom component uses a filter you will see trace output like this:

    services/6    09.25 06:40:26.275    IdcServer-423    Calling filter event postValidateCheckinData on class collections.CollectionValidateCheckinData with parameter postValidateCheckinData
    services/6    09.25 06:40:26.275    IdcServer-423    Calling filter event postValidateCheckinData on class collections.CollectionFilters with parameter postValidateCheckinData

    As you can see from this sample output, it is possible to have multiple code points using the same filter. systemdatabase - Dumps the database call AFTER it executes. This can be somewhat troublesome if you are trying to track down some weird database problems. We had a problem where WCC was getting into a deadlock situation. We turned on the systemdatabase trace section and thought we had the problem database call, but since it prints the database call after it is executed, it turned out we were looking at the database call BEFORE the one causing the deadlock. We ended up having to turn on tracing at the database level to see the database call WCC was making that was causing the deadlock. socketrequests (verbose) - Dumps the actual messages received and sent over the socket connection by WCC for a service. If you have gzip enabled you will see junk in the response coming back from WCC; for debugging, disable gzip compression of the WCC response. Here is an example of the dump of the request for a GET_SEARCH_RESULTS service call:

    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: REMOTE_USER=sysadmin.USER-AGENT=Java;.Stel
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: lent.CIS.11g.CONTENT_TYPE=text/html.HEADER
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: _ENCODING=UTF-8.REQUEST_METHOD=POST.CONTEN
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: T_LENGTH=270.HTTP_HOST=CIS.$$$$.NoHttpHead
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: ers=0.IsJava=1.IdcService=GET_SEARCH_RESUL
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: [email protected]
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: calData.SortField=dDocName.ClientEncoding=
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: UTF-8.IdcService=GET_SEARCH_RESULTS.UserTi
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: meZone=UTC.UserDateFormat=iso8601.SortDesc
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: =ASC.QueryText=dDocType..matches..`Documen
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: t`.@end.

    userstorage, jps - Provides trace details for user authentication and authorization. Includes information on the determination of what roles and accounts a user has access to. In 11g a new trace section, jps, was added with the addition of the JpsUserProvider to communicate with WebLogic Server. The WCC developers decide when to use the verbose option for their trace output, so sometimes you need to try verbose to see what different information you get. One of the things I would always have liked to see is the ability to turn on verbose output selectively for individual trace sections. When you turn on verbose output you get it for all trace sections you have enabled. This can quickly fill up your trace files with a lot of information if you have the socket trace section turned on.
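    On a related note, trace sections are usually enabled at runtime from the System Audit Information page, but they can also be persisted in configuration. A sketch of what the entries might look like - the setting names TraceSectionsList and TraceIsVerbose are from memory, so treat them as assumptions and verify against your WCC version:

    TraceSectionsList=services,searchquery,systemdatabase,socketrequests
    TraceIsVerbose=true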

    Read the article

  • A ToDynamic() Extension Method For Fluent Reflection

    - by Dixin
    Recently I needed to demonstrate some code with reflection, but I felt it inconvenient and tedious. To simplify the reflection coding, I created a ToDynamic() extension method. The source code can be downloaded from here. Problem One example for complex reflection is in LINQ to SQL. The DataContext class has a property Privider, and this Provider has an Execute() method, which executes the query expression and returns the result. Assume this Execute() needs to be invoked to query SQL Server database, then the following code will be expected: using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // Executes the query. Here reflection is required, // because Provider, Execute(), and ReturnValue are not public members. IEnumerable<Product> results = database.Provider.Execute(query.Expression).ReturnValue; // Processes the results. foreach (Product product in results) { Console.WriteLine("{0}, {1}", product.ProductID, product.ProductName); } } Of course, this code cannot compile. And, no one wants to write code like this. Again, this is just an example of complex reflection. using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // database.Provider PropertyInfo providerProperty = database.GetType().GetProperty( "Provider", BindingFlags.NonPublic | BindingFlags.GetProperty | BindingFlags.Instance); object provider = providerProperty.GetValue(database, null); // database.Provider.Execute(query.Expression) // Here GetMethod() cannot be directly used, // because Execute() is a explicitly implemented interface method. Assembly assembly = Assembly.Load("System.Data.Linq"); Type providerType = assembly.GetTypes().SingleOrDefault( type => type.FullName == "System.Data.Linq.Provider.IProvider"); InterfaceMapping mapping = provider.GetType().GetInterfaceMap(providerType); MethodInfo executeMethod = mapping.InterfaceMethods.Single(method => method.Name == "Execute"); IExecuteResult executeResult = executeMethod.Invoke(provider, new object[] { query.Expression }) as IExecuteResult; // database.Provider.Execute(query.Expression).ReturnValue IEnumerable<Product> results = executeResult.ReturnValue as IEnumerable<Product>; // Processes the results. foreach (Product product in results) { Console.WriteLine("{0}, {1}", product.ProductID, product.ProductName); } } This may be not straight forward enough. So here a solution will implement fluent reflection with a ToDynamic() extension method: IEnumerable<Product> results = database.ToDynamic() // Starts fluent reflection. .Provider.Execute(query.Expression).ReturnValue; C# 4.0 dynamic In this kind of scenarios, it is easy to have dynamic in mind, which enables developer to write whatever code after a dot: using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // database.Provider dynamic dynamicDatabase = database; dynamic results = dynamicDatabase.Provider.Execute(query).ReturnValue; } This throws a RuntimeBinderException at runtime: 'System.Data.Linq.DataContext.Provider' is inaccessible due to its protection level. 
Here dynamic is able find the specified member. So the next thing is just writing some custom code to access the found member. .NET 4.0 DynamicObject, and DynamicWrapper<T> Where to put the custom code for dynamic? The answer is DynamicObject’s derived class. I first heard of DynamicObject from Anders Hejlsberg's video in PDC2008. It is very powerful, providing useful virtual methods to be overridden, like: TryGetMember() TrySetMember() TryInvokeMember() etc.  (In 2008 they are called GetMember, SetMember, etc., with different signature.) For example, if dynamicDatabase is a DynamicObject, then the following code: dynamicDatabase.Provider will invoke dynamicDatabase.TryGetMember() to do the actual work, where custom code can be put into. Now create a type to inherit DynamicObject: public class DynamicWrapper<T> : DynamicObject { private readonly bool _isValueType; private readonly Type _type; private T _value; // Not readonly, for value type scenarios. public DynamicWrapper(ref T value) // Uses ref in case of value type. { if (value == null) { throw new ArgumentNullException("value"); } this._value = value; this._type = value.GetType(); this._isValueType = this._type.IsValueType; } public override bool TryGetMember(GetMemberBinder binder, out object result) { // Searches in current type's public and non-public properties. PropertyInfo property = this._type.GetTypeProperty(binder.Name); if (property != null) { result = property.GetValue(this._value, null).ToDynamic(); return true; } // Searches in explicitly implemented properties for interface. MethodInfo method = this._type.GetInterfaceMethod(string.Concat("get_", binder.Name), null); if (method != null) { result = method.Invoke(this._value, null).ToDynamic(); return true; } // Searches in current type's public and non-public fields. FieldInfo field = this._type.GetTypeField(binder.Name); if (field != null) { result = field.GetValue(this._value).ToDynamic(); return true; } // Searches in base type's public and non-public properties. property = this._type.GetBaseProperty(binder.Name); if (property != null) { result = property.GetValue(this._value, null).ToDynamic(); return true; } // Searches in base type's public and non-public fields. field = this._type.GetBaseField(binder.Name); if (field != null) { result = field.GetValue(this._value).ToDynamic(); return true; } // The specified member is not found. result = null; return false; } // Other overridden methods are not listed. } In the above code, GetTypeProperty(), GetInterfaceMethod(), GetTypeField(), GetBaseProperty(), and GetBaseField() are extension methods for Type class. For example: internal static class TypeExtensions { internal static FieldInfo GetBaseField(this Type type, string name) { Type @base = type.BaseType; if (@base == null) { return null; } return @base.GetTypeField(name) ?? @base.GetBaseField(name); } internal static PropertyInfo GetBaseProperty(this Type type, string name) { Type @base = type.BaseType; if (@base == null) { return null; } return @base.GetTypeProperty(name) ?? 
@base.GetBaseProperty(name); } internal static MethodInfo GetInterfaceMethod(this Type type, string name, params object[] args) { return type.GetInterfaces().Select(type.GetInterfaceMap).SelectMany(mapping => mapping.TargetMethods) .FirstOrDefault( method => method.Name.Split('.').Last().Equals(name, StringComparison.Ordinal) && method.GetParameters().Count() == args.Length && method.GetParameters().Select( (parameter, index) => parameter.ParameterType.IsAssignableFrom(args[index].GetType())).Aggregate( true, (a, b) => a && b)); } internal static FieldInfo GetTypeField(this Type type, string name) { return type.GetFields( BindingFlags.GetField | BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic).FirstOrDefault( field => field.Name.Equals(name, StringComparison.Ordinal)); } internal static PropertyInfo GetTypeProperty(this Type type, string name) { return type.GetProperties( BindingFlags.GetProperty | BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic).FirstOrDefault( property => property.Name.Equals(name, StringComparison.Ordinal)); } // Other extension methods are not listed. } So now, when invoked, TryGetMember() searches the specified member and invoke it. The code can be written like this: dynamic dynamicDatabase = new DynamicWrapper<NorthwindDataContext>(ref database); dynamic dynamicReturnValue = dynamicDatabase.Provider.Execute(query.Expression).ReturnValue; This greatly simplified reflection. ToDynamic() and fluent reflection To make it even more straight forward, A ToDynamic() method is provided: public static class DynamicWrapperExtensions { public static dynamic ToDynamic<T>(this T value) { return new DynamicWrapper<T>(ref value); } } and a ToStatic() method is provided to unwrap the value: public class DynamicWrapper<T> : DynamicObject { public T ToStatic() { return this._value; } } In the above TryGetMember() method, please notice it does not output the member’s value, but output a wrapped member value (that is, memberValue.ToDynamic()). This is very important to make the reflection fluent. Now the code becomes: IEnumerable<Product> results = database.ToDynamic() // Here starts fluent reflection. .Provider.Execute(query.Expression).ReturnValue .ToStatic(); // Unwraps to get the static value. With the help of TryConvert(): public class DynamicWrapper<T> : DynamicObject { public override bool TryConvert(ConvertBinder binder, out object result) { result = this._value; return true; } } ToStatic() can be omitted: IEnumerable<Product> results = database.ToDynamic() .Provider.Execute(query.Expression).ReturnValue; // Automatically converts to expected static value. Take a look at the reflection code at the beginning of this post again. Now it is much much simplified! Special scenarios In 90% of the scenarios ToDynamic() is enough. But there are some special scenarios. Access static members Using extension method ToDynamic() for accessing static members does not make sense. Instead, DynamicWrapper<T> has a parameterless constructor to handle these scenarios: public class DynamicWrapper<T> : DynamicObject { public DynamicWrapper() // For static. { this._type = typeof(T); this._isValueType = this._type.IsValueType; } } The reflection code should be like this: dynamic wrapper = new DynamicWrapper<StaticClass>(); int value = wrapper._value; int result = wrapper.PrivateMethod(); So accessing static member is also simple, and fluent of course. Change instances of value types Value type is much more complex. 
Changing instances of value types

Value types are much more complex. The main problem is that a value type instance is copied when it is passed to a method as a parameter. This is why the ref keyword is used for the constructor: if a value type instance is passed to DynamicWrapper<T>, the instance itself is stored in this._value of DynamicWrapper<T>. Without the ref keyword, changing this._value would not change the original value type instance.

Consider FieldInfo.SetValue(). In value type scenarios, invoking FieldInfo.SetValue(this._value, value) does not change this._value, because it changes a copy of this._value. I searched the Web and found a solution for setting the value of a field:

internal static class FieldInfoExtensions
{
    internal static void SetValue<T>(this FieldInfo field, ref T obj, object value)
    {
        if (typeof(T).IsValueType)
        {
            field.SetValueDirect(__makeref(obj), value); // For value types.
        }
        else
        {
            field.SetValue(obj, value); // For reference types.
        }
    }
}

Here __makeref is an undocumented C# keyword. Method invocation, however, still has a problem. This is the source code of TryInvokeMember():

public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
{
    if (binder == null)
    {
        throw new ArgumentNullException("binder");
    }

    MethodInfo method = this._type.GetTypeMethod(binder.Name, args) ??
                        this._type.GetInterfaceMethod(binder.Name, args) ??
                        this._type.GetBaseMethod(binder.Name, args);
    if (method != null)
    {
        // Oops!
        // If the return value is a struct, it is copied to the heap.
        object resultValue = method.Invoke(this._value, args);
        // And result is a wrapper of that copied struct.
        result = new DynamicWrapper<object>(ref resultValue);
        return true;
    }

    result = null;
    return false;
}

If the return value is of a value type, it will inevitably be copied, because MethodInfo.Invoke() returns object, which boxes the struct. Changing the value of the result changes the copied struct instead of the original one. The same is true of property and indexer access, since both are actually method invocations. To reduce confusion, setting properties and indexers is not allowed on structs.

Conclusions

DynamicWrapper<T> provides a simplified solution for reflection programming. It works for normal classes (reference types), accessing both instance and static members. In most scenarios, just remember to invoke the ToDynamic() method and access whatever you want:

StaticType result = someValue.ToDynamic()._field.Method().Property[index];

In the special scenarios that require changing the value of a struct (value type), DynamicWrapper<T> does not work perfectly; only changing a struct's field value is supported. The source code can be downloaded from here, including some unit test code.
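As a closing illustration, here is a minimal usage sketch. The Person class is hypothetical (not from the post) and exists only to show fluent access to private members through the wrapper:

// Hypothetical type for illustration; not part of the original post.
public class Person
{
    private string name = "Anonymous";

    private string Greet(string prefix)
    {
        return prefix + this.name;
    }
}

// Usage: reflection without any visible reflection code.
Person person = new Person();
dynamic dynamicPerson = person.ToDynamic();
string name = dynamicPerson.name;              // Reads the private field via TryGetMember().
string greeting = dynamicPerson.Greet("Hi, "); // Invokes the private method via TryInvokeMember().

Thanks to TryConvert(), the assignments to string unwrap the returned wrappers automatically.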

    Read the article

  • The first day of JavaOne is already over!

    - by delabassee
In the past, Sunday used to be a more relaxing day with 'just' some JavaOne activities going on, a soft day to prepare yourself for an exhausting week. That is now over: as JavaOne expands, Sunday is an integral part of the conference. One side effect of this extra day is that some activities related to JavaOne and OpenWorld, such as MySQL Connect, are being pushed to start a day earlier, on Saturday (can you spot the pattern here?). On the GlassFish front, Sunday was a very busy day! It started at the Moscone Center with the annual GlassFish Community Event, where the Java EE 7 and GlassFish 4 roadmaps were presented and discussed. During the event, different GlassFish users such as ZeroTurnaround (the JRebel guys), Grupo RBS and IDR Solutions shared their views on GlassFish: why they like it, but also what could be improved. The event was also a forum for the GlassFish community to exchange views with some of the key Java EE / GlassFish Oracle executives and the different GlassFish team members. The Strategy keynote and the Technical keynote were held in the Masonic Auditorium later in the afternoon. Oracle executives presented the plans for Java SE, JavaFX and Java EE. As on-demand replays will be available soon, I will not summarize several hours of content, but here are some personal takeaways from those keynotes. Modularity Modularity is a big deal. We know by now that Project Jigsaw will not be ready for Java SE 8, but in any case it is already possible (and encouraged) to test Jigsaw today. In the future, Java EE plans to rely on the modularity features provided by Java SE, so Project Jigsaw is also relevant for Java EE developers. In the shorter term, to cover some of the modularity requirements, Java SE will adopt the approach that was used for Java EE 6: the notion of Profiles. This approach does not define a module system per se; Profiles are a way to clearly define different subsets of Java SE to fulfill different needs (e.g. the full JRE is not required for a headless application). The introduction of different Profiles, from the Base Profile (10 MB) to the Full Profile (50+ MB), has been proposed for Java SE 8. Embedded Embedded is a strong theme going forward for the Java platform. There is now a dedicated program: Java Embedded @ JavaOne. Java by nature (e.g. platform independence, built-in security, the ability to easily talk to any back-end system, the large set of skills available on the market, etc.) is probably the platform most suited to the Internet of Things. You can quickly get up to speed and develop services and applications for that space just by using your current Java skills. All you need to start developing on ARM is a $35 Raspberry Pi board ($25 if you are cheap and can live without an Ethernet connection) and the recently released JDK for Linux/ARM. Obviously, GlassFish runs on the Raspberry Pi. If you want to go further in the embedded space, you should take a look at Java SE Embedded, an optimized, low-footprint Java environment that supports the major embedded architectures (ARM, PPC and x86). Finally, Oracle has recently introduced Java Embedded Suite, a new solution that brings modern middleware capabilities to the embedded space. Java Embedded Suite is an optimized solution that leverages Java SE Embedded, but also GlassFish, Jersey and JavaDB, to deploy advanced value-added capabilities (e.g. sensor data filtering) deeper in the network, closer to the devices. JavaFX JavaFX is going strong! Starting from Java SE 7u6, JavaFX is bundled with the JDK.
JavaFX is now available for all the major desktop platforms (Windows, Linux and Mac OS X). JavaFX is also available, in developer preview, for low-end devices running Linux/ARM; during the keynote, JavaFX was shown running on a Raspberry Pi! And as announced during the keynote, JavaFX should be fully open-sourced by the end of the year; contributions are welcome! There is strong momentum around JavaFX: it is the ideal client solution for the Java platform, a client layer that works perfectly with GlassFish on the back-end. If you were not convinced by JavaFX, it's time to reconsider it! As an old Chinese proverb says, "One tweet is worth a thousand words!" HTML5, Project Avatar and Java EE 7 HTML5 got a lot of airtime too; it was covered during the Java EE 7 section of the keynote. Some details about Project Avatar, Oracle's incubator project for a Thin Server Architecture (TSA) solution, were shown during the keynote. On the tooling side, Project Easel running on NetBeans 7.3 beta was demoed, including a cool NetBeans debugging session running in Chrome! HTML5, Project Avatar and Java EE 7 deserve separate posts... Feedback We need your feedback! There are many projects, JSRs and products cooking: GlassFish 4, Project Jigsaw, Concurrency Utilities for Java EE (JSR 236), OpenJFX and OpenJDK, to name just a few. Those projects and specifications will have a profound impact on the Java platform for years to come! So if you have the opportunity, download, install, learn and test them, and give feedback! Remember, you can "Make the Future Java!" Finally, the traditional GlassFish Party at the Thirsty Bear concluded the first JavaOne day. This party is another place where the community can freely exchange with the GlassFish team in a more relaxed, more friendly (but sometimes noisier) atmosphere. Arun has posted a set of pictures to reflect the atmosphere of the keynotes and the GlassFish party. You can find more details on the other Java EE and GlassFish activities here.

    Read the article

  • The Arab HEUG is now a reality, and other random thoughts

    - by user9147039
I just returned from Doha, Qatar, where the first of its kind HEUG (Higher Education User Group) meeting for institutions in the Middle East and North Africa was held at Qatar University and jointly hosted by Dammam University from Saudi Arabia. Over 80 delegates attended, including representation from education institutions in Oman, Saudi Arabia, Lebanon, and Qatar. There are many other regional HEUG organizations in place (in Australia/New Zealand, APAC, and EMEA, as well as smaller regional HEUGs in the Netherlands, South Africa, and in regions of the US), but it was truly an accomplishment to see this Middle East/North Africa group organize and launch their chapter with a meeting of this quality. The group will be known as the Arab HEUG going forward, and I am excited about the prospects for sharing between the institutions and for the growth of Oracle solutions in the region. In particular, the hosts for the event (Qatar University) did a masterful job with logistics and organization, and the quality of the event was a testament to their capabilities. Among the more interesting and enlightening presentations I attended were one from Dammam University on the lessons learned from their implementation of Campus Solutions and transition off of Banner, and one from Qatar University on the use of E-Business Suite for grants management (both pre- and post-award). The most notable fact from this latter presentation was the fit (89%) of E-Business Suite Grants to the university's requirements. In a few weeks' time we will be convening the fifth meeting of the Oracle Education & Research Industry Strategy Council in Redwood Shores (the fifth since my advent into my current role). The main topics of discussion will be our Higher Education Applications Strategy for the future (including cloud approaches to ERP: HCM, Finance, and Student Information Systems) and some case studies on the benefits of leveraging delivered functionality and extensibility in the software (versus customization). On the second day of the event we will turn our attention to Oracle in Research, and also to budgeting and planning in higher education. Both of these sessions will include significant participation from council members in the form of panel discussions. Our EVPs for Systems (John Fowler) and for Global Cloud Services and North America application sales (Joanne Olson) will join us for the discussion. I recently read a couple of articles that surprised me. The first was from Inside Higher Ed on October 15, entitled "As colleges prepare for major software upgrades, Kuali tries to woo them from corporate vendors." It continues to disappoint that after all this time we are still debating whether it is better to build enterprise software through open or community source initiatives when fully functional, flexible, supported, and widely adopted options exist in the marketplace. A decade or more ago, when these solutions were relatively immature and there was a great deal of turnover in the market, I could appreciate initiatives like Kuali. But let's not kid ourselves: the real objective of this movement is to counter a perceived predatory commercial software industry. Again, when commercial solutions are deployed as written, without significant customization, and standard business processes are adopted, the cost of these solutions (relative to the value delivered) is quite low, and certainly much lower than the massive investment (and risk) in in-house developers to support a bespoke community source system.
In this era of cost pressures in education and the need to refocus resources on teaching, learning, and research, I believe it is bordering on irresponsible to continue to pursue open-source ERP. Many adopters' total costs are staggering, with little to show for the effort and expended resources. The second article appeared recently in the Chronicle of Higher Education and was entitled "'Big Data' Is Bunk, Obama Campaign's Tech Guru Tells University Leaders." This one was so outrageous I almost don't want to legitimize it by referencing it here. In the article the writer relays statements made by Harper Reed, President Obama's former CTO for his 2012 re-election campaign, that big data solutions in education have no relevance and are akin to snake oil. He goes on to state that while he's a fan of data-driven decision making in education, most of the necessary analysis can be accomplished in Excel spreadsheets. Yeah... right. This is exactly what ails education (higher education in particular): dozens of shadow and siloed systems running on spreadsheets, with limited-to-no enterprise-wide initiatives to harness the data-rich environment that is a higher ed institution and transform the data into usable information. I'll grant Mr. Reed that "Big Data" is overused and hackneyed, but imperatives like improving student success in higher education are classic big data problems that data mining and predictive analytics can address. Further, higher ed needs to produce far more data scientists and analysts than are currently in the pipeline, to advance this discipline and the application of these tools to many, many other problems across multiple industries.

    Read the article

  • FAQ: Highlight GridView Row on Click and Retain Selected Row on Postback

    - by Vincent Maverick Durano
A couple of months ago I wrote a simple demo about "Highlighting GridView Rows on MouseOver". I've noticed that many members in the forums (http://forums.asp.net) ask how to highlight a row in a GridView and retain the selected row across postbacks, so I've decided to write this post to demonstrate how to implement it, as a reference for others who might need it. In this demo I'm going to use a combination of plain JavaScript and jQuery to do the client-side manipulation. I presume that you already know how to bind the grid to data, so I will not include the code for populating the GridView here. For binding the GridView you can refer to this post: Binding GridView with Data the ADO.Net way, or this one: GridView Custom Paging with LINQ.

To get started, let's implement highlighting a GridView row on click and retaining the selected row on postback. For simplicity I set up the page like this:

<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    <h2>You have selected Row: (<asp:Label ID="Label1" runat="server" />)</h2>
    <asp:HiddenField ID="hfCurrentRowIndex" runat="server"></asp:HiddenField>
    <asp:HiddenField ID="hfParentContainer" runat="server"></asp:HiddenField>
    <asp:Button ID="Button1" runat="server" onclick="Button1_Click" Text="Trigger Postback" />
    <asp:GridView ID="grdCustomer" runat="server" AutoGenerateColumns="false" onrowdatabound="grdCustomer_RowDataBound">
        <Columns>
            <asp:BoundField DataField="Company" HeaderText="Company" />
            <asp:BoundField DataField="Name" HeaderText="Name" />
            <asp:BoundField DataField="Title" HeaderText="Title" />
            <asp:BoundField DataField="Address" HeaderText="Address" />
        </Columns>
    </asp:GridView>
</asp:Content>

Note: since the action is done client-side, when we do a postback (such as clicking a button) the page is re-created and the highlighted row is lost. This is normal: the server doesn't know anything about the client/browser unless you do something to notify it that something has changed. To persist the selection we will use HiddenField controls to store the data, so that on postback we can reference the values from there.
Here are the JavaScript functions:

<asp:Content ID="Content1" runat="server" ContentPlaceHolderID="HeadContent">
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" type="text/javascript"></script>
    <script type="text/javascript">
        var prevRowIndex;

        function ChangeRowColor(row, rowIndex) {
            var parent = document.getElementById(row);
            var currentRowIndex = parseInt(rowIndex) + 1;

            if (prevRowIndex == currentRowIndex) {
                return;
            }
            else if (prevRowIndex != null) {
                parent.rows[prevRowIndex].style.backgroundColor = "#FFFFFF";
            }

            parent.rows[currentRowIndex].style.backgroundColor = "#FFFFD6";

            prevRowIndex = currentRowIndex;

            $('#<%= Label1.ClientID %>').text(currentRowIndex);

            $('#<%= hfParentContainer.ClientID %>').val(row);
            $('#<%= hfCurrentRowIndex.ClientID %>').val(rowIndex);
        }

        $(function () {
            RetainSelectedRow();
        });

        function RetainSelectedRow() {
            var parent = $('#<%= hfParentContainer.ClientID %>').val();
            var currentIndex = $('#<%= hfCurrentRowIndex.ClientID %>').val();
            if (parent != null) {
                ChangeRowColor(parent, currentIndex);
            }
        }
    </script>
</asp:Content>

ChangeRowColor() is the function that sets the background color of the selected row. It is also where we store the parent container and row index values in the HiddenFields. The $(function () {...}); call is shorthand for the jQuery document.ready event, which fires every time the page loads in the browser, including after a postback; that's why we call RetainSelectedRow() from it. RetainSelectedRow() reads the previously selected values from the HiddenFields and passes them to ChangeRowColor() to re-apply the highlight.

Finally, here's the code-behind part:

protected void grdCustomer_RowDataBound(object sender, GridViewRowEventArgs e)
{
    if (e.Row.RowType == DataControlRowType.DataRow)
    {
        e.Row.Attributes.Add("onclick", string.Format("ChangeRowColor('{0}','{1}');", e.Row.ClientID, e.Row.RowIndex));
    }
}

The code above attaches a JavaScript onclick event to each row, calling the ChangeRowColor() function and passing e.Row.ClientID and e.Row.RowIndex to it. A screenshot of the sample output is available in the original article.

That's it! I hope someone finds this post useful!

Technorati Tags: jQuery, GridView, JavaScript, TipTricks
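The markup wires Button1 to a Button1_Click handler, but the post doesn't show its body. As a minimal, hypothetical sketch (not from the original post), the handler could read the persisted row index on the server after a postback:

// Hypothetical handler, not shown in the original post: reads the row
// index persisted in the hidden field after a postback.
protected void Button1_Click(object sender, EventArgs e)
{
    int rowIndex;
    if (int.TryParse(hfCurrentRowIndex.Value, out rowIndex))
    {
        // rowIndex is the zero-based index of the row selected before the postback;
        // display it as one-based, matching what the client script shows.
        Label1.Text = (rowIndex + 1).ToString();
    }
}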

    Read the article

< Previous Page | 363 364 365 366 367 368 369 370 371 372 373 374  | Next Page >