Search Results

Search found 22884 results on 916 pages for 'team build'.


  • BizTalk 2009 - Pipeline Component Wizard

    - by Stuart Brierley
    Recently I decided to try out the BizTalk Server Pipeline Component Wizard when creating a new pipeline component for BizTalk 2009. There are different versions of the wizard available, so be sure to download the appropriate version for the BizTalk environment that you are working with. Following the download and expansion of the zip file, you should be left with a Visual Studio solution. Open this solution and build the project. Following this, installation is straightforward - locate and run the built setup.exe file in the PipelineComponentWizard Setup project and click through the small number of installation screens. Once you have completed installation you will be ready to use the wizard in Visual Studio to create your BizTalk pipeline component.

    Start by creating a new project, selecting BizTalk Projects then BizTalk Server Pipeline Component. You will then be presented with the splash screen. The next step is General Setup, where you will detail the class name, namespace, pipeline and component types, and the implementation language for your pipeline component. The options for pipeline type are Receive, Send or Any. Depending on the pipeline type chosen, different options are presented for the component type, matching those available within the BizTalk pipelines themselves:

    Receive - Decoder, Disassembling Parser, Validate, Party Resolver, Any.
    Send - Encoder, Assembling Serializer, Any.
    Any - Any.

    The options for implementation language are C# or VB.Net.

    Next you must set up the UI settings - these are the settings that affect the appearance of the pipeline component within Visual Studio. You must detail the component name, version, description and icon. Next is the definition of the variables that the pipeline component will use. The values for these variables will be defined in Visual Studio when creating a pipeline. The options for each variable you require are:

    Designer Property - The name of the variable.
    Data Type - String, Boolean, Integer, Long, Short, Schema List, Schema With None.

    Clicking finish now will complete the wizard stage of the creation of your pipeline component. Once the wizard has completed you will be left with a BizTalk Server Pipeline Component project containing a skeleton code file for you to complete. Within this code file you will mainly be interested in the Execute method, which is left mostly empty, ready for you to implement your custom pipeline code:

        #region IComponent members
        /// <summary>
        /// Implements IComponent.Execute method.
        /// </summary>
        /// <param name="pc">Pipeline context</param>
        /// <param name="inmsg">Input message</param>
        /// <returns>Original input message</returns>
        /// <remarks>
        /// IComponent.Execute method is used to initiate
        /// the processing of the message in this pipeline component.
        /// </remarks>
        public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(Microsoft.BizTalk.Component.Interop.IPipelineContext pc, Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)
        {
            //
            // TODO: implement component logic
            //
            // this way, it's a passthrough pipeline component
            return inmsg;
        }
        #endregion

    Once you have implemented your custom code, build and compile your custom pipeline component, then add the compiled .dll to C:\Program Files\Microsoft BizTalk Server 2009\Pipeline Components. When creating a new pipeline, reset the toolbox in Visual Studio and the custom pipeline component should appear, ready for you to use in your BizTalk pipeline. Drop the pipeline component into the relevant pipeline stage and configure the component properties (the variables defined in the wizard). You can now deploy and use the pipeline as you would any other custom pipeline.
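    As a rough illustration of what a completed Execute method might look like (a sketch of my own, not from the wizard output; the context property name and namespace are made up for the example), here is a component that stamps a value onto the message context before passing the message on:

        public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(
            Microsoft.BizTalk.Component.Interop.IPipelineContext pc,
            Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)
        {
            // Stamp the message context with a processing timestamp.
            // IBaseMessageContext.Write takes a property name, a namespace
            // and a value; the name and namespace below are hypothetical.
            inmsg.Context.Write(
                "ProcessedOn",
                "http://example.com/schemas/system-properties",
                System.DateTime.UtcNow.ToString("o"));

            // Pass the (otherwise unchanged) message to the next stage.
            return inmsg;
        }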

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Key-Value Pair Databases and Document Databases – Day 13 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the Relational Database and NoSQL database in the Big Data story. In this article we will understand the role of Key-Value Pair databases and Document databases in supporting the Big Data story. Now we will see a few examples of the operational databases.

    Relational Databases (Yesterday's post)
    NoSQL Databases (Yesterday's post)
    Key-Value Pair Databases (This post)
    Document Databases (This post)
    Columnar Databases (Tomorrow's post)
    Graph Databases (Tomorrow's post)
    Spatial Databases (Tomorrow's post)

    Key-Value Pair Databases

    Key-Value Pair databases are also known as KVP databases. A key is a field name, an attribute, an identifier. The content of that field is its value, the data that is being identified and stored. KVP databases are a very simple implementation of NoSQL database concepts. They do not have a schema, hence they are very flexible as well as scalable. The disadvantage of Key-Value Pair (KVP) databases is that they do not follow the ACID (Atomicity, Consistency, Isolation, Durability) properties. Additionally, they require data architects to plan for data placement, replication and high availability. In KVP databases the data is stored as strings. Here is a simple example of what a key-value database looks like:

    Key       Value
    Name      Pinal Dave
    Color     Blue
    Twitter   @pinaldave
    Name      Nupur Dave
    Movie     The Hero

    As the number of users grows, a Key-Value Pair database starts getting difficult to manage. As there is no specific schema or set of rules associated with the database, there is also a chance that the database grows exponentially. It is very crucial to select the right Key-Value Pair database, one which offers an additional set of tools to manage the data and provides finer control over its various business aspects.

    Riak

    Riak is one of the most popular Key-Value databases. It is known for its scalability and performance with high-volume, high-velocity data. Additionally, it implements a mechanism for collecting keys and values which further helps to build a manageable system. We will discuss Riak further in future blog posts. Key-Value databases are a good choice for social media, communities, and caching layers for connecting other databases. In simpler words, whenever we require flexibility of data storage while keeping scalability in mind, KVP databases are good options to consider.

    Document Databases

    There are two different kinds of document databases: 1) full document content (web pages, Word docs, etc.) and 2) storing document components for storage. It is the second type of document database that we are talking about here. They use JavaScript Object Notation (JSON) and Binary JSON (BSON) for the structure of the documents. JSON is a very easy format to understand, and it is very easy for applications to write. There are two major JSON structures used in document databases: 1) name-value pairs and 2) ordered lists. MongoDB and CouchDB are two of the most popular open source non-relational document databases.

    MongoDB

    MongoDB databases are called collections. Each collection is built of documents, and each document is composed of fields. MongoDB collections can be indexed for optimal performance. The MongoDB ecosystem is highly available and supports query services as well as MapReduce. It is often used in high-volume content management systems.

    CouchDB

    CouchDB databases are composed of documents which consist of fields and attachments (known as descriptions). It supports ACID properties. The main attraction of CouchDB is that it will continue to operate even when network connectivity is sketchy. Due to this nature, CouchDB prefers local data storage. A document database is a good choice when users have to generate dynamic reports from elements which change very frequently. A good example of document database usage is real-time analytics in social networking or content management systems.

    Tomorrow

    In tomorrow's blog post we will discuss various other operational databases supporting Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
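    To make the key-based get/put pattern concrete, here is a minimal sketch (my own illustration, not from the original article) using an in-memory C# dictionary as a stand-in for a KVP store; real stores such as Riak expose essentially the same key-based interface over the network:

        using System;
        using System.Collections.Generic;

        class KvpSketch
        {
            static void Main()
            {
                // A stand-in for a key-value store: string keys mapped to
                // string values, with no schema enforced by the store.
                var store = new Dictionary<string, string>();

                // "Put" - composite keys (user:1:name) are a common convention
                // for avoiding collisions in a flat key space.
                store["user:1:name"] = "Pinal Dave";
                store["user:1:twitter"] = "@pinaldave";
                store["user:2:name"] = "Nupur Dave";

                // "Get" - lookups happen only by key; there is no query language.
                string twitter;
                if (store.TryGetValue("user:1:twitter", out twitter))
                {
                    Console.WriteLine(twitter); // prints @pinaldave
                }
            }
        }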

    Read the article

  • Exposed: Fake Social Marketing

    - by Mike Stiles
    Brands and marketers who want to build their social popularity on a foundation of lies are starting to face more of an uphill climb. Fake social is starting to get exposed, and there are a lot of emperors getting caught without any clothes.

    Facebook is getting ready to do a purge of "Likes" on Pages that were a result of bots, fake accounts, and even real users who were duped or accidentally Liked a Page. Most of those accidental Likes occur on mobile, where it's easy for large fingers to hit the wrong space. Depending on the degree to which your Page has been the subject of such activity, you may see your number of Likes go down. But don't sweat it, that's a good thing. The social world has turned the corner and assessed the value of a Like. And the verdict is that a Like is valuable as an opportunity to build a real relationship with a real customer. Its value pales immensely compared to a user who's actually engaged with the brand. Those fake Likes aren't doing you any good. Huge numbers may once have impressed, but they're not fooling anybody anymore. Facebook's selling point to marketers is the ability to use a brand's fans to reach friends of those fans. Consequently, there has to be validity and legitimacy to a fan count.

    Speaking of mobile, Trademob recently reported 40% of clicks are essentially worthless, because 22% of them are accidental (again with the fat fingers), while 18% are trickery. Publishers will put huge banner ads next to tiny app buttons to increase the odds of an accident. Others even hide a banner behind another to score 2 clicks instead of 1. Pontiflex and Harris Interactive last year found 47% of users were more likely to click a mobile ad accidentally than deliberately. Beyond that, hijacked devices are out there manipulating click data. But to what end for a marketer? What's the value of a click on something a user never even saw? What's the value of a seen but accidentally clicked ad if there's no resulting transaction?

    Back to fake Likes, followers and views; they're definitely for sale on numerous sites, none of which I'll promote. $5 can get you 1,000 Twitter followers. You can even get followers targeted by interests. One site was set up by an unemployed accountant out of his house in England. He gets them from a wholesaler in Brooklyn, who gets them from a 19-year-old supplier in India. The unemployed accountant is making $10,000 a day. That means a lot of brands, celebrities and organizations are playing the fake social game, apparently not coming to grips with the slim value of the numbers they're buying. But now, in addition to having paid good money for non-ROI numbers, there's the embarrassment factor. At least a couple of sites have popped up allowing anyone to see just how many fake and inactive followers you have. Britain's Fake Follower Check and StatusPeople are the two getting the most attention. Enter any Twitter handle and the results are there for all to see.

    Fake isn't good, period. "Inactive" could be real followers, but if they're real, they're just watching, not engaging. If someone runs a check on your Twitter handle and turns up fake followers, does that mean you're suspect or have purchased followers? No. Anyone can follow anyone, so most accounts will have some fakes. Even account results like Barack Obama's (70% fake according to StatusPeople) and Lady Gaga's (71% fake) don't mean these people knew about all those fakes or initiated them.

    Regardless, brands should realize they're now being watched, and users are judging the legitimacy of their social channels. Use one of any number of tools available to assess and clean out fake Likes and followers so that your numbers are as genuine as possible. And obviously, skip the "buying popularity" route of social marketing strategy. It doesn't work and it gets you busted... a losing combination.

    Read the article

  • SAB BizTalk Archiving Pipeline Component - Codeplex

    - by Stuart Brierley
    In an effort to give a little more to the BizTalk development community, I have created my first Codeplex project. The SAB BizTalk Archiving Pipeline Component was written using Visual Studio 2010, with BizTalk Server 2010 intended as the target platform. It is currently at version 0.1, meaning that I have not yet completed all the intended functionality and have so far carried out a limited number of tests. It does, however, archive files within the bounds of the functionality so far implemented and seems to be stable in use. It is based on a recent evolution of a basic archiving component that I wrote in the past, and it is my hope that it will continue to evolve in the coming months. This work was inspired by some old posts by Gilles Zunino and Jeff Lynch.

    You can download the documentation, source code or component dll from Codeplex, but to give you a taste here is the first section of the documentation to whet your appetite:

    SAB BizTalk Archiving Pipeline Component

    The SAB BizTalk Archiving Pipeline Component has been developed to allow custom pipelines to be created that can archive messages at any stage of pipeline processing. It works in both receive and send pipelines and will archive messages to file based on the configuration applied to the component in the BizTalk Administration Console.

    The Archiving Pipeline Component has been coded for use with BizTalk Server 2010. Use with other versions of BizTalk has not been tested.

    The Archiving Pipeline Component is supplied as a dll file that should be placed in the BizTalk Server Pipeline Components folder. It can then be used when developing custom pipelines to be deployed as a part of your BizTalk Server applications.

    This version of the component allows you to use a number of generic messaging macros and also a small number that are specific to the FILE adapter. It is intended to extend these macros to cover context properties from other adapters in future releases.

    Archive Pipeline Parameters

    As with all pipeline components, the following parameters can be set when creating your custom pipeline and at runtime via the administration console.

    Enabled: Enables and disables the archive process. True; messages will be archived. False; messages will be passed to the next stage in the pipeline without performing any processing.

    File Name: The file name of the archived message. Allows the component to build the archive filename at run-time, based on the values entered, the permitted macros and data extracted from the message context properties. e.g. %FileReceivedFileName%-%InterchangeSequenceNumber%

    File Mask: The extension to be added to the File Name following all macro processing. e.g. .xml

    File Path: The path on which the archived message should be saved. Allows the component to build the archive directory at run-time, based on the values entered, the permitted macros and data extracted from the message context properties. e.g. C:\Archive\%ReceivePortName%\%Year%\%Month%\%Day%\ or \\ArchiveShare\%ReceivePortName%\%Date%\

    Overwrite: Enables and disables existing file overwrites. True; any existing file with the same File Path/Name combination (following macro replacement) will be overwritten. False; any existing file with the same File Path/Name combination (following macro replacement) will not be overwritten. The current message will be archived with a GUID appended to the File Name.
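    To illustrate the general idea behind this kind of macro substitution (a sketch of my own, not the component's actual implementation), a pattern such as the File Path above can be expanded by replacing each %Macro% token with a value looked up from the message context; here the context is stood in for by a plain dictionary:

        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        static class ArchiveMacroSketch
        {
            // Expand %Macro% tokens in a pattern from a lookup table. In the
            // real component the values would come from the BizTalk message
            // context (e.g. FILE adapter properties); here they are supplied
            // as a dictionary purely for illustration.
            public static string Expand(string pattern, IDictionary<string, string> values)
            {
                return Regex.Replace(pattern, "%(\\w+)%", match =>
                {
                    string name = match.Groups[1].Value;
                    string value;
                    return values.TryGetValue(name, out value)
                        ? value
                        : match.Value; // leave unknown macros untouched
                });
            }
        }

        // Example:
        //   var values = new Dictionary<string, string>
        //   {
        //       { "ReceivePortName", "OrdersIn" },
        //       { "Year", "2012" }, { "Month", "06" }, { "Day", "27" }
        //   };
        //   ArchiveMacroSketch.Expand(@"C:\Archive\%ReceivePortName%\%Year%\%Month%\%Day%\", values);
        //   // -> C:\Archive\OrdersIn\2012\06\27\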

    Read the article

  • SPARC T4-2 Produces World Record Oracle Essbase Aggregate Storage Benchmark Result

    - by Brian
    Significance of Results

    Oracle's SPARC T4-2 server configured with a Sun Storage F5100 Flash Array and running Oracle Solaris 10 with Oracle Database 11g has achieved exceptional performance for the Oracle Essbase Aggregate Storage Option benchmark. The benchmark has upwards of 1 billion records, 15 dimensions and millions of members. Oracle Essbase is a multi-dimensional online analytical processing (OLAP) server and is well-suited to SPARC T4 servers.

    The SPARC T4-2 server (2 CPUs) running Oracle Essbase 11.1.2.2.100 outperformed the previously published results on Oracle's SPARC Enterprise M5000 server (4 CPUs) with Oracle Essbase 11.1.1.3 on Oracle Solaris 10 by 80%, 32% and 2x on Data Loading, Default Aggregation and Usage Based Aggregation, respectively.

    The SPARC T4-2 server with Sun Storage F5100 Flash Array and Oracle Essbase running on Oracle Solaris 10 achieves sub-second query response times for 20,000 users in a 15 dimension database. The SPARC T4-2 server configured with Oracle Essbase was able to aggregate and store values in the database for a 15 dimension cube in 398 minutes with 16 threads and in 484 minutes with 8 threads. The Sun Storage F5100 Flash Array provides more than a 20% improvement out-of-the-box compared to a mid-size fiber channel disk array for default aggregation and user-based aggregation. The Sun Storage F5100 Flash Array with Oracle Essbase provides the best combination for large Oracle Essbase databases, leveraging Oracle Solaris ZFS and taking advantage of high bandwidth for faster load and aggregation.

    Oracle Fusion Middleware provides a family of complete, integrated, hot-pluggable and best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. Oracle Essbase's performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.

    Performance Landscape

    System                                 Data Size (millions of items)  Database Load (min)  Default Aggregation (min)  Usage Based Aggregation (min)
    SPARC T4-2, 2 x SPARC T4 2.85 GHz      1000                           149                  398*                       55
    Sun M5000, 4 x SPARC64 VII 2.53 GHz    1000                           269                  526                        115
    Sun M5000, 4 x SPARC64 VII 2.4 GHz     400                            120                  448                        18

    * - 398 mins with CALCPARALLEL set to 16; 484 mins with CALCPARALLEL threads set to 8

    Configuration Summary

    Hardware Configuration:
    1 x SPARC T4-2
    2 x 2.85 GHz SPARC T4 processors
    128 GB memory
    2 x 300 GB 10000 RPM SAS internal disks

    Storage Configuration:
    1 x Sun Storage F5100 Flash Array
    40 x 24 GB flash modules
    SAS HBA with 2 SAS channels
    Data Storage Scheme: Striped - RAID 0
    Oracle Solaris ZFS

    Software Configuration:
    Oracle Solaris 10 8/11
    Installer V 11.1.2.2.100
    Oracle Essbase Client v 11.1.2.2.100
    Oracle Essbase v 11.1.2.2.100
    Oracle Essbase Administration Services 64-bit
    Oracle Database 11g Release 2 (11.2.0.3)
    HP's Mercury Interactive QuickTest Professional 9.5.0

    Benchmark Description

    The objective of the Oracle Essbase Aggregate Storage Option benchmark is to showcase the ability of Oracle Essbase to scale in terms of user population and data volume for large enterprise deployments. Typical administrative and end-user operations for OLAP applications were simulated to produce benchmark results. The benchmark test results include:

    Database Load: Time elapsed to build a database including outline and data load.
    Default Aggregation: Time elapsed to build aggregation.
    User Based Aggregation: Time elapsed to build the aggregate views proposed as a result of tracked retrieval queries.

    Summary of the data used for this benchmark:
    40 flat files, each of size 1.2 GB, 49.4 GB in total
    10 million rows per file, 1 billion rows total
    28 columns of data per row
    Database outline has 15 dimensions (five of them are attribute dimensions)
    Customer dimension has 13.3 million members
    3 rule files

    Key Points and Best Practices

    The Sun Storage F5100 Flash Array has been used to accelerate the application performance. Setting data load threads (DLTHREADSPREPARE) to 64 and Load Buffer to 6 improved data loading by about 9%. Factors influencing aggregation materialization performance are "Aggregate Storage Cache" and "Number of Threads" (CALCPARALLEL) for parallel view materialization. The optimal values for this workload on the SPARC T4-2 server were:

    Aggregate Storage Cache: 32 GB
    CALCPARALLEL: 16

    See Also

    Oracle Essbase Aggregate Storage Option Benchmark on Oracle's SPARC T4-2 Server - oracle.com
    Oracle Essbase - oracle.com, OTN
    SPARC T4-2 Server - oracle.com, OTN
    Oracle Solaris - oracle.com, OTN
    Oracle Database 11g Release 2 Enterprise Edition - oracle.com, OTN

    Disclosure Statement

    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 28 August 2012.

    Read the article

  • Ubuntu 12.04 lightdm dumps to tty. Cannot start GUI interface when booting off hard drive, but can when booting off USB

    - by user72681
    When booting, lightdm dumps to tty. No GUI interface works - this is after a fresh install of Ubuntu 12.04, where the GUI interface works when running off the USB. I have an NVIDIA Corporation G98 [Quadro NVS 420] graphics card. After I call startx from the terminal it still doesn't work. I get the following in the Xorg.0.log:

    [ 327.718] (--) NVIDIA(0): Memory: 262144 kBytes
    [ 327.718] (--) NVIDIA(0): VideoBIOS: 62.98.6f.00.07
    [ 327.718] (II) NVIDIA(0): Detected PCI Express Link width: 16X
    [ 327.718] (--) NVIDIA(0): Interlaced video modes are supported on this GPU
    [ 327.756] (--) NVIDIA(0): Connected display device(s) on Quadro NVS 420 at PCI:3:0:0
    [ 327.756] (--) NVIDIA(0): none
    [ 327.756] (EE) NVIDIA(0): No display devices found for this X screen.
    [ 328.010] (II) UnloadModule: "nvidia"
    [ 328.010] (II) Unloading nvidia
    [ 328.010] (II) UnloadModule: "wfb"
    [ 328.010] (II) Unloading wfb
    [ 328.010] (II) UnloadModule: "fb"
    [ 328.010] (II) Unloading fb
    [ 328.011] (EE) Screen(s) found, but none have a usable configuration.
    [ 328.011] Fatal server error:
    [ 328.011] no screens found

    /var/log/lightdm/lightdm.log

    [+0.00s] DEBUG: Starting local X display
    [+0.00s] DEBUG: X server :0 will replace Plymouth
    [+0.02s] DEBUG: Using VT 7
    [+0.02s] DEBUG: Activating VT 7
    [+0.02s] DEBUG: Logging to /var/log/lightdm/x-0.log
    [+0.02s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0
    [+0.02s] DEBUG: Launching X Server
    [+0.02s] DEBUG: Launching process 1074: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none
    [+0.02s] DEBUG: Waiting for ready signal from X server :0
    [+0.02s] DEBUG: Acquired bus name org.freedesktop.DisplayManager
    [+0.02s] DEBUG: Registering seat with bus path /org/freedesktop/DisplayManager/Seat0
    [+1.38s] DEBUG: Process 1074 exited with return value 1
    [+1.38s] DEBUG: X server stopped
    [+1.38s] DEBUG: Removing X server authority /var/run/lightdm/root/:0
    [+1.38s] DEBUG: Releasing VT 7
    [+1.38s] DEBUG: Stopping Plymouth, X server failed to start
    [+1.39s] DEBUG: Display server stopped
    [+1.39s] DEBUG: Stopping display
    [+1.39s] DEBUG: Display stopped
    [+1.39s] DEBUG: Stopping X local seat, failed to start a display
    [+1.39s] DEBUG: Stopping seat
    [+1.39s] DEBUG: Seat stopped
    [+1.39s] DEBUG: Required seat has stopped
    [+1.39s] DEBUG: Stopping display manager
    [+1.39s] DEBUG: Display manager stopped
    [+1.39s] DEBUG: Stopping daemon
    [+1.39s] DEBUG: Exiting with return value 1

    /var/log/lightdm/x-0.log

    X Protocol Version 11, Revision 0
    Build Operating System: Linux 2.6.24-31-server x86_64 Ubuntu
    Current Operating System: Linux oorn 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64
    Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-23-generic root=UUID=b25ab072-077d-40f1-95a4-c7fd66acd2f0 ro reboot=pci quiet splash vt.handoff=7
    Build Date: 07 May 2012 11:43:21PM
    xorg-server 2:1.11.4-0ubuntu10.2 (For technical support please see http://www.ubuntu.com/support)
    Current version of pixman: 0.24.4
    Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
    Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
    (==) Log file: "/var/log/Xorg.0.log", Time: Wed Jun 27 12:51:45 2012
    (==) Using config file: "/etc/X11/xorg.conf"
    (==) Using system config directory "/usr/share/X11/xorg.conf.d"
    (EE) NVIDIA(0): No display devices found for this X screen.
    (EE) Screen(s) found, but none have a usable configuration.
    Fatal server error: no screens found
    Please consult the The X.Org Foundation support at http://wiki.x.org for help.
    Please also check the log file at "/var/log/Xorg.0.log" for additional information.
    ddxSigGiveUp: Closing log
    Server terminated with error (1). Closing log file.

    Read the article

  • Oredev 2011 Trip Report

    - by arungupta
    Oredev held its seventh annual conference in the city of Malmo, Sweden last week. The name "Oredev" is a nod to the Oresund bridge that connects Malmo with Copenhagen. There were about 1000 attendees, with several speakers from all over the world. The first two days were hands-on workshops and the next three days were sessions. There were different tracks such as Java, Windows 8, .NET, Smart Phones, Architecture, Collaboration, and Entrepreneurship. And then there was Xtra(ck), which had interesting sessions not directly related to technology.

    I gave two slide-free talks in the Java track. The first one showed how to build an end-to-end Java EE 6 application using NetBeans and GlassFish. The complete instructions to build the application are explained in detail here. This 3-tier application used Java Persistence API, Enterprise JavaBeans, Servlet, Contexts and Dependency Injection, JavaServer Faces, and Java API for RESTful Web Services. The source code built during the application can be downloaded here (LINK TBD). The second session, slide-free again, showed how to take a Java EE 6 application into production using a GlassFish cluster. It explained how to:

    Create a 2-instance GlassFish cluster
    Front-end with a Web server and a load balancer
    Demonstrate session replication and fail over
    Monitor the application using JavaScript

    The complete instructions for this session are available here.

    Oredev has an interesting way of collecting attendee feedback. The attendees drop a green, yellow, or red card in a bucket as they walk out of the session. Not everybody votes, but most do. Other than the instantaneous feedback provided on Twitter, this mechanism provides a more coarse-grained feedback loop as well. The first talk had about 67 attendees (with 23 green and 7 yellow) and the second one had 22 (11 green and 11 yellow).

    The speakers' dinner is a highlight of the conference. It is arranged in the historic city hall, and the mayor welcomed all the speakers. As you can see in the pictures, it is a very royal building with lots of history behind it. Fortunately the dinner was a buffet with a much better variety, unlike last year where only black soup and geese were served, which was quite cultural BTW ;-) The sauna at 85F, skinny dipping in the 35F ocean, and alternating between them at Kallbadhus is always very Swedish. Also spent a short evening at a friend's house socializing with other speakers/attendees, drinking Glogg, and eating Pepparkakor. The welcome packet at the hotel also included cinnamon rolls, recommended with cold milk, for a little more taste of Swedish culture.

    Something different at this conference was how artists from Image Think were visually capturing all the keynote speakers using images on whiteboards. Here are the images captured for Alexis Ohanian (Reddit co-founder and now running Hipmunk):

    Unfortunately I could not spend much time engaging with other speakers or attendees because I was busy preparing new hands-on lab material. But I was able to spend some time with Matthew McCullough, Michael Tiberg, Magnus Martensson, Mattias Karlsson, Corey Haines, Patrick Kua, Charles Nutter, Tushara, Pradeep, Shmuel, and several other folks. Here are a few pictures captured from the event: And the complete album here:

    Thank you Matthias, Emily, and Kathy for putting up a great show and giving me an opportunity to speak at Oredev. I hope to be back next year with a more vibrant representation of Java - the language and the ecosystem!

    Read the article

  • Silverlight Cream for November 16, 2011 -- #1167

    - by Dave Campbell
    In this Issue: Michael Crump, Andrea Boschin, Michael Sync, WindowsPhoneGeek(-2-), Erno de Weerd, Jesse Liberty, Derik Whittaker, Antoni Dol, Walter Ferrari, and Jeff Blankenburg(-2-).

    Above the Fold:
    Silverlight: "10 Laps around Silverlight 5 (Part 6 of 10)" - Michael Crump
    WP7: "31 Days of Mango | Day #2: Device Status" - Jeff Blankenburg
    Metro/WinRT/W8: "Lighting up your C# Metro apps by being a Share Target" - Derik Whittaker

    Shoutouts:
    Michael Palermo's latest Desert Mountain Developers is up
    Michael Washington's latest Visual Studio #LightSwitch Daily is up
    SilverlightShow has announced a webinar you probably don't want to miss: Webinar – Introduction to XAML Development on Windows 8
    Check out the top 5 from last week at SilverlightShow: SilverlightShow for November 07 - 13, 2011

    From SilverlightCream.com:

    10 Laps around Silverlight 5 (Part 6 of 10)
    Michael Crump covers a lot of territory in this Part 6 of his Silverlight 5 Beta series at SilverlightShow: P/Invoke, Multiple Windows, and Full Trust

    Windows Phone 7.5 - Manipulating camera stream
    Andrea Boschin has Part 4 of his Mango series up at SilverlightShow. He's discussing accessing the raw stream from the camera and saving it to a file.

    Blend 4 + VS 2011 (Preview) = Problem?
    Michael Sync reports a problem with Blend 4 and the VS2011 preview... followed up by a set of scripts that were posted on Connect to make the problem go away (at least for Michael).

    Windows Phone Toolkit MultiselectList in depth | Part1: key concepts and API
    WindowsPhoneGeek begins a series on the MultiselectList in the Phone Toolkit... if you've seen his tutorials, you know they're great... this one is no exception... lots of code, info and notes getting you on board with the features.

    Getting Started with Windows Phone Alarms
    WindowsPhoneGeek next takes a sidestep from his new series and has this post on Alarms in WP7 apps... one of the types of scheduled actions in WP7.1... good write-up, pictures and code.

    Using AppHarbor, Bitbucket and Mercurial with ASP.NET and Silverlight – Part 3 Membership and Role Provider in SQL Server
    Erno de Weerd's part 3 of his series is up... adding Role and Membership to his application... check it out in this 17-step tutorial.

    Yet Another Podcast #51 – Shawn Wildermuth: //build, Xaml Programming & Beyond
    Jesse Liberty has another of his Yet Another Podcasts up and he's talking with Jon Galloway and Shawn Wildermuth... hear what *that* trio has to say about post //BUILD, and all things XAML.

    Lighting up your C# Metro apps by being a Share Target
    Derik Whittaker continues to work with Metro... evidenced by this post on wiring your app up to be a Share Target... allowing your app to consume data from other apps.

    Photoshop in METRO style 2: Filters
    Antoni Dol follows up his Photoshop in Metro post with this one on filters... he's got some great screenshots... was hoping to see a link to the code... maybe I missed it!

    Silverlight and Sharepoint working together: a Silverlight menu for Sharepoint - Part 1
    Walter Ferrari has part 1 of a series up at SilverlightShow talking about Sharepoint and Silverlight, and using Silverlight Navigation in place of what Sharepoint offers up.

    31 Days of Mango | Day #2: Device Status
    Jeff Blankenburg is motoring along on his 31 Days of Mango. This is his Day 2 post and all about DeviceStatus, or just about everything you would like to know about your user's phone.

    31 Days of Mango | Day #3: Alarms and Reminders
    Day 3 of Jeff Blankenburg's series is about Alarms and Reminders... a way to alert your user that something needs to be done... you can create, edit, and delete them as needed.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Copying A Slide From One Presentation To Another

    - by Tim Murphy
    There are many ways to generate a PowerPoint presentation using Open XML. The first way is to build it by hand strictly using the SDK. Alternately you can modify a copy of a base presentation in place. The third approach is to build a new presentation from the parts of an existing presentation by copying slides as needed. This post will focus on the third option.

    In order to make this solution a little more elegant I am going to create a VSTO add-in, as I did in my previous post. This one is going to insert Tags to identify slides instead of the NonVisualDrawingProperties which I used to identify charts, tables and images. The code itself is fairly short.

        SlideNameForm dialog = new SlideNameForm();
        Selection selection = Globals.ThisAddIn.Application.ActiveWindow.Selection;

        if(dialog.ShowDialog() == DialogResult.OK)
        {
            selection.SlideRange.Tags.Add(dialog.slideName, dialog.slideName);
        }

    Zeyad Rajabi has a good post here on combining slides from two presentations. The example he gives is great if you are doing a straight merge. But what if you want to use your source file almost as a supermarket, where you pick and choose slides and may even insert them repeatedly? The following code uses the tags we created in the previous step to pick a particular slide and copy it to a destination file.

        using (PresentationDocument newDocument = PresentationDocument.Open(OutputFileText.Text, true))
        {
            PresentationDocument templateDocument = PresentationDocument.Open(FileNameText.Text, false);

            uniqueId = GetMaxIdFromChild(newDocument.PresentationPart.Presentation.SlideMasterIdList);
            uint maxId = GetMaxIdFromChild(newDocument.PresentationPart.Presentation.SlideIdList);

            SlidePart oldPart = GetSlidePartByTagName(templateDocument, SlideToCopyText.Text);

            SlidePart newPart = newDocument.PresentationPart.AddPart<SlidePart>(oldPart, "sourceId1");

            SlideMasterPart newMasterPart = newDocument.PresentationPart.AddPart(newPart.SlideLayoutPart.SlideMasterPart);

            SlideIdList idList = newDocument.PresentationPart.Presentation.SlideIdList;

            // create new slide ID
            maxId++;
            SlideId newId = new SlideId();
            newId.Id = maxId;
            newId.RelationshipId = "sourceId1";
            idList.Append(newId);

            // Create new master slide ID
            uniqueId++;
            SlideMasterId newMasterId = new SlideMasterId();
            newMasterId.Id = uniqueId;
            newMasterId.RelationshipId = newDocument.PresentationPart.GetIdOfPart(newMasterPart);
            newDocument.PresentationPart.Presentation.SlideMasterIdList.Append(newMasterId);

            // change slide layout ID
            FixSlideLayoutIds(newDocument.PresentationPart);

            //newPart.Slide.Save();
            newDocument.PresentationPart.Presentation.Save();
        }

    The GetMaxIdFromChild and FixSlideLayoutIds methods are borrowed from Zeyad's article. The GetSlidePartByTagName method is listed below. It is really one LINQ query that finds SlideParts with child Tags that have the requested Name.

        private SlidePart GetSlidePartByTagName(PresentationDocument templateDocument, string tagName)
        {
            return (from p in templateDocument.PresentationPart.SlideParts
                    where p.UserDefinedTagsParts.First().TagList.Descendants<DocumentFormat.OpenXml.Presentation.Tag>().First().Name == tagName.ToUpper()
                    select p).First();
        }

    This is what really makes the difference from what Zeyad posted. The most powerful thing you can have when generating documents from templates is a consistent way of naming the items to be manipulated. I will show more approaches like this in upcoming posts.

    del.icio.us Tags: Office Open XML,Presentation,PowerPoint,VSTO,TagList

    Read the article

  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's see a picture to see what I have:

    As you can see above, the image has lighting - but the shadow map is displaying incorrectly. The shadow map is shown in the bottom left hand side of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are generating backwards in the wrong direction - I think. But the problem is a little deeper - I'm just plotting the shadow onto the screen, which I know is wrong - I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct - so there has to be something wrong with my shader code somewhere. Here's what my code looks like:

    VERTEX:

        uniform mat4 LightModelViewProjectionMatrix;

        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main()
        {
            Normal = normalize(gl_NormalMatrix * gl_Normal);
            LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz);
            LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex;
            LightCoordinate.xy = ( LightCoordinate.xy * 0.5 ) + 0.5;
            gl_Position = ftransform();
            gl_TexCoord[0] = gl_MultiTexCoord0;
        }

    FRAGMENT:

        uniform sampler2D DiffuseMap;
        uniform sampler2D ShadowMap;

        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main()
        {
            vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0]));

            // Directional lighting

            // Build ambient lighting
            vec4 AmbientElement = gl_LightSource[0].ambient;

            // Build diffuse lighting
            float Lambert = max(dot(Normal, LightDirection), 0.0); //max(abs(dot(Normal, LightDirection)), 0.0);
            vec4 DiffuseElement = ( gl_LightSource[0].diffuse * Lambert );

            vec4 LightingColor = ( DiffuseElement + AmbientElement );
            LightingColor.r = min(LightingColor.r, 1.0);
            LightingColor.g = min(LightingColor.g, 1.0);
            LightingColor.b = min(LightingColor.b, 1.0);
            LightingColor.a = min(LightingColor.a, 1.0);
            LightingColor *= Texel;

            // Everything up to this point is PERFECT

            // Shadow mapping
            // ------------------------------
            vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w;
            float DistanceFromLight = texture2D( ShadowMap, ShadowCoordinate.st ).z;
            float DepthBias = 0.001;
            float ShadowFactor = 1.0;
            if( LightCoordinate.w > 0.0 )
            {
                ShadowFactor = DistanceFromLight < ( ShadowCoordinate.z + DepthBias ) ? 0.5 : 1.0;
            }
            LightingColor.rgb *= ShadowFactor;

            //gl_FragColor = LightingColor;
            // Yes, I know this is wrong, but the line above (gl_FragColor = LightingColor;) produces the wrong effect
            gl_FragColor = LightingColor * texture2D( ShadowMap, ShadowCoordinate.st );
        }

    I wanted to make sure the coordinates were correct for the shadow map - so that's why you see it applied to the image as it is below. But the depth for each point seems to be wrong - the shadows SHOULD be opposite (look at how the image is - the shaded areas from normal lighting are facing the opposite direction of the shadows). Maybe my matrices are bad or something going in? They're isolated and appear to be correct - nothing else going in is unusual. When I view from the light's view and get the MVP matrices for it, they're correct.

    EDIT: Added an image so you can see what happens when I do the correct command at the end of the GLSL: That's the image when the last line is just gl_FragColor = LightingColor; Maybe someone has some idea of what I screwed up?

    Read the article

  • Big Data – How to become a Data Scientist and Learn Data Science? – Day 19 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of analytics in the Big Data story. In this article we will understand how to become a Data Scientist for the Big Data story.

    Data Scientist is a new buzzword; everyone seems to want to become a Data Scientist. Let us go over a few key topics related to Data Scientists in this blog post. First of all we will understand what a Data Scientist is. In the new world of Big Data, I see pretty much everyone wants to become a Data Scientist, and there are lots of people I have already met who claim that they are Data Scientists. When I ask what their role is, I get a wide variety of answers.

    What is a Data Scientist?

    Data Scientists are the experts who understand various aspects of the business and know how to strategize with data to achieve business goals. They should have a solid foundation in various data algorithms, modeling and statistical methodologies.

    What do Data Scientists do?

    Data Scientists understand the data very well. They go beyond the regular data algorithms and build interesting trends from available data. They innovate and resurrect entirely new meaning from the existing data. They are artists in the disguise of computer analysts. They look at the data traditionally as well as explore various new ways to look at the data.

    Data Scientists do not wait to build their solutions from existing data. They think creatively; they think before the data has entered the system. Data Scientists are visionary experts who understand business needs and plan ahead of time; this tremendously helps to build solutions at rapid speed. Besides being data experts, the major quality of Data Scientists is "curiosity". They always wonder what more they can get from their existing data and how to get the maximum out of future incoming data. Data Scientists do wonders with data, going beyond the job descriptions of Data Analysts or Business Analysts.

    Skills Required for Data Scientists

    Here are a few of the skills a Data Scientist must have:

    Expert-level skills with statistical tools like SAS, Excel, R, etc.
    Understanding of mathematical models
    Hands-on experience with visualization tools like Tableau, PowerPivot, D3.js, etc.
    Analytical skills to understand business needs
    Communication skills

    On the technology front, any Data Scientist should know the underlying technologies (Hadoop, Cloudera) as well as their entire ecosystem (programming languages, analysis and visualization tools, etc.). Remember that becoming a successful Data Scientist requires excellent skills; just having a degree in a relevant field of education will not suffice.

    Final Note

    Data Scientist is indeed a very exciting job profile. As per research, there are not enough Data Scientists in the world to handle the current data explosion. In the near future data is going to expand exponentially, and the need for Data Scientists will increase along with it. It is indeed a job to focus on if you like data and the science of statistics.

    Courtesy: emc

    Tomorrow

    In tomorrow's blog post we will discuss various Big Data learning resources.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • MVC Portable Areas – Static Files as Embedded Resources

    - by Steve Michelotti
    This is the third post in a series related to build and deployment considerations as I've been exploring MVC Portable Areas:

    #1 – Using Web Application Project to build portable areas
    #2 – Conventions for deploying portable area static files
    #3 – Portable area static files as embedded resources

    In the last post, I walked through a convention for managing static files. In this post I'll discuss another approach to managing static files (e.g., images, css, js, etc.). With this approach, you *also* compile the static files as embedded resources into the assembly, similar to the *.aspx pages. Once again, you can set this to happen automatically by simply modifying your *.csproj file to include the desired extensions so you don't have to remember every time you add a file:

        1: <Target Name="BeforeBuild">
        2:   <ItemGroup>
        3:     <EmbeddedResource Include="**\*.aspx;**\*.ascx;**\*.gif;**\*.css;**\*.js" />
        4:   </ItemGroup>
        5: </Target>

    We now need a reliable way to serve up these static files that are embedded in the assembly. There are a couple of ways to do this, but one way is to simply create a Resource controller whose job is dedicated to doing this.

        1: public class ResourceController : Controller
        2: {
        3:     public ActionResult Index(string resourceName)
        4:     {
        5:         var contentType = GetContentType(resourceName);
        6:         var resourceStream = Assembly.GetExecutingAssembly().GetManifestResourceStream(resourceName);
        7: 
        8:         return this.File(resourceStream, contentType);
        9:     }
        10: 
        11:     private static string GetContentType(string resourceName)
        12:     {
        13:         var extension = resourceName.Substring(resourceName.LastIndexOf('.')).ToLower();
        14:         switch (extension)
        15:         {
        16:             case ".gif":
        17:                 return "image/gif";
        18:             case ".js":
        19:                 return "text/javascript";
        20:             case ".css":
        21:                 return "text/css";
        22:             default:
        23:                 return "text/html";
        24:         }
        25:     }
        26: }

    In order to use this controller, we need to make sure we've registered the route in our portable area registration (shown in lines 5-6):

        1: public class WidgetAreaRegistration : PortableAreaRegistration
        2: {
        3:     public override void RegisterArea(System.Web.Mvc.AreaRegistrationContext context, IApplicationBus bus)
        4:     {
        5:         context.MapRoute("ResourceRoute", "widget1/resource/{resourceName}",
        6:             new { controller = "Resource", action = "Index" });
        7: 
        8:         context.MapRoute("Widget1", "widget1/{controller}/{action}", new
        9:         {
        10:             controller = "Home",
        11:             action = "Index"
        12:         });
        13: 
        14:         RegisterTheViewsInTheEmbeddedViewEngine(GetType());
        15:     }
        16: 
        17:     public override string AreaName
        18:     {
        19:         get { return "Widget1"; }
        20:     }
        21: }

    In my previous post, we relied on a custom Url helper method to find the actual physical path to the static file like this:

        1: <img src="<%: Url.AreaContent("/images/arrow.gif") %>" /> Hello World!

    However, since we are now embedding the files inside the assembly, we no longer have to worry about the physical path. We can change this line of code to this:

        1: <img src="<%: Url.Resource("Widget1.images.arrow.gif") %>" /> Hello World!

    Note that I had to fully qualify the resource name (with namespace and physical location) since that is how .NET assemblies store embedded resources. I also created my own Url helper method called Resource which looks like this:

        1: public static string Resource(this UrlHelper urlHelper, string resourceName)
        2: {
        3:     var areaName = (string)urlHelper.RequestContext.RouteData.DataTokens["area"];
        4:     return urlHelper.Action("Index", "Resource", new { resourceName = resourceName, area = areaName });
        5: }

    This method gives us the convenience of not having to know how to construct the URL, allowing us simply to refer to the resource name. The resulting html for the image tag is:

        1: <img src="/widget1/resource/Widget1.images.arrow.gif" />

    so we can always request any image from the browser directly. This is almost analogous to the WebResource.axd file but for MVC. What is interesting though is that we can encapsulate each one of these so that each area can have its own set of resources, and they are easily distinguished because the area name is the first segment of the route. This makes me wonder if something like this ResourceController should be baked into portable areas itself. I'm definitely interested if anyone has any opinions on it or has taken alternative approaches.
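    One refinement worth considering (my own suggestion, not something covered in the post): since embedded resources cannot change without a rebuild, the Index action is a natural candidate for output caching so the assembly isn't re-read on every request. A minimal sketch:

        1: [OutputCache(Duration = 86400, VaryByParam = "resourceName")]
        2: public ActionResult Index(string resourceName)
        3: {
        4:     var contentType = GetContentType(resourceName);
        5:     var resourceStream = Assembly.GetExecutingAssembly().GetManifestResourceStream(resourceName);
        6:     return this.File(resourceStream, contentType);
        7: }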

    Read the article

  • Cellbi Silverlight Controls Giveaway (5 Licenses to give away)

    - by mbcrump
    Cellbi recently updated their Silverlight Controls to version 4 and to support Visual Studio 2010. I played with a couple of demos on their site and had to take a look. I headed over to their website and downloaded the controls. The first thing that I noticed was all of the special text effects and animations included. I emailed them asking if I could give away their controls in my January 2011 giveaway and they said yes. They also volunteered to give away 5 total licenses, so your chances of winning increase. I am very thankful they were willing to help the Silverlight community with this giveaway. So some quick rules below:

    -----------------------------------------------------------------------------------------------------------------------------------------------------------

    Win a FREE developer's license of Cellbi Silverlight Controls! (5 licenses to give away)

    Random winners will be announced on February 1st, 2011!

    To be entered into the contest do the following things:

    Subscribe to my feed.
    Leave a comment below with a valid email account (I WILL NOT share this info with anyone.)
    Retweet the following: I just entered to win free #Silverlight controls from @mbcrump and @cellbi http://mcrump.me/cscfree ! Don't change the URL because this will allow me to track the users that Tweet this page.
    Don't forget to visit Cellbi because they made this possible.

    ----------------------------------------------------------------------------------------------------------------------------------------------------------

    Before we get started with the Silverlight Controls, here are a couple of links to bookmark:

    The What's New in this release page is here.
    You can also check out the live demos here.

    Don't worry about the Samples/Help Documentation. That is installed to your local HDD during the installation process. Begin by downloading the trial version and running the program. After everything is installed you will see the following screen:

    After it is installed, you may want to take a look at your Toolbox in Visual Studio 2010. Once you add the controls from the "Choose Items" dialog in Silverlight, you will see that you now have access to all of these controls. At this point, using the controls is as simple as drag/drop onto your Silverlight container. It will create the proper namespaces for you. It's hard to show with a static screenshot just how powerful the controls actually are, so I will refer you to the demo page to learn more about them. Since all of these are animations/effects, it just doesn't work with a static screenshot. It is worth noting that the Sfx pack really focuses on the following core effects:

    I will show you the best route to get started building a new project with them below. The best page to start with is the sample browser, which you can access via SvFx Launcher. In my case, I want to build a new Carousel. I simply navigate to the Carousel that I want to build and hit the "Cs" link at the top. This launches Visual Studio 2010 and now I can copy/paste the XAML into my project. That is all there is to it. Hopefully this post was helpful, and don't forget to leave a comment below in order to win a set of the controls! Subscribe to my feed

    Read the article

  • Expanding the Partner Ecosystem with Third-Party Plug-ins

    - by Joe Diemer
    Oracle Enterprise Manager's extensibility capabilities are designed to allow customers and partners to adapt Enterprise Manager for management of heterogeneous environments with Plug-ins and Connectors. Third-party developers continue to take advantage of Oracle Enterprise Manager's Extensibility Development Kit (EDK) to build plug-ins for Enterprise Manager 12c, such as F5's BIG-IP Plug-in and Entuity's Eye of the Storm Network Management Plug-in. Partners can also validate their plug-ins through the Oracle Validated Integration (OVI) program, which assures customers that the plug-in has been tested and is functionally and technically sound, is designed in a reliable and standardized manner, and operates and performs as documented. Two very recent examples of partners with beta versions of their plug-ins are Blue Medora's VMware vSphere plug-in and the NetApp Storage plug-in.

    VMware vSphere Plug-in by Blue Medora

    Blue Medora, an Oracle Partner Network (OPN) "Gold" member, just announced that it is now signing up customers to try a beta version of its new VMware vSphere plug-in for Enterprise Manager 12c. According to Blue Medora, the vSphere plug-in monitors critical VMware metrics (CPU, memory, disk, network, etc.) at the Host, VM, Cluster and Resource Pool levels. It has minimal performance impact via an "agentless" approach that requires no installation directly on VMware servers. It has discovery capabilities for VMware Datacenters, ESX Hosts, Clusters, Virtual Machines, and Datastores. It offers integration of native VMware events into Enterprise Manager, and it provides over 300 VMware-related health, availability, performance, and configuration metrics. It comes with more than 30 out-of-the-box pre-defined thresholds and can manage VMware via a series of jobs split between cluster, host and VM target types. The company reports that the Enterprise Manager 12c plug-in supports vSphere versions 4.0, 4.5 and 5.0. Platforms supported include Linux 64-bit, Windows, AIX and Solaris SPARC and x86. Information about the plug-in, including how to sign up for the beta, is available at their web site at http://bluemedora.com after selecting the "Products" tab.

    NetApp Storage Plug-in

    NetApp believes the combination of storage system monitoring and comprehensive management of Oracle systems with Enterprise Manager will help customers reduce the cost and complexity of managing applications that rely on NetApp storage and Oracle technologies. So NetApp built a plug-in, and reports that it provides comprehensive availability and performance information for NetApp storage systems. Using the plug-in, Oracle Enterprise Manager customers with NetApp storage solutions can track the association between databases and storage components and thereby respond quickly to faults and IO performance bottlenecks. With the latest configuration management capabilities, one can also perform drift analysis to make sure all storage systems are configured according to established gold standards. The company is also now signing up beta customers, which can be done at the NetApp Communities site at https://communities.netapp.com/groups/netapp-storage-system-plug-in-for-oem12c-beta.

    Learn More about Enterprise Manager Extensibility

    More plug-ins from other partners are soon to come; I'll be reporting on them here. To learn more about Enterprise Manager and how customers and partners can build plug-ins using the EDK to manage a multi-vendor data center, go to http://oracle.com/enterprisemanager in the Heterogeneous Management solution area. The site also lists the plug-ins available with information on how to obtain them. More info about the Oracle Validated Integration program can be found at the OPN Enterprise Manager Knowledge Zone in the "Develop" tab.

    Read the article

  • "ERROR:Could not find java.nio.file.Paths" when using Oracle JDK 1.7

    - by Ankit
    I want to try out some of the features rolled out in Oracle's new JDK 1.7. I followed this post: Oracle JDK 1.7, but it doesn't seem to help. I was trying to print the structure of the java.nio.file.Paths class but got the following error:

buffer@ankit:~$ javap java.nio.file.Paths
ERROR:Could not find java.nio.file.Paths

However, I can easily get information about class structures up to Java SE 1.6. Here is an example:

buffer@ankit:~$ javap java.lang.Object
Compiled from "Object.java"
public class java.lang.Object{
    public java.lang.Object();
    public final native java.lang.Class getClass();
    public native int hashCode();
    public boolean equals(java.lang.Object);
    protected native java.lang.Object clone() throws java.lang.CloneNotSupportedException;
    public java.lang.String toString();
    public final native void notify();
    public final native void notifyAll();
    public final native void wait(long) throws java.lang.InterruptedException;
    public final void wait(long, int) throws java.lang.InterruptedException;
    public final void wait() throws java.lang.InterruptedException;
    protected void finalize() throws java.lang.Throwable;
    static {};
}

Running java -version gives the following result:

buffer@ankit:~$ java -version
java version "1.7.0_09"
Java(TM) SE Runtime Environment (build 1.7.0_09-b05)
Java HotSpot(TM) 64-Bit Server VM (build 23.5-b02, mixed mode)

SYSTEM INFORMATION

buffer@ankit:~$ sudo update-alternatives --config java
[sudo] password for buffer:
There are 4 choices for the alternative java (providing /usr/bin/java).

  Selection    Path                                             Priority   Status
  ------------------------------------------------------------
  0            /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java   1061       auto mode
  1            /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java   1061       manual mode
  2            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java   1051       manual mode
  3            /usr/lib/jvm/jdk1.7.0_09/                        1          manual mode
* 4            /usr/lib/jvm/jdk1.7.0_09/bin/java                1          manual mode

buffer@ankit:~$ sudo update-alternatives --config javac
There are 2 choices for the alternative javac (providing /usr/bin/javac).

  Selection    Path                                          Priority   Status
  ------------------------------------------------------------
  0            /usr/lib/jvm/java-6-openjdk-amd64/bin/javac   1061       auto mode
  1            /usr/lib/jvm/java-6-openjdk-amd64/bin/javac   1061       manual mode
* 2            /usr/lib/jvm/jdk1.7.0_09/bin/javac            1          manual mode

buffer@ankit:~$ sudo update-alternatives --config javaws
There are 3 choices for the alternative javaws (providing /usr/bin/javaws).

  Selection    Path                                               Priority   Status
  ------------------------------------------------------------
  0            /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/javaws   1061       auto mode
  1            /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/javaws   1061       manual mode
  2            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/javaws   1060       manual mode
* 3            /usr/lib/jvm/jdk1.7.0_09/bin/javaws                1          manual mode

The directory structure of /usr/lib/jvm/ is as follows:

buffer@ankit:~$ ls -l /usr/lib/jvm/
total 24
lrwxrwxrwx 1 root   root     24 Dec  2  2011 default-java -> java-1.6.0-openjdk-amd64
drwxr-xr-x 4 root   root   4096 Nov  8 16:24 java-1.5.0-gcj-4.6
lrwxrwxrwx 1 root   root     24 Dec  2  2011 java-1.6.0-openjdk -> java-1.6.0-openjdk-amd64
lrwxrwxrwx 1 root   root     20 Oct 25 00:01 java-1.6.0-openjdk-amd64 -> java-6-openjdk-amd64
lrwxrwxrwx 1 root   root     20 Oct 25 06:59 java-1.7.0-openjdk-amd64 -> java-7-openjdk-amd64
lrwxrwxrwx 1 root   root     24 Dec  2  2011 java-6-openjdk -> java-1.6.0-openjdk-amd64
drwxr-xr-x 7 root   root   4096 Nov  8 16:24 java-6-openjdk-amd64
drwxr-xr-x 3 root   root   4096 Nov  8 16:24 java-6-openjdk-common
drwxr-xr-x 5 root   root   4096 Nov  8 05:48 java-7-openjdk-amd64
drwxr-xr-x 3 root   root   4096 Nov  8 05:48 java-7-openjdk-common
drwxr-xr-x 8 buffer buffer 4096 Sep 25 09:08 jdk1.7.0_09

Any help would be highly appreciated.
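
Note the pattern in the listings above: java, javac, and javaws have all been switched to JDK 7, but javap is not among the configured alternatives, so /usr/bin/javap most likely still resolves to the JDK 6 binary, whose class library has no java.nio.file package (it was added in Java 7). That would explain why javap java.lang.Object works while javap java.nio.file.Paths fails. A minimal sketch of a possible fix, assuming the jdk1.7.0_09 path shown above is correct:

# register the JDK 7 javap as an alternative, then select it
sudo update-alternatives --install /usr/bin/javap javap /usr/lib/jvm/jdk1.7.0_09/bin/javap 1
sudo update-alternatives --config javap

# verify: this should now resolve against the JDK 7 class library
javap java.nio.file.Paths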

    Read the article

  • Using Telerik's new LINQ implementation to connect to MySQL

    Last week Telerik released a new LINQ implementation that is simple to use and produces domain models very fast. Built on top of the enterprise-grade OpenAccess ORM, it can connect to any database that OpenAccess can connect to, such as SQL Server, MySQL, Oracle, SQL Azure, VistaDB, etc. Today I will show you how to build a domain model using MySQL as your back end.

To get started, you have to download MySQL 5.x and the MySQL Workbench and also, as my colleague Alexander Filipov at Telerik reminded me, make sure you install the MySQL .NET Connector, which is available here. I like to use Northwind (ok, it gives me the warm and fuzzies), so I ran a script to produce Northwind on my MySQL server. There are many ways you can get Northwind onto your MySQL database; here is a helpful blog to get you started. I also manipulated the first record to indicate that I am in MySQL and took a look via the MySQL Workbench.

Ok, time to build our model! Start the Domain Model wizard by right-clicking on the project in Visual Studio (I have a Web project), selecting Add | New Item, and choosing Telerik OpenAccess Domain Model from the new item list. When the wizard comes up, choose MySQL as your back end and enter the name of your saved MySQL connection. If you don't have a saved MySQL connection set up in Visual Studio, click on New Connection and enter the proper connection information. (Note: this is where you need to have the MySQL .NET Connector installed.) After you set your connection to the MySQL database server, you have to choose which tables to include in your model; just for fun, I will choose all of them. Give your model a name, like NorthwindEntities, and click Finish. That is it.

Now let's consume the model with ASP.NET. I created a simple page that has a GridView on it. In my page load I wrote the following code, which by now should look very familiar: a simple LINQ query filtering customers by country (Germany) and binding the results to the grid.

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // a reference to the data context
        NorthwindEntities dat = new NorthwindEntities();

        // LINQ statement
        var result = from c in dat.Customers
                     where c.Country == "Germany"
                     select c;

        // data binding to the GridView
        GridView1.DataSource = result;
        GridView1.DataBind();
    }
}

F5 produces the grid of German customers. Tomorrow I'll show how to take the same model and create an Astoria/OData data feed.

    Read the article

  • Eclipse Indigo very slow on Kubuntu 12.04

    - by herom
    hello fellow ubuntu users! I have a really big problem with my Eclipse Indigo running on Kubuntu 12.04 32-bit, on a Dell Vostro 3500 with an Intel(R) Core(TM) i5 CPU M480 @ 2.67GHz and 4GB RAM. cat /proc/cpuinfo reports the following (processors 1-3 are identical apart from core id, apicid, initial apicid, and bogomips, which is 5319.88 on those cores):

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 37
model name      : Intel(R) Core(TM) i5 CPU M 480 @ 2.67GHz
stepping        : 5
microcode       : 0x2
cpu MHz         : 1197.000
cache size      : 3072 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt lahf_lm ida arat dts tpr_shadow vnmi flexpriority ept vpid
bogomips        : 5319.85
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

java -version reports the following (it's the Oracle Java, not OpenJDK):

java version "1.7.0_04"
Java(TM) SE Runtime Environment (build 1.7.0_04-b20)
Java HotSpot(TM) Server VM (build 23.0-b21, mixed mode)

I am trying to develop an Android application for Google TV, and Eclipse is so slow that it can't keep up with my typing (extreme lag!), which makes productive work almost impossible. Here is my eclipse.ini file:

-startup
plugins/org.eclipse.equinox.launcher_1.2.0.v20110502.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.gtk.linux.x86_1.1.100.v20110505
-product
org.eclipse.epp.package.java.product
--launcher.defaultAction
openFile
-showsplash
org.eclipse.platform
--launcher.XXMaxPermSize
512m
--launcher.defaultAction
openFile
-vmargs
-Dosgi.requiredJavaVersion=1.5
-Declipse.p2.unsignedPolicy=allow
-Xms256m
-Xmx512m
-Xss4m
-XX:PermSize=128m
-XX:MaxPermSize=384m
-XX:CompileThreshold=5
-XX:MaxGCPauseMillis=10
-XX:MaxHeapFreeRatio=70
-XX:+CMSIncrementalPacing
-XX:+UnlockExperimentalVMOptions
-XX:+UseG1GC
-XX:+UseFastAccessorMethods
-XX:ReservedCodeCacheSize=64m
-Dcom.sun.management.jmxremote

Has anybody faced the same problems? Can anybody help me with this? It's really urgent, as I'm sitting here at my company and am not able to do anything productive...
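
Two things in the vmargs above stand out: the file mixes incremental-CMS pacing (-XX:+CMSIncrementalPacing) with the experimental G1 collector, and -XX:CompileThreshold=5 tells the JIT to compile a method after only five invocations, which generates heavy compilation activity. A more conservative -vmargs section to try might look like the sketch below; this is a suggested experiment, not a confirmed fix, and since eclipse.ini does not support comment lines the rationale has to stay here: drop the experimental JIT/GC flags back to the JVM defaults and give Eclipse a larger heap.

-vmargs
-Dosgi.requiredJavaVersion=1.5
-Declipse.p2.unsignedPolicy=allow
-Xms256m
-Xmx1024m
-Xss4m
-XX:PermSize=128m
-XX:MaxPermSize=384m
-Dcom.sun.management.jmxremote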

    Read the article

  • Heading out to Dallas GiveCamp 2011

    - by dotgeek
    The day has finally arrived for twelve local charities here in the Dallas area, when they'll get help from local developers with their website initiatives at this year's Dallas GiveCamp. I'm really looking forward to helping out at this year's event, which I hope will be the start of many more GiveCamps to follow. Similar to Habitat for Humanity, where people gather to help build and improve homes for people in need, GiveCamp brings together programmers and equips them with the virtual tools they need to build and improve the charities' existing websites. Things kick off tonight, when teams start working on their various projects; the building continues through the night and all the way through until Sunday afternoon. The end goal is for each charity to receive a completed, working website, along with all of the production code and digital assets.

None of this would be possible without the great sponsors returning once again and their donations of various products to help these charities out, such as Telerik's CMS product Sitefinity 4.0 paired with a year of hosting from Verio, to mention just a few. And just as skilled builders might train volunteers to use a nail gun when building a house, training is also available on site for the developers and the local charities, giving them the skills to manage and use these products from site development through to actual production. That training is a key to the success of this weekend's event.

Tonight's training sessions will kick off with a real treat from Giovanni Gallucci, speaking about Social Media for NPOs; later, Gabe Sumner from Telerik will lead a training session on Sitefinity for developers. These training sessions will continue throughout the weekend, with DotNetNuke and Mojo Portal sessions planned as well.

If you're a developer and would like to help out in the future, check with your local user groups to find out if there is already a GiveCamp near you. If there isn't one, consider starting up a local GiveCamp so that you too can help Code it Forward.

About GiveCamp: GiveCamp is a weekend-long event where software developers, designers, and database administrators donate their time to create custom software for non-profit organizations. This custom software could be a new website for the organization, a small data-collection application to keep track of members, or an application for the Red Cross that automatically emails blood donors three months after they've donated to remind them that they are now eligible to donate again. The only limitation is that the project should be scoped so it can be completed in a weekend. During GiveCamp, developers are welcome to go home in the evenings or camp out all weekend long. Food and drink are usually provided at the event, and there are sometimes even game systems set up for when you and your team need a little break! Overall, it's a great opportunity for people to work together, develop new friendships, and do something important for their community. At GiveCamp, there is an expectation of "What happens at GiveCamp, stays at GiveCamp": all source code must be turned over to the charities at the end of the weekend (developers cannot ask for payment), and the charities are responsible for maintaining the code moving forward (charities cannot expect the developers to maintain the codebase).

    Read the article

  • SQL Constraints – CHECK and NOCHECK

    - by David Turner
    One performance issue I faced on a recent project was with the way that our constraints were being managed. We were using SubSonic as our ORM, and it has a useful tool for generating your ORM code called SubStage: once configured, you can regenerate your DAL code easily based on your database schema, and it can even be integrated into your build as a pre-build event if you want. SubStage also offers the useful feature of being able to generate DDL scripts for your entire database, and it can script your data for you too. The problem came when we used the generate-scripts feature to migrate the database onto a test instance: it turns out that the DDL scripts it generates include the WITH NOCHECK option, so when we executed them on the test instance and performed some testing, we found that performance wasn't as expected.

A constraint can be disabled, enabled but not trusted, or enabled and trusted. When it is disabled, data that violates the constraint can be inserted because the constraint is not being enforced; this is useful for bulk-load scenarios where performance is important. So what does it mean to say that a constraint is trusted or not trusted? This refers to the SQL Server query optimizer: if the optimizer trusts the constraint, it doesn't need to validate it when executing a query, so the query can execute much faster.

Here is an example based on this article on TechNet. We create two tables with a foreign key constraint between them, add a single row to each, and then query the tables:

DROP TABLE t2
DROP TABLE t1
GO

CREATE TABLE t1(col1 int NOT NULL PRIMARY KEY)
CREATE TABLE t2(col1 int NOT NULL)

ALTER TABLE t2 WITH CHECK ADD CONSTRAINT fk_t2_t1 FOREIGN KEY(col1)
REFERENCES t1(col1)

INSERT INTO t1 VALUES(1)
INSERT INTO t2 VALUES(1)
GO

SELECT COUNT(*) FROM t2
WHERE EXISTS
    (SELECT *
     FROM t1
     WHERE t1.col1 = t2.col1)

This all works fine, and in this scenario the constraint is enabled and trusted. We can verify this by querying the is_disabled and is_not_trusted properties:

SELECT name, is_disabled, is_not_trusted FROM sys.foreign_keys

Both properties come back as 0. We can disable the constraint using this SQL:

ALTER TABLE t2 NOCHECK CONSTRAINT fk_t2_t1

When we query the properties again, both come back as 1: the constraint is disabled and not trusted. It won't be enforced, so we could insert data into t2 that doesn't match the data in t1. We don't want to do that, so we re-enable the constraint:

ALTER TABLE t2 CHECK CONSTRAINT fk_t2_t1

But when we query the properties again, we see that the constraint is enabled (is_disabled = 0) yet still not trusted (is_not_trusted = 1). This means the optimizer will check the constraint each time a query is executed over it, which will impact query performance, and this is definitely not what we want. So we need to make the constraint trusted by the optimizer again.

First we should check that our constraint hasn't been violated, which we can do by running DBCC:

DBCC CHECKCONSTRAINTS (t2)

Hopefully this completes without finding any violations of the constraint. Having verified that the constraint was not violated while it was disabled, we can simply execute the following SQL:

ALTER TABLE t2 WITH CHECK CHECK CONSTRAINT fk_t2_t1

At first glance this looks like it must be a typo to have the keyword CHECK repeated twice in succession, but it is the correct syntax: the first CHECK belongs to WITH CHECK (validate the existing rows), and the second belongs to CHECK CONSTRAINT (enable the constraint). When we query the constraint's properties afterwards, we find that it is trusted again.

To fix our specific problem, we created a script that re-checked all constraints on each table, using the following syntax:

ALTER TABLE t2 WITH CHECK CHECK CONSTRAINT ALL
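
Before running the ALTER statements across a whole database, it helps to know exactly which constraints are untrusted. A quick sketch of a query for this (the catalog views sys.foreign_keys and sys.check_constraints both expose the same two flags):

-- foreign keys that are enabled but not trusted by the optimizer
SELECT OBJECT_NAME(parent_object_id) AS table_name, name AS constraint_name
FROM sys.foreign_keys
WHERE is_not_trusted = 1 AND is_disabled = 0;

-- check constraints in the same state
SELECT OBJECT_NAME(parent_object_id) AS table_name, name AS constraint_name
FROM sys.check_constraints
WHERE is_not_trusted = 1 AND is_disabled = 0;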

    Read the article

  • Things I've noticed with DVCS

    - by Wes McClure
    Things I encourage:

Frequent local commits. This way you don't have to be bothered by changes others are making to the central repository while working on a handful of related tasks. It's a good idea to try to work on one task at a time and commit all changes at partitioned stopping points. A local commit doesn't have to build, just FYI, so a stopping point doesn't mean a build point, nor a point at which you can push centrally. There should be several of these in any given day; two hours without one is a good indicator that you might not be leveraging the power of frequent local commits. Once you have verified that a set of changes works, save it away; otherwise you run the risk of introducing bugs into it while working on the next task.

The notion of a task. By task I mean a related set of changes that can be completed in a few hours or less. By the same token, don't make your tasks so small that critically related changes aren't grouped together. Use your intuition and the rest of these principles, and I think you will find what is comfortable for you.

Partial commits. Sometimes one task explodes or unknowingly encompasses other tasks. At this point, try to get to a stopping point on part of the work you are doing and commit it, so you can get that out of the way and focus on the remainder. This will often entail committing part of the work and continuing on the rest.

Outstanding changes as a guide. If you don't commit often, it might mean you are not leveraging your version control history to help guide your work. It's a great way to see what has changed and might be causing problems. The longer you wait, the more has changed and the harder it is to test and debug what your changes are doing! This is a reason why I am so picky about my VCS tools on the client side, and why I talk a lot about the quality of a diff tool and the ability to integrate it with a simple view of everything that has changed. This is why I love using TortoiseHg and SmartGit: they show changed files, a diff (or two-way diff with SmartGit) of the currently selected file, and a commit message, all in one window that I keep maximized on one monitor at all times.

Throw-away / stash commits. There is extreme value in being able to throw away a commit (or stash it) when it is getting out of hand. If you do not commit often, you will have to isolate the work you want to commit from the work you want to throw away, which is wasted productivity and highly prone to errors. I find myself doing this about once a week, especially when doing exploratory refactoring. It's much easier if I can just revert all outstanding changes. (A short sketch of what this looks like in practice follows this list.)

Sync with the central repository daily. The rest of us depend on your changes. Don't let them sit on your computer longer than they have to. Waiting increases the chance of a merge conflict, which just decreases productivity. It also prevents us from doing deploys when people say they are done but have not merged centrally. This should happen daily! Find a way to partition the work you are doing so that you can sync at least once a day.

Things I discourage:

Lots of partial commits right at the end of a series of changes. If you notice lots of partial commits at the end of a set of changes, it's likely because you weren't committing frequently, nor were you watching for the size of the task expanding beyond a single commit. Chances are this cost you productivity if you use your outstanding changes as a guide, since you would have had an ever-growing list of changes.

Committing single files. Committing single files means you waited too long and no longer understand all the changes involved. It may mean there were overlapping changes in single files that cannot be isolated. In either case, go back to the suggestions above to avoid this. Committing frequently does not mean committing frequently right at the end of a day's work; commits should be spaced out over the course of several tasks, not crammed into a five-minute window at the end.
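
To make the partial-commit and stash ideas concrete, here is a small sketch of the workflow in Git (the same ideas map onto Mercurial with the shelve extension); the commit message and stash label are illustrative only:

# stage only the hunks that belong to the finished task, leaving the rest unstaged
git add -p

# commit just that slice of the work
git commit -m "Task A: extract pricing rules into a separate service"

# park the half-done exploratory refactoring for later (or throw it away)
git stash save "exploratory refactoring"

# ...and bring it back when you are ready to continue
git stash pop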

    Read the article

  • Updated SOA Documents now available in ITSO Reference Library

    - by Bob Rhubart
    Nine documents within the IT Strategies from Oracle (ITSO) reference library have recently been updated. (Access to the ITSO collection is free to registered Oracle.com members, and that membership is free.) All nine documents fall within the Service Oriented Architecture section of the ITSO collection and cover the following topics:

SOA Practitioner Guides

Creating an SOA Roadmap (PDF, 54 pages, published: February 2012) - The secret to successful SOA is to build a roadmap that can be successfully executed. SOA offers an opportunity to adopt an iterative technique to deliver solutions incrementally. This document offers a structured, iterative methodology to help you stay focused on business results, mitigate technology and organizational risk, and deliver successful SOA projects.

A Framework for SOA Governance (PDF, 58 pages, published: February 2012) - Successful SOA requires a strong governance strategy that designs in measurement, management, and enforcement procedures. Enterprise SOA adoption introduces new assets, processes, technologies, standards, roles, etc., which require application of appropriate governance policies and procedures. This document offers a framework for defining and building a proper SOA governance model.

Determining ROI of SOA through Reuse (PDF, 28 pages, published: February 2012) - SOA offers the opportunity to save millions of dollars annually through reuse. Sharing common services intuitively reduces workload, increases developer productivity, and decreases maintenance costs. This document provides an approach for estimating the reuse value of the various software assets contained in a typical portfolio.

Identifying and Discovering Services (PDF, 64 pages, published: March 2012) - What services should we build? How can we promote the reuse of existing services? A sound approach to answering these questions is a primary measure of the success of an SOA initiative. This document describes a pragmatic approach for collecting the necessary information for identifying proper services and facilitating service reuse.

Software Engineering in an SOA Environment (PDF, 66 pages, published: March 2012) - Traditional software delivery methods are too narrowly focused and need to be adjusted to enable SOA. This document describes an engineering approach for delivering projects within an SOA environment. It identifies the unique software engineering challenges faced by enterprises adopting SOA and provides a framework to remove the hurdles and improve the efficiency of the SOA initiative.

SOA Reference Architectures

SOA Foundation (PDF, 70 pages, published: February 2012) - This document describes the key tenets for SOA design, development, and execution environments. Topics include: service definition, service layering, service types, the service model, composite applications, invocation patterns, and standards.

SOA Infrastructure (PDF, 86 pages, published: February 2012) - Properly architected, SOA provides a robust and manageable infrastructure that enables faster solution delivery. This document describes the role of infrastructure and its capabilities. Topics include: logical architecture, deployment views, and Oracle product mapping.

SOA White Papers and Data Sheets

Oracle's Approach to SOA (white paper) (PDF, 14 pages, published: February 2012) - Oracle has developed a pragmatic, holistic approach, based on years of experience with numerous companies, to help customers successfully adopt SOA and realize measurable business benefits. This executive datasheet and whitepaper describe Oracle's proven approach to SOA.

Oracle's Approach to SOA (data sheet) (PDF, 3 pages, published: March 2012) - SOA adoption is complex and success is far from assured. This is why Oracle has developed a pragmatic, holistic approach, based on years of experience with numerous companies, to help customers successfully adopt SOA and realize measurable business benefits. This data sheet provides an executive overview of Oracle's proven approach to SOA.

    Read the article

  • JDeveloper 11.1.2 : Command Link in Table Column Work Around

    - by Frank Nimphius
    I just figured out that in Oracle JDeveloper 11.1.2, clicking a command link in a table does not mark the table row as selected, as was the behavior in previous releases of Oracle JDeveloper. For the time being, the following workaround can be used to achieve the "old" behavior: to mark the table row as selected, you need to build and queue the table selection event in the code executed by the command link's action listener. To queue a selection event, you need to know the rowKey of the row containing the command link that was clicked. To get this information, you add an f:attribute tag to the command link, as shown below:

<af:column sortProperty="#{bindings.DepartmentsView1.hints.DepartmentId.name}" sortable="false"
    headerText="#{bindings.DepartmentsView1.hints.DepartmentId.label}" id="c1">
  <af:commandLink text="#{row.DepartmentId}" id="cl1" partialSubmit="true"
      actionListener="#{BrowseBean.onCommandItemSelected}">
    <f:attribute name="rowKey" value="#{row.rowKey}"/>
  </af:commandLink>
  ...
</af:column>

The f:attribute tag references #{row.rowKey}, which in ADF translates to JUCtrlHierNodeBinding.getRowKey(). This information can be used in the command link action listener to compose the RowKeySet needed to queue the selected row. For simplicity, I created a table "binding" reference in the managed bean that executes the command link action. The managed bean code referenced from the af:commandLink actionListener property is shown next:

public void onCommandItemSelected(ActionEvent actionEvent) {
    // get access to the clicked command link
    RichCommandLink comp = (RichCommandLink) actionEvent.getComponent();
    // read the added f:attribute value
    Key rowKey = (Key) comp.getAttributes().get("rowKey");

    // get the currently selected RowKeySet from the table
    RowKeySet oldSelection = table.getSelectedRowKeys();
    // build an empty RowKeySet for the new selection
    RowKeySetImpl newSelection = new RowKeySetImpl();

    // RowKeySets contain List objects holding key objects
    ArrayList list = new ArrayList();
    list.add(rowKey);
    newSelection.add(list);

    // create the selection event and queue it
    SelectionEvent selectionEvent = new SelectionEvent(oldSelection, newSelection, table);
    selectionEvent.queue();

    // refresh the table
    AdfFacesContext.getCurrentInstance().addPartialTarget(table);
}

    Read the article

  • SJS AS 9.1 U2 (GF v2 U2) - Patch 25 // GF v2.1 - Patch 19 // Sun GlassFish Enterprise Server v2.1.1 Patch 13

    - by arungupta
    SJS AS 9.1 U2 (GF v2 U2) patch 25 is a commercial (Restricted) patch (see Overview of GFv2) available as part of Oracle's Commercial Support for GlassFish. This release is also patch 19 of GlassFish 2.1 and patch 13 of GlassFish 2.1.1. The file-based patches were released on Sep 1, 2011; the package-based patches were released on Sep 13, 2011.

Release Overview

Description: SJS AS 9.1 U2 (GFv2 U2) - Patch 25; GlassFish 2.1 - Patch 19; GlassFish 2.1.1 - Patch 13. File- and package-based patches for Solaris SPARC, Solaris x86, Linux, Windows, and AIX.

Patch Ids - this release comes in 3 different variants:

Package-based patches with HADB
• Solaris SPARC - [128640-27]
• Solaris i586 - [128641-27]
• Linux RPM - [128642-27]

File-based patches with HADB
• Solaris SPARC - [128643-27]
• Solaris i586 - [128644-27]
• Linux - [128645-27]
• Windows - [128646-27]

File-based patches without HADB
• Solaris SPARC - [128647-27]
• Solaris i586 - [128648-27]
• Linux - [128649-27]
• Windows - [128650-27]
• AIX - [137916-27]

Update Date: Nov 23, 2011

Comment: Commercial (for-fee) release with regular bug fixes. This is patch 25 for SJS AS 9.1 U2; it is also patch 19 for GlassFish v2.1 and patch 13 for GlassFish v2.1.1. It contains the fixes from the previous patches plus fixes for 18 unique defects.

Status: CURRENT

Bugs fixed in this patch:
• [12823919]: RESPONSE BYTECHUNK FLUSH WILL GENERATE A MIMEHEADER WHEN SESSION REPLICATION ON
• [12818767]: INTEGRATE NEW GRIZZLY 1.0.40
• [12807660]: BUILD, STAGE AND INTEGRATING HADB
• [12807643]: INTEGRATE MQ 4.4 U2 P4
• [12802648]: GLASSFISH BUILD FAILED DUE TO METRO INTEGRATION
• [12799002]: JNDI RESOURCE NOT ENABLED IF TARGETTING USING ADMIN GUI ON GF 2.1.1 PATCH 11
• [12794672]: ORG.APACHE.JASPER.RUNTIME.BODYCONTENTIMPL DOES NOT COMPACT CB BUFFER
• [12772029]: BUG 12308270 - NEED HOTFIX FROM GF RUNNING OPENSSO
• [12749346]: VERSION CHANGES FOR GLASSFISH V2.1.1 PATCH 13
• [12749151]: INTEGRATING METRO 1.6.1-B01 INTO GF 2.1.1 P13
• [12719221]: PORTUNIFICATION WSTCPPROTOCOLFINDER.FIND NULLPOINTEREXCEPTION THROWN
• [12695620]: HADB: LOGBUFFERSIZE CALCULATED INCORRECTLY FOR VALUES 120 MB AND THE MEMORY FO
• [12687345]: ENVIRONMENT VARIABLE PARSING FOR SUN_APPSVR_NOBACKUP CAN FAIL DEPENDING ENV VARS
• [12547651]: GLASSFISH DISPLAY BUG
• [12359965]: GEREQUESTURI RETURNS URI WITH NULL PREPENDED INTERMITTENT AFTER UPGRADE
• [12308270]: SUNBT7020210 ENHANCE JAXRPC SOAP RESPONSE USE PREVIOUS CONFIGURED NAMESPACE PREF
• [12308003]: SUNBT7018895 FAILURE TO DEPLOY OR RUN WEBSERVICE AFTER UPDATING TO GF 2.1.1 P07
• [12246256]: SUNBT6739013 [RN]GLASSFISH/SUN APPLICATION INSTALLER CRASHES ON LINUX

Additional notes: more details about these bugs can be found at My Oracle Support.

    Read the article

  • Graphics driver for ATI RV630 on 13.10

    - by Michael Cephalus
    I started updating the distro from 13.04 to 13.10. Then I got my hands on a Radeon HD 2600 and installed the RV630-compatible Catalyst driver from the official web page. After that, the X server crashed every time I opened a browser or VLC, for example. I noticed that no driver is claimed in the configuration below:

michael@statubtunu:~$ lshw -c video
WARNING: you should run this program as super-user.
  *-display UNCLAIMED
       description: VGA compatible controller
       product: RV630 PRO [Radeon HD 2600 PRO]
       vendor: Advanced Micro Devices, Inc. [AMD/ATI]
       physical id: 0
       bus info: pci@0000:01:00.0
       version: 00
       width: 64 bits
       clock: 33MHz
       capabilities: vga_controller bus_master cap_list
       configuration: latency=0
       resources: memory:d0000000-dfffffff memory:e0500000-e050ffff ioport:1000(size=256) memory:e0000000-e001ffff

I installed additional drivers from Jockey and the Ubuntu Software Center ATI driver, but that only made the X server crash completely, and when I type sudo startx, this is what I get:

michael@statubtunu:~$ sudo startx

X.Org X Server 1.14.3
Release Date: 2013-09-12
X Protocol Version 11, Revision 0
Build Operating System: Linux 3.2.0-37-generic i686 Ubuntu
Current Operating System: Linux statubtunu 3.11.0-13-generic #20-Ubuntu SMP Wed Oct 23 17:26:33 UTC 2013 i686
Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.11.0-13-generic root=UUID=8fb2e395-0ea2-4f45-ac66-225696b7ce2c ro quiet splash vt.handoff=7
Build Date: 15 October 2013 09:23:29AM
xorg-server 2:1.14.3-3ubuntu2 (For technical support please see http://www.ubuntu.com/support)
Current version of pixman: 0.30.2
Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.0.log", Time: Tue Nov 12 18:50:02 2013
(==) Using system config directory "/usr/share/X11/xorg.conf.d"
Initializing built-in extension Generic Event Extension
Initializing built-in extension SHAPE
Initializing built-in extension MIT-SHM
Initializing built-in extension XInputExtension
Initializing built-in extension XTEST
Initializing built-in extension BIG-REQUESTS
Initializing built-in extension SYNC
Initializing built-in extension XKEYBOARD
Initializing built-in extension XC-MISC
Initializing built-in extension SECURITY
Initializing built-in extension XINERAMA
Initializing built-in extension XFIXES
Initializing built-in extension RENDER
Initializing built-in extension RANDR
Initializing built-in extension COMPOSITE
Initializing built-in extension DAMAGE
Initializing built-in extension MIT-SCREEN-SAVER
Initializing built-in extension DOUBLE-BUFFER
Initializing built-in extension RECORD
Initializing built-in extension DPMS
Initializing built-in extension X-Resource
Initializing built-in extension XVideo
Initializing built-in extension XVideo-MotionCompensation
Initializing built-in extension SELinux
Initializing built-in extension XFree86-VidModeExtension
Initializing built-in extension XFree86-DGA
Initializing built-in extension XFree86-DRI
Initializing built-in extension DRI2
Loading extension GLX
ERROR: could not insert 'fglrx': No such device
(II) [KMS] drm report modesetting isn't supported.
(EE)
(EE) Backtrace:
(EE) 0: /usr/bin/X (xorg_backtrace+0x49) [0xb77780b9]
(EE) 1: /usr/bin/X (0xb75d8000+0x1a3e24) [0xb777be24]
(EE) 2: (vdso) (__kernel_rt_sigreturn+0x0) [0xb75b540c]
(EE) 3: /usr/bin/X (xf86findOption+0x2a) [0xb7681daa]
(EE) 4: /usr/bin/X (xf86findOptionValue+0x23) [0xb7681f43]
(EE) 5: /usr/bin/X (0xb75d8000+0x7ebfd) [0xb7656bfd]
(EE) 6: /usr/bin/X (xf86ProcessOptions+0x37) [0xb7657507]
(EE) 7: /usr/lib/xorg/modules/libvbe.so (vbeDoEDID+0xe7) [0xb5eb8647]
(EE) 8: /usr/lib/xorg/modules/drivers/vesa_drv.so (0xb5ee7000+0x287c) [0xb5ee987c]
(EE) 9: /usr/bin/X (InitOutput+0xb23) [0xb7659c33]
(EE) 10: /usr/bin/X (0xb75d8000+0x2a30b) [0xb760230b]
(EE) 11: /lib/i386-linux-gnu/libc.so.6 (__libc_start_main+0xf5) [0xb71ba905]
(EE) 12: /usr/bin/X (0xb75d8000+0x2a908) [0xb7602908]
(EE)
(EE) Segmentation fault at address 0x5
(EE)
Fatal server error:
(EE) Caught signal 11 (Segmentation fault). Server aborting
(EE)
(EE) Please consult the The X.Org Foundation support at http://wiki.x.org for help.
(EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
(EE)
(EE) Server terminated with error (1). Closing log file.

This is all the output I get, but no GUI appears. Is there any way to deal with this?
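
The line "could not insert 'fglrx': No such device" indicates that the proprietary Catalyst kernel module cannot drive this card on 13.10's X server stack, and the leftover installation appears to be breaking the vesa fallback as well. A commonly suggested recovery path, sketched here under the assumption that the Catalyst installer left fglrx packages and an xorg.conf behind, is to purge fglrx and fall back to the open-source radeon driver:

# remove all fglrx packages left behind by the Catalyst installer
sudo apt-get purge "fglrx*"

# if the driver came from AMD's .run installer rather than the Ubuntu archive,
# its own uninstall script may need to run first (only if this file exists):
# sudo sh /usr/share/ati/fglrx-uninstall.sh

# put aside any xorg.conf the installer wrote, if one exists
sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.backup

# make sure the open-source driver stack is in place, then reboot
sudo apt-get install --reinstall xserver-xorg-video-ati xserver-xorg-core
sudo reboot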

    Read the article

< Previous Page | 405 406 407 408 409 410 411 412 413 414 415 416  | Next Page >