Search Results

Search found 16971 results on 679 pages for 'blogs'.


  • Using the JRockit Flight Recorder as an In-Flight Black Box

    - by Marcus Hirt
    The new JRockit Flight Recorder has some very interesting properties. It can be used like the black box of an airplane, allowing users to go back in time and check what was happening around the time when something went wrong. Here is how to enable the default continuous recording in JRockit to allow for that use case. The flight recorder is on by default in JRockit R28; the problem is that there is no recording running by default. To configure JRockit to start with the default recording running, add the parameter: -XX:FlightRecorderOptions=defaultrecording=true That will enable a recording with recording ID 0. You can see that it has been started properly by choosing Show Recordings from the context menu in JRockit Mission Control. You should see something similar to the picture below. Simply right-click on the recording and select Dump to dump the information available in the flight recorder. You can choose to dump data for a specific period of time or all data. For more information about the command line parameters available to control the Flight Recorder, see the JRockit documentation.
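    For example (an illustrative sketch rather than official guidance: myapp.jar and the process id are placeholders, and the jrcmd dump options are quoted from memory, so verify them against the JRockit documentation), starting an application with the default recording enabled and later dumping recording 0 from the command line might look like this:
        java -XX:FlightRecorderOptions=defaultrecording=true -jar myapp.jar
        jrcmd <pid> dump_flightrecording recording=0 copy_to_file=blackbox.jfr compress_copy=true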

    Read the article

  • DialogFX: A New Approach to JavaFX Dialogs

    - by HecklerMark
    How would you like a quick and easy drop-in dialog box capability for JavaFX? That's what I was thinking when a weekend presented itself. And never being one to waste a good weekend...  :-) After doing some "roll-your-own" basic dialog building for a JavaFX app, I recently stumbled across Anton Smirnov's work on GitHub. It was a good start, but it wasn't exactly what I was after, and ideas just kept popping up of things I'd do differently. I wanted something a bit more streamlined, a bit easier to just "drop in and use". And so DialogFX was born. DialogFX wasn't intended to be overly fancy, overly clever - just useful and robust. Here were my goals:
    Easy to use. A dialog "system" should be so simple to use that a new developer can drop it in quickly with nearly no learning curve. A seasoned developer shouldn't even have to think, just tap in a few lines and go. Why should dialogs slow "actual development"?  :-)
    Defaults. If you don't specify something (dialog type, buttons, etc.), a good dialog system should still work. It may not be pretty, but it shouldn't throw gears.
    Sharable. It's all open source. Even the icons are in the commons, so they can be reused at will.
    Let's take a look at some screen captures (Windows and Mac) and the code used to produce them.
    DialogFX INFO dialog - sample code:
        DialogFX dialog = new DialogFX();
        dialog.setTitleText("Info Dialog Box Example");
        dialog.setMessage("This is an example of an INFO dialog box, created using DialogFX.");
        dialog.showDialog();
    DialogFX ERROR dialog - sample code:
        DialogFX dialog = new DialogFX(Type.ERROR);
        dialog.setTitleText("Error Dialog Box Example");
        dialog.setMessage("This is an example of an ERROR dialog box, created using DialogFX.");
        dialog.showDialog();
    DialogFX ACCEPT dialog - sample code:
        DialogFX dialog = new DialogFX(Type.ACCEPT);
        dialog.setTitleText("Accept Dialog Box Example");
        dialog.setMessage("This is an example of an ACCEPT dialog box, created using DialogFX.");
        dialog.showDialog();
    DialogFX Question dialog (Yes/No) - sample code:
        DialogFX dialog = new DialogFX(Type.QUESTION);
        dialog.setTitleText("Question Dialog Box Example");
        dialog.setMessage("This is an example of a QUESTION dialog box, created using DialogFX. Would you like to continue?");
        dialog.showDialog();
    DialogFX Question dialog (custom buttons) - sample code:
        List<String> buttonLabels = new ArrayList<>(2);
        buttonLabels.add("Affirmative");
        buttonLabels.add("Negative");
        DialogFX dialog = new DialogFX(Type.QUESTION);
        dialog.setTitleText("Question Dialog Box Example");
        dialog.setMessage("This is an example of a QUESTION dialog box, created using DialogFX. This also demonstrates the automatic wrapping of text in DialogFX. Would you like to continue?");
        dialog.addButtons(buttonLabels, 0, 1);
        dialog.showDialog();
    A couple of things to note: you may have noticed the addButtons(buttonLabels, 0, 1) call in that last example. You can pass custom button labels in and designate the index of the default button (responding to the ENTER key) and the cancel button (for ESCAPE). Optional parameters, of course, but nice when you may want them. Also, the showDialog() method actually returns the index of the button pressed.
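    For example, acting on that return value in the calling object might look like this (a sketch only: the DialogFX calls mirror the samples above, while handleSave() and discardChanges() are hypothetical methods in the calling class):
        List<String> buttonLabels = new ArrayList<>(2);
        buttonLabels.add("Save");
        buttonLabels.add("Discard");
        DialogFX dialog = new DialogFX(Type.QUESTION);
        dialog.setTitleText("Unsaved Changes");
        dialog.setMessage("You have unsaved changes. Would you like to save them before closing?");
        dialog.addButtons(buttonLabels, 0, 1); // index 0 answers ENTER, index 1 answers ESCAPE
        int response = dialog.showDialog();    // index of the button the user pressed
        if (response == 0) {
            handleSave();        // hypothetical callback in the calling object
        } else {
            discardChanges();    // hypothetical callback in the calling object
        }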
Rather than create EventHandlers in the dialog that really have little to do with the dialog itself, you can respond to the user's choice within the calling object. Or not. Again, it's your choice.  :-) And finally, I've Javadoc'ed the code in the main places. Hopefully, this will make it easy to get up and running quickly and with a minimum of fuss. How Do I Get (Git?) It? To try out DialogFX, just point your browser here to the DialogFX GitHub repository and download away! Please take a look, try it out, and let me know what you think. All feedback welcome! All the best, Mark 

    Read the article

  • Forrester – The Right Customer Experience Strategy

    - by Divya Malik
    I am blogging from a warm, sunny NYC today. We are here, sponsoring and attending Forrester's Customer Experience Forum 2011. Customer Experience Management has been a key area of focus for us in CRM. Our VP of CRM and eCommerce Product Marketing, Kirk Mosher, will be the first presenter of the day (Tuesday morning at 7:30 am) with a breakfast session titled "Winning With A Superior Cross-Channel Customer Experience". We are also showcasing some exciting new demos across our CRM and Commerce product lines in the areas of Integrated Sales and Marketing, Multi-Channel Commerce, and Integrated Outlook and Mobile solutions on the demo floor. For those of you who are attending, do stop by to see the latest in CRM innovations from Oracle and talk to some experienced sales consultants. You can find more information about Oracle's CRM solutions here.

    Read the article

  • WEB203 – Jump into Silverlight!… and Become Effective Immediately with Tim Huckaby, Fou

    - by Robert Burger
    Getting ready for the good stuff. Definitely wish there were more Silverlight and WCF RIA sessions, but this is a start. Was lucky to get a coveted power-enabled seat. Luckily, due to my trustily slow Verizon data card, I can get these notes out amidst a total Internet outage here. This is the second breakout session of the day, and is by far standing-room only. I stepped out before the session started to get a cool Diet COKE and wouldn't have gotten back in if I didn't already have a seat. Tim says this is an intro session and that he's been begging for intro sessions at TechEd for years and that, by looking at this audience, he thinks the demand is there. Admittedly, I didn't know this was an intro session, or I might have gone elsewhere. But, it was the very first Silverlight session, so I had to be here. Tim says he will be providing a very good comprehensive reference application at the end of the presentation. He has just demoed it, and it is a full CRUD-based Sales Manager application based on… AdventureWorks!
    Session Agenda
    - What it is / How to get started
    - Declarative Programming
    - Layout and Controls, Events and Commands
    - Working with Data
    - Adding Style to Your Application
    Silverlight… "WPF Light"
    Why is the download 4.2MB? Because the direct competitor is a 4.2MB download. There is no technical reason it is not the entire framework. It is purely to "be competitive".
    Getting Started
    - Get all of the following downloads from www.silverlight.net/getstarted
    - Install VS2010 or Visual Web Developer Express 2010
    - Install Silverlight 4 Tools for VS2010
    - Install Expression Blend 4
    - Install the Silverlight 4 Toolkit
    Reference Application Features
    - Uses the MVVM pattern – a way to move data access code that would normally be inline within the UI and place it in nice data access libraries
    - Images loaded dynamically from the database, converting GIF to PNG because Silverlight does not support GIF
    - LINQ to SQL is the data access model
    - WCF is the data provider and is using binary message encoding
    Declarative Programming
    - XAML replaces code for UI representation
    - Attributes control Layout and Style
    - Event handlers wired-up in XAML
    - Declarative Data Binding
    Layout Overview
    - Content rendering flows inside of the parent
    - Fixed positioning (Canvas) is seldom used
    - Panels are used to house content
    - Margins and Padding over fixed size
    Panels
    - StackPanel – Arranges child elements into a single line oriented horizontally or vertically
    - Grid – A flexible grid area that consists of rows and columns
    - Canvas – An area where positions are specifically fixed
    - WrapPanel (in Toolkit) – Positions child elements in sequential position left to right and top to bottom
    - DockPanel (in Toolkit) – Positions child controls within a dockable area
    Positioning
    - Horizontal and Vertical Alignment
    - Margin – Separates an element from neighboring elements
    - Padding – Enlarges the effective size of an element by a thickness
    Controls Overview
    - Not all controls are created equal
    - Silverlight is a subset of WPF, so many WPF controls do not exist in the core Silverlight release
    - The Silverlight Toolkit continues to add controls, but they are released in different quality bands
    - Plenty of good 3rd party controls to fill the gaps
    - Windows Phone 7 is to have 95% of controls available in Silverlight Core and Toolkit
    Events and Commands
    - Standard .NET Events
    - Routed Events
    - Commands – based on the ICommand interface – a logical action that can be invoked in several ways
    Adding Style to Your Application
    - Resource Dictionaries – Contains a hash table of key/value pairs. Silverlight can only use Static Resources whereas WPF can also use Dynamic Resources
    - Visual State Manager
    - Silverlight 4 supports Implicit styles
    - ResourceDictionary.MergedDictionaries combines many different file-based resources
    Downloads

    Read the article

  • PeopleSoft Customers Day Presentations

    - by [email protected]
    At the request of the attendees of the PeopleSoft Customers Day held on March 11, 2010, we are making the presentations given during the event available. The following links contain each of the presentations, and you can also view them in the embedded presentations further down: Aplicaciones Analíticas de RRHH (HR Analytics Applications) and Migración en Entornos PeopleSoft (Migration in PeopleSoft Environments). Presentacion PSFT Customers Day 1: Aplicaciones Analiticas de RRHH, from oracledirect. Presentacion PSFT Customers Day 2: Migracion en Entornos PeopleSoft, from oracledirect.

    Read the article

  • ADF Desktop Integration Security Explained

    - by juan.ruiz
    ADFdi provides secure access to spreadsheets within MS-Excel. Developers as well as administrators may wonder how the security features work in this mixed layout (MS-Excel accessing Java EE business services) and what system administrators should expect when deploying an ADF solution that offers ADFdi capabilities. Shaun Logan from the ADFdi team published an excellent article back in January where you can find the ADF desktop integration security features and implementation covered in great detail. You can find the article here: http://www.oracle.com/technology/products/jdev/11/collateral/security%20whitepaper%20for%20adfdi%20r1%20final.pdf Enjoy!

    Read the article

  • Building vs. Buying a Master Data Management Solution

    - by david.butler(at)oracle.com
    Many organizations prefer to build their own MDM solutions. The argument is that they know their data quality issues and their data better than anyone. Plus a focused solution will cost less in the long run than a vendor-supplied general purpose product. This is not unreasonable if you think of MDM as a point solution for a particular data quality problem. But this approach carries significant risk. We now know that organizations achieve significant competitive advantages when they deploy MDM as a strategic enterprise-wide solution, with the most common best practice being to deploy a tactical MDM solution and grow it into a full information architecture. A build-your-own approach most certainly will not scale to a larger architecture unless it is done correctly with the larger solution in mind. It is possible to build a home-grown point MDM solution in such a way that it will dovetail into broader MDM architectures. A very good place to start is to use the same basic technologies that Oracle uses to build its own MDM solutions. Start with the Oracle 11g database to create a flexible, extensible and open data model to hold the master data and all needed attributes. The Oracle database is the most flexible, highly available and scalable database system on the market. With its Real Application Clusters (RAC) it can even support the mixed OLTP and BI workloads that represent typical MDM data access profiles. Use Oracle Data Integrator (ODI) for batch data movement between applications, MDM data stores, and the BI layer. Use Oracle GoldenGate for more real-time data movement. Use Oracle's SOA Suite for application integration with its: BPEL Process Manager to orchestrate MDM connections to business processes; Identity Management for managing users; WS Manager for managing web services; Business Intelligence Enterprise Edition for analytics; and JDeveloper for creating or extending the MDM management application. Oracle utilizes these technologies to build its MDM Hubs. Customers who build their own MDM solution using these components will easily migrate to Oracle-provided MDM solutions when the home-grown solution runs out of gas. But, even with a full stack of open, flexible MDM technologies, creating a robust MDM application can be a daunting task.
    For example, a basic MDM solution will need:
    - a set of data access methods that support master data as a service as well as direct real-time access as well as batch loads and extracts;
    - a data migration service for initial loads and periodic updates;
    - a metadata management capability for items such as business entity matrixed relationships and hierarchies;
    - a source system management capability to fully cross-reference business objects and to satisfy seemingly conflicting data ownership requirements;
    - a data quality function that can find and eliminate duplicate data while ensuring correct data attribute survivorship;
    - a set of data quality functions that can manage structured and unstructured data;
    - a data quality interface to assist with preventing new errors from entering the system even when data entry is outside the MDM application itself;
    - a continuing data cleansing function to keep the data up to date;
    - an internal triggering mechanism to create and deploy change information to all connected systems;
    - a comprehensive role-based data security system to control and monitor data access, update rights, and maintain change history;
    - a flexible business rules engine for managing master data processes such as privacy and data movement;
    - a user interface to support casual users and data stewards;
    - a business intelligence structure to support profiling, compliance, and business performance indicators; and
    - an analytical foundation for directly analyzing master data.
    Oracle's pre-built MDM Hub solutions are full-featured 3-tier Internet applications designed to participate in the full Oracle technology stack or to run independently in other open IT SOA environments. Building MDM solutions from scratch can take years. Oracle's pre-built MDM solutions can bring quality data to the enterprise in a matter of months. But if you must build, at least build with the world's best technology stack in a way that simplifies the eventual upgrade to Oracle MDM and to the full enterprise-wide information architecture that it enables.
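    To make one of the items above concrete, the source system cross-referencing capability, here is a minimal SQL sketch (not from the article; the table and column names are hypothetical) of a master table plus a cross-reference table that maps each master record to its keys in the contributing source systems:
        CREATE TABLE customer_master (
          master_id  NUMBER PRIMARY KEY,
          cust_name  VARCHAR2(200) NOT NULL
        );
        -- Each master record is cross-referenced to its record key in one or more source systems.
        CREATE TABLE customer_xref (
          master_id      NUMBER NOT NULL REFERENCES customer_master (master_id),
          source_system  VARCHAR2(30)  NOT NULL,  -- e.g. 'CRM', 'ERP', 'BILLING'
          source_key     VARCHAR2(100) NOT NULL,  -- the record's key in that source system
          CONSTRAINT customer_xref_pk PRIMARY KEY (source_system, source_key)
        );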

    Read the article

  • ODI 11g – Expert Accelerator for Model Creation

    - by David Allan
    Following on from my post earlier this morning on scripting model and topology creation, tonight I thought I'd add a little UI to make those groovy functions a little more palatable. In OWB we have experts for capturing user input; with the groovy console we open up opportunities to build UI around the scripts in a very easy way – even I can do it ;-) After a little googling around I found some useful posts on SwingBuilder; the most useful one, which I used for the dialog below, was this one here. This dialog captures user input for the technology and context for the model and logical schema etc. to be created. You can see there are a variety of interesting controls, and it's really easy to do. The dialog captures the user's input, then when OK is pressed I call the functions from the earlier post to create the logical schema (plus all the other objects) and model. The image below shows what was created: you can see the model (with a typo in its name), the model is Oracle technology and references the logical schema ORACLE_SCOTT (that I named in the dialog above), the logical schema is mapped via the GLOBAL context to the data server ORACLE_SCOTT_DEV (that I named in the dialog above), and the physical schema used was just the user name that I connected with – so if you wanted a different user, the schema name could be added to the dialog. In a nutshell, one dialog that encapsulates a simpler mechanism for creating a model. You can create your own scripts that use dialogs like this, capture input and process it. You can find the groovy script for this here: odi_create_model.groovy. Again, I wrapped the user capture code in a groovy function, return the result in a variable, and then simply call the createLogicalSchema and createModel functions from the previous posting. The script I supplied above has everything you will need. To execute, use Tools->Groovy->Open Script and then press the green play button on the toolbar. Have fun.

    Read the article

  • Connected Systems (SOA) QuickStart Materials

    - by Rajesh Charagandla
    The Connected Systems (SOA) QuickStart includes a comprehensive set of technical content, including presentations, whitepapers and demos, designed to be presented to customers to assess the current state of their Service Oriented Architecture (SOA) and integration capabilities and to understand how a Microsoft solution built using products such as BizTalk Server can help address their SOA and integration needs. This QuickStart includes delivery materials, self-paced training materials and supplementary materials. Download the materials from here.

    Read the article

  • Flash Technology Can Revolutionize your IT Infrastructure

    - by kimberly.billings
    A recent article in the Data Center Journal written by Mark Teter outlines how flash is becoming a disruptive technology in the data center and how it will soon replace HDDs in the storage hierarchy. As Teter explains, the drivers behind this trend are lower cost/performance and power savings; flash is over 100x faster for reads than the fastest HDD, and while it is expensive, it can produce dramatic reductions in the cost of performance as measured in Input/Outputs per second (IOPS). What's more, flash consumes 1/5th the power of HDD, so it's faster AND greener. Teter writes, "when appropriately used, flash turns the current economics of IT performance on its head. That's disruptive." Exadata Smart Flash Cache in the Sun Oracle Database Machine makes intelligent use of flash storage to deliver extreme performance for OLTP and mixed workloads. It intelligently caches data from the Oracle Database replacing slow mechanical I/O operations to disk with very rapid flash memory operations. Exadata Smart Flash Cache is the fundamental technology of the Sun Oracle Database Machine that enables the processing of up to 1 million random I/O operations per second (IOPS), and the scanning of data within Exadata storage at up to 50 GB/second. Are you incorporating flash into your storage strategy? Let us know! Read more: "Flash technology can revolutionize your IT infrastructure", The Data Center Journal, March 30, 2010. Exadata Smart Flash Cache and the Sun Oracle Database Machine white paper

    Read the article

  • Session Update from IASA 2010

    - by [email protected]
    Below: Tom Kristensen, senior vice president at Marsh US Consumer, and Roger Soppe, CLU, LUTCF, senior director of insurance strategy, Oracle Insurance. Tom and Roger participated in a panel discussion on policy administration systems this week at IASA 2010. This week was the 82nd Annual IASA Educational Conference & Business Show held in Grapevine, Texas. While attending the conference, I had the pleasure of serving as a panelist in one of many of the outstanding sessions conducted this year. The session - entitled "Achieving Business Agility and Promoting Growth with a Modern Policy Administration System" - included industry experts Steve Forte from OneShield, Mike Sciole of IFG Companies, and Tom Kristensen, senior vice president at Marsh US Consumer. The session was conducted as a panel discussion and focused on how insurers can leverage best practices to mitigate risk while enabling rapid product innovation through a modern policy administration system. The panelists offered insight into business and technical challenges for both Life & Annuity and Property & Casualty carriers. The session had three primary learning objectives:
    - Identifying how replacing a legacy system with a more modern policy administration solution can deliver agility and growth
    - Identifying how processes and systems should be re-engineered or replaced in order to improve speed-to-market and product support
    - Uncovering how to leverage best practices to mitigate risk during a migration to a new platform
    Tom Kristensen, who is an industry veteran with over 20 years of experience, was able to offer a unique perspective as a business process outsourcer (BPO). Marsh US Consumer is currently implementing both the Oracle Insurance Policy Administration solution and the Oracle Revenue Management and Billing platform while at the same time implementing a new BPO customer. Tom offered insight on the need to replace their aging systems and Marsh's ability to drive new products and processes with a modern solution. As a best practice, their current project has empowered their business users to play a major role in both the requirements gathering and configuration phases. Tom stated that working with a modern solution has also enabled his organization to use a more agile implementation methodology and get hands-on experience with the software earlier in the project. He also indicated that Marsh was encouraged by how quickly it will be able to implement new products, which is another major advantage of a modern rules-based system. One of the more interesting issues was raised by an audience member who asked, "With all the vendor solutions available in North America and across Europe, what is going to make some of them more successful than others and help ensure their long term success?" Panelist Mike Sciole of IFG Companies suggested that carriers do their due diligence and follow a structured evaluation process, focusing on vendors who demonstrate they have the "cash to invest in long term R&D", and evaluate audited annual statements for verification. Other panelists suggested that the vendor space will continue to evolve and those with a strong strategy focused on the insurance industry and a solid roadmap will likely separate themselves from the rest. The session concluded with the panelists offering advice about not being afraid to evaluate new modern systems.
    While migrating to a new platform can be challenging and is typically only undertaken every 15+ years by carriers, the ability to rapidly deploy and manage new products, create consistent processes to better service customers, and the ability to manage their business more effectively, transparently and securely are well worth the effort. Roger A. Soppe, CLU, LUTCF, is the Senior Director of Insurance Strategy, Oracle Insurance.

    Read the article

  • Podcast Show Notes: The Red Room Interview – Part 2

    - by Bob Rhubart
    Red Room bloggers Sean Boiling, Richard Ward, and Mervin Chiang bring their in-the-trenches perspective to the conversation once again in this week's edition of the OTN ArchBeat Podcast. Listen. (Missed last week? No problemo: Listen to Part 1) In this segment the conversation turns to SOA governance and balancing the need for reuse against the need for speed. It's no mystery that many people react to the term "SOA Governance" in much the same way as they would to the sound of Darth Vader's respirator. But Mervin explains how a simple change in terminology can go a long way toward lowering blood pressure. Those interested in connecting with Sean, Richard, or Mervin can do so via the links listed below:
    Sean Boiling - Sales Consulting Manager for Oracle Fusion Middleware - LinkedIn | Twitter | Blog
    Richard Ward - SOA Channel Development Manager at Oracle - LinkedIn | Blog
    Mervin Chiang - Consulting Principal at Leonardo Consulting - LinkedIn | Twitter | Blog
    And you'll find the complete list of the Red Room SOA Best Practice Posts in last week's show notes. The third and final segment of the Red Room series runs next week. I have enough material from the original interview for a fourth program, but it'll have to wait. Also, as mentioned last week, the podcast name change is now complete, from Arch2Arch to ArchBeat. As WPBH-TV9 weatherman Phil Connors says, "Anything different is good."

    Read the article

  • Sound vs. Valid Argument

    - by MarkPearl
    Today I spent some time reviewing my Formal Logic course for my upcoming exam. I came across a section that I have never really explored in any proper depth… the difference between a valid argument and a sound argument. Here are some notes I made…
    What is an argument?
    In this case we are not referring to a verbal fight, but to what we call a set of premises followed by a conclusion. Before we go further we need to understand what a premise is… a premise is a statement that an argument claims will induce or justify a conclusion. Think of a premise as an assumption that something is true. So, an argument can consist of one or more premises and a conclusion…
    When is an argument valid?
    An argument can be either valid or invalid. An argument is valid if, and only if, it is impossible for there to be a situation in which all its premises are TRUE and its conclusion is FALSE. It is generally easier to determine if an argument is invalid. Do this by applying the following… Assume that all the premises are true, then ask yourself if it is now possible for the conclusion to be false. If the answer is "yes," the argument is invalid. If it's "no," the argument is valid.
    Example 1…
    P1 – Mark is Tall
    P2 – Mark is a boy
    C – Mark is a tall boy
    Walkthrough 1… Assume Mark is Tall is true and also assume that Mark is a boy. Based on these two premises, the conclusion is also true – Mark is a tall boy – thus it is a valid argument. Let's make this an invalid argument…
    Example 2…
    P1 – Mark is Tall
    P2 – Mark is a boy
    C – Mark is a short boy
    Walkthrough 2… This would be an invalid argument, since from the premises we assume that Mark is tall and he is a boy, and then the conclusion goes against this by saying that Mark is short. Thus an invalid argument.
    When is an argument sound?
    An argument is said to be sound when it is valid and all the premises are indeed true (not just assumed to be true). Rephrased, an argument is said to be sound when the conclusion will follow from the premises and the premises are indeed true in real life. In example 1 we were referring to a specific person; if we generalized it a bit we could come up with the following example.
    Example 3
    P1 – All people called Mark are tall
    P2 – I know a specific person called Mark
    C – He is a tall person
    In this instance, it is a valid argument (we assume the premises are true, which leads to the conclusion being true), but the argument is NOT sound. In the real world there is bound to be at least one person called Mark who is not tall, so premise P1 is not actually true. Something also to note: all invalid arguments are also unsound – this makes sense, since if an argument is not even valid, how on earth can it be sound?
    What happens when the premises contradict themselves?
    This is an interesting one… An argument is valid if, and only if, it is impossible for there to be a situation in which all its premises are TRUE and its conclusion is FALSE. When premises are contradictory, the argument is always valid because it is impossible for all the premises to be true at one time. Let's look at an example…
    P1 – Elvis is dead
    P2 – Elvis is alive
    C – Laura is a woolly mammoth
    This is a valid argument, but not a sound one. Think about it. Is it possible to have a situation in which the premises are true and the conclusion is false? Sure, it's possible to have a situation in which the conclusion is false, but for the argument to be invalid, it has to be possible for the premises to all be true at the same time the conclusion is false. So if the premises can't all be true, the argument is valid. (If you still think the argument is invalid, draw a picture in which the premises are all true and the conclusion is false. Remember, there's only one Elvis, and you can't be both dead and alive.) For more info on this I suggest reading the following blog post.
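    To make the brute-force check concrete, here is a small, hypothetical Java sketch (not from the original post) that tests the Elvis argument by enumerating every truth assignment; it reports the argument as valid because no assignment ever makes both premises true at once:
        public class ValidityCheck {
            public static void main(String[] args) {
                boolean valid = true;
                boolean[] truthValues = { true, false };
                for (boolean elvisIsDead : truthValues) {
                    for (boolean lauraIsMammoth : truthValues) {
                        boolean premise1 = elvisIsDead;   // "Elvis is dead"
                        boolean premise2 = !elvisIsDead;  // "Elvis is alive"
                        boolean conclusion = lauraIsMammoth;
                        // Invalid only if some assignment makes all premises true and the conclusion false.
                        if (premise1 && premise2 && !conclusion) {
                            valid = false;
                        }
                    }
                }
                System.out.println("Valid? " + valid);    // prints "Valid? true"
            }
        }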

    Read the article

  • Using Table-Valued Parameters With SQL Server Reporting Services

    - by Jesse
    In my last post I talked about using table-valued parameters to pass a list of integer values to a stored procedure without resorting to using comma-delimited strings and parsing out each value into a TABLE variable. In this post I’ll extend the “Customer Transaction Summary” report example to see how we might leverage this same stored procedure from within an SQL Server Reporting Services (SSRS) report. I’ve worked with SSRS off and on for the past several years and have generally found it to be a very useful tool for building nice-looking reports for end users quickly and easily. That said, I’ve been frustrated by SSRS from time to time when seemingly simple things are difficult to accomplish or simply not supported at all. I thought that using table-valued parameters from within a SSRS report would be simple, but unfortunately I was wrong. Customer Transaction Summary Example Let’s take the “Customer Transaction Summary” report example from the last post and try to plug that same stored procedure into an SSRS report. Our report will have three parameters: Start Date – beginning of the date range for which the report will summarize customer transactions End Date – end of the date range for which the report will summarize customer transactions Customer Ids – One or more customer Ids representing the customers that will be included in the report The simplest way to get started with this report will be to create a new dataset and point it at our Customer Transaction Summary report stored procedure (note that I’m using SSRS 2012 in the screenshots below, but there should be little to no difference with SSRS 2008): When you initially create this dataset the SSRS designer will try to invoke the stored procedure to determine what the parameters and output fields are for you automatically. As part of this process the following dialog pops-up: Obviously I can’t use this dialog to specify a value for the ‘@customerIds’ parameter since it is of the IntegerListTableType user-defined type that we created in the last post. Unfortunately this really throws the SSRS designer for a loop, and regardless of what combination of Data Type, Pass Null Value, or Parameter Value I used here, I kept getting this error dialog with the message, "Operand type clash: nvarchar is incompatible with IntegerListTableType". This error message makes some sense considering that the nvarchar type is indeed incompatible with the IntegerListTableType, but there’s little clue given as to how to remedy the situation. I don’t know for sure, but I think that behind-the-scenes the SSRS designer is trying to give the @customerIds parameter an nvarchar-typed SqlParameter which is causing the issue. When I first saw this error I figured that this might just be a limitation of the dataset designer and that I’d be able to work around the issue by manually defining the parameters. I know that there are some special steps that need to be taken when invoking a stored procedure with a table-valued parameter from ADO .NET, so I figured that I might be able to use some custom code embedded in the report  to create a SqlParameter instance with the needed properties and value to make this work, but the “Operand type clash" error message persisted. The Text Query Approach Just because we’re using a stored procedure to create the dataset for this report doesn’t mean that we can’t use the ‘Text’ Query Type option and construct an EXEC statement that will invoke the stored procedure. 
    In order for this to work properly the EXEC statement will also need to declare and populate an IntegerListTableType variable to pass into the stored procedure. Before I go any further I want to make one point clear: this is a really ugly hack and it makes me cringe to do it. Simply put, I strongly feel that it should not be this difficult to use a table-valued parameter with SSRS. With that said, let's take a look at what we'll have to do to make this work.
    Manually Define Parameters
    First, we'll need to manually define the parameters for the report by right-clicking on the 'Parameters' folder in the 'Report Data' window. We'll need to define the '@startDate' and '@endDate' as simple date parameters. We'll also create a parameter called '@customerIds' that will be a multi-valued Integer parameter. In the 'Available Values' tab we'll point this parameter at a simple dataset that just returns the CustomerId and CustomerName of each row in the Customers table of the database, or manually define a handful of Customer Id values to make available when the report runs. Once we have these parameters properly defined we can take another crack at creating the dataset that will invoke the 'rpt_CustomerTransactionSummary' stored procedure. This time we'll choose the 'Text' query type option and put the following into the 'Query' text area:
        1: exec('declare @customerIdList IntegerListTableType ' + @customerIdInserts +
        2: ' EXEC rpt_CustomerTransactionSummary
        3: @startDate=''' + @startDate + ''',
        4: @endDate=''' + @endDate + ''',
        5: @customerIds=@customerIdList')
    By using the 'Text' query type we can enter any arbitrary SQL that we want to and then use parameters and string concatenation to inject pieces of that query at run time. It can be a bit tricky to parse this out at first glance, but from the SSRS designer's point of view this query defines three parameters:
    @customerIdInserts – This will be a Text parameter that we use to define INSERT statements that will populate the @customerIdList variable that is being declared in the SQL. This parameter won't actually ever get passed into the stored procedure. I'll go into how this will work in a bit.
    @startDate – This is a simple date parameter that will get passed through directly into the @startDate parameter of the stored procedure on line 3.
    @endDate – This is another simple date parameter that will get passed through into the @endDate parameter of the stored procedure on line 4.
    At this point the dataset designer will be able to correctly parse the query and should even be able to detect the fields that the stored procedure will return without needing to specify any values for the query when prompted to. Once the dataset has been correctly defined we'll have a @customerIdInserts parameter listed in the 'Parameters' tab of the dataset designer. We need to define an expression for this parameter that will take the values selected by the user for the '@customerIds' parameter that we defined earlier and convert them into INSERT statements that will populate the @customerIdList variable that we defined in our Text query. In order to do this we'll need to add some custom code to our report using the 'Report Properties' dialog. Any custom code defined in the Report Properties dialog gets embedded into the .rdl of the report itself and (unfortunately) must be written in VB .NET.
    Note that you can also add references to custom .NET assemblies (which could be written in any language), but that's outside the scope of this post so we'll stick with the "quick and dirty" VB .NET approach for now. Here's the VB .NET code (note that any embedded code that you add here must be defined in a static/shared function, though you can define as many functions as you want):
        Public Shared Function BuildIntegerListInserts(ByVal variableName As String, ByVal paramValues As Object()) As String
            Dim insertStatements As New System.Text.StringBuilder()
            For Each paramValue As Object In paramValues
                insertStatements.AppendLine(String.Format("INSERT {0} VALUES ({1})", variableName, paramValue))
            Next
            Return insertStatements.ToString()
        End Function
    This method takes a variable name and an array of objects. We use an array of objects here because that is how SSRS will pass us the values that were selected by the user at run-time. The method uses a StringBuilder to construct INSERT statements that will insert each value from the object array into the provided variable name. Once this method has been defined in the custom code for the report we can go back into the dataset designer's Parameters tab and update the expression for the '@customerIdInserts' parameter by clicking on the button with the "function" symbol that appears to the right of the parameter value. We'll set the expression to:
        =Code.BuildIntegerListInserts("@customerIdList ", Parameters!customerIds.Value)
    In order to invoke our custom code method we simply need to invoke "Code.<method name>" and pass in any needed parameters. The first parameter needs to match the name of the IntegerListTableType variable that we used in the EXEC statement of our query. The second parameter will come from the Value property of the '@customerIds' parameter (this evaluates to an object array at run time). Finally, we'll need to edit the properties of the '@customerIdInserts' parameter on the report to mark it as a nullable internal parameter so that users aren't prompted to provide a value for it when running the report.
    Limitations And Final Thoughts
    When I first started looking into the text query approach described above I wondered if there might be an upper limit to the size of the string that can be used to run a report. Obviously, the size of the actual query could increase pretty dramatically if you have a parameter that has a lot of potential values or you need to support several different table-valued parameters in the same query. I tested the example Customer Transaction Summary report with 1000 selected customers without any issue, but your mileage may vary depending on how much data you might need to pass into your query. If you think that the text query hack is a lot of work just to use a table-valued parameter, I agree! I think that it should be a lot easier than this to use a table-valued parameter from within SSRS, but so far I haven't found a better way. It might be possible to create some custom .NET code that could build the EXEC statement for a given set of parameters automatically, but exploring that will have to wait for another post. For now, unless there's a really compelling reason or requirement to use table-valued parameters from SSRS reports I would probably stick with the tried and true "join-multi-valued-parameter-to-CSV-and-split-in-the-query" approach for using multi-valued parameters in a stored procedure.
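    To see what that text query actually produces at run time, suppose the user picks customers 1, 2 and 3 along with an arbitrary date range (the IDs and dates below are made up purely for illustration); the concatenated string that gets executed would expand to roughly the following:
        declare @customerIdList IntegerListTableType
        INSERT @customerIdList VALUES (1)
        INSERT @customerIdList VALUES (2)
        INSERT @customerIdList VALUES (3)
        EXEC rpt_CustomerTransactionSummary
            @startDate='2012-01-01',
            @endDate='2012-03-31',
            @customerIds=@customerIdList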

    Read the article

  • Tampa Bay WinDev 1st Meeting Announced !

    - by Nikita Polyakov
    This is very exciting for anyone close to Tampa Bay, FL.
    Tampa Bay WinDev Meeting #1: Intro to Metro UI
    Our first meeting as Tampa Bay WinDev (formerly known as Tampa Bay Silverlight) will show off the new Metro UI (Windows vNext) and will also feature local (again) development speaker (and celebrity) John Papa. John will take us through the beginnings of Metro development. Additionally, this is our kickoff meeting for the new group, so we will also have some additional information on what this group is all about. We will also have food and drinks at the meeting (although we are still looking for a sponsor). Please RSVP to this always FREE event here: http://tbwindev11.eventbrite.com
    Wednesday, November 30, 2011 from 6:00 PM to 8:00 PM (ET)
    Microsoft Office (in Tampa), 5426 Bay Center Dr., Suite 700, Tampa, FL 33609

    Read the article

  • That Escalated Quickly

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2014/05/17/that-escalated-quickly.aspxI have been working remotely out of my home for over 4 years now. All of my coworkers during that time have also worked remotely. Lots of folks have written about the challenges inherent in facilitating communication on remote teams and strategies for overcoming them. A popular theme around this topic is the notion of “escalating communication”. In this context “escalating” means taking a conversation from one mode of communication to a different, higher fidelity mode of communication. Here are the five modes of communication I use at work in order of increasing fidelity: Email – This is the “lowest fidelity” mode of communication that I use. I usually only check it a few times a day (and I’m trying to check it even less frequently than that) and I only keep items in my inbox if they represent an item I need to take action on that I haven’t tracked anywhere else. Forums / Message boards – Being a developer, I’ve gotten into the habit of having other people look over my code before it becomes part of the product I’m working on. These code reviews often happen in “real time” via screen sharing, but I also always have someone else give all of the changes another look using pull requests. A pull request takes my code and lets someone else see the changes I’ve made side-by-side with the existing code so they can see if I did anything dumb. Pull requests can facilitate a conversation about the code changes in an online-forum like style. Some teams I’ve worked on also liked using tools like Trello or Google Groups to have on-going conversations about a topic or task that was being worked on. Chat & Instant Messaging  - Chat and instant messaging are the real workhorses for communication on the remote teams I’ve been a part of. I know some teams that are co-located that also use it pretty extensively for quick messages that don’t warrant walking across the office to talk with someone but reqire more immediacy than an e-mail. For the purposes of this post I think it’s important to note that the terms “chat” and “instant messaging” might insinuate that the conversation is happening in real time, but that’s not always true. Modern chat and IM applications maintain a searchable history so people can easily see what might have been discussed while they were away from their computers. Voice, Video and Screen sharing – Everyone’s got a camera and microphone on their computers now, and there are an abundance of services that will let you use them to talk to other people who have cameras and microphones on their computers. I’m including screen sharing here as well because, in my experience, these discussions typically involve one or more people showing the other participants something that’s happening on their screen. Obviously, this mode of communication is much higher-fidelity than any of the ones listed above. Scheduled meetings are typically conducted using this mode of communication. In Person – No matter how great communication tools become, there’s no substitute for meeting with someone face-to-face. However, opportunities for this kind of communcation are few and far between when you work on a remote team. When a conversation gets escalated that usually means it moves up one or more positions on this list. A lot of people advocate jumping to #4 sooner than later. 
Like them, I used to believe that, if it was possible, organizing a call with voice and video was automatically better than any kind of text-based communication could be. Lately, however, I’m becoming less convinced that escalating is always the right move. Working Asynchronously Last year I attended a talk at our local code camp given by Drew Miller. Drew works at GitHub and was talking about how they use GitHub internally. Many of the folks at GitHub work remotely, so communication was one of the main themes in Drew’s talk. During the talk Drew used the phrase, “asynchronous communication” to describe their use of chat and pull request comments. That phrase stuck in my head because I hadn’t heard it before but I think it perfectly describes the way in which remote teams often need to communicate. You don’t always know when your co-workers are at their computers or what hours (if any) they are working that day. In order to work this way you need to assume that the person you’re talking to might not respond right away. You can’t always afford to wait until everyone required is online and available to join a voice call, so you need to use text-based, persistent forms of communication so that people can receive and respond to messages when they are available. Going back to my list from the beginning of this post for a second, I characterize items #1-3 as being “asynchronous” modes of communication while we could call items #4 and #5 “synchronous”. When communication gets escalated it’s almost always moving from an asynchronous mode of communication to a synchronous one. Now, to the point of this post: I’ve become increasingly reluctant to escalate from asynchronous to synchronous communication for two primary reasons: 1 – You can often find a higher fidelity way to convey your message without holding a synchronous conversation 2 - Asynchronous modes of communication are (usually) persistent and searchable. You Don’t Have to Broadcast Live Let’s start with the first reason I’ve listed. A lot of times you feel like you need to escalate to synchronous communication because you’re having difficulty describing something that you’re seeing in words. You want to provide the people you’re conversing with some audio-visual aids to help them understand the point that you’re trying to make and you think that getting on Skype and sharing your screen with them is the best way to do that. Firing up a screen sharing session does work well, but you can usually accomplish the same thing in an asynchronous manner. For example, you could take a screenshot and annotate it with some text and drawings to illustrate what it is you’re seeing. If a screenshot won’t work, taking a short screen recording while your narrate over it and posting the video to your forum or chat system along with a text-based description of what’s in the recording that can be searched for later can be a great way to effectively communicate with your team asynchronously. I Said What?!? Now for the second reason I listed: most asynchronous modes of communication provide a transcript of what was said and what decisions might have been made during the conversation. There have been many occasions where I’ve used the search feature of my team’s chat application to find a conversation that happened several weeks or months ago to remember what was decided. 
Unfortunately, I think the benefits associated with the persistence of communicating asynchronously often get overlooked when people decide to escalate to a in-person meeting or voice/video call. I’m becoming much more reluctant to suggest a voice or video call if I suspect that it might lead to codifying some kind of design decision because everyone involved is going to hang up the call and immediately forget what was decided. I recognize that you can record and archive these types of interactions, but without being able to search them the recordings aren’t terribly useful. When and How To Escalate I don’t mean to imply that communicating via voice/video or in person is never a good idea. I probably jump on a Skype call with a co-worker at least once a day to quickly hash something out or show them a bit of code that I’m working on. Also, meeting in person periodically is really important for remote teams. There’s no way around the fact that sometimes it’s easier to jump on a call and show someone my screen so they can see what I’m seeing. So when is it right to escalate? I think the simplest way to answer that is when the communication starts to feel painful. Everyone’s tolerance for that pain is different, but I think you need to let it hurt a little bit before jumping to synchronous communication. When you do escalate from asynchronous to synchronous communication, there are a couple of things you can do to maximize the effectiveness of the communication: Takes notes – This is huge and yet I’ve found that a lot of teams don’t do this. If you’re holding a meeting with  > 2 people you should have someone taking notes. Taking notes while participating in a meeting can be difficult but there are a few strategies to deal with this challenge that probably deserve a short post of their own. After the meeting, make sure the notes are posted to a place where all concerned parties (including those that might not have attended the meeting) can review and search them. Persist decisions made ASAP – If any decisions were made during the meeting, persist those decisions to a searchable medium as soon as possible following the conversation. All the teams I’ve worked on used a web-based system for tracking the on-going work and a backlog of work to be done in the future. I always try to make sure that all of the cards/stories/tasks/whatever in these systems always reflect the latest decisions that were made as the work was being planned and executed. If held a quick call with your team lead and decided that it wasn’t worth the effort to build real-time validation into that new UI you were working on, go and codify that decision in the story associated with that work immediately after you hang up. Even better, write it up in the story while you are both still on the phone. That way when the folks from your QA team pick up the story to test a few days later they’ll know why the real-time validation isn’t there without having to invoke yet another conversation about the work. Communicating Well is Hard At this point you might be thinking that communicating asynchronously is more difficult than having a live conversation. You’re right: it is more difficult. In order to communicate effectively this way you need to very carefully think about the message that you’re trying to convey and craft it in a way that’s easy for your audience to understand. This is almost always harder than just talking through a problem in real time with someone; this is why escalating communication is such a popular idea. 
Why wouldn’t we want to do the thing that’s easier? Easier isn’t always better. If you and your team can get in the habit of communicating effectively in an asynchronous manner you’ll find that, over time, all of your communications get less painful because you don’t need to re-iterate previously made points over and over again. If you communicate right the first time, you often don’t need to rehash old conversations because you can go back and find the decisions that were made laid out in plain language. You’ll also find that you get better at doing things like writing useful comments in your code, creating written documentation about how the feature that you just built works, or persuading your team to do things in a certain way.

    Read the article

  • OWB 11gR2 - Windows and Linux 64-bit clients available

    - by David Allan
    In addition to the integrated release of OWB in the 11.2.0.3 Oracle database distribution, the following 64-bit standalone clients are now available for download from Oracle Support:
    - OWB 11.2.0.3 Standalone client for Windows 64-bit - 13365470
    - OWB 11.2.0.3 Standalone client for Linux X86 64-bit - 13366327
    This is in addition to the previously released 32-bit client on Windows:
    - OWB 11.2.0.3 Standalone client for Windows 32-bit - 13365457
    The support document Major OWB 11.2.0.3 New Features Summary has details for OWB 11.2.0.3, which include the following.
    Exadata v2 and Oracle Database 11gR2 support capabilities:
    - Support for Oracle Database 11gR2 and Exadata compression types
    - Even more partitioning: Range-Range, Composite Hash/List, System, Reference
    - Transparent Data Encryption support
    - Data Guard support/certification
    - Compiled PL/SQL code generation
    Capabilities to support data warehouse ETL best practices:
    - Read and write Oracle Data Pump files with external tables
    - External table preprocessor
    - Partition specific DML
    - Bulk data movement code templates: Oracle, IBM DB2, Microsoft SQL Server to Oracle
    Integration with Fusion Middleware capabilities:
    - Support for OWB's Control Center Agent on WLS
    Lots of interesting capabilities in 11.2.0.3, and the availability of the 64-bit client I'm sure is welcome news for many!

    Read the article

  • CVE-2011-2896 Buffer overflow vulnerability in GIMP

    - by chandan
    CVE: CVE-2011-2896 (Buffer Overflow vulnerability)
    CVSSv2 Base Score: 5.1
    Component: GIMP Image Editor
    Product and Resolution: Solaris 10 (SPARC: 147988-01, X86: 147989-01); Solaris 11 Express (snv_151a + 7079990)
    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Oracle SALT 11gR1

    - by Maurice Gamanho
    With the 11gR1 release, SALT now supports Web services transactions (WS-TX). In a nutshell, the SALT 11gR1 Web services gateway (GWWS) now supports bi-directional transactional interoperability. What this means is that Tuxedo application services can now be invoked in a global transaction context using Web services. This feature is natural for a product like Tuxedo, given its history as a transaction processing monitor and its significant contribution to the X/Open (now the Open Group) XA specification. We implemented Web Services Coordination (WS-COOR) and Web Services Atomic Transaction (WS-AT). We also tested and certified with WebLogic Server 11gR1 and Microsoft WCF 3.5 (.NET Framework). For more information, please visit the Tuxedo OTN home page, where you can download a document and samples that will help you get started with WS-TX in Tuxedo. You can check the product documentation here.

    Read the article

  • EPM 11.1.2 - In WebLogic Server, Enable Native IO Performance Pack

    - by Ahmed Awan
    Performance can be improved by enabling native IO in production mode. WebLogic Server benchmarks show major performance improvements when native performance packs are used on machines that host Oracle WebLogic Server instances. Important note: always enable native I/O, if available, and check for errors at startup to make sure it is being initialized properly. Tip: the use of NATIVE performance packs is enabled by default in the configuration shipped with your distribution. You can use the Administration Console to verify that performance packs are enabled by clicking on each managed server and then clicking on the Tuning tab.
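    As a rough sketch of what the resulting setting looks like in a domain's config.xml (this is an assumption on my part rather than something from the original post: the element name follows the usual ServerMBean attribute naming, so verify it against your own config.xml or the WebLogic Server documentation):
        <server>
          <name>ManagedServer1</name>
          <!-- Native I/O performance pack enabled for this server; element name assumed, verify before use -->
          <native-io-enabled>true</native-io-enabled>
        </server>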

    Read the article
